\section{Introduction} In the standard cosmological scenario, CDM particles decouple while non-relativistic from the primordial thermal bath. Their consequently small speeds allow for a clustering that agrees well with an impressive array of observations on large scales \citep{Frenk2012}. But this same `coldness' leads to excessive central densities in the nonlinear regime. As soon as they could resolve the central regions, simulations showed that CDM haloes were endowed with central density cusps, with their density diverging towards the centre \citep{Dubinski1991, Warren1992}. It subsequently transpired that this reflected nearly `universal', self-similar, behaviour; with the density profiles of haloes at different masses, identified at various epochs, well fit by simple empirical formulae \citep{nfw}. But the central density cusps seemed too dense to match those inferred for dark matter dominated galaxies, like low-surface-brightness and dwarf galaxies \citep{Moore1994, Flores1994}. This `cusp-core' problem is central to the low-redshift small-scale issues associated with CDM structure formation, which have attracted much attention during the past decades, as its resolution may simultaneously alleviate other puzzles associated with small scale structure in CDM simulations, such as the so-called `too-big-to-fail' problem (\citealp{ReadTBF2006, BoylanTBF2011, OgiyaBurkTBF2015}; for general reviews see, e.g., \citealp{delPopolo2017, Bullock_B2017, Salucci:2018hqu}). CDM-based simulations also predict a general over-abundance of small haloes compared to small galaxies. This issue has however become progressively less severe as small Milky Way satellite galaxies, which may be associated with the small CDM haloes, are being discovered and counted (e.g., \citealp{nadler2020}). Assuming that the data have not been misinterpreted (e.g.~\citealp{Oman2019}), solutions to the aforementioned problems can be principally divided into two sets: those modifying the underlying dark matter particle physics model and those invoking gravitationally mediated CDM interaction with baryons. The most popular of the first category have been warm dark matter, self-interacting dark matter and, more recently, fuzzy dark matter. The simplest fuzzy dark matter models have problems explaining core scaling relations in galaxies \citep{DHHertz18, Burkert20, BarBlumRotII21}, and warm dark matter does not generally succeed in producing central density cores at all, as cold collapse occurs unless the dark matter is `warm' enough to prevent the halo at hand from forming in the first place \citep{Maccio2012b}. Warm dark matter may make baryonic core formation easier, due to lower halo concentrations, but this is debated since warm dark matter also acts to suppress early star formation \citep{Governato2015}. In general, particle physics based modifications to the standard structure formation scenario invoking warm or fuzzy dark matter have been more successful in explaining the apparent over-abundance of small scale structure. But this, instead of being a strong point, has now joined Lyman-$\alpha$ constraints (e.g., \citealp{ViDvielWDM2017, VidVielFDM}) and strong lensing (e.g., \citealp{GilmanLensMisssat2020}) in setting limits on the efficacy of such models~\citep{SimonMisssat2007, KopGilMisssat2008, KimMisssat2018, ReadMisssat2019, NewtonMisssat2021, nadler2020, Nadler2021}.
Self-interacting dark matter models, meanwhile, generically suffer from eventual gravothermal contraction of their cores, which is not easily avoided over all relevant spatial and time scales \citep{Burkert2000, Kochanek2000, CoreColSIDM21}. A non-negligible scattering cross-section between standard model particles and dark matter has also been proposed (\citealp{FamaueyBarint2020, SallucciBarint2020}), although this option has not been as fully investigated so far. On the other hand, gravitational coupling between dark matter and baryons has long been known to be effective in transforming CDM cusps into cores. Its main shortcoming however comes from the complex physics and the associated uncertainties in the parameters involved. Indeed, cusp-core transformation through such coupling comes in three different forms: one-time mass blowout due to a single burst of energy feedback (\citealt{Navarro1996a}, and more recently \citealt{Freundlich2020} and \citealt{Li2022}); the pumping of energy from (gaseous or stellar) baryonic clumps to CDM via dynamical friction \citep{Zant2001, Zant2004}; and density and potential fluctuations in feedback-driven gas during galaxy formation \citep{Read2005, Pontzen2014}. Despite the apparent similarity of the first and third mechanisms (as they both invoke feedback), it is the last two that are in fact more closely related at a deeper level --- as they both involve long-lived fluctuations progressively heating the CDM on timescales much longer than that of a single feedback starburst, as discussed in \citet{EZFC}. Dynamical friction heating is nevertheless expected to be observationally distinct from supernova heating since the latter correlates with star formation \citep[e.g.][]{Read2019} while the former does not, and both may act in tandem during galaxy formation \citep{Orkney2021, Dekel2021, OgiyaDF2022}. In the particular case of dwarf galaxies, there is mounting observational evidence for dark matter heating from impulsive, `bursty', star formation, ubiquitous in dwarfs below a stellar mass of $M_* \sim 10^8$\,M$_\odot$ \citep[e.g.][]{Collins2022}. Indeed, there is an observed anticorrelation between the inner dark matter density of dwarfs and their stellar-to-halo mass ratio $M_*/M_{200}$ \citep{Read2019, Bouche2022}, which is a proxy for the amount of star formation -- and therefore dark matter heating -- that has taken place \citep[e.g.][]{Penarrubia2012, DiCintio2014, Freundlich2020b}. This may favour a scenario of feedback-driven core formation in such galaxies. As conjectured in \cite{EZFC}, it may be possible to understand the process of feedback-driven core formation with repeated bursts from first principles, bypassing the uncertain complexities of `gastrophysics' and its various subgrid implementations. Indeed, the stochastic dynamical model presented there suggested that the process of core formation depends primarily on just two parameters, namely the gas mass fraction and the strength of the fluctuations, characterised by the normalisation of a power-law power spectrum. It is our purpose here to study the properties of feedback-driven fluctuations in a full hydrodynamic simulation where a cusp-core transformation occurs, with the aforementioned model forming an interpretive framework and guide; thus testing, in the process, its assumptions and predictions.
\section{Modelling halo heating from gas fluctuations} \label{sec:level2} \subsection{Physical setting} \label{sec:physicalset} The general picture envisioned, in \citet{EZFC} and here, is that of a gas settling into a CDM halo. As it contracts, a critical density is reached, leading to star formation and a consequent starburst. The star formation process assumed is akin to that described in \citet{Teyssier2013} and \citet{Readsim2016}. Namely, the threshold is considered high enough so that most star formation occurs not in a few isolated bursts, but repetitively over a long timespan. This leads to sustained density and mass fluctuations in the gas that appear amenable to a description in terms of a stationary stochastic process over the timescale of the simulation. If, to a first approximation, the gas density is assumed to be isotropic and homogeneous when averaged over large spatial or time scales, with average density $\rho_0$, then the mass fluctuations within a spatial scale $R$ can (as in characterising cosmic structure) be characterised by a dispersion \begin{equation} \sigma_R^2 = \frac{1}{2 \pi^2} \int_0^\infty W^2(k,R)\, \mathcal{P}(k)\, k^2\, {\rm d}k, \label{eq:RMS} \end{equation} where $\mathcal{P}(k)$ is the equal time power spectrum of the density fluctuations $\delta ({\bf r}, t) = \rho ({\bf r}, t)/\rho_0 - 1$, and $W$ is a Fourier filter function. If the fluctuations furthermore constitute a stationary Gaussian random process, this is all one needs to know to completely characterise them. The stochastic dynamics can likewise be described completely in terms of the first and second moment statistics of the force field, which are easily obtainable from the Poisson equation: in particular, the $k$-modes of the potential fluctuations are related to those of the density by $\phi_{\boldsymbol k} = -4 \pi G \rho_0 \delta_{\boldsymbol k} k^{-2}$, as discussed in~\cite{EZFC} and, in another context, in~\cite{EZFCH}. As shown in the latter work, this formulation can also describe standard two-body relaxation. As such, it can be used to calculate the effect of fluctuations arising from the presence of a collection of massive particles on a system of lighter ones. Relaxation in this context `heats' the light particles, by increasing their velocity dispersion, while the heavier particles lose energy via dynamical friction. The general theoretical framework used here thus also applies to dynamical friction heating of the halo in the presence of massive clumps. The relevant power spectrum of density fluctuations, in that case, is flat (white noise) over scales larger than the maximum size of the (monolithic) clumps. Thus the description in terms of dynamical friction heating is more relevant when the gas fluctuations can be described in terms of a system of long lived distinct clumps with sizes significantly smaller than the region where the core forms. When feedback is strong, and the gas fully turbulent on scales relevant to core formation, the spectrum of density fluctuations may be assumed to be approximated by a power law over such scales. As in the white noise case, the fluctuations lead to energy transfer from the gas to the halo particles, which drives the transformation of the latter's problematic central cusp into a core.
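As an illustration, Eq.~(\ref{eq:RMS}) can be evaluated numerically for a power-law spectrum with a top-hat filter. The following is a minimal sketch in Python, assuming a spectrum that is flat below a break wavenumber $k_m$ and falls as $k^{-n}$ above it; the parameter values shown are purely illustrative:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def tophat_W(k, R):
    # Fourier transform of a spherical top-hat window of radius R
    x = k * R
    if x < 1e-4:
        return 1.0 - x**2 / 10.0  # series expansion avoids cancellation
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def P(k, P_km=4.0, k_m=2.0, n=2.4):
    # power-law density-contrast spectrum (kpc^3), flat below the break k_m
    return P_km if k < k_m else P_km * (k / k_m) ** (-n)

def sigma_R(R):
    # RMS mass fluctuation within radius R (Eq. 1); k in kpc^-1
    integrand = lambda k: tophat_W(k, R)**2 * P(k) * k**2
    val, _ = quad(integrand, 0.0, 200.0 / R, limit=500)
    return np.sqrt(val / (2.0 * np.pi**2))

print(sigma_R(0.5))  # sigma_R at R = 0.5 kpc, for instance
\end{verbatim}
The upper integration limit is truncated where the oscillatory top-hat window has decayed to a negligible contribution.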
The energy transfer may still be described in particularly simple terms, reminiscent of Chandrasekhar's theory of two body relaxation; for a given halo and power spectrum $\mathcal{P} \propto k^{-n}$, the associated relaxation time $t_{\rm relax}$ is predicted to depend principally on just the normalisation of that spectrum at the minimal wavenumber, $\mathcal{P} (k_m)$ (at which the power law breaks), and the average gas density $\rho_0$. More concretely, for a halo particle moving with an unperturbed speed $v_p$ in a field of gas fluctuations carried by bulk flows moving with characteristic speed $v_r$ relative to it, one finds \begin{equation} t_{\rm relax} = \frac{n v_r v_p^2}{8 \pi (G \rho_0)^2 \mathcal{P} (k_m)}. \label{eq:relax} \end{equation} This is the timescale for the random velocity gained by a CDM particle, as a result of its motion in the stochastic force field born of gas fluctuations, to reach the unperturbed characteristic speed $v_p$ (and it reduces to Chandrasekhar's formula for $n = 0$; \citealp{EZFCH}). In this context, the timescale of significant central halo transformation, from cusp to core, should also scale thus. Controlled simulations (using the Hernquist-Ostriker code; \citealt{Hernquist1992}), whereby Gaussian random noise was applied to particles of live CDM haloes, produced cores on the expected timescale, when the haloes were kept strictly spherical \citep{EZFC}.~\footnote{When this condition was relaxed, the timescales for core formation were found to be about an order of magnitude shorter. But the scaling with the normalisation of the power spectrum and average density, as reflected in Eq.~(\ref{eq:relax}), remained. As noted in Section~\ref{sec:massmod}, we find no evidence of such accelerated (relative to the relaxation time) core formation rate here.} The characteristic velocity $v_r$ appearing in Eq.~(\ref{eq:relax}) is estimated through the `sweeping' approximation, long used in turbulence theory (\citealp{TaylSwepp1938, KraichSweep1964, TennekSweep1975}) and invoked by~\cite{EZFC}. It assumes that the equal-time spatial statistics of the fluctuation field are swept (`frozen in') into the time domain through large scale fluid flows. In this picture, $v_r$ consists of random and regular velocity components $U$ and $V$, characteristic of those flows, such that $v_r = \sqrt{U^2+V^2}$. In our case, since the halo particles are also moving with respect to the gaseous flows, the characteristic $v_r$ may be considered to include such motions.~\footnote{In principle, each fluctuating mode may have its own sweeping speed and the velocity distribution of the particles can be taken into account, as in \cite{EZFCH}. We do not consider this more complex case here.} We test this assumption in Section~\ref{sec:disp}. Finally, as the average gas density in a realistic system will not be constant (even if it may only slowly change within a certain radius, as we will see in relation to Fig.~\ref{fig:densityprofiletime_all}), we define the average density, at a given time, within a sphere with radial coordinate $r$ by \begin{equation} \rho_0 (r, t) = \langle \rho_g ({\bf r},~t) \rangle_{|{\bf r}|<r}, \label{eq: rhoavdef} \end{equation} where $\rho_g ({\bf r}, t)$ is the local gas density and the average is evaluated within the volume enclosed by $r = |{\bf r}|$. A further average over time results in $\rho_0 (r)$.
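In practice, Eq.~(\ref{eq: rhoavdef}) amounts to dividing the gas mass enclosed within $r$ by the volume of the sphere. A minimal sketch, assuming gas cell positions and masses (array names are illustrative) have been read from a snapshot:
\begin{verbatim}
import numpy as np

def rho0(r, pos, m_cell):
    # Eq. (3): volume-averaged gas density within radius r; pos is an
    # (N, 3) array of cell positions relative to the halo centre [kpc],
    # m_cell the corresponding gas masses [Msun]
    inside = np.linalg.norm(pos, axis=1) < r
    return m_cell[inside].sum() / (4.0 / 3.0 * np.pi * r**3)

# the time-averaged profile rho_0(r) follows by averaging over snapshots:
# rho0_bar = np.mean([rho0(r, p, m) for (p, m) in snapshots])
\end{verbatim}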
If the averaging is not evaluated over a sphere centred at the origin, but over spheres with centres at radial coordinate $r$ and radius $r_{\rm av}$, an analogous average may also be defined (as in equation~\ref{eq:semiloc}). \subsection{Numerical implementation} \begin{figure*} \centering \includegraphics[width=\textwidth]{./PDFFigs/densityMap_faceedge.pdf} \caption{Gas projected density map inside a box of length 5 kpc (2.5 kpc on a side from the centre of the halo). The gas contracts from the initial NFW profile, then much of it is expelled from the inner region during the first few hundred Myr. It gradually settles into a disk-like configuration, but its distribution fluctuates over time.} \label{fig:Gas_Map} \end{figure*} We wish to examine feedback-driven gas fluctuations in a full hydrodynamic simulation, to find out whether their characteristics and their effects may fit the picture outlined above. To facilitate the isolation of the principal features, we focus on the case of an isolated galaxy. Its parameters (Table~\ref{table:halo}) are chosen to correspond to those of dwarf galaxies, where dark matter dominates even in the central regions and the discrepancy between the inferred density and a centrally concentrated halo profile seems particularly clear. The simulation of the isolated dwarf is set up and run with the RAMSES adaptive mesh refinement code \citep{Teyssier2002}, and was presented in \citet{Readsim2016}. We use `Run M9c224e6' from that paper, which has 281 time outputs regularly spaced over a timespan of 13.7 Gyr. The initial condition for this simulation assumes an NFW dark matter halo \citep{nfw} with total mass inside the virial radius $M_{200} = 10^9$\,M$_\odot$ and concentration parameter $c_{200} = 22.23$. A fraction $f_b = 0.15$ of that mass ($M_g (r_{\rm vir}) = 0.15 \times 10^9\,M_\odot$) is in gas that is initially in hydrostatic equilibrium, with a metallicity of $10^{-3}$\,Z$_\odot$ and some angular momentum set to match median expectations in a $\Lambda$CDM cosmology. While the simulation assumes that the dwarf galaxy contains the universal baryon fraction out to its virial radius, the ratio between the gas mass initially within the region of interest for core formation ($\sim 1~{\rm kpc}$) and the mass within the virial radius is much lower, and thus consistent with observational constraints (e.g.~\citealp{ReadBarFrac2005}). At the start of the simulation, the gas rapidly cools and collapses, then re-expands under the influence of feedback. In the process, the central gas mass fraction further decreases (cf. Section~\ref{sec:gasdens}). The sub-grid model for star formation and feedback is described in detail in \citet{Readsim2016}. Briefly, star formation follows a Schmidt relation with a star formation efficiency per free-fall time of $\epsilon_{\rm ff} = 0.1$ and a density threshold for star formation of $\rho_* = 300$\,atoms\,cm$^{-3}$. The stellar feedback model is as in \citet{Agertz2013} and includes a model for Type II and Type Ia supernovae, stellar winds and radiation pressure. The simulation resolution was chosen to capture momentum driving from individual supernova events, with a gas spatial resolution of $\Delta x \sim 4$\,pc and mass resolutions in gas, and in stars and dark matter, of $m_{\rm g} = 60\,{\rm M}_\odot$ and $m_* = M_{\rm DM} = 250\,{\rm M}_\odot$, respectively. This allows individual star formation events to be resolved, injecting stellar feedback into the interstellar medium in the correct locations at the correct times.
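The Schmidt relation just described can be summarised in a few lines. The following sketch shows the star formation rate density implied by the quoted $\epsilon_{\rm ff}$ and threshold; it is an illustration of the recipe, not the actual RAMSES implementation (for which see \citealt{Readsim2016}):
\begin{verbatim}
import numpy as np

G_CGS = 6.674e-8   # gravitational constant [cm^3 g^-1 s^-2]
M_P = 1.673e-24    # proton mass [g]
EPS_FF = 0.1       # star formation efficiency per free-fall time
N_THRESH = 300.0   # density threshold [atoms cm^-3]

def sfr_density(n_H):
    # Schmidt law: rho_dot_* = eps_ff * rho / t_ff above the threshold
    if n_H < N_THRESH:
        return 0.0
    rho = n_H * M_P                                     # [g cm^-3]
    t_ff = np.sqrt(3.0 * np.pi / (32.0 * G_CGS * rho))  # free-fall time [s]
    return EPS_FF * rho / t_ff                          # [g cm^-3 s^-1]
\end{verbatim}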
\begin{table} \centering \caption{Initial profile parameters.} \label{table:halo} \begin{tabular}{lll} \hline NFW scale length & $r_s$ & $0.88\; {\rm kpc}$ \\ NFW characteristic density & $\rho_s$ & $5.34 \times 10^7~ {\rm M_{\odot} ~kpc^{-3}}$ \\ Concentration parameter & $c_{200}$ & $22.23$ \\ Mass within the virial radius & $M_{200}$ & $10^9\; {\rm M_{\odot}}$ \\ Baryonic mass fraction & $f_b$ & $0.15$ \\ \hline \end{tabular} \end{table} \section{Characterizing the gas fluctuations} \begin{figure} \centering \includegraphics[width=\linewidth]{./PDFFigs/GasMassTimeSeries_HSSC.pdf} \caption{Time variation of the gas mass enclosed within different radii $r$ (in kpc), suggesting a quasi-stationary stochastic process.} \label{fig:MTsiers} \end{figure} \begin{figure*} \centering \includegraphics[width= 0.48\textwidth]{./PDFFigs/GasAverageDensity_HSSC_fnl_vf.pdf} \includegraphics[width= 0.49\textwidth]{./PDFFigs/gas_mass_ratio_vff.pdf} \caption{{\it Left:} Average gas density inside radius $r$, $\rho_0 (r, t)$, for all snapshots, obtained using equation~(\ref{eq: rhoavdef}). The dashed line is a time average over the indicated interval, which corresponds to the simulation time beyond an initial non-equilibrium phase during which much of the gas is blown out of the region inside $r_s \simeq 1~{\rm kpc}$. The time averaged profile is only mildly varying inside $r_s$. {\it Right:} Ratio of the gas mass to the combined gas and dark matter mass, $f_g = M_g /(M_g + M_d)$, with $f_g (r_s)$ denoting the fraction inside $r_s$; thus here $M_g = M_g (<r_s, t)$ and $M_d = M_d (<r_s, t)$, while $f_{g0}$ denotes the ratio of the gas mass to the initial total mass enclosed: $f_{g0} (r_s) = M_g (<r_s, t) /[M_g (<r_s, 0) + M_d (< r_s, 0)]$. Similarly, $f_g (r_{1/2})$ and $f_{g0} (r_{1/2})$ refer to those ratios inside the half mass radius of the stellar component that forms from the repeated starbursts, $r_{1/2} \simeq 0.5~{\rm kpc}$, which physically correlates with the scale of the relatively steady gas density associated with strong fluctuations.} \label{fig:densityprofiletime_all} \end{figure*} In order to apply the physical model described above, leading to Eq.~(\ref{eq:relax}), we need to know the average gas density $\rho_0$. We also need to evaluate the power spectrum of the gas fluctuations and find out whether it may indeed be adequately described by a power law, in which case we need to estimate values for its normalisation $\mathcal{P} (k_m)$ and index $n$, and verify that the spatial spectrum is relevant in describing the temporal fluctuations. This is the objective of this section. \subsection{Gas density and mass over time} \label{sec:gasdens} Fig.~\ref{fig:Gas_Map} shows gas density contrast maps, within a 5 kpc box, 2.5 kpc on a side from the halo centre\footnote{Unless otherwise stated, all centering here refers to the halo centre located using the shrinking sphere method.}. The gas initially contracts from an NFW profile with scale length $r_s$ until the critical threshold for star formation is reached. The fluid is then stirred by the nascent feedback, with much of it expelled completely out of the region delineated by $r_s$; the gas mass inside $\sim$$r_s$ decreases to less than half its initial value (the dark matter mass is largely unaffected, decreasing only by about $20 \%$ within $r_s$, and thus the gas fraction decreases significantly, as shown in the right hand panel of Fig.~\ref{fig:densityprofiletime_all}). After that, the gas mass inside $\sim$$r_s$ is relatively well conserved.
Variations in the gas mass enclosed within a given radius then principally arise from a pattern of fluctuations (gaseous blobs) materialising on a large range of scales. The gas progressively settles into a disk-like configuration, but the mass fluctuations remain sustained and steady. That the fluctuations form a quasi-steady stochastic process is suggested by Fig.~\ref{fig:MTsiers}, showing the mass variation within different radii over time. The mass fluctuations persist in time but diminish with radius, as higher mass scales are reached. Significant variations on longer time-scales nevertheless persist; these correspond to large scale flows. In the context of the model of \cite{EZFC} (recapped in Section~\ref{sec:physicalset}), such large scale motions `sweep' the smaller scale fluctuations with the characteristic speed $v_r$ (equation~\ref{eq:relax}). For the description of the effect of the gas fluctuations on the halo particles in terms of a standard diffusion process (leading to relaxation timescales in the form of Eq.~\ref{eq:relax}) to be complete, the statistics of the stationary stochastic process should also be entirely described by averages and dispersions, as in a Gaussian random process. This is examined in Appendix~\ref{Sec:Additional_Gauss}. The gas density inside spheres of radius $r$ is shown in Fig.~\ref{fig:densityprofiletime_all}, as a function of time. After the initial outflow during the first few hundred Myr, the time-averaged gas profile is only mildly varying with radius inside the initial NFW scale length ($r_s = 0.88~{\rm kpc}$), and then decreases rapidly beyond that. Any core that forms is expected to be of the order of the scale length of the cusp, which renders the assumptions in~\cite{EZFC} (discussed in Section 2.1.2 therein) plausible. One may for instance attempt to estimate the relaxation time using Eq.~(\ref{eq:relax}) with the mean density $\rho_0$ evaluated at radius $\gtrsim r_s$, as the sharp decrease in density beyond that scale suggests that fluctuations in the gas at larger radii would contribute little to the overall variations in the gas potential and associated energy input to the central halo. In Section~\ref{sec:relax}, we find that this is indeed a good approximation. Here we note that there are two theoretically independent reasons why, given the parameters of the simulation, a core is expected to form on a scale of order $1~{\rm kpc}$. For, even if the gas density did not drop beyond $r \gtrsim r_s$, and strong fluctuations were present much beyond that radius, the resulting stochastic perturbations would be expected to have a major dynamical effect only up to a scale of order $r_s$. This was in fact found to be the case even when the amplitude of the fluctuations (as fixed by the normalisation of the power spectrum) was increased well beyond the minimum level required to significantly affect the inner halo profile (cf. Fig.~6 of~\citealt{EZFC}). The phenomenon is likely due to the transition to an isothermal profile in an NFW halo at $r \simeq r_s$, which is relatively stable against fluctuations, given that the distribution function tends to the exponential Boltzmann form in that regime (see, e.g., Fig.~1 in~\citealp{WidrowDF2000}).
This phenomenon relates to the {\it effect} of the fluctuations. The other reason why a core that forms out of a cusp here should have a radial scale of order $1~{\rm kpc}$ has to do with the existence of fluctuations in a realistic system: the extent of the spatial scale where the gas density and density contrasts are large is characteristic of the half mass radius $r_{1/2}$ of the stellar component that forms from the repeated starbursts, in our case about $0.5~{\rm kpc}$. Because of this, we will be able to define (in relation to Fig.~\ref{fig:Ein_main}) an energy input saturation radius, of order $2 r_{1/2}$, which results from the decreasing gas density and level of fluctuations. Beyond it, there is negligible energy transfer from gas to halo. The right panel of Fig.~\ref{fig:densityprofiletime_all} shows the fraction of gas inside both $r_s$ and $r_{1/2}$. We note that due to the small variation of the gas density averaged over radius, the lines closely correspond to one another. In addition, after the gas loss associated with the initial blowout phase, the gas fraction decreases to about half its initial value. Thus, any core formation will result from fluctuations in a rather small gas fraction in the inner regions. The efficacy of this perhaps surprisingly small central gas fraction in modifying the dark matter dynamics is in fact a generic prediction of our theoretical framework, provided the strength of the gas fluctuations is at the level of the fiducial model in~\cite{EZFC} (cf. Sections~2.1.2 and 4.4.2 there). Below, we will find that the fluctuation levels in the present simulation, as measured by the normalisation of the power spectrum, are indeed consistent with those assumed in that fiducial model. \subsection{Power spectra} \begin{figure} \centering \includegraphics[width=\linewidth]{./PDFFigs/GasPowSpec_HSSC_vf.pdf} \caption{Power spectrum of gas density fluctuations, averaged over 1 Gyr intervals. The shaded region corresponds to the variation of a power law $\sim k^{-n}$, with $2 \le n \le 3$; the horizontal line corresponds to $\mathcal{P} (k_m) = 4~{\rm kpc^3}$. The spectrum follows a power-law form over a range of scales, with a best-fit power-law index $n_{\rm best\,fit} = 2.31 \pm 0.02$ (black dashed line).} \label{fig:powerspecavrg} \end{figure} We now proceed to determine the power spectrum of the density fluctuations qualitatively probed above. We evaluate it in the following way. For a given time snapshot, we consider gas cells within a cubical box of extent 2.5~kpc on each side from the origin.\footnote{Taken here to be the centre of the halo located through the shrinking sphere method. We have verified that the results are robust to displacements of that centre by up to $\sim {\rm 1~kpc}$, and similar results are obtained when the gas centre of mass or shrinking sphere centres are used.} We assign each cell, depending on its radial location, to a spherical shell. The average gas density inside the shell is then subtracted from the density at the given cell. Finally, the Fourier transform is taken and squared in the usual manner. This is done for each time snapshot, and the results are averaged over a chosen set of snapshots. The outcome is shown in Fig.~\ref{fig:powerspecavrg}. The spectrum follows a power-law form over a range of scales, with an index consistent with that assumed in the fiducial model of~\cite{EZFC}, where $n = 2.4$ was assumed.
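The following sketch illustrates the procedure just described, assuming the AMR cells have already been deposited onto a uniform grid centred on the halo; the deposition step, grid size and (continuum) normalisation convention are our assumptions, not a transcription of the actual analysis code:
\begin{verbatim}
import numpy as np

def density_power_spectrum(rho, box, nshells=64):
    # rho: gas density on a uniform (N, N, N) grid [Msun/kpc^3];
    # box: box side length [kpc]
    N = rho.shape[0]
    x = (np.arange(N) - N / 2 + 0.5) * box / N
    X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
    r = np.sqrt(X**2 + Y**2 + Z**2)

    # subtract from each cell the mean density of its radial shell
    shell = np.minimum((r / r.max() * nshells).astype(int), nshells - 1)
    counts = np.bincount(shell.ravel(), minlength=nshells)
    sums = np.bincount(shell.ravel(), weights=rho.ravel(), minlength=nshells)
    delta = (rho - (sums / np.maximum(counts, 1))[shell]) / rho.mean()

    # Fourier transform, square, and average in spherical k-bins
    dk = 2.0 * np.pi / box
    P3d = np.abs(np.fft.fftn(delta) * (box / N) ** 3) ** 2 / box**3
    kf = 2.0 * np.pi * np.fft.fftfreq(N, d=box / N)
    KX, KY, KZ = np.meshgrid(kf, kf, kf, indexing='ij')
    kmag = np.sqrt(KX**2 + KY**2 + KZ**2).ravel()
    kbins = np.arange(dk, kmag.max(), dk)
    idx = np.digitize(kmag, kbins)
    Pk = np.array([P3d.ravel()[idx == i].mean() for i in range(1, len(kbins))])
    return 0.5 * (kbins[:-1] + kbins[1:]), Pk
\end{verbatim}
With this normalisation the returned spectrum of the dimensionless contrast carries units of ${\rm kpc^3}$, as quoted in the text; averaging over snapshots yields curves analogous to Fig.~\ref{fig:powerspecavrg}.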
If we take $k_m \simeq 2~{\rm kpc^{-1}}$, then $\mathcal{P} (k_m) \simeq 4~{\rm kpc}^3$ (again close to what was assumed in the fiducial model of \citealt{EZFC}, namely $\mathcal{P} (k_m) = 4.6~{\rm kpc}^3$). \begin{figure} \centering \includegraphics[width=\linewidth]{PDFFigs/HSSC_Pk_E_vf.pdf} \caption{Kinetic energy power spectrum of gas motion with the least-squares best fit (black dashed line) and the Kolmogorov spectrum ($\propto k^{-5/3}$, blue dashed line). The spectrum is calculated through the square of the Fourier transform of $\rho^{1/2}_{g_i} {\bf v}_i$, taken over gas cells $i$ inside a 5~kpc box around the halo (shrinking sphere) centre. The agreement with the Kolmogorov form supports feedback-driven fluctuations having characteristics of fully developed turbulence.} \label{fig:velocitypowerspecfit} \end{figure} The picture of fully turbulent feedback-driven fluctuations is supported by the near Kolmogorov form of the specific kinetic energy spectrum, shown in Fig.~\ref{fig:velocitypowerspecfit}, which is followed over a large range in wave numbers. This spectrum persists even as the gaseous system evolves from a fully three dimensional configuration to disk-like form. Indeed,~\cite{GrisRomReadTurb2017} found that it is present even in relatively quiescent phases, as long as feedback is also present (the spectrum flattens when feedback is eliminated). In principle, turbulence may also be driven by self gravitating instabilities rather than feedback (e.g., \citealp{YUKrumTurb2021, NusSilkTurb2022}). A simple calculation of the gas Toomre parameter in the present case nevertheless suggests an absence of strong gravitational instabilities (see however~\citealp{DekelQ2016}). In the context of our theoretical framework, a further (likely related) physical distinction may also be reflected in the shape of the density fluctuation power spectrum; for, as mentioned in Section~\ref{sec:physicalset}, a flat (white noise) power spectrum corresponds to the limit when heating through dynamical friction coupling between monolithic clumps and the dark matter is the dominant process. In this case the clumps are long lived and distinct, and their maximum size is significantly smaller than the scale associated with the region where the cusp-core transformation takes place (and on which the spectrum takes the flat form). The steep power-law dependence that continues up to large scales, as found here, signals a pre-eminence of feedback driven fluctuations. Finally, we note that the break in the kinetic energy power spectrum at $k_m \simeq 2~{\rm kpc}^{-1}$ is consistent with that inferred from Fig.~\ref{fig:powerspecavrg}, suggesting a demarcation between the turbulent scaling regime and the large scale flows that may carry the fluctuations into the time domain. We examine this issue of the transposition of the fluctuation characteristics from the spatial to temporal domains further in the next subsection. \subsection{RMS mass fluctuations} \label{sec:disp} \begin{figure} \centering \includegraphics[width= 0.49\textwidth]{PDFFigs/sigmaR_wbestfits_vf.pdf} \caption{Gas mass dispersion at scale $R$, $\sigma^2_R = \langle (M_i (R) - \langle M (R) \rangle_i)^2 \rangle_i /\langle M (R) \rangle_i^2$, where $i$ denotes different time snapshots, and $M(R)$ is the mass inside a sphere of radius $R$. The centre of the sphere is randomised, chosen from a homogeneous distribution inside radius 1~kpc ($\simeq r_s$).
The process is repeated for 80 realisations and the average $\sigma_R$ over the realisations (solid line) and its dispersion (error bars) are evaluated. The dotted line refers to a (least-squares) best fit using Eq.~(\ref{eq:RMS}) with a power-law power spectrum and a top-hat filter, while the shaded region shows the variation with $n$ in the range of the corresponding shaded region of Fig.~\ref{fig:powerspecavrg}, with moderately smaller normalisation at $k_m = 2~{\rm kpc}^{-1}$. All fits assume $\mathcal{P} (< k_m) = \mathcal{P} (k_m)$. The power spectrum parameters, inferred here from simulation data in the time domain, are generally consistent with those of the equal-time power spectra of Fig.~\ref{fig:powerspecavrg}.} \label{fig:massdisper} \end{figure} Our formulation of the dynamical effect of the gas fluctuations makes use of the sweeping and random sweeping approximations of turbulence theory, mentioned at the end of Section~\ref{sec:physicalset}. At some level, this is analogous to an ergodic assumption, in the sense that statistical properties of the random field in the spatial domain are transferred into the time domain. If such an approximation is valid in our case, then we should expect the following: if the variance $\sigma_R^2$ on the left hand side of Eq.~(\ref{eq:RMS}) is calculated in the time domain over the simulation snapshots, it should correspond to the result obtained by plugging the equal-time power spectrum $\mathcal{P} (k)$, with parameters consistent with what is inferred above (Fig.~\ref{fig:powerspecavrg}), into the right hand side of Eq.~(\ref{eq:RMS}). In a stochastic process that is homogeneous in space and time, with the sweeping assumptions holding, the $\sigma_R$ measured in the time domain (over a sufficiently long time) will be invariant with respect to the centre it is measured from. This cannot be strictly the case here, however, since the average gas density is not strictly homogeneous. Furthermore, the effect of fluctuations carried by rotational flows will be reduced when calculating $\sigma_R$ in spheres anchored close to the centre of rotation, with little mass transiting through the shells (while the effect of those carried by large scale radial flows is enhanced). To account for such effects, we randomise the centre of the sphere within which the mass is calculated. The centres are, in practice, sampled from a homogeneous distribution within the radius of interest for core formation (1~kpc from the origin). We then conduct eighty different realisations of such randomisations and evaluate the average of the dispersion over the realisations. Fig.~\ref{fig:massdisper} shows the result. The best fit using Eq.~(\ref{eq:RMS}) involves a power-law power spectrum with index $n$ consistent with the lower limit in the shaded region of Fig.~\ref{fig:powerspecavrg}; and also with similar normalisation $\mathcal{P} (k_m)$, but at smaller $k_m$. The change in $k_m$, by itself, is not expected to affect the dynamical effect of the fluctuations in the diffusion limit at the basis of our model (with the consequence that the relaxation time in Eq.~\ref{eq:relax} does not directly depend on $k_m$; see also Fig.~8 of~\citealt{EZFC}). On the other hand, the shaded area of Fig.~\ref{fig:massdisper} suggests that the mass dispersion is also consistent with larger values of $k_m$, but with mildly smaller normalisation (which does moderately affect the relaxation time).
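The time-domain measurement may be sketched as follows, assuming a list of snapshots each providing gas cell positions and masses (names illustrative); the randomised centres are drawn uniformly from a sphere of radius 1 kpc, as in the text:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def random_centres(n, r_max=1.0):
    # n centres drawn uniformly from a sphere of radius r_max [kpc]
    c = rng.normal(size=(n, 3))
    c /= np.linalg.norm(c, axis=1, keepdims=True)
    return c * r_max * rng.random((n, 1)) ** (1.0 / 3.0)

def sigma_R_time(R, snapshots, centre):
    # relative dispersion, across snapshots, of the gas mass inside
    # a sphere of radius R anchored at `centre`
    M = np.array([m[np.linalg.norm(p - centre, axis=1) < R].sum()
                  for (p, m) in snapshots])
    return M.std() / M.mean()

# average over 80 realisations; their spread gives the error bars:
# vals = [sigma_R_time(R, snapshots, c) for c in random_centres(80)]
\end{verbatim}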
The general consistency of the parameters inferred here, in comparison with those obtained from the spatial power spectrum (Fig.~\ref{fig:powerspecavrg}), supports the contention that the spatial fluctuations are transported to the time domain in accordance with the sweeping assumptions (although the separation between the large scale `carrier flows' and the transported fluctuations may not be sharp, and this may contribute to the smaller best fit $k_m$ here). As we will see below, these parameters are also consistent with the actual energy input, from the fluctuating gas to the halo component, in the simulation studied here; in the sense that the inferred energy input is consistent with theoretical estimates using such values. \section{Stochastic energy transfer and core formation} \label{sec:relax} \subsection{The relaxation time and its parameters} Now that we have estimates of the parameters entering Eq.~(\ref{eq:relax}), we can apply it in order to check whether it predicts a significant effect for the model galaxy at hand. As a first estimate we simply assume $v_p \approx v_r \approx v_c$, where $v_c = v_c (r_s)$ is the circular speed at $r_s$ (close to the maximal rotation speed). If we furthermore use a value for $\rho_0$ characteristic of the region where the density starts to decrease rapidly ($r \gtrsim r_s$ from Fig.~\ref{fig:densityprofiletime_all}), we find \begin{equation} t_{\rm relax} = 13.2~{\rm Gyr} ~ \frac{n}{2.5} \left(\frac{v}{20 {\rm ~km/s}} \right)^3 \left(\frac{\mathcal{P} (k_m) }{3 {\rm ~kpc}^3}\right)^{-1} \left(\frac{\rho_0}{10^{6} ~{\rm M}_\odot/{\rm kpc}^3}\right)^{-2}\!\!. \label{eq:relaxnum} \end{equation} This suggests that gas fluctuations may have a significant effect on dark matter orbits within $r_s$ on a timescale of the order of a Hubble time. If $v_p$ and $v_r$ are associated with the characteristic velocity dispersion (rather than $v_c$), whose square exceeds $v_c^2$ by about $10 \%$ at $\sim$$r_s$, the above timescale increases by a factor of about $(1.1)^{3/2}$. The timescale also increases if the characteristic speed of the halo particles relative to the gas, $v_r$ (cf. Section~\ref{sec:physicalset}), is associated with gas velocities at $\sim$$r_s$, which are around ${\rm 30 ~km/s}$ (cf. Fig.~\ref{fig:velav}). Because flows may take the gas well beyond $r_s$, gas velocities are generally larger than $v_p$. Once a rotating disk starts to materialise (Fig.~\ref{fig:Gas_Map}), the regular component would also include gas rotation, in addition to the orbital speeds of the halo particles. Below, we will generally set $v_r \simeq 30~{\rm km/s}$, but note that $v_r$ should increase with radius inside $r_s$, and that it may generally be larger than the value adopted here when all the aforementioned motions and flows are taken into account. The variation of the relaxation time with radius also depends on whether the energy input from gas fluctuations is assumed to be primarily global or local: the relaxation mechanism may in principle be global, in the sense of depending only on some effective $\rho_0$, corresponding to a space and time average over an appropriately chosen region (e.g., the region within which the feedback driven fluctuations are significant), or it may be local, depending on the local gas density $\langle \rho_g \rangle (r)$ (averaged only over time). This is an issue we discuss further below, in connection to Fig.~\ref{fig:Lovsglb}.
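For reference, the numerical evaluation behind Eq.~(\ref{eq:relaxnum}) can be reproduced as follows (a sketch; the quoted 13.2 Gyr is recovered to within the rounding of unit conversions):
\begin{verbatim}
import numpy as np

G = 4.5e-6    # gravitational constant [kpc^3 Msun^-1 Gyr^-2]
KMS = 1.023   # 1 km/s in kpc/Gyr

def t_relax(n=2.5, v=20.0, P_km=3.0, rho0=1.0e6):
    # Eq. (2) with v_p = v_r = v [km/s], P(k_m) in kpc^3,
    # rho0 in Msun/kpc^3; returns Gyr
    return n * (v * KMS) ** 3 / (8.0 * np.pi * (G * rho0) ** 2 * P_km)

print(t_relax())  # ~14 Gyr, close to the 13.2 Gyr of Eq. (4)
\end{verbatim}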
For a global energy input, the relaxation time depends only on the variation of $v_p$ with radius, if $v_r$ is kept fixed. If $v_p$ is associated with $v_c (r)$, then within the initial NFW cusp $v_p^2 \sim r/r_s$ increases by a factor of 6.25 as $r/r_s$ goes from 0.1 to 1. If $v_p$ is taken to correspond to the velocity dispersion, then $v_p^2 \sim - r/r_s \ln r/r_s$ deep inside the cusp, and increases by a factor of about 2.25 between $r/r_s = 0.1$ and 1. Thus, particles nearer to the centre are expected to be affected first by the fluctuations; if only because their initial velocities, and therefore the relaxation times required to affect them significantly, tend to be smaller. Core formation would thus proceed inside out, with the mass distribution at larger radii affected at later times. In what follows we evaluate the energy input in the simulation at hand and compare it with the theoretical picture sketched here; testing the `inside out' scheme of core formation and the assumption of dependence of the energy transfer principally on the average density inside $\sim$$r_s$ rather than the local density. \subsection{Energy input} \begin{figure} \centering \includegraphics[width=\linewidth]{./PDFFigs/HaloEnrgyInput_vf.pdf} \caption{Energy input to halo particles from gas fluctuations inside indicated radii $r$. $E_r$ is evaluated via Eq.~(\ref{eq:Eind}), and measured from the temporal zero point $t = t_{\rm diff}$, beyond which the diffusion limit may be assumed to hold. In line with the discussion following Eq.~(\ref{eq:vdisp}), we take $t_{\rm diff} = 1~{\rm Gyr}$. In order to reduce fluctuations related to the precise choice of zero point, we calculate in practice $\Delta E_r = E_r (t > t_{\rm diff}) - \langle E_r \rangle_{t_{\rm diff}}$, where the average is taken in the interval $0.5~{\rm Gyr} \le t \le 1.5~{\rm Gyr}$. The dashed line indicates energy transfer through a stationary stochastic process, with parameter values appearing in Eq.~(\ref{eq:Ein}). It assumes that energy input saturates around $r_{\rm sat} = 1.5~r_s \simeq 1.3~{\rm kpc}$ (corresponding to the converging lines). For smaller radii, the lines flatten after a few Gyr due to mass and energy transfer resulting from particle migration towards larger radii.} \label{fig:Ein_main} \end{figure} \begin{figure*} \centering \includegraphics[width= 0.49\textwidth]{./PDFFigs/E_in_sub_vf.pdf} \includegraphics[width= 0.49\textwidth]{./PDFFigs/Lovsglb_vf.pdf} \caption{Local vs global energy transfer. {\it Left:} Radial energy transfer as a function of time. {\it Right:} Energy input rate as a function of radius. The left hand panel shows that radial energy transfer is slow compared to the energy input; the straight lines represent steady diffusive energy input, with no transfer outside the radii enclosed. While this holds, one may estimate the energy input rate at various radii by simply evaluating the average slope of the generally linear energy increase. The right hand panel shows those slopes over the first 1, 2, 3 and 4 Gyr (solid lines with indicated values of $T$). The dashed line corresponds to an estimate assuming global energy transfer; using Eq.~(\ref{eq:EGlob}) with $n = 2.5$, $v_r = 30.0~{\rm km/s}$, $\mathcal{P} (k_m) = 3.6~{\rm kpc^3}$, and the average gas density within $r_s$, $\rho_0 (r_s) = \langle \rho_0 (r_s, t) \rangle_T$, with $\rho_0 (r, t)$ measured directly from the simulation (using Eq.~\ref{eq: rhoavdef}) and averaged over $T = [0: 4]~{\rm Gyr}$ (giving $\rho_0 = 1.95 \times 10^6~M_\odot/{\rm kpc^3}$).
The dotted line corresponds to the assumption of purely local energy input; using the indicated equation, where the local gas density $\rho_g ({\bf r}, t)$ is time-averaged (in the range $T = [0: 4]~{\rm Gyr}$) over spherical shells at $r$. This vastly overestimates the energy input at all radii. The dash-dotted line represents an intermediate alternative, whereby $\rho_g$ is averaged over spheres of radius $r_s$ centred at $r$ (as in Eq.~\ref{eq:semiloc}).} \label{fig:Lovsglb} \end{figure*} Equation~(\ref{eq:relax}) implies that a particle increases its velocity variance with time $T$ as \begin{equation} \langle (\Delta v)^2 \rangle = \frac{8 \pi (G \rho_0)^2 \mathcal{P}({k_m})}{n v_r} T. \label{eq:vdisp} \end{equation} The linear time dependence here is characteristic of a diffusion process. It is valid in the diffusion limit, which means that $T \gg t_{\rm diff} \equiv (v_r k_m)^{-1}$~\citep{EZFC}. To account for this in practice, we will generally measure $T$ and $\langle (\Delta v)^2 \rangle$ from zero points at times $t \gtrsim t_{\rm diff}$. This is necessitated not just by the applicability of the diffusion limit, but also by the assumption of a quasi-steady stochastic process, which is clearly invalid during the first few hundred Myr: a rapidly evolving phase, involving the initial gas contraction and the triggering of a starburst that expels most of the gas out of $r_s$. Assuming $v_r \simeq 30~{\rm kpc/Gyr}$ gives $(v_r k_m)^{-1} \lesssim 1/30~{\rm Gyr}$, for $1/k_m \lesssim 1~{\rm kpc}$. Thus the steady state diffusion limit may be safely assumed to be established beyond 1~Gyr. We will take it to be our zero point for $T = t - t_{\rm diff}$. As in standard two body relaxation, one may assume that fluctuations initially change just the velocities of the halo particles. The average energy change per unit mass resulting from this is $\langle \Delta E \rangle = \langle \Delta \frac{1}{2} v^2 \rangle = {\bf v} \cdot \langle \Delta {\bf v} \rangle + \frac{1}{2} \langle (\Delta v)^2 \rangle$. In principle, the particles may gain energy through the first term or lose it to the fluctuating field through dynamical friction, as in any general diffusive dynamical process. But in practice the mass of the halo particles is far too small for dynamical friction to be significant, and so the average energy gain per unit mass is simply $\langle \Delta E \rangle = \frac{1}{2} \langle (\Delta v)^2 \rangle$. By fixing $v_r$, and assuming the energy transfer mechanism to be global --- again, in the sense of depending on an effective characteristic density $\rho_0$ rather than on the local $\langle \rho_g \rangle_T (r)$ --- the expected energy input to halo particles within radius $r$ may be estimated as \begin{equation} E_{\rm in} (< r) =~\langle M (<r) \rangle~\langle \Delta E \rangle~=~\langle M (<r) \rangle~\frac{4 \pi (G \rho_0)^2 \mathcal{P}(k_m)}{n v_r}~T, \label{eq:EGlob} \end{equation} where $\langle M(<r) \rangle$ is the (time) average of the halo mass enclosed within $r$. We discuss the assumption of global energy input below. First, we assume it holds at least approximately, and use it to estimate the total energy transfer from the gas to the halo. Given the rapidly decreasing gas density beyond $r_s$ (Fig.~\ref{fig:densityprofiletime_all}), one may suppose that the energy input saturates at $r_{\rm sat} \gtrsim r_s$. If the energy transfer depends on global properties, then $\rho_0$ may be considered to correspond to the space and time average of the gas density inside $r_{\rm sat}$, i.e.
$\rho_0 ({\rm sat}) = \langle \rho_0 (r_{\rm sat}, t)\rangle_T$, where $\rho_0 (r_{\rm sat}, t)$ is given by~(\ref{eq: rhoavdef}), with $r = r_{\rm sat}$ (unless otherwise stated, particularly in relation to Fig.~\ref{fig:Lovsglb}, time averages are evaluated from $T = 0$, i.e. $t = t_{\rm diff} = 1~\rm Gyr$, to the end of the simulation at $t = 13.7~\rm Gyr$). For $r_{\rm sat} \gtrsim r_s$, $\langle M (<r) \rangle$ is close to the initial mass at $T=0$, since the dark matter mass inside it is essentially conserved. We thus write, in terms of the initial mass $M_0 (< r)$, \begin{multline} E_{\rm in} (<r_{\rm sat}) = 10^9~M_\odot~{\rm kpc^2~Gyr^{-2}}~\frac{M_0 (< r_{\rm sat})}{10^8 M_\odot}~\frac{T}{\rm Gyr}~\times\\ \left(\frac{n}{2.5}\right)^{-1} \left(\frac{v_r}{30 ~{\rm km/s}} \right)^{-1} \left(\frac{\mathcal{P} (k_m)}{3 {\rm~ kpc}^3}\right) \left(\frac{\rho_0~(r_{\rm sat}) }{10^{6} ~M_\odot ~{\rm kpc}^{-3}}\right)^2, \label{eq:Ein} \end{multline} where we have chosen $v_r$ in accordance with the discussion of the previous subsection, and inserted values characteristic of the initial halo mass and average gas density (as defined above) around $r_s$. More precisely, the numbers correspond to values at $r = r_{\rm sat} = 1.5 r_s \simeq 1.3~{\rm kpc}$, but we note that the estimate of the total energy input is not sensitive to the choice of $r_{\rm sat}$, as the product $M_0 (< r) \rho_0^2 (r)$ varies slowly with radius around $r_s$ (indeed, more generally, $\langle M (<r) \rangle \rho_0^2 (r)$ varies by at most a factor of 2 in the range $0.1~r_s \le r \le 2 r_s$). The power spectrum parameters are estimated from the results in Figs.~\ref{fig:powerspecavrg} and~\ref{fig:massdisper} (a larger $\mathcal{P} (k_m) = 4~{\rm kpc}^3$ gives the same result if $v_r = 40~{\rm km/s}$, which would take into account enhancement in the speeds of halo particles, relative to the gas, due to their own motion). We now wish to compare the prediction in Eq.~(\ref{eq:Ein}) with the actual energy input inferred from the simulation. For this purpose, we define the quantity \begin{equation} E_r = \frac{m}{2} \sum_i \left( v_i^2 + \Phi_i \right), \label{eq:Eind} \end{equation} where $m$ is the halo particle mass in the simulation, $v_i$ is the speed of particle $i$ and $\Phi_i$ is the Newtonian potential at its location. The summation is evaluated over all particles within radius $r$. An increase in this quantity measures the energy input inside radius $r$, provided there is no dark matter mass or energy outflow from that radius and the mass distribution beyond it remains constant (so that changes in the potential are solely due to modification of the mass distribution within $r$).~\footnote{The potential $\Phi_i$ at particle $i$ is due to the other halo particles; we ignore changes in the potential arising directly from the gas flows, as these constitute a fluctuating contribution with small average, particularly after the initial expulsion episode during the first few hundred Myr.} We may expect that these conditions hold, at all radii, for sufficiently small times, as the kinetic energy acquired by the halo particles is still being converted into changes in the mass distribution and potential, a process slower than that characterising the initial energy input.
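Operationally, Eq.~(\ref{eq:Eind}) is a direct sum over snapshot particle data; a minimal sketch (array names are illustrative, and the potential is assumed to be provided by the simulation output):
\begin{verbatim}
import numpy as np

def E_r(r, pos, vel, phi, m_part):
    # Eq. (9): (m/2) * sum_i (v_i^2 + Phi_i) over halo particles with
    # |x| < r; the 1/2 on Phi avoids double counting the pairwise
    # potential energy
    inside = np.linalg.norm(pos, axis=1) < r
    return 0.5 * m_part * (np.sum(vel[inside] ** 2) + phi[inside].sum())

# Delta E_r: subtract the average of E_r over 0.5 <= t <= 1.5 Gyr,
# as described in the text, to reduce sensitivity to the zero point
\end{verbatim}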
The conditions should hold for longer times as $r$ increases; and we expect that beyond $r_{\rm sat} \gtrsim r_s$, the energy input saturates to a specific value regardless of radius, and little energy is transferred beyond that radius, so the change in $E_r$ from its initial value (at $T=0$) corresponds to the total energy input from the fluctuating gas. In particular, if that input may be estimated theoretically using Eq.~(\ref{eq:Ein}), then one expects $\Delta E_{r_{\rm sat}} \simeq E_{\rm in} (< r_{\rm sat})$. These expectations are confirmed in Fig.~\ref{fig:Ein_main}. At all radii, we initially find a general linear increase in $E_r$, as expected from a steady diffusive process, with stochastic variations on this general trend. At smaller radii, the lines clearly flatten after a few Gyr, as a result of energy and mass outflow to larger $r$. A saturation radius $r_{\rm sat} \gtrsim r_s$, beyond which the lines converge, can be defined, with a value consistent with $r_{\rm sat} = 1.5~r_s \simeq 1.3~{\rm kpc}$ as assumed in Eq.~(\ref{eq:Ein}). The slope derived from that relation is also consistent with that derived directly from the simulation, as indicated by the dashed line. The above suggests that the total energy input within $r_{\rm sat}$ may be adequately described by assuming a global energy transfer mechanism, parameterised by the average gas density and halo mass at $r_{\rm sat}$ inferred from the simulation, and with $v_r$ and density contrast power spectrum parameters also consistent with those found in the simulated hydrodynamics. We now wish to ask to what extent this global approximation holds in general; for, as the time averaged gas density inside $r_s$ is not strictly constant, energy transfer may, in principle, depend instead on the local density. This is a possibility akin to invoking the purely local approximation when evaluating the effect of two body relaxation in stellar dynamics, by using Chandrasekhar's formula and plugging in the local stellar density. This is in principle plausible, but more difficult to interpret in the present case: whereas in two body relaxation logarithmic intervals in impact parameter (or spatial scale) contribute equally to the fluctuations leading to relaxation, here fluctuations at the largest scales (characterised by $\mathcal{P} (k_m)$) are more important. Global energy transfer implies the validity of Eq.~(\ref{eq:EGlob}). Local energy transfer, on the other hand, requires the evaluation of the time average $\langle \rho_g (r,t) \rangle_T^2$ over spherical shells centred at $r$, and its integration over the dark matter mass in those shells, $4 \pi r^2 \rho_d (r)\, {\rm d}r$ (assuming a fixed $v_r$). The right panel of Fig.~\ref{fig:Lovsglb} (dotted line) shows that, when assuming power spectrum parameters and velocity $v_r$ consistent with the simulation, this vastly overestimates the energy input. The non-local (global) assumption --- with the energy input rate within radius $r$ scaling simply as $\rho_0^2 M(<r)$, as implied by Eq.~(\ref{eq:EGlob}) --- results in much better agreement, with a moderate discrepancy at small radii that may at least partly be accounted for by an increase of $v_r$ with radius (as discussed above and expected from Fig.~\ref{fig:velav}), which we do not take into account here for simplicity. The global approximation, with total energy input within radius $r$ simply proportional to $M (<r)$, clearly cannot remain valid as $r \rightarrow r_{\rm sat}$, when the gas density and fluctuations rapidly decrease.
Indeed, the difference between the $\rho_0 (r_s)$ that fits the energy transfer rate in Fig.~\ref{fig:Lovsglb} and the $\rho_0 (r_{\rm sat})$ used in Fig.~\ref{fig:Ein_main} principally reflects the gradual saturation process; the fit in Fig.~\ref{fig:Ein_main} effectively assumes sudden saturation, while in reality the process is gradual. To take this into account one may invoke an intermediate regime, between the purely local and purely global energy transfer limits. We do this by defining \begin{equation} \rho_0 (r, r_{\rm av}) = \langle~\rho_g ({\bf r}_g,~t)~\rangle_{|{\bf r}_g - {\bf r}| < r_{\rm av},\, T}, \label{eq:semiloc} \end{equation} where the average of the gas density is evaluated over time at points ${\bf r}_g$ inside spheres with centres at radial coordinate $r = |{\bf r}|$ and radii $r_{\rm av}$. Thus, in this context, $\rho_0 (r_{\rm sat}) = \rho_0 (0, r_{\rm sat})$ and $\rho_0 (r_s) = \rho_0 (0, r_s)$. The results for $\rho_0 (r, r_s)$ are shown by the dash-dotted line in the right hand panel of Fig.~\ref{fig:Lovsglb}. They suggest that the energy transfer process is best considered as non-local, with a range $\sim$$r_s$. Finally, we note that although the energy is assumed to be transferred to halo particles initially as kinetic energy, due to modification in their velocities (as given by equation~\ref{eq:vdisp}), the changes eventually affect the self consistent potential. The resulting average gain in total energy per unit mass turns out to be amenable to estimation from a low energy cutoff that appears in the phase space distribution function. In Appendix~\ref{sec:cut}, we show how this can be related to the energy input calculated here, and connected to the change in the potential that accompanies core formation. \begin{figure*} \centering \includegraphics[width= 0.495\linewidth]{./PDFFigs/Mass_linlog.pdf} \includegraphics[width= 0.495\linewidth]{./PDFFigs/densityprof_thr_vf.pdf} \caption{Mass migration and core formation. The left panel shows the ratio between the halo mass within radii $r$ (in kpc) and the corresponding initial mass at $T = 0$ ($t = t_{\rm diff} = 1~{\rm Gyr}$, as discussed in relation to Fig.~\ref{fig:Ein_main}). The dashed lines are exponential fits, $\propto \exp(\alpha T)$, with numbers corresponding to $\alpha (r)$ in ${\rm Gyr}^{-1}$. The exponential decay may be derived from a simple theoretical model for the mass transfer; in its context $\alpha$ is predicted to scale with the inverse of an energy relaxation time (obtained using Eqs.~\ref{eq:Erelax}, \ref{eq:Dcoef}, and~\ref{eq:Einits}). The right panel shows the corresponding evolution in density. Theoretical expectations are shown by the dashed lines. They are obtained by differentiating Eq.~(\ref{eq:ME}), starting from an NFW fit to the dark matter density in the simulation at $T = 0$. To reduce uncertainties arising from fluctuations in the density profile around $T=0$, we average simulation outputs over the range $0.5~{\rm Gyr} \le t \le 1.5~{\rm Gyr}$ (as in Fig.~\ref{fig:Ein_main}). Model predictions are shown at $T = 0$ (fitting the averaged profile), and at $2, 4, 6, 8, 10$ and 12 Gyr, corresponding to times $1, 3, 5, 7, 9, 11$ and 13 Gyr.} \label{fig:core} \end{figure*} \subsection{Mass migration and core formation} \label{sec:massmod} The energy input from the fluctuating gas leads to the migration of halo particles from the inner radii, which decreases the enclosed mass, as shown in the left panel of Fig.~\ref{fig:core}.
Straight lines on the log-linear scale suggest a general exponential decrease in mass with time. A full examination of its origin in the context of a diffusion model would require a full Fokker-Planck formulation, using the full (first and second order) diffusion coefficients and explicitly including changes in the potential due to the evolving mass distribution (the full formula for the expected mass transfer in this context can be found, for example, in \citealt{El-Zant08}, Eq.~40). Here, we proceed heuristically in order to obtain a rough estimate. We suppose that the mass flux across energy surface $E$ changes the mass within it principally through the first order energy diffusion coefficient \begin{equation} D [\Delta E] = \frac{1}{M(< r_{\rm sat})} \frac{E_{\rm in} (r_{\rm sat})}{T}, \label{eq:Dcoef} \end{equation} describing the average rate of change of halo particle energy per unit mass due to the gas fluctuations. From equation~(\ref{eq:Ein}), this is of the order of $10~{\rm kpc^2~Gyr^{-3}}$ in the simulation at hand. The change in mass of particles with energy less than $E$ is \begin{equation} \frac{\partial M (< E)}{\partial t}~= -\frac{\partial M (<E)}{\partial E}~D~[\Delta E], \label{eq:Mflux} \end{equation} where $M (< E)$ is the mass in halo particles with energy less than $E$ and $\partial M (<E)/\partial E$ is the mass-weighted differential energy distribution \citep{BT}. Furthermore, we approximate this latter quantity by its average within $E$, such that \begin{equation} \frac{\partial M (<E)}{\partial E} \approx~a~\frac{M(<E) - M(0)}{E - E (0)}, \end{equation} where $E (0)$ is the energy of a particle at rest at the centre of the potential (so $E (0) = \Phi (0)$ and $M (0) = 0$), and $a$ is a constant numerical factor of order 1. This holds exactly inside pure power-law cusps: e.g., using formulas in \cite{El-Zant08}, one finds $a = (3 + \gamma) / (2 + \gamma)$ for $\rho \propto r^{\gamma}$ (thus $a = 2$ for a $1/r$ cusp; $a$ tends towards $1.5$ for flatter profiles, and is larger for steeper ones, diverging in the case of the singular isothermal sphere, where a potential zero point at the centre cannot be fixed as above due to logarithmic divergence). We define the energy relaxation time as \begin{equation} t_{\rm relax} (E) = D[\Delta E]^{-1}~[E - E (0)]. \label{eq:Erelax} \end{equation} Assuming that equation~(\ref{eq:Mflux}) is applicable in the diffusion limit, solving it (starting at time $T=0$) results in \begin{equation} M (< E) = M_0 (<E)~\exp\left[-a~T/t_{\rm relax}(E)\right], \label{eq:ME} \end{equation} where $M_0$ refers to the mass at time $T=0$. Within this picture, the numbers on the lines in the left panel of Fig.~\ref{fig:core}, denoted by $\alpha$, should correspond to $a/t_{\rm relax} (E)$, if $E$ is associated with particle energies inside the chosen radii. To make this correspondence, we fix $E = \langle E \rangle (r)$ to be the average specific energy at radius $r$ in the initial profile. As a first approximation, we simply associate this with the initial NFW profile of the configuration. The specific potential energy $\Phi (r) - \Phi (0)$ in this case is $4 \pi G \rho_s r_s^2 [1 - \ln (1+x)/x]$, with $x = r/r_s$, and $r_s$ and $\rho_s$ as in Table~\ref{table:halo}.
To obtain a simple form for the kinetic energy we make use of established empirical relations, according to which the pseudo phase-space density varies with radius approximately as $\rho/\langle v^2 \rangle^{3/2} \sim r^{-1.875}$ \citep{TayNav01}, and normalize the velocity variance to its value around $r = r_s$, such that $\langle v^2 \rangle \simeq 470~ {\rm km^2/s^2}$. One may then write
\begin{equation}
E - E (0) \simeq 4 \pi G \rho_s r_s^2 \left[1 + \frac{2^{4/3}}{10} \frac{\left(x^{0.875}\right)^{2/3}}{(1+x)^{4/3}} - \frac{\ln (1+x)}{x}\right].
\label{eq:Einits}
\end{equation}
Using this formula in conjunction with equation~(\ref{eq:Erelax}), and setting $a = 3.4$, we find $a/t_{\rm relax} = 0.034, 0.047, 0.071, 0.11, 0.17~{\rm Gyr}^{-1}$ at $r = 1, 0.5, 0.25, 0.125, 0.0625~{\rm kpc}$, respectively. These numbers agree to better than $20 \%$ with the values of $\alpha$ indicated on the fitting lines in the left panel of Fig.~\ref{fig:core}. The corresponding energy relaxation times are about $100, 72.3, 47.9, 30.9$ and $20 ~{\rm Gyr}$. A more careful analysis, taking into account that the quasi-steady state diffusive process is well established only after $t \gtrsim t_{\rm diff} = 1~{\rm Gyr}$, suggests $a \simeq 2.5$ (cf. below). The corresponding energy relaxation times are therefore lower by a factor of about 1.4 than those quoted above.

Even then, these relaxation times are still significantly larger than what is obtained from the velocity variance (equation~\ref{eq:relaxnum}). This is because $E - E(0)$ is generally larger than the average kinetic energy (from Eq.~\ref{eq:Einits}, by a factor of about four at $r_s$). Indeed, changes in kinetic energy lead to relatively little change in total energy. In fact, the relative change in $E_r$ at 1 kpc (as calculated from Eq.~\ref{eq:Eind}, starting at $T=0$) is $\Delta E_{r}/ E_{r} \simeq 0.2$ over 12.7 Gyr. Initially, the rate of relative change in energy is much larger at smaller radii (as illustrated by the differences in $E_{\rm in}/T$ inferred from Figs.~\ref{fig:Ein_main} and~\ref{fig:Lovsglb}). But it subsequently saturates, as the energy is redistributed in the system, with mass and energy flowing towards outer radii (Fig.~\ref{fig:Lovsglb}, left panel). When thus redistributed, the modest changes in total energy lead to a modification of the self-consistent potential, including the minimal possible energy in it, as discussed in Appendix~\ref{sec:cut}. This leads to core formation, as observed in the right panel of Fig.~\ref{fig:core}. There, we note that we find no evidence of the accelerated core formation, relative to the velocity relaxation time (eq.~\ref{eq:relaxnum}), that was found in~\cite{EZFC} when non-spherical modes were used in conjunction with the Hernquist-Ostriker code.

Having obtained the evolution of the enclosed mass within a given radius, it is also possible to define a theoretical density $\rho = (4 \pi r^2)^{-1}\, d M/d r$ and derive a closed form formula for it using Eqs.~(\ref{eq:ME}), (\ref{eq:Erelax}), and~(\ref{eq:Einits}). The result may then be compared with the dark matter density evolution in the simulation. More care is however required here in defining the initial energies in equation~(\ref{eq:Einits}), as we must start our comparison at $T = t - t_{\rm diff} =0$, i.e., when the diffusion limit at the basis of our model is valid (cf. the discussion in relation to Fig.~\ref{fig:Ein_main}); the definition of these initial energies is described after the following illustrative sketch.
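For orientation, the relaxation rates quoted above can be recovered, to within their stated accuracy, by directly evaluating equations~(\ref{eq:Erelax}) and~(\ref{eq:Einits}). The following minimal Python sketch does this; since Table~1 is not reproduced in this excerpt, it uses the $T=0$ fit parameters given below and the order-of-magnitude value $D[\Delta E] \approx 10~{\rm kpc^2~Gyr^{-3}}$, so the resulting rates are indicative rather than exact:
\begin{verbatim}
import numpy as np

G = 4.498e-6                 # kpc^3 M_sun^-1 Gyr^-2

# Stand-in parameters: the T = 0 NFW fit quoted in the text (Table 1 itself
# is not reproduced here) and the order-of-magnitude diffusion coefficient
# D[Delta E] ~ 10 kpc^2 Gyr^-3 from equation (Dcoef).
rho_s, r_s = 2.7e7, 1.17     # M_sun kpc^-3, kpc
D_E, a = 10.0, 3.4

def E_minus_E0(r):
    """Specific energy above the central potential, equation (Einits)."""
    x = r / r_s
    bracket = (1.0 + (2.0 ** (4.0 / 3.0) / 10.0)
               * x ** (0.875 * 2.0 / 3.0) / (1.0 + x) ** (4.0 / 3.0)
               - np.log(1.0 + x) / x)
    return 4.0 * np.pi * G * rho_s * r_s ** 2 * bracket

for r in [1.0, 0.5, 0.25, 0.125, 0.0625]:
    alpha = a * D_E / E_minus_E0(r)    # a / t_relax, equation (Erelax)
    print(f"r = {r:6.4f} kpc   alpha = a/t_relax ~ {alpha:.3f} Gyr^-1")
\end{verbatim}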
To define these initial energies, we average simulation outputs in the time range $0.5~ {\rm Gyr} \le t \le 1.5~ {\rm Gyr}$, and then fit the resulting dark matter density with an NFW profile. The corresponding parameters are $r_s = 1.17 ~{\rm kpc}$ and $\rho_s = 2.7 \times 10^7~ M_\odot/{\rm kpc^3}$. When the latter is adjusted by adding a gas fraction of about $0.075$ (assumed for simplicity to be also NFW with the same $r_s$, but taking into account the initial central gas expulsion), the multiplicative factor $\rho_s r_s^2$ in front of the bracket in equation~(\ref{eq:Einits}) remains approximately the same as in the case of the $t=0$ profile. After some trials, we found that values of $a$ in the range 2.5 to 2.8 provide reasonable approximations to the density evolution inferred from the simulation (perhaps surprisingly so, given the various simplifying assumptions of our mass transfer model). This is illustrated in the right panel of Fig.~\ref{fig:core}, where we compare the density evolution expected from our model (fixing $a = 2.5$) with the results from the simulation.

\section{Conclusion}

Potential fluctuations from feedback-driven gas can `heat' halo cusps, turning them into cores. This work aimed to quantify this process, by measuring the gas fluctuations and tracking the way they transfer energy to the central halo, forcing the outward migration of the dark matter. The interpretive framework we use is a model, first outlined in \citet{EZFC}, which predicts that these processes principally depend on the amplitude of the fluctuations, as measured by the normalisation of their power spectrum, and the average gas density. The result is a standard diffusion process, characterised by a linear temporal increase in the velocity variance and energy of halo particles as a result of their interaction with the fluctuating gas field. As shown previously (\citealp{EZFCH}), in this picture the effect of the fluctuations reduces to standard two body relaxation in the case of a white noise power spectrum. It may thus, in this limit, also describe halo heating {\it via} dynamical friction from a system of compact, monolithic massive clumps moving among much lighter dark matter particles.

To test this interpretative framework, we measure the density fluctuations in feedback-driven gas from a full hydrodynamic simulation of a model dwarf galaxy (Figs. \ref{fig:Gas_Map}-\ref{fig:densityprofiletime_all}), and obtain their density contrast power spectrum (Fig.~\ref{fig:powerspecavrg}). We find that the spectrum follows a power law in wave number ($\propto k^{-n}$), with an exponent $ 2 \lesssim n \lesssim 3 $, for the whole period of the simulation (spanning a Hubble time). Although the time-averaged density of the driven gas varies with radius, the variation is modest inside the initial NFW scale length, relative to a sharp drop outside. This suggests a characteristic density that may be used as input for the model. We also examine the velocity distribution of the gas and find that it is approximately fit by Maxwellians at larger radii (Appendix~\ref{Sec:Additional_Gauss}). The kinetic energy power spectrum is close to the Kolmogorov form over a large range of scales (Fig.~\ref{fig:velocitypowerspecfit}). This reinforces the picture of a fully turbulent medium.

With the input parameters directly measured from the simulation, we use our model to calculate the total energy transfer rate from the fluctuating gas to the central halo.
The result is compared with the actual energy input to the system of halo particles, as directly inferred from the simulation (Fig.~\ref{fig:Ein_main}). The two are found to generally agree over almost a Hubble time, displaying the linear increase expected of a steady diffusion process. We also examine the radial distribution of the energy input rate (Fig.~\ref{fig:Lovsglb}). We find that the energy transfer is indeed much better approximated as a global rather than a local process, in the sense that it depends on the average gas energy density within the core region rather than the local density at each radius.

The energy is initially transferred from the gas as kinetic energy to individual halo particles, but it is then redistributed through the self-consistent gravity of the system, a process through which the core replaces the cusp. The process comes with a low energy cutoff in the halo phase space distribution function, as particles migrate to higher energy levels. It is straightforward to link the level of this cutoff to the total energy input from the gas, and to the resulting change in the halo gravitational potential that comes with core formation (Appendix~\ref{sec:cut}).

The upward flow of energy is accompanied by an outward flow of mass. We empirically find an exponential decrease in halo mass with time, within a given radius, in the initial cusp. We then devise a simple approximate description of the mass flow, based on our model, from which the exponential form may be inferred. In this context, the exponential decay time scales with (and is of the order of) the local (energy) relaxation time, and the evolution of the corresponding theoretical density profile mimics that in the simulation (Fig.~\ref{fig:core}).

Strictly speaking, energy transfer {\it via} a standard diffusion process requires the gas force fluctuations to be normally distributed. We have verified this to be the case to a good approximation (even if the larger {\it density} fluctuations can be lognormal rather than Gaussian at small radii, cf. Appendix~\ref{Sec:Additional_Gauss}). Another assumption of the model is the use of the `sweeping' approximation of turbulence theory, whereby the statistics of the spatial fluctuations are transferred to the time domain through large scale flows. The general consistency of the spatial power spectrum parameters with those that fit the mass dispersion in the time domain (Fig.~\ref{fig:massdisper}) confirms that this may be an applicable approximation.

In general, the assumptions and predictions of the theoretical framework seem vindicated for the model galaxy studied here. This suggests a remarkably concise description of halo core formation from gaseous fluctuations, summarizing the effect of much of the complex `gastrophysics' in terms of the two principal parameters of gas density and fluctuation levels. Obvious extensions include considering different galaxy masses, as well as verifying that the model also predicts a lack of core formation in simulations that do not produce them. Success there would help delineate the particular elements of the physical `subgrid' input that are most crucial in producing the required conditions for core formation, and would allow one to examine how they compare with observations. It may also be possible to test the predictions regarding the level of fluctuations required for core formation directly from observations. In particular, it has already been possible to derive surface density power spectra for larger, relatively quiescent galaxies.
Estimating these for actively star-forming galaxies, and calibrating them with the three-dimensional density contrast power spectra entering into our model calculations, may provide a direct test of our picture of core formation through gaseous fluctuations.

\section*{Data Availability}

The simulation data underlying this article will be shared on reasonable request.

\bibliographystyle{mnras}
{ "timestamp": "2022-09-20T02:20:48", "yymm": "2209", "arxiv_id": "2209.08631", "language": "en", "url": "https://arxiv.org/abs/2209.08631" }
\section{Introduction}

Modeling in computer vision has long been dominated by convolutional neural networks (CNNs). Recently, transformer models from the field of natural language processing (NLP) \cite{DBLP:journals/corr/abs-1810-04805,NIPS2017-3f5ee243,10.1145/3437963.3441667} have attracted great interest from computer vision (CV) researchers. The Vision Transformer (ViT) \cite{DBLP:journals/corr/abs-2010-11929} model and its variants have gained state-of-the-art results on many core vision tasks \cite{Zhao2020CVPR,pmlr-v139-touvron21a}. The original ViT, inherited from NLP, first splits an input image into patches and appends a trainable class (CLS) token to the input patch tokens. The patches are then treated in the same way as tokens in NLP applications, with self-attention layers providing global information communication, and the output CLS token is finally used for prediction. Recent work \cite{DBLP:journals/corr/abs-2010-11929,Liu-2021-ICCV} shows that ViT outperforms state-of-the-art convolutional networks \cite{Huang-2018-CVPR} on large-scale datasets. However, when trained on smaller datasets, ViT usually underperforms its counterparts based on convolutional layers. The original ViT lacks inductive biases such as locality and translation equivariance, which leads to overfitting and data inefficiency of ViT models. To address this data inefficiency, numerous subsequent efforts have studied how to introduce the locality of CNN models into the ViT model to improve its scalability \cite{NEURIPS2021-4e0928de,DBLP:journals/corr/abs-2107-00641}. These methods typically re-introduce hierarchical architectures to compensate for the loss of non-locality, such as the Swin Transformer \cite{Liu-2021-ICCV}.

\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{DifferentAttentionMechanisms.pdf}
\caption{Illustration of different self-attention mechanisms in Transformer backbones. Our AEWin differs in two aspects. First, we split the heads into three groups and perform self-attention in the local window and along the horizontal and vertical axes simultaneously. Second, we set different token lengths for window attention and axial attention to achieve fine-grained local and coarse-grained global interactions, which yields a better trade-off between computation cost and capability.}
\label{DifferentAttentionMechanisms-flabel}
\end{figure}

Local self-attention hierarchical ViTs (LSAH-ViTs) have been demonstrated to address data inefficiency and alleviate model overfitting. However, LSAH-ViTs use window-based attention at shallow layers, losing the non-locality of the original ViT; this limits model capacity and hence scales unfavorably on larger data regimes such as ImageNet-21K \cite{NEURIPS2021-20568692}. To bridge the connection between windows, previous LSAH-ViT works propose specialized designs such as the ``haloing operation'' \cite{Vaswani-2021-CVPR} and ``shifted window'' \cite{Liu-2021-ICCV}. These approaches often need complex architectural designs, enlarge the receptive field quite slowly, and require stacking a great number of blocks to achieve global self-attention.

\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{SplitGroup.pdf}
\caption{Illustration of the parallel implementation of AEWin.
It is worthwhile to note that the token length of axial attention is only half of that of windowed attention, so as to set different granularities for local and global attention.}
\label{SplitGroup-flabel}
\end{figure}

When observing a scene, humans usually focus on a local region while attending to non-attentional regions at coarse granularity. Based on this observation, we present the Axially Expanded Window (AEWin) self-attention, which is illustrated in Figure \ref{DifferentAttentionMechanisms-flabel} and compared with existing self-attention mechanisms. Considering that the visual dependencies between nearby regions are usually stronger than those far away, we perform fine-grained self-attention within the local window and coarse-grained attention on the horizontal and vertical axes. We split the heads into three parallel groups, with the number of heads in each of the first two groups being half of that in the final group; the first two groups are used for self-attention on the horizontal and vertical axes respectively, and the final group is used for self-attention within the local window. It is worth noting that with the AEWin self-attention mechanism, the self-attention in the local window, the horizontal axis, and the vertical axis is computed in parallel, and this parallel strategy does not introduce extra computation cost. As shown in Figure \ref{SplitGroup-flabel}, the feature map focuses on its closest surroundings with long tokens and on the surroundings along its horizontal and vertical axes with short tokens, capturing coarse-grained visual dependencies. It thus has the ability to capture both short- and long-range visual dependencies efficiently. Benefiting from the fine-grained window self-attention and coarse-grained axial self-attention, our AEWin self-attention can better balance performance and computational cost compared to the existing local self-attention mechanisms shown in Figure \ref{DifferentAttentionMechanisms-flabel}.

Based on the proposed AEWin self-attention, we design a general vision transformer backbone with a hierarchical architecture, named AEWin Transformer. Our tiny variant AEWin-T achieves 83.6\% Top-1 accuracy on ImageNet-1K without any extra training data or labels.

\section{Related Work}

Transformers were proposed by Vaswani et al. \cite{NIPS2017-3f5ee243} for machine translation, and have since become the state-of-the-art method in many NLP tasks. Recently, the pioneering work ViT \cite{DBLP:journals/corr/abs-2010-11929} demonstrated that pure Transformer-based architectures can also achieve very competitive results. One challenge for vision transformer-based models is data efficiency. Although ViT \cite{DBLP:journals/corr/abs-2010-11929} can perform better than convolutional networks when hundreds of millions of images are available for pre-training, such a data requirement is not always practical. To improve data efficiency, many recent works have focused on introducing the locality and hierarchical structure of convolutional neural networks into ViT, proposing a series of local and hierarchical ViTs. The Swin Transformer \cite{Liu-2021-ICCV} computes attention within shifted windows in a hierarchical architecture. Nested ViT \cite{zhang2022nested} proposes a block aggregation module, which more easily achieves cross-block non-local information communication.
Focal ViT \cite{DBLP:journals/corr/abs-2107-00641} presents focal self-attention, in which each token attends to its closest surrounding tokens at fine granularity and to the tokens far away at coarse granularity, which can effectively capture both short- and long-range visual dependencies. Based on the local window, a series of local self-attention mechanisms with different shapes have been proposed in subsequent work. Axial self-attention \cite{DBLP:journals/corr/abs-1912-12180} and criss-cross attention \cite{Huang-2019-ICCV} achieve longer-range dependencies in the horizontal and vertical directions by performing self-attention within each single row or column of the feature map. CSWin \cite{Dong-2022-CVPR} proposes a cross-shaped window self-attention region comprising multiple rows and columns. Pale Transformer \cite{DBLP:journals/corr/abs-2112-14000} proposes Pale-Shaped self-Attention, which performs self-attention within a pale-shaped region to capture richer contextual information. The above attention mechanisms are either limited by their restricted window size or incur a high computation cost, and thus cannot achieve a good trade-off between computation cost and global-local interaction. In this paper, we propose a new hierarchical vision Transformer backbone by introducing axially expanded window self-attention. Focal ViT \cite{DBLP:journals/corr/abs-2107-00641} and CSWin \cite{Dong-2022-CVPR} are the works most closely related to our AEWin, which achieves a better trade-off between computation cost and global-local interaction than both.

\begin{figure*}[t]
\centering
\includegraphics[width=1.0\linewidth]{OverallArchitecture.pdf}
\caption{(a) The overall architecture of our AEWin Transformer. (b) The composition of each block.}
\label{OverallArchitecture-flabel}
\end{figure*}

\section{Method}

\subsection{Overall Architecture}

An overview of the AEWin-ViT architecture is presented in Figure \ref{OverallArchitecture-flabel} (a), which illustrates the tiny version. AEWin-ViT consists of four hierarchical stages; we follow the popular design of Swin-ViT \cite{Liu-2021-ICCV} to build a hierarchical architecture that captures multi-scale features, and alternately use shifted windows. Each stage contains a patch merging layer and multiple AEWin Transformer blocks. As the network gets deeper, the input features are spatially downsampled by a certain ratio through the patch merging layer, and the channel dimension is doubled to produce a hierarchical image representation. Specifically, the spatial downsampling ratio is set to 4 in the first stage and 2 in the last three stages, using the same patch merging layer as Swin-ViT. The outputs of the patch merging layer are fed into the subsequent AEWin Transformer block, and the number of tokens is kept constant. Finally, we apply a global average pooling step on the output of the last block to obtain the image representation vector for the final prediction.

\subsection{Axially Expanded Window Self-Attention}

LSAH-ViTs use window-based attention at shallow layers, losing the non-locality of the original ViT; this limits model capacity and hence scales unfavorably on larger data regimes. Existing works propose specialized designs such as the ``haloing operation'' \cite{Vaswani-2021-CVPR} and ``shifted window'' \cite{Liu-2021-ICCV} to communicate information between windows.
These approaches often need complex architectural designs, enlarge the receptive field quite slowly, and require stacking a great number of blocks to achieve global self-attention. To capture dependencies varying from short-range to long-range, inspired by the way humans observe scenes, we propose Axially Expanded Window Self-Attention (AEWin-Attention), which performs fine-grained self-attention within the local window and coarse-grained self-attention on the horizontal and vertical axes.

\noindent \textbf{Axially Expanded Windows.} Following the multi-head self-attention mechanism, the input feature $X\in {{R}^{(H\times W)\times C}}$ is first linearly projected to $K$ heads, and each head then performs local self-attention within the window, the horizontal axis, or the vertical axis. For horizontal axial self-attention, $X$ is evenly split into non-overlapping horizontal stripes $[{{X}^{1}},\cdots ,{{X}^{H}}]$, each containing $1\times W$ tokens. Formally, suppose the projected queries, keys and values of the ${{k}^{th}}$ head all have dimension ${{d}_{k}}$; then the output of the ${{k}^{th}}$ head's horizontal axis self-attention is defined as:
\begin{equation}
\begin{aligned}
& X=[{{X}^{1}},{{X}^{2}},\cdots ,{{X}^{H}}], \\
& Y_{k}^{i}=\text{MSA}({{X}^{i}}W_{k}^{Q},{{X}^{i}}W_{k}^{K},{{X}^{i}}W_{k}^{V}), \\
& \text{H-MS}{{\text{A}}_{k}}(X)=[Y_{k}^{1},Y_{k}^{2},\cdots ,Y_{k}^{H}] \\
\end{aligned}
\label{horizontalAttention-glabel}
\end{equation}
where ${{X}^{i}}\in {{R}^{(1\times W)\times C}}$, $i\in \left\{ 1,2,\cdots, H \right\}$, and $\text{MSA}$ indicates multi-head self-attention. $W_{k}^{Q}\in {{R}^{C\times {{d}_{k}}}}$, $W_{k}^{K}\in {{R}^{C\times {{d}_{k}}}}$, $W_{k}^{V}\in {{R}^{C\times {{d}_{k}}}}$ represent the projection matrices of the queries, keys and values for the ${{k}^{th}}$ head, respectively, and ${{d}_{k}}=C/K$. The vertical axial self-attention can be derived similarly, and its output for the ${{k}^{th}}$ head is denoted as $\text{V-MS}{{\text{A}}_{k}}(X)$.

For windowed self-attention, $X$ is evenly split into non-overlapping local windows $[X_{m}^{1},\cdots ,X_{m}^{N}]$ with height and width equal to $M$, each containing $M\times M$ tokens. Based on the above analysis, the output of the windowed self-attention for the ${{k}^{th}}$ head is defined as:
\begin{equation}
\begin{aligned}
& {{X}_{m}}=[X_{m}^{1},X_{m}^{2},\cdots ,X_{m}^{N}], \\
& Y_{k}^{i}=\text{MSA}(X_{m}^{i}W_{k}^{Q},X_{m}^{i}W_{k}^{K},X_{m}^{i}W_{k}^{V}), \\
& \text{W-MS}{{\text{A}}_{k}}(X)=[Y_{k}^{1},Y_{k}^{2},\cdots ,Y_{k}^{N}] \\
\end{aligned}
\label{windowAttention-glabel}
\end{equation}
where $N=(H\times W)/(M\times M)$ and $M$ defaults to 7.

\noindent \textbf{Parallel implementation of different granularities.} We split the $K$ heads into three parallel groups, with $K/4$ heads in each of the first two groups and $K/2$ heads in the last group, thus building in different granularities for local and global attention, as shown in Figure \ref{SplitGroup-flabel}. The first group of heads performs horizontal axis self-attention, the second group performs vertical axis self-attention, and the third group performs local window self-attention. Finally, the outputs of these three parallel groups are concatenated back together.
\begin{equation}
\text{hea}{{\text{d}}_{k}}=\begin{cases}
\text{H-MS}{{\text{A}}_{k}}(X) & k=1,\cdots ,K/4 \\
\text{V-MS}{{\text{A}}_{k}}(X) & k=K/4+1,\cdots ,K/2 \\
\text{W-MS}{{\text{A}}_{k}}(X) & k=K/2+1,\cdots ,K \\
\end{cases}
\label{mergingtoken-glabel}
\end{equation}
\begin{equation}
\text{AEWin}(X)=\text{Concat}(\text{hea}{{\text{d}}_{1}},\cdots ,\text{hea}{{\text{d}}_{K}}){{W}^{O}}
\label{finalOutMlp-glabel}
\end{equation}
where ${{W}^{O}}\in {{R}^{C\times C}}$ is the commonly used projection matrix that integrates the output tokens of the three groups. Compared to a step-by-step implementation that applies axial and windowed self-attention separately, such a parallel mechanism has a lower computation complexity and can realize different granularities by carefully designing the number of heads in the different groups.

\noindent \textbf{Complexity Analysis.} Given an input feature of size $H\times W\times C$ and window size $(M,M)$, with $M$ set to 7 by default, the standard global self-attention has a computational complexity of
\begin{equation}
\Omega (\text{Global})=4HW{{C}^{2}}+2{{(HW)}^{2}}C
\label{GlobalComplexity-glabel}
\end{equation}
whereas our proposed AEWin-Attention under the parallel implementation has a computational complexity of
\begin{equation}
\Omega (\text{AEWin})=4HW{{C}^{2}}+HWC\left(\frac{1}{2}H+\frac{1}{2}W+{{M}^{2}}\right)
\label{AEWinComplexity-glabel}
\end{equation}
which clearly alleviates the computation and memory burden compared with the global one, since $2HW\gg \frac{1}{2}H+\frac{1}{2}W+{{M}^{2}}$ always holds.

\subsection{AEWin Transformer Block}

Equipped with the above self-attention mechanism, the AEWin Transformer block is formally defined as:
\begin{equation}
\begin{aligned}
& {\hat{X}^{l}}=\text{AEWin-Attention}(\text{LN}({{X}^{l-1}}))+{{X}^{l-1}}, \\
& {{X}^{l}}=\text{MLP}(\text{LN}({\hat{X}^{l}}))+{\hat{X}^{l}} \\
\end{aligned}
\label{AEWinBlock-glabel}
\end{equation}
where ${\hat{X}^{l}}$ and ${{X}^{l}}$ denote the output features of the $\mathsf{AEWin}$ module and the $\mathsf{MLP}$ module of block $l$, respectively. When computing self-attention, we follow Swin-ViT and add a relative position bias $B$ to each head when computing similarity.

\section{Experiments}

We first compare our AEWin Transformer with state-of-the-art Transformer backbones on ImageNet-1K \cite{5206848} for image classification. We then compare the performance of AEWin and state-of-the-art Transformer backbones on the small datasets Caltech-256 \cite{griffin2007caltech} and Mini-ImageNet \cite{krizhevsky2012imagenet}. Finally, we perform comprehensive ablation studies to analyze each component of the AEWin Transformer.

\subsection{Experiment Settings}

\noindent \textbf{Dataset}. For image classification, we benchmark the proposed AEWin Transformer on ImageNet-1K, which contains 1.28M training images and 50K validation images from 1,000 classes. To explore the performance of the AEWin Transformer on small datasets, we also conducted experiments on Caltech-256 and Mini-ImageNet. Caltech-256 has 257 classes with more than 80 images in each class. Mini-ImageNet contains a total of 60,000 images from 100 classes.

\noindent \textbf{Implementation details}. This setting mostly follows \cite{Liu-2021-ICCV}.
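For concreteness, the AEWin attention of Eqs.~(\ref{horizontalAttention-glabel})--(\ref{finalOutMlp-glabel}) and the block of Eq.~(\ref{AEWinBlock-glabel}) can be sketched as follows. This is a simplified, hypothetical PyTorch-style illustration (not the actual implementation): it omits the relative position bias and the batch dimension, uses hypothetical class names, and assumes $H$ and $W$ are divisible by $M$.
\begin{verbatim}
import torch
import torch.nn as nn

class AEWinAttention(nn.Module):
    # Heads: K/4 on rows (H-MSA), K/4 on columns (V-MSA), K/2 in MxM windows.
    def __init__(self, dim, num_heads=8, M=7):
        super().__init__()
        assert num_heads % 4 == 0 and dim % num_heads == 0
        self.h, self.d, self.M = num_heads, dim // num_heads, M
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)   # W^O in Eq. (4)

    @staticmethod
    def attend(q, k, v):                  # scaled dot-product attention
        att = (q @ k.transpose(-2, -1)) / q.shape[-1] ** 0.5
        return att.softmax(-1) @ v

    def forward(self, x):                 # x: (H, W, C); no batch dimension
        H, W, C = x.shape
        g, d, M = self.h // 4, self.d, self.M
        qkv = self.qkv(x).reshape(H, W, 3, self.h, d).permute(2, 3, 0, 1, 4)
        q, k, v = qkv[0], qkv[1], qkv[2]  # each: (heads, H, W, d)
        # H-MSA: each of the H rows is a stripe of W tokens.
        o1 = self.attend(q[:g], k[:g], v[:g])
        # V-MSA: transpose so that each column becomes a stripe of H tokens.
        o2 = self.attend(q[g:2*g].transpose(1, 2), k[g:2*g].transpose(1, 2),
                         v[g:2*g].transpose(1, 2)).transpose(1, 2)
        # W-MSA: partition the remaining 2g heads into MxM windows.
        def win(t):                       # (2g, H, W, d) -> windows of M*M tokens
            t = t[2*g:].reshape(2*g, H // M, M, W // M, M, d)
            return t.permute(0, 1, 3, 2, 4, 5).reshape(2*g, H//M, W//M, M*M, d)
        o3 = self.attend(win(q), win(k), win(v))
        o3 = o3.reshape(2*g, H//M, W//M, M, M, d).permute(0, 1, 3, 2, 4, 5)
        o3 = o3.reshape(2*g, H, W, d)
        # Concatenate the three groups along the head axis and project.
        y = torch.cat([o1, o2, o3], 0).permute(1, 2, 0, 3).reshape(H, W, C)
        return self.proj(y)

class AEWinBlock(nn.Module):
    # Pre-LN block: x = x + AEWin(LN(x)); x = x + MLP(LN(x)).
    def __init__(self, dim, num_heads=8, M=7, mlp_ratio=4):
        super().__init__()
        self.n1, self.n2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.attn = AEWinAttention(dim, num_heads, M)
        self.mlp = nn.Sequential(nn.Linear(dim, mlp_ratio * dim), nn.GELU(),
                                 nn.Linear(mlp_ratio * dim, dim))

    def forward(self, x):
        x = x + self.attn(self.n1(x))
        return x + self.mlp(self.n2(x))

# Example: a 14x14 feature map with 96 channels (14 divisible by M = 7).
y = AEWinBlock(dim=96, num_heads=8)(torch.randn(14, 14, 96))
\end{verbatim}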
We use the PyTorch toolbox \cite{paszke2019pytorch} to implement all our experiments. We employ an AdamW \cite{kingma2014adam} optimizer for 300 epochs, using a cosine decay learning rate scheduler and 20 epochs of linear warm-up. A batch size of 256, an initial learning rate of 0.001, and a weight decay of 0.05 are used. ViT-B/16 uses an image size of 384 and the others use 224. We include most of the augmentation and regularization strategies of \cite{Liu-2021-ICCV} in training.

\subsection{Image Classification on ImageNet-1K}

\begin{table}[h]
\centering
\caption{Comparison of different models on ImageNet-1K.}
\resizebox{\linewidth}{!}{
\begin{tabular}{l|ccc|c}
\hline
Method & Image Size & Param. & FLOPs & Top-1 acc. \\
\hline
ViT-B \cite{DBLP:journals/corr/abs-2010-11929} & ${{384}^{2}}$ & 86M & 55.4G & 77.9 \\
\hline
Swin-T \cite{Liu-2021-ICCV} & ${{224}^{2}}$ & 29M & 4.5G & 81.3 \\
Swin-B \cite{Liu-2021-ICCV} & ${{224}^{2}}$ & 88M & 15.4G & 83.3 \\
\hline
Pale-T \cite{DBLP:journals/corr/abs-2112-14000} & ${{224}^{2}}$ & 22M & 4.2G & 83.4 \\
Pale-B \cite{DBLP:journals/corr/abs-2112-14000} & ${{224}^{2}}$ & 85M & 15.6G & 84.9 \\
\hline
CSWin-T \cite{Dong-2022-CVPR} & ${{224}^{2}}$ & 23M & 4.3G & 82.7 \\
CSWin-B \cite{Dong-2022-CVPR} & ${{224}^{2}}$ & 78M & 15.0G & 84.2 \\
\hline
AEWin-T (ours) & ${{224}^{2}}$ & 23M & 4.0G & \textbf{83.6} \\
AEWin-B (ours) & ${{224}^{2}}$ & 78M & 14.6G & \textbf{85.0} \\
\hline
\end{tabular}
}
\label{ImageNet-Top1}
\end{table}

Table \ref{ImageNet-Top1} compares the performance of our AEWin Transformer with state-of-the-art Vision Transformer backbones on ImageNet-1K. Compared to ViT-B, our AEWin-T model is +5.7\% better, with much lower computation complexity. Meanwhile, our AEWin Transformer variants outperform the state-of-the-art Transformer-based backbones, with AEWin-T +0.9\% higher than the most closely related CSWin-T. The AEWin Transformer has the lowest computation complexity of all models in Table \ref{ImageNet-Top1}. For example, AEWin-T achieves 83.6\% Top-1 accuracy with only 4.0G FLOPs. For the base model setting, our AEWin-B also achieves the best performance.

\begin{table}[h]
\centering
\caption{Comparison of different models on Caltech-256.}
\resizebox{\linewidth}{!}{
\begin{tabular}{l|ccc|c}
\hline
Method & Image Size & Param. & FLOPs & Top-1 acc. \\
\hline
ViT-B \cite{DBLP:journals/corr/abs-2010-11929} & ${{384}^{2}}$ & 86M & 55.4G & 37.6 \\
\hline
Swin-T \cite{Liu-2021-ICCV} & ${{224}^{2}}$ & 29M & 4.5G & 43.3 \\
Swin-B \cite{Liu-2021-ICCV} & ${{224}^{2}}$ & 88M & 15.4G & 46.7 \\
\hline
Pale-T \cite{DBLP:journals/corr/abs-2112-14000} & ${{224}^{2}}$ & 22M & 4.2G & 45.2 \\
Pale-B \cite{DBLP:journals/corr/abs-2112-14000} & ${{224}^{2}}$ & 85M & 15.6G & 47.1 \\
\hline
CSWin-T \cite{Dong-2022-CVPR} & ${{224}^{2}}$ & 23M & 4.3G & 47.7 \\
CSWin-B \cite{Dong-2022-CVPR} & ${{224}^{2}}$ & 78M & 15.0G & 48.5 \\
\hline
AEWin-T (ours) & ${{224}^{2}}$ & 23M & 4.0G & \textbf{48.6} \\
AEWin-B (ours) & ${{224}^{2}}$ & 78M & 14.6G & \textbf{49.3} \\
\hline
\end{tabular}
}
\label{Caltech-256-Top1}
\end{table}

\subsection{Image Classification on Caltech-256 and Mini-ImageNet}

We show the performance of ViTs on small datasets in Table \ref{Caltech-256-Top1} and Table \ref{Mini-ImageNet-Top1}. It is known that ViTs usually perform poorly on such tasks, as they typically require large datasets to be trained on.
The models that perform well on large-scale ImageNet do not necessarily perform well on the small-scale Mini-ImageNet and Caltech-256; e.g., ViT-B has a top-1 accuracy of 58.3\% and Swin-B has a top-1 accuracy of 67.4\% on Mini-ImageNet, which suggests that ViTs are more challenging to train with less data. Our proposed AEWin can significantly improve data efficiency and performs well on small datasets such as Caltech-256 and Mini-ImageNet. Compared with CSWin-B, AEWin-B improves the Top-1 accuracy by 0.8\% on Caltech-256 and 0.7\% on Mini-ImageNet.

\begin{table}[h]
\centering
\caption{Comparison of different models on Mini-ImageNet.}
\resizebox{\linewidth}{!}{
\begin{tabular}{l|ccc|c}
\hline
Method & Image Size & Param. & FLOPs & Top-1 acc. \\
\hline
ViT-B \cite{DBLP:journals/corr/abs-2010-11929} & ${{384}^{2}}$ & 86M & 55.4G & 58.3 \\
\hline
Swin-T \cite{Liu-2021-ICCV} & ${{224}^{2}}$ & 29M & 4.5G & 66.3 \\
Swin-B \cite{Liu-2021-ICCV} & ${{224}^{2}}$ & 88M & 15.4G & 67.4 \\
\hline
Pale-T \cite{DBLP:journals/corr/abs-2112-14000} & ${{224}^{2}}$ & 22M & 4.2G & 67.4 \\
Pale-B \cite{DBLP:journals/corr/abs-2112-14000} & ${{224}^{2}}$ & 85M & 15.6G & 68.5 \\
\hline
CSWin-T \cite{Dong-2022-CVPR} & ${{224}^{2}}$ & 23M & 4.3G & 66.8 \\
CSWin-B \cite{Dong-2022-CVPR} & ${{224}^{2}}$ & 78M & 15.0G & 68.4 \\
\hline
AEWin-T (ours) & ${{224}^{2}}$ & 23M & 4.0G & \textbf{68.2} \\
AEWin-B (ours) & ${{224}^{2}}$ & 78M & 14.6G & \textbf{69.1} \\
\hline
\end{tabular}
}
\label{Mini-ImageNet-Top1}
\end{table}

\subsection{Ablation Study}

In this section, we compare AEWin with existing self-attention mechanisms. For a fair comparison, we use Swin-T as the backbone and only change the self-attention mechanism. As shown in Table \ref{differentMechanisms}, our AEWin self-attention mechanism performs better than the existing self-attention mechanisms.

\begin{table}[h]
\centering
\caption{Comparison of different self-attention mechanisms.}
\resizebox{\linewidth}{!}{
\begin{tabular}{l|c}
\hline
Attention mode & ImageNet-1K Top-1 acc. \\
\hline
Shifted Window \cite{Liu-2021-ICCV} & 81.3 \\
Sequential Axial \cite{DBLP:journals/corr/abs-1912-12180} & 81.5 \\
Criss-Cross \cite{Huang-2019-ICCV} & 81.7 \\
Pale \cite{DBLP:journals/corr/abs-2112-14000} & 82.5 \\
Cross-shaped window \cite{Dong-2022-CVPR} & 82.2 \\
Axially expanded window & \textbf{83.1} \\
\hline
\end{tabular}}
\label{differentMechanisms}
\end{table}

\section{Conclusions}

This work proposes a new efficient self-attention mechanism, called axially expanded window attention (AEWin-Attention). Compared with previous local self-attention mechanisms, AEWin-Attention simulates the way humans observe a scene by performing fine-grained attention locally and coarse-grained attention in non-attentional regions. The different granularities are realized by assigning different numbers of heads to the groups, and the parallel computation of the three groups further improves the efficiency of AEWin. Based on the proposed AEWin-Attention, we develop a Vision Transformer backbone, called AEWin Transformer, which achieves state-of-the-art performance on ImageNet-1K for image classification.

\bibliographystyle{named}
{ "timestamp": "2022-09-20T02:23:50", "yymm": "2209", "arxiv_id": "2209.08726", "language": "en", "url": "https://arxiv.org/abs/2209.08726" }
\section{Introduction}\label{sec:intro}

Linear programming (LP) is one of the most useful tools available to theoreticians and practitioners throughout science and engineering. It has been extensively used to solve various problems in a wide range of areas, including operations research, engineering, economics, and even more abstract mathematical areas such as combinatorics. In machine learning and numerical optimization, LP appears in numerous settings, including $\ell_1$-regularized SVMs~\citep{zhu20041}, basis pursuit (BP)~\citep{yang2011alternating}, sparse inverse covariance matrix estimation (SICE)~\citep{yuan2010high}, nonnegative matrix factorization (NMF)~\citep{recht2012factoring}, MAP inference~\citep{meshi2011alternating}, adversarial deep learning~\citep{weng2018towards,wong2018provable}, etc. Not surprisingly, designing and analyzing LP algorithms is a topic of paramount importance in computer science and applied mathematics.

The first algorithm for general-purpose LPs was the famous \textit{simplex algorithm}, proposed by~\citet{dantzig1951maximization}. It worked well in practice, but was shown to have exponential worst-case running time~\citep{klee1972good}. The first polynomial time algorithm for general LPs was the \emph{ellipsoid method}~\citep{khachiyan1979polynomial}, which is rather slow in practice compared to the simplex algorithm. This motivated further research on LP algorithms that are efficient in both theory and practice. One of the most successful paradigms for solving LPs is the family of Interior Point Methods (IPMs), pioneered by Karmarkar in the mid 1980s~\citep{karmarkar84}. Path-following IPMs (also called central-path algorithms) and, in particular, long-step path-following IPMs, are among the most practical approaches for solving linear programs. See Section~\ref{sxn:comparison} for a detailed overview of recent work on path-following IPMs.

Consider the standard form of the primal LP problem:
\begin{flalign}
\min\,\mathbf{c}^\top\mathbf{x}\,,\text{ subject to }\mathbf{A}\mathbf{x}=\mathbf{b}\,,\mathbf{x}\ge \zero\,,\label{eq:primal}
\end{flalign}
where $\mathbf{A}\in\RR{m}{n}$, $\mathbf{b}\in\R{m}$, and $\mathbf{c}\in\R{n}$ are the inputs, and $\mathbf{x}\in\R{n}$ is the vector of the primal variables. The associated dual problem is
\begin{flalign}
\max\,\mathbf{b}^\top\mathbf{y}\,,\text{ subject to }\mathbf{A}^\top\mathbf{y}+\mathbf{s}=\mathbf{c}\,,\mathbf{s}\ge\zero\,,\label{eq:dual}
\end{flalign}
where $\mathbf{y}\in\R{m}$ and $\mathbf{s}\in\R{n}$ are the vectors of the dual and slack variables, respectively. Triplets $(\mathbf{x}, \mathbf{y}, \mathbf{s})$ that satisfy both eqns.~\eqref{eq:primal} and \eqref{eq:dual} are called \emph{primal-dual solutions}. Path-following IPMs typically converge towards a primal-dual solution by operating as follows: given the current iterate $(\mathbf{x}^{k},\mathbf{y}^{k},\mathbf{s}^{k})$, they compute the Newton search direction $(\Delta\mathbf{x},\Delta\mathbf{y},\Delta\mathbf{s})$ and update the current iterate by taking a step along the search direction. To compute the search direction, one standard approach~\citep{NW06} involves solving the \emph{normal equations}\footnote{Another widely used approach is to solve the augmented system~\citep{NW06}.
This approach is less relevant for this paper.}:
\begin{flalign}
\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\Delta\mathbf{y}=~&\mathbf{p}.\label{eq:normal}
\end{flalign}
Here, $\mathbf{D} = \mathbf{X}^{1/2}\mathbf{S}^{-1/2}$ is a diagonal matrix, $\mathbf{X},\mathbf{S}\in\RR{n}{n}$ are diagonal matrices whose $i$-th diagonal entries are equal to $\mathbf{x}_i$ and $\mathbf{s}_i$, respectively, and $\mathbf{p} \in \R{m}$ is a vector whose exact definition is given in eqn.~(\ref{eqn:pdef})\footnote{The superscript $k$ in eqn.~(\ref{eqn:pdef}) simply indicates iteration count and is omitted here for notational simplicity.}. Given $\Delta\mathbf{y}$, computing $\Delta \mathbf{s}$ and $\Delta \mathbf{x}$ only involves matrix-vector products.

The core computational bottleneck in IPMs is the need to solve the linear system of eqn.~(\ref{eq:normal}) at each iteration. This leads to two key challenges. First, for high-dimensional matrices $\mathbf{A}$, solving the linear system is computationally prohibitive. Most implementations of IPMs use a \emph{direct solver}; see Chapter 6 of~\citep{NW06}. However, if $\mathbf{A}\mathbf{D}^2\mathbf{A}^\top$ is large and dense, direct solvers are computationally impractical. If $\mathbf{A}\mathbf{D}^2\mathbf{A}^\top$ is sparse, specialized direct solvers have been developed, but these do not apply to many LP problems, especially those arising in machine learning applications, due to irregular sparsity patterns. Second, an alternative to direct solvers is the use of iterative solvers, but the situation is further complicated by conditioning: as IPM algorithms approach the optimal primal-dual solution, the diagonal matrix $\mathbf{D}$ becomes ill-conditioned, and so does the matrix $\mathbf{A}\mathbf{D}^2\mathbf{A}^\top$. Additionally, using approximate solutions for the linear system of eqn.~(\ref{eq:normal}) causes certain invariants, which are crucial for guaranteeing the convergence of IPMs, to be violated; see Section~\ref{sxn:contrib} for details.

In this paper, we address the aforementioned challenges, for the special case where $m \ll n$, i.e., the number of constraints is much smaller than the number of variables; see Section~\ref{sxn:extensions} for a generalization. This is a common setting in many applications of LP solvers. For example, in machine learning, $\ell_1$-SVMs and basis pursuit problems often exhibit such structure when the number of available features ($n$) is larger than the number of objects ($m$). Indeed, this setting has been of interest in recent work on LPs \citep{Donoho05,Bienstock06,LondonAAAI2018}. For simplicity of exposition, we also assume that the constraint matrix $\mathbf{A}$ has full rank, equal to $m$.

First, we propose and analyze two Krylov subspace-based solvers, namely preconditioned Conjugate Gradient (CG) and preconditioned Chebyshev iteration, for the normal equations of eqn.~(\ref{eq:normal}), using matrix sketching constructions from the Randomized Linear Algebra (RLA) literature. We develop a preconditioner for $\mathbf{A}\mathbf{D}^2\mathbf{A}^\top$ using matrix sketching which allows us to prove strong convergence guarantees for the \textit{residual} of both CG and the Chebyshev iteration. Second, building upon the work of~\citet{Mon03}, we propose and analyze a provably accurate long-step IPM algorithm. Our framework works for both \emph{feasible} and \emph{infeasible} starting points.
The proposed IPM solves the normal equations using iterative solvers. We note that a non-trivial concern is that the use of iterative solvers and matrix sketching tools implies that the normal equations at each iteration will be solved only approximately. In our proposed IPM framework, we develop a novel way to \textit{correct} for the error induced by the approximate solution in order to guarantee convergence. Importantly, this correction step is relatively computationally light, unlike a related step previously proposed by~\citet{Mon03}, which serves the same purpose but is computationally inefficient. Third, we empirically show that our algorithm performs well in practice. We consider solving LPs that arise from $\ell_1$-regularized SVMs and test them on a variety of synthetic and real-world data sets. Several extensions of our work are discussed in Section~\ref{sxn:extensions}.

\subsection{Our contributions}\label{sxn:contrib}

Our point of departure in this work is the introduction of preconditioned, iterative solvers for solving eqn.~(\ref{eq:normal}). Preconditioning is used to address the ill-conditioning of the matrix $\mathbf{A}\mathbf{D}^2\mathbf{A}^\top$. Iterative solvers allow the computation of approximate solutions using only matrix-vector products, while avoiding matrix inversion, Cholesky or LU factorizations, etc. A preconditioned formulation of eqn.~(\ref{eq:normal}) is:
\begin{flalign}
\mathbf{Q}^{-1}\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\Delta\mathbf{y}=\mathbf{Q}^{-1}\mathbf{p},
\label{eq:precond}
\end{flalign}
where $\mathbf{Q} \in \mathbb{R}^{m \times m}$ is the preconditioning matrix; $\mathbf{Q}$ should be easily invertible (see~\citep{axelsson1984finite,golub2013matrix} for background). An alternative yet equivalent formulation of eqn.~(\ref{eq:precond}), which is more amenable to theoretical analysis, is
\begin{flalign}
\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{z}=~\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p},\label{eq:precond_alt}
\end{flalign}
where $\mathbf{z}\in\R{m}$ is a vector such that $\Delta\mathbf{y}=\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{z}$. Note that the matrix on the left-hand side of the above equation is always symmetric, which is not necessarily the case for eqn.~\eqref{eq:precond}. We do emphasize that one can use eqn.~\eqref{eq:precond} in the actual implementation of the preconditioned solver; eqn.~(\ref{eq:precond_alt}) is much more useful in theoretical analyses.

Recall that we focus on the special case where $\mathbf{A} \in \mathbb{R}^{m \times n}$ has $m \ll n$, i.e., it is a short-and-fat matrix. Our first contribution starts with the design and analysis of a preconditioner for the Conjugate Gradient solver. The preconditioner satisfies, with high probability, the following bound:
\begin{flalign}\label{eq:pdcond1}
\frac{2}{2+\zeta} \leq \sigma^2_{\min}(\mathbf{Q}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}) \leq \sigma^2_{\max}(\mathbf{Q}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}) \leq \frac{2}{2-\zeta},
\end{flalign}
for some error parameter $\zeta \in [0,1]$. In the above, $\sigma_{\min}(\cdot)$ and $\sigma_{\max}(\cdot)$ denote the smallest and largest singular values of the matrix in parentheses. The above condition says that the preconditioner effectively reduces the condition number of $\mathbf{A}\mathbf{D}$ to a constant. We note that the particular form of the lower and upper bounds in eqn.~(\ref{eq:pdcond1}) was chosen to simplify our derivations.
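To illustrate the effect of such a preconditioner, the following minimal Python sketch builds a toy $\mathbf{Q}$ by sketching $\mathbf{A}\mathbf{D}$ from the right and checks a bound of the form of eqn.~(\ref{eq:pdcond1}). The Gaussian sketch used here is a hypothetical stand-in chosen for brevity; the construction actually analyzed in this paper is described in Section~\ref{sxn:PCG}.
\begin{verbatim}
import numpy as np

# Toy sizes: m constraints, n >> m variables; s is the sketch size.
m, n, s = 50, 5000, 1000
rng = np.random.default_rng(0)

A = rng.standard_normal((m, n))
d = rng.uniform(1e-3, 1e3, size=n)     # diagonal of D: badly conditioned
AD = A * d                             # A @ diag(d)

# Sketch AD from the right (a Gaussian sketch as a simple stand-in for the
# RLA constructions used in the paper) and set Q = (AD S)(AD S)^T.
S = rng.standard_normal((n, s)) / np.sqrt(s)
W = AD @ S
U, sig, _ = np.linalg.svd(W, full_matrices=False)
Q_inv_half = (U / sig) @ U.T           # Q^{-1/2} = U diag(1/sig) U^T

# Check: the squared singular values of Q^{-1/2} A D should lie in a
# narrow interval around one, regardless of how ill-conditioned D is.
sv2 = np.linalg.svd(Q_inv_half @ AD, compute_uv=False) ** 2
print(sv2.min(), sv2.max())
\end{verbatim}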
RLA matrix-sketching techniques allow us to construct preconditioners satisfying eqn.~(\ref{eq:pdcond1}) for all short-and-fat matrices, \textit{and} such preconditioners can be inverted efficiently. Such constructions go back to the work of~\citet{Avron2010}; see Section~\ref{sxn:PCG} for details on the construction of $\mathbf{Q}$ and its inverse. Importantly, given such a preconditioner, we then prove that the resulting CG iterative solver satisfies
\begin{flalign}
\|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\mathbf{Q}^{-\nicefrac{1}{2}}\tilde{\mathbf{z}}^t-\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p}\|_2\leq \zeta^t \|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p}\|_2.
\label{eq:pdcond2}
\end{flalign}
Here $\tilde{\mathbf{z}}^t$ is the approximate solution returned by the CG iterative solver after $t$ iterations. In words, the above inequality states that the \textit{residual} achieved after $t$ iterations of the CG iterative solver drops exponentially fast. Given eqn.~\eqref{eq:pdcond1}, we derive eqn.~\eqref{eq:pdcond2} using the monotonic decrease of the preconditioned residual norms of CG. To the best of our knowledge, such monotonicity of the residual error is not known in the CG literature: indeed, it is well-known that the residual error of CG may oscillate~\citep{fong2012cg}, even in cases where the energy norm of the solution error decreases monotonically. However, we prove that if the preconditioner is sufficiently good, i.e., it satisfies the constraint of eqn.~\eqref{eq:pdcond1}, then the residual error decreases monotonically as well, resulting in eqn.~\eqref{eq:pdcond2}. This is slightly better than the residual-norm bound derived directly from the energy norm of the solution error, which is the standard bound for CG. In the latter case, the bound in eqn.~\eqref{eq:pdcond2} would have an additional constant factor involving $\kappa(\mathbf{Q}^{-1/2}\mathbf{A}\mathbf{D})$, the condition number of the preconditioned matrix. Using the aforementioned monotonicity of the residual norms, we are able to get rid of that constant factor. In addition, such a monotonic decrease of the residual norms may also be of independent interest in the CG literature. See Section~\ref{sxn:cg} for details.

In addition, we also analyze another popular Krylov subspace-based solver, namely \emph{Chebyshev iteration}~\citep{barrett1994templates,gutknecht2002chebyshev}. This method avoids the computation of inner products, which are typically needed for CG and other non-stationary methods. Inner products are communication-intensive in parallel or distributed settings and, as such, are detrimental to performance in such setups. However, there is a trade-off: in order to avoid the computation of inner products, Chebyshev iteration requires adequate knowledge of the spectrum of the coefficient matrix $\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\mathbf{Q}^{-\nicefrac{1}{2}}$, which, in our context, is nothing but the condition in eqn.~\eqref{eq:pdcond1}. Therefore, given eqn.~\eqref{eq:pdcond1}, we prove that Chebyshev iteration also satisfies eqn.~\eqref{eq:pdcond2}.

Our second contribution is the analysis of a novel variant of a long-step IPM algorithm proposed by~\citet{Mon03}.
First, we analyze the feasible version, which starts from a strictly feasible point and stays feasible across all iterations -- namely, any point $(\mathbf{x}^k,\mathbf{y}^k,\mathbf{s}^k)$ with $(\mathbf{x}^k,\mathbf{s}^k)>\zero$ that is both primal and dual feasible, \emph{i.e.}, $\mathbf{A}\mathbf{x}^k=\mathbf{b}$ and $\mathbf{A}^\top\mathbf{y}^k+\mathbf{s}^k=\mathbf{c}$. However, the use of an approximate solver prevents the iterates $(\mathbf{x}^k,\mathbf{y}^k,\mathbf{s}^k)$ from satisfying the above primal and dual feasibility constraints \emph{exactly} right after the first iteration of the IPM. In order to account for the error caused by the CG solver and to push the iterates back to feasibility, \citet{Mon03} introduced a perturbation vector $\mathbf{v}$ which needs to satisfy a certain linear invariant exactly. Again, we use RLA matrix sketching principles to propose an efficient construction for $\mathbf{v}$ that provably satisfies the invariant. Finally, we combine the above two primitives to prove that Algorithm~\ref{algo:iipm} in Section~\ref{sxn:IIPM} satisfies the following theorem.
\begin{theorem}\label{thm:2f}
Let $0 \leq \epsilon \leq 1$ be an accuracy parameter. Consider the long-step feasible IPM Algorithm~\ref{algo:iipm} (Section~\ref{sxn:IIPM}) that solves eqn.~(\ref{eq:precond_alt}) using the iterative solver of Algorithm~\ref{algo:PCG} (Section~\ref{sxn:PCG}). Assume that the iterative solver runs with accuracy parameter $\zeta = \nicefrac{1}{2}$ and iteration count $t = \mathcal{O} (\log n)$. Then, with probability at least 0.9, the long-step feasible IPM converges after $\mathcal{O}(n \log \nicefrac{1}{\epsilon})$ iterations.
\end{theorem}
We note that the constant success probability above is for simplicity of exposition and can easily be amplified using standard techniques. Also, at each iteration of our long-step feasible IPM algorithm, the running time is $\mathcal{O}((\mathop{\mathrm{nnz}}(\mathbf{A})+m^3)\log n)$. See Section~\ref{sxn:IIPM} for a detailed discussion of the overall running time. However, finding a strictly feasible initial point is a non-trivial task; therefore, we also briefly discuss the infeasible version of the long-step IPM algorithm in Section~\ref{sxn:ipm_i}.

Our empirical evaluation demonstrates that our algorithm requires an order of magnitude fewer inner CG iterations than a standard IPM using CG, while producing a comparably accurate solution (see Section~\ref{sec:exp}). Our empirical evaluation also indicates that using a CG solver with our sketching-based preconditioner does not increase the number of (outer) iterations of the infeasible IPM, compared to unpreconditioned CG or a direct linear solver. Furthermore, there are instances where our solver performs much better than non-preconditioned CG in terms of (outer) iteration count.

\subsection{Comparison with Related Work}\label{sxn:comparison}

There is a large body of literature on solving LPs using IPMs, thus we only review literature that is immediately relevant to our work. Recall that we solve the normal equations inexactly at each iteration, and develop a method to \emph{correct} for the error incurred. We focus on IPMs that start with a strictly feasible initial point and discuss papers that present related ideas.
The use of an approximate iterative solver for eqn.~(\ref{eq:normal}), followed by a correction step to ``fix'' the approximate solution, was proposed by~\citet{Mon03} (see our discussion in Section~\ref{sxn:contrib}). We propose efficient, RLA-based approaches to precondition and solve eqn.~(\ref{eq:normal}), as well as a novel approach to correct for the approximation error in order to guarantee the convergence of the IPM algorithm. Specifically,~\citet{Mon03} propose to solve eqn.~\eqref{eq:normal} using the so-called \emph{maximum weight basis} preconditioner \citep{RV93}. However, computing such a preconditioner needs access to a maximal linearly independent set of columns of $\mathbf{A}\mathbf{D}$ in each iteration, which is costly, taking $\mathcal{O}(m^2n)$ time in the worst case. More importantly, while~\citep{Mon04} provides a bound on the condition number of the preconditioned matrix that depends only on properties of $\mathbf{A}$, and is independent of $\mathbf{D}$, this bound might, in general, be very large. In contrast, our bound is a constant and does not depend on properties of $\mathbf{A}$ or its dimensions. In addition, \citet{Mon03} assume a bound on the two-norm of the residual of the preconditioned system, but it is unclear how the proposed preconditioner guarantees such a bound. Similar concerns exist for the construction of the correction vector $\mathbf{v}$ proposed by~\citet{Mon03}, which our work alleviates. In this context, there is a long list of works on designing efficient preconditioners for solving the linear system at each iteration of an IPM; for a more detailed discussion, we refer the interested reader to the survey of \citep{Gondzio12}.

In the Theoretical Computer Science (TCS) community, following the lines of \citep{karmarkar84}, there has been a series of efforts to design faster LP solvers with improved worst-case time complexity. The running time of \citep{karmarkar84} was improved by~\citep{renegar1988polynomial} and~\citep{vaidya1989speeding}, which proposed an algorithm that takes $\widetilde{\mathcal{O}}\big(n^{2.5+o(1)}\big)$ time. Current state-of-the-art running times involve fast matrix multiplication (a theoretically appealing yet impractical approach). For example,~\citep{CLS19} proposed an algorithm that runs in $\widetilde{\mathcal{O}}\big(n^{\omega}+n^{2.5-\alpha / 2+o(1)}+n^{2+1 / 6}\big)$ time, where $\omega$ is the exponent of matrix multiplication and $\alpha$ is the dual exponent of matrix multiplication. For $\omega \approx 2.38$ and $\alpha \approx 0.31$ this time complexity boils down to $\widetilde{\mathcal{O}}\big(n^{\omega+o(1)}\big)$. More recently,~\citep{jiang2021faster} reduced the running time of~\citep{CLS19} to $\widetilde{\mathcal{O}}\big(n^{\omega}+n^{2.5-\alpha / 2+o(1)}+n^{2+1 /18}\big)$, which further reduces the gap between matrix multiplication and solving LPs. Work by~\citep{lee2019solvingERM,van2020deterministic, song2021oblivious} achieved the same running time as~\citep{CLS19}, using the same so-called \emph{lazy update} framework of~\citep{CLS19}. However, there are subtle differences between these works in terms of the underlying sketching and sampling techniques, as well as the approaches that achieve fast queries for the so-called \emph{projection maintenance} data structure that handles infeasibilities over iterations.
For example, while the work of~\citep{CLS19} involves a non-oblivious sampling scheme whose sampling set and size change over iterations,~\citep{song2021oblivious} utilizes oblivious sketching through an iterative framework to approximate the central path. On the other hand, while~\citep{CLS19, van2020deterministic, song2021oblivious} solve the linear system exactly, other works only maintain infeasible updates in each iteration. Similarly, while~\citep{song2021oblivious} can leverage sparse embeddings, most of the aforementioned works (including~\citep{lee2019solvingERM}) require the usage of dense sketching matrices, which could destroy the sparsity structure of the original linear program. All the aforementioned solvers are designed for the $m\approx n$ setting. If $\mathbf{A}$ is dense and rectangular with $n\gg m$,~\citep{brand2020solving} provides the theoretically fastest solver, with an iteration complexity of $\widetilde{\mathcal{O}}(\sqrt{m})$ and a total time complexity equal to $\widetilde{\mathcal{O}}\left(m n+m^{3}\right)$. For sparse rectangular matrices, the best per-iteration complexities are given by $\widetilde{\mathcal{O}}\left(\operatorname{nnz}(\mathbf{A})+m^{\omega}\right)$~\citep{LS14} and $\widetilde{\mathcal{O}}(\operatorname{nnz}(\mathbf{A}) + m^2)$ (see~\citep{LS15}); both algorithms have iteration complexity $\widetilde{\mathcal{O}}(\sqrt{m})$. Overall, we note that \citep{LS14,LS15,lee2019solvingERM,brand2020solving,cohen2021solving,song2021oblivious, jiang2021faster} proposed and analyzed theoretically ground-breaking algorithms for LPs, based on novel tools such as the so-called \emph{projection maintenance}, \emph{inverse maintenance}, \emph{fast matrix multiplication}, etc., for accelerating the linear system solvers in IPMs. Our paper differs from these TCS works in at least the following three directions:
\begin{itemize}
\item First, all the aforementioned approaches are primarily focused on theoretically fast but practically inefficient short-step path-following methods, where the iterates are constrained within a narrow, restrictive neighborhood of the central path in the interior of the feasible region. Therefore, algorithms based on short-step IPMs do not have much room to maneuver and the progress that they make in each iteration is limited, at least in practice\footnote{Theoretically, the worst-case iteration complexity of short-step central path methods is $\widetilde{\mathcal{O}}(\sqrt{n})$, which is the best complexity bound known for IPMs.}. Some of the recent TCS works (for example, \cite{CLS19, song2021oblivious}) rely on a stochastic version of the short-step central path method, in which the neighborhood of the central path is slightly wider than that of traditional short-step IPMs. This is still considered quite restrictive, as it needs all the pairwise products $x_i s_i$ to be close to the duality measure $\mu$ (more precisely, it needs $0.9\mu\le x_i s_i\le 1.1\mu$ for all $i$), which is not very practical in general. On the other hand, our algorithm is based on long-step path-following IPMs, which explore a much larger neighborhood around the central path and offer significant flexibility in each iteration. As a result, the iterates can take much longer steps towards optimality.
Long-step IPMs are known to be more efficient in practice than short-step IPMs, despite the fact that they exhibit a worst-case iteration complexity of $\mathcal{O}(n)$.
\item Second, the theoretical benefits of the aforementioned TCS works rely on techniques such as \emph{projection maintenance} or \emph{inverse maintenance}, where one needs to maintain the orthogonal projection matrix $\mathbf{D} \mathbf{A}^{\top}(\mathbf{A}\mathbf{D}^2\mathbf{A}^{\top})^{-1} \mathbf{A}\mathbf{D}\in \mathbb{R}^{n \times n}$ or the inverse $(\mathbf{A}\mathbf{D}^2\mathbf{A}^{\top})^{-1}\in \mathbb{R}^{m \times m}$ in order to solve the linear system at each iteration of the IPM. The idea of maintaining such projection matrices depends on the assumption that the matrix $\mathbf{D}$ does not change much from one iteration to the next. Thus, one only needs to compute the matrix $(\mathbf{A}\mathbf{D}^2\mathbf{A}^{\top})^{-1}$ a few times, which is also known as lazy updating. In addition, if $\mathbf{D}$ only changes in a few of its diagonal entries, one can further use low-rank updates to maintain the above projection matrix via the Sherman–Morrison–Woodbury (SMW) formula. However, due to the large constant factors involved in the lazy update framework and the numerical instability of the SMW formula, the notion of \emph{projection maintenance} is generally inefficient in practice. In contrast, our methods do not depend on any of the aforementioned techniques. Instead, we use randomized preconditioners combined with iterative solvers to approximately solve the linear system. We also propose a computationally efficient way to correct for the error caused by the solver. As a result, our method is much more relevant in practice and, when $n\gg m$, the per-iteration cost of our algorithm is $\widetilde{\mathcal{O}}(\mathop{\mathrm{nnz}}(\mathbf{A})+m^3)$.
\item Third, all the aforementioned papers involve fast matrix multiplication, which is a theoretically relevant yet practically inefficient tool. To the best of our knowledge, there are no in- or out-of-core implementations of such algorithms that outperform traditional matrix multiplication approaches, and it seems unlikely that such techniques will become practically relevant in the near future. Our methods leverage standard matrix multiplication routines and, in Section~\ref{sec:exp}, we show how an implementation of our approach offers advantages over existing, practically relevant approaches.
\end{itemize}
{\color{black} Another line of research in the Theoretical Computer Science literature that is very close to our work is~\citep{daitch2008faster}, which presented an IPM that uses an approximate solver in each iteration. However, their accuracy guarantee is in terms of the final objective value, which is different from ours. More importantly,~\citep{daitch2008faster} focuses on \emph{short-step} IPMs, whereas our approach is a \emph{long-step} algorithm that works for both feasible and infeasible starting points. Finally, the approximate solver proposed by~\citep{daitch2008faster} works only for the special case of input matrices that correspond to graph Laplacians, following the lines of~\citep{spielman2004nearly,spielman2014nearly}. In this context, we note that there are also other relevant works, including~\citep{madry2013navigating,madry2016computing,cohen2017negative}, that used IPM-based algorithms with approximate solvers for various combinatorial optimization problems on graphs.
However, similar to~\citep{daitch2008faster}, the aforementioned papers also focus on short-step IPMs, and the linear systems associated with them are either Laplacian or symmetric diagonally dominant (SDD).}

Another relevant line of research is the work of~\citet{CMTH16}, which proposed solving eqn.~\eqref{eq:normal} using preconditioned Krylov subspace methods, including variants of the \emph{generalized minimum residual} (GMRES) and CG methods. Indeed, \citet{CMTH16} conducted extensive numerical experiments on LP problems taken from standard benchmark libraries, but did not provide any theoretical guarantees. \textcolor{black}{Beyond experiments, the methods of \citep{CMTH16} primarily differ from ours in the way they use preconditioning. Instead of applying the preconditioner explicitly, their empirical evaluations rely on an implicit preconditioning technique called \emph{(stationary) inner-iterations preconditioning} \citep{morikuni2013inner,morikuni2015convergence}. As the name suggests, the key idea is to use another iterative method as a preconditioner for the linear system that needs to be solved at each iteration of the IPM. From a theoretical perspective, it is not clear how large the resulting condition number of their preconditioned system could be; moreover, there are two hyperparameters associated with their preconditioner that need to be tuned optimally, and there is no theoretical guideline on how to do so.} From a matrix-sketching perspective, our work was also partially motivated by~\citep{CYD18}, which presented an iterative, sketching-based algorithm to solve under-constrained ridge regression problems, but did not address how to make use of such approaches in an IPM-based framework, as we do here. We refer the reader to the surveys~\citep{Woodruff14,DM2018,Mahoney11,Drineas2016,martinsson2020randomized} for more background on Randomized Linear Algebra and regression solvers. In the context of deep neural networks (DNNs), \citep{van2021training} recently presented an iterative method to speed up the training of overparametrized DNNs using a randomized preconditioner similar to ours; however, the algorithm of \citep{van2021training} neither exploited the sparsity of the data nor corrected for the error caused by the inexact solver, as we do here. In another work,~\citep{avron2017faster} also proposed a similar sketching-based preconditioning technique; however, their efforts broadly revolved around speeding up and scaling \emph{kernel ridge regression}. \textcolor{black}{\citep{PW17} proposed the so-called \emph{Newton sketch} to construct an approximate Hessian matrix for more general convex problems, of which LP is a special case. Nevertheless, based on their local convergence guarantee for the sketched Newton updates, their paper only derived the underlying iteration complexity of the IPM (see, for example, Theorem 4.3 of~\citep{PW17}). It is not clear how to use their approach to bound the number of inner iterations and, as a result, deriving the per-iteration cost of their algorithm is not straightforward. Moreover, their convergence guarantees for the IPM are with respect to the objective value and not in terms of the duality measure.
Finally, the Newton sketch of~\citep{PW17} does not exploit the sparsity of the data, whereas the running time of our algorithm depends on the sparsity of $\mathbf{A}$.} We also note that~\citep{VPL18} proposed a probabilistic algorithm to solve LPs approximately by applying random projections to reduce the dimension of the feature space. A possible drawback of this work is that the approximate solution is infeasible with respect to the original feasible region. \textcolor{black}{On the empirical side, there are prior implementations of $\widetilde{\mathcal{O}}(\mathop{\mathrm{nnz}}(\mathbf{A}) + \mathrm{poly}(m))$ solvers for speeding up various ML applications, including ordinary least squares regression~\citep{clarkson2017low,cormode2019iterative}, general $\ell_p$-regression~\citep{yang2017weighted}, ridge regression~\citep{chen2015fast,CYD18}, Fisher linear discriminant analysis~\citep{ye2017fast,CYD19}, more general Newton updates~\citep{dahiya2018empirical}, and many more. However, to the best of our knowledge, there are no such implementations in the context of general linear programming problems, perhaps with the exception of $\ell_1$-regression. As discussed before, prior work on short-step IPMs for LPs with per-iteration cost $\widetilde{\mathcal{O}}(\mathop{\mathrm{nnz}}(\mathbf{A}) + \mathrm{poly}(m))$ was due to \citep{LS14,LS15}, but there is no empirical evaluation of these methods because of their heavy reliance on various theoretical tools such as inverse maintenance, fast matrix multiplication, etc.}

Finally, in addition to IPMs, LPs can be solved using the Simplex method. In commercial LP packages like Gurobi, both methods are often used in conjunction. For example, multiple solvers are run on multiple threads simultaneously and the one that finishes first is chosen. Alternatively, an IPM is used initially to get close to the optimal solution and then the Simplex algorithm is used to improve the solution~\citep{bixby1992very,glavelis2018improving}. Thus, developing efficient IPMs is vital for solving LPs and provides a crucial building block for commercial packages like Gurobi.

\section{Notation and Background}\label{sxn:background}

$\mathbf{A}, \mathbf{B}, \ldots$ denote matrices and $\mathbf{a}, \mathbf{b}, \ldots$ denote vectors. For a vector $\mathbf{a}$, $\|\mathbf{a}\|_{2}$ denotes its Euclidean norm; for a matrix $\mathbf{A}$, $\|\mathbf{A}\|_{2}$ denotes its spectral norm and $\|\mathbf{A}\|_F$ denotes its Frobenius norm. We use $\zero$ to denote a null vector or null matrix, depending on context, and $\one$ to denote the all-ones vector. For any matrix $\mathbf{X}\in\RR{m}{n}$ with $m\leq n$ and rank $m$, a thin Singular Value Decomposition (SVD) is the product $\mathbf{U}\mathbf{\Sigma}\mathbf{V}^\top$, with $\mathbf{U} \in\mathbb{R}^{m\times m}$ (the matrix of the left singular vectors), $\mathbf{V} \in \mathbb{R}^{n \times m}$ (the matrix of the top-$m$ right singular vectors), and $\mathbf{\Sigma} \in \mathbb{R}^{m \times m}$ a diagonal matrix whose entries are equal to the singular values of $\mathbf{X}$. We use $\sigma_{i}(\cdot)$ to denote the $i$-th singular value of the matrix in parentheses. For any two vectors $\mathbf{a}=(a_1,\dots,a_\ell)^\top$ and $\mathbf{b}=(b_1,\dots,b_\ell)^\top$, we denote by $\mathbf{a}\circ\mathbf{b}=(a_1b_1,\dots,a_\ell b_\ell)^\top$ their element-wise product. For any two symmetric positive semidefinite (resp.
positive definite) matrices $\mathbf{A}_1$ and $\mathbf{A}_2$ of appropriate dimensions, $\mathbf{A}_1\preccurlyeq\mathbf{A}_2$ ($\mathbf{A}_1\prec\mathbf{A}_2$) denotes that $\mathbf{A}_2-\mathbf{A}_1$ is positive semidefinite (resp. positive definite). For any vector $\mathbf{a}\in\R{n}$, its $\ell_\infty$ norm is defined as $\|\mathbf{a}\|_\infty=\max_i\abs{a_i}$. We extensively use the following standard inequality to prove several results in the paper:
\begin{flalign}\label{eq:normineq}
\abs{\frac{\mathbf{a}^\top\one_n}{n}}\le\|\mathbf{a}\|_\infty\le\|\mathbf{a}\|_2.
\end{flalign}
We now briefly discuss a result on matrix sketching~\citep{Cohen2016,cohen2016nearly} that is particularly useful in our theoretical analyses. In our parlance,~\citet{Cohen2016} proved that, for any matrix $\mathbf{Z}\in\RR{m}{n}$, there exists a sketching matrix $\mathbf{W}\in\RR{n}{w}$ such that
\begin{flalign}\label{eqn:pdprec}
\nbr{\mathbf{Z} \mathbf{W} \mathbf{W}^\top \mathbf{Z}^\top - \mathbf{Z} \mathbf{Z}^\top}_2\le \frac{\zeta}{4}\Big(\nbr{\mathbf{Z}}_2^2+\frac{\|\mathbf{Z}\|_F^2}{r}\Big)
\end{flalign}
holds with probability at least $1-\delta$ for any $r\ge 1$. Here $\zeta \in [0,1]$ is a (constant) accuracy parameter. Ignoring constant terms, $w=\mathcal{O}(r\log(\nicefrac{r}{\delta}))$; $\mathbf{W}$ has $\mathcal{O}(\log(r/\delta))$ non-zero entries per row; and the product $\mathbf{Z}\mathbf{W}$ can be computed in time $\mathcal{O}(\log(r/\delta)\cdot\mathop{\mathrm{nnz}}(\mathbf{Z}))$.

\section{Preconditioned Iterative Solver}\label{sxn:PCG}

In this section, we discuss the computation of the preconditioner $\mathbf{Q}$ (and its inverse), followed by a discussion of how such a preconditioner can be used to satisfy eqns.~\eqref{eq:pdcond1} and~\eqref{eq:pdcond2}.
\begin{algorithm}[H]
\caption{Solving eqn.~\eqref{eq:precond_alt} via CG or Chebyshev iteration}\label{algo:PCG}
\hspace*{\algorithmicindent}\textbf{Input:} $\mathbf{A}\mathbf{D}\in\RR{m}{n}$, $\mathbf{p}\in\R{m}$, sketching matrix $\mathbf{W} \in \mathbb{R}^{n \times w}$, iteration count $t$;
\begin{algorithmic}[1]
\State Compute $\mathbf{A}\mathbf{D}\mathbf{W}$ and its SVD: let $\mathbf{U}_{\mathbf{Q}} \in \mathbb{R}^{m \times m}$ be the matrix of its left singular vectors and let $\mathbf{\Sigma}_{\mathbf{Q}}^{\nicefrac{1}{2}} \in \mathbb{R}^{m \times m}$ be the matrix of its singular values;
\State Compute $\mathbf{Q}^{-\nicefrac{1}{2}} = \mathbf{U}_{\mathbf{Q}} \mathbf{\Sigma}_{\mathbf{Q}}^{-\nicefrac{1}{2}}\mathbf{U}_{\mathbf{Q}}^\top$;
\State Initialize $\tilde{\mathbf{z}}^{0} \gets \zero_m$ and run standard CG or Chebyshev iteration on the preconditioned system of eqn.~\eqref{eq:precond_alt} for $t$ iterations;
\end{algorithmic}
\hspace*{\algorithmicindent}\textbf{Output:} return $\tilde{\mathbf{z}}^t$;
\end{algorithm}
\noindent Algorithm~\ref{algo:PCG} takes as input the sketching matrix $\mathbf{W} \in \mathbb{R}^{n \times w}$, which we construct as discussed in Section~\ref{sxn:background}. Our preconditioner $\mathbf{Q}$ is equal to
\begin{flalign}\label{eqn:pdprecond}
\mathbf{Q}=\mathbf{A}\mathbf{D}\mathbf{W}\Wb^\top\mathbf{D}\mathbf{A}^\top.
\end{flalign}
Notice that we only need to compute $\mathbf{Q}^{-\nicefrac{1}{2}}$ in order to use it to solve eqn.~(\ref{eq:precond_alt}). Towards that end, we first compute the sketched matrix $\mathbf{A}\mathbf{D}\mathbf{W} \in \mathbb{R}^{m \times w}$.
Then, we compute the SVD of the matrix $\mathbf{A}\mathbf{D}\mathbf{W}$: let $\mathbf{U}_{\mathbf{Q}}$ be the matrix of its left singular vectors and let $\mathbf{\Sigma}_{\mathbf{Q}}^{\nicefrac{1}{2}}$ be the matrix of its singular values. Notice that the left (and right) singular vectors of $\mathbf{Q}^{-\nicefrac{1}{2}}$ are equal to $\mathbf{U}_{\mathbf{Q}}$ and its singular values are equal to $\mathbf{\Sigma}_{\mathbf{Q}}^{-\nicefrac{1}{2}}$. Therefore, $\mathbf{Q}^{-\nicefrac{1}{2}} = \mathbf{U}_{\mathbf{Q}} \mathbf{\Sigma}_{\mathbf{Q}}^{-\nicefrac{1}{2}}\mathbf{U}_{\mathbf{Q}}^\top$.

Let $\mathbf{A}\mathbf{D} = \mathbf{U}\mathbf{\Sigma}\mathbf{V}^\top$ be the thin SVD representation of $\mathbf{A}\mathbf{D}$. We apply the results of~\citep{Cohen2016} (see Section~\ref{sxn:background}) to the matrix $\mathbf{Z} = \mathbf{V}^\top \in\mathbb{R}^{m \times n}$ with $r=m$ to get that, with probability at least $1-\delta$,
\begin{flalign}
\nbr{\mathbf{V}^\top \mathbf{W} \mathbf{W}^\top \mathbf{V} - \mathbf{I}_{m}}_2\le \frac{\zeta}{4}\Big(\nbr{\mathbf{V}}_2^2+\frac{\|\mathbf{V}\|_F^2}{m}\Big) \le \frac{\zeta}{2}.\label{eq:cnd1}
\end{flalign}
In the above we used $\nbr{\mathbf{V}}_2=1$ and $\nbr{\mathbf{V}}_F^2=m$. The running time needed to compute the sketch $\mathbf{A}\mathbf{D}\mathbf{W}$ is equal to (ignoring constant factors) $\mathcal{O}(\mathop{\mathrm{nnz}}(\mathbf{A})\cdot \log(m/\delta))$. Note that $\mathop{\mathrm{nnz}}(\mathbf{A}\mathbf{D})=\mathop{\mathrm{nnz}}(\mathbf{A})$. The cost of computing the SVD of $\mathbf{A}\mathbf{D}\mathbf{W}$ (and therefore $\mathbf{Q}^{-\nicefrac{1}{2}}$) is $\mathcal{O}(m^3\log(m/\delta))$. Overall, computing $\mathbf{Q}^{-\nicefrac{1}{2}}$ can be done in time
\begin{flalign}\label{eqn:svdQ}
\mathcal{O}(\mathop{\mathrm{nnz}}(\mathbf{A})\cdot \log(m/\delta)+m^3\log(m/\delta)).
\end{flalign}
Given these results, we now discuss how to satisfy eqns.~\eqref{eq:pdcond1} and \eqref{eq:pdcond2} using the sketching matrix $\mathbf{W}$. We start with the following bound, which is relatively straightforward given prior RLA work.
\begin{lemma}\label{lem:cond3}
If the sketching matrix $\mathbf{W}$ satisfies eqn.~\eqref{eq:cnd1}, then, for all $i=1,\ldots,m$,
\begin{flalign*}
(1+\zeta/2)^{-1}\le\sigma_i^2(\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{A}\mathbf{D})\le (1-\zeta/2)^{-1}.
\end{flalign*}
\end{lemma}
\begin{proof}
Consider the condition of eqn.~\eqref{eq:cnd1}:
\begin{flalign}
&~\|\mathbf{V}^\top\mathbf{W}\Wb^\top\mathbf{V}-\mathbf{I}_m\|_2\le\frac{\zeta}{2}~\Leftrightarrow~ -\frac{\zeta}{2}\,\mathbf{I}_{m}\preccurlyeq\mathbf{V}^\top\mathbf{W}\Wb^\top\mathbf{V}-\mathbf{I}_m\preccurlyeq\frac{\zeta}{2}\,\mathbf{I}_{m}\label{eq:full}\\
\Leftrightarrow~&-\frac{\zeta}{2}\,\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\preccurlyeq\mathbf{A}\mathbf{D}\mathbf{W}\Wb^\top\mathbf{D}\mathbf{A}^\top-\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\preccurlyeq\frac{\zeta}{2}\,\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\label{eq:rich_3}\\
\Leftrightarrow~&\left(1-\frac{\zeta}{2}\right)\,\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\preccurlyeq\underbrace{\mathbf{A}\mathbf{D}\mathbf{W}\Wb^\top\mathbf{D}\mathbf{A}^\top}_{\mathbf{Q}}\preccurlyeq\left(1+\frac{\zeta}{2}\right)\,\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\,.\label{eq:rich_4}
\end{flalign}
We obtain eqn.~\eqref{eq:rich_3} by pre- and post-multiplying the previous inequality by $\mathbf{U}\mathbf{\Sigma}$ and $\mathbf{\Sigma}\mathbf{U}^\top$, respectively, and using the facts that $\mathbf{A}\mathbf{D}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^\top$ and $\mathbf{A}\mathbf{D}^2\mathbf{A}^\top=\mathbf{U}\mathbf{\Sigma}^2\mathbf{U}^\top$. Also, from eqn.~\eqref{eq:full}, note that all the eigenvalues of $\mathbf{V}^\top\mathbf{W}\Wb^\top\mathbf{V}$ lie between $(1-\frac{\zeta}{2})$ and $(1+\frac{\zeta}{2})$ and thus $\mathop{\mathrm{rank}}(\mathbf{V}^\top\mathbf{W})=m$. Therefore, $\mathop{\mathrm{rank}}(\mathbf{A}\mathbf{D}\mathbf{W})=\mathop{\mathrm{rank}}(\mathbf{U}\mathbf{\Sigma}\mathbf{V}^\top\mathbf{W})=m$, as $\mathbf{U}\mathbf{\Sigma}$ is non-singular and the rank of a matrix remains unaltered under pre- or post-multiplication by a non-singular matrix. So, we have $\mathop{\mathrm{rank}}(\mathbf{Q})=m$; in words, $\mathbf{Q}$ has full rank. Therefore, all the diagonal entries of $\mathbf{\Sigma}_\mathbf{Q}$ are positive and $\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{Q}\Qb^{-\nicefrac{1}{2}}=\mathbf{I}_m$\,. Using the above arguments and pre- and post-multiplying eqn.~\eqref{eq:rich_4} by $\mathbf{Q}^{-1/2}$, we get
\begin{flalign}
&~\left(1-\frac{\zeta}{2}\right)\,\mathbf{Q}^{-1/2}\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\mathbf{Q}^{-1/2}\preccurlyeq\mathbf{I}_{m}\preccurlyeq\left(1+\frac{\zeta}{2}\right)\,\mathbf{Q}^{-1/2}\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\mathbf{Q}^{-1/2}\nonumber\\
\Rightarrow&~\left(1+\frac{\zeta}{2}\right)^{-1}\mathbf{I}_{m}\preccurlyeq\mathbf{Q}^{-1/2}\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\mathbf{Q}^{-1/2}\preccurlyeq\left(1-\frac{\zeta}{2}\right)^{-1}\mathbf{I}_{m}\,.\label{eq:normbound}
\end{flalign}
Eqn.~\eqref{eq:normbound} implies that all the eigenvalues of $\mathbf{Q}^{-1/2}\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\mathbf{Q}^{-1/2}$ are bounded between $\left(1+\frac{\zeta}{2}\right)^{-1}$ and $\left(1-\frac{\zeta}{2}\right)^{-1}$, which concludes the proof of the lemma.
\end{proof}
\noindent The above lemma directly implies eqn.~\eqref{eq:pdcond1}. We now proceed to show that the above construction for $\mathbf{Q}^{-\nicefrac{1}{2}}$, when combined with the conjugate gradient solver or the Chebyshev iteration to solve eqn.~\eqref{eq:precond_alt}, indeed satisfies eqn.~\eqref{eq:pdcond2}.
\subsection{Conjugate Gradient Solver}\label{sxn:cg}
\textcolor{black}{As already mentioned in Section~\ref{sxn:contrib}, we derive eqn.~\eqref{eq:pdcond2} using the monotonicity property of the CG residual norms.
} It is known that even if the energy norm of the error of the approximate solution decreases monotonically, the norms of the CG residuals may oscillate. Interestingly, we can combine a result on the residuals of CG from~\citep{bouyouli2009new} with Lemma~\ref{lem:cond3} to prove that, in our setting, the norms of the CG residuals also decrease monotonically\footnote{See Chapter 9 of~\citep{Luenberger15} for a detailed overview of CG.}. \textcolor{black}{We do note that, in prior work, most of the convergence guarantees for CG focus on the error of the approximate solution, from which directly deriving eqn.~\eqref{eq:pdcond2} induces a constant factor in terms of the condition number of the preconditioned matrix. Here, we are able to avoid that constant factor by deriving and using the aforementioned monotonicity of the CG residuals.} Let $\tilde{\mathbf{f}}^{(j)}$ be the residual at the $j$-th iteration of the CG algorithm:
$$\tilde{\mathbf{f}}^{(j)}=\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\mathbf{Q}^{-\nicefrac{1}{2}}\tilde{\mathbf{z}}^j-\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p}.$$
Recall from Algorithm~\ref{algo:PCG} that~$\tilde{\mathbf{z}}^0=\zero$ and thus $\tilde{\mathbf{f}}^{(0)}=-\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p}$. In our parlance, Theorem 8 of~\citep{bouyouli2009new} proved the following bound.
\begin{lemma}[Theorem 8 of \citep{bouyouli2009new}]\label{lem:prev1}
Let $\tilde{\mathbf{f}}^{(j-1)}$ and $\tilde{\mathbf{f}}^{(j)}$ be the residuals obtained by the CG solver at steps $j-1$ and $j$. Then,
\begin{flalign*}
\|\tilde{\mathbf{f}}^{(j)}\|_2\le~\frac{\kappa^2(\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{A}\mathbf{D})-1}{2}\|\tilde{\mathbf{f}}^{(j-1)}\|_2\,,
\end{flalign*}
where $\kappa(\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{A}\mathbf{D})$ is the condition number of $\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{A}\mathbf{D}$.
\end{lemma}
\noindent\textbf{Satisfying eqn.~\eqref{eq:pdcond2}.} From Lemma~\ref{lem:cond3}, we get
\begin{flalign}\label{eq:condbd}
\kappa^2(\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{A}\mathbf{D})=\frac{\sigma_{\max}^2(\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{A}\mathbf{D})}{\sigma_{\min}^2(\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{A}\mathbf{D})}\le\frac{1+\zeta/2}{1-\zeta/2}.
\end{flalign}
Combining eqn.~\eqref{eq:condbd} with Lemma~\ref{lem:prev1},
\begin{flalign}
\|\tilde{\mathbf{f}}^{(j)}\|_2\le~\frac{\frac{1+\zeta/2}{1-\zeta/2}-1}{2}\|\tilde{\mathbf{f}}^{(j-1)}\|_2 =~\frac{\zeta}{2-\zeta}\|\tilde{\mathbf{f}}^{(j-1)}\|_2 \le~\zeta \|\tilde{\mathbf{f}}^{(j-1)}\|_2\label{eq:rec}\,,
\end{flalign}
where the last inequality follows from $\zeta\le1$. Applying eqn.~\eqref{eq:rec} recursively, we get
\begin{flalign}
\|\tilde{\mathbf{f}}^{(t)}\|_2\le\zeta\|\tilde{\mathbf{f}}^{(t-1)}\|_2\le\dots\le\zeta^t\|\tilde{\mathbf{f}}^{(0)}\|_2=\zeta^t\|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p}\|_2\nonumber\,,
\end{flalign}
which proves the condition of eqn.~\eqref{eq:pdcond2}. We remark that one could use MINRES~\citep{paige1975solution} instead of CG. Our result hinges on bounding the two-norm of the residual, and MINRES finds, at each iteration, the optimal vector with respect to the two-norm of the residual within the same Krylov subspace as CG at the corresponding iteration. Thus, the bound we prove for CG applies to MINRES as well.
\subsection{Chebyshev Iteration}
Now, we show that we could potentially replace CG with Chebyshev iteration (see Algorithm~1 of \citep{gutknecht2002chebyshev}) in step~(3) of Algorithm~\ref{algo:PCG}.
As already discussed in Section~\ref{sxn:contrib}, the only requirement of the Chebyshev iteration is an upper and a lower bound on the singular values of $\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\mathbf{Q}^{-\nicefrac{1}{2}}$, which we already have in the form of Lemma~\ref{lem:cond3}. Therefore, all we need to show is that the sketching matrix $\mathbf{W}$ satisfies eqn.~\eqref{eq:pdcond2} when the Chebyshev iteration is used. For this, we state the following result from~\citep{Gutknecht08}, which is instrumental in proving eqn.~\eqref{eq:pdcond2}.
\begin{lemma}[Theorem~1.6.2 of \citep{Gutknecht08}]
The residual norm reduction of the Chebyshev iteration, when applied to a symmetric positive definite (SPD) system whose condition number is upper bounded by $\mathcal{U}$, is bounded according to
\begin{flalign}
\frac{\|\tilde{\mathbf{f}}^{(t)}\|_2}{\|\tilde{\mathbf{f}}^{(0)}\|_2} \leq 2\left[\left(\frac{\sqrt{\mathcal{U}}+1}{\sqrt{\mathcal{U}}-1}\right)^{t}+\left(\frac{\sqrt{\mathcal{U}}-1}{\sqrt{\mathcal{U}}+1}\right)^{t}\right]^{-1}\label{eq:cs}
\end{flalign}
\end{lemma}
\noindent\textbf{Satisfying eqn.~\eqref{eq:pdcond2}.} From Lemma~\ref{lem:cond3}, we directly have $\mathcal{U}=\frac{2+\zeta}{2-\zeta}$. Note that for $t=0$ and starting from $\tilde{\mathbf{z}}^0=\zero$ (\emph{i.e.}, $\|\tilde{\mathbf{f}}^{(0)}\|_2=\|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p}\|_2$), eqn.~\eqref{eq:pdcond2} follows directly from eqn.~\eqref{eq:cs}. Therefore, we only need to show that eqn.~\eqref{eq:pdcond2} is satisfied for $t\ge 1$. Letting $a=\left(\frac{\sqrt{\mathcal{U}}-1}{\sqrt{\mathcal{U}}+1}\right)^{t}$, we rewrite eqn.~\eqref{eq:cs} as follows:
\begin{flalign}
\|\tilde{\mathbf{f}}^{(t)}\|_2\le\frac{2}{a+\frac{1}{a}}\|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p}\|_2=\frac{2\,a}{(a^2+1)}\|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p}\|_2\le 2\,a\,\|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p}\|_2\,,\label{eq:cs2}
\end{flalign}
where the last inequality in eqn.~\eqref{eq:cs2} holds as $a^2>0$. We now further bound the right-hand side of eqn.~\eqref{eq:cs2}. Substituting $a=\left(\frac{\sqrt{\mathcal{U}}-1}{\sqrt{\mathcal{U}}+1}\right)^{t}$ and $\mathcal{U}=\frac{2+\zeta}{2-\zeta}$, we rewrite eqn.~\eqref{eq:cs2} as
\begin{flalign}
\|\tilde{\mathbf{f}}^{(t)}\|_2\le&~ 2\left(\frac{\sqrt{\mathcal{U}}-1}{\sqrt{\mathcal{U}}+1}\right)^{t}\,\|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p}\|_2=2\left(\frac{\sqrt{\frac{2+\zeta}{2-\zeta}}-1}{\sqrt{\frac{2+\zeta}{2-\zeta}}+1}\right)^{t}\,\|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p}\|_2\nonumber\\
=~& 2\left(\frac{\sqrt{2+\zeta}-\sqrt{2-\zeta}}{\sqrt{2+\zeta}+\sqrt{2-\zeta}}\right)^{t}\,\|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p}\|_2=2\left(\frac{2\zeta}{\big(\sqrt{2+\zeta}+\sqrt{2-\zeta}\big)^2}\right)^{t}\,\|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p}\|_2\nonumber\\
=~& 2\left(\frac{2\zeta}{4\,\big(1+\sqrt{1-(\nicefrac{\zeta}{2})^2}\big)}\right)^{t}\,\|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p}\|_2=\frac{\zeta^t}{2^{t-1}\big(1+\sqrt{1-(\nicefrac{\zeta}{2})^2}\big)^t}\,\|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p}\|_2\nonumber\\
\le~& \zeta^t\,\|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p}\|_2\nonumber\,,
\end{flalign}
where the last inequality holds as $t\ge 1$ and the denominator is greater than unity. This establishes eqn.~\eqref{eq:pdcond2}.
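\medskip\noindent\textbf{An illustrative implementation.} To make the constructions of this section concrete, we include a short Python sketch below. It is an illustration rather than a reference implementation: it assumes a sparse embedding for $\mathbf{W}$ with $s$ non-zero entries of $\pm\nicefrac{1}{\sqrt{s}}$ per row, in the spirit of~\citep{Cohen2016}, and uses SciPy's CG routine as the inner solver; the helper names (\texttt{sparse\_sketch}, \texttt{pcg\_solve}) are ours and purely illustrative.
\begin{verbatim}
import numpy as np
from scipy.sparse import csr_array
from scipy.sparse.linalg import cg, LinearOperator

def sparse_sketch(n, w, s, rng):
    # Sparse embedding W (n x w): each row has s entries equal to
    # +/- 1/sqrt(s), placed in uniformly random columns.
    rows = np.repeat(np.arange(n), s)
    cols = rng.integers(0, w, size=n * s)
    vals = rng.choice([-1.0, 1.0], size=n * s) / np.sqrt(s)
    return csr_array((vals, (rows, cols)), shape=(n, w))

def pcg_solve(AD, p, w, s, t, rng):
    # Algorithm 1 (sketch): form Q^{-1/2} from the SVD of AD*W and
    # run at most t CG steps on the preconditioned normal equations.
    m, n = AD.shape
    W = sparse_sketch(n, w, s, rng)
    ADW = AD @ W                        # m x w sketch of AD
    UQ, sQ, Vh = np.linalg.svd(ADW, full_matrices=False)
    Q_inv_half = (UQ / sQ) @ UQ.T       # U_Q Sigma_Q^{-1/2} U_Q^T
    ADW_pinv = (Vh.T / sQ) @ UQ.T       # (AD*W)^+, free from the same SVD
    # SPD operator Q^{-1/2} (AD)(AD)^T Q^{-1/2}, applied matrix-free.
    mv = lambda z: Q_inv_half @ (AD @ (AD.T @ (Q_inv_half @ z)))
    op = LinearOperator((m, m), matvec=mv, dtype=AD.dtype)
    z, _ = cg(op, Q_inv_half @ p, x0=np.zeros(m), maxiter=t)
    return z, Q_inv_half, W, ADW_pinv
\end{verbatim}
\noindent The factors $\mathbf{Q}^{-\nicefrac{1}{2}}$, $\mathbf{W}$, and $(\mathbf{A}\mathbf{D}\mathbf{W})^{\dagger}$ are returned because the same SVD is reused in Section~\ref{sxn:IIPM} to form the perturbation vector $\mathbf{v}$; our analysis shows that $t=\mathcal{O}(\log n)$ inner iterations suffice.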
\section{The Feasible IPM algorithm}\label{sxn:IIPM}

In order to avoid spurious solutions, primal-dual path-following IPMs bias the search direction towards the \emph{central path} and restrict the iterates to a neighborhood of the central path. This search is controlled by the \emph{centering parameter} $\sigma\in[0,1]$. At each iteration, given the current feasible solution $(\mathbf{x}^{k},\mathbf{y}^{k},\mathbf{s}^{k})$, a standard feasible IPM obtains the search direction $(\Delta\mathbf{x}^k,\Delta\mathbf{y}^k,\Delta\mathbf{s}^k)$ by solving the following system of linear equations:
\begin{subequations}\label{eq:system}
\begin{flalign}
\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\Delta\mathbf{y}^k=~&\mathbf{p}^k\,,\label{eq:normal1}\\
\Delta\mathbf{s}^k=~&-\mathbf{A}^\top\Delta\mathbf{y}^k\,,\label{eq:dels}\\
\Delta\mathbf{x}^k=~&-\mathbf{x}^k+\sigma\mu_k\mathbf{S}^{-1}\one_n-\mathbf{D}^2\Delta\mathbf{s}^k.\label{eq:delx}
\end{flalign}
\end{subequations}
Here $\mathbf{D}$ and $\mathbf{S}$ are computed from the current iterate $(\mathbf{x}^{k},\mathbf{s}^{k})$; we omit the iteration index on $\mathbf{D}$ and $\mathbf{S}$ for notational simplicity. After solving the above system, the feasible IPM Algorithm~\ref{algo:iipm} proceeds by computing a step size $\bar{\alpha}$ and returning:
\begin{flalign}\label{eqn:update}
(\mathbf{x}^{k+1},\mathbf{y}^{k+1},\mathbf{s}^{k+1}) = (\mathbf{x}^{k},\mathbf{y}^{k},\mathbf{s}^{k}) + \bar{\alpha} (\Delta \mathbf{x}^k,\Delta \mathbf{y}^k,\Delta \mathbf{s}^k).
\end{flalign}
In the linear system of eqn.~\eqref{eq:system}, we also use the \emph{duality measure} $\mu_k=\nicefrac{{\mathbf{x}^k}^\top\mathbf{s}^k}{n}$ and the vector
\begin{flalign}\label{eqn:pdef}
\mathbf{p}^k&=-\sigma\mu_k\mathbf{A}\mathbf{S}^{-1}\one_n+\mathbf{A}\mathbf{x}^k.
\end{flalign}
Given $\Delta\mathbf{y}^k$ from eqn.~(\ref{eq:normal1}), $\Delta\mathbf{s}^k$ and $\Delta\mathbf{x}^k$ are easy to compute from eqns.~\eqref{eq:dels} and \eqref{eq:delx}, as they only involve matrix-vector products. However, since we use Algorithm~\ref{algo:PCG} to solve eqn.~\eqref{eq:normal1} approximately via the sketching-based preconditioned solver, the iterates $(\mathbf{x}^{k},\mathbf{y}^{k},\mathbf{s}^{k})$ do not satisfy the primal and dual constraints exactly. For notational simplicity, we now drop the dependency of vectors and scalars on the iteration counter $k$. Let $\hat{\Delta \mathbf{y}}=\mathbf{Q}^{-\nicefrac{1}{2}}\tilde{\mathbf{z}}^t$ be the approximate solution to eqn.~(\ref{eq:normal1}). In order to account for the loss of accuracy due to the approximate solver, we compute $\hat{\Delta\mathbf{x}}$ as follows:
\begin{flalign}
\hat{\Delta\mathbf{x}}=~-\mathbf{x}+\sigma\mu\mathbf{S}^{-1}\one_n-\mathbf{D}^2\hat{\Delta\mathbf{s}}-\mathbf{S}^{-1}\mathbf{v}\label{eq:delxhat}.
\end{flalign}
Here $\mathbf{v}\in\R{n}$ is a perturbation vector that needs to exactly satisfy the following invariant at each iteration of the feasible IPM:
\begin{flalign}
\mathbf{A}\mathbf{S}^{-1}\mathbf{v}=\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\hat{\Delta\mathbf{y}}-\mathbf{p}\,\label{eq:addl}.
\end{flalign}
We note that the computation of $\hat{\Delta \mathbf{s}}$ is still done using, essentially, eqn.~\eqref{eq:dels}, namely
\begin{flalign}
\hat{\Delta\mathbf{s}}=~&-\mathbf{A}^\top\hat{\Delta\mathbf{y}}.\label{eq:delshat}
\end{flalign}
At each iteration of the IPM, if $\mathbf{v}$ satisfies eqn.~\eqref{eq:addl}, then it can be shown that the primal and dual feasibility constraints are satisfied exactly.

\vspace{0.02in}\noindent\textbf{Construction of $\mathbf{v}$.} There are many choices for $\mathbf{v}$ satisfying eqn.~\eqref{eq:addl}. \textcolor{black}{Intuitively, we would expect the approximation error due to the solver to be reasonably small. Therefore,} to prove convergence, it is desirable for $\mathbf{v}$ to have a small norm, and hence a natural choice is
$$\mathbf{v}=(\mathbf{A}\mathbf{S}^{-1})^{\dagger}(\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\hat{\Delta\mathbf{y}}-\mathbf{p})\,.$$
\textcolor{black}{This choice of $\mathbf{v}$ has a clear geometric interpretation: it not only ensures that $\mathbf{A} \mathbf{S}^{-1} \mathbf{v}$ is the Euclidean projection of the ``infeasibility'' vector $\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\hat{\Delta\mathbf{y}}-\mathbf{p}$ onto the column space of $\mathbf{A}\mathbf{D}$, but it is also the minimum-norm least-squares solution, and it satisfies the invariant in eqn.~\eqref{eq:addl} exactly. However, computing $\mathbf{v}$ this way is expensive, as it involves evaluating the pseudoinverse of $\mathbf{A} \mathbf{S}^{-1}$, which takes $\mathcal{O}(m^2n)$ time.} Instead, we propose to construct $\mathbf{v}$ using the sketching matrix $\mathbf{W}$ of Section~\ref{sxn:background}. More precisely, we construct the perturbation vector
\begin{flalign}\label{eq:compv}
\mathbf{v}=(\mathbf{X}\mathbf{S})^{\nicefrac{1}{2}}\mathbf{W}(\mathbf{A}\mathbf{D}\mathbf{W})^{\dagger}(\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\hat{\Delta\mathbf{y}}-\mathbf{p}).
\end{flalign}
\textcolor{black}{Similar to the minimum-norm solution mentioned above, our sketching-based solution in eqn.~\eqref{eq:compv} also guarantees that $\mathbf{A} \mathbf{S}^{-1} \mathbf{v}$ is a projection of the ``infeasibility'' vector $\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\hat{\Delta\mathbf{y}}-\mathbf{p}$ onto the column space of $\mathbf{A}\mathbf{D}\mathbf{W}$ (which is identical to the column space of $\mathbf{A}\mathbf{D}$), and it satisfies eqn.~\eqref{eq:addl} exactly (see Lemma~\ref{lem:fullrankR} below). The computation of our proposed $\mathbf{v}$ is dominated by the cost of computing $(\mathbf{A}\mathbf{D}\mathbf{W})^{\dagger}$, which can be done much more efficiently, as discussed in Section~\ref{sxn:PCG}. In fact, no additional computation is needed: the SVD of $\mathbf{A}\mathbf{D}\mathbf{W}$, already computed during the construction of $\mathbf{Q}^{-\nicefrac{1}{2}}$ in Algorithm~\ref{algo:PCG}, immediately yields $(\mathbf{A}\mathbf{D}\mathbf{W})^{\dagger}$. Finally, in Lemma~\ref{lem:conouter}, we show that with our choice of $\mathbf{v}$, $\|\mathbf{v}\|_2$ remains small enough after only a few iterations of the iterative solver, which in turn guarantees the convergence of the IPM.}
\begin{lemma}\label{lem:fullrankR}
Let $\mathbf{W}\in\RR{n}{w}$ be the sketching matrix of Section~\ref{sxn:background} and $\mathbf{v}$ be the perturbation vector of eqn.~(\ref{eq:compv}). Then, with probability at least $1-\delta$, $\mathop{\mathrm{rank}}(\mathbf{A}\mathbf{D}\mathbf{W})=m$ and $\mathbf{v}$ satisfies eqn.~\eqref{eq:addl}.
\end{lemma}
\begin{proof}
Let $\mathbf{A}\mathbf{D}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^\top$ be the thin SVD representation of $\mathbf{A}\mathbf{D}$.
We use the exact same $\mathbf{W}$ as discussed in Section~\ref{sxn:PCG}. Therefore, eqn.~\eqref{eq:cnd1} holds with probability at least $1-\delta$, and it directly follows from the proof of Lemma~\ref{lem:cond3} that $\mathop{\mathrm{rank}}(\mathbf{A}\mathbf{D}\mathbf{W})=m$. Recall that $\mathbf{A}\mathbf{D}\mathbf{W}$ has full \emph{row rank} and thus $\mathbf{A}\mathbf{D}\mathbf{W}\,(\mathbf{A}\mathbf{D}\mathbf{W})^\dagger=\mathbf{I}_m$. Therefore, taking $\mathbf{v}=(\mathbf{X}\mathbf{S})^{\nicefrac{1}{2}}\mathbf{W}(\mathbf{A}\mathbf{D}\mathbf{W})^{\dagger}(\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\hat{\Delta\mathbf{y}}-\mathbf{p})$, we get
\begin{flalign*}
\mathbf{A}\mathbf{S}^{-1}\,\mathbf{v}=&~\mathbf{A}\mathbf{S}^{-1}(\mathbf{X}\mathbf{S})^{\nicefrac{1}{2}}\mathbf{W}(\mathbf{A}\mathbf{D}\mathbf{W})^{\dagger}(\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\hat{\Delta\mathbf{y}}-\mathbf{p})\nonumber\\
=&~\mathbf{A}\mathbf{D}\mathbf{W}(\mathbf{A}\mathbf{D}\mathbf{W})^{\dagger}(\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\hat{\Delta\mathbf{y}}-\mathbf{p})\nonumber\\
=&~\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\hat{\Delta\mathbf{y}}-\mathbf{p}\,,
\end{flalign*}
where the second equality follows from $\mathbf{D} = \mathbf{X}^{1/2}\mathbf{S}^{-1/2}$.
\end{proof}
\noindent We emphasize here that we use the exact same sketching matrix $\mathbf{W} \in \mathbb{R}^{n \times w}$ to form the preconditioner used in the iterative solver of Section~\ref{sxn:PCG} \emph{as well as} the vector $\mathbf{v}$ in eqn.~(\ref{eq:compv}). This allows us to sketch $\mathbf{A} \mathbf{D}$ only once, thus saving time in practice. Next, we present a bound for the two-norm of the perturbation vector $\mathbf{v}$ of eqn.~(\ref{eq:compv}).
\begin{lemma}\label{lem:v}
With probability at least $1-\delta$, our perturbation vector $\mathbf{v}$ in Lemma~\ref{lem:fullrankR} satisfies
\begin{flalign}
\|\mathbf{v}\|_2\le\sqrt{3n\mu}\,\|\tilde{\mathbf{f}}^{(t)}\|_2,
\end{flalign}
with $\tilde{\mathbf{f}}^{(t)}=\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\mathbf{Q}^{-\nicefrac{1}{2}}\tilde{\mathbf{z}}^t-\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p}$.
\end{lemma}
\begin{proof}
Recall that $\mathbf{Q}=\mathbf{A}\mathbf{D}\mathbf{W}(\mathbf{A}\mathbf{D}\mathbf{W})^\top=\mathbf{U}_\mathbf{Q}\mathbf{\Sigma}_\mathbf{Q}\mathbf{U}_\mathbf{Q}^\top$. Also, $\mathbf{U}_\mathbf{Q}$ and $\mathbf{\Sigma}_\mathbf{Q}^{\nicefrac{1}{2}}$ are (respectively) the matrices of the left singular vectors and the singular values of $\mathbf{A}\mathbf{D}\mathbf{W}$. Now, let $\widehat{\mathbf{V}}$ be the matrix of the right singular vectors of $\mathbf{A}\mathbf{D}\mathbf{W}$, so that $\mathbf{A}\mathbf{D}\mathbf{W}=\mathbf{U}_\mathbf{Q}\mathbf{\Sigma}_\mathbf{Q}^{\nicefrac{1}{2}}\widehat{\mathbf{V}}^\top$ is the thin SVD representation of $\mathbf{A}\mathbf{D}\mathbf{W}$. Also, from Lemma~\ref{lem:cond3}, we know that $\mathbf{Q}$ has full rank. Therefore, $\mathbf{Q}^{\nicefrac{1}{2}}\mathbf{Q}^{-\nicefrac{1}{2}}=\mathbf{I}_m$.
Next, we bound $\|\mathbf{v}\|_2$:
\begin{flalign}
\|\mathbf{v}\|_2=&~\|(\mathbf{X}\mathbf{S})^{\nicefrac{1}{2}}\mathbf{W}(\mathbf{A}\mathbf{D}\mathbf{W})^{\dagger}(\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\hat{\Delta\mathbf{y}}-\mathbf{p})\|_2\nonumber\\
=&~\|(\mathbf{X}\mathbf{S})^{\nicefrac{1}{2}}\mathbf{W}(\mathbf{A}\mathbf{D}\mathbf{W})^{\dagger}\mathbf{Q}^{\nicefrac{1}{2}}\mathbf{Q}^{-\nicefrac{1}{2}}(\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\hat{\Delta\mathbf{y}}-\mathbf{p})\|_2\nonumber\\
\le&~\|(\mathbf{X}\mathbf{S})^{\nicefrac{1}{2}}\mathbf{W}(\mathbf{A}\mathbf{D}\mathbf{W})^{\dagger}\mathbf{Q}^{\nicefrac{1}{2}}\|_2\,\|\tilde{\mathbf{f}}^{(t)}\|_2\label{eq:s1}.
\end{flalign}
In the above we used $\mathbf{Q}^{-\nicefrac{1}{2}}(\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\hat{\Delta\mathbf{y}}-\mathbf{p})=\tilde{\mathbf{f}}^{(t)}$. Using the SVDs of $\mathbf{A}\mathbf{D}\mathbf{W}$ and $\mathbf{Q}$, we get $(\mathbf{A}\mathbf{D}\mathbf{W})^{\dagger}\mathbf{Q}^{\nicefrac{1}{2}}=\widehat{\mathbf{V}}\mathbf{\Sigma}_\mathbf{Q}^{-1/2}\mathbf{U}_\mathbf{Q}^\top\,\mathbf{U}_\mathbf{Q}\mathbf{\Sigma}_\mathbf{Q}^{1/2}\mathbf{U}_\mathbf{Q}^\top=\widehat{\mathbf{V}}\mathbf{U}_\mathbf{Q}^\top$. Now, note that $\mathbf{U}_\mathbf{Q}\in\RR{m}{m}$ is an orthogonal matrix and $\|\widehat{\mathbf{V}}\|_2=1$. Therefore, combining with eqn.~\eqref{eq:s1} yields
\begin{flalign}
\|\mathbf{v}\|_2\le&~\|(\mathbf{X}\mathbf{S})^{\nicefrac{1}{2}}\mathbf{W}\widehat{\mathbf{V}}\mathbf{U}_\mathbf{Q}^\top\|_2\|\tilde{\mathbf{f}}^{(t)}\|_2=\|(\mathbf{X}\mathbf{S})^{\nicefrac{1}{2}}\mathbf{W}\widehat{\mathbf{V}}\|_2\|\tilde{\mathbf{f}}^{(t)}\|_2\nonumber\\
\le&\|(\mathbf{X}\mathbf{S})^{\nicefrac{1}{2}}\mathbf{W}\|_2\|\tilde{\mathbf{f}}^{(t)}\|_2.\label{eq:semi}
\end{flalign}
The first equality follows from the unitary invariance property of the spectral norm, and the second inequality follows from the sub-multiplicativity of the spectral norm and $\|\widehat{\mathbf{V}}\|_2=1$. Our construction for $\mathbf{W}$ implies that eqn.~\eqref{eqn:pdprec} holds for any matrix $\mathbf{Z}$ and, in particular, for $\mathbf{Z}=(\mathbf{X}\mathbf{S})^{\nicefrac{1}{2}}$. Eqn.~\eqref{eqn:pdprec} implies that
\begin{flalign}\label{eq:w2}
\nbr{(\mathbf{X}\mathbf{S})^{\nicefrac{1}{2}} \mathbf{W} \mathbf{W}^\top(\mathbf{X}\mathbf{S})^{\nicefrac{1}{2}} - (\mathbf{X}\mathbf{S})}_2\le \frac{\zeta}{4} \left(\|(\mathbf{X}\mathbf{S})^{\nicefrac{1}{2}}\|_2^2+\frac{\|(\mathbf{X}\mathbf{S})^{\nicefrac{1}{2}}\|_F^2}{m}\right)
\end{flalign}
holds with probability at least $1-\delta$. Applying Weyl's inequality to the left-hand side of eqn.~\eqref{eq:w2}, we get
\begin{flalign}\label{eq:semi2}
\abs{\nbr{(\mathbf{X}\mathbf{S})^{\nicefrac{1}{2}} \mathbf{W}}_2^2-\nbr{(\mathbf{X}\mathbf{S})^{\nicefrac{1}{2}}}_2^2}\le \frac{\zeta}{4} \left(\|(\mathbf{X}\mathbf{S})^{\nicefrac{1}{2}}\|_2^2+\frac{\|(\mathbf{X}\mathbf{S})^{\nicefrac{1}{2}}\|_F^2}{m}\right).
\end{flalign}
Using $\zeta\le 1$ and $\|(\mathbf{X}\mathbf{S})^{\nicefrac{1}{2}}\|_2^2\le \|(\mathbf{X}\mathbf{S})^{\nicefrac{1}{2}}\|_F^2 =\mathbf{x}^\top\mathbf{s}=n\mu$, we get\footnote{The constant three in eqn.~(\ref{eq:semi3}) could be slightly improved to $\nicefrac{3}{2}$; we chose to keep the suboptimal constant, as the better constant does not result in any significant improvements in the number of iterations of Algorithm~\ref{algo:iipm}.}
\begin{flalign}\label{eq:semi3}
\nbr{(\mathbf{X}\mathbf{S})^{\nicefrac{1}{2}} \mathbf{W}}_2^2\le 3\|(\mathbf{X}\mathbf{S})^{\nicefrac{1}{2}}\|_F^2=3n \mu.
\end{flalign}
Finally, combining eqns.~\eqref{eq:semi} and \eqref{eq:semi3}, we conclude
\begin{flalign*}
\|\mathbf{v}\|_2\le\sqrt{3n\mu}\|\tilde{\mathbf{f}}^{(t)}\|_2.
\end{flalign*}
\end{proof}
Intuitively, the bound in Lemma~\ref{lem:v} implies that $\|\mathbf{v}\|_2$ depends on how close the approximate solution $\hat{\Delta\mathbf{y}}$ is to the exact solution. Lemma~\ref{lem:v} is particularly useful in proving the convergence of Algorithm~\ref{algo:iipm}, which needs $\|\mathbf{v}\|_2$ to be a small quantity. Next, using the properties of our preconditioner $\mathbf{Q}^{-\nicefrac{1}{2}}$, we prove that $\|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p}\|_2= \mathcal{O}(\sqrt{n})\cdot\sqrt{\mu}$. This bound allows us to further show that if we run Algorithm~\ref{algo:PCG} for $\mathcal{O}(\log n)$ iterations, then $\|\mathbf{v}\|_2\le\frac{\gamma\sigma}{4}\mu$. This inequality is critical in the convergence analysis of Algorithm~\ref{algo:iipm} (see Section~\ref{sxn:conv} for details). Before presenting our feasible IPM algorithm, we first prove the above two inequalities using a couple of lemmas.

{\color{black}Let $\mathcal{F}^0$ be the set of strictly feasible points, \emph{i.e.},
\begin{flalign*}
\mathcal{F}^0=&~\{(\mathbf{x},\mathbf{y},\mathbf{s}): ~(\mathbf{x},\mathbf{s})>\zero,~\mathbf{A}\mathbf{x}=\mathbf{b},~\mathbf{A}^\top\mathbf{y}+\mathbf{s}=\mathbf{c}\}.
\end{flalign*}
In addition, we will need the following definition of the neighborhood
\begin{flalign}
&\mathcal{N}(\gamma)=\Big\{(\mathbf{x},\mathbf{y},\mathbf{s})\in\mathcal{F}^0:x_i s_i\ge(1-\gamma)\mu~\mbox{for all}~i\Big\}.\label{eq:neigh}
\end{flalign}
Here $\gamma \in (0,1)$ and $\mu$ is the duality measure. Note that $\mathcal{N}(\gamma)\subseteq\mathcal{F}^0$, and we assume that $\mathcal{F}^0$ is non-empty.}
\begin{lemma}\label{thm:boundf_f}
Let $(\mathbf{x},\mathbf{y},\mathbf{s})\in\mathcal{N}(\gamma)$ and let the sketching matrix $\mathbf{W}\in\RR{n}{w}$ satisfy the condition in eqn.~\eqref{eq:pdcond1}. Then,
\begin{flalign}
\|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p}\|_2\le~\left(1+ \frac{\sigma}{\sqrt{1-\gamma}}\right)\sqrt{2n\mu}\,.
\end{flalign}
\end{lemma}
\begin{proof}
To bound $\|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p}\|_2$, we first express $\mathbf{p}$ as in eqn.~\eqref{eqn:pdef} and rewrite
\begin{flalign}
\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p} =&~\mathbf{Q}^{-\nicefrac{1}{2}}\left(-\sigma\mu\mathbf{A}\mathbf{S}^{-1}\one_n+\mathbf{A}\mathbf{x}\right)\label{eq:recur3_f}.
\end{flalign}
Applying the triangle inequality to $\|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p}\|_2$ in eqn.~\eqref{eq:recur3_f}, we get
\begin{flalign}
\|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p}\|_2\le \Delta_1+\Delta_2\label{eq:recur4_f}\,,
\end{flalign}
where $\Delta_1=~\sigma\mu\|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{A}\mathbf{D}(\mathbf{X}\mathbf{S})^{-\nicefrac{1}{2}}\one_n\|_2$ and $\Delta_2=~\|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{A}\mathbf{D}\Db^{-1}\mathbf{x}\|_2$. In order to bound $\Delta_1$ and $\Delta_2$, we use the condition of eqn.~\eqref{eq:pdcond1}. In particular, eqn.~\eqref{eq:pdcond1} implies that $\|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{A}\mathbf{D}\|_2\le\sqrt{2}$ as $\zeta\le 1$.
\paragraph{Bounding $\Delta_1$.} Applying submultiplicativity, we get
\begin{flalign}
\Delta_1=&~\sigma\mu\,\|\mathbf{Q}^{-\nicefrac{1}{2}}\,\mathbf{A}\mathbf{D}\,(\mathbf{X}\mathbf{S})^{-\nicefrac{1}{2}}\one_n\|_2\nonumber\\
\le&~\sigma\mu\,\|\mathbf{Q}^{-\nicefrac{1}{2}}\,\mathbf{A}\mathbf{D}\|_2\|(\mathbf{X}\mathbf{S})^{-\nicefrac{1}{2}}\one_n\|_2 \le~\sqrt{2}\,\sigma\mu\,\|(\mathbf{X}\mathbf{S})^{-\nicefrac{1}{2}}\one_n\|_2\nonumber\\
=&~\sqrt{2}\,\sigma\mu\,\sqrt{\sum_{i=1}^{n}\frac{1}{x_i s_i}} \le~\sqrt{2}\,\sigma\mu\,\sqrt{\sum_{i=1}^{n}\frac{1}{(1-\gamma)\mu}} =~\sqrt{2}\,\sigma\,\sqrt{\frac{n\,\mu}{(1-\gamma)}}\label{eq:del2_f}\,,
\end{flalign}
where we used the fact that $(\mathbf{x},\mathbf{y},\mathbf{s})\in\mathcal{N}(\gamma)$.
\paragraph{Bounding $\Delta_2$.} Since $\mathbf{D}=\mathbf{S}^{-\nicefrac{1}{2}}\mathbf{X}^{\nicefrac{1}{2}}$ and $\mathbf{x}=\mathbf{X}\,\one_n$, we get
\begin{flalign}
\Delta_2=&~\|\mathbf{Q}^{-\nicefrac{1}{2}}\,\mathbf{A}\mathbf{D}\,(\mathbf{S}^{\nicefrac{1}{2}}\mathbf{X}^{-\nicefrac{1}{2}})\,\mathbf{X}\,\one_n\|_2 =~\|\mathbf{Q}^{-\nicefrac{1}{2}}\,\mathbf{A}\mathbf{D}\,(\mathbf{S}\mathbf{X})^{\nicefrac{1}{2}}\,\one_n\|_2\nonumber\\
\le&~\|\mathbf{Q}^{-\nicefrac{1}{2}}\,\mathbf{A}\mathbf{D}\|_2\|(\mathbf{S}\mathbf{X})^{\nicefrac{1}{2}}\,\one_n\|_2 \le~\sqrt{2}\,\sqrt{\sum_{i=1}^{n}x_i s_i}=~\sqrt{2n\,\mu}\label{eq:del3_f}.
\end{flalign}
\paragraph{Final bound.} Combining eqns.~\eqref{eq:recur4_f},~\eqref{eq:del2_f}, and~\eqref{eq:del3_f}, we get
\begin{flalign}
\|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p}\|_2\le~\left(1+ \frac{\sigma}{\sqrt{1-\gamma}}\right)\sqrt{2n\mu}\,.
\end{flalign}
This concludes the proof of Lemma~\ref{thm:boundf_f}.
\end{proof}
\begin{lemma}\label{lem:conouter}
Let $(\mathbf{x},\mathbf{y},\mathbf{s})\in\mathcal{N}(\gamma)$ and let the sketching matrix $\mathbf{W}$ satisfy the conditions of eqns.~\eqref{eq:pdcond1} and \eqref{eq:pdcond2}. Then, after $t\ge\scriptstyle\frac{\log(n\,\psi)}{\log(\nicefrac{1}{\zeta})}$ iterations of the iterative solver in Algorithm~\ref{algo:PCG}, we have $\|\mathbf{v}\|_2\le\frac{\gamma\sigma}{4}\mu$. Here $\psi=\scriptstyle\frac{4\sqrt{6}\left(1+ \nicefrac{\sigma}{\sqrt{1-\gamma}}\right)}{\gamma\sigma}$ and $\tilde{\mathbf{f}}^{(t)}=\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\mathbf{Q}^{-\nicefrac{1}{2}}\tilde{\mathbf{z}}^t-\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p}$ is the residual of the solver.
\end{lemma}
\begin{proof}
Combining Lemma~\ref{thm:boundf_f} and the condition in eqn.~\eqref{eq:pdcond2}, we get
\begin{flalign}\label{eq:bd}
\|\tilde{\mathbf{f}}^{(t)}\|_2\le\zeta^t\left(1+ \frac{\sigma}{\sqrt{1-\gamma}}\right)\sqrt{2n\mu}.
\end{flalign}
Next, combining Lemma~\ref{lem:v} and eqn.~\eqref{eq:bd}, we get
\begin{flalign*}
\|\mathbf{v}\|_2\le\sqrt{3n\mu}\,\|\tilde{\mathbf{f}}^{(t)}\|_2\le\sqrt{6}\,n\,\zeta^t\left(1+ \frac{\sigma}{\sqrt{1-\gamma}}\right)\mu\,.
\end{flalign*}
Therefore, $\|\mathbf{v}\|_2\le\frac{\gamma\sigma\mu}{4}$ holds if $\sqrt{6}\,n\,\zeta^t\left(1+ \nicefrac{\sigma}{\sqrt{1-\gamma}}\right)\mu\le\frac{\gamma\sigma\mu}{4}$, which holds for our choice of $t$. Fixing $\gamma$, $\sigma$, and $\zeta$, the conclusions of the lemma hold after $t=\mathcal{O}(\log n)$ iterations of Algorithm~\ref{algo:PCG}.
\end{proof}
Now, we are ready to present the feasible IPM algorithm. Recall the definition of the neighborhood $\mathcal{N}(\gamma)$ in eqn.~\eqref{eq:neigh}.
\begin{algorithm}[H]
\caption{Feasible IPM}\label{algo:iipm}%
\hspace*{\algorithmicindent} \textbf{Input:} $\mathbf{A}\in\RR{m}{n}$, $\mathbf{b}\in\R{m}$, $\mathbf{c}\in\R{n}$, $\gamma \in (0,1)$, tolerance $\epsilon> 0$, $\sigma\in (0,\nicefrac{4}{5})$;
\hspace*{\algorithmicindent} \textbf{Initialize:} $k\gets 0$; initial point $(\mathbf{x}^{0},\mathbf{y}^{0},\mathbf{s}^{0})\in\mathcal{F}^0$;
\begin{algorithmic}[1]
\While{$\mu_k > \epsilon$}
\State Compute sketching matrix $\mathbf{W} \in \mathbb{R}^{n \times w}$ (Section~\ref{sxn:background}) with $\zeta=1/2$ and $\delta = \mathcal{O}(n^{-1})$;
\State Solve eqn.~\eqref{eq:precond_alt} for $\mathbf{z}$ using Algorithm~\ref{algo:PCG} with $\mathbf{W}$ from step (2) and $t=\mathcal{O}(\log n)$, and then compute $\hat{\Delta \mathbf{y}}=\mathbf{Q}^{-\nicefrac{1}{2}}{\mathbf{z}}$;
\State Compute $\mathbf{v}$ using eqn.~\eqref{eq:compv} with $\mathbf{W}$ from step (2); $\hat{\Delta\mathbf{s}}$ using eqn.~\eqref{eq:delshat}; $\hat{\Delta\mathbf{x}}$ using eqn.~\eqref{eq:delxhat};
\State \label{stepalpha1} Compute $\tilde{\alpha} = \max\{ \alpha \in [0,1] : (\mathbf{x}^k,\mathbf{y}^k,\mathbf{s}^k) + \alpha (\hat{\Delta \mathbf{x}}^k,\hat{\Delta \mathbf{y}}^k,\hat{\Delta \mathbf{s}}^k) \in \mathcal{N}(\gamma)\}$;
\State \label{stepalpha2} Compute $\bar{\alpha} = \mathop{\mathrm{argmin}}_{\alpha \in [0, \tilde{\alpha}]}\, (\mathbf{x}^k + \alpha \hat{\Delta \mathbf{x}}^k)^\top (\mathbf{s}^k + \alpha \hat{\Delta\mathbf{s}}^k)$;
\State Compute $(\mathbf{x}^{k+1}, \mathbf{y}^{k+1}, \mathbf{s}^{k+1}) = (\mathbf{x}^k,\mathbf{y}^k,\mathbf{s}^k) + \bar{\alpha} (\hat{\Delta \mathbf{x}}^k,\hat{\Delta \mathbf{y}}^k,\hat{\Delta \mathbf{s}}^k)$; set $k \gets k + 1$;
\EndWhile
\end{algorithmic}
\end{algorithm}

\vspace{0.02in}\noindent\textbf{Running time.} We start by discussing the running time to compute $\mathbf{v}$. As discussed in Section~\ref{sxn:PCG}, $(\mathbf{A}\mathbf{D}\mathbf{W})^{\dagger}$ can be computed in $\mathcal{O}(\mathop{\mathrm{nnz}}(\mathbf{A})\cdot \log(m/\delta)+m^3\log(m/\delta))$ time. Moreover, as $\mathbf{W}$ has $\mathcal{O}(\log(m/\delta))$ non-zero entries per row, pre-multiplying by $\mathbf{W}$ takes $\mathcal{O}(\mathop{\mathrm{nnz}}(\mathbf{A})\log(m/\delta))$ time (assuming $\mathop{\mathrm{nnz}}(\mathbf{A})\ge n$). Since $\mathbf{X}$ and $\mathbf{S}$ are diagonal matrices, computing $\mathbf{v}$ takes $\mathcal{O}(\mathop{\mathrm{nnz}}(\mathbf{A})\cdot \log(m/\delta)+m^3\log(m/\delta))$ time, which is asymptotically the same as computing $\mathbf{Q}^{-\nicefrac{1}{2}}$ (see eqn.~(\ref{eqn:svdQ})). We now discuss the overall running time of Algorithm~\ref{algo:iipm}. At each iteration, with failure probability $\delta$, the preconditioner $\mathbf{Q}^{-\nicefrac{1}{2}}$ and the vector $\mathbf{v}$ can be computed in $\mathcal{O}(\mathop{\mathrm{nnz}}(\mathbf{A})\cdot \log(m/\delta)+m^3\log(m/\delta))$ time. In addition, for $t=\mathcal{O}(\log n)$ iterations of Algorithm~\ref{algo:PCG}, all the matrix-vector products in the CG or Chebyshev iteration can be computed in $\mathcal{O}(\mathop{\mathrm{nnz}}(\mathbf{A})\cdot \log n)$ time. Therefore, the computational time for steps (2)-(4) is given by $\mathcal{O}(\mathop{\mathrm{nnz}}(\mathbf{A})\cdot(\log n+ \log(m/\delta))+m^3\log(m/\delta))$.
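For concreteness, the following Python sketch renders one pass of the main loop of Algorithm~\ref{algo:iipm} (steps (2)--(6)), reusing the illustrative \texttt{pcg\_solve} helper from Section~\ref{sxn:PCG}. It is again only a sketch, not the implementation evaluated in Section~\ref{sec:exp}: in particular, the crude grid search over $\alpha$ stands in for the exact step-size computations of steps (5) and (6).
\begin{verbatim}
def ipm_iteration(A, x, y, s_vec, sigma, gamma, w, s_nnz, t, rng):
    # One pass of the while-loop of Algorithm 2 (steps (2)-(6)).
    m, n = A.shape
    mu = x @ s_vec / n
    d = np.sqrt(x / s_vec)                # diagonal of D = X^{1/2} S^{-1/2}
    AD = A * d                            # A @ diag(d)
    p = A @ x - sigma * mu * (A @ (1.0 / s_vec))
    z, Q_inv_half, W, ADW_pinv = pcg_solve(AD, p, w, s_nnz, t, rng)
    dy = Q_inv_half @ z                   # approximate Delta y
    ds = -A.T @ dy                        # Delta s
    resid = AD @ (AD.T @ dy) - p          # infeasibility A D^2 A^T dy - p
    v = np.sqrt(x * s_vec) * (W @ (ADW_pinv @ resid))   # perturbation vector
    dx = -x + sigma * mu / s_vec - d**2 * ds - v / s_vec
    # Steps (5)-(6): largest alpha staying in N(gamma), then the alpha
    # in [0, alpha_max] minimizing the duality gap (grid search here).
    best_gap, best_alpha = np.inf, 0.0
    for alpha in np.linspace(0.0, 1.0, 1001):
        xn, sn = x + alpha * dx, s_vec + alpha * ds
        if np.any(xn <= 0) or np.any(sn <= 0):
            break
        if np.any(xn * sn < (1.0 - gamma) * (xn @ sn / n)):
            break
        if xn @ sn < best_gap:
            best_gap, best_alpha = xn @ sn, alpha
    a = best_alpha
    return x + a * dx, y + a * dy, s_vec + a * ds
\end{verbatim}
\noindent Note that $\mathbf{b}$ and $\mathbf{c}$ are not needed inside the loop: by Lemma~\ref{lem:fullrankR}, the correction $\mathbf{v}$ guarantees that the invariant of eqn.~\eqref{eq:addl} holds, so primal and dual feasibility are preserved automatically.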
\textcolor{black}{Returning to the overall complexity: considering $\epsilon$ to be a constant, if we assume that the IPM needs $k=c\,n$ iterations to converge and, accordingly, fix the failure probability to $\delta=\frac{0.1}{c\,n}$ for some suitable constant $c$, then, taking a union bound over all the IPM iterations, our algorithm converges with probability at least $1-c\,n\cdot\frac{0.1}{c\,n}=0.9$, and the running time of each iteration is $\mathcal{O}((\mathop{\mathrm{nnz}}(\mathbf{A})+m^3)\log n)$.}

\subsection{Convergence Analysis of Algorithm~\ref{algo:iipm}}\label{sxn:conv}

{\color{black} In this section, we prove a set of results that ultimately establish Theorem~\ref{thm:2f} and guarantee the convergence of Algorithm~\ref{algo:iipm}. Due to the use of an approximate solver, these proofs differ from the standard analysis of long-step feasible IPMs~\citep{wright1997primal} in several aspects. For example, all the major results in this section rely on the condition that the error due to the linear solver is small, \emph{i.e.}, that $\|\mathbf{v}\|_2$ is small; the standard convergence analysis does not need this condition, as the linear system there is solved exactly, \emph{i.e.}, $\|\mathbf{v}\|_2$ is always zero. This difference makes our case more intricate, as we must carefully control an extra term involving $\|\mathbf{v}\|_2$. On the other hand, while the statements of our feasible IPM results essentially originate in \citep{Mon03}, the proofs are different from those of \citep{Mon03}. The analysis of \citep{Mon03} (even after setting the primal and dual residuals to zero) neither directly applies to the feasible case nor matches the best iteration complexity for it, whereas our analysis achieves an iteration complexity of $\mathcal{O}(n\log\nicefrac{1}{\epsilon})$, the best known for feasible long-step path-following IPM algorithms. The proofs that look similar to those of \citep{Mon03} also have differences, which we discuss individually before the respective lemmas. The only overlap with \citep{Mon03} is our Lemma~\ref{lemmaminalpha1_f}, which works in both the feasible and infeasible settings. We now proceed to prove Theorem~\ref{thm:2f}. }

First, we can rewrite the linear system of eqns.~\eqref{eq:delxhat}, \eqref{eq:addl}, and \eqref{eq:delshat} as follows:
\begin{subequations}\label{eq:iip_3_f}
\begin{flalign}
\mathbf{A}\hat{\Delta\mathbf{x}}=&~\zero, \label{eq:iip_3_f_1}\\
\mathbf{A}^\top\hat{\Delta\mathbf{y}}+\hat{\Delta\mathbf{s}}=&~\zero, \label{eq:iip_3_f_2}\\
\mathbf{X}\hat{\Delta\mathbf{s}}+\mathbf{S}\hat{\Delta\mathbf{x}}=&-\mathbf{X}\mathbf{S}\,\one_n+\sigma\mu\,\one_n - \mathbf{v}. \label{eq:iip_3_f_3}
\end{flalign}
\end{subequations}
Indeed, we now show how to derive eqns.~\eqref{eq:delxhat}, \eqref{eq:addl}, and \eqref{eq:delshat} from eqn.~\eqref{eq:iip_3_f}. Pre-multiplying both sides of eqn.~\eqref{eq:iip_3_f_3} by $\mathbf{A}\mathbf{S}^{-1}$ and noting that $\mathbf{D}^2=\mathbf{X}\mathbf{S}^{-1}$, we get
\begin{flalign}
&~\mathbf{A}\mathbf{D}^2\hat{\Delta\mathbf{s}}+\mathbf{A}\hat{\Delta \xb}=-\mathbf{A}\mathbf{X}\one_n+\sigma\mu\mathbf{A}\mathbf{S}^{-1}\one_n-\mathbf{A}\mathbf{S}^{-1}\mathbf{v}\nonumber\\
\Rightarrow&~\mathbf{A}\mathbf{D}^2\hat{\Delta\mathbf{s}}=-\mathbf{A}\mathbf{x}+\sigma\mu\mathbf{A}\mathbf{S}^{-1}\one_n-\mathbf{A}\mathbf{S}^{-1}\mathbf{v}.\label{eq:iip_3_f1}
\end{flalign}
Eqn.~\eqref{eq:iip_3_f1} holds as $\mathbf{A}\mathbf{X}\one_n=\mathbf{A}\mathbf{x}$ and, from eqn.~\eqref{eq:iip_3_f_1}, $\mathbf{A}\hat{\Delta \xb}=\zero$.
Next, pre-multiplying eqn.~\eqref{eq:iip_3_f_2} by $\mathbf{A}\mathbf{D}^2$, we get
\begin{flalign}
&~\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\hat{\Delta \yb}+\mathbf{A}\mathbf{D}^2\hat{\Delta \sbb}=\zero\nonumber\\
\Rightarrow &~\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\hat{\Delta \yb}=\mathbf{A}\mathbf{x}-\sigma\mu\mathbf{A}\mathbf{S}^{-1}\one_n+\mathbf{A}\mathbf{S}^{-1}\mathbf{v}=\mathbf{p}+\mathbf{A}\mathbf{S}^{-1}\mathbf{v}.\label{eq:iip_3_f11}
\end{flalign}
The first equality in eqn.~\eqref{eq:iip_3_f11} follows from eqn.~\eqref{eq:iip_3_f1} and the definition of $\mathbf{p}$. This establishes eqn.~\eqref{eq:addl}. Eqn.~\eqref{eq:delshat} directly follows from eqn.~\eqref{eq:iip_3_f_2}. Finally, we get eqn.~\eqref{eq:delxhat} by pre-multiplying eqn.~\eqref{eq:iip_3_f_3} by $\mathbf{S}^{-1}$.

We now introduce some additional notation. We define the next point traversed by the algorithm as $(\xb(\alpha),\yb(\alpha), \sbb(\alpha))$, where
\begin{flalign}
(\xb(\alpha),\yb(\alpha),\sbb(\alpha)) &= (\mathbf{x}, \mathbf{y}, \mathbf{s}) + \alpha(\hat{\Delta\mathbf{x}},\hat{\Delta \mathbf{y}},\hat{\Delta\mathbf{s}}),\ \mbox{and} \\
\mu(\alpha) &= \left(\nicefrac{1}{n}\right)\xb(\alpha)^\top \sbb(\alpha).
\end{flalign}
Our goal is to bound the number of outer iterations required by the feasible IPM algorithm. To do so, we bound the magnitude of the step size $\alpha$. First, we provide an upper bound on $\alpha$, which allows us to show that each new point $(\xb(\alpha),\yb(\alpha),\sbb(\alpha))$ traversed by the algorithm stays within the neighborhood $\mathcal{N}(\gamma)$. Second, we provide a lower bound on $\alpha$, which allows us to bound the number of iterations required. The following lemma will be used throughout the section.
\begin{lemma}\label{lemmamaxalpha0}
Assume that $(\hat{\Delta \xb},\hat{\Delta \yb},\hat{\Delta \sbb})$ satisfies eqn.~\eqref{eq:iip_3_f} for some $\sigma \in \mathbb{R}$ and $\mathbf{v} \in \mathbb{R}^n$. Let $(\mathbf{x},\mathbf{y},\mathbf{s})$ be any point such that $(\mathbf{x},\mathbf{s})>\zero$. Then, for every $\alpha \in \mathbb{R}$,
\begin{flalign*}
(a)&~~\xb(\alpha) \circ \sbb(\alpha) = (1- \alpha) \xb \circ \sbb + \alpha \sigma \mu \one_n - \alpha \mathbf{v} + \alpha^2 \hat{\Delta\mathbf{x}}\circ\hat{\Delta \mathbf{s}}\,,\\
(b)&~~\mu(\alpha) = [1-\alpha(1-\sigma)]\mu - \frac{\alpha\,\mathbf{v}^\top\one_n}{n}.
\end{flalign*}
\end{lemma}
\begin{proof}
We first prove part $(a)$:
\begin{flalign}
\xb(\alpha) \circ \sbb(\alpha) &= (\mathbf{x} + \alpha \hat{\Delta\mathbf{x}}) \circ (\mathbf{s} + \alpha \hat{\Delta\mathbf{s}}) \notag \\
& = \xb \circ \sbb + \alpha(\mathbf{x} \circ \hat{\Delta\mathbf{s}} + \mathbf{s}\circ\hat{\Delta\mathbf{x}}) + \alpha^2 \hat{\Delta\mathbf{x}}\circ\hat{\Delta \mathbf{s}} \notag\\
& = \xb \circ \sbb + \alpha(-\xb \circ \sbb + \sigma\mu \one_n - \mathbf{v}) + \alpha^2\hat{\Delta\mathbf{x}}\circ\hat{\Delta \mathbf{s}} \notag \\
& = (1-\alpha)\xb \circ \sbb + \alpha\sigma\mu \one_n - \alpha\mathbf{v} + \alpha^2\hat{\Delta\mathbf{x}}\circ\hat{\Delta \mathbf{s}}\,, \notag
\end{flalign}
where the third equality follows from eqn.~\eqref{eq:iip_3_f_3}. Now, left-multiplying the above equality by $\one_n^\top$ and dividing by $n$ yields part $(b)$. (Notice that $\hat{\Delta \xb}^\top\hat{\Delta \sbb}=0$ from eqns.~\eqref{eq:iip_3_f_1} and \eqref{eq:iip_3_f_2}.)
\end{proof}
\noindent Next, we provide an upper bound on $\alpha$ ensuring that each new point $(\xb(\alpha),\yb(\alpha),\sbb(\alpha))$ traversed by the algorithm stays within the neighborhood $\mathcal{N}(\gamma)$. Note that the following result resembles Lemma~3.5 of \citep{Mon03}; what makes Lemma~\ref{lemmamaxalpha1_f} different is that here we additionally need to prove strict feasibility of the new iterate, \emph{i.e.},~$(\xb(\alpha),\yb(\alpha),\sbb(\alpha))\in\mathcal{F}^0$ (in order to show that $(\xb(\alpha),\yb(\alpha),\sbb(\alpha)) \in \mathcal{N}(\gamma)$), which was not proven in Lemma~3.5 of \citep{Mon03}.
\begin{lemma}\label{lemmamaxalpha1_f}
Assume $(\hat{\Delta \xb},\hat{\Delta \yb},\hat{\Delta \sbb})$ satisfies eqn.~\eqref{eq:iip_3_f} for some $\sigma > 0$, $(\mathbf{x}, \mathbf{y},\mathbf{s}) \in \mathcal{N}(\gamma)$ for $\gamma \in (0,1)$, and $\|\mathbf{v}\|_2 \leq \frac{\gamma \sigma \mu}{4}$. Then, $(\xb(\alpha),\yb(\alpha),\sbb(\alpha)) \in \mathcal{N}(\gamma)$ for every scalar $\alpha$ such that
\begin{flalign} \label{alphalemma1_f}
0 \leq \alpha \leq \min \left\{ 1, \frac{\gamma \sigma \mu}{4 \|\hat{\Delta \xb} \circ \hat{\Delta \sbb} \|_\infty}\right\} .
\end{flalign}
\end{lemma}
\begin{proof}
First, we show that $\xb(\alpha) \circ \sbb(\alpha) \geq (1-\gamma) \mu(\alpha) \one_n$. From Lemma~\ref{lemmamaxalpha0}, we get
\begin{flalign}
&\xb(\alpha) \circ \sbb(\alpha) - (1-\gamma) \mu(\alpha) \one_n \notag \\
&= (1-\alpha)\left(\xb \circ \sbb - (1-\gamma)\mu\,\one_n\right) + \alpha \gamma \sig \mu\, \one_n - \alpha \left(\mathbf{v} - (1-\gamma)\frac{\mathbf{v}^\top\one_n}{n}\one_n\right) + \alpha^2 \left(\hat{\Delta \xb} \circ \hat{\Delta \sbb} \right) \notag \\
& \geq \alpha \left( \gamma \sig \mu - \nbr{\mathbf{v}- (1-\gamma)\frac{\mathbf{v}^\top\one_n}{n}\,\one_n}_\infty - \alpha \nbr{\hat{\Delta\mathbf{x}}\circ\hat{\Delta\mathbf{s}}}_\infty\right) \one_n \notag \\
& \geq \alpha \Bigg(\gamma \sig \mu - 2 \|\mathbf{v}\|_\infty - \alpha\|\hat{\Delta \xb} \circ \hat{\Delta \sbb} \|_\infty \Bigg)\one_n \notag \\
& \geq \alpha \Bigg(\gamma \sig \mu - \frac{\gamma \sig \mu}{2} - \frac{\gamma \sig \mu}{4}\Bigg) \one_n = \, \alpha\frac{\gamma \sig \mu}{4}\one_n\ge\zero \notag.
\end{flalign}
The first inequality follows from $\xb \circ \sbb \ge(1-\gamma)\mu\,\one_n$ (because $(\mathbf{x},\mathbf{y},\mathbf{s})\in\mathcal{N}(\gamma)$) and from $\mathbf{a}\le\|\mathbf{a}\|_\infty\,\one_n$ for any vector $\mathbf{a}\in\R{n}$. The second-to-last inequality follows from the fact that, for any $\mathbf{u} \in \mathbb{R}^n$ and $\delta \in [0,n]$, $\nbr{\mathbf{u} - \delta \frac{\mathbf{u}^\top \one_n}{n}\,\one_n}_\infty \leq (1+ \delta)\|\mathbf{u}\|_\infty$. Thus, the point $(\xb(\alpha),\yb(\alpha),\sbb(\alpha))$ satisfies the proximity condition for $\mathcal{N}(\gamma)$.

Finally, we show that $(\xb(\alpha),\yb(\alpha),\sbb(\alpha))\in\mathcal{F}^0$, \emph{i.e.}, that it satisfies the primal and dual constraints and $(\xb(\alpha),\sbb(\alpha))>\zero$. From eqn.~\eqref{eq:iip_3_f} and the fact that $(\mathbf{x},\mathbf{y},\mathbf{s})\in\mathcal{F}^0$, we get $\mathbf{A}\,\xb(\alpha)=\mathbf{A}\mathbf{x}+\alpha\mathbf{A}\hat{\Delta\mathbf{x}}=\mathbf{b}$.
Similarly, $$\mathbf{A}^\top\,\yb(\alpha)+\sbb(\alpha)=(\mathbf{A}^\top\mathbf{y}+\mathbf{s})+\alpha(\mathbf{A}^\top\hat{\Delta\mathbf{y}}+\hat{\Delta\mathbf{s}})=\mathbf{A}^\top\mathbf{y}+\mathbf{s} = \mathbf{c}.$$ We now show that $(\xb(\alpha),\sbb(\alpha))>\zero$. For $\alpha=0$, we trivially have $(\xb(\alpha),\sbb(\alpha))=(\mathbf{x},\mathbf{s})>\zero$. To prove $(\xb(\alpha),\sbb(\alpha))>\zero$ for $0<\alpha\le1$, we first show $\mu(\alpha)>0$. Using $\gamma \in (0,1)$, the inequality $\abs{\frac{\mathbf{v}^\top\one_n}{n}}\le\|\mathbf{v}\|_\infty\le\|\mathbf{v}\|_2$, and the assumption $\|\mathbf{v}\|_2\le\frac{\gamma \sig \mu}{4}$, we get $\frac{\mathbf{v}^\top\one_n}{n} < \frac{\sigma \mu}{4}$. Thus, from Lemma~\ref{lemmamaxalpha0}(b), \begin{flalign} \mu(\alpha) &= [1-\alpha(1-\sigma)]\mu - \alpha \,\frac{\mathbf{v}^\top\one_n}{n}\nonumber\\ &> [1-\alpha(1-\sigma)]\mu - \alpha\,\frac{\sigma \mu}{4}\nonumber\\ &= (1-\alpha)\mu + \alpha\,\frac{3\,\sigma \mu}{4}>0. \label{last} \end{flalign} The last inequality holds because $\alpha\in (0,1]$, $\sigma\in (0,1)$, and $\mu>0$. We already have $\xb(\alpha)\circ\sbb(\alpha)\ge (1-\gamma)\mu(\alpha)\one_n$. Combining with $\mu(\alpha)>0$ and $\gamma\in (0,1)$, this implies that $\xb(\alpha)\circ\sbb(\alpha)>\zero$. Therefore, $x_i(\alpha)\,s_i(\alpha)>0$ for all $i=1\dots n$, which implies that, for each $i$, either both $x_i(\alpha)$ and $s_i(\alpha)$ are positive or both $x_i(\alpha)$ and $s_i(\alpha)$ are negative. We prove by contradiction that the second case is impossible. Assume that $x_i(\alpha)<0$ and $s_i(\alpha)<0$ for some $i=1 \ldots n$. First, we rewrite $x_i(\alpha)$ and $s_i(\alpha)$ as follows\footnote{Here, $x_i(\alpha)$, $s_i(\alpha)$, $x_i$, $s_i$, $\hat{\Delta x_i}$, and $\hat{\Delta s_i}$ are the $i$-th elements of $\xb(\alpha)$, $\sbb(\alpha)$, $\mathbf{x}$, $\mathbf{s}$, $\hat{\Delta \xb}$, and $\hat{\Delta \sbb}$, respectively.} \begin{subequations} \begin{flalign} x_i(\alpha)=&~x_i+\alpha\hat{\Delta x_i}<0\label{eq:x_i},\\ s_i(\alpha)=&~s_i+\alpha\hat{\Delta s_i}<0\label{eq:s_i}. \end{flalign} \end{subequations} Recall that both $x_i$ and $s_i$ are positive. Therefore, multiplying eqn.~\eqref{eq:x_i} by $s_i$ and eqn.~\eqref{eq:s_i} by $x_i$, we get \begin{subequations} \begin{flalign} x_is_i+\alpha s_i\hat{\Delta x_i}<&~0\label{eq:x_i1},\\ x_is_i+\alpha x_i\hat{\Delta s_i}<&~0\label{eq:s_i1}. \end{flalign} \end{subequations} Adding eqns.~\eqref{eq:x_i1} and \eqref{eq:s_i1} and applying eqn.~\eqref{eq:iip_3_f_3} (element-wise), we get \begin{flalign} &~2x_is_i+\alpha (s_i\hat{\Delta x_i}+x_i\hat{\Delta s_i})<0\nonumber\\ \Rightarrow&~2x_is_i+ \alpha (-x_is_i+\sigma\mu-v_i)<0\nonumber\\ \Rightarrow&~ (2-\alpha)x_is_i+\alpha\sigma\mu-\alpha v_i<0\nonumber\\ \Rightarrow&~ v_i>\frac{2-\alpha}{\alpha}x_is_i+\sigma\mu > \sigma\mu\label{eq:xs}. \end{flalign} In the above, $v_i$ is the $i$-th element of $\mathbf{v}$; the first inequality in eqn.~\eqref{eq:xs} holds because $\alpha>0$; the second inequality in eqn.~\eqref{eq:xs} holds because $\frac{2-\alpha}{\alpha}x_is_i>0$ ($x_i,s_i>0$ and $0 < \alpha \leq 1$). Using $\|\mathbf{v}\|_2\le\frac{\gamma \sig \mu}{4}$ we get $$v_i\le\|\mathbf{v}\|_\infty\le\|\mathbf{v}\|_2\le\frac{\gamma \sig \mu}{4}<\sigma\mu\,,$$ for all $i=1\ldots n$. This contradicts the inequality of eqn.~\eqref{eq:xs}; thus, both $x_i(\alpha)>0$ and $s_i(\alpha)>0$ for all $i=1\ldots n$ and all $\alpha\in[0,1]$.
\end{proof} \noindent We now cite a result from \citep{Mon03} that provides a lower bound on $\bar{\alpha}$ and the corresponding $\mu(\bar{\alpha})$. Note that \citep{Mon03} presented it in the context of an infeasible IPM; however, it holds for the feasible case as well, as long as the perturbation vector $\mathbf{v}$ satisfies $\|\mathbf{v}\|_2\le\frac{\gamma\sigma\mu}{4}$ at each iteration. \begin{lemma}[Lemma 3.6 of \citep{Mon03}]\label{lemmaminalpha1_f} At each iteration of Algorithm~\ref{algo:iipm}, if $\|\mathbf{v}\|_2\le\frac{\gamma\sigma\mu}{4}$, then the step size $\bar{\alpha}$ satisfies \begin{flalign} \label{alphalemma2_f} \bar{\alpha} \geq \min \left\{ 1, \frac{ \min \{ \gamma \sigma, (1 - \frac{5}{4} \sigma) \} \mu } {4 \|\hat{\Delta x} \circ \hat{\Delta s}\|_\infty} \right\} \end{flalign} and \begin{flalign} \label{mulemma2_f} \mu(\bar{\alpha})= \Big[ 1 - \frac{\bar{\alpha}}{2} (1-\frac{5}{4}\sigma) \Big] \mu. \end{flalign} \end{lemma} \noindent At this point, we have provided a lower bound (eqn.~(\ref{alphalemma2_f})) for the allowed values of the step size $\bar{\alpha}$. Next, we will show that this lower bound is bounded away from zero. From eqn.~(\ref{alphalemma2_f}), it suffices to show that $\|\hat{\Delta \xb} \circ \hat{\Delta \sbb} \|_\infty$ is bounded. First, we state the following inequality that will be instrumental in proving Lemma~\ref{lemmaminalpha2_f}. \begin{lemma}\label{prop:wri} Let $\mathbf{a},\mathbf{b}\in\R{n}$ be any two vectors such that $\mathbf{a}^\top\mathbf{b}\ge 0$. Then \begin{flalign*} \|\mathbf{a}\circ\mathbf{b}\|_2\le \|\mathbf{a}+\mathbf{b}\|_2^2\,. \end{flalign*} \end{lemma} \noindent See~\citep{wright1997primal} for a proof of Lemma~\ref{prop:wri}; in fact, \citep{wright1997primal} proved the tighter bound $\|\mathbf{a}\circ\mathbf{b}\|_2\le2^{-3/2}\|\mathbf{a}+\mathbf{b}\|_2^2$, which we do not need in our proof. \begin{lemma}\label{lemmaminalpha2_f} Let $(\mathbf{x},\mathbf{y},\mathbf{s}) \in \mathcal{N}(\gamma)$ and $\|\mathbf{v}\|_2\le\frac{\gamma\sigma\mu}{4}$. Then $(\hat{\Delta \mathbf{x}},\hat{\Delta \mathbf{y}},\hat{\Delta \mathbf{s}})$ satisfies \begin{flalign} \label{normsmax_f} \|\hat{\Delta \xb}\circ\hat{\Delta \sbb}\|_\infty\le\left(1+\frac{\sigma^2}{1-\gamma}\right)n\mu + \frac{\gamma^2\sigma^2}{16(1-\gamma)}\mu+\frac{\gamma\sigma^2}{2}\mu\,. \end{flalign} \end{lemma} \begin{proof} First, we multiply eqn.~(\ref{eq:iip_3_f_3}) on the left by $(\mathbf{X} \mathbf{S})^{-1/2}$ to get \begin{flalign}\label{eq:Dint} \mathbf{D}^{-1} \hat{\Delta \mathbf{x}} + \mathbf{D} \hat{\Delta \mathbf{s}} = -(\mathbf{X} \mathbf{S})^{1/2}\one_n + \sigma \mu (\mathbf{X} \mathbf{S})^{-1/2} \one_n - (\mathbf{X} \mathbf{S})^{-1/2} \mathbf{v}\,. \end{flalign} Next, pre-multiplying eqn.~\eqref{eq:iip_3_f_2} by $\hat{ \Delta \mathbf{x}}^\top$ and applying eqn.~\eqref{eq:iip_3_f_1}, we have $\hat{\Delta \xb}^\top\hat{ \Delta \mathbf{s}}=0$. This also implies that $\hat{\Delta \xb}^\top\hat{ \Delta \mathbf{s}}=(\mathbf{D}^{-1}\hat{\Delta \xb})^\top(\mathbf{D}\hat{ \Delta \mathbf{s}})$.
Applying Lemma~\ref{prop:wri} with $\mathbf{a}=\mathbf{D}^{-1}\hat{\Delta \xb}$ and $\mathbf{b}=\mathbf{D}\hat{\Delta \sbb}$, and using eqn.~\eqref{eq:Dint} we get \begin{flalign} \|\hat{ \Delta \mathbf{x}}\circ\hat{ \Delta \mathbf{s}}\|_2=&~\|(\mathbf{D}^{-1}\hat{ \Delta \mathbf{x}})\circ(\mathbf{D}\hat{ \Delta \mathbf{s}})\|_2\le\|\mathbf{D}^{-1} \hat{\Delta \mathbf{x}} + \mathbf{D} \hat{\Delta \mathbf{s}}\|_2^2\nonumber\\ =&~\|-(\mathbf{X} \mathbf{S})^{1/2}\one_n + \sigma \mu (\mathbf{X} \mathbf{S})^{-1/2} \one_n - (\mathbf{X} \mathbf{S})^{-1/2} \mathbf{v}\|_2^2\nonumber\\ =&~\|(\mathbf{X} \mathbf{S})^{1/2}\one_n+(\mathbf{X} \mathbf{S})^{-1/2}(\mathbf{v}-\sigma\mu\one_n)\|_2^2\nonumber\\ =&~\|(\mathbf{X} \mathbf{S})^{1/2}\one_n\|_2^2+ 2\cdot\one_n^\top(\mathbf{v}-\sigma\mu\one_n)+\|(\mathbf{X} \mathbf{S})^{-1/2}(\mathbf{v}-\sigma\mu\one_n)\|_2^2\nonumber\\ \le&~n\mu+2n (\|\mathbf{v}\|_2-\sigma\mu)+\|(\mathbf{X} \mathbf{S})^{-1/2}(\mathbf{v}-\sigma\mu\one_n)\|_2^2\label{eq:dineq}. \end{flalign} The inequality in eqn.~\eqref{eq:dineq} follows from $\|(\mathbf{X} \mathbf{S})^{1/2}\one_n\|_2^2=n\mu$ and $\abs{\one_n^\top\mathbf{v}}\le n\,\|\mathbf{v}\|_\infty\le n\,\|\mathbf{v}\|_2$. Next, consider the last term on the right hand side of eqn.~\eqref{eq:dineq}: \begin{flalign} \|(\mathbf{X} \mathbf{S})^{-1/2}(\mathbf{v}-\sigma\mu\one_n)\|_2^2=&~\|(\mathbf{X} \mathbf{S})^{-1/2}\mathbf{v}\|_2^2+\sigma^2\mu^2\|(\mathbf{X} \mathbf{S})^{-1/2}\one_n\|_2^2-2\sigma\mu\,\one_n^\top(\mathbf{X}\mathbf{S})^{-1}\mathbf{v}\nonumber\\ \le&~\frac{\|\mathbf{v}\|_2^2}{\min_i x_is_i}+\sigma^2\mu^2\sum_{i=1}^{n}\frac{1}{x_is_i}-2\sigma\mu\sum_{i=1}^{n}\frac{v_i}{x_is_i}.\label{eq:dint2} \end{flalign} Eqn.~\eqref{eq:dint2} follows from $\|(\mathbf{X} \mathbf{S})^{-1/2}\mathbf{v}\|_2^2\le\|(\mathbf{X} \mathbf{S})^{-1/2}\|_2^2\|\mathbf{v}\|_2^2$ and $\|(\mathbf{X} \mathbf{S})^{-1/2}\|_2^2=\nicefrac{1}{\min_i x_is_i}$. Moreover, it is easy to verify that $\|(\mathbf{X} \mathbf{S})^{-1/2}\one_n\|_2^2=\sum_{i=1}^{n}\nicefrac{1}{x_is_i}$ and $\one_n^\top(\mathbf{X}\mathbf{S})^{-1}\mathbf{v}=\sum_{i=1}^{n}\nicefrac{v_i}{x_is_i}$. Now, we have $x_is_i\ge(1-\gamma)\mu$ for $i=1 \ldots n$ as $(\mathbf{x},\mathbf{y},\mathbf{s})\in\mathcal{N}(\gamma)$. Also, $x_is_i\le\sum_{i=1}^{n}x_is_i=n\mu$. Using the above we rewrite eqn.~\eqref{eq:dint2} as \begin{flalign} \|(\mathbf{X} \mathbf{S})^{-1/2}(\mathbf{v}-\sigma\mu\one_n)\|_2^2\le\frac{\|\mathbf{v}\|_2^2}{(1-\gamma)\mu}+\frac{n\sigma^2\mu^2}{(1-\gamma)\mu}-2\sigma\mu\frac{\sum_{i=1}^{n}v_i}{n\mu}.\label{eq:dint3} \end{flalign} Combining eqns.~\eqref{eq:dineq} and~\eqref{eq:dint3} we get \begin{flalign} \|\hat{ \Delta \mathbf{x}}\circ\hat{ \Delta \mathbf{s}}\|_2\le&~ n\mu+2n (\|\mathbf{v}\|_2-\sigma\mu)+\frac{\|\mathbf{v}\|_2^2}{(1-\gamma)\mu}+\frac{n\sigma^2\mu}{(1-\gamma)}-2\sigma\frac{\sum_{i=1}^{n}v_i}{n}\nonumber\\ =&~\left(1+\frac{\sigma^2}{1-\gamma}\right)n\mu+2n (\|\mathbf{v}\|_2-\sigma\mu)+\frac{\|\mathbf{v}\|_2^2}{(1-\gamma)\mu}-2\sigma\frac{\sum_{i=1}^{n}v_i}{n}\,.\label{eq:dint4} \end{flalign} Using $\|\mathbf{v}\|_2\le\frac{\gamma\sigma\mu}{4}$ and the fact that $\abs{\nicefrac{\mathbf{v}^\top\one_n}{n}}\le\|\mathbf{v}\|_\infty\le\|\mathbf{v}\|_2$, we get \begin{flalign} \|\hat{ \Delta \mathbf{x}}\circ\hat{ \Delta \mathbf{s}}\|_2\le\left(1+\frac{\sigma^2}{1-\gamma}\right)n\mu+\frac{\gamma^2\sigma^2}{16 (1-\gamma)}\mu+\frac{\gamma\sigma^2}{2}\mu. 
\end{flalign} Finally, we conclude the proof using $\|\hat{ \Delta \mathbf{x}}\circ\hat{ \Delta \mathbf{s}}\|_\infty\le\|\hat{ \Delta \mathbf{x}}\circ\hat{ \Delta \mathbf{s}}\|_2$. \end{proof} The next result guarantees the convergence of Algorithm~\ref{algo:iipm}. \begin{lemma}\label{theoremOuter_f} Assume that the constants $\gamma$ and $\sigma$ are such that $\max\{\gamma^{-1},(1-\gamma)^{-1},\sigma^{-1},(1-\frac{5}{4}\sigma)^{-1}\}=\mathcal{O}(1)$. At each iteration of Algorithm~\ref{algo:iipm}, if $\|\mathbf{v}\|_2\le\frac{\gamma\sigma\mu}{4}$, then after $k=\mathcal{O}(n \log{\nicefrac{1}{\epsilon}})$ iterations, $(\mathbf{x}^{k}, \mathbf{s}^{k}, \mathbf{y}^{k})$ satisfies $$\mu_k \leq \epsilon \mu_0.$$ \end{lemma} \vspace{-2mm} \begin{proof} From Lemma~\ref{lemmaminalpha2_f}, \begin{flalign} &~\|\hat{\Delta \xb}\circ\hat{\Delta \sbb}\|_\infty\le \left(1+\frac{\sigma^2}{1-\gamma} + \frac{\gamma^2\sigma^2}{16(1-\gamma)}+\frac{\gamma\sigma^2}{2}\right)n\mu\nonumber\\ \Rightarrow&~\mu\|\hat{\Delta \xb}\circ\hat{\Delta \sbb}\|_\infty^{-1}\ge n^{-1} \left(1+\frac{\sigma^2}{1-\gamma} + \frac{\gamma^2\sigma^2}{16(1-\gamma)}+\frac{\gamma\sigma^2}{2}\right)^{-1}.\label{eq:step1} \end{flalign} Combining eqns.~\eqref{alphalemma2_f} and \eqref{eq:step1}, we get \begin{flalign} \bar{\alpha} \geq \min \left\{ 1, \frac{ \min \{ \gamma \sigma, (1 - \frac{5}{4} \sigma) \} } {4 \,n \left(1+\frac{\sigma^2}{1-\gamma} + \frac{\gamma^2\sigma^2}{16(1-\gamma)}+\frac{\gamma\sigma^2}{2}\right)} \right\}.\label{eq:step2} \end{flalign} Let $\max\{\gamma^{-1},(1-\gamma)^{-1},\sigma^{-1},(1-\frac{5}{4}\sigma)^{-1}\}\le \lambda$ for some constant $\lambda> 1$. Therefore, $\gamma\sigma\ge\frac{1}{\lambda^2}$ and $(1-\frac{5}{4}\sigma)\ge\frac{1}{\lambda}$, which further implies that $\min\{ \gamma \sigma, (1 - \frac{5}{4} \sigma)\}\ge\min\{\frac{1}{\lambda},\frac{1}{\lambda^2}\}=\frac{1}{\lambda^2}$. Also, $\frac{1}{1-\gamma}\le\lambda$. Combining these with eqn.~\eqref{eq:step2}, we get \begin{flalign} \bar{\alpha}\ge\min\left\{1,\frac{\frac{1}{\lambda^2}}{4n (1+\lambda\sigma^2+\frac{\lambda\gamma^2\sigma^2}{16}+\frac{\gamma\sigma^2}{2})}\right\}=\min\left\{1,\frac{1}{4n(\lambda^2+\lambda^3\sigma^2+\frac{\lambda^3\gamma^2\sigma^2}{16}+\frac{\lambda^2\gamma\sigma^2}{2})}\right\}.\label{eq:step3} \end{flalign} Note that in eqn.~\eqref{eq:step3}, $4n(\lambda^2+\lambda^3\sigma^2+\frac{\lambda^3\gamma^2\sigma^2}{16}+\frac{\lambda^2\gamma\sigma^2}{2})> 1$. Thus, \begin{flalign} &~\bar{\alpha}\ge \frac{1}{4n(\lambda^2+\lambda^3\sigma^2+\frac{\lambda^3\gamma^2\sigma^2}{16}+\frac{\lambda^2\gamma\sigma^2}{2})}\nonumber\\ \Rightarrow&~ \frac{\bar{\alpha}}{2}\left(1-\frac{5}{4}\sigma\right)\ge \frac{1-\frac{5}{4}\sigma}{8n(\lambda^2+\lambda^3\sigma^2+\frac{\lambda^3\gamma^2\sigma^2}{16}+\frac{\lambda^2\gamma\sigma^2}{2})}\ge \frac{1}{8n\lambda^3(1+\lambda\sigma^2+\frac{\lambda\gamma^2\sigma^2}{16}+\frac{\gamma\sigma^2}{2})}=\frac{\beta}{n}\,,\label{eq:step4} \end{flalign} where \begin{equation}\label{eqn:pdbeta1} \beta=\frac{1}{8\lambda^3(1+\lambda\sigma^2+\frac{\lambda\gamma^2\sigma^2}{16}+\frac{\gamma\sigma^2}{2})}. \end{equation} We also note that the second inequality in eqn.~\eqref{eq:step4} holds because $1-\frac{5}{4}\sigma\ge\frac{1}{\lambda}$. Let $\mu=\mu_k$ and $\mu(\bar{\alpha})=\mu_{k+1}$ in eqn.~\eqref{mulemma2_f}; applying eqn.~\eqref{eq:step4}, we get $\mu_{k+1} \leq \left(1 - \nicefrac{\beta}{n}\right) \mu_k, \forall k \geq 0$.
Applying the above inequality recursively, we get $\mu_{k} \leq \left(1 - \nicefrac{\beta}{n}\right)^k \mu_0$. Therefore, for any accuracy parameter $\epsilon\in(0,1)$, $\mu_{k}\le\epsilon\mu_0$ holds if $\left(1 - \nicefrac{\beta}{n}\right)^k\le \epsilon$ holds. Since $\left(1 - \nicefrac{\beta}{n}\right)^k\le e^{-k\beta/n}$, it suffices for $k$ to be at least \begin{flalign} k\ge \frac{n}{\beta}\log (\nicefrac{1}{\epsilon}). \end{flalign} Therefore, since $\beta$, as defined in eqn.~(\ref{eqn:pdbeta1}), is a constant, we need $\mathcal{O}(n\log\frac{1}{\epsilon})$ iterations to satisfy $\mu_k\le \epsilon\mu_0$. \end{proof} \vspace{-4mm} \textbf{Proof of Theorem~\ref{thm:2f}.} Finally, the proof of our Theorem~\ref{thm:2f} follows directly by combining Lemma~\ref{lem:conouter} and Lemma~\ref{theoremOuter_f}. \vspace{-3mm} \section{Infeasible IPM}\label{sxn:ipm_i} In this section, we briefly discuss the long-step infeasible IPM using an approximate solver with our sketching-based preconditioner. Recall that such algorithms can, in general, start with an initial point that is not necessarily feasible, but the initial point does need to satisfy certain more relaxed constraints. Following the lines of~\citep{Zh94,Mon03}, let $\mathcal{S}$ be the set of feasible and optimal solutions of the form $(\mathbf{x}^*,\mathbf{y}^*,\mathbf{s}^*)$ for the primal and dual problems of eqns.~\eqref{eq:primal} and~\eqref{eq:dual} and assume that $\mathcal{S}$ is not empty. Then, long-step infeasible IPMs can start with any initial point $(\mathbf{x}^{0},\mathbf{y}^{0},\mathbf{s}^{0})$ that satisfies $(\mathbf{x}^{0},\mathbf{s}^{0}) > 0$ \textit{and} $(\mathbf{x}^{0},\mathbf{s}^{0}) \geq (\mathbf{x}^{*},\mathbf{s}^{*})$, for some feasible and optimal solution $(\mathbf{x}^{*},\mathbf{s}^{*})\in \mathcal{S}$. In words, the starting primal and slack variables must be strictly positive \textit{and} larger (element-wise) when compared to some feasible, optimal primal-dual solution. See Chapter 6 of \citep{wright1997primal} for a discussion regarding why such choices of starting points are relevant to computational practice and can be identified more efficiently than feasible points. The flexibility of infeasible IPMs comes at a cost: long-step \textit{feasible} IPMs converge in $\mathcal{O}(n\log\nicefrac{1}{\epsilon})$ iterations, while long-step \textit{infeasible} IPMs need $\mathcal{O}(n^2 \log\nicefrac{1}{\epsilon})$ iterations to converge~\citep{Zh94,Mon03}. Here $\epsilon$ is the accuracy of the approximate LP solution returned by the IPM. Let \begin{subequations}\label{eq:ipm_residuals} \begin{flalign} \mathbf{A}\mathbf{x}^k-\mathbf{b}&= \mathbf{r}_p^k, \label{eq:primalres}\\ \mathbf{A}^\top\mathbf{y}^k+\mathbf{s}^k-\mathbf{c} &= \mathbf{r}_d^k,\label{eq:dualres} \end{flalign} \end{subequations} where $\mathbf{r}_p^k \in \mathbb{R}^m$ and $\mathbf{r}_d^k \in \mathbb{R}^n$ are the \textit{primal} and \textit{dual} residuals, respectively, which characterize how far the iterate $(\mathbf{x}^k,\mathbf{y}^k,\mathbf{s}^k)$ is from being feasible. As long-step infeasible IPM algorithms iterate and update the primal and dual solutions, the residuals are updated as well. As the algorithm converges, the residuals $\mathbf{r}_p^k$ and $\mathbf{r}_d^k$ are reduced at the same rate as the duality measure $\mu_k$ and eventually converge to zero~\citep{wright1997primal}.
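To make this bookkeeping concrete, the following minimal sketch (ours, for illustration only; the function name and the dense NumPy representation are our own choices, not part of any solver) computes the residuals of eqn.~(\ref{eq:ipm_residuals}) and the duality measure $\mu$ for a given iterate:

\begin{verbatim}
import numpy as np

def residuals_and_mu(A, b, c, x, y, s):
    # Illustrative sketch (names ours): A is m x n, so the primal
    # residual r_p lives in R^m and the dual residual r_d in R^n.
    r_p = A @ x - b          # r_p = A x - b
    r_d = A.T @ y + s - c    # r_d = A^T y + s - c
    mu = x @ s / x.size      # duality measure mu = x^T s / n
    return r_p, r_d, mu
\end{verbatim}

For instance, for the all-ones starting triplet used in our experiments (Section~\ref{sec:exp}), both residuals are generically nonzero, and a successful run drives $\|\mathbf{r}^k\|_2$ to zero at the same rate as $\mu_k$.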
Let $\mathbf{r}^k = (\mathbf{r}_p^k,\mathbf{r}_d^k) \in \mathbb{R}^{n+m}$ be the primal and dual residuals at the $k$-th iteration: it is well-known that the convergence analysis of infeasible long-step IPMs critically depends on $\mathbf{r}^k$ lying on the line segment between zero and the initial residual $\mathbf{r}^0$. Unfortunately, using approximate solvers for the normal equations violates this invariant. Similar to the feasible case, \citep{Mon03} proposed a simple fix for this problem: add a perturbation vector $\mathbf{v}$ to the current primal-dual solution so that the invariant is guaranteed to be satisfied. In this case, the resulting modified system is slightly different from eqns.~\eqref{eq:delxhat}--\eqref{eq:delshat}, as it now involves the residuals\footnote{For notational simplicity, we drop the index $k$.} $\mathbf{r}_p^k$ and $\mathbf{r}_d^k$: \begin{subequations}\label{eq:ipm_inf} \begin{flalign} \mathbf{A}\mathbf{D}^2\mathbf{A}^\top\hat{\Delta\mathbf{y}} &=\mathbf{A}\mathbf{S}^{-1}\mathbf{v}+ \mathbf{p}\,\label{eq:addl_i}\\ \hat{\Delta\mathbf{x}} &=~-\mathbf{x}+\sigma\mu\mathbf{S}^{-1}\one_n-\mathbf{D}^2\hat{\Delta\mathbf{s}}-\mathbf{S}^{-1}\mathbf{v}\label{eq:delxhat_i}\\ \hat{\Delta\mathbf{s}}&=~-\mathbf{r}_d-\mathbf{A}^\top\hat{\Delta\mathbf{y}}\,,\label{eq:delshat_i} \end{flalign} \end{subequations} where the expression for $\mathbf{p}$ is now slightly different from eqn.~\eqref{eqn:pdef} due to infeasibility and is given by $\mathbf{p}=-\mathbf{r}_p-\sigma\mu\mathbf{A}\mathbf{S}^{-1}\one_n+\mathbf{A}\mathbf{x}-\mathbf{A}\mathbf{D}^2\mathbf{r}_d.$ Again, we use the exact same sketching-based construction of $\mathbf{v}$ that provably satisfies the invariant. Next, we present our main theorem for the long-step infeasible IPM: \begin{theorem}\label{thm:1} % Let $\epsilon \in (0,1)$ be an accuracy parameter. Consider the long-step infeasible IPM Algorithm~\ref{algo:ipm_i} that solves eqn.~(\ref{eq:precond_alt}) using the CG or Chebyshev iteration of Algorithm~\ref{algo:PCG} (Section~\ref{sxn:PCG}). Assume that the iterative solver runs with accuracy parameter $\zeta = \nicefrac{1}{2}$ and iteration count $t = \mathcal{O} (\log n)$. % Then, with probability at least 0.9, the long-step infeasible IPM converges after $\mathcal{O}(n^2 \log \nicefrac{1}{\epsilon})$ iterations. % \end{theorem} Before presenting the infeasible IPM algorithm, we need the following definition of the neighborhood; the involvement of the residuals $\mathbf{r}$ makes it different from the one in the feasible case: $$\mathcal{N}(\gamma)=\left\{(\mathbf{x},\mathbf{y},\mathbf{s}): (\mathbf{x},\mathbf{s})>\zero,\ x_i s_i\ge(1-\gamma)\mu\ \text{for all}\ i,\ \text{and}\ \frac{\|\mathbf{r}\|_2}{\|\mathbf{r}^0\|_2} \leq \frac{\mu}{\mu_0}\right\}.$$ Notice that Lemma~\ref{lem:v} also holds for our infeasible IPM. The only difference is the expression of the vector $\mathbf{p}$, which now contains the residuals. Combining a result from~\citep{Mon03} with our preconditioner $\mathbf{Q}^{\nicefrac{-1}{2}}$, we can prove that $\|\mathbf{Q}^{\nicefrac{-1}{2}}\mathbf{p}\|_2= \mathcal{O}(n)\sqrt{\mu}$. Note that this bound is worse than that of Lemma~\ref{thm:boundf_f} by a factor of $\sqrt{n}$. This bound allows us to prove that if we run Algorithm~\ref{algo:PCG} for $\mathcal{O}(\log n)$ iterations, then $\|\mathbf{v}\|_2\le\frac{\gamma\sigma}{4}\mu$.
However, the extra $\sqrt{n}$ factor essentially contributes to the $\mathcal{O}(n^2)$ iteration complexity of Algorithm~\ref{algo:ipm_i}. See Appendix~\ref{app:convergence} for details. \begin{algorithm}[H] \caption{Infeasible IPM}\label{algo:ipm_i}% \hspace*{7mm}\textbf{Input:} $\mathbf{A}\in\RR{m}{n}$, $\mathbf{b}\in\R{m}$, $\mathbf{c}\in\R{n}$, $\gamma \in (0,1)$, tolerance $\epsilon> 0$, centering parameter \hspace*{6mm} $\sigma\in (0,\nicefrac{4}{5})$; \vspace{1mm} \hspace*{\algorithmicindent} \textbf{Initialize:} $k\gets 0$; initial point $(\mathbf{x}^{0},\mathbf{y}^{0},\mathbf{s}^{0})$; \begin{algorithmic}[1] \vspace{1mm} \While{$\mu_k > \epsilon$} \State Compute sketching matrix $\mathbf{W} \in \mathbb{R}^{n \times w}$ (Section~\ref{sxn:background}) with $\zeta=1/2$ and $\delta = O(n^{-2})$; \State Compute $\mathbf{r}_p^k=\mathbf{A}\mathbf{x}^k-\mathbf{b}$; $\mathbf{r}_d^k=\mathbf{A}^\top\mathbf{y}^k+\mathbf{s}^k-\mathbf{c}$; and $\mathbf{p}^k$ from eqn.~(\ref{eq:ipm_residuals}); \State Solve the linear system of eqn.~\eqref{eq:precond_alt} for $\mathbf{z}$ using Algorithm~\ref{algo:PCG} with $\mathbf{W}$ from step (2) \hspace*{-3mm} and $t=\mathcal{O}(\log n)$. Compute $\hat{\Delta \mathbf{y}}=\mathbf{Q}^{\nicefrac{-1}{2}}{\mathbf{z}}$; \State Compute $\mathbf{v}$ using eqn.~\eqref{eq:compv} with $\mathbf{W}$ from step (2); $\hat{\Delta\mathbf{s}}$ using eqn.~\eqref{eq:delshat_i}; $\hat{\Delta\mathbf{x}}$ using \hspace*{-3mm} eqn.~\eqref{eq:delxhat_i}; \State Compute $\tilde{\alpha} = \max\{ \alpha \in [0,1] : (\mathbf{x}^k,\mathbf{y}^k,\mathbf{s}^k) + \alpha (\hat{\Delta \mathbf{x}}^k,\hat{\Delta \mathbf{y}}^k,\hat{\Delta \mathbf{s}}^k) \in \mathcal{N}(\gamma)\}$. \State Compute $\bar{\alpha} = \mathop{\mathrm{argmin}}_{\alpha \in [0, \tilde{\alpha}]}\, (\mathbf{x}^k + \alpha \hat{\Delta \mathbf{x}}^k)^\top (\mathbf{s}^k + \alpha \hat{\Delta\mathbf{s}}^k)$. \State Compute $(\mathbf{x}^{k+1}, \mathbf{y}^{k+1}, \mathbf{s}^{k+1}) = (\mathbf{x}^k,\mathbf{y}^k,\mathbf{s}^k) + \bar{\alpha} (\hat{\Delta \mathbf{x}}^k,\hat{\Delta \mathbf{y}}^k,\hat{\Delta \mathbf{s}}^k)$; set $k \gets k + 1$; \EndWhile \end{algorithmic} \end{algorithm} Notice that, compared to the feasible IPM (Algorithm~\ref{algo:iipm}), Algorithm~\ref{algo:ipm_i} needs an additional step to compute the primal and dual residuals $\mathbf{r}_p$ and $\mathbf{r}_d$ (see Step~(3)). However, the per-iteration cost of Algorithm~\ref{algo:ipm_i} is asymptotically the same as that of Algorithm~\ref{algo:iipm} (see Section~\ref{sxn:IIPM}), since computing $\mathbf{r}_p$ and $\mathbf{r}_d$ only involves matrix-vector products and is therefore dominated by the SVD of $\mathbf{A}\mathbf{D}\mathbf{W}$ and the computation of the perturbation vector $\mathbf{v}$. See Appendix~\ref{app:convergence} for the convergence analysis of Algorithm~\ref{algo:ipm_i}. \section{Experiments}\label{sec:exp} Here we demonstrate the empirical performance of our algorithm on a variety of real-world data sets from the UCI ML Repository~\citep{Dua2019}. More specifically, we consider two problems that were part of the NeurIPS 2003 feature selection challenge: ARCENE and DEXTER~\citep{guyon2005result}. For the ARCENE data set, the task is to distinguish between cancer and normal patterns from mass-spectrometric data, while the DEXTER data set concerns a text classification problem.
Further, we consider DrivFace~\citep{diaz2016reduced}, a problem concerned with identifying the gaze direction in photos of human subjects taken while driving, and a gene expression cancer RNA-Sequencing data set, accessible on the UCI ML Repository, which is part of the RNA-Seq (HiSeq) PANCAN data set \citep{Weinstein2013}. It is a random extraction of gene expressions from patients who have different types of tumors: BRCA, KIRC, COAD, LUAD and PRAD. We considered the binary classification task of identifying BRCA versus other types. We also perform experiments on synthetic data sets (see Appendix~\ref{app:rand} for details); the results for synthetic and real-world data were qualitatively similar, so we highlight results on several representative real-world data sets. The experiments were implemented in Python and run on a server with an Intel E5-2623V3@3.0GHz CPU (8 cores) and 64GB of RAM. As an application, we consider $\ell_1$-regularized SVMs. All of the data sets are concerned with binary classification with $m \ll n$, where $n$ is the number of features. The SVM problem is a core model in machine learning that is crucial for applications in both regression and classification. While there are many variations of the SVM, we use the classical version with an $\ell_1$ regularizer to illustrate the application of our algorithm. In Appendix~\ref{app:svm}, we describe the $\ell_1$-SVM problem and how it can be formulated as an LP. Here, $m$ is the number of training points, $n$ is the feature dimension, and the size of the constraint matrix in the LP becomes $m \times (2n +1)$. \vspace{0.02in}\noindent\textbf{Comparisons and Metrics}. \textcolor{black}{Our empirical evaluations serve as a proof-of-concept verifying our theoretical findings, by evaluating the effectiveness of our randomized preconditioner combined with an approximate solver. State-of-the-art implementations of LP solvers are highly optimized; therefore, a fair running-time comparison between our algorithm and industrial-grade solvers is unlikely, since the true algorithmic efficiency of commercial solvers is confounded by their built-in optimization strategies. We do not report running times to avoid such direct comparisons with heavily optimized benchmark LP solvers.} In most of our evaluations, we use the infeasible case, \emph{i.e.},~Algorithm~\ref{algo:ipm_i}, as finding a strictly feasible starting point is a non-trivial task. In addition, we focus on the CG iterative solver to compute the approximate search directions. We compare Algorithm~\ref{algo:ipm_i} with a standard IPM (see \citep[Ch.~10]{NumericalRecipes}) using CG, and a standard IPM using a direct solver. We also use CVXPY as a benchmark to compare the accuracy of the solutions; we define the \emph{relative error} $\nicefrac{\|\hat{\mathbf{x}} - \mathbf{x}^\star\|_2}{\|\mathbf{x}^\star\|_2}$, where $\hat{\mathbf{x}}$ is our solution and $\mathbf{x}^\star$ is the solution generated by CVXPY. \textcolor{black}{In addition, in some of the experiments, we also consider the primal-dual error $\mathbf{c}^\top\mathbf{x}-\mathbf{b}^\top\mathbf{y}$ as a key metric to evaluate the quality of the solution returned by our algorithm.} We also consider the number of \emph{outer iterations}, namely the number of iterations of the IIPM algorithm, as well as the number of \emph{inner iterations}, namely the number of iterations of the CG solver.
The number of \emph{inner iterations} depends heavily on the condition number of the matrices in the normal equations (eqn.~\eqref{eq:normal} or \eqref{eq:precond}), which we also report. We denote the relative stopping tolerance for CG by \tolCG~and the outer-iteration residual error by \tolOuterRes. If not specified: \tolOuterRes~$= 10^{-9}$, \tolCG~$ = 10^{-5}$, and $\sigma = 0.5$. We evaluated a Gaussian sketching matrix, and the initial triplet $(\mathbf{x}, \mathbf{y}, \mathbf{s})$ for all IPM algorithms was set to be all ones. \begin{table*}[ht] \caption{Comparison of (our) sketched IPM with CG, standard IPM with CG, and standard IPM with a direct solver, for the $\ell_1$-SVM problem on UCI Machine Learning Repository~\citep{Dua2019} data sets. Across all, $\tau = 10^{-9}$ and a relative error of $10^{-3}$ or less was achieved. We define $\kappa_{\text{Sk}} = $ \kprecNormal~and $\kappa_{\text{Stan}} = \kappa(\mathbf{A} \mathbf{D}^2\mathbf{A}^T)$.} \label{tableSVM} \begin{center} \resizebox{\textwidth}{!}{\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\textbf{Problem}} & \multicolumn{1}{|c|}{\textbf{Size}} & \multicolumn{4}{|c|}{\textbf{Sketch IPM w/ Precond. CG}} & \multicolumn{3}{|c|}{\textbf{Stand. IPM w/ Unprec. CG}} & \multicolumn{1}{|c|}{\textbf{IPM w/ Dir.}} \\ & \multicolumn{1}{c}{$(m \times N )$} & \multicolumn{1}{|c}{$w$} & \multicolumn{1}{c}{In. It.} & \multicolumn{1}{c}{Out. It.} & \multicolumn{1}{c|}{$\kappa_{\text{Sk}}$} & \multicolumn{1}{c}{In. It.} & \multicolumn{1}{c}{Out. It.} & \multicolumn{1}{c|}{$\kappa_{\text{Stan}}$} & \multicolumn{1}{c|}{Out. It.}\\ \hline ARCENE &$(100 \times 10K)$ & $200$ & $\mathbf{30}$ & $50$ & $38.09$ & $\mathbf{1.1 K}$ & $59$ & $4.4 \times 10^{8}$ & $50$\\ DEXTER &$(300 \times 20K)$ & $500$ & $\mathbf{39}$ & $39$ & $75.42$ & $\mathbf{4.6K}$ & $39$ & $7.6 \times 10^{9}$ & $39$\\ DrivFace &$(606 \times 6400)$ & $1000$ & $\mathbf{50}$ & $42$ & $68.87$ & $\mathbf{139K}$ & $43$ & $1.7 \times 10^{13}$ & $42$\\ Gene RNA &$(801 \times 20531)$ & $2000$ & $\mathbf{27}$ & $44$ & $20.03$ & $\mathbf{101K}$ & $208$ & $4.7 \times 10^{12}$ & $44$\\ \hline \end{tabular}} \end{center} \label{tab:multicol} \end{table*} \vspace{0.02in}\noindent\textbf{Experimental Results}. Figure~\ref{fig:iter_ARCENE}(a) shows that our Algorithm~\ref{algo:iipm} uses an order of magnitude fewer \textit{inner} iterations than the un-preconditioned standard solver. This is due to the improved conditioning of the respective matrices in the normal equations, as demonstrated in Figure~\ref{fig:iter_ARCENE}(b). Across various real-world and synthetic data sets, the results were qualitatively similar to those shown in Figure~\ref{fig:iter_ARCENE}. Results for several real-world data sets are summarized in Table~\ref{tableSVM}. In general, our preconditioned CG solver used in Algorithm~\ref{algo:iipm} does not increase the total number of \textit{outer} iterations as compared to the standard IPM with CG and the standard IPM with a direct linear solver (denoted IPM w/ Dir.), as seen in Table~\ref{tableSVM}. In fact, the un-preconditioned CG solver can require considerably more outer iterations, most notably for Gene RNA, which needs roughly five times as many. Figure~\ref{fig:iter_ARCENE} also demonstrates the relative insensitivity to the choice of $w$ (the sketching dimension, i.e., the number of columns of the sketching matrix $\mathbf{W}$ of Section~\ref{sxn:background}). For smaller values of $w$, our algorithm requires more inner iterations.
However, across various choices of $w$, the number of inner iterations is always an order of magnitude smaller than the number required by the standard solver. Figure~\ref{fig:ARCENE} shows the performance of our algorithm for a range of ($w$, \tolCG) pairs. Figure~\ref{fig:ARCENE}(a) demonstrates that the number of inner iterations is robust to the choice of \tolCG~ and $w$. The number of inner iterations varies between $15$ and $35$ for the ARCENE data set, while the standard IPM took on the order of $1,000$ iterations across all parameter settings. Across all settings, the relative error remained at $0.04\%$. In general, our sketched IPM produces a highly accurate solution across parameter settings. Thus, we do not report additional numerical results for the relative error, which was consistently $10^{-3}$ or less. Figure~\ref{fig:ARCENE}(b) demonstrates a trade-off of our approach: as both \tolCG~ and $w$ are increased, the condition number \kprecNormal~decreases, corresponding to better-conditioned systems. As a result, fewer inner iterations are required. In this context, Figure~\ref{fig:ARCENE_more} shows how the number of inner CG iterations (Figure~\ref{fig:ARCENE_v_more1}) and the condition number of $\mathbf{Q}^{-1/2}\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\mathbf{Q}^{-1/2}$ (Figure~\ref{fig:ARCENE_v_more2}) decrease as the sketching dimension $w$ increases, for various values of \tolCG\,. \begin{figure}[t] \centering \subfigure[ ]{ \label{fig:ARCENE_v_more1} \includegraphics[width=2.8in]{figures/svm_ARCENE_m100_n20001_9-eps-converted-to.pdf}} \subfigure[]{ \label{fig:ARCENE_v_more2} \includegraphics[width=2.8in]{figures/svm_ARCENE_m100_n20001_8-eps-converted-to.pdf}} \caption{\emph{ARCENE data set}: As $w$ increases, (a) the number of inner iterations decreases and is relatively robust to \tolCG, and (b) the condition number decreases as well. } \label{fig:ARCENE_more} \end{figure} \begin{figure}[t] \centering \includegraphics[width=15cm,height=15cm,keepaspectratio]{figures/real_pd_figures.png} \caption{\emph{Feasible vs. infeasible start}: For all four data sets, the algorithm takes far fewer outer iterations when started from a feasible point. Here, we take $w=1000$. Primal-dual errors are in log scale. } \label{fig:starting_point} \end{figure} \textcolor{black}{ Next, we further evaluate the performance of our algorithm in terms of the total number of \emph{outer iterations} when starting from an infeasible point vs. starting from a strictly feasible point. The objective is to study the effect of feasibility on the convergence of our algorithms. For the feasible IPM, we assume that we already have a strictly feasible starting point. See Appendix~\ref{app:feasible_start} on how to find a strictly feasible point of an LP. Figure~\ref{fig:starting_point} shows that if a feasible starting point is known beforehand, then, across all the data sets, our algorithm indeed takes far fewer outer iterations than its infeasible-start counterpart. Additionally, starting from a feasible or an infeasible point does not seem to affect the rate at which the primal-dual error decreases.} Finally, we run our IPM solver without the $\mathbf{v}$-correction, \emph{i.e.}, without using the perturbation vector $\mathbf{v}$ in step~(5) of Algorithm~\ref{algo:ipm_i}, and notice that our algorithm still converges without significantly changing the inner or outer iteration counts (see Table \ref{tableNoCorrection}).
We leave the corresponding theoretical analysis for future work. We use the same tolerance parameters and sketching dimension as in Table~\ref{tab:multicol}. \begin{table*}[ht] \caption{Comparison of (our) sketched IPM with and without correction, for the $\ell_1$-SVM problem on UCI Machine Learning Repository~\citep{Dua2019} data sets. Across all, $\tau = 10^{-9}$ and a relative error of $10^{-3}$ or less was achieved.} \label{tableNoCorrection} \begin{center} \resizebox{\textwidth}{!}{\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\textbf{Problem}} & \multicolumn{1}{|c|}{\textbf{Size}} & \multicolumn{1}{|c|}{\textbf{Sketch Size}} & \multicolumn{3}{|c|}{\textbf{Precond. Sketch IPM without Correction}} & \multicolumn{3}{|c|}{\textbf{Precond. Sketch IPM with Correction}} \\ & \multicolumn{1}{c}{$(m \times N )$} & \multicolumn{1}{|c|}{$w$} & \multicolumn{1}{c}{Max In. It.} & \multicolumn{1}{c}{Sum In. It.} & \multicolumn{1}{c|}{Out. It.} & \multicolumn{1}{c}{Max In. It.} & \multicolumn{1}{c}{Sum In. It.} & \multicolumn{1}{c|}{Out. It.} \\ \hline ARCENE &$(100 \times 10K)$ & $200$ & $29$ & $1868$ & $73$ & $29$ & $1873$ & $73$\\ DEXTER &$(300 \times 20K)$ & $500$ & $40$ & $2307$ & $62$ & $40$ & $2271$ & $62$\\ DrivFace &$(606 \times 6400)$ & $1000$ & $52$ & $2820$ & $66$ & $50$ & $2804$ & $66$\\ Gene RNA &$(801 \times 20531)$ & $2000$ & $27$ & $1445$ & $67$ & $27$ & $1434$ & $67$\\ \hline \end{tabular}} \end{center} \label{tab:multicol2} \end{table*} \section{Conclusions and Open Problems} We proposed and analyzed a long-step IPM algorithm (both feasible and infeasible) using a preconditioned conjugate gradient solver for the normal equations and a novel perturbation vector to correct for the error due to the approximate solver. Thus, we speed up each iteration of the IPM algorithm, without increasing the overall number of iterations. We demonstrate empirically that our IPM requires an order of magnitude fewer inner iterations within each linear solve than standard IPMs. Several important questions remain open. First of all, from a theoretical perspective, using the vector $\mathbf{v}$ to correct for infeasibility was necessary for our analysis; from an empirical perspective, however, we observed that the correction was not needed. A theoretical analysis of long-step IPMs without a correction vector would be of interest. Second, it would be interesting to explore whether there are other ways to use the preconditioner to design a feasible step instead of the $\mathbf{v}$-correction. \textcolor{black}{Third}, a thorough empirical evaluation of the effect of preconditioning and approximate solvers, with or without the $\mathbf{v}$-correction, would be a significant undertaking in future work. \textcolor{black}{Finally, it would be interesting to investigate what other theoretically impactful ideas could be used to efficiently solve linear programs in practice. There exist barriers to using methods such as \textit{inverse maintenance} and \textit{lazy updates} in practice, as discussed in Section \ref{sxn:comparison}. However, it is unknown whether these issues are fundamental or avoidable.} \section{Richardson Iteration}\label{sxn:rchardson} \section{Convergence analysis of Algorithm~\ref{algo:ipm_i}}\label{app:convergence} \textcolor{black}{The proofs for the long-step feasible IPM and the long-step infeasible IPM differ, since the latter needs additional assumptions on the initial iterate $(\mathbf{x}^0,\mathbf{y}^0,\mathbf{s}^0)$.
For a detailed comparison between the proofs of these two variants, we refer the readers to Chapters~5 and~6 of~\citep{wright1997primal}. In the context of an approximate solver, most of the proofs related to the convergence of long-step infeasible IPMs follow from~\citep{Mon03}, except that we use our sketching-based preconditioner $\mathbf{Q}^{-1/2}$, as well as our choice of the vector $\mathbf{v}$ that corrects for the error caused by the inexact solver. Here, we only prove results that are different from~\citep{Mon03}. For our feasible IPM, there is no prior work analyzing the theoretical aspects of long-step feasible IPMs with an approximate solver. Therefore, compared to the prototypical long-step feasible IPM of~\citep{wright1997primal}, our proofs needed extra care in bounding the duality gap decrease in each iteration when the linear system is only approximately solved.} \subsection{Number of iterations for the iterative solver} In this section, most of the proofs follow \citep{Mon03}, except that we use our sketching-based preconditioner $\mathbf{Q}^{\nicefrac{-1}{2}}$. Recall that $\mathcal{S}$ is the set of optimal and feasible solutions for the proposed LP. \begin{lemma}\label{lem:ineq} Let $(\mathbf{x}^{0},\mathbf{y}^{0},\mathbf{s}^{0})$ be the initial point with $(\mathbf{x}^{0},\mathbf{s}^{0})>\zero$ and $(\mathbf{x}^{*},\mathbf{y}^{*},\mathbf{s}^{*})\in\mathcal{S}$ such that $(\mathbf{x}^{*},\mathbf{s}^{*})\le(\mathbf{x}^{0},\mathbf{s}^{0})$ with $\mathbf{s}^{0}\ge|\mathbf{A}^\top\mathbf{y}^{0}-\mathbf{c}|$. Then, for any point $(\mathbf{x},\mathbf{y},\mathbf{s})\in\mathcal{N}(\gamma)$ such that $\mathbf{r}=\eta\,\mathbf{r}^{0}$ and $0\le\eta\le\min\left\{1,\frac{\mathbf{s}^\top\mathbf{x}}{\mathbf{s}^{0\top}\mathbf{x}^{0}}\right\}$, we get \begin{subequations} \begin{flalign} &(i)~~\eta\,(\mathbf{x}^\top\mathbf{s}^{0}+\mathbf{s}^\top\mathbf{x}^{0})\le\,3n\mu\label{eq:ineq}\,,\\ &(ii)~~\eta\,\|\mathbf{S}(\mathbf{x}^{*}-\mathbf{x}^{0})\|_2\le\eta\,\|\mathbf{S}\mathbf{x}^{0}\|_2\le\eta\mathbf{s}^{\top}\mathbf{x}^{0}\le\,3n\mu\label{eq:ineq2}\,,\\ &(iii)~~\eta\,\|\mathbf{X}(\mathbf{s}^{0}+\mathbf{A}^\top\mathbf{y}^{0}-\mathbf{c})\|_2\le~2\eta\,\|\mathbf{X}\mathbf{s}^{0}\|_2\le~2\eta\,\mathbf{x}^\top\mathbf{s}^{0}\le~6n\mu\label{eq:ineq3}\,. \end{flalign} \end{subequations} \end{lemma} \begin{proof} We prove eqns.~\eqref{eq:ineq}--\eqref{eq:ineq3} below. \noindent\textbf{Proof of eqn.~\eqref{eq:ineq}.} For completeness, we provide a proof of eqn.~\eqref{eq:ineq} following~\citep{Mon03}.
Since $(\mathbf{x}^{*}, \mathbf{s}^{*}, \mathbf{y}^{*}) \in \mathcal{S}$, the following equalities hold: \begin{subequations} \begin{flalign} \mathbf{A}\mathbf{x}^{*} &=\mathbf{b}\label{eq:cond1}\\ \mathbf{A}^\top\mathbf{y}^{*}+\mathbf{s}^{*} &=\mathbf{c}.\label{eq:cond2} \end{flalign} \end{subequations} \noindent Furthermore, $\mathbf{r}=\eta \mathbf{r}^{0}$ implies \begin{subequations} \begin{flalign} \mathbf{A}\mathbf{x}-\mathbf{b} &=\eta(\mathbf{A}\mathbf{x}^{0}-\mathbf{b})\label{eq:cond3}\\ \mathbf{A}^\top\mathbf{y}+\mathbf{s}-\mathbf{c} &=\eta(\mathbf{A}^\top\mathbf{y}^{0}+\mathbf{s}^{0}-\mathbf{c}).\label{eq:cond4} \end{flalign} \end{subequations} \noindent Combining eqn.~\eqref{eq:cond1} with eqn.~\eqref{eq:cond3} and eqn.~\eqref{eq:cond2} with eqn.~\eqref{eq:cond4}, we get \begin{subequations} \begin{flalign} \mathbf{A}\big(\mathbf{x}-\eta\mathbf{x}^{0}-(1-\eta)\mathbf{x}^{*}\big) &=\zero\label{eq:cond5}\\ \mathbf{A}^\top(\mathbf{y}-\eta\mathbf{y}^{0}-(1-\eta)\mathbf{y}^{*})+(\mathbf{s}-\eta\mathbf{s}^{0}-(1-\eta)\mathbf{s}^{*}) &=\zero.\label{eq:cond6} \end{flalign} \end{subequations} Multiplying eqn.~\eqref{eq:cond6} by $\left(\mathbf{x}-\eta \mathbf{x}^{0}-(1-\eta)\mathbf{x}^{*}\right)^\top$ on the left and using eqn.~\eqref{eq:cond5}, we get $$ \left(\mathbf{x}-\eta \mathbf{x}^{0}-(1-\eta)\mathbf{x}^{*}\right)^\top\left(\mathbf{s}-\eta\mathbf{s}^{0}-(1-\eta)\mathbf{s}^{*}\right)=0. $$ Expanding, we get % \begin{flalign} &\eta\left(\mathbf{x}^{0^{\top}}\mathbf{s}+\mathbf{x}^{\top}\mathbf{s}^{0}\right)=\eta^{2} \mathbf{x}^{0^{\top}}\mathbf{s}^{0}+(1-\eta)^{2} (\mathbf{x}^{*})^{\top} \mathbf{s}^{*}+\mathbf{x}^{\top}\mathbf{s}\nonumber\\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+\eta(1-\eta)\left(\mathbf{x}^{0^{\top}}\mathbf{s}^{*}+(\mathbf{x}^{*})^{\top} \mathbf{s}^{0}\right)-(1-\eta)\left((\mathbf{x}^{*})^{\top}\mathbf{s}+\mathbf{x}^{\top} \mathbf{s}^{*}\right).\label{eq:cond7} \end{flalign} % Next, we use the given conditions and rewrite eqn.~\eqref{eq:cond7} as \begin{flalign} \eta\left(\mathbf{x}^{0^{\top}} \mathbf{s}+\mathbf{s}^{0^{\top}} \mathbf{x}\right) & \leq \eta^{2} \mathbf{x}^{0^{\top}} \mathbf{s}^{0}+\mathbf{x}^{\top} \mathbf{s}+\eta(1-\eta)\left(\mathbf{x}^{0^{\top}} \mathbf{s}^{*}+\mathbf{s}^{0^{\top}} \mathbf{x}^{*}\right) \nonumber\\ & \leq \eta^{2} \mathbf{x}^{0^{\top}} \mathbf{s}^{0}+\mathbf{x}^{\top} \mathbf{s}+2 \eta(1-\eta) \mathbf{x}^{0^{\top}} \mathbf{s}^{0} \nonumber\\ & \leq 2 \eta \mathbf{x}^{0^{\top}} \mathbf{s}^{0}+\mathbf{x}^{\top} \mathbf{s} \leq 3 \mathbf{x}^{\top} \mathbf{s}=3n\mu.\label{eq:fin_bound} \end{flalign} The first inequality in eqn.~\eqref{eq:fin_bound} follows from the following facts. First, $(1-\eta)((\mathbf{x}^{*})^{\top}\mathbf{s}+\mathbf{x}^{\top} \mathbf{s}^{*})\ge0$ as $(\mathbf{x}^{*}, \mathbf{s}^{*}) \geq \zero$ and $(\mathbf{x}, \mathbf{s}) > \zero$. Second, as $(\mathbf{x}^{*}, \mathbf{s}^{*}, \mathbf{y}^{*}) \in \mathcal{S}$ (which implies $\mathbf{x}^{*}\circ\mathbf{s}^{*}=\zero$), we have $(\mathbf{x}^{*})^{\top} \mathbf{s}^{*}=0$. The second inequality in eqn.~\eqref{eq:fin_bound} holds as $\mathbf{x}^{*} \leq \mathbf{x}^{0}$, $\mathbf{s}^{*} \leq \mathbf{s}^{0}$, $(\mathbf{x}^{*}, \mathbf{s}^{*}) \geq \zero$, and $(\mathbf{x}^{0}, \mathbf{s}^{0}) \geq \zero$; combining them we get $(\mathbf{x}^{0^{\top}} \mathbf{s}^{*}+\mathbf{s}^{0^{\top}} \mathbf{x}^{*})\le2\,\mathbf{x}^{0^\top}\mathbf{s}^{0}$.
The third inequality in eqn.~\eqref{eq:fin_bound} holds because $\eta^{2} \mathbf{x}^{0^{\top}}\mathbf{s}^{0}+2 \eta(1-\eta) \mathbf{x}^{0^{\top}} \mathbf{s}^{0}=2 \eta\,\mathbf{x}^{0^{\top}} \mathbf{s}^{0}-\eta^2\mathbf{x}^{0^{\top}} \mathbf{s}^{0}\le2 \eta\,\mathbf{x}^{0^{\top}} \mathbf{s}^{0}$. The final inequality holds as $\eta \leq \frac{\mathbf{x}^{\top} \mathbf{s}}{\mathbf{x}^{0^\top} \mathbf{s}^{0}}$.\qed \noindent\textbf{Proof of eqn.~\eqref{eq:ineq2}.} The last inequality follows from eqn.~\eqref{eq:ineq}. The second to last inequality is also easy to prove as \begin{flalign} \|\mathbf{S}\mathbf{x}^{0}\|_2=\sqrt{\sum_{i=1}^{n}(s_ix_i^{0})^2}\le\sqrt{\left(\sum_{i=1}^{n}s_ix_i^{0}\right)^2}=\mathbf{s}^\top\mathbf{x}^{0}\,. \end{flalign} \noindent To prove the first inequality in eqn.~\eqref{eq:ineq2}, we use the fact $\mathbf{x}^{0}\ge\mathbf{x}^{*}$ as follows: \begin{flalign} \|\mathbf{S}\mathbf{x}^{0}\|_2^2-\|\mathbf{S}(\mathbf{x}^{*}-\mathbf{x}^{0})\|_2^2=&~\sum_{i=1}^n(s_ix_i^{0})^2-\sum_{i=1}^n s_i^2\left((x_i^{*})^2+(x_i^{0})^2-2x_i^{*}x_i^{0}\right)\nonumber\\ =&~\sum_{i=1}^n s_i^2\left(2x_i^{*}x_i^{0}-(x_i^{*})^2\right)\ge 0\nonumber\,. \end{flalign}\qed \noindent \textbf{Proof of eqn.~\eqref{eq:ineq3}.} To prove this, we use an approach similar to that for eqn.~\eqref{eq:ineq2}. The last inequality directly follows from eqn.~\eqref{eq:ineq}; the second to last inequality is also easy to prove as \begin{flalign} \|\mathbf{X}\mathbf{s}^{0}\|_2=\sqrt{\sum_{i=1}^{n}(x_is_i^{0})^2}\le\sqrt{\left(\sum_{i=1}^{n}x_is_i^{0}\right)^2}=\mathbf{x}^\top\mathbf{s}^{0}\,. \end{flalign} % For the first inequality, we proceed as follows: \begin{flalign} \|\mathbf{X}(\mathbf{s}^{0}+\mathbf{A}^\top\mathbf{y}^{0}-\mathbf{c})\|_2^2=&~\|\mathbf{X}\mathbf{s}^{0}\|_2^2+\|\mathbf{X}(\mathbf{A}^\top\mathbf{y}^{0}-\mathbf{c})\|_2^2+2\mathbf{s}^{0^\top}\mathbf{X}^\top\mathbf{X}(\mathbf{A}^\top\mathbf{y}^{0}-\mathbf{c})\nonumber\\ =&~\|\mathbf{X}\mathbf{s}^{0}\|_2^2+\sum_{i=1}^n x_i^2(\mathbf{A}^\top\mathbf{y}^{0}-\mathbf{c})_i^2+2\sum_{i=1}^n x_i^2s_i^{0}(\mathbf{A}^\top\mathbf{y}^{0}-\mathbf{c})_i\nonumber\\ \le&~\|\mathbf{X}\mathbf{s}^{0}\|_2^2+\sum_{i=1}^n (x_is_i^{0})^2+2\sum_{i=1}^n(x_is_i^{0})^2\nonumber\\ =&~\|\mathbf{X}\mathbf{s}^{0}\|_2^2+\|\mathbf{X}\mathbf{s}^{0}\|_2^2+2\|\mathbf{X}\mathbf{s}^{0}\|_2^2=4\|\mathbf{X}\mathbf{s}^{0}\|_2^2.\label{eq:note4} \end{flalign} The inequality in eqn.~\eqref{eq:note4} follows from $x_i\ge0$, $s_i^{0}\ge0$ and $\abs{(\mathbf{A}^\top\mathbf{y}^{0}-\mathbf{c})_i}\le s_i^{0}$ for all $i=1\ldots n$. \end{proof} \noindent Our next result bounds $\|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p}\|_2$, which is instrumental in proving the final bound. \begin{lemma}\label{thm:boundf} Let $(\mathbf{x}^{0},\mathbf{y}^{0},\mathbf{s}^{0})$ be the initial point with $(\mathbf{x}^{0},\mathbf{s}^{0})>\zero$ such that $\mathbf{x}^{0}\ge\mathbf{x}^{*}$ and $\mathbf{s}^{0}\ge\max\{\mathbf{s}^{*},|\mathbf{c}-\mathbf{A}^\top\mathbf{y}^{0}|\}$ for some $(\mathbf{x}^{*},\mathbf{y}^{*},\mathbf{s}^{*})\in\mathcal{S}$. Furthermore, let $(\mathbf{x},\mathbf{y},\mathbf{s})\in\mathcal{N}(\gamma)$ with $\mathbf{r}=\eta\,\mathbf{r}^{0}$ for some $0\le\eta\le1$. If the sketching matrix $\mathbf{W}\in\RR{n}{w}$ satisfies the condition in eqn.~\eqref{eq:pdcond1}, then \begin{flalign} \|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p}\|_2\le~\sqrt{2}\left(\frac{9n}{\sqrt{1-\gamma}}+\sigma\sqrt{\frac{n}{1-\gamma}}+\sqrt{n}\right)\sqrt{\mu}\nonumber\,.
\end{flalign} Recall that $\mathbf{r}=(\mathbf{r}_p,\mathbf{r}_d)=(\mathbf{A}\mathbf{x}-\mathbf{b},\mathbf{A}^\top\mathbf{y}+\mathbf{s}-\mathbf{c})$ and $\mathbf{r}^{0}=(\mathbf{r}_p^{0},\mathbf{r}_d^{0})=(\mathbf{A}\mathbf{x}^{0}-\mathbf{b},\mathbf{A}^\top\mathbf{y}^{0}+\mathbf{s}^{0}-\mathbf{c})$\,. \end{lemma} \begin{proof} Note that after correcting the approximation error of the iterative solver using $\mathbf{v}$, the primal and dual residuals $\mathbf{r}=(\mathbf{r}_p,\mathbf{r}_d)$ corresponding to an iterate $(\mathbf{x},\mathbf{y},\mathbf{s})\in\mathcal{N}(\gamma)$ always lie on the line segment between zero and $\mathbf{r}^{(0)}$. In other words, $\mathbf{r}=\eta\mathbf{r}^{(0)}$ always holds for some $\eta\in[0,1]$. This was formally proven in \citep[Lemma 3.3]{Mon03}. In order to bound $\|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p}\|_2$, first we express $\mathbf{p}$ as in eqn.~\eqref{eq:normal} and rewrite \begin{flalign} \mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p} =&~\mathbf{Q}^{-\nicefrac{1}{2}}\left(-\mathbf{r}_p-\sigma\mu\mathbf{A}\mathbf{S}^{-1}\one_n+\mathbf{A}\mathbf{x}-\mathbf{A}\mathbf{D}^2\mathbf{r}_d\right).\label{eq:recur3} \end{flalign} % Then, applying the triangle inequality to $\|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p}\|_2$ in eqn.~\eqref{eq:recur3}, we get \begin{flalign} \|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p}\|_2\le \Delta_1+\Delta_2+\Delta_3+\Delta_4\label{eq:recur4}\,, \end{flalign} where \begin{flalign} \Delta_1=&~\|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{r}_p\|_2\nonumber\,,\\ \Delta_2=&~\sigma\mu\|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{A}\mathbf{D}(\mathbf{X}\mathbf{S})^{-\nicefrac{1}{2}}\one_n\|_2\nonumber\,,\\ \Delta_3=&~\|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{A}\mathbf{D}\Db^{-1}\mathbf{x}\|_2\,,\nonumber\\ \Delta_4=&~\|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{A}\mathbf{D}^2\mathbf{r}_d\|_2\,.\nonumber \end{flalign} % To bound $\Delta_1$, $\Delta_2$, $\Delta_3$ and $\Delta_4$ we heavily use the condition of eqn.~\eqref{eq:pdcond1}. \paragraph{Bounding $\Delta_1$.} Using $\mathbf{r}_p=\eta\,\mathbf{r}_p^{0}$, $\mathbf{r}_p^{0}=\mathbf{A}\mathbf{x}^{0}-\mathbf{b}$ and $\mathbf{b}=\mathbf{A}\mathbf{x}^{*}$, we rewrite $\Delta_1$ as \begin{flalign} \Delta_1=&~\eta\,\|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{A}(\mathbf{x}^{0}-\mathbf{x}^{*})\|_2\nonumber\\ =&~\eta\,\|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{A}\mathbf{D}\Db^{-1}(\mathbf{x}^{0}-\mathbf{x}^{*})\|_2\nonumber\\ \le&~\eta\,\|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{A}\mathbf{D}\|_2\|\mathbf{D}^{-1}(\mathbf{x}^{0}-\mathbf{x}^{*})\|_2\nonumber\\ \le&~\sqrt{2}\eta\,\|\mathbf{D}^{-1}(\mathbf{x}^{0}-\mathbf{x}^{*})\|_2\nonumber\\ =&~\sqrt{2}\eta\,\|(\mathbf{X}\mathbf{S})^{-\nicefrac{1}{2}}\mathbf{S}(\mathbf{x}^{0}-\mathbf{x}^{*})\|_2\nonumber\\ \le&~\sqrt{2}\eta\,\|(\mathbf{X}\mathbf{S})^{-\nicefrac{1}{2}}\|_2\,\|\mathbf{S}(\mathbf{x}^{0}-\mathbf{x}^{*})\|_2\,,\label{eq:del11} \end{flalign} where the above steps follow from submultiplicativity and eqn.~\eqref{eq:pdcond1}. From eqn.~\eqref{eq:pdcond1}, note that we have $\|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{A}\mathbf{D}\|_2\le\sqrt{2}$ as $\zeta\le 1$\,. 
Now, applying eqn.~\eqref{eq:ineq2} and $\|(\mathbf{X}\mathbf{S})^{-\nicefrac{1}{2}}\|_2=\max_{1\le i \le n}\frac{1}{\sqrt{x_is_i}}$, we further have % \begin{flalign} \Delta_1\le&~\sqrt{2}\,\max_{1\le i \le n}\frac{1}{\sqrt{x_is_i}}\cdot 3n\mu\nonumber\\ \le&~ 3\sqrt{2}\,n\sqrt{\frac{\mu}{1-\gamma}}\label{eq:del12}\,, \end{flalign} where the last inequality follows from $(\mathbf{x},\mathbf{y},\mathbf{s})\in\mathcal{N}(\gamma)$. \paragraph{Bounding $\Delta_2$.} Applying submultiplicativity, we get \begin{flalign} \Delta_2=&~\sigma\mu\,\|\mathbf{Q}^{-\nicefrac{1}{2}}\,\mathbf{A}\mathbf{D}\,(\mathbf{X}\mathbf{S})^{-\nicefrac{1}{2}}\one_n\|_2\nonumber\\ \le&~\sigma\mu\,\|\mathbf{Q}^{-\nicefrac{1}{2}}\,\mathbf{A}\mathbf{D}\|_2\|(\mathbf{X}\mathbf{S})^{-\nicefrac{1}{2}}\one_n\|_2\nonumber\\ \le&~\sqrt{2}\,\sigma\mu\,\|(\mathbf{X}\mathbf{S})^{-\nicefrac{1}{2}}\one_n\|_2\nonumber\\ =&~\sqrt{2}\,\sigma\mu\,\sqrt{\sum_{i=1}^{n}\frac{1}{x_i s_i}} \le~\sqrt{2}\,\sigma\mu\,\sqrt{\sum_{i=1}^{n}\frac{1}{(1-\gamma)\mu}}\nonumber\\ =&~\sqrt{2}\,\sigma\,\sqrt{\frac{n\,\mu}{(1-\gamma)}}\label{eq:del2}\,, \end{flalign} where the second to last inequality follows from eqn.~\eqref{eq:pdcond1} and the last inequality holds as $(\mathbf{x},\mathbf{y},\mathbf{s})\in\mathcal{N}(\gamma)$. \paragraph{Bounding $\Delta_3$.} Using $\mathbf{D}=\mathbf{S}^{-\nicefrac{1}{2}}\mathbf{X}^{\nicefrac{1}{2}}$ and $\mathbf{x}=\mathbf{X}\,\one_n$ we get \begin{flalign} \Delta_3=&~\|\mathbf{Q}^{-\nicefrac{1}{2}}\,\mathbf{A}\mathbf{D}\,(\mathbf{S}^{\nicefrac{1}{2}}\mathbf{X}^{-\nicefrac{1}{2}})\,\mathbf{X}\,\one_n\|_2\nonumber\\ =&~\|\mathbf{Q}^{-\nicefrac{1}{2}}\,\mathbf{A}\mathbf{D}\,(\mathbf{S}\mathbf{X})^{\nicefrac{1}{2}}\,\one_n\|_2\nonumber\\ \le&~\|\mathbf{Q}^{-\nicefrac{1}{2}}\,\mathbf{A}\mathbf{D}\|_2\|(\mathbf{S}\mathbf{X})^{\nicefrac{1}{2}}\,\one_n\|_2\nonumber\\ \le&~\sqrt{2}\,\sqrt{\sum_{i=1}^{n}x_i s_i}=~\sqrt{2n\,\mu}\label{eq:del3}\,, \end{flalign} where the inequalities follow from submultiplicativity and eqn.~\eqref{eq:pdcond1}. \paragraph{Bounding $\Delta_4$.} Using $\mathbf{r}_d=\eta\,\mathbf{r}_d^{0}$, we have \begin{flalign} \Delta_4=&~\eta\|\mathbf{Q}^{-\nicefrac{1}{2}}\,\mathbf{A}\,\mathbf{D}^2\mathbf{r}_d^{0}\|_2\nonumber\\ \le&~ \eta\|\mathbf{Q}^{-\nicefrac{1}{2}}\,\mathbf{A}\mathbf{D}\|_2\|(\mathbf{X}\mathbf{S})^{-\nicefrac{1}{2}}\mathbf{X}\mathbf{r}_d^{0}\|_2\nonumber\\ \le&~ \sqrt{2}\eta\,\|(\mathbf{X}\mathbf{S})^{-\nicefrac{1}{2}}\mathbf{X}(\mathbf{A}^\top\mathbf{y}^{0}+\mathbf{s}^{0}-\mathbf{c})\|_2\nonumber\\ \le&~\sqrt{2}\eta\,\|(\mathbf{X}\mathbf{S})^{-\nicefrac{1}{2}}\|_2\,\|\mathbf{X}(\mathbf{A}^\top\mathbf{y}^{0}+\mathbf{s}^{0}-\mathbf{c})\|_2\,,\nonumber \end{flalign} where the above inequalities follow from submultiplicativity and eqn.~\eqref{eq:pdcond1}. Now, applying eqn.~\eqref{eq:ineq3} and $\|(\mathbf{X}\mathbf{S})^{-\nicefrac{1}{2}}\|_2\le\frac{1}{\sqrt{(1-\gamma)\mu}}$, we further have % \begin{flalign} \Delta_4\le~6\sqrt{2}n\sqrt{\frac{\mu}{1-\gamma}}.\label{eq:del4} \end{flalign} \paragraph{Final bound.} Combining eqns.~\eqref{eq:recur4}, \eqref{eq:del12}, \eqref{eq:del2}, \eqref{eq:del3} and \eqref{eq:del4}, we get \begin{flalign} \|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p}\|_2\le~\sqrt{2}\left(\frac{9n}{\sqrt{1-\gamma}}+\sigma\sqrt{\frac{n}{1-\gamma}}+\sqrt{n}\right)\sqrt{\mu}\,. \end{flalign} This concludes the proof of Lemma~\ref{thm:boundf}.
\end{proof} \subsection{Determining step-size, bounding the number of iterations, and proof of Theorem~\ref{thm:1}} Assume that the triplet $(\hat{\Delta \xb},\hat{\Delta \yb},\hat{\Delta \sbb})$ satisfies eqns.~\eqref{eq:addl_i}, \eqref{eq:delxhat_i}, and \eqref{eq:delshat_i}. We rewrite this system in the following alternative form: \begin{subequations}\label{eq:iip_3} \begin{flalign} \mathbf{A}\hat{\Delta\mathbf{x}}=&-\mathbf{r}_p, \label{eq:iip_3_1}\\ \mathbf{A}^\top\hat{\Delta\mathbf{y}}+\hat{\Delta\mathbf{s}}=&-\mathbf{r}_d, \label{eq:iip_3_2}\\ \mathbf{X}\hat{\Delta\mathbf{s}}+\mathbf{S}\hat{\Delta\mathbf{x}}=&-\mathbf{X}\mathbf{S}\,\one_n+\sigma\mu\,\one_n - \mathbf{v}. \label{eq:iip_3_3} \end{flalign} \end{subequations} Indeed, we now show how to derive eqns.~\eqref{eq:delxhat_i}, \eqref{eq:addl_i} and \eqref{eq:delshat_i} from eqn.~\eqref{eq:iip_3}. Pre-multiplying both sides of eqn.~\eqref{eq:iip_3_3} by $\mathbf{A}\mathbf{S}^{-1}$ and noting that $\mathbf{D}^2=\mathbf{X}\mathbf{S}^{-1}$, we get \begin{flalign} &~\mathbf{A}\mathbf{D}^2\hat{\Delta\mathbf{s}}+\mathbf{A}\hat{\Delta \xb}=-\mathbf{A}\mathbf{X}\one_n+\sigma\mu\mathbf{A}\mathbf{S}^{-1}\one_n-\mathbf{A}\mathbf{S}^{-1}\mathbf{v}\nonumber\\ \Rightarrow&~\mathbf{A}\mathbf{D}^2\hat{\Delta\mathbf{s}}=-\mathbf{A}\mathbf{x}+\mathbf{r}_p+\sigma\mu\mathbf{A}\mathbf{S}^{-1}\one_n-\mathbf{A}\mathbf{S}^{-1}\mathbf{v}.\label{eq:iip_31} \end{flalign} Eqn.~\eqref{eq:iip_31} holds as $\mathbf{A}\mathbf{X}\one_n=\mathbf{A}\mathbf{x}$ and, from eqn.~\eqref{eq:iip_3_1}, $\mathbf{A}\hat{\Delta \xb}=-\mathbf{r}_p$. Next, pre-multiplying eqn.~\eqref{eq:iip_3_2} by $\mathbf{A}\mathbf{D}^2$, we get \begin{flalign} &~\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\hat{\Delta \yb}+\mathbf{A}\mathbf{D}^2\hat{\Delta \sbb}=-\mathbf{A}\mathbf{D}^2\mathbf{r}_d\nonumber\\ \Rightarrow &~\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\hat{\Delta \yb}=-\mathbf{r}_p-\sigma\mu\mathbf{A}\mathbf{S}^{-1}\one_n+\mathbf{A}\mathbf{x}-\mathbf{A}\mathbf{D}^2\mathbf{r}_d+\mathbf{A}\mathbf{S}^{-1}\mathbf{v}=\mathbf{p}+\mathbf{A}\mathbf{S}^{-1}\mathbf{v}.\label{eq:iip_311} \end{flalign} The first equality in eqn.~\eqref{eq:iip_311} follows from eqn.~\eqref{eq:iip_31} and the definition of $\mathbf{p}$. This establishes eqn.~\eqref{eq:addl_i}. Eqn.~\eqref{eq:delshat_i} directly follows from eqn.~\eqref{eq:iip_3_2}. Finally, we get eqn.~\eqref{eq:delxhat_i} by pre-multiplying eqn.~\eqref{eq:iip_3_3} by $\mathbf{S}^{-1}$. Next, we define each new point traversed by the algorithm as $(\xb(\alpha),\yb(\alpha), \sbb(\alpha))$, where \begin{flalign} (\xb(\alpha),\yb(\alpha),\sbb(\alpha)) &= (\mathbf{x}, \mathbf{y}, \mathbf{s}) + \alpha(\hat{\Delta\mathbf{x}},\hat{\Delta \mathbf{y}},\hat{\Delta\mathbf{s}}), \\ \mu(\alpha) &= \xb(\alpha)^\top \sbb(\alpha)/ n, \\ \mathbf{r}(\alpha) &= \mathbf{r}\left(\xb(\alpha),\sbb(\alpha),\yb(\alpha)\right). \end{flalign} The goal in this section is to bound the number of iterations required by Algorithm~\ref{algo:ipm_i}. Towards that end, we bound the magnitude of the step size $\alpha$. First, we provide an upper bound on $\alpha$, which allows us to show that each new point $(\xb(\alpha),\sbb(\alpha),\yb(\alpha))$ traversed by the algorithm stays within the neighborhood $\mathcal{N}(\gamma)$. Second, we provide a lower bound on $\alpha$, which allows us to bound the number of iterations required. We use multiple lemmas from~\citep{Mon03}, which we reproduce here, without their proofs. 
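Before stating these bounds, we illustrate the step-size selection of steps~(6)--(7) of Algorithm~\ref{algo:ipm_i} with a minimal sketch of our own (a simple grid search stands in for the exact one-dimensional optimization, and all names are illustrative). It uses the fact that, by eqns.~\eqref{eq:iip_3_1} and~\eqref{eq:iip_3_2}, the residual scales linearly along the step, \emph{i.e.},~$\mathbf{r}(\alpha)=(1-\alpha)\,\mathbf{r}$:

\begin{verbatim}
import numpy as np

def in_neighborhood(x, s, r_norm, r0_norm, mu0, gamma):
    # N(gamma): positivity, x_i s_i >= (1 - gamma) mu for all i,
    # and the residual condition ||r||_2 / ||r^0||_2 <= mu / mu_0.
    mu = x @ s / x.size
    return (np.all(x > 0) and np.all(s > 0)
            and np.all(x * s >= (1.0 - gamma) * mu)
            and r_norm * mu0 <= mu * r0_norm)

def step_size(x, s, dx, ds, r_norm, r0_norm, mu0, gamma, grid=1000):
    # Grid-search stand-in for step (6): the largest alpha in [0, 1]
    # keeping the iterate in N(gamma); r(alpha) = (1 - alpha) r.
    for alpha in np.linspace(1.0, 0.0, grid):
        if in_neighborhood(x + alpha * dx, s + alpha * ds,
                           (1.0 - alpha) * r_norm, r0_norm, mu0, gamma):
            return alpha
    return 0.0
\end{verbatim}

Step~(7), which minimizes the duality gap over $[0,\tilde{\alpha}]$, can be handled by the same kind of one-dimensional search.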
We start with the upper bound on $\alpha$, which guarantees that each new point $(\xb(\alpha),\yb(\alpha),\sbb(\alpha))$ traversed by the algorithm stays within the neighborhood $\mathcal{N}(\gamma)$.
\begin{lemma}[Lemma 3.5 of \citep{Mon03}]\label{lemmamaxalpha1}
Assume $(\hat{\Delta \xb},\hat{\Delta \yb},\hat{\Delta \sbb})$ satisfies eqns.~(\ref{eq:iip_3}) for some $\sigma > 0$, $(\mathbf{x}, \mathbf{y},\mathbf{s}) \in \mathcal{N}(\gamma)$ (for $\gamma \in (0,1)$), and $\|\mathbf{v}\|_2 \leq \frac{\gamma \sigma \mu}{4}$. Then, $(\xb(\alpha),\yb(\alpha),\sbb(\alpha)) \in \mathcal{N}(\gamma)$ for every scalar $\alpha$ such that
\begin{flalign}
\label{alphalemma1}
0 \leq \alpha \leq \min \left\{ 1, \frac{\gamma \sigma \mu}{4 \|\hat{\Delta \xb} \circ \hat{\Delta \sbb} \|_\infty}\right\} .
\end{flalign}
\end{lemma}
\noindent We now provide a lower bound on the values of $\bar{\alpha}$ and the corresponding $\mu(\bar{\alpha})$; see Algorithm~\ref{algo:ipm_i}.
\begin{lemma}[Lemma 3.6 of~\citep{Mon03}]\label{lemmaminalpha1}
In each iteration of Algorithm~\ref{algo:ipm_i}, if $\|\mathbf{v}\|_2\le\frac{\gamma\sigma\mu}{4}$, then the step size $\bar{\alpha}$ satisfies
\begin{flalign}
\label{alphalemma2}
\bar{\alpha} \geq \min \left\{ 1, \frac{ \min \{ \gamma \sigma, (1 - \frac{5}{4} \sigma) \} \mu } {4 \|\hat{\Delta \xb} \circ \hat{\Delta \sbb}\|_\infty} \right\}
\end{flalign}
and
\begin{flalign}
\label{mulemma2}
\mu(\bar{\alpha})= \Big[ 1 - \frac{\bar{\alpha}}{2} (1-\frac{5}{4}\sigma) \Big] \mu.
\end{flalign}
\end{lemma}
\noindent At this point, we have provided a lower bound (eqn.~(\ref{alphalemma2})) for the allowed values of the step size $\bar{\alpha}$. Next, we show that this lower bound is bounded away from zero. From eqn.~(\ref{alphalemma2}), this is equivalent to showing that $\|\hat{\Delta \xb} \circ \hat{\Delta \sbb} \|_\infty$ is bounded from above.
\begin{lemma}[Lemma 3.7 of~\citep{Mon03} (slightly modified)]\label{lemmaminalpha2}
Let $(\mathbf{x}^{0},\mathbf{y}^{0},\mathbf{s}^{0})$ be the initial point with $(\mathbf{x}^{0},\mathbf{s}^{0})>0$ and $(\mathbf{x}^0,\mathbf{s}^0) \geq (\mathbf{x}^{*}, \mathbf{s}^{*}) $ for some $(\mathbf{x}^{*},\mathbf{y}^{*},\mathbf{s}^{*}) \in \mathcal{S}$. Let $(\mathbf{x},\mathbf{y},\mathbf{s}) \in \mathcal{N}(\gamma)$ be such that $\mathbf{r} = \eta \mathbf{r}^{0}$ for some $\eta \in [0,1]$ and $\|\mathbf{v}\|_2\le\frac{\gamma\sigma\mu}{4}$. Then, the search direction $(\hat{\Delta \mathbf{x}},\hat{\Delta \mathbf{y}},\hat{\Delta \mathbf{s}})$ produced by Algorithm \ref{algo:ipm_i} at each iteration satisfies
\begin{flalign}
\label{normsmax}
\max \{\| \mathbf{D}^{-1} \hat{\Delta \mathbf{x}}\|_2, \| \mathbf{D} \hat{\Delta \mathbf{s}}\|_2 \} \le~\left(1+\frac{\sigma^2}{1-\gamma}-2\sigma\right)^{\nicefrac{1}{2}}\sqrt{n\mu} +\frac{6n}{\sqrt{(1-\gamma)}}\sqrt{\mu}+\frac{\gamma\sigma}{4\,\sqrt{1-\gamma}}\sqrt{\mu}.
\end{flalign}
\end{lemma}
\noindent We should note here that the above lemma is slightly different from~\citep[Lemma 3.7]{Mon03}. Indeed, \citep[Lemma 3.7]{Mon03} actually proves the following bound:
\begin{flalign}
\label{normsmax2}
\max \{\| \mathbf{D}^{-1} \hat{\Delta \mathbf{x}}\|_2, \| \mathbf{D} \hat{\Delta \mathbf{s}}\|_2 \} \le~\left(1+\frac{\sigma^2}{1-\gamma}-2\sigma\right)^{\nicefrac{1}{2}}\sqrt{n\mu}+\frac{6n}{\sqrt{(1-\gamma)}}\sqrt{\mu}+\frac{\gamma\sigma}{4\sqrt{n}}\sqrt{\mu}\,.
\end{flalign}
Notice that there is a slight difference in the last term on the right-hand side, which does not asymptotically change the bound.
The underlying reason for this difference is the fact that~\citep{Mon03} constructed the vector $\mathbf{v}$ differently. In our case, we need to bound $\|(\mathbf{X}\mathbf{S})^{-1/2}\mathbf{v}\|_2$, which we do as follows:
\begin{flalign}
\| (\mathbf{X} \mathbf{S})^{-1/2}\mathbf{v}\|_2\le\| (\mathbf{X} \mathbf{S})^{-1/2}\|_2\,\|\mathbf{v}\|_2\le \frac{1}{\min_i \sqrt{x_is_i}}\,\frac{\gamma\sigma\mu}{4}\label{eq:db1}\,,
\end{flalign}
where in the above expression we use the fact that $\| (\mathbf{X} \mathbf{S})^{-1/2}\|_2=\frac{1}{\min_i\sqrt{x_is_i}}$. Now, as $(\mathbf{x},\mathbf{y},\mathbf{s})\in\mathcal{N}(\gamma)$, we further have $x_is_i\ge (1-\gamma)\mu$ for all $i=1,\ldots,n$. Combining this with eqn.~\eqref{eq:db1}, we get
\begin{flalign}
\| (\mathbf{X} \mathbf{S})^{-1/2}\mathbf{v}\|_2\le\frac{\gamma\sigma\mu}{4\sqrt{(1-\gamma)\mu}}=\frac{\gamma\sigma}{4\,\sqrt{1-\gamma}}\sqrt{\mu}.\label{eq:bd2}
\end{flalign}
On the other hand,~\citep{Mon03} used a different construction of $\mathbf{v}$ for which $\|(\mathbf{X}\mathbf{S})^{-1/2}\mathbf{v}\|_2=\|\tilde{\mathbf{f}}^{(t)}\|_2$ holds. Therefore, they had the following bound:
$$\|(\mathbf{X}\mathbf{S})^{-1/2}\mathbf{v}\|_2=\|\tilde{\mathbf{f}}^{(t)}\|_2\le\frac{\gamma\sigma}{4\sqrt{n}}\sqrt{\mu}.$$
The next lemma bounds the number of iterations that Algorithm~\ref{algo:ipm_i} needs when started with an infeasible point that is sufficiently positive.
\begin{lemma}[Theorem 2.6 of \citep{Mon03}] \label{theoremOuter}
Assume that the constants $\gamma$ and $\sigma$ are such that $\max\{\gamma^{-1},(1-\gamma)^{-1},\sigma^{-1},(1-\frac{5}{4}\sigma)^{-1}\}=\mathcal{O}(1)$. Let the initial point $(\mathbf{x}^{0},\mathbf{s}^{0},\mathbf{y}^{0})$ satisfy $(\mathbf{x}^{0}, \mathbf{s}^{0}) \geq (\mathbf{x}^{*}, \mathbf{s}^{*} )$ for some $(\mathbf{x}^{*}, \mathbf{s}^{*},\mathbf{y}^{*}) \in \mathcal{S}$ and $\|\mathbf{v}\|_2\le\frac{\gamma\sigma\mu}{4}$. Algorithm \ref{algo:ipm_i} generates an iterate $(\mathbf{x}^{k}, \mathbf{s}^{k}, \mathbf{y}^{k})$ satisfying $\mu_k \leq \epsilon \mu_0$ and $\| \mathbf{r}^{k}\|_2 \leq \epsilon \| \mathbf{r}^{0}\|_2$ after $\mathcal{O}(n^2 \log{\nicefrac{1}{\epsilon}})$ iterations.
\end{lemma}
\noindent Finally, Theorem~\ref{thm:1} follows from Lemmas~\ref{lem:conouter} and~\ref{theoremOuter}.
\section{Additional notes on experiments}\label{app:experiments}
\subsection{Support Vector Machines (SVMs)} \label{app:svm}
The classical $\ell_1$-SVM problem is as follows. We consider the task of fitting an SVM to data pairs $S = \{ (x_i, y_i)\}_{i=1}^m$, where $x_i \in \mathbb{R}^n$ and $y_i \in \{ + 1, - 1\}$. Here, $m$ is the number of training points, $n$ is the feature dimension, and $b' \in \mathbb{R}$ is the bias term. The SVM problem with an $\ell_1$ regularizer has the following form:
\begin{align}
\label{svm2}
\underset{{w,\, b'}}{\operatorname{minimize}} \quad & \|w \|_1 \\
\text{subject to} \quad& y_i (w^T x_i + b') \geq 1, \quad i=1\ldots m. \nonumber
\end{align}
This problem can be written as an LP by introducing the variables $w^+$ and $w^-$, where $w = w^+ - w^-$. The objective becomes $\sum_{j=1}^n (w^+_j + w^-_j)$, and we constrain $w^+_j \geq 0$ and $w^-_j \geq 0$. Note that the size of the constraint matrix in the LP becomes $m \times (2n +1)$, where the extra column corresponds to the bias $b'$.
\subsection{Random data} \label{app:rand}
We generate random synthetic instances of linear programs as follows. To generate $\mathbf{A} \in \mathbb{R}^{m \times n}$, we set ${a}_{ij} \sim_{i.i.d.}U(0,1)$ with probability $p$ and ${a}_{ij} = 0$ otherwise. We then add $\min\{m,n\}$ i.i.d.
draws from $U(0,1)$ to the main diagonal, to ensure that each row of $\mathbf{A}$ has at least one nonzero entry. We set $\mathbf{b} = \mathbf{A} \mathbf{x} + 0.1\mathbf{z}$, where $\mathbf{x}$ and $\mathbf{z}$ are random vectors with i.i.d. $N(0,1)$ entries. Finally, we set the entries of $\mathbf{c}$ to be i.i.d. $N(0,1)$.
\subsection{Real-world data} \label{app:real}
We used a gene expression cancer RNA-Sequencing dataset, taken from the UCI Machine Learning repository. It is part of the RNA-Seq (HiSeq) PANCAN data set~\citep{Weinstein2013} and is a random extraction of gene expressions from patients who have different types of tumors: BRCA, KIRC, COAD, LUAD, and PRAD. We considered the binary classification task of identifying BRCA versus other types.
We also used the DrivFace dataset taken from the UCI Machine Learning repository. In the DrivFace dataset, each sample corresponds to an image of a human subject, taken while driving in real scenarios. Each image is labeled as corresponding to one of three possible gaze directions: left, straight, or right. We considered the binary classification task of identifying two different gaze directions: straight, or to either side (left or right).
\subsection{Feasible starting point} \label{app:feasible_start}
We construct a linear program to find a primal feasible starting point $(\mathbf{x}_0, \mathbf{s}_0, \mathbf{y}_0)$ such that $\mathbf{A}\mathbf{x}_0 = \mathbf{b}$. Without loss of generality, assume that all entries of $\mathbf{b}$ are positive and let $\mathbf{z} \in \R{m}$ be a vector of artificial variables. Then, any $(\mathbf{x}, \mathbf{z})$ with $\mathbf{z} = \zero$, $\mathbf{A}\mathbf{x} = \mathbf{b}$, and $\mathbf{x}\ge\zero$ is an optimal solution of the following linear program:
\begin{flalign*}
\min\,\one_m^\top\mathbf{z}\,,\text{ subject to }\mathbf{A}\mathbf{x} + \mathbf{I} \mathbf{z} =\mathbf{b}\,,\ \mathbf{x}, \mathbf{z} \ge \zero\,.
\end{flalign*}
\section*{Acknowledgements}
AC, PD, and GD were partially supported by NSF 1760353 and 1814041 and DOE SC0022085. HA was partially supported by BSF 2017698. PL was supported by an Amazon Graduate Fellowship in Artificial Intelligence. This work was done when AC was a graduate student in the Department of Statistics, Purdue University. Oak Ridge National Laboratory is operated by UT-Battelle LLC for the U.S. Department of Energy under contract number DEAC05-00OR22725.
\newpage
\setlength{\bibsep}{6pt}
\bibliographystyle{abbrvnat}
\section{Extensions}\label{sxn:extensions}
We briefly discuss extensions of our work. First, note that we focus only on analyzing preconditioned CG and preconditioned Chebyshev iteration, due to their practical advantages over other solvers; Chebyshev iteration also offers several advantages in a parallel environment, as it does not need to evaluate communication-intensive inner products to compute the recurrence parameters. From a theoretical perspective, however, in \citep{CLAD20} we analyzed two more solvers, namely preconditioned Richardson iteration and preconditioned steepest descent, which could replace the proposed CG or Chebyshev iteration without any loss in accuracy or any increase in the number of iterations for the long-step feasible IPM Algorithm~\ref{algo:iipm} of Section~\ref{sxn:IIPM}.
Second, recall that our approach focused on full-rank input matrices $\mathbf{A} \in \mathbb{R}^{m \times n}$ with $m \ll n$. Our overall approach still works if $\mathbf{A}$ is any $m \times n$ matrix that is low-rank, e.g., $\mathop{\mathrm{rank}}(\mathbf{A})=k\ll \min\{m,n\}$.
In that case, using the thin SVD of $\mathbf{A}$, we can rewrite the linear constraints as $\mathbf{U}_\mathbf{A}\mathbf{\Sigma}_\mathbf{A}\mathbf{V}_\mathbf{A}^\top\mathbf{x}=\mathbf{b}$, where $\mathbf{U}_\mathbf{A}\in\RR{m}{k}$ and $\mathbf{V}_\mathbf{A}\in\RR{n}{k}$ are the matrices of left and right singular vectors of $\mathbf{A}$, respectively; $\mathbf{\Sigma}_\mathbf{A}\in\RR{k}{k}$ is the diagonal matrix with the $k$ non-zero singular values of $\mathbf{A}$ as its diagonal elements. The LP of eqn.~\eqref{eq:primal} can be restated as
\begin{flalign}
\min\,\mathbf{c}^\top\mathbf{x}\,,\text{ subject to }\mathbf{V}_\mathbf{A}^\top\mathbf{x}=\widetilde{\mathbf{b}}\,,\mathbf{x}\ge \zero\,,\label{eq:primal2}
\end{flalign}
where $\widetilde{\mathbf{b}}=\mathbf{\Sigma}_\mathbf{A}^{-1}\mathbf{U}_\mathbf{A}^\top\mathbf{b}$. Note that $\mathop{\mathrm{rank}}(\mathbf{V}_\mathbf{A})=k\ll n$, and therefore eqn.~\eqref{eq:primal2} can be solved using our framework. The matrices $\mathbf{U}_\mathbf{A}$, $\mathbf{V}_\mathbf{A}$, and $\mathbf{\Sigma}_\mathbf{A}$ can be approximately recovered using the fast SVD algorithms of \citep{Halko2011,BouDriMag14,clarkson2017low}. However, the accuracy of the final solution will depend on the accuracy of the approximate SVD, and we defer this analysis to future work.
Third, even though we chose to use the Count-Min sketch and its analysis from~\citep{Cohen2016} (Section~\ref{sxn:background}), there are many other alternative sketching matrix constructions that would lead to similar results. A particularly simple one is the Gaussian sketching matrix $\mathbf{W}_G \in \mathbb{R}^{n \times w}$, where every entry is a $\mathcal{N}(0,1)$ random variable. Setting $w=\mathcal{O}\left(\nicefrac{m+\log (1/ \delta)}{\zeta^{2}}\right)$ would result in the same accuracy guarantees as the sketching matrix of Section~\ref{sxn:background}. However, the (theoretical) running time needed to compute $\mathbf{A} \mathbf{D} \mathbf{W}$ increases to $\mathcal{O} (m \cdot\mathop{\mathrm{nnz}}(\mathbf{A}) )$. In practice, at least for relatively small matrices, using Gaussian sketching matrices is a reasonable alternative; see the discussion in~\citep{Meng2014SISC}, which argued that Gaussian-sketching-based solvers are considerably better than direct solvers. We also opted to use Gaussian matrices in our empirical evaluation, since we were primarily interested in measuring the accuracy of the final solution as a function of the number of iterations of the solver and the IPM algorithm. Other known constructions of sketching matrices that are also applicable in our setting include (any) sub-gaussian sketching matrix; the Subsampled Randomized Hadamard Transform (SRHT); and any of the Sparse Subspace Embeddings of~\citep{clarkson2017low,nelson2013osnap,meng2013low,cohen2016nearly}.
\section{Feasible IPM}
\subsection{The linear system}
Let $\mathcal{F}$ and $\mathcal{F}^0$ be the sets of feasible and strictly feasible points, respectively, \emph{i.e.},
\begin{flalign*}
\mathcal{F}=&~\{(\mathbf{x},\mathbf{y},\mathbf{s}):~(\mathbf{x},\mathbf{s})\ge\zero,~\mathbf{A}\mathbf{x}=\mathbf{b},~\mathbf{A}^\top\mathbf{y}+\mathbf{s}=\mathbf{c}\}\nonumber\\
\mathcal{F}^0=&~\{(\mathbf{x},\mathbf{y},\mathbf{s}): ~(\mathbf{x},\mathbf{s})>\zero,~\mathbf{A}\mathbf{x}=\mathbf{b},~\mathbf{A}^\top\mathbf{y}+\mathbf{s}=\mathbf{c}\}.
\end{flalign*}
In addition, let the neighborhood $\mathcal{N}(\gamma)$ be
\begin{flalign}
&\mathcal{N}(\gamma)=\Big\{(\mathbf{x},\mathbf{y},\mathbf{s})\in\mathcal{F}^0:x_i s_i\ge(1-\gamma)\mu\Big\}.\nonumber
\end{flalign}
Here $\gamma \in (0,1)$ and $\mu$ is the duality measure. Note that $\mathcal{N}(\gamma)\subseteq\mathcal{F}^0\subseteq\mathcal{F}$. We assume that $\mathcal{F}^0$ is non-empty.
At each iteration, given the current solution $(\mathbf{x}^{k},\mathbf{y}^{k},\mathbf{s}^{k})$, IPMs obtain the search direction $(\Delta\mathbf{x}^k,\Delta\mathbf{y}^k,\Delta\mathbf{s}^k)$ by solving the following system of linear equations (we drop the iteration superscript $k$ to simplify notation):
\begin{subequations}\label{eq:system_f}
\begin{flalign}
\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\Delta\mathbf{y}=~&\mathbf{p}\,,\label{eq:normal1_f}\\
\Delta\mathbf{s}=~&-\mathbf{A}^\top\Delta\mathbf{y}\,,\label{eq:dels_f}\\
\Delta\mathbf{x}=~&-\mathbf{x}+\sigma\mu\mathbf{S}^{-1}\one_n-\mathbf{D}^2\Delta\mathbf{s}\,,\label{eq:delx_f}
\end{flalign}
\end{subequations}
where $\mathbf{p}=-\sigma\mu\mathbf{A}\mathbf{S}^{-1}\one_n+\mathbf{A}\mathbf{x}$. Given $\Delta\mathbf{y}$ from eqn.~(\ref{eq:normal1_f}), $\Delta\mathbf{s}$ and $\Delta\mathbf{x}$ are easy to compute from eqns.~\eqref{eq:dels_f} and \eqref{eq:delx_f}, as they only involve matrix-vector products. Note that we solve eqn.~\eqref{eq:normal1_f} approximately using a sketching-based preconditioned CG solver. Let $\hat{\Delta \mathbf{y}}=\mathbf{Q}^{-\nicefrac{1}{2}}\tilde{\mathbf{z}}^t$ be the approximate solution to eqn.~(\ref{eq:normal1_f}). In order to account for the loss of accuracy due to this approximate solution, we compute $\hat{\Delta\mathbf{x}}$ as follows:
\begin{flalign}
\hat{\Delta\mathbf{x}}=~-\mathbf{x}+\sigma\mu\mathbf{S}^{-1}\one_n-\mathbf{D}^2\hat{\Delta\mathbf{s}}-\mathbf{S}^{-1}\mathbf{v}\label{eq:delxhat_f}.
\end{flalign}
Here $\mathbf{v}\in\R{n}$ is a perturbation vector that must exactly satisfy the following invariant at each iteration of the IPM:
\begin{flalign}
\mathbf{A}\mathbf{S}^{-1}\mathbf{v}=\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\hat{\Delta\mathbf{y}}-\mathbf{p}\,\label{eq:addl_f}.
\end{flalign}
We note that the computation of $\hat{\Delta\mathbf{s}}$ is still done using eqn.~\eqref{eq:dels_f}, which does not change. Therefore, we rewrite the approximate solution $(\hat{\Delta\mathbf{x}},\hat{\Delta\mathbf{y}},\hat{\Delta\mathbf{s}})$ together with the perturbation vector $\mathbf{v}\in\R{n}$ satisfying eqn.~\eqref{eq:addl_f} as follows:
\begin{subequations}\label{eq:system1}
\begin{flalign}
\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\hat{\Delta\mathbf{y}}=~&\mathbf{p}+\mathbf{A}\mathbf{S}^{-1}\mathbf{v}\,,\label{eq:normal12}\\
\hat{\Delta\mathbf{s}}=~&-\mathbf{A}^\top\hat{\Delta\mathbf{y}}\,,\label{eq:dels1}\\
\hat{\Delta\mathbf{x}}=~&-\mathbf{x}+\sigma\mu\mathbf{S}^{-1}\one_n-\mathbf{D}^2\hat{\Delta\mathbf{s}}-\mathbf{S}^{-1}\mathbf{v}\label{eq:delxhat1}\,.
\end{flalign} \end{subequations} \noindent\textbf{Bounding the norm of $\mathbf{v}$.} We note that using the same analysis as in the case of infeasible IPMs, it can be shown that we need $t=\mathcal{O}(\log n)$ iterations to guarantee $$\|\mathbf{v}\|_2\le\frac{\gamma\sigma\mu}{4}.$$ \subsection{Structural conditions} Recall that our preconditioner $\mathbf{Q}^{-\frac{1}{2}}$ satisfies \begin{flalign}\label{eq:pdcond1_f} \frac{2}{2+\zeta} \leq \sigma^2_{\min}(\mathbf{Q}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}) \leq \sigma^2_{\max}(\mathbf{Q}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}) \leq \frac{2}{2-\zeta}, \end{flalign} for some error parameter $\zeta \in [0,1]$. In the above, $\sigma_{\min}(\cdot)$ and $\sigma_{\max}(\cdot)$ correspond to the smallest and largest singular value of the matrix in parentheses. Moreover, given such a preconditioner, we prove that the resulting iterative solver satisfies \begin{flalign} \|\tilde{\mathbf{f}}^{(t)}\|_2=\|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{A}\mathbf{D}^2\mathbf{A}^\top\mathbf{Q}^{-\nicefrac{1}{2}}\tilde{\mathbf{z}}^t-\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p}\|_2\leq \zeta^t \|\mathbf{Q}^{-\nicefrac{1}{2}}\mathbf{p}\|_2. \label{eq:pdcond2_f} \end{flalign} Here $\tilde{\mathbf{z}}^t$ is the approximate solution returned by the iterative solver after $t$ iterations.
{ "timestamp": "2022-09-26T02:13:56", "yymm": "2209", "arxiv_id": "2209.08722", "language": "en", "url": "https://arxiv.org/abs/2209.08722" }
\section{Introduction}
Researchers, including journalists \cite{mask_stockpile_ars_2020,buffalo_shooter_huffpo_2022}, social scientists \cite{arms2006,MEET:MEET14504801096}, and historians \cite{milligan_history_2019}, increasingly make use of web archives. Web archives preserve the content of web pages as \textbf{mementos}, captures of those pages as they were at a specific point in time. Web archives are vast, the largest consisting of billions of documents \cite{kahle_wayback_count_2020}. Collections are a common organizational technique employed to bring order to this vastness. Web archive collections consist of web pages that were hand-picked by archivists and subject matter experts to represent a specific topic. Using collections simplifies management for archivists and allows them to showcase content, making patrons aware of the collection as well as their web archiving organization as a whole. Patrons benefit from collections because they have an intelligently selected set of mementos to review that supports their topic of interest.
We recognize that the term \emph{collection} has many definitions. In the scope of this paper, we define \textbf{collection} based on a web archive platform's front-end presentation of a set of mementos that are grouped by topic. Figure \ref{fig:archive-it-collection} shows a screenshot of a collection landing page for Archive-It's collection \emph{Environmental Justice}\footnote{\url{https://archive-it.org/collections/7635}}. This page contains a list of resources that a visitor can examine, along with metadata about the collection. Figure \ref{fig:pandora-collection} contains a screenshot of the collection landing page of PANDORA's \emph{Indigenous Australians} collection\footnote{\url{https://pandora.nla.gov.au/subject/12}}. At PANDORA, a visitor can view the list of page titles, but also how this collection fits within an overall hierarchy of topics, sub-collections, and subcategories. Similarly, Figures \ref{fig:haw-collection}, \ref{fig:ukwa-collection}, \ref{fig:ia-collection}, and \ref{fig:conifer-collection} show collection landing pages\footnote{\url{https://haw.nsk.hr/en/category/biology/}}\footnote{\url{https://www.webarchive.org.uk/en/ukwa/collection/2387}} from the Croatian Web Archive (HAW), the United Kingdom Web Archive (UKWA), an Internet Archive (IA) user account\footnote{\url{https://archive.org/details/@shawnmjones?tab=web-archive}}, and Conifer\footnote{\url{https://conifer.rhizome.org/despens/usa-today-the-wall}}, respectively. We differentiate collections from the \textbf{greater web archive} -- the web archiving platform as a whole. For example, PANDORA would be the greater web archive containing \emph{Indigenous Australians}.
\begin{figure*}
\centering
\begin{subfigure}{0.5\hsize}
\centering
\includegraphics[width=\textwidth]{archiveit-landing-page-example.png}
\caption{Archive-It's \emph{Environmental Justice}}
\label{fig:archive-it-collection}
\end{subfigure}%
\begin{subfigure}{0.5\hsize}
\includegraphics[width=\textwidth]{pandora-collection-landing-page-example.png}
\caption{PANDORA's \emph{Indigenous Australians}}
\label{fig:pandora-collection}
\end{subfigure}
\caption{Screenshots of some example web archive collection landing pages.}
\end{figure*}
\begin{figure*}\ContinuedFloat
\centering
\begin{subfigure}{0.5\hsize}
\centering
\includegraphics[width=\textwidth]{haw-landing-page-example.png}
\caption{HAW's \emph{Biology}}
\label{fig:haw-collection}
\end{subfigure}%
\begin{subfigure}{0.5\hsize}
\includegraphics[width=\textwidth]{ukwa-landing-page-example.png}
\caption{UKWA's \emph{Celtic Studies}}
\label{fig:ukwa-collection}
\end{subfigure}
\caption{Screenshots of some example web archive collection landing pages.}
\end{figure*}
\begin{figure*}\ContinuedFloat
\centering
\begin{subfigure}{0.5\hsize}
\centering
\includegraphics[width=\textwidth]{ia-landing-page-example.png}
\caption{IA's user account web archives for one of this paper's authors}
\label{fig:ia-collection}
\end{subfigure}%
\begin{subfigure}{0.5\hsize}
\includegraphics[width=\textwidth]{conifer-landing-page-example.png}
\caption{Conifer's \emph{USA Today: The Wall}}
\label{fig:conifer-collection}
\end{subfigure}
\caption{Screenshots of some example web archive collection landing pages.}
\label{fig:landing_pages}
\end{figure*}
Even in the examples shown in Figure \ref{fig:landing_pages} there are differences in approaches to these collections. A memento created in an Archive-It collection is not shared between collections, whereas with PANDORA this is possible. Design decisions like this represent different \textbf{collection structures} that serve as models for how an archivist or platform designer might thematically organize their mementos. Collection structures have implications for how visitors interact with the collection. For example, does a visitor first visit the landing page to view a set of page titles and then decide among mementos for that page, or does the collection directly present links to mementos? Additionally, collection structures present different challenges to authors of third-party tools that consume and analyze these collections as part of Big Data efforts.
While the collection structure of each platform may vary depending on the organization's requirements and design choices, each structure shares some basic elements. In this work, we define and standardize nomenclature so we can discuss these elements. Our goal is not to prescribe how a collection should be designed, but rather to understand the different collection structures already in existence. Thus, we reviewed the collection structures of Archive-It\footnote{\url{https://archive-it.org/}}, the National Library of Australia's (NLA) PANDORA\footnote{\url{http://pandora.nla.gov.au/}} and Trove\footnote{\url{https://trove.nla.gov.au/}} archives, the Croatian Web Archive (HAW)\footnote{\url{https://haw.nsk.hr/}}, the Library of Congress Web Archive (LC), the United Kingdom Web Archive (UKWA)\footnote{\url{https://www.webarchive.org.uk/en/ukwa/index}}, and Conifer\footnote{\url{https://conifer.rhizome.org/}} (formerly Webrecorder \cite{webrecorder_replay_2020}).
Finally, we include the Internet Archive's\footnote{\url{https://archive.org}} (IA) user account web archives because IA's \emph{Wayback Machine} is synonymous with web archiving, even though its collections are tied to a specific user rather than a theme. While there are many web archiving initiatives \cite{gomes_2011,enwiki:1080525996}, we focused on these eight platforms because they provide collections as defined above. Through this review, we address the following research questions:
\begin{itemize}
\item RQ1: What different collection structures exist?
\item RQ2: What do these distinct collection structure approaches have in common?
\end{itemize}
Existing work has focused on the nature of digital collections \cite{fenlon_toward_2017}, user behavior when curating personal collections \cite{MULL2014192,doi:10.1177/2056305116662173}, the behavior of archivists \cite{ogden2017observing}, the capabilities of web archive platforms \cite{niu2012}, and the challenges of building collections with Archive-It \cite{doi:10.1086/669993,doi:10.1086/685975}. Our work augments this by focusing on existing web archive collection structures for the benefit of future archivists as well as web archive platform and analytics tool designers. We recognize that many organizations create collections with Archive-It, but we produced this work to highlight the novel concepts of many different platforms, including those from national libraries like the Library of Congress (LC), the National Library of Australia (NLA) (through PANDORA/Trove), and the British Library (through the UKWA). Our contributions are as follows:
\begin{itemize}
\item Documentation of the collection structures followed by different web archives, not only to help current archivists and developers understand the present state, but also to consolidate and summarize knowledge for platform developers.
\item An analysis of the similarities among these various web archive collection structure approaches to provide ideas to future platform developers.
\end{itemize}
This work is provided to assist collection analysis software projects like the Off-Topic Memento Toolkit \cite{alnoamany_detecting_2016,jones_off-topic_2018_nonanon}, Hypercane \cite{jones2021hypercane,jones2021hypercane2}, and ArchiveSpark \cite{holzmann_archivespark:_2016}.
\section{Background}
When building a web archive collection, an archivist selects a set of URIs as seeds. Each seed is an \textbf{original resource}, the live web resource whose current state is captured. Each memento is an observation of that resource at a particular point in time, its \textbf{memento-datetime}. Each original resource is identified by its URI-R (e.g., \url{https://www.cnn.com}) and each of its mementos is identified by a URI-M (e.g., \url{https://wayback.archive-it.org/7678/20190319204514/https://www.cnn.com/}). A \textbf{TimeMap} is a listing of the mementos created for an original resource, including the URI-M of each memento and its memento-datetime. \textbf{Human-readable} TimeMaps are rendered as a list or calendar with links to each URI-M. Some examples of human-readable TimeMaps are shown in Figure \ref{fig:human-readable-timemaps}. \textbf{Machine-readable} TimeMaps can take a variety of formats, such as JSON. Many web archives are compliant with the Memento Protocol \cite{van_de_sompel_rfc_2013}, which formalizes these concepts and provides standardized methods of linking mementos, original resources, and TimeMaps. Not all original resources are seeds.
An archivist can instruct the web archiving platform to follow links from a seed to other original resources and capture those as well. \textbf{Seed mementos} are mementos that the archivist directly asked the platform to capture. \textbf{Deep mementos} are mementos captured by crawling a seed memento's links. We make this distinction because a visitor can immediately discover the seed mementos through a web archive collection's user interface, but may need to click links from seed mementos to discover deep mementos. Tools attempting to capture information about a collection are also limited by this distinction because seed mementos are advertised through the user interface while deep mementos must be crawled to be discovered. Archive-It is an example of a platform that requires this distinction.
\begin{figure*}
\centering
\begin{subfigure}{0.5\hsize}
\centering
\includegraphics[width=\textwidth]{loc-human-readable-timemap.png}
\caption{Library of Congress (LC)}
\label{fig:loc-human-readable-timemap}
\end{subfigure}%
\begin{subfigure}{0.5\hsize}
\includegraphics[width=\textwidth]{tep-human-readable-timemap.png}
\caption{Trove Title Entry Page (TEP)}
\label{fig:tep-human-readable-timemap}
\end{subfigure}
\caption{Screenshots showing examples of human-readable TimeMaps.}
\label{fig:human-readable-timemaps}
\end{figure*}
\section{Related Work}
Much work exists in analyzing the use and creation of collections. Fenlon \cite{fenlon_toward_2017} performed an in-depth study of two digital collections, detailing different approaches to data models, supporting context, and overall visualization of content. She notes that all collection curators may learn from the structures of existing digital collections. She details how collections have implications for scholarly communications and that the goal of a collection influences its collection structure. Though Fenlon did not analyze web archives, her work has inspired our technical analysis of collection structures.
Mull and Lee \cite{MULL2014192} applied the Uses and Gratifications model \cite{doi:10.1111/j.00117315.2004.02524.x} to understand why Pinterest users select certain items for their collections. They contrasted the behavior of Pinterest users with other social media platforms and found that image-sharing platforms have unique factors. Wang et al. \cite{doi:10.1177/2056305116662173} completed a similar study to understand how users not only created collections but also interacted with them on Pinterest. Ogden et al. \cite{ogden2017observing} studied web archivists themselves to better understand the ways in which they ``shape and maintain the preserved Web.'' Nwala et al. \cite{nwala2018} analyzed how to leverage search engine results to populate web archive collections. Nwala et al. \cite{nwala_bootstrapping_2018} also used Archive-It collections to compare human-made vs. automatically or semi-automatically generated collections. Klein et al. investigated the possibility of performing focused crawls to build collections from greater web archives \cite{10.1145/3201064.3201085}. Where those studies focused on creating and curating collections, or understanding archivists' motivations for doing so, our work analyzes what is present already and the behavior of platform designers as revealed through collection structures.
Web archives and the challenges they face have been extensively examined in the past.
Crook \cite{doi:10.1108/02640470910998542} detailed the state of web archiving in Australia in 2009, noting the challenges with establishing different capture efforts as part of the PANDORA archive. Slania \cite{doi:10.1086/669993} and Deutch \cite{doi:10.1086/685975} detailed their experiences with using Archive-It to archive art web sites. In 2012, Niu \cite{niu2012} conducted an analysis of 10 different web archive platforms and discovered that search capabilities by URL and keyword were common, with varying levels of capability, but none provided data mining services at that time. Our work is similar in that we are analyzing the capabilities of different web archives, but, unlike the work of others, we are focusing on the subset of web archives that offer themed collections, and we provide a model for understanding their different collection structures.
How users directly leverage web archive collections has received attention. Milligan \cite{milligan2016lost} discussed how historians of the present and future might benefit from understanding large collections in the Internet Archive (IA) and Archive-It. According to Risse \cite{risse2014you} and Gossen et al. \cite{gossen2016analyzing}, scientists are generally interested in examining smaller and more targeted event-centric collections of documents available in a web archive. They discuss the difficulties of working with web archives and provide a research methodology for extracting and analyzing archive sub-collections focusing on specific subjects and events. Jones et al. \cite{jones2018many} focused on web archive collections' structural characteristics to better comprehend them. They applied concepts from AlSum's work \cite{alsum_thumbnail_2014} to demonstrate how the growth of collections could be compared by quantifying when mementos and seeds were added, including their age and frequency of creation. This work is similar because we are analyzing the structures within web archives that support collections, but we differ in several ways. Milligan, Risse, and Gossen discussed how visitors would consume collections, but did not analyze their structures. Jones analyzed the structural features but not the structures themselves.
Ke et al. \cite{ke2008toward} semantically leveraged collection structures to aid user exploration of massive data corpora through clustering. They developed the LAIR2 clustering method and the prototype LAIR2 Scatter/Gather \cite{10.1145/133160.133214} browser for exploring collections. Through analyzing user behavior with the scatter/gather concept and health information searches, Zhang et al. \cite{zhang2014evaluation} learned that users' mental models of search have a significant influence on how they utilize search interfaces. Our work exists to support such efforts by detailing other models of collection presentation.
Other research examined the fundamental properties of web archive collections. Padia et al. \cite{padia_visualizing_2012} created several visualization techniques to help users better grasp collection characteristics. AlNoamany et al. \cite{alnoamany2017generating} pioneered the concept of combining social media stories with web archive collections to provide a user-friendly interface for corpus summarization. They asked domain experts to manually select mementos that represented a collection. They then developed an algorithm that took into account the collection structure to automatically select mementos.
Test subjects could not tell the difference between stories generated by this algorithm and those generated by domain experts. Our work analyzes the collection structures that make this type of visualization and summarization possible.
Hypercane \cite{jones2021hypercane,jones2021hypercane2} is a toolkit that uses intelligent sampling to summarize large web archive collections. This software uses the structural aspects of the collection and the content of the collection's mementos to automatically select mementos that are representative of a web archive collection. Hypercane relies on the AIU \cite{jones2018aiu_nonanon} library to discover seeds within Archive-It, PANDORA, and Trove collections. All of these web archive platforms have adopted Memento \cite{van_de_sompel_rfc_2013}, but they have no standard method for clients to discover seeds and metadata. AIU accepts a collection identifier and then scrapes the associated collection landing page for that data. The Off-Topic Memento Toolkit (OTMT) \cite{jones_off-topic_2018_nonanon} applies textual similarity metrics to determine which mementos in a collection have gone off-topic (e.g., crawled as 404s, database error pages, no longer matching the collection topic). The OTMT analyzes the mementos within a given TimeMap, but needs to know which TimeMaps belong to a web archive collection, hence it also relies on AIU. Jones et al. also investigated how users might better understand web archive collections by using social cards \cite{jones2019social}. They discuss how surrogates representing a subset of a collection can help users determine what information needs a collection might satisfy. From this work they developed Raintale \cite{jones_mementoembed_raintale_2020}, which creates surrogates for groups of mementos. Combining Hypercane's sampling with Raintale's visualization capability provides summaries of web archive collections in formats easily understood by web users. These tools all belong to the Dark and Stormy Archives (DSA) project \cite{jones_dissertation_nonanon,dsa_code4lib_2022} which focuses on analyzing and summarizing web archive collections. Our work exists to help efforts like the DSA that consume and analyze web archive collections.
\section{Web Archive Collection Structures}
In December 2021, we chose eight web archives to analyze because they support our definition of a collection. We looked at the web archive platforms at Archive-It, Conifer, the Croatian Web Archive (HAW), the Internet Archive's user account web archives, Library of Congress (LC), the National Library of Australia (NLA: PANDORA and Trove web archive platforms), and the UK Web Archive (UKWA). Table~\ref{tab:collection_structures_per_archive_platform} shows the collection structures of the different platforms we analyzed, addressing RQ1. The table summarizes the behaviors that reflect the requirements and design choices of the various web archiving platforms discussed earlier.
\begin{table*}[t]
\caption{Details on different web archive platform collection structures.}
\label{tab:collection_structures_per_archive_platform}
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{>{\raggedright}p{2cm} >{\raggedright}p{2cm} >{\raggedright}p{2cm} >{\raggedright}p{2.5cm} >{\raggedright}p{2cm} >{\raggedright}p{2cm} >{\raggedright}p{2cm} >{\raggedright}p{2cm} p{2cm} p{2cm} }
\toprule
\textbf{Collection Platform} & \textbf{Name for mementos} & \textbf{Sub-collections?} & \textbf{Attribution} & \textbf{Private collections supported?} & \textbf{Mementos accessible from more than one collection?} & \textbf{Deep mementos accessible from within the collection?} & \textbf{Human-readable TimeMap membership} & \textbf{Embargoed resources?} & \textbf{Navigational hierarchy} \\ \midrule
Archive-It & Captures & No & Single account & Yes & No* & Yes & Collection & No & Type 1\\\midrule
Conifer & Captures & Yes & Single account & Yes & No & Yes & No human-readable TimeMaps & No & Type 2 \\\midrule
HAW & Archived copies & Yes & Greater web archive projects & No & Yes & No & Greater web archive & No & Type 1 \\\midrule
Internet Archive (IA) user account web archives & Captures & No & Single account & No & Yes & No & Greater web archive & No & Type 2 \\ \midrule
LC & Captures & No & Greater web archive projects & No & Yes & No & Greater web archive & Yes & Type 1 \\ \midrule
UKWA & Captures & Yes & Greater web archive projects & No & Yes & No & Greater web archive & Yes & Type 2 \\ \midrule
PANDORA Subject/Collection & Webpage snapshots & Yes & Organizational collaborators & No & Yes & No & Greater web archive & No & Type 1 \\ \midrule
Trove Collection & Webpage snapshots & Yes & Organizational collaborators & No & Yes & No & Greater web archive & No & Type 2 \\ \bottomrule
\end{tabular}
}
* -- the \texttt{/all/} collection is an exception containing all mementos on Archive-It\\
\end{table*}
\subsection{Features of the Different Web Archive Collection Structures}
We highlight the similarities and differences in collection structures across platforms.
\begin{itemize}
\item \textbf{The term used to define mementos} \tabto{0.5cm} We see different names for mementos. The term \emph{memento} was formally established in RFC 7089 \cite{van_de_sompel_rfc_2013} in 2013. Many platforms predate that formality, thus different terms exist for mementos. Some call them mementos, some call them copies, captures, or snapshots. Developers will need to understand the nomenclature of the platform and how these terms are synonymous.
\item \textbf{Existence of sub-collections} \tabto{0.5cm} Most platforms support sub-collections, allowing an archivist to further narrow a collection's topic. These sub-collections have a variety of names, such as sub-collection (Trove, PANDORA), subject (PANDORA), or subcategory (HAW, PANDORA). Tools that encounter sub-collections must handle this hierarchy.
\item \textbf{Attribution} \tabto{0.5cm} Some web archives attribute curation to a single entity while others cite different organizational collaborators. UKWA, LC, and HAW all attribute the selection of their mementos to the greater web archive, meaning that the projects of the archiving organization as a whole led to the creation of these mementos. In contrast, PANDORA and Trove collections attribute memento selection to one or more organizations who requested their capture.
Archive-It and Conifer, however, support individual accounts, thus memento selection is done by the person or organization that maintains that account.
\item \textbf{Support for private collections} \tabto{0.5cm} Some web archives have put in place access controls that determine whether other users of the web archive can view specific content. As account-based services, both Conifer and Archive-It allow private collections for users who are not yet ready to share their mementos. When a user makes a collection private, the collection, including all of its seeds, can only be viewed and accessed while the owner is logged into their account.
\item \textbf{Are mementos (seed and deep) accessible from more than one collection?} \tabto{0.5cm} On platforms like Archive-It and Conifer, each collection is isolated from the others. Archive-It supports an \texttt{/all/} collection containing mementos from every collection, but it is not advertised through the user interface. With this exception, a memento from one Archive-It collection is not accessible from another Archive-It collection. If two archivists create two Archive-It mementos from the same original resource at the same memento-datetime, but within different collections, then the mementos are distinct. This holds true for Conifer as well. Likewise, any deep mementos created from the crawls in these collections are only accessible to a user or tool browsing within the collection.
\item \textbf{Human-readable TimeMap membership} \tabto{0.5cm} Typically, a human-readable TimeMap for an original resource will include all of the mementos of that URI-R in the web archive, rather than only the mementos within a collection. Archive-It differs: its human-readable TimeMaps only include an original resource's mementos as captured within a given collection.
\item \textbf{Are resources embargoed?} \tabto{0.5cm} Some archives limit access to their content offsite (locations other than their library premises). This is known as embargoing resources. Both LC and UKWA embargo resources. In both cases, some mementos are only available to patrons who physically visit their library campus. This creates challenges for Internet-based collection analysis tools because some mementos are hidden.
\end{itemize}
\subsection{Navigational Hierarchies}
In addition to the features of each web archive's collection structures, we also note different \textbf{navigational hierarchies}. These navigational hierarchies help us understand how a visitor or crawler navigates each collection for information. We identified two main navigational hierarchy types followed by the web archive platforms. Type 1 (Figure \ref{fig:type1-navigational-hierarchy}) allows the visitor to review a TimeMap before choosing a memento from the collection; thus, an original resource supports the collection's theme. Type 2 (Figure \ref{fig:type2-navigational-hierarchy}) gives visitors direct access to mementos in a collection without having to go through a TimeMap; thus, the memento supports the collection's theme.
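To make this distinction concrete for tool builders, the following Python sketch models the two types as minimal data structures. The classes and field names are ours, purely illustrative, and not part of any platform's API; they simply encode the observation that Type 1 membership is mediated by TimeMaps, while Type 2 membership is a direct list of mementos.
\begin{verbatim}
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Type1Collection:
    """Type 1: the collection advertises original resources (URI-Rs);
    a visitor chooses a memento from each resource's TimeMap."""
    name: str
    # URI-R -> list of URI-Ms (the contents of that resource's TimeMap)
    timemaps: Dict[str, List[str]] = field(default_factory=dict)

    def mementos(self) -> List[str]:
        # Every memento reachable through a TimeMap is a member.
        return [urim for urims in self.timemaps.values() for urim in urims]

@dataclass
class Type2Collection:
    """Type 2: the collection links directly to individually chosen
    mementos (URI-Ms); TimeMaps, if any, belong to the greater archive."""
    name: str
    urims: List[str] = field(default_factory=list)

    def mementos(self) -> List[str]:
        return list(self.urims)
\end{verbatim}
Under this model, a crawler for a Type 1 collection must first enumerate URI-Rs and then fetch each TimeMap, whereas a crawler for a Type 2 collection can harvest URI-Ms directly from the collection page.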
\begin{figure*}
\centering
\begin{subfigure}{0.5\hsize}
\centering
\includegraphics[width=0.8\textwidth]{type1.png}
\caption{Navigational hierarchy: Type 1}
\label{fig:type1-navigational-hierarchy}
\end{subfigure}%
~
\begin{subfigure}{0.5\hsize}
\includegraphics[width=0.8\textwidth]{type2.png}
\caption{Navigational hierarchy: Type 2}
\label{fig:type2-navigational-hierarchy}
\end{subfigure}
\caption{The two main types of navigational hierarchies of different web archive collection platforms.}
\end{figure*}
\subsection{Different Web Archive Platforms}
\subsubsection{Archive-It} \hfill\\
Archive-It is an Internet Archive subscription service where users can create their own collections. Figure \ref{fig:archiveit-navigational-hierarchy} illustrates the navigational hierarchy of an Archive-It collection. Each collection advertises a set of original resources (seeds). Each seed has its own TimeMap that provides mementos (seed mementos). Seed mementos link to deep mementos that are not shown on the collection landing page. The mementos (seed or deep) are always contained within the collection. Even as a visitor keeps browsing for more mementos (seed or deep), they remain within the collection in which they started. An Archive-It visitor following links from seed and deep mementos never reaches a memento outside of the collection.
Within each collection, an original resource's mementos are listed in its TimeMap. Each TimeMap is specific to a collection and does not cross collections. Even though an original resource may appear in more than one collection, its mementos and its TimeMap do not. Archive-It collections do not support sub-collections. The archivist has the authority to add seeds to or remove seeds from the collection. The archivist schedules the crawls that create mementos. The archivist must, at a minimum, give their collection a name and supply seed URLs. From a visitor's perspective, the minimum amount of metadata provided by an Archive-It collection is limited to: the collection name, the collecting organization, the creation date of the collection, and the seed URLs. An archivist can, at their discretion, add more metadata to the collection or to individual seeds by choosing fields from Dublin Core \cite{dublin-core} or creating their own fields. In the Archive-It service, the mementos are referred to as ``captures''. The archivist has the option to make a collection publicly available to everyone or they can make it private, preventing others outside of their organization from viewing it.
Example URLs for Archive-It collection objects:
\begin{itemize}
\item Collection URL: \url{https://archive-it.org/collections/1064}
\item Seed URI: \url{http://beta.worldbank.org/climatechange/}
\item Human-readable TimeMap: \url{https://wayback.archive-it.org/1064/*/http://beta.worldbank.org/climatechange/}
\item Machine-readable TimeMap (not seen by the user, but accessible to Memento clients): \url{https://wayback.archive-it.org/1064/timemap/link/http://www.worldbank.org/en/topic/climatechange}
\item Memento (seed memento): \url{https://wayback.archive-it.org/1064/20171206011529/http://www.worldbank.org/en/topic/climatechange}
\item Memento (deep memento): \url{https://wayback.archive-it.org/1064/20171206031816/http://www.worldbank.org/en/topic/climatechange/overview}
\end{itemize}
\subsubsection{Conifer} \hfill\\
Conifer (formerly known as Webrecorder) is a service that allows a user to record and replay websites.
With Conifer's navigational hierarchy, shown in Figure \ref{fig:conifer-navigational-hierarchy}, the visitor views the collection landing page. From this landing page, they can visit archivist-created \emph{lists} that serve as organized sub-collections. Inside each list is a set of page titles and URI-Rs, but these titles link directly to the memento that Conifer captured. From there a visitor can follow links to deep mementos. This service is different from other public archives that we have discussed because a Conifer user can use their browser to control the archiving process. Conifer allows users to create their own accounts with which they create and share their own collections. Each collection can be either public or private. Conifer has no concept of human-readable TimeMaps. Instead, mementos are grouped into ``sessions'': within a session, a user records web pages while browsing naturally. The account owner chooses the resources to preserve.
Example URLs for Conifer collection objects:
\begin{itemize}
\item Collections by a particular user: \url{https://conifer.rhizome.org/shawnmjones}
\item Public collection: \url{https://conifer.rhizome.org/shawnmjones/wac_collection1}
\item Private collection: \url{https://conifer.rhizome.org/shawnmjones/wac_collection2}
\end{itemize}
\subsubsection{Croatian Web Archive (HAW)} \hfill\\
The Croatian Web Archive -- or Hrvatski Arhiv Weba (HAW) -- was built by the National and University Library in Zagreb in collaboration with the University of Zagreb University Computing Centre (Srce). Figure \ref{fig:haw-navigational-hierarchy} shows the navigational hierarchy of a HAW collection. The top HAW landing page lists the subjects. Clicking on a subject takes the visitor to a list of subcategories. Each subcategory/collection advertises a set of original resource titles and URI-Rs with a list of mementos. A visitor clicks on one of these URI-Rs to reach a human-readable TimeMap in list format. At HAW, the common term for mementos is ``copies'' and they are derived from the general web archive. The collections at HAW are publicly accessible. These collections are compiled by the greater web archive.
Example URLs for HAW collections:
\begin{itemize}
\item HAW category/subjects: \url{https://haw.nsk.hr/en/category/biology-botany-and-zoology}
\item HAW subcategory/collection: \url{https://haw.nsk.hr/en/category/biology}
\item TimeMap: \url{https://haw.nsk.hr/en/publikacija/4818/}
\end{itemize}
\subsubsection{Internet Archive's (IA) User Account Web Archives} \hfill\\
The Internet Archive is the largest and oldest web archive. It has its own types of collections that contain web archive files/data of different media types, but these do not present individual mementos for user consumption. However, there are collections that fall under the scope of our study, which we refer to as ``IA's user account web archives'': collections made by particular users who hold accounts at the Internet Archive. These collections are tied to a specific user rather than a theme. The navigational hierarchy of IA's user account web archives is shown in Figure \ref{fig:ia-navigational-hierarchy}. Once a visitor has reached an IA user account web archive collection, they click the name of a URI-R and are taken directly to the specific memento chosen for that collection. The design of these collections emphasizes that specific mementos, not all mementos for an original resource, are chosen as collection members.
From each memento, a visitor can reach a human-readable TimeMap, allowing them to view other mementos for that same original resource.
Example URLs for Internet Archive's (IA) user account web archive collections:
\begin{itemize}
\item IA's user account web archive URL: \url{https://archive.org/details/@shawnmjones?tab=web-archive}
\end{itemize}
\subsubsection{Library of Congress (LC)} \hfill\\
At the Library of Congress, a visitor can access the available collections by clicking on the ``Digital Collections'' tab on the library home page. There are no sub-collections. The collections can contain other digital material (e.g., images, video, PDF) besides web pages. For web pages, Figure \ref{fig:loc-navigational-hierarchy} shows the navigational hierarchy of an LC collection. Once a visitor has selected a collection item, clicking on ``view captures'' on the image preview or caption lets the visitor reach the TimeMap that lists the mementos. The collections are compiled by the greater web archive itself, and the mementos, referred to as ``captures'', are derived from the general web archive. The metadata describing a collection contains ``name'', ``description'', ``collection period'', ``frequency of collection'', ``languages'', and ``acquisition information.'' All collections at the Library of Congress are public; however, some collection items may contain embargoed content. Embargoed content is not accessible to users who are off library premises, or not before a particular expiration date.
Example URLs for LC collections:
\begin{itemize}
\item Collection URL: \url{https://www.loc.gov/collections/egyptian-elections-web-archive/}
\item Collection Item (full access only at the library): \url{https://www.loc.gov/item/lcwaN0006607/}
\item Collection Item (content may be embargoed): \url{https://www.loc.gov/item/lcwaN0006608/}
\item Collection Item (images + web pages): \url{https://www.loc.gov/collections/tenth-to-sixteenth-century-liturgical-chants/}
\end{itemize}
\subsubsection{The United Kingdom Web Archive (UKWA)} \hfill\\
At the UKWA, the collections are listed as ``Topics and Themes'' on the home page. The navigational hierarchy of a UKWA collection is shown in Figure \ref{fig:ukwa-navigational-hierarchy}. Once a visitor to the UKWA has reached a collection or sub-collection, they click the name of a URI-R and directly reach a memento specifically chosen for that collection, similar to IA's user account web archives. The UKWA curates its collections itself. Each collection consists of a set of individual mementos. Once a visitor selects a collection, they can search (using URI-R or keyword) within the collection. In addition to mementos, a collection may contain sub-collections. Mementos come from the general UKWA; they are not bound to a collection. From each memento, a visitor can reach a human-readable TimeMap, allowing them to view other mementos for that same original resource. In the UKWA, mementos are referred to as ``captures''. Although all collections are public, most mementos are only viewable from the library premises.
Example URLs for UKWA collections:
\begin{itemize}
\item Collection URL (without sub-collections): \url{https://www.webarchive.org.uk/en/ukwa/collection/44}
\item Collection URL (with sub-collections): \url{https://www.webarchive.org.uk/en/ukwa/collection/910}
\item Sub-collection URL: \url{https://www.webarchive.org.uk/en/ukwa/collection/911}
\item Memento (available outside library): \url{https://www.webarchive.org.uk/wayback/archive/20161128182225/http://www.iwa.wales/click/2016/07/referendum-ukips-role-now/}
\item Mementos available only inside the library link to: \url{https://www.webarchive.org.uk/en/ukwa/noresults}
\end{itemize}
\subsubsection{National Library of Australia (NLA)} \hfill\\
Trove\footnote{https://trove.nla.gov.au/} is an initiative between the National Library of Australia (NLA) and other Australian partner institutions. The NLA and these partner organizations decide which resources to preserve. At the National Library of Australia, Trove is the newer discovery system, while PANDORA is the much older Australian web archive. Figure \ref{fig:nla-navigational-hierarchy} shows the complex navigational hierarchy of PANDORA and Trove. This navigational hierarchy exists because NLA has been migrating its mementos to Trove but still wishes to retain the effort put into creating collections at PANDORA. Thus, a visitor has several potential points of entry. If the visitor comes from Trove, then their path is much like with UKWA and IA's user account web archives, going from collection directly to its mementos. All Trove mementos come from the greater web archive and are not restricted to a collection as with Archive-It. These mementos link to other deep mementos which are not featured on the collection page. Trove collections can contain mementos and sub-collections.
If the visitor comes from a PANDORA Subject or Collection, then they are presented with a list of titles representing original resources chosen for inclusion in the collection. Clicking on one of these titles takes the user to a Title Entry Page (TEP), which is a human-readable TimeMap. From the TEP, the visitor can then choose a memento hosted at Trove. Thus, PANDORA subjects and collections are ultimately linked to Trove TEPs, connecting the two platforms. Additionally, each TEP is informed by data stored at a URL like \url{https://webarchive.nla.gov.au/bamboo-service/tep/{TEP_ID}}, which returns a JSON object with the URI-Ms and other metadata about the TEP. This JSON resource acts like a machine-readable TimeMap of its own kind. A TEP's human-readable metadata contains the original resource page title and a list of collecting organizations (as ``partner organizations'') and their logos. In both Trove collections and TEPs, the mementos are commonly referred to as ``webpage snapshots''. All the collections at Trove are public.
The home page of PANDORA lists the main subjects under ``Browse subjects''. A PANDORA subject may contain ``subcategories'', which are themselves subjects, as well as ``collections''. In addition to TEP pages, some of these PANDORA collections can contain sub-collections. All PANDORA subjects and collections are public. At PANDORA (both subjects and collections), the mementos are referred to as ``webpage snapshots''. These mementos are derived from the greater web archive at Trove; they are not bound to a collection as with Archive-It.
Example URLs for NLA collections: \begin{itemize} \item Trove collection: \url{https://webarchive.nla.gov.au/collection/15003} \item Trove sub-collection: \url{https://webarchive.nla.gov.au/collection/15052} \item The PANDORA subject URL for ``Arts'': \url{http://pandora.nla.gov.au/subject/2} \item A PANDORA subcategory URL of ``Arts'' named ``Dance'': \url{http://pandora.nla.gov.au/subject/42} \item A PANDORA collection URL: \url{https://pandora.nla.gov.au/col/12142} \item A PANDORA sub-collection URL: \url{https://pandora.nla.gov.au/col/12203} \item A Trove TEP URL: \url{https://webarchive.nla.gov.au/tep/88147} \end{itemize} \begin{figure*} \centering \begin{subfigure}{0.5\hsize} \centering \includegraphics[width=\textwidth]{archiveit-navigational-hierarchy.png} \caption{Archive-It} \label{fig:archiveit-navigational-hierarchy} \end{subfigure}% \begin{subfigure}{0.5\hsize} \includegraphics[width=\textwidth]{conifer-navigational-hierarchy.png} \caption{Conifer} \label{fig:conifer-navigational-hierarchy} \end{subfigure} \caption{The navigational hierarchies of different web archive collection platforms.} \end{figure*} \begin{figure*}\ContinuedFloat \centering \begin{subfigure}{0.5\hsize} \centering \includegraphics[width=\textwidth]{haw-navigational-hierarchy.png} \caption{Croatian Web Archive (HAW)} \label{fig:haw-navigational-hierarchy} \end{subfigure}% \begin{subfigure}{0.5\hsize} \includegraphics[width=\textwidth]{ia-navigational-hierarchy.png} \caption{Internet Archive (IA) user account web archives} \label{fig:ia-navigational-hierarchy} \end{subfigure} \caption{The navigational hierarchies of different web archive collection platforms.} \end{figure*} \begin{figure*}\ContinuedFloat \centering \begin{subfigure}{0.5\hsize} \includegraphics[width=\textwidth]{loc-navigational-hierarchy.png} \caption{Library of Congress (LC)} \label{fig:loc-navigational-hierarchy} \end{subfigure} \begin{subfigure}{0.5\hsize} \includegraphics[width=\textwidth]{ukwa-navigational-hierarchy.png} \caption{United Kingdom Web Archive (UKWA)} \label{fig:ukwa-navigational-hierarchy} \end{subfigure}% \caption{The navigational hierarchies of different web archive collection platforms.} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{nla-navigational-hierarchy.png} \caption{The navigational hierarchies of the collections at the NLA.} \label{fig:nla-navigational-hierarchy} \end{figure*} \section{Future Work} One could extend our research to understand any socioeconomic, political, or management factors that influence how each collection structure is arranged. We also intend to study the collection structures of other national web archives, such as the Portuguese Web Archive\footnote{https://arquivo.pt/} and the Icelandic Web Archive\footnote{https://vefsafn.is/}. We would also like to understand the structure and features of web archiving initiatives run by universities, such as Stanford University Libraries\footnote{https://library.stanford.edu/spc/university-archives} and Columbia University Libraries\footnote{https://library.columbia.edu/index.html}. We would like to understand why some web archives do not yet have collections. Is it related to the size of the web archive, or are there social, cultural, or management challenges that are not visible from examining the web archive itself? Finally, we intend to use this understanding of collection structures to suggest enhancements and improvements to tools like AIU.
\section{Conclusion} Web archives play a role in preserving our digital history. As web archives grow, archivists eventually create collections to make their archives easier to understand and manage. Collections help visitors narrow the number of documents they need to review for a specific topic. Web archive collections are also targets for different data management and analysis tools. Each collection's structure influences how it meets these different use cases. We have addressed RQ1 by reviewing the collection structures of Archive-It, NLA (Trove and PANDORA), HAW, LC, UKWA, and Conifer. Through this analysis, we discovered different approaches to collection structure, revealing a diversity of capabilities for potential new web archiving platforms and tools. These approaches appear to be informed heavily by the nature of each platform. For RQ2, we sought similarities in these collection structures so we could better understand their current state. Archive-It and Conifer are account-centric, meaning that an individual user or organization cares for the collection separately from the archiving platform itself. The others are general web archives that created collections from their vast holdings based on internal projects. Whereas Archive-It's and Conifer's mementos are only accessible within a collection, the other platforms share mementos between collections. Archive-It and Conifer offer direct curatorial attribution to the account owner, Trove and PANDORA cite the different organizational collaborators who requested the creation of mementos, and the others are inconsistent in this regard. Most archives offer at least one level of sub-collection, and some web archives embargo resources. We discovered two types of navigational hierarchies for collections. In Type 1, an original resource supports the collection's theme. In Type 2, a memento supports the collection's theme. In the Type 1 navigational hierarchy, used by platforms like Archive-It and LC, the user navigates through zero or more sub-collections before reaching a human-readable TimeMap. From there, they can select the memento of their choice. In the Type 2 navigational hierarchy, used in platforms like the UKWA and Trove, the user navigates through zero or more sub-collections before reaching a page that directly links to mementos. Visitors and tool designers need to understand this distinction. These different structures reflect different decisions on resource membership among collections. We did not attempt to prescribe how a web archive collection should be designed but rather analyzed existing platforms. Such information is helpful to different parties. Future archivists and platform designers need ideas for their own archives. Software developers need to understand how to process these collection structures to build tools. With the growing interest in Big Data, web archives will increasingly become targets of interest for researchers. With an understanding of collection structures, researchers will know how to acquire the metadata and mementos they seek. \bibliographystyle{ACM-Reference-Format} \newpage
{ "timestamp": "2022-09-20T02:21:20", "yymm": "2209", "arxiv_id": "2209.08649", "language": "en", "url": "https://arxiv.org/abs/2209.08649" }
\section{Introduction} \label{sec:introduction} \begin{figure}[t!] \centering \includegraphics[width=0.97\linewidth]{fig/bt-sot} \caption{Integration between the SoT approach and BTs in our framework. The BT and the SoT approach run in parallel at different control frequencies. On the basis of the current robot and environment state, the BT configures a hierarchical control problem, which is then solved by the SoT control strategy at a much higher control frequency. The new tasks are set by the leaf nodes of the BT, and the tasks that are no longer needed are removed by the internal nodes. More details are in Sec. \ref{subsec:sot-bt}.} \label{fig:bt-sot} \end{figure} Mechanical systems with a high number of degrees of freedom, such as robot arms or legged robots, can fulfill multiple goals simultaneously. In the context of robot motion generation, goals can be encoded by means of a \textit{task}, or \textit{error}, function that maps a kinodynamic robot configuration to a scalar value --- e.g., a particular end-effector pose can be formulated as a task via the forward kinematics function of the robot state. Control of these systems has been addressed by Siciliano et al. \cite{240390} and Kanoun et al. \cite{5766760}, who proposed a local and hierarchical control strategy: \textit{task-priority} or \textit{Stack-of-Tasks} (SoT). This approach allows a priority to be specified for each task, preventing the lower-ranked tasks (e.g., move the end-effector to a position) from interfering with the higher-ranked ones (e.g., avoid collision with a wall). The SoT approach is local, which allows for fast and efficient implementations that can deal with changes on-line by greedily taking the locally optimal action at every iteration, without having to rely on typically slower global re-planning. However, since this control strategy relies on the minimization of quadratic error functions, it can only address problems that can be formulated in terms of quadratic objectives. This shortcoming can be tackled by using either a motion planning algorithm (e.g., LQR-trees \cite{lqr-trees-tedrake}) or higher-level decision logic, such as Finite State Machines (FSMs), to switch between different SoT controllers. However, both solutions affect the reactivity of the SoT control. On the one hand, a re-planning of the motion would be needed in case of unexpected changes in the environment. On the other hand, FSMs allow tasks to be composed so as to avoid local minima, but they are characterized by an intrinsic trade-off between reactivity (i.e., the ability to quickly and efficiently react to changes) and modularity (i.e., the extent to which a system's components may be separated into building blocks and recombined) \cite{DBLP:journals/corr/abs-1709-00084}. This limitation makes them impractical for defining reactive behaviors in dynamic environments. Recent work has explored Behavior Trees (BTs) as a task switching structure in the context of robotics and artificial intelligence \cite{DBLP:journals/corr/abs-2005-05842}. Despite BTs being functionally equivalent to FSMs \cite{petter-bts, 6907656}, they promise to address the latter's limitations in reactivity and modularity \cite{DBLP:journals/corr/abs-1709-00084}. The execution of a BT relies on the generation of a signal, or \textit{tick}, that is propagated from the root to the children. However, the frequency of tick generation is too slow to use BTs for direct high-frequency control (i.e., direct hardware control).
To make BTs usable in this context, a real-time controller is typically placed in between, and control functionalities are encapsulated in the leaf nodes of the BT that are responsible for performing commands. Current works encapsulate in these leaf nodes entire \textit{skills} (e.g., pushing an object) that typically involve multiple goals. This choice obscures the inner workings of each skill and can negatively affect its re-usability, since some of the specific goals might not be suitable for different contexts. In this letter, we introduce a new framework for robot control by employing BTs as a task composition structure for an SoT control strategy (Fig. \ref{fig:bt-sot}). In our approach, a BT and an SoT strategy run in parallel at different control frequencies. On the basis of the current robot and environment state, the BT periodically configures a hierarchical control problem, which is then solved by the SoT strategy at a much higher control frequency. The new tasks are set by the leaf nodes of the BT, and the tasks that are no longer needed are removed by the internal nodes. Our main contribution is a novel method to combine a Stack-of-Tasks approach with Behavior Trees in a unified framework, addressing some of the limitations of each model. On the one hand, we are able to extend BTs to high-frequency control tasks without losing transparency and modularity, since each goal is independently set by a leaf node in the BT. On the other hand, we endow the SoT approach with higher-level decision logic, without having to compromise on reactivity and modularity. This makes it easier to manage situations that have locally optimal solutions but not globally optimal ones, and it facilitates the re-use of sub-behaviors in different scenarios. Since robotics tasks are, at a high level, typically formulated in terms of sub-tasks that are executed sequentially or simultaneously, our approach provides a transparent and modular tool for the design of robotic behaviors that involve high-frequency control. We validate our methodology on the set-up shown in Fig. \ref{fig:exp-setup}, where the goal is to pick a small cube, place it at the base of a ramp, and then push it to the top. The experimental results show that, using our approach, the robot is not only able to robustly achieve the goal at the first attempt, but also to react to unexpected changes, by exploiting the intrinsic reactivity of SoT and BTs. \begin{figure}[t] \vspace{0.3cm} \centering \includegraphics[width=0.9\linewidth]{fig/exp-setup} \caption{Experimental set-up: Microsoft Kinect V2, Franka Emika Panda 7-DOF manipulator, and a 40 mm cube. The goal is to pick the cube, place it at the base of the ramp, and then push it to the top.} \label{fig:exp-setup} \end{figure} \section{Related Work} \label{sec:related_work} \subsection{Hierarchical Stack-of-Tasks Control Strategy} Each goal for a redundant mechanical system can be formulated in terms of minimizing a separate \textit{task} function of the robot state $\boldsymbol{q}$, which can be regulated with an ordinary differential equation. In the case of multiple tasks, the corresponding equations can be sorted by priority, and each can be solved within the solution set of the higher-priority tasks (\textit{task-priority} or \textit{Stack-of-Tasks}). Kanoun et al.
\cite{5766760} proposed a prioritized task-regulation framework based on a sequence of quadratic programs (QPs) that generalizes the earlier \textit{task-priority} framework \cite{240390} to inequality tasks. In \cite{escande2014hierarchical}, a more numerically efficient solution of the same problem was proposed. This approach has been successfully applied in different contexts: \cite{7387707} exploits a hierarchical control strategy for autonomous picking and palletizing, while \cite{stoyanov2018assisted}, \cite{NICOLIS20175672} and \cite{8253809} use it for single-arm and dual-arm robot teleoperation. Outside the robotic manipulation context, \cite{7803330} presents an approach based on a whole-body control framework combined with hierarchical optimization. Moreover, \cite{9568848} and \cite{8968452} show that it is also possible to exploit the SoT approach in the context of Reinforcement Learning (RL). Existing methods that do not involve motion planning typically exploit Finite State Machines (FSMs). In this way, tasks can be composed such that the robot does not get stuck in local minima when dealing with multi-step manipulation problems that require non-quadratic objectives. However, FSMs are characterized by an intrinsic trade-off between reactivity and modularity (more details in Sec. \ref{subsec:sot-fsm-shortcomings}). In this letter, we propose to overcome this shortcoming by exploiting Behavior Trees. \subsection{Behavior Trees in Manipulation} Behavior Trees have been successfully applied in the fields of Robotics and Artificial Intelligence \cite{DBLP:journals/corr/abs-1709-00084, DBLP:journals/corr/abs-2005-05842}. Thanks to their transparency and modularity, they have been used for robotic arm manipulation tasks. Bagnell et al. \cite{6385888} exploit a BT to model the logical structuring of dexterous manipulation tasks. BTs have also been used within industrial robotics, thanks to their intrinsic reactivity. In \cite{7140065}, Guerin et al. use BTs to provide non-experts with a set of generalizable skills to visually create pick-and-place operations. Related to their work, \cite{d1fbd409b9ce4ba0b49f3f25509ed2a9} presents an extended system for robot operation (or skill) composition, showing demonstrations in wire-bending and polishing operations. \cite{8594319} models skills as reusable motion primitives that are dynamically activated when conditions trigger, enabling the composition of complex behaviors. Extending their work, \cite{9636292} uses black-box optimizers to learn the parameters of the modelled skills for a BT policy. The methods above handle \textit{skills} (e.g., grasping or pushing an object) as integral units, obscuring the inner workings and the action semantics within each skill. In the more general context of robot missions that involve a number of desired objectives, some of which need to be taken into account at the same time, Özkahraman et al. \cite{ozkahraman-petter} investigate the combination of BTs with Control Barrier Functions. However, since this approach relies on the definition of a BT where only one leaf node can be run at a time, if multiple tasks have to be performed simultaneously, they have to be encapsulated in a single leaf node. In contrast, in our framework we exploit an SoT strategy for decomposing a skill into a set of prioritized tasks, whose composition is explicitly and concurrently handled by different leaf nodes of the BT.
To the best of our knowledge, there is no previous work leveraging BTs to dynamically compose prioritized tasks in an SoT control strategy. \section{Background} \label{sec:approach} In this section, we briefly describe the methodologies used in the proposed approach. We start by describing the SoT control strategy in \ref{subsec:sot}. Then, we discuss the shortcomings of the SoT framework in combination with Finite State Machines. In \ref{subsec:bt}, we provide the background on BTs and explain what makes them suitable for our purpose. \subsection{Hierarchical Stack-of-Tasks Control Strategy} \label{subsec:sot} In this work, we build upon the real-time hierarchical Stack-of-Tasks control strategy proposed in \cite{5766760}. In this section, we briefly review the concept and refer the reader to the original work for the details. Let $\boldsymbol{q}, \dot{\boldsymbol{q}} \in \mathbb{R}^n$ be the joint configuration and joint velocity, respectively, where $n$ is the number of degrees of freedom. In our work, we are interested in redundant mechanical systems (i.e., $n>6$) that are able to fulfill multiple tasks simultaneously. Following~\cite{escande2014hierarchical}, we define each task as a map from the joint space to the operational space via the derivatives of error functions $\boldsymbol{e}(\boldsymbol{q})$. The task evolution is given by \begin{align} \label{evolution} \dot{\boldsymbol{e}}(\boldsymbol{q})&=\boldsymbol{J}(\boldsymbol{q})\dot{\boldsymbol{q}}, \end{align} where $\boldsymbol{J}(\boldsymbol{q})=\frac{\partial \boldsymbol{e}(\boldsymbol{q})}{\partial \boldsymbol{q}}$ denotes the task Jacobian. A desired evolution of (\ref{evolution}) can be imposed via a control law $\dot{\boldsymbol{e}}^*(\boldsymbol{q})$, where we use simple P controllers of the form \begin{equation} \label{control-law} \dot{\boldsymbol{e}}^*(\boldsymbol{q}) = -K_p \boldsymbol{e}(\boldsymbol{q}), \end{equation} where $K_p$ is a diagonal gain matrix. In the case of a single-priority-level equality task, the following least-squares Quadratic Program (QP) has to be solved: \begin{equation} \arg \min_{\boldsymbol{\dot{q}}} \|\boldsymbol{J} \dot{\boldsymbol{q}} - \dot{\boldsymbol{e}}^*\|^2. \end{equation} In order to allow for inequality tasks, we henceforth use a general task formulation with upper bounds \begin{equation} \label{inequality} \boldsymbol{J} \dot{\boldsymbol{q}} \leq \dot{\boldsymbol{e}}^*(\boldsymbol{q}). \end{equation} Escande et al. \cite{escande2014hierarchical} show how this allows lower bounds, double bounds, and equalities to be transcribed by reformulating the task. If the constraint in (\ref{inequality}) is infeasible, the least-squares problem can be formulated by introducing a slack variable $\boldsymbol{w}$: \begin{align} &\min_{\dot{\boldsymbol{q}}, \boldsymbol{w}} \|\boldsymbol{w}\|^2 \\ \text{subject to} \quad &\boldsymbol{J}\dot{\boldsymbol{q}} \leq \dot{\boldsymbol{e}}^*+\boldsymbol{w}. \nonumber \end{align} To form a hierarchical Stack-of-Tasks with $p=1,\dots,P$ priority levels, we stack all task Jacobians of equal priority $p$ in a matrix $\boldsymbol{A}_p$ and the corresponding upper bounds (i.e., reference velocities) in a vector $\boldsymbol{b}_p$, to form one constraint of the form $\boldsymbol{A}_p\dot{\boldsymbol{q}} \leq \boldsymbol{b}_p$ for each hierarchy level. The main idea of a hierarchical Stack-of-Tasks control strategy is to solve the tasks of lower priority in the null space of those of higher priority. Therefore, the following QP needs to be solved for $p=1,\dots,P$: \begin{align} \label{objective} &\min_{\dot{\boldsymbol{q}}, \boldsymbol{w}_p} \|\boldsymbol{w}_p\|^2\\ \text{subject to} \quad &\boldsymbol{A}_i \dot{\boldsymbol{q}} \leq \boldsymbol{b}_i+\boldsymbol{w}^*_i, \quad i=1,\dots,p-1 \nonumber \\ &\boldsymbol{A}_p \dot{\boldsymbol{q}} \leq \boldsymbol{b}_p+\boldsymbol{w}_p, \nonumber \end{align} where the slack variable solutions $\boldsymbol{w}^*_i$ from the previously solved levels are frozen. This method essentially provides a global control policy by solving a sequence of instantaneous optimal control problems at each time step.
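For concreteness, this cascade can be prototyped in a few lines. The following Python sketch solves Eq. (\ref{objective}) level by level using the off-the-shelf \texttt{cvxpy} modelling library; it is an illustrative prototype only, not our implementation, and a real-time SoT controller would instead rely on a dedicated hierarchical solver such as that of \cite{escande2014hierarchical}.

\begin{verbatim}
# Illustrative sketch of the hierarchical QP cascade (Eq. (6)).
import cvxpy as cp
import numpy as np

def solve_stack(levels, n_dof):
    """levels: list of (A_p, b_p) pairs, highest priority first.
    Returns a joint velocity solving the cascade."""
    qdot = cp.Variable(n_dof)
    frozen = []  # (A_i, b_i, w_i*) of the already-solved levels
    for A_p, b_p in levels:
        w_p = cp.Variable(len(b_p))
        cons = [A_i @ qdot <= b_i + w_i for A_i, b_i, w_i in frozen]
        cons.append(A_p @ qdot <= b_p + w_p)
        cp.Problem(cp.Minimize(cp.sum_squares(w_p)), cons).solve()
        frozen.append((A_p, b_p, w_p.value))  # freeze slack solution
    return qdot.value

# Example: two equality tasks, each transcribed as a pair of
# inequalities; the second is solved subject to the first.
A1 = np.array([[1.0, 1.0], [-1.0, -1.0]]); b1 = np.array([1.0, -1.0])
A2 = np.array([[1.0, -1.0], [-1.0, 1.0]]); b2 = np.array([3.0, -3.0])
print(solve_stack([(A1, b1), (A2, b2)], n_dof=2))  # ~ [2., -1.]
\end{verbatim}

Note how the slack solutions of the already-solved levels are frozen and re-used as constraints when optimizing the next level, exactly as in Eq. (\ref{objective}).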
\subsection{Limitations of SoT with Finite State Machines} \label{subsec:sot-fsm-shortcomings} The hierarchical SoT control strategy described in \ref{subsec:sot} relies on the assumption that it is always possible to reach a globally optimal solution by minimizing a quadratic error function. In reality, this is not always the case, and one often deals with problems that require non-quadratic objectives. In these situations, a single SoT might not be enough, because the control strategy might bring the robot to a disadvantageous configuration with respect to the global goal. The only way to address these scenarios, without considering a more complex error function or a motion planner, is to compose a sequence of local approximations (i.e., Stacks-of-Tasks) that prevents the control strategy from falling into a local minimum. Finite State Machines (FSMs) have typically been used for this purpose, since they are a standard choice in the context of task switching structures. However, as pointed out in \cite{DBLP:journals/corr/abs-1709-00084}, FSMs are characterized by an intrinsic trade-off between reactivity (i.e., the ability to quickly and efficiently react to changes) and modularity (i.e., the extent to which a system's components may be separated into building blocks and recombined). The state transitions in FSMs are \textit{one-way control transfers}, in the sense that, making an analogy with programming languages, they behave like \textit{Goto} statements, which have well-known shortcomings \cite{10.1145/362929.362947}. Making a system reactive by using FSMs requires defining many transitions (i.e., \textit{one-way control transfers}) between components. As a consequence, this negatively affects modularity because, if one component is removed, for example, every transition to that component needs to be revised. \subsection{Behavior Trees} \label{subsec:bt} \begin{figure}[t!] \centering \subfigure[]{\includegraphics[height=0.15\linewidth]{fig/bt-control-nodes} \label{fig:control-nodes} } \subfigure[]{\includegraphics[width=0.5\linewidth]{fig/bt-execution-nodes} \label{fig:execution-nodes} } \caption{(a) Control flow nodes, from left to right: Sequence, Parallel, Fallback and Decorator. (b) Execution nodes, from left to right: Condition and Action.} \end{figure} Behavior Trees are an alternative to FSMs that promises to address some of their limitations in modularity, re-usability, and reactivity. Following the terminology from \cite{DBLP:journals/corr/abs-1709-00084}, a BT is composed of a root node, and at least one internal node and one leaf node. The execution of a BT starts at the root node, which generates signals, or \textit{ticks}, at a given frequency; these are propagated to the children, down to the leaf nodes. A node is executed if and only if it receives a tick, and it returns a status to the parent.
The possible statuses are \textit{Running}, if the execution is under way; \textit{Success}, if the node has achieved its goal; or \textit{Failure} otherwise. Internal Nodes, or Control Nodes, can be of type \textit{Sequence}, \textit{Fallback}, \textit{Parallel}, or \textit{Decorator} (Fig. \ref{fig:control-nodes}). A Sequence Node propagates the tick to its children from left to right, and it returns \textit{Success} if and only if all its children return \textit{Success}. A Fallback Node propagates the tick to its children from left to right until one of them returns \textit{Success}. Note that, when a child returns \textit{Running}, the Sequence and Fallback Nodes do not tick the next child, if any. A Parallel Node propagates the tick to all its children simultaneously, and it returns \textit{Success} if a set number of them return \textit{Success}. Lastly, a Decorator Node modifies the return status of a single child node according to a custom policy. Leaf Nodes, or Execution Nodes, come in two categories (Fig. \ref{fig:execution-nodes}). When ticked, an Action Node performs a command. It returns \textit{Success} if the action defined by the command is correctly completed, \textit{Failure} if it has failed, and \textit{Running} if it is ongoing. A Condition Node checks a proposition, returning \textit{Success} if it holds and \textit{Failure} otherwise. In the previous section, we pointed out that the state transitions in FSMs are \textit{one-way control transfers}. Although BTs are equivalent to FSMs \cite{petter-bts, 6907656}, they exploit \textit{two-way control transfers}: upon receiving the tick from its parent, a node is executed and returns a status. As a consequence, the flow of a BT is governed by its internal nodes, on the basis of the statuses returned by their children. This substantial difference makes BTs both more reactive and more modular than FSMs. On the one hand, the reactivity is given both by the continual generation of ticks and by their tree traversal in a closed-loop execution. On the other hand, the modularity derives from the possibility of treating each sub-tree of a BT as an independent module, with a standard interface given by the return statuses. The combination of these two aspects makes BTs more suitable than FSMs as a task switching structure for the SoT control strategy. \section{Proposed Approach} The main contribution of our letter is the integration of the SoT control strategy with BTs, such that a BT is used to configure the hierarchical control problem to be solved by the SoT approach. In \ref{subsec:sot-bt} we explain our framework, and in \ref{subsec:example} we provide an example to clarify its functioning. \subsection{Combining SoT with BTs} \label{subsec:sot-bt} \begin{figure}[t!] \centering \subfigure[]{\includegraphics[height = 0.15\linewidth]{fig/new-control-nodes} \label{fig:new-control-nodes} } \subfigure[]{\includegraphics[height = 0.11\linewidth]{fig/new-execution-nodes} \label{fig:new-execution-nodes} } \caption{New BT nodes in our framework. The nodes in Fig. \ref{fig:new-control-nodes} are \textit{SoT-Control Nodes}, which upon completion remove from the SoT the tasks set by their children. Fig. \ref{fig:new-execution-nodes} shows the two new leaf nodes we propose, namely the \textit{Non-Blocking} and the \textit{Blocking} Action Node, respectively. The \textit{Non-Blocking Action Node} sets one task for the SoT control strategy and then returns \textit{Success}.
The \textit{Blocking Action Node} sets one task and then returns \textit{Running} until the task error $\boldsymbol{e}(\boldsymbol{q})$ either satisfies the imposed constraint (returning \textit{Success}) or a timeout occurs (returning \textit{Failure}).} \label{fig:new-nodes} \end{figure} As mentioned in \ref{subsec:sot-fsm-shortcomings}, a single SoT control instance might not be sufficient in all situations when trying to fulfill a goal. The aim is to exploit the flow of a BT to dynamically update the task functions in the SoT, while preserving the framework's transparency and modularity. The basic idea is to have a BT and an SoT approach running in parallel at different control frequencies, and to use the BT for mapping a robot and environment state to a hierarchical control problem. Once configured by the BT, this control problem is then solved by the SoT strategy at a much higher control frequency. In this way, we can constantly update the task functions in the SoT by either setting new tasks or removing the ones that are no longer necessary. We exploit Action and Control Nodes in the BT to configure the hierarchical control problem. Regarding the Action Nodes, we distinguish between \textit{Non-Blocking} and \textit{Blocking} Action Nodes (Fig. \ref{fig:new-execution-nodes}). A \textit{Non-Blocking} Action Node sets a new task $x = (\boldsymbol{e}_x(\boldsymbol{q}), p_x, {K_p}_x)$, where $\boldsymbol{e}_x(\boldsymbol{q})$ is the \textit{task error function}, $p_x \in \mathbb{N}_+$ is the \textit{task priority}, and ${K_p}_x \in \mathbb{R}_+$ is the \textit{task gain}. As a consequence, a new \textit{task constraint} in the general form $\boldsymbol{J}_x \dot{\boldsymbol{q}} \leq \dot{\boldsymbol{e}}_x^*(\boldsymbol{q}) + \boldsymbol{w}$ is added to the hierarchical control problem (i.e., the task $x$ has been \textit{set}), where $\dot{\boldsymbol{e}}_x^*(\boldsymbol{q}) = -{K_p}_x \boldsymbol{e}_x(\boldsymbol{q})$ according to Eq. (\ref{control-law}). This Action Node returns \textit{Success} immediately after the task $x$ is set. A \textit{Blocking} Action Node sets a new task $x = (\boldsymbol{e}_x(\boldsymbol{q}), p_x, {K_p}_x, s_x, f_x)$, where $\boldsymbol{e}_x(\boldsymbol{q}), p_x, {K_p}_x$ are defined as before, $s_x, f_x \in \mathbb{R}_+$ are an \textit{error threshold} and a \textit{time threshold}, respectively, and $t_x \in \mathbb{R}_+$ denotes the \textit{task execution time}. This Action Node returns: \begin{itemize} \item \textit{Success}, if $x$ is set, $\boldsymbol{e}_x(\boldsymbol{q}) \leq s_x$ and $t_x \leq f_x$; \item \textit{Running}, if $x$ is set, $\boldsymbol{e}_x(\boldsymbol{q}) > s_x$ and $t_x \leq f_x$; \item \textit{Failure}, if $x$ is not set or $t_x > f_x$. \end{itemize} \begin{figure*}[t!hb] \centering \subfigure[]{\includegraphics[height = 0.27\linewidth]{fig/bt-example1} \label{fig:example-a} } \subfigure[]{\includegraphics[height = 0.27\linewidth]{fig/bt-example2} \label{fig:example-b} } \caption{Functioning example of our framework. The goal for the robot manipulator is to reach two different points while not violating some constraints. Figs. \ref{fig:example-a} and \ref{fig:example-b} illustrate the robot performing the motions to the first and the second point, respectively.
For each of them, we show a frame of the running BT (green denotes \textit{Success}, orange denotes \textit{Running}, red denotes \textit{Failure}, and grey denotes \textit{not ticked}), a table with information about the active tasks in the SoT, and a visualization of the obstacle planes, the goal points, and the constraint line associated with the active tasks.} \label{fig:example} \end{figure*} The main difference between the two Action Nodes lies in the \textit{blocking} of the tick, which takes place only in the \textit{Blocking} Action Node. We need to block the tick of the BT (i.e., return \textit{Running} and avoid ticking the rest of the BT) after a certain control problem has been configured by the Action Nodes, in order to wait for a specific robot and environment state to be reached before updating the SoT. Since a \textit{Non-Blocking} Action Node returns \textit{Success} immediately after having set a task, it never returns \textit{Running} and, consequently, never prevents the tick from being propagated to the rest of the tree. Although it would be possible to exploit Condition Nodes for this purpose, their use would imply that no Action Node has an internal mechanism to determine its progress, that the \textit{Running} status is never returned, and that the condition is checked at the low frequency of the BT. For these reasons, we introduce the \textit{Blocking} Action Node and exploit two threshold values in it. On the one hand, we monitor the error function $\boldsymbol{e}(\boldsymbol{q})$ of a task and compare it with an \textit{error threshold} $s_x$, to determine whether the task is \textit{completed} (i.e., $\boldsymbol{e}_x(\boldsymbol{q}) \leq s_x$) and, consequently, whether the Action Node that set the task is still \textit{Running} or not. On the other hand, we monitor the \textit{execution time} $t_x$ of the task and compare it with a \textit{time threshold} $f_x$, to identify possible situations in which the task cannot be achieved (i.e., $t_x > f_x$). The idea of task monitoring is, however, not suitable for all the types of tasks that can be set. For tasks that define desired motions (e.g., reach a point), a lower value of $\boldsymbol{e}(\boldsymbol{q})$ certainly indicates that the motion is closer to \textit{completion} (e.g., the robot has almost reached the point). On the contrary, for tasks that define constraints (e.g., avoid collision with a wall), the value of $\boldsymbol{e}(\boldsymbol{q})$ can no longer be interpreted in the same way. For this reason, we use both \textit{Non-Blocking} and \textit{Blocking} Action Nodes in our framework, exploiting the former for constraint tasks and the latter for motion tasks. Setting one task in each Action Node, instead of an entire Stack-of-Tasks, brings two main advantages. On the one hand, it improves understandability, since the BT provides a clear way both to visualize all the set tasks and to identify the \textit{blocking} ones. On the other hand, tasks that are common to an entire subtree and not strictly related to the behavior defined by it (typically constraints) can be defined only once, outside the scope of the subtree. This makes each subtree more independent and easily re-usable in different contexts.
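To make the semantics of the two Action Nodes concrete, the following Python sketch mirrors their tick logic. It is illustrative only and not our implementation: the \texttt{stack} object, with its \texttt{set\_task} and \texttt{error} methods, is an assumed placeholder for the interface to the SoT, and the case in which a task cannot be set is omitted for brevity.

\begin{verbatim}
# Illustrative sketch of the two Action Node types; `stack` is an
# assumed placeholder for the shared SoT interface.
import time
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    RUNNING = 2
    FAILURE = 3

class NonBlockingAction:
    """Sets its task and immediately returns Success."""
    def __init__(self, stack, task):   # task = (e_x, p_x, Kp_x)
        self.stack, self.task = stack, task

    def tick(self):
        self.stack.set_task(self.task)  # assumed idempotent
        return Status.SUCCESS

class BlockingAction:
    """Sets its task, then reports Running until e_x(q) <= s_x
    (Success) or the execution time t_x exceeds f_x (Failure)."""
    def __init__(self, stack, task, s_x, f_x):
        self.stack, self.task = stack, task
        self.s_x, self.f_x, self.t0 = s_x, f_x, None

    def tick(self):
        if self.t0 is None:             # first tick: set the task
            self.stack.set_task(self.task)
            self.t0 = time.monotonic()
        if time.monotonic() - self.t0 > self.f_x:
            return Status.FAILURE       # t_x > f_x
        if self.stack.error(self.task) <= self.s_x:
            return Status.SUCCESS       # e_x(q) <= s_x
        return Status.RUNNING
\end{verbatim}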
Regarding the Control Nodes, we extend them to remove the tasks that are no longer needed by creating \textit{SoT}-Control Nodes (Fig. \ref{fig:new-control-nodes}). Let an \textit{SoT}-Control Node $C$ be a Control Node with $m$ children, of which $n \leq m$ are Action Nodes. $C$ removes a set of tasks $X_C$, where each $x \in X_C$ has been set by an Action Node that is a child of $C$ ($|X_C| = n$). As a consequence, all the associated \textit{task constraints} previously added by the children of $C$ are removed from the hierarchical control problem. Task removal takes place immediately before the Control Node returns its final outcome (i.e., \textit{Success} or \textit{Failure}), without affecting the node behavior described in Section \ref{subsec:bt}. In the case of the \textit{Running} status, task removal takes place if and only if the \textit{SoT}-Control Node is \textit{halted} (i.e., the execution of its children is interrupted because a previous condition in the BT is no longer met). The creation of new Control Nodes allows us to exploit these nodes for task removal when needed, without precluding the use of the standard ones purely for controlling the flow of the BT. Control Nodes are more suitable than Action and Condition Nodes for removing tasks. On the one hand, task removal in the Action Nodes would make it problematic to remove the tasks set by \textit{Non-Blocking} Action Nodes, since they have to be removed after the tasks set by \textit{Blocking} Action Nodes. On the other hand, since Condition Nodes have no information about the other nodes in the tree, it would be complex to deduce which tasks have to be removed. On the contrary, Control Nodes have direct access to their children and provide us with a straightforward way to specify the tasks to remove by exploiting the BT design. As a result of the above definitions of the new nodes, the hierarchical control problem is dynamically updated at each tick according to the logic modelled by the BT, and solved in parallel by the SoT strategy according to Eq. (\ref{objective}). \subsection{Framework Example} \label{subsec:example} To clarify the functioning of our framework, we provide a simple illustrative example in Fig. \ref{fig:example}. Consider a robot manipulator on a flat surface, such as a table, near an obstruction, such as a wall. The manipulator must move its end-effector to two successive points. The motion from the start position to the first point requires operating near the obstruction, so collision with it has to be avoided. On the contrary, the motion from the first point to the second must be accomplished while keeping the end-effector on a line, and it is supposed to occur farther away from the obstruction. While performing both motions, collision avoidance with the table must also be taken into account. Figs. \ref{fig:example-a} and \ref{fig:example-b} illustrate the robot performing the motions to the first and the second point, respectively. For each of them, we show a frame of the running BT (green denotes \textit{Success}, orange denotes \textit{Running}, red denotes \textit{Failure}, and grey denotes \textit{not ticked}), a table with information about the active tasks in the SoT, and a visualization of the obstacle planes, the goal points, and the constraint line associated with the active tasks. Since the first Control Node under the root is an SoT-Parallel Node, the Action Node \textit{Avoid Table} and the Condition Node \textit{Has Point1 Been Visited} are ticked at the same time. The Action Node sets an inequality task and returns \textit{Success}\footnote{Note that the task priority is internally defined by the Action Node and not explicitly visible in the BT.}, while the Condition Node returns \textit{Failure}, since the condition is not met.
Then, the SoT-Parallel Node under the Fallback Node ticks \textit{Avoid Wall} and \textit{Go to Point1} simultaneously (Fig. \ref{fig:example-a}). Similarly to \textit{Avoid Table}, \textit{Avoid Wall} just sets an inequality task and returns \textit{Success}. The obstacle planes associated with the inequality tasks are shown in purple and red, respectively. In contrast, since \textit{Go to Point1} is a Blocking Action Node, after it sets the equality task, it returns \textit{Running} until the error function of its task ($\boldsymbol{e}(\boldsymbol{q})$ in the table) is lower than the specified threshold. The goal point is shown in light blue. Once \textit{Go to Point1} returns \textit{Success}, the SoT-Parallel Node above it removes the tasks previously set by its children and returns \textit{Success}. As a consequence, the Fallback Node returns \textit{Success}, and the Sequence Node above propagates the tick to the next SoT-Parallel Node, which in turn ticks the Action Nodes \textit{Follow Line} and \textit{Go to Point2}. The former sets an equality task, whose constraint line is shown in grey, and then returns \textit{Success}, while the latter behaves analogously to \textit{Go to Point1}. At the next tick, the Condition Node \textit{Has Point1 Been Visited} is met, and the Action Nodes \textit{Avoid Wall} and \textit{Go to Point1} are not executed again (Fig. \ref{fig:example-b}). Once \textit{Go to Point2} returns \textit{Success}, the SoT-Parallel Node above it removes the two tasks set by its children. Then, the Sequence Node returns \textit{Success}, and the status is propagated back to the SoT-Parallel Node above, which removes the remaining active tasks (only the one set by \textit{Avoid Table}) and lastly returns \textit{Success}. \section{Evaluation} \label{sec:evaluation} We evaluate our approach with a Franka Emika Panda 7-DOF manipulator. We build upon the SoT implementation used in \cite{7387707} and the \textit{BehaviorTree.CPP}\footnote{https://www.behaviortree.dev/} library for BTs. To test the validity of our framework, we utilize the set-up in Fig. \ref{fig:exp-setup} for real-world experiments and design a task that consists of picking up a 40 mm cube, placing it at the base of the ramp, and pushing it up until the cube safely reaches the elevated platform. The use of a standard two-finger gripper makes the goal challenging to achieve, as it is prone to failure during the pushing operation. To track the position of the cube, we use a Microsoft Kinect V2 sensor and fiducial markers attached to the surface of the cube. The modular nature of BTs allows us to design the BT and tune the controller parameters in isolation for the picking, placing, and pushing operations. We then combine the sub-trees into a unified BT and evaluate the full system during the experiments. We tick the BT continuously until either the end goal is achieved (i.e., the BT root node returns \textit{Success} and the cube is at the top of the ramp) or the robot encounters a failure scenario from which it cannot recover (e.g., a collision with the environment). Regarding the BT frequency, the tree is ticked again as soon as the root returns the status of the previous tick. This results in a variable frequency, as it depends on the number of nodes that need to be ticked before returning a status. Moreover, the SoT frequency is also variable, as it depends on the number of tasks that need to be solved for the current stack.
During the experiments, the BT frequency ranged between 2 and 110 Hz, and the SoT frequency ranged between 220 and 1500 Hz. \subsection{Robustness} \label{sec:robustness} To show the robustness of our framework, we evaluate the success rate of the previously described operation over 50 trials, considering 5 different fixed starting positions of the cube. The results are shown in Table \ref{tab:tab1}. \textit{Attempt 1} refers to the success rate without the BT ever having returned \textit{Failure} (i.e., first attempt). \textit{Attempt 2} refers to the success rate when the robot has returned one \textit{Failure} and has recovered from it (i.e., second attempt). \textit{Overall} refers to the success rate at any attempt (i.e., in our case, with at most one \textit{Failure}). In 90\% of the trials, the task was completed at the first attempt. In two instances, the cube fell down the ramp during the pushing. In both cases, the system automatically recovered from the failure and succeeded at the second attempt (100\%). The overall success rate is 94\%. The remaining 6\% failure rate is associated with errors during the pose estimation of the fiducial marker on the cube. This caused a displacement of the end-effector during the picking and resulted in a collision of the cube with the table during the placing, from which the system could not recover. \begin{table}[h!] \centering \caption{Robustness experiments} \label{tab:tab1} \begin{tabular}{rrr|rr} \toprule Position & \# Trials & Overall & Attempt 1 & Attempt 2 \\ \midrule 1 & 10 & 90\% & 80\% & 100\% \\ 2 & 10 & 100\% & 90\% & 100\% \\ 3 & 10 & 80\% & 80\% & - \\ 4 & 10 & 100\% & 100\% & - \\ 5 & 10 & 100\% & 100\% & - \\ \midrule \textbf{Total} & \textbf{50} & \textbf{94\%} & \textbf{90\%} & \textbf{100\%} \\ \bottomrule \end{tabular} \end{table} \subsection{Reactivity} To test the reactivity benefits of combining the SoT approach with BTs, we devise two experiments that simulate different types of disturbance during execution. In the first experiment, we evaluate the reactivity induced by the SoT framework by introducing a \textit{local} disturbance, in the sense that it does not result in a failure in the BT but is rather handled online by the controller. In particular, we randomly perturb the picking location of the cube while the robot is moving (Fig. \ref{fig:exp-react-local}). This change is picked up by the Kinect camera and triggers an update of the goal location, to which the SoT then conforms. In the second experiment, we evaluate the reactivity induced by the BT by introducing a \textit{global} disturbance, which is handled by the BT. In particular, we simulate a failure during the pushing by manually moving the cube from the ramp to a random position on the table (Fig. \ref{fig:exp-react-global}). This artificially makes the Condition Node that checks that the cube is placed in front of the end-effector fail. As a consequence, the BT is forced to re-structure the SoT and perform the full operation again. Table \ref{tab:tab2} reports the success results for the combined 50 trials of the two experiments. The system successfully completed the task in 92\% and 88\% of the trials in the case of \textit{local} and \textit{global} disturbances, respectively. For both experiments, all completed trials were performed at the first attempt after the introduced disturbance. As in the previous experiments, the failure rates are associated with inaccuracies in the tracking of the cube.
Four of the failed attempts (2 for \textit{local}, 2 for \textit{global}) happened during the placing of the cube, causing a collision similar to the one described in the robustness experiments in Section \ref{sec:robustness}. The other registered failure happened during the picking, when a tracking error caused the robot to close the gripper outside of the cube boundaries. \begin{figure}[h!] \vspace{0.3cm} \centering \subfigure[]{ \includegraphics[height = 0.23\linewidth]{fig/approach-1-1} \hspace{0.025cm} \includegraphics[height = 0.23\linewidth]{fig/approach-22-1} \hspace{0.025cm} \includegraphics[height = 0.23\linewidth]{fig/approach-2-1} \hspace{0.025cm} \label{fig:exp-react-local} } \subfigure[]{ \includegraphics[height = 0.23\linewidth]{fig/push-1-1} \hspace{0.025cm} \includegraphics[height = 0.23\linewidth]{fig/push-2-1} \hspace{0.025cm} \includegraphics[height = 0.23\linewidth]{fig/push-3-1} \hspace{0.025cm} \label{fig:exp-react-global} } \caption{Disturbances applied for the reactivity experiments. To test SoT reactivity, in Fig. \ref{fig:exp-react-local} we randomly perturb the picking location of the cube while the robot is moving. To test BT reactivity, in Fig. \ref{fig:exp-react-global} we manually move the cube from the ramp to a random position on the table.} \label{fig:reactivity} \end{figure} \begin{table}[h!] \centering \caption{Reactivity experiments} \label{tab:tab2} \begin{tabular}{lrr} \toprule Disturbance & \# Trials & Success rate \\ \midrule \textit{Local} & 25 & 92\% \\ \textit{Global} & 25 & 88\% \\ \bottomrule \end{tabular} \end{table} \section{Conclusion} \label{sec:conclusion} In this work, we combine a hierarchical Stack-of-Tasks control strategy with Behavior Trees for task composition in robot control. The proposed framework benefits from the reactivity of the two models, without compromising on modularity and understandability. The experiments on a 7-DOF manipulator show that the robot can robustly achieve a challenging goal while reacting to unexpected changes. A limitation of the current framework lies in the design of the hierarchical control problems in dynamic environments, as it becomes difficult to predict all possible scenarios in advance. Since recent works have shown promising results in the learning of BTs \cite{learning-colledanchise-2019, 8794104}, combining the SoT approach with them opens new possibilities for learning how to successfully compose tasks. \balance \bibliographystyle{IEEEtran}
{ "timestamp": "2022-09-20T02:20:31", "yymm": "2209", "arxiv_id": "2209.08619", "language": "en", "url": "https://arxiv.org/abs/2209.08619" }
\section{Introduction} Over the past decade, deep neural networks have produced groundbreaking results. To name a few, they have demonstrated impressive performance on visual classification tasks \citep{2016-He-deep-residual-learning-for-image-recognition}, in parsing and synthesizing natural language \citep{2018-Devlin-bert-pre-training,2020-Brown-language-models-are-few-shot-learners}, and in achieving super-human performance in various games \citep{2013-Mnih-playing-atari-with-deep-reinforcement-learning}. These achievements establish neural networks as effective models for many relevant data generating processes. Statistical theory on neural networks indicates graceful scaling of sample complexity. For example, when the data is generated by a ReLU teacher network with $W$ parameters, \citet{2022-Jeon-an-information-theoretic-framework} demonstrate that the sample complexity of an \emph{optimal} learner is $\tilde{O}(W)$. However, existing computational theory suggests that, even for single-hidden-layer teacher networks, the computation required to achieve this sample complexity is intractable. For example, \citet{2020-Goel-superpolynomial-lower-bounds, 2020-Diakonikolas-algorithms-and-sq-lower-bounds} establish that, for batched stochastic gradient descent with respect to squared or logistic loss to achieve small generalization error \emph{for all} single-hidden-layer teacher networks, the number of samples or the number of gradient steps must be superpolynomial in the input dimension or network width. Furthermore, current theoretical guarantees for all computationally tractable algorithms proposed for fitting single-hidden-layer teacher networks with parameters drawn from natural distributions only bound sample complexity by high-order polynomial \citep{2015-Janzamin-generalization-bounds-for-neural-networks-through-tensor-factorization,2017-Ge-learning-one-hidden-layer-neural-networks-with-landscape-design} or exponential \citep{2017-Zhong-recovery-guarantees, 2020-Fu-guaranteed-recovery-of-one-hidden-layer-neural-networks-via-cross-entropy} functions of the input dimension or width. In this work, we aim to reconcile these negative theoretical results with the apparent practical success of stochastic gradient descent (SGD) in training performant neural networks. To do so, we fit single-hidden-layer ReLU neural networks to data generated by single-hidden-layer ReLU teacher networks with parameters drawn from a natural distribution. We demonstrate that SGD with automated width selection attains small expected error with a number of samples and a total number of queries that are both nearly linear in the input dimension and width. This suggests that SGD nearly achieves the information-theoretic sample complexity bounds established in \citet{2022-Jeon-an-information-theoretic-framework,2019-Bartlett-nearly-tight-vc-dimension} in a computationally efficient manner. An important difference between our empirical results and the negative theoretical results of \citet{2020-Goel-superpolynomial-lower-bounds, 2020-Diakonikolas-algorithms-and-sq-lower-bounds} is that the latter address the worst-case error of deterministic algorithms, while our analysis centers on expected error. The focus on expected error is more in line with the information-theoretic sample complexity bounds of \citet{2022-Jeon-an-information-theoretic-framework}. Our results suggest that such expected-error analyses may be better suited for understanding the empirical properties of neural network learning.
\section{Related Work} Our work contributes to the literature on the sample and computational complexity of single-hidden-layer networks. To put our work in context, we review related work in this area, grouped into several categories. \subsection{Stochastic Query Lower Bounds} Most lower bounds on the sample and computational complexity of single-hidden-layer neural networks have been established through the \emph{stochastic query} framework \citep{2020-Goel-superpolynomial-lower-bounds,2020-Diakonikolas-algorithms-and-sq-lower-bounds,2017-Song-on-the-complexity}. A stochastic query algorithm accesses an oracle that returns the expectation of a query function within some tolerance. The literature focuses on query functions that enable gradient descent with respect to common loss functions, with one query per gradient descent step. Aside from the results of \citet{2020-Goel-superpolynomial-lower-bounds, 2020-Diakonikolas-algorithms-and-sq-lower-bounds}, which were discussed in the introduction, \citet{2017-Song-on-the-complexity} show that, in a setting where the number of samples is less than the product of the input dimension and the width, exponentially many stochastic queries are required. \subsection{Sample Complexity Upper Bounds} \citet{2022-Jeon-an-information-theoretic-framework,2019-Bartlett-nearly-tight-vc-dimension} study the sample complexity of \emph{optimal} learning from data generated by teacher networks, without addressing algorithms or computational complexity. \citet{2019-Bartlett-nearly-tight-vc-dimension} establish upper and lower bounds on the VC dimension (see \citet{1971-Vapnik-on-the-uniform-convergence}) of noiseless neural networks. For piecewise linear activation functions, their work shows that the VC dimension of a network with $W$ parameters and $L$ layers is upper bounded by $O(WL\log W)$, and that there exist networks with $W$ parameters and $L$ layers whose VC dimension is lower bounded by $\Omega(WL\log(W/L))$. These bounds on the VC dimension translate to both upper and lower bounds on the sample complexity of any probably approximately correct (PAC) learning algorithm \citep{1984-Valiant-a-theory-of-the-learnable}. Results in \citet{2016-Hanneke-the-optimal-sample-complexity-of-pac-learning} show that, for a PAC algorithm that learns to within tolerance $\epsilon$ with failure rate at most $\delta$, the sample complexity is $\Theta\left(\frac{1}{\epsilon}(\text{VC}+\log\frac{1}{\delta})\right)$. In our context, this implies an $O\left(\frac{1}{\epsilon}(WL\log W+\log\frac{1}{\delta})\right)$ sample complexity \emph{for all} teacher networks with $W$ weights and $L$ layers, and $\Omega\left(\frac{1}{\epsilon}(WL\log (W/L)+\log\frac{1}{\delta})\right)$ \emph{for some} of these teacher networks. \citet{2022-Jeon-an-information-theoretic-framework} use information theory to study the number of samples required to learn from a noisy teacher network such that the \emph{expected} error is small. Instead of relying on the VC dimension, their bounds scale linearly with the rate-distortion function of the neural network. For networks with ReLU or sign activations, their results imply an $\tilde{O}(W/\epsilon)$ sample complexity bound, where $W$ is the total number of parameters and $\epsilon$ is the expected error. For \emph{single-hidden-layer} ReLU teacher networks, both works suggest an upper bound on sample complexity that is linear in the number of parameters, up to logarithmic factors. However, no practical algorithm is given.
The VC dimension upper bound implies PAC-learnability, and \citeauthor{2022-Jeon-an-information-theoretic-framework} study the \emph{expected} performance of an optimal Bayesian learner. An important difference between these results and the negative stochastic query results is that the latter analyze worst-case performance. \subsection{Concrete Algorithms} A segment of the literature offers concrete algorithms for learning from single-hidden-layer teacher networks \citep{2017-Zhong-recovery-guarantees,2020-Fu-guaranteed-recovery-of-one-hidden-layer-neural-networks-via-cross-entropy,2015-Janzamin-generalization-bounds-for-neural-networks-through-tensor-factorization,2017-Ge-learning-one-hidden-layer-neural-networks-with-landscape-design}. In \citet{2017-Zhong-recovery-guarantees}, to fit the data generated by a \emph{noiseless} single-hidden-layer ReLU neural network, the weights are first initialized by a tensor method, which guarantees linear convergence under gradient descent with high probability. However, the sample complexity is exponential in the input dimension and the number of hidden neurons when the weights are sampled i.i.d. Gaussian, as we assume in this paper (see Appendix~\ref{sec:appendix-sample-complexity-zhong} for details). \citet{2020-Fu-guaranteed-recovery-of-one-hidden-layer-neural-networks-via-cross-entropy} adapt the tensor initialization of \citet{2017-Zhong-recovery-guarantees} and provide similar results for cross-entropy loss instead of L2 loss. \citet{2017-Ge-learning-one-hidden-layer-neural-networks-with-landscape-design} design an alternative objective function $G$ such that using SGD to minimize $G$ can recover the parameters of the single-hidden-layer teacher network with high probability. The sample and computational complexity are high-order polynomials in the input dimension and width. \citet{2015-Janzamin-generalization-bounds-for-neural-networks-through-tensor-factorization} use tensor factorization, Fourier analysis, and ridge regression to fit the data generated by single-hidden-layer teacher networks with high probability. In the case of Gaussian inputs, the sample and computational complexity are high-order polynomial in the input dimension and width. Note that the results in \citet{2017-Ge-learning-one-hidden-layer-neural-networks-with-landscape-design,2015-Janzamin-generalization-bounds-for-neural-networks-through-tensor-factorization} do not contradict the results of \citet{2020-Goel-superpolynomial-lower-bounds,2020-Diakonikolas-algorithms-and-sq-lower-bounds}, since the former construct algorithms that work \emph{with high probability}. Results from a couple of papers that focus on networks with multiple hidden layers bear additional implications when specialized to single-hidden-layer networks. \citet{2014-Arora-provable-bounds-for-learning-some-deep-representations} propose an algorithm that learns a distribution generated by a sparse neural network with sign activation units and random edge weights. When specialized to a single hidden layer, this gives rise to an $\tilde{O}(M^3)$ sample complexity bound, where $M$ is the width. \citet{2016-Zhang-l1-regularized-neural-networks} propose an algorithm whose sample complexity depends exponentially on the maximum, over activation units, of the L1 norm of each unit's incoming weights. \section{Preliminaries} In this section, we give the necessary definitions for our experiments, much of which is directly adapted from \citet{2022-Jeon-an-information-theoretic-framework}.
\subsection{Teacher Network} We assume that the training algorithm is given a set $S$ of $N$ i.i.d. samples \[ S=\{(x_1, y_1), ..., (x_N, y_N)\} \subset \mathbb{R}^d\times\mathbb{R} . \] The input $X\sim\mathcal{N}(0, I_d)$, and the output $Y$ is produced by a \emph{random} single-hidden-layer teacher network $g$ with noise $W$: $$Y=g(X) + W, \qquad g: \mathbb{R}^d\to\mathbb{R}.$$ The \emph{random} single-hidden-layer teacher network $g$ is parametrized by $(a, b, \theta)$: $$g(X)=\sum_{i=1}^M\theta_i\text{relu}(a^T_iX+b_i),$$ where $M$ is the width of the hidden layer and $\text{relu}(x)=\max(0, x)$. For the learnable parameters, we assume that for all $i\in[M]$, $a_i\overset{iid}{\sim}\mathcal{N}(0, \frac{1}{d+1}I_d)$, $b_i\overset{iid}{\sim}\mathcal{N}(0, \frac{1}{d+1})$, and $\theta_i\overset{iid}{\sim}\mathcal{N}(0, \frac{1}{M})$. The choice of variances keeps the variance of $g(x)$ approximately fixed across different $d$ and $M$. We further assume that the noise $W\sim\mathcal{N}(0, \sigma^2)$. We denote the hyperparameters for this teacher network by $\gamma:=(d, M, \sigma)$. Note that this is a special case of the ReLU data-generating process from \citet{2022-Jeon-an-information-theoretic-framework}. \subsection{Error} We define test error as $ \mathbf{d}_{KL}(P_Y\| P_{\hat{Y}}) $, the KL divergence between the true distribution $P_Y$ of $Y$ and the predictive distribution $P_{\hat{Y}}$. We assume that the predictive distribution of $Y$ is Gaussian with the same variance as the real distribution of $Y$, i.e., $\hat{Y}\sim\mathcal{N}(\hat{g}_S(X), \sigma^2)$, where $\hat{g}_S$ is the model trained on $S$. Then, the KL divergence simplifies to the L2 error with respect to the \emph{noiseless} teacher network, scaled inversely by the noise (see Section 2.6 of \citet{2022-Jeon-an-information-theoretic-framework}): \begin{equation} \label{eq:def-error} \mathbf{d}_{KL}(P_Y\| P_{\hat{Y}}) = \frac{\mathbb{E}\left[\left(\hat{g}_S(X)-g(X)\right)^2 | g, S \right]} {2\sigma^2} . \end{equation} \subsection{Sample Complexity} Our definition of sample complexity is adapted from Definition 4 of \citet{2022-Jeon-an-information-theoretic-framework}. For any $\epsilon>0$, we define the sample complexity $N_\epsilon$ of a training procedure as the minimal number of samples $N$ such that after training on $N$ samples, the \emph{expected} error is at most $\epsilon$: \[ N_\epsilon = \min \left\{ N : \frac{\mathbb{E}\left[\left(\hat{g}_S(X)-g(X)\right)^2\right]}{2\sigma^2} \leq \epsilon \right\} , \] where $S$ is an i.i.d. set of $N$ training samples, and $\hat{g}_S$ is the model trained on this set. Here the expectation is taken over $X$, the teacher $g$, and the training set $S$, so this definition of sample complexity captures the \emph{expected} performance of a training algorithm. \subsection{Computational Complexity} We use the total number of queries to the training data points, denoted by $T$, as a proxy for computational complexity. More concretely, if $m$ batches of size $n$ are processed during training, then the number of queries to the training data points is $nm$. Each query of a data point generates a forward pass and a backward pass, so the actual computational cost of the algorithm is the product of $T$ and a scaling factor that depends on the size of the fitting model. \section{Experiment Setup} In this section we describe how the experiments are conducted. We first describe the experiment pipeline and then discuss its various components.
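Before describing the pipeline, we give a minimal NumPy sketch of the data-generating process defined above (our own illustrative code, not part of \citet{2022-Jeon-an-information-theoretic-framework}; the function names are ours, and the test error is estimated by Monte Carlo rather than computed in closed form):

\begin{verbatim}
import numpy as np

def sample_teacher(d, M, rng):
    # Random single-hidden-layer ReLU teacher with the variances above.
    a = rng.normal(0.0, np.sqrt(1.0 / (d + 1)), size=(M, d))
    b = rng.normal(0.0, np.sqrt(1.0 / (d + 1)), size=M)
    theta = rng.normal(0.0, np.sqrt(1.0 / M), size=M)
    return lambda x: np.maximum(x @ a.T + b, 0.0) @ theta

def make_dataset(g, N, d, sigma, rng):
    x = rng.normal(size=(N, d))                   # X ~ N(0, I_d)
    return x, g(x) + sigma * rng.normal(size=N)   # Y = g(X) + W

def test_error(g_hat, g, d, sigma, rng, n_mc=100_000):
    # Monte-Carlo estimate of E[(g_hat(X) - g(X))^2 | g, S] / (2 sigma^2).
    x = rng.normal(size=(n_mc, d))
    return np.mean((g_hat(x) - g(x)) ** 2) / (2.0 * sigma ** 2)

rng = np.random.default_rng(0)
g = sample_teacher(d=4, M=8, rng=rng)
x, y = make_dataset(g, N=256, d=4, sigma=0.1, rng=rng)
\end{verbatim}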
\subsection{Experiment Pipeline} The experiment pipeline is outlined in Algorithm~\ref{alg:exp-data-gen}. The definitions of the various parameters and the respective values chosen for the experiments are summarized in Table~\ref{tab:exp-params}. \begin{algorithm}[htb!] \caption{Experimental Data Generation Algorithm} \label{alg:exp-data-gen} \begin{algorithmic}[1] \For{each data generation hyperparameter $\gamma\in\Gamma$} \For{each sample number $N\in\mathcal{N}$} \For{$i \in [\text{num trials}]$} \State{$g\gets\text{sample\_g}(\gamma)$ } \State{Sample $N$ i.i.d. $(x_1, x_2, ..., x_N)$ according to $\mathcal{N}(0, I_d)$} \State{$\forall j\in [N]$, calculate $y_{j\ \text{noiseless}}\gets g(x_j)$} \State{$\forall j\in [N]$, calculate $y_{j} \gets y_{j\ \text{noiseless}} + w_j$, where $w_j\overset{iid}{\sim} \mathcal{N}(0, \sigma^2)$.} \State{Set $S\gets \{ (x_j, y_j) | j\in [N]\}$} \State{$\hat{g}_S \gets \text{train}(\gamma, S)$, logging the number of queries to data points $T_{\gamma, N, i}$.} \State{Evaluate the error according to \eqref{eq:def-error}: \[ \text{error}_{\gamma, N, i}\gets \frac{\mathbb{E}\left[\left(\hat{g}_S(X)-g(X)\right)^2 | g, S \right]}{2\sigma^2} \] } \EndFor \State{ Average over trials: let \[ \text{error}_{\gamma,N}\gets \frac{1}{\text{num trials}}\sum_{i\in [\text{num trials}]}\text{error}_{\gamma, N, i} \] and \[ T_{\gamma, N}\gets \frac{1}{\text{num trials}}\sum_{i\in [\text{num trials}]}T_{\gamma, N, i} . \] } \EndFor \State{Calculate $N_{\gamma, \epsilon}$: \[ N_{\gamma, \epsilon} \gets \min\{N\in\mathcal{N}: \text{error}_{\gamma,N} \leq \epsilon \} \] } \EndFor \end{algorithmic} \end{algorithm} \begin{table}[htbp] \caption{Summary of Parameters in Experiment} \label{tab:exp-params} \begin{center} \begin{tabular}{ c p{4cm} p{5cm} } \hline Parameter & Description & Values Chosen \\ \hline $\gamma=(d, M, \sigma)\in\Gamma$ & Hyperparameters of Teacher Network & \\ $d$ & Input Dimension & $\{1, 2, 4, ..., 2^7=128\}$ \\ $M$ & Number of Hidden Neurons & $\{1, 2, 4, ..., 2^7=128\}$ \\ $\sigma$ & Standard Deviation of Noise & $\{0.1, 0.2\}$ \\ \hline $\epsilon$ & Target Test Error & \begin{tabular}{@{}c@{}} $1$ to $0.01$ for $d, M\leq 2^6=64$ \\ $1$ to $0.1$ for $\max(d, M) =2^7=128$ \end{tabular} \\ \hline $N\in\mathcal{N}$ & Number of Samples & Successive powers of two to reach all target $\epsilon$ \\ \hline num trials & Number of trials to run for each configuration & At least 32 \\ \hline \end{tabular} \end{center} \end{table} To experimentally verify the dependence of sample and computational complexity on $d$ and $M$, we generate teacher networks where $d$ and $M$ are increasing powers of two: $(d, M) \in\{1,2,4,...,128\}^2$. Then, we estimate the sample complexity $N_\epsilon$ for target errors $\epsilon$ spanning two orders of magnitude ($\epsilon\in\{1, 0.1, 0.01\}$) with a training algorithm that automatically tunes the width. We run the training algorithm on samples $S$ of increasing size $N$ until the test error is below the specified $\epsilon$, and set the smallest such $N$ as $N_\epsilon$. By doubling $N$ each time, we estimate $N_\epsilon$ within a factor of 2. The above procedure is performed for noise levels $\sigma=0.1$ and $\sigma=0.2$, and for each configuration, at least $32$ trials are performed to reduce the noise in the gathered data. \subsection{Training} We split the samples $S$ into an internal training set $S_t$ and a validation set $S_v$ using an $80/20$ ratio.
We train single-hidden-layer neural networks of different widths on $S_t$, with the width chosen by golden-section search, and select the model with the best performance on the validation set $S_v$. Various details are described below. \subsubsection{Architecture of Fitting Network} The fitting network is a single-hidden-layer ReLU network, the same as the teacher network but with a possibly different width (number of hidden neurons). No explicit form of regularization such as dropout or weight decay is used. To find the best width, we perform golden-section search (\verb|scipy.optimize.golden|) on widths ranging from $2$ to $32+8\cdot \max(N, \sqrt{dM} + \max(d, M))$. The maximum width is chosen to allow ample over-fitting, considering either the number of provided samples or the architecture of the teacher network. Golden-section search is performed on the \emph{logarithm (base 2)} of the width, with the tolerance set to $0.25$. The idea is that we can get relatively close to a good width while evaluating relatively few candidate widths. For example, at most $8$ steps are needed to search through widths from $2$ to $1000$ in this scheme (the number of steps is at most $\frac{\ln(\text{initial range}/\text{tolerance})}{\ln(\phi)}$). We believe that model performance should be a roughly unimodal function of width, so golden-section search should find widths near the optimum. The number of queries $T$ is the sum of the number of queries for each searched width. \subsubsection{Optimization} To train the network, we use Adam \citep{2015-Kingma-adam} with respect to the L2 loss. Aside from the learning rate, we use the default parameters from the PyTorch implementation ($\beta_1=0.9, \beta_2=0.999$). As empirical evidence suggests that small batch sizes generalize better (for example, see \citet{2017-Keskar-on-large-batch-training}), we set the batch size to $64$ for a balance between model performance and training speed. To automatically set the initial learning rate, we adapt the method first proposed in \citet{2017-Smith-cyclical-learning-rates}. We start with a very small learning rate ($10^{-8}$) and exponentially increase it until the model starts to diverge. We adapt three methods implemented in the fastai library\footnote{https://docs.fast.ai/} to estimate the best learning rate\footnote{Steep, the point where the loss descends most steeply; minimum, a learning rate $1/20$ of the one at which the loss is smallest; and valley, the point in the middle of the longest valley of the loss curve.}, and use their median as the initial learning rate. The queries to the data points in this phase are included in the calculation of $T$. During training, we reduce the learning rate by a factor of $10$ when the validation loss plateaus, using \verb|ReduceLROnPlateau| from PyTorch (mode=`min' and patience=12). We stop training whenever the best validation loss fails to improve by more than $1\%$ (relative) over $24$ epochs, and use the model corresponding to the best validation loss. For each fitting network, there is a hard cap of 1500 epochs of training, which is almost never reached in practice. \section{Results and Discussion} \subsection{Sample Complexity} In Figure~\ref{fig:sample-complexity-main}, we plot $\epsilon N_\epsilon$ against $dM$ (left) and $\frac{N_\epsilon}{dM}$ against $\epsilon^{-1}$ (right) for the different choices of noise ($\sigma=0.1,0.2$). In these plots, $\epsilon$ is the average test error, and $N_\epsilon$ is the corresponding number of samples provided.
Both the horizontal axis and the vertical axis are drawn in log scale, with equal aspect ratio. In all plots, we include a scatter plot of the points and a reference line of unit slope, which in log-log coordinates corresponds to a linear fit of the data. In the plots on the left, we also plot lines corresponding to the median, the mean, and the 5th and 95th percentiles. In the plots on the right, we use locally weighted smoothing \citep{1979-Cleveland-robust-locally-weighted-regression} to estimate the trend, and the $95\%$ confidence interval is produced by bootstrap resampling of two-thirds of the data. \begin{figure}[htb] \centering \foreach \noise in {0.1, 0.2} { \begin{subfigure}{1.0\linewidth} \begin{subfigure}{0.5\linewidth} \includegraphics[width=\columnwidth]{pics/noise-\noise-sample-complexity-dM.png} \end{subfigure}% \begin{subfigure}{0.5\linewidth} \includegraphics[width=\columnwidth]{pics/noise-\noise-sample-complexity-epsilon.png} \end{subfigure} \caption{Sample complexity when $\sigma=\noise$} \end{subfigure} } \caption{ Sample complexity is almost linear in $\frac{dM}{\epsilon}$ for a wide range of $d$, $M$, and $\epsilon$. Here $\epsilon$ is the average test error, and $N_\epsilon$ is the corresponding sample size. All vertical and horizontal axes are in log scale, with equal aspect ratio. A unit-slope reference is provided to indicate a linear relationship in log scale. For $\sigma=0.1$ (top), the reference lines correspond to $\epsilon N_\epsilon=1.79dM$; for $\sigma=0.2$ (bottom), the reference lines correspond to $\epsilon N_\epsilon=1.11dM$. The confidence intervals on the right are generated by bootstrap resampling of two-thirds of the data. } \label{fig:sample-complexity-main} \end{figure} As we can see in the plots, $\epsilon N_\epsilon$ is almost proportional to $dM$ for a wide range of $d$, $M$, and $\epsilon$. Both \citet{2019-Bartlett-nearly-tight-vc-dimension} and \citet{2022-Jeon-an-information-theoretic-framework} predict the theoretical sample complexity to be proportional to $\frac{dM}{\epsilon}$, up to log factors. So our results indicate that SGD on neural networks (with automatic width selection) can achieve the theoretical sample complexity of ``optimal'' learners in the case of single-hidden-layer teacher networks. We note that while the dependence of $N_\epsilon$ on $dM$ is very close to linear for large $dM$, the dependence of $N_\epsilon$ on $\epsilon^{-1}$ is noticeably worse than linear for very small $\epsilon$. Additional plots of the dependence of $N_\epsilon$ on $d$ and $M$ for fixed $\epsilon$ can be found in Appendix \ref{sec:additional-sample-complexity}. \subsection{Computational Complexity} We plot the number of queries $T$ against the number of samples $N$ in Figure~\ref{fig:time-complexity-main}. As in the previous plots, both the horizontal and vertical axes are in log scale and have equal aspect ratio. We include a reference line of unit slope, which corresponds to a linear fit of the data. From Figure~\ref{fig:time-complexity-main} we can see that the dependence of $T$ on $N$ is slightly less than linear, so $T = O(N)$. In the previous section, we demonstrated that $N_\epsilon$, the number of samples necessary to achieve test error within tolerance $\epsilon$, appears to be $O(\frac{dM}{\epsilon})$. Therefore, $T_\epsilon$, the total number of queries to data points needed to achieve tolerance $\epsilon$, is also approximately proportional to $\frac{dM}{\epsilon}$.
This implies that for all $N$, the average number of times \emph{each single} data point is queried is bounded above by a constant. In our experiments, the width of the fitting network is $O(d+M)$. Since each query of a data point corresponds to at most one forward pass and one backward pass, the overall computational complexity is $O(Nd(d+M))=\tilde{O}(d^2M(d+M))$ for fixed $\epsilon$. We hypothesize that by tightening the upper bound on the fitting network's width to $O(M)$, the current results would still hold, and the corresponding computational complexity would improve to $\tilde{O}(d^2M^2)$. \begin{figure}[htb] \centering \foreach \noise in {0.1, 0.2} {\begin{subfigure}{0.5\linewidth} \includegraphics[width=\columnwidth]{pics/noise-\noise-time-complexity-N.png} \caption{Computational complexity when $\sigma=\noise$} \end{subfigure}} \caption{ Total number of queries to data points is sublinear in sample size. All vertical and horizontal axes are in log scale, with equal aspect ratio. A unit-slope reference is provided to indicate a linear relationship in log scale. For $\sigma=0.1$ (left), the reference line corresponds to $T=1940N$; for $\sigma=0.2$ (right), the reference line corresponds to $T=1622N$. The sublinear relationship indicates that the average number of times \emph{each single} data point is queried is $O(1)$ for all $N$. } \label{fig:time-complexity-main} \end{figure} \section{Comparison with Existing Results} \label{sec:comparison} \subsection{Comparison with theoretical results} In the case of single-hidden-layer neural networks, both \citet{2019-Bartlett-nearly-tight-vc-dimension} and \citet{2022-Jeon-an-information-theoretic-framework} give theoretical \emph{upper bounds} on the sample complexity that are $\tilde{O}\left(\frac{dM}{\epsilon}\right)$. As for lower bounds on sample complexity, the result in \citet{1998-Bartlett-almost-linear-vc-dimension} implies the \emph{existence} of single-hidden-layer neural networks\footnote{Mild constraints are imposed on the activation functions. Sigmoid, for example, satisfies the constraints.} with sample complexity at least linear in the total number of weights. However, to the best of our knowledge, there is no tight theoretical lower bound on sample complexity when the teacher network is assumed to be drawn from a distribution; even for single-hidden-layer teacher networks this seems to remain an open problem. We empirically demonstrate that for single-hidden-layer teacher networks, running SGD on neural networks with adequate hyper-parameters achieves the best known theoretical bounds on sample complexity, with very manageable run time -- the average number of queries per data point is constant. SGD empirically works well in terms of sample and computational complexity \emph{in spite of} negative theoretical results in the stochastic query framework \citep{2020-Goel-superpolynomial-lower-bounds,2020-Diakonikolas-algorithms-and-sq-lower-bounds}. The discrepancy between theory and practice is best explained by the analysis framework. While \citet{2020-Goel-superpolynomial-lower-bounds,2020-Diakonikolas-algorithms-and-sq-lower-bounds} analyze the \emph{worst-case} performance of algorithms and prove that either sample or computational complexity must be super-polynomial, our empirical work studies the \emph{average} performance of SGD.
The focus on average-case performance is also more in line with the actual uses of neural networks -- in practice, people do not necessarily need guarantees that SGD on neural networks works for all datasets, as long as practical algorithms succeed \emph{with high probability}. \subsection{Comparison with other algorithms} \label{sec:comparison-zhong-fu} \citet{2017-Zhong-recovery-guarantees,2020-Fu-guaranteed-recovery-of-one-hidden-layer-neural-networks-via-cross-entropy,2015-Janzamin-generalization-bounds-for-neural-networks-through-tensor-factorization,2017-Ge-learning-one-hidden-layer-neural-networks-with-landscape-design} all construct algorithms to fit single-hidden-layer teacher networks with provable guarantees on sample complexity, computational complexity, and error. Here we highlight some differences between their works and ours: \begin{itemize} \item While our results indicate sample complexity linear in the number of parameters, the mentioned works have either high-order polynomial \citep{2015-Janzamin-generalization-bounds-for-neural-networks-through-tensor-factorization,2017-Ge-learning-one-hidden-layer-neural-networks-with-landscape-design} or exponential (\citet{2017-Zhong-recovery-guarantees,2020-Fu-guaranteed-recovery-of-one-hidden-layer-neural-networks-via-cross-entropy}, see Appendix~\ref{sec:appendix-sample-complexity-zhong} for details) sample complexity. \item Our work uses standard machine learning tools (Adam, random weight initialization, early stopping, learning rate decay), while the mentioned works use algorithms not commonly found in practice. \citet{2017-Zhong-recovery-guarantees,2020-Fu-guaranteed-recovery-of-one-hidden-layer-neural-networks-via-cross-entropy} use a tensor method to initialize the weights before applying SGD; \citet{2015-Janzamin-generalization-bounds-for-neural-networks-through-tensor-factorization} use tensor factorization, Fourier analysis, and ridge regression instead of SGD; and \citet{2017-Ge-learning-one-hidden-layer-neural-networks-with-landscape-design} design an alternate objective function. \end{itemize} \section{Conclusions} In this work, we empirically demonstrate that to reach a small \emph{expected} error for single-hidden-layer teacher networks, SGD with automatic width tuning can nearly achieve theoretical sample complexity bounds in a computationally efficient manner. This helps bridge the gap that previously existed between theoretical sample complexity upper bounds and the absence of algorithms that achieve these bounds in a computationally efficient manner. In addition, the near-optimal sample and computational complexity of SGD on neural networks opens up the possibility of modelling it as an \emph{optimal Bayesian learner}. We hope that this new perspective contributes to the general understanding of the performance of SGD on neural networks. Investigating whether our results extend to \emph{multiple}-hidden-layer teacher networks remains an interesting question for future research. \subsubsection*{Acknowledgments} We thank the Stanford Electrical Engineering Research Experience for Undergraduates program, which funded Yifan Zhu during the summer of 2022. We thank NSF for providing funding via the GRFP Fellowship. Financial support from Army Research Office (ARO) grant W911NF2010055 is gratefully acknowledged. \paragraph{Reproducibility Statement} We provide the source code for reproducing the experiments (Appendix~\ref{sec:appendix-code-url}).
{ "timestamp": "2022-09-20T02:20:43", "yymm": "2209", "arxiv_id": "2209.08627", "language": "en", "url": "https://arxiv.org/abs/2209.08627" }
\section{Introduction}\label{sec:intro} The quantum speed limit \cite{Mandelstam45,Mandelstam1991,Fleming,Margolus98, Aharonov,Zych} is a well-known fundamental concept related to the question of time-energy uncertainty relations in quantum mechanics. At the same time, the quantum speed limit (QSL) has found several diverse applications \cite{PhysRevLett.110.050402,PhysRevLett.110.050403,PhysRevLett.120.070401,PhysRevLett.120.070402,PhysRevLett.120.060409,Campaioli2019tightrobust,PhysRevA.103.022210,PhysRevResearch.2.023125,PhysRevLett.126.180603}. The most popular form of the QSL is due to Mandelstam and Tamm \cite{Mandelstam45,Mandelstam1991} and depends on the variance of the generator of time evolution. However, sometimes the variance gives an unreasonable assessment, and the average value of the generator is employed instead, leading to the so-called Margolus-Levitin QSL \cite{Margolus98}, which had in fact been derived earlier by Fleming \cite{Fleming}. One can ask a somewhat philosophical question: why does the QSL provide useful information, given that it is ``just'' a consequence of an underlying time evolution? Here we consider a problem which, on top of being interesting and relevant in itself, also perfectly illustrates the way in which the QSL bypasses the overall complexity of the full description of a quantum system's dynamics. To be more precise, we are concerned with quantum channels generated by Markovian dynamics of open quantum systems. We ask whether such channels are entanglement breaking. A channel $\Phi$ is called entanglement breaking (EB) if the composite state $\cv{\Phi\otimes\mathcal{I}}\cvb{\rho}$ is always separable, even for entangled input states $\rho$ \cite{Horodecki2003}. Since we only consider quantum channels which belong to a semi-group parametrized by $t\geqslant 0$, i.e. channels of the form $\Phi_t=e^{t\mathcal{L}}$, the question we pose gains additional structure. In particular, for $t=0$ we have $\Phi_0=\mathcal{I}$, so the channel $\Phi_0$ \textit{is not} entanglement breaking. We call such channels entanglement preserving (EP). One can expect that, depending on $\mathcal{L}$, the range of time splits into two non-empty and disjoint sets \begin{equation} [0,\infty[\;=T_{\mathrm{EP}}(\mathcal{L})\cup T_{\mathrm{EB}}(\mathcal{L}), \end{equation} with self-explanatory interpretations. Even though $t=0\in T_{\mathrm{EP}}(\mathcal{L})$ for every $\mathcal{L}$, in general neither set will be connected. It might happen that the channel is EP for $t\in[0,t_{\mathrm{EB}}[$ and starts to be EB at $t=t_{\mathrm{EB}}$, yet at a later time it is again EP (a revival of the EP property). One might ask whether both sets can be characterized in a more direct way. Even for qubit channels, however, this task seems hopeless. In this simplest case, even though one is able to explicitly write down cumbersome conditions which describe the EB property \cite{Ruskai2003}, it is not possible to turn such complex inequalities into an informative description of $T_{\mathrm{EB}}(\mathcal{L})$. In this paper we establish a quantum speed limit for (potentially) entanglement breaking channels, i.e. given the fact that $\Phi_0$ is entanglement preserving, we bound from below the time in which this property might be lost. In other words, we bound the time $t_{\mathrm{EB}}$ such that a given channel $\Phi_t$ is certainly entanglement preserving for $t\in[0,t_{\mathrm{EB}}[$.
The standard formulation of the QSL tells us the minimum time necessary to evolve a given state into an orthogonal one. Our results give the minimal time during which the channel in question is guaranteed to be entanglement preserving. The paper is organized as follows. In Sec. \ref{sec:prelim} we present the methodology as well as the necessary formal background concerning quantum channels. We then utilize entanglement witnesses to establish, in Secs. \ref{sec:Breaking-time} and \ref{sec:W-Psi}, various bounds for $t_{\mathrm{EB}}$, in particular bounds inspired by the Mandelstam-Tamm QSL and the Margolus-Levitin QSL. \section{Preliminaries}\label{sec:prelim} In accordance with the formalism of quantum mechanics, let $\mathcal{H}_1$ be the Hilbert space of the system under consideration and let $\mathcal{H}_2$ be the Hilbert space of an auxiliary system. To study an open-system evolution, i.e. a $t$-parametrized completely positive and trace-preserving map on $\mathcal{B}\cv{\mathcal{H}_1},$ one can rely on expectation values \cite{sakurai2017,Pollock2018} \begin{equation} \cvr{A\cv{t}}_\rho = \tr{\oper{A}~\!\cv{\Phi_t\otimes\mathcal{I}}\cvb{\rho}}.\label{eq:OQD-basic} \end{equation} Here $\rho$ denotes an initial state acting on the composite Hilbert space $\mathcal{H}_1\otimes\mathcal{H}_2$, while $\oper{A}$ is an arbitrary observable therein, i.e. a Hermitian operator from $\mathcal{B}\cv{\mathcal{H}_1\otimes\mathcal{H}_2}$. Different ways to characterize the map $\Phi_t$ correspond to different choices of initial states and observables. For instance, in quantum process tomography \cite{Gladden1997,heinosaari2011}, the overall profile of the map $\Phi_t$ is recovered by collecting several $\cvr{A\cv{t}}$ constructed from various pairs of $\rho$ and $\oper{A}$, so that both inputs and observables exhaust some fixed bases of the matrix space $\mathcal{B}\cv{\mathcal{H}_1}$. Another example is the extraction of a characteristic of an external system, encoded in an interaction with the probe system $\mathcal{H}_1$ (signal); in this case the auxiliary system $\mathcal{H}_2$ can serve as a reference (idler), a catalyst, or the source of enhancement for the manipulation \cite{Jonathan1999}. For the sake of studying the dynamics of entanglement under the action of the map $\Phi_t$, an entanglement witness $\oper{W}$ \cite{Terhal2000,Terhal2001,Horodecki1996,Ghne2009} is an appropriate choice for the observable $\oper{A}$. With this choice: \begin{equation} w_\rho\cv{t} := \cvr{W\cv{t}}_\rho = \tr{ \oper{W}~\!\cv{\Phi_t\otimes\mathcal{I}}\cvb{\rho}} \label{eq:w_t-gen}, \end{equation} and \begin{equation} w_\rho\cv{0} < 0 \label{eq:init-w-gen}, \end{equation} since by definition $\operatorname{tr}(\sigma \oper{W})\geqslant 0$ if $\sigma$ is a separable state, and we require $\rho$ to be entangled. In a more formal fashion we now recall that $\Phi_t$ is called \emph{entanglement breaking} \cite{Ruskai2003,Horodecki2003} if $\cv{\Phi_t\otimes\mathcal{I}}\cvb{\sigma}$ is separable for all density matrices $\sigma$ acting on $\mathcal{H}_1\otimes\mathcal{H}_2$, where $\mathcal{H}_2$ is a finite-dimensional Hilbert space. If $\rho$ is entangled, as expressed in Eq.~\eqref{eq:init-w-gen}, and $\Phi_t$ becomes entanglement breaking at a time moment $t_{\mathrm{EB}}$, this property will be reflected in the sign of the function $w_{\rho}\cv{t}$.
The sign must change before $t_{\mathrm{EB}}$ or, in the optimal case, $w_{\rho}\cv{t_\mathrm{EB}}=0$. Interestingly, when both subsystems are finite-dimensional and symmetric, i.e. $\mathcal{H}_1=\mathcal{H}_2\simeq \mathbb{C}^d$, the initial state $\rho$ can be chosen to be a maximally entangled, pure state $\rho=\ketbra{\Psi_+}{\Psi_+}$ \cite{heinosaari2011}, where \begin{equation}\label{MaxEntS} \ket{\Psi_+}=d^{-1/2}\sum_{k}\ket{k}\otimes\ket{k}, \end{equation} and $\cvc{\ket{k}}$ is a basis of $\mathbb{C}^d$. In such a case, the final state $\rho_{\Phi_t}=\cv{\Phi_t\otimes\mathcal{I}}\cvb{\rho}$ is simply the Choi-Jamio{\l}kowski isomorphic representation of the map $\Phi_t$ \cite{Jamiokowski1972,Choi1975}. In this sense one can say that entanglement of $\rho_{\Phi_t}$ certifies that the corresponding map $\Phi_t$ is entanglement preserving. For non-symmetric finite-dimensional cases, and also for the infinite-dimensional case, even though the notion of isomorphism may not be applicable, the basic concepts can be illustrated in a similar way, i.e. an entanglement witness for the outcome state can also be an EP witness for the dynamical map. However, here we restrict our attention to the finite-dimensional, symmetric situation (both subsystems have the same dimension). From the perspective of our main question, we are particularly interested in a \emph{configuration breaking} time $t_{CB}{\cv{\rho,\oper{W}}}$ defined as \begin{equation} t_{CB}\cv{\rho,\oper{W}} = \min \cvc{t\geqslant 0 : w_\rho\cv{t}\geqslant 0} \label{eq:t_B-finding}. \end{equation} In other words, this is the minimal time $t$ at which, for a given prepare-measure configuration $\cv{\rho,\oper{W}}$, the state $\rho_{\Phi_t}$ is no longer certified as entangled. Clearly \begin{equation} t_\mathrm{EB}=\max_{\cv{\rho,\oper{W}}} t_{CB}\cv{\rho,\oper{W}}, \end{equation} because, for a given $\rho$, we first look for an optimal entanglement witness which works for the longest possible time, and in the second step we look for an optimal entangled state $\rho$. The aim of this paper is to find lower bounds for $t_{CB}\cv{\rho,\oper{W}}$, and consequently, bounds for $t_\mathrm{EB}$. From now on, we shall omit the arguments of $t_{CB}$. \subsection{Partitioning by Spectral Basis}\label{sec:spectral} \begin{table*} \caption{\label{tab:classification}Classification of contributions to the witness function with respect to the characteristics of eigenvalues of the dynamical generator $\mathcal{L}.$ Class I corresponds to the trivial eigenvalue; class II represents pure decay; class III comprises the rest. The index $j$ runs from $1$ to $L=(d^2-K-1)/2$, and the two members of the $j$-th conjugate pair of class III carry the labels $j$ and $L+j$. Note that we express arguments of complex numbers with respect to the branch $\left(-\pi,\pi\right]$.
} \begin{ruledtabular} \begin{tabular}{ccccc} Class & $\Gamma_\alpha=\Re\lambda_\alpha$ & $\omega_\alpha= \Im\lambda_\alpha$ & $r_\alpha$ & $\phi_\alpha$ \\ \hline I & $\Gamma_0=0$ & $\omega_0=0$ & $r_0\geqslant 0$ & $\phi_0\in\{0,\pi\}$ \\ II & $\Gamma_1,\ldots,\Gamma_K\leqslant 0$ & $\omega_1=\ldots=\omega_K=0$ & $r_\alpha\geqslant 0$ & $\phi_1,\ldots,\phi_K\in\{0,\pi\}$ \\ III & $\tilde{\Gamma}_{j}=\tilde{\Gamma}_{L+j}\leqslant 0$ & $\tilde{\omega}_{j}=-\tilde{\omega}_{L+j}\geqslant 0$ & $\tilde{r}_{j}=\tilde{r}_{L+j}\geqslant 0$ & $\tilde{\phi}_{j}=-\tilde{\phi}_{L+j}> 0$ \\ \end{tabular} \end{ruledtabular} \end{table*} Throughout this article we assume that $\Phi_t$ is rendered by a Lindblad generator \cite{Lindblad1976}; the map converges to the identity map as $t\downarrow 0$ and possibly becomes entanglement breaking later on. In particular, $\Phi_t=e^{t\mathcal{L}}$ is a continuous completely positive and trace-preserving map on $\mathcal{B}\cv{\mathcal{H}_1}$, under which a given density matrix $\sigma\in\mathcal{B}\cv{\mathcal{H}_1}$ evolves according to \begin{equation} \dfrac{d\sigma}{dt} = \mathcal{L}\cvb{\sigma}\label{eq:Lindblad}, \end{equation} satisfying $\Phi_{t+s}=\Phi_t\circ\Phi_s$ for $t,s\geqslant 0$ and $\lim_{t\downarrow 0}\Phi_t=\mathcal{I}$, where $\mathcal{I}$ is the identity map on $\mathcal{B}\cv{\mathcal{H}_1}.$ For further convenience we describe the situation in the Heisenberg picture, where the dual map $\Phi^*_t=e^{t\mathcal{L}^*}$ is unital. The generator $\mathcal{L}^*$ can be written as \begin{equation} \mathcal{L}^*=\sum_{\alpha}\lambda_\alpha u_\alpha\tr{ v^\dagger_\alpha\cdot}, \end{equation} where $u_\alpha$ and $v_\alpha$ are, respectively, right and left eigenoperators of $\mathcal{L}^*$, satisfying the biorthogonality relation \begin{equation} \tr{ v^\dagger_\alpha u_{\alpha'}}=\delta_{\alpha\alpha'}. \end{equation} This is simply the spectral decomposition at the superoperator level, where $\mathcal{L}^*\cvb{ u_\alpha}=\lambda_\alpha u_\alpha$ defines the corresponding eigenbasis. Hence, by the spectral theorem, \[\Phi^*_t=e^{t\mathcal{L}^*}=\sum_{\alpha}e^{\lambda_\alpha t} u_\alpha\tr{ v^\dagger_\alpha\cdot}.\] Since we discuss the case when both parties in the composite system are finite-dimensional, the operators can be vectorized and the dynamical map becomes a linear transformation \cite{heinosaari2011}. Let us therefore write $ u_\alpha=\cket{\alpha}$ and $\tr{ v^\dagger_\alpha\cdot}=\cbra{\alpha}$, so that $\mathcal{L}^*=\sum_{\alpha}\lambda_\alpha\cketbra{\alpha}{\alpha}$ and $\Phi^*_t=\sum_{\alpha}e^{\lambda_\alpha t}\cketbra{\alpha}{\alpha}$. We can see that the conditions $\mathcal{L}^*\cket{\alpha}=\lambda_\alpha\cket{\alpha}$ and $\cbraket{\alpha}{\alpha'}=\delta_{\alpha\alpha'}$ define the basis and its dual for the space of operators with respect to $\mathcal{L}^*$. We stress that, within the introduced vectorization, $\cket{\alpha}$ does not need to be a Hermitian operator. We also assume that $\cvc{\cket{\alpha}}$ form a basis of $\mathcal{B}\cv{\mathcal{H}_1}$ inducing a resolution\footnote{If $\{\cket{\alpha}\}_\alpha$ does not form a basis, one can restrict the description to the subspace of $\mathcal{B}\cv{\mathcal{H}_1}$ spanned by the set $\{\cket{\alpha}\}_\alpha$, and choose $\rho$ and $\oper{W}$ accordingly.} of the identity map $\mathcal{I}=\sum_\alpha \cketbra{\alpha}{\alpha}$.
Consequently, Eq.~\eqref{eq:w_t-gen} reads \begin{equation} w_{\rho}\cv{t} = \sum_{\alpha}e^{\lambda_\alpha t}\big(\rho\big\vert\Big[\big\vert \alpha)\big(\alpha\big\vert\otimes\mathcal{I}\Big] \big\vert\oper{W}\big)\label{eq:w_t-decom}. \end{equation} Let us rewrite \[\big(\rho\big\vert\Big[\big\vert \alpha)\big(\alpha\big\vert\otimes\mathcal{I}\Big] \big\vert\oper{W}\big)=r_\alpha e^{i\phi_\alpha},\] which is just the polar decomposition of a complex number, and denote by $\Gamma_\alpha=\Re\lambda_\alpha\leqslant 0$ and $\omega_\alpha=\Im\lambda_\alpha$, respectively, the decay rates and oscillation frequencies associated with the dynamics. In Appendix~\ref{appen:no-Im-w} we show that all these parameters split into three classes, summarized in Table~\ref{tab:classification}. Looking at the table one can immediately recognize that the function $w_{\rho}\cv{t}$ is real, so that \begin{equation} \Im w_{\rho}\cv{t} = \sum_{\alpha}r_\alpha e^{\Gamma_\alpha t}\sin\cv{\omega_\alpha t+\phi_\alpha} =0, \end{equation} and Eq.~\eqref{eq:w_t-decom} is equivalent to \begin{equation} w_{\rho}\cv{t}=\sum_{\alpha}r_\alpha e^{\Gamma_\alpha t}\cos\cv{\omega_\alpha t+\phi_\alpha} \label{eq:w_t_real}. \end{equation} We observe that the negative contributions, which reflect entanglement, are associated with the oscillation frequencies $\omega_\alpha$ and the initial phases $\phi_\alpha$. The former parameters describe properties of the dynamics alone, while the latter refer to the relative orientation of the dynamical axes with respect to the prepare-measure configuration. \subsection{Structure of the Dynamical Components}\label{sec:struct} From the previous discussion we can see that the behavior of the witness can be characterized by looking at the interplay between the different parameters involved. In particular, one can decompose the average of the witness as follows:\begin{subequations} \begin{equation} w_{\rho}\cv{t} = w_I + w_{II}\cv{t} + w_{III}\cv{t} \label{eq:w_split}, \end{equation} where: \begin{align} w_I &= \big(\rho\big\vert\Big[\big\vert 0\big)\big( 0\big\vert\otimes\mathcal{I}\Big] \big\vert\oper{W}\big)=r_0 e^{i \phi_0}=\pm r_0\label{eq:W_I},\\ w_{II}\cv{t} &=\sum_{j=1}^{K}r_j e^{\Gamma_j t}\cos\cv{\phi_j}, ~~\phi_j\in\{0,\pi\} \label{eq:W_II},\\ w_{III}\cv{t} &= 2\sum_{j=1}^{L}\tilde{r}_j e^{\tilde{\Gamma}_j t}\cos\cv{\tilde{\omega}_jt + \tilde{\phi}_j}\label{eq:W_III}. \end{align} Clearly $w_I$ is the constant term corresponding to $\lambda_0=0$, while $w_{II}(t)$ represents pure decay. \end{subequations} We relabeled $\Gamma_{K+1},\ldots,\Gamma_{d^2-1}$ as $\tilde{\Gamma}_{1},\ldots,\tilde{\Gamma}_{2L}$, where $L$ is the number of conjugate pairs in class $III$. The same pattern has been applied to $\tilde{\omega}_j,$ $\tilde{r}_{j},$ and $\tilde{\phi}_{j}$. Note that the counting of eigenvalues gives $L=\cv{d^2-K-1}/2$. Note also that the eigenvector associated with $\lambda_0=0$, the identity operator $\id$, which constitutes class $I$, is denoted by the vector $\cket{0}$. Let us now remark on the interesting case $w_{I}=r_0\geqslant 0$, corresponding to $\phi_0=0$. This can occur, for example, when \[\Big[\big\vert 0)\big( 0\big\vert\otimes\mathcal{I}\Big] \big\vert\oper{W}\big) \propto \cket{0}^{\otimes 2},\] or when $\oper{W}$ is orthogonal to the identity. In this scenario the term $w_{II}\cv{t}+w_{III}\cv{t}$ will contribute to the negativity of $w_{\rho}\cv{t}$, while the time-independent term $w_{I}$ sets the threshold for the former terms.
In other words, entanglement remains at time $t$ if \begin{equation} \cvv{w_{II}\cv{t}+w_{III}\cv{t}} > w_I. \label{eq:ent_cri} \end{equation} Let us also point out a specific setting, in which one prepares a pair $\cv{\rho,\oper{W}}$ in such a way that $\phi_j=\tilde{\phi}_j=\pi$ for $j>0$. At the initial time $t=0$ the summands in Eqs.~\eqref{eq:W_II} and \eqref{eq:W_III} are then all negative. Furthermore, we also observe that at time $t\geqslant 0$ \begin{align} w_{II}\cv{t} &= -\sum_{j=1}^{K}r_j e^{\Gamma_j t} \leqslant 0\label{eq:W_II-opt},\\ w_{III}\cv{t} &= -2\sum_{j=1}^{L}\tilde{r}_j e^{\tilde{\Gamma}_j t}\cos\cv{\tilde{\omega}_jt} \label{eq:W_III-opt}. \end{align} The term $w_{II}\cv{t}$ is an increasing function eventually approaching $0$. We call such special prepare-measure pairs of operators $\cv{\rho,\oper{W}}$ \textit{good configurations}. Given that a good configuration exists for every channel (which indeed is the case, as constructively shown in Sec. \ref{sec:W-Psi}), we infer that coherent dynamics speeds up the deterioration of the EP property. In the following section we derive bounds for $t_{CB}$ using the two standard methods relevant for the quantum speed limit mentioned in the introduction, and we also obtain a specific bound valid for good configurations. While the Mandelstam–Tamm-inspired bound will be valid in general, the Margolus–Levitin-inspired bound will only apply to good configurations. \section{Entanglement Breaking Time}\label{sec:Breaking-time} In general, the problem of finding $t_{CB}\cv{\rho,\oper{W}}$ defined in Eq.~\eqref{eq:t_B-finding}, with the explicit form of the input function taken from Eq.~\eqref{eq:w_t_real}, is very complicated. This is due to its non-linear character and the large number of parameters involved. In fact, since $w_\rho\cv{0} < 0$, the problem boils down to solving the highly non-linear equation $w_\rho\cv{t} = 0$ and seeking its smallest root. Obviously, if all parameters are known, one can employ numerical methods to find the exact breaking time. Our goal, therefore, is to obtain explicit results without specifying the parameters. To this end we follow three routes in order to bound $t_{\mathrm{EB}}$. \subsection{Mandelstam–Tamm-inspired Bound}\label{sec:generic} In this part we consider the general case pertaining to an arbitrary prepare-measure pair $\cv{\rho,\oper{W}}$. First, we follow the most standard approach to the quantum speed limit. We observe that \begin{align} w_{\rho}&\left(t\right)-w_{\rho}\left(0\right) = \int_{0}^{t}dt'\,\dot{w}_{\rho}\left(t'\right) = \int_{0}^{t}dt'\sum_{\alpha}r_{\alpha}\lambda_\alpha e^{\lambda_{\alpha}t' +i\phi_\alpha} \label{eq:w-gen-int}. \end{align} Since $r_{\alpha}\geqslant 0$ and $\left|e^{\lambda_{\alpha}t' +i\phi_\alpha}\right|=\left|e^{\Gamma_{\alpha}t'}\right|\leqslant 1$ we can bound \begin{eqnarray} \left\vert w_{\rho}\left(t\right)-w_{\rho}\left(0\right) \right\vert & = & \left\vert\int_{0}^{t}dt'\sum_{\alpha}r_{\alpha}\lambda_\alpha e^{\lambda_{\alpha}t' +i\phi_\alpha}\right\vert \nonumber\\ &\leqslant&\int_{0}^{t}dt'\sum_{\alpha}r_{\alpha}\left\vert\lambda_\alpha\right\vert\nonumber\\ & =&t \sum_{\alpha}r_{\alpha}\left\vert\lambda_{\alpha}\right\vert, \end{eqnarray} where the last equality follows from evaluating the remaining trivial integral. Consequently, we get \begin{equation} t\geqslant\dfrac{\left\vert w_{\rho}\left(t\right)-w_{\rho}\left(0\right)\right\vert}{\sum_{\alpha}r_{\alpha}\left\vert\lambda_{\alpha}\right\vert} \label{eq:t-bound-gen}.
\end{equation} Note that $\left\vert\lambda_{\alpha}\right\vert=\sqrt{\Gamma_{\alpha}^2+\omega_{\alpha}^2}.$ Since $w_{\rho}\left(0\right)<0$ and $w_\rho\cv{t_{CB}}=0$, we arrive at our first result \begin{equation} t_{CB}\geqslant \dfrac{\left\vert w_{\rho}\left(0\right)\right\vert}{\sum_{\alpha}r_{\alpha}\left\vert\lambda_{\alpha}\right\vert}. \label{eq:t_B-gen} \end{equation} Note that \begin{equation} w_{\rho}\left(0\right)=\sum_{\alpha}r_{\alpha}\cos\left(\phi_{\alpha}\right). \end{equation} As already mentioned, the time $t_{CB}$ also depends on the details of the prepare-measure configuration $\cv{\rho,\oper{W}}$. From now on we intend to leverage our results by appropriately using this freedom. Therefore, all remaining results of this section will be derived under the assumption that the pair $\cv{\rho,\oper{W}}$ forms a good configuration. Then, in Sec. \ref{sec:W-Psi} we explicitly construct such a configuration, in which the state is maximally entangled while the witness is based on the projection onto this very state. \subsection{General Bound for Good Configurations}\label{sec:eff-bound} Let $\Gamma_l$ be a lower bound for the real parts of all non-trivial eigenvalues $\{\lambda_\alpha\}_{\alpha\neq 0},$ i.e. $\Gamma_l\leqslant\Gamma_\alpha$ for all $\alpha\neq 0$. Because of Eq.~\eqref{eq:W_II-opt}, fulfilled by good prepare-measure configurations, we can provide the bound \begin{equation} w_{II}\cv{t} \leqslant -e^{\Gamma_l t}C_{II} \leqslant 0, \label{eq:bounds_w_II} \end{equation} where $C_{II}=\sum_{j=1}^{K}r_j$. In a similar fashion, if we denote \begin{equation} \tilde\Omega=\max_j \tilde\omega_j, \end{equation} we get \begin{equation} w_{III}\cv{t} \leqslant -e^{\Gamma_l t}C_{III}(T) \leqslant0, \label{eq:bounds_w_III} \end{equation} where $C_{III}(T)=2\sum_{j=1}^{L}\tilde{r}_j\cos\cv{\tilde{\omega}_jT}$, valid for $0\leqslant t\leqslant T$ whenever $T\leqslant \pi/(2\tilde\Omega)$. This is true because then $\cos\cv{\tilde{\omega}_jT}\geqslant0$ for all $j$. Consequently, if $t_{CB}\leqslant T$, after a few algebraic steps we get the lower bound \begin{equation} t_{CB} \geqslant \dfrac{1}{\cvv{\Gamma_l}}\ln\Bigg(\dfrac{C_{II}+C_{III}(T)}{w_I}\Bigg):=\tau_{CB}(T). \label{eq:w-opt-summary} \end{equation} By the above arguments, if the right-hand side of the last inequality is smaller than $T$, it forms a valid lower bound on $t_{CB}$; if this is not the case, we know that $t_{CB}$ is at least $T$. Therefore, for any $0\leqslant T\leqslant \pi/(2\tilde\Omega)$ we find that \begin{equation} t_{CB} \geqslant \min\left\{T,\tau_{CB}(T)\right\}. \end{equation} Consequently, we also get \begin{equation}\label{maximizing} t_{CB} \geqslant \max_{0\leqslant T\leqslant \pi/(2\tilde\Omega)} \min\left\{T,\tau_{CB}(T)\right\}. \end{equation} We stress that, contrary to the general result in Eq. (\ref{eq:t_B-gen}), the above bound only holds for good prepare-measure configurations. In particular, the logarithm therein requires that $w_I>0$, which is one of the characteristic features of such configurations. \subsection{Margolus–Levitin-inspired Bound for Good Configurations} For the sake of completeness we shall also discuss a variant of the bound inspired by the Margolus–Levitin bound. However, the method used to derive this bound turns out to be unsuitable in the general case.
This happens because that bound is based on the inequality \begin{equation}\label{cosb} \cos\cv{x} \geqslant 1-\dfrac{2}{\pi}\left(x+\sin\cv{x}\right), \end{equation} valid for $x\geqslant 0$. In our problem, this inequality could potentially be applied to bound the $\cos(\omega_\alpha t+\phi_\alpha)$ factors in (\ref{eq:w_t_real}). However, Eq. (\ref{cosb}) constitutes a lower bound which, given that we rely on $w_\rho(t_{CB})\geqslant0$, is not useful. Moreover, in general we have no control over the sign of the arguments $\omega_\alpha t+\phi_\alpha$. Quite interestingly, both limiting factors pointed out above disappear for good configurations. First of all, the trigonometric terms appear only in $w_{III}(t)$ and are always multiplied by $-1$. Moreover, since all $\tilde\omega_j$ are non-negative, we also do not suffer from the sign issue. Therefore, for good configurations we can bound $w_{III}(t)$ as follows \begin{eqnarray} w_{III}(t) & = & -2\sum_{j=1}^{L}\tilde{r}_{j}e^{\tilde{\Gamma}_{j}t}\cos\left(\tilde{\omega}_{j}t\right)\\ & \leqslant & 2\sum_{j=1}^{L}\tilde{r}_{j}e^{\tilde{\Gamma}_{j}t}\left[\frac{2}{\pi}\tilde{\omega}_{j}t+\frac{2}{\pi}\sin\left(\tilde{\omega}_{j}t\right)-1\right]\nonumber\\ & \leqslant & 2\sum_{j=1}^{L}\tilde{r}_{j}e^{\tilde{\Gamma}_{j}t}\left[\frac{2}{\pi}\tilde{\omega}_{j}t-\frac{\pi-2}{\pi}\right].\nonumber \end{eqnarray} In the last line we simply bounded the sine function by $1$. Since all $\tilde{\omega}_{j}\geqslant 0$, the first term in the bracket is positive, so the exponential factor multiplying it can be bounded by $1$ (since all $\tilde{\Gamma}_{j}\leqslant 0$). On the other hand, the second term is negative, so we can bound the exponential from below as \begin{equation} e^{\tilde{\Gamma}_{j}t}=e^{-\left|\tilde{\Gamma}_{j}\right|t}\geqslant 1-\left|\tilde{\Gamma}_{j}\right|t, \end{equation} using the fact that $e^{-x}\geqslant 1-x$ for $x\geqslant0$. As a result we get the inequality \begin{equation} w_{III}(t)\leqslant 2\sum_{j=1}^{L}\tilde{r}_{j}\left[\frac{2}{\pi}\tilde{\omega}_{j}t+\frac{\pi-2}{\pi}\left(\left|\tilde{\Gamma}_{j}\right|t-1\right)\right]. \end{equation} Following the same reasoning concerning the exponential decay we also bound \begin{equation} w_{II}(t)\leqslant\sum_{j=1}^{K}r_{j}\left(\left|\Gamma_{j}\right|t-1\right). \end{equation} Finally, since $w_{\rho}\left(t_{CB}\right)\geqslant 0$, we get the lower bound \begin{equation} t_{CB}\geqslant\frac{\sum_{j=1}^{K}r_{j}+2\frac{\pi-2}{\pi}\sum_{j=1}^{L}\tilde{r}_{j}-r_{0}}{\sum_{j=1}^{K}r_{j}\left|\Gamma_{j}\right|+2\sum_{j=1}^{L}\tilde{r}_{j}\left(\frac{2}{\pi}\tilde{\omega}_{j}+\frac{\pi-2}{\pi}\left|\tilde{\Gamma}_{j}\right|\right)}. \end{equation} The above bound for the entanglement breaking time looks rather cumbersome. It depends not only on the spectrum of the channel (superoperator), represented by the parameters $\Gamma_j$, $\tilde \Gamma_j$ and $\tilde\omega_j$, but also on the interplay between the state, the entanglement witness and the eigenbasis of the superoperator (through $r_0$, $r_j$ and $\tilde r_j$). In the next section we select both the state and the entanglement witness in such a way that together they not only form a good configuration but also, due to the very high symmetry of this configuration, yield coefficients $r_\alpha$ which do not depend on the basis $\big\vert \alpha)$.
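Although cumbersome, both bounds are straightforward to evaluate numerically once the spectral data are given. The following minimal NumPy sketch (our own illustration with placeholder values for a hypothetical generator in a good configuration; it is not taken from any reference implementation) evaluates the Mandelstam–Tamm-inspired bound \eqref{eq:t_B-gen} and the Margolus–Levitin-inspired bound above:

\begin{verbatim}
import numpy as np

# Illustrative spectral data for a good configuration (placeholder values):
# class II: K = 2 real decay rates; class III: L = 1 conjugate pair.
r0 = 0.5
Gamma_II, r_II = np.array([-1.0, -2.0]), np.array([0.3, 0.2])
Gamma_III, omega_III, r_III = np.array([-1.5]), np.array([3.0]), np.array([0.25])

# Mandelstam-Tamm-inspired bound: t_CB >= |w(0)| / sum_alpha r_alpha |lambda_alpha|.
# For a good configuration phi_0 = 0 and phi_alpha = pi otherwise, so
# w(0) = r_0 - sum_II r_j - 2 sum_III r_j (each class-III pair counted twice).
w0 = r0 - r_II.sum() - 2 * r_III.sum()          # must be < 0 (entangled input)
denom_mt = (r_II * np.abs(Gamma_II)).sum() \
    + 2 * (r_III * np.hypot(Gamma_III, omega_III)).sum()
t_mt = abs(w0) / denom_mt

# Margolus-Levitin-inspired bound for good configurations.
num_ml = r_II.sum() + 2 * (np.pi - 2) / np.pi * r_III.sum() - r0
den_ml = (r_II * np.abs(Gamma_II)).sum() + 2 * (r_III * (
    2 / np.pi * omega_III + (np.pi - 2) / np.pi * np.abs(Gamma_III))).sum()
t_ml = num_ml / den_ml

print(f"MT-inspired bound: {t_mt:.4f}, ML-inspired bound: {t_ml:.4f}")
\end{verbatim}

For the chosen placeholder numbers, $w_\rho(0)=-0.5<0$, as required for the initial state to be detected as entangled.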
\section{Symmetric Entanglement Witness}\label{sec:W-Psi} In this section we consider a specific choice for the entanglement witness operator \begin{equation} \hat{W}_{\Psi_+}=\id^{\otimes 2}-d\left\vert\Psi_{+}\right\rangle \left\langle \Psi_{+}\right\vert \label{eq:W_d}, \end{equation} with the maximally entangled state $\left\vert \Psi_{+}\right\rangle$ already defined in (\ref{MaxEntS}). One can observe that the average value of this witness is non-negative for all separable states and vanishes for some of them, a fact which suggests a certain optimality of this choice of the witness. Moreover, in order to strengthen the configuration we also select the state to be maximally entangled, i.e. $\hat \rho=\left\vert\Psi_{+}\right\rangle \left\langle \Psi_{+}\right\vert$. We shall call this whole choice a \textit{symmetric configuration}. As a result, the time-dependent average value of the witness becomes \begin{align} w_{\Psi_+}\left(t\right) &= 1 - d\big(\Psi_+\big\vert\Phi_t\otimes\mathcal{I} \big\vert\Psi_+\big)\nonumber\\ &=1- d\sum_{\alpha}e^{\lambda_\alpha t}s_\alpha \label{eq:w_sym}, \end{align} where \begin{equation} s_\alpha=\big(\Psi_+\big\vert\Big[\big\vert \alpha)\big(\alpha\big\vert\otimes\mathcal{I}\Big] \big\vert\Psi_+\big). \label{eq:coeff_s} \end{equation} In the vectorized notation, the state \begin{equation} \hat \rho=\left\vert\Psi_{+}\right\rangle \left\langle \Psi_{+}\right\vert=\dfrac{1}{d}\sum_{j,j'=1}^d\ketbra{j}{j'}\otimes\ketbra{j}{j'}, \end{equation} is represented by the vector \begin{equation} \cket{\Psi_{+}} = \dfrac{1}{d}\sum_{j,j'=1}^d\cket{e_{jj'}}\otimes\cket{e_{jj'}}, \label{eq:Psi} \end{equation} where $\cket{e_{jj'}}$ are members of the canonical basis of the matrix space $\mathcal{B}\cv{\mathcal{H}_1}$. In other words, $\cket{e_{jj'}}$ is the vectorized form of $\ketbra{j}{j'}$ (we recall that $\mathcal{H}_1=\mathcal{H}_2\simeq \mathbb{C}^d$). We are now in a position to use the above vectorization to prove two technical results concerning the choice (\ref{eq:w_sym}): \begin{lemma}\label{Lemma1} For every channel $\Phi_t$ we have that \begin{equation} \forall_{\alpha}\quad s_{\alpha}=\frac{1}{d^{2}}. \end{equation} \end{lemma} To show this result, one performs the following calculation (we omit summation ranges for brevity) \begin{eqnarray} s_\alpha &=& \cbra{\Psi_+}\left[\cketbra{\alpha}{\alpha}\otimes\mathcal{I}\right]\cket{\Psi_+}\nonumber\\ &=& \dfrac{1}{d^2}\sum_{j,j',j'',j'''}\cbraket{e_{jj'}}{\alpha}\cbraket{\alpha}{e_{j''j'''}}\cbraket{e_{jj'}}{e_{j''j'''}}\nonumber\\ &=& \dfrac{1}{d^2}\sum_{j,j'}\cbraket{\alpha}{e_{jj'}}\cbraket{e_{jj'}}{\alpha}= \dfrac{1}{d^2}\cbraket{\alpha}{\alpha} = \dfrac{1}{d^2} \label{eq:s_alpha}. \end{eqnarray} Passing from the second to the third line we used orthonormality of the vectors $\cket{e_{jj'}}$; the final steps use their completeness together with the normalization $\cbraket{\alpha}{\alpha}=1$. \begin{lemma}\label{Lemma2} For every channel $\Phi_t$ we have that \begin{align} r_\alpha e^{i\phi_\alpha} &= \delta_{\alpha 0} - d s_\alpha=\left\{\begin{array}{lr} 1-\frac{1}{d}, & \alpha=0\\ -\dfrac{1}{d}, & \text{~otherwise} \end{array} \right. \label{eq:coef_sym}, \end{align} provided that $\cv{\rho,\oper{W}}$ is a symmetric configuration.
\end{lemma} First of all, since the witness under discussion is vectorized to the form $\big\vert\oper{W}\big)=\big\vert0\big)^{\otimes 2}-d \cket{\Psi_{+}}$, for the symmetric configuration we know that \begin{align*} r_\alpha &e^{i\phi_\alpha} = \big(\rho\vert\Big[\big\vert \alpha)\big(\alpha\big\vert\otimes\mathcal{I}\Big] \big\vert\oper{W}\big)\\ &= \big(\Psi_+\big\vert\Big[\big\vert \alpha)\big(\alpha\big\vert\otimes\mathcal{I}\Big] \big\vert0\big)^{\otimes 2}- d\big(\Psi_+\big\vert\Big[\big\vert \alpha)\big(\alpha\big\vert\otimes\mathcal{I}\Big] \big\vert\Psi_+\big)\\ &= \big(\Psi_+\big\vert\Big[\big\vert \alpha)\big(\alpha\big\vert\otimes\mathcal{I}\Big] \big\vert0\big)^{\otimes 2} - d s_\alpha. \end{align*} Therefore, given Lemma \ref{Lemma1}, we only need to explicitly calculate the first term. To this end, we observe that $\cket{0}$, which corresponds to the identity operator $\id=\sum_{j=1}^d \ketbra{j}{j}$, is represented as $\cket{0}=\sum_{j=1}^d\cket{e_{jj}}$. We can then explicitly calculate \begin{align} \hspace{1cm}&\hspace{-1cm}\big(\Psi_+\big\vert\Big[\big\vert \alpha)\big(\alpha\big\vert\otimes\mathcal{I}\Big] \big\vert0\big)^{\otimes 2}\nonumber\\ &= \dfrac{1}{d}\sum_{j,j'}\cbraket{e_{jj'}}{\alpha}\cbraket{\alpha}{ 0}\cbraket{e_{jj'}}{ 0}\nonumber\\ &= \dfrac{1}{d}\sum_{j,j'}\cbraket{e_{jj'}}{\alpha}\delta_{\alpha 0}\Big[\sum_{j''}\cbraket{e_{jj'}}{e_{j''j''}}\Big]\nonumber\\ &= \dfrac{\delta_{\alpha 0}}{d}\sum_{j,j'}\cbraket{e_{jj'}}{\alpha}\sum_{j''}\delta_{jj''}\delta_{j'j''}\nonumber\\ &= \dfrac{\delta_{\alpha 0}}{d}\sum_{j,j'}\cbraket{e_{jj'}}{\alpha}\delta_{jj'}\nonumber\\ &= \dfrac{\delta_{\alpha 0}}{d}\sum_{j}\cbraket{e_{jj}}{ 0}= \dfrac{\delta_{\alpha 0}}{d}\sum_{jj'}\cbraket{e_{jj}}{e_{j'j'}}=\delta_{\alpha 0} \label{eq:coef_sym_fixed-term}. \end{align} This completes the proof. Given both lemmas above, we reach the conclusion \begin{corollary} The symmetric configuration is also a good configuration. \end{corollary} Lemma \ref{Lemma2} says that $r_0=1-\frac{1}{d}\geqslant0$ and consequently $\phi_0=0$. On the other hand, for $\alpha\neq0$ (i.e. for members of classes II and III) we can see that $r_\alpha e^{i\phi_\alpha}$ is real and negative. Therefore, $\phi_\alpha=\pi$ in all these cases. These are exactly the conditions defining a good configuration. As we can see, one can find a good configuration for any channel $\Phi_t$, simply by means of the symmetric configuration discussed in this section. Moreover, since the entanglement breaking time $t_{\mathrm{EB}}$ is lower bounded by all $t_{CB}$, we conclude that all three bounds derived in the previous section apply to $t_{\mathrm{EB}}$. In the following, we simplify these bounds for the symmetric configuration. Before doing so, we note in passing that the symmetric configuration has an additional interesting feature, namely, it can be related to the geometric measure of entanglement \cite{GEOM}. The latter was recently shown to be equal to the minimal time required for a (global) unitary transformation to transform a given pure entangled state into the closest separable state~\cite{Rudnicki2021}. An analogue of $t_{\mathrm{EB}}$ in this problem reads $\Omega^{-1}\arccos\left(1/\sqrt{d}\right)$, with $\Omega$ defined as an energy scale of the Hamiltonian rendering the global time evolution.
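As a quick numerical sanity check of Lemma \ref{Lemma1}, the vectorized computation can be reproduced in a few lines (our own sketch, assuming NumPy; for simplicity the eigenbasis is replaced by a random orthonormal operator basis, for which the dual basis coincides with the adjoint):

\begin{verbatim}
import numpy as np

d = 2  # dimension of H_1 = H_2; the check works for any d
rng = np.random.default_rng(0)

# Canonical basis |e_{jj'}) of B(H_1): column j*d + j' of the identity
# matrix is the vectorization of |j><j'|.
E = np.eye(d * d)

# Vectorized maximally entangled state |Psi_+), Eq. (eq:Psi).
Psi = sum(np.kron(E[:, k], E[:, k]) for k in range(d * d)) / d

# Random orthonormal basis {|alpha)} of B(H_1): columns of a random unitary.
A = np.linalg.qr(rng.normal(size=(d * d, d * d))
                 + 1j * rng.normal(size=(d * d, d * d)))[0]

# Lemma 1: s_alpha = (Psi_+|[|alpha)(alpha| (x) I]|Psi_+) = 1/d^2 for all alpha.
for k in range(d * d):
    a = A[:, k]
    P = np.kron(np.outer(a, a.conj()), np.eye(d * d))
    assert np.isclose(Psi.conj() @ P @ Psi, 1 / d ** 2)
\end{verbatim}

The same setup can be extended to check Lemma \ref{Lemma2}, once the basis is constructed so that $\cket{0}$ is one of its elements.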
\subsection{Speed of entanglement breaking property} We now summarize the above findings by bounding the time at which the channel becomes EB: \begin{equation} t_{\mathrm{EB}}\geqslant t_{CB}\cv{\left\vert\Psi_{+}\right\rangle \left\langle \Psi_{+}\right\vert,\oper{W}_{\Psi_+}}. \end{equation} The Mandelstam-Tamm-inspired lower bound for the symmetric configuration provides \begin{equation} t_{\mathrm{EB}}\geqslant \dfrac{d\cv{d-1}}{\sum_{\alpha}\left\vert\lambda_{\alpha}\right\vert}=T_{\mathrm{M-T}} \label{eq:t_B-gen-sym}. \end{equation} In the case of the general bound for good prepare-measure configurations we can slightly simplify the intermediate bound defined in (\ref{eq:w-opt-summary}) to the form \begin{equation} \tau_{CB}(T)=\dfrac{1}{\cvv{\Gamma_l}}\ln\Bigg(\dfrac{K+2\sum_{j=1}^{L}\cos\cv{\tilde{\omega}_jT}}{d-1}\Bigg). \end{equation} Still, this bound depends on the time threshold $T$ and needs to be optimized as in (\ref{maximizing}). Finally, the Margolus–Levitin-inspired lower bound for the symmetric configuration gives \begin{equation} t_{\mathrm{EB}}\geqslant\frac{K+2\frac{\pi-2}{\pi}L-d+1}{\sum_{j=1}^{K}\left|\Gamma_{j}\right|+2\sum_{j=1}^{L}\left(\frac{2}{\pi}\tilde{\omega}_{j}+\frac{\pi-2}{\pi}\left|\tilde{\Gamma}_{j}\right|\right)}=T_{\mathrm{M-L}}. \end{equation} \section*{Acknowledgments} We acknowledge support by the Foundation for Polish Science (IRAP project, ICTQT, contract no. 2018/MAB/5, co-financed by EU within the Smart Growth Operational Programme). \bibliographystyle{apsrev4-1}
{ "timestamp": "2022-09-20T02:22:41", "yymm": "2209", "arxiv_id": "2209.08689", "language": "en", "url": "https://arxiv.org/abs/2209.08689" }
\section{Introduction} Deep neural networks (DNNs) have been deployed in many applications, including security-critical ones such as biometric authentication and automated driving. However, DNNs are vulnerable to adversarial examples (AEs), which are inputs perturbed with noise so as to mislead DNNs without affecting human perception. In addition, AEs generated for a source model can fool other (target) models, a property called adversarial transferability. This transferability allows attackers to use a substitute model to generate AEs that may also fool other target models, so reducing its influence has become an urgent issue. \par Many studies have investigated both AEs and the transferability of AEs to build models robust against these attacks \cite{aprilpyone2021block, croce2020reliable, croce2020minimally, maksym2020square}. In contrast, various methods for generating AEs have also been proposed to fool DNNs \cite{kiya2022overview, ian2015explaining, su2019one}. Recently, the adversarial transferability between the vision transformer (ViT) \cite{Alexey2021an} and convolutional neural network models was confirmed to be low \cite{mahmood2021on, naseer2022on, tanaka2022transferability}. However, the adversarial transferability of ConvMixer \cite{trockman2022patches} has never been investigated. ConvMixer is an isotropic network like ViT, but it is CNN-based. Accordingly, in this paper, we aim to investigate the adversarial transferability between ConvMixer and other CNN models to confirm the difference between ConvMixer and ViT models. \par In this paper, the adversarial transferability of encrypted models is evaluated by using a benchmark attack method, referred to as AutoAttack, which was proposed to objectively evaluate the robustness of models against AEs. In an experiment, the use of ConvMixer models is verified to weaken the transferability not only of AEs generated from other CNN models but also of AEs generated from ViT models. \section{Related Work} \subsection{Adversarial Examples} AEs are classified into three groups based on the knowledge of a particular model and training data available to the adversary: white-box, black-box, and gray-box. Under white-box settings \cite{ian2015explaining, madry2018towards, croce2020minimally}, the adversary has direct access to the model, its parameters, training data, and defense mechanism. In black-box attacks \cite{su2019one, maksym2020square, yandogn2019nattack}, the adversary has no knowledge of the model except its outputs. Situated between white-box and black-box methods are gray-box attacks, in which the adversary knows something about the system. With the development of AEs, numerous adversarial defenses have been proposed in the literature. Conventional defenses have been compared under a benchmark attack framework called AutoAttack \cite{croce2020reliable}. \par However, conventional defenses are not effective against adversarial transferability, i.e. the property that AEs generated for a source model can, in general, fool another black-box (target) model with non-trivial probability, although many studies \cite{nicolas2016transfer, christian2014intriguing, liu2017delving, mahmood2021on} have investigated it. As mentioned above, the adversarial transferability between the vision transformer (ViT) and convolutional neural network models was recently confirmed to be low. This result is expected to lead to novel insights into defending models.
As noted above, however, the adversarial transferability of ConvMixer, an isotropic network like ViT that is nevertheless built from CNNs, has not yet been investigated; this motivates our comparison between ConvMixer and ViT models. \subsection{AutoAttack} Many defenses against AEs have been proposed, but it is very difficult to judge the value of defense methods without an independent test. For this reason, AutoAttack \cite{croce2020reliable}, an ensemble of adversarial attacks used to test adversarial robustness objectively, was proposed as a benchmark attack. AutoAttack consists of four attack methods: Auto-PGD-cross entropy (APGD-ce) \cite{croce2020reliable}, APGD-target (APGD-t), FAB-target (FAB-t) \cite{croce2020minimally}, and Square Attack \cite{maksym2020square}, as summarized in Table I. In this paper, we use these four attack methods to objectively evaluate the transferability of AEs. \begin{table}[h] \centering \caption{Attack methods used in AutoAttack} \begin{tabular}{c|c|c|} \multirow{2}{*}{Attack} & Target (T) & White-box (W) \\ & / Non-target (N) & / Black-box (B) \\ \hline APGD-ce & N & W \\ APGD-t & T & W \\ FAB-t & T & W \\ Square & N & B \\ \end{tabular} \label{table1} \end{table} \subsection{ConvMixer} ConvMixer \cite{trockman2022patches} is known to perform well in image classification tasks even though it has a small number of model parameters. It is a type of isotropic network inspired by the vision transformer (ViT) \cite{Alexey2021an}, so its architecture has a distinctive feature called patch embedding. \par Fig. 1 shows the architecture of the network, which consists of two main structures: patch embedding and ConvMixer layers. First, an input image $x$ is divided into patches by patch embedding with a patch size of $P$. Next, the patches are transformed by $L$ ConvMixer layers, each consisting of a depthwise convolution and a pointwise convolution. Therefore, ConvMixer is a CNN-based model, not a Transformer-based model like ViT. Finally, the output of the $L$th ConvMixer layer is transformed by global average pooling and a softmax function to obtain a classification result. \par In previous work, the transferability between CNN models and ViT was reported to be lower than the transferability between CNN models \cite{mahmood2021on}, but the reason for this low transferability was not clearly identified. In this paper, we evaluate the hypothesis that patch embedding is the cause of the low transferability. To evaluate this hypothesis, we use ConvMixer as a CNN-based model with patch embedding, and we compare it with a Transformer-based model, ViT. \begin{figure*}[t] \centering \includegraphics[keepaspectratio, width=160mm]{figure/APSIPA_convmixer_202208.png} \caption{Architecture of ConvMixer \cite{trockman2022patches}} \label{fig:convmixer} \end{figure*} \section{Evaluation of Adversarial Transferability} In this paper, robustness against adversarial transferability is evaluated under the use of ConvMixer models. The models used for comparison are summarized here. \subsection{Type of Model} Various models have been proposed for image classification tasks. The residual network (ResNet) \cite{he2016deep} and the very deep convolutional network (VGGNet) \cite{karen2015deep} use convolutional neural networks (CNNs). In contrast, vision transformers (ViT) \cite{Alexey2021an} do not use CNNs.
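To make concrete why ConvMixer counts as a CNN-based model despite its ViT-like macro design, the following is a minimal PyTorch-style sketch of patch embedding and one ConvMixer layer, as described in Sec. II-C; the class name and hyperparameters here are illustrative, not the authors' reference implementation.

\begin{verbatim}
import torch
import torch.nn as nn

class ConvMixerLayer(nn.Module):
    """One ConvMixer layer: a depthwise convolution (spatial mixing)
    with a residual connection, followed by a pointwise convolution
    (channel mixing), each with GELU activation and BatchNorm."""
    def __init__(self, dim, kernel_size=9):
        super().__init__()
        self.depthwise = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size, groups=dim, padding="same"),
            nn.GELU(), nn.BatchNorm2d(dim))
        self.pointwise = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size=1),
            nn.GELU(), nn.BatchNorm2d(dim))

    def forward(self, x):
        x = x + self.depthwise(x)   # residual around spatial mixing
        return self.pointwise(x)

# Patch embedding: a strided convolution splitting the image into
# P x P patches (P = 7 here, purely illustrative).
patch_embed = nn.Conv2d(3, 256, kernel_size=7, stride=7)
x = patch_embed(torch.randn(1, 3, 224, 224))   # -> (1, 256, 32, 32)
for _ in range(8):                             # L = 8 layers (illustrative)
    x = ConvMixerLayer(256)(x)
\end{verbatim}

Every operation above is a convolution, which is why ConvMixer sits on the CNN side of the comparison while sharing patch embedding with ViT.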
As mentioned above, the transferability between CNN models and ViT was found to be lower than the transferability between CNN models \cite{mahmood2021on}. In this paper, we use three CNN models, ResNet18, ResNet50, and VGG16, and two isotropic networks, ViT and ConvMixer, to investigate the transferability of AEs between models. In addition, encrypted ConvMixer models, which were proposed for defense against AEs \cite{aprilpyone2021block}, are evaluated in terms of the transferability of AEs. \subsection{Encrypted ConvMixer Model} A block-wise transformation with secret keys, inspired by learnable encryption \cite{maung2021ensemble, kiya2022overview, tanaka2018learnable,Madono2020BlockwiseSI, chuman2019encryption,watanabe2004afast,ibuki2016unitary, sirichotedumrong2021a}, was proposed for adversarial defense \cite{aprilpyone2021block}, in which a model is trained by using encrypted images as follows (see Fig. 2). \begin{enumerate} \item Each training image $x$ is divided into blocks with a size of $M \times M$. \item Every block in $x$ is encrypted by a transformation algorithm with secret keys to generate an encrypted image. \item A model is trained by using the encrypted images to generate an encrypted model. \item A query image is encrypted with key $K$, and the encrypted image is then input to the encrypted model to get an estimation result. \end{enumerate} There are two parameters used in steps 1) and 2) when encrypting a model: the block size $M$ and the transformation algorithm. In \cite{aprilpyone2021block}, three transformation algorithms were proposed: pixel shuffling (SHF), bit flipping (NP), and format-preserving, Feistel-based encryption (FFX); a sketch of the SHF case is given at the end of this section. \par Fig. 2 shows examples of images encrypted from the original image with a size of $224 \times 224$ in Fig. 2 (a) by using these three algorithms with $M=16$. In this paper, we evaluate the transferability of AEs between models including encrypted ones. \begin{figure}[h] \centering \begin{minipage}{0.45\linewidth} \centering \includegraphics[keepaspectratio, scale=0.4]{figure/img_original.png} \subcaption{} \end{minipage} \begin{minipage}{0.45\linewidth} \centering \includegraphics[keepaspectratio, scale=0.4]{figure/img_SHF.png} \subcaption{} \end{minipage} \\ \begin{minipage}{0.45\linewidth} \centering \includegraphics[keepaspectratio, scale=0.4]{figure/img_NP.png} \subcaption{} \end{minipage} \begin{minipage}{0.45\linewidth} \centering \includegraphics[keepaspectratio, scale=0.4]{figure/img_FFX.png} \subcaption{} \end{minipage} \caption{Example of transformed images ($M=16$) (a): plain image, (b): SHF, (c): NP, (d): FFX} \label{fig4} \end{figure} \subsection{AEs Designed with Source Model} Fig. 3 shows the framework of attacks with adversarial transferability, where AEs are designed by using a source model. In this paper, the four methods used in AutoAttack are used to generate AEs. After being generated, the AEs are input to a target model. \par In this paper, we consider not only plain models but also encrypted models as a source model. Adversarial defenses with encrypted models are illustrated in Fig. 4 \cite{aprilpyone2021block}, where an encrypted model is trained with encrypted images, and an AE is transformed with a secret key. For the case of using encrypted models, the framework of attacks with adversarial transferability is given in Fig. 5, where a perturbation generated from an encrypted model is added to plain query images.
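To illustrate steps 1) and 2), the following is a minimal sketch of block-wise pixel shuffling (SHF) with a secret key. It is only a toy version for intuition; the exact algorithms of \cite{aprilpyone2021block} differ in detail, and the function name and the use of the key as an RNG seed are our own assumptions.

\begin{verbatim}
import numpy as np

def shuffle_blocks(image, block_size, key):
    """Toy block-wise pixel shuffling (SHF): permute the pixel values
    inside every M x M block with one secret permutation derived from
    `key`. `image` is an H x W x C uint8 array, H and W divisible by M."""
    rng = np.random.default_rng(key)      # the key seeds the permutation
    h, w, c = image.shape
    m = block_size
    perm = rng.permutation(m * m * c)     # same permutation for all blocks
    out = image.copy()
    for i in range(0, h, m):
        for j in range(0, w, m):
            block = out[i:i + m, j:j + m].reshape(-1)
            out[i:i + m, j:j + m] = block[perm].reshape(m, m, c)
    return out

# Encrypt a 224 x 224 RGB image with M = 16, as in Fig. 2 (b).
img = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
encrypted = shuffle_blocks(img, block_size=16, key=2022)
\end{verbatim}

Training then proceeds on such encrypted images, and a query image must be encrypted with the same key before inference.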
\begin{figure} \centering \includegraphics[keepaspectratio, scale=0.28]{figure/createAEplain.png} \caption{Framework of attacks with adversarial transferability} \label{fig:attack_plain} \end{figure} \begin{figure} \centering \includegraphics[keepaspectratio, scale=0.4]{figure/framework_maung2.png} \caption{Adversarial defense with encrypted models \cite{aprilpyone2021block}} \label{fig:defense_enc} \end{figure} \begin{figure} \centering \includegraphics[keepaspectratio, scale=0.28]{figure/createAEenc.png} \caption{Framework of attacks with adversarial transferability (with encrypted models)} \label{fig:attack_enc} \end{figure} \section{Experiment} \subsection{Experiment Setup} In the experiment, we used five networks for image classification, ResNet18, ResNet50, VGG16, ViT, and ConvMixer, to evaluate the transferability of AEs. In addition, ConvMixer was used for generating encrypted models, where the above three transformation algorithms, SHF, NP, and FFX, were applied to images in accordance with the steps in Sec. III-B. The experiment was carried out on the CIFAR-10 dataset (with 10 classes), which consists of 60,000 color images with a dimension of $3 \times 32 \times 32$; 50,000 of the images are for training, 10,000 are for testing, and each class contains 6,000 images. The images in the dataset were resized to $3 \times 224 \times 224$ to fit the pretrained ViT models. AEs were generated by using the four attack methods used in AutoAttack under the $l_\infty$ norm with $\epsilon = 8/255$: APGD-ce, APGD-t, FAB-t, and Square. \par The transferability of the AEs was evaluated by using the attack success rate (ASR). The ASR between a source classifier model $\mathrm{C_s}$ and a target classifier model $\mathrm{C_t}$ is given by \begin{equation} \mathrm{ASR} = \frac{100}{N_\mathrm{c}} \sum^{N_\mathrm{c}}_{k=1} \begin{cases} 1 & \big( \mathrm{A_{C_t}}(x_k, y_k) \land \left \{ \mathrm{C_s}(x_k) = y_k \right \} \big) \\ 0 & (\mathrm{otherwise}) \end{cases}, \end{equation} \begin{equation} \mathrm{A_{C}}(x, y) = \left \{ \mathrm{C}(x) = y \right \} \land \left \{ \mathrm{C}(x^\mathrm{adv}) \neq y \right \}, \end{equation} where $N_\mathrm{c}$ is the number of images correctly classified by both $\mathrm{C_s}$ and $\mathrm{C_t}$, $x_k$ is an image used to generate an AE, $y_k$ is the label of image $x_k$, and $x^\mathrm{adv}$ is an AE generated from an image $x$. The ASR is in the range of $[0, 100]$, and a lower value indicates lower transferability. \subsection{Results} \subsubsection{Transferability among Five Models} Tables II and III show the transferability among the five models in terms of ASR. Table II gives the results when ResNet18 was chosen as the source model. From the table, ResNet50 and VGG16 were misled by the AEs. In contrast, ViT and ConvMixer were not misled. Table III shows the ASR when ConvMixer was used as the source model. The AEs generated for ConvMixer could not mislead the other four models. Both ViT and ConvMixer are isotropic networks, but ViT was not fooled either. Accordingly, the transferability between ConvMixer and the other models, including ViT, was confirmed to be low.
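For reference, the ASR defined above amounts to the following computation; the classifier and attack interfaces are hypothetical stand-ins, not part of any particular library.

\begin{verbatim}
def attack_success_rate(source, target, attack, images, labels):
    """ASR of AEs crafted on `source` and evaluated on `target`.
    `source(x)` and `target(x)` return predicted labels, and
    `attack(model, x, y)` returns an AE for image x with label y."""
    hits, n_correct = 0, 0
    for x, y in zip(images, labels):
        # N_c: count only images correctly classified by both models.
        if source(x) != y or target(x) != y:
            continue
        n_correct += 1
        x_adv = attack(source, x, y)    # AE generated on the source model
        if target(x_adv) != y:          # the AE flips the target's prediction
            hits += 1
    return 100.0 * hits / max(n_correct, 1)  # in [0, 100]; lower = less transfer
\end{verbatim}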
\begin{table}[t] \centering \caption{ASR of five models (Source: ResNet18)} \begin{tabular}{c||c|c|c|c|} Target & APGD-ce & APGD-t & FAB-t & Square \\ \hline ResNet18 & 100.0 & 100.0 & 99.62 & 100.0\\ ResNet50 & 97.08 & 68.97 & 1.35 & 21.74\\ VGG16 & 54.39 & 33.06 & 0.69 & 7.68\\ ViT & 32.16 & 5.49 & 0.04 & 4.09\\ ConvMixer & 17.19 & 8.33 & 0.39 & 5.69\\ \end{tabular} \label{table2} \end{table} \begin{table}[t] \centering \caption{ASR of five models (Source: ConvMixer)} \begin{tabular}{c||c|c|c|c|} Target & APGD-ce & APGD-t & FAB-t & Square \\ \hline ResNet18 & 48.91 & 27.65 & 0.5 & 7.0\\ ResNet50 & 47.64 & 26.3 & 0.5 & 10.41\\ VGG16 & 40.88 & 27.17 & 0.63 & 5.21\\ ViT & 36.48 & 10.6 & 0.09 & 3.14\\ ConvMixer & 100.0 & 100.0 & 99.99 & 100.0\\ \end{tabular} \label{table3} \end{table} \subsubsection{Transferability between ConvMixer and Encrypted ConvMixer} The transferability between ConvMixer and encrypted ConvMixer models is shown in Table IV, where ConvMixer was used as the source model and encrypted ConvMixer models were used as target models. From the results, the use of encrypted models enhanced the robustness against each attack. In particular, models encrypted with FFX were more robust than those encrypted with the other methods. In addition, the transferability was not affected by the choice of block size. \begin{table}[t] \centering \caption{ASR of ConvMixer models (Source: ConvMixer, Target: Encrypted ConvMixer)} \begin{tabular}{c|c||c|c|c|c|} Transform & Block size & APGD-ce & APGD-t & FAB-t & Square \\ \hline SHF & 16 & 71.13 & 38.41 & 2.22 & 17.4\\ SHF & 8 & 72.16 & 38.55 & 1.83 & 17.04\\ SHF & 4 & 74.95 & 42.51 & 1.85 & 16.45\\ \hline NP & 16 & 73.07 & 39.94 & 1.84 & 17.33\\ NP & 8 & 73.78 & 39.8 & 1.77 & 16.13\\ NP & 4 & 72.57 & 39.36 & 1.85 & 17.64\\ \hline FFX & 16 & 30.59 & 18.5 & 2.7 & 16.55\\ FFX & 8 & 31.15 & 19.31 & 2.87 & 15.21\\ FFX & 4 & 30.74 & 18.49 & 2.81 & 14.53\\ \end{tabular} \label{table4} \end{table} \subsubsection{Transferability between Encrypted ConvMixer Models} Table V shows the transferability between models encrypted under different conditions, where both the source and target models were encrypted. The source model was encrypted by using SHF with a block size of 16. As shown in the table, using different encryption parameters was not sufficient to reduce the transferability. Moreover, comparing Tables IV and V, the transferability between encrypted ConvMixer models was higher than that between the plain ConvMixer model and encrypted ones. \begin{table*}[t] \centering \caption{ASR of ConvMixer models (Source: Encrypted ConvMixer (SHF, $M=16$), Target: Encrypted ConvMixer (with key different from key used for source model))} \begin{tabular}{c|c||c|c|c|c|} Transform & Block size & APGD-ce & APGD-t & FAB-t & Square \\ \hline SHF (with same & \multirow{2}{*}{16} & \multirow{2}{*}{100.0} & \multirow{2}{*}{100.0} & \multirow{2}{*}{100.0} & \multirow{2}{*}{100.0} \\ key as source) & & & & & \\ SHF & 16 & 94.61 & 71.84 & 1.82 & 16.05\\ SHF & 8 & 95.46 & 71.48 & 1.94 & 15.55\\ SHF & 4 & 95.85 & 73.69 & 1.94 & 14.95\\ \hline NP & 16 & 95.76 & 73.26 & 2.12 & 15.64\\ NP & 8 & 95.66 & 72.64 & 1.88 & 14.57\\ NP & 4 & 95.13 & 71.79 & 2.02 & 16.0\\ \hline FFX & 16 & 45.42 & 28.97 & 3.16 & 15.87\\ FFX & 8 & 47.16 & 29.94 & 3.08 & 14.22\\ FFX & 4 & 45.84 & 29.09 & 3.03 & 13.86\\ \end{tabular} \label{table5} \end{table*} \section{Conclusion} In this paper, we investigated the adversarial transferability between models including ConvMixer.
To objectively verify the transferability, the four attack methods used in AutoAttack were used to generate AEs from a source model. In the experiments, the use of ConvMixer was confirmed to reduce the influence of adversarial transferability between models, including ViT. However, the use of encrypted ConvMixer models could not further reduce the influence of adversarial transferability. Investigating how to avoid the influence of adversarial transferability is left for future work. \section*{Acknowledgment} This research was partially supported by JST CREST (Grant Number JPMJCR20D3) and ROIS NII Open Collaborative Research 2022-(22S1401). \bibliographystyle{ieicetr}
{ "timestamp": "2022-09-20T02:23:44", "yymm": "2209", "arxiv_id": "2209.08724", "language": "en", "url": "https://arxiv.org/abs/2209.08724" }
\section{Introduction} Current best practice in recommender systems evaluation, both in academic and industrial settings, is based on using real datasets to compute \emph{offline} metrics that are proxies to \emph{online} performance. In academic work, these metrics are used to validate the effectiveness of newly proposed approaches in the literature. In industry, if a sufficient number of offline metrics are promising, the method is further tested in online experiments such as A/B-tests. In this paper we argue: \emph{this is a sub-optimal methodology}. The proxies that we use are too poorly correlated with online performance to give a reasonable measurement of how likely a candidate model is to succeed in an online experiment. We propose an alternative methodology where, instead of using offline proxy metrics, we simulate user behaviour and evaluate whether the recommender system is able to generate \emph{good} recommendations on simulated timelines. The advantage of simulations is that we can measure estimates of actual \emph{online} reward, instead of needing to resort to offline proxies with their widely reported flaws. The dilemma at the heart of this issue is: \begin{enumerate*} \item should we use proxies that are quite poorly correlated with actual performance and consequently suffer from Goodhart's law: ``Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes'', or \item should we use simulated timelines that allow actual calculation of reward but, by virtue of being simulations, are not representative of a real system (although possibly informed by it). \end{enumerate*} In this paper we make the case in favor of simulation. In Section 2, we discuss the disadvantages of proxy metrics such as Recall@K and click-rank, and how they can be poor proxies for actual performance. In Section 3, we discuss why reward-optimizing recommendation using inverse propensity scoring (IPS) is not viable in practice. In Section 4, we argue that the ability of a recommender system to optimize the reward in a simulation environment such as RecoGym or RecSim \citep{rohde2018recogym,ie2019recsim} is a more compelling indication that it will perform well in production. Section 5 concludes our argument. \section{Proxy methods are not trustworthy indicators of performance} Over the past decade or so, the focus of recommendation research has moved on from the classical \emph{rating prediction} task to that of \emph{next-item prediction}~\cite{Steck2013}. Here, the goal of the recommender is to complement sequences of observed organic user-item interactions with other items that the user may find relevant. Following classical paradigms in the evaluation of supervised learning methods, certain items are obfuscated to make up the ground truth of the test set. Information Retrieval-inspired metrics such as Recall, Normalised Discounted Cumulative Gain, Mean Average Precision and others are then computed on top-$K$ lists of generated recommendations, and used to evaluate which systems are better at ranking the obfuscated items higher than others. Such metrics effectively assume an idealized list of recommendations produced with a heuristic \cite{Benhalloum21}. Time and again, research has shown that this type of evaluation procedure does not yield results that are sufficiently correlated with the results of randomised control trials (which we see as the gold standard)~\cite{Garcin2014,Rossetti2016,Jeunen2019_DS}.
While these methods are powerful and have fostered impressive research progress over the years, they remain a compromise. This mismatch between offline and online evaluation in turn leads to a rift between academia and industry, which increasingly rely on these two respective alternatives. Several reasons can be cited for this disparity, such as temporal constraints in the data not being adhered to~\cite{Jeunen2018}, data leakage~\cite{ji2021critical}, or a lack of clearly defined best practices~\cite{Dacrema2019}. Additionally, we argue that these offline metrics do not measure the same signal as the online metrics that we care about: they are but proxies~\cite{Jeunen2019REVEAL_EVAL}. Indeed, this setting still does not accurately reflect the recommendation use-case practitioners face in industry. In practice, we care about some notion of \emph{reward} that we wish to maximise. The probability that a recommendation yields some reward is a function that maps a \emph{user timeline} and a list (or slate) of recommendations to a reward. Optimising this directly is usually not viable in real-world systems, because it isn't possible to accurately measure the reward function. Not only do proxy methods correlate poorly with performance; they do not even have the same support. If our online evaluation metrics (such as click-through rate (CTR)) rely on interventions (shown recommendations), our offline metrics should equally make use of interventional data. Moving away from proxies, we can then adopt \emph{offline} counterfactual estimators for these \emph{online} metrics, and make meaningful assertions about the projected performance of a system. Nevertheless, counterfactual estimation methods are no silver bullet, and have several problems that are difficult to fully mitigate, as we discuss in the following section. \section{Inverse Propensity Scoring Methods are not viable in production} Another approach that has been popular academically but has had relatively little impact in production systems is to use inverse propensity scoring (IPS) \cite{bottou2013counterfactual} to design unbiased estimators of the reward, which we denote by $\hat{V}_n(\cdot)$. Precisely, we have access to logged data $\mathcal{D}_n = \{ x_i, a_i, r_i\,, \ i=1, \ldots, n\}$ collected by a logging policy $\pi_0$, where samples $(x_i, a_i, r_i)$ are drawn independently as $(x, a, r) \sim \nu(x) \pi_0(a \mid x) p(r \mid x, a)$. Here $\nu(\cdot)$ is a distribution over the contexts and $p(\cdot \mid x, a)$ is the distribution of the reward given $x$ and $a$. IPS-based methods remove the preference bias of the logging policy $\pi_0$ in the logged data $\mathcal{D}_n$ by re-weighting samples using the discrepancy between the target policy $\pi$ and the logging policy $\pi_0$. A typical estimator $\hat{V}_n(\cdot)$ has the form \[ \hat{V}_n(\pi) = \frac{1}{n} \sum_{i=1}^n r_i \frac{\pi(a_i|x_i)}{\pi_0(a_i|x_i)}. \] The target policy is often parametrized as $\pi_\beta$, and can be optimized to place high mass on historical actions that resulted in positive reward. Now, consider the following ad-placement example where the reward is a click indicator. Here, we assume that we have $10^6 + 1$ items to recommend and that the context $x$ is discrete with $10^3$ equally probable states. We also assume that the logging policy is epsilon-greedy with $\epsilon=0.01$, and that we have a large dataset with $n=10^9$. Finally, assume that the best action has a CTR of 2\% but a poor action has a CTR of 1\%.
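Before walking through the numbers, here is a minimal sketch of the vanilla IPS estimator $\hat{V}_n(\pi)$ defined above; the function and argument names are illustrative.

\begin{verbatim}
import numpy as np

def ips_estimate(rewards, target_probs, logging_probs):
    """Vanilla IPS estimate of a target policy's value.
    rewards[i]       : observed reward r_i for logged action a_i
    target_probs[i]  : pi(a_i | x_i) under the policy being evaluated
    logging_probs[i] : pi_0(a_i | x_i) under the logging policy"""
    w = np.asarray(target_probs) / np.asarray(logging_probs)
    return float(np.mean(np.asarray(rewards) * w))
\end{verbatim}

A deterministic target policy places all its mass on actions that were rarely logged, so its importance weights $\pi/\pi_0$ can be enormous and a single lucky click can dominate the estimate, as the worked example below shows.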
For a particular context $x$, we have observed $10^6$ impressions. Of those, $10^4$ will be used for exploration (the rest exploit the best arm as estimated by the logging policy). The $10^4$ exploration steps will be distributed over the $10^6$ other actions. Even if all of these actions have a CTR of $1$\%, we will then see around $100$ clicks on actions that we have usually only tried a single time. Since each of these actions is explored only about $10^4/10^6 = 0.01$ times on average, the IPS estimator will assign an action that was clicked on its single trial an illegal CTR of $\frac{1}{0.01} = 100$, i.e.\ $10\,000\%$ -- a CTR much greater than $1$. IPS extensions like weight capping~\cite{Gilotte_2018} might be implemented in order to make the estimate legal, but the capping parameter will dictate whether estimates based on one click out of one impression are deemed to be better than the existing production system. We have also made the simplifying assumption that a single recommendation is delivered at a time, which is often unrealistic in real-world systems. When dealing with lists (or slates) of items $a_1, \ldots, a_K$, an action space of size $10^6 + 1$ is in fact extremely small. IPS-based methods do not handle these settings well, as they are based upon counts of exact matches of contexts and actions. In order to alleviate this sort of problem, \citeauthor{chen2019top} make the assumption that only one recommendation has ongoing impact \cite{chen2019top}. This is functionally equivalent to administering multiple drugs to a patient, taking a measurement, and on the basis of this measurement ignoring all but one of the administered drugs when attributing effects. In short, it violates the protocols of randomised control trials in a very egregious way. This isn't to say that the assumption cannot be useful, but it is simply a departure from reward-optimizing recommendation. For other similar approaches in a slate setting, see \cite{li2018offline, swaminathan2017offpolicy, McInerney2020}. \citeauthor{Gilotte_2018} provide an excellent survey of variance issues with IPS methods in the context of recommendation \cite{Gilotte_2018}. It is important to note that it is \emph{impossible} to borrow strength or reduce estimation variance by restricting the parametric form of $\pi(a_1,...,a_K|x)$. Borrowing strength amounts to assuming that $p(r|a_1,...,a_K,x)$ is correlated with $p(r|a_1',...,a_K',x')$ if $a_1,...,a_K$ is close to $a_1',...,a_K'$ and $x$ is close to $x'$. Implementing these assumptions is possible using slate models \cite{Aouali21}, and the fundamental distances of bandit recommendation \cite{Sakhi2020}. In contrast, restricting $\pi(a_1,...,a_K|x)$ does nothing to reduce the variance of the reward of a given distribution within $\pi$, as only exact coincidences between $a_1,...,a_K$ and $x$ can be used. \section{Simulation offers a partial path to reward optimizing recommendation} We have argued so far that it isn't feasible to use proxy offline metrics to measure the performance of real-world systems. Our experience is that, although these are widely used for \emph{candidate} generation of promising recommendation models, they are not fully trusted. In academic research, these offline metrics are often the only available tool for experimental validation, but several recent works have cast doubt on their utility, and there is a growing consensus that they should not be taken at face value. In many industrial settings, the output of these models is manually validated by an editorial team before they are progressed to online tests.
In practice, a chosen candidate model may not outperform on the offline metrics but is put forward anyway because it appears to produce sensible output. In other words, Goodhart's law is assumed to be operating. We further argued that IPS-style counterfactual estimators, which -- in contrast to proxies -- should be correlated with reward, are of little use in practical settings. They have been a source of intense academic study, but have had little broad practical impact. We now introduce the possibility of simulation-based evaluation as an alternative. At the outset, we need to state that simulation has an obvious downside -- it is likely to differ from the real world -- but we also need to keep in mind the downsides of proxy-based and counterfactual-based approaches. There are two broad ways we could use simulation to validate recommender systems: \begin{enumerate*} \item Seed the simulator many times and compute actual performance metrics by averaging over the many simulation runs (see the sketch at the end of this section). This sort of simulation has the form of a basic sanity check: was the system able to exploit a recommendation signal, did the learning method manage to infer a sensible model of the data-generating process? It can be viewed as a test in software engineering, or as measuring an estimator's performance in statistics. If good performance is obtained, the recommender system is trained on the actual logs after validation in the simulation environment. \item Use a simulator that is informed by past recommendations and a modelling approach. The simulator will use modelling to answer as accurately as possible the expected reward of new recommendations. In this case, when a good recommender system is discovered, it will be deployed directly. At its most idealistic, this approach is compatible with Bayesian decision theory. \end{enumerate*} Both approaches may have value; as stated, (2) is likely too idealistic, but going beyond (1) will likely have further value. Adopting either of these approaches consistently would require a paradigm shift in the recommender systems community, and it is speculative to say at this stage whether it will work. A transition from metrics to simulation could likely only be achieved gradually, perhaps after studies comparing simulation and offline metrics as candidate generators, with A/B-tests as the final arbiter. An interesting counterargument to the use of simulation methods is that a good simulator requires an understanding of user behaviour that is just as complex as the recommender system itself. With the first approach, we depart from the strict view that the simulator must be an extremely accurate representation of a real-world system. In contrast, we view the simulator as a data-generating process, and we wish to measure which learning method can model that data-generating process well enough to accumulate a higher reward in future interactions. Even though the data-generating process is not exactly the same as the one in the natural world, we can say with some confidence that the performance of learning methods might transfer from one to the other. Using the second approach, we must answer: \emph{how do you know the simulator is good enough?} This is different from the usual view of performance that is adopted in machine learning and recommender systems: we are not measuring performance, but rather fidelity. Ideas such as posterior predictive tests may help \cite{gelman1995bayesian}.
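To make approach (1) concrete, here is a minimal sketch of re-seeding a simulator many times and averaging the accumulated reward; the simulator and agent interfaces below are hypothetical stand-ins for environments such as RecoGym or RecSim, not their actual APIs.

\begin{verbatim}
import numpy as np

def average_reward(make_simulator, agent, n_seeds=100, horizon=1000):
    """Approach (1): estimate online reward by averaging accumulated
    reward over many independently seeded simulated timelines."""
    totals = []
    for seed in range(n_seeds):
        sim = make_simulator(seed)          # fresh, independently seeded timeline
        obs = sim.reset()
        total = 0.0
        for _ in range(horizon):
            action = agent.act(obs)         # recommend an item or slate
            obs, reward = sim.step(action)  # simulated user feedback
            total += reward
        totals.append(total)
    return float(np.mean(totals))           # estimate of actual online reward
\end{verbatim}

The point is that the quantity being averaged is the reward itself, not a proxy; the open question is how well performance under the simulated data-generating process transfers to the real one.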
In academia, the case for simulation is both more straightforward and more difficult. Simulation studies now have a track record of showing the viability of recommendation algorithms in ways that go beyond what is possible with offline datasets \cite{Mykhaylov2019CausalML,Jeunen2019REVEAL,Sakhi2020,JeunenKDD2020,Jeunen2020REVEAL,Jeunen2021,Jeunen2021B,Bendada2020}. On the other hand, the inertia created by the \emph{MovieLens} tradition in recommender systems will be displaced only very slowly, and even though offline metrics are flawed, they have clearly shown their value in the past decades of research progress. \section{Conclusion: the case for simulation} A recommender system usually consists of a relatively simple piece of engineering -- a personalized ranker of items. This simple piece of engineering interacts with a complex world of users, who sometimes interact independently of the recommender system and at other times receive sequences of slates of recommendations. Furthermore, the owner of the recommender system may have quite complex goals: for example, to encourage long-term engagement of users or to drive sales. The current state-of-the-art approach is to dodge this complexity and formulate a machine learning problem as a distance between items and users (with a user usually summarized as a sequence of items), and sometimes simply as a distance between items. Both of these approaches massively simplify what the system does and raise technical questions that are arguably impossible to answer. When we formulate the problem in these terms, we limit how we see the world by adopting the limitations of implementable decision rules. While producing accurate simulations is obviously fraught, simulation remains one of the few ways of attempting to do true reward-optimizing recommendation. Whether this approach can really replace the massive reductionism of item-user or item-item distances is unclear at this stage, but it must be noted that simulation is one of the very few ways we have available to tackle the central open question of recommender systems: \emph{how can we build true reward-optimizing recommender systems?} \bibliographystyle{ACM-Reference-Format}
{ "timestamp": "2022-09-20T02:21:15", "yymm": "2209", "arxiv_id": "2209.08642", "language": "en", "url": "https://arxiv.org/abs/2209.08642" }
\section{Using Uncertainty for Next-Best View Selection and Model Refinement} \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{fig/explore_avg.pdf} \caption{A NeRF ensemble can select informative next-best training views. Starting from 5 highly similar views, we incrementally add new views, chosen at random or based on the ensemble uncertainty.} \label{fig:exploration} \end{figure} \begin{figure*}[tb] \centering \includegraphics[width=0.32\linewidth]{fig/lego_0.jpg} \includegraphics[width=0.32\linewidth]{fig/lego_5.jpg} \includegraphics[width=0.32\linewidth]{fig/lego_10.jpg} \caption{Next-best view selection using ensemble uncertainty. Left: 5 initial and very similar training views lead to high model error. Centre: after selecting 5 views, the model error is significantly reduced. Right: after adding another 5 views to the training set.} \label{fig:exploration-qualitative} \end{figure*} We now describe an experiment that utilises the ensemble uncertainty for next-best view selection and model refinement. \noindent\textbf{Datasets:} We use five synthetic scenes (\emph{Lego, Hotdog, Ficus, Drums, Microphone}) from the dataset published alongside the original NeRF paper~\cite{mildenhall2021nerf}. In these scenes, the cameras of the training set are positioned in a semi-sphere around the object, facing the object centre. From the training dataset of every scene, we randomly select 5 images from very similar viewpoints for the initial training, and keep the remaining images as candidate views from which to select the next-best (most informative) training image. The test views are independent of the training data and are used to evaluate the quality of the scene reconstruction, using the PSNR (Peak Signal-to-Noise Ratio) metric, as is standard in the NeRF literature~\cite{mildenhall2021nerf, shen2021stochastic}. \noindent\textbf{Method, Metric and Baseline:} Starting with the initial training set of 5 views, we train an ensemble of $M=5$ NeRFs for 2,000 steps each. We then calculate the predictive uncertainty for each of the \emph{remaining} unused candidate views, and select the view with the maximum uncertainty as the next view to be added to the training data. We then retrain the NeRF ensemble and iterate this process (see Fig.~\ref{fig:exploration-qualitative}). As a simple baseline method, we select the next-best view at random. After every training iteration, we evaluate the model quality on the independent test dataset. Comparing the rendered test views with the ground-truth images, we calculate the average PSNR over all test images. We also identify the lowest PSNR to measure the worst-case performance, i.e. the test view with the highest error compared to the ground truth. To make the PSNR comparable across all scenes, we re-scale it so that the average PSNR at the beginning of our evaluation loop (i.e. using only the initial 5 views) is 0, and the best average PSNR across all iterations (usually that of the final iteration, which has access to the most training images) is 1. We can then plot the average and worst-case performance of both next-best view selection strategies (uncertainty-based and random) in Fig.~\ref{fig:exploration}. \noindent\textbf{Results:} The uncertainty measured by our ensembling approach is an effective way of selecting the next-best view to add to the training dataset. As we can see in Fig.~\ref{fig:exploration}, the average-case performance is better compared to the random baseline.
Although both methods eventually converge to the same performance as more images are used for training, using uncertainty for view selection achieves a higher gain in quality per training image by selecting more informative views. Similarly, the worst-case performance is significantly improved when selecting views using our quantified uncertainty. This indicates that our density-aware ensemble uncertainty is an effective proxy for the ground-truth error -- by selecting the view with the highest uncertainty, we effectively choose to add the view that causes a high error to the training set for the next iteration. These results are significant for robotics and active vision, where gathering exhaustive training data is too expensive or time-consuming, and a trade-off between representation quality and the number of training views (or the time required to gather those views) has to be considered. \section{Introduction} Neural Radiance Fields~\cite{mildenhall2021nerf} (NeRFs) implicitly represent the geometry and appearance of complex 3D scenes as a continuous function that is implemented as a relatively simple deep neural network. NeRFs and other implicit representations were met with immense interest in the past two years, leading to an often-quoted ``explosion'' of work in this area~\cite{dellaert2020neural}. While most of this work has been conducted by the computer graphics and vision communities, researchers in robotics have quickly started to explore possible use cases of NeRFs for important robotics tasks such as navigation~\cite{adamkiewicz2022navigation}, SLAM~\cite{sucar2021imap,zhu2022nice}, or manipulation~\cite{li2022visumotor, yen2022nerf, ichnowski2021dex}. Since the appearance of highly efficient NeRFs, such as Instant-NGP~\cite{mueller2022instant}, that can be trained in seconds rather than many hours, the adoption of NeRFs for robotics has become palpable. NeRFs provide an interesting new take on the long-standing problem of how to represent the 3D world for robotics, with all its geometric and semantic complexity~\cite{cadena2016past,rosen2021advances,garg2020semantics}. However, NeRFs face the same challenges as other deep learning approaches in robotics~\cite{sunderhauf2018limits} -- they lack the ability to express uncertainty in their predictions. The incorporation of uncertainty and of the generally probabilistic nature of data, estimates, and predictions is well established in large parts of the robotics literature~\cite{thrun2002probabilistic}, yet it is still a highly active area of research in deep learning~\cite{abdar2021review}. Approaches range from principled Bayesian Deep Learning~\cite{mackay1992practical, neal2012bayesian} to simple but surprisingly effective approximate methods such as MC Dropout~\cite{gal2016dropout} or Deep Ensembles~\cite{lakshminarayanan2017simple}. Despite the large body of work on NeRFs, little research has investigated adopting the above methods to quantify the predictive uncertainty of NeRFs. \begin{figure}[t] \centering \includegraphics[width=0.49\linewidth]{fig/hero_trex_mean.jpg} \includegraphics[width=0.49\linewidth]{fig/hero_trex_std.jpg} \includegraphics[width=0.49\linewidth]{fig/hero_trex_error.jpg} \includegraphics[width=0.49\linewidth]{fig/hero_trex_ours.jpg} \caption{An ensemble of Neural Radiance Fields can render a mean RGB image and use the colour variance in pixel space to quantify its predictive uncertainty (top).
However, this naive approach to ensembling often does not capture the epistemic uncertainty in parts of a scene that were \emph{unobserved} during training. In our example, the ensemble agrees to render the unobserved bottom portion of the images in black, resulting in negligible uncertainty, despite the high error (bottom left) when compared to the ground truth. We show that an epistemic uncertainty term that captures the termination probabilities along each ray must be considered in addition to the RGB variance to make ensembling an effective approach to quantifying uncertainty in Neural Radiance Fields (bottom right). } \label{fig:hero} \end{figure} Our paper addresses this gap. Specifically, we show that \emph{ensembling}~\cite{lakshminarayanan2017simple} is an effective approach to quantify uncertainty in Neural Radiance Fields and that prior work has dismissed the ensembles approach prematurely~\cite{shen2021stochastic, shen2022conditional}. Our key insight is that instead of only averaging and calculating variance in the RGB space, a NeRF ensemble should consider an additional epistemic uncertainty~\cite{kendall2017uncertainties} term that depends on the densities and termination probabilities along individual rays. We show that such a \emph{density-aware} ensemble of Instant-NGP NeRFs~\cite{mueller2022instant} achieves new state-of-the-art performance in terms of uncertainty quality, and can furthermore effectively select the next-best view when iteratively building a training dataset and refining an implicit object model. \section{Related Work} \subsection{Neural Radiance Fields} Neural Radiance Fields~\cite{mildenhall2021nerf} learn a radiance and density field that -- in combination with a volumetric rendering process -- can explain a set of posed training images and realistically render novel views. Many variations of the original NeRF have emerged~\cite{xie2022neural}, including follow-up works~\cite{tancik2020fourfeat, Chen2022ECCV, yu_and_fridovichkeil2021plenoxels, barron2021mipnerf, mueller2022instant} that highlight the impact of the input encoding on training and inference speed. Instant-NGP~\cite{mueller2022instant} is one of the fastest and most optimised formulations, using a parametric encoding alongside a smaller MLP. It trains in seconds and renders at real-time rates, making it a great candidate for adoption in robotics, and thus is the backbone chosen for our work. NeRFs provide an exciting new approach to scene representation in robotics, and a variety of use cases are currently explored. For example, NiceSLAM~\cite{zhu2022nice} and iMap~\cite{sucar2021imap} use a NeRF to represent a map within a SLAM system, while~\cite{22-driess-NeRF-RL-preprint, li2022visumotor} use it as a decoder within an autoencoder framework to learn an alternative representation for planning and reinforcement learning, and~\cite{yen2022nerf, ichnowski2021dex} use NeRFs for data augmentation. \subsection{Uncertainty Quantification in Deep Learning} Uncertainty Quantification in Deep Learning is a rapidly growing field and we refer the reader to~\cite{abdar2021review} for a recent review. Kendall et al.~\cite{kendall2017uncertainties} identified two relevant types of uncertainty for computer vision -- aleatoric uncertainty and epistemic uncertainty. Aleatoric uncertainty refers to uncertainty in the input data, introduced by noise or random processes~\cite{kendall2017uncertainties}. 
In contrast, epistemic uncertainty refers to uncertainty in the model's learnt parameters, representing the model's lack of knowledge due to a finite training dataset \cite{kendall2017uncertainties}. Deep Ensembles is a popular approach for uncertainty quantification, producing predictive uncertainty that captures both aleatoric and epistemic uncertainty~\cite{lakshminarayanan2017simple}. \subsection{Uncertainty Quantification for NeRFs} Stochastic NeRF (S-NeRF)~\cite{shen2021stochastic} is one of the few papers investigating uncertainty quantification for NeRFs. It reformulates the NeRF optimisation as a Bayesian estimation problem, and applies a Variational Inference approach to effectively estimate the posterior distribution over the parameters of all possible radiance fields given the observed training data. The mean and variance for each rendered pixel can be calculated by sampling from the approximated posterior distribution over all radiance fields. The variance in pixel space is then used as the uncertainty measure for a particular pixel. Conditional-Flow NeRF (CF-NeRF)~\cite{shen2022conditional} by the same authors builds on this work and relaxes some of S-NeRF's constraints on the involved distributions, especially the independence assumption between radiance and density. Both methods~\cite{shen2021stochastic,shen2022conditional} involve a complex reformulation of the NeRF architecture, rendering process, and training regime, limiting the applicability of the proposed approaches to other NeRF implementations such as Instant-NGP~\cite{mueller2022instant}. In contrast, the ensembles approach we propose here is extremely simple to implement, as it does not require any changes to the underlying NeRF architecture. While the research field around implicit representations continues to evolve rapidly and better or faster NeRF formulations are regularly proposed, we see this ease of adaptation as a main advantage of our method. \input{uncertainty.tex} \input{exploration.tex} \section{Discussion, Conclusions, and Future Work} We have shown that the key to making ensembling an effective approach for uncertainty quantification in NeRFs is to combine the simple RGB variance with an additional epistemic uncertainty term that is informed by the predicted densities along each individual ray. A major advantage of our method is the simplicity of adapting ensembling strategies to newly emerging variants of NeRFs. This allows us to leverage progress regarding training time, required training views, representational power, or rendering speed, while maintaining the ability to readily quantify predictive uncertainty in emerging NeRFs in the future. NeRF ensembling approaches are already computationally feasible after the appearance of hash-encoding NeRFs~\cite{mueller2022instant} that can train in mere seconds (with the option of training all ensemble members in parallel) and render in real time. Our qualitative results (Fig.~\ref{fig:rays}) suggest that there is no strict distinction between aleatoric and epistemic~\cite{kendall2017uncertainties} uncertainty in our uncertainty measure. Being able to quantify both separately is worthwhile future work, since in some applications (e.g. next-best view selection) epistemic uncertainty is more informative than aleatoric uncertainty. Another interesting direction for future work is to use our density-aware ensemble to guide the exploration and navigation of a mobile robot through an unknown scene, while mapping it with the NeRF.
While we have shown the uncertainty to be effective in selecting the next-best view from a set of candidate views, we are interested in extracting the gradient of the uncertainty measure with respect to the camera pose, and planning or controlling a trajectory for scene exploration based on this local gradient information. \addtolength{\textheight}{-8cm} \pagebreak \bibliographystyle{IEEEtran} \section{A Review of the NeRF Rendering Equation} We start by revisiting the foundational NeRF rendering equation from Mildenhall et al.~\cite{mildenhall2021nerf}. Given a ray $\vect{r}(t) = \vect{o} + t\vect{d}$ with origin point $\vect{o}$ and direction $\vect{d}$, the expected colour $C(\vect{r})$ of camera ray $\vect{r}(t)$ with near and far bounds $t_n$ and $t_f$ is: \begin{equation} C(\vect{r}) = \int_{t_n}^{t_f} T(t) \cdot \rho(\vect{r}_{(t)}) \cdot \vect{c}(\vect{r}_{(t)}, \vect{d}) \;dt. \end{equation} This integral has three components: $T(t)$ is the accumulated transmittance along the ray from $t_n$ to $t$, i.e. the probability that the ray travels from $t_n$ to $t$ without hitting another particle. It describes how much light is attenuated as it travels back to the camera. It is calculated as: \begin{equation} T(t) = \exp \left( -\int_{t_n}^{t}\rho(\vect{r}_{(s)})\;ds \right). \end{equation} The density $\rho(\vect{r}_{(t)})$ denotes the differential probability of a ray interacting with the volumetric medium of the scene at a particular point; note that $\rho(\vect{r}_{(t)})$ is not a probability, but a density that can take on values greater than 1. Lastly, $\vect{c}(\vect{r}_{(t)}, \vect{d})$ denotes the predicted colour at the point $\vect{r}_{(t)}$ seen from direction $\vect{d}$. In practice these integrals are approximated via the quadrature rule, using sums and stratified sampling from $N$ evenly spaced bins to make the rendering process tractable: \begin{equation} C(\vect{r}) = \sum_{i=1}^N T_i \cdot \big(1-\exp(-\rho_i\delta_i)\big) \cdot \vect{c}_i, \label{eq:render_sum} \end{equation} where $1-\exp(-\rho_i\delta_i)$ is the probability of the $i$-th bin being occupied. As before, $T_i$ is the transmittance, this time calculated using a discrete sum instead of a continuous integral: \begin{equation} T_i = \exp\left(-\sum_{j=1}^{i-1}\rho_j\delta_j\right), \end{equation} where $\delta_i = t_{i+1} - t_i$ is the distance between adjacent samples. The complete rendering equation for a single ray is therefore given as: \begin{equation} C(\vect{r}) = \sum_{i=1}^N \exp\left(-\sum_{j=1}^{i-1}\rho_j\delta_j\right) \cdot \big(1-\exp(-\rho_i\delta_i)\big) \cdot \vect{c}_i. \end{equation} Using the identity $\exp(a+b) = \exp(a) \cdot \exp(b)$, we can rewrite the transmittance as a product: \begin{equation} \label{eq:transmittance_prod} T_i = \prod_{j=1}^{i-1} \exp(-\rho_j\delta_j).
\end{equation} Combining equations (\ref{eq:transmittance_prod}) and (\ref{eq:render_sum}), we can express the full rendering equation for a single ray as \begin{equation} C(\vect{r}) = \sum_{i=1}^N \prod_{j=1}^{i-1} \exp(-\rho_j\delta_j) \cdot \big(1-\exp(-\rho_i\delta_i)\big) \cdot \vect{c}_i. \end{equation} By writing the occupancy probability $1-\exp(-\rho_i\delta_i)$ as $o_i$, this becomes the convenient expression \begin{equation} C(\vect{r}) = \sum_{i=1}^N \overbrace{ \underbrace{\prod_{j=1}^{i-1} (1-o_j)}_\text{transmittance} \cdot \underbrace{o_i}_\text{occupancy} }^\text{termination probability} \cdot \overbrace{\vect{c}_i}^\text{colour} \label{eq:render_full} \end{equation} \section{Density-Aware NeRF Ensembles} \subsection{Preliminaries} A NeRF is a parametric function $f_\vectg{\theta}(\vect{x}, \vect{d}): \mathbb{R}^3 \times \mathbb{R}^2 \rightarrow \mathbb{R}^4$, implemented as a deep neural network with parameters $\vectg{\theta}$, that encodes the density $\rho$ and colour $\vect{c}$ of all point-direction pairs $(\vect{x}, \vect{d})$ in a scene. These density and colour predictions can be used to render a new view of the scene represented by the NeRF through the following process. Given a ray $\vect{r}(t) = \vect{o} + t\vect{d}$ with origin point $\vect{o}$ and direction vector $\vect{d}$, the expected colour $C(\vect{r})$ of camera ray $\vect{r}(t)$ with near and far bounds $t_n$ and $t_f$ is \begin{equation} C(\vect{r}) = \int_{t_n}^{t_f} T(t) \cdot \rho(\vect{r}_{(t)}) \cdot \vect{c}(\vect{r}_{(t)}, \vect{d}) \;dt. \end{equation} In practice this integral is approximated via the quadrature rule, using discrete sums and stratified sampling from $N$ evenly spaced bins to make the rendering process tractable. Through a series of simplifications that are not relevant for the remainder of the paper (we refer the interested reader to~\cite{mildenhall2021nerf}), this becomes the convenient expression \begin{equation} C(\vect{r}) = \sum_{i=1}^N \overbrace{ \underbrace{\prod_{j=1}^{i-1} (1-o_j)}_\text{transmittance} \cdot \underbrace{o_i}_\text{occupancy} }^\text{termination probability} \cdot \overbrace{\vect{c}_i}^\text{colour} \label{eq:render} \end{equation} Given a neural radiance field $f_\vectg{\theta}$ and equation (\ref{eq:render}), we can render an image $\mathcal{I}$ by evaluating $C(\vect{r})$ for all rays $\vect{r}$ that pass through the camera centre and the image plane. In the following, we use the notation $c_\vectg{\theta}(\vect{r})$ to indicate the predicted colour along ray $\vect{r}$ using the rendering process of (\ref{eq:render}) and the NeRF $f_\vectg{\theta}$. \subsection{NeRF Ensembles for Predictive RGB Uncertainty} \label{sec:ensemble} Following the Deep Ensembles approach~\cite{lakshminarayanan2017simple}, we propose to quantify the predictive uncertainty of a NeRF by training an \emph{ensemble} of networks $\{f_{\vectg{\theta}_k}\}_{k=1...M}$. The $M$ ensemble members are initialised with different parameters $\vectg{\theta}^{(0)}_k$, but trained on the same data. By interpreting the ensemble as a uniformly-weighted mixture model, the members' predictions are combined through averaging, and the predictive uncertainty is expressed as the variance over the individual member predictions. With an ensemble of NeRFs, the expected colour of ray $\vect{r}$ in a scene is \begin{equation} \vectg{\mu}_\text{RGB}(\vect{r}) = \frac{1}{M}\sum_{k=1}^M c_{\vectg{\theta}_k}(\vect{r}).
\end{equation} The predictive uncertainty can be expressed as the variance over the individual member predictions: \begin{equation} \vectg{\sigma}_\text{RGB}^2(\vect{r}) = \frac{1}{M}\sum_{k=1}^M \left(\vectg{\mu}_\text{RGB}(\vect{r}) - c_{\vectg{\theta}_k}(\vect{r})\right)^2. \end{equation} $\vectg{\mu}_\text{RGB}$ and $\vectg{\sigma}_\text{RGB}^2$ can be calculated very easily by rendering the $M$ individual RGB images $\mathcal{I}_i$ and calculating the mean and variance directly in pixel space. Both will be 3-vectors over the RGB colour channels, i.e.\ we do not consider the covariance \emph{between} colour channels. We combine the variances of the colour channels into a single variance by taking the average over the three channels: \begin{equation} \bar\sigma^2_\text{RGB}(\vect{r}) = \frac{1}{3} \cdot \sum_{c \in \{RGB\}} \vectg{\sigma}^2_{\text{RGB}, (c)}(\vect{r}), \label{eq:sigma_rgb} \end{equation} where $\vectg{\sigma}^2_{\text{RGB}, (c)}(\vect{r})$ indicates the variance associated with colour channel $c$. \subsection{Limitations of Simple Ensembling} Ensembling NeRFs and using the variance in RGB space to quantify the predictive uncertainty (i.e. aleatoric and epistemic) is a simple and partially effective method. However, this approach can fail to capture the model's epistemic uncertainty arising from parts of the scene that have \emph{not} been observed during training. Fig.~\ref{fig:hero} illustrates an instructive example: when rendering a novel view that exposes parts of the floor, the NeRF is forced to render areas of the scene that have never been observed during training. Although one would expect the NeRF ensemble to express high uncertainty in these image regions, all ensemble members agree to render this area in black, resulting in negligible variance in colour space $\bar\sigma^2_\text{RGB}$. \subsection{Density-Aware Ensembles to Capture Epistemic Uncertainty in Unseen Areas} A closer inspection of the ensemble predictions along a ray $\vect{r}$ in previously unobserved regions reveals that the individual NeRFs assign a low termination probability to all sample points on $\vect{r}$. The sum of the termination probabilities along the ray is close to zero (see Fig.~\ref{fig:rays} (left)), indicating the model does not assign belief to the hypothesis that the ray intersects the scene geometry within the near and far rendering bounds $t_n$ and $t_f$. In short, writing the sum of the termination probabilities along a ray $\vect{r}$ as $q_{\vectg{\theta}_k}(\vect{r})$, we observe \begin{equation} q_{\vectg{\theta}_k}(\vect{r}) = \sum_{i=1}^N \overbrace{ \underbrace{\prod_{j=1}^{i-1} (1-o_j)}_\text{transmittance} \cdot \underbrace{o_i}_\text{occupancy} }^\text{termination probability at sample $i$} \approx 0. \label{eq:sum_termination} \end{equation} The average summed termination probability along ray $\vect{r}$ across the ensemble is then given by \begin{equation} \bar{q}(\vect{r}) = \frac{1}{M}\sum_{k=1}^M q_{\vectg{\theta}_k}(\vect{r}), \end{equation} where $\bar{q}(\vect{r})\approx 1$ for rays that intersect the scene structure observed during training, and $\bar{q}(\vect{r})\approx 0$ otherwise. We interpret this as an expression of some of the model's \emph{epistemic} uncertainty, arising from a fundamental lack of knowledge about the scene geometry and appearance along $\vect{r}$.
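These per-ray ensemble statistics are cheap to compute once the $M$ members have been rendered. The following is a minimal sketch with illustrative array shapes ($M$ members, $R$ rays, $N$ samples per ray); it is not our actual implementation.

\begin{verbatim}
import numpy as np

def per_ray_statistics(rgb, term_prob):
    """rgb:       (M, R, 3) member colours c_k(r) per ray
    term_prob: (M, R, N) termination probabilities per sample
    Returns the ensemble mean colour, the channel-averaged RGB
    variance, and the average summed termination probability."""
    mu_rgb = rgb.mean(axis=0)                     # (R, 3) ensemble mean
    var_rgb = ((rgb - mu_rgb) ** 2).mean(axis=0)  # (R, 3) per-channel variance
    sigma2_rgb = var_rgb.mean(axis=-1)            # (R,)  average over channels
    q = term_prob.sum(axis=-1)                    # (M, R) q_k(r) per member
    q_bar = q.mean(axis=0)                        # (R,)  ~1 observed, ~0 unseen
    return mu_rgb, sigma2_rgb, q_bar
\end{verbatim}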
To capture this uncertainty, we introduce an additional density-aware epistemic term $\sigma^2_\text{epi}(\vect{r})$ that we define as \begin{equation} \sigma^2_\text{epi}(\vect{r}) = \left(1-\bar{q}(\vect{r})\right)^2. \end{equation} Finally, we combine the uncertainty measures based on the RGB variance and the above epistemic measure into the overall uncertainty $\psi^2(\vect{r})$: \begin{equation} \psi^2(\vect{r}) = \bar\sigma^2_\text{RGB}(\vect{r}) + \sigma^2_\text{epi}(\vect{r}). \end{equation} The predicted rendered colour along a ray according to the ensemble is then modelled as a Gaussian with a diagonal covariance matrix: \begin{equation} \tilde C(\vect{r}) \sim \mathcal{N}\left(\vectg{\mu}_\text{RGB}(\vect{r}), \vect{I}_{3\times 3} \cdot \psi^2(\vect{r}) \right). \end{equation} We call this method \emph{Density-aware} Ensembling, as $\sigma^2_\text{epi}(\vect{r})$ depends on the individual density predictions along each ray. As the experiments in the next section will show, $\sigma^2_\text{epi}(\vect{r})$ and $\bar\sigma^2_\text{RGB}(\vect{r})$ are complementary, capturing different aspects of the model's aleatoric and epistemic uncertainty to different degrees. \section{Experiments and Results} \begin{figure*}[t] \centering \includegraphics[width=0.24\linewidth]{fig/hero_room_ensemble.jpg} \includegraphics[width=0.24\linewidth]{fig/hero_room_mean.jpg} \includegraphics[width=0.24\linewidth]{fig/hero_room_ours.jpg} \includegraphics[width=0.24\linewidth]{fig/hero_room_error.jpg} \includegraphics[width=0.24\linewidth]{fig/hero_fern_ensemble.jpg} \includegraphics[width=0.24\linewidth]{fig/hero_fern_mean.jpg} \includegraphics[width=0.24\linewidth]{fig/hero_fern_ours.jpg} \includegraphics[width=0.24\linewidth]{fig/hero_fern_error.jpg} \caption{Qualitative results for two views of the \emph{Room} and \emph{Fern} scenes of the LLFF dataset.} \label{fig:qualitative} \end{figure*} \begin{table*}[tb] \centering \caption{Measuring uncertainty quantification with Negative Log-Likelihood (NLL) for baselines and different ensemble sizes of our proposed method. Considered baselines are Monte Carlo Dropout (MC-DO), a naive ensembles approach implemented by~\cite{shen2021stochastic}, NeRF in the Wild (NeRF-W)~\cite{martinbrualla2020nerfw}, S-NeRF~\cite{shen2021stochastic}, and CF-NeRF~\cite{shen2022conditional}. The latter paper only published average NLL over all scenes instead of individual results.
A $\dagger$ indicates results taken from~\cite{shen2021stochastic}, $\ddagger$ from~\cite{shen2022conditional}.} \begin{tabular}{@{}rc|cccccc|rrrr@{}}\toprule & Training & \multicolumn{6}{c}{Negative Log-Likelihood $\downarrow$} & \multicolumn{4}{c}{Ablation Ensemble Size (NLL $\downarrow$)} \\ & Views & MC-DO$\dagger$ & Naive Ens$\dagger$ & NeRF-W$\dagger$ & S-NeRF$\dagger$ & CF-NeRF$\ddagger$ & Ours & & & & \\ Dataset & (Ours) & $M=5$ & $M=5$ & \cite{martinbrualla2020nerfw} & \cite{shen2021stochastic} & \cite{shen2022conditional} & $M=5$ & $M=2$ & $M=4$ & $M=8$ & $M=10$ \\ \midrule Flower & 7 & 4.63 & 1.63 & 1.71 & 1.27 & -- & \textbf{1.00} & 1.88 & 1.13 & 0.90 & 0.85\\ Fortress & 8 & 5.19 & 2.29 & 1.04 & -0.03 & -- & \textbf{-1.30} & -1.28 & -1.29 & -1.30 & -1.30\\ Leaves & 5 & 2.72 & 2.66 & 0.79 & \textbf{0.68} & -- & 0.97 & 2.32 & 1.10 & 0.80 & 0.73\\ Horns & 12 & 4.18 & 2.17 & 0.78 & 0.60 & -- & \textbf{-0.55} & 0.06 & -0.50 & -0.64 & -0.66 \\ T-Rex & 11 & 4.10 & 2.28 & 1.91 & 1.37 & -- & \textbf{-0.31} & 2.69 & 0.00 & -0.65 & -0.69\\ Fern & 4 & 4.90 & 2.47 & 2.16 & 2.01 & -- & \textbf{-0.98} & -0.89 & -0.97 & -0.99 & -1.00 \\ Orchids & 5 & 5.74 & 2.23 & 2.24 & 1.95 & -- & \textbf{-0.28} & 0.06 & -0.17 & -0.29 & -0.31\\ Room & 8 & 5.06 & 2.13 & 4.93 & 2.35 & -- & \textbf{-1.35} & -1.29 & -1.34 & -1.35 & -1.35 \\ \midrule Average & & 4.57 & 2.23 & 1.95 & 1.27 & 0.57 & \textbf{-0.35} & 0.44 & -0.26 & -0.44 & -0.47 \\ \bottomrule \end{tabular} \label{tab:main_results} \end{table*} \begin{table*}[tb] \centering \caption{Ablation study on the influence of the individual components of the uncertainty measure $\psi^2$. We report the mean-average and mean-median NLL per scene, along with their standard deviations; see the text for details.} \begin{tabular}{@{}r|cc|cc|cc@{}}\toprule & \multicolumn{6}{c}{Negative Log-Likelihood $\downarrow$ ($M=10$)} \\ & \multicolumn{2}{c}{$\psi^2 = \sigma^2_\text{RGB} + \sigma^2_\text{epi}$} & \multicolumn{2}{c}{$\psi^2 = \sigma^2_\text{RGB} $} & \multicolumn{2}{c}{$\psi^2 = \sigma^2_\text{epi}$} \\ Dataset & Mean & Median & Mean & Median & Mean & Median \\ \midrule Flower & 0.85 $\pm$ 0.26 & -0.07 $\pm$ 0.18 & 1997 $\pm$ 2764 & 0.76 $\pm$ 0.39 & 2.85 $\pm$ 0.95 & 0.46 $\pm$ 0.38 \\ Fortress & -1.30 $\pm$ 0.07 & -1.40 $\pm$ 0.01 & -0.51 $\pm$ 1.17 & -2.26 $\pm$ 0.35 & -1.26 $\pm$ 0.10 & -1.42 $\pm$ 0.01\\ Leaves & 0.73 $\pm$ 0.17 & -0.20 $\pm$ 0.10 & 3762 $\pm$ 3821 & -0.06 $\pm$ 0.15 & 4.76 $\pm$ 1.72 & 0.72 $\pm$ 0.32\\ Horns & -0.66 $\pm$ 0.26 & -1.35 $\pm$ 0.10 & 3.08 $\pm$ 2.81 & -1.49 $\pm$ 0.26 & 0.35 $\pm$ 0.60 & -1.42 $\pm$ 0.08 \\ T-Rex & -0.69 $\pm$ 0.53 & -1.50 $\pm$ 0.05 & 137 $\pm$ 480 & -1.62 $\pm$ 0.82 & 7.36 $\pm$ 2.62 & -1.49 $\pm$ 0.05\\ Fern & -1.00 $\pm$ 0.11 & -1.30 $\pm$ 0.05 & 14.2 $\pm$ 39.7 & -1.66 $\pm$ 0.23 & -0.90 $\pm$ 0.14 & -1.39 $\pm$ 0.02\\ Orchids & -0.31 $\pm$ 0.09 & -0.95 $\pm$ 0.06 & 1.51 $\pm$ 0.55 & -0.91 $\pm$ 0.11 & 0.51 $\pm$ 0.26 & -1.09 $\pm$ 0.09\\ Room & -1.35 $\pm$ 0.08 & -1.43 $\pm$ 0.01 & -0.54 $\pm$ 1.51 & -2.58 $\pm$ 0.20 & -1.30 $\pm$ 0.10 & -1.44 $\pm$ 0.01 \\ \midrule Average & -0.47 & -1.02 & 739 & -1.23 & 1.55 & -0.88 \\ \bottomrule \end{tabular} \label{tab:ablation} \end{table*} \subsection{Experimental Setup} \noindent\textbf{Datasets:} We follow the evaluation protocol for uncertainty quantification in NeRFs established by~\cite{shen2021stochastic}, evaluating our approach on the eight scenes of the LLFF dataset~\cite{mildenhall2021nerf} -- 3 outdoor scenes (\emph{Flower, Leaves, Orchids}) and
5 indoor scenes (\emph{Fortress, Horns, T-Rex, Room, Fern}). As motivated in~\cite{shen2021stochastic}, we randomly split the available views from the individual datasets into 20\% for training, and keep the remaining 80\% for testing. This way, we can evaluate our model in a scenario where only a few (4--12) training views of a scene are available, which is more realistic for robotics applications than the dense viewpoint coverage typically encountered in NeRF datasets. \noindent\textbf{Baselines:} Using the established datasets for evaluation allows us to directly compare with S-NeRF~\cite{shen2021stochastic} and the baseline results published in~\cite{shen2021stochastic}. These are Monte Carlo Dropout sampling~\cite{gal2016dropout}, a naive ensembling approach based on deep ensembles~\cite{lakshminarayanan2017simple}, and NeRF in the Wild (NeRF-W)~\cite{martinbrualla2020nerfw}. For the Monte Carlo Dropout baseline, \cite{shen2021stochastic} added a Dropout layer after every odd layer in the network, sampled 5 times, and used the variance in RGB space as the uncertainty measure. The Naive Ensembling baseline also uses the RGB variance, after training an ensemble with 5 members. The NeRF-W experiment in \cite{shen2021stochastic} was conducted by removing the latent embedding components and keeping only the uncertainty estimation layers. We refer the reader to~\cite{martinbrualla2020nerfw} for details on the architecture of NeRF-W. We additionally compare against CF-NeRF~\cite{shen2022conditional}, more recent work by the authors of~\cite{shen2021stochastic}. Since they follow the same evaluation protocol, we can directly compare against the results reported in~\cite{shen2022conditional}. \noindent\textbf{Evaluation Metric:} We report the Negative Log-Likelihood (NLL) as a principled and established way of assessing the quality of predictive uncertainty. NLL measures the likelihood of the \emph{true} colour at a pixel under a Gaussian model $\mathcal{N}(\vectg{\mu}_\text{RGB}(\vect{r}), \psi^2(\vect{r}))$ formed by the \emph{predicted} mean colour $\vectg{\mu}_\text{RGB}$ and our squared uncertainty measure as variance. \noindent\textbf{Implementation:} We implement our Density-aware Ensembles based on the publicly available Instant-NGP~\cite{mueller2022instant} implementation. A simple modification lets us access the underlying densities along each ray during rendering. This enables us to calculate $q_{\vectg{\theta}_k}(\vect{r})$ as per Eq.~(\ref{eq:sum_termination}). We train 10 ensemble members for 5,000 training steps. On our machine with an Nvidia RTX 3090, training takes around 26 seconds per ensemble member, but we note that the process could be effectively parallelised. To calculate $\bar\sigma^2_\text{RGB}$, we render novel views at the full resolution of the ground truth image and calculate mean and variance in pixel space (see Section~\ref{sec:ensemble}). \subsection{Results and Ablation Study} \noindent \textbf{Main Results:} We show the main results of our experiments in Table~\ref{tab:main_results}. Our Density-aware Ensemble with 5 members achieves a better average NLL across all scenes of the LLFF dataset than all considered baselines. While S-NeRF~\cite{shen2021stochastic} and CF-NeRF~\cite{shen2022conditional} report average performance of $1.27$ and $0.57$, our ensemble sets a new state of the art with an NLL of $-0.35$.
We achieve a lower NLL on all individual scenes apart from \emph{Leaves}. \noindent\textbf{Influence of Ensemble Size:} \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{fig/ablationEnsemble.pdf} \caption{Ensembling is feasible and effective for moderate ensemble sizes. The Negative Log-Likelihood converges quickly for most scenes, improving negligibly beyond sizes of 5 except for \emph{T-Rex}.} \label{fig:ablation} \end{figure} We ablate the influence of $M$, the number of ensemble members. While we reported our main results for $M=5$ ensemble members to directly compare to prior work, Table~\ref{tab:main_results} also reports the results for different choices of $M$ in the four rightmost columns. As expected, a larger ensemble results in higher-quality uncertainty estimates, achieving an average Negative Log-Likelihood of $-0.47$ for $M=10$ compared to $-0.35$ for the smaller ensemble with $M=5$. This is consistent with previous findings on the performance of sampling-based uncertainty quantification~\cite{lakshminarayanan2017simple, gal2016dropout}. In addition, Fig.~\ref{fig:ablation} plots the achieved Negative Log-Likelihood for $M=2\dots 10$. From this plot, it is apparent that although the performance increases monotonically with $M$, the gains diminish for larger $M$. This is significant for practical use cases, as it indicates that even a relatively small NeRF ensemble can provide high-quality uncertainty estimates. \noindent\textbf{Influence of Individual Uncertainty Measures:} \begin{figure}[t] \centering \includegraphics[width=0.49\linewidth]{fig/hero_trex_rays.pdf} \includegraphics[width=0.49\linewidth]{fig/hero_room_rays.pdf} \caption{The average sum of termination probabilities $\bar{q}(\vect{r})$ for a view from the \emph{T-Rex} and \emph{Room} scenes (RGB images in Figs.~\ref{fig:hero} and \ref{fig:qualitative}). The model assigns small termination probability to rays corresponding to parts of the view that were never observed in training (bottom part, left image). Interestingly, we also observe small fluctuations in termination probability per ray in other parts of the scene, where the model assigns a belief of slightly less than 1. Our ablation indicates this is correlated with the prediction error.} \label{fig:rays} \end{figure} Our proposed uncertainty measure $\psi^2(\vect{r})$ consists of two terms, the mean RGB variance $\bar\sigma^2_\text{RGB}$ and the epistemic variance $\sigma^2_\text{epi}$. To better understand their individual influence, Table~\ref{tab:ablation} reports results when using only either term, or both terms combined. For this ablation study, we report both the mean-average and the mean-median Negative Log-Likelihood, along with their standard deviations, for every scene in the LLFF dataset. The mean-average NLL is calculated by averaging the NLL over all pixels in a rendered test image, and then calculating the mean over all test images in a scene. In contrast, the mean-median NLL calculates the \emph{median} NLL over all pixels, before averaging over all test views. Reporting both metrics reveals that using only the RGB-based variance $\sigma^2_\text{RGB}$ as the uncertainty measure is affected by severe outliers, i.e.\ pixels with very high (bad) NLL. This becomes clear when comparing the mean-average and the mean-median performances: while the former is strongly affected by outliers, the latter is relatively robust against them.
As explained in the motivation for our method in Section~\ref{sec:ensemble}, the outlier pixels with very high NLL are caused by parts of the scene that were not observed during training. Somewhat surprisingly, we observe that the epistemic uncertainty term $\sigma^2_\text{epi}$ by itself achieves reasonable performance on most scenes. Upon closer inspection, this is caused by small fluctuations in the individual $q_{\vectg{\theta}_k}(\vect{r})$, the sum of the termination probabilities per ray: the model assigns slightly lower $q_{\vectg{\theta}_k}(\vect{r})$ for rays with higher uncertainty, i.e.\ values close to, but not exactly, 1. We illustrate this in Fig.~\ref{fig:rays} for views from two scenes. However, using the sum of $\sigma^2_\text{RGB}$ and $\sigma^2_\text{epi}$ as the proposed uncertainty measure $\psi^2(\vect{r})$ consistently yields the best results, indicating the complementarity of both components.
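To make the evaluation metric concrete, the sketch below (again our own illustration, not the evaluation code of this work; averaging the NLL over the three colour channels is an assumption) computes the per-pixel NLL under the diagonal Gaussian $\mathcal{N}(\vectg{\mu}_\text{RGB}, \vect{I}\cdot\psi^2)$ from the quantities introduced above.
\begin{verbatim}
import numpy as np

def per_pixel_nll(rgb_true, mu_rgb, sigma2_rgb, q_bar):
    # psi^2 = sigma^2_RGB + (1 - q_bar)^2; Gaussian NLL per pixel.
    # rgb_true, mu_rgb: (H, W, 3); sigma2_rgb, q_bar: (H, W).
    psi2 = (sigma2_rgb + (1.0 - q_bar) ** 2)[..., None]
    nll = 0.5 * (np.log(2.0 * np.pi * psi2)
                 + (rgb_true - mu_rgb) ** 2 / psi2)
    return nll.mean(axis=-1)  # assumed: average over the 3 channels
\end{verbatim}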
{ "timestamp": "2022-09-20T02:23:38", "yymm": "2209", "arxiv_id": "2209.08718", "language": "en", "url": "https://arxiv.org/abs/2209.08718" }
\section{Introduction} Data clustering is a problem studied intensively by a variety of research communities. However, high-dimensional data clustering still constitutes a significant challenge, plagued by the \emph{curse of dimensionality} \citep{hutzenthaler2020overcoming}. Hierarchical divisive algorithms developed in recent years \citep{TASOULIS20103391, pavlidis2016minimum, hofmeyr2016clustering, hofmeyr2019minimum, hofmeyr2019ppci} have shown great potential for the particular case of high-dimensional data, incorporating dimensionality reduction iteratively within their algorithmic procedure. Additionally, they seem unique in providing a hierarchical format of the clustering result at low computational cost, in contrast to the commonly used but computationally demanding agglomerative clustering methods. Although the discovery of a hierarchical format is crucial in many fields, such as bioinformatics \citep{luo2003hierarchical,modena2014gene}, to the best of our knowledge, this package is the first native Python implementation of divisive hierarchical clustering algorithms. We particularly focus on the ``Principal Direction Divisive Clustering (PDDP)'' algorithm \citep{boley1998principal} for its potential to effectively tackle the \emph{curse of dimensionality} and its excellent time performance \citep{TASOULIS20103391}. Simultaneously, we provide implementations of a complete set of hierarchical divisive clustering algorithms with a similar basis. These are dePDDP \citep{TASOULIS20103391}, iPDDP \citep{TASOULIS20103391}, kM-PDDP \citep{zeimpekis2008principal}, and bisecting k-Means (BKM) \citep{savaresi2001performance}. We also provide additional features, not included in the original developments of the aforementioned methodologies, that make them appropriate for the discovery of arbitrarily shaped or non-linearly separable clusters. In detail, we incorporate kernel Principal Component Analysis (kPCA) \citep{Scholkopf99kernelprincipal} and Independent Component Analysis (ICA) \citep{hyvarinen2000independent, tharwat2020independent} for the iterative dimensionality reduction steps. As a result, the package provides a fully parameterized set of algorithms that can be applied to a diverse set of problems, for example, non-linearly separable clusters, automated identification of the number of clusters, and outlier control. \section{Software Description} The HiPart (Hierarchical Partitioning) package is divided into three major sections: \begin{itemize} \setlength{\itemsep}{5pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item Method Implementation \item Static Visualization \item Interactive Visualization \end{itemize} \subsection{Method Implementation} The package employs an object-oriented approach for the implementation of the algorithms, similar to that of~\cite{JMLR:v23:21-0862}, while incorporating design similarities with the scikit-learn library \citep{pedregosa2011scikit}. That is, each algorithm is executed by a class whose parameters and attributes correspond to the algorithm's hyper-parameters and results. To execute an algorithm, the user calls either the \textbf{predict} or the \textbf{fit\_predict} method of the algorithm's execution class, while parameterization is applied through the constructor of the respective class (a minimal usage sketch is given below).
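The following is a hedged usage sketch under these conventions; the class name \texttt{DePDDP}, the import path, and the parameter \texttt{max\_clusters\_number} are assumptions for illustration and may differ from the released API.
\begin{verbatim}
# Hypothetical usage sketch; class, module, and parameter names
# are assumed and may differ from the released HiPart API.
import numpy as np
from HiPart.clustering import DePDDP  # assumed import path

X = np.random.default_rng(0).normal(size=(300, 100))  # toy data

model = DePDDP(max_clusters_number=4)  # hyper-parameters in constructor
labels = model.fit_predict(X)          # one cluster label per sample
\end{verbatim}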
\subsection{Static Visualization} Two static visualization methods are included. The first is a 2-dimensional representation of all the data splits generated by each algorithm during the hierarchical procedure. The goal is to provide the user with insight into each node of the clustering tree and, subsequently, each step of the algorithm's execution. The second visualization method is a dendrogram that represents the splits of all the divisive algorithms. The dendrogram figure is created with the \emph{SciPy} package and is fully parameterizable, as documented in that library. \subsection{Interactive Visualization} In the interactive mode, we provide the possibility of stepwise manipulation of the algorithms. The user can choose a particular step (node of the tree) and manipulate the split-point on top of a two-dimensional visualization, instantly altering the clustering result. Each manipulation resets the algorithm's execution from that step onwards, resulting in a restructuring of the sub-tree of the manipulated node. \section{Development Notes} For the development of the package, we complied with the \textbf{PEP8} style standards, enforced with the \emph{flake8} command-line utility. To ensure the code's quality, we wrote \emph{unittest} test suites covering the entirety of the source code. In addition, platform compatibility has been assured through extensive testing, and the package in its entirety uses only well-established or native Python packages. The package has been released as open-source software under the ``MIT License''. For more information, for potential contributions, or to submit an issue or request, the package is hosted as a repository on GitHub. \section{Experiments and Comparisons} In this section, we provide clustering results with respect to execution speed and clustering performance for the provided implementations. For direct comparison, we employ a series of well-established clustering algorithms. These are the k-Means \citep{likas2003global}, Agglomerative (AGG) \citep{ackermann2014analysis}, and OPTICS \citep{ankerst1999optics} algorithms of the scikit-learn \citep{pedregosa2011scikit} Python library, and the fuzzy c-means (FCM) algorithm \citep{bezdek1984fuzzy} of the fuzzy-c-means \citep{dias2019fuzzy} Python package. Clustering performance is evaluated using the Normalized Mutual Information (NMI) score \citep{yang2016comparative}. Four widely used data sets from the field of bioinformatics are employed, along with two popular benchmark data sets for image and text clustering, respectively: \noindent \begin{minipage}{.44\textwidth} \begin{itemize} \setlength{\itemsep}{5pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item the Deng \citep{DengData}, \item the TCGA Pan-cancer\footnotemark{} (Cancer), \item the USPS \citep{291440}, \end{itemize} \end{minipage} \hfill \begin{minipage}{.55\textwidth} \begin{itemize} \setlength{\itemsep}{5pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item the Baron \citep{baron2016single}, \item the Chen \citep{chen2017single}, \item the BBC \citep{greene06icml}, \vspace{-3mm} \end{itemize} \end{minipage}{} \footnotetext{\url{https://www.doi.org/10.7303/syn300013}} All experiments took place on a server computer running Linux (kernel version 5.11.0) with an Intel Core i7-10700K CPU @ 3.80GHz and four 32GB DDR4 RAM DIMMs at 2133MHz.
Default parameters were used for the execution of all algorithms, and the actual number of clusters was provided as a parameter to the algorithms that require it. In Table \ref{tbl:resutls} we present the mean performance of all methods with respect to execution time (in seconds) and NMI across 100 experiments. We observe that the HiPart implementations perform exceptionally well in terms of execution time while remaining comparable with respect to clustering performance. \noindent \begin{minipage}[b]{0.52\textwidth} \centering \scriptsize \begin{tabular}{|l|cc|cc|c|} \hline Algorithm & time (s) & NMI & time (s) & NMI & \\ \hline\hline & \multicolumn{2}{c|}{Deng (135, 12548)} & \multicolumn{2}{c|}{Baron (1886, 14878)} & \parbox[t]{2mm}{\multirow{21}{*}{\rotatebox[origin=c]{90}{Gene Expression Data}}} \\ \cline{1-5} iPDDP & 0.10 & 0.76 & 1.16 & 0.08 & \\ dePDDP & 0.14 & 0.70 & 2.16 & 0.53 & \\ PDDP & 0.15 & 0.54 & 2.55 & 0.53 & \\ kM-PDDP & 0.25 & 0.61 & 3.81 & 0.52 & \\ BKM & 0.50 & 0.64 & 11.51 & 0.52 & \\ k-Means & 0.14 & 0.71 & 7.52 & 0.48 & \\ AGG & 0.04 & 0.72 & 14.54 & 0.51 & \\ OPTICS & 27.78 & 0.48 & 710.99 & 0.13 & \\ FCM & 1.37 & 0.68 & 163.63 & 0.45 & \\ \cline{1-5} & \multicolumn{2}{c|}{Cancer (801, 20531)} & \multicolumn{2}{c|}{Chen (14437, 23284)}& \\ \cline{1-5} iPDDP & 0.90 & 0.67 & 13.18 & 0.30 & \\ dePDDP & 1.06 & 0.93 & 20.71 & 0.36 & \\ PDDP & 1.04 & 0.74 & 37.52 & 0.48 & \\ kM-PDDP & 1.27 & 0.86 & 53.73 & 0.48 & \\ BKM & 5.94 & 0.88 & 255.85 & 0.48 & \\ k-Means & 1.54 & 0.98 & 249.72 & 0.48 & \\ AGG & 3.09 & 0.98 & 1218.68 & 0.49 & \\ OPTICS & 266.39 & 0.34 & 27089.35 & 0.00 & \\ FCM & 5.53 & 0.53 & 5710.83 & 0.26 & \\ \hline\hline & \multicolumn{2}{c|}{USPS (4575, 256)} & \multicolumn{2}{c|}{BBC (2225, 21213)} & \parbox[t]{2mm}{\multirow{10}{*}{\rotatebox[origin=c]{90}{Benchmark Data}}} \\ \cline{1-5} iPDDP & 0.02 & 0.55 & 2.02 & 0.60 & \\ dePDDP & 0.05 & 0.65 & 1.70 & 0.60 & \\ PDDP & 0.04 & 0.60 & 2.57 & 0.78 & \\ kM-PDDP & 0.17 & 0.50 & 2.93 & 0.74 & \\ BKM & 0.38 & 0.58 & 9.92 & 0.65 & \\ k-Means & 0.12 & 0.72 & 4.68 & 0.35 & \\ AGG & 1.15 & 0.77 & 23.18 & 0.64 & \\ OPTICS & 635.05 & 0.04 & 1055.75 & 0.06 & \\ FCM & 2.92 & 0.58 & 4.73 & 0.60 & \\ \hline \end{tabular} \captionof{table}{Clustering results with respect to execution time and clustering performance.} \label{tbl:resutls} \end{minipage} \hfill \begin{minipage}[b]{0.46\textwidth} \centering \scriptsize \includegraphics[width=\textwidth, height=3.18in]{figures/dendrogram.png} \captionof{figure}{Dendrogram for the Cancer data set, produced with the dePDDP algorithm and the dendrogram visualization module of the HiPart library. The line below the tree indicates the colour of the original cluster to which each sample belongs.} \label{fig:dendrogram} \end{minipage} \section{Conclusions and Future Work} We present a highly time-efficient clustering package with a suite of tools for addressing high-dimensional data clustering problems. The newly developed visualization tools also enhance the understanding and identification of the underlying cluster structure. We plan to continuously expand the HiPart package through the addition of more hierarchical algorithms and by providing even more options for dimensionality reduction, such as recent projection pursuit methodologies \citep{pavlidis2016minimum, hofmeyr2016clustering, hofmeyr2019minimum, hofmeyr2019ppci}. Our final aim is to establish HiPart as the gold standard for hierarchical divisive clustering.
\acks{This project has received funding from the Hellenic Foundation for Research and Innovation (HFRI), under grant agreement No 1901.}
{ "timestamp": "2022-09-20T02:22:27", "yymm": "2209", "arxiv_id": "2209.08680", "language": "en", "url": "https://arxiv.org/abs/2209.08680" }
\section{Introduction} The ingress of hydrogen into a metal brings a reduction in material toughness, ductility and fatigue crack growth resistance \cite{Gangloff2003,Gangloff2012,Djukic2019}. This phenomenon, often referred to as \emph{hydrogen embrittlement}, is pervasive across the energy, defence, transport and construction sectors, and is gaining increasing attention due to the higher susceptibility of modern, high-strength alloys \cite{RILEM2021}. As a result, there is a significant body of literature devoted to the development of chemo-mechanical models for predicting hydrogen assisted failures (see, e.g., \cite{Yu2016a,Nagao2018,CMAME2018,Anand2019,Shishvan2020,IJP2021} and Refs. therein). These hydrogen embrittlement models commonly solve a coupled deformation-diffusion problem to define a fracture criterion as a function of mechanical fields (stress, strain) and hydrogen concentration. Consistent with experimental observations, model predictions are very sensitive to the hydrogen content, a key input in the transport sub-problem. However, the quantification of hydrogen ingress remains a challenge, and this is particularly the case when hydrogen originates from water vapour or aqueous electrolytes \cite{Marcus2012,Turnbull2015}. In other words, our inability to quantify hydrogen uptake is holding back the predictive potential of current hydrogen embrittlement models.\\ By far, the most widely used strategy for modelling hydrogen ingress is the definition of a constant hydrogen concentration at the surfaces of the sample exposed to the hydrogen-containing environment (see, e.g., \cite{Yu2016a,Nagao2018,CMAME2018,IJP2021,Moriconi2014,Duda2018,CS2020b,Wu2020b,Colombo2020}). Recently, a few authors have proposed instead to prescribe a constant chemical potential \cite{DiLeo2013,IJHE2016,Diaz2016b,Elmukashfi2020,AM2020}. This is more accurate as it enables capturing the increase in hydrogen solubility associated with volumetric strains (lattice dilatation effects near the surface). However, neither the hydrogen content nor the chemical potential are typically known in most hydrogen-containing environments, such as aqueous electrolytes. Atrens and co-workers \cite{Liu2014,Venezuela2018a} postulated the concept of an equivalent fugacity to relate the hydrogen content at the surface to the overpotential through experimental calibration. Turnbull and co-workers \cite{Turnbull1996,CS2020} and Kehler and Scully \cite{Kehler2008} have gone one step further and, building upon a number of assumptions, respectively defined the flux and the hydrogen concentration to be a function of the absorption and desorption reaction rate constants. In the regimes where their assumptions are relevant, this enables quantifying hydrogen ingress as a function of the overpotential and pH of the environment. However, the \emph{local} pH and overpotential are typically unknown and can differ significantly from the \emph{bulk} pH and overpotential, the commonly known quantities. For example, the narrow confines of occluded areas such as cracks or pits limit the exchange of dissolved metal ions with the bulk electrolyte, resulting in a very different local chemistry – e.g., the pH can change from 9 (global) to 2 (local) \cite{Mccafferty2004,Carneiro-Neto2016,Duddu2016}. 
In turn, these pH differences can result in different reaction rates \citep{Recio2011,Fujimoto2017}, and can thus result in significant differences in absorbed hydrogen between the area near the crack compared to the exterior boundaries \citep{Cooper2007}. An accurate estimation of hydrogen ingress requires resolving not only the absorption kinetics but also the bulk and surface electrochemistries, coupled with bulk hydrogen diffusion and mechanical straining.\\ In this work, we present a theoretical and computational modelling framework that fully resolves the physics of hydrogen uptake. The model combines: (i) the electrochemical behaviour of the electrolyte (ion transport, electrolyte potential distribution), (ii) the Volmer, Heyrovsky, and Tafel reactions intrinsic to the Hydrogen Evolution Reaction (HER), (iii) adsorption and absorption surface kinetics, and (iv) hydrogen ingress, diffusion and trapping in a mechanically-deforming solid. For the first time, the electrochemistry of hydrogen uptake is explicitly modelled, enabling us to establish a connection between the bulk environment and the influx of hydrogen for arbitrary sample and defect geometries. Moreover, unlike previous attempts to connect the environment to the hydrogen ingress process, we do not establish any \textit{a priori} assumptions and thus do not limit our predictions to specific conditions. The competition between different reaction rates is investigated as a function of the applied electric potential and pH, establishing regimes of dominance and particularising the generalised model presented. Our calculations span a wide range of applied potentials, from anodic to cathodic regimes, mapping the impact of the environment on hydrogen absorption. Furthermore, we study the influence of the crack geometry and fluid flow velocity, establishing the scenarios where these effects are important. Finally, we provide simplified models which, together with the maps provided, can be used to approximate hydrogen ingress without explicitly simulating the electrolyte. The performance of these simplified models is compared to the results obtained from the complete electro-chemo-mechanical framework as well as to those calculated employing commonly used boundary conditions.\\ The remainder of this paper is structured as follows. The theory and governing equations are presented in Section \ref{sec:gov_eq}. The numerical framework is then briefly described in Section \ref{sec:disc}. Subsequently, model predictions are validated against computational and experimental results in Section \ref{sec:verif}. In Section \ref{sec:results}, hydrogen uptake is quantified as a function of the environmental conditions and the defect geometry. Finally, Section \ref{sec:bcs} shows how the results estimated with the complete and simplified models presented compare with commonly used modelling strategies. Concluding remarks end the manuscript in Section \ref{sec:conclusion}. \section{Theory} \label{sec:gov_eq} We consider a domain composed of two parts, an electrolyte in $\Omega_e$ and a metal in $\Omega_m$, as shown in Fig. \ref{fig:domains}. These two sub-domains interact on the interface $\Gamma_{int}$. 
In terms of primary fields, the metal domain is described through its displacement vector $\mathbf{u}$ and the hydrogen concentration at interstitial lattice sites $C_L$; the electrolyte behaviour is characterised by the concentration of the $\pi$ ionic species $C_\pi$ and the electric potential $\varphi$; and the internal interface is described through the coverage of adsorbed hydrogen $\theta_{ads}$. The focus is on surface reactions and hydrogen diffusion, and consequently no crack growth or material dissolution is considered, with pits and cracks being represented geometrically through the shape of the simulated domains.\\ A note on notation and units is in order. For consistency across domains, the SI units $\mathrm{mol}/\mathrm{m}^3$ are used for all concentration quantities. As a consequence, the units of reaction constants follow accordingly; e.g., the water auto-ionization constant is given by $10^{-8}\;(\mathrm{mol}/\mathrm{m^3})^2$ (as opposed to the more common terminology of $10^{-14}\;(\mathrm{mol}/\mathrm{L})^2$). Also, we define the electric potential relative to the standard hydrogen electrode, considering the equilibrium potential of hydrogen-related reactions to be at $0\;\mathrm{V}_{SHE}$. We use lightface italic letters for scalars, e.g. $C_L$, upright bold letters for vectors, e.g. $\mathbf{u}$, and bold italic letters, such as $\bm{\sigma}$, for second and higher order tensors. First-, second-, and fourth-order tensors are in most cases respectively represented by small Latin, small Greek, and capital Latin letters. The gradient and the divergence are respectively denoted by $\bm{\nabla}\mathbf{u}= u_{i,j}$ and $\bm{\nabla}\cdot\bm{\sigma}=\sigma_{ij,j}$. The trace of a second-order tensor is written as $\text{tr} \,\bm{\varepsilon}=\varepsilon_{ii}$. \begin{figure} \centering \includegraphics{GenericDomain-eps-converted-to.pdf} \caption{Schematic overview of the electrolyte and metal domains, and (in brackets) the degrees of freedom used to describe the electrochemical phenomena relevant to each of them.} \label{fig:domains} \end{figure} \subsection{Metal sub-domain} \label{Sec:MetalDomain} The transport of ions inside the electrolyte and of hydrogen inside the metal is much slower than the deformation experienced by the material. Accordingly, the metal can be assumed to be in a state of quasi-static equilibrium, while the diffusion of the ions and hydrogen atoms is time-dependent. This quasi-static mechanical equilibrium is characterised by the momentum balance: \begin{equation} \bm{\nabla}\cdot\bm{\sigma} = \mathbf{0} \label{eq:mombalance} \end{equation} in which the Cauchy stress tensor $\bm{\sigma}$ is computed assuming linear-elastic material behaviour.\\ The hydrogen inside the metal is located at interstitial lattice sites, $C_L$, and within $i$ sets of hydrogen traps, $C_{T}^i$. The mass conservation for the total hydrogen content is given by (see, e.g. \cite{Dadfarnia2011,Fernandez-Sousa2022}): \begin{equation} \dot{C}_L+\sum_i \dot{C}_T^i + \bm{\nabla}\cdot\left(-D_L \bm{\nabla}C_L \right) + \bm{\nabla}\cdot\left(\frac{D_L C_L \overline{V}_H}{RT}\bm{\nabla}\sigma_H\right) = 0 \label{eq:massbalance1} \end{equation} where $T$ is the temperature, $R$ is the universal gas constant, and $\overline{V}_H$ is the partial molar volume of hydrogen in the material. We use $\dot{C}_L$ and $\dot{C}_T^i$ to indicate the time derivatives of the hydrogen concentrations within the lattice and traps (of type $i$).
The equation assumes that hydrogen diffuses through the interstitial lattice sites with a diffusion coefficient $D_L$, whereas traps are isolated and do not form an extended network through which hydrogen atoms can diffuse. Additionally, there is a contribution from the hydrostatic stress $\sigma_H=\text{tr}(\bm{\sigma})/3$, as the hydrogen solubility increases in areas of high volumetric strains due to lattice dilatation.\\ Let us now define the relation between lattice and trapping sites, for which several models exist. The transfer of hydrogen atoms between lattice and trap sites can be simulated using a kinetic formulation \cite{McNabb1963,Turnbull1996,Turnbull1997}, which can capture trapping behaviour that is perfectly reversible, (quasi)-irreversible, and asymmetric (different absorption and desorption energies). Alternatively, a common assumption is that of fast trapping kinetics, which treats all traps as reversible and yields an equilibrium relationship between the trapped and lattice hydrogen content \citep{Oriani1970,Diaz2019}. To formulate this equilibrium relationship, let us first introduce the densities of lattice ($N_L$) and trapping ($N_T^i$) sites, which then allows us to define the occupancy of lattice and trapping sites as $\theta_L=C_L/N_L$ and $\theta_T^i=C_T^i/N_T^i$, respectively. Then, assuming a low lattice occupancy ($\theta_L\ll 1$), one can define the trap occupancy as a function of the lattice occupancy and the trap binding energy $E_b$, as follows: \begin{equation} \theta_T^i = \frac{C_L/N_L \exp{\left(\frac{E_b^i}{RT}\right)}}{1+C_L/N_L \exp{\left(\frac{E_b^i}{RT}\right)}} \end{equation} This allows the mass balance (Eq. \eqref{eq:massbalance1}) to be given solely in terms of the lattice concentration as: \begin{equation} \left(1+ \sum_i \frac{N_T^i/N_L \; \; \exp{\left(E_b^i/(RT)\right)}}{\left(1+C_L/N_L \exp{\left(E_b^i/(RT)\right)}\right)^2} \right)\dot{C}_L + \bm{\nabla}\cdot\left(-D_L \bm{\nabla}C_L \right) + \bm{\nabla}\cdot\left(\frac{D_L C_L \overline{V}_H}{RT}\bm{\nabla}\sigma_H\right) = 0 \label{eq:massbalance2} \end{equation} Eqs. \eqref{eq:mombalance} and \eqref{eq:massbalance2} describe the mechanical behaviour of the metal and the transport of hydrogen within it. These equations are subject to the following boundary conditions on $\Gamma_{ext}$ and $\Gamma_{int}$: \begin{align} \mathbf{u} = \overline{\mathbf{u}} \qquad &\text{or} \qquad \bm{\sigma}\cdot\mathbf{n} = \overline{\mathbf{t}} \\ C_L = \overline{C}_L \qquad &\text{or} \qquad -D_L \bm{\nabla}C_L + \frac{D_L C_L \overline{V}_H}{RT}\bm{\nabla}\sigma_H = \overline{J} \label{eq:BCflux} \end{align} with $\overline{\mathbf{u}}$, $\overline{\mathbf{t}}$, $\overline{C}_L$, and $\overline{J}$ being the externally enforced displacements, tractions, lattice concentration, and hydrogen inflow flux. For the metal-electrolyte interface we assume a traction-free boundary condition, $\overline{\mathbf{t}}=0$. Regarding the field $C_L$, a hydrogen influx $\overline{J}$ is generally defined at the interface, as detailed in Section \ref{Sec:ElectrolyteDomain} (Eq. \ref{eq:JfluxAbs}) but, for comparison purposes, results are also obtained with the simplistic and widely used boundary condition of prescribing a constant lattice hydrogen content $C_L$ (Section \ref{sec:bcs}).
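The condensed mass balance lends itself to a compact implementation of the trapping terms. As a minimal sketch (our own illustration, with assumed function names and no claim to match the solver used here), the trap occupancy and the effective storage factor multiplying $\dot{C}_L$ in Eq.~\eqref{eq:massbalance2} can be evaluated as:
\begin{verbatim}
import numpy as np

R_GAS = 8.314  # universal gas constant [J/(mol K)]

def trap_occupancy(C_L, N_L, E_b, T=293.0):
    # Oriani-equilibrium occupancy theta_T for one trap type,
    # assuming low lattice occupancy (theta_L << 1).
    x = C_L / N_L * np.exp(E_b / (R_GAS * T))
    return x / (1.0 + x)

def effective_capacity_factor(C_L, N_L, traps, T=293.0):
    # Factor multiplying dC_L/dt in the condensed mass balance;
    # `traps` is a list of (N_T, E_b) pairs, one per trap type.
    f = 1.0
    for N_T, E_b in traps:
        e = np.exp(E_b / (R_GAS * T))
        f += (N_T / N_L) * e / (1.0 + C_L / N_L * e) ** 2
    return f
\end{verbatim}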
\subsection{Electrolyte sub-domain} \label{Sec:ElectrolyteDomain} We model a seawater-like electrolyte, consisting of the ionic species $\mathrm{H}^+,\;\mathrm{OH}^-,\;\mathrm{Na}^+,\;\mathrm{Cl}^-$, and the dissolved metal species; here, iron $\mathrm{Fe}^{2+}$ and its reaction product $\mathrm{FeOH}^+$. Inside this electrolyte, each ionic species $\pi$ is described through its respective concentration $C_\pi$. In addition, an electric potential field $\varphi$ is present. The evolution of these concentrations is given through the Nernst-Planck mass balance: \begin{equation} \dot{C}_{\pi}+(\mathbf{v}\cdot\bm{\nabla})C_\pi+\bm{\nabla}\cdot\left(-D_\pi \bm{\nabla}C_\pi\right) + \frac{z_\pi F}{RT} \bm{\nabla} \cdot \left(-D_\pi C_\pi \bm{\nabla} \varphi\right) +R_\pi = 0 \label{eq:nernstplanck} \end{equation} where $z_\pi$ is the ionic charge and $F$ is Faraday's constant. The velocity field of the electrolyte $\mathbf{v}$ is presumed to be known; i.e., electrolyte fluid flow simulations are not conducted. In addition to the mass balance, we assume electro-neutrality, requiring the electrolyte to be neutrally charged throughout the domain: \begin{equation} \sum_\pi z_\pi C_\pi = 0 \label{eq:electroneutrality} \end{equation} These equations assume negligible interactions between the ion species outside of chemical reactions \cite{Sarkar2011}.\\ The $\mathrm{H}^+$ and $\mathrm{OH}^-$ ion concentrations are related through the water auto-ionization process: \begin{equation} \mathrm{H}_2\mathrm{O} \xrightleftharpoons[k_{w}']{k_{w}} \mathrm{H}^+ + \mathrm{OH}^- \end{equation} which is implemented through the reaction term $R_\pi$ as: \begin{equation} {R_{\mathrm{H}^+}}_1=R_{\mathrm{OH}^-} = k_{w}C_{\mathrm{H}_2\mathrm{O}} - k_{w}'C_{\mathrm{H}^+}C_{\mathrm{OH}^-} = k_{eq} \left(K_w-C_{\mathrm{H}^+} C_{\mathrm{OH}^-} \right) \label{eq:water_react} \end{equation} where $K_w = 10^{-8} \; \mathrm{mol}^2/\mathrm{m}^6$ is the water auto-ionization constant and the variable $k_{eq}$ is given a sufficiently high value to enforce an equilibrium reaction; here, $k_{eq}=10^5\;\mathrm{m}^3/(\mathrm{mol}\cdot \mathrm{s})$. In addition, the $\mathrm{Fe}^{2+}$ ions react with water according to: \begin{equation} \mathrm{Fe}^{2+} + \mathrm{H}_2\mathrm{O} \xrightleftharpoons[k_{fe}']{k_{fe}} \mathrm{FeOH}^+ + \mathrm{H}^+ \label{eq:Fe_H20} \end{equation} which in turn can react further through: \begin{equation} \mathrm{FeOH}^{+} + \mathrm{H}_2\mathrm{O} \xrightharpoonup{k_{feoh}} \mathrm{Fe}(\mathrm{OH})_2 + \mathrm{H}^+ \label{eq:FeOH_H2O} \end{equation} with $\mathrm{Fe}(\mathrm{OH})_2$ assumed not to dissolve in water and its volume assumed negligible compared to the domain size. Under these assumptions, the concentration of $\mathrm{Fe}(\mathrm{OH})_2$ need not be explicitly simulated; the reaction instead serves as a pathway for iron ions to exit the domain. We also assume that these solid reaction products do not interfere with the surface reactions. Since Reactions \eqref{eq:Fe_H20} and \eqref{eq:FeOH_H2O} both produce $\mathrm{H}^+$, the pH of the electrolyte is expected to decrease in regions with large amounts of iron ion production due to corrosion.
Each of these reactions is implemented through its associated reaction term, given by: \begin{align} R_{\mathrm{Fe}^{2+}}&=-k_{fe}C_{\mathrm{Fe}^{2+}}+k_{fe}'C_{\mathrm{FeOH}^+}C_{\mathrm{H}^+} \\ R_{\mathrm{FeOH}^+}&=k_{fe}C_{\mathrm{Fe}^{2+}}-C_{\mathrm{FeOH}^+}(k_{feoh}+k_{fe}'C_{\mathrm{H}^+})\\ {R_{\mathrm{H}^+}}_2&=k_{fe}C_{\mathrm{Fe}^{2+}}-C_{\mathrm{FeOH}^+}(k_{fe}'C_{\mathrm{H}^+}-k_{feoh}) \label{ref:RH2} \end{align} Here, one should note that the reaction term associated with $\mathrm{H}^+$ comprises both Eqs. \eqref{eq:water_react} and \eqref{ref:RH2}, such that $R_{\mathrm{H}^+} = {R_{\mathrm{H}^+}}_1+{R_{\mathrm{H}^+}}_2 $.\\ Finally, the electrolyte is subjected to the following boundary conditions on $\Gamma_{ext}$ and $\Gamma_{int}$: \begin{align} &-D_{\pi}\left(\bm{\nabla}C_\pi+\frac{z_\pi F}{RT} C_\pi\bm{\nabla}\varphi\right) = \overline{J}_{\pi} \qquad \text{or} \qquad C_\pi = \overline{C}_\pi \\ &\varphi = \overline{\varphi} \end{align} with the externally imposed fluxes, concentrations, and electric potential given by $\overline{J}_{\pi}$, $\overline{C}_{\pi}$, and $\overline{\varphi}$, respectively. \subsection{Interface interactions} \begin{figure} \centering \includegraphics{ReactionsOverview-eps-converted-to.pdf} \caption{Schematic illustration of the hydrogen evolution (left) and corrosion (right) reactions occurring near the metal surface.} \label{fig:reactions_overview_figure} \end{figure} At the interface between the metal and the electrolyte, (electro-)chemical reactions convert the hydrogen within the electrolyte into surface hydrogen, described through the adsorbed hydrogen surface coverage $\theta_{ads}$. For acidic electrolytes, the dominant reactions are given by the Volmer, Heyrovsky, Tafel, and absorption reactions: \begin{alignat}{2} \text{Volmer:} && \mathrm{H}^+ + \mathrm{M} + \mathrm{e}^- &\xrightleftharpoons[k_{Va}']{k_{Va}} \mathrm{MH}_{ads} \label{react:1} \\ \text{Heyrovsky:} && \qquad \mathrm{H}^+ + \mathrm{e}^- + \mathrm{MH}_{ads}&\xrightleftharpoons[k_{Ha}']{k_{Ha}} \mathrm{M} + \mathrm{H}_2 \label{react:2} \\ \text{Tafel:} && 2 \mathrm{MH}_{ads} &\xrightleftharpoons[k_T']{k_T} 2\mathrm{M} + \mathrm{H}_2 \label{react:3} \\ \text{Absorption:} && \mathrm{MH}_{ads} &\xrightleftharpoons[k_A']{k_A} \mathrm{MH}_{abs} \label{react:4} \end{alignat} whereas in non-acidic environments the alkaline versions of the Volmer and Heyrovsky reactions become more relevant: \begin{alignat}{2} \text{Volmer:} && \mathrm{H}_2\mathrm{O} + \mathrm{M} + \mathrm{e}^- &\xrightleftharpoons[k_{Vb}']{k_{Vb}} \mathrm{MH}_{ads} + \mathrm{OH}^- \label{react:5} \\ \text{Heyrovsky:} && \qquad \mathrm{H}_2\mathrm{O} + \mathrm{e}^- + \mathrm{MH}_{ads}&\xrightleftharpoons[k_{Hb}']{k_{Hb}} \mathrm{M} + \mathrm{H}_2 + \mathrm{OH}^- \label{react:6} \end{alignat} in which $\mathrm{M}$ denotes a metal atom interacting with hydrogen at the surface, and $k$ and $k'$ indicate forward and backward reaction constants. The reactions are schematically shown in Fig. \ref{fig:reactions_overview_figure}.
The reaction rates for these reactions are given by \citep{Elhamid2000,Danaee2011,Liu2014,CS2020}:\\ \makebox[18cm][c]{ \hspace{-2.5cm} \begin{minipage}{19cm} \begin{alignat}{4} \nonumber && && & \qquad\mathrm{Forward} && \qquad\qquad \mathrm{Backward} \\ \mathrm{Volmer (acid):} && \; && \nu_{Va} &= k_{Va} C_{\mathrm{H}^+}(1-\theta_{ads})\exp{\left(-\alpha_{Va} \frac{\eta F}{RT}\right)}\;\; && \nu_{Va}' = k_{Va}' \theta_{ads}\exp{\left((1-\alpha_{Va}) \frac{\eta F}{RT}\right)} \label{eq:react1}\\ \mathrm{Heyrovsky (acid):} && && \nu_{Ha} &= k_{Ha} C_{\mathrm{H}^+}\theta_{ads}\exp{\left(-\alpha_{Ha} \frac{\eta F}{RT}\right)}\qquad && \nu_{Ha}' = k_{Ha}' (1-\theta_{ads}) p_{\mathrm{H}_2} \exp{\left((1-\alpha_{Ha}) \frac{\eta F}{RT}\right)} \label{eq:react2}\\ \mathrm{Tafel:} && && \nu_T &= k_T\theta_{ads}^2\qquad && \nu_T' = k_T' (1-\theta_{ads})\sqrt{p_{\mathrm{H}_2}} \label{eq:react3}\\ \mathrm{Absorption:} && && \nu_A &= k_A (N_L - C_L)\theta_{ads}\qquad && \nu_A' = k_A' C_L (1-\theta_{ads}) \label{eq:react4}\\ \mathrm{Volmer (base):} && && \nu_{Vb} &= k_{Vb} (1-\theta_{ads})\exp{\left(-\alpha_{Vb} \frac{\eta F}{RT}\right)}\qquad && \nu_{Vb}' = k_{Vb}' C_{\mathrm{OH}^-} \theta_{ads}\exp{\left((1-\alpha_{Vb}) \frac{\eta F}{RT}\right)} \label{eq:react5}\\ \mathrm{Heyrovsky (base):} && && \nu_{Hb} &= k_{Hb} \theta_{ads}\exp{\left(-\alpha_{Hb} \frac{\eta F}{RT}\right)}\qquad && \nu_{Hb}' = k_{Hb}' (1-\theta_{ads}) p_{\mathrm{H}_2} C_{\mathrm{OH}^-} \exp{\left((1-\alpha_{Hb}) \frac{\eta F}{RT}\right)} \label{eq:react6} \end{alignat} \\ \end{minipage}} where $\alpha$ is used to denote the charge transfer coefficients and the partial pressure of $\mathrm{H}_2$ is assumed to be negligible ($p_{\mathrm{H}_2} \approx 0$), allowing the backwards reaction rates for Reactions \eqref{eq:react2}, \eqref{eq:react3}, and \eqref{eq:react6} to be neglected. The electric overpotential $\eta$ is given by \begin{equation} \eta=E_m - \varphi - E_{eq,\mathrm{H}} \label{eq:overpotential} \end{equation} where $E_m$ is the electric potential of the metal and $E_{eq,\mathrm{H}}$ denotes the equilibrium potential.\\ Equilibrium between the inflow and outflow fluxes can be assumed, thereby eliminating the need to treat the surface coverage as an independent degree of freedom (see \cite{CS2020}). Instead, we here choose to solve for $\theta_{ads}$, which simplifies the formulations for the fluxes at the interface and allows the surface reactions to be in a state of non-equilibrium. Accordingly, the evolution of the hydrogen surface coverage is given by: \begin{equation} N_{ads} \dot{\theta}_{ads} - (\nu_{Va}-\nu_{Va}') + (\nu_{Ha}-\nu_{Ha}') +2 (\nu_T-\nu_T') + (\nu_A-\nu_A') - (\nu_{Vb}-\nu_{Vb}') + (\nu_{Hb}-\nu_{Hb}') = 0 \label{eq:massbalanceinterface} \end{equation} with $N_{ads}$ being the number of adsorption sites per metal surface area.\\ On the other hand, the reaction rate for the corrosion of the metal surface (right side of Fig. \ref{fig:reactions_overview_figure}) is given by: \begin{equation} \nu_{Fe} = k_c \exp{\left((1-\alpha_c)\frac{\eta F}{RT}\right)} \end{equation} where $k_c$ is the corrosion rate constant and the overpotential is estimated using the equilibrium potential for the corrosion reaction $E_{eq,\mathrm{Fe}}$ in Eq. \eqref{eq:overpotential}, as opposed to $E_{eq,\mathrm{H}}$. Since the focus is on hydrogen uptake, we assume the corrosion rate to be small. 
This allows the effects of the corrosion and $\mathrm{Fe}^{2+}$ concentration on the local pH to be included, without the need to model changes in domain boundaries due to metal dissolution. \\ The interactions with the electrolyte are included through the $\mathrm{H}^+$, $\mathrm{OH}^-$, and $\mathrm{Fe}^{2+}$ fluxes at the internal boundary $\Gamma_{int}$: \begin{align} \overline{J}_{\mathrm{H}^+} &= -(\nu_{Va} - \nu_{Va}') - (\nu_{Ha} - \nu_{Ha}') \\ \overline{J}_{\mathrm{OH}^-} &= \nu_{Vb} - \nu_{Vb}' + \nu_{Hb} - \nu_{Hb}' \\ \overline{J}_{\mathrm{Fe}^{2+}} &= \nu_{Fe} \end{align} and the interaction with the metal is accounted for through the absorbed hydrogen flux going into the metal: \begin{equation} \overline{J} = \nu_A - \nu_A' \label{eq:JfluxAbs} \end{equation} These ion fluxes couple the $C_{\mathrm{H}^+}$, $C_{\mathrm{OH}^-}$, and $C_L$ concentrations at the internal interface $\Gamma_{int}$. Since the electrolyte pressure is negligible, $\Gamma_{int}$ is traction-free ($\overline{\mathbf{t}}=0$). Furthermore, the internal interface does not allow other ionic species to enter the metal ($\overline{J}_{\pi}=0$ for $\pi \neq \mathrm{H}^+,\mathrm{OH}^-,\mathrm{Fe}^{2+}$), and the metal has a constant and uniform electric potential due to its conductivity. Finally, the displacements in the material are assumed to be small, thereby neglecting changes in the size and shape of the electrolyte domain and preventing the need for deforming the mesh or re-meshing.\\ The interface reaction equations presented enable capturing the uptake of hydrogen and all relevant surface phenomena as a function of the local pH and overpotential. When combined with the equations describing the coupled deformation-diffusion behaviour of the metal (Section \ref{Sec:MetalDomain}) and the electrochemical behaviour of the electrolyte (Section \ref{Sec:ElectrolyteDomain}), hydrogen ingress can be quantified as a function of bulk environmental conditions (pH and potential difference). However, one should note that an important set of inputs to the model is the (backward and forward) reaction rate constants $k$ and $k'$. These have to be determined experimentally and vary from one material to another. Table \ref{tab:reactions_lit} reports reaction rate constants measured in the literature for pure Fe and Fe-based materials. The large scatter observed in the values reported for some reaction rate constants suggests a high sensitivity to the material and surface conditions, motivating the need for careful experimental measurements to improve the accuracy of modelling predictions. \enlargethispage{4\baselineskip} \begin{table} \begin{adjustwidth}{-1.5cm}{} \centering \caption{Forward and backward reaction rate constants reported or used in literature for pure Fe or Fe-based materials.} \label{tab:reactions_lit} \begin{tabular}{ |l|l l|l l|l| } \hline Reaction & Forward reaction rate constant $k$ & & Backward reaction rate constant $k'$ & \\ \hline $\nu_{Va}$ & \makecell[l]{$5\cdot10^{-10}$ \citep{Elhamid2000}; $5$ \citep{Turnbull1996, CS2020b}; $3.2\cdot10^{-5}$ \citep{Pickering1988z};\\ $2\cdot10^{-10}$, $1\cdot10^{-9}$, $2\cdot10^{-9}$, $2\cdot10^{-3}$\citep{Iyer1989}} & $ \mathrm{m}/\mathrm{s}$ & $0$ \citep{Turnbull1996} & $\mathrm{mol/(m}^2\mathrm{s)}$ \\ \Xhline{0.1pt} $\nu_{Ha}$ & $5\cdot10^4$ \citep{Turnbull1996,CS2020b}\tablefootnote{One should note that this magnitude appears to be inconsistent with other values listed in Table \ref{tab:reactions_lit}. 
For instance, it causes the acidic Heyrovsky reaction to be dominant even in highly alkaline environments (see the analysis in Section \ref{sec:dimensionalAnalysis}), resulting in virtually no hydrogen absorption within the metal.} & $\mathrm{m/s}\;\;$ & $0$ \citep{Turnbull1996} &$\mathrm{mol/(m}^2\mathrm{Pa\; s)}$ \\ \Xhline{0.1pt} $\nu_T$ & \makecell[l]{$7.8\cdot10^{-11}$ \citep{Bhardwaj2008}; $1.8\cdot10^{-3}$ \citep{Elhamid2000}; $22$ \citep{Vecchi2018a};\\ $7\cdot10^{-7}$, $1\cdot10^{-3}$, $3\cdot10^{-2}$, $5\cdot10^{-2}$ \citep{Iyer1989}} & $\mathrm{mol/(m}^2\mathrm{s)}$ & $0$ \citep{Turnbull1996} & $\mathrm{mol/(m}^2\mathrm{s \;Pa}^{1/2})$ \\ \Xhline{0.1pt} $\nu_A$ & \makecell[l]{$1.22\cdot10^5$ \citep{Turnbull1996,CS2020b}; $2.4\cdot10^{-12}$ \citep{Elhamid2000b};\\$3.3\cdot10^{-10}$, $5.8\cdot10^{-9}$, $6.6\cdot10^{-8}$, $6\cdot10^{9}$ \citep{Vecchi2018a}} & $\mathrm{m/s}$ & $8.8\cdot10^9$ \citep{Turnbull1996,CS2020b}; $1.9\cdot10^{-5}$ \citep{Elhamid2000b} & $\mathrm{m/s}$ \\ \Xhline{0.1pt} $\nu_{Vb}$ & $10^{-4}$ \citep{Hitz2002}; $8.29\cdot10^{-8}$ \citep{Bhardwaj2008}; $6.5\cdot10^{-3}$ \citep{Pickering1988z} & $\mathrm{mol/(m}^2\mathrm{s})$ & $2.84\cdot10^{-10}$ \citep{Bhardwaj2008}; $\mathcal{O}(10^{-7})$ \citep{Hitz2002} & $\mathrm{m/s}$ \\ \Xhline{0.1pt} $\nu_{Hb}$ & $10^{-7}$ \citep{Hitz2002}; $1.9\cdot10^{-10}$ \citep{Bhardwaj2008}& $\mathrm{mol/(m}^2\mathrm{s)}$ & $0$ \citep{Bhardwaj2008}; $5.5\cdot10^{-12}$ \citep{Hitz2002} & $\mathrm{m/(Pa \;s)}$ \\ \hline \end{tabular} \end{adjustwidth} \end{table} \subsubsection{Regimes of relevance of individual interface reactions} \label{sec:dimensionalAnalysis} It should be noted that the generalised model presented results in significantly more reaction constants than other models, which only include either the acidic or non-acidic hydrogen reactions \cite{Lee1971,Turnbull2015,Vecchi2018,Lasia2019}. Other simplifying assumptions for the hydrogen evolution reactions, such as using a single rate-determining reaction step \citep{Ma2020} or neglecting most backward reactions \citep{Liu2014, Sun2019, Tang2020}, are also often made to reduce the number of constants needed. In contrast, by including both acidic and non-acidic reactions within a single scheme, and not assuming a single rate-determining reaction step, our model is valid for the complete range of electrolyte pH and electric overpotentials. As a result, it can capture the large differences that occur between the open environment and occluded areas such as cracks. In addition, the generalised model presented encapsulates all other existing models, enabling its particularisation (e.g., to validate individual parts). However, not all reactions are relevant at the same time. A schematic illustration of the regimes of dominance of individual reactions is shown in Fig. \ref{fig:RegimePlot} as a function of the environment (pH, surface coverage). An estimate of the relative relevance of the acidic and alkaline hydrogen-producing reactions, \eqref{eq:react1} and \eqref{eq:react5}, can be obtained by considering the ratio of their reaction rates: \begin{equation} \frac{\nu_{Va}}{\nu_{Vb}} = \frac{C_{\mathrm{H}^+} k_{Va}}{k_{Vb}}\exp{\left((\alpha_{Vb}-\alpha_{Va})\frac{\eta F}{RT}\right)} \label{eq:scaling_15} \end{equation} Upon assuming $\alpha_{Vb}=\alpha_{Va}$, Eq.
(\ref{eq:scaling_15}) implies that the non-acidic reaction becomes more important than the acidic one for $\mathrm{pH} >-\log_{10} (k_{Vb}/(1000k_{Va}))$\footnote{The factor $1000$ is introduced due to the units of concentration used, $\mathrm{mol}/\mathrm{m}^3$, while the pH definition uses $\mathrm{mol}/\mathrm{L}$}. Below this pH, Reaction \eqref{eq:react1} will produce significantly more adsorbed hydrogen, whereas above this pH, Reaction \eqref{eq:react5} will determine the adsorbed hydrogen amount. Similar results are obtained for the $\mathrm{H}_2$ production through the Heyrovsky reactions \eqref{eq:react2} and \eqref{eq:react6}, with the non-acidic version becoming dominant for $\mathrm{pH} >-\log_{10} (k_{Hb}/(1000k_{Ha}))$.\\ An estimate for the total $\mathrm{H}^+$ diffusion within the electrolyte inside a crack is given by $\nu_e = h D_{\mathrm{H}^+}C_{\mathrm{H}^+}^*/L$, with $C_{\mathrm{H}^+}^*$ being an estimate for the $\mathrm{H}^+$ concentration at the crack mouth, and $h$ and $L$ respectively denoting the height and length of the crack\footnote{Here, 1D $\mathrm{H}^+$ transport is assumed, due to a concentration gradient $C_{\mathrm{H}^+}^*/L$ through a channel with height $h$, such that the total flux through this channel is $J = hD C_{\mathrm{H}^+}^*/L$}. This parameter can be used to estimate whether the acidic surface reaction is the rate-limiting step, or if instead $\mathrm{H}^+$ diffusion within the crack is what limits the amount of adsorbed hydrogen. The ratio between the two is given by: \begin{equation} \frac{\nu_{Va}}{\nu_{e}} = \frac{k_{Va} L^2 \exp{(-\alpha_{Va}\frac{\eta F}{RT})}}{hD_{\mathrm{H}^+}} \label{eq:scaling_LH} \end{equation} with the surface reactions being the rate-limiting step when this ratio is much smaller than one. Hence, longer and sharper cracks translate into a smaller transport of $\mathrm{H}^+$ towards the crack tip. The dominance of $\mathrm{H}^+$ diffusion becomes more relevant for negative overpotentials, as these require more hydrogen ions to sustain the surface reactions. This also gives a first-order estimate on whether simulating the electrolyte is important. When the ratio $\nu_{Va}/\nu_{e}$ is close to 1 or higher, then the pH of the electrolyte changes significantly within the crack or pit. However, if $\nu_{Va}/\nu_{e} \ll 1$, a rather uniform distribution of $C_{\mathrm{H}^+}$ is expected and resolving local changes in pH might not be necessary. It should also be noted that the non-acidic reaction is never limited by the transport of $\mathrm{H}^+$, and therefore should be mostly independent of the crack geometry.\\ \begin{figure} \centering \includegraphics[scale=1.1]{RegimePlot-eps-converted-to.pdf} \caption{Schematic overview of the regimes of dominance of individual reactions as a function of the hydrogen surface coverage and the local pH. Reaction $\nu_A$ is not included. The regimes relevant to surface hydrogen producing reactions are shown in light grey, while white is used for regimes associated with surface hydrogen-consuming reactions.} \label{fig:RegimePlot} \end{figure} Similarly, the diffusion of absorbed hydrogen within the metal can be described by $\nu_{m}=D_LC_L/L^*$, with $L^*$ being a characteristic length scale of the diffusion problem. By assuming the absorption reaction to occur fast, the surface coverage can be related to the lattice concentration through $C_L \approx \theta_{ads}k_A N_L/k_A'$.
This allows the amount of adsorbed hydrogen that diffuses into the metal to be compared to the rates at which it combines into $\mathrm{H}_2$ as: \begin{equation} \frac{\nu_{Ha}}{\nu_m} = \frac{k_{Ha} C_{\mathrm{H}^+} L^* \exp{(-\alpha_{Ha}\frac{\eta F}{RT})}}{D_L k_A N_L/k_A'} \qquad \frac{\nu_T}{\nu_m} = \frac{k_T \theta_{ads} L^* }{D_L k_AN_L/k_A'} \qquad \frac{\nu_{Hb}}{\nu_m} = \frac{k_{Hb} L^* \exp{(-\alpha_{Hb}\frac{\eta F}{RT})}}{D_L k_AN_L/k_A'} \end{equation} where values lower than one indicate that hydrogen entry is limited by the reaction rate, while values far above one are indicative of a hydrogen ingress process limited by diffusion within the metal, resulting in an increased surface coverage and hydrogen gas production at the surface. When the diffusive length scale $L^*$ is close to zero, such as at the onset of a simulation (this scale can be estimated from the elapsed time as $L^*=\sqrt{tD_L}$), large amounts of hydrogen diffuse into the metal. When the forward absorption reaction constant is much larger than the backward constant, i.e.\ for large ratios $k_A/k_A'$, a high lattice concentration will be obtained inside the metal, and large amounts of hydrogen will diffuse into the material. However, as $D_Lk_A/k_A'$ decreases, the $\mathrm{H}_2$ reactions start to become more dominant. For a low surface coverage, Reactions \eqref{eq:react2} and \eqref{eq:react6} will remove the adsorbed hydrogen, whereas for a higher surface coverage and low electric overpotential, Reaction \eqref{eq:react3} will be the dominant one. Since Reaction \eqref{eq:react3} is environment-independent (not a function of the pH or electrolyte potential), it does not require an accurate representation of the electrolyte. Thus, when the surface coverage is expected to be high, the effect of simulating the electrolyte, compared to using simplifications, is limited. However, when a lower-pH electrolyte is present or large overpotentials (in absolute value) are expected, the electrochemical behaviour of the electrolyte needs to be explicitly simulated in order to accurately predict the amount of adsorbed hydrogen reacting towards $\mathrm{H}_2$ (instead of being absorbed into the metal).
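These regime estimates are simple enough to evaluate directly. The sketch below is our own illustration (the default $\mathrm{H}^+$ diffusivity is a typical literature value, and the quoted rate constants are merely examples drawn from Table \ref{tab:reactions_lit}); it computes the Volmer crossover pH implied by Eq. \eqref{eq:scaling_15} and the transport-versus-reaction ratio of Eq. \eqref{eq:scaling_LH}:
\begin{verbatim}
import numpy as np

F_CONST, R_GAS = 96485.0, 8.314  # Faraday [C/mol], gas const. [J/(mol K)]

def volmer_crossover_pH(k_Va, k_Vb):
    # pH above which the alkaline Volmer reaction outpaces the acidic
    # one (assuming alpha_Va = alpha_Vb); the factor 1000 converts
    # mol/m^3 to mol/L.
    return -np.log10(k_Vb / (1000.0 * k_Va))

def transport_vs_reaction(k_Va, alpha_Va, eta, L, h, D_H=9.3e-9, T=293.0):
    # Ratio nu_Va / nu_e; values >> 1 indicate that H+ transport into
    # the crack, rather than the surface reaction, is limiting.
    return (k_Va * L**2 * np.exp(-alpha_Va * eta * F_CONST / (R_GAS * T))
            / (h * D_H))

# Example: k_Va = 5 m/s and k_Vb = 1e-4 mol/(m^2 s), taken from the
# table of reaction constants, give a crossover at pH of about 7.7.
\end{verbatim}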
\eqref{eq:massbalance2} with the test function for the lattice hydrogen concentration, $\delta C_L$, and integrating both over $\Omega_m$, resulting in: \begin{align} \int_{\Omega_m}& \bm{\nabla}^s \left(\delta \mathbf{u}\right) \mathcal{L}_{el}\bm{\nabla}^s\mathbf{u}\; \mathrm{d}\Omega_m - \int_{\Gamma_{ext}} \delta \mathbf{u} \; \overline{\mathbf{t}}\; \mathrm{d}\Gamma_{ext} = \mathbf{0} \label{eq:weak1}\\ \begin{split} \int_{\Omega_m}& \delta C_L \left(1+ \sum_i \frac{N_T^i/N_L \; \; \exp{\left(E_b^i/RT\right)}}{\left(1+C_L/N_L \exp{\left(E_b^i/RT\right)}\right)^2} \right)\dot{C}_L + D_L \bm{\nabla} \left(\delta C_L\right)\bm{\nabla} C_L - \frac{D_L \overline{V}_H}{3RT}\bm{\nabla}\left(\delta C_L\right) C_L \text{tr}\left(\mathcal{L}_{el}\bm{\nabla}^s\mathbf{u}\right)\; \mathrm{d}\Omega_m \\ &- \int_{\Gamma_{ext}} \delta C_L \overline{J}\; \mathrm{d}\Gamma_{ext} - \int_{\Gamma_{int}} \delta C_L \left( \nu_A-\nu_A'\right)\; \mathrm{d}\Gamma_{int} = 0 \end{split} \label{eq:weak2} \end{align} where $\mathcal{L}_{el}$ is the linear-elastic stiffness matrix. Note that the boundary flux arising from the boundary condition (Eq. \eqref{eq:BCflux}) is divided into two parts: one associated with the exterior boundary $\Gamma_{ext}$, where a hydrogen flux $\overline{J}$ can be prescribed, and one associated with the internal boundary $\Gamma_{int}$, where the flux is due to the absorption reaction (see Eq. \eqref{eq:JfluxAbs}). This last term provides the coupling between the adsorbed hydrogen and the hydrogen in the metal lattice.\\ Similarly, the weak forms for the electrolyte, Eqs. \eqref{eq:nernstplanck}-\eqref{eq:electroneutrality}, are obtained by multiplying with the test functions for the ion concentrations, $\delta C_\pi$, and the electric potential, $\delta \varphi$.
This results in the following weak forms for the $\mathrm{H}^+$ and $\mathrm{OH}^-$ mass balances: \begin{align} \begin{split} \int_{\Omega_e} &\delta C_{\mathrm{H}^+} \dot{C}_{\mathrm{H}^+}+\delta C_{\mathrm{H}^+} \mathbf{v}^T\bm{\nabla}C_{\mathrm{H}^+}+D_{\mathrm{H}^+}\bm{\nabla}\left(\delta C_{\mathrm{H}^+}\right) \bm{\nabla}C_{\mathrm{H}^+} \\ &+ \frac{FD_{H^+}}{RT} \bm{\nabla}\left(\delta C_{\mathrm{H}^+}\right) C_{\mathrm{H}^+} \bm{\nabla} \varphi + k_{eq}\delta C_{\mathrm{H}^+}\left(K_w-C_{\mathrm{H}^+}C_{\mathrm{OH}^-}\right)\; \mathrm{d}\Omega_e \\ &-\int_{\Gamma_{ext}} \delta C_{\mathrm{H}^+} \overline{J}_{\mathrm{H}^+}\; \mathrm{d}\Gamma_{ext} - \int_{\Gamma_{int}} \delta C_{\mathrm{H}^+} \left(-(\nu_{Va}-\nu_{Va}')-(\nu_{Ha}-\nu_{Ha}')\right) \;\mathrm{d}\Gamma_{int}= 0 \end{split} \label{eq:weak3} \\ \begin{split} \int_{\Omega_e} &\delta C_{\mathrm{OH}^-} \dot{C}_{\mathrm{OH}^-}+\delta C_{\mathrm{OH}^-} \mathbf{v}^T\bm{\nabla}C_{\mathrm{OH}^-}+D_{\mathrm{OH}^-}\bm{\nabla}\left(\delta C_{\mathrm{OH}^-}\right) \bm{\nabla}C_{\mathrm{OH}^-} + \frac{FD_{\mathrm{OH}^-}}{RT} \bm{\nabla}\left(\delta C_{\mathrm{OH}^-}\right) C_{\mathrm{OH}^-} \bm{\nabla} \varphi \\ &+ k_{eq}\delta C_{\mathrm{OH}^-}\left(K_w-C_{\mathrm{H}^+}C_{\mathrm{OH}^-}\right)\; \mathrm{d}\Omega_e \\ &-\int_{\Gamma_{ext}} \delta C_{\mathrm{OH}^-} \overline{J}_{\mathrm{OH}^-}\; \mathrm{d}\Gamma_{ext} - \int_{\Gamma_{int}} \delta C_{\mathrm{OH}^-} \left((\nu_{Vb}-\nu_{Vb}')+(\nu_{Hb}-\nu_{Hb}')\right) \;\mathrm{d}\Gamma_{int}= 0 \, , \end{split} \label{eq:weak4} \end{align} the weak forms for the $\mathrm{Fe}^{2+}$ and $\mathrm{FeOH}^+$ mass balances: \begin{align} \begin{split} \int_{\Omega_e} &\delta C_{\mathrm{Fe}^{2+}} \dot{C}_{\mathrm{Fe}^{2+}}+\delta C_{\mathrm{Fe}^{2+}} \mathbf{v}^T\bm{\nabla}C_{\mathrm{Fe}^{2+}}+D_{\mathrm{Fe}^{2+}}\bm{\nabla}\left(\delta C_{\mathrm{Fe}^{2+}}\right) \bm{\nabla}C_{\mathrm{Fe}^{2+}} + \frac{2FD_{\mathrm{Fe}^{2+}}}{RT} \bm{\nabla}\left(\delta C_{\mathrm{Fe}^{2+}}\right) C_{\mathrm{Fe}^{2+}} \bm{\nabla} \varphi \\ & - k_{fe}\delta C_{\mathrm{Fe}^{2+}} C_{\mathrm{Fe}^{2+}} + k'_{fe} \delta C_{\mathrm{Fe}^{2+}} C_{\mathrm{FeOH}^+}C_{\mathrm{H}^+}\; \mathrm{d}\Omega_e -\int_{\Gamma_{ext}} \delta C_{\mathrm{Fe}^{2+}} \overline{J}_{\mathrm{Fe}^{2+}}\; \mathrm{d}\Gamma_{ext} - \int_{\Gamma_{int}} \delta C_{\mathrm{Fe}^{2+}}\nu_{Fe} \;\mathrm{d}\Gamma_{int}= 0 \end{split} \\ \begin{split} \int_{\Omega_e} &\delta C_{\mathrm{FeOH}^{+}} \dot{C}_{\mathrm{FeOH}^{+}}+\delta C_{\mathrm{FeOH}^{+}} \mathbf{v}^T\bm{\nabla}C_{\mathrm{FeOH}^{+}}+D_{\mathrm{FeOH}^{+}}\bm{\nabla}\left(\delta C_{\mathrm{FeOH}^{+}}\right) \bm{\nabla}C_{\mathrm{FeOH}^{+}} \\ &+ \frac{FD_{\mathrm{FeOH}^{+}}}{RT} \bm{\nabla}\left(\delta C_{\mathrm{FeOH}^{+}}\right) C_{\mathrm{FeOH}^{+}} \bm{\nabla} \varphi +k_{fe}\delta C_{\mathrm{FeOH}^{+}} C_{\mathrm{Fe}^{2+}} + \delta C_{\mathrm{FeOH}^{+}} C_{\mathrm{FeOH}^+} \left( k'_{fe}C_{\mathrm{H}^+}+k_{feoh}\right)\; \mathrm{d}\Omega_e \\ & -\int_{\Gamma_{ext}} \delta C_{\mathrm{FeOH}^+} \overline{J}_{\mathrm{FeOH}^+}\; \mathrm{d}\Gamma_{ext} = 0 \, , \end{split} \end{align} and the weak form for the mass balances of the other $\pi$ ion phases: \begin{equation} \int_{\Omega_e} \delta C_{\pi} \dot{C}_{\pi}+\delta C_{\pi} \mathbf{v}^T\bm{\nabla}C_{\pi}+D_{\pi}\bm{\nabla}\left(\delta C_{\pi}\right) \bm{\nabla}C_{\pi} + \frac{z_{\pi}FD_{\pi}}{RT} \bm{\nabla}\left(\delta C_{\pi}\right) C_{\pi} \bm{\nabla} \varphi \; \mathrm{d}\Omega_e -\int_{\Gamma_{ext}} \delta C_{\pi} 
\overline{J}_{\pi}\; \mathrm{d}\Gamma_{ext} = 0 \label{eq:weak5} \end{equation} The weak form for the electroneutrality condition is given by: \begin{equation} \int_{\Omega_e} \delta \varphi \sum_{\pi} z_\pi C_\pi \; \mathrm{d}\Omega_e = 0 \label{eq:weak6} \end{equation} Finally, the weak form of the mass balance at the internal interface is obtained by multiplying Eq. \eqref{eq:massbalanceinterface} with $\delta \theta$: \begin{equation} \begin{split} \int_{\Gamma_{int}} N_{ads} \delta \theta \; \dot{\theta}_{ads} + \delta \theta \Big(- (\nu_{Va}-\nu_{Va}') + (\nu_{Ha}-\nu_{Ha}') +2 (\nu_T-\nu_T') \\ + (\nu_A-\nu_A') - (\nu_{Vb}-\nu_{Vb}') + (\nu_{Hb}-\nu_{Hb}')\Big) \; \mathrm{d}\Gamma_{int} = 0 \label{eq:weak7} \end{split} \end{equation} This last weak form couples the metal domain to the electrolyte domain through its reaction rates. The electrolyte potential and concentrations, together with the surface coverage, determine the reaction rates $\nu_{Va}$, $\nu_{Ha}$, $\nu_{Vb}$, $\nu_{Hb}$, and $\nu_{T}$. In turn, the surface coverage and the lattice hydrogen concentration determine the absorption rate $\nu_{A}$, closing the two-way coupling between the metal and electrolyte domains. These weak forms are discretised using quadratic quadrilateral elements for all variables in both domains, except for the hydrogen surface coverage, which is discretised using quadratic line elements. The implementation is performed in the commercial finite element package \texttt{COMSOL Multiphysics}. The built-in tertiary current module \citep{COMSOL2020, Dickinson2014} is used for the electrolyte, while a new interface is developed using the physics builder to implement the interface reactions and the hydrogen transport inside the metal\footnote{The computational platform developed is made freely available at \url{www.empaneda.com/codes}}. The temporal discretisation is performed using a backward difference method. A mesh sensitivity study is conducted for all case studies to ensure that the reported results are mesh-objective. The number of degrees of freedom (DOFs) employed ranges between 230,000 (Section \ref{sec:verif1}) and 390,000 (Section \ref{sec:results}). A full Newton-Raphson scheme is used to obtain converged solutions for the non-linear system. \section{Verification case studies} \label{sec:verif} To verify the numerical implementation and physical behaviour of the model, we compare our results to two benchmark case studies: a numerical study simulating localised corrosion and its effect on the pH \cite{Sun2019}, and an experimental study measuring the local pH within a large channel containing metallic samples at set intervals \cite{Gangloff2014}. \subsection{Numerical verification: localised corrosion} \label{sec:verif1} \begin{figure} \centering \includegraphics[width=7cm]{SunDudduDomain-eps-converted-to.pdf} \caption{Localised corrosion verification case study. Geometry of the electrolyte (top) and metal (bottom) domains. Corrosion is allowed to occur at the red boundary (bottom of the notch), while the remaining metal-electrolyte boundaries (blue) are exposed to hydrogen reactions.} \label{fig:verif1} \end{figure} \begin{figure} \centering \begin{subfigure}{8cm} \centering \includegraphics[trim={0 0 0 0},clip]{SunDuddu_pH-eps-converted-to.pdf} \caption{} \label{fig:verif2a} \end{subfigure} \begin{subfigure}{8cm} \centering \includegraphics[trim={0 0 0 0},clip]{SunDuddu_phi-eps-converted-to.pdf} \caption{} \label{fig:verif2b} \end{subfigure} \caption{Localised corrosion verification case study.
Predicted contours of (a) pH and lattice hydrogen concentration, and (b) electrolyte electric potential.} \label{fig:verif2} \end{figure} \begin{table} \centering \caption{Localised corrosion verification case study. Parameters used, based on the work by Sun and Duddu \cite{Sun2019}.} \label{table:verif1} \begin{tabular}{ |l l||l| } \hline Parameter & & Value\\ \hline $\mathrm{H}^+$ diffusion coefficient & $D_{\mathrm{H}^+}$ & $9.3\cdot10^{-9}\;\mathrm{m}^2/\mathrm{s}$ \\ $\mathrm{OH}^-$ diffusion coefficient & $D_{\mathrm{OH}^-}$ & $5.3\cdot10^{-9}\;\mathrm{m}^2/\mathrm{s}$ \\ $\mathrm{Na}^+$ diffusion coefficient & $D_{\mathrm{Na}^+}$ & $10^{-9}\;\mathrm{m}^2/\mathrm{s}$ \\ $\mathrm{Cl}^-$ diffusion coefficient & $D_{\mathrm{Cl}^-}$ & $10^{-9}\;\mathrm{m}^2/\mathrm{s}$ \\ $\mathrm{Fe}^{2+}$ diffusion coefficient & $D_{\mathrm{Fe}^{2+}}$ & $10^{-9}\;\mathrm{m}^2/\mathrm{s}$ \\ $\mathrm{FeOH}^+$ diffusion coefficient & $D_{\mathrm{FeOH}^+}$ & $10^{-9}\;\mathrm{m}^2/\mathrm{s}$ \\ \hline Surface adsorption sites & $N_{ads}$ & $10^{-4}\;\mathrm{mol}/\mathrm{m}^2$ \\ Lattice sites & $N_L$ & $10^6\;\mathrm{mol}/\mathrm{m}^3$ \\ Lattice diffusion coefficient & $D_L$ & $10^{-9}\;\mathrm{m}^2/\mathrm{s}$ \\ Trap concentrations & $N_T$ & $[2.5,\; 1.0]\;\mathrm{mol}/\mathrm{m}^3$ \\ Binding energies & $E_b$ & $[15,\;30]\;\mathrm{kJ}/\mathrm{mol}$\\ \hline \end{tabular} \end{table} \begin{table} \centering \caption{Localised corrosion verification case study. Reaction rates used, based on the work by Sun and Duddu \cite{Sun2019}.} \label{table:verif1b} \begin{tabular}{ |l|l|l|l|l| } \hline Reaction & $k$ & $k'$ & $\alpha$ & $E_{eq}$\\ \hline $\nu_{Va}$ & $2.07\cdot10^{-12}\; \mathrm{m}/\mathrm{s}$ & $0\;\mathrm{mol/(m}^2\mathrm{s})$ & $0.5$ & $0\;\mathrm{V}_{SHE}$\\ $\nu_{Ha}$ & $0 \;\mathrm{m/s}$ & $0 \;\mathrm{mol/(m}^2\mathrm{Pa \;s})$ & $0.5$ & $0\;\mathrm{V}_{SHE}$\\ $\nu_T$ & $0 \;\mathrm{mol/(m}^2\mathrm{s})$ & $0 \;\mathrm{mol/(m}^2\mathrm{s \;Pa}^{1/2})$ & $-$ & $-$ \\ $\nu_A$ & $1.2\cdot10^5 \;\mathrm{m/s}$ & $8.8\cdot10^9 \;\mathrm{m/s}$ & $-$ & $-$\\ $\nu_{Vb}$ & $8.29\cdot10^{-15} \; \mathrm{mol/(m}^2\mathrm{s})$ & $0 \;\mathrm{m/s}$ & $0.5$ & $0\;\mathrm{V}_{SHE}$\\ $\nu_{Hb}$ & $0 \;\mathrm{mol/(m}^2\mathrm{s})$ & $0 \;\mathrm{m/(Pa \;s)}$ & $0.5$ & $0\;\mathrm{V}_{SHE}$\\ $\nu_{Fe}$ & $2.8\cdot10^6\;\mathrm{mol}/\mathrm{(m}^2\mathrm{s})$ & $-$ & $0$ & $0\;\mathrm{V}_{SHE}$\\ \hline \hline $k_{fe}/k_{fe}'$ &$1.625\cdot10^{-4}\;\mathrm{mol}/\mathrm{m}^3$ & & & \\ $k_{feoh}$ & $0 \;\mathrm{s}^{-1}$ & & &\\ \hline \end{tabular} \end{table} The geometry, parameters and initial and boundary conditions of the first verification case follow the computational study by Sun and Duddu \cite{Sun2019}. As shown in Figure \ref{fig:verif1}, the boundary value problem consists of a square domain containing a notched metallic sample. Corrosion is taking place at the bottom of the notch, while hydrogen reactions are allowed to occur at the remaining boundaries. The electrolyte consists of a solution of $\mathrm{NaCl}$ at an initial concentration of $C_{\mathrm{Na}^+}=C_{\mathrm{Cl}^-}=1\; \mathrm{mol}/\mathrm{m}^3$ and an initial pH of 7 ($C_{\mathrm{H}^+}=C_{\mathrm{OH}^-}=10^{-4}\; \mathrm{mol}/\mathrm{m}^3$), with these initial conditions also imposed as boundary conditions on the external boundaries. The initial and boundary concentrations of iron $\mathrm{Fe}^{2+}$ ions and $\mathrm{FeOH}^+$ reactants are zero. 
The parameters used for this simulation are given in Table \ref{table:verif1}, with the reaction constants being given in Table \ref{table:verif1b}. On the external boundaries, a constant electrolyte potential $\overline{\varphi}=0\;\mathrm{V}_{SHE}$ is imposed, the lattice hydrogen concentration is set to $\overline{C}_L=0\;\mathrm{mol}/\mathrm{m}^3$, and no mechanical load is applied. A constant electric potential of $E_m=-0.2\;\mathrm{V}_{SHE}$ is assigned to the metal. In contrast to the reference solution, we also model the absorbed hydrogen transport inside the metal using a diffusion coefficient $D_L = 1\cdot10^{-9}\;\mathrm{m}^2/\mathrm{s}$, a lattice site density $N_L = 10^6\;\mathrm{mol}/\mathrm{m}^3$, and considering two types of hydrogen traps, with densities $N_T=[2.5,\; 1.0]\;\mathrm{mol}/\mathrm{m}^3$ and binding energies $E_b = [15,\;30]\;\mathrm{kJ}/\mathrm{mol}$.\\ The results obtained after steady-state is reached (at $t=12\;\mathrm{s}$) are shown in Fig. \ref{fig:verif2}, and are in good agreement with the results reported by Sun and Duddu \cite{Sun2019}: a minimum pH of 3.85 and a maximum electrolyte potential of $\varphi=0.196\;\mathrm{V}_{SHE}$, versus reference results of pH=3.824 and $\varphi=0.1973\;\mathrm{V}_{SHE}$ \cite{Sun2019}. The pH inside the crack decreases due to corrosion creating $\mathrm{Fe}^{2+}$ ions, which then react to produce $\mathrm{H}^+$ and $\mathrm{FeOH}^+$ faster than the hydrogen reactions convert the $\mathrm{H}^+$ into absorbed hydrogen. While not explored in Ref. \cite{Sun2019}, our simulation shows that this pH drop and the applied boundary conditions lead to larger amounts of absorbed hydrogen in the notch, whereas the hydrogen reactions are slower on the boundaries far away from it. A similar effect is seen for the electric potential of the electrolyte, which increases locally due to the strong corrosion reaction taking place. This results in a reduction of the corrosion rate and in an acceleration of the hydrogen reactions. Since this effect is also most pronounced inside the notch, near the corroding surface, it also leads to a larger hydrogen uptake inside the notch compared to the exterior. \subsection{Experimental verification: local pH measurements in an artificial crevice cell} \label{sec:verif2} \begin{figure} \centering \begin{subfigure}{15cm} \centering \includegraphics[width=12cm]{Domain_Gangloff-eps-converted-to.pdf} \subcaption{} \label{fig:verif3} \end{subfigure} \begin{subfigure}{15cm} \centering \includegraphics[width=14cm]{Ganglof_V06_pH-eps-converted-to.pdf} \caption{$E_m=-0.6\;\mathrm{V}_{SHE}$} \label{fig:verif4a} \end{subfigure} \begin{subfigure}{15cm} \centering \includegraphics[width=14cm]{Ganglof_V08_pH-eps-converted-to.pdf} \caption{$E_m=-0.8\;\mathrm{V}_{SHE}$} \label{fig:verif4b} \end{subfigure} \begin{subfigure}{15cm} \centering \includegraphics[width=14cm]{Ganglof_V1_pH-eps-converted-to.pdf} \caption{$E_m=-1\;\mathrm{V}_{SHE}$} \label{fig:verif4c} \end{subfigure} \caption{Experimental verification case study: (a) geometry of the artificial crevice cell used to measure local pH in Ref.
\cite{Gangloff2014}, and predictions of pH and lattice hydrogen concentration at $t=300\;\mathrm{s}$ for (b) $E_m=-0.6\;\mathrm{V}_{SHE}$, (c) $E_m=-0.8\;\mathrm{V}_{SHE}$ and (d) $E_m=-1\;\mathrm{V}_{SHE}$.} \label{fig:verif4} \end{figure} The second verification case study aims at benchmarking model predictions with local pH measurements from the artificial crevice electrochemical cell developed by Gangloff \textit{et al.} \cite{Gangloff2014}. As shown in Fig. \ref{fig:verif3}, the testing configuration consists of an artificial opening of $100\times1 \; \mathrm{mm}$ attached to a large reservoir filled with the electrolyte. At the bottom of the opening, six metal samples of size $17\times2.5 \; \mathrm{mm}$ are placed at regular intervals. An electric potential is applied to these metallic samples, and a neutral electric potential boundary condition is applied to the left and top ends of the reservoir. Since no reaction or diffusion constants are given for the metal used (Monel K-500, a Ni-based superalloy), we have estimated them by iteratively adjusting their values until a reasonable match with the measurements was achieved for the case of $E_m=-1\;\mathrm{V}_{SHE}$, after which these parameters were used to predict the results for the $E_m=-0.6\;\mathrm{V}_{SHE}$ and $E_m=-0.8\;\mathrm{V}_{SHE}$ cases. The electrolyte consists of a $600\;\mathrm{mol}/\mathrm{m}^3$ $\mathrm{NaCl}$ solution at pH 7 ($C_{\mathrm{H}^+}=C_{\mathrm{OH}^-}=10^{-4}\;\mathrm{mol}/\mathrm{m}^3$). The diffusivities of the ionic species in this electrolyte are $D_{\mathrm{H}^+}=9.3\cdot10^{-9}\;\mathrm{m}^2/\mathrm{s}$, $D_{\mathrm{OH}^-}=5.3\cdot10^{-9}\;\mathrm{m}^2/\mathrm{s}$ and $D_{\mathrm{Na}^+}=D_{\mathrm{Cl}^-}=10^{-9}\;\mathrm{m}^2/\mathrm{s}$. At the surface, the reactions are described through $k_{Va}=10^{-9}\;\mathrm{m}/\mathrm{s}$, $k_A=1.2\cdot10^{5}\;\mathrm{m}/\mathrm{s}$, $k_A'=8.8\cdot10^{8}\;\mathrm{m}/\mathrm{s}$, and $k_{Vb} = 10^{-15}\;\mathrm{mol}/(\mathrm{m}^2\mathrm{s})$, with all other reaction constants set to zero. Corrosion is also neglected. Inside the metal, the lattice diffusion coefficient is $D_L=10^{-9}\;\mathrm{m}^2/\mathrm{s}$ and no trapping sites are considered.\\ The pH inside the opening and the lattice hydrogen concentration inside the metal are shown in Fig. \ref{fig:verif4}. Since the $E_m=-1\;\mathrm{V}_{SHE}$ simulation was used to calibrate the reaction constants, the pH inside the opening matches the pH measured in Ref. \cite{Gangloff2014} after $300\;\mathrm{s}$, $\mathrm{pH}\approx 10$. It is also observed that, at the time of the results, the hydrogen has barely diffused through the metal, and most of the $\mathrm{H}^+$ that reacted at the surface was already present at the onset of the simulation instead of being supplied through diffusion within the electrolyte. The simulation of the other two cases, using $E_m = -0.6\;\mathrm{V}_{SHE}$ and $E_m=-0.8\;\mathrm{V}_{SHE}$, leads to predicted pH values of 7.7 and 8.6, respectively. These are in good agreement with the experimental measurements: pH values of 7 and 8, respectively. The small differences observed can be justified by the intrinsic experimental scatter; the values reported in Ref. \cite{Gangloff2014} for the initial conditions are ``$\mathrm{pH}=6\;\mathrm{to}\;7$'', and it is therefore reasonable to assume that a similar error band is valid for the results within the crack. This verification exercise confirms that the model is capable of replicating experimental observations.
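These observations can be rationalised through the diffusive length scale $L^*=\sqrt{tD}$ introduced in the dimensional analysis (Section \ref{sec:dimensionalAnalysis}). The back-of-the-envelope check below, a sketch using the parameter values listed above, compares the distances travelled in $300\;\mathrm{s}$ by hydrogen in the metal and by $\mathrm{H}^+$ in the electrolyte.
\begin{verbatim}
import math

t = 300.0     # duration of the experiment [s]
D_L = 1e-9    # lattice diffusion coefficient in the metal [m^2/s]
D_H = 9.3e-9  # H+ diffusion coefficient in the electrolyte [m^2/s]

# Diffusive length scales L* = sqrt(t D)
print(math.sqrt(t * D_L) * 1e3)  # ~0.55 mm travelled by hydrogen in the metal
print(math.sqrt(t * D_H) * 1e3)  # ~1.7 mm travelled by H+ in the electrolyte
\end{verbatim}
Both distances are small compared to the $100\;\mathrm{mm}$ length of the opening, consistent with the limited hydrogen penetration of the metal and with the reacted $\mathrm{H}^+$ originating from the electrolyte initially present in the opening rather than from the reservoir.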
\FloatBarrier \section{Quantifying hydrogen ingress and metal-electrolyte interactions} \label{sec:results} \begin{figure} \centering \includegraphics[width=8cm]{Domain-eps-converted-to.pdf} \caption{Overview of the geometry and boundary conditions used.} \label{fig:geo} \end{figure} \begin{table} \caption{Material and ionic transport parameters used in the metal-electrolyte interaction studies of Section \ref{sec:results}.} \label{table:params} \centering \begin{tabular}{ |l l||l| } \hline Parameter & & Value\\ \hline $\mathrm{H}^+$ diffusion coefficient & $D_{\mathrm{H}^+}$ & $9.3\cdot10^{-9}\;\mathrm{m}^2/\mathrm{s}$ \\ $\mathrm{OH}^-$ diffusion coefficient & $D_{\mathrm{OH}^-}$ & $5.3\cdot10^{-9}\;\mathrm{m}^2/\mathrm{s}$ \\ $\mathrm{Na}^+$ diffusion coefficient & $D_{\mathrm{Na}^+}$ & $1.3\cdot10^{-9}\;\mathrm{m}^2/\mathrm{s}$ \\ $\mathrm{Cl}^-$ diffusion coefficient & $D_{\mathrm{Cl}^-}$ & $2\cdot10^{-9}\;\mathrm{m}^2/\mathrm{s}$ \\ $\mathrm{Fe}^{2+}$ diffusion coefficient & $D_{\mathrm{Fe}^{2+}}$ & $1.4\cdot10^{-9}\;\mathrm{m}^2/\mathrm{s}$ \\ $\mathrm{FeOH}^+$ diffusion coefficient & $D_{\mathrm{FeOH}^+}$ & $10^{-9}\;\mathrm{m}^2/\mathrm{s}$ \\ \hline Young's Modulus & $E$ & $200\;\mathrm{GPa}$\\ Poisson ratio & $\nu$ & $0.25$\\ Partial molar volume & $\overline{V}_H$ & $2\cdot10^{-6}\;\mathrm{m}^3/\mathrm{mol}$\\ Surface adsorption sites & $N_{ads}$ & $10^{-4}\;\mathrm{mol}/\mathrm{m}^2$ \\ Lattice sites & $N_L$ & $10^6\;\mathrm{mol}/\mathrm{m}^3$ \\ Lattice diffusion coefficient & $D_L$ & $10^{-9}\;\mathrm{m}^2/\mathrm{s}$ \\ Trap concentrations & $N_T$ & $[2.5,\; 1.0]\;\mathrm{mol}/\mathrm{m}^3$ \\ Binding energies & $E_b$ & $[15,\;30]\;\mathrm{kJ}/\mathrm{mol}$\\ Temperature & $T$ & $293.15\;\mathrm{K}$\\ \hline \end{tabular} \end{table} \begin{table} \caption{Reaction rate constants used in the metal-electrolyte interaction studies of Section \ref{sec:results}.} \label{tab:reactionsused} \centering \begin{tabular}{ |l|l|l|l|l| } \hline Reaction & $k$ & $k'$ & $\alpha$ & $E_{eq}$\\ \hline $\nu_{Va}$ & $1\cdot10^{-4}\; \mathrm{m}/\mathrm{s}$ & $1\cdot10^{-10}\;\mathrm{mol/(m}^2\mathrm{s)}$ & $0.5$ & $0\;\mathrm{V}_{SHE}$ \\ $\nu_{Ha}$ & $1\cdot10^{-10} \;\mathrm{m/s}$ & $0 \;\mathrm{mol/(m}^2\mathrm{Pa\; s)}$ & $0.3$ & $0\;\mathrm{V}_{SHE}$\\ $\nu_T$ & $1\cdot10^{-6} \;\mathrm{mol/(m}^2\mathrm{s)}$ & $0 \;\mathrm{mol/(m}^2\mathrm{s \;Pa}^{1/2})$ & $-$ & $-$ \\ $\nu_A$ & $1.2\cdot10^5 \;\mathrm{m/s}$ & $8.8\cdot10^9 \;\mathrm{m/s}$ & $-$ & $-$ \\ $\nu_{Vb}$ & $1\cdot10^{-8} \; \mathrm{mol/(m}^2\mathrm{s})$ & $1\cdot10^{-13} \;\mathrm{m/s}$ & $0.5$ & $0\;\mathrm{V}_{SHE}$ \\ $\nu_{Hb}$ & $8\cdot10^{-10} \;\mathrm{mol/(m}^2\mathrm{s)}$ & $0 \;\mathrm{m/(Pa \;s)}$ & $0.3$ & $0\;\mathrm{V}_{SHE}$ \\ $\nu_{Fe}$ & $3.1\cdot10^{-10}\;\mathrm{mol}/\mathrm{(m}^2\mathrm{s)}$ & $-$ & $0$ & $-0.4\;\mathrm{V}_{SHE}$ \\ \hline $k_{fe}$ & $1\;\mathrm{s}^{-1}$ & $10^{-3}\;\mathrm{m}^3/(\mathrm{mol}\;\mathrm{s})$ & & \\ $k_{feoh}$ & $10^{-2} \;\mathrm{s}^{-1}$ & & & \\ $k_{eq}$ & $10^5\;\mathrm{m}^3/(\mathrm{mol}\;\mathrm{s})$ & & & \\ \hline \end{tabular} \end{table} We use our electro-chemo-mechanical model to provide new insight by exploring the interactions between the metal and the electrolyte. The roles of the applied potential (Section \ref{sec:res_E}), fluid velocity (Section \ref{sec:fluid_V}) and defect geometry (Section \ref{sec:res_geo}) are investigated.
To this end, we simulate two $10\times10\;\mathrm{mm}$ domains representing the electrolyte and the metal, with the metal containing an initial defect (pit, crack) of dimensions $L\times h$, as shown in Fig. \ref{fig:geo}. These defect dimensions are generally taken to be $L=5\;\mathrm{mm}$ and $h=0.4\;\mathrm{mm}$, but are also varied in Section \ref{sec:res_geo} to investigate their influence. The metallic sample is assumed to be uncharged with hydrogen at the onset of the simulations ($t=0$). Regarding its mechanical behaviour, the bottom of the sample is fixed, with both vertical and horizontal displacements constrained, and a vertical displacement of $U_{ext}=0.5\;\mu\mathrm{m}$ is applied at the top edge. The electrolyte has an initial $\mathrm{pH}=5$ with initial concentrations $C_{\mathrm{H}^+}=10^{-2}\;\mathrm{mol}/\mathrm{m}^3$, $C_{\mathrm{OH}^-}=10^{-6}\;\mathrm{mol}/\mathrm{m}^3$, $C_{\mathrm{Na}^+}=599.99\;\mathrm{mol}/\mathrm{m}^3$, $C_{\mathrm{Cl}^-}=600\;\mathrm{mol}/\mathrm{m}^3$, and $C_{\mathrm{Fe}^{2+}}=C_{\mathrm{FeOH}^+}=0\;\mathrm{mol}/\mathrm{m}^3$. Together with $\overline{\varphi}=0\;\mathrm{V}_{SHE}$, these concentrations are also prescribed on the left boundary as boundary conditions throughout the simulation. The electric potential of the metal is generally kept at $E_m=0\;\mathrm{V}_{SHE}$ ($-0.24\;\mathrm{V}_{SCE}$) but is also varied between $E_m=-0.7\;\mathrm{V}_{SHE}$ ($-0.94\;\mathrm{V}_{SCE}$) and $E_m=0.5\;\mathrm{V}_{SHE}$ ($0.26\;\mathrm{V}_{SCE}$). These values were chosen to span a wide range of environments, from positive potentials where corrosion reactions dominate, to negative potentials where hydrogen reactions govern the electrochemical behaviour. It should be noted that numerical convergence worsens significantly for applied potentials more negative than $-0.7\;\mathrm{V}_{SHE}$, due to the high reaction rates at the electrolyte-metal interface. The fluid velocity is generally assumed to be negligible (Section \ref{sec:res_E}) or to vary linearly from $V_{max}$ at the left edge of the electrolyte to zero at the electrolyte-metal interface (see Fig. \ref{fig:geo}). We investigate its influence by considering a value of $V_{max}=10\;\mathrm{mm}/\mathrm{s}$ in Section \ref{sec:res_geo} and by varying its magnitude from $V_{max}=0\;\mathrm{mm}/\mathrm{s}$ to $V_{max}=29\;\mathrm{mm}/\mathrm{s}$ in Section \ref{sec:fluid_V}. When a fluid velocity is included, additional boundary conditions are defined at the bottom and top of the electrolyte sub-domain: an inflow boundary condition at the bottom, setting the concentrations on this boundary equal to the initial conditions to emulate new electrolyte coming into the domain, and an outflow boundary condition at the top, restricting diffusion across this boundary while allowing advective species transport through it.\\ The material and ionic transport parameters used are given in Table \ref{table:params}, while the magnitudes of the reaction constants adopted are listed in Table \ref{tab:reactionsused}. Our choices aim at characterising the behaviour of Fe or Fe-based materials, for which sufficient data exists.
In particular, our choices of reaction rate constants are based on those reported in the literature (see Table \ref{tab:reactions_lit}), focusing on the values reported for pure Fe and taking intermediate values within the range provided.\\ \subsection{Electric overpotential} \label{sec:res_E} \begin{figure} \centering \begin{subfigure}{12cm} \centering \includegraphics[width=12cm]{surf_CL_VS_Em_-05.jpg} \caption{$E_m=-0.5\;\mathrm{V}_{SHE}$} \label{fig:Em_m05} \end{subfigure} \begin{subfigure}{12cm} \centering \includegraphics[width=12cm]{surf_CL_VS_Em_0.jpg} \caption{$E_m=0\;\mathrm{V}_{SHE}$} \label{fig:Em_0} \end{subfigure} \begin{subfigure}{12cm} \centering \includegraphics[width=12cm]{surf_CL_VS_Em_05.jpg} \caption{$E_m=0.5\;\mathrm{V}_{SHE}$} \label{fig:Em_p05} \end{subfigure} \caption{Metal-electrolyte interactions: influence of the applied potential $E_m$. Contours of pH (left, electrolyte domain) and lattice hydrogen concentration (right, metal domain) at a time $t=10\;\mathrm{min}$ for the following metal potentials: (a) $E_m=-0.5\;\mathrm{V}_{SHE}$, (b) $E_m=0\;\mathrm{V}_{SHE}$, and (c) $E_m=0.5\;\mathrm{V}_{SHE}$.} \label{fig:Em} \end{figure} \begin{figure} \centering \begin{subfigure}{8cm} \centering \includegraphics[width=8cm]{surfPhi_CL_VS_Em_-05.jpg} \caption{$E_m=-0.5\;\mathrm{V}_{SHE}$} \label{fig:Em_Phi_m05} \end{subfigure} \begin{subfigure}{8cm} \centering \includegraphics[width=8cm]{surfPhi_CL_VS_Em_05.jpg} \caption{$E_m=0.5\;\mathrm{V}_{SHE}$} \label{fig:Em_Phi_0} \end{subfigure} \caption{Metal-electrolyte interactions: influence of the applied potential $E_m$. Contours of electrolyte potential $\varphi$ at a time $t=10\;\mathrm{min}$ for the following metal potentials: (a) $E_m=-0.5\;\mathrm{V}_{SHE}$ and (b) $E_m=0.5\;\mathrm{V}_{SHE}$.} \label{fig:Em_Phi} \end{figure} \begin{figure} \centering \includegraphics{lines_CL_VS_Em-eps-converted-to.pdf} \caption{Metal-electrolyte interactions: Estimates of the pH (blue crosses, left $y$-axis) and lattice hydrogen concentration (orange circles, right $y$-axis) at the crack tip after $t=10\;\mathrm{min}$, as a function of the applied potential.} \label{fig:Em_LatticeH} \end{figure} \begin{figure} \centering \includegraphics{lines_CL_VS_Em_React-eps-converted-to.pdf} \caption{Metal-electrolyte interactions: Log-linear plot of the reaction rates at the crack tip, as a function of the applied potential after a time of $t=10\;\mathrm{min}$.} \label{fig:Em_ReactionRate} \end{figure} Corrosion reaction rates are reduced when a negative electric potential is applied to a metal, and thus this is one of the most commonly used methods to prevent corrosion (either by direct application, or through the addition of a sacrificial metal). However, these negative electric potentials increase the rate of the hydrogen reactions, augmenting the amount of absorbed hydrogen and the risk of experiencing hydrogen-assisted failures \citep{Kehler2008,AM2016}. The results obtained are shown in Fig. \ref{fig:Em}. Specifically, contours of pH and lattice hydrogen are provided after $t=10$ min for selected values of the applied potential ($E_m$): $-0.5\;\mathrm{V}_{SHE}$, $0\;\mathrm{V}_{SHE}$, and $0.5\;\mathrm{V}_{SHE}$. The results show how the lowest applied potential considered accelerates the hydrogen reactions, up to the point where they consume all available $\mathrm{H}^+$ ions, resulting in a high pH not just inside the pit but also on the exterior of the domain (see Fig. \ref{fig:Em}a). In addition, the non-acidic reaction in Eq.
\eqref{eq:react5} is accelerated such that large amounts of hydrogen are absorbed inside the metal despite the high environmental pH, with this reaction producing additional $\mathrm{OH}^-$ ions to sustain the high pH of the electrolyte. In contrast, the corrosion reaction becomes relevant when a large positive electric potential is applied, producing iron ions which react within the electrolyte to produce additional $\mathrm{H}^+$, resulting in a lower pH near the metal (see Fig. \ref{fig:Em}c). This low pH strongly increases the adsorbed hydrogen produced through Reaction \eqref{eq:react1}, counteracting the reduction of hydrogen reaction rates associated with high electric potentials. Finally, when a neutral electric potential is applied, Fig. \ref{fig:Em}b, these two effects are balanced, with the pH being lowered by the corrosion reaction and raised by the hydrogen reactions. Another effect contributing to the differences is the geometry of the simulated domain. Near the entrance of the crack, a higher lattice hydrogen concentration is observed, since hydrogen is absorbed into the metal from both the exterior and crack faces. In contrast, at the crack tip the hydrogen concentration is slightly lower, as diffusion away from the tip spreads the lattice hydrogen over a larger area. \\ Changes in the applied potential also have an effect on the electrolyte potential, as shown in Fig. \ref{fig:Em_Phi}. For the case of a negative applied potential, $E_m=-0.5\;\mathrm{V}_{SHE}$, the corrosion reactions are non-existent and the hydrogen reactions are relatively slow (when compared with more aggressive cathodic potentials). As a result, only small changes in the distribution of the electrolyte potential are observed, with the removal of positively charged $\mathrm{H}^+$ species translating into a small reduction in the electrolyte potential. In contrast, for the case of $E_m=0.5\;\mathrm{V}_{SHE}$ (Fig. \ref{fig:Em_Phi}b), the corrosion reaction dominates, causing large quantities of positively charged $\mathrm{Fe}^{2+}$ to enter the electrolyte at the interface. As a result, the electrolyte potential increases significantly, causing the electric overpotential to deviate noticeably from its initial and boundary values after just 10 minutes.\\ The results show that hydrogen uptake is enhanced through two mechanisms: (i) higher hydrogen reaction rates due to lower applied potentials, and (ii) a lower pH resulting from corrosion, as observed at high applied potentials. Accordingly, there is an intermediate regime where the hydrogen uptake is reduced. This is shown in Fig. \ref{fig:Em_LatticeH}, where crack tip predictions of pH (blue crosses, left $y$-axis) and lattice hydrogen concentration $C_L$ (orange circles, right $y$-axis) are shown as a function of the applied potential. A point of minimum hydrogen uptake is observed at $E_m=-0.2\;\mathrm{V}_{SHE}$ ($-0.441\;\mathrm{V}_{SCE}$). Such behaviour is also observed experimentally, with hydrogen embrittlement susceptibility diminishing with increasing applied potential up to a certain point, after which susceptibility increases with $E_m$ \cite{AM2016}. Further insight into the dependence on the applied potential and the competition between the different reaction kinetics can be gained through Fig. \ref{fig:Em_ReactionRate}, where individual reaction rates are reported as a function of $E_m$. As predicted from Eq.
\eqref{eq:scaling_15}, reaction $\nu_{Va}$ is dominant for low pH values, while $\nu_{Vb}$ becomes the dominant reaction for pH values above 7. The application of a negative metal potential makes the overpotential more negative, accelerating reaction $\nu_{Vb}$ from being almost negligible to becoming the sole source of adsorbed hydrogen. In contrast, positive electric potentials slow down all hydrogen reactions by altering the overpotential, while accelerating reaction $\nu_{Va}$ by strongly decreasing the pH. This causes a strong rise in lattice hydrogen when going from neutral to positive potentials. However, this rise flattens for higher potentials, as the increased availability of $\mathrm{H}^+$ is offset by the overpotential-driven reduction of the reaction rate. \subsection{Fluid velocity} \label{sec:fluid_V} \begin{figure} \centering \begin{subfigure}{12cm} \centering \includegraphics[width=12cm]{surf_Res_VMax_0.001.jpg} \caption{$V_{max}=1\;\mathrm{mm}/\mathrm{s}$} \label{fig:Vel1} \end{subfigure} \begin{subfigure}{12cm} \centering \includegraphics[width=12cm]{surf_Res_VMax_0.02.jpg} \caption{$V_{max}=20\;\mathrm{mm}/\mathrm{s}$} \label{fig:Vel20} \end{subfigure} \caption{Metal-electrolyte interactions: Effect of the electrolyte velocity on the pH and lattice hydrogen concentration at $t=5\;\mathrm{min}$. The electrolyte velocity varies linearly from $V_{max}$ at the left edge of the electrolyte, to 0 at the electrolyte-metal interface (see Fig. \ref{fig:geo}). Here, results are provided for the choices of (a) $V_{max}=1\;\mathrm{mm}/\mathrm{s}$ and (b) $V_{max}=20\;\mathrm{mm}/\mathrm{s}$.} \label{fig:Vel} \end{figure} \begin{figure} \centering \includegraphics{lines_V-eps-converted-to.pdf} \caption{Metal-electrolyte interactions: Effect of the electrolyte velocity on the pH (blue, left $y$-axis) and lattice hydrogen concentration (orange, right $y$-axis) at $t=5\;\mathrm{min}$ and various locations along the metal-electrolyte interface ($y$, with $y=5$ mm being the centre height).} \label{fig:Vel_LatticeH} \end{figure} While it is reasonable to assume negligible fluid flow within the occluded geometry of the crack, this is less realistic for the electrolyte located in the bulk of the domain. To investigate the interplay between the bulk electrolyte velocity and hydrogen ingress, we prescribe a fluid velocity that varies linearly from a magnitude $V_{max}$ at the edge of the electrolyte to zero at the electrolyte-metal interface (see Fig. \ref{fig:geo}). Specifically, we vary the maximum velocity from $V_{max}=0\;\mathrm{mm}/\mathrm{s}$ (corresponding to the results from the previous section) up to $V_{max}=29\;\mathrm{mm}/\mathrm{s}$, using $1\;\mathrm{mm}/\mathrm{s}$ intervals. \\ The distributions of electrolyte pH and metal lattice hydrogen concentration are shown in Fig. \ref{fig:Vel} for the representative cases of $V_{max}=1\;\mathrm{mm}/\mathrm{s}$ and $V_{max}=20\;\mathrm{mm}/\mathrm{s}$. The pH and lattice concentration are monitored at the crack tip ($y=5\;\mathrm{mm}$) and at the exterior surface $2.5\;\mathrm{mm}$ below and above the crack mouth ($y=2.5\;\mathrm{mm}$ and $y=7.5\;\mathrm{mm}$, respectively). The results show that, while a moving electrolyte has a significant effect on the concentrations within the bulk and near the exterior of the metal, the pH inside the defect is rather insensitive.
This insensitivity of the local pH to the bulk electrolyte velocity is due to the fact that the electrolyte near the crack tip is in a state of local equilibrium, as it is located too far from the outer domain for any meaningful quantity of ions to diffuse to the crack tip. As a result, the hydrogen uptake near the crack tip within the metal also shows only a small sensitivity to the electrolyte velocity. However, the effect of the electrolyte velocity on the hydrogen uptake near the exterior boundary is more noticeable. This is shown in Fig. \ref{fig:Vel_LatticeH}, where the pH (left, blue colour) and the lattice hydrogen concentration (right, orange) are plotted as a function of $V_{max}$ at different positions along the interface height ($y$). Outside of the crack mouth, the pH is sensitive to the fluid velocity, increasing with $V_{max}$ until it approaches its initial value (pH=5). While corrosion decreases the pH locally, this effect is limited for higher velocities, where the $\mathrm{Fe}^{2+}$ and $\mathrm{FeOH}^+$ ions are removed by advection before they can react to create $\mathrm{H}^+$ ions. Furthermore, the imposed velocity removes the $\mathrm{H}^+$ ions that are added as a result of the corrosion reaction. These ions are advected upwards along the metal-electrolyte interface, causing the ion concentrations nearer to the bottom of the electrolyte domain to remain close to the boundary conditions. Higher up, the combination of advected ions and newly created ions due to reactions causes the pH to deviate more from these boundary conditions. As a result, the effect of including the fluid flow on the pH is strongest near the bottom of the domain (compare with the $V_{max}=0$ result in Fig. \ref{fig:Em}), whereas its effect is lessened further upward. The fluid velocity also changes the electric potential near the metal-electrolyte interface, causing higher potentials closer to the bottom of the domain, while lower potentials are observed nearer the top. Hence, the electric potential can locally counteract the effect of pH on hydrogen ingress. Thus, while a higher fluid velocity leads to a rise in bulk pH near the interface (resulting in less hydrogen absorption on the exterior boundaries), the increase in $\varphi$ with fluid velocity in the bottom region of the domain results in a reduction in the hydrogen uptake close to the inflow boundary condition. This indicates that if the fluid velocity is sufficiently high compared to the domain size, simulating the exterior electrolyte becomes less relevant, and simply imposing the initial concentrations might be appropriate. However, this is not valid for occluded regions, where the pH is significantly different from its initial value and insensitive to the imposed velocity.
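The competition between advective and diffusive transport in the bulk can be summarised through a Péclet number, $\mathrm{Pe}=VL_c/D$. The sketch below is an order-of-magnitude check, not part of the model, which assumes the $10\;\mathrm{mm}$ electrolyte domain as the characteristic length; it illustrates why advection dominates in the bulk while the occluded defect remains reaction- and diffusion-controlled.
\begin{verbatim}
# Peclet number Pe = V * L_c / D: ratio of advective to diffusive transport
D_H = 9.3e-9  # H+ diffusion coefficient [m^2/s]
L_c = 10e-3   # characteristic length, taken as the electrolyte domain size [m]

for V in (1e-3, 10e-3, 29e-3):  # V_max values considered in this section [m/s]
    print(f"V = {V*1e3:4.0f} mm/s -> Pe = {V * L_c / D_H:.1e}")
# Pe ~ 1e3-3e4 in the bulk, so advection dominates there; inside the defect
# the velocity is ~0, so Pe -> 0 and local reaction/diffusion equilibrium holds.
\end{verbatim}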
\subsection{Defect geometry} \label{sec:res_geo} \begin{figure} \centering \begin{subfigure}{8cm} \centering \includegraphics{pH_Sweep-eps-converted-to.pdf} \caption{} \label{fig:hL_surfs_pH} \end{subfigure} \begin{subfigure}{8cm} \centering \includegraphics{CL_Sweep-eps-converted-to.pdf} \caption{} \label{fig:hL_surfs_CL} \end{subfigure} \begin{subfigure}{8cm} \centering \includegraphics{H_Sweep-eps-converted-to.pdf} \caption{} \label{fig:hL_surfs_H} \end{subfigure} \begin{subfigure}{8cm} \centering \includegraphics{E_Sweep-eps-converted-to.pdf} \caption{} \label{fig:hL_surfs_E} \end{subfigure} \caption{Metal-electrolyte interactions: effect of the defect geometry on crack tip (a) pH, (b) absorbed lattice hydrogen concentration $C_L$, (c) $\mathrm{H}^+$ concentration, and (d) electrolyte electric potential $\varphi$. The maps are built for a range of defect lengths ($L=0.1$ mm to $L=5$ mm) and heights ($h=0.1$ mm to $h=4$ mm). Results are shown at $t=5\;\mathrm{min}$, with the red crosses indicating simulation data points and the contours being built by interpolating linearly between these points.} \label{fig:hL_surfs} \end{figure} \begin{figure} \centering \begin{subfigure}{12cm} \centering \includegraphics[width=12cm]{surf_pH_Res_Hfrac_0.001_Lfrac_0.005.jpg} \caption{$h=1\;\mathrm{mm}$} \label{fig:LH_opening1} \end{subfigure} \begin{subfigure}{12cm} \centering \includegraphics[width=12cm]{surf_pH_Res_Hfrac_0.004_Lfrac_0.005.jpg} \caption{$h=4\;\mathrm{mm}$} \label{fig:LH_opening4} \end{subfigure} \caption{Metal-electrolyte interactions: influence of the defect dimensions. Electrolyte pH and absorbed lattice hydrogen concentration in the metal for two representative case studies: (a) a defect of height $h=1\;\mathrm{mm}$, and (b) a defect of height $h=4\;\mathrm{mm}$. In both cases, the defect length equals $L=5\;\mathrm{mm}$, and the results correspond to a time of $t=5\;\mathrm{min}$.} \label{fig:LH_opening} \end{figure} As seen in the previous section, the pH and the hydrogen uptake near the crack tip are not influenced by the exterior electrolyte's behaviour and pH when the crack is sufficiently long. This was also seen in the scaling analysis of Eq. \eqref{eq:scaling_LH}, indicating that for all but the shortest crack lengths the acidic hydrogen evolution reaction rate will be limited by the diffusion of $\mathrm{H}^+$ into the crack and by the $\mathrm{H}^+$ produced by local corrosion reactions. In this section, we investigate the effect of the defect geometry, varying the defect length from $L=0.1$ mm to $L=5$ mm and the defect height from $h=0.1$ mm to $h=4$ mm. The radius at the defect tip is taken as the smaller of the two dimensions. This span of $L$ and $h$ values aims at covering a wide range of occluded geometries, from pit-like circular defects that can arise from localised corrosion to long sharp cracks. For the fluid velocity, we use $V_{max}=10\;\mathrm{mm}/\mathrm{s}$, enforcing a near-constant pH at the defect mouth. The results obtained are shown in Fig. \ref{fig:hL_surfs}, where maps are constructed that relate the defect geometry to the crack tip estimates of pH (Fig. \ref{fig:hL_surfs}a), absorbed lattice hydrogen (Fig. \ref{fig:hL_surfs}b), $\mathrm{H}^+$ concentration (Fig. \ref{fig:hL_surfs}c), and electrolyte potential (Fig. \ref{fig:hL_surfs}d).\\ Consider first the pH and $\mathrm{H}^+$ concentration results, Figs. \ref{fig:hL_surfs}a and \ref{fig:hL_surfs}c.
For shorter defects ($L<2\;\mathrm{mm}$), the pH shows a strong sensitivity to the defect length but only a weak sensitivity to the defect opening. The $\mathrm{H}^+$ generated through the reacting $\mathrm{Fe}^{2+}$ diffuses more readily away from the defect tip when the defect length is small. Hence, wider pits allow for more diffusion relative to their surface area, resulting in a pH closer to the exterior conditions. For defects longer than $2\;\mathrm{mm}$, diffusion becomes negligible and the pH is solely governed by the local equilibrium of the hydrogen and corrosion reactions. This strong dependency on the defect length, and weak dependency on its height, corresponds roughly to the predictions of Eq. \eqref{eq:scaling_LH}, as discussed in Section \ref{sec:dimensionalAnalysis}. However, it should be noted that what determines whether the results are dominated by the local reactions or by diffusion for a near-neutral metal potential is the diffusion of $\mathrm{H}^+$ ions away from the defect, and not towards it. Consider now the sensitivity of the crack tip electrolyte potential to the defect dimensions, Fig. \ref{fig:hL_surfs}d. As can be observed, the crack tip value of $\varphi$ generally increases with the defect length. Longer and thinner defects hinder electrolyte ionic transport and lead to noticeable increases in the crack tip electrolyte potential. Finally, the uptake of hydrogen is shown in Fig. \ref{fig:hL_surfs_CL}, in terms of the absorbed lattice hydrogen concentration at the crack tip. The results reveal a significant sensitivity to the defect geometry. In particular, the lattice hydrogen content increases with the defect height. This is also observed for large defects, where changes in the pH and electrolyte potential are small. This behaviour is intrinsic to the crack tip geometry, as the hydrogen diffuses away from it into the metal. For narrow cracks with small tip radii, radial diffusion spreads the available lattice hydrogen over a relatively large area. In contrast, defects with a large opening height and a large tip radius, such as pits, will cause almost one-dimensional diffusion away from their tip. This effect is shown in Fig. \ref{fig:LH_opening}, where contours of electrolyte pH and metal lattice hydrogen content are shown for two selected values of the defect height. The hydrogen concentrations at the top and bottom faces of the crack are almost identical, as could be expected given the similar pH and low electric potential, while the hydrogen around the crack tip spreads over a larger region for the $1\;\mathrm{mm}$ height (relative to the $4\;\mathrm{mm}$ case). However, these bulk diffusion effects can be naturally captured by existing metal deformation-diffusion models, without explicitly simulating the electrolyte. \\ In terms of electrolyte-geometry interactions, our results show that when the defect is sufficiently long ($L>2\;\mathrm{mm}$), the defect geometry is less relevant, as long as the influence of the electric potential is low. For these cases, having an exact description of the geometry is not needed to estimate the local pH, and one can consider local equilibrium to describe the environmental conditions. Similarly, if the local environmental conditions are known for a specific geometry, it can be reasonably expected that they would be applicable to other geometries within the ``large crack'' regime ($L>2\;\mathrm{mm}$, for the material and conditions considered here).
In contrast, for shorter defects the geometry has a significant effect on the electrolyte behaviour, as the diffusion of ions into and out of the defect becomes important. In those circumstances, small deviations in crack length can cause significant changes in environmental conditions and hydrogen uptake. \FloatBarrier \section{Assessment of modelling strategies} \label{sec:bcs} \begin{figure} \centering \includegraphics{lines_CL_VS_Em_Env-eps-converted-to.pdf} \caption{Mapping the crack tip pH and electrolyte potential as a function of the applied potential $E_m$. The results correspond to the case of a sufficiently large crack, after a time of $t=10\;\mathrm{min}$, neglecting the fluid velocity and prescribing a pH of 5 and a potential of $\varphi=0\;\mathrm{V}_{SHE}$ at the external electrolyte edge (located at $10\;\mathrm{mm}$ from the metal surface). The pH results use blue crosses and refer to the left $y$-axis, while the electrolyte potential results use orange circles and refer to the right $y$-axis.} \label{fig:EnvironmentalMap} \end{figure} The results shown so far reveal that the pH and the electrolyte potential near the metal surface deviate significantly from the initial pH and the applied potential. However, the results presented in Section \ref{sec:res_geo} also show that the electric potential is only weakly dependent on the crack geometry, and that an almost constant pH is obtained inside the defect, as long as the defect is sufficiently long. Hence, our model can be used to determine the local pH and electrolyte potential associated with a given applied potential, and these would be relevant for all sufficiently long cracks. These relations, shown in Fig. \ref{fig:EnvironmentalMap}, can be used as input to simplified hydrogen uptake models that provide a relatively accurate quantification of hydrogen ingress without the need to resolve the complete electro-chemo-mechanical problem. Thus, we proceed to derive simplified relationships and assess their accuracy, as well as to inspect the predictions from simplistic yet widely-used models. As a word of caution, one should note that environmental maps such as the one provided in Fig. \ref{fig:EnvironmentalMap}, which can serve as key input to simplified models, are solely estimates and do not capture the initial period during which the pH near the fracture tip slowly evolves towards a stable value. They are also not representative of the steady-state behaviour obtained when the metal lattice is fully saturated with hydrogen. However, in this intermediate period, they can give a sensible estimate for the environmental conditions based on the applied metal potential. Throughout this section, the results refer to an Fe-based material, as characterised by the parameters shown in Tables \ref{table:params} and \ref{tab:reactionsused}.\\ Building upon local environmental maps such as the one provided in Fig. \ref{fig:EnvironmentalMap}, let us proceed to derive simplified estimates of the hydrogen influx. First, assuming the number of surface sites to be small, and hence the surface to be in a state of local equilibrium, Eq. \eqref{eq:massbalanceinterface} can be simplified to: \begin{equation} J = \nu_A-\nu_A'= (\nu_{Va}-\nu_{Va}') - (\nu_{Ha}-\nu_{Ha}') -2 (\nu_T-\nu_T') + (\nu_{Vb}-\nu_{Vb}') - (\nu_{Hb}-\nu_{Hb}') \label{eq:massbalanceinterface_steady} \end{equation} Next, we neglect all backwards reaction rates except for $\nu_A'$.
This is sensible as long as these backwards reaction rates are sufficiently small compared to their forward rates. The implication is that hydrogen is solely adsorbed through the Volmer or backwards absorption reactions, and that it is solely removed from the metal surface through the Tafel, Heyrovsky, and forwards absorption reactions. As a result, the interfacial mass balance dictates the hydrogen influx into the metal as: \begin{equation} \begin{split} J = \nu_A-\nu_A' = &k_{Va} C_{\mathrm{H}^+} (1-\theta_{ads}) \exp{\left(-\alpha_{Va} \frac{\eta F}{RT}\right)} - k_{Ha} C_{\mathrm{H}^+} \theta_{ads} \exp{\left(-\alpha_{Ha} \frac{\eta F}{RT}\right)} \\ &-2k_T\theta_{ads}^2+k_{Vb} (1-\theta_{ads})\exp{\left(-\alpha_{Vb} \frac{\eta F}{RT}\right)} - k_{Hb} \theta_{ads} \exp{\left(-\alpha_{Hb} \frac{\eta F}{RT}\right)} \end{split}\label{eq:49} \end{equation} \begin{figure} \centering \includegraphics{lines_CL_VS_Em_Env2-eps-converted-to.pdf} \caption{Hydrogen entry rate estimated from the simplified model shown in Eq. (\ref{eq:env_simple}) and the environmental conditions given in Figure \ref{fig:EnvironmentalMap}. The results correspond to the case of a sufficiently large crack, after a time of $t=10\;\mathrm{min}$, neglecting the fluid velocity and prescribing a pH of 5 and a potential of $\varphi=0\;\mathrm{V}_{SHE}$ at the external electrolyte edge (located at $10\;\mathrm{mm}$ from the metal surface). Other parameters used: $k_{Va}=10^{-4}\;\mathrm{m}/\mathrm{s}$, $k_{Vb}=10^{-8}\;\mathrm{mol}/(\mathrm{m}^2\,\mathrm{s})$ and $\alpha = 0.5$.} \label{fig:EnvironmentalMap_Eval} \end{figure} \begin{figure}[t] \centering \begin{subfigure}{6cm} \centering \includegraphics{lines2_EffectBC-eps-converted-to.pdf} \caption{} \label{fig:BC-05a} \end{subfigure} \begin{subfigure}{10cm} \centering \includegraphics{lines2_EffectBC_time_CT-eps-converted-to.pdf} \caption{\hspace{3cm}\;} \label{fig:BC-05b} \end{subfigure} \\ \centering \begin{subfigure}{6cm} \centering \includegraphics{lines1_EffectBC-eps-converted-to.pdf} \caption{} \label{fig:BC0a} \end{subfigure} \begin{subfigure}{10cm} \centering \includegraphics{lines1_EffectBC_time_CT-eps-converted-to.pdf} \caption{\hspace{3cm}\;} \label{fig:BC0b} \end{subfigure} \\ \begin{subfigure}{6cm} \centering \includegraphics{lines3_EffectBC-eps-converted-to.pdf} \caption{} \label{fig:BC05a} \end{subfigure} \begin{subfigure}{10cm} \centering \includegraphics{lines3_EffectBC_time_CT-eps-converted-to.pdf} \caption{\hspace{3cm}\;} \label{fig:BC05b} \end{subfigure} \caption{Assessment of modelling strategies. Lattice hydrogen distribution ahead of the crack after $t=10$ min. (left) and evolution of the crack tip lattice hydrogen concentration (right). Seven modelling strategies are considered: (i) prescribing a constant hydrogen concentration, (ii) prescribing a constant hydrogen flux, (iii) our approximated flux model $\tilde{J}_1$, (iv) our further simplified flux model $\tilde{J}_2$, two cases where the electrochemistry is not solved for under the assumption that the pH is known, one using the bulk pH (v) and another one using the local one (vi), and (vii) our complete electro-chemo-mechanical model (the reference result). Results are shown for three values of the applied potential: $E_m = -0.5\;\mathrm{V}_{SHE}$ (a \& b), $E_m = 0\;\mathrm{V}_{SHE}$ (c \& d) and $E_m = 0.5\;\mathrm{V}_{SHE}$ (e \& f). Based on the assumptions discussed and Fig.
\ref{fig:EnvironmentalMap_Eval}, the values used are: $C_L=19\;\mathrm{mol}/\mathrm{m}^3$, $\tilde{J}_2=1.1\cdot10^{-4}\;\mathrm{mol}/(\mathrm{m^2}\;\mathrm{s})$, $J_{10}=4.2\cdot10^{-5}\;\mathrm{mol}/(\mathrm{m^2}\;\mathrm{s})$, and local pH$=13$ ($E_m = -0.5\;\mathrm{V}_{SHE}$); $C_L=6.3\;\mathrm{mol}/\mathrm{m}^3$, $\tilde{J}_2=2.0\cdot10^{-5}\;\mathrm{mol}/(\mathrm{m^2}\;\mathrm{s})$, $J_{10}=1.5\cdot10^{-5}\;\mathrm{mol}/(\mathrm{m^2}\;\mathrm{s})$, and local pH$=3.7$ ($E_m = 0\;\mathrm{V}_{SHE}$); $C_L=60\;\mathrm{mol}/\mathrm{m}^3$, $\tilde{J}_2=3.4\cdot10^{-4}\;\mathrm{mol}/(\mathrm{m^2}\;\mathrm{s})$, $J_{10}=1.5\cdot10^{-4}\;\mathrm{mol}/(\mathrm{m^2}\;\mathrm{s})$, and local pH$=1.6$ ($E_m = 0.5\;\mathrm{V}_{SHE}$).} \label{fig:BC} \end{figure} A further simplification is made by assuming that the absorption reaction occurs much faster than all other reactions, such that the surface and lattice hydrogen concentrations are in local equilibrium and related through: \begin{equation} \theta_{ads} = \frac{C_L}{\frac{k_A}{k_A'}(N_L-C_L)+C_L} \end{equation} As a result, the environmental conditions ($C_{\mathrm{H}^+}$ and $\varphi$) and the current hydrogen concentration within the metal can be used to impose an approximate hydrogen influx as: \begin{align} \begin{split} &\tilde{J}_1= \left(1-\frac{C_L}{\frac{k_A}{k_A'}(N_L-C_L)+C_L} \right) \left(k_{Va} C_{\mathrm{H}^+} \exp{\left(-\alpha_{Va} \frac{(E_m-E_{eq}-\varphi) F}{RT}\right)} + k_{Vb} \exp{\left(-\alpha_{Vb} \frac{(E_m-E_{eq}-\varphi) F}{RT}\right)}\right) \\ &- \frac{C_L}{\frac{k_A}{k_A'}(N_L-C_L)+C_L} \Bigg( k_{Ha} C_{\mathrm{H}^+} \exp{\left(-\alpha_{Ha} \frac{(E_m-E_{eq}-\varphi) F}{RT}\right)} \\ & + k_{Hb} \exp{\left(-\alpha_{Hb} \frac{(E_m-E_{eq}-\varphi) F}{RT}\right)} +2k_T \frac{C_L}{\frac{k_A}{k_A'}(N_L-C_L)+C_L} \Bigg) \end{split} \label{eq:mapping} \end{align} Under the assumptions outlined above, this equation can be used to impose a hydrogen influx \textit{via} a non-linear Neumann-type boundary condition. At this point, it is of interest to compare this expression to the state-of-the-art generalised flux boundary condition for hydrogen ingress; namely, that of Turnbull and co-workers \cite{Turnbull1996,Turnbull2015,CS2020}. The main differences are the following: in our simplified model $\tilde{J}_1$, (i) non-acidic reactions are not neglected, and (ii) the effects of $\mathrm{H}^+$ and $\mathrm{OH}^-$ concentrations are not encapsulated in the reaction constants, ensuring the applicability of a single set of constants over a wide range of environmental conditions. Thus, the approximation from Eq. \eqref{eq:mapping} is valid over a larger range of external environments and is able to accommodate environment-independent reaction constants.\\ One further simplification can be made for relatively short time scales, assuming $C_L \ll N_Lk_A/k_A'$ (for our reaction constants, valid up to $C_L\approx 10^2\;\mathrm{mol}/\mathrm{m}^3\approx 13\;\mathrm{wppm}$) and that the potential is measured relative to the equilibrium potential of the hydrogen reactions, $E_{eq}=0$. This reduces Eq. \eqref{eq:mapping} to: \begin{equation} \tilde{J}_{2} = \left(k_{Va} 10^{-\mathrm{pH}+3}+k_{Vb}\right) \exp{\left(-\alpha \frac{(E_m-\varphi) F}{RT}\right)} \label{eq:env_simple} \end{equation} Eq. (\ref{eq:env_simple}) provides a straightforward relationship between the environment (pH, $\varphi$, $E_m$) and the hydrogen influx.
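For concreteness, a minimal sketch of Eq. \eqref{eq:mapping} as a flux function is given below, assuming the reaction constants of Table \ref{tab:reactionsused}; the function name and structure are illustrative only, and do not reflect the actual implementation.
\begin{verbatim}
import math

F, R, T = 96485.0, 8.314, 293.15             # Faraday and gas constants, temperature
k_Va, k_Ha, k_T = 1e-4, 1e-10, 1e-6          # Volmer, Heyrovsky, Tafel constants
k_Vb, k_Hb = 1e-8, 8e-10                     # non-acidic Volmer and Heyrovsky
a_Va, a_Ha, a_Vb, a_Hb = 0.5, 0.3, 0.5, 0.3  # charge transfer coefficients
kA_ratio, N_L = 1.2e5 / 8.8e9, 1e6           # k_A/k_A' [-], lattice sites [mol/m^3]

def J1_tilde(C_H, phi, E_m, C_L, E_eq=0.0):
    """Approximate hydrogen influx of Eq. (eq:mapping) [mol/(m^2 s)]."""
    theta = C_L / (kA_ratio * (N_L - C_L) + C_L)  # local-equilibrium coverage
    f = lambda a: math.exp(-a * (E_m - E_eq - phi) * F / (R * T))
    gain = (1.0 - theta) * (k_Va * C_H * f(a_Va) + k_Vb * f(a_Vb))
    loss = theta * (k_Ha * C_H * f(a_Ha) + k_Hb * f(a_Hb) + 2.0 * k_T * theta)
    return gain - loss
\end{verbatim}
In a finite element setting, a function of this kind would be evaluated at each interface integration point with $C_L$ taken from the current solution, which is what renders the Neumann boundary condition non-linear and solution-dependent.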
For the case of a long crack, and the reaction constants and conditions considered here, the resulting hydrogen influx is given in Fig. \ref{fig:EnvironmentalMap_Eval} as a function of the applied potential. Thus, the information given in Fig. \ref{fig:EnvironmentalMap_Eval} can be used as input to standard chemo-mechanical models that do not resolve the electrochemistry, under the assumptions discussed above.\\ In the remainder of this section, we will compare the results obtained with the complete electro-chemo-mechanics model to those obtained from various simplified models, so as to assess their accuracy. In particular, in addition to the reference case (the `complete' model), we consider: (i) the most commonly used approach of prescribing a constant hydrogen concentration, (ii) prescribing a constant hydrogen flux, (iii) our approximated flux model $\tilde{J}_1$, (iv) our further simplified flux model $\tilde{J}_2$, and two cases where the electrochemistry is not solved for under the assumption that the pH is known, one using the bulk pH (v) and another one using the local one (vi). It should be noted that current models that prescribe a constant hydrogen concentration are unable to relate the environment to the magnitude of $C_L$ prescribed on the crack faces. Here, we assume that one has access to a map such as the one provided in Fig. \ref{fig:hL_surfs}b, and take that input from a complete electro-chemo-mechanical model as the boundary value of $C_L$. Thus, the value of $C_L$ is chosen to match the reference result at a particular time (here, $t=10$ min.) and then its evolution is assessed. Also, the constant hydrogen flux, case (ii), is chosen assuming access to the outcome of a complete electro-chemo-mechanical simulation. Specifically, the flux prescribed corresponds to the flux obtained with the complete model at a certain time (again, $t=10$ min., $J_{10}$). The magnitude of $\tilde{J}_1$ is estimated from Eq. (\ref{eq:mapping}), taking the crack tip pH and electrolyte potential from the map provided in Fig. \ref{fig:EnvironmentalMap}, together with the current value of $C_L$ (a primary variable of the model). The pH-based approaches, (v) and (vi), solve both the deformation-diffusion problem in the metal and the surface kinetics ($\theta_{ads}$), but do not resolve the electrolyte electrochemistry problem, assuming $\varphi=0$ and, for the case of the local pH, estimating this through the map provided in Fig. \ref{fig:EnvironmentalMap}. Hence, the boundary conditions of strategies (iii), (v), (vi) and the complete electro-chemo-mechanical model are time- and solution-dependent; unlike modelling strategies (i), (ii) and (iv), which take a constant $C_L$ or $J$ value. Calculations are conducted for three selected values of the applied potential: $E_m=-0.5$, $0$, and $0.5\;\mathrm{V}_{SHE}$. For $E_m=0.5\;\mathrm{V}_{SHE}$, it follows from Fig. \ref{fig:EnvironmentalMap} that good approximations for the local pH and electrolyte potential are 1.6 and $0.4\;\mathrm{V}_{SHE}$, respectively. Based on Eq. \eqref{eq:env_simple}, this results in $\tilde{J}_{2}=3.4\cdot10^{-4}\;\mathrm{mol}/(\mathrm{m}^2\,\mathrm{s})$. Similarly, for $E_m=0\;\mathrm{V}_{SHE}$ (pH$=3.7$, $\varphi=0.5\;\mathrm{mV}_{SHE}$) an approximation of the influx is given as $\tilde{J}_{2}=2\cdot10^{-5}\;\mathrm{mol}/(\mathrm{m}^2\,\mathrm{s})$, and for $E_m=-0.5\;\mathrm{V}_{SHE}$ (pH=13, $\varphi=-0.03\;\mathrm{V}_{SHE}$) one reaches $\tilde{J}_{2}=1.1\cdot10^{-4}\;\mathrm{mol}/(\mathrm{m}^2\,\mathrm{s})$.
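As a quick consistency check, the snippet below evaluates Eq. \eqref{eq:env_simple} for these three environmental conditions, using $k_{Va}=10^{-4}\;\mathrm{m/s}$, $k_{Vb}=10^{-8}\;\mathrm{mol}/(\mathrm{m}^2\,\mathrm{s})$ and $\alpha=0.5$; small differences with respect to the quoted values are due to rounding.
\begin{verbatim}
import math

F, R, T = 96485.0, 8.314, 293.15
k_Va, k_Vb, alpha = 1e-4, 1e-8, 0.5

def J2_tilde(pH, phi, E_m):
    """Simplified hydrogen influx of Eq. (eq:env_simple) [mol/(m^2 s)]."""
    return (k_Va * 10.0**(3.0 - pH) + k_Vb) * \
           math.exp(-alpha * (E_m - phi) * F / (R * T))

# (pH, phi, E_m) triplets read from the environmental map (Fig. EnvironmentalMap)
for pH, phi, E_m in [(13.0, -0.03, -0.5), (3.7, 0.0005, 0.0), (1.6, 0.4, 0.5)]:
    print(f"E_m = {E_m:+.1f} V_SHE -> J2 = {J2_tilde(pH, phi, E_m):.2e}")
# -> 1.10e-04, 2.02e-05, 3.47e-04 mol/(m^2 s)
\end{verbatim}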
The global pH is the same as that used throughout the manuscript (pH$=5$). Relevant to the constant flux boundary condition, (ii), the values of the hydrogen flux obtained using the electro-chemo-mechanical model after a time $t=10$ min. are $J_{10}=4.2\cdot10^{-5}\;\mathrm{mol}/(\mathrm{m}^2\mathrm{s})$, $J_{10}=1.5\cdot10^{-5}\;\mathrm{mol}/(\mathrm{m}^2\mathrm{s})$, and $J_{10}=1.5\cdot10^{-4}\;\mathrm{mol}/(\mathrm{m}^2\mathrm{s})$ for the $-0.5$, $0$, and $0.5\;\mathrm{V}_{SHE}$ metal potential cases, respectively.\\ The results obtained with each of the aforementioned modelling strategies are shown in Fig. \ref{fig:BC}. Two types of graphs are shown: the lattice hydrogen distribution ahead of the crack tip ($\theta=0^\circ$) after $t=10$ min. (left column), and the evolution of the crack tip lattice hydrogen concentration as a function of time (right column). Consider first the results obtained for an applied potential $E_m=-0.5\;\mathrm{V}_{SHE}$, Figs. \ref{fig:BC}a and \ref{fig:BC}b. First, it can be readily observed that simply prescribing the initial pH of the electrolyte gives results that deviate very significantly from the reference case (the complete model). When the electrolyte is simulated, large changes in pH are observed near the metal surface and within the defect, with these changes limiting the reaction rates at the surface. By assuming a constant pH, this limiting effect is absent and, as a result, large amounts of $\mathrm{H}^+$ ions react at the surface, leading to an overprediction of the hydrogen uptake. A better approximation is obtained by considering the local pH, since this approach includes the aforementioned reaction-rate-limiting effects. However, this modelling strategy neglects changes in the electric potential of the electrolyte near the pit, leading to noticeable differences with the reference result. The accuracy relative to simply prescribing the global pH also improves when using the approximations $\tilde{J}_1$ (\ref{eq:mapping}) and $\tilde{J}_2$ (\ref{eq:env_simple}), as these take both the local pH and the electrolyte potential into account. Of these two, better agreement is attained with $\tilde{J}_1$, emphasising the role played by changes in surface coverage. Very good agreement can be obtained if the hydrogen influx is known from a previous electro-chemo-mechanical simulation, as shown by the case $J_{10}$. However, one should note that this statement is only applicable to the case of $E_m=-0.5\;\mathrm{V}_{SHE}$ and the time scales considered. Specifically, one would expect predictions to worsen as the transient problem approaches the steady state. In contrast, $\tilde{J}_2$ provides a worse approximation over short time scales but is likely to improve the steady-state prediction due to the inclusion of the Tafel and Heyrovsky reactions. On the other hand, prescribing a constant $C_L$, as commonly done in the literature, gives sensible results only when taking as input the outcome of a complete electro-chemo-mechanical analysis (such as the map provided in Fig. \ref{fig:hL_surfs}b), and only for the specific time instant considered (see Fig. \ref{fig:BC}b).\\ \FloatBarrier For the $E_m=0\;\mathrm{V}_{SHE}$ and $E_m=0.5\;\mathrm{V}_{SHE}$ cases, prescribing a constant pH equal to the initial pH also produces poor results, as shown in Figs. \ref{fig:BC}c-f.
In these cases, the actual $\mathrm{H}^+$ concentration is orders of magnitude higher than the initial one, such that the amount of absorbed hydrogen is significantly underestimated if the initial $\mathrm{H}^+$ concentration is considered. Prescribing the local pH provides better results for the neutral potential case but still results in negligible hydrogen being absorbed for the positive potential case. This is explained by the large differences in electrolyte potential observed in Fig. \ref{fig:Em_Phi_m05}, which accelerate the rate of the hydrogen reactions but are not accounted for when solely prescribing a pH as a boundary condition. As was also the case for the negative potential simulations, prescribing a constant lattice concentration yields a reasonable result at the time instant on which this concentration is based, but it does not capture the temporal behaviour correctly. Finally, the comparison of the results obtained using imposed hydrogen fluxes shows an offset between these and the reference results. This offset is caused by the inability of flux-based approaches to capture the impact of pH changes over time. Specifically, the pH gradually decreases from the initial pH to the local environmental pH, causing the pH-dependent reactions to initially occur at a slow rate and only accelerate once the stable pH is reached, resulting in a lower amount of hydrogen initially entering the metal. However, after this initial period, both the hydrogen flux based on the results after 10 minutes ($J_{10}$) and the approximate hydrogen flux $\tilde{J}_1$ produce results with gradients similar to those of the reference electro-chemo-mechanical model. The more simplified flux model $\tilde{J}_2$ also shows a similar trend but overpredicts the quantity of absorbed hydrogen, as it neglects the hydrogen recombination reactions. \section{Conclusions} \label{sec:conclusion} The amount of absorbed hydrogen in metals is a key input to hydrogen embrittlement predictions. However, its quantification remains a challenge. In this work, we have presented a generalised electro-chemo-mechanical model that enables quantifying hydrogen absorption for any choice of environment and sample/defect geometry. The model combines the simulation of ionic transport in an electrolyte with the diffusion of hydrogen within a deformable metal containing microstructural traps. At the interface between these two domains, electrochemical reactions are prescribed to relate the electrolyte pH and potential to the amount of hydrogen being absorbed into the metal. These elements are coupled, resulting in the first model that incorporates the physics governing electrolyte behaviour, hydrogen evolution and corrosion reactions, surface adsorption, and stress-assisted hydrogen uptake and diffusion in a metal lattice. We numerically implement our theory and quantify hydrogen absorption as a function of the environment (bulk pH and applied potential), the fluid velocity, and the crack and specimen dimensions. Furthermore, we postulate hypotheses and use them to present simplified versions of our model that enable quantifying the hydrogen influx from known local environmental conditions. Calculations are conducted to test these hypotheses and to compare the predictions resulting from our simplified and generalised models to those obtained with the simplified boundary conditions commonly used in the literature, establishing regimes of validity.
Our main findings are: \begin{itemize} \item Hydrogen ingress shows significant sensitivity to changes in electrolyte potential and pH, despite these changes being neglected in existing models. Negative applied potentials reduce the $\mathrm{H}^+$ concentration and accelerate hydrogen reactions. The latter significantly enhances hydrogen uptake, but the effect is limited by the decrease in available $\mathrm{H}^+$ ions in the electrolyte. Positive potentials slow down hydrogen reaction kinetics but still lead to significant hydrogen uptake due to the associated reduction in pH. An intermediate regime exists where hydrogen ingress is minimised. \item The fluid velocity has a minor influence on the hydrogen uptake ahead of cracks and pits. However, the bulk electrolyte potential and pH distributions are sensitive to the fluid velocity, and so is the hydrogen uptake at the exterior boundaries. \item Short and wide cracks/pits favour the diffusion of hydrogen ions into and out of the defect, with a stronger dependence on the defect length than on its height. For sufficiently long cracks, this diffusion becomes severely limited, resulting in the pH becoming independent of the crack geometry. In contrast, the electrolyte potential exhibits a higher sensitivity to the defect geometry, even for long cracks. \item Neglecting electrolyte behaviour by defining the lattice hydrogen concentration at the surface based on the bulk pH introduces significant errors. Considering instead the local pH improves the accuracy of predictions but still shows deviations from the reference result, as changes in electrolyte potential are not accounted for. \item Boundary conditions commonly used in hydrogen embrittlement models, such as prescribing a constant, pre-determined lattice concentration, result in significant deviations from the hydrogen absorption predicted by the complete electro-chemo-mechanical model. \item Environmental maps that relate the applied potential to the local (crack tip) pH and electrolyte potential, such as the one provided in Fig. \ref{fig:EnvironmentalMap}, can be used to determine the hydrogen influx in a relatively accurate manner, without the need to explicitly simulate electrolyte behaviour. \item While some simplifications provide relatively close estimates, there exist phenomena, such as pH evolution, that can only be captured with a complete electro-chemo-mechanical model. These phenomena lead to hydrogen uptake overpredictions when using simplified models that do not simulate the electrochemical behaviour of the electrolyte. \end{itemize} \noindent Moreover, maps are provided that enable readers to relate measurable environmental conditions (bulk pH, applied potential) to local environmental quantities (pH, electrolyte potential) and absorbed hydrogen. The model is also capable of predicting the influence of the surface condition, but this is done through changes in the reaction rate constants and thus requires input from careful experimentation. Potential future extensions of the model include the use of kinetic trapping formulations \cite{McNabb1963,Turnbull2015}, the coupling with models that explicitly simulate the embrittlement process (e.g., through the use of phase field approaches \cite{TAFM2020c}), and incorporating the role of recombination poisons such as $\mathrm{H_2S}$. \section*{Acknowledgments} \noindent Financial support through grant EP/V009680/1 (``NEXTGEM'') from the Engineering and Physical Sciences Research Council (EPSRC) is gratefully acknowledged.
Emilio Mart\'{\i}nez-Pa\~neda additionally acknowledges financial support from UKRI's Future Leaders Fellowship programme [grant MR/V024124/1]. \section*{Data availability} \noindent The COMSOL physics builder model file incorporating the metal diffusion, interface reactions, and simplified boundary conditions is made freely available at \url{www.imperial.ac.uk/mechanics-materials/codes} and \url{www.empaneda.com}. Documentation is also provided, along with example files that enable the reproduction of the results shown in Section \ref{sec:bcs}. \FloatBarrier
{ "timestamp": "2022-09-20T02:21:04", "yymm": "2209", "arxiv_id": "2209.08635", "language": "en", "url": "https://arxiv.org/abs/2209.08635" }
\section{Introduction} Deep generative models provide a powerful framework for representing complex data distributions and have seen many successful applications in image and video synthesis \citep{karras2019style,saito2020train,xiao2021tackling}, representation learning \citep{oord2017neural}, as well as unsupervised or semi-supervised learning \citep{izmailov2020semi,pang2020semi}. Such a model, referred to as a \textit{generator model}, usually consists of low-dimensional latent variables that follow a non-informative prior distribution, and a top-down network that maps the latent vector to an observed example. An informative prior model in the latent space \citep{pang2020ebmprior, aneja2021ncpvae} can be learned to further improve the expressive power of the whole model. Specifically, we consider learning an energy-based model (EBM) in the latent space as the informative prior for the generator model. Learning a latent space EBM can be challenging, as it requires iterative Markov chain Monte Carlo (MCMC) sampling, which is computationally expensive and sensitive to hyperparameters. In this paper, we instead propose to use noise contrastive estimation (NCE) \citep{JMLR:v13:gutmann12a} for learning the EBM prior via density ratio estimation. The EBM is learned discriminatively by classifying latent vectors sampled from the prior density against latents sampled from the posterior density. Instead of variational inference \citep{kingma2014vae,rezende2014stochastic}, which requires designing a separate inference network, we obtain the posterior latent samples through short-run Langevin dynamics \citep{nijkamp2020learning} to ensure more accurate inference. However, the success of NCE depends on the closeness of the prior and posterior densities \citep{hoffman2016elbo}. Given a large gap between the two densities, NCE typically fails to accurately estimate the density ratio, which leads to inaccurate EBM modeling. To effectively tackle this inaccurate estimation issue and learn a more expressive prior model, we develop adaptive multi-stage density ratio estimation for latent space EBM training. The proposed method breaks the density ratio estimation into multiple stages and learns the density ratios of the different stages sequentially. Thanks to the low dimensionality of the latent space and the short-run posterior inference, in each stage the gap between the prior and posterior densities can be kept in check, which makes NCE easier. The density ratio estimated in the previous stage is then integrated into the current prior model as a correction term, building a more expressive prior density for the later stages. With this framework, the final latent space EBM prior is naturally formed by the product of the ratios of the different stages on top of the initial base prior. \textbf{Contributions:} 1) We propose an EBM prior for the generator model which is modelled through the estimation of density ratios in multiple stages. 2) We develop adaptive multi-stage noise contrastive estimation to learn the ratios of the different stages sequentially and adaptively; the ratio estimated in the previous stage is integrated to form a more informative prior in the later stage. 3) We demonstrate strong empirical results illustrating the proposed method. \section{Background} \subsection{Maximum likelihood learning of deep latent variable models} \label{ML latent model} Let ${\mathbf{x}} \in \mathbb{R}^D$ be an observed example such as an image, and ${\mathbf{z}} \in \mathbb{R}^d$ be the latent variables, where $d<D$.
A latent variable generative model (a.k.a.\ \textit{generator model}) factorizes the joint distribution of $({\mathbf{x}}, {\mathbf{z}})$ as \begin{align} \label{eq: decoder model} p_{\theta}({\mathbf{x}}, {\mathbf{z}}) = p({\mathbf{z}})p_{\theta}({\mathbf{x}}|{\mathbf{z}}), \end{align} where $p({\mathbf{z}})$ is the prior distribution over the latent variables ${\mathbf{z}}$, and $p_{\theta}({\mathbf{x}}|{\mathbf{z}})$ is the top-down generation model with parameters $\theta$. Usually the prior distribution is chosen to be a simple one such as $\mathcal{N}\left(0, \mathbf{I}_{d}\right)$, but it can also be made more expressive with learnable parameters \citep{pang2020ebmprior}. The generation model is the same as that in VAE \citep{kingma2014vae}, i.e., ${\mathbf{x}} = g_{\theta}({\mathbf{z}}) + \epsilon$ with $g_{\theta}$ the decoder network and $\epsilon \sim \mathcal{N}\left(0, \sigma^2 \mathbf{I}_{D}\right)$, so that $p_{\theta}({\mathbf{x}}|{\mathbf{z}}) = \mathcal{N}\left(g_{\theta}({\mathbf{z}}), \sigma^2\mathbf{I}_{D}\right)$. As in VAE, $\sigma^2$ takes a pre-specified value. Given a set of $N$ training samples $\{{\mathbf{x}}_i, i = 1,\dots,N\}$ from the unknown data distribution $p_{\text{data}}({\mathbf{x}})$, the model $p_{\theta}$ can be trained by maximizing the log-likelihood over the training samples, $\mathcal{L}(\theta)=\frac{1}{N} \sum_{i=1}^{N} \log p_{\theta}\left({\mathbf{x}}_{i}\right)$. Maximizing the log-likelihood $\mathcal{L}(\theta)$ can be accomplished by gradient ascent, where the gradient can be obtained from \begin{align}\label{eq: maximum likelihood} \nabla_{\theta} \log p_{\theta}({\mathbf{x}}) &=\frac{1}{p_{\theta}({\mathbf{x}})} \nabla_{\theta} p_{\theta}({\mathbf{x}}) =\int\left[\nabla_{\theta} \log p_{\theta}({\mathbf{x}}, {\mathbf{z}})\right] \frac{p_{\theta}({\mathbf{x}}, {\mathbf{z}})}{p_{\theta}({\mathbf{x}})} d {\mathbf{z}} \notag\\ &=\mathbb{E}_{p_{\theta}({\mathbf{z}} \mid {\mathbf{x}})}\left[\nabla_{\theta} \log p_{\theta}({\mathbf{x}}, {\mathbf{z}})\right] . \end{align} $\nabla_{\theta} \log p_{\theta}({\mathbf{x}}, {\mathbf{z}})$ can be easily computed from the form of $\log p_{\theta}({\mathbf{x}}, {\mathbf{z}})$; however, approximating the expectation requires drawing samples from $p_{\theta}({\mathbf{z}} | {\mathbf{x}})$, which can be difficult. Sampling from the intractable posterior $p_{\theta}({\mathbf{z}} | {\mathbf{x}})$ requires MCMC, and one convenient MCMC algorithm is Langevin dynamics (LD) \citep{neal2011mcmc}. Given a step size $s > 0$ and an initial value ${\mathbf{z}}^0$, the Langevin dynamics iterates \begin{align}\label{eq: LD} {\mathbf{z}}^{k+1}={\mathbf{z}}^{k}+ \frac{s}{2}\nabla_{{\mathbf{z}}} \log p_{\theta}({\mathbf{z}} | {\mathbf{x}})+\sqrt{s} \omega_{k}, \end{align} where $\omega_k \sim \mathcal{N}(0, \mathbf{I})$. For a sufficiently small step size $s$, the marginal distribution of ${\mathbf{z}}^k$ will converge to $p_{\theta}({\mathbf{z}}|{\mathbf{x}})$ as $k \rightarrow \infty$. However, it is not feasible to run Langevin dynamics until convergence; in practice, the iteration in Eq. \ref{eq: LD} is run for a finite number of steps, which yields a Markov chain whose distribution is approximately close to the original target distribution. When ${\mathbf{z}}^0$ is initialized from the noise distribution, the algorithm is called noise-initialized short-run LD \citep{nijkamp2019learning, nijkamp2020learning}.
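As a concrete illustration (a minimal NumPy sketch, not the implementation used in this work), the update of Eq. \ref{eq: LD} can be written as follows, where \texttt{grad\_log\_target} is an assumed user-supplied callable:

\begin{verbatim}
import numpy as np

def short_run_langevin(grad_log_target, z0, step=0.1, n_steps=30, rng=None):
    """Noise-initialized short-run Langevin dynamics (Eq. LD).

    grad_log_target: callable returning grad_z log p_theta(z | x); since
    p_theta(z | x) is proportional to p(z) p_theta(x | z), only
    grad_z [log p(z) + log p_theta(x | z)] is needed, so the intractable
    normalizing constant never appears.
    """
    rng = rng or np.random.default_rng()
    z = np.array(z0, dtype=float)
    for _ in range(n_steps):
        z = (z + 0.5 * step * grad_log_target(z)
             + np.sqrt(step) * rng.standard_normal(z.shape))
    return z

# Example: for a standard Gaussian target, grad_z log p(z) = -z.
z = short_run_langevin(lambda z: -z, z0=np.zeros(16))
\end{verbatim}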
\subsection{Learning EBMs with discriminative density ratio estimation} \label{background: NCE} Suppose there are two distributions with density functions $p({\mathbf{x}})$ and $q({\mathbf{x}})$ from which we can sample. We can then estimate the density ratio\footnote{Assuming $q({\mathbf{x}})>0$ when $p({\mathbf{x}})>0$.} $r({\mathbf{x}}) = \frac{p({\mathbf{x}})}{q({\mathbf{x}})}$ by training a classifier to distinguish samples from $p$ and $q$ \citep{sugiyama2012density}. Specifically, we can train the binary classifier $D: \mathbb{R}^{n} \rightarrow(0,1)$ by minimizing the binary cross-entropy loss \begin{align*} \min _{D}-\mathbb{E}_{{\mathbf{x}} \sim q({\mathbf{x}})}[\log D({\mathbf{x}})]-\mathbb{E}_{{\mathbf{x}} \sim p({\mathbf{x}})}[\log (1-D({\mathbf{x}}))]. \end{align*} The objective is minimized when $D(\mathbf{x})=\frac{q(\mathbf{x})}{q(\mathbf{x})+p(\mathbf{x})}$ \citep{goodfellow2014generative}, and denoting the classifier at optimality by $D^*({\mathbf{x}})$, we have $r(\mathbf{x})=\frac{p(\mathbf{x})}{q(\mathbf{x})} \approx \frac{1-D^{*}(\mathbf{x})}{D^{*}(\mathbf{x})}$. Equivalently, the ratio $r({\mathbf{x}}) = \frac{p({\mathbf{x}})}{q({\mathbf{x}})}$ can be estimated by directly minimizing \begin{align}\label{eq: NCE equivalent} \mathcal{L}(\phi)=& -\mathbb{E}_{\mathbf{x} \sim p({\mathbf{x}})} \log \left(\frac{r_{\phi}\left(\mathbf{x} \right)}{1+r_{\phi}\left(\mathbf{x} \right)}\right) -\mathbb{E}_{\mathbf{x} \sim q({\mathbf{x}})} \log \left(\frac{1}{1+r_{\phi}\left(\mathbf{x} \right)}\right), \end{align} where $r_{\phi}({\mathbf{x}})$ is a non-negative ratio-estimating model implemented as the exponential of an unconstrained neural network with scalar output. The minimizer $\phi^*$ satisfies $r_{\phi^*}({\mathbf{x}}) = \frac{p({\mathbf{x}})}{q({\mathbf{x}})}$ \citep{JMLR:v13:gutmann12a}. Such a technique can be useful for training energy-based models (EBMs). Given samples from the true data distribution $p_{\text{data}}({\mathbf{x}})$ and a base distribution $q({\mathbf{x}})$ that we can sample from, we consider EBMs of the form $p_{\phi}({\mathbf{x}}) = \frac{1}{Z}r_{\phi}({\mathbf{x}})q({\mathbf{x}})$, where $Z$ is the normalizing constant and $r_{\phi}$ is an unconstrained positive function. With this parametrization, the optimal $r_{\phi}$ clearly equals the density ratio $\frac{p_{\text{data}}({\mathbf{x}})}{q({\mathbf{x}})}$. In fact, if $r_{\phi}({\mathbf{x}})$ is trained with density ratio estimation, the normalizing constant $Z$ is simply $1$. Therefore, the problem of learning an EBM becomes the problem of estimating a density ratio, which can be solved by discriminative density ratio estimation. Typically the base distribution $q({\mathbf{x}})$ is chosen to be Gaussian, resulting in the so-called noise contrastive estimation (NCE) \citep{JMLR:v13:gutmann12a}. Although NCE provides a promising way to train EBMs without running MCMC, the accuracy of the density ratio estimation depends on the closeness between the two distributions; the ratio estimator is often severely inaccurate when the gap between $p$ and $q$ is large. \citet{rhodes2020telescoping} propose Telescoping density-Ratio Estimation (TRE), which breaks the density ratio estimation task into a collection of harder sub-tasks and shows improvement over simple NCE on density ratio estimation. However, it is still difficult to apply the technique to energy-based modeling.
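For concreteness, the loss of Eq. \ref{eq: NCE equivalent} admits a simple, numerically stable form when the ratio is parametrized as $r_\phi = \exp f_\phi$. A minimal NumPy sketch is given below; \texttt{f} is an assumed callable returning the scalar log-ratio (the network output):

\begin{verbatim}
import numpy as np

def softplus(t):
    # Numerically stable log(1 + exp(t)).
    return np.logaddexp(0.0, t)

def ratio_loss(f, x_p, x_q):
    """Logistic density-ratio loss of Eq. (NCE equivalent).

    With r_phi = exp(f): -log(r/(1+r)) = softplus(-f) and
    -log(1/(1+r)) = softplus(f). Here x_p ~ p and x_q ~ q; at the
    minimizer, exp(f(x)) estimates p(x)/q(x).
    """
    return softplus(-f(x_p)).mean() + softplus(f(x_q)).mean()
\end{verbatim}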
The difficulty is twofold. On the one hand, EBMs in a high-dimensional data space such as the image space can be highly complex and multi-modal, making them extremely far away from a simple noise distribution. On the other hand, the intermediate distributions are pre-designed through linear transitions, making them less effective at connecting complicated target densities. In \citet{rhodes2020telescoping}, TRE only obtains limited success in training EBMs through density estimation on the MNIST dataset. \vspace{-2mm} \section{Adaptive Multi-stage Density Ratio Estimation} In this section, we introduce adaptive multi-stage density ratio estimation in the latent space in detail.
\subsection{Multi-stage density ratio estimation in latent space} Instead of modeling the high-dimensional data space directly, it is easier to introduce low-dimensional latent variables and learn an EBM in the latent space, while also learning a mapping from the latent space to the data space \citep{bengio2013better, kumar2019maximum}. We follow this approach and attempt to model a latent space EBM using contrastive estimation. The latent EBM can be learned discriminatively by estimating the ratio between the prior density and the posterior density. Due to the low dimensionality of the latent space, such densities can be much easier to deal with than those in the high-dimensional data space. However, this approach presents new challenges. First, while the target density in the data space is given and fixed (i.e., the empirical data distribution), the posterior density in the latent space is driven by the prior density, and inference on the posterior can be hard. Second, if the prior is assumed to be un-informative and fixed (e.g., a unit Gaussian), the expressiveness of the model is limited. Inspired by \citet{rhodes2020telescoping}, we propose to learn a latent space EBM of the following form through multiple stages: \begin{align}\label{eq: prior model} p_{\phi}({\mathbf{z}}) = \prod_{k=0}^{m-1} r_{\phi_k}\left(\mathbf{z}\right)p_0({\mathbf{z}}), \end{align} where $p_0({\mathbf{z}})$ is the unit Gaussian base distribution, and $r_{\phi_k}$ is the intermediate density ratio learned in each stage. The proposed model shares the same root as the Product-of-Experts (PoE) \citep{hinton2002training}, where the $r_{\phi_k}$ of each stage can be treated as an individual expert model; it thus has the potential to produce a much sharper distribution than models built with a single expert, such as \citep{pang2020ebmprior}. Although the formulation of the EBM in Eq. \ref{eq: prior model} is related to the TRE proposed in \citet{rhodes2020telescoping}, our training method is fundamentally different. In the next section, we introduce the training of our model and highlight the distinctions from TRE. \vspace{-3mm} \subsection{Learning latent EBMs with adaptive multi-stage density ratio estimation} Our proposed generator model specifies the distribution on the joint space $({\mathbf{x}}, {\mathbf{z}})$: $ p_{\theta, \phi}({\mathbf{x}}, {\mathbf{z}}) = p_{\phi}({\mathbf{z}})p_{\theta}({\mathbf{x}}|{\mathbf{z}})$, where $p_{\phi}({\mathbf{z}})$ is the prior model specified in Eq. \ref{eq: prior model}, and $\phi=\{\phi_0, \dots, \phi_{m-1}\}$ collects the parameters of all intermediate learned ratios. It is tempting to apply maximum likelihood estimation (MLE) to train such a model. However, there are several challenges: (1) learning the latent EBM $p_\phi({\mathbf{z}})$ requires costly MCMC sampling with potentially poor mixing; (2) the prior $p_\phi({\mathbf{z}})$ would need to have a fixed form during training and could not be adaptively adjusted.
To alleviate the aforementioned limitations, we therefore break the density ratio estimation of $p_\phi({\mathbf{z}})$ into $m$ stages, and learn and build the prior sequentially and adaptively. Specifically, in the $k^{th}$ stage, we consider the generator model of the form \begin{align} \label{eq: stage k decoder model } p_{\theta, \phi_k}({\mathbf{x}}, {\mathbf{z}}) = p_{\phi_k}({\mathbf{z}})p_{\theta}({\mathbf{x}}|{\mathbf{z}}), \end{align} where $p_{\phi_k}({\mathbf{z}})=\prod_{i=0}^{k-1} r_{\phi_i}\left(\mathbf{z}\right)p_0({\mathbf{z}})$. The whole training procedure iterates between the maximum likelihood estimation of the generation model $\theta$ and the sequential contrastive estimation of the prior $\phi$. \textbf{MLE for generation model $\theta$: }The generation model can be trained by maximizing the marginal log-likelihood $p_\theta({\mathbf{x}})$. In the $k^{th}$ stage, the complete-data log-likelihood of the model $p_{\theta, \phi_k}({\mathbf{x}}, {\mathbf{z}})$ can be expressed as \begin{align*} \log p_{\theta, \phi_k}({\mathbf{x}}, {\mathbf{z}}) &=\log \left[p_{\phi_k}({\mathbf{z}}) p_{\theta}({\mathbf{x}} | {\mathbf{z}})\right] =\log p_{\phi_k}({\mathbf{z}})-\frac{1}{2}\left[\left\|{\mathbf{x}}-g_{\theta}({\mathbf{z}})\right\|^{2} / \sigma^{2}\right]+C \end{align*} where $g_{\theta}$ is the decoder and $C$ is a constant independent of $\theta$. The generation model parameter $\theta$ is then updated using the gradient based on Eq. \ref{eq: maximum likelihood} with a batch of $n$ training samples ${\mathbf{x}}_i$: \begin{align}\label{eq:generation_learn} \theta_{t+1}=\theta_{t}+\eta_{t} \sum_{i=1}^{n} \mathbb{E}_{p_{\theta_{t}}\left({\mathbf{z}}_{i} \mid {\mathbf{x}}_{i}\right)}\left[\left.\frac{\partial}{\partial \theta} \log p_{\theta, \phi_k}\left({\mathbf{x}}_{i}, {\mathbf{z}}_{i}\right)\right|_{\theta=\theta_{t}}\right], \end{align} where $\eta_t$ is the learning rate. The expectation over the posterior can be approximated by running the short-run Langevin dynamics of Eq. \ref{eq: LD}. Note that running LD to sample from $p_{\theta}({\mathbf{z}}|{\mathbf{x}})$ is equivalent to sampling from $p_{\theta}({\mathbf{x}}, {\mathbf{z}})$ with ${\mathbf{x}}$ fixed. \begin{figure} \begin{center} \centerline{\includegraphics[scale=0.36]{figure/adaptiveCE_train_final.png}} \caption{Training adaptive multi-stage density ratio estimation. We estimate the density ratio $r_k({\mathbf{z}})$ in each stage using contrastive estimation, which trains a classifier to distinguish samples from the prior $p_{\phi_k}({\mathbf{z}})$ and samples from the aggregated posterior $q_k({\mathbf{z}})$. Posterior samples are obtained by short-run LD (blue dashed curve); prior samples can be obtained either by short-run LD (orange dashed curve) or using a persistent chain (orange dashed line). The ratio estimated in stage $k$ is integrated to form a new prior in stage $k+1$. The whole prior is adapted across multiple stages and learned sequentially. } \vspace{-3mm} \label{illustrate} \end{center} \end{figure} \textbf{Adaptive multi-stage NCE for prior $\phi$: }The prior model $p_\phi({\mathbf{z}})$ can be sequentially and adaptively learned to bridge the gap between the prior and posterior densities of the previous stages. Specifically, in the $k^{th}$ stage, the correction term $r_{\phi_k}$ can be trained to estimate the density ratio between $p_{\phi_k}({\mathbf{z}})$ and its aggregated posterior $q_k({\mathbf{z}})$ through contrastive estimation using Eq. \ref{eq: NCE equivalent}.
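Concretely, the stage-$k$ objective is the same logistic ratio loss of Eq. \ref{eq: NCE equivalent}, now applied in the latent space, with the aggregated posterior $q_k$ playing the role of $p$ and the current prior $p_{\phi_k}$ the role of $q$. A minimal sketch, reusing the \texttt{ratio\_loss} function sketched in Sec. \ref{background: NCE}; the samples and network below are stand-ins, not the actual implementation:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
z_post = rng.normal(loc=1.0, size=(64, 16))   # stand-in posterior samples (q_k)
z_prior = rng.normal(loc=0.0, size=(64, 16))  # stand-in prior samples (p_phi_k)

def log_r(z):
    # Stand-in log-ratio "network": any callable mapping (B, d) -> (B,).
    return z.sum(axis=-1)

# One evaluation of the stage-k objective; a real implementation would
# minimize this over the parameters of log_r, driving exp(log_r(z))
# towards q_k(z) / p_{phi_k}(z).
loss = ratio_loss(log_r, z_post, z_prior)
\end{verbatim}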
The appealing advantage of this contrastive estimator is that it simply requires training a binary classifier rather than expensive MCMC sampling. At the optimum of the logistic loss, the estimated ratio satisfies $r_{\phi_k}({\mathbf{z}}) \approx \frac{q_k({\mathbf{z}})}{p_{\phi_k}({\mathbf{z}})}$. The prior model in the $(k+1)^{th}$ stage can then be sequentially adapted to match the previous aggregated posterior $q_k({\mathbf{z}})$, i.e., letting $p_{\phi_{k+1}}({\mathbf{z}})= r_{\phi_{k}}({\mathbf{z}})p_{\phi_k}({\mathbf{z}})$ match $q_k({{\mathbf{z}}})$. Given the new prior, we similarly infer the posterior using the short-run LD of Eq. \ref{eq: LD}. Then, the next density ratio estimator $r_{\phi_{k+1}}$ can be learned through contrastive estimation to match the updated prior and its aggregated posterior, which is further used to adapt the prior in the next stage. In particular, we have \begin{align*} \frac{q_{m-1}(\mathbf{z})}{p_{0}(\mathbf{z})}=\frac{q_{m-1}(\mathbf{z})}{p_{\phi_{m-1}}(\mathbf{z})} \frac{q_{m-2}(\mathbf{z})}{p_{\phi_{m-2}}(\mathbf{z})} \cdots \frac{q_{0}(\mathbf{z})}{p_{0}(\mathbf{z})}, \end{align*} where $q_k({\mathbf{z}})$ is the aggregated posterior for the prior $p_{\phi_k}({\mathbf{z}})$. The above telescoping product holds since the new prior is designed to match the aggregated posterior of the previous stage, i.e., $p_{\phi_{k+1}}({\mathbf{z}}) \approx q_k({{\mathbf{z}}})$. Each stage estimates the ratio $r_{\phi_k}({\mathbf{z}}) \approx \frac{q_k({\mathbf{z}})}{p_{\phi_k}({\mathbf{z}})}$ via contrastive estimation. The aggregated posterior $q_{m-1}({\mathbf{z}})$ can then be obtained via \begin{align*} q_{m-1}({\mathbf{z}}) =r_{\phi_{m-1}}\left(\mathbf{z}\right) p_{\phi_{m-1}}({\mathbf{z}}) =\prod_{k=0}^{m-1} r_{\phi_k}\left(\mathbf{z}\right)p_0({\mathbf{z}}). \end{align*} Our final prior model can then be obtained by matching this aggregated posterior $q_{m-1}({\mathbf{z}})$, which has the same form as Eq. \ref{eq: prior model}. The proposed training is illustrated in Figure \ref{illustrate} and the algorithm is detailed in Algorithm \ref{algo:short}. \textbf{Comparison with TRE in \citet{rhodes2020telescoping}: }The most significant difference between our training method and TRE is that TRE assumes a fixed target distribution and constructs multiple stages simultaneously via interpolation, whereas our model considers adaptive targets and learns multi-stage density ratio estimators sequentially via NCE. We conduct empirical comparisons in Sec. \ref{sec:ablation}. \textbf{Sampling from prior $p_{\phi_k}({\mathbf{z}})$: }The density ratio estimation of $r_{\phi_k}$ in each stage requires samples from the prior $p_{\phi_k}({\mathbf{z}})$ and the posterior. The posterior samples are inferred through short-run Langevin dynamics, which can be efficient and accurate. For drawing prior samples from $p_{\phi_k}({\mathbf{z}})$, we can either use short-run prior Langevin dynamics or a persistent update. On the one hand, we can directly run short-run Langevin dynamics on $p_{\phi_k}({\mathbf{z}})$ to obtain prior samples. On the other hand, the prior samples can also be obtained in a persistent-chain manner, avoiding the prior Langevin dynamics altogether. When introducing the $(k+1)^{th}$ stage of density ratio estimation, we assume that the current estimator $r_{\phi_k}({\mathbf{z}})$ performs well in modeling the ratio between the current aggregated posterior distribution $q_k({\mathbf{z}})$ and the current prior $p_{\phi_k}({\mathbf{z}})$.
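The stage transition can be summarized compactly: each learned log-ratio is folded into the running prior log-density, so that after $m$ stages one recovers the product-of-experts form of Eq. \ref{eq: prior model}. A minimal sketch is given below, where the dummy \texttt{log\_ratios} stand in for trained ratio estimators (in training, each would be fitted with the logistic ratio loss described above):

\begin{verbatim}
import numpy as np

def log_p0(z):
    # Base prior: log-density of a unit Gaussian, up to a constant.
    return -0.5 * np.sum(z ** 2, axis=-1)

def extend_prior(log_p_k, log_r_k):
    """Stage transition: log p_{phi_{k+1}} = log r_{phi_k} + log p_{phi_k}."""
    return lambda z: log_r_k(z) + log_p_k(z)

# Dummy stand-ins for trained log-ratio estimators (assumed callables).
log_ratios = [lambda z: 0.1 * np.sum(z, axis=-1) for _ in range(4)]

log_p = log_p0
for log_r in log_ratios:
    log_p = extend_prior(log_p, log_r)
# log_p(z) = sum_k log r_{phi_k}(z) + log p_0(z); its gradient drives
# the test-time Langevin sampler on the learned prior.
\end{verbatim}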
Under this assumption, we simply use samples from $q_k({\mathbf{z}})$ to approximately serve as samples from the new prior $p_{\phi_{k+1}}({\mathbf{z}})$ when learning $r_{\phi_{k+1}}({\mathbf{z}})$. In practice, this is achieved by maintaining a memory matrix that stores the posterior sample $\tilde{{\mathbf{z}}}_i$ associated with each data point ${\mathbf{x}}_i$. Note that we only need to keep one memory matrix throughout training, as only the posterior samples from the previous stage are needed. \begin{algorithm} \small \SetKwInOut{Input}{input} \SetKwInOut{Output}{output} \DontPrintSemicolon \Input{Learning iterations~$T$, number of stages $K$, observed examples~$\{{\mathbf{x}}_i \}_{i=1}^n$, number of posterior sampling steps $L$, initial prior model $p_0({\mathbf{z}})$} \Output{ Estimated parameters $\theta, \phi_k, k=0, \dots, K-1$.} $k$ = 0 \\ \For{$t = 0:T-1$}{ \smallskip 1. {\bf Mini-batch}: Sample a mini-batch of observed examples $\{ {\mathbf{x}}_i \}$. \\ 2. {\bf Posterior sampling for $q_k({{\mathbf{z}}})$}: For each ${\mathbf{x}}_i$, sample ${\mathbf{z}}_i \sim p_{\theta_t}({\mathbf{z}}|{\mathbf{x}}_i)$ using Eq. (\ref{eq: LD}) for $L$ steps with the current prior $p_k({\mathbf{z}})$.\\ 3. {\bf Learning the density ratio $r_{\phi_{k}}({\mathbf{z}})$}: Update $\phi_{k}$ using contrastive estimation between $q_k({{\mathbf{z}}})$ and $p_k({\mathbf{z}})$ via Eq. (\ref{eq: NCE equivalent}).\\ 4. {\bf Learning the generation model}: Update $\theta$ according to Eq. (\ref{eq:generation_learn}).\\ \If{$t$ is a multiple of $T/K$}{ 5. {\bf Stage transition and prior update}: Construct new stage $k=k+1$, update the prior $p_{k}({\mathbf{z}})= r_{\phi_{k-1}}({\mathbf{z}})p_{k-1}({\mathbf{z}})$} } \caption{Adaptive Multi-stage Density Ratio Estimation.} \label{algo:short} \end{algorithm} \textbf{Test time sampling: }After obtaining the density ratio estimators of each stage and forming the final EBM prior $p_\phi({\mathbf{z}})$, we can sample latent variables ${\mathbf{z}} \sim p_{\phi}({\mathbf{z}})$ and produce a sample ${\mathbf{x}}$ by decoding ${\mathbf{z}}$. Sampling from $p_{\phi}({\mathbf{z}})$ can be done either by running Langevin dynamics with $\nabla_{{\mathbf{z}}} \log p_{\phi}({\mathbf{z}}) = \nabla_{{\mathbf{z}}}\left(\sum_{i=0}^{m-1} \log r_{\phi_i}({\mathbf{z}}) - \frac{1}{2}\|{\mathbf{z}}\|^{2}\right)$, or by sampling-importance-resampling (SIR). \section{Related Work} \textbf{Latent variable deep generative models: }Our proposed method aims to improve the performance of latent variable deep generative models. Such models consist of a decoder for generation and require an inference mechanism to infer the latent variables. VAEs \citep{kingma2014vae, vahdat2020nvae} learn the decoder network by training a tractable inference network (encoder) to approximate the intractable posterior distribution of the latent variables. Alternatively, \citet{han2017alternating,han2019learning, xie2019learning,nijkamp2020learning} infer the latent variables by Langevin sampling from the posterior distribution without using an encoder. Our method follows the latter approach, using Langevin sampling to infer the latent variables. \textbf{Discriminative contrastive estimation for learning generative models: }Efforts have been made to combine discriminative and generative models \citep{lazarow2017introspective, jin2017introspective, wu2018tale}; in particular, as introduced in Section \ref{background: NCE}, discriminative contrastive estimation can be applied to learning EBMs.
\citet{gao2020flow} use a normalizing flow \citep{papamakarios2021normalizing} as the base distribution for contrastive estimation. \citet{aneja2021ncpvae} refine the prior distribution of a pre-trained VAE by noise contrastive estimation. However, such a method may fail if the empirical latent distribution (the so-called aggregated posterior) is far away from the Gaussian noise. \citet{rhodes2020telescoping} propose telescoping density-ratio estimation, which breaks the estimation into several sub-problems. The method is connected to a range of methods that leverage sequences of intermediate distributions, such as \citep{gelman1998simulating, marinari1992simulated,kirkpatrick1983optimization}. \textbf{Generator model with flexible prior: }Our method trains an energy-based prior on the latent space via the proposed adaptive multi-stage NCE, so our work is related to a broader line of previous papers introducing flexible prior distributions. \citet{tomczak2018vae} parameterized the prior based on the posterior inference model, and \citet{bauer2019resampled} proposed to construct priors using rejection sampling. Some previous works adopt a two-stage approach, which first trains a latent variable model with a simple prior, and then trains a separate prior model to match the aggregated posterior distribution. For example, 2s-VAE \citep{dai2019diagnosing} trains another VAE in the latent space, while \citet{ghosh2019variational} fit a Gaussian mixture model on the latent codes. Additional work in this line includes \citep{oord2017neural, esser2021taming, xiao2019generative, xiao2020vaebm, patrini2020sinkhorn, yu2021unsupervised, yu2022latent}. \citet{pang2020ebmprior} have the closest connection to our work. Similar to us, they introduce an EBM on the latent space. Both the latent space EBM and the generator network are learned jointly by maximum likelihood, and in particular the training involves short-run MCMC sampling from both the prior and posterior distributions. In contrast, we sequentially learn a more expressive EBM with our novel adaptive multi-stage NCE, which avoids running MCMC for the EBM prior. We also show improved results on image generation and outlier detection tasks. \vspace{-3mm} \section{Experiments} In this section, we present a set of experiments that highlight the effectiveness of our proposed method. We want to show that our method can (i) learn a generator model with an expressive prior distribution from which visually realistic images can be synthesized, (ii) generalize well by faithfully reconstructing test images during training, and (iii) successfully perform anomaly detection. To show the performance of our method, we mainly include SVHN \citep{37648}, CelebA \citep{liu2015deep} and CIFAR-10 \citep{krizhevsky2010cifar} in our study. In addition, we include studies on the training dynamics and the Langevin sampling, as well as ablation studies, to better understand our method. Details about the experiments, including the network architectures, the choices of model hyper-parameters and the optimization method for each dataset, can be found in Appendix \ref{app:details}. \vspace{-1mm} \subsection{Image Synthesis and Reconstruction} We evaluate the quality of the generated and reconstructed images. Ideally, if the model is well-trained, the EBM prior on the latent space will fit the marginal distribution of the latent variables, which in turn leads to realistic samples and faithful reconstructions.
We benchmark our model against a variety of previous methods, including VAE \citep{kingma2014vae}, Alternating Back-propagation (ABP) \citep{han2017alternating} and Short-run Inference (SRI) \citep{nijkamp2020learning}, which assume a simple standard Gaussian prior distribution for the latent vector, as well as recent two-stage methods such as 2-stage VAE \citep{dai2019diagnosing}, RAE \citep{ghosh2019variational} and NCP-VAE \citep{aneja2021ncpvae}, whose prior distributions are learned from posterior samples in a second stage after the generator is trained. We also compare our method with LEBM \citep{pang2020ebmprior}, which learns an EBM prior adaptively while training the generator, but trains the EBM prior by maximum likelihood instead of density ratio estimation. To make fair comparisons, we follow the protocol of \citep{pang2020ebmprior}. \textbf{Synthesis: }We report the quantitative FID \citep{heusel2017gans} results in Table \ref{table:main}, where we observe that, across all datasets, our proposed method achieves superior generation performance compared to baseline models with simple or learned prior distributions. We show qualitative results of generated samples in Figure \ref{fig:main}, where we observe that our model can generate diverse, sharp and high-quality samples. Additional qualitative samples are presented in Appendix \ref{app:additional results}. To test our method's scalability, we also trained a larger generator on CelebA-HQ ($128 \times 128$); as shown in Appendix \ref{app:additional results}, the model can produce realistic samples. \begin{table*} \small \caption{MSE($\downarrow$) and FID($\downarrow$) obtained from models trained on different datasets. For our reported results, the FID is computed based on 50k generated images and 50k real images, and the MSE is computed based on 10k test images.} \label{table:main} \centering \begin{tabular}{ccccccc} \toprule & \multicolumn{2}{c}{SVHN} & \multicolumn{2}{c}{CelebA} & \multicolumn{2}{c}{CIFAR-10} \\ & MSE & FID & MSE & FID & MSE & FID\\ \midrule VAE \citep{kingma2014vae} &0.019 &46.78&0.021&65.75&0.057&106.37\\ ABP \citep{han2017alternating}&-&49.71&-&51.50&-&-\\ SRI \citep{nijkamp2020learning} &0.018&44.86&0.020&61.03&-&-\\ SRI (L=5) \citep{nijkamp2020learning} &0.011&35.32&0.015&47.95&-&-\\ 2s-VAE \citep{dai2019diagnosing} &0.019&42.81&0.021&44.40&0.056&72.90\\ RAE \citep{ghosh2019variational}& 0.014&40.02&0.018&40.95&0.027&74.16\\ NCP-VAE \citep{aneja2021ncpvae} &0.020&33.23&0.021&42.07&0.054&78.06\\ LEBM \citep{pang2020ebmprior} &0.008&29.44&0.013&37.87&0.020&70.15\\ \midrule Adaptive CE (ours) &\textbf{0.004}&\textbf{26.19}&\textbf{0.009}&\textbf{35.38}&\textbf{0.008}&\textbf{65.01}\\ \bottomrule \end{tabular} \end{table*} \begin{figure*} \centering \begin{subfigure}{.32\linewidth} \includegraphics[scale=0.42]{figure/sample_svhn_small.png} \caption{SVHN} \end{subfigure} \begin{subfigure}{.32\linewidth} \includegraphics[scale=0.21]{figure/celeba_sample.png} \caption{CelebA} \end{subfigure} \begin{subfigure}{.32\linewidth} \includegraphics[scale=0.42]{figure/cifar_sample.png} \caption{CIFAR-10} \end{subfigure} \caption{\label{fig:main} Samples generated from our models trained on the SVHN, CelebA and CIFAR-10 datasets.} \end{figure*} \textbf{Reconstruction: }Note that the posterior Langevin dynamics should not only help to learn the latent space EBM prior model, but also produce samples that approximately come from the true posterior distribution $p_{\theta}({\mathbf{z}}|{\mathbf{x}})$ of the generator
model. To verify this, we evaluate the accuracy of the posterior inference by examining the reconstruction error on test images. We quantitatively compare reconstructions of test images with those of baseline models using the mean square error (MSE) in Table \ref{table:main}. We observe that our method consistently obtains lower reconstruction errors than the competing methods. We also provide qualitative reconstruction results in Appendix \ref{sec:recon}. \subsection{Anomaly Detection} Anomaly detection is another task with which to evaluate the generator model. With a generator and an EBM prior model trained on in-distribution data, the posterior $p_{\theta}({\mathbf{z}}|{\mathbf{x}})$ should yield separated probability densities for in-distribution and out-of-distribution (anomalous) samples. In particular, we decide whether a test sample ${\mathbf{x}}$ is anomalous by first sampling ${\mathbf{z}}$ from the posterior $p_{\theta}({\mathbf{z}}|{\mathbf{x}})$ by short-run Langevin dynamics, and then computing the joint density $p_{\theta,\phi}({\mathbf{x}}, {\mathbf{z}}) = p_{\theta}({\mathbf{x}}|{\mathbf{z}})p_{\phi}({\mathbf{z}})$. A higher value of the log joint density indicates that the test sample is more likely to be a normal sample. Prior work on using latent variable generative models for anomaly detection includes \citep{xiao2020likelihood,havtorn2021hierarchical,pang2020ebmprior}. Following the experimental settings in \citep{kumar2019maximum,zenati2018efficient}, we in turn designate each class in the MNIST dataset as anomalous and leave the other $9$ classes as normal. Note that this is a challenging task on which previous methods do not perform well. To evaluate the performance, we use the log joint density to compute the area under the precision-recall curve (AUPRC) \citep{fawcett2006introduction}. We compare our method with related models in Table \ref{table:aucroc}, where we observe that our method obtains significant improvements. \begin{table} \small \caption{AUPRC($\uparrow$) scores for unsupervised anomaly detection on MNIST. Numbers are taken from \citep{pang2020ebmprior}, and results for our model are averaged over the last 10 trials to account for variance.} \label{table:aucroc} \centering \begin{tabular}{cccccc} \toprule Heldout Digit & 1 & 4 &5 &7&9 \\ \midrule VAE \citep{kingma2014vae} &0.063 &0.337&0.325&0.148&0.104\\ ABP \citep{han2017alternating}&$0.095\pm 0.03$&$ 0.138\pm 0.04$&$0.147\pm0.03$ &$ 0.138\pm 0.02$&$0.102\pm0.03$\\ MEG \citep{kumar2019maximum} &$0.281\pm 0.04$ &$0.401\pm 0.06$&$0.402\pm 0.06$ &$ 0.290\pm 0.04$ &$0.342\pm 0.03 $\\ BiGAN-$\sigma$ \citep{zenati2018efficient} &$0.287\pm 0.02$& $0.443\pm 0.03$&$0.514\pm 0.03$&$0.347\pm 0.02$&$0.307\pm 0.03$\\ LEBM \citep{pang2020ebmprior} &$0.336\pm 0.01$&$0.630\pm 0.02$&$0.619\pm 0.01$&$0.463\pm 0.01$&$ 0.413\pm 0.01 $\\ \midrule Adaptive CE (ours) &\textbf{0.531} $\pm$ \textbf{0.02} &\textbf{0.729} $\pm$ \textbf{0.02} & \textbf{0.742} $\pm$ \textbf{0.01} &\textbf{0.620} $\pm$ \textbf{0.02} &\textbf{0.499} $\pm$ \textbf{0.01}\\ \bottomrule \end{tabular} \end{table} \subsection{Analyzing Training Loss} In Figure \ref{fig:loss curve}, we plot the evolution of the density ratio estimation loss (Eq. \ref{eq: NCE equivalent}) for each stage of estimation during training. Our experiment has $4$ estimation stages, resulting in $4$ density ratio estimators.
We observe that the loss for the first stage, which estimates the density ratio between the unit Gaussian prior $p_0({\mathbf{z}})$ and the aggregated posterior, is significantly lower than that of the later stages, which estimate the ratio between the updated prior and the updated posterior. This observation is consistent with our intuition: directly discriminating between the Gaussian prior and the posterior is very easy, while introducing additional stages of estimation makes the task more difficult, and hence the estimated density ratio is more reliable. The higher losses in later stages also suggest that the prior is getting close to the aggregated posterior, as the discrimination becomes harder. \subsection{Analyzing Langevin Dynamics} \begin{figure}[h] \centering \begin{minipage}{0.48\textwidth} \centering \includegraphics[scale=0.22]{figure/celeba_traverse.png} \caption{\label{fig:LD traverse}Transition of Langevin dynamics initialized from $p_0({\mathbf{z}})$ towards $p_{\phi}({\mathbf{z}})$ for $200$ steps.} \end{minipage}\hfill \begin{minipage}{0.48\textwidth} \centering \includegraphics[scale=0.46]{figure/loss_curve.png} \caption{\label{fig:loss curve} Density ratio estimation loss for each estimation stage.} \end{minipage} \end{figure} In Figure \ref{fig:LD traverse}, we visualize the transition of Langevin dynamics initialized from $p_0({\mathbf{z}})$ towards $p_{\phi}({\mathbf{z}})$ on a model trained on CelebA. The LD iterates for $200$ steps, which is longer than the LD used for training ($30$ steps). We expect that, with a well-trained $p_{\phi}({\mathbf{z}})$, the trajectory of a Markov chain should transit towards samples of higher quality. Indeed, we observe that the quality of the synthesis improves significantly as the LD progresses. In addition, we observe human faces with different identities along the LD, suggesting that the Markov chain can mix between different modes of the prior distribution. This indicates that the density function of the learned EBM prior has a smooth geometry that allows MCMC to mix well. \vspace{-1mm} \subsection{Ablation Study}\label{sec:ablation} To better understand our proposed method, we conduct ablation studies on the number of density ratio estimators and the training method. We use CelebA for the ablation experiments. \textbf{Number of stages. }The most important hyper-parameter of our method is the number of density ratio estimators or, equivalently, the number of training stages. We present the FID scores of models trained with different numbers of stages in the first part of Table \ref{table:ablation}. The $0$-stage row corresponds to no latent EBM at all, i.e., simply training a generator model by short-run inference and sampling from it by decoding ${\mathbf{z}} \sim p_{0}({\mathbf{z}})$. We make the following observations. First, directly sampling latent variables from $p_{0}$ leads to poor results, while any latent EBM trained by density ratio estimation significantly improves the performance, suggesting the necessity of learning the latent EBM. Second, multi-stage density ratio estimation can further significantly improve upon single-stage estimation. The results indicate that multi-stage density ratio estimation facilitates the training of the latent EBM by gradually making the estimation task harder. We observe that the FID score does not improve beyond $4$ stages, and we therefore choose $4$ as the number of stages for our main experiments. \textbf{Training method: adaptive vs. non-adaptive.
}It is important to distinguish our method from TRE in \citet{rhodes2020telescoping}. TRE assumes the target distribution to be fixed; therefore, if we adopted TRE, the posterior distribution $p_{\theta}({\mathbf{z}}|{\mathbf{x}})$ would be fixed throughout training. In contrast, our training method is adaptive in the sense that the target posterior is updated by incorporating the current EBM prior into the joint distribution when a new stage is introduced. To quantitatively compare these two approaches, we also train a non-adaptive version of the model and report the numbers in the second part of Table \ref{table:ablation}. We observe that models trained with non-adaptive multi-stage density ratio estimation obtain significantly worse results. We therefore believe that it is crucial to learn the density ratios sequentially with an adaptive posterior. \vspace{-3mm} \begin{table}[t] \small \caption{Results of the ablation study on the CelebA dataset.} \label{table:ablation} \centering \begin{tabular}{c| c c c c c| c c c c c} \hline Method & \multicolumn{5}{c}{Adaptive} & \multicolumn{5}{c}{Non-adaptive} \\ \cline{1-11} $\#$ of stages & 0 & 1 & 2 & 4 & 8 & 0 & 1 & 2 & 4 & 8\\ \hline FID & 62.78 & 44.17 & 39.85 & \textbf{35.38} & 35.84 & 62.78 & 43.84 & 42.61 & 42.48 & 43.06 \\ \hline \end{tabular} \end{table} \subsection{Parameter Efficiency} One potential disadvantage of our method is its parameter inefficiency, stemming from the multiple estimator networks. Moreover, since the training is sequential, we cannot share parameters between estimators as done in \citet{rhodes2020telescoping}. Fortunately, our EBM lives in the latent space, so the networks are lightweight. For example, with $4$ density ratio estimators, the number of parameters in the prior EBM is only around $1\%$ of the number of parameters in the generator. In addition, we confirm that the larger number of parameters in the latent EBM is not the cause of the improvements, as we train a single-stage model of $4\times$ the size and observe no improvement. \section{Conclusions} In this paper, we propose adaptive multi-stage density ratio estimation, an effective method for learning an EBM prior for a generator model. Our method learns the latent EBM by introducing multiple density ratio estimators that learn the density ratio between the prior and the posterior sequentially and adaptively. We demonstrate the effectiveness of our method through comprehensive experiments, and the empirical results show the advantage of our method on generation, reconstruction and anomaly detection tasks. As future directions, our method can potentially be applied to modeling the latent space of generator models in other domains, such as text \citep{pang2021latent} and graphs. We also intend to develop more advanced and efficient inference schemes for the posterior density. \section{Experimental Details} \label{app:details} In this section, we introduce the detailed settings of our experiments.
\subsection{Datasets} We mainly study our method on SVHN \citep{37648} $(32 \times 32 \times 3)$, CIFAR-10 \citep{krizhevsky2010cifar} $(32 \times 32 \times 3)$, and CelebA \citep{liu2015deep} $(64 \times 64 \times 3)$. Following \citet{pang2020ebmprior}, we use the full training set of SVHN ($73,257$ images) and CIFAR-10 ($50,000$ images), and take $40,000$ examples of CelebA as training data following~\citet{nijkamp2019learning}. The training images are resized and scaled to $[-1, 1]$. \subsection{Network architectures} For the experiments on SVHN, CelebA and CIFAR-10, each density ratio estimator network has the simple fully-connected structure described in Table \ref{table:energy net}. \begin{table}[ht] \caption{Network structure of the density ratio estimator. LReLU indicates the Leaky ReLU activation function; the slope of the Leaky ReLU is set to 0.1.}\label{table:energy net} \centering \begin{tabular}{cc} Layers & In-Out Size \\ \hline Input: $z$ & 100 \\ Linear, LReLU & 200 \\ Linear, LReLU & 200 \\ Linear & 1 \\ \hline \end{tabular} \end{table} We let the generator network have a simple deconvolutional structure, similar to DCGAN \citep{radford2015unsupervised}. The generator network for each dataset is depicted in Table \ref{table:generator}. \begin{table}[ht] \caption{Network structures of the generator networks for SVHN, CelebA and CIFAR-10 (from top to bottom). convT($n$) indicates a transposed convolution with $n$ output channels. LReLU indicates the Leaky ReLU activation function; the slope of the Leaky ReLU is set to 0.2.}\label{table:generator} \centering \begin{tabular}{ccc} \hline Layers & In-Out Size & Stride\\ \hline Input: $z$ & 1x1x100 &- \\ 4x4 convT(ngf x $8$), LReLU & 4x4x(ngf x 8) & 1 \\ 4x4 convT(ngf x $4$), LReLU & 8x8x(ngf x 4) & 2\\ 4x4 convT(ngf x $2$), LReLU & 16x16x(ngf x 2) & 2\\ 4x4 convT(3), Tanh & 32x32x3 & 2 \\ \hline \end{tabular} \begin{tabular}{ccc} \hline Layers & In-Out Size & Stride\\ \hline Input: $z$ & 1x1x100 &- \\ 4x4 convT(ngf x $8$), LReLU & 4x4x(ngf x 8) & 1 \\ 4x4 convT(ngf x $4$), LReLU & 8x8x(ngf x 4) & 2\\ 4x4 convT(ngf x $2$), LReLU & 16x16x(ngf x 2) & 2\\ 4x4 convT(ngf x $1$), LReLU & 32x32x(ngf x 1) & 2\\ 4x4 convT(3), Tanh & 64x64x3 & 2 \\ \hline \end{tabular} \quad \begin{tabular}{ccc} \hline Layers & In-Out Size & Stride\\ \hline Input: $z$ & 1x1x128 & -\\ 8x8 convT(ngf x $8$), LReLU & 8x8x(ngf x 8) & 1 \\ 4x4 convT(ngf x $4$), LReLU & 16x16x(ngf x 4) & 2\\ 4x4 convT(ngf x $2$), LReLU & 32x32x(ngf x 2) & 2\\ 3x3 convT(3), Tanh & 32x32x3 & 1 \\ \hline \end{tabular} \end{table} \begin{figure*}[t!] \centering \begin{subfigure}{.48\linewidth} \includegraphics[scale=0.68]{figure/original_svhn.png} \end{subfigure} \begin{subfigure}{.48\linewidth} \includegraphics[scale=0.68]{figure/recon_svhn.png} \end{subfigure} \begin{subfigure}{.48\linewidth} \includegraphics[scale=0.35]{figure/original_celeba.png} \end{subfigure} \begin{subfigure}{.48\linewidth} \includegraphics[scale=0.35]{figure/recon_celeba.png} \end{subfigure} \begin{subfigure}{.48\linewidth} \includegraphics[scale=0.68]{figure/original_cifar.png} \end{subfigure} \begin{subfigure}{.48\linewidth} \includegraphics[scale=0.68]{figure/recon_cifar.png} \end{subfigure} \caption{\label{fig:recon} Qualitative results of reconstruction on test images. Left: real images from the test set. Right: reconstructed images obtained by sampling from the posterior.} \end{figure*} \subsection{Training Hyper-parameters} We introduce the hyper-parameter settings used to train our model.
For the main experiments, we use $4$ density ratio estimation stages. We adopt the persistent approach for generating samples from the prior distribution. For the posterior sampling Langevin dynamics, we use step size $0.1$ and run the LD for 30 steps on SVHN and CelebA, and 40 steps on CIFAR-10. The parameters of the density ratio estimators and image generators are initialized with Xavier initialization \citep{glorot2010understanding}. We train both the generator and the density ratio estimators using the Adam optimizer \citep{kingma2014adam}. The learning rate is $1e-4$ for the generator and $5e-5$ for the density ratio estimators. We train the model for 100 epochs on SVHN and CelebA, where a new estimation stage is introduced every $25$ epochs. For CIFAR-10, we train the model for 200 epochs, where a new estimation stage is introduced every $50$ epochs. During the test stage, we run LD on the learned EBM prior with step size $0.1$ for $100$ steps. \section{Reconstruction Samples}\label{sec:recon} In Figure \ref{fig:recon}, we provide qualitative examples of reconstructing test images. We see that our model can reconstruct unseen images faithfully. \begin{figure*} \centering \includegraphics[scale=0.6]{figure/celebahq_sample.png} \caption{\label{fig:high-res} High-resolution samples on CelebA-HQ.} \end{figure*} \section{Additional Qualitative Results} \label{app:additional results} We trained our model on high-resolution $128\times 128$ CelebA-HQ; samples are shown in Figure~\ref{fig:high-res}. We also provide additional qualitative samples from our models trained on SVHN, CelebA and CIFAR-10 in Figure \ref{fig:additional}. \begin{figure*}[h] \centering \begin{subfigure}{.5\linewidth} \includegraphics[scale=0.56]{figure/svhn_large.png} \end{subfigure} \begin{subfigure}{.5\linewidth} \includegraphics[scale=0.28]{figure/celeba_large.png} \end{subfigure} \begin{subfigure}{.5\linewidth} \includegraphics[scale=0.56]{figure/cifar_large.png} \end{subfigure} \caption{\label{fig:additional} Additional randomly generated samples from our models.} \end{figure*}
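The qualitative samples above are produced by the test-stage procedure described in the training details: short-run Langevin dynamics on the learned EBM prior, followed by a single generator pass. A minimal sketch of that sampler is given below; the \texttt{log\_prior} and \texttt{generator} callables and the step-size convention follow the schematic sketch in Appendix~\ref{app:details} and are assumptions rather than the exact implementation.
\begin{verbatim}
import torch

def sample_images(generator, log_prior, n=64, zdim=100,
                  steps=100, step=0.1):
    # z <- z + (step^2 / 2) * grad log p(z) + step * noise
    z = torch.randn(n, zdim)
    for _ in range(steps):
        z = z.detach().requires_grad_(True)
        g = torch.autograd.grad(log_prior(z).sum(), z)[0]
        z = z + 0.5 * step ** 2 * g + step * torch.randn_like(z)
    return generator(z.detach())
\end{verbatim}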
{ "timestamp": "2022-09-20T02:23:56", "yymm": "2209", "arxiv_id": "2209.08739", "language": "en", "url": "https://arxiv.org/abs/2209.08739" }
\section{Introduction} The electromagnetic local density of states (LDOS), a measure of local response---the electric field from a dipolar current source---at a given point in space~\cite{novotny_principles_2012}, plays a central role in many nanophotonic phenomena and applications, including spontaneous~\cite{purcell_absorption_1969} and stimulated~\cite{francs_optical_2010} emission, quantum information~\cite{barnes_classical_2020}, surface-enhanced Raman scattering~\cite{michon_limits_2019,gaponenko_strong_2021}, photovoltaics~\cite{wang_optimization_2014}, radiative heat transfer~\cite{cuevas_radiative_2018}, non-linear frequency conversion~\cite{lin_cavity-enhanced_2016}, and scintillation~\cite{roques-carmes_framework_2022}, to name a few. Enhancing the LDOS by nanostructuring bulk media, a persistent theme in photonics design, is often achieved through the creation of optical resonances~\cite{vahala_optical_2003}: a cavity or resonator supporting a single mode of quality factor $Q$ and mode volume $V$ enhances the LDOS in proportion to the Purcell factor $Q/V$. Many classic design schemes rely on maximizing $Q$ (ring resonators~\cite{armani_ultra-high-q_2003}), minimizing $V$ (plasmonic nanocavities~\cite{baumberg_extreme_2019,chikkaraddy_single-molecule_2016}, bow-tie antennae~\cite{kinkhabwala_large_2009}), or a combination thereof (photonic crystal cavities~\cite{joannopoulos_photonic_2008}, metal-dielectric hybrid structures~\cite{karamlou_metal-dielectric_2018}). Going beyond the single-mode picture, multi-mode and dispersion-based effects such as exceptional points~\cite{pick_general_2017,lin_enhanced_2016} and slow light devices~\cite{dewhurst_slow-light-enhanced_2010} have also been explored. Recently, complex devices obtained by application of large-scale structural optimization appear to combine several confinement mechanisms~\cite{wang_maximizing_2018,albrechtsen_nanometer-scale_2021}, providing further improvements in device performance. Given its relevance in optics and the ever increasing structural freedom brought on by advances in nanofabrication~\cite{molesky_inverse_2018}, interest in assessing limits on achievable LDOS enhancement has grown over the past few decades~\cite{chao_physical_2022}. Spectral sum rules~\cite{barnett_sum_1996,scheel_sum_2008,sanders_analysis_2018} pin the frequency-integrated LDOS of any system but are of limited utility in practical settings involving finite and often narrow source bandwidths. Specialized results pertaining to achievable mode quality factors~\cite{raman_upper_2013} or mode volumes in cavity settings~\cite{zhao_minimum_2020} have proven useful in many applications but are not sufficiently general to account for multi-mode effects. Passivity requirements based on achievable material response were recently used to bound both single-frequency~\cite{miller_fundamental_2016} and finite-bandwidth~\cite{shim_fundamental_2019} LDOS. However, passivity alone cannot fully capture the wave nature of light as constrained by Maxwell's equations, e.g., the necessity of phase matching to achieve resonance, leading to loose limits when contributions other than those coming from evanescent fields become relevant. In this article, we generalize and lift several limitations imposed by these prior approaches to provide expressions and predictions for the largest spectral-integrated LDOS that may be achieved in a structured medium.
The derived bounds incorporate wave and material constraints imposed by Maxwell's equations over any desired length-scale and are geometry-agnostic: besides specifying the material susceptibility $\chi$, a frequency window of interest, and a bounding domain that the structured medium must reside within, no further assumptions are made on device topology. We consider sources both enclosed within and external to devices, obtaining bounds that generally come within an order of magnitude of complicated structures discovered through inverse methods~\cite{liang_formulation_2013}. Furthermore, we find that the mere requirement of energy conservation is sufficient to place tight limits on large devices; we exploit this fact to derive an analytical upper bound (\ref{eq:theta_dual}) along with integral expressions and asymptotic analysis for the particular scenario of a source above a semi-infinite, structured slab. The bounds show saturation to a finite value as $\chi \rightarrow \infty$ and varying power-law scalings with respect to source bandwidth $\Delta \omega$, with the maximum LDOS transitioning from a scaling $\propto \Delta \omega^{-1}$ to one $\propto \Delta \omega^{-1/4}$ as material absorption begins to limit the net enhancement that any one mode may contribute. \section{Problem Formulation} Working in dimensionless units of $\epsilon_0=\mu_0=1$, and considering only nonmagnetic materials, the partial LDOS at frequency $\omega$ and position $\vb{x}'$ along the direction $\hat{\vb{e}}$ can be shown, by Poynting's theorem~\cite{jackson_classical_1999}, to be directly proportional to the average power $\rho(\omega)$ emitted by a harmonic dipole source $\vb{J}(\vb{r})e^{-i\omega t} = e^{-i\omega t} \delta(\vb{r}-\vb{x}') \hat{\vb{e}}$ at the same location, \begin{equation} \rho(\omega) \equiv -\frac{1}{2} \Re{\int \vb{J}^*(\vb{r}) \cdot \vb{E}(\vb{r}) \,\dd\vb{r} } , \end{equation} where the electric field $\vb{E}$ generated by $\vb{J}$ solves Maxwell's equations, \begin{equation} \curl \curl \vb{E}(\vb{r}) - \omega^2 \epsilon(\vb{r}) \vb{E}(\vb{r}) = i\omega\vb{J}(\vb{r}) .
\end{equation} Since real sources emit light over a finite bandwidth (e.g., the Planck spectrum of heated bodies or the fluorescence/spontaneous emission linewidth of atoms~\cite{hindmarsh_atomic_2014}), we instead consider the more natural choice of a frequency average of $\rho(\omega)$ over a Lorentzian lineshape centered at $\omega_0$ with bandwidth $\Delta\omega_{src} = \omega_0/(2 Q_{src})$ (corresponding source ``quality factor'' $Q_{src}$): \begin{subequations} \begin{align} \langle \rho \rangle_{\omega_0, Q_{src}} &= \int_{-\infty}^\infty \frac{\Delta\omega_{src}/\pi}{(\omega-\omega_0)^2 + \Delta\omega_{src}^2} \rho(\omega) \,\dd\omega . \label{eq:freq_avg} \end{align} \end{subequations} As pointed out in~\cite{liang_formulation_2013}, such a modified figure of merit not only captures the spectral lineshape of many practical sources~\cite{hindmarsh_atomic_2014}, but also offers a computational and conceptual advantage: instead of evaluating the frequency integral in (\ref{eq:freq_avg}) directly, one can instead carry out a complex-$\omega$ contour integral over the upper half plane and exploit causality to convert, using the residue theorem, the spectral average $\langle \rho \rangle$ to the single complex-frequency $\rho(\tilde{\omega})$ and an electrostatic (zero-frequency) contribution: \begin{equation} \langle \rho \rangle_{\omega_0, Q_{src}} = \rho\bigg(\tilde{\omega}\equiv \omega_0 + \frac{\omega_0}{2Q_{src}} i \bigg) + \frac{4Q_{src}}{\pi(4Q_{src}^2 + 1) \omega_0} \alpha. \label{eq:cplx_freq_rho} \end{equation} (The first term follows because the Lorentzian kernel in (\ref{eq:freq_avg}), viewed in the complex plane, has a single upper-half-plane pole at $\tilde{\omega}=\omega_0 + i\Delta\omega_{src}$ with residue $1/(2\pi i)$; the electrostatic term collects the remaining zero-frequency contribution of $\rho$.) Here $\alpha=\frac{1}{2} \Re{ \vb{p}_0 \cdot \vb{E}_0}$, where $\vb{p}_0$ is a unit-amplitude electrostatic dipole and $\vb{E}_0$ the field it generates; this electrostatic contribution leads to an all-frequency integrated sum rule in the wide bandwidth limit~\cite{sanders_analysis_2018,shim_fundamental_2019}. In this paper we focus on the practically important case of moderate or narrow source bandwidth, $Q_{src} \gg 1$, and thus neglect the electrostatic term in (\ref{eq:cplx_freq_rho}) going forward. Thus, the structural design problem for maximizing the bandwidth-averaged LDOS can be formulated as \begin{subequations} \begin{align} \max_{\epsilon(\vb{r}; \tilde{\omega})} \quad \rho(\tilde{\omega}) = -\frac{1}{2}\Re{\int \vb{J}^*(\vb{r};\tilde{\omega}) \cdot \vb{E}(\vb{r}; \tilde{\omega}) \,\dd\vb{r} } \label{eq:TO_objective} \end{align} given the constraints \begin{align} &\curl \curl \vb{E}(\vb{r}; \tilde{\omega}) - \tilde{\omega}^2 \epsilon(\vb{r}; \tilde{\omega}) \vb{E}(\vb{r}; \tilde{\omega}) = i\tilde{\omega}\vb{J}(\vb{r}; \tilde{\omega}) \\ &\epsilon(\vb{r}; \tilde{\omega}) = \begin{cases} 1 \text{ or } 1+\chi(\tilde{\omega}) & \vb{r} \in V \\ 1 & \vb{r} \notin V \end{cases} \label{eq:structural_opt} \end{align} \end{subequations} where $V$ is a pre-specified design region that the structure resides within and $\chi(\tilde{\omega})$ is the Fourier transform of the bulk material susceptibility evaluated at the complex frequency $\tilde{\omega}$. In later expressions we may suppress the $\tilde{\omega}$ dependence of time-harmonic variables for notational simplicity. \begin{figure*} \includegraphics[width=\linewidth]{finite_size_aggregate.pdf} \caption{\label{fig:FDFD_aggregate} Bounds and inverse designs with TM polarization, $\chi=4$, $Q_{src}\in \{10^2, 10^4, 10^6\}$, and finite design domain size $L$ for (a) dipole on the side of a square design region and (b) dipole in the center of a square design region.
Solid lines are bounds that are converged with respect to increasing the number of constraint subregions $V_k$. Dotted lines are bounds with just the global constraint $V_k=V$. Squares are topology optimized structures from random initializations. The hollow circle is a ring resonator with inner and outer radii approximately 1.85 and 2.05 respectively, with the exact resonant radius requiring accuracy up to 8 significant figures to achieve the plotted enhancement (SI); the filled circle is a topology optimized structure starting from that ring resonator. LDOS enhancement as a function of $Q_{src}$ for fixed $L=1.6$ is shown beneath the main figures, along with designs corresponding to filled shapes.} \end{figure*} Due to the high dimensionality of the structural degrees of freedom $\epsilon(\vb{r})$ and the non-convex dependence of the field $\vb{E}(\vb{r})$ on $\epsilon(\vb{r})$, it is generally not possible to solve for the global optimum of (\ref{eq:structural_opt})~\cite{christiansen_inverse_2021}. However, it is possible to exploit an alternative parametrization of the problem, in which the polarization density $\vb{P}(\vb{r})=(\epsilon(\vb{r})-1)\vb{E}(\vb{r})$, rather than $\epsilon(\vb{r})$, serves as the optimization degree of freedom, to obtain bounds on $\rho(\tilde{\omega})$ applicable to \emph{any possible structure}, given only the choices of design region $V$ and material susceptibility $\chi$~\cite{chao_physical_2022}. In this context, it becomes convenient to decompose the total field into the vacuum field emitted by the source and the scattered field emitted by the polarization currents: \begin{align*} &\vb{E}(\vb{r}) = \vb{E}_{vac}(\vb{r}) + \vb{E}_{sca}(\vb{r}) \\ &= \frac{i}{\tilde{\omega}} \int \mathbb{G}(\vb{r},\vb{r'}) \cdot \vb{J}(\vb{r'}) \,\dd\vb{r'} + \int \mathbb{G}(\vb{r},\vb{r'}) \cdot \vb{P}(\vb{r'}) \,\dd\vb{r'} \numthis \end{align*} with $\mathbb{G}(\vb{r},\vb{r'})$ denoting the vacuum dyadic Green's function, the solution to \begin{equation} \curl \curl \mathbb{G} - \tilde{\omega}^2 \mathbb{G} = \tilde{\omega}^2 \mathbb{I} . \end{equation} Note that our definition of the Green's function has an extra global factor of $\tilde{\omega}^2$ compared to the standard definition. Similarly, the net power extracted from the dipole source can be decomposed into constant (structure-independent) vacuum and scattered-field contributions: \begin{subequations} \begin{equation} \rho = \rho_{vac} + \rho_{sca}(\vb{P}) , \end{equation} \begin{equation} \rho_{vac} = -\frac{1}{2}\Re{\frac{i}{\tilde{\omega}} \iint \vb{J}^*(\vb{r}) \cdot \mathbb{G}(\vb{r},\vb{r'}) \cdot \vb{J}(\vb{r'}) \,\dd\vb{r'} \,\dd\vb{r} } , \end{equation} \begin{align*} \rho_{sca}(\vb{P}) &= -\frac{1}{2} \Re{ \iint \vb{J}^*(\vb{r}) \cdot \mathbb{G}(\vb{r},\vb{r'}) \cdot \vb{P}(\vb{r'}) \,\dd\vb{r'} \,\dd\vb{r}} \\ &= -\frac{1}{2} \Im{ \tilde{\omega} \int \vb{E}_{vac}(\vb{r'}) \cdot \vb{P}(\vb{r'}) \,\dd\vb{r'} } ,\numthis \label{eq:rho_sca} \end{align*} \end{subequations} where in the second line of (\ref{eq:rho_sca}) we made use of the fact that $\vb{J}^* = \vb{J}$ (the global phase of the dipole source is irrelevant) and the reciprocity relation $\mathbb{G}(\vb{r},\vb{r'}) = \mathbb{G}^T(\vb{r'},\vb{r})$. The key to formulating a shape-independent bound on $\rho$ is to forgo the need for structural information by relaxing the requirement that $\vb{P}$ satisfy Maxwell's equations everywhere, imposing instead a smaller subset of wave constraints~\cite{chao_physical_2022}.
Such constraints can indeed be derived from Maxwell's equations through a complex frequency generalization of the time-harmonic Poynting's theorem (see~\cite{chao_physical_2022} and SI), giving \begin{align*} &\int_{V_k} \vb{E}_{vac}^*(\vb{r}) \cdot \vb{P}(\vb{r}) \,\dd\vb{r} = \int_V \vb{P}^*(\vb{r}) \cdot \int_{V_k} \mathbb{U}(\vb{r},\vb{r'}) \cdot \vb{P}(\vb{r'}) \,\dd\vb{r'} \,\dd\vb{r} ,\\ &\mathbb{U} \equiv \chi^{*-1} \delta(\vb{r}-\vb{r'}) - \mathbb{G}^*(\vb{r},\vb{r'}) \numthis \label{eq:constraints} \end{align*} where $V_k \subseteq V$ is any spatial region within the design domain $V$ and we have defined the composite operator $\mathbb{U}$. Notably, (\ref{eq:constraints}) can be interpreted as a statement of the conservation of energy over each spatial region, requiring only specification of the allotted design footprint $V$ and available susceptibility $\chi$. Note also that geometric information is contained only implicitly in $\vb{P}$, with material properties specified by the known complex scalar $\chi$. The resulting optimization problem for the LDOS can be written as \begin{subequations} \begin{equation} \max_{\vb{P}(\vb{r}; \tilde{\omega})} \quad \rho_{sca} = -\frac{1}{2} \Im{ \tilde{\omega} \bra{\vb{E}_{vac}^*}\ket{\vb{P}} } \label{eq:primal_objective} \end{equation} such that $\forall V_k \subseteq V$ \begin{multline} \Re\text{,}\Im \big\{ \bra{\vb{E}_{vac}}\mathbb{I}_{V_k}\ket{\vb{P}} - \bra{\vb{P}} \mathbb{U}\mathbb{I}_{V_k}\ket{\vb{P}} \big\}= 0 \label{eq:primal_constraint} \end{multline} \label{eq:primal_problem}% \end{subequations} where $\mathbb{I}_{V_k}$ is the projection operator into the region $V_k$. Above, we employed Dirac bra-ket notation for brevity, with $\ket{\vb{a}}$ denoting the vector field $\vb{a}(\vb{r})$ over the design domain $V$, $\bra{\vb{a}}\ket{\vb{b}}=\int_V \vb{a}^*(\vb{r}) \cdot \vb{b}(\vb{r}) \,\dd\vb{r}$ the conjugated inner product, and $\mathbb{A} \ket{\vb{a}} = \int_V \mathbb{A}(\vb{r},\vb{r'}) \cdot \vb{a}(\vb{r'}) \,\dd\vb{r'}$ the operator action. (\ref{eq:primal_problem}) is a quadratically constrained linear program for $\vb{P}$. While direct optimization is still not guaranteed to find a global optimum due to the non-convexity of certain constraints in (\ref{eq:primal_constraint}), a bound on the problem can be computed efficiently via the Lagrange dual function~\cite{angeris_heuristic_2021,chao_physical_2022,boyd_convex_2004}, hereafter referred to as the dual bound. In principle, the maximum LDOS achievable in a structured medium is known to grow indefinitely in the single-frequency limit of $\Delta\omega_{src}\to 0$. For instance, in lossless media, the whispering gallery modes of ring resonators and defect modes of photonic crystals exhibit exponential scaling in $Q$ with increasing system size~\cite{marcatili_bends_1969,joannopoulos_photonic_2008}, while their mode volumes either increase polynomially or remain constant, respectively, leading to exponential growth in the Purcell factor. Geometries with sharp tips can also give rise to field singularities and hence vanishing mode volumes~\cite{budaev_electromagnetic_2007}. However, in practice, finite source bandwidths, device footprint, and material losses limit the utility of diverging lifetimes, while fabrication imperfections and atomic-scale effects preclude realization of arbitrarily small features.
As will be seen below, the imposition of finite $\Delta \omega_{src}$ and a minimum vacuum gap $d$ between source and medium regularizes such divergences, paving the way for investigations of LDOS growth characteristics in realistic settings. We also note that for finite $\Delta \omega_{src}$ the vacuum LDOS contribution $\rho_{vac}$ diverges due to contributions from high frequencies. This is an artifact of the Lorentzian lineshape chosen and has little bearing on practical applications; the results presented will thus focus on the structural contribution $\rho_{sca}$, which does remain finite. To make explicit the scale invariance of Maxwell's equations, all lengths are given in units of the center wavelength $\lambda_0 = 2\pi c/\omega_0$, and $\rho_{sca}$ is normalized by the single-frequency vacuum dipole radiation $\rho_0 \equiv \rho_{vac}(\omega_0)$. \begin{figure*} \includegraphics[width=\linewidth]{lossless_halfspace_results.pdf} \caption{\label{fig:halfspace_lossless} Maximum LDOS enhancement near a half-space design region as a function of (a) separation distance and (b) real material susceptibility $\chi$ for lossless dielectrics. While (b) plots bounds for lossless dielectrics, the material-independent limit as $\chi \rightarrow \infty$ is a bound for general lossy $\chi$ as well (SI). } \end{figure*} \section{Numerical Results} The dual bound can be obtained numerically for arbitrary domains via a suitable representation of the Green's function and Maxwell operator. We do so for two standard settings: an external--dipole configuration where the dipole is adjacent to the structure, relevant for instance to gratings and solid-state defect couplers~\cite{chakravarthi_inverse-designed_2020,wambold_adjoint-optimized_2021}; and an enclosed--dipole configuration in which the dipole is surrounded by the structure, relevant to photonic crystal cavities~\cite{joannopoulos_photonic_2008}, bullseye gratings~\cite{li_efficient_2015}, and bowtie antennas~\cite{kinkhabwala_large_2009}. For computational convenience, all calculations are performed in 2d for both the out-of-plane (scalar) TM and in-plane (vectorial) TE electric-field polarizations. The finite-difference frequency-domain method is used to represent all relevant fields and operators (SI). Progressively tighter bounds were obtained by gradually increasing the number of constraint sub-regions $V_k$ down to the computational pixel level. Inverse designs were also obtained to compare against the bounds, following the topology optimization (TO) approach detailed in~\cite{liang_formulation_2013}. Results pertaining to the maximum LDOS for both external and enclosed configurations are shown in Figure~\ref{fig:FDFD_aggregate} as a function of the design footprint. Since the vacuum LDOS diverges when integrated against a Lorentzian lineshape~\cite{barnett_sum_1996,liang_formulation_2013,shim_fundamental_2019}, this and subsequent results pertain only to the structural contribution $\rho_{sca}$, defining the enhancement factor in relation to the single-frequency vacuum emission $\rho_0 \equiv \rho_{vac}(\omega_0)$. In both settings the medium is assumed to be lossless, $\Im\chi=0$. For a fixed $Q_{src}$, the bounds are seen to grow exponentially before saturating with increasing domain size $L$. Conversely, for fixed $L$ the bounds grow linearly before saturating with increasing $Q_{src}$.
Both observations are consistent with a resonant enhancement mechanism: since the finite vacuum gap $d$ precludes field divergences in the vicinity of the dipole, thereby imposing an upper bound on its coupling to any mode, one would expect improvements in the Purcell factor to be mainly driven by growth in the modal lifetimes. Supposing a system with response dominated by a single mode of quality factor $Q_{mode} = \omega_0/2\Delta\omega_{mode}$, (\ref{eq:freq_avg}) yields a frequency-integrated LDOS that scales as \begin{equation} \int_{-\infty}^\infty \frac{\Delta\omega_{src}/\pi}{\omega^2+\Delta\omega_{src}^2} \frac{\Delta\omega_{mode}/\pi}{\omega^2+\Delta\omega_{mode}^2} \,\dd\omega = \frac{2\,Q_{src} Q_{mode}}{\pi\omega_0(Q_{src}+Q_{mode})} . \label{eq:single_mode_avg} \end{equation} Thus, this modal picture predicts linear growth $\propto Q_{src}$ in (\ref{eq:freq_avg}) so long as the system supports a sufficiently long-lived mode, $Q_{mode} \gg Q_{src}$, eventually saturating to a value $\propto Q_{mode}$ whenever $Q_{mode} \ll Q_{src}$. In the absence of material dissipation, the highest achievable $Q_{mode}$ is constrained solely by radiative losses and, as confirmed by our bound calculations, known to scale at least exponentially with $L$ in ring-resonator and photonic-crystal cavities. Comparing our bounds (solid lines) against inverse designs (squares), one observes remarkable alignment, with bounds and designs often coming within an order of magnitude of each other. The biggest performance gaps occur at high $Q_{src}$ in the external configuration and can be at least partially attributed to TO becoming trapped in local optima that, despite exhaustive sampling of initial conditions, underperform simple ring-resonator geometries. Even starting with ring resonators as initial seeds, TO was only able to make modest improvements. In contrast, Bragg--onion type structures discovered by TO in the cavity setting are seen to tightly approach corresponding bounds. Comparing the high degree of precision needed to specify the dimensions of the ring resonator against the relative robustness of photonic bandgap confinement~\cite{rodriguez_disorder-immune_2005}, we hypothesize that attaining high-performing designs for the external configuration at narrow bandwidths requires resonances based on sensitive interference cancellation. This leads to an ill-behaved optimization landscape riddled with subpar local optima, with ring resonators an example of a class of high-performing designs not easily discoverable via brute-force optimization. Finally, the observed dependence of the dual bound on the number and size of subregion constraints (\ref{eq:primal_constraint}) reveals two important features: first, the necessity of imposing many subwavelength constraint regions (SI) for the bounds to exhibit saturation as $Q_{src} \to \infty$; second, the fact that a single, global energy-conservation constraint appears sufficient to produce tight bounds when either device sizes or source bandwidths become sufficiently large. This suggests that subregion constraints are necessary to adequately ``resolve'' wave effects, including phase-matching restrictions, that limit mode confinement in finite systems; conversely, when system sizes no longer limit the highest achievable $Q_{mode}$ in relation to $Q_{src}$, the latter and not the former becomes a bottleneck for further enhancements.
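As a quick numerical sanity check of (\ref{eq:single_mode_avg}), the following snippet---a minimal sketch with arbitrarily chosen test values, independent of the bound machinery---compares direct quadrature of the two-Lorentzian overlap against the closed form and illustrates the two limiting regimes:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

w0 = 1.0
lor = lambda w, d: (d / np.pi) / (w**2 + d**2)  # unit-area Lorentzian

for Qs, Qm in [(1e1, 1e3), (1e2, 1e2), (1e3, 1e1)]:
    ds, dm = w0 / (2 * Qs), w0 / (2 * Qm)
    # integrand is even in w, so integrate [0, inf) and double
    num = 2 * quad(lambda w: lor(w, ds) * lor(w, dm), 0, np.inf)[0]
    closed = 2 * Qs * Qm / (np.pi * w0 * (Qs + Qm))
    print(Qs, Qm, num, closed)  # quadrature matches the closed form
# Qs << Qm: value ~ 2*Qs/(pi*w0), i.e., linear growth with Qsrc;
# Qs >> Qm: value ~ 2*Qm/(pi*w0), i.e., saturation at the mode lifetime.
\end{verbatim}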
\section{Large-size Semi-analytics} We now exploit the remarkable success of global energy conservation in describing large-system behavior to obtain a semi-analytical bound on the maximum LDOS above a semi-infinite structure, i.e., the $L\rightarrow\infty$ limit of the external--dipole configuration depicted in Figure~\ref{fig:FDFD_aggregate}a. As shown in the SI, the solution of the dual problem under both resistive and reactive energy conservation constraints (\ref{eq:primal_constraint}) can be mapped to a family of solutions involving a unitary phasor parametrized by the single constraint angle $\theta$, leading to the modified problem: \begin{subequations} \begin{equation} \max_{\vb{P}(\vb{r}; \tilde{\omega})} \quad \rho_{sca} = -\frac{1}{2} \Im{ \tilde{\omega} \bra{\vb{E}_{vac}^*}\ket{\vb{P}} } \label{eq:theta_problem} \end{equation} such that, for $\vb{P}$ supported in $V$, \begin{equation} \Im{e^{i\theta} \bra{\vb{E}_{vac}}\ket{\vb{P}}} - \bra{\vb{P}} \text{Asym}(e^{i\theta}\mathbb{U})\ket{\vb{P}} = 0 \label{eq:theta_constraint} \end{equation} \end{subequations} where $\text{Asym}(\mathbb{A}) = (\mathbb{A}-\mathbb{A}^\dagger)/2i$ is the Hermitian anti-symmetric component of the operator $\mathbb{A}$. Owing to its simplicity, the solution to this dual bound for any given $\theta$ can be written explicitly as \begin{multline} \rho_{sca} \leq \frac{1}{4} \mathrm{Re}\Big\{\Big(-\tilde{\omega}^* e^{i\theta} \ket{\vb{E}^*_{vac}}+ |\tilde{\omega}| \ket{\vb{E}_{vac}}\Big)^\dagger \\ \text{Asym}(e^{i\theta}\mathbb{U})^{-1} \ket{\vb{E}_{vac}}\Big\}, \label{eq:theta_dual} \end{multline} which, notably, depends only on the analytically known quantities $\vb{E}_{vac}$, $\chi$, and $\mathbb{U}$. The tightest dual bound consistent with (\ref{eq:theta_dual}) can thus be obtained numerically by carrying out an additional optimization over $\theta$, restricted so that $\text{Asym}(e^{i\theta} \mathbb{U})$ is positive definite (SI). Further analytical insight can be gleaned by expanding the operators and fields in a spectral basis conforming to the symmetry of the design domain: for a half-space enclosure, the natural choice is a Fourier basis $e^{ik_\parallel x_\parallel}$ parametrized by the wavevector $k_\parallel$ parallel to the half-space surface. Carrying out this expansion for the incident field $\ket{\vb{E}_{vac}}$ yields \begin{subequations} \begin{equation} \ket{\vb{E}_{vac}} = -\frac{\tilde{\omega}}{2\sqrt{2\pi}} \int_{-\infty}^\infty \frac{e^{ik_\parallel x_\parallel}}{\sqrt{2\pi}}\frac{1}{k_\perp} e^{ik_\perp x_\perp} e^{ik_\perp d} \dd k_\parallel \end{equation} \end{subequations} where $k_\perp = \sqrt{\tilde{\omega}^2-k_\parallel^2}$ and $\Im(k_\perp)\geq0$. For each conserved $k_\parallel$, the inverse operator image $ \text{Asym}(e^{i\theta}\mathbb{U})_{k_\parallel}^{-1} \cdot e^{ik_\perp x_\perp}$ can be similarly expanded and evaluated explicitly as the sum of two complex sinusoids, leading to \begin{widetext} \begin{equation} \rho_{sca} \leq \min_\theta \frac{1}{16\pi} \int_0^\infty \Re{ -\frac{ \tilde{\omega}^3 e^{i\theta} e^{2ik_\perp d}}{k_\perp^2} \left[ \frac{R_1}{r_1-ik_\perp} + \frac{R_2}{r_2-ik_\perp} \right] + \frac{\lvert\tilde{\omega}\rvert^3 e^{-2\Im(k_\perp)d}}{\lvert k_\perp \rvert^2} \left[\frac{R_1}{r_1+ik_\perp^*} + \frac{R_2}{r_2+ik_\perp^*} \right] } \,\dd k_\parallel \label{eq:halfspace_integral} \end{equation} \end{widetext} where the closed-form complex coefficients $R_{1,2}(k_\parallel;\theta)$ and decay constants $r_{1,2}(k_\parallel;\theta)$ are given in the SI.
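For concreteness, the following is a minimal numerical sketch of how the $\theta$-parametrized dual bound can be evaluated once a discretization is chosen. It assumes a vacuum-field vector \texttt{Evac} and a matrix \texttt{U} (representing $\mathbb{U} = \chi^{*-1}\mathbb{I} - \mathbb{G}^*$ on the design region) are available from some solver, and scans the constraint angle $\theta$ over the positive-definite window using the closed-form dual expression derived in the SI; all names are illustrative, not our FDFD machinery.
\begin{verbatim}
import numpy as np

def asym(M):
    # "Asym": the Hermitian matrix (M - M^dagger) / (2i)
    return (M - M.conj().T) / 2j

def dual_bound(Evac, U, w, nth=721):
    best = np.inf
    for th in np.linspace(-np.pi, np.pi, nth):
        p = np.exp(1j * th)
        A = asym(p * U)
        if np.linalg.eigvalsh(A).min() <= 0:
            continue                       # dual valid only for A > 0
        AiE  = np.linalg.solve(A, Evac)          # A^{-1} |E>
        AiEc = np.linalg.solve(A, Evac.conj())   # A^{-1} |E*>
        q1 = np.real(Evac.conj() @ AiE)          # <E |A^{-1}| E >
        q2 = np.real(Evac @ AiEc)                # <E*|A^{-1}| E*>
        q3 = Evac @ AiE                          # <E*|A^{-1}| E >
        rho = abs(w)/4 * np.sqrt(q1*q2) - np.real(np.conj(p)*w*q3)/4
        best = min(best, rho)                    # tightest angle wins
    return best
\end{verbatim}
In practice the brute-force scan can be replaced by a one-dimensional minimization over $\theta$, and for the half-space geometry the matrix solves reduce to the per-$k_\parallel$ two-sinusoid expansion underlying (\ref{eq:halfspace_integral}).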
The main challenge in evaluating (\ref{eq:theta_dual}) comes from the need to compute the inverse image of $\text{Asym}(e^{i\theta}\mathbb{U})=\Im{e^{i\theta}/\chi^*} + \text{Asym}(e^{-i\theta}\mathbb{G})$, which contains constraints on wave propagation via its dependence on the vacuum Green's function $\mathbb{G}$. In that regard, it is useful to consider an expansion of (\ref{eq:theta_dual}) in orders of $\text{Asym}(e^{-i\theta} \mathbb{G})$ (after using the Cauchy--Schwarz inequality to relax the term involving $\vb{E}_{vac}^*$ (SI)): \begin{align*} &\langle \rho \rangle_{max} = \frac{|\tilde{\omega}|}{2} \bra{\vb{E}_{vac}}\left[\Im{\frac{e^{i\theta}}{\chi^*}} + \text{Asym}(e^{-i\theta}\mathbb{G})\right]^{-1}\ket{\vb{E}_{vac}} \\ &= \frac{|\tilde{\omega}|}{2} \bra{\vb{E}_{vac}} \Im{\frac{e^{i\theta}}{\chi^*}}^{-1} \left[\mathbb{I} - \frac{\text{Asym}(e^{-i\theta} \mathbb{G})}{\Im{e^{i\theta}/\chi^*}} + \cdots \right] \ket{\vb{E}_{vac}} \\ &\leq \frac{|\tilde{\omega}|}{2} \Im{\frac{e^{i\theta}}{\chi^*}}^{-1} \braket{\vb{E}_{vac}} .\numthis \label{eq:Born} \end{align*} The zeroth-order approximation given in (\ref{eq:Born}) is especially simple to evaluate as it only depends on $\chi$, and will hereafter be referred to as a material bound. A special yet sub-optimal value of $\theta$ in (\ref{eq:Born}) recovers prior LDOS bounds based on passivity (SI)~\cite{shim_fundamental_2019}. Notably, this zeroth-order approximation becomes loose whenever $\Im{e^{i\theta}/\chi^*}^{-1} \text{Asym}(e^{-i\theta} \mathbb{G})$ is large compared to $\mathbb{I}$; this is the case for large $|\chi|$ or large design domains (e.g., the half-space) where $\text{Asym}\mathbb{G}$ dominates. Intuitively, terms containing the vacuum Green's function capture physical limitations imposed by multiple scattering, screening, and the finite speed of light, which limit the achievable LDOS. Figure~\ref{fig:halfspace_lossless} shows upper bounds on the LDOS for both TM and TE sources, obtained by evaluating (\ref{eq:halfspace_integral}). In the near-field limit of $d \rightarrow 0$, the TM bounds are found to asymptote to a constant while the TE bounds scale $\propto 1/d^2$, in agreement with a detailed asymptotic analysis of the evanescent $k_\parallel \rightarrow \infty$ behavior of the integrand in (\ref{eq:halfspace_integral}) (SI), and matching prior results based on passivity~\cite{shim_fundamental_2019}. The constant asymptote observed for TM sources as they approach the device reflects a known artifact of 2d scalar electromagnetism, which precludes non-integrable field singularities at sharp corners~\cite{andersen_field_1978}. In contrast, the $1/d^2$ scaling of TE sources confirms the known divergence in the energy concentration of a vector dipole in its near field, with the exponent of $d$ tied to the number of spatial dimensions; note that the maximum LDOS for a 3d source grows $\propto 1/d^3$ as $d \rightarrow 0$ (SI). In the opposite far-field limit of $d \rightarrow \infty$ and for finite $Q_{src}$, the bounds ultimately tend toward zero, reflecting a kind of space--bandwidth constraint on the ability of structuring to affect far-field emission over a finite bandwidth. One can, however, observe plateaus occurring at intermediate separations, $\omega d /c \ll Q_{src}$, wherein structuring can efficiently reflect traveling planewaves back onto the dipole position.
Achievable enhancements in this regime can be shown to decay $\propto e^{-d/Q_{src}}$, manifesting as plateaus in the various plots of Figure~\ref{fig:halfspace_lossless}b. Only in the strictly single-frequency limit $Q_{src} \rightarrow \infty$ do far-field LDOS contributions tend toward constants, $\rho_0/2$ and $\rho_0$ for TE and TM sources, respectively, independently of separation. The importance of properly capturing relevant wave interference effects in these far-field regimes becomes evident when analyzing corresponding predictions for material bounds~\cite{shim_fundamental_2019}, which exhibit drastically inflated $\propto Q_{src}^2$ scaling in this half-space setting (SI). Asymptotic analysis of (\ref{eq:halfspace_integral}), supported by Figure~\ref{fig:halfspace_lossless}b, reveals that LDOS enhancements saturate to finite values as $\abs{\chi} \rightarrow \infty$ (SI), in contrast to prior bounds which grow indefinitely with increasing material response~\cite{shim_fundamental_2019}. This is somewhat surprising given that larger $\chi$ implies a larger (potentially infinite) density of states within the material itself; ultimately, multiple scattering and screening effects lead to restrictions on the possible field localization at the source location that cannot be overcome with clever structuring. Similar conclusions have been reached concerning the efficacy of hyperbolic metamaterials~\cite{miller_effectiveness_2014} at enhancing the LDOS in that particular geometry class. It is also worth noting that the saturation characteristics of the full bounds as a function of $d$ are distinct for the TM and TE polarizations. For TM, reducing $d$ increases the relative advantage of stronger materials; $\rho(\chi \rightarrow \infty) \propto \log(1/d)$ as $d \rightarrow 0$ whereas $\rho(\chi<\infty)$ is finite as $d \rightarrow 0$ (SI). In contrast, for TE the saturation behavior for large $|\chi|$ scales uniformly with $d$; decreasing separation does not increase the relative advantage of stronger materials. \begin{figure} \includegraphics[width=\linewidth]{halfspace_vary_Qs_noMetal.pdf} \caption{\label{fig:halfspace_vary_Qs} Maximum LDOS enhancement near a halfspace design region as a function of the source bandwidth. Squares are the performance of TM inverse designs over a $10 \times 10$ design region for $\chi=5+1i$, with insets $A$ and $B$ the structures found via TO for $Q_{src}=1$ and $Q_{src}=100$ respectively.} \end{figure} With regard to bandwidth scaling, the broad structural freedom afforded by an infinite design domain, coupled with the lack of material dissipation, allows for creation of resonances with arbitrarily small radiative loss rates, $Q_{mode} \to \infty$, leading to the same $\propto Q_{src}$ dependence observed in Figure~\ref{fig:FDFD_aggregate}. The situation changes in dissipative media, where mode lifetimes $Q_{mode} = (\frac{1}{Q_{abs}}+\frac{1}{Q_{rad}})^{-1}$ contain both radiative $Q_{rad}$ and absorptive $Q_{abs}$ contributions, with $Q_{abs} \propto 1/\Im(\chi)$ set by the proportion of stored energy dissipated in the medium per cycle. Hence, under finite absorption, $Q_{abs}$ sets a bound on the highest achievable mode lifetime, with (\ref{eq:single_mode_avg}) suggesting a saturating LDOS $\propto Q_{abs}$ as $Q_{src} \to \infty$.
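The expected saturation follows directly from the single-mode average (\ref{eq:single_mode_avg}): holding $Q_{mode}$ fixed while $Q_{src} \to \infty$,
\begin{equation*}
\lim_{Q_{src}\rightarrow\infty} \frac{2\,Q_{src} Q_{mode}}{\pi\omega_0 (Q_{src}+Q_{mode})} = \frac{2\,Q_{mode}}{\pi\omega_0} \leq \frac{2\,Q_{abs}}{\pi\omega_0} ,
\end{equation*}
so, within this single-mode picture, the bandwidth-averaged enhancement can grow linearly only while $Q_{src} \lesssim Q_{mode}$.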
In contrast, Figure~\ref{fig:halfspace_vary_Qs} shows that the bounds exhibit a transition from the linear $\propto Q_{src}$ scaling of a wide-bandwidth source coupled to a single resonance toward \emph{diverging} response $\propto Q_{src}^{1/4}$ as $Q_{src} \to \infty$, with the transition taking place around $Q_{src} \sim Q_{abs}$. Intuitively, in this regime wherein the lifetime of a single resonance is capped, one might expect the optimal design strategy to shift toward exploiting degeneracies. One class of geometries capable of achieving high modal degeneracies is waveguides, e.g., coupled cavities or photonic crystal gratings, which at least in 1d are known to generate van Hove singularities in the density of states~\cite{ibanescu_enhanced_2006}. Generally, a spectral singularity of the form $\lvert \omega-\omega_0\rvert^{-\alpha}, \, 0<\alpha<1$, will yield a Lorentzian-averaged LDOS that scales as \begin{equation} \int_{-\infty}^\infty \frac{\Delta\omega_{src}/\pi}{(\omega-\omega_0)^2 + \Delta\omega_{src}^2} \frac{1}{\lvert \omega-\omega_0 \rvert^\alpha} \,\dd\omega \propto \frac{1}{(\Delta\omega_{src})^\alpha} \propto Q_{src}^\alpha , \end{equation} as can be seen via the substitution $u = (\omega-\omega_0)/\Delta\omega_{src}$, which leaves a $u$-integral that converges for $0<\alpha<1$. The bounds for the 2d half-space configuration thus suggest the possibility of near-field absorbers supporting quartic ``band-edge'' dispersions capable of efficiently extracting evanescent fields, resulting in singularities of the form $\lvert \omega-\omega_0\rvert^{-1/4}$ (anomalous quartic ``slow'' group-velocity dispersions have been studied, for instance, in the context of waveguide solitons~\cite{blanco-redondo_pure-quartic_2016}). Indeed, TO discovers structures resembling adiabatic gratings (Figure~\ref{fig:halfspace_vary_Qs} insets) and exhibiting the predicted quartic bandwidth scaling, though eventually saturating due to the finite computational domain (SI). \section{Summary} We proposed an improved framework for evaluating upper bounds on the LDOS in structured media, and studied in detail the effects of source bandwidth, material susceptibility, and device footprint. The bounds provide a strong top-down complement to bottom-up design in at least two ways. First, they are useful in quantifying the optimality of existing ``best'' structures while diagnosing potential areas of improvement: as seen in narrow-bandwidth regimes, existing formulations of inverse design fail to converge upon structures with performance close to the global optimum, often being outperformed by traditional ring resonators. Second, they allow studies of fundamental scaling characteristics without prior assumptions on device topology. For finite bandwidths, the quick saturation of the bounds with design size indicates that a device footprint of a few wavelengths is generally enough to achieve near-optimal performance. The observed saturation with increasing susceptibility has direct implications for material choice: a weak material response can be mitigated given sufficient design size and judicious structuring; conversely, there are diminishing returns associated with seeking large absolute values of the susceptibility, raising the importance of other concerns such as material loss and dispersion characteristics.
The impact of material loss was also evaluated in the context of maximum LDOS above a semi-infinite structure, showing a transition from the intuitive linear $\propto Q_{src}$ scaling expected of single-mode resonant enhancement to a less obvious $\propto Q_{src}^{1/4}$ dependence given absorptive materials, inviting further studies into the associated dispersion engineering mechanisms making such scaling possible. While only results for dielectrics ($\Re(\chi)>0$) were shown in this paper, the framework can handle metals ($\Re(\chi)<0$) just as well, yielding broadly similar conclusions. \section{Acknowledgments} This work was supported by the National Science Foundation under the Emerging Frontiers in Research and Innovation (EFRI) program, Award No. EFMA-164098 and the Defense Advanced Research Projects Agency (DARPA) under Agreements No. HR00111820046, No. HR00112090011, and No. HR0011047197. The views, opinions and findings expressed herein are those of the authors and should not be interpreted as representing the official views or policies of any institution. R.K.D. gratefully acknowledges financial support from the Princeton Presidential Postdoctoral Research Fellowship and from the National Academies of Science, Engineering, and Medicine Ford Foundation Postdoctoral Fellowship program. \section{Computational Details} Inverse design in this work was performed using the ceviche Maxwell FDFD solver~\cite{hughes_forward-mode_2019} combined with the method of moving asymptotes algorithm as implemented in the non-linear optimization package NLOPT~\cite{johnson_nlopt_2019}. \\ Finite-size LDOS bounds were computed using an in-house FDFD code to represent the Green's function and the electric fields over the design domain. The dual function is optimized with an in-house implementation of BFGS; for details on the computation of the dual gradient see~\cite{molesky_hierarchical_2020}. \section{Dependence of bounds on the number of subregion constraints} \begin{figure}[!ht] \includegraphics[width=\linewidth]{subregion_convergence.pdf} \caption{\label{fig:subregion_convergence} Plot showing the computed LDOS dual bounds as a function of the number of subregion constraints used. The subplots (a) and (b) correspond to the exterior dipole and interior dipole geometries shown in Fig. 1 of the main text. Solid lines are for system size $L=1$ and dotted lines are for system size $L=2$.} \end{figure} Fig. \ref{fig:subregion_convergence} shows how the bounds depend on the number of subregion constraints used. Generally, the bounds tighten by many orders of magnitude in a rapid transition once the number of subregions reaches a critical value, before converging. For larger $Q_{src}$ the transition is steeper, indicating the importance of subregion constraints for capturing the effects of radiative loss. For larger system size $L$, the transition is shallower: as stated in the main text, in the limit of large $L$ global constraints alone are sufficient to achieve tight bounds. \pagebreak \section{Ring resonator LDOS enhancement} \begin{figure}[!ht] \includegraphics[width=\linewidth]{ring_resonator_results.pdf} \caption{\label{fig:ring resonator} (a) LDOS enhancement of ring resonators with a fixed ring width of $0.2$ and TM dipole source separated $0.2$ away from the ring. The system size $L$ is 2 times the outer radius of the rings. (b) At $Q_{src}\rightarrow \infty$, the change in the ring radius $\Delta r$ such that the LDOS enhancement drops by half.
} \end{figure} In the main text we presented a specific ring resonator design with a ring width of $0.2$ that outperformed topology optimization from multiple random initializations. Here we provide additional data on the performance of ring resonators as a function of size for various $Q_{src}$. In the single-frequency limit, we see the exponential scaling of the LDOS enhancement as a function of ring size; for finite $Q_{src}$ there is a point beyond which larger rings have higher $Q_{mode}$ but do not lead to larger bandwidth-averaged LDOS. For the $Q_{src}=10^6$ design shown in the main text the outer ring diameter is approximately $4.1$; Fig. \ref{fig:ring resonator}b indicates that we require around 8 significant figures of accuracy on the ring size to fall within $50\%$ of the maximum performance. \section{Bandwidth saturation of finite size LDOS bounds} \begin{figure}[!ht] \centering \includegraphics[width=0.5\linewidth]{halfspace_L10_chi5+1j_vary_Qs.pdf} \caption{\label{fig:finite_size} LDOS bounds as a function of $Q_{src}$ for $\chi=5+1i$. } \end{figure} In Fig. 3 of the main text, the inverse designs are performed over a finite design domain of size $10$ by $10$, and the LDOS enhancement saturates with increasing $Q_{src}$. Fig. \ref{fig:finite_size} shows that this saturation is also present when computing the bounds for the finite design domain; continued $Q_{src}^{1/4}$ scaling is seen only for the semi-infinite half-space bounds. \section{Differences in notation in supplementary information compared to the main text} In the following sections, some quantities and variables use a different notation than what is written in the main text. This is predominantly for conciseness in writing down long derivations. For convenience, a complete list of the differences in notation is given here; each individual deviation will also be noted at its first occurrence in the SI. \begin{itemize} \item The vacuum field $\vb{E}_{vac}$ in the main text is referred to as $\vb{E}_v$ in the SI. \item Use of $k_x$ and $k_y$ instead of $k_\parallel$ and $k_\perp$ for half-space bounds, with the $\hat{x}$ direction set as parallel to the half-space surface and the $\hat{y}$ direction perpendicular to it. \item The complex parameters $R_{1,2}$ and $r_{1,2}$ in the main text are replaced with $R_\pm$ and $r_\pm$ in the SI, where the $\pm$ subscripts have more background context within the derivation. \item With dimensionless units $c=1$, $\tilde{k} = \tilde{\omega}/c$ will often be used in place of $\tilde{\omega}$ in the context of spatial Fourier integrals. \end{itemize} \section{Derivation of power conservation constraints via complex Poynting's theorem} Here we present a derivation, via the complex Poynting's theorem, of the complex scattering constraints used. For alternative derivations see early work [...]. \subsection{Complex Poynting's theorem for time-harmonic fields and complex $\omega$} In prior literature the complex Poynting's theorem for time-harmonic fields is typically presented for a real angular frequency $\omega$. The generalization to a complex $\omega$ is straightforward and written explicitly here for clarity.
All fields have (complex) harmonic time dependence $e^{-i\omega t}$, with the time-harmonic Maxwell's equations \begin{subequations} \begin{align} \div{\vb{D}} &= \rho_f \\ \div{\vb{B}} &= 0 \\ \curl{\vb{E}} &= i\omega \vb{B} \\ \curl{\vb{H}} &= \vb{J}_f - i\omega\vb{D} \end{align} \end{subequations} and linear constitutive relations $\vb{D}(\vb{r}) = \epsilon_0 \overline{\epsilon}_r(\vb{r}) \vb{E}(\vb{r})$ and $\vb{B}(\vb{r}) = \mu_0 \overline{\mu}_r(\vb{r}) \vb{H}(\vb{r})$, with $\overline{\epsilon}_r(\vb{r})$ and $\overline{\mu}_r(\vb{r})$ being tensor fields to allow for anisotropy (below we absorb the vacuum constants, writing $\overline{\epsilon} \equiv \epsilon_0\overline{\epsilon}_r$ and $\overline{\mu} \equiv \mu_0\overline{\mu}_r$). \\ Define the complex Poynting vector \begin{equation} \vb{S} = \vb{E} \cross \vb{H}^* . \end{equation} Taking its divergence yields \begin{equation} \div{\vb{E}\cross\vb{H}^*} = \vb{H}^* \cdot \curl{\vb{E}} - \vb{E}\cdot\curl{\vb{H}^*} . \end{equation} Now applying Maxwell's equations gives the differential form of the complex Poynting's theorem \begin{align*} \div{\vb{E}\cross\vb{H}^*} &= i\omega \vb{H}^*\cdot\vb{B} - i\omega^* \vb{E}\cdot\vb{D}^* - \vb{E}\cdot\vb{J}^* \\ &= i\omega \vb{H}^* \cdot \overline{\mu} \cdot \vb{H} - i\omega^* \vb{E}\cdot\overline{\epsilon}^*\cdot\vb{E}^* - \vb{E} \cdot \vb{J}^* \numthis \end{align*} The corresponding integral form is \begin{equation} \int_{\partial V} \dd\vb{\sigma} \cdot (\vb{E}\cross\vb{H}^*) = i\omega \int_V \vb{H}^*\cdot\overline{\mu}\cdot\vb{H} \,\dd V - i\omega^* \int_V \vb{E}\cdot\overline{\epsilon}^*\cdot\vb{E}^* \,\dd V - \int_V \vb{E}\cdot\vb{J}^* \,\dd V . \end{equation} In the case where $\omega$ is real, we have \begin{equation} \int_{\partial V} \dd\vb{\sigma} \cdot (\vb{E}\cross\vb{H}^*) = i\omega \int_V (\vb{H}^*\cdot\overline{\mu}\cdot\vb{H} - \vb{E}\cdot\overline{\epsilon}^*\cdot\vb{E}^*) \,\dd V - \int_V \vb{E}\cdot\vb{J}^* \,\dd V , \end{equation} which is the more familiar form. \subsection{Generalized energy conservation constraints} To arrive at the generalized energy conservation constraints, we start with a scattering theory framework in which an initial free current source $\vb{J}_v$ produces the fields $\vb{E}_v$, $\vb{H}_v$ in vacuum. These initial fields interact with a scatterer, producing polarization currents $\vb{J}_s$ within the scatterer that generate scattered fields $\vb{E}_s$, $\vb{H}_s$. For simplicity, we assume that the scatterer is non-magnetic, i.e., $\overline{\mu} = \mu_0$, and that the electric permittivity is local but may be anisotropic: $\overline{\epsilon}=\epsilon_0(\mathbb{I} + \mathbb{I}_s\overline{\chi}\mathbb{I}_s)$. The net result is a total field $\vb{E}_t = \vb{E}_v + \vb{E}_s$, $\vb{H}_t = \vb{H}_v + \vb{H}_s$. The complex Poynting theorem then applies in three settings: to the free current and initial fields $(\vb{J}_v, \vb{E}_v, \vb{H}_v)$ in vacuum, to the polarization current and scattered fields $(\vb{J}_s, \vb{E}_s, \vb{H}_s)$ in vacuum, and to the free current and total fields $(\vb{J}_v, \vb{E}_t, \vb{H}_t)$. \\ We consider as our region of interest some subset $V_k$ of the design region. For $(\vb{J}_v, \vb{E}_v, \vb{H}_v)$ in vacuum we have \begin{equation} \int_{\partial V_k} \,\dd \vb{\sigma}\cdot(\vb{E}_v\cross\vb{H}_v^*) = i\omega \mu_0 \int_{V_k} \vb{H}_v^* \cdot \vb{H}_v \,\dd V - i\omega^* \epsilon_0 \int_{V_k} \vb{E}_v \cdot \vb{E}_v^* \,\dd V .
\label{eq_Poynting_inc} \end{equation} For $(\vb{J}_s, \vb{E}_s, \vb{H}_s)$ in vacuum we have \begin{equation} \int_{\partial V_k} \,\dd \vb{\sigma}\cdot(\vb{E}_s\cross\vb{H}_s^*) = i\omega \mu_0 \int_{V_k} \vb{H}_s^* \cdot \vb{H}_s \,\dd V - i\omega^* \epsilon_0 \int_{V_k} \vb{E}_s \cdot \vb{E}_s^* \,\dd V - \int_{V_k} \vb{E}_s \cdot \vb{J_s}^* \,\dd V . \label{eq_Poynting_sca} \end{equation} For $(\vb{J}_v, \vb{E}_t, \vb{H}_t)$ with the scatterer we have (noting that the free current is situated outside of the design region) \begin{align*} &\int_{\partial V_k} \,\dd \vb{\sigma}\cdot((\vb{E}_v+\vb{E}_s)\cross(\vb{H}_v+\vb{H}_s)^*) \\ &= i\omega \mu_0 \int_{V_k} (\vb{H}_v^* + \vb{H}_s^*) \cdot (\vb{H}_v + \vb{H}_s) \,\dd V - i\omega^* \epsilon_0 \int_{V_k} (\vb{E}_v+\vb{E}_s) \cdot (\mathbb{I} + \mathbb{I}_s\overline{\chi}^*\mathbb{I}_s) \cdot (\vb{E}_v^* + \vb{E}_s^*) \,\dd V . \numthis \label{eq_Poynting_tot} \end{align*} Subtracting (\ref{eq_Poynting_inc}) and (\ref{eq_Poynting_sca}) from (\ref{eq_Poynting_tot}) gives \begin{align*} &\int_{\partial V_k} \,\dd \vb{\sigma}\cdot(\vb{E}_v\cross\vb{H}_s^* + \vb{E}_s\cross\vb{H}_v^*) \\ = \, &i\omega \mu_0 \int_{V_k} (\vb{H}_v^* \cdot \vb{H}_s + \vb{H}_s^* \cdot \vb{H}_v) \,\dd V \\ &- i\omega^*\epsilon_0 \int_{V_k} \big[ \vb{E}_v \cdot \mathbb{I}_s\overline{\chi}^*\mathbb{I}_s \cdot \vb{E}_v^* + \vb{E}_s \cdot \mathbb{I}_s\overline{\chi}^*\mathbb{I}_s \cdot \vb{E}_s^* + \vb{E}_v \cdot (\mathbb{I} + \mathbb{I}_s\overline{\chi}^*\mathbb{I}_s) \cdot \vb{E}_s^* + \vb{E}_s \cdot (\mathbb{I} + \mathbb{I}_s\overline{\chi}^*\mathbb{I}_s) \cdot \vb{E}_v^* \big] \,\dd V \\ &+ \int_{V_k} \vb{E}_s \cdot \vb{J_s}^* \,\dd V . \numthis \end{align*} To further simplify this expression, we investigate the cross term \begin{equation*} \vb{H}_v^* \cdot \vb{H}_s = \frac{1}{\mu_0^2 |\omega|^2} (\curl{\vb{E}_s}) \cdot (\curl{\vb{E}_v^*}) = \frac{1}{\mu_0^2 |\omega|^2} \div(\vb{E}_s \cross \curl{\vb{E}_v^*}) + \frac{1}{\mu_0^2 |\omega|^2} \vb{E}_s \cdot \curl{\curl{\vb{E}_v^*}} \end{equation*} where we have used the identity \begin{equation*} (\curl{\vb{A}})\cdot\vb{B} = \div(\vb{A}\cross\vb{B}) + \vb{A} \cdot (\curl{\vb{B}}) . \end{equation*} Now $\vb{E}_v$ satisfies the vector wave equation \begin{align*} \curl{\curl{\vb{E}_v}} - \frac{\omega^2}{c^2} \vb{E}_v &= 0 \\ \Rightarrow \curl{\curl{\vb{E}_v^*}} = \frac{\omega^{*2}}{c^2} \vb{E}_v^* \end{align*} over a source-free vacuum region, leading to \begin{equation*} \vb{H}_v^* \cdot \vb{H}_s = \frac{1}{\mu_0^2 |\omega|^2} \div(\vb{E}_s \cross \curl{\vb{E}_v^*}) + \frac{\omega^*}{\mu_0^2 c^2 \omega} \vb{E}_s \cdot \vb{E}_v^* , \end{equation*} \begin{align*} i \omega \mu_0 \int_{V_k} \vb{H}_v^* \cdot \vb{H}_s \,\dd V &= \frac{i}{\mu_0\omega^*} \int_{\partial V_k} \dd\vb{\sigma} \cdot (\vb{E}_s \cross \curl{\vb{E}_v^*}) + i\omega^*\epsilon_0 \int_{V_k} \vb{E}_s \cdot \vb{E}_v^* \,\dd V \\ &= \int_{\partial V_k} \dd\vb{\sigma} \cdot (\vb{E}_s \cross \vb{H}_v^*) + i\omega^*\epsilon_0 \int_{V_k} \vb{E}_s \cdot \vb{E}_v^* \,\dd V . \end{align*} Thus we have the useful relation \begin{equation} i \omega \mu_0 \int_{V_k} \vb{H}_v^* \cdot \vb{H}_s \,\dd V - \int_{\partial V_k} \dd\vb{\sigma} \cdot (\vb{E}_s \cross \vb{H}_v^*) - i\omega^*\epsilon_0 \int_{V_k} \vb{E}_s \cdot (\mathbb{I} + \mathbb{I}_s\overline{\chi}^*\mathbb{I}_s) \cdot \vb{E}_v^* \,\dd V = -i\omega^*\epsilon_0 \int_{V_k} \vb{E}_s \cdot \mathbb{I}_{s}\overline{\chi}^*\mathbb{I}_{s} \cdot \vb{E}_v^* \,\dd V .
\end{equation} Similarly, the other cross terms satisfy the relation \begin{align*} &i \omega \mu_0 \int_{V_k} \vb{H}_s^* \cdot \vb{H}_v \,\dd V - \int_{\partial V_k} \dd\vb{\sigma} \cdot (\vb{E}_v \cross \vb{H}_s^*) - i\omega^*\epsilon_0 \int_{V_k} \vb{E}_v \cdot (\mathbb{I} + \mathbb{I}_s\overline{\chi}^*\mathbb{I}_s) \cdot \vb{E}_s^* \,\dd V \\ & = -i\omega^*\epsilon_0 \int_{V_k} \vb{E}_v \cdot \mathbb{I}_s\overline{\chi}^*\mathbb{I}_s \cdot \vb{E}_s^* \,\dd V + \int_{V_k} \vb{E}_v \cdot \vb{J}_s^* \,\dd V \numthis \end{align*} where the additional $\int_{V_k} \vb{E}_v \cdot \vb{J}_s^* \,\dd V$ comes from $\vb{E}_s$ satisfying a vector wave equation with the polarization currents as source: \begin{equation*} \curl{\curl{\vb{E}_s}} - \frac{\omega^2}{c^2} \vb{E}_s = i\omega \mu_0 \vb{J}_s . \end{equation*} Substituting these cross-term relations into the expression and taking the complex conjugate gives \begin{equation} \int_{V_k} \vb{E}_v^* \cdot \vb{J}_s \,\dd V = -i\omega \epsilon_0 \int_{V_k} \vb{E}_t^* \cdot \mathbb{I}_s \overline{\chi} \mathbb{I}_s \cdot \vb{E}_t \,\dd V - \int_{V_k} \vb{E}_s^* \cdot \vb{J}_s \,\dd V . \end{equation} At this point, we can replace the polarization current $\vb{J}_s$, scattered electric field $\vb{E}_s$, and total electric field $\vb{E}_t$ with the polarization density $\vb{P}$ via the following relations \begin{equation} \vb{J}_s = -i\omega \vb{P} \qquad \vb{P} = \epsilon_0 \overline{\chi} \mathbb{I}_s \cdot \vb{E}_t \qquad \vb{E}_s = \frac{1}{\epsilon_0} \mathbb{G} \cdot \vb{P} \end{equation} to finally obtain \begin{equation} \int_{V_k} \vb{E}_v^* \cdot \bigg(\frac{\vb{P}}{\epsilon_0}\bigg) \,\dd V = \int_{V_k} \bigg(\frac{\vb{P}}{\epsilon_0}\bigg)^* \cdot \bigg(\frac{1}{\chi^*} - \mathbb{G}^*\bigg) \cdot \bigg(\frac{\vb{P}}{\epsilon_0}\bigg) \,\dd V . \end{equation} \section{Lagrangian duality given global constraints} In this section we detail how the dual problem for the two global power conservation constraints is equivalent to the dual problem for a single constraint with an additional parameter in the form of a complex phase rotation. We are interested in placing dual bounds on the power extracted from a dipole source given global conservation of power: \begin{subequations} \begin{align} \text{maximize} \quad &\rho_{sca} = -\frac{1}{2} \Im{\tilde{\omega} \bra{\vb{E}^*_v} \ket{\vb{P}}} \\ \text{such that} \quad &\Re\bra{\vb{E}_v}\ket{\vb{P}} - \expval{\text{Sym}\mathbb{U}}{\vb{P}} = 0 \\ &\Im\bra{\vb{E}_v}\ket{\vb{P}} - \expval{\text{Asym}\mathbb{U}}{\vb{P}} = 0 . \end{align} \end{subequations} The Lagrangian for this constrained optimization problem is \begin{align*} \mathcal{L}(\vb{P}, \alpha_{Re}, \alpha_{Im}) &= -\frac{1}{2} \Im{\tilde{\omega}\bra{\vb{E}^*_v} \ket{\vb{P}}} + \alpha_{Re} \big[ \Re\bra{\vb{E}_v}\ket{\vb{P}} - \expval{\text{Sym}\mathbb{U}}{\vb{P}} \big] + \alpha_{Im} \big[\Im\bra{\vb{E}_v}\ket{\vb{P}} - \expval{\text{Asym}\mathbb{U}}{\vb{P}} \big] \\ &= -\frac{1}{2} \Im{\tilde{\omega} \bra{\vb{E}^*_v} \ket{\vb{P}}} + \big(\frac{\alpha_{Re}}{2} + \frac{\alpha_{Im}}{2i}\big) \bra{\vb{E}_v}\ket{\vb{P}} + \big(\frac{\alpha_{Re}}{2} - \frac{\alpha_{Im}}{2i}\big) \bra{\vb{P}}\ket{\vb{E}_v} \\ &- \expval{\big[\big(\frac{\alpha_{Re}}{2} + \frac{\alpha_{Im}}{2i}\big)\mathbb{U} + \big(\frac{\alpha_{Re}}{2} - \frac{\alpha_{Im}}{2i}\big)\mathbb{U}^\dagger \big] }{\vb{P}} , \end{align*} with the corresponding dual function \begin{equation} \mathcal{D}(\alpha_{Re}, \alpha_{Im}) = \max_{\vb{P}} \mathcal{L}(\vb{P}, \alpha_{Re}, \alpha_{Im}) .
\end{equation} Now define $\alpha \equiv \sqrt{\alpha_{Re}^2+\alpha_{Im}^2}$ and a complex phase rotation $p \equiv e^{i\theta} = (\alpha_{Im} + i\alpha_{Re})/\alpha$; the Lagrangian can then be rewritten as \begin{align*} \mathcal{L}(\vb{P}, \alpha; \theta) &= -\frac{1}{2} \Im{\tilde{\omega} \bra{\vb{E}^*_v} \ket{\vb{P}}} + \alpha \bigg( \frac{e^{i\theta}\bra{\vb{E}_v}\ket{\vb{P}} - e^{-i\theta} \bra{\vb{P}}\ket{\vb{E}_v}}{2i} - \expval{ \frac{e^{i\theta}\mathbb{U} - e^{-i\theta}\mathbb{U}^\dagger}{2i} }{\vb{P}} \bigg) \\ &= -\frac{1}{2} \Im{\tilde{\omega} \bra{\vb{E}^*_v} \ket{\vb{P}}} + \alpha \big[ \Im(p \bra{\vb{E}_v}\ket{\vb{P}}) - \expval{\text{Asym}(p \mathbb{U})}{\vb{P}} \big] \numthis \end{align*} which is exactly the Lagrangian of the single-constraint optimization \begin{subequations} \begin{align} \text{maximize} \quad & \rho_{sca} = -\frac{1}{2}\Im{\tilde{\omega}\bra{\vb{E}^*_v} \ket{\vb{P}}} \label{eq:theta_primal_objective}\\ \text{such that} \quad &\Im(p \bra{\vb{E}_v}\ket{\vb{P}}) - \expval{\text{Asym}(p \mathbb{U})}{\vb{P}} = 0 \label{eq:theta_primal_constraint} \end{align} \label{eq:theta_primal} \end{subequations} with corresponding dual function \begin{equation} \mathcal{D}(\alpha; \theta) = \max_{\vb{P}} \mathcal{L}(\vb{P}, \alpha; \theta) . \label{def:theta_dual} \end{equation} Now it is clear that $(e^{i\theta}, \alpha)$ is just an alternate parametrization of the multiplier space of $(\alpha_{Re}, \alpha_{Im})$, hence the tightest dual bound is \begin{equation} \min_{\alpha_{Re}, \alpha_{Im}} \mathcal{D}(\alpha_{Re}, \alpha_{Im}) = \min_\theta \min_\alpha \mathcal{D}(\alpha; \theta) . \end{equation} We can now derive an expression for $\min_\alpha \mathcal{D}(\alpha; \theta)$ for fixed phase rotation $\theta$. First, note that the Lagrangian $\mathcal{L}(\vb{P}, \alpha; \theta)$ only has a finite maximum when $\text{Asym}(p\mathbb{U}) \succ 0$. See the following section for a spectral analysis of $\text{Asym}(p\mathbb{U})$ ascertaining the existence and range of complex phase rotations $p$ such that $\text{Asym}(p\mathbb{U}) \succ 0$. We evaluate (\ref{def:theta_dual}) by calculating the stationary point of $\mathcal{L}$ with respect to $\ket{\vb{P}}$: \begin{equation} \pdv{\mathcal{L}}{\bra{\vb{P}}} = 0 \quad \Rightarrow \quad \ket{\vb{P}} = -\frac{i \tilde{\omega}^*}{4\alpha} \text{Asym}(p\mathbb{U})^{-1} \ket{\vb{E}_v^*} + \frac{i}{2} p^* \text{Asym}(p\mathbb{U})^{-1} \ket{\vb{E}_v}. \label{eq:theta_Popt} \end{equation} This stationary point maximizes $\mathcal{L}$ when $\text{Asym}(p\mathbb{U}) \succ 0$. Given this positive definite condition, the primal problem (\ref{eq:theta_primal}) is also a convex problem with a non-empty feasible set, so strong duality holds [cite]. Thus to solve for the optimal $\bar{\alpha}$ that minimizes $\mathcal{D}(\alpha;\theta)$, we substitute (\ref{eq:theta_Popt}) into (\ref{eq:theta_primal_constraint}) and obtain \begin{equation} \bar{\alpha} = \sqrt{\frac{\lvert \tilde{\omega} \rvert^2}{4\lvert p \rvert^2} \frac{\expval{\text{Asym}(p\mathbb{U})^{-1}}{\vb{E}_v^*} }{\expval{\text{Asym}(p\mathbb{U})^{-1}}{\vb{E}_v}} } .
\label{eq:theta_alphaopt} \end{equation} Now (\ref{eq:theta_Popt}) and (\ref{eq:theta_alphaopt}) can be simultaneously substituted back into (\ref{def:theta_dual}) to get the bound \begin{align*} \rho_{sca} \leq \min_{\alpha} \mathcal{D}(\alpha; \theta) =& \, \frac{\lvert \tilde\omega \rvert}{4} \sqrt{ \expval{\text{Asym}(p\mathbb{U})^{-1}}{\vb{E}_v^*} \expval{\text{Asym}(p\mathbb{U})^{-1}}{\vb{E}_v} } \\ &- \frac{1}{4} \Re\Big\{ p^* \tilde{\omega} \bra{\vb{E}_v^*}\text{Asym}(p\mathbb{U})^{-1}\ket{\vb{E}_v} \Big\} \numthis \label{eq:theta_bound} \end{align*} as seen in the main text. \section{Spectral analysis of Green's function} Notation: in this and later sections of the supplementary information we will sometimes use the spatial wavevector $\tilde{k} = \tilde{\omega} / c = \tilde{\omega}$; in the main text only $\tilde{\omega}$ is used, to reduce the number of different variables shown and to take advantage of the dimensionless units $c=1$. Starting from the planewave expansion of $\mathbb{G}$ as given by~\cite{tsang_scattering_2004} (multiplied by $\tilde{k}^2$ following our convention) \begin{equation} \mathbb{G} = \frac{1}{(2\pi)^3} \iiint \dd^3\vb{k}\, e^{i\vb{k}\cdot(\vb{r}-\vb{r'})} \frac{\mathbb{I} \tilde{k}^2 - \vb{k}\otimes\vb{k}}{k^2-\tilde{k}^2}, \end{equation} we have \begin{equation} \text{Asym}\mathbb{G} = \frac{1}{(2\pi)^3} \iiint \dd^3\vb{k}\, e^{i\vb{k}\cdot(\vb{r}-\vb{r'})} \bigg\{ \Im\bigg(\frac{\tilde{k}^2}{k^2-\tilde{k}^2}\bigg)\mathbb{I} - \Im\bigg(\frac{k^2}{k^2-\tilde{k}^2}\bigg) \unitv{k}\otimes\unitv{k} \bigg\} . \end{equation} Now $\tilde{k}^2 = \tilde{k}_r^2 - \tilde{k}_i^2 + 2i\tilde{k}_r\tilde{k}_i$; for notational convenience define $A_r\equiv\Re{\tilde{k}^2}=\tilde{k}_r^2-\tilde{k}_i^2$ and $A_i\equiv\Im{\tilde{k}^2}=2\tilde{k}_r\tilde{k}_i$. Taking the imaginary parts, we get \begin{equation} \text{Asym}\mathbb{G} = \frac{1}{(2\pi)^3} \iiint \dd^3\vb{k}\, e^{i\vb{k}\cdot(\vb{r}-\vb{r'})} \bigg\{ \frac{A_i k^2}{(k^2-A_r)^2 + A_i^2} \mathbb{I} - \frac{A_i k^2}{(k^2-A_r)^2 + A_i^2} \unitv{k}\otimes\unitv{k} \bigg\} . \end{equation} It is now apparent that the eigenwaves of $\text{Asym}\mathbb{G}$ have the form $\unitv{e} e^{i\vb{k}\cdot\vb{r}}$, where $\unitv{e}$ is an eigenvector of the $3\times3$ matrix in braces. In a triad that includes $\unitv{k}$, the $3\times3$ matrix is diagonal, and we see that the longitudinal eigenwaves $\unitv{k} e^{i\vb{k}\cdot\vb{r}}$ have eigenvalue $0$. The transverse eigenwaves have eigenvalues \begin{equation} \rho_t(k) = \frac{A_i k^2}{(k^2-A_r)^2+A_i^2} = \frac{2 \tilde{k}_r \tilde{k}_i k^2}{(k^2 - \tilde{k}_r^2 + \tilde{k}_i^2)^2 + (2 \tilde{k}_r \tilde{k}_i)^2} \geq 0 \label{eq_AsymGeig} . \end{equation} Some observations about the scaling of these eigenvalues that are relevant for later discussion: \begin{itemize} \item $\text{Asym}\mathbb{G}$ is positive semi-definite. The null-space always includes the longitudinal waves (which are irrelevant to the TM case). At a single frequency, $\tilde{k}_i=0$, the null-space also includes all evanescent waves; for finite bandwidth, $\tilde{k}_i>0$, the eigenvalues vanish only for extremely fast oscillations / extreme evanescent waves in the limit $|k|\rightarrow\infty$. \item Some idea of the scaling of $\text{Asym}\mathbb{G}^{-1}$ can be seen through the inverse of the transverse eigenvalues, $1/\rho_t = \frac{1}{A_i} k^2 + (\frac{A_r^2}{A_i}+A_i)\frac{1}{k^2}-2\frac{A_r}{A_i}$, which for $k\geq \tilde{k}_r$ has a minimum around $k=|\tilde{k}|$ before scaling $\sim k^2$ as $k\rightarrow\infty$.
\end{itemize} \section{Spectral Analysis of $\text{Asym}(p\mathbb{U})$} We have $\text{Asym}(p\mathbb{U}) = \text{Asym}(\chi^{-1\dagger}p) + \text{Asym}(p^* \mathbb{G})$. $\text{Asym}(\chi^{-1\dagger}p)$ contributes a constant to all eigenvalues; we focus our attention on the second term, $\text{Asym}(p^* \mathbb{G})$. Similar to the analysis in the previous section, we have \begin{equation} \text{Asym}(p^* \mathbb{G}) = \frac{1}{(2\pi)^3} \iiint \dd^3\vb{k}\, e^{i\vb{k}\cdot(\vb{r}-\vb{r'})} \bigg\{ \Im\bigg(\frac{p^* \tilde{k}^2}{k^2-\tilde{k}^2}\bigg)\mathbb{I} - \Im\bigg(\frac{p^* k^2}{k^2-\tilde{k}^2}\bigg) \unitv{k}\otimes\unitv{k} \bigg\} . \end{equation} The longitudinal and transverse eigenvalues of $\text{Asym}(p^* \mathbb{G})$ come out to be \begin{align*} \rho_{\mathbb{G},l}(p) &= p_i ,\\ \rho_{\mathbb{G},t}(p) &= \frac{p_r k^2 A_i + p_i [-A_r(k^2-A_r) + A_i^2]}{(k^2-A_r)^2 + A_i^2} . \end{align*} The eigenvalues of $\text{Asym}(p\mathbb{U})$ are \begin{align} \rho_{\mathbb{U},l} &= \frac{\chi_i}{\chi_r^2+\chi_i^2} p_r + \bigg(1 + \frac{\chi_r}{\chi_r^2+\chi_i^2}\bigg) p_i ,\\ \rho_{\mathbb{U},t} &= \frac{p_r\chi_i + p_i\chi_r}{\chi_r^2 + \chi_i^2} + \frac{p_r k^2 A_i + p_i [-A_r(k^2-A_r) + A_i^2]}{(k^2-A_r)^2 + A_i^2} . \end{align} In order for the dual bound (\ref{eq:theta_bound}) to be valid we need $\text{Asym}(p\mathbb{U})$ to be positive definite (PD), i.e., $\rho_{\mathbb{U}}>0$. \subsection{Lossless dielectrics} For lossless dielectrics we have $\chi_i=0$, $\chi_r>0$. The PD condition $\rho_{\mathbb{U},l}>0$ then leads to $p_i>0$. Setting $p_r=1$, we require \begin{equation} \rho_{\mathbb{U},t} = \frac{1}{\chi_r} p_i + \frac{k^2 A_i + p_i [-A_r(k^2-A_r) + A_i^2]}{(k^2-A_r)^2 + A_i^2} > 0. \end{equation} Defining for convenience $u \equiv k^2$, the problem is then to find \begin{equation} \min_{u \geq 0} f(u) = \frac{(A_i - p_i A_r)u + p_i(A_r^2+A_i^2)}{(u-A_r)^2 + A_i^2} . \end{equation} It is clear that $f(0) = p_i>0$ and $\lim_{u\rightarrow\infty} f(u) = 0$. In the special case $p_i = A_i/A_r$, the only critical point is $u^* = A_r$, which is a maximum, so $\min f(u) = 0$ and $\rho_{\mathbb{U},t} = \frac{1}{\chi_r} p_i > 0$. When $p_i \neq A_i/A_r$ the critical points are (assuming $A_i \ll A_r$) \begin{equation} u^{*\pm} \approx \frac{p_i A_r^2 \pm \sqrt{1+p_i^2} A_i A_r}{p_i A_r - A_i} . \end{equation} If $p_i < A_i/A_r$ then $u^{*+}<0$ and is irrelevant, while $u^{*-}$ is a maximum, so $\min f(u)=0$. If $p_i>A_i/A_r$, then $u^{*+}$ is the minimum, with \begin{equation} \min f(u) = f(u^{*+}) \approx \frac{-\sqrt{1+p_i^2} A_r A_i}{\bigg[ \frac{(1+\sqrt{1+p_i^2}) A_r A_i}{p_i A_r - A_i} \bigg]^2 + A_i^2} < 0 . \end{equation} Thus for lossless dielectrics, $\text{Asym}(p\mathbb{U})$ is PD given $p_r=1$ and $0<p_i\leq A_i / A_r$. For specific values of $\chi_r$, larger rotation angles with $p_i>A_i / A_r$ are possible. To quantify by how much, set \begin{equation} p_i = c \cdot \frac{A_i}{A_r} , \end{equation} and assume that $p_i \ll 1$ in the limit $A_i \rightarrow 0$ of vanishing bandwidth; this assumption can be checked for consistency at the end. This yields \begin{equation} \min f(u) \approx - \frac{(c-1)^2}{4} \frac{A_i}{A_r} . \end{equation} Then \begin{equation} \rho_{\mathbb{U},t}>0 \Rightarrow \frac{p_i}{\chi_r} > \frac{(c-1)^2}{4} \frac{A_i}{A_r} \Rightarrow c^2 - \bigg(\frac{4}{\chi_r} + 2\bigg)c + 1 < 0 .
\end{equation} Conclude that \begin{equation} c < 1 + 2\bigg[\frac{1}{\chi_r} + \sqrt{\frac{1}{\chi_r^2} + \frac{1}{\chi_r}} \bigg] , \label{eq:cmax} \end{equation} so $c_{max}$ is a monotonically decreasing function of $\chi_r$ and does not depend on the bandwidth; consistently with the assumption above, $p_{i,max} \propto A_i \ll 1$ in the limit $A_i \rightarrow 0$. \section{Global constraint bounds for 2D TM dipole near a half-space} In this section we evaluate (\ref{eq:theta_bound}) explicitly for the case of a 2D TM dipole source $\vb{J}(x',y') = \hat{z}\,\delta(x')\delta(y'+d)$ a distance $d$ away from a half-space design region $V=\{(x,y)|y>0\}$. Notation: in the main text the $\parallel$ and $\perp$ symbols were used to indicate directions parallel and perpendicular to the surface of the half-space design region, respectively. In this section we use explicit Cartesian coordinates, with $x$ being $\parallel$ and $y$ being $\perp$. \subsection{TM Green's function and dipole field} Following \cite{tsang_scattering_2004}, the 2D TM Green's function is \begin{equation} \mathbb{G}^{TM}(\vb{\rho},\vb{\rho'}) = \tilde{k}^2 \frac{i}{4\pi} \begin{cases} \int_{-\infty}^\infty \dd k_x \frac{\hat{z}\hat{z}}{k_y} e^{i k_x(x-x')}e^{ik_y(y-y')} & y>y' \\ \\ \int_{-\infty}^\infty \dd k_x \frac{\hat{z}\hat{z}}{k_y} e^{i k_x(x-x')}e^{-ik_y(y-y')} & y<y' \end{cases} \label{def:TM_Green} \end{equation} where $\tilde{k} = \tilde{k}_r + i\tilde{k}_i$ is complex for finite bandwidths, and $k_y=\sqrt{\tilde{k}^2-k_x^2}$, always taking the root with positive imaginary part. The dipole source $\vb{J}(x',y') = \hat{z}\,\delta(x')\delta(y'+d)$ produces a vacuum electric field within the $y>0$ design region \begin{subequations} \begin{equation} \vb{E}_v^{TM}(x,y) = \int_{-\infty}^\infty \frac{e^{i k_x x}}{\sqrt{2\pi}} \vb{E}_{v,k_x}^{TM}(y) \,\dd k_x, \end{equation} \begin{equation} \vb{E}_{v,k_x}^{TM}(y) = -\frac{\tilde{k}}{2\sqrt{2\pi}} \frac{1}{k_y} e^{i k_y y} e^{i k_y d} . \label{eq:TM_Ei} \end{equation} \end{subequations} The single frequency vacuum dipole radiation is \begin{equation} \rho_0^{TM} = \frac{\Re{\tilde{k}}}{8} = \frac{\pi}{4 \lambda_0} . \end{equation} The composite $\text{Asym}(p\mathbb{U}^{TM})$ integral operator is \begin{subequations} \begin{equation} \text{Asym}(p\mathbb{U}^{TM})(x,y,x',y') = \int_{-\infty}^\infty \dd k_x \frac{e^{i k_x x}}{\sqrt{2\pi}} \text{Asym}(p\mathbb{U}^{TM})_{k_x}(y,y') \frac{e^{-i k_x x'}}{\sqrt{2\pi}} \end{equation} with \begin{equation} \text{Asym}(p\mathbb{U}^{TM})_{k_x} (y,y') = \Im{p/\chi^*} \delta(y-y') + \frac{1}{2} \Re{p^* \frac{\tilde{k}^2}{k_y} e^{i k_y |y-y'|}} . \label{eq:TM_AsympUkx} \end{equation} \end{subequations} \subsection{Simplification of (\ref{eq:theta_bound})} For the half-space design region, (\ref{eq:theta_bound}) may be simplified by observing that $\expval{\text{Asym}(p\mathbb{U}^{TM})^{-1}}{\vb{E}_v^{TM}} = \expval{\text{Asym}(p\mathbb{U}^{TM})^{-1}}{\vb{E}_v^{TM*}}$. We have \begin{equation} \expval{\text{Asym}(p\mathbb{U}^{TM})^{-1}}{\vb{E}_v^{TM}} = \frac{\abs{\tilde{k}}^2}{8\pi} \int_{-\infty}^\infty \dd k_x \frac{e^{-2k_{yi} d}}{\abs{k_y}^2} \iint_0^\infty e^{-ik_y^* y} \text{Asym}(p\mathbb{U}^{TM})_{k_x}^{-1}(y,y') e^{i k_y y'} \, \dd y' \dd y \label{eq:Ei_AsympUinv_Ei} \end{equation} where we have made repeated use of the Fourier completeness relation $\frac{1}{2\pi} \int_{-\infty}^\infty e^{i(k_x - k_x') x} \,\dd x = \delta(k_x - k_x')$.
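As a quick numerical spot check of the positive-definiteness condition underpinning the bound, the kernel (\ref{eq:TM_AsympUkx}) can be discretized on a truncated $y$ grid and its smallest eigenvalue inspected. The Python sketch below is ours (not the paper's production code); the susceptibility, bandwidth, $k_x$ channel, grid, and truncation length are illustrative assumptions, and since the phase window (\ref{eq:cmax}) was derived for the 3D kernel, the choice of $p$ here is heuristic.
\begin{verbatim}
import numpy as np

# Spot check (a sketch): discretize Asym(pU)_{kx}(y,y') on [0, L] and verify
# that the resulting matrix is positive definite. All parameter values below
# are illustrative assumptions.
ktr, kti = 2*np.pi, 0.02*2*np.pi           # kt = ktr + i*kti, units with c = 1
kt  = ktr + 1j*kti
chi = 4.0                                   # lossless dielectric (assumed)
Ar, Ai = ktr**2 - kti**2, 2*ktr*kti
cmax = 1 + 2*(1/chi + np.sqrt(1/chi**2 + 1/chi))   # from the c_max inequality
p   = 1 + 0.5j*cmax*Ai/Ar                   # rotation well inside the PD window

kx  = 0.5*ktr                               # one traveling-wave channel (assumed)
ky  = np.sqrt(kt**2 - kx**2)
ky  = ky if ky.imag > 0 else -ky            # branch with Im ky > 0

N, L = 400, 4.0                             # truncation of y in [0, inf) (assumed)
y   = np.linspace(0.0, L, N)
dy  = y[1] - y[0]
Y, Yp = np.meshgrid(y, y, indexing="ij")
K   = 0.5*np.real(np.conj(p)*kt**2/ky*np.exp(1j*ky*np.abs(Y - Yp)))*dy
K  += np.eye(N)*np.imag(p/np.conj(chi))     # delta(y-y') term; equals p_i/chi here

print("min eigenvalue:", np.linalg.eigvalsh(K).min())   # expected > 0
\end{verbatim}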
Returning to the simplification, we have \begin{align*} \vb{E}_v^{TM*}(x,y) &= -\frac{\tilde{k}^*}{2\sqrt{2\pi}} \int_{-\infty}^\infty \frac{e^{-ik_x x}}{\sqrt{2\pi}} \frac{1}{k_y^*} e^{-i k_y^* y} e^{-i k_y^* d} \, \dd k_x \\ &= -\frac{\tilde{k}^*}{2\sqrt{2\pi}} \int_{-\infty}^\infty \frac{e^{ik_x x}}{\sqrt{2\pi}} \frac{1}{k_y^*} e^{-i k_y^* y} e^{-i k_y^* d} \, \dd k_x \qquad k_x \rightarrow -k_x \numthis \end{align*} where in the second line we have made use of the fact that $k_y = \sqrt{\tilde{k}^2 - k_x^2}$ is invariant to the sign of $k_x$. This gives \begin{equation} \expval{\text{Asym}(p\mathbb{U}^{TM})^{-1}}{\vb{E}_v^{TM*}} = \frac{\abs{\tilde{k}}^2}{8\pi} \int_{-\infty}^\infty \dd k_x \frac{e^{-2k_{yi} d}}{\abs{k_y}^2} \iint_0^\infty e^{i k_y y} \text{Asym}(p\mathbb{U}^{TM})_{k_x}^{-1}(y,y') e^{-i k_y^* y'} \, \dd y' \dd y. \label{eq:Ei*_AsympUinv_Ei*} \end{equation} Now from (\ref{eq:TM_AsympUkx}) we see that $\text{Asym}(p\mathbb{U}^{TM})_{k_x}(y,y')$ is invariant under exchange of the $y$, $y'$ dummy integration variables. Since (\ref{eq:Ei_AsympUinv_Ei}) and (\ref{eq:Ei*_AsympUinv_Ei*}) are related by the exchange of $y$ and $y'$, we conclude that \begin{equation} \expval{\text{Asym}(p\mathbb{U}^{TM})^{-1}}{\vb{E}_v^{TM}} = \expval{\text{Asym}(p\mathbb{U}^{TM})^{-1}}{\vb{E}_v^{TM*}}. \label{eq:Ei*_Ei_equality} \end{equation} This allows us to simplify (\ref{eq:theta_bound}) to \begin{equation} \rho_{sca}^{TM} \leq \frac{\abs{\tilde{\omega}}}{4} \expval{\text{Asym}(p\mathbb{U}^{TM})^{-1}}{\vb{E}_v^{TM}} - \frac{1}{4} \Re\Big\{ p^* \tilde{\omega} \bra{\vb{E}_v^{TM*}} \text{Asym}(p\mathbb{U}^{TM})^{-1} \ket{\vb{E}_v^{TM}} \Big\} . \label{eq:theta_bound_simplified} \end{equation} For certain asymptotic and approximation analyses, we will also make use of (\ref{eq:Ei*_Ei_equality}) and the Cauchy-Schwarz inequality to relax the second term in (\ref{eq:theta_bound_simplified}), yielding \begin{equation} \rho_{sca}^{TM} \leq \frac{\abs{\tilde{\omega}}}{2} \expval{\text{Asym}(p\mathbb{U}^{TM})^{-1}}{\vb{E}_v^{TM}} . \label{eq:theta_bound_simplified_CS} \end{equation} \subsection{Evaluation of $\text{Asym}(p\mathbb{U}^{TM})^{-1} \ket{\vb{E}_v^{TM}}$} The core calculation in (\ref{eq:theta_bound_simplified}) is the evaluation of \begin{equation} \text{Asym}(p\mathbb{U}^{TM})^{-1} \ket{\vb{E}_v^{TM}} = -\frac{\tilde{k}}{2\sqrt{2\pi}} \int_{-\infty}^\infty \dd k_x \frac{e^{i k_x x}}{\sqrt{2\pi}} \frac{e^{i k_y d}}{k_y} \int_0^\infty \text{Asym}(p\mathbb{U}^{TM})_{k_x}^{-1}(y,y') e^{i k_y y'} \, \dd y'. \label{eq:AsympUinv_Evac} \end{equation} Defining \begin{equation} f(y) = \int_0^\infty \text{Asym}(p\mathbb{U}^{TM})_{k_x}^{-1}(y,y') e^{i k_y y'} \, \dd y', \end{equation} we have \begin{equation} \int_0^\infty \text{Asym}(p\mathbb{U}^{TM})_{k_x}(y,y') f(y') \,\dd y' = e^{i k_y y}. \label{eq:TM_Laplace_target} \end{equation} The action of $\text{Asym}(p\mathbb{U}^{TM})_{k_x}$ consists of convolution-type integrals over $y$ with exponential kernels; this suggests that the Laplace transform $\mathscr{L}\{h(t)\}(s) = \int_0^\infty h(t) e^{-st} \,\dd t$ may be used to solve (\ref{eq:TM_Laplace_target}). We now take the Laplace transform of both sides of (\ref{eq:TM_Laplace_target}), defining $F(s) \equiv \mathscr{L}\{f(y)\}$ and the auxiliary variable $B \equiv p^* \frac{\tilde{k}^2}{k_y}$.
The LHS gives \begin{align*} \Im{\frac{p}{\chi^*}} F(s) &+ \frac{B_r+iB_i}{4} \mathscr{L}\bigg\{ \int_0^y e^{-(k_{yi}-ik_{yr})(y-y')} f(y')\,\dd y' \bigg\} + \frac{B_r-iB_i}{4} \mathscr{L}\bigg\{ \int_0^y e^{-(k_{yi}+ik_{yr})(y-y')} f(y') \,\dd y' \bigg\} \\ &+ \frac{B_r+iB_i}{4} \mathscr{L} \bigg\{ \int_y^\infty e^{(k_{yi}-ik_{yr})(y-y')} f(y') \,\dd y' \bigg\} + \frac{B_r-iB_i}{4} \mathscr{L} \bigg\{ \int_y^\infty e^{(k_{yi}+ik_{yr})(y-y')} f(y') \,\dd y' \bigg\}. \end{align*} The RHS is simply \begin{equation*} \mathscr{L}\bigg\{ e^{(-k_{yi}+ik_{yr})y} \bigg\} = \frac{1}{s+(k_{yi}-ik_{yr})} . \end{equation*} Evaluating the LHS, the Laplace transform of a convolution gives \begin{equation} \Laplace{\int_0^y e^{-a (y-y')} f(y')\,\dd y' } = \frac{F(s)}{s+a} . \label{eq:Laplace_conv} \end{equation} The transform of the other type of integral can be evaluated explicitly to a simple form: \begin{align*} \Laplace{\int_y^\infty e^{a (y-y')} f(y') \,\dd y'} &= \int_0^\infty e^{-sy} \int_y^\infty e^{a(y-y')} f(y') \,\dd y' \,\dd y\\ &= \int_0^\infty e^{-ay'} f(y') \bigg( \int_0^{y'} e^{-(s-a)y} \, \dd y \bigg) \, \dd y' \qquad \text{exchange order of integration} \\ &= \int_0^\infty f(y') e^{-ay'} \frac{1}{a-s} \bigg( e^{(a-s)y'} -1 \bigg) \, \dd y' \\ &= \frac{1}{a-s} \bigg( \int_0^\infty f(y') e^{-sy'} \,\dd y' - \int_0^\infty f(y') e^{-ay'} \,\dd y' \bigg) \\ &= \frac{1}{a-s} ( F(s) - F(a) ) . \numthis \label{eq:Laplace_cross_corr} \end{align*} Thus $F(s)$ satisfies \begin{align*} \frac{1}{s+(k_{yi}-ik_{yr})} = \Im{\frac{p}{\chi^*}} F(s) &+ \frac{B_r+iB_i}{4} \frac{F(s)}{s+(k_{yi}-ik_{yr})} \\ &+ \frac{B_r - iB_i}{4} \frac{F(s)}{s+(k_{yi}+ik_{yr})} \\ &+ \frac{B_r+iB_i}{4} \frac{1}{(k_{yi}-ik_{yr})-s} \bigg[ F(s) - F(k_{yi}-ik_{yr})\bigg] \\ &+ \frac{B_r - iB_i}{4} \frac{1}{(k_{yi}+ik_{yr})-s} \bigg[ F(s) - F(k_{yi}+ik_{yr}) \bigg] . \numthis \end{align*} Solving this will give us an explicit form of $F(s)$ dependent on two free parameters, $F(k_{yi}-ik_{yr})$ and $F(k_{yi}+ik_{yr})$. This seemingly gives, instead of a single solution $f(y)$, a whole family of solutions. However, as we shall see, only one member of this family belongs to $L^2[0,\infty)$, decaying as $y \rightarrow \infty$. The other members of this family grow exponentially as $y \rightarrow \infty$, which violates the requirements of Fubini's theorem \cite{rudin_real_2006} needed to justify the exchange of integration order used in (\ref{eq:Laplace_cross_corr}), and they are not true solutions of (\ref{eq:TM_Laplace_target}). Solving for $F(s)$ yields \begin{subequations} \begin{equation} F(s) = \frac{F_{num}(s)}{F_{denom}(s)} \end{equation} where the numerator is \begin{align*} F_{num}(s) = [s-(k_{yi}-ik_{yr})][s^2-(k_{yi}+ik_{yr})^2] &- \frac{B_r+iB_i}{4} \gamma_- [s+(k_{yi}-ik_{yr})][s^2-(k_{yi}+ik_{yr})^2] \\ &- \frac{B_r-iB_i}{4} \gamma_+ [s+(k_{yi}+ik_{yr})][s^2-(k_{yi}-ik_{yr})^2] \numthis \end{align*} and the denominator is \begin{equation} F_{denom}(s) = \Im{\frac{p}{\chi^*}} s^4 - \bigg[ 2\Im{\frac{p}{\chi^*}} (k_{yi}^2-k_{yr}^2) + (B_r k_{yi} + B_i k_{yr})\bigg]s^2 + \Im{\frac{p}{\chi^*}} \abs{k_y}^4 + (B_r k_{yi} - B_i k_{yr}) \abs{k_y}^2 , \end{equation} \end{subequations} where, for notational convenience, we have defined $\gamma_{\pm} \equiv F(k_{yi} \pm ik_{yr})$.
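Before proceeding, the cross-correlation identity (\ref{eq:Laplace_cross_corr}) can be sanity-checked numerically for a concrete decaying choice of $f$. In the sketch below (ours, with arbitrary test values of $a$ and $s$ as assumptions), $f(y) = e^{-y}$ so that $F(s) = 1/(s+1)$, and $\Re(a) > -1$ ensures convergence:
\begin{verbatim}
import numpy as np

# Numerical check of L{ int_y^inf e^{a(y-y')} f(y') dy' }(s) = (F(s)-F(a))/(a-s)
# for f(y) = exp(-y). The grid and truncation of [0, inf) are assumptions.
a, s = 0.3 + 0.2j, 1.1 + 0.4j
y  = np.linspace(0.0, 40.0, 8001)
f  = np.exp(-y)
trap = lambda g, x: np.sum((g[1:] + g[:-1])*(x[1:] - x[:-1]))/2.0

# g(y) = int_y^inf e^{a(y-y')} f(y') dy', computed by quadrature on the tail
g  = np.array([trap(np.exp(a*(yi - y[i:]))*f[i:], y[i:]) for i, yi in enumerate(y)])
lhs = trap(np.exp(-s*y)*g, y)              # Laplace transform of g at s
rhs = (1.0/(s + 1) - 1.0/(a + 1))/(a - s)  # (F(s) - F(a)) / (a - s)
print(abs(lhs - rhs))                       # small, limited by grid/truncation error
\end{verbatim}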
The roots $r$ of $F_{denom}(s)$ are then the poles of $F(s)$: \begin{subequations} \begin{align} &\pm r_+ = \pm\bigg\{(k_{yi}^2 - k_{yr}^2) + \frac{1}{2\Im{p/\chi^*}} \bigg[(B_r k_{yi} + B_i k_{yr}) + \sqrt{\Delta} \,\bigg] \bigg\}^{1/2} ,\\ &\pm r_- = \pm\bigg\{(k_{yi}^2 - k_{yr}^2) + \frac{1}{2\Im{p/\chi^*}} \bigg[(B_r k_{yi} + B_i k_{yr}) - \sqrt{\Delta} \,\bigg] \bigg\}^{1/2} ,\\ &\Delta=(B_r k_{yi} + B_i k_{yr})^2 + 8\Im{\frac{p}{\chi^*}} \Bigg( B_i k_{yr} k_{yi}^2 - B_r k_{yr}^2 k_{yi} - \Im{\frac{p}{\chi^*}} k_{yr}^2 k_{yi}^2 \Bigg), \end{align} \begin{equation} F_{denom}(s) = \Im{\frac{p}{\chi^*}} (s-r_+)(s+r_+)(s-r_-)(s+r_-) . \end{equation} \label{eq:TM_halfspace_poles} \end{subequations} We see that the poles come in two pairs, $\pm r_+$ and $\pm r_-$. In general, one member of each pair will have a positive real part and its opposite-sign counterpart a negative real part; we take $r_+$ and $r_-$ to be the poles with positive real parts (in the main text these are denoted $r_1$ and $r_2$, respectively). The inverse Laplace transform is \begin{equation} f(y) = \mathscr{L}^{-1}\bigg\{ F(s) \bigg\} = \frac{1}{2\pi i} \int_{T-i\infty}^{T+i\infty} F(s) e^{sy} \,\dd s \end{equation} where $T \in \mathbb{R}$ is greater than the real parts of all the poles of $F(s)$. Since $y>0$, we can deform the line integration and complete the contour in the $\Re{s}<0$ half plane, upon which the contour completion part decays to 0; thus the inverse Laplace transform picks out the residues of $F(s) e^{sy}$ at the poles. It is clear then that the contributions to the residues from $r_+$ and $r_-$ lead to functions with exponential growth as $y \rightarrow \infty$, whereas the contributions from $-r_+$ and $-r_-$ lead to functions with exponential decay as $y \rightarrow \infty$. To recover $f(y) \in L^2[0,\infty)$ we need to select the free parameters $\gamma_\pm$ such that the residues at $r_+$ and $r_-$ vanish. This leads to the following linear system of equations, from which we solve for $\gamma_\pm$: \begin{subequations} \begin{align*} &[r_+ - (k_{yi}-ik_{yr})][r_+^2 - (k_{yi}+ik_{yr})^2] \\ &= \frac{B_r+iB_i}{4}[r_+ + (k_{yi}-ik_{yr})][r_+^2 - (k_{yi}+ik_{yr})^2]\gamma_- + \frac{B_r-iB_i}{4}[r_+ + (k_{yi}+ik_{yr})][r_+^2 - (k_{yi}-ik_{yr})^2]\gamma_+ , \numthis \\ &[r_- - (k_{yi}-ik_{yr})][r_-^2 - (k_{yi}+ik_{yr})^2] \\ &= \frac{B_r+iB_i}{4}[r_- + (k_{yi}-ik_{yr})][r_-^2 - (k_{yi}+ik_{yr})^2]\gamma_- + \frac{B_r-iB_i}{4}[r_- + (k_{yi}+ik_{yr})][r_-^2 - (k_{yi}-ik_{yr})^2]\gamma_+ .
\numthis \end{align*} \label{eq:TM_halfspace_gamma} \end{subequations} $f(y)$ then takes on the form \begin{equation} f(y) = R_+ e^{-r_+ y} + R_- e^{-r_- y} \label{eq:AsympUinv_expiky} \end{equation} with the coefficients of the exponentials ($R_1$ and $R_2$ in the main text) given by \begin{align*} R_\pm = \frac{1}{\mp 2\Im{p/\chi^*} r_\pm \cdot (r_+^2-r_-^2)} \bigg\{&[-r_\pm - (k_{yi}-ik_{yr})][r_\pm^2-(k_{yi}+ik_{yr})^2] \\ &- \frac{B_r+iB_i}{4} [-r_\pm + (k_{yi}-ik_{yr})][r_\pm^2-(k_{yi}+ik_{yr})^2]\gamma_- \\ &- \frac{B_r-iB_i}{4}[-r_\pm + (k_{yi}+ik_{yr})][r_\pm^2-(k_{yi}-ik_{yr})^2]\gamma_+ \bigg\} .\numthis \label{eq:TM_halfspace_residues} \end{align*} (\ref{eq:AsympUinv_expiky}) can now be substituted back into (\ref{eq:AsympUinv_Evac}) and (\ref{eq:theta_bound_simplified}) to get an analytical integral expression for the LDOS enhancement bound near a half-space: \begin{equation} \rho_{sca} \leq \frac{1}{16\pi} \int_0^\infty \Re{ -\frac{\tilde{k}^3 p \, e^{2ik_y d}}{k_y^2} \left( \frac{R_+}{r_+ - ik_y} + \frac{R_-}{r_- - ik_y} \right) + \frac{\abs{\tilde{k}}^3 e^{-2 k_{yi} d}}{\abs{k_y}^2} \left( \frac{R_+}{r_+ + ik_y^*} + \frac{R_-}{r_- + ik_y^*} \right) } \,\dd k_x . \label{eq:TM_halfspace_kxintegral} \end{equation} \subsection{TE Green's function and dipole field} The 2D TE Green's function is \begin{equation} \mathbb{G}^{TE}(x,y,x',y') = -\hat{y}\hat{y}\delta(x-x')\delta(y-y') + \tilde{k}^2 \frac{i}{4\pi} \begin{cases} \int_{-\infty}^\infty \dd k_x \frac{\hat{h}(k_y)\hat{h}(k_y)}{k_y} e^{i k_x(x-x')}e^{ik_y(y-y')} & y>y' \\ \\ \int_{-\infty}^\infty \dd k_x \frac{\hat{h}(-k_y)\hat{h}(-k_y)}{k_y} e^{i k_x(x-x')}e^{-ik_y(y-y')} & y<y' \end{cases} \end{equation} with the unit vectors $\hat{h}$ given by \begin{equation} \hat{h}(k_y) = -\frac{k_y}{\tilde{k}}\hat{x} + \frac{k_x}{\tilde{k}}\hat{y} \qquad \hat{h}(-k_y) = \frac{k_y}{\tilde{k}}\hat{x} + \frac{k_x}{\tilde{k}}\hat{y}. \end{equation} A dipole source of the form $\vb{J}^y(x',y') = \hat{y} \delta(x')\delta(y'+d)$ produces a vacuum field \begin{subequations} \begin{equation} \vb{E}_v^{TEy}(x,y) = \int_{-\infty}^\infty \frac{e^{i k_x x}}{\sqrt{2\pi}} \vb{E}_{v,k_x}^{TEy}(y) \, \dd k_x, \end{equation} \begin{equation} \vb{E}_{v,k_x}^{TEy}(y) = -\frac{\sqrt{2\pi}}{4\pi \tilde{k}} \cdot e^{ik_y d} e^{ik_y y} \cdot \left(-k_x \hat{x} + \frac{k_x^2}{k_y} \hat{y} \right) . \end{equation} \end{subequations} The vacuum dipole radiation is \begin{equation} \rho_0^{TEy} = \frac{\Re{\tilde{k}}}{16} = \frac{\pi}{8\lambda_0} . \end{equation} The composite $\text{Asym}(p\mathbb{U}^{TE})$ integral operator is \begin{multline} \text{Asym}(p\mathbb{U}^{TE}) = \int_{-\infty}^\infty \dd k_x \frac{e^{ik_x(x-x')}}{2\pi} \Bigg\{ \Im{\frac{p}{\chi^*}} \delta(y-y')(\hat{x}\hat{x}+\hat{y}\hat{y}) + \delta(y-y')p_i \hat{y}\hat{y} + \\ \frac{1}{2} \Re{p^* k_y e^{ik_y\abs{y-y'}}} \hat{x}\hat{x} + \frac{1}{2} \Re{p^*(k_x^2/k_y)e^{i k_y\abs{y-y'}}}\hat{y}\hat{y} - \frac{1}{2}\text{sgn}(y-y') \Im{p^* k_x e^{i k_y \abs{y-y'}}} (\hat{x}\hat{y} + \hat{y}\hat{x}) \Bigg\}. \label{eq:TE_AsympU} \end{multline} The corresponding global constraint bound for a TE dipole source near a half-space can be derived following a procedure analogous to that for the TM dipole source, though the expressions involved are tedious and were handled in practice using Mathematica. For conciseness, they are not reproduced here.
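For reference, the chain of results above, namely the poles (\ref{eq:TM_halfspace_poles}), the linear system (\ref{eq:TM_halfspace_gamma}), the residues (\ref{eq:TM_halfspace_residues}), and the $k_x$ integral (\ref{eq:TM_halfspace_kxintegral}), can be transcribed directly into a short numerical routine. The following Python sketch is such a transcription under assumed parameter values (susceptibility, phase rotation, bandwidth, separation) with a deliberately crude uniform-grid quadrature and a hard $k_x$ cutoff; it is illustrative only, not the paper's production implementation.
\begin{verbatim}
import numpy as np

# Sketch: transcription of the TM half-space pole/gamma/residue expressions
# and the resulting kx integral. All parameter values are assumptions.
ktr, kti = 2*np.pi, 0.01*2*np.pi       # kt = ktr + i*kti, units with c = 1
kt  = ktr + 1j*kti
chi = 4.0 + 0.1j                        # lossy susceptibility (assumed)
p   = np.exp(0.05j)                     # phase rotation, assumed inside PD window
d   = 0.1                               # dipole--half-space separation (assumed)
m   = np.imag(p/np.conj(chi))           # Im{p/chi*}

def integrand(kxv):
    ky = np.sqrt(kt**2 - kxv**2)
    ky = ky if ky.imag > 0 else -ky     # branch with Im ky > 0
    kyr, kyi = ky.real, ky.imag
    B  = np.conj(p)*kt**2/ky
    Br, Bi = B.real, B.imag
    disc = (Br*kyi + Bi*kyr)**2 + 8*m*(Bi*kyr*kyi**2 - Br*kyr**2*kyi
                                       - m*kyr**2*kyi**2)
    rp = np.sqrt(kyi**2 - kyr**2 + ((Br*kyi + Bi*kyr) + np.sqrt(disc + 0j))/(2*m))
    rm = np.sqrt(kyi**2 - kyr**2 + ((Br*kyi + Bi*kyr) - np.sqrt(disc + 0j))/(2*m))
    rp = rp if rp.real > 0 else -rp     # keep roots with positive real part
    rm = rm if rm.real > 0 else -rm
    a, b = kyi - 1j*kyr, kyi + 1j*kyr
    M   = np.array([[B/4*(r + a)*(r**2 - b**2),
                     np.conj(B)/4*(r + b)*(r**2 - a**2)] for r in (rp, rm)])
    rhs = np.array([(r - a)*(r**2 - b**2) for r in (rp, rm)])
    gm, gp = np.linalg.solve(M, rhs)    # gamma_minus, gamma_plus
    def R(r, sgn):                      # residues R_+ (sgn=+1) and R_- (sgn=-1)
        num = ((-r - a)*(r**2 - b**2) - B/4*(-r + a)*(r**2 - b**2)*gm
               - np.conj(B)/4*(-r + b)*(r**2 - a**2)*gp)
        return num/(-sgn*2*m*r*(rp**2 - rm**2))
    Rp, Rm = R(rp, +1), R(rm, -1)
    t1 = -kt**3*p*np.exp(2j*ky*d)/ky**2*(Rp/(rp - 1j*ky) + Rm/(rm - 1j*ky))
    t2 = (abs(kt)**3*np.exp(-2*kyi*d)/abs(ky)**2
          *(Rp/(rp + 1j*np.conj(ky)) + Rm/(rm + 1j*np.conj(ky))))
    return np.real(t1 + t2)/(16*np.pi)

kx   = np.linspace(1e-3, 40*ktr, 20001) # cutoff justified by the e^{-2 kyi d} decay
vals = np.array([integrand(k) for k in kx])
print("TM half-space bound (sketch):",
      np.sum(vals[1:] + vals[:-1])*(kx[1] - kx[0])/2)
\end{verbatim}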
\section{Asymptotic analysis} Based on the results of the previous section, we can perform asymptotic analysis to understand the observed scaling of the half-space bounds with respect to bandwidth, material, and separation. \subsection{TM bandwidth scaling} To understand the scaling of the LDOS bounds with bandwidth, it is illuminating to consider the contributions from the traveling waves and the evanescent waves separately. Here traveling and evanescent waves refer to different $k_x$ in the planewave decomposition of the dipole field: traveling waves have $|k_x|<\tilde{k}_r$ and a predominantly real $k_y$, propagating for long distances with a decay rate proportional to the bandwidth $\tilde{k}_i$; evanescent waves have $|k_x|>\tilde{k}_r$ and a predominantly imaginary $k_y$, rapidly decaying along the $y$ direction with a decay rate strongly dependent on $|k_x|$. \subsubsection{Traveling wave contribution} We show that in the limit of zero bandwidth / single frequency, the contribution from just the traveling waves to the LDOS limits tends toward a finite constant. The following analysis is for TM fields, but an analogous result can be obtained for TE fields. We take the material susceptibility $|\chi|\rightarrow\infty$ and phase rotation $p=1$ so the results are material-independent and $\text{Asym}(p\mathbb{U}^{TM})=\text{Asym}\mathbb{G}^{TM}$: \begin{align*} \text{Asym}\mathbb{G}^{TM}(x,y;x',y') &= \int_{-\infty}^\infty \frac{e^{ik_x(x-x')}}{2\pi} \text{Asym}\mathbb{G}_{k_x}^{TM}(y,y') \,\dd k_x ,\\ \text{Asym}\mathbb{G}_{k_x}^{TM}(y,y') &= \frac{1}{2} \Re{\frac{\tilde{k}^2}{k_y} e^{ik_y|y-y'|}} . \end{align*} In the limit of zero bandwidth / single frequency, $\tilde{k}\in\mathbb{R}$. Then for $|k_x|>\tilde{k}$, $k_y$ is purely imaginary and $\text{Asym}\mathbb{G}_{k_x}^{TM}=0$, leading to \begin{align*} \text{Asym}\mathbb{G}^{TM} &= \int_{-\tilde{k}}^{\tilde{k}} \,\dd k_x \frac{e^{ik_x(x-x')}}{2\pi} \frac{\tilde{k}^2}{2k_y} \cos k_y(y-y') \\ &= \int_{-\tilde{k}}^{\tilde{k}} \,\dd k_x \frac{e^{ik_x(x-x')}}{2\pi} \frac{\tilde{k}^2}{2k_y} \big[\cos(k_y y)\cos(k_y y') + \sin(k_y y)\sin(k_y y') \big] . \end{align*} We consider a design region shaped as a slab with infinite extent in the $x$ direction and a thickness $h$ in the $y$ direction, with the origin at the center. This is different from a half-space, but as we shall see the final result is completely independent of $h$ and thus should apply in the limit $h \rightarrow \infty$. The different parities of $\cos$ and $\sin$ about the origin allow us to directly write down the eigenvectors of $\text{Asym}\mathbb{G}_{k_x}^{TM}$ as \begin{subequations} \begin{align} \ket{q_{cos} (k_y)} &= \frac{\cos(k_y y)}{a_{cos}} ,\\ \ket{q_{sin} (k_y)} &= \frac{\sin(k_y y)}{a_{sin}} \end{align} with normalization factors \begin{align} a^2_{cos} &= \int_{-h/2}^{h/2} \cos^2(k_y y) \,\dd y = \frac{1}{2k_y}(k_y h + \sin(k_y h)) ,\\ a^2_{sin} &= \int_{-h/2}^{h/2} \sin^2(k_y y) \,\dd y = \frac{1}{2k_y}(k_y h - \sin(k_y h)) \end{align} and corresponding eigenvalues \begin{align} \rho_{cos} (k_y) &= \frac{\tilde{k}^2}{2k_y} a_{cos}^2 ,\\ \rho_{sin} (k_y) &= \frac{\tilde{k}^2}{2k_y} a_{sin}^2 . \end{align} \end{subequations} We can thus write \begin{equation} \text{Asym}\mathbb{G}^{TM} = \int_{-\tilde{k}}^{\tilde{k}} \,\dd k_x \frac{e^{ik_x(x-x')}}{2\pi} \bigg( \rho_{cos}(k_y) \ket{q_{cos}(k_y)}\bra{q_{cos}(k_y)} + \rho_{sin}(k_y) \ket{q_{sin}(k_y)}\bra{q_{sin}(k_y)} \bigg) .
\end{equation} The dipole source $\vb{J}(x',y') = \hat{z}\,\delta(x')\delta(y'-d-h/2)$ generates an incident field with the traveling wave components given by \begin{align*} \vb{E}_{v,travel}^{TM} &= -\frac{\tilde{k}}{2\sqrt{2\pi}} \int_{-\tilde{k}}^{\tilde{k}} \dd k_x \frac{e^{ik_x x}}{\sqrt{2\pi}} \frac{1}{k_y} e^{ik_y(d+h/2)} \big( \cos(k_y y) - i \sin(k_y y) \big) ,\\ -\vb{E}_{v,travel}^{TM*} &= \frac{\tilde{k}}{2\sqrt{2\pi}} \int_{-\tilde{k}}^{\tilde{k}} \dd k_x \frac{e^{ik_x x}}{\sqrt{2\pi}} \frac{1}{k_y} e^{-ik_y(d+h/2)} \big( \cos(k_y y) + i \sin(k_y y) \big) . \end{align*} While $\text{Asym}\mathbb{G}^{TM}$ is only positive semi-definite, the incident field $\vb{E}_{v,travel}^{TM}$ is contained completely in the span of the non-singular eigenvectors $q_{cos}(k_y)$, $q_{sin}(k_y)$. Thus we can write down $\text{Asym}\mathbb{G}^{TM-1} \cdot \vb{E}_{v,travel}^{TM}$ as \begin{equation} \text{Asym}\mathbb{G}^{TM-1} \cdot \vb{E}_{v,travel}^{TM} = -\frac{\tilde{k}}{2\sqrt{2\pi}} \int_{-\tilde{k}}^{\tilde{k}} \dd k_x \frac{e^{ik_x x}}{\sqrt{2\pi}} \frac{1}{k_y} e^{ik_y(d+h/2)} \big( \rho_{cos}^{-1} \cos(k_y y) - i \rho_{sin}^{-1} \sin(k_y y) \big) \end{equation} which can be formally interpreted as a pseudo-inverse, or the inverse restricted to the relevant non-singular subspace. Thus we have \begin{align*} \bra{\vb{E}_{v,travel}^{TM}}\text{Asym}\mathbb{G}^{TM-1}\ket{\vb{E}_{v,travel}^{TM}} &= \frac{ \tilde{k}^2 }{8\pi} \int_{-\tilde{k}}^{\tilde{k}} \frac{1}{k_y^2} \bigg(\frac{a_{cos}^2}{\rho_{cos}} + \frac{a_{sin}^2}{\rho_{sin}} \bigg) \,\dd k_x \\ &= \frac{ \tilde{k}^2 }{8\pi} \int_{-\tilde{k}}^{\tilde{k}} \frac{4}{\tilde{k}^2 k_y} \,\dd k_x \\ &= \frac{1}{2\pi} \int_{-\tilde{k}}^{\tilde{k}} \frac{1}{\sqrt{\tilde{k}^2-k_x^2}} \,\dd k_x \\ &= \frac{1}{2} .\numthis \end{align*} A similar calculation yields \begin{align*} \bra{-\vb{E}_{v,travel}^{TM*}}\text{Asym}\mathbb{G}^{TM-1}\ket{\vb{E}_{v,travel}^{TM}} &= \frac{ \tilde{k}^2}{8\pi} \int_{-\tilde{k}}^{\tilde{k}} \frac{e^{ik_y(2d+h)}}{k_y^2} \bigg(\frac{a_{cos}^2}{\rho_{cos}} - \frac{a_{sin}^2}{\rho_{sin}} \bigg) \,\dd k_x \\ &= 0 .\numthis \end{align*} This gives the traveling wave contribution to the material-independent LDOS enhancement limits \begin{align*} \rho_{sca}^{TM} &= \frac{1}{4} \Re{ \bra{-\vb{E}_{v,travel}^{TM*}} (\text{Asym}\mathbb{G}^{TM})^{-1}\ket{\vb{E}_{v,travel}^{TM}} } + \frac{\tilde{k}}{4} \bra{\vb{E}_{v,travel}^{TM}} (\text{Asym}\mathbb{G}^{TM})^{-1}\ket{\vb{E}_{v,travel}^{TM}} \\ &= \frac{\tilde{k}}{8} .\numthis \end{align*} Thus, if only the traveling waves (far field) are considered, $\rho_{sca}^{TM}/\rho_0^{TM}$ tends towards a constant value of $1$ in the single frequency limit. This can indeed be observed in the numerics for narrow bandwidth and large separation $d$, where the near-field contribution has decayed exponentially to the point of being negligible. An analogous calculation for TE shows that the ratio there is $1/2$. \subsubsection{Evanescent wave contribution\label{sec:TM_evan} } We evaluate the bandwidth scaling of the evanescent wave contribution for lossless materials ($\chi_i=0$) and TM polarization. To do so, we choose the phase parameter $p = 1+ip_i$ with the imaginary part $p_i \rightarrow 0$. As we shall see, this not only simplifies the asymptotic analysis for small bandwidth but also leads to finite, material-independent bounds. Given $\chi_i=0$, the material factor $\Im{p/\chi^*} = p_i/\chi_r \rightarrow 0$ as $p_i \rightarrow 0$.
Furthermore, from (\ref{eq:TM_halfspace_poles}) we have \begin{equation} r_- \sim \sqrt{k_{yi}^2 - k_{yr}^2 - \frac{2 k_{yr}k_{yi} (B_i k_{yi} - B_r k_{yr})}{B_r k_{yi} + B_i k_{yr}} } \qquad r_+ \sim \sqrt{\frac{\chi_r (B_r k_{yi} + B_i k_{yr})}{p_i}} \qquad p_i \rightarrow 0 \end{equation} so we see that in the limit $p_i \rightarrow 0$, $r_-$ tends toward a finite value while $r_+$ diverges $\propto p_i^{-1/2}$. To simplify the analysis, we can use the Cauchy-Schwarz relaxed bound (\ref{eq:theta_bound_simplified_CS}) to get \begin{align*} \rho_{sca}^{TM} &\leq \frac{\abs{\tilde{\omega}}}{2} \expval{\text{Asym}(p\mathbb{U}^{TM})^{-1}}{\vb{E}_v^{TM}} \\ &= \frac{1}{8\pi} \int_0^\infty \Re{ \frac{\abs{\tilde{k}}^3 e^{-2 k_{yi} d}}{\abs{k_y}^2} \left( \frac{R_+}{r_+ + ik_y^*} + \frac{R_-}{r_- + ik_y^*} \right) } \,\dd k_x . \numthis \label{eq:TM_halfspace_kxintegral_CS} \end{align*} Now, treating $r_+$ as a large variable in the limit $p_i \rightarrow 0$, asymptotic analysis of (\ref{eq:TM_halfspace_gamma}) and (\ref{eq:TM_halfspace_residues}) gives \begin{equation} \frac{R_+}{r_+ + ik_y^*} + \frac{R_-}{r_- + ik_y^*} \sim \frac{8 k_{yi} \abs{k_y}^2}{\Im{p/\chi^*} r_+^2 \left[(r_-+k_{yi})^2 +k_{yr}^2 \right]} \qquad p_i \rightarrow 0 . \end{equation} Since $\Im{p/\chi^*} \propto p_i$ and $r_+^2 \propto p_i^{-1}$ in the limit $p_i \rightarrow 0$, the entire integrand of (\ref{eq:TM_halfspace_kxintegral_CS}) tends toward a finite value, \begin{equation} \rho_{sca}^{TM} \leq \frac{1}{8\pi} \int_0^\infty \Re{ \abs{\tilde{k}}^3 e^{-2 k_{yi} d} \frac{8 k_{yi}}{(B_r k_{yi} + B_i k_{yr}) \left[(r_-+k_{yi})^2 +k_{yr}^2 \right]} } \,\dd k_x , \label{eq:TM_halfspace_kxintegrand_CS_mat_indep} \end{equation} so the limiting bound at $p=1$ is well-defined. We can now consider the narrow bandwidth limit $\tilde{k}_i \rightarrow 0$ given $p=1$. It has already been established that the traveling wave contribution from the $k_x<\tilde{k}_r$ part of the integrand is a constant in the narrow bandwidth limit. In the evanescent region $k_x > \tilde{k}_r$, to lowest order in $\tilde{k}_i$ we have \begin{subequations} \begin{equation} k_{yi} \sim \sqrt{k_x^2 - \tilde{k}_r^2} \quad k_{yr} \sim \frac{\tilde{k}_r}{k_{yi}} \cdot \tilde{k}_i \qquad \tilde{k}_i \rightarrow 0 \label{eq:evan_ky} \end{equation} \begin{equation} B_i \sim -\frac{\tilde{k}_r^2}{k_{yi}} \quad B_r \sim \left( \frac{\tilde{k}_r^3}{k_{yi}^3} + \frac{2\tilde{k}_r}{k_{yi}} \right) \cdot \tilde{k}_i \qquad \tilde{k}_i \rightarrow 0 \end{equation} \begin{equation} r_-(p=1) \sim \sqrt{\tilde{k}_r^2 + k_{yi}^2} \qquad \tilde{k}_i \rightarrow 0 \end{equation} \label{eq:TM_halfspace_smallki} \end{subequations} All of these combine to give \begin{equation} \Re{ \abs{\tilde{k}}^3 e^{-2 k_{yi} d} \frac{8 k_{yi}}{(B_r k_{yi} + B_i k_{yr}) \left[(r_-+k_{yi})^2 +k_{yr}^2 \right]} } \propto \tilde{k}_i^{-1} \qquad k_x>\tilde{k}_r,\ \tilde{k}_i \rightarrow 0 \end{equation} and we see that the evanescent wave contribution gives the inverse bandwidth scaling for lossless materials, as seen in the main text. \subsection{TM material scaling} From Sec.~\ref{sec:TM_evan} we see that, given lossless $\chi$, the bound with $p=1$ is finite and well-defined, amounting to \begin{equation} \rho_{sca}^{TM} \leq \frac{\abs{\tilde{\omega}}}{2} \expval{\text{Asym}\mathbb{G}^{TM-1}}{\vb{E}_v^{TM}}, \label{eq:TM_mat_indep} \end{equation} which is a bound independent of $\chi$ and therefore applies to all lossless $\chi$.
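The extension of (\ref{eq:TM_mat_indep}) to lossy materials, shown next, rests on a standard operator inequality: for Hermitian $A \succ 0$ and $B \succ 0$, $(A+B)^{-1} \prec B^{-1}$ in the quadratic-form sense. A toy numerical illustration follows; the random matrices standing in for $\chi_i/\abs{\chi}^2$ and $\text{Asym}\mathbb{G}$ are our assumptions ($\text{Asym}\mathbb{G}$ itself is only semi-definite, so the stand-in is shifted to be strictly positive definite):
\begin{verbatim}
import numpy as np

# Toy check of <x|(A+B)^{-1}|x> < <x|B^{-1}|x> for Hermitian A, B > 0.
rng = np.random.default_rng(0)
n = 6
Q = rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n))
B = Q @ Q.conj().T + 1e-3*np.eye(n)   # stand-in for Asym G (made strictly PD)
A = 0.2*np.eye(n)                      # stand-in for chi_i/|chi|^2 > 0
x = rng.standard_normal(n) + 1j*rng.standard_normal(n)

lhs = np.real(x.conj() @ np.linalg.solve(A + B, x))
rhs = np.real(x.conj() @ np.linalg.solve(B, x))
print(lhs < rhs)                       # True
\end{verbatim}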
The next step is to show that (\ref{eq:TM_mat_indep}) is also a bound for lossy $\chi$ with $\Im(\chi)>0$. We can see this by again setting $p=1$ in (\ref{eq:theta_bound_simplified}) and applying the Cauchy-Schwarz inequality to arrive at a bound for lossy $\chi$: \begin{equation} \rho_{sca}^{TM} \leq \frac{\abs{\tilde{\omega}}}{2} \expval{\left(\frac{\chi_i}{\abs{\chi}^2} + \text{Asym}\mathbb{G}^{TM}\right)^{-1}}{\vb{E}_v^{TM}} . \end{equation} Now as operators we have $\frac{\chi_i}{\abs{\chi}^2} \succ 0$ and $\text{Asym}\mathbb{G}^{TM} \succeq 0$, so $\expval{\left(\frac{\chi_i}{\abs{\chi}^2} + \text{Asym}\mathbb{G}^{TM}\right)^{-1}}{\vb{E}_v^{TM}} < \expval{\text{Asym}\mathbb{G}^{TM-1}}{\vb{E}_v^{TM}}$; it is clear that the material-independent bound (\ref{eq:TM_mat_indep}), while derived assuming $\chi_i=0$, applies to general lossy $\chi$. Thus in general, as $|\chi| \rightarrow \infty$, the TM half-space LDOS limits do not diverge but are always bounded by (\ref{eq:TM_mat_indep}). \subsection{TM separation scaling} In this section we investigate the scaling of the TM half-space bounds with the vacuum separation $d$ as $d \rightarrow 0$. The $k_x$ integral in (\ref{eq:TM_halfspace_kxintegral_CS}) converges for finite $d$ due to the exponential decay factor $e^{-2 k_{yi} d}$: as $k_x \rightarrow \infty$, $k_{yi} \approx k_x \rightarrow \infty$. The scaling of (\ref{eq:TM_halfspace_kxintegral_CS}) with $d$ thus depends on the $k_x$ scaling of the rest of the integrand as $k_x \rightarrow \infty$. \subsubsection{Finite $\chi$ \label{sec:asymptotics_TM_finitechi}} Given finite $\chi$, we can always select a complex phase $p$ such that $\Im{p/\chi^*} > 0$. Now from (\ref{eq:TM_AsympUkx}) and (\ref{eq:evan_ky}) it is clear that $\lim_{k_x \rightarrow \infty} \text{Asym}(p\mathbb{U}^{TM})_{k_x} = \Im{p/\chi^*}$, since $\lim_{k_x \rightarrow \infty} \tilde{k}^2/k_y = 0$ and $\lim_{k_x \rightarrow \infty} e^{ik_y|y-y'|} = 0$ for $y \neq y'$ (if $y=y'$ then $e^{ik_y|y-y'|}=1$, but this is a finite non-zero value of the integral kernel with support over a set of measure 0, so the corresponding integral operator still goes to 0). The $k_x$ integrand of $\expval{\text{Asym}(p\mathbb{U}^{TM})^{-1}}{\vb{E}_v^{TM}}$ given in (\ref{eq:Ei_AsympUinv_Ei}) for large $k_x$ is then approximately \begin{equation*} \frac{e^{-2k_x d}}{k_x^2} \Im{p/\chi^*}^{-1} \int_0^\infty e^{-2k_x y} \, \dd y \propto \frac{e^{-2k_x d}}{k_x^3}. \end{equation*} Even at exactly $d=0$, the integrand scales $\propto k_x^{-3}$ as $k_x \rightarrow \infty$ and the integral converges. Thus for finite $\chi$ the half-space LDOS bounds approach a finite constant as $d \rightarrow 0$. \subsubsection{Material independent bounds} For the material independent bound given in (\ref{eq:TM_mat_indep}), the $k_x$ integrand is given by (\ref{eq:TM_halfspace_kxintegrand_CS_mat_indep}) and (\ref{eq:TM_halfspace_smallki}). In the narrow-bandwidth, large-$k_x$ limit the integrand behaves as \begin{equation} \Re{ \abs{\tilde{k}}^3 e^{-2 k_{yi} d} \frac{8 k_{yi}}{(B_r k_{yi} + B_i k_{yr}) \left[(r_-+k_{yi})^2 +k_{yr}^2 \right]} } \sim \frac{\abs{\tilde{k}}^3}{\tilde{k}_r} \frac{1}{\tilde{k}_i} \frac{e^{-2k_x d}}{k_x} \qquad \tilde{k}_i \rightarrow 0,\ k_x \rightarrow \infty.
\end{equation} Thus the material independent bound scaling as $d \rightarrow 0$ can be evaluated as \begin{align*} \rho_{sca, max}^{TM} \sim \frac{1}{8\pi} \frac{\abs{\tilde{k}}^3}{\tilde{k}_r} \frac{1}{\tilde{k}_i} \int_{\beta \tilde{k}_r}^{\infty} \frac{e^{-2k_x d}}{k_x} \,\dd k_x \\ = \frac{1}{8\pi} \frac{\abs{\tilde{k}}^3}{\tilde{k}_r} \frac{1}{\tilde{k}_i} E_1(2\beta \tilde{k}_r d) \\ \sim \frac{1}{8\pi} \frac{\abs{\tilde{k}}^3}{\tilde{k}_r} \frac{1}{\tilde{k}_i} \ln(\lambda_0 / d) \numthis \label{eq:TM_halfspace_d_asymptotics_mat_indep} \end{align*} where $E_1$ is the exponential integral and we have selected a constant $\beta \gg 1$ such that the large-$k_x$ approximations hold; as seen in (\ref{eq:TM_halfspace_d_asymptotics_mat_indep}), the exact value of $\beta$ is irrelevant to the leading order asymptotic $\ln(1/d)$. Note that this is a different scaling from that of the finite-$\chi$ bounds in the previous subsection. A consequence of the material independent bounds diverging while the fixed-material bounds saturate as $d \rightarrow 0$ is that smaller $d$ increases the relative advantage of larger susceptibilities, as seen in Fig.~2(b) in the main text. \subsection{TE separation scaling} The prior sections on the asymptotics of the TM case have demonstrated scaling of the bounds with the inverse of the bandwidth (for lossless materials) and saturation with increasing material susceptibility; the TE bounds share these scaling characteristics, so for the sake of simplicity we do not carry out a detailed TE asymptotic analysis. However, there is a difference in the $d \rightarrow 0$ scaling between the TM and TE results, which can be understood by comparing the $k_x \rightarrow \infty$ characteristics of $\vb{E}_{v,k_x}^{TM}$ and $\vb{E}_{v,k_x}^{TEy}$. For TE, we have \begin{align*} \rho_{sca}^{TEy} &\leq 2 \int_0^\infty \expval{\text{Asym}(p\mathbb{U}_{k_x}^{TE})^{-1}}{\vb{E}_{v,k_x}^{TEy}} \, \dd k_x\\ &= 2 \int_0^\infty \expval{(\Im{p/\chi^*} + \text{Asym}(p^*\mathbb{G}_{k_x}^{TE}))^{-1}}{\vb{E}_{v,k_x}^{TEy}} \, \dd k_x \\ & < 2 \int_0^\infty \Im{p/\chi^*}^{-1} \lVert \vb{E}_{v,k_x}^{TEy} \rVert^2 \, \dd k_x \\ &= \frac{1}{4\pi \abs{\tilde{k}}^2} \int_0^\infty \frac{e^{-2 k_{yi}d}}{2 k_{yi}} \left(k_x^2 + \frac{k_x^4}{\abs{k_y}^2} \right) \, \dd k_x . \end{align*} In the $k_x \rightarrow \infty$ limit the integrand behaves as $\sim k_x e^{-2 k_x d}$, so, similarly to Sec.~\ref{sec:asymptotics_TM_finitechi}, for the $d \rightarrow 0$ asymptotics we have \begin{align*} \rho_{sca}^{TEy} &\lesssim \frac{1}{4\pi \abs{\tilde{k}}^2} \int_{\beta \tilde{k}_r}^\infty k_x e^{-2 k_x d} \, \dd k_x \qquad d \rightarrow 0 \\ &= \frac{1}{d^2} \frac{1}{4\pi \abs{\tilde{k}}^2} \cdot \frac{1}{4} e^{-2\beta \tilde{k}_r d} (2 \beta \tilde{k}_r d + 1) \qquad d\rightarrow 0 \\ &\propto \frac{1}{d^2} \qquad d \rightarrow 0 \numthis \label{eq:TE_halfspace_d_asymptotics} \end{align*} and we recover the $1/d^2$ scaling of the TE bounds seen in the main text. \subsubsection{3D separation scaling} For a point dipole near a 3D half-space design region, a similar analysis yields $1/d^3$ scaling of the LDOS bounds. Due to the extra spatial dimension compared with the TE case, the 1D $k_x$ integral becomes a 2D $k_\parallel$ integral. For simplicity, suppose that the dipole is aligned with the normal of the design region surface, giving the problem cylindrical symmetry. The spatial integration kernel is then $2\pi \int_0^\infty k_\parallel (\cdot) \dd k_\parallel$, with the bound integral analogous to (\ref{eq:TE_halfspace_d_asymptotics}) having an extra $k_\parallel$ factor in the integrand.
This will lead to $\propto 1/d^3$ scaling, as seen in \cite{shim_fundamental_2019}. \section{Comparison to passivity bounds} The main prior work concerning finite bandwidth LDOS limits is \cite{shim_fundamental_2019}, where a bound based on passivity constraints was derived: \begin{subequations} \begin{equation} \rho_{sca} \leq \frac{|\tilde{\omega}|}{2} f(\tilde{\omega}, \chi) \bra{\vb{E}_v}\ket{\vb{E}_v} \end{equation} with the material figure of merit \begin{equation} f(\tilde{\omega}, \chi) = \frac{|\tilde{\omega}\chi|^2 + |\tilde{\omega}\chi|\Delta\tilde{\omega}}{|\tilde{\omega}|\Im(\tilde{\omega}(1+\chi))}. \label{eq:passivity_FOM} \end{equation} \label{eq:passivity_bounds} \end{subequations} Note that compared to (22) in \cite{shim_fundamental_2019}, we have dropped the electrostatic contribution (as in the main text), and there is an additional factor of $\pi |\tilde{\omega}|^2$ due to differences in definitions (the $\pi$ factor is a matter of convention; the $|\tilde{\omega}|^2$ factor arises because $\vb{E}_v$ in our work is generated by a unit-amplitude dipolar current source instead of a unit dipole moment). Compared to the material bound given by (16) in the main text, (\ref{eq:passivity_bounds}) essentially corresponds to a particular value of $p$. For example, given a lossless $\chi$ and narrow bandwidth $\Delta\omega \ll |\tilde{\omega}|$, we have \begin{equation*} f(\tilde{\omega}, \chi) \approx \frac{|\tilde{\omega}|}{\Delta\omega} \frac{\chi^2}{1+\chi}. \end{equation*} Now if we select a phase rotation $p$ such that $p_r=1$ and \begin{equation} p_i = \frac{\chi+1}{\chi} \frac{\Delta{\omega}}{\abs{\tilde{\omega}}}, \end{equation} the material bound is \begin{align*} \frac{|\tilde{\omega}|}{2} \Im{p/\chi^*}^{-1} \bra{\vb{E}_v}\ket{\vb{E}_v} = \frac{|\tilde{\omega}|}{2} \frac{|\tilde{\omega}|}{\Delta\omega} \frac{\chi^2}{1+\chi} \bra{\vb{E}_v}\ket{\vb{E}_v} = \frac{|\tilde{\omega}|}{2} f(\tilde{\omega}, \chi) \bra{\vb{E}_v}\ket{\vb{E}_v} \end{align*} which is exactly equivalent to the passivity bound. Checking (\ref{eq:cmax}), it is clear that with such a phase rotation $p$, $\text{Asym}(p\mathbb{U})$ remains positive definite. \subsection{Bandwidth scaling of the passivity bounds} The passivity bounds (\ref{eq:passivity_bounds}) exhibit a stronger bandwidth scaling for an infinite half-space design region, originating from the traveling wave contributions to $\bra{\vb{E}_v}\ket{\vb{E}_v}$. For example, for TM polarization we have \begin{align*} \bra{\vb{E}_v^{TM}}\ket{\vb{E}_v^{TM}} &= \frac{\abs{\tilde{k}}^2}{8\pi} \int_{-\infty}^\infty \frac{ e^{-2 k_{yi} d}}{\abs{k_y}^2} \int_0^\infty e^{-2 k_{yi} y} \dd y \, \dd k_x \\ &= \frac{\abs{\tilde{k}}^2}{8\pi} \int_{-\infty}^\infty \frac{ e^{-2 k_{yi}d} }{2\abs{k_y}^2 k_{yi}} \, \dd k_x. \end{align*} Now for traveling waves $\abs{k_x} < \tilde{k}_r$, it can easily be shown from $k_y = \sqrt{\tilde{k}^2 - k_x^2}$ that $k_{yi} \propto \tilde{k}_i$, leading to $\bra{\vb{E}_v^{TM}}\ket{\vb{E}_v^{TM}} \propto 1/\tilde{k}_i \propto 1/\Delta\omega$ (this analysis also applies to TE polarization). For lossless dielectrics, $f(\tilde{\omega}, \chi) \approx \frac{|\tilde{\omega}|}{\Delta\omega}$, giving the passivity bounds an overall $1/\Delta\omega^2$ scaling, as compared to the $1/\Delta\omega$ scaling of the full bounds.
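This $1/\tilde{k}_i$ scaling of $\bra{\vb{E}_v^{TM}}\ket{\vb{E}_v^{TM}}$ is easy to confirm numerically. The sketch below evaluates the $k_x$ integral above on a uniform grid for several bandwidths; the separation $d$, the cutoff, and the grid are illustrative assumptions, and the quadrature is deliberately crude:
\begin{verbatim}
import numpy as np

# Crude check that <Ev|Ev> = (|kt|^2 / 8 pi) * int e^{-2 kyi d}/(2 |ky|^2 kyi) dkx
# scales roughly like 1/ki. Parameter values are assumptions.
def norm_integral(ki, d=0.5, kr=2*np.pi, K=20*np.pi, N=400001):
    kt = kr + 1j*ki
    kx = np.linspace(-K, K, N)
    ky = np.sqrt(kt**2 - kx**2)
    ky = np.where(ky.imag < 0, -ky, ky)   # branch with Im ky > 0
    f  = np.exp(-2*ky.imag*d)/(2*np.abs(ky)**2*ky.imag)
    return abs(kt)**2/(8*np.pi)*np.sum(f[1:] + f[:-1])*(kx[1] - kx[0])/2

for ki in (1e-2, 1e-3, 1e-4):
    print(ki, norm_integral(ki))          # successive values grow roughly 10x
\end{verbatim}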
For lossy materials $f(\tilde{\omega}, \chi) \approx \frac{\abs{\chi}^2}{\chi_i}$, giving the passivity bounds an overall $1/\Delta\omega$ scaling, as compared to the $1/\Delta\omega^4$ scaling of the full bounds seen in Fig.~3 of the main text. \bibliographystyle{plain}
{ "timestamp": "2022-10-04T02:04:48", "yymm": "2209", "arxiv_id": "2209.08668", "language": "en", "url": "https://arxiv.org/abs/2209.08668" }
\section{\label{sec:level1}Introduction} The valley degree of freedom and its manipulation have become rising topics in recent years \cite{2016Valley,2007Valley,2012Valley}. As local extrema in band structures, multiple valleys can be selectively addressed through their contrasting optoelectronic properties. Besides introducing an external magnetic field or the magnetic proximity effect \cite{MF1,MF2,MF3,MF4,PRB15}, a vertical electric field, as a more energy-efficient means, has been applied in a number of two-dimensional valley materials to modify the valley structure \cite{bilayerMoS2,twisted-WSe2,TiSiCO}. However, there are few examples of electric control of magnetic valley materials, and the influences of the magnetic order and the stacking configuration remain unclear and worth further investigation. A new family of two-dimensional materials, $\rm MA_2Z_4$ (M=transition metal; A=Si, Ge; Z=N, P, As), has recently been proposed and synthesized \cite{MA2Z4,2MA2Z4}. As a typical example, the $\rm VSi_2N_4$ monolayer is a ferromagnetic, two-valley semiconductor, in which valley splittings appear due to the simultaneous presence of magnetic order and sizable spin-orbit coupling \cite{VSi2N4Monolayer,VSi2N4}. The $\rm VSi_2N_4$ bilayer, considered here, provides a powerful platform for studying the interplay of the spin, valley and layer degrees of freedom. In this work, we investigate the magnetic and electronic properties of $\rm VSi_2N_4$ bilayers by first-principles density functional theory calculations. Taking into account different interlayer magnetic couplings and stacking orders, the $\rm VSi_2N_4$ bilayers exhibit varied valley and spin degeneracies. In the most stable $\rm AA^{\prime}$ stacking, the bands are spin-degenerate with sizable valley splitting for the interlayer antiferromagnetic coupling, while the valley degeneracy is kept in the single-spin band structures for the ferromagnetic coupling. The introduction of a vertical electric field leads to highly tunable splittings of the above degeneracies, owing to the spin-layer and valley-layer couplings. Meanwhile, the modifications of the electronic structures give rise to distinct transport behaviors, i.e., a transition from valley/spin contrasting Hall currents to a single anomalous Hall current with any combination of valley and spin. The valley-spin physics of the other stacking orders is further discussed. The above characteristics are also expected in other bilayers and multilayers of magnetic valley semiconductors. The tunable valley and spin splittings from the layer degree of freedom, including the interlayer magnetic orders and the stacking orders, add a new dimension to valley-spin physics and provide a practical avenue for designing advanced spintronic and valleytronic devices. \section{METHODS} We use density functional theory calculations within the generalized gradient approximation (GGA) to study the atomic and electronic structures of the bilayer $\rm VSi_2N_4$ \cite{density,GGA}. The calculations are implemented in the Vienna Ab initio Simulation Package, with the projector-augmented wave potentials and the Perdew–Burke–Ernzerhof exchange-correlation functional \cite{VASP1,VASP2,PBE}. A plane-wave cutoff of 500 eV and a Monkhorst–Pack $ \bold k $-point mesh of 15 $\times$ 15 $\times$ 1 are adopted. A vacuum slab of approximately 20 $ \rm \AA $ is inserted to minimize the interaction between the $\rm VSi_2N_4$ bilayer and its periodic images.
Structure optimization is performed with a convergence threshold of $10^{-4}$ eV/$ \rm \AA $ on the interatomic forces, and the convergence criterion of the total energy for the electronic iteration is set to $10^{-8}$ eV. To better describe the on-site electron-electron interaction, the GGA+U method is applied to the 3$d$ orbitals of the vanadium atom, with an effective $U$ of 3 eV \cite{GGAU,VS2bilayer,VSi2N4Monolayer}. The van der Waals correction is added by the DFT-D2 method to capture the interlayer interaction of the $\rm VSi_2N_4$ bilayer \cite{DFTD2}. The spin-orbit coupling (SOC) is further introduced into the electronic structure calculations, where the out-of-plane magnetization is considered to realize the valley polarization in each $\rm VSi_2N_4$ monolayer \cite{VSi2N4Monolayer}. To quantify the valley and spin degeneracy splittings in the electronic band structures of the $\rm VSi_2N_4$ bilayers, we define the valley splitting as \begin{eqnarray} \Delta_{\text{val}}^{v/c}=E_{\text{val}}^{v/c, K_+}-E_{\text{val}}^{v/c, K_-} \end{eqnarray} for a given band (the valence or conduction band), and the spin splitting as \begin{eqnarray} \Delta_{\text{spin}}^{v/c, K_{\tau}}=E_{\uparrow}^{v/c, K_{\tau}}-E_{\downarrow}^{v/c, K_{\tau}} \end{eqnarray} for a given band at a given valley. Here the superscripts $v$ and $c$ denote the valence and conduction bands, respectively. The index $\tau=\pm 1$ distinguishes the $K_{\pm}$ valleys, and $\uparrow$ and $\downarrow$ represent the spin-up and spin-down states, respectively. $E_{\text{val}}^{v/c, K_\tau}$ in Eq. (1) is the energy extremum of the band-edge states at the $K_{\tau}$ valley for a certain band regardless of the spin, while $E_{\uparrow/\downarrow}^{v/c, K_\tau}$ in Eq. (2) is the energy of the band-edge state with up/down spin. \section{RESULTS} \subsection{\label{sec:level2}Atomic structures and magnetic properties} For the $\rm VSi_2N_4$ bilayer, six highly-symmetric interlayer stacking configurations are taken into account in our calculations to search for the most stable atomic structure. The detailed information on these stacking configurations is given in the Supplementary Material (SM hereafter). Given that the $\rm VSi_2N_4$ monolayer has a ferromagnetic order \cite{VSi2N4Monolayer}, both the interlayer ferromagnetic and antiferromagnetic couplings are further considered for each stacking configuration. According to the total energy calculations, it is found that the $\rm AA^{\prime}$ stacking has the lowest energy among all atomic configurations. The energy of the interlayer antiferromagnetic coupling in the $\rm AA^{\prime}$ stacking is further lower than that of the interlayer ferromagnetic one by 0.04 meV. Therefore, the antiferromagnetically coupled $\rm AA^{\prime}$ stacking is the most stable. Besides, the interlayer magnetic coupling is not strong; it has the same order of magnitude as those of $\rm KCrS_2$ and $\rm MnBi_4Te_7$ van der Waals layers \cite{MCrS2,MnBiTe}. The interlayer magnetic order is thus likely to be switched by an applied magnetic field \cite{magneticfield}, the magnetic proximity effect \cite{PRB15} or other methods \cite{spin-orbit,strain-tuning}. We therefore focus on the $\rm AA^{\prime}$ stacking with the two kinds of interlayer magnetic coupling in the following calculations; the two exhibit distinct valley and spin properties.
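The splittings reported below are obtained from the calculated band-edge energies via Eqs. (1) and (2); for bookkeeping they amount to simple differences, as in the following trivial Python sketch (the function names and the example energy reference are our own assumptions):
\begin{verbatim}
def valley_splitting(E_Kplus, E_Kminus):
    """Delta_val = E(K+) - E(K-) for a given (valence or conduction) band."""
    return E_Kplus - E_Kminus

def spin_splitting(E_up, E_down):
    """Delta_spin = E_up - E_down for a given band at a given valley."""
    return E_up - E_down

# Example: reproduces the 64.1 meV valence valley splitting quoted below,
# up to an arbitrary choice of energy zero (assumed here).
print(valley_splitting(0.0, -64.1))
\end{verbatim}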
We also consider the magnetic properties and valley-spin physics of the $\rm VSi_2P_4$ bilayer, whose interlayer magnetic coupling is larger than that of the $\rm VSi_2N_4$ bilayer by one order of magnitude; we will discuss this in detail later. \begin{figure}[htbp] \includegraphics[width=7.5cm]{Fig1.pdf} \caption{\label{fig:epsart} Atomic structure of the $\rm AA^{\prime}$-stacked $\rm VSi_2N_4$ bilayer. (a) The side view and (b) the top view. Red, blue and grey balls stand for V, Si and N atoms, respectively. The unit cell is bound by dashed lines.} \end{figure} Fig. 1 shows the atomic structure of the $\rm AA^{\prime}$ stacking of the $\rm VSi_2N_4$ bilayer. Each monolayer of the bilayer structure forms a two-dimensional hexagonal lattice and consists of seven atomic layers stacked as N-Si-N-V-N-Si-N \cite{VSi2N4Monolayer}. The $\rm VN_2$ layer in the middle of the monolayer exhibits a structure similar to that of the 2H-$\rm MoS_2$ monolayer \cite{2012Valley}, i.e., each V atom has six neighboring N atoms forming a trigonal prism. Two SiN layers, with a buckled honeycomb lattice, are located on the top and bottom sides of the $\rm VN_2$ layer. When two $\rm VSi_2N_4$ monolayers are stacked together, the stacking configuration is determined by the relative in-plane positions of the $\rm VN_2$ layers from the two monolayers. For the $\rm AA^{\prime}$ stacking, the V atomic layers from the two $\rm VN_2$ layers coincide with each other, and the N atoms from the upper (lower) $\rm VN_2$ layer are superposed onto the centers of the hexagonal rings of the lower (upper) $\rm VN_2$ layer. For the $\rm VSi_2N_4$ bilayer, the in-plane lattice constant of its hexagonal lattice is computed to be 2.89 $\rm \AA$. The thickness of each monolayer is 6.87 $\rm \AA$, and the van der Waals gap between the two monolayers is 2.98 $\rm \AA$. Each V atom contributes a magnetic moment of 1.2 $\rm \mu_B$, while the magnetic moments of the other atoms are orders of magnitude smaller, consistent with previous results for the $\rm VSi_2N_4$ monolayer \cite{VSi2N4Monolayer}. The above computed structural and magnetic parameters are insensitive to the interlayer magnetic order. \subsection{Valley and spin properties in electronic structures} \begin{figure}[htbp] \includegraphics[width=8cm]{Fig2.pdf} \caption{\label{fig:epsart} Electronic band structures of the $\rm AA^{\prime}$-stacked $\rm VSi_2N_4$ bilayers. (a) The interlayer antiferromagnetic coupling and (b) the ferromagnetic one. The red solid circles and blue hollow circles indicate the atom-projected weights of the electronic states on the V atoms from the upper and lower monolayers, respectively. The valence band maximum is set to zero energy. The insets schematically show the interlayer magnetic order.} \end{figure} We then investigate the band structures and the associated valley-spin properties of the $\rm AA^{\prime}$-stacked $\rm VSi_2N_4$ bilayers with interlayer antiferromagnetic and ferromagnetic couplings, which are shown in Figs. 2(a) and (b) with atomic projections and schematic depictions of the interlayer magnetic orders. In the electronic band structure of the interlayer antiferromagnetically coupled bilayer, all bands are doubly degenerate at each crystal wave vector $\bold k $. There are two local, unequal valence band maxima (conduction band minima) at the highly symmetric $K_+$ and $K_-$ points of the hexagonal Brillouin zone.
That is, a pair of inequivalent valleys appears, with energy splittings for both the valence and conduction bands. According to the definitions in the above section, the valley splittings have values of 64.1 meV and -1.6 meV for the valence and conduction bands, respectively, and the signs of the valley splittings can be changed by reversing the magnetization directions of both monolayers, similar to the case of the $\rm VSi_2N_4$ monolayer \cite{VSi2N4Monolayer}. Moreover, the direct band gaps are also unequal at the $K_+$ and $K_-$ valleys, with magnitudes of 357.9 meV and 423.5 meV, respectively. Since the global valence band maximum and conduction band minimum are both located at $K_+$, the $\rm VSi_2N_4$ bilayer is a direct-band-gap semiconductor. According to the atomic projections in Fig. 2(a), the electronic bands at the $K_{\pm}$ valleys are dominantly contributed by the $ d $ orbitals of the V atoms. Each pair of doubly-degenerate bands is plotted with red solid circles and blue hollow circles, corresponding to the contributions from the V atoms of the upper and lower monolayers, respectively. Further considering the spin orientation, as illustrated by the spin-resolved band structures in Fig. S2 of the SM, it is found that the bands contributed by different monolayers exhibit opposite spins. This leads to spin-layer locking and indicates that the doubly-degenerate bands also correspond to opposite spins. The spin degeneracy arises from the invariance of the magnetic system under the combined operations of spatial inversion ($\mathcal{P}$) and time reversal ($\mathcal{T}$). The $\mathcal{PT}$ symmetry relates the electronic states with opposite spins at a given $\bold k $ point and ensures that they have the same energy \cite{13PANS}. In contrast to the valley splittings of the interlayer antiferromagnetic coupling, there are two degenerate valleys at the $K_{\pm}$ points for the $\rm AA^{\prime}$-stacked bilayer with the interlayer ferromagnetic coupling. Owing to the valley degeneracy, the direct band gaps at $K_{\pm}$ are equal, with a magnitude of 357.7 meV, and the bilayer is still a direct-band-gap semiconductor. Moreover, the electronic bands are no longer doubly degenerate: two valence bands and two conduction bands appear at each valley within the energy range of Fig. 2(b). The splitting between the first valence (conduction) band near the Fermi level and the second one is 64.1 meV (1.6 meV). The atomic projections in Fig. 2(b) further demonstrate that for the $K_+$ ($K_-$) valley, the first (second) valence and conduction bands with red solid circles are mainly contributed by the V atom from the upper monolayer, while the V atom from the lower monolayer dominantly contributes to the second (first) valence and conduction bands with blue hollow circles. Therefore, for each band, the electronic states at the $K_+$ and $K_-$ valleys have distinct distributions over the two monolayers, indicating a valley-layer coupling. Concentrating on the red fat bands contributed by the upper monolayer, valley splittings of 64.1 meV and -1.6 meV are found for the valence and conduction bands, respectively, in agreement with the splittings of the single $\rm VSi_2N_4$ monolayer \cite{VSi2N4Monolayer}. For the blue fat bands contributed by the lower monolayer, the valley splittings have the same magnitudes but opposite signs.
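As a consistency check of the numbers quoted above for the antiferromagnetic case: since the direct gap at a valley is $E^c - E^v$ there, Eq. (1) implies that the gap difference between $K_-$ and $K_+$ equals $\Delta_{\text{val}}^{v}-\Delta_{\text{val}}^{c}$. The quoted values satisfy this within their 0.1 meV rounding (a quick check in Python):
\begin{verbatim}
# Our arithmetic on the quoted values (meV):
#   gap(K-) - gap(K+) = Delta_val^v - Delta_val^c
dv, dc = 64.1, -1.6              # valley splittings, valence/conduction
gap_Kp, gap_Km = 357.9, 423.5    # direct gaps at K+ and K-
print(gap_Km - gap_Kp, dv - dc)  # 65.6 vs 65.7, equal within rounding
\end{verbatim}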
Moreover, the spin projections in Fig. S2 of the SM show that, for the interlayer ferromagnetic coupling, the bands contributed by different monolayers have the same spin. The degenerate valleys with a single spin are ensured by the spatial inversion symmetry of the bilayer system, which relates electronic states with the same spin at opposite crystal wave vectors. The $\rm AA^{\prime}$-stacked $\rm VSi_2N_4$ bilayers with interlayer antiferromagnetic and ferromagnetic couplings thus exhibit distinct valley and spin degeneracies. Besides, there is spin-layer coupling in the antiferromagnetic order and valley-layer coupling in the ferromagnetic order. Therefore, selecting a layer is likely to lead to valley or spin polarization. In the following, we investigate the role of an applied vertical electric field in tuning the electronic band structures of the $\rm VSi_2N_4$ bilayers. \subsection{The roles of applied electric field} We first study the effects of the electric field on the antiferromagnetically coupled bilayer, which has valley splitting and spin degeneracy. When adding a positive electric field that points upwards, each pair of degenerate bands in Fig. 2(a) is split into two non-degenerate bands with atomic contributions from different monolayers and opposite spins, as shown in Fig. 3(a). Specifically, the first (second) valence band generated by the splitting is contributed by only the upper (lower) monolayer with up (down) spin at both valleys, and the first (second) conduction band has the same atomic contribution and spin orientation as the second (first) valence band. The first valence and conduction bands thus have opposite spins at each valley, indicating that the bilayer is a bipolar magnetic semiconductor \cite{BMS}. The spin splittings above arise from the upward shift of the spin-up red bands in Fig. 3(a) with respect to the spin-down blue bands, as also demonstrated schematically in Fig. 3(b). The relative band shift is due to the electric potential difference between the upper and lower monolayers induced by the applied electric field, together with the spin-layer coupling. From the viewpoint of symmetry, it is the electric field that breaks the $\mathcal{PT}$ symmetry and the associated spin degeneracy. \begin{figure}[htbp] \includegraphics[width=9cm]{Fig3.pdf} \caption{\label{fig:epsart} Electronic properties of the interlayer antiferromagnetically coupled bilayer under an applied electric field. (a) The calculated atom-projected band structure with an electric field of 0.5 V/nm. (b) Schematic depiction of the band shift. The black dashed lines and red/blue solid lines represent the degenerate bands and the split bands at the $K_\pm$ valleys, before and after adding the electric field, respectively. The upward arrow indicates the direction of the electric field. (c) The valley and spin splittings as functions of the electric field. While the valley splitting is that of the valence band, the spin splitting corresponds to the valence band at the $K_+$ valley.} \end{figure} \vspace{0.3cm} Moreover, the evolution of the spin and valley splittings with the electric field is shown in Fig. 3(c). As the strength of the electric field is enhanced, the spin splittings increase linearly for both the valence and conduction bands at the two valleys, at the same rate of about 18.2 meV per 0.1 V/nm. Correspondingly, the direct band gap at the two valleys decreases linearly with the electric field at the same rate.
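To put this rate in perspective, we note a rough estimate (ours; it assumes the V atoms sit midway through each monolayer, so that the V-plane separation is the monolayer thickness plus the van der Waals gap, $d \approx 6.87 + 2.98 = 9.85$ $\rm \AA$): the unscreened interlayer potential difference per field increment would be
\begin{equation}
e E d \approx e \times 0.1~\mathrm{V/nm} \times 0.985~\mathrm{nm} \approx 98.5~\mathrm{meV},
\end{equation}
so the computed shift rate of 18.2 meV per 0.1 V/nm corresponds to an effective screening of the applied field by a factor of roughly five.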
In contrast, the valley splittings of the bilayer do not change with the electric field, since the orbital component of the valence/conduction bands at the two valleys comes from the same monolayer, and the valley splitting of a single monolayer is insensitive to the applied electric field. Therefore, the applied electric field produces spin splittings but has no influence on the valley splittings. Besides, the sign of the spin splittings is determined by the direction of the electric field: when applying a negative electric field that points downwards, the induced spin splittings are reversed compared with the case of the positive electric field. \begin{figure}[htbp] \includegraphics[width=9cm]{Fig4.pdf} \caption{\label{fig:epsart}Electronic properties of the interlayer ferromagnetically coupled bilayer under an applied electric field. (a) and (b) The atom-projected band structures with electric fields of 0.2 V/nm and 0.5 V/nm, respectively. (c) The band and valley splittings as functions of the electric field. The band splitting, $\Delta_{\text {band }}^{v, K_+}$, corresponds to the relative shift between the first and second valence bands at the $K_+$ valley, while the valley splitting is that of the valence band.} \end{figure} On the other hand, when the electric field is introduced into the ferromagnetically coupled bilayer with the valley degeneracy, the red bands contributed by the upper monolayer move upwards with respect to the blue bands contributed by the lower monolayer, similar to the case above, as shown in Figs. 4(a) and (b) for two representative electric fields. Since the degenerate bands of the two valleys in Fig. 2(b) correspond to different monolayers of the pristine bilayer, i.e., the valley-layer coupling, the band shifts under the electric field result in a valley splitting. The size of the valley splitting depends on the strength of the electric field. We first consider the valley splitting of the valence band, as shown in Fig. 4(c). When the positive electric field is within the range 0-0.35 V/nm, the splitting is enhanced linearly with the electric field, up to 64.1 meV. As the electric field increases further, the splitting remains unchanged. The electronic bands in Figs. 4(a) and (b) correspond to these two ranges of field strength, respectively. The change of trend is due to a critical electric field of 0.35 V/nm that gives rise to a band exchange between the first and second valence bands at the $K_-$ valley. Beyond the critical electric field, the first valence bands at the two valleys are both contributed by the same monolayer. As a result, the valley-layer coupling is broken and the valley splitting of the valence band no longer varies with the electric field. The calculated critical electric field can be well estimated by dividing the aforementioned band splitting between the first and second valence bands of the pristine bilayer by the relative shift rate of the bands from the two monolayers under the electric field. Given that the band splitting above is 64.1 meV and the band shift rate is 18.2 meV per 0.1 V/nm, the estimated critical electric field is also 0.35 V/nm. Moreover, reversing the electric field changes the sign of the valley splitting. Considering further the conduction band, its valley splitting follows a similar trend to that of the valence band. However, the critical electric field for the conduction band is much smaller, with a value of 9 $\times$ $10^{-3}$ V/nm.
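Explicitly, both critical fields follow from dividing the zero-field band splittings by the relative shift rate of 18.2 meV per 0.1 V/nm:
\begin{align}
E_c^{v} &= \frac{64.1~\mathrm{meV}}{182~\mathrm{meV/(V/nm)}} \approx 0.35~\mathrm{V/nm}, \\
E_c^{c} &= \frac{1.6~\mathrm{meV}}{182~\mathrm{meV/(V/nm)}} \approx 9 \times 10^{-3}~\mathrm{V/nm}.
\end{align}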
This is because the splitting between the first and second conduction bands (1.6 meV) is much smaller than that of the valence bands (64.1 meV). Besides, owing to the valley splitting induced by the electric field, there is a transition from a direct-band-gap semiconductor to an indirect-band-gap one. For a positive (negative) electric field, the valence band maximum and conduction band minimum are located at the $K_+$ ($K_-$) and $K_-$ ($K_+$) valleys, respectively. \section{Discussions} The distinct band structures and their electric control in the $\rm AA^{\prime}$-stacked bilayers lead to various Hall effects. For the interlayer antiferromagnetic coupling, moderate carrier doping will populate only one valley, due to the valley splitting in Fig. 2(a). The spin degeneracy ensured by the $\mathcal{PT}$ symmetry gives rise to opposite anomalous Hall currents generated by the carriers with opposite spins from the occupied valley, owing to their opposite Berry curvatures. That is, a spin Hall effect will appear when the vertical electric field is absent, similar to the case of honeycomb lattices with N{\'e}el antiferromagnetism \cite{13PANS}. For the interlayer ferromagnetic coupling, the valley degeneracy in Fig. 2(b) results, upon carrier doping, in opposite anomalous Hall currents from different valleys but with the same spin, indicating a valley Hall effect. Therefore, the bilayer with the interlayer ferromagnetic order can be regarded as a single-spin version of a two-valley semiconductor. On the other hand, after applying a vertical electric field, neither valley degeneracy nor spin degeneracy is present in the antiferromagnetically/ferromagnetically coupled bilayer, which leads to a single anomalous Hall current with definite valley and spin indices. Moreover, the indices can be selectively switched by applying an opposite electric field or reversing the magnetization direction. \begin{figure}[htbp] \includegraphics[width=9cm]{Fig5.pdf} \caption{\label{fig:epsart}Electronic properties of the AB-stacked bilayer with the interlayer antiferromagnetic order. (a) and (b) The atom-projected band structures without any electric field and with an electric field of 1.4 V/nm, respectively. (c) The valley and spin splittings as functions of the electric field. While the valley splitting is that of the valence band, the spin splitting corresponds to the valence band at the $K_+$ valley.} \end{figure} \setlength{\belowcaptionskip}{2cm} Besides the aforementioned $\rm AA^{\prime}$ stacking, the other stable stackings also exhibit different valley and spin properties. For the second most stable configuration, the AB stacking, the interlayer antiferromagnetic coupling gives a band structure with both spin splittings and valley splittings, as shown in Fig. 5(a). Under an applied electric field, the valley and spin splittings are reversed, as shown in Fig. 5(b). Therefore, spin-polarized and valley-polarized anomalous Hall currents can appear in this configuration, and the polarization can be changed by the electric field. Fig. 5(c) further demonstrates the evolution of the splittings as functions of the electric field. When applying an electric field in the range 0.57-1.25 V/nm, the valley splitting of the valence band increases linearly from -64.1 meV to +64.1 meV. With a positive electric field smaller than 0.57 V/nm (larger than 1.25 V/nm), the valley splitting is insensitive to the electric field, with a value of -64.1 meV (64.1 meV). A negative electric field likewise has no influence on the valley splitting.
Therefore, the sign of the valley splitting can be reversed by a unidirectional electric field, a third kind of trend, in contrast to the cases of the $\rm AA^{\prime}$-stacked bilayer with the two kinds of interlayer magnetic coupling in Figs. 3(c) and 4(c). Besides, the valley splitting of the conduction band exhibits a similar trend, but at a much smaller electric field. Different from the valley splittings, the spin splittings change linearly over the entire range of electric fields considered here. Moreover, the interlayer antiferromagnetically coupled AA stacking has a band structure similar to that in Fig. 2(b), except that its bands contributed by different monolayers also correspond to different spins. Besides the valley-layer coupling of Fig. 2(b), there is thus also spin-layer coupling and valley-spin coupling for the AA stacking. More details on the different stackings are summarized in the SM. The intriguing physics above in $\rm VSi_2N_4$ bilayers generalizes to other magnetic van der Waals bilayers, such as the bilayers of $\rm VSi_2P_4$ \cite{VSi2P4}, $\rm CrSi_2N_4$ \cite{CrSi2N4}, $\rm CrSi_2P_4$ \cite{CrSi2N4}, 2H-$\rm VS_2$ \cite{VS2bilayer}, 2H-$\rm VSe_2$ \cite{2VSe2}, and so on. Taking the $\rm VSi_2P_4$ bilayer as another example, we also computed its atomic and electronic properties. The $\rm VSi_2P_4$ bilayer has characteristics similar to those of the $\rm VSi_2N_4$ bilayer, including the most stable configuration, the valley and spin degeneracies, and the responses to an applied electric field, as given in the SM. Compared with $\rm VSi_2N_4$, the $\rm VSi_2P_4$ bilayer has a much larger interlayer magnetic coupling, as indicated by an energy difference of 0.4 meV between the antiferromagnetically and ferromagnetically coupled bilayers. The band shift of the $\rm VSi_2P_4$ bilayer is also approximately linear in the strength of the electric field, but with a smaller rate, about half that of the $\rm VSi_2N_4$ bilayer. \section{Conclusion} In summary, by first-principles calculations we have demonstrated rich valley-spin physics in $\rm VSi_2N_4$ bilayers and its electric control. For the most stable $\rm AA^{\prime}$ stacking, the electronic band structure of the interlayer antiferromagnetic coupling exhibits spin degeneracy and valley splitting, while valley degeneracy and band splitting appear for the interlayer ferromagnetic coupling. As a result, the two kinds of interlayer magnetic order give rise to spin and valley Hall effects, respectively. Besides, according to the atomic projections, they exhibit spin-layer coupling and valley-layer coupling, respectively, enabling electric control of spin and valley by selectively addressing the layer degree of freedom. Under an applied vertical electric field, both the spin degeneracy in the antiferromagnetic order and the valley degeneracy in the ferromagnetic order are lifted, and the corresponding splittings are highly tunable in both magnitude and sign. The resulting single anomalous Hall current can selectively arise from either valley with either spin. Moreover, the other stacking orders provide more choices of valley and spin degeneracies, and more material realizations can be expected among the many magnetic bilayers with inequivalent valleys.
These findings make full use of the valley and spin degrees of freedom with the help of the layered structure and the associated electric control, and offer various possibilities for advanced spintronic and valleytronic devices. \begin{acknowledgments} We acknowledge financial support from the National Natural Science Foundation of China Grant 11904173 and the Jiangsu Specially-Appointed Professor Program. \end{acknowledgments}
{ "timestamp": "2022-09-20T02:23:51", "yymm": "2209", "arxiv_id": "2209.08730", "language": "en", "url": "https://arxiv.org/abs/2209.08730" }
\section{Introduction} \label{sec:introduction} Resilience engineering is a process performed by many software and site reliability engineers today to ensure that their application is resilient to faults. One specific class of faults that is a concern for the developers of microservice applications is \textit{partial failures}: where the unavailability of one or more dependent services can render an entire application unusable. One way that engineers anticipate this inevitable partial failure is by implementing \textit{fallbacks}: where, in the event of a failure of one service, a different service can compensate for that failure. For example, by replacing content that is unavailable due to failure with different content on a user's homepage, as done by Netflix~\cite{10.1145/3472883.3487005}. Therefore, an important component of resilience engineering is ensuring that this fallback behavior works correctly. There are several different approaches for ensuring that fallback behavior works correctly. For example, chaos engineering~\cite{chaos-monkey, 10.1109/ICSE-SEIP.2019.00012, 10.1145/2987550.2987555} identifies these issues using stochastic, coarse-grained fault injection performed in the live, production environment that affects actual end-user traffic. Complementary to chaos engineering, request-level fault injection (RLFI)~\cite{10.1145/3472883.3486986} enables resilience engineering by introducing failures at the level of an individual remote procedure call (RPC). RLFI enables Service-level Fault Injection Testing{} (SFIT{})~\cite{10.1145/3472883.3487005}, a systematic search technique for microservice applications that exhaustively covers the space of inter-service RPC failures. SFIT{} starts with an existing functional test that exercises application behavior when no faults are present. SFIT{} then re-executes this test repeatedly, each time choosing one or more RPCs to fail, until all faults, and all combinations thereof, have been tested. RLFI-based techniques require the ability to uniquely identify a dynamic instance of an RPC in order to target it for fault injection. Service-level Fault Injection Testing{} additionally requires that these identifiers are deterministic across multiple test executions to support systematic search. Existing RLFI implementations~\cite{10.1145/3472883.3486986, 10.1145/3472883.3487005} use a combination of RPC signatures and invocation counts per signature to do this identification. However, they fail to account for common programming patterns found in microservice applications: for example, loops containing RPCs, branching control flow statements that contain RPCs, and the use of concurrency primitives for invoking RPCs, where scheduling nondeterminism is possible. These limitations may result in an \textit{unsound} analysis, as faults may be injected on an incorrect inter-service RPC. In the specific case of exhaustive search, these limitations may result in an \textit{incomplete} analysis, where valid faults are not explored. In this paper, we present \textit{distributed execution indexing{}} (DEI{}), a technique for uniquely identifying an inter-service RPC for RLFI and, in the case of Service-level Fault Injection Testing{}, doing so deterministically across multiple test executions. To see why such an indexing scheme is important and challenging to devise, consider an example of a booking service for cinemas.
As part of the process of retrieving a user's reservations, the \emph{users} service contacts the \emph{bookings} service and the \emph{movies} service to retrieve information on both the user's bookings and detailed information on each booked movie. A trivial way to identify RPC invocations would be to use the source and destination service names, as a pair of strings, allowing the system to distinguish between the two different RPCs: (\emph{users}, \emph{bookings}) vs. (\emph{users}, \emph{movies}). Of course, this will not be sufficient if there is more than one RPC between the same pair of services in a given end-to-end execution; e.g., the \emph{users} service may invoke more than one method of the \emph{bookings} service, and perhaps repeat this for multiple users. Contemporary RLFI systems use RPC signatures to identify a particular RPC instance to inject a failure on. Signatures include the destination service, the invoked method name, and its parameters. \textsc{3MileBeach}{}~\cite{10.1145/3472883.3486986} additionally accumulates the entire causal history of RPCs that are invoked during an execution, and uses invocation counts for each identifier to distinguish between multiple RPC invocations with the same signature. Further extending our cinema example, this would alter our identifiers to also contain the list of all previously invoked RPCs: if the \emph{movies} service was invoked after \emph{bookings}, we can consider the identifier of the RPC to \emph{movies} to contain the identifier of the preceding RPC to \emph{bookings} as well. As such, the identifiers are path-sensitive. While this approach accounts for multiple invocations with the same signature, it fails to account for branching conditional statements where an RPC with the same signature is invoked on multiple branches of the conditional. \textsc{Filibuster}{}~\cite{10.1145/3472883.3487005} also uses RPC signatures for identification, but keeps track of the call stack at the time of invocation, as well as an invocation counter associated with each stack state. This serves to distinguish RPCs in loops and branching conditional statements. However, \textsc{Filibuster}{} is designed specifically for single-threaded Python microservices that communicate using HTTP for RPC; its identification scheme fails to account for concurrency and scheduling nondeterminism, where multiple RPCs may be invoked between the same pair of services with the same call stack state at the same time, and their ordering cannot be controlled. The problem of scheduling nondeterminism manifests itself in RLFI through the permutation of RPCs: concurrent RPCs with the same signature can be executed in different orders across multiple executions, thereby permuting the identifiers associated with each. One solution for deterministic identifier assignment is to explicitly control thread scheduling. Unfortunately, this is not a feasible approach for large microservice applications because of two core issues. First, these applications are typically implemented in microservice frameworks, and rely on RPC frameworks, where threads are reused for performance. Second, these applications are typically implemented in multiple languages across many different services, where using a centralized thread scheduler is not practical.
\begin{mdframed}[skipabove=0.1cm, skipbelow=0.1cm, nobreak] \noindent \textbf{Key observation and insight.} Our solution leverages the following \textit{key observation}: while microservice applications may issue concurrent RPCs with the same signature, these concurrent RPCs will rarely contain the same payload: the precise arguments provided to that RPC. Therefore, our \textit{key insight} is that the inclusion of the RPC payload in each RPC's identifier enables the deterministic assignment of identifiers to each RPC without requiring control of thread creation or the thread scheduler. We name our formulation \emph{distributed execution indexing}, as it extends the concept of {execution indexing}~\cite{10.1145/1379022.1375611} to distributed systems. \end{mdframed} Empirically validating our key observation about concurrent RPCs is not a straightforward task, due to the lack of research corpora that contain microservice applications. In fact, a recently constructed corpus of microservice applications containing resilience bugs~\cite{10.1145/3472883.3487005} \textit{does not contain a single example} where an individual service issues concurrent RPCs. To address this limitation, we recently established a working relationship with an industrial partner that runs a large microservice application with over 500 services in order to validate our technique at scale. We plan to use this partnership to perform the empirical validation needed to justify our key observation and key insight. Finally, we present a partial synthetic evaluation of DEI{} that supports function indirection, looping, branching control flow, and concurrency, performed using the \textsc{Filibuster}{} application corpus. In the process of this evaluation, we extend the \textsc{Filibuster}{} corpus with several new examples demonstrating problematic patterns that will be valuable for future researchers in microservice resilience testing. Along with this contribution to the \textsc{Filibuster}{} corpus, we provide an open-source implementation of DEI{} and an extension of \textsc{Filibuster}{} that supports both gRPC and HTTP, in Java. The contributions of this paper are the following: \begin{itemize} \item \textbf{a formal description of DEI{}.} (\cref{sec:dei}) \\ We present a formal definition of DEI{}, a technique for identifying inter-service RPCs in microservice applications where these identifiers establish a correspondence across multiple executions. We demonstrate that existing RLFI techniques use special cases of DEI{}. \item \textbf{an implementation of DEI{}}. (\cref{sec:implementation}) \\ We provide an open-source implementation of DEI{} in Java for the JVM, integrated directly into a new \textsc{Filibuster}{} client library for Java that supports both gRPC and HTTP (using Armeria). We discuss challenges in both the implementation of DEI{} and its integration into \textsc{Filibuster}{}. \item \textbf{an evaluation of DEI{}}. (\cref{sec:evaluation}) \\ We present a preliminary, synthetic evaluation of DEI{} using the \textsc{Filibuster}{} open-source microservice application corpus. We also contribute two new examples that demonstrate problematic programming patterns for existing RLFI techniques to this open-source microservice application corpus.
\end{itemize} \section{Microservices and Fault Injection} \label{sec:fi} In order to demonstrate the complexity of microservice applications and the fault-injection methodology that is required to guarantee resilience, we consider the microservice application presented in Figure~\ref{fig:fi:example-application}. In Figure~\ref{fig:fi:example-application}, there are five services. Service A receives requests from the end user and issues RPCs to B, C, and D before returning a response to the end user; B issues an RPC to E before returning a response to A. As the developers of these applications assume that any one of A's direct dependencies may be unavailable at any point, they design each service in a manner where default responses for each RPC are used in the event that any one of the RPCs should fail; they do the same for Service B's direct dependency as well. The question that remains is: \textit{does this fallback behavior function as expected?} \begin{figure} \includegraphics[width=\linewidth]{app.png} \caption{Example microservice application with instrumentation that enables SFIT{}.} \label{fig:fi:example-application} \end{figure} One approach for verifying that this fallback behavior works as expected is through the use of test mocks. This approach requires that developers, for each point in the fault space, write a functional test to exercise the application under fault along with the mock necessary to simulate that fault. The fault space is large: at a minimum, developers must first consider each individual RPC and all of the exceptions that can be thrown by that RPC; then, they must explore all combinations. More often than not, these tests are not written. We believe this to be a result of the effort required and the complexity involved in running an entire microservice application locally while mocking individual RPCs for failure. Somewhat ironically, it is these very tests that are most important to system reliability: recent research~\cite{186171} investigating critical errors in open-source distributed data systems identified that the error handling code responsible for preventing these errors was either missing or never tested. While distributed data systems are not microservice applications, in some ways they are simpler and easier to test: they contain fewer distinct services (these services are typically deployed in replica sets), are homogeneous in their implementation language, and usually have well-defined behavior under failure, as they typically implement a distributed protocol. In contrast, microservice applications typically have hundreds of different services, all with different behavior and developed by independent teams, where a single service can rely on multiple, different distributed data stores for storage of application state. Further complicating matters, behavior under failure may not be well-defined, as developers working on individual services may not realize a certain failure is possible when it is only triggered by a certain combination of simultaneous faults across different services. We believe that if developers are not writing these tests for distributed data systems, they most likely are not writing them for microservice applications either. \subsection{Service-level Fault Injection Testing} Service-level Fault Injection Testing{} (SFIT{}) enables a systematic search for microservice applications using RLFI that exhaustively covers the space of inter-service RPC failures.
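Before detailing each phase, the following sketch (ours, for illustration only; \texttt{run\_test}, \texttt{observed\_rpcs}, and \texttt{throwable\_faults} are hypothetical helpers standing in for the machinery described below) outlines the overall search loop:

\begin{minted}[obeytabs=true,tabsize=4,fontsize=\footnotesize]{python}
# Simplified sketch of the SFIT search loop (illustration only).
from itertools import combinations

def sfit_search(functional_test):
    # Fault-free execution: discover which RPCs the test exercises.
    baseline = run_test(functional_test, faults=set())

    # Candidate faults per RPC come from the static analysis phase.
    faults = []
    for rpc in observed_rpcs(baseline):
        for fault in throwable_faults(rpc):
            faults.append((rpc, fault))

    # Re-execute the test under every combination of faults.
    for k in range(1, len(faults) + 1):
        for combo in combinations(faults, k):
            run_test(functional_test, faults=set(combo))
\end{minted}

In the actual search, RPCs that are only observed under fault (\textit{e.g.,} fallback RPCs) are folded back into the fault space; we omit this detail here.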
\paragraph{Static Analysis.} SFIT{} starts by identifying the faults that each service in an application should be tested for. This constructs the fault space that will be exhaustively searched. To do this, SFIT{} statically analyzes each framework that is used to issue RPCs in order to determine the throwable exceptions at each call site. For example, both HTTP and gRPC frameworks throw exceptions when the remote host is unreachable. Then, the implementation of each service is also analyzed to identify any response types, specific to that service, that are used to indicate failure. For example, an HTTP service can indicate failure through either 400- or 500-series error codes; for gRPC, specific error codes can indicate different failures, such as \texttt{FAILED\_PRECONDITION}. \paragraph{Instrumentation.} SFIT{} relies on instrumented versions of each RPC framework. We depict this in Figure~\ref{fig:fi:example-application}. This enables SFIT{} to identify RPC invocations, identify where those RPCs are received, propagate required metadata between inter-service RPCs, and perform fault injection. This is orchestrated by a centralized test server. \paragraph{Dynamic Analysis.} For a test oracle, SFIT{} uses an existing functional test suite for the microservice application. For each functional test, SFIT{} will first run an initial execution where no faults are injected. It will then repeatedly re-execute the test, injecting different combinations of faults, until the fault space has been exhausted. Using Figure~\ref{fig:fi:example-application} to demonstrate, the RPC between B and E would first be tested for all possible faults identified through our static analysis. Then, the RPCs between A and its direct dependencies B, C, and D would be tested for those faults as well, including all possible combinations thereof. \begin{mdframed}[skipabove=0.1cm, skipbelow=0.1cm, nobreak] \cmark~The exhaustive search performed by this dynamic analysis requires that dynamic invocations of RPCs are identified both uniquely and deterministically across all test executions in order to identify when the exhaustive search is complete. \end{mdframed} \paragraph{Test Adaptation.} As faults are injected, assertions in the test oracle will fail. This is \textit{expected}, as the test only encodes application behavior when no faults are present. Therefore, developers will be prompted to use conditional assertions to encode the desired behavior under failure; effectively encoding deviations from the fault-free behavior depending on where the fault was injected. \paragraph{Dynamic Reduction.} Finally, to keep the technique scalable and avoid executing redundant tests, SFIT{} leverages a property of microservice applications called \textit{service encapsulation}. Service encapsulation states simply that if there are two services, Services X and Y, where X invokes RPCs on Service Y, any failure of Y is only visible to the callers of X as a failure or success of X itself: in this case, X is said to \textit{encapsulate} Y. This property holds true as long as the two services (1) do not share state in the same database, (2) are deterministic in their responses for a given test, and (3) do not contain data dependencies on previous failures. These properties are derived from, and in line with, proper microservice design and testing tenets.
Using Figure~\ref{fig:fi:example-application} to demonstrate, the test execution where faults are injected in Service D and Service E simultaneously is considered redundant with this optimization. The reason is that Service B encapsulates Service E. The only way that Service A can encode conditional logic on Service D and Service E failing simultaneously is by using one of the responses from B, through which E's failure surfaces, to determine that. However, given that SFIT{} has already tested the execution where B and D have simultaneously failed, it has already observed this outcome. \begin{mdframed}[skipabove=0.1cm, skipbelow=0.1cm, nobreak] \cmark~Similar to the requirements of dynamic analysis, this optimization of the search also requires unique and deterministic identification of dynamic invocations of RPCs. \end{mdframed} \section{Distributed Execution Indexing} \label{sec:dei} At the root of the SFIT{} analysis is the need to uniquely (and deterministically) identify each RPC: for example, in Figure~\ref{fig:fi:example-application}, the specific identification of the RPC issued between B and E. In order for SFIT{} to know when the systematic search is complete, this specific RPC --- between B and E --- must be identified identically across all test executions. While this may seem rather trivial, as it only requires the identification of a single edge in a microservice graph, it becomes more complex when multiple RPCs between the same pair of services may exist in the same test execution. The programming patterns that cause this behavior are rather commonplace: loops, branching, function indirection, and concurrency. We need to define what it means for the SFIT{} analysis to be correct. In terms of correctness, we will concern ourselves with only the dynamic component of SFIT{}, as it is responsible for the assignment of unique (and deterministic) identifiers to each RPC. As with any analysis, it should ideally be both sound and complete. \textbf{Soundness} is violated when a fault is injected on an RPC invocation where it was not intended, or when a fault fails to be injected where it was intended. \textbf{Completeness} is violated when a fault injection required for exhaustive search is missed. \subsection{Signatures Are Too Coarse-Grained} Consider one simple way of identifying RPCs, discussed in the introduction: the RPC's signature. We formally define an RPC signature as follows: \begin{definition} \label{def:signature} A \textit{signature} is a triple $(m, f, a)$ where \begin{itemize}[] \item $m$ is the module or class name of the RPC stub; \item $f$ is the method or function name; and \item $a$ is the parameter names and types. \end{itemize} With gRPC, the class name and method map directly; parameters are the parameter types and names for the gRPC endpoint. With HTTP, the URI and HTTP method can be combined to form the signature, as it contains the target service, method name, and parameter names and types, which are assumed to be \texttt{String}. \end{definition} Let us see how the RPC signature is too coarse-grained to uniquely (and deterministically) identify an RPC, and may result in an unsound or incomplete analysis.
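For concreteness, Definition~\ref{def:signature} can be realized as a simple value type; the sketch below is ours, in the pseudocode style of our figures, and the field names are illustrative:

\begin{minted}[obeytabs=true,tabsize=4,fontsize=\footnotesize]{python}
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Signature:
    module: str                          # m: module or class name of the RPC stub
    function: str                        # f: method or function name
    params: Tuple[Tuple[str, str], ...]  # a: (parameter name, parameter type) pairs

# Both echo RPCs in the example below collapse to this single value:
sig = Signature("B", "echo", (("s", "String"),))
\end{minted}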
\begin{figure} \begin{minted}[obeytabs=true,tabsize=4,linenos,numbersep=-10pt,fontsize=\footnotesize]{python}
@service_a.method("helloworld")
def service_a_helloworld():
    hello = echo("Hello")
    world = echo("World")
    s = hello + " " + world
    return s

def echo(s : String):
    try:
        res = rpc(service_b, "echo", s)
        log_success(res)
        return res
    except Exception as e:
        log_error(e)
        return s

@service_b.method("echo")
def service_b_echo(s : String):
    return s
\end{minted} \caption{RPC signature alone cannot distinguish between the RPCs issued on lines 3 and 4; call stack or invocation count must be combined with the signature.} \label{fig:helloworld:fallbacks} \end{figure} Consider the example in Figure~\ref{fig:helloworld:fallbacks}. In this example, we present a microservice application composed of two services, written in pseudocode. Service A exposes a single RPC endpoint, \texttt{helloworld}, which issues two RPCs to B's RPC endpoint, \texttt{echo}, before combining the responses and returning a response. In the event that Service B is down, a default response is returned by the function wrapping the RPC, \texttt{echo}, on line 8. In the case of the RPC invocation at line 10, the signature would be composed of the target service name B, the method \texttt{echo}, and the parameter \texttt{(s,String)}. In this application, the signature for both of the RPCs invoked by Service A, on lines 3 and 4, would be identical: $(\mathrm{\texttt{B}}, \mathrm{\texttt{echo}}, \mathrm{\texttt{(s,String)}})$. SFIT{} would not be able to distinguish between the first and second RPCs for systematic fault injection; that is, the RPC signature alone is too \textit{coarse-grained} for identifying a particular RPC. \subsection{Increasing Granularity: \\ Invocation Count or Call Stack} One solution for resolving the issue where identical identifiers are assigned to different RPCs is to increase the granularity of the identifiers that we assign. We examine two different ways that this could be accomplished and demonstrate that they must be used together. In the following discussion, since we are going beyond just signature-based identifiers, we assume (for ease of presentation and without loss of generality) that a service (say \texttt{A}) makes RPC invocations to only one other service (say \texttt{B}) and only a single RPC endpoint (\textit{e.g.,} \texttt{echo}) per service. Thus, we use only the \textit{invoking service name} (\textit{e.g.,} \texttt{A}) as a shorthand for an outgoing RPC from \texttt{A}, standing in for the full signature, which would contain the target service, method name, and parameters. \begin{enumerate}[] \item \textit{Invocation count.} \textsc{3MileBeach}{}~\cite{10.1145/3472883.3486986} and \textsc{Filibuster}{}~\cite{10.1145/3472883.3487005} both keep track of the number of invocations for each RPC call site in order to distinguish multiple calls to the same call site. In Figure~\ref{fig:helloworld:fallbacks}, the same RPC is invoked twice. We use the ``$|$'' symbol to indicate the invocation count of an RPC signature. For example, the identifiers $A|_1$ and $A|_2$ distinguish the \nth{1} and \nth{2} RPC invocations made from service \texttt{A} at line 10. \item \textit{Call stack.} Another approach is to increase the granularity of the identifier with some representation of the call stack. In Figure~\ref{fig:helloworld:fallbacks}, the RPC is invoked twice at line 10, however, with different calling contexts for the \texttt{echo} function (lines 3 and 4).
We use a \textit{superscript} to indicate the line number(s) corresponding to the call stack at the time of invocation. For example, the two RPC invocations in Figure~\ref{fig:helloworld:fallbacks} can be distinguished by the identifiers $A^{3,10}$ and $A^{4,10}$. \end{enumerate} For the example in Figure~\ref{fig:helloworld:fallbacks}, either invocation count or call-stack-based identification works to disambiguate the two RPCs. However, neither approach is sufficient on its own in general. A better approach is to use a \emph{combination} of invocation count \emph{and} calling context for identifying RPCs, e.g., $A^{3,10}|_1$, denoting the first invocation of an RPC from A with the calling context $(3, 10)$. To demonstrate the need for both of these terms, we refer the reader to Figure~\ref{fig:helloworld:loops}. In Figure~\ref{fig:helloworld:loops}, we present a different implementation for A; we assume that the implementation of B from Figure~\ref{fig:helloworld:fallbacks} remains the same. In this example, A's RPC endpoint \texttt{helloworld} takes, as parameters, a list of \texttt{String}s. For each \texttt{String} that is provided, an RPC is invoked on B's \texttt{echo} endpoint. In the event that the RPC to B throws an exception, the remainder of the list traversal is aborted and a final RPC is made to B using a default value, and that value is returned by A. When no exceptions are thrown, the aggregated results are joined and returned by A. \begin{figure}[t] \begin{minted}[obeytabs=true,tabsize=4,linenos,numbersep=-10pt,fontsize=\footnotesize]{python}
@service_a.method("helloworld")
def service_a_helloworld(ss : List[String]):
    rs = []
    failure = False

    for s in ss:
        try:
            r = rpc(service_b, "echo", s)
            rs.append(r)
        except Exception as e:
            failure = True
            break

    if failure:
        s = "Hello World"
        r = rpc(service_b, "echo", s)
        return r
    else:
        return rs.join(" ")
\end{minted} \caption{Signature combined with invocation count is insufficient to distinguish the \nth{2} iteration of the loop from the \nth{1} invocation of the failure handler; signature combined with call stack is insufficient to distinguish loop iterations.} \label{fig:helloworld:loops} \end{figure} Consider a functional test that invokes \texttt{helloworld} with a list containing two \texttt{String}s. For simplicity, we assume that each RPC can only throw a single runtime exception. Therefore, we must run five different executions of the test to fully exhaust the fault space. First, consider the execution where both loop iterations execute and all RPCs are successful, which we denote as a sequence of RPC invocations: $e_1: (A^8|_1, A^8|_2)$. Next, we consider the executions where the RPC throws an exception, using the $\neg$ symbol to denote a failed RPC invocation. When a fault is injected in the \nth{2} iteration of the loop, there are two cases, where the fallback RPC either completes successfully or fails: $e_2: (A^8|_1, \neg{A^8|_2}, A^{16}|_1) ,~e_3: (A^8|_1, \neg{A^8|_2}, \neg{A^{16}|_1})$. Finally, we consider the executions where the RPC throws in the \nth{1} iteration and the fallback RPC either completes successfully or fails: $e_4: (\neg{A^8|_1}, A^{16}|_1),~e_5: ( \neg{A^8|_1}, \neg{A^{16}|_1})$. Using this example and these test executions, we now examine why invocation count and call stack, each on its own in combination with the signature, are insufficient for ensuring correctness based on our criteria; they must therefore be combined. \begin{itemize}[] \item \textbf{Invocation Count Alone is Insufficient.} Consider executions $e_1$ and $e_4$.
Using this technique, $e_1: (A|_1, A|_2)$ and $e_4: (\neg{A|_1}, A|_2)$. However, $A|_2$ in $e_1$ refers to the invocation at line 8, while $A|_2$ in $e_4$ refers to the invocation at line 16. Therefore, to properly assign identifiers to these RPCs, we must increase the granularity to include the call stack that resulted in the RPC invocation. \item \textbf{Call Stack Alone is Insufficient.} In $e_1$, both requests would be assigned the same identifier: $e_1: (A^8, A^8)$. Therefore, to properly assign identifiers to these RPCs, we must increase the granularity to include the number of times each RPC invocation statement is reached. \end{itemize} \subsection{Increasing Granularity: Payload} \label{sec:dei:payload} While the addition of the call stack and invocation count to the RPC signature is sufficient for distinguishing RPC invocations in the presence of loops and function indirection, it is not sufficient in the presence of concurrency and scheduling nondeterminism. For example, consider Figure~\ref{fig:helloworld:async}, a modified version of Figure~\ref{fig:helloworld:loops}, where line 7 invokes an RPC using the \texttt{async} primitive and the results are \texttt{await}ed on line 11. In this example, the invoked RPCs execute concurrently, and their execution order is susceptible to \textit{scheduling nondeterminism}. This scheduling nondeterminism poses problems for an analysis like SFIT{}, as it may result in an unsound or incomplete analysis. Similar to before, we assume a functional test that invokes the \texttt{helloworld} RPC endpoint with two \texttt{String}s. For example, the first test execution that we run should read as follows: $e_1: (A^7|_1, A^7|_2)$, where $A^7|_1$ is the RPC invoked in the \nth{1} iteration of the loop and $A^7|_2$ is the RPC invoked in the \nth{2} iteration of the loop. However, on repeated execution of this test through deterministic replay, or when performing exhaustive search, scheduling nondeterminism may result in the \nth{2} iteration of the loop being assigned $A^7|_1$, if the \nth{2} block happens to execute first. \begin{figure} \begin{minted}[escapeinside=||,obeytabs=true,tabsize=4,linenos,numbersep=-10pt,fontsize=\footnotesize]{python}
@service_a.method("helloworld")
def service_a_helloworld(ss : List[String]):
    rs = []

    for s in ss:
        r = |\colorbox{yellow}{async}| {
            return rpc(service_b, "echo", s)
        }
        rs.append(r)

    |\colorbox{yellow}{awaitAll}| rs
    return rs.join(" ")
\end{minted} \caption{Scheduling nondeterminism can permute the assignment of identifiers. In this case, $A^7|_1$ can refer to the RPC invocation from either the \nth{1} or \nth{2} loop iteration.} \label{fig:helloworld:async} \end{figure} Model checkers for distributed systems~\cite{267763, 10.5555/2685048.2685080, 10.1145/3302424.3303986} also face the problem of scheduling nondeterminism. However, these model checkers were originally designed for identifying concurrency bugs before later being extended for failure testing (\textit{e.g.,} message omission) and therefore rely on control of the thread scheduler. Even previous work on using execution indexes in multithreaded programs to detect deadlocks (\textit{e.g.,} \textsc{DeadlockFuzzer}~\cite{10.1145/1543135.1542489}) relies on specialized compilation during testing for scheduler control.
Controlling the scheduler is unrealistic for large microservice applications, where \textit{(a)} it may not be possible to run all services on a single machine during testing, and \textit{(b)} services are implemented in a number of different languages. Therefore, we set out to identify a solution that does not require control of the thread scheduler. We examine three different ways that this could be accomplished and demonstrate that none are sufficient. \begin{enumerate}[] \item \textit{Cloning per block.} One approach is to \textit{clone} the state that is used to generate identifiers for each asynchronous block. This would ensure that each block would count invocations for each RPC signature, and associated call stack, independently. However, this approach does not work. In Figure~\ref{fig:helloworld:async}, this technique would result in identical identifiers for each of the RPCs executed during the loop: $(A^7|_1, A^7|_1)$. \item \textit{Encode thread creation.} \textsc{DeadlockFuzzer}~\cite{10.1145/1543135.1542489}, a system for detecting deadlocks in concurrent programs using execution indexes, proposed an approach where thread creation is included in the identifier. This approach does not work in the case of asynchronous blocks, as they may execute on an existing thread pool provided by the system or framework, where the threads have already been created. \item \textit{Cloning per thread.} If we were to follow this line of thinking, we could also \textit{clone} the state that is used to generate the identifiers for each thread. This does not work either. In Figure~\ref{fig:helloworld:async}, scheduling nondeterminism may cause two of the RPCs to execute on a single thread in one execution, $(A^7|_1, A^7|_2)$, and on two different threads in a subsequent execution: $(A^7|_1, A^7|_1)$. \end{enumerate} The approach that we arrived at as most practical stems from our \textit{key observation} about microservice applications: while these applications may issue concurrent RPCs with the same signature, these concurrent RPCs will rarely contain the same payload: the precise argument values supplied at invocation time. Therefore, our \textit{key insight} is that, through the inclusion of the payload in each RPC's identifier, identifiers will be assigned deterministically without requiring control of thread creation or the thread scheduler. To achieve this, we \textit{share}, by reference, the state used to derive identifiers across all threads that execute concurrent code. We refer to the included arguments as the \emph{invocation payload}. \begin{definition} \label{def:invocation-payload} The \textit{invocation payload} $p$ for an RPC with $n$ parameters is a sequence $(k_1, v_1)(k_2, v_2)\dots(k_n, v_n)$ such that for each $i$ in $[1, n]$, the term $k_i$ is the $i$-th argument's name and $v_i$ is the $i$-th argument's value. For gRPC, these are the precise argument values at invocation time. For HTTP, these are the combination of query-string arguments and the request body. \end{definition} In Figure~\ref{fig:helloworld:async}, and assuming the concrete argument provided to the function is the list \texttt{["Hello", "World"]}, we represent the execution: $e_1: (A(\mathrm{\texttt{(s,Hello)}})^7|_1, ~A(\mathrm{\texttt{(s,World)}})^7|_1)$. It is important to note that the invocation count in both of these identifiers is $1$, as the count considers the call stack and payload together. This ensures deterministic assignment regardless of scheduling nondeterminism.
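The following sketch (ours; the class and helper names are hypothetical) shows how payload-aware counters, shared by reference, yield deterministic counts without scheduler control:

\begin{minted}[obeytabs=true,tabsize=4,fontsize=\footnotesize]{python}
import threading
from collections import defaultdict

class InvocationCounters:
    """Per-request counter state, shared by reference across threads."""
    def __init__(self):
        self._lock = threading.Lock()
        self._counts = defaultdict(int)

    def next_count(self, signature, call_stack, payload):
        # The key includes the payload, so concurrent RPCs that differ
        # only in their arguments never race for the same counter.
        key = (signature, call_stack, payload)
        with self._lock:
            self._counts[key] += 1
            return self._counts[key]  # the k in s(p)^t |_k
\end{minted}

In Figure~\ref{fig:helloworld:async}, the two concurrent \texttt{echo} RPCs carry the payloads \texttt{Hello} and \texttt{World}; they map to different keys and therefore each receive count $1$ in every execution, regardless of interleaving.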
We can use the combination of the RPC signature, the calling context, and the invocation payload to create a dynamic \emph{invocation signature}, defined as follows: \begin{definition} \label{def:invocation-signature} The \textit{invocation signature} for an RPC invocation is a triple $(s, p, t)$, usually denoted as $s(p)^t$, where: \begin{itemize}[] \item $s$ is the signature of the RPC; \item $p$ is the invocation payload of the RPC; and \item $t$ is a representation of the call stack of the RPC. \end{itemize} Thus, the notation $s(p)^t|_k$ refers to the $k$-th invocation of an RPC with invocation signature $s(p)^t$. \end{definition} An important point to note is that while RPC signatures (Definition~\ref{def:signature}) can be statically determined, invocation signatures (Definition~\ref{def:invocation-signature}) are determined only from observed executions. While we have framed our presentation in this section using \texttt{async}/\texttt{await}, many other concurrency primitives (\textit{e.g.,} futures, coroutines) exist that pose the same challenges. We believe our technique extends to all of them. \subsection{Increasing Granularity: \\ Path to Currently Invoked RPC} In Figure~\ref{fig:helloworld:path-encoding}, we present another variation on our \texttt{helloworld} microservice application. Similar to Figure~\ref{fig:helloworld:loops}, Service A receives a list of \texttt{String}s, invokes an RPC on Service B for each member of the list, and accumulates the results. In the event of an exception, a placeholder value is accumulated and the failure is recorded. The recorded failures are then iterated over in a retry loop and, if the retry is successful, the returned value replaces the placeholder. Different from Figure~\ref{fig:helloworld:loops}, Service B invokes an RPC on a third service, Service C, and decorates the response before returning it to Service A. \begin{figure}[t] \begin{minted}[obeytabs=true,tabsize=4,linenos,numbersep=-10pt,fontsize=\footnotesize]{python}
@service_a.method("helloworld")
def service_a_helloworld(ss : List[String]):
    rs = []
    failure = False
    failures = []

    for s in ss:
        try:
            r = rpc(service_b, "echo", s)
            rs.append(r)
        except Exception as e:
            failure = True
            failures.append((len(rs), s))
            rs.append("")

    if failure:
        for (i, s) in failures:
            try:
                r = rpc(service_b, "echo", s)
                rs[i] = r
            except Exception as e:
                pass

    return rs.join(" ")

@service_b.method("echo")
def service_b_decorate_echo(s : String):
    try:
        r = rpc(service_c, "echo", s)
        return r
    except Exception as e:
        return s

@service_c.method("echo")
def service_c_echo(s : String):
    return s
\end{minted} \caption{RPC signature, even when extended with invocation count and call stack, is insufficient when an RPC invocation is triggered by different incoming RPC requests.} \label{fig:helloworld:path-encoding} \end{figure} We assume a functional test that issues an RPC to A containing a list of two \texttt{String}s: \texttt{Hello} and \texttt{World}. We abbreviate these to $H$ and $W$ and omit the parameter name \texttt{s} in their invocation signatures. Using the technique from the previous section, the execution where the list iteration completes and no faults are injected is: $e_1: (A(H)^9|_1, B(H)^{29}|_1, A(W)^9|_1, B(W)^{29}|_1)$. For each iteration, A issues an RPC from line 9 to B; when B receives the RPC from A, it issues an RPC to C from line 29.
Now, let us consider the test execution where a fault is injected on the RPC in the \nth{2} iteration of the loop, represented as follows: $e_2: (A(H)^9|_1, B(H)^{29}|_1, \neg{A(W)^9|_1}, A(W)^{19}|_1, B(W)^{29}|_1)$. As before, during the \nth{1} iteration of the loop, Service A issues an RPC to Service B at line 9; Service B then issues an RPC to Service C at line 29. When we reach the \nth{2} iteration of the loop, a fault is injected for the RPC from Service A to Service B. Then, the failure condition is met and a subsequent RPC is issued from Service A to Service B on line 19; Service B then issues an RPC to Service C on line 29 before returning a response. The issue we experience in this example is that the RPC identified by $B(W)^{29}|_1$ in test execution $e_1$ is \textit{not} the same as the RPC identified by $B(W)^{29}|_1$ in test execution $e_2$. In execution $e_1$, the RPC from Service B to Service C at line 29 is caused by the RPC issued by Service A on line 9. In execution $e_2$, the RPC from Service B to Service C at line 29 is caused by the RPC issued by Service A on line 19. These are not the same, even though they issue the same RPC with the same arguments and payload. They represent distinct call sites in different parts of the code: one is part of the normal operation of the RPC endpoint where no failure occurs, and one represents error handling code that needs to be tested to ensure correct operation of the application under failure. Therefore, associating the same identifier with these RPCs results in both unsound and incomplete behavior: either the injection of faults on the incorrect RPC, or the failure to explore the full fault space during exhaustive search. To resolve this issue, we need to include the path of RPC invocations that resulted in the current RPC, as this information is not captured by the call stack. To achieve this, we accumulate a list of identifiers as RPCs are invoked from service to service as part of handling a received RPC invocation: for example, $[A(W)^9|_1 :: B(W)^{29}|_1]$ indicates that the \nth{1} invocation of invocation signature $B(W)^{29}$ occurred as a result of the \nth{1} invocation of invocation signature $A(W)^9$. We can reformulate test executions $e_1$ and $e_2$ as follows: \begin{itemize}[] \item $e_1: ([A(H)^9|_1], [A(H)^9|_1 :: B(H)^{29}|_1], $ \\ $~~~~~~~~~[A(W)^9|_1], [A(W)^9|_1 :: B(W)^{29}|_1])$ \\ The RPC invocations from A to B on line 9 are denoted with the prefixes $A(H)^9|_1$ and $A(W)^9|_1$ to include the enclosing RPC from A. \item $e_2: ([A(H)^9|_1], [A(H)^9|_1 :: B(H)^{29}|_1], [\neg{A(W)^9|_1}],$ \\ $~~~~~~~~~[A(W)^{19}|_1], [A(W)^{19}|_1 :: B(W)^{29}|_1])$ \\ The RPC invocation from B to C on line 29 is prefixed by $A(W)^{19}|_1$, which distinguishes it from the RPC from B to C on line 29 in execution $e_1$, which was triggered by the \nth{2} RPC from A to B on line 9. \end{itemize} \begin{definition} \label{def:dei} The \textit{distributed execution index} (DEI) for an RPC invocation is a sequence $[ r_1|_{c_1} :: r_2|_{c_2} :: \dots :: r_n|_{c_n} ]$ where: \begin{itemize}[] \item $r_n$ is the invocation signature of the invocation; and, \item the current RPC invocation is the $c_n$-th invocation of $r_n$ with the path having DEI $[ r_1|_{c_1} :: r_2|_{c_2} :: \dots :: r_{n-1}|_{c_{n-1}} ]$. \end{itemize} The definition of a DEI is thus recursive, with the base case being the top-level entry point to the application, whose path is the empty sequence $[~]$.
\end{definition} \subsection{Special Cases of DEI{}} \label{sec:dei:specialcases} We now look at both \textsc{3MileBeach}{}~\cite{10.1145/3472883.3486986} and \textsc{Filibuster}~\cite{10.1145/3472883.3487005} and show how the RPC identification used by each of these systems is a special case of DEI{}. \textsc{3MileBeach}{}~\cite{10.1145/3472883.3486986} uses a combination of signature and invocation count to identify each RPC. Rather than track the path of enclosing RPCs to the currently invoking RPC, it accumulates the entire history of RPCs that have been issued up to the current RPC. This type of identification is path-sensitive: each RPC is identified using the history of all RPCs issued before it, in execution order. As demonstrated, this is problematic for branching control flow during exhaustive search, and for scheduling nondeterminism when concurrency is present. \textsc{Filibuster}~\cite{10.1145/3472883.3487005} only supports HTTP RPCs and uses a combination of signature, invocation count, and call stack to identify each RPC uniquely. Without consideration of the payload, this type of identification is problematic under scheduling nondeterminism when concurrency is present. This makes sense: \textsc{Filibuster}{} only considers single-threaded Python code. \section{Implementation} \label{sec:implementation} In order to evaluate DEI{} and demonstrate its applicability to multiple languages and RPC frameworks, we implemented an extension of \textsc{Filibuster}{} in Java that supports both (a) HTTP-based RPC using the popular Armeria microservice programming framework and (b) Google's gRPC framework. Both framework choices were influenced by choices made by our industry partner, whose application we plan to use for our evaluation of DEI{} at scale. Java was chosen as the implementation language for two reasons. First, Java is a widely used language with both concurrency primitives and true parallelism. Second, by implementing our algorithm and extensions in Java, they can be used by any language that uses the JVM for its runtime, for example, Scala, Kotlin, and Clojure. Kotlin specifically is used by the industry partner mentioned above. Our work involved the following: \begin{enumerate} \item an implementation of the DEI{} algorithm; \item an implementation of a \textsc{Filibuster}{} instrumentation library in Java that is used to assign execution indexes to each RPC invocation and maintain the required execution index state for each request; and \item several modifications to the existing, open-source \textsc{Filibuster}{} server (in Python) to generalize its handling of RPC invocations. \end{enumerate} This work started in May 2021 and was completed by the end of November 2021. In total, our implementation was 11.7 KLOC: 3.4 KLOC of implementation code with 8.3 KLOC of test code. \subsection{Instrumentation Challenges} Modifications to the existing, open-source \textsc{Filibuster}{} prototype to generalize its tracking of RPC invocations for exhaustive search were straightforward: generalizing HTTP as an RPC invocation with a target, method, and parameters. Similarly, the implementation of the DEI{} algorithm and its associated data structures was also straightforward.
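For illustration, the recursive structure of Definition~\ref{def:dei} reduces to a short value type; the sketch below is ours, in the pseudocode style of our figures (the actual implementation is in Java):

\begin{minted}[obeytabs=true,tabsize=4,fontsize=\footnotesize]{python}
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class IndexedInvocation:
    invocation_signature: str  # s(p)^t, serialized
    count: int                 # k: the k-th invocation of that signature

@dataclass(frozen=True)
class DistributedExecutionIndex:
    # [ r1|c1 :: r2|c2 :: ... ]; empty for the top-level entry point.
    path: Tuple[IndexedInvocation, ...] = ()

    def extend(self, invocation_signature, count):
        # Propagated with each outgoing RPC and extended at every hop.
        entry = IndexedInvocation(invocation_signature, count)
        return DistributedExecutionIndex(self.path + (entry,))
\end{minted}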
However, propagation of DEI{} state -- required to construct the concatenative execution indexes that identify RPCs deep in the graph (\textit{i.e.,} identifying the B to E RPC, when A calls B which causes B to call E) -- proved quite challenging in practice. Most of these challenges arose from internal thread management, typically done to improve performance, in the underlying web or RPC framework library code used to issue or handle RPCs. Consider \textsc{DeadlockFuzzer}, a Java tool for the identification of deadlocks using execution indexing. \textsc{DeadlockFuzzer} instruments the Java bytecode at compile time to control scheduling and give it visibility into thread creation and synchronization. Then, thread-local state is used to track the internal state required for each execution index. In contrast, \textsc{Filibuster}{} tries to avoid scheduler control (or specialized compilation for testing) by crafting execution indexes that distinguish concurrent RPCs. This is required at the scale of microservice applications, given their potential language diversity. The implication is that thread-local state, \textit{used naively}, is not sufficient unless one has visibility into Java thread operations (\textit{e.g.,} creation, context switching); such visibility is essential given the performance optimizations common in web and RPC frameworks today. We provide a few demonstrative examples of concrete challenges we encountered when implementing various prototype solutions: \begin{itemize} \item \textit{Event Loops} are found in the implementations of many microservice frameworks. Used for single-threaded server implementations, they are never descheduled, to avoid the penalty of context switches, and handle incoming RPCs both serially and synchronously. Therefore, developers are encouraged (and, in many cases, forced through runtime exceptions) to perform any asynchronous operations using futures on a specific thread pool. When users have to perform blocking operations, they are encouraged to use a dedicated thread pool for those operations, in order to avoid scheduling issues with other nonblocking operations, or deadlocks with other blocking operations that might compete for a shared resource using locks, mutexes, or semaphores, or that might introduce cycles into the microservice graph (\textit{i.e.,} service reentrancy). Armeria is one example of a microservice framework that uses an event loop. \item \textit{Thread Pools} pose problems when using thread-local variables due to thread reuse: thread-local state will either be uninitialized or contain values from work performed by the last function executing on that thread. For example, in Java, developers are able to create \texttt{Runnable} objects containing code that should be executed on a different thread. When executing these objects, developers are able to specify a thread pool to use for execution; if not specified, a shared ``common pool'' of threads supplied by the JVM is used. \item \textit{Asynchronous IO} is a feature provided by most RPC frameworks. These frameworks are typically implemented with thread pools and share the same problems: a single RPC may execute across different threads. For example, an RPC between two services may start on one thread when issuing an RPC, be suspended while waiting for IO, and then resume execution on a different thread.
\item \textit{Futures}, common in asynchronous code implemented in Java, can contain arbitrary user code and execute on either a developer-specified thread pool or the common thread pool (\textit{e.g.,} \mintinline{java}{CompletableFuture}). \item \textit{Coroutines} and other types of suspendable functions, such as those provided by languages that compile to the JVM (\textit{e.g.,} Kotlin, Scala), may execute across several different threads, without allowing the user to specify a custom thread pool for their execution. \end{itemize} To address this problem, we researched open-source distributed tracing frameworks to understand how they deal with it. This research led us to the OpenTelemetry project, which is used to automatically instrument many of the libraries used when building microservice applications in order to enable distributed tracing. OpenTelemetry achieves this using a combination of three different techniques, which we leveraged for our \textsc{Filibuster}{} integration. \begin{enumerate} \item Applications are not directly modified for instrumentation. Instead, a runtime parameter \mintinline{java}{-javaagent} is supplied that points to a JAR file containing code that is allowed to instrument libraries at runtime with arbitrary code using an API provided by Java. \item Using this API, instrumentation is installed into the standard Java concurrency libraries (and the standard libraries of other languages that run on the JVM), allowing OpenTelemetry to automatically migrate context information between threads when context switches occur (or when other concurrency mechanisms are present, such as coroutines, where values must be moved between coroutine scopes). \item Finally, this same API is used to install code, specific to each web framework, RPC framework, and database client supported by OpenTelemetry, to perform the distributed tracing using the available context information from (2). This is the API that we leveraged to automatically install the required \textsc{Filibuster}{} instrumentation and integrate our client library for Java with the proper context information needed for tracking and propagating our execution indexes. \end{enumerate} \subsection{Streaming Challenges} Including the payload in the execution index was not a straightforward engineering task either. This is because many RPC frameworks support streaming and are structured, in their implementation, streaming-\textit{first}: even when a single message is sent, it is represented as a stream containing a single message. This proved challenging for \textsc{Filibuster}{}. For example, \textsc{Filibuster}{} propagates execution indexes between different services using headers or protocol-specific metadata. This is in direct contrast to \textsc{3MileBeach}{}, which modifies the protocol definitions between services and assigns metadata inside of the (de-)serializer to new fields that are used only for fault injection and tracing. With streams, regardless of the number of messages that are transmitted, headers are transmitted \textit{prior to} the payload, and therefore the payload is not known at the time the headers are transmitted. For unary messages -- streams containing a single message only -- we were able to buffer the header transmission inside of the \textsc{Filibuster}{} instrumentation code until the payload was known. Once it was known, we replayed the header messages, now containing the payload encoding as part of the execution index.
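To illustrate the unary buffering just described, the sketch below defers header transmission until the payload is observed. The \mintinline{java}{Call} interface and the header name are hypothetical stand-ins of our own; the actual implementation hooks into the RPC framework's interceptor machinery rather than defining its own call abstraction.

\begin{minted}[fontsize=\footnotesize]{java}
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for an RPC framework's outbound call abstraction.
interface Call {
    void start(Map<String, String> headers);
    void sendMessage(byte[] payload);
}

// Defers header transmission on a unary call until the payload is known,
// so the payload encoding can be included in the execution index header.
final class UnaryHeaderBufferingCall implements Call {
    private final Call delegate;
    private Map<String, String> buffered;

    UnaryHeaderBufferingCall(Call delegate) { this.delegate = delegate; }

    @Override
    public void start(Map<String, String> headers) {
        // Hold the headers back: the execution index is still incomplete.
        buffered = new HashMap<>(headers);
    }

    @Override
    public void sendMessage(byte[] payload) {
        // Payload now known: extend the (hypothetically named) index header
        // with the payload encoding, then replay start before the message.
        String key = "x-filibuster-execution-index";
        buffered.put(key, buffered.get(key) + "#" + Arrays.hashCode(payload));
        delegate.start(buffered);
        delegate.sendMessage(payload);
    }
}
\end{minted}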
However, this does not address the general problem of stream usage. \subsubsection{Execution Index Rewriting} To address this problem more generally, we leveraged a property of our execution indexes: they are always unique \textit{for a given execution} even without payload inclusion; however, in the presence of scheduling nondeterminism as a result of concurrency, they \textit{are only deterministic across executions} when the payload is included. Therefore, even if we assign incorrect -- but unique -- execution indexes during a single test execution, they can be rewritten to be deterministic as long as the rewriting completes \textit{prior to} both (A) completion of the current test execution and (B) the scheduling of subsequent test executions based on newly discovered program paths. This remapping from the \textit{preliminary} execution index to the actual execution index is performed by the \textsc{Filibuster}{} server when the RPC completes and the actual execution index is known. To demonstrate, consider Figure~\ref{fig:streaming}. In this example, scheduling nondeterminism can result in either the \texttt{Hello} RPC executing first and the \texttt{World} RPC executing second, or the reverse, where the \texttt{World} RPC executes first and the \texttt{Hello} RPC executes second. Regardless of the execution order, the RPC signature and call stack will be the same for both RPCs; the only difference between these RPCs is the payload. \begin{figure}[t] \begin{minted}[obeytabs=true,tabsize=4,linenos,numbersep=-10pt,fontsize=\footnotesize]{python}
@service_a.route("/")
def service_a_index():
	b_stream = create_stream(service_b)

	def call_b(string):
		return rpc(b_stream, "/", string)

	words = ["Hello", "World"]
	futures = []
	for i in words:
		futures.append(async call_b(i))
	return await_all(futures).join(" ")

@service_b.route("/")
def service_b_hello():
	return payload.get_string()
\end{minted} \caption{Use of RPC streaming API that exhibits scheduling nondeterminism.} \label{fig:streaming} \end{figure} When the stream is opened (Figure~\ref{fig:streaming}, line 3), a preliminary execution index, which uses an empty payload and the location of the stream creation as the call stack when generating the invocation signature, is generated and transmitted to the \textsc{Filibuster}{} server. That is represented as $([A(\mintinline{java}{null})^3|_{x}])$ (where $x = 1$ for Figure~\ref{fig:streaming}, specifically). We also include a flag in the headers or metadata to indicate that this execution index is preliminary, to distinguish it from non-streaming RPCs with empty (or \mintinline{java}{null}) payloads. When the RPCs are actually invoked at the caller, our instrumentation records the final execution index locally, along with a mapping from the preliminary execution index and the item's position on the stream. This includes the correct invocation signature (containing the correct call stack of the actual location where the message was sent; in Figure~\ref{fig:streaming}, line 6) and the RPC payload. For \mintinline{java}{Hello}, this is $([A(\mintinline{java}{Hello})^6|_1])$; for \mintinline{java}{World}, this is $([A(\mintinline{java}{World})^6|_1])$. When the invocation is received by the callee, it implicitly increments the invocation count from the starting, preliminary execution index that was received in the header. This incremented execution index is the execution index that is propagated when subsequent RPCs are issued from the callee.
For example, the \nth{1} message containing \mintinline{java}{Hello} is implicitly assigned $([A(\mintinline{java}{null})^3|_{x+1}])$; the \nth{2} message \mintinline{java}{World} is implicitly assigned $([A(\mintinline{java}{null})^3|_{x+2}])$. These identifiers may be permuted across executions, but that does not matter: for \textit{this specific test execution}, they are unique. When the invocation is complete at the caller, and a response is received for each of these RPCs, the \textsc{Filibuster}{} server is notified of this completion by the caller with both the preliminary and actual execution indexes. From there, it rewrites any execution index matching (or containing, if a nested request) these preliminary execution indexes to contain the finalized execution indexes. At this point, the \textsc{Filibuster}{} server has execution indexes that are both unique and deterministic. As stated above, all that is necessary is that these DEI{}s are corrected before the next execution for fault injection is scheduled, to ensure that a valid execution is being scheduled and that the execution is not redundant with respect to the exhaustive search and any optimizations that rely on these identifiers. To account for this, we adapted the \textsc{Filibuster}{} server to delay scheduling additional test executions until all preliminary DEI{}s were finalized. As a note, while we have implemented this solution and tested it for single-element streams, we have not evaluated it for larger streams yet. \section{Preliminary Evaluation} \label{sec:evaluation} Using \textsc{Filibuster}{} and its corpus, we demonstrate the need for the inclusion of invocation count, call stacks, and RPC path in the invocations' identifiers. Using our extension of \textsc{Filibuster}{}, we demonstrate the problem of scheduling nondeterminism and show that the inclusion of the payload avoids these issues. \begin{figure} \includegraphics[width=\linewidth]{cinema-structures-2.png} \caption{Structure of \textit{cinema-1} and \textit{cinema-2}.} \label{fig:evaluation:structure} \end{figure} \subsection{Required for Correctness: \\ Invocation Count, Stack, and Path} \label{sec:evaluation:corpus} To demonstrate the need for invocation count, call stacks, and RPC path, we use the \textsc{Filibuster}{} corpus. This corpus is composed of 4 industrial examples, re-created from the descriptions of actual microservice applications taken from presentations at industrial conferences on resilience engineering, and 8 ``cinema'' examples that are used to demonstrate particular microservice RPC patterns with the help of a microservice application that tracks users' cinema reservations, as briefly described in the introduction. As none of the industrial examples contained the coding patterns discussed in Section~\ref{sec:dei}, we used the cinema examples in our evaluation. With regard to the cinema examples, we were able to identify one example, cinema-3, that demonstrated the need for inclusion of the call stack or invocation count. To demonstrate the need for both, we needed to combine the structure of cinema-6 with the use of retries on failure from cinema-1. We also had to extract the RPC invocation into a helper function. To demonstrate the need for inclusion of the execution path, we needed to combine the structure of cinema-2 with the use of default responses on failure from cinema-5. We refer to these \textit{new} examples as cinema-9 and cinema-10, respectively.
Cinema is composed of 4 services, depicted in Figure~\ref{fig:evaluation:structure}. The users service retrieves the bookings for a user: this involves an RPC to bookings and then an RPC to movies for each booking. In the two variations we use, either (a) users issues the RPC to movies after the response from bookings, or (b) bookings issues the RPC to movies directly. We use a single functional test that gets the bookings for a user. For fault injection, we consider a single connection error exception per RPC. Table~\ref{tab:evaluation:table} summarizes these results. \begin{table} \begin{center} \begin{tabular}{|| c || c | c | c | c | c ||} \hline Cinema & All & No & No & No IC, & No Path, IC \\ Ex. & & IC & Stk & Stk & \& Stk \\ \hline\hline 3 & 7 & \textbf{4} & 7 & - & - \\ \hline 9 & 5 & 5 & 5 & \textbf{3} & - \\ \hline 10 & 6 & 6 & 6 & 6 & \textbf{5}\\ \hline \end{tabular} \end{center} \caption{Results that demonstrate all techniques must be combined for correct RPC identification.} \label{tab:evaluation:table} \end{table} \paragraph{Invocation count.} For this experiment, we use cinema-3, where the RPC from users to bookings is done within a loop and re-executed once on failure. Exhaustive search requires 7 executions; without invocation count, we run 4 executions. \begin{mdframed}[skipabove=0.1cm, skipbelow=0.1cm, nobreak] SFIT{} is incorrect without \textit{invocation counting:} \begin{itemize}[leftmargin=*,noitemsep,topsep=0pt] \item \textit{Unsound.} As each RPC in the loop will be assigned the same identifier, RLFI will either inject a fault on zero or all iterations. \item \textit{Incomplete.} Requests that occur as a result of any iteration other than the \nth{1} will not have faults injected. \end{itemize} \end{mdframed} \paragraph{Call stack.} For this experiment, we used cinema-9. In the event of failure of the \nth{1} RPC from users to bookings, the service marks the request as failed and retries it later from a different call site. This differs from the loop, where the same call site is used. Each call site uses a helper function to issue the RPC to ensure the stacks are different. Exhaustive search requires 5 executions; without the call stack and invocation count, we run 3 executions. \begin{mdframed}[skipabove=0.1cm, skipbelow=0.1cm, nobreak] SFIT{} is incorrect without \textit{call stack inclusion:} \begin{itemize}[leftmargin=*,noitemsep,topsep=0pt] \item \textit{Unsound.} As each invocation of the helper's RPC will be assigned the same identifier, RLFI will either inject a fault on both or neither. \item \textit{Incomplete.} Requests that occur as a result of the \nth{1} invocation will not have faults injected. \end{itemize} \end{mdframed} \paragraph{RPC Path.} For this experiment, we used cinema-10. In the event of a failure of bookings, a default response is used in place of the failure and the movies service is contacted by the users service directly. Exhaustive search requires 6 test executions; without RPC path, we run 5 executions. \begin{mdframed}[skipabove=0.1cm, skipbelow=0.1cm, nobreak] SFIT{} is incorrect without \textit{RPC path inclusion:} \begin{itemize}[leftmargin=*,noitemsep,topsep=0pt] \item \textit{Unsound.} Since the bookings RPC to movies and the users RPC to movies share the same identifier, RLFI will either inject a fault on both or neither. \item \textit{Incomplete.} As RLFI will always fail the \nth{2} RPC to movies, we do not explore the successful case.
\end{itemize} \end{mdframed} \subsection{Nondeterminism is a Problem} \label{sec:evaluation:nondeterminism} In order to understand the impact of scheduling nondeterminism within the JVM on correct identifier assignment, we constructed a small example with Armeria that contained two services: \textit{Hello} and \textit{World}. In this example, the \textit{World} service exposed a single endpoint that returned a \texttt{String} constant when it received an RPC. The \textit{Hello} service exposed an endpoint that, when it received an RPC from our test harness, would launch a configurable number of threads, each of which issued an RPC to the \textit{World} service, and then wait for them to complete. Each thread was defined as a class in Java, where the \textit{Hello} service would create instances of this class in a loop: this ensured that the call site of the RPC and the stack trace of the call site were identical. All RPCs were made to the same service and differed only in the payload, which contained the identifier of the thread determined by thread creation order. For this experiment, we used our \textsc{Filibuster}{} implementation extended with support for Java, gRPC, and DEI{}. We reconfigured the DEI{} algorithm to include the thread creation order. Therefore, the payloads differed only by this identifier. We ran this test application for varying numbers of RPCs (2, 4, 8, 16, 32, 64) for 100 iterations each. We fixed the size of the thread pool at the \textit{Hello} service at 2. For each iteration, we recorded whether or not the execution index assignment matched the thread creation order by examining the execution indexes' payload values. The results are presented in Figure~\ref{fig:evaluation:scheduling-nondeterminism}. With only 2 RPCs, 44\% of the tests exhibited an RPC execution order that did not match the creation order; by 64 RPCs, 100\% did not match. \begin{figure} \includegraphics[width=\linewidth]{analysis.pdf} \caption{Percentage of executions with deterministic assignment for configuration of 2 threads.} \label{fig:evaluation:scheduling-nondeterminism} \end{figure} \begin{mdframed}[skipabove=0.1cm, skipbelow=0.1cm, nobreak] \cmark~Even in the presence of relatively low amounts of concurrency, scheduling nondeterminism is a problem for existing RLFI techniques. \end{mdframed} \subsection{Payload Inclusion Distinguishes} \label{sec:evaluation:payload} Using the same example from Section~\ref{sec:evaluation:nondeterminism}, we were able to verify our \textit{key insight}: inclusion of the payload in the identifier for each RPC invocation was sufficient for distinguishing these concurrent, inter-service RPCs. With DEI{}, all RPC identifiers were unique and deterministic, as shown in Figure~\ref{fig:evaluation:scheduling-nondeterminism}. Therefore, scheduling nondeterminism was not an issue. \begin{mdframed}[skipabove=0.1cm, skipbelow=0.1cm, nobreak] \cmark~Payload is sufficient for distinguishing concurrent, inter-service RPCs in microservice applications, under the assumption that these RPCs will not share the same payload when issued to the same service and method with the same parameters. \end{mdframed} \section{Related Work} \label{sec:related-work} Execution indexing~\cite{10.1145/1379022.1375611} was originally devised to identify unique points in an execution, in a manner that established a correspondence across multiple executions.
It has also been used for deadlock identification~\cite{10.1145/1543135.1542489} and traversal analysis~\cite{10.1109/ICSE.2017.50}. In Section~\ref{sec:dei:payload}, we described how model checkers for distributed systems~\cite{267763, 10.5555/2685048.2685080, 10.1145/3302424.3303986} were originally designed for identifying concurrency bugs and later extended to test failures. As such, they rely on control of the thread scheduler, which is unrealistic for large microservice applications that run on hundreds of different machines and are implemented in different languages. In Section~\ref{sec:dei:specialcases}, we discussed the differences between \textsc{Filibuster}{}~\cite{10.1145/3472883.3487005, filibuster} and \textsc{3MileBeach}{}~\cite{10.1145/3472883.3486986}, two modern approaches to RLFI, and demonstrated that the techniques used by both are special cases of DEI{}. Many practitioners still rely on stochastic techniques~\cite{8383672, chaos-monkey, gremlin, alfi, 10.1109/ICSE-SEIP.2019.00012} that attempt to minimize the blast radius of random experiments in production: the \textsc{Filibuster}{} paper identifies 32 different companies all using this style of experimentation on microservice applications. Recently, there has been interest in moving these techniques into the local development environment~\cite{chaos-toolkit, linkedout, 7536505} to minimize the blast radius even further. However, these techniques each require that developers manually specify the fault configurations that are tested, necessitating a mechanism, such as DEI{}, that supports a sound and complete systematic search. \section{Future Work} In this section, we present several ideas for future work. \subsection{Alternative Instantiations} We believe that a promising area of research is in exploring different instantiations of the distributed execution indexing{} algorithm. We have already shown that two different instantiations have been of value: \textsc{Filibuster}{}'s instantiation, which supports fault injection and exhaustive search in applications that contain no concurrency and use only synchronous RPCs; and \textsc{3MileBeach}{}'s instantiation, which provides temporal, nondeterministic, concurrent fault injection in a live system. We envision several different instantiations that may be of use for microservice application developers. First, we believe that an instantiation that does not include (or, more precisely, sets to \mintinline{java}{null}) the invocation count or stack trace may be useful in identifying a particular microservice anti-pattern where the same RPC endpoint is accessed by the same service multiple times as part of a single request. We have empirically observed this pattern several times. For example, a service may contain a helper function called \mintinline{java}{getCurrentUser} that is repeatedly called at different locations while processing a single request, instead of retrieving the user a single time and passing those values around through function calls. We hypothesize that this pattern emerges from the combination of abstraction, the lack of visibility into where RPCs actually occur within an application, and, as a result of this, a perceived lack of cost.
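To make the anti-pattern concrete, consider the following hypothetical sketch (all names are our own, chosen for illustration):

\begin{minted}[fontsize=\footnotesize]{java}
// Hypothetical illustration of the anti-pattern: the same RPC endpoint is
// invoked repeatedly while serving a single request, from different sites.
interface UserClient { String getCurrentUser(); } // RPC client stub

final class OrderService {
    private final UserClient userClient;

    OrderService(UserClient userClient) { this.userClient = userClient; }

    String checkout(String cartId) {
        validate(cartId);       // issues a getCurrentUser RPC
        return charge(cartId);  // issues a second, redundant getCurrentUser RPC
    }

    private void validate(String cartId) {
        String user = userClient.getCurrentUser(); // RPC #1
        // ... validate cartId against user ...
    }

    private String charge(String cartId) {
        // RPC #2: the user could have been fetched once in checkout()
        // and passed down as an argument instead.
        String user = userClient.getCurrentUser();
        return "receipt:" + user + ":" + cartId;
    }
}
\end{minted}

Under an instantiation that nulls out the invocation count and stack trace, both calls above collapse to a single identifier, which is exactly the signal that the same endpoint is being re-queried within one request.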
We also believe that the same instantiation, augmented by an analysis, can be used to detect a second microservice anti-pattern: specifically, where a remote service does not directly expose a required API and therefore the invoker of that service must combine two remote calls to get the information it requires. For example, calling \mintinline{java}{getCurrentUser} and then immediately calling \mintinline{java}{getProfileByUser}, providing the user record that was returned from the first call as an argument to the second. We hypothesize that these patterns arise from the decentralized nature of microservice development, where a single developer of a service must implement their functionality with ``what's available'' as APIs from the services that they take dependencies on. Finally, we believe that when testing more advanced resilience techniques, such as circuit breakers and load shedding, other instantiations will be necessary. For circuit breakers that operate \textit{per service}, it may be useful to perform fault injection on any execution index that targets a particular service. This may require coarser execution indexes that do not discriminate by invocation count, stack trace, or payload if the goal is to inject a fault for each RPC to that service. The same might apply to load shedding. For circuit breakers that operate \textit{per method}, a different instantiation may be required that includes the stack trace, or a minimized stack trace that only considers the final frame. For load shedding that operates \textit{per method}, an instantiation that only considers the RPC signature may be appropriate. Regardless of the instantiation and the precise use case, we believe that the distributed execution indexing{} algorithm provides a framework for supporting all of these, and that identifying the mapping between instantiations and applications is a promising direction for future research. In fact, we have already begun work on several of these, along with a mechanism for projecting execution indexes with the full instantiation into execution indexes containing these alternative instantiations at runtime, for analysis, without requiring that the system generate and maintain all possible representations. \subsection{Graph Analysis} One interesting property of the execution indexes produced by the distributed execution indexing{} algorithm is that they can be used to determine the structure of the application graph. For example, if Service A calls Service B and Service B calls Service C, the execution index for the specific call between B and C will contain the execution index of the call from A to B as a prefix. Not only can this be used to dynamically reconstruct the application graph from recorded observations of traces --- for example, if storing the augmented OpenTelemetry information in Jaeger --- but we believe that this information could also be used for detection and activation of fault tolerance mechanisms at runtime. One such application is to detect when circuit breakers have been activated. For example, by injecting faults repeatedly on a request with a given execution index and subsequently observing the disappearance of that execution index -- along with all execution indexes that contain it as a prefix -- in subsequent traces, one might be able to conclude that a circuit breaker opened and prevented subsequent requests to the request where the fault was injected.
By continuing to issue requests where no faults are injected and subsequently observing the subgraph restored in following traces, one may be able to conclude that the circuit breaker has re-closed. The key here is that the execution indexes are stable under control flow changes, so when the circuit breaker opens, any subsequent requests that occur are guaranteed not to appear like, or be confused with, the requests that occurred when the circuit breaker was closed. This is one possible application of distributed execution indexing{} for identifying fault tolerance mechanisms through observation of traces containing execution indexes. \subsection{Implementation} When it comes to implementation, our \textsc{Filibuster}{} extension still needs to be evaluated at scale, with different concurrency primitives, to determine its viability. For example, when it comes to Kotlin specifically, we identified several situations where, when using suspendable functions combined with Armeria's futures -- used when issuing both HTTP and gRPC calls -- the call stack did not contain any frames originating in the application code. This makes identification of the actual RPC location in application code impossible. We believe this to be an artifact of Kotlin's coroutine handling and suspect that deeper integration with Kotlin's stack-trace reassembly mechanism may be necessary to properly identify the location where the RPCs are actually invoked in the application code. Somewhat similarly, Kotlin coroutines may be subject to multiple context switches and rescheduling across different threads. The result of this is that, depending on scheduling nondeterminism, call stacks may differ across executions by one or more frames originating in internal language libraries (\textit{e.g.,} \mintinline{java}{java.lang.Thread}), thereby explicitly encoding the context switches or other scheduling decisions into the call stack itself. To address this, we implemented a mechanism for white/blacklisting frames in the call stack related to standard libraries. This has proven useful in preliminary tests, but has not been evaluated at scale yet. \subsection{Algorithm} Inclusion of the RPC signature, call stack, and invocation path -- what can be thought of as \textit{synchronous} distributed execution indexing{} -- has already been evaluated as part of our work on SFIT{} and was published at ACM's Symposium on Cloud Computing in 2021~\cite{10.1145/3472883.3487005}. However, while evaluated synthetically, \textit{asynchronous} distributed execution indexing{} -- where the payload is included in the execution index -- still needs further evaluation in an industrial microservice application in order to demonstrate its viability. The design decisions that underlie the inclusion of the payload in the execution index are rooted in observations we made when examining a large-scale, industrial microservice application composed of over 500 different services. Examination of this application resulted in two \textit{key observations:} \begin{enumerate} \item In the majority of cases where concurrent RPCs were executed, asynchronously, to the same service and method, from the same call site, they were typically done to retrieve \textit{different} information in parallel, as fast as possible, as indicated by the arguments to the RPC. For example, retrieving records from a database and then issuing RPCs inside of a loop to aggregate those records.
There is some risk here: databases without proper constraints may return a list of records containing duplicate entries, resulting in concurrent retrieval of the same record. \item In a few of the cases we identified, parallel execution of the same workflow, parameterized by the same arguments, resulted in concurrent execution of the same \textit{single} RPC invocation. \end{enumerate} These key observations resulted in the following \textit{key insight.} When concurrent RPCs are executed, asynchronously, to the same service and method, from the same call site, with the same payload, for the purposes of aggregation, permutation of the identifiers for these RPCs is observationally equivalent. That is, whether we inject a fault on the first or the second has no effect on the program's outcome, as the responses from both RPCs will be the same. This holds for any API that retrieves records, but may not hold for APIs that perform mutations. When it comes to mutations, it holds for any mutation that is deterministic, idempotent, and commutative, and therefore may apply to some mutations, but not all. Further evaluation is still required in order to understand if these observations hold true across all industrial microservice applications. \section{Conclusion} \label{sec:conclusion} In this paper, we presented \textit{distributed execution indexing{}} (DEI{}), a technique for microservice applications that precisely identifies dynamic instances of inter-service RPCs. DEI{} addresses a real need in modern microservice resilience testing, as existing RLFI techniques all fail to handle common RPC patterns that exist in industrial microservice applications. We formally defined the general concept of DEI{} and demonstrated that two of the most recent RLFI systems use special cases of DEI{}.
\section{Introduction} Let $R=\K[x_1,\dots,x_n]$ and let $I$ be an ideal of $R$. Denote the unique graded maximal ideal of $R$ by $M$. Green-Lazarsfeld introduced the condition $N_p$, which means that the resolution of $R/I$ is 2-linear for the first $p$ steps. Thus $N_1$ means that the ideal is generated in degree 2, and $N_2$ means that furthermore all syzygies between the generators are linear. The largest $p$ for which the graded algebra $R/I$ satisfies $N_p$ is called the \emph{index} of $R/I$ (see \cite{GreenLazarsfeld1984,GreenLazarsfeld1985}). Later the condition $N_{k,p}$ was introduced. This means that the resolution is $k$-linear for the first $p$ steps. Thus $R/I$ satisfies $N_{k,1}$ if $I$ is generated in degree $k$, it satisfies $N_{k,2}$ if furthermore the syzygies of the generators of $I$ are linear, and so on. We call the largest $p$ for which $R/I$ satisfies $N_{k,p}$ the $k$-\emph{index} of $R/I$. That the $k$-index of $R/I$ is $\infty$ means that $R/I$ has a $k$-linear resolution. A quotient ring $S=R/I$ has a linear resolution if all syzygies are linear. That is, the resolution of $S$ is $k$-linear if $\tor^R_{i,j}(S,\K)=0$ whenever $j\neq k+i-1$ and $i>0$. For a positive integer $k$, Eisenbud and Goto in \cite{EisenbudGoto1984} define $I_{\geq k}=I\cap M^k$. They show that for any graded ring $R/I$, the ring $R/I_{\geq k}$ has a $k$-linear resolution for $k\geq\reg(I)$ (see \cite[Theorem 1.2]{EisenbudGoto1984}). Moreover, a formula for the $k$-index is given by Eisenbud, Huneke, and Ulrich in \cite[Proposition 1.6]{EisenbudHunekeUlrich2006}. For a monomial ideal $I$, let $I_k$ be the squarefree part of $I_{\geq k}$; that is, $I_k$ is the squarefree truncation of $I$ past $k$. If $I\subseteq R$ is a squarefree monomial ideal and $M_k=\left(u\mid u\text{ is a squarefree monomial in }M^k\right)$, then $I_k=I\cap M_k$. In Theorem \ref{betti I_k}, we compute the graded Betti numbers of $R/I_k$ for $k>\min\{\deg(u)\mid u\in I\}$. The Betti numbers provide deeper insight into the quotient ring than the $k$-index invariant. For a (not necessarily squarefree) monomial ideal $I$, we show in Corollary \ref{I_k, I_geq k} that the truncation of $I$ past $k$ has a linear resolution if and only if the squarefree truncation of the polarization of $I$ past $k$, $\mathcal{P}(I)_{k}$, has a linear resolution. Finally, for any graded ideal $I$, in Theorem \ref{betti I_geq k} we compute the graded Betti numbers of $R/I_{\geq k}$ for $k>\min\{\deg(u)\mid u\in I\}$. \section{The graded Betti numbers of $I_k$} In this section, let $I$ be a squarefree monomial ideal of $R$. Define $J_k=\{ u\in M_k\mid u\notin I_k\}$. Let $e_u=\supp(u)$, where $\supp(u)=\{x_i\mid u=x_1^{a_1}\cdots x_n^{a_n}, a_i\neq 0\}$, and let $G(I)$ be the set of minimal generators of $I$. The Stanley-Reisner complex of $I$ is the simplicial complex consisting of the faces which correspond to the squarefree monomials not in $I$, that is $$\Delta_{I}=\{e_u\subset \{x_1,\dots,x_n\}\mid u\text{ is a squarefree monomial not in } I\}.$$ An element $F$ of $\Delta_{I}$ is a face of $\Delta_{I}$ of dimension $|F|-1$. A maximal face (with respect to inclusion) of $\Delta_{I}$ is a facet. Denote the set of the facets of $\Delta_{I}$ by $\mathcal{F}(\Delta_{I})$. The dimension of $\Delta_{I}$ is $\dim\Delta_{I}=\max\{|F|\mid F\in\mathcal{F}(\Delta_{I})\}-1$. The pure $k$-skeleton of $\Delta_{I}$ is the simplicial complex $\Delta_{I}^{[k]}$ whose facets are the faces $F\in\Delta_{I}$ with $|F|=k+1$.
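To fix ideas, here is a small worked example of our own (immediate from the definitions above). Let $R=\K[x_1,x_2,x_3]$ and $I=(x_1x_2)$. Then $M_2=(x_1x_2,x_1x_3,x_2x_3)$, so $I_2=I\cap M_2=(x_1x_2)$, and $I_3=I\cap M_3=(x_1x_2x_3)$. The Stanley-Reisner complex is $$\Delta_{I}=\bigl\{\emptyset,\{x_1\},\{x_2\},\{x_3\},\{x_1,x_3\},\{x_2,x_3\}\bigr\},$$ so $\mathcal{F}(\Delta_{I})=\bigl\{\{x_1,x_3\},\{x_2,x_3\}\bigr\}$ and $\dim\Delta_{I}=1$. After truncation, $\Delta_{I_3}$ is the full $1$-skeleton of the simplex on $\{x_1,x_2,x_3\}$: its facets are all three $2$-element subsets, in agreement with Proposition~\ref{simplicial D_{I_k+1}} below (with $k=2$, the facet $\{x_1,x_2\}$ comes from the generator $x_1x_2$ of $I_2$, and the other two facets are the facets of $\Delta_{I_2}=\Delta_I$).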
Note that $I_{k+1}=\left(ux_i\mid\deg(u)=k,u\in I_k, x_i\notin e_u\right)+\left(v\mid \deg(v)>k,v\in I_k\right).$ \begin{lemma}\label{J_{k+1}} Let $I$ be a squarefree monomial ideal of $R$. Then for $k\geq\min\{\deg(u)\mid u\in I\}$ we have $u\in J_{k+1}$ if and only if $e_u\in\Delta_{I_k}^{[k]}$. \end{lemma} \begin{proof} ($\Rightarrow$) Let $u\in J_{k+1}$. Assume that $e_{u}\notin\Delta_{I_k}^{[k]}$. Then $u\in I_k$ and $u$ is a squarefree monomial of degree $k+1$. Thus $u\in I_{k+1}$, which is a contradiction. ($\Leftarrow$) We have $u\in G(M_{k+1})$ since $e_u\in \Delta^{[k]}_{I_k}$. We claim $u\notin I_{k+1}$. Suppose to the contrary that $u\in I_{k+1}$. Then there exists $x_t\in e_u$ such that $\hat{u}_t\in I_k$, and hence $\hat{u}_t\notin J_k$; but every degree-$k$ divisor of $u$ corresponds to a face of $\Delta_{I_k}$ and therefore lies in $J_k$, a contradiction. The claim follows. \end{proof} In the following proposition we find the facets, i.e. maximal faces, of $\Delta_{I_{k+1}}$; we later use these facets to compute the Betti numbers of $I_k$. \begin{proposition}\label{simplicial D_{I_k+1}} Let $I$ be a squarefree monomial ideal of $R$. Then for $k\geq\min\{\deg(u)\mid u\in I\}$ we have $$\mathcal{F}(\Delta_{I_{k+1}})=\{e_u\mid u\in G(I_k),\deg(u)=k\}\cup\{e_v\mid e_v\in\mathcal{F}(\Delta_{I_k}),\deg(v)\geq k\}.$$ \end{proposition} \begin{proof} Let $P_k:=\{e_u\mid u\in G(I_k),\deg(u)=k\}$ and $Q_k:=\{e_v\mid e_v\in\mathcal{F}(\Delta_{I_k}),\deg(v)\geq k\}$. Let $F\in\mathcal{F}(\Delta_{I_{k+1}})$. Then $F=e_u$ for some $u\notin I_{k+1}$. We need to consider the following three cases. \begin{itemize} \item The case when $|F|>k$. If $e_u$ is not a face of $\Delta_{I_k}$, then $u\in I_k$. But $\deg(u)\geq k+1$, so $u\in I_{k+1}$, and this contradicts $e_u\in\Delta_{I_{k+1}}$. It remains to show that $F$ is maximal in $\Delta_{I_{k}}$. Suppose $e_{ux}\in\Delta_{I_{k}}$ for some $x\notin e_u$. Note that $ux\not\in I_{k+1}$: otherwise, since $\deg(ux)>k+1$, we would have $ux\in I_{k}$. Hence $e_{ux}\in\Delta_{I_{k+1}}$, a contradiction. Therefore $F\in Q_k$. \item The case when $|F|=k$. If $u\notin I_k$, then $u\in J_k$. To obtain a contradiction, assume that $e_{ux}\in\Delta_{I_k}$ for some $x\notin e_u$. Then $\deg({ux})=k+1$, and Lemma~\ref{J_{k+1}} implies $ux\in J_{k+1}$. Hence $e_{ux}\in\Delta_{I_{k+1}}$, which again contradicts the assumption $e_u\in\mathcal{F}(\Delta_{I_{k+1}})$. Therefore $e_u\in P_k$ or $e_u\in Q_k$. \item The case when $|F|<k$. For any $x\notin e_u$ we have $e_{ux}\notin\Delta_{I_{k+1}}$. Hence $ux\in I_{k+1}$ with $\deg({ux})\leq k$, which is a contradiction. \end{itemize} Conversely, let $e_u\in P_k\cup Q_k$; we show that $e_u\in\mathcal{F}(\Delta_{I_{k+1}})$ by considering the following three cases: \begin{itemize} \item When $|e_u|=k$. If $e_u\in P_k$, then $ux\in I_{k+1}$ for all $x\notin e_u$. It follows that $e_u\in\mathcal{F}(\Delta_{I_{k+1}})$. Now if $e_u\in Q_k$, then $u\not\in I_k$. Suppose that $ux\not\in I_{k+1}$ for some $x\notin e_u$. Then $ux\notin I_k$, since otherwise $ux\in I_{k+1}$. Hence $e_{ux}\in\Delta_{I_k}$, which is a contradiction. Therefore $ux\in I_{k+1}$ for all $x\notin e_u$, and hence $e_u\in\mathcal{F}(\Delta_{I_{k+1}})$. \item If $|e_u|=k+1$, then $e_{u}\in\Delta_{I_k}$. By Lemma \ref{J_{k+1}} we have $u\in J_{k+1}$, hence $e_u\in\Delta_{I_{k+1}}$. If $e_u$ is not a facet of $\Delta_{I_{k+1}}$, then there is an $x\notin e_u$ such that $e_{ux}\in\Delta_{I_{k+1}}$. Then $ux\notin I_{k+1}$, and so $ux\notin I_{k}$. Hence $e_{ux}\in\Delta_{I_{k}}$ for some $x\notin e_u$, which is a contradiction. \item When $|e_u|\geq k+2$.
If $u\in I_{k+1}$, then $u\in I_k$, and so $e_u\notin\Delta_{I_{k}}$, which is a contradiction. The face $e_u$ is maximal in $\Delta_{I_{k+1}}$ by the same argument used in the case $|e_u|=k+1$. \end{itemize} \end{proof} For a fixed $t$, we shall write $(f_{-1}^t,f_{0}^t,\dots, f_{d-1}^t)$ for the $f$-vector of $\Delta_{I_t}$. \begin{corollary}\label{f-vector I_k} Let $I$ be a squarefree monomial ideal of $R$ and $f(\Delta_{I})=(f_{-1},f_0,f_1,\dots,f_{d-1})$. Then for $k\geq\min\{|e_u|\mid u\in I\}$ we have $$f(\Delta_{I_{k+1}})=\begin{cases} \left(\binom{n}{0},\binom{n}{1},\binom{n}{2},\dots,\binom{n}{k},f_{k},\dots,f_{d-1}\right)\quad&\text{ if }k\leq d,\\ \left(\binom{n}{0},\binom{n}{1},\binom{n}{2},\dots,\binom{n}{k}\right)\quad&\text{ if }d<k<n. \end{cases}$$ \end{corollary} \begin{proof} By Proposition \ref{simplicial D_{I_k+1}}, we have $f^{k+1}_{k-1}=f^{k}_{k-1}+|\{e_u\mid u\in G(I_k),\deg(u)=k\}|=|\{e_u\mid u\in J_k\}|+|\{e_u\mid u\in G(I_k),\deg(u)=k\}|=\binom{n}{k}$. The result then follows by induction. \end{proof} The following lemma of Hochster provides a very useful description of the graded Betti numbers of a Stanley-Reisner ring (see \cite[Theorem 5.1]{Hochster1977} or \cite[Lemma 9]{Froberg2021}). Let $I$ be a squarefree monomial ideal of $R=\K[x_1,\dots,x_n]$. We write $\Delta_W$ for the simplicial complex on the vertex set $W\subseteq\{x_1,\dots,x_n\}$ whose faces are the $F\in\Delta_{I}$ with $F\subseteq W$. \begin{lemma}[\textbf{Hochster}]\label{hochster} Let $W\subseteq\{x_1,\dots,x_n\}$ and $K_{R(W)}$ be the part of the Koszul complex $K_R$ which is of degree $\delta(W)= (d_1,\dots,d_n)$, where $d_i=1$ if $x_i\in W$ and $d_i=0$ otherwise. Then $H_{i,\delta(W)}=H_i(K_{R(W)})\cong \tilde{H}_{|W|-i-1}\left(\Delta_W;\K\right)$. \end{lemma} Now we come to our main result: a precise formula for the graded Betti numbers of the graded algebra $R/I_k$ for $k>\min\{\deg(u)\mid u\in I\}$. \begin{theorem}\label{betti I_k} Let $I$ be a squarefree monomial ideal of $R$. Then for $k>\min\{|e_u|\mid u\in I\}$ we have $$\beta_{i,i+j}(R/I_k)=\begin{cases} 1\quad&\mbox{ if } i=0,j=0,\\ 0\quad&\mbox{ if }i\neq 0, 0\leq j\leq k-2,\\ \alpha_{i,i+k-1}\quad&\mbox{ if }i\neq 0, j=k-1,\\ \beta_{i,i+j}(R/I)\quad&\mbox{ if }i\neq 0, j\geq k, \end{cases}$$ where \begin{equation}\label{formula of I_k} \displaystyle\alpha_{i,i+j}=\sum_{r=0}^{i+j}(-1)^{j-r}\binom{n-r}{i+j-r}f_{r-1}^k+\sum_{\scriptstyle \ell+m=i+j\atop\scriptstyle\ell< i}(-1)^{\ell-i-1}\beta_{\ell,\ell+m}(R/I). \end{equation} \end{theorem} \begin{proof} Let the $f$-vector of $\Delta_{I}$ be $(f_{-1},f_{0},f_{1},\dots, f_{d-1})$. Then Corollary \ref{f-vector I_k} implies that $f^k_{i-1}=\binom{n}{i}$ for $0\leq i\leq k-1$. Hence $\beta_{i,i+j}(R/I_k)=0$ for all $0\leq j\leq k-2$, except $\beta_{0,0}=1$. By Hochster's formula, Lemma \ref{hochster}, the only graded Betti numbers that change are $$\beta_{i,i+k-1}(R/I_k)=\sum_{\scriptstyle |W|=i+k-1,\atop W\subseteq \{x_1,\dots,x_n\}}\dim_{\K}\tilde{H}_{k-2}\left(\Delta_W;\K\right)=\alpha_{i,i+k-1},$$ where $\Delta_W$ is the simplicial complex on the vertex set $W$ whose faces are the $F\in\Delta_{I_k}$ with $F\subseteq W$. Thus $\beta_{i,i+j}(R/I_{k})=\beta_{i,i+j}(R/I)$ for $j\geq k$.
The Hilbert series of $\K[\Delta_{I}]$ is $$H_{\K[\Delta_{I}]}(t)=\sum_{r=0}^{d} \frac{f_{r-1} t^{r}}{(1-t)^{r}}=\frac{\sum_i(-1)^i\sum_j\beta_{i,i+j}t^{i+j}}{(1-t)^n}.$$ We have that $$\displaystyle\sum_{r=0}^{d} \frac{f_{r-1} t^{r}}{(1-t)^{r}}\times(1-t)^n=\sum_{s=0}^{n}\sum_{r=0}^{s}(-1)^{s-r}\binom{n-r}{s-r}f_{r-1}t^s.$$ Then from the Hilbert series of $\K[\Delta_{I}]$ we have $$\sum_{s=0}^{n}\sum_{r=0}^{s}(-1)^{s-r}\binom{n-r}{s-r}f_{r-1}t^s=\sum_{i}(-1)^i\sum_j\beta_{i,i+j}t^{i+ j}.$$ Setting $s=i+j$, we obtain $$\sum_{r=0}^{s}(-1)^{s-r}\binom{n-r}{s-r}f_{r-1}=\sum_{i}(-1)^i\sum_{i+j=s}\beta_{i,s},\quad s=0,1,\dots,n.$$ Now let $j=k-1$. Since $\beta_{i,i+j}(R/I_k)$, for $i\neq0$, can be nonzero only when $j\geq k-1$, we have $$\beta_{i,i+j}(R/I_{k})=\sum_{r=0}^{i+j}(-1)^{j-r}\binom{n-r}{i+j-r}f_{r-1}^k+\sum_{\scriptstyle \ell+m=i+j\atop\scriptstyle \ell<i}(-1)^{\ell-i+1}\beta_{\ell,\ell+m}(R/I).$$ \end{proof} \begin{corollary} Let $I$ be a squarefree monomial ideal of $R$. If $I$ has a linear resolution, then $I_k$ has a linear resolution for any positive integer $k$. \end{corollary} \begin{proof} Let $I$ be generated in degree $d$. If $k\leq d$, then $I_k=I_d=I$. For the case $k>d$, one can apply Theorem \ref{betti I_k}. \end{proof} \begin{corollary}\label{reg I_k} Let $I$ be a squarefree monomial ideal of $R$ and $\reg(I)=d$. Then for any positive integer $k$ we have $$\reg(I_k)=\begin{cases} d\quad\mbox{ if }d\geq k,\\ k\quad\mbox{ otherwise}. \end{cases}$$ \end{corollary} \begin{proof} We have $\beta_{i,i+d}(I)\neq0$ for some $i$. Theorem \ref{betti I_k} implies that $\reg(I_k)=d$ whenever $d\geq k$, and $\reg(I_k)=k$ otherwise. \end{proof} \begin{corollary}\label{reg linear} Let $I$ be a squarefree monomial ideal of $R$. Then $\reg(I)=d$ if and only if $d$ is the smallest integer such that $d\geq\min\{\deg(u)\mid u\in I\}$ and $I_{d}$ has a linear resolution. In particular, $I_k$ has a linear resolution for all $k\geq\reg(I)$. \end{corollary} \begin{proof} We have $\beta_{i,i+d}(I)\neq0$ for some $i$. Hence Theorem \ref{betti I_k} implies that $I_d$ has a $d$-linear resolution but $I_{d-1}$ does not have a linear resolution. Conversely, there is a non-negative integer $m$ such that $I=I_{d-m}$. By Theorem \ref{betti I_k} we have $\beta_{i,i+d}(I)=\beta_{i,i+d}(I_{d-m})=\beta_{i,i+d}(I_d)$. Thus $\reg I=d$. \end{proof} \begin{example}\label{example 1} Let $I=\left(x_1x_2x_3,x_4x_5x_6x_7,x_1x_2x_4x_5x_8x_9\right)$ be an ideal of $R=\K[x_1,\dots,x_9]$. Then the $f$-vector of $\Delta_{I}$ is $(1,9,36,83,119,106,53,10)$ and the graded Betti numbers of $R/I$ are $\beta_{0,0}=1$, $\beta_{1,3}=1$, $\beta_{1,4}=1$, $\beta_{1,6}=1$, $\beta_{2,7}=2$, $\beta_{2,8}=1$ and $\beta_{3,9}=1$. Corollary \ref{f-vector I_k} implies that the $f$-vector of $\Delta_{I_{5}}$ is $(1,9,36,84,126,106,53,10)$. By Theorem \ref{betti I_k}, the graded Betti numbers of $T=R/I_5$ for $j=4$ are determined by $$\beta_{i,i+4}(T)=\sum_{r=0}^{4+i}(-1)^{4-r}\binom{9-r}{4+i-r}f_{r-1}^5+\sum_{\scriptstyle \ell+m=i+4\atop\scriptstyle \ell<i}(-1)^{\ell-i+1}\beta_{\ell,\ell+m}(R/I),$$ and $\beta_{1,6}(T)=1$, $\beta_{2,7}(T)=2$, $\beta_{2,8}(T)=1$ and $\beta_{3,9}(T)=1$. Then $\beta_{1,5}(T)=20$, $\beta_{2,6}(T)=49+\beta_{1,6}=50$, $\beta_{3,7}(T)=53+\beta_{2,7}=55$, $\beta_{4,8}(T)=30-\beta_{2,8}=29$ and $\beta_{5,9}(T)=7-\beta_{3,9}=6$. By Corollary \ref{reg linear}, $I_7$ has a linear resolution but $I_6$ does not, since $\reg(I)=7$.
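For instance, for $i=1$ the second sum is empty and the formula gives $$\beta_{1,5}(T)=\binom{9}{5}\cdot 1-\binom{8}{4}\cdot 9+\binom{7}{3}\cdot 36-\binom{6}{2}\cdot 84+\binom{5}{1}\cdot 126-\binom{4}{0}\cdot 106=20.$$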
\end{example} \section{Polarizations} Let $I=(u_1,\dots,u_t)$ be a monomial ideal of $R=\K[x_1,\dots,x_n]$ with $u_j=\prod_{i=1}^n x_i^{a_{i,j}}$ for $1\leq j\leq t$. For ${1\leq i\leq n}$, let $a_i=\max\{a_{i,j}\mid 1\leq j\leq t\}$. The polarization of $I$, denoted by $\mathcal{P}(I)$, is a squarefree monomial ideal of the polynomial ring $$S=\K[x_{1,1},x_{1,2},\dots,x_{1,a_{1}},x_{2,1},x_{2,2},\dots,x_{2,a_{2}},\dots,x_{n,1},x_{n,2},\dots,x_{n,a_{n}}]$$ with minimal generators $\mathcal{P}(u_1),\dots,\mathcal{P}(u_t)$, where $$\mathcal{P}(u_j)=\prod_{i=1}^n\prod_{l=1}^{a_{i,j}} x_{i,l}, \quad 1\leq j\leq t.$$ In 1982, Fr\"oberg proved the following lemma. \begin{lemma}[\cite{Froberg1982}]\label{Froberg} Let $I=(u_1,\dots,u_t)$ be a monomial ideal of $R$ such that for each $i$, the variable $x_{i,a_{i}}$ appears in at least one of the monomials $\mathcal{P}(u_1),\dots,\mathcal{P}(u_t)$. Then the elements $f_{i,j}=x_{i,1}-x_{i,j}$, for $1\leq i\leq n$ and $1<j\leq a_{i}$, form a regular sequence of degree one in $S/\mathcal{P}(I)$. Moreover, $R/I\cong S/(\mathcal{P}(I)+J)$, where $J=(f_{i,j}\mid1\leq i\leq n, 1< j\leq a_{i})$. \end{lemma} For any quotient ring, factoring out a regular sequence of degree one does not change the Betti numbers. That is, the graded Betti numbers of $R/I$ and $S/\mathcal{P}(I)$ are the same (see \cite{Froberg1982}). Hence $R/I$ has a linear resolution if and only if $S/\mathcal{P}(I)$ has a linear resolution. We show that the analogous statement also holds for componentwise linear ideals. Herzog and Hibi in \cite{HerzogHibi1999} defined the concept of (squarefree) componentwise linear ideals. A graded ideal $I\subseteq R$ is componentwise linear if $I_{<j>}$ has a linear resolution for all $j$, where $I_{<j>}$ is the ideal generated by all homogeneous polynomials of degree $j$ belonging to $I$. An ideal $I\subseteq R$ is squarefree componentwise linear if $I_{[j]}$ has a linear resolution for all $j$, where $I_{[j]}$ is the ideal generated by all squarefree homogeneous polynomials of degree $j$ belonging to $I$. The following theorem is standard, but we include it for the sake of self-containment. \begin{theorem}\label{linear reg} Let $R/I$ be a graded algebra with all minimal generators of $I$ of degree $\ge k$. Then $R/I$ has a $k$-linear resolution if and only if $\reg(I)=k$. \end{theorem} \begin{proof} Suppose $R/I$ has a $k$-linear resolution $$0\longleftarrow R/I\longleftarrow R\longleftarrow R^{\beta_1}[-k]\longleftarrow R^{\beta_2}[-k-1]\longleftarrow\cdots\longleftarrow R^{\beta_m}[-k-m+1]\longleftarrow 0.$$ Thus $\reg(R/I)=\max_i\{(k+i-1)-i\}=k-1$, so $\reg(I)=k$. If all generators of $I$ are of degree $\ge k$, a minimal resolution of $R/I$ has the form $$0\longleftarrow R/I\longleftarrow R\longleftarrow \oplus_{j=1}^{j_1} R^{\beta_{1,k-1+j}}[-k+1-j]\longleftarrow \oplus_{j=1}^{j_2} R^{\beta_{2,k+j}}[-k-j]\longleftarrow\cdots$$ $$\cdots\longleftarrow\oplus_{j=1}^{j_m} R^{\beta_{m,k+m-2+j}}[-k-m+2-j]\longleftarrow0.$$ If $\reg(I)=k$ (so $\reg(R/I)=k-1$), then $(k+i-2+j_i)-i\le k-1$ for all $i$, so $j_i=1$ for all $i$, and the resolution is linear. \end{proof} \begin{proposition}\label{componentwise linear} Let $I$ be a monomial ideal of $R$. Then the following conditions are equivalent: \begin{enumerate}[\rmfamily i)] \item $I$ is componentwise linear; \item $\mathcal{P}(I)$ is componentwise linear; \item $\mathcal{P}(I)$ is squarefree componentwise linear. \end{enumerate} \end{proposition} \begin{proof} $(i)\Leftrightarrow (ii)$ Let $k$ be a positive integer.
By Lemma \ref{Froberg}, $\reg(I)=\reg(\mathcal{P}(I))$, and $I$ has a linear resolution if and only if $\mathcal{P}(I)$ has a linear resolution. It then follows from Theorem \ref{linear reg} that $\mathcal{P}(I_{<k>})$ has a linear resolution if and only if $\reg(I_{<k>})=k$. But Theorem \ref{linear reg} also implies that $\mathcal{P}(I)_{<k>}$ has a linear resolution if and only if $\reg(\mathcal{P}(I)_{<k>})=k$. Hence $\mathcal{P}(I_{<k>})$ has a linear resolution if and only if $\mathcal{P}(I)_{<k>}$ has a linear resolution. The claim then follows from the definition of componentwise linearity. $(ii)\Leftrightarrow (iii)$ See \cite[Proposition 1.5]{HerzogHibi1999}. \end{proof} \begin{corollary}\label{I_k, I_geq k} Let $I$ be a monomial ideal of $R$. Then the following conditions are equivalent: \begin{enumerate}[\rmfamily i)] \item $I_{\geq k}$ has a linear resolution; \item $\mathcal{P}(I)_{\geq k}$ has a linear resolution; \item $\mathcal{P}(I)_k$ has a linear resolution. \end{enumerate} \end{corollary} \begin{proof} Note that by the argument in the proof of Proposition \ref{componentwise linear}, $\mathcal{P}(I_{\geq k})$ has a linear resolution if and only if $\mathcal{P}(I)_{\geq k}$ has a linear resolution. The proof is then an immediate consequence of Proposition \ref{componentwise linear}. \end{proof} \begin{example} Let $J=\left(x_1^3,x_2^4,x_1^2x_2^2x_3^2\right)$ be an ideal of $\K[x_1,x_2,x_3]$. The polarization of $J$ can be thought of as the ideal $I$ in Example \ref{example 1}. It follows from Corollaries \ref{reg linear} and \ref{I_k, I_geq k} that $J_{\geq k}$ has a linear resolution for all $k\geq7$, but not for $k\leq6$. \end{example} \section{The graded Betti numbers of $I_{\geq k}$} In this section we consider quotients by $I_{\ge k}=I\cap M^k$, where $M$ is the graded maximal ideal, for any graded, not necessarily monomial, ideal. We show that we can determine all graded Betti numbers of $R/I_{\ge k}$ if we know the Betti numbers of $R/I$ and the Hilbert series of $R/I_{\ge k}$. The following theorem is probably well known, but we have not found a reference. \begin{theorem}\label{betti I_geq k} Let $I$ be a graded ideal in $R=\K[x_1,\ldots,x_n]$ and let $k\geq\min\{\deg(u)\mid u\in I\}$. If $j<i+k-1$, then $\beta_{i,j}(R/I_{\ge k})=0$. If $j>i+k-1$, then $\beta_{i,j}(R/I_{\ge k})=\beta_{i,j}(R/I)$. Thus, if we know the graded Betti numbers of $R/I$ and the Hilbert series of $R/I_{\ge k}$, we can determine all the graded Betti numbers of $R/I_{\ge k}$. \end{theorem} \begin{proof} We think of the Betti numbers as the dimensions of the homology of the Koszul complex of $R/I_{\ge k}$. A ``monomial'' $mT_{l_1}\wedge\cdots\wedge T_{l_i}$ is of degree $(i,i+j)$ if $m\in R/I_{\ge k}$ is of degree $j$, and elements of degree $(i,i+j)$ are linear combinations of such monomials. Now $\beta_{1,j}(R/I_{\ge k})=0$ if $j<k$, since the generators of $I_{\ge k}$ have degree $\ge k$, and then $\beta_{i,j}(R/I_{\ge k})=0$ if $j<i+k-1$, since if $j_i$ is the smallest $j$ for which $\beta_{i,j}(R/I_{\ge k})\ne0$, then $j_i<j_{i+1}$, because we have a minimal resolution. In degrees $j\ge k$, $R/I_{\ge k}$ is identical with $R/I$, so $\beta_{i,j}(R/I_{\ge k})=\beta_{i,j}(R/I)$ if $j>i+k-1$. Since the Hilbert series of $R/I_{\ge k}$ equals $\sum_{i,j}(-1)^i\beta_{i,i+j}(R/I_{\ge k})t^{i+j}/(1-t)^n$, we can determine the remaining Betti numbers $\beta_{i,i+k-1}(R/I_{\ge k})$ from the Hilbert series. \end{proof} We first give some corollaries.
\begin{corollary} We have that the $k$-index of $R/I_{\ge k}$ is smaller than or equal to the $(k+1)$-index of $R/I_{\ge k+1}$. In particular, if $R/I_{\geq k}$ has a linear resolution, then $R/I_{\geq k+1}$ has a linear resolution. \end{corollary} \begin{corollary} Let $I$ be a graded ideal of $R$ and $\reg(I)=d$. Then for any positive integer $k$ we have $$\reg(I_{\geq k})=\begin{cases} d\quad\mbox{ if }d\geq k,\\ k\quad\mbox{ otherwise}. \end{cases}$$ \end{corollary} \begin{proof} We have $\beta_{i,i+d}(I)\neq0$ for some $i$. Theorem \ref{betti I_geq k} implies that $\reg(I_{\geq k})=d$ whenever $d\geq k$, and $\reg(I_{\geq k})=k$ otherwise. \end{proof} For a complete intersection generated in degree 2, the result is easy to describe. \begin{corollary}\label{ind} If $I$ is an artinian complete intersection generated in degree 2, then the $k$-index of $R/I_{\geq k}$ is $k-1$ if $2\le k\le n$, and the $k$-index is $\infty$ if $k\ge n+1$. \end{corollary} \begin{corollary} If $S=\K[x_1,\ldots,x_n]/((f_1,\ldots,f_n)\cap M^k)$, where $f_1,\ldots,f_n$ is a complete intersection in degree $d$, then $S$ has a linear resolution exactly for $k\ge nd-n+1$. \end{corollary} \begin{corollary} Let $S=\K[x_1,\ldots,x_n]/(f_1,\ldots,f_n)$, where $f_1,\ldots,f_n$ is a complete intersection in degree $d$, and let $T=\K[x_1,\ldots,x_n]/((f_1,\ldots,f_n)\cap M^k)$. Suppose that $k\le n(d-1)$. Then $\beta_{i,j}(T)\ne0$ only if $(i,j)=(0,0)$, $(i,j)=(i,k+i-1)$ for $1\le i\le n$, or $(i,j)=(r,dr)$ for $\frac{k-1}{d-1}<r\le n$. Furthermore, $\beta_{r,dr}={n\choose r}$ for those $r$. \end{corollary} \begin{example} Let $S=\K[x_1,\ldots,x_8]/\left((x_1^2,\ldots,x_8^2)\cap M^5\right)$. In $S$ we have all monomials of degree $<5$, but only the squarefree monomials in degrees 5, 6, 7, and 8, and nothing in degrees $>8$. Thus the Hilbert series is $$1+{8\choose1}t+{9\choose2}t^2+{10\choose3}t^3+{11\choose4}t^4+{8\choose3}t^5+{8\choose2}t^6+{8\choose1}t^7+{8\choose0}t^8=$$ $$\frac{(1-t)^8(1+{8\choose1}t+{9\choose2}t^2+{10\choose3}t^3+{11\choose4}t^4+{8\choose3}t^5+{8\choose2}t^6+{8\choose1}t^7+{8\choose0}t^8)}{(1-t)^8}=$$ $$\frac{1-736t^5+4200t^6-10528t^7+14910t^8-12832t^9+6720t^{10}-2016t^{11}+288t^{12}-8t^{14}+t^{16}}{(1-t)^8}.$$ We have $\beta_{5,10}=56,\beta_{6,12}=28,\beta_{7,14}=8,\beta_{8,16}=1$. The Hilbert series gives $\beta_{0,0}=1,\ \beta_{1,5}=736,\ \beta_{2,6}=4200,\ \beta_{3,7}=10528,\ \beta_{4,8}=14910,\ \beta_{5,9}=12832,\ \beta_{6,10}=6720+56=6776,\ \beta_{7,11}=2016,\ \beta_{8,12}=288-28=260$. \end{example} \bibliographystyle{plain}
\section{Introduction} Bergman discovered that the Riemann map associated to a bounded simply-connected domain $D $ in the complex plane $ \mathbb C$ can be expressed very simply in terms of his kernel function $K(z,w)$. For some fixed $p \in D$, after a dilation and translation, $$ K^{-1}(z, p) \left. \frac{\partial}{\partial \overline {t } } \right|_{t=p} K(z, t) $$ is a biholomorphic map from $D$ onto the unit disc around the origin (see \cite{Bell06} and \cite[Chap. VI]{Ber70}). Bergman in \cite{Ber} introduced his representative coordinates as a tool for generalizing the Riemann mapping theorem to $ \mathbb C^n$, $n \geq 1$. Let $\Omega \subset \mathbb C^n$ be a bounded domain whose Bergman metric is denoted by $g$. Relative to a point $p\in \Omega$, the Bergman representative coordinate $T (z) =(w_1, \dots , w_n)$ is defined as \begin{equation} \label{rep} w_{\alpha} (z):=\sum _{j=1}^{n} g^{\bar j \alpha }(p) \left(K(z, p) ^{-1} \left. \frac{\partial}{\partial \overline {t_j} } \right|_{t=p} K(z, t) - \left. \frac{\partial}{\partial \overline {t_j} } \right|_{t=p} \log K(t, t)\right), \quad \alpha = 1, \cdots, n, \end{equation} where $K(\cdot , p)$ is the Bergman kernel and $g^{\bar j \alpha }$ are the entries of the inverse of the matrix $[g_{\alpha \bar{j}}]$ associated with the Bergman metric. Since the possible obstruction in \eqref{rep} is that $K(\cdot , p)$ may have zeros, $T (z)$ is in general defined and holomorphic only outside the zero set of $K(\cdot , p)$. The study of zeros of the Bergman kernel has attracted much interest, and domains for which the Bergman kernel is zero-free are known as Lu Qi-Keng domains (or those satisfying the Lu Qi-Keng conjecture, cf. \cite{Bo}), after Lu's well-known paper \cite{Lu} on his uniformization theorem. In that paper, Lu proved that for a bounded domain $\Omega$ in $\mathbb C^n$ with a complete Bergman metric of constant holomorphic sectional curvature, the Bergman representative coordinate is a {\it biholomorphism that maps $\Omega$ to a Euclidean ball}. Alternative proofs of Lu's theorem are available by following Bergman's key idea that biholomorphic mappings become linear when represented in his representative coordinates, cf. \cite{GKK, Yoo}. See also \cite{CW} for a simplification of Lu's proof by Cheung and the second author. Lu's theorem also played a decisive role in the resolution of the Cheng conjecture, which asserts that for a smoothly bounded strictly pseudoconvex domain, the Bergman metric is K\"{a}hler-Einstein if and only if the domain is biholomorphic to a ball, and which was recently confirmed by Huang and Xiao \cite{HX}, after the previous works of Fu and the second author \cite{FW} and Nemirovski and Shafikov \cite{NS}. In all the above proofs, the completeness of the Bergman metric was crucial. Since a bounded domain which is complete with respect to the Bergman metric is pseudoconvex (see \cite{Bre}), and since Bergman completeness holds for any bounded pseudoconvex domain with H\"older boundary in $\mathbb C^n$ (see \cite{AHP, Chen}), in this paper we mostly focus on domains with worse boundaries. \medskip On the other hand, the past decade has witnessed remarkable progress on the sharp $L^2$ extension theorems of the Ohsawa-Takegoshi type (see \cite{OT, Bl13, GZ, BL}, the survey papers \cite{Bl, O20, O20B, Z}, and the references therein), which in particular solved a long-standing conjecture of Suita \cite{Su}.
The Green's function on a hyperbolic Riemann surface $X$ induces the logarithmic capacity $c_{\beta}$, which is defined as $$ c_{\beta}(z_0) = \lim_{z \to z_0} \exp(G(z, z_0) - \log|w(z)|), $$ where $w$ is a fixed local coordinate in a neighborhood of $z_0 \in X$ with $w(z_0) =0$. Denote by $K$ the Bergman kernel on the diagonal for holomorphic 1-forms on $X$. Suita's conjecture (now a theorem of B\l{}ocki, and Guan and Zhou) asserts that for any surface $X$ as above, it holds that \begin{equation} \label{leq} \pi K \geq c_{\beta}^2, \end{equation} and equality in \eqref{leq} holds true at some $z\in X$ if and only if $X$ is {\it biholomorphic to a disc possibly less a relatively closed polar set.} Here, a polar set is the local singularity set of a subharmonic function. Since the possible polar part is negligible for $L^2$ holomorphic 1-forms, the biholomorphism can be expressed in terms of the Bergman representative coordinate (see \cite[Chap. 4]{O18}). Due to an identity $ (\log c_{\beta} )_{ z \bar z}=\pi K $ proved by Suita in the same paper \cite{Su}, his conjecture has a geometric interpretation: the inequality part in \eqref{leq} is equivalent to the statement that the Gaussian curvature of the invariant metric $c_{\beta}^2(z)dz\otimes d\overline{z}$ is bounded above by $-4$; the equality/rigidity part, which says that the curvature attains $-4$ at some $z \in X$, guarantees that the surface is necessarily as asserted, with the curvature identically $-4$. A similar curvature property holds for the Carath\'eodory metric (see \cite{Su73, W}); additional results on metrics of constant Gaussian curvature were obtained by the first author in \cite{D}. Notice that neither boundary regularity nor completeness of the metric is assumed in the above Suita type problems. Moreover, B\l{}ocki and Zwonek \cite{BZ15} obtained a multidimensional version of the Suita conjecture (see also \cite{BBMV, GZa} for related comparison results). \medskip Linking the above two uniformization results of the Lu and Suita types, we aim to provide a curvature characterization of pseudoconvex domains that are biholomorphic to a ball possibly less a relatively closed pluripolar set. This will not only extend Lu's theorem towards the Bergman-incomplete situation but also generate higher dimensional results analogous to the equality part of Suita's conjecture. In \cite{DWo}, the authors initiated such an attempt by using ideas from Lu’s original paper as well as Calabi’s concept of diastasis, and replaced the Bergman completeness with other conditions before dropping it completely. There, the authors showed that a bounded pseudoconvex domain $\Omega \subset \mathbb C^n$ whose Bergman metric has negative constant holomorphic sectional curvature is a Lu Qi-Keng domain, and such $\Omega$ must be {\it biholomorphic to a ball possibly less a relatively closed pluripolar set} if there exists some point $p \in \Omega$ such that \medskip 1) $|K(z, p)|$ is bounded from above by a finite constant $\mathcal C>0$ for any $z \in \Omega$, and \medskip 2) the Bergman representative coordinate $T $ defined at $p$ is continuous up to $ \overline\Omega$. \medskip In this paper, we are able to drop the above second condition for domains in $\mathbb C$.
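\medskip For orientation, we record the model case of the unit disc, where all of the quantities above can be computed in closed form; this routine verification is included only as a consistency check. For $X=\mathbb D:=\{z \in \mathbb C : |z|<1\}$ one has $K(z, w)=\pi^{-1}(1-z\bar w)^{-2}$ and $G(z, z_0)=\log \left| \frac{z-z_0}{1-\overline{z_0}z} \right|$, so taking $w(z)=z-z_0$ in the definition of the capacity gives
$$
c_{\beta}(z_0) = \lim_{z \to z_0} \exp\big(G(z, z_0) - \log|z-z_0|\big) = \frac{1}{1-|z_0|^2},
$$
while $\pi K(z_0, z_0)=(1-|z_0|^2)^{-2}=c_{\beta}^2(z_0)$. Thus equality holds in \eqref{leq} at every point, as it must for a disc, and $c_{\beta}^2(z)dz\otimes d\overline{z}$ is the Poincar\'e-type metric of constant curvature $-4$. Likewise, taking $p=0$ in \eqref{rep}, one computes $K(z, 0)=\pi^{-1}$, $\left. \frac{\partial}{\partial \overline {t } } \right|_{t=0} K(z, t)=2z/\pi$, $\left. \frac{\partial}{\partial \overline {t } } \right|_{t=0} \log K(t, t)=0$ and $g(0)=2$ for the Bergman metric $g(z)\,dz\otimes d\overline{z}$ with $g(z)=2(1-|z|^2)^{-2}$, so that $T(z)=z$: the representative coordinate relative to the origin is exactly the Riemann map, in accordance with Bergman's formula. Note also that this Bergman metric has Gaussian curvature identically $-2$, which explains the normalization in the hypotheses below.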
\begin{theorem} \label{1-dim} For a domain $\Omega \subset \mathbb C$ whose Bergman metric has Gaussian curvature identically equal to $-2$, it holds that \begin{enumerate} \item [$1)$] If there exists some point $p \in \Omega$ such that $ \left| K(z, p) \right| $ is bounded from above by a finite constant $\mathcal C_1>0$ for any $z \in \Omega$, then the Bergman representative coordinate $T (z) $ relative to $p$ is biholomorphic from $\Omega$ to a disc possibly less a relatively closed polar set, and $T $ extends continuously up to $\overline\Omega $. \item [$2)$] Under the same assumption as in $1)$, if $ \left| K(z, p) \right| $ is bounded from below by a finite constant $\mathcal C_2>0$ for any $z \in \Omega$, then $T $ extends to a homeomorphism of the closures. \end{enumerate} \end{theorem} Theorem \ref{1-dim} improves Theorem 1.3 in \cite{DWo} for the case of planar domains. The conditions in Theorem \ref{1-dim} relate to an important result of Kerzman \cite{Ker}, who used Kohn's theory of the $\bar \partial$-Neumann problem to show that on a bounded strictly pseudoconvex domain $\Omega$ with $C^\infty$-smooth boundary, for each fixed $p \in \Omega$, the Bergman kernel $K(\cdot, p)$ is $C^\infty$ up to the boundary. In \cite[p.151-152]{Ker}, he gave an example of a simply-connected planar domain whose Bergman kernel $K(\cdot, p)$ blows up to infinity at some boundary point; see also \cite{For} for an example of Forn\ae ss. \medskip Let $D$ be a bounded simply-connected domain in $\mathbb C$, and let $S: \mathbb D \to D$ be the Riemann map with $S(0) = 0$ and $S'(0) > 0$, where $\mathbb D$ denotes the unit disc around the origin. It is well known that $S$ extends continuously up to $\overline {\mathbb D}$ if and only if $\partial D$ is a continuous curve, and a celebrated theorem of Carath\'eodory \cite{Cara} states that $S$ extends to a homeomorphism of $\overline{\mathbb D}$ onto $ \overline D$ if and only if $\partial D$ is a Jordan curve. In the latter case, if the Jordan curve is $C^\infty$-smooth, then $S$ and all its derivatives extend continuously to $ \overline {\mathbb D}$ (see \cite{BK}). The corollary below gives sufficient conditions for the extension of the Riemann map to the closures. \begin{cor} \label{simply} Let $D$ be a bounded, simply-connected domain in $\mathbb C$. Then, for any $p \in D$, the Bergman representative coordinate $T (z) $ relative to $p$ is biholomorphic from $D$ to a disc $\mathbb D_r:= \{ w\in \mathbb C : |w|^2 < 2g^{-1}(p) \}$, where $g$ is the Bergman metric of $D$. Moreover, \begin{enumerate} \item [$1)$] If there exists some point $p \in D$ such that $ \left| K(z, p) \right| $ is bounded from above by a finite constant $\mathcal C_1>0$ for any $z \in D$, then $T $ extends continuously up to $\overline { D}$. \item [$2)$] If there exists some point $p \in D$ such that $ \left| K(z, p) \right| $ is bounded from below by a finite constant $\mathcal C_2>0$ for any $z \in D$, then the inverse map of $T$ extends continuously up to $\overline {\mathbb D_r}$; in particular, $\partial D$ is a continuous curve. \item [$3)$] Under the same assumption as in $1)$, if $ \left| K(z, p) \right| $ is bounded from below by a finite constant $\mathcal C_2>0$ for any $z \in D$, then $T $ extends to a homeomorphism of the closures, $\tilde {T} : \overline D \to \overline{\mathbb D_r}$; in particular, $\partial D$ is a Jordan curve.
\end{enumerate} \end{cor} Condition 3) in Corollary \ref{simply} is only sufficient but not necessary for the continuous extension to the closures, in view of Kerzman's example in \cite{Ker}. Corollary \ref{simply} also yields a criterion (see Proposition \ref{simply counter}) for a simply-connected planar domain $D$ to satisfy $$ \inf_{z \in D} |K(z, p)| = 0, $$ which may be compared with two examples of Webster in \cite{Web}. The conditions in Theorem \ref{1-dim} and Corollary \ref{simply} are entirely about the Bergman kernel, without assuming any topological condition. \medskip Historically, the Bergman representative coordinate has provided a natural tool for analyzing the extension of biholomorphic maps to the boundary. Fefferman \cite{Fef} proved that biholomorphic maps between two bounded domains in $ \mathbb C^n$ with smooth, strictly pseudoconvex boundaries admit smooth extensions to the closures of the domains. Previously, it was known that such maps extend to homeomorphisms of the closures (see the papers of Henkin \cite{Hen} and Vormoor \cite{Vo}). Webster \cite{Web}, and Nirenberg, Webster and Yang \cite{NWY} gave new proofs of Fefferman's mapping theorem, whose original proof used the asymptotic expansion of the Bergman kernel and an analysis of the boundary behavior of the geodesics of the Bergman metric. Later, the Bergman-style coordinates were applied by Bell and Ligocka \cite{BLi, Lig} to prove that subelliptic estimates for the $\bar \partial$-Neumann problem imply boundary regularity of biholomorphic maps. See also \cite{Bell81, DF, For} for the extensions of biholomorphic maps involving more general domains. For more applications of the Bergman representative coordinates, see the papers \cite{GK, Bell06, BCOV, Dinew, Berceanu, Yoo, Kra} and the references therein. \medskip For higher dimensional domains, we improve our results in \cite{DWo} by considering the following technical condition, which is similar to the so-called Condition $ R$, cf. \cite{BLi, BB}. \begin{definition} A domain $\Omega$ in $ \mathbb C^n, n\ge 1$, is said to satisfy Condition $(B)$ if there exists some point $p \in \Omega$ such that for each $ j \in \{1, \ldots, n\}$, $$ \left| { \frac{\partial }{\partial z_j } K(z, p) } \right| \leq \mathcal C |K(z, p) |, \quad \text{for any\, } z \in \Omega, $$ where $\mathcal C>0$ is a finite constant. \end{definition} For a bounded domain $ \Omega $ satisfying Condition $(B)$, it follows from our Lemma \ref{similar lemma} that $|K(z, p)|$ is also bounded from above by a finite constant for all $z\in \Omega$, provided that $K(\cdot, p) $ is zero-free. This further implies that Condition $(B)$ is {\it not} a biholomorphic invariant (see Remark \ref{disc}). Our next theorem is stated as follows. \begin{theorem} \label{with B} Let $\Omega \subset \mathbb C^n, n \ge 1$, be a bounded pseudoconvex domain whose Bergman metric $g$ has its holomorphic sectional curvature identically equal to a negative constant $-c^2$. \begin{enumerate} \item [$1)$] If $\Omega$ satisfies Condition $(B)$ at some point $p \in \Omega$, then the Bergman representative coordinate $T (z) $ relative to $p$ is biholomorphic from $\Omega$ to a ball \begin{equation} \label{ball} \mathcal B:= \{ w\in \mathbb C^n : \sum_{\alpha, \beta=1}^n w_\alpha g_{ \alpha \bar \beta } (p) \overline {w_\beta } < {2}{c^{-2}} \} \end{equation} possibly less a relatively closed pluripolar set, where $n = 2 c^{-2}-1$, and $T $ extends continuously up to $\overline\Omega $.
\item [$2)$] Under the same assumption as in $1)$, if $ \left| K(z, p) \right| $ is bounded from below by a finite constant $\mathcal C_2>0$ for any $z \in \Omega$, then $T $ extends to a homeomorphism of the closures. \end{enumerate} \end{theorem} \medskip The pseudoconvexity in Theorem \ref{with B} is a necessary assumption. To see this, remove from $\mathbb B^n, n \geq 2$, a compact subset $G$ of Lebesgue $\mathbb R^{2n}$-measure zero such that $\mathbb B^n \setminus G$ is connected. Then, the Bergman metric on $\mathbb B^n \setminus G$ extends to $\mathbb B^n$ by Hartogs' extension theorem. So, the assertion of Theorem \ref{with B} fails if $G$ is not pluripolar. \medskip We say that a set $E$ is pluripolar if there exists a plurisubharmonic function $\varphi$ in $\mathbb C^n$ such that $\varphi = -\infty$ on $E$. In view of a result \cite{S82} of Siciak, a pluripolar set is negligible for $L^2$ holomorphic functions. It is known that a bounded $L^2$-domain of holomorphy is pseudoconvex and its boundary contains no pluripolar part, cf. \cite{PZ, I04}; see Example \ref{exam} for a pseudoconvex domain which is not an $L^2$-domain of holomorphy. Theorem \ref{with B} directly yields the following corollary. \begin{cor} \label{Cor} Let $\Omega \subset \mathbb C^n$ be a bounded $L^2$-domain of holomorphy such that the holomorphic sectional curvature of the Bergman metric on $\Omega$ is identically equal to a negative constant $-c^2$. \begin{enumerate} \item [$1)$] If $\Omega$ satisfies Condition $(B)$ at some point $p \in \Omega$, then the Bergman representative coordinate $T (z) $ relative to $p$ is biholomorphic from $\Omega$ to a ball $\mathcal B $, and $T $ extends continuously up to $\overline\Omega $. \item [$2)$] Under the same assumption as in $1)$, if $ \left| K(z, p) \right| $ is bounded from below by a finite constant $\mathcal C_2>0$ for any $z \in \Omega$, then $T $ extends to a homeomorphism of the closures, $\tilde {T} : \overline\Omega \to \overline{\mathcal B}$. \end{enumerate} \end{cor} The assumptions on the Bergman kernel in Part $2)$ of both Theorem \ref{with B} and Corollary \ref{Cor} can be further weakened, as stated in our Theorem \ref{weak thm} and Corollary \ref{weak cor}. Our last result gives an estimate of the Bergman kernel on bounded domains in $\mathbb C^n$ with the Bergman metric of constant holomorphic sectional curvature. \begin{theorem} \label{kernel similar} Let $ \Omega \subset \mathbb C^n, n\geq 1$, be a bounded domain whose Bergman metric has constant holomorphic sectional curvature. Then, for any $p \in \Omega$, there exists a small neighborhood $U$ of $p$ such that \begin{equation} \label{2 similar} \sup_{\zeta \in U } \left| K(z, \zeta)\right| \le 2 \left| K(z, p) \right|, \quad \text{for any\, } z \in \Omega. \end{equation} \end{theorem} For any domain in $\mathbb C^n$ admitting the Bergman metric, we show in Proposition \ref{Bergman bounded} that if there exists a point $p $ and a neighborhood $ U $ of $p$ such that \eqref{2 similar} holds true, then the Bergman representative coordinate relative to $p$ becomes a bounded holomorphic map. \medskip The organization of the paper is as follows. In Section 2, after deriving some inequalities for domains whose Bergman metric has constant holomorphic sectional curvature, we prove our multidimensional results. In Section 3, we obtain one dimensional results, including those on the extension of the Riemann map to the boundaries of simply-connected planar domains.
In Section 4, we study the boundedness of the Bergman representative coordinate. \section{Proofs of multidimensional results} Recall that a bounded domain $\Omega \subset \mathbb C^n$ is called a Lu Qi-Keng domain if for any $p \in \Omega$, its Bergman kernel $K(\cdot , p)$ has no zeros, cf. \cite{Bo, Lu}. The authors in \cite{DWo} showed that if the (not necessarily complete) Bergman metric $g$ of a bounded domain $\Omega$ has constant holomorphic sectional curvature $-c^2$, then $\Omega$ is Lu Qi-Keng and the Bergman representative coordinate $T$ relative to $p$ maps $\Omega$ to a ball $ \mathcal B $ defined by \eqref{ball}. Previously, Lu's theorem in \cite{Lu} yields the same conclusions under the additional assumption that $\Omega$ is Bergman complete. One key step in \cite{DWo} was the use of Calabi’s concept of diastasis on a bounded domain $\Omega \subset \mathbb C^n$. Fix a point $z_0\in \Omega$ and let $A_{z_0}:=\{z\in \Omega \, | \,K(z, z_0) =0 \, \}$ be the zero set of the Bergman kernel $K(\cdot , z_0)$. Since $A_{z_0}$ is an analytic variety, the domains $\Omega \setminus A_{z_0}$ and $\Omega$ have the same Bergman kernel $K$ and Bergman metric $g$. Consider on $\Omega \setminus A_{z_0}$ the K\"{a}hler potential \begin{equation} \label{dia} \Phi_{z_0}(z):= \log \frac{K(z, z) K(z_0, z_0)}{ |K(z, z_0)|^2} \end{equation} for the Bergman metric $g=\partial \overline \partial \Phi_{z_0}$; we call the function $\Phi_{z_0}(z)$ the Bergman-Calabi diastasis relative to $z_0$. The idea in \cite{Lu, DWo} was to investigate locally the Taylor expansion of the K\"ahler potentials for the Bergman metric. More precisely, at any $p \in \Omega$ with \begin{equation} \label{normal} g_{\alpha \bar \beta} (p) =\delta_{\alpha \beta }, \end{equation} there exists a neighborhood $U_p$ such that the Bergman kernel on the diagonal can be decomposed as \begin{equation} \label{K(z, z)} K(z, z)= \left (1 - \frac{c^2}{2} |T(z)|^2 \right )^{\frac{-2}{c^2}} e^{f(T(z))+\overline{f(T(z))}}, \quad z\in U_p, \end{equation} where $f$ is holomorphic on $U_p$. Let $\Omega^{\prime}:= \{z \in \Omega \setminus A_p : T(z) \in \mathbb B^n (0, {\sqrt{2} c^{-1} }) \}$ denote the set of points in $\Omega \setminus A_p$ that are mapped into the ball $ \mathbb B^n (0, {\sqrt{2} c^{-1} }):=\{(w_1, ... , w_n) : |w|^2 < {2}{c^{-2}} \}$, where $A_{p}$ is the zero set of $K(\cdot , p)$. Then, by \eqref{K(z, z)} and the theory of power series, one may polarize, treating the variable and its conjugate as independent, so that the full Bergman kernel can be analytically continued as \begin{equation} \label{K(z, z_0)} K(z, {z_0})= \left (1 - \frac{c^2}{2} \sum_{\alpha=1}^n w_\alpha(z) \overline {w_\alpha(z_0)} \right )^{\frac{-2}{c^2}} e^{f(T(z))+\overline{f(T({z_0}))}}, \quad z, z_0\in U_p. \end{equation} Moreover, for any $z, z_0\in U_p$, it holds that $$ \Phi_{z_0}( z) = {\frac{-2}{c^2}} \log \left[ \left (1 - \frac{c^2}{2} |T(z)|^2 \right ) \left (1 - \frac{c^2}{2} |T(z_0)|^2 \right ) \left | 1 - \frac{c^2}{2} \sum_{\alpha=1}^n w_\alpha(z) \overline {w_\alpha(z_0)} \right |^{ -2} \right ], \quad z\in U_p. $$ Then, the symmetry of the Bergman-Calabi diastasis defined in \eqref{dia} further yields that $$ \Phi_{p}(z_0)= \Phi_{z_0}(p) = {\frac{-2}{c^2}} \log \left (1 - \frac{c^2}{2} |T(z_0)|^2 \right ). $$ Since on $\Omega^{\prime}$ both $\Phi_{p}(z)$ and ${\frac{-2}{c^2}} \log \left (1 - \frac{c^2}{2} |T(z )|^2 \right )$ are well-defined, these two real-analytic functions coincide on $U_p$, and thus are identical on $\Omega^{\prime}$.
That is, \begin{equation} \label{on Omega prime} \Phi_{p}(z )= {\frac{-2}{c^2}} \log \left (1 - \frac{c^2}{2} |T(z )|^2 \right ), \quad z \in \Omega^{\prime}. \end{equation} One can show by contradiction that no point in $\Omega \setminus A_p$ is mapped outside the ball $ \mathbb B^n (0, {\sqrt{2} c^{-1} }) $ by $T$, so $\Omega^{\prime}= \Omega \setminus A_p $. Therefore, \eqref{on Omega prime} in fact holds on $\Omega \setminus A_p $. By the Riemann removable singularity theorem, $T$ extends across the analytic variety $A_p$ to the whole domain $\Omega$ with $|T(z)|^2 \leq {2}{ c^{-2}}$, and the maximum modulus principle yields that $|T(z)|^2 < {2}{ c^{-2}}$ on $\Omega$. The above explicit formula of $\Phi_{p}(z )$ also guarantees that the zero set $A_{z_0}=\emptyset$, as shown in \cite{DWo}, so $\Omega$ is a Lu Qi-Keng domain. \medskip Based on these facts, in this section, we first obtain the following estimates. \begin{pro} \label{zero} Let $\Omega \subset \mathbb C^n$ be a bounded domain whose Bergman metric $g$ has its holomorphic sectional curvature identically equal to a negative constant $-c^2$. Then, for any $p \in \Omega$ satisfying \eqref{normal}, there exists a finite constant $ C_p>0$ such that for each $\alpha, j = 1, \ldots, n$, it holds that \begin{equation} \label{good} \left |K(z, p) \frac{\partial w_{\alpha} (z)}{\partial z_j} \right | \leq \left | { \left. \frac{\partial^2}{\partial z_j \partial \overline {t_\alpha} } \right|_{t=p} K(z, t) } \right| + C_p \left| { \frac{\partial K(z, p)}{\partial z_j } } \right|, \quad \forall z\in \Omega. \end{equation} \end{pro} \begin{proof} By the above discussion, the Bergman representative coordinate $T$ relative to $p$ maps $\Omega$ to a ball $ \mathbb B^n (0, {\sqrt{2} c^{-1} }) $. It then follows that $$ \sum_{j =1}^n \left | K(z, p) ^{-1} \left. \frac{\partial}{\partial \overline {t_j} } \right|_{t=p} K(z, t) - \left. \frac{\partial}{\partial \overline {t_j} } \right|_{t=p} \log K(t, t)\right |^2 < {2}{c^{-2}}, \quad \forall z\in \Omega, $$ which further implies that $$ \sum_{j =1}^n \left| K(z, p) ^{-1} \left. \frac{\partial}{\partial \overline {t_j} } \right|_{t=p} K(z, t) \right| \leq \sum_{j =1}^n \left| \left. \frac{\partial}{\partial \overline {t_j} } \right|_{t=p} \log K(t, t) \right| + n {\sqrt 2}{c^{-1}} =:C_p, \quad \forall z\in \Omega. $$ Here $C_p$ is a finite positive constant depending on $p$ since the Bergman kernel is locally uniformly bounded. Thus, for each $j = 1, \ldots, n$, \begin{equation} \label{less} \left| \left. \frac{\partial}{\partial \overline {t_j} } \right|_{t=p} K(z, t) \right| \leq C_p |K(z, p)|, \quad \forall z\in \Omega. \end{equation} By the definition \eqref{rep}, we know that for each $\alpha, j = 1, \ldots, n$, \begin{align*} \frac{\partial w_{\alpha} (z)}{\partial z_j} &= \frac{\partial }{\partial z_j} \left\{K(z, p) ^{-1} \left. \frac{\partial}{\partial \overline {t_\alpha} } \right|_{t=p} K(z, t)\right\} \\ &= \frac{ K(z, p) \left. \frac{\partial^2}{\partial z_j \partial \overline {t_\alpha} } \right|_{t=p} K(z, t) - \frac{\partial K(z, p)}{\partial z_j } \left. \frac{\partial }{ \partial \overline {t_\alpha} } \right|_{t=p} K(z, t) }{K(z, p)^2}, \quad \forall z\in \Omega. \end{align*} This combined with \eqref{less} will yield that for any $z\in \Omega$, \begin{align*} \left |K(z, p) \frac{\partial w_{\alpha} (z)}{\partial z_j} \right | & \leq \left | \left. 
\frac{\partial^2}{\partial z_j \partial \overline {t_\alpha} } \right|_{t=p} K(z, t) \right | + \left | \frac{ \frac{\partial K(z, p)}{\partial z_j } \left. \frac{\partial }{ \partial \overline {t_\alpha} } \right|_{t=p} K(z, t) } {K(z, p) } \right | \\ & \leq \left | \left. \frac{\partial^2}{\partial z_j \partial \overline {t_\alpha} } \right|_{t=p} K(z, t) \right | + C_p \left | \frac{\partial K(z, p)}{\partial z_j } \right |. \end{align*} \end{proof} Next, we will show the following boundedness lemma for domains satisfying Condition $(B)$. \begin{lem} \label{similar lemma} Let $ \Omega $ be a bounded domain in $ \mathbb C^n, n\ge 1$. Suppose $ \Omega $ satisfies Condition $(B)$ at some point $p\in \Omega$ and $K(z, p) \neq 0$ for any $z\in \Omega$. Then, there exists a finite constant $\mathcal C_1 >0$ such that for any $z\in \Omega$, \begin{equation} \label{kernel is bounded} |K(z, p) | \leq \mathcal C_1, \end{equation} \begin{equation} \label{kernel is bounded!} \quad \sup_{w \in U} \left| \frac{{ \frac{\partial }{ \partial {z_j} } K(z, w ) }} {K(z, w)} \right| \leq \, \mathcal C_1, \quad \forall j \in \{1, \ldots, n\}, \end{equation} where $ U $ is a small neighborhood of $ p$. \end{lem} \begin{proof} For the first inequality, since $K(\cdot, p)$ is zero-free, let $h(z):= 2 \log |K(z, p)|$ be a pluriharmonic function on $ \Omega$. Then, Condition $(B)$ implies that for each $ j \in \{1, \ldots, n\}$, $$ \left| \frac{\partial h}{\partial z_j } (z) \right| = \left| \frac{\partial \log (K(z, p) \overline {K(z, p)})}{\partial z_j } \right| = \left| \frac{ \frac{\partial }{\partial z_j } K(z, p) } { K(z, p)} \right| \leq \mathcal C, \quad \forall z \in \Omega. $$ By the mean value theorem for several variables and the Cauchy-Schwarz inequality, it follows that $h$ is Lipschitz continuous, and there exists a finite constant $\mathcal C_0>0$ such that $|h | < \mathcal C_0$ on $ \Omega$. Therefore, $|K(z, p)| = e^{\frac{h (z)}{2}} \le e^{\frac{|h (z)|}{2}} \le e^{\frac{\mathcal C_0}{2}}, \quad \forall z\in \Omega.$ For the second inequality, note that for each fixed $z\in \Omega$ and $j \in \{1, \ldots, n\}$, $$ \frac{ \frac{\partial K(w, z ) }{\partial \overline {z_j} } } {K(w, z)} $$ is holomorphic in $w$ and $$ \left| \frac{ \frac{\partial K(z, w ) }{ \partial {z_j} } } {K(z, w)} \right| = \left| \frac{ \frac{\partial K(w, z ) }{\partial \overline {z_j} } }{K(w, z)} \right|. $$ By continuity, for any $\epsilon \in (0, 1)$, there exists a neighborhood $U$ of $p$ such that $$ \left| \frac{ \frac{\partial K(w, z ) }{\partial \overline {z_j} } } {K(w, z)} - \frac{ \frac{\partial K(p, z ) }{\partial \overline {z_j} } } {K(p, z)} \right| \le \epsilon, \quad \text{whenever } w \in U. $$ Thus, by Condition $(B)$, it holds that $$ \sup_{w \in U} \left| \frac{ \frac{\partial K(w, z ) }{\partial \overline {z_j} } } {K(w, z)} \right| \le \left| \frac{ \frac{\partial K(p, z ) }{\partial \overline {z_j} } } {K(p, z)} \right| + \epsilon \le \mathcal C + 1. $$ Since $z$ is arbitrary, we complete the proof by taking $\mathcal C_1 := \max \{ e^{\frac{\mathcal C_0}{2}}, \mathcal C + 1\}$. \end{proof} Lemma \ref{similar lemma} implies that Condition $(B)$ is not satisfied by the examples of Forn\ae ss \cite{For} and Kerzman \cite{Ker} (see also Remark \ref{disc}). One can also check that in Kerzman's example the determinant of the Jacobian of a Bergman representative coordinate is unbounded (see \cite{DWo}). We will need the following theorem to prove our Theorem \ref{with B}.
\begin{theorem} {\normalfont\cite{DWo}} \label{biholo} Let $\Omega \subset \mathbb C^n$ be a bounded domain whose Bergman metric has its holomorphic sectional curvature identically equal to a negative constant $-c^2$. Then, for some $z_0\in \Omega$, $\Phi_{z_0}(z)$ blows up to infinity at $\partial \Omega$ if and only if $\Omega$ is biholomorphic to a ball and $n = 2/c^2-1$. \end{theorem} Under the constant holomorphic sectional curvature assumption, two proofs of the above equivalence, one using Lu's theorem and one avoiding it, were given in \cite{DWo}. For a general bounded domain $\Omega$, as demonstrated in \cite[Proposition 3.1]{DWo}, the fact that for some $z_0\in \Omega$, $\Phi_{z_0} $ blows up to infinity at $\partial \Omega$ does not imply the completeness of the Bergman metric of $\Omega$. \begin{proof} [{\bf Proof of Theorem \ref{with B}}] {\bf Part 1)} We first assume that $\Omega$ satisfies Condition $(B)$ at some $p $ with \eqref{normal}. Choose a small neighborhood $ U $ of $p$ as in Lemma \ref{similar lemma} and take a small polydisc $\mathbb D^n(p; r_p) \subset U$ for some $0< r_p \ll 1$. By Cauchy's integral formula for derivatives and \eqref{kernel is bounded!}, for each $\alpha, j = 1, \ldots, n$, it holds that \begin{align*} \left| \left. \frac{\partial }{ \partial {t_\alpha} } \right|_{t=p} \frac{ \frac{\partial }{ \partial \overline {z_j} } K(t, z) } {K(t, z)} \right| & \leq \left| \frac{1}{2\pi i} \int_0^{2\pi} \frac{\partial }{ \partial \overline {z_j} } K( p + r_p e^{i \theta} , z) \frac{ r_p e^{i \theta} i }{r_p^2 K( p + r_p e^{i \theta} , z) } d \theta \right|\\ & \leq \frac{1}{r_p} \sup_{U } \left| \frac{ \frac{\partial K( \cdot , z) }{ \partial \overline {z_j} } } { K( \cdot , z) } \right|\\ & \leq \frac{\mathcal C_1}{r_p}, \quad \forall z\in \Omega. \end{align*} Thus, by \eqref{less}, \begin{align*} \left| \frac{ \left. \frac{\partial^2 }{ \partial \overline {z_j} \partial {t_\alpha} } \right|_{t=p} K(t, z) }{K (p, z)} \right| & = \left| \left. \frac{\partial }{ \partial {t_\alpha} } \right|_{t=p} \frac{ \frac{\partial }{ \partial \overline {z_j} } K(t, z) } {K(t, z)} + \frac{ { \frac{\partial }{ \partial \overline {z_j} } K(p, z) } }{K (p, z)} \frac{ \left. \frac{\partial }{ \partial {t_\alpha} } \right|_{t=p} K(t, z)}{K(p, z)}\right|\\ & \le \frac{\mathcal C_1}{r_p} + \mathcal C \cdot C_p, \quad \forall z\in \Omega. \end{align*} Therefore, by \eqref{good}, it holds that \begin{align*} \left | \frac{\partial w_{\alpha} (z)}{\partial z_j} \right | & \leq \left | \frac{{ \left. \frac{\partial^2}{\partial z_j \partial \overline {t_\alpha} } \right|_{t=p} K(z, t) } } {K(z, p) }\right| + C_p \left| \frac{ \frac{\partial K(z, p)}{\partial z_j } } {K(z, p) }\right|\\ & \leq \frac{\mathcal C_1}{r_p} + 2\mathcal C \cdot C_p, \quad \forall z\in \Omega. \end{align*} Since $w_{\alpha} (z)$ is a bounded holomorphic function whose partial derivatives are bounded from above by some finite positive constant on $\Omega $, the Bergman representative coordinate $T (z) =(w_1, ... , w_n)$ defined by \eqref{rep} is Lipschitz. Thus, $T (z) $ is continuous up to $\overline\Omega $ by an elementary calculus argument involving uniform continuity. Let $w \in \partial \Omega$ be an arbitrary boundary point such that $$ \limsup_{z \to w} K(z, z) = \infty.
$$ By Lemma \ref{similar lemma}, there exists a finite constant $\mathcal C_1 >0$ such that \begin{equation} \label{limsup K} \limsup \limits_ {\Omega \ni z\to w} \Phi_{p}( z) \geq \limsup \limits_ {\Omega \ni z\to w}\log \frac{K(z, z) K(p, p)} { \mathcal C_1 ^2} = \infty. \end{equation} Since $T (z) $ is continuous up to $\overline\Omega $, for any two sequences of points $ (z_j)_{j\in \mathbb N}, (w_j)_{j\in \mathbb N} \subset \Omega$, both approaching $w$, it holds that \begin{equation} \label{same} \lim_{j \to \infty}T(z_j)=\lim_{j \to \infty}T(w_j). \end{equation} By the explicit formula of the Bergman-Calabi diastasis \begin{equation} \label{formula} \Phi_{p}( z) = {\frac{-2}{c^2}} \log \left (1 - \frac{c^2}{2} |T(z)|^2 \right ), \quad z \in \Omega, \end{equation} we know that $$ \lim \limits_ {\Omega \ni z\to w} \Phi_{p}( z)= \infty. $$ For any boundary point $q\in \partial \Omega$ such that \begin{equation} \label{finite} \limsup \limits_ {\Omega \ni z\to q} K(z, z) < \infty, \end{equation} a result of Pflug and Zwonek \cite{PZ} says that $q \in \text{int} (\overline \Omega)$ and there exists a neighborhood $U$ of $q$ such that $P:=U \setminus \Omega$ is a pluripolar set. Taking all boundary points $q_j$ that satisfy \eqref{finite}, we get the corresponding neighborhoods $U_j$ and pluripolar sets $P_j$. Then the (bounded) domain $$ \tilde\Omega:= \bigcup\limits_{j}U_j \cup \Omega $$ has the same Lebesgue measure as $\Omega$ due to the pluripolarity. In view of \cite{PZ}, $\partial \tilde \Omega$ coincides with the non-pluripolar part of $\partial \Omega$. Moreover, the domains $\tilde\Omega$ and $\Omega$ have the same Bergman metric (with constant holomorphic sectional curvature). To see this, notice that $P_j=U_j \cap \partial \Omega$ is relatively closed in $U_j$. So, restricting any function $f \in L^2 \cap \mathcal O (\Omega)$ to $U_j\setminus P_j$, we get a function $F \in L^2 \cap \mathcal O (U_j)$ such that $F=f$ on $U_j\setminus P_j$. Hence one has an $L^2$ holomorphic extension to $\Omega \cup U_j$, and consequently to $\tilde\Omega$. By the above discussions, we know that the Bergman-Calabi diastasis $\Phi_{p}$ blows up to infinity at $\partial \tilde \Omega$, since $w$ is arbitrary. Then, Theorem \ref{biholo} guarantees that $\tilde\Omega$ is biholomorphic to a ball and $n = 2/c^2-1$. Define the set $$ E:=\bigcup\limits_{j}P_j = \bigcup\limits_{j} U_j \cap \partial \Omega = \tilde\Omega \cap \partial \Omega, $$ which is relatively closed in $\tilde\Omega$. Since biholomorphic (pre)images and countable unions of pluripolar sets are still pluripolar, we conclude that $T (z) $ is biholomorphic from $\Omega$ to a ball $ \mathbb B^n (0, {\sqrt{2} c^{-1} })$ possibly less a relatively closed pluripolar set $E$. If $\Omega$ satisfies Condition $(B)$ at some general point $p \in \Omega$, let $g_{\alpha \bar \beta} (p)$ be the positive-definite Hermitian matrix associated with the Bergman metric at $p$. One performs a linear transformation $F$ from $\Omega$ to $\Omega^1$ such that the Bergman metric $g^1$ on $\Omega^1 $ satisfies $ g^1_{\alpha \bar \beta} (F (p)) =\delta_{\alpha \beta }. $ Since $F$ is a biholomorphism, the Bergman metric on $\Omega^1 $ also has its holomorphic sectional curvature identically equal to a negative constant $-c^2$. Moreover, the Bergman kernels on $\Omega$ and $\Omega^1$ differ by a multiplicative constant, namely the square of the determinant of $[g_{\alpha \bar \beta} (p)]$, due to the transformation rule.
Therefore, $\Omega^1$ satisfies Condition $(B)$ at the point $F (p) $. By the previous argument, the Bergman representative coordinate $T^1 (z) $ relative to $F (p)$ is biholomorphic from $\Omega^1$ to a ball $\mathbb B^n_r$ possibly less a relatively closed pluripolar set, and extends continuously up to $\overline{\Omega^1}$. Finally, the composition map $F^{-1} \circ T^1 \circ F$ is the Bergman representative coordinate $T (z) $ relative to $ p$, and it is biholomorphic from $\Omega$ to a ball $\mathcal B$ defined in \eqref{ball} possibly less a relatively closed pluripolar set $E$. The continuous extension up to $\overline{\Omega}$ follows due to the linearity of $F$. {\bf Part 2)} Since the pluripolar set $E$ is negligible for $L^2$ holomorphic functions, the transformation formula of the Bergman kernel yields that \begin{equation} \label{transf} K(z, p) = D_{T} (z) \cdot \overline{D_{T}(p) }\cdot K_{\mathcal B \setminus E}\left(T (z), \bar 0\right) = D_{T} (z) \cdot K_{\mathcal B}\left(0, \bar 0\right)= \frac{D_{T} (z)} {v( \mathcal B)}, \quad z \in \Omega, \end{equation} where $D_{T} $ is the determinant of the complex Jacobian of $T$ and $v(\cdot)$ denotes the Euclidean volume. Let $S$ be the inverse map of $T $. By the inverse function theorem, the complex Jacobian of $S$ is the inverse of the complex Jacobian of $T$, and $$JS = (JT)^{-1} = \frac{\text{Adj}(JT)}{D_{T} },$$ where $\text{Adj}(JT)$ is the adjugate matrix of $JT$, the complex Jacobian of $T$. If $ \left| K(z, p) \right| $ is also bounded from below by a positive constant for any $z \in \Omega$, then so is $D_{T} (z)$. Therefore, each component of $S$ is a bounded holomorphic function whose partial derivatives are bounded from above by some finite constant on $\mathcal B \setminus E$. We conclude that $S $ is Lipschitz and thus continuous up to $\overline {\mathcal B \setminus E}$, which implies that $T $ extends to a homeomorphism of the closures. \end{proof} Corollary \ref{Cor} follows directly from Theorem \ref{with B}, as the boundary of a bounded $L^2$-domain of holomorphy, which is the domain of existence of some $L^2$ holomorphic function, contains no pluripolar part, cf. \cite{PZ, I04}. The following example is provided by Peter Pflug, and we present it here with his kind permission. \begin{example} \label{exam} $\mathcal D:=\{ (z, w) \in \mathbb C^2 : |z|^2+|w|^2<1, z \neq 0\}$ is an example of a {\it pseudoconvex} domain that is not an $L^2$-domain of holomorphy. To see this, let $\mathbb B^2$ be the unit ball in $\mathbb C^2$ and let $E := \{ (0, w) \in \mathbb C^2 : |w|<1 \}$ be a relatively closed pluripolar set of $ \mathbb B^2$. Then, $\mathcal D = \mathbb B^2 \setminus E$, and $\mathcal D$ is not an $L^2$-domain of holomorphy since all $L^2$ holomorphic functions on $\mathcal D$ extend across $E$. Nevertheless, $\mathcal D$ is pseudoconvex: for instance, the function $f(z, w)=1/z$ is holomorphic on $\mathcal D$ but blows up on $E$, so it cannot extend across $E$. \end{example} \medskip To conclude the biholomorphisms in Theorem \ref{with B} and Corollary \ref{Cor}, the assumption that $\Omega$ satisfies Condition $(B)$ can be weakened to $\Omega$ being biholomorphic to a domain $\Omega_1$ satisfying Condition $(B)$. Moreover, to conclude the homeomorphisms of the closures, the assumptions on the Bergman kernel can be further weakened as follows.
\medskip \begin{theorem} \label{weak thm} Let $\Omega \subset \mathbb C^n, n \ge 1$, be a bounded pseudoconvex domain whose Bergman metric has its holomorphic sectional curvature identically equal to a negative constant $-c^2$. If there exists some point $p \in \Omega$ such that for any $z\in \Omega$, \begin{equation} \label{both} \left| { \frac{\partial }{\partial z_j } K(z, p) } \right| \leq \mathcal C_1, \quad \left| K(z, p) \right| \geq \mathcal C_2, \quad \forall j \in \{1, \ldots, n\}, \end{equation} where $\mathcal C_1, \mathcal C_2>0$ are finite constants, then the Bergman representative coordinate $T (z) $ relative to $p$ is biholomorphic from $\Omega$ to a ball $\mathcal B $ defined in \eqref{ball} possibly less a relatively closed pluripolar set, where $n = 2 c^{-2}-1$, and $T $ extends to a homeomorphism of the closures. \end{theorem} \begin{proof} We first assume that $\Omega$ satisfies \eqref{both} at some $p $ with \eqref{normal}. Similar to the proof of \eqref{kernel is bounded!}, one can show by \eqref{both} that there exists a small neighborhood $ U$ of $ p$ such that for each $ j \in \{1, \ldots, n\}$, $$ \sup_{w \in U} \left| {{ \frac{\partial }{ \partial {z_j} } K(z, w ) }} \right| \leq \, \mathcal C_1 +1, \quad \forall z\in \Omega. $$ Then, by \eqref{good} and \eqref{both}, it follows that for each $\alpha, j = 1, \ldots, n$, \begin{equation} \label{good!} \mathcal C_2 \left | \frac{\partial w_{\alpha} (z)}{\partial z_j} \right | \leq \left | { \left. \frac{\partial^2}{\partial z_j \partial \overline {t_\alpha} } \right|_{t=p} K(z, t) } \right| + C_p \mathcal C_1, \quad \forall z\in \Omega. \end{equation} Similar to the proof of Theorem \ref{with B}, Part 1), one uses Cauchy's integral formula to show that for each $\alpha, j = 1, \ldots, n$ and for any $z\in \Omega$, \begin{align*} \left| \left. \frac{\partial^2}{\partial z_j \partial \overline {t_\alpha} } \right|_{t=p} K(z, t) \right| & =\left| \left. \frac{\partial }{ \partial {t_\alpha} } \right|_{t=p} \frac{\partial K(t, z) }{ \partial \overline {z_j} } \right| \\ & \leq \left| \frac{1}{2\pi i} \int_0^{2\pi} \frac{1}{r_p^2} \frac{\partial K( p + r_p e^{i \theta} , z) }{ \partial \overline {z_j} } r_p e^{i \theta} i d \theta \right|\\ & \leq \frac{1}{r_p} \sup_{U} \left| \frac{\partial K( \cdot , z) }{ \partial \overline {z_j} } \right|\\ & \leq \frac{1}{r_p} \, ( \mathcal C_1 +1), \end{align*} where $0< r_p \ll 1$ and the small polydisc $\mathbb D^n(p; r_p) \subset U$. This combined with \eqref{good!} implies that for each $\alpha, j = 1, \ldots, n$ and for any $z\in \Omega$, $$ \left | \frac{\partial w_{\alpha} (z)}{\partial z_j} \right | \leq \mathcal C_2^{-1} \left( \frac{1}{r_p} \, ( \mathcal C_1 +1) + C_p \mathcal C_1 \right) < \infty. $$ Since $w_{\alpha} (z)$ is a bounded holomorphic function whose partial derivatives are bounded from above by some finite constant on $\Omega $, the Bergman representative coordinate $T (z) =(w_1, ... , w_n)$ defined by \eqref{rep} is Lipschitz and thus continuous up to $\overline\Omega $. Since \eqref{both} implies that $ \left| K(z, p) \right| $ is bounded from above by a finite constant $\mathcal C_3>0$ for any $z \in \Omega$, the remaining part of the proof is the same as that of Theorem \ref{with B}.
In particular, if $\Omega$ satisfies \eqref{both} at some general point $p \in \Omega$, we may use a similar argument as in the proof of Theorem \ref{with B}, Part 1) to conclude that the Bergman representative coordinate $T (z) $ relative to $ p$ is biholomorphic from $\Omega$ to a ball $\mathcal B$ defined in \eqref{ball} possibly less a relatively closed pluripolar set $E$, and it extends continuously up to $\overline{\Omega}$; to further conclude that $T $ extends to a homeomorphism of the closures, one uses the same argument as in the proof of Theorem \ref{with B}, Part 2). \end{proof} Theorem \ref{weak thm} directly yields the following corollary. \begin{cor} \label{weak cor} Let $\Omega \subset \mathbb C^n$ be a bounded $L^2$-domain of holomorphy such that the holomorphic sectional curvature of the Bergman metric on $\Omega$ is identically equal to a negative constant $-c^2$. If there exists some point $p \in \Omega$ such that \eqref{both} holds, then the Bergman representative coordinate $T (z) $ relative to $p$ is biholomorphic from $\Omega$ to a ball $\mathcal B $ defined in \eqref{ball}, where $n = 2 c^{-2}-1$, and $T $ extends to a homeomorphism of the closures, $\tilde {T} : \overline\Omega \to \overline{ \mathcal B}$. \end{cor} Condition \eqref{both} is weaker than the assumptions on the Bergman kernel in Part $2)$ of both Theorem \ref{with B} and Corollary \ref{Cor}, in view of \eqref{kernel is bounded}. \section{Proofs of one dimensional results} In this section, we prove our one dimensional results for planar domains in $\mathbb C$. Our arguments build on the authors' previous works \cite{D, DWo}. We begin with the following lemma. \begin{lem} \label{1-dim lemma} Let $\Omega \subset \mathbb C$ be a domain whose Bergman metric $ g (z)dz\otimes d\overline{z}:= (\log K (z, z))_{ z\overline{z}}dz\otimes d\overline{z} $ has Gaussian curvature identically equal to $-2$. Then, for any $p \in \Omega$, the Bergman representative coordinate $T (z) $ relative to $p$ satisfies $$ |{T^\prime (z)}| \leq \frac{2 \pi } { g (p) } |K(z, p)| , \quad z \in \Omega. $$ \end{lem} \begin{proof} By the discussion in Section 2 (cf. \cite{DWo, D}), the zero set $A_{z_0}=\emptyset$ and the explicit formula \eqref{formula} of the Bergman-Calabi diastasis relative to $p$ becomes $$ \Phi_{p}( z) = -2 \log \left (1 - \frac{1}{2} |T(z)|^2 g (p) \right ), \quad z \in \Omega, $$ which yields $$ g(z) = | T^{\prime}(z)|^2 g (p) \left (1 - \frac{1}{2} |T(z)|^2 g (p) \right )^{-2}, \quad z \in \Omega. $$ Therefore, by the above two formulas and \eqref{dia}, it holds that \begin{equation} \label{longineq} \frac{|K(z, p)|^2} {K(z, z)K(p, p)} = e^{ -\Phi_p (z)} = \left(1 - \frac{1}{2} |T (z)|^2 g (p)\right)^{2} = \frac{ |{T^\prime (z)}|^2 g (p)}{ g (z)}, \quad z \in \Omega. \end{equation} From \cite{D}, we know that for any $w\in \Omega$, \begin{equation} \label{ineq} \pi K(w, w) \geq c_B^2(w) \geq \frac{g(w)}{2}, \end{equation} where $c_B$ is the analytic capacity defined as $$ c_B(w):=\sup \left \{ | h^{\prime}(w) | \, : \, h\text{ is holomorphic on $\Omega$ with $h({w})= 0$ and} \, \, |h|\leq1 \right\}. $$ Notice that the first inequality of \eqref{ineq} was proved by Suita \cite{Su}. Therefore, by \eqref{longineq} and \eqref{ineq}, we get $$ { |K(z, p)| } = \sqrt{ \frac{ g (p) }{g (z) } K(z, z)K(p, p) } |{T^\prime (z)}| \geq \frac{ g (p) }{2 \pi } |{T^\prime (z)}| , \quad z \in \Omega. $$ \end{proof} Some rigidity theorems related to capacities were given in \cite{DT, DTZ}.
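\medskip In the model case $\Omega = \mathbb D$, all of the above estimates are sharp; we record this routine verification only as a consistency check. One has $K(z, z)=\pi^{-1}(1-|z|^2)^{-2}$ and $g(z) = 2(1-|z|^2)^{-2}$, with Gaussian curvature identically $-2$, while the Schwarz-Pick lemma gives $c_B(z) = (1-|z|^2)^{-1}$, the extremal functions being the M\"obius factors. Hence
$$
\pi K(z, z) = c_B^2(z) = \frac{g(z)}{2}, \quad z \in \mathbb D,
$$
so both inequalities in \eqref{ineq} are equalities. Moreover, for $p=0$ one has $g(0)=2$, $K(z, 0) = \pi^{-1}$ and $T(z)=z$, so the conclusion of Lemma \ref{1-dim lemma} reads $|T^{\prime}(z)| \leq \frac{2\pi}{2} \cdot \frac{1}{\pi} = 1$ and is attained at every point of $\mathbb D$.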
\begin{proof} [{\bf Proof of Theorem \ref{1-dim}}] For Part 1), by the assumption and Lemma \ref{1-dim lemma}, we know that $ \left| T^\prime(z) \right| $ is bounded from above by a finite constant for any $z \in \Omega$. Therefore, we conclude that $T$ is Lipschitz and thus continuous up to $\overline {\Omega} $. Similar to the proof of Theorem \ref{with B}, Part 1), we know that $T$ is biholomorphic from $\Omega$ to a disc $\mathbb D_r :=\{ \eta \in \mathbb C : |\eta |^2 < {2 g^{-1} (p)} \}$ possibly less a relatively closed polar set $P$. For Part 2), the transformation formula of the Bergman kernel under biholomorphisms yields \begin{equation} \label{relation} K(z, p) = T^{\prime} (z) \cdot \overline{T^{\prime}(p) }\cdot K_{\mathbb D_r \setminus P}\left(T (z), \bar 0\right) = \frac{T ^{\prime}(z) } { \pi r^2} = \frac{ g (p)} { 2 \pi } T ^{\prime}(z), \quad z \in \Omega, \end{equation} where the second equality holds because the polar set is negligible for $L^2$ holomorphic functions. If $ \left| K(z, p) \right| $ is additionally bounded from below by a finite constant $\mathcal C_2>0$ for any $z \in \Omega$, then the inverse map of $T $ will also be Lipschitz and thus continuous up to $\overline {\mathbb D_r \setminus P} $. Therefore, we conclude that $T$ extends to a homeomorphism of the closures, $\tilde {T} : \overline \Omega \to \overline{\mathbb D_r \setminus P}$. \end{proof} Any bounded, simply-connected planar domain $D$ is biholomorphic to a disc by the Riemann mapping theorem, cf. \cite{GKim}. We will use the Bergman kernel to give sufficient conditions for the extension of the Riemann map to the closures. \begin{proof} [{\bf Proof of Corollary \ref{simply}}] By the uniqueness of the Riemann map, the Bergman representative coordinate $T (z)$ defined by \eqref{rep} is biholomorphic from $D$ to $\mathbb D_r $ such that $T (p)=0$ and $T^{\prime}(p)=1$. See, for instance, \cite[Chap. VI]{Ber70} or \cite{Lu}. Also, \eqref{relation} holds for any $ z \in D$. For 1), by \eqref{relation}, $|T^{\prime} (z)|$ is bounded from above by the finite constant $ { 2 \pi }{ g ^{-1} (p)} \mathcal C_1 >0 $ for any $z \in D$. Therefore, we conclude that $T$ is Lipschitz and thus continuous up to $\overline { D} $. For 2), denote by $\tau: \mathbb D_r \to D $ the inverse biholomorphic map of $T$. By \eqref{relation} and the inverse function theorem, $|\tau ^{\prime} (z)|$ is bounded from above by the finite constant $ { g (p)} (2 \pi \mathcal C_2)^{-1} >0 $ for any $z \in \mathbb D_r$. Therefore, we conclude that $\tau $ is also Lipschitz and thus continuous up to $\overline {\mathbb D_r} $. Moreover, $\tau (\partial \mathbb D_r) \subset \partial D$. Thus $\tau$ maps $\overline {\mathbb D_r} $ onto $\overline {D} $, and so $\tau (\partial \mathbb D_r) = \partial D$, which shows that $\partial D$ is a continuous curve. For 3), if $ \left| K(z, p) \right| $ is bounded from both above and below by finite positive constants for any $z \in D$, then both $T$ and its inverse $\tau$ extend continuously up to the closures. Therefore, $T$ extends to a homeomorphism of $\overline { D} $ onto $\overline {\mathbb D_r} $, which implies that $\partial D$ is a Jordan curve. \end{proof} In Kerzman's example in \cite{Ker}, the boundary of the domain $D$ is a Jordan curve, but $ \left| K(z, p) \right| $ is not bounded from above on $D$. Consequently, Condition 3) in Corollary \ref{simply} is only sufficient but not necessary for the continuous extension to the closures.
On the other hand, although such a $D$ is a Lu Qi-Keng domain, namely for any $p \in D$ the Bergman kernel $K(\cdot , p)$ has no zeros, we have the following observation. \begin{pro} \label{simply counter} Let $D \subset \mathbb C$ be a bounded, simply-connected domain whose boundary is not a continuous curve. Then, for any $p\in D$, neither $ \left| K(z, p) \right| $ nor $|T^{\prime} (z)|$ is bounded from below by a finite constant $\mathcal C>0$ on $D$, i.e., $$ \inf_{z \in D} |K(z, p)| = 0 = \inf_{z \in D} |T^{\prime} (z)|. $$ \end{pro} \begin{proof} Assume, on the contrary, that for some $p\in D$, $$ \inf_{z \in D} |K(z, p)| > 0. $$ Then by Part 2) of Corollary \ref{simply}, $\partial D$ is necessarily a continuous curve, which is a contradiction. The same argument applies to $T^{\prime} (z)$ in view of \eqref{relation}, which completes the proof. \end{proof} \begin{example} An example of a bounded, simply-connected planar domain $D$ whose boundary is not a continuous curve is given below in Figure \ref{Figure 1}. In this example, the boundary $\partial D$ contains the comb space. Here, the boundary $\partial D$ being a continuous curve means that there exists a continuous function $\gamma: [0, 1] \to \mathbb C$ such that $\gamma (0) = \gamma (1)$ and $\gamma ([0, 1]) = \partial D$. But since the comb space is not locally connected, it follows that $\partial D$ is not a continuous curve, cf. \cite[Chapter 14]{Conway}. {\center \tikzstyle{every node}=[circle, draw, fill=black!50, inner sep=0pt, minimum width=4pt] \begin{tikzpicture}[thick,scale=0.9] \draw[very thick] (0,0)--(7,0)--(7, -5)--(0, -5)--cycle; \draw[very thick] \foreach \x in {0, 0.2, ..., 4.5} { (0+ \x^3/16, -5) -- (0+\x^3/16, -2) }; \end{tikzpicture} \captionof{figure}{A simply-connected domain with discontinuous boundary} \label{Figure 1} } \end{example} \begin{remark} \label{disc} Condition $(B)$ is not a biholomorphic invariant. First, it is satisfied for the unit disc $\mathbb D$, whose Bergman kernel is given by $$ K(z, p) = \frac{1}{\pi} \frac{1}{(1- z \bar p)^2} \quad \text{and} \quad \frac{\partial }{\partial z}K(z, p) = \frac{ 2 \bar p}{\pi (1- z \bar p)^3}. $$ For any $p\in \mathbb D$, choose $$c > \frac{2}{1-|p|}.$$ Then, for any $z \in \mathbb D$, it follows that $$ \left|\frac{\partial }{\partial z}K(z, p) \right| \leq \frac{2}{\pi \left| 1- z \bar p \right|^3} = |K(z, p)| \frac{2 }{|1- z \bar p|} < |K(z, p)| \frac{2 }{1- |p|} < c |K(z, p)|. $$ However, Condition $(B)$ is {\it not} satisfied for the examples of Forn\ae ss \cite{For} and Kerzman \cite{Ker}, where $|K(z, p)|$ is unbounded as $z$ approaches a certain boundary point; we can see from Lemma \ref{similar lemma} that if a bounded, simply-connected domain $D$ in $\mathbb C$ satisfies Condition $(B)$, then $|K(z, p)|$ is bounded from above by a finite constant for all $z\in D$. \end{remark} \section{Bounded Bergman representative coordinates} In this section, we study the boundedness of the Bergman representative coordinate. \begin{pro} \label{Bergman bounded} Let $\Omega$ be a domain admitting the Bergman metric in $ \mathbb C^n, n\geq 1$, and let $K(z, w)$ denote the Bergman kernel of $\Omega$. Assume that there exists a point $p \in \Omega$ such that \begin{equation} \label{similar} \sup_{\zeta \in U } \left| K(\zeta, z)\right| \le \mathcal C \left| K(p , z)\right|, \quad \forall z \in \Omega, \end{equation} where $U $ is a neighborhood of $p$ and $\mathcal C >0$ is a finite constant.
Then, the Bergman representative coordinate $T (z) $ relative to $p$ maps $\Omega$ holomorphically to a bounded domain in $ \mathbb C^n$. \end{pro} \begin{proof} We first assume that $\Omega$ satisfies \eqref{similar} at some $p $ with \eqref{normal}. Let $A_{p}$ be the zero set of the Bergman kernel $K(\cdot , p)$. Then, for each fixed $ j = 1, \ldots, n$, $$ w_{j} (z):= K(z, p) ^{-1} \left. \frac{\partial}{\partial \overline {t_j} } \right|_{t=p} K(z, t) - \left. \frac{\partial}{\partial \overline {t_j} } \right|_{t=p} \log K(t, t), \quad z \in \Omega \setminus A_{p}. $$ Take a small polydisc $\mathbb D^n(p; r_p) \subset U $ for some $0< r_p \ll 1$. For each $ j = 1, \ldots, n$, by Cauchy's integral formula for derivatives and \eqref{similar}, it holds that \begin{align*} \left|\left. \frac{\partial}{\partial \overline {t_j} } \right|_{t=p} K(z, t) \right| & = \left| \frac{1}{2\pi \sqrt{-1}} \int_{\{|t_j-p_j|= r_p \}} \frac{ K((p_1, \ldots, t_j, \ldots, p_n), z) }{(t_j - p_j) ^{2}} dt_j \right|\\ & = \frac{1}{2\pi {r_p }} \left| \int_0^{2\pi} K( (p_1, \ldots, p_j + r_p e^{i \theta}, \ldots, p_n), z) d \theta \right|\\ & \leq \frac{1}{ {r_p }} \sup_{ U} |K( \cdot , z)| \\ & \leq \frac{\mathcal C}{ {r_p }} |K( p, z)|, \quad \forall z \in \Omega \setminus A_{p}. \end{align*} Therefore, for any $z \in \Omega \setminus A_{p}$, $$ |w_j(z)| = \left | K(z, p) ^{-1} \left. \frac{\partial}{\partial \overline {t_j} } \right|_{t=p} K(z, t) - \left. \frac{\partial}{\partial \overline {t_j} } \right|_{t=p} \log K(t, t)\right | \leq \frac{\mathcal C}{ {r_p }} + C_p, $$ where $C_p$ denotes the finite constant $\max_{j} \left| \left. \frac{\partial}{\partial \overline {t_j} } \right|_{t=p} \log K(t, t) \right|$. By the Riemann removable singularity theorem, the bounded holomorphic function $w_j$ extends across the analytic variety $A_p$ to the whole domain $\Omega$, which proves the assertion in this case. If $\Omega$ satisfies \eqref{similar} at some general point $p \in \Omega$, let $g_{\alpha \bar \beta} (p)$ be the positive-definite Hermitian matrix associated with the Bergman metric at $p$. One performs a linear transformation $F$ from $\Omega$ to $\Omega^1$ such that the Bergman metric $g^1$ on $\Omega^1 $ satisfies $ g^1_{\alpha \bar \beta} (F (p)) =\delta_{\alpha \beta }. $ Since $F$ is a biholomorphism, the Bergman kernels on $\Omega$ and $\Omega^1$ differ by a multiplicative constant, namely the square of the determinant of $[g_{\alpha \bar \beta} (p)]$, due to the transformation rule. Therefore, $\Omega^1$ satisfies \eqref{similar} with respect to the point $F (p) $. By the previous argument, the Bergman representative coordinate $T^1 (z) $ relative to $F (p)$ maps $\Omega^1$ holomorphically to a bounded domain in $ \mathbb C^n$. Finally, the composition map $F^{-1}\circ T^1 \circ F$ is the Bergman representative coordinate $T (z) $ relative to $ p$, and it is holomorphic from $\Omega$ to a bounded domain in $ \mathbb C^n$. \end{proof} Using Proposition \ref{Bergman bounded}, we will prove Theorem \ref{kernel similar}. \begin{proof} [{\bf Proof of Theorem \ref{kernel similar}}] Assume that the holomorphic sectional curvature of the Bergman metric $g$ is identically $-c^2$. Note that $U$ may be chosen small enough to be convex and such that the set $T (U) $ is contained in a ball $\mathcal B$ defined in \eqref{ball}. For any $\zeta \in U$, by the mean value theorem, there exists a point $\eta = s\zeta + (1-s) p \in U$, where $s\in (0, 1)$, such that $$ |K(\zeta, t) - K(p, t)| \leq |\left. \nabla_{ {w}} \right|_{w= \eta} K(w, t) | \cdot |\zeta-p|, $$ where $\nabla_w := \left(\frac{\partial }{\partial {w_1} } , \ldots , \frac{\partial }{\partial {w_n} } \right)$ denotes the complex gradient operator.
Therefore, by the Cauchy-Schwarz inequality and \eqref{less}, it holds for any $\zeta \in U$ and $ z\in \Omega$ that \begin{align*} |K(\zeta, z)| & \leq | K(p, z)| + \sqrt{ \sum_{j=1}^n \left|\left. \frac{\partial}{\partial {w_j} } \right|_{w=\eta} K(w, z) \right| ^2} \cdot |\zeta-p| \\ & \leq | K(p, z)| + \sqrt{ n } C_\eta |K( \eta, z) | \cdot |\zeta-p|, \end{align*} where $$ C_\eta:=\sum_{j =1}^n \left| \left. \frac{\partial}{\partial {z_j} } \right|_{z=\eta} \log K(z, z) \right| + n {\sqrt 2}{c^{-1}} $$ is a finite positive constant depending on $\eta$. Thus, $ C_U := \sup_{ \zeta \in {U}} C_\zeta >0 $ is also a finite constant, as the Bergman kernel is locally uniformly bounded. Choosing a smaller neighborhood $U_1$ of $p $ such that $ \sqrt{ n } C_U |\zeta-p| <\frac{1}{2}$ whenever $\zeta \in U_1$, we get $$ |K(\zeta, z)| \leq | K(p, z)| + \frac{1}{2} |K( \eta, z) | \leq | K(p, z)| + \frac{1}{2} \sup_{U_1} |K( \cdot, z) |, \quad \forall \zeta \in U_1, \, \forall z \in \Omega. $$ Since the above right hand side is independent of $\zeta$, it follows that $$ \sup_{U_1} |K(\cdot, z)| \leq 2 | K(p, z)|, \quad \forall z \in \Omega. $$ For simplicity, we still denote $U_1$ by $U$, which completes the proof. \end{proof} We remark that \eqref{2 similar} indeed holds on bounded symmetric domains, cf. \cite[Proposition 2.2]{DLT}. For general simply-connected complete K\"{a}hler manifolds with sectional curvatures bounded between negative constants, one expects good control of the Bergman kernel, as pointed out by Greene and Wu in \cite[Chap. 8]{GW}. \bibliographystyle{alphaspecial}
{ "timestamp": "2022-09-20T02:24:00", "yymm": "2209", "arxiv_id": "2209.08741", "language": "en", "url": "https://arxiv.org/abs/2209.08741" }
\section{Introduction} Interacting with computing devices using natural language is a long-standing pursuit in human-computer interaction \cite{bolt1980put, karat2002conversational, folstad2017chatbots}. Language, as both the input and the output, allows users to efficiently communicate with a computing system and access its functionalities when other I/O modalities are unavailable or cumbersome, e.g., for users with motor or visual impairments, or users who are situationally impaired while occupied by real-world tasks \cite{wobbrock2019situationally, sarsenbayeva2017challenges}. Recently, intelligent assistants, such as Google Assistant and Siri, have significantly advanced language-based interaction for performing simple daily tasks, e.g., setting a timer. However, these intelligent agents still fall short of enabling conversational interaction with existing mobile user interfaces, where many user tasks are performed \cite{li2017sugilite}. For example, answering a user's question about specific information on the screen often requires an agent to have a computational understanding of GUI screens and the tasks they support, which is lacking in existing intelligent assistants. To enable conversational interactions in mobile UIs, prior work has investigated several important technical building blocks, e.g., summarizing a mobile screen for users to quickly understand its purpose \cite{10.1145/3472749.3474765}, mapping language instructions to graphical user interface (GUI) actions \cite{li-etal-2020-mapping, pasupat-etal-2018-mapping, liu2018reinforcement}, or modeling GUIs so that they are more amenable to language-based interaction \cite{wu2021screen, li2021screen2vec, zhang2021screen, li2021vut, 10.1145/3472749.3474765}. Each of these works addresses a specific aspect of conversational interaction and requires significant effort, e.g., curating task-specific datasets with tens of thousands of examples and training dedicated models \cite{li-etal-2020-mapping, 10.1145/3472749.3474765, vut2021, li2020widget}. There is a broad spectrum of conversational interactions that can happen on mobile UIs, as Todi et al. have revealed~\cite{todi21conversations}. It is therefore desirable to develop a more lightweight and generalizable approach to realizing conversational interaction on GUIs. Recently, pre-trained large language models (LLMs) such as GPT-3 \cite{brown2020language} and PaLM \cite{chowdhery2022palm} have shown the ability to adapt themselves to various downstream tasks when \textit{prompted} with a handful of examples of the target task. Such generalizability makes it promising to support various conversational tasks on GUIs without having to develop a specific model for each task. However, the feasibility of doing so is unclear. Little work has been conducted to understand how LLMs, trained on natural language, can work with GUIs for conversational interaction. Therefore, in this paper we investigate the viability of utilizing LLMs to enable diverse language-based interactions with mobile UIs, and how to do so. Inspired by the theory of \textit{grounding} in communication \cite{clark1991grounding}, we propose a design space consisting of four types of unit conversations between a user and a conversational agent to establish mutual understanding when working together to accomplish mobile tasks. We categorize the conversations by whether they are human- or agent-initiated and whether the goal is to solicit or provide information.
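As an illustration, the four UI tasks examined later in this paper populate the four cells of this design space; the tabulation below is a shorthand preview giving one natural placement of the tasks along the two axes, with the precise definitions deferred to the design space discussion: \begin{center} \begin{tabular}{l|ll} & \textit{Soliciting information} & \textit{Providing information} \\ \hline \textit{Agent-initiated} & screen question-generation & screen summarization \\ \textit{User-initiated} & screen question-answering & mapping instruction to UI action \\ \end{tabular} \end{center}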
Further, we propose a set of prompting principles specifically developed for prompting LLMs for mobile UI tasks. To demonstrate the feasibility of our approach, we experimented with tasks from each conversation category, including \textit{screen question-generation}, \textit{screen summarization}, \textit{screen question-answering}, and \textit{mapping instruction to UI action}. Our evaluation shows that our approach achieves decent performance on each UI task with only two examples or fewer. Lastly, we discuss the use cases of the proposed methods and the implications of our investigation. Specifically, our work showcases LLMs' potential to help interaction designers and developers rapidly prototype conversational interactions and test them with users before investing effort and budget into developing dedicated datasets and models. In summary, our paper makes the following contributions: \begin{itemize} \item Our work is the first investigation of using LLMs to enable conversational interaction on mobile UIs, which advances the understanding of using LLMs for interaction tasks. \item We propose a design space that categorizes four types of user-agent conversation when collaboratively accomplishing a mobile task, laying a conceptual framework for others to further study the topic. \item We design a novel method for feeding GUIs to LLMs, which are pre-trained on natural language, and a set of techniques to prompt LLMs to perform a range of conversational tasks on mobile UI screens. These techniques produce competitive performance and can be used immediately by others in their work. \item We experiment with our approach on four language-based interaction tasks, demonstrating the feasibility of using LLMs for conversational GUI interaction and potentially lowering the threshold for developing conversational agents for GUIs. \end{itemize} \section{Related Work} Our work is related to the literature on bridging user interfaces and natural language, prompting techniques for LLMs, and the use of LLMs to facilitate interactive tasks. \subsection{Bridging GUIs with Natural Language} There has been increasing interest in using machine learning to bridge graphical user interfaces and natural language for use cases such as accessibility and multimodal interaction. For example, Widget Captioning \cite{li2020widget} and Screen Recognition \cite{zhang2021screen} predict semantically meaningful alt-text labels for GUI components. Screen2Words \cite{10.1145/3472749.3474765} goes a step further to predict text summaries that concisely describe the entire screen. Li et al. \cite{li-etal-2020-mapping} use a transformer-based model to map natural language instructions to mobile UI action sequences. These prior works typically train a dedicated model on a sizeable dataset collected for the task. In contrast, our work leverages the few-shot learning ability of LLMs to enable language-based UI tasks by providing only a small number of examples. To achieve this, we propose a novel method to represent the UI so that an LLM pre-trained on natural language can efficiently process it. Another relevant body of work develops conversational or multimodal agents that help the user accomplish mobile tasks \cite{li2017sugilite, li2019pumice, li2020multi, 8506506}. For example, SUGILITE \cite{li2017sugilite} enables users to create task automation on smartphones by demonstration and to perform the tasks through a conversational interface.
KITE \cite{8506506} helps developers create task-oriented bot templates from existing apps. Our work shows that LLMs can enable versatile language-based interactions when prompted with exemplars for different tasks, lowering the threshold for developing versatile multimodal agents. \subsection{Prompting Pre-trained Large Language Models} Finetuning pre-trained task-invariant models such as BERT has been a common practice for leveraging and adapting large models to specific tasks. The GPT-3 \cite{brown2020language} model with 175B parameters, however, introduced a new norm for leveraging pre-trained language models for downstream tasks: in-context few-shot learning. When prompted with only a few examples, the pre-trained model can generalize to various downstream tasks without updating its parameters. Recent studies have shown that prompting is one of the emergent abilities that appear only when the model size is large enough \cite{wei2022emergent}. While prompting LLMs may not outperform benchmark models on many tasks, it provides a lightweight method to achieve competitive performance on various tasks \cite{brown2020language, chowdhery2022palm}. When the prompt consists of $N$ pairs of input and output exemplars from the target task, it is typically referred to as $N$-shot learning; when more shots are provided, the model generally performs better on the target task \cite{brown2020language, chowdhery2022palm}. Prior work has also shown that by using specific prompting language such as \textit{"Let's think step by step"}, one can elicit reasoning from the model to perform tasks that require logical reasoning, such as solving math problems \cite{kojima2022think}, in a zero-shot setting. In addition, various prompting paradigms have been proposed to elicit reasoning from the language model \cite{wei2022chainofthought, zhou2022leasttomost, wang2022rae}. For example, chain-of-thought prompting \cite{wei2022chainofthought} uses the model to generate intermediate results (i.e., a chain of thoughts) before generating the final output. The core idea resembles the divide-and-conquer paradigm in algorithm design, which breaks a complicated problem into subproblems that can be solved more easily. Prompting LLMs remains an ongoing research topic in the community. Our work builds upon prior work to contribute a set of prompting techniques designed for conversational tasks on mobile UIs. \subsection{Interactive Applications of Large Language Models} LLMs have been applied to enable a broad range of language-related interactive applications in the HCI community \cite{chung2022talebrush, dang2022prompt, lee2022promptiverse, kim2022stylette, 2022saycan, lee2022coauthor, lee2022interactive, wu2022aichain, jian2022case, jiang2022nlp, liu2022will}. For example, Chung et al. \cite{chung2022talebrush} propose TaleBrush, a generative story ideation tool that uses line-sketching interactions with a GPT-based language model for control and sensemaking of a protagonist's fortune in co-created stories. Stylette \cite{kim2022stylette} allows users to modify web designs with language commands and uses LLMs to infer the corresponding CSS properties. Lee et al. \cite{lee2022coauthor} present CoAuthor, a dataset designed to reveal GPT-3's capabilities in assisting creative and argumentative writing. Since LLMs encode a wealth of semantic knowledge, they have also been used to support physical applications.
For example, SayCan \cite{2022saycan} extracts and leverages the knowledge priors within LLMs to execute real-world, abstract, long-horizon robot commands. Our work contributes the first effort to apply LLMs to enable various conversational interactions on mobile UIs. \begin{figure*}[ht] \centering \includegraphics[width=\linewidth]{figures/design-space2.png} \caption{The design space of the different types of unit conversation when a human and an agent work collaboratively toward completing user goals on mobile UIs. The space has two axes: initiative and purpose. An agent can take the initiative to solicit new information from the user or to provide information to the user, and vice versa.} \Description{} \label{dspace} \end{figure*} \section{Conversation for Mobile UI Tasks} \label{conversation_space} To guide a systematic investigation of using LLMs to support conversational mobile interaction, we derived a design space that characterizes conversations between users and agents when carrying out tasks on mobile UIs. Conversational interaction with mobile devices is typically embodied as human users conversing with an intelligent assistant. However, unlike chit-chat conversational bots that support free-form, open-ended conversations, intelligent assistants focus on goal-oriented conversations: for example, helping users accomplish mobile tasks such as booking hotels and checking emails. During the conversation, the user and the agent exchange the information necessary to achieve the user's goals, which resembles the process of \textit{grounding}. Communication theories \cite{clark1991grounding, groundinggrounding} define grounding as the process of building common ground based on shared mutual information in order to communicate successfully. According to the original model, a contribution to a conversation has two phases: 1) the \textit{Presentation phase}, in which the speaker produces an utterance addressed to a conversational partner, and 2) the \textit{Acceptance phase}, in which the partner explicitly acknowledges the utterance or implicitly accepts it by continuing with the next relevant utterance \cite{Brennan2000TheGP}. The original grounding theory addresses human-to-human conversation; we adapt the two-phase model to the context of human-agent conversations for mobile tasks. Since we consider only goal-oriented conversations, we assume that any contribution to the conversation is 1) based on the screen context and 2) made to exchange information necessary for goal completion. This excludes chit-chat-type conversations. We focus on unit conversations, which are simple and fundamental but can be chained together to form multi-turn conversations. We define a unit conversation as a single-turn conversation consisting of an agent turn and a user turn. As shown in Figure \ref{dspace}, we categorize unit conversations along two dimensions: \textit{Initiative} and \textit{Purpose}. Conversations can be mixed-initiative \cite{horvitz1999principles}: a unit conversation may be initiated by either the agent or the user. The purpose of the initiating turn can be either \textit{soliciting} or \textit{providing} information. This step resembles the \textit{Presentation phase}. Once a conversation is initiated, the receiver needs to provide the requested information or acknowledge that the message has been received. This step resembles the \textit{Acceptance phase}.
A unique characteristic of human-agent conversation is that, in addition to a language acknowledgment, the acceptance phase for the agent can be executing an action based on the user's utterance, e.g., clicking on the home button. Users who see the screen updated by the executed action realize that their message has been received. While the proposed categorization is not meant to cover all possible conversations, it provides a systematic perspective to facilitate our investigation of language interactions with GUIs. In the following subsections, we discuss each category in detail. For each category, we propose one representative task to investigate the feasibility of using LLMs to enable conversational interaction. The tasks are \textit{Screen Question-Generation}, \textit{Screen Summarization}, \textit{Screen Question-Answering}, and \textit{Mapping Instruction to UI Action}. We then introduce the prompting techniques used to enable each task in Section \ref{sec:prompting_techniques} and present the experimental results in Section \ref{exp}. \subsection{Agent-initiated Conversation} When an agent initiates a conversation, it can either solicit or provide information essential for the user to proceed on a mobile UI screen. \subsubsection{Agent solicits information from user} Mobile UIs usually require users to input information relevant to their goals, for example, the \textit{destination city} or \textit{travel dates} for hotel booking. This information is typically requested by input fields on the UI. When users see a text field for the destination city, they know the UI expects them to enter where they plan to travel. A conversational agent should similarly be able to solicit essential information from users using language, for example, by asking questions like \textit{"Which hotel do you want to search for?"} or \textit{"What is the check-in date of your stay?"}. We refer to this type of task as \textit{Screen Question-Generation}, since the questions are generated based on the current screen context. After the questions are asked, the user can respond to provide the requested information. \subsubsection{Agent provides information to user} An important purpose of GUIs is visually presenting information to users. Similarly, a conversational agent should be able to present the information embedded in GUIs to the user in language. Because UIs contain rich information and user needs differ \cite{todi21conversations}, there are many ways to deliver screen information. One example is \textit{Screen Summarization} \cite{10.1145/3472749.3474765}, which provides a short description of the purpose of the current screen, e.g., \textit{"A list of grocery stores nearby"} or \textit{"A step-by-step recipe for butter chicken"}. Such descriptions help users quickly understand the UI when visual information is unavailable. To minimize interaction effort, the user is not required to acknowledge the agent's information; however, the user can initiate follow-up conversations to solicit further information from the agent. \subsection{User-initiated Conversation} Similarly, the user can initiate conversations to request information or to proactively provide information for the agent to process. \subsubsection{User solicits information from the agent} A single mobile screen can sometimes contain a significant amount of text information, e.g., product specifications.
Finding the specific piece of information that the user cares about, e.g., the size of a television, requires the user to skim through long texts and spot the relevant detail. It becomes even less efficient if users have to access the text through a screen reader, which reads through the content on the screen until the relevant text is reached. To make information access more efficient, the user should be able to request specific screen information from the agent using language. When the user asks a question such as \textit{"What's the TV's size?"}, the agent should respond \textit{"50 inches"} based on the specifications presented on the screen. We call this type of conversation \textit{Screen Question-Answering}, similar to visual question-answering (VQA) \cite{antol2015vqa} and standard text question-answering \cite{rajpurkar2016squad}, but based on a mobile UI screen. \label{user_solicits_info} \subsubsection{User provides information to the agent} Users can take the initiative to communicate new information to guide the agent in carrying out mobile tasks. The information can be slot values for input fields on the screen, such as \textit{"My password is CHI2023."} It can also be language commands that convey user intent, such as \textit{"Turn on the WIFI"} or \textit{"Click on the search button."} After receiving information from the user, the agent can acknowledge it with a language response such as \textit{"Sure, I will click on the search button."} and/or perform the UI action of clicking the corresponding button---the task referred to as \textit{Mapping Instruction to UI Action}. \label{user_provide_info} \section{Prompting Large-Language Models for Mobile UI Tasks} \label{sec:prompting_techniques} Pre-trained LLMs support in-context few-shot learning via \textit{prompting}---instead of finetuning or re-training models for each new task, one can prompt an LLM with a few input and output exemplars of the desired task \cite{brown2020language, chowdhery2022palm, wei2022chainofthought, zhou2022leasttomost}. For some NLP tasks such as question-answering or translation, prompting can perform on par with previous benchmark approaches~\cite{brown2020language}. However, language models take only text input, while mobile UIs are multimodal, containing text, image, and structural information in their view hierarchy data and screenshots. In addition, since the UIs of a mobile app encapsulate the logic of target user tasks \cite{li20218kite}, logical reasoning based on UI context is essential for the model to support conversations toward task completion. These unique aspects of mobile UIs pose two open problems for designing prompts: \begin{enumerate} \item How to represent mobile UIs in text to leverage the few-shot prompting of LLMs? \item How to elicit reasoning based on the mobile UIs? \end{enumerate} We address these questions by describing our proposed prompting techniques and their design rationales below. Prompting LLMs remains an ongoing research problem. As the first to investigate prompting with mobile UIs, we provide a strong baseline approach and encourage future work to build upon our design and further study these open problems. \begin{figure*}[ht] \centering \includegraphics[width=1\linewidth]{figures/prompt-all-in-1-7.png} \caption{Left: An example illustrating the proposed prompt structure. A prompt starts with the preamble, which describes the task.
Following the preamble, there are zero or more task exemplars from the target task. Each exemplar consists of an input screen in HTML, a chain of thoughts (if applicable), and a task-specific output. Additional exemplars are appended to the end of the previous ones. Right: An illustration of prompting large language models in our use cases. The prompt contains $N$ exemplars from the target task. We append the HTML of the test screen to the end of the prompt and feed the combined prompt and test screen HTML as input to the LLM. The LLM then generates word tokens in an auto-regressive manner, following the exemplars' pattern to produce task-specific output.} \Description{} \label{prompt} \end{figure*} \subsection{Screen Representation} \subsubsection{Representing View Hierarchy as HTML} There are various ways to represent a mobile UI in text, e.g., concatenating all the text elements on the UI into a token sequence or using natural language sentences to describe UI elements, such as \textit{"a menu button in the top left corner."} To design our screen representation, we leverage the insight that if a prompt falls within the training data distribution of a large language model, few-shot learning is more likely to perform well. This is because LLMs are trained to predict subsequent tokens that maximize probability under the training data. LLMs' training data is typically scraped from the web and includes both natural language and code. For example, 5\% of PaLM's \cite{chowdhery2022palm} training data was scraped from GitHub, covering 24 common programming languages such as Java, HTML, and Python \cite{chowdhery2022palm}. Therefore, we represent a mobile UI in text by converting its view hierarchy data into HTML syntax. HTML is particularly suitable for representing mobile UIs as it is already a markup language for representing web UIs. The conversion is conducted by traversing the view hierarchy tree with a depth-first search. We detail our conversion algorithm in the following sections. Note that since the view hierarchy is not designed to be represented in HTML syntax, a perfect one-to-one conversion does not exist. Instead, our goal is to make the converted view hierarchy resemble HTML syntax, generating a data representation closer to the training data distribution. \subsubsection{View Hierarchy Properties} Converting a mobile UI's view hierarchy into HTML syntax preserves the detailed properties of UI elements and their structural relationships. The view hierarchy is a structural tree representation of the UI where each node, corresponding to a UI element, contains various properties such as the class, visibility to the user, and the element's bounds. However, using all element properties would result in lengthy HTML text, which may exceed the input length limit of the language model, e.g., 1920 tokens for PaLM and 2048 tokens for GPT-3. Therefore, we use a subset of properties related to the text description of an element: \begin{itemize} \item \texttt{class}: the Android object type, such as \texttt{TextView} or \texttt{Button}. \item \texttt{text}: element text that is visible to the user. \item \texttt{resource\_id}: text identifiers that describe the referenced resource. \item \texttt{content\_desc}: the content description that describes the element for accessibility purposes---the alt-text. \end{itemize} \subsubsection{Class Mapping} We developed heuristics to map the Android classes to HTML tags with similar functionalities; the individual mappings are detailed below.
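As a concrete illustration of this conversion, the following is a minimal sketch in Python. The dictionary-based node format and the helper names are illustrative assumptions rather than the exact implementation, and for brevity the sketch converts every node, whereas our prompts keep only leaf nodes visible to the user (see the prompt structure discussion below).

\begin{verbatim}
# Minimal sketch of the view-hierarchy-to-HTML conversion.
# Node format and helper names are illustrative assumptions.
CLASS_TO_TAG = {
    "TextView": "p",
    "Button": "button", "ImageButton": "button",
    "ImageView": "img",
    "EditText": "input",
}  # all other classes (e.g., LinearLayout) fall back to <div>

def node_to_html(node, counter):
    """Depth-first conversion of one node and its children."""
    tag = CLASS_TO_TAG.get(node.get("class", ""), "div")
    attrs = ['id="%d"' % counter[0]]  # numeric index in DFS order
    counter[0] += 1
    # resource_name (e.g., "check_in_date") goes into the class
    # attribute, with underscores replaced by spaces.
    res_name = node.get("resource_id", "").split("/")[-1]
    if res_name:
        attrs.append('class="%s"' % res_name.replace("_", " "))
    if node.get("content_desc"):  # alt-text for accessibility
        attrs.append('alt="%s"' % node["content_desc"])
    children = "".join(node_to_html(c, counter)
                       for c in node.get("children", []))
    return "<%s %s>%s%s</%s>" % (tag, " ".join(attrs),
                                 node.get("text", ""), children, tag)

screen = {"class": "LinearLayout", "children": [
    {"class": "TextView", "text": "Book a hotel",
     "resource_id": "com.app:id/title_text"},
    {"class": "EditText", "resource_id": "com.app:id/check_in_date"},
]}
print(node_to_html(screen, counter=[0]))
\end{verbatim}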
We map \texttt{TextView} to the \texttt{<p>} tag, as both are used for presenting text; all button-related classes such as \texttt{Button} or \texttt{ImageButton} are mapped to \texttt{<button>}. We map all image-related classes such as \texttt{ImageView} to \texttt{<img>}, covering both icons and images. Lastly, we convert the text input class \texttt{EditText} to the \texttt{<input>} tag. We focus on the most common element classes for simplicity; the rest of the Android classes, including containers such as \texttt{LinearLayout}, are mapped to the \texttt{<div>} tag. \subsubsection{Text, Resource\_Id, and Content Description} We insert the \texttt{text} property of an Android element between the opening and closing HTML tags, following the standard syntax for text in HTML. The \texttt{resource\_id} property contains three entities: \texttt{package\_name}, \texttt{resource\_type}, and \texttt{resource\_name}. Among them, \texttt{resource\_name} usually contains an additional description of an element's functionality or purpose, written by the developers. For example, in the Gmail app, an element with a \texttt{resource\_name} of \texttt{"unread\_count\_textView"} shows how many emails are unread, whereas \texttt{"date"} means the element shows the date a mail was received. Such information helps the model better understand the screen context. We insert the \texttt{resource\_name} tokens that describe each element's purpose as additional identifiers in the \texttt{"class"} attribute, which in HTML originally contains identifiers linked to a style sheet or used by JavaScript to access the element. Word tokens in \texttt{resource\_name} are typically concatenated with underscores, which we replace with spaces when inserting them. Lastly, we insert \texttt{content\_desc} as the \texttt{"alt"} attribute in the HTML tag when the property is present. \subsubsection{Numeric Indexes for Referencing} To help the model reference specific UI elements, we insert a numeric index into each element as the \texttt{"id"} attribute. The indexes are generated in depth-first search order over the view hierarchy tree. For tasks such as predicting which button to click based on a language instruction, the model can refer to elements by numeric index, which is more efficient and space-saving than spelling out the complete HTML tag. \subsection{Chain-of-Thought Prompting} Mobile UIs encapsulate the logic of user tasks \cite{li20218kite}; therefore, it is vital for models to perform reasoning when used for conversational interaction. LLMs have demonstrated abilities to reason \cite{brown2020language, chowdhery2022palm, 2022saycan}, as they capture real-world knowledge during training on large amounts of text. Recent work further shows that an LLM's reasoning ability can be improved by generating and chaining intermediate results to obtain the final answer, namely \textit{Chain-of-Thought} prompting \cite{wei2022chainofthought}. The idea is straightforward: a chain of thoughts describing intermediate results is appended before the answer in each prompt exemplar, and the model then follows this pattern to generate its own chain of thoughts during inference. \textit{Chain-of-Thought} prompting has been shown to be helpful for reasoning tasks, and the results are more interpretable, as the model articulates its thought process before producing the answer. However, prior work has not investigated whether it can facilitate reasoning when generating conversations based on mobile UIs.
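To make the overall prompt format concrete before describing its structure formally, here is a minimal sketch of how a prompt with an optional chain of thoughts per exemplar can be assembled. The exemplar strings are invented placeholders, not our actual prompts (those are listed in the appendix).

\begin{verbatim}
# Illustrative prompt assembly with an optional chain of thoughts
# per exemplar. Exemplar content here is invented, not the actual
# prompts used in the experiments (see the appendix for those).
def build_prompt(preamble, exemplars, test_screen_html):
    parts = [preamble]
    for ex in exemplars:
        parts.append("Screen: " + ex["screen_html"])
        if ex.get("chain_of_thought"):   # intermediate results first
            parts.append(ex["chain_of_thought"])
        parts.append(ex["output"])       # e.g., "<SOS> ... <EOS>"
    # The model completes the prompt from the test screen onward.
    parts.append("Screen: " + test_screen_html)
    return "\n".join(parts)

prompt = build_prompt(
    "Given a screen, summarize its purpose.",
    [{"screen_html": "<div> ... </div>",
      "output": "<SOS> A hotel search page. <EOS>"}],
    "<div> ... </div>")
\end{verbatim}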
Given this potential, we incorporate chain-of-thought prompting in the experimental task where it is applicable. \subsection{Prompt Structure} We follow a prompt structure shown to be effective in \cite{brown2020language}. Each prompt starts with a \textit{preamble}, which explains the prompt's purpose. The preamble is followed by multiple exemplars consisting of the input, a chain of thoughts (if applicable), and the output for the task. Each exemplar's input is a mobile screen in HTML syntax. To better leverage few-shot learning while complying with the LLM's input length limits, we only include leaf nodes visible to the user, as non-leaf nodes are usually containers without textual information. Following the input, a chain of thoughts is provided to elicit logical reasoning from the LLM, if applicable to the task. The output is the desired outcome for the target task, e.g., a screen summary or an answer to the question asked by the user. Figure \ref{prompt}-left shows an example of a 1-shot prompt; few-shot prompting is achieved by including more than one exemplar in the prompt. During prediction, we feed the model the prompt with a new input screen appended at the end. Therefore, for $N$-shot learning, the prompt consists of a preamble, $N$ exemplars, and the test screen for prediction, as shown in Figure \ref{prompt}-right. \section{Feasibility Experiments} As shown in Figure \ref{tasks}, we demonstrate the feasibility of using LLMs to enable conversations on GUIs through experiments with four tasks: 1) \textit{Screen Question-Generation}, 2) \textit{Screen Summarization}, 3) \textit{Screen Question-Answering}, and 4) \textit{Mapping Instruction to UI Action}. Following the common practice of few-shot prompting \cite{wei2022chainofthought, brown2020language}, we select a handful of exemplars to construct the prompt for each task. We then evaluate effectiveness using task-specific metrics detailed in each experiment. All studies were conducted with the PaLM model \cite{chowdhery2022palm}, which performs similarly to other LLMs such as GPT-3 \cite{wei2022chainofthought}. The PaLM model is trained with a maximum input length of 1920 tokens; we therefore limit the number of exemplars in a prompt to two or fewer, excluding the test screen, to avoid exceeding the length limit. The experiments aim to understand what can be achieved by simply prompting LLMs with a few exemplars from the target tasks, compared against a baseline or, where available, a benchmark model. \label{exp} \begin{figure}[ht] \centering \includegraphics[width=1\linewidth]{figures/task_example7.png} \caption{We experimented with four UI tasks: screen question-generation, screen summarization, screen question-answering, and mapping instruction to UI action. Each task is associated with a conversation category described in Section \ref{conversation_space}. Bounding boxes on the mobile UI highlight elements relevant to the example conversational interactions from each task. } \Description{} \label{tasks} \end{figure} \subsection{Screen Question-Generation} \subsubsection{Task Formulation} Given a mobile UI screen, the goal of screen question-generation is to synthesize coherent, grammatically correct natural language questions relevant to the UI elements requiring user input. The task occurs when the agent requests user input for those UI elements. \begin{figure*}[ht] \centering \includegraphics[width=1\linewidth]{figures/question-7.png} \caption{Example screen questions generated by the LLM.
Left: The LLM can utilize screen context to generate grammatically correct questions relevant to each input field on the mobile UI, while the template approach falls short. Right: We observed that the LLM can use its prior knowledge to combine multiple related input fields into a single question. The example shows the LLM combining the fields for the minimum and maximum prices into one question about the price range. Elements relevant to each question are highlighted with corresponding colors and numbered indexes.} \Description{} \label{q} \end{figure*} \subsubsection{Prompt Construction} Figure \ref{prompt} shows an example prompt we used to generate questions. We use a preamble of \textit{"Given a screen, the agent needs to identify the elements requiring user input and generates corresponding questions."}. We used chain-of-thought techniques to generate three intermediate results: 1) the input field count, 2) a screen summary, and 3) an input enumeration. The input field count is supplied as a ground-truth count extracted from the screen HTML; we found this step essential to prevent the model from omitting some input elements and generating only a subset of questions. Next, we ask the model to summarize the screen's purpose, which produces the screen context used to add detail to the questions. After that, the model enumerates which elements ask for what information. After the chain of thoughts, the model generates the questions, enclosed by \texttt{<SOQ>} and \texttt{<EOQ>} tokens, representing start-of-question and end-of-question, respectively. The tokens serve as delimiters for conveniently parsing the questions from the output text generated by the model; we insert special tokens for parsing model output in a similar way in the rest of the experiments. An example prompt can be found in appendix \ref{prompt:1}. \subsubsection{Experimental Setup} We aim to understand the quality of LLMs for natural language generation based on UI elements. Since there is currently no existing dataset for screen question-generation, we follow the common practice of evaluating language generation quality with human ratings. We randomly sampled 400 screens from the RICO dataset \cite{Deka:2017:Rico}. Each of these screens contains at least one \texttt{EditText} element, i.e., a text input field for users to enter information on the UI, and we generate questions for every input field. We randomly selected another two screens from the RICO dataset as exemplars to include in the prompt, and labeled the questions generated from the sampled screens using a prompt constructed with these two exemplars. Some screens contain multiple input fields, and sometimes several of them are related and can be asked about collectively. For example, three fields asking for the birth year, month, and date can be combined into a single question: \textit{"When is your birthday?"}. Combining questions can lead to a more efficient conversation between an agent and a user. We therefore include an exemplar that combines related questions in the prompt, to see whether the model also learns to combine them. We compare the LLM's results with a rule-based approach that uses the words in \texttt{resource\_id}, referred to as \textit{res\_tokens}, to fill in the template \textit{"What is \{res\_tokens\}?"}.
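A minimal sketch of this template baseline, under the assumption that \textit{res\_tokens} are the underscore-separated words of an element's \texttt{resource\_name}:

\begin{verbatim}
# Sketch of the rule-based baseline: one templated question per
# input field, filled with words from the field's resource_id.
def template_question(resource_id):
    res_name = resource_id.split("/")[-1]    # e.g., "check_in_date"
    res_tokens = res_name.replace("_", " ")  # -> "check in date"
    return "What is %s?" % res_tokens

print(template_question("com.hotels:id/check_in_date"))
# -> What is check in date?
\end{verbatim}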
We use \textit{res\_tokens} instead of \textit{text} because most text input fields are blank by default, and \textit{res\_tokens} contain the most meaningful description of an input field. We recruited 17 raters who work as professional data labelers at a tech company to provide ratings. To ensure the quality of the labels, a group of quality auditors sampled and reviewed roughly 5\% of the questions labeled by every rater. Each labeling task presents a screenshot of the mobile UI and a generated question, with the \texttt{EditText} element associated with the question highlighted by a bounding box. We solicit human ratings on whether the questions are grammatically correct and relevant to the input fields for which they were generated. In addition to the human-rated language quality, we automatically test how well the LLM covers all the elements that require question generation. The evaluation metrics include: \begin{itemize} \item \textbf{Grammar Correctness}: How correct is the grammar of a generated question? Are the sentences intelligible and plausible? This metric tests general language generation quality and is rated on a 5-point Likert scale, with 1 as completely incorrect and 5 as completely correct. \item \textbf{UI Relevance}: Whether a generated question is relevant to the highlighted UI element. This metric tests whether the model correctly establishes the connection between a UI element and a question, and is rated on a binary scale as either relevant or not relevant. \item \textbf{Question Coverage}: How well can the model identify the elements on the screen that need question generation? This metric is computed automatically by comparing the indices of the ground-truth input elements with those identified by the model within the chain of thoughts. \end{itemize} \begin{table} \caption{Grammar correctness, UI relevance, and question coverage results from the screen question-generation experiment.} \label{tab:human} \begin{tabular}{lccc} \toprule Method& Grammar & Relevance & Coverage F1\\ \midrule Template & 3.60 ($\sigma$=0.69) & 84.1\% & 100\%\\ LLM & 4.98 ($\sigma$=0.07) & 92.8\% & 95.9\%\\ \bottomrule \end{tabular} \end{table} \subsubsection{Results} We evaluated 931 questions for both the LLM and the template-based approach. Three different human raters examined each question to obtain aggregated scores. Table \ref{tab:human} shows the results of our evaluation. Our approach achieves a near-perfect average score of 4.98 on grammar correctness, while the rule-based approach receives an average rating of 3.60. A Mann–Whitney U test shows that the difference between the two methods is statistically significant (p < 0.0001). The LLM also generates relevant questions at a rate 8.7 percentage points higher than the baseline. In terms of question coverage, our approach achieves an F1 score of 95.9\% (precision = 95.4\%, recall = 96.3\%); since the rule-based method iterates through every input field to generate questions, its coverage is naturally 100\%. Altogether, the results show that our approach can precisely identify input elements and generate relevant, intelligible questions. We further analyzed the model's behavior and found that when generating a question for a field, the model considers both the field element and the \textit{screen context} (information from other screen objects). For example, Figure \ref{q} shows how the model leveraged screen context to generate four questions for the input fields on a credit card registration screen.
While the baseline outputs use \textit{res\_tokens} to convey somewhat relevant information, they are less intelligible than the LLM output and do not articulate the specific information requested by the fields. In contrast, all four questions generated by the LLM are grammatically correct and ask for relevant information. For Question 3 in Figure \ref{q}, the LLM additionally uses the text above the input field to ask for the \textit{"last 4 digits of SSN"}. Note that the model does not simply copy the screen context; it blends the context into the generated question. For instance, Question 2 asks for the "credit card expiration date," while the text above the field does not mention the word "credit." We also observed that the model exhibited the behavior of combining related fields into a single question on three test screens. For example, Figure \ref{q} shows the model combining two input elements asking for minimum and maximum prices into the single question \textit{"What is the price range?"}. \begin{figure*}[ht] \centering \includegraphics[width=1\linewidth]{figures/summaries7.png} \caption{Example summaries generated by prompting the LLM with 2 exemplars (2-shot learning). The LLM is more likely to use specific text on the screen to compose summaries (top-left and bottom-right) and to generate longer summaries that leverage multiple key elements on the screen (top-right). We also observed that the LLM uses its prior knowledge to help summarize the screens; for example, the bottom-right shows the LLM inferred the screen is for the London Tube system from the station names displayed on the screen. UI elements relevant to the highlighted phrases in the summaries are called out by bounding boxes with corresponding colors. Screen2Words outputs were obtained from the authors of the original paper.} \Description{} \label{summary} \end{figure*} \begin{table*} \caption{Screen summarization performance on automatic metrics. } \label{tab:sum_number} \begin{tabular}{lccccccc} \toprule Model & BLEU-1 & BLEU-2 & BLEU-3 & BLEU-4 & CIDEr & ROUGE-L & METEOR \\ \midrule 0-shot LLM & 7.8 & 6.4 & 5.9 & 5.7 & 1.5 & 3.4 & 4.5 \\ 1-shot LLM & 42.3 & 21.1 & 14.8 & 12.1 & 40.9 & 28.9 & 15.3 \\ 2-shot LLM & 45.0 & 25.1 & 17.6 & 14.1 & 39.9 & 33.0 & 17.7 \\ Screen2Words~\cite{10.1145/3472749.3474765} & 65.5 & 45.8 & 32.4 & 25.1 & 61.3 & 48.6 & 29.5 \\ \bottomrule \end{tabular} \end{table*} \subsection{Screen Summarization} \label{summarization} \subsubsection{Task Formulation} Screen summarization was proposed in \cite{10.1145/3472749.3474765} as the automatic generation of descriptive language overviews that cover essential functionalities of mobile screens. The task helps users quickly understand the purpose of a mobile UI, which is particularly useful when the UI is not visually accessible. \subsubsection{Prompt Construction} We use a preamble of \textit{"Given a screen, summarize its purpose."} We did not use chain-of-thought prompting, as no intermediate results need to be generated for this task. We place $N$ pairs of screen HTML and corresponding summaries after the preamble, where $N=0,1,2$ represents $N$-shot learning. The output summaries are enclosed by the special tags \texttt{<SOS>} and \texttt{<EOS>}, marking the start and end of a summary, respectively. An example prompt can be found in appendix \ref{prompt:2}. \subsubsection{Experiment Setup} We use the Screen2Words dataset \cite{10.1145/3472749.3474765} to test the LLM's ability to summarize screens.
The dataset contains human-labeled summaries for more than 24k mobile UI screens, each with five summary labels. To gauge the quality of our approach, we evaluate LLM summarization on the Screen2Words test set used by the benchmark, consisting of 4310 screens from 1254 unique apps. We randomly sampled two screens from the dataset, together with one of their corresponding summaries each, as the exemplars for prompt construction. We use the same automatic metrics reported in the original paper, including BLEU, CIDEr, ROUGE-L, and METEOR. \subsubsection{Results} Table \ref{tab:sum_number} shows the screen summarization results on automatic metrics. The model could not generate meaningful summaries in the zero-shot setting (without any exemplar). This result is expected, as the LLM's training data may not have covered the task of screen summarization. When the model is provided with one exemplar, performance improves significantly across all metrics; additional examples in the prompt yield only marginally higher scores. LLM performance is generally lower than that of the full benchmark model from the original paper across the automatic metrics. However, the original paper also found that automatic scores may not correlate well with human evaluation results. One reason may be that certain words appear repeatedly across all summaries, such as 'display of', 'screen', or 'app', which provide general information about the UI. Training a model on tens of thousands of examples containing those words teaches it to predict these words frequently; since they also appear frequently in the test set's ground-truth references, the model can obtain high scores by favoring them. When manually reviewing the results, we found that the summaries generated by the LLM are of high quality. However, since the model is not fine-tuned on the whole dataset, the language distribution the LLM uses to predict summaries differs from that of the Screen2Words model, leading to lower scores on automatic metrics when compared with the human labels from Screen2Words. Figure \ref{summary} shows screens with summaries annotated by human labelers and the outputs from both Screen2Words and the LLM. We found that the LLM is more likely to use specific text on the screen to compose summaries, such as "San Francisco" (top-left) and "Tiramisu Cake Pop" (bottom-left), while the Screen2Words labels and the benchmark model outputs tend to be more generic. Moreover, the LLM is more likely to generate longer summaries that leverage multiple key elements on the screen. For example, for the top-right screen, the LLM composes a longer summary by leveraging the app name, the send file button, and the recipient fax button: \textit{"FaxFile app screen where a user can select a file to send via fax and pick recipients."} We also observed that the LLM's prior knowledge benefits screen summarization. For example, the bottom-right screen shows a station search results page for London's tube system. The LLM predicts \textit{"Search results for a subway stop in the London tube system."} However, the input HTML contains neither the word \textit{"London"} nor \textit{"tube"}; the model thus utilized its prior knowledge about the station names, learned from large text corpora, to infer that they belong to the London Tube. Such summaries would likely not be generated by a model trained only on the Screen2Words dataset.
This partly explains the gap between the LLM's automatic scores and the observed quality of its summaries. \begin{figure*}[ht] \centering \includegraphics[width=1\linewidth]{figures/screenQA-2.png} \caption{Left: Example results from the screen QA experiment. The LLM significantly outperforms the off-the-shelf QA model DistilBERT (Baseline). Right: Examples of the three metrics used to assess the LLM's performance on screen QA, including Exact Match, Contains Ground Truth, and Sub-String of Ground Truth. For each question, the UI element related to its ground truth is highlighted with a bounding box in the corresponding color and with the question index. } \Description{} \label{fig:qa} \end{figure*} \begin{table*} \caption{The LLM's performance on the screen QA task.} \label{tab:qa_number} \begin{tabular}{lcccc} \toprule Model & Exact Matches & Contains GT & Sub-String of GT & Micro-F1 \\ \midrule 0-shot LLM & 30.7\% & 6.5\% & 5.6\% & 31.2\%\\ 1-shot LLM & 65.8\% & 10\% & 7.8\% & 62.9\% \\ 2-shot LLM & 66.7\% & 12.6\% & 5.2\% & 64.8\% \\ DistilBERT~\cite{Sanh2019distilbert} & 36.0\% & 8.5\% & 9.9\% & 37.2\% \\ \bottomrule \end{tabular} \end{table*} \subsection{Screen Question-Answering (QA)} \subsubsection{Task Formulation} Given a mobile UI and an open-ended question asking about information on the UI, the model should provide the correct answer. We focus on factual questions, which require answers based on facts presented on the screen. \subsubsection{Prompt Construction} We use a preamble of \textit{"Given a screen and a question, provide the answer based on the screen information."}. We did not use chain-of-thought prompting, as no intermediate results need to be generated for this task. In our experiments, $N$ sets of screen HTML and question-answer pairs follow the preamble, where $N=0, 1, 2$ represents $N$-shot learning. The output answers are enclosed by the special tags \texttt{<SOA>} and \texttt{<EOA>}, marking the start and end of an answer, respectively. The prompt used in the study can be found in appendix \ref{prompt:3}. \subsubsection{Experiment Setup} We use a dataset of 300 human-labeled question-answer pairs from 121 unique screens in the RICO dataset \cite{Deka:2017:Rico}. It is a preliminary dataset obtained from the authors of a large-scale data collection effort for screen question-answering \cite{screenqa}. The data labeling process involves two stages: question annotation and answer annotation. For question annotation, annotators were asked to frame questions given a screenshot as the context. They were expected to compose questions that only inquire about information requiring no logical reasoning, which can be read directly off the screen. After that, another set of annotators answered the previously annotated questions given the associated screenshots. We randomly held out three screens, associated with 12 QA pairs, for prompt construction, and randomly selected one QA pair from each of these screens to include in the prompt. Of the remaining 288 QA pairs, 57 have ground-truth answers that are not present in the view hierarchy data. This is expected because the labeling is based on screenshots instead of view hierarchies, and many screens in the RICO dataset contain inaccurate view hierarchy data \cite{li2022denoising}. In this work, we focus on the answers that are present in the view hierarchy data; incorporating screenshot information into the LLM will be a critical direction to investigate in the future.
Since the answers are generated rather than retrieved from the screen HTML, some correct answers may not completely match the labels, for example, \textit{"2.7.3"} versus \textit{"version 2.7.3"}. Therefore, we report performance on four metrics: 1) \textit{Exact Match}: the predicted answer is identical to the ground truth; 2) \textit{Contains GT}: the answer is longer than the ground truth and fully contains it; 3) \textit{Sub-String of GT}: the answer is a sub-string of the ground truth; 4) \textit{Micro-F1}: the micro F1 score, calculated from the number of shared words between the predicted answer and the ground truth across the entire dataset. We consider \textit{Exact Match} answers correct, and those falling under \textit{Contains GT} and \textit{Sub-String of GT} relevant; the three categories are mutually exclusive. We compare the LLM with the off-the-shelf QA model DistilBERT \cite{Sanh2019distilbert}, using the \texttt{distilbert-base-cased-distilled-squad} implementation on \texttt{huggingface.co}, which achieved 79.6\% Exact Match and an 87.0\% F1 score on the SQuAD dataset \cite{rajpurkar2016squad}. Unlike existing QA models that extract answers from the input text, the LLM may generate answers in representations different from the ground truth (e.g., 9/15 and September 15th); our string-matching metrics may therefore under-report its scores. \subsubsection{Results} Table \ref{tab:qa_number} shows the QA results under different settings. Unlike screen summarization, we found that the LLM can already perform screen QA in the zero-shot setting: 30.7\% of the generated answers match the ground truth exactly, 6.5\% contain the ground truth, and 5.6\% are sub-strings of the ground truth. The zero-shot performance is likely because the training data of LLMs already contains much QA-related data from the internet, so the model had already learned to perform question-answering. The off-the-shelf DistilBERT model achieves 36\% Exact Match, 8.5\% Contains GT, and 9.9\% Sub-String of GT, slightly better than the zero-shot LLM. DistilBERT performs much worse on our task than on standard question-answering benchmarks, possibly because it was not trained on HTML data. Similar to screen summarization, providing a single exemplar boosts performance significantly, achieving 65.8\% Exact Match, 10\% Contains GT, and 7.8\% Sub-String of GT, summing to 83.6\% of answers being correct or relevant. However, we again found that the 2-shot setting yields only a moderate improvement over the 1-shot setting. Figure \ref{fig:qa}-left shows example QA results from our experiment using 2-shot learning. The LLM can effectively understand the screen and generate a correct or relevant answer. For the screen shown, the LLM correctly answers Q1, Q2, and Q4. For Q3, the LLM generates an answer containing the ground truth "Dec 23rd, 2016" but also includes the time of day, "4:50 am". In contrast, the baseline model, trained on a standard text question-answering corpus, only managed to answer Q4 correctly; for Q3, its answer contains only "2016", a sub-string of the ground truth. Q2 shows that the baseline model sometimes incorrectly retrieves HTML code from the input screen. Figure \ref{fig:qa}-right shows the different relationships between a predicted answer and the ground truth.
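A minimal sketch of these string-matching metrics follows; the normalization choices (lowercasing, whitespace tokenization) are our assumptions rather than the exact evaluation code.

\begin{verbatim}
# Sketch of the screen QA metrics. Lowercasing and whitespace
# tokenization are illustrative assumptions.
def match_category(pred, gold):
    pred, gold = pred.strip().lower(), gold.strip().lower()
    if pred == gold:
        return "exact_match"
    if gold in pred:
        return "contains_gt"      # answer fully contains ground truth
    if pred and pred in gold:
        return "substring_of_gt"  # answer is part of ground truth
    return "no_match"

def micro_f1(preds, golds):
    tp = fp = fn = 0
    for p, g in zip(preds, golds):
        p_words, g_words = p.lower().split(), g.lower().split()
        shared = sum(min(p_words.count(w), g_words.count(w))
                     for w in set(p_words))
        tp += shared
        fp += len(p_words) - shared   # predicted words not in gold
        fn += len(g_words) - shared   # gold words not predicted
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0
\end{verbatim}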
\subsection{Mapping Instruction to UI Action} \label{study:4} \subsubsection{Task Formulation} Given a mobile UI screen and a natural language instruction to control the UI, the model needs to identify the correct object on which to perform the instructed action. For example, when instructed with "Open Gmail," the model should correctly identify the Gmail icon on the home screen. This task is useful for controlling mobile apps via language input, such as voice access. \subsubsection{Prompt Construction} We use a preamble of \textit{"Given a screen, an instruction, predict the id of the UI element to perform the instruction."}. The preamble is followed by $N$ exemplars consisting of the screen HTML, an instruction, and the ground-truth id, where $N=0, 1, 2$ represents $N$-shot learning. We did not use chain-of-thought prompting, as no intermediate results need to be generated for this task. The output answers are enclosed by the special tags \texttt{<SOI>} and \texttt{<EOI>}, marking the start and end of the predicted element id, respectively. An example prompt can be found in appendix \ref{prompt:4}. \subsubsection{Experiment Setup} We use the PixelHelp dataset \cite{li-etal-2020-mapping}, which contains 187 multi-step instructions for performing everyday tasks on Google Pixel phones, such as switching Wi-Fi settings or checking emails. We randomly sampled one screen from each unique app package in the dataset to serve as prompt modules, and randomly sampled from these modules when constructing prompts. We conducted experiments under two conditions: 1) in-app and 2) cross-app. In the former, the prompt contains a prompt module from the same app package as the test screen; in the latter, it does not. We expect the in-app condition to perform better. Following the original paper, we report the percentage of partial and complete matches of target element sequences. \subsubsection{Results} Our experimental results show that the 0-shot setting cannot perform the task at all, with nearly zero partial or complete match accuracy. In the cross-app condition, one-shot prompting achieves 74.69 partial and 31.67 complete match accuracy, meaning that roughly 75\% of the elements associated with the instructions were correctly predicted and more than 30\% of the tasks were predicted entirely correctly. The 2-shot setting offers incremental improvements on both metrics. In the in-app condition, both the 1-shot and 2-shot settings achieve higher scores than their cross-app counterparts. Our best-performing setting is the 2-shot, in-app LLM, which achieves 80.36 partial and 45.00 complete match accuracy, as shown in Table \ref{tab:grounding}. While our approach underperforms the benchmark results of the Seq2Act model \cite{li-etal-2020-mapping}, it shows impressive few-shot performance using only two examples in the prompt, whereas Seq2Act was trained on several dedicated datasets with hundreds of thousands of examples \cite{li-etal-2020-mapping}. Few-shot prompting is challenging, as the model sees only a few examples from the target task and does not update its parameters \cite{brown2020language}; we therefore do not expect prompted LLMs to consistently outperform dedicated models across all tasks.
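A sketch of how the model output can be parsed and scored is shown below; the delimiter tags follow our prompt design, while the exact definitions of partial and complete match are our reading of the original paper's metrics.

\begin{verbatim}
import re

# Parse the predicted element id between the <SOI>/<EOI> delimiters.
def parse_element_id(model_output):
    m = re.search(r"<SOI>\s*(\d+)\s*<EOI>", model_output)
    return int(m.group(1)) if m else None

# Partial match: fraction of correctly predicted steps over all
# tasks; complete match: fraction of tasks with every step correct.
def score(pred_seqs, gold_seqs):
    step_hits = total_steps = full_hits = 0
    for pred, gold in zip(pred_seqs, gold_seqs):
        hits = sum(p == g for p, g in zip(pred, gold))
        step_hits += hits
        total_steps += len(gold)
        full_hits += int(hits == len(gold) == len(pred))
    return step_hits / total_steps, full_hits / len(gold_seqs)

print(parse_element_id("<SOI> 12 <EOI>"))  # -> 12
\end{verbatim}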
\begin{table} \caption{Mapping Instruction to UI Action Results} \label{tab:grounding} \begin{tabular}{lcc} \toprule Model & Partial & Complete \\ \midrule 0-shot LLM & 1.29 & 0.00 \\ 1-shot LLM (cross-app) & 74.69 & 31.67 \\ 2-shot LLM (cross-app) & 75.28 & 34.44 \\ 1-shot LLM (in-app)& 78.35 & 40.00 \\ 2-shot LLM (in-app)& 80.36 & 45.00 \\ \midrule Seq2Act~\cite{li-etal-2020-mapping} & 89.21 & 70.59 \\ \bottomrule \end{tabular} \end{table} \section{Discussions and Limitations} We discuss the implications of our investigation, the limitations of our approach, and how future work can build upon it. \subsection{Implications for Language-Based Interaction} Many language-based UI tasks, such as screen question-generation and screen question-answering, have no feasible heuristic alternatives. Achieving these tasks has therefore required significant effort and budget to create datasets and develop models, making fast prototyping and iteration of such interaction capabilities difficult. Our feasibility experiments, on the other hand, demonstrate that the proposed techniques can achieve decent performance on various UI tasks without requiring sizeable datasets. An important takeaway from our studies is that \textit{prototyping novel language interactions on mobile UIs can be as easy as designing an exemplar}. As a result, an interaction designer can quickly create functioning mock-ups to test new ideas with end users, and developers and researchers can explore different formulations of a target task before investing significant effort into developing new datasets and models. For example, as discussed in Section \ref{summarization}, the summaries in the Screen2Words dataset follow a particular sentence structure, while there are many other ways to summarize a screen. Researchers could write various summary exemplars to prompt an LLM and inspect how well these exemplars work on different screens before spending effort developing dedicated datasets and models. Another use case of our approach is end-user programming. When a conversational agent fails to understand or carry out the tasks associated with the user's commands, prior work has leveraged programming-by-demonstration techniques that ask users to provide task demonstrations to teach the agent \cite{li2017sugilite, li2019pumice}. Our work offers an alternative way for users to teach the model: specifying the desired outcome of language commands. The model can then use this user input as prompting exemplars to adapt to new tasks. \subsection{Chaining Unit Conversations for Real-World Scenarios} We have proposed four types of unit conversation in our categorization and conducted experiments demonstrating LLMs' capability to enable each of them. In real-world use cases, multiple unit conversations can be chained together to fulfill complex UI tasks; a sketch of such a chain follows this paragraph. For example, when a user wants to book a hotel, they can issue the command "Open hotel booking app," and the model would predict which element on the screen to click. Once the app is open, the model can provide a summary such as "Currently showing the search page of the hotel booking app." Following the summary, the model could ask questions based on the input fields on the screen, such as "Which city do you plan to stay in?" or "When do you plan to check in?" When the search results are shown, the user could ask, "Which hotel has the highest ratings?", and the model would provide the answer based on the screen information.
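The following hypothetical control loop illustrates such chaining; each \texttt{llm\_*} helper stands for one prompted LLM call from our experiments and is stubbed so the sketch runs, so none of these names denote a real API.

\begin{verbatim}
# Hypothetical loop chaining unit conversations. The llm_* helpers
# stand for prompted LLM calls and are stubbed so the sketch runs.
def llm_summarize(screen): return "Search page of a hotel booking app."
def llm_questions(screen): return ["Which city do you plan to stay in?"]
def llm_answer(screen, q): return "Hotel A has the highest rating."
def llm_map_action(screen, cmd): return 3   # predicted element id

def agent_turn(screen_html, user_utterance=None):
    if user_utterance is None:                # agent-initiated turn
        print(llm_summarize(screen_html))     # provide information
        for q in llm_questions(screen_html):  # solicit information
            print(q)
    elif user_utterance.endswith("?"):        # user solicits info
        print(llm_answer(screen_html, user_utterance))
    else:                                     # user provides info
        print("Click element", llm_map_action(screen_html, user_utterance))

agent_turn("<div>...</div>")
agent_turn("<div>...</div>", "Which hotel has the highest ratings?")
agent_turn("<div>...</div>", "Open hotel booking app")
\end{verbatim}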
The conversational interaction enabled by our approach is beneficial for accessibility. It can also be blended with other input/output modalities, such as touch input and screen readers, to offer new possibilities for multi-modal interaction. As a next step, we plan to use the proposed method to build an agent that can assist users end-to-end in finishing a task. In addition, future work could explore techniques such as model distillation \cite{hinton2015distill} or model compression \cite{song2015} to achieve more efficient LLM inference. \subsection{Shots, Input Lengths, and Model Performance} Language models often have input length constraints, which limit the number of exemplars that can effectively be included in the prompt. The length of a screen's HTML varies considerably, depending on how much information is conveyed through the view hierarchy and the inherent complexity of the screen. A possible way to avoid exceeding the input length limit is to configure the prompt screens dynamically: when the test screen's HTML is longer, one can choose prompt screens with shorter HTML. However, imbalanced lengths between prompt and test screens may also lead to inferior performance. On the other hand, while experiments on general NLP tasks showed that including more shots in the prompt increases performance, our studies showed that the second exemplar sometimes improves performance only marginally, whereas the first exemplar usually leads to a significant boost. Prior work by Reynolds and McDonell \cite{reynolds2021prompt} suggests that the effectiveness of few-shot prompting lies in guiding the LLM to locate the target task within its existing space of learned tasks. The first shot may therefore be the most helpful, with additional examples only marginally helping the model narrow its focus. That said, few-shot prompting and the understanding of LLMs' behaviors remain ongoing research problems in the community. Future work could investigate the trade-off between the number of shots and the length of each shot, and their impact on model performance across different UI tasks. \subsection{Screen Representation} Mobile UI screens contain multiple modalities, including pixels, text, and even audio when media content is present. A limitation of our investigation is that we use only the view hierarchy information, converted to an HTML representation, and leave the other modalities unused. This limitation is imposed by the type of input expected by LLMs. While our studies showed that LLMs can perform decently on various UI tasks, they may fail in cases requiring information that is not present in the view hierarchy but available as pixels. For instance, many icons or images on UI screens are missing captions or alt-text (the text description of a visual element), and LLMs may not be able to perform tasks involving these elements. Moreover, visual information is particularly crucial for some apps, e.g., photo editing tools, and our approach may fall short of enabling conversational interaction for such apps. Many models have started using multiple modalities, including the visual and text information of UIs~\cite{zhang2021screen, 10.1145/3472749.3474765, li2020widget, motif}. Future work could exploit these prior models to generate missing captions or alt-text for elements, leading to more comprehensive screen information in the HTML input to LLMs.
Our approach can also be extended by leveraging large-scale vision language models such as Flamingo \cite{Alayrac2022flamingo} to encode a screen's visual and structural information for few-shot learning. \section{Conclusion} We investigated the feasibility of using large language models to enable various conversational interactions on mobile UIs. We proposed a design space to categorize types of conversation between the users and the agent when they perform UI tasks collaboratively. We proposed a set of prompting techniques to adapt large language models to various conversational UI tasks. To understand the effectiveness of our approach, we conducted feasibility experiments on four language-based UI tasks, one for each unit conversation in our categorization. The results showed that, compared to traditional machine learning pipelines that require expensive large-scale data collection and model training, one could quickly realize novel language-based interactions using large language models while achieving decent performance. \bibliographystyle{ACM-Reference-Format}
{ "timestamp": "2022-09-20T02:21:30", "yymm": "2209", "arxiv_id": "2209.08655", "language": "en", "url": "https://arxiv.org/abs/2209.08655" }
\section{Introduction} Conventional neural machine translation (NMT) cannot dynamically incorporate external corpora at inference time once training is finished \citep{bahdanau2015neural,vaswani2017attention}, resulting in poor performance on unseen domains, even when trained on millions or billions of sentence pairs \citep{koehn2017six}. To address this problem, researchers developed retrieval-enhanced NMT (\textsc{Renmt}) to flexibly incorporate external translation knowledge. Early \textsc{Renmt}s leverage a search engine to find similar bitext to improve translation performance \citep{zhang-etal-2018-guiding,cao-xiong-2018-encoding,DBLP:conf/aaai/GuWCL18,DBLP:conf/aaai/XiaHLS19}. However, high-similarity results of sentence-level retrieval are generally sparse in practical applications, while noise in low-similarity retrieval can lead to severe performance degradation \citep{cao-xiong-2018-encoding}. \textit{k}NN-MT\xspace proposed by \citet{khandelwal2021nearest} effectively alleviates the sparsity problem by introducing a word-level k-nearest-neighbor mechanism. Instead of storing discrete word sequences, \textit{k}NN-MT\xspace uses a pre-trained NMT model to force-decode the external corpus and remembers the word-level continuous context representations, e.g., the output of the last decoder layer. During inference, \textit{k}NN-MT\xspace assumes that the same target words have similar contextual representations and weights word selection by retrieving the current context representation from the memorized datastore. However, we point out that it is sub-optimal to directly use the off-the-shelf context representation in the translation task because this vector is not specialized for fine-grained retrieval. In this work, we attempt to decouple the context representation by learning an independent retrieval representation. To this end, we leverage supervised contrastive learning with multiple positive and negative samples to learn a good retrieval representation (called \textsc{Clknn}\xspace). We also propose a fast and effective method to construct hard negative samples. Experimental results on five domains show that our approach outperforms the vanilla \textit{k}NN-MT\xspace in terms of BLEU and retrieval accuracy. \section{Background} \paragraph{Vanilla NMT} Given a source sentence ${\bm{x}}=\{x_1, x_2, \ldots, x_{|{\bm{x}}|}\}$ and a target prefix ${\bm{y}}_{<t}=\{y_1, y_2, \ldots, y_{t-1}\}$, the vanilla NMT model predicts the next target word $y_t$ by: \begin{equation} \label{eq:p_c} p_{c}(y_t|{\bm{x}},{\bm{y}}_{<t}) \propto \textrm{exp}\Big(q({\bm{h}}_{t})\Big) \end{equation} where ${\bm{h}}_{t}=f_{\theta}({\bm{x}}, {\bm{y}}_{<t}) \in \mathcal{R}^{d}$ is the context vector at step $t$ with respect to ${\bm{x}}$ and ${\bm{y}}_{<t}$; $f_{\theta}(\cdot)$ can be an arbitrary encoder-decoder network with parameters $\theta$, such as Transformer \citep{vaswani2017attention}; $q(\cdot)$ linearly projects ${\bm{h}}_{t}$ to the target vocabulary size. \paragraph{\textit{k}NN-MT\xspace} \textit{k}NN-MT\xspace hypothesizes that the same target words have similar representations.
To dynamically incorporate external sentence pairs $\mathcal{D}=\{({\bm{x}}^{(i)}, {\bm{y}}^{(i)})\}_{i=1}^{|\mathcal{D}|}$, \textit{k}NN-MT\xspace extends Eq.~\ref{eq:p_c} by interpolating a retrieval-based probability $p_{r}$: \begin{equation} \label{eq:p_knn} p_{knn} = (1-\lambda) \times p_{c} + \lambda \times p_{r} \end{equation} where $\lambda$ is the interpolation coefficient, a hyper-parameter. Specifically, \textit{k}NN-MT\xspace first uses a pre-trained NMT model to force-decode each sentence pair $({\bm{x}}^{(i)}, {\bm{y}}^{(i)})$ to build a key-value datastore $\mathcal{H}$: \begin{equation} \label{eq:knn_datastore} \mathcal{H} = \bigcup_{i=1}^{|\mathcal{D}|} \bigcup_{t=1}^{|{\bm{y}}^{(i)}|} \Big\{({\bm{h}}^{(i)}_t, y_t^{(i)}) \Big\} \end{equation} The key is the word-level context representation ${\bm{h}}^{(i)}_t$ and the value is the gold target word $y^{(i)}_t$. Then, given $\mathcal{H}$ and the predicted target prefix $\hat{{\bm{y}}}_{<t}$ at test time, \textit{k}NN-MT\xspace models $p_r(\hat{y}_t|{\bm{x}}, \hat{{\bm{y}}}_{<t})$ by measuring the distance between the query $\hat{{\bm{h}}}_t=f_{\theta}({\bm{x}}, \hat{{\bm{y}}}_{<t})$ and its k-nearest representations $\{(\tilde{{\bm{h}}}_i, \tilde{v}_i)\}_{i=1}^k$ in $\mathcal{H}$: \begin{equation} \label{eq:knn_qr} p_r(\hat{y}_t|{\bm{x}}, \hat{{\bm{y}}}_{<t}) \propto \sum_{i=1}^k \mathbbm{1}_{\hat{y}_t=\tilde{v}_i} \textrm{exp} \Big( \frac{-d(\tilde{{\bm{h}}}_i, \hat{{\bm{h}}}_t)}{T} \Big), \end{equation} where $d(\cdot)$ is the $L_2$ distance; $T$ is a temperature hyper-parameter; $\mathbbm{1}$ is the indicator function. \section{Approach} \paragraph{Motivation} According to Eq.~\ref{eq:p_c}-\ref{eq:knn_qr}, we can see that the context representation ${\bm{h}}$ simultaneously plays two roles in \textit{k}NN-MT\xspace: (1) the semantic vector for $p_c$; (2) the retrieval vector for $p_r$. We note that coupling the same ${\bm{h}}$ in these two roles is sub-optimal. Recall that ${\bm{h}}$ in the translation model is generally learned through cross-entropy loss, which only pays attention to the gold target token and ignores others.\footnote{In practice, we often use its label-smoothed variant, which evenly assigns a small probability mass to all non-gold labels without distinction.} However, a good retrieval vector should be able to distinguish between different tokens, especially those with similar representations. Therefore, we attempt to derive a new retrieval vector ${\bm{z}}$ from ${\bm{h}}$ for better retrieval performance.
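As a concrete illustration of Eqs.~(\ref{eq:p_knn})--(\ref{eq:knn_qr}), the following minimal sketch computes the retrieval distribution of vanilla \textit{k}NN-MT\xspace from a toy datastore. It is illustrative only: brute-force NumPy search replaces Faiss, random vectors replace real decoder states, and the squared $L_2$ distance is used for simplicity.
\begin{verbatim}
import numpy as np

# Toy datastore: keys are context vectors h, values are token ids.
keys = np.random.randn(1000, 8).astype(np.float32)   # |H| x d
values = np.random.randint(0, 50, size=1000)         # gold target tokens

def knn_probs(query, k=8, T=10.0, vocab=50):
    """Retrieval distribution p_r over the vocabulary (cf. Eq. 4)."""
    d2 = ((keys - query) ** 2).sum(axis=1)   # (squared) L2 distances
    nn = np.argsort(d2)[:k]                  # indices of k nearest keys
    w = np.exp(-d2[nn] / T)                  # distance-based weights
    p_r = np.zeros(vocab)
    np.add.at(p_r, values[nn], w)            # aggregate weights per token
    return p_r / p_r.sum()

def interpolate(p_c, p_r, lam=0.5):
    """Final kNN-MT distribution (cf. Eq. 2)."""
    return (1 - lam) * p_c + lam * p_r
\end{verbatim}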
\paragraph{Retrieval representation adapter} We use a simple feedforward network as an adapter to transform the original representation ${\bm{h}}$ into the desired retrieval representation ${\bm{z}}$: \begin{equation} \label{eq:ffn} {\bm{z}} = \textrm{FFN}({\bm{h}}) = \textrm{ReLU}({\bm{h}} {\bm{W}}_1 + {\bm{b}}_1){\bm{W}}_2 + {\bm{b}}_2, \end{equation} where ${\bm{W}}_1 \in \mathcal{R}^{d \times d_f}$, ${\bm{W}}_2 \in \mathcal{R}^{d_f \times d_o}$, ${\bm{b}}_1 \in \mathcal{R}^{d_f}$, and ${\bm{b}}_2 \in \mathcal{R}^{d_o}$ are learnable parameters; $d_f$ and $d_o$ are the intermediate hidden size and output size of the adapter, respectively. When $d_o < d$, the adapter network can be regarded as a dimension reducer. As the \textrm{FFN} is very lightweight compared to the calculation of ${\bm{h}}$, there is almost no latency in converting ${\bm{h}}$ to ${\bm{z}}$. For convenience, in the following description, we redefine ${\bm{h}}_i$ as the key of the $i$-th key-value pair in the original datastore $\mathcal{H}$, and the corresponding value is denoted by $Y_i$ when there is no ambiguity. In this way, the new datastore $\mathcal{Z}$ can be denoted as $\mathcal{Z}=\{({\bm{z}}_i, Y_i)| i=1,\ldots,|\mathcal{H}|\}$, where ${\bm{z}}_i=\textrm{FFN}({\bm{h}}_i)$. \paragraph{Supervised contrastive learning} In the machine translation field, contrastive learning has been applied to multilingual translation \citep{pan-etal-2021-contrastive,wei2021on}, cross-modal translation \citep{ye-etal-2022-cross}, and learning robust representations for low-frequency words \citep{zhang-freq-aware}, etc. In this work, we use supervised contrastive learning \citep{NEURIPS2020_f3ada80d} with multiple positive and negative samples to learn the desired retrieval representation ${\bm{z}}$. Here, we regard each unique token $v$ in the target vocabulary $V$ as a natural supervision signal.
We aim to make ${\bm{z}}$ more distinguishable, i.e., pulling the ${\bm{z}}$ of the same words together and pushing the ${\bm{z}}$ of different words apart. Specifically, we first divide $\mathcal{Z}$ into $|V|$ clusters according to the token class label. For example, $C_v = \{{\bm{z}}_i| i=1,\ldots,|\mathcal{Z}|, Y_i=v\}$, where $C_v$ is the context representation cluster of token $v$. Thus, given any context representation ${\bm{z}} \in \mathcal{Z}$ and its token label $v$, we can construct M positive samples ${\bm{z}}^+=\{{\bm{z}}_1^+, \ldots, {\bm{z}}_i^+, \ldots, {\bm{z}}_M^+\}$, where ${\bm{z}}_i^+$ is uniformly sampled from its own cluster $C_v$ and ${\bm{z}}_i^+ \ne {\bm{z}}$.\footnote{We use sampling with replacement when $|C_v| < M$.} Similarly, we further construct N negative samples ${\bm{z}}^-=\{{\bm{z}}_1^-, \ldots, {\bm{z}}_i^-, \ldots, {\bm{z}}_N^-\}$, where ${\bm{z}}_i^- \in \backslash C_v$ and $\backslash C_v$ denotes the union of all clusters other than $C_v$. In the next part, we will describe how to build ${\bm{z}}^-$. Finally, given the anchor vector ${\bm{z}}$, its multiple positive samples ${\bm{z}}^+$ and multiple negative samples ${\bm{z}}^-$, we learn the adapter network through the following contrastive learning loss: \begin{equation} \label{eq:infonce} - \textrm{log} \frac{\sum\limits_{1 \le i \le M}\textrm{exp}(s({\bm{z}}, {\bm{z}}_{i}^+))}{\sum\limits_{1 \le i \le M}\textrm{exp}(s({\bm{z}}, {\bm{z}}_{i}^+)) + \sum\limits_{1 \le j \le N}\textrm{exp}(s({\bm{z}}, {\bm{z}}_{j}^-))}, \end{equation} where $s(\cdot)$ is the score function implemented as cosine similarity with temperature $T'$: $s({\bm{a}},{\bm{b}})=\frac{1}{T'} \times \frac{{\bm{a}}^T {\bm{b}}}{\Vert {\bm{a}} \Vert \cdot \Vert {\bm{b}} \Vert}$. Note that $T'$ is the temperature in training, which is different from the inference temperature $T$ in Eq.~\ref{eq:knn_qr}. \begin{table*}[htb] \begin{center} \begin{tabular}{l c c c c c c} \toprule[1pt] \multicolumn{1}{c}{\textbf{Dataset}} & \multicolumn{1}{c}{\textbf{Medical}} & \multicolumn{1}{c}{\textbf{Law}} & \multicolumn{1}{c}{\textbf{IT}} & \multicolumn{1}{c}{\textbf{Koran}} & \multicolumn{1}{c}{\textbf{Subtitle}} & \multicolumn{1}{c}{\textbf{NC+Euro}} \\ \hline Train & 248K & 467K & 222K & 52K & 500K & 2M \\ Valid & 2000 & 2000 & 2000 & 2000 & 2000 & -\\ Test & 2000 & 2000 & 2000 & 2000 & 2000 & -\\ \hline Datastore & 6.9M & 19.0M & 3.6M & 0.5M & 6.2M & 5M$^\dagger$ \\ \bottomrule[1pt] \end{tabular} \caption{Statistics of datasets in different domains.
$\dagger$: Due to limited memory, we randomly sampled 5M samples from a total of 65.7M samples in NC+Euro for training.} \label{table:data} \end{center} \end{table*} \begin{table*}[t] \begin{center} \resizebox{0.8\textwidth}{!} { \begin{tabular}{l c c c c c c} \toprule[1pt] \multicolumn{1}{c}{\textbf{Method}} & \multicolumn{1}{c}{\textbf{Medical}} & \multicolumn{1}{c}{\textbf{Law}} & \multicolumn{1}{c}{\textbf{IT}} & \multicolumn{1}{c}{\textbf{Koran}} & \multicolumn{1}{c}{\textbf{Subtitle}} & \multicolumn{1}{c}{\textbf{Avg.}} \\ \hline Baseline (WMT19 winner, \citet{ng-etal-2019-facebook}) & 39.91 & 45.71 & 37.98 & 16.30 & 29.21 & 33.82 \\ \textit{k}NN-MT\xspace \citep{khandelwal2021nearest} & 54.35 & 61.78 & 45.82 & 19.45 & \textbf{31.73}$^\dagger$ & 42.63 \\ \textit{k}NN-MT\xspace (our implementation) & 54.41 & 61.01 & 45.20 & 21.07 & 29.67 & 42.27 \\ \hline \multicolumn{7}{c}{\textit{train by out-domain data}} \\ \hline \textsc{Clknn}\xspace & 56.37 & 61.54 & 46.50 & 21.52 & 30.81 & 43.35 \\ \textsc{Clknn}\xspace + $\lambda^*$ & \textbf{56.52} & 61.63 & 46.68 & 21.60 & 30.86 & 43.46 \\ \hline \multicolumn{7}{c}{\textit{train by in-domain data}} \\ \hline \textsc{Clknn}\xspace & 55.86 & 61.92 & 47.77 & 21.46 & 31.02 & 43.61 \\ \textsc{Clknn}\xspace + $\lambda^*$ & 55.87 & \textbf{62.01} & \textbf{47.84} & \textbf{21.81} & 31.05 & \textbf{43.72} \\ \bottomrule[1pt] \end{tabular} } \vspace{-.5em} \caption{The SacreBLEU scores of our proposed \textsc{Clknn}\xspace and the baseline methods in five domains. $\lambda^*$ denotes using the retrieval-confidence-aware interpolation coefficient. $^\dagger$ denotes that the number is not directly comparable because \citet{khandelwal2021nearest} use the full-size subtitle data, whereas we do not. All the \textsc{Clknn}\xspace results are significantly better (p$<$0.01) than our re-implemented \textit{k}NN-MT\xspace, measured by paired bootstrap resampling \citep{koehn-2004-statistical}.} \label{table:main_results} \vspace{-.5em} \end{center} \end{table*} \paragraph{Fast hard negative sampling} The key to Eq.~\ref{eq:infonce} is the construction of the negative samples ${\bm{z}}^-$. A trivial solution is random sampling from the entire space $\backslash C_v$. However, such negative samples may be too easy to provide an effective learning signal \citep{robinson2020contrastive}. At the other extreme, one could traverse $\backslash C_v$ to find the most similar negative samples for the anchor. The problem is that $|\backslash C_v|$ is close to $|\mathcal{Z}|$, on the scale of millions or more, resulting in enormous computational complexity. To solve this, we propose a fast and cheap approach to constructing hard negative samples. Specifically, we first collect the cluster centers $\bar{C}_v = \frac{1}{|C_v|} \sum_{i=1}^{|\mathcal{Z}|} \mathbbm{1}_{Y_i=v} {\bm{z}}_i$. We calculate the K (K $\geq$ N) cluster centers nearest to the anchor and randomly sample N of these clusters to diversify the sources of negative samples. Then we randomly sample one point from each selected cluster as a negative sample. As the anchor vector only involves querying $|C|$ cluster centers and $|C| \ll |\mathcal{Z}|$, our approach runs much faster than an exact global search. \paragraph{Inference} After training, we use the well-trained $\textrm{FFN}$ to rebuild the retrieval datastore $\mathcal{H}$ into $\mathcal{Z}$. To further reduce the computational cost at test time, we introduce PCA to reduce the dimension of the retrieval vector.
We also add normalization after PCA to guarantee the numerical stability of the input to the inner product. Another difference from Eq.~\ref{eq:knn_qr} is that we use the inner product instead of the $L_2$ distance as the distance metric. The reason is that using consistent distance metrics in training and inference improved performance in preliminary experiments. Concretely, we modify the original \textit{k}NN-MT\xspace in Eq.~\ref{eq:knn_qr} as: \begin{equation} \label{eq:new_qr} p_r(\hat{{\bm{y}}}_t|{\bm{x}}, \hat{{\bm{y}}}_{<t}) \propto \sum_{i=1}^k \mathbbm{1}_{\hat{y}_t=\tilde{v}_i} \textrm{exp} \Big( \frac{g(\tilde{{\bm{z}}}_i) \otimes g(\hat{{\bm{z}}}_t)}{T} \Big), \end{equation} where $g(x) = \textrm{Norm}(\textrm{PCA}(x))$, $\otimes$ denotes the inner product operation, and $\tilde{{\bm{z}}}_i$ is the $i$-th nearest neighbor in $\mathcal{Z}$ of the current retrieval representation $\hat{{\bm{z}}}_t$. As a bonus, the numeric range of the normalized inner product is $[0, 1]$, so it can be seen as the confidence of the retrieval.\footnote{The $L_2$ distance lacks this feature because its numeric range is too broad, e.g., 0\textasciitilde1000 in our observation.} We leverage this property to modify the interpolation coefficient $\lambda$ in Eq.~\ref{eq:p_knn} to be aware of the retrieval confidence: \begin{equation} \label{eq:lambda_star} \lambda^* = \lambda \times \frac{\sum_{i=1}^k g(\tilde{{\bm{z}}}_i) \otimes g(\hat{{\bm{z}}}_t)}{k}. \end{equation} $\lambda^*$ can be considered a simple adaptive coefficient in the spirit of \citet{zheng-etal-2021-adaptive,jiang-etal-2021-learning,wang-cluster-knn}, but it does not require training. \begin{figure}[t] \begin{center} \includegraphics[width=.48\textwidth]{difference_v2.png} \end{center} \begin{center} \vspace{-.5em} \caption{Illustration of the differences between CKMT and \textsc{Clknn}\xspace in constructing positive and negative samples. Different colors indicate different tokens. \texttt{A}/\texttt{P}/\texttt{N} denote the anchor, a positive sample, and a negative sample, respectively. } \label{fig:difference} \vspace{-1.2em} \end{center} \end{figure} \paragraph{Discussion} The work closest to ours is CKMT \citep{wang-cluster-knn}. As illustrated in Figure~\ref{fig:difference}, there are two major differences compared with CKMT: (1) \textsc{Clknn}\xspace uses multiple positive and negative samples, while CKMT only considers a single positive and negative sample, limiting the exploration of the representation space. (2) CKMT requires partitioning the full-scale datastore through expensive clustering, while \textsc{Clknn}\xspace predefines clusters based on vocabulary labels and only involves calculating cluster centers. In practice, we spent about 6 hours on a CPU to complete the clustering operation of CKMT, while \textsc{Clknn}\xspace only takes about 3 minutes. \section{Experiments} \label{sec:exp} \paragraph{Setup} \begin{figure*}[t] \begin{center} \resizebox{0.8\textwidth}{!} { \begin{tabular}{C{.3\textwidth}C{.3\textwidth}C{.3\textwidth}} \subfloat [\small{High}] { \includegraphics[width=.3\textwidth]{high.png} } & \subfloat [\small{Middle} ] { \includegraphics[width=.3\textwidth]{middle.png} } & \subfloat [\small{Low} ] { \includegraphics[width=.3\textwidth]{low.png} } \\ \end{tabular} } \end{center} \begin{center} \vspace{-.5em} \caption{Visualization of retrieval vectors for words of different frequencies by t-SNE. We uniformly sample 10 classes in each category, and each class contains ten random representations. The same color denotes the same class.
} \label{fig:visual} \vspace{-1.2em} \end{center} \end{figure*} \noindent To compare fairly with previous work \citep{khandelwal2021nearest}, we use the WMT'19 German-English news translation task winner \citep{ng-etal-2019-facebook} as our strong general-domain baseline. We use the same German-English multi-domain datasets, consisting of five domains: \texttt{Medical}, \texttt{Law}, \texttt{IT}, \texttt{Koran} and \texttt{Subtitles}\footnote{We use the provided 500K sentence pairs version of the subtitle data rather than the full-size 12.4M version due to memory limitations.}. Besides, to test whether the proposed training approach is robust in out-of-domain scenarios, we also use a 2M subset of the baseline's training data, including \textit{News Commentary v14} and \textit{Europarl v9}, and randomly sample 5M samples out of the 65.7M samples in its datastore. See Table~\ref{table:data} for detailed data statistics. \paragraph{Implementation details} \noindent All experiments run on a single NVIDIA 2080 Ti GPU. We use \textit{Faiss}\footnote{\url{https://github.com/facebookresearch/faiss}} for vector retrieval. For \textsc{Clknn}\xspace, the number of positive samples is M=2, and the number of negative samples is N=32. We sample the N negative samples from the K=128 nearest clusters. The training batch size is 32. During training, we set $T^{'}$=0.01, while we vary $T$ according to the validation set at test time. The hidden size $d_f$ and output size $d_o$ of the adapter are 4096 and 512, respectively. The output dimension of PCA is 128. We train all models for 500k steps and select the best model on the validation set. For inference, we use a beam size of 5 and a length penalty of 1.0 in all experiments. We measure case-sensitive detokenized BLEU by SacreBLEU. \paragraph{Experimental results} Table~\ref{table:main_results} reports the SacreBLEU scores in five domains. We can see that: (1) \textsc{Clknn}\xspace is robust to the choice of training data: with either out-of-domain or in-domain data, it improves over our \textit{k}NN-MT\xspace by more than 1 point on average; (2) the gap between in-domain and out-of-domain training is small (about 0.3 points), meaning that our approach does not rely on in-domain data and is more practical than \citet{zheng-etal-2021-adaptive,jiang-etal-2021-learning}; (3) using the proposed $\lambda^*$ slightly improves the performance across the board. These results show that learning an independent retrieval representation is helpful for vanilla \textit{k}NN-MT\xspace. Besides, we also compare the inference speed between \textsc{Clknn}\xspace and \textit{k}NN-MT\xspace by running five times on the \texttt{IT} test set. The results show that \textsc{Clknn}\xspace has a comparable speed (97\%$\pm$2\%) to that of \textit{k}NN-MT\xspace because the adapter in \textsc{Clknn}\xspace is very lightweight.
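To make the training procedure concrete, the sketch below implements the contrastive objective of Eq.~(\ref{eq:infonce}) and the fast hard-negative sampling described above, using toy NumPy tensors. The cluster centers and dimensions are random stand-ins for a real datastore, and the loops are kept naive for clarity.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
V, d_o = 100, 16                      # toy vocabulary size and adapter dim
centers = rng.normal(size=(V, d_o))   # toy cluster centers \bar{C}_v

def score(a, b, T=0.01):
    """Cosine similarity scaled by the training temperature T'."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b)) / T

def hard_negative_clusters(anchor, v, K=8, N=4):
    """Pick N clusters among the K centers nearest to the anchor,
    excluding the anchor's own cluster v."""
    d2 = ((centers - anchor) ** 2).sum(axis=1)
    d2[v] = np.inf
    nearest = np.argsort(d2)[:K]
    return rng.choice(nearest, size=N, replace=False)

def contrastive_loss(anchor, positives, negatives):
    """Multi-positive, multi-negative loss of Eq. (6)."""
    pos = sum(np.exp(score(anchor, p)) for p in positives)
    neg = sum(np.exp(score(anchor, n)) for n in negatives)
    return -np.log(pos / (pos + neg))
\end{verbatim}
In training, one negative sample would be drawn from each selected cluster and the loss backpropagated through the adapter of Eq.~(\ref{eq:ffn}); both steps are omitted here.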
\section{Analysis} \label{sec:analysis} \paragraph{Effect of the number of contrastive samples} \begin{table}[t] \begin{center} \resizebox{0.35\textwidth}{!} { \begin{tabular}{c c c | c c c} \toprule[1pt] \multicolumn{1}{c}{\textbf{M}} & \multicolumn{1}{c}{\textbf{N}} & \multicolumn{1}{c|}{\textbf{BLEU}} & \multicolumn{1}{c}{\textbf{M}} & \multicolumn{1}{c}{\textbf{N}} & \multicolumn{1}{c}{\textbf{BLEU}} \\ \hline 1 & 1 & 45.54 & 2 & 16 & 46.37\\ 1 & 16 & 45.91 & 2 & 32 & 46.68\\ 1 & 32 & 46.13 & 2 & 64 & 46.55 \\ 1 & 64 & 45.88 & 4 & 32 & 46.29\\ \bottomrule[1pt] \end{tabular} } \vspace{-.5em} \caption{The BLEU scores on the \texttt{IT} test set against the number of positive (M) and negative (N) samples.} \label{table:sample_num} \vspace{-1em} \end{center} \end{table} One of the main differences between \citet{wang-cluster-knn} and us is that we use multiple positive and negative samples in our training objective. We vary M and N and report the BLEU scores in Table~\ref{table:sample_num}. As we can see, increasing M and N is helpful for our method. However, increasing M does not bring as much benefit as increasing N. We attribute this to positive samples being too easy to learn from, because most of them are already close in the embedding space. On the contrary, negative samples from different clusters can provide a stronger learning signal. To further validate the effectiveness of multiple samples, we also conduct experiments on \texttt{Medical}. The results are similar to those on \texttt{IT}: using M=2, N=32 is 1.64 BLEU points higher than using M=1, N=1 (56.52 vs. 54.88). This indicates that using multiple positive and negative samples is necessary to achieve good performance with contrastive learning. \paragraph{Retrieval accuracy} Intuitively, our approach can learn more accurate retrieval representations than vanilla \textit{k}NN-MT\xspace. To validate this hypothesis, we use the \texttt{IT} validation set as the datastore and plot the top-k retrieval accuracy in Figure~\ref{fig:acc}. We can see that \textsc{Clknn}\xspace has consistently higher retrieval accuracy than \textit{k}NN-MT\xspace, no matter how k changes. This indicates that the performance improvement comes from our better retrieval representations.
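The precise accuracy metric behind Figure~\ref{fig:acc} is not spelled out above; the sketch below assumes it is the mean fraction of the top-k retrieved entries whose token label matches the query's label, and shows how such a curve could be computed with \textit{Faiss} inner-product search.
\begin{verbatim}
import faiss
import numpy as np

def topk_retrieval_accuracy(keys, labels, queries, qlabels, k):
    """Mean fraction of the k nearest neighbors (by inner product)
    carrying the query's gold token label."""
    index = faiss.IndexFlatIP(keys.shape[1])
    index.add(keys.astype(np.float32))
    _, nn = index.search(queries.astype(np.float32), k)
    return float((labels[nn] == qlabels[:, None]).mean())

# Example with random stand-ins for normalized retrieval vectors:
rng = np.random.default_rng(0)
keys, labels = rng.normal(size=(5000, 128)), rng.integers(0, 100, 5000)
queries, qlabels = rng.normal(size=(200, 128)), rng.integers(0, 100, 200)
for k in (1, 2, 4, 8, 16, 32, 64):
    print(k, topk_retrieval_accuracy(keys, labels, queries, qlabels, k))
\end{verbatim}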
\begin{figure}[tbp] \begin{center} \renewcommand\arraystretch{0.0} \setlength{\tabcolsep}{1pt} \begin{tabular}{c} \begin{tikzpicture}{baseline} \scriptsize{ \begin{axis}[ xmajorgrids, ylabel near ticks, width=.4\textwidth, height=.2\textwidth, legend style={at={(0.62,0.7)}, anchor=south west}, xlabel={\scriptsize{top-k}}, ylabel={\scriptsize{Acc.}}, ylabel style={yshift=-0em},xlabel style={yshift=0.0em}, ymin=0.2,ymax=0.6, legend style={yshift=-12pt, legend plot pos=left,font=\tiny,cells={anchor=west}} ] \addplot[red,mark=otimes*,line width=0.5pt] coordinates {(1, 0.5076) (2, 0.4931) (4, 0.4687) (8, 0.4305) (16,0.3822) (32,0.3301) (64,0.2792)}; \addlegendentry{KNN-MT} \addplot[blue,mark=square,line width=0.5pt] coordinates {(1, 0.5657) (2, 0.5494) (4, 0.5247) (8, 0.4888) (16,0.4420) (32,0.3884) (64,0.3343)}; \addlegendentry{CLKNN} \end{axis} } \end{tikzpicture} \end{tabular} \end{center} \begin{center} \vspace{-0.5em} \caption{Retrieval accuracy curve against top-k.} \label{fig:acc} \vspace{-1.0em} \end{center} \end{figure} \paragraph{Visualization} We visually present the differences between the baseline and \textsc{Clknn}\xspace in the embedding space. Specifically, we split words into three categories according to their frequency in the \texttt{IT} training set: \texttt{High} (the top 1\%), \texttt{Middle} (40\%--60\%) and \texttt{Low} (the bottom 1\%)\footnote{We filter words whose frequency is less than 10.}. We uniformly sample 10 unique words in each category and randomly sample 10 unique vector representations of each word from the training datastore. We use t-SNE to plot these representations, as illustrated in Figure~\ref{fig:visual}. We can see that: (1) representations of high-frequency words are easy to distinguish for both the baseline and \textsc{Clknn}\xspace; (2) \textsc{Clknn}\xspace pulls representations of the same token closer together than the baseline; (3) \textsc{Clknn}\xspace remains more distinguishable for low-frequency words.
\section{Conclusion} \label{sec:conclusion} In this work, we proposed to use supervised contrastive learning to decouple the retrieval representation from the context representation in vanilla \textit{k}NN-MT\xspace. Experimental results on several tasks show that our approach outperforms \textit{k}NN-MT\xspace and learns a more accurate retrieval representation. \section*{Acknowledgements} We would like to thank the anonymous reviewers for their helpful comments. We also thank Shuqin Pan for the writing suggestions.
{ "timestamp": "2022-09-21T02:12:13", "yymm": "2209", "arxiv_id": "2209.08738", "language": "en", "url": "https://arxiv.org/abs/2209.08738" }
\section{\label{sec:level1}Electron-SPP matrix element} To determine the electron-SPP coupling, we solve Maxwell's equations for the spatial dependence of the electric potential $\varphi$ due to the SPP, using the following ansatz~\cite{fischetti2001effective}: \begin{equation} \varphi(\boldsymbol{\rho},z,t)=\sum_{\boldsymbol{q}}\varphi(z)e^{i(\boldsymbol{q}\cdot\boldsymbol{\rho}-\omega_{ph} t)}. \label{eq:anz} \end{equation} Here, $\boldsymbol{q}$ and $\boldsymbol{\rho}$ are the two-dimensional phonon wave vector and the in-plane spatial coordinate, respectively, and $\omega_{ph}$ is the phonon frequency. In isotropic materials, the Poisson equation $\nabla\varepsilon\nabla\varphi=0$ requires $\varphi(z)\propto e^{\pm q_zz}$ with $q_z=q$. However, in an anisotropic dielectric $q_z=q\sqrt{\varepsilon_{\parallel}/\varepsilon_{\perp}}$. Following Ref.~\cite{screenSPP_PRL2022}, we treat monolayer (bilayer) graphene as a dielectric layer of thickness $2h_s$, as shown in Fig.~\ref{Fig:Structure}, where $h_s=1.7$ \AA\ ($h_s=3.4$ \AA) for monolayer (bilayer) graphene and accounts for the size of the electron cloud of the $\pi_z$ orbitals of the carbon atoms. The hBN layer of thickness $t_{\rm hBN}=t-d$ is placed at the van der Waals distance $d=3.4$ \AA\ ($d=5.1$ \AA) for monolayer (bilayer) graphene. Perpendicular to the plane, we choose the dielectric constant $\varepsilon_{\perp}=6$, as appropriate for the interface of bilayer graphene~\cite{bilayereps2019}. Within the plane, we choose a static dielectric function $\varepsilon_{\parallel}=1+v_c\Pi(q, E_F)$, where $v_c=2\pi e^2/q$, $\Pi(q, E_F)$ is the polarization function from the random phase approximation, and we use the zero-temperature limit~\cite{Hwang2008,DasSarma2010}. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Structure.png} \caption{Schematic illustration of the cross-section of the graphene/hBN heterostructure. The monolayer or bilayer graphene is taken to be of thickness $2h_s$, with an anisotropic dielectric function $(\varepsilon_{\perp}, \varepsilon_{\parallel}(q))$, and centered around $z=0$. The hBN layer of thickness $t_{\rm hBN}=t-d$, with dielectric function $\varepsilon(\omega)$, supports the SPP. } \label{Fig:Structure} \end{figure} The solution for $\varphi(z)$ in Eq.~(\ref{eq:anz}) has the form: \begin{equation}\label{phiz} \varphi(z)= \begin{cases} A\;e^{q(z+h_s)} & z \leqslant -h_s,\\ B\;e^{q_z(z-h_s)}+C\;e^{-q_z(z+h_s)} & |z|<h_s,\\ D\;e^{q(z-d)}+F\;e^{-q(z-h_s)} & h_s\leqslant z <d,\\ E\;e^{q(z-t)}+G\;e^{-q(z-d)} & d \leqslant z < t,\\ H\;e^{-q(z-t)} & t \leqslant z. \end{cases} \end{equation} The coefficients $A$--$H$ and the dispersion relation for the SPPs are found using the boundary conditions $\varphi^+=\varphi^-$, $\varepsilon^+d\varphi^+/dz=\varepsilon^-d\varphi^-/dz$ at $z=\pm h_s$, $z=d$, and $z=t$, where the superscripts ``$+$'' and ``$-$'' indicate functions to the right and left of the boundaries, respectively. The hBN dielectric function $\varepsilon(\omega)$ is given by: \begin{equation} \varepsilon(\omega)=\frac{\varepsilon_{\infty}\omega^2-\varepsilon_{0}\omega_{TO}^2}{\omega^2-\omega_{TO}^2}, \end{equation} where $\varepsilon_0=5.09$, $\varepsilon_{\infty}=4.575$, and $\hbar\omega_{TO}=97.3$ meV~\cite{Geick1966,perebeinos2010PRB}. (We omit the higher-energy SPP branch at $\sim$200 meV.)
The resulting solution for the SPP frequency is given by: \begin{equation} \omega_{ph}(q)=\omega_{TO}\sqrt{\dfrac{\varepsilon_0+\alpha(q)}{\varepsilon_{\infty}+\alpha(q)}}, \label{Omega_SPP} \end{equation} where $\alpha(q)=-\varepsilon(\omega)$ is the solution of the dispersion relation. To find the coefficients in Eq.~(\ref{phiz}) and the dispersion relation, we define $\varepsilon_{\rm ave} = \sqrt{\varepsilon_{\parallel}\varepsilon_{\perp}}$ and introduce the following variables: \begin{equation}\label{Vars} \begin{split} & C_d = {\rm cosh}(q (d-h_s)), \;\; T_d = {\rm tanh}(q (d-h_s)), \\ & C_t = {\rm cosh}(q (t-d)), \;\; T_t = {\rm tanh}(q (t-d)), \\ & C_z = {\rm cosh}(2q_zh_s), \;\; T_z = {\rm tanh}(2q_zh_s). \end{split} \end{equation} The coefficients are given by the following: \begin{equation}\label{Coeff} \begin{split} B =& A\frac{\varepsilon_{\rm ave}+1}{2\varepsilon_{\rm ave}}C_z(1+T_z), \\ C =& A\frac{\varepsilon_{\rm ave}-1}{2\varepsilon_{\rm ave}} , \\ D =& \frac{B(\varepsilon_{\rm ave}+1)-C(\varepsilon_{\rm ave}-1)C_z(1-T_z)}{2}\\ \times &C_d(1+T_d), \\ F =& \frac{B(1-\varepsilon_{\rm ave})+C(\varepsilon_{\rm ave}+1)C_z(1-T_z)}{2}, \\ E =& \frac{D(\varepsilon(\omega)+1)+F(\varepsilon(\omega)-1)C_d(1-T_d)}{2\varepsilon(\omega)}\\ \times & C_t(1+T_t), \\ G =& \frac{D(\varepsilon(\omega)-1)+F(\varepsilon(\omega)+1)C_d(1-T_d)}{2\varepsilon(\omega)}. \end{split} \end{equation} The dispersion relation is obtained from the condition for the coefficient $H$: \begin{equation}\label{Dis} \begin{split} E+GC_t(1-T_t)=-\varepsilon(\omega)(E-GC_t(1-T_t)). \end{split} \end{equation} After rearranging Eqs.~(\ref{Dis}) and (\ref{Coeff}), we find that $\varepsilon(\omega)=-\alpha(q)$, where $\alpha(q)$ is given by: \begin{equation}\label{Alpha} \begin{split} &\alpha(q) = \frac{b\pm\sqrt{b^2-4ac}}{2a}, \\ a & = T_t(\varepsilon_{\rm ave} + T_z + T_d(\varepsilon_{\rm ave}+T_z\varepsilon^2_{\rm ave})),\\ b & = (1+T_d)(2\varepsilon_{\rm ave} + T_z + T_z\varepsilon^2_{\rm ave}),\\ c & = T_t(T_d(\varepsilon_{\rm ave} + T_z) + \varepsilon_{\rm ave}+T_z\varepsilon^2_{\rm ave}). \end{split} \end{equation} The two different solutions of $\alpha(q)$ are due to the two surfaces of the finite-thickness hBN. To find the form of the SPP potential that interacts with electrons in graphene, \emph{i.e.}, $\varphi_0\equiv\varphi(z=0)=(B+C)e^{-q_zh_s}$ according to Eq.~(\ref{phiz}), we apply the normalization condition~\cite{paradisanos2020prominent,stroscio2001phonons}: \begin{equation}\label{norm0} \dfrac{1}{L^2}\dfrac{\hbar}{2\omega}=\int\dfrac{1}{4\pi}\dfrac{1}{2\omega}\left(\dfrac{\partial\varepsilon}{\partial\omega}|\mathbf{E}_{\perp}|^2+\dfrac{\partial\varepsilon}{\partial\omega}|\mathbf{E}_{\parallel}|^2\right)d\mathbf{r}, \end{equation} which allows us to solve for the magnitude of the coefficients $E^2+G^2$. In Eq.~(\ref{norm0}), $\mathbf{E}(\mathbf{r}) =-\nabla \varphi(\mathbf{r})$, $L^2=N_kA_c$ is the sample area, $A_c$ is the unit cell area, and $N_k$ is the number of k points. Finally, the e-SPP coupling constant $M_{\mathbf{k}\mathbf{q}}$ can be obtained as \begin{equation}\label{eq:e-SPP} \begin{split} & \vert M_{\mathbf{k}\mathbf{q}}\vert^2=(e\varphi_0)^2|\langle\psi_{\mathbf{k}}\vert\psi_{\mathbf{k+q}}\rangle|^2/N_k, \\ & (e\varphi_0)^2=\dfrac{2\pi e^2}{qA_c}\hbar \omega \left(\dfrac{1}{\varepsilon_{\infty}+\alpha(q)}-\dfrac{1}{\varepsilon_{0}+\alpha(q)}\right)\\ & \times \frac{(B+C)^2}{E^2+G^2} \frac{C_z(1-T_z)(1+T_t)}{2T_t}.
\end{split} \end{equation} Here, $\psi_{\mathbf{k}}$ is a single-particle wave function, and the inner product in Eq.~(\ref{eq:e-SPP}) should be understood as the wave function overlap in a primitive unit cell, not the entire sample. In the low-energy model, the wavefunction overlap between two states in the conduction band is $|\langle\psi_{\mathbf{k}}\vert\psi_{\mathbf{k+q}}\rangle|^2=(1+\cos{(\theta_{kk+q})})/2$ for monolayer graphene, and $|\langle\psi_{\mathbf{k}}\vert\psi_{\mathbf{k+q}}\rangle|^2=(1+\cos{(2\theta_{kk+q})})/2$ for bilayer graphene, where $\theta_{kk+q}$ is the angle between the two wavevectors $k$ and $k+q$~\cite{Novoselov2006}. We note that the unscreened potential can be obtained by setting $h_s=0$ in the above equations and that the two SPP branches become degenerate in the absence of screening and for infinitely thick hBN, \emph{i.e.}, $h_s=0$ and $t-d=\infty$. However, in this case, the electron-SPP coupling for each phonon branch would be half of the conventional coupling for semi-infinite hBN~\cite{perebeinos2010PRB,Scharf_SPP_2013}. The SPPs in the above solution form symmetric and antisymmetric linear combinations of two localized phonons at the two surfaces, with each of them contributing half of the coupling to electrons. The screening of graphene breaks this symmetry, and the SPP branch corresponding to the larger root $\alpha(q)$ in Eq.~(\ref{Alpha}) (i.e., the smaller SPP energy) gives the dominant electron-SPP coupling. At the same time, the smaller root of $\alpha(q)$ in Eq.~(\ref{Alpha}) gives a negligible contribution at large values of $q$, staying consistently below 25\% throughout the range of values of $q$ and $t_{\rm hBN}$ reported in this study. To address this artifact of the model, which does not include losses in hBN, we use the larger root of $\alpha(q)$ in Eq.~(\ref{Alpha}) for the phonon energy, and contributions $(B+C)^2/(E^2+G^2)$ from both SPP branches for the electron-SPP coupling. Note that in the limit of small $q$, $\alpha(q)=\infty$ and, according to Eq.~(\ref{Omega_SPP}), $\omega_{ph}=\omega_{TO}$, whereas in the opposite limit $q=\infty$, $\alpha(q)=1$ and the conventional result for the SPP frequency corresponding to semi-infinite hBN and an unscreened potential is obtained: $\omega_{ph}=\omega_{TO}\sqrt{(\varepsilon_0+1)/(\varepsilon_{\infty}+1)}$. Consequently, the overall SPP dispersion width, $\hbar\omega_{ph}(q=\infty)-\hbar\omega_{ph}(q=0)\approx4$ meV, is much smaller than the typical SPP energy $\hbar\omega_{ph}\approx 100$ meV.
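As a minimal numerical illustration of Eqs.~(\ref{Omega_SPP}), (\ref{Vars}) and (\ref{Alpha}), the sketch below evaluates the dominant-branch SPP energy for a given geometry. Parameter values follow the text; the constant in-plane dielectric value used here is a simplifying stand-in for the full RPA polarization function.
\begin{verbatim}
import numpy as np

# hBN parameters from the text
eps0, eps_inf, hw_TO = 5.09, 4.575, 97.3      # hw_TO in meV

def alpha(q, hs, d, t, eps_par, eps_perp=6.0):
    """Larger root of Eq. (8); q in 1/Angstrom, lengths in Angstrom."""
    eps_ave = np.sqrt(eps_par * eps_perp)
    qz = q * np.sqrt(eps_par / eps_perp)
    Tz = np.tanh(2 * qz * hs)
    Td = np.tanh(q * (d - hs))
    Tt = np.tanh(q * (t - d))
    a = Tt * (eps_ave + Tz + Td * (eps_ave + Tz * eps_ave**2))
    b = (1 + Td) * (2 * eps_ave + Tz + Tz * eps_ave**2)
    c = Tt * (Td * (eps_ave + Tz) + eps_ave + Tz * eps_ave**2)
    return (b + np.sqrt(b**2 - 4 * a * c)) / (2 * a)

def omega_spp(q, **kw):
    """SPP energy in meV from Eq. (4)."""
    al = alpha(q, **kw)
    return hw_TO * np.sqrt((eps0 + al) / (eps_inf + al))

# Example: monolayer geometry, 10 nm hBN, constant eps_par stand-in
q = np.linspace(0.001, 0.5, 100)              # 1/Angstrom
E = omega_spp(q, hs=1.7, d=3.4, t=3.4 + 100.0, eps_par=3.0)
# E rises from ~hw_TO at small q toward the semi-infinite-hBN limit
\end{verbatim}
One can verify numerically that $\alpha(q)\to\infty$ (so $E\to\hbar\omega_{TO}$) as $q\to0$ and $\alpha(q)\to1$ as $q\to\infty$, reproducing the two limits discussed above.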
{ "timestamp": "2022-09-20T02:22:16", "yymm": "2209", "arxiv_id": "2209.08677", "language": "en", "url": "https://arxiv.org/abs/2209.08677" }
\section{Introduction}\label{sec:intr} The notion of a conformal algebra encodes an axiomatic description of the operator product expansion of chiral fields in conformal field theory. The theory of Lie conformal algebras appeared as a formal language describing the algebraic properties of the operator product expansion in two-dimensional conformal field theory (\cite{BPZ,Ka,Ka1}). The structure theory and representation theory of Lie conformal algebras have been widely studied in a series of papers (see \cite{BKV,CK,CKW,DK}, and so on). Associative conformal algebras naturally come from representations of Lie conformal algebras. Moreover, some Lie conformal algebras appearing in physics are embeddable into associative ones \cite{Ro, Ro1}. The structure theory and representation theory of associative conformal algebras have attracted the attention of many scholars and yielded a series of results; see \cite{BFK,BFK1,BFK2,Ko,Ko1,Ko2,Ko3,Re,Re1}. In recent years, great advances have been made in the cohomology theory of associative conformal algebras. In \cite{Do} and \cite{Do1}, Dolguntseva introduced the Hochschild cohomology of associative conformal algebras, and proved that the second Hochschild cohomology group of the conformal Weyl algebra, ${\rm Cend}_{n}$ and ${\rm Cur}_{n}$, with values in any bimodule is trivial. Kozlov showed that the second Hochschild cohomology group of the associative conformal algebra ${\rm Cend}_{1,x}$ with values in any bimodule is also trivial \cite{Koz}. Recently, Kolesnikov and Kozlov proved that all semisimple algebras of conformal endomorphisms have trivial second Hochschild cohomology groups with coefficients in every conformal bimodule (see \cite{KK}). In \cite{Lib}, Liberati defined the cohomology of associative H-pseudoalgebras and described the zeroth, first and second cohomologies of associative H-pseudoalgebras. In this paper, we consider the Gerstenhaber algebra structure on the Hochschild cohomology of an associative conformal algebra. It is a classical fact that the Hochschild cohomology $\HH^\bullet(A, A)$ of an associative algebra $A$ carries a Gerstenhaber algebra structure \cite{Ger}. A Gerstenhaber algebra is a graded commutative, associative algebra $\Big(\bigoplus_{i\in\mathbb{Z}} \mathcal{A}^i,\, \sqcup\Big)$ together with a degree $-1$ graded Lie bracket $[-,-]$ compatible with the product $\sqcup$ in the sense of the following Leibniz rule \begin{align}\label{g-com} [a\sqcup b, c] = [a, c]\sqcup b + (-1)^{(|c|-1) |a|} a\sqcup [b, c]. \end{align} For an associative algebra $A$, the Hochschild cochain complex $\mathcal{C}^{\ast}(A, A)$ carries a cup product $\sqcup: \mathcal{C}^{m}(A, A)\times \mathcal{C}^{n}(A, A) \rightarrow \mathcal{C}^{m+n}(A, A)$ defined by \begin{align*} (f\sqcup g)(a_{1},\dots, a_{m+n}) =f(a_{1}, \dots, a_{m})g(a_{m+1},\dots, a_{m+n}). \end{align*} It turns out that the Hochschild coboundary operator $\delta$ is a graded derivation with respect to the cup product. Hence, it induces a cup product $\sqcup$ on the Hochschild cohomology $\HH^{\ast}(A, A)$. Moreover, the cochain groups $\mathcal{C}^{\ast}(A, A)$ carry a degree $-1$ graded Lie bracket compatible with the Hochschild coboundary \cite{Ger}. Therefore, it gives rise to a degree $-1$ graded Lie bracket on $\HH^{\ast}(A, A)=\bigoplus_{i\geq0}\HH^{i}(A, A)$.
The cup product and the degree $-1$ graded Lie bracket on the Hochschild cohomology $\HH^{\ast}(A, A)$ are compatible in the sense of (\ref{g-com}), making it into a Gerstenhaber algebra. Recently, the Gerstenhaber algebra structure on the Hochschild cohomology of Hom-associative algebras was given in \cite{Das}. Here we imitate the case of associative algebras and consider the Gerstenhaber algebra structure on the Hochschild cohomology of an associative conformal algebra. In 1999, Bakalov, Kac and Voronov gave the definitions of the Hochschild cohomology complex and the Hochschild cohomology groups of associative conformal algebras \cite{BKV}. Here we define a cup product and a Gerstenhaber bracket on this complex, and show that they induce a Gerstenhaber algebra structure on the Hochschild cohomology of an associative conformal algebra. The paper is organized as follows. In Section \ref{sec:prel}, we recall the notions of associative conformal algebras, conformal bimodules, and the Hochschild cohomology of associative conformal algebras. In Section \ref{sec:lie}, we define a Gerstenhaber bracket on the Hochschild cohomology complex of an associative conformal algebra $A$. This bracket induces a graded Lie bracket $[-,-]$ of degree $-1$ on $\HH^{\ast}(A, A)=\bigoplus_{i\geq0} \HH^{i}(A, A)$. In Section \ref{sec:cup}, we define a product on the Hochschild cohomology complex of an associative conformal algebra $A$. This product induces a cup product $\sqcup$ on the Hochschild cohomology $\HH^{\ast}(A, A)=\bigoplus_{i\geq0}\HH^{i}(A, A)$. In Section \ref{sec:Gerst}, we show that $\HH^{\ast}(A, A)= \bigoplus_{i\geq0}\HH^{i}(A, A)$ with the cup product $\sqcup$ and the graded Lie bracket $[-,-]$ of degree $-1$ is a Gerstenhaber algebra. In Section \ref{sec:exten}, we define the split extension associative conformal algebra $A\widehat{\oplus}M$ of an associative conformal algebra $A$ by a conformal bimodule $M$, consider the relationship between $\HH^{\ast}(A, A)$ and $\HH^{\ast}(A\widehat{\oplus}M, A\widehat{\oplus}M)$, and give an algebra homomorphism from $\HH^{\ast}(A\widehat{\oplus}M)$ to $\HH^{\ast}(A)$. Throughout this note, we fix an algebraically closed field $\cb$ of characteristic zero (for example, the field of complex numbers), and denote by $\mathbb{Z}_{+}$ the set of all nonnegative integers. All vector spaces are $\cb$-vector spaces, all linear maps and bilinear maps are $\cb$-linear, and all tensor products are over $\cb$, unless otherwise specified. For any vector space $V$ and variable $\lambda$, we use $V[\lambda]$ to denote the set of polynomials in $\lambda$ with coefficients in $V$. \section{Associative conformal algebras and Hochschild cohomology} \label{sec:prel} We recall the notions of associative conformal algebras, conformal bimodules over associative conformal algebras, and the Hochschild cohomology of an associative conformal algebra with coefficients in a bimodule. For the details see \cite{BDK,DK,BKV}. \begin{defi}\label{de:alg} A {\rm conformal algebra} $A$ is a $\cb[\partial]$-module endowed with a $\cb$-bilinear map $\cdot\oo{\lambda}\cdot : A\times A\rightarrow A[\lambda]$, $(a, b)\mapsto a\oo{\lambda} b$ satisfying \begin{eqnarray*} \partial a\oo{\lambda}b=-\lambda a\oo{\lambda}b, \qquad a\oo{\lambda}\partial b=(\partial+\lambda)a\oo{\lambda}b, \quad\forall\, a,b\in A. \end{eqnarray*} An {\rm associative conformal algebra} $A$ is a conformal algebra satisfying \begin{eqnarray*} (a\oo{\lambda}b)\oo{\lambda+\mu}c=a\oo{\lambda}(b\oo\mu c), \quad \forall\, a, b, c\in A.
\end{eqnarray*} \end{defi} Let $(A,\; \cdot\ooo{\lambda}{A}\cdot)$, $(B,\; \cdot\ooo{\lambda}{B}\cdot)$ be two associative conformal algebras. A $\cb[\partial]$-module homomorphism $f: A\rightarrow B$ is called a {\it homomorphism of associative conformal algebras} if for any $a, b\in A$, $f(a\oo{\lambda}b)=f(a)\oo{\lambda}f(b)$. \begin{ex} Let $(A,\cdot)$ be an associative algebra. Then ${\rm Cur}(A)=\cb[\partial]\otimes A$ is an associative conformal algebra with the following $\lambda$-product: \begin{eqnarray*} (p(\partial)a)\oo\lambda (q(\partial)b)=p(-\lambda)q(\lambda+\partial) (a\cdot b), \qquad\forall\, \text{$p(\partial)$, $q(\partial)\in \cb[\partial]$, $a$, $b\in A$.} \end{eqnarray*} \end{ex} Now we recall the definition of left (or right) modules over an associative conformal algebra. \begin{defi}\label{def:mod} A {\rm (conformal) left module} $M$ over an associative conformal algebra $A$ is a $\cb[\partial]$-module endowed with a $\cb$-bilinear map $A\times M\rightarrow M[\lambda]$, $(a, v)\mapsto a\ool{\lambda} v$, satisfying the following axioms: \begin{align*} (\partial a)\ool{\lambda} v&=-\lambda a\ool{\lambda} v,\qquad a\ool{\lambda}(\partial v)=(\partial+\lambda)(a\ool{\lambda} v),\\ &(a\oo{\lambda} b)\ool{\lambda+\mu}v=a\ool{\lambda}(b\ool{\mu} v), \end{align*} for any $a, b\in A$ and $v\in M$. We denote it by $(M, \triangleright)$. A {\rm (conformal) right module} $M$ over an associative conformal algebra $A$ is a $\cb[\partial]$-module endowed with a $\cb$-bilinear map $M\times A\rightarrow M[\lambda]$, $(v, a)\mapsto v\oor{\lambda} a$, satisfying: \begin{align*} (\partial v)\oor{\lambda} a&=-\lambda v\oor{\lambda} a,\qquad v\oor{\lambda}(\partial a)=(\partial+\lambda)(v\oor{\lambda} a),\\ &(v\oor{\lambda} a)\oor{\lambda+\mu}b=v\oor{\lambda}(a\oo{\mu} b), \end{align*} for any $a, b\in A$ and $v\in M$. We denote it by $(M, \triangleleft)$. An {\rm (conformal) $A$-bimodule} is a triple $(M, \triangleright, \triangleleft)$ such that $(M, \triangleright)$ is a left $A$-module, $(M, \triangleleft)$ is a right $A$-module, and they satisfy \begin{eqnarray*} (a\ool{\lambda} v)\oor{\lambda+\mu}b=a\ool{\lambda}(v\oor{\mu}b), \end{eqnarray*} for any $a, b\in A$ and $v\in M$. \end{defi} Let $A$ be an associative conformal algebra. Define two bilinear maps $\triangleright^{A},\; \triangleleft^{A}:\; A\otimes A\rightarrow A[\lambda]$ by $a\oll{\lambda}{A} b=a\oo{\lambda} b$ and $b\orr{\lambda}{A} a=b\oo{\lambda}a$ for all $a$, $b\in A$. Then $(A, \triangleright^{A}, \triangleleft^{A})$ is a bimodule over $A$; it is called the regular bimodule of $A$. In 1999, Bakalov, Kac and Voronov first gave the definition of the Hochschild cohomology of associative conformal algebras \cite{BKV}. This definition is a conformal analogue of the Hochschild cohomology of associative algebras. In \cite{DK}, for the case of Lie conformal algebras, the definition was improved by taking $n-1$ variables. Following this idea, we define the Hochschild cohomology for an associative conformal algebra $A$ and a bimodule $M$ over $A$.
We denote $\mathcal{C}^{0}(A, M)=M/\partial M$, $\mathcal{C}^{1}(A, M)=\Hom_{\cb[\partial]} (A, M)$, the set of $\cb[\partial]$-module homomorphisms from $A$ to $M$, and for $n\geq2$, the space of $n$-cochains $\mathcal{C}^{n}(A, M)$ consists of all maps $$ \varphi_{\lambda_{1},\dots, \lambda_{n-1}}:\; A^{\otimes n}\longrightarrow M[\lambda_{1},\dots, \lambda_{n-1}], $$ such that $$ \varphi_{\lambda_{1},\dots, \lambda_{n-1}}(a_{1},\dots, \partial a_{i},\dots, a_{n}) =-\lambda_{i}\varphi_{\lambda_{1},\dots, \lambda_{n-1}}(a_{1},\dots, a_{n}), $$ for $i=1,2,\cdots,n-1$ and $$ \varphi_{\lambda_{1},\dots, \lambda_{n-1}}(a_{1},\dots, a_{n-1},\partial a_{n}) =(\partial+\lambda_{1}+\dots+\lambda_{n-1})\varphi_{\lambda_{1},\dots, \lambda_{n-1}} (a_{1},\dots, a_{n}). $$ The differentials are defined by $d_{0}: M/\partial M\rightarrow \Hom_{\cb[\partial]}(A, M)$, \begin{equation*} d_{0}(v+\partial M)(a)=(a\ool{-\lambda-\partial}v-v\oor{\lambda}a)\mid_{\lambda=0}, \end{equation*} and for $n\geq1$, \begin{multline}\nonumber \qquad d_{n}(\varphi)_{\lambda_{1},\dots, \lambda_{n}}(a_{1},\dots, a_{n+1}) =(a_{1}\ool{\lambda_{1}}\varphi_{\lambda_{2},\dots, \lambda_{n}} (a_{2},\dots, a_{n+1})) \\ +\sum_{i=1}^{n}(-1)^{i}\varphi_{\lambda_{1},\dots, \lambda_{i-1}, \lambda_{i}+\lambda_{i+1}, \lambda_{i+2}\dots,\lambda_{n}} (a_{1},\dots, a_{i-1}, a_{i}\oo{\lambda_{i}} a_{i+1} , a_{i+2}\dots, a_{n+1}) \\ +(-1)^{n+1}(\varphi_{\lambda_{1},\dots, \lambda_{n-1}}(a_{1},\dots, a_{n})\oor{\lambda_{1}+\dots+\lambda_{n}} a_{n+1}).\qquad \end{multline} One can verify that the operator $d_{n}$ preserves the space of cochains and that $d_{n+1}\circ d_{n}=0$. The cochains of an associative conformal algebra $A$ with coefficients in a bimodule $M$ form a complex $(\mathcal{C}^{\ast}(A, M), d_{\ast})$, called the {\it Hochschild complex}. We denote the space of $n$-cocycles by $\mathcal{Z}^{n}(A, M)=\{\varphi\in\mathcal{C}^{n}(A, M)\mid d_{n}(\varphi)=0\}$, and the space of $n$-coboundaries by $\mathcal{B}^{n} (A, M)=\{d_{n-1}(\varphi) \mid\varphi\in\mathcal{C}^{n-1}(A, M)\}$. The $n$-th Hochschild cohomology of $A$ with coefficients in $M$ is defined by \begin{equation*} \HH^{n}(A, M)=\mathcal{Z}^{n}(A, M)/\mathcal{B}^{n}(A, M). \end{equation*} In particular, if $M=A$ is the regular bimodule, we write $\HH^{n}(A):=\HH^{n}(A, M)$ and call it the $n$-th Hochschild cohomology of $A$. In this paper, we consider the Gerstenhaber algebra structure on $\HH^{\ast}(A)$. For the description of the lowest degrees of the Hochschild cohomology, we have \begin{itemize} \item[(1)] $\Img d_{0}=\Inn(A, M)$, where $\Inn(A, M)=\{f_{v}\in\Hom_{\cb[\partial]} (A, M)\mid v\in M, f_{v}(a)=a\ool{-\partial}v-v\oor{0}a\}$; \item[(2)] $\Ker d_{1}=\Der(A, M)$, where $\Der(A, M)=\{f\in\Hom_{\cb[\partial]} (A, M)\mid f(a\oo{\lambda}b)=a\ool{\lambda}f(b)+f(a)\oor{\lambda}b\}$; \item[(3)] if $A$ is projective as a $\cb[\partial]$-module, the equivalence classes of $\cb[\partial]$-split abelian extensions of $A$ by $M$ correspond bijectively to $\HH^{2}(A, M)$. For the details see \cite{Do,Lib}. \end{itemize} \section{Graded Lie algebra structure}\label{sec:lie} In this section, we give a graded Lie algebra structure on the Hochschild cohomology of an associative conformal algebra. First, let us recall Gerstenhaber's classical result on pre-Lie systems.
\begin{defi}\label{def:pre-Lie sys} A pre-Lie system $\{V_{m}, \circ_{i}\}$ consists of a sequence of vector spaces $V_{m}$, $m \geq 0$, and for each $m, n \geq 0$ there exist linear maps \begin{equation*} \circ_{i} : V_{m} \otimes V_{n} \rightarrow V_{m+n}, \qquad (f, g)\mapsto f \circ_{i} g, \end{equation*} for any $0\leq i \leq m$, satisfying \begin{equation*} (f\circ_{i} g)\circ_{j} h= \begin{cases}(f \circ_{j} h) \circ_{i+p} g, &\mbox{ if} \quad 0\leq j\leq i-1;\\ f\circ_{i} (g \circ_{j-i} h), &\mbox{ if} \quad i\leq j\leq n+i, \end{cases} \end{equation*} for $f\in V_{m}$, $g\in V_{n}$ and $h\in V_{p}$. \end{defi} Let $\{V_{m}, \circ_{i}\}$ be a pre-Lie system. Then for any $m, n \geq 0$, there is a new linear map $\bar{\circ} : V_m \otimes V_n \rightarrow V_{m+n}$, $(f, g)\mapsto f\bar{\circ} g$, defined by \begin{equation*} f\bar{\circ} g = \sum_{i=0}^{m}(-1)^{ni}f \circ_{i} g, \end{equation*} for any $f\in V_{m}$, $g \in V_{n}$. Then we have the following theorem. \begin{thm}[\cite{Ger}]\label{thm: Pre-Lie alg} Let $\{V_{m}, \circ_{i}\}$ be a pre-Lie system and $f\in V_{m}$, $g \in V_{n}$ and $h \in V_{p}$, respectively. Then \begin{itemize} \item[(i)] $(f\bar{\circ} g)\bar{\circ} h-f\bar{\circ}(g\bar{\circ} h) =\sum_{i, j}(-1)^{ni+pj}(f\circ_{i}g)\circ_{j}h$, where the summation is indexed over those $i$ and $j$ with either $0\leq j\leq i-1$ or $n+i+1\leq j\leq m+n$, \item[(ii)] $(f\bar{\circ} g)\bar{\circ} h-f\bar{\circ}(g\bar{\circ} h) =(-1)^{np}[(f\bar{\circ} h)\bar{\circ} g-f\bar{\circ}(h\bar{\circ} g)]$. \end{itemize} \end{thm} \begin{defi}\label{def:pre-Lie alg} A {\rm graded pre-Lie algebra} $\{V_{m}, \bar{\circ}\}$ consists of vector spaces $V_{m}$ together with linear maps $\bar{\circ}: V_{m}\otimes V_{n}\rightarrow V_{m+n}$ satisfying \begin{equation*} (f\bar{\circ} g)\bar{\circ} h-f\bar{\circ}(g\bar{\circ} h) =(-1)^{np}[(f\bar{\circ} h)\bar{\circ} g-f\bar{\circ}(h\bar{\circ} g)], \end{equation*} for any $f\in V_{m}$, $g\in V_{n}$ and $h\in V_{p}$. \end{defi} It is proved in \cite{Ger} that the graded commutator of a graded pre-Lie algebra defines a graded Lie algebra structure. More precisely, if $\{V_{m}, \bar{\circ}\}$ is a graded pre-Lie algebra, then the bracket $[-,-]$ defined by $[f, g]=f\bar{\circ} g-(-1)^{mn}g\bar{\circ} f$, for any $f\in V_{m}$, $g \in V_{n}$, defines a graded Lie algebra structure on $\bigoplus_{m\geq0}V_{m}$. Now let $A$ be a $\cb[\partial]$-module. For $n\geq 2$, a linear map $f: A^{\otimes n}\rightarrow A[\lambda_{0},\cdots, \lambda_{n-2}]$ is called {\it conformal sesquilinear} if \begin{align*} f_{\lambda_{0},\dots,\lambda_{n-2}}(a_{0}, \dots, \partial a_{i},\dots, a_{n-1}) &=-\lambda_{i}f_{\lambda_{0},\dots,\lambda_{n-2}}(a_{0},\dots, a_{i},\dots, a_{n-1}), \quad 0\leq i\leq n-2,\\ f_{\lambda_{0},\dots, \lambda_{n-2}}(a_{0},\dots, a_{n-2}, \partial a_{n-1}) &=(\partial+\lambda_{0}+\dots+\lambda_{n-2})f_{\lambda_{0},\dots, \lambda_{n-2}} (a_{0},\dots, a_{n-1}). \end{align*} We set $U_{0}=\Hom_{\cb[\partial]}(A, A)$, and denote by $U_{n}$ the set of all conformal sesquilinear maps from $A^{\otimes n+1}$ to $A[\lambda_{0},\cdots, \lambda_{n-1}]$ for $n\geq1$.
Define $\bullet_{i}: U_{m}\times U_{n}\rightarrow U_{m+n}$ by \begin{align*} &(f\bullet_{i} g)_{\lambda_{0},\dots,\lambda_{m+n-1}}(a_{0}, a_{1},\dots, a_{m+n})\\ =&f_{\lambda_{0},\dots,\lambda_{i-1}, \lambda_{i}+\cdots+\lambda_{i+n}, \lambda_{i+n+1},\dots,\lambda_{m+n-1}}\big(a_{0},\dots, a_{i-1}, g_{\lambda_{i}, \dots,\lambda_{i+n-1}}(a_{i},\dots, a_{i+n}), a_{i+n+1}, \dots, a_{m+n}\big), \end{align*} for any $f\in U_{m}$, $g\in U_{n}$ and $0\leq i\leq m$. In particular, if $m=0$, \begin{equation*} (f\bullet_{0} g)_{\lambda_{0},\dots,\lambda_{n-1}}(a_{0}, a_{1}, \dots, a_{n})=f\Big(g_{\lambda_{0},\dots,\lambda_{n-1}}(a_{0}, a_{1}, \dots, a_{n})\Big), \end{equation*} where $f$ is extended canonically to a $\cb[\partial]$-module homomorphism from $A[\lambda_{0},\dots,\lambda_{n-1}]$ to $A[\lambda_{0},\dots,\lambda_{n-1}]$, and if $n=0$, \begin{equation*} (f\bullet_{i} g)_{\lambda_{0},\dots,\lambda_{m-1}} (a_{0}, a_{1},\dots, a_{m})=f_{\lambda_{0},\dots,\lambda_{m-1}} (a_{0},\dots, a_{i-1}, g(a_{i}), a_{i+1},\dots, a_{m}). \end{equation*} Then we get a pre-Lie system. \begin{lem}\label{lem:pre-Lie sys} With the above notations, $\{U_{m}, \bullet_{i}\}$ forms a pre-Lie system. \end{lem} \begin{proof} Direct calculation shows that for any $f\in U_{m}$, $g\in U_{n}$ and $h\in U_{p}$, (1) if $0\leq j\leq i-1$, we have \begin{align*} &((f\bullet_{j}h)\bullet_{i+p}g)_{\lambda_{0},\dots,\lambda_{m+n+p-1}} (a_{0}, a_{1},\dots, a_{m+n+p})\\ =&(f\bullet_{j}h)_{\lambda_{0},\dots,\lambda_{i+p-1}, \lambda_{i+p}+\cdots +\lambda_{i+p+n},\lambda_{i+p+n+1},\dots,\lambda_{m+n+p-1}} \big(a_{0},\dots, a_{i+p-1}, \\ &\qquad g_{\lambda_{i+p},\dots,\lambda_{i+p+n-1}}(a_{i+p},\dots, a_{i+n+p}), a_{i+n+p+1}, \dots, a_{m+n+p}\big),\\ =&f_{\lambda_{0},\dots,\lambda_{j-1}, \lambda_{j}+\cdots+\lambda_{j+p}, \lambda_{j+p+1},\dots,\lambda_{i+p},\lambda_{i+p+1}+\cdots +\lambda_{i+p+n+1},\lambda_{i+p+n+2},\dots,\lambda_{m+n+p}} \big(a_{0},\dots, a_{j-1}, h_{\lambda_{j},\dots,\lambda_{j+p-1}} (a_{j},\dots,a_{j+p}),\\ &\qquad a_{j+p+1},\dots, a_{i+p-1}, g_{\lambda_{i+p},\dots,\lambda_{i+p+n-1}} (a_{i+p},\dots, a_{i+n+p}),a_{i+n+p+1}, \dots, a_{m+n+p}\big),\\ =&((f\bullet_{i} g)\bullet_{j} h)_{\lambda_{0},\dots,\lambda_{m+n+p-1}} (a_{0}, a_{1},\dots, a_{m+n+p}); \end{align*} (2) if $i\leq j\leq n+i$, we have \begin{align*} &(f\bullet_{i}(g\bullet_{j-i}h))_{\lambda_{0},\dots,\lambda_{m+n+p-1}} (a_{0}, a_{1},\dots, a_{m+n+p})\\ =&f_{\lambda_{0},\dots,\lambda_{i-1},\lambda_{i}+\cdots +\lambda_{i+p+n},\lambda_{i+p+n+1},\dots,\lambda_{m+n+p-1}} \big(a_{0},\dots, a_{i-1}, \\ &\qquad (g\bullet_{j-i}h)_{\lambda_{i},\dots,\lambda_{i+p+n-1}}(a_{i}, \dots, a_{i+n+p}),a_{i+n+p+1}, \dots, a_{m+n+p}\big),\\ =&f_{\lambda_{0},\dots,\lambda_{i-1},\lambda_{i}+\cdots +\lambda_{i+p+n},\lambda_{i+p+n+1},\dots,\lambda_{m+n+p-1}} \big(a_{0},\dots, a_{i-1}, g_{\lambda_{i},\dots,\lambda_{j-1}, \lambda_{j}+\cdots+ \lambda_{j+p}, \lambda_{j+p+1},\dots, \lambda_{i+n+p-1}}(a_{i},\dots,\\ &\qquad a_{j-1}, h_{\lambda_{j},\dots,\lambda_{j+p-1}}(a_{j},\dots,a_{j+p}), a_{j+p+1},\dots, a_{i+n+p}), a_{i+n+p+1}, \dots, a_{m+n+p}\big),\\ =&((f\bullet_{i} g)\bullet_{j} h)_{\lambda_{0},\dots,\lambda_{m+n+p-1}} (a_{0}, a_{1},\dots, a_{m+n+p}). \end{align*} Thus $\{U_{m}, \bullet_{i}\}$ is a pre-Lie system. \end{proof} Then by Theorem \ref{thm: Pre-Lie alg}, we get a pre-Lie algebra $\Big(\bigoplus_{i\geq0}U_{i},\; \bullet\Big)$, where \begin{equation*} f\bullet g = \sum_{i=0}^{m}(-1)^{ni}f\bullet_{i} g, \end{equation*} for any $f\in U_{m}$, $g\in U_{n}$.
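For orientation, in the lowest nontrivial degree $m=n=1$ the compositions unwind as follows (this is a direct specialization of the definition above): \begin{align*} (f\bullet_{0} g)_{\lambda_{0},\lambda_{1}}(a_{0}, a_{1}, a_{2}) &= f_{\lambda_{0}+\lambda_{1}}\big(g_{\lambda_{0}}(a_{0}, a_{1}), a_{2}\big),\\ (f\bullet_{1} g)_{\lambda_{0},\lambda_{1}}(a_{0}, a_{1}, a_{2}) &= f_{\lambda_{0}}\big(a_{0}, g_{\lambda_{1}}(a_{1}, a_{2})\big), \end{align*} so that $f\bullet g = f\bullet_{0}g - f\bullet_{1}g$. In particular, for a conformal multiplication $\rho_{\lambda}(a,b)=a\oo{\lambda}b$ in $U_{1}$, one finds \begin{equation*} (\rho\bullet\rho)_{\lambda_{0},\lambda_{1}}(a,b,c) = (a\oo{\lambda_{0}}b)\oo{\lambda_{0}+\lambda_{1}}c - a\oo{\lambda_{0}}(b\oo{\lambda_{1}}c), \end{equation*} which vanishes precisely when the $\lambda$-product is associative.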
Therefore, we get a graded Lie algebra $\Big(\bigoplus_{i\geq0}U_{i},\; [-, -]\Big)$ with \begin{equation*} [f, g]=f\bullet g-(-1)^{mn}g\bullet f,\qquad \forall\; f\in U_{m},\; g\in U_{n}. \end{equation*} One can check that $\rho\in U_{1}$ is such that $(A, \rho)$ is an associative conformal algebra if and only if $[\rho, \rho]=0$, that is, $\rho$ satisfies the Maurer--Cartan equation of $\Big(\bigoplus_{i\geq0}U_{i},\; [-, -]\Big)$. Let $A$ be an associative conformal algebra, where we denote the conformal multiplication by $\rho_{\lambda}(a, b)=a\oo{\lambda}b$ for any $a, b\in A$. We now return to the Hochschild cohomology of the associative conformal algebra $A$. Noting that the space of $n$-cochains satisfies $\mathcal{C}^{n}(A, A)=U_{n-1}$ for all $n\geq1$, we get a graded Lie algebra $\Big(\bigoplus_{i\geq1}\mathcal{C}^{i}(A, A),\; [-, -]\Big)$, whose degree $-1$ bracket $[-, -]: \mathcal{C}^{m}(A, A)\times \mathcal{C}^{n}(A, A)\rightarrow\mathcal{C}^{m+n-1}(A, A)$, for $m, n\geq1$, is given by \begin{equation*} [f, g]=f\bullet g-(-1)^{(m-1)(n-1)}g\bullet f, \end{equation*} for any $f\in\mathcal{C}^{m}(A, A)$ and $g\in\mathcal{C}^{n}(A, A)$. For any cocycle $f\in\mathcal{Z}^{m}(A, A)$, we identify $f$ with its image in $\HH^{m}(A)$; then the Lie bracket induces an operation $[-, -]:\HH^{m}(A)\times \HH^{n}(A) \rightarrow \HH^{m+n-1}(A)$. \begin{thm}\label{thm:graded-Lie} The Lie bracket $[-, -]$ on $\bigoplus_{i\geq1}\mathcal{C}^{i}(A, A)$ induces a bracket $[-, -]$ of degree $-1$ on $\bigoplus_{i\geq1}\HH^{i}(A)$ such that $\Big(\bigoplus_{i\geq1}\HH^{i}(A),\; [-, -]\Big)$ is a graded Lie algebra. \end{thm} \begin{proof} We denote the conformal multiplication on $A$ by $\rho$, i.e., $\rho_{\lambda}(a, b)=a\oo{\lambda}b$. Then for any $f\in\mathcal{C}^{m}(A, A)$, we have \begin{align*} &(f\bullet\rho-(-1)^{m-1}\rho\bullet f)_{\lambda_{1},\dots,\lambda_{m}}(a_{1},\dots, a_{m+1})\\ =& \sum_{i=1}^{m}(-1)^{i-1}f_{\lambda_{1},\dots,\lambda_{i-1}, \lambda_{i}+\lambda_{i+1}, \lambda_{i+2},\dots,\lambda_{m}} \Big(a_{1},\dots,a_{i-1},\rho_{\lambda_{i}}(a_{i}, a_{i+1}),a_{i+2},\dots,a_{m+1}\Big)\\ &-(-1)^{m-1}\rho_{\lambda_{1}+\cdots+\lambda_{m}}\Big(f_{\lambda_{1},\dots, \lambda_{m-1}}(a_{1},\dots,a_{m}), a_{m+1}\Big) -\rho_{\lambda_{1}}\Big(a_{1}, f_{\lambda_{2},\dots,\lambda_{m}}(a_{2}, \dots,a_{m+1})\Big)\\ =&-d_{m}(f)_{\lambda_{1},\dots,\lambda_{m}}(a_{1},\dots, a_{m+1}). \end{align*} That is, $d_{m}(f)=-[f, \rho]=(-1)^{m+1}[\rho, f]$. Thus, for any $f\in\mathcal{C}^{m}(A, A)$ and $g\in\mathcal{C}^{n}(A, A)$, \begin{align*} d_{m+n-1}([f, g])=&(-1)^{m+n}[\rho, [f, g]]\\ =&(-1)^{m+n}\Big([[\rho, f], g]+(-1)^{m-1}[f, [\rho, g]]\Big)\\ =&(-1)^{n+1}[d_{m}(f), g]+[f, d_{n}(g)]. \end{align*} This means that if $f\in\mathcal{Z}^{m}(A, A)$ and $g\in\mathcal{Z}^{n}(A, A)$, then $[f, g]\in\mathcal{Z}^{m+n-1}(A, A)$; if $f\in\mathcal{Z}^{m}(A, A)$, $g\in\mathcal{Z}^{n}(A, A)$, and $f\in\mathcal{B}^{m}(A, A)$ or $g\in\mathcal{B}^{n}(A, A)$, then $[f, g]\in\mathcal{B}^{m+n-1}(A, A)$. Hence the bracket $[-, -]$ is well-defined, and $\Big(\bigoplus_{i\geq1}\HH^{i}(A),\; [-, -]\Big)$ is a graded Lie algebra. \end{proof} Under the bracket $[-, -]$, $\HH^{1}(A)$ is a Lie algebra, and $\HH^{n}(A)$ are $\HH^{1}(A)$-modules for all $n\geq2$. \section{The cup product}\label{sec:cup} In this section, we define a product on the Hochschild cochain complex of an associative conformal algebra. This product is the conformal analogue of the cup product on the Hochschild cochain complex of associative algebras; thus we also call it the cup product.
For any $f\in\mathcal{C}^{m}(A, A)$ and $g\in\mathcal{C}^{n}(A, A)$, we define $f\sqcup g\in\mathcal{C}^{m+n}(A, A)$ by \begin{equation*} (f\sqcup g)_{\lambda_{1},\dots,\lambda_{m+n-1}}(a_{1},\dots, a_{m+n})=f_{\lambda_{1},\dots,\lambda_{m-1}} (a_{1},\dots, a_{m})\oo{\lambda_{1}+\cdots+\lambda_{m}} g_{\lambda_{m+1},\dots,\lambda_{m+n-1}}(a_{m+1},\dots, a_{m+n}). \end{equation*} If we view the conformal multiplication $\rho$ on $A$ as an element in $\mathcal{C}^{2}(A, A)$, we have \begin{equation*} f\sqcup g=(\rho\bullet_{0}f)\bullet_{m} g,\qquad g\sqcup f=(\rho\bullet_{1}f)\bullet_{0} g. \end{equation*} For any $f\in\mathcal{C}^{m}(A, A)$, $g\in\mathcal{C}^{n}(A, A)$ and $h\in\mathcal{C}^{p}(A, A)$, we have \begin{align*} &(f\sqcup(g\sqcup h))_{\lambda_{1},\dots,\lambda_{m+n+p-1}}(a_{1},\dots, a_{m+n+p})\\ =&f_{\lambda_{1},\dots,\lambda_{m-1}}(a_{1},\dots, a_{m})\oo{\lambda_{1} +\cdots+\lambda_{m}}(g\sqcup h)_{\lambda_{m+1},\dots,\lambda_{m+n+p-1}} (a_{m+1},\dots, a_{m+n+p})\\ =&f_{\lambda_{1},\dots,\lambda_{m-1}}(a_{1},\dots, a_{m})\oo{\lambda_{1} +\cdots+\lambda_{m}}\Big(g_{\lambda_{m+1},\dots,\lambda_{m+n-1}} (a_{m+1},\dots, a_{m+n})\oo{\lambda_{m+1}+\cdots+\lambda_{m+n}}\\ &\qquad h_{\lambda_{m+n+1},\dots,\lambda_{m+n+p-1}}(a_{m+n+1},\dots, a_{m+n+p})\Big)\\ =&\Big(f_{\lambda_{1},\dots,\lambda_{m-1}}(a_{1},\dots, a_{m})\oo{\lambda_{1} +\cdots+\lambda_{m}}g_{\lambda_{m+1},\dots,\lambda_{m+n-1}}(a_{m+1},\dots, a_{m+n})\Big)\oo{\lambda_{1}+\cdots+\lambda_{m+n}}\\ &\qquad h_{\lambda_{m+n+1},\dots,\lambda_{m+n+p-1}}(a_{m+n+1},\dots, a_{m+n+p})\\ =&((f\sqcup g)\sqcup h)_{\lambda_{1},\dots,\lambda_{m+n+p-1}}(a_{1},\dots, a_{m+n+p}). \end{align*} That is, $\Big(\bigoplus_{i\geq1}\mathcal{C}^{i}(A, A),\; \sqcup\Big)$ is a graded associative algebra. We call the product $\sqcup$ the cup product. The following proposition shows that the cup product also induces a graded associative algebra structure on $\bigoplus_{i\geq1}\HH^{i}(A)$. \begin{pro}\label{pro: cup-product} The cup product $\sqcup$ on $\bigoplus_{i\geq1}\mathcal{C}^{i}(A, A)$ induces a product on $\bigoplus_{i\geq1}\HH^{i}(A)$, denoted by $\sqcup$, such that $\Big(\bigoplus_{i\geq1}\HH^{i}(A),\; \sqcup\Big)$ is a graded associative algebra. \end{pro} \begin{proof} We first show that the differential $d$ on the complex $\mathcal{C}^{\bullet}(A, A)$ satisfies the graded Leibniz rule with respect to the cup product.
For any $f\in\mathcal{C}^{m}(A, A)$, $g\in\mathcal{C}^{n}(A, A)$, $m, n\geq1$, we have \begin{align*} &d_{m+n}(f\sqcup g)_{\lambda_{1},\dots,\lambda_{m+n}}(a_{1},\dots, a_{m+n+1})\\ =&a_{1}\oo{\lambda_{1}}(f\sqcup g)_{\lambda_{2},\dots,\lambda_{m+n}} (a_{2},\dots, a_{m+n+1})\\ &+\sum_{i=1}^{m+n}(-1)^{i}(f\sqcup g)_{\lambda_{1},\dots,\lambda_{i-1}, \lambda_{i}+\lambda_{i+1}, \lambda_{i+2},\dots,\lambda_{m+n}}(a_{1}, \dots, a_{i-1}, a_{i}\oo{\lambda_{i}}a_{i+1}, a_{i+2},\dots, a_{m+n+1})\\ &+(-1)^{m+n+1}(f\sqcup g)_{\lambda_{1},\dots,\lambda_{m+n-1}} (a_{1},\dots, a_{m+n})\oo{\lambda_{1}+\cdots+\lambda_{m+n}}a_{m+n+1}\\ =&a_{1}\oo{\lambda_{1}}\Big(f_{\lambda_{2},\dots,\lambda_{m}}(a_{2},\dots, a_{m+1}) \oo{\lambda_{2}+\cdots+\lambda_{m+1}}g_{\lambda_{m+2},\dots,\lambda_{m+n}} (a_{m+2},\dots, a_{m+n+1})\Big)\\ &+\sum_{i=1}^{m}(-1)^{i}f_{\lambda_{1},\dots,\lambda_{i-1},\lambda_{i} +\lambda_{i+1}, \lambda_{i+2},\dots,\lambda_{m}}(a_{1}, \dots, a_{i-1}, a_{i} \oo{\lambda_{i}}a_{i+1}, a_{i+2},\dots, a_{m+1})\oo{\lambda_{1} +\cdots+\lambda_{m+1}}\\ &\qquad g_{\lambda_{m+2},\dots,\lambda_{m+n}}(a_{m+2},\dots, a_{m+n+1})\\ &+\sum_{i=m+1}^{m+n}(-1)^{i}f_{\lambda_{1},\dots,\lambda_{m-1}}(a_{1},\dots, a_{m})\oo{\lambda_{1}+\cdots+\lambda_{m}}g_{\lambda_{m+1},\dots,\lambda_{i-1}, \lambda_{i}+\lambda_{i+1}, \lambda_{i+2},\dots,\lambda_{m+n}}(a_{m+1}, \dots,\\ &\qquad a_{i-1}, a_{i}\oo{\lambda_{i}}a_{i+1}, a_{i+2},\dots, a_{m+n+1})\\ &+(-1)^{m+n+1}\Big(f_{\lambda_{1},\dots,\lambda_{m-1}}(a_{1},\dots, a_{m}) \oo{\lambda_{1}+\cdots+\lambda_{m}}g_{\lambda_{m+1},\dots,\lambda_{m+n-1}} (a_{m+1},\dots, a_{m+n})\Big)\oo{\lambda_{1}+\cdots+\lambda_{m+n}}a_{m+n+1}\\ =&d_{m}(f)_{\lambda_{1},\dots,\lambda_{m}}(a_{1},\dots, a_{m+1})\oo{ \lambda_{1}+\cdots+\lambda_{m+1}}g_{\lambda_{m+2},\dots,\lambda_{m+n}} (a_{m+2},\dots, a_{m+n+1})\\ &\qquad +(-1)^{m}f_{\lambda_{1},\dots,\lambda_{m-1}}(a_{1},\dots, a_{m})\oo{ \lambda_{1}+\cdots+\lambda_{m}}d_{n}(g)_{\lambda_{m+1},\dots,\lambda_{m+n}} (a_{m+1},\dots, a_{m+n+1})\\ =&\Big(d_{m}(f)\sqcup g+(-1)^{m}f\sqcup d_{n}(g)\Big)_{\lambda_{1},\dots,\lambda_{m+n}}(a_{1},\dots, a_{m+n+1}). \end{align*} Thus $d_{m+n}(f\sqcup g)=d_{m}(f)\sqcup g+(-1)^{m}f\sqcup d_{n}(g)$. This means that if $f\in\mathcal{Z}^{m}(A, A)$ and $g\in\mathcal{Z}^{n}(A, A)$, then $f\sqcup g\in\mathcal{Z}^{m+n}(A, A)$; if $f\in\mathcal{Z}^{m}(A, A)$, $g\in\mathcal{Z}^{n}(A, A)$, and $f\in\mathcal{B}^{m}(A, A)$ or $g\in\mathcal{B}^{n}(A, A)$, then $f\sqcup g\in\mathcal{B}^{m+n}(A, A)$. Hence the cup product $\sqcup$ is well-defined, and $\Big(\bigoplus_{i\geq1}\HH^{i}(A),\; \sqcup\Big)$ is a graded associative algebra. \end{proof} In general, the cup product $\sqcup$ on the Hochschild cochain complex $\mathcal{C}^{\ast}(A, A)$ is not graded commutative. But we have the following theorem. \begin{thm}\label{thm: commutative} The graded associative algebra $\Big(\bigoplus_{i\geq1}\HH^{i}(A),\; \sqcup\Big)$ is graded commutative. \end{thm} \begin{proof} For any $f\in\mathcal{C}^{m}(A, A)$, $g\in\mathcal{C}^{n}(A, A)$, $m, n\geq1$, we have the identity \begin{equation*} f\bullet d_{n}(g)+(-1)^{n-1}d_{m}(f)\bullet g-d_{m+n-1}(f\bullet g) =(-1)^{n-1}\Big(g\sqcup f-(-1)^{mn}f\sqcup g\Big).
\end{equation*} Indeed, note that \begin{align*} &f\bullet d_{n}(g)+(-1)^{n-1}d_{m}(f)\bullet g-d_{m+n-1}(f\bullet g)\\ =&(-1)^{n-1}f\bullet(\rho\bullet g)-f\bullet(g\bullet\rho) +(-1)^{m+n}(\rho\bullet f)\bullet g\\ &\qquad -(-1)^{n-1}(f\bullet\rho)\bullet g-(-1)^{m+n}\rho\bullet(f\bullet g) +(f\bullet g)\bullet\rho, \end{align*} and $\Big(\bigoplus_{i\geq0}U_{i},\; \bullet\Big)$ is a graded pre-Lie algebra, i.e., \begin{equation*} (f\bullet g)\bullet\rho-f\bullet(g\bullet\rho) =(-1)^{n-1}\Big((f\bullet\rho)\bullet g-f\bullet(\rho\bullet g)\Big), \end{equation*} we get \begin{align*} &f\bullet d_{n}(g)+(-1)^{n-1}d_{m}(f)\bullet g-d_{m+n-1}(f\bullet g) \\ =&(-1)^{m+n}\Big((\rho\bullet f)\bullet g-\rho\bullet(f\bullet g)\Big) \\ =&(-1)^{m+n}\Big((-1)^{m(n-1)}(\rho\bullet_{0}f)\bullet_{m}g +(-1)^{m-1}(\rho\bullet_{1}f)\bullet_{0} g\Big) \\ =&(-1)^{m+n}\Big((-1)^{m(n-1)}f\sqcup g+(-1)^{m-1}g\sqcup f\Big)\\ =&(-1)^{n-1}\Big(g\sqcup f-(-1)^{mn}f\sqcup g\Big). \end{align*} Thus, if $f\in\mathcal{Z}^{m}(A, A)$ and $g\in\mathcal{Z}^{n}(A, A)$, then \begin{equation*} (-1)^{n-1}\Big(g\sqcup f-(-1)^{mn}f\sqcup g\Big)=-d_{m+n-1}(f\bullet g) \in\mathcal{B}^{m+n-1}(A, A). \end{equation*} Hence $g\sqcup f=(-1)^{mn}f\sqcup g$ in cohomology for $f\in\HH^{m}(A)$ and $g\in\HH^{n}(A)$, and therefore $\Big(\bigoplus_{i\geq1}\HH^{i}(A),\; \sqcup\Big)$ is graded commutative. \end{proof} \section{The Gerstenhaber algebra structure}\label{sec:Gerst} In this section, we prove that the degree $-1$ graded Lie bracket $[-,-]$ and the cup product $\sqcup$ on the Hochschild cohomology $\bigoplus_{i\geq1}\HH^{i}(A)$ of an associative conformal algebra $A$ satisfy the Leibniz rule of a Gerstenhaber algebra. This gives the Gerstenhaber algebra structure on $\bigoplus_{i\geq1}\HH^{i}(A)$. For any $f\in\mathcal{C}^{m}(A, A)$, $g\in\mathcal{C}^{n}(A, A)$, $h\in\mathcal{C}^{p}(A, A)$, $m, n, p\geq1$, $1\leq i\leq p-1$ and $m+i\leq j\leq m+p-1$, we set \begin{align*} H_{i,j}=& a_{1}\oo{\lambda_{1}}h_{\lambda_{2},\dots,\lambda_{i}, \lambda_{i+1}+\cdots+\lambda_{i+m},\lambda_{i+m+1},\dots,\lambda_{j}, \lambda_{j+1}+\cdots+\lambda_{j+n},\lambda_{j+n+1},\dots,\lambda_{m+n+p-2}} \Big(a_{2},\dots, a_{i}, \\ &\quad f_{\lambda_{i+1},\dots,\lambda_{i+m-1}}(a_{i+1},\dots, a_{i+m}), a_{i+m+1},\dots, a_{j},\\ &\quad g_{\lambda_{j+1},\dots,\lambda_{j+n-1}}(a_{j+1}, \dots, a_{j+n}), a_{j+n+1},\dots, a_{m+n+p-1}\Big)\\ &+\sum_{q=1}^{i-1}(-1)^{q}h_{\lambda_{1},\dots,\lambda_{q-1}, \lambda_{q} +\lambda_{q+1},\lambda_{q+2},\dots,\lambda_{i}, \lambda_{i+1}+\cdots+\lambda_{i+m}, \lambda_{i+m+1},\dots,\lambda_{j},\lambda_{j+1}+\cdots+\lambda_{j+n}, \lambda_{j+n+1},\dots, \lambda_{m+n+p-2}}\\ &\quad \Big(a_{1},\dots,a_{q-1}, a_{q}\oo{\lambda_{q}}a_{q+1}, a_{q+2},\dots, a_{i}, f_{\lambda_{i+1},\dots,\lambda_{i+m-1}}(a_{i+1},\dots, a_{i+m}),\\ &\quad a_{i+m+1},\dots,a_{j}, g_{\lambda_{j+1},\dots,\lambda_{j+n-1}} (a_{j+1},\dots, a_{j+n}),a_{j+n+1},\dots, a_{m+n+p-1}\Big)\\ &+(-1)^{i}h_{\lambda_{1},\dots,\lambda_{i-1}, \lambda_{i}+\cdots+\lambda_{i+m},\lambda_{i+m+1},\dots,\lambda_{j}, \lambda_{j+1}+\cdots+\lambda_{j+n},\lambda_{j+n+1},\dots,\lambda_{m+n+p-2}} \Big(a_{1},\dots, a_{i-1}, \\ &\quad a_{i}\oo{\lambda_{i}}f_{\lambda_{i+1},\dots,\lambda_{i+m-1}}(a_{i+1},\dots, a_{i+m}), a_{i+m+1},\dots, a_{j},\\ &\quad g_{\lambda_{j+1},\dots,\lambda_{j+n-1}}(a_{j+1}, \dots, a_{j+n}), a_{j+n+1},\dots, a_{m+n+p-1}\Big), \end{align*} \begin{align*} H_{i,j}'=&(-1)^{m+i-1}h_{\lambda_{1},\dots,\lambda_{i-1}, \lambda_{i}+\cdots+\lambda_{i+m},\lambda_{i+m+1},\dots,\lambda_{j},
\lambda_{j+1}+\cdots+\lambda_{j+n},\lambda_{j+n+1},\dots,\lambda_{m+n+p-2}} \Big(a_{1},\dots, a_{i-1},\\ &\quad f_{\lambda_{i},\dots,\lambda_{i+m-2}} (a_{i},\dots, a_{i+m-1})\oo{\lambda_{i}+\cdots+\lambda_{i+m-1}} a_{i+m}, a_{i+m+1},\dots, a_{j}, \\ &\quad g_{\lambda_{j+1},\dots,\lambda_{j+n-1}}(a_{j+1}, \dots, a_{j+n}), a_{j+n+1},\dots, a_{m+n+p-1}\Big) \\ &+\sum_{q=m+i}^{j-1}(-1)^{q}h_{\lambda_{1},\dots,\lambda_{i-1}, \lambda_{i}+\cdots+\lambda_{i+m-1}, \lambda_{i+m},\dots,\lambda_{q-1}, \lambda_{q}+\lambda_{q+1},\lambda_{q+2},\dots,\lambda_{j},\lambda_{j+1}+\cdots +\lambda_{j+n},\lambda_{j+n+1},\dots,\lambda_{m+n+p-2}}\\ &\quad \Big(a_{1},\dots,a_{i-1}, f_{\lambda_{i},\dots,\lambda_{i+m-2}} (a_{i},\dots, a_{i+m-1}), a_{i+m},\dots, a_{q-1}, a_{q}\oo{\lambda_{q}}a_{q+1},\\ &\quad a_{q+2},\dots, a_{j}, g_{\lambda_{j+1},\dots,\lambda_{j+n-1}}(a_{j+1}, \dots, a_{j+n}), a_{j+n+1},\dots, a_{m+n+p-1}\Big)\\ &+(-1)^{j}h_{\lambda_{1},\dots,\lambda_{i-1}, \lambda_{i}+\cdots+\lambda_{i+m-1}, \lambda_{i+m},\dots,\lambda_{j-1}, \lambda_{j}+\cdots+\lambda_{j+n},\lambda_{j+n+1},\dots,\lambda_{m+n+p-2}} \Big(a_{1},\dots,\\ &\quad a_{i-1}, f_{\lambda_{i},\dots,\lambda_{i+m-2}} (a_{i},\dots, a_{i+m-1}), a_{i+m},\dots,a_{j-1},\\ &\quad a_{j}\oo{\lambda_{j}}g_{\lambda_{j+1},\dots,\lambda_{j+n-1}}(a_{j+1}, \dots, a_{j+n}), a_{j+n+1},\dots, a_{m+n+p-1}\Big), \end{align*} and \begin{align*} H_{i,j}''=&(-1)^{j+n-1}h_{\lambda_{1},\dots,\lambda_{i-1}, \lambda_{i}+\cdots+\lambda_{i+m-1},\lambda_{i+m},\dots,\lambda_{j-1}, \lambda_{j}+\cdots+\lambda_{j+n},\lambda_{j+n+1},\dots,\lambda_{m+n+p-2}} \Big(a_{1},\dots,\\ &\quad a_{i-1},f_{\lambda_{i},\dots,\lambda_{i+m-2}} (a_{i},\dots, a_{i+m-1}), a_{i+m},\dots, a_{j-1}, \\ &\quad g_{\lambda_{j},\dots,\lambda_{j+n-2}}(a_{j}, \dots, a_{j+n-1})\oo{\lambda_{j}+\cdots+\lambda_{j+n-1}} a_{j+n}, a_{j+n+1},\dots, a_{m+n+p-1}\Big) \\ &+\sum_{q=j+n}^{m+n+p-2}(-1)^{q}h_{\lambda_{1},\dots,\lambda_{i-1}, \lambda_{i}+\cdots+\lambda_{i+m-1}, \lambda_{i+m},\dots,\lambda_{j-1}, \lambda_{j}+\cdots+\lambda_{j+n-1}, \lambda_{j+n},\dots,\lambda_{q-1}, \lambda_{q}+\lambda_{q+1},\lambda_{q+2},\dots,\lambda_{m+n+p-2}}\\ &\quad \Big(a_{1},\dots,a_{i-1}, f_{\lambda_{i},\dots,\lambda_{i+m-2}} (a_{i},\dots, a_{i+m-1}), a_{i+m},\dots,a_{j-1},\\ &\quad g_{\lambda_{j},\dots,\lambda_{j+n-2}}(a_{j},\dots, a_{j+n-1}), a_{j+n},\dots, a_{q-1}, a_{q}\oo{\lambda_{q}}a_{q+1},a_{q+2},\dots, a_{m+n+p-1}\Big)\\ &+(-1)^{m+n+p-1}h_{\lambda_{1},\dots,\lambda_{i-1}, \lambda_{i}+\cdots+\lambda_{i+m-1}, \lambda_{i+m},\dots,\lambda_{j-1}, \lambda_{j}+\cdots+\lambda_{j+n-1},\lambda_{j+n},\dots,\lambda_{m+n+p-3}} \Big(a_{1},\dots,\\ &\quad a_{i-1}, f_{\lambda_{i},\dots,\lambda_{i+m-2}} (a_{i},\dots, a_{i+m-1}), a_{i+m},\dots,a_{j-1},g_{\lambda_{j},\dots, \lambda_{j+n-2}}(a_{j},\dots, a_{j+n-1}), \\ &\quad a_{j+n},\dots,a_{m+n+p-2}\Big)\oo{\lambda_{1} +\cdots+\lambda_{m+n+p-2}}a_{m+n+p-1}, \end{align*} for any $a_{1},\dots,a_{m+n+p-1}\in A$. Then one can check that the following lemma holds. \begin{lem}\label{lem:relation} Let $A$ be an associative conformal algebra, and $f\in\mathcal{Z}^{m}(A, A)$, $g\in\mathcal{Z}^{n}(A, A)$, $h\in\mathcal{C}^{p}(A, A)$, $m, n, p\geq1$. Then for any $1\leq i\leq p-1$ and $m+i\leq j\leq m+p-1$, we have \begin{equation*} d_{m+n+p-1}((h\bullet_{i-1}f)\bullet_{j-1}g)(a_{1},\dots, a_{m+n+p-1}) =H_{i,j}+H_{i,j}'+H_{i,j}''.
\end{equation*} \end{lem} For any $f\in\mathcal{Z}^{m}(A, A)$, $g\in\mathcal{Z}^{n}(A, A)$, $h\in\mathcal{Z}^{1}(A, A)=\Der(A, A)$, $m, n\geq1$, we have \begin{align*} (h\bullet(f\sqcup g))(a_{1},\dots, a_{m+n}) =&h\Big(f_{\lambda_{1},\dots,\lambda_{m-1}}(a_{1},\dots, a_{m}) \oo{\lambda_{1}+\cdots+\lambda_{m}}g_{\lambda_{m+1},\dots,\lambda_{m+n-1}} (a_{m+1},\dots, a_{m+n})\Big),\\ ((h\bullet f)\sqcup g)(a_{1},\dots, a_{m+n}) =&h\Big(f_{\lambda_{1},\dots,\lambda_{m-1}}(a_{1},\dots, a_{m})\Big) \oo{\lambda_{1}+\cdots+\lambda_{m}}g_{\lambda_{m+1},\dots,\lambda_{m+n-1}} (a_{m+1},\dots, a_{m+n}),\\ (f\sqcup (h\bullet g))(a_{1},\dots, a_{m+n}) =&f_{\lambda_{1},\dots,\lambda_{m-1}}(a_{1},\dots, a_{m}) \oo{\lambda_{1}+\cdots+\lambda_{m}}h\Big(g_{\lambda_{m+1},\dots,\lambda_{m+n-1}} (a_{m+1},\dots, a_{m+n})\Big). \end{align*} That is, $h\bullet(f\sqcup g)-(h\bullet f)\sqcup g-f\sqcup (h\bullet g)=0$ since $h\in\Der(A, A)$. But in general, for $h\in\mathcal{Z}^{p}(A, A)$, $p\geq2$, the left-hand side of the above equality may not be zero. The following proposition shows that it is given by a coboundary. \begin{pro}\label{pro:coboundary} Let $A$ be an associative conformal algebra, and $f\in\mathcal{Z}^{m}(A, A)$, $g\in\mathcal{Z}^{n}(A, A)$, $h\in\mathcal{Z}^{p}(A, A)$, $m, n\geq1$ and $p\geq2$. Set \begin{equation*} \mathcal{H}=\sum_{i=0}^{p-2}\sum_{j=m+i}^{m+p-2}(-1)^{(m-1)i+(n-1)j} (h\bullet_{i}f)\bullet_{j}g. \end{equation*} Then $\mathcal{H}\in\mathcal{C}^{m+n+p-2}(A, A)$ and \begin{equation*} d_{m+n+p-2}(\mathcal{H})=(-1)^{(m-1)n}\Big(h\bullet(f\sqcup g) -(-1)^{n(p-1)}(h\bullet f)\sqcup g-f\sqcup (h\bullet g)\Big). \end{equation*} \end{pro} \begin{proof} By Lemma \ref{lem:relation}, we have \begin{equation*} d_{m+n+p-1}((h\bullet_{i-1}f)\bullet_{j-1}g)(a_{1},\dots, a_{m+n+p-1}) =H_{i,j}+H_{i,j}'+H_{i,j}'', \end{equation*} for any $1\leq i\leq p-1$, $m+i\leq j\leq m+p-1$ and $a_{1},\dots, a_{m+n+p-1}\in A$. Note that if $m+i+1\leq j\leq m+p-2$, \begin{align*} &H_{i,j}+(-1)^{m-1}H_{i+1,j}'+(-1)^{m+n}H_{i+1,j+1}''\\ =&d_{p}(h)_{\lambda_{1},\dots,\lambda_{i}, \lambda_{i+1}+\cdots+\lambda_{i+m}, \lambda_{i+m+1},\dots,\lambda_{j}, \lambda_{j+1}+\cdots+\lambda_{j+n-1}, \lambda_{j+n+1},\dots,\lambda_{m+n+p-2}}\Big(a_{1},\dots, a_{i}, f_{\lambda_{i+1}, \dots,\lambda_{i+m-1}}(a_{i+1},\dots, \\ &\qquad a_{i+m}), a_{i+m+1},\dots, a_{j}, g_{\lambda_{j+1},\dots,\lambda_{j+n-1}} (a_{j+1},\dots, a_{j+n}), a_{j+n+1},\dots, a_{m+n+p-1}\Big)\\ =&0, \end{align*} since $h\in\mathcal{Z}^{p}(A, A)$. To handle the boundary values of $i$ and $j$ in this cancellation, we record the following identifications. \begin{itemize} \item[] For $j=m, m+1,\dots, m+p-1$, \begin{equation}\label{equ:re1} H_{0,j}=(f\sqcup(h\bullet_{j-m}g))(a_{1},\dots, a_{m+n+p-1}). \end{equation} \item[] For $i=1, 2,\dots, p$, \begin{equation}\label{equ:re2} H'_{i,m+i-1}=(-1)^{m+i-1}(h\bullet_{i-1}(f\sqcup g))(a_{1},\dots, a_{m+n+p-1}). \end{equation} \item[] For $i=1, 2,\dots, p$, \begin{equation}\label{equ:re3} H''_{i,m+p}=(-1)^{m+n+p-1}((h\bullet_{i-1}f)\sqcup g)(a_{1},\dots, a_{m+n+p-1}). \end{equation} \end{itemize} Then one can check that \begin{equation}\label{equ:11} \sum_{i=0}^{p-1}\sum_{j=m+i}^{m+p-1}(-1)^{(m-1)(i-1)+(n-1)(j-1)} \Big(H_{i,j}+(-1)^{m-1}H'_{i+1,j}+(-1)^{m+n}H''_{i+1,j+1}\Big)=0. \end{equation} Moreover, by Lemma \ref{lem:relation}, one can check that \begin{equation*} d_{m+n+p-2}(\mathcal{H})(a_{1},\dots, a_{m+n+p-1}) =\sum_{i=0}^{p-1}\sum_{j=m+i}^{m+p-1}(-1)^{(m-1)(i-1)+(n-1)(j-1)} \Big(H_{i,j}+H'_{i+1,j}+H''_{i+1,j+1}\Big).
\end{equation*} Thus, by equation (\ref{equ:11}), we obtain \begin{align*} 0=&d_{m+n+p-2}(\mathcal{H})(a_{1},\dots, a_{m+n+p-1})+\sum_{i=0}^{p-1} (-1)^{(m-1)(i-1)+(n-1)(m+p-2)+m+n}H''_{i+1,m+p}\\ &+\sum_{i=0}^{p-1}(-1)^{(m-1)(i-1)+(n-1)(m+i-1)+m-1}H'_{i+1,m+i} +\sum_{j=m}^{m+p-1}(-1)^{(n-1)(j-1)-(m-1)}H_{0,j}. \end{align*} Substituting equations (\ref{equ:re1}), (\ref{equ:re2}) and (\ref{equ:re3}) into the above equation, we get \begin{equation*} d_{m+n+p-2}(\mathcal{H})=(-1)^{(m-1)n}f\sqcup (h\bullet g) -(-1)^{(m-1)n}h\bullet(f\sqcup g)-(-1)^{(m-1)n+n(p-1)}(h\bullet f)\sqcup g. \end{equation*} Hence, we obtain the proposition. \end{proof} On the other hand, we have the following proposition. \begin{pro}\label{pro:equation} Let $A$ be an associative conformal algebra. For any $f\in\mathcal{C}^{m}(A, A)$, $g\in\mathcal{C}^{n}(A, A)$, $h\in\mathcal{C}^{p}(A, A)$, $m, n, p\geq1$, we have \begin{equation*} (f\sqcup g)\bullet h=(f\bullet h)\sqcup g+(-1)^{m(p-1)}f\sqcup(g\bullet h). \end{equation*} \end{pro} \begin{proof} For $a_{1},\dots, a_{m+n+p-1}\in A$, we have \begin{align*} &((f\sqcup g)\bullet h)(a_{1},\dots, a_{m+n+p-1}) \\ =&\sum_{i=1}^{m+n}(-1)^{(p-1)(i-1)}(f\sqcup g)_{\lambda_{1},\dots,\lambda_{i-1}, \lambda_{i}+\cdots+\lambda_{i+p-1},\lambda_{i+p},\dots,\lambda_{m+n+p-2}} (a_{1},\dots, a_{i-1},\\ &\qquad h_{\lambda_{i},\dots,\lambda_{i+p-2}}(a_{i},\dots, a_{i+p-1}), a_{i+p},\dots, a_{m+n+p-1}) \\ =&\sum_{i=1}^{m}(-1)^{(p-1)(i-1)}f_{\lambda_{1},\dots,\lambda_{i-1}, \lambda_{i}+\cdots+\lambda_{i+p-1},\lambda_{i+p},\dots,\lambda_{m+p-2}} \Big(a_{1},\dots, a_{i-1},h_{\lambda_{i},\dots,\lambda_{i+p-2}}(a_{i},\dots, a_{i+p-1}),\\ &\qquad a_{i+p},\dots,a_{m+p-1}\Big)\oo{\lambda_{1}+\cdots+\lambda_{m+p-1}} g_{\lambda_{m+p},\dots,\lambda_{m+n+p-2}}(a_{m+p},\dots, a_{m+n+p-1})\\ &+\sum_{i=m+1}^{m+n}(-1)^{(p-1)(i-1)}f_{\lambda_{1},\dots,\lambda_{m-1}} (a_{1},\dots, a_{m})\oo{\lambda_{1}+\cdots+\lambda_{m}}g_{\lambda_{m+1}, \dots,\lambda_{i-1}, \lambda_{i}+\cdots+\lambda_{i+p-1}, \lambda_{i+p},\dots, \lambda_{m+n+p-2}}\\ &\qquad \Big(a_{m+1},\dots, a_{i-1}, h_{\lambda_{i},\dots,\lambda_{i+p-2}} (a_{i},\dots, a_{i+p-1}), a_{i+p}, \dots, a_{m+n+p-1}\Big) \\ =&(f\bullet h)_{\lambda_{1},\dots,\lambda_{m+p-2}}(a_{1}, \dots, a_{m+p-1}) \oo{\lambda_{1}+\cdots+\lambda_{m+p-1}}g_{\lambda_{m+p},\dots,\lambda_{m+n+p-2}} (a_{m+p},\dots, a_{m+n+p-1})\\ &+(-1)^{m(p-1)}f_{\lambda_{1},\dots,\lambda_{m-1}}(a_{1},\dots, a_{m}) \oo{\lambda_{1}+\cdots+\lambda_{m}}(g\bullet h)_{\lambda_{m+1}, \dots,\lambda_{m+n+p-2}}(a_{m+1},\dots, a_{m+n+p-1}) \\ =&\Big((f\bullet h )\sqcup g+(-1)^{m(p-1)}f\sqcup(g\bullet h)\Big) (a_{1},\dots, a_{m+n+p-1}). \end{align*} Thus, $(f\sqcup g)\bullet h=(f\bullet h)\sqcup g+(-1)^{m(p-1)}f\sqcup(g\bullet h)$. \end{proof} Now, by Propositions \ref{pro:coboundary} and \ref{pro:equation}, we obtain the Gerstenhaber algebra structure on the Hochschild cohomology $\bigoplus_{i\geq1}\HH^{i}(A)$. \begin{thm}\label{thm:Gers-alg} Let $A$ be an associative conformal algebra. Then $\Big(\bigoplus_{i\geq1}\HH^{i}(A),\; \sqcup,\; [-, -]\Big)$ is a Gerstenhaber algebra. \end{thm} \begin{proof} Let $f\in\mathcal{Z}^{m}(A, A)$, $g\in\mathcal{Z}^{n}(A, A)$, $h\in\mathcal{Z}^{p}(A, A)$, $m, n, p\geq1$. If $p=1$, a direct calculation gives \begin{equation*} [f\sqcup g, h]-[f, h]\sqcup g-(-1)^{m(p-1)}f\sqcup[g, h]=0.
\end{equation*} If $p\geq2$, by Propositions \ref{pro:coboundary} and \ref{pro:equation}, we have \begin{align*} &[f\sqcup g, h]-[f, h]\sqcup g-(-1)^{m(p-1)}f\sqcup[g, h] \\ =&(f\sqcup g)\bullet h-(-1)^{(p-1)(m+n-1)}h\bullet(f\sqcup g)-(f\bullet h)\sqcup g +(-1)^{(m-1)(p-1)}(h\bullet f)\sqcup g \\ &-(-1)^{m(p-1)}f\sqcup(g\bullet h) +(-1)^{m(p-1)}(-1)^{(n-1)(p-1)}f\sqcup(h\bullet g) \\ =&-(-1)^{m(p-1)}(-1)^{(n-1)(p-1)}h\bullet(f\sqcup g)+(-1)^{p}(h\bullet f)\sqcup g -(-1)^{(n-1)(p-1)}f\sqcup(h\bullet g)\\ =&\pm\Big(h\bullet(f\sqcup g)-f\sqcup(h\bullet g) +(-1)^{np-n+1}(h\bullet f)\sqcup g \Big) \\ =&\pm d_{m+n+p-2}(\mathcal{H}). \end{align*} Thus, for any $f\in\HH^{m}(A)$, $g\in\HH^{n}(A)$, $h\in\HH^{p}(A)$, $m, n, p\geq1$, \begin{equation*} [f\sqcup g, h]-[f, h]\sqcup g-(-1)^{m(p-1)}f\sqcup[g, h]=0. \end{equation*} By the definition of a Gerstenhaber algebra, we obtain the theorem. \end{proof} \section{Cohomology of split extensions}\label{sec:exten} In this section, we consider the relation between the Hochschild cohomology of an associative conformal algebra $A$ and the Hochschild cohomology of its split extension. First, we give the definition of a split extension of an associative conformal algebra. \begin{defi} Let $A$ be an associative conformal algebra, and $M$ be a conformal $A$-bimodule with a conformal product $\cdot\oo{\lambda}\cdot: M\times M\rightarrow M[\lambda]$. The {\rm split extension} of $A$ by $M$ is the associative conformal algebra structure on $A\oplus M$ whose conformal product is given by \begin{equation*} (a,\, u)\oo{\lambda}(b,\, v)=(a\oo{\lambda}b,\; a\ool{\lambda}v+u\oor{\lambda}b+ u\oo{\lambda}v), \end{equation*} for any $a, b\in A$ and $u, v\in M$. We denote this associative conformal algebra by $A\widehat{\oplus}M$. \end{defi} \begin{ex} Let $A$ be an associative conformal algebra, and $M$ be a conformal $A$-bimodule. The {\rm trivial extension} is the associative conformal algebra on $A\oplus M$ with conformal product \begin{equation*} (a,\, u)\oo{\lambda}(b,\, v)=(a\oo{\lambda}b,\; a\ool{\lambda}v+u\oor{\lambda}b). \end{equation*} Trivial extensions form an important class of split extensions. \end{ex} Given a split extension $A\widehat{\oplus}M$, there is an exact sequence of $\cb[\partial]$-modules: \begin{equation*} \xymatrix@C=0.5cm{ 0 \ar[r] & M\ar[rr]^{i\quad} && A\widehat{\oplus}M\ar[rr]<0.2ex>^{\quad p} && A\ar[r]\ar[ll]<0.5ex>^{\quad q} & 0 }, \end{equation*} in which $p: (a, u)\mapsto a$, $i: u\mapsto (0, u)$, $q: a\mapsto (a, 0)$. Clearly, $p$ and $q$ are morphisms of associative conformal algebras with $p\circ q=\id_{A}$; $p$ induces a linear map $\tilde{p}: A\widehat{\oplus}M/ \partial(A\widehat{\oplus}M)\rightarrow A/\partial A$ and $q$ induces a linear map $\tilde{q}: A/\partial A\rightarrow A\widehat{\oplus}M/\partial(A\widehat{\oplus}M)$. Moreover, this sequence splits as a sequence of conformal bimodules over $A$, and it is also a sequence of bimodules over $A\widehat{\oplus}M$, where the $A\widehat{\oplus}M$-bimodule structure on $A$ is given via the map $p$. Let $\varphi: A\widehat{\oplus}M\rightarrow A\widehat{\oplus}M$ be a $\cb[\partial]$-module homomorphism. Then $p\circ\varphi\circ q: A\rightarrow A$ is also a $\cb[\partial]$-module homomorphism.
For any $\varphi\in\mathcal{C}^{n} (A\widehat{\oplus}M, A\widehat{\oplus}M)$, one can check that $p\circ\varphi\circ q^{\otimes n}$ is also conformal sesquilinear, where \begin{equation*} q^{\otimes n}(a_{0},\dots,a_{n-1})=(q(a_{0}),\dots,q(a_{n-1})), \end{equation*} and $p$ is extended canonically to a $\cb[\partial]$-module homomorphism from $A\widehat{\oplus}M[\lambda_{0},\dots,\lambda_{n-2}]$ to $A[\lambda_{0},\dots, \lambda_{n-2}]$, i.e., $p(\sum x_{i_{0},\dots,i_{n-2}} \lambda_{0}^{i_{0}}\cdots\lambda_{n-2}^{i_{n-2}}) =\sum p(x_{i_{0},\dots,i_{n-2}})\lambda_{0}^{i_{0}}\cdots\lambda_{n-2}^{i_{n-2}}$. \begin{lem}\label{lem:map} Let $A\widehat{\oplus}M$ be the split extension of $A$ by the bimodule $M$. Let $\tilde{d}_{n}$ (resp. $d_{n}$) denote the $n$-th differential in the Hochschild complex of $A\widehat{\oplus}M$ with coefficients in $A\widehat{\oplus}M$ (resp. of $A$ with coefficients in $A$). Then we have \begin{equation}\label{eq:0th} d_{0}(\tilde{p}((a, u)+\partial(A\widehat{\oplus}M))) =\tilde{p}\circ\tilde{d}_{0}((a, u)+\partial(A\widehat{\oplus}M))\circ\tilde{q} \end{equation} for any $(a, u)+\partial(A\widehat{\oplus}M)\in A\widehat{\oplus}M/\partial(A\widehat{\oplus}M)$; for $n\geq1$, and any $\varphi\in\mathcal{C}^{n}(A\widehat{\oplus}M, A\widehat{\oplus}M)$, \begin{equation}\label{eq:nth} d_{n}(p\circ \varphi\circ q^{\otimes n}) =p\circ \tilde{d}_{n}(\varphi)\circ q^{\otimes (n+1)}. \end{equation} \end{lem} \begin{proof} First, if $n=0$, for any $b+\partial A\in A/\partial A$, \begin{align*} \tilde{p}\circ\tilde{d}_{0}((a, u)+\partial(A\widehat{\oplus}M))\circ\tilde{q} (b+\partial A)&=\tilde{p}\circ\tilde{d}_{0}((a, u)+\partial(A\widehat{\oplus}M)) ((b,0)+\partial(A\widehat{\oplus}M))\\ &=\tilde{p}((b,0)\oo{-\partial}(a, u)-(a, u)\oo{0}(b,0))\\ &=b\oo{-\partial}a-a\oo{0}b\\ &=d_{0}(\tilde{p}((a, u)+\partial(A\widehat{\oplus}M)))(b+\partial A). \end{align*} Second, for any $n\geq1$, $a_{0},\dots,a_{n}\in A$, since $p, q$ are morphisms of associative conformal algebras and $p\circ q=\id_{A}$, we have \begin{align*} &d_{n}(p\circ \varphi\circ q^{\otimes n})_{\lambda_{0},\dots,\lambda_{n-1}} (a_{0},\dots,a_{n})\\ =&a_{0}\oo{\lambda_{0}}(p\circ \varphi\circ q^{\otimes n})_{\lambda_{1},\dots, \lambda_{n-1}}(a_{1},\dots,a_{n})\\ &+\sum_{i=0}^{n-1}(-1)^{i+1}(p\circ \varphi\circ q^{\otimes n})_{\lambda_{0}, \dots,\lambda_{i-1},\lambda_{i}+\lambda_{i+1}, \lambda_{i+2},\dots,\lambda_{n-1}} (a_{0},\dots,a_{i-1}, a_{i}\oo{\lambda_{i}}a_{i+1}, a_{i+2},\dots,a_{n})\\ &+(-1)^{n+1}(p\circ \varphi\circ q^{\otimes n})_{\lambda_{0},\dots, \lambda_{n-2}}(a_{0},\dots,a_{n-1})\oo{\lambda_{0}+\cdots+\lambda_{n-1}}a_{n}\\ =&p\circ q(a_{0})\oo{\lambda_{0}}p\circ \varphi_{\lambda_{1},\dots, \lambda_{n-1}}(q(a_{1}),\dots,q(a_{n}))\\ &+\sum_{i=0}^{n-1}(-1)^{i+1}p\circ\varphi_{\lambda_{0},\dots,\lambda_{i-1}, \lambda_{i}+\lambda_{i+1},\lambda_{i+2},\dots,\lambda_{n-1}}(q(a_{0}),\dots, q(a_{i-1}),q(a_{i}\oo{\lambda_{i}}a_{i+1}), q(a_{i+2}),\dots,q(a_{n}))\\ &+(-1)^{n+1}p\circ \varphi_{\lambda_{0},\dots, \lambda_{n-2}}(q(a_{0}), \dots,q(a_{n-1}))\oo{\lambda_{0}+\cdots+\lambda_{n-1}}p\circ q(a_{n})\\ =&(p\circ \tilde{d}_{n}(\varphi)\circ q^{\otimes (n+1)})_{\lambda_{0},\dots, \lambda_{n-1}}(a_{0},\dots,a_{n}). \end{align*} Thus $d_{n}(p\circ \varphi\circ q^{\otimes n}) =p\circ \tilde{d}_{n}(\varphi)\circ q^{\otimes (n+1)}$. \end{proof} For $n\geq0$ and any cocycle $\varphi\in\mathcal{Z}^{n}(A, A)$, we denote its image under the natural surjection $\mathcal{Z}^{n}(A, A)\rightarrow \HH^{n}(A)$ by $[\varphi]$.
Then by equation (\ref{eq:0th}), we have a linear map $\Phi^{0}: \HH^{0}(A\widehat{\oplus}M)\rightarrow \HH^{0}(A)$, $[(a, u)+\partial(A\widehat{\oplus}M)]\mapsto [\tilde{p}((a, u)+\partial(A\widehat{\oplus}M))]$. More generally, we have the following corollary. \begin{cor}\label{cor:HH-map} Let $A\widehat{\oplus}M$ be the split extension of $A$ by the bimodule $M$. Then for each $n\geq1$ there exists a linear map $\Phi^{n}: \HH^{n}(A\widehat{\oplus}M)\rightarrow \HH^{n}(A)$ given by $[\varphi]\mapsto [p\circ\varphi\circ q^{\otimes n}]$. \end{cor} \begin{proof} Assume that $\varphi\in\mathcal{C}^{n}(A\widehat{\oplus}M, A\widehat{\oplus}M)$ is a cocycle, that is, $\tilde{d}_{n}(\varphi)=0$. Then formula (\ref{eq:nth}) shows that $d_{n}(p\circ\varphi\circ q^{\otimes n})=0$, so $p\circ\varphi\circ q^{\otimes n}$ is a cocycle. If $\varphi\in\mathcal{C}^{n}(A\widehat{\oplus}M, A\widehat{\oplus}M)$ is a coboundary, that is, $\varphi=\tilde{d}_{n-1}(\psi)$ for some $\psi\in \mathcal{C}^{n-1}(A\widehat{\oplus}M, A\widehat{\oplus}M)$, then by formula (\ref{eq:nth}), $p\circ\varphi\circ q^{\otimes n}=p\circ\tilde{d}_{n-1}(\psi) \circ q^{\otimes n}=d_{n-1}(p\circ\psi\circ q^{\otimes (n-1)})$. That is, $p\circ\varphi\circ q^{\otimes n}$ is also a coboundary. Thus $\Phi^{n}$ is well-defined. Finally, it is easy to see that $\Phi^{n}$ is $\cb$-linear. \end{proof} Now we can give the main conclusions of this section. \begin{thm}\label{thm:alg-map} Considering $\HH^{\ast}(A\widehat{\oplus}M)=\bigoplus_{i\geq 1} \HH^{i}(A\widehat{\oplus}M)$ and $\HH^{\ast}(A)=\bigoplus_{i\geq 1}\HH^{i}(A)$ as rings under the cup product, the maps $\Phi^{n}$ induce a ring morphism \begin{equation*} \Phi^{\ast}: \HH^{\ast}(A\widehat{\oplus}M)\rightarrow\HH^{\ast}(A). \end{equation*} \end{thm} \begin{proof} Let $\eta=[f]\in\HH^{s}(A\widehat{\oplus}M)$ and $\zeta=[g]\in\HH^{t}(A\widehat{\oplus}M)$, where $f\in\mathcal{C}^{s}(A\widehat{\oplus}M, A\widehat{\oplus}M)$ and $g\in\mathcal{C}^{t}(A\widehat{\oplus}M, A\widehat{\oplus}M)$ for any $s, t\geq1$. Then by Corollary \ref{cor:HH-map}, $\Phi^{s}(\eta)=[p\circ f\circ q^{\otimes s}]$ and $\Phi^{t}(\zeta)=[p\circ g\circ q^{\otimes t}]$, and \begin{align*} &\big((p\circ f\circ q^{\otimes s})\sqcup(p\circ g\circ q^{\otimes t})\big) (a_{0},\dots,a_{s+t-1})\\ =&(p\circ f\circ q^{\otimes s})_{\lambda_{0},\dots,\lambda_{s-2}}(a_{0}, \dots,a_{s-1})\oo{\lambda_{0}+\cdots+\lambda_{s-1}}(p\circ g\circ q^{\otimes t})_{\lambda_{s},\dots,\lambda_{s+t-2}}(a_{s},\dots,a_{s+t-1})\\ =&p\circ f_{\lambda_{0},\dots,\lambda_{s-2}}(q(a_{0}),\dots,q(a_{s-1})) \oo{\lambda_{0}+\cdots+\lambda_{s-1}}p\circ g_{\lambda_{s},\dots,\lambda_{s+t-2}} (q(a_{s}),\dots,q(a_{s+t-1}))\\ =&p\circ (f\sqcup g)(q(a_{0}),\dots,q(a_{s+t-1}))\\ =&p\circ (f\sqcup g)\circ q^{\otimes (s+t)}(a_{0},\dots,a_{s+t-1}). \end{align*} Thus $(p\circ f\circ q^{\otimes s})\sqcup(p\circ g\circ q^{\otimes t}) =p\circ (f\sqcup g)\circ q^{\otimes (s+t)}$. Taking cohomology classes, we get $\Phi^{s}(\eta)\sqcup\Phi^{t}(\zeta)=\Phi^{s+t}(\eta\sqcup\zeta)$. Therefore $\Phi^{\ast}: \HH^{\ast}(A\widehat{\oplus}M)\rightarrow\HH^{\ast}(A)$ is a ring morphism. \end{proof} The algebra homomorphism $\Phi^{\ast}$ is often called the Hochschild projective morphism. For the Hochschild cohomology of associative algebras and the Poisson cohomology of Poisson algebras, the Hochschild projective morphism is not a morphism of graded Lie algebras in general.
In \cite{AGST} and \cite{ZW}, the authors gave conditions for the Hochschild projective morphism to be surjective. For associative conformal algebras, the conditions under which $\Phi^{\ast}$ is surjective still need to be investigated. The sequence $\xymatrix@C=0.5cm{ 0 \ar[r] & M\ar[r] & A\widehat{\oplus}M \ar[r]<0.2ex>& A\ar[r]\ar[l]<0.5ex> & 0 }$ considered in this section is a non-abelian extension of $A$ by $M$. In the subsequent work \cite{HZ}, we will classify the non-abelian extensions of associative conformal algebras using the Maurer--Cartan elements of a differential graded Lie algebra related to the cohomology of associative conformal algebras. \bigskip \noindent {\bf Acknowledgements. } This work was financially supported by the National Natural Science Foundation of China (Nos. 11301144, 11771122, 11801141), the China Postdoctoral Science Foundation (2020M682272) and the NSF of Henan Province (212300410120).
{ "timestamp": "2022-09-20T02:23:36", "yymm": "2209", "arxiv_id": "2209.08715", "language": "en", "url": "https://arxiv.org/abs/2209.08715" }
\section{Introduction} Deep learning has emerged as a disruptive technology capable of solving complex problems in vision~\cite{krizhevsky2012,gaugan,dalle}, language~\cite{brown2020}, and even science~\cite{reichstein2019,alphafold,choudhary2022}. Dramatic advances brought by model size~\cite{kaplan2020,henighan2020} spurred an arms race towards training larger deep neural networks, which have grown to contain hundreds of billions of weights~\cite{hestness2017,brown2020,fedus2021,megatron530b} and consume vast amounts of resources to train and run tasks~\cite{thompson2020}. Thus far, the primary interest in deep learning has been on practical and industrial applications, such as compression techniques~\cite{sparsity,fp16,fp8} to reduce costs, driven by experimental and applied sciences. This is akin to the early days of steam engines, where industrial necessity set the stage for a more robust theory of thermodynamics that was developed over a century later~\cite{hunt2010}. While foundational research for deep neural networks exists~\cite{spectral,bntheory,statmechdl,poggio2020}, the underlying mechanisms that make them perform well on specific tasks continue to elude us. For example, there remains no clear way to distinguish a well- from a poorly-trained network other than through evaluation on tasks, which by itself can be insufficient to determine whether a network has been properly trained. Deep neural networks are by construction reminiscent of magnetic model systems, where nodes are connected by couplings (weights), giving rise to collective behavior that cannot be described by their individual parts. Similar analogies drawn in other areas of science have motivated the search for ``universal'' properties emerging in complex systems found in finance~\cite{plerou2001}, geology~\cite{geology}, social networks~\cite{barabasi1999}, and many more. Thus, the powerful formalism of statistical mechanics may be employed to study neural networks and to understand their mechanics through the unique properties that define them, even though large deep neural networks capable of solving complex problems contain hundreds of billions of weights, whereas classical model systems are typically studied on regular grids with limited connectivity. A recent line of research attempts the above approach by borrowing ideas used to study complex systems. Refs.~\cite{rmtnature,rmtjmlr} employ concepts from random matrix theory, which originates from nuclear physics~\cite{Wigner}, to identify when a neural network might have problems during training. Refs.~\cite{gabrie2018,goldfeld2019} examine neural networks using information-theoretic quantities, like entropy and mutual information. A shortcoming of these works is that they do not take into account the structure of the network architecture, but rather analyze each layer individually, thereby potentially missing out on critical information about its topology. This work borrows concepts from statistical physics (and thermodynamics) to analyze deep neural networks by mapping them to a well-known model system, the Ising model. With this formulation, the weights of a neural network are taken to represent exchange interactions between spins represented by the nodes of the network, and the system can be studied using various properties of spin glass models. The density of states proves particularly effective at uncovering the presence of structures in neural networks after training.
These structures are present across a suite of pretrained transformers and are shown to correlate with performance on tasks, which suggests an emerging behavior after training that is characteristic of complex systems. \section{From Deep Neural Networks to Ising Models} In its simplest form, a deep neural network consists of a set of $L$ layers, where each layer $\ell$ contains $m$ nodes that are connected to $n$ nodes in its neighboring layer $\ell+1$ through weights $w_\ell$ expressed as an $m\times n$ matrix. The goal of training is to learn values of the weights that minimize some energy function $f(x;w_1,\dots,w_L)$ to capture correlations in the data $x$. After training, weights can perform tasks by mapping inputs to desired outputs (e.g., images into labels, questions into answers, translation between languages, etc). \begin{figure}[htb] \begin{tabular}{cc} \includegraphics[trim=100 100 400 50, clip, width=\linewidth]{sketch.pdf} \end{tabular} \caption{Illustration of how a transformer layer is mapped into an Ising model spin glass, where weights (e.g., Linear layers) denote exchange couplings, and spins represent neurons spanning multiple activation layers (e.g., Dropout, Add, and Norm).} \label{figising} \end{figure} \begin{figure*}[t] \begin{tabular}{c} \includegraphics[width=\linewidth]{dos.pdf} \end{tabular} \caption{The density of states, $g(E)$, for a range of transformer networks (of different sizes) pretrained on different data and tasks, and after their values have been shuffled, representing untrained networks without changing the distribution of weights.} \label{figdos} \end{figure*} We propose representing neural networks as Ising models, where each neuron is viewed as a spin with two possible states, an ``up'' or a ``down'' orientation, and each spin pair between consecutive layers interacts via an exchange coupling $J$ that corresponds to the weight matrices $w$. For instance, given a neural network with $y=w_\ell x$, the neuron layers $x$ and $y$ form $m$ and $n$ spins respectively, where spins $x_i$ and $y_j$ are connected through a coupling $J_{ij}\equiv w_{ij}$. Moreover, all neuron layers between a pair of weights are treated as a single set of spins, e.g., given a neural network with $y=w_2g(f(w_1x))$, the in-between layers $f(\cdot)$ and $g(\cdot)$ are collapsed, forming three unique sets of neurons: $x$, $y$, and $z=g(f(\cdot))$. Since weights can take any positive or negative real value, the resulting ``magnetic system'' can be interpreted as a spin glass with quenched and disordered spin interactions. However, unlike most spin models addressed in the literature, neural networks have a large degree, where each node connects to $n$ nearest neighbors instead of the typical $2d$ neighbors of $d$-dimensional regular lattices (e.g., $4$ for the 2d square lattice, or $6$ for the 3d cubic nearest-neighbor lattice). For this work, neural networks based on transformer architectures~\cite{vaswani2017} are considered. Linear weights in the transformer layers constitute exchange interactions, and activations between those layers are collapsed together into spins, as Figure~\ref{figising} illustrates. Weights for query, key, and value projections are summed, since they eventually merge into the same spins, while other network weights such as biases and normalizations are ignored, including residual connections.
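To make the mapping concrete, the following is a minimal sketch (not the code released with this work) of how the linear weights of a pretrained transformer could be collected into couplings and how the corresponding Ising energy is evaluated; the attribute paths are assumptions specific to BERT-style models in the \texttt{transformers} library and differ across model families. \begin{verbatim} import torch from transformers import AutoModel def extract_couplings(model_name="bert-base-uncased"): # Collect the four Linear weights per transformer layer as couplings. # Query/key/value projections are summed (they merge into one spin set); # biases, normalisations, and residual connections are ignored. model = AutoModel.from_pretrained(model_name) couplings = [] for layer in model.encoder.layer: # BERT-specific attribute path qkv = (layer.attention.self.query.weight + layer.attention.self.key.weight + layer.attention.self.value.weight) couplings += [qkv, layer.attention.output.dense.weight, layer.intermediate.dense.weight, layer.output.dense.weight] return [J.detach().numpy() for J in couplings] def ising_energy(couplings, spins): # E = -sum_l sum_ij J^l_ij S^l_i S^{l+1}_j; spins[l] matches the input # dimension of couplings[l] (Linear weights have shape (out, in)). return -sum(s_out @ J @ s_in for J, s_in, s_out in zip(couplings, spins[:-1], spins[1:])) \end{verbatim}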
Thus, for a neural network of $L$ transformer layers, the spin system consists of $4L+1$ sets of spins, where the number of spins in each set is a multiple of the network size, $n_\ell\propto H$, which are connected to spins in the nearest neighboring layer, amounting to a total of $N=H(7L+1)$ spins and $B=10H^2L$ bonds. Using this formulation, neural networks can be defined through the Hamiltonian: \begin{align} E &= -\sum_{<i,j>}J_{ij}S_iS_j \nonumber \\ &=-\sum_{\ell=1}^{4L}\sum_{i=1}^{n_\ell}\sum_{j=1}^{n_{\ell+1}}J_{ij}^{\ell}S_i^{\ell}S_j^{\ell+1}, \end{align} where $<i,j>$ denotes summation over pairs in neighboring layers, $J\equiv w$ corresponds to the exchange couplings (or weights), and $S_i=\pm1$ is the spin (or neuron) at site $i$. In other words, the Hamiltonian is a summation over neural network weights, where each weight makes a positive or negative contribution to the sum based on the values of the neurons it connects. \section{Calculating the Density of States} After mapping a neural network to a spin glass model, a number of thermodynamic variables can be used to study it. Many such variables are determined from suitable derivatives of the partition function, which in turn is computed from the density of states. The density of states of a system describes the number $g(E)$ of states (spin configurations) that are accessible to the system at a particular energy level $E$. Since the density of states curves depend on the topology of the lattice alone, they make an ideal candidate for analyzing structures in neural networks. The Wang-Landau algorithm~\cite{wanglandau} has proved to be the most successful approach for estimating the density of states. The algorithm conducts an iterative procedure via a random walk which produces a flat histogram in energy space, and is succinctly described as follows. First, the density of states is initialized to $g(E)=1$ for all energies $E$. Then a random walk is performed in energy space by flipping spins randomly with a transition probability of \begin{align} p(E_1\rightarrow E_2) = \min\left(\frac{g(E_1)}{g(E_2)},1\right), \end{align} where $E_1$ and $E_2$ are energies before and after a spin is flipped, while simultaneously augmenting the density of states $g(E)\rightarrow g(E)\cdot f$ by a multiplicative factor $f>1$ and incrementing a histogram $H(E)$ of visited configurations. By construction, more probable (higher entropy) energy levels develop higher $g(E)$, and transition probabilities level out, producing a flat histogram. The factor $f$ starts with a high value, such as $f_0=e\approx2.718$, and is reduced according to some schedule whenever the histogram fulfills a flatness criterion (e.g., $H(E)$ is not less than $80\%$ of $\langle H(E)\rangle$ for all possible $E$), after which the histogram is reset to $H(E)=0$. The simulation process is repeated until a lower bound of $f$ is reached (typically, $f_{min} = 10^{-8}$). The appendix details parameter choices made for this work. \section{Simulations} Using the formulation defined above, we map a range of deep neural networks onto Ising models and analyze their possible spin configurations, that is, their density of states. This work focuses on transformers~\cite{vaswani2017}, as they have spurred much interest in recent years, and have grown to billions of weights, consuming vast amounts of resources to train.
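As a concrete companion to the algorithm described in the previous section, the following is a compact sketch of a Wang-Landau walk over the layered spin system (not the implementation behind the results reported here); the fixed energy binning, the update budget between flatness checks, and the $f\rightarrow\sqrt{f}$ schedule are simplifying assumptions. \begin{verbatim} import numpy as np def wang_landau(couplings, n_bins=1000, log_f0=1.0, log_f_min=1e-8, flatness=0.8, steps_per_check=100_000, seed=0): # couplings: list of (n_out, n_in) arrays; spins: one +-1 vector per set. rng = np.random.default_rng(seed) spins = [rng.choice([-1.0, 1.0], size=J.shape[1]) for J in couplings] spins.append(rng.choice([-1.0, 1.0], size=couplings[-1].shape[0])) e_max = sum(np.abs(J).sum() for J in couplings) # |E| <= sum_ij |J_ij| edges = np.linspace(-e_max, e_max, n_bins + 1) log_g = np.zeros(n_bins) # log g(E), initialised to g(E) = 1 hist = np.zeros(n_bins) def bin_of(e): return int(np.clip(np.searchsorted(edges, e) - 1, 0, n_bins - 1)) E = -sum(s_out @ J @ s_in for J, s_in, s_out in zip(couplings, spins[:-1], spins[1:])) log_f = log_f0 # ln f_0 = 1, i.e. f_0 = e while log_f > log_f_min: for _ in range(steps_per_check): l = rng.integers(len(spins)) # random spin set and site i = rng.integers(spins[l].size) h = 0.0 # local field acting on S^l_i if l > 0: h += couplings[l - 1][i] @ spins[l - 1] if l < len(couplings): h += couplings[l][:, i] @ spins[l + 1] dE = 2.0 * spins[l][i] * h # energy change if flipped b_old, b_new = bin_of(E), bin_of(E + dE) # accept with p = min(g(E_old)/g(E_new), 1) if np.log(rng.random()) < log_g[b_old] - log_g[b_new]: spins[l][i] *= -1.0 E += dE b_old = b_new log_g[b_old] += log_f # g(E) -> g(E) * f hist[b_old] += 1 seen = hist > 0 # flatness over visited bins if hist[seen].min() >= flatness * hist[seen].mean(): hist[:] = 0 log_f /= 2.0 # f -> sqrt(f) return edges, log_g \end{verbatim}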
Pretrained transformers are retrieved from Huggingface (\url{https://huggingface.co/}), covering both encoder- and decoder-based transformers of various sizes ($L\in[12,24]$ and $H\in[768,1024,2048]$) for language and vision tasks, including: GPT2~\cite{gpt2}, OPT~\cite{opt}, Bloom~\cite{bloom}, BERT~\cite{bert}, BEiT~\cite{beit}, DeiT~\cite{deit}, ViT~\cite{vit}. We construct the Ising models and compute their density of states using weights from trained networks, and draw comparisons to networks that take ``random'' weights in order to quantify what properties might appear after training. These pseudo-random models are built by shuffling the trained weights, rather than taking their values at initialization, as weight distributions can change after training, thus impacting the energy magnitudes. More concretely, shuffling swaps each network weight $w^\ell_{ij}$ with a randomly chosen weight $w^{\ell^\prime}_{i^\prime j^\prime}$, such that all weights are swapped at least once, and this entire process is repeated ten times. \begin{table}[!b] \caption{Minimum energy $E_{min}$ and width $W$ of the density of states $g(E)$ for various trained neural networks and after shuffling all $B$ bonds, normalized by the number of spins $N$.} \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{lrrrrrrrrrrrr} \hline \hline && && && \multicolumn{3}{c}{Trained} && \multicolumn{3}{c}{Shuffled} \\ Network && $B (10^6)$ && $N$ && $E_{min}/N$ && $W/N$ && $E_{min}/N$ && $W/N$ \\ \hline vit-base && 71 && 65,280 && -2.33 && 8.84 && -1.07 && 1.89 \\ deit-base && 71 && 65,280 && -0.24 && 0.44 && -0.19 && 0.43 \\ beit-base && 71 && 65,280 && -0.43 && 1.14 && -0.38 && 0.86 \\ bert-base && 71 && 65,280 && -2.01 && 2.74 && -0.52 && 0.90 \\ bert-large && 252 && 173,056 && -0.25 && 0.42 && -0.16 && 0.25 \\ opt-125m && 71 && 65,280 && -2.35 && 3.02 && -0.56 && 0.97 \\ opt-350m && 252 && 173,056 && -0.47 && 0.88 && -0.08 && 0.22 \\ opt-1.3b && 1007 && 346,112 && -0.23 && 0.27 && -0.04 && 0.09 \\ bloom-350m && 252 && 173,056 && -0.10 && 0.22 && -0.10 && 0.19 \\ gpt2 && 71 && 65,280 && -1.79 && 6.96 && -1.67 && 3.00 \\ gpt2-medium && 252 && 173,056 && -0.33 && 0.92 && -0.52 && 0.82 \\ \hline \hline \end{tabular} } \label{tabdos} \end{table} Figure~\ref{figdos} plots $g(E)$ for various transformers, where the energy curves differ substantially between the trained and shuffled weights. More specifically, networks that have been trained exhibit a wider $g(E)$, where the width is given by $W=E_{max}-E_{min}$, which means there is a large dispersion in energies between realizable configurations. On the other hand, shuffling the weights substantially diminishes $W$, or the range of energies that different spin configurations can achieve. For instance, $g(E)$ for bert-base spans $2.74$ energy values per spin after training compared to $0.9$ after shuffling. Ising models constructed from trained weights also achieve substantially lower ground states $E_{min}$ than from shuffled weights. For example, opt-1.3b achieves an energy minimum of $-0.23$ per spin compared to $-0.04$ after shuffling. Similar conclusions can be drawn for other transformer models, as shown in Table~\ref{tabdos} and the corresponding $g(E)$ curves in the appendix. These results show that networks behave very differently after training compared to taking a random ordering of their values, which can only arise from structure, i.e., how weights are arranged in a neural network.
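For reference, the shuffled control described above can be sketched in a few lines; as a simplification, a global random permutation of all coupling values (repeated ten times, as in the text) is used here in place of explicit pairwise swaps, which equally preserves the weight distribution while destroying the arrangement. \begin{verbatim} import numpy as np def shuffled_couplings(couplings, n_passes=10, seed=0): # Destroy structure while keeping the value distribution: permute all # coupling values across layers and positions, then restore the shapes. rng = np.random.default_rng(seed) flat = np.concatenate([J.ravel() for J in couplings]) for _ in range(n_passes): rng.shuffle(flat) out, k = [], 0 for J in couplings: out.append(flat[k:k + J.size].reshape(J.shape)) k += J.size return out \end{verbatim}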
One plausible explanation is that they learn specific structures throughout training that make them more amenable to achieving low-energy states. More specifically, combinations of large weights will determine the value of $E_{min}$, as they contribute the most to the energy (e.g., an energy difference of $\Delta E=w_1w_2$ arises from a weight pair $w_1,w_2$ connected through a shared neuron with additive contributions), whereas small weights have a negligible impact. Furthermore, the emergence of structures across neural networks of various sizes and trained on widely different data for distinct tasks suggests they represent a ``universal'' property of learning. \begin{figure}[t] \begin{tabular}{cc} \includegraphics[width=\linewidth]{blocks.pdf} \end{tabular} \caption{Difference in $g(E)$ widths for trained and shuffled, $\Delta=W_\text{train}-W_\text{shuffled}$, as a function of the number of transformer layers $l$ that participate in the Ising model, normalized by the maximum $L$. Colors denote the various neural networks considered: opt-125m, bert-base, and gpt2.} \label{figlayers} \end{figure} To determine whether the observed structures appear across the entire network or are specific to a few layers, Ising models are constructed using a subset of transformer layers (each consisting of four weight matrices). Figure~\ref{figlayers} shows the difference in $g(E)$ widths between trained and shuffled, $\Delta=W_{\text{train}}-W_{\text{shuffled}}$, for different numbers of transformer layers $l$. It can be seen that $\Delta$ increases as a function of layers for all networks considered. While the trained $g(E)$ approaches that of the shuffled network, which has no structure (i.e., $\Delta=0$), when using few layers, adding more layers shows a clear distinction between weights that have been trained and shuffled. More specifically, $\Delta$ saturates once half of the network layers are included ($l/L=0.5$), which suggests that structures span most of the network after training and cannot be observed in a single transformer layer, let alone a single weight matrix, as evaluated in previous works. \begin{figure}[!t] \begin{tabular}{cc} \includegraphics[width=\linewidth]{specificheat.pdf} \end{tabular} \caption{Specific heat $C(T)$ as a function of temperature $T$ (in arbitrary units) for various neural networks: gpt2, opt-125m, bert-base, and vit-base, denoted by different colors.
Solid and dotted lines represent trained and shuffled, respectively.} \label{figspecificheat} \end{figure} \begin{table}[!b] \caption{Density of states width $W/N$ and task error $\mathcal{E}$ (tasks are denoted in subscript) across various networks using different fractions $f$ of values being shuffled.} \centering \resizebox{\columnwidth}{!}{\begin{tabular}{llrrrrrrrrrrrrrrr} \hline \hline && $f$ && $0$ && $0.01$ && $0.05$ && $0.1$ && $0.2$ && $0.5$ && $1$ \\ \hline bert-base && $W/N$ && 2.73 && 2.38 && 1.69 && 1.20 && 0.86 && 0.85 && 0.90 \\ \hline opt-125m && $W/N$ && 2.74 && 2.29 && 1.59 && 1.12 && 0.92 && 0.93 && 0.92 \\ && $\mathcal{E}_{\text{Wikitext2}}$ && $0$ && $0.02$ && $1.5$ && $4.4$ && $31.5$ && $11435$ && - \\ && $\mathcal{E}_{\text{Wikitext103}}$ && $0$ && $0.03$ && $1.5$ && $6.0$ && $29.9$ && $14893$ && - \\ && $\mathcal{E}_{\text{Lambada}}$ && $0$ && $0.03$ && $1.3$ && $5.2$ && $41.4$ && $4817$ && - \\ \hline gpt2 && $W/N$ && 6.96 && 6.07 && 4.50 && 3.40 && 3.00 && 2.96 && 3.00 \\ && $\mathcal{E}_{\text{Wikitext2}}$ && $0$ && $0.03$ && $0.3$ && $3.1$ && $30.7$ && $9808$ && - \\ && $\mathcal{E}_{\text{Wikitext103}}$ && $0$ && $0.02$ && $0.6$ && $4.2$ && $29.4$ && $10059$ && - \\ && $\mathcal{E}_{\text{Lambada}}$ && $0$ && $0.03$ && $1.3$ && $5.2$ && $41.4$ && $4817$ && - \\ \hline vit-base && $W/N$ && 8.84 && 7.88 && 5.73 && 4.01 && 2.04 && 1.90 && 1.89 \\ && $\mathcal{E}_{\text{Imagenet}}$ && $0$ && $0.02$ && $0.01$ && $0.1$ && $0.5$ && $15$ && $79$ \\ \hline \hline \end{tabular}} \label{tabacc} \end{table} So far, trained networks have been compared to their values after shuffling, where the latter performs equivalently to a random network. Thus, an interesting question is whether the density of states distinguishes between networks that achieve different performance (e.g., from different stages of training), which can be approximated by varying the fraction $f$ of values that are shuffled. Table~\ref{tabacc} shows the width of the density of states for different amounts of shuffling, where we observe that $W/N$ approximates that of a random network ($f=1$) when more than $20\%$ of values have been shuffled. To correlate energy values with performance, transformers are evaluated on language or vision tasks after shuffling their weights. For language tasks, text generation is considered on Wikitext-2, Wikitext-103~\cite{wikitext}, and Lambada~\cite{lambada}, where perplexity evaluates network quality. For vision tasks, networks are evaluated for image classification on Imagenet~\cite{imagenet} comprising $1k$ possible classes, where classification accuracy is computed using $10k$ images sampled from the validation set. The task errors $\mathcal{E}$ are computed as the relative change in evaluation metrics, $\mathcal{M}$, between trained and shuffled networks, $\mathcal{E}=\frac{|\mathcal{M}_{\text{trained}}-\mathcal{M}_{\text{shuffled}}|}{\mathcal{M}_{\text{trained}}}\times100$. Table~\ref{tabacc} shows that $\mathcal{E}$ increases with shuffling; errors above $1$ are considered substantial in the deep learning community. Interestingly, the energy values, or more specifically the widths $W$, are much more sensitive to changes in weights than the task errors $\mathcal{E}$. From a more practical perspective, this implies that the density of states could be used to evaluate how well neural networks have been trained \textbf{\textit{without using any external data}}.
This is increasingly important as networks become multi-purpose, where obtaining representative evaluation data for every possible task is challenging (e.g., in language domains, hundreds of tasks are often needed to determine network quality~\cite{brown2020}). From the density of states, one can derive various thermodynamic quantities, such as the specific heat. The specific heat can be defined through the expression: \begin{align} C(T) = \frac{\langle E^2\rangle_T - \langle E\rangle_T^2}{T^2}, \end{align} where $\langle X\rangle_T=\sum_E X g(E)e^{-\beta E}/\sum_Eg(E)e^{-\beta E}$ denotes the thermal expectation with $\beta=1/k_BT$. Figure~\ref{figspecificheat} shows that $C(T)$ differs across networks, but some, such as bert-base and opt-125m, exhibit similar curves. After shuffling, $C(T)$ shifts to the left, dropping to zero at much lower temperatures $T$, and achieving a lower critical temperature $T_c$ (i.e., the temperature that maximizes $C(T)$). For example, opt-125m has $T_c=0.11$, which is over $3\times$ higher than the $T_c=0.03$ obtained after shuffling its weights. Similar conclusions can be drawn for the other networks, as detailed in the appendix. These phenomenological findings require more studies for a better understanding of their implications. \section{Conclusion} The current work analyzes deep neural networks from a statistical physics (thermodynamics) perspective, where weights are mapped to exchange interactions, and nodes to spins of a spin glass Ising model. By calculating the density of states, we demonstrate that structures emerge in weight values after training. For example, well-trained networks span a much wider range of energies than can be realized by poorly trained networks. This work opens several avenues for future research. One direction should focus on analyzing other thermodynamic quantities, which may provide further insights into what properties neural networks obtain after training. For this purpose, the density of states has been released at \url{https://github.com/stosicresearch/dnnising}. Another direction is to expand on its practical applications, such as: distinguishing between weights during training, thereby serving as a criterion for when to stop the training procedure; determining network quality for various parameter choices; and comparing networks that were trained on different data for distinct tasks, to name a few.
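As a small usage sketch for the released density-of-states data, the specific heat above can be evaluated directly from a tabulated $\log g(E)$, working in log space for numerical stability; the two-column file format and file name assumed here are hypothetical. \begin{verbatim} import numpy as np def specific_heat(E, log_g, T): # C(T) = (<E^2>_T - <E>_T^2) / T^2 with k_B = 1; Boltzmann weights are # formed in log space so large log g(E) and E/T do not overflow. log_w = log_g - E / T w = np.exp(log_w - log_w.max()) # overall factor cancels in the ratios Z = w.sum() E1 = (E * w).sum() / Z E2 = (E**2 * w).sum() / Z return (E2 - E1**2) / T**2 # locate T_c as the maximum of C(T) over a temperature sweep E, log_g = np.loadtxt("dos_opt125m.txt", unpack=True) # hypothetical file Ts = np.linspace(0.01, 0.5, 100) Tc = Ts[np.argmax([specific_heat(E, log_g, T) for T in Ts])] \end{verbatim}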
{ "timestamp": "2022-09-20T02:22:22", "yymm": "2209", "arxiv_id": "2209.08678", "language": "en", "url": "https://arxiv.org/abs/2209.08678" }
\section{Extended Related Work}\label{app:related_work} In this section, we give a brief overview of various approaches in CL, especially replay methods, as well as spaced repetition techniques for human CL and generalization in RL. \paragraph{Continual Learning.} Traditional CL can be divided into three main areas, namely regularization-based, architecture-based, and replay-based approaches. Regularization-based methods protect parameters influencing the performance on known tasks from wide changes and use the other parameters for learning new tasks~\cite{adel2019continual, kao2021natural, kirkpatrick2017overcoming, li2017learning, nguyen2017variational, schwarz2018progress, zenke2017continual}. Architecture-based methods mitigate catastrophic forgetting by maintaining task-specific parameters~\cite{ebrahimi2020adversarial, mallya2018packnet, rusu2016progressive, serra2018overcoming, xu2018reinforced, yoon2019scalable, yoon2017lifelong}. Replay methods mix samples from old tasks with the current dataset to mitigate catastrophic forgetting, where the replay samples are either stored in an external memory~\cite{aljundi2019online, aljundi2019gradient, borsos2020coresets, chaudhry2019tiny, chrysakis2020online, hayes2019memory, hayes2020remind, iscen2020memory, isele2018selective, jin2020gradient, lopez2017gradient, nguyen2017variational, pellegrini2019latent, rebuffi2017icarl, rolnick2018experience, verwimp2021rehearsal, yoon2021online} or generated using a generative model~\cite{shin2017continual, van2018generative}. Regularization-based approaches and dynamic architectures have been combined with replay-based approaches to overcome their limitations~\cite{buzzega2020dark, chaudhry2018riemannian, chaudhry2018efficient, chaudhry2021using, douillard2020podnet, ebrahimi2020adversarial, joseph2020meta, mirzadeh2020linear, nguyen2017variational, pan2020continual, pellegrini2019latent, rolnick2018experience, von2019continual}. Our work relates most to replay-based methods with external memory, which we describe in more detail in the next paragraph. \paragraph{Replay-based Continual Learning.} Much research effort in replay- or memory-based CL has focused on selecting higher-quality samples to store in memory~\cite{aljundi2019gradient, borsos2020coresets, chaudhry2019tiny, chrysakis2020online, hayes2019memory, isele2018selective, lopez2017gradient, nguyen2017variational, rebuffi2017icarl, yoon2021online}. In \cite{chaudhry2019tiny}, several selection strategies are reviewed in scenarios with tiny memory capacity, such as reservoir sampling~\cite{vitter1985random}, a first-in first-out buffer~\cite{lopez2017gradient}, k-Means, and Mean-of-Features~\cite{rebuffi2017icarl}. However, elaborate selection strategies have been shown to give little benefit over random selection for image classification problems~\cite{chaudhry2018riemannian, hayes2020remind}. More recently, there has been work on compressing raw images to feature representations to increase the number of memory examples for replay~\cite{hayes2020remind, iscen2020memory, pellegrini2019latent}. Selecting the time to replay old tasks has mostly been ignored in the literature, with an exception in~\cite{aljundi2019online}, which replays memory samples that would most interfere with a foreseen parameter update. Our replay scheduling approach differs from the above-mentioned works since we focus on learning to select which tasks to replay.
Nevertheless, our scheduling can be combined with any selection strategy and replay-based method. \paragraph{Human Continual Learning.} Humans are CL systems in the sense that they learn tasks and concepts sequentially. The timing of learning and rehearsal is essential for humans to memorize better~\cite{dempster1989spacing, dunlovsky2013improving, willis2007review}. An example technique is spaced repetition, where time intervals between rehearsals are gradually increased to improve long-term memory retention~\cite{dempster1989spacing, ebbinghaus2013memory}; this has been shown to improve memory retention better than uniformly spaced rehearsal times~\cite{hawley2008comparison, landauer1978optimum}. Several works in CL with neural networks are inspired by human learning techniques, including spaced repetition~\cite{amiri2017repeat, feng2019spaced, smolen2016right}, sleep mechanisms~\cite{ball2020study, mallya2018packnet, schwarz2018progress}, and memory reactivation~\cite{hayes2020remind, van2020brain}. Replay scheduling is also inspired by spaced repetition, in that we learn schedules of which tasks to replay at different times. \paragraph{Generalization in Reinforcement Learning.} Generalization is an active research topic in RL~\cite{kirk2021survey}, as RL agents tend to overfit to their training environments~\cite{henderson2018deep, zhang2018dissection, zhao2019investigating, zhang2018study}. The goal is often to transfer learned policies to environments with new tasks~\cite{finn2017model, kessler2021same, higgins2017darla} and action spaces~\cite{chandak2019learning, jain2020generalization, chandak2020lifelong}. Some approaches aim to improve generalization capabilities by generating more diverse training data~\cite{cobbe2019quantifying, tobin2017domain, zhang2018dissection}, using network regularization or inductive biases~\cite{farebrother2018generalization, igl2019generalization, zambaldi2018relational}, or learning dynamics models~\cite{ball2021augmented, nagabandi2018learning}. In this paper, we use RL to learn policies that select which tasks a CL network should replay. The goal is to learn policies that can be applied in new CL environments for replay scheduling on unseen task orders and datasets without additional computational cost. \section{Additional Figures and Tables}\label{app:additional_figures} In this section, we show figures and tables that provide additional information about the experiments in Section \ref{sec:experiments}. \subsection{Memory Usage in Experiments on Replay Scheduling Efficiency} \begin{wrapfigure}{r}{0.27\textwidth} \centering \setlength{\figwidth}{0.3\textwidth} \setlength{\figheight}{.12\textheight} \vspace{-4mm} \input{figures/tiny_memory/memory_usage_barplot} \vspace{-1mm} \caption{Number of replayed samples per task for the 5-task datasets in the tiny memory setting.} \vspace{-3mm} \label{fig:tiny_memory_experiment_memory_usage} \end{wrapfigure} We visualize the memory usage in the experiment on the efficiency of replay scheduling in Section \ref{sec:results_on_replay_scheduling_with_mcts}. For the 5-task datasets, the replay memory size for MCTS is set to $M=2$, such that only 2 samples can be selected for replay at all times. Similarly, we set $M=50$ for the 10- and 20-task datasets, which have 100 classes to learn in total. The baselines A-GEM~\cite{chaudhry2018efficient}, ER-Ring~\cite{chaudhry2019tiny}, and Uniform use an incremental memory in order to replay 1 sample/class at all tasks.
We visualize the memory usage for our method and the baselines for the 5-task datasets in Figure \ref{fig:tiny_memory_experiment_memory_usage}. Here, our method reaches its memory capacity already at task 2, while the baselines must keep incrementing their memory size. Figure \ref{fig:memory_usage_10_and_20task_datasets} shows the memory usage for Permuted MNIST and the 20-task datasets Split CIFAR-100 and Split miniImagenet. The memory capacity for our method is reached after learning task 6 and task 11 on Permuted MNIST and the 20-task datasets respectively, while the baselines continue incrementing their replay memory size. \begin{figure}[ht] \centering \setlength{\figwidth}{0.78\textwidth} \setlength{\figheight}{.22\textheight} \input{appendix/figures/memory_usage/groupplot} \caption{Number of replayed samples per task for 10-task Permuted MNIST (top) and the 20-task datasets in the experiment in Section \ref{sec:results_on_replay_scheduling_with_mcts}. The fixed memory size $M=50$ for our method is reached after learning task 6 and task 11 on Permuted MNIST and the 20-task datasets respectively, while the baselines continue incrementing their number of replay samples per task.} \label{fig:memory_usage_10_and_20task_datasets} \end{figure} \subsection{Task Splits in Test Environments in Policy Generalization Experiments} Here, we provide the task splits of the test environments used in the policy generalization experiments in Section \ref{sec:results_on_policy_generalization}. We evaluated all methods using 10 test environments in all experiments. The test environments in the New Task Order experiments were generated with seeds 10-19. We show the task splits for the Split MNIST, Split FashionMNIST, and Split CIFAR-10 environments in Tables \ref{tab:task_splits_mnist_new_task_orders_experiment}, \ref{tab:task_splits_fashionmnist_new_task_orders_experiment}, and \ref{tab:task_splits_cifar10_new_task_order_dataset}, respectively. The test environments in the New Dataset experiments were generated with seeds 0-9. We show the task splits for the Split notMNIST and Split FashionMNIST environments in Tables \ref{tab:task_splits_notmnist_new_dataset_experiment} and \ref{tab:task_splits_fashionmnist_new_dataset_experiment}, respectively. \begin{table}[ht] \centering \small \caption{Task splits with their corresponding seed for test environments of Split MNIST datasets in the {\bf New Task Orders} experiments in Section \ref{sec:results_on_policy_generalization}. } \begin{tabular}{l c c c c c} \toprule {\bf Seed} & {\bf Task 1} & {\bf Task 2} & {\bf Task 3} & {\bf Task 4} & {\bf Task 5} \\ \midrule 10 & 8, 2 & 5, 6 & 3, 1 & 0, 7 & 4, 9 \\ \midrule 11 & 7, 8 & 2, 6 & 4, 5 & 1, 3 & 0, 9 \\ \midrule 12 & 5, 8 & 7, 0 & 4, 9 & 3, 2 & 1, 6 \\ \midrule 13 & 3, 5 & 6, 1 & 4, 7 & 8, 9 & 0, 2 \\ \midrule 14 & 3, 9 & 0, 5 & 4, 2 & 1, 7 & 6, 8 \\ \midrule 15 & 2, 6 & 1, 3 & 7, 0 & 9, 4 & 5, 8 \\ \midrule 16 & 6, 2 & 0, 7 & 8, 4 & 3, 1 & 5, 9 \\ \midrule 17 & 7, 2 & 5, 3 & 4, 0 & 9, 8 & 6, 1 \\ \midrule 18 & 7, 9 & 0, 4 & 2, 1 & 6, 5 & 8, 3 \\ \midrule 19 & 1, 7 & 9, 6 & 8, 4 & 3, 0 & 2, 5 \\ \bottomrule \end{tabular} \label{tab:task_splits_mnist_new_task_orders_experiment} \end{table} \begin{table}[ht] \centering \small \caption{Task splits with their corresponding seed for test environments of Split FashionMNIST datasets in the {\bf New Task Orders} experiments in Section \ref{sec:results_on_policy_generalization}.
} \resizebox{\textwidth}{!}{ \begin{tabular}{l c c c c c} \toprule % {\bf Seed} & {\bf Task 1} & {\bf Task 2} & {\bf Task 3} & {\bf Task 4} & {\bf Task 5} \\ \midrule 10 & Bag, Pullover & Sandal, Shirt & Dress, Trouser & T-shirt/top, Sneaker & Coat, Ankle boot \\ \midrule 11 & Sneaker, Bag & Pullover, Shirt & Coat, Sandal & Trouser, Dress & T-shirt/top, Ankle boot \\ \midrule 12 & Sandal, Bag & Sneaker, T-shirt/top & Coat, Ankle boot & Dress, Pullover & Trouser, Shirt \\ \midrule 13 & Dress, Sandal & Shirt, Trouser & Coat, Sneaker & Bag, Ankle boot & T-shirt/top, Pullover \\ \midrule 14 & Dress, Ankle boot & T-shirt/top, Sandal & Coat, Pullover & Trouser, Sneaker & Shirt, Bag \\ \midrule 15 & Pullover, Shirt & Trouser, Dress & Sneaker, T-shirt/top & Ankle boot, Coat & Sandal, Bag \\ \midrule 16 & Shirt, Pullover & T-shirt/top, Sneaker & Bag, Coat & Dress, Trouser & Sandal, Ankle boot \\ \midrule 17 & Sneaker, Pullover & Sandal, Dress & Coat, T-shirt/top & Ankle boot, Bag & Shirt, Trouser \\ \midrule 18 & Sneaker, Ankle boot & T-shirt/top, Coat & Pullover, Trouser & Shirt, Sandal & Bag, Dress \\ \midrule 19 & Trouser, Sneaker & Ankle boot, Shirt & Bag, Coat & Dress, T-shirt/top & Pullover, Sandal \\ \bottomrule % \end{tabular} } \label{tab:task_splits_fashionmnist_new_task_orders_experiment} \end{table} \begin{table}[ht] \centering \caption{Task splits with their corresponding seed for test environments of Split CIFAR-10 datasets in the {\bf New Task Orders} experiments in Section \ref{sec:results_on_policy_generalization}. } \resizebox{1.0\textwidth}{!}{ \begin{tabular}{l c c c c c} \toprule % {\bf Seed} & {\bf Task 1} & {\bf Task 2} & {\bf Task 3} & {\bf Task 4} & {\bf Task 5} \\ \midrule 10 & Ship, Bird & Dog, Frog & Cat, Automobile & Airplane, Horse & Deer, Truck \\ \midrule 11 & Horse, Ship & Bird, Frog & Deer, Dog & Automobile, Cat & Airplane, Truck \\ \midrule 12 & Dog, Ship & Horse, Airplane & Deer, Truck & Cat, Bird & Automobile, Frog \\ \midrule 13 & Cat, Dog & Frog, Automobile & Deer, Horse & Ship, Truck & Airplane, Bird \\ \midrule 14 & Cat, Truck & Airplane, Dog & Deer, Bird & Automobile, Horse & Frog, Ship \\ \midrule 15 & Bird, Frog & Automobile, Cat & Horse, Airplane & Truck, Deer & Dog, Ship \\ \midrule 16 & Frog, Bird & Airplane, Horse & Ship, Deer & Cat, Automobile & Dog, Truck \\ \midrule 17 & Horse, Bird & Dog, Cat & Deer, Airplane & Truck, Ship & Frog, Automobile \\ \midrule 18 & Horse, Truck & Airplane, Deer & Bird, Automobile & Frog, Dog & Ship, Cat \\ \midrule 19 & Automobile, Horse & Truck, Frog & Ship, Deer & Cat, Airplane & Bird, Dog \\ \bottomrule % \end{tabular} } \label{tab:task_splits_cifar10_new_task_order_dataset} \end{table} \begin{table}[ht] \centering \small \caption{Task splits with their corresponding seed for test environments of Split notMNIST datasets in the {\bf New Dataset} experiments in Section \ref{sec:results_on_policy_generalization}. 
} \begin{tabular}{l c c c c c} \toprule % {\bf Seed} & {\bf Task 1} & {\bf Task 2} & {\bf Task 3} & {\bf Task 4} & {\bf Task 5} \\ \midrule 0 & A, B & C, D & E, F & G, H & I, J \\ \midrule 1 & C, J & G, E & A, D & B, H & I, F \\ \midrule 2 & E, B & F, A & H, C & D, G & J, I \\ \midrule 3 & F, E & B, C & J, G & H, A & D, I \\ \midrule 4 & D, I & E, J & C, G & A, B & F, H \\ \midrule 5 & J, F & C, E & H, B & A, I & G, D \\ \midrule 6 & I, B & H, A & G, F & C, E & D, J \\ \midrule 7 & I, F & A, C & B, J & H, D & G, E \\ \midrule 8 & I, G & J, A & C, F & H, B & E, D \\ \midrule 9 & I, E & H, C & B, J & D, A & G, F \\ \bottomrule % \end{tabular} \label{tab:task_splits_notmnist_new_dataset_experiment} \end{table} \begin{table}[ht] \centering \caption{Task splits with their corresponding seed for test environments of Split FashionMNIST datasets in the {\bf New Dataset} experiments in Section \ref{sec:results_on_policy_generalization}. } \resizebox{1.0\textwidth}{!}{ \begin{tabular}{l c c c c c} \toprule % {\bf Seed} & {\bf Task 1} & {\bf Task 2} & {\bf Task 3} & {\bf Task 4} & {\bf Task 5} \\ \midrule 0 & T-shirt/top, Trouser & Pullover, Dress & Coat, Sandal & Shirt, Sneaker & Bag, Ankle boot \\ \midrule 1 & Pullover, Ankle boot & Shirt, Coat & T-shirt/top, Dress & Trouser, Sneaker & Bag, Sandal \\ \midrule 2 & Coat, Trouser & Sandal, T-shirt/top & Sneaker, Pullover & Dress, Shirt & Ankle boot, Bag \\ \midrule 3 & Sandal, Coat & Trouser, Pullover & Ankle boot, Shirt & Sneaker, T-shirt/top & Dress, Bag \\ \midrule 4 & Dress, Bag & Coat, Ankle boot & Pullover, Shirt & T-shirt/top, Trouser & Sandal, Sneaker\\ \midrule 5 & Ankle boot, Sandal & Pullover, Coat & Sneaker, Trouser & T-shirt/top, Bag & Shirt, Dress \\ \midrule 6 & Bag, Trouser & Sneaker, T-shirt/top & Shirt, Sandal & Pullover, Coat & Dress, Ankle boot \\ \midrule 7 & Bag, Sandal & T-shirt/top, Pullover & Trouser, Ankle boot & Sneaker, Dress & Shirt, Coat \\ \midrule 8 & Bag, Shirt & Ankle boot, T-shirt/top & Pullover, Sandal & Sneaker, Trouser & Coat, Dress \\ \midrule 9 & Bag, Coat & Sneaker, Pullover & Trouser, Ankle boot & Dress, T-shirt/top & Shirt, Sandal \\ \bottomrule % \end{tabular} } \label{tab:task_splits_fashionmnist_new_dataset_experiment} \end{table} \section*{Appendix} % This supplementary material is structured as follows: \begin{itemize} \item Appendix \ref{app:potential_societal_impacts}: Potential negative societal impacts. \item Appendix \ref{app:experimental_settings}: Full details of the experimental settings. \item Appendix \ref{app:additional_experimental_results}: Additional experimental results. \item Appendix \ref{app:additional_methodlogy}: Additional methodology of the MCTS method and our RL-based framework for policy learning. \item Appendix \ref{app:related_work}: Extended related work. \item Appendix \ref{app:additional_figures}: Includes additional figures and tables. \end{itemize} \input{appendix/potential_societal_impacts} \input{appendix/experimental_settings} \input{appendix/additional_experimental_results} \clearpage \input{appendix/additional_methodology} \clearpage \input{appendix/additional_related_work} \clearpage \input{appendix/additional_figures} \section{Experimental Settings}\label{app:experimental_settings} In this section, we describe the full details of the experimental settings used in this paper. 
The experimental settings are divided into two parts: we first describe the settings for the single CL environment experiments with MCTS in Section \ref{app:experimental_settings_single_cl_environments}, and then describe the settings for the RL-based framework with DQN~\cite{mnih2013playing, mnih2015human} and A2C~\cite{mnih2016asynchronous} in Section \ref{app:experimental_settings_rl_framework}. \subsection{Experimental Settings in Single CL Environments}\label{app:experimental_settings_single_cl_environments} Here, we provide details on the experimental settings for the experiments with MCTS in single CL environments. \vspace{-2mm} \paragraph{Datasets.} We conduct experiments on six datasets commonly used in the CL literature. Split MNIST~\cite{zenke2017continual} is a variant of the MNIST~\cite{lecun1998gradient} dataset where the classes have been divided into 5 tasks arriving in the order 0/1, 2/3, 4/5, 6/7, and 8/9. Split FashionMNIST~\cite{xiao2017fashion} is of similar size to MNIST and consists of grayscale images of clothing items, where the classes have been divided into the 5 tasks T-shirt/Trouser, Pullover/Dress, Coat/Sandals, Shirt/Sneaker, and Bag/Ankle boots. Similar to MNIST, Split notMNIST~\cite{bulatov2011notMNIST} consists of 10 classes of the letters A-J in various fonts, where the classes are divided into the 5 tasks A/B, C/D, E/F, G/H, and I/J. We use the training/test split provided by \cite{ebrahimi2020adversarial} for Split notMNIST. The Permuted MNIST~\cite{goodfellow2013empirical} dataset is constructed by applying a unique random permutation to the image pixels of the original MNIST dataset to create each task, except for the first task, which is the original MNIST dataset itself. We reduce the original MNIST dataset to 10k samples and create 9 unique random permutations to get a 10-task version of Permuted MNIST. In Split CIFAR-100~\cite{krizhevsky2009learning}, the 100 classes are divided into 20 tasks with 5 classes for each task~\cite{lopez2017gradient, rebuffi2017icarl}. Similarly, Split miniImagenet~\cite{vinyals2016matching} consists of 100 classes randomly chosen from the original Imagenet dataset, where the 100 classes are divided into 20 tasks with 5 classes per task. \vspace{-2mm} \paragraph{CL Network Architectures.} We use a 2-layer MLP with 256 hidden units and ReLU activation for Split MNIST, Split FashionMNIST, Split notMNIST, and Permuted MNIST. We use a multi-head output layer for each dataset except Permuted MNIST, where the network uses a single-head output layer. For Split CIFAR-100, we use a multi-head CNN architecture built according to the CNN in \cite{adel2019continual, schwarz2018progress, vinyals2016matching}, which consists of four 3x3 convolutional blocks, i.e., a convolutional layer followed by batch normalization~\cite{ioffe2015batch}, with 64 filters, ReLU activations, and 2x2 max-pooling. For Split miniImagenet, we use the reduced ResNet-18 from \cite{lopez2017gradient} with a multi-head output layer.
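For concreteness, below is a minimal PyTorch sketch of this multi-head ConvNet, under our reading of the description above (the $32 \times 32$ input resolution, the padding, and the resulting feature-map size are assumptions; the released code may differ in these details):
\begin{verbatim}
import torch.nn as nn

class MultiHeadConvNet(nn.Module):
    """Four 3x3 conv blocks (conv + batch norm + ReLU + 2x2 max-pool)
    with 64 filters, followed by one linear output head per task."""
    def __init__(self, in_channels=3, n_tasks=20, classes_per_task=5):
        super().__init__()
        blocks, c = [], in_channels
        for _ in range(4):
            blocks += [nn.Conv2d(c, 64, kernel_size=3, padding=1),
                       nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(2)]
            c = 64
        self.features = nn.Sequential(*blocks)
        # Assuming 32x32 inputs, four 2x2 poolings leave a 2x2 feature map.
        self.heads = nn.ModuleList(
            [nn.Linear(64 * 2 * 2, classes_per_task) for _ in range(n_tasks)])

    def forward(self, x, task_id):
        h = self.features(x).flatten(1)
        return self.heads[task_id](h)  # multi-head output: one head per task
\end{verbatim}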
\vspace{-2mm} \paragraph{CL Hyperparameters.} We train all networks with the Adam optimizer~\cite{kingma2014adam} with learning rate $\eta = 0.001$ and hyperparameters $\beta_1 = 0.9$ and $\beta_2 = 0.999$. Note that the learning rate for Adam is not reset before training on a new task. Next, we give details on the number of training epochs and batch sizes specific to each dataset: \begin{itemize}[topsep=1pt] \item Split MNIST: 10 epochs/task, batch size 128. \item Split FashionMNIST: 30 epochs/task, batch size 128. \item Split notMNIST: 50 epochs/task, batch size 128. \item Permuted MNIST: 20 epochs/task, batch size 128. \item Split CIFAR-100: 25 epochs/task, batch size 256. \item Split miniImagenet: 1 epoch/task (task 1 trained for 5 epochs as warm-up), batch size 32. \end{itemize} \vspace{-2mm} \paragraph{Monte Carlo Tree Search.} We run RS-MCTS for 100 iterations in all experiments. The reported results on the held-out test sets are obtained with the replay schedule that gave the highest reward on the validation sets. The exploration constant for UCT in Equation \ref{eq:uct} is set to $C=0.1$ in all experiments~\cite{chaudhry2018feature}. \vspace{-2mm} \paragraph{Computational Cost.} All experiments were performed on one NVIDIA GeForce RTX 2080 Ti on an internal GPU cluster. The wall-clock time for ETS on Split MNIST was around 1.5 minutes, while RS-MCTS and BFS take around 40 seconds per iteration on average, where BFS runs 1050 iterations in total for Split MNIST. \vspace{-2mm} \paragraph{Implementations.} We adapted the implementation released by \cite{borsos2020coresets} for the memory selection strategies Uniform sampling, $k$-means clustering, $k$-center clustering~\cite{nguyen2017variational}, and Mean-of-Features~\cite{rebuffi2017icarl}. For HAL~\cite{chaudhry2021using}, MER~\cite{riemer2018learning}, DER~\cite{buzzega2020dark}, and DER++, we follow the implementations released by \cite{buzzega2020dark} when applying these methods to our replay scheduling approach. Furthermore, we follow the implementations released by \cite{chaudhry2019tiny} and \cite{mirzadeh2021cl-gym} for A-GEM~\cite{chaudhry2018efficient} and ER-Ring~\cite{chaudhry2019tiny}. For MCTS, we adapted the implementation from {\footnotesize \url{https://github.com/int8/monte-carlo-tree-search}} to search for replay schedules. \vspace{-2mm} \paragraph{Experimental Settings for Single Task Replay Memory Experiment.} We motivated the need for replay scheduling in CL with Figure \ref{fig:single_task_replay_with_M10} in Section \ref{sec:introduction}. This simple experiment was performed on Split MNIST, where the replay memory only contains samples from the first task, i.e., the classes 0/1. Furthermore, the memory can only be replayed at one point in time, and we show the performance on each task when the memory is replayed at different time steps. We set the memory size to $M=10$ samples, such that the memory holds 5 samples from each of the two classes. We use the same network architecture and hyperparameters as described above for Split MNIST. The ACC metric above each subfigure corresponds to training the network with the single-task memory replayed at the indicated task. We observe that choosing different time points to replay the same memory leads to noticeably different results in the final performance; in this example, the best final performance is achieved when the memory is used when learning task 5. Therefore, we argue that finding the proper schedule of which tasks to replay at what time in the fixed-memory situation can be critical for CL. \subsection{Experimental Settings for RL-Based Framework}\label{app:experimental_settings_rl_framework} Here, we provide details on the experimental settings for the experiments with our RL-based framework, where we use multiple CL environments for learning replay scheduling policies that generalize.
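As a preview of the environment construction detailed in the ``Generating CL Environments'' paragraph below, here is a minimal sketch of how a seeded task split could be produced (the exact seeding and shuffling in our code base may differ; this is only meant to illustrate the construction):
\begin{verbatim}
import numpy as np

def make_task_split(seed, n_classes=10, classes_per_task=2):
    """Permute the class order with a pre-set seed and split the classes
    into tasks; seed 0 keeps the original task order."""
    classes = np.arange(n_classes)
    if seed != 0:
        classes = np.random.RandomState(seed).permutation(classes)
    return classes.reshape(-1, classes_per_task).tolist()

print(make_task_split(0))  # [[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]
\end{verbatim}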
\vspace{-2mm} \paragraph{Datasets.} We conduct experiments on CL environments built on four common CL benchmark datasets, namely Split MNIST~\cite{zenke2017continual}, Split FashionMNIST~\cite{xiao2017fashion}, Split notMNIST~\cite{bulatov2011notMNIST}, and Split CIFAR-10~\cite{krizhevsky2009learning}. All datasets consist of 5 tasks with 2 classes/task. \vspace{-2mm} \paragraph{CL Network Architectures.} We use a 2-layer MLP with 256 hidden units and ReLU activation for Split MNIST, Split FashionMNIST, and Split notMNIST. For Split CIFAR-10, we use the same ConvNet architecture as used for Split CIFAR-100 in Appendix \ref{app:experimental_settings_single_cl_environments}. We use a multi-head output layer for each dataset and assume task labels are available at test time for selecting the correct output head related to the task. \vspace{-2mm} \paragraph{CL Hyperparameters.} We train all networks with the Adam optimizer~\cite{kingma2014adam} with learning rate $\eta = 0.001$ and hyperparameters $\beta_1 = 0.9$ and $\beta_2 = 0.999$. Note that the learning rate for Adam is not reset before training on a new task. Next, we give details on the number of training epochs and batch sizes specific to each dataset: \begin{itemize}[topsep=1pt] \item Split MNIST: 10 epochs/task, batch size 128. \item Split FashionMNIST: 10 epochs/task, batch size 128. \item Split notMNIST: 20 epochs/task, batch size 128. \item Split CIFAR-10: 20 epochs/task, batch size 256. \end{itemize} \vspace{-2mm} \paragraph{Generating CL Environments.} We generate multiple CL environments with pre-set random seeds for initializing the network parameters $\vphi$ and shuffling the task order. The pre-set random seeds are in the range $0-49$, such that we have 50 environments for each dataset. We shuffle the task order by permuting the class order and then splitting the classes into 5 pairs (tasks) with 2 classes/pair. For environments with seed $0$, we keep the original task order of the dataset. Taking a step at task $t$ in the CL environments involves training the CL network on the $t$-th dataset with a replay memory ${\mathcal{M}}_t$ from the discrete action space described in Section \ref{sec:replay_scheduling_in_continual_learning}. Therefore, to speed up the experiments with the RL algorithms, we run a breadth-first search (BFS) through the discrete action space and save the classification results for re-use during policy learning. Note that the action space has 1050 possible paths of replay schedules for the datasets with $T=5$ tasks, which makes the environment generation time-consuming. Hence, we only generate environments where the replay memory size $M=10$ has been used, and leave the analysis of different memory sizes as future work. \vspace{-2mm} \paragraph{DQN and A2C Architectures.} The input layer has size $T-1$, where each unit takes one task performance as input, since the states are represented by the validation accuracies $s_t = [A_{t, 1}^{(val)}, ..., A_{t, t}^{(val)}, 0, ..., 0]$. The current task can therefore be determined by the number of non-zero state inputs. The output layer has 35 units representing the possible actions at $T=5$ with the discrete action space we have constructed in Section \ref{sec:replay_scheduling_in_continual_learning}. We use action masking on the output units to prevent the network from selecting invalid actions when constructing the replay memory at the current task. The DQN is a 2-layer MLP with 512 hidden units and ReLU activations. For A2C, we use separate networks for parameterizing the policy and the value function, where both networks are 2-layer MLPs with 64 hidden units and Tanh activations.
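A minimal PyTorch sketch of the masked Q-network is given below, where we read ``2-layer MLP'' as two linear layers and assume the boolean validity mask is provided by the environment; both are assumptions on our part rather than a verbatim excerpt of the released code:
\begin{verbatim}
import torch
import torch.nn as nn

class MaskedDQN(nn.Module):
    """Q-network over replay actions with action masking on the outputs."""
    def __init__(self, n_tasks=5, n_actions=35, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_tasks - 1, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions))

    def forward(self, state, valid_mask):
        # state: validation accuracies padded with zeros, shape (B, T-1).
        # valid_mask: True for actions that are valid at the current task.
        q = self.net(state)
        return q.masked_fill(~valid_mask, float("-inf"))
\end{verbatim}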
\vspace{-2mm} \paragraph{DQN and A2C Hyperparameters.} We provide the hyperparameters for both DQN and A2C in Tables \ref{tab:dqn_hyperparameters_new_task_orders}-\ref{tab:a2c_hyperparameters_new_dataset}. Tables \ref{tab:dqn_hyperparameters_new_task_orders} and \ref{tab:a2c_hyperparameters_new_task_orders} include the hyperparameters for the New Task Orders experiment for DQN and A2C respectively, while Tables \ref{tab:dqn_hyperparameters_new_dataset} and \ref{tab:a2c_hyperparameters_new_dataset} include the hyperparameters for the New Dataset experiment for DQN and A2C respectively. Regarding the training environments in Tables \ref{tab:dqn_hyperparameters_new_dataset} and \ref{tab:a2c_hyperparameters_new_dataset}, we use two different datasets in the training environments to increase the diversity. When Split notMNIST is used for testing, half of the training environments use Split MNIST and the other half use Split FashionMNIST. For example, in Table \ref{tab:a2c_hyperparameters_new_dataset}, A2C uses 10 training environments, which means that there are 5 Split MNIST environments and 5 Split FashionMNIST environments. Similarly, when the test environments use Split FashionMNIST, half of the training environments use Split MNIST and the other half use Split notMNIST. \vspace{-2mm} \paragraph{Computational Cost.} All experiments were performed on one NVIDIA GeForce RTX 2080 Ti on an internal GPU cluster. Generating a CL environment for one seed with Split MNIST took around 9.5 hours averaged over 10 runs of BFS. Similarly for Split CIFAR-10, generating one CL environment took on average 16.1 hours. \vspace{-2mm} \paragraph{Implementations.} The implementation for DQN was adapted from OpenAI baselines~\cite{dhariwal2017baselines} and the PyTorch~\cite{paszke2019pytorch} tutorial on DQN {\small \url{https://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html}}. For A2C, we followed the implementations released by Kostrikov~\cite{kostrikov2018pytorchrl} and Igl et al.~\cite{igl2020transient}. \clearpage \begin{table}[t] \small \centering \caption{DQN hyperparameters for the experiments on {\bf New Task Orders} in Section \ref{sec:results_on_policy_generalization}. } \label{tab:dqn_hyperparameters_new_task_orders} \begin{tabular}{l c c c} \toprule {\bf Hyperparameters} & {\bf Split MNIST} & {\bf Split FashionMNIST} & {\bf Split CIFAR-10} \\ \midrule Training Environments & 30 & 20 & 10 \\ Learning Rate & 0.0001 & 0.0003 & 0.0003 \\ Optimizer & Adam & Adam & Adam \\ Buffer Size & 10k & 10k & 10k \\ Target Update per step & 500 & 500 & 500 \\ Batch Size & 32 & 32 & 32 \\ Discount Factor $\gamma$ & 1.0 & 1.0 & 1.0 \\ Exploration Start $\epsilon_{start}$ & 1.0 & 1.0 & 1.0 \\ Exploration Final $\epsilon_{final}$ & 0.02 & 0.02 & 0.02 \\ Exploration Annealing (episodes) & 2.5k & 2.5k & 2.5k \\ Training Episodes & 10k & 10k & 10k \\ \bottomrule \end{tabular} \end{table} \begin{table}[t] \small \centering \caption{A2C hyperparameters for the experiments on {\bf New Task Orders} in Section \ref{sec:results_on_policy_generalization}.
} \label{tab:a2c_hyperparameters_new_task_orders} \begin{tabular}{l c c c} \toprule {\bf Hyperparameters} & {\bf Split MNIST} & {\bf Split FashionMNIST} & {\bf Split CIFAR-10} \\ \midrule Training Environments & 10 & 10 & 10 \\ Learning Rate & 0.0001 & 0.0003 & 0.00003 \\ Optimizer & RMSProp & RMSProp & RMSProp \\ Gradient Clipping & 0.5 & 0.5 & 0.5 \\ GAE parameter $\lambda$ & 0.95 & 0.95 & 0.95 \\ VF coefficient & 0.5 & 0.5 & 0.5 \\ Entropy coefficient & 0.01 & 0.01 & 0.01 \\ Number of steps $n_{steps}$ & 5 & 5 & 5 \\ Discount Factor $\gamma$ & 1.0 & 1.0 & 1.0 \\ Training Episodes & 100k & 100k & 100k \\ \bottomrule \end{tabular} \end{table} \begin{table}[t] \small \centering \caption{DQN hyperparameters for the experiments on {\bf New Dataset} in Section \ref{sec:results_on_policy_generalization}. Split notMNIST and Split FashionMNIST indicate the dataset used in the test environments. } \label{tab:dqn_hyperparameters_new_dataset} \begin{tabular}{l c c} \toprule {\bf Hyperparameters} & {\bf Split notMNIST} & {\bf Split FashionMNIST} \\ \midrule Training Environments & 30 & 30 \\ Learning Rate & 0.0001 & 0.0001 \\ Optimizer & Adam & Adam \\ Buffer Size & 10k & 10k \\ Target Update per step & 500 & 500 \\ Batch Size & 32 & 32 \\ Discount Factor $\gamma$ & 1.0 & 1.0 \\ Exploration Start $\epsilon_{start}$ & 1.0 & 1.0 \\ Exploration Final $\epsilon_{final}$ & 0.02 & 0.02 \\ Exploration Annealing (episodes) & 2.5k & 2.5k \\ Training Episodes & 10k & 10k \\ \bottomrule \end{tabular} \end{table} \begin{table}[t] \small \centering \caption{A2C hyperparameters for the experiments on {\bf New Dataset} in Section \ref{sec:results_on_policy_generalization}. Split notMNIST and Split FashionMNIST indicate the dataset used in the test environments. } \label{tab:a2c_hyperparameters_new_dataset} \begin{tabular}{l c c} \toprule {\bf Hyperparameters} & {\bf Split notMNIST} & {\bf Split FashionMNIST} \\ \midrule Training Environments & 10 & 10 \\ Learning Rate & 0.0001 & 0.0003 \\ Optimizer & RMSProp & RMSProp \\ Gradient Clipping & 0.5 & 0.5 \\ GAE parameter $\lambda$ & 0.95 & 0.95 \\ VF coefficient & 0.5 & 0.5 \\ Entropy coefficient & 0.01 & 0.01 \\ Number of steps $n_{steps}$ & 5 & 5 \\ Discount Factor $\gamma$ & 1.0 & 1.0 \\ Training Episodes & 100k & 100k \\ \bottomrule \end{tabular} \end{table} \clearpage \subsection{Heuristic Scheduling Baselines}\label{app:heuristic_scheduling_baselines} We implemented three heuristic scheduling baselines to compare against our proposed methods. These heuristics are based on the intuition of re-learning tasks when they have been forgotten. We keep a validation set for each task and determine whether a task should be replayed by comparing its validation accuracy against a hand-tuned threshold. If the validation accuracy is below the threshold, then the corresponding task is replayed. Let $A_{t, i}^{(val)}$ be the validation accuracy of task $i$ evaluated at time step $t$. The threshold is set differently in each of the baselines: \begin{itemize}[leftmargin=*, topsep=0pt] \item {\bf Heuristic Global Drop (Heur-GD).} Heuristic policy that replays tasks with validation accuracy below a certain threshold proportional to the best achieved validation accuracy on the task. The best achieved validation accuracy for task $i$ is given by $A_{t, i}^{(best)} = \max\{A_{1, i}^{(val)}, \dots, A_{t, i}^{(val)}\}$.
Task $i$ is replayed if $A_{t, i}^{(val)} < \tau A_{t, i}^{(best)}$, where $\tau \in [0, 1]$ is a ratio representing the degree to which the validation accuracy of a task is allowed to drop. Note that Heur-GD (denoted as Heuristic) is the only one used in the experiments with MCTS in single CL environments in Section \ref{sec:results_on_replay_scheduling_with_mcts}. \item {\bf Heuristic Local Drop (Heur-LD).} Heuristic policy that replays tasks with validation accuracy below a threshold proportional to the previously achieved validation accuracy on the task. Task $i$ is replayed if $A_{t, i}^{(val)} < \tau A_{t-1, i}^{(val)}$, where $\tau$ again represents the degree to which the validation accuracy of a task is allowed to drop. \item {\bf Heuristic Accuracy Threshold (Heur-AT).} Heuristic policy that replays tasks with validation accuracy below a fixed threshold. Task $i$ is replayed if $A_{t, i}^{(val)} < \tau$, where $\tau \in [0, 1]$ represents the least tolerated accuracy before we need to replay the task. \end{itemize} The replay memory is filled with $M/k$ samples from each selected task, where $k$ is the number of tasks that need to be replayed according to their decrease in validation accuracy. We skip replaying any tasks if no tasks are selected for replay, i.e., $k=0$.
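To make the three decision rules concrete, below is a minimal NumPy sketch (a hypothetical helper, not taken from our code release), where \texttt{A\_val[s, i]} stores the validation accuracy of task $i$ evaluated at time step $s+1$:
\begin{verbatim}
import numpy as np

def tasks_to_replay(A_val, t, tau, rule="global"):
    """Return the (0-indexed) tasks to replay when learning task t+1."""
    replay = []
    for i in range(t):                        # previously learned tasks
        acc = A_val[t - 1, i]                 # current accuracy of task i
        if rule == "global":                  # Heur-GD: drop w.r.t. best
            thresh = tau * A_val[:t, i].max()
        elif rule == "local":                 # Heur-LD: drop w.r.t. previous
            thresh = tau * A_val[t - 2, i] if t > 1 else 0.0
        else:                                 # Heur-AT: fixed threshold
            thresh = tau
        if acc < thresh:
            replay.append(i)
    return replay   # the memory then holds M/len(replay) samples per task
\end{verbatim}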
\vspace{-1mm} \paragraph{Grid search for $\tau$ in Single CL Environments.} We performed a coarse-to-fine grid search for the parameter $\tau$ on each dataset to compare against the MCTS replay schedules. The best value for $\tau$ is selected according to the highest mean accuracy on the validation set averaged over 5 seeds. The validation set consists of 15\% of the training data and is the same as for MCTS. We use the same experimental settings as described in Appendix \ref{app:experimental_settings}. The memory sizes are set to $M=10$ and $M=100$ for the 5-task datasets and the 10/20-task datasets respectively, and we apply uniform sampling as the memory selection method. We provide the ranges for $\tau$ that were used on each dataset and put the best value in \textbf{bold}: \begin{itemize}[topsep=1pt] \item Split MNIST: $\tau =$ \{0.9, 0.93, 0.95, \textbf{0.96}, 0.97, 0.98, 0.99\} \item Split FashionMNIST: $\tau =$ \{0.9, 0.93, 0.95, 0.96, \textbf{0.97}, 0.98, 0.99\} \item Split notMNIST: $\tau =$ \{0.9, 0.93, 0.95, 0.96, 0.97, \textbf{0.98}, 0.99\} \item Permuted MNIST: $\tau =$ \{0.5, 0.55, 0.6, 0.65, 0.7, \textbf{0.75}, 0.8, 0.9, 0.95, 0.97, 0.99\} \item Split CIFAR-100: $\tau =$ \{0.3, 0.4, 0.45, \textbf{0.5}, 0.55, 0.6, 0.65, 0.7, 0.8, 0.9, 0.95, 0.97, 0.99\} \item Split miniImagenet: $\tau =$ \{0.5, 0.6, 0.65, 0.7, \textbf{0.75}, 0.8, 0.85, 0.9, 0.95, 0.97, 0.99\} \end{itemize} Note that we use these values for $\tau$ in all experiments with Heuristic on the corresponding datasets. The performance of this heuristic depends strongly on careful tuning of the ratio $\tau$ when the memory size or memory selection method changes, as can be seen in Figure \ref{fig:acc_over_replay_memory_size} and Table \ref{tab:results_memory_selection_methods}. Note that this heuristic corresponds to Heur-GD described above. \vspace{-1mm} \paragraph{Grid search for $\tau$ in Multiple CL Environments.} We performed a grid search for the parameter $\tau$ for the three heuristic scheduling baselines in each experiment to compare against the learned replay scheduling policies. We select the parameter based on the ACC scores achieved in the same number of training environments as used by either DQN or A2C. The search range we use is $\tau \in \{0.90, 0.95, 0.999\}$. In Table \ref{tab:grid_search_heuristics_multiple_cl_environments}, we show the selected parameter value of $\tau$ and the number of environments used for selecting the value for each method and experiment in Section \ref{sec:results_on_policy_generalization}. The same parameters are used to generate the results on the heuristics in Table \ref{tab:average_ranking_rl_experiment}. \begin{table}[t] \centering \caption{The threshold parameter $\tau$ used in the heuristic scheduling baselines Heuristic Global Drop (Heur-GD), Heuristic Local Drop (Heur-LD), and Heuristic Accuracy Threshold (Heur-AT). The search range is $\tau \in \{0.90, 0.95, 0.999\}$ for all methods, and we display the number of environments used for selecting the parameter used at test time. } \label{tab:grid_search_heuristics_multiple_cl_environments} \resizebox{\textwidth}{!}{ \begin{tabular}{l c c c c c c c c c c} \toprule & \multicolumn{6}{c}{ {\bf New Task Order}} & \multicolumn{4}{c}{ {\bf New Dataset}}\\ \cmidrule(lr){2-7} \cmidrule(lr){8-11} & \multicolumn{2}{c}{ {\bf S-MNIST}} & \multicolumn{2}{c}{ {\bf S-FashionMNIST}} & \multicolumn{2}{c}{ {\bf S-CIFAR-10}} & \multicolumn{2}{c}{ {\bf S-notMNIST}} & \multicolumn{2}{c}{ {\bf S-FashionMNIST}} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} \cmidrule(lr){8-9} \cmidrule(lr){10-11} {\bf Method} & $\tau$ & \#Envs & $\tau$ & \#Envs & $\tau$ & \#Envs & $\tau$ & \#Envs & $\tau$ & \#Envs \\ \midrule Heur-GD & 0.9 & 10 & 0.95 & 20 & 0.9 & 10 & 0.9 & 10 & 0.9 & 10 \\ Heur-LD & 0.9 & 10 & 0.999 & 20 & 0.999 & 10 & 0.95 & 10 & 0.999 & 10 \\ Heur-AT & 0.9 & 10 & 0.999 & 20 & 0.9 & 10 & 0.9 & 10 & 0.95 & 10 \\ \bottomrule \end{tabular} } \end{table} \subsection{Assessing Generalization with Ranking Method}\label{app:ranking_method} We use a ranking method based on the CL performance in every test environment for performance comparison between the methods in Section \ref{sec:results_on_policy_generalization}. We use rankings because the performances can vary greatly between environments with different task orders and datasets. To measure the CL performance in the environments, we use the average test accuracy over all tasks after learning the final task, i.e., \begin{align*} \text{ACC} = \frac{1}{T} \sum_{i=1}^{T} A_{T, i}^{(test)}, \end{align*} where $A_{t, i}^{(test)}$ is the test accuracy of task $i$ after learning task $t$. Each method is ranked in descending order based on the ACC achieved in an environment. For example, assume that we want to compare the CL performance from using learned replay scheduling policies with DQN and A2C against a Random scheduling policy in one environment. The CL performances achieved for each method are given by \vspace{2mm} \begin{align*} [\text{ACC}_{\text{Random}}, \text{ACC}_{\text{DQN}}, \text{ACC}_{\text{A2C}}] = [90\%, 99\%, 95\%]. \end{align*} We get the following ranking order between the methods based on their corresponding ACC: \vspace{2mm} \begin{align*} \texttt{ranking}([\text{ACC}_{\text{Random}}, \text{ACC}_{\text{DQN}}, \text{ACC}_{\text{A2C}}]) = [3, 1, 2], \end{align*} where DQN is ranked in 1st place, A2C in 2nd, and Random in 3rd. When there are multiple environments for evaluation, we compute the average ranking across the ranking positions in every environment for each method to compare. The average rankings for DQN and A2C are computed over both the seed for initializing the network parameters and the seed of the environment.
Similarly, the Random baseline is affected by the seed controlling the random selection of actions as well as the environment seed. In contrast, the performance of the ETS and Heuristic baselines is affected only by the seed of the environment, as these policies are fixed. We use copied values of the performance in environments for the ETS and Heuristic baselines when we need to compare across different random seeds for Random, DQN, and A2C. We show an example of such a ranking calculation for ETS, a Heuristic baseline, DQN, and A2C. Consider the following performances for one environment: \vspace{2mm} \begin{align*} \begin{bmatrix} \text{ACC}_{\text{ETS}}^{1} & \text{ACC}_{\text{Heur}}^{1} & \text{ACC}_{\text{DQN}}^{1} & \text{ACC}_{\text{A2C}}^{1} \\[3pt] \text{ACC}_{\text{ETS}}^{2} & \text{ACC}_{\text{Heur}}^{2} & \text{ACC}_{\text{DQN}}^{2} & \text{ACC}_{\text{A2C}}^{2} \end{bmatrix} = \begin{bmatrix} 90\% & 95\% & 95\% & 99\% \\[3pt] * & * & 97\% & 98\% \end{bmatrix} , \end{align*} where $*$ denotes a copy of the ACC value in the first row. The subscript on ACC denotes the method and the superscript the seed used for initializing the policy network ${\bm{\theta}}$. Therefore, we copy the values for ETS and Heur such that $\text{ACC}_{\text{DQN}}^2$ for seed 2 can be compared against ETS and Heur. Note that there is a tie between $\text{ACC}_{\text{Heur}}^{1}$ and $\text{ACC}_{\text{DQN}}^{1}$, as they both have ACC $95\%$. We handle ties by assigning tied methods the average of their ranks, such that the ranks for both seeds become \vspace{2mm} \begin{align*} & \texttt{ranking} \left( \begin{bmatrix} \text{ACC}_{\text{ETS}}^{1} & \text{ACC}_{\text{Heur}}^{1} & \text{ACC}_{\text{DQN}}^{1} & \text{ACC}_{\text{A2C}}^{1} \\[3pt] \text{ACC}_{\text{ETS}}^{2} & \text{ACC}_{\text{Heur}}^{2} & \text{ACC}_{\text{DQN}}^{2} & \text{ACC}_{\text{A2C}}^{2} \end{bmatrix}, \texttt{axis=-1, keepdim=True} \right) \\[3pt] = & \texttt{ranking} \left( \begin{bmatrix} 90\% & 95\% & 95\% & 99\% \\[3pt] 90\% & 95\% & 97\% & 98\% \end{bmatrix}, \texttt{axis=-1, keepdim=True} \right) \\[3pt] = & \begin{bmatrix} 4 & 2.5 & 2.5 & 1 \\[3pt] 4 & 3 & 2 & 1 \end{bmatrix}, \end{align*} where we inserted the copied values, such that $\text{ACC}_{\text{ETS}}^{1} = \text{ACC}_{\text{ETS}}^{2} = 90\%$ and $\text{ACC}_{\text{Heur}}^{1} = \text{ACC}_{\text{Heur}}^{2} = 95\%$. The mean ranking across the seeds thus becomes \vspace{2mm} \begin{align*} \texttt{mean} \left( \begin{bmatrix} 4 & 2.5 & 2.5 & 1 \\[3pt] 4 & 3 & 2 & 1 \end{bmatrix}, \texttt{axis=0} \right) = \begin{bmatrix} 4 & 2.75 & 2.25 & 1 \end{bmatrix} \end{align*} where A2C comes in 1st place, DQN in 2nd, Heur. in 3rd, and ETS in 4th place. We average across seeds and environments to obtain the final ranking score for each method for comparison.
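This computation is straightforward to reproduce; for instance, a minimal sketch of the worked example above using \texttt{scipy.stats.rankdata}, whose default \texttt{method='average'} assigns tied methods the average of their ranks, exactly as described:
\begin{verbatim}
import numpy as np
from scipy.stats import rankdata

# ACC (%) per seed (rows) and method (columns: ETS, Heur, DQN, A2C);
# the copied values for ETS and Heur are written out explicitly.
acc = np.array([[90.0, 95.0, 95.0, 99.0],
                [90.0, 95.0, 97.0, 98.0]])

# rankdata ranks in ascending order, so rank the negated ACCs (1 = best).
ranks = rankdata(-acc, axis=1)  # [[4., 2.5, 2.5, 1.], [4., 3., 2., 1.]]
print(ranks.mean(axis=0))       # [4.   2.75 2.25 1.  ]
\end{verbatim}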
\section{Additional Experimental Results} \label{app:additional_experimental_results} In this section, we bring more insights into the benefits of replay scheduling in Section \ref{app:replay_schedule_visualization_for_split_mnist} as well as provide metrics for catastrophic forgetting in Section \ref{app:analysis_of_catastrophic_forgetting}. \subsection{Performance Progress of MCTS}\label{app:performance_progress_of_mcts} In the first experiments, we show that the replay schedules found by MCTS yield better performance than replaying an equal amount of samples per task. The replay memory size is fixed to $M=10$ for Split MNIST, FashionMNIST, and notMNIST, and $M=100$ for Permuted MNIST, Split CIFAR-100, and Split miniImagenet. Uniform sampling is used as the memory selection method for all methods in this experiment. For the 5-task datasets, we provide the optimal replay schedule found by a breadth-first search (BFS) over all 1050 possible replay schedules in our action space (which corresponds to a tree of depth 4) as an upper bound for MCTS. As the search space grows fast with the number of tasks, BFS becomes computationally infeasible when we have 10 or more tasks. Figure \ref{fig:mcts_best_rewards} shows the progress of ACC over the iterations of MCTS for all datasets. We also show the best ACC metrics for Random, ETS, Heuristic, and BFS (where appropriate) as straight lines. Furthermore, we include the ACC achieved by training on all seen datasets jointly at every task (Joint) for the 5-task datasets. We observe that MCTS progressively outperforms Random and ETS with more iterations. Furthermore, MCTS approaches the upper limit of BFS on the 5-task datasets. For Permuted MNIST and Split CIFAR-100, the Heuristic baseline and MCTS perform on par after 50 iterations. This shows that Heuristic, with careful tuning of the validation accuracy threshold, can be a strong baseline when comparing replay scheduling methods. The top row of Table \ref{tab:results_memory_selection_methods} shows the ACC for each method in this experiment. We note that MCTS outperforms ETS significantly on most datasets and performs on par with Heuristic. \begin{figure}[t] \centering \setlength{\figwidth}{0.26\textwidth} \setlength{\figheight}{.14\textheight} \input{appendix/figures/mcts_rewards_comparison/mcts_rewards_groupplot_single_row} \caption{ Average test accuracies over tasks after learning the final task (ACC) over the MCTS simulations for all datasets, where 'S' and 'P' are used as short for 'Split' and 'Permuted'. We compare performance for MCTS (Ours) against random replay schedules (Random), Equal Task Schedule (ETS), and Heuristic Scheduling (Heuristic) baselines. For the first three datasets, we show the best ACC found from a breadth-first search (BFS) as well as the ACC achieved by training on all seen datasets jointly at every task (Joint). All results have been averaged over 5 seeds. These results show that replay scheduling can improve over ETS and outperform or perform on par with Heuristic across different datasets and network architectures. } \label{fig:mcts_best_rewards} \end{figure} \subsection{Replay Schedule Visualization for Split MNIST} \label{app:replay_schedule_visualization_for_split_mnist} In Figure \ref{fig:split_mnist_task_accuracies_and_bubble_plot}, we show the progress in test classification performance for each task when using ETS and MCTS with memory size $M=10$ on Split MNIST. For comparison, we also show the performance of a network that is fine-tuning on the current task without using replay. Both ETS and MCTS overcome catastrophic forgetting to a large degree compared to the fine-tuning network. Our method MCTS further improves the performance compared to ETS with the same memory, which indicates that learning the time to learn can be more efficient against catastrophic forgetting. In particular, Tasks 1 and 2 seem to be the most difficult tasks to remember, since they have the lowest final performance with the fine-tuning network. Both ETS and MCTS manage to retain their performance on Task 1 using replay; however, MCTS remembers Task 2 better than ETS by around 5\%.
To bring more insight into this behavior, we have visualized the task proportions of the replay examples using a bubble plot showing the corresponding replay schedule from MCTS in Figure \ref{fig:split_mnist_task_accuracies_and_bubble_plot}(right). At Tasks 3 and 4, we see that the schedule fills the memory with data from Task 2 and discards replaying Task 1. This helps the network retain knowledge about Task 2 better than ETS, at the cost of forgetting Task 3 slightly when learning Task 4. This shows that the learned policy has considered the difficulty level of different tasks. At the next task, the MCTS schedule rehearses Task 3 and reduces the replay of Task 2 when learning Task 5. This behavior is similar to spaced repetition, where increasing the time interval between rehearsals helps memory retention. We emphasize that even on datasets with few tasks, using learned replay schedules can overcome catastrophic forgetting better than standard ETS approaches. \begin{figure}[t] \centering \setlength{\figwidth}{0.25\textwidth} \setlength{\figheight}{.14\textheight} \input{appendix/figures/bubble_plots/mnist_seed3/mnist_accuracy_and_bubble_plot} \caption{ Comparison of test classification accuracies for Tasks 1-5 on Split MNIST from a network trained without replay (Fine-tuning), ETS, and MCTS. The ACC metric for each method is shown on top of each figure. We also visualize the replay schedule found by MCTS as a bubble plot to the right. The memory size is set to $M=10$ with uniform memory selection for ETS and MCTS. Results are shown for 1 seed. } \label{fig:split_mnist_task_accuracies_and_bubble_plot} \end{figure} \subsection{Analysis of Catastrophic Forgetting}\label{app:analysis_of_catastrophic_forgetting} We have compared the degree of catastrophic forgetting for our method against the baselines by measuring the backward transfer (BWT) metric from \cite{lopez2017gradient}, which is given by \begin{align} \textnormal{BWT} = \frac{1}{T-1} \sum_{i=1}^{T-1} \left( A_{T, i} - A_{i, i} \right), \end{align} where $A_{t, i}$ is the test accuracy of task $i$ after learning task $t$. Table \ref{tab:bwt_alternative_memory_selection} shows the ACC and BWT metrics for the experiments in Section \ref{sec:results_on_replay_scheduling_with_mcts}. In general, the BWT metric is consistently better when the corresponding ACC is better. We find an exception in Table \ref{tab:bwt_alternative_memory_selection} on Split CIFAR-100 and Split miniImagenet between Ours and Heuristic with the uniform selection method, where Heuristic has better BWT while its mean ACC is slightly lower than that of Ours. Table \ref{tab:bwt_efficiency_of_replay_scheduling} shows the ACC and BWT metrics for the experiments on the efficiency of replay scheduling (also in Section \ref{sec:results_on_replay_scheduling_with_mcts}), where we see a similar pattern that better ACC yields better BWT. The BWT of MCTS is on par with the other baselines except on Split CIFAR-100, where the ACC of our method was a bit lower than the best baselines.
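For reference, a minimal sketch computing both metrics from the full accuracy matrix (a hypothetical helper; \texttt{A[t, i]} holds the test accuracy of task \texttt{i} after learning task \texttt{t}, 0-indexed):
\begin{verbatim}
import numpy as np

def acc_and_bwt(A):
    """ACC and BWT from a (T, T) matrix of task accuracies."""
    T = A.shape[0]
    acc = A[T - 1].mean()   # average accuracy after the final task
    bwt = np.mean([A[T - 1, i] - A[i, i] for i in range(T - 1)])
    return acc, bwt
\end{verbatim}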
\subsection{Applying Scheduling to Recent Replay Methods} \label{app:apply_scheduling_to_recent_replay_methods} In Section \ref{sec:results_on_replay_scheduling_with_mcts}, we showed that MCTS can be applied to any replay method. We combined MCTS with four recent replay methods, namely Hindsight Anchor Learning (HAL)~\cite{chaudhry2021using}, Meta Experience Replay (MER)~\cite{riemer2018learning}, Dark Experience Replay (DER)~\cite{buzzega2020dark}, and its extension DER++. Table \ref{tab:bwt_sota_models_applied_to_rsmcts} shows the ACC and BWT for all methods combined with the scheduling from Random, ETS, Heuristic, and MCTS. We observe that MCTS can further improve the performance of each of the replay methods across the different datasets. We present the hyperparameters used for each method in Table \ref{tab:hyperparameters_applying_scheduling_to_recent_replay_methods}. The hyperparameters for each method are denoted as \begin{itemize}[topsep=1pt, leftmargin=*] \item {\bf HAL.} $\eta$: learning rate, $\lambda$: regularization, $\gamma$: mean embedding strength, $\beta$: decay rate, $k$: gradient steps on anchors \item {\bf MER.} $\gamma$: across batch meta-learning rate, $\beta$: within batch meta-learning rate \item {\bf DER.} $\alpha$: loss coefficient for memory logits \item {\bf DER++.} $\alpha$: loss coefficient for memory logits, $\beta$: loss coefficient for memory labels \end{itemize} For the experiments, we used the same architectures and hyperparameters as described in Appendix \ref{app:experimental_settings} for all datasets unless mentioned otherwise. We used the Adam optimizer with learning rate $\eta=0.001$ for MER, DER, and DER++. For HAL, we used the SGD optimizer, since using Adam made the model diverge in our experiments. \begin{table}[t] \small \centering \caption{ Hyperparameters for the replay-based methods HAL, MER, DER, and DER++ used in the experiments on applying MCTS to recent replay-based methods in Section \ref{sec:results_on_replay_scheduling_with_mcts}. } \resizebox{\textwidth}{!}{ \begin{tabular}{l c c c c c c c} \toprule & & \multicolumn{3}{c}{{\bf 5-task Datasets}} & \multicolumn{3}{c}{{\bf 10- and 20-task Datasets}} \\ \cmidrule(lr){3-5} \cmidrule(lr){6-8} {\bf Method} & {\bf Hyperparam.} & S-MNIST & S-FashionMNIST & S-notMNIST & P-MNIST & S-CIFAR-100 & S-miniImagenet \\ \midrule \multirow{5}{*}{HAL} & $\eta$ & 0.1 & 0.1 & 0.1 & 0.1 & 0.03 & 0.03 \\ & $\lambda$ & 0.1 & 0.1 & 0.1 & 0.1 & 1.0 & 0.03 \\ & $\gamma$ & 0.5 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 \\ & $\beta$ & 0.7 & 0.5 & 0.5 & 0.5 & 0.5 & 0.5 \\ & $k$ & 100 & 100 & 100 & 100 & 100 & 100 \\ \midrule \multirow{2}{*}{MER} & $\gamma$ & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 \\ & $\beta$ & 1.0 & 0.01 & 1.0 & 1.0 & 0.1 & 0.1 \\ \midrule \multirow{1}{*}{DER} & $\alpha$ & 0.2 & 0.2 & 0.1 & 1.0 & 1.0 & 0.1 \\ \midrule \multirow{2}{*}{DER++} & $\alpha$ & 0.2 & 0.2 & 0.1 & 1.0 & 1.0 & 0.1 \\ & $\beta$ & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 \\ \bottomrule \end{tabular} } \vspace{-4mm} \label{tab:hyperparameters_applying_scheduling_to_recent_replay_methods} \end{table} \input{appendix/tables/table_bwt_alternative_memory_selection} \input{appendix/tables/table_bwt_efficiency_replay_scheduling} \input{appendix/tables/table_apply_scheduling_to_recent_replay_methods} \subsection{Complementary Experimental Results for Policy Generalization} Here, we provide complementary results for the policy generalization experiments in Section \ref{sec:results_on_policy_generalization}. Figures \ref{fig:policy_rewards_mnist}-\ref{fig:policy_rewards_fashionmnist_new_dataset} show the performance progress measured in ACC in the test environments during training for DQN and A2C. We also show the ACC achieved by the scheduling baselines as straight lines, as these policies are fixed. Figures \ref{fig:policy_rewards_mnist} and \ref{fig:policy_rewards_cifar10} show the performance progress for test environments with Split MNIST and Split CIFAR-10 respectively in the New Task Orders experiment.
Figures \ref{fig:policy_rewards_notmnist_new_dataset} and \ref{fig:policy_rewards_fashionmnist_new_dataset} show the performance progress for test environments with Split notMNIST and Split FashionMNIST respectively in the New Dataset experiment. In general, we observe that DQN exhibits noisier progress of the achieved ACC in the test environments than A2C. In Tables \ref{tab:rankings_mnist_new_task_order}-\ref{tab:rankings_fashionmnist_new_dataset}, we display the ACC and rank in every test environment that were used for generating the average rankings in Table \ref{tab:average_ranking_rl_experiment} (see Section \ref{sec:results_on_policy_generalization}). Tables \ref{tab:rankings_mnist_new_task_order}, \ref{tab:rankings_fashionmnist_new_task_order}, and \ref{tab:rankings_cifar10_new_task_order} show the ACCs and ranks for the New Task Orders experiments with datasets Split MNIST, FashionMNIST, and CIFAR-10 respectively. Tables \ref{tab:rankings_notmnist_new_dataset} and \ref{tab:rankings_fashionmnist_new_dataset} show the ACCs and ranks for the New Dataset experiments with datasets Split notMNIST and FashionMNIST respectively. The numbers for the average rankings in Table \ref{tab:average_ranking_rl_experiment} are computed by averaging over the ranks in each test environment for every method separately. \clearpage \begin{figure}[t] \centering \setlength{\figwidth}{0.26\textwidth} \setlength{\figheight}{.14\textheight} \input{appendix/figures/policy_rewards_mnist/groupplot} \vspace{-3mm} \caption{Performance progress measured in ACC (\%) for all methods in the 10 test environments for {\bf Split MNIST} in the New Task Orders experiment. We plot the performance progress for A2C and DQN for 100 evaluation steps equidistantly distributed over the training episodes. The performances of the Random, ETS, and Heuristic scheduling baselines are plotted as straight lines. } \label{fig:policy_rewards_mnist} \vspace{-3mm} \end{figure} \begin{figure}[t] \centering \setlength{\figwidth}{0.26\textwidth} \setlength{\figheight}{.14\textheight} \input{appendix/figures/policy_rewards_cifar10_new_task_order/groupplot} \vspace{-3mm} \caption{Performance progress measured in ACC (\%) for all methods in the 10 test environments for {\bf Split CIFAR-10} in the New Task Orders experiment. We plot the performance progress for A2C and DQN for 100 evaluation steps equidistantly distributed over the training episodes. The performances of the Random, ETS, and Heuristic scheduling baselines are plotted as straight lines. } \label{fig:policy_rewards_cifar10} \vspace{-3mm} \end{figure} \clearpage \begin{figure}[t] \centering \setlength{\figwidth}{0.26\textwidth} \setlength{\figheight}{.14\textheight} \input{appendix/figures/policy_rewards_notmnist_new_dataset/groupplot} \vspace{-3mm} \caption{Performance progress measured in ACC (\%) for all methods in the 10 test environments for {\bf Split notMNIST} in the New Dataset experiment. We plot the performance progress for A2C and DQN for 100 evaluation steps equidistantly distributed over the training episodes. The performances of the Random, ETS, and Heuristic scheduling baselines are plotted as straight lines.
} \label{fig:policy_rewards_notmnist_new_dataset} \vspace{-3mm} \end{figure} \begin{figure}[t] \centering \setlength{\figwidth}{0.26\textwidth} \setlength{\figheight}{.14\textheight} \input{appendix/figures/policy_rewards_fashionmnist_new_dataset/groupplot} \vspace{-3mm} \caption{Performance progress measured in ACC (\%) for all methods in the 10 test environments for {\bf Split FashionMNIST} in the New Dataset experiment. We plot the performance progress for A2C and DQN for 100 evaluation steps equidistantly distributed over the training episodes. The performances of the Random, ETS, and Heuristic scheduling baselines are plotted as straight lines. } \label{fig:policy_rewards_fashionmnist_new_dataset} \vspace{-3mm} \end{figure} \clearpage \input{appendix/tables/table_rankings_mnist_new_task_order} \clearpage \input{appendix/tables/table_rankings_fashionmnist_new_task_order} \clearpage \input{appendix/tables/table_rankings_cifar10_new_task_order} \clearpage \input{appendix/tables/table_rankings_notmnist_new_dataset} \clearpage \input{appendix/tables/table_rankings_fashionmnist_new_dataset} \clearpage \section{Additional Methodology}\label{app:additional_methodlogy} In this section, we provide pseudo-code for MCTS to search for replay schedules in single CL environments in Section \ref{app:rs_mcts_algorithm}, as well as pseudo-code for the RL-based framework for learning the replay scheduling policies in Section \ref{app:rl_framework_algorithm}. \subsection{Monte Carlo Tree Search Algorithm for Replay Scheduling}\label{app:rs_mcts_algorithm} \algrenewcommand\alglinenumber[1]{\small #1:} \input{appendix/pseudocode_rs_mcts_algorithm} We provide pseudo-code in Algorithm \ref{alg:replay_scheduling_mcts} outlining the steps of our method, which uses Monte Carlo tree search (MCTS) to find the replay schedules described in the main paper (Section \ref{sec:mcts_for_replay_scheduling}). The MCTS procedure selects actions that determine the task proportions with which to fill the replay memory at every task, where the selected task proportions are stored in the replay schedule $S$. The schedule is then passed to \textsc{EvaluateReplaySchedule$(\cdot)$}, where the continual learning part executes the training with replay memories filled according to the schedule. The reward for the schedule $S$ is the average validation accuracy over all tasks after learning task $T$, i.e., ACC, which is backpropagated through the tree to update the statistics of the selected nodes. The schedule $S_{best}$ yielding the best ACC score is returned to be used for evaluation on the held-out test sets. The function $\textsc{GetReplayMemory}(\cdot)$ is the policy for retrieving the replay memory ${\mathcal{M}}$ from the historical data given the task proportion ${\bm{p}}$. The number of samples per task determined by the task proportions is rounded up or down accordingly to fill ${\mathcal{M}}$ with $M$ replay samples in total. The function $\textsc{GetTaskProportion}(\cdot)$ simply returns the task proportion related to a given node. The following steps are performed during one MCTS rollout (or iteration): \begin{enumerate}[leftmargin=*, topsep=0pt] \item {\bf Selection} involves either selecting an unvisited node randomly, or selecting the next node by evaluating the UCT score (see Equation \ref{eq:uct}; a Python sketch is given after this list) if all children have been visited already. In Algorithm \ref{alg:replay_scheduling_mcts}, $\textsc{TreePolicy}(\cdot)$ appends the task proportions ${\bm{p}}_t$ to the replay schedule $S$ at every selected node.
\item {\bf Expansion} involves expanding the search tree with one of the unvisited child nodes $v_{t+1}$ selected by uniform sampling. $\textsc{Expansion}(\cdot)$ in Algorithm \ref{alg:replay_scheduling_mcts} appends the task proportions ${\bm{p}}_t$ to the replay schedule $S$ of the expanded node. \item {\bf Simulation} involves selecting the next nodes randomly until a terminal node $v_T$ is reached. In Algorithm \ref{alg:replay_scheduling_mcts}, $\textsc{DefaultPolicy}(\cdot)$ appends the task proportions ${\bm{p}}_t$ to the replay schedule $S$ at every randomly selected node until reaching the terminal node. \item {\bf Reward} The reward for the rollout is given by the ACC over the validation sets of all tasks. In Algorithm \ref{alg:replay_scheduling_mcts}, $\textsc{EvaluateReplaySchedule}(\cdot)$ involves learning the tasks $t= 1, \dots, T$ sequentially and using the replay schedule to sample the replay memories used for mitigating catastrophic forgetting when learning a new task. The reward $r$ for the rollout is calculated after task $T$ has been learnt. \item {\bf Backpropagation} involves updating the reward function $q(\cdot)$ and number of visits $n(\cdot)$ from the expansion node up to the root node. See $\textsc{Backpropagate}(\cdot)$ in Algorithm \ref{alg:replay_scheduling_mcts}. \end{enumerate}
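For concreteness, the Selection and Backpropagation steps can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not our released implementation: the \texttt{Node} structure and the default exploration constant are assumptions made for the example.
\begin{verbatim}
import math
import random

class Node:
    """Illustrative MCTS node: q stores backpropagated rewards, n counts visits."""
    def __init__(self, task_proportion, parent=None):
        self.p = task_proportion   # task proportion associated with this node
        self.parent = parent
        self.children = []
        self.q = []                # rewards seen through this node
        self.n = 0                 # number of visits

def uct_select(node, C=0.1):
    """Selection step: random unvisited child, else the child maximizing
    UCT(v_t, v_{t+1}) = max q(v_{t+1}) + C * sqrt(2 log n(v_t) / n(v_{t+1}))."""
    unvisited = [c for c in node.children if c.n == 0]
    if unvisited:
        return random.choice(unvisited)
    return max(node.children,
               key=lambda c: max(c.q) + C * math.sqrt(2.0 * math.log(node.n) / c.n))

def backpropagate(node, reward):
    """Backpropagation step: update rewards and visit counts up to the root."""
    while node is not None:
        node.q.append(reward)
        node.n += 1
        node = node.parent
\end{verbatim}
A full rollout repeatedly calls \texttt{uct\_select} from the root until an unexpanded or terminal node is reached, mirroring the Selection step of Algorithm \ref{alg:replay_scheduling_mcts}.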
\subsection{RL Framework Algorithm}\label{app:rl_framework_algorithm} We provide pseudo-code for the RL-based framework for learning the replay scheduling policy with either DQN~\cite{mnih2013playing} or A2C~\cite{mnih2016asynchronous} in Algorithm \ref{alg:rl_framework_for_learning_replay_scheduling_policy}. The procedure collects experience from all training environments in ${\mathcal{E}}^{(train)}$ at every time step $t$. The datasets and classifiers are specific to each environment $E_i \in {\mathcal{E}}^{(train)}$. At $t=1$, we obtain the initial state $s_1^{(i)}$ by evaluating the classifier on the validation set ${\mathcal{D}}_1^{(val)}$ after training the classifier on task 1. Next, we get the replay memory for mitigating catastrophic forgetting when learning the next task $t+1$ by 1) taking action $a_t^{(i)}$ under policy $\pi_{{\bm{\theta}}}$, 2) converting action $a_t^{(i)}$ into the task proportion ${\bm{p}}_{t}$, and 3) sampling the replay memory ${\mathcal{M}}_t$ from the historical datasets given the selected proportion. We then obtain the reward $r_t$ and the next state $s_{t+1}$ by evaluating the classifier on the validation sets ${\mathcal{D}}_{1:t+1}^{(val)}$ after learning task $t+1$. The collected experience from each time step is stored in the experience buffer ${\mathcal{B}}$ for both DQN and A2C. In $\textsc{UpdatePolicy}(\cdot)$, we outline the steps for updating the policy parameters ${\bm{\theta}}$ with either DQN or A2C. \begin{algorithm}[t] \caption{RL Framework for Learning Replay Scheduling Policy} \label{alg:rl_framework_for_learning_replay_scheduling_policy} \begin{algorithmic}[1] \Require ${\mathcal{E}}^{(train)}$: Training environments, ${\bm{\theta}}$: Policy parameters, $\gamma$: Discount factor \Require $\eta$: Learning rate, $n_{episodes}$: Number of episodes, $M$: Replay memory size \Require $n_{steps}$: Number of steps for A2C \State ${\mathcal{B}} = \{\}$ \Comment{Initialize experience buffer} \For{$e = 1, \dots, n_{\text{episodes}}$} \For{$t=1, \dots, T-1$} \For{$E_i \in {\mathcal{E}}^{(train)}$} \State ${\mathcal{D}}_{1:t+1} = \textsc{GetDatasets}(E_i, t)$ \Comment{Get datasets from environment $E_i$} \State $f_{\vphi}^{(i)} = \textsc{GetClassifier}(E_i)$ \Comment{Get classifier from environment $E_i$} \If{$t==1$} \State $\textsc{Train}(f_{\vphi}^{(i)}, {\mathcal{D}}_{t}^{(train)})$ \Comment{Train classifier $f_{\vphi}^{(i)}$ on task 1} \State $A_{1:t}^{(val)} = \textsc{Eval}(f_{\vphi}^{(i)}, {\mathcal{D}}_{1:t}^{(val)})$ \Comment{Evaluate classifier $f_{\vphi}^{(i)}$ on task 1} \State $s_{t}^{(i)} = A_{1:t}^{(val)} = [A_{1, 1}^{(val)}, 0, ..., 0]$ \Comment{Get initial state} \EndIf \State $a_t^{(i)} \sim \pi_{{\bm{\theta}}}(a, s_t^{(i)})$ \Comment{Take action under policy $\pi_{{\bm{\theta}}}$} \State ${\bm{p}}_t = \textsc{GetTaskProportion}(a_t^{(i)})$ % \State ${\mathcal{M}}_t \sim \textsc{GetReplayMemory}({\mathcal{D}}_{1:t}^{(train)}, {\bm{p}}_t, M)$ % \State $\textsc{Train}(f_{\vphi}^{(i)}, {\mathcal{D}}_{t+1}^{(train)} \cup {\mathcal{M}}_t)$ \Comment{Train classifier $f_{\vphi}^{(i)}$} \State $A_{1:t+1}^{(val)} = \textsc{Eval}(f_{\vphi}^{(i)}, {\mathcal{D}}_{1:t+1}^{(val)})$ \Comment{Evaluate classifier $f_{\vphi}^{(i)}$} \State $s_{t+1}^{(i)} = A_{1:t+1}^{(val)} = [A_{t+1, 1}^{(val)}, ..., A_{t+1, t+1}^{(val)}, 0, ..., 0]$ \Comment{Get next state} \State $r_{t}^{(i)} = \frac{1}{t+1}\sum_{j=1}^{t+1} A_{t+1, j}^{(val)}$ \Comment{Compute reward} \State ${\mathcal{B}} = {\mathcal{B}} \cup \{(s_{t}^{(i)}, a_{t}^{(i)}, r_{t}^{(i)}, s_{t+1}^{(i)})\}$ \Comment{Store transition in buffer} \If{time to update policy} \State ${\bm{\theta}}, {\mathcal{B}} = \textsc{UpdatePolicy}({\bm{\theta}}, {\mathcal{B}}, \gamma, \eta, n_{steps})$ \Comment{Update policy with experience} \EndIf \EndFor \EndFor \EndFor \State \Return ${\bm{\theta}}$ \Comment{Return policy} \Statex \Function{UpdatePolicy}{${\bm{\theta}}, {\mathcal{B}}, \gamma, \eta, n_{steps}$} \If{DQN} \State $(s_j, a_j, r_j, s_j') \sim {\mathcal{B}}$ \Comment{Sample mini-batch from buffer} \State $y_j = \begin{cases} r_j & \text{if $s_j'$ is terminal} \\ r_j + \gamma \max_a Q_{{\bm{\theta}}^{-}}(s_j', a) & \text{else} \end{cases} $ \Comment{Compute $y_j$ with target net ${\bm{\theta}}^{-}$} \State ${\bm{\theta}} = {\bm{\theta}} - \eta \nabla_{{\bm{\theta}}}(y_j - Q_{{\bm{\theta}}}(s_j, a_j))^2$ \Comment{Update $Q$-function} \ElsIf{A2C} \State $s_t = {\mathcal{B}}[n_{steps}]$ \Comment{Get last state in buffer} \State $R = \begin{cases} 0 & \text{if $s_t$ is terminal} \\ V_{{\bm{\theta}}_v}(s_t) & \text{else} \end{cases} $ \Comment{Bootstrap from last state} \For{$j = n_{steps}-1, ..., 0$} \State $s_j, a_j, r_j = {\mathcal{B}}[j]$ \Comment{Get state, action, and reward at step $j$} \State $R = r_j + \gamma R$ \State ${\bm{\theta}} = {\bm{\theta}} + \eta \nabla_{{\bm{\theta}}} \log \pi_{{\bm{\theta}}}(a_j, s_j) (R - V_{{\bm{\theta}}_v}(s_j))$ \Comment{Update policy (gradient ascent)} \State ${\bm{\theta}}_v = {\bm{\theta}}_v - \eta \nabla_{{\bm{\theta}}_v} (R - V_{{\bm{\theta}}_v}(s_j))^2$ \Comment{Update value function}
\EndFor \State ${\mathcal{B}} = \{\}$ \Comment{Reset experience buffer} \EndIf \State \Return ${\bm{\theta}}, {\mathcal{B}}$ \EndFunction \end{algorithmic} \end{algorithm} \section{Potential Negative Societal Impacts}\label{app:potential_societal_impacts} Privacy is one of the main concerns when storing raw input samples for replay in continual learning (CL). However, our replay scheduling framework could store compressed features or use synthetic data from a generative model for replay rather than the raw samples to mitigate the privacy risks. Moreover, as mentioned in the Limitations (see Section \ref{sec:conclusions}), substantial amounts of data and training time are required for learning policies that generalize, which increases the need for secure data storage solutions. Our policy learning framework only requires metrics representing task performances, since the policy is incentivized to replay tasks that are about to be forgotten. Hence, the policy can be trained on stored state transitions representing how the CL task performances progress based on the selected actions, which circumvents the potential privacy issues. Requirements on training time could potentially be reduced with more advanced RL methods that generalize well, which we will explore in future work. \section*{Checklist} \begin{enumerate} \item For all authors... \begin{enumerate} \item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? \answerYes{}% \item Did you describe the limitations of your work? \answerYes{See Section \ref{sec:conclusions}}% \item Did you discuss any potential negative societal impacts of your work? \answerYes{See Appendix \ref{app:potential_societal_impacts}}% \item Have you read the ethics review guidelines and ensured that your paper conforms to them? \answerYes{}% \end{enumerate} \item If you are including theoretical results... \begin{enumerate} \item Did you state the full set of assumptions of all theoretical results? \answerNA{}% \item Did you include complete proofs of all theoretical results? \answerNA{}% \end{enumerate} \item If you ran experiments... \begin{enumerate} \item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? \answerYes{}% \item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? \answerYes{See Appendix \ref{app:experimental_settings}}% \item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? \answerYes{} % \item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? \answerYes{See Appendix \ref{app:experimental_settings}}% \end{enumerate} \item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... \begin{enumerate} \item If your work uses existing assets, did you cite the creators? \answerYes{}% \item Did you mention the license of the assets? \answerNo{}% \item Did you include any new assets either in the supplemental material or as a URL? \answerNo{}% \item Did you discuss whether and how consent was obtained from people whose data you're using/curating? \answerNo{}% \item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
\answerNo{}% \end{enumerate} \item If you used crowdsourcing or conducted research with human subjects... \begin{enumerate} \item Did you include the full text of instructions given to participants and screenshots, if applicable? \answerNA{}% \item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? \answerNA{}% \item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? \answerNA{}% \end{enumerate} \end{enumerate} \section{Introduction}\label{sec:introduction} Many organizations deploying machine learning systems receive large volumes of data daily~\cite{bailis2017macrobase, hazelwood2018applied}. Although all historical data are stored in the cloud in practice, retraining machine learning systems on a daily basis is prohibitive both in time and cost. In this setting, the systems often need to continuously adapt to new tasks while retaining the previously learned abilities. Continual learning (CL) methods~\cite{delange2021continual, parisi2019continual} address this challenge, where, in particular, replay methods~\cite{chaudhry2019tiny, hayes2020remind} have been shown to be very effective in achieving strong prediction performance. Replay methods mitigate catastrophic forgetting by revisiting a small set of samples, which is feasible to process compared to the size of the historical data. In the traditional CL literature, replay memories are limited due to the assumption that historical data are not available. In the real-world setting where historical data are in fact always available, the requirement of a small memory remains due to processing time and cost issues. Recent research on replay-based CL has focused on the quality of memory samples~\cite{aljundi2019gradient, borsos2020coresets, chaudhry2019tiny, chrysakis2020online, nguyen2017variational, rebuffi2017icarl, yoon2021online} or data compression to increase the memory capacity~\cite{hayes2020remind, iscen2020memory, pellegrini2019latent}. Most previous methods allocate equal memory storage space for samples from old tasks, and replay the whole memory to mitigate catastrophic forgetting. However, in life-long learning settings, this simple strategy would be inefficient, as the memory must store a large number of tasks. Furthermore, a uniform selection policy over which samples to revisit is commonly used, which ignores the timing of when previous tasks should be learned again. This stands in contrast to human learning, where education methods focus on the scheduling of learning and rehearsal of previously learned knowledge. For example, spaced repetition~\cite{dempster1989spacing, ebbinghaus2013memory, landauer1978optimum}, where the time interval between rehearsals increases, has been shown to enhance memory retention. \begin{figure}[t] \centering \setlength{\figwidth}{.22\textwidth} \setlength{\figheight}{.12\textheight} \input{figures/single_task_replay_experiment/single_task_replay_groupplot} \caption{Task accuracies on Split MNIST~\cite{zenke2017continual} when replaying only 10 samples of classes $0/1$ at a single time step. The black vertical line indicates when replay is used. ACC denotes the average accuracy over all tasks after learning Task 5. Results are averaged over 5 seeds. These results show that the time to replay the previous task is critical for the final performance.
} \vspace{-4mm} \label{fig:single_task_replay_with_M10} \end{figure} We argue that finding the proper schedule of which tasks to replay in the fixed-memory setting is critical for CL. To demonstrate our claim, we perform a simple experiment on the Split MNIST~\cite{zenke2017continual} dataset, where each task consists of learning the digits 0/1, 2/3, etc.\ arriving in sequence. The replay memory contains data from task 1 and can only be replayed at one point in time. Figure \ref{fig:single_task_replay_with_M10} shows how the task performances progress over time when the memory is replayed at different time steps. In this example, the best final performance is achieved when the memory is used when learning task 5. Note that choosing different time points to replay the same memory leads to noticeably different results in the final performance. These results indicate that scheduling the time when to apply replay can significantly influence the final performance of a CL system. To this end, we propose learning the time to learn, in which we learn replay schedules of which tasks to replay at different times, inspired by human learning~\cite{dempster1989spacing}. To show the importance of replay scheduling, we first take an episodic-learning approach where a policy is learned from multiple trials selecting which tasks to replay in a CL scenario. In particular, we illustrate this in single CL environments by using Monte Carlo tree search (MCTS)~\cite{coulom2006efficient} as an example method that searches for good replay schedules. The replay schedules from MCTS are evaluated by measuring the final performance of a network trained on a sequence of CL tasks where the scheduled replay samples have been used for mitigating catastrophic forgetting. We use this setup to show the importance of replay scheduling given an ideal environment for learning the policy, which is infeasible for real-world large-scale CL tasks. To enable replay scheduling in real-world CL scenarios, we also propose a framework using reinforcement learning (RL)~\cite{sutton2018reinforcement} for learning general policies that can be applied in new CL scenarios without additional training at test time. In summary, our contributions are: \begin{itemize}[topsep=1pt, leftmargin=*]% \setlength\itemsep{0.1mm} \item We propose a new CL setting where historical data is available while the processing time is limited, in order to bring current CL research closer to real-world needs (Section \ref{sec:problem_setting}). In this new setting, we introduce replay scheduling, where we learn when to replay which tasks (Section \ref{sec:replay_scheduling_in_continual_learning}). \item We argue that learning the time to learn is essential for CL performance. We use MCTS as an example method to illustrate the benefits of replay scheduling in CL, where MCTS searches over finite sets of replay memory compositions at every task (Section \ref{sec:mcts_for_replay_scheduling}). We show that the replay schedules from MCTS mitigate catastrophic forgetting efficiently across multiple CL benchmarks for various memory selection and replay methods, and in tiny memory settings (Section \ref{sec:results_on_replay_scheduling_with_mcts}). \item To enable replay scheduling in real-world CL scenarios, we propose an RL-based framework for learning policies that generalize across different CL environments (Section \ref{sec:policy_learning_framework_for_replay_scheduling}).
We show that the learned policies can efficiently mitigate catastrophic forgetting in CL scenarios with new task orders and datasets unseen during training, without added computational cost (Section \ref{sec:results_on_policy_generalization}). \end{itemize} \section{Method}\label{sec:method} In this section, we describe our new problem setting of CL where historical data are available while the processing time is limited when learning new tasks. In Sections \ref{sec:problem_setting} and \ref{sec:replay_scheduling_in_continual_learning}, we present the considered problem setting, as well as our idea of learning schedules over which tasks to replay at different time steps to mitigate catastrophic forgetting. Section \ref{sec:mcts_for_replay_scheduling} describes how we use MCTS~\cite{coulom2006efficient} for replay scheduling in single CL environments. In Section \ref{sec:policy_learning_framework_for_replay_scheduling}, we present a framework based on RL~\cite{sutton2018reinforcement} for learning replay scheduling policies that generalize across different CL scenarios. \subsection{Problem Setting}\label{sec:problem_setting} We focus on a slightly new setting, considering the needs for CL in the real world, where all historical data can be available since data storage is cheap. However, as this data volume is typically huge, we are often prohibited from replaying all historical data due to processing time constraints. Therefore, the goal is to determine which historical tasks to revisit and to sample a small replay memory from the selected tasks to mitigate catastrophic forgetting as efficiently as possible. \setlength{\abovedisplayskip}{0pt} \setlength{\belowdisplayskip}{0pt} \setlength{\abovedisplayshortskip}{0pt} \setlength{\belowdisplayshortskip}{0pt} The notation of our problem setting resembles the traditional CL setting for image classification. We let the network $f_{\vphi}$, parameterized by $\vphi$, learn $T$ tasks sequentially from the datasets ${\mathcal{D}}_1, \dots, {\mathcal{D}}_T$ arriving one at a time. The $t$-th dataset ${\mathcal{D}}_t = \{({\bm{x}}_{t}^{(i)}, y_{t}^{(i)})\}_{i=1}^{N_{t}}$ consists of $N_t$ samples, where ${\bm{x}}_{t}^{(i)}$ and $y_{t}^{(i)}$ are the $i$-th data point and class label, respectively. Furthermore, each dataset is split into a training, validation, and test set, i.e., ${\mathcal{D}}_t = \{{\mathcal{D}}_t^{(train)}, {\mathcal{D}}_t^{(val)}, {\mathcal{D}}_t^{(test)}\}$. The objective at task $t$ is to minimize the loss $\ell(f_{\vphi}({\bm{x}}_t), y_{t})$, where $\ell(\cdot)$ is the cross-entropy loss in our case. The challenge is for the network $f_{\vphi}$ to retain its performance on the previous tasks. We assume that historical data from old tasks are accessible at any time step $t$. However, due to processing time constraints, we can only fill a small replay memory ${\mathcal{M}}$ with $M$ historical samples for replay. The challenge then becomes how to select the $M$ replay samples to efficiently retain knowledge of old tasks. We focus on selecting the samples at the task level by deciding on the task proportion $(p_1, \dots, p_{t-1})$ of samples to fetch from each task, where $p_{i} \geq 0$ is the proportion of $M$ samples from task $i$ to place in ${\mathcal{M}}$ and $\sum_{i=1}^{t-1} p_i = 1$. To simplify the selection of which tasks to replay, we construct a discrete set of possible task proportions that can be used for constructing ${\mathcal{M}}$.
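As a minimal sketch of how a replay memory of fixed size $M$ can be filled from a task proportion, consider the following Python snippet. The largest-remainder rounding here is one reasonable reading of rounding the per-task counts up or down, and is an assumption of this example rather than our exact implementation.
\begin{verbatim}
import random

def get_replay_memory(datasets, proportions, M):
    """Sample a replay memory with exactly M examples from the historical
    task datasets, rounding the per-task counts M * p_i up or down via a
    largest-remainder scheme so that the counts sum to M."""
    counts = [int(M * p) for p in proportions]
    remainders = [M * p - c for p, c in zip(proportions, counts)]
    # hand the leftover slots to the tasks with the largest remainders
    leftover = M - sum(counts)
    for i in sorted(range(len(counts)), key=lambda k: -remainders[k])[:leftover]:
        counts[i] += 1
    memory = []
    for data, c in zip(datasets, counts):
        memory += random.sample(data, min(c, len(data)))
    return memory
\end{verbatim}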
\subsection{Replay Scheduling in Continual Learning}\label{sec:replay_scheduling_in_continual_learning} In this section, we describe our setup for scheduling the selection of replay memories at different time steps. We define a replay schedule as a sequence $S = ({\bm{p}}_1, \dots, {\bm{p}}_{T-1})$, where the task proportions ${\bm{p}}_i = (p_1, \dots, p_{T-1})$ for $1 \leq i \leq T-1$ are used for determining how many samples from seen tasks with which to fill the replay memory at task $i$. We construct an action space with a discrete number of choices of task proportions that can be selected at each task: At task $t$, we have $t-1$ historical tasks that we can choose samples from. We create $t-1$ bins ${\bm{b}}_t = [b_1, \dots, b_{t-1}]$ and sample a task index for each bin $b_i \in \{1, \dots, t-1 \}$. The bins are treated as interchangeable, and we only keep the unique choices. For example, at task 3, we have seen tasks 1 and 2, so the unique choices of vectors are $[1,1], [1,2], [2,2]$, where $[1,1]$ indicates that all memory samples are from task 1, $[1,2]$ indicates that half of the memory is from task 1 and the other half from task 2, and so on. We count the number of occurrences of each task index in ${\bm{b}}_t$ and divide by $t-1$ to obtain the task proportion, i.e., ${\bm{p}}_t = \texttt{bincount}({\bm{b}}_t) / (t-1)$ (see the sketch below). We round the number of replay samples from task $i$, i.e., $p_i \cdot M$, up or down accordingly to keep the memory size $M$ fixed when filling the memory. From this specification, we can build a tree of different replay schedules to evaluate with the network. Figure \ref{fig:replay_scheduling_mcts_tree_example} shows an example of a replay schedule tree with Split MNIST~\cite{zenke2017continual} where the memory size is $M=8$. Each level corresponds to a task to learn, and we show some examples of possible replay memories in the tree that can be evaluated at each task. A replay schedule is represented as a path traversal of different replay memory compositions from task 1 to task 5. At task 1, the memory ${\mathcal{M}}_1 = \emptyset$ is empty, while ${\mathcal{M}}_2$ is filled with samples from task 1 at task 2. The memory ${\mathcal{M}}_3$ can be composed of samples from either task 1 or task 2, or filled equally with samples from both tasks. All possible paths in the tree are valid replay schedules. We show three examples of possible schedules in Figure \ref{fig:replay_scheduling_mcts_tree_example} for illustration: the \textcolor{blue}{blue} path represents a replay schedule where only task 1 samples are replayed. The \textcolor{red}{red} path represents using memories with equally distributed tasks, and the \textcolor{purple}{purple} path represents a schedule where the memory is only filled with samples from the most recent task.
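The enumeration of unique task-proportion choices described above can be sketched as follows; \texttt{combinations\_with\_replacement} captures the interchangeability of the bins. This is an illustrative reconstruction of the mechanism, not necessarily the exact implementation.
\begin{verbatim}
from itertools import combinations_with_replacement
import numpy as np

def task_proportion_choices(t):
    """Enumerate the unique task proportions available at task t (t-1 seen
    tasks). Bins are interchangeable, so unordered draws with repetition
    give exactly the unique choices."""
    choices = []
    for bins in combinations_with_replacement(range(1, t), t - 1):
        counts = np.bincount(bins, minlength=t)[1:t]  # occurrences of tasks 1..t-1
        choices.append(counts / (t - 1))
    return choices

# At task 3, the bins [1,1], [1,2], [2,2] give proportions
# [1.0, 0.0], [0.5, 0.5], [0.0, 1.0] over tasks 1 and 2.
print(task_proportion_choices(3))
\end{verbatim}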
\subsection{Monte Carlo Tree Search for Replay Schedules} \label{sec:mcts_for_replay_scheduling} The tree-shaped action space of task proportions described in Section \ref{sec:replay_scheduling_in_continual_learning} grows quickly with the number of tasks, which complicates studying replay scheduling on datasets with longer task horizons. Thus, even for a single CL environment, the search space is too large for exhaustive search. To this end, we propose to use MCTS, since it has been successful in applications with large action spaces~\cite{browne2012survey, chaudhry2018feature, gelly2006modification, silver2016mastering}. In our case, MCTS concentrates the search for replay schedules in directions with promising CL performance in the environment. We use MCTS in single CL environments for demonstration purposes, to show that replay scheduling can be critical for the CL performance. Each memory composition in the action space corresponds to a node that can be visited by MCTS. For example, in Figure \ref{fig:replay_scheduling_mcts_tree_example}, the nodes correspond to the possible memory examples which can be visited during the MCTS rollouts. One rollout corresponds to a tree traversal through all tree levels $1, \dots, T$ to select the replay schedule $S$ to use during the CL training. At level $t$, the node $v_t$ is related to a task proportion ${\bm{p}}_{t}$ used for retrieving a replay memory from the historical data at task $t$. We store the task proportion ${\bm{p}}_t$ from every visited node $v_t$ in the replay schedule $S$ during the rollout. With the final replay schedule $S$, we start the CL training where $S$ is used for constructing the replay memories at each task. Next, we briefly outline the MCTS steps for performing the replay schedule search (more details in Appendix \ref{app:rs_mcts_algorithm}): \vspace{-1mm} \begin{itemize}[leftmargin=*, topsep=0pt, label={}, noitemsep,] \item {\bf Selection.} During a rollout, the current node $v_t$ either moves randomly to unvisited children, or selects the next node by evaluating the Upper Confidence Tree (UCT) score~\cite{kocsis2006bandit} if all children have been visited earlier. The child $v_{t+1}$ with the highest UCT score is selected using the function from \cite{chaudhry2018feature}: \begin{align}\label{eq:uct} UCT(v_t, v_{t+1}) = \text{max}(q(v_{t+1})) + C \sqrt{\frac{2 \log(n(v_{t}))}{n(v_{t+1})}}, \end{align} where $q(\cdot)$ is the reward function, $C$ the exploration constant, and $n(\cdot)$ the number of node visits. \item {\bf Expansion.} Whenever the current node $v_t$ has unvisited child nodes, the search tree is expanded with one of the unvisited child nodes $v_{t+1}$ selected by uniform sampling. \item {\bf Simulation and Reward.} After expansion, the succeeding nodes are selected randomly until reaching a terminal node $v_T$. The task proportions from the visited nodes in the rollout constitute the replay schedule $S$. After training the network using $S$ for replay, we calculate the reward for the rollout given by $r = \frac{1}{T} \sum_{i=1}^T A_{T, i}^{(val)}$, where $A_{T, i}^{(val)}$ is the validation accuracy of task $i$ at task $T$. \item {\bf Backpropagation.} Reward $r$ is backpropagated from the expanded node $v_t$ to the root $v_1$, where the reward function $q(\cdot)$ and number of visits $n(\cdot)$ are updated at each node. \end{itemize} \subsection{Policy Learning Framework for Replay Scheduling}\label{sec:policy_learning_framework_for_replay_scheduling} In real-world CL settings, learning the policy with multiple rollouts is infeasible. Thus, we need a general policy for memory scheduling. We now describe our RL-based framework for learning replay scheduling policies that generalize across different CL environments. Our intuition is that there may exist general patterns regarding replay scheduling, e.g., tasks that are harder or have been forgotten should be replayed more often. Moreover, the policy may non-trivially take task properties into consideration. Therefore, we aim to learn policies that select which tasks to replay from states representing the current task performance in the CL environments.
The policy can then be applied for mitigating catastrophic forgetting in new CL scenarios. We model the CL environments as Markov Decision Processes~\cite{bellman1957markovian} (MDPs), where each MDP is represented as a tuple $E_i = ({\mathcal{S}}_i, {\mathcal{A}}, P_i, R_i, \mu_i, \gamma)$ consisting of the state space ${\mathcal{S}}_i$, action space ${\mathcal{A}}$, state transition probability $P_i(s' | s, a)$, reward function $R_i(s, a)$, initial state distribution $\mu_i(s_1)$, and discount factor $\gamma$. We assume that we have access to a fixed set of training environments ${\mathcal{E}}^{(train)} = \{E_1, \dots, E_K\}$ sampled from a distribution of CL environments, i.e., $E_i \sim p(E)$ for $i=1, ..., K$. Each environment $E_i$ consists of a network $f_{\vphi}$ and $T$ datasets ${\mathcal{D}}_{1:T}$, where the $t$-th dataset is learned at time step $t$. To generate diverse CL environments, sampling from $p(E)$ yields environments with different network initializations of $f_{\vphi}$ and shuffled task orders in the dataset. We define the state $s_t$ of the environment as the validation accuracies $A_{t, 1:t}^{(val)}$ on each seen task $1, ..., t$ from $f_{\vphi}$ at task $t$, i.e., $s_t = [A_{t, 1}, ..., A_{t, t}, 0, ..., 0]$, where we use zero-padding on future tasks. The action space ${\mathcal{A}}$ is constructed as described in Section \ref{sec:replay_scheduling_in_continual_learning}, such that $a_t \in {\mathcal{A}}$ corresponds to a task proportion ${\bm{p}}_t$ used for sampling the replay memory ${\mathcal{M}}_t$. We use a dense reward based on the average validation accuracies at task $t$, i.e., $r_{t} = \frac{1}{t}\sum_{i=1}^{t} A_{t, i}^{(val)}$. The state transition distribution $P_i(s' | s, a)$ represents the dynamics of the environment, which depend on the initialization of $f_{\vphi}$ and also the task order in the dataset. The procedure for training the policy goes as follows: The state $s_t$ is obtained by evaluating the network $f_{\vphi}$ on the validation sets ${\mathcal{D}}_{1:t}^{(val)}$ after learning the $t$-th task from ${\mathcal{D}}_t^{(train)}$. Action $a_t$ is selected under the policy $\pi_{{\bm{\theta}}}(a | s_t)$ parameterized by ${\bm{\theta}}$. The action is converted into the task proportion ${\bm{p}}_t$ for sampling the replay memory ${\mathcal{M}}_t$ from the historical datasets. We then train the classifier $f_{\vphi}$ with ${\mathcal{D}}_{t+1}^{(train)}$ and ${\mathcal{M}}_t$, and obtain the reward $r_{t+1}$ and the next state $s_{t+1}$ by evaluating $f_{\vphi}$ on the validation sets ${\mathcal{D}}_{1:t+1}^{(val)}$. The collected transitions $(s_t, a_t, r_{t+1}, s_{t+1})$ are used for updating the policy. This procedure is followed until the final task $T$, after which we start a new episode. We evaluate the learned policy by applying it to mitigate catastrophic forgetting in new CL environments at test time. To foster generalization across environments, we train the policy on multiple environments with different dynamics, e.g., task orders and datasets, to learn from diverse sets of training data. The goal for the agent is to maximize the sum of rewards in each training environment. At test time, the policy is applied to new CL classifiers and datasets in the test environments without added computational cost or experience collection. In Section \ref{sec:experiments}, we test the policies' generalization capability to new CL environments where the task orders and datasets are unseen during training.
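As a small illustration of the state and reward definitions above, the following sketch assumes a helper that returns the validation accuracies $A^{(val)}_{t,1:t}$ on the seen tasks; the function names are hypothetical.
\begin{verbatim}
import numpy as np

def make_state(val_accs, T):
    """Zero-padded state s_t = [A_{t,1}, ..., A_{t,t}, 0, ..., 0] of length T."""
    s = np.zeros(T)
    s[:len(val_accs)] = val_accs
    return s

def dense_reward(val_accs):
    """Dense reward r_t = (1/t) sum_i A_{t,i}^{(val)} over the seen tasks."""
    return float(np.mean(val_accs))

# After task 3 of T = 5 with validation accuracies [0.98, 0.91, 0.95]:
# make_state(...) -> [0.98, 0.91, 0.95, 0.0, 0.0], dense_reward(...) -> 0.9467
\end{verbatim}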
\section{Conclusions}\label{sec:conclusions} We proposed learning the time to learn, i.e.,~in a real-world CL context, learning schedules of which tasks to replay at different times. To the best of our knowledge, we are the first to consider the time to learn in CL, inspired by human learning techniques. We demonstrated the benefits of replay scheduling by showing the performance improvements with the MCTS schedules on several CL benchmarks in single CL environments. To improve the scheduling efficiency, we further proposed an RL-based framework that allows learning policies that can generalize across different CL environments with unseen task orders and datasets, without additional computational cost or training in the test environment. Moreover, the learned policies are capable of replaying forgotten tasks, which can mitigate catastrophic forgetting more efficiently than fixed scheduling policies. The proposed problem setting and replay scheduling approach bring CL research closer to real-world needs, especially in scenarios where CL is applied under limited processing times but with rich amounts of historical data available for replay. \vspace{-3mm} \paragraph{Limitations and Future Work.} Generalization in RL is a challenging research topic by itself. With the current method, large amounts of diverse data and training time are required to enable the learned policy to generalize well. This can be costly because generating the CL environments is expensive: each state transition involves training the classifier on a CL task. Moreover, we are currently considering a discrete action space, which is hard to construct, especially when the number of tasks is large. Thus, in future work, we will explore more advanced RL methods which can handle continuous action spaces and generalize well. \section{Related Work}\label{sec:related_work} In this section, we give a brief overview of various approaches in CL, especially replay methods. We provide more details on the related work, including spaced repetition in human CL~\cite{dempster1989spacing, ebbinghaus2013memory, hawley2008comparison, landauer1978optimum} and generalization in RL~\cite{igl2019generalization, kirk2021survey, zhang2018dissection, zhao2019investigating}, in Appendix \ref{app:related_work}. Traditional CL can be divided into three main areas, namely regularization-based, architecture-based, and replay-based approaches. Regularization-based methods protect parameters influencing the performance on known tasks from wide changes and use the other parameters for learning new tasks~\cite{adel2019continual, kao2021natural, kirkpatrick2017overcoming, li2017learning, nguyen2017variational, schwarz2018progress, zenke2017continual}. Architecture-based methods mitigate catastrophic forgetting by maintaining task-specific parameters~\cite{ebrahimi2020adversarial, mallya2018packnet, rusu2016progressive, serra2018overcoming, xu2018reinforced, yoon2019scalable, yoon2017lifelong}.
Replay methods mix samples from old tasks with the current dataset to mitigate catastrophic forgetting, where the replay samples are either stored in an external memory~\cite{aljundi2019online, aljundi2019gradient, borsos2020coresets, chaudhry2019tiny, chrysakis2020online, hayes2019memory, hayes2020remind, iscen2020memory, isele2018selective, jin2020gradient, lopez2017gradient, nguyen2017variational, pellegrini2019latent, rebuffi2017icarl, rolnick2018experience, verwimp2021rehearsal, yoon2021online} or generated using a generative model~\cite{shin2017continual, van2018generative}. Selecting the time to replay old tasks has mostly been ignored in the literature, with an exception in~\cite{aljundi2019online}, which replays memory samples that would most interfere with a foreseen parameter update. Our replay scheduling approach differs from the above-mentioned works since we focus on learning to select which tasks to replay. Nevertheless, our scheduling can be combined with any selection strategy and replay-based method. \section{Experiments}\label{sec:experiments} The experiments with our replay scheduling methods are divided into two parts. Firstly, we evaluate the benefits of replay scheduling in single CL environments using MCTS for finding replay schedules. We perform extensive evaluations to show that the replay schedules from MCTS can outperform the baselines across several CL benchmarks and different backbones. Secondly, we evaluate our RL-based framework using DQN~\cite{mnih2013playing} and A2C~\cite{mnih2016asynchronous} for learning policies that generalize to new CL scenarios. We show that the learned policies can efficiently mitigate catastrophic forgetting in CL environments with new task orders and datasets that are unseen during training. Full details on experimental settings and additional results are in Appendix \ref{app:experimental_settings} and \ref{app:additional_experimental_results}. \vspace{-3mm} \paragraph{Datasets and Network Architectures.} We conduct experiments on several CL benchmark datasets: Split MNIST~\cite{lecun1998gradient, zenke2017continual}, FashionMNIST~\cite{xiao2017fashion}, Split notMNIST~\cite{bulatov2011notMNIST}, Permuted MNIST~\cite{goodfellow2013empirical}, Split CIFAR-10 and CIFAR-100~\cite{krizhevsky2009learning}, and Split miniImagenet~\cite{vinyals2016matching}. We randomly sample 15\% of the training data for each task to use for validation. We use multi-head output layers for most datasets and assume task labels are available at test time~\cite{van2019three}, except for Permuted MNIST, where the network uses a single-head output. We use a 2-layer MLP with 256 hidden units for Split MNIST, Split FashionMNIST, Split notMNIST, and Permuted MNIST. For Split CIFAR-10 and CIFAR-100, we use the ConvNet from \cite{schwarz2018progress,vinyals2016matching}. For Split miniImagenet, we apply the reduced ResNet-18 from \cite{lopez2017gradient}. \vspace{-3mm} \paragraph{Evaluation Protocol.} We use the average test accuracy over all tasks after learning the final task, i.e., $\text{ACC} = \frac{1}{T} \sum_{i=1}^{T} A_{T, i}$, where $A_{T, i}$ is the test accuracy of task $i$ after learning task $T$. We report results evaluated on the test sets, where the replay schedules are selected based on validation-set performance. To assess generalization capability, we use a ranking method based on the $\text{ACC}$ between the methods in every test environment for comparison (see Appendix \ref{app:ranking_method} for more details).
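To make the metric concrete, here is a small sketch computing ACC from the matrix of accuracies $A_{t,i}$; the numbers are purely illustrative.
\begin{verbatim}
import numpy as np

def final_acc(A):
    """ACC = (1/T) sum_i A[T-1, i], where A[t, i] is the test accuracy on
    task i+1 after learning task t+1 (0-indexed T x T matrix)."""
    return A[-1].mean()

# Toy example with T = 3 tasks:
A = np.array([[0.99, 0.00, 0.00],
              [0.92, 0.98, 0.00],
              [0.90, 0.93, 0.97]])
print(final_acc(A))  # ~0.933
\end{verbatim}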
\subsection{Results on Replay Scheduling with Monte Carlo Tree Search}\label{sec:results_on_replay_scheduling_with_mcts} We show the importance of replay scheduling in single CL environments using MCTS. We perform extensive evaluations where MCTS is combined with different memory selection and replay methods and varying memory sizes, and show the efficiency of replay scheduling in a memory setting where only 1 example/class is available for replay. We compare MCTS to the following scheduling baselines: \begin{itemize}[leftmargin=*, topsep=0pt, noitemsep] \item {\bf Random.} Random policy that selects task proportions from the action space at random for structuring the replay memory at every task. \item {\bf Equal Task Schedule (ETS).} Policy that selects equal task proportions, aiming to fill the replay memory with an equal number of samples from every seen task. \item {\bf Heuristic Scheduling (Heuristic).} Heuristic policy that replays tasks with validation accuracy below a certain threshold proportional to the best achieved validation accuracy on the task. \end{itemize} Heuristic scheduling is based on the intuition that forgotten tasks should be replayed. The replay memory is filled with $M/k$ samples per task, where $k$ is the number of selected tasks. If $k=0$, then replay is skipped at the current task. \vspace{-2mm} \paragraph{Combining with Different Memory Selection Methods.} We show that our method can be combined with any memory selection method for storing replay samples. In addition to uniform sampling, we apply various memory selection methods commonly used in the CL literature, namely $k$-means clustering, $k$-center clustering~\cite{nguyen2017variational}, and Mean-of-Features (MoF)~\cite{rebuffi2017icarl}. The replay memory sizes are set to $M=10$ for the 5-task datasets and $M=100$ for the 10- and 20-task datasets. Table \ref{tab:results_memory_selection_methods} shows the results across all datasets. We note that using the replay schedules from MCTS outperforms the baselines when using the alternative selection methods, where MoF performs best on most datasets. \input{tables/table_applying_scheduling_to_recent_replay_methods} \vspace{-3mm} \begin{wrapfigure}{r}{0.46\textwidth} \centering \setlength{\figwidth}{0.46\textwidth} \setlength{\figheight}{.23\textheight} \vspace{-2mm} \input{figures/replay_schedule_visualizations/cifar100/rs_bubble_plot_seed3} \caption{Replay schedule from MCTS on Split CIFAR-100 visualized as a bubble plot. } \vspace{-3mm} \label{fig:replay_schedule_vis_cifar100} \end{wrapfigure} \paragraph{Replay Schedule Visualization.} We visualize a learned replay schedule from Split CIFAR-100 with memory size $M=100$ to gain insights into the behavior of the scheduling policy from MCTS. Figure \ref{fig:replay_schedule_vis_cifar100} shows a bubble plot of the task proportions used for filling the replay memory at every task. Each circle color corresponds to a historical task, and its size represents the proportion of replay samples at the current task. The circle sizes in each column sum to a fixed total at every current task. We see that the task proportions vary dynamically over time in a sophisticated nonlinear way, which would be hard to replicate with a heuristic method. Moreover, we can observe spaced repetition-style scheduling on many tasks; e.g., tasks 1-3 are replayed with similar proportions during the initial tasks, but the time intervals between replays eventually start to vary.
Also, tasks 4 and 6 need less replay in their early stages, potentially because they are simpler or correlated with other tasks. We provide a similar visualization for Split MNIST in Figure \ref{fig:split_mnist_task_accuracies_and_bubble_plot} in Appendix \ref{app:replay_schedule_visualization_for_split_mnist} to bring more insights into the benefits of replay scheduling. \vspace{-3mm} \begin{wrapfigure}{r}{0.59\textwidth} \centering \setlength{\figwidth}{0.24\textwidth} \setlength{\figheight}{.13\textheight}% \vspace{-6mm} \input{figures/acc_over_memory_size_100iters/acc_over_memory_size_groupplot} \vspace{-6mm} \caption{Performance comparison over memory sizes $M$ for MCTS (Ours) and the Random, ETS, and Heuristic baselines. All results are averaged over 5 seeds. } \vspace{-4mm} \label{fig:acc_over_replay_memory_size} \end{wrapfigure} \paragraph{Varying Memory Size.} We show that our method can improve the CL performance across different memory sizes. In Figure \ref{fig:acc_over_replay_memory_size}, we observe that MCTS obtains better task accuracies than ETS, especially for small memory sizes. Both MCTS and ETS perform better than Heuristic as $M$ increases, which shows that Heuristic requires careful tuning of the validation accuracy threshold. These results show that replay scheduling can outperform the baselines, especially for small $M$, on both small and large datasets across different backbone choices. \vspace{-3mm} \paragraph{Applying Scheduling to Recent Replay Methods.} In this experiment, we show that replay scheduling can be combined with any replay method to enhance the CL performance. We combine MCTS with Hindsight Anchor Learning (HAL)~\cite{chaudhry2021using}, Meta-Experience Replay (MER)~\cite{riemer2018learning}, and Dark Experience Replay++ (DER++)~\cite{buzzega2020dark}. We provide the hyperparameter settings in Appendix \ref{app:apply_scheduling_to_recent_replay_methods}. Table \ref{tab:results_applying_scheduling_to_recent_replay_methods} shows the performance comparison of MCTS scheduling against the Random, ETS, and Heuristic schedules for each method. The results confirm that replay scheduling is important for the final performance given the same memory constraints, and that it can benefit any existing CL framework. \vspace{-3mm} \paragraph{Efficiency of Replay Scheduling.} We illustrate the efficiency of replay scheduling with comparisons to several common replay-based CL baselines when only 1 sample/class is available for replay. We consider CL scenarios where the memory size is even smaller than the number of classes. To this end, we set the replay memory size for our method to $M=2$ for the 5-task datasets, such that only 2 samples can be selected for replay at all times. For the 10- and 20-task datasets, which have 100 classes, we set $M=50$. We then compare against the memory-efficient CL baselines A-GEM~\cite{chaudhry2018efficient} and ER-Ring~\cite{chaudhry2019tiny}, which have shown promising results with 1 sample per class for replay, as well as uniform memory selection as a reference. We visualize the memory usage for our method and the baselines in Appendix \ref{app:additional_figures}. Table \ref{tab:efficiency_of_replay_scheduling_all_datasets} shows the ACC for each method across all datasets. Despite using significantly fewer samples for replay, MCTS performs better than or on par with the best baselines on most datasets.
These results indicate that replay scheduling is an important research direction in CL, as storing 1 sample/class in the memory could be inefficient in settings with a large number of tasks. \input{tables/table_efficiency_replay_scheduling} \subsection{Policy Generalization to New Continual Learning Scenarios}\label{sec:results_on_policy_generalization} We show that the policies learned with our RL-based framework using DQN and A2C are capable of generalizing across CL environments with new task orders and datasets unseen during training. In addition to the baselines in Section \ref{sec:results_on_replay_scheduling_with_mcts}, we add two more heuristic scheduling baselines: \vspace{-1mm} \begin{itemize}[leftmargin=*, topsep=0pt, noitemsep] \item {\bf Heuristic Local Drop (Heur-LD).} Heuristic policy that replays tasks with validation accuracy below a threshold proportional to the previously achieved validation accuracy on the task. \item {\bf Heuristic Accuracy Threshold (Heur-AT).} Heuristic policy that replays tasks with validation accuracy below a fixed threshold. \vspace{-2mm} \end{itemize} Here, we name the Heuristic from Section \ref{sec:results_on_replay_scheduling_with_mcts} Heuristic Global Drop (Heur-GD); a minimal code sketch contrasting the three variants is given below.
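This sketch is illustrative only: the threshold value $\tau$ and the exact interface are assumptions made for the example, not our tuned settings.
\begin{verbatim}
def select_tasks(variant, cur_acc, best_acc, prev_acc, tau=0.9):
    """Return the indices of seen tasks to replay under each heuristic.
    Heur-GD: accuracy fell below tau times the best accuracy ever reached.
    Heur-LD: accuracy fell below tau times the previous step's accuracy.
    Heur-AT: accuracy is below the fixed threshold tau."""
    t = len(cur_acc)
    if variant == "GD":
        return [i for i in range(t) if cur_acc[i] < tau * best_acc[i]]
    if variant == "LD":
        return [i for i in range(t) if cur_acc[i] < tau * prev_acc[i]]
    if variant == "AT":
        return [i for i in range(t) if cur_acc[i] < tau]
    raise ValueError(variant)

# The memory is then filled with M/k samples per selected task (k tasks);
# if k == 0, replay is skipped at the current task.
\end{verbatim}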
\begin{wrapfigure}{r}{0.51\textwidth} \centering \setlength{\figwidth}{0.25\textwidth} \setlength{\figheight}{.13\textheight} \vspace{-6mm}% \begin{subfigure}[b]{0.48\textwidth} \centering \input{figures/policy_cifar10/groupplot_a2c} \vspace{-2mm} \caption{A2C - ACC: 89.75\%} \label{fig:policy_cifar10_a2c} \end{subfigure} \\ \begin{subfigure}[b]{0.48\textwidth} \centering \input{figures/policy_cifar10/groupplot_ets} \caption{ETS - ACC: 88.32\%} \vspace{-2mm} \label{fig:policy_cifar10_ets} \end{subfigure} \caption{Task accuracies and replay schedules for A2C and ETS for a Split CIFAR-10 environment.} \vspace{-3mm} \label{fig:policy_cifar10} \end{wrapfigure} \paragraph{Generalization to New Task Orders.} We show that the learned replay scheduling policies can generalize to CL environments with previously unseen task orders. We generate training and test environments with unique task orders for three datasets, namely Split MNIST, FashionMNIST, and CIFAR-10. Table \ref{tab:average_ranking_rl_experiment} shows the average ranking for DQN, A2C, and the baselines when applied in 10 test environments. Our learned policies obtain the best average ranking across the datasets, where DQN performs best for Split MNIST and FashionMNIST, while A2C outperforms all methods on Split CIFAR-10. We provide further insights into the benefits of learning the replay scheduling policy by visualizing the replay schedule and task accuracy progress from A2C in one Split CIFAR-10 test environment in Figure \ref{fig:policy_cifar10}. Additionally, we show the replay schedule and task accuracies from the ETS baseline in the same environment. The replay schedules are visualized with bubble plots showing the selected task proportions used for composing the replay memories at each task. In Figure \ref{fig:policy_cifar10_a2c}, we observe that A2C decides to replay task 2 more than task 1 as the performance on task 2 decreases, which results in A2C achieving a slightly better ACC than ETS. These results show that the learned policy can flexibly consider replaying forgotten tasks to enhance the CL performance. \vspace{-3mm} \paragraph{Generalization to New Datasets.} We show that the learned replay scheduling policy is capable of generalizing to CL environments with new datasets unseen in the training environments. We perform two sets of experiments: 1) train with environments generated with Split MNIST and FashionMNIST and test on environments generated with Split notMNIST, and 2) train with environments generated with Split MNIST and notMNIST and test on environments generated with Split FashionMNIST. Table \ref{tab:average_ranking_rl_experiment} shows the average ranking for DQN, A2C, and the baselines when generalizing to test environments with new datasets. We observe that both A2C and DQN successfully generalize to Split notMNIST environments, outperforming all baselines. However, the learned policies have difficulty generalizing to Split FashionMNIST environments, which could be due to high variations in the dynamics between training and test environments. The test environments may exhibit state transition dynamics that have not been experienced during training, which makes generalization difficult for both DQN and A2C. Potentially, the performance could be improved by generating more training environments for the agent to experience more variation in the CL scenarios, or by using other advanced RL methods which may generalize better~\cite{igl2019generalization}. \begin{table}[t] \footnotesize% \centering \caption{Average ranking (lower is better) for experiments on generalizing policies to environments with new task orders or a new dataset. We average the results over 10 test environments. } \label{tab:average_ranking_rl_experiment} \begin{tabular}{l c c c c c} \toprule & \multicolumn{3}{c}{ {\bf New Task Order}} & \multicolumn{2}{c}{ {\bf New Dataset}}\\ \cmidrule(lr){2-4} \cmidrule(lr){5-6} {\bf Method} & S-MNIST & S-FashionMNIST & S-CIFAR-10 & S-notMNIST & S-FashionMNIST \\ \midrule Random & 4.23 & 3.60 & 5.03 & 3.87 & 3.95\\ ETS & 3.80 & 4.57 & 5.37 & 4.27 & 3.63 \\ \midrule Heur-GD & 4.48 & 4.25 & 3.98 & 4.58 & {\bf 2.77} \\ Heur-LD & 4.65 & 3.65 & 3.75 & 4.97 & 5.10 \\ Heur-AT & 4.33 & 3.87 & 3.43 & 4.28 & 3.72 \\ \midrule DQN (Ours) & {\bf 3.23} & {\bf 3.55} & 3.68 & 3.20 & 4.37 \\ A2C (Ours) & 3.27 & 4.52 & {\bf 2.75} & {\bf 2.83} & 4.47 \\ \bottomrule \end{tabular} \vspace{-4mm} \end{table}
{ "timestamp": "2022-09-20T02:21:37", "yymm": "2209", "arxiv_id": "2209.08660", "language": "en", "url": "https://arxiv.org/abs/2209.08660" }
\section{Introduction} Online Linear Programming is an essential type of sequential decision-making process. Its formulation can be understood as optimizing the profit of selling a set of products to different customers, each of whom appears sequentially with an amount of products intended for purchase and a bid price. The seller must make an irrevocable decision at the time each customer appears. Mathematically, when we have $m$ different products with storage $b_i$ for the $i$th product, we hope to solve $$ \begin{array}{cl} \underset{x}{\operatorname{maximize}} & \sum_{j=1}^{n} r_{j} x_{j} \\ \text { subject to } & \sum_{j=1}^{n} a_{i j} x_{j} \leq b_{i}, \quad \forall i=1,2, \cdots, m \\ & 0 \leq x_{j} \leq 1, \quad \forall j=1,2, \cdots, n \end{array} $$ where $a_{ij}$ is the $j$th customer's requested amount of the $i$th product, $r_j$ is her bid price, $x_j$ is the decision the seller makes on whether to fulfill (either completely or partially) her order, and $n$ is the total number of selling periods. Such a formulation is widely applied in the fields of revenue management (\cite{12}), advertisement delivery (\cite{11}, \cite{10}), and resource allocation (\cite{8}, \cite{16}). The performance of various algorithms for solving this type of OLP is well studied when the stochastic inputs are i.i.d (\cite{5}, \cite{2}). Considerable progress has also been made in analyzing non-i.i.d inputs: \cite{21} studies the adversarial stochastic input model, and \cite{22}, \cite{3}, and \cite{23} study the permutation model. In this paper, we focus on analyzing the performance of the algorithms proposed in \cite{5} under the regenerative model. In \cite{5}, three algorithms, \ref{alg:1}, \ref{alg:2}, and \ref{alg:3}, are analyzed in the i.i.d model. Their regrets are proved to be bounded by $O(\sqrt{n})$, $O(\sqrt{n}\log n)$, and $O(\log n \log \log n)$, respectively. Recent developments based on \cite{5} include \cite{17}, \cite{15}, and \cite{18}, which improve revenue management by adopting dual-policy based algorithms; \cite{16}, which discusses the performance when the resource capacity does not scale up linearly with $n$; and \cite{13}, which improves the matching problem in the discrete form, widely applied in kidney exchange platforms and carpooling platforms. Hence, if we can further generalize the results for these three algorithms, we may find a number of promising applications. The first central goal of this paper is to analyze algorithms \ref{alg:1} and \ref{alg:2} using regenerative data, as defined in the next section. Intuitively, a regenerative process can be thought of as a process that can be decomposed into i.i.d cycles of random length. Hence, such a feature can model certain local dependencies and periodic behaviors of data. Some well-known regenerative models include certain types of Markov chains, which are popular models in financial modeling (\cite{24}). Another popular example of regenerative data is the inventory problem (\cite{25}). To achieve our goal, we proceed in the following steps.
In Section Two, we analyze the regenerative process and establish a concentration result, the first main result of this paper: \begin{theorem} (Exponential Bound for Regenerative Processes with Bounded Time) Suppose $|f(X)|$ is almost surely bounded by $M$, and $T_{0}, \tau_{i}$ are almost surely bounded by $T$. Then we have the following concentration bound: for $t>T M K / \epsilon$ with some large $K$, $$ \mathbb{P}\left(\frac{1}{t}\left|\int_{0}^{t}(f(X(s))-\alpha) d s\right|>\epsilon\right) \leq 2 \exp \left(-\frac{2 \epsilon^{2}(K-2)^{2}}{K^{2} \lambda M^{2} T^{2}} t\right)+\epsilon(\delta, K) $$ where $$ \epsilon(\delta, K)=2 \exp \left(-\frac{\delta^{2} t}{(\lambda-\delta)^{2} \lambda^{2} T^{2}}\right)+\exp \left(\left(2 \delta M-\frac{K-2}{K} \epsilon\right) t\right) $$ $$ \alpha=\frac{E \int_{T_{0}}^{T_{1}} f(X(s)) d s}{E \tau_{1}} $$ and $\delta, K$ are free parameters. \end{theorem} This Hoeffding-style inequality is critical in our algorithm analysis, because the regret defined in Section Four is essentially a minimax problem on distributional optimization, and a Hoeffding-style inequality only requires a certain upper bound on the data. In Section Three, we review the OLP models proposed by \cite{5} and extend the results to regenerative models. Specifically, we use the above concentration result to derive a regenerative dual convergence result essential for the regret analysis as the second main result: \begin{theorem} (Dual Convergence Theorem for Regenerative Processes) For a regenerative price process $r_i$, and under regularity conditions \ref{ass 1*}, \ref{ass 2*}, \ref{ass 3*}, there exists a constant $C$ such that $$ \mathbb{E}\left[\left\|\boldsymbol{p}_{n}^{*}-\boldsymbol{p}^{*}\right\|_{2}^{2}\right] \leq \frac{C m \log m \log \log n}{n} $$ holds for all $n \geq \max \{m, 3\}, m \geq 2$, and distributions $\mathcal{P}$ that satisfy those assumptions. Additionally, $$ \mathbb{E}\left[\left\|\boldsymbol{p}_{n}^{*}-\boldsymbol{p}^{*}\right\|_{2}\right] \leq C \sqrt{\frac{m \log m \log \log n}{n}} $$ \label{First Main Result} \end{theorem} Since the algorithms we are interested in analyzing belong to the dual-policy algorithms, the convergence in the dual paves the way for the regret analysis of dual-policy algorithms. In Section Four, we discuss the efficiency of algorithms \ref{alg:1} and \ref{alg:2}, and present their regret bounds as our third and fourth main results: \begin{theorem}(Regenerative Regret for Algorithm \ref{alg:1}) With the online policy $\boldsymbol{\pi}_{1}$ specified by Algorithm 1 with regenerative data, $$ \Delta_{n}\left(\boldsymbol{\pi}_{1}\right) \leq O(\sqrt{n}) $$ \end{theorem} \begin{theorem}(Regenerative Regret for Algorithm \ref{alg:2}) With the online policy $\boldsymbol{\pi}_{2}$ specified by Algorithm 2 with regenerative data, $$ \Delta_{n}\left(\pi_{2}\right) \leq O(\sqrt{n} \log n) $$ \end{theorem} In Section Five, we provide some numerical simulations, discuss the sources of regret, and use the numerical results to analyze two small modifications that can potentially improve the algorithms. As a result, we answer our first question, ``can the algorithms achieve the same efficiency if the stochastic inputs are not i.i.d but still stationary?'', by extending the theorems to the context of the regenerative model. In Section Six, we address the second question: ``how can we modify our algorithms if we know the stochastic inputs are trendy, hence not stationary?''
We provide some candidate algorithms and demonstrate their efficiency through numerical simulations. Hence, we leave the second question open and discuss future directions. \section{Regenerative Processes and Convergence Rate} A stochastic process $\mathbf{X}=\{X(t): t \geq 0\}$ is called a regenerative process, first defined by \cite{26}, if there exists a sequence of stopping times $0 \leq T_{0}<T_{1}<T_{2}<\ldots$ such that the post-$T_{k}$ processes $\left\{X\left(T_{k}+t\right): t \geq 0\right\}$ form an i.i.d sequence of processes. The intervals $\tau_{j-1}=T_j-T_{j-1}$ form an i.i.d sequence of random times. Intuitively, the process on the time interval $[0,t]$ is split into i.i.d cycles, except for $[0,T_0]$, the interval before the first regeneration, and the interval between the final regeneration before $t$ and the termination of the process. Since a regenerative process resembles an i.i.d sequence of random variables but exhibits many desirable traits such as periodic behaviors, it is natural to study the Law of Large Numbers for such processes \cite{26}: \begin{proposition}(Law of Large Numbers for Regenerative Processes) Suppose that $$\int_{T_i}^{T_{i+1}}|f(X(s))|ds $$ is integrable; then $$ \lim _{t \rightarrow \infty} \frac{1}{t} \int_{0}^{t} f(X(s)) d s=\frac{\mathbb{E}[R]}{\mathbb{E}[\tau]} $$ where $\tau=\tau_1$ is the length of the first full cycle and $R=\int_{T_0}^{T_1} f(X(s)) d s$ is the value over the first full cycle. \end{proposition} For convenience, we denote the regeneration rate $1/\mathbb{E}(\tau_1)$ by $\lambda$. Intuitively, the higher the regeneration rate, the more the process behaves like a standard i.i.d process. Similarly, there is a Central Limit Theorem for such processes \cite{26}: \begin{proposition}(Central Limit Theorem for Regenerative Processes) Suppose that $$\int_{T_i}^{T_{i+1}}(f(X(s)))^2ds $$ is integrable, and $T_0$ and $$\int_{0}^{T_0}|f(X(s))|ds $$ are finite almost surely; then $$ t^{1/2}\left(\frac{1}{t} \int_{0}^{t} f(X(s))\, ds-\alpha\right) \Rightarrow \sigma N(0,1), \qquad t \rightarrow \infty, $$ where $\alpha=\mathbb{E}[R]/\mathbb{E}[\tau_1]$, $N(0,1)$ is the standard normal distribution, and $\sigma^2$ is the normalized variance: $$\sigma^2=\frac{1}{\mathbb{E}(\tau_1)}\operatorname{Var}\left(\int_{T_i}^{T_{i+1}}f(X(s))\,ds-\alpha\tau_i\right). $$ \end{proposition} Those two propositions are sufficient for analyzing the limiting behaviors and approximations of regenerative processes, provided that $t$ is large. However, they say very little about the rate of convergence, a crucial element in applications. It is therefore the goal of this section to fill in this missing piece by introducing a regenerative version of one of the most commonly used bounds on the convergence rate in the i.i.d model: Hoeffding's inequality. \begin{proposition} (Hoeffding's Inequality for Bounded Variables) Let $Z_{1}, \ldots, Z_{n}$ be independent bounded random variables with $Z_{i} \in[a, b]$ for all $i$, where $-\infty<a \leq b<\infty$. Then $$ \mathbb{P}\left(\frac{1}{n} \sum_{i=1}^{n}\left(Z_{i}-\mathbb{E}\left[Z_{i}\right]\right) \geq t\right) \leq \exp \left(-\frac{2 n t^{2}}{(b-a)^{2}}\right) $$ and $$ \mathbb{P}\left(\frac{1}{n} \sum_{i=1}^{n}\left(Z_{i}-\mathbb{E}\left[Z_{i}\right]\right) \leq-t\right) \leq \exp \left(-\frac{2 n t^{2}}{(b-a)^{2}}\right) $$ for all $t \geq 0$.
\end{proposition} One of the main reasons for the popularity of Hoeffding's inequality is that, under the i.i.d.\ assumption, it gives an exponentially decaying upper bound on the deviation probability. We show that a similar result can be established for regenerative processes; this is our first main result: \begin{theorem} (Exponential Bound for Regenerative Processes with Bounded Time) Suppose $|f(X)|$ is almost surely bounded by $M$, and $T_0, \tau_i$ are almost surely bounded by $T$. Then we have the following concentration bound: suppose $t>TMK/\epsilon$ for some large $K$; then $$ \mathbb{P}\left(\frac{1}{t}\left| \int_{0}^{t}(f(X(s))-\alpha)ds \right|>{\epsilon}\right)\leq 2\exp\left({-\frac{2\epsilon^2 (K-2)^2}{K^2\lambda M^2T^2}t}\right)+\epsilon(\delta,K) $$ where $$\epsilon(\delta, K)=2 \exp \left(-\frac{\delta^{2} t}{(\lambda-\delta)^{2} \lambda^{2} T^{2}}\right)+\exp \left(\left(2 \delta M-\frac{K-2}{K}\epsilon\right) t\right) $$ and $$ \alpha=\frac{\mathbb{E} \int_{T_{0}}^{T_{1}} f(X(s)) d s}{\mathbb{E} \tau_{1}}, $$ and $\delta, K$ are free parameters. \end{theorem} Let us discuss what the parameter $\delta$ stands for. The first term of the upper bound is almost identical to Hoeffding's bound, normalized by the regeneration rate $\lambda$. The error probability splits into two parts: the first is the probability that the sample average is more than $\epsilon$ away from the true mean, conditioned on the event that the true number of regenerations differs from the expected number $\lambda t$ by less than $\delta t$; the second is the probability that the true number of regenerations differs from the expected number $\lambda t$ by more than $\delta t$. We have checked that the sum of these two parts is convex in $\delta$, so one can numerically approximate the optimal $\delta$ given reasonable beliefs about the bounds $M$ and $T$, the regeneration rate $\lambda$, and the error tolerance $\epsilon$. The proof is inspired by the central limit theorem proof in \cite{26}. This concentration result assumes a maximal regeneration time, which may not be realistic in practice: in a uniformly ergodic Markov model, for example, regeneration happens in geometric time. Related works include \cite{19}, which establishes a Hoeffding inequality without assuming bounded regeneration times but on a finite state space Markov chain, and \cite{20}, which establishes a Hoeffding inequality of a different form. A corollary follows if we have further information on the interval in which $f(X(s))$ lies: \begin{corollary} (One-Sided Exponential Bounds for Regenerative Processes with Bounded Time) Suppose $f(X)\in (a,b)$ and $T_0, \tau_i$ are almost surely bounded by $T$. Let $M=\max\{|a|,|b|\}$ and suppose $t>TMK/\epsilon$ for some large $K$. Then we have the following concentration bounds: $$ \mathbb{P}\left(\frac{1}{t} \int_{0}^{t}(f(X(s))-\alpha)ds < -{\epsilon}\right)\leq \exp\left({-\frac{2\epsilon^2 (K-2)^2}{K^2\lambda (b-a)^2T^2}t}\right)+\epsilon(\delta,K) $$ and $$ \mathbb{P}\left(\frac{1}{t} \int_{0}^{t}(f(X(s))-\alpha)ds > {\epsilon}\right)\leq \exp\left({-\frac{2\epsilon^2 (K-2)^2}{K^2\lambda (b-a)^2T^2}t}\right)+\epsilon(\delta,K) $$ where $$\epsilon(\delta, K)=2 \exp \left(-\frac{\delta^{2} t}{(\lambda-\delta)^{2} \lambda^{2} T^{2}}\right)+\exp \left(\left(2 \delta M-\frac{K-2}{K}\epsilon\right) t\right) $$ and $$ \alpha=\frac{\mathbb{E} \int_{T_{0}}^{T_{1}} f(X(s)) d s}{\mathbb{E} \tau_{1}}, $$ and $\delta, K$ are free parameters.
\end{corollary} \section{Online Linear Programming} \subsection{Background} Online Linear Programming is a sequential decision-making problem: it is concerned with solving the following linear program in the presence of incomplete information: \begin{equation} \tag{3.1}\label{eq:3.1} \underset{x}{\operatorname{maximize}} \sum_{j=1}^{n} r_{j} x_{j} \end{equation} $$ \begin{aligned} \text { subject to } & \sum_{j=1}^{n} a_{i j} x_{j} \leq b_{i}, \forall i=1,2, \cdots, m \\ & 0 \leq x_{j} \leq 1, \forall j=1,2, \cdots, n \end{aligned} $$ where $r=\left(r_{1}, r_{2}, \cdots, r_{n}\right)^{T}$ can be interpreted as the price vector, and the goal is to find an allocation, encoded by the decision variables $x_{j}$, that maximizes the total profit. In this setting, $a_{ij}$ is the amount of the $i$th resource required to fulfill the $j$th order, while $b=\left(b_{1}, b_{2}, \cdots, b_{m}\right)^{T}$ is the vector of resource capacities. In this section, we assume $(\boldsymbol{a}_j,r_j)$ follows some i.i.d.\ distribution. Such an assumption is commonly used when analyzing OLP (\cite{1}, \cite{2}, \cite{3}). The theoretical foundation for the i.i.d.\ case was first established in \cite{5}. We are therefore interested in extending the main result of \cite{5}, which shows that the dual multiplier, or shadow price, of the online problem converges to that of the offline problem. To analyze this problem, we consider the dual of this system: $$ \begin{aligned} \min & \sum_{i=1}^{m} b_{i} p_{i}+\sum_{j=1}^{n} y_{j} \\ \text { s.t. } & \sum_{i=1}^{m} a_{i j} p_{i}+y_{j} \geq r_{j}, \quad j=1, \ldots, n \\ & p_{i}, y_{j} \geq 0 \text { for all } i, j. \end{aligned} $$ Here the decision variables are $\boldsymbol{p}=\left(p_{1}, \ldots, p_{m}\right)^{\top}$ and $\boldsymbol{y}=\left(y_{1}, \ldots, y_{n}\right)^{\top}$. Let $\left(\boldsymbol{p}_{n}^{*}, \boldsymbol{y}_{n}^{*}\right)$ be an optimal solution for the dual LP. From the complementary slackness condition, we know the primal optimal solution satisfies $$ x_{j}^{*}=\left\{\begin{array}{ll} 1, & r_{j}>\boldsymbol{a}_{j}^{\top} \boldsymbol{p}_{n}^{*} \\ 0, & r_{j}<\boldsymbol{a}_{j}^{\top} \boldsymbol{p}_{n}^{*} \end{array}\right. $$ Therefore, if we can solve the dual system, we know what the decision vector should be. In fact, this complementary slackness condition gives discrete solutions whenever the bid price differs from $\boldsymbol{a}_{j}^{\top}\boldsymbol{p}_{n}^{*}$, where $\boldsymbol{p}_{n}^{*}$ is interpreted as the shadow price. If $r_{j}=\boldsymbol{a}_{j}^{\top} \boldsymbol{p}_{n}^{*}$, the optimal solution $x_{j}^{*}$ may take non-integer values. When only integer solutions are allowed, we can view a fractional value as a probabilistic action, with integer values representing deterministic actions; alternatively, we may simply accept or reject the order, depending on how conservative we want to be with the resources. Since $y_{j}\geq 0$, an equivalent way to write this system is $$ \begin{array}{l} \min \sum_{i=1}^{m} b_{i} p_{i}+\sum_{j=1}^{n}\left(r_{j}-\sum_{i=1}^{m} a_{i j} p_{i}\right)^{+} \\ \text {s.t. } p_{i} \geq 0, \quad i=1, \ldots, m. \end{array} $$ As a result, this optimization problem resembles a stochastic problem: \begin{equation} \min f_{n}(\boldsymbol{p}):=\sum_{i=1}^{m} d_{i} p_{i}+\frac{1}{n} \sum_{j=1}^{n}\left(r_{j}-\sum_{i=1}^{m} a_{i j} p_{i}\right)^{+} \\ \text {s.t. } p_{i} \geq 0, \quad i=1, \ldots, m. \label{eq:1} \end{equation} where $d_i=b_i/n$.
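To make the sample dual problem \eqref{eq:1} concrete, the following minimal sketch computes an approximate $\boldsymbol{p}_n^*$ numerically. The function name \texttt{sample\_dual\_price} and the use of NumPy/SciPy are our own illustrative choices, not part of the model; since $f_n$ is convex and piecewise linear, any (sub)gradient-based solver could be substituted for the quasi-Newton call used here.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def sample_dual_price(r, A, d):
    """Approximately minimize f_n(p) = d^T p + (1/n) sum_j (r_j - a_j^T p)^+
    over p >= 0.  Shapes: r is (n,), A is (n, m), d is (m,), d_i = b_i / n."""
    n, m = A.shape

    def f_n(p):
        surplus = r - A @ p                  # r_j - a_j^T p for each order j
        return d @ p + np.maximum(surplus, 0.0).sum() / n

    # f_n is convex piecewise linear; L-BFGS-B with numerical gradients
    # serves as a simple approximate minimizer for moderate m.
    res = minimize(f_n, x0=np.zeros(m), bounds=[(0.0, None)] * m,
                   method="L-BFGS-B")
    return res.x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m = 5000, 5
    A = np.abs(rng.normal(0.5, 1.0, size=(n, m)))
    r = rng.uniform(1.0, 5.0, size=n)
    print(sample_dual_price(r, A, d=0.25 * np.ones(m)))
\end{verbatim}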
Taking the expectation with respect to $r_j$ and $a_{ij}$ yields the stochastic counterpart: \begin{equation} \min f(\boldsymbol{p}):=\boldsymbol{d}^{\top} \boldsymbol{p}+\mathbb{E}\left[\left(r-\boldsymbol{a}^{\top} \boldsymbol{p}\right)^{+}\right] \\ \text {s.t. } \quad \boldsymbol{p} \geq \mathbf{0}, \label{eq:2} \end{equation} such that $$ \mathbb{E} f_{n}(\boldsymbol{p})=f(\boldsymbol{p}). $$ Therefore, given the distribution of $(r,\boldsymbol{a})$, we can evaluate the expected sample objective $\mathbb{E}f_{n}(\boldsymbol{p})$ through the function $f(\boldsymbol{p})$. The convergence problem is to show that the optimal solution to system \eqref{eq:1}, denoted $\boldsymbol{p}^*_n$, converges to the optimal solution to \eqref{eq:2}, denoted $\boldsymbol{p}^*$. This convergence can be viewed as an extension of the Law of Large Numbers in the dual space. To obtain a reasonable convergence result for the stochastic optimization, we first need some assumptions on the distribution of $(r,\boldsymbol{a})$: \begin{Assumption}[Boundedness and Linear Growth Capacity].\\ \noindent (a) $\left\{\left(r_{j}, \boldsymbol{a}_{j}\right)\right\}_{j=1}^{n}$ are generated i.i.d.\ from distribution $\mathcal{P}$.\\ \noindent (b) There exist constants $\bar{r}, \bar{a}>0$ such that $\left|r_{j}\right| \leq \bar{r}$ and $\left\|\boldsymbol{a}_{j}\right\|_{2} \leq \bar{a}$ almost surely.\\ \noindent (c) $d_{i}=b_{i} / n \in(\underline{d}, \bar{d})$ for $\underline{d}, \bar{d}>0$, $i=1, \ldots, m$. Denote $\Omega_{d}=\bigotimes_{i=1}^{m}(\underline{d}, \bar{d})$.\\ \noindent (d) $n>m$. \label{ass:1} \end{Assumption} Roughly speaking, this assumption asserts that the incoming orders and their prices are i.i.d.\ and bounded almost surely. Moreover, the resource capacities grow linearly in $n$, so that the service level remains relatively stable. Two consequences, discussed in Proposition 1 of \cite{5}, are that the optimal solution is almost surely bounded and that $f_n(\boldsymbol{p})$ and $f(\boldsymbol{p})$ are convex; it therefore makes sense to define $$ \Omega_{p}:=\left\{\boldsymbol{p} \in \mathbb{R}^{m}: \boldsymbol{p} \geq \mathbf{0}, \boldsymbol{e}^{\top} \boldsymbol{p} \leq \frac{\bar{r}}{\underline{d}}\right\} $$ where $\boldsymbol{e} \in \mathbb{R}^{m}$ is the all-one vector. We know that $\Omega_{p}$ contains all possible optimal solutions. Now we state the second assumption on the distribution of $(r,\boldsymbol{a})$: \begin{Assumption}[Non-degeneracy].\\ \noindent (a) The second-order moment matrix $\boldsymbol{M}:=\mathbb{E}_{(r, \boldsymbol{a}) \sim \mathcal{P}}\left[\boldsymbol{a a}^{\top}\right]$ is positive-definite.
Denote its minimum eigenvalue by $\lambda_{\min }$.\\ \noindent (b) There exist constants $\lambda$ and $\mu$ such that if $(r, \boldsymbol{a}) \sim \mathcal{P}$, $$ \lambda\left|\boldsymbol{a}^{\top} \boldsymbol{p}-\boldsymbol{a}^{\top} \boldsymbol{p}^{*}\right| \leq\left|\mathbb{P}\left(r>\boldsymbol{a}^{\top} \boldsymbol{p} \mid \boldsymbol{a}\right)-\mathbb{P}\left(r>\boldsymbol{a}^{\top} \boldsymbol{p}^{*} \mid \boldsymbol{a}\right)\right| \leq \mu\left|\boldsymbol{a}^{\top} \boldsymbol{p}-\boldsymbol{a}^{\top} \boldsymbol{p}^{*}\right| $$ holds for any $\boldsymbol{p} \in \Omega_{p}$.\\ \noindent (c) The optimal solution $\boldsymbol{p}^{*}$ to the stochastic optimization problem \eqref{eq:2} satisfies $p_{i}^{*}=0$ if and only if $d_{i}-\mathbb{E}_{(r, \boldsymbol{a}) \sim \mathcal{P}}\left[a_{i} I\left(r>\boldsymbol{a}^{\top} \boldsymbol{p}^{*}\right)\right]>0$. \label{ass:2} \end{Assumption} The second group of assumptions is called non-degeneracy: the first condition essentially requires the constraint matrix to be of full rank; the second imposes linear growth on the conditional probability, so that the bid prices are well behaved; and the third states strict complementarity for the stochastic program. When these two assumptions are satisfied, we have the following theorem from \cite{5}: \begin{theorem}[Dual Convergence Theorem] \label{thm:conv} Under Assumptions \ref{ass:1} and \ref{ass:2}, there exists a constant $C$ such that $$ \mathbb{E}\left[\left\|\boldsymbol{p}_{n}^{*}-\boldsymbol{p}^{*}\right\|_{2}^{2}\right] \leq \frac{C m \log m \log \log n}{n} $$ holds for all $n \geq \max \{m, 3\}$, $m \geq 2$, and distribution $\mathcal{P} \in \Xi$, where $\Xi$ denotes the family of distributions satisfying those assumptions. Additionally, $$ \mathbb{E}\left[\left\|\boldsymbol{p}_{n}^{*}-\boldsymbol{p}^{*}\right\|_{2}\right] \leq C \sqrt{\frac{m \log m \log \log n}{n}}. $$ \end{theorem} This Dual Convergence Theorem is the theoretical foundation of online learning algorithms, for it provides a provable convergence rate. Therefore, if we can derive a similar dual convergence theorem for regenerative processes, we provide the theoretical foundation to extend online learning algorithms beyond the i.i.d.\ restriction. \subsection{Regenerative Online Linear Programming} In this section, we prove the regenerative dual convergence theorem in the case where $\{\boldsymbol{a}_j\}$ satisfies the i.i.d.\ assumption while the proposed prices $\{r_j\}$ follow a regenerative process. First, let us recall the dual optimization problem we are interested in solving: \begin{equation} \min f_{n}(\boldsymbol{p}):=\sum_{i=1}^{m} d_{i} p_{i}+\frac{1}{n} \sum_{j=1}^{n}\left(r_{j}-\sum_{i=1}^{m} a_{i j} p_{i}\right)^{+} \text {s.t. } p_{i} \geq 0, \quad i=1, \ldots, m. \end{equation} By the Law of Large Numbers for regenerative processes, this converges to \begin{equation}\label{3.4} \min f(\boldsymbol{p}):=\boldsymbol{d}^{\top} \boldsymbol{p}+\frac{1}{\mathbb{E}\tau_1}\mathbb{E}\sum_{i=1}^{\tau_1}\left[\left(r_i-\boldsymbol{a}^{\top} \boldsymbol{p}\right)^{+}\right] \text {s.t. } \quad \boldsymbol{p} \geq \mathbf{0}. \end{equation} Let us observe that if $r$ is a non-delayed regenerative process, i.e., the process regenerates at the initial point, and if the horizon terminates exactly before the next regeneration, then $$ \mathbb{E} f_{n}(\boldsymbol{p})=f(\boldsymbol{p}). $$ Even though these two quantities do not agree in general, the difference decays exponentially.
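As a quick sanity check of the cycle-average representation underlying \eqref{3.4}, the following sketch simulates a toy regenerative price process (a $\pm 1$ random walk reflected into $\{1,\dots,5\}$ that regenerates whenever it returns to its initial state; all names and parameters are illustrative assumptions of ours) and compares the long-run time average of a function of the price with its cycle-average estimate $\mathbb{E}[R]/\mathbb{E}[\tau]$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def one_cycle(r_lo=1, r_hi=5):
    """One regeneration cycle of a toy price process: a +/-1 random walk
    on {r_lo, ..., r_hi}, started at r_lo, run until it returns to r_lo."""
    path, x = [], r_lo
    while True:
        x = min(max(x + rng.choice([-1, 1]), r_lo), r_hi)
        path.append(x)
        if x == r_lo:                        # regeneration point
            return path

g = lambda r: max(r - 2.5, 0.0)              # stands in for (r - a^T p)^+

# Long-run time average over a fixed horizon n (last cycle truncated).
n, path = 200000, []
while len(path) < n:
    path.extend(one_cycle())
time_avg = np.mean([g(r) for r in path[:n]])

# Cycle-average estimate E[R] / E[tau] from independent cycles.
cycles = [one_cycle() for _ in range(5000)]
R = np.array([sum(map(g, c)) for c in cycles])
tau = np.array([len(c) for c in cycles])
print(time_avg, R.mean() / tau.mean())       # the two should nearly agree
\end{verbatim}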
For the remainder of the section, let us assume the analogue of Assumption \ref{ass:1} for our regenerative process: \begin{Assumption}[Regenerative Boundedness and Linear Growth Capacity 1*].\\ \noindent (a) $\left\{\boldsymbol{a}_{j}\right\}_{j=1}^{n}$ is generated i.i.d.\ and $\left\{r_{j}\right\}_{j=1}^{n}$ is generated as a regenerative process from distribution $\mathcal{P}_n$.\\ \noindent (b) There exist constants $\bar{r}, \bar{a}>0$ such that $\left|r_{j}\right| \leq \bar{r}$ and $\left\|\boldsymbol{a}_{j}\right\|_{2} \leq \bar{a}$ almost surely.\\ \noindent (c) $d_{i}=b_{i} / n \in(\underline{d}, \bar{d})$ for $\underline{d}, \bar{d}>0$, $i=1, \ldots, m$. Denote $\Omega_{d}=\bigotimes_{i=1}^{m}(\underline{d}, \bar{d})$.\\ \noindent (d) $n>m$. \label{ass 1*} \end{Assumption} As before, we obtain boundedness of the optimal dual solution within the space $\Omega_{p}$ and convexity of $f_n(\boldsymbol{p})$ and $f(\boldsymbol{p})$. It then makes sense to assume, \begin{Assumption}[Regenerative Non-degeneracy 2*].\\ \noindent (a) The second-order moment matrix $\boldsymbol{M}:=\mathbb{E}_{(r, \boldsymbol{a}) \sim \mathcal{P}_n}\left[\boldsymbol{a a}^{\top}\right]$ is positive-definite for all $n$. Denote its minimum eigenvalue by $\lambda_{\min }$.\\ \noindent (b) There exist constants $\lambda$ and $\mu$ such that if $(r, \boldsymbol{a}) \sim \mathcal{P}_n$, $$ \lambda\left|\boldsymbol{a}^{\top} \boldsymbol{p}-\boldsymbol{a}^{\top} \boldsymbol{p}^{*}\right| \leq\left|\mathbb{P}\left(r>\boldsymbol{a}^{\top} \boldsymbol{p} \mid \boldsymbol{a}\right)-\mathbb{P}\left(r>\boldsymbol{a}^{\top} \boldsymbol{p}^{*} \mid \boldsymbol{a}\right)\right| \leq \mu\left|\boldsymbol{a}^{\top} \boldsymbol{p}-\boldsymbol{a}^{\top} \boldsymbol{p}^{*}\right| $$ holds for any $\boldsymbol{p} \in \Omega_{p}$, where $\Omega_{p}$ is as defined in Assumption \ref{ass:2}.\\ \noindent (c) The optimal solution $\boldsymbol{p}^{*}$ to the stochastic optimization problem \eqref{3.4} satisfies $p_{i}^{*}=0$ if and only if $d_{i}-\mathbb{E}_{(r, \boldsymbol{a}) \sim \mathcal{P}_n}\left[a_{i} I\left(r>\boldsymbol{a}^{\top} \boldsymbol{p}^{*}\right)\right]>0$ for all $n$. In the case where $p_i^*>0$, we call the $i$th resource binding. \label{ass 2*} \end{Assumption} For simplicity, we refer to these as Assumptions 1* and 2*. In addition, we assume \begin{Assumption} (Bounded and Independent Regenerative Times 3*) \\ \noindent (a) $\{r_i\}$ is a non-delayed regenerative process with i.i.d.\ cycle lengths $\tau_i$. \\ \noindent (b) The sequence $\{\tau_i\}$ is independent of the process $\{\boldsymbol{a}_i\}$ and bounded by $T$. \label{ass 3*} \end{Assumption} We denote this assumption as 3*. With these three assumptions, we can establish the following: \begin{theorem} (Dual Convergence Theorem for Regenerative Processes) For a regenerative price process $r_i$, under the regularity conditions \ref{ass 1*}, \ref{ass 2*}, \ref{ass 3*}, there exists a constant $C$ such that $$ \mathbb{E}\left[\left\|\boldsymbol{p}_{n}^{*}-\boldsymbol{p}^{*}\right\|_{2}^{2}\right] \leq \frac{C m \log m \log \log n}{n} $$ holds for all $n \geq \max \{m, 3\}$, $m \geq 2$, and every distribution $\mathcal{P}$ that satisfies those assumptions. Additionally, $$ \mathbb{E}\left[\left\|\boldsymbol{p}_{n}^{*}-\boldsymbol{p}^{*}\right\|_{2}\right] \leq C \sqrt{\frac{m \log m \log \log n}{n}}. $$ \end{theorem} Therefore, the Dual Convergence Theorem in \cite{5} becomes a special case of this more general result.
Since dual convergence is the theoretical foundation of OLP, this result provides the foundation for extending the power of OLP far beyond its original i.i.d.\ constraint. Moreover, it is worth pointing out that the strategy of the proofs does not depend explicitly on the dual objective function $f(\cdot)$; the results can therefore be extended easily to other learning programs that use a Law of Large Numbers approximation as their foundation. Now we present the key steps in proving the Dual Convergence Theorem for Regenerative Processes. The proof has the following structure: first, we decompose the difference between the optimal value $f(\boldsymbol{p}^{*})$ and the value at the sample optimum $f(\boldsymbol{p}_{n})$ into first- and second-order approximations (Proposition \ref{Q1}); second, we relate the dual convergence rate to the convergence rates of the first- and second-order approximations (Proposition \ref{Q2}); third, we provide convergence rates for the first- and second-order approximations (Propositions \ref{Q3} and \ref{Q4}); and fourth, we use these rates to derive the convergence rate of the dual (Theorem \ref{Q5}). We use $N(t)$ to denote the number of complete regenerations before time $t$, and $T(n)$ to denote the time at which the $n$th cycle is completed. To derive a tractable decomposition, we borrow the strategy of \cite{5} and define a function $h: \mathbb{R}^{m} \times \mathbb{R}^{m+1} \rightarrow \mathbb{R}$, $$ h(\boldsymbol{p}, \boldsymbol{u}):=\sum_{i=1}^{m} d_{i} p_{i}+\left(u_{0}-\sum_{i=1}^{m} u_{i} p_{i}\right)^{+}, $$ and a function $\phi: \mathbb{R}^{m} \times \mathbb{R}^{m+1} \rightarrow \mathbb{R}^{m}$, $$ \phi(\boldsymbol{p}, \boldsymbol{u}):=\frac{\partial h(\boldsymbol{p}, \boldsymbol{u})}{\partial p}=\left(d_{1}, \ldots, d_{m}\right)^{\top}-\left(u_{1}, \ldots, u_{m}\right)^{\top} \cdot I\left(u_{0}>\sum_{i=1}^{m} u_{i} p_{i}\right), $$ where $\boldsymbol{u}=\left(u_{0}, u_{1}, \ldots, u_{m}\right)^{\top}$ and $\boldsymbol{p}=\left(p_{1}, \ldots, p_{m}\right)^{\top}$. From \cite{5} we know that the function $\phi$ is the partial sub-gradient of the function $h$ with respect to $\boldsymbol{p}$; in particular, $\phi(\boldsymbol{p}, \boldsymbol{u})=\boldsymbol{d}$ when $u_{0}=\sum_{i=1}^{m} u_{i} p_{i}$. We then denote $$f(\boldsymbol{p}):=\frac{1}{\mathbb{E}\tau_1}\,\mathbb{E} \sum_{i=1}^{\tau_1} h(\boldsymbol{p},\boldsymbol{u}_i), \qquad \hat{f}_n(\boldsymbol{p}):=\mathbb{E}\, h(\boldsymbol{p},\boldsymbol{u}_n),$$ and $$\nabla f(\boldsymbol{p}):=\frac{1}{\mathbb{E}\tau_1}\,\mathbb{E}\sum_{i=1}^{\tau_1} \phi(\boldsymbol{p},\boldsymbol{u}_i), \qquad \nabla \hat{f}_n(\boldsymbol{p}):=\mathbb{E}\,\phi(\boldsymbol{p},\boldsymbol{u}_n),$$ where $\boldsymbol{u}_i=(r_i,\boldsymbol{a}_i)\sim\mathcal{P}_i$. The key difference is that we distinguish between $f$ and $\hat{f}_n$, for the expected value of a regenerative process at a fixed time does not agree with its limiting sample average in general. The sub-gradient allows us to analyze the functions to first and second order, an idea encapsulated in the following proposition: \begin{proposition} \label{Q1} For any $\boldsymbol{p} \geq \mathbf{0}$, with $\lambda=1/\mathbb{E}\tau_1$, we have the identities $$ f(\boldsymbol{p})-f\left(\boldsymbol{p}^{*}\right)={\nabla f\left(\boldsymbol{p}^{*}\right)\left(\boldsymbol{p}-\boldsymbol{p}^{*}\right)}+{\lambda\,\mathbb{E}\sum_{i=1}^{\tau_1}\left[\int_{\boldsymbol{a}_i^{\top} \boldsymbol{p}}^{\boldsymbol{a}_i^{\top} \boldsymbol{p}^{*}}\left(I(r_i>v)-I\left(r_i>\boldsymbol{a}_i^{\top} \boldsymbol{p}^{*}\right)\right) d v\right]} $$ and $$ \hat{f}_n(\boldsymbol{p})-\hat{f}_n\left(\boldsymbol{p}^{*}\right)={\nabla \hat{f}_n\left(\boldsymbol{p}^{*}\right)\left(\boldsymbol{p}-\boldsymbol{p}^{*}\right)}+{ \mathbb{E}\left[\int_{\boldsymbol{a}_n^{\top} \boldsymbol{p}}^{\boldsymbol{a}_n^{\top} \boldsymbol{p}^{*}}\left(I(r_n>v)-I\left(r_n>\boldsymbol{a}_n^{\top} \boldsymbol{p}^{*}\right)\right) d v\right]} . $$ \end{proposition} The second step is to use this proposition to show the Lipschitz continuity of $f(\cdot)$ and the uniqueness of the optimal solution $\boldsymbol{p}^*$: \begin{proposition} \label{Q2} Under Assumptions 1*, 2*, and 3*, for $\boldsymbol{p} \in \Omega_{p}$, $$ \frac{\lambda \lambda_{\min }}{2}\left\|\boldsymbol{p}-\boldsymbol{p}^{*}\right\|_{2}^{2} \leq f(\boldsymbol{p})-f\left(\boldsymbol{p}^{*}\right)-\nabla f\left(\boldsymbol{p}^{*}\right)\left(\boldsymbol{p}-\boldsymbol{p}^{*}\right) \leq \frac{\mu \bar{a}^{2}}{2}\left\|\boldsymbol{p}-\boldsymbol{p}^{*}\right\|_{2}^{2}, $$ where $\lambda$ and $\mu$ here are the constants of Assumption \ref{ass 2*}(b). Moreover, the optimal solution $\boldsymbol{p}^{*}$ to the stochastic program \eqref{3.4} is unique.\label{4.5} \end{proposition} The above proposition shows that our Assumption \ref{ass 2*} imposes strong local convexity and smoothness around $\boldsymbol{p}^*$, as in the i.i.d.\ counterpart. It is not surprising that under the same regularity conditions the dual objective functions exhibit the same characteristics. With such a relationship between the dual objective functions and the dual optimum, to bound the convergence rate of the dual optimum it suffices to bound the convergence rate of the dual objective functions. With this goal, we proceed to the concentration of the first- and second-order approximations: \begin{proposition} \label{Q3} We have $$ \mathbb{P}\left(\left\|\frac{1}{n} \sum_{i=1}^{n} \phi\left(\boldsymbol{p}^{*},\boldsymbol{u}_i\right)-\nabla f\left(\boldsymbol{p}^{*}\right)\right\|_{2} \geq \epsilon\right)\leq 2m \exp \left(-\frac{ \epsilon^{2}(K-2)^{2}}{2 K^{2} \lambda \bar{a}^2 T^{2} m}\, n\right)+m\,\tilde{\epsilon}(\delta, K) $$ where $K=n\epsilon/\left(T(\bar{d}+\bar{a})\right)$, $$ \tilde{\epsilon}(\delta, K)=2 \exp \left(-\frac{\delta^{2} n}{(\lambda-\delta)^{2} \lambda^{2} T^{2}}\right)+\exp \left(\left(2 \delta (\bar{d}+\bar{a})-\frac{K-2}{K\sqrt{m}} \epsilon \right) n\right), $$ and $\boldsymbol{u}_i\sim \mathcal{P}_i$.\label{4.7} \end{proposition} The above result establishes the concentration of the first-order approximation of the dual objective. The additional error term results from the incomplete final cycle of the regenerative process. Since the sub-gradient satisfies $\nabla f(\boldsymbol{p}^{*})_i=0$ for every binding resource $i$, the sample average gradient concentrates around zero, up to the small amount of resource accumulated over the incomplete cycle.
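For concreteness, the building blocks $h$ and $\phi$ translate directly into code; the following small sketch (with our own naming, and $\boldsymbol{d}$ passed as a fixed parameter) can be combined with Monte Carlo sampling to estimate the sample gradient appearing in Proposition \ref{Q3}.
\begin{verbatim}
import numpy as np

def h(p, u, d):
    """h(p, u) = d^T p + (u_0 - sum_i u_i p_i)^+, u = (u_0, u_1, ..., u_m)."""
    u0, uv = u[0], u[1:]
    return d @ p + max(u0 - uv @ p, 0.0)

def phi(p, u, d):
    """Partial sub-gradient of h in p: d - (u_1, ..., u_m) * 1{u_0 > u^T p}."""
    u0, uv = u[0], u[1:]
    return d - uv * float(u0 > uv @ p)

# Sample gradient at p over data u_1, ..., u_n (each u_i = (r_i, a_i)):
#     grad_n = np.mean([phi(p, u, d) for u in data], axis=0)
\end{verbatim}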
Let us show further that the second-order term is uniformly bounded below with high probability: \begin{proposition} \label{Q4} We have $$ \mathbb{P}\left(\frac{1}{n} \sum_{j=1}^{n} \int_{\boldsymbol{a}_{j}^{\top} \boldsymbol{p}}^{\boldsymbol{a}_{j}^{\top} \boldsymbol{p}^{*}}\left(I\left(r_{j}>v\right)-I\left(r_{j}>\boldsymbol{a}_{j}^{\top} \boldsymbol{p}^{*}\right)\right) d v \geq-\epsilon^{2}-2 \epsilon \bar{a}\left\|\boldsymbol{p}^{*}-\boldsymbol{p}\right\|_{2}+\frac{\lambda \lambda_{\min }}{32}\left\|\boldsymbol{p}^{*}-\boldsymbol{p}\right\|_{2}^{2} \text { for all } \boldsymbol{p} \in \Omega_{p}\right) $$ $$\geq 1-m \exp \left(-\frac{n \lambda_{\min }^{2}}{4 \bar{a}^{2}}\right)-(2 N)^{m}\cdot 2 \exp \left(-\frac{n \epsilon^{2}}{2\lambda T^2}\right)-(2N)^m\left[2 \exp \left(-\frac{\delta^{2} n}{(\lambda-\delta)^{2} \lambda^{2} T^{2}}\right)+\exp \left(\left(2 \delta -\frac{K-2}{K} \epsilon\right) n\right)\right]$$ holds for any $\epsilon>0$, $n>m$, and any $\mathcal{P}$ that satisfies Assumptions 1*, 2*, and 3*. Here $$ N=\left\lfloor\log _{q}\left(\frac{\underline{d} \epsilon^{2}}{\bar{a} \bar{r} \sqrt{m}}\right)\right\rfloor+1, \quad q=\max \left\{\frac{1}{1+\frac{1}{\sqrt{m}}}, \frac{1}{1+\frac{1}{\sqrt{m}}\left(\frac{\lambda \lambda_{\min }}{8 \mu \bar{a}^{2}}\right)^{\frac{1}{3}}}\right\}, $$ where $\lfloor\cdot\rfloor$ is the floor function. \label{4.8} \end{proposition} The above proposition establishes that the second-order term is uniformly bounded below with high probability. Together, the two propositions on the concentration of the first- and second-order terms impose a concentration bound on a quadratic function of $\boldsymbol{p}$; namely, the following holds with high probability: $$ \begin{aligned} f_{n}(\boldsymbol{p})-f_{n}\left(\boldsymbol{p}^{*}\right) & \geq \nabla f\left(\boldsymbol{p}^{*}\right)^{\top}\left(\boldsymbol{p}-\boldsymbol{p}^{*}\right)-\epsilon\left\|\boldsymbol{p}^{*}-\boldsymbol{p}\right\|_{2}-\epsilon^{2}-2 \epsilon \bar{a}\left\|\boldsymbol{p}^{*}-\boldsymbol{p}\right\|_{2}+\frac{\lambda \lambda_{\min }}{32}\left\|\boldsymbol{p}^{*}-\boldsymbol{p}\right\|_{2}^{2} \\ & \geq-\epsilon^{2}-(2 \bar{a}+1) \epsilon\left\|\boldsymbol{p}^{*}-\boldsymbol{p}\right\|_{2}+\frac{\lambda \lambda_{\min }}{32}\left\|\boldsymbol{p}^{*}-\boldsymbol{p}\right\|_{2}^{2} \text { uniformly for all } \boldsymbol{p} \in \Omega_{p}. \end{aligned} $$ This form is identical to equation (13) in \cite{5}, because the proofs of these propositions depend largely on the regularity conditions in our assumptions; neither the i.i.d.\ nor the regenerative structure plays a significant role. It is expected that other stationary price processes may also exhibit a similar characteristic. Hence, one natural extension of the dual convergence theorem is to ask whether this quadratic bound also exists for other types of price data. Since this bound is the key to proving the dual convergence theorem, which in turn is almost sufficient for the regret bounds below, any successful extension of this quadratic bound to another price process would make the regret analysis of dual algorithms on that process possible.
The proof technique is similar to \cite{5}: since the proposition requires a uniform bound over uncountably many elements, we first partition the space into sets, then apply the regenerative Hoeffding inequality to a representative element of each set, and finally show that every element is uniformly close to one of the representatives, which concludes the proof. The details can be found in the appendix. Now we are ready to prove the dual convergence for the regenerative process: \begin{theorem} \label{Q5} Under Assumptions 1*, 2*, and 3*, there exists a constant $C$ such that $$ \mathbb{E}\left[\left\|\boldsymbol{p}_{n}^{*}-\boldsymbol{p}^{*}\right\|_{2}^{2}\right] \leq \frac{C m \log m \log \log n}{n} $$ holds for all $n \geq \max \{m, 3\}$, $m \geq 2$, and every distribution $\mathcal{P}$ that satisfies Assumptions 1*, 2*, and 3*. Additionally, $$ \mathbb{E}\left[\left\|\boldsymbol{p}_{n}^{*}-\boldsymbol{p}^{*}\right\|_{2}\right] \leq C \sqrt{\frac{m \log m \log \log n}{n}}. $$\label{4.11} \end{theorem} The significance of this theorem, as we will demonstrate in the regret analysis, is that it provides an error bound for dual-policy algorithms: if the sample dual converges to the true offline dual fast enough, the accumulated error must be small. This idea is made precise in the regret decomposition below. Compared to its original version in \cite{5}, the theorem shows that the regenerative process enjoys the same order of convergence; hence we can expect the same order of regret for the algorithms. As discussed above, other stationary price processes very likely admit such a dual convergence theorem as well; investigating this possibility remains an interesting open problem. \section{Regret Analysis for Algorithms} \subsection{Regret Analysis} In this section, we shift our focus to analyzing the regret of regenerative online linear programming with policies based on the dual optimization. We formally define the regret and the dual-based policy after a short introduction. Recall that the procedure of our dual algorithms depends on the following comparison: \begin{equation}\tag{6.1}\label{eq:6.1} x_{j}^{*}=\left\{\begin{array}{ll} 1, & r_{j}>\boldsymbol{a}_{j}^{\top} \boldsymbol{p}_{n}^{*} \\ 0, & r_{j}\leq \boldsymbol{a}_{j}^{\top} \boldsymbol{p}_{n}^{*} \end{array}\right. \end{equation} where $x_{j}^{*}$ is the optimal decision. This characterization holds when the complementary slackness condition is in force. Hence, under such a dual policy procedure, our optimization problem can be reformulated as $$ \begin{array}{rl} \max _{\boldsymbol{p} \geq \mathbf{0}} & \mathbb{E}\left[r I\left(r>\boldsymbol{a}^{\top} \boldsymbol{p}\right)\right] \\ \text { s.t. } & \mathbb{E}\left[\boldsymbol{a} I\left(r>\boldsymbol{a}^{\top} \boldsymbol{p}\right)\right] \leq \boldsymbol{d}. \end{array} $$ In practice, however, we need not spend the effort to compute the exact $\boldsymbol{p}_{n}^{*}$ at every step; nor do we know the optimal dual before the program completes. Hence, if a decent approximation of $\boldsymbol{p}_{n}^{*}$ is available at each $n$ and we follow the same procedure as \eqref{eq:6.1} with the approximate dual optimum, we obtain a small regret; that is, the difference between the true optimal revenue and our actual revenue should be small.
This reasoning is exactly why we need the convergence of the dual optima: it provides the theoretical basis for the regret analysis. Let us now define the regret formally. Suppose $\boldsymbol{a}_i$ is generated i.i.d.\ while $r_i$ follows a regenerative process. We denote the offline optimal solution by $\boldsymbol{x}^{*}=\left(x_{1}^{*}, \ldots, x_{n}^{*}\right)^{\top}$, and the offline (online) objective value by $R_{n}^{*}$ ($R_{n}$). Specifically, $$ \begin{aligned} R_{n}^{*} &:=\sum_{j=1}^{n} r_{j} x_{j}^{*}, \\ R_{n}(\pi) &:=\sum_{j=1}^{n} r_{j} x_{j} . \end{aligned} $$ A quick observation: since $R_{n}^{*}$ is the revenue generated by a policy with full knowledge of the realization, it upper-bounds the revenue of any other policy. The regret therefore compares these two objects: \begin{definition} We define the regret as $$ \Delta_{n}^{\mathcal{P}}(\boldsymbol{\pi}):=\mathbb{E}_{\mathcal{P}}\left[R_{n}^{*}-R_{n}(\boldsymbol{\pi})\right] $$ and the worst-case regret as $$ \Delta_{n}(\boldsymbol{\pi}):=\sup _{\mathcal{P} \in \Xi} \Delta_{n}^{\mathcal{P}}(\boldsymbol{\pi})=\sup _{\mathcal{P} \in \Xi} \mathbb{E}_{\mathcal{P}}\left[R_{n}^{*}-R_{n}(\boldsymbol{\pi})\right]. $$ \label{6.1} \end{definition} When the distribution is known, the regret of the first kind is sufficient for our analysis. However, when we only know certain regularity conditions on the distribution, we encounter a distributional optimization problem, as illustrated by the worst-case regret. We now also formally define our dual-based policy. \begin{definition} A dual-based policy $\{x_t\}$ is a policy constructed by the following procedure: first we compute some vector, interpreted as an approximation of the dual optimum, $\boldsymbol{p}_{t}=h_{t}\left(\mathcal{H}_{t-1}\right)$, where $\mathcal{H}_{t-1}=\left\{r_{j}, \boldsymbol{a}_{j}, x_{j}\right\}_{j=1}^{t-1}$. Then we set the candidate decision as $$ \tilde{x}_{t}=\left\{\begin{array}{ll} 1, & \text { if } r_{t}>\boldsymbol{a}_{t}^{\top} \boldsymbol{p}_{t} \\ 0, & \text { if } r_{t} \leq \boldsymbol{a}_{t}^{\top} \boldsymbol{p}_{t}. \end{array}\right. $$ To adopt the candidate decision, we check whether doing so would violate the resource constraints: $$ x_{t}=\left\{\begin{array}{ll} \tilde{x}_{t}, & \text { if } \sum_{j=1}^{t-1} a_{i j} x_{j}+a_{i t} \tilde{x}_{t} \leq b_{i}, \quad \text { for } i=1, \ldots, m \\ 0, & \text { otherwise. } \end{array}\right. $$ A policy based on this rule is called a dual-based policy. \end{definition} Since our procedure, in the one-sided situation, terminates when the resources are depleted, it makes sense to define the stopping time for resource depletion as $$ \tau_{s}:=\min \left(\{n\} \cup\left\{t \geq 1: \min _{i} b_{i t}<s\right\}\right) $$ where $\boldsymbol{b}_{0}=\boldsymbol{b}=n \boldsymbol{d}$ and $\boldsymbol{b}_{t}=\boldsymbol{b}_{t-1}-\boldsymbol{a}_{t} x_{t}$ represents the left-over resources after time $t$. This stopping time fires when some resource $i$ at time $t$ falls below a threshold amount $s$. In practice, future orders may still be fulfilled after some resource falls below the threshold. Moreover, in the double-sided situation, where orders represent both buyers and sellers, such a stopping time causes no issue for the program.
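As a reference implementation of the definition above, here is a minimal sketch of the dual-based policy loop (NumPy-based; the helper name \texttt{price\_rule}, which maps the history $\mathcal{H}_{t-1}$ to a price vector $\boldsymbol{p}_t$, is our own abstraction).
\begin{verbatim}
import numpy as np

def dual_based_policy(r, A, b, price_rule):
    """Run a dual-based policy: accept order t iff r_t > a_t^T p_t and the
    remaining resources permit.  Shapes: r is (n,), A is (n, m), b is (m,)."""
    n, m = A.shape
    left = np.asarray(b, dtype=float).copy()      # b_t, left-over resources
    x = np.zeros(n, dtype=int)
    history = []                                  # H_{t-1} = {(r_j, a_j, x_j)}
    for t in range(n):
        p_t = price_rule(history)
        candidate = 1 if r[t] > A[t] @ p_t else 0          # x~_t
        if candidate and np.all(A[t] <= left):             # feasibility check
            x[t] = 1
            left -= A[t]
        history.append((r[t], A[t], x[t]))
    return x, left
\end{verbatim}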
However, assuming the orders are time-homogeneous, the resource depletion rate should be linear in time, and any early resource depletion represents a certain amount of resource misuse. We will see how such early depletion increases the regret. To study the regret, let us consider the following optimization problem: \begin{equation}\tag{6.2}\label{eq:6.2} \max _{\boldsymbol{p} \geq \mathbf{0}} \frac{1}{\mathbb{E}\tau_1}\mathbb{E}\sum_{i=1}^{\tau_1}\left[r_i I\left(r_i>\boldsymbol{a}_i^{\top} \boldsymbol{p}\right)\right] \text { s.t. } \frac{1}{\mathbb{E}\tau_1} \mathbb{E}\left[\sum_{i=1}^{\tau_1}\boldsymbol{a}_i I\left(r_i>\boldsymbol{a}_i^{\top} \boldsymbol{p}\right)\right] \leq \boldsymbol{d}. \end{equation} This optimization can be seen as the deterministic relaxation of the stochastic program \eqref{eq:3.1}; it differs from \cite{5} mainly in that we must average over an entire regeneration period. The reason for this formulation is that it provides a clean and tractable upper bound: recall that when $n$ is large, the average reward collected from each order in \eqref{eq:3.1} is approximately the same as in \eqref{eq:6.2}. Let us consider the Lagrangian of the deterministic formulation, $$ g(\boldsymbol{p}):=\frac{1}{\mathbb{E}\tau_1}\mathbb{E}\sum_{i=1}^{\tau_1}\left[r_i I\left(r_i>\boldsymbol{a}_i^{\top} \boldsymbol{p}\right)+\left(\boldsymbol{d}-\boldsymbol{a}_i I\left(r_i>\boldsymbol{a}_i^{\top} \boldsymbol{p}\right)\right)^{\top} \boldsymbol{p}^{*}\right] $$ where $\boldsymbol{p}^{*}$ is the optimal solution to \eqref{3.4}. Since our price process is not i.i.d., it makes sense to let the Lagrangian depend on time: $$ g_{i}(\boldsymbol{p}):=\mathbb{E}\left[r_{i} I\left(r_{i}>\boldsymbol{a}_{i}^{\top} \boldsymbol{p}\right)+\left(\boldsymbol{d}-\boldsymbol{a}_{i} I\left(r_{i}>\boldsymbol{a}_{i}^{\top} \boldsymbol{p}\right)\right)^{\top} \boldsymbol{p}^{*}\right]. $$ To formalize the idea that the expected revenue $R_{n}^{*}$ is bounded by this tractable form, let us prove the following proposition: \begin{proposition}\label{R1} Under Assumptions \ref{ass 1*}, \ref{ass 2*}, and \ref{ass 3*}, we have $$ \mathbb{E} R_{n}^{*} \leq \sum_{i=1}^{n}g_i\left(\boldsymbol{p}^{*}\right) \quad\text{and}\quad g_i\left(\boldsymbol{p}^{*}\right) \geq g_i(\boldsymbol{p}) $$ for any $\boldsymbol{p} \geq 0$. Additionally, $$ g_i\left(\boldsymbol{p}^{*}\right)-g_i(\boldsymbol{p}) \leq \mu \bar{a}^{2}\left\|\boldsymbol{p}^{*}-\boldsymbol{p}\right\|_{2}^{2} $$ holds for all $\boldsymbol{p} \in \Omega_{p}$ and every distribution $\mathcal{P}$ that satisfies those three assumptions. \end{proposition} With this result, let us analyze the worst-case regret of Definition \ref{6.1}. In particular, there are three sources of regret in such a program. The first is the approximation regret, which results from using a non-optimal dual in the policy-making procedure and which accumulates linearly over the operating time. The second is the early-termination regret, which results from the program terminating so early that profitable orders near the end are left unfulfilled; this resembles a tail risk, since highly profitable orders may accumulate at the end. The third is the resource regret, which results from not utilizing all the resources, especially the binding resources that, from the complementarity perspective, constitute the bottleneck for optimizing the objective function.
Let us formalize these ideas in the following theorem: \begin{theorem} \label{R2} Under Assumptions \ref{ass 1*}, \ref{ass 2*}, and \ref{ass 3*}, there exists a constant $K$ such that the worst-case regret under policy $\boldsymbol{\pi}$ satisfies $$ \Delta_{n}(\boldsymbol{\pi}) \leq K \cdot \mathbb{E}\left[\sum_{t=1}^{\tau_{\bar{a}}}\left\|\boldsymbol{p}_{t}-\boldsymbol{p}^{*}\right\|_{2}^{2}+\left(n-\tau_{\bar{a}}\right)+\sum_{i \in I_{B}} b_{i n}\right] $$ for all $n>0$. Here $I_{B}$ is the set of binding constraints, $\boldsymbol{p}_{t}$ is specified by the policy $\boldsymbol{\pi}$, and $\boldsymbol{p}^{*}$ is the optimal dual solution.\label{6.4} \end{theorem} Therefore, as discussed above, a good policy should have the following features: first, the average error between the approximate and the true dual optimum should be small; second, the consumption rate should be smooth; and third, all binding resources should be utilized with no waste. It is in this regret theorem that we see exactly why we need the dual convergence theorem (Theorem \ref{First Main Result}). One corollary to this theorem is \begin{corollary} Using the same notation as above, for any $\boldsymbol{b}_{t}$-adapted stopping time $\tau$ with $\mathbb{P}\left(\tau \leq \tau_{\bar{a}}\right)=1$, $$ \Delta_{n}(\boldsymbol{\pi}) \leq K \cdot \mathbb{E}\left[\sum_{t=1}^{\tau-1}\left\|\boldsymbol{p}_{t}-\boldsymbol{p}^{*}\right\|_{2}^{2}+(n-\tau)+\sum_{i \in I_{B}} b_{i n}\right]. $$ \end{corollary} Theorem \ref{R2} above establishes that the best possible upper bound for the efficiency of our algorithms is of the same order as the Dual Convergence Theorem; hence, for any geometrically updating algorithm, an accumulated dual error of order $\log n \log \log n$ is the best upper bound the Dual Convergence Theorem can give. We discuss this in more detail later. \subsection{When the Distribution is Known} In this section, we discuss the regret of the known-distribution algorithm of \cite{5}, using the regret analysis derived in the previous section. \begin{algorithm}[h] \caption{Known Distribution}\label{alg:1} \begin{algorithmic}[1] \State Input: $n, d_{1}, \ldots, d_{m}$, Distribution $\mathcal{P}$ \State Compute the optimal solution of the stochastic programming problem $$ \begin{array}{l} \boldsymbol{p}^{*}=\arg \min\ \boldsymbol{d}^{\top} \boldsymbol{p}+\frac{1}{\mathbb{E}(\tau_{1})}\,\mathbb{E}_{(r, \boldsymbol{a}) \sim \mathcal{P}}\sum_{i=1}^{\tau_{1}}\left[\left(r_i-\boldsymbol{a}_i^{\top} \boldsymbol{p}\right)^{+}\right] \\ \text {s.t. } \boldsymbol{p} \geq 0 \end{array} $$ \For{ $t=1, \ldots, n$} \State$\quad$ If constraints are not violated, choose $$ x_{t}=\left\{\begin{array}{ll} 1, & \text { if } r_{t}>\boldsymbol{a}_{t}^{\top} \boldsymbol{p}^{*} \\ 0, & \text { if } r_{t} \leq \boldsymbol{a}_{t}^{\top} \boldsymbol{p}^{*}. \end{array}\right. $$ \EndFor \end{algorithmic} \end{algorithm} The regret bound for this algorithm is given by \begin{theorem}\label{Al1} With the online policy $\boldsymbol{\pi}_{1}$ specified by Algorithm \ref{alg:1}, $$ \Delta_{n}\left(\boldsymbol{\pi}_{1}\right) \leq O(\sqrt{n}). $$ \end{theorem} Essentially, knowing the distribution of the data is powerful enough to achieve sub-linear regret. Hence, to optimize the objective value with sub-linear regret, we do not need to examine every data point in the sequence, and the optimization problem can be transformed into a statistical problem of distributional approximation.
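In simulations, the regret of Definition \ref{6.1} can be estimated by comparing a policy's realized revenue against the offline benchmark $R_n^*$, obtained by solving the linear program \eqref{eq:3.1} (here in its relaxed form) on the realized data. A sketch using \texttt{scipy.optimize.linprog} with the HiGHS backend:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def offline_optimum(r, A, b):
    """R_n^*: max r^T x  s.t.  sum_j a_{ij} x_j <= b_i and 0 <= x_j <= 1,
    solved as an LP relaxation on the realized data.  A has shape (n, m)."""
    res = linprog(c=-np.asarray(r), A_ub=np.asarray(A).T, b_ub=np.asarray(b),
                  bounds=[(0.0, 1.0)] * len(r), method="highs")
    return -res.fun

# Empirical regret of 0/1 decisions x on one sample path: offline_optimum(...)
# minus r @ x; averaging over independent paths approximates Delta_n.
\end{verbatim}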
\subsection{Dynamic Learning Algorithm} The above algorithm assumes knowledge of the distribution, which is usually unavailable in applications. We therefore want to approximate the dual optimum as more information becomes available. The question becomes how frequently we should update the dual price, since each update carries a computational cost. As more information arrives, the dual price becomes a better approximation of the true dual optimum, so updates can become less frequent. To implement this idea, the algorithm below adopts a geometric update rule: \begin{algorithm}[H] \caption{Dynamic Learning Algorithm}\label{alg:2} \begin{algorithmic}[1] \State Input: $d_{1}, \ldots, d_{m}$ where $d_{i}=b_{i} / n$ \State Initialize: Find $\delta \in(1,2]$ and $L>0$ s.t. $\left\lfloor\delta^{L}\right\rfloor=n$. \State Let $t_{k}=\left\lfloor\delta^{k}\right\rfloor, k=1,2, \ldots, L-1$ and $t_{L}=n+1$ \State Set $x_{1}=\ldots=x_{t_{1}}=0$ \For{ $k=1,2, \ldots, L-1$ } \State Specify an optimization problem $$ \begin{aligned} \max & \sum_{j=1}^{t_{k}} r_{j} x_{j} \\ \text { s.t. } & \sum_{j=1}^{t_{k}} a_{i j} x_{j} \leq t_{k} d_{i}, \quad i=1, \ldots, m \\ & 0 \leq x_{j} \leq 1, \quad j=1, \ldots, t_{k} \end{aligned} $$ \State Solve its dual problem and obtain the optimal dual variable $\boldsymbol{p}_{k}^{*}$ $$ \begin{array}{c} \boldsymbol{p}_{k}^{*}=\underset{p}{\arg \min } \sum_{i=1}^{m} d_{i} p_{i}+\frac{1}{t_{k}} \sum_{j=1}^{t_{k}}\left(r_{j}-\sum_{i=1}^{m} a_{i j} p_{i}\right)^{+} \\ \text {s.t. } p_{i} \geq 0, \quad i=1, \ldots, m \end{array} $$ \For{ $t=t_{k}+1, \ldots, t_{k+1}$} \State If constraints permit, set $$ x_{t}=\left\{\begin{array}{ll} 1, & \text { if } r_{t}>\boldsymbol{a}_{t}^{\top} \boldsymbol{p}_{k}^{*} \\ 0, & \text { if } r_{t} \leq \boldsymbol{a}_{t}^{\top} \boldsymbol{p}_{k}^{*} \end{array}\right. $$ \State $\quad$ Otherwise, set $x_{t}=0$ \State $\quad$ If $t=n$, stop the whole procedure. \EndFor \EndFor \end{algorithmic} \end{algorithm} Let us analyze the regret of this algorithm by proving the following theorem: \begin{theorem} With the online policy $\boldsymbol{\pi}_{2}$ specified by Algorithm \ref{alg:2}, where the distribution of $(\mathbf{a},r)$ satisfies \ref{ass 1*}, \ref{ass 2*}, and \ref{ass 3*}, $$ \Delta_{n}\left(\pi_{2}\right) \leq O(\sqrt{n} \log n). $$ \label{7.2} \end{theorem} Essentially, as seen from the proof, the main contribution to the regret of this algorithm is wasted time. The accumulated error generated from the sample duals is $O(\log n\log\log n)$, the regret generated from wasted resources is $O(\sqrt{n}\sqrt{\log \log n})$, and the regret generated from early exit is $O(\sqrt{n}\log n)$. Hence early exit is the most harmful to this type of algorithm, for it forgoes potentially large orders at the end, whereas wasted resources cost at most the shadow price per unit. It is no surprise that similar algorithms, such as that of \cite{2}, include a small shrinkage term in the constraint to be slightly more conservative, ensuring a minimal early exit time at the relatively low cost of some wasted resources. \section{Numerical Simulations} In this section, we provide numerical simulations to test the Dynamic Learning Algorithm. We test two kinds of models: one where the price depends on the quantity purchased and one where the price is independent of it.
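In our simulations, the Dynamic Learning Algorithm can be prototyped on top of the earlier sketches; the geometric update schedule and the corresponding price rule might look as follows (all helper names are ours, and \texttt{sample\_dual\_price} and \texttt{dual\_based\_policy} are the sketches from above). The initial price is a large vector so that orders before $t_1$ are rejected, mirroring the initialization $x_1=\cdots=x_{t_1}=0$.
\begin{verbatim}
import numpy as np

def geometric_schedule(n, delta=1.5):
    """Update points t_k = floor(delta^k), k = 1, 2, ..., below n."""
    ts, k = [], 1
    while int(delta ** k) < n:
        t = int(delta ** k)
        if not ts or t > ts[-1]:
            ts.append(t)
        k += 1
    return ts

def make_dynamic_rule(d, n, delta=1.5):
    """Price rule for Algorithm 2: re-solve the sample dual at each t_k and
    hold p_k^* fixed until the next update point."""
    schedule = geometric_schedule(n, delta)
    state = {"p": np.full(len(d), 1e9), "k": 0}    # reject everything early
    def rule(history):
        t = len(history) + 1                       # current period (1-based)
        while state["k"] < len(schedule) and t > schedule[state["k"]]:
            r_seen = np.array([h[0] for h in history])
            A_seen = np.array([h[1] for h in history])
            state["p"] = sample_dual_price(r_seen, A_seen, d)
            state["k"] += 1
        return state["p"]
    return rule

# Usage: x, left = dual_based_policy(r, A, b, make_dynamic_rule(b / n, n))
\end{verbatim}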
We can also observe that, although the data violate some regularity constraints in our three assumptions \ref{ass 1*}, \ref{ass 2*}, \ref{ass 3*}, the performance is better than what the regret theorem \ref{7.2} predicts. Let us define a bounded random walk model, $$ R_{t+1} =\left\{\begin{array}{ll} R_t+\epsilon_t, & \text { if } \underline{r}\leq R_t+\epsilon_t\leq \bar{r} \\ \bar{r}, & \text { if } R_t+\epsilon_t> \bar{r}\\ \underline{r}, & \text { if } R_t+\epsilon_t< \underline{r} \end{array}\right. $$ where the $\epsilon_t$ are i.i.d.\ increments. In the example below, we use $ \underline{r}=1$, $\bar{r}=5$, $m=5$, and $\epsilon_i\sim 2B(0.5)-1$ for a Bernoulli $B(0.5)$. In Random Input I, we choose $m$ independent bounded random walks, starting from $\underline{r}$, as the hidden market price of each resource; the bid price is the sum over resources of the quantity multiplied by the market price. Random Input I therefore reflects a type of efficient market where the fair prices are known to the buyer while the seller has to learn them; in this case the seller does not receive any surplus. Random Input II has a single regenerative price with no hidden item price. It therefore describes a situation where the price and the quantity are independent, so the seller has a chance to extract consumer surplus, for consumers may pay more than the fair prices. Both inputs follow a regenerative random walk structure. That financial data is well modeled by random walks is not new; a bounded random walk can be used to model the return of option combinations, for example a protective collar strategy. \begin{tabularx}{1\textwidth} { | >{\raggedright\arraybackslash}X | >{\centering\arraybackslash}X | >{\raggedleft\arraybackslash}X | } \hline Random Input I (Quantity Dependent Price) & $a_{ij} \sim |\text{Normal}(0.5, 1)| $ & $r_i=\sum_{j=1}^{m} a_{ij}r_{ij}$, $r_{ij}\sim R_{ij}$\\ \hline Random Input II (Quantity Independent Price) & $a_{ij} \sim |\text{Normal}(0.5, 1)| $ & $r_i\sim R_i$\\ \hline Random Input III (I.I.D Price) & $a_{ij} \sim |\text{Normal}(0.5, 1)| $ & $r_i\sim \text{Uniform}(1,5)$\\ \hline \end{tabularx} \\ Realizations of the sample paths of the bounded random walk are given below in Figure \ref{figure 0}. \begin{figure}[!ht] \caption{Bounded Fair (Left) and Weighted Down (Right) Random Walks } \includegraphics[scale=0.3]{re_price1.pdf} \includegraphics[scale=0.3]{re_price_2.pdf} \centering \label{figure 0} \end{figure} The regret and the consumption rate are shown below. In Figures \ref{figure 1}, \ref{figure 3}, and \ref{figure 5}, we observe that the regrets grow as $O(\sqrt{n})$, below the upper bound of $O(\sqrt{n}\log n)$. They also suggest that, at larger scales, regenerative and i.i.d.\ price data give similar performance for our algorithms.
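For reproducibility, the three random inputs used in the figures below can be generated along the following lines (a sketch; the seed and sizes are arbitrary choices of ours):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def bounded_walk(n, r_lo=1.0, r_hi=5.0):
    """Bounded random walk R_t with increments 2*Bernoulli(0.5) - 1,
    clipped to [r_lo, r_hi] and started at r_lo."""
    out, x = np.empty(n), r_lo
    for t in range(n):
        x = float(np.clip(x + rng.choice([-1.0, 1.0]), r_lo, r_hi))
        out[t] = x
    return out

n, m = 5000, 5
A = np.abs(rng.normal(0.5, 1.0, size=(n, m)))   # quantities, all three inputs
walks = np.column_stack([bounded_walk(n) for _ in range(m)])
r_I = (A * walks).sum(axis=1)                   # Input I: r_i = sum_j a_ij r_ij
r_II = bounded_walk(n)                          # Input II: single hidden walk
r_III = rng.uniform(1.0, 5.0, size=n)           # Input III: i.i.d. price
\end{verbatim}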
\begin{figure}[!ht] \caption{Regret for Algorithm \ref{alg:2} (Red) bounded by $25\sqrt{n}$ (Blue) with Input I} \includegraphics[scale=0.3]{Alg_2_Input_1.pdf} \centering \label{figure 1} \end{figure} \begin{figure}[!ht] \caption{Resource Consumption Rate of Algorithm \ref{alg:2} (Left) and Zoomed-in View (Right) with Input I} \includegraphics[scale=0.3]{Alg_2_Input_1_Resource.pdf} \includegraphics[scale=0.3]{Alg_2_Input_1_zoom_in.pdf} \centering \label{figure 2} \end{figure} \begin{figure}[!ht] \caption{Regret for Algorithm \ref{alg:2} (Red) bounded by $4\sqrt{n}$ (Blue) with Input II} \includegraphics[scale=0.3]{Alg_2_Input_2.pdf} \centering \label{figure 3} \end{figure} \begin{figure}[!ht] \caption{Resource Consumption Rate of Algorithm \ref{alg:2} (Left) and Zoomed-in View (Right) with Input II} \includegraphics[scale=0.3]{Alg_2_Input_2_re.pdf} \includegraphics[scale=0.3]{Alg_2_Input_2_in.pdf} \centering \label{figure 4} \end{figure} \begin{figure}[!ht] \caption{Regret for Algorithm \ref{alg:2} (Red) bounded by $4\sqrt{n}$ (Blue) with Input III} \includegraphics[scale=0.3]{Alg_2_Input_3.pdf} \centering \label{figure 5} \end{figure} \begin{figure}[!ht] \caption{Resource Consumption Rate of Algorithm \ref{alg:2} (Left) and Zoomed-in View (Right) with Input III} \includegraphics[scale=0.3]{Alg_2_Input_3_re.pdf} \includegraphics[scale=0.3]{Alg_2_Input_3_in.pdf} \centering \label{figure 6} \end{figure} \newpage There are a few important observations to be made from the consumption rates in Figures \ref{figure 2}, \ref{figure 4}, and \ref{figure 6}. Figure \ref{figure 6}, with the i.i.d.\ price data, has the smoothest consumption rate, with the least wasted resources and time. This optimal performance is due to the fact that the algorithm assumes a linear consumption rate when solving for the dual optimum. For i.i.d.\ data, this assumption is realistic at all scales, both macro and local, and therefore the realized consumption rate is smooth. Regenerative data that is independent of the purchased quantity, as in Figure \ref{figure 4}, may not suit this assumption at the micro level: even if the consumption rate is linear at the macro level, since it is the cycles rather than the individual orders that are i.i.d., consumption at the local level is not linear. Hence this assumption may cause small deviations from the true dual, making the consumption rate rough in the zoomed-in picture on the right of Figure \ref{figure 4}. At large scales, however, such small deviations are insignificant: indeed, Figures \ref{figure 3} and \ref{figure 5} show that when the price is independent of the quantity, whether i.i.d.\ or regenerative, the regrets are similar. Figure \ref{figure 2} corresponds to the case where there is a true fluctuating market price for each item and the bid price is the market price of the entire bundle. Since there exists a hidden, unobserved market price, there is relatively little noise in the system, compared to Random Input II, where the market is pure noise (the price being independent of the quantity). Such data is therefore easier to learn and yields a more stable consumption rate. To summarize, the consumption rate is most linear when the price data is i.i.d.\ with little noise, and less linear when the price data is regenerative and noisy; the difference is caused by the linear consumption rate assumed by the algorithm. As discussed earlier, the main contribution to the regret comes from the early exit time.
To prevent early exit, the two solutions are either to be more conservative with the resources and introduce a shrinkage term, as in \cite{2}, or to take the leftover resources into account so that the algorithm no longer consumes resources linearly. We demonstrate both algorithms here. \begin{algorithm}[H] \caption{Conservative Dynamic Learning Algorithm}\label{alg:3} \begin{algorithmic}[1] \State Input: $d_{1}, \ldots, d_{m}$ where $d_{i}=b_{i} / n$ \State Initialize: Find $\delta \in(1,2]$ and $L>0$ s.t. $\left\lfloor\delta^{L}\right\rfloor=n$. \State Let $t_{k}=\left\lfloor\delta^{k}\right\rfloor, k=1,2, \ldots, L-1$ and $t_{L}=n+1$ \State Set $x_{1}=\ldots=x_{t_{1}}=0$ \For{ $k=1,2, \ldots, L-1$ } \State Specify an optimization problem $$ \begin{aligned} \max & \sum_{j=1}^{t_{k}} r_{j} x_{j} \\ \text { s.t. } & \sum_{j=1}^{t_{k}} a_{i j} x_{j} \leq \left(1-\epsilon \sqrt{\frac{n}{t_{k}}}\right)t_{k} d_{i}, \quad i=1, \ldots, m \\ & 0 \leq x_{j} \leq 1, \quad j=1, \ldots, t_{k} \end{aligned} $$ \State Solve its dual problem and obtain the optimal dual variable $\boldsymbol{p}_{k}^{*}$ $$ \begin{array}{c} \boldsymbol{p}_{k}^{*}=\underset{p}{\arg \min } \sum_{i=1}^{m} d_{i} p_{i}+\frac{1}{t_{k}} \sum_{j=1}^{t_{k}}\left(r_{j}-\sum_{i=1}^{m} a_{i j} p_{i}\right)^{+} \\ \text {s.t. } p_{i} \geq 0, \quad i=1, \ldots, m \end{array} $$ \For{ $t=t_{k}+1, \ldots, t_{k+1}$} \State If constraints permit, set $$ x_{t}=\left\{\begin{array}{ll} 1, & \text { if } r_{t}>\boldsymbol{a}_{t}^{\top} \boldsymbol{p}_{k}^{*} \\ 0, & \text { if } r_{t} \leq \boldsymbol{a}_{t}^{\top} \boldsymbol{p}_{k}^{*} \end{array}\right. $$ \State $\quad$ Otherwise, set $x_{t}=0$ \State $\quad$ If $t=n$, stop the whole procedure. \EndFor \EndFor \end{algorithmic} \end{algorithm} The above algorithm modifies line 6 to include a shrinkage term $\left(1-\epsilon \sqrt{\frac{n}{t_{k}}}\right)$. The idea is that since the cost of early exit, $O(\sqrt{n}\log n)$, is higher than the cost of wasted resources, $O(\sqrt{n}\sqrt{\log \log n})$, an algorithm slightly more conservative with the resources may be better off. However, this imposes a tradeoff: to buy operation time, we pay in wasted resources and in potential errors when computing the sample dual optima.
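In code, the only change relative to Algorithm \ref{alg:2} is the capacity used when solving the primal at each update point $t_k$ (a sketch, with $\epsilon$ a small tuning constant of our choosing):
\begin{verbatim}
import numpy as np

def shrunken_capacity(d, t_k, n, eps=0.1):
    """Capacity (1 - eps * sqrt(n / t_k)) * t_k * d used at update point t_k
    in Algorithm 3; eps trades wasted resources for a later exit time."""
    return (1.0 - eps * np.sqrt(n / t_k)) * t_k * np.asarray(d)
\end{verbatim}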
The regrets and the consumption rates are given below: \begin{figure}[!ht] \caption{Regret for Algorithm \ref{alg:3} (Dotted) compared with Algorithm \ref{alg:2} (Red) with Input I} \includegraphics[scale=0.3]{Alg_3_Input_1.pdf} \centering \label{figure 7} \end{figure} \begin{figure}[!ht] \caption{Resource Consumption Rate of Algorithm \ref{alg:3} (Left) and Zoomed-in View (Right) with Input I} \includegraphics[scale=0.3]{Alg_3_Input_1_resource.pdf} \includegraphics[scale=0.3]{Alg_3_Input_1_in.pdf} \centering \label{figure 8} \end{figure} \begin{figure}[!ht] \caption{Regret for Algorithm \ref{alg:3} (Dotted) compared with Algorithm \ref{alg:2} (Red) with Input II} \includegraphics[scale=0.3]{Alg_3_Input_2.pdf} \centering \label{figure 9} \end{figure} \begin{figure}[!ht] \caption{Resource Consumption Rate of Algorithm \ref{alg:3} (Left) and Zoomed-in View (Right) with Input II} \includegraphics[scale=0.3]{Alg_3_Input_2_resource.pdf} \includegraphics[scale=0.3]{Alg_3_Input_2_in.pdf} \centering \label{figure 10} \end{figure} \begin{figure}[!ht] \caption{Regret for Algorithm \ref{alg:3} (Dotted) compared with Algorithm \ref{alg:2} (Red) with Input III} \includegraphics[scale=0.3]{Alg_3_Input_3.pdf} \centering \label{figure 11} \end{figure} \begin{figure}[!ht] \caption{Resource Consumption Rate of Algorithm \ref{alg:3} (Left) and Zoomed-in View (Right) with Input III} \includegraphics[scale=0.3]{Alg_3_Input_3_resource.pdf} \includegraphics[scale=0.3]{Alg_3_Input_3_in.pdf} \centering \label{figure 12} \end{figure} We observe that although the regrets may improve when the operation period is short, they are actually greater when the period is long, across all three random inputs in Figures \ref{figure 7}, \ref{figure 9}, and \ref{figure 11}. This can be explained as follows: when the period is short, a more conservative approach may be better, since the estimates are usually rough; when the period is long, there is no need to compensate for estimation error, and a conservative approach is likely to cause long-term underperformance. Inspecting the consumption plots in Figures \ref{figure 8}, \ref{figure 10}, and \ref{figure 12}, we see that consumption is indeed more conservative, but this conservatism does not give rise to outperformance. Hence there is generally no need to add a shrinkage term to the algorithm, for the cost of the conservatism is too high. \\ To resolve this tradeoff, let us consider the Action-History-Dependent Learning Algorithm from \cite{5}, which adjusts the dual solution based on the previous actions and is thus an adaptive learning algorithm. Its advantage, as we will see, over our previous algorithms, which assume a normalized consumption rate of $\frac{t}{n} b_{i}=t d_{i}$, is that it compensates for the mistakes made in the approximation: for example, if we consume too much resource at first, it increases the dual price and slows down consumption. Recall from Theorem \ref{7.2} that the regret comes in three parts: the average error, the early depletion, and the wasted resources. This adaptive algorithm significantly decreases the regret coming from both the early depletion and the wasted resources, for it adjusts its consumption based on the actual leftover resources instead of following the normalized rate.
We present the algorithm as follows: \begin{algorithm}[H] \caption{Action-History-Dependent Learning Algorithm}\label{alg:4} \begin{algorithmic}[1] \State Input: $n, d_{1}, \ldots, d_{m}$ \State Initialize the constraint $b_{i 0}=n d_{i}$ for $i=1, \ldots, m$ \State Initialize the dual price $\boldsymbol{p}_{1}=\mathbf{0}$. \For {$t=1, \ldots, n$ } \State $\quad$ Observe $\left(r_{t}, \boldsymbol{a}_{t}\right)$ and set $$ x_{t}=\left\{\begin{array}{ll} 1, & \text { if } r_{t}>\boldsymbol{a}_{t}^{\top} \boldsymbol{p}_{t} \\ 0, & \text { if } r_{t} \leq \boldsymbol{a}_{t}^{\top} \boldsymbol{p}_{t} \end{array}\right. $$ \State If the constraints are not violated \State Update the constraint vector $$ b_{i t}=b_{i, t-1}-a_{i t} x_{t} \text { for } i=1, \ldots, m $$ \State $\quad$ Specify an optimization problem $$ \begin{aligned} \max & \sum_{j=1}^{t} r_{j} x_{j} \\ \text { s.t. } & \sum_{j=1}^{t} a_{i j} x_{j} \leq \frac{t b_{i t}}{n-t}, \quad i=1, \ldots, m \\ & 0 \leq x_{j} \leq 1, \quad j=1, \ldots, t \end{aligned} $$ \State $\quad$ If $t<n$, solve its dual problem and obtain the dual price $\boldsymbol{p}_{t+1}$ $$ \begin{array}{c} \boldsymbol{p}_{t+1}=\underset{\boldsymbol{p}}{\arg \min } \sum_{i=1}^{m} \frac{b_{i t} p_{i}}{n-t}+\frac{1}{t} \sum_{j=1}^{t}\left(r_{j}-\sum_{i=1}^{m} a_{i j} p_{i}\right)^{+} \\ \text {s.t. } p_{i} \geq 0, \quad i=1, \ldots, m . \end{array} $$ \EndFor \end{algorithmic} \end{algorithm} The performances are recorded below: \begin{figure}[!ht] \caption{Regret for Algorithm \ref{alg:4} (Dotted) compared with Algorithm \ref{alg:2} (Red) with Input I} \includegraphics[scale=0.3]{Alg_4_Input_1.pdf} \centering \label{figure 13} \end{figure} \begin{figure}[!ht] \caption{Resource Consumption Rate of Algorithm \ref{alg:4} (Left) and Zoomed-in View (Right) with Input I} \includegraphics[scale=0.3]{Alg_4_Input_1_re.pdf} \includegraphics[scale=0.3]{Alg_4_Input_1_in.pdf} \centering \label{figure 14} \end{figure} \begin{figure}[!ht] \caption{Regret for Algorithm \ref{alg:4} (Dotted) compared with Algorithm \ref{alg:2} (Red) with Input II} \includegraphics[scale=0.3]{Alg_4_Input_2.pdf} \centering \label{figure 15} \end{figure} \begin{figure}[!ht] \caption{Resource Consumption Rate of Algorithm \ref{alg:4} (Left) and Zoomed-in View (Right) with Input II} \includegraphics[scale=0.3]{Alg_4_Input_2_re.pdf} \includegraphics[scale=0.3]{Alg_4_Input_2_in.pdf} \centering \label{figure 16} \end{figure} \begin{figure}[!ht] \caption{Regret for Algorithm \ref{alg:4} (Dotted) compared with Algorithm \ref{alg:2} (Red) with Input III} \includegraphics[scale=0.3]{Alg_4_Input_3.pdf} \centering \label{figure 17} \end{figure} \begin{figure}[!ht] \caption{Resource Consumption Rate of Algorithm \ref{alg:4} (Left) and Zoomed-in View (Right) with Input III} \includegraphics[scale=0.3]{Alg_4_Input_3_re.pdf} \includegraphics[scale=0.3]{Alg_4_Input_3_in.pdf} \centering \label{figure 18} \end{figure} As we can observe, both the regret and the smoothness of consumption improve across all three random inputs. The adaptive algorithm no longer aims to compute the true dual value over the entire horizon, for the dual of past time periods is not relevant to future decisions. By computing only the relevant dual based on the actual leftover resources, the algorithm becomes more efficient by discarding irrelevant information. As a result, it no longer assumes linear consumption.
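The re-solving step of Algorithm \ref{alg:4} can be sketched as follows, again reusing \texttt{sample\_dual\_price} from the earlier sketch: the remaining capacity is spread over the remaining horizon, so the effective per-period capacity changes with the history of accepted orders.
\begin{verbatim}
import numpy as np

def action_history_price(r_seen, A_seen, b_left, n):
    """Dual price p_{t+1} in Algorithm 4 (sketch): rescale the remaining
    capacity b_t over the remaining horizon n - t and re-solve the sample
    dual on the observed history."""
    t = len(r_seen)
    d_t = np.asarray(b_left, dtype=float) / (n - t)   # per-period capacity
    return sample_dual_price(np.asarray(r_seen), np.asarray(A_seen), d_t)
\end{verbatim}
When resources run low, $d_t$ shrinks and the re-solved dual price rises, throttling acceptance; this is exactly the self-correcting behavior described next.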
When there are few resources left, the dual value is driven up, which slows down the consumption rate; when the remaining storage is too large, the dual value is driven down to accept more orders. This adaptive feature allows the algorithm, in all three random inputs, to exhaust its resources almost exactly at the end time. \clearpage
\section{Open Problems}
One of the essential questions to ask at this stage is whether we can design algorithms suitable for non-stationary price data, for example, data with a trend. All of our algorithms, as well as the algorithms proposed by \cite{5}, focus exclusively on stationary data. The ability to analyze non-stationary data is critical in applications. \cite{27} and \cite{28} demonstrate resource allocation with non-stationary data from online video streaming and online time series, respectively. \cite{29} and \cite{30} propose and analyze algorithms that solve non-stationary linear programming problems on modern computing clusters. A promising step forward is to ask whether our algorithms can be made adaptive to non-stationary data. To start with, we consider two types of trending data: i) a weighted random walk and ii) a linear regression model with noise. We take the dimension of the products to be $2$ and the capacities to be $0.25n$, so on average the algorithm can accept a fourth of the total orders. The details are given below:
\begin{tabularx}{1\textwidth} { | >{\raggedright\arraybackslash}X | >{\centering\arraybackslash}X | >{\raggedleft\arraybackslash}X | } \hline Random Input IV & $A_{ij} \sim \text{Uniform}(0.6, 1.4) $ & $r_i\sim r_{i-1}+0.2+\text{Uniform}(-0.2, 0.2)$\\ \hline Random Input V & $A_{ij} \sim \text{Uniform}(0.6, 1.4) $ & $r_i\sim 1+0.2i+\text{Uniform}(-0.2, 0.2)$\\ \hline \end{tabularx}
If we test the data using Algorithm \ref{alg:4}, we obtain the following regrets:
\begin{figure}[!ht] \caption{Regrets for Algorithm \ref{alg:4} with Random Input IV/V } \includegraphics[scale=0.3]{RandomNo.pdf} \includegraphics[scale=0.3]{LinearNo.pdf} \centering \label{figure 20} \end{figure}
As we can observe, the regrets are super-linear, for the stationary dual algorithm can no longer cope with non-stationary data. In fact, this super-linear regret is a result of the misleading dual computed in the early periods, which, instead of providing more information, introduces additional noise. To handle trending data, we need our algorithms to be trend-adaptive, namely to have the ability to predict the trend before computing the dual. We can design the algorithm in the following way:
\begin{algorithm} \caption{Trend-Adaptive Action-History-Dependent Learning Algorithm}\label{alg:5} \begin{algorithmic}[1]
\State Input: $d_{1}, \ldots, d_{m}$ where $d_{i}=b_{i} / n$
\State Initialize: Find $\delta \in(1,2]$ and $L>0$ s.t. $\left\lfloor\delta^{L}\right\rfloor=n$.
\State Let $t_{k}=\left\lfloor\delta^{k}\right\rfloor, k=1,2, \ldots, L-1$ and $t_{L}=n+1$
\State Set $x_{1}=\ldots=x_{t_{1}}=0$
\For{ $k=1,2, \ldots, L-1$ }
\State Input $A_i,r_i$ for $i\leq t_k$.
\State Predict the trend slope $\alpha_{t_k}$ and intercept $\beta_{t_k}$ by the least-squares method.
\State Complete the data $r_i=i\alpha_{t_k}+\beta_{t_k}$ for $n\geq i> t_k$.
\State Complete the data $A_i=\frac{1}{t_k}\sum_{j=1}^{t_k}A_j$ for $n\geq i> t_k$.
\State Update the real-time constraints $b_{t_k}=b-Ax$.
\State Specify an optimization problem with the simulated data. $$ \begin{aligned} \max & \sum_{j=t_k}^{n} r_{j} x_{j} \\ \text { s.t.
} & \sum_{j=t_k}^{n} a_{i j} x_{j} \leq (b_{t_k})_{i}, \quad i=1, \ldots, m \\ & 0 \leq x_{j} \leq 1, \quad j=t_{k}, \ldots, n \end{aligned} $$
\State Solve its dual problem and obtain the optimal dual variable $\boldsymbol{p}_{k}^{*}$ $$ \begin{array}{c} \boldsymbol{p}_{k}^{*}=\underset{\boldsymbol{p}}{\arg \min }\ b_{t_k}^{\top}\boldsymbol{p}+ \sum_{j=t_k}^{n}\left(r_{j}-\sum_{i=1}^{m} a_{i j} p_{i}\right)^{+} \\ \text {s.t. } p_{i} \geq 0, \quad i=1, \ldots, m \end{array} $$
\For{ $t=t_{k}+1, \ldots, t_{k+1}$}
\State If constraints permit, set $$ x_{t}=\left\{\begin{array}{ll} 1, & \text { if } r_{t}>\boldsymbol{a}_{t}^{\top} \boldsymbol{p}_{k}^{*} \\ 0, & \text { if } r_{t} \leq \boldsymbol{a}_{t}^{\top} \boldsymbol{p}_{k}^{*} \end{array}\right. $$
\State $\quad$ Otherwise, set $x_{t}=0$
\State $\quad$ If $t=n$, stop the whole procedure.
\EndFor
\EndFor \end{algorithmic} \end{algorithm}
\newpage
The performance is recorded below.
\begin{figure}[!ht] \caption{Regrets for Algorithm \ref{alg:5} (Red) with Random Input IV/V compared to Algorithm \ref{alg:4} (Blue)} \includegraphics[scale=0.3]{RandomWalkModelRegret.pdf} \includegraphics[scale=0.3]{LinearRegressionModelRegret.pdf} \centering \label{figure 21} \end{figure}
As we can observe, a significant improvement is achieved by the new adaptive algorithm. We can take a closer look at the resource depletion rate:
\begin{figure}[!ht] \caption{Resource Depletion Rates for Algorithm \ref{alg:5} with Random Input IV/V} \includegraphics[scale=0.3]{RandomWalkResourceDepletion.pdf} \includegraphics[scale=0.3]{LinearRegressionResourceDepletion.pdf} \centering \label{figure 22} \end{figure}
The depletion rates for Algorithm \ref{alg:5} are highly stable, with the resources running out only near the end. The zoomed-in pictures are provided below.
\begin{figure}[!ht] \caption{(Zoomed-in) Resource Depletion Rates for Algorithm \ref{alg:5} with Random Input IV/V} \includegraphics[scale=0.3]{RandomWalkResourceDepletionZoom.pdf} \includegraphics[scale=0.3]{LinearRegressionResourceDepletionZoom.pdf} \centering \label{figure 23} \end{figure}
As discussed in the previous sections, the regret comes from three sources: the approximation of the optimal dual, the early depletion, and the wasted resources. Since the price is increasing, early depletion is especially harmful, for the algorithm then misses the most profitable orders arriving near the end. However, the exact regret coming from early depletion is unknown. We suspect that a slightly more conservative approach would help here. For example, when computing the dual $p_{t_k}$, we could slightly increase it to $p_{t_k}+\epsilon_{t_k}$, where $\epsilon_{t_k}$ vanishes quickly as $k\to L-1$. Encouraged by these promising simulation results, we aim, in follow-up work, to establish a formal statement on the regret by verifying the following conjecture:
\begin{conjecture} Suppose $a_t$ follows some i.i.d.\ process and $r_i$ follows some linear regression model with white noise or a weighted random walk model. Then, under suitable regularity conditions, $$ \Delta_{n}\left(\pi_{5}\right) \leq O(n\log n), $$ where $\pi_5$ is the online policy specified by Algorithm \ref{alg:5}. \end{conjecture}
We believe the conjecture holds at least with the bound $O(n\sqrt{n})$, since algorithms with geometrically spaced update times typically have regret $O(\sqrt{n})$, and the additional price complexity should not distort the regret by more than a factor of $O(n)$.
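The trend-prediction step of Algorithm \ref{alg:5} admits a very short implementation. The sketch below is our illustration (the helper names are hypothetical), and the dual step can reuse the LP solver sketched after Algorithm \ref{alg:4}:

\begin{verbatim}
# Illustrative sketch of the trend-prediction step of Algorithm 5.
import numpy as np

def predict_trend(r_seen):
    """Least-squares fit r_i ~ alpha * i + beta on the prices seen so far."""
    i = np.arange(1, len(r_seen) + 1)
    alpha, beta = np.polyfit(i, r_seen, deg=1)  # slope, intercept
    return alpha, beta

def complete_data(r_seen, A_seen, n):
    """Extrapolate future rewards along the fitted trend and future
    consumption vectors by their empirical mean, for t_k < i <= n."""
    t_k = len(r_seen)
    alpha, beta = predict_trend(r_seen)
    r_future = alpha * np.arange(t_k + 1, n + 1) + beta
    A_future = np.tile(A_seen.mean(axis=0), (n - t_k, 1))
    return np.concatenate([r_seen, r_future]), np.vstack([A_seen, A_future])

# usage on a linear-trend input (in the style of Random Input V)
rng = np.random.default_rng(1)
n, t_k = 500, 60
r = 1 + 0.2 * np.arange(1, n + 1) + rng.uniform(-0.2, 0.2, n)
A = rng.uniform(0.6, 1.4, (n, 2))
r_sim, A_sim = complete_data(r[:t_k], A[:t_k], n)
# r_sim[t_k:], A_sim[t_k:] now feed the dual problem over periods t_k+1..n
\end{verbatim}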
If this conjecture is proved, with either the $O(n\sqrt{n})$ or the $O(n\log n)$ bound, it would be a promising cornerstone in online linear programming, for it opens the possibility of handling non-stationary price data with quasi-linear regret, whereas the original algorithms exhibit $O(n^2)$ regret. As a result, online linear programming algorithms could be used in a wide range of realistic settings unreachable under the original i.i.d.\ restrictions. \clearpage
{ "timestamp": "2022-10-04T02:19:29", "yymm": "2209", "arxiv_id": "2209.08657", "language": "en", "url": "https://arxiv.org/abs/2209.08657" }
\title{Born rule from counting states}
\author{Ovidiu Cristinel Stoica}
\affiliation{ Dept. of Theoretical Physics, NIPNE---HH, Bucharest, Romania. \\ Email: \href{mailto:cristi.stoica@theory.nipne.ro}{cristi.stoica@theory.nipne.ro}, \href{mailto:holotronix@gmail.com}{holotronix@gmail.com} }
\date{\today}
\begin{abstract} I give a very simple derivation of the Born rule by counting states from a continuous basis. More precisely, I show that in a continuous basis, the contributing basis vectors are present in a state vector with real and equal coefficients, but they are distributed with variable density among the eigenspaces of the observable. Counting the contributing basis vectors while taking their density into account gives the Born rule without making other assumptions. State counting yields the Born rule only if the basis is continuous, but all known physically realistic observables admit such bases. The continuous basis is not unique, and for subsystems it depends on the observable. But for the entire universe, there are continuous bases that give the Born rule for all measurements, because all measurements reduce to distinguishing macroscopic pointer states, and macroscopic observations commute. This allows for the possibility of an ontic basis for the entire universe. In the wavefunctional formulation, the basis can be chosen to consist of classical field configurations, and the coefficients $\Psi[\phi]$ can be made real by absorbing them into a global $\textnormal{U}(1)$ gauge.
For the many-worlds interpretation, this result gives the Born rule from micro-branch counting. \end{abstract}
\keywords{Born rule; state counting; Everett's interpretation; many-worlds interpretation; branch counting.}
\maketitle
\section{Introduction}
\label{s:intro}
In quantum mechanics, the Born rule states that the probability that the outcome of a quantum measurement is the eigenvalue $\lambda_j$ of the observable is \begin{equation} \label{eq:born_rule} \textnormal{Prob}(\lambda_j)=\bra{\psi}\obs{P}_j\ket{\psi}, \end{equation} where the unit vector $\ket{\psi}$ represents the state of the observed system right before the measurement, and $\obs{P}_j$ is the projector on the eigenspace corresponding to $\lambda_j$. The \emph{projection postulate} states that $\ket{\psi}$ projects onto one of the eigenspaces $\obs{P}_j$ with the probability from \eqref{eq:born_rule}. As early as 1927, von Neumann expressed the desirability of having a derivation of the Born rule ``from empirical facts or fundamental probability-theoretic assumptions, \emph{i.e.}, an inductive justification'' \cite{vonNeumann1955MathFoundationsQM}. Gleason's theorem shows that any countably additive probability measure on closed subspaces of a Hilbert space $\mathcal{H}$, $\dim\mathcal{H}>2$, has the form $\operatorname{tr}(\obs{P}\wh{\rho})$, where $\obs{P}$ is the projector on the subspace and $\wh{\rho}$ is a density operator \cite{GleasonTheorem1957}. If the state is represented by $\wh{\rho}$, this can be interpreted as the Born rule. Gleason's theorem is very important in showing that if there is a probability rule, it should have the form of the Born rule. But it does not say whether the density operator of the observed system is the same $\wh{\rho}$, how the probabilities arise in the first place, or what they are about \cite{Earman2022TheStatusOfTheBornRuleAndTheRoleOfGleasonsTheoremAndItsGeneralizations}. For example, it is unable to convert the amplitudes of the branches in the many-worlds interpretation (MWI) \cite{Everett1957RelativeStateFormulationOfQuantumMechanics,deWittGraham1973ManyWorldsInterpretationOfQuantumMechanics,Wallace2012TheEmergentMultiverseQuantumTheoryEverettInterpretation,SEP-Vaidman2018MWI} into actual probabilities. For this reason, the search for a proof of the Born rule continues. There are numerous proposals to derive the Born rule. Earlier attempts to derive it from more basic principles include \cite{Finkelstein1963LogicOfQuantumPhysics_Born_rule_derivation}, \cite{Hartle1968QuantumMechanicsOfIndividualSystems}, and others \cite{FarhiEtal1989HowProbabilityArisesInQuantumMechanics}. Such approaches, based on a frequency operator, were accused of circularity \cite{Cassinello1996OnTheProbabilisticPostulateOfQuantumMechanics,CavesSchack2005PropertiesOfTheFrequencyOperatorDoNotImplyTheQuantumProbabilityPostulate}. Other proposals, in relation to MWI, are based on many-minds \cite{AlbertLower1988InterpretingMWI_ManyMinds}, decision theory \cite{Deutsch1999QuantumTheoryOfProbabilityAndDecision,Wallace2002QuantumProbabilitiesAndDecisionRevisited,Saunders2004BornRuleFromOperationalAssumptions} (also accused of circularity in \cite{Baker2007MeasurementOutcomesAndProbabilityInEverettianQuantumMechanics,BarnumEtal2000QuantumProbabilityFromDecisionTheory}), envariance \cite{Zurek2005ProbabilitiesFromEntanglement} (accused of circularity in \cite{SchlosshauerFine2003OnZureksDerivationOfTheBornRule}), measure of existence \cite{Vaidman2012ProbabilityInMWI}, \textit{etc}.
For a review see \cite{Vaidman2020DerivationsOfTheBornRule}. The necessity of obtaining the Born rule in MWI by branch counting was advocated in \cite{Saunders2021BranchCountingInTheEverettInterpretationOfQuantumMechanics}. In this article I follow this guideline:
\begin{goal} \label{goal:counting} Ideally, the Born rule should be obtained in the old-fashioned way, as \emph{the ratio of the number of favorable outcomes to the total number of possible outcomes}. \end{goal}
I show that, in a continuous basis, it is possible to express the state vector as a linear combination of basis vectors of equal norm, but distributed unevenly. Then the probability density can be understood as a distribution of ``classical'' states (Fig. \ref{born.pdf}). \vspace{-0.25in} \image{born.pdf}{0.45}{\textbf{The Born rule from counting basis states.} \\\textbf{A.} The usual interpretation of a wavefunction as a linear combination of basis state vectors of different lengths. \\\textbf{B.} The interpretation of the wavefunction in terms of equal length basis state vectors, but with inhomogeneous density.}
In Sec. \sref{s:probabilities} I prove the main result. In Sec. \sref{s:implications} I discuss its implications: how it makes possible the existence of a ``classical'' or ontic basis for the entire universe, how the wavefunction becomes real, and how this yields probabilities in the many-worlds interpretation.
\section{Probabilities from counting}
\label{s:probabilities}
Before proving the main result, let us motivate it. Consider a state vector of the form \begin{equation} \label{eq:finite_case_psi} \ket{\psi}=\frac{1}{\sqrt{n}}\sum_{k=1}^n\ket{\phi_k}, \end{equation} where $(\ket{\phi_k})_{k\in\{1,\ldots,n\}}$ are orthonormal vectors from $\mathcal{H}$. Then, if every $\ket{\phi_k}$ is an eigenvector of the operator $\obs{A}$ representing the observable, the Born rule simply coincides with the counting of basis states: \begin{equation} \label{eq:finite_case_proof} \begin{aligned} \bra{\psi}\obs{P}_j\ket{\psi} &=\frac{1}{n}\(\sum_{k=1}^n\bra{\phi_k}\)\(\obs{P}_j\sum_{k=1}^n\ket{\phi_k}\)\\ &=\frac{1}{n}\sum_{\ket{\phi_k}\in\obs{P}_j\mathcal{H}}\braket{\phi_k}{\phi_k} = \frac{n_j}{n}, \end{aligned} \end{equation} where $\obs{P}_j$ is the projector onto the eigenspace corresponding to the eigenvalue $\lambda_j$, and $n_j$ is the number of basis vectors $\ket{\phi_k}$ that are eigenvectors for $\lambda_j$. This would satisfy Goal \ref{goal:counting}, but state vectors of the form \eqref{eq:finite_case_psi} are very special. Interestingly, in the continuous case, the basis vectors can be distributed with nonuniform density, making it possible for the continuous version of eq. \eqref{eq:finite_case_psi} to apply to any state vector. This motivates the following results.
\begin{theorem} \label{thm:born_rule_counting} Let $(\ket{\phi})_{\phi\in\mc{C}}$ be an orthonormal basis indexed continuously by the points of a topological manifold $\mc{C}$ with a measure $\mu$ on its $\sigma$-algebra. Then, any state vector $\ket{\psi}$ such that $\abs{\braket{\phi}{\psi}}$ is $\mu$-measurable has the form \begin{equation} \label{eq:psi_uniform} \ket{\psi}=\int_{\phi\in\mc{C}}e^{i \alpha(\phi)}\ket{\phi} d\wt{\mu}(\phi), \end{equation} where $\alpha:\mc{C}\to\mathbb{R}$, and $\wt{\mu}$ is a measure on $\mc{C}$ specifying the density of the basis vectors $(e^{i \alpha(\phi)}\ket{\phi})_{\phi\in\mc{C}}$.
If the eigenspace $\mathcal{H}_{\lambda}$ of an eigenvalue $\lambda$ of an observable $\obs{A}$ is spanned by $(\ket{\phi})_{\phi\in\mc{C}_{\lambda}}$, where $\mc{C}_{\lambda}$ is $\mu$-measurable, then \begin{equation} \label{eq:born_rule_continuous} \textnormal{Prob}(\lambda)=\bra{\psi}\obs{P}_{\lambda}\ket{\psi}=\int_{\phi\in\mc{C}_{\lambda}}\abs{\braket{\phi}{\psi}}\, d\wt{\mu}(\phi). \end{equation} \end{theorem}
\begin{proof} Without loss of generality, we can assume that $\braket{\phi}{\psi}\in\mathbb{R}$ for all $\phi$. If not, substitute $\ket{\phi}\mapsto e^{i \alpha(\phi)}\ket{\phi}$, where $\alpha(\phi)$ is the phase in the polar form of $\braket{\phi}{\psi}$, for all $\phi\in\mc{C}$. Then, \begin{equation} \label{eq:psi_real} \ket{\psi}=\int_{\phi\in\mc{C}}r(\phi)\ket{\phi} d\mu(\phi), \end{equation} where $r(\phi):=\abs{\braket{\phi}{\psi}}$ and $r\in L^2(\mc{C},\mu,\mathbb{R})$ is a real non-negative square-integrable function. The measure $d\wt{\mu}(\phi):=r(\phi)d\mu(\phi)$ satisfies eq. \eqref{eq:psi_uniform}: \begin{equation} \label{eq:psi_real_uniformized} \ket{\psi}=\int_{\phi\in\mc{C}}\ket{\phi} d\wt{\mu}(\phi). \end{equation} Since $r(\phi)$ is $\mu$-measurable, the measure $\wt{\mu}$ is absolutely continuous with respect to $\mu$. If one is not careful enough, one may think that eq. \eqref{eq:psi_real_uniformized} cannot represent a normalized vector. But it does: \begin{equation} \label{eq:check-normalized} \begin{aligned} \braket{\psi}{\psi}&=\int_{\phi\in\mc{C}}\bra{\phi} d\wt{\mu}(\phi)\int_{\phi'\in\mc{C}}\ket{\phi'} d\wt{\mu}(\phi')\\ &=\int_{\phi\in\mc{C}}\(\int_{\phi'\in\mc{C}}\braket{\phi}{\phi'} d\wt{\mu}(\phi')\)d\wt{\mu}(\phi)\\ &=\int_{\phi\in\mc{C}}\(\int_{\phi'\in\mc{C}}\braket{\phi}{\phi'} r(\phi')d\mu(\phi')\)d\wt{\mu}(\phi)\\ &=\int_{\phi\in\mc{C}}r(\phi)d\wt{\mu}(\phi) =\int_{\phi\in\mc{C}}r^2(\phi)d\mu(\phi)=1.\\ \end{aligned} \end{equation} Eq. \eqref{eq:born_rule_continuous} follows directly from eq. \eqref{eq:psi_real_uniformized}, by restricting the double integral in \eqref{eq:check-normalized} to $\mc{C}_{\lambda}$. \end{proof}
Therefore, the density $\wt{\mu}$ of the basis states corresponds to the Born rule, according to Goal \ref{goal:counting}.
\section{Implications}
\label{s:implications}
\begin{remark} \label{rem:applicability} For any physically realistic quantum measurement there is a continuous basis in which the observable is diagonal, as required by Theorem \ref{thm:born_rule_counting}. Even for a single particle in nonrelativistic quantum mechanics, the Hilbert space is infinite-dimensional and admits continuous bases, \textit{e.g.}\ the position basis. In general, measurements reduce to position measurements: the pointer indicates the result by its position, for a photographic plate we read the position where the particle hits it, \textit{etc}. In practice, these are not points, but regions of space of positive area or volume, so \textbf{all measurements satisfy, in practice, the conditions from Theorem \ref{thm:born_rule_counting}}. \qed \end{remark}
\begin{remark} \label{rem:subsystems} Subsystems admit observables that cannot be diagonalized simultaneously, so the continuous basis depends on the observable. Therefore, \textbf{for subsystems there are no continuous bases universal for all observables}. \qed \end{remark}
\begin{remark} \label{rem:universe} However, \textbf{there is a universal continuous basis for the entire universe}. Every measurement ultimately becomes a direct observation of a macro-state, the state of the pointer of the measuring device. So every measurement reduces to distinguishing macro-states.
\emph{Macro-states} are represented by subspaces of the form $\obs{P}_{\alpha}\mathcal{H}$, where $(\obs{P}_{\alpha})_{\alpha\in\mc{A}}$ is a complete set of commuting projectors on $\mathcal{H}$, so that $[\obs{P}_{\alpha},\obs{P}_{\beta}]=0$ for any $\alpha\neq\beta\in\mc{A}$, and $\textstyle{\bigoplus}_{\alpha\in\mc{A}}\obs{P}_{\alpha}\mathcal{H}=\mathcal{H}$. Since ultimately every measurement translates into an observation represented by the macro projectors, there is a universal continuous basis for all measurements, which diagonalizes all macro projectors. Therefore, this universal basis can be taken as representing ``classical states'', which may be called \emph{ontic states}. Theorem \ref{thm:born_rule_counting} allows us to interpret the Born rule for any measurement as counting such ontic states. This is consistent with any observable we measure for the subsystem, since different measurement settings ultimately translate to distinguishing macro-states defined by the same set of macro projectors. It may seem too much to count states of the entire universe just to account for the probabilities of the measurement of a single particle. But in fact we always do this, because the observed particle can be entangled with any other system in the universe. \qed \end{remark}
\begin{remark} \label{rem:ontic} A basis $(\ket{\phi})_{\phi\in\mc{C}}$ that really is ontic or classical is possible. In the {Schr\"odinger} wavefunctional formulation of quantum field theory \cite{Jackiw1988AnalysisInfDimManifoldsSchrodingerRepresentationForQuantizedFields}, $\mc{C}$ becomes the configuration space of classical fields, and the wavefunctional $\Psi[\phi]:=\braket{\phi}{\Psi}$ replaces the nonrelativistic wavefunction. \textbf{Now our basis literally consists of classical states.} While it may be unusual to interpret quantum mechanics in this way, it makes sense once we remember that we never observe individual particles, but macro-states, and these are imported from the classical theory. \qed \end{remark}
\begin{remark} \label{rem:gauge} The phase change $\ket{\phi}\mapsto e^{i \alpha(\phi)}\ket{\phi}$ from the proof of Theorem \ref{thm:born_rule_counting} can be identified with a $\tn{U}(1)$ gauge transformation of the classical field, $\phi\mapsto e^{i \alpha(\phi)}\phi$, so that $e^{i \alpha(\phi)}\ket{\phi}=\ket{e^{i \alpha(\phi)}\phi}$, because both are unphysical. Charged and spinor fields admit a $\tn{U}(1)$ symmetry, but so do photons \cite{BialynickiBirula1994OnTheWavefunctionOfThePhoton}, since the classical electromagnetic field admits a complex form. Then, $\Psi[\phi]$ \textbf{can be made real by changing the global $\tn{U}(1)$ gauge of the classical states}, and eq. \eqref{eq:psi_real_uniformized} can be interpreted directly as a distribution of classical states. \qed \end{remark}
\begin{remark} \label{rem:mwi} In the \textbf{many-worlds interpretation}, if we ``naively'' count the worlds or macro-branches that result after a measurement, the result coincides with the Born rule only if the state has the form \eqref{eq:finite_case_psi} in the eigenbasis of the observable. But counting micro-branches that correspond to the basis $(\ket{\phi})_{\phi\in\mc{C}}$ gives the correct probabilities (even if they may interfere in the future, unlike the macro-branches), in accord with Goal \ref{goal:counting}. \qed \end{remark}
\begin{remark} \label{rem:beables} In the wavefunctional approach each micro-branch consists of classical fields $\phi$. \textbf{These are the local beables.
This justifies counting each micro-branch as a world.} \qed \end{remark}
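A purely illustrative numerical check of the counting identity in eq. \eqref{eq:finite_case_proof} (allowing the phases $e^{i\alpha_k}$ used in Theorem \ref{thm:born_rule_counting}) can be sketched as follows; this finite-dimensional toy is not part of the derivation above:

\begin{verbatim}
# Toy check: for psi = (1/sqrt(n)) sum_k e^{i a_k} |phi_k>, the Born
# probability <psi|P_j|psi> equals the counting ratio n_j / n.
import numpy as np

rng = np.random.default_rng(42)
n = 12
# random orthonormal basis: columns of a unitary from a QR decomposition
Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
phases = np.exp(1j * rng.uniform(0, 2 * np.pi, n))
psi = (Q * phases).sum(axis=1) / np.sqrt(n)

labels = rng.integers(0, 3, n)        # assign basis vectors to 3 eigenvalues
for j in range(3):
    cols = Q[:, labels == j]
    P_j = cols @ cols.conj().T        # projector onto the eigenspace of j
    born = np.real(psi.conj() @ P_j @ psi)
    print(j, round(born, 12), np.sum(labels == j) / n)  # equal up to rounding
\end{verbatim}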
{ "timestamp": "2022-09-20T02:20:33", "yymm": "2209", "arxiv_id": "2209.08621", "language": "en", "url": "https://arxiv.org/abs/2209.08621" }
\section{Introduction and statement of main results}
Consider the Kirchhoff type problem
\begin{equation}\label{Eq.(5.1)} \begin{gathered} -(a+b\int_\Omega |\nabla u|^2dx)\Delta u=f(x,u), \quad\text{in } \Omega, \\ u=0, \quad\text{on } \partial \Omega, \end{gathered} \end{equation}
where $\Omega $ is a smooth bounded domain in $ {\mathbb {R}}^N$, $a,b>0$, and $f: \overline{\Omega}\times {\mathbb R}\to{\mathbb R}$ is a continuous function satisfying the subcritical condition
\begin{equation}\label{8.1} |f(x,t)|\leq C(|t|^{p-1}+1)\quad \text{for some } 2<p<2^*=\begin{cases}\frac{2N}{N-2},& N\geq 3,\\ \infty, & N=1,2, \end{cases} \end{equation}
where $C$ denotes some positive constant. It is pointed out in \cite{MB} that similar nonlocal problems model several physical and biological systems in which $u$ describes a process depending on its own average, for example the population density. Problem \eqref{Eq.(5.1)} is related to the stationary analogue of the Kirchhoff equation $$ u_{tt}-\Big(a+b\int_\Omega|\nabla u|^2dx\Big)\Delta u=g(x,u), $$ proposed by Kirchhoff in \cite{GK} as an extension of the classical d'Alembert wave equation for free vibrations of elastic strings. Kirchhoff's model takes into account the changes in length of the string produced by transverse vibrations. The problem received great attention only after Lions \cite{JL} proposed an abstract framework for it. Positive solutions have been considered by many authors, such as Alves et al \cite{CO}, Ma and Rivera \cite{TF}, Cheng and Wu \cite{CX}, and Yang and Zhang \cite{YJ}. Problems on the unbounded domain $ {\mathbb {R}}^3$ have also been considered, for example by Jin and Wu \cite{JXW}, Nie and Wu \cite{NX}, Wu \cite{XW}, Liu and He \cite{LH}, Li et al \cite{LL}, and Li and Ye \cite{GL}. More recently, the concentration behavior of positive solutions has been studied by He and Zou \cite{HZ}, Wang et al \cite{WT}, and He et al \cite{hl}. A result with Hartree-type nonlinearities can be found in L\"{u} \cite{df}. Ground state solutions with critical growth are considered in He and Zou \cite{HZ2}. The reader may consult Bernstein \cite{SB} and Poho\v{z}aev \cite{SI}, Sun and Tang \cite{JJS}, Chen et al \cite{CY}, Cheng \cite{BC}, Perera and Zhang \cite{KP, ZT}, Mao and Zhang \cite{AM}, Sun and Liu \cite{SJ}, and the references therein for more information on this problem. There are many solvability conditions on $f$ for problem \eqref{Eq.(5.1)}, such as the asymptotically linear case (at infinity) in \cite{YJ}; more attention has been paid to the superlinear case (at infinity):
\begin{itemize}
\item[(S1)] there exists $\theta \geq 1$ such that $\theta G(t)\geq G(st)$ for all $t\in \mathbb {R}$ and $s \in [0, 1]$, where $G(t)=f(t)t-4F(t)$ (see \cite{JJS});
\item[(S2)] $\lim_{|t|\to\infty} G(t)=\infty$ and there exist $ \sigma> \max \{1, N/2\}$ and $C>0$ such that $|f (t)|^\sigma\leq CG(t)|t|^\sigma$ for $|t|$ large (see \cite{AM});
\end{itemize}
or some limiting forms:
\begin{itemize}
\item[(S3)] $\lim_{|t|\to \infty} [f(t)t-4F(t)]=\infty$ (see \cite{YYJ});
\item[(S4)] $ \liminf_{|t|\to\infty}\frac{f(x,t)t-4F(x,t)}{|t|^\tau} >-\alpha$ uniformly in $x\in \Omega$, where $\tau \in [0, 2]$ and $0< \alpha<a\lambda_1$, $\lambda_1$ being the first eigenvalue of $\big(-\Delta, H_0^1(\Omega)\big)$ (see \cite{BC}).
\end{itemize}
All of these conditions mainly serve to ensure the boundedness of the Cerami or Palais--Smale sequences.
The following condition on $f$, called the Ambrosetti--Rabinowitz condition, is often used:
\begin{itemize} \item[(S5)] there exists $\theta>4$ such that $f(x, t)t\geq \theta F(x,t)$ for $|t|$ large, where $F (x, t)=\int_0^tf(x, s)ds$. \end{itemize}
We consider the nonlinear eigenvalue problem
\begin{equation} \begin{gathered} -\Big(\int_{\Omega} |\nabla u|^2 dx\Big)\Delta u=\mu u^3, \quad\text{in } \Omega, \\ u=0, \quad\text{on } \partial\Omega, \end{gathered} \end{equation}
whose eigenvalues are the critical values of the functional
\begin{equation} J(u)=\|u\|^4 ,\quad u\in S:=\big\{u \in H_0^1(\Omega): \int_\Omega |u|^4dx=1\big\}, \end{equation}
where $\|u\|=\big(\int_\Omega |\nabla u|^2dx\big)^{1/2}$. It is known that the first eigenvalue $\mu_1 > 0$ and the first eigenfunction $\psi_1 > 0$ (see \cite{ZT}). Now we can state our main results.
\begin{theorem} \label{thm1.1} Assume that $f\in C(\Omega\times{\mathbb {R}}, {\mathbb {R}})$ satisfies \eqref{8.1} and
\begin{itemize}
\item[(F1)] ${\lim_{|t|\to\infty}\big(\frac{a\lambda_1}{2}t^2 +\frac{b\mu_1}{4}t^4-F(x,t)\big)=+\infty}$ uniformly in $x\in \Omega$;
\item[(F2)] there exists $\lambda>\lambda_1$ such that $F(x,t)\geq\frac{a\lambda}{2}t^2$ for $|t|$ small.
\end{itemize}
Then \eqref{Eq.(5.1)} has at least one nontrivial solution. \end{theorem}
\begin{remark} \label{rmk1} \rm Theorem \ref{thm1.1} is new for the case $\liminf_{|t|\to\infty}\frac{F(x, t)}{t^4}\leq\frac{b\mu_1}{4}$. Condition (F1) is weaker than (F3) in \cite{YYJ}, so our theorem is different from theirs and yields one nontrivial solution by adding condition (F2) near zero. \end{remark}
\begin{theorem} \label{thm1.2} Assume that $f\in C(\Omega\times{\mathbb {R}}, {\mathbb {R}})$ satisfies \eqref{8.1} and
\begin{itemize}
\item[(F3)] ${\liminf_{|t|\to\infty}\frac{F(x, t)}{t^4}>\frac{b\mu_1}{4}}$ uniformly in $x\in \Omega$;
\item[(F4)] ${\lim_{|t|\to \infty}\big(\frac{1}{4}f(x,t)t-F(x,t) +\frac{a\lambda_1}{4}t^2\big)=+\infty}$ uniformly in $x\in \Omega$;
\item[(F5)] there exists $\mu<\mu_1$ such that $F(x,t)\leq\frac{a\lambda_1}{2}t^2+\frac{b\mu}{4}t^4$ for $|t|$ small.
\end{itemize}
Then \eqref{Eq.(5.1)} has at least one nontrivial solution. \end{theorem}
\begin{remark} \label{rmk2} \rm Condition (F4) is a new condition for a class of functions $f(x,t)$ and is weaker than (S1)--(S5). For example, let $$ f(x,t)=\frac{a\lambda_1}{8}\Big(8t^3\ln(1+t^2)+\frac{4t^5}{1+t^2} +4t^3\cos t^4\Big). $$ Then $F(x,t)=\frac{a\lambda_1}{8}\big(2t^4\ln(1+t^2)+\sin t^4\big)$, and a simple computation shows that $$ \frac{1}{4}f(x,t)t-F(x,t)+\frac{a\lambda_1}{4}t^2 = \frac{a\lambda_1}{8}\Big(\frac{t^6 (1+\cos t^4)}{1+t^2} + \frac{t^4 (2+\cos t^4)+2t^2}{1+t^2}- \sin t^4\Big) $$ and $$ \lim_{|t|\to \infty}\Big(\frac{1}{4}f(x,t)t-F(x,t)+\frac{a\lambda_1}{4}t^2\Big) =+\infty. $$ Hence, $f(x,t)$ satisfies all the assumptions of Theorem \ref{thm1.2}, but does not satisfy any of the conditions (S1)--(S5). \end{remark}
\section{Preliminaries}
We consider $H:=H_0^1(\Omega)$ endowed with the norm $\|u\|=\big(\int_\Omega |\nabla u|^2dx\big)^{1/2}$. We denote the usual $L^p(\Omega)$-norm by $|\cdot|_p$. Since $\Omega $ is a bounded domain, it is well known that $H \hookrightarrow L^p(\Omega)$ continuously for $p\in[1, 2^*]$, and compactly for $p\in[1, 2^*)$. Moreover, there exists $\gamma_p>0$ such that \begin{equation}\label{eq6.1} |u|_p\leq\gamma_p\|u\|, \quad u\in H.
\end{equation}
Seeking a weak solution of problem \eqref{Eq.(5.1)} is equivalent to finding a critical point of the $C^1$ functional
\begin{equation}\label{eq6.2} I(u):=\frac{a}{2}\|u\|^2+\frac{b}{4}\|u\|^4-\int_\Omega F(x,u)dx, \quad u\in H, \end{equation}
whose derivative is given by
\begin{equation}\label{eq6.3} \langle I'(u),v \rangle =(a+b\|u\|^2)\int_\Omega \nabla u \cdot\nabla v dx-\int_\Omega f(x,u)vdx, \quad u,v\in H. \end{equation}
Let $$ {E_j:=\oplus_{i\leq j} \ker(-\Delta-\lambda_i)}, $$ where $0<\lambda_1\leq\lambda_2\leq\lambda_3 \leq\dots\leq\lambda_i\leq\dots$ are the eigenvalues of $(-\Delta,H)$. We denote a subsequence of a sequence $\{u_n\}$ again by $\{u_n\}$ to simplify the notation, unless otherwise specified. We need the following concept, which was introduced by Cerami \cite{GC} and is a weak version of the (PS) condition.
\begin{definition}[\cite{GC}] \label{def2.1} \rm Let $J\in C^1(X,\mathbb {R})$. We say that $J$ satisfies the Cerami condition at the level $c\in \mathbb {R}$ ($(Ce)_c$ for short) if any sequence $\{u_n\}\subset X$ with $$ J(u_n)\to c, \quad (1+\|u_n\|)J'(u_n)\to 0\quad\text{as } n\to \infty, $$ possesses a convergent subsequence in $X$; $J$ satisfies the $(Ce)$ condition if $J$ satisfies $(Ce)_c$ for all $c\in \mathbb {R}$. \end{definition}
The following lemma, which can be found in \cite{DG}, is our main tool in this article.
\begin{lemma}[Mountain Pass Theorem] \label{lem2.1} Let $H$ be a real Banach space and $I\in C^1(H, \mathbb R)$ satisfy the $(Ce)$ condition. Suppose that $I(0)=0$ and
\begin{itemize}
\item[(i)] there are constants $\rho, \beta> 0$ such that $I|_{\partial B_\rho}\geq\beta$, where $$ B_\rho=\{u\in H : \|u\|\leq \rho\}; $$
\item[(ii)] there is $u_1\in H$ with $\|u_1\|>\rho$ such that $I(u_1)<0$.
\end{itemize}
Then $I$ possesses a critical value $c\geq \beta$. Moreover, $c$ can be characterized as $$ c =\inf_{g\in\Gamma}\max_{u\in g([0,1])} I(u), \quad \Gamma= \{g\in C([0, 1],H) : g(0) = 0, g(1) = u_1\}. $$ \end{lemma}
We now give a lemma about the $(Ce)$ condition, which will play an important role in the proof of our theorems.
\begin{lemma} \label{lem2.2} Assume that $f(x,t)$ satisfies \eqref{8.1} and (F4). Then $I$ satisfies the $(Ce)$ condition. \end{lemma}
\begin{proof} Suppose that $\{u_n\} $ is a $(Ce)_c$ sequence for some $c\in \mathbb {R}$, that is, \begin{equation}\label{eq6.4} I(u_n)\to c, \quad (1+\|u_n\|)I'(u_n)\to 0 \quad\text{as } n\to \infty. \end{equation} First, we prove that $\{u_n\} $ is a bounded sequence. From \eqref{eq6.2}, \eqref{eq6.3} and \eqref{eq6.4}, we obtain, for $n$ large, \begin{equation}\label{eq6.5} 1+c \geq I(u_n)-\frac{1}{4} \langle I'(u_n),u_n\rangle =\frac{a}{4}\|u_n\|^2+\int_\Omega \Big(\frac{1}{4}f(x,u_n)u_n-F(x,u_n)\Big)dx. \end{equation} By (F4), there exists $M>0$ such that \begin{equation}\label{eq6.6} \frac{1}{4}f(x,t)t-F(x,t)+\frac{a\lambda_1}{4}|t|^2\geq -M \end{equation} for all $x\in\Omega$ and $t\in\mathbb {R}$. Let $u_n=\phi_n+w_n$, where $\phi_n\in E_1$ and $w_n\in E^{\perp}_1$. From \eqref{eq6.5} and \eqref{eq6.6}, one obtains \begin{equation} \label{eq6.7} \begin{aligned} 1+c &\geq I(u_n)-\frac{1}{4} \langle I'(u_n),u_n\rangle \\ &=\frac{a}{4}\|u_n\|^2-\frac{a\lambda_1}{4}|u_n|^2_2 +\int_\Omega \Big(\frac{1}{4}f(x,u_n)u_n-F(x,u_n) +\frac{a\lambda_1}{4}|u_n|^2\Big)dx \\ &\geq \frac{a}{4}\big(1-\frac{\lambda_1}{\lambda_2}\big)\|w_n\|^2-M|\Omega|, \end{aligned} \end{equation} which implies that $\|w_n\|$ is bounded. We claim that $\{u_n\}$ is a bounded sequence. Otherwise, there is a subsequence of $\{u_n\}$ satisfying $\|u_n\| \to+\infty$ as $n\to+\infty$.
Then we obtain $$ \frac{w_n}{\|u_n\|} \to 0 \quad\text{in } H. $$ Since $\phi_n/\|u_n\|$ is bounded in $E_1$ ($E_1$ has finite dimension), we may assume, up to a subsequence, that $\phi_n/\|u_n\|\to v$ in $E_1$. By $$ v_n:=\frac{u_n}{\|u_n\|}=\frac{\phi_n+w_n}{\|u_n\|} =\frac{\phi_n}{\|u_n\|}+\frac{w_n}{\|u_n\|}\to v\in E_1, $$ one has \begin{equation}\label{eq6.8} \frac{u_n(x)}{\|u_n\|}\to v(x)\quad\text{a.e. in }\Omega. \end{equation} From $\|v_n\|=1$, we obtain $\|v\|=1$. Since $v\in E_1$ and $v\neq0$, either $v(x)>0$ for all $x\in\Omega$ or $v(x)<0$ for all $x\in\Omega$, which, by \eqref{eq6.8}, implies that \begin{equation}\label{eq6.9} |u_n(x)|\to +\infty\quad\text{as }n\to+\infty \end{equation} for all $x\in \Omega$. It follows from \eqref{eq6.5}, \eqref{eq6.9} and Fatou's lemma that \begin{align*} 1+c &\geq I(u_n)-\frac{1}{4} \langle I'(u_n),u_n\rangle \\ &=\frac{a}{4}\|u_n\|^2+\int_\Omega \Big(\frac{1}{4}f(x,u_n)u_n-F(x,u_n)\Big)dx \\ &\geq \int_\Omega \Big(\frac{1}{4}f(x,u_n)u_n-F(x,u_n)+\frac{a\lambda_1}{4}|u_n|^2 \Big)dx \\ &\to+\infty\quad\text{as }n \to +\infty, \end{align*} which is a contradiction. Hence $\{u_n\}$ is bounded in $H$. Since $f(x,t)$ has subcritical growth, one can easily show that $\{u_n\}$ has a convergent subsequence. Hence, $I$ satisfies the $(Ce)$ condition. \end{proof}
\section{Proof of main results}
\begin{proof}[Proof of Theorem \ref{thm1.1}] Let $$ \overline{u}=\Big(\int_\Omega \nabla u \cdot \nabla \phi_1 dx \Big)\phi_1, \quad \widetilde{u}=u-\overline{u}, $$ where $\phi_1$ is the first eigenfunction corresponding to $\lambda_1$. The following statements come from \cite{SQ}. First, there exist a real function $g\in L^1(\Omega)$ and $G\in C(\mathbb{R}, \mathbb{R})$ which is subadditive, that is, $$ G(s+t)\leq G(s)+G(t) $$ for all $s, t\in \mathbb{R}$, and coercive, that is, $G(t)\to+\infty$ as $|t|\to\infty$, and satisfies $$ G(t)\leq|t|+4 $$ for all $t\in \mathbb{R}$, such that $$ F(x, t)-\frac{a\lambda_1}{2}t^2-\frac{b\mu_1}{4}t^4\leq-G(t)+g(x) $$ for all $t\in \mathbb{R}$ and $x\in \Omega$. Second, the functional $\int_\Omega G(v)dx$ is coercive on $E_1$ (this result can also be found in \cite{SQ}). We claim that $I(u)$ is coercive. Indeed, \begin{align*} &\int_\Omega \Big( F(x, u)-\frac{a\lambda_1}{2}u^2-\frac{b\mu_1}{4}u^4 \Big)dx\\ &\leq -\int_\Omega G(u)dx+\int_\Omega g(x)dx \\ & \leq -\int_\Omega\left( G(\overline{u})-G(-\widetilde{u})\right) dx +\int_\Omega g(x)dx \\ & \leq -\int_\Omega G(\overline{u})dx +|\widetilde{u}|_{1}+4 |\Omega| +\int_\Omega g(x)dx \\ & \leq -\int_\Omega G(\overline{u})dx +C_1(\|\widetilde{u}\|+1) \end{align*} for all $u\in H$ and some $$ C_1=C+4 |\Omega|+\int_\Omega g(x)dx, $$ where $C$ is a positive constant from the Sobolev inequalities $$ |u|_{1}\leq C\|u\|,\quad |u|_{2}\leq C\|u\| $$ for all $u\in H$. Hence we have \begin{align*} I(u) &=\frac{a}{2}\|u\|^2+\frac{b}{4}\|u\|^4-\int_\Omega F(x,u)dx \\ &=\frac{a}{2}\|u\|^2-\frac{a\lambda_1}{2}|u|^2_2 +\frac{b}{4}\|u\|^4-\frac{b\mu_1}{4}|u|^4_4 \\ &\quad+\int_\Omega\Big(\frac{a\lambda_1}{2}|u|^2+\frac{b\mu_1}{4}|u|^4 -F(x,u)\Big)dx \\ &\geq\frac{a}{2}\|u\|^2-\frac{a\lambda_1}{2}|u|^2_2 +\int_\Omega\Big(\frac{a\lambda_1}{2}|u|^2+\frac{b\mu_1}{4}|u|^4-F(x,u)\Big)dx \\ &\geq \frac{a}{2}\big(1-\frac{\lambda_1}{\lambda_2}\big) \|\widetilde{u}\|^2+\int_\Omega G(\overline{u})dx-C_1(\|\widetilde{u}\|+1) \end{align*} for all $u\in H$. By the coercivity of the functional $\int_\Omega G(v)dx$ on $E_1$ and the fact that $$ \|u\|^2=\|\overline{u}\|^2+\|\widetilde{u}\|^2, $$ it follows that the functional $I(u)$ is coercive.
Since $I$ is coercive, it satisfies the $(Ce)$ condition and is bounded from below. By (F2), we have $$ F(x,t)\geq \frac{a\lambda}{2}t^2-C|t|^p $$ for all $x\in \Omega$ and $t\in\mathbb{R}$, which implies that \begin{align*} I(u) &\leq\frac{a}{2}\|u\|^2+\frac{b}{4}\|u\|^4-\frac{a\lambda}{2}|u|_2^2+C|u|_p^p \\ &\leq\frac{a}{2}\big(1-\frac{\lambda}{\lambda_1}\big)\|u\|^2 +\frac{b}{4}\|u\|^4+C\gamma_p^p\|u\|^p <0 \end{align*} for $u\in E_1\cap B_\delta$ with $\delta>0$ small enough, since $\lambda>\lambda_1$; here $E_1$ is the subspace of $H$ spanned by $\phi_1$, the eigenfunction of $\lambda_1$. Then $I(u)$ achieves a negative infimum at a nontrivial critical point. This completes the proof. \end{proof}
\begin{proof}[Proof of Theorem \ref{thm1.2}] By Lemmas \ref{lem2.1} and \ref{lem2.2}, it is sufficient to show that $I$ satisfies (i) and (ii). \smallskip
\noindent\textbf{Step 1.} There are constants $\rho, \beta> 0$ such that $I(u)\geq\beta$ for all $\|u\|=\rho$. In fact, by (F5), for $\varepsilon>0$ small enough it is easy to see that $$ F(x,t)\leq\frac{a\lambda_1}{2}t^2+\frac{b(\mu_1-\varepsilon)}{4}t^4+C|t|^p $$ for all $t\in \mathbb{R}$ and $x\in \Omega$, so that \begin{align*} I(u)&\geq\frac{a}{2}\|u\|^2+\frac{b}{4}\|u\|^4 -\frac{a\lambda_1}{2}|u|_2^2-\frac{b(\mu_1-\varepsilon)}{4}|u|_4^4 -C\int_\Omega |u|^pdx \\ &\geq\frac{b}{4}\big(1-\frac{\mu_1-\varepsilon}{\mu_1}\big)\|u\|^4-C\gamma_p^p\|u\|^p. \end{align*} Note that $4< p < 2^*$; hence there exist $\beta>0$ and $\rho>0$ small enough such that $I(u)\geq\beta$ for all $\|u\|=\rho$. \smallskip
\noindent\textbf{Step 2.} There exists $u_1\in H$ with $\|u_1\|>\rho$ such that $I(u_1)<0$. Indeed, for small $\varepsilon > 0$, by the definition of $\mu_1$, we can choose $u\in S$ satisfying \begin{equation}\label{eq7.6} \mu_1+\frac{\varepsilon}{2}\geq\|u\|^4. \end{equation} It follows from (F3) that \begin{equation}\label{eq7.7} F(x,t)\geq\frac{b(\mu_1+\varepsilon)}{4}t^4-C. \end{equation} Hence, by \eqref{eq7.6} and \eqref{eq7.7}, we have \begin{equation} \label{eq7.8} \begin{aligned} I(tu) &\leq\frac{a}{2}t^2\|u\|^2+\frac{b}{4}t^4\|u\|^4 -\frac{b}{4}t^4(\mu_1+\varepsilon)+C|\Omega| \\ &\leq\frac{a}{2}t^2\|u\|^2+\frac{b}{4}t^4\mu_1 +\frac{b\varepsilon}{8}t^4-\frac{b}{4}t^4(\mu_1+\varepsilon)+C|\Omega| \\ &=-\frac{b\varepsilon}{8}t^4+\frac{a}{2}t^2\|u\|^2+C|\Omega|. \end{aligned} \end{equation} Thus, $I(tu)\to-\infty$ as $t\to\infty$. Therefore, there is $u_1 \in H$ with $\|u_1\|>\rho$ such that $I(u_1)< 0$. This completes the proof. \end{proof}
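For the reader's convenience, the elementary spectral estimate used both in \eqref{eq6.7} and in the chain of inequalities in the proof of Theorem \ref{thm1.1} can be spelled out as follows (this is a standard computation): writing $u=\overline{u}+\widetilde{u}$ with $\overline{u}\in E_1$ and $\widetilde{u}\in E_1^{\perp}$,
\begin{align*}
\|u\|^2-\lambda_1|u|_2^2 &=\big(\|\overline{u}\|^2-\lambda_1|\overline{u}|_2^2\big) +\big(\|\widetilde{u}\|^2-\lambda_1|\widetilde{u}|_2^2\big)\\
&\geq 0+\Big(1-\frac{\lambda_1}{\lambda_2}\Big)\|\widetilde{u}\|^2,
\end{align*}
since $|\overline{u}|_2^2=\|\overline{u}\|^2/\lambda_1$ on $E_1$ and $|\widetilde{u}|_2^2\leq\|\widetilde{u}\|^2/\lambda_2$ on $E_1^{\perp}$, by the variational characterization of the eigenvalues.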
{ "timestamp": "2022-09-20T02:21:15", "yymm": "2209", "arxiv_id": "2209.08644", "language": "en", "url": "https://arxiv.org/abs/2209.08644" }
\section{Introduction} \label{Introduction}
Gamma-Ray Bursts (GRBs) are incredibly powerful phenomena: they are the brightest events after the Big Bang, as well as among the farthest astrophysical objects ever detected \citep{Paczynski,Piran,Kumar}. These features allow us to use them as cosmological tools, similar to what has been achieved for Supernovae Type Ia (SNe Ia, \cite{Riess}). Because of their high luminosity, GRBs can be observed up to very large distances, corresponding to high redshifts. Indeed, GRBs have been observed up to $z=8.2$ and $z=9.4$ \citep{Tanvir2009,Cucchiara}, while SNe Ia have only been observed up to $z=2.26$ \citep{Rodney}. Using GRBs as cosmological tools requires a full understanding of their physical mechanisms. Both their energy emission mechanisms and their progenitors are still being studied by the scientific community. For their birth, there is a general consensus on two main scenarios: the explosion of a very massive star at the end of its lifetime \citep{Narayan1992, Woosley, MacFadyen, Nagataki2007,Nagataki2009, Nagataki2011}, followed by a core-collapse SN \citep{Stanek,MacFadyen2001}, or the coalescence of two compact objects, like black holes (BHs) or neutron stars (NSs) \citep{Lattimer, Eichler1989,Li1998, Rowlinson, Rea2015, Stratta}. The most probable frameworks for the central engine that powers the GRB consider the following astrophysical objects: BHs, NSs, or fast-spinning, newly born, highly magnetized NSs called magnetars \citep{Usov, Liang2018, Ai2018,Komissarov2007, Barkov2008}. To identify the different possible natures of their origin, it is necessary to classify GRBs according to their observable features. A general paradigm divides GRB light curves (LCs) into a rapid prompt energy emission followed by a longer emission phase called the afterglow. The afterglow is usually detected in X-ray, optical, and also radio wavelengths \citep{Sari,O'Brien, Sakamoto2007, Perley, Li2015, Morsony, Warren2017, Warren2018}. We usually detect the prompt emission of GRBs in high-energy bands, from X-rays up to $\ge$ 100 MeV $\gamma$-rays, but sometimes it has been observed in the optical band as well \citep{Fraija18, Panaitescu2011, Fraija20b}. A first categorization divides GRBs into Short and Long, depending on the duration of their prompt emission: $T_{90}\leq 2$ s or $T_{90} > 2$ s \footnote{$T_{90}$ is the time during which a GRB ejects from $5\%$ to $95\%$ of its total measured photons during the prompt phase}, respectively \citep{Mazets, Kouveliotou}. There is a very strong association between the prompt duration and the progenitor of GRBs: indeed, the majority of Long GRBs originate from the core collapse of a very massive star, while Short GRBs are born from the merging of two compact objects \citep{Abbott,Troja, Zhang2006, Ito2015, Ito2021}. A new classification of GRBs according to their progenitor mechanism has been proposed \citep{Zhang2006}: Type I GRBs are those born from the merging of two compact objects, while Type II are those born from the core collapse of very massive stars. Their progenitors can be inferred from morphological and physical characteristics. The plateau phase of GRBs is a flat part of the GRB LC following the prompt phase, and it was discovered by the {\it Neil Gehrels Swift Observatory} ({\it Swift}) \citep{O'Brien, Sakamoto2007, W07}. The duration of this plateau usually ranges from $10^2$ to $10^5$ s, after which a power-law (PL) decay phase is observed.
Several scenarios describe the plateau, such as the external shock model, according to which the shock front between the ejecta and the interstellar medium is powered by a long-lasting energy emission from the central engine \citep{Zhang2006}, or the spin-down of a newly born magnetar \citep{Stratta,Fraija20}. In the past decades many efforts have been made by the scientific community to find possible correlations between the physical features of GRBs. Regarding correlations involving only the prompt features we cite, among others, the relation between the peak energy of the ${\nu F_\nu}$ spectrum, ${E_{peak}}$, and the isotropic energy of the prompt emission, ${E_{iso}}$ \citep{Amati2002}, and the one between $E_{peak}$ and the isotropic prompt luminosity ${L_{iso}}$ \citep{Yonetoku, Ito2019}. We also mention the correlation between the collimation-corrected energy ${E_{jet} = E_{iso}\times (1-\cos \theta)}$, where ${\theta}$ is the jet opening angle, and ${E_{peak}}$, found by \cite{Ghirlanda}, and the one found by \cite{Liang2005} between ${E_{p}}$, ${E_{iso}}$, and the break time of the optical afterglow LCs, ${t_b}$. The last two correlations, even if they involve prompt features, introduce the jet-break time, which can also be inferred from the X-ray afterglow, which in some cases can include a plateau. Several other correlations directly involving the plateau \citep{Dainotti2008, Dainotti2013b, Dainotti2015, Dainotti2016, Liang, Bernardini2012a, Xu2012, Margutti, Zaninoni,Shun-Kun2018, Tang2019, Zhao2019, Srinivasaragavan, Wen2020} and their applications as cosmological probes \citep{Cardone, Postnikov, Dainotti2013a, Izzo2015} have been presented. For a more extensive discussion on the prompt and prompt-afterglow relations, their selection biases, and their application as cosmological tools, see \citet{DaiDel2017, DaiAm2018, Dainotti2018}. One of these correlations is the so-called Dainotti relation, which links the time at the end of the plateau emission measured in the rest frame, $T^{*}_X$, with the corresponding X-ray luminosity of the LC, $L_X$ \citep{Dainotti2008}; see Equation \ref{Lpeak}. This correlation is theoretically supported by the magnetar model \citep{Dall'Osso, Bernardini2012b,Rowlinson}. Its extension in three dimensions has been discovered by adding the prompt peak luminosity, $L_{\rm peak}$ \citep{Dainotti2016, Dainotti2017c}, and is known as the fundamental plane correlation or the 3D Dainotti relation. \footnote{We note that we are referring to the fundamental plane correlation related to GRBs, and not to other definitions of fundamental planes used in astronomy, such as the fundamental plane of elliptical galaxies \citep{Djorgovski}} To use this relation as a cosmological tool we selected a GRB sample with well-defined morphological properties and almost flat plateaus, called the platinum sample, which was introduced in \cite{Dainotti2020a} and whose properties are detailed in Sec. \ref{sample selection}. We clarify that, following the well-established approach in SNe Ia cosmology of selecting only the best-quality SNe Ia LCs (see \cite{Scolnic}), we choose a well-defined sample. This sample is built in the observer frame and not in the rest frame; namely, the LCs are considered in the flux versus time parameter space. This means that no cosmological parameters are involved in this selection of the LCs. Therefore, there is no circularity problem involved in the application of this sample for cosmological use.
We correct this correlation for evolutionary effects due to redshift and for selection biases, as done in \cite{Dainotti2020a}, to infer cosmological parameters, such as the Hubble constant ${H_{0}}$, the current matter density of the universe, ${\Omega_{M}}$, and the dark energy equation-of-state parameter, ${w}$, for a ${w}$CDM model, together with other cosmological probes like the SNe Ia and the Baryon Acoustic Oscillations (BAO). Indeed, the evolution of the cosmological parameters is a vital topic, and it has been discussed especially in relation to ${H_{0}}$. It has been highlighted even for the SNe Ia by \cite{Dainotti2021a} and \cite{dainotti2022d} that there is an evolutionary trend of $H_0$ as a function of redshift, which can possibly be explained either with selection biases or with new physics (e.g., invoking the so-called ${f(R)}$-gravity theory). For a general review on the Hubble constant tension see \cite{2022abdalla}. In Sec. \ref{sample selection} the criteria used for the sample selection of the GRB data are detailed. In Sec. \ref{3D correlation} we show the GRB fundamental plane both with and without correcting for evolutionary effects and selection biases. In Sec. \ref{section4} we study the evolutionary parameters as a function of cosmology. In Sec. \ref{section5} we apply the fundamental plane as a cosmological tool. Our results are shown in Sec. \ref{Results}. In Sec. \ref{comparison} we present a comparison between the results obtained using GRBs alone versus the SNe Ia and SNe Ia + BAO sets. In Sec. \ref{standalone} we discuss the future use of GRBs as standalone probes. Finally, our conclusions are discussed in Sec. \ref{conclusions}.
\begin{figure} \centering \subfloat[]{\label{fig1_a} \includegraphics[width=0.48\hsize,height=0.3\textwidth,angle=0,clip]{figures/Fig1/PlatEllipse2.pdf}} \subfloat[]{\label{fig1_b} \includegraphics[width=0.48\hsize,height=0.3\textwidth,angle=0,clip]{figures/Fig1/EvolutionEllipse2.pdf}}\\\hspace{0cm} \subfloat[]{\label{fig1_c} \includegraphics[width=0.49\hsize,height=0.3\textwidth,angle=0,clip]{figures/Fig1/histogram_EvolutionSimulation_a.pdf}} \subfloat[]{\label{fig1_d} \includegraphics[width=0.49\hsize,height=0.3\textwidth,angle=0,clip]{figures/Fig1/histogram_EvolutionSimulation_b.pdf}}\\\hspace{0cm} \subfloat[]{\label{fig1_e} \includegraphics[width=0.49\hsize,height=0.3\textwidth,angle=0,clip]{figures/Fig1/histogram_EvolutionSimulation_c.pdf}} \subfloat[]{\label{fig1_f} \includegraphics[width=0.49\hsize,height=0.3\textwidth,angle=0,clip]{figures/Fig1/histogram_EvolutionSimulation_sigma.pdf}} \caption{Top panels: the 2D projections of the fundamental plane related to the platinum sample without correcting for redshift evolution (\ref{fig1_a}), and with the corrections for selection and evolutionary effects (\ref{fig1_b}). Panels \ref{fig1_c}, \ref{fig1_d}, \ref{fig1_e} and \ref{fig1_f}: histograms of the parameters $a, b, c$, and $\sigma_{int}$ evaluated from the simulation of evolutionary coefficients taken from the 1 $\sigma$ range.} \label{fig1} \end{figure}
\section{Sample Selection}\label{sample selection}
We take into account the GRBs which can be described by the \citet{W07} phenomenological model, using the BAT + XRT LCs gathered from the Swift web page repository \citep{Evans2009,Evans2010}\footnote{http://www.swift.ac.uk/burst\texttt{\_}analyser. We follow the criteria for the GRB sample selection considered in \citet{Srinivasaragavan} and \citet{Dainotti2020a}, and we use the platinum sample detailed in \citet{Dainotti2020a}.}.
We fit this sample with the Willingale functional form for ${f(t)}$, which reads:
\begin{linenomath*} \begin{equation} f(t) = \begin{cases} F_i \exp{\left( \alpha_i \left( 1 - \frac{t}{T_i} \right) \right)} \exp{\left(-\frac{t_i}{t} \right)} & {\rm for} \ \ t < T_i \\ F_i \left(\frac{t}{T_i} \right)^{-\alpha_i}\exp{\left( -\frac{t_i}{t} \right)} & {\rm for} \ \ t \ge T_i \, ,\newline \end{cases} \label{eq: fc} \end{equation} \end{linenomath*}
modelled both for the prompt (index `i=\textit{p}') ${\gamma}$\,-\,ray and initial hard X-ray decay and for the afterglow (`i=\textit{X}'), so that the complete LC ${f_{tot}(t) = f_p(t) + f_X(t)}$ contains two sets of four free parameters ${(T_{i},F_{i},\alpha_i,t_i)}$, where ${\alpha_{i}}$ is the temporal power-law (PL) decay index, $T_i$ is the end time of the prompt emission and of the plateau, respectively, and ${t_{i}}$ is the initial rise timescale. The transition from the exponential to the PL occurs at the point ${(T_{i},F_{i}e^{-t_i/T_i})}$, where the two functional forms have the same value; this point marks the beginning of the plateau. Using these criteria, we fit 222 LCs. The peak prompt luminosity at 1 second, $L_{peak}$, and the X-ray luminosity measured in the final part of the plateau phase, $L_X$, have been calculated as follows: \begin{linenomath*} \begin{equation} L= 4 \pi D_L^2(z) \, F (E_{min},E_{max},T^{*}_{X}) \cdot K. \label{Lpeak} \end{equation} \end{linenomath*} To calculate $L_{peak}$, one substitutes the flux $F$ with $F_{peak}$, the $\gamma$-ray flux over a $1$ s interval ($\rm erg \, cm^{-2} \, s^{-1}$) measured at the peak of the prompt emission, while to calculate $L_{\rm X}$, one uses the flux $F_{X}$, measured in X-rays at the end of the plateau; $D_L(z)$ is the luminosity distance computed for a given redshift in the flat $\Lambda$CDM cosmological model, with a dark energy equation of state $w=-1$, $\Omega_M=0.3$, and $H_0=70$ km $\rm s^{-1}$ $\rm Mpc^{-1}$; $T^{*}_{X}$ is the time measured in the rest frame at the end of the plateau, and $K$ is the $K$-correction for the cosmic expansion \citep{Bloom}. For GRBs whose spectrum is fitted by a simple PL, this correction is given by $K=(1+z)^{(\beta - 1)}$, where $\beta$ is the spectral index of the plateau in the X-ray band \citep{Evans2009, Evans2010}. The Platinum Sample \citep{Dainotti2020a} is a subset of the Gold Sample, the latter being defined in \cite{Dainotti2016} and inspired by similar samples presented in the literature \citep{Xu2012,Tang2019}. To define the Gold Sample, we consider the following requirements for the plateau: 1) its beginning, defined by the quantity $T_t$, must be covered by at least five data points; 2) its inclination must be $< 41^{\circ}$; this latter criterion was adopted in \cite{Dainotti2016} for the Gold Sample, where a Gaussian distribution was fitted to the plateau angles and the outliers lie beyond the threshold of $41^{\circ}$. To build the Platinum Sample, we also add the following requirements for the plateau: 3) its end time $T_{X}$ must not fall within observational gaps of the LCs, to allow the determination of this quantity directly from the data rather than from the LC fitting; 4) it should last at least 500 s; and 5) it should not present flares at its start or during the entire duration of the plateau itself (refining the idea of \cite{Xu2012}). Using these criteria, the platinum sample is composed of 50 GRBs out of the 222 plateaus analyzed. The furthest GRB in this sample is at $z=5$.
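As an illustration of Equation \ref{Lpeak} only (not the actual pipeline used in this work), the following sketch converts a measured flux into a rest-frame luminosity with the simple PL $K$-correction; the input values are placeholders, and \texttt{astropy}'s \texttt{FlatLambdaCDM} provides the luminosity distance:

\begin{verbatim}
# Minimal sketch of Equation (2): L = 4 pi D_L^2 F K, with K = (1+z)^(beta-1)
# for a simple power-law spectrum. Numerical inputs are placeholders.
import numpy as np
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # flat LambdaCDM with w = -1

def luminosity(flux_cgs, z, beta):
    d_l = cosmo.luminosity_distance(z).to(u.cm).value   # D_L in cm
    k_corr = (1.0 + z) ** (beta - 1.0)                  # K-correction
    return 4.0 * np.pi * d_l**2 * flux_cgs * k_corr     # erg/s

# e.g. F_X = 1e-11 erg cm^-2 s^-1 at z = 1.5, plateau spectral index beta = 1
print(f"L_X = {luminosity(1e-11, 1.5, 1.0):.3e} erg/s")
\end{verbatim}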
\begin{figure*} \includegraphics[width=0.49\hsize,height=0.335\textwidth,angle=0,clip]{figures/Fig2/PlatHist.pdf} \includegraphics[width=0.49\hsize,height=0.335\textwidth,angle=0,clip]{figures/Fig2/PlatHistNE.pdf} \caption{The distributions of the distances of the Platinum sample GRBs from the 3D fundamental plane, with and without the correction for evolution, together with their fitted Gaussian distributions.} \label{histdistanceplane} \end{figure*} Regarding the SNe Ia data, we use the Pantheon sample \citep{Scolnic}, a set composed of $1048$ SNe Ia collected by different surveys and spanning from $z=0.01$ up to $z=2.26$. {It is important to note that the criteria defining our sample are objectively determined before the construction of the correlations sought; sample cuts are introduced strictly following either data quality or physical class constraints.} \section{The 3D Relation for The Platinum Gamma-ray Bursts}\label{3D correlation} In order to robustly apply any GRB correlation as a cosmological tool, we need a reliable model supporting the theoretical scenario, as has been done for SNe Ia. We note, however, that even though there is a very clear picture of the birth of SNe Ia from the complete disruption of an accreting white dwarf in a binary system, the particular mechanism that triggers the SN explosion is still debated \citep{Livio}. A possible example of a model that can satisfactorily explain the plateau, which we have pinpointed in Sec. \ref{Introduction}, is the magnetar model. Indeed, the intrinsic scatter, $\sigma_{int}$, of the correlation and the errors on its parameters can be derived directly from the values of the spin periods and of the magnetic field strengths representative of the magnetar. Thus, the slope and the intercept of the $L_X-T^{*}_{X}$ relation are naturally derived from the magnetar equations \citep{Rowlinson,Stratta}, which link $L_X$ to $T^{*}_{X}$ through physical quantities of the astrophysical object, such as the moment of inertia and the spin period: \begin{linenomath*} \begin{equation} L_{0,49} \propto B_{p,15}^2 P_{0,-3}^{-4} R_{6}^6, \label{magnetarlum} \end{equation} \end{linenomath*} \begin{linenomath*} \begin{equation} T_{em,3} = 2.05 \, I_{45} B_{p,15}^{-2} P_{0,-3}^{2} R_6^{-6}. \label{magnetartime} \end{equation} \end{linenomath*} \noindent In Equations \ref{magnetarlum} and \ref{magnetartime}, ${L_{0,49}}$ is the plateau luminosity in units of $10^{49}\,\rm erg\,s^{-1}$, ${I_{45}}$ is the moment of inertia in units of $10^{45}\,\rm g\,cm^{2}$, ${B_{p,15}}$ is the magnetic field strength at the poles in units of $10^{15}$ G, $R_6$ is the radius of the neutron star in units of ${10^6}$ cm, and ${P_{0,-3}}$ is the spin period in milliseconds. If we substitute into Equation \ref{magnetarlum} the radius obtained from Equation \ref{magnetartime} we obtain the following: \begin{linenomath*} \begin{equation} \log L_0 \propto \log (10^{52} I_{45} P_{0,-3}^{-2}) - \log(T_{em}), \end{equation} \end{linenomath*} where it is immediately apparent that the first term is a constant for a fixed spin period and moment of inertia of the magnetar, so that the luminosity is inversely correlated with the rest-frame time at the end of the plateau emission. However, there are additional explanations for the plateau emission and for the existence of the $L_{peak}-L_a$ relation, which can be ascribed to the external forward shock model with varying microphysical parameters \citep{Hascoet}.
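As a quick numerical illustration of this anticorrelation (not part of the original analysis), the sketch below evaluates the scalings of Equations \ref{magnetarlum} and \ref{magnetartime} for a range of purely illustrative dipole field strengths at fixed spin period and moment of inertia, and recovers a slope of $-1$ in the $\log L_{0}$--$\log T_{em}$ plane.
\begin{verbatim}
import numpy as np

# Illustrative magnetar parameters, not fitted values: vary the dipole
# field at fixed spin period and moment of inertia.
B15 = np.linspace(0.5, 10.0, 50)   # B_p in units of 10^15 G
P3, I45, R6 = 2.0, 1.0, 1.0        # spin period (ms), I/10^45, R/10^6 cm

L0 = B15**2 * P3**-4 * R6**6                  # plateau luminosity (arbitrary norm)
Tem = 2.05 * I45 * B15**-2 * P3**2 * R6**-6   # plateau end time (10^3 s)

slope = np.polyfit(np.log10(Tem), np.log10(L0), 1)[0]
print(f"log L0 vs log Tem slope: {slope:.2f}")  # -> -1.00
\end{verbatim}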
In addition, in \cite{Stratta} the properties of the Dainotti 3D relation are explained through the anti-correlation between $L_{peak}$ and the spin period within the model of the pulsar spin-down described in \citet{Contopoulos}. Having stressed that this correlation and its 3D extension can be reliably supported within the magnetar scenario, we can safely proceed with the description of the procedure for using this correlation as a cosmological tool. We leave the parameters $a$, $b$, $c$ of the fundamental plane free to vary and we fit the correlation using the \cite{Dago05} Bayesian method. Throughout the paper, the uncertainties on our computed values are quoted at the 1 $\sigma$ level. The luminosities and times carry comparable error bars, namely $\frac{\Delta_{x}}{x}$ and $\frac{\Delta_{y}}{y}$ are of the same order of magnitude, where $\Delta_{x}$ is the error on the x-axis (the time, in our case) and $\Delta_{y}$ is the error on the y-axis (the luminosity at the end of the plateau phase, in our case). Thus, it is necessary to adopt methods, such as that of \cite{Dago05}, which take both error bars into account. \noindent The fundamental plane relation has the following form: \begin{linenomath*} \begin{equation} \log L_X = c + a\cdot \log T^{*}_{X} + b \cdot \log L_{peak}, \label{isotropic} \end{equation} \end{linenomath*} \noindent where $a$ and $b$ are the best-fit parameters given by the \cite{Dago05} procedure linked to $\log T^{*}_{X}$ and $\log L_{peak}$, respectively, while $c$ is the normalization. The best-fit results are: $a =-0.86 \pm 0.13$, $b =0.56 \pm 0.12$, $c = 21.8 \pm 6.3$, and $\sigma_{int}=0.34 \pm 0.04$. We stress that, had the sample been chosen ad hoc, the distribution of the geometric distances of the points from the fundamental plane would show no outliers; instead, it is clear from the distribution of the distances from the platinum plane that several GRBs at a distance of $-0.5$ are outliers with respect to the rest of the sample. The role of the corrections due to selection biases and evolutionary effects has been studied for the $L_X-T^{*}_{X}$ \citep{Dainotti2013b} and for the $L_X-L_{peak}$ relations \citep{dainotti2015b, Dainotti2017b}. Indeed, each physical feature, $L_X$, $T^{*}_{X}$, and $L_{peak}$, is affected by selection biases due to instrumental thresholds and by the redshift evolution of the variables involved in the correlations. To correct for these effects on each variable, we employ the \citet{Efron} method, which tests the statistical dependence of $L_X$, $T^{*}_{X}$, and $L_{peak}$ on redshift, see \citet{Dainotti2013b, dainotti2015b, Dainotti2017b,Petrosian}. \begin{figure*} \includegraphics[width=0.99\hsize,height=0.55\textwidth,angle=0,clip]{figures/Fig3/PairedHistograms2.0errors.pdf} \caption{Paired smoothed histograms of the $\sigma_{int}$ obtained for the cases with and without evolution, with different methods of the HyperFit online routine.
The thin black horizontal lines indicate the central value of the $\sigma_{int}$ parameter from the D'Agostini fitting, while the red ones correspond to its 1 $\sigma$ error bars.} \label{hist:paisim} \end{figure*} The fundamental plane correlation, once the selection effects are considered, becomes: \begin{linenomath*} \begin{equation} \log L_X-k_{L_{a}} \log(z+1) = a_{ev} \cdot(\log T^{*}_{X}- k_{T_{X}} \log (z+1))+ b_{ev} \cdot(\log L_{peak}- k_{L_{peak}} \log (z+1))+c_{ev}, \label{planeev} \end{equation} \end{linenomath*} where $a_{\rm ev}$, $b_{\rm ev}$, and $c_{\rm ev}$ denote the parameters when the redshift evolution is taken into account; by evolution we mean the dependence of the physical quantities on redshift. The $k_{L_{peak}}$, $k_{T_{X}}$, and $k_{L_{a}}$ are the evolutionary coefficients computed by us for the whole sample of 222 GRBs: $k_{L_{peak}}=2.24^{+0.30}_{-0.30}$, $k_{T_{X}}=-1.25^{+0.28}_{-0.27}$, $k_{L_{a}}=2.42^{+0.41}_{-0.74}$. Comparing these results with the ones obtained in the literature, namely the coefficients derived in \cite{Dainotti2017b} and used in \cite{Dainotti2020a}, we note that the evolution of $L_{peak}$ is compatible within 1 $\sigma$, the evolution of $T^{*}_a$ within 1.6 $\sigma$, and the evolution of $L_a$ within 1.8 $\sigma$. To verify the reliability of our results, we simulated random values of the power-law coefficients of the evolution, $k_{L_{peak}}, k_{T_{X}}, k_{L_{a}}$, drawn from uniform distributions within the 1 $\sigma$ error ranges. For our best-fit computations of the Dainotti 3D relation we used the \cite{Dago05} and \cite{Reichart} methods (the latter considers a slightly different likelihood, but still includes $\sigma_{int}$, like the D'Agostini one) with the same minimization algorithm. To further ensure the reliability of our results, we repeated this procedure 1000 times, finding that the results of these two computations are compatible within 1 $\sigma$. The results of the minimization of the D'Agostini likelihood are shown in the central and bottom panels of Fig. \ref{fig1}. Considering these effects, the new best-fit parameters for the fundamental plane of the platinum sample are $a_{\rm ev}=-0.85 \pm 0.12$, $b_{\rm ev}=0.49 \pm 0.13$, $c_{\rm ev} = 25.4 \pm 6.9$, and $\sigma_{int}=0.18 \pm 0.09$. We note that \cite{Rowlinson} predicts that the $a$ coefficient of the $\log L_X-\alpha \log(z+1) \propto \log T^{*}_{X}- \beta \log (z+1)$ relation should be $a_{theoretical}=-1$, which is compatible with our result within 1.3 $\sigma$. The central value of the intrinsic scatter is $47.1 \%$ smaller than the one computed for the original fundamental plane. We compare the two intrinsic scatters, obtained with and without the corrections due to the EP method, using the following formula, adapted from \cite{Dainotti2020a}: \begin{linenomath*} \begin{equation} x=\frac{\sigma_{int,NoEV}-\sigma_{int,EV}}{\sqrt{\sigma_{\sigma_{int,NoEV}}^2+\sigma_{\sigma_{int,EV}}^2}}. \label{discrepancies} \end{equation} \end{linenomath*} We obtain $x=1.98$, meaning that the evolutionary effects do indeed significantly reduce the intrinsic scatter of the platinum fundamental plane correlation. We also show in Fig. \ref{histdistanceplane} the distances of each data point belonging to the Platinum sample from the best fit of the fundamental plane, both with the evolutionary effects (left panel) and without them (right panel). The 2D projections of both fundamental planes are shown in the top panels of Fig. \ref{fig1}.
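For concreteness, a minimal sketch of a D'Agostini-style likelihood for the plane of Equation \ref{isotropic} is shown below; the mock data, the variable names, and the optimiser are illustrative assumptions (with the best-fit values quoted above used as the injected truth), and the actual analysis follows \cite{Dago05} in detail.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def neg_log_like(params, x1, x2, y, sx1, sx2, sy):
    """D'Agostini-style negative log-likelihood for the plane
    y = a*x1 + b*x2 + c with intrinsic scatter sigma_int;
    both x and y error bars enter the effective variance."""
    a, b, c, sig_int = params
    var = sig_int**2 + sy**2 + (a * sx1)**2 + (b * sx2)**2
    resid = y - (a * x1 + b * x2 + c)
    return 0.5 * np.sum(np.log(2.0 * np.pi * var) + resid**2 / var)

# x1 = log T*_X, x2 = log L_peak, y = log L_X (mock arrays for illustration)
rng = np.random.default_rng(0)
x1, x2 = rng.uniform(2, 5, 50), rng.uniform(51, 54, 50)
y = -0.86 * x1 + 0.56 * x2 + 21.8 + rng.normal(0.0, 0.3, 50)
sx1 = sx2 = sy = np.full(50, 0.1)

res = minimize(neg_log_like, x0=[-1.0, 0.5, 20.0, 0.3],
               args=(x1, x2, y, sx1, sx2, sy),
               bounds=[(-3, 0), (0, 2), (0, 50), (1e-3, 2)])
print(res.x)  # recovered a, b, c, sigma_int
\end{verbatim}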
These figures show how the error bars of each GRB, represented by the ellipses, are placed around the plane when we correct for selection biases and redshift evolution (right upper panel) and when we do not correct for them (left upper panel). We note that when the correction for evolution is applied, the data points, including their error bars, lie closer to the plane, and fewer outliers are present compared to the situation in which the evolution is not taken into account. To compare these two relations we computed for both cases the Akaike information criterion (AIC) and the model weight $B_{i}=e^{\frac{AIC_{min}-AIC_{i}}{2}}$ for each relation, where $AIC_{min}=MIN(AIC_{evolution},AIC_{noev})$ and $AIC_i$ is the AIC value corresponding to the relation for which the $B$ parameter is computed. For each model we computed the ``relative likelihood'': $P_i=\dfrac{B_i}{\sum \limits _j B_j}$, obtaining $P_{evolution}=0.99$ and $P_{noev}=0.01$. Thus, the model with evolution is favoured over the one without evolution. To further confirm the reliability of our results and their independence of the particular Bayesian method adopted, we performed other best-fit procedures for the fundamental plane, both with and without evolution. We also used the \cite{Reichart} method, which we recall is another Bayesian approach that also takes both error bars into account, obtaining best-fit results compatible within 1 $\sigma$ with the D'Agostini ones, and an online fitting tool called ``HyperFit'' (https://hyperfit.icrar.org/), based on \cite{Robotham} and built to obtain the best fit of linear models with heteroscedastic errors for multidimensional data using Bayesian inference. The latter tool offers the possibility to employ different algorithms and methods, which we used to compute the smoothed paired histograms of the $\sigma_{int}$ obtained for each case, with and without evolution, presented in Fig. \ref{hist:paisim}. The values obtained with the \cite{Dago05} method are consistent with these histograms. Specifically, we add a black line indicating the mean value of ${\sigma_{int}}$ and red lines indicating the error on ${\sigma_{int}}$ obtained by the \cite{Dago05} method. We also applied other best-fit methods: the Principal Component Analysis (PCA), the PC Regression (PCR, \cite{Liu}), and the Partial Least Squares (PLS), where the latter two are regression methods based on PCA. For PCA we found: $a=-1.19$, $b=0.44$, $c=28.87$ and $a_{\rm ev}=-1.17$, $b_{\rm ev}=0.49$, $c_{\rm ev}=26.75$ for the no-evolution and the evolution cases, respectively. When comparing the PCA results with the \cite{Dago05} ones for the no-evolution case, the parameters $a$, $b$, and $c$ are consistent within $2.5$, $1$, and $1.1$ $\sigma$, respectively; for the evolution case $b_{ev}$ and $c_{ev}$ are consistent within $1\; \sigma$, while $a_{ev}$ is consistent within $2.7\; \sigma$. The PCA fitting does not account for the error bars, and thus does not consider the intrinsic scatter that is instead computed by the Bayesian methods of \cite{Dago05} and \cite{Reichart}; this drives the difference in the results. For PCR and PLS we used the bootstrapping technique to infer the errors on the best-fit parameters. Their results are consistent with the D'Agostini ones within 1 $\sigma$, thus lending further reliability to our conclusions. \section{The Study of the Evolutionary Parameters as a Function of Cosmology} \label{section4} The reliability of this procedure has already been proven via Monte Carlo simulations \citep{Dainotti2013b}.
To correct for the evolution we use $g(z)=1/(1+z)^{k}$, where the $k$ parameter mimics the evolution due to the redshift. As addressed in \cite{dainotti2015b}, the functional form for the evolution can be a power law or a more complex function, and the results for these functions are compatible within 2 $\sigma$ for the luminosity evolutions and within 1 $\sigma$ for the time evolutions. Here, we detail the results of the EP method applied to the whole sample of 222 GRBs for the studied parameters. The EP method takes these effects into account through an adaptation of the Kendall $\tau$ test, according to which $\tau$ has the following definition: \begin{figure} \centering \includegraphics[width=0.45\hsize,height=0.3\textwidth,angle=0,clip]{figures/Fig4/Lpeak_z1.pdf} \includegraphics[width=0.45\hsize,height=0.3\textwidth,angle=0,clip]{figures/Fig4/Lpeak_tau.pdf} \includegraphics[width=0.45\hsize,height=0.3\textwidth,angle=0,clip]{figures/Fig4/LX_z1.pdf} \includegraphics[width=0.45\hsize,height=0.3\textwidth,angle=0,clip]{figures/Fig4/LX_tau.pdf} \includegraphics[width=0.45\hsize,height=0.3\textwidth,angle=0,clip]{figures/Fig4/TX_z1.pdf} \includegraphics[width=0.45\hsize,height=0.3\textwidth,angle=0,clip]{figures/Fig4/TX_tau.pdf} \caption{The application of the EP method to our entire sample for the parameters involved in the fundamental plane correlation. The limiting lines chosen for the EP method are shown in red. The left panels show the distributions of the studied parameters versus $1+z$, while the right panels show the relation between $\tau$ and the evolutionary coefficients (in red). The vertical red solid lines indicate the value for which $\tau=0$, and thus the evolution is removed. The dashed blue lines represent the 1 $\sigma$ range for the evolution, which is determined for $|\tau| \leq 1$.} \label{fig:evoL} \end{figure} \begin{linenomath*} \begin{equation} \tau =\frac{\sum_{i}{(\mathcal{R}_i-\mathcal{E}_i)}}{\sqrt{\sum_i{\mathcal{V}_i}}}, \label{tau} \end{equation} \end{linenomath*} \noindent where $\mathcal{R}_i$ is the rank, $\mathcal{E}_i=(1/2)(i+1)$ is the expectation value, and $\mathcal{V}_i=(1/12)(i^{2}-1)$ is the variance. To eliminate the impact of the redshift on our data we demand $\tau=0$. $\mathcal{R}_i$ is computed for each data point considering the position of the data in the so-called associated sets, which are samples that include all the objects that can be detected considering a particular observational limit \citep{Dainotti2013b, dainotti2015b, Dainotti2017c}. The computation of the evolutionary coefficients follows the same procedure for $L_{peak}$, $L_{X}$, and $T^*_X$. The limiting values for these quantities are shown in the left panels of Fig. \ref{fig:evoL}, while the evolutionary coefficients are shown in the right panels. Here, for simplicity, we detail only the computation for $L_X$, given that the procedure for the other parameters is similar. For this luminosity we compute the flux limit at the end of the plateau phase, $f_{lim}$, and then the corresponding luminosity $L_{min}(z_i)$ that would allow us to detect an object at a given $z_i$. The associated set for $z_i$ contains all GRBs with $L_{min} \leq L(z_j)$ and $z_j \leq z_i$, where $j$ and $i$ denote the objects of the associated set and of the GRB sample, respectively. According to the EP method, the samples used to derive the evolutionary effects should retain no less than $90 \%$ of the original ones, so conservative choices of the limiting values are needed.
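A schematic, deliberately simplified implementation of this procedure is sketched below: for a trial coefficient $k$, the de-evolved luminosities $L/(1+z)^{k}$ are ranked within their associated sets and the statistic of Equation \ref{tau} is evaluated; the evolutionary coefficient is then the root of $\tau(k)=0$. The mock data and the construction of the associated sets are illustrative assumptions, not the actual sample or flux limit.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def tau_stat(k, z, L, Lmin):
    """Kendall tau of the EP test for de-evolved luminosities L' = L/(1+z)^k.
    Simplified associated set of object i: all j with z_j <= z_i whose
    de-evolved luminosity stays above the de-evolved limit at z_i."""
    Lp = L / (1.0 + z) ** k
    Lminp = Lmin / (1.0 + z) ** k
    num, var = 0.0, 0.0
    for i in range(len(z)):
        assoc = np.where((z <= z[i]) & (Lp >= Lminp[i]))[0]
        n = len(assoc)
        if n < 2:
            continue
        rank = np.sum(Lp[assoc] <= Lp[i])  # rank of object i in its set
        num += rank - 0.5 * (n + 1)        # expectation value (n+1)/2
        var += (n * n - 1.0) / 12.0        # variance of a uniform rank
    return num / np.sqrt(var)

# Mock flux-limited sample with a built-in evolution (1+z)^2.4:
rng = np.random.default_rng(1)
z = rng.uniform(0.1, 5.0, 400)
L = 1e50 * (1.0 + z) ** 2.4 * rng.lognormal(0.0, 0.5, 400)
Lmin = 1e49 * (1.0 + z) ** 3.0      # stand-in for the flux-limit curve
det = L >= Lmin                      # keep only 'detected' objects
z, L, Lmin = z[det], L[det], Lmin[det]

k_hat = brentq(lambda k: tau_stat(k, z, L, Lmin), 0.0, 5.0)
print(f"k for which tau = 0: {k_hat:.2f}")  # close to the injected 2.4
\end{verbatim}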
The chosen coefficients for the evolutionary functions ($g(z)$, $f(z)$, and $h(z)$ for $L_{peak}$, $L_X$, and $T^*_X$, respectively) are the ones for which $\tau=0$, as shown by the red vertical lines in Fig. \ref{fig:evoL}, while the dashed blue lines correspond to the 1 $\sigma$ range for the evolutionary coefficients, which is determined for $|\tau| \leq 1$. We have used this method on our sample of 222 GRBs to compute new evolutionary parameters, thus updating the values with respect to the ones used in \cite{Dainotti2020a}. We would also like to point out that \cite{Dainotti2021b} has shown that this method is reliable regardless of the choice of the limiting values for several sample sizes of Short GRBs (samples of 56, 32, and 34 GRBs). Thus, the discussion of \cite{2021MNRAS.504.4192B} on the EP method and its applicability is not a concern, given the approach and the reliability of the results in \cite{Dainotti2021b}. \begin{figure} \centering \includegraphics[width=0.49\hsize,height=0.34\textwidth,angle=0,clip]{figures/Fig5/kLa_vs_Om.pdf} \includegraphics[width=0.49\hsize,height=0.34\textwidth,angle=0,clip]{figures/Fig5/kLpeak_vs_Om.pdf} \includegraphics[width=0.49\hsize,height=0.34\textwidth,angle=0,clip]{figures/Fig5/kLa_vs_H0.pdf} \includegraphics[width=0.49\hsize,height=0.34\textwidth,angle=0,clip]{figures/Fig5/kLpeak_vs_H0.pdf} \includegraphics[width=0.49\hsize,height=0.36\textwidth,angle=0,clip]{figures/Fig5/kLa_vs_cosmology.pdf} \includegraphics[width=0.49\hsize,height=0.36\textwidth,angle=0,clip]{figures/Fig5/kLpeak_vs_cosmology.pdf} \caption{{The ${k_{L_{a}}}$ (left) and ${k_{L_{peak}}}$ (right) as functions of ${\Omega_{M}}$ and ${H_{0}}$. In the first four panels the 1 ${\sigma}$ error bars are shown with the thin red lines, together with the thick central line that represents the value of the slope for which the evolution is removed. The two bottom panels show the contour plots of ${\Omega_{M}}$ and ${H_{0}}$ as functions of ${k_{L_{a}}}$ (left) and ${k_{L_{peak}}}$ (right). The different colours indicate the different values of the ${k}$ parameters.}} \label{fig:evoL-k} \end{figure} \begin{figure} \centering \includegraphics[width=0.49\hsize,height=0.34\textwidth,angle=0,clip]{figures/Fig5/kLavsw.pdf} \includegraphics[width=0.49\hsize,height=0.34\textwidth,angle=0,clip]{figures/Fig5/kLpeakvsw.pdf} \includegraphics[width=0.49\hsize,height=0.34\textwidth,angle=0,clip]{figures/Fig5/kLavsOk.pdf} \includegraphics[width=0.49\hsize,height=0.34\textwidth,angle=0,clip]{figures/Fig5/kLpeakvsOk.pdf} \caption{{\bf The ${k_{L_{a}}}$ (left) and ${k_{L_{peak}}}$ (right) as functions of $w$ and ${\Omega_{k}}$. In the panels, the 1 ${\sigma}$, 2 ${\sigma}$, and 3 ${\sigma}$ error bars are shown with thin red, orange, and green lines, respectively. The thick central line represents the value of the slope for which the evolution is removed, assuming $w = -1$ and ${\Omega_{k}} = 0$. }} \label{fig:evoL-kwOk} \end{figure} The analysis presented so far fixes the value of an evolutionary parameter at given ${\Omega _{M}}$, ${H_{0}}$, ${\Omega _{k}}$, and $w$. One may wonder how this choice influences the cosmological results. To verify the impact of the cosmological parameters when ${k_{L_{a}}}$ and ${k_{L_{peak}}}$ depend on ${\Omega _{M}}$, ${H_{0}}$, ${\Omega _{k}}$, and $w$, we repeat the EP method with luminosity distances computed over a grid of different ${\Omega _{M}}$, ${H_{0}}$, ${\Omega _{k}}$, and $w$ values.
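Operationally, this corresponds to re-running the EP root-finding at each grid point and tabulating the resulting coefficients, which can then be interpolated into the numerical function $k(\Omega_{M})$ discussed in the following text. The sketch below reuses the illustrative \texttt{lum\_dist\_cm} and \texttt{tau\_stat} helpers defined in the earlier sketches; \texttt{fx} and \texttt{flim} stand for the observed fluxes and the flux limit ($K$-corrections omitted for brevity), and the grid range is a placeholder.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def k_on_grid(z, fx, flim, om_grid, H0=70.0):
    """Re-derive the evolutionary coefficient for each Omega_M of a grid:
    the luminosities are recomputed with the D_L of that cosmology."""
    ks = []
    for om in om_grid:
        dl2 = np.array([lum_dist_cm(zi, H0, om) for zi in z]) ** 2
        L = 4.0 * np.pi * dl2 * fx      # K-corrections omitted here
        Lmin = 4.0 * np.pi * dl2 * flim
        ks.append(brentq(lambda k: tau_stat(k, z, L, Lmin), 0.0, 5.0))
    return np.asarray(ks)

om_grid = np.linspace(0.05, 0.95, 19)
# k_grid = k_on_grid(z, fx, flim, om_grid)         # with real data arrays
# k(Omega_M) as a numerical function via linear interpolation:
# k_of_om = lambda om: np.interp(om, om_grid, k_grid)
\end{verbatim}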
A similar analysis of the dependence of the evolutionary parameters on cosmology was performed for quasars in \cite{dainottigiada2022}. This allows us to determine how the evolutionary functions vary when the cosmological parameters change. The results of this procedure are presented in Fig. \ref{fig:evoL-k} and Fig. \ref{fig:evoL-kwOk}. We note that there are no changes of the evolutionary parameter, $k$, as $H_{0}$ varies. Regarding ${\Omega_{M}}$ and ${\Omega_{k}}$, there is a mild dependence of the evolutionary parameter; at very low values of ${\Omega_{M}}$ (between 0 and 0.2) the change is noticeable, but still within 1 $\sigma$ once the error bars are accounted for. {\bf When we consider the behaviour of the evolutionary slope, $k$, with ${\Omega_{k}}$, it is compatible within 1 $\sigma$, both for $k_{L_{a}}$ and $k_{L_{peak}}$, over the whole considered range of ${\Omega_{k}}$ values. When we consider even a very wide range of the $w$ parameter, all the obtained values of $k$ are compatible with each other within less than 1 $\sigma$; thus, in this case as well, the dependence of $k$ on $w$ is negligible.} To account for these results, we created a numerical function, $k=k(\Omega_{M})$, with a linear interpolation method, and varied the values and errors of the $k$ parameters with $\Omega_{M}$. This is the first time in the literature that such a complete treatment has been performed, which completely overcomes the circularity problem. {\bf In this approach we neglect the dependence of $k$ on $\Omega_{k}$ and $w$, because over a reasonable range of parameters the change of $k$ is insignificant. \cite{dainottigiada2022} showed that the change of $k$ with $\Omega_{M}$ is much more significant for QSOs, a result derived using a much larger sample. Thus, in the future, with a larger sample, we may encounter a more significant dependence of $k$ on $\Omega_{k}$ and $w$. We maintain that all present cosmological efforts should investigate the impact of selection biases and redshift evolution, rather than assuming the absence of such effects.} \section{The GRB cosmology with the Fundamental Plane Relation}\label{section5} In order to check the reliability of the fundamental plane for cosmological purposes, we here perform a series of tests. First of all, for a fixed fiducial cosmology, we plot in Fig. \ref{SNeMuGRB} the distance moduli ($\mu_{GRB}$) derived from the fundamental plane relation together with the distance moduli obtained from the SNe Ia, $\mu_{SNe}$. The left and right panels show the GRBs without and with the correction for selection biases, respectively. \begin{figure} \includegraphics[width=0.53\hsize,height=0.38\textwidth,angle=0,clip]{figures/Fig6/distancemoduliGRBSNe_NoEv.pdf} \includegraphics[width=0.53\hsize,height=0.38\textwidth,angle=0,clip]{figures/Fig6/distancemoduliGRBSNe_Evo.pdf} \caption{The distance moduli versus the logarithm of the redshift for the SNe Ia ($\mu_{SNe}$) and for the GRBs ($\mu_{GRB}$) belonging to the platinum sample, assuming the fundamental plane relation and {\bf assuming the flat $\Lambda$CDM model}.
The left panel shows the case without the correction for evolution, the right panel the case with the correction.} \label{SNeMuGRB} \end{figure} To solve the so-called circularity problem, we compute the cosmological parameters together with the coefficients of the fundamental plane relation, starting from the peak fluxes and the fluxes at the end of the plateau emission, which are the observer-frame counterparts of the peak prompt luminosity and of the luminosity at the end of the plateau emission, respectively. Thus, this procedure does not involve any cosmological model fixed a priori, and this computation leads to the best-fit cosmological parameters together with the coefficients of the fundamental plane correlation, using the right-hand side of Equation \ref{isotropic}, in which the luminosity is defined by Equation \ref{Lpeak}. More specifically, in our computation we run MCMC simulations using either uniform or Gaussian priors on ${\Omega_M, H_0}$, and ${{w}}$, and, for each sampled set of values, we compute the corresponding luminosity distance ${D_L(z, \Omega_M, H_0, w)}$ and the corresponding ${L_{peak}}$ and ${{L_X}}$; we then compute the best-fit parameters of the plane for each of these values. Thus, this procedure completely avoids the circularity problem. This method does not need any calibration of the fundamental plane relation on other local probes; the correlation's parameters are free to vary, following \cite{Dainotti2013a}. For the flat $\Lambda$CDM cosmological model, in which $w=-1$ and the radiation contribution is neglected, the luminosity distance used in Equation \ref{Lpeak} is: \begin{linenomath*} \begin{equation} \label{flatdistanceluminosity} D_L(z)=(1+z)\frac{c}{H_{0}}\int_{0}^{z} \frac{dz'}{\sqrt{\Omega_{M}(1+z')^3+(1-\Omega_{M})}}, \end{equation} \end{linenomath*} \noindent where $c$ is the speed of light. For simplicity, we can also write the integrand of Equation \ref{flatdistanceluminosity} as: \begin{linenomath*} \begin{equation} E(z)=\frac{1}{\sqrt{\Omega_{M}(1+z)^3+(1-\Omega_{M})}}. \label{E(z)} \end{equation} \end{linenomath*} We combine the GRB Platinum sample, the SNe Ia Pantheon Sample, and the BAO data presented in \cite{SharovBAO}. We note that, even if according to \cite{Riess} $\Omega_M$ and $H_0$ are kinematically independent, we have still chosen to consider the separate case of varying both of them together, as a check of how the precision reached on these quantities depends on the dimension of the parameter space. We use the fundamental plane correlation both with and without the correction computed via the EP method, to see whether this correction carries a reduction of $\sigma_{int}$ and, consequently, of the uncertainties on the cosmological parameters. We derive $\mu_{obs,GRBs}$ in such a way that it is completely independent of $\mu_{SNe}$, by manipulating the fundamental plane relation corrected for evolution: \begin{linenomath*} \begin{equation} \begin{split} \mu_{obs,GRBs}= 5 (b_{1} \log F_{p,cor}+a_{1} \log F_{X,cor} + c_{1}+d_{1} \log T^{*}_X)+25 \end{split}, \label{equmu} \end{equation} \end{linenomath*} \noindent where $\log F_{p,cor}$ and $\log F_{X,cor}$ are the logarithms of the prompt and afterglow emission fluxes, respectively, corrected by the $K$-correction and by the evolutionary functions. We show below some of the algebraic steps performed to obtain Equation \ref{equmu} starting from Equations \ref{Lpeak} and \ref{isotropic}.
For the case in which we do not take into account the evolutionary effects, considering the relation between fluxes and luminosities given by Equation \ref{Lpeak}, we obtain: \begin{linenomath*} \begin{equation} \log_{10}(4\pi d_L^2)+\log_{10}K_{X}-b \cdot (\log_{10}(4\pi d_L^2)+\log_{10}K_{peak})=b \cdot \log_{10} F_{peak}+a \cdot \log_{10} T^{*}_X+C-\log_{10} F_{X}, \end{equation} \end{linenomath*} \noindent where $K_{peak}$ and $K_{X}$ are the $K$-corrections computed for the prompt and the afterglow emission, respectively, and $a$, $b$, and $C$ are the coefficients of the fundamental plane correlation. Isolating the luminosity distance in the previous equation we obtain: \begin{linenomath*} \begin{equation} \log_{10}(d_L)=-\frac{\log_{10} F_{X}+\log_{10}K_{X}}{2 (1-b)}+\frac{b \cdot (\log_{10} F_{peak}+\log_{10}K_{peak})}{2 (1-b)} -\frac{(1-b)\log_{10}(4\pi)-C}{2 (1-b)}+ \frac{a \log_{10} T^{*}_X}{2 (1-b)}. \label{distancemoduli} \end{equation} \end{linenomath*} With new definitions of the coefficients and of the fluxes, and using the relation between luminosity distance and distance modulus, we finally recover Equation \ref{equmu}. We compare both $\mu_{obs,GRBs}$ and $\mu_{obs,SNe}$ with the theoretical $\mu_{th}$, defined as: \begin{linenomath*} \begin{equation} \mu_{th}=5 \log_{10} D_L(z,\Omega_{M}, H_0, w) +25, \label{modulus} \end{equation} \end{linenomath*} with $D_L$ expressed in Mpc. We now present the constraints given by the BAO measurements used in our computations. The data come from \cite{SharovBAO}, who refer to the equation for the $d_{z}(z')$ function defined as: \begin{equation} d_{z}(z') = \frac{r_{s}(z_{d})}{D_{V}(z')}, \label{dzformula} \end{equation} \begin{equation} \text{where} \hspace{2ex} D_{V}(z') =\frac{c}{H_{0}}\left[ \frac{z'}{E(z')}\times \left(\int_{0}^{z'}\frac{dz}{E(z)} \right)^{2}\right]^{\frac{1}{3}}, \hspace{1.5ex} \& \hspace{1.5ex} r_s(z_d) =\frac{55.154 \, e^{-72.3(\Omega_{\nu}h^{2}+0.0006)^2}}{(\Omega_{M}h^{2})^{0.25351}(\Omega_{b}h^{2})^{0.12807}}\,{\rm Mpc}. \label{eq_rsfiducialtrue} \end{equation} \noindent The value $z_d$ corresponds to the epoch at which baryons decoupled from photons, and the comoving sound horizon scale $r_s(z_d)$ is evaluated through the fitting formula from \cite{SharovBAO}.
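These quantities are straightforward to evaluate numerically; the self-contained sketch below computes $\mu_{th}$ of Equation \ref{modulus} and the BAO observable $d_z$ of Equation \ref{dzformula} for a flat $\Lambda$CDM model, with $\Omega_b h^2$ and $\Omega_\nu h^2$ set to representative values rather than to the ones actually adopted in our fits.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

C_KMS = 299792.458

def E(z, om):
    """Dimensionless Hubble rate for flat LambdaCDM."""
    return np.sqrt(om * (1.0 + z) ** 3 + (1.0 - om))

def mu_th(z, om=0.3, h0=70.0):
    """Theoretical distance modulus, with D_L in Mpc."""
    I, _ = quad(lambda zp: 1.0 / E(zp, om), 0.0, z)
    dl = (1.0 + z) * (C_KMS / h0) * I
    return 5.0 * np.log10(dl) + 25.0

def d_z(z, om=0.3, h0=70.0, obh2=0.0224, onuh2=0.0006):
    """BAO observable d_z = r_s(z_d)/D_V(z) with the quoted fitting formula."""
    h = h0 / 100.0
    omh2 = om * h * h
    r_s = 55.154 * np.exp(-72.3 * (onuh2 + 0.0006) ** 2) \
          / (omh2 ** 0.25351 * obh2 ** 0.12807)          # Mpc
    I, _ = quad(lambda zp: 1.0 / E(zp, om), 0.0, z)
    d_v = (C_KMS / h0) * (z / E(z, om) * I ** 2) ** (1.0 / 3.0)
    return r_s / d_v

print(mu_th(1.0), d_z(0.35))
\end{verbatim}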
For this approach we combine the likelihoods and write the logarithm of the total likelihood as: \begin{align} \begin{split} \log \mathcal{L}({\rm GRBs+SNe~Ia+BAO}) =& \sum_{i}\biggl[\log \biggl( \frac{1}{\sqrt{2\pi}\sigma_{\mu,i}} \biggr)- \frac{1}{2}\biggl( \frac{\mu_{th,GRB,i}-\mu_{obs,GRB,i}}{\sigma_{\mu,i}}\biggr)^{2} \biggr] \\ &-\frac{1}{2}[(\mu_{th,SNe}-\mu_{obs,SNe})^T\times C_{inv} \times (\mu_{th,SNe}-\mu_{obs,SNe}) + (\Delta d_{z})^{T} \times C_{inv}^{BAO} \times \Delta d_{z}], \label{likelihood} \end{split} \end{align} \noindent where the first term relates to the GRBs' distance moduli \citep{Dainotti2013a, Amati2019}; the second to the SNe Ia's, where $C_{inv}$ is the inverse of the covariance matrix of the SNe Ia data taken from \cite{Scolnic}; and the third to the BAO, where $C_{inv}^{BAO}$ is the inverse of the covariance matrix of the BAO data taken from \cite{SharovBAO} and $\Delta d_{z}$ is defined as $\Delta d_{z,i} = d_{z}^{obs}(z_{i}) - d_{z}^{th}(z_{i})$; $d_{z}^{obs}(z_{i})$ is taken from the data, while $d_{z}^{th}(z_{i})$ is computed with Equation \ref{dzformula}. We have also computed the cosmological parameters using the SNe Ia data alone, as well as SNe Ia+BAO, to verify whether adding GRBs would confirm the results and to what extent it could enhance the precision on the cosmological parameters. \section{Results} \label{Results} The results presented here are divided into three major steps. First, we show the capability of GRBs as standalone standardizable candles with the fundamental plane, using Equations \ref{isotropic} and \ref{planeev}, as well as the equation for $\mu_{GRB}$, Equation \ref{equmu}, without calibration and using Gaussian priors based on the SNe Ia values of \cite{Scolnic} (see Sec. \ref{GRBs alone without calibration}). Then, we show the calibration on SNe Ia using Gaussian priors (see Sec. \ref{GRB alone with calibration}). Next, we derive the cosmological parameters with uniform priors, both with and without calibrating GRBs on the SNe Ia (see Sec. \ref{uniformprior}), and we compare them with the results obtained with Gaussian priors. This analysis has the scope of showing the precision of GRBs in constraining cosmological parameters in a flat $\Lambda$CDM model. Second, we use GRBs together with SNe Ia and BAO to verify the usefulness of GRBs in combination with other probes (see Sec. \ref{SN+BAO+GRB}). These results entail the observational data of GRBs both with no correction for evolution and accounting for these corrections. The third step is the analysis of the open cosmological model with the GRB fundamental plane relation, both corrected and uncorrected for selection biases and redshift evolution (see Sec. \ref{flatnessUniverse}). \subsection{GRBs alone with no calibration with Gaussian priors}\label{GRBs alone without calibration} We here clarify that in all computations we do not minimize the relation of the evolutionary parameters as a function of $\Omega_M$, but we use the evolutionary function $k(\Omega_M)$. This is indeed an independent computation, see Sec.
\ref{section4}, which shows how the evolutionary functions depend on $\Omega_M$. There is no minimization involved in this computation. \begin{table} \addtolength{\tabcolsep}{-2pt} \centering \begin{tabular}{p{35mm}|l|c|c|c|c|c|c} \toprule[1.2pt] \toprule[1.2pt] \textbf{No calibration on SNe Ia, Equation \ref{equmu}} & {\textbf{parameters varied}} & \textbf{Model} & $\boldsymbol{\Omega_{M}}$ & $\boldsymbol{H_{0}}$ & $w$ & z-score$_{SN}$ & z-score$_{SN+BAO}$ \\ \midrule without evolution & $\Omega_{M}$ & $\Lambda$CDM & $0.316 \pm 0.063$ & \bf{70} & \bf{-1} & 0.268 & 0.190 \\\hline without evolution & $H_0$ & $\Lambda$CDM & \bf{0.30} & $73.225\pm 3.307$ & \bf{-1} & 0.984 & 0.987 \\\hline without evolution & $\Omega_{M}$ and $H_0$ & $\Lambda$CDM & $0.320 \pm 0.068 $ & $73.149\pm 3.026$ & \bf{-1} & 0.307, 1.025 & 0.137, 1.09 \\\hline without evolution & $w$ & $w$CDM & \bf{0.30} & \bf{70} & $-0.673 \pm 0.717$ & 0.460 & 0.484\\\hline \midrule with fixed evolution & $\Omega_{M}$ & $\Lambda$CDM & $0.308 \pm 0.063$ & \bf{70} & \bf{-1} & 0.142 & 0.063 \\\hline with fixed evolution & $H_0$ & $\Lambda$CDM & \bf{0.30} & $72.869\pm 2.921$ & \bf{-1} & 0.988 & 0.992 \\\hline with fixed evolution & $\Omega_{M}$ and $H_0$ & $\Lambda$CDM & $0.304 \pm 0.064 $ & $73.128\pm 3.008$ & \bf{-1} & 0.089, 1.024 & 0.109, 1.10 \\\hline with fixed evolution & $w$ & $w$CDM & \bf{0.30} & \bf{70} & $-0.977 \pm 0.620$ & 0.037 & 0.064\\\hline \midrule with $k=k(\Omega_{M})$ & $\Omega_{M}$ & $\Lambda$CDM & $0.295 \pm 0.062$ & \bf{70} & \bf{-1} & 0.064 & 0.144 \\\hline with $k=k(\Omega_{M})$ & $\Omega_{M}$ and $H_0$ & $\Lambda$CDM & $0.297 \pm 0.065 $ & $73.036\pm 3.139$ & \bf{-1} & 0.015, 0.955 & 0.214, 1.02\\\hline \toprule[1.2pt] \toprule[1.2pt] \textbf{No calibration on SNe Ia, Equations \ref{isotropic} and \ref{planeev}} & {\textbf{parameters varied}} & \textbf{Model} & $\boldsymbol{\Omega_{M}}$ & $\boldsymbol{H_{0}}$ & $w$ & z-score$_{SN}$ & z-score$_{SN+BAO}$ \\ \midrule without evolution & $\Omega_{M}$ & $\Lambda$CDM & $0.302 \pm 0.061$ & \bf{70} & \bf{-1} & 0.049 & 0.033 \\\hline without evolution & $H_0$ & $\Lambda$CDM & \bf{0.30} & $73.152\pm 3.113$ & \bf{-1} & 1.021 & 1.024 \\\hline without evolution & $\Omega_{M}$ and $H_0$ & $\Lambda$CDM & $0.302 \pm 0.064 $ & $73.074\pm 3.145$ & \bf{-1} & 0.163, 0.99 & 0.031, 1.09\\\hline without evolution & $w$ & $w$CDM & \bf{0.30} & \bf{70} & $-1.133 \pm 1.048$ & 0.127 & 0.111\\\hline \midrule with fixed evolution & $\Omega_{M}$ & $\Lambda$CDM & $0.299 \pm 0.065$ & \bf{70} & \bf{-1} & 0.000 & 0.077 \\\hline with fixed evolution & $H_0$ & $\Lambda$CDM & \bf{0.30} & $73.073\pm 3.126$ & \bf{-1} & 0.992 & 0.995 \\\hline with fixed evolution & $\Omega_{M}$ and $H_0$ & $\Lambda$CDM & $0.294 \pm 0.065 $ & $72.785\pm 3.049$ & \bf{-1} & 0.058, 0.900 & 0.260, 0.971 \\\hline with fixed evolution & $w$ & $w$CDM & \bf{0.30} & \bf{70} & $-0.978 \pm 0.662$ & 0.033 & 0.059 \\\hline \midrule with $k=k(\Omega_{M})$ & $\Omega_{M}$ & $\Lambda$CDM & $0.305 \pm 0.063$ & \bf{70} & \bf{-1} & 0.095 & 0.016 \\\hline with $k=k(\Omega_{M})$ & $\Omega_{M}$ and $H_0$ & $\Lambda$CDM & $0.305 \pm 0.064 $ & $73.126\pm 3.101$ & \bf{-1} & 0.103, 0.996 & 0.093, 1.065 \\\hline \end{tabular} \caption{Results of the fitting of the cosmological parameters without calibration on SNe Ia, using GRBs alone and Gaussian priors, with $\mu_{GRB}$ (first part) and with the fundamental plane equation, Equation \ref{isotropic} (second part), without evolution correction,
with fixed evolution, and with evolution correction as a function of $\Omega_{M}$. In the first column we define the studied case and the type of evolution correction. In the second column we define which cosmological parameters are left free to vary. In the subsequent columns we present the values of the parameters, with their error bars, obtained in the computation for the corresponding cases. The fixed values are presented in bold. In the last two columns we present a comparison of the results of each case with the ones obtained with SNe Ia alone and with SNe Ia+BAO, presented in Table \ref{Table5}. For this purpose we compute the z-score given by: $z = \frac{|X_{SN}-X_{GRB}|}{\sqrt{\sigma_{X,SN}^2+\sigma_{X,GRB}^2}}$, where $X$ is the computed value of the considered cosmological parameter for SNe Ia and GRBs, respectively, while $\sigma_{X}$ is its error.} \label{Table1} \end{table} We have tested two approaches to derive the cosmological parameters with GRBs. For each approach we vary a) both $\Omega_M$ and $H_0$, b) only $\Omega_M$, c) only $H_0$, and d) only $w$. For our computations related to GRBs we consider 3 $\sigma$ Gaussian priors based on the results and the uncertainties computed in \cite{Scolnic}. Although this procedure does not allow us to test whether there are deviations beyond the 3 $\sigma$ limit, it still allows us to understand the role and the impact of GRBs as standalone cosmological probes, and the uncertainties we can achieve considering the current state of the art. We use the following likelihoods: \begin{figure} \centering \subfloat[Varying only $\Omega_M$ without correction for evolution]{\label{fig9_a} \includegraphics[width=0.49\hsize,height=0.49\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/fundplane_without_evol_varying_Om.pdf}} \subfloat[Varying only $H_0$ without correction for evolution]{\label{fig9_b} \includegraphics[width=0.49\hsize,height=0.49\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/fundplane_without_evol_varying_H0.pdf}}\\ \subfloat[Varying both $\Omega_M$ and $H_0$ without correction for evolution]{\label{fig9_c} \includegraphics[width=0.49\hsize,height=0.49\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/fundplane_without_evol_varying_H0_and_O_m.pdf}} \subfloat[Varying only $w$ without correction for evolution]{\label{fig9_d} \includegraphics[width=0.49\hsize,height=0.49\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/fundplane_without_evol_varying_w.pdf}} \caption{Cosmological results for the GRBs alone (with no calibration) without evolution, using the equation of the fundamental plane, Equation \ref{isotropic}, and 3 $\sigma$ Gaussian priors based on the cosmological results reported in \citet{Scolnic}.
In sub-panels a), b), c), and d) we derive the values of $\Omega_{M}$, $H_{0}$, $\Omega_{M}$ and $H_{0}$ simultaneously, and $w$, respectively.} \label{fig7} \end{figure} \begin{figure} \centering \subfloat[Varying only $\Omega_{M}$ with fixed evolution]{\label{fig10_a} \includegraphics[width=0.50\hsize,height=0.50\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/fundplane_with_fixed_evol_varying_Om.pdf}} \subfloat[Varying only $H_0$ with fixed evolution]{\label{fig10_b} \includegraphics[width=0.50\hsize,height=0.50\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/fundplane_with_fixed_evol_varying_H0.pdf}}\\ \subfloat[Varying both $\Omega_M$ and $H_0$ with fixed evolution]{\label{fig10_c} \includegraphics[width=0.50\hsize,height=0.50\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/fundplane_with_fixed_evol_varying_H0_and_O_m.pdf}} \subfloat[Varying only $w$ with fixed evolution]{\label{fig10_d} \includegraphics[width=0.50\hsize,height=0.50\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/fundplane_with_fixed_evol_varying_w.pdf}} \caption{Cosmological results for the GRBs alone (with no calibration) with the fundamental plane, using fixed evolution and 3 $\sigma$ Gaussian priors on the cosmological parameters investigated, following \citet{Scolnic}. Panels a), b), c) and d) show the contours from case (ii) for $\Omega_M$, $H_0$, $\Omega_M$ and $H_0$ together, and $w$, respectively.} \label{fig8} \end{figure} \begin{table} \centering \scalebox{0.84}{\begin{tabular}{p{35mm}|l|c|c|c|c|c|c|c|c} \toprule[1.2pt] \toprule[1.2pt] \textbf{Calibration with SNe Ia, Equation \ref{equmu}} & {\textbf{parameters varied}} & \textbf{Model} & $\boldsymbol{\Omega_{M}}$ & $\boldsymbol{H_{0}}$ & $\boldsymbol{w}$ & $\boldsymbol{\Delta^{GRB_{C}}_{GRB_{NO-C}}}\%$ & z-score$_{SN}$ & z-score$_{SN+BAO}$ \\ \midrule without evolution &$\Omega_{M}$ & $\Lambda$CDM & $0.292 \pm 0.068$ & \bf{70} & \bf{-1} & 7.94 & 0.102 & 0.176 \\\hline without evolution & $H_0$ & $\Lambda$CDM & \bf{0.30} & $73.286\pm 3.007$ & \bf{-1} & -9.07 & 1.102 & 1.105\\\hline without evolution & $\Omega_{M}$ and $H_0$ & $\Lambda$CDM & $0.295 \pm 0.064 $ & $73.358\pm 3.006$ & \bf{-1} & -5.88, -0.66 & 0.044, 1.103 & 0.249,1.176 \\\hline without evolution & $w$ & $w$CDM & \bf{0.30} & \bf{70} & $-1.094 \pm 0.673$ & -6.13 & 0.140 & 0.114 \\\hline \midrule with fixed evolution & $\Omega_{M}$ & $\Lambda$CDM & $0.316 \pm 0.068$ & \bf{70} & \bf{-1} & 7.94 & 0.249 & 0.176 \\\hline with fixed evolution & $H_0$ & $\Lambda$CDM & \bf{0.30} & $72.762 \pm 3.227$ & \bf{-1} & 10.48 & 0.864 & 0.868 \\\hline with fixed evolution & $\Omega_{M}$ and $H_0$ & $\Lambda$CDM & $0.306 \pm 0.060 $ & $73.264\pm 3.082$ & \bf{-1} & -6.25, 2.46 & 0.125, 1.046 & 0.083, 1.116 \\\hline with fixed evolution & $w$ & $w$CDM & \bf{0.30} & \bf{70} & $-0.743 \pm 0.694$ & 11.94 & 0.370 & 0.395 \\\hline \midrule with $k=k(\Omega_{M})$ & $\Omega_{M}$ & $\Lambda$CDM & $0.298 \pm 0.063$ & \bf{70} & \bf{-1} &1.61 & 0.016 & 0.095 \\\hline with $k=k(\Omega_{M})$ & $\Omega_{M}$ and $H_0$ & $\Lambda$CDM & $0.295 \pm 0.064 $ & $73.159\pm 3.134$ & \bf{-1} & -1.54, -0.16 & 0.044, 0.996 & 0.249, 1.064 \\\hline \toprule[1.2pt] \toprule[1.2pt] \textbf{Calibration with SNe Ia, Equations \ref{isotropic} and \ref{planeev}} & {\textbf{parameters varied}} & \textbf{Model} & $\boldsymbol{\Omega_{M}}$ & $\boldsymbol{H_{0}}$ & $\boldsymbol{w}$ & $\boldsymbol{\Delta^{GRB_{C}}_{GRB_{NO-C}}}\%$ & z-score$_{SN}$ & z-score$_{SN+BAO}$ \\ \midrule without evolution & $\Omega_{M}$ & $\Lambda$CDM & $0.306 \pm 0.069$ & \bf{70} & \bf{-1} & 13.11 & 0.101 & 0.029 \\\hline without evolution & $H_0$ & $\Lambda$CDM & \bf{0.30} & $73.519\pm 3.119$ & \bf{-1} & 0.19 & 1.137 & 1.140 \\\hline without evolution & $\Omega_{M}$ and $H_0$ & $\Lambda$CDM & $0.301 \pm 0.065 $ & $73.089\pm 3.251$ & \bf{-1} & 1.56, 3.37 & 0.044, 0.939 & 0.083, 1.004\\\hline without evolution & $w$ & $w$CDM & \bf{0.30} & \bf{70} & $-0.906 \pm 0.697$ & -33.49 & 0.159 & 0.159\\\hline \midrule with fixed evolution & $\Omega_{M}$ & $\Lambda$CDM & $0.295 \pm 0.060$ & \bf{70} & \bf{-1} & -7.69 & 0.066 & 0.149 \\\hline with fixed evolution & $H_0$ & $\Lambda$CDM & \bf{0.30} & $73.272 \pm 3.143$ & \bf{-1} & 0.54 & 1.050 & 1.140 \\\hline with fixed evolution & $\Omega_{M}$ and $H_0$ & $\Lambda$CDM & $0.296 \pm 0.066 $ & $73.201\pm 3.062$ & \bf{-1} & 1.54, 0.43 & 0.029, 1.033 & 0.226, 1.103 \\\hline with fixed evolution & $w$ & $w$CDM & \bf{0.30} & \bf{70} & $-0.959 \pm 0.631$ & -4.68 & 0.065 & 0.0918 \\\hline \midrule with $k=k(\Omega_{M})$ & $\Omega_{M}$ & $\Lambda$CDM & $0.300 \pm 0.073$ & \bf{70} & \bf{-1} & 15.87 & 0.014 & 0.055\\\hline with $k=k(\Omega_{M})$ & $\Omega_{M}$ and $H_0$ & $\Lambda$CDM & $0.296 \pm 0.064 $ & $73.024\pm 3.073$ & \bf{-1} & 0, -0.90 & 0.030, 0.972 & 0.233,1.042 \\\hline \end{tabular}} \caption{Cosmological parameters obtained using GRBs alone calibrated on SNe Ia (indicated with the subscript C), using Gaussian priors, with the distance modulus equation (first part) and with the fundamental plane equation (second part), without evolution, with fixed evolution, and with evolution correction as a function of $\Omega_{M}$. In the first column we define the studied type of correction for the evolution. In the second column we define which cosmological parameters are left free to vary, while in the third column we define the investigated cosmological model. In the next three columns we present the values of the parameters, with their error bars, obtained in the computation for each given case, with the fixed cosmological parameters shown in bold. In the 7th column, we show the percentage increase/decrease (${\Delta_{GRB} \%}$) between the uncertainties of GRBs alone without calibration and with Gaussian priors from Table \ref{Table1} (indicated with NO-C) and the results from this Table with calibration (indicated with C) and with Gaussian priors. The formula for the percentage change is: $\frac{\Delta_{\text{comparing}}-\Delta_{\text{reference}}}{\Delta_{\text{reference}}}$, where $\Delta_{\text{reference}}$ is the error obtained with GRBs without calibration. A negative sign indicates a percentage decrease in the error, while a positive one indicates a percentage increase.
Finally, in the last two columns we show the z-scores computed with respect to the SNe Ia alone and to the SNe Ia + BAO results, respectively.} \label{Table2} \end{table} \begin{figure} \centering \subfloat[Varying only $\Omega_M$ with evolutionary function]{\label{fig11_a} \includegraphics[width=0.50\hsize,height=0.50\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/fundplane_with_evol_func_varying_Om.pdf}} \subfloat[Varying both $\Omega_M$ and $H_0$ with evolutionary function]{\label{fig11_c} \includegraphics[width=0.50\hsize,height=0.50\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/fundplane_with_evol_func_varying_H0_and_O_m.pdf}} \caption{Cosmological results for the GRBs alone (with no calibration) with the fundamental plane, using the evolutionary functions and the assumption of 3 $\sigma$ Gaussian priors on the cosmological parameters investigated, following \citet{Scolnic}. Panels a) and b) show the contours from case (iii) for $\Omega_M$ and for $\Omega_M$ and $H_0$ together, respectively. } \label{fig9} \end{figure} \begin{figure} \centering \subfloat[Varying only $\Omega_M$ without correction for evolution]{\label{fig12_a} \includegraphics[width=0.50\hsize,height=0.50\textwidth,angle=0,clip]{figures/Only_GRB_DM/DM_noEvolution_Omega_m.pdf}} \subfloat[Varying only $H_0$ without correction for evolution]{\label{fig12_b} \includegraphics[width=0.50\hsize,height=0.50\textwidth,angle=0,clip]{figures/Only_GRB_DM/DM_noEvolution_H0.pdf}}\\ \subfloat[Varying both $\Omega_M$ and $H_0$ without correction for evolution]{\label{fig12_c} \includegraphics[width=0.50\hsize,height=0.50\textwidth,angle=0,clip]{figures/Only_GRB_DM/DM_noEvolution_Omega_m_and_H0.pdf}} \subfloat[Varying only $w$ without correction for evolution]{\label{fig12_d} \includegraphics[width=0.50\hsize,height=0.50\textwidth,angle=0,clip]{figures/Only_GRB_DM/DM_noEvolution_w.pdf}} \caption{Cosmological results for the GRBs alone (with no calibration) with $\mu_{GRB}$, see Equation \ref{equmu}, without evolution and with the assumption of 3 $\sigma$ Gaussian priors on the cosmological parameters investigated, following \citet{Scolnic}. Panels a), b), c) and d) show the contours from case (iv) for $\Omega_M$, $H_0$, $\Omega_M$ and $H_0$ together, and $w$, respectively. } \label{fig10} \end{figure} \begin{figure} \centering \subfloat[Varying only $\Omega_M$ with fixed evolution]{\label{fig13_a} \includegraphics[width=0.50\hsize,height=0.50\textwidth,angle=0,clip]{figures/Only_GRB_DM/DM_FixedEvolution_Omega_m.pdf}} \subfloat[Varying only $H_0$ with fixed evolution]{\label{fig13_b} \includegraphics[width=0.50\hsize,height=0.50\textwidth,angle=0,clip]{figures/Only_GRB_DM/DM_FixedEvolution_H0.pdf}}\\ \subfloat[Varying both $\Omega_M$ and $H_0$ with fixed evolution]{\label{fig13_c} \includegraphics[width=0.50\hsize,height=0.50\textwidth,angle=0,clip]{figures/Only_GRB_DM/DM_FixedEvolution_Omega_m_and_H0.pdf}} \subfloat[Varying only $w$ with fixed evolution]{\label{fig13_d} \includegraphics[width=0.50\hsize,height=0.50\textwidth,angle=0,clip]{figures/Only_GRB_DM/DM_FixedEvolution_w.pdf}} \caption{Cosmological results for the GRBs alone (with no calibration) with $\mu_{GRB}$, see Equation \ref{equmu}, using fixed evolution and with the assumption of 3 $\sigma$ Gaussian priors on the cosmological parameters investigated, following \citet{Scolnic}. Panels a), b), c) and d) show the contours from case (v) for $\Omega_M$, $H_0$, $\Omega_M$ and $H_0$ together, and $w$, respectively.
} \label{fig11} \end{figure} \begin{figure} \centering \subfloat[Varying only $\Omega_M$ with evolutionary function]{\label{fig14_a} \includegraphics[width=0.50\hsize,height=0.50\textwidth,angle=0,clip]{figures/Only_GRB_DM/DM_FuncEvolution_Omega_m.pdf}} \subfloat[Varying both $\Omega_M$ and $H_0$ with evolutionary function]{ \includegraphics[width=0.50\hsize,height=0.50\textwidth,angle=0,clip]{figures/Only_GRB_DM/DM_FuncEvolution_Omega_m_and_H0.pdf}} \caption{Cosmological results for the GRBs alone (with no calibration) with $\mu_{GRB}$, using the evolutionary functions and the assumption of 3 $\sigma$ Gaussian priors on the cosmological parameters investigated, following \citet{Scolnic}. Panels a) and b) show the contours from case (vi) for $\Omega_M$ and for $\Omega_M$ and $H_0$ together, respectively. } \label{fig12} \end{figure} \begin{enumerate} \item \hspace{1mm} We assume a likelihood with Equations \ref{Lpeak} and \ref{isotropic}, without evolution, see Fig. \ref{fig7}. \item \hspace{1mm} We assume a likelihood with Equation \ref{planeev}, with the evolution considering fixed values of $k_{L_{peak}}$, $k_{L_{a}}$, and $k_{T_{X}}$, see Fig. \ref{fig8}. \item \hspace{1mm} We assume a likelihood with Equation \ref{planeev}, considering a function for the evolutionary parameters $k_{L_{peak}}$ and $k_{L_{a}}$, since they vary together with the cosmological parameters, while we fix $k_{T_{X}}$, since it does not depend on the cosmological model, see Fig. \ref{fig9}. \item \hspace{1mm} The likelihood from Equation \ref{equmu}, $\mu_{GRB}$, without evolution, see Fig. \ref{fig10}. \item \hspace{1mm} The likelihood from Equation \ref{equmu}, $\mu_{GRB}$, with fixed evolutionary parameters, see Fig. \ref{fig11}. \item \hspace{1mm} The likelihood from Equation \ref{equmu}, $\mu_{GRB}$, with the evolutionary parameters $k_{L_{a}}$ and $k_{L_{peak}}$ as functions of $\Omega_M$, see Fig. \ref{fig12}. \end{enumerate} The Gaussian priors are justified by the fact that the underlying physics of the fundamental plane is not expected to change with the assumed cosmology, since it relies on a fundamental physical process: the magnetar emission, which gives rise to the plateau and, in turn, naturally drives the anti-correlation between $L_X$ and $T_a$ \citep{Rowlinson,Rea2015,Stratta}, while the prompt kinetic energy is positively correlated with the kinetic power in the afterglow \citep{dainotti2011,dainotti2015b}, as predicted within the standard fireball model assuming microphysical variations \citep{vanEerten2014a, vanEerten2014b}. \begin{figure} \centering \subfloat[Plane parameters without evolution]{\label{fig15_a} \includegraphics[width=0.440\hsize,height=0.4365\textwidth,angle=0,clip]{figures/FP_DM_withoutEvol/FPnearest_abcsv.pdf}} \subfloat[Plane parameters with fixed evolution]{\label{fig15_b} \includegraphics[width=0.440\hsize,height=0.4365\textwidth,angle=0,clip]{figures/FP_DM_withfixedEvol/FPnearest_abcsv.pdf}} \caption{The fundamental plane relation parameters with the nearest 25 GRBs used to calibrate them on SNe Ia, using $\mu_{GRB}$, Equation \ref{equmu}.
Panels a) and b) show the contours of the plane fitting parameters without evolution and with fixed evolution, respectively.} \label{fig13} \end{figure} \begin{figure} \centering \subfloat[Using $\mu_{GRB}$ on full sample varying only\\ $\Omega_M$ without correction for evolution]{\label{fig16_a} \includegraphics[width=0.40\hsize,height=0.36\textwidth,angle=0,clip]{figures/FP_DM_withoutEvol/DMfull_omega_m.pdf}}\hspace{5mm} \subfloat[Using $\mu_{GRB}$ on full sample varying only\\ $H_0$ without correction for evolution]{\label{fig16_b} \includegraphics[width=0.40\hsize,height=0.36\textwidth,angle=0,clip]{figures/FP_DM_withoutEvol/DMfull_H0.pdf}}\\ \subfloat[Using $\mu_{GRB}$ on full sample varying both $\Omega_M$\\ and $H_0$ without correction for evolution]{\label{fig16_c} \includegraphics[width=0.40\hsize,height=0.36\textwidth,angle=0,clip]{figures/FP_DM_withoutEvol/DMfull_omega_m_H0.pdf}}\hspace{5mm} \subfloat[Using $\mu_{GRB}$ on full sample varying only\\ $w$ without correction for evolution]{\label{fig16_d} \includegraphics[width=0.40\hsize,height=0.36\textwidth,angle=0,clip]{figures/FP_DM_withoutEvol/DMfull_w.pdf}} \caption{Cosmological results for the GRBs alone (with calibration on SNe Ia) with $\mu_{GRB}$, using no correction for the evolution and the assumption of 3 $\sigma$ Gaussian priors on the cosmological parameters investigated, following \citet{Scolnic}. Panels a), b), c) and d) show the contours from case (vi) for $\Omega_M$, $H_0$, $\Omega_M$ and $H_0$ together, and $w$, respectively.} \label{fig14} \end{figure} \subsection{GRBs alone calibrated with SNe Ia with Gaussian priors}\label{GRB alone with calibration} When it comes to observational cosmology, one can set up a standard candle by calibrating it with other well-known cosmological probes. This method is widely used in the literature with different approaches; it is, indeed, the main procedure used to build the so-called cosmological ladder, on which the most up-to-date late-universe cosmological results are based. To investigate the case of GRBs calibrated with SNe Ia we performed the following procedure: \begin{enumerate} \item \hspace{1mm} First, we fit the set of $a$, $b$, $c$, and $\sigma_{int}$ parameters of the GRB fundamental plane using the part of our GRB sample whose redshift overlaps with the redshift range of the SNe Ia (up to $z=2.25$), which corresponds to 25 GRBs. We fix the $H_{0}$ and $\Omega_{M}$ parameters considering the values obtained using SNe Ia alone (for simplicity, we use $H_{0}=70\,\rm km\,s^{-1}\,Mpc^{-1}$ and $\Omega_{M}=0.3$), but using 3 $\sigma$ Gaussian priors based on the values of \citet{Scolnic}. \item \hspace{1mm} We have then performed the same steps i)--vi) as in the previous Sec. \ref{GRBs alone without calibration}, with the only difference that the parameters of the plane are fixed to those obtained from the sample of the 25 GRBs with $z<2.25$ overlapping with the SNe Ia.
\end{enumerate} \begin{table} \centering \scalebox{0.885}{ \begin{tabular}{p{35mm}|l|c|c|c|c|c|c|c} \toprule[1.2pt] \toprule[1.2pt] \textbf{No calibration with SNe Ia with uniform priors, Equation \ref{equmu}} & {\textbf{parameters varied}} & \textbf{Model} & $<\boldsymbol{\Omega_{M}}>$ & $<\boldsymbol{H_{0}}>$ & $<\boldsymbol{w}>$ & $\boldsymbol{\Delta^{GRB_{U}}_{GRB_{G}}}\%$ & z-score$_{SN}$ & z-score$_{SN+BAO}$ \\ \midrule without evolution &$\Omega_{M}$ & $\Lambda$CDM & $0.58 \pm 0.27$ & \bf{70} & \bf{-1} & 328.57 & 1.040 & 1.022 \\\hline without evolution & $H_0$ & $\Lambda$CDM & \bf{0.30} & $77.51\pm 14.14$ & \bf{-1} & 327.58 & 0.533 & 0.534 \\\hline without evolution & $w$ & $w$CDM & \bf{0.30} & \bf{70} & $-0.91 \pm 0.58$ & -19.11 & 0.155 & 0.184 \\\hline \midrule with fixed evolution & $\Omega_{M}$ & $\Lambda$CDM & $0.56 \pm 0.27$ & \bf{70} & \bf{-1} & 328.57 & 0.966 & 0.948 \\\hline with fixed evolution & $H_0$ & $\Lambda$CDM & \bf{0.30} & $77.48 \pm 14.15$ & \bf{-1} & 384.42 & 0.531 & 0.531 \\\hline with fixed evolution & $w$ & $w$CDM & \bf{0.30} & \bf{70} & $-0.93 \pm 0.58$ & -6.45 & 0.121 & 0.150 \\\hline \midrule with $k=k(\Omega_{M})$ & $\Omega_{M}$ & $\Lambda$CDM & $0.56 \pm 0.27$ & \bf{70} & \bf{-1} & 335.48 & 0.966 & 0.948 \\\hline \toprule[1.2pt] \toprule[1.2pt] \textbf{No calibration with uniform priors, Equations \ref{isotropic} and \ref{planeev}} & {\textbf{parameters varied}} & \textbf{Model} & $<\boldsymbol{\Omega_{M}}>$ & $<\boldsymbol{H_{0}}>$ & $<\boldsymbol{w}>$ & $\boldsymbol{\Delta^{GRB_{U}}_{GRB_{G}}}\%$ & z-score$_{SN}$ & z-score$_{SN+BAO}$ \\ \midrule without evolution & $\Omega_{M}$ & $\Lambda$CDM & $0.53 \pm 0.28$ & \bf{70} & \bf{-1} & 359.01 & 0.825 & 0.807 \\\hline without evolution & $H_0$ & $\Lambda$CDM & \bf{0.30} & $75.00\pm 14.17$ & \bf{-1} & 355.19 & 0.355 & 0.356 \\\hline without evolution & $w$ & $w$CDM & \bf{0.30} & \bf{70} & $-0.98 \pm 0.58$ & -44.65 & 0.034 & 0.064 \\\hline \midrule with fixed evolution & $\Omega_{M}$ & $\Lambda$CDM & $0.53 \pm 0.27$ & \bf{70} & \bf{-1} & 315.38 & 0.855 & 0.837 \\\hline with fixed evolution & $H_0$ & $\Lambda$CDM & \bf{0.30} & $74.99 \pm 14.27$ & \bf{-1} & 356.49 & 0.352 & 0.352 \\\hline with fixed evolution & $w$ & $w$CDM & \bf{0.30} & \bf{70} & $-0.98 \pm 0.58$ & -12.39 & 0.034 & 0.064 \\\hline \midrule with $k=k(\Omega_{M})$ & $\Omega_{M}$ & $\Lambda$CDM & $0.55 \pm 0.27$ & \bf{70} & \bf{-1} & 328.57 & 0.929 & 0.911 \\\hline \end{tabular}} \caption{Averaged cosmological parameters of 100 runs with no calibration, using GRBs alone and assuming uniform priors (indicated with the subscript U), with $\mu_{GRB}$ (first part of the Table) and with the fundamental plane equation, Equation \ref{isotropic} (second part), without evolution and with the evolution correction as a function of $\Omega_{M}$. In the header we use the notation ``$<>$'' to distinguish the results obtained in this analysis from the ones for which Gaussian priors have been considered. The third column from the last corresponds to the percentage change in the errors, computed by comparing the current results with the ones obtained with GRBs alone, with Gaussian priors (indicated with the subscript G), and without calibration, taking as a reference point the GRB values from Table \ref{Table1}.
The last two columns give the z-scores computed taking as reference the SNe Ia and the SNe Ia+BAO results, respectively.} \label{Table3} \end{table}
\begin{figure} \centering
\subfloat[Using $\mu_{GRB}$ on full sample varying only $\Omega_M$ with\\ correction for evolution with fixed parameters]{\label{fig17_a} \includegraphics[width=0.40\hsize,height=0.40\textwidth,angle=0,clip]{figures/FP_DM_withfixedEvol/DMfull_omega_m.pdf}}\hspace{5mm}
\subfloat[Using $\mu_{GRB}$ on full sample varying only $H_0$ with\\ correction for evolution with fixed parameters]{\label{fig17_b} \includegraphics[width=0.40\hsize,height=0.40\textwidth,angle=0,clip]{figures/FP_DM_withfixedEvol/DMfull_H0.pdf}}\\
\subfloat[Using $\mu_{GRB}$ on full sample varying both $\Omega_M$\\ and $H_0$ with correction for evolution with fixed parameters]{\label{fig17_c} \includegraphics[width=0.40\hsize,height=0.40\textwidth,angle=0,clip]{figures/FP_DM_withfixedEvol/DMfull_omega_m_H0.pdf}}\hspace{5mm}
\subfloat[Using $\mu_{GRB}$ on full sample varying only $w$ with\\ correction for evolution with fixed parameters]{\label{fig17_d} \includegraphics[width=0.40\hsize,height=0.40\textwidth,angle=0,clip]{figures/FP_DM_withfixedEvol/DMfull_w.pdf}}
\caption{Cosmological results for the GRBs alone (with calibration on SNe Ia) with $\mu_{GRB}$, using the correction for evolution with fixed parameters and assuming 3 $\sigma$ Gaussian priors on the investigated cosmological parameters following \citet{Scolnic}. Panels a), b), c), and d) show the contours from case (vi) for $\Omega_M$, $H_0$, $\Omega_M$ and $H_0$ together, and $w$, respectively.} \label{fig15} \end{figure}
\begin{figure} \centering
\subfloat[Using $\mu_{GRB}$ on full sample varying only\\ $\Omega_M$ with evolutionary function correction]{\label{fig18_a} \includegraphics[width=0.44\hsize,height=0.44\textwidth,angle=0,clip]{figures/FP_DM_withfuncEvol/DMfull_omega_m.pdf}}\hspace{5mm}
\subfloat[Using $\mu_{GRB}$ on full sample varying both $\Omega_M$\\ and $H_0$ with evolutionary function correction]{\label{fig18_c} \includegraphics[width=0.44\hsize,height=0.44\textwidth,angle=0,clip]{figures/FP_DM_withfuncEvol/DMfull_omega_m_H0.pdf}}\hspace{5mm}
\caption{Cosmological results for the GRBs alone (with calibration on SNe Ia) with $\mu_{GRB}$, correcting with the evolutionary functions and assuming 3 $\sigma$ Gaussian priors on the investigated cosmological parameters following \citet{Scolnic}. Panels a) and b) show the contours from case (vi) for $\Omega_M$ and for $\Omega_M$ and $H_0$ together, respectively.} \label{fig16} \end{figure}
\begin{figure} \centering
\subfloat[Plane parameters without evolution]{\label{fig19_a} \includegraphics[width=0.44\hsize,height=0.4326\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/calibration_without_evol_varying_a_b_c_sv.pdf}}
\subfloat[Plane parameters with fixed evolution]{\label{fig19_b} \includegraphics[width=0.44\hsize,height=0.4326\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/calibration_with_fixed_evol_varying_a_b_c_sv.pdf}}
\caption{The fundamental plane relation parameters obtained with the 25 GRBs whose redshifts overlap with the SNe Ia range, used for the calibration on SNe Ia, with Equation \ref{isotropic}.
Panels a) and b) show the contours of the plane fitting parameters without evolution and with fixed evolution, respectively.} \label{fig17} \end{figure}
\begin{figure} \centering
\subfloat[Using fundamental plane on full sample varying\\ only $\Omega_M$ without correction for evolution]{\label{fig20_a} \includegraphics[width=0.40\hsize,height=0.35\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/calibration_without_evol_varying_O_m.pdf}}\hspace{5mm}
\subfloat[Using fundamental plane on full sample varying\\ only $H_0$ without correction for evolution]{\label{fig20_b} \includegraphics[width=0.40\hsize,height=0.35\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/calibration_without_evol_varying_H0.pdf}}\\
\subfloat[Using fundamental plane on full sample varying both\\ $\Omega_M$ and $H_0$ without correction for evolution]{\label{fig20_c} \includegraphics[width=0.40\hsize,height=0.35\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/calibration_without_evol_varying_H0_and_O_m.pdf}} \hspace{5mm}
\subfloat[Using fundamental plane on full sample varying\\ only $w$ without correction for evolution]{\label{fig20_d} \includegraphics[width=0.40\hsize,height=0.35\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/calibration_without_evol_varying_w.pdf}}
\caption{Cosmological results for the GRBs alone (with calibration) using the fundamental plane, Equation \ref{isotropic}, with no evolution and assuming 3 $\sigma$ Gaussian priors on the investigated cosmological parameters following \citet{Scolnic}. Panels a), b), c), and d) show the contours from case (vi) for $\Omega_M$, $H_0$, $\Omega_M$ and $H_0$ together, and $w$, respectively.} \label{fig18} \end{figure}
\begin{figure} \centering
\subfloat[Using fundamental plane on full sample varying only\\ $\Omega_M$ correcting with the fixed parameters of the evolution]{\label{fig21_a} \includegraphics[width=0.40\hsize,height=0.40\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/calibration_with_fixed_evol_varying_O_m.pdf}}\hspace{5mm}
\subfloat[Using fundamental plane on full sample varying only\\ $H_0$ correcting with the fixed parameters of the evolution]{\label{fig21_b} \includegraphics[width=0.40\hsize,height=0.40\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/calibration_with_fixed_evol_varying_H0.pdf}}\\
\subfloat[Using fundamental plane on full sample varying both $\Omega_M$\\ and $H_0$ correcting with the fixed parameters of the evolution]{\label{fig21_c} \includegraphics[width=0.40\hsize,height=0.40\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/calibration_with_fixed_evol_varying_H0_and_O_m.pdf}} \hspace{5mm}
\subfloat[Using fundamental plane on full sample varying only\\ $w$ correcting with the fixed parameters of the evolution]{\label{fig21_d} \includegraphics[width=0.40\hsize,height=0.40\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/calibration_with_fixed_evol_varying_w.pdf}}
\caption{Cosmological results for the GRBs alone (with calibration on SNe Ia) using the fundamental plane, Equation \ref{isotropic}, with fixed evolution and assuming 3 $\sigma$ Gaussian priors on the investigated cosmological parameters following \citet{Scolnic}.
Panels a), b), c), and d) show the contours from case (vi) for $\Omega_M$, $H_0$, $\Omega_M$ and $H_0$ together, and $w$, respectively.} \label{fig19} \end{figure}
\begin{figure} \centering
\subfloat[Using fundamental plane on full sample varying only $\Omega_M$]{\label{fig22_a} \includegraphics[width=0.44\hsize,height=0.44\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/calibration_with_evol_func_varying_O_m.pdf}}\hspace{5mm}
\subfloat[Using fundamental plane on full sample varying both $\Omega_M$ and $H_0$]{\label{fig22_c} \includegraphics[width=0.44\hsize,height=0.44\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/calibration_with_evol_func_varying_H0_and_O_m.pdf}} \hspace{5mm}
\caption{Cosmological results for the GRBs alone (with calibration on SNe Ia) using the fundamental plane, Equation \ref{isotropic}, with evolutionary functions and assuming 3 $\sigma$ Gaussian priors on the investigated cosmological parameters following \citet{Scolnic}. Panels a) and b) show the contours from case (vi) for $\Omega_M$ and for $\Omega_M$ and $H_0$ varied together, respectively.} \label{fig20} \end{figure}
The results for this analysis are presented without the corrections for evolution in Figs. \ref{fig14} and \ref{fig18}; with the correction for the evolutionary effects and selection biases in Figs. \ref{fig15} and \ref{fig19}; and considering these corrections, but with the evolutionary parameter computed as a function of $\Omega_{M}$, $k=k(\Omega_{M})$, in Figs. \ref{fig16} and \ref{fig20}. We find that all the results of GRBs alone without calibration lie within 1 $\sigma$ of the cosmological results obtained with calibration on SNe Ia. The percentage change in the uncertainties of the cosmological parameters is shown in the seventh column of Table \ref{Table2}. We also compare the cosmological parameters obtained with GRBs alone calibrated using Gaussian priors against the results obtained with SNe Ia alone. We find that the results, both with the $\mu_{GRB}$ likelihood (Equation \ref{equmu}) and with the fundamental plane likelihood (Equations \ref{isotropic} and \ref{planeev}), fall within 1 $\sigma$, as shown by the z-scores in Table \ref{Table2}. The only exceptions are the cases of GRBs calibrated with SNe Ia using the fundamental plane, without evolution and with fixed evolution, when varying $H_0$: there the z-score is slightly larger than 1 ($z=1.140$). All the z-scores with respect to the GRB results are shown in the last two columns of Table \ref{Table2}.
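Although not written out explicitly above, the z-scores quoted in this and the following sections are consistent with the definition adopted in Sec. \ref{flatnessUniverse}, namely the deviation of the GRB-derived parameter from the reference (SNe Ia or SNe Ia+BAO) value in units of the GRB uncertainty:
\begin{linenomath*}
\begin{equation}
z\text{-score} = \frac{|\theta_{GRB}-\theta_{ref}|}{\Delta_{\theta_{GRB}}}.
\end{equation}
\end{linenomath*}
For instance, $H_0 = 77.51 \pm 14.14$ from Table \ref{Table3} compared with the SNe Ia value $H_0 = 69.97 \pm 0.13$ of Table \ref{Table5} gives $|77.51-69.97|/14.14 \simeq 0.533$, as quoted.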
\begin{figure} \centering
\subfloat[Varying only $\Omega_M$ without evolution]{\includegraphics[width=0.32\hsize,height=0.27\textwidth,angle=0,clip]{figures/Only_GRB_DM/meandistributions/Calibration_DM_withoutevol_omega_m.pdf}}
\subfloat[Varying only $H_0$ without evolution]{\includegraphics[width=0.32\hsize,height=0.27\textwidth,angle=0,clip]{figures/Only_GRB_DM/meandistributions/Calibration_DM_withoutevol_H0.pdf}}
\subfloat[Varying only $w$ without evolution]{\includegraphics[width=0.32\hsize,height=0.27\textwidth,angle=0,clip]{figures/Only_GRB_DM/meandistributions/Calibration_DM_withoutevol_w.pdf}}\\
\subfloat[Varying only $\Omega_M$ with fixed evolution]{\includegraphics[width=0.32\hsize,height=0.27\textwidth,angle=0,clip]{figures/Only_GRB_DM/meandistributions/Calibration_DM_fixedevol_omega_m.pdf}}
\subfloat[Varying only $H_0$ with fixed evolution]{\includegraphics[width=0.32\hsize,height=0.27\textwidth,angle=0,clip]{figures/Only_GRB_DM/meandistributions/Calibration_DM_fixedevol_H0.pdf}}
\subfloat[Varying only $w$ with fixed evolution]{\includegraphics[width=0.32\hsize,height=0.27\textwidth,angle=0,clip]{figures/Only_GRB_DM/meandistributions/Calibration_DM_fixedevol_w.pdf}}\\
\subfloat[Varying only $\Omega_M$ with evolutionary function]{\includegraphics[width=0.32\hsize,height=0.27\textwidth,angle=0,clip]{figures/Only_GRB_DM/meandistributions/Calibration_DM_evolfunc_omega_m.pdf}}
\caption{The distributions of the cosmological parameters for the GRBs with calibration and with $\mu_{GRB}$, assuming uniform priors, over 100 MCMC runs. The upper, central, and lower rows show the cases with no evolution, with fixed evolution parameters, and with the evolutionary functions, respectively; within each row, the panels refer to $\Omega_M$, $H_0$, and $w$ (only $\Omega_M$ for the evolutionary-function case).} \label{fig21} \end{figure}
\begin{figure} \centering
\subfloat[Varying only $\Omega_M$ without evolution]{\includegraphics[width=0.32\hsize,height=0.27\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/meandistributions/Calibration_FP_withoutevol_omega_m.pdf}}
\subfloat[Varying only $H_0$ without evolution]{\includegraphics[width=0.32\hsize,height=0.27\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/meandistributions/Calibration_FP_withoutevol_H0.pdf}}
\subfloat[Varying only $w$ without evolution]{\includegraphics[width=0.32\hsize,height=0.27\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/meandistributions/Calibration_FP_withoutevol_w.pdf}}\\
\subfloat[Varying only $\Omega_M$ with fixed evolution]{\includegraphics[width=0.32\hsize,height=0.27\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/meandistributions/Calibration_FP_fixedevol_omega_m.pdf}}
\subfloat[Varying only $H_0$ with fixed evolution]{\includegraphics[width=0.32\hsize,height=0.27\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/meandistributions/Calibration_FP_fixedevol_H0.pdf}}
\subfloat[Varying only $w$ with fixed evolution]{\includegraphics[width=0.32\hsize,height=0.27\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/meandistributions/Calibration_FP_fixedevol_w.pdf}}\\
\subfloat[Varying only $\Omega_M$ with evolutionary function]{\includegraphics[width=0.32\hsize,height=0.27\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/meandistributions/Calibration_FP_evolfunc_omega_m.pdf}}
\caption{The distributions of the cosmological parameters for the GRBs with calibration and with $L_X$, Equations \ref{isotropic} and \ref{planeev}, assuming uniform priors, over 100 MCMC runs.
The upper, central, and lower rows show the cases with no evolution, with fixed evolution parameters, and with the evolutionary functions, respectively; within each row, the panels refer to $\Omega_M$, $H_0$, and $w$ (only $\Omega_M$ for the evolutionary-function case).} \label{fig22} \end{figure}
\begin{figure} \centering
\subfloat[Varying only $\Omega_M$ without evolution]{\includegraphics[width=0.32\hsize,height=0.27\textwidth,angle=0,clip]{figures/Only_GRB_DM/meandistributions/DM_withoutevol_omega_m.pdf}}
\subfloat[Varying only $H_0$ without evolution]{\includegraphics[width=0.32\hsize,height=0.27\textwidth,angle=0,clip]{figures/Only_GRB_DM/meandistributions/DM_withoutevol_H0.pdf}}
\subfloat[Varying only $w$ without evolution]{\includegraphics[width=0.32\hsize,height=0.27\textwidth,angle=0,clip]{figures/Only_GRB_DM/meandistributions/DM_withoutevol_w.pdf}}\\
\subfloat[Varying only $\Omega_M$ with fixed evolution]{\includegraphics[width=0.32\hsize,height=0.27\textwidth,angle=0,clip]{figures/Only_GRB_DM/meandistributions/DM_fixedevol_omega_m.pdf}}
\subfloat[Varying only $H_0$ with fixed evolution]{\includegraphics[width=0.32\hsize,height=0.27\textwidth,angle=0,clip]{figures/Only_GRB_DM/meandistributions/DM_fixedevol_H0.pdf}}
\subfloat[Varying only $w$ with fixed evolution]{\includegraphics[width=0.32\hsize,height=0.27\textwidth,angle=0,clip]{figures/Only_GRB_DM/meandistributions/DM_fixedevol_w.pdf}}\\
\subfloat[Varying only $\Omega_M$ with evolutionary function]{\includegraphics[width=0.32\hsize,height=0.27\textwidth,angle=0,clip]{figures/Only_GRB_DM/meandistributions/DM_evolfunc_omega_m.pdf}}
\caption{The distributions of the cosmological parameters for the GRBs without calibration and with $\mu_{GRB}$, assuming uniform priors, over 100 MCMC runs. The upper, central, and lower rows show the cases with no evolution, with fixed evolution parameters, and with the evolutionary functions, respectively; within each row, the panels refer to $\Omega_M$, $H_0$, and $w$ (only $\Omega_M$ for the evolutionary-function case).} \label{fig23} \end{figure}
\begin{figure} \centering
\subfloat[Varying only $\Omega_M$ without evolution]{\includegraphics[width=0.32\hsize,height=0.27\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/meandistributions/FP_withoutevol_omega_m.pdf}}
\subfloat[Varying only $H_0$ without evolution]{\includegraphics[width=0.32\hsize,height=0.27\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/meandistributions/FP_withoutevol_H0.pdf}}
\subfloat[Varying only $w$ without evolution]{\includegraphics[width=0.32\hsize,height=0.27\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/meandistributions/FP_withoutevol_w.pdf}}\\
\subfloat[Varying only $\Omega_M$ with fixed evolution]{\includegraphics[width=0.32\hsize,height=0.27\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/meandistributions/FP_fixedevol_omega_m.pdf}}
\subfloat[Varying only $H_0$ with fixed evolution]{\includegraphics[width=0.32\hsize,height=0.27\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/meandistributions/FP_fixedevol_H0.pdf}}
\subfloat[Varying only $w$ with fixed evolution]{\includegraphics[width=0.32\hsize,height=0.27\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/meandistributions/FP_fixedevol_w.pdf}}\\
\subfloat[Varying only $\Omega_M$ with evolutionary function]{\includegraphics[width=0.32\hsize,height=0.27\textwidth,angle=0,clip]{figures/Only_GRB_plots_FP/meandistributions/FP_evolfunc_omega_m.pdf}}
\caption{The distributions of the cosmological parameters for the GRBs without calibration and with $L_X$, Equations \ref{isotropic} and \ref{planeev}, assuming uniform priors,
over 100 MCMC runs. The upper, central, and lower rows show the cases with no evolution, with fixed evolution parameters, and with the evolutionary functions, respectively; within each row, the panels refer to $\Omega_M$, $H_0$, and $w$ (only $\Omega_M$ for the evolutionary-function case).} \label{fig24} \end{figure} \subsection{GRBs alone with and without calibration on SNe Ia with uniform priors}\label{uniformprior} \begin{table} \centering \scalebox{0.88}{ \begin{tabular}{p{35mm}|l|c|c|c|c|c|c|c} \toprule[1.2pt] \toprule[1.2pt] \textbf{Calibration with SNe Ia with uniform priors, Equation \ref{equmu}} & {\textbf{parameters varied}} & \textbf{Model} & $<\boldsymbol{\Omega_{M}}>$ & $<\boldsymbol{H_{0}}>$ & $<\boldsymbol{w}>$ & $\boldsymbol{\Delta^{GRB_{U}}_{GRB_{G}}}\%$ & z-score$_{SN}$ & z-score$_{SN+BAO}$ \\ \midrule without evolution &$\Omega_{M}$ & $\Lambda$CDM & $0.50 \pm 0.28$ & \bf{70} & \bf{-1} & 326.47 & 0.718 & 0.700 \\\hline without evolution & $H_0$ & $\Lambda$CDM & \bf{0.30} & $74.11\pm 13.48$ & \bf{-1} & 348.29 & 0.307 & 0.308 \\\hline without evolution & $w$ & $w$CDM & \bf{0.30} & \bf{70} & $-1.01 \pm 0.56$ & -16.42 & 0.018 & 0.012 \\\hline \midrule with fixed evolution & $\Omega_{M}$ & $\Lambda$CDM & $0.59 \pm 0.27$ & \bf{70} & \bf{-1} & 297.06 & 1.077 & 1.059 \\\hline with fixed evolution & $H_0$ & $\Lambda$CDM & \bf{0.30} & $79.04 \pm 13.12$ & \bf{-1} & 306.57 & 0.691 & 0.692 \\\hline with fixed evolution & $w$ & $w$CDM & \bf{0.30} & \bf{70} & $-0.88 \pm 0.58$ & -16.43 & 0.207 & 0.236 \\\hline \midrule with $k=k(\Omega_{M})$ & $\Omega_{M}$ & $\Lambda$CDM & $0.54 \pm 0.27$ & \bf{70} & \bf{-1} & 328.57 & 0.892 & 0.874 \\\hline \toprule[1.2pt] \toprule[1.2pt] \textbf{Calibration with SNe Ia with uniform priors, Equations \ref{isotropic} and \ref{planeev}} & {\textbf{parameters varied}} & \textbf{Model} & $<\boldsymbol{\Omega_{M}}>$ & $<\boldsymbol{H_{0}}>$ & $<\boldsymbol{w}>$ & $\boldsymbol{\Delta^{GRB_{U}}_{GRB_{G}}}\%$ & z-score$_{SN}$ & z-score$_{SN+BAO}$ \\ \midrule without evolution & $\Omega_{M}$ & $\Lambda$CDM & $0.52 \pm 0.28$ & \bf{70} & \bf{-1} & 305.80 & 0.789 & 0.771 \\\hline without evolution & $H_0$ & $\Lambda$CDM & \bf{0.30} & $75.70\pm 13.64$ & \bf{-1} & 337.32 & 0.420 & 0.421 \\\hline without evolution & $w$ & $w$CDM & \bf{0.30} & \bf{70} & $-0.97 \pm 0.57$ & -18.22 & 0.053 & 0.082 \\\hline \midrule with fixed evolution & $\Omega_{M}$ & $\Lambda$CDM & $0.52 \pm 0.28$ & \bf{70} & \bf{-1} & 366.67 & 0.789 & 0.771 \\\hline with fixed evolution & $H_0$ & $\Lambda$CDM & \bf{0.30} & $75.71 \pm 13.77$ & \bf{-1} & 338.11 & 0.417 & 0.418 \\\hline with fixed evolution & $w$ & $w$CDM & \bf{0.30} & \bf{70} & $-0.97 \pm 0.57$ & -9.67 & 0.053 & 0.082 \\\hline \midrule with $k=k(\Omega_{M})$ & $\Omega_{M}$ & $\Lambda$CDM & $0.52 \pm 0.28$ & \bf{70} & \bf{-1} & 283.56 & 0.789 & 0.771 \\\hline \end{tabular}} \caption{Averaged cosmological parameters over 100 MCMC runs for GRBs calibrated on SNe Ia, assuming uniform priors (indicated with the subscript U), with $\mu_{GRB}$ (first part of the Table) and with the fundamental plane equation, Equation \ref{isotropic} (second part), without evolution, with the evolution correction (Equation \ref{planeev}), and with the evolution treated as a function of $\Omega_{M}$. The columns' content is analogous to Table \ref{Table2}.
The third-to-last column gives the percentage change in the uncertainties with respect to the corresponding results obtained with Gaussian priors (indicated with the subscript G), taking as reference the values from Table \ref{Table2}. The last two columns give the z-scores taking as reference the SNe Ia and the SNe Ia+BAO results, respectively.} \label{Table4} \end{table} The aim of the previous sections is to explore the possibility of using GRBs as standalone standard candles up to redshift 5. Indeed, the aim is not to explore parameter spaces that may lead to exotic scenarios, but rather to assess the reliability of GRBs as cosmological probes. This is the reason why Gaussian priors of 3 $\sigma$ around the best-fit values of the SNe Ia computations have been investigated. It is clear that, given the sample size and the large scatter, the applicability of GRBs as cosmological probes is currently not definitive, but one of our goals is to show that the complementarity of GRBs used in combination with SNe Ia is beneficial for exploring the cosmological setting at high redshift. This can in principle give access to the universe up to $z=9.4$, the redshift of the furthest GRB ever detected, and, in the case of our platinum sample, up to $z=5$. If we consider Gaussian priors, we can recover similar results without the need of exploring the full parameter space. However, to explore how strong the impact of the Gaussian priors on our results is, and to compare with the results of GRBs and SNe Ia+BAO together, for which uniform priors have been used, we repeat the steps i)-vi) with uniform priors as well. We here vary $\Omega_M$, $w$, and $H_0$ one at a time, to appreciate the differences in the parameters when computing them one by one, with the only difference that the priors are uniform: $0.0 \leq \Omega_M \leq 1$, $50 \leq H_0 \leq 100$, and $-2 \leq w \leq 0$. We have repeated each MCMC run 100 times and plotted the distributions of $\Omega_M$, $w$, and $H_0$: with calibration on SNe Ia, see Fig. \ref{fig21} for $\mu_{GRB}$ and Fig. \ref{fig22} for Equations \ref{isotropic} and \ref{planeev}; without calibration on SNe Ia, see Fig. \ref{fig23} for $\mu_{GRB}$ and Fig. \ref{fig24} for Equations \ref{isotropic} and \ref{planeev}. \\ We consider the scenarios of no evolutionary correction (upper panels of Figs. \ref{fig21}, \ref{fig22}, \ref{fig23}, \ref{fig24}), with evolution correction using fixed parameters (middle panels of Figs. \ref{fig21}, \ref{fig22}, \ref{fig23}, \ref{fig24}), and with evolutionary functions (bottom panels of Figs. \ref{fig21}, \ref{fig22}, \ref{fig23}, \ref{fig24}). Both without and with calibration on SNe Ia using $\mu_{GRB}$, the $\Omega_M$ and $H_0$ uncertainties are higher than those obtained using Gaussian priors of 3 $\sigma$ by $328.57 \%$ and $327.58 \%$ (without evolution, without calibration); $328.57 \%$ and $384.42 \%$ (with fixed evolution, without calibration); $326.47 \%$ and $348.29 \%$ (without evolution, with calibration); and $297.06 \%$ and $306.57 \%$ (with fixed evolution, with calibration), respectively. Also, without and with calibration on SNe Ia using $\mu_{GRB}$, the uncertainties on $\Omega_M$ using the $k(\Omega_M)$ evolutionary function are higher by $335.48 \%$ and $328.57 \%$, respectively, than the results obtained using Gaussian priors of 3 $\sigma$.
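These percentage variations, here and in the following, are defined as the relative change of the uncertainty with respect to the reference case (see also the caption of Table \ref{Table5}):
\begin{linenomath*}
\begin{equation}
\Delta\% = \frac{\Delta_{\text{comparing}}-\Delta_{\text{reference}}}{\Delta_{\text{reference}}} \times 100.
\end{equation}
\end{linenomath*}
As a worked example, the uniform-prior uncertainty $\Delta\Omega_{M}=0.28$ compared with the corresponding Gaussian-prior uncertainty $\Delta\Omega_{M}=0.073$ (the largest variance quoted in Sec. \ref{comparison}) gives $(0.28-0.073)/0.073 \simeq 284\%$, consistent with the $283.56\%$ quoted below for the calibrated $k=k(\Omega_{M})$ case.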
Now, considering the fundamental plane Equations \ref{isotropic} and \ref{planeev}, the uncertainties on both $\Omega_M$ and $H_0$ obtained by GRBs alone are higher than the results obtained using Gaussian priors of 3 $\sigma$ by $359.01 \%$ and $355.19 \%$ (without evolution, without calibration); $315.38 \%$ and $356.49 \%$ (with fixed evolution, without calibration); $305.80 \%$ and $337.32 \%$ (without evolution, with calibration); and $366.67 \%$ and $338.11 \%$ (with fixed evolution, with calibration), respectively. Also, without and with calibration on SNe Ia using Equations \ref{isotropic} and \ref{planeev}, the uncertainties on $\Omega_M$ using the $k(\Omega_M)$ evolutionary function are higher by $328.57 \%$ and $283.56 \%$, respectively, than the results obtained using Gaussian priors of 3 $\sigma$. Surprisingly, the uncertainties on $w$ obtained with uniform priors are always smaller than those obtained with Gaussian priors, both without and with calibration on SNe Ia and for both the $\mu_{GRB}$ and the fundamental plane likelihoods. With $\mu_{GRB}$, they are smaller by $19.11 \%$ (without evolution, without calibration), $6.45 \%$ (with fixed evolution, without calibration), $16.42 \%$ (without evolution, with calibration), and $16.43 \%$ (with fixed evolution, with calibration). Considering the fundamental plane Equations \ref{isotropic} and \ref{planeev}, the uncertainties on $w$ with uniform priors and no calibration are smaller than the Gaussian-prior ones by $44.65 \%$ (no evolution) and $12.39 \%$ (fixed evolution); with calibration on SNe Ia they are smaller by $18.22 \%$ (without evolution) and $9.67 \%$ (with evolution). We note that the uncertainties on $\Omega_M$ and $H_0$ obtained with uniform priors are generally larger than the respective ones computed with Gaussian priors. However, as pointed out above, this is not the case for $w$, and it is not yet clear why the uniform priors would enlarge the uncertainties only on $\Omega_{M}$ and $H_{0}$. On the other hand, the values of $w$ span narrow ranges, and the difference between Gaussian and uniform priors may be less appreciable there. This trend is slightly mitigated when we correct for the evolution and when we use the evolutionary functions, but the large uncertainties prevent us from seeing a clear difference. In the future, when more data with smaller uncertainties are available, the trend noted using the evolutionary functions may become important to reduce such a bias toward higher values of $\Omega_M$. On the other hand, we have seen a trend of increasing values of $\Omega_M$ when we consider cosmological computations with other high-redshift probes \citep{Colgain2022}. This trend, however, does not appear when we simulate many GRBs based on the features of the 10 GRBs closest to the fundamental plane \citep{Dainotti2022c}; results are shown in Sec. \ref{standalone}. All averaged cosmological results over 100 MCMC runs using uniform priors lie within 1 $\sigma$ of the cosmological results obtained using Gaussian priors, both with and without calibration on the SNe Ia, see Tables \ref{Table3} and \ref{Table4}. \subsection{GRBs in combination with SNe Ia and BAO with and without the correction for selection biases and redshift evolution}\label{SN+BAO+GRB} Results for the cosmological computations using GRBs+BAO+SNe Ia are shown in Fig. \ref{fig25}, while those for SNe Ia alone and SNe Ia+BAO are shown in Fig. \ref{fig26}.
All results are summarized in Table \ref{Table5}. We show the cosmological parameters obtained using: 1) SNe Ia alone; 2) SNe Ia+BAO; 3) SNe Ia+BAO+GRBs without correction for evolutionary effects; and 4) SNe Ia+BAO+GRBs with these corrections. The contour plots at 68\% (dark blue) and 95\% (light blue) are computed for each case. In all figures, the following fiducial values (at which the parameters are fixed) have been adopted: $\Omega_M=0.30$, $H_0=70$ $km \hspace{1ex} s^{-1} Mpc^{-1}$, and $w=-1$ (bold-faced in Table \ref{Table5}).
\begin{figure*} \centering
\subfloat[Without evolution]{\label{fig2_a} \includegraphics[width=0.32\hsize,height=0.32\textwidth,angle=0,clip]{figures/Fig7/SNe+BAO+GRBMNoEv_Varying_O_m.pdf}}
\subfloat[Without evolution]{\label{fig2_b} \includegraphics[width=0.32\hsize,height=0.32\textwidth,angle=0,clip]{figures/Fig7/LCDM_SNeBAOGRBNoEv_varying_H0.pdf}}
\subfloat[Without evolution]{\label{fig2_c} \includegraphics[width=0.32\hsize,height=0.32\textwidth,angle=0,clip]{figures/Fig7/SNe+BAO+GRB_Varying_w_NOEV.pdf}}\\\hspace{0cm}
\subfloat[With evolution]{\label{fig2_d} \includegraphics[width=0.32\hsize,height=0.32\textwidth,angle=0,clip]{figures/Fig7/LCDM_SNeBAOGRBEv_varying_Om.pdf}}
\subfloat[With evolution]{\label{fig2_e} \includegraphics[width=0.32\hsize,height=0.32\textwidth,angle=0,clip]{figures/Fig7/LCDM_SNeBAOGRBEv_varying_H0.pdf}}
\subfloat[With evolution]{\label{fig2_f} \includegraphics[width=0.32\hsize,height=0.32\textwidth,angle=0,clip]{figures/Fig7/LCDM_SNeBAOGRBEv_varying_w.pdf}}\\\hspace{0cm}
\subfloat[Without evolution]{\label{fig2_g} \includegraphics[width=0.49\hsize,height=0.49\textwidth,angle=0,clip]{figures/Fig7/LCDM_SNeBAOGRBNoEv_varying_Om_H0.pdf}}
\subfloat[With evolution]{\label{fig2_h} \includegraphics[width=0.49\hsize,height=0.49\textwidth,angle=0,clip]{figures/Fig7/LCDM_SNeBAOGRBEv_varying_Om_H0.pdf}}
\caption{Cosmological results for the GRBs and SNe Ia data with the BAO constraints, using uniform priors.} \label{fig25} \end{figure*}
In Fig. \ref{fig25} we present the cosmological results of SNe Ia+GRBs+BAO both with (panels d, e, f, and h) and without (panels a, b, c, and g) the correction for evolution for GRBs. When we compare SNe Ia vs. SNe Ia+BAO+GRBs with evolution, we observe smaller uncertainties on the cosmological parameters (see Table \ref{Table5}) once more probes are taken into account. More specifically, we see a decrease of 14.3\% for $\Omega_M$ and of 16.7\% for $w$. When $\Omega_{M}$ and $H_0$ are varied simultaneously, we obtain decreases in the scatter of 68.2\% and 52.9\%, respectively. When we compare SNe Ia+BAO vs. SNe Ia+BAO+GRBs with evolution, we reproduce the precision on $\Omega_{M}$ with no reduction of the uncertainties. However, we note an increase of the scatter on $H_{0}$ of 14.3\% when both $\Omega_{M}$ and $H_{0}$ are varied simultaneously. Furthermore, we see increases of the scatter of 7.7\% and 7.1\% when ${H_0}$ and $w$ are varied alone, respectively. For completeness, all the percentage variations with respect to the SNe Ia and SNe Ia+BAO results are shown in the last two columns of Table \ref{Table5}. We stress that, to check the numerical errors of the MCMC computation, we ran the $H_0$ fit 100 times and found that the run-to-run scatter of the $H_0$ uncertainty is $0.004$, two orders of magnitude smaller than the uncertainty itself. Similarly, we find a scatter two orders of magnitude smaller for $w$ ($0.0006$), while it is one order of magnitude smaller for $\Omega_M$ ($0.0002$).
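As an illustration of this stability test, the schematic snippet below is a minimal sketch, not our actual pipeline: the synthetic data, sample size, and sampler settings are placeholder assumptions. It repeats a toy MCMC fit of $H_0$ with a uniform prior and reports the run-to-run scatter of the recovered 1 $\sigma$ uncertainty:
\begin{verbatim}
# Schematic stability check (placeholder data and settings, not the
# paper's pipeline): repeat a toy MCMC fit of H0 and measure the
# run-to-run scatter of its reported 1-sigma uncertainty.
import numpy as np
import emcee

C = 299792.458   # speed of light [km/s]
OM = 0.30        # Omega_M held fixed, as when varying H0 alone

def mu_model(zs, h0):
    # distance moduli in flat LCDM; trapezoidal integration of 1/E(z)
    out = np.empty(len(zs))
    for i, z in enumerate(zs):
        zz = np.linspace(0.0, z, 256)
        inv_e = 1.0 / np.sqrt(OM * (1 + zz) ** 3 + 1 - OM)
        dc = C / h0 * np.sum(0.5 * (inv_e[1:] + inv_e[:-1]) * np.diff(zz))
        out[i] = 5 * np.log10((1 + z) * dc) + 25
    return out

rng = np.random.default_rng(0)
z_obs = rng.uniform(0.05, 1.5, 40)        # synthetic redshifts
sigma = np.full(z_obs.size, 0.15)         # synthetic errors [mag]
mu_obs = mu_model(z_obs, 70.0) + rng.normal(0.0, sigma)

def log_prob(theta):
    h0 = theta[0]
    if not (50.0 < h0 < 100.0):           # uniform prior, as in the text
        return -np.inf
    res = (mu_obs - mu_model(z_obs, h0)) / sigma
    return -0.5 * np.dot(res, res)

errors = []
for _ in range(10):                       # 100 repetitions in the text
    p0 = 70.0 + rng.normal(0.0, 1.0, size=(16, 1))
    sampler = emcee.EnsembleSampler(16, 1, log_prob)
    sampler.run_mcmc(p0, 1500, progress=False)
    errors.append(sampler.get_chain(discard=500, flat=True).std())

print("mean sigma(H0):", round(float(np.mean(errors)), 4),
      "| run-to-run scatter:", round(float(np.std(errors)), 4))
\end{verbatim}
Averaging the best-fit values and their uncertainties over the repeated runs, as done throughout this section, then suppresses this residual numerical noise.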
\begin{figure*} \centering
\subfloat[SNe Ia]{\label{fig4_a} \includegraphics[width=0.32\hsize,height=0.32\textwidth,angle=0,clip]{figures/Fig8/SNe_varying_H0.pdf}}
\subfloat[SNe Ia]{\label{fig4_b} \includegraphics[width=0.32\hsize,height=0.32\textwidth,angle=0,clip]{figures/Fig8/SNe_varying_Om.pdf}}
\subfloat[SNe Ia]{\label{fig4_c} \includegraphics[width=0.32\hsize,height=0.32\textwidth,angle=0,clip]{figures/Fig8/SNeOnly_Vary_w.pdf}}\\\hspace{0cm}
\subfloat[SNe Ia + BAO]{\label{fig4_d} \includegraphics[width=0.32\hsize,height=0.32\textwidth,angle=0,clip]{figures/Fig8/LCDM_SNe+BAONoEv_varying_H0.pdf}}
\subfloat[SNe Ia + BAO]{\label{fig4_e} \includegraphics[width=0.32\hsize,height=0.32\textwidth,angle=0,clip]{figures/Fig8/SNe+BAO_LCDM_Om.pdf}}
\subfloat[SNe Ia + BAO]{\label{fig4_f} \includegraphics[width=0.32\hsize,height=0.32\textwidth,angle=0,clip]{figures/Fig8/SNe+BAO_LCDM_w.pdf}}\\\hspace{0cm}
\subfloat[SNe Ia]{\label{fig4_g} \includegraphics[width=0.49\hsize,height=0.49\textwidth,angle=0,clip]{figures/Fig8/LCDM_SNe_NoEv_varying_Om_H0.pdf}}
\subfloat[SNe Ia + BAO]{\label{fig4_h} \includegraphics[width=0.49\hsize,height=0.49\textwidth,angle=0,clip]{figures/Fig8/LCDM_SNeBAO_varying_Om+H0.pdf}}
\caption{Cosmological results considering only SNe Ia and SNe Ia+BAO, using uniform priors. In panel a) we vary $H_0$, in panel b) $\Omega_M$, and in panel c) $w$, with the remaining parameters fixed; panels d), e), and f) show the same cases with BAO added. Panels g) and h) show $\Omega_M$ and $H_0$ varied together for SNe Ia and SNe Ia+BAO, respectively.} \label{fig26} \end{figure*}
\begin{figure} \centering
\subfloat[SNe Ia]{ \includegraphics[width=0.48\hsize,height=0.48\textwidth,angle=0,clip]{figures/Fig9/oCDM_OnlySNe_varying_O_k_H070_O_m0.30.pdf}}
\subfloat[SNe Ia + BAO]{\label{figOk_b} \includegraphics[width=0.48\hsize,height=0.48\textwidth,angle=0,clip]{figures/Fig9/oCDM_SNeBAO_varying_Ok.pdf}}\\\hspace{0cm}
\subfloat[SNe Ia + BAO + GRB without evolution]{\label{figOk_c} \includegraphics[width=0.48\hsize,height=0.48\textwidth,angle=0,clip]{figures/Fig9/oCDM_SNeBAOGRBNoEv_varying_Ok.pdf}}
\subfloat[SNe Ia + BAO + GRB with evolution]{\label{figOk_d} \includegraphics[width=0.48\hsize,height=0.48\textwidth,angle=0,clip]{figures/Fig9/oCDM_SNeBAOGRBEv_varying_Ok.pdf}}
\caption{The upper left and right panels show the values of $\Omega_k$, fixing $H_0$ and $\Omega_M$, for SNe Ia and SNe Ia+BAO, respectively. The lower left and right panels show again the values of $\Omega_k$, but for SNe Ia+BAO+GRBs with no evolution and SNe Ia+BAO+GRBs with evolution with fixed parameters, respectively. The priors on all probes are uniform.} \label{fig27} \end{figure}
When we treat $k_{L_{peak}}$ and $k_{L_{a}}$ as functions of $\Omega_{M}$, the results remain unchanged: neither the best-fit values nor the uncertainties on the parameters change within the numerical accuracy of the MCMC algorithm, whether $\Omega_{M}$ is varied alone or together with $H_{0}$. In Fig. \ref{fig26} we show the results obtained using SNe Ia (panels a, b, c, and g) and SNe Ia+BAO (panels d, e, f, and h), to quantify how much the uncertainties on the cosmological parameters change when we add BAO. In panel a) we use SNe Ia only and vary $H_0$, fixing $\Omega_M$ and $w$; in panel b) we vary $\Omega_M$ and fix $H_0$ and $w$; in panel c) we vary $w$ for the $w$CDM model and fix $H_0$ and $\Omega_M$; in panel g) we vary $\Omega_M$ and $H_0$ together with $w$ fixed. Panels d), e), f), and h) show the same quantities considering SNe Ia+BAO.
Adding GRBs to the SNe Ia+BAO sample leads to a reduction, or at least a confirmation, of the scatter obtained when using SNe Ia only, with the additional advantage that the GRBs in our sample extend up to $z=5$. \begin{table*} \addtolength{\tabcolsep}{-2pt} \centering \begin{tabular}{c|l|c|c|c|c|l} \toprule[1.2pt] \toprule[1.2pt] {\textbf{SNe Ia sample}} & \textbf{Model} & $\boldsymbol{w}$ & $\boldsymbol{\Omega_{M}}$ & $\boldsymbol{H_{0}}$ & $\boldsymbol{-}$ & $\boldsymbol{\Delta_{SNe+BAO}^{SNe}\%}$ \\ \midrule varying $\Omega_{M}$ & $\Lambda$CDM & {-1} & $0.299 \pm 0.007$ & \bf{70} & - & 16.7 \% \\\hline varying $H_0$ & $\Lambda$CDM & {-1} & \bf{0.30} & $69.97\pm 0.13$ & - & 0 \%\\\hline varying $\Omega_{M}$ and $H_0$ & $\Lambda$CDM & {-1} & $0.298 \pm 0.022 $ & $70.02\pm 0.34$ & - & 214.3 \%, 142.3 \% \\\hline varying $w$ & $w$CDM & $-1.000\pm 0.018$ & \bf{0.30} & \bf{70} & - & 28.6 \% \\ \hline \hline \textbf{SNe Ia + BAO sample} & \textbf{Model} & $\boldsymbol{w}$ & $\boldsymbol{\Omega_{M}}$ & $\boldsymbol{H_{0}}$ & $\boldsymbol{\Delta_{SNe}^{SNe+BAO}\%}$ & $\boldsymbol{-}$ \\ \midrule varying $\Omega_{M}$ & $\Lambda$CDM & {-1} & $0.304 \pm 0.006$ & \bf{70} & -14.3 \% & - \\\hline varying $H_0$ & $\Lambda$CDM & {-1} & \bf{0.30} & $69.96 \pm 0.13$ & 0 \% & - \\\hline varying $\Omega_{M}$ and $H_0$ & $\Lambda$CDM & {-1} & $0.311 \pm 0.007 $ & $69.82 \pm 0.14$ & -68.2 \%, -58.8 \% & - \\\hline varying $w$ & $w$CDM & $-1.017\pm 0.014$ & \bf{0.30} & \bf{70} & -22.2 \% & - \\ \hline \hline \textbf{SNe Ia + BAO + GRB sample NO EV} & \textbf{Model} & $\boldsymbol{w}$ & $\boldsymbol{\Omega_{M}}$ & $\boldsymbol{H_{0}}$ & $\boldsymbol{\Delta_{SNe}^{SNe+BAO+GRBNOEV}\%}$ &$\boldsymbol{\Delta_{SNe+BAO}^{SNe+BAO+GRBNOEV}\%}$ \\ \midrule varying $\Omega_{M}$ & $\Lambda$CDM &{-1} & $0.306 \pm 0.006$ & \bf{70} & -14.3 \% & 0 \% \\\hline varying $H_0$ & $\Lambda$CDM & {-1} & \bf{0.30} & $69.94 \pm 0.13$ & 0 \% & 0 \% \\\hline varying $\Omega_{M}$ and $H_0$ & $\Lambda$CDM & {-1} & $0.310 \pm 0.007$ & $69.84 \pm 0.15$ & -68.2 \%, -55.9 \% & 0 \%, 7.1 \% \\\hline varying $w$ & $w$CDM & $-1.017 \pm 0.014$ & \bf{0.30} & \bf{70} & -22.2 \% & 0 \%\\ \hline \hline \textbf{SNe Ia + BAO + GRB sample EV} & \textbf{Model} & $\boldsymbol{w}$ & $\boldsymbol{\Omega_{M}}$ & $\boldsymbol{H_{0}}$ & $\boldsymbol{\Delta_{SNe}^{SNe+BAO+GRBEV}\%}$ &$\boldsymbol{\Delta_{SNe+BAO}^{SNe+BAO+GRBEV}\%}$ \\ \midrule varying $\Omega_{M}$ & $\Lambda$CDM &{-1} & $0.306 \pm 0.006$ & \bf{70} & -14.3 \% & 0 \% \\\hline varying $H_0$ & $\Lambda$CDM & {-1} & \bf{0.30} & $69.94 \pm 0.14$ & 7.7 \% & 7.7 \%\\\hline varying $\Omega_{M}$ and $H_0$ & $\Lambda$CDM & {-1} & $0.310 \pm 0.007$ & $69.83 \pm 0.16$ & -68.2 \%, -52.9 \% & 0 \%, 14.3 \% \\\hline varying $w$ & $w$CDM & $-1.017 \pm 0.015$ & \bf{0.30} & \bf{70} & -16.7 \% & 7.1 \% \\ \bottomrule \bottomrule \end{tabular} \caption{Results of the fitting of the cosmological parameters using SNe Ia (upper part of the Table), SNe Ia+BAO (second part), and SNe Ia+BAO+GRBs, with GRBs without calibration on SNe Ia and with uniform priors, without (third part) and with (bottom part) the correction for evolution (indicated with EV), using together the platinum sample, the Pantheon sample for SNe Ia, and the BAO measurements of \citet{SharovBAO}. The values in bold are fixed at fiducial values.
We also show two columns with the percentage increase or decrease of the uncertainties with respect to SNe Ia only (${\Delta_{SNe}^{considered\, sample} \%}$) and to SNe Ia+BAO (${\Delta_{SNe+BAO}^{considered\, sample} \%}$). The formula is: $\frac{\Delta_{\text{comparing}}-\Delta_{\text{reference}}}{\Delta_{\text{reference}}}$, where $\Delta_{\text{reference}}$ is the uncertainty obtained with the SNe Ia (6th column) and SNe Ia+BAO (7th column) samples. A negative sign indicates a percentage decrease of the uncertainty with respect to the reference sample (indicated in the lower subscript), while a positive sign indicates a percentage increase.} \label{Table5} \end{table*} \subsection{Comparing our results to the other cosmological computations with GRBs in the literature} The problem of using GRBs as cosmological tools has been addressed for almost two decades now; here we discuss only a few papers that are most closely related to the fundamental plane relation, or in which a similar comparison with and without calibration on SNe Ia has been performed. As an example, cosmological computations have been performed in \citet{Moresco22} involving GRBs both with and without calibration against SNe Ia, considering uniform priors, based on the Amati correlation between the peak energy of the $\nu F_{\nu}$ prompt emission spectrum and the isotropic prompt emission energy \citep{Amati2008}. For their sample composed of 70 GRBs without calibration, they found $\Omega_{M}=0.27_{-0.18}^{+0.38}$. To compare their results to ours, we symmetrize their uncertainties and obtain $\Omega_{M}=0.27 \pm 0.28$, which is similar to the variance obtained by us for an analogous case in Table \ref{Table3}. In particular, for the case without correction for evolution and with the likelihood based on Equation \ref{isotropic}, without calibration and with uniform priors, we reproduce the same precision (line 10 in Table \ref{Table3} and Figs. \ref{fig24} and \ref{fig22}), while for all the other cases we have a slightly smaller variance (0.27). For their sample of 208 GRBs, about four times larger than ours, they found $\Omega_{M}=0.26_{-0.12}^{+0.23}$ and $\Omega_{M}=0.30_{-0.06}^{+0.06}$ for the cases without and with calibration, respectively. Both results hold a higher precision than our computations. We recall that our results without evolution, without and with calibration, yield $\Omega_{M}=(0.53\pm 0.28)$ and $\Omega_{M}=(0.52\pm 0.28)$, respectively. In \cite{Liu22b}, cosmological computations have been performed with 220 GRBs calibrated with SNe Ia, considering uniform priors, based on the Amati correlation \citep{Amati2008} and on the improved Amati correlation \citep{Liu22a}, built using a copula function (a multivariate cumulative distribution function). They found $\Omega_M > 0.651$ with the Amati correlation and $\Omega_M = 0.308 \pm 0.192$ with the improved Amati correlation. The result with the improved Amati correlation is consistent with the ones obtained by us for GRBs alone calibrated with SNe Ia using uniform priors and the distance modulus Equation \ref{equmu}, see Table \ref{Table4}. Though the variance of their measurement is smaller than ours (without evolution: $0.28$; with fixed evolution and $k=k(\Omega_{M})$: $0.27$), this is probably due to their GRB sample being more than four times the size of ours.
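In these literature comparisons, asymmetric uncertainties are symmetrized by taking the arithmetic mean of the upper and lower errors, which reproduces the values quoted above and below:
\begin{linenomath*}
\begin{equation}
\sigma_{sym} = \frac{\sigma_{+}+\sigma_{-}}{2},
\end{equation}
\end{linenomath*}
so that, e.g., $\Omega_{M}=0.27_{-0.18}^{+0.38}$ gives $\sigma_{sym}=(0.38+0.18)/2=0.28$.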
\cite{Wang07} performed cosmological computations with 69 GRBs without calibration on SNe Ia, using the distance modulus equation (Equation \ref{equmu}) and a $\chi^2$ minimization technique, obtaining $\Omega_M = 0.34 \pm 0.10$. Their results are consistent with the ones we obtained for GRBs alone without calibration using uniform priors and the distance modulus Equation \ref{equmu}, see Table \ref{Table3}. The variance of their measurement is smaller than that of our cosmological calculations, as in the comparisons above (without evolution: $0.28$; with fixed evolution and $k=k(\Omega_{M})$: $0.27$). \cite{Cao22a} performed a set of computations involving GRBs using the Dainotti fundamental plane correlation with three different sets consisting of 60 events altogether (one set of 5 GRBs only, one of 24, and one of 31 GRBs), and also using the Amati relation for 118 GRBs. The likelihood considered corresponds to the one derived from Equation \ref{isotropic}, and uniform priors were applied in the analysis. \cite{Cao22a} obtained closed contours for only two cases: 115 GRBs (3 events were removed due to the overlap with the other sample) and 5 GRBs. Using the fundamental plane alone, they obtained $\Omega_{M} = 0.630^{+0.352}_{-0.135}$ and $\Omega_{M} = 0.520^{+0.379}_{-0.253}$ for the two samples, respectively. We symmetrize those results in order to compare them with ours, obtaining $\Omega_{M} = 0.630\pm 0.244$ and $\Omega_{M} = 0.520\pm 0.316$. In the first case the results are slightly more precise than the one obtained by us in Table \ref{Table3}, but the considered sample is more than 2 times larger than ours; the other results are slightly less precise, but obtained with a very small sample of GRBs. In a more recent paper, \cite{Cao22b} used the platinum sample for the Dainotti fundamental plane correlation together with 118 events using the Amati correlation, out of which 17 overlap with the platinum sample, thus adding 101 events to the platinum sample. They obtained $\Omega_M > 0.411$ using the platinum sample alone, $\Omega_M > 0.256$ using the 118 GRB sample alone, and $\Omega_M = 0.614 \pm 0.255$ using both samples together ($101 + 50$). The variance of those measurements ($0.255$) is slightly smaller than that of our results ($0.27$--$0.28$), but it is obtained when both the Amati and the Dainotti relations are combined. The difference in the results when only the platinum sample is used stems from the fact that we run the analysis 100 times and average the results, whereas in that paper the results are computed only once. \subsection{The flatness of the universe}\label{flatnessUniverse} \begin{table} \centering \begin{tabular}{|l|c|c|} \toprule[1.2pt] \toprule[1.2pt] \textbf{Sample} & $\boldsymbol{\Omega_{k}}$ & $\boldsymbol{z}$\textbf{-score}\\ \midrule SNe Ia sample & $-0.003 \pm 0.018$ & $0.17$\\\hline SNe Ia + BAO sample & $-0.016 \pm 0.012$ & $1.33$ \\\hline SNe Ia + BAO + GRB sample NO EV & $-0.017 \pm 0.012$ & $1.42$ \\\hline SNe Ia + BAO + GRB sample EV & $-0.013 \pm 0.011$ & $1.18$ \\ \bottomrule \bottomrule \end{tabular} \caption{Results of the fitting of the $\Omega_{k}$ parameter for different samples and probes. All the obtained values are compatible within 1.5 $\sigma$ with $\Omega_{k}=0$, corresponding to a flat universe.
To compare the results with the flat cosmology we use the z-score, defined as: $ z = \frac{|\Omega_{k, flat}-\Omega_{k}|}{\Delta_{\Omega_{k}}}\, =\, \frac{|\Omega_{k}|}{\Delta_{\Omega_{k}}}$.}\label{Table6} \end{table} Given the recent results in which the flatness of the universe has been questioned by \citet{Melchiorri} and several other authors, we also consider scenarios accounting for curvature. To consider non-flat universe models we use the appropriate formula for the luminosity distance, which reads: \begin{linenomath*} \begin{equation} d_{L} = (1+z) \times d_{M}, \end{equation} \end{linenomath*} where $d_{M}$ is the transverse comoving distance, with $\Omega_{\Lambda}=1-\Omega_{M}-\Omega_{k}$, given by: \begin{linenomath*} \begin{equation} d_{M} = \left\{\begin{matrix} \frac{c}{H_{0}\,\sqrt{\Omega_{k}}} \sinh\left(\sqrt{\Omega_{k}} \times \int_{0}^{z} \frac{dz'}{\sqrt{\Omega_{M}(1+z')^{3}+\Omega_{k}(1+z')^{2}+\Omega_{\Lambda}}} \right) & \; \Omega_{k}>0 \\ \\ \frac{c}{H_{0}} \times \int_{0}^{z} \frac{dz'}{\sqrt{\Omega_{M}(1+z')^{3}+\Omega_{k}(1+z')^{2}+\Omega_{\Lambda}}} & \; \Omega_{k}=0 \\ \\ \frac{c}{H_{0}\,\sqrt{|\Omega_{k}|}} \sin\left(\sqrt{|\Omega_{k}|} \times \int_{0}^{z} \frac{dz'}{\sqrt{\Omega_{M}(1+z')^{3}+\Omega_{k}(1+z')^{2}+\Omega_{\Lambda}}} \right) & \; \Omega_{k}<0. \end{matrix}\right. \end{equation} \end{linenomath*} \begin{table} \centering \begin{tabular}{c|l|c|c|c} \toprule[1.2pt] \toprule[1.2pt] \textbf{Likelihood based on $\mu_{GRB}$, Equation \ref{equmu}} & {\textbf{parameters varied}} & \textbf{Model} & $\boldsymbol{\Delta_{SN}^{GRB}}\%$& $\boldsymbol{\Delta_{SN+BAO}^{GRB}}\%$\\ \midrule without evolution & $\Omega_{M}$ & $\Lambda$CDM & 800.00 & 950.00 \\\hline without evolution & $H_0$ & $\Lambda$CDM & 2443.85 & 2443.85 \\\hline without evolution & $\Omega_{M}$ and $H_0$ & $\Lambda$CDM & 209.09, 790.00 & 871.43, 2061.43 \\\hline without evolution & $w$ & $w$CDM & 3883.33 & 5021.43 \\\hline \midrule with fixed evolution & $\Omega_{M}$ & $\Lambda$CDM & 800.00 & 950.00 \\\hline with fixed evolution & $H_0$ & $\Lambda$CDM & 2146.92 & 2146.92\\\hline with fixed evolution & $\Omega_{M}$ and $H_0$ & $\Lambda$CDM & 190.91, 784.71 & 814.29, 2048.57 \\\hline with fixed evolution & $w$ & $w$CDM & 3344.44 & 4328.57 \\\hline \midrule with $k=k(\Omega_{M})$ & $\Omega_{M}$ & $\Lambda$CDM & 785.71 & 933.33\\\hline with $k=k(\Omega_{M})$ & $\Omega_{M}$ and $H_0$ & $\Lambda$CDM & 195.46, 823.24 & 828.57, 2142.14 \\\hline \toprule[1.2pt] \toprule[1.2pt] \textbf{Likelihood based on $L_X$, Equations \ref{isotropic} and \ref{planeev}} & {\textbf{parameters varied}} & \textbf{Model} & $\boldsymbol{\Delta_{SN}^{GRB}}\%$& $\boldsymbol{\Delta_{SN+BAO}^{GRB}}\%$ \\ \midrule without evolution & $\Omega_{M}$ & $\Lambda$CDM & 771.43 & 916.67 \\\hline without evolution & $H_0$ & $\Lambda$CDM & 2294.62 & 2294.62\\\hline without evolution & $\Omega_{M}$ and $H_0$ & $\Lambda$CDM & 190.91, 825.00 & 814.29, 2146.43 \\\hline without evolution & $w$ & $w$CDM & 5722.22 & 7385.71 \\\hline \midrule with fixed evolution & $\Omega_{M}$ & $\Lambda$CDM & 828.57 & 983.33 \\\hline with fixed evolution & $H_0$ & $\Lambda$CDM & 2304.62 & 2304.62\\\hline with fixed evolution & $\Omega_{M}$ and $H_0$ & $\Lambda$CDM & 195.46, 796.77 & 828.57, 2077.86 \\\hline with fixed evolution & $w$ & $w$CDM & 3577.78 & 4628.57 \\\hline \midrule with $k=k(\Omega_{M})$ & $\Omega_{M}$ & $\Lambda$CDM & 800.00 & 950.00 \\\hline with $k=k(\Omega_{M})$ & $\Omega_{M}$ and $H_0$ & $\Lambda$CDM & 190.91, 812.06 & 814.29, 2115.00 \\\hline \end{tabular}
\caption{Comparison between the cosmological parameters derived from SNe Ia and from SNe Ia + BAO and those from GRBs alone using Gaussian priors without calibration on SNe Ia (Table \ref{Table1}). The table is divided into two parts. The first part compares SNe Ia and SNe Ia + BAO with GRBs alone with no calibration (in particular, the percentage variation of the uncertainties on the measurements) when the computation of the cosmological parameters with GRBs has been performed using $\mu_{GRB}$, Equation \ref{equmu}. The second part refers to the same comparison, but using Equations \ref{isotropic} and \ref{planeev}. The uncertainty percentage change is computed with the formulas: $ \Delta_{SN}^{GRB}\% = \frac{\Delta_{GRB}-\Delta_{SN}}{\Delta_{SN}} $ and $ \Delta_{SN+BAO}^{GRB}\% = \frac{\Delta_{GRB}-\Delta_{SN+BAO}}{\Delta_{SN+BAO}}$. These quantities measure how much smaller the scatter of SNe Ia alone and SNe Ia+BAO is compared to GRBs alone. A negative sign indicates a percentage decrease of the uncertainty with respect to the reference sample (indicated in the lower subscript), while a positive sign indicates a percentage increase. The references for the comparisons are SNe Ia and SNe Ia + BAO.} \label{Table7} \end{table} \begin{table} \centering \begin{tabular}{c|l|c|c|c} \toprule[1.2pt] \toprule[1.2pt] \textbf{Calibration with SNe Ia, Equation \ref{equmu}} & \textbf{parameters varied} & \textbf{Model} & $\boldsymbol{\Delta_{SN}^{GRB}}\%$& $\boldsymbol{\Delta_{SN+BAO}^{GRB}}\%$\\ \midrule without evolution &$\Omega_{M}$ & $\Lambda$CDM & 871.43 & 1033.33\\\hline without evolution & $H_0$ & $\Lambda$CDM & 2213.08 & 2213.07\\\hline without evolution & $\Omega_{M}$ and $H_0$ & $\Lambda$CDM & 190.91, 784.11 & 814.29, 2047.14\\\hline without evolution & $w$ & $w$CDM & 3638.89 & 4707.14 \\\hline \midrule with fixed evolution & $\Omega_{M}$ & $\Lambda$CDM & 871.43 & 1033.33 \\\hline with fixed evolution & $H_0$ & $\Lambda$CDM & 2382.31 & 2382.31\\\hline with fixed evolution & $\Omega_{M}$ and $H_0$ & $\Lambda$CDM & 172.72, 806.47 & 757.14, 2101.43 \\\hline with fixed evolution & $w$ & $w$CDM & 3755.56 & 4857.14 \\\hline \midrule with $k=k(\Omega_{M})$ & $\Omega_{M}$ & $\Lambda$CDM & 800.00 & 950.00 \\\hline with $k=k(\Omega_{M})$ & $\Omega_{M}$ and $H_0$ & $\Lambda$CDM & 190.91, 821.76 & 814.29, 2138.57\\\hline \toprule[1.2pt] \toprule[1.2pt] \textbf{Calibration with SNe Ia, Equations \ref{isotropic} and \ref{planeev}} & {\textbf{parameters varied}} & \textbf{Model} & $\boldsymbol{\Delta_{SN}^{GRB}}\%$& $\boldsymbol{\Delta_{SN+BAO}^{GRB}}\%$\\ \midrule without evolution & $\Omega_{M}$ & $\Lambda$CDM & 885.71 & 1050.00\\\hline without evolution & $H_0$ & $\Lambda$CDM & 2299.23 & 2299.23\\\hline without evolution & $\Omega_{M}$ and $H_0$ & $\Lambda$CDM & 195.45, 856.17 & 828.57, 2222.14\\\hline without evolution & $w$ & $w$CDM & 3772.22 & 4878.57\\\hline \midrule with fixed evolution & $\Omega_{M}$ & $\Lambda$CDM & 757.14 & 900.00\\\hline with fixed evolution & $H_0$ & $\Lambda$CDM & 2317.69 & 2317.69\\\hline with fixed evolution & $\Omega_{M}$ and $H_0$ & $\Lambda$CDM & 200.00, 800.59 & 842.86, 2087.14\\\hline with fixed evolution & $w$ & $w$CDM & 3405.56 & 4407.14\\\hline \midrule with $k=k(\Omega_{M})$ & $\Omega_{M}$ & $\Lambda$CDM & 942.86 & 1116.67\\\hline with $k=k(\Omega_{M})$ & $\Omega_{M}$ and $H_0$ & $\Lambda$CDM & 190.91, 803.82 & 814.29, 2095.00 \\\hline \end{tabular} \caption{Comparison between the cosmological parameters derived from SNe Ia and from SNe
Ia + BAO and those from GRBs alone using Gaussian priors with calibration on SNe Ia. The first part compares SNe Ia and SNe Ia + BAO with GRBs alone calibrated with SNe Ia (in particular, the percentage variation of the uncertainties on the measurements) when the computation of the cosmological parameters with GRBs has been performed using $\mu_{GRB}$, Equation \ref{equmu}. The second part refers to the same comparison, but using Equations \ref{isotropic} and \ref{planeev}. The percentage variation is computed with the formulas: $ \Delta_{SN}^{GRB}\% = \frac{\Delta_{GRB}-\Delta_{SN}}{\Delta_{SN}} $ and $ \Delta_{SN+BAO}^{GRB}\% = \frac{\Delta_{GRB}-\Delta_{SN+BAO}}{\Delta_{SN+BAO}}$. These quantities measure how much larger the scatter of GRBs alone is compared to SNe Ia alone and SNe Ia + BAO. A negative sign indicates a percentage decrease of the uncertainty with respect to the reference sample (indicated in the lower subscript), while a positive sign indicates a percentage increase. The references for the comparisons are SNe Ia alone and SNe Ia + BAO.} \label{Table8} \end{table} The results shown in Table \ref{Table6} correspond to the computation of the $\Omega_{k}$ parameter with the other parameters fixed: $\Omega_{M} = 0.30$, $H_{0} = 70\; km\,s^{-1}\,Mpc^{-1}$. All the obtained values are compatible within 1.5 $\sigma$ with $\Omega_{k}=0$, corresponding to a flat universe. To compare the results we use the z-score, defined as: \begin{linenomath*} \begin{equation} z = \frac{|\Omega_{k, flat}-\Omega_{k}|}{\Delta_{\Omega_{k}}}\, =\, \frac{|\Omega_{k}|}{\Delta_{\Omega_{k}}}. \end{equation} \end{linenomath*} The z-score results are presented in the last column of Table \ref{Table6}. We note an interesting trend: when we add more probes to the SNe Ia sample, the value of $\Omega_{k}$ moves further away from $\Omega_k=0$ (the flat universe). The addition of the correction for evolution, however, leads to a value more compatible with the flat universe (z-score of 1.18). These results are presented in Fig. \ref{fig27}. \begin{figure} \centering \includegraphics[width=0.95\hsize,height=0.95\textwidth,angle=0,clip]{figures/Fig10/800GRBsSigma0.pdf} \caption{Posterior contours for $\Omega_M$ considering 800 simulated GRBs on the platinum fundamental plane.} \label{GRBsimulation} \end{figure} \section{The comparison of GRBs alone with SNe Ia alone and SNe Ia+BAO} \label{comparison} In this section, we compare the cosmological parameters, and their uncertainties, obtained with GRBs alone against those obtained with SNe Ia alone, and then with SNe Ia + BAO. This analysis is done using GRBs with Gaussian priors of 3 $\sigma$ on the values of the cosmological parameters from SNe Ia taken from \cite{Scolnic}. We divide our comparison between GRBs without and with calibration on SNe Ia, see Tables \ref{Table7} and \ref{Table8}, respectively. Specifically, we compare the results of the i)-vi) cases detailed in Sec. \ref{GRBs alone without calibration} for the case without calibration, and in Sec. \ref{GRB alone with calibration} for the case with calibration. In this way, it will be clear which analysis yields smaller uncertainties. Looking specifically at the distance in terms of $\sigma$ from the SNe Ia and SNe Ia+BAO results, we refer to Table \ref{Table1} for the comparison of GRBs without calibration with Gaussian priors. We first consider the comparison with SNe Ia alone.
All the cosmological parameter results fall within 1 $\sigma$, except for $H_0$ when the $\mu_{GRB}$ likelihood is considered (see the upper part of Table \ref{Table1}) in the cases of no evolution and fixed evolution: when varying both $\Omega_M$ and $H_0$, the z-scores for the cases with and without evolution are 1.024 and 1.025, respectively. When we look instead at the second part of Table \ref{Table1}, which considers the likelihood for Equations \ref{isotropic} and \ref{planeev}, the only case that falls outside 1 $\sigma$ is the one in which $H_0$ is computed without accounting for the evolution (z-score = 1.021). When comparing with SNe Ia + BAO and considering the likelihood of GRBs with $\mu_{GRB}$, all cases fall within 1 $\sigma$ with the exception, again, of $H_0$ without evolution (z-score = 1.09), with fixed evolution (z-score = 1.10), and with the $k = k(\Omega_M)$ evolutionary function (z-score = 1.02). When comparing with SNe Ia + BAO and considering the likelihood for Equations \ref{isotropic} and \ref{planeev}, all cases fall within 1 $\sigma$ with the exception, again, of $H_0$ without evolution, both when varying only $H_0$ (z-score = 1.024) and when varying both $\Omega_M$ and $H_0$ (z-score = 1.09), and with the $k = k(\Omega_M)$ evolutionary function (z-score = 1.065). In Table \ref{Table2}, we show the results obtained for GRBs alone calibrated with SNe Ia with Gaussian priors, compared with SNe Ia alone and SNe Ia + BAO. We start with the comparison with SNe Ia only. The results that are not compatible within 1 $\sigma$ with the likelihood of $\mu_{GRB}$ (upper part of Table \ref{Table2}) are the following: $H_{0}$ varied alone without correction for evolution is compatible within $1.102$ $\sigma$, $H_{0}$ varied together with $\Omega_{M}$ without correction for evolution is compatible within $1.103$ $\sigma$, and $H_{0}$ varied together with $\Omega_{M}$ with fixed correction for evolution is compatible within $1.046$ $\sigma$. The results that are not compatible within 1 $\sigma$ with SNe Ia when we consider again calibrated GRBs alone with Gaussian priors, but using instead the likelihood derived from the fundamental plane equation (Equations \ref{planeev} and \ref{isotropic}; see the lower part of Table \ref{Table2}), are the following: $H_{0}$ varied alone without correction for evolution is compatible within 1.137 $\sigma$, $H_{0}$ varied alone with fixed correction for evolution is compatible within 1.050 $\sigma$, and $H_{0}$ varied together with $\Omega_{M}$ with fixed correction for evolution is compatible within 1.033 $\sigma$. When we compare the SNe Ia + BAO results with the ones obtained with calibrated GRBs alone using the likelihood of $\mu_{GRB}$ (see the upper part of Table \ref{Table2}), the following are not compatible within 1 $\sigma$: $H_{0}$ varied alone without correction for evolution is compatible within 1.105 $\sigma$, $H_{0}$ varied together with $\Omega_{M}$ without correction for evolution is compatible within 1.176 $\sigma$, $H_{0}$ varied together with $\Omega_{M}$ with fixed correction for evolution is compatible within 1.116 $\sigma$, and $H_{0}$ varied together with $\Omega_{M}$ with the correction for evolution as $k=k(\Omega_{M})$ is compatible within 1.064 $\sigma$.
When we compare the SNe Ia + BAO results with the ones obtained with calibrated GRBs alone using the likelihood derived from the fundamental plane equation (Equations \ref{planeev} and \ref{isotropic}; see the lower part of Table \ref{Table2}), we obtain that the following results are not compatible within 1 $\sigma$: $H_{0}$ varied alone without correction for evolution is compatible within 1.140 $\sigma$, $H_{0}$ varied together with $\Omega_{M}$ without correction for evolution is compatible within 1.004 $\sigma$, $H_{0}$ varied alone with fixed correction for evolution is compatible within 1.140 $\sigma$, $H_{0}$ varied together with $\Omega_{M}$ with fixed correction for evolution is compatible within 1.103 $\sigma$, and $H_{0}$ varied together with $\Omega_{M}$ with the correction for evolution as $k=k(\Omega_{M})$ is compatible within 1.042 $\sigma$. We now check the compatibility of the results obtained with GRBs alone with uniform priors in Tables \ref{Table3} and \ref{Table4}. In Table \ref{Table3}, where GRBs have not been calibrated with SNe Ia and the likelihood is based on $\mu_{GRB}$, the only case that exceeds the 1 $\sigma$ limit for both SNe Ia alone and SNe Ia + BAO is when we vary $\Omega_{M}$ only, without correction for the evolution. The z-scores for SNe Ia alone and SNe Ia + BAO in this case are 1.040 and 1.022, respectively. In Table \ref{Table4}, the only case of GRBs alone with calibration, uniform priors, and the likelihood based on $\mu_{GRB}$ that exceeds the 1 $\sigma$ limit for both SNe Ia alone and SNe Ia + BAO is the one where we compute $\Omega_{M}$ only, with fixed correction for evolution. The z-scores for SNe Ia alone and SNe Ia + BAO in this case are 1.077 and 1.059, respectively. When we look instead at the lower parts of Tables \ref{Table3} and \ref{Table4}, considering the likelihood based on Equations \ref{isotropic} and \ref{planeev}, all cases for SNe Ia alone and SNe Ia + BAO fall within 1 $\sigma$. All the percentage variations of the SNe Ia alone and SNe Ia + BAO results with respect to the GRB results (SNe Ia and SNe Ia + BAO are taken as reference in the percentage variation computation, respectively), both without and with calibration on SNe Ia, are shown in the last two columns of Tables \ref{Table7} and \ref{Table8}, respectively. The percentage variation of SNe Ia + BAO with GRBs alone ($\Delta_{SN+BAO}^{GRB}\%$) is in general larger than the one of SNe Ia alone with GRBs ($\Delta_{SN}^{GRB}\%$). More specifically, looking at Table \ref{Table7}, where GRBs have not been calibrated, the minimum percentage increase (190.91\%) occurs when comparing with the SNe Ia results, in two cases: 1) when we vary $\Omega_M$ and $H_0$ together with fixed evolution using the distance modulus $\mu_{GRB}$, Equation \ref{equmu}, and 2) when we vary $\Omega_M$ and $H_0$ together with the evolutionary function $k = k(\Omega_M)$ using the fundamental plane equations, \ref{isotropic} and \ref{planeev}. The maximum percentage increase (5021.43\%) occurs when comparing with the SNe Ia + BAO results, in the case when we vary $w$ without evolution using the distance modulus $\mu_{GRB}$, Equation \ref{equmu}.
Now, considering the comparison of calibrated GRBs with SNe Ia and SNe Ia + BAO (see Table \ref{Table8}), the percentage variation of the uncertainties spans from $172.72\%$, when comparing with SNe Ia only in the case of fixed evolutionary parameters and varying $\Omega_M$ and $H_0$ together using the distance modulus $\mu_{GRB}$, Equation \ref{equmu}, to $4878.57\%$, when comparing with SNe Ia + BAO and varying $w$ in the case without evolution using the fundamental plane equations, \ref{isotropic} and \ref{planeev}. In general, the minimum percentage difference in the uncertainty is found for the case where both $\Omega_M$ and $H_0$ are varied together, both for the calibration and no-calibration cases, and when comparing with the SNe Ia alone results. The maximum percentage difference, instead, is found for the case where $w$ is varied, both for the calibration and no-calibration cases, and when comparing with the SNe Ia + BAO results. We can conclude that SNe Ia alone and SNe Ia + BAO give tighter constraints on the cosmological parameters than GRBs alone, which in turn carry larger uncertainties, as expected. This conclusion does not undermine the possibility of using GRBs as standalone cosmological probes. Indeed, in the very first papers regarding SNe Ia cosmology \citep{Riess, Perlmutter1999}, the measurement of the $\Omega_M$ parameter was $0.28 \pm 0.09$, whose uncertainty is $47.54\%$ larger than that of the same parameter computed using GRBs in the case without evolution and calibration. Further, when we compare this value to GRBs in the case in which the evolutionary function $k = k(\Omega_M)$ is considered, with calibration on SNe Ia (the largest uncertainty obtained in Tables \ref{Table1} and \ref{Table2}, $\Omega_{M}=0.300\pm0.073$), our results have an uncertainty on $\Omega_M$ that is $23.29\%$ smaller (indeed, $(0.090-0.073)/0.073 \simeq 23.3\%$). In addition, the current sample of GRBs is 100 times smaller than the current sample of SNe Ia. In the next section we therefore calculate the predicted uncertainties on the cosmological parameters if we simulate a sample of 800 GRBs. \section{The future use of GRB cosmology as standalone probes} \label{standalone} Regarding the use of GRBs as standalone standard candles, in \cite{Dainotti2022c} we studied how many GRBs are required to obtain constraints on the cosmological parameters (in particular on ${\Omega_M}$) similar to the ones obtained in the literature using SNe Ia, thus giving a realistic forecast of the time required to reach these limits, taking into account present and future telescopes and missions. In order to do so, we simulated different numbers of GRBs starting from the physical features of the platinum sample, {\bf as well as of an optical sample of GRBs, thus extending these studies to the optical wavelength domain}. One of our simulations is shown in Fig. \ref{GRBsimulation}, where we note a closed contour on ${\Omega_M}$ for 800 GRBs, a sample size that yields a satisfactory precision on ${\Omega_M}$. Indeed, we have also computed that the number of platinum GRBs needed to achieve the same precision reached by \cite{Conley} with 472 SNe Ia is 789, if no machine learning or lightcurve reconstruction is involved.
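To illustrate the logic behind such forecasts, the following minimal sketch (purely illustrative: the fiducial cosmology, redshift range, and per-GRB scatter are assumptions, not our actual pipeline) draws mock distance moduli around a flat $\Lambda$CDM model and refits $\Omega_M$, showing how the uncertainty shrinks roughly as $1/\sqrt{N}$ with the sample size:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import curve_fit

C_KM_S, H0 = 299792.458, 70.0      # speed of light (km/s), H0 (km/s/Mpc)

def mu_model(z, omega_m):
    """Distance modulus in a flat LambdaCDM cosmology."""
    z = np.atleast_1d(z)
    dc = np.array([quad(lambda x: 1.0 / np.sqrt(omega_m * (1 + x) ** 3
                                                + 1.0 - omega_m), 0.0, zi)[0]
                   for zi in z])
    dl = (1 + z) * (C_KM_S / H0) * dc          # luminosity distance in Mpc
    return 5.0 * np.log10(dl) + 25.0

rng = np.random.default_rng(0)
for n in (100, 800):                           # mock sample sizes
    z = rng.uniform(0.1, 5.0, n)               # platinum GRBs reach z ~ 5
    scatter = 0.5                              # assumed error per GRB (mag)
    mu_obs = mu_model(z, 0.3) + rng.normal(0.0, scatter, n)
    popt, pcov = curve_fit(mu_model, z, mu_obs, p0=[0.3],
                           sigma=np.full(n, scatter), absolute_sigma=True,
                           bounds=(0.01, 1.0))
    print(n, popt[0], np.sqrt(pcov[0, 0]))     # error shrinks ~ 1/sqrt(n)
\end{verbatim}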
{\bf More specifically, in \citet{Dainotti2022c} we have studied, via simulations, the effects that the platinum and the optical samples will have on future studies when the sample size is increased thanks to 1) new observations by SVOM, Theseus, and other ground-based facilities, 2) the use of machine learning (Dainotti et al., in preparation), and 3) the use of lightcurve reconstruction (Dainotti et al., in preparation). Going more in depth into the analysis performed in the cited paper, the simulations were based on the best-fit fundamental planes obtained by considering different baseline samples. Indeed, not only the entire platinum and optical sets were considered, but also subsets built by choosing the GRBs closest to the best-fit planes, following two different methods. Then, the simulations were performed by creating GRB observations lying on the best-fit planes, considering as starting distributions the ones provided by the observed GRBs used as a baseline. We have also run simulations in which we halved the errors on the quantities involved in the best-fit fundamental planes, simulating the increase in precision expected to be achieved by future observations. The goal was to infer how many simulated GRBs are needed to reach pivotal thresholds on the precision on ${\Omega_M}$ achieved in the SNe Ia cosmological literature, namely the results of \citet{Conley, Betoule2014}, and \citet{Scolnic}, who found an error on $\Omega_M$ equal to $0.10$, $0.042$, and $0.022$, respectively, under the assumption that the ratio of X-ray to optical plateaus will remain the same as it is currently. Indeed, this is a conservative estimate, since we expect to have more GRBs at high redshift in the future and enhanced sensitivities in the X-rays and optical, so that more plateaus can indeed be observed. Once we had found the exact number of simulated GRBs necessary to achieve these limits, we considered both present and future missions and observational campaigns (especially the future SVOM and Theseus missions) to give a realistic time period in which these precisions will be reached by standalone GRBs used as cosmological probes. This has been done both considering the current observed data, as well as taking into account machine learning and lightcurve reconstruction methods, which can be used for inferring the redshifts of GRB observations and for decreasing the errors on the observed quantities themselves, respectively. With machine learning we will be able to double the number of optical plateaus, while with lightcurve reconstruction we will be able to obtain lightcurves with almost half (47.5\%) of the error carried by the non-reconstructed lightcurves. When machine learning, lightcurve reconstruction, and the halving of the error bars are applied to the optical sample, this analysis finds that the precision on ${\Omega_M}$ obtained by SNe Ia in \cite{Conley} is reachable now, while the precision of \cite{Betoule2014} is reachable in 2026, and the precision of \cite{Scolnic} in 2042. All these results are gathered in Tables 9 and 10 of \citet{Dainotti2022c}. This shows the increasing importance of GRBs for cosmological applications in the coming decades.
Thus, with these two tandem papers we have shown that GRBs not only have the credibility to be used reliably as standard candles, but can also be an important aid and complementary tool, together with SNe Ia, for extending the Hubble diagram to high redshifts. For a more complete discussion, we refer to the paper itself \citep{Dainotti2022c}.} \section{Summary, Discussion and Conclusions} \label{conclusions} The fundamental plane relation carries a $\sigma_{int}=0.18 \pm 0.07$ when selection biases and redshift evolution are accounted for, which is the smallest intrinsic scatter in the current literature on GRB correlations involving the plateau emission. In this paper, we first investigate the reliability of the 3D fundamental plane as an intrinsic correlation when we correct for selection biases and redshift evolution, and then we also study its application as a cosmological tool. To this end, we performed several tests to check the reproducibility of the parameters of the correlation, see Fig. \ref{fig1}, when we consider simulations of the evolutionary coefficients within a 1 $\sigma$ range. We also tested the reliability of the intrinsic scatter of the fundamental plane by showing the distribution of this quantity obtained with the HyperFit online routine, which uses many fitting methods to derive the best-fit parameters of the fundamental plane both with and without the evolutionary corrections, see Fig. \ref{hist:paisim}. Before applying the fundamental plane relation corrected for the evolutionary effects as a cosmological tool, we investigate to what extent the parameters of the evolutionary functions, determined through statistical methods, depend on the cosmological ones, see Fig. \ref{fig:evoL}. We find that, while the evolutionary parameters have no dependence on $H_0$, they do depend on $\Omega_M${\bf, $\Omega_{k}$, and $w$}. This discovery opens the way to applying the evolution to the fundamental plane both with fixed evolutionary parameters and with an evolutionary function dependent on $\Omega_M$ {\bf and, in the future, also on $\Omega_{k}$ and $w$}. Thus, the application of the fundamental plane corrected for selection biases and redshift evolution allows us, for the first time, to estimate cosmological parameters with GRBs alone while considering such corrections. To use GRBs as standalone cosmological probes, we have adopted two methods: the fundamental plane (Equations \ref{isotropic}, \ref{planeev}) and the distance modulus, $\mu_{GRB}$ (Equation \ref{equmu}), taking into account GRBs without calibration on SNe Ia, see Sec. \ref{GRBs alone without calibration}, as well as considering the calibration on SNe Ia, see Sec. \ref{GRB alone with calibration}. We have obtained cosmological parameters from GRBs alone using Gaussian priors of 3 $\sigma$ based on the values of SNe Ia, see Tables \ref{Table1} and \ref{Table2}. We show the percentage change between the uncertainties of GRBs alone without and with calibration on SNe, see the seventh column of Table \ref{Table2}. We find that all the results lie within 1 $\sigma$ when comparing the i)-vi) cases both with and without calibration. We then explored how strong the impact of the Gaussian priors is on our results and how much the parameter space of GRBs can be constrained. To do so, we average the cosmological parameter results over 100 MCMC runs with GRBs alone using uniform priors, see Sec. \ref{uniformprior}.
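As a purely illustrative sketch of how these two prior choices enter the analysis (the central values and widths below are placeholders based on SNe Ia results, and the likelihood is left abstract), the log-posterior for $(\Omega_M, H_0)$ can be assembled as:
\begin{verbatim}
import numpy as np

# Placeholder SNe Ia based central values and 1-sigma widths:
MU_SN = np.array([0.298, 70.0])        # (Omega_M, H0)
SIG_SN = np.array([0.022, 1.0])

def log_prior_gaussian_3sigma(theta):
    """Gaussian prior centred on the SNe Ia values, truncated at 3 sigma."""
    if np.any(np.abs(theta - MU_SN) > 3.0 * SIG_SN):
        return -np.inf
    return -0.5 * np.sum(((theta - MU_SN) / SIG_SN) ** 2)

def log_prior_uniform(theta):
    """Uniform prior over 0 < Omega_M < 1 and 50 < H0 < 100."""
    om, h0 = theta
    return 0.0 if (0.0 < om < 1.0 and 50.0 < h0 < 100.0) else -np.inf

def log_posterior(theta, log_prior, log_likelihood):
    lp = log_prior(theta)
    return lp + log_likelihood(theta) if np.isfinite(lp) else -np.inf
\end{verbatim}
Either prior function can be passed, unchanged, to the same sampler, which is what makes the Gaussian-versus-uniform comparison a clean test of how strongly the prior pulls the GRB constraints.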
Then, we compare the uncertainties on the results of GRBs alone using both the Gaussian and the uniform priors, see the seventh column of Tables \ref{Table3} and \ref{Table4}. We find that all the results obtained by GRBs alone using uniform priors lie within 1 $\sigma$ of the cosmological ones obtained by GRBs alone using Gaussian priors, both with and without the calibration on SNe Ia. We used as Gaussian priors the parameters based on the results of SNe Ia. The uncertainties on the cosmological parameter values using $\mu_{GRB}$ for both $\Omega_M$ and $H_0$ obtained by GRBs alone are higher by $328.57\%$ and $327.58\%$ (without evolution and calibration, the upper part of Table \ref{Table3}); $328.57\%$ and $384.42\%$ (with fixed evolution and without calibration, the upper part of Table \ref{Table3}); $326.47\%$ and $348.29\%$ (without evolution and with calibration, the upper part of Table \ref{Table4}); and $297.06\%$ and $306.57\%$ (with fixed evolution and calibration, the upper part of Table \ref{Table4}), respectively, than the uncertainties on the results obtained using Gaussian priors of 3 $\sigma$. We then note that the uncertainties on the value of $\Omega_M$ using the $k(\Omega_M)$ evolutionary function, both without and with calibration on SNe Ia using $\mu_{GRB}$, are higher by $335.48\%$ and $328.57\%$, respectively (Table \ref{Table3}), than the ones obtained using Gaussian priors of 3 $\sigma$. Surprisingly, the uncertainties on the values of $w$ using $\mu_{GRB}$ and uniform priors for the cases of no evolution and fixed evolution without calibration are smaller than those obtained with Gaussian priors, by $19.11\%$ and $6.45\%$ (upper panel of Table \ref{Table3}), respectively, while for the case considering the calibration on SNe Ia they are smaller both without evolution ($16.42\%$) and with evolution ($16.43\%$), see the upper panel of Table \ref{Table4}. This difference may be due to the fact that the allowed variation in $w$ is smaller than the variation of $H_0$ ($50<H_0<100$) and $\Omega_M$ ($0<\Omega_M<1$). Now, considering the fundamental plane Equations \ref{isotropic} and \ref{planeev}, the uncertainties on the values of both $\Omega_M$ and $H_0$ obtained by GRBs alone are higher by $359.01\%$ and $355.19\%$ (without evolution and without calibration, the lower part of Table \ref{Table3}); $315.38\%$ and $356.49\%$ (with fixed evolution and without calibration, the lower part of Table \ref{Table3}); $305.80\%$ and $337.32\%$ (without evolution and with calibration, the lower part of Table \ref{Table4}); and $366.67\%$ and $338.11\%$ (with fixed evolution and with calibration, the lower part of Table \ref{Table4}), respectively, than the uncertainties on the corresponding results obtained using Gaussian priors of 3 $\sigma$ based on the results of SNe Ia. Also, when using the $k(\Omega_M)$ evolutionary function without and with calibration on SNe Ia using Equations \ref{isotropic} and \ref{planeev}, the uncertainties on the value of $\Omega_M$ are higher by $328.57\%$ and $283.56\%$, respectively, than the corresponding results obtained using Gaussian priors of 3 $\sigma$.
Similarly to the results obtained with $\mu_{GRB}$, when we compare the values of $w$ using the fundamental plane Equations \ref{isotropic} and \ref{planeev} in the case without calibration, the uncertainties on $w$ using uniform priors are smaller than those with Gaussian priors by $44.65\%$ in the case without evolution and by $12.39\%$ with fixed evolution (lower part of Table \ref{Table3}). In the case of calibration on SNe Ia, the uncertainties on $w$ are smaller in the cases of no evolution ($18.22\%$) and with evolution ($9.67\%$) (lower part of Table \ref{Table4}). We have also computed the percentage change of the uncertainties between the results obtained by GRBs alone with Gaussian priors, without and with calibration on SNe Ia (see the last two columns of Tables \ref{Table7} and \ref{Table8}, respectively), and the results obtained by SNe Ia alone and SNe Ia + BAO. We have concluded from this analysis that SNe Ia alone and SNe Ia + BAO give tighter constraints on the cosmological parameters than GRBs alone. To better understand the advantage of using Gaussian priors with respect to uniform priors in the comparison with other probes, we have then computed the z-scores of GRBs alone with respect to the SNe Ia alone and SNe Ia + BAO results, see the last two columns of Tables \ref{Table1}, \ref{Table2}, \ref{Table3}, and \ref{Table4}. The last two columns of Tables \ref{Table1} and \ref{Table2} present the results of GRBs alone using Gaussian priors compared with SNe Ia and SNe Ia + BAO, without and with calibration, respectively, see Sec. \ref{comparison} for more details. The last two columns of Tables \ref{Table3} and \ref{Table4} present the same comparison between the GRBs alone and the SNe Ia and SNe Ia + BAO results, but using uniform priors. More specifically, we first consider the comparison with SNe Ia using both $\mu_{GRB}$ (upper panels of Tables \ref{Table1} and \ref{Table2}) and the fundamental plane equations \ref{isotropic} and \ref{planeev} (lower panels of Tables \ref{Table1} and \ref{Table2}) with Gaussian priors. In relation to the cases without calibration, we obtain that all cases have z-score $<1$, with the only exception of $H_0$, for which the maximum z-score is $1.025$ in the case of $\Omega_M$ and $H_0$ varied contemporaneously without considering evolution. In relation to the cases with calibration, we obtain that in all cases the z-score is $<1$, with the only exception of $H_{0}$ varied alone without evolution, for which the maximum z-score is $1.137$. Now we consider the comparison between SNe Ia + BAO and GRBs with both $\mu_{GRB}$ and the fundamental plane Equations \ref{isotropic} and \ref{planeev}, see Tables \ref{Table1} and \ref{Table2}, using Gaussian priors. In relation to the cases without calibration, we obtain that all cases have z-score $<1$, with the only exception of $H_0$, for which the maximum z-score is $1.10$ in the case of $\Omega_M$ and $H_0$ varied contemporaneously with fixed evolution. In relation to the cases considering the calibration, we obtain that in all cases the z-score is $<1$, with the only exception of $H_0$, for which the maximum z-score is $1.176$ in the case of $\Omega_M$ and $H_0$ varied contemporaneously without considering evolution.
Now, we consider the comparison with SNe Ia using both $\mu_{GRB}$ (upper panels of Tables \ref{Table3} and \ref{Table4}) and the fundamental plane equations \ref{isotropic} and \ref{planeev} (lower panels of Tables \ref{Table3} and \ref{Table4}) with uniform priors. In relation to the cases without calibration, we obtain that all cases have z-score $<1$, with the only exception of $\Omega_M$, for which the maximum z-score is $1.040$ in the case when only $\Omega_M$ is varied without considering evolution. In relation to the cases with calibration, we obtain that in all cases the z-score is $<1$, with the only exception of $\Omega_M$ varied alone with fixed evolution, for which the maximum z-score is $1.077$. Now we consider the comparison between SNe Ia + BAO and GRBs with both $\mu_{GRB}$ and the fundamental plane Equations \ref{isotropic} and \ref{planeev}, see Tables \ref{Table3} and \ref{Table4}, using uniform priors. In relation to the cases without calibration, we obtain that all cases have z-score $<1$, with the only exception of $\Omega_M$, for which the maximum z-score is $1.022$ in the case when only $\Omega_M$ is varied without evolution. In relation to the cases considering the calibration, we obtain that in all cases the z-score is $<1$, with the only exception of $\Omega_M$, for which the maximum z-score is $1.059$ in the case when only $\Omega_M$ is varied with fixed evolution. The most notable comparison is the one between SNe Ia and SNe Ia + BAO versus SNe Ia + BAO + GRBs using a uniform prior, for the cases both without and with correction for evolution, see Table \ref{Table5}. For the case without correction for evolution, we reduced the scatter on $\Omega_M$ by $14.3\%$, on $\Omega_M$ and $H_0$ by $68.2\%$ and $55.9\%$ when varied together, and on $w$ by $22.2\%$, in comparison to the results obtained by SNe Ia alone. For the case with correction for evolution, we reduced the scatter on $\Omega_M$ by $14.3\%$, on $\Omega_M$ and $H_0$ by $68.2\%$ and $52.9\%$ when varied together, and on $w$ by $16.7\%$, in comparison to the results obtained by SNe Ia alone. All our results are consistent at the $68\%$ level with the $\Lambda$CDM model. The crucial points of our derivations are: 1) we have obtained cosmological parameters compatible with the $\Lambda$CDM model; 2) in all cases, except for $H_0$, we obtained a smaller scatter by using GRBs + SNe Ia + BAO compared to SNe Ia alone, see subsection \ref{SN+BAO+GRB}. From this analysis we have concluded that, even if the uncertainties are smaller in the SNe Ia alone and SNe Ia + BAO cases, GRBs alone can still be used to verify whether the cosmological parameters are compatible with the ones inferred from SNe Ia. Indeed, we may conclude that GRBs alone can be used as cosmological tools, with the great advantage of being observed up to $z=5$ in the platinum sample case. We have also computed the z-scores of GRBs alone using a uniform prior with respect to the SNe Ia alone and SNe Ia + BAO results, see the last two columns of Tables \ref{Table3} and \ref{Table4}, to check the compatibility of the GRB results using a uniform prior with the SNe Ia and SNe Ia + BAO ones, see Sec. \ref{comparison}. We obtained the maximum z-score of $1.077$ for the $\Omega_M$ parameter when it is varied with fixed evolution and when we compare the SNe Ia results with the ones obtained with calibrated GRBs alone using the $\mu_{GRB}$ likelihood.
Surprisingly, the average results of GRBs alone using uniform priors do not change even when we correct for evolution, both when fixing the evolutionary parameters and when using an evolutionary function, see Tables \ref{Table3} and \ref{Table4}. However, it is very likely that this result is due to the paucity of the sample, given that when we simulate 800 GRBs starting from the ones closest to the fundamental plane \citep{Dainotti2022c} we do not recover these results. On the other hand, the GRBs closest to the plane in the simulated data have been built with a given fiducial cosmology. Thus, it is necessary to wait for additional data from future missions, such as SVOM and Theseus \citep{Wei2016,Amati2018}, to cast light on this discrepancy. Very interestingly, other high-redshift probes, such as quasars, also show a tendency toward a higher value of $\Omega_M$ \citep{Colgain2022}. Although this discussion surely deserves much attention, it is far beyond the scope of the current paper. Although results related to GRB cosmology have been reached by previous studies for the prompt correlations \citep{Amati2008, Kodama, Wang, Demiansky, Luongo} and the Combo relation \citep{Izzo2015}, this is the first time cosmological constraints have been achieved considering a 3D correlation involving the plateau, and it is also the first time the evolutionary parameters have been included in the computation of the cosmological ones, thus providing a road-map for a new methodology that treats selection biases and redshift evolution simultaneously within the cosmological setting. \section*{Funding} \label{Funding} This work is supported by JSPS Grants-in-Aid for Scientific Research “KAKENHI” (A: Grant Number JP19H00693). N.F. acknowledges the support from UNAM-DGAPA-PAPIIT through grant IA102019. \section*{Acknowledgments} This work made use of data supplied by the UK Swift Science Data Centre at the University of Leicester. We are grateful to E. Rinaldi for helping in the discussion of the constants in the fundamental parameters and for useful comments about the manuscript, to B. De Simone for the help in the analysis of the SNe Ia likelihood, to G. Srinivasaragavan, R. Wagner, L. Bowden, and Z. Nguyen for the help in the fitting of the parameters of the LCs, and to Z. Kania and J. Fernandez for helping to run our cosmological computations. We are particularly grateful to Dr. Cuellar for managing the SULI program during the summer of 2020. S.N. also acknowledges the support from the Pioneering Program of RIKEN for Evolution of Matter in the Universe (r-EMU). We thank the RIKEN HOKUSAI BigWaterfall cluster system for the computational time. A. Lenart is grateful for the financial support from the NAOJ Division of Science. \section{Data Availability Statement} The data underlying this article will be shared upon reasonable request to the corresponding author.
{ "timestamp": "2022-09-20T02:22:10", "yymm": "2209", "arxiv_id": "2209.08675", "language": "en", "url": "https://arxiv.org/abs/2209.08675" }
\section{Introduction} Blockchain is centralized~\cite{moxie}. The most popular wallet% \footnote{MetaMask has $21{,}000{,}000$ monthly active users as of July 2022~\cite{metamask-mau-22} and is the most popular non-custodial wallet~\cite{metamask-mau-21}.} today, \emph{MetaMask}, trusts a single infrastructure provider, \emph{Infura}, to supply the users with token and NFT balances, smart contract interactions, and notifications of payment. This renders billions of dollars susceptible to attacks, such as faking payments and balances, double spending, or transaction censorship, by a single malicious provider. In fact, MetaMask servers do censor NFTs~\cite{metamask-censorship-nft} and smart contract interactions~\cite{metamask-censorship-tc}. To mitigate this centralization, blockchain users can run full nodes. When a full node boots up for the first time, it needs to download and verify all transactions that were ever recorded by the chain throughout its history. This is expensive and cannot be supported by a phone or browser~\cite{sok-light-clients}. Even a traditional \emph{light client} requires downloading and verifying the header chain, which grows \emph{linearly} as time goes by~\cite{sok}. This \emph{bootstrapping problem} has been successfully resolved in the proof-of-work (PoW) setting by the \emph{proof of proof-of-work} (PoPoW) protocols, but their methods are not applicable to proof-of-stake (PoS). In this work, we put forth the first succinct \emph{proof of proof-of-stake} (PoPoS) protocol. The protocol involves a light client verifier and multiple full node provers, at least one of which is assumed to be honest (this is the standard \emph{existential honesty} assumption~\cite{backbone,backbone-new,varbackbone,pass-asynchronous}). The verifier interacts with the provers in multiple rounds. It initially requests from each prover a proof about the current state of the chain. After these proofs are received, the verifier pits provers against each other in \emph{bisection games} until any adversarial claims are ruled out. The number of interactions and the communication complexity of the bootstrapping protocol are \emph{logarithmic} in the chain lifetime. This constitutes an exponential improvement over previous work. Our PoPoS construction has two main applications: \emph{superlight} proof-of-stake clients that can bootstrap very efficiently, and \emph{trustless bridges} that allow the passing of information from one proof-of-stake chain to another. \myparagraph{Implementation.} To demonstrate their feasibility, we implement our light client protocols for PoS Ethereum. We illustrate our improvements in two gradual steps over the light client currently proposed for PoS Ethereum~\cite{minimal_light_client}. Firstly, we perform measurements using the light client protocol currently proposed for PoS Ethereum. We find that this protocol, while much more efficient than a full node, is likely insufficient to support communication-, computation-, and battery-constrained devices such as browsers and mobile phones. Next, we introduce an \emph{optimistic light client\xspace} for PoS Ethereum that leverages the existential honesty assumption to achieve significant gains over the traditional light client. We demonstrate that this implementation is already feasible for resource-constrained devices. However, the theoretical complexity is still linear in the lifetime of the protocol.
Lastly, we introduce our \emph{superlight client} that achieves exponential asymptotic gains over the optimistic light client. These gains are not only theoretically significant (the superlight client has logarithmic complexity), but also constitute concrete improvements over the optimistic light client when the blockchain system is long-lived and has an execution history of a few years. We compare all three clients in terms of communication (bandwidth and latency), computation, and energy consumption. \myparagraph{Contributions.} In summary, our contributions are as follows: \begin{enumerate} \item We give the first formal definition for succinct proof of proof-of-stake (PoPoS) protocols. \item We put forth a solution to the long-standing problem of efficient PoS bootstrapping. Our solution is exponentially better than previous work. \item We implement a highly performant optimistic light client\xspace and a complete superlight client for mainnet PoS Ethereum. Our superlight client is the first succinct node for PoS Ethereum. We measure and contrast the performance of our clients against the currently proposed design for PoS Ethereum. \item We show our construction is secure for PoS Ethereum and other PoS blockchains. \end{enumerate} \myparagraph{Overview of the optimistic light client\xspace and superlight client constructions.} PoS protocols typically proceed in \emph{epochs} during which the validator set is fixed. In each epoch, a subset of validators is elected by the protocol as the epoch \emph{committee}. The security of the protocol assumes that the majority~\cite{snowwhite,ouroboros,praos,genesis} or super-majority~\cite{casper,pbft,tendermint-paper,hotstuff} of the committee members are honest. In this paper, we put forth a protocol through which a bootstrapping light client can synchronize in logarithmic time and communication. We build our protocol from the ground up, starting with a linear light client. Consider a client that boots up in the first epoch, and wishes to find its current balance. The client knows the initial committee at genesis. If that committee has an honest majority, its members can sign the latest system state. Then, the client only has to verify the committee signatures on the state and take a majority vote. In PoS protocols, the stake changes hands in every epoch. Hence, to repeat the same verification at later epochs, the client needs to keep track of the current committee. To help the client in this endeavor, the committee members of each epoch, while active, sign a \emph{handover} message inaugurating the members of the new committee~\cite{pos-sidechains}. This enables the light client to discover the latest committee by processing a sequence of such handovers. Regrettably, the sizes of the handover messages and the committees can be large, imposing an undue bandwidth requirement on the light client. Moreover, the sequence of handovers grows linearly with the lifetime of the protocol. The client can leverage the \emph{existential honesty assumption} to reduce its communication load. Towards this goal, it connects to multiple provers, which might provide conflicting state claims. To discover the truthful party, the client plays the disagreeing provers against each other. Upon observing two conflicting provers, it asks each prover for a sequence of hash values corresponding to its claimed sequence of past committees. The client then finds the \emph{first point of disagreement} between the two returned sequences through a linear search.
Finally, it asks the provers to show the correctness of the handover at the point of disagreement. Each prover subsequently reveals the committee attested by the hash at that point, the previous committee, and the associated handover messages. Upon validating these messages, which can be done locally and efficiently, the client identifies the truthful party and accepts its state. Although the client at this stage is still linear, it has a smaller communication load, as it downloads succinct hashes of the old committees instead of the committees themselves. This reduction in the message size demonstrates the power of existential honesty in practical settings, and gives the optimistic light client\xspace its name. To make our optimistic light client\xspace asymptotically succinct (\emph{i.e.}\xspace, of polylogarithmic complexity), we improve the procedure that finds the first point of disagreement. To this end, our final PoPoS protocol requires each prover to organize its claimed sequence of committees---one per epoch---into a Merkle tree~\cite{merkle}. The roots of those trees are then sent over to the client, who compares them. Upon detecting disagreement at the roots, the client asks the provers to reveal the children of their respective roots. By repeating this process recursively on the mismatching children, it arrives at the \emph{first point of disagreement} between the claimed committee sequences in a logarithmic number of steps. This process, called the \emph{bisection game}, renders the optimistic light client\xspace a superlight client with logarithmic communication.
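As an illustration of the bisection game (a simplified model, not our actual implementation: we assume $N = 2^k$ leaves and omit the re-hashing check of revealed children against their parent), the verifier's descent can be sketched as follows:
\begin{verbatim}
import hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

class Prover:
    """Holds a Merkle tree over the claimed epoch-committee hashes."""
    def __init__(self, leaves):
        self.levels = [list(leaves)]            # levels[0] = leaves
        while len(self.levels[-1]) > 1:
            lvl = self.levels[-1]
            self.levels.append([H(lvl[i], lvl[i + 1])
                                for i in range(0, len(lvl), 2)])
    def root(self):
        return self.levels[-1][0]
    def children(self, depth, index):
        """Children of node `index` located `depth` levels below the root."""
        lvl = self.levels[len(self.levels) - 2 - depth]
        return lvl[2 * index], lvl[2 * index + 1]

def first_disagreement(a, b, num_levels):
    """Verifier: locate the first differing leaf in O(log N) queries."""
    assert a.root() != b.root()
    index = 0
    for depth in range(num_levels):
        (al, ar), (bl, br) = a.children(depth, index), b.children(depth, index)
        # Descend into the leftmost mismatching child (a real verifier
        # also checks that the revealed children hash to the parent).
        index = 2 * index if al != bl else 2 * index + 1
    return index

leaves = [H(bytes([i])) for i in range(8)]
tampered = list(leaves); tampered[5] = H(b"adversarial committee")
print(first_disagreement(Prover(leaves), Prover(tampered), 3))  # -> 5
\end{verbatim}
Once the differing leaf is found, the dispute reduces to checking a single handover, exactly as in the linear case.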
\input{figures/table-related-work} \myparagraph{Related work (\emph{cf.}\xspace Table~\ref{tab.comparison}).} Proof-of-work bootstrapping has been explored in the interactive~\cite{popow} and non-interactive~\cite{nipopows} setting using various constructions from superblocks~\cite{logspace,compactsuperblocks} to Fiat--Shamir~\cite{fiatshamir} sampling~\cite{flyclient}, and proven secure in the Bitcoin backbone model~\cite{backbone,backbone-new,varbackbone}. Such constructions can be adopted without forking~\cite{velvet,velvet-nipopows} and have been deployed in practice~\cite{gas-efficient}. They have also been used to deploy one-way~\cite{burn} and two-way sidechains~\cite{sidechains,pow-sidechains,crosschain-sok}. The first provably secure proof-of-stake protocols were introduced in Ouroboros~\cite{ouroboros,praos,genesis}, Snow White~\cite{snowwhite}, and Algorand~\cite{algorand}. Several attempts to improve the efficiency of clients and sidechains have been proposed~\cite{pos-sidechains,mithril,eth-lightclient}, but they all achieve only \emph{concrete} gains in efficiency and no \emph{asymptotic} improvement. In the trusted setup model, zkSNARKs have been used to achieve constant bootstrapping communication complexity in Coda/Mina~\cite{coda} and Plumo~\cite{plumo}. For an overview of all light client constructions, refer to Chatzigiannis et al.~\cite{sok-light-clients}. Our construction is based on bisection games. These first appeared in the context of verifiable computation~\cite{refereed-computation}, and in blockchains for the efficient execution of smart contracts~\cite{arbitrum}, for wallet metadata~\cite{wallets}, and for LazyLedger light clients~\cite{lazylight}. \myparagraph{Outline.} We present our theoretical protocol in a \emph{generic} PoS framework, which typical proof-of-stake systems fit into. We prove our protocol is secure if the underlying blockchain protocol satisfies certain simple and straightforward \emph{axioms}. Many popular PoS blockchains can be made to fit within our axiomatic framework. We define our desired primitive, the proof of proof-of-stake (PoPoS), together with the axioms required from the underlying PoS protocol, in Section~\ref{sec.primitive}. We iteratively build and present our construction in Sections~\ref{sec.sequences} and~\ref{sec.bisection}. We present the security claims in Section~\ref{sec.analysis}. For concreteness, and because it is the most prominent upcoming PoS protocol, we give a concrete construction of our protocol for PoS Ethereum in Section~\ref{sec.pos-eth}. PoS Ethereum is the next generation of Ethereum, and soon to be the most widely adopted PoS protocol.\footnote{Bitcoin, which remains the most popular cryptocurrency, does not currently have any plans for migrating from PoW to PoS.} Interestingly, PoS Ethereum directly satisfies our axiomatic framework and does not require any changes on the consensus layer at all. The applicability of our framework to other PoS chains, such as Ouroboros (Cardano), Algorand, and Snow White, is discussed in Section~\ref{sec.other}. We provide an open source implementation of a superlight PoS Ethereum client following our protocol. The description of our implementation and the relevant experimental measurements showcasing its advantages are presented in Section~\ref{sec.experiments}. Our implementation is a complete client that can synchronize with the PoS Ethereum network by connecting to multiple provers and retrieve user balances on the real mainnet chain. \section{Preliminaries} \myparagraph{Proof-of-stake.} Our protocols work in the proof-of-stake (PoS) setting. In a PoS protocol, participants transfer value and maintain a balance sheet of \emph{stake}, or \emph{who owns what}, among each other. It is assumed that the \emph{majority of stake} is honestly controlled at every point in time. The PoS protocol uses the current stake \emph{distribution} to establish consensus. The exact mechanism by which consensus is reached varies by PoS protocol. Our PoPoS protocol works for popular PoS flavours. \myparagraph{Primitives.} The participants in our PoS protocol transfer stake by \emph{signing} transactions using a secure signature scheme~\cite{katz}. The public key associated with each validator is known by all participants. The signatures are key-evolving, and honest participants delete their old keys after signing transactions~\cite{key-evolving,praos}\footnote{ Instead of key-evolving signatures, PoS Ethereum relies on a concept called \emph{weak subjectivity}~\cite{weak-subjectivity}. This alternative assumption can also be used in the place of key-evolving signatures to prevent posterior corruption attacks~\cite{long-range-survey}.}. Additionally, throughout our construction, we use a hash function. The only assumption needed of this hash function is collision resistance. In particular, we highlight the fact that it does not need to be treated in the Random Oracle model, and no trusted setup is required for our protocol (beyond what the underlying PoS protocol may need). \myparagraph{Types of nodes.} The stakeholders who participate in maintaining the system's consensus are known as \emph{validators}. In addition to those, other parties, who do not participate in maintaining consensus, can join the system, download its full history, and discover its current state.
These are known as \emph{full nodes}. Clients that are interested in joining the system and learning a small part of the system state (such as their user's balance) without downloading everything are known as \emph{light clients}. Both full nodes and light clients can join the system at a later time, after it has already been executing for some duration $|\mathcal{C}|$. A late-joining light client or full node must \emph{bootstrap} by downloading some data from its peers. The amount of data the light client downloads to complete the bootstrapping process is known as its \emph{communication complexity}. A light client is \emph{succinct} if its communication complexity is $\mathcal{O}(\text{poly}\log(|\mathcal{C}|))$ in the lifetime $|\mathcal{C}|$ of the system. Succinct light clients are also called \emph{superlight clients}. The goal of this paper is to develop a PoS superlight client. \myparagraph{Time.} The protocol execution proceeds in discrete \emph{epochs}, roughly corresponding to moderate time intervals such as one day. Epochs are further subdivided into \emph{rounds}, which correspond to shorter time durations during which a message sent by one honest party is received by all others. In our analysis, we assume synchronous communication. The validator set stays fixed during an epoch, and it is known one epoch in advance. The validator set of an epoch is determined by the snapshot of the stake distribution at the beginning of the previous epoch. To guarantee an honest majority of validators at any epoch, we assume a \emph{delayed honest majority} for a duration of \emph{two epochs}: Specifically, if a snapshot of the current stake distribution is taken at the beginning of an epoch, this snapshot satisfies the honest majority assumption for a duration of two full epochs. Additionally, we assume that the adversary is \emph{slowly adaptive}: She can corrupt any honest party, while respecting the honest majority assumption, but the corruption only takes place two epochs later. This assumption will be critical in our construction of \emph{handover} messages that allow members of one epoch to inaugurate a committee representing the next epoch (\emph{cf.}\xspace Section~\ref{sec.sequences}). \myparagraph{The prover/verifier model.} The bootstrapping process starts with a light client connecting to its full node peers to begin synchronizing. During the synchronization process, the full nodes try to convince the light client of the system's state. In this context, the light client is known as the \emph{verifier} and the full nodes are known as the \emph{provers}. We make the standard \emph{existential honesty assumption} that the verifier is connected to at least one honest prover (otherwise, the verifier is \emph{eclipsed} and cannot hope to synchronize). The verifier queries the provers about the state of the system, and can exchange multiple messages to interrogate them about the truth of their claims during an \emph{interactive protocol}. \myparagraph{Ledgers.} The consensus protocol attempts to maintain a unified view of a \emph{ledger} $\ensuremath{\mathbb{L}}$. The ledger is a sequence of \emph{transactions} $\ensuremath{\mathbb{L}} = (\ensuremath{\mathsf{tx}}_1, \ensuremath{\mathsf{tx}}_2, \ldots)$. Each validator and full node has a different view of the ledger. We denote the ledger of party $P$ at round $r$ as $\ensuremath{\mathbb{L}}^P_r$.
Nodes joining the protocol, whether they are validators, full nodes, or (super)light clients, can also \emph{write} to the ledger by asking for a transaction to be included. In a secure consensus protocol, all honestly adopted ledgers are prefixes of one another. We denote the longest among these ledgers as $\ensuremath{\mathbb{L}}^\cup_r$, and the shortest among them as $\ensuremath{\mathbb{L}}^\cap_r$. We will build our protocol on top of PoS protocols that are secure. A \emph{secure} consensus protocol enjoys the following two virtues: \begin{definition}[Consensus Security] A consensus protocol is \emph{secure} if it is: \begin{enumerate} \item \textbf{Safe:} For any honest parties $P_1, P_2$ and rounds $r_1 \leq r_2$: $\ensuremath{\mathbb{L}}^{P_1}_{r_1} \preccurlyeq \ensuremath{\mathbb{L}}^{P_2}_{r_2}$. \item \textbf{Live:} If all honest validators attempt to \emph{write} a transaction during $u$ consecutive rounds $r_1, \ldots, r_u$, it is included in $\ensuremath{\mathbb{L}}^P_{r_u}$ of any honest party $P$. \end{enumerate} \end{definition} \myparagraph{Transactions.} A transaction encodes an update to the system's state. For example, a transaction could indicate a value transfer of 5 units from Alice to Bob. Different systems use different transaction formats, but the particular format is unimportant for our purposes. A transaction can be applied to the current \emph{state} of the system to reach a new state. Given a state $\ensuremath{\mathsf{st}}$ and a transaction $\ensuremath{\mathsf{tx}}$, the new state is computed by applying the state transition function $\delta$ to the state and transaction. The new state is then $\ensuremath{\mathsf{st}}' = \delta(\ensuremath{\mathsf{st}}, \ensuremath{\mathsf{tx}})$. For example, in Ethereum, the state of the system encodes a list of balances of all participants~\cite{buterin,wood}. The system begins its lifetime by starting at a genesis state $\ensuremath{\mathsf{st}}_0$. A ledger also corresponds to a particular system state, the state obtained by applying its transactions iteratively to the genesis state. Consider a ledger $\ensuremath{\mathbb{L}} = (\ensuremath{\mathsf{tx}}_1 \cdots \ensuremath{\mathsf{tx}}_n)$. Then the state of the system is $\delta(\cdots \delta(\ensuremath{\mathsf{st}}_0, \ensuremath{\mathsf{tx}}_1), \cdots, \ensuremath{\mathsf{tx}}_n)$. We use the shorthand notation $\delta^*$ to apply a sequence of transactions $\overline{\ensuremath{\mathsf{tx}}} = \ensuremath{\mathsf{tx}}_1 \cdots \ensuremath{\mathsf{tx}}_n$ to a state. Namely, $\delta^*(\ensuremath{\mathsf{st}}_0, \overline{\ensuremath{\mathsf{tx}}}) = \delta(\cdots \delta(\ensuremath{\mathsf{st}}_0, \ensuremath{\mathsf{tx}}_1), \cdots, \ensuremath{\mathsf{tx}}_n)$.
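In functional terms, $\delta^*$ is simply a left fold of $\delta$ over the ledger. A toy sketch (the transaction format and transition rule below are illustrative, not those of any particular chain):
\begin{verbatim}
from functools import reduce

def delta(state: dict, tx: tuple) -> dict:
    """Toy transition: tx = (sender, receiver, amount) moves balances."""
    sender, receiver, amount = tx
    assert state.get(sender, 0) >= amount, "insufficient balance"
    new = dict(state)
    new[sender] = new.get(sender, 0) - amount
    new[receiver] = new.get(receiver, 0) + amount
    return new

def delta_star(genesis: dict, ledger) -> dict:
    """delta*(st0, tx1 ... txn) = delta(... delta(st0, tx1) ..., txn)."""
    return reduce(delta, ledger, genesis)

st0 = {"alice": 10, "bob": 0}
print(delta_star(st0, [("alice", "bob", 5), ("bob", "alice", 2)]))
# -> {'alice': 7, 'bob': 3}
\end{verbatim}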
Because the state of the system is large, it is compressed using an authenticated data structure (\emph{e.g.}\xspace, a Merkle Tree~\cite{merkle}). We denote by $\ensuremath{\left<\mathsf{st}\right>}$ the state \emph{commitment}, which is a short representation of the state $\ensuremath{\mathsf{st}}$ (\emph{e.g.}\xspace, a Merkle Tree root). Given a state commitment $\ensuremath{\left<\mathsf{st}\right>}$ and a transaction $\ensuremath{\mathsf{tx}}$, it is possible to calculate the state commitment $\left<\ensuremath{\mathsf{st}}'\right>$ of the new state $\ensuremath{\mathsf{st}}' = \delta(\ensuremath{\mathsf{st}}, \ensuremath{\mathsf{tx}})$. However, this calculation may require a small amount of auxiliary data $\pi$, such as a Merkle tree proof of inclusion of certain elements in the state commitment $\ensuremath{\left<\mathsf{st}\right>}$. We denote the transition that is performed at the state commitment level by the \emph{succinct transition function} $\left<\delta\right>$. Concretely, we will write that $\left<\delta(\ensuremath{\mathsf{st}}, \ensuremath{\mathsf{tx}})\right> = \left<\delta\right>(\ensuremath{\left<\mathsf{st}\right>}, \ensuremath{\mathsf{tx}}, \pi)$. This means that, if we take state $\ensuremath{\mathsf{st}}$, apply transaction $\ensuremath{\mathsf{tx}}$ to it using the transition function $\delta$, and subsequently calculate its commitment using the $\left<\cdot\right>$ operator, the resulting state commitment is the same as the one obtained by applying the succinct transition function $\left<\delta\right>$ to the state commitment $\ensuremath{\left<\mathsf{st}\right>}$ and transaction $\ensuremath{\mathsf{tx}}$ using the auxiliary data $\pi$. If the auxiliary data is incorrect, the function $\left<\delta\right>$ returns $\bot$ to indicate failure. If the state commitment uses a secure authenticated data structure such as a Merkle tree, there is a unique $\pi$ that makes the $\left<\delta\right>$ function run successfully. \myparagraph{Notation.} We use $\epsilon$ and $[\,]$ to mean the empty string and the empty sequence. By $x \,\|\, y$, we mean the string concatenation of $x$ and $y$, encoded in a way that $x$ and $y$ can be unambiguously retrieved. We denote by $|\mathcal{C}|$ the length of the sequence $\mathcal{C}$; by $\mathcal{C}[i]$ the $i^\text{th}$ (zero-based) element of the sequence; and by $\mathcal{C}[-i]$ the $i^\text{th}$ element from the end. We use $\mathcal{C}[i{:}j]$ to mean the subarray of $\mathcal{C}$ from the $i^\text{th}$ element (inclusive) to the $j^\text{th}$ element (exclusive). Omitting $i$ takes the sequence to the beginning, and omitting $j$ takes the sequence to the end. We use $\lambda$ to denote the security parameter. Following Go notation, in our multi-party algorithms, we use $m \dashrightarrow A$ to indicate that message $m$ is sent to party $A$ and $m \dashleftarrow A$ to indicate that message $m$ is received from party $A$. \section{The PoPoS Primitive}\label{sec.primitive} \myparagraph{The PoPoS Abstraction.} Every verifier $\mathcal{V}$ online at some round $r$ holds a state commitment $\ensuremath{\left<\mathsf{st}\right>}^\mathcal{V}_r$. To learn about this recent state, the verifier connects to provers $\mathcal{P} = \{P_1, P_2, \cdots, P_q\}$. All provers except one honest party can be controlled by the adversary, and the verifier does not know which party among the provers is honest (the verifier is assumed to be honest). The honest provers are always online. Each of them maintains a ledger $\ensuremath{\mathbb{L}}_i$. These are consistent by the safety of the underlying PoS protocol. Upon receiving a query from the verifier, each honest prover sends back a state commitment corresponding to its current ledger. However, the adversarial provers might provide incorrect or outdated commitments that are different from those served by their honest peers. To identify the correct commitment, the light client mediates an \emph{interactive} protocol among the provers: \begin{definition}[Proof of Proof-of-Stake] A \emph{Proof of Proof-of-Stake protocol} (PoPoS) for a PoS consensus protocol is a pair of interactive probabilistic polynomial-time algorithms $(P, V)$. The algorithm $P$ is the \emph{honest prover} and the algorithm $V$ is the \emph{honest verifier}.
The algorithm $P$ is run on top of an online PoS full node, while $V$ is a light client booting up for the first time, holding only the genesis state commitment $\left<\ensuremath{\mathsf{st}}_0\right>$. The protocol is executed between $V$ and a set of provers $\mathcal{P}$. After completing the interaction, $V$ returns a state commitment $\ensuremath{\left<\mathsf{st}\right>}$. \end{definition} \myparagraph{Security of the PoPoS Protocol.} The goal of the verifier is to output a state commitment consistent with the view of the honest provers. This is reflected by the following security definition of the PoPoS protocol. \begin{definition}[State Security] \label{def:state-security} Consider a PoPoS protocol $(P, V)$ executed at round $r$, where $V$ returns $\ensuremath{\left<\mathsf{st}\right>}$. It is \emph{secure} with \emph{parameter} $\nu$ if there exists a ledger $\ensuremath{\mathbb{L}}$ such that $\ensuremath{\left<\mathsf{st}\right>} = \left<\delta^*(\ensuremath{\mathsf{st}}_0, \ensuremath{\mathbb{L}})\right>$, and $\ensuremath{\mathbb{L}}$ satisfies: \begin{itemize} \item \textbf{Safety:} For all rounds $r' \geq r + \nu$: $\ensuremath{\mathbb{L}} \preccurlyeq \ensuremath{\mathbb{L}}^{\cup}_{r'}$. \item \textbf{Liveness:} For all rounds $r' \leq r - \nu$: $\ensuremath{\mathbb{L}}^{\cap}_{r'} \preccurlyeq \ensuremath{\mathbb{L}}$. \end{itemize} \end{definition} State security implies that the commitment returned by a verifier corresponds to a state recently obtained by the honest provers. \section{The Optimistic light client\xspace} \label{sec.sequences} Before we present our succinct PoPoS protocol, we introduce sync committees and handover messages, two necessary components that we will later use in our construction. We also propose a highly performant optimistic light client\xspace as a building block for the superlight client. \myparagraph{Sync Committees.} To allow the verifier to achieve state security, we introduce a \emph{sync committee} (first proposed in the context of PoS sidechains~\cite{pos-sidechains}). Each committee is elected for the duration of an epoch, and contains a subset, of fixed size $m$, of the public keys of the validators associated with that epoch. The committee of the next epoch is determined in advance, at the beginning of the previous epoch. All honest validators agree on this committee. The validators in the sync committee are sampled from the validator set of the corresponding epoch in such a manner that the committee retains an honest majority during the epoch. The exact means of sampling depend on the PoS implementation. One way to construct the sync committee is to sample uniformly at random from the underlying stake distribution using the epoch randomness of the PoS protocol~\cite{ouroboros,minimal_light_client}. The first committee $S^0$ is recorded in the genesis state $\ensuremath{\mathsf{st}}_0$. We denote the set of public keys of the sync committee assigned to epoch $j \in \mathbb{N}$ by $S^j$, and each committee member public key within $S^j$ by $S^j_i$, $i \in \mathbb{N}$. \myparagraph{Handover signatures.} During each epoch $j$, each honest committee member $S^j_i$ of epoch $j$ signs the tuple $(j + 1, S^{j+1})$, where $j+1$ is the next epoch index and $S^{j+1}$ is the set of all committee member public keys of epoch $j+1$. We let $\sigma^j_i$ denote the signature of $S^j_i$ on the tuple $(j+1,S^{j+1})$. This signature means that member $S^j_i$ approves the inauguration of the next epoch committee.
We call these \emph{handover signatures}\footnote{Handover signatures between PoS epochs were introduced in the context of PoS sidechains~\cite{pos-sidechains}. Some practical blockchain systems already implement similar handover signatures~\cite{nearbridge,horizon}.}, as they signify that the previous epoch committee \emph{hands over} control to the next committee. When epoch $j+1$ starts, the members of the committee $S^j$ assigned to epoch $j$ can no longer use their keys to create handover signatures\footnote{This assumption can be satisfied using key-evolving signatures~\cite{key-evolving,praos}, social consensus~\cite{weak-subjectivity}, or a static honest majority assumption.}. As soon as more than $\frac{m}{2}$ members of $S^j$ have approved the inauguration of the next epoch committee, the inauguration is ratified. This collection of signatures for the handover between epochs $j$ and $j+1$ is denoted by $\Sigma^{j+1}$, and is called the \emph{handover proof}. A \emph{succession} $\mathbb{S} = (\Sigma^1, \Sigma^2, \ldots, \Sigma^j)$ at an epoch $j$ is the sequence of all handover proofs across an execution until the beginning of the epoch. In addition to the handover signature, at the beginning of each epoch, every honest committee member signs the \emph{state commitment} corresponding to its ledger. When the verifier learns the latest committee, these signatures enable him to find the current state commitment. \myparagraph{A naive linear client.} Consider a PoPoS protocol where each honest prover gives the verifier a state commitment and signatures on the commitment from the latest sync committee $S^{N-1}$, where $N$ is the number of epochs (and $N-1$ is the last epoch). To convince the verifier that $S^{N-1}$ is the correct latest committee, each prover also shares the sync committees $S^0 \ldots S^{N-2}$ and the associated handover proofs in its view. The verifier knows $S^0$ from the genesis state $st_0$, and can verify the committee members of the future epochs iteratively through the handover proofs. Namely, upon obtaining the sync committee $S^j$, the verifier accepts a committee $S^{j+1}$ as the correct committee assigned to epoch $j+1$ if there are signatures on the tuple $(j+1, S^{j+1})$ from over half of the committee members in $S^j$. Repeating the process above, the verifier can identify the correct committee for the last epoch. After identifying the latest sync committee, the verifier checks if the state commitment provided by a prover is signed by over half of the committee members. If so, he accepts the commitment (a sketch of this verification loop is given below). It is straightforward to show that this strawman PoPoS protocol (which we later abbreviate as TLC\xspace) is secure (Definition~\ref{def:state-security}) under the following assumptions: \begin{enumerate} \item The underlying PoS protocol satisfies safety and liveness. \item The majority of the sync committee members are honest. \end{enumerate} When all provers are adversarial, the verifier might not receive any state commitment from them. In this context, the existential honesty assumption guarantees that there will be at least one honest prover providing the commitment signed by the sync committee of the latest epoch. However, the strawman protocol does not require existential honesty for the \emph{correctness} of the commitment accepted by the verifier.
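For concreteness, the verification loop of this naive client can be sketched in TypeScript as follows. This is a minimal sketch, not our implementation: \texttt{verifySig} and \texttt{encode} are hypothetical helpers, and networking, batching, and caching are omitted.
\begin{verbatim}
// Sketch of the TLC verification loop. Hypothetical helpers:
// verifySig checks one signature; encode serializes (epoch, committee).
type PubKey = string;
type Sig = { signer: PubKey; bytes: string };
declare function verifySig(pk: PubKey, msg: string, sig: Sig): boolean;
declare function encode(epoch: number, committee: PubKey[]): string;

// Walk S^0 ... S^{N-1}: accept S^{j+1} only if more than m/2
// members of S^j signed the tuple (j+1, S^{j+1}).
function verifySuccession(
  genesis: PubKey[],        // S^0, known from the genesis state st_0
  committees: PubKey[][],   // claimed S^1 ... S^{N-1}
  handoverProofs: Sig[][],  // claimed handover proofs Sigma^1 ... Sigma^{N-1}
): PubKey[] | null {
  let current = genesis;
  const m = genesis.length;
  for (let j = 0; j < committees.length; j++) {
    const msg = encode(j + 1, committees[j]);
    const signers = new Set(
      handoverProofs[j]
        .filter((s) => current.includes(s.signer) && verifySig(s.signer, msg, s))
        .map((s) => s.signer),
    );
    if (signers.size <= m / 2) return null; // handover not ratified
    current = committees[j];
  }
  return current; // latest committee; its signatures vouch for the commitment
}
\end{verbatim}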
Correctness holds because the verifier directly validates each sync committee assigned to consecutive epochs, and does not accept commitments that were not signed by over $\frac{m}{2}$ members of the correct latest committee. Hence, he cannot be made to accept a commitment that does not satisfy state security. Regrettably, the strawman protocol is $\mathcal{O}(|\mathcal{C}|)$ and not succinct: To identify the latest sync committee, the verifier has to download each sync committee since the genesis block. In the rest of this paper, we will improve this protocol to make it succinct. \myparagraph{The optimistic light client\xspace.} To reduce the communication complexity of the verifier, the PoPoS protocol can further utilize the existential honesty assumption. In this version of the protocol (which we later abbreviate as OLC\xspace), instead of sharing the sync committees $S^0 \ldots S^{N-2}$ and the associated handover proofs, each honest prover $P$ sends a sequence of \emph{hashes} $h^0 \ldots h^{N-1}$ corresponding to the sync committees $S^0 \ldots S^{N-1}$. Subsequently, to prove the correctness of the state commitment, the prover $P$ reveals the latest sync committee $S^{N-1}$ assigned to epoch $N-1$ and the signatures by its members on the commitment. Upon receiving the committee $S^{N-1}$, the verifier checks if the hash of $S^{N-1}$ matches $h^{N-1}$, and validates the signatures on the commitment. Unfortunately, an adversarial prover $P^*$ can claim an incorrect committee $S^{*,N-1}$, whose hash $h^{*,N-1}$ disagrees with $h^{N-1}$ returned by $P$. This implies a disagreement between the two hash sequences received from $P$ and $P^*$. The verifier can exploit this discrepancy to identify the truthful party that returned the correct committee. Towards this goal, the verifier iterates over the two hash sequences, and finds the \emph{first point of disagreement}. Let $j$ be the index of this point, such that $h^j \neq h^{*,j}$ and $h^{i} = h^{*,i}$ for all $i < j$. The verifier then requests $P$ to reveal the committees $S^j$ and $S^{j-1}$ at the preimage of $h^j$ and $h^{j-1}$, and to supply a handover proof $\Sigma^{j}$ from $S^{j-1}$ to $S^{j}$. He also requests $P^*$ to reveal the committees $S^{*,j}$ and $S^{*,j-1}$ at the preimage of $h^{*,j}$ and $h^{*,j-1}$, and to supply a handover proof $\Sigma^{*,j}$ from $S^{*,j-1}$ to $S^{*,j}$. As $h^{j-1} = h^{*,j-1}$ by definition, the verifier is convinced that the committees $S^{j-1}$ and $S^{*,j-1}$ revealed by $P$ and $P^*$ are the same. Finally, the verifier checks whether the committees $S^{*,j}$ and $S^{j}$ were inaugurated by the previous committee $S^{j-1}$ using the respective handover proofs $\Sigma^{j}$ and $\Sigma^{*,j}$. Since $S^{j-1}$ contains over $\frac{m}{2}$ honest members that signed only the correct committee $S^j$ assigned to epoch $j$, the adversarial prover $P^*$ cannot create a handover proof with sufficiently many signatures inaugurating $S^{*,j}$. Hence, the handover from $S^{j-1}$ to $S^{*,j}$ will not be ratified by $\Sigma^{*,j}$, whereas the handover from $S^{j-1}$ to $S^j$ will be ratified by $\Sigma^{j}$. Consequently, the verifier will identify $P$ as the truthful party and accept its commitment. In the protocol above, the security of the commitment obtained by the verifier relies crucially on the existence of an honest prover. Indeed, when all provers are adversarial, they can collectively return the \emph{same} incorrect state commitment and the \emph{same} incorrect sync committee for the latest epoch, and then provide over $\frac{m}{2}$ signatures by this committee on the incorrect commitment. In the absence of an honest prover to challenge the adversarial ones, the verifier would believe in the validity of an incorrect commitment.
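The comparison step of the OLC\xspace can be sketched as follows; the sketch assumes the two hash sequences have already been downloaded in full.
\begin{verbatim}
// Sketch of the OLC comparison: scan two hash sequences for the first
// index at which they disagree; the verifier then asks each prover to
// open the committees behind that index and the preceding one.
function firstDisagreement(h: string[], hStar: string[]): number {
  const n = Math.min(h.length, hStar.length);
  for (let j = 0; j < n; j++) {
    if (h[j] !== hStar[j]) return j;
  }
  return -1; // the sequences agree on their common prefix
}
\end{verbatim}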
The optimistic light client\xspace reduces the communication load of sending over the whole sync committee sequence by representing each committee with a constant-size hash. However, it is still $\mathcal{O}(|\mathcal{C}|)$, as the verifier has to do a linear search on the hashes returned by the two provers to identify the first point of disagreement. To support a truly succinct verifier, we will next work towards an interactive PoPoS protocol based on bisection games. \begin{figure*} \centering \includegraphics[width=0.8\textwidth,keepaspectratio]{figures/popos-tree.pdf} \caption{The \emph{handover tree}, the central construction of our protocol. The root of the Merkle tree is the initial proof $\pi$. During the bisection game, the signatures between the challenge node $j$ and its neighbours $j-1$ and $j+1$ are validated.} \label{fig.popos-tree} \end{figure*} \section{The Superlight Client} \label{sec.bisection} \myparagraph{Trees and Mountain Ranges.} Before describing the succinct PoPoS protocol and the superlight client, we introduce the data structures used by the bisection games. \import{./}{algorithms/alg.make.merkle.tree.tex} Suppose the number of epochs $N$ is a power of two. The honest provers organize the committee sequences for the past epochs into a Merkle tree (Algorithm~\ref{alg.make.merkle.tree}) called the \emph{handover tree} (Figure~\ref{fig.popos-tree}). The $j^\text{th}$ leaf of the handover tree contains the committee $S^j$ of the $j^\text{th}$ epoch. A handover tree consisting of leaves $S^0, \ldots, S^{N-1}$ is said to be \emph{well-formed} with respect to a succession $\mathbb{S}$ if it satisfies the following properties: \begin{enumerate} \item The leaves are syntactically valid. Every $j^\text{th}$ leaf contains a sync committee $S^j$ that consists of $m$ public keys. \item The first leaf corresponds to the known \emph{genesis} sync committee $S^0$. \item For each $j=1 \ldots N-1$, $\Sigma^j$ consists of over $\frac{m}{2}$ signatures by members of $S^{j-1}$ on $(j, S^j)$. \end{enumerate} Every honest prover holds a succession of handover signatures attesting to the inauguration of each sync committee in its handover tree after $S^0$. These successions might be different for every honest prover, as any set of more than $\frac{m}{2}$ signatures by members of $S^j$ can inaugurate $S^{j+1}$. However, the trees are the same for all honest parties, and they are well-formed with respect to the succession held by each honest prover. \import{./}{algorithms/alg.make.mmr.tex} When the number $N$ of epochs is not a power of two, provers arrange the past sync committees into Merkle mountain ranges (MMRs)~\cite{mmr,mmr-grin} (Algorithm~\ref{alg.make.mmr}). An MMR is a list of Merkle trees whose sizes are decreasing powers of two. To build an MMR, a prover first obtains the binary representation $2^{q_1}+\ldots+2^{q_n}$ of $N$, where $q_1>\ldots>q_n$. It then divides the sequence of sync committees into $n$ subsequences, one for each $q_i$. For $i \geq 1$, the $i^\text{th}$ subsequence contains the committees $S^{\sum_{k=1}^{i-1} 2^{q_k}}, \ldots, S^{(\sum_{k=1}^{i} 2^{q_k})-1}$. Each $i^\text{th}$ subsequence is organized into a distinct Merkle tree $\mathcal{T}_i$, whose root, denoted by $\left<\mathcal{T}_i\right>$, is called a \emph{peak}.
These peaks are all hashed together to obtain the root of the MMR. We hereafter refer to the index of each leaf in these Merkle trees by the epoch of the sync committee contained at the leaf. (For instance, if there are two trees with sizes $4$ and $2$, the leaf indices in the first tree are $0,1,2,3$, and the leaf indices in the second tree are $4$ and $5$.) The MMR is said to be \emph{well-formed} if each constituent tree is well-formed (but, of course, only the first leaf of the first tree needs to contain the genesis committee). To ensure succinctness, only the peaks and a small number of leaves, with their respective inclusion proofs, will be presented to the verifier during the following bisection game. \myparagraph{Different state commitments.} We begin our construction of the full PoPoS protocol (which we later abbreviate as SLC\xspace) by describing the first messages exchanged between the provers $\mathcal{P}$ and the verifier. Each honest prover first shares the state commitment signed by the latest sync committee at the beginning of the last epoch $N-1$. If all commitments received by the verifier are the same, by existential honesty, the verifier can rest assured that this commitment is correct, \emph{i.e.}\xspace, it corresponds to the ledger of the honest provers at the beginning of the epoch. If not, the verifier requests from each prover in $\mathcal{P}$: (i) the MMR peaks $\ensuremath{\left<\mathcal{T}\right>}_i$, $i \in [n]$, held by the prover, where $n$ is the number of peaks, (ii) the latest sync committee $S^{N-1}$, (iii) a Merkle inclusion proof for $S^{N-1}$ with respect to the last peak $\ensuremath{\left<\mathcal{T}\right>}_{n}$, and (iv) signatures by the committee members in $S^{N-1}$ on the state commitment given by the prover. Upon receiving these messages, the verifier first checks if there are more than $\frac{m}{2}$ valid signatures by the committee members in $S^{N-1}$ on the state commitment. It then verifies the Merkle proof for $S^{N-1}$ with respect to $\ensuremath{\left<\mathcal{T}\right>}_n$. As the majority of the committee members in $S^{N-1}$ are honest, it is not possible for two different state commitments to each be signed by over half of $S^{N-1}$. Hence, if the checks above succeed for two provers $P$ and $P^*$ that returned different commitments, one of them ($P^*$) must be an adversarial prover, and must have claimed an incorrect sync committee $S^{*,N-1}$ for the last epoch. Moreover, as the Merkle proofs for $S^{*,N-1}$ and $S^{N-1}$ verify against the respective peaks $\ensuremath{\left<\mathcal{T}\right>}^*_n$ and $\ensuremath{\left<\mathcal{T}\right>}_n$, these peaks must be different. Since the two provers disagree on the roots, and there is only one well-formed MMR at any given epoch, one of the provers does not hold a well-formed MMR. This reduces the problem of identifying the correct state commitment to detecting the prover that has a well-formed MMR behind its peaks. \import{./}{algorithms/alg.verifier} \import{./}{algorithms/alg.prover} \myparagraph{Bisection game.} To identify the honest prover with the well-formed MMR, the verifier (Algorithm~\ref{alg.bisection.verifier}) initiates a bisection game between $P$ and $P^*$ (Algorithm~\ref{alg.bisection.prover}). Suppose the number of epochs $N$ is a power of two. Each of the two provers claims to hold a tree of size $N$ (otherwise, since the verifier knows $N$ from his local clock, the prover with a different-size Merkle tree loses the game).
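Before walking through the game, the following sketch recaps how a prover may compute the MMR peaks via the binary decomposition of $N$ (complementing Algorithm~\ref{alg.make.mmr}); \texttt{makeMerkleRoot} is a hypothetical helper that builds a Merkle tree over the given leaves and returns its root.
\begin{verbatim}
// Sketch: split N leaves into subsequences of decreasing power-of-two
// sizes and hash each subsequence into a peak.
declare function makeMerkleRoot(leaves: string[]): string;

function makeMMRPeaks(leaves: string[]): string[] {
  const peaks: string[] = [];
  let start = 0;
  let remaining = leaves.length;
  // Binary decomposition N = 2^{q_1} + ... + 2^{q_n}, q_1 > ... > q_n.
  for (let q = 31; q >= 0; q--) {
    const size = 2 ** q;
    if (remaining >= size) {
      peaks.push(makeMerkleRoot(leaves.slice(start, start + size)));
      start += size;
      remaining -= size;
    }
  }
  return peaks; // e.g., N = 6 yields two trees of sizes 4 and 2
}
\end{verbatim}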
During the game, the verifier aims to locate the first point of disagreement between the alleged sync committee sequences at the leaves of the provers' Merkle trees, akin to the optimistic light client\xspace (Section~\ref{sec.sequences}). The game proceeds in a binary search fashion, similar to refereed delegation of computation~\cite{refereed-computation,practical-delegation,arbitrum}. Starting at the Merkle roots $\ensuremath{\left<\mathcal{T}\right>}$ and $\ensuremath{\left<\mathcal{T}\right>}^*$ of the two trees, the verifier traverses an identical path on both trees until reaching a leaf at the same index on both; this leaf corresponds to the first point of disagreement. At each step of the game, the verifier asks the provers to reveal the children of the \emph{current} node, denoted by $h_c$ and $h^*_c$ on the respective trees (Algorithm~\ref{alg.bisection.prover} Line~\ref{line:reveal}). Initially, $h_c=\ensuremath{\left<\mathcal{T}\right>}$ and $h^*_c=\ensuremath{\left<\mathcal{T}\right>}^*$ (Algorithm~\ref{alg.bisection.verifier} Line~\ref{line:initial}). Upon receiving the alleged left and right child nodes $h^*_0$ and $h^*_1$ from $P^*$, and $h_0$, $h_1$ from $P$, he checks if $h_c=H(h_0 \,\|\, h_1)$ and $h^*_c=H(h^*_0 \,\|\, h^*_1)$, where $H$ is the collision-resistant hash function used to construct the Merkle trees (Algorithm~\ref{alg.bisection.verifier} Lines~\ref{line:check1} and~\ref{line:check2}). The verifier then compares $h_0$ with $h^*_0$, and $h_1$ with $h^*_1$, to determine if the disagreement is on the left or the right child (Algorithm~\ref{alg.bisection.verifier} Lines~\ref{line:left} and~\ref{line:right}). Finally, he descends into the first disagreeing child, and communicates this decision to the provers (Algorithm~\ref{alg.bisection.prover} Line~\ref{line:comm}), so that they can update the current node that will be queried in the next step of the bisection game (Algorithm~\ref{alg.bisection.prover} Lines~\ref{line:descend1} and~\ref{line:descend2}). Upon reaching a leaf at some index $j$, the verifier asks both provers to reveal the alleged committees $S^j$ and $S^{*,j}$ at the preimage of the respective leaves. If $j=0$, he inspects whether $S^j$ or $S^{*,j}$ matches the genesis committee $S^0$. The prover whose alleged first committee is not equal to $S^0$ loses the game. If $j>0$, the verifier also requests from the provers (i) the committees at the $(j-1)^\text{th}$ leaves, (ii) their Merkle proofs with respect to $\ensuremath{\left<\mathcal{T}\right>}$ and $\ensuremath{\left<\mathcal{T}\right>}^*$, and (iii) the handover proofs $\Sigma^{j}$ and $\Sigma^{*,j}$. The honest prover responds with (i) $S^{j-1}$ assigned to epoch $j - 1$, (ii) its Merkle proof with respect to $\ensuremath{\left<\mathcal{T}\right>}$, and (iii) its own view of the handover proof $\Sigma^j$ (which might be different from that of other provers). Upon checking the Merkle proofs, the verifier is now convinced that the committees $S^{j-1}$ and $S^{*,j-1}$ revealed by $P$ and $P^*$ are the same, since their hashes match. The verifier subsequently checks if $\Sigma^{j}$ contains more than $\frac{m}{2}$ signatures by the committee members in $S^{j-1}$ on $(j, S^{j})$, and similarly for $P^*$. The prover that fails any of the verifier's checks loses the bisection game. If one prover loses the game and the other one does not fail any checks, the standing prover is designated the winner.
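The descent loop itself can be sketched as follows. This is an illustration of the verifier's side of Algorithm~\ref{alg.bisection.verifier} under simplifying assumptions (fixed-length hashes, synchronous prover stubs), not our implementation.
\begin{verbatim}
// Sketch of the verifier's binary-search descent. Prover stubs answer
// `open` queries with the two children of their current node.
declare function H(data: string): string;
interface Prover { open(): [string, string]; descend(dir: 0 | 1): void; }

// Returns the index of the first disagreeing leaf, or the identity
// (0 or 1) of a prover caught returning inconsistent children.
function bisect(p: Prover, pStar: Prover,
                root: string, rootStar: string, depth: number) {
  let hc = root, hcStar = rootStar, index = 0;
  for (let level = 0; level < depth; level++) {
    const [h0, h1] = p.open();
    const [s0, s1] = pStar.open();
    if (H(h0 + h1) !== hc) return { cheater: 0 };
    if (H(s0 + s1) !== hcStar) return { cheater: 1 };
    const dir: 0 | 1 = h0 !== s0 ? 0 : 1; // first disagreeing child
    [hc, hcStar] = dir === 0 ? [h0, s0] : [h1, s1];
    index = index * 2 + dir;
    p.descend(dir);
    pStar.descend(dir);
  }
  return { leaf: index }; // committees at this leaf are revealed next
}
\end{verbatim}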
If neither prover fails any of the checks, then both $\Sigma^{j}$ and $\Sigma^{*,j}$ contain over $\frac{m}{2}$ valid signatures by committee members in $S^{j-1}$ on different future sync committees (\emph{i.e.}\xspace, on $(j,S^j)$ and $(j,S^{*,j})$, where $(j,S^j) \neq (j,S^{*,j})$). Had $S^{j-1}$ been the correct committee of epoch $j-1$, over $\frac{m}{2}$ of its members would have been honest and would have signed only a single successor committee, so at most one of the two handover proofs could have gathered enough signatures. This implies $S^{j-1}$ is not the correct sync committee assigned to epoch $j-1$, and both provers are adversarial. In this case, both provers lose the bisection game. In any case, at most one prover can win the bisection game. \import{./}{algorithms/alg.peaks.vs.peaks} \myparagraph{Bisection games on Merkle mountain ranges.} When the number of epochs $N$ is not a power of two, the verifier first obtains the binary decomposition $\sum_{i=1}^n 2^{q_i} = N$, where $q_1>\ldots>q_n$. Then, for each prover $P$, he checks if there are $n$ peaks returned. If that is the case for two provers $P$ and $P^*$ that returned different commitments, the verifier compares the peaks $\ensuremath{\left<\mathcal{T}\right>}_i$ of $P$ with $\ensuremath{\left<\mathcal{T}\right>}^*_i$ of $P^*$, and identifies the first different peak (Algorithm~\ref{alg.peaks.vs.peaks}). It then plays the bisection game as described above on the identified Merkle trees. The only difference with the game above is that, if the disagreement is on the first leaf $j$ of a later Merkle tree, then the Merkle proof for the previous leaf $j-1$ is shown with respect to the peak of the previous tree. \import{./}{algorithms/alg.tournament} \myparagraph{Tournament.} When there are multiple provers, the verifier interacts with them sequentially in pairs, in a tournament fashion. It begins by choosing two provers $P_1$ and $P_2$ with different state commitments from the set $\mathcal{P}$ (Algorithm~\ref{alg.tournament}, line~\ref{line:sample}). The verifier then \emph{pits one against the other}, by facilitating a bisection game between $P_1$ and $P_2$, and decides which of the two provers loses (Algorithm~\ref{alg.tournament}, line~\ref{line:game}). (There can be at most one winner in any bisection game.) He then eliminates the loser from the tournament, and chooses from the set $\mathcal{P}$ a new prover, with a state commitment different from the winner's, to compete against the winner. In the event that both provers lose, the verifier eliminates both, and continues the tournament with the remaining provers by sampling two new provers with different state commitments. This process continues until all provers left have the same state commitment. This commitment is adopted as the correct one. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/bisection-game.pdf} \vspace{-0.7cm} \caption{Honest and adversarial prover in the PoPoS bisection game.} \label{fig.bisection-game} \end{figure} \myparagraph{Past and future.} Now that the verifier has obtained the state commitment signed for the most recent epoch, and confirmed its veracity, the task that remains is to discern facts about the system's state and its history. To perform queries about the current state, such as determining how much balance one owns, the verifier simply asks for Merkle inclusion proofs into the proven state commitment. One drawback of our protocol is that the state commitment received by the verifier is the commitment at the \emph{beginning} of the current epoch, and may therefore be somewhat stale. In order to synchronize with the latest state within the epoch, the verifier must function as a full node for at most the duration of a single epoch.
This functionality does not harm succinctness, since epochs have a fixed, constant duration. For example, in the case of a longest-chain blockchain, the protocol works as follows. In addition to signing the state commitment, the sync committee also signs the first stable block header of its respective epoch. The block header is verified by the verifier in a fashion similar to how he verified the state commitment. Subsequently, the block header can be used as a \emph{neon genesis} block. The verifier treats the block as a replacement for the genesis block and bootstraps from there\footnote{While bootstrapping, the verifier can update the state commitment by applying the transactions within the later blocks on top of the state commitment from the neon genesis block via the function $\left<\delta\right>$.}. One aspect of wallets that we have not touched upon concerns the retrieval and verification of historical transactions. This can be performed as follows. The verifier, as before, identifies the root of the correct handover tree. Using a historical sync committee, attested by an inclusion proof to the reference root, it detects the first stable block header of the epoch \emph{immediately following} the transaction of interest. He downloads and verifies the committee signatures on the first stable block header of that epoch. Subsequently, he requests the short blockchain that connects the block containing the transaction of interest to the reference stable block header. As blockchains contain hashes committing to all of their past data, this inclusion cannot be faked by an adversary. \section{Proof-of-Stake Ethereum Light Clients} \label{sec.pos-eth} The bisection games presented in Section~\ref{sec.bisection} can be applied to a variety of PoS consensus protocols to efficiently catch up with current consensus decisions. In this section, we present an instantiation for PoS Ethereum. We also detail how to utilize the latest epoch committee obtained from bisection games to build a full-featured Ethereum JSON-RPC. This allows existing wallets such as MetaMask to use our construction without any changes. Our implementation can serve as a drop-in replacement, offering better decentralization and performance. Our PoPoS protocol for PoS Ethereum does not require any changes to the consensus layer, as PoS Ethereum already provisions for sync committees in the way we introduced in Section~\ref{sec.sequences}. \subsection{Sync Committee Essentials} \label{sec:poseth-synccommittee} Sync committees of PoS Ethereum contain $m = 512$ validators, sampled at random from the validator set in proportion to the stake distribution. Every sync committee is selected for the duration of a so-called \emph{sync committee period}~\cite{minimal_light_client} (which we called an \emph{epoch} in our generic construction). Each period lasts $256$ PoS Ethereum epochs (these are different from our epochs), approximately $27$ hours. PoS Ethereum epochs are further divided into \emph{slots}, during each of which a new block is proposed by one validator and signed by the subset of validators assigned to the slot. At each slot, each sync committee member of the corresponding period signs the block at the tip of the chain (called the \emph{beacon chain}~\cite{minimal_light_client}) according to its view. The proposer of the next slot aggregates and includes within its proposal the aggregate sync committee signature on the parent block.
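For concreteness, the period arithmetic works out as follows, assuming the mainnet parameters of $32$ slots per epoch and $12$-second slots (the $256$ epochs per period figure is from the specification cited above):
\begin{verbatim}
// Sync committee period arithmetic (assumed mainnet parameters).
const SLOT_SECONDS = 12;
const SLOTS_PER_EPOCH = 32;
const EPOCHS_PER_PERIOD = 256;

const periodSeconds = EPOCHS_PER_PERIOD * SLOTS_PER_EPOCH * SLOT_SECONDS;
console.log(periodSeconds);        // 98304 seconds
console.log(periodSeconds / 3600); // ~27.3 hours per period

// Mapping a slot to its sync committee period:
const period = (slot: number) =>
  Math.floor(slot / (SLOTS_PER_EPOCH * EPOCHS_PER_PERIOD));
\end{verbatim}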
The sync committees are determined one period in advance, and the committee for each period is contained in the block headers of the previous period. Each block also contains a commitment to the header of the last finalized block that lies on its prefix. \subsection{Linear-Complexity Light Client} \label{sec:poseth-lightclient} Light clients use the sync committee signatures to detect the latest beacon chain block finalized by the Casper FFG finality gadget~\cite{casper,gasper}. At any round, the view of a light client consists of a $\mathsf{finalized\_header}$, the current sync committee, and the next sync committee. The client updates its view upon receiving a $\mathsf{LightClientUpdate}$ object (update for short), which contains (i) an $\mathsf{attested\_header}$ signed by the sync committee, (ii) the corresponding aggregate BLS signature, (iii) the slot at which the aggregate signature was created, (iv) the next sync committee as stated in the $\mathsf{attested\_header}$, and (v) a $\mathsf{finalized\_header}$ (called the new finalized header for clarity) to replace the one held by the client. To validate an update, the client first checks if the aggregate signature is from a slot later than that of the $\mathsf{finalized\_header}$ in its view, and if this slot is within the current or the next period. (Updates with signatures from sync committees that are more than one period in the future are rejected.) It then verifies the inclusion of the new finalized header and the next sync committee provided by the update with respect to the state of the $\mathsf{attested\_header}$ through Merkle inclusion proofs. Finally, it verifies the aggregate signature on the $\mathsf{attested\_header}$ by the committee of the corresponding period. Since the signatures are either from the current period or the next one, the client knows the respective committee. After validating the update, the client replaces its $\mathsf{finalized\_header}$ with the new one if the $\mathsf{attested\_header}$ was signed by over $2/3$ of the corresponding sync committee. If this header is from a higher period, the client also updates its view of the sync committees. Namely, the old next sync committee becomes the new current committee, and the next sync committee included in the $\mathsf{attested\_header}$ is adopted as the new next sync committee. \subsection{Logarithmic Bootstrapping from Bisection Games} \label{sec:poseth-bisectiongames} The construction above requires a bootstrapping light client to download at least one update per period, imposing a communication complexity linear in the lifetime of the chain. To reduce the communication load and complexity, the optimistic light client\xspace and superlight client constructions introduced in Sections~\ref{sec.sequences} and~\ref{sec.bisection} can be applied to PoS Ethereum. A bootstrapping superlight client first connects to a few provers, and asks for the Merkle roots of the handover trees (\emph{cf.}\xspace Section~\ref{sec.bisection}). The leaf of the handover tree at position $j$ consists of all the public keys of the sync committee of period $j$, concatenated with the period index $j$. If all the roots are the same, then the client accepts the sync committee at the last leaf as the most recent committee. If the roots are different, the client facilitates bisection games among the conflicting provers.
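A minimal sketch of the leaf construction just described follows; the concrete hash function and encoding are implementation details, and \texttt{sha256} here is an assumption for illustration rather than a requirement.
\begin{verbatim}
// Sketch: the period-j leaf commits to the concatenation of the
// committee's public keys and the period index.
declare function sha256(data: string): string;

function leafHash(periodIndex: number, committeePubKeys: string[]): string {
  // Fixed-width fields keep the encoding unambiguous; BLS public keys
  // have a fixed length, so plain concatenation suffices for them.
  const encodedKeys = committeePubKeys.join("");
  return sha256(encodedKeys + periodIndex.toString(16).padStart(16, "0"));
}
\end{verbatim}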
Upon identifying the first point of disagreement between two trees (\emph{e.g.}\xspace, some leaf $j$), the client asks each prover to provide a $\mathsf{LightClientUpdate}$ object to justify the handover from the committee $S^{j-1}$ to $S^j$. For this purpose, each prover has to provide a valid update that includes (i) an aggregate signature by over $2/3$ of the set $S^{j - 1}$ on an $\mathsf{attested\_header}$, and (ii) the set $S^{j}$ as the next sync committee within the $\mathsf{attested\_header}$. Upon identifying the honest prover, and thus the correct latest sync committee, the client can ask the honest prover for the latest update signed by the latest sync committee and containing the tip of the chain. \subsection{Superlight Client Architecture} \label{sec:poseth-slc-architecture} \input{figures/superlightclient-architecture} Upon completion of bootstrapping, the client has identified the latest beacon chain block header. The block header contains the commitment to the state of the Ethereum universe that results from executing all transactions since genesis up to and including the present block. Furthermore, this commitment is verified as part of consensus. The client can then query a full node about the state of Ethereum. The result of the query can then be verified against the state commitment using Merkle inclusion proofs. This allows the client to access the state of the Ethereum universe in a trust-minimizing way. Figure~\ref{fig:architecture} depicts the resulting architecture of the superlight client. In today's Ethereum, a user's wallet typically speaks to Ethereum JSON-RPC endpoints provided either by a centralized infrastructure provider such as Infura or by a (trusted, possibly self-hosted) Ethereum full node. Instead, the centerpiece of a superlight client is a shim that provides RPC endpoints to the wallet, but where new transactions and queries to the Ethereum state are proxied to upstream full nodes, and query responses are verified with respect to a given commitment to the Ethereum state. This commitment is produced using two sidecar processes, which implement the prover and verifier of the bisection game. For this purpose, the server-side sidecar obtains the latest sync information from a full node, using what is commonly called the `libp2p API'. The client-side sidecar feeds the block header at the consensus tip into the shim. \section{Experiments} \label{sec.experiments} To assess the different bootstrapping mechanisms for PoS Ethereum (traditional light client\xspace = TLC\xspace; optimistic light client\xspace = OLC\xspace; superlight client\xspace = SLC\xspace), we implemented them in $\approx2000$ lines of TypeScript code (source code available on Github\footnote{% \ifanonymous{\url{https://anonymous.4open.science/r/eth-pos-superlight-client-8C2F/}}% \else{\url{https://github.com/shresthagrawal/poc-superlight-client}}% }). We demonstrate an improvement of SLC\xspace over TLC\xspace of $9\times$ in time-to-completion, $180\times$ in communication bandwidth, and $30\times$ in energy consumption, when bootstrapping after $10$ years of consensus execution. SLC\xspace improves over OLC\xspace by $3\times$ in communication bandwidth in this setting. \subsection{Setup} \label{sec:experiments-setup} Our experimental scenario includes seven malicious provers, one honest prover, and a verifier. All provers run on different Heroku `\texttt{performance-m}' instances located in the `\texttt{us}' region.
The verifier runs on an Amazon EC2 `\texttt{m5.large}' instance located in `\texttt{us-west-2}'. The provers' Internet access is not restricted beyond the hosting provider's limits. The verifier's down- and upload bandwidth is artificially rate-limited to 100\,Mbit/s and 10\,Mbit/s, respectively, using `\texttt{tc}'. We monitor memory usage to rule out spillover from RAM into swap space. In preprocessing, we create eight valid traces of the sync committee protocol for an execution horizon of $30$ years. For this purpose, we create $512$ cryptographic identities per simulated day, as well as the aggregate signatures for the handover from one day's sync committee to the next day's. In some experiments, we vary how much simulated time has passed since genesis, and for this purpose truncate the execution traces accordingly. One of the execution traces is used by the honest prover and understood to be the true honest execution. Adversarial provers each pick a random point in time, and splice the honest execution trace up to that point together with one of the other execution traces for the remaining execution time, \emph{without regenerating handover signatures}, so that the resulting execution trace used by adversarial provers has an invalid handover at the point of splicing. We also vary the internal parameters of the (super-)light client protocols (\emph{i.e.}\xspace, batch size $b$ of TLC\xspace and OLC\xspace, Merkle tree degree $d$ of SLC\xspace). \subsection{Time-To-Completion \& Total Verifier Communication} \label{sec:experiments-tradeoff-delay-bandwidth} \input{figures/plots/experiment-1/figure} The average time-to-completion (TTC) and total communication bandwidth (TCB) required by the different light client constructions per bootstrapping occurrence are plotted in Figure~\ref{fig:experiment-experiment-1} for varying internal parameters (batch sizes $b$ for TLC\xspace and OLC\xspace; Merkle tree degrees $d$ for SLC\xspace) and varying execution horizons (from $5$ to $30$ years). Pareto-optimal TTC and TCB are achieved for the values of $b$ and $d$ at the `tip' of the `L-shaped' curves. For instance, for $10$ years of execution, TLC\xspace, OLC\xspace and SLC\xspace achieve Pareto-optimal TTC/TCB for $b\approx200$, $b\approx500$, and $d\approx100$, respectively. Evidently, across a wide parameter range, OLC\xspace and SLC\xspace vastly outperform TLC\xspace in both metrics; \emph{e.g.}\xspace, for $10$ years of execution and Pareto-optimal parameters, $9\times$ in TTC, and $180\times$ in TCB. In this setting, SLC\xspace has a TTC similar to OLC\xspace, and $3\times$ lower TCB ($5\times$ lower TCB for $30$ years). \input{figures/plots/experiments-23/figure} The fact that both TLC\xspace and OLC\xspace have TCB linear in the execution horizon is readily apparent from Figure~\ref{fig:experiment-experiment-1}. The linear TTC is visible for TLC\xspace, but not very pronounced for OLC\xspace, due to the concretely low proportionality constant. In comparison, SLC\xspace shows barely any dependence of TTC or TCB on the execution horizon, hinting at the (exponentially better) logarithmic dependence. To contrast the asymptotics, we plot average TTC as a function of exponentially increasing execution horizon in Figure~\ref{fig:experiment-experiments-23} for OLC\xspace and SLC\xspace with internal parameters $b=20$ and $d=2$, respectively. Note that these are not Pareto-optimal parameters, but they are chosen here for illustration purposes.
Clearly, the TTC for OLC\xspace is linear in the execution horizon (plotted in Figure~\ref{fig:experiment-experiments-23} against exponentially increasing horizons), while for SLC\xspace it is logarithmic. \subsection{Power \& Energy Consumption} \label{sec:experiments-energy} \input{figures/plots/energypower-1/figure} A key motivation for superlight clients is their application on resource-constrained platforms such as browsers or mobile phones. In this context, computational efficiency, and, as a proxy, energy efficiency, is an important metric. We ran the light clients on a battery-powered System76 Lemur Pro (`\texttt{lemp10}') laptop with Pop!\_OS 22.04 LTS, and recorded the decaying battery level using `\texttt{upower}' (screen off, no other programs running, no keyboard/mouse input, WiFi connectivity; provers still on Heroku instances). From the energy consumption and wallclock time, we calculated the average power consumption. As internal parameters for TLC\xspace, OLC\xspace, and SLC\xspace, we chose $b=200$, $b=500$, and $d=100$, respectively (\emph{cf.}\xspace Pareto-optimal parameters in Figure~\ref{fig:experiment-experiment-1}). The energy required to bootstrap $10$ years of consensus execution, averaged over $5$ trials for TLC\xspace, and $25$ trials for OLC\xspace and SLC\xspace, is plotted in Figure~\ref{fig:experiment-energypower-1}. We disaggregate the energy consumption into power consumption and TTC for each light client, and also record the power consumption of the machine when idle. (Note that discrepancies between Figures~\ref{fig:experiment-experiment-1} and~\ref{fig:experiment-energypower-1} are due to the light clients running on Amazon EC2 vs.\ a laptop.) OLC\xspace and SLC\xspace have comparable TTC and power consumption, resulting in comparable energy consumption per bootstrap occurrence. The energy required by OLC\xspace and SLC\xspace is $30\times$ lower than the energy required by TLC\xspace per bootstrap occurrence (top panel in Figure~\ref{fig:experiment-energypower-1}). This can be attributed to a $\approx4\times$ lower power consumption (middle panel in Figure~\ref{fig:experiment-energypower-1}) together with a $\approx7\times$ lower TTC (bottom panel in Figure~\ref{fig:experiment-energypower-1}). The considerably lower energy/power consumption of OLC\xspace/SLC\xspace compared to TLC\xspace is due to the lower number of signature verifications (and thus the lower computational burden). Note that a sizeable fraction of OLC\xspace's/SLC\xspace's power consumption can be attributed to system idle (middle panel in Figure~\ref{fig:experiment-energypower-1}). When comparing light clients in terms of \emph{excess} energy consumption (\emph{i.e.}\xspace, subtracting idle consumption) per bootstrapping, OLC\xspace and SLC\xspace improve over TLC\xspace by $64\times$. \section{Analysis} \label{sec.analysis} The theorems for succinctness and security of the PoPoS protocol are provided below. Security consists of two components: completeness and soundness. \begin{restatable}[Succinctness]{theorem}{restateSuccinctness} \label{thm:succinctness} Consider a verifier that invokes a bisection game at round $r$ between two provers that provided different handover tree roots. Then, the game ends in $O(\log(r))$ steps of interactivity and has a total communication complexity of $O(\log(r))$.
\end{restatable} \begin{restatable}[Completeness]{theorem}{restateCompleteness} \label{thm:completeness} Consider a verifier that invokes a bisection game at round $r$ between two provers that provided different handover tree roots. Suppose one of the provers is honest. Then, the honest prover wins the bisection game. \end{restatable} \begin{restatable}[Soundness]{theorem}{restateSoundness} \label{thm:soundness} Let $H^s$ be a collision-resistant hash function. Consider a verifier that invokes a bisection game executed at round $r$ of a secure underlying PoS protocol between two provers that provided different handover tree roots. Suppose one of the provers is honest, and the signature scheme satisfies existential unforgeability. Then, for all PPT adversarial provers $\mathcal{A}$, the prover $\mathcal{A}$ loses the bisection game against the honest prover with overwhelming probability in $\lambda$. \end{restatable} \begin{restatable}[Tournament Runtime]{theorem}{restateTournamentRuntime} \label{thm:tournament-runtime} Consider a tournament started at round $r$ with $|\mathcal{P}|$ provers. Suppose one of the provers is honest. Then, the tournament ends in $O(|\mathcal{P}|\log(r))$ steps of interactivity, and has a total communication complexity of $O(|\mathcal{P}|\log(r))$. \end{restatable} \begin{restatable}[Security]{theorem}{restateSecurity} \label{thm:security} Let $H^s$ be a collision-resistant hash function. Consider a tournament executed between an honest verifier and $|\mathcal{P}|$ provers at round $r$. Suppose one of the provers is honest, the signature scheme satisfies existential unforgeability, and the PoS protocol is secure. Then, for all PPT adversaries $\mathcal{A}$, the state commitment obtained by the verifier at the end of the tournament satisfies state security with overwhelming probability in $\lambda$. \end{restatable} Proofs of these theorems are given in Appendix~\ref{sec.proofs}. \section{Other Proof-of-Stake Systems} \label{sec.other} We have presented our construction in a generic PoS model, and instantiated it concretely for PoS Ethereum. Our construction is quite general and can be adapted to virtually any PoS system. Many PoS systems are split into (potentially smaller) epochs in which some sampling from the underlying stake distribution is performed according to some random number. The random number generation can be performed in multiple ways. For example, all of Ouroboros~\cite{ouroboros}, Ouroboros Praos~\cite{praos}, and Ouroboros Genesis~\cite{genesis} use a verifiable secret sharing mechanism, while Algorand~\cite{algorand} uses a multiparty computation. The stake distribution from which the sampling is performed could also have various nuances: it may support delegation, might require locking up one's funds, may exclude parties with very small stake, or may give different weights to different stake ownership. In all of these cases, a frozen stake distribution from which the final sampling is performed is determined. Our scheme can be generalized to any PoS scheme in which the leader can be verified from a frozen stake distribution and some randomness, no matter how it is generated, as long as the block associated with a particular slot can be uniquely determined after it stabilizes (a property that holds in any blockchain system that observes the common prefix property).
In the scheme we described throughout the paper, the handover signatures $\overline{\sigma}^{j+1}$ that are generated in an epoch $j$ and vouch for the leaders of the next epoch sign the public key set $S^{j+1}$ of the next epoch. To generalize our scheme to any PoS system with randomness and a stake distribution, the signatures need no longer cover the public key sequence $S^{j+1}$; instead, they can sign: \begin{enumerate} \item the epoch randomness $\eta^{j+1}$ of the next epoch, and \item the frozen stake distribution $\textsf{SD}^j$ of the current epoch that will be used for sampling during the next epoch. \end{enumerate} Of course, in such a scheme, a succinctness problem arises: The stake distribution $\textsf{SD}^j$ might be large. However, this problem can be overcome by organizing the stake distribution $\textsf{SD}^j$ into a Merkle tree. This Merkle tree contains one leaf for every satoshi (the smallest cryptocurrency denomination). The leaf's value is the public key that owns this satoshi. When sampling from $\textsf{SD}^j$ according to the randomness $\eta^{j+1}$, the prover can provide a proof that the correct leader was the one that happened to be elected, by opening the particular Merkle tree path at the sampled index. That way, the verifier can deduce the last slot leaders of each epoch. Because the number of satoshis can be large, this Merkle tree can have a large (potentially exponential) number of leaves. However, its root and proofs can be efficiently computed using sparse Merkle tree techniques~\cite{sparse-mt} (or Merkle tries~\cite{wood}), because the tree contains a polynomial number of contiguous ranges in which many consecutive leaves share the same value. Even better, Merkle--Segment trees~\cite{flyclient} can be used. These trees are similar to Merkle trees, except that each node (internal or leaf) is also annotated with a numerical value, here the total stake under the subtree rooted at the particular node. Each internal node has the property that its annotated value is the sum of the annotated values of its children (a short code sketch is given below). The above technique is quite generic, but each system has its nuances that must be accounted for. \myparagraph{Ouroboros/Cardano.} Our construction can be implemented in Cardano/Ouroboros~\cite{ouroboros} as presented, by electing a committee; however, the underlying longest-chain rule lends itself to better implementations for committee election and signature inclusion. One way to make use of the Cardano protocol is to extend the epoch duration $R$ by $2k$ slots. In this manner, the randomness and leaders of the next epoch are known during the last $2k$ slots of the previous epoch (in the vanilla Cardano protocol, the leaders and randomness of the next epoch only become known at the end of the epoch). The last $2k$ slots of each epoch are then used to determine the sync committee. The committee consists of the leaders of these $2k$ slots, and no separate process is required to elect it. In each of the last $2k$ slots of an epoch, we add the extra requirement to the block validity rules that the block producer must have included a handover signature for the correct next-epoch committee; otherwise, the block is rejected as invalid by full nodes. These small changes mean that the Ouroboros protocol can be used almost as-is to support our PoPoS, and they require no additional mechanism for electing committees nor any off-chain mechanism for exchanging committee succession signatures, as the blocks themselves are used as carriers of this information.
The critical property of the Ouroboros protocol that allows us to prove security in this setting is the following lemma: \begin{lemma}[Honest Subsequence] \label{lem.subsequence} Consider any continuous window of $2k$ slots within an epoch. If \emph{any} $k+1$ keys among these $2k$ slot leaders are chosen, then at least one of them is guaranteed to be honest, except with negligible probability in $k$. \end{lemma} Using the above lemma, we see that the last $2k$ slots will necessarily contain $k+1$ honest leaders who will produce correct committee signatures, and so our PoPoS assumption that the committee has honest majority during its epoch is satisfied. The security of the protocol then follows from Theorem~\ref{thm:security}. \myparagraph{Ouroboros Praos and Genesis.} These two protocols have some similarities to Ouroboros, but also significant differences. As Ouroboros Praos and Genesis are designed to be resilient to fully adaptive adversaries, the actual slot leader of each slot is not known \emph{a priori}. However, a party can determine for himself whether he is eligible to be the slot leader by evaluating a VRF on the epoch randomness and the current slot index using his private key. If the VRF output is below a certain threshold, determined by the candidate leader's stake, then the party is eligible to be a leader for this slot. The party's public key can then be used by others to verify a proof that the VRF computation was correct, and that he is indeed a rightful leader. Because we cannot determine the leaders of the $(j+1)^\text{st}$ epoch at the end of epoch $j$, we cannot hope to have the leaders of the $j^\text{th}$ epoch sign off the public keys of the leaders of epoch $j+1$. However, the above technique, in which the signatures cover the randomness and a Merkle--Segment tree of the stake distribution, together with the VRF proof, suffices. In this construction, the signatures of epoch $j$ sign off the randomness for epoch $j+1$ and the stake distribution Merkle tree for epoch $j$. At a later time, when the leader is revealed, the honest prover can provide the VRF proof to the verifier, and the verifier can check that the leader was indeed rightful. To obtain the VRF threshold, the prover can open the Merkle--Segment tree to the depth required to demonstrate the leader's total stake. Once the leader's stake is revealed, the threshold used in the VRF inequality is validated. These protocols have several advantages, including security in the semi-synchronous setting as well as resilience to adaptive adversaries~\cite{praos,genesis}. \myparagraph{Snow White.} This protocol uses epochs, and every epoch contains a randomness and a stake distribution from which leaders are sampled~\cite{snowwhite}. Therefore, our protocol can be readily adapted to it. \myparagraph{Algorand.} Contrary to Ouroboros, Algorand offers immediate finality~\cite{algorand}. Once a block is broadcast, any transactions contained within are confirmed and can no longer be reverted. In other words, its common prefix property holds with a parameter of $k = 1$. To achieve this, Algorand runs a full Byzantine Agreement protocol for the generation of every block before moving to the next block. One way to look at it is to think of Algorand as a coin in which the epoch duration is $R = 1$. Our construction can therefore create a handover tree in which the leaves are exactly the blocks in the Algorand chain.
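To make the preceding discussion concrete, the following sketch illustrates a stake-annotated (Merkle--Segment) tree and the VRF eligibility check described for Praos/Genesis; Algorand's sortition is similar in spirit. All helper names (\texttt{H}, \texttt{vrfVerify}, \texttt{threshold}) are hypothetical assumptions for illustration.
\begin{verbatim}
// (i) Stake-annotated tree: each node carries the total stake of its
// subtree, so a sampled index resolves to the owning public key.
declare function H(data: string): string;

interface StakeNode {
  hash: string;
  stake: bigint;      // total stake under this subtree
  left?: StakeNode;
  right?: StakeNode;
  owner?: string;     // leaf only: public key owning this stake range
}

function parent(l: StakeNode, r: StakeNode): StakeNode {
  const stake = l.stake + r.stake;
  return { hash: H(l.hash + r.hash + stake.toString()),
           stake, left: l, right: r };
}

// Walk the annotations to resolve a sampled index in [0, root.stake);
// the visited path doubles as a succinct opening proof.
function resolveOwner(node: StakeNode, index: bigint): string {
  if (node.owner !== undefined) return node.owner;
  return index < node.left!.stake
    ? resolveOwner(node.left!, index)
    : resolveOwner(node.right!, index - node.left!.stake);
}

// (ii) Praos/Genesis-style eligibility: the VRF output must verify and
// fall below a threshold that grows with the party's stake.
declare function vrfVerify(pk: string, input: string,
                           out: bigint, proof: string): boolean;
declare function threshold(stake: bigint, totalStake: bigint): bigint;

function isRightfulLeader(pk: string, epochRandomness: string, slot: number,
                          vrfOut: bigint, vrfProof: string,
                          stake: bigint, totalStake: bigint): boolean {
  const input = epochRandomness + ":" + slot.toString();
  return vrfVerify(pk, input, vrfOut, vrfProof)
      && vrfOut < threshold(stake, totalStake);
}
\end{verbatim}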
The Algorand private sortition mechanism can be used to elect a committee large enough to ensure honest supermajority (a property required for Algorand's security). This committee, whose members can be placed in increasing order of their public keys to ensure determinism, can then be used in place of our sequence of public keys to sign off the results of the next block. Even though our handover tree now becomes slightly larger, with its number of leaves equal to the chain length $|\mathcal{C}|$, our protocol is still $\mathcal{O}(\log|\mathcal{C}|)$. \section*{Acknowledgment} The authors thank Kostis Karantias for the helpful discussions on bisection games. The work of JN was conducted in part while at Paradigm. JN is supported by the Protocol Labs PhD Fellowship. ENT is supported by the Stanford Center for Blockchain Research. The work of DZ was supported in part by funding from Harmony. \section{Security of Ethereum Light Clients} \label{sec:sync-committee-security} The following assumptions ensure the security of the optimistic light client\xspace and superlight client on PoS Ethereum: \begin{enumerate} \item Honest Ethereum validators constitute at least a $\frac{2}{3}+\epsilon$ fraction of the validator set at all times. \item The sync committee for each period is sampled uniformly at random from the validator set. \item The underlying PoS consensus protocol satisfies security. \item The $\mathsf{attested\_header}$ of a beacon block containing a $\mathsf{finalized\_header}$ is signed by a sync committee member \emph{only if} the $\mathsf{finalized\_header}$ is the header of a Casper FFG finalized PoS block in the view of the sync committee member. \item Honest block proposers include the latest Casper FFG finalized block in their view as the $\mathsf{finalized\_header}$ of their proposal blocks. \end{enumerate} Assumptions (1) and (2) ensure that the honest sync committee members constitute a supermajority of the sync committee at all periods. Assumption (4) ensures that any header obtained by a light client belongs to a Casper FFG finalized block, whereas assumption (5) ensures that, upon being finalized, these blocks are soon adopted by the light clients through the light client updates. Together with (3), these assumptions and Theorem~\ref{thm:security} imply the security of our optimistic light client\xspace and superlight client constructions for PoS Ethereum per Definition~\ref{def:state-security}. \myparagraph{Security under Adversarial Network Conditions.} Due to network delays or temporary adversarial majorities, there might be extended periods during which the light client does not receive any updates. In this case, if the client observes that $\mathsf{UPDATE\_TIMEOUT}$ slots have passed since the slot of the last $\mathsf{finalized\_header}$ in its view, it can perform a \emph{force update}. Prior to the force update, the client replaces the $\mathsf{finalized\_header}$ within the best valid light client update in its view with the $\mathsf{attested\_header}$ of the same update. Note that the $\mathsf{finalized\_header}$ of the best valid update must have had a smaller slot than the $\mathsf{finalized\_header}$ in the client's view, as it could not prompt the client to update its view during the last $\mathsf{UPDATE\_TIMEOUT}$ slots.
Hence, treating the $\mathsf{attested\_header}$ within the best valid update, which is by definition from a higher slot, as a $\mathsf{finalized\_header}$ can enable the client to adopt it as the latest $\mathsf{finalized\_header}$ block, and facilitate the client's progression into a later sync committee period. The current Ethereum specification~\cite{minimal_light_client} also recommends using other use-case-dependent heuristics for updates, in lieu of checking signatures, if the light client seems stalled. However, heuristics such as swapping the attested and finalized headers as described above might cause the light client to adopt block headers that are not finalized by Casper FFG. Hence, in this work, we assume that the underlying consensus protocol is not subject to disruptions like network delays, and focus on the regular update mechanism described in Section~\ref{sec.pos-eth}. \section{Proofs}\label{sec.proofs} \begin{proof}[Proof Sketch for Theorem~\ref{thm:succinctness}] Let $N\in\Theta(r)$ be the number of epochs at round $r$. When the handover trees have $N$ leaves, there can be at most $\log{N}\in\Theta(\log{r})$ steps of interactivity during the bisection game. In case an adversarial prover attempts to continue beyond $\log N$ steps of interactivity, the verifier aborts the interaction early, as the verifier expects to receive sync committees after $\log{N}$ queries, and the number $N$ is known to the verifier. At each step of the bisection game until the sync committees are revealed, the verifier receives two children (two constant-size hash values) of the queried node from both provers. At the final step, the verifier receives the sync committees $S^{j}$ and $S^{*,j}$ from the provers at the first point of disagreement $j$, and the sync committees $S^{j-1}$ and $S^{*,j-1}$ at the preceding leaf, along with their Merkle proofs. As each committee consists of a constant number $m$ of constant-size public keys, and each Merkle proof contains $\log{N}$ constant-size hash values, the total communication complexity of the bisection game becomes $\Theta(\log{N})=\Theta(\log(r))$. \end{proof} \begin{proof}[Proof Sketch for Theorem~\ref{thm:completeness}] To show that the honest prover wins the bisection game, we will step through the conditions checked by the verifier during the bisection game. By the synchrony assumption, the honest prover and the verifier both hold the same epoch number $N$. Since the size $N$ of the handover tree held by the honest prover matches the size $\ell = N$ expected by the honest verifier, the prover will successfully respond to all of the \emph{open} queries of the verifier. As the honest prover's handover tree is well-formed, upon receiving an \emph{open} query to reveal the children of a node $h_c$ on its tree, the left and right children $h_l$ and $h_r$ returned by the honest prover satisfy $h_c=H(h_l \,\|\, h_r)$. Subsequently, upon reaching a leaf, the honest prover will supply a sync committee, as expected by the verifier. By synchrony, the honest prover does not time out. The honest prover replies to all queries by the honest verifier, and the replies are always syntactically valid. Suppose the first point of disagreement between the leaves of the honest and the adversarial prover is identified at some index $j$. If $j=0$, the honest prover returns $S^0$, which is validated by the verifier as the correct sync committee stipulated by the genesis state $st_0$.
If $j>0$, the honest prover reveals the sync committee $S^j$ at leaf $j$, the committee $S^{j-1}$ at leaf $j-1$, and the Merkle inclusion proof for $S^{j-1}$, which is validated by the verifier with respect to the root. By the well-formedness of the honest prover's Merkle tree, the prover holds a handover proof $\Sigma^j$ that contains over $m/2$ signatures on $(j,S^j)$ by the committee members within $S^{j-1}$. The honest prover sends this valid handover proof to the verifier. Consequently, the honest prover passes all of the verifier's checks, and wins the bisection game. \end{proof} Let $\textsc{Verify}$ be the verification function for Merkle proofs. It takes a proof $\pi$, a Merkle root $\ensuremath{\left<\mathcal{T}\right>}$, the size of the tree $\ell$, an index $0 \leq i < \ell$ for the leaf, and the leaf $v$ itself. It outputs $1$ if $\pi$ is valid and $0$ otherwise. We assume that well-formed Merkle trees built with a collision-resistant hash function satisfy the following collision-resistance property: \begin{proposition}[Merkle Security~\cite{lazylight}] \label{prop:adv-merkle} Let $H^s$ be a collision-resistant hash function used in the binary Merkle trees. For all PPT $\mathcal{A}$: $ \Pr[(v,D,\pi,i) \gets \mathcal{A}(1^\lambda): \ensuremath{\left<\mathcal{T}\right>} = \textsc{MakeMT}(D).\mathrm{root} \land D[i] \neq v \land \textsc{Verify}(\pi, \ensuremath{\left<\mathcal{T}\right>}, |D|, i, v) = 1] \leq \text{negl}(\lambda) $. \end{proposition} The following lemma shows that the sync committees at the first point of disagreement identified by the verifier are different, and the committees at the previous leaf are the same, with overwhelming probability. \begin{lemma}[Bisection Pinpointing] \label{lem:bisection.pinpointing} Let $H^s$ be a collision-resistant hash function. Consider the following game among an honest prover $P$, a verifier $V$ and an adversarial prover $P^*$: The prover $P$ receives an array $D$ of size $N$ from $P^*$, and calculates the corresponding Merkle tree $\mathcal{T}$ with root $\ensuremath{\left<\mathcal{T}\right>}$. Then, $V$ mediates a bisection game between $P^*$ claiming root $\ensuremath{\left<\mathcal{T}\right>}^*$ and $P$ with $\ensuremath{\left<\mathcal{T}\right>}$. Finally, $V$ outputs $(1, D^*[j-1], D^*[j])$ if $P$ wins the bisection game; otherwise, it outputs $(0, \bot, \bot)$. Here, $D^*[j-1]$ and $D^*[j]$ are the two entries revealed by $P^*$ for the consecutive indices $j-1$ and $j$ during the bisection game. ($D^*[-1]$ is defined as $\bot$ if $j=0$.) Then, for all PPT adversarial provers $\mathcal{A}$, $\Pr[D \gets \mathcal{A}(1^\lambda); (1, D^*[j-1], D^*[j]) \gets (V(|D|) \leftrightarrow (P(D), \mathcal{A})) \land (D^*[j-1] \neq D[j-1] \lor D^*[j] = D[j])] \leq \mathrm{negl}(\lambda)$. \end{lemma} The above lemma resembles~\cite[Lemma 4]{lazylight}. A proof sketch is presented below. \begin{proof}[Proof Sketch for Lemma~\ref{lem:bisection.pinpointing}] The verifier starts the bisection game by asking the provers to reveal the children of the roots $\ensuremath{\left<\mathcal{T}\right>}$ and $\ensuremath{\left<\mathcal{T}\right>}^*$ of the respective handover trees, where $\ensuremath{\left<\mathcal{T}\right>} \neq \ensuremath{\left<\mathcal{T}\right>}^*$. Subsequently, at every step of the bisection game, the verifier asks each prover to reveal the two children of a previously revealed node, where the queried nodes have the same position but different values in the respective trees.
Hence, for the index $j$ identified by the verifier as the first point of disagreement, it holds that $D^*[j] \neq D[j]$. Suppose $j>0$. Then, there must exist a step in the bisection game where the verifier asks the provers to open the right children of the previously queried nodes. Concretely, there exists a node $\tilde{h}_c$ on $\mathcal{T}$, queried by the verifier, and a node $\tilde{h}^*_c$, alleged by $P^*$ to be at the same position as $\tilde{h}_c$, such that for the two children $\tilde{h}_l$ and $\tilde{h}_r$ of $\tilde{h}_c$ and the two children $\tilde{h}^*_l$ and $\tilde{h}^*_r$ of $\tilde{h}^*_c$ revealed to the verifier, the following holds: $\tilde{h}^*_l = \tilde{h}_l$ and $\tilde{h}^*_r \neq \tilde{h}_r$. Consider the last such nodes $\tilde{h}_c$ and $\tilde{h}^*_c$, after which the verifier asks the provers to open only the left children of the subsequent nodes. In this case, the leaves $D[j-1]$ and $D^*[j-1]$ lie under the node $\tilde{h}^*_l = \tilde{h}_l = h$. Finally, consider the Merkle proofs revealed for $D[j-1]$ and $D^*[j-1]$ with respect to $\ensuremath{\left<\mathcal{T}\right>}$ and $\ensuremath{\left<\mathcal{T}\right>}^*$ respectively. Consider the sequences of hashes within the Merkle proofs that lie under the node $h$. These sequences constitute valid Merkle proofs for $D[j-1]$ and $D^*[j-1]$ with respect to $h$. By Proposition~\ref{prop:adv-merkle}, $P^*$ cannot create two Merkle proofs for two different leaves at the same position such that both proofs verify with respect to the same root, except with negligible probability. Hence, $D^*[j-1]=D[j-1]$ with overwhelming probability. \end{proof} \begin{proof}[Proof Sketch for Theorem~\ref{thm:soundness}] Suppose the first point of disagreement between the leaves of the honest and the adversarial prover is identified at some index $j$. If $j=0$, by Lemma~\ref{lem:bisection.pinpointing}, the adversarial prover $P^*$ returns some committee $S^{*,0}$, which is different from $S^0$, the committee at the first leaf of the honest prover $P$. As the honest prover's tree is well-formed, $S^0$ is the sync committee within the genesis state $st_0$. Thus, $P^*$ loses the bisection game as $S^{*,0} \neq S^0$. If $j>0$, by Lemma~\ref{lem:bisection.pinpointing}, for the sync committees $S^j$ and $S^{*,j}$ returned by $P$ and $P^*$ for the index $j$, it holds that $S^j \neq S^{*,j}$, and for the sync committees $S^{j-1}$ and $S^{*,j-1}$ returned for the index $j-1$, it holds that $S^{j-1} = S^{*,j-1}$, with overwhelming probability. By the well-formedness of $P$'s handover tree, $P$ provides a handover proof $\Sigma^j$ that contains over $m/2$ signatures on $(j,S^j)$ by the committee members within $S^{j-1}$. Here, $S^{j-1}$ is the committee assigned to epoch $j-1$. During epoch $j-1$, honest committee members constitute over $m/2$ of the members within $S^{j-1}$, and create only a \emph{single} handover signature. After epoch $j-1$ ends, the members of the committee $S^{j-1}$ can no longer use their keys to create handover signatures. The party $P$ has sent over $m/2$ signatures on $(j,S^j)$ by the committee members within $S^{j-1}$. With overwhelming probability, the adversary $\mathcal{A}$ cannot provide over $m/2$ signatures on $(j,S^{*,j}) \neq (j,S^j)$ by the committee members in $S^{j-1}$, by honest majority (as at least one honest member of $S^{j-1}$ would need to sign two different messages) and by existential unforgeability.
This implies that $P^*$ cannot provide a valid handover proof from the committee $S^{j-1}$ to $S^{*,j}$, except with negligible probability. Consequently, $P^*$ loses the bisection game with overwhelming probability, as the verifier will catch the invalid signature during the signature checking step. \end{proof} \begin{proof}[Proof Sketch for Theorem~\ref{thm:tournament-runtime}] Consider a tournament started at round $r$ with $|\mathcal{P}|$ provers, one of which is honest. At each step of the tournament, the verifier facilitates a bisection game between two provers with different state commitments. (Honest provers hold the same state commitment.) At the end of the game, at least one prover is designated as a loser and eliminated from the set of provers. The tournament continues until all remaining provers hold the same state commitment. Hence, it lasts at most $|\mathcal{P}|-1$ steps. By Theorem~\ref{thm:succinctness}, each bisection game at round $r$ ends in $O(\log(r))$ steps of interactivity, and has a total communication complexity of $O(\log(r))$. Consequently, the tournament consists of $O(|\mathcal{P}|\log(r))$ steps of interactivity, and has a total communication complexity of $O(|\mathcal{P}|\log(r))$. \end{proof} \begin{proof}[Proof Sketch for Theorem~\ref{thm:security}] Consider a tournament step that involves an honest prover $P$ and an adversarial prover $P^*$ that have provided different state commitments $\ensuremath{\left<\mathsf{st}\right>}$ and $\ensuremath{\left<\mathsf{st}\right>}^*$ respectively. Let $N$ denote the number of epochs at round $r$, and $S^{N-1}$ denote the committee assigned to epoch $N-1$. Define $n$ as the number of Merkle trees within the MMRs of the honest provers at epoch $N$. Let $\ensuremath{\left<\mathcal{T}\right>}^*_i$ and $\ensuremath{\left<\mathcal{T}\right>}_i$, $i \in [n]$, denote the sequence of peaks revealed by $P^*$ and $P$ to the verifier before the bisection game. By definition, $P$ returns $S^{N-1}$ as the latest sync committee in its view, and let $S^{*,N-1}$ denote the latest sync committee alleged by $P^*$. At the beginning of epoch $N-1$, honest committee members constitute over $m/2$ of the members within $S^{N-1}$, and sign the same state commitment $\ensuremath{\left<\mathsf{st}\right>}$ returned by $P$. The party $P$ has sent over $m/2$ signatures on $\ensuremath{\left<\mathsf{st}\right>}$ by the committee members within $S^{N-1}$. As in the proof of Theorem~\ref{thm:soundness}, due to honest majority and existential unforgeability, $\mathcal{A}$ cannot produce more than $m/2$ signatures on a different commitment $\ensuremath{\left<\mathsf{st}\right>}' \neq \ensuremath{\left<\mathsf{st}\right>}$ by the committee members in $S^{N-1}$, except with negligible probability. Hence, if $P^*$ provides over $m/2$ signatures on $\ensuremath{\left<\mathsf{st}\right>}^*$ by $S^{*,N-1}$, it must hold that $S^{*,N-1} \neq S^{N-1}$. Moreover, if the Merkle proof provided for $S^{*,N-1}$ verifies against $\ensuremath{\left<\mathcal{T}\right>}^*_n$, as $S^{*,N-1} \neq S^{N-1}$, by Proposition~\ref{prop:adv-merkle}, it must hold that $\ensuremath{\left<\mathcal{T}\right>}^*_n \neq \ensuremath{\left<\mathcal{T}\right>}_n$ with overwhelming probability (due to the hash function being collision resistant). 
Hence, there exists an index $d \in [n]$ such that $\ensuremath{\left<\mathcal{T}\right>}^*_d \neq \ensuremath{\left<\mathcal{T}\right>}_d$ and $\ensuremath{\left<\mathcal{T}\right>}^*_{i} = \ensuremath{\left<\mathcal{T}\right>}_{i}$ for all $i \in [n]$, $i<d$. In this case, the verifier mediates a bisection game between $P$ and $P^*$ on the two alleged trees with roots $\ensuremath{\left<\mathcal{T}\right>}^*_d$ and $\ensuremath{\left<\mathcal{T}\right>}_d$. By Theorem~\ref{thm:completeness}, $P$ wins the game, and by Theorem~\ref{thm:soundness}, $P^*$ loses the game with overwhelming probability, so $P^*$ is eliminated. At each step of the tournament, at least one prover is eliminated, and the tournament continues until all remaining provers hold the same state commitment. By assumption, there is at least one honest prover $P$. This prover emerges victorious from every tournament step against other provers with a different state commitment, except with negligible probability. Consequently, with overwhelming probability, $P$ remains in the tournament until all remaining provers hold the same state commitment $\ensuremath{\left<\mathsf{st}\right>}$ as $P$. Let $\ensuremath{\mathbb{L}}$ be the ledger held by $P$ at round $r_0$ corresponding to the beginning of the epoch of round $r$. By definition, $r_0 \leq r$ and $r-r_0 \leq C$ for some constant epoch length $C$. By the safety of the PoS protocol, for any honest parties $P_1, P_2$ and rounds $r_1 \geq r_2$: $\ensuremath{\mathbb{L}}^{P_2}_{r_2} \preccurlyeq \ensuremath{\mathbb{L}}^{P_1}_{r_1}$. Thus, for any honest party $P'$ and rounds $r' \geq r \geq r_0$, it holds that $\ensuremath{\mathbb{L}} \preccurlyeq \ensuremath{\mathbb{L}}^{P'}_{r'}$. Similarly, for any honest party $P'$, it holds that $\ensuremath{\mathbb{L}}^{P'}_{r_0-1} \preccurlyeq \ensuremath{\mathbb{L}}$. Consequently, there exists a latency parameter $\nu=K$, and a ledger $\ensuremath{\mathbb{L}}$ such that $\ensuremath{\left<\mathsf{st}\right>} = \delta^*(st_0, \ensuremath{\mathbb{L}})$, and $\ensuremath{\mathbb{L}}$ satisfies the following properties: \begin{itemize} \item \textbf{Safety:} For all rounds $r' \geq r + \nu$: $\ensuremath{\mathbb{L}} \preccurlyeq \ensuremath{\mathbb{L}}^{\cup}_{r'}$. \item \textbf{Liveness:} For all rounds $r' \leq r - \nu$: $\ensuremath{\mathbb{L}}^{\cap}_{r'} \preccurlyeq \ensuremath{\mathbb{L}}$. \end{itemize} Thus, $\ensuremath{\left<\mathsf{st}\right>}$ satisfies state security. As the verifier accepts $\ensuremath{\left<\mathsf{st}\right>}$ as the correct commitment at the end of the tournament with overwhelming probability, the commitment obtained by the verifier at the end of the tournament satisfies state security with overwhelming probability in $\lambda$. \end{proof} \section{Discussion and Future Work} \myparagraph{Interactivity.} The interactivity in our protocol is undesirable. Contrary to linear bootstrapping protocols, in which the practical bottleneck comes down to the bandwidth needed to download all the data, in our protocol, because the communication complexity we have achieved is so low, the practical bottleneck comes down to the interactivity. This interactivity is inherent to our construction and is difficult to remove. In contrast to standard complexity proof protocols (such as zero-knowledge proofs), our interactivity cannot be removed using the Fiat--Shamir heuristic~\cite{fiatshamir}. The reason is that we require two provers to challenge each other under the watchful eyes of the verifier.
Such behavior cannot be emulated by the random sampling that Fiat--Shamir would offer, in which a \emph{single} prover and \emph{single} verifier interaction is emulated. However, even though the protocol has inherent logarithmic interactivity, in practice the actual number of interaction rounds can be reduced with a couple of techniques: \begin{enumerate} \item When the attacker is challenging the defender, instead of the attacker asking the defender to \emph{open left} or \emph{open right}, the attacker can accompany the request with his own version of what nodes he expects to see on the \emph{left} and on the \emph{right}~\cite{inside-arbitrum}. That way, the defender can immediately be asked to open the next level at whichever child is mismatched. This reduces the number of rounds in the interaction by a factor of two. But this technique cannot be applied multiple times (preemptively opening the tree many subsequent levels deep), because it quickly leads to an exponential blowup of communication cost with meager savings in interactivity. \item In case both trees have the same claimed size, two bisection games are required, but these can run in parallel: When one party responds to the challenge of the other, he can accompany the challenge response with a different challenge of his own. The paths of the tree that are opened in these parallel games may be completely different, and there is no interaction between the two games. This optimization has the effect that, even if the two trees have the same size, the number of interactions will be the same as if only one bisection game were played. \item The base of the logarithm can be increased from $2$ to a larger number. This is a trade-off: the larger the base of the logarithm, the fewer rounds of interactivity are required, but at the same time, more data need to be sent in each round of interaction. The base of the logarithm does not have to be preset, and can be specified by the verifier depending on factors such as the network connection performance, to balance latency (high latency means we want less interactivity) and bandwidth (low bandwidth means we want more interactivity but small amounts of data per round of interaction). \item The tournament between multiple provers can involve bisection games played in parallel. If the verifier speaks to $8$ provers, once every prover commits to his tree root, the verifier can have the provers compete with each other in $4$ bisection games running simultaneously over the network, to reduce latency. The $4$ remaining provers can then play against each other in $2$ bisection games running in parallel, and so forth. Although this does not decrease the number of interactions, the actual network delay will be significantly reduced as compared to challenging the parties one-by-one. \end{enumerate} Despite the above optimizations, the asymptotic interaction remains $\mathcal{O}(\log n)$. The problem of \emph{Non-Interactive} Proofs of Proof-of-Stake (NIPoPoS), PoPoS that can run in just a single interaction, is left for future work. \myparagraph{Trusted Setup.} Our protocol was developed in the standard model and does not require a trusted setup. The underlying proof-of-stake protocol \emph{does} have some trusted setup assumption: The genesis epoch contains the public keys of a committee who, at the time of the epoch, were assumed to have an honest majority. The fact that our protocol does not leverage that trust to build zero-knowledge or other constructions has certain advantages.
We only use hashes and signatures. This makes the protocol easy to understand and straightforward to implement, with a limited attack surface, which is a great advantage in security-critical protocols. Moreover, in case the trusted setup assumption is violated, our protocol can readily provide evidence of what exactly happened, because it does not rely on that assumption: at the exact point where two provers disagree in the bisection game, the evidence presented illustrates two different epochs at the same index accompanied by their different randomnesses, a situation that should not occur. In contrast, a zero-knowledge proof in case of trusted setup failure can lead to decisions in which foul play may be undetectable, and the exact point and conditions of the first failure are certainly difficult to pinpoint. Additionally, the simplicity of our construction makes it possible to implement very efficient provers and verifiers in practice. Despite the slightly worse asymptotics, we expect our construction to perform better in concrete terms than zero-knowledge proof systems that feature large constants in their complexities. \myparagraph{Proof-of-Work extensions.} Our tree-based protocol cannot be readily extended to proof-of-work. To see why, consider a ``chain tree'' similar to our handover tree, but storing the proof-of-work chain in its leaves. One could na\"ively imagine that two provers can run a similar bisection protocol to find the first difference in this tree. But note that, contrary to proof-of-stake epochs, there can be multiple trees that are \emph{admissible} here, and they are not prefixes of one another. For example, if a mining adversary chooses to mine a secret chain starting somewhere in the middle of the honest chain, she will end up with a ``chain tree'' whose first difference with the honest tree will be the first secret block. At that point, the honest verifier has nothing to see; both trees look equally valid when the leaves are revealed. Therefore, this scheme cannot be readily applied to \emph{chains}, but must remain constructed on top of \emph{epochs}, as we chose to do. The closest related proof-of-work construction is FlyClient~\cite{flyclient}, in which leaves are opened at random until foul play is detected, but no bisection games are played.
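To make the bisection mechanics concrete, the following is a minimal, self-contained Python sketch of the verifier's descent over two provers' trees. It illustrates the mechanism only and is not the full protocol: the \texttt{Prover} class is a hypothetical stand-in for a party answering \emph{open} queries, leaf counts are assumed to be powers of two, and the final committee, Merkle-proof, and handover-signature checks are omitted.
\begin{verbatim}
import hashlib

def H(b):
    return hashlib.sha256(b).digest()

class Prover:
    """Holds a full binary Merkle tree and answers 'open' queries."""
    def __init__(self, leaves):
        self.levels = [[H(x) for x in leaves]]  # level 0: hashed leaves
        while len(self.levels[-1]) > 1:
            prev = self.levels[-1]
            self.levels.append([H(prev[i] + prev[i + 1])
                                for i in range(0, len(prev), 2)])

    def root(self):
        return self.levels[-1][0]

    def open(self, depth, index):
        """Reveal the two children of the node at `depth` below the root."""
        kids = self.levels[len(self.levels) - 2 - depth]
        return kids[2 * index], kids[2 * index + 1]

def bisection_game(P, Pstar, n_leaves):
    """Verifier's walk to the first leaf at which the two trees differ."""
    depth, index = 0, 0
    cur, cur_star = P.root(), Pstar.root()
    assert cur != cur_star
    while (1 << depth) < n_leaves:
        l, r = P.open(depth, index)
        ls, rs = Pstar.open(depth, index)
        # each prover must stay consistent with its previously revealed node
        assert H(l + r) == cur and H(ls + rs) == cur_star
        if l != ls:               # disagreement lies in the left subtree
            cur, cur_star, index = l, ls, 2 * index
        else:                     # left children agree, so descend right
            cur, cur_star, index = r, rs, 2 * index + 1
        depth += 1
    return index                  # first point of disagreement j

honest = Prover([b"S0", b"S1", b"S2", b"S3"])
cheater = Prover([b"S0", b"S1", b"X2", b"X3"])  # consistent but wrong leaves
print(bisection_game(honest, cheater, 4))       # -> 2
\end{verbatim}
Run on four leaves differing first at index $2$, the walk returns $j=2$ after $\log_2 4 = 2$ open queries, in line with the $\Theta(\log N)$ interactivity of Theorem~\ref{thm:succinctness}.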
{ "timestamp": "2022-09-20T02:22:08", "yymm": "2209", "arxiv_id": "2209.08673", "language": "en", "url": "https://arxiv.org/abs/2209.08673" }
\section{Introduction} Foldable quadrotors (FQrs) have created a paradigm shift in the design of multirotor aerial vehicles by widening the scope of applications such as flying through small openings and cluttered spaces \cite{PZ21}. While there is ample research demonstrating the mechanical feasibility of the foldable designs \cite{PM+20, F+19}, limited literature exists on the analysis of the low-level flight controller and the effects of inflight configuration switching. The low-level flight control for an FQr is challenging due to the parameter-varying dynamics corresponding to its various configurations. Moreover, unaccounted modeling uncertainties, such as in inertia or aerodynamics, can further degrade the tracking performance. In this context, robust controllers have been explored to obtain the desired tracking performance by considering bounded model uncertainties \cite{F+21,Z+21,L+19,D+21}. The uncertainty bounds for these systems are generally held constant across the various configurations, and may lead to chattering in the control inputs \cite{SL91}. Alternatively, adaptive controllers that switch between various operating configurations have also been explored, which fall into the broad category of switched systems (Fig. \ref{fig:firstpic}). For example, researchers have synthesized different LQR controllers for different configurations and the corresponding changes in vehicle dynamics \cite{F+19,BT+21}. Other approaches employing switching model predictive controllers have also been developed for addressing the parameter variation during the change in configuration \cite{PMS21,PN20}. Similarly, in \cite{DB22}, the authors propose an integral backstepping adaptive controller to ensure stability of the tracking errors. However, in all these works, the disturbance encountered while switching has not been accounted for. Since the goal of the foldable chassis is to let the vehicle fly safely through narrow, constrained spaces, it is important to ensure that this transition-induced disturbance is not large enough to cause a crash. Therefore, the switching signal should also be planned as a function of the vehicle state, so that the transition occurs safely while adhering to geometric constraints. \begin{figure}[t] \centering \includegraphics[width = 0.48\textwidth]{cbd5.pdf}~ \caption{(a) Example of the foldable quadrotors \cite{Y+19} considered as switching systems in this work. (b) Illustrates a foldable quadrotor switched system consisting of four individual subsystems.} \vspace{-0.3in} \label{fig:firstpic} \end{figure} This letter attempts to fill a gap in the design of controllers for trajectory tracking of FQrs by modeling them as switched systems and thoroughly analyzing the stability characteristics. We consider three scenarios for our analysis: 1) the simplest case with precisely known inertia parameters; 2) the case with modeling uncertainties in inertia; and 3) the case with both modeling uncertainties and external disturbances. For the second case, we propose an adaptive controller, as shown in Fig. \ref{fig:firstpic}, using proprioceptive state measurements, and employ the theory of switched systems to derive the dwell-time requirements for attitude tracking stability. For the third case, we add a robust term to the adaptive controller to improve its robustness in the presence of disturbances.
Furthermore, we couple this proposed low-level attitude controller with a control-aware minimum-jerk trajectory planner and a proportional-derivative position controller to develop a complete control framework for foldable quadrotors that enforces the stability requirements and guarantees reference tracking. The remainder of the paper is organized as follows: Section \ref{sec:ps} describes the problem setup with the error definitions in Section \ref{sec:prelims}. Section \ref{sec:control} analyzes the tracking stability for the aforementioned three case scenarios with the proposed controller, while Section \ref{sec:planning} describes the control-aware trajectory generation. Finally, in Section \ref{sec:results}, simulation results are presented that validate the proposed control framework, and Section \ref{sec:conclusion} concludes the letter. \section{Problem Statement} \label{sec:ps} Let $x = [\boldsymbol{R},~\Omega]^T$ denote the attitude (rotation matrix) and angular velocity, respectively, of a foldable quadrotor. Now, consider the following family of systems $\dot{x} = f_p(x)$ corresponding to each configuration shown in Fig. \ref{fig:firstpic}(b) as: \begin{align}\label{eqn:ps} \centering \boldsymbol{\dot{R}} &= \boldsymbol{R}\hat{\Omega} \nonumber \\ \boldsymbol{H}_p\dot{\Omega} &- [\boldsymbol{H}_p\Omega]_\times \Omega = u \end{align} with $p \in \mathcal{P}$, where $\mathcal{P} \subseteq \mathbb{N}$ is the index set and is finite such that $\mathcal{P} = \{ 1,2,...,m\}$. To define a switched system generated by the above family, we introduce the \textit{switching signal} as a piece-wise constant function $\sigma : [0,\infty) \rightarrow \mathcal{P}$. It has a finite number of discontinuities and takes a constant value on every interval between two consecutive switching time instants. The role of $\sigma$ is to specify, at each time instant $t$, the index $\sigma(t) \in \mathcal{P}$ of the active subsystem model from the family (\ref{eqn:ps}) that the FQr currently follows. The \textit{hat map} $\hat{\cdot}: \mathbb{R}^3 \rightarrow \mathfrak{so}(3)$ is the skew-symmetric matrix operator defined by the condition that $\hat{x}y = x \times y ~\forall~ x,y \in \mathbb{R}^3$, and $[\cdot]_\times$ denotes the skew-symmetric cross-product matrix. We aim to design an adaptive attitude controller for a morphing quadrotor that switches between its various configurations, where its attitude dynamics follow (\ref{eqn:ps}) and $\boldsymbol{H}_p$ is not precisely known. \section{Error Definitions}\label{sec:prelims} This section describes the definitions of the attitude errors for the tracking problem. The readers are referred to \cite{tLee} for further details. Consider the error function, $\Phi$, and attitude errors $e_R$ and $e_\Omega$ defined as follows \begin{equation} \label{eqn:error_definitions1} \begin{aligned} \Phi(R,R_d) &= \frac{1}{2} \text{tr} \Big[ G(I - R_d^TR)\Big] \\ e_R(R,R_d) &= \frac{1}{2}(GR_d^TR - R^TR_dG)^\vee \\ e_\Omega(R,\Omega,R_d,\Omega_d) &= \Omega - R^TR_d\Omega_d \end{aligned} \end{equation} where $G \in \mathbb{R}^{3\times3}$ is given by $\text{diag}[g_1, ~ g_2, ~g_3]$ for distinct positive constants $g_1, g_2,g_3 \in \mathbb{R}$.
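For readers implementing these error signals, a direct numerical transcription of (\ref{eqn:error_definitions1}) is sketched below; this is an illustration only, and the function names and the sample gain matrix $G$ are ours rather than the letter's.
\begin{verbatim}
import numpy as np

def vee(M):
    """Inverse of the hat map: extract the vector of a skew-symmetric matrix."""
    return np.array([M[2, 1], M[0, 2], M[1, 0]])

def attitude_errors(R, Rd, W, Wd, G=np.diag([0.9, 1.0, 1.1])):
    """Error function Phi and attitude errors e_R, e_Omega."""
    Phi = 0.5 * np.trace(G @ (np.eye(3) - Rd.T @ R))
    eR = 0.5 * vee(G @ Rd.T @ R - R.T @ Rd @ G)
    eW = W - R.T @ Rd @ Wd
    return Phi, eR, eW
\end{verbatim}
At $R = R_d$ and $\Omega = \Omega_d$, all three quantities vanish, as expected for tracking errors.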
With these definitions, the following statements hold: \begin{enumerate} \item $\Phi$ is locally positive definite about $R = R_d$ \item the left trivialized derivative of $\Phi$ is given by $e_R$ \item the critical points of $\Phi$ where $e_R = 0$ are \newline $\{R_d\} \bigcup \{R_d \text{ exp}(\pi \hat{s})\} $ for $s \in \{e_1,e_2,e_3\}$ \item the bounds on $\Phi$ are given by \begin{equation}\label{eqn:error_function_bounds} b_1 \| e_R(R,R_d) \|^2 \leq \Phi(R,R_d) \leq b_2 \| e_R(R,R_d) \|^2 \end{equation} \end{enumerate} The time derivatives of the errors are given by \begin{equation*} \begin{aligned} &\frac{d}{dt}\Phi(R,R_d) = e_R \cdot e_\Omega \\ \dot{e}_R &= \frac{1}{2}(R_d^T R \hat{e}_{\Omega} + \hat{e}_{\Omega} R^T R_d)^\vee \equiv C(R_d^T,R)e_\Omega\\ \text{with } C(R_d^T,R) &= \frac{1}{2}(\text{tr}[R^T R_dG]I - R^T R_d G) \end{aligned} \end{equation*} It can also be verified that $C(R_d^T,R)$ is bounded by $ \| C(R_d^T,R) \| \leq \frac{1}{\sqrt{2}}\text{tr}[G] $. Furthermore, \begin{equation} \begin{aligned} \dot{e}_\Omega &= \dot{\Omega} + \hat{\Omega} R^T R_d \Omega_d - R^T R_d \dot{\Omega}_d = \dot{\Omega} - \alpha_D \end{aligned} \end{equation} where $\alpha_D = R^T R_d \dot{\Omega}_d - \hat{\Omega} R^T R_d \Omega_d $ physically represents the desired angular acceleration term. \section{Controller Design and Stability Analysis}\label{sec:control} For this letter, we consider the sub-level set $\mathcal{L} = \{ R_d,R \in \mathsf{SO(3)}|\Phi(R,R_d) < 2\}$ such that the initial attitude error satisfies \begin{equation} \Phi(R(0),R_d(0)) < 2 \end{equation} Note that this requires the initial attitude error to be less than $180^\circ$. Future extensions of this work will analyze complete low-level flight controller stability over the entire $\mathsf{SO(3)}$. In this section, we will first provide the methodology for stability analysis for attitude tracking of FQrs modeled as switched systems. We present the conditions for switching such that the overall system retains the tracking performance when the system model and parameter values are precisely known. Next, we propose an adaptive controller which estimates the unknown inertia online, and extend the stability analysis with the proposed controller. \subsection{Case with the precise model} For this case scenario, $\boldsymbol{H}_p$ is precisely known for each $p^{th}$ subsystem in (\ref{eqn:ps}). \subsubsection{Attitude tracking of individual subsystems} The attitude dynamics for an individual subsystem from the switched system of (\ref{eqn:ps}) can be rewritten in the form $\boldsymbol{H_p}\boldsymbol{\dot{\Omega}} - \boldsymbol{Y}_1 \boldsymbol{h}_p = \boldsymbol{u}$, where $\boldsymbol{Y}_1$ is given by (\ref{eqn:y1}) and $\boldsymbol{h}_p = [h_{p_{xx}}~h_{p_{yy}}~h_{p_{zz}}~h_{p_{xy}}~h_{p_{xz}}~h_{p_{yz}}]^T$ is the vector encompassing the unique elements of the moment of inertia tensor. \begin{equation}\label{eqn:y1} \begingroup \setlength\arraycolsep{1.5pt} \boldsymbol{Y}_1 = \begin{bmatrix} 0 & \omega_2\omega_3 & -\omega_2\omega_3 & \omega_1 \omega_3 & -\omega_1 \omega_2 & \omega_3^2 - \omega_2^2 \\ - \omega_1 \omega_3 & 0 & \omega_1 \omega_3 & -\omega_2 \omega_3 & \omega_1^2 - \omega_3^2 & \omega_2 \omega_1 \\ \omega_1 \omega_2 & -\omega_2 \omega_1 & 0 & \omega_2^2 -\omega_1^2 & \omega_3 \omega_2 & -\omega_3 \omega_2 \end{bmatrix} \endgroup \end{equation} The control moment in this case can be generated according to (\ref{eqn:control_tau_1}), as proposed in \cite{L+10}.
\begin{align} u &= -k_R \boldsymbol{e}_R - k_\Omega \boldsymbol{e}_\Omega - \boldsymbol{Y}\boldsymbol{h}_p \label{eqn:control_tau_1} \end{align} where $\boldsymbol{Y} = \boldsymbol{Y}_1 - \boldsymbol{Y}_2$ with $\boldsymbol{Y}_2$ defined as \begin{equation}\label{eqn:y2} \boldsymbol{Y}_2 = \begin{bmatrix} \alpha_{d1} & 0& 0& \alpha_{d2}& \alpha_{d3}& 0 \\ 0& \alpha_{d2}& 0& \alpha_{d1} &0 & \alpha_{d3} \\ 0& 0& \alpha_{d3}& 0& \alpha_{d1}& \alpha_{d2} \\ \end{bmatrix} \end{equation} such that $\boldsymbol{H}_p \alpha_d \triangleq \boldsymbol{Y}_2 \boldsymbol{h}_p$. \subsubsection*{Proposition 1} For positive constants $k_\Omega$ and $k_R$, if the positive constant $c$ is chosen such that \begin{equation} \begin{aligned}\label{eqn:c} c < \text{min} &\Bigg \{ \frac{\sqrt{2}k_\Omega}{\text{tr}[G]}, \frac{4\sqrt{2} k_R k_\Omega {(\Lambda_{min}^p})^2}{\sqrt{2} k_\Omega^2 \Lambda_{max}^p+ 4 k_R {(\Lambda_{min}^p)}^2 \text{tr}[G]},\\ & \sqrt{b_1k_R\Lambda_{min}^p}, \sqrt{b_2k_R \Lambda_{max}^p} \Bigg\} \end{aligned} \end{equation} then the attitude tracking dynamics of the individual subsystems, ($e_R,e_\Omega$), are exponentially stable in the sublevel set $\mathcal{L}$. Moreover, if each subsystem resides in a particular switched state for a minimum dwell-time given by $\tau_d$ in (\ref{eqn:tau_d}), the switched system in (\ref{eqn:ps}) is asymptotically stable in $\mathcal{L}$. Here, $\Lambda_{max}^{(\cdot)}$ and $\Lambda_{min}^{(\cdot)}$ refer to the maximum and minimum eigenvalues, respectively, of the quantity $(\cdot)$, and $W_1^p$ and $W_2^p$ are defined in (\ref{eqn:W2}) $\forall~ p \in \mathcal{P}$. \begin{equation}\label{eqn:tau_d} \tau_d > \frac{1}{2(\sum\beta_i)}\text{log}\frac{\prod \Lambda_{max}^{W_2^p} }{\prod \Lambda_{min}^{W_1^p}}, \quad p = 1,2,\dots,m \end{equation} \subsubsection*{Proof} Here we provide a brief sketch of the stability of the attitude tracking errors for an individual subsystem. For full details, the readers are referred to \cite{L+10}. Consider the individual subsystem's Lyapunov candidate $\forall~ p \in \mathcal{P}$ as \begin{equation} \begin{aligned} \mathcal{V}_p =~& \frac{1}{2}e_\Omega^T \boldsymbol{H_p}e_\Omega + k_R\Phi(R,R_d) + c \boldsymbol{e}_R \cdot \boldsymbol{e}_\Omega \end{aligned} \end{equation} In the sub-level set $\mathcal{L}$, we have \begin{equation} z_1^T W_1^p z_1 \leq \mathcal{V}_p \leq z_1^T W_2^p z_1, \end{equation} where $z_1= [\| \boldsymbol{e}_R \|~ \|\boldsymbol{e}_\Omega \|]^T$ and $W_1^p$, $W_2^p \in \mathbb{R}^{2\times2}$ are \begin{equation}\label{eqn:W2} W_1^p = \frac{1}{2}\begin{bmatrix} b_1k_R & -c\\ -c & \Lambda_{min}^p \\ \end{bmatrix}, W_2^p = \frac{1}{2}\begin{bmatrix} b_2k_R & c \\ c & \Lambda_{max}^p \\ \end{bmatrix} \end{equation} i.e.
\begin{equation}\label{eqn:Vbounds} \Lambda_{min}^{W_1^p} \| z_1 \|^2 \leq \mathcal{V}_p \leq \Lambda_{max}^{W_2^p} \| z_1 \|^2 \end{equation} Differentiating $\mathcal{V}_p$ along the solutions of the system and employing $\boldsymbol{H_p}\boldsymbol{\dot{\Omega}} = u + \boldsymbol{Y}_1 \boldsymbol{h}_p $, and $\boldsymbol{H_p} \alpha_D = \boldsymbol{Y}_2 \boldsymbol{h}_p$, \begin{align*} \mathcal{\dot{V}}_p =~& \boldsymbol{e}_\Omega^T \boldsymbol{H_p} \boldsymbol{\dot{e}}_\Omega + k_R \boldsymbol{e}_R \cdot \boldsymbol{e}_\Omega + c \boldsymbol{\dot{e}}_R \cdot \boldsymbol{e}_\Omega + c \boldsymbol{e}_R \cdot \boldsymbol{\dot{e}}_\Omega \end{align*} Now, substituting (\ref{eqn:y1}), (\ref{eqn:control_tau_1}) and (\ref{eqn:y2}), we obtain \begin{align*} =~& \boldsymbol{e}_\Omega ^T (-k_R \boldsymbol{e}_R - k_\Omega \boldsymbol{e}_\Omega - \boldsymbol{Y}\boldsymbol{h}_p + \boldsymbol{Y}_1 \boldsymbol{h}_p ) \\ &- \boldsymbol{e}_\Omega ^T \boldsymbol{Y}_2 \boldsymbol{h}_p + k_R \boldsymbol{e}_R \cdot \boldsymbol{e}_\Omega + c C(R_d^T,R)\boldsymbol{e}_\Omega \cdot \boldsymbol{e}_\Omega \\ &+ c \boldsymbol{e}_R^T (-k_R \boldsymbol{e}_R - k_\Omega \boldsymbol{e}_\Omega - \boldsymbol{Y}\boldsymbol{h}_p + \boldsymbol{Y}_1 \boldsymbol{h}_p)-c \boldsymbol{e}_R^T \boldsymbol{Y}_2 \boldsymbol{h}_p \\ =~& -k_\Omega \boldsymbol{e}_\Omega ^T \boldsymbol{e}_\Omega + c C(R_d^T,R)\boldsymbol{e}_\Omega \cdot \boldsymbol{e}_\Omega - c k_R \boldsymbol{e}_R \cdot \boldsymbol{H}_p^{-1}\boldsymbol{e}_R \\ &- ck_\Omega \boldsymbol{e}_R \cdot \boldsymbol{H}_p^{-1} \boldsymbol{e}_\Omega. \end{align*} Since $\|C(R_d^T,R)e_\Omega\| \leq \frac{1}{\sqrt{2}}\text{tr}[G] \|e_\Omega \|$, \begin{equation} \begin{aligned} \mathcal{\dot{V}}_p \leq~& -\Big( k_\Omega - \frac{c}{\sqrt{2}} \text{tr}[G]\Big ) \| \boldsymbol{e}_\Omega \|^2 - c k_R \boldsymbol{H}_p^{-1} \| \boldsymbol{e}_R \|^2 \\ &+ ck_\Omega \boldsymbol{H}_p^{-1} \| \boldsymbol{e}_R \| \| \boldsymbol{e}_\Omega\| \\ \leq~ &-z_1^T W_3^p z_1, \end{aligned} \end{equation} where $W_3^p$ is given by (\ref{eqn:W3}) \begin{equation}\label{eqn:W3} W_3^p = \begin{bmatrix} ck_R & -\frac{ck_\Omega}{2\Lambda_{min}^p} \\ -\frac{ck_\Omega}{2\Lambda_{min}^p} & k_\Omega - \frac{c}{\sqrt{2}}\text{tr}[G] \end{bmatrix} \end{equation} Therefore, \begin{equation}\label{eqn:VdotBounds} \dot{\mathcal{V}}_p \leq -\Lambda_{min}^{W_3^p} \| z_1\|^2 \end{equation} Let $\beta_p = \frac{\Lambda_{min}^{W_3^p}}{2\Lambda_{max}^{W_2^p}}$; then from (\ref{eqn:Vbounds}) and (\ref{eqn:VdotBounds}), we have \begin{equation}\label{eqn:Vexp} \dot{\mathcal{V}}_p \leq -2\beta_p \mathcal{V}_p \end{equation} Hence the tracking errors are exponentially stable for the individual subsystems. This implies that if $\sigma(t) = p$ for $t \in [t_0,t_0 + \tau_d)$, we have \begin{equation}\label{eqn:tau_d_exp} \mathcal{V}_p (z_1(t_0 + \tau_d)) \leq e^{-2\beta_p\tau_d}\mathcal{V}_p(z_1(t_0)) \end{equation} \subsubsection{Stability of the overall switched system} We will use multiple Lyapunov functions to prove the stability of the switched system. Consider the following Lemma 2: \subsubsection*{Lemma 2 (\cite{liberzon}, pp. 41--42)} \textit{ Consider a finite family of globally asymptotically stable systems, and let $\mathcal{V}_p$, $p \in \mathcal{P}$ be a family of corresponding radially unbounded Lyapunov functions.
Suppose that there exists a family of positive definite continuous functions $\mathcal{W}_p, p \in \mathcal{P}$ with the property that for every pair of switching times ($t_i,t_j)$, $i < j$, such that $\sigma(t_i) = \sigma(t_j)$ and $\sigma(t_k) \neq p$ for $t_i < t_k < t_j$, we have} \begin{equation} \mathcal{V}_p(x(t_j)) - \mathcal{V}_p(x(t_i)) \leq -\mathcal{W}_p (x(t_i)), \end{equation} \textit{then the switched system (\ref{eqn:ps}) is globally asymptotically stable.} \subsubsection*{Proof} Employing (\ref{eqn:tau_d_exp}), we can find a desired lower bound on the \textit{dwell-time}, which corresponds to the amount of time that the system should reside in subsystem $p$ to ensure that the overall tracking errors converge to zero. To elaborate, consider a system where $\mathcal{P} = \{ 1,2\}$ and $\sigma$ takes values of 1 on [$t_0, t_1$) and 2 on [$t_1, t_2$) such that $t_{i+1} - t_i \geq \tau_d, i = 0,1$. From (\ref{eqn:tau_d_exp}), the minimum dwell-time can be calculated using the theory of switched systems \cite{liberzon} as in (\ref{eqn:tau_d}), which guarantees that the switched system (\ref{eqn:ps}) is asymptotically stable in $\mathcal{L}$ by employing \textit{Lemma 2}. \subsubsection*{Remark 3} The constraint in (\ref{eqn:tau_d}) is conservative in the sense that if switching is not allowed for $\tau_d$ amount of time, it is impossible to react to changes in the environment, which can defeat the purpose of the foldable design. However, the trajectory planner can be made aware of the dwell-time and generate the reference trajectory accordingly. This ensures that when the geometry-constrained switching is performed, the attitude error transients have already stabilized. This is further discussed in Section \ref{sec:planning}. \subsection{Case with model uncertainties and no external disturbances} \label{sec:jhat} The dwell-time derived in (\ref{eqn:tau_d}) ensures that the switched system is stable when the model (e.g., moment of inertia) is known. However, this is rarely the case in real-world scenarios. To handle modeling errors, we will estimate the moment of inertia online for each subsystem. There have been many approaches to estimate the moment of inertia online \cite{tLee}; however, only recently have researchers tried to ensure physical consistency of the inertia estimates \cite{L+18,LS21}. For this work, we aim to ensure physical consistency during adaptation of the inertia parameters and hence adopt the methodology presented in \cite{L+18}. \subsubsection{Attitude tracking for individual subsystems} For the $p^{th}$ subsystem, let us assume that the control torques are now generated according to \begin{align} u &= -k_R \boldsymbol{e}_R - k_\Omega \boldsymbol{e}_\Omega - \boldsymbol{Y}\boldsymbol{\hat{h}}_p \label{eqn:control_torqueJhat}, \\ \boldsymbol{\dot{\hat{h}}}_p &= - (\nabla^2\psi(\boldsymbol{\hat{h}}_p))^{-1} \boldsymbol{Y}^T \boldsymbol{e}_A, \label{eqn:updaterate_h} \end{align} where the inertia parameters are estimated based on the augmented error $\boldsymbol{e}_A = \boldsymbol{e}_\Omega + c \boldsymbol{e}_R$. Here, $\psi(\cdot)$ is the log-determinant function, which ensures that the estimates of the inertia parameters remain physically consistent, provided the initial guess is physically consistent. \subsubsection*{Assumption 4} The minimum eigenvalue $\Lambda_{min}^p$ and the maximum eigenvalue $\Lambda_{max}^p$ of the true inertia matrix $\boldsymbol{H}_p$ for the $p^{th}$ subsystem are known.
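For concreteness, a minimal numerical sketch of the adaptive law (\ref{eqn:control_torqueJhat})--(\ref{eqn:updaterate_h}) follows. One deliberate simplification is flagged: the log-determinant Hessian metric $(\nabla^2\psi(\hat{\boldsymbol{h}}_p))^{-1}$ of \cite{L+18} is replaced by a constant positive-definite gain $\Gamma$, i.e., a plain gradient-type update that drops the physical-consistency geometry; the values of $c$, $\Gamma$, and the step size below are placeholders.
\begin{verbatim}
import numpy as np

def Y1(w):
    """Regressor of eqn. (y1), evaluated at the angular velocity w."""
    w1, w2, w3 = w
    return np.array([
        [0.0,    w2*w3,  -w2*w3,  w1*w3, -w1*w2, w3**2 - w2**2],
        [-w1*w3, 0.0,     w1*w3, -w2*w3,  w1**2 - w3**2, w2*w1],
        [w1*w2, -w2*w1,   0.0,    w2**2 - w1**2, w3*w2, -w3*w2],
    ])

def Y2(a):
    """Regressor of eqn. (y2), evaluated at the angular acceleration a."""
    a1, a2, a3 = a
    return np.array([
        [a1,  0.0, 0.0, a2,  a3,  0.0],
        [0.0, a2,  0.0, a1,  0.0, a3],
        [0.0, 0.0, a3,  0.0, a1,  a2],
    ])

def adaptive_step(e_R, e_W, w, alpha_d, h_hat, kR, kW, c, Gamma, dt):
    Y = Y1(w) - Y2(alpha_d)
    u = -kR * e_R - kW * e_W - Y @ h_hat      # control torque
    e_A = e_W + c * e_R                       # augmented error
    h_hat = h_hat - dt * Gamma @ (Y.T @ e_A)  # simplified parameter update
    return u, h_hat

# one step, using the letter's gains k_R, k_Omega and placeholders otherwise
u, h = adaptive_step(np.zeros(3), np.array([0.1, 0.0, 0.0]),
                     np.array([0.1, 0.0, 0.0]), np.zeros(3),
                     0.02 * np.ones(6), kR=0.0424, kW=0.0296,
                     c=0.01, Gamma=1e-3 * np.eye(6), dt=1e-3)
\end{verbatim}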
\subsubsection*{Proposition 5} Suppose that Assumption 4 holds. For positive constants $k_\Omega$ and $k_R$ in (\ref{eqn:control_torqueJhat}), if the positive constant $c$ is chosen such that (\ref{eqn:c_hdot}) holds, the attitude tracking errors, ($e_R,e_\Omega$), for the individual subsystems converge to zero asymptotically. \begin{equation} \begin{aligned}\label{eqn:c_hdot} c < \text{min} &\Bigg \{\sqrt{\frac{2b_1k_R\Lambda_{min}^p}{(\Lambda_{max}^p)^2}}, \frac{\sqrt{2}k_\Omega}{\Lambda_{max}^p \text{tr}[G]}, \\ &\frac{4 k_R k_\Omega}{k_\Omega^2 + \frac{4}{\sqrt{2}} k_R\Lambda_{max}^p \text{tr}[G]}\Bigg\} \end{aligned} \end{equation} \subsubsection*{Proof} We will again proceed to first analyze the stability of the individual subsystems and then that of the switched system. Consider the Lyapunov candidate for an individual subsystem as \begin{equation}\label{eqn:lyapunov_p} \begin{aligned} \mathcal{V}_p =~& \frac{1}{2}e_\Omega^T \boldsymbol{H_p}e_\Omega + k_R\Phi(R,R_d) + c \boldsymbol{e}_R \cdot \boldsymbol{H_p} \boldsymbol{e}_\Omega + d_\psi (\boldsymbol{h}_p \| \hat{\boldsymbol{h}}_p) \end{aligned} \end{equation} where $ d_\psi (\boldsymbol{h}_p \| \hat{\boldsymbol{h}}_p)$ is the Bregman divergence operator \cite{L+18}: \begin{equation*} \begin{aligned} d_\psi (\boldsymbol{h}_p \| \hat{\boldsymbol{h}}_p) = \psi(\boldsymbol{h}_p) - \psi(\hat{\boldsymbol{h}}_p) - (\boldsymbol{h}_p - \hat{\boldsymbol{h}}_p)^T\nabla \psi(\hat{\boldsymbol{h}}_p) \end{aligned} \end{equation*} and the time-derivative of $d_\psi (\boldsymbol{h}_p \| \hat{\boldsymbol{h}}_p)$ is \begin{equation} \begin{aligned} \dot{d_\psi }(\cdot) & = (\hat{\boldsymbol{h}}_p - \boldsymbol{h}_p)^T\nabla^2 \psi(\hat{\boldsymbol{h}}_p)\dot{\hat{\boldsymbol{h}}}_p \end{aligned} \end{equation} As shown in \cite{L+18}, $d_\psi (\boldsymbol{h}_p \| \hat{\boldsymbol{h}}_p)$ can be taken as an approximation of the geodesic estimation error, with the properties required of a Lyapunov candidate. Also, from (\ref{eqn:error_function_bounds}) we have that $\mathcal{V}_p$ is lower-bounded by \begin{equation} z^T W_{11} z \leq \mathcal{V}_p \end{equation} where $z = [z_1^T, ~ z_2 ]^T = [\|e_R\|,~\|e_\Omega\|,~ d_\psi(\boldsymbol{h}_p\|\hat{\boldsymbol{h}}_p)]^T \in \mathbb{R}^3$ and $W_{11} \in \mathbb{R}^{3\times3}$ is given by \begin{equation} W_{11} = \begin{bmatrix} b_1 k_R & \frac{1}{2}c\Lambda_{max}^p & 0 \\ \frac{1}{2}c\Lambda_{max}^p & \frac{1}{2}\Lambda_{min}^p & 0 \\ 0 & 0 & 1 \end{bmatrix} \end{equation} Furthermore, we have \begin{equation} z_1^T W_{13}^p z_1 \leq \mathcal{V}_p \leq z_1^T W_{23}^p z_1 \end{equation} where $z_1= [\| \boldsymbol{e}_R \|,~ \|\boldsymbol{e}_\Omega \|]^T$ and $W_{13}^p$, $W_{23}^p \in \mathbb{R}^{2\times2}$ are given by \begin{equation*} W_{13}^p = \begin{bmatrix} b_1k_R & \frac{1}{2}c\Lambda_{max}^p\\ \frac{1}{2}c\Lambda_{max}^p & \frac{1}{2}\Lambda_{min}^p \\ \end{bmatrix}, W_{23}^p = \frac{1}{2}\begin{bmatrix} b_2k_R & \frac{1}{2}c\Lambda_{min}^p \\ \frac{1}{2}c\Lambda_{min}^p & \frac{1}{2}\Lambda_{max}^p \\ \end{bmatrix} \end{equation*} i.e.
\begin{equation}\label{eqn:Vphat_bounds} \Lambda_{min}^{W_{13}^p} \| z_1 \|^2 \leq \mathcal{V}_p \leq \Lambda_{max}^{W_{23}^p} \| z_1 \|^2 \end{equation} Differentiating $\mathcal{V}_p$ along the solutions of the system and employing (\ref{eqn:control_torqueJhat}), we obtain \begin{align*} \mathcal{\dot{V}}_p =& \boldsymbol{e}_\Omega^T \boldsymbol{H_p} \boldsymbol{\dot{e}}_\Omega + k_R \boldsymbol{e}_R \cdot \boldsymbol{e}_\Omega + c \boldsymbol{\dot{e}}_R \cdot \boldsymbol{H_p} \boldsymbol{e}_\Omega \\ &+ c \boldsymbol{e}_R \cdot \boldsymbol{H_p} \boldsymbol{\dot{e}}_\Omega + \dot{d_\psi }(\cdot) \\ =& -k_\Omega \boldsymbol{e}_\Omega ^T \boldsymbol{e}_\Omega + \boldsymbol{e}_\Omega ^T \boldsymbol{Y}(\boldsymbol{\hat{h}}_p - \boldsymbol{h}_p) + c C(R_d^T,R)\boldsymbol{e}_\Omega \cdot \boldsymbol{H_p} \boldsymbol{e}_\Omega \\ & - c k_R \boldsymbol{e}_R^T \boldsymbol{e}_R + c \boldsymbol{e}_R \cdot \boldsymbol{Y}(\boldsymbol{\hat{h}}_p - \boldsymbol{h}_p) -ck_\Omega \boldsymbol{e}_R \cdot \boldsymbol{e}_\Omega + \dot{d_\psi }(\cdot) \\ =& -k_\Omega \boldsymbol{e}_\Omega ^T \boldsymbol{e}_\Omega + c C(R_d^T,R)\boldsymbol{e}_\Omega \cdot \boldsymbol{H_p} \boldsymbol{e}_\Omega - c k_R \boldsymbol{e}_R^T \boldsymbol{e}_R \\ &+ \boldsymbol{e}_A \cdot \boldsymbol{Y}(\boldsymbol{\hat{h}}_p - \boldsymbol{h}_p) + \dot{d_\psi }(\cdot) \\ &- ck_\Omega \boldsymbol{e}_R \cdot \boldsymbol{e}_\Omega \end{align*} Substituting for the control law, $u$, and the parameter estimate law, $\boldsymbol{\dot{\hat{h}}}_p$, from (\ref{eqn:control_torqueJhat}) and (\ref{eqn:updaterate_h}), \begin{equation}\label{eqn:Vdot} \begin{aligned} \mathcal{\dot{V}}_p =~& -k_\Omega \boldsymbol{e}_\Omega ^T \boldsymbol{e}_\Omega - c k_R \boldsymbol{e}_R^T \boldsymbol{e}_R + c C(R_d^T,R)\boldsymbol{e}_\Omega \cdot \boldsymbol{H_p} \boldsymbol{e}_\Omega \\ &- ck_\Omega \boldsymbol{e}_R \cdot \boldsymbol{e}_\Omega \\ \leq~& -\Big( k_\Omega - \frac{c}{\sqrt{2}} \Lambda_{max}^p \text{tr}[G]\Big ) \| \boldsymbol{e}_\Omega \|^2 - c k_R \| \boldsymbol{e}_R \|^2 \\ &+ ck_\Omega \| \boldsymbol{e}_R \| \| \boldsymbol{e}_\Omega\| = -z_1^T W_{31}^p z_1 \end{aligned} \end{equation} where $W_{31}^p \in \mathbb{R}^{2\times2}$ is defined in (\ref{eqn:W31}). \begin{equation}\label{eqn:W31} W_{31}^p = \begin{bmatrix} ck_R & -\frac{ck_\Omega}{2} \\ -\frac{ck_\Omega}{2} & k_\Omega - \frac{c}{\sqrt{2}}\Lambda_{max}^p \text{tr}[G] \end{bmatrix} \end{equation} This implies that the errors $z_1 = [\|\boldsymbol{e}_R \|, ~ \| \boldsymbol{e}_\Omega\| ]^T$ asymptotically converge to zero. \subsubsection*{Remark 6} Although the tracking errors converge to their zero equilibrium, (\ref{eqn:W31}) does not ensure that the parameter errors converge. This is because of the absence of persistent excitation, which would be required for the parameter estimates to converge to their true values. However, the attitude tracking errors are still guaranteed to be stable and do not depend on the parameter estimation error. \subsubsection{Stability of the switched system} We will again use multiple Lyapunov functions to establish the stability of the attitude tracking dynamics with the proposed adaptive controller. \subsubsection*{Proposition 7}\label{sec:prop6} Consider the system (\ref{eqn:ps}) and suppose that Assumption 4 holds.
With the control generated according to (\ref{eqn:control_torqueJhat}) and (\ref{eqn:updaterate_h}), if the inertia parameter estimate $\hat{\boldsymbol{h}}_p$ for each subsystem is adaptively updated from its previous value and the switching is performed at a time $t_j \gg t_i$ such that (\ref{eqn:switch_condition}) holds, then the attitude tracking errors, ($e_{R},e_\Omega$), of the switched system converge to zero asymptotically. \begin{equation}\label{eqn:switch_condition} \| z_1(t_j) \|^2 \leq \Bigg ( \frac{\Lambda_{min}^{W_{13}^p}}{\Lambda_{max}^{W_{23}^p}} \Bigg ) \| z_1(t_i) \|^2 \end{equation} \subsubsection*{Proof} To analyze this case, consider a switched system generated by two dynamical systems such that $\mathcal{P} = \{1, 2\}$. Let $t_i < t_j$ be two switching times when $\sigma = 1$. Then, \begin{align} \mathcal{V}_1(t_i) = \Big [ & \frac{1}{2}e_\Omega^T \boldsymbol{H_p}e_\Omega + k_R\Phi(R,R_d) + \\ & c \boldsymbol{e}_R \cdot \boldsymbol{H_p} \boldsymbol{e}_\Omega + d_\psi (\boldsymbol{h}_p \| \hat{\boldsymbol{h}}_p) \Big ] _{|_{t = t_i}}\label{eqn:V1_ti} \\ \mathcal{V}_1(t_j) = \Big [ & \frac{1}{2}e_\Omega^T \boldsymbol{H_p}e_\Omega + k_R\Phi(R,R_d) + \\ & c \boldsymbol{e}_R \cdot \boldsymbol{H_p} \boldsymbol{e}_\Omega + d_\psi (\boldsymbol{h}_p \| \hat{\boldsymbol{h}}_p) \Big ]_{|_{ t = t_j}} \label{eqn:V1_tj} \end{align} By the hypothesis of \textit{Proposition 7}, the estimate $\hat{\boldsymbol{h}}_p$ is carried over between consecutive activations of a subsystem; hence, the term $d_\psi (\boldsymbol{h}_p \| \hat{\boldsymbol{h}}_p)$ takes the same value at the two time instants. Next, (\ref{eqn:Vphat_bounds}) provides the bounds on the first three terms of the Lyapunov candidate at the two time instants. Hence, if the switching time instant is chosen such that (\ref{eqn:switch_condition}) holds, the switched system is asymptotically stable by \textit{Lemma 2}. \begin{figure}[t] \centering \includegraphics[width = 0.4\textwidth]{lyapunov6} \caption{Lyapunov function of the attitude tracking error during configuration switching. $\tau_s$ and $\tau_d$ represent the attitude settling-time and desired dwell-time respectively.} \label{fig:lyapunov} \end{figure} \subsubsection*{Remark 8} Note that \textit{Proposition 7} enforces the minimum dwell-time ($\tau_d$) requirement for the switched system stability. As mentioned in Remark 3, the planner is made aware of the dwell-time such that the reference trajectory is generated to accommodate the dwell-time requirements, as described in Section \ref{sec:planning}. \subsubsection*{Remark 9} Since it is well known that adaptive controllers can become unstable even under slight disturbances, we modify the control law proposed in (\ref{eqn:control_torqueJhat}) and (\ref{eqn:updaterate_h}) to include a robust term, as discussed in Section \ref{sec:caseC}. \subsection{Case with external disturbances and model uncertainties}\label{sec:caseC} Finally, we discuss the case of modeling uncertainties coupled with external disturbances, and improve the robustness of the proposed adaptive controller.
The problem statement is modified from (\ref{eqn:ps}) to the following: \begin{align}\label{eqn:ps_dist} \centering \boldsymbol{\dot{R}} &= \boldsymbol{R}\hat{\Omega} \nonumber \\ \boldsymbol{H}_p\dot{\Omega} &- [\boldsymbol{H}_p\Omega]_\times \Omega = \zeta u + \Delta \end{align} where $\Delta \in \mathbb{R}^3$ represents the disturbances in the attitude dynamics, and $\zeta \in \mathbb{R}^3$ is the vector comprising $\zeta_i, i = 1,...,3$, representing multiplicative input uncertainty due to the aerodynamic effects from propeller proximity. \subsubsection*{Assumption 10} The disturbances in the attitude dynamics have known bounds, i.e., $\| \Delta \| \leq \delta_R$ for a positive constant $\delta_R$, and $\zeta_{i_{min}} \leq \zeta_i \leq \zeta_{i_{max}}$. \subsubsection*{Proposition 11} Suppose Assumptions 4 and 10 hold. Then, if the control torques are generated according to \begin{align} u &= \frac{1}{\sqrt{(\zeta_{i_{max}}\zeta_{i_{min}})}} \cdot \underbrace{(-k_R \boldsymbol{e}_R - k_\Omega \boldsymbol{e}_\Omega - \boldsymbol{Y}\boldsymbol{\hat{h}} + \mu)}_\text{$\hat{u}$} \label{eqn:control_torqueSMC}, \\ \boldsymbol{\dot{\hat{h}}} &= - (\nabla^2\psi(\boldsymbol{\hat{h}}))^{-1} \boldsymbol{Y}^T \boldsymbol{e}_A, \\ \mu &= -\frac{\delta_R^2 \boldsymbol{e}_A}{\delta_R \| \boldsymbol{e}_A \| + \epsilon}, \end{align} where $\epsilon$ is a small positive constant and $\hat{u}$ augments the control law (\ref{eqn:control_torqueJhat}) with the robust term $\mu$, the attitude tracking errors are uniformly bounded. \subsubsection*{Proof:} We can apply Barbalat's lemma in conjunction with the Lyapunov analysis to show that the tracking errors of the individual subsystems are uniformly bounded in $\mathcal{L}$. The proof is similar to that presented in \cite{tLee}. \begin{figure*}[t] \centering \subfloat[Tracking of angular velocity ($\Omega$:red solid, $\Omega_d$:blue dashed)]{\includegraphics[width = 0.48\textwidth]{p_omega_v3.pdf}} \subfloat[Errors in $\boldsymbol{e}_R$ ($e_R$:red solid, Reference:blue dashed)]{\includegraphics[width = 0.48\textwidth]{eRv3.pdf}}\\ \vspace{-0.1in} \subfloat[Inertia estimates (kg-m$^2$) (true: dotted, estimated: solid)]{\includegraphics[width = 0.48\textwidth]{Jv3.pdf}} \subfloat[Control effort ]{\includegraphics[width = 0.48\textwidth]{controlv3.pdf}} \caption{Performance of the proposed attitude controller when the vehicle switches between the two configurations shown in Fig. \ref{fig:firstpic}(b) at $t = 30$s from 1 to 2 (black dash-dotted vertical line) and at $t= 60$s from 2 to 1 (pink dotted vertical line) by following (\ref{eqn:switch_condition}). With the proposed adaptive controller, the attitude errors converge to the reference (shown by horizontal dotted lines).} \label{fig:adaptive_controller} \vspace{-0.1in} \end{figure*} \section{Control-Aware Minimum Jerk Trajectory}\label{sec:planning} \textit{Proposition 7} in Section \ref{sec:prop6}, together with (\ref{eqn:switch_condition}), implies that for the switched system to have asymptotic tracking stability, the minimum dwell-time before switching should be chosen such that the tracking error has decayed to a specified fraction of its initial value. This is shown by the blue line in Fig. \ref{fig:lyapunov}. However, this still does not quantify the bound on $\| \mathcal{V}_i(t) - \mathcal{V}_j(t) \|~ \forall i \neq j;~ i,j \in \mathcal{P}$, shown by the red line. \subsubsection*{Assumption 12} The upper bound on the estimation error $z_{2}^p(t)$ for the $p^{th}$ subsystem is known.
\subsubsection*{Assumption 13} The settling time corresponding to the maximum attitude error for the $p^{th}$ subsystem is known. \subsubsection*{Proposition 14} Suppose that Assumptions 4, 12 and 13 hold, $\Delta = [0,0,0]^T$ and $\zeta = 1$. If switching is performed at $t = \tau_s$ when $\| z_1^p(\tau_s) \| \leq \rho $, where $\tau_s$ denotes the settling-time for the attitude errors, $e_R$ and $e_\Omega$, and $\rho > 0$ denotes the region within which the errors remain, then the difference between the two Lyapunov functions at the same time instant (the jump in the Lyapunov value, shown by the red line in Fig. \ref{fig:lyapunov}) is bounded as in (\ref{eqn:VtBounds}) \begin{align}\label{eqn:VtBounds} \| \mathcal{V}_i(\tau_s) - \mathcal{V}_j(\tau_s) \| \leq ~& (\Lambda_{max}^{W_{21}^i} + \Lambda_{max}^{W_{21}^j})\rho \\ &+ \Lambda_{max}^{W_{21}^i}\|z_{2}^i(\tau_s) \| + \Lambda_{max}^{W_{21}^j}\| z_{2}^j(\tau_s) \| \nonumber \end{align} \subsubsection*{Proof} From (\ref{eqn:lyapunov_p}), we have: \begin{equation}\label{eqn:boundVp} \mathcal{V}_p \leq \Lambda_{max}^{W_{21}^p} \| z \|^2, \text{ with } W_{21}^p = \begin{bmatrix} b_2 k_R & \frac{1}{2}c\Lambda_{min}^p & 0 \\ \frac{1}{2}c\Lambda_{min}^p & \frac{1}{2}\Lambda_{max}^p & 0 \\ 0 & 0 & 1 \end{bmatrix} \end{equation} \textit{Proposition 14} then directly follows by applying the bound (\ref{eqn:boundVp}) to each subsystem. Since the position controller is a proportional-derivative law on position, waypoint-based planning for flying through passages is not ideal, as it results in high initial attitude errors, with $\| z_1^i(t) \| > \rho$ if the vehicle switches before the attitude errors' settling time. Alternatively, the minimum-jerk trajectory (MJT) planner can be successfully employed here to ensure $\| z_1^i(\tau) \| \leq \rho$ by imposing the desired velocity boundary conditions at the entrance of the passageway, where configuration switching is mandated by the geometric constraint. The time taken to reach this velocity should be set to at least $\tau_s$. By ensuring that the vehicle has attained this velocity, the attitude errors will be lower at the entrance of the passageway in the absence of external disturbances. Hence, this will lead to lower bounds on the tracking errors, as given by (\ref{eqn:VtBounds}). The MJT is generated according to \begin{equation} r^*(t) = \underset{r(t)}{\text{argmin}} \int_0^\tau \dddot{r}^2 ~dt \end{equation} with the following boundary conditions: \begin{equation} \centering \begin{aligned} r(0) = [0,0,0]^T,& ~\dot{r}(0) = [0,0,0]^T, ~\ddot{r}(0) = [0,0,0]^T \\ r(\tau) = r_{des},&~ \dot{r}(\tau) = \dot{r}_{des}, ~\ddot{r}(\tau) = [0,0,0]^T \end{aligned} \end{equation} where $r_{des}$ and $\dot{r}_{des}$ denote the coordinates of the entrance of the passageway and the desired velocity to fly through the passageway, respectively, and $\tau \geq \max\{\tau_s,\tau_d\}$ where $\tau_d$ is defined as $t_j - t_i$ from (\ref{eqn:switch_condition}) for the $p^{th}$ system. \begin{figure}[t] \centering \subfloat[Tracking of angular velocity]{\includegraphics[width = 0.48\textwidth]{comparison_robust_omega.pdf}}\\ \subfloat[Control effort ]{\includegraphics[trim = 0 2cm 0 0, clip,width = 0.48\textwidth]{robust_concantenated.pdf}} \caption{Performance comparison between the proposed adaptive controller and a conventional robust controller \cite{L+18}, \textit{without the presence of disturbances}, when the configuration switches from 1 to 2 at $t = 30$s and from 2 to 1 at $t = 60$s.
The uncertainty bounds assumed for the robust controller are constant across the two subsystems and are too high for subsystem 2. This leads to high control efforts and chattering, as well as higher control efforts initially from $t = 0$ to $1$s. The adaptive controller achieves similar performance, however, with low control efforts throughout the entire flight.} \label{fig:robust_controller} \end{figure} \begin{figure} \centering \subfloat[Errors in $\boldsymbol{e}_R$]{\includegraphics[width = 0.48\textwidth]{robust_eRv2.pdf}}\\ \subfloat[Control effort]{\includegraphics[width = 0.48\textwidth]{robust_controlv2.pdf}} \\ \subfloat[Inertia estimates (kg-m$^2$)]{\includegraphics[width = 0.48\textwidth]{robust_inertiav2.pdf}} \caption{Performance comparison between the proposed robust adaptive controller and a conventional robust controller \cite{L+18} \textit{in the presence of disturbances}. The uncertainty bounds assumed for the robust controller are not sufficient, and therefore its performance deteriorates. With the same parameters, the robust adaptive controller shows improved attitude tracking performance.} \vspace{-0.2in} \end{figure} \section{Results and Discussion}\label{sec:results} This section describes the various case scenarios simulated to validate the proposed controller for the switched system. The position controller in Fig. \ref{fig:firstpic} is implemented from \cite{L+10} to generate the necessary desired orientation and thrust. The nominal inertia matrix ($\boldsymbol{H}_p^0$), used as the initial guess for the $p^{th}$ configuration, is taken from our previous work \cite{PM+20} for the two configurations 1 and 2 in Fig. \ref{fig:firstpic}(b) as \begin{equation} \begin{aligned} \boldsymbol{H}_1^0 &= \begin{bmatrix} 0.0123 & -0.0006 & 0.0010 \\ -0.0006 & 0.0272 & 0 \\ 0.0010 & 0 & 0.0381 \end{bmatrix},\\ \boldsymbol{H}_2^0 &= \begin{bmatrix} 0.0114 & -0.0001 & 0.0005 \\ -0.0001 & 0.0152 & 0\\ 0.0005 & 0 & 0.0253 \end{bmatrix} \end{aligned} \end{equation} Two switched configurations ($p = \{1,2\}$) corresponding to $l_1 = 0.2$ and $l_2 = 0.1$ are chosen. The rest of the parameters are set to $ M = 1.4\text{kg}, k_R = 0.0424, k_\Omega = 0.0296 $. The modeling errors in the inertia parameters are chosen as \[ \Delta \boldsymbol{H}_1 = \Delta \boldsymbol{H}_2 = \text{diag}[0.01~ 0.01~ 0.02] \] such that $\boldsymbol{H}_p = \boldsymbol{H}_p^0 + \Delta \boldsymbol{H}_p$. For the first case scenario, with only modeling errors in inertia present, switching is performed after the errors have decreased, and the tracking errors converge to their zero equilibrium, validating \textit{Proposition 7}. The parameter estimates also converge, although this is not guaranteed due to the absence of persistent excitation. The results are seen in Fig. \ref{fig:adaptive_controller}(a)-(d). Next, we also compare the performance of the proposed adaptive controller against a stand-alone robust controller that accounts for parameter-varying uncertainty in inertia, without the presence of external disturbances.
The parameters for this case are chosen as \begin{equation*} \begin{aligned} \boldsymbol{H}_1^0 = & \begin{bmatrix} 0.0023 & -0.0006 & 0.0010\\ -0.0006 & 0.0172 & 0\\ 0.0010 & 0 & 0.0181 \end{bmatrix},\\ \boldsymbol{H}_2^0 = & \begin{bmatrix} 0.0014 & -0.0001 & 0.0005\\ -0.0001 & 0.0052 & 0\\ 0.0005 & 0 & 0.0053 \end{bmatrix} \end{aligned} \end{equation*} with the uncertainty as \begin{equation*} \Delta \boldsymbol{H}_1 = \text{diag}[0.2,0.2,0.4],~~~ \Delta \boldsymbol{H}_2 = \text{diag}[0.1,0.1,0.2] \end{equation*} The results for angular velocity tracking and the corresponding control efforts are seen in Fig. \ref{fig:robust_controller}(a)-(b). The performance of the proposed adaptive controller is comparable to that of the robust controller, albeit with lower control efforts. Further, the uncertainty bound assumed for the robust controller ($\delta_R = 0.5$) is too high for subsystem 2, which leads to higher control efforts, as shown in Fig. \ref{fig:robust_controller}(b). Since adaptive controllers can become unstable even under a slight disturbance, we simulate the case where a disturbance $\Delta (t) = 0.5[0~\sin(t)~\cos(t)]^T$ is added to the system. The parameters are retained from Case 2. The proposed robust adaptive controller, simulated with the low gains above and a low uncertainty bound of $\delta_R = 0.2$, performs better than the robust controller, leading to improved tracking performance as shown in Fig.~\ref{fig:robust_controller}, thereby validating \textit{Proposition 11}. This is because the adaptive controller successfully estimates and accounts for the uncertainties. \begin{figure*}[t] \centering \includegraphics[width = 0.85\textwidth]{trajectory_plots4.pdf} \vspace{-0.1in} \caption{Tracking results for the minimum-jerk trajectory and waypoint-based methods. The MJT-based plan leads to small deviations from the reference trajectory, whereas the waypoint-based one leads to safety constraint violations.} \label{fig:tracking} \end{figure*} Finally, we integrate the proposed attitude controller with a minimum-jerk trajectory planner and compare the performance against a waypoint-based planner to validate \textit{Proposition 14}. The MJT-based planning framework demonstrates how the vehicle transitions from the initial configuration to the new configuration at $[0.5~0 ~-2]^T$\,m at $t = 9.02$\,s without giving rise to additional tracking errors, as shown by the red solid lines in Fig. \ref{fig:tracking}(a)-(b). The waypoint-based planner, however, arrives at the same position at $t=5.24$\,s, which is less than the maximum attitude settling time ($\tau_s = 8.87$\,s), and therefore exhibits high attitude errors during the transition. This leads to higher switch-based disturbances, violating the safety constraints as shown in Fig. \ref{fig:tracking}(b). Since the attitude does not change much during this maneuver, the parameter estimates move slowly and do not converge; however, this does not affect the tracking performance. \section{Conclusion}\label{sec:conclusion} In this article, we presented an approach for analyzing the attitude tracking stability of foldable quadrotors (FQrs) by modeling them as switched systems. We employed this analysis to design an adaptive controller and derived the dwell-time requirements that should be enforced to guarantee that the attitude tracking errors of the switched system are asymptotically stable.
\section{Conclusion}\label{sec:conclusion} In this article, we presented an approach for analyzing the attitude tracking stability of foldable quadrotors (FQrs) by modeling them as switched systems. We employed this analysis to design an adaptive controller and derived the dwell-time requirements that must be enforced to guarantee asymptotic stability of the attitude tracking errors of the switched system. Another highlight of this work was to exploit the dwell-time information to design the boundary conditions for a minimum-jerk trajectory planner, achieving stable flight through narrow gaps under varying morphology. Future work includes a complete stability analysis of the system, including position tracking, over the entire $\mathsf{SO(3)}$ space, together with experimental results to validate the performance of the proposed controller.
{ "timestamp": "2022-09-20T02:22:16", "yymm": "2209", "arxiv_id": "2209.08676", "language": "en", "url": "https://arxiv.org/abs/2209.08676" }
\section{Introduction} Correlated phenomena play a central role in condensed matter physics, and have been studied in many contexts including phase transitions \cite{Bernien2017,Zhang2017}, many-body interactions and entanglement \cite{Cheneau2012,Shankar2017,Altman2004,Deng2005,Baez2020}, and magnetic ordering \cite{Simon2011,Mazurenko2017}, as well as in the context of fluctuating electromagnetic fields, where two-point correlators are central to characterizing field statistics \cite{Lifshitz1980,Joulain2005,Premakumar2017,Agarwal2017}. Recent efforts towards improving quantum devices have also explored correlated noise in SQUIDs \cite{Sendelbach2009,Yoshihara2010,Gustavsson2011} and qubits \cite{Szankowski2016,Viola2017,Krzywda2019,vonLupke2020,Wilen2021,Tennant2022}. Nitrogen vacancy (NV) centers in diamond are a promising sensing platform for detecting correlations, as they are robust, noninvasive, and capable of measuring weak signals with nanoscale resolution \cite{Casola2018}. These advantages have made them a useful tool for studying many condensed matter systems including magnetic systems like 2D van der Waals materials \cite{Thiel2019,Sun2021}, magnons \cite{LeeWong2020}, and skyrmions \cite{Dovzhenko2018,Yu2018,Jenkins2019}; and transport phenomena like Johnson noise \cite{Kolkowitz2015}, hydrodynamic flow \cite{Vool2021,Ku2020,Jenkins2022}, and electron-phonon interactions in graphene \cite{Anderson2019}. These applications are powerful but have so far been limited to signals that are averaged over space or time --- more information is potentially available by studying spatial and temporal correlations in the system. Significant advances in nanoscale spectroscopy have already been made by studying correlations from a single NV center at different points in time \cite{Laraoui2013,Boss2017,Pfender2019}; measuring correlated dynamics between two different NV centers would provide simultaneous information at length scales ranging from the diffraction limit to the full field of view ($\sim$0.1--100\,$\mu$m). Furthermore, measuring two NV centers allows for measurements of correlations at two different sensing times, with relative timing limited only by the experimental clock cycle ($\sim$1 ns resolution). Measurements of spatiotemporal correlations at these length and time scales would provide useful information about the dynamics of the target system, including the electron mean free path, signatures of hydrodynamic flow \cite{Levitov2016}, or the microscopic nature of local NV center noise sources like surface spins \cite{Romach2015,Sangtawesin2019,Dwyer2021}. In this paper we develop a new technique to measure classical correlations between two noninteracting NV centers, which gives access to nonlocal information that would normally be discarded with single NV center measurements. Measuring such two-point correlators with NV centers is challenging because conventional optical spin readout provides very little information per shot. Here we derive the sensitivity requirements for detecting correlations, and experimentally implement a covariance magnetometry protocol using spin-to-charge readout of two spatially separated NV centers to achieve low readout noise. We demonstrate correlation measurements of random-phase classical magnetic fields measured at two points separated in space and time, and implement a spectral decomposition method for extracting and distinguishing between correlated and uncorrelated spectral components.
\begin{figure*}[ht] \centering \includegraphics[width=4.75in]{figures/Fig01.pdf} \caption{Covariance noise sensing. (A) Diagram of a diamond with two near-surface NV centers experiencing uncorrelated local fields and a correlated common field. (B) Bloch sphere representations of each qubit state during sensing, with the states prepared along $x$, followed by a phase accumulation that differs from experiment to experiment, resulting in a distribution of phases. At the end of each experiment a final $\pi/2$ pulse maps these phases to populations. (C-D) Pulse sequence diagrams showing the sensing (XY8) and measurement (SCC) sequence for each NV center. The measurement is repeated many times, retaining the photon counts from each measurement without signal averaging; we instead measure the correlation between the resulting lists $S_i$. (E) Using conventional detection of single NV centers (top row), the coherence decay gives access to the noise spectral density $S(f)$ but provides no spatial information. Covariance magnetometry measuring two NV centers (bottom row) provides information about which spectral features are correlated and which are uncorrelated.} \label{fig:overview} \end{figure*} We consider two NV centers that do not directly interact with each other but experience a shared classical magnetic field, whose amplitude is correlated at the locations of the two NV centers (\cref{fig:overview}A). Each NV center also sees a unique local magnetic field that is uncorrelated between the two locations. These fields are detected using a Ramsey-type experiment addressing the $m_s=0$ and $m_s=+1$ (or $-1$) spin sublevels of the NV center (referred to as states 0 and 1 respectively), as illustrated in \cref{fig:overview}B-D. Upon many repeated measurements, we accumulate a list of signals $S_1=\{s_{1,i}\}$ and $S_2=\{s_{2,i}\}$, where $i=1...N$ indexes the $N$ total experiments. Though this protocol is similar to a typical Ramsey-type variance detection sequence \cite{Degen2017}, it features two significant modifications for covariance detection. First, despite detecting zero-mean noise, we choose a final pulse that is 90 degrees out of phase with the initial pulse, such that for high-frequency noise detection the final spin state is equally likely to be 0 or 1 (\cref{fig:overview}B,C), maximizing our sensitivity to correlations. This is not done in conventional noise detection using variance magnetometry, since straightforward signal averaging would then always produce the same result $\braket{m_{s_i}}=0.5$. Second, we do not compute the average value of this signal, but rather compute the shot-to-shot cross-correlation between the raw signals $S_1$ and $S_2$ (\cref{fig:overview}D). \begin{figure*}[ht] \centering \includegraphics[width=7in]{figures/Fig02.pdf} \caption{Detecting correlations and anticorrelations. (A) Pulse sequence and final Bloch sphere mapping for correlation (top left) or anticorrelation (top right) measurements using global microwave control. For anticorrelations, an extra $\pi$ pulse and spatially selective NV polarization optical pulse (``reset'') are added during initialization (bottom, gray box). (B) Correlation detected from a 2 MHz AC signal whose phase is randomized with 1 MHz bandwidth Gaussian noise. The measured correlations are positive when the NV centers are initialized parallel to one another (blue circles) and negative when they are initialized antiparallel (red squares). Lines indicate the predicted correlation shape \cite{Supp}.
Raw photon count statistics (bottom) taken from the marked data points in the top panel show no correlation (i), positive correlation (ii), or negative correlation (iii), where the color indicates the joint detection probability $\tilde{P}_{ab} \equiv P(s_1\mathord{=} a,s_2\mathord{=} b)-P(s_1\mathord{=} a)P(s_2\mathord{=} b)$. (C) Comparison of shot-to-shot photon counts during averaging for conventional readout (top left) and spin-to-charge conversion readout (top right). (bottom) Minimum magnetic field amplitude to detect correlations with $\text{SNR}=1$ for Gaussian noise. Here we have assumed $T_2 = 100\,\mu$s and a phase integration time $t = T_2/2 = 50\,\mu$s, as well as a readout time of $300\,$ns for conventional readout and $1\,$ms for SCC and optimal readout. Initialization time was ignored. } \label{fig:ACcorrelations} \end{figure*} Whereas conventional variance measurements provide spectral densities with no spatial information (\cref{fig:overview}E, top row), the addition of covariance information allows us to identify which spectral components are common between two NV centers and which are unique to each (\cref{fig:overview}E, bottom row). Throughout this work, we focus on the measured Pearson correlation $r = \text{Cov}(S_1,S_2)/(\sigma_1 \sigma_2)$, where Cov is the covariance and $\sigma_{1,2}$ is the standard deviation of $S_{1,2}$. \section{Detecting correlations} To demonstrate our protocol, we use an external radiofrequency (RF) coil or stripline to apply a global, random-phase AC signal to two shallow NV centers approximately 10 nm from the diamond surface. Here the two NV centers share the same magnetic resonance frequency, so all microwave pulses address both. They are spatially resolved, allowing for separate excitation and readout using two independent optical paths \cite{Supp}. To boost the sensitivity of our readout, we use a simultaneous spin-to-charge conversion (SCC) protocol \cite{Shields2015, Barry2020} on each NV center separately. We use an XY8 sensing protocol for each NV center to maximize sensitivity to the applied AC signal \cite{Gullion1990} (\cref{fig:ACcorrelations}A). As expected, we observe correlations that are maximized when the interpulse spacing is resonant with the frequency of the global signal (\cref{fig:ACcorrelations}B, blue circles). The correlations are apparent in the photon count statistics (\cref{fig:ACcorrelations}B, bottom panel ii); when one or more photons are detected from NV1, we observe a higher likelihood of also detecting a photon from NV2. To confirm that we are in fact detecting correlations in the spin state of the NV centers rather than spurious technical correlations \cite{Supp}, we can also initialize the two NV centers on opposite sides of the Bloch sphere prior to applying the XY8 sequence (\cref{fig:ACcorrelations}A). The phase accumulation step then results in a final state that is anticorrelated between the two NV centers (\cref{fig:ACcorrelations}B, red squares). \begin{figure*}[ht] \centering \includegraphics[width=4.6in]{figures/Fig03.pdf} \caption{Disentangling correlated and uncorrelated signals. (A) Single-NV noise spectra derived from conventional XY8 variance magnetometry (top) of two NV centers (orange open markers and gray filled markers, arbitrarily offset). Each NV center detects signals at two common frequencies, but it is impossible to directly determine whether the sources are local or nonlocal.
Spectral decomposition (bottom) using covariance magnetometry (\cref{eq:reconstructedSpectrum}) reveals that the higher-frequency peak is caused by a shared noise source. Here, the shared noise feature is engineered using an applied global 1.75 MHz AC signal, while the local feature is caused by the $^{15}$N nuclear spin intrinsic to each NV center. (B) In a broadband correlated noise environment, the two NV centers rapidly decohere (orange open markers and gray filled markers). (C) Covariance magnetometry for evolution times indicated by the gray rectangle in (B) reveals a dip in the Pearson correlation around $\tau=1800$ ns arising from the uncorrelated $^{15}$N nuclear spins intrinsic to each NV center. The broadband noise is correlated, allowing for the observation of spectral features from local signals even at evolution times beyond the coherence time of both NV centers.} \label{fig:spectraldecomposition} \end{figure*} The sensitivity of a covariance measurement differs from that of a traditional magnetometry measurement because it requires simultaneous signals from two NV centers. Assuming that the detected phases are statistically even, as for a noisy or random-phase signal, we find \cite{Supp} the Pearson correlation \begin{align} r = \frac{\textrm{e}^{-[\tilde{\chi}_1(t_1)+\tilde{\chi}_2(t_2)]}}{\sigma_{R_1}\sigma_{R_2}} \braket{\sin[\phi_{C_1}(t_1)]\sin[\phi_{C_2}(t_2)]}, \label{eq:rpc} \end{align} where the subscripts $1,2$ denote NV1 and NV2 respectively, the decoherence function $\tilde{\chi}_{1,2}(t)$ describes the `typical' coherence decay of the NV centers due to the local fields \cite{Cywinski2008}, $\phi_{C_{1,2}}$ are the phases accumulated by the NV centers due to the correlated field, and the readout noise $\sigma_{R_{1,2}}=\sqrt{1+2(\alpha_0+\alpha_1)/(\alpha_0-\alpha_1)^2}$ characterizes the fidelity of a photon-counting experiment with mean detected photon number $\alpha_0,\alpha_1$ for spin states $0,1$ respectively \cite{Taylor2008}. For thresholding, the readout noise instead depends on the fidelity of the spin state assignment \cite{Supp}. \begin{figure*}[ht] \centering \includegraphics[width=7in]{figures/Fig04.pdf} \caption{Temporal structure in correlations using independent control. (A) Confocal image showing the two NV centers used for these experiments (left). The optically detected magnetic resonance spectrum (middle), showing optical contrast as a function of microwave drive frequency, displays two distinct sets of transitions corresponding to NV1 and NV2, with assignments (right). The NV centers are driven independently on either the $(0,-1)$ transitions for both NVs, labeled $\{-,-\}$, or the $(0,-1)$ and $(0,+1)$ transitions for NV1 and NV2 respectively, labeled $\{-,+\}$. (B) Diagram of the pulse sequence used to probe temporal correlations. After initialization, the start of the XY8 pulse sequence applied to NV2 is delayed by time $t_\text{delay}$ from the start of the pulses on NV1. An $f_0=3.125\,$MHz global AC signal is applied, making the resonant XY8 interpulse spacing $\tau=160\,$ns. (C) Correlations for the case where the NV centers are addressed on the same transitions ($\{-,-\}$, blue circles) oscillate as a function of $t_\text{delay}$ at the AC signal frequency $3.125\,$MHz. The correlations invert (red squares) when the two NV centers are addressed on different transitions ($\{-,+\}$), as they now accumulate opposite phases for the same signal.
(D) With added phase noise, the time-domain dephasing of the AC signal is resolvable, despite the signal having a short coherence time (less than $2\,\mu$s) compared to the XY8 sequence time.} \label{fig:timeoffset} \end{figure*} Note that the detectable correlation depends quadratically on the readout noise, making readout fidelity especially important for detecting correlations; this key fact is implicit in prior calculations of single-NV center two-point correlators derived in the context of repeated weak measurements \cite{Pfender2019}. This may be intuitively understood from \cref{fig:ACcorrelations}C, which shows the raw photon counts for conventional versus SCC readout methods. Using conventional readout, only approximately $0.01$ photons are detected per measurement, such that detecting simultaneous counts from both NV centers is extremely unlikely. Using SCC readout dramatically increases our ability to detect coincident events, and has a greater effect on covariance measurements than on conventional single-NV center measurements. From the independently measured values for each term on the r.h.s.\ of \cref{eq:rpc} \cite{Supp}, we expect the detectable correlation in our experiment to be approximately bounded by $r \approx 0.01$, in good agreement with the maximum correlation $r\approx 0.008$ we detect here (\cref{fig:ACcorrelations}B). The remaining discrepancy is likely due to imperfect charge state initialization and SCC ionization. Because readout noise plays an amplified role in covariance detection, covariance measurements can become prohibitively long without optimizing sensitivity, for which we require a detailed understanding of the signal-to-noise ratio (SNR). The sensitivity (the minimum noise amplitude $\sigma_{B,\text{min}}$ detectable with $\text{SNR}=1$) of an experiment detecting Gaussian noise is given by \cite{Supp} \begin{align} \sigma_{B,\text{min}}^2 &= \frac{-\pi\cdot\text{Hz}}{4\gamma_e^2t} \ln \left( 1-\frac{2\sigma_R^2 \textrm{e}^{2t/T_2}}{\sqrt{T/(t+t_\text{R})}} \right), \label{eq:sensitivitygeneral1} \end{align} where $\gamma_e$ is the electron gyromagnetic ratio, $t$ is the phase integration time, $T_2$ is the coherence time, $t_\text{R}$ is the readout time, and $T\approx (t+t_\text{R})N$ is the total experiment time ignoring initialization. This is shown in \cref{fig:ACcorrelations}C (bottom) for three different readout methods: conventional ($\sigma_R=35$), spin-to-charge conversion ($\sigma_R=4$), and single-shot readout with perfect fidelity ($\sigma_R=1$), which is ultimately limited by quantum projection noise. Achieving $\text{SNR}=1$ for these three scenarios when $\sigma_B = 1\,$nT requires of order $300$ hours, $3$ hours, and $10$ seconds, respectively. While detecting correlations is extremely inefficient using conventional readout, enhanced readout protocols like spin-to-charge conversion \cite{Shields2015,Hopper2018,Barry2020,Irber2021,Zhang2021} allow for drastically lower readout noise, making covariance magnetometry possible to implement in practice.
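For concreteness, the following minimal sketch inverts \cref{eq:sensitivitygeneral1} for the total measurement time $T$ needed to reach $\text{SNR}=1$ at $\sigma_B = 1\,$nT under the assumptions above (we further assume the convention $\gamma_e \approx 2.8\times10^{10}\,$Hz/T); it recovers the quoted times at the order-of-magnitude level, with residual differences plausibly reflecting overheads not modelled here.
\begin{verbatim}
import numpy as np

gamma_e = 2.8e10    # electron gyromagnetic ratio (Hz/T); assumed convention
T2 = 100e-6         # coherence time (s)
t = T2 / 2          # phase integration time (s)
sigma_B = 1e-9      # noise amplitude to detect (T)

# Inverting Eq. (2): sqrt(T/(t+tR)) = 2 sigma_R^2 exp(2t/T2) / (1 - exp(-A)),
# with A = 4 gamma_e^2 t sigma_B^2 / pi.
A = 4 * gamma_e**2 * t * sigma_B**2 / np.pi
for label, sigma_R, tR in [("conventional", 35.0, 300e-9),
                           ("SCC", 4.0, 1e-3),
                           ("optimal", 1.0, 1e-3)]:
    T_tot = (t + tR) * (2 * sigma_R**2 * np.exp(2 * t / T2)
                        / (1 - np.exp(-A))) ** 2
    print(f"{label}: {T_tot:.3g} s (~{T_tot / 3600:.2g} h)")
\end{verbatim}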
\section{Disentangling correlated and uncorrelated noise sources} Detecting cross-correlations in pure noise reveals previously hidden information about the spatial structure of the noise, which we now demonstrate using two NV centers sensing both local and nonlocal magnetic fields. We first measure the spectral density $S(f)$ using a conventional variance magnetometry measurement of two different NV centers (\cref{fig:spectraldecomposition}A). These individual spectra reveal that there are two frequencies where signals are seen by both NV centers, but cannot provide simultaneous nonlocal spatial information about those signals. Covariance magnetometry over the same frequency range (\cref{fig:spectraldecomposition}A, bottom) shows only the higher-frequency feature, clearly revealing that it is caused by a noise signal common to both NV centers, while the lower-frequency feature is instead caused by local noise sources unique to each NV center. This ability to separate correlated and uncorrelated features enables spatially resolved spectral decomposition, distinguishing spectral components that are shared from those that are local. For phases that are Gaussian-distributed or small ($\phi\ll\pi$) we can find \cite{Supp} the correlated noise spectrum $S_C(f)$ if we have access to both the two-NV correlation $r$ and each NV center's coherence decay $C_i(t)=\textrm{e}^{-\chi(t)}$ (note that $C_i(t)$ includes both the correlated and uncorrelated noise sources): \begin{align} S_C(f) = \frac{\pi}{2t} \sinh^{-1}\left(\frac{\sigma_R^2 r}{C_1(t) C_2(t)}\right), \label{eq:reconstructedSpectrum} \end{align} where $t=n/(2f)$ and $n$ is the total number of applied XY8 pulses. This equation is used to obtain the correlated spectrum from the measured correlation and single-NV center coherence decays, as shown in \cref{fig:spectraldecomposition}A. The local spectrum for each NV center $S_{L_{1,2}}(f)$ may also be found from each individual NV center's total spectrum $S_{L_{1,2}}(f)= S_{1,2}(f)-S_C(f)$. So far we have analyzed the case where shared and local features are spectrally resolved, but an interesting scenario arises when a shared signal decoheres each NV center at frequencies coincident with local noise sources. In order to probe this case, we apply a global broadband Gaussian noise signal, decohering both NV centers while inducing broadband correlations in their phases (\cref{fig:spectraldecomposition}B-C). Beyond the coherence time of each NV center, conventional variance detection cannot reveal any information (\cref{fig:spectraldecomposition}B, gray region). However, covariance magnetometry (\cref{fig:spectraldecomposition}C) measures the broadband correlation in the random phases of the decohered NV centers --- this correlation will dip if either NV center interacts with a local noise source in its vicinity, as the local signal induces a phase that is unique to that NV center. The covariance magnetometry spectrum therefore reveals a feature that is hidden in the single-NV spectra. \section{Temporal structure of correlations} Covariance magnetometry also enables measurements of the temporal structure of the two-point correlator $\braket{B(r_1,t_1)B(r_2,t_2)}$ separated in time as well as space for short timescales where $t_2-t_1<t+t_R$, which is not possible with single NV center correlation measurements \cite{Laraoui2013,Boss2017,Pfender2019}. To perform this measurement, independent control of each NV center is required. We accomplish this by choosing two NV centers with different orientations at low magnetic fields (\cref{fig:timeoffset}A), such that the $0\rightarrow -1$ transition of the NV center that is aligned with the magnetic field is detuned by $70\,$MHz from that of the misaligned NV center. We then offset the beginning of the XY8 sequence applied to NV2 by time $t_\text{delay}$ (\cref{fig:timeoffset}B), and measure an applied AC field at frequency $f_0=3.125\,$MHz.
As we sweep $t_\text{delay}$, the correlations oscillate at frequency $f_0$ (\cref{fig:timeoffset}C), as expected for a random-phase AC signal \cite{Laraoui2013,Degen2017}. Independent control also allows us to simultaneously address opposite spin transitions for each NV center (\cref{fig:timeoffset}A, right). Since the two NV centers then accumulate opposite phases from the AC field, we observe anticorrelations at the same frequency (\cref{fig:timeoffset}C, red squares). Because the two NV centers are manipulated independently, there are no fundamental constraints on the length of $t_\text{delay}$. This allows us to directly measure time-domain structure on the nanosecond time scale at two points in space, despite using $\pi$ pulses with $60\,$ns duration. When we measure the correlations between two NV centers experiencing a shared AC signal with added phase noise (\cref{fig:timeoffset}D), we can directly resolve the temporal structure of the AC signal despite its short coherence time of less than $2\,\mu$s, without making use of spectral deconvolution. \section{Conclusions and outlook} Here we have demonstrated simultaneous control and readout of two spatially resolved NV centers, and have shown that enhanced readout enables nanoscale magnetometry of two-point spatiotemporal field correlators that would normally be discarded using conventional NV center magnetometry. This new measurement technique has many potential applications; specifically, measurements of these two-point correlators can reveal the underlying length and time scales of fluctuating electromagnetic fields near surfaces \cite{Lifshitz1980,Joulain2005,Premakumar2017,Agarwal2017}, providing information about nonequilibrium transport dynamics \cite{Dolgirev2022} and condensed matter phenomena like magnetic ordering in low-dimensional systems \cite{Simon2011,Mazurenko2017,Thiel2019}. Future extensions of the current demonstration include using photonic structures to improve photon collection efficiency \cite{Hopper2018,Barry2020}, applying different pulse sequences to each NV center to probe the correlations between signals at different frequencies \cite{Cundiff2013} or phases \cite{Cywinski2008}, and using detector arrays to perform simultaneous readout of many pairs of NV centers. \putbib[Correlated_sensing] \begin{acknowledgments} We gratefully acknowledge helpful conversations with Alex Burgers, Sarang Gopalakrishnan, and Jeff Thompson. \textbf{Funding:} Developing the covariance sensing protocol and shallow NV center preparation was supported by the NSF under the CAREER program (grant DMR-1752047) as well as the Princeton Catalysis Initiative, and spin-to-charge readout and charge state stabilization was supported by the US Department of Energy, Office of Science, Office of Basic Energy Sciences, under Award No. DE-SC0018978. Work performed at UW-Madison was supported by the U.S. Department of Energy Office of Science National Quantum Information Science Research Centers. J.R. acknowledges the Princeton Quantum Initiative Postdoctoral Fellowship for support. M.F. acknowledges the Intelligence Community Postdoctoral Research Fellowship Program by Oak Ridge Institute for Science and Education (ORISE) through an interagency agreement between the US Department of Energy and the Office of the Director of National Intelligence (ODNI). \textbf{Author contributions:} J.R., M.F., A.I.A., C.F., M.C.C., S.K., and N.P.d.L.\ developed the theoretical framework for covariance magnetometry.
J.R., Z.Y., and L.F.\ carried out covariance magnetometry experiments. J.R., C.F., M.C.C., S.K., and N.P.d.L.\ conceived the sensing technique, designed experiments, analyzed the data, and wrote the manuscript. \textbf{Competing interests:} The authors declare no competing interests. \end{acknowledgments} \end{bibunit} \pagebreak \widetext \begin{center} \textbf{\large Supplementary Information} \end{center} \setcounter{secnumdepth}{3} \setcounter{equation}{0} \setcounter{figure}{0} \setcounter{table}{0} \setcounter{page}{1} \makeatletter \renewcommand{\theequation}{S\arabic{equation}} \renewcommand{\thefigure}{S\arabic{figure}} \renewcommand{\bibnumfmt}[1]{[S#1]} \renewcommand{\citenumfont}[1]{S#1} \begin{bibunit}[apsrev4-2] \section{Methods} The diamond sample was implanted with nitrogen ions at an energy of 3 keV, resulting in shallow NV centers roughly $10\,$nm from the surface. NV center measurements are performed in a home-built dual-path confocal microscope setup. The green illumination on both paths is provided by a 532 nm optically pumped solid-state laser (Coherent Sapphire LP 532-300), split with a 50:50 beamsplitter (Thorlabs CCM5-BS016). Each path is then optically modulated by a dedicated acousto-optic modulator (AOM) (Isomet 1205C-1). The readout light around 590 nm is provided by different lasers for each path. The path 1 readout is provided by an NKT SuperK laser (repetition rate 78 MHz, pulse width 5 ps) with two bandpass filters with transmission wavelength around 590 nm (Thorlabs FB590-10 and Semrock FF01-589/18-25). The path 2 readout is provided by a 594 nm helium-neon laser (REO 39582). Both paths are optically modulated with dedicated AOMs (Isomet 1205C-1). The ionization light is provided by two internally modulated 638 nm lasers (Hubner Cobolt 06-MLD), one for each path. For each optical path, the three excitation wavelengths are combined by a 3-channel fiber RGB combiner (Thorlabs RGB26HF), and each excitation path is scanned by dedicated X-Y galvo mirrors (Thorlabs GVS012). The two optical paths are combined with a 2 inch beamsplitter cube (Thorlabs BS031). Each path is equipped with a 650 nm longpass dichroic mirror (Thorlabs DMLP650) to separate the excitation and collection pathways, and the photoluminescence (PL) for each path is measured by a dedicated fiber-coupled avalanche photodiode (Excelitas SPCM-AQRH-16-FC). A Nikon Plan Fluor 100x, NA = 1.30, oil immersion objective is used for focusing the excitation lasers and collecting the PL. The laser powers used (as measured before the objective) were approximately 3 to 7 $\mu$W for orange readout, 100 to 130 $\mu$W for green initialization, and 10 to 30 mW for the red ionization (this ionization power was extrapolated from lower-power measurements, and assumes perfect laser linearity). In practice, we found that the use of a green shelving pulse before ionization was unnecessary to achieve low readout noise, so a shelving pulse was not used. Microwave pulses are generated using a Rohde and Schwarz signal generator (SMATE200A) and amplified with a high power amplifier (Mini-Circuits ZHL-16W-43S+) before being sent to a homemade microwave stripline. Low frequency test signals are generated with an arbitrary waveform generator (Keysight 33622A) and amplified with a high power amplifier (Mini-Circuits LZY-22+).
For the data shown in \cref{fig:ACcorrelations}, we apply a random-phase AC signal at $f_0=2\,$MHz, phase-randomized with 1 MHz bandwidth Gaussian noise, which we detect using an XY8 dynamical decoupling sequence repeated 4 times (32 total pulses). For the data shown in \cref{fig:spectraldecomposition}A, we apply an $f_0=1.75\,$MHz AC signal phase-randomized with 50 kHz bandwidth Gaussian noise, detected with an XY8 sequence repeated 5 times (40 total pulses). For \cref{fig:spectraldecomposition}B we apply spectrally flat Gaussian noise with 2 MHz bandwidth and repeat an XY8 sequence twice (16 total pulses). For the data shown in \cref{fig:timeoffset}, the XY8 sequence is repeated twice (16 total pulses), and we measure an externally applied AC field at frequency $f_0=3.125\,$MHz using pulses separated by $\tau=160\,$ns. The AC signal is either phase-coherent (\cref{fig:timeoffset}C) or phase-randomized with 1 MHz bandwidth white noise (\cref{fig:timeoffset}D). The correlation data were obtained by performing typically 1 to 2 million individual experiments for each data point shown, then correlating the resulting individual photon counts between the two paths. To filter out spurious correlations from slow PL variations due to sample drift, we subtract the mean photon number calculated for every 1000 data points sequentially, effectively high-pass filtering the raw counts. While this can help reduce spurious correlations from any significant background drifts in principle, the resulting change was minor in our data. \section{Temporal correlations between subsequent measurements} Covariance magnetometry with multiple NV centers enables correlation sensing with high temporal resolution, but we are further able to access the temporal correlation function between subsequent measurements $r(s) = \text{Cov}\left[S_1(i),S_2(i+s)\right]/(\sigma_1\sigma_2)$, where $s$ defines a relative offset and where the covariance is taken over the index $i$. For coherent AC signals like the ones measured in \cref{fig:timeoffset}B, we expect to see a temporal structure which depends on the pulse sequence duration and the signal frequency if the signal is stable for long periods of time (\cref{fig:supp_fig03}A). Although we did not set out to synchronize subsequent experiments with a stable clock, we are still able to observe this temporal structure in our data (\cref{fig:supp_fig03}B). The only free parameter in \cref{fig:supp_fig03}A is an overall offset in the experiment duration, which we set to 60 ns -- this is possibly caused by clock instabilities during the long charge state readout, which lasts a few milliseconds. For white noise signals like the ones measured in \cref{fig:spectraldecomposition}C, we instead expect the shot-to-shot correlations to be zero for $s>0$, which we also observe (\cref{fig:supp_fig03}C). These long-term dynamics can also be useful for diagnosing experimental noise sources, which can cause significant problems in detecting true shot-to-shot correlations of the NV centers' spin states. As an example, \cref{fig:supp_fig03}D shows correlations which mimic a real spin signal but are in fact caused by mechanical vibrations due to a lateral contact point in one of the optical table legs. This vibration creates a global fluctuation in fluorescence collection on both optical paths and thus appears in the correlated signal. Removing such systematic noise sources is crucial for mitigating spurious correlations.
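A minimal sketch of this drift correction and of the lagged correlator $r(s)$ (assuming non-negative lags and the 1000-shot block size quoted above) is:
\begin{verbatim}
import numpy as np

def high_pass(counts, block=1000):
    # Subtract the mean of each sequential block of shots to suppress slow PL drift.
    c = np.asarray(counts, dtype=float)
    n = (len(c) // block) * block
    blocks = c[:n].reshape(-1, block)
    return (blocks - blocks.mean(axis=1, keepdims=True)).ravel()

def lagged_pearson(s1, s2, lag=0):
    # r(lag) = Cov[S1(i), S2(i+lag)] / (sigma_1 sigma_2) on drift-corrected counts;
    # after block-mean subtraction the series have zero mean, so mean(a*b) = Cov.
    a, b = high_pass(s1), high_pass(s2)
    if lag > 0:
        a, b = a[:-lag], b[lag:]
    return float(np.mean(a * b) / (a.std() * b.std()))
\end{verbatim}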
\begin{figure}[ht] \centering \includegraphics[width=4in]{figures/FigS01} \caption{Temporal correlations between subsequent measurements. (A) Expected and (B) measured correlation in the signals from a 3.125 MHz source detected by two NV centers using XY8 sequences separated by $t_\text{delay}$. Because the experiment duration is milliseconds, the pattern is strongly aliased. (C) Measured correlation between signals from a white noise source, which drops into the noise for $s>0$. (D) Spurious correlation induced by roughly $20\,$Hz mechanical vibrations of the optical table.} \label{fig:supp_fig03} \end{figure} \section{Detectable correlations} Consider two NV centers, which are not directly interacting with each other but which experience a shared classical magnetic field in their vicinity, as illustrated in \cref{fig:overview}A. This field is referred to as the correlated field. Each NV also sees proximal magnetic fields (for instance from fluctuating local nuclear spins), which are unique to each NV center. This will be called the uncorrelated or local field. We isolate an effective spin-1/2 system for each NV by considering only the $m_s=0$ and $m_s=+1$ (or $-1$) sublevels, which we refer to as states 0 and 1 respectively. We assume each NV is initialized in the transverse plane, then in the course of some detection protocol each NV acquires some net transverse phase $\phi$ due to interactions with the magnetic field. At the end of the sensing protocol a final $\pi/2$ pulse maps the azimuthal angle $\phi$ to a polar angle $\theta=\phi + \pi/2$. Due to quantum projection, the conditional probability for each NV center to be found in a given quantum state upon measurement is \begin{align} P(m_s=0|\theta) = \cos^2(\theta/2), \\ P(m_s=1|\theta) = \sin^2(\theta/2). \end{align} The measured signals $s_1$ and $s_2$ will depend on the measurement type; for a single-shot thresholded (th) measurement they will be the inferred spin states $0$ or $1$, while for photon counting (pc) they will be the counted photon numbers $k$, resulting in the measurement probabilities conditioned on the spin state: \begin{align} &P^\text{pc}(s\mathord{=} k|m_s) = \Pois(k,\alpha_{m_s}) \\ &P^\text{th}(s\mathord{=} m_s|m_s) = F \end{align} where $\Pois(k,\alpha_{m_s})$ is a Poisson distribution with mean $\alpha_{m_s}$ determined by the spin state, and $0.5 \leq F \leq 1$ is the readout fidelity. Below we will start by treating the thresholded readout more generally, allowing the error probabilities $P(s=0|m_s=1)$ and $P(s=1|m_s=0)$ to differ. We repeat the experiment many times such that for each NV center we have a list of phases $\phi_i$ and corresponding measurements $s_i$ \begin{align} \Phi_1 &= \{\phi_{1,i}\} \hspace{8mm} S_1=\{s_{1,i}\} \\ \Phi_2 &= \{\phi_{2,i}\} \hspace{8mm} S_2=\{s_{2,i}\} \end{align} where $i=1...N$ indexes the $N$ total experiments. The values of the $s_i$ will depend on whether we use photon counting or thresholded measurement as described above. We are interested in what we can learn from the correlation between the two data sets $S_1$ and $S_2$ given certain assumptions about the distributions $\Phi_1$ and $\Phi_2$. We will focus on the measured Pearson correlation $r$, which will differ from the true statistical correlation due to quantum projection, finite sampling size, and readout error. In the following, we will use a Bayesian statistical model to derive the measured correlation between two such data sets, and find the sensitivity of such a measurement in different contexts.
\subsection{Ideal measured correlation} We will not make any assumptions about the precise distributions of the correlated and local signals, except to assume that they are evenly distributed. We further assume that the phases acquired by the two NVs are \begin{align} \phi_1 = \phi_{C_1} + \phi_{L1} \\ \phi_2 = \phi_{C_2} + \phi_{L2} \end{align} where $\phi_{C_1},\phi_{C_2}$ are the common phases acquired due to the shared correlated field, and $\phi_{L1},\phi_{L2}$ are the phases acquired due to the local field. We assume that $\phi_{C_1}\propto\phi_{C_2}$, and assume that $\phi_{L1}$ and $\phi_{L2}$ are independent. Such a decomposition into correlated and uncorrelated components is always possible for two data sets, and here we take the correlated component $\phi_{C_1},\phi_{C_2}$ to be caused by the global (shared) signal and the uncorrelated component $\phi_{L1},\phi_{L2}$ to be caused by the local (unshared) signal. The quantity we seek to derive is the Pearson correlation, defined by: \begin{align} r=\frac{\text{Cov}(S_1,S_2)}{\sigma_{S_{1}}\sigma_{S_{2}}}=\frac{\braket{S_1 S_2}-\braket{S_1}\braket{S_2}}{\sigma_{S_{1}}\sigma_{S_{2}}}. \end{align} We start by assuming perfect readout so that the signals are the NV spin states $\{S_1,S_2\}=\{m_{s_1},m_{s_2}\}$. We let $p_\phi(\phi_1,\phi_2)$ denote the joint probability density to acquire phase $\{\phi_1,\phi_2\}$ with NV 1 and 2 respectively, and $p_s(s_1,s_2)=\int p(s_1|\phi_1)p(s_2|\phi_2) p_\phi(\phi_1,\phi_2) d\phi_1 d\phi_2$ denote the probability to detect signal $\{s_1,s_2\}$, where $p(s_i|\phi_i)$ is the probability to detect signal $s_i$ given accumulated phase $\phi_i$. Since we acquire phase in the transverse plane and read out after a final $\pi/2$ pulse (see \cref{fig:overview}) we have $\theta = \phi+\pi/2$, and \begin{align} \braket{m_{s_1} m_{s_2}} &= 1\cdot 1 \cdot p_s(1,1) + 1\cdot 0 \cdot p_s(1,0) + 0\cdot 1 \cdot p_s(0,1) + 0\cdot 0 \cdot p_s(0,0) \nonumber \\ &= p_s(1,1) \\ &= \int_{\phi_1,\phi_2} \sin^2{\left(\frac{\phi_1}{2}+\frac{\pi}{4}\right)} \sin^2{\left(\frac{\phi_2}{2}+\frac{\pi}{4}\right)} p_\phi(\phi_1, \phi_2) d\phi_1 d\phi_2 \end{align} Since we assume $\phi_C, \phi_{L1}, \phi_{L2}$ are drawn from independent distributions, we may rewrite the phase probabilities in terms of separate statistical draws, and we have \begin{align} &\braket{m_{s_1} m_{s_2}} = \nonumber \\ &\int \sin^2{\left(\frac{1}{2}[\phi_{C_1} + \phi_{L1}]+\frac{\pi}{4}\right)} \sin^2{\left(\frac{1}{2}[\phi_{C_2} + \phi_{L2}]+\frac{\pi}{4}\right)} \, p(\phi_C) p(\phi_{L1}) p(\phi_{L2}) \, d\phi_C d\phi_{L1} d\phi_{L2} \\ &= \frac{1}{4}(1 + \braket{\sin(\phi_{C_1})\sin(\phi_{C_2})} \braket{\cos(\phi_{L1})}\braket{\cos(\phi_{L2})}) \label{eq:meanXY} \end{align} where we have used the fact that $\braket{\sin(\phi)}=0$ for an even distribution.
Then we have for the correlation (using the Bernoulli statistics $\braket{S}=\braket{S^2}=1/2$ and $\sigma_S^2 = \braket{S^2} - \braket{S}^2 = 1/4$) \begin{align} r_\text{ideal} = \braket{\sin(\phi_{C_1})\sin(\phi_{C_2})} \braket{\cos(\phi_{L1})}\braket{\cos(\phi_{L2})} \end{align} Noticing that $\braket{\cos(\phi_{L})} = \braket{\textrm{e}^{i\phi_{L}}} = \textrm{e}^{-\tilde{\chi}(t)}$ is the decoherence function for variance detection \cite{Degen2017}, we find the correlation \begin{align} r_\text{ideal} = \textrm{e}^{-[\tilde{\chi}_1(t)+\tilde{\chi}_2(t)]} \braket{\sin[\phi_{C_1}(t)]\sin[\phi_{C_2}(t)]} \label{eq:ridealSI} \end{align} Note that \Cref{eq:ridealSI} is similar to the expression for temporal correlation spectroscopy using a single NV center \cite{Laraoui2013}, in which case there are two subsequent phase acquisition times for the single NV center instead of independent phase acquisition times for two separate NV centers. As an example, for identical correlated phases $\phi_{C_1}=\phi_{C_2}=\phi_C$ this is \begin{align} r_\text{ideal} = \frac{1}{2}\textrm{e}^{-[\tilde{\chi}_1(t)+\tilde{\chi}_2(t)]} \left[1-\braket{\cos(2\phi_C(t))}\right] \label{eq:ridealSIequalphase} \end{align} which for Gaussian-distributed correlated phases is \begin{align} r_\text{ideal} = \frac{1}{2}\textrm{e}^{-[\tilde{\chi}_1(t)+\tilde{\chi}_2(t)]} \left(1-\textrm{e}^{-2\sigma_{\Phi_C}^2} \right) \end{align} where $\sigma_{\Phi_C}^2$ is the variance of the correlated phase distribution. The assumption of Gaussian phases is violated for e.g.\ random-phase AC signals detected by a CP-type pulse sequence with pulse spacing $\tau$, in which case the more general Bessel function forms will result \cite{Degen2017} if the correlated phases are identical: \begin{align} r_\text{ideal} = \frac{1}{2}\textrm{e}^{-(\tilde{\chi}_1(t)+\tilde{\chi}_2(t))}\left[1-J_0\left(4\gamma B_0 \overline{W}t\right)\right] \label{eq:BesselRhoSupp}, \end{align} where $\overline{W}=\text{sinc}(\pi f n \tau)\left[1-\sec(\pi f \tau)\right]$ and $n$ is the total number of applied pulses. Note the extra factor of 2 relative to the expression for decoherence measured using typical single-NV center variance detection \cite{Degen2017}. While decoherence is effectively accelerated because of contributions from both NV centers (since there are two factors of $\tilde{\chi}$ in \cref{eq:ridealSI}), the phase accumulation rate is also effectively doubled (since $\sin^2(\phi)\sim \cos(2\phi)$), such that there is no net penalty to the sensitivity regarding phase integration time. \subsection{Measured correlation with readout noise} \subsubsection{Photon counting: shot noise} We are now interested in accounting explicitly for the number of photons $n$ that are counted from an NV center, depending on its state. The photon number is drawn from a Poisson distribution whose mean depends on the NV center spin state, with mean $\alpha_0$ for state $m_s=0$ and mean $\alpha_1$ for $m_s=1$. The list of photon counts for NV 1 is $S_1$ and for NV 2 is $S_2$, with individual photon counts $n_1$ and $n_2$. 
As before, we must calculate $\braket{S_1 S_2} = \sum n_1 n_2 P(n_1,n_2)$: \begin{align} \braket{S_1 S_2} = \sum_{n_1,n_2} n_1 n_2 \big[ &P(n_1|m_s\seq0)P(n_2|m_s\seq0) \int_{\phi_1,\phi_2}\cos^2{\left(\frac{\phi_1}{2}+\frac{\pi}{4}\right)} \cos^2{\left(\frac{\phi_2}{2}+\frac{\pi}{4}\right)} p(\phi_1, \phi_2) d\phi_1 d\phi_2 + \nonumber \\ &P(n_1|m_s\seq0)P(n_2|m_s\seq1) \int_{\phi_1,\phi_2}\cos^2{\left(\frac{\phi_1}{2}+\frac{\pi}{4}\right)} \sin^2{\left(\frac{\phi_2}{2}+\frac{\pi}{4}\right)} p(\phi_1, \phi_2) d\phi_1 d\phi_2 + \nonumber \\ &P(n_1|m_s\seq1)P(n_2|m_s\seq0) \int_{\phi_1,\phi_2}\sin^2{\left(\frac{\phi_1}{2}+\frac{\pi}{4}\right)} \cos^2{\left(\frac{\phi_2}{2}+\frac{\pi}{4}\right)} p(\phi_1, \phi_2) d\phi_1 d\phi_2 + \nonumber \\ &P(n_1|m_s\seq1)P(n_2|m_s\seq1) \int_{\phi_1,\phi_2}\sin^2{\left(\frac{\phi_1}{2}+\frac{\pi}{4}\right)} \sin^2{\left(\frac{\phi_2}{2}+\frac{\pi}{4}\right)} p(\phi_1, \phi_2) d\phi_1 d\phi_2 \big], \end{align} or, recognizing the angular integral from above and using, by symmetry, \begin{align} p(m_s\seq0)=p(m_s\seq1)= \int \sin^2{\left(\frac{\phi_i}{2}+\frac{\pi}{4}\right)} p(\phi_i) d\phi_i = \int \cos^2{\left(\frac{\phi_i}{2}+\frac{\pi}{4}\right)} p(\phi_i) d\phi_i = \frac{1}{2}, \end{align} we have \begin{align} \braket{S_1 S_2} &= \sum_{n_1,n_2} n_1 n_2 \times \nonumber \\ \big[ &P(n_1|m_s\seq0)P(n_2|m_s\seq0) \braket{m_{s_1} m_{s_2}} + \nonumber \\ &P(n_1|m_s\seq0)P(n_2|m_s\seq1) \left(\tfrac{1}{2}- \braket{m_{s_1} m_{s_2}}\right) + \nonumber \\ &P(n_1|m_s\seq1)P(n_2|m_s\seq0) \left(\tfrac{1}{2}- \braket{m_{s_1} m_{s_2}}\right) + \nonumber \\ &P(n_1|m_s\seq1)P(n_2|m_s\seq1) \braket{m_{s_1} m_{s_2}} \big] \end{align} where $\braket{m_{s_1} m_{s_2}}$ is defined as in \cref{eq:meanXY}. Because the draws for $n_1$ and $n_2$ are independent at this stage (i.e.\ $\braket{n_1n_2}=\braket{n_1}\braket{n_2}$ when drawn from already-given Poisson distributions), and denoting a Poisson distribution with mean $\alpha$ as $\text{Pois}_\alpha(x)$, we can use e.g.\ $\sum_{n_1} n_1 P(n_1|m_s\mathord{=} 0) = \sum_{n_1} n_1 \text{Pois}_{\alpha_0}(n_1) = \alpha_0$ to find \begin{align} \braket{S_1 S_2} = \braket{m_{s_1} m_{s_2}}(\alpha_0-\alpha_1)^2 + \alpha_0\alpha_1. \end{align} Lastly, we note that the photon counts follow an equal mixture of two Poisson distributions, with mean and variance \begin{align} \braket{S} &= \frac{1}{2}(\alpha_0 + \alpha_1) \\ \braket{S^2}-\braket{S}^2 &= \frac{1}{4}(\alpha_0-\alpha_1)^2 + \frac{1}{2}(\alpha_0 + \alpha_1). \end{align} Combining these elements, the detected correlation for photon counting $r_\text{pc}$ becomes \begin{align} r_\text{pc} &= \frac{1} {1+2(\alpha_0+\alpha_1)/(\alpha_0-\alpha_1)^2} r_\text{ideal} \nonumber \\ &= \frac{1}{\sigma_R^2} r_\text{ideal}, \end{align} where $\sigma_R=\sqrt{1+2(\alpha_0+\alpha_1)/(\alpha_0-\alpha_1)^2}$ is the readout noise \cite{Taylor2008,Hopper2018}. Notice that the measured correlation depends quadratically on the readout noise, rather than linearly.
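As an independent check of this quadratic readout-noise penalty, the following Monte Carlo sketch with illustrative parameters (SCC-like photon yields, identical Gaussian correlated phases, and independent Gaussian local phases) simulates the projective readout and state-dependent Poisson counts, and compares the sampled Pearson correlation against $r_\text{pc} = r_\text{ideal}/\sigma_R^2$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000                 # number of simulated experiments
sig_C, sig_L = 0.6, 0.4       # std dev of correlated / local phases (rad); assumed
a0, a1 = 2.0, 0.5             # mean photons per shot for spin 0 / 1; illustrative

phi_C = rng.normal(0.0, sig_C, N)          # shared phase, identical at both NVs
phi1 = phi_C + rng.normal(0.0, sig_L, N)   # total phase at NV1
phi2 = phi_C + rng.normal(0.0, sig_L, N)   # total phase at NV2

# Projective readout: P(m_s = 1 | phi) = sin^2(phi/2 + pi/4)
m1 = rng.random(N) < np.sin(phi1 / 2 + np.pi / 4) ** 2
m2 = rng.random(N) < np.sin(phi2 / 2 + np.pi / 4) ** 2
n1 = rng.poisson(np.where(m1, a1, a0))     # state-dependent photon counts
n2 = rng.poisson(np.where(m2, a1, a0))

r_mc = np.corrcoef(n1, n2)[0, 1]
sigma_R_sq = 1 + 2 * (a0 + a1) / (a0 - a1) ** 2
# Gaussian phases: r_ideal = exp(-sig_L^2) * (1 - exp(-2 sig_C^2)) / 2
r_theory = np.exp(-sig_L**2) * (1 - np.exp(-2 * sig_C**2)) / (2 * sigma_R_sq)
print(r_mc, r_theory)         # agree to within ~1/sqrt(N)
\end{verbatim}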
If we assume that the two NV centers have different readout noise $\sigma_{R_1}$ and $\sigma_{R_2}$, a slightly longer but straightforward calculation yields the more general result: \begin{align} r_\text{pc} &= \frac{1}{\sigma_{R_1}\sigma_{R_2}} r_\text{ideal}. \label{eq:rpcSupp} \end{align} \subsubsection{Single shot readout: thresholding} For a thresholded measurement with $P_k(i|j)$ denoting the probability to assign spin state $i$ given spin state $j$ on NV center $k$, we can perform a similar calculation to the one above to find: \begin{align} \braket{S_1 S_2} = P(S_1\mathord{=} 1,S_2\mathord{=} 1) =& P_1(1|0)P_2(1|0) \braket{m_{s_1}m_{s_2}} + P_1(1|0)P_2(1|1) \left(\tfrac{1}{2}-\braket{m_{s_1}m_{s_2}}\right) \nonumber \\ &+ P_1(1|1)P_2(1|0) \left(\tfrac{1}{2}-\braket{m_{s_1}m_{s_2}}\right) + P_1(1|1)P_2(1|1) \braket{m_{s_1}m_{s_2}}. \end{align} Then, since $\braket{S_i^2}=\braket{S_i}=\tfrac{1}{2}\left[P_i(1|0)+P_i(1|1)\right]$, we find the detected correlation for thresholding $r_\text{th}$ \begin{align} r_\text{th}=\frac{1}{\sigma_{R_1}^\text{th}\sigma_{R_2}^\text{th}}r_\text{ideal} \end{align} where the readout noise for thresholding is \cite{Hopper2018} \begin{align} \sigma_{R_i}^\text{th}=\sqrt{1+2\frac{P_i(1|0)\left[1-P_i(1|0)\right]+P_i(1|1)\left[1-P_i(1|1)\right]}{\left[P_i(1|0)-P_i(1|1)\right]^2}}. \end{align} In the simplified case that the errors are symmetric with $P(1|0)=1-P(1|1)$ we have $\sigma_{R_i}^\text{th}=1/(2F-1)$ where $F=1-\tfrac{1}{2}\left[P(1|0)+\left[1-P(1|1)\right]\right]\rightarrow 1-P(1|0)$ is the fidelity \cite{Hopper2018}. The detectable correlation then becomes: \begin{align} r_\text{th}=(2F_1-1)(2F_2-1)r_\text{ideal}, \end{align} where the two NV centers may have different readout fidelities $F_1$ and $F_2$ respectively. When fidelity is minimized ($F=0.5$) the measured correlation is 0, and when fidelity is maximized ($F=1$) we recover the idealized correlation $r_\text{ideal}$ in \cref{eq:ridealSI}. Again, note that the measurable correlation depends quadratically on the readout fidelity rather than linearly. \section{Expected correlations} We estimate the expected detectable correlations in our experiment by measuring each of the key parameters in \cref{eq:BesselRhoSupp,eq:rpcSupp}: decoherence, magnetic field strength, frequency range, and readout noise. This characterization is shown in \cref{fig:supp_fig02} for each of these variables in turn. Numerically calculating the expected correlation using these variables, we find that our measured correlation is approximately 70\,\% of the expected value, likely limited by charge state initialization and SCC ionization efficiency. Decoherence from local sources reduces the measurable correlation, as shown in \cref{fig:spectraldecomposition}A,C. In both cases the local noise is due to the hyperfine interaction of the NV center with its intrinsic nuclear spin. The filter function frequencies where this interaction is detected are at \cite{Abe2018} $f_k = (2 \gamma_N B_0 + 3.05 \,\text{MHz} )/(2 k)$, where $\gamma_N =-4.3\,$MHz/T is the $^{15}$N nuclear gyromagnetic ratio, $B_0\approx 31\,$mT is the strength of the external magnetic field, and $k$ is the filter function frequency harmonic. In \cref{fig:spectraldecomposition}A,C in the main text, the detected harmonics are $k=1$ and $k=5$ respectively.
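Evaluating these hyperfine filter-function frequencies explicitly (a short worked check) reproduces the features discussed in the main text; in particular, the $k=5$ harmonic corresponds to the dip near $\tau = 1800\,$ns in \cref{fig:spectraldecomposition}C, since the resonant interpulse spacing is $\tau = 1/(2f_k)$:
\begin{verbatim}
gamma_N = -4.3e6    # 15N nuclear gyromagnetic ratio (Hz/T)
B0 = 31e-3          # external magnetic field (T)
for k in (1, 5):
    f_k = (2 * gamma_N * B0 + 3.05e6) / (2 * k)   # filter-function harmonic
    tau = 1 / (2 * f_k)                           # resonant interpulse spacing
    print(f"k={k}: f_k = {f_k / 1e6:.3f} MHz, tau = {tau * 1e9:.0f} ns")
\end{verbatim}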
For the data shown in \cref{fig:timeoffset}, note that the $0\rightarrow+1$ detuning is only about $60\,$MHz due to hybridization for the misaligned NV center \cite{Epstein2005}. \begin{figure*}[ht] \centering \includegraphics[width=\textwidth]{figures/FigS02.pdf} \caption{Measured parameters used to estimate the expected correlation for \cref{fig:ACcorrelations}. (A) Dynamical decoupling measurements provide decoherence rates with $\text{exp}(-t/T_2)\approx 0.94$ and $0.71$ for the two NV centers near $\tau=250\,$ns. (B) Correlation measurements versus amplitude provide an estimate of the AC signal magnetic field strength $B_0\approx 0.13\,$G. (C) XY8 measurements provide an estimate of the FWHM of the 2 MHz signal with 1 MHz phase noise. In \cref{eq:BesselRhoSupp}, we integrate $f$ over the frequency range of our line-broadened signal to derive the theory curves shown in \cref{fig:ACcorrelations}B. (D) Readout noise $\sigma_R$ versus ionization time measured before (open markers) and after (filled markers) the data acquisition in \cref{fig:ACcorrelations}. Lines are guides to the eye. $t_\text{ion}=200\,$ns was used to acquire the data shown in \cref{fig:ACcorrelations}.} \label{fig:supp_fig02} \end{figure*} \section{Spectral decomposition and sensitivity} \subsection{Spectral decomposition} We assume the NV centers accumulate small phase angles (or experience a Gaussian noise source) in the presence of a Carr-Purcell (CP) \cite{Carr1954} type AC sensing sequence with large pulse numbers and pulse spacing $\tau$. Approximating the pulse sequence filter function as a delta function centered at frequency $\omega=\pi/\tau$, the coherence decay $C(t)$ is generally described by \begin{align} C(t) &= \textrm{e}^{-\chi(t)} \nonumber \\ \chi(t) &= \frac{1}{2}\braket{\phi^2} = \frac{1}{\pi}\int_0^{\infty} d\omega \frac{F(\omega)}{\omega^2}S(\omega) \approx \frac{t}{\pi} S(\omega), \label{eq:spectralbasics} \end{align} where $F(\omega)$ is the pulse sequence filter function, $\chi(t)$ is the decoherence from \emph{all} noise sources (local and global), and $S(\omega)$ is the spectral density of the magnetic field \cite{Degen2017,Szankowski2017,Romach2015}: \begin{align} S(\omega) & = \int_{-\infty}^{\infty}\textrm{e}^{-i\omega t}\gamma_e^2 G(t)dt \\ G(t) &= \braket{B(t'+t)B(t')}. \end{align} To perform spectral decomposition we assume that the two NV centers experience identical global fields with noise spectral densities $S_C(\omega) = S_{C_1}(\omega) = S_{C_2}(\omega)$, and that the noise spectrum may be decomposed into correlated and uncorrelated (local) contributions $S(\omega) = S_C(\omega)+S_{L_{1,2}}(\omega)$. We further assume that the accumulated correlated phases are Gaussian-distributed or small such that $\phi\ll\pi$. Then we have $\braket{\sin(\phi_{C_1})\sin(\phi_{C_2})}=\textrm{e}^{-2\chi_C}\sinh(2\chi_C)$, where $\chi_C$ is the decoherence induced by the correlated noise source, and the correlation becomes \begin{align} r=\frac{1}{\sigma_R^2}C_1(t)C_2(t) \sinh\left(\frac{2 t}{\pi}S_C\left(\frac{\pi}{\tau}\right)\right), \label{eq:rGaussian} \end{align} where $C_i(t)=\textrm{e}^{-[\tilde{\chi}_i(t)+\chi_C(t)]}$ is the total decoherence of each NV center from all sources. Inverting this equation we find the correlated spectral density \begin{align} S_C(\omega) = \frac{\pi}{2t} \sinh^{-1}\left(\frac{\sigma_R^2 r}{C_1(t) C_2(t)}\right), \label{eq:reconstructedSpectrumSupp} \end{align} which is \cref{eq:reconstructedSpectrum} in the main text.
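For concreteness, a minimal numerical round trip through \cref{eq:rGaussian} and \cref{eq:reconstructedSpectrumSupp}, with illustrative values assumed for the coherences, readout noise, and correlated spectral density, is:
\begin{verbatim}
import numpy as np

def correlated_spectrum(r, C1, C2, t, sigma_R):
    # S_C = (pi / 2t) * asinh(sigma_R^2 r / (C1 C2))
    return np.pi / (2 * t) * np.arcsinh(sigma_R**2 * r / (C1 * C2))

# Synthesize a correlation r from an assumed S_C, then invert it back.
t = 40 / (2 * 1.75e6)    # n = 40 pulses at f = 1.75 MHz gives t = n/(2f)
S_C_true = 2e4           # assumed correlated spectral density (1/s)
C1 = C2 = 0.5            # assumed total coherence of each NV at time t
sigma_R = 4.0            # SCC-level readout noise
r = C1 * C2 * np.sinh(2 * t * S_C_true / np.pi) / sigma_R**2
print(correlated_spectrum(r, C1, C2, t, sigma_R))   # recovers ~2e4
\end{verbatim}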
\subsection{Sensitivity} To derive the sensitivity of a covariance magnetometry measurement, we start from \cref{eq:rpc} in the main text, which accounts for the signal, readout noise, and decoherence. We now account for the statistical noise $\varsigma_r$, which is a measure of the uncertainty in the Pearson correlation due to the finite number of sampled points $N$ \cite{Fisher1925}: \begin{align} \varsigma_r \approx \tanh\left(\frac{1}{\sqrt{N-3}}\right)\approx \frac{1}{\sqrt{N}}, \end{align} where the approximation holds for $N\gg1$. Then the SNR is approximately \begin{align} \text{SNR} = \frac{r}{\varsigma_r} \approx \frac{\textrm{e}^{-2\tilde{\chi}(t)}\sqrt{N}}{\sigma_R^2} \braket{\sin[\phi_{C_1}(t)]\sin[\phi_{C_2}(t)]}. \label{eq:SNR} \end{align} For simplicity, we have assumed that the NV centers have the same readout noise $\sigma_R$ and the same decoherence function from local noise sources $\tilde{\chi}(t)$. To determine the sensitivity we must consider the time dependence of each term in \cref{eq:SNR}. These include the time it takes to run each of the $N$ experiments, the phase accumulation time, and potentially the time dependence of the readout noise $\sigma_R$ (which for SCC improves for longer readout times). We assume the detected correlations are from a shared magnetic field source with spectral density $S_C(\omega)$ (\cref{eq:rGaussian}), where we again assume that the two NV centers see the same shared field (rather than e.g.\ one NV center being farther away and experiencing an attenuated version of the shared field), so that $S_{C_1}(\omega)=S_{C_2}(\omega)=S_{C}(\omega)$. Then \begin{align} S_{C,\text{min}} &=-\frac{\pi}{4t} \ln \left[1-\frac{2\sigma_R^2 \textrm{e}^{\chi_{L1}(t)+\chi_{L2}(t)}}{\sqrt{N}}\right] \nonumber \\ &\approx \frac{\pi}{2}\sigma_R^2\textrm{e}^{2t/T_2} \sqrt{\frac{t+t_\text{R}}{t^2 T}}, \end{align} where $t$ is the phase integration time, $t_\text{R}$ is the readout time, and $T\approx (t+t_\text{R})N$ is the total experiment time ignoring initialization. Assuming the noise has flat spectral density around the detection frequency $S_{C}(\omega)=\gamma_e^2\sigma_B^2/\text{Hz}$ we find the minimum detectable noise amplitude \begin{align} \sigma_{B,\text{min}}^2 &= \frac{-\pi\cdot\text{Hz}}{4\gamma_e^2t} \ln \left( 1-\frac{2\sigma_R^2 \textrm{e}^{2t/T_2}}{\sqrt{T/(t+t_\text{R})}} \right), \label{eq:sensitivitygeneralSupp} \end{align} which is \cref{eq:sensitivitygeneral1} in the main text and is illustrated in \cref{fig:ACcorrelations}C. \section{Higher order joint cumulants} We have focused on 2-body (Pearson) correlations but here we extend this to higher orders. Consider the $N$th-order joint cumulant defined by \begin{align} \kappa_N=\kappa(m_1,m_2,...,m_N)= \sum_\pi (|\pi|-1)! (-1)^{|\pi|-1} \Pi_{B \in \pi} \braket{\Pi_{i\in B} m_i} \label{eq:jointcumulants} \end{align} where the sum runs over the different partitions $\pi$ (ways of grouping the individual $m_i$), $|\pi|$ is the number of parts in a partition, and $B$ runs over the blocks of each partition. For example, for $N=3$ we have \begin{align} \kappa_3 = \braket{m_1 m_2 m_3} - \braket{m_1 m_2}\braket{m_3} - \braket{m_1 m_3}\braket{m_2} - \braket{m_2 m_3}\braket{m_1} + 2 \braket{m_1}\braket{m_2}\braket{m_3}, \end{align} where e.g.\ the partition $\braket{m_1 m_2}\braket{m_3}$ has two blocks ($|\pi|=2$), which are $\braket{m_1 m_2}$ and $\braket{m_3}$. Here we calculate this joint cumulant for $N$ NV centers, where for simplicity we assume each NV center experiences the same magnetic field.
Suppose we arrange our starting NV center orientations from measurement to measurement in such a way that, across many measurements, all lower-order moments factorize; for instance, with four NV centers we have $\braket{m_1m_2m_3}=\braket{m_1}\braket{m_2}\braket{m_3}$, etc., where for a Bernoulli distribution with NV states $m_i=0$ or $1$ we have $\braket{m_i}=1/2$. Then in \cref{eq:jointcumulants} we must calculate $\braket{m_1m_2...m_N}$ as well as a series of terms which will only contain products of individual means $\braket{m_i}$: \begin{align} \kappa_N = \braket{m_1m_2...m_N} + \left(\sum_i x_i\right) \braket{m}^N \label{eq:cumulantdeduce} \end{align} where the $x_i$ are the coefficients of the series of mean-product partition terms in \cref{eq:jointcumulants}. However, the latter term may be quickly deduced by noticing that for any cumulant with independent entries we must have \begin{align} \kappa_N^\text{indep}=0&=\braket{m_1m_2...m_N} + \left(\sum_i x_i\right)\braket{m_1}\braket{m_2}...\braket{m_N} \nonumber \\ &=\left(1+\sum_i x_i\right)\braket{m_1}\braket{m_2}...\braket{m_N} \end{align} so that $\sum_ix_i=-1$ and the second term in \cref{eq:cumulantdeduce} must be $-\braket{m}^N=-1/2^N$. For the first term we have \begin{align} \braket{m_1m_2...m_N} = 1\cdot p(1,1,...,1) &= \frac{1}{2^N}\int (1+\sin(\phi))^N p(\phi) d\phi \nonumber \\ &= \frac{1}{2^N} \int \left[1+\sin^N(\phi)\right] p(\phi) d\phi \end{align} where the last equality holds because we have already assumed the phase distributions are independent for any number of NVs $m<N$. Then for the $N$th-order cumulant we have \begin{align} \kappa_N &= \frac{1}{2^N}\int \left[1 + \sin^N(\phi)\right] p(\phi) d\phi - \frac{1}{2^N} \nonumber \\ &= \frac{1}{2^N}\braket{\sin^N(\phi)}. \end{align} Lastly, by analogy with the usual expression for the Pearson correlation, we define a normalized $N$th-order joint cumulant $\tilde{\kappa}_N$ by \begin{align} \tilde{\kappa}_N = \frac{\kappa_N}{\Pi_i \sigma_i} \end{align} where $\sigma_i$ are the standard deviations of the individual distributions; for Bernoulli distributions these are $\sigma_i = 1/2$, yielding \begin{align} \tilde{\kappa}_N = 2^N \kappa_N = \braket{\sin^N(\phi)}. \end{align} The Fourier component of this expression oscillating at rate $N\phi$ is suppressed by a factor $1/2^N$, such that the net sensitivity relative to single-NV variance sensing is $\sqrt{N}/2^{N-1}$ for this term. Thus the boosted phase accumulation rate does not translate to an overall sensitivity enhancement for large $N$ relative to single-NV sensing of the same signal. \putbib[Correlated_sensing] \end{bibunit} \end{document}
\section{Introduction} In this paper we describe our experiments and results on the Multidocument Summarisation for Literature Review (MSLR) shared task.\footnote{\url{https://github.com/allenai/mslr-shared-task}} In particular, we attempt to improve on previous multi-document summarisation models in the biomedical domain, which have tried to integrate domain knowledge by marking important biomedical entities \citep{wallace2021generating,deyoung-etal-2021-ms}. We hypothesise that highlighting such entities by placing global attention on them will enable better aggregation and normalisation of related entities across documents, and thus improve the factuality of the generated summaries. To explore this idea, we experiment with four different ways of modifying the global attention mechanism of PRIMERA \citep{xiao-etal-2022-primera}, a recent state-of-the-art model designed for multi-document summarisation (MDS). In particular, while by default the global attention tokens in PRIMERA are used to separate documents in the input and capture their relationships, we assign global attention to important biomedical entities in input documents to create links between them. Moreover, to examine the effect of content selection on the quality of summaries produced by this underlying model, we compare results where we use the whole abstract as input vs.\ only the concluding sentences (which we expect to be more informative). We train and analyse models in zero-shot, few-shot (10 and 100 examples), as well as fully fine-tuned scenarios. Overall we evaluate (using both automatic metrics and human evaluation) a total of 23 models, two of which formed our official submissions to the leaderboard.\footnote{Additional results and code for all models are provided at \url{https://github.com/joey234/PRIMER-pico-attn}.} Both submitted models substantially outperform the baseline approaches \citep{deyoung-etal-2021-ms} in terms of automatic metrics, and one achieves the best performance in terms of BERTScore and ROUGE-2 among all submissions. Overall, our contributions in comparison to the previously published domain-specific models for MDS are the following: \begin{itemize} \item We explore the potential of using global attention as a means to highlight important biomedical entities, in order to improve aggregation across input documents. \item We examine how the amount of training data influences the quality of generated summaries, and propose several scenarios where the performance of few-shot and even zero-shot models is on par with that of fully fine-tuned ones. \item We show that in the fine-tuned scenario, the model is able to select important content without additional marking. \end{itemize} \section{Dataset} We use the Cochrane dataset as provided in the shared task without any additional data. See \Cref{tab:cochrane_statistic} in Appendix \ref{dataset_stats} for dataset statistics. \subsection{Pre-processing} As the trials are collected automatically from the Cochrane library, they contain redundant metadata such as hyperlinks, trial identifiers, funding information, copyright statements, and publication records. We perform string matching using regular expressions to remove this content. Following \citet{wallace2021generating}, for each review, we concatenate all corresponding documents and add a separator token to denote the end of each document.
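As an illustration of this preprocessing step, the sketch below uses hypothetical regular expressions (stand-ins for the actual patterns targeting hyperlinks, trial identifiers, copyright and funding statements) together with the \token{<doc-sep>} separator used by PRIMERA:
\begin{verbatim}
import re

# Hypothetical patterns -- illustrative of the kind of metadata removed;
# the exact expressions used in our pipeline are not reproduced here.
METADATA_PATTERNS = [
    r"https?://\S+",                     # hyperlinks
    r"\b(?:NCT|ISRCTN|ACTRN)\S*\d+\b",   # trial registry identifiers
    r"(?i)copyright.*?(?:\.|$)",         # copyright statements
    r"(?i)funded by.*?(?:\.|$)",         # funding information
]

def clean(text: str) -> str:
    for pat in METADATA_PATTERNS:
        text = re.sub(pat, " ", text)
    return re.sub(r"\s+", " ", text).strip()

def build_input(abstracts, sep_token="<doc-sep>"):
    # concatenate all trial abstracts of one review,
    # appending the separator token after each document
    return f" {sep_token} ".join(clean(a) for a in abstracts) + f" {sep_token}"
\end{verbatim}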
\subsection{Entity marking} The PICO framework describes several essential components of the central question in a clinical trial, including Populations (e.g.\ \textit{diabetics}), Interventions (e.g.\ \textit{animal insulin}), Comparators (e.g.\ \textit{human insulin}), and Outcomes (e.g.\ \textit{glycaemic control}) \citep{huang2006evaluation}. We tag PICO spans in input and target documents to make the summarisation models explicitly attend to them. We train a tagger on the EBM-NLP dataset \citep{nye-etal-2018-corpus}, which contains annotations for the P, I, and O classes\footnote{Comparators are grouped with Interventions in the dataset due to the difficulty in distinguishing them.} on abstracts of randomized controlled trials. Using this dataset, we fine-tune the BioLinkBERT model \citep{yasunaga-etal-2022-linkbert}, a BERT variant that leverages links between documents and achieves state-of-the-art results on various biomedical NLP tasks, including the PI(C)O tagging task. We adopt the same hyperparameters as in \citet{yasunaga-etal-2022-linkbert} using the BioLinkBERT\textsubscript{base} model, and achieve a 74.06 macro-$F_1$ score on the EBM-NLP test set, which is comparable to the reported results in \citet{yasunaga-etal-2022-linkbert}. We run the trained PIO tagger on the Cochrane dataset for both the documents and summaries. For simplicity, we use only two new special tokens, \token{<ent>} and \token{</ent>}, to mark the beginning and the end of each PICO span (e.g.\ \textit{\token{<ent>} Magnesium sulfate \token{</ent>} does not have a major impact on disease progression in \token{<ent>} women with mild preeclampsia \token{</ent>}.}). \Cref{tab:cochrane_statistic} presents basic statistics of the Cochrane dataset used in this challenge. The average number of PIO spans in the summary and input documents is based on the output of the trained PIO tagger. Note that target summaries for the test set are not provided to participants. \section{Evaluation} For the automatic evaluation, in addition to ROUGE scores \citep{lin-2004-rouge} and BERTScore\footnote{Hash code: \texttt{roberta-large\_L17\_no-idf\_ \\ version=0.3.11(hug\_trans=3.1.0)}} \citep{zhang2019bertscore}, we report the metrics introduced in \citet{deyoung-etal-2021-ms}, namely $\Delta$EI, which measures the distance in the predicted direction of the conclusions (\textit{increases}, \textit{decreases}, or \textit{no change}) between the target and generated summaries. For this metric, we report the average distance across samples and also the macro-$F_1$ score, in which the predicted direction for the target summary is treated as the correct label ($\Delta$EI-$F_1$). To estimate the quality of the generated summaries, especially in terms of their factuality, we also perform human evaluation, for which we adopt the binary decision method proposed in \citet{otmakhova-etal-2022-patient}. As we need to assess results from a large number of models, we simplify the evaluation, focusing only on factual errors and collapsing the categories of \textit{modality} and \textit{polarity} into a single category with five potential values (\textit{positive}, \textit{negative}, \textit{no effect}, \textit{no evidence}, \textit{no claim}), similar to how it was done by \citet{deyoung-etal-2021-ms}. Thus, we report whether the \textbf{PICO} elements used in the correct and generated summaries are aligned, whether the \textbf{direction} of the findings is the same, and whether the summaries are \textbf{factual}, that is, correct in both of these aspects.
In addition, to analyse common errors, we annotate generations as \textbf{contradictory} (i.e.\ containing statements with the same set of PICO elements but different polarity), \textbf{malformed} (i.e.\ including lexical and grammatical errors or repetitions), and \textbf{not evidential} (i.e.\ claiming that there is not enough evidence to determine the effect of intervention). We list some examples of contradictory, malformed and non-evidential summaries in Appendix \ref{sec:examples}. As the vast majority of the target summaries were multi-aspect --- that is, contained statements regarding several groups of patients, interventions or outcomes --- one of the difficulties we experienced during the evaluation was comparing them to generated summaries which were either single-aspect or contained different sets of PICO elements. We adopted a precision-based approach when evaluating such pairs of summaries: while it is not necessary for the generated summary to contain all PICO elements included in the target to be considered correct, it must not include any extra PICO elements. In the case of extra PICO elements in the generated summaries, we compared them against the \textit{Objectives} section of the review's abstract to determine if they were truly erroneous or if the target conclusion underreported some of the elements. Moreover, in the case of multi-aspect summaries we consider direction to be correct only if it is correct for the corresponding set of PICO elements. Thus, though our evaluation approach is less detailed than the one proposed in \citet{otmakhova-etal-2022-patient}, it is stricter in terms of alignment of multi-aspect summaries. \section{Experiments} \subsection{Model} We base our experiments on PRIMERA \citep{xiao-etal-2022-primera}, which was designed for multi-document summarisation, and experiment with zero-, 10-, 100-shot, and fine-tuning scenarios with the same hyperparameters as reported by the authors of the paper. We use the same random seed for all models to ensure consistency. For the baseline model (\textit{No entity}) we use documents and summaries without any entity marking; all other models use documents with entity tags. \begin{table}[t!] \footnotesize \centering \begin{tabular}{p{2cm} p{5cm}} \toprule Setting & Description \\ \midrule DocSep & The global attention is only set on the document separation token (\token{<doc-sep>}) as in the original PRIMERA model. The attention on \token{<doc-sep>} is used across the board in all settings described below. \\ EntMarkers & In addition to the \token{<doc-sep>} global attention, we set global attention on tokens which mark the beginning and end of entities (i.e.\ \token{<ent>}, \token{</ent>}).\\ EntMarkersSpans & In addition to the \token{<ent>} and \token{</ent>} tags, global attention is set on the tokens between them, that is, the entities themselves.\\ EntSpans & We only assign global attention to the entity spans. The \token{<ent>} and \token{</ent>} tokens are replaced by the padding mask token to mask them in inputs, and thus they receive neither global nor local attention. \\ EntOnly & We additionally mask out all tokens outside the entity spans so that they receive neither global nor local attention; thus we only pass entities with global attention on them to the decoder. We test this scenario to see how well the summaries can be recovered from only the essential entities plus information collected by \token{<doc-sep>} tokens.
\\ \bottomrule \end{tabular} \caption{Global attention settings} \label{tab:global_attention_setting} \end{table} \subsection{Entity marking and global attention} PRIMERA is based on Longformer-Encoder-Decoder (LED) \citep{beltagy2020longformer}, which uses sparse attention (global attention) in addition to fixed-sized window attention (local attention). Here, we experiment with employing the global attention mechanism to highlight PICO elements and aggregate them across the documents. Specifically, for the scenario with entity spans in input and target texts, we use the five settings for global attention listed in \Cref{tab:global_attention_setting}. \begin{table*}[t!] \footnotesize \centering \begin{tabular}{p{0.05cm} p{2cm} p{0.75cm} p{0.75cm} p{0.75cm} c p{0.75cm} p{1.25cm}} \toprule & & R-1$\uparrow$ & R-2$\uparrow$ & R-L$\uparrow$ & BERTScore$\uparrow$ & $\Delta$EI$\downarrow$ & $\Delta$EI-$F_1$ $\downarrow$ \\ \midrule {\multirow{2}{*}{\rotatebox[origin=c]{90}{\textbf{\ex{Zero}}}}} & Default & 0.215 & 0.032 & 0.132 & 0.834 & 0.580 & 0.321 \\ & Last 3 & 0.245 & 0.063 & 0.179 & 0.871 & 0.260 & 0.385 \\ \midrule {\multirow{6}{*}{\rotatebox[origin=c]{90}{\textbf{\ex{10-shot}}}}} & No entity & 0.229 & 0.037 & 0.147 & 0.857 & 0.269 & 0.328 \\ & DocSep & 0.234 & 0.041 & 0.155 & 0.864 & 0.267 & 0.367 \\ & EntOnly & 0.197 & 0.024 & 0.139 & 0.834 & 0.297 & 0.330 \\ & EntMarkers & 0.208 & 0.035 & 0.143 & 0.859 & 0.286 & 0.327 \\ & EntSpans & 0.235 & 0.036 & 0.155 & 0.854 & 0.307 & \textbf{0.295} \\ & EntMarkersSpans & 0.187 & 0.266 & 0.122 & 0.831 & 0.322 & 0.319 \\ \midrule {\multirow{6}{*}{\rotatebox[origin=c]{90}{\textbf{\ex{100-shot}}}}} & No entity & \textbf{0.259} & 0.052 & 0.171 & 0.864 & 0.302 & 0.376 \\ & DocSep & 0.251 & 0.048 & 0.164 & 0.862 & 0.339 & 0.452 \\ & EntOnly & 0.237 & 0.038 & 0.157 & 0.851 & 0.308 & 0.389 \\ & EntMarkers & 0.244 & 0.048 & 0.164 & 0.864 & 0.284 & 0.369 \\ & EntSpans & \textbf{0.259} & 0.049 & 0.170 & 0.863 & 0.273 & 0.314 \\ & EntMarkersSpans & 0.251 & 0.048 & 0.166 & 0.863 & 0.301 & 0.315 \\ \midrule {\multirow{6}{*}{\rotatebox[origin=c]{90}{\textbf{\ex{Full}}}}} & No entity & 0.256 & 0.064 & \textbf{0.182} & 0.871 & 0.308 & 0.409 \\ & DocSep & 0.234 & 0.060 & 0.170 & 0.869 & 0.337 & 0.373 \\ & EntOnly & 0.236 & 0.060 & 0.174 & 0.872 & 0.256 & 0.310 \\ & EntMarkers & 0.244 & \textbf{0.066} & 0.179 & \textbf{0.874} & 0.246 & 0.312 \\ & EntSpans & 0.237 & 0.061 & 0.174 & \textbf{0.874} & 0.251 & 0.302 \\ & EntMarkersSpans & 0.230 & 0.059 & 0.168 & 0.873 & \textbf{0.244} & 0.321 \\ \bottomrule \end{tabular} \caption{Results of automatic evaluation; $\uparrow$: higher is better, $\downarrow$: lower is better} \label{tab:global_attention} \end{table*} \subsection{Manipulating inputs} As dealing with lengthy inputs is a well-known issue for multi-document summarisation, especially in scientific and biomedical domains, we experiment with several settings to control the length of individual input documents: \begin{itemize} \item \textit{Default}: The default PRIMERA setting where LED's token budget of 4096 tokens is distributed evenly across all input documents and they are truncated to the corresponding length. 
\item \textit{Last 3}: In the biomedical domain the most important information appears in conclusions at the end of the paper, so we include only the last three sentences of each document, based on NLTK's sentence tokenizer.\footnote{\url{https://github.com/nltk/nltk}} \end{itemize} \section{Results} Tables \ref{tab:global_attention} and \ref{tab:human_eval} report the results of automatic and human evaluation, respectively. \begin{table*}[t!] \footnotesize \centering \begin{tabular}{p{0.05cm} p{2.25cm} P{1cm} P{1.25cm} P{1.25cm} P{1.25cm} P{1.25cm} c} \toprule & & PICO$\uparrow$ & Direction$\uparrow$ & Factual$\uparrow$ & Contradict.$\downarrow$ & Malformed$\downarrow$ & No evid.$\downarrow$ \\ \midrule {\multirow{2}{*}{\rotatebox[origin=c]{90}{\textbf{\ex{Zero}}}}} & Default & 50 & 15 & 5 & 0 & 0 & 0 \\ & Last 3 & 50 & 50 & 30 & 0 & 5 & 70\\ \midrule {\multirow{6}{*}{\rotatebox[origin=c]{90}{\textbf{\ex{10-shot}}}}} & No entity & 25 & 45 & 10 & 5 & 30 & 100 \\ & DocSep & 25 & 50 & 10 & 15 & 20 & 95 \\ & EntOnly & 10 & 30 & 0 & 10 & 75 & 35\\ & EntMarkers & 25 & 50 & 15 & 0 & 0 & 70\\ & EntSpans & 30 & 35 & 5 & 5 & 30 & 65 \\ & EntMarkersSpans & 20 & 35 & 10 & 5 & 70 & 40\\ \midrule {\multirow{6}{*}{\rotatebox[origin=c]{90}{\textbf{\ex{100-shot}}}}} & No entity & 50 & 50 & 20 & 5 & 5 & 60\\ & DocSep & 50 & 50 & 20 & 10 & 15 & 65\\ & EntOnly & 45 & 35 & 5 & 5 & 35 & 45 \\ & EntMarkers & 50 & 45 & 30 & 25 & 25 & 85\\ & EntSpans & 35 & 40 & 15 & 20 & 10 & 100\\ & EntMarkersSpans & 60 & 40 & 25 & 0 & 0 & 75\\ \midrule {\multirow{6}{*}{\rotatebox[origin=c]{90}{\textbf{\ex{Full}}}}} & No entity & 50 & 60 & 35 & 10 & 10 & 35\\ & DocSep & 50 & 50 & 25 & 5 & 10 & 65 \\ & EntOnly & 30 & 40 & 20 & 0 & 5 & 85\\ & EntMarkers & 35 & 40 & 20 & 10 & 0 & 90\\ & EntSpans & 55 & 40 & 25 & 5 & 5 & 90\\ & EntMarkersSpans & 50 & 40 & 25 & 5 & 0 & 100\\ \bottomrule \end{tabular} \caption{Results of human evaluation; $\uparrow$: higher is better, $\downarrow$: lower is better. \textbf{\ex{Zero}} denotes the zero-shot setting.} \label{tab:human_eval} \end{table*} \subsection{Models with and without global attention on entities} Though we do not see major improvements in ROUGE scores between the model without PICO entity marking (\textit{No entity}) and the models with global attention on PICO entities (with the exception of \textit{EntMarkers} and \textit{EntSpans}), and even observe some decrease in factuality scores, on closer inspection the summaries generated by those systems prove to be qualitatively different. In particular, the \textit{No entity} model is more extractive and more extensively copies the input studies, while the outputs of the models with global attention on entities are more abstractive. For example, for review CD005963 (\Cref{tab:examples} in Appendix \ref{gen_examples}), the \textit{No entity} model copies the term \textit{Mental Health Act}, often mentioned in source documents but absent in target conclusions, while the other models do not.
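For reference, the sketch below shows how such global attention masks can be constructed with the Hugging Face LED interface underlying PRIMERA; the checkpoint name, input text, and the combined mask logic (DocSep, plus EntMarkers, plus EntMarkersSpans) are illustrative assumptions rather than our exact training code:
\begin{verbatim}
import torch
from transformers import AutoTokenizer, LEDForConditionalGeneration

# "allenai/PRIMERA" is the publicly released checkpoint name (an assumption here)
tok = AutoTokenizer.from_pretrained("allenai/PRIMERA")
model = LEDForConditionalGeneration.from_pretrained("allenai/PRIMERA")
tok.add_special_tokens({"additional_special_tokens": ["<ent>", "</ent>"]})
model.resize_token_embeddings(len(tok))

text = "<ent> magnesium sulphate </ent> reduced eclampsia risk <doc-sep>"
enc = tok(text, return_tensors="pt")
ids = enc["input_ids"][0]
sep, e_open, e_close = tok.convert_tokens_to_ids(["<doc-sep>", "<ent>", "</ent>"])

gmask = torch.zeros_like(enc["input_ids"])
gmask[0, ids == sep] = 1                              # DocSep (always on)
gmask[0, (ids == e_open) | (ids == e_close)] = 1      # + EntMarkers
inside = torch.cumsum((ids == e_open).long() - (ids == e_close).long(), 0) > 0
gmask[0, inside] = 1                                  # + EntMarkersSpans

summary_ids = model.generate(enc["input_ids"],
                             attention_mask=enc["attention_mask"],
                             global_attention_mask=gmask, max_length=64)
print(tok.decode(summary_ids[0], skip_special_tokens=True))
\end{verbatim}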
\begin{table*}[t] \footnotesize \centering \begin{tabular}{p{0.05cm} p{2cm} p{0.75cm} p{0.75cm} p{0.75cm} c p{0.75cm} P{1.35cm}} \toprule & & R-1$\uparrow$ & R-2$\uparrow$ & R-L$\uparrow$ & BERTScore$\uparrow$ & $\Delta$EI$\downarrow$ & $\Delta$EI-$F_1$ $\downarrow$ \\ \midrule {\multirow{4}{*}{\rotatebox[origin=c]{90}{\textbf{\ex{Default}}}}} & Zero-shot & 0.215 & 0.032 & 0.132 & 0.834 & 0.580 & \textbf{0.321} \\ & 10-shot & 0.229 & 0.037 & 0.147 & 0.857 & 0.269 & 0.328 \\ & 100-shot & \textbf{0.259} & 0.052 & 0.171 & 0.864 & 0.302 & 0.376 \\ & Full & 0.256 & \textbf{0.064} & \textbf{0.182} & 0.871 & 0.308 & 0.409 \\ \midrule {\multirow{4}{*}{\rotatebox[origin=c]{90}{\textbf{\ex{Last 3}}}}} & Zero-shot & 0.245 & 0.063 & 0.179 & \textbf{0.871} & \textbf{0.260} & 0.385 \\ & 10-shot & 0.211 & 0.030 & 0.135 & 0.853 & 0.289 & 0.342 \\ & 100-shot & 0.250 & 0.046 & 0.164 & 0.862 & 0.341 & 0.424 \\ & Full & 0.239 & 0.061 & 0.171 & 0.870 & 0.279 & 0.382 \\ \bottomrule \end{tabular} \caption{Results of automatic evaluation; $\uparrow$: higher is better, $\downarrow$: lower is better} \label{tab:manip_input} \end{table*} \Cref{tab:abstractive} in Appendix \ref{lex_overlap} shows how the overlap with source documents decreases when the entity marking with global attention is used, thus making the summaries more abstractive. This, however, comes at a cost: we notice that the models with additional global attention produce markedly more \textit{no evidence} summaries, and in the fully fine-tuned scenario the number of such summaries grows with the number of tokens on which we place global attention. This is consistent with the results of another model which extensively uses global attention \citep{deyoung-etal-2021-ms}, which also produces a large number of \textit{no evidence} summaries \citep{otmakhova-etal-2022-patient}. Another behaviour of models with extra global attention, observed both in \citet{deyoung-etal-2021-ms} and here, is that they generate sequences which are representative of biomedical text style. For example, in addition to conclusions, the summaries generated by such models contain generic sentences such as \textit{There is a need for more studies of high methodological quality}. Thus we hypothesise that tokens with global attention tend to accumulate and reproduce information common to a large number of documents in the training set, rather than information shared by a particular set of input documents. Finally, though we expected the \textit{EntOnly} model, which uses only PIO entities as inputs and thus loses information about the relations between them, to perform much worse than the other models, it is very similar to them both in automatic metrics and in \textit{Direction} scores. We maintain that this shows that even when the models are able to attend to all tokens, they only reproduce PIO entities and are not able to consistently capture the relationships between them. \subsection{Zero-shot vs.\ few-shot vs.\ fully fine-tuned models} We notice that in terms of automatic metrics, zero-shot models are comparable to fine-tuned ones or even outperform them; however, they perform substantially worse in terms of factuality, especially for the direction. We find that in zero-shot scenarios, PRIMERA copies spans of text from one or several of the input documents, focusing mostly on their beginnings, rather than aggregating information across documents.
Thus it outputs either conclusions copied from a single document, or, more often, makes no claims at all by reporting the objectives of the review or its setup. Another interesting finding is that the ROUGE scores tend to be the highest in the 100-shot scenario and go down for the fully fine-tuned models. We maintain that in 10-shot scenarios the models are still unable to correctly capture and reproduce important entities (which is also reflected in their low accuracy in terms of PICO), while in the fully fine-tuned models there is a tendency to generate broader and more generic entities, for example \textit{metal-protein attenuation compounds} instead of \textit{PBT1/PBT2} in the target summary. Not surprisingly, the number of malformed generations decreases as the number of training samples increases: the majority of summaries produced by \textit{EntOnly} and \textit{EntMarkersSpans} after 10 shots are malformed, but even 100-shot training significantly reduces this amount. On the other hand, it is surprising to see that the more the models are fine-tuned, the more \textit{no evidence} statements they produce, with some models generating only such summaries in the fully fine-tuned scenario. Lastly, we find that the 100-shot \textit{EntMarkers} model is similar in terms of factuality to the fully fine-tuned model without entity marking (\textit{No entity}). This is an encouraging result, as high-quality multi-document summarisation data is scarce in the biomedical domain, so few-shot learning is a practically important direction to explore. \subsection{\textit{Default} vs.\ \textit{Last3}} For few-shot and fine-tuned models we find no major improvements in quality when restricting the inputs to the last three sentences only (\Cref{tab:manip_input}). This shows that after fine-tuning PRIMERA is able to detect the most useful spans without relying on their explicit marking. On the other hand, for the zero-shot scenario, where the model tends to copy from the beginning of input documents, the quality dramatically improves when we force it to extract only from a more informative span at the end of documents. Interestingly, such an easy manipulation of inputs allows one to achieve results comparable to the best 100-shot and fully fine-tuned models without any training on the in-domain dataset. Again, this is a promising direction for research considering the scarcity of high-quality data. \section{Conclusion} We tackle the problem of biomedical multi-document summarisation by incorporating PICO information into a strong summarisation model, and using global attention to enhance the representation of this information. Through automatic and human evaluation of an extensive set of experiments, we find that adding global attention to PICO spans helps in (1) generating more abstractive summaries, and (2) improving summarisation quality in few-shot settings, which is especially important in the biomedical domain. \section*{Acknowledgements} The authors would like to thank the anonymous reviewers for their comprehensive and constructive reviews. This research was undertaken using the LIEF HPC-GPGPU Facility hosted at the University of Melbourne. This Facility was established with the assistance of LIEF Grant LE170100200. This research was conducted by the Australian Research Council Training Centre in Cognitive Computing for Medical Technologies (project number ICI70200030) and funded by the Australian Government.
\section{Introduction} \label{sec:introduction} The dissonance between the \citet{Airy:1845} linear water wave theory and \citet{Russell:1844} observations of a solitary wave, resolved by \citet{Boussinesq:1871,Boussinesq:1872,Boussinesq:1877} and \citet{Rayleigh:1876}, led to the idea of balancing dispersion and nonlinearity, now known as the \citet{Ursell:1953} criterion, and eventually to the classical Korteweg-de Vries (KdV) and nonlinear Schr\"{o}dinger (NLS) equations, both possessing soliton solutions. The transverse instability of solitary waves in the shallow and deep water regimes has been explored for a long time since the classical works of \citet{Kadomtsev:1970} and \citet{Zakharov:1974}, respectively, but all the subsequent studies have been mostly limited to plane KdV and NLS solitons, i.e. the effects of curvature of the solitary wave as well as of its amplitude decay due to cylindrical geometry have not been systematically addressed; for a review of the vast literature on the topic see \citet{Kivshar:2000,Yang:2010}. In fact, despite the classical setting and ongoing interest \citep{Peregrine:1983,Grimshaw:2007,Vitanov:2013}, this set of problems proved to be understudied: not only has the instability to transverse perturbations not been explored, but the equations governing the solitary wave dynamics in the deep-water limit have not been derived either, while in the shallow-water limit the underlying asymptotic assumptions and the structure of the equation and its axisymmetric solutions in the presence of surface tension have not been fully understood. It is the goal of the present study to fill in this gap and, applying the Ursell criterion, to deduce appropriate weakly nonlinear models generating concentric solitary waves in the presence of surface tension and to analyze their transverse instability. As models for studying the transverse instability of the finite-amplitude concentric carrier waves, we will consider the deep water case resulting in the nearly concentric NLS-type (ncNLS) equation and the shallow water case resulting in the nearly concentric Korteweg-de Vries (ncKdV) equation, respectively; the term ``nearly concentric'' is used to distinguish these from the ``concentric'' limits cNLS and cKdV having no azimuthal dependence. The former (NLS-type), to the author's knowledge, has not been derived before. Since the systematic derivation is technically involved, to make it more transparent in \S \ref{subsec:ncNLS} we guide the reader through the key steps in the derivation of an NLS-type equation in cylindrical geometry including the effects of surface tension, while highlighting the differences in the derivation from the plane case. As for the latter (ncKdV), it has been previously derived by \citet{Johnson:1980} in a certain asymptotic regime without surface tension effects; therefore, in \S \ref{subsec:ncKdV} we revisit the derivation not only to include these effects, but also to highlight the physics necessary for understanding the transverse instability implications, and to identify an asymptotic regime relevant for our purposes, under which the ncKdV arises. Since both models correspond to the limit of finite-amplitude narrow wavepacket evolution leading to a solitary wave solution, which is the focus of our interest here, to set the stage we contrast it with the case when the initial axisymmetric free surface deflection $\eta_{0}(r)$, dependent on the radial coordinate $r$ only, is infinitesimally small and contains a wide range of wavenumbers $k$, e.g.
in the Hankel-space $\widehat{\eta}_{0}(k)=1$, which corresponds to a localized initial free surface deflection in the physical space $\eta_{0}(r) = \left\{2 \pi b \ \text{for} \ 0 \le r \le r_{0}, \ \text{and} \ 0 \ \text{for} \ r > r_{0}\right\}$; here $\pi b \, r_{0}^{2} = 1$ with $r_{0} \rightarrow 0$ and $b \rightarrow \infty$. For example, in the deep water case of pure gravity-driven waves the stationary phase analysis \citep{Koshlyakov:1964} identifies the `stationary' wavenumber $k = \frac{g t^{2}}{4 r^{2}}$, proportional to the gravitational acceleration $g$, and gives for the free surface evolution: \begin{align} \label{free-surface:stationary-phase} \eta(t,r) \sim \frac{g t^{2}}{r^{3}} \cos{k r} = \frac{g t^{2}}{r^{3}} \cos{\frac{g t^{2}}{4 r}} \end{align} indicating that at a fixed time $t$ the waves become of longer wavelength and smaller amplitude with increasing $r$, while at a fixed $r$ the amplitude of the wave increases and the wavelength shortens in time; equation \eqref{free-surface:stationary-phase} was calculated by \citet{Lamb:1904}, though some elements of this analysis were known to \citet{Poisson:1816}. Of course, in reality such an idealized Dirac delta-function signal $\eta_{0}(r)$ does not exist: replacing it with a smoothed (delta-sequence) deflection $\eta_{0}(r) \sim e^{-\frac{1}{2} \delta^{2} r^{2}}$ of characteristic width $\delta^{-1}$ gives $\widehat{\eta}_{0}(k) \sim \frac{1}{\delta^{2}} e^{-\frac{k^{2}}{2 \delta^{2}}}$, thus regularizing the solution at short distances and long times. One may also arrive at \eqref{free-surface:stationary-phase} informally \citep{Kadomtsev:1982}: namely, given that the initial perturbation consists of all wavenumbers, a wavepacket centered around $k$ propagates with the group velocity $\frac{\d \omega}{\d k} = \frac{1}{2} \left(\frac{g}{k}\right)^{1/2}$ and hence in time $t$ will arrive at the point $r = \frac{1}{2} \left(\frac{g}{k}\right)^{1/2} t$, i.e. at a given time $t$ we find the wavenumber $k$ of the wave arriving at the point $r$, identified above with the stationary phase method. For comparison with the subsequent results in the present study, let us remind the reader of the key conclusions of the earlier cited classical works on the stability of 1D solitons. \textit{First}, the 1D case of a general 2D NLS with focusing nonlinearity in the Cartesian coordinates (written here in the scaling adopted in the present work, with $\xi$ standing for the longitudinal direction, in which the 1D-soliton propagates, and $Y$ for the transverse direction): \begin{align} \label{eqn:NLS-1D} \i \psi_{\tau} + \psi_{\xi\xi} + \alpha \psi_{YY} + |\psi|^{2} \psi = 0, \end{align} with $\alpha = 0$ and $\psi$ being the slow envelope amplitude of a traveling wave, admits a solution in the standing wave (Stokes) form $\psi = e^{\i \omega \tau} v(\xi)$ leading to \begin{align} \label{base-state:1D-NLS} - \omega \, v + v^{\prime\prime} + v^{3} = 0, \end{align} which can be supplied with the boundary conditions (BCs) $v^{\prime}(0)=0$, $v(\infty)=0$. Multiplying \eqref{base-state:1D-NLS} by $v^{\prime}$ and integrating w.r.t. $\xi$, we get \begin{align} \label{base-state:1D-NLS:reduced} - \omega \, v^{2} + v^{\prime 2} + \frac{1}{2} v^{4} = 0 \ \Rightarrow \ w^{\prime 2} = \omega \, w^{2} - \frac{1}{2}, \end{align} where we took into account the BCs and switched to $w=v^{-1}$. The solution of \eqref{base-state:1D-NLS:reduced} is a soliton localized in the $\xi$-space, $v = (2 \, \omega)^{1/2} \mathrm{sech}{\left(\omega^{1/2} \xi\right)}$.
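This profile is easily verified symbolically; a minimal sympy sketch (a consistency check only):
\begin{verbatim}
import sympy as sp

xi, omega = sp.symbols('xi omega', positive=True)
v = sp.sqrt(2 * omega) * sp.sech(sp.sqrt(omega) * xi)   # claimed soliton profile
residual = -omega * v + sp.diff(v, xi, 2) + v**3        # LHS of the ODE above
print(sp.simplify(residual.rewrite(sp.exp)))            # -> 0
\end{verbatim}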
Other solutions can be generated from the fact that \eqref{eqn:NLS-1D} is invariant under the Galilean transformation $(\tau,\xi) \rightarrow (\tau^{\prime},\xi^{\prime} = \xi - u \tau)$ such that $\psi(\tau,\xi) \rightarrow e^{\i \frac{u}{2} \left(\xi^{\prime} + \frac{u}{2} \tau^{\prime}\right)} \psi(\tau^{\prime},\xi^{\prime})$. As shown by \citet{Zakharov:1968} (see also \citet{Grimshaw:2007}), plane wave solutions of the 1D NLS are modulationally unstable in the focusing case \eqref{eqn:NLS-1D}. However, spectrally the solitons are neutrally stable, i.e. all eigenvalues are located on the imaginary axis; this fact is also consistent with the \citet{Vakhitov:1973} criterion \citep{Kuznetsov:1986,Yang:2010} based on the slope of the power curve $P(\mu) = \int{U^{2}(\xi;\mu) \, \d \xi}$ for the 1D solitary wave $\psi(\tau,\xi) = U(\xi) e^{\i \mu \tau}$, where $\mu$ is the propagation constant -- however, spectral stability does not imply even linear, not to mention nonlinear, stability \citep{Krechetnikov:2007}. Later, \citet{Zakharov:1974} also established the 1D-NLS soliton instability to transverse $Y$-modulations regardless of the sign of the transverse dispersion coefficient $\alpha$ in \eqref{eqn:NLS-1D}; $\alpha=+1$ corresponds to the elliptic and $-1$ to the hyperbolic case, respectively. \textit{Second}, the nearly plane KdV equation (npKdV) deduced by \citet{Kadomtsev:1970} \begin{align} \label{eqn:KP} \left(2 \, \eta_{\tau} + 3 \, \eta \, \eta_{\xi} + \frac{1}{3} \eta_{\xi\xi\xi}\right)_{\xi} - \beta \, \eta_{YY} = 0, \end{align} in the absence of $Y$-dependence possesses not only a self-similar solution $\eta(\tau,\xi) = \tau^{-2/3} F(\zeta), \ \zeta = \tau^{-1/3} \xi$, but also the 1D soliton \begin{align} \label{soliton:1D:KdV} \eta(\tau,\xi) = A \, f(\widetilde{\xi}), \ \widetilde{\xi} = \sqrt{A} \left(\xi - A \, \tau\right), \end{align} governed by \begin{align} \label{eqn:self-similar:1D-KdV} \frac{1}{3} f^{\prime\prime\prime} - 2 \, f^{\prime} + 3 f f^{\prime} = 0. \end{align} Equation \eqref{eqn:self-similar:1D-KdV} can be integrated once to $\frac{1}{3} f^{\prime\prime} - 2 \, f + \frac{3}{2} f^{2} = 0$, assuming that the solution $f$ and its derivatives decay at infinity, and then its order can be reduced even further via $f^{\prime} = g(f)$ and integrated to yield the usual localized soliton $f(\widetilde{\xi}) = 2 \, \mathrm{sech}^{2}\!\left[\left(\tfrac{3}{2}\right)^{1/2} \widetilde{\xi}\right]$, qualitatively anticipated by Boussinesq and \citet{Rayleigh:1876} before the work of \cite{Korteweg:1895}. As first shown by \citet{Kadomtsev:1970} based on \eqref{eqn:KP}, this plane soliton exhibits transverse instability in a medium with positive dispersion ($\beta>0$) in the corresponding dispersion law $\omega(k;\beta)$, meaning that the phase velocity of linear waves increases with the wavenumber $k$, while for negative dispersion ($\beta<0$) it is spectrally stable. As the structure \eqref{soliton:1D:KdV} of the 1D-KdV soliton suggests, its speed $A$ relative to the frame of reference traveling with the phase speed of the carrier linear wave $c_{0} = \omega/k$, where $\omega^{2} = k^{2} h \left(g + \sigma k^{2}/\rho\right)$, is set by the soliton amplitude, namely the larger the amplitude the faster the soliton travels. As we know from the transverse stability analysis of such a soliton \citep{Alexander:1997}, there exists a most amplified (preferred) transverse wavelength, which also depends on the soliton amplitude.
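The quoted $\mathrm{sech}^{2}$ profile can be checked in the same symbolic fashion as the NLS soliton above (again, a consistency check only):
\begin{verbatim}
import sympy as sp

xi = sp.symbols('xi')
f = 2 * sp.sech(sp.sqrt(sp.Rational(3, 2)) * xi)**2       # soliton profile above
residual = sp.diff(f, xi, 3) / 3 - 2 * sp.diff(f, xi) + 3 * f * sp.diff(f, xi)
print(sp.simplify(residual.rewrite(sp.exp)))              # -> 0
\end{verbatim}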
The latter property is not an issue in the plane (1D) case, as the soliton amplitude does not change with time in non-dissipative media. However, as soon as we try to translate this knowledge of 1D soliton behavior onto the cylindrical case, we meet with two immediate complications, both resulting from the intrinsic time-dependence of the base state. To start with, the cylindrical soliton is being stretched in the transverse direction as it travels outwards and hence, according to the stability theory on time-dependent spatial domains \citep{Knobloch:2014,Knobloch:2015,Krechetnikov:2017,Ghadiri:2019}, an Eckhaus instability must insert new wavelengths (cells). However, as is obvious from energy conservation, the soliton amplitude must decrease as it propagates outwards, which means that if one applies the intuition developed in the plane case, then the wavelength of instability must change as well. Also, due to the lack of Galilean symmetry of the ncKdV, only a self-similar solution of the form $\eta(\tau,\xi) = \tau^{-2/3} F(\zeta), \ \zeta = \tau^{-1/3} \xi$ exists in the cylindrical case -- its speed dependence upon its amplitude is obscured compared to \eqref{soliton:1D:KdV}; however, one may still approximately adapt the qualitative 1D picture to the cylindrical case, as was done in the numerical studies of \citet{Maxon:1974b,Ko:1979}\footnote{While starting with a $\mathrm{sech}^{2}$-soliton shape as an IC approximately follows this quasi-1D picture, it is clear that due to the amplitude decrease with the radial distance the dynamics will eventually exit the KdV regime and switch to the NLS one, as suggested by the fact that small amplitude solutions of KdV are governed by NLS \citep{Dias:2005}, in which case the soliton assumes the $\mathrm{sech}$-form.}. As a result, the mechanism of self-focusing existing in the plane soliton case, i.e. when the soliton amplitude change leads to a variation in its speed and hence self-focusing and instability \citep{Askaryan:1962,Kadomtsev:1982}, must be modified in the cylindrical case. Moreover, the soliton stretching in the transverse direction should counteract any other possible self-focusing mechanisms leading to transverse instability. Hence, the question arises whether cylindrical solitons can experience a transverse instability at all. Besides that, there is yet another crucial difference between plane and cylindrical geometries -- the single-soliton solutions in the latter case \citep{Maxon:1974b} no longer have exponential decay both in front of and behind the soliton, but instead possess a slowly decaying oscillatory tail, i.e. there exists no localized soliton in the cylindrical case, which makes the theory more difficult \citep{Freeman:1980}; this motivated naming the corresponding solutions `nonlocal' solitons \citep{Boyd:1988}, though the governing equations are local and the semantics of the term ``soliton'' is a subject of recurrent contemplation \citep{Infeld:2000}. While this fact of oscillatory tails in solitons is well-known in the context of KdV \citep{Ablowitz:1977,Johnson:1980}, it is less so for the NLS case. To illustrate this point, note that in the case of a \textit{radial NLS}, i.e. when $\psi_{\xi\xi} \pm \alpha \psi_{YY} \rightarrow \partial_{r}^{2} + \frac{1}{r} \partial_{r}$ in \eqref{eqn:NLS-1D}, one can still reduce \eqref{eqn:NLS-1D} to an equation of the type \eqref{base-state:1D-NLS:reduced}. Indeed, looking for a solution in the form $\psi = e^{\i \omega \tau} v(r)$, multiplying \eqref{eqn:NLS-1D} by $v_{r}$, and integrating w.r.t.
the cylindrical measure $r \d r$, instead of \eqref{base-state:1D-NLS:reduced} we get $- \omega \, v^{2} - v^{\prime 2} + \frac{1}{2} v^{4} = 0$, provided the term arising from integration by parts vanishes, $\left.r v^{\prime 2}\right|_{0}^{\infty} = 0$. As a result, instead of \eqref{base-state:1D-NLS:reduced} we obtain \begin{align} \label{base-state:radial-NLS:reduced} w^{\prime 2} = - \omega \, w^{2} + \frac{1}{2}, \end{align} where the difference in signs from \eqref{base-state:1D-NLS:reduced} is notable. The resulting general solution is either constant everywhere, $v(r) = \pm (2 \, \omega)^{1/2}$ -- the extreme case of a nonlocalized soliton -- or $v(r) = (2 \, \omega)^{1/2} \sec{\left(\omega^{1/2} r + \varphi\right)}$ with an arbitrary phase $\varphi$; the latter solution does not satisfy the condition $\left.r v^{\prime 2}\right|_{r=\infty} = 0$ necessary to arrive at \eqref{base-state:radial-NLS:reduced} and is periodically singular. This demonstrates the lack of localization in cylindrical geometry characteristic of the plane 1D case. One implication of this is that an attempt to apply the Vakhitov-Kolokolov stability approach, mentioned above for plane (1D) solitons, in the cylindrical case fails not only because the power curve $P(\mu)$ diverges, but also because $P(\mu)$ does not depend on the propagation constant, as follows from a simple scaling argument. With this introduction to a range of general questions, the outline of the manuscript is as follows. Following the derivations of the governing equations in the deep (\S \ref{subsec:ncNLS}) and shallow (\S \ref{subsec:ncKdV}) water limits, we will discuss the origin and implications of the potential term in the NLS model (\S \ref{subsec:heuristic-analysis}). In \S\S \ref{subsec:ncNLS:BS} and \ref{subsec:soliton:ncKdV} we will construct the ground state solitary waves for deep and shallow water, respectively. Since the envelope equation derived in the deep water case -- the Gross-Pitaevskii (GP) equation with a potential -- is new, its properties and key base state solutions will be studied in detail, including with the help of dynamical systems tools, in order to get a better insight into their structure. Stability of these solutions will be studied in \S\S \ref{subsec:spectral-analysis:ncNLS},\ref{subsec:Lagrange-Dirichlet} and \S\S \ref{subsec:KdV:stability-preliminary},\ref{subsec:KP:ncKdV:analysis}. In the case of the GP equation the stability analysis will be done from both spectral (\S \ref{subsec:spectral-analysis:ncNLS}) and nonlinear Hamiltonian (\S \ref{subsec:Lagrange-Dirichlet}) perspectives, while in the case of the ncKdV the general considerations in \S \ref{subsec:KdV:stability-preliminary} will be followed in \S \ref{subsec:KP:ncKdV:analysis} with the derivation of the linear amplitude equation governing instability in the spirit of \cite{Kadomtsev:1970}, along with its analysis. Finally, while conservation laws will be constructed and discussed for both the GP (\S \ref{subsec:GP:conservation-laws}) and ncKdV (\S \ref{subsec:soliton:ncKdV}) equations, in the former case the condition for self-focusing and singularity formation will be identified in analogy to that of the standard NLS equation. \section{Waves on deep water} \label{sec:deep-water} \subsection{Derivation of the envelope equation} \label{subsec:ncNLS} Let us first consider concentric water waves on deep water in the inviscid potential approximation, for which it is natural to adopt a cylindrical system of coordinates.
The corresponding non-dimensional system for the velocity potential $\phi$ and interfacial deflection $\eta$ from the quiescent state, coupled through kinematic and dynamic boundary conditions (BCs), reads \begin{subequations} \label{system:deep-water:non-dimensional:cylindrical} \begin{align} \label{bulk:Laplace:non-dimensional:cylindrical} z \le \varepsilon \, \eta(t,r,\theta)&: \quad \left\{\begin{array}{c} \Delta \phi \equiv \frac{1}{r} \frac{\partial}{\partial r}\left(r \frac{\partial \phi}{\partial r}\right) + \frac{1}{r^{2}} \frac{\partial^{2} \phi}{\partial \theta^{2}} + \frac{\partial^{2} \phi}{\partial z^{2}} = 0, \\ \nabla \phi \rightarrow 0, \ z \rightarrow - \infty, \end{array}\right. \\ \label{interface:kinematic:non-dimensional:cylindrical} z = \varepsilon \, \eta(t,r,\theta)&: \quad \phi_{z} = \eta_{t} + \varepsilon \, \nabla_{\perp} \phi \cdot \nabla_{\perp} \eta, \\ \label{interface:dynamic:non-dimensional:cylindrical} z = \varepsilon \, \eta(t,r,\theta)&: \quad \phi_{t} + \eta + \frac{\varepsilon}{2} \left|\nabla \phi\right|^{2} + We \, \nabla \cdot \mathbf{n} = 0, \end{align} \end{subequations} where $\nabla_{\perp} = (\partial_{r}, r^{-1} \partial_{\theta})$ and the scaled interfacial curvature \begin{align} \label{eqn:curvature:cylindrical} \nabla \cdot \mathbf{n} &= - \frac{\eta_{rr} \left(1 + \varepsilon^{2} \frac{\eta_{\theta}^{2}}{r^{2}}\right) + (1 + \varepsilon^{2} \eta_{r}^{2}) \frac{1}{r} \left(\frac{\eta_{\theta\theta}}{r} + \eta_{r}\right) + 2 \varepsilon^{2} \frac{\eta_{r}\eta_{\theta}}{r^{2}} \left(\frac{\eta_{\theta}}{r} - \eta_{\theta r}\right)}{\left(1 + \varepsilon^{2} \eta_{r}^{2} + \varepsilon^{2} \eta_{\theta}^{2}/r^{2}\right)^{3/2}}. \end{align} Above, the Weber number $We = \sigma k_{0}^{2} / (\rho \, g)$ measures the effect of surface tension relative to the wave inertia (driven by gravity) and $\varepsilon = a k_{0}$ is the wave amplitude (wave steepness) scaled w.r.t. the wavenumber $k_{0}$ of the carrier wave. The non-dimensionalization that led to \eqref{system:deep-water:non-dimensional:cylindrical} is dictated by the following considerations. Since our interest is to analyze the evolution of a narrow wavepacket centered around a wavenumber $k_{0}$, the latter sets the natural lengthscale for non-dimensionalization: \begin{align} \label{eqn:non-dimensionalization} (r,z) \rightarrow k_{0}^{-1} (r,z), \ t \rightarrow \omega_{0}^{-1} t, \ \eta \rightarrow a \, \eta, \ \phi \rightarrow a \, \omega_{0} \, k_{0}^{-1} \, \phi, \end{align} where $\omega_{0} = \omega(k_{0})$ is dictated by the deep water dispersion relation $\omega^{2} = g \, k$ for pure gravity-driven waves, $a$ is the wave amplitude, and the scaling for $\phi$ follows from balancing the fluid velocity at the interface with that of the interface itself, $\phi_{z} \sim \eta_{t}$. The scaled wave amplitude $\varepsilon$ is treated here as small, since we are interested in the balance of nonlinear and dispersive effects, which happens at small solution amplitudes only.
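To give a sense of scale, the following sketch evaluates $We$ for centimetre-scale carrier waves using standard values for clean water (illustrative only; $\sigma \approx 0.072$ N/m, $\rho \approx 1000$ kg/m$^{3}$), showing that surface tension becomes an order-one effect for wavelengths of a few centimetres:
\begin{verbatim}
import numpy as np

sigma, rho, g = 0.072, 1000.0, 9.81     # clean water at ~20 C (SI units)

def weber(k0):
    # We = sigma k0^2 / (rho g), per the non-dimensionalization above
    return sigma * k0**2 / (rho * g)

for lam in (0.01, 0.02, 0.05, 0.1):     # carrier wavelengths in metres
    k0 = 2 * np.pi / lam
    print(f"lambda = {lam*100:4.1f} cm  ->  We = {weber(k0):.3f}")
# lambda =  1.0 cm -> We ~ 2.9;  2.0 cm -> ~0.72;  5.0 cm -> ~0.12;  10 cm -> ~0.03
\end{verbatim}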
Because of the latter, we expand the kinematic and dynamic BCs (\ref{interface:kinematic:non-dimensional:cylindrical},\ref{interface:dynamic:non-dimensional:cylindrical}) in Taylor series around $z=0$, $f(z=\varepsilon\eta) = \left.f(0) + f^{\prime}(0) z + f^{\prime\prime}(0) \frac{z^{2}}{2} + \ldots\right|_{z=\varepsilon\eta}$, thereby making the spatial domain exactly the half-space $z \le 0$, as well as look for solutions in the series \begin{align} \label{expansion:NLS} \phi = \phi_{0} + \varepsilon \, \phi_{1} + \varepsilon^{2} \, \phi_{2} + \ldots, \ \eta = \eta_{0} + \varepsilon \, \eta_{1} + \varepsilon^{2} \, \eta_{2} + \ldots. \end{align} However, solving problem \eqref{system:deep-water:non-dimensional:cylindrical} with such a regular perturbation approach is known to lead to secular divergencies, which necessitates the introduction of multiple scales, cf. \cite{Hakim:1998} and \S \ref{subsec:heuristic-analysis}: \begin{align} \label{scales:multiple:NLS} (t,r,z) \rightarrow (t,T = \varepsilon \, t, \tau = \varepsilon^{2} \, t; r, R = \varepsilon \, r; z, Z = \varepsilon \, z) \end{align} with the corresponding transformation of derivatives, i.e. $\partial_{t} \rightarrow \partial_{t} + \varepsilon \, \partial_{T} + \varepsilon^{2} \, \partial_{\tau}$, $\partial_{r} \rightarrow \partial_{r} + \varepsilon \, \partial_{R}$, and $\partial_{z} \rightarrow \partial_{z} + \varepsilon \, \partial_{Z}$. The NLS proves to appear at the radii $r \sim \varepsilon^{-1}$, so we consider the balance at the lengthscale $R = \varepsilon \, r$. As a result, at the leading order we get the system \begin{equation} \label{ncNLS:O-0} \mathcal{O}(\varepsilon^{0}): \begin{cases} \phi_{0 zz} + \phi_{0 rr} = 0 & z < 0, \\ |\nabla\phi_{0}| < \infty & z \rightarrow -\infty, \\ \phi_{0 z} - \eta_{0 t} = 0 & z = 0, \\ \phi_{0 t} + \eta_{0} = 0 & z = 0, \end{cases} \end{equation} shown here for $We = 0$, as our first goal is to illustrate the derivation in the simplest possible case and then to point out the differences in the derivation when surface tension effects are included. The solution of \eqref{ncNLS:O-0} is \begin{subequations} \label{sln:ncNLS:O-0} \begin{align} \phi_{0} &= \psi_{0}(T,\tau,R,Z,\theta) \, e^{\i (\widehat{k}_{0} r - \widehat{\omega}_{0} t) + \widehat{k}_{0} z} + \mathrm{c.c.}, \\ \eta_{0} &= \i \, \widehat{\omega}_{0} \, \psi_{0}(T,\tau,R,0,\theta) \, e^{\i (\widehat{k}_{0} r - \widehat{\omega}_{0} t)} + \mathrm{c.c.}, \end{align} \end{subequations} where $\widehat{k}_{0}$ and $\widehat{\omega}_{0} = \widehat{k}_{0}^{1/2}$ are equal to one due to our choice of non-dimensionalization \eqref{eqn:non-dimensionalization}, but are kept here explicitly for now, which will be useful when we discuss the case $We \neq 0$, since the dispersion relation $\omega(k)$ will be different. Notably, at $\mathcal{O}(\varepsilon^{0})$ the problem is identical to the plane (1D) case.
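The leading-order solution can be verified directly; in the sympy sketch below (which checks only the carrier part, omitting the complex conjugate), the Laplace equation and the dynamic condition hold identically, while the kinematic condition enforces $\widehat{\omega}_{0}^{2} = \widehat{k}_{0}$:
\begin{verbatim}
import sympy as sp

r, z, t = sp.symbols('r z t', real=True)
k, w, psi = sp.symbols('khat what psi', positive=True)

phi0 = psi * sp.exp(sp.I * (k * r - w * t) + k * z)      # carrier part of phi_0
eta0 = sp.I * w * psi * sp.exp(sp.I * (k * r - w * t))   # carrier part of eta_0

print(sp.simplify(sp.diff(phi0, z, 2) + sp.diff(phi0, r, 2)))   # Laplace: 0
print(sp.simplify((sp.diff(phi0, t) + eta0).subs(z, 0)))        # dynamic BC: 0
kin = (sp.diff(phi0, z) - sp.diff(eta0, t)).subs(z, 0)
print(sp.simplify(kin.subs(w, sp.sqrt(k))))   # kinematic BC: 0 iff what^2 = khat
\end{verbatim}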
At the next order, however, we start observing some differences \begin{equation} \label{ncNLS:O-1} \mathcal{O}(\varepsilon^{1}): \begin{cases} \phi_{1 zz} + \phi_{1 rr} = - 2 \left(\phi_{0 zZ} + \phi_{0 rR}\right) - \frac{\phi_{0 r}}{R} & z < 0, \\ |\nabla\phi_{1}| < \infty & z \rightarrow -\infty, \\ \phi_{1 z} + \phi_{1 tt} = - 2 \phi_{0 t T} - \phi_{0 Z} - \eta_{0 t} \phi_{0 z t} - \phi_{0 z} \phi_{0 z t} & \\ \qquad - \eta_{0} \left(\phi_{0 z tt} + \phi_{0 zz}\right) + \eta_{0 r} \phi_{0 r} - \phi_{0 r} \phi_{0 r t} & z = 0, \\ \phi_{1 t} + \eta_{1} = - \phi_{0 T} - \frac{1}{2} \left(\phi_{0 z}^{2} + \phi_{0 r}^{2}\right) - \eta_{0} \phi_{0 z t} & z = 0, \end{cases} \end{equation} where instead of the kinematic condition we provided a \textit{combined} one, constructed by adding the dynamic condition \eqref{interface:dynamic:non-dimensional:cylindrical}, differentiated with respect to time $t$, to the kinematic condition \eqref{interface:kinematic:non-dimensional:cylindrical}, and subsequently applying the multiple-scales expansion outlined earlier; the use of the combined boundary condition makes it easier to identify the resonances compared to dealing with the system of kinematic and dynamic conditions. The entire right-hand side of the Poisson equation in \eqref{ncNLS:O-1} leads to secular terms containing the exponents $e^{\pm \i (\widehat{k}_{0} r - \widehat{\omega}_{0} t)}$, the factors of which vanish provided that the no-resonance condition holds \begin{align} \label{conditions:no-resonance:Laplace-1:cylindrical} \psi_{0 Z}(T,\tau,R,Z,\theta) = - \i \, \left(\psi_{0 R}(T,\tau,R,Z,\theta) + \frac{\psi_{0}(T,\tau,R,Z,\theta)}{2R}\right), \end{align} along with the complex conjugate of this expression; both render the Poisson equation in \eqref{ncNLS:O-1} homogeneous. For future simplifications, differential consequences of \eqref{conditions:no-resonance:Laplace-1:cylindrical} will be needed: \begin{align} \psi_{0 ZZ}(T,\tau,R,Z,\theta) = \frac{\psi_{0}(T,\tau,R,Z,\theta)}{4 R^{2}} - \frac{\psi_{0 R}(T,\tau,R,Z,\theta)}{R} - \psi_{0 RR}(T,\tau,R,Z,\theta). \end{align} Similarly, the right-hand side of the combined boundary condition in \eqref{ncNLS:O-1}, after simplification with \eqref{conditions:no-resonance:Laplace-1:cylindrical} evaluated at $Z=0$, leads to the following conditions necessary for avoiding secularities: \begin{align} \label{conditions:no-resonance:combinedBC-1:cylindrical} \psi_{0 T}(T,\tau,R,0,\theta) + \frac{\psi_{0 R}(T,\tau,R,0,\theta)}{2 \widehat{k}_{0}^{1/2}} + \frac{\psi_{0}(T,\tau,R,0,\theta)}{4 R \widehat{k}_{0}^{1/2}} = 0 \end{align} along with its complex conjugate and the differential consequence \begin{align} \psi_{0 TT}(T,\tau,R,0,\theta) = - \frac{\psi_{0}(T,\tau,R,0,\theta)}{16 R^{2} \widehat{k}_{0}} + \frac{\psi_{0 R}(T,\tau,R,0,\theta)}{4 R \widehat{k}_{0}} + \frac{\psi_{0 RR}(T,\tau,R,0,\theta)}{4 \widehat{k}_{0}}. \end{align} Integration of \eqref{conditions:no-resonance:combinedBC-1:cylindrical} with the initial data $\psi_{0} = \psi_{0}(0)$ at $T=0$, $R = R(0)$ using the method of characteristics gives $\psi_{0} = \left(R(0)/R\right)^{1/2} \psi_{0}(0)$ along $R = R(0) + \frac{1}{2} \widehat{k}_{0}^{-1/2} T$, and shows that the first two terms in \eqref{conditions:no-resonance:combinedBC-1:cylindrical} represent advection, i.e. the envelope traveling at the group velocity $\widehat{\omega}_{0}^{\prime}(\widehat{k}_{0}) = \frac{1}{2} \widehat{k}_{0}^{-1/2}$, and the last one -- dilution affecting the amplitude of the wavepacket, i.e.
decreasing it with $R$ as $\psi_{0} \sim R^{-1/2}$ on the timescale $T$; as we will see, on the timescale $\tau$ the amplitude likewise varies as $\psi_{0} \sim R^{-1/2}$. The condition \eqref{conditions:no-resonance:combinedBC-1:cylindrical} nullifies the inhomogeneous terms in the combined BC and results in the following solution for $\phi_{1}$: \begin{align} \label{sln:ncNLS:O-1} \phi_{1} &= \psi_{1}(T,\tau,R,Z,\theta) \, e^{\i (\widehat{k}_{0} r - \widehat{\omega}_{0} t) + \widehat{k}_{0} z} + \mathrm{c.c.}, \end{align} while $\eta_{1}$ is found straightforwardly from the dynamic condition in \eqref{ncNLS:O-1}. Finally, the Laplace equation at the order required for balancing the nonlinearity and dispersion reads \begin{equation} \label{NLS:O-2} \mathcal{O}(\varepsilon^{2}): \begin{cases} \phi_{2 zz} + \phi_{2 rr} = - 2 \left(\phi_{1 zZ} + \phi_{1 rR}\right) - \left(\phi_{0 ZZ} + \phi_{0 RR}\right) & \\ \qquad\qquad\qquad- \frac{\phi_{1 r}}{R} - \frac{\phi_{0 R}}{R} - \frac{\phi_{0 \theta\theta}}{R^{2}} & z < 0, \\ |\nabla\phi_{2}| < \infty & z \rightarrow -\infty, \end{cases} \end{equation} which brings about the no-resonance conditions \begin{multline} \label{conditions:no-resonance:Laplace-2:cylindrical} \psi_{1 Z}(T,\tau,R,Z,\theta) = - \frac{\psi_{0}(T,\tau,R,Z,\theta) + 4 \, \psi_{0 \theta\theta}(T,\tau,R,Z,\theta)}{8 R^{2} \widehat{k}_{0}} \\ - \i \, \frac{\psi_{1}(T,\tau,R,Z,\theta) + 2 R \psi_{1 R}(T,\tau,R,Z,\theta)}{2 R}. \end{multline} The corresponding combined boundary condition at $\mathcal{O}(\varepsilon^{2})$ (not shown due to the excessive number of terms), simplified with the conditions (\ref{conditions:no-resonance:Laplace-1:cylindrical},\ref{conditions:no-resonance:combinedBC-1:cylindrical},\ref{conditions:no-resonance:Laplace-2:cylindrical})\footnote{The perturbation $\psi_{1}$ must obey the same condition \eqref{conditions:no-resonance:combinedBC-1:cylindrical} as $\psi_{0}$ since, for coherence of the envelope, both perturbations $\psi_{0}$ and $\psi_{1}$ must travel at the same group velocity.} and their differential consequences, leads to the no-resonance condition in the form of the ncNLS amended with an inverse-square potential: \begin{align} \label{ncNLS} - 2 \, \i \, \widehat{k}_{0}^{1/2} \psi_{0 \tau} + \frac{1}{4 \, \widehat{k}_{0}} \left[\psi_{0 RR} + \frac{\psi_{0 R}}{R} - \frac{3}{4} \frac{\psi_{0}}{R^{2}}\right] - \frac{1}{2 R^{2} \widehat{k}_{0}} \psi_{0 \theta\theta} + 4 \, \widehat{k}_{0}^{4} |\psi_{0}|^{2} \psi_{0} = 0, \end{align} which in the limit $R \rightarrow \infty$ obviously reduces to the 1D NLS if the dependence on the transverse coordinate is neglected, or to the 2D NLS derived by \citet{Zakharov:1968} in the Cartesian coordinates if one lets $R \, \theta \rightarrow y$ (and $r \rightarrow x$). Inclusion of surface tension ($We > 0$) brings about several key differences and the associated algebraic complications. First, the dynamic condition in \eqref{ncNLS:O-0} is amended with the leading order curvature terms $\nabla \cdot \mathbf{n} \approx - \eta_{rr} \left(1 - \frac{3}{2} \varepsilon^{2} \eta_{r}^{2}\right) + \mathcal{O}(\varepsilon^{3})$ contributing to the resulting envelope equation, so that the frequency $\widehat{\omega}_{0}$ in \eqref{sln:ncNLS:O-0} modifies to $\widehat{\omega}_{0}^{2} = \widehat{k}_{0} \left(1 + We \, \widehat{k}_{0}^{2}\right)$, where $\widehat{k}_{0}=1$ as in the case $We = 0$.
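As a quick consistency check, the factor $(1 + 3 \, We \, \widehat{k}_{0}^{2})/(2 \widehat{\omega}_{0})$ appearing in the transport equation below is precisely the group velocity of this modified dispersion relation; a sympy sketch:
\begin{verbatim}
import sympy as sp

k, We = sp.symbols('k We', positive=True)
omega = sp.sqrt(k * (1 + We * k**2))       # gravity-capillary dispersion relation
cg = sp.diff(omega, k)                     # group velocity
print(sp.simplify(cg - (1 + 3 * We * k**2) / (2 * omega)))   # -> 0
\end{verbatim}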
Naturally, the derivation of the combined boundary condition requires not only substituting $\eta_{t}$ from the kinematic condition \eqref{interface:kinematic:non-dimensional:cylindrical}, but also calculating $\eta_{rt}$, $\eta_{rrt}$, $\eta_{\theta t}$, $\eta_{\theta\theta t}$ and $\eta_{r \theta t}$ from the kinematic condition \eqref{interface:kinematic:non-dimensional:cylindrical} in order to substitute them into the differentiated dynamic condition \eqref{interface:dynamic:non-dimensional:cylindrical}. While the condition \eqref{conditions:no-resonance:Laplace-1:cylindrical} and its differential consequences are not affected by surface tension, equation \eqref{conditions:no-resonance:combinedBC-1:cylindrical} now reads \begin{align} \label{condition:no-resonance:ST} \psi_{0 T}(T,\tau,R,0,\theta) = - \left(1 + 3 We \, \widehat{k}_{0}^{2}\right) \left[\frac{\psi_{0 R}(T,\tau,R,0,\theta)}{2 \, \widehat{\omega}_{0}} + \frac{\psi_{0}(T,\tau,R,0,\theta)}{4 R \widehat{\omega}_{0}}\right], \end{align} and still retains the meaning that the envelope of the wavepacket (and its complex conjugate $\psi_{0}^{*}$) propagates at the group velocity; differential consequences of \eqref{condition:no-resonance:ST} are computed similarly to the zero-surface-tension case above. Next, as opposed to \eqref{sln:ncNLS:O-1}, the solution for $\phi_{1}$ now contains an inhomogeneous part, leading to \begin{multline} \phi_{1} = \psi_{1}(T,\tau,R,Z,\theta) \, e^{\i (\widehat{k}_{0} r - \widehat{\omega}_{0} t) + \widehat{k}_{0} z} \\ + \frac{3 \, \i \, We \, \widehat{k}_{0}^{4} \, \psi_{0}^{2}(T,\tau,R,Z,\theta)}{\left(1 - 2 \, We \, \widehat{k}_{0}^{2}\right) \widehat{\omega}_{0}} \, e^{2 \, \i (\widehat{k}_{0} r - \widehat{\omega}_{0} t) + 2 \, \widehat{k}_{0} z} + \mathrm{c.c.}. \end{multline} The no-resonance condition \eqref{conditions:no-resonance:Laplace-2:cylindrical} arising at $\mathcal{O}(\varepsilon^{2})$ stays intact. As a result, the envelope equation in the presence of surface tension now generalizes from \eqref{ncNLS} to \begin{multline} \label{ncNLS-ST} - 2 \, \i \, \widehat{\omega}_{0} \psi_{0 \tau} + \frac{1 - 6 \, We \, \widehat{k}_{0}^{2} - 3 \, We^{2} \, \widehat{k}_{0}^{4}}{4 \, \widehat{\omega}_{0}^{2}} \left(\psi_{0 RR} + \frac{\psi_{0 R}}{R}\right) - \frac{3 + We \, \widehat{k}_{0}^{2} \left(2 + 3 \, We \, \widehat{k}_{0}^{2}\right)}{16 \, R^{2} \, \widehat{\omega}_{0}^{2}} \psi_{0} \\ - \frac{1 + 3 \, We \, \widehat{k}_{0}^{2}}{2 \, R^{2} \, \widehat{k}_{0}} \psi_{0 \theta\theta} + \frac{\widehat{k}_{0}^{5} \left(8 + We \, \widehat{k}_{0}^{2} + 2 \, We^{2} \, \widehat{k}_{0}^{4}\right)}{2 \, \widehat{\omega}_{0}^{2} (1 - 2 \, We \, \widehat{k}_{0}^{2})} |\psi_{0}|^{2} \psi_{0} = 0, \end{multline} which in the limit $R \rightarrow \infty$ reduces to the nearly plane NLS \citep{Kawahara:1975,Djordjevic:1977,Ablowitz:1979}.
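The $R^{-1/2}$ amplitude decay noted earlier can be confirmed numerically by integrating \eqref{condition:no-resonance:ST} along its characteristics; the sketch below (with arbitrary illustrative parameters) recovers the exponent:
\begin{verbatim}
import numpy as np

We, k0 = 0.1, 1.0
w0 = np.sqrt(k0 * (1 + We * k0**2))
cg = (1 + 3 * We * k0**2) / (2 * w0)       # group velocity

# characteristics of the transport equation: dR/dT = cg, dpsi/dT = -cg*psi/(2R)
T = np.linspace(0.0, 200.0, 200_001)
R = 10.0 + cg * T
dT = T[1] - T[0]
psi = np.empty_like(T)
psi[0] = 1.0
for n in range(len(T) - 1):
    psi[n + 1] = psi[n] * (1.0 - cg * dT / (2.0 * R[n]))

slope = np.polyfit(np.log(R), np.log(psi), 1)[0]
print(slope)   # ~ -0.5, i.e. psi ~ R^{-1/2}
\end{verbatim}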
Adopting the notation for the coefficients in the NLS from the latter reference, \eqref{ncNLS-ST} can be compactly rewritten as \begin{align} \label{ncNLS-ST:abstract} \i \, \psi_{\tau} + \lambda_{\infty} \Delta_{R} \psi + \lambda_{\infty}^{\prime} \frac{\psi}{R^{2}} + \frac{\mu_{\infty}}{R^{2}} \psi_{\theta\theta} = \chi_{\infty} |\psi|^{2} \psi, \end{align} where we dropped the index $0$ and introduced the notation for the radial Laplacian $\Delta_{R} = \partial_{R}^{2} + \frac{1}{R}\partial_{R}$; the coefficients in \eqref{ncNLS-ST:abstract} are \begin{subequations} \label{coefficients:GP} \begin{align} \lambda_{\infty} &= - \frac{1 - 6 \, We \, \widehat{k}_{0}^{2} - 3 \, We^{2} \, \widehat{k}_{0}^{4}}{8 \, \widehat{\omega}_{0}^{3}} \mathop{\longrightarrow}_{We \rightarrow 0} - \frac{1}{8}, \\ \lambda_{\infty}^{\prime} &= \frac{3 + We \, \widehat{k}_{0}^{2} \left(2 + 3 \, We \, \widehat{k}_{0}^{2}\right)}{32 \, \widehat{\omega}_{0}^{3}} \mathop{\longrightarrow}_{We \rightarrow 0} \frac{3}{32}, \\ \mu_{\infty} &= \frac{1 + 3 \, We \, \widehat{k}_{0}^{2}}{4 \, \widehat{k}_{0} \, \widehat{\omega}_{0}} \mathop{\longrightarrow}_{We \rightarrow 0} \frac{1}{4}, \\ \chi_{\infty} &= \frac{\widehat{k}_{0}^{5} \left(8 + We \, \widehat{k}_{0}^{2} + 2 \, We^{2} \, \widehat{k}_{0}^{4}\right)}{4 \, \widehat{\omega}_{0}^{3} (1 - 2 \, We \, \widehat{k}_{0}^{2})} \mathop{\longrightarrow}_{We \rightarrow 0} 2, \end{align} \end{subequations} where, here and in what follows, we set $\widehat{k}_{0} = 1$ based on the non-dimensionalization \eqref{eqn:non-dimensionalization}. Once surface tension effects are introduced, $\chi_{\infty}$ changes sign from positive to negative at $We = \frac{1}{2}$ (through the pole of its denominator), while $\lambda_{\infty}$ changes sign from negative to positive at $We = \frac{2}{\sqrt{3}} - 1$. The latter implies that the type of equation \eqref{ncNLS-ST:abstract} changes from hyperbolic to elliptic in accordance with the classification of its Cartesian counterpart \eqref{eqn:NLS-1D}. This will have certain consequences for the stability of solutions of \eqref{ncNLS-ST:abstract}, as we will see in \S \ref{subsec:spectral-analysis:ncNLS}. \subsection{On the origin of the potential and its implications} \label{subsec:heuristic-analysis} Since the work of \citet{Zakharov:1968}, where the 2D NLS was derived, it has been tacitly assumed that the Laplacian $\Delta$ stays intact when the NLS is applied to the axisymmetric case \citep{Zakharov:1976b,Jones:1988}; however, the principle of covariance (coordinate-independence) applies only to the fundamental physical laws, such as Euler's equations of fluid motion, and not to amplitude equations deduced from them under concrete asymptotic assumptions, despite the `universal' character of the latter. To understand the origin of the potential term $V(R) \sim \frac{1}{R^{2}}$ in (\ref{ncNLS},\ref{ncNLS-ST:abstract}), let us perform a heuristic derivation of the linear part of the envelope equation in the case of pure gravity-driven waves.
To bring in more physical intuition, let us consider the linear part of \eqref{system:deep-water:non-dimensional:cylindrical} back in dimensional variables: \begin{subequations} \label{system:deep-water:dimensional:cylindrical} \begin{align} \label{bulk:Laplace:dimensional:cylindrical} z \le \varepsilon \, \eta(t,r,\theta)&: \quad \left\{\begin{array}{c} \Delta \phi \equiv \frac{1}{r} \frac{\partial}{\partial r}\left(r \frac{\partial \phi}{\partial r}\right) + \frac{1}{r^{2}} \frac{\partial^{2} \phi}{\partial \theta^{2}} + \frac{\partial^{2} \phi}{\partial z^{2}} = 0, \\ \nabla \phi \rightarrow 0, \ z \rightarrow - \infty, \end{array}\right. \\ \label{interface:kinematic:dimensional:cylindrical} z = \varepsilon \, \eta(t,r,\theta)&: \quad \phi_{z} = \eta_{t}, \\ \label{interface:dynamic:dimensional:cylindrical} z = \varepsilon \, \eta(t,r,\theta)&: \quad \phi_{t} + g \eta = 0, \end{align} \end{subequations} the straightforward analysis of which in the axisymmetric case leads to the following form of the solution for the free surface elevation: \begin{align} \eta(t,r) = \int_{0}^{\infty}{\widehat{\eta}_{0}(k) J_{0}(k r) e^{-\i \omega(k) t} \, k \d k} + \mathrm{c.c.}, \end{align} where $\widehat{\eta}_{0}(k)$ is the Hankel transform of the initial free surface deflection $\eta_{0}(r)$. The asymptotic expansion of this expression away from the origin, $k r \gg 1$, and in the form of a narrow wavepacket $|\delta k| = |k - k_{0}| \ll k_{0}$ near some fixed wavenumber $k_{0}$ yields \begin{multline} \label{free-surface:deep-water:asymptotics} \eta(t,r) = e^{\i (k_{0} r - \omega_{0} t)} \frac{\varepsilon^{1/2}}{R^{1/2}} \int_{-\infty}^{\infty} \widehat{\eta}_{0}(k_{0},\kappa) \bigg[e^{\i \left(\kappa R - \omega^{\prime}(k_{0}) T - \frac{\omega^{\prime\prime}(k_{0})}{2} \kappa^{2} \tau - \frac{\pi}{4}\right)} \\ + \mathcal{O}\left(\frac{\varepsilon}{R}\right)\bigg] \, \kappa^{1/2} \d \kappa + \mathrm{c.c.}, \end{multline} that is, $\eta(t,r)$ is a traveling wave $e^{\i (k_{0} r - \omega_{0} t)}$ modulated by an envelope function \begin{align} \label{envelope:NLS:heuristic} \psi_{0}(T,\tau,R) \sim \frac{1}{R^{1/2}} \int_{-\infty}^{\infty}{\widehat{\eta}_{0}(k_{0},\kappa) e^{\i \left(\kappa R - \omega^{\prime}(k_{0}) T - \frac{\omega^{\prime\prime}(k_{0})}{2} \kappa^{2} \tau - \frac{\pi}{4}\right)} \, \kappa^{1/2} \d \kappa} \end{align} evolving on the slow temporal, $T = \varepsilon t$ and $\tau = \varepsilon^{2} t$, and spatial, $R = \varepsilon r$, scales, which naturally appear in this narrow wavepacket approximation $\kappa = \delta k/\varepsilon$.
Taking the derivatives of \eqref{envelope:NLS:heuristic}, we get the following factors multiplying the integrand in \eqref{envelope:NLS:heuristic}: \begin{align} \begin{split} \psi_{0T} \sim - \frac{\i \kappa \omega^{\prime}(k_{0})}{R^{1/2}}, \ &\psi_{0\tau} \sim - \frac{\i \kappa^{2} \omega^{\prime\prime}(k_{0})}{2 R^{1/2}}, \\ \psi_{0R} \sim - \frac{1}{2} R^{-3/2} + \frac{\i \kappa}{R^{1/2}}, \ \psi_{0RR} &\sim \frac{3}{4} R^{-5/2} - \i \kappa R^{-3/2} - \kappa^{2} R^{-1/2}, \end{split} \end{align} where we omitted the integral sign for brevity, and immediately find that \begin{align} \psi_{0T} + \frac{\omega^{\prime}(k_{0})}{2 R} \psi_{0} + \omega^{\prime}(k_{0}) \psi_{0R} = 0, \end{align} which is the no-resonance condition \eqref{conditions:no-resonance:combinedBC-1:cylindrical} identified above in the course of the formal analysis, as well as \begin{align} \label{eqn:SE:cylindrical} \psi_{0\tau} - \frac{\i \, \omega^{\prime\prime}(k_{0})}{2} \left(\psi_{0RR} + \frac{1}{R} \psi_{0R} - \frac{1}{4 R^{2}} \psi_{0}\right) = 0, \end{align} which is almost the same as the linear part of \eqref{ncNLS} except for the coefficient in front of the potential, i.e. $-3/4$ vs $-1/4$. Notably, with the transformation $\psi_{0} = R^{-1/2} \widetilde{\psi}_{0}(\tau,R)$ the above equation reduces to the 1D Schr\"{o}dinger equation \begin{align} \widetilde{\psi}_{0\tau} - \frac{\i\omega^{\prime\prime}(k_{0})}{2} \widetilde{\psi}_{0RR} = 0, \end{align} i.e. the role the potential term $- \frac{1}{4 R^{2}} \psi_{0}$ plays in \eqref{eqn:SE:cylindrical} is to modify the amplitude of the wave as it travels toward or away from the origin; this, in turn, explains the appearance of the potential in our system -- without it the wave would travel as a ``free particle'' with unmodified amplitude. A salient feature of the above heuristic derivation was the assumption that the wavepacket changes its width in the same fashion as in the 1D case. This is evident from the approximation \eqref{free-surface:deep-water:asymptotics}, which is valid only in the limit $k r \rightarrow \infty$. However, as the behavior of the Bessel function $J_{0}(k r)$ for large, but finite, $k r$ entails, the speed of propagation changes as one approaches the origin: this effect leads to a more severe change in the wavepacket width and, once the corresponding wavelength $\lambda = 2 \pi / k$ becomes comparable to the distance $r$ from the origin, is responsible for the formation of a singularity in the form of a spike jet. Therefore, in order to account for a stronger change in the wavepacket width, the potential must be modified from $- \frac{1}{4 R^{2}} \psi_{0}$ and, as the formal derivation in \S \ref{subsec:ncNLS} showed, it indeed becomes stronger (through the modified factor $-\frac{3}{4}$), in the sense that it leads to a stronger singularity of the solution near the origin than the $\sim R^{-1/2}$ behavior implied by \eqref{eqn:SE:cylindrical}, as we will see in \S \ref{subsec:ncNLS:BS}. The resulting envelope equations (\ref{ncNLS},\ref{ncNLS-ST:abstract}) arise from a balance between nonlinearity and dispersion of the wavepacket, which occurs only at some distance from the origin as the wave amplitude varies with it -- this is a crucial difference from the translationally invariant case, in which one can take the limit of small-amplitude solutions and be left with the same linear part; in the case of cylindrical waves this is no longer so, i.e.
the linear part of \eqref{ncNLS}, when nonlinearity and dispersion are balanced, does not correspond to \eqref{eqn:SE:cylindrical}, which applies when nonlinearity is absent. Notably, for both potentials $- \frac{1}{4 R^{2}} \psi_{0}$ and $- \frac{3}{4 R^{2}} \psi_{0}$ the wave amplitude drops as $R^{-1/2}$, but the behavior near the origin proves to be different (\S \ref{subsec:ncNLS:BS}). Finally, the technical reason for the appearance of the $- \frac{3}{4 R^{2}} \psi_{0}$ potential instead of $- \frac{1}{4 R^{2}} \psi_{0}$ is the first term in the second-order no-resonance condition \eqref{conditions:no-resonance:Laplace-2:cylindrical}, which entangles both $\psi_{1}$ and $\psi_{0}$ -- this effect is absent in the plane (1D and 2D) cases. In any case, the appearance of an inverse-square potential is a generic property of cylindrical envelope wave equations; for example, a derivation of the NLS from Maxwell's equations in nonlinear optics gives the factor $-1$ at the inverse-square potential. As we saw from \eqref{ncNLS-ST:abstract}, in the case of waves on deep water this factor changes with surface tension as $\lambda_{\infty}^{\prime}/\lambda_{\infty}$. Our NLS equations (\ref{ncNLS},\ref{ncNLS-ST:abstract}) with the inverse-square potential belong to the Gross-Pitaevskii type \citep{Gross:1961,Pitaevskii:1961}, originally derived to describe the ground-state wavefunction of a quantum system composed of a Bose-Einstein condensate in an external potential, with the nonlinearity responsible for the interaction between particles. The interested reader may find a mechanistic interpretation of equation \eqref{ncNLS-ST:abstract} in Appendix~\ref{appx:mechanistic-interpretation}. Notably, an inverse-square potential also arises, though not in the context of the NLS, in the quantum-mechanical motion of a charged particle in the field of a stationary electric dipole \citep{Case:1950,Kalf:1975,Reed:1979}, in molecular physics \citep{Camblong:2001}, nuclear physics \citep{Beane:2001,Esteve:2002}, black holes \citep{Regge:1957,Zerilli:1970,Moncrief:1974,Strominger:1998,Claus:1998,Azcarraga:1999,Solodukhin:1999,Michelson:2000,Papadopoulos:2000,Bellucci:2002,Carlip:2002}, in wave propagation on conic manifolds \citep{Cheeger:1982}, and in the theory of combustion \citep{Bebernes:1989}. Since in our case the potential $V(R) = - \frac{3}{4 R^{2}}$ and the Laplacian are of equal strength, the former cannot be neglected and the GP equation retains the NLS scaling symmetry \begin{align} u(\tau,R) \mapsto \lambda u(\lambda^{2} \tau,\lambda R). \end{align} Because of that it is known to have some peculiar properties, such as the absence of a ground state, i.e. no lower limit on the allowed energies \citep{Essin:2006}, and a symmetry-breaking anomaly emerging in the process of renormalization \citep{Essin:2006,Camblong:2000,Coon:2002}.
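The heuristic derivation above is also easy to verify symbolically: each Fourier component of the envelope \eqref{envelope:NLS:heuristic} satisfies the no-resonance (transport) condition and equation \eqref{eqn:SE:cylindrical} identically, the full envelope following by linearity. A minimal sympy sketch, in which the symbols \texttt{wp} and \texttt{wpp} stand for $\omega^{\prime}(k_{0})$ and $\omega^{\prime\prime}(k_{0})$ (the naming is ours):
\begin{verbatim}
# Check that a single Fourier component of the envelope
# (envelope:NLS:heuristic) obeys the transport condition and the
# cylindrical Schrodinger equation (eqn:SE:cylindrical).
import sympy as sp

R, T, tau, kappa = sp.symbols('R T tau kappa', positive=True)
wp, wpp = sp.symbols('wp wpp', real=True)  # omega'(k0), omega''(k0)

psi = R**sp.Rational(-1, 2)*sp.exp(
    sp.I*(kappa*R - wp*kappa*T - wpp*kappa**2*tau/2 - sp.pi/4))

transport = sp.diff(psi, T) + wp*(sp.diff(psi, R) + psi/(2*R))
schrodinger = sp.diff(psi, tau) - sp.I*wpp/2*(
    sp.diff(psi, R, 2) + sp.diff(psi, R)/R - psi/(4*R**2))

print(sp.simplify(transport), sp.simplify(schrodinger))  # -> 0 0
\end{verbatim}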
The spatial operator in \eqref{ncNLS} or, more generally, in \eqref{ncNLS-ST:abstract}: \begin{align} L = \Delta_{R} + \frac{\lambda_{\infty}^{\prime}}{\lambda_{\infty}} \frac{1}{R^{2}}, \end{align} has as eigenfunctions $L \phi_{\lambda} = \lambda \phi_{\lambda}$ either modified Bessel functions of real order $I_{\nu}\left(\lambda^{1/2} R\right)$, $K_{\nu}\left(\lambda^{1/2} R\right)$ with $\nu^{2} = - \lambda_{\infty}^{\prime} / \lambda_{\infty} > 0$, which are unbounded at infinity and at the origin, respectively, or of imaginary order $I_{\i\nu}\left(\lambda^{1/2} R\right)$, $K_{\i\nu}\left(\lambda^{1/2} R\right)$ with $\nu^{2} = \lambda_{\infty}^{\prime} / \lambda_{\infty} > 0$, which exhibit highly oscillatory behavior with the period decreasing near the origin. As we will see in \S \ref{subsec:ncNLS:BS}, these observations will have certain implications for the structure of solutions of (\ref{ncNLS},\ref{ncNLS-ST:abstract}), which can be regular or singular. Without the potential $V(R)$, the corresponding standard NLS is of critical type, since the dimension of the problem is $d=2$, while the order of the nonlinearity $|\psi_{0}|^{2n}$ is $n=1$, so that $n \, d = 2$. This borderline case separates the subcritical NLS with $n \, d < 2$, when all solutions exist globally, from the supercritical NLS with $n \, d > 2$, where singular solutions exist \citep{Fibich:2015}. Finally, while the standard defocusing NLS has a purely ``dispersive'' character, i.e. admits no solitary waves of the type \begin{align} \label{wave:Stokes} \psi_{0}(\tau,R) = e^{\i \mu \tau} u(R) \end{align} and the focusing NLS does have ground states \eqref{wave:Stokes}, which are unstable and lead to a finite-time blow-up, both the focusing and defocusing GP equations have solutions of the form \eqref{wave:Stokes}, as we will see in \S \ref{subsec:ncNLS:BS}. Singular solutions of the GP equation are as valuable as the widely studied finite-time singularities peculiar to the NLS \citep{Glassey:1977} -- such singularities are indicative of a localized behavior in the original unreduced physical system, the Euler equations, from which \eqref{ncNLS-ST:abstract} is deduced. Finally, as follows from the derivation in \S \ref{subsec:ncNLS}, the deduced GP equations (\ref{ncNLS},\ref{ncNLS-ST:abstract}) are valid only at asymptotically large distances from the origin, $R = \mathcal{O}(1)$, i.e. $r = \mathcal{O}(\varepsilon^{-1})$. Hence, while the deduced Gross-Pitaevskii equation captures the singularity at the origin, naturally expected there as in the spike solutions \citep{McAllister::2022}, due to the limitations of its applicability in that region one should not seek quantitative accuracy in describing the details of the corresponding singularities. Also, the symmetry of \eqref{ncNLS-ST:abstract} does not preclude the possibility of ring-type singularities at a finite distance from the origin, where the GP equation is applicable, as will be shown in \S \ref{subsec:ncNLS:BS}. In this context it is worth pointing out that the extensive and controversial research on the rate at which the singularity is approached, starting with \citet{Kelley:1965} and \citet{Zakharov:1976b} (see also the overviews by \citet{Rypdal:1986} and \citet{Sulem:1999}), is flawed not only because it was unjustifiably assumed that the Laplacian in the 2D NLS deduced in the Cartesian coordinates stays intact when the NLS is applied to an axisymmetric case, but also because the NLS and GP equations in the axisymmetric case are applicable only at sufficiently large distances from the origin.
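Since the sign of the ratio $\lambda_{\infty}^{\prime}/\lambda_{\infty}$ controls whether the order of the above Bessel functions is real or imaginary, it is convenient to be able to evaluate the coefficients \eqref{coefficients:GP} numerically; the minimal Python sketch below (the helper name is ours) also recovers the critical Weber number $\frac{2}{\sqrt{3}}-1$ at which $\lambda_{\infty}$ changes sign:
\begin{verbatim}
# Coefficients (coefficients:GP) with k0 = 1, omega0 = sqrt(1 + We).
import numpy as np
from scipy.optimize import brentq

def coeffs(We):
    w = np.sqrt(1.0 + We)                                    # omega0
    lam = -(1.0 - 6.0*We - 3.0*We**2)/(8.0*w**3)             # lambda_inf
    lamp = (3.0 + We*(2.0 + 3.0*We))/(32.0*w**3)             # lambda'_inf
    mu = (1.0 + 3.0*We)/(4.0*w)                              # mu_inf
    chi = (8.0 + We + 2.0*We**2)/(4.0*w**3*(1.0 - 2.0*We))   # chi_inf
    return lam, lamp, mu, chi

lam, lamp, mu, chi = coeffs(0.0)
print(lam, lamp, mu, chi)  # -0.125, 0.09375, 0.25, 2.0: the We -> 0 limits
print(lamp/lam)            # -0.75, i.e. a real Bessel order nu = sqrt(3)/2
print(brentq(lambda We: coeffs(We)[0], 0.0, 0.5))  # 0.1547... = 2/sqrt(3)-1
# note: chi_inf changes sign at We = 1/2 through the pole at 1 - 2*We = 0
\end{verbatim}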
The inapplicability of the NLS model near the blow-up, where focusing levels are high (sometimes claimed \citep{Fibich:2015} to be necessarily $\gg 10^{48}$ for the self-similar asymptotic rates to be valid), is also obvious, as the NLS was deduced only for sufficiently small, but finite, amplitudes allowing a balance with the dispersion effects, and the assumptions behind its derivation are no longer valid when the amplitude of the solution becomes incommensurate with the narrow wavepacket assumption. \subsection{Conservation laws, variance, and finite-time singularity} \label{subsec:GP:conservation-laws} To analyze the conservation laws of the GP equation \eqref{ncNLS-ST:abstract}, from physical considerations we supply the initial-value problem (IVP) for this equation with the BCs: \begin{align} \label{BCs:cNLS} R=0: \ \psi_{R} = 0; \ R \rightarrow \infty: \ \psi \rightarrow 0. \end{align} Multiplying \eqref{ncNLS-ST:abstract} with $\overline{\psi} = \psi^{r} - \i \psi^{i}$, \begin{align} \label{eqn:cNLS-times-psiconj} \i \, \overline{\psi} \, \psi_{\tau} + \lambda_{\infty} \overline{\psi} \, \Delta_{R} \psi + \frac{\lambda_{\infty}^{\prime}}{R^{2}} |\psi|^{2} + \frac{\mu_{\infty}}{R^{2}} \overline{\psi} \psi_{\theta\theta} - \chi_{\infty} \, |\psi|^{4} = 0, \end{align} and taking the imaginary part, we get \begin{align} \label{eqn:cNLS:psi-squared} \frac{1}{2} \frac{\d}{\d \tau}|\psi|^{2} + \lambda_{\infty} \left(\psi^{r} \Delta_{R} \psi^{i} - \psi^{i} \Delta_{R} \psi^{r}\right) + \frac{\mu_{\infty}}{R^{2}} \left(\psi^{r} \psi^{i}_{\theta\theta} - \psi^{i} \psi^{r}_{\theta\theta}\right) = 0, \end{align} where we took into account that $\overline{\psi} \, \psi_{\tau} = \psi^{r} \psi^{r}_{\tau} + \psi^{i} \psi^{i}_{\tau} + \i \left(\psi^{r} \psi^{i}_{\tau} - \psi^{i} \psi^{r}_{\tau}\right) = \frac{1}{2} \frac{\d}{\d \tau}|\psi|^{2} + \i \left(\psi^{r} \psi^{i}_{\tau} - \psi^{i} \psi^{r}_{\tau}\right)$, $\overline{\psi} \, \Delta_{R} \psi = \psi^{r} \Delta_{R} \psi^{r} + \psi^{i} \Delta_{R} \psi^{i} + \i \left(\psi^{r} \Delta_{R} \psi^{i} - \psi^{i} \Delta_{R} \psi^{r}\right)$ and similar equalities for $\overline{\psi} \, \psi_{\theta\theta}$.
Next, since the integral of the second term in \eqref{eqn:cNLS:psi-squared}: \begin{align} \label{eqn:cNSL:mass-derivation:1} \begin{split} &\int_{0}^{\infty}{\left[\psi^{r} \left(\psi^{i}_{RR} + \frac{1}{R}\psi^{i}_{R}\right) - \psi^{i} \left(\psi^{r}_{RR} + \frac{1}{R}\psi^{r}_{R}\right)\right] R \, \d R} \\ &= R \left[\psi^{r} \psi^{i}_{R} - \psi^{i} \psi^{r}_{R}\right]_{0}^{\infty} - \int_{0}^{\infty}{\left[\psi^{i}_{R} \left(\psi^{r} R\right)_{R} - \psi^{r}_{R} \left(\psi^{i} R\right)_{R}\right] \, \d R} \\ &+ \int_{0}^{\infty}{\left[\psi^{r} \psi^{i}_{R} - \psi^{i} \psi^{r}_{R}\right] \, \d R} = 0 \end{split} \end{align} vanishes in view of the BCs \eqref{BCs:cNLS}, as does the integral of the last term in \eqref{eqn:cNLS:psi-squared}: \begin{align} \int_{0}^{2\pi}{\left[\psi^{r} \psi^{i}_{\theta\theta} - \psi^{i} \psi^{r}_{\theta\theta}\right] \, \d \theta} = \left[\psi^{r} \psi^{i}_{\theta} - \psi^{i} \psi^{r}_{\theta}\right]_{0}^{2\pi} - \int_{0}^{2\pi}{\left[\psi^{r}_{\theta} \psi^{i}_{\theta} - \psi^{i}_{\theta} \psi^{r}_{\theta}\right] \, \d \theta} = 0 \end{align} due to periodicity in $\theta$, equation \eqref{eqn:cNLS:psi-squared} leads to the conservation of the number of particles (in analogy to quantum mechanics): \begin{align} \label{conservation-mass:cNLS} \frac{\d \mathcal{N}}{\d \tau} \equiv \frac{\d}{\d \tau}\int{|\psi|^{2} \, \d \nu} = 0, \end{align} which is a consequence of the invariance of \eqref{ncNLS-ST:abstract} under phase shifts; the integration over the cylindrical measure $\d \nu$ is defined as \begin{align} \label{measure:cylindrical} \int{\circ \, \d \nu} = \int_{0}^{2\pi}{\circ \, \d\theta}\int_{0}^{\infty}{R \, \d R}. \end{align} Similarly, multiplying \eqref{ncNLS-ST:abstract} with $\overline{\psi}_{\tau}$, \begin{align} \i \, |\psi_{\tau}|^{2} + \lambda_{\infty} \overline{\psi}_{\tau} \, \Delta_{R} \psi + \frac{\lambda_{\infty}^{\prime}}{R^{2}} \overline{\psi}_{\tau} \psi + \frac{\mu_{\infty}}{R^{2}} \overline{\psi}_{\tau} \psi_{\theta\theta} - \chi_{\infty} \, |\psi|^{2} \overline{\psi}_{\tau} \psi = 0, \end{align} and taking the real part of the resulting expression, we get \begin{multline} \label{eqn:cNLS:psi-tau-squared} \lambda_{\infty} \left(\psi^{r}_{\tau} \Delta_{R} \psi^{r} + \psi^{i}_{\tau} \Delta_{R} \psi^{i}\right) + \frac{\lambda_{\infty}^{\prime}}{R^{2}} \frac{1}{2} \frac{\d}{\d \tau}|\psi|^{2} \\ + \frac{\mu_{\infty}}{R^{2}} \left(\psi^{r}_{\tau} \psi^{r}_{\theta\theta} + \psi^{i}_{\tau} \psi^{i}_{\theta\theta}\right) - \chi_{\infty} |\psi|^{2} \frac{1}{2} \frac{\d}{\d \tau}|\psi|^{2} = 0, \end{multline} where we again took into account that $\psi^{r} \psi^{r}_{\tau} + \psi^{i} \psi^{i}_{\tau} = \frac{1}{2} \frac{\d}{\d \tau}|\psi|^{2} = \frac{1}{2} \frac{\d}{\d \tau}\left(\psi^{r 2} + \psi^{i 2}\right)$.
Next, integrating by parts \begin{multline} \label{eqn:cNSL:energy-derivation:1} \int_{0}^{\infty}{f_{\tau} \Delta_{R} f \, R \, \d R} = \int_{0}^{\infty}{f_{\tau} \left(f_{RR} + \frac{1}{R} f_{R}\right) \, R \, \d R} = \left.f_{R} f_{\tau} R\right|_{0}^{\infty} - \int_{0}^{\infty}{f_{R} \left(f_{\tau} \, R\right)_{R} \, \d R} \\ + \int_{0}^{\infty}{f_{R} f_{\tau} \, \d R} = \left.f_{R} f_{\tau} R\right|_{0}^{\infty} - \frac{1}{2} \frac{\d}{\d \tau} \int_{0}^{\infty}{R f_{R}^{2} \, \d R}, \end{multline} and applying this result to $f = \psi^{r}$ and $\psi^{i}$ with the BCs \eqref{BCs:cNLS}, equation \eqref{eqn:cNLS:psi-tau-squared} takes the form of the conservation of the Hamiltonian $\mathcal{H}$: \begin{align} \label{conservation-energy:cNLS} \frac{\d \mathcal{H}}{\d \tau} \equiv \frac{\d}{\d \tau}\int{\left[-\frac{\lambda_{\infty}}{2}|\psi_{R}|^{2} + \frac{\lambda_{\infty}^{\prime}}{2 R^{2}}|\psi|^{2} - \frac{\mu_{\infty}}{2 R^{2}}|\psi_{\theta}|^{2} - \frac{\chi_{\infty}}{4} |\psi|^{4}\right] \, \d \nu} = 0; \end{align} here we simplified the last term in \eqref{eqn:cNLS:psi-tau-squared}, $|\psi|^{2} \frac{1}{2} \frac{\d}{\d \tau}|\psi|^{2} = \frac{1}{2} (\psi^{r2}+\psi^{i2}) \frac{\d}{\d \tau} (\psi^{r2}+\psi^{i2}) = \frac{1}{4}\frac{\d}{\d \tau} (\psi^{r2}+\psi^{i2})^{2}$, and also took into account that $\int_{0}^{2\pi}{f_{\tau} f_{\theta\theta} \, \d \theta} = \left.f_{\tau} f_{\theta}\right|_{\theta=0}^{2\pi} - \int_{0}^{2\pi}{f_{\tau\theta} f_{\theta} \, \d \theta} = - \frac{1}{2} \frac{\d}{\d\tau}\int_{0}^{2\pi}{f_{\theta}^{2} \, \d \theta}$ when integrating the third term in \eqref{eqn:cNLS:psi-tau-squared}, $\psi^{r}_{\tau} \psi^{r}_{\theta\theta} + \psi^{i}_{\tau} \psi^{i}_{\theta\theta}$. Hence, the Hamiltonian reads \begin{align} \label{H:ncNLS:original} \mathcal{H} = \int{\left[-\frac{\lambda_{\infty}}{2}|\psi_{R}|^{2} + \frac{\lambda_{\infty}^{\prime}}{2 R^{2}}|\psi|^{2} - \frac{\mu_{\infty}}{2 R^{2}}|\psi_{\theta}|^{2} - \frac{\chi_{\infty}}{4} |\psi|^{4}\right] \, \d \nu}. \end{align} Finally, given the above expression for the Hamiltonian, it can be shown (cf. Appendix~\ref{appx:variance}) that the evolution of the variance $\mathcal{V}(\tau) = \int{R^{2} |\psi|^{2} \, \d \nu}$, also known as the wave power (a variant of the power curve introduced by \citet{Vakhitov:1973}), obeys \begin{align} \label{variance:derivative:second:final} \frac{1}{4\lambda_{\infty}} \frac{\d^{2} \mathcal{V}}{\d \tau^{2}} = - 4 \, \mathcal{H} + 2 \pi \lambda_{\infty}^{\prime} |\psi(\tau,0)|^{2}, \end{align} integrating which yields \begin{align} \mathcal{V}(\tau) = - 8 \lambda_{\infty} \mathcal{H} \tau^{2} + 8 \pi \lambda_{\infty} \lambda_{\infty}^{\prime} \int_{0}^{\tau}{\d \tau^{\prime}\int_{0}^{\tau^{\prime}}{|\psi(\tau^{\prime\prime},0)|^{2} \d \tau^{\prime\prime}}} + \mathcal{V}^{\prime}(0) \tau + \mathcal{V}(0). \end{align} Should $\lambda_{\infty}^{\prime}=0$ as in the case of the standard NLS, then, if the initial conditions are such that $\lambda_{\infty} \mathcal{H} > 0$, i.e.
$\mathcal{V}^{\prime\prime}(0) = - 16 \lambda_{\infty} \mathcal{H} < 0$, from the solution of the quadratic equation \begin{align} \mathcal{V}^{\prime\prime}(0) \frac{\tau_{*}^{2}}{2} + \mathcal{V}^{\prime}(0) \tau_{*} + \mathcal{V}(0) = 0 \ \Rightarrow \ \tau_{*} = \frac{- \mathcal{V}^{\prime}(0) - \sqrt{\mathcal{V}^{\prime 2}(0) - 2 \mathcal{V}(0) \mathcal{V}^{\prime\prime}(0)}}{\mathcal{V}^{\prime\prime}(0)}, \end{align} where $\mathcal{V}(0)>0$ by definition, so that the two roots are of opposite signs and the above is the positive one, it follows that there exists a finite time $\tau_{*} > 0$ such that $\mathcal{V} \rightarrow 0$, in contradiction with the positivity implied by the definition of $\mathcal{V}$. The $H^{1}$-solution must therefore develop a singularity no later than the time $\tau_{*}$: $|\psi| \rightarrow \infty$, $|\psi_{R}| \rightarrow \infty$ as $R \rightarrow 0$. This means that the solution leaves the $H^{1}$-space, so that the condition of $\mathcal{V}$ being positive (which holds while $\psi \in H^{1}$) no longer needs to be satisfied. The analogous behavior is known for the standard NLS equations \citep{Glassey:1977}. However, the presence of the potential leads to the extra term $2 \pi \lambda_{\infty}^{\prime} |\psi(\tau,0)|^{2}$ in \eqref{variance:derivative:second:final}: if $\lambda_{\infty} \lambda_{\infty}^{\prime} < 0$ then, since $|\psi(\tau,0)|^{2}$ is non-negative, the finite-time singularity still takes place, while for $\lambda_{\infty} \lambda_{\infty}^{\prime} > 0$ the situation may change and the singularity may be prevented from forming altogether, namely if the second term in the above expression for $\mathcal{V}(\tau)$ grows with time faster than the $8 \lambda_{\infty} \mathcal{H} \tau^{2}$ term. Note that in the above analysis, in particular in equations (\ref{eqn:cNSL:mass-derivation:1},\ref{eqn:cNSL:energy-derivation:1},\ref{eqn:cNSL:variance-derivation:1},\ref{eqn:cNSL:variance-derivation:2},\ref{eqn:cNSL:variance-derivation:3}), we used the BC \eqref{BCs:cNLS} $\psi_{R}=0$ at $R=0$ and also naturally assumed that at $R=0$ the solution itself is non-singular, so that the corresponding terms at $R=0$ vanish in equations (\ref{variance:derivative:second-prelim},\ref{eqn:cNSL:variance-derivation:4}). In all these equations we also assumed a sufficiently fast decay of the solution as $R \rightarrow \infty$, which should be valid at least initially if the IC is chosen as a compact/localized perturbation of finite energy; however, at some time the solution at infinity may no longer decay fast enough to enable neglecting the boundary terms in the above-referenced equations. As we will see in \S \ref{subsec:ncNLS:BS}, there is a class of solutions of the Stokes type \eqref{wave:Stokes}, which indeed decay only as $R^{-1/2}$ at infinity, though with an oscillatory coefficient. \subsection{Base states} \label{subsec:ncNLS:BS} It is known that a truly solitary wave occurs only if the phase speed of the carrier wave coincides with the group velocity of the envelope, which happens at a certain wavenumber \citep{Grimshaw:2007}; though, of course, even in the classical case of the KdV soliton \eqref{soliton:1D:KdV} this does not happen, as the latter travels with an amplitude-dependent speed relative to the carrier wave.
While this may happen in the case of the GP equation \eqref{ncNLS-ST:abstract} at very large distances from the origin, $R \rightarrow \infty$, where its solution behaves as \begin{align} \label{wave:traveling:infinity} \psi_{0}(\tau,R) \sim \frac{A_{0}}{\sqrt{R}} e^{\i (\mu \tau - k R)}, \end{align} it does not happen everywhere in the cylindrical geometry we consider here, which is easy to see by amending \eqref{wave:traveling:infinity} with next-order terms accounting for large, but finite, distances $R$: \begin{align} \label{wave:traveling:asymptotics} \psi_{0}(\tau,R) \sim A(R) e^{\i (\mu \tau - \varphi(R))}, \end{align} where \begin{subequations} \begin{align} \varphi(R) &= R \left[k - \frac{A_{0}^{2} \, \chi_{\infty}}{2 \, k \, \lambda_{\infty}} \, \frac{\ln{R}}{R} + \mathcal{O}\left(\frac{1}{R^{2}}\right)\right], \\ A(R) &= \frac{A_{0}}{\sqrt{R}} \left[1 + \frac{A_{0}^{2} \, \chi_{\infty}}{4 \, k^{2} \lambda_{\infty}}\frac{1}{R} + \mathcal{O}\left(\frac{1}{R^{2}}\right)\right]. \end{align} \end{subequations} Therefore, as we can see from the expression for the phase $\varphi(R)$, the group velocity of the envelope changes with the distance $R$ from the origin, while the phase speed of the (linear) carrier wave does not. This implies that one cannot identify a single wavenumber $k$ at which those two speeds would match for all $R$. Therefore, in this section we will focus on axisymmetric standing-wave ground states of \eqref{ncNLS-ST:abstract}, which are sought in the form \eqref{wave:Stokes}, also known as a solitary wave (ground state or breather) in the context of the NLS. Substituting \eqref{wave:Stokes} in \eqref{ncNLS-ST:abstract} we get \begin{align} \label{ncNLS-ST:ground-state} - \mu \, u + \lambda_{\infty} \left(u_{RR} + \frac{1}{R} u_{R}\right) + \lambda_{\infty}^{\prime} \frac{u}{R^{2}} = \chi_{\infty} |u|^{2} u, \end{align} where we keep the modulus sign for the convenience of subsequent calculations, though all the base states we consider are real. Equation \eqref{ncNLS-ST:ground-state} is of semilinear elliptic type, which has been widely studied \citep{Berestycki:1983,Jones:1986,McLeod:1990,Bartsch:1993,Derrick:1997} and is known to possess an infinite number of solutions. However, semilinear elliptic equations with singular and, in particular, inverse-square potentials are considerably less explored \citep{Lin:2019}. \begin{figure} \centering \includegraphics[width=2.5in]{d-parameter.pdf} \caption{\label{fig:d-parameter} On the variation of the parameter $d$ with the Weber number; the point $d=-\frac{3}{4}$ corresponds to the special asymptotics \eqref{asymptotics:special}.} \end{figure} Next, applying the transformation $u(R) = R^{-1/2} U(R)$, which eliminates the first derivative w.r.t. $R$ in \eqref{ncNLS-ST:ground-state} thereby removing the $R^{-1/2}$-factor in the asymptotics $R \rightarrow \infty$, we obtain \begin{align} \label{ncNLS-ST:abstract:U} \lambda_{\infty} R^{2} U^{\prime\prime} + \left[\left(\frac{1}{4}\lambda_{\infty} + \lambda_{\infty}^{\prime}\right) - \mu R^{2}\right] U - \chi_{\infty} \, R \, |U|^{2} U = 0.
\end{align} In order to bring it to a form convenient for analysis, let us scale variables according to $R = \alpha x$, $U = \beta y$, thus furnishing \begin{align} \label{ncNLS-ST:abstract:y:general} x^{2} y^{\prime\prime} + \left[d - \frac{\mu \alpha^{2}}{\lambda_{\infty}} x^{2}\right] y - \frac{\chi_{\infty}}{\lambda_{\infty}} \alpha \beta^{2} \, x \, |y|^{2} y = 0, \ d \equiv \left(\frac{1}{4} + \frac{\lambda_{\infty}^{\prime}}{\lambda_{\infty}}\right). \end{align} Because of the change of signs of $\lambda_{\infty}$ and $\chi_{\infty}$, there are three ranges of Weber numbers to consider, cf. figure~\ref{fig:d-parameter}: \begin{description} \item[Case 1, $0 \le We < \frac{2}{\sqrt{3}} - 1$: \ ] in which case $\lambda_{\infty}<0$, $\chi_{\infty}>0$, and $d<0$, so that we define $\alpha$ and $\beta$ via $\frac{\mu \alpha^{2}}{\lambda_{\infty}} = - 1$, $\frac{\chi_{\infty}}{\lambda_{\infty}} \alpha \beta^{2} = -1$ thus yielding $\alpha = \left(-\lambda_{\infty}/\mu\right)^{1/2}$ and $\beta = \left(-\lambda_{\infty}/\chi_{\infty}\right)^{1/2} \left(-\mu/\lambda_{\infty}\right)^{1/4}$ and reducing \eqref{ncNLS-ST:abstract:y:general} to \begin{align} \label{ncNLS-ST:abstract:y:case-1} y^{\prime\prime} + \left[\frac{d}{x^{2}} + 1\right] y + \frac{1}{x} \, |y|^{2} y = 0. \end{align} \item[Case 2, $\frac{2}{\sqrt{3}} - 1 < We < \frac{1}{2}$: \ ] in which case $\lambda_{\infty}>0$, $\chi_{\infty}>0$, and $d>0$, so that we define $\alpha$ and $\beta$ via $\frac{\mu \alpha^{2}}{\lambda_{\infty}} = 1$, $\frac{\chi_{\infty}}{\lambda_{\infty}} \alpha \beta^{2} = 1$ thus yielding $\alpha = \left(\lambda_{\infty}/\mu\right)^{1/2}$ and $\beta = \left(\lambda_{\infty}/\chi_{\infty}\right)^{1/2} \left(\mu/\lambda_{\infty}\right)^{1/4}$ and reducing \eqref{ncNLS-ST:abstract:y:general} to \begin{align} \label{ncNLS-ST:abstract:y:case-2} y^{\prime\prime} + \left[\frac{d}{x^{2}} - 1\right] y - \frac{1}{x} \, |y|^{2} y = 0. \end{align} \item[Case 3, $\frac{1}{2} < We$: \ ] in which case $\lambda_{\infty}>0$, $\chi_{\infty}<0$, and $d>0$, so that we define $\alpha$ and $\beta$ via $\frac{\mu \alpha^{2}}{\lambda_{\infty}} = 1$, $\frac{\chi_{\infty}}{\lambda_{\infty}} \alpha \beta^{2} = -1$ thus yielding $\alpha = \left(\lambda_{\infty}/\mu\right)^{1/2}$ and $\beta = \left(-\lambda_{\infty}/\chi_{\infty}\right)^{1/2} \left(\mu/\lambda_{\infty}\right)^{1/4}$ and reducing \eqref{ncNLS-ST:abstract:y:general} to \begin{align} \label{ncNLS-ST:abstract:y:case-3} y^{\prime\prime} + \left[\frac{d}{x^{2}} - 1\right] y + \frac{1}{x} \, |y|^{2} y = 0. \end{align} \end{description} \begin{figure} \setlength{\labelsep}{-3.0mm} \centering \sidesubfloat[]{\includegraphics[width=2.5in]{plot-y-We-low.pdf}\label{fig:plot-y-We-low}} \sidesubfloat[]{\includegraphics[width=2.5in]{plot-u-We-low.pdf}\label{fig:plot-u-We-low}} \\ \sidesubfloat[]{\includegraphics[width=2.5in]{plot-y-We-high.pdf}\label{fig:plot-y-We-high}} \sidesubfloat[]{\includegraphics[width=2.5in]{plot-u-We-high.pdf}\label{fig:plot-u-We-high}} \caption{(a) Solutions to \eqref{ncNLS-ST:abstract:y:case-1} and (b) in the unscaled variables for $We < \frac{2}{\sqrt{3}}-1$. 
(c) Solutions to (\ref{ncNLS-ST:abstract:y:case-2},\ref{ncNLS-ST:abstract:y:case-3}) and (d) in the unscaled variables for $We > \frac{2}{\sqrt{3}}-1$.} \label{fig:plot-GP-solitons} \end{figure} The first notable fact about the base states of the Gross-Pitaevskii equation is that, in general, they can be singular at the origin -- as opposed to the case when the potential $V(R)$ is omitted, as was done by \citet{Zakharov:1976b}, for example, which leads to standing wave-type solutions \eqref{wave:Stokes} regular at the origin and satisfying $u^{\prime}(0)=0$. To get a sense of the structure of the $y$-solution, let us look into the asymptotics near the origin, $x \rightarrow 0$, starting with \textit{case 1}. Expecting a power-law form $y = C x^{\alpha}$, where from now on the notation $C$ is used for a generic constant unless stated otherwise, we find that \eqref{ncNLS-ST:abstract:y:case-1} produces: \begin{align} \left[\alpha (\alpha-1) + d\right] x^{\alpha-2} + x^{\alpha} + C^{2} x^{3\alpha-1} = 0. \end{align} For $d \in \left[-\frac{3}{4},-\frac{1}{2}\right]$ the solution is determined by the first two (linear) terms in \eqref{ncNLS-ST:abstract:y:case-1}, giving $\alpha = \frac{1 \pm \sqrt{1 - 4 d}}{2}$, of which the most singular root $\alpha = \frac{1 - \sqrt{1 - 4 d}}{2} \in \left[-\frac{1}{2},\frac{1-\sqrt{3}}{2}\right]$ is of interest to us. At $We=0$ the parameter $d=-\frac{1}{2}$ and then decreases with $We$ down to $-\infty$. At $d = - \frac{3}{4}$ the nonlinearity `kicks in' with the power $\alpha = - \frac{1}{2}$ and the solution of \eqref{ncNLS-ST:abstract:y:case-1} has a different asymptotics: \begin{align} \label{asymptotics:special} y(x) \sim \frac{x^{-1/2}}{\left(\ln{1/x}\right)^{1/2}}. \end{align} As $d$ varies further in the range $-\infty < d < -\frac{3}{4}$, the power stays at the same value $\alpha = - \frac{1}{2}$, but the `amplitude' of the solution $C$ in $y = C x^{\alpha}$ varies with $d$ according to $\alpha (\alpha-1) + d + C^{2} = 0$. In \textit{cases 2 and 3}, we have $d > 0$ falling in the range $(\frac{1}{2},\infty)$ as the Weber number changes from $\infty$ down to $\frac{2}{\sqrt{3}} - 1$. The power $\alpha = \alpha_{R} + \i \alpha_{I}$ is then complex with $\alpha_{R} = \frac{1}{2}$. Indeed, looking for a solution in the form $y = C \, x^{\alpha}$ yields \begin{align} \left[\alpha (\alpha-1) + d\right] - x^{2} \mp |C|^{2} x^{2 \alpha_{R}+1} = 0, \end{align} where we took into account that $|x^{\alpha_{R} + \i \alpha_{I}}| = x^{\alpha_{R}} \, |e^{\i \alpha_{I} \ln{x}}| = x^{\alpha_{R}}$, i.e. the imaginary part $\alpha_{I}$ does not affect the amplitude. At the leading order the balance occurs due to the first two (linear) terms in \eqref{ncNLS-ST:abstract:y:case-2} and \eqref{ncNLS-ST:abstract:y:case-3}, respectively, which are the same as in \eqref{ncNLS-ST:abstract:y:case-1}; hence, since $2 \alpha_{R}+1 > 0$, \begin{align} \alpha = \frac{1 \pm \sqrt{1-4d}}{2} \ \Rightarrow \ \alpha_{R} = \frac{1}{2}, \ \alpha_{I} = \pm \frac{1}{2} \sqrt{4 d - 1}; \end{align} in the considered cases $d \in (\frac{1}{2},\infty)$, which implies $\alpha_{I} \in (-\infty,-\frac{1}{2}) \cup (\frac{1}{2},\infty)$.
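These near-origin exponents are easily evaluated; a short Python sketch illustrating the three regimes (the sample values of $d$ are ours, chosen for illustration):
\begin{verbatim}
# Indicial exponents from alpha*(alpha - 1) + d = 0 for y ~ C x^alpha
# near the origin, cf. cases 1-3 above.
import numpy as np

def indicial_alpha(d):
    s = np.sqrt(1.0 - 4.0*d + 0j)
    return (1.0 - s)/2.0, (1.0 + s)/2.0

print(indicial_alpha(-0.5))  # We = 0 (case 1): real, most singular -0.366
print(indicial_alpha(2.0))   # cases 2-3: alpha = 1/2 -+ 1j*sqrt(7)/2
# for d < -3/4 the relevant balance is nonlinear: alpha = -1/2 with the
# amplitude fixed by alpha*(alpha - 1) + d + C**2 = 0, e.g. for d = -2
print(np.sqrt(2.0 - 0.75))   # C = 1.118...
\end{verbatim}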
As a result of the above balance, the asymptotics of the real solution can be represented as \begin{align} \label{asymptotics:y:2-3:origin} y = C x^{\alpha_{R}} \cos{\left[\alpha_{I} \ln{x} + \varphi(x)\right]}, \end{align} where $|\varphi(x)| \ll |\ln{x}|$. Notably, \textit{case 2} also admits solutions singular along a ring of radius $x_{0} \neq 0$, cf. figure~\ref{fig:plot-u-ring}: \begin{align} y(x) \sim \frac{C}{|x-x_{0}|}, \ x_{0} = \frac{1}{2} C^{2}. \end{align} It should be noted that the singular ring ground states found here are different from the ring-type soliton solutions identified for the radial NLS, not only because the latter were constructed without the potential term $V(R)$, but also because they are non-singular (and approximate) dark \citep{Kivshar:1994} and bright \citep{Lomdahl:1980,Afanasjev:1995} ring solitons. \begin{figure} \centering \includegraphics[width=2.5in]{plot-u-ring.pdf} \caption{Solution to \eqref{ncNLS-ST:abstract:y:case-2} for $We =0.2$ singular along the ring of radius $x_{0}=5$.}\label{fig:plot-u-ring} \end{figure} Next, let us determine the asymptotics of solutions at infinity. In \textit{case 1}, we see that as $x \rightarrow \infty$ the leading-order solution is $\cos{x}$ with some corrections to its phase (cf. Appendix~\ref{appx:asymptotics:infinity}): \begin{align} \label{asymptotics:BS:cNLS:infinity:case1} y(x) = C \cos{\left[x + \frac{C^{2}}{4} \ln{x} + \mathcal{O}\left(\frac{1}{x}\right)\right]}. \end{align} Because the asymptotics at infinity does not, to this order, depend on the parameter $d$ in this case, numerical integration can only be done starting from the neighborhood of the origin. However, as we saw from the corresponding analysis of the leading-order asymptotic terms, the solution there is singular with a negative power-law exponent $\alpha_{0}$. Clearly, for a numerically accurate solution one needs to improve that asymptotics, \begin{align} y(x) = x^{\alpha_{0}} \left(C_{0} + C_{1} x^{\alpha_{1}} + \ldots + C_{i} x^{\alpha_{i}} + \ldots\right), \end{align} to the order $O(x^{\alpha_{i}})$ with $\alpha_{0}+\alpha_{i}>1$, since the first derivative is needed for numerical integration as well. For values of $We < 0.011$, it proves sufficient to compute the first five terms in the above expansion, giving $\alpha_{i} = i (1+2\alpha_{0})$, $i \ge 1$, and the coefficients \begin{align} C_{1} &= - \frac{C_{0}^{3}}{\alpha_{1} \left(2 \alpha_{0} - 1 + \alpha_{1}\right)}, \ C_{2} = - \frac{3 C_{1} C_{0}^{2}}{\alpha_{2} \left(2 \alpha_{0} - 1 + \alpha_{2}\right)}, \ C_{3} = - 3 \frac{C_{2} C_{0}^{2} + C_{0} C_{1}^{2}}{\alpha_{3} \left(2 \alpha_{0} - 1 + \alpha_{3}\right)}, \nonumber \\ C_{4} &= - 3 \frac{C_{3} C_{0}^{2} + 2 C_{0} C_{1} C_{2} + C_{1}^{3}}{\alpha_{4} \left(2 \alpha_{0} - 1 + \alpha_{4}\right)}, \ C_{5} = - 3 \frac{C_{4} C_{0}^{2} + 2 C_{0} C_{1} C_{3} + C_{0} C_{2}^{2} + C_{2} C_{1}^{2}}{\alpha_{5} \left(2 \alpha_{0} - 1 + \alpha_{5}\right)}. \end{align} Similarly, the asymptotics can be determined in \textit{cases 2 and 3}. The physically meaningful leading-order solution $y(x) = C e^{-x}$ is corrected with a phase $\varphi(x)$, i.e. $y(x) = C e^{-x + \varphi(x)}$.
However, as opposed to \textit{case 1}, in which the phase is found from a balance with the nonlinear term, the phase here comes from a balance of the linear terms; indeed, substitution of $y(x) = C e^{-x + \varphi(x)}$ in \eqref{ncNLS-ST:abstract:y:case-2} and \eqref{ncNLS-ST:abstract:y:case-3} gives \begin{align} \varphi^{\prime\prime} + \left(-1 + \varphi^{\prime}\right)^{2} + \frac{d}{x^{2}} - 1 \mp \frac{C^{2}}{x} e^{- 2 x + 2 \varphi(x)} = 0, \end{align} and hence at the next order the balance is due to $2 \, \varphi^{\prime} = \frac{d}{x^{2}}$, which yields $\varphi(x) = - \frac{d}{2 x} + \varphi(\infty)$ satisfying the underlying assumptions that $|\varphi^{\prime\prime}| \ll |\varphi^{\prime}|$ and $|\varphi^{\prime}|^{2} \ll |\varphi^{\prime}|$. As a result, the corrected asymptotics in both cases 2 and 3 reads \begin{align} \label{asymptotics:BS:cNLS:infinity:case2-3} y(x) = C \exp{\left[- x - \frac{d}{2 x} + \mathrm{const}\right]}. \end{align} Despite the singular nature of the ground states in figures~\ref{fig:plot-u-We-low} and \ref{fig:plot-u-ring}, they are as valuable as the widely studied finite-time singularities peculiar to the NLS -- such singularities are indicative of a localized behavior, e.g. spike waves \citep{McAllister::2022}, in the original unreduced physical system, the Euler equations \eqref{system:deep-water:non-dimensional:cylindrical}, from which \eqref{ncNLS-ST:abstract} is deduced. In accordance with physical expectations, the identified singular and regular solitons shown in figures~\ref{fig:plot-GP-solitons} and \ref{fig:plot-u-ring} are bright, i.e. localized in space and evanescent at infinity. A convenient way to understand the structure of the solution variety of (\ref{ncNLS-ST:abstract:y:case-1}-\ref{ncNLS-ST:abstract:y:case-3}) is through a dynamical systems point of view \citep{Jones:1986,Newton:1993}. The idea is to compactify the problem: the phase space is augmented with a bounded but open dimension and then extended at both ends by gluing in invariant subspaces that carry autonomous dynamics of the limit systems \citep{Wieczorek:2021}. Namely, reducing, for example, (\ref{ncNLS-ST:abstract:y:case-1},\ref{ncNLS-ST:abstract:y:case-2}) to an equivalent autonomous system of first-order equations: \begin{subequations} \label{system:Stokes-wave} \begin{align} \dot{y} &= \rho^{2} v, \\ \dot{v} &= - \left[d (1-\rho)^{2} \pm \rho^{2}\right] y \mp \rho (1-\rho) |y|^{2} y, \\ \dot{\rho} &= \rho^{2} (1-\rho)^{2}, \end{align} \end{subequations} in which the singularity at the origin is removed by introducing a new independent variable $t = x - \frac{1}{x} + 2 \ln{x} \in (-\infty,+\infty)$ for $x \in (0,\infty)$ and representing the radial coordinate $x$ via a new dependent variable $\rho = x / (x+1)$; the upper choice of sign corresponds to \eqref{ncNLS-ST:abstract:y:case-1} and the lower one to \eqref{ncNLS-ST:abstract:y:case-2}. From \eqref{system:Stokes-wave} we find that all solutions starting in the invariant plane $\rho=0$ end up being attracted to one of the trajectories in the invariant plane $\rho=1$ shown in figure~\ref{fig:combined}. For example, the solutions of the type in figure~\ref{fig:plot-y-We-low} look like that in figure~\ref{fig:combined1} and get attracted to one of the centers. On the way from $\rho=0$ to $\rho=1$ the solution may pierce the plane $y=0$ many times, the number of crossings corresponding to the number of zeros of a given solution, as illustrated by the integration sketch below.
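For instance, the compactified system \eqref{system:Stokes-wave} with the upper choice of signs is readily integrated numerically; in the Python sketch below (the parameter value and the initial data are ours, for illustration only) a trajectory released near the invariant plane $\rho = 0$ creeps towards the dynamics in the invariant plane $\rho = 1$, where \eqref{system:Stokes-wave} reduces to the harmonic oscillator $\dot{y} = v$, $\dot{v} = -y$, i.e. the center orbits of figure~\ref{fig:combined1}:
\begin{verbatim}
# Integrate (system:Stokes-wave), upper signs (case 1; d = -1/2 at We = 0).
import numpy as np
from scipy.integrate import solve_ivp

d = -0.5

def rhs(t, s):
    y, v, rho = s
    return [rho**2*v,
            -(d*(1.0 - rho)**2 + rho**2)*y - rho*(1.0 - rho)*y**3,
            rho**2*(1.0 - rho)**2]

sol = solve_ivp(rhs, [0.0, 500.0], [0.05, 0.0, 0.01],
                rtol=1e-10, atol=1e-12)
print(sol.y[2, -1])                 # rho is close to 1 by the end
print(sol.y[0, -1], sol.y[1, -1])   # (y, v) settle onto a center orbit
\end{verbatim}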
This dynamical systems approach proved fruitful for analyzing the number of zeros, or the existence of a solution with a given number of zeros, for the semilinear elliptic equation \eqref{ncNLS-ST:ground-state} without the potential term \citep{Jones:1986}. The dynamical systems view in figure~\ref{fig:combined1} also makes it clear that structurally the solutions must be Lyapunov stable. On the other hand, solutions of the type shown in figure~\ref{fig:plot-y-We-high}, e.g. for $We=0.2$ corresponding to \textit{case 2}, represent trajectories approaching a saddle point, as one can observe from the phase portrait at $\rho = 1$ in figure~\ref{fig:combined2}. Obviously, unless the boundary condition at infinity, $y,v \rightarrow 0$ as $x \rightarrow \infty$, is enforced, the solution is structurally unstable. We will see both scenarios from the subsequent spectral (\S \ref{subsec:spectral-analysis:ncNLS}) and Hamiltonian (\S \ref{subsec:Lagrange-Dirichlet}) stability analyses. \begin{figure} \setlength{\labelsep}{-3.0mm} \centering \sidesubfloat[]{\includegraphics[width=2.5in]{combined1.pdf}\label{fig:combined1}} \quad \sidesubfloat[]{\includegraphics[width=2.5in]{combined2.pdf}\label{fig:combined2}} \caption{ A solution trajectory of (a) equation \eqref{ncNLS-ST:abstract:y:case-1} and (b) equation \eqref{ncNLS-ST:abstract:y:case-2}.} \label{fig:combined} \end{figure} \subsection{Spectral stability of base states} \label{subsec:spectral-analysis:ncNLS} Superimposing a perturbation on the base state: \begin{align} \psi(\tau,R,\theta) = u(R) \left[1 + u^{\prime}(\tau,R,\theta)\right] e^{\i \left[\mu \tau + \varphi(\tau,R,\theta)\right]}, \end{align} substituting in \eqref{ncNLS-ST:abstract}, and separating real $\Re$ and imaginary $\Im$ parts, we get the system \begin{subequations} \begin{align} \Re&: & &- u (1+u^{\prime}) (\mu + \varphi_{\tau}) + \lambda_{\infty} \bigg[u_{RR} (1+u^{\prime}) + 2 u_{R} u^{\prime}_{R} - u (1+u^{\prime}) \varphi_{R}^{2} \nonumber \\ & & &+ u u^{\prime}_{RR} + \frac{1}{R}\left\{u_{R} (1+u^{\prime}) + u u^{\prime}_{R}\right\}\bigg] \\ & & &+ \lambda_{\infty}^{\prime} \frac{1}{R^{2}} u (1+u^{\prime}) + \frac{\mu_{\infty}}{R^{2}} \left[u u^{\prime}_{\theta\theta} - u (1+u^{\prime}) \varphi_{\theta}^{2}\right] - \chi_{\infty} u^{3} (1+u^{\prime})^{3} = 0, \nonumber \\ \Im&: & &u u^{\prime}_{\tau} + \lambda_{\infty} \bigg[2 u_{R} (1+u^{\prime}) \varphi_{R} + 2 u u^{\prime}_{R} \varphi_{R} \nonumber \\ & & &+ u (1+u^{\prime}) \varphi_{RR} + \frac{1}{R} u (1+u^{\prime}) \varphi_{R}\bigg] + \frac{\mu_{\infty}}{R^{2}} \left[2 u u^{\prime}_{\theta} \varphi_{\theta} + u (1+u^{\prime}) \varphi_{\theta\theta}\right] = 0, \end{align} \end{subequations} where $u$ is real, as we consider the real base states constructed in \S \ref{subsec:ncNLS:BS}, and the prime on $u^{\prime}$ denotes the perturbation rather than a derivative. Taking into account equation \eqref{ncNLS-ST:ground-state} for the base state, the linearized system for a perturbation simplifies to \begin{subequations} \begin{align} - \varphi_{\tau} + \lambda_{\infty} \Delta_{R} u^{\prime} + \frac{\mu_{\infty}}{R^{2}} u^{\prime}_{\theta\theta} + 2 \lambda_{\infty} \frac{u_{R}}{u} u^{\prime}_{R} - 2 \chi_{\infty} u^{2} u^{\prime} &= 0, \\ u^{\prime}_{\tau} + \lambda_{\infty} \Delta_{R} \varphi + \frac{\mu_{\infty}}{R^{2}} \varphi_{\theta\theta} + 2 \lambda_{\infty} \frac{u_{R}}{u} \varphi_{R} &= 0. \end{align} \end{subequations} Next, applying the Fourier transform in $\theta$ and looking for eigenmodes, i.e.
$\varphi = \widehat{\varphi} \, e^{\lambda \tau} e^{\i k \theta}$ and $u^{\prime} = \widehat{u} \, e^{\lambda \tau} e^{\i k \theta}$, we arrive at \begin{subequations} \begin{align} \lambda \widehat{\varphi} &= L_{R} \widehat{u} - 2 \chi_{\infty} u^{2} \widehat{u}, \\ - \lambda \widehat{u} &= L_{R} \widehat{\varphi}, \end{align} \end{subequations} where $L_{R} = \lambda_{\infty} \Delta_{R} - \frac{\mu_{\infty} k^{2}}{R^{2}} + 2 \lambda_{\infty} \frac{u_{R}}{u} \partial_{R}$. To bring these equations to a canonical form convenient for analysis, first let us apply the transformation of the base state, $u(R) = R^{-1/2} U(R)$, introduced earlier (\S \ref{subsec:ncNLS:BS}), which gives $u^{2} = \frac{U^{2}}{R}$ and $\frac{u_{R}}{u} = - \frac{1}{2} \frac{1}{R} + \frac{U_{R}}{U}$. Second, rescaling the variables $R = \alpha x$, $U = \beta y$, we end up with the following canonical systems: \begin{subequations} \label{EVP:cNLS} \begin{align} \label{EVP:cNLS:case-1-3} \text{\textit{cases 1 and 3}}&: & &\left\{\begin{array}{c} \nu \widehat{\varphi} = L_{x} \widehat{u} + 2 \frac{y^{2}}{x} \widehat{u}, \\ - \nu \widehat{u} = L_{x} \widehat{\varphi}; \end{array}\right. \ \begin{pmatrix} \text{\textit{case 1}}: \ \frac{\alpha^{2}}{\lambda_{\infty}} = - \frac{1}{\mu}, \ \lambda_{\infty} < 0 \\ \text{\textit{case 3}}: \ \frac{\alpha^{2}}{\lambda_{\infty}} = \frac{1}{\mu}, \ \lambda_{\infty} > 0 \end{pmatrix} \\ \label{EVP:cNLS:case-2} \text{\textit{case 2}}&: & &\left\{\begin{array}{c} \nu \widehat{\varphi} = L_{x} \widehat{u} - 2 \frac{y^{2}}{x} \widehat{u}, \\ - \nu \widehat{u} = L_{x} \widehat{\varphi}; \end{array}\right. \ \begin{pmatrix} \frac{\alpha^{2}}{\lambda_{\infty}} = \frac{1}{\mu}, \ \lambda_{\infty} > 0 \end{pmatrix}, \end{align} \end{subequations} where $\nu = \lambda \alpha^{2} / \lambda_{\infty}$ and \begin{align} L_{x} = \Delta_{x} - \frac{\mu_{\infty} k^{2}}{\lambda_{\infty} x^{2}} + 2 \left(-\frac{1}{2} \frac{1}{x} + \frac{y_{x}}{y}\right) \frac{\d}{\d x} = \frac{\d^{2}}{\d x^{2}} - \frac{\widetilde{\mu}}{x^{2}} + 2 \frac{y_{x}}{y} \frac{\d}{\d x}, \ \widetilde{\mu} = \frac{\mu_{\infty} k^{2}}{\lambda_{\infty}}. \end{align} As for the BCs, it is natural to impose \begin{subequations} \label{EVP:BCs:cNLS} \begin{align} x=0&: \ \widehat{u}_{x} = \widehat{\varphi}_{x} = 0, \\ x=\infty&: \ \widehat{u}=\widehat{\varphi}_{x}=0. \end{align} \end{subequations} The challenge of the eigenvalue problem (\ref{EVP:cNLS},\ref{EVP:BCs:cNLS}) is its singularity, i.e. some of the coefficients in \eqref{EVP:cNLS} diverge either at infinity (\textit{case 1}) or at the origin (\textit{cases 2 and 3}), as follows from \S \ref{subsec:ncNLS:BS}.
Clearly, it is not feasible to solve the eigenvalue problems \eqref{EVP:cNLS} analytically for all $x$; at the same time, a numerically accurate treatment of the problem is impeded by the singular behavior mentioned above and by the non-periodic oscillations extending to $x \rightarrow \infty$, which require an ever-increasing number of modes/nodes for resolution\footnote{Due to the identified oscillatory behavior of the solution at infinity, truncating the semi-infinite domain to a finite one necessarily introduces significant errors; also, mapping the semi-infinite domain onto a finite one simply compresses the oscillations near one of the boundaries with ever-increasing frequency.}. However, the very properties that make a numerical approach difficult allow us to resort to an asymptotic way of solving (\ref{EVP:cNLS},\ref{EVP:BCs:cNLS}), based on the peculiar behavior of the corresponding linear operators. The key guiding principle is that if we can solve an eigenvalue problem locally, i.e. for some range of $x$, then, due to the linear character of the problem at hand, the thereby determined eigenvalues hold globally. \textit{Case 1}. The eigenvalue problem assumes the form \begin{subequations} \label{EVP:no-ST:cNLS} \begin{align} \nu \widehat{\varphi} &= L_{x} \widehat{u} + 2 \frac{y^{2}}{x} \widehat{u}, \\ - \nu \widehat{u} &= L_{x} \widehat{\varphi}, \end{align} \end{subequations} where $L_{x} = \frac{\d^{2}}{\d x^{2}} - \frac{\widetilde{\mu}}{x^{2}} + 2 \frac{y_{x}}{y} \frac{\d}{\d x}$ and $\nu = - \frac{\lambda}{\mu}$. Since for large $x$ \begin{align} \frac{y_{x}}{y} \approx - \left(1 + \frac{C^{2}}{4x}\right) \tan{x}, \end{align} we get the approximate eigenvalue problem \begin{subequations} \label{EVP:no-ST:cNLS:infinity} \begin{align} \nu \widehat{\varphi} &= L_{x}^{\infty} \widehat{u}, \\ - \nu \widehat{u} &= L_{x}^{\infty} \widehat{\varphi}, \end{align} \end{subequations} where $L_{x}^{\infty} = \frac{\d^{2}}{\d x^{2}} - 2 \tan{x} \frac{\d}{\d x}$. Applying the operator $L_{x}^{\infty}$ to the second of equations \eqref{EVP:no-ST:cNLS:infinity} produces an equation for $\widehat{\varphi}$: \begin{align} \label{EVP:no-ST:cNLS:infinity:leading-order:original} L_{x}^{\infty 2} \widehat{\varphi} = - \nu^{2} \widehat{\varphi}. \end{align} Let us first treat the simpler problem \begin{align} \label{EVP:no-ST:cNLS:infinity:leading-order} L_{x}^{\infty} \widehat{\varphi} = \left[\frac{\d^{2}}{\d x^{2}} - 2 \tan{x} \frac{\d}{\d x}\right] \widehat{\varphi} = \widetilde{\nu} \, \widehat{\varphi} \ \text{on} \ x \in \left[-\frac{\pi}{2},\frac{\pi}{2}\right], \end{align} which will be justified by the constructed solution satisfying \eqref{EVP:no-ST:cNLS:infinity:leading-order:original}; here $\widetilde{\nu}^{2} = - \nu^{2}$, i.e. $\widetilde{\nu} = \pm \i \nu$. Multiplication by the integrating factor $I(x) = \cos^{2}{x}$ gives a self-adjoint Sturm-Liouville problem \begin{align} \label{SL-problem:case-1} \frac{\d}{\d x}\left[\cos^{2}{x} \frac{\d \widehat{\varphi}}{\d x}\right] = \widetilde{\nu} \, \cos^{2}{x} \, \widehat{\varphi} \ \text{on} \ x \in \left[-\frac{\pi}{2},\frac{\pi}{2}\right]. \end{align} With the change of variables $z = \tan{x}$ equation \eqref{SL-problem:case-1} can be reduced to \begin{align} \left(1 + z^{2}\right)^{2} \widehat{\varphi}_{zz} = \widetilde{\nu} \, \widehat{\varphi} \ \text{on} \ z \in \left(-\infty,\infty\right).
\end{align} The requirement for its solution to be bounded leads to quantization \begin{align} \widehat{\varphi}(z) = \sqrt{1 + z^{2}} \left\{C_{1} \cos{(\alpha \atan{z})} + C_{2} \sin{(\alpha \atan{z})}\right\} \ \text{for} \ 1 - \widetilde{\nu} = \alpha^{2} > 0, \end{align} or, in the original variables, \begin{align} \widehat{\varphi}_{0}(x) = \frac{\cos{\sqrt{1-\widetilde{\nu}} \, x}}{\cos{x}}, \end{align} where one must put $\sqrt{1-\widetilde{\nu}} = 1 + 2 n$, $n \in \Bbb{Z}$, for the solution to be bounded. As a result, $\widetilde{\nu} = 1 - (1 + 2 n)^{2}$, $n \in \Bbb{Z}$. The original eigenvalue $\lambda$ is then \begin{align} \label{EV:cNLS:no-ST:leading} \lambda = - \mu \nu = \pm \i \mu \widetilde{\nu} = \pm \i \mu \left[1 - (1 + 2 n)^{2}\right], \ n \in \Bbb{Z}, \end{align} i.e. one has spectral stability. To see the effect of the higher-order terms in $L_{x}$, including those due to the transverse perturbations with wavenumber $k$, we represent the operator as \begin{align} L_{x} = L_{x}^{\infty} + L_{x}^{\prime} \ \text{with} \ L_{x}^{\prime} = - \frac{C^{2}}{2x} \tan{x} \frac{\d}{\d x} - \frac{\widetilde{\mu}}{x^{2}}. \end{align} From \eqref{EVP:no-ST:cNLS} we deduce a stand-alone equation for $\widehat{\varphi}$: \begin{align} - \nu^{2} \widehat{\varphi} = L_{x}^{2} \widehat{\varphi} + 2 \frac{y^{2}}{x} L_{x} \widehat{\varphi}. \end{align} Linearizing around the zero eigenvalue $\widetilde{\nu}=0$, i.e. $\nu_{0}=0$ as well, and the corresponding eigensolution $\widehat{\varphi}_{0} = 1$, we find for the eigenvalue $\nu^{\prime}=\nu-\nu_{0}$ and the eigenfunction $\widehat{\varphi}^{\prime}$ perturbations: \begin{align} \label{Fredholm:ncNLS:case-1} L_{x}^{\infty 2} \widehat{\varphi}^{\prime} = - \nu^{\prime 2} \widehat{\varphi}_{0} - L_{x}^{\infty} \left(L_{x}^{\prime} \widehat{\varphi}_{0}\right) - 2 \frac{y^{2}}{x} L_{x}^{\prime} \widehat{\varphi}_{0}. \end{align} While the operator $L_{x}^{\infty 2}$ is not self-adjoint, we know that its eigensolution corresponding to the zero eigenvalue is $\widehat{\varphi}^{\prime} = \widehat{\varphi}_{0}$, so we may apply the Fredholm alternative using the same integrating factor $I(x) = \cos^{2}{x}$, which allows us to determine the eigenvalue deviation from zero: \begin{align} \nu^{\prime 2} = \widetilde{\mu} \frac{\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}{L_{x}^{\infty}\left(\frac{1}{x^{2}}\right) I(x) \, \d x}}{\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}{I(x) \, \d x}}, \ \text{where} \ L_{x}^{\infty}\left(\frac{1}{x^{2}}\right) = \frac{6}{x^{4}} + \frac{4}{x^{3}} \tan{x}; \end{align} we also took into account that the last term in \eqref{Fredholm:ncNLS:case-1} does not contribute, as it is odd in $x$. Since both integrals are positive, it follows that $\nu^{\prime 2} < 0$ because $\widetilde{\mu}<0$. Hence, corrections to \eqref{EV:cNLS:no-ST:leading} are purely imaginary and spectral stability is retained. Note that while the Fredholm alternative is global in nature, i.e. requires the knowledge of the eigenfunction for all $x$, due to the periodicity of the solution at infinity it can be applied `locally' over one period of the solution in this asymptotic limit. \textit{Cases 2-3}.
The corresponding equations (\ref{EVP:cNLS:case-2},\ref{EVP:cNLS:case-1-3}) for perturbations: \begin{subequations} \label{EVP:cNLS:cases-2-3} \begin{align} \label{EVP:a:cNLS:cases-2-3} \nu \widehat{\varphi} &= L_{x} \widehat{u} \mp 2 \frac{y^{2}}{x} \widehat{u}, \\ \label{EVP:b:cNLS:cases-2-3} - \nu \widehat{u} &= L_{x} \widehat{\varphi}, \end{align} \end{subequations} can be rewritten in the new variable $z = \alpha_{I} \ln{x}$. Splitting the operator into the main and perturbation parts $L_{x} = L_{x}^{\infty} + L_{x}^{\prime}$, where $L_{x}^{\infty} = \frac{\d^{2}}{\d x^{2}} + 2 \frac{y_{x}}{y} \frac{\d}{\d x}$ and $L_{x}^{\prime} = - \frac{\widetilde{\mu}}{x^{2}}$, yields \begin{align} \label{operator:EVP:cNLS:cases-2-3} L_{x}^{\infty} = \alpha_{I}^{2} e^{-2z/\alpha_{I}} \, \left[\frac{\d^{2}}{\d z^{2}} - 2 \tan{z} \frac{\d}{\d z}\right], \ L_{x}^{\prime} = \alpha_{I}^{2} e^{-2z/\alpha_{I}} \, \left[- \frac{\widetilde{\mu}}{\alpha_{I}^{2}}\right], \end{align} where we assumed that $\widetilde{\mu}$ is small, i.e. we work in the limit of small transverse wavenumbers. Since $z \rightarrow - \infty$ as $x \rightarrow 0$, the last term in equation \eqref{EVP:a:cNLS:cases-2-3}: \begin{align} \frac{y^{2}}{x} \approx C^{2} \cos^{2}{z} = \mathcal{O}(1), \end{align} can be considered as a perturbation; here we used the asymptotics \eqref{asymptotics:y:2-3:origin}. Hence, at the leading order, \eqref{EVP:cNLS:cases-2-3} reduces to \begin{align} - \nu^{2} \widehat{\varphi} = L_{x}^{\infty 2} \widehat{\varphi}, \end{align} or, taking $\widetilde{\nu}^{2} = - \nu^{2}$, to the simpler problem \begin{align} L_{x}^{\infty} \widehat{\varphi} = \widetilde{\nu} \widehat{\varphi}, \end{align} which, similarly to \textit{case 1}, allows us to justify that $\nu = 0$ is an eigenvalue. Hence, we may drop the factor $\alpha_{I}^{2} e^{-2z/\alpha_{I}}$ in the operator \eqref{operator:EVP:cNLS:cases-2-3} and consider the problem on the periodic interval $z \in \left[-\frac{\pi}{2},\frac{\pi}{2}\right]$. Next, treating $\nu^{\prime}$ as a perturbation around the zero eigenvalue, from \eqref{EVP:cNLS:cases-2-3} we find $\nu^{\prime 2} \widehat{\varphi} = - L_{x}^{2} \widehat{\varphi} \pm \frac{2 y^{2}}{x} L_{x} \widehat{\varphi}$ and hence for the perturbation \begin{align} \nu^{\prime 2} \widehat{\varphi}_{0} = - L_{x}^{\infty 2} \widehat{\varphi}^{\prime} - \left(L_{x}^{\infty} L_{x}^{\prime} + L_{x}^{\prime} L_{x}^{\infty}\right) \widehat{\varphi}_{0} \pm \frac{2 y^{2}}{x} \left(L_{x}^{\infty} + L_{x}^{\prime}\right) \widehat{\varphi}_{0}. \end{align} Since $\widehat{\varphi}_{0} = \mathrm{const}$, the second term on the right reduces to $L_{x}^{\infty} L_{x}^{\prime} = - 4 \, \widetilde{\mu} \, x^{-4} \left[1 + \alpha_{I} \tan{z}\right]$, which is of higher order compared to $\frac{2 y^{2}}{x} L_{x}^{\prime} = -2 \widetilde{\mu} C^{2} \frac{\cos^{2}{z}}{x^{2}}$. Thus, to the leading order we get \begin{align} L_{x}^{\infty 2} \widehat{\varphi}^{\prime} = - \nu^{\prime 2} \widehat{\varphi}_{0} \mp 2 \widetilde{\mu} C^{2} \frac{\cos^{2}{z}}{x^{2}} \widehat{\varphi}_{0}. \end{align} From the Fredholm solvability condition it then follows that \begin{align} \int_{-\pi/2}^{\pi/2}{I(z) \left[\mp 2 \, \widetilde{\mu} \, C^{2} \, e^{- 2 z/\alpha_{I}} \, \cos^{2}{z} - \nu^{\prime 2}\right] \d z} = 0, \end{align} where $I(z) = \cos^{2}{z}$; that is, in \textit{case 2} we have spectral stability, while in \textit{case 3} spectral instability.
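The quantization \eqref{EV:cNLS:no-ST:leading} underlying the above analysis can also be cross-checked numerically by discretizing the Sturm-Liouville problem \eqref{SL-problem:case-1}; the Python sketch below (the discretization choices are ours) restricts attention to even eigenfunctions, $\widehat{\varphi}_{x}(0) = 0$, consistent with the eigensolution $\widehat{\varphi}_{0} = 1$ used above:
\begin{verbatim}
# Finite-volume discretization of (cos^2 x phi')' = nu cos^2 x phi on
# [0, pi/2] with zero flux at both ends (the weight cos^2 x vanishes
# at x = pi/2, so that end is naturally no-flux).
import numpy as np
from scipy.linalg import eigh

N = 800
h = (np.pi/2)/N
xc = (np.arange(N) + 0.5)*h   # cell centers
xf = np.arange(N + 1)*h       # cell faces
c = np.cos(xf)**2             # Sturm-Liouville coefficient at the faces
A = np.zeros((N, N))
for j in range(N):
    if j > 0:
        A[j, j-1] += c[j]/h**2
        A[j, j] -= c[j]/h**2
    if j < N - 1:
        A[j, j+1] += c[j+1]/h**2
        A[j, j] -= c[j+1]/h**2
B = np.diag(np.cos(xc)**2)
nu = eigh(A, B, eigvals_only=True)
print(nu[-4:])  # approx [-48, -24, -8, 0] = 1 - (1+2n)^2, n = 3, 2, 1, 0
\end{verbatim}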
The above stability analysis conclusions will be compared with the Lagrange-Dirichlet approach in \S \ref{subsec:Lagrange-Dirichlet}. \subsection{Lagrange-Dirichlet stability analysis} \label{subsec:Lagrange-Dirichlet} While the above spectral analysis provides certain insights into stability, strictly speaking only spectral instability implies linear (and hence nonlinear) instability, while spectral stability does not even imply linear stability \citep{Krechetnikov:2007}. A good visual understanding of the solution stability picture is provided by figure~\ref{fig:combined}, which shows, in particular, that if we infinitesimally perturb the center-type trajectory in figure~\ref{fig:combined1}, it should stay Lyapunov stable, being merely displaced to a nearby center orbit, whereas the saddle-type trajectory is structurally unstable as any small perturbation will drive it away from the saddle point. With these considerations in mind, let us look at the stability picture from the Hamiltonian finite-amplitude viewpoint starting with equation \eqref{ncNLS-ST:abstract}. Applying the scaling \begin{align} \label{scalings:Lagrange-Dirichlet} R = \alpha \, x, \ \psi = \alpha^{-1/2} \beta \, y, \ \tau = \gamma \, t, \end{align} with factors appropriate for \textit{case 1} as per \S \ref{subsec:ncNLS:BS}, i.e. \begin{align} \alpha = \left(-\lambda_{\infty}/\mu\right)^{1/2}, \ \beta = \left(-\lambda_{\infty}/\chi_{\infty}\right)^{1/2} \left(-\mu/\lambda_{\infty}\right)^{1/4}, \ \gamma = \mu^{-1}, \end{align} we arrive at \begin{align} \i y_{t} - \left(-\frac{\gamma \lambda_{\infty}}{\alpha^{2}}\right) \Delta_{x} y + \frac{\gamma \lambda_{\infty}^{\prime}}{\alpha^{2}} \frac{y}{x^{2}} + \frac{\gamma \mu_{\infty}}{\alpha^{2}} \frac{y_{\theta\theta}}{x^{2}} = \frac{\gamma \beta^{2} \chi_{\infty}}{\alpha} |y|^{2} y, \end{align} where \begin{align} -\frac{\gamma \lambda_{\infty}}{\alpha^{2}} = 1, \ \frac{\gamma \lambda_{\infty}^{\prime}}{\alpha^{2}} = - \frac{\lambda_{\infty}^{\prime}}{\lambda_{\infty}} = \frac{1}{4} - d, \ \frac{\gamma \mu_{\infty}}{\alpha^{2}} = - \frac{\mu_{\infty}}{\lambda_{\infty}}, \ \frac{\gamma \beta^{2} \chi_{\infty}}{\alpha} = 1, \end{align} so in the end we arrive at a two-parameter equation: \begin{align}\label{ncNLS:scaled:case-1} \i y_{t} - \Delta_{x} y - \left(d - \frac{1}{4}\right) \frac{y}{x^{2}} - \frac{\mu_{\infty}}{\lambda_{\infty}} \frac{y_{\theta\theta}}{x^{2}} = |y|^{2} y, \ \text{where} \ d = \frac{1}{4} - \frac{\lambda_{\infty}^{\prime}}{\lambda_{\infty}}. \end{align} To bring it to a Hamiltonian form, let $y = u + \i v$, which gives a system for the real and imaginary parts: \begin{subequations} \label{system:ncNLS} \begin{align} - v_{t} &= \Delta_{x} u + \left(d - \frac{1}{4}\right) \frac{u}{x^{2}} + \frac{\mu_{\infty}}{\lambda_{\infty}} \frac{u_{\theta\theta}}{x^{2}} + \left(u^{2}+v^{2}\right) u, \\ u_{t} &= \Delta_{x} v + \left(d - \frac{1}{4}\right) \frac{v}{x^{2}} + \frac{\mu_{\infty}}{\lambda_{\infty}} \frac{v_{\theta\theta}}{x^{2}} + \left(u^{2}+v^{2}\right) v, \end{align} \end{subequations} respectively.
The canonical Hamiltonian form of this system is \begin{align} \label{system:Hamiltonian:GP} J U_{t} = \frac{\delta \mathrm{H}}{\delta U}, \ \text{where} \ J = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \ U = \begin{pmatrix} u \\ v \end{pmatrix}, \end{align} and the Hamiltonian, being a scaled variant of \eqref{H:ncNLS:original}, reads \begin{align} \label{H:ncNLS} \mathrm{H} = - \frac{1}{2} \int{|U_{x}|^{2} \, \d \nu} + \frac{1}{2}\left(d-\frac{1}{4}\right) \int{\frac{|U|^{2}}{x^{2}}\d \nu} - \frac{1}{2}\frac{\mu_{\infty}}{\lambda_{\infty}} \int{\frac{|U_{\theta}|^{2}}{x^{2}}\d \nu} + \frac{1}{4} \int{|U|^{4}\d \nu}, \end{align} where the cylindrical measure \eqref{measure:cylindrical} in these scaled variables becomes $\d \nu = \d\theta \, x \d x$. Assuming that after integration by parts all boundary terms do not contribute (in the azimuthal $\theta$-variable this follows from the periodicity of the solution and its derivatives, while in the radial $x$-variable the boundary terms disappear due to the solution being symmetric, i.e. $U_{x}=0$ at $x=0$ as per \eqref{BCs:cNLS}, or due to vanishing variation $\delta U$ at $x \rightarrow 0$; for $x \rightarrow \infty$ the decay could be due to considering an IVP with compact ICs or also due to vanishing variation $\delta U$), we find for the first variation: \begin{multline} \delta \mathrm{H} = \int{\frac{1}{x}\frac{\partial}{\partial x}\left(x\frac{\partial U}{\partial x}\right) \cdot \delta U\d\nu} + \left(d-\frac{1}{4}\right) \int{\frac{U \cdot \delta U}{x^{2}}\d\nu} \\ + \frac{\mu_{\infty}}{\lambda_{\infty}} \int{\frac{U_{\theta\theta}}{x^{2}} \cdot \delta U \d\nu} + \int{|U|^{2} U \cdot \delta U \d\nu}, \end{multline} where all the terms are arranged in the same order as in \eqref{H:ncNLS}; the dot denotes the scalar product, e.g. $U \cdot \delta U = u \delta u + v \delta v$. Obviously, the base state $y = e^{\i t} \mathcal{Y}(x)$, the stability of which we are studying, is not a fixed point of $\frac{\delta \mathrm{H}}{\delta U}$, but rather that of a Hamiltonian constrained by the conservation of particle number \eqref{conservation-mass:cNLS}, which in rescaled variables reads $\mathrm{N} = \int{|y|^{2} \, \d \nu} = \int{|U|^{2} \, \d \nu} = \mathrm{const}$, so that $\mathcal{Y}(x)$ satisfies both \begin{align} \label{eqn:Y} \mathcal{Y} + \Delta_{x} \mathcal{Y} + \left(d - \frac{1}{4}\right) \frac{\mathcal{Y}}{x^{2}} + \mathcal{Y}^{3} = 0 \ \text{and} \ \frac{\delta \mathrm{H}}{\delta U} + \lambda \frac{\delta \mathrm{N}}{\delta U} = 0, \end{align} where $\lambda=\frac{1}{2}$. Notably, while \eqref{eqn:Y} is expectedly Hamiltonian as it is derived from \eqref{system:Hamiltonian:GP}, the Hamiltonian for \eqref{eqn:Y} is non-local, which follows from multiplying \eqref{eqn:Y} by $x \, \mathcal{Y}_{x}$ and integrating w.r.t. $x$, resulting in \begin{align} \frac{1}{2}\int_{0}^{\infty}{x \frac{\d}{\d x} \mathcal{Y}^{2} \d x} - \frac{1}{2}\int_{0}^{\infty}{x \frac{\d}{\d x} \mathcal{Y}_{x}^{2} \d x} + \left(d - \frac{1}{4}\right) \int_{0}^{\infty}{x \frac{\mathcal{Y} \mathcal{Y}_{x}}{x^{2}} \d x} + \frac{1}{4}\int_{0}^{\infty}{x \frac{\d}{\d x} \mathcal{Y}^{4} \d x} = 0, \nonumber \end{align} after integration by parts.
Therefore, the Hamiltonian for the reduced Hamiltonian system \eqref{eqn:Y} is \begin{align} \mathrm{H}_{V} = \mathrm{H}_{0} - \left(d - \frac{1}{4}\right) \int_{x}^{\infty}{\frac{\mathcal{Y}(x^{\prime}) \mathcal{Y}_{x}(x^{\prime})}{x^{\prime 2}} \, \d x^{\prime}}, \ \mathrm{H}_{0} = \frac{1}{2} \mathcal{Y}^{2} - \frac{1}{2} \mathcal{Y}_{x}^{2} + \frac{1}{4} \mathcal{Y}^{4}, \end{align} where the constant (upper) limit of integration in the last term of $\mathrm{H}_{V}$ can be chosen arbitrarily, though it should be fixed. One way to interpret the nonlocality of $\mathrm{H}_{V}$ is that the trajectory of \eqref{eqn:Y} crosses the level curves of the Hamiltonian $\mathrm{H}_{0}$ of the system without the potential, i.e. locally the energy $\mathrm{H}_{0}$ changes, but the integral quantity $\mathrm{H}_{V}$ is conserved. Returning to the Hamiltonian $\mathrm{H}$ \eqref{H:ncNLS}, its second variation reads \begin{multline} \label{variation:second:case-1} \delta^{2} \mathrm{H} = \frac{1}{2}\int \Big\{- \left[(\delta u_{x})^{2}+(\delta v_{x})^{2}\right] + \left(d-\frac{1}{4}\right) \frac{(\delta u)^{2} + (\delta v)^{2}}{x^{2}} - \frac{\mu_{\infty}}{\lambda_{\infty}} \frac{(\delta u_{\theta})^{2}+(\delta v_{\theta})^{2}}{x^{2}} \\ + \left[\left(3 u^{2} + v^{2}\right) (\delta u)^{2} + \left(3 v^{2} + u^{2}\right) (\delta v)^{2} + 4 u v \, \delta u \delta v\right] \Big\} \d \nu. \end{multline} Hence, formally, the Hessian density can be written as \begin{align}\label{Hessian:ncNLS:case-1} \resizebox{0.975\hsize}{!}{$\begin{pmatrix} \delta u \\ \delta v \\ \delta u_{x} \\ \delta v_{x} \\ \delta u_{\theta} \\ \delta v_{\theta} \end{pmatrix}^{T} \begin{pmatrix} \left(d-\frac{1}{4}\right) \frac{1}{x^{2}} + \left(3 u^{2} + v^{2}\right) & 2 u v & 0 & 0 & 0 & 0 \\ 2 u v & \left(d-\frac{1}{4}\right) \frac{1}{x^{2}} + \left(3 v^{2} + u^{2}\right) & 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & -\frac{\mu_{\infty}}{\lambda_{\infty}}\frac{1}{x^{2}} & 0 \\ 0 & 0 & 0 & 0 & 0 & -\frac{\mu_{\infty}}{\lambda_{\infty}}\frac{1}{x^{2}} \end{pmatrix} \begin{pmatrix} \delta u \\ \delta v \\ \delta u_{x} \\ \delta v_{x} \\ \delta u_{\theta} \\ \delta v_{\theta} \end{pmatrix}$} \end{align} and on its own suggests instability of the base state. However, according to the \citet{Dirac:1964} theory of constrained Hamiltonian systems, we must consider the second variation of the constrained Hamiltonian $\delta^{2} \mathrm{H} + \lambda \, \delta^{2} \mathrm{N}$ and only dynamically accessible variations, i.e. tangent to the constraint, \begin{align} \label{variations:constraint} \delta \mathrm{N} = 0, \ \text{i.e.} \ U \cdot \delta U = 0, \end{align} along with its differential consequences (consistency conditions), thus reducing the dimension of \eqref{Hessian:ncNLS:case-1} by half. Without detailed calculations, from the structure of \eqref{variation:second:case-1} it is clear that the second variation is sign-indefinite, implying instability, with the transverse perturbations playing a destabilizing role.
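The sign-indefiniteness claimed above is readily exhibited numerically; in the following illustrative sketch the values of $d$, $x$, $\mu_{\infty}/\lambda_{\infty}$ and $(u,v)$ are arbitrary test values:
\begin{verbatim}
import numpy as np

# Hessian density (6x6) of case 1 sampled at an illustrative point;
# its eigenvalues have mixed signs, i.e. the quadratic form is indefinite
d, x, ratio, u, v = 1.0, 1.0, 1.0, 0.5, 0.2   # ratio = mu_inf/lambda_inf
c = (d - 0.25)/x**2
M = np.diag([c + 3*u**2 + v**2, c + u**2 + 3*v**2,
             -1.0, -1.0, -ratio/x**2, -ratio/x**2])
M[0, 1] = M[1, 0] = 2*u*v
print(np.linalg.eigvalsh(M))  # two positive and four negative eigenvalues
\end{verbatim}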
Similar calculations for \textit{case 2}, using the appropriate expressions for the scaling constants in \eqref{scalings:Lagrange-Dirichlet} from \S \ref{subsec:ncNLS:BS}, yield, instead of \eqref{ncNLS:scaled:case-1}, the scaled GP equation \begin{align}\label{ncNLS:scaled:case-2} \i y_{t} + \Delta_{x} y + \left(d - \frac{1}{4}\right) \frac{y}{x^{2}} + \frac{\mu_{\infty}}{\lambda_{\infty}} \frac{y_{\theta\theta}}{x^{2}} = |y|^{2} y, \end{align} and the Hessian density matrix \begin{align}\label{Hessian:ncNLS:case-2} \begin{pmatrix} -\left(d-\frac{1}{4}\right) \frac{1}{x^{2}} + \left(3 u^{2} + v^{2}\right) & 2 u v & 0 & 0 & 0 & 0 \\ 2 u v & -\left(d-\frac{1}{4}\right) \frac{1}{x^{2}} + \left(3 v^{2} + u^{2}\right) & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & \frac{\mu_{\infty}}{\lambda_{\infty}}\frac{1}{x^{2}} & 0 \\ 0 & 0 & 0 & 0 & 0 & \frac{\mu_{\infty}}{\lambda_{\infty}}\frac{1}{x^{2}} \end{pmatrix}, \end{align} which, under the same constraint conditions \eqref{variations:constraint}, again implies instability due to sign-indefiniteness of the second variation $\delta^{2} \mathrm{H} + \lambda \, \delta^{2} \mathrm{N}$; notably, the potential now plays a destabilizing role (w.r.t. the longitudinal perturbations) compared to \textit{case 1}, while the transverse perturbations have a stabilizing effect. In \textit{case 3}, however, we get for the scaled GP equation \begin{align}\label{ncNLS:scaled:case-3} \i y_{t} + \Delta_{x} y + \left(d - \frac{1}{4}\right) \frac{y}{x^{2}} + \frac{\mu_{\infty}}{\lambda_{\infty}} \frac{y_{\theta\theta}}{x^{2}} = - |y|^{2} y, \end{align} and the Hessian density matrix \begin{align}\label{Hessian:ncNLS:case-3} \begin{pmatrix} -\left(d-\frac{1}{4}\right) \frac{1}{x^{2}} - \left(3 u^{2} + v^{2}\right) & - 2 u v & 0 & 0 & 0 & 0 \\ - 2 u v & -\left(d-\frac{1}{4}\right) \frac{1}{x^{2}} - \left(3 v^{2} + u^{2}\right) & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & \frac{\mu_{\infty}}{\lambda_{\infty}}\frac{1}{x^{2}} & 0 \\ 0 & 0 & 0 & 0 & 0 & \frac{\mu_{\infty}}{\lambda_{\infty}}\frac{1}{x^{2}} \end{pmatrix}, \end{align} which, under the same constraint conditions \eqref{variations:constraint}, implies instability as nonlinearity now plays the destabilizing role due to the change of sign (from defocusing in \textit{case 2} to focusing in \textit{case 3}). The limits of (\ref{Hessian:ncNLS:case-2},\ref{Hessian:ncNLS:case-3}) for $x \rightarrow \infty$ correspond to defocusing/focusing cases of NLS, respectively. The corresponding Hessian \eqref{Hessian:ncNLS:case-3} thus recovers the known fact that solutions of the focusing 1D NLS are both longitudinally \citep{Zakharov:1968} and transversely \citep{Zakharov:1974} unstable (see further discussion in \S \ref{sec:introduction}), leading to a finite-time singularity when nonlinearity overpowers the dispersive spreading. In conclusion, we are in a position to compare the above stability results with the spectral approach in \S \ref{subsec:spectral-analysis:ncNLS}. While in \textit{case 3} the conclusions of the Lagrange-Dirichlet method based on the Hessian \eqref{Hessian:ncNLS:case-3} are in agreement with the spectral instability results of \S \ref{subsec:spectral-analysis:ncNLS}, in \textit{cases 1} and \textit{2} they appear to be at variance.
However, as mentioned in \S \ref{sec:introduction}, spectral stability does not imply even linear stability, not to mention nonlinear (finite-amplitude) stability -- hence, the contradiction is only apparent. Having said that, the above spectral and Hamiltonian stability analyses apply under different conditions: the spectral approach (which gives spectral stability in \textit{cases 1} and \textit{2} and instability in \textit{case 3}) applies to base states in the form of standing envelope solitary waves that are potentially singular at the origin as in \textit{case 1}, while the Hamiltonian approach applies to base states which are smooth, including at the origin, and decay fast enough at infinity, \textit{or} to the case when the variations (and hence admissible perturbations) vanish at the origin and at infinity. Lastly, it should be noted that since in all three cases the Lagrange-Dirichlet method implies instability, we do not have to deal with the infinite-dimensional nature of the problem, which would otherwise require extra work to establish stability, since positive-definiteness of the constrained Hamiltonian is not a sufficient condition for a local minimum to occur in infinite dimensions \citep{Krechetnikov:2009}. \section{Waves on shallow water} \subsection{Nearly concentric KdV with surface tension} \label{subsec:ncKdV} Let us next consider nearly concentric waves on shallow water, also in the inviscid potential approximation. Since our interest is to analyze the evolution of an envelope of a wave with wavelength $\ell$, the latter sets the natural lengthscale for non-dimensionalization in the horizontal direction, while the quiescent fluid layer depth $h$ does so in the vertical direction: \begin{align} \label{eqn:non-dimensionalization} (r,z) \rightarrow \left(\ell r, h z\right), \ t \rightarrow \frac{\ell}{c_{0}} t, \ \eta \rightarrow a \, \eta, \ \phi \rightarrow a \, h^{-1} \, c_{0} \, \ell \, \phi, \end{align} where the phase speed $c_{0} = (g \, h)^{1/2}$ is dictated by the shallow water dispersion relation $\omega^{2} = k^{2} g \, h$, $a$ is the wave amplitude, and the scaling for $\phi$ follows from balancing the fluid acceleration at the interface with the hydrostatic pressure, $\phi_{t} \sim g \, \eta$. Altogether, this leads to the following non-dimensional system analogous to \eqref{system:deep-water:non-dimensional:cylindrical} in the deep water case \begin{subequations} \label{system:shallow-water:non-dimensional:cylindrical} \begin{align} \label{bulk:Laplace:non-dimensional} z \le 1 + \alpha \, \eta(t,x)&: \quad \left\{\begin{array}{c} \phi_{zz} + \delta^{2} \nabla_{\perp}^{2} \phi = 0, \\ \nabla \phi \rightarrow 0, \ z = 0, \end{array}\right.
\\ \label{interface:kinematic:non-dimensional} z = 1 + \alpha \, \eta(t,x)&: \quad \phi_{z} = \delta^{2} \left[\eta_{t} + \alpha \, \nabla_{\perp} \phi \cdot \nabla_{\perp} \eta\right], \\ \label{interface:dynamic:non-dimensional} z = 1 + \alpha \, \eta(t,x)&: \quad \phi_{t} + \eta + \frac{\alpha}{2} \left[\nabla_{\perp} \phi \cdot \nabla_{\perp} \phi + \frac{1}{\delta^{2}} \phi_{z}^{2}\right] + We \, \delta^{2} \nabla \cdot \mathbf{n} = 0, \end{align} \end{subequations} where $\nabla_{\perp}=\left(\partial_{r},\frac{1}{r} \partial_{\theta}\right)$, the leading-order terms in the curvature are $\nabla \cdot \mathbf{n} = \eta_{rr} + \frac{1}{r} \eta_{r} + \frac{1}{r^{2}} \eta_{\theta\theta} + \mathcal{O}(\alpha^{2} \delta^{2})$, the Weber number $We = \sigma / (\rho \, g \, h^{2})$ measures the effect of surface tension relative to the wave inertia (driven by gravity), $\delta = h / \ell$ is the shallowness parameter, and $\alpha = a / h$ is the scaled wave amplitude (the wave steepness). The latter is treated as small since we are interested in the balance of nonlinear and dispersive effects, which happens at small solution amplitudes only. As motivated by the study of \citet{Kadomtsev:1970} of the transverse instability of plane (1D) solitons described by the KdV equation, there is a natural generalization to weak 2D dependence (npKdV \eqref{eqn:KP}), which was originally carried out in the plane case by the aforementioned authors. In the nearly concentric case, it was argued by \citet{Johnson:1980} that in order to derive a ncKdV one needs the scaling $\tau = \alpha^{6} \delta^{-4} t$, $\xi = \alpha^{2} \delta^{-2} (r-t)$, $\Theta = \delta \alpha^{-2} \theta$, $\Phi = \alpha^{-1} \phi$, and $H = \alpha^{2} \delta^{-3} \eta$ since the balance occurs at a large enough distance from the origin (and hence time) so that the wave amplitude is small due to radial spreading. However, one can derive the ncKdV following the same scaling as in the derivation of 1D KdV on the line \citep{Kano:1986}, i.e. choosing $\alpha = \delta^{2}$, because the wave amplitude $\alpha$ has not been fixed yet: \begin{align} \xi = r - t, \ \tau = \alpha \, t, \ \Theta = \frac{1}{\alpha^{1/2}} \theta, \end{align} where all new variables are $\mathcal{O}(1)$, meaning that $\theta \sim \mathcal{O}(\alpha^{1/2})$ belongs to a narrow sector as opposed to \eqref{ncNLS}, in which the azimuthal coordinate is defined for the entire circle $\theta \in[0,2\pi)$; thus, same as with $\xi$, we may consider $\Theta \in (-\infty,+\infty)$ in the limit $\alpha \rightarrow 0$. Also, if we are looking for large time behavior, $r$ is large too and must be replaced with $r = \left(\alpha \, \xi + \tau\right) / \alpha$; effectively, this means that the geometric spreading measured by the ratio of dimensional quantities $\ell/r \ll 1$ is weak, which in the context of the approximations made amounts to $r h / \ell \gg 1$ for non-dimensional $r$.
The Laplace equation \eqref{bulk:Laplace:non-dimensional} then transforms to \begin{align} \phi_{zz} + \alpha \left[\phi_{\xi\xi} + \frac{\alpha}{\alpha \, \xi + \tau} \phi_{\xi} + \frac{\alpha}{\left(\alpha \, \xi + \tau\right)^{2}} \phi_{\Theta\Theta}\right] = 0, \end{align} with the solution being \begin{multline} \label{sln:ncKdV-2D} \phi = \widetilde{\phi}_{0}(\tau,\xi,\Theta) + \alpha \left(\widetilde{\phi}_{1}(\tau,\xi,\Theta) - \frac{z^{2}}{2} \widetilde{\phi}_{0 \xi\xi}(\tau,\xi,\Theta)\right) + \alpha^{2} \bigg[\widetilde{\phi}_{2}(\tau,\xi,\Theta) - \frac{z^{2}}{2} \bigg(\widetilde{\phi}_{1 \xi\xi}(\tau,\xi,\Theta) \\ + \frac{1}{\tau}\widetilde{\phi}_{0 \xi}(\tau,\xi,\Theta) + \frac{1}{\tau^{2}}\widetilde{\phi}_{0 \Theta\Theta}(\tau,\xi,\Theta)\bigg) + \frac{z^{4}}{24} \widetilde{\phi}_{0 \xi\xi\xi\xi}(\tau,\xi,\Theta)\bigg] + \mathcal{O}(\alpha^{3}). \end{multline} The dynamic boundary condition yields \begin{align} \eta_{0} - \widetilde{\phi}_{0 \xi} + \alpha \left[\eta_{1} - \widetilde{\phi}_{1 \xi} + \frac{1}{2} \widetilde{\phi}_{0 \xi\xi\xi} + \widetilde{\phi}_{0 \tau} + \frac{1}{2} \widetilde{\phi}_{0 \xi}^{2} - We \, \eta_{0 \xi\xi}\right] + \mathcal{O}(\alpha^{2}) = 0, \end{align} while the kinematic one produces \begin{multline} - \alpha \left(1 + \alpha \eta_{0}\right) \widetilde{\phi}_{0 \xi\xi} + \alpha^{2} \left[- \widetilde{\phi}_{1 \xi\xi} - \frac{1}{\tau} \widetilde{\phi}_{0 \xi} - \frac{1}{\tau^{2}} \widetilde{\phi}_{0 \Theta\Theta} + \frac{1}{6} \widetilde{\phi}_{0 \xi\xi\xi\xi}\right] = \\ - \alpha \eta_{0 \xi} + \alpha^{2} \left[\eta_{0 \tau} - \eta_{1 \xi} + \widetilde{\phi}_{0 \xi} \eta_{0 \xi}\right] + \mathcal{O}(\alpha^{3}). \end{multline} Collecting terms of the same order gives $\eta_{0} = \widetilde{\phi}_{0 \xi}$ as well as the following two equations for the difference $\eta_{1} - \widetilde{\phi}_{1 \xi}$: \begin{subequations} \begin{align} \eta_{1} - \widetilde{\phi}_{1 \xi} &= - \frac{1}{2} \widetilde{\phi}_{0 \xi\xi\xi} - \widetilde{\phi}_{0 \tau} - \frac{1}{2} \widetilde{\phi}_{0 \xi}^{2} + We \, \eta_{0 \xi\xi}, \\ \eta_{1 \xi} - \widetilde{\phi}_{1 \xi\xi} &= \eta_{0 \tau} + \frac{1}{\tau} \widetilde{\phi}_{0 \xi} + \frac{1}{\tau^{2}} \widetilde{\phi}_{0 \Theta\Theta} - \frac{1}{6} \widetilde{\phi}_{0 \xi\xi\xi\xi} + \eta_{0} \widetilde{\phi}_{0 \xi\xi} + \eta_{0 \xi} \widetilde{\phi}_{0 \xi}, \end{align} \end{subequations} which after eliminating $\eta_{1} - \widetilde{\phi}_{1 \xi}$ furnish \begin{align} 2 \, \eta_{0 \tau} + \frac{1}{\tau} \eta_{0} + \left(\frac{1}{3} - We\right) \eta_{0 \xi\xi\xi} + 3 \eta_{0} \eta_{0 \xi} + \frac{1}{\tau^{2}} \widetilde{\phi}_{0 \Theta\Theta} = 0 \ \& \ \widetilde{\phi}_{0 \xi} = \eta_{0}, \end{align} or ncKdV \begin{align} \label{eqn:cKP} \left[2 \eta_{0 \tau} + \frac{1}{\tau} \eta_{0} + \left(\frac{1}{3}-We\right) \eta_{0 \xi\xi\xi} + 3 \eta_{0} \eta_{0 \xi}\right]_{\xi} + \frac{1}{\tau^{2}} \eta_{0 \Theta\Theta} = 0. \end{align} Without surface tension, $We=0$, equation \eqref{eqn:cKP} reduces to that derived by \citet{Johnson:1980}.
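Since the bookkeeping in the above elimination is error-prone, the following illustrative sympy sketch verifies it symbolically (writing $p$ for $\widetilde{\phi}_{0}$ and using $\eta_{0} = p_{\xi}$):
\begin{verbatim}
import sympy as sp

# subtract the xi-derivative of the first bracketed relation from the
# second one; the result is the bracket of the ncKdV with the
# (1/3 - We) dispersive coefficient
tau, xi, Th, We = sp.symbols("tau xi Theta We")
p = sp.Function("p")(tau, xi, Th)
eta0 = sp.diff(p, xi)
eqA = (-sp.Rational(1, 2)*sp.diff(p, xi, 3) - sp.diff(p, tau)
       - sp.Rational(1, 2)*sp.diff(p, xi)**2 + We*sp.diff(eta0, xi, 2))
eqB = (sp.diff(eta0, tau) + sp.diff(p, xi)/tau + sp.diff(p, Th, 2)/tau**2
       - sp.Rational(1, 6)*sp.diff(p, xi, 4) + eta0*sp.diff(p, xi, 2)
       + sp.diff(eta0, xi)*sp.diff(p, xi))
res = sp.expand(eqB - sp.diff(eqA, xi))
target = sp.expand(2*sp.diff(eta0, tau) + eta0/tau
                   + (sp.Rational(1, 3) - We)*sp.diff(eta0, xi, 3)
                   + 3*eta0*sp.diff(eta0, xi) + sp.diff(p, Th, 2)/tau**2)
print(sp.simplify(res - target))   # 0
\end{verbatim}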
The reason why the effect of surface tension enters simply by replacing the coefficient $\frac{1}{3}$ in front of $\eta_{0 \xi\xi\xi}$ with $\left(\frac{1}{3} - We\right)$, as in the generalization of the 1D KdV to the case with surface tension \citep{Korteweg:1895,Benjamin:1982,Green:1983}, is that the leading-order curvature term in \eqref{eqn:curvature:cylindrical} assumes, in the considered approximation, the same form as in the plane (1D) case: \begin{align} \nabla \cdot \mathbf{n} = - \eta_{\xi\xi} + \mathcal{O}(\alpha). \end{align} \subsection{Single concentric soliton} \label{subsec:soliton:ncKdV} The single concentric soliton, the transverse instability of which we will be studying, is governed by the $\Theta$-independent variant of \eqref{eqn:cKP}: \begin{align} \label{eqn:cKdV} 2 H_{\tau} + \frac{1}{\tau} H + \left(\frac{1}{3}-We\right) H_{\xi\xi\xi} + 3 H H_{\xi} = 0, \end{align} which is translationally invariant in the radial coordinate $\xi$ as opposed to its deep water counterpart \eqref{ncNLS-ST}. Equation \eqref{eqn:cKdV} is known as a concentric KdV, which was originally derived by \citet{Maxon:1974b} in the context of ion-acoustic waves in a collisionless plasma, whose numerical simulations showed that solitary waves are characterized by $A \, \lambda^{2} \simeq \mathrm{const}$, where $A$ is the amplitude and $\lambda$ the wavelength of the solitary wave. \citet{Cumberbatch:1978} further demonstrated that the amplitude scales with the radial position $r$ as $A \propto r^{-2/3}$. In the context of free-surface gravity waves, equation \eqref{eqn:cKdV} was first derived by \citet{Miles:1978b} from the Boussinesq equations, though without surface tension effects and with $\tau$ replaced by $r$; hence, the self-similar solution was studied in that work in the $(r,\xi)$-variables. Numerically, cylindrical solitary waves were also explored by \citet{Chwang:1976}, on water of constant depth, but using a Boussinesq-type model. Some solutions to \eqref{eqn:cKdV} were constructed, cf. \citet{Calogero:1978,Johnson:1979}, with the inverse scattering transform. On the symmetry side, note that the dilatation group of transformations \begin{align} \label{group:dilatational} \tau \rightarrow \gamma^{-3/2} \, \tau^{\prime}, \ \xi \rightarrow \gamma^{-1/2} \, \xi^{\prime}, \ H \rightarrow \gamma \, H^{\prime}, \end{align} leaves \eqref{eqn:cKdV} invariant. One way to interpret this group is that the scaling constant $\gamma$ drops out when we substitute into \eqref{eqn:cKdV} a solution of the form: \begin{align} H \, \gamma^{-1} = f(\tau \, \gamma^{3/2}, \xi \, \gamma^{1/2}). \end{align} Clearly, this representation corresponds to the structure of the 1D soliton solution \eqref{soliton:1D:KdV} with $\gamma$ being equivalent to $A$. However, such a solution is not allowed in the cylindrical case due to the lack of Galilean invariance. Another implication of \eqref{group:dilatational} is the existence of a self-similar solution, which results from the fact that under \eqref{group:dilatational} the following combinations stay invariant: \begin{align} H \, \tau^{2/3} = H^{\prime} \, \tau^{\prime 2/3}, \ \xi \, \tau^{-1/3} = \xi^{\prime} \, \tau^{\prime -1/3} \equiv \zeta_{0}, \end{align} and thus are functionally related via self-similar variables: \begin{align} \label{sln:cKdV:self-similar} H(\tau,\xi) = \tau^{-2/3} F(\zeta), \ \zeta = \tau^{-1/3} \xi, \end{align} leading to a single solitary wave solution.
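The dilatation invariance \eqref{group:dilatational} is straightforward to verify symbolically; in the following illustrative sketch $a = \frac{1}{3} - We$, and the common factor $\gamma^{5/2}$ multiplying the transformed equation drops out:
\begin{verbatim}
import sympy as sp

# invariance of 2 H_t + H/t + a H_xxx + 3 H H_x = 0 under
# t -> g^(-3/2) t', x -> g^(-1/2) x', H -> g H'
tp, xp, g, a = sp.symbols("tp xp g a", positive=True)
Hp = sp.Function("Hp")(tp, xp)                # H'(t', x')
t, H = g**sp.Rational(-3, 2)*tp, g*Hp
# chain rule: d/dt = g^(3/2) d/dt', d/dx = g^(1/2) d/dx'
lhs = (2*g**sp.Rational(3, 2)*sp.diff(H, tp) + H/t
       + a*g**sp.Rational(3, 2)*sp.diff(H, xp, 3)
       + 3*H*g**sp.Rational(1, 2)*sp.diff(H, xp))
print(sp.simplify(lhs/g**sp.Rational(5, 2)))  # gamma-free: the primed cKdV
\end{verbatim}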
While self-similar solutions of equation~\eqref{eqn:cKdV} have been constructed \citep{Johnson:1980} for $We=0$, we are going to explore the general case of $We>0$. The derivatives of \eqref{sln:cKdV:self-similar} are calculated according to \begin{align} H_{\tau} = - \frac{2}{3} \tau^{-5/3} F(\zeta) + \tau^{-2/3} F^{\prime}(\zeta) \, \zeta_{\tau}, \ \zeta_{\tau} = - \frac{1}{3} \frac{\zeta}{\tau}, \ H_{\xi} = \tau^{-2/3} F^{\prime}(\zeta) \frac{\zeta}{\xi}, \end{align} thus leading to an ODE: \begin{align} \label{ODE:ncKdV:soliton:original} - \frac{1}{3} F - \frac{2 \, \zeta}{3} F^{\prime} + \left(\frac{1}{3} - We\right) F^{\prime\prime\prime} + 3 F F^{\prime} = 0, \end{align} where the translational invariance is lost. Multiplying the latter equation by $F$ and integrating once, we get \begin{align} \label{ODE:ncKdV:soliton:intermediate} - \frac{1}{3} \zeta F^{2} + \left(\frac{1}{3} - We\right) \left[F F^{\prime\prime} - \frac{1}{2} F^{\prime 2}\right] + F^{3} = \mathrm{const}, \end{align} where we used the facts that $\left(\zeta F^{2}\right)^{\prime} = F^{2} + 2 \, \zeta F F^{\prime}$ and $\left(F F^{\prime\prime}\right)^{\prime} = F^{\prime} F^{\prime\prime} + F F^{\prime\prime\prime}$. Further, introducing the rescalings $\zeta = 2^{1/3} \widehat{\zeta}$ and $F = 2^{1/3} \widehat{F}/3$ we can simplify \eqref{ODE:ncKdV:soliton:intermediate} to \begin{align} \label{ODE:ncKdV:soliton:simplified} \left(1 - 3 \, We\right) \left[\widehat{F} \widehat{F}^{\prime\prime} - \frac{1}{2} \widehat{F}^{\prime 2}\right] + 2 \left(\widehat{F}^{3} - \widehat{\zeta} \widehat{F}^{2}\right) = \mathrm{const}. \end{align} Introducing $\widehat{F} = v^{2}$ and putting the constant in \eqref{ODE:ncKdV:soliton:simplified} to zero (since the solution $\widehat{F}$ decays exponentially to zero at one of the infinities, as dictated by the sign of $\left(1 - 3 \, We\right)$), we can reduce \eqref{ODE:ncKdV:soliton:simplified} to the second Painlev\'{e} transcendent \citep{Ince:1944,Miles:1978}: \begin{align} \label{eqn:Painleve} \alpha \, v^{\prime\prime} - \widehat{\zeta} v + v^{3} = 0, \end{align} where $\alpha = 1 - 3 \, We$. Naturally, we will require that $v \rightarrow 0$ as $\widehat{\zeta} \rightarrow \pm \infty$, but the rate of decay depends on the direction taken. Also, if one is interested in the solution of \eqref{eqn:Painleve} for negative values of the parameter $\alpha$, with the transformation $\alpha \rightarrow -\alpha$, $\widehat{\zeta} \rightarrow -\widehat{\zeta}$, $v \rightarrow -v$, equation \eqref{eqn:Painleve} is transformed to $\alpha \, v^{\prime\prime} - \widehat{\zeta} v - v^{3} = 0$, i.e. only the sign of the nonlinear term changes, which has some noticeable quantitative effect on the form of the solution; however, qualitatively the solution looks similar, as one may notice by applying the transformation $\widehat{\zeta} \rightarrow -\widehat{\zeta}$, $v \rightarrow -v$ to figure~\ref{fig:plot-y-a-pos} and comparing with figure~\ref{fig:plot-y-a-neg}. The asymptotics of the solutions to \eqref{eqn:Painleve} is governed by the linearized version of \eqref{eqn:Painleve}, since $v \rightarrow 0$ as $\widehat{\zeta} \rightarrow \pm \infty$, and hence $v$ behaves as the Airy function $\sim \Ai{\left(\widehat{\zeta}\right)}$, e.g.
for $\alpha>0$: \begin{subequations} \label{asymptotics:Painleve} \begin{align} v & \sim C_{+} \frac{e^{-\frac{2}{3} \, \widehat{\zeta}^{3/2}}}{2 \sqrt{\pi} \, \widehat{\zeta}^{1/4}} \ \text{for} \ \widehat{\zeta} \rightarrow \infty; \\ \label{asymptotics:Painleve:minfinity} v & \sim C_{-} \frac{1}{\sqrt{\pi} \, (-\widehat{\zeta})^{1/4}} \cos{\left[\frac{2}{3} \, (-\widehat{\zeta})^{3/2} - \frac{\pi}{4} + \varphi(\widehat{\zeta})\right]} \ \text{for} \ \widehat{\zeta} \rightarrow -\infty; \end{align} \end{subequations} where for $\alpha \neq 1$ the variable $\widehat{\zeta}$ is understood to be rescaled as $\alpha^{-1/3} \, \widehat{\zeta}$; for $\alpha<0$ the asymptotics \eqref{asymptotics:Painleve} inverts because with the transformations $\alpha \rightarrow -\alpha$, $\widehat{\zeta} \rightarrow -\widehat{\zeta}$ the linearized part of \eqref{eqn:Painleve} $\alpha \, v^{\prime\prime} - \widehat{\zeta} v = 0$ stays intact. The phase correction $\varphi(\widehat{\zeta})$ to \eqref{asymptotics:Painleve:minfinity} is computed similarly to Appendix~\ref{appx:asymptotics:infinity} and yields $\varphi(\widehat{\zeta}) \sim - \frac{3 C_{-}^{2}}{4 \pi} \ln{\left[- \widehat{\zeta}\right]}$, cf. \citep{Ablowitz:1977b,Miles:1978}. On the conservation law side, previously \citet{Maxon:1974b}, \citet{Cumberbatch:1978}, and \citet{Ko:1979} claimed the existence of two such laws for \eqref{eqn:cKdV}. The first, $\mathrm{I}_{1}$, is found by integrating \eqref{eqn:cKdV} w.r.t. $\xi$ and assuming that the solution and its derivatives up to second order decay as $\xi \rightarrow \pm \infty$, which yields \begin{align} \label{conservation:mass:ncKdV} 2 \frac{\d}{\d \tau} \mathrm{I}_{1} + \frac{1}{\tau} \mathrm{I}_{1} = 0, \ \mathrm{I}_{1} = \int_{\Bbb{R}}{H \, \d \xi}, \end{align} meaning that $\tau^{1/2} \mathrm{I}_{1} = \mathrm{const}$. However, in this derivation the assumption that $H_{\xi\xi} \rightarrow 0$ as $\xi \rightarrow - \infty$ for $\alpha > 0$, cf. figure~\ref{fig:plot-y-a-pos}, and $\xi \rightarrow \infty$ for $\alpha < 0$, cf. figure~\ref{fig:plot-y-a-neg}, is not valid for a self-similar solution \eqref{sln:cKdV:self-similar} unless one considers a long enough time limit or proves that, due to fast oscillations, the integral of $H_{\xi\xi}$ converges to zero. Indeed, as follows from the analysis of equation \eqref{eqn:Painleve}, in the oscillatory tail the solution $H(\tau,\xi)$ behaves as: \begin{align} \label{asymptotics:H} H \sim \xi^{-1/2}, \ H_{\xi} \sim \xi^{0}, \ H_{\xi\xi} \sim \xi^{1/2}, \end{align} for $\alpha > 0$ and $\xi \rightarrow - \infty$. Similarly, multiplying \eqref{eqn:cKdV} by $H(\tau,\xi)$ and integrating w.r.t. $\xi$ produces the second conservation law \begin{align} \label{conservation:energy:ncKdV} \frac{\d}{\d \tau} \mathrm{I}_{2} + \frac{1}{\tau} \mathrm{I}_{2} = 0, \ \mathrm{I}_{2} = \int_{\Bbb{R}}{H^{2} \, \d \xi}, \end{align} meaning that $\tau \, \mathrm{I}_{2} = \mathrm{const}$, but the same assumption that $H_{\xi\xi} \rightarrow 0$ as $\xi \rightarrow - \infty$ is invalid. The validity of these conservation laws (\ref{conservation:mass:ncKdV},\ref{conservation:energy:ncKdV}) was asserted only on the basis of comparison with numerical solutions \citep{Maxon:1974b,Cumberbatch:1978}.
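Returning to equation \eqref{eqn:Painleve}, solutions of the type shown in figures~\ref{fig:plot-y-a-pos} and \ref{fig:plot-y-a-neg} can be computed by integrating backwards from large $\widehat{\zeta} > 0$, seeding with a small multiple of the decaying Airy asymptotics; in the following illustrative sketch $\alpha = 1$ and the tail amplitude $C$ are arbitrary choices:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import airy

# integrate a*v'' - z*v + v^3 = 0 from z0 >> 1 towards z -> -infinity,
# seeding with C*Ai so that v decays on the right; a = 1, C = 1, z0 = 8
a, z0, C = 1.0, 8.0, 1.0
s = a**(-1/3)                                 # Airy argument scaling
Ai, Aip, _, _ = airy(s*z0)
sol = solve_ivp(lambda z, y: [y[1], (z*y[0] - y[0]**3)/a],
                [z0, -40.0], [C*Ai, C*s*Aip],
                rtol=1e-10, atol=1e-12, dense_output=True)
z = np.linspace(-40.0, z0, 4000)
v = sol.sol(z)[0]
print("max |v| for z < -20:", np.abs(v[z < -20]).max())
# the oscillatory tail envelope decays like (-z)^(-1/4)
\end{verbatim}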
The difficulty of comparing with experimental data was discussed by \citet{Stepanyants:1981}, who nevertheless favored the scaling of the amplitude with the radial coordinate $r$ as $\sim r^{-2/3}$ as opposed to $r^{-1/2}$, which one would expect from the above conservation laws (however, the soliton width may change, thus affecting the scaling). In the context of cylindrical solitary waves, experiments of \cite{Weidman:1988} in the shallow water regime confirmed that an isolated disturbance evolves into a slowly varying solitary wave with amplitude decaying as $A \propto r^{-2/3}$. As discussed above, the conservation laws (\ref{conservation:mass:ncKdV},\ref{conservation:energy:ncKdV}) are valid only for localized solutions, which may exist initially or transiently, but not in the long-time limit when self-similar solutions of the type \eqref{sln:cKdV:self-similar} become established. The form of both (\ref{conservation:mass:ncKdV},\ref{conservation:energy:ncKdV}) suggests the \textit{non-conservative} nature of \eqref{eqn:cKdV}. Indeed, in order to put the latter in a Hamiltonian form, first we would need to transform $H(\tau,\xi) = \tau^{-1/2} u(\tau,\xi)$ to remove the second term in \eqref{eqn:cKdV}, \begin{align} \label{eqn:cKdV:transformed} 2 u_{\tau} + \left(\frac{1}{3}-We\right) u_{\xi\xi\xi} + 3 \, \tau^{-1/2} u u_{\xi} = 0, \end{align} which allows us to put the resulting equation for $u(\tau,\xi)$ in the non-canonical Hamiltonian form: \begin{align} \label{form:Hamiltonian:ncKdV} u_{\tau} = \frac{\partial}{\partial \xi} \frac{\delta \mathcal{H}}{\delta u}, \ \mathcal{H} = \frac{1}{4} \left[\left(\frac{1}{3}-We\right) \int_{-\infty}^{\infty}{u_{\xi}^{2} \, \d\xi} - \tau^{-1/2} \int_{-\infty}^{\infty}{u^{3} \, \d\xi}\right], \end{align} i.e. depending upon the sign of $\frac{1}{3}-We$ the Hamiltonian $\mathcal{H}$ changes from focusing to defocusing, thus suggesting a corresponding change in stability properties, which we will see in \S \ref{subsec:KP:ncKdV:analysis}. The fact that the Hamiltonian form \eqref{form:Hamiltonian:ncKdV} is non-canonical, since the operator $J = \partial_{\xi}$ is non-invertible in general, suggests the existence of Casimirs $C_{i}(\tau)$, $i=1,\ldots$. Also, despite the existence of the Hamiltonian $\mathcal{H}$, the non-autonomous character of \eqref{form:Hamiltonian:ncKdV} and the prior transformation from \eqref{eqn:cKdV} to \eqref{eqn:cKdV:transformed} indicate the non-conservative nature of the ncKdV in the sense that energy is no longer a constant of motion. \begin{figure} \setlength{\labelsep}{-3.0mm} \centering \sidesubfloat[]{\includegraphics[width=2.5in]{plot-y-a-pos.pdf}\label{fig:plot-y-a-pos}} \sidesubfloat[]{\includegraphics[width=2.5in]{plot-y-a-neg.pdf}\label{fig:plot-y-a-neg}} \caption{Solutions to \eqref{eqn:Painleve} for the parameter $\alpha$ taking (a) positive and (b) negative values; for concreteness, we considered $|\alpha|=1$.} \end{figure} \subsection{Non-existence of a critical transverse wavenumber} \label{subsec:KdV:stability-preliminary} To analyze the transverse instability of the self-similar solution \eqref{sln:cKdV:self-similar}, we linearize \eqref{eqn:cKP} around the latter, $\eta_{0} = H + \eta^{\prime}$, thus leading to \begin{align} \label{eqn:perturbation:ncKdV} \left[2 \, \eta_{\tau}^{\prime} + \frac{1}{\tau} \eta^{\prime} + \frac{\alpha}{3} \eta_{\xi\xi\xi}^{\prime} + 3 \left(H \eta^{\prime}\right)_{\xi}\right]_{\xi} + \frac{1}{\tau^{2}} \eta_{\Theta\Theta}^{\prime} = 0.
\end{align} Since the base state $H(\tau,\xi)$ is time-dependent, for a proper interpretation of the stability analysis the perturbation $\eta^{\prime}$ must be scaled in the same fashion as \eqref{sln:cKdV:self-similar}: \begin{align} \eta^{\prime} = \tau^{-2/3} h(\tau,\xi), \end{align} while the independent variables must be transformed according to \begin{align} \label{variables:self-similar:ncKdV} (\tau,\xi,\Theta) \rightarrow (\widehat{\tau} = \ln{\tau},\zeta = \tau^{-1/3} \xi, \widetilde{\Theta} = \tau^{1/3} \Theta), \end{align} thus requiring the transformation of derivatives according to \begin{align} \partial_{\tau} = \frac{1}{\tau} \partial_{\widehat{\tau}} - \frac{1}{3} \frac{\zeta}{\tau} \partial_{\zeta} + \frac{1}{3} \frac{\widetilde{\Theta}}{\tau} \partial_{\widetilde{\Theta}}, \ \partial_{\xi} = \frac{\zeta}{\xi} \partial_{\zeta} = \frac{1}{\tau^{1/3}} \partial_{\zeta}, \ \partial_{\Theta} = \tau^{1/3} \, \partial_{\widetilde{\Theta}}. \end{align} The resulting equation for $h(\widehat{\tau},\zeta,\widetilde{\Theta})$ reads: \begin{align} \left[2 \, h_{\widehat{\tau}} - \frac{2}{3} \left(\zeta h_{\zeta} - \widetilde{\Theta} h_{\widetilde{\Theta}}\right) - \frac{1}{3} h + \frac{\alpha}{3} \, h_{\zeta\zeta\zeta} + 3 \left(F h\right)_{\zeta}\right]_{\zeta} + h_{\widetilde{\Theta}\widetilde{\Theta}} = 0; \end{align} for the purpose of studying the temporal transverse instability, we will look for solutions of the above linear equation in the form \begin{align} h = e^{\lambda \widehat{\tau}} f(\zeta,\widetilde{\Theta}), \end{align} which gives a PDE eigenvalue problem with variable coefficients: \begin{align} \label{EP:ncKdV:rescaled} \left[\left(2 \, \lambda - \frac{1}{3}\right) f - \frac{2}{3} \left(\widehat{\zeta} f_{\widehat{\zeta}} - \widehat{\Theta} f_{\widehat{\Theta}}\right) + \frac{\alpha}{6} \, f_{\widehat{\zeta}\widehat{\zeta}\widehat{\zeta}} + \left(v^{2} f\right)_{\widehat{\zeta}}\right]_{\widehat{\zeta}} + f_{\widehat{\Theta}\widehat{\Theta}} = 0, \end{align} subject to $|f| \rightarrow 0$ for $\widehat{\zeta}, \widehat{\Theta} \rightarrow \pm \infty$ since we are looking for perturbations of finite energy (in the $L^{2}$-norm); in \eqref{EP:ncKdV:rescaled} we used the same variables $\zeta = 2^{1/3} \widehat{\zeta}$ and $F = 2^{1/3} \widehat{F}/3$ as in \eqref{ODE:ncKdV:soliton:simplified} along with the rescaling $\widetilde{\Theta} = 2^{1/6} \widehat{\Theta}$, as well as took into account that $\widehat{F} = v^{2}$ with $v(\widehat{\zeta})$ governed by \eqref{eqn:Painleve}. Hence, even though the base state \eqref{sln:cKdV:self-similar} is time-dependent, the corresponding linear evolution problem for a superimposed perturbation can be reduced to the eigenvalue problem \eqref{EP:ncKdV:rescaled} in the plane of self-similar variables $(\widehat{\zeta},\widehat{\Theta})$, as opposed to other familiar stability problems on time-dependent domains \citep{Homsy:1973,Krechetnikov:2017b}. As evident from the far-field behavior (\ref{asymptotics:Painleve},\ref{asymptotics:H}), the eigenvalue problem \eqref{EP:ncKdV:rescaled} is singular with aperiodically oscillating and growing coefficients, which makes it challenging for accurate numerical approximation and hence deserves a separate study. The latter is beyond the scope of the present work, as we will develop analytical insights into the stability picture below in this section, as well as in \S \ref{subsec:KP:ncKdV:analysis} with the help of a \citet{Kadomtsev:1970} type analysis.
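As an aside, the change of variables \eqref{variables:self-similar:ncKdV} used above can be double-checked symbolically; the following illustrative sketch verifies the quoted transformation of $\partial_{\tau}$ on a generic test function (any smooth $h$ would do):
\begin{verbatim}
import sympy as sp

# check d/dtau = (1/tau) d/dthat - (zeta/(3 tau)) d/dzeta
#              + (Thet/(3 tau)) d/dThet, with that = ln(tau),
# zeta = xi*tau**(-1/3), Thet = tau**(1/3)*Theta
tau, xi, Th = sp.symbols("tau xi Theta", positive=True)
a, b, c = sp.symbols("a b c")            # stand-ins for (that, zeta, Thet)
h = sp.exp(a)*sp.sin(b)*sp.cos(c)        # illustrative test function
sub = {a: sp.log(tau), b: xi*tau**sp.Rational(-1, 3),
       c: tau**sp.Rational(1, 3)*Th}
lhs = sp.diff(h.subs(sub), tau)
rhs = (h.diff(a)/tau - b/(3*tau)*h.diff(b)
       + c/(3*tau)*h.diff(c)).subs(sub)
print(sp.simplify(lhs - rhs))            # 0
\end{verbatim}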
At this point, however, we may note an important property of \eqref{EP:ncKdV:rescaled}, namely its structure indicates that there exists no solution of the form $f \sim e^{\i k \widehat{\Theta}}$, i.e. one which would be periodic in the angular coordinate $\widehat{\Theta}$ and produce a regularly spaced ``spike'' structure. This observation holds regardless of how we would scale the angular variable with respect to time and, of course, is contrary to standard intuition, but can be seen as a consequence of an effective `nonlinearity' built into the linear stability problem through the base state-dependent term $v^{2}$, manifesting itself in the interaction of two effects: as the single soliton travels outwards, (1) the circular domain is stretching, which inevitably leads to the insertion of new wavelengths via the Eckhaus mechanism \citep{Knobloch:2014,Knobloch:2015,Krechetnikov:2017}, and (2) the soliton amplitude decreases, which affects the most unstable wavelength if one adopts the plane (1D) stability picture (\S \ref{sec:introduction}). The competition between these two effects is responsible for a structure irregular along $\widehat{\Theta}$ and the non-existence of a single most amplified wavenumber, thus demonstrating the crucial differences between the transverse instability of plane and cylindrical solitons. As we saw in \S \ref{sec:deep-water}, this phenomenon, however, does not happen in the deep water case, in particular due to the different underlying dispersion relation. In the limit when the transverse part of \eqref{eqn:perturbation:ncKdV} can be considered as a perturbation, in particular for long times, one can see that stability changes to instability with the sign of the parameter $\alpha$ based on the following simple considerations. Taking the Fourier transform of \eqref{eqn:perturbation:ncKdV} in $\Theta$, we get \begin{align} \left[2 \, \widehat{\eta}_{\tau} + \frac{1}{\tau} \widehat{\eta} + \frac{\alpha}{3} \widehat{\eta}_{\xi\xi\xi} + 3 \left(H \widehat{\eta}\right)_{\xi}\right]_{\xi} - \frac{k^{2}}{\tau^{2}} \widehat{\eta} = 0, \end{align} which after integrating twice w.r.t. $\xi$ gives \begin{align} \label{eqn:linearized:ncKdV:g} 2 g_{\tau\xi} + \frac{1}{\tau} g_{\xi} + \frac{\alpha}{3} g_{\xi\xi\xi\xi} + 3 H g_{\xi\xi} - \frac{k^{2}}{\tau^{2}} g = C_{1} \xi + C_{2}, \end{align} where $g_{\xi\xi} = \widehat{\eta}$ and $C_{1}=C_{2}=0$ since $g \rightarrow 0$ as $\xi \rightarrow +\infty$ for $\alpha>0$. To simplify equation \eqref{eqn:linearized:ncKdV:g} further we use the transformation $g(\tau,\xi) = \tau^{-\frac{1}{2}} \chi(\tau,\xi)$, which brings it to \begin{align} 2 \chi_{\tau\xi} + \frac{\alpha}{3} \chi_{\xi\xi\xi\xi} + 3 H \chi_{\xi\xi} - \frac{k^{2}}{\tau^{2}} \chi = 0. \end{align} Considering the last two terms as perturbations for $\xi \rightarrow +\infty$, by splitting the solution $\chi = \chi^{0} + \chi^{1}$ the problem can be recast into \begin{subequations} \begin{align} \label{EVP:ncKdV:0} 2 \chi_{\tau\xi}^{0} + \frac{\alpha}{3} \chi_{\xi\xi\xi\xi}^{0} &= 0, \\ \label{EVP:ncKdV:1} 2 \chi_{\tau\xi}^{1} + \frac{\alpha}{3} \chi_{\xi\xi\xi\xi}^{1} &= - 3 H \chi_{\xi\xi}^{0} + \frac{k^{2}}{\tau^{2}} \chi^{0}, \end{align} \end{subequations} where the ``smallness'' of $H$ for $\xi \rightarrow +\infty$ follows from \eqref{asymptotics:H}. Looking for an asymptotic solution of the first of these equations at $\xi \rightarrow +\infty$, i.e.
$\chi^{0} \sim C_{0} e^{\mu \xi} e^{\lambda \tau}$, we find $\mu^{3} = - 6 \lambda/\alpha$ and the real part $\Re{(\mu)}$ of $\mu$ must be negative as physically relevant solutions must decay at $\xi \rightarrow +\infty$. Hence, regardless of whether $\lambda$ and $\mu$ are complex or real, if $\alpha$ changes sign, then the real part of $\lambda$ must change sign as well. Hence, the behavior is analogous to that of the npKdV \eqref{eqn:KP} and qualitatively similar to that in the GP equation (\S \ref{subsec:spectral-analysis:ncNLS}), i.e. instability appears at sufficiently high Weber numbers, though in the latter case the Weber number is based on the carrier wavelength $2\pi/k_{0}$ as opposed to the layer depth $h$ in the case of ncKdV. However, as we will see in the next section, the short-time stability characteristics of ncKdV with application to the self-similar solution \eqref{sln:cKdV:self-similar} are very different from the long-time limit considered here, which conforms to our intuition developed in the nearly plane case of npKdV -- and this difference is due to the essential time-dependence of the base state \eqref{sln:cKdV:self-similar}. It is easy to show that $\chi^{1}$ essentially follows the time-evolution of $\chi^{0}$, albeit with an algebraic function of $\tau$ multiplying the exponential $e^{\lambda \tau}$. For example, focusing on the last term on the rhs of \eqref{EVP:ncKdV:1} responsible for the input of the azimuthal perturbation, we may look for a particular solution of \eqref{EVP:ncKdV:1} in the form \begin{align} \chi^{1} = A(\tau) e^{\mu \xi}, \ \text{where} \ A(\tau) \sim e^{- \frac{\alpha \mu^3}{6} \tau} \int^{\tau}{\frac{k^{2}}{\widetilde{\tau}^{2}} e^{\left(\lambda+\frac{\alpha \mu^3}{6}\right) \widetilde{\tau}} \, \d \widetilde{\tau}} \sim k^{2} \frac{e^{\lambda \tau}}{\tau} \ \text{for} \ \tau \gg 1. \end{align} Consistent with the observation made earlier, there is no preferred wavenumber $k$ in the azimuthal direction. The contribution of the base state, i.e. the first term on the rhs of \eqref{EVP:ncKdV:1}, can be computed analogously, after the transformation \eqref{variables:self-similar:ncKdV} to self-similar variables $(\tau,\xi) \mapsto (\tau,\zeta)$. \subsection{Kadomtsev-Petviashvili type analysis} \label{subsec:KP:ncKdV:analysis} Finally, let us develop an analysis of the transverse instability of the self-similar solution \eqref{sln:cKdV:self-similar} of the ncKdV equation \eqref{eqn:cKP} rewritten, for ease of notation and comparison with the classical analysis of npKdV \citep{Kadomtsev:1970,Alexander:1997}, in the form \begin{align} \label{eqn:ncKdV:form-2} 2 \, \eta_{\tau} + \frac{1}{\tau} \eta + 3 \, \eta \, \eta_{\xi} + \eta_{\xi\xi\xi} + \frac{\beta}{\tau^{2}} \, \partial_{\xi}^{-1} \eta_{\Theta\Theta} = 0, \end{align} after moving the surface tension factor $\frac{\alpha}{3} = \frac{1}{3} - We$ to the last term in \eqref{eqn:ncKdV:form-2} via the rescaling of \eqref{eqn:cKP} with \begin{align} \tau \rightarrow \gamma \tau, \ \xi \rightarrow \left(\frac{\alpha \, \gamma}{3}\right)^{1/3} \xi, \ \eta_{0} \rightarrow \left(\frac{\alpha}{3 \, \gamma^{2}}\right)^{1/3} \eta, \ \Theta \rightarrow \left(\frac{\alpha \, \gamma^{4}}{3 \, \beta^{3}}\right)^{1/3} \Theta, \end{align} without introducing new notations for the variables, but dropping the index $0$ in $\eta_{0}$; note that in the above rescalings the factor $\beta > 0$ for $\alpha>0$; if, on the other hand, $\alpha<0$, then the factor $\beta < 0$.
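The consistency of the above four scalings can also be verified symbolically: requiring the transformed equation to assume exactly the form \eqref{eqn:ncKdV:form-2} ties $\beta$ to $\alpha$ and $\gamma$ via $\beta = (\alpha/3)^{1/3} \gamma^{10/3}$ (an illustrative observation of ours, consistent with $\mathrm{sgn}(\beta) = \mathrm{sgn}(\alpha)$ as noted above), as the following sketch confirms:
\begin{verbatim}
import sympy as sp

# the rescaled ncKdV equals gamma^(-2) times the target form; al > 0 case
al, g = sp.symbols("al g", positive=True)
tau, xi, Th = sp.symbols("tau xi Theta", positive=True)
beta = (al/3)**sp.Rational(1, 3)*g**sp.Rational(10, 3)
s_t, s_x = g, (al*g/3)**sp.Rational(1, 3)
s_e = (al/(3*g**2))**sp.Rational(1, 3)
s_T = (al*g**4/(3*beta**3))**sp.Rational(1, 3)
eta = sp.Function("eta")(tau, xi, Th)
bracket = (2*s_e/s_t*sp.diff(eta, tau) + s_e/(s_t*tau)*eta
           + (al/3)*s_e/s_x**3*sp.diff(eta, xi, 3)
           + 3*s_e**2/s_x*eta*sp.diff(eta, xi))
old = sp.diff(bracket, xi)/s_x + s_e/(s_t*tau*s_T)**2*sp.diff(eta, Th, 2)
new = (sp.diff(2*sp.diff(eta, tau) + eta/tau + sp.diff(eta, xi, 3)
               + 3*eta*sp.diff(eta, xi), xi)
       + beta/tau**2*sp.diff(eta, Th, 2))
print(sp.simplify(old - new/g**2))            # 0
\end{verbatim}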
When the solution does not depend on the transverse coordinate $\Theta$, equation \eqref{eqn:ncKdV:form-2} admits the self-similar solution \eqref{sln:cKdV:self-similar}: \begin{align} \label{solution:cKdV:self-similar} \eta \, \tau^{2/3} = F(\xi \, \tau^{-1/3}) \ \Rightarrow \ \eta = \tau^{-2/3} F(\zeta_{0}), \ \zeta_{0}=\frac{\xi}{\tau^{1/3}}. \end{align} This is the solution the transverse instability of which we will study by perturbing its amplitude and phase in analogy to the analysis of \citet{Kadomtsev:1970} (see also \citet{Kodama:2018,Ablowitz:1981} for interpretative accounts), who performed the stability analysis of the plane (1D) $\mathrm{sech^{2}}$-soliton \eqref{eqn:self-similar:1D-KdV} with the help of the Krylov-Bogoliubov method \citep{Bogoliubov:1961}, translated here to the stability analysis of the self-similar solution \eqref{solution:cKdV:self-similar}: \begin{align} \label{sln-perturbed:ncKdV} \eta(\tau,T,\xi,\widetilde{\Theta}) = \tau^{-2/3} \left[1 + A(T,\widetilde{\Theta})\right] F\left(\frac{\xi+\varphi}{\tau^{1/3}}\right); \end{align} here we will assume $A = \mathcal{O}(\epsilon)$ and $\varphi = \mathcal{O}(\epsilon^{c})$ with time $T = \epsilon^{a} \tau$ and slow transverse coordinate $\widetilde{\Theta} = \epsilon^{b} \Theta$; the exponents $a$, $b$, and $c$ are to be determined with the requirement that one must have $b > 0$ for a long-wave instability. The time derivative is calculated to become \begin{multline} \eta_{\tau} = - \frac{2}{3} \, \tau^{-5/3} (1+A) \, F(\zeta) + \tau^{-2/3} \, A_{\tau} \, F(\zeta) \\ + \tau^{-2/3} (1+A) \, F^{\prime}(\zeta) \left[-\frac{1}{3} \frac{\zeta}{\tau} + \frac{\varphi_{\tau}}{\tau^{1/3}}\right], \end{multline} with $\zeta=(\xi+\varphi)/\tau^{1/3}$, while the first derivative w.r.t. $\Theta$ reads \begin{align} \eta_{\Theta} = \tau^{-2/3} \, A_{\Theta} F(\zeta) + \tau^{-2/3} (1+A) \, F^{\prime}(\zeta) \frac{\varphi_{\Theta}}{\tau^{1/3}}, \end{align} and the second derivative w.r.t. $\Theta$ \begin{multline} \eta_{\Theta\Theta} = \mathop{\tau^{-2/3} A_{\Theta\Theta}}_{\mathcal{O}(\epsilon^{1+2b+\frac{2}{3}a})}F(\zeta) + \mathop{2 \, \tau^{-2/3} \, A_{\Theta} \, \varphi_{\Theta} \, \tau^{-1/3}}_{\mathcal{O}(\epsilon^{a+1+c+2b})} F^{\prime}(\zeta) + \\ \mathop{\tau^{-2/3} (1+A) \, \varphi_{\Theta}^{2} \, \tau^{-2/3}}_{\mathcal{O}(\epsilon^{\frac{4}{3}a+2c+2b})} F^{\prime\prime}(\zeta) + \mathop{\tau^{-2/3} (1+A) \, \varphi_{\Theta\Theta} \, \tau^{-1/3}}_{\mathcal{O}(\epsilon^{a+c+2b})} F^{\prime}(\zeta), \end{multline} where under each term we show its order of magnitude once the derivatives w.r.t. time $\tau$ and the transverse direction $\Theta$ are understood in terms of their modulational counterparts $T$ and $\widetilde{\Theta}$, and the orders of $A$ and $\varphi$ are taken into account; the nonlinear terms in the above expression, under appropriate justification, must be omitted in the linear analysis. Since $\eta_{\xi} = \tau^{-1} (1+A) F^{\prime}(\zeta)$, $\eta_{\xi\xi\xi} = \tau^{-5/3} (1+A) F^{\prime\prime\prime}(\zeta)$, and $\partial_{\xi}^{-1} = \tau^{1/3} \partial_{\zeta}^{-1}$, at the leading order we get the equation for a self-similar soliton: \begin{multline} 2 \left[-\frac{2}{3} \tau^{-5/3} F(\zeta) + \tau^{-2/3} F^{\prime}(\zeta)\left(-\frac{1}{3} \frac{\zeta}{\tau}\right)\right] + \tau^{-5/3} F(\zeta) + \\ 3 \, \tau^{-2/3} F(\zeta) \, \tau^{-1} F^{\prime}(\zeta) + \tau^{-2/3} F^{\prime\prime\prime}(\zeta) \, \tau^{-1} = 0, \nonumber \end{multline} or, dividing by
$\tau^{-5/3}$: \begin{align} \label{eqn:soliton:1D:ncKdV:self-similar} -\frac{1}{3} F(\zeta) - \frac{2 \, \zeta}{3} F^{\prime}(\zeta) + 3 \, F(\zeta) \, F^{\prime}(\zeta) + F^{\prime\prime\prime}(\zeta) = 0. \end{align} Since in the cylindrical case there is no translational symmetry, we must expand \eqref{eqn:soliton:1D:ncKdV:self-similar} about $\zeta_{0}$, as a shift in $\xi$ changes the stability properties of the cylindrical soliton. Thus, taking into account that \begin{subequations} \begin{align} \zeta F^{\prime}(\zeta) &= \left(F^{\prime}(\zeta_{0}) + F^{\prime\prime}(\zeta_{0}) \frac{\varphi}{\tau^{1/3}}\right)\left(\zeta_{0} + \frac{\varphi}{\tau^{1/3}}\right), \\ F(\zeta) \, F^{\prime}(\zeta) &= \left(F(\zeta_{0}) + F^{\prime}(\zeta_{0}) \frac{\varphi}{\tau^{1/3}}\right)\left(F^{\prime}(\zeta_{0}) + F^{\prime\prime}(\zeta_{0}) \frac{\varphi}{\tau^{1/3}}\right), \end{align} \end{subequations} linearization of equation \eqref{eqn:soliton:1D:ncKdV:self-similar} results in (the first-order perturbation): \begin{multline} -\frac{1}{3} F^{\prime}(\zeta_{0}) \frac{\varphi}{\tau^{1/3}} - \frac{2}{3} \left[F^{\prime}(\zeta_{0}) + \zeta_{0} F^{\prime\prime}(\zeta_{0})\right] \frac{\varphi}{\tau^{1/3}} + \\ 3 \left[F(\zeta_{0}) F^{\prime\prime}(\zeta_{0}) + F^{\prime 2}(\zeta_{0})\right] \frac{\varphi}{\tau^{1/3}} + F^{(iv)}(\zeta_{0}) \frac{\varphi}{\tau^{1/3}} = 0, \end{multline} which must be added (after multiplying by $\tau^{-5/3}$) to the linearization of \eqref{eqn:ncKdV:form-2}: \begin{align} &2 \left[-\frac{2}{3} \tau^{-1} A F(\zeta_{0}) + A_{\tau} F(\zeta_{0}) - \frac{1}{3} \frac{\zeta_{0}}{\tau} A F^{\prime}(\zeta_{0}) + F^{\prime}(\zeta_{0}) \frac{\varphi_{\tau}}{\tau^{1/3}}\right] + \tau^{-1} A F(\zeta_{0}) + \\ &6 \, \tau^{-1} A F(\zeta_{0}) F^{\prime}(\zeta_{0}) + \tau^{-1} A F^{\prime\prime\prime}(\zeta_{0}) + \beta \frac{\tau^{1/3}}{\tau^{2}} \partial_{\zeta}^{-1}\left[A_{\Theta\Theta} F(\zeta_{0}) + \varphi_{\Theta\Theta} \tau^{1/3} F^{\prime}(\zeta_{0})\right] = 0, \nonumber \end{align} multiplied by $\tau^{-2/3}$, altogether producing \begin{align} \varphi \, \tau^{-2} &\left\{-\frac{4}{3} F^{\prime} - \frac{2}{3} \left[F^{\prime} + \zeta_{0} F^{\prime\prime}\right] + F^{\prime} + 3 \left(F F^{\prime\prime} + F^{\prime 2}\right) + F^{(iv)}\right\} \nonumber \\ + 2 &\left\{\underline{-\frac{2}{3} \tau^{-5/3} A F} + \tau^{-2/3} A_{\tau} F + \underline{\tau^{-2/3} A F^{\prime} \left(-\frac{1}{3}\frac{\zeta_{0}}{\tau}\right)} + \tau^{-2/3} F^{\prime} \frac{\varphi_{\tau}}{\tau^{1/3}}\right\} \label{linearization:ncKdV} \\ + &\underline{\tau^{-5/3} A F + 6 \, A \, \tau^{-5/3} F F^{\prime} + \tau^{-5/3} A F^{\prime\prime\prime}} + \beta \tau^{-2} \partial_{\zeta}^{-1} \left[\tau^{-1/3} A_{\Theta\Theta} F + \tau^{-2/3} \varphi_{\Theta\Theta} F^{\prime}\right] = 0.
\nonumber \end{align} Multiplying \eqref{eqn:soliton:1D:ncKdV:self-similar} evaluated at $\zeta=\zeta_{0}$ by $A \, \tau^{-5/3}$ eliminates the terms underlined in \eqref{linearization:ncKdV}, simplifying the latter to \begin{align} \label{eqn:KP:intermediate} &\varphi \, \tau^{-2} \left[-\frac{4}{3} F^{\prime} - \frac{2}{3} \left(F^{\prime} + \zeta_{0} F^{\prime\prime}\right) + F^{\prime} + 3 \left(F F^{\prime\prime} + F^{\prime 2}\right) + F^{(iv)}\right] + \\ &2 \mathop{\tau^{-2/3} \frac{\varphi_{\tau}}{\tau^{1/3}} F^{\prime}}_{\mathcal{O}(\epsilon^{2 a + c})} + 2 \mathop{\tau^{-2/3} A_{\tau} F}_{\mathcal{O}(\epsilon^{1 + \frac{5}{3} a})} + \mathop{3 \, A \, \tau^{-5/3} F F^{\prime}}_{\mathcal{O}(\epsilon^{1 + \frac{5}{3} a})} + \beta \tau^{-2} \partial_{\zeta}^{-1} \left[\mathop{\tau^{-1/3} A_{\Theta\Theta} F}_{\mathcal{O}(\epsilon^{1 + \frac{7}{3} a + 2 b})} + \mathop{\tau^{-2/3} \varphi_{\Theta\Theta} F^{\prime}}_{\mathcal{O}(\epsilon^{\frac{8}{3} a + c + 2 b})}\right], \nonumber \end{align} where the expression in the first brackets also vanishes because equation \eqref{eqn:soliton:1D:ncKdV:self-similar} differentiated once and evaluated at $\zeta=\zeta_{0}$ yields the same expression. As a result, we are left with five terms in \eqref{eqn:KP:intermediate} having, in general, four different exponents in the respective orders of $\epsilon$: \circled{1} $2 a + c$, \circled{2} $1 + \frac{5}{3} a$, \circled{3} $1 + \frac{7}{3} a + 2 b$, \circled{4} $\frac{8}{3} a + c + 2 b$. Consideration of all possible matching combinations leaves only two reasonable options: \begin{enumerate} \item $\circled{1}=\circled{2}$ yields $c = 1 - \frac{1}{3} a$, in which case $\circled{3}=\circled{4}$. In this case, there is a possibility of a slowly developing long-wave instability. \item $\circled{2}=\circled{4}$ yielding $c = 1 - a - 2 b$, while $\circled{1}=\circled{3}$ produces $c = 1 + \frac{1}{3} a + 2 b$. Altogether, this leads to $b = - \frac{1}{3} a$ and therefore $\circled{1}=\circled{2}=\circled{3}=\circled{4}$, in which case the instability is fast $(a<0)$, but still long-wave $(b>0)$. \end{enumerate} Given that the most interesting and physically relevant case is the second one (if an instability develops at short times, it will dominate the subsequent dynamics, and case (i) becomes irrelevant), let us proceed with its analysis: \begin{align} \label{eqn:perturbation:ncKdV:KP} 2 \, A_{\tau} F + 2 \frac{\varphi_{\tau}}{\tau^{1/3}} F^{\prime} + 3 \, A \, \tau^{-1} F \, F^{\prime} + \beta \, \tau^{-5/3} A_{\Theta\Theta} \int^{\zeta_{0}}{F \, \d \widehat{\zeta}} + \beta \, \tau^{-2} \varphi_{\Theta\Theta} F = 0. \end{align} The challenge of applying the \citet{Kadomtsev:1970} type analysis to \eqref{eqn:perturbation:ncKdV:KP} consists, in particular, in the lesser degree of localization of the soliton (\ref{sln:cKdV:self-similar},\ref{solution:cKdV:self-similar}) compared to the plane (1D) case \eqref{soliton:1D:KdV}, as we saw in \S \ref{subsec:soliton:ncKdV}. The goal, however, is still the same -- to decompose \eqref{eqn:perturbation:ncKdV:KP} into functionally independent parts, which would lead to an amplitude equation for the perturbation. Differentiating \eqref{eqn:perturbation:ncKdV:KP} w.r.t.
$\zeta_{0}$, \begin{align} \label{eqn:perturbation:ncKdV:KP:prime} \left(2 \, A_{\tau} + \beta \, \tau^{-2} \varphi_{\Theta\Theta}\right) F^{\prime} + 2 \frac{\varphi_{\tau}}{\tau^{1/3}} F^{\prime\prime} + 3 \, A \, \tau^{-1} \left(F \, F^{\prime}\right)^{\prime} + \beta \, \tau^{-5/3} A_{\Theta\Theta} F = 0, \end{align} multiplying by $F^{\prime}$ and integrating w.r.t. $\zeta_{0}$, for example for $\alpha>0$ from $\zeta_{0}$ to $\infty$ as dictated by the asymptotic behavior \eqref{asymptotics:Painleve}, in the limit $\zeta_{0} \rightarrow -\infty$ we get at the leading order \begin{align} \label{eqn-1:KP-analysis:ncKdV} 2 \, A_{\tau} + \beta \, \tau^{-2} \varphi_{\Theta\Theta} = 0. \end{align} In arriving at \eqref{eqn-1:KP-analysis:ncKdV} we took into account that $\int_{\zeta_{0}}^{\infty}{F^{\prime} \left(F F^{\prime}\right)^{\prime} \, \d \zeta} = \frac{1}{2} \int_{\zeta_{0}}^{\infty}{F^{\prime 3} \, \d \zeta}$ and in the limit $\zeta_{0} \rightarrow -\infty$ the following integrals simplify: $\int_{\zeta_{0}}^{\infty}{F^{\prime} F^{\prime \prime} \, \d \zeta} = \frac{1}{2} \left.F^{\prime 2}\right|_{\zeta_{0}} \sim \mathrm{const}$, $\int_{\zeta_{0}}^{\infty}{F F^{\prime} \, \d \zeta} = \frac{1}{2} \left.F^{2}\right|_{\zeta_{0}} = 0$, as well as \begin{align} \label{condition:KP:limiting-divergence} \lim_{\zeta_{0} \rightarrow - \infty}{\frac{\int_{\zeta_{0}}^{\infty}{F^{\prime 3} \, \d \zeta}}{\int_{\zeta_{0}}^{\infty}{F^{\prime 2} \, \d \zeta}}} = 0, \end{align} since in the limit $\zeta_{0} \rightarrow -\infty$ the integral $\int_{\zeta_{0}}^{\infty}{F^{\prime 2} \, \d \zeta}$ diverges as $\sim \zeta_{0}$, while the integral $\int_{\zeta_{0}}^{\infty}{F^{\prime 3} \, \d \zeta}$ grows slower than $\zeta_{0}$ due to the cancellation of integrals of a fast oscillating function for $\zeta_{0} \rightarrow -\infty$ -- the property also known as the Riemann-Lebesgue lemma in the case of Fourier analysis. Hence, compared to the approach of \citet{Kadomtsev:1970}, we used a different rate of divergence of the corresponding integrals \eqref{condition:KP:limiting-divergence}. Note that the integration in the case $\alpha<0$ would have to be from $-\infty$ to $\zeta_{0}$ with the limit taken as $\zeta_{0} \rightarrow + \infty$ due to the asymptotic behavior of the soliton being reversed compared to \eqref{asymptotics:Painleve}. Similarly, multiplying \eqref{eqn:perturbation:ncKdV:KP:prime} by $F^{\nu}$ with $\nu > 3$ and integrating w.r.t.
$\zeta$, for example for $a>0$ from $\zeta_{0}$ to $\infty$, leads to \begin{align} \label{eqn-2:KP-analysis:ncKdV} 2 \frac{\varphi_{\tau}}{\tau^{1/3}} + 3 \, A \, \tau^{-1} \mathcal{I}_{32} + \beta \, \tau^{-5/3} A_{\Theta\Theta} \mathcal{I}_{12} = 0, \end{align} where $\mathcal{I}_{32} = \mathcal{I}_{3}/\mathcal{I}_{2} > 0$ and $\mathcal{I}_{12} = \mathcal{I}_{1}/\mathcal{I}_{2} < 0$ with the corresponding finite integrals $\mathcal{I}_{1}$--$\mathcal{I}_{3}$ defined as follows: \begin{subequations} \begin{align} \mathcal{I}_{1} &= \int_{\zeta_{0}}^{\infty}{F^{\nu + 1} \, \d \zeta} \ \text{converges for} \ \nu>1, \\ \mathcal{I}_{2} &= \int_{\zeta_{0}}^{\infty}{F^{\nu} F^{\prime\prime} \, \d \zeta} \ \text{converges for} \ \nu>3, \\ \mathcal{I}_{3} &= \int_{\zeta_{0}}^{\infty}{F^{\nu} \left(F F^{\prime}\right)^{\prime} \, \d \zeta} = \left.F^{\nu+1} F^{\prime}\right|_{\zeta_{0}}^{\infty} - \nu \int_{\zeta_{0}}^{\infty}{F^{\nu} F^{\prime 2} \, \d \zeta} \ \text{converges for} \ \nu>2; \end{align} \end{subequations} note that in the last integral $\lim_{\zeta_{0} \rightarrow -\infty}{\left.F^{\nu+1} F^{\prime}\right|_{\zeta_{0}}^{\infty}} = 0$. In the deduction of \eqref{eqn-2:KP-analysis:ncKdV} we also took into account that $\int_{-\infty}^{\infty}{F^{\nu} F^{\prime} \, \d \zeta} = 0$ as well as $F(\zeta_{0}) \sim (-\zeta_{0})^{-1/2}$ for $\zeta_{0} \rightarrow - \infty$ as per \eqref{asymptotics:Painleve}. \begin{figure} \setlength{\labelsep}{-3.0mm} \centering \sidesubfloat[]{\includegraphics[width=2.5in]{shorttime1.pdf}\label{fig:shorttime1}} \sidesubfloat[]{\includegraphics[width=2.5in]{shorttime2.pdf}\label{fig:shorttime2}} \caption{Behavior of two independent solutions (a,b) to \eqref{eqn:amplitude:ncKdV:perturbation:short-time} corresponding to the short-time asymptotics of \eqref{eqn:amplitude:ncKdV:perturbation}; insets show the oscillatory behavior near the origin.} \label{fig:shorttime} \end{figure} As a result, the perturbation evolution is determined by the system (\ref{eqn-1:KP-analysis:ncKdV},\ref{eqn-2:KP-analysis:ncKdV}), which after the Fourier transform in the transverse direction becomes: \begin{subequations} \begin{align} 2 \, \widehat{A}_{\tau} - \beta \, k^{2} \, \tau^{-2} \widehat{\varphi} &= 0, \\ 2 \frac{\widehat{\varphi}_{\tau}}{\tau^{1/3}} + 3 \, \widehat{A} \, \tau^{-1} \mathcal{I}_{32} - \beta \, k^{2} \, \tau^{-5/3} \widehat{A} \, \mathcal{I}_{12} &= 0, \end{align} \end{subequations} and can be reduced to a single equation after elimination of $\widehat{\varphi}$ and the substitution $\widehat{A} = \tau^{-1} \widehat{\eta}$: \begin{align} \label{eqn:amplitude:ncKdV:perturbation} \widehat{\eta}_{\tau\tau} + \frac{\beta k^{2}}{4} \tau^{-10/3} \left(3 \, \mathcal{I}_{32} \, \tau^{2/3} - \beta k^{2} \mathcal{I}_{12}\right) \widehat{\eta} = 0. \end{align} The first observation to make about equation \eqref{eqn:amplitude:ncKdV:perturbation} is that the transverse wavenumber $k$ can be scaled out by $\tau \rightarrow k^{3} \widetilde{\tau}$, and hence no critical wavenumber exists, in agreement with the conclusions of \S \ref{subsec:KdV:stability-preliminary}. While equation \eqref{eqn:amplitude:ncKdV:perturbation} corresponds to the short-time instability, i.e.\ case (ii), within this asymptotic approximation we can consider the short- and long-time behavior of \eqref{eqn:amplitude:ncKdV:perturbation} in the proper multiple-scale sense.
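The above algebra can be verified symbolically -- a minimal sketch using the open-source SymPy library, with $\mathcal{I}_{32}$ and $\mathcal{I}_{12}$ treated as free symbols; it checks both the exponent matching of case (ii) and the elimination leading to \eqref{eqn:amplitude:ncKdV:perturbation}:
\begin{verbatim}
import sympy as sp

# Case (ii) exponent matching: imposing (2)=(4) and (1)=(3) forces
# b = -a/3, after which all four exponents coincide at 1 + 5a/3.
a, b, c = sp.symbols('a b c')
print(sp.solve([sp.Eq(1 + sp.Rational(5, 3)*a,
                      sp.Rational(8, 3)*a + c + 2*b),
                sp.Eq(2*a + c, 1 + sp.Rational(7, 3)*a + 2*b)],
               [b, c]))  # -> {b: -a/3, c: 1 - a/3}

# Eliminate phi_hat from the Fourier-transformed system and
# substitute A_hat = eta_hat/tau to recover the amplitude equation.
tau, k = sp.symbols('tau k', positive=True)
beta, I32, I12 = sp.symbols('beta I32 I12', real=True)
eta = sp.Function('eta')(tau)
A = eta/tau
phi = 2*sp.diff(A, tau)*tau**2/(beta*k**2)   # first equation solved for phi_hat
second = (2*sp.diff(phi, tau)/tau**sp.Rational(1, 3)
          + 3*A*I32/tau - beta*k**2*I12*A*tau**sp.Rational(-5, 3))
target = (sp.diff(eta, tau, 2)
          + beta*k**2/4*tau**sp.Rational(-10, 3)
          * (3*I32*tau**sp.Rational(2, 3) - beta*k**2*I12)*eta)
print(sp.simplify(second*beta*k**2/(4*tau**sp.Rational(2, 3)) - target))  # -> 0
\end{verbatim}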
Clearly, for short times it is the second term in the brackets of \eqref{eqn:amplitude:ncKdV:perturbation} that is dominant, leading to \begin{align} \label{eqn:amplitude:ncKdV:perturbation:short-time} \widehat{\eta}_{\widetilde{\tau}\widetilde{\tau}} + \widetilde{\tau}^{-10/3} \widehat{\eta} = 0, \end{align} after scaling out the constant factor by redefining the time variable and taking into account that $\mathcal{I}_{12}<0$. The solution of \eqref{eqn:amplitude:ncKdV:perturbation:short-time} is a linear combination of two independent modes \begin{align} \widehat{\eta}(\widetilde{\tau}) = C_{1} \widetilde{\tau}^{1/2} J_{3/4}{\left(\frac{3}{2} \widetilde{\tau}^{-2/3}\right)} + C_{2} \widetilde{\tau}^{1/2} J_{-3/4}{\left(\frac{3}{2} \widetilde{\tau}^{-2/3}\right)}, \end{align} which are shown in figure~\ref{fig:shorttime} -- the first approaches a constant plateau, while the second grows linearly in time. The long-time asymptotics of \eqref{eqn:amplitude:ncKdV:perturbation} is dictated by the first term in the brackets, which after scaling out the numerical coefficient produces \begin{align} \label{eqn:amplitude:ncKdV:perturbation:long-time} \widehat{\eta}_{\widetilde{\tau}\widetilde{\tau}} \pm \widetilde{\tau}^{-8/3} \widehat{\eta} = 0, \end{align} where the plus sign corresponds to $\beta >0$ and the minus sign to $\beta < 0$. The solutions of \eqref{eqn:amplitude:ncKdV:perturbation:long-time} read for $\beta>0$: \begin{align} \widehat{\eta}(\widetilde{\tau}) &= C_{1} \widetilde{\tau} \left[-\varrho \cos{\varrho} + \sin{\varrho}\right] + C_{2} \widetilde{\tau} \left[\cos{\varrho} + \varrho \sin{\varrho}\right], \end{align} where $\varrho = 3 \, \widetilde{\tau}^{-1/3}$, and are illustrated in figure~\ref{fig:longtimep}. In the case $\beta<0$, the solution becomes \begin{align} \widehat{\eta}(\widetilde{\tau}) &= C_{1} \widetilde{\tau} \left[\varrho \cosh{\varrho} - \sinh{\varrho}\right] + C_{2} \widetilde{\tau} \left[\cosh{\varrho} - \varrho \sinh{\varrho}\right] \end{align} and is illustrated in figure~\ref{fig:longtimem}. In both cases, one of the solutions approaches a non-zero constant, while the other one grows linearly (the one in figure~\ref{fig:longtimep} is shown on logarithmic scale). Thus, taking into account the transformation $\widehat{A} = \tau^{-1} \widehat{\eta}$ connecting $\widehat{A}$ and $\widehat{\eta}$, we conclude that initial perturbations, measured relative to the unit amplitude of the self-similar solution as per \eqref{sln-perturbed:ncKdV}, are able to grow from infinitesimal values and approach some finite value, so that nonlinear effects start playing a role -- this behavior is atypical for linear stability problems, which usually exhibit either exponential growth or decay, and is more characteristic of the nonlinear behavior predicted earlier in \S \ref{subsec:KdV:stability-preliminary} based on the properties of equation \eqref{EP:ncKdV:rescaled}. Therefore, it is the short-time behavior governed by \eqref{eqn:amplitude:ncKdV:perturbation} that dictates the transverse stability properties of the ncKdV equation and makes the appearance of a transverse instability possible. This situation is not unusual for stability problems involving time-dependent base states such as in the Rayleigh-Plateau instability of a growing cylindrical liquid blob \citep{Krechetnikov:2017b}.
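The closed-form modes above can be checked directly -- a minimal sketch using SymPy, with the sample point $\widetilde{\tau} = 7/10$ an arbitrary choice:
\begin{verbatim}
import sympy as sp

t = sp.symbols('t', positive=True)

# Short-time equation eta'' + t**(-10/3)*eta = 0: check one Bessel
# mode numerically (symbolic reduction of the Bessel recurrences is
# comparatively slow).
eta1 = sp.sqrt(t)*sp.besselj(sp.Rational(3, 4),
                             sp.Rational(3, 2)*t**sp.Rational(-2, 3))
res1 = sp.diff(eta1, t, 2) + t**sp.Rational(-10, 3)*eta1
print(sp.N(res1.subs(t, sp.Rational(7, 10)), 15))  # ~ 0 (round-off only)

# Long-time equations eta'' +/- t**(-8/3)*eta = 0 with rho = 3*t**(-1/3).
rho = 3*t**sp.Rational(-1, 3)
eta2 = t*(sp.sin(rho) - rho*sp.cos(rho))    # beta > 0 mode
print(sp.simplify(sp.diff(eta2, t, 2) + t**sp.Rational(-8, 3)*eta2))  # -> 0
eta3 = t*(rho*sp.cosh(rho) - sp.sinh(rho))  # beta < 0 mode
print(sp.simplify(sp.diff(eta3, t, 2) - t**sp.Rational(-8, 3)*eta3))  # -> 0
\end{verbatim}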
\begin{figure} \setlength{\labelsep}{-3.0mm} \centering \sidesubfloat[]{\includegraphics[width=2.5in]{longtimep.pdf}\label{fig:longtimep}} \sidesubfloat[]{\includegraphics[width=2.5in]{longtimem.pdf}\label{fig:longtimem}} \caption{Behavior of two independent solutions to \eqref{eqn:amplitude:ncKdV:perturbation:long-time} corresponding to (a) $\beta>0$ and (b) $\beta<0$.} \label{fig:longtime} \end{figure} \section{Conclusions} With the goal of studying the stability of axisymmetric solitary waves, in the present work we deduced a proper envelope equation for solitary waves on deep water, which proves to include an inverse-square potential and hence to be of Gross-Pitaevskii (GP) type (\ref{ncNLS},\ref{ncNLS-ST:abstract}); in the shallow water limit we rederived an ncKdV equation \eqref{eqn:cKP} by including surface tension effects and under asymptotic assumptions different from those known before. In the former case, our derivation is set apart from previous studies, which postulated that the corresponding NLS in the axisymmetric case has the Laplace operator unchanged -- our analysis (\S\S \ref{subsec:ncNLS},\ref{subsec:heuristic-analysis}) demonstrates that the covariance principle does not apply to envelope equations despite their ``universal'' character. Given the novelty of the deduced GP equation for deep water waves, we studied its general properties -- conservation laws (\S \ref{subsec:GP:conservation-laws}), Hamiltonian structure (\S \ref{subsec:Lagrange-Dirichlet}), finite-time singularity (\S \ref{subsec:GP:conservation-laws}) -- as well as axisymmetric base states along with geometric and mechanistic interpretations of their various types (\S \ref{subsec:ncNLS:BS}, Appendix~\ref{appx:mechanistic-interpretation}). The stability analysis in the deep water case was performed with the help of both spectral (in the limit of long wavelengths, cf. \S \ref{subsec:spectral-analysis:ncNLS}) and Hamiltonian (for general wavelengths, cf. \S \ref{subsec:Lagrange-Dirichlet}) methods, which complement each other. The challenge of the spectral stability problem \eqref{EVP:cNLS} was its singular nature, dictated by the particularities of the base states (\S \ref{subsec:ncNLS:BS}), which nevertheless enable analytical approaches. We revealed the crucial differences in stability characteristics between cylindrical and plane solitons: namely, there is a threshold in the Weber number $We_{c}=\frac{1}{2}$ above which instability appears in the deep water case, as opposed to the nearly plane NLS \eqref{eqn:NLS-1D}, the 1D plane solitons of which are always unstable to transverse perturbations \citep{Zakharov:1974} regardless of the value of $We$. Thus, surface tension must be sufficiently strong to induce a transverse instability of a cylindrical soliton on deep water\footnote{A qualitative interpretation one may offer is that in the case of a plane soliton surface tension breaks it similarly to a Rayleigh-Plateau instability of a rectilinear liquid column, which takes place for any magnitude of surface tension as long as it is non-zero, while in the case of a cylindrical soliton the Rayleigh-Plateau instability competes with the stabilizing effect of the transverse curvature in the plane of the soliton propagation as well as with the time-dependence of the base state.}.
In the shallow water case, we performed an analysis (\S \ref{subsec:KP:ncKdV:analysis}) in the spirit of \citet{Kadomtsev:1970}, extending it not only to cylindrical geometry but also to self-similar solitons \eqref{sln:cKdV:self-similar}, with the resulting linear amplitude equation \eqref{eqn:amplitude:ncKdV:perturbation}, which governs the perturbation evolution, being highly nonautonomous and exhibiting transient growth of transverse perturbations regardless of the value of the Weber number, in contrast to its plane counterpart, where there is a non-zero critical Weber number\footnote{Obviously, the effect of the solid bottom plays a stabilizing role in the case of plane solitons and thus requires strong enough surface tension to induce a transverse instability, i.e. to get into a Rayleigh-Plateau regime. On the other hand, in the case of a cylindrical soliton the effect of the base soliton time-dependence overpowers any other effects, thus leading to transient growth of perturbations, which should trigger nonlinear effects before the subsequent linear dynamics would lead to a decay of the perturbation.}. For long times the stability picture is consistent with the intuition that the dynamics should approach that of npKdV (\S \ref{subsec:KdV:stability-preliminary}). Also, for general wavenumbers, from the reduction to a 2D eigenvalue problem \eqref{EP:ncKdV:rescaled} in the self-similar plane, we reached the unexpected conclusion that the transverse perturbations must have an irregular structure in the azimuthal $\theta$-direction, cf. \S \ref{subsec:KdV:stability-preliminary}. Numerical study of \eqref{EP:ncKdV:rescaled}, however, represents a challenge for future efforts. While here we explored only the basic properties of the GP and ncKdV solutions, one might expect that, similarly to the standard (nearly plane) versions of these equations, the behavior of their solutions is very rich \citep{Cai:2002}. Including higher-order terms \citep{Dysthe:1979} or a generalization to finite depth \citep{Hasimoto:1972} of the GP equation may offer further insights into axisymmetric water waves, as would establishing a relation between GP and ncKdV similar to that between NLS and KdV \citep{Boyd:2001}, as well as considering the near-critical values of the Weber number $We \rightarrow We_{c}=\frac{1}{3}$ in the ncKdV equation, which should bring up fifth-order derivatives \citep{Green:1983,Hunter:1988}. Also, in the derivation of the ncKdV equation with surface tension (\S \ref{subsec:ncKdV}) we neglected the resonance between the linear (carrier) wave speed $c_{0}$ and the linear phase speed $\omega(k)/k \approx g^{1/2} h^{1/2} (1 + We \, k^{2} h^{2})^{1/2}$, which exhibits itself in the far-field \citep{Boyd:1988,Grimshaw:2003,Grimshaw:2005} and occurs because the graph of $\omega(k)/k$ is not monotonic when $0 < We < \frac{1}{3}$; for $We>\frac{1}{3}$ the graph of $\omega(k)/k$ is monotonic and hence the derivation of ncKdV does not require corrections. Both types of envelope equations -- on deep and shallow water -- could be amenable to inverse scattering transform methods, which may serve as yet another method for studying transverse stability, as was done by \citet{Zakharov:1975} for the KdV solitons. While some solutions of the cKdV \eqref{eqn:cKdV} were constructed with the inverse scattering transform, cf. \citet{Calogero:1978,Johnson:1979,Freeman:1980}, the feasibility of the inverse scattering for the NLS with an inverse-square potential has not been fully explored yet, cf.
\citet{Murphy:2019} and references therein, even though NLS equations with potentials could be amenable to inverse scattering transform analysis and represent an active area of research, cf. \citet{Sasaki:2008,Fajun:2019}. \section*{Acknowledgements} This work was partially supported by the National Science Foundation (NSF) CAREER award under Grant No. 1054267 and the Natural Sciences and Engineering Research Council of Canada (NSERC) under Grant No. 04374. Declaration of interests: the author reports no conflict of interest.
{ "timestamp": "2022-09-20T02:20:45", "yymm": "2209", "arxiv_id": "2209.08628", "language": "en", "url": "https://arxiv.org/abs/2209.08628" }
\section{Introduction} In \cite{BCHM10,HM10}, Birkar, Cascini, Hacon, and M\textsuperscript{c}Kernan established the relative minimal model program with scaling for projective morphisms of complex varieties. Recently, Villalobos-Paz established the analogue of this result for algebraic spaces of finite type over a field of characteristic zero \cite{VP}, and Fujino \cite{Fuj} and Das--Hacon--P\u{a}un \cite{DHP} established the analogue for complex analytic spaces.\medskip \par The goal of this paper is to prove the following theorem. This shows one can give a unified proof of the relative minimal model program with scaling established in \cite{BCHM10,VP,Fuj,DHP} that simultaneously applies to other, larger categories of spaces, with appropriate choices of scaling divisors $A$. We note that the hypotheses on the scaling divisor $A$ can be weakened in case $(\ref{setup:introalgebraicspaces})$. See Theorems \ref{rem:MMP} and \ref{thm:cl1365}. \begin{alphthm}\label{thm:introrelativemmp} Let $\pi\colon X \to Z$ be a projective morphism in one of the following categories, where $X$ and $Z$ are integral and $X$ is normal: \begin{enumerate}[label=$(\textup{\Roman*})$,ref=\textup{\Roman*}] \item[$(0)$] \makeatletter \protected@edef\@currentlabel{0} \phantomsection \label{setup:introalgebraicspaces} \makeatother The category of quasi-excellent Noetherian algebraic spaces of equal characteristic zero over a scheme $S$ admitting dualizing complexes. \item\label{setup:introformalqschemes} The category of quasi-excellent Noetherian formal schemes of equal characteristic zero admitting $c$-dualizing complexes. \item\label{setup:introcomplexanalyticgerms} The category of semianalytic germs of complex analytic spaces. \item\label{setup:introberkovichspaces} The category of $k$-analytic spaces over a complete non-Archimedean field of characteristic zero. \makeatletter \item[{$(\ref*{setup:introberkovichspaces}')$}] \protected@edef\@currentlabel{\ref*{setup:introberkovichspaces}'} \phantomsection \label{setup:introrigidanalyticspaces} \makeatother The category of rigid $k$-analytic spaces over a complete non-trivially valued non-Archimedean field of characteristic zero. \end{enumerate} Let $K_X$ be a canonical divisor on $X$ chosen compatibly with a dualizing complex on $Z$.\footnote{For example, when $Z$ is a $k$-variety or in cases $(\ref{setup:introcomplexanalyticgerms})$, $(\ref{setup:introberkovichspaces})$, and $(\ref{setup:introrigidanalyticspaces})$, we can choose $K_X$ so that $\mathcal{O}_{X_\textup{sm}}( (K_X)_{\vert X_\textup{sm}}) = \det(\Omega_{X_\textup{sm}/k})$ where $X_\textup{sm}$ is the smooth locus of $X$.} \par Suppose $X$ is $\mathbf{Q}$-factorial and let $\Delta$ be a $\mathbf{Q}$-divisor such that $(X,\Delta)$ is klt. Let $A$ be a $\mathbf{Q}$-invertible sheaf on $X$ such that the following conditions hold: \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item $A$ is $\pi$-ample. \item $K_X+\Delta+A$ is $\pi$-nef. \end{enumerate} Then, the relative minimal model program with scaling of $A$ over $Z$ exists, and it terminates over every affinoid subdomain\footnote{In case $(\ref{setup:introalgebraicspaces})$ for schemes and in case $(\ref{setup:introformalqschemes})$, we mean ``affine open.'' In case $(\ref{setup:introalgebraicspaces})$ for algebraic spaces, we mean ``\'etale affine.''} in $Z$.
\end{alphthm} In addition to the results in \cite{BCHM10,VP,Fuj,DHP} mentioned above, as far as we are aware, the only known case of $(\ref{setup:introformalqschemes})$, $(\ref{setup:introberkovichspaces})$, and $(\ref{setup:introrigidanalyticspaces})$ is the case when $X$ is a rigid analytic surface. In this case, the relative minimal model program is known \cite{Mit11}, and also holds when $\Char(k) > 0$. For case $(\ref{setup:introalgebraicspaces})$, the relative minimal model program for schemes holds without the assumption on characteristic in dimension $2$ \cite{Sha66,Lic68,Lip69,Tan18}, and for residue characteristics $\notin \{2,3,5\}$ in dimension $3$ \cite{Kaw94,Kol21qfac,TY,BMPSTWW}.\medskip \par Theorem \ref{thm:introrelativemmp} illustrates the power of working in the general context of excellent rings and schemes. Since all rings appearing in these different contexts are excellent \cite{Kie69,Duc09,AT19}, it suffices to prove our results on the minimal model program in the algebraic setting for schemes or algebraic spaces, and then use GAGA-type theorems from \cite{Ser56,EGAIII1,Kop74,Ber93,Poi10,AT19} to move between the algebraic and analytic settings. To do so in this paper, we prove GAGA-type theorems for dualizing complexes and Grothendieck duality in \S\ref{sect:dualityandgaga}, which allow us to move from settings $(\ref{setup:introformalqschemes})$, $(\ref{setup:introcomplexanalyticgerms})$, $(\ref{setup:introberkovichspaces})$, and $(\ref{setup:introrigidanalyticspaces})$ to the algebraic setting. This strategy using GAGA was previously used for resolutions of singularities in \cite{Tem12,Tem18} and for weak factorization of birational maps in \cite{AT19}. However, as far as we are aware, our GAGA-type theorems for Grothendieck duality and dualizing complexes are new in cases $(\ref{setup:introcomplexanalyticgerms})$, $(\ref{setup:introberkovichspaces})$, and $(\ref{setup:introrigidanalyticspaces})$ (the case for formal schemes appears in \cite{ATJLL99}). \par When $X$ and $Z$ are schemes, case $(\ref{setup:introalgebraicspaces})$ answers a question of Koll\'ar \cite[(23)]{Kolhol}, and is already of particular interest. This is because of the important role (quasi-)excellent schemes play in the birational geometry of algebraic varieties, for example in proving resolutions of singularities \cite{Hir64}, the theory of generic limits \cite{dFM09,Kolhol} and the proof of the ACC conjecture for log canonical thresholds in the smooth or bounded singularity case \cite{dFEM10,dFEM11}, and cases of the ACC conjecture for minimal log discrepancies in dimension three \cite{Kaw15}.\medskip \par One of the key results shown in \cite{BCHM10} to establish Theorem \ref{thm:introrelativemmp}$(\ref{setup:introalgebraicspaces})$ for complex varieties is the finite generation of relative adjoint rings \cite[Theorem 1.2(3)]{BCHM10}. We show the following finite generation result, following the approach of Cascini--Lazi\'c \cite[Theorem A]{CL12} and Corti--Lazi\'c \cite[Theorem 2]{CL13} for complex varieties. Case $(\ref{setup:introcomplexanalyticgerms})$ below gives a new proof of \citeleft\citen{Fuj}\citemid Theorem F(1)\citepunct \citen{DHP}\citemid Theorem 1.3\citeright\ (note that \cite{DHP} also uses the strategy in \cite{CL12,CL13} in the complex analytic setting). \begin{alphthm}\label{thm:introfinitegen} Fix notation as in the first paragraph of Theorem \ref{thm:introrelativemmp}. 
Let $\Delta_i$ be effective $\mathbf Q$-Weil divisors on $X$ for $i \in \{1,2,\ldots,\ell\}$ such that $K_X+\Delta_i$ is $\mathbf{Q}$-Cartier and $(X,\Delta_i)$ is klt for each $i$. Let $A_i$ be $\pi$-nef $\mathbf{Q}$-invertible sheaves for $i \in \{1,2,\ldots,\ell\}$. Assume that for each $i$, either $A_i$ is $\pi$-ample, or there exists a rational number $c_i\in (-\infty,1]$ such that $c_iK_X+\Delta_i$ is $\mathbf{Q}$-Cartier and $\pi$-big. Then, the relative adjoint ring \[ \bigoplus_{(m_1,m_2,\ldots,m_\ell) \in \mathbf{N}^\ell} \pi_*\mathcal{O}_X\Biggl( \Biggl\lfloor \sum_{i=1}^\ell m_i(K_X+\Delta_i+A_i)\Biggr\rfloor \Biggr) \] is of finite type over every affinoid subdomain in $Z$. In particular, if $Z$ has a finite cover by affinoid subdomains, then the relative adjoint ring is generated by finitely many summands. \end{alphthm} \par An interesting aspect of our proof is that our version of \cite[Theorem B]{CL12} (which states that $\mathcal{E}_A(V)$ is a rational polytope) holds when $Z$ is a scheme of mixed characteristic. See Theorem \ref{thm:cl12b}. This is because we can deduce it from \cite[Theorem B]{CL12} by passing to generic fibers. We note that Theorem \ref{thm:introfinitegen} in cases $(\ref{setup:introformalqschemes})$, $(\ref{setup:introcomplexanalyticgerms})$, $(\ref{setup:introberkovichspaces})$, and $(\ref{setup:introrigidanalyticspaces})$ is not used to prove the corresponding cases of Theorem \ref{thm:introrelativemmp}.\medskip \par Theorems \ref{thm:introrelativemmp} and \ref{thm:introfinitegen} unify the aforementioned results in \cite{BCHM10,VP,Fuj,DHP} since we are able to deduce them all from the case of excellent schemes. In the scheme-theoretic setting, there are several key new inputs compared to \cite{KMM87,BCHM10,HM10,CL12,CL13}. First, we need the Kawamata--Viehweg vanishing theorem for proper morphisms of schemes of equal characteristic zero, which was recently established by the second author in \cite{Mur}. Second, we need new, relative versions of Bertini theorems (see \S\ref{sect:bertini}) for globally generated invertible sheaves, since the usual Bertini theorems for schemes of finite type over a field do not apply. Similar Bertini theorems for very ample invertible sheaves were shown in \cite{BMPSTWW}. Third, as mentioned above, to establish the minimal model program in other categories, we need to prove appropriate GAGA theorems for Grothendieck duality and dualizing complexes. Finally, we prove variants of results in \cite{VP} to glue steps of the minimal model program together after constructing them over affinoid subdomains in $Z$.\medskip \par To prove Theorem \ref{thm:introrelativemmp}, we also need versions of the Basepoint-free, Contraction, Rationality, and Cone theorems for schemes and algebraic spaces. We give two proofs of these results: one by adapting the strategy in \cite{KMM87} for complex varieties (see \S\ref{sect:bpfcontrratcone}), and another by adapting the strategy in \cite{CL13} for complex varieties (see \S\ref{sect:fundrevisit}). We have included the results proved by adapting the strategy in \cite{KMM87} because they apply more generally to weakly log terminal pairs, and this version of the Rationality theorem (Theorem \ref{thm:rationality}) also yields information on the denominators that can appear.
However, we will use some of our versions of the results in \cite{CL13} to deduce termination with scaling.\medskip \par Finally, we mention that one can consider other generalizations of the minimal model program to other categories of spaces. For example, for complex analytic spaces (case $(\ref{setup:introcomplexanalyticgerms})$), the minimal model program for K\"ahler threefolds \cite{HP15,HP16,CHP16,DO22,DH20}, classes of K\"ahler fourfolds \cite{DHP}, and log surfaces in Fujiki's class $\mathcal{C}$ \cite{Fuj21} are known. For formal schemes (case $(\ref{setup:introformalqschemes})$), Smith initiated the study of a minimal model program for pseudo-proper formal schemes over a field in \cite{Smi17}. A major difficulty for this class of formal schemes is that, as Smith showed, there are counterexamples to Kodaira-type vanishing theorems for smooth formal schemes that are pseudo-projective over fields of characteristic zero \cite[Proposition 4.3.1]{Smi17}. \subsection*{Outline} This paper consists of five parts.\smallskip \par In Part \ref{part:prelim}, we establish the necessary preliminaries for the minimal model program for schemes and algebraic spaces. Compared to the case of varieties, there are subtleties in working with divisors on algebraic spaces and with $\mathbf{Q}$-factoriality. We also prove many fundamental results on relative nefness and bigness for morphisms of algebraic spaces, for example the theorem of the base (Theorem \ref{thm:RelNSFinite}) and Kleiman's criterion for relative ampleness (Theorem \ref{thm:ampisinterior}), which we need to establish theorems of the minimal model program for algebraic spaces in our setting. \par We note that to prove Theorem \ref{thm:introrelativemmp}, it suffices to prove Theorem \ref{thm:introrelativemmp}$(\ref{setup:introalgebraicspaces})$ for schemes. This is because one can deduce Theorem \ref{thm:introrelativemmp}$(\ref{setup:introalgebraicspaces})$ for algebraic spaces from the scheme-theoretic case using the framework in \cite{VP}, and cases $(\ref{setup:introformalqschemes})$, $(\ref{setup:introcomplexanalyticgerms})$, $(\ref{setup:introberkovichspaces})$, and $(\ref{setup:introrigidanalyticspaces})$ only use the scheme case of Theorem \ref{thm:introrelativemmp}$(\ref{setup:introalgebraicspaces})$. However, we have included the foundational results necessary to prove Theorem \ref{thm:introrelativemmp} directly for algebraic spaces because, in verifying the necessary results for schemes that we could not locate in the literature, we realized that we could prove the same statements for algebraic spaces. We believe these statements to be of independent interest and hope they will be useful for future reference. Part \ref{part:prelim} also illustrates what foundational results would be necessary to prove Theorem \ref{thm:introrelativemmp} directly in cases $(\ref{setup:introformalqschemes})$, $(\ref{setup:introberkovichspaces})$, and $(\ref{setup:introrigidanalyticspaces})$ (see \cite{Fuj,DHP} for case $(\ref{setup:introcomplexanalyticgerms})$).\smallskip \par In Part \ref{part:bertiniandfund}, we prove our new relative versions of Bertini theorems for schemes. These theorems will become necessary later to perturb klt pairs without having global Bertini theorems available, as would be the case for varieties over a field.
We also show the fundamental theorems of the minimal model program (the Basepoint-freeness, Contraction, Rationality, and Cone theorems) for algebraic spaces, adapting the strategy in \cite{KMM87} for complex varieties. While we also prove dual versions of these theorems for klt pairs using the method in \cite{CL13} (see \S\ref{sect:fundrevisit}), we have included these results because they hold more generally for weakly log terminal pairs, and the Rationality theorem \ref{thm:rationality} provides some more information about the denominators that appear.\smallskip \par In Part \ref{part:fingen}, we prove Theorem \ref{thm:introfinitegen} for schemes using the strategy of Cascini--Lazi\'c. A key input is the version of the Kawamata--Viehweg vanishing theorem proved by the second author \cite[Theorem A]{Mur}. Because of the lack of Bertini theorems, however, we need to formulate many of our results in terms of restriction maps on global sections instead of linear systems as is done in \cite{CL12}. This allows us to reduce to the case when the base scheme $Z$ is the spectrum of an excellent local $\mathbf{Q}$-algebra. We conclude the part by proving finite generation for klt pairs and giving alternative proofs of the Contraction, Rationality, and Cone theorems by adapting the strategy in \cite{CL13} for complex varieties. These results will be used in Part \ref{part:relativemmpforschemes} to prove termination with scaling.\smallskip \par In Part \ref{part:relativemmpforschemes}, we establish the existence of flips and termination with scaling for schemes and algebraic spaces, using Theorem \ref{thm:introfinitegen}. This completes the proof of Theorem \ref{thm:introrelativemmp}$(\ref{setup:introalgebraicspaces})$. We then give some applications of these results by showing that $\mathbf{Q}$-factorializations and terminalizations exist, which for simplicity we prove only for schemes.\medskip \par In Part \ref{part:othercats}, we set up the necessary preliminaries for Theorem \ref{thm:introrelativemmp} in cases $(\ref{setup:introformalqschemes})$, $(\ref{setup:introcomplexanalyticgerms})$, $(\ref{setup:introberkovichspaces})$, and $(\ref{setup:introrigidanalyticspaces})$. We then prove our GAGA-type results for dualizing complexes and Grothendieck duality in \S\ref{sect:dualityandgaga}. In \S\ref{sect:setupforothercats}, we set our notation for different categories of spaces and check that the hypotheses in Theorems \ref{thm:introrelativemmp} and \ref{thm:introfinitegen} are preserved under algebraization. Finally, we prove Theorems \ref{thm:introrelativemmp} and \ref{thm:introfinitegen} in \S\ref{sect:mmpforothercats}. \subsection*{Acknowledgments} We are grateful to Dan Abramovich for helpful conversations about \cite{AT19}, to Jack Hall for helpful conversations on the GAGA theorems in \cite{Hal}, and to James M\textsuperscript{c}Kernan for pointing out the reference \cite{Smi17}. We would also like to thank J\'anos Koll\'ar, Joaqu\'in Moraga, David Villalobos-Paz, and Chenyang Xu for helpful discussions. \subsection*{Notation and conventions} All rings are commutative with identity, and all ring maps are unital. \par For a ringed space or ringed site $X$, $\mathbf{D}_{\textup{c}}(X)$ denotes the derived category of $\mathcal{O}_X$-modules with coherent cohomology sheaves.
We can then define $\mathbf{D}^+_{\textup{c}}(X)$, $\mathbf{D}^-_{\textup{c}}(X)$, and $\mathbf{D}^b_{\textup{c}}(X)$ to be the bounded-below, bounded-above, and bounded derived categories of $\mathcal{O}_X$-modules with coherent cohomology sheaves, respectively. When the notion of quasi-coherent $\mathcal{O}_X$-modules is defined, we define $\mathbf{D}_{\textup{qc}}(X)$, $\mathbf{D}^+_{\textup{qc}}(X)$, $\mathbf{D}^-_{\textup{qc}}(X)$, and $\mathbf{D}^b_{\textup{qc}}(X)$ similarly. \par Let $X$ be an algebraic space over a scheme $S$. We say that a quasi-coherent sheaf $\mathcal{A}$ of $\mathcal{O}_X$-algebras is \textsl{of finite type} if for every affine scheme $U = \Spec(R)$ \'etale over $X$, we have $\mathcal{A}_{\vert U} \cong \widetilde{A}$ where $A$ is an $R$-algebra of finite type (see \citeleft\citen{EGAInew}\citemid (2.2.5)\citepunct \citen{stacks-project}\citemid \href{https://stacks.math.columbia.edu/tag/07V8}{Tag 07V8}\citeright). \begingroup \makeatletter \renewcommand{\@secnumfont}{\bfseries} \part{Preliminaries for schemes and algebraic spaces}\label{part:prelim} \makeatother \endgroup In this part, we establish preliminary definitions and results that will be used throughout the paper. For the reader's convenience, we have tried to provide references for corresponding material in \cite{KMM87}, \cite{CL12}, and \cite{CL13}. We use the definition of algebraic spaces over a scheme $S$ from \cite[\href{https://stacks.math.columbia.edu/tag/025Y}{Tag 025Y}]{stacks-project}.\bigskip \section{Quasi-excellence, excellence, and dualizing complexes}\label{sect:excellentschemes} \subsection{Quasi-excellence and excellence} We will mostly work with quasi-excellent or excellent schemes. \begin{citeddef}[{\citeleft\citen{EGAIV2}\citemid D\'efinition 7.8.2\citeright}]\label{def:excellent} Let $R$ be a ring. We say that $R$ is \textsl{excellent} if the following conditions are satisfied. \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item\label{def:excellentnoeth} $R$ is Noetherian. \item\label{def:excellentuc} $R$ is universally catenary. \item\label{def:excellentgring} $R$ is a \textsl{G-ring}, i.e., for every prime ideal $\mathfrak{p} \subseteq R$, the $\mathfrak{p}$-adic completion map $R_\mathfrak{p} \to \widehat{R}_\mathfrak{p}$ has geometrically regular fibers. \item\label{def:excellentj2} $R$ is \textsl{J-2}, i.e., for every $R$-algebra $S$ of finite type, the regular locus in $\Spec(S)$ is open. \end{enumerate} We say that $R$ is \textsl{quasi-excellent} if $(\ref{def:excellentnoeth})$, $(\ref{def:excellentgring})$, and $(\ref{def:excellentj2})$ are satisfied. A locally Noetherian scheme $X$ is \textsl{excellent} (resp.\ \textsl{quasi-excellent}) if it admits an open affine covering $X = \bigcup_i \Spec(R_i)$ such that every $R_i$ is excellent (resp.\ quasi-excellent). \end{citeddef} Since quasi-excellence is an \'etale local property by \cite[Theorem 32.2]{Mat89}, we can define quasi-excellence as follows. \begin{definition}[see {\citeleft\citen{CT20}\citemid \S2.1\citeright}] Let $X$ be a locally Noetherian algebraic space over a scheme $S$. We say that $X$ is \textsl{quasi-excellent} if for every \'etale morphism $U \to X$ from a scheme $U$, the scheme $U$ is quasi-excellent. \end{definition} \subsection{Dualizing complexes} We will need the notion of a dualizing complex to make sense of canonical sheaves and divisors, which we will define in \S\ref{sect:candiv}.
\begin{citeddef}[{\citeleft\citen{Har66}\citemid Chapter V, Definition on p.\ 258\citepunct \citen{Con00}\citemid p.\ 118\citepunct \citen{stacks-project}\citemid \href{https://stacks.math.columbia.edu/tag/0A87}{Tag 0A87}\citeright}] \label{def:dualizingcomplexschemes} Let $X$ be a locally Noetherian scheme. A \textsl{dualizing complex} on $X$ is an object $\omega_X^\bullet$ in $\mathbf{D}^b_{\mathrm{c}}(X)$ that has finite injective dimension, such that the natural morphism \[ \mathrm{id} \longrightarrow \RRHHom_{\mathcal{O}_X}\bigl(\RRHHom_{\mathcal{O}_X}(-,\omega_X^\bullet), \omega_X^\bullet\bigr) \] of $\delta$-functors on $\mathbf{D}_{\textup{c}}(X)$ is an isomorphism. \end{citeddef} \begin{remark} Locally Noetherian schemes admitting dualizing complexes have finite Krull dimension and are universally catenary \citeleft\citen{Har66}\citemid Chapter V, Corollary 7.2\citepunct \citen{stacks-project}\citemid \href{https://stacks.math.columbia.edu/tag/0A80}{Tag 0A80}\citeright. Thus, quasi-excellent schemes admitting dualizing complexes are excellent. \end{remark} \par We can define dualizing complexes for algebraic spaces \'etale-locally. \begin{citeddef}[{\citeleft\citen{AB10}\citemid Definition 2.16\citepunct \citen{stacks-project}\citemid \href{https://stacks.math.columbia.edu/tag/0E4Z}{Tag 0E4Z}\citeright}] Let $X$ be a locally Noetherian algebraic space over a scheme $S$. A \textsl{dualizing complex} on $X$ is a complex $\omega_X^\bullet$ in $\mathbf{D}^b_{\mathrm{qc}}(X)$ for which there exists a surjective \'etale morphism $U \to X$ from a scheme $U$ such that the pullback of $\omega_X^\bullet$ to $U$ is a dualizing complex on $U$ in the sense of Definition \ref{def:dualizingcomplexschemes}. \end{citeddef} We will frequently use the following fact: \begin{lemma}[cf.\ {\citeleft\citen{Har66}\citemid (2) on p.\ 299\citepunct \citen{AB10}\citemid Proposition 2.18 and Remark on p.\ 14\citepunct \citen{stacks-project}\citemid \href{https://stacks.math.columbia.edu/tag/0AA3}{Tag 0AA3}\citeright}]\label{lem:dualizingcomplexpullback} Let $f\colon X \to Y$ be a separated morphism of finite type between Noetherian algebraic spaces over a scheme $S$. Consider a Nagata compactification \[ \begin{tikzcd} X \rar[hook]\arrow{dr}[swap]{f} & \bar{X}\dar{\bar{f}}\\ & Y \end{tikzcd} \] of $f$. If $\omega_Y^\bullet$ is a dualizing complex on $Y$, then \[ f^!\omega_Y^\bullet \coloneqq \bigl( \bar{a}(\omega_Y^\bullet) \bigr)_{\vert X} \] is a dualizing complex on $X$, where $\bar{a}$ is the right adjoint of the derived pushforward $\mathbf{R} \bar{f}_*$. \end{lemma} The right adjoint of the derived pushforward is constructed in \cite[\href{https://stacks.math.columbia.edu/tag/0E55}{Tag 0E55}]{stacks-project}. Nagata compactifications exist for separated morphisms of finite type between quasi-compact quasi-separated algebraic spaces \cite[Theorem 1.2.1]{CLO12} (see also \citeleft\citen{FK06}\citemid pp.\ 355--356\citepunct \citen{Ryd}\citemid Theorem F\citeright). \begin{proof} Let $U \to Y$ be an \'etale surjective morphism from a scheme $U$ such that the pullback of $\omega_Y^\bullet$ to $U$ is a dualizing complex. Next, we note that restriction and the right adjoint $a$ are compatible with \'etale base change by definition, where we use the fact that the right adjoint does not depend on whether we consider a scheme as an actual scheme or the algebraic space it represents by \cite[\href{https://stacks.math.columbia.edu/tag/0E6E}{Tag 0E6E}]{stacks-project}.
We therefore see that the pullback of $f^!$ to $U$ is the exceptional pullback for schemes constructed in \cite[\href{https://stacks.math.columbia.edu/tag/0A9Y}{Tag 0A9Y}]{stacks-project}. The statement now follows from the scheme case (after replacing $U$ by an open affine cover) in \citeleft\citen{Har66}\citemid (2) on p.\ 299\citepunct \citen{stacks-project}\citemid \href{https://stacks.math.columbia.edu/tag/0AA3}{Tag 0AA3}\citeright. \end{proof} \section{Divisors and linear systems} \label{sect:divisorsetc} \subsection{Divisors} We will use the definition of the group $\Div(X)$ of Cartier divisors for ringed spaces from \cite[D\'efinition 21.1.2]{EGAIV4}, and the group $\WDiv(X)$ of Weil divisors for locally Noetherian schemes from \cite[(21.6.2)]{EGAIV4}. See \cite[p.\ 204]{Kle79} for the definition of the sheaf $\mathscr{K}_X$ of meromorphic functions. The group of Weil divisors is denoted by $\mathfrak{Z}^1(X)$ in \cite[(21.6.2)]{EGAIV4} and by $\Div(X)$ in \cite[\href{https://stacks.math.columbia.edu/tag/0ENJ}{Tag 0ENJ}]{stacks-project}. The subgroup of principal Cartier divisors is denoted by $\Princ(X)$. \par Instead of developing the theory of Cartier divisors and cycle maps for algebraic spaces, we will only work with the monoid of effective Cartier divisors $\Div^\mathrm{eff}(X)$ on algebraic spaces in the sense of \cite[\href{https://stacks.math.columbia.edu/tag/083B}{Tag 083B}]{stacks-project} (denoted by $\operatorname{EffCart}(X)$ in \cite[\href{https://stacks.math.columbia.edu/tag/0CPG}{Tag 0CPG}]{stacks-project}) and Weil divisors on integral locally Noetherian algebraic spaces in the sense of \cite[\href{https://stacks.math.columbia.edu/tag/0ENJ}{Tag 0ENJ}]{stacks-project}.\medskip \par We now define Cartier and Weil divisors with $\mathbf{Q}$- and $\mathbf{R}$-coefficients. \begin{definition}[see {\citeleft\citen{KMM87}\citemid Definitions 0-1-3 and 0-1-8\citepunct \citen{BCHM10}\citemid Definition 3.1.1\citeright}] \label{def:kcartdiv} Let $X$ be a ringed space, and let $\mathbf{k} \in \{\mathbf{Z},\mathbf{Q},\mathbf{R}\}$. A \textsl{$\mathbf{k}$-Cartier divisor} on $X$ is an element of the group \begin{align*} \Div_\mathbf{k}(X) &\coloneqq \Div(X) \otimes_\mathbf{Z} \mathbf{k}. \intertext{If $X$ is a locally Noetherian scheme or an integral locally Noetherian algebraic space over a scheme $S$, a \textsl{$\mathbf{k}$-Weil divisor} on $X$ is an element of the group} \WDiv_\mathbf{k}(X) &\coloneqq \WDiv(X) \otimes_\mathbf{Z} \mathbf{k}. \end{align*} A $\mathbf{k}$-Cartier divisor is \textsl{integral} if it lies in the image of the map \begin{align*} \Div(X) &\longrightarrow \Div_\mathbf{k}(X) \intertext{and a $\mathbf{k}$-Weil divisor is \textsl{integral} if it lies in the image of the map} \WDiv(X) &\longrightarrow \WDiv_\mathbf{k}(X). \end{align*} A $\mathbf{k}$-Cartier divisor (resp.\ \textsl{$\mathbf{k}$-Weil divisor}) is \textsl{effective} if it can be written as a $\mathbf{k}$-linear combination of effective Cartier divisors (resp.\ effective Weil divisors). The set of effective $\mathbf{k}$-Cartier (resp.\ $\mathbf{k}$-Weil) divisors on $X$ is denoted $\Div_\mathbf{k}^\mathrm{eff}(X)$ (resp.\ $\WDiv_\mathbf{k}^\mathrm{eff}(X)$). We drop the prefix ``$\mathbf{Z}$-'' if $\mathbf{k} = \mathbf{Z}$. 
\par If $A = \sum_{i=1}^r a_iC_i$ is an $\mathbf{R}$-Weil divisor on $X$ where the $C_i$ are distinct prime Weil divisors, then the \textsl{round-up} and \textsl{round-down} of $A$ are the Weil divisors \[ \lceil A \rceil \coloneqq \sum_{i=1}^r \lceil a_i \rceil C_i \qquad \text{and} \qquad \lfloor A \rfloor \coloneqq \sum_{i=1}^r \lfloor a_i \rfloor C_i \] respectively, and the \textsl{fractional part} of $A$ is \begin{align*} \{A\} &\coloneqq \sum_{i=1}^r \{a_i\} C_i, \intertext{where $\{a_i\} \coloneqq a_i - \lfloor a_i \rfloor$ is the fractional part of $a_i$ for every $i$. If $B = \sum_{i=1}^r b_iC_i$ is another $\mathbf{R}$-Weil divisor on $X$, then we also set} A \wedge B &\coloneqq \sum_{i=1}^r \min\{a_i,b_i\}\,C_i. \end{align*} \end{definition} When $X$ is a locally Noetherian scheme, there is a commutative diagram \begin{equation}\label{eq:divwdivdiag} \begin{tikzcd} \Div(X) \rar\dar{\mathrm{cyc}} & \Div_\mathbf{Q}(X) \rar[hook]\dar{\mathrm{cyc}\otimes_\mathbf{Z}\mathbf{Q}} & \Div_\mathbf{R}(X) \dar{\mathrm{cyc}\otimes_\mathbf{Z}\mathbf{R}}\\ \WDiv(X) \rar[hook] & \WDiv_\mathbf{Q}(X) \rar[hook] & \WDiv_\mathbf{R}(X) \end{tikzcd} \end{equation} of Abelian groups, where the left vertical map is the \textsl{cycle map} from \cite[(21.6.5.1)]{EGAIV4}, and the other vertical maps are obtained via extension of scalars. The cycle map preserves effective divisors \cite[Proposition 21.6.6]{EGAIV4}. \begin{convention} Let $X$ be a locally Noetherian scheme. Then, the cycle map $\mathrm{cyc}$ is bijective if and only if $X$ is locally factorial \cite[Th\'eor\`eme 21.6.9$(ii)$]{EGAIV4}. In this case, we can identify Cartier and Weil divisors, as well as their corresponding versions with $\mathbf{Q}$- or $\mathbf{R}$-coefficients. On such schemes, we omit the word ``Cartier'' or ``Weil.'' \end{convention} \par Even if $X$ is not locally factorial, as long as $X$ is normal, we can pass from Cartier divisors to Weil divisors. \begin{definition}[see {\citeleft\citen{KMM87}\citemid Remark 0-1-6(2)\citepunct \citen{Laz04a}\citemid Remarks 1.1.4 and 1.3.8\citeright}]\label{def:kcartier} Let $X$ be a normal locally Noetherian scheme. Then, the cycle map \[ \mathrm{cyc}\colon \Div(X) \longrightarrow \WDiv(X) \] is injective \cite[Th\'eor\`eme 21.6.9$(i)$]{EGAIV4}, as are the maps \[ \Div(X) \longrightarrow \Div_\mathbf{k}(X) \] for $\mathbf{k} \in \{\mathbf{Q},\mathbf{R}\}$ by the commutativity of the diagram \eqref{eq:divwdivdiag}. A Weil divisor $D$ \textsl{is Cartier} if $D$ lies in the image of $\Div(X)$ under the cycle map $\mathrm{cyc}$. For $\mathbf{k} \in \{\mathbf{Q},\mathbf{R}\}$, a $\mathbf{k}$-Weil divisor $D$ \textsl{is $\mathbf{k}$-Cartier} if $D$ lies in the image of the map \[ \mathrm{cyc} \otimes_\mathbf{Z} \mathbf{k} \colon \Div_\mathbf{k}(X) \longrightarrow \WDiv_\mathbf{k}(X). \] \end{definition} \begin{convention}[see {\citeleft\citen{KMM87}\citemid Definition 0-1-7\citeright}] Let $X$ be a normal locally Noetherian scheme. We say that $X$ is \textsl{$\mathbf{Q}$-factorial} if every $\mathbf{Q}$-Weil divisor is $\mathbf{Q}$-Cartier. In this case, we will say ``$\mathbf{Q}$-divisor'' instead of ``$\mathbf{Q}$-Cartier divisor'' or ``$\mathbf{Q}$-Weil divisor.'' \end{convention} To make analogous definitions for algebraic spaces, we will only work with Weil divisors. We recall that for ringed spaces $X$, there is an exact sequence \[ 0 \longrightarrow \Princ(X) \longrightarrow \Div(X) \overset{l}{\longrightarrow} \Pic(X) \] by \cite[Proposition 21.3.3$(i)$]{EGAIV4}. 
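As a quick arithmetic illustration of the rounding operations in Definition \ref{def:kcartdiv} (a hypothetical example, with $C_{1}$ and $C_{2}$ any two distinct prime Weil divisors): if $A = \frac{3}{2}C_{1} - \frac{1}{2}C_{2}$, then $\lceil A \rceil = 2C_{1}$, $\lfloor A \rfloor = C_{1} - C_{2}$, and $\{A\} = \frac{1}{2}C_{1} + \frac{1}{2}C_{2}$, since rounding acts coefficient-wise and $\lfloor -\frac{1}{2} \rfloor = -1$; if moreover $B = C_{1} - 2C_{2}$, then $A \wedge B = C_{1} - 2C_{2} = B$.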
For $\mathbf{k} \in \{\mathbf{Q},\mathbf{R}\}$, we will also consider the extension of scalars of the exact sequence above to $\mathbf{k}$: \begin{equation}\label{eq:divtopic} 0 \longrightarrow \Princ_\mathbf{k}(X) \longrightarrow \Div_\mathbf{k}(X) \xrightarrow{l\otimes_\mathbf{Z}\mathbf{k}} \Pic_\mathbf{k}(X) \end{equation} where \[ \Princ_\mathbf{k}(X) \coloneqq \Princ(X) \otimes_\mathbf{Z} \mathbf{k} \qquad \text{and} \qquad \Pic_\mathbf{k}(X) \coloneqq \Pic(X) \otimes_\mathbf{Z} \mathbf{k}. \] For algebraic spaces $X$, we also have maps \begin{equation}\label{eq:effcarttopic} \bigl(\Div^\mathrm{eff}(X)\bigr)^\mathrm{gp}_\mathbf{k} \longrightarrow \Pic_\mathbf{k}(X) \end{equation} for $\mathbf{k} \in \{\mathbf{Z},\mathbf{Q},\mathbf{R}\}$ obtained from \cite[\href{https://stacks.math.columbia.edu/tag/0CPG}{Tag 0CPG}]{stacks-project} via extension of scalars, where \[ (\Div^\mathrm{eff}(X))^\mathrm{gp}_\mathbf{k} \coloneqq (\Div^\mathrm{eff}(X))^\mathrm{gp} \otimes_\mathbf{Z} \mathbf{k}. \] \begin{definition}[{see \citeleft\citen{FM}\citemid Definition 1.1\citepunct \citen{KMM87}\citemid Definition 0-1-3\citeright}] Let $X$ be a ringed site. For $\mathbf{k} \in \{\mathbf{Q},\mathbf{R}\}$, a \textsl{$\mathbf{k}$-invertible sheaf} is an element of $\Pic_\mathbf{k}(X)$. We will usually write $\Pic_\mathbf{k}(X)$ additively, in which case we denote the invertible sheaves associated to elements $D \in \Pic_\mathbf{Z}(X) = \Pic(X)$ and elements $D \in \Div_\mathbf{Z}(X)$ (for ringed spaces $X$) or $D \in \Div_\mathbf{Z}^\mathrm{eff}(X)$ (for algebraic spaces $X$) by $\mathcal{O}_X(D)$. We say that $D,D' \in \Div_\mathbf{k}(X)$ are \textsl{$\mathbf{k}$-linearly equivalent} if their images in $\Pic_\mathbf{k}(X)$ are equal. \end{definition} When $X$ is a locally Noetherian scheme, these exact sequences fit into the commutative diagram \begin{equation}\label{eq:princses} \begin{tikzcd} 0 \rar & \Princ_\mathbf{k}(X) \rar\dar[equal] & \Div_\mathbf{k}(X) \rar{l \otimes_\mathbf{Z} \mathbf{k}}\dar{\mathrm{cyc} \otimes_\mathbf{Z} \mathbf{k}} & \Pic_\mathbf{k}(X) \dar\\ & \Princ_\mathbf{k}(X) \rar & \WDiv_\mathbf{k}(X) \rar & \Cl_\mathbf{k}(X) \rar & 0 \end{tikzcd} \end{equation} for $\mathbf{k} \in \{\mathbf{Z},\mathbf{Q},\mathbf{R}\}$ by definition of the divisor class group $\Cl(X)$ in \cite[(21.6.7)]{EGAIV4}, where \[ \Cl_\mathbf{k}(X) \coloneqq \Cl(X) \otimes_\mathbf{Z} \mathbf{k}. \] \begin{definition}[see {\cite[Definition 0-1-3]{KMM87}}] Let $X$ be a locally Noetherian scheme or an integral locally Noetherian algebraic space over a scheme $S$. For $\mathbf{k} \in \{\mathbf{Q},\mathbf{R}\}$, we say that $D,D' \in \WDiv_\mathbf{k}(X)$ are \textsl{$\mathbf{k}$-linearly equivalent} if their images in $\Cl_\mathbf{k}(X)$ are equal. \end{definition} We will need to know when the exact sequence in the top row of \eqref{eq:princses} is also right exact. \begin{remark}\label{rem:lbcomefromcartdiv} The exact sequence in the top row of \eqref{eq:princses} is also right exact in the following cases for $\mathbf{k} = \mathbf{Z}$, and hence also for $\mathbf{k} \in \{\mathbf{Q},\mathbf{R}\}$ by flatness. \begin{enumerate}[label=$(\roman*)$] \item $X$ is a locally Noetherian scheme and $\Ass(\mathcal{O}_X)$ is contained in an open affine subscheme of $X$ \cite[Proposition 21.3.4$(a)$]{EGAIV4}. This holds for example when $X$ is Noetherian and has an ample invertible sheaf, in particular when $X$ is quasi-projective over a Noetherian ring \cite[Corollaire 21.3.5]{EGAIV4}.
\item $X$ is a reduced scheme whose set of irreducible components is locally finite \cite[Proposition 21.3.4$(b)$]{EGAIV4}. \end{enumerate} \end{remark} \begin{lemma}\label{lem:kcartierdefs} Let $X$ be a locally Noetherian scheme satisfying one of the hypotheses in Remark \ref{rem:lbcomefromcartdiv}. For $\mathbf{k} \in \{\mathbf{Z},\mathbf{Q},\mathbf{R}\}$, a $\mathbf{k}$-Weil divisor $D$ lies in the image of $\mathrm{cyc} \otimes_\mathbf{Z} \mathbf{k}$ if and only if the class of $D$ in $\Cl_\mathbf{k}(X)$ lies in the image of the map \[ \Pic_\mathbf{k}(X) \longrightarrow \Cl_\mathbf{k}(X). \] \end{lemma} \begin{proof} The implication $\Rightarrow$ holds by the commutativity of the diagram in \eqref{eq:princses}. Conversely, suppose the class of $D$ in $\Cl_\mathbf{k}(X)$ lies in the image of $\Pic_\mathbf{k}(X)$. Since $l \otimes_\mathbf{Z} \mathbf{k}$ is surjective, there exists an element $\tilde{D} \in \Div_\mathbf{k}(X)$ such that $(\mathrm{cyc} \otimes_\mathbf{Z} \mathbf{k})(\tilde{D}) \sim_\mathbf{k} D$. By the exactness of the bottom row in \eqref{eq:princses}, we therefore have an element $D' \in \Princ_\mathbf{k}(X)$ such that \[ (\mathrm{cyc} \otimes_\mathbf{Z} \mathbf{k})(\tilde{D} + D') = D, \] and hence $D$ is $\mathbf{k}$-Cartier. \end{proof} If $X$ is an integral locally Noetherian algebraic space, then by \cite[\href{https://stacks.math.columbia.edu/tag/0ENV}{Tag 0ENV}]{stacks-project}, there is a map \begin{equation}\label{eq:stacks0EPW} \Pic(X) \longrightarrow \Cl(X) \end{equation} that coincides with the corresponding map in \eqref{eq:princses} when $X$ is a scheme. We will use this map to define what it means for a $\mathbf{k}$-Weil divisor to be $\mathbf{k}$-Cartier on an algebraic space. \begin{definition}[see {\cite[Definition 1.3.4]{VPthesis}}] Let $X$ be an integral normal locally Noetherian algebraic space over a scheme $S$, in which case the map \eqref{eq:stacks0EPW} is injective \cite[\href{https://stacks.math.columbia.edu/tag/0EPX}{Tag 0EPX}]{stacks-project}. A Weil divisor $D$ \textsl{is Cartier} if $D$ lies in the image of the map \eqref{eq:stacks0EPW}. For $\mathbf{k} \in \{\mathbf{Q},\mathbf{R}\}$, a $\mathbf{k}$-Weil divisor $D$ \textsl{is $\mathbf{k}$-Cartier} if $D$ lies in the image of the map \[ \Pic_\mathbf{k}(X) \longrightarrow \Cl_\mathbf{k}(X) \] obtained from \eqref{eq:stacks0EPW} via extension of scalars. By Lemma \ref{lem:kcartierdefs}, this definition matches that in Definition \ref{def:kcartier} when $X$ is a scheme. \end{definition} \begin{convention}[see {\cite[Definition 1.3.4]{VPthesis}}] \label{convention:factorialityforspaces} Let $X$ be an integral normal locally Noetherian algebraic space over a scheme $S$. We say that $X$ is \textsl{locally factorial} (resp.\ $\mathbf{Q}$-factorial) if $\Pic(X) \to \Cl(X)$ (resp.\ $\Pic_\mathbf{Q}(X) \to \Cl_\mathbf{Q}(X)$) is an isomorphism. In this case, we will say ``divisor'' (resp.\ $\mathbf{Q}$-divisor) instead of ``Weil divisor'' (resp.\ ``$\mathbf{Q}$-Weil divisor''). \end{convention} \begin{remark} Convention \ref{convention:factorialityforspaces} is chosen to work around the fact that the property of being locally factorial or $\mathbf{Q}$-factorial is not \'etale local. See \citeleft\citen{Kaw88}\citemid p.\ 104\citepunct \citen{SGA2}\citemid Expos\'e XIII, note de l'\'editeur (15) on p.\ 150\citepunct \citen{BGS}\citemid p.\ 1\citeright.
\end{remark} \subsection{Linear systems} We now define linear systems and their corresponding notions for $\mathbf{Q}$- and $\mathbf{R}$-coefficients. \begin{definition}[see {\citeleft\citen{KMM87}\citemid Definition p.\ 298\citepunct \citen{CL12}\citemid p.\ 2419\citepunct \citen{McK17}\citemid Definition 2.2\citeright}] \label{def:linearsystem} Let $X$ be a normal locally Noetherian scheme or an integral normal locally Noetherian algebraic space over a scheme $S$. We then define linear equivalence and $\mathbf{k}$-linear equivalence for Weil divisors and $\mathbf{k}$-Weil divisors using the cycle map and its extensions of scalars in \eqref{eq:divwdivdiag}. The \textsl{linear system} associated to a Weil divisor $D$ is \begin{align*} \lvert D \rvert &\coloneqq \Set[\big]{C \in \WDiv(X) \given C \ge 0\ \text{and}\ C \sim D}, \intertext{and for $\mathbf{k} \in \{\mathbf{Q},\mathbf{R}\}$, the \textsl{$\mathbf{k}$-linear system} associated to a $\mathbf{k}$-Weil divisor $D$ is} \lvert D \rvert_\mathbf{k} &\coloneqq \Set[\big]{C \in \WDiv_\mathbf{k}(X) \given C \ge 0\ \text{and}\ C \sim_\mathbf{k} D}. \end{align*} \end{definition} We can now state the main result that allows us to pass between sheaf-theoretic language and the language of linear systems on schemes. \begin{citedprop}[{\citeleft\citen{Har94}\citemid Proposition 2.9\citepunct\citen{Har07}\citemid Remark 2.9\citeright}]\label{prop:har29} Let $X$ be a normal Noetherian scheme, and consider a Weil divisor $D$ on $X$. Then, there is a bijection \[ \lvert D \rvert \longleftrightarrow \biggl\{ \begin{tabular}{@{}c@{}} nondegenerate global sections\\ $s \in H^0\bigl(X,\mathcal{O}_X(D)\bigr)$ \end{tabular} \biggr\}\bigg/H^0(X,\mathcal{O}_X^*). \] \end{citedprop} Here, $\mathcal{O}_X(D)$ is the sheaf associated to the Weil divisor $D$, which can be defined as $j_*\mathcal{O}_U(D_{\vert U})$, where $U$ is the open subset where $D_{\vert U}$ is Cartier, and $j\colon U \hookrightarrow X$ is the canonical open embedding (see \cite[Definition on p.\ 301 and Proposition 2.7]{Har94}). A global section $s \in H^0(X,\mathcal{O}_X(D))$ is \textsl{nondegenerate} if it is nonzero after localizing at the generic points of irreducible components of $X$ \cite[Definition on p.\ 304]{Har94}.\medskip \par We also prove the following result about the relationship between $\mathbf{Q}$- and $\mathbf{R}$-linear systems of a $\mathbf{Q}$-Weil divisor. \begin{lemma} \label{lem:RatlIsDense} Let $X$ be a normal locally Noetherian scheme or an integral normal locally Noetherian algebraic space over a scheme $S$, and consider a $\mathbf{Q}$-Weil divisor $D$ on $X$. Then, $\lvert D\rvert_{\mathbf Q}$ is dense in $\lvert D\rvert_{\mathbf R}$ in the following sense: For each $\sum a_iE_i\in \lvert D\rvert_{\mathbf R}$ where $a_i$ are real numbers and $E_i$ are prime divisors, there exist sequences of rational numbers $(a^j_i)_j$ such that \[ \lim_{j\to\infty} a_i^j=a_i \ \text{for all}\ i \qquad \text{and} \qquad \sum_i a_i^jE_i\in \lvert D\rvert_{\mathbf Q} \ \text{for all}\ j. \] \end{lemma} \begin{proof} We adapt the proofs of \citeleft\citen{BCHM10}\citemid Lemma 3.5.3\citepunct \citen{CL12}\citemid Lemma 2.3\citeright. Let \[ V=\mathbf{Q} \cdot D+\Span\{E_i\}\subseteq \WDiv_{\mathbf Q}(X), \] and let $V_0$ be the subspace of $V$ consisting of rational combinations of principal divisors.
Then, $V_{\mathbf R} \coloneqq V \otimes_\mathbf{Q} \mathbf{R}$ is a (finite-dimensional) subspace of $\WDiv_{\mathbf R}(X)$, and $(V_0)_{\mathbf R} \coloneqq V_0 \otimes_\mathbf{Q} \mathbf{R}$ is the subspace of $V_{\mathbf R}$ consisting of real combinations of principal divisors. Let $\pi\colon V_{\mathbf R}\to V_{\mathbf R}/(V_0)_{\mathbf R}$ be the canonical projection map. The subset \[ \Set[\bigg]{\sum_i b_iE_i\in V_{\mathbf R} \given b_i\geq 0\ \text{and}\ \pi\biggl(\sum_i b_iE_i\biggr)=\pi(D)} \] is cut out from $V_{\mathbf R}$ by rational hyperplanes and half-spaces, and it contains the real point $\sum_i a_iE_i$. The result now follows, since the rational points of such a set are dense in it: the set is nonempty and cut out by rational (in)equalities. \end{proof} \section{Positivity, the theorem of the base, cones, and\texorpdfstring{\except{toc}{\\}}{} Kleiman's criterion for ampleness} \subsection{Relative positivity conditions}\label{sect:relativeampleness} We define relative ampleness conditions for $\mathbf{k}$-invertible sheaves and $\mathbf{k}$-Cartier divisors for $\mathbf{k} \in \{\mathbf{Z},\mathbf{Q},\mathbf{R}\}$. \begin{definition}[see {\citeleft\citen{EGAII}\citemid D\'efinition 4.6.1\citepunct \citen{KMM87}\citemid Definition 0-1-4\citepunct \citen{BCHM10}\citemid Definition 3.1.1\citepunct \citen{CT20}\citemid \S2.1.1\citepunct \citen{FM}\citemid Definition 2.1\citepunct \citen{stacks-project}\citemid \href{https://stacks.math.columbia.edu/tag/0D31}{Tag 0D31}\citeright}] \label{def:relativeampleness} Let $\pi\colon X \to Z$ be a morphism of schemes (resp.\ algebraic spaces over a scheme $S$), and let $\mathscr{L}$ be an invertible sheaf on $X$. \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item Suppose $\pi$ is quasi-compact (resp.\ representable). We say that $\mathscr{L}$ is \textsl{$\pi$-ample} if there exists an affine open cover $Z = \bigcup_i U_i$ such that $\mathscr{L}\rvert_{\pi^{-1}(U_i)}$ is ample for all $i$ (resp.\ if for every morphism $Z' \to Z$ where $Z'$ is a scheme, the pullback of $\mathscr{L}$ to $Z' \times_Z X$ is $\pi$-ample). \item We say that $\mathscr{L}$ is \textsl{$\pi$-generated} if the adjunction morphism $\pi^*\pi_*\mathscr{L} \to \mathscr{L}$ is surjective. \item We say that $\mathscr{L}$ is \textsl{$\pi$-semi-ample} if there exists an integer $n> 0$ such that $\mathscr{L}^{\otimes n}$ is $\pi$-generated. \end{enumerate} When $X$ is a scheme, we can extend these definitions to Cartier divisors $L$ on $X$ by asking that their associated invertible sheaves $\mathcal{O}_X(L)$ satisfy these conditions. \par Now suppose that $D$ is a $\mathbf{k}$-invertible sheaf on $X$ for $\mathbf{k} \in \{\mathbf{Q},\mathbf{R}\}$. We say that $D$ is \textsl{$\pi$-ample} if $D$ is a finite nonzero $\mathbf{k}_{>0}$-linear combination of $\pi$-ample invertible sheaves on $X$. We say that $D$ is \textsl{$\pi$-semi-ample} if $D$ is a finite $\mathbf{k}_{\ge0}$-linear combination of $\pi$-semi-ample invertible sheaves on $X$. We extend these definitions to elements $D \in \Div_\mathbf{k}(X)$ (resp.\ $\Div_\mathbf{k}^\mathrm{eff}(X)$) by asking that their images in $\Pic_\mathbf{k}(X)$ satisfy these conditions. \end{definition} To define $\pi$-numerically trivial and $\pi$-nef $\mathbf{k}$-invertible sheaves or $\mathbf{k}$-Cartier divisors, we recall some background on intersection theory for proper morphisms. Let $\pi\colon X \to Z$ be a proper morphism of locally Noetherian algebraic spaces over a scheme $S$. Let $z \in \lvert Z \rvert$ be a point, and consider a subspace $Y \subseteq \pi^{-1}(z)$ that is closed in $\pi^{-1}(z)$.
We can then define intersection numbers \[ (\mathscr{L}_1 \cdot \mathscr{L}_2 \cdots \mathscr{L}_n \cdot Y) \coloneqq \chi\bigl(c_1(\mathscr{L}_1)\cdot c_1(\mathscr{L}_2) \cdots c_1(\mathscr{L}_n) \cdot \mathcal{O}_Y\bigr) \in \mathbf{Z} \] for invertible sheaves $\mathscr{L}_i$ on $X$, where $n \ge \dim(Y)$. See \citeleft\citen{stacks-project}\citemid \href{https://stacks.math.columbia.edu/tag/0EDF}{Tag 0EDF}\citeright. By linearity \citeleft\citen{stacks-project}\citemid \href{https://stacks.math.columbia.edu/tag/0EDH}{Tag 0EDH}\citeright, we can extend this definition to $\mathbf{k}$-invertible sheaves for $\mathbf{k}\in\{\mathbf{Z},\mathbf{Q},\mathbf{R}\}$ (see \cite[Chapter VI, Definition-Corollary 2.7.4]{Kol96}). When $X$ is a scheme, we can also extend this definition to $\mathbf{k}$-Cartier divisors using the group maps \[ l \otimes_\mathbf{Z} \mathbf{k}\colon \Div_\mathbf{k}(X) \longrightarrow \Pic_\mathbf{k}(X) \] from \eqref{eq:divtopic}. In this case, we denote the intersection product by $(D_1 \cdot D_2 \cdots D_n \cdot Y)$, where $D_i \in \Div_\mathbf{k}(X)$ for all $i$. \par We use this intersection product to define $\pi$-nef and $\pi$-numerically trivial $\mathbf{k}$-invertible sheaves or $\mathbf{k}$-Cartier divisors. We restrict to the case when the base $Z$ is a decent algebraic space to make sense of residue fields (see \cite[\href{https://stacks.math.columbia.edu/tag/02Z7}{Tag 02Z7}]{stacks-project}). \begin{definition}[see {\citeleft\citen{Kle66}\citemid pp.\ 334--335\citepunct \citen{KMM87}\citemid Definition 0-1-1\citepunct \citen{Kol90}\citemid p.\ 236\citepunct \citen{Kee03}\citemid Definition 2.9\citepunct \citen{BCHM10}\citemid Definition 3.1.1\citepunct \citen{CT20}\citemid \S2.1.1\citepunct \citen{VPthesis}\citemid Definition 1.3.8\citeright}] \label{def:RelNeronSeveri} Let $\pi\colon X\to Z$ be a proper morphism of algebraic spaces over a scheme $S$, where $Z$ is decent. Let $\mathbf{k}\in\{\mathbf{Z},\mathbf{Q},\mathbf{R}\}$. \begin{enumerate}[label=$(\roman*)$] \item An element $D\in\Pic_{\mathbf{k}}(X)$ is \textsl{$\pi$-nef} if, for every point $z \in \lvert Z \rvert$ and for every integral one-dimensional subspace $C \subseteq \pi^{-1}(z)$ that is closed in $\pi^{-1}(z)$, we have $(D \cdot C)\geq 0$. If $Z = \Spec(k)$ for a field $k$, we just say $D$ is \textsl{nef}. \item An element $D \in \Pic_\mathbf{k}(X)$ is \textsl{$\pi$-numerically trivial} if both $D$ and $-D$ are $\pi$-nef. We denote by $N^1(X/Z)$ the quotient of $\Pic(X)$ by the subgroup of $\pi$-numerically trivial elements, and set \[ N^1(X/Z)_\mathbf{k} \coloneqq N^1(X/Z) \otimes_\mathbf{Z} \mathbf{k} \] for $\mathbf{k} \in \{\mathbf{Q},\mathbf{R}\}$. If $Z = \Spec(k)$ for a field $k$, we just say $D$ is \textsl{numerically trivial}. \end{enumerate} If $X$ is a scheme, we extend these definitions to elements $D \in \Div_\mathbf{k}(X)$ by asking that their images in $\Pic_\mathbf{k}(X)$ satisfy these conditions. By definition, this only depends on the class $[D] \in N^1(X/Z)_\mathbf{k}$. \end{definition} We now prove some fundamental properties of nefness and numerical triviality. Many of these results are known for proper morphisms of schemes or for algebraic spaces that are proper over a field, but as far as we are aware they are new for proper morphisms of algebraic spaces. \begin{lemma}[cf.\ {\citeleft\citen{Kle66}\citemid Chapter I, \S4, Proposition 1\citepunct \citen{Kee03}\citemid Lemma 2.17\citepunct \citen{CLM}\citemid Lemma 3.3\citeright}]\label{lem:nefpullback} Let $S$ be a scheme.
Let \[ \begin{tikzcd} X' \rar{f}\arrow{dr}[swap]{\pi'} & X\dar{\pi}\\ & Z \end{tikzcd} \] be a commutative diagram of algebraic spaces over $S$, where $\pi$ and $\pi'$ are proper and $Z$ is decent. Let $D \in \Pic_\mathbf{k}(X)$ for $\mathbf{k} \in \{\mathbf{Z},\mathbf{Q},\mathbf{R}\}$. \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item\label{lem:nefpullbackalways} If $D$ is $\pi$-nef (resp.\ $\pi$-numerically trivial), then $f^*D$ is $\pi'$-nef (resp.\ $\pi'$-numerically trivial). \item\label{lem:nefpullbacksurjective} If $f$ is surjective and $f^*D$ is $\pi'$-nef (resp.\ $\pi'$-numerically trivial), then $D$ is $\pi$-nef (resp.\ $\pi$-numerically trivial). \end{enumerate} \end{lemma} \begin{proof} By definition, it suffices to consider the nefness (resp.\ numerical triviality) of $D$ when $Z$ is the spectrum of a field. The statements for numerical triviality follow from those for nefness applied to $D$ and $-D$, and hence it suffices to show $(\ref{lem:nefpullbackalways})$ and $(\ref{lem:nefpullbacksurjective})$ for nefness. \par For $(\ref{lem:nefpullbackalways})$, let $C' \subseteq X'$ be an integral one-dimensional closed subspace. If $f$ contracts $C'$ to a point, then $(f^*D \cdot C') = 0$. Otherwise, $f(C')$ is an integral one-dimensional closed subspace of $X$, and by the projection formula \cite[\href{https://stacks.math.columbia.edu/tag/0EDJ}{Tag 0EDJ}]{stacks-project}, we have \[ (f^*D \cdot C') = \deg\bigl(C' \to f(C')\bigr)\bigl(D \cdot f(C')\bigr) \ge 0. \] \par For $(\ref{lem:nefpullbacksurjective})$, let $C \subseteq X$ be an integral one-dimensional closed subspace. By \cite[Lemma 3.2]{CLM}, there is an integral one-dimensional closed subspace $C' \subseteq X'$ such that $C = f(C')$. By the projection formula again \cite[\href{https://stacks.math.columbia.edu/tag/0EDJ}{Tag 0EDJ}]{stacks-project}, we have \[ (D \cdot C) = \bigl(\deg(C' \to C)\bigr)^{-1}(f^*D \cdot C') \ge 0.\qedhere \] \end{proof} We show that nefness and numerical triviality behave well under base change. \begin{lemma}[cf.\ {\citeleft\citen{Kee03}\citemid Lemma 2.18\citeright}]\label{lem:nefbasechange} Let $S$ be a scheme. Consider a Cartesian diagram \[ \begin{tikzcd} X' \rar{f}\dar[swap]{\pi'} & X\dar{\pi}\\ Z' \rar{g} & Z \end{tikzcd} \] of algebraic spaces over $S$ where $\pi$ is proper and $Z$ and $Z'$ are decent. Let $D \in \Pic_\mathbf{k}(X)$ for $\mathbf{k} \in \{\mathbf{Z},\mathbf{Q},\mathbf{R}\}$. \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item\label{lem:nefbasechangepullback} If $D$ is $\pi$-nef (resp.\ $\pi$-numerically trivial), then $f^*D$ is $\pi'$-nef (resp.\ $\pi'$-numerically trivial). \item\label{lem:nefbasechangeconverse} Suppose that $g$ is surjective. If $f^*D$ is $\pi'$-nef (resp.\ $\pi'$-numerically trivial), then $D$ is $\pi$-nef (resp.\ $\pi$-numerically trivial). \end{enumerate} \end{lemma} \begin{proof} As in the proof of Lemma \ref{lem:nefpullback}, it suffices to show the statement for nefness. By transitivity of fibers, it suffices to consider the case when $Z = \Spec(k)$ and $Z' = \Spec(k')$ for a field extension $k \subseteq k'$. \par We first show $(\ref{lem:nefbasechangepullback})$. By the weak version of Chow's lemma in \cite[\href{https://stacks.math.columbia.edu/tag/089J}{Tag 089J}]{stacks-project}, there exists a proper surjective morphism $\mu\colon Y \to X$ from a scheme $Y$ that is projective over $k$. We then consider the commutative diagram \[ \begin{tikzcd} Y' \rar{f'}\dar[swap]{\mu'} & Y\dar{\mu}\\ X' \rar{f} & X \end{tikzcd} \] with Cartesian squares. Then, we know that $\mu^*D$ is nef by Lemma \ref{lem:nefpullback}$(\ref{lem:nefpullbackalways})$.
Now since $Y$ is a projective scheme over $k$, we know that, after choosing an ample invertible sheaf $A$ on $Y$, the $\mathbf{R}$-invertible sheaf $\mu^*D + \varepsilon A$ is ample for every $\varepsilon > 0$ by Kleiman's criterion for ampleness for projective schemes \cite[Chapter VI, Theorem 2.19]{Kol96}. Then, \[ (\mu \circ f')^*D+\varepsilon\,f^{\prime*}A = (f \circ \mu')^*D+\varepsilon\,f^{\prime*}A \] is nef for every $\varepsilon > 0$. Taking the limit as $\varepsilon \to 0$, we see that $(\mu \circ f')^*D = (f \circ \mu')^*D$ is nef by \cite[Theorem 3.9]{Kee03}. Finally, we see that $f^*D$ is nef by Lemma \ref{lem:nefpullback}$(\ref{lem:nefpullbacksurjective})$. \par For $(\ref{lem:nefbasechangeconverse})$, let $C \subseteq X$ be an integral one-dimensional closed subspace. Let $C'_i$ be the irreducible components of $C' \coloneqq C \otimes_k k'$ with reduced structure and geometric generic point $\bar{x}_i$, and let \[ m_i = \Length_{\mathcal{O}_{X \otimes_k k',\bar{x}_i}}\bigl( \mathcal{O}_{C'_i,\bar{x}_i} \bigr). \] Then, we have \[ (D \cdot C) = (f^*D \cdot C') = \sum_i m_i (f^*D \cdot C'_i) \ge 0 \] where the first equality follows from flat base change \cite[\href{https://stacks.math.columbia.edu/tag/073K}{Tag 073K}]{stacks-project}, the second equality is \cite[\href{https://stacks.math.columbia.edu/tag/0EDI}{Tag 0EDI}]{stacks-project}, and the last inequality is by the assumption that $f^*D$ is nef. \end{proof} We note that nefness can be detected at closed points $z \in \lvert Z \rvert$ under some additional assumptions. \begin{lemma}[cf.\ {\citeleft\citen{Kee03}\citemid Lemma 2.18(1)\citepunct \citen{CT20}\citemid Lemma 2.6\citeright}]\label{lem:NefAgainstNonClosedContracted} Let $\pi\colon X\to Z$ be a proper morphism of algebraic spaces over a scheme $S$. Suppose that $Z$ is quasi-compact and decent, or that $Z$ is a locally Noetherian scheme. Let $D \in \Pic_\mathbf{k}(X)$ for $\mathbf{k} \in \{\mathbf{Z},\mathbf{Q},\mathbf{R}\}$. Then, the following conditions are equivalent. \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item\label{lem:NefAgainstNonClosedContracteddef} $D$ is $\pi$-nef (resp.\ $\pi$-numerically trivial). \item \label{lem:NefAgainstNonClosedContractedclosed} For every closed point $z \in \lvert Z \rvert$ and every integral one-dimensional subspace $C \subseteq \pi^{-1}(z)$ that is closed in $\pi^{-1}(z)$, we have $(D \cdot C) \ge 0$ (resp.\ $(D \cdot C) = 0$). \end{enumerate} \end{lemma} \begin{proof} We have $(\ref{lem:NefAgainstNonClosedContracteddef}) \Rightarrow (\ref{lem:NefAgainstNonClosedContractedclosed})$ by definition, and hence it suffices to show $(\ref{lem:NefAgainstNonClosedContractedclosed}) \Rightarrow (\ref{lem:NefAgainstNonClosedContracteddef})$. As in the proof of Lemma \ref{lem:nefpullback}, it suffices to show the statement for nefness. \par We want to show that for every $z \in \lvert Z \rvert$, the pullback $D\rvert_{\pi^{-1}(z)}$ is nef over $\Spec(\kappa(z))$. We first show that $z$ specializes to a closed point $z_0 \in \lvert Z \rvert$. Since $Z$ is quasi-compact and decent, $\lvert Z \rvert$ is quasi-compact and Kolmogorov \cite[\href{https://stacks.math.columbia.edu/tag/03K3}{Tag 03K3}]{stacks-project}, and hence every point $z \in \lvert Z \rvert$ specializes to a closed point \cite[\href{https://stacks.math.columbia.edu/tag/005E}{Tag 005E}]{stacks-project}. When $Z$ is a locally Noetherian scheme, every point $z \in Z$ specializes to a closed point as well \cite[\href{https://stacks.math.columbia.edu/tag/02IL}{Tag 02IL}]{stacks-project}.
\par Now let $z \rightsquigarrow z_0$ be a specialization to a closed point in $\lvert Z \rvert$, which exists by the previous paragraph. By \cite[\href{https://stacks.math.columbia.edu/tag/0BBP}{Tag 0BBP} and \href{https://stacks.math.columbia.edu/tag/03IL}{Tag 03IL}]{stacks-project}, there is an \'etale morphism $U \to Z$ from an affine scheme $U$ with points $u \rightsquigarrow u_0$ mapping to $z \rightsquigarrow z_0$ such that the field extension $\kappa(z_0) \hookrightarrow \kappa(u_0)$ is an isomorphism. We note that $\kappa(z) \subseteq \kappa(u)$ is a field extension, and hence by Lemma \ref{lem:nefbasechange}$(\ref{lem:nefbasechangeconverse})$ it suffices to show that the pullback of $D$ to $X \times_Z \Spec(\kappa(u))$ is nef over $\Spec(\kappa(u))$. By Lemma \ref{lem:nefpullback} and the weak version of Chow's lemma in \cite[\href{https://stacks.math.columbia.edu/tag/089J}{Tag 089J}]{stacks-project}, we may replace $X$ by a proper surjective cover $Y \to X$ that is a projective scheme over $U$. \par By \cite[Chapter VI, \S1, n\textsuperscript{o} 2, Corollary to Theorem 2]{BouCA}, we can find a valuation ring $(R,\mathfrak{m})$ and a morphism $\Spec(R) \to U$ such that the generic point of $\Spec(R)$ maps to $u$ and the closed point of $\Spec(R)$ maps to $u_0$, and such that the field extension $\kappa(u) \hookrightarrow \Frac(R)$ is an isomorphism. Let $C \subseteq X \times_Z \Spec(\Frac(R))$ be an integral closed one-dimensional subspace. Taking the scheme-theoretic closure of $C$ in $X \times_Z \Spec(R)$, we obtain a flat family of closed one-dimensional subspaces in $X \times_Z \Spec(R)$ over $R$ \citeleft\citen{EGAInew}\citemid Proposition 8.4.5\citepunct \citen{BouCA}\citemid Chapter VI, \S4, n\textsuperscript{o} 1, Lemma 1\citeright. Since the residue field of $R$ is a field extension of $\kappa(z_0) \cong \kappa(u_0)$, we see that the restriction of $D$ to $X \times_Z \Spec(R/\mathfrak{m})$ is nef over $\Spec(R/\mathfrak{m})$ by Lemma \ref{lem:nefbasechange}$(\ref{lem:nefbasechangepullback})$. Thus, writing $D'$ for the pullback of $D$ to $X \times_Z \Spec(\Frac(R))$, we have $(D' \cdot C) \geq 0$ by the invariance of intersection numbers in flat families \cite[Proposition B.18]{Kle05}. \end{proof} Since nefness can be detected over closed points in many cases, we define the following. \begin{definition}[see {\citeleft\citen{Kle66}\citemid p.\ 335\citepunct \citen{KMM87}\citemid p.\ 291\citepunct \citen{Kee03}\citemid Definition 2.8\citepunct \citen{VPthesis}\citemid Definitions 1.3.19 and 1.3.20 and p.\ 15\citeright}] \label{def:intersectionWithContractedCurves} Let $\pi\colon X\to Z$ be a proper morphism of algebraic spaces over a scheme $S$, such that $Z$ is either quasi-compact and decent or a locally Noetherian scheme. A closed subspace $Y \subseteq X$ is \textsl{$\pi$-contracted} if $\pi(Y)$ is a zero-dimensional (closed) subspace of $Z$. A \textsl{$\pi$-contracted curve} is a $\pi$-contracted closed subspace that is integral and of dimension one. \par Now suppose that $X$ is quasi-compact. We denote by $Z_1(X/Z)$ the free Abelian group generated by $\pi$-contracted curves, and set \begin{align*} Z_1(X/Z)_\mathbf{k} &\coloneqq Z_1(X/Z) \otimes_\mathbf{Z} \mathbf{k} \intertext{for $\mathbf{k}\in\{\mathbf{Q},\mathbf{R}\}$. An element $\beta\in Z_1(X/Z)_{\mathbf{k}}$ is \textsl{$\pi$-numerically trivial} if $(D\cdot\beta)=0$ for all $D\in\Pic_\mathbf{k}(X)$.
We denote by $N_1(X/Z)$ the quotient of $Z_1(X/Z)$ by the subgroup of $\pi$-numerically trivial elements, and set} N_1(X/Z)_{\mathbf{k}} &\coloneqq N_1(X/Z) \otimes_\mathbf{Z} \mathbf{k} \end{align*} for $\mathbf{k}\in\{\mathbf{Q},\mathbf{R}\}$. \end{definition} \subsection{Theorem of the base} As in the absolute case, the modules $N^1(X/Z)_\mathbf{k}$ and $N_1(X/Z)_\mathbf{k}$ are finitely generated. This statement is called the theorem of the base. This theorem allows us to define cones in $N^1(X/Z)_\mathbf{k}$ and $N_1(X/Z)_\mathbf{k}$ corresponding to the various positivity notions in \S\ref{sect:relativeampleness}. \par To prove the theorem of the base, we start with the following. \begin{lemma}[cf.\ {\citeleft\citen{Kle66}\citemid Chapter IV, \S4, Proposition 1\citepunct \citen{Kee03}\citemid Lemma 2.20\citeright}]\label{lem:nefbasechangenotcartesian} Let $S$ be a scheme. Consider a commutative diagram \[ \begin{tikzcd} X'\arrow{dr}{h}\arrow[bend right=15]{ddr}[swap]{\rho} \arrow[bend left=15]{drr}{f}\\ & X \times_Z Z' \rar[swap]{g'}\dar[swap]{\pi'} & X\dar{\pi}\\ & Z' \rar{g} & Z \end{tikzcd} \] of algebraic spaces over $S$ where the square is Cartesian, $\pi$ and $\rho$ are proper, and $Z$ and $Z'$ are decent. Let $D \in \Pic_\mathbf{k}(X)$ for $\mathbf{k} \in \{\mathbf{Z},\mathbf{Q},\mathbf{R}\}$. \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item\label{lem:nefbasechangenotcartesianpullback} If $D$ is $\pi$-nef (resp.\ $\pi$-numerically trivial), then $f^*D$ is $\rho$-nef (resp.\ $\rho$-numerically trivial). \item\label{lem:nefbasechangenotcartesianconverse} Suppose that for every $z \in \lvert Z \rvert$ and every integral one-dimensional subspace $C \subseteq \pi^{-1}(z)$ that is closed in $\pi^{-1}(z)$, there exists a point $z' \in \lvert Z' \rvert$ such that for every irreducible component \[ C'_i \subseteq C' \coloneqq C \otimes_{\kappa(z)} \kappa(z') \] with reduced structure, there exists an integral one-dimensional subspace $C''_i \subseteq \rho^{-1}(z')$ that is closed in $\rho^{-1}(z')$ such that $h(C''_i) = C'_i$. If $f^*D$ is $\rho$-nef (resp.\ $\rho$-numerically trivial), then $D$ is $\pi$-nef (resp.\ $\pi$-numerically trivial). \item\label{lem:nefbasechangeconversenotcartesianclosed} Suppose that $Z$ either is quasi-compact or is a locally Noetherian scheme. Suppose that for every $\pi$-contracted curve $C \subseteq X$, with $z \in \lvert Z \rvert$ denoting the unique point of $\pi(C)$, there exists a point $z' \in \lvert Z' \rvert$ such that for every irreducible component \[ C'_i \subseteq C' \coloneqq C \otimes_{\kappa(z)} \kappa(z') \] with reduced structure, there exists an integral one-dimensional subspace $C''_i \subseteq \rho^{-1}(z')$ that is closed in $\rho^{-1}(z')$ such that $h(C''_i) = C'_i$. If $f^*D$ is $\rho$-nef (resp.\ $\rho$-numerically trivial), then $D$ is $\pi$-nef (resp.\ $\pi$-numerically trivial). \end{enumerate} \end{lemma} \begin{remark}\label{rem:nefbasechange} The condition on curves in $(\ref{lem:nefbasechangenotcartesianconverse})$ and $(\ref{lem:nefbasechangeconversenotcartesianclosed})$ holds, for example, when $g = \mathrm{id}_Z$ and $f$ is proper and surjective, which is the case proved in Lemma \ref{lem:nefpullback}, or when $g$ is surjective and $h = \mathrm{id}_{X'}$, which is the case proved in Lemma \ref{lem:nefbasechange}. \end{remark} \begin{proof}[Proof of Lemma \ref{lem:nefbasechangenotcartesian}] As in the proof of Lemma \ref{lem:nefpullback}, it suffices to show the statements for nefness.
\par For $(\ref{lem:nefbasechangenotcartesianpullback})$, we know that $g^{\prime*}D$ is $\pi'$-nef by Lemma \ref{lem:nefbasechange}$(\ref{lem:nefbasechangepullback})$. Thus, $f^*D$ is $\rho$-nef by Lemma \ref{lem:nefpullback}$(\ref{lem:nefpullbackalways})$. \par For $(\ref{lem:nefbasechangenotcartesianconverse})$ (resp.\ $(\ref{lem:nefbasechangeconversenotcartesianclosed})$), let $C \subseteq \pi^{-1}(z)$ be an integral one-dimensional subspace that is closed in $\pi^{-1}(z)$, where $z \in \lvert Z \rvert$ is a point (resp.\ a closed point). By definition (resp.\ by Lemma \ref{lem:NefAgainstNonClosedContracted}), it suffices to show that $(D \cdot C) \ge 0$. Let $C'_i$ be the irreducible components of $C'$ with reduced structure and geometric generic point $\bar{x}_i$, and let \[ m_i = \Length_{\mathcal{O}_{X \times_Z Z',\bar{x}_i}}\bigl( \mathcal{O}_{C'_i,\bar{x}_i} \bigr). \] Then, we have \begin{align*} (D \cdot C) = (g^{\prime*}D \cdot C') &= \sum_i m_i (g^{\prime*}D \cdot C'_i)\\ &= \sum_i m_i \bigl( \deg(C''_i \to C'_i) \bigr)^{-1} (f^*D \cdot C''_i) \ge 0 \end{align*} where the first equality follows from flat base change \cite[\href{https://stacks.math.columbia.edu/tag/073K}{Tag 073K}]{stacks-project}, the second equality is \cite[\href{https://stacks.math.columbia.edu/tag/0EDI}{Tag 0EDI}]{stacks-project}, the third equality is the projection formula \cite[\href{https://stacks.math.columbia.edu/tag/0EDJ}{Tag 0EDJ}]{stacks-project}, and the last inequality is by the assumption that $f^*D$ is nef. \end{proof} We show that $N^1$ is compatible with pullbacks. \begin{proposition}[cf.\ {\citeleft\citen{Kle66}\citemid Chapter IV, \S4, Proposition 1\citepunct \citen{Kee03}\citemid Lemma 3.1\citeright}]\label{lem:n1pullsback} Consider a commutative diagram \[ \begin{tikzcd} X' \rar{f} \dar[swap]{\rho} & X \dar{\pi}\\ Z' \rar{g} & Z \end{tikzcd} \] of algebraic spaces over a scheme $S$, where $\pi$ and $\rho$ are proper, and $Z$ and $Z'$ are decent. \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item\label{lem:n1pullsbackmap} The pair $(f,g)$ induces a group map \[ (f/g)^*\colon N^1(X/Z) \longrightarrow N^1(X'/Z'). \] \item\label{lem:n1pullsinjective} The map $(f/g)^*$ is injective if the condition in Lemma \ref{lem:nefbasechangenotcartesian}$(\ref{lem:nefbasechangenotcartesianconverse})$ holds, or if $Z$ is either quasi-compact or a locally Noetherian scheme and the condition in Lemma \ref{lem:nefbasechangenotcartesian}$(\ref{lem:nefbasechangeconversenotcartesianclosed})$ holds. \end{enumerate} \end{proposition} \begin{proof} We first show $(\ref{lem:n1pullsbackmap})$. By \cite[\href{https://stacks.math.columbia.edu/tag/0B8P}{Tag 0B8P}]{stacks-project}, pulling back invertible sheaves induces a map $\Pic(X) \to \Pic(X')$. It therefore suffices to show that the composition \[ \Pic(X) \longrightarrow \Pic(X') \longrightarrow N^1(X'/Z') \] factors through $N^1(X/Z)$. This holds since $\pi$-numerically trivial elements pull back to $\rho$-numerically trivial elements by Lemma \ref{lem:nefbasechangenotcartesian}$(\ref{lem:nefbasechangenotcartesianpullback})$. \par For $(\ref{lem:n1pullsinjective})$, it suffices to note that if the pullback of $\mathscr{L} \in \Pic(X)$ to $X'$ is $\rho$-numerically trivial, then $\mathscr{L}$ is $\pi$-numerically trivial by Lemma \ref{lem:nefbasechangenotcartesian}$(\ref{lem:nefbasechangenotcartesianconverse})$ or Lemma \ref{lem:nefbasechangenotcartesian}$(\ref{lem:nefbasechangeconversenotcartesianclosed})$. \end{proof} We can now show the theorem of the base.
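Before doing so, it may help to keep in mind the most classical instances of $N^1$; the computations in the following remark are standard facts over a field and are included only for illustration. \begin{remark} Take $Z = \Spec(k)$ for a field $k$. For $X = \mathbf{P}^n_k$, the group $\Pic(X)$ is infinite cyclic, generated by $\mathcal{O}_{\mathbf{P}^n_k}(1)$, and intersecting with a line $\ell \subseteq \mathbf{P}^n_k$ gives $(\mathcal{O}_{\mathbf{P}^n_k}(a) \cdot \ell) = a$. Hence no nonzero invertible sheaf is numerically trivial, and \[ N^1\bigl(\mathbf{P}^n_k/\Spec(k)\bigr) \cong \mathbf{Z}. \] For a smooth projective curve $X$ of genus $g \ge 1$ over $k = \mathbf{C}$, the group $\Pic(X)$ is not finitely generated, but an invertible sheaf on $X$ is numerically trivial if and only if it has degree $0$, so the degree map again induces an isomorphism $N^1(X/\Spec(k)) \cong \mathbf{Z}$. The content of the theorem of the base is that this finite generation persists for arbitrary proper morphisms of Noetherian algebraic spaces. \end{remark}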
We note that Noetherian algebraic spaces are quasi-compact, quasi-separated, and locally Noetherian (see the definition in \cite[\href{https://stacks.math.columbia.edu/tag/03EA}{Tag 03EA}]{stacks-project}), and hence are automatically decent (see \cite[\href{https://stacks.math.columbia.edu/tag/03I7}{Tag 03I7}]{stacks-project}). \begin{theorem}[Theorem of the base; cf.\ {\citeleft\citen{Kle66}\citemid Chapter IV, \S4, Proposition 3\citepunct\citen{Kee03}\citemid Theorem 3.6\citepunct \citen{Kee18}\citemid Theorem E2.2\citeright}] \label{thm:RelNSFinite} Let $\pi\colon X\to Z$ be a proper morphism of Noetherian algebraic spaces over a scheme $S$, and let $\mathbf{k}\in\{\mathbf{Z},\mathbf{Q},\mathbf{R}\}$. Then, the $\mathbf{k}$-modules $N^1(X/Z)_{\mathbf{k}}$ and $N_1(X/Z)_{\mathbf{k}}$ are finitely generated. Consequently, the intersection pairing \[ N^1(X/Z)_{\mathbf{k}}\times N_1(X/Z)_{\mathbf{k}}\longrightarrow \mathbf{k} \] is a perfect pairing. \end{theorem} \begin{proof} Since $N_1(X/Z)_{\mathbf{k}}$ is a submodule of $\Hom_{\mathbf{k}}(N^1(X/Z)_{\mathbf{k}},\mathbf{k})$, it suffices to show that $N^1(X/Z)_{\mathbf{k}}$ is finitely generated. The cases $\mathbf{k} = \mathbf{Q}$ and $\mathbf{k} = \mathbf{R}$ follow from the case $\mathbf{k} = \mathbf{Z}$ by extending scalars. The case when $Z$ is a scheme is proved in \citeleft\citen{Kle66}\citemid Chapter IV, \S4, Proposition 3\citepunct\citen{Kee03}\citemid Theorem 3.6\citepunct \citen{Kee18}\citemid Theorem E2.2\citeright. It therefore suffices to consider the case when $Z$ is an algebraic space. \par Let $Z' \to Z$ be an \'etale cover by a quasi-compact scheme $Z'$. Note that $Z'$ is a Noetherian scheme. We then consider the Cartesian diagram \[ \begin{tikzcd} X' \rar{f} \dar[swap]{\rho} & X \dar{\pi}\\ Z' \rar{g} & Z \end{tikzcd} \] By Proposition \ref{lem:n1pullsback} (see Remark \ref{rem:nefbasechange}), we have an injection $N^1(X/Z) \hookrightarrow N^1(X'/Z')$. Since $N^1(X'/Z')$ is finitely generated by the scheme case, we see that $N^1(X/Z)$ is finitely generated. \end{proof} \begin{remark}\label{rem:NefAgainstNonClosedContracted} With notation as in Definition \ref{def:intersectionWithContractedCurves}, if $z\in \lvert Z \rvert$ is not closed, then a closed subspace $C$ of $\pi^{-1}(z)$ is not a closed subspace of $X$, and thus is not covered by Definition \ref{def:intersectionWithContractedCurves}. However, if $\dim(C)=1$, the intersection number $(\mathscr{L}\cdot C)$ is still well-defined and extends linearly to $\Div_{\mathbf{k}}(X)$ for $\mathbf{k} \in \{\mathbf{Q},\mathbf{R}\}$ as before (cf.\ the proof of Lemma \ref{lem:NefAgainstNonClosedContracted}). Consequently, if $C$ is a one-dimensional integral closed subspace of $\pi^{-1}(z)$ for a point $z \in \lvert Z \rvert$, then $(D\cdot C)=0$ whenever $[D]=0\in N^1(X/Z)_{\mathbf{k}}$. These subspaces $C \subseteq \pi^{-1}(z)$ define classes \[ [C]\in N_1(X/Z)_{\mathbf{k}} \coloneqq \Hom_{\mathbf{k}}\bigl(N^1(X/Z)_{\mathbf{k}},\mathbf{k}\bigr), \] for $\mathbf{k}\in\{\mathbf{Z},\mathbf{Q},\mathbf{R}\}$. \end{remark} \subsection{Cones and Kleiman's criterion for ampleness} The theorem of the base allows us to define the relative ample and relative nef cones for proper morphisms of Noetherian algebraic spaces.
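As a guiding example for the definitions that follow (again a standard computation over a field, included only for orientation), consider $X = \mathbf{P}^1_k \times_k \mathbf{P}^1_k$ over $Z = \Spec(k)$, and write $\mathcal{O}_X(a,b)$ for the tensor product of the pullbacks of $\mathcal{O}_{\mathbf{P}^1_k}(a)$ and $\mathcal{O}_{\mathbf{P}^1_k}(b)$ along the two projections. Intersecting with the two rulings yields \[ N^1\bigl(X/\Spec(k)\bigr)_{\mathbf{R}} \cong \mathbf{R}^2, \qquad \bigl[\mathcal{O}_X(a,b)\bigr] \longmapsto (a,b), \] under which the nef classes form the closed quadrant $\{a \ge 0,\ b \ge 0\}$ and the ample classes form its interior $\{a > 0,\ b > 0\}$. The cones defined below, and Theorem \ref{thm:ampisinterior}, generalize this picture.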
\begin{definition}[see {\citeleft\citen{Kle66}\citemid p.\ 335\citepunct \citen{KMM87}\citemid p.\ 291\citepunct \citen{VPthesis}\citemid Definitions 1.3.21 and 1.3.24\citeright}] \label{def:AmpleConeAndNefCone} Let $\pi\colon X\to Z$ be a proper morphism of Noetherian algebraic spaces over a scheme $S$. The \textsl{relative nef cone} is \begin{align*} \Nef(X/Z)&\coloneqq\Set[\big]{[D]\in N^1(X/Z)_{\mathbf{R}} \given D\in \Pic_\mathbf{R}(X)\ \text{is $\pi$-nef}}, \intertext{and the \textsl{relative ample cone} is} \Amp(X/Z)&\coloneqq\Set[\big]{[D]\in N^1(X/Z)_{\mathbf{R}} \given D\in \Pic_\mathbf{R}(X)\ \text{is $\pi$-ample}}. \end{align*} In the space $N_1(X/Z)_{\mathbf{R}}$, we define the cone $\NE(X/Z)$ to be the set of $\mathbf{R}_{\geq 0}$-combinations of classes of $\pi$-contracted curves, and let $\NEbar(X/Z)$ be its closure. By Lemma \ref{lem:NefAgainstNonClosedContracted} and continuity, an $\mathbf{R}$-invertible sheaf $D$ on $X$ is $\pi$-nef if and only if for all $\gamma\in \NEbar(X/Z)$, we have $(D\cdot\gamma)\geq 0$. For an $\mathbf{R}$-invertible sheaf $D$ on $X$, we also define \[ \NEbar_{D\ge0}(X/Z) \coloneqq \Set[\big]{\gamma \in \NEbar(X/Z) \given (D \cdot \gamma) \ge 0}. \] Since $\NEbar(X/Z)$ is a closed convex subset of $N_1(X/Z)_{\mathbf{R}}$, it is an intersection of half-spaces. Thus, we have \begin{align}\label{NEBARisNEFgeq0} \NEbar(X/Z)=\Set[\big]{\gamma\in N_1(X/Z)_{\mathbf{R}}\given (\beta\cdot\gamma)\ge0\ \text{for all}\ \beta\in \Nef(X/Z)}. \end{align} \end{definition} We now want to prove the relative version of Kleiman's criterion for ampleness for proper morphisms of algebraic spaces. We start with the following definition. \begin{definition}[cf.\ {\citeleft\citen{Kle66}\citemid Chapter IV, \S4, Definition 1\citepunct \citen{Kee03}\citemid Definition 3.8\citepunct \citen{FS11}\citemid Lemma 4.12\citeright}] \label{def:relquasidiv} Let $\pi\colon X \to Z$ be a proper morphism of Noetherian algebraic spaces over a scheme $S$. We say that $X$ is \textsl{relatively quasi-divisorial for $\pi$} if, for every $\pi$-contracted integral closed subspace $V$ of positive dimension, there exist an invertible sheaf $\mathscr{H}$ on $X$ and a nonzero effective Cartier divisor $H$ on $V$ such that $\mathscr{H}_{\vert V} \cong \mathcal{O}_V(H)$. \end{definition} \begin{remark} With notation as in Definition \ref{def:relquasidiv}, $X$ is relatively quasi-divisorial for $\pi$ in the following cases: \begin{enumerate}[label=$(\roman*)$] \item When $\pi$ is projective (let $\mathscr{H}$ be $\pi$-very ample in the sense of \cite[\S2.1.1]{CT20}; see \cite[p.\ 257]{Kee03}). \item When $X$ is a regular scheme, or more generally a $\mathbf{Q}$-factorial scheme \cite[Chapter VI, Proof of Theorem 2.19]{Kol96}. \end{enumerate} \end{remark} We can now show that the ample cone is the interior of the nef cone. \begin{theorem}[cf.\ {\citeleft\citen{Kle66}\citemid Chapter IV, \S4, Theorem 2\citepunct \citen{Kee03}\citemid Theorem 3.9\citepunct \citen{Kee18}\citemid Theorem E2.2\citeright}]\label{thm:ampisinterior} Let $\pi\colon X \to Z$ be a proper morphism of Noetherian algebraic spaces over a scheme $S$. Then, we have \begin{align} \Amp(X/Z) &\subseteq \Int\bigl(\Nef(X/Z)\bigr).\label{eq:ampininterior} \intertext{If $X$ is relatively quasi-divisorial for $\pi$, then we have} \Amp(X/Z) &= \Int\bigl(\Nef(X/Z)\bigr).\label{eq:ampequalsinterior} \end{align} \end{theorem} \begin{proof} We first show $\Amp(X/Z) \subseteq \Nef(X/Z)$. Let $D \in \Amp(X/Z)$, and write $D = \sum_i a_iH_i$, where $a_i \in \mathbf{R}_{>0}$ and $H_i$ are $\pi$-ample invertible sheaves.
We have $D \in \Nef(X/Z)$ since the restriction of each $H_i$ to the fibers of $\pi$ is ample, and hence nef by \cite[Proposition B.14]{Kle05}. \par For the statements involving interiors, as in \cite[Chapter IV, \S 1, Remarks 4 and 5]{Kle66}, the cone generated by $\Int(\Nef(X/Z)) \cap N^1(X/Z)$ is equal to $\Int(\Nef(X/Z))$, and hence it suffices to prove both statements for invertible sheaves $\mathscr{L}$. Note that this reduction uses the fact that $N^1(X/Z)$ is finitely generated (Theorem \ref{thm:RelNSFinite}). Let $g\colon Z' \to Z$ be a surjective \'etale cover by a quasi-compact scheme $Z'$, and consider the associated Cartesian diagram \[ \begin{tikzcd} X' \rar{f}\dar[swap]{\pi'} & X\dar{\pi}\\ Z' \rar{g} & Z \end{tikzcd} \] \par To show \eqref{eq:ampininterior}, let $\mathscr{L} \in \Amp(X/Z)$. It suffices to show that for every $\mathscr{M} \in \Pic(X)$, we have $\mathscr{L}^{\otimes m} \otimes_{\mathcal{O}_X} \mathscr{M} \in \Amp(X/Z)$ for $m \gg 0$. Since $\mathscr{L}$ is $\pi$-ample, we know that $X \to Z$ is representable, and hence $X'$ is a scheme. Since $f^*\mathscr{L}$ is $\pi'$-ample, we know that $f^*\mathscr{L}^{\otimes m} \otimes_{\mathcal{O}_{X'}} f^*\mathscr{M}$ is $\pi'$-ample for all $m \gg 0$ by \cite[Corollaire 4.6.12]{EGAII}. We therefore see that $\mathscr{L}^{\otimes m} \otimes_{\mathcal{O}_X} \mathscr{M}$ is $\pi$-ample by \cite[\href{https://stacks.math.columbia.edu/tag/0D36}{Tag 0D36}]{stacks-project}.\smallskip \par It remains to show \eqref{eq:ampequalsinterior} when $X$ is relatively quasi-divisorial for $\pi$. Let $\mathscr{L} \in \Int(\Nef(X/Z))$. It suffices to show that $f^*\mathscr{L}$ is $\pi'$-ample and that $X'$ is a scheme by \cite[\href{https://stacks.math.columbia.edu/tag/0D36}{Tag 0D36}]{stacks-project}. By \cite[\href{https://stacks.math.columbia.edu/tag/0D3A}{Tag 0D3A}]{stacks-project} and the Nakai--Moishezon criterion for proper algebraic spaces over fields \citeleft\citen{PG85}\citemid Theorem 1.4\citepunct \citen{Kol90}\citemid Theorem 3.11\citeright, it suffices to show that for every $\pi'$-contracted integral closed subspace $V \subseteq X'$ of dimension $d > 0$, we have $((f^*\mathscr{L})^d \cdot V) > 0$. \par We proceed by induction on $d$. Since $X$ is relatively quasi-divisorial for $\pi$, there exists $\mathscr{H} \in \Pic(X)$ such that $\mathscr{H}_{\vert f(V)} \cong \mathcal{O}_{f(V)}(H)$ for some nonzero effective Cartier divisor $H$ on $f(V)$, and hence $f^*\mathscr{H}_{\vert V} \cong \mathcal{O}_V(f^*H)$, where the pullback of $H$ is defined by \cite[\href{https://stacks.math.columbia.edu/tag/083Z}{Tag 083Z}(1)]{stacks-project}. Since $\mathscr{L} \in \Int(\Nef(X/Z))$, there exists $m > 0$ such that $\mathscr{L}^{\otimes m} \otimes_{\mathcal{O}_X} \mathscr{H}^{-1}$ is $\pi$-nef, and hence $f^*\mathscr{L}^{\otimes m} \otimes_{\mathcal{O}_{X'}} f^*\mathscr{H}^{-1}$ is $\pi'$-nef by Lemma \ref{lem:nefbasechange}$(\ref{lem:nefbasechangepullback})$. We claim we have the following chain of equalities and inequalities: \begin{align*} \bigl((f^*\mathscr{L})^d \cdot V\bigr) &= \frac{1}{m}\bigl( (f^*\mathscr{L})^{d-1} \cdot f^*\mathscr{L}^{\otimes m} \cdot V\bigr)\\ &= \frac{1}{m}\Bigl(\bigl( (f^*\mathscr{L})^{d-1} \cdot (f^*\mathscr{L}^{\otimes m} \otimes_{\mathcal{O}_{X'}} f^*\mathscr{H}^{-1}) \cdot V\bigr) + \bigl( (f^*\mathscr{L})^{d-1} \cdot f^*\mathscr{H} \cdot V\bigr)\Bigr)\\ &\ge \frac{1}{m}\bigl( (f^*\mathscr{L})^{d-1} \cdot f^*\mathscr{H} \cdot V\bigr)\\ &>0.
\end{align*} The first two equalities follow from linearity of the intersection product \cite[\href{https://stacks.math.columbia.edu/tag/0EDH}{Tag 0EDH}]{stacks-project}. To show the inequality in the third line, let $\mu\colon V' \to V$ be a finite surjective morphism from a scheme $V'$, which exists by \cite[\href{https://stacks.math.columbia.edu/tag/09YC}{Tag 09YC}]{stacks-project}. Then, $(f_{\vert V} \circ \mu)^*\mathscr{L}$ and $(f_{\vert V} \circ \mu)^*\mathscr{L}^{\otimes m} \otimes_{\mathcal{O}_{V'}} (f_{\vert V} \circ \mu)^*\mathscr{H}^{-1}$ are nef on $V'$, and hence \[ \bigl( (f^*\mathscr{L})^{d-1} \cdot (f^*\mathscr{L}^{\otimes m} \otimes_{\mathcal{O}_{X'}} f^*\mathscr{H}^{-1}) \cdot V\bigr) \ge 0 \] by the projection formula \cite[\href{https://stacks.math.columbia.edu/tag/0EDJ}{Tag 0EDJ}]{stacks-project} and \cite[Lemma 2.12]{Kee03}. For the last inequality, if $d = 1$, we see that $V$ is a scheme by \cite[\href{https://stacks.math.columbia.edu/tag/0ADD}{Tag 0ADD}]{stacks-project}, and hence \[ (f^*\mathscr{H} \cdot V) = \deg\bigl(f^*\mathscr{H}_{\vert V}\bigr) > 0 \] by \cite[\href{https://stacks.math.columbia.edu/tag/0B40}{Tag 0B40}(2)]{stacks-project}. If $d \ge 2$, then we have \[ \bigl((f^*\mathscr{L})^{d-1} \cdot f^*\mathscr{H} \cdot V\bigr) = \bigl((f^*\mathscr{L})^{d-1} \cdot f^*H\bigr) > 0 \] by \cite[\href{https://stacks.math.columbia.edu/tag/0EDK}{Tag 0EDK}]{stacks-project} and the inductive hypothesis. \end{proof} \begin{remark} As seen in the proof of \eqref{eq:ampininterior}, the ample cone is always open in $N^1(X/Z)_\mathbf{R}$. In particular, if it is nonempty, the ample cone $\Amp(X/Z)$ $\mathbf{R}$-linearly spans $N^1(X/Z)_\mathbf{R}$. \end{remark} The relative ampleness of an $\mathbf{R}$-Cartier divisor $D$ only depends on its class $[D]$; thus, $[D]\in\Amp(X/Z)$ if and only if $D$ is $\pi$-ample. Indeed, we have the following relative version of Kleiman's criterion for ampleness stated in terms of the cone $\NEbar(X/Z)$. See \cite[Lemma 4.12]{FS11} for the case when $Z = \Spec(k)$, where $k$ is a field. See also \cite[Lemma 21]{Kol21def} and \cite[Corollary 1.4]{VP} for other versions of Kleiman's criterion for algebraic spaces. \begin{proposition}[see {\citeleft\citen{Kle66}\citemid Chapter IV, \S4, Proposition 4\citepunct \citen{FS11}\citemid Lemma 4.12\citeright}] \label{lem:AmpleIsPositiveOnNE} Let $\pi\colon X\to Z$ be a proper morphism of Noetherian algebraic spaces over a scheme $S$. Suppose that $X$ is relatively quasi-divisorial for $\pi$. Then, $D\in\Pic_\mathbf{R}(X)$ is $\pi$-ample if and only if for all nonzero $\gamma\in \NEbar(X/Z)$, we have $(D\cdot\gamma)>0$. \end{proposition} \begin{proof} For $\Rightarrow$, we proceed by contradiction as in \cite[Chapter II, Proposition 4.8]{Kol96}. Suppose that $(D\cdot\gamma)\le0$ for some nonzero $\gamma\in \NEbar(X/Z)$. Since $\gamma \neq 0$ and the intersection pairing is perfect (Theorem \ref{thm:RelNSFinite}), there exists $E \in \Pic(X)$ with $(E \cdot \gamma) \neq 0$; replacing $E$ by $-E$ if necessary, we may assume that $(E \cdot \gamma) < 0$. We have that $mD+E$ is $\pi$-ample for $m \gg 0$ by Theorem \ref{thm:ampisinterior}, and hence \[ 0 \le \bigl((mD+E) \cdot \gamma\bigr) = m(D \cdot \gamma) + (E \cdot \gamma) < 0, \] a contradiction. \par For $\Leftarrow$, by Theorem \ref{thm:ampisinterior}, it suffices to show that $D \in \Int(\Nef(X/Z))$, i.e., that for arbitrary $D' \in \Pic_\mathbf{R}(X)$, we have $mD+D' \in \Nef(X/Z)$ for all $m \gg 0$. We adapt the proof in \cite[Theorem 1.4.29]{Laz04a}. By Lemma \ref{lem:NefAgainstNonClosedContracted}, it suffices to show that there exists an $m$ such that $((mD+D') \cdot C) \ge 0$ for all $\pi$-contracted curves $C$.
Consider the linear functionals \[ \phi_D\colon N_1(X/Z)_\mathbf{R} \longrightarrow \mathbf{R}\qquad\text{and}\qquad \phi_{D'}\colon N_1(X/Z)_\mathbf{R} \longrightarrow \mathbf{R} \] defined by intersecting with $D$ and $D'$, respectively. Fix a norm $\lVert \cdot \rVert$ on $N_1(X/Z)_\mathbf{R}$, and let \[ S = \Set[\big]{\gamma \in N_1(X/Z)_\mathbf{R} \given \lVert \gamma \rVert = 1}. \] Since $\NEbar(X/Z) \cap S$ is compact, there exists $\varepsilon \in \mathbf{R}_{>0}$ such that $\phi_D(\gamma) \ge \varepsilon$ for all $\gamma \in \NEbar(X/Z) \cap S$. Similarly, there exists $\varepsilon' \in \mathbf{R}$ such that $\phi_{D'}(\gamma) \ge \varepsilon'$ for all $\gamma \in \NEbar(X/Z) \cap S$. Thus, $(D \cdot C) \ge \varepsilon \cdot \lVert C \rVert$ and $(D' \cdot C) \ge \varepsilon' \cdot \lVert C \rVert$ for every $\pi$-contracted curve $C \subseteq X$. We then have \[ \bigl((mD+D') \cdot C\bigr) = m(D \cdot C) + (D' \cdot C) \ge (m\,\varepsilon + \varepsilon') \cdot \lVert C \rVert, \] and hence it suffices to choose $m \gg 0$ such that $m\,\varepsilon + \varepsilon' > 0$. \end{proof} Next, we consider the behavior of cones under localization on the base. \begin{lemma}\label{lem:NumericalClassOpenSubset} Let $\pi\colon X\to Z$ be a proper morphism of Noetherian algebraic spaces over a scheme $S$. Let $V$ be an open subspace of $Z$. Restriction of invertible sheaves gives a $\mathbf{k}$-linear map \begin{align} \Pic_{\mathbf{k}}(X)&\longrightarrow \Pic_{\mathbf{k}}\bigl(\pi^{-1}(V)\bigr)\nonumber \intertext{for $\mathbf{k} \in \{\mathbf{Z},\mathbf{Q},\mathbf{R}\}$, and the construction in Remark \ref{rem:NefAgainstNonClosedContracted} gives a $\mathbf{k}$-linear map} Z_1(\pi^{-1}(V)/V)_{\mathbf{k}}&\longrightarrow N_1(X/Z)_{\mathbf{k}}.\nonumber \intertext{These maps are compatible with intersection products and thus give $\mathbf{k}$-linear maps} \begin{split} N^1(X/Z)_{\mathbf{k}}&\longrightarrow N^1(\pi^{-1}(V)/V)_{\mathbf{k}}\\ N_1\bigl(\pi^{-1}(V)/V\bigr)_{\mathbf{k}}&\longrightarrow N_1(X/Z)_{\mathbf{k}} \end{split}\label{eq:NumericalClassOpenSubset} \end{align} that preserve $\Nef$, $\Amp$, and $\NEbar$. \end{lemma} \begin{proof} That these maps are compatible with intersection products is a consequence of the construction of $[C]$ as in Lemma \ref{lem:NefAgainstNonClosedContracted} and Remark \ref{rem:NefAgainstNonClosedContracted}. Therefore, they induce the $\mathbf{k}$-linear maps in \eqref{eq:NumericalClassOpenSubset}. Under these maps, $\Nef(X/Z)$ is mapped into $\Nef(\pi^{-1}(V)/V)$ by Lemma \ref{lem:NefAgainstNonClosedContracted}, and $\Amp(X/Z)$ is mapped into $\Amp(\pi^{-1}(V)/V)$ by definition, since a $\pi$-ample invertible sheaf $\mathscr{L}$ restricts to a $\pi_{|\pi^{-1}(V)}$-ample invertible sheaf. By \eqref{NEBARisNEFgeq0}, $\NEbar(\pi^{-1}(V)/V)$ is mapped into $\NEbar(X/Z)$. \end{proof} Finally, we will use the following terminology to describe our cones. \begin{definition}[see {\cite[Definition 3-2-3]{KMM87}}]\label{def:SuppPlaneAndDualRay} We say a subspace $W\subseteq N^1(X/Z)_{\mathbf{R}}$ is a \textsl{supporting subspace of $\Nef(X/Z)$} if $W$ is the span of $W\cap\Nef(X/Z)$ and $W\cap \Amp(X/Z)=\emptyset$. We call a supporting subspace $W$ of $\Nef(X/Z)$ a \textsl{supporting hyperplane of $\Nef(X/Z)$} if $\dim W=\dim(N^1(X/Z)_{\mathbf{R}})-1$. Let $W$ be a supporting subspace of $\Nef(X/Z)$. The \textsl{extremal face dual to $W$} is \[ R=\Set[\big]{\gamma\in \NEbar(X/Z) \given (W\cdot\gamma)=0}.
\] When $W$ is a supporting hyperplane, we call $R$ the \textsl{extremal ray dual to $W$}. Note that $R$ is an extremal face of $\NEbar(X/Z)$ in the sense that if $\beta_1,\beta_2\in \NEbar(X/Z)$ satisfy $\beta_1+\beta_2\in R$, then $\beta_1,\beta_2\in R$. \end{definition} \begin{remark}\label{rem:DualToOneVector} There always exists a single $[D_0]\in W\cap\Nef(X/Z)$ such that \[R=\Set[\big]{\gamma\in \NEbar(X/Z) \given (D_0\cdot\gamma)=0}.\] Indeed, by assumption (and by Theorem \ref{thm:RelNSFinite}), $W$ is spanned by finitely many elements $[D_1],[D_2],\ldots,[D_n]\in\Nef(X/Z)$. Since $(\Nef(X/Z)\cdot\NEbar(X/Z))\geq 0$, it is easy to see that $D_0=D_1+D_2+\cdots +D_n$ is a valid choice. If $W$ has a basis consisting of rational elements of $\Nef(X/Z)$, then by the above we may take $D_0$ to be rational. \end{remark} \begin{remark}\label{rem:DualRay} When $W$ is a supporting hyperplane, the extremal ray $R$ dual to $W$ is a ray in the $\mathbf{R}$-vector space $N_1(X/Z)_{\mathbf{R}}$. Indeed, $R\neq\{0\}$ by Lemma \ref{lem:AmpleIsPositiveOnNE}, and the span of $R$ has dimension at most one since $W$ has codimension one. \end{remark} \section{Relatively big \texorpdfstring{$\mathbf{R}$}{R}-invertible sheaves} In this section, we define the ``birational'' variants of the relative ampleness conditions defined in the previous section, i.e., relative bigness and relative pseudo-effectivity. As far as we are aware, the results of this section are new for algebraic spaces, even for proper algebraic spaces over a field. \subsection{Growth of cohomology and volume} We will need the following estimate on the growth of cohomology of twists. \begin{proposition}[cf.\ {\citeleft\citen{Deb01}\citemid Proposition 1.31$(a)$\citeright}] \label{prop:deb131a} Let $X$ be a proper algebraic space of dimension $d$ over a field $k$, and let $\mathscr{L}$ be an invertible sheaf on $X$. For every coherent sheaf $\mathscr{F}$ on $X$, we have \begin{equation}\label{eq:dimcounttwists} h^i(X,\mathscr{F} \otimes_{\mathcal{O}_X} \mathscr{L}^{\otimes m}) = O(m^d) \end{equation} for all $i$. Here, the dimension $h^i$ of $H^i$ is computed over $k$. \end{proposition} \begin{proof} By d\'evissage \cite[\href{https://stacks.math.columbia.edu/tag/08AN}{Tag 08AN}]{stacks-project}, it suffices to show the following: \begin{enumerate}[label=$(\alph*)$,ref=\alph*] \item\label{prop:dimcounttwistsdevissage1} For every short exact sequence \[ 0 \longrightarrow \mathscr{F}_1 \longrightarrow \mathscr{F}_2 \longrightarrow \mathscr{F}_3 \longrightarrow 0 \] of coherent sheaves on $X$, if \eqref{eq:dimcounttwists} holds for two out of three of $\mathscr{F}_1$, $\mathscr{F}_2$, and $\mathscr{F}_3$, then \eqref{eq:dimcounttwists} holds for the third. \item\label{prop:dimcounttwistsdevissage2} If \eqref{eq:dimcounttwists} holds for $\mathscr{F}^{\oplus r}$ for some $r \ge 1$, then \eqref{eq:dimcounttwists} holds for $\mathscr{F}$. \item\label{prop:dimcounttwistsdevissage3} For every integral closed subspace $\iota\colon V \hookrightarrow X$, there exists a coherent sheaf $\mathscr{G}$ on $X$ whose scheme-theoretic support is $V$ such that \eqref{eq:dimcounttwists} holds for $\mathscr{G}$.
\end{enumerate} \par First, $(\ref{prop:dimcounttwistsdevissage1})$ follows from the inequalities \begin{alignat*}{3} h^i(X,\mathscr{F}_1\otimes_{\mathcal{O}_X}\mathscr{L}^{\otimes m}) &\le{}& h^{i-1}(X,\mathscr{F}_3\otimes_{\mathcal{O}_X}\mathscr{L}^{\otimes m}) &{}+{}& h^i(X,\mathscr{F}_2\otimes_{\mathcal{O}_X}\mathscr{L}^{\otimes m})\\ h^i(X,\mathscr{F}_2\otimes_{\mathcal{O}_X}\mathscr{L}^{\otimes m}) &\le{}& h^i(X,\mathscr{F}_1\otimes_{\mathcal{O}_X}\mathscr{L}^{\otimes m}) &{}+{}& h^i(X,\mathscr{F}_3\otimes_{\mathcal{O}_X}\mathscr{L}^{\otimes m})\\ h^i(X,\mathscr{F}_3\otimes_{\mathcal{O}_X}\mathscr{L}^{\otimes m}) &\le{}& h^i(X,\mathscr{F}_2\otimes_{\mathcal{O}_X}\mathscr{L}^{\otimes m}) &{}+{}& h^{i+1}(X,\mathscr{F}_1\otimes_{\mathcal{O}_X}\mathscr{L}^{\otimes m}) \end{alignat*} obtained by twisting the given exact sequence by $\mathscr{L}^{\otimes m}$ and using the long exact sequence on sheaf cohomology. \par Second, $(\ref{prop:dimcounttwistsdevissage2})$ follows since \[ h^i(X,\mathscr{F}^{\oplus r} \otimes_{\mathcal{O}_X} \mathscr{L}^{\otimes m}) = r \cdot h^i(X,\mathscr{F} \otimes_{\mathcal{O}_X} \mathscr{L}^{\otimes m}). \] \par Third, $(\ref{prop:dimcounttwistsdevissage3})$ follows from the scheme case of \eqref{eq:dimcounttwists} as follows. By the weak version of Chow's lemma in \cite[\href{https://stacks.math.columbia.edu/tag/089J}{Tag 089J}]{stacks-project}, there exists a proper surjective morphism $\mu\colon V' \to V$ from a scheme $V'$ that is a closed subscheme of $\mathbf{P}^N_k$ for some $N$. By \cite[\href{https://stacks.math.columbia.edu/tag/0DMN}{Tag 0DMN}]{stacks-project}, after replacing $V'$ by a closed integral subspace, we may assume that $\mu$ is generically finite. Let $\mathcal{O}_{V'}(n) = \mathcal{O}_{\mathbf{P}^N_k}(n)_{\vert V'}$. Choose $n > 0$ such that $R^p\mu_*\mathcal{O}_{V'}(n) = 0$ for all $p > 0$ \cite[\href{https://stacks.math.columbia.edu/tag/08AQ}{Tag 08AQ}]{stacks-project}. We claim that $\mathscr{G} = \iota_*\mu_*\mathcal{O}_{V'}(n)$ satisfies \eqref{eq:dimcounttwists}. We have \[ h^i(X,\mathscr{G} \otimes_{\mathcal{O}_X} \mathscr{L}^{\otimes m}) = h^i\Bigl(V',\mathcal{O}_{V'}(n) \otimes_{\mathcal{O}_{V'}} \mu^*\bigl(\mathscr{L}^{\otimes m}_{\vert V}\bigr) \Bigr) = O(m^{\dim(V)}) \] by the projection formula \cite[\href{https://stacks.math.columbia.edu/tag/0944}{Tag 0944}]{stacks-project}, the Leray spectral sequence \cite[\href{https://stacks.math.columbia.edu/tag/0733}{Tag 0733}]{stacks-project}, and the scheme case of the proposition \cite[Proposition 1.31$(a)$]{Deb01}. \end{proof} We next define volumes. \begin{definition}[see {\cite[Definition 2.2.31]{Laz04a}}] Let $X$ be an integral proper algebraic space of dimension $d$ over a field $k$. The \textsl{volume} of an invertible sheaf $\mathscr{L}$ on $X$ is \[ \vol_X(\mathscr{L}) \coloneqq \limsup_{m \to \infty} \frac{h^0(X,\mathscr{L}^{\otimes m})}{m^d/d!}, \] where the dimension $h^0$ of $H^0$ is computed over $k$. \end{definition} We show that the volume behaves well with respect to generically finite morphisms. \begin{proposition}[cf.\ {\cite[Lemma 4.3]{Hol}}]\label{prop:volumegenfin} Let $f\colon Y \to X$ be a surjective generically finite morphism of integral proper algebraic spaces over a field $k$. Consider an invertible sheaf $\mathscr{L}$ on $X$. Then, we have \[ \vol_Y(f^*\mathscr{L}) = \deg(f) \cdot \vol_X(\mathscr{L}). \] \end{proposition} \begin{proof} Since $f$ is generically finite, we know that $f_*\mathcal{O}_Y$ has rank $r = \deg(f)$. 
Thus, there exists a dense open subspace $U \subseteq X$ such that $(f_*\mathcal{O}_Y)_{\vert U} \cong \mathcal{O}_U^{\oplus r}$, which yields an injection $f_*\mathcal{O}_Y \hookrightarrow \mathscr{K}_X^{\oplus r}$, where $\mathscr{K}_X$ is the sheaf of meromorphic functions as defined in \cite[\href{https://stacks.math.columbia.edu/tag/0EN3}{Tag 0EN3}]{stacks-project}. Consider the intersection $\mathscr{G} = f_*\mathcal{O}_Y \cap \mathcal{O}_X^{\oplus r}$ as subsheaves of $\mathscr{K}_X^{\oplus r}$, and the short exact sequences \[ \begin{tikzcd}[column sep=1.475em,row sep=small] 0 \rar & \mathscr{G} \rar\dar[equal] & f_*\mathcal{O}_Y \rar & \mathscr{G}_1 \rar & 0\\ 0 \rar & \mathscr{G} \rar & \mathcal{O}_X^{\oplus r} \rar & \mathscr{G}_2 \rar & 0 \end{tikzcd} \] Since $\mathscr{G}_1$ and $\mathscr{G}_2$ are supported in $X - U$, we see that \[ h^1(X,\mathscr{G}_1 \otimes_{\mathcal{O}_X} \mathscr{L}^{\otimes m}) = O(m^{d-1}) \qquad\text{and}\qquad h^1(X,\mathscr{G}_2 \otimes_{\mathcal{O}_X} \mathscr{L}^{\otimes m}) = O(m^{d-1}) \] by Proposition \ref{prop:deb131a}. Twisting by $\mathscr{L}^{\otimes m}$, the long exact sequence on sheaf cohomology and the projection formula \cite[\href{https://stacks.math.columbia.edu/tag/0944}{Tag 0944}]{stacks-project} imply \begin{align*} h^0(Y,f^*\mathscr{L}^{\otimes m}) - h^0(X,\mathscr{G} \otimes_{\mathcal{O}_X} \mathscr{L}^{\otimes m}) &\le h^1(X,\mathscr{G}_1 \otimes_{\mathcal{O}_X} \mathscr{L}^{\otimes m}) = O(m^{d-1}),\\ r \cdot h^0(X,\mathscr{L}^{\otimes m}) - h^0(X,\mathscr{G} \otimes_{\mathcal{O}_X} \mathscr{L}^{\otimes m}) &\le h^1(X,\mathscr{G}_2 \otimes_{\mathcal{O}_X} \mathscr{L}^{\otimes m}) = O(m^{d-1}). \end{align*} We therefore see that $\vol_Y(f^*\mathscr{L}) = r \cdot \vol_X(\mathscr{L})$. \end{proof} As a consequence, we can show that the volume is homogeneous with respect to taking powers of an invertible sheaf. \begin{proposition}[cf.\ {\cite[Proposition 2.2.35$(a)$]{Laz04a}}]\label{prop:volhomog} Let $X$ be an integral proper algebraic space of dimension $d$ over a field $k$, and let $\mathscr{L}$ be an invertible sheaf on $X$. Then, for every integer $n > 0$, we have \[ \vol_X(\mathscr{L}^{\otimes n}) = n^d\vol_X(\mathscr{L}). \] \end{proposition} \begin{proof} By the weak version of Chow's lemma in \cite[\href{https://stacks.math.columbia.edu/tag/089J}{Tag 089J}]{stacks-project}, there exists a proper surjective morphism $f\colon X' \to X$ from a scheme $X'$ that is a closed subscheme of $\mathbf{P}^N_k$ for some $N$. By \cite[\href{https://stacks.math.columbia.edu/tag/0DMN}{Tag 0DMN}]{stacks-project}, after replacing $X'$ by a closed integral subspace, we may assume that $f$ is generically finite. We then have \[ \vol_X(\mathscr{L}^{\otimes n}) = \frac{1}{\deg(f)} \vol_{X'}(f^*\mathscr{L}^{\otimes n}) = \frac{n^d}{\deg(f)}\vol_{X'}(f^*\mathscr{L}) = n^d\vol_X(\mathscr{L}), \] where the first and last equalities follow from Proposition \ref{prop:volumegenfin}, and the middle equality follows from the fact that the limit supremum in the definition of $\vol_{X'}(f^*\mathscr{L})$ is in fact a limit \cite[Theorem 8.1]{Cut14}. \end{proof} \subsection{Relatively big and pseudo-effective \texorpdfstring{$\mathbf{R}$}{R}-invertible sheaves} We now define $\pi$-big and $\pi$-pseudo-effective $\mathbf{k}$-invertible sheaves and $\mathbf{k}$-Cartier divisors. In the definition below, we recall that integral algebraic spaces are decent by definition \cite[\href{https://stacks.math.columbia.edu/tag/0AD4}{Tag 0AD4}]{stacks-project}.
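To illustrate the volume condition appearing in the definition below (a standard computation on projective space, included only for orientation), take $\mathbf{P}^d_k$ over a field $k$ and $\mathscr{L} = \mathcal{O}_{\mathbf{P}^d_k}(n)$ for an integer $n$. For $n, m > 0$, we have \[ h^0\bigl(\mathbf{P}^d_k,\mathscr{L}^{\otimes m}\bigr) = \binom{nm+d}{d} = \frac{(nm)^d}{d!} + O(m^{d-1}), \] so that $\vol_{\mathbf{P}^d_k}(\mathscr{L}) = n^d$, in accordance with Proposition \ref{prop:volhomog}, while $\vol_{\mathbf{P}^d_k}(\mathscr{L}) = 0$ for $n \le 0$. Thus, $\mathcal{O}_{\mathbf{P}^d_k}(n)$ is big if and only if $n > 0$.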
\begin{definition}[see {\citeleft\citen{BCHM10}\citemid Definition 3.1.1(7)\citepunct \citen{Fuj14}\citemid Definition A.20\citeright}] \label{def:fbig} Let $\pi\colon X \to Z$ be a proper surjective morphism between integral algebraic spaces over a scheme $S$. Let $\mathscr{L}$ be an invertible sheaf on $X$. We say that $\mathscr{L}$ is \textsl{$\pi$-big} if \begin{equation}\label{eq:bigvolcond} \vol_{X_\eta}\bigl(\mathscr{L}_{\vert X_\eta}\bigr) \coloneqq \limsup_{m \to \infty} \frac{h^0 \bigl(X_\eta,\mathscr{L}^{\otimes m}_{\vert X_\eta} \bigr)}{m^{\dim(X_\eta)}/(\dim(X_\eta))!} > 0, \end{equation} where $\eta \in \lvert Z \rvert$ is the generic point and $X_\eta = \pi^{-1}(\eta)$ is the generic fiber, and the dimension $h^0$ of $H^0$ is computed over $\kappa(\eta)$. We note that $\lvert X_\eta \rvert$ is irreducible by \cite[Chapitre 0, Proposition 2.1.13]{EGAInew}, and hence $X_\eta$ is integral. \par Now suppose $D$ is a $\mathbf{k}$-invertible sheaf on $X$ for $\mathbf{k} \in \{\mathbf{Q},\mathbf{R}\}$. We say that $D$ is \textsl{$\pi$-big} if $D$ is a finite nonzero $\mathbf{k}_{>0}$-linear combination of $\pi$-big invertible sheaves on $X$. If $Z = \Spec(k)$ for a field $k$, we just say that $\mathscr{L}$ or $D$ is \textsl{big}. We use the same terminology for $\mathbf{k}$-Cartier divisors when $X$ is a locally Noetherian scheme. \end{definition} \begin{remark} If $X_\eta$ is a scheme in Definition \ref{def:fbig}, the condition \eqref{eq:bigvolcond} holds if and only if for $m \gg 0$, the rational map \[ \begin{tikzcd}[column sep=large] X_\eta \rar[dashed]{\lvert \mathscr{L}_{\vert X_\eta}^{\otimes m} \rvert} & \mathbf{P}\Bigl(H^0\bigl(X_\eta,\mathscr{L}^{\otimes m}_{\vert X_\eta} \bigr)\Bigr) \end{tikzcd} \] is generically finite onto its image by \cite[Theorems 8.2 and 10.7]{Cut14}. \end{remark} \begin{definition}[see {\cite[Definition 3.1.1(9)]{BCHM10}}] Let $\pi\colon X \to Z$ be a proper surjective morphism between integral algebraic spaces over a scheme $S$. Let $D$ be a $\mathbf{k}$-invertible sheaf on $X$ for $\mathbf{k} \in \{\mathbf{Q},\mathbf{R}\}$. We say that $D$ is \textsl{$\pi$-pseudo-effective} if the restriction $D_{\vert X_\eta}$ of $D$ to the generic fiber of $\pi$ is a limit of $\mathbf{Q}$-invertible sheaves associated to effective $\mathbf{Q}$-Cartier divisors under the map \eqref{eq:effcarttopic}. If $Z = \Spec(k)$ for a field $k$, we just say that $D$ is \textsl{pseudo-effective}. \end{definition} We now show a relative version of Kodaira's lemma. \begin{lemma}[Relative Kodaira's lemma; cf.\ {\citeleft\citen{KMM87}\citemid Lemma 0-3-3 and Corollary 0-3-4\citepunct \citen{Fuj17}\citemid Lemma 2.1.27\citepunct \citen{CLM}\citemid Lemma 1.18\citeright}]\label{lem:BigIsAmpleAddEffective} Let $\pi\colon X \to Z$ be a proper surjective morphism between integral algebraic spaces over a scheme $S$. Let $\mathscr{L}$ be a $\pi$-big invertible sheaf on $X$. Let $V \subseteq X$ be a proper closed subspace. For infinitely many $m > 0$, we have \[ \pi_*( \mathcal{I}_V \otimes_{\mathcal{O}_X} \mathscr{L}^{\otimes m} ) \ne 0. \] If the generic fiber $X_\eta$ is a scheme, then this holds for all $m \gg 0$. \end{lemma} \begin{proof} By restricting to the generic fiber of $\pi$, it suffices to consider the case when $Z = \Spec(k)$ for a field $k$. \par Consider the short exact sequence \begin{align*} 0 \longrightarrow \mathcal{I}_V \otimes_{\mathcal{O}_X} \mathscr{L}^{\otimes m} &\longrightarrow \mathscr{L}^{\otimes m} \longrightarrow \mathscr{L}^{\otimes m}_{\vert V} \longrightarrow 0.
\intertext{Taking global sections, we have the exact sequence} 0 \longrightarrow H^0(X,\mathcal{I}_V \otimes_{\mathcal{O}_X} \mathscr{L}^{\otimes m}) &\longrightarrow H^0(X,\mathscr{L}^{\otimes m}) \longrightarrow H^0\bigl(V,\mathscr{L}^{\otimes m}_{\vert V}\bigr). \end{align*} Since $\mathscr{L}$ is big, we see that \[ \dim_k\bigl(H^0(X,\mathscr{L}^{\otimes m})\bigr) > \dim_k \Bigl(H^0\bigl(V,\mathscr{L}^{\otimes m}_{\vert V}\bigr)\Bigr) \] for infinitely many $m > 0$: indeed, $h^0\bigl(V,\mathscr{L}^{\otimes m}_{\vert V}\bigr) = O(m^{\dim(V)})$ with $\dim(V) < \dim(X)$ by Proposition \ref{prop:deb131a}, while the limit supremum in \eqref{eq:bigvolcond} is positive. Hence $H^0(X,\mathcal{I}_V \otimes_{\mathcal{O}_X} \mathscr{L}^{\otimes m}) \ne 0$ for infinitely many $m > 0$. \par The last statement when $X_\eta$ is a scheme holds because in this case, the limit supremum in \eqref{eq:bigvolcond} is a limit by \cite[Theorem 10.7]{Cut14}. \end{proof} We obtain the following characterization of $\pi$-big $\mathbf{k}$-invertible sheaves. \begin{corollary}[cf.\ {\citeleft\citen{Laz04a}\citemid Corollary 2.2.7 and Proposition 2.2.22\citepunct \citen{Fuj17}\citemid Lemma 2.1.29\citeright}]\label{lem:kodairachar} Let $\pi\colon X \to Z$ be a projective surjective morphism between integral Noetherian schemes, such that $Z$ is affine. Let $D$ be a $\mathbf{k}$-invertible sheaf on $X$ for $\mathbf{k} \in \{\mathbf{Q},\mathbf{R}\}$. The following are equivalent: \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item\label{lem:kodairacharbig} $D$ is $\pi$-big. \item\label{lem:kodairacharapluse} We have $D = A+E$ in $\Pic_\mathbf{k}(X)$ for $\mathbf{k}$-invertible sheaves $A$ and $E$ such that $A$ is a $\pi$-ample $\mathbf{k}$-invertible sheaf and $E$ is the $\mathbf{k}$-invertible sheaf associated to an effective $\mathbf{k}$-Cartier divisor. \item\label{lem:kodairacharapluserational} We have $D = A+E$ in $\Pic_\mathbf{k}(X)$ for $\mathbf{k}$-invertible sheaves $A$ and $E$ such that $A$ is a $\pi$-ample $\mathbf{k}$-invertible sheaf and $E$ is the $\mathbf{k}$-invertible sheaf associated to an effective $\mathbf{k}$-Cartier divisor, where $A$ is in fact a $\mathbf{Q}$-invertible sheaf. \item\label{lem:kodairacharapluserationaleff} We have $D = A+E$ in $\Pic_\mathbf{k}(X)$ for $\mathbf{k}$-invertible sheaves $A$ and $E$ such that $A$ is a $\pi$-ample $\mathbf{k}$-invertible sheaf and $E$ is the $\mathbf{k}$-invertible sheaf associated to an effective $\mathbf{k}$-Cartier divisor, where $E$ is in fact a $\mathbf{Q}$-invertible sheaf. \end{enumerate} Moreover, if $D$ is $\pi$-big and $\pi$-nef, then writing $D = A+E$ as above, we can make the coefficients on $E$ arbitrarily small without changing the invertible sheaves that appear when expressing $E$ as a $\mathbf{k}$-linear combination of invertible sheaves. \end{corollary} \begin{proof} We first show $(\ref{lem:kodairacharbig}) \Rightarrow (\ref{lem:kodairacharapluse})$. Write $D = \sum_{i=1}^n a_iD_i$ for $\pi$-big invertible sheaves $D_i$ and $a_i \in \mathbf{k}_{>0}$. Let $A_0$ be a $\pi$-very ample effective Cartier divisor. Applying Lemma \ref{lem:BigIsAmpleAddEffective} to each invertible sheaf $\mathcal{O}_X(D_i)$, we have \[ H^0\bigl(X,\mathcal{O}_X(m_iD_i-A_0)\bigr) \ne 0 \] for some $m_i > 0$. We can then find an effective Cartier divisor $E_i \in \lvert m_iD_i - A_0 \rvert$, and hence \[ D = \sum_{i=1}^n a_iD_i \sim_\mathbf{k} \sum_{i=1}^n \frac{a_i}{m_i}A_0 + \sum_{i=1}^n \frac{a_i}{m_i}E_i.
\] Setting $A = \sum_{i=1}^n \frac{a_i}{m_i}A_0$ and $E = \sum_{i=1}^n \frac{a_i}{m_i}E_i$, we are done.\smallskip \par Next, we show $(\ref{lem:kodairacharapluse}) \Rightarrow (\ref{lem:kodairacharapluserational})$ and $(\ref{lem:kodairacharapluse}) \Rightarrow (\ref{lem:kodairacharapluserationaleff})$. If $\mathbf{k} = \mathbf{Q}$, there is nothing to show. If $A = \sum_{i=1}^m b_iA_i$ for $b_i \in \mathbf{R}_{\ge0}$ and $E = \sum_{j=1}^n c_jE_j$ for $c_j \in \mathbf{R}_{\ge0}$, then we can write \[ D = \sum_{i=1}^m b_i'A_i + \sum_{j=1}^n (c_j-c_j')E_j + \sum_{i=1}^m (b_i-b_i')A_i + \sum_{j=1}^n c_j'E_j \] where $b_i',c_j' \in \mathbf{Q}$. To obtain a decomposition $D = A+E$ where $A \in \Pic_\mathbf{Q}(X)$, we choose $c_j = c_j'$ and choose $b_i'$ such that $0 \le b_i - b_i' \ll 1$. To obtain a decomposition $D = A+E$ where $E \in \Pic_\mathbf{Q}(X)$, we choose $b_i = b_i'$ and choose $c_j'$ such that $\lvert c_j-c_j' \rvert \ll 1$ and use the openness of the ample cone (Theorem \ref{thm:ampisinterior}).\smallskip \par Clearly $(\ref{lem:kodairacharapluserational}) \Rightarrow (\ref{lem:kodairacharapluse})$ and $(\ref{lem:kodairacharapluserationaleff}) \Rightarrow (\ref{lem:kodairacharapluse})$. It therefore suffices to show $(\ref{lem:kodairacharapluserational}) \Rightarrow (\ref{lem:kodairacharbig})$ to complete the proof. We first show the statement when $\mathbf{k} = \mathbf{Q}$. Writing $D = A+E$, we can clear denominators to reduce to the case when $D = A + E$ in $\Pic(X)$. In this case, we have \[ H^0\bigl(X,\mathcal{O}_X(mA)\bigr) \lhook\joinrel\longrightarrow H^0\bigl(X,\mathcal{O}_X(mA+mE)\bigr) \cong H^0\bigl(X,\mathcal{O}_X(mD)\bigr) \] for all $m > 0$, and hence the claim follows from \cite[Chapter VI, Theorem 2.15]{Kol96}. \par We now show $(\ref{lem:kodairacharapluserational}) \Rightarrow (\ref{lem:kodairacharbig})$ when $\mathbf{k} = \mathbf{R}$. Write $E = \sum_{j=1}^n c_jE_j$. We argue by induction on $n$. If $n = 0$, there is nothing to show. If $n \ge 1$, write \[ D = \biggl(A + \sum_{j=1}^{n-1} c_jE_j\biggr) + c_nE_n. \] By the inductive hypothesis, we know that $D' = A + \sum_{j=1}^{n-1} c_jE_j$ is $\pi$-big, and hence we can write $D' = \sum_{i=1}^m a_iD_i$ for $\pi$-big invertible sheaves $D_i$ and $a_i \in \mathbf{R}_{>0}$. Choose $s_1,s_2 \in \mathbf{Q}_{>0}$ such that $s_1 < c_n/a_m < s_2$ and $t \in [0,1]$ such that $c_n/a_m = ts_1 + (1-t)s_2$. We then have \begin{align*} D &= \sum_{i=1}^{m-1} a_iD_i + a_mD_m + c_nE_n\\ &= \sum_{i=1}^{m-1} a_iD_i + a_m\biggl(D_m + \frac{c_n}{a_m}E_n \biggr)\\ &= \sum_{i=1}^{m-1} a_iD_i + a_m\bigl( t(D_m + s_1E_n) + (1-t)(D_m+s_2E_n) \bigr). \end{align*} Since $D_m + s_1E_n$ and $D_m + s_2E_n$ are $\pi$-big by the argument used for $\mathbf{k} = \mathbf{Q}$ above (which applies verbatim with the $\pi$-big sheaf $D_m$ in place of the $\pi$-ample sheaf $A$), we see that $D$ is an $\mathbf{R}_{>0}$-linear combination of $\pi$-big invertible sheaves.\smallskip \par Finally, if $D$ is $\pi$-nef and $\pi$-big, then $kD+A$ is $\pi$-ample for any positive integer $k$ by Theorem \ref{thm:ampisinterior}. If we have a decomposition $D = A+E$ as above, we then have \[ D = \frac{1}{k+1}(kD+A) + \frac{1}{k+1}E. \] Replacing $A$ and $E$ by $\frac{1}{k+1}(kD+A)$ and $\frac{1}{k+1}E$, respectively, we can make the coefficients on $E$ arbitrarily small without changing the invertible sheaves that appear when writing $E$ as a $\mathbf{k}$-linear combination of invertible sheaves.
\end{proof} We now show that bigness behaves well with respect to pullback by generically finite morphisms. \begin{lemma}[cf.\ {\citeleft\citen{Fuj14}\citemid Lemmas A.5 and A.18\citeright}]\label{lem:bigpullback} Let $S$ be a scheme. Let \[ \begin{tikzcd} X' \rar{f}\arrow{dr}[swap]{\pi'} & X\dar{\pi}\\ & Z \end{tikzcd} \] be a commutative diagram of integral algebraic spaces over $S$, where $\pi$ and $\pi'$ are proper surjective and $f$ is generically finite. Let $D \in \Pic_\mathbf{k}(X)$ for $\mathbf{k} \in \{\mathbf{Z},\mathbf{Q},\mathbf{R}\}$. Then, $D$ is $\pi$-big if and only if $f^*D$ is $\pi'$-big. Also, if $D$ is pseudo-effective, then $f^*D$ is pseudo-effective. \end{lemma} \begin{proof} Replacing $Z$ by the spectrum of its generic point, we may assume that $Z = \Spec(k)$ for a field $k$.\smallskip \par We first show that if $D$ is big or pseudo-effective, then $f^*D$ is also. For bigness, working one term of $D$ at a time, it suffices to consider the case when $\mathbf{k} = \mathbf{Z}$. The statement for bigness now follows from Proposition \ref{prop:volumegenfin}. The statement for pseudo-effectivity follows from taking limits, since the pullback of an effective $\mathbf{Q}$-Cartier divisor is an effective $\mathbf{Q}$-Cartier divisor.\smallskip \par We now show that if $f^*D$ is big, then $D$ is big. If $D\in \Pic_\mathbf{Q}(X)$, since the volume is homogeneous (Proposition \ref{prop:volhomog}), we can clear denominators and reduce to the case $D\in \Pic(X)$, and the statement follows from Proposition \ref{prop:volumegenfin}. Now assume $D\in \Pic_\mathbf{R}(X)$. By the weak version of Chow's lemma in \cite[\href{https://stacks.math.columbia.edu/tag/089J}{Tag 089J}]{stacks-project} and using \cite[\href{https://stacks.math.columbia.edu/tag/0DMN}{Tag 0DMN}]{stacks-project}, there exists a generically finite morphism $\mu\colon X'' \to X'$ from a projective variety over $k$. Since $f^*D$ is $\pi'$-big, $\mu^*f^*D$ is $(\pi'\circ\mu)$-big by the previous paragraph. By Kodaira's lemma (Corollary \ref{lem:kodairachar}) and the openness of the ample cone (Theorem \ref{thm:ampisinterior}), we see that there exists $D_0\in \Pic_\mathbf{Q}(X)$ such that $\mu^*f^*D_0$ is $(\pi'\circ\mu)$-big, and that if $L$ is an element of $\Pic(X)$, then $\mu^*f^*(D_0+\epsilon L)$ is $(\pi'\circ\mu)$-big for all positive rational numbers $\epsilon \ll 1$. By the $\mathbf{Q}$-case already proved, $D_0+\epsilon L$ and $D_0$ are $\pi$-big. Write \[ D = \sum_i a_i D_i, \] where the $D_i$ are elements of $\Pic_{\mathbf{Q}}(X)$ and the $a_i$ are real numbers. By the previous discussion, we may assume that each $D_i$ is $\pi$-big. For rational numbers $a_i'<a_i$ with $a_i-a_i'\ll 1$, let $D'=\sum_i a_i'D_i$. By Kodaira's lemma (Corollary \ref{lem:kodairachar}) and the openness of the ample cone (Theorem \ref{thm:ampisinterior}), $\mu^*f^*D'$ is $(\pi'\circ\mu)$-big, so $D'$ is $\pi$-big by the $\mathbf{Q}$-case, and \[ D=D'+\sum_i (a_i-a_i')D_i \] is $\pi$-big by definition. \end{proof} We can also show that the sum of a $\pi$-big $\mathbf{k}$-invertible sheaf and a $\pi$-nef or $\pi$-pseudo-effective $\mathbf{k}$-invertible sheaf is $\pi$-big. \begin{lemma}\label{lem:bigplusnef} Let $\pi\colon X \to Z$ be a proper surjective morphism between integral algebraic spaces over a scheme $S$. Let $D$ be a $\pi$-big $\mathbf{k}$-invertible sheaf on $X$ for $\mathbf{k} \in \{\mathbf{Z},\mathbf{Q},\mathbf{R}\}$. If $D'$ is a $\pi$-nef (resp.\ $\pi$-pseudo-effective) $\mathbf{k}$-invertible sheaf on $X$, then $D+D'$ is $\pi$-big.
\end{lemma} \begin{proof} Replacing $Z$ by the spectrum of its generic point, we may assume that $Z = \Spec(k)$ for a field $k$. By the weak version of Chow's lemma in \cite[\href{https://stacks.math.columbia.edu/tag/089J}{Tag 089J}]{stacks-project} and using \cite[\href{https://stacks.math.columbia.edu/tag/0DMN}{Tag 0DMN}]{stacks-project}, there exists a generically finite morphism $\mu\colon X' \to X$ from a projective variety over $k$. We then see that $\mu^*D$ is big by Lemma \ref{lem:bigpullback}, and that $\mu^*D'$ is nef by Lemma \ref{lem:nefpullback} (resp.\ pseudo-effective by Lemma \ref{lem:bigpullback}). Now by Kodaira's lemma (Corollary \ref{lem:kodairachar}), we can write $\mu^*D = A+E$ in $\Pic_\mathbf{k}(X')$, where $A$ is ample and $E$ is effective, and hence \[ \mu^*(D + D') = A + \mu^*D' + E. \] If $D'$ is nef, then $A+\mu^*D'$ is ample by Kleiman's criterion (Proposition \ref{lem:AmpleIsPositiveOnNE}), and hence $\mu^*(D+D')$ is big by Kodaira's lemma (Corollary \ref{lem:kodairachar}). If $D'$ is pseudo-effective, then $\mu^*D'$ can be written as a limit of effective $\mathbf{Q}$-Cartier divisors $F_i$ as $i \to \infty$. Writing \[ \mu^*(D+D') = A + (\mu^*D' - F_i) + F_i + E, \] we see that $A + (\mu^*D' - F_i)$ is ample for $i \gg 0$ by Theorem \ref{thm:ampisinterior}, and hence $\mu^*(D+D')$ is big by Kodaira's lemma (Corollary \ref{lem:kodairachar}). Finally, we conclude that $D+D'$ is big by Lemma \ref{lem:bigpullback}. \end{proof} Bigness and pseudo-effectivity are well-behaved under birational transforms. \begin{lemma}\label{lem:ContrPreservesBig} Let $g\colon Y\to Z$ be a projective surjective morphism of integral algebraic spaces over a scheme $S$ with $Y$ normal. Let $f\colon X\to Y$ be a proper birational morphism over $Z$ with $X$ integral and normal. Let $D$ be a $\mathbf{Q}$-Weil divisor that is $\mathbf{Q}$-Cartier on $X$ such that the birational transform $f_*D$ is $\mathbf{Q}$-Cartier. If $D$ is big over $Z$, so is $f_*D$. Assume further that $f$ is an isomorphism in codimension 1. Then, $f_*D$ is big over $Z$ if and only if $D$ is. \end{lemma} \begin{proof} By Definition \ref{def:fbig}, we may take the fiber over the generic point of $Z$ and assume that $Z$ is the spectrum of a field. If $m\in\mathbf{Z}_{>0}$ is sufficiently divisible, then $mD$ and $mf_*D$ are Cartier. For each $E\in \lvert mD\rvert$, we have $f_*E\in \lvert mf_*D\rvert$; and if $f_*E_1=f_*E_2$, then $E_1-E_2$ is an $f$-exceptional divisor linearly equivalent to 0, hence is 0. We thus know that $\dim \lvert mD\rvert\leq \dim \lvert mf_*D\rvert$, and we see that $f_*D$ is big whenever $D$ is. If $f$ is an isomorphism in codimension 1, then for each $E\in \lvert mf_*D\rvert$ we have $f^{-1}_{*}E\in \lvert mD\rvert$. Thus $\dim \lvert mD\rvert= \dim \lvert mf_*D\rvert$, and $f_*D$ is big if and only if $D$ is big. \end{proof} \begin{lemma}\label{lem:ContrPreservesPsEff} Let $g\colon Y\to Z$ be a projective surjective morphism of integral algebraic spaces over a scheme $S$ with $Y$ normal. Let $f\colon X\to Y$ be a projective birational morphism over $Z$ with $X$ integral and normal. Let $D$ be a $\mathbf{Q}$-Weil divisor that is $\mathbf{Q}$-Cartier on $X$ such that the birational transform $f_*D$ is $\mathbf{Q}$-Cartier. If $D$ is pseudo-effective over $Z$, so is $f_*D$. Assume further that $f$ is an isomorphism in codimension 1. Then, $f_*D$ is pseudo-effective over $Z$ if and only if $D$ is.
\end{lemma} \begin{proof} Again, we may assume that $Z$ is the spectrum of a field to avoid writing ``over $Z$.'' If $D$ is pseudo-effective, then for each big $\mathbf{Q}$-Cartier divisor $B$ on $Y$ we have $f_*D+B=f_*(D+f^*B)$. The divisor $D+f^*B$ is big by Lemmas \ref{lem:bigpullback} and \ref{lem:bigplusnef}, since $f^*B$ is big and $D$ is pseudo-effective, so $f_*D+B$ is big by Lemma \ref{lem:ContrPreservesBig}, and hence $f_*D$ is pseudo-effective. Now assume that $f$ is an isomorphism in codimension 1, and that $f_*D$ is pseudo-effective. Let $A$ be an ample $\mathbf{Q}$-Cartier divisor on $X$; we need to prove that $D+A$ is big. Choose an ample divisor $H$ on $Y$ with $A-f^*H$ ample. By Lemma \ref{lem:ContrPreservesBig}, we see that $D+f^*H$ is big, since its pushforward $f_*D+H$ is big ($f_*D$ being pseudo-effective and $H$ ample). Thus $D+A = (D+f^*H) + (A-f^*H)$ is big. \end{proof} \subsection{Linear systems and generic fibers} Relative bigness and relative pseudo-effectivity only depend on the generic fiber, and hence we describe how linear systems behave when passing to the generic fiber of a morphism. \begin{lemma}[cf.\ {\cite[Lemma 3.2.1]{BCHM10}}]\label{lem:LinearSysLocalizes} Let $\pi\colon X\to Z$ be a proper surjective morphism of integral Noetherian schemes, where $X$ is normal and $Z$ is affine. Consider a point $z \in Z$, and set $R \coloneqq \mathcal{O}_{Z,z}$ and $X_R \coloneqq X \times_Z \Spec(R)$. Let $D$ be a $\mathbf{k}$-Weil divisor on $X$ and let $E$ be an effective $\mathbf{k}$-Weil divisor on $X_R$ such that $E \sim_\mathbf{k} D_{\vert X_R}$, where $\mathbf{k} \in \{\mathbf{Z},\mathbf{Q},\mathbf{R}\}$. Then, there exists an effective $\mathbf{k}$-Weil divisor $F$ on $X$ such that $F \sim_\mathbf{k} D$ and $F_{\vert X_R}=E$. \end{lemma} \begin{proof} Let $E=\sum_{i=1}^n a_iE_i$ where $a_i\in\mathbf{k}$ and the $E_i$ are prime divisors on $X_R$. There exist rational functions $f_1,f_2,\ldots,f_m$ on $X_R$ and numbers $b_1,b_2,\ldots,b_m\in\mathbf{k}$ such that \begin{align*} D_{|X_R}=\sum_{i=1}^n a_iE_i&+\sum_{j=1}^m b_j\prdiv_{X_R}(f_j). \intertext{Since the function fields of $X$ and $X_R$ are the same, the functions $f_j$ define principal divisors $\prdiv_{X}(f_j)$ on $X$. For each $i$, we also obtain a prime divisor $\overline{E}_i$ on $X$ as the closure of $E_i$. Let} D'=D-\sum_{i=1}^n a_i\overline{E}_i&-\sum_{j=1}^m b_j\prdiv_{X}(f_j). \end{align*} Then, $D'$ is a $\mathbf{k}$-linear combination of prime divisors that avoid $X_R$. In other words, we have $D'=\sum_k c_kS_k$ where $(S_k)_{\vert X_R}=0$ and $c_k\in\mathbf{k}$ for every $k$. If we can prove the result for $\sgn(c_k)S_k$ for each $k$ (and $\mathbf{k}=\mathbf{Z}$), then we are done. Let $\mathcal{F}=\mathcal{O}_X(\sgn(c_k)S_k)$. By flat base change \cite[Proposition 1.4.15]{EGAIII1}, we have \[H^0(X,\mathcal{F})\otimes_{H^0(Z,\mathcal{O}_Z)}R=H^0(X_R,\mathcal{F}_R) \simeq H^0(X_R,\mathcal{O}_{X_R}).\] Since $H^0(X,\mathcal{F})$ is torsion-free as an $H^0(Z,\mathcal{O}_Z)$-module \cite[Proposition 8.4.5]{EGAInew}, there exists a section $s\in H^0(X,\mathcal{F})$ such that $s$ maps to a nonzero section of $\mathcal{F}_R$. We then have $\prdiv(s) \sim \sgn(c_k)S_k$ while $\prdiv(s)_{\vert X_R}=0$, and hence we are done. \end{proof} \begin{corollary}\label{cor:EffIffEffAtGenFiber} Let $\pi\colon X\to Z$ be a proper surjective morphism of integral Noetherian schemes with $X$ normal and $Z$ affine. Consider a point $z \in Z$, and set $R \coloneqq \mathcal{O}_{Z,z}$ and $X_R \coloneqq X \times_Z \Spec(R)$. Let $D$ be a $\mathbf{k}$-Weil divisor on $X$ where $\mathbf{k} \in \{\mathbf{Z},\mathbf{Q},\mathbf{R}\}$.
Then $\lvert D \rvert_{\mathbf{k}}\neq\emptyset$ if and only if $\lvert D_{\vert X_R} \rvert_{\mathbf{k}}\neq\emptyset$. \end{corollary} \subsection{Relatively big \texorpdfstring{$\mathbf{R}$}{R}-Weil divisors} We now extend the definition of $\pi$-bigness to $\mathbf{Q}$- or $\mathbf{R}$-Weil divisors. \begin{definition}\label{def:BigIsAmpleAddEffective} Let $\pi\colon X\to Z$ be a proper surjective morphism of integral locally Noetherian algebraic spaces over a scheme $S$. Let $X_\eta$ be the generic fiber of $\pi$ and assume that $X_\eta$ is projective over $\kappa(\eta)$. Let $D$ be a $\mathbf{k}$-Weil divisor on $X$ where $\mathbf{k}\in\{\mathbf{Q},\mathbf{R}\}$. We say that $D$ is \emph{$\pi$-big} if $D_{\vert X_\eta} \sim_\mathbf{k} A+E$ for an ample $\mathbf{k}$-invertible sheaf $A$ on $X_\eta$ and an effective $\mathbf{k}$-Weil divisor $E$ on $X_\eta$. \end{definition} If $\pi$ is birational, then clearly every $\mathbf{k}$-Weil divisor is $\pi$-big.\medskip Definition \ref{def:BigIsAmpleAddEffective} is equivalent to Definition \ref{def:fbig} for $\mathbf{k}$-invertible sheaves or $\mathbf{k}$-Cartier divisors. This equivalent characterization is the definition taken in \cite[Definition 2.16]{CU15}. \begin{lemma}\label{lem:BigWeilIsAmplePlusEffective} Let $\pi\colon X\to Z$ be a proper morphism of locally Noetherian schemes, such that $X$ is normal and $X_\eta$ is projective over $\kappa(\eta)$. Let $\mathbf{k}\in\{\mathbf{Q},\mathbf{R}\}$ and let $D$ be a $\mathbf{k}$-Weil divisor on $X$. If $D$ is $\mathbf{k}$-Cartier, $D$ is $\pi$-big in the sense of Definition \ref{def:BigIsAmpleAddEffective} if and only if $D$ is $\pi$-big in the sense of Definition \ref{def:fbig}. If $Z$ is affine and $\pi$ is projective, $D$ is $\pi$-big in the sense of Definition \ref{def:BigIsAmpleAddEffective} if and only if there exists a $\pi$-ample $\mathbf{k}$-Cartier divisor $A$ and an effective $\mathbf{k}$-Weil divisor $E$ with $D\sim_\mathbf{k} A+E$. \end{lemma} \begin{proof} The first statement follows from Corollary \ref{lem:kodairachar}. Now assume that $Z$ is affine and $\pi$ is projective, in which case $X$ is a scheme. The implication $\Leftarrow$ is trivial, so we assume that $D$ is $\pi$-big in the sense of Definition \ref{def:BigIsAmpleAddEffective}. Let $A^\eta$ and $E^\eta$ be divisors on the generic fiber $X_\eta$ as in Definition \ref{def:BigIsAmpleAddEffective}. Let $H$ be a $\pi$-ample $\mathbf{Q}$-Cartier divisor on $X$. After scaling, we may assume that $A^\eta-H_{|X_\eta}$ is ample, so we see that $\lvert (D-H)_{|X_\eta}\rvert_\mathbf{k}\neq \emptyset$. By Corollary \ref{cor:EffIffEffAtGenFiber}, $\lvert D-H\rvert_\mathbf{k}\neq \emptyset$, as desired. \end{proof} \section{Canonical sheaves, canonical divisors, and singularities of pairs}\label{sect:candiv} \subsection{Canonical sheaves and divisors} We define canonical sheaves. \begin{definition}[cf.\ {\citeleft\citen{KMM87}\citemid Remark 0-2-2(2)\citepunct \citen{Cor92}\citemid (16.3.3)\citepunct \citen{Kov12}\citemid \S5\citeright}]\label{def:canonicalsheaf} Let $X$ be an equidimensional and connected locally Noetherian algebraic space over a scheme $S$. Suppose that $X$ has a dualizing complex $\omega_X^\bullet$. The \textsl{canonical sheaf} $\omega_X$ associated to $\omega_X^\bullet$ is the cohomology sheaf of $\omega_X^\bullet$ in lowest cohomological degree. \end{definition} We can also often make sense of $\omega_X$ as a Weil divisor.
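Before making this precise, we pause for a standard example showing what Definition \ref{def:canonicalsheaf} recovers in the classical setting; it is included only for orientation and is not used in the sequel.
\begin{remark}
Suppose $X$ is a connected scheme that is smooth of pure dimension $n$ over a field $k$, with structure morphism $\pi\colon X \to \Spec(k)$ and dualizing complex $\omega_X^\bullet = \pi^!\mathcal{O}_{\Spec(k)}$. Then $\omega_X^\bullet \cong \Omega^n_{X/k}[n]$, so the canonical sheaf of Definition \ref{def:canonicalsheaf} is the familiar sheaf of top differential forms, $\omega_X \cong \Omega^n_{X/k}$. In particular, for $X = \mathbf{P}^n_k$ we have $\omega_X \cong \mathcal{O}_{\mathbf{P}^n_k}(-n-1)$, and a canonical divisor in the sense of the next definition is $K_X = -(n+1)H$ for a hyperplane $H$ on $\mathbf{P}^n_k$.
\end{remark}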
\begin{definition}[cf.\ {\citeleft\citen{KMM87}\citemid Remark 0-2-2(2)\citepunct \citen{Cor92}\citemid (16.3.3)\citepunct \citen{Kov12}\citemid \S5\citeright}] Let $X$ be an equidimensional and connected locally Noetherian algebraic space over a scheme $S$. Suppose that $X$ has a dualizing complex $\omega_X^\bullet$ with associated canonical sheaf $\omega_X$. The sheaf $\omega_X$ is invertible on an open subspace $U \subseteq X$: by \cite[\href{https://stacks.math.columbia.edu/tag/0B8N}{Tag 0B8N}]{stacks-project}, we may take $U$ to be the complement of the closed subspace where the map \[ \omega_X \otimes_{\mathcal{O}_X} \HHom_{\mathcal{O}_X}(\omega_X,\mathcal{O}_X) \longrightarrow \mathcal{O}_X \] fails to be an isomorphism. \par Now suppose that $X$ is normal. If $X$ is not a scheme, then we also suppose that $X$ is integral. Since $X$ is normal, $U$ contains all codimension one points of $X$. A \textsl{canonical divisor} $K_X$ on $X$ is a Weil divisor whose class in $\Cl(X)$ restricts to the image of $\omega_U$ under the map $\Pic(U) \to \Cl(U)$ from \eqref{eq:stacks0EPW}. \end{definition} \subsection{Singularities of pairs} We can now define pairs and singularities of pairs in our setting. \begin{definition}[see \citeleft\citen{Kol13}\citemid Definition 1.5 and (2.20)\citeright] \label{def:rpairs} Let $X$ be a normal locally Noetherian scheme or an integral normal locally Noetherian algebraic space over a scheme $S$. Suppose that $X$ has a dualizing complex $\omega_X^\bullet$ with associated canonical divisor $K_X$. Let $\mathbf{k} \in \{\mathbf{Q},\mathbf{R}\}$. A \textsl{$\mathbf{k}$-pair} $(X,\Delta)$ is the combined data of $X$ together with an effective $\mathbf{k}$-Weil divisor $\Delta$ such that $K_X + \Delta$ is $\mathbf{k}$-Cartier. \end{definition} We will also use the following definition. For algebraic spaces, we take the characterization in \cite[\href{https://stacks.math.columbia.edu/tag/0BIA}{Tag 0BIA}(2)]{stacks-project} as our definition for a simple normal crossings divisor. \begin{definition}[see {\cite[p.\ 2418]{CL12}}] Let $(X,\Delta)$ be a $\mathbf{k}$-pair for $\mathbf{k} \in \{\mathbf{Q},\mathbf{R}\}$. We say that $(X,\Delta)$ is \textsl{log regular} if $X$ is regular and $\Delta$ has simple normal crossings support. \end{definition} For the definition below, we note that \cite{Kol13} works over a regular scheme $B$ throughout (see \cite[Definition 1.5]{Kol13}), but this is not necessary for the following definition to make sense, since we are assuming the existence of a dualizing complex $\omega_X^\bullet$. \begin{definition}[{see \citeleft\citen{KMM87}\citemid Definitions 0-2-6 and 0-2-10\citepunct \citen{Kol13}\citemid Definitions 2.4 and 2.8\citeright}]\label{def:singpairs} Let $(X,\Delta)$ be a $\mathbf{k}$-pair for $\mathbf{k} \in \{\mathbf{Q},\mathbf{R}\}$. For a separated birational morphism $f\colon Y \to X$ of finite type from an integral normal locally Noetherian algebraic space $Y$ over $S$, we can write \[ K_Y + f_*^{-1}\Delta \sim_\mathbf{k} f^*(K_X+\Delta) + \sum_{\text{$f$-exceptional $E$}} a(E,X,\Delta)E \] for some $a(E,X,\Delta) \in \mathbf{k}$, where the $E$ are $f$-exceptional prime Weil divisors and $f_*^{-1}\Delta$ is the birational transform of $\Delta$. \par For each $f$-exceptional prime Weil divisor $E$ on $Y$, the number $a(E,X,\Delta) \in \mathbf{k}$ is called the \textsl{discrepancy of $E$} with respect to $(X,\Delta)$. For nonexceptional prime Weil divisors $D \subseteq X$, we set $a(D,X,\Delta) \coloneqq -\coeff_D(\Delta)$.
If $f'\colon Y' \to X$ is another birational morphism and $E' \subseteq Y'$ is the birational transform of $E$, then $a(E,X,\Delta) = a(E',X,\Delta)$, and hence the discrepancy of $E$ only depends on $E$ and not on $Y$. The \textsl{center} $\cent_X(E)$ of $E$ is the image of $E$ in $X$. \par Now suppose that $\Delta$ has coefficients in $[0,1]$. We say that $(X,\Delta)$ is \begin{center} \begin{tabular}{c@{}c@{}l} \textsl{terminal} & \rdelim\}{4}{*} \multirow{4}{*}{\;if $a(E,X,\Delta)$ is} \ldelim\{{4}{*} & $> 0$ for every exceptional $E$,\\ \textsl{canonical} & & $\ge 0$ for every exceptional $E$,\\ \textsl{klt} & & $> -1$ for every $E$,\\ \textsl{dlt} & & $> -1$ for every $E$ such that $\cent_X(E) \subseteq \nonsnc(X,\Delta)$. \end{tabular} \end{center} Here, the divisors $E$ range over all prime Weil divisors on algebraic spaces $Y$ birational over $X$ as above. \end{definition} We will also state some results using the notion of weakly log terminal singularities from \cite{KMM87}. \begin{definition}[see {\cite[Definition 0-2-10]{KMM87}}]\label{def:wlt} Let $(X,\Delta)$ be a $\mathbf{k}$-pair for $\mathbf{k} \in \{\mathbf{Q},\mathbf{R}\}$ such that $X$ is quasi-excellent of equal characteristic zero and such that $\Delta$ has coefficients in $[0,1]$. We say that $(X,\Delta)$ is \textsl{weakly log terminal} if the following conditions hold: \begin{enumerate}[label=$(\roman*)$] \item There exists a resolution of singularities $f\colon Y \to X$ such that $\Supp(f_*^{-1}\Delta) \cup \Exc(f)$ has normal crossings support (in the sense of \cite[\href{https://stacks.math.columbia.edu/tag/0BSF}{Tag 0BSF}]{stacks-project}) and $a(E,X,\Delta) > -1$ for every $f$-exceptional $E$. \item There exists an $f$-ample invertible sheaf $\mathscr{H}$ whose image in $\Cl(Y)$ is equal to the class of a Weil divisor whose support equals $\Exc(f)$. \end{enumerate} \end{definition} \begin{remark}[see {\citeleft\citen{Sza94}\citemid Divisorial log terminal theorem\citepunct \citen{Fuj17}\citemid Remark 2.3.22\citeright}]\label{rem:dltiswlt} Let $X$ be as in Definition \ref{def:wlt}. Since thrifty log resolutions exist in this setting by \cite[Theorems 1.1.6 and 1.1.13]{Tem18}, we see that dlt pairs are weakly log terminal. \end{remark} \begin{remark}\label{rem:singsetalelocal} Since terminal, canonical, and klt are \'etale-local conditions \cite[(2.14) and Proposition 2.15]{Kol13}, one can also define these notions for algebraic spaces by pulling back to an \'etale cover of $X$. Note that dlt is not an \'etale-local condition \cite[Warning on p.\ 47]{Kol13}. \end{remark} We will use the following lemma. \begin{lemma}\label{lem:kltFacts} Let $(X,\Delta)$ be a $\mathbf{k}$-pair for $\mathbf{k} \in \{\mathbf{Q},\mathbf{R}\}$, and let $\Delta'$ be an effective $\mathbf{k}$-Weil divisor on $X$. Then, we have the following: \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item \label{lem:kltSmaller} If $\Delta'$ is $\mathbf{k}$-Cartier and $(X,\Delta+\Delta')$ is klt, then $(X,\Delta)$ is klt. \item \label{lem:kltConvex} Suppose $K_X+\Delta'$ is $\mathbf{k}$-Cartier. If $(X,\Delta')$ is klt, then $(X,t\Delta+(1-t)\Delta')$ is klt for all $t\in [0,1] \cap \mathbf{k}$. \item \label{lem:kltContinuous} Assume that $(X,\Delta)$ has a log resolution, that $(X,\Delta)$ is klt, and that $\Delta'$ is $\mathbf{k}$-Cartier. Then, for all sufficiently small $\varepsilon\in \mathbf{k}_{>0}$, the pair $(X,\Delta+\varepsilon\Delta')$ is klt. \item \label{lem:kltContinuousConvex} Suppose $K_X+\Delta'$ is $\mathbf{k}$-Cartier.
Assume that $(X,\Delta)$ has a log resolution and that $(X,\Delta)$ is klt. Then, for all sufficiently small $\varepsilon\in \mathbf{k}_{>0}$, the pair $(X,(1-\varepsilon)\Delta+\varepsilon\Delta')$ is klt. \end{enumerate} \end{lemma} \begin{proof} Items $(\ref{lem:kltSmaller})$ and $(\ref{lem:kltConvex})$ follow immediately from the definition of discrepancy. For $(\ref{lem:kltContinuous})$ and $(\ref{lem:kltContinuousConvex})$, it suffices to note that klt-ness is detected by a single log resolution \cite[Corollary 2.13]{Kol13}. \end{proof} \section{Base loci and restricted linear systems}\label{subsection:baseloci} We define base loci and some of their asymptotic invariants, which we use to define restricted linear systems. \begin{definition}[see {\citeleft\citen{KMM87}\citemid p.\ 299\citepunct \citen{CL12}\citemid p.\ 2419\citepunct \citen{McK17}\citemid Definition 2.2\citeright}] Let $X$ be a normal locally Noetherian scheme or an integral normal locally Noetherian algebraic space over a scheme $S$. The \textsl{base locus} of a Weil divisor $D$ is the closed set \begin{align*} \Bs\lvert D \rvert &\coloneqq \bigcap_{D' \in \lvert D \rvert} \Supp(D'). \intertext{We set $\Bs\lvert D \rvert = X$ if $\lvert D \rvert = \emptyset$. The \textsl{stable base locus} of an $\mathbf{R}$-Weil divisor $D$ is the closed set} \SB(D) &\coloneqq \bigcap_{D' \in \lvert D \rvert_\mathbf{R}} \Supp(D'). \end{align*} We set $\SB(D) = X$ if $\lvert D \rvert_\mathbf{R} = \emptyset$. \end{definition} We can now define restricted linear systems. \begin{definition}[see {\citeleft\citen{ELMNP09}\citemid p.\ 612\citepunct \citen{CL12}\citemid p.\ 2420 and Definition 2.23\citeright}]\label{def:restrictedlinearsystem} Let $X$ be an algebraic space over a scheme $S$, and let $T \subseteq X$ be a closed subspace. For an invertible sheaf $\mathscr{L}$ on $X$, we set \[ H^0\bigl(X \vert T,\mathscr{L}\bigr) \coloneqq \im\Bigl( H^0\bigl(X,\mathscr{L}\bigr) \longrightarrow H^0\bigl(T,\mathscr{L}_{\vert T}\bigr)\Bigr), \] which is denoted $\res_T(H^0(X,\mathscr{L}))$ in \cite[Definition 2.23]{CL12}. \par Now suppose $X$ is a normal Noetherian scheme and $D$ is a Cartier divisor intersecting $T$ properly. The \textsl{restricted linear system} $\lvert D \rvert_T$ is the subset of $\lvert D_{\vert T} \rvert$ corresponding to nondegenerate sections in $H^0(X \vert T,\mathcal{O}_X(D))$ under the bijection in Proposition \ref{prop:har29}. The restriction map \[ H^0\bigl(X,\mathcal{O}_X(D)\bigr) \mathrel{\text{\longtwo@rightarrow}} H^0\bigl(T,\mathcal{O}_T(D_{\vert T})\bigr) \] induces a map $\lvert D \rvert \to \lvert D \rvert_T$ if $T$ is integral and $T \not\subseteq \Bs\lvert D \rvert$, since $H^0(X,\mathcal{O}_X^*)$ maps to $H^0(T,\mathcal{O}_T^*)$ and nondegenerate sections of $\mathcal{O}_X(D)$ map to nondegenerate sections of $\mathcal{O}_T(D_{\vert T})$. \end{definition} We now want to define the fixed and stable fixed parts of a linear system. To do so, we need the following result, which shows that the definition of $\SB(D)$ is compatible with the usual definition for $\mathbf{Q}$-Cartier divisors in \cite[Definition 2.1.20]{Laz04a}. \begin{lemma}[see \citeleft\citen{BCHM10}\citemid Lemma 3.5.3\citepunct \citen{CL12}\citemid Lemma 2.3\citepunct \citen{McK17}\citemid Lemma 2.4\citeright]\label{lem:cl23} Let $X$ be a normal locally Noetherian scheme or an integral normal locally Noetherian algebraic space over a scheme $S$. Consider a $\mathbf{Q}$-Weil divisor $D$ on $X$.
Then, we have \[ \SB(D) = \bigcap_{D' \in \lvert D \rvert_\mathbf{Q}} \Supp(D'). \] \end{lemma} \begin{proof} This is immediate from Lemma \ref{lem:RatlIsDense}. \end{proof} Finally, we define fixed and mobile parts of linear systems, together with the asymptotic variant of the fixed part. \begin{definition}[{see \citeleft\citen{CL12}\citemid Definition 2.5\citeright}] Let $X$ be a normal locally Noetherian scheme or an integral normal locally Noetherian algebraic space over a scheme $S$. Consider a Weil divisor $D$ on $X$. The \textsl{fixed part} $\Fix\lvert D \rvert$ of $D$ is the largest effective Weil divisor $F$ on $X$ such that $F \le D'$ for all $D' \in \lvert D \rvert$. We can then write \[ \lvert D \rvert = \bigl\lvert \Mob(D) \bigr\rvert + \Fix\lvert D \rvert, \] where $\Mob(D)$ is the \textsl{mobile part} of $\lvert D \rvert$. If $T \subseteq X$ is a closed subscheme, we use the same definition for the restricted linear system $\lvert D \rvert_T$ to define the fixed part $\Fix\lvert D \rvert_T$. \par Now consider a $\mathbf{Q}$-Weil divisor $D$ on $X$. The \textsl{stable fixed part} of $D$ is \begin{align*} \FFix(D) &\coloneqq \liminf_{k \to \infty} \frac{1}{k} \Fix\lvert kD \rvert,\\ \intertext{which by Lemma \ref{lem:cl23} is the divisorial part of the stable base locus $\SB(D)$. Similarly, we set} \FFix_T(D) &\coloneqq \liminf_{k \to \infty} \frac{1}{k} \Fix\lvert kD \rvert_T. \end{align*} \end{definition} \section{Sets in \texorpdfstring{$\Div_\mathbf{R}(X)$}{Div\_R(X)} and relative divisorial graded rings}\label{subsection:relativedivring} We define some subsets of $\Div_\mathbf{R}(X)$ associated to finite-dimensional subspaces in $\Div_\mathbf{R}(X)$, following \cite[\S2.1]{CL12}. We will restrict to the scheme case in this section. \begin{definition}[cf.\ {\cite[Definition 2.4]{CL12}}]\label{def:cl1224} Let $X$ be a regular locally Noetherian scheme with a dualizing complex $\omega_X^\bullet$. Denote by $K_X$ a canonical divisor defined using $\omega_X^\bullet$. Let $S_1,S_2,\ldots,S_p$ be distinct prime divisors on $X$ such that $(X,\sum_{i=1}^p S_i)$ is log regular. Let \[ V = \sum_{i=1}^p \mathbf{R} \cdot S_i \subseteq \Div_\mathbf{R}(X), \] and let $A$ be a $\mathbf{Q}$-divisor on $X$. We set \begin{align*} \mathcal{L}(V) &\coloneqq \Set[\Big]{B = \sum b_iS_i \in V \given 0 \le b_i \le 1\ \text{for all}\ i},\\ \mathcal{E}_A(V) &\coloneqq \Set[\big]{B \in \mathcal{L}(V) \given \lvert K_X+A+B \rvert_\mathbf{R} \ne \emptyset}. \intertext{Let $S$ be a prime divisor on $X$ different from each $S_i$ such that $(X,S+\sum_{i=1}^p S_i)$ is log regular. We set} \mathcal{B}_A^S(V) &\coloneqq \Set[\big]{B \in \mathcal{L}(V) \given S \not\subseteq \SB(K_X+S+A+B)}. \end{align*} \end{definition} We now define relative divisorial graded rings and establish some basic properties about them, following \cite[\S2.4]{CL12}. \begin{definition}[cf.\ {\citeleft\citen{KMM87}\citemid Definitions 0-3-7 and 0-3-11\citepunct \citen{CL12}\citemid Definition 2.22\citepunct \citen{CL13}\citemid p.\ 620\citeright}]\label{def:cl12222} Let $\pi\colon X \to Z$ be a proper morphism of integral Noetherian schemes, where $X$ is regular and $Z$ is affine. Let $\mathcal{S} \subseteq \Div_\mathbf{Q}(X)$ be a finitely generated monoid. The \textsl{relative divisorial graded ring associated to $\mathcal{S}$} is the $\mathcal{S}$-graded $H^0(Z,\mathcal{O}_Z)$-algebra \[ R\bigl(X/Z;\mathcal{S}\bigr) \coloneqq \bigoplus_{D \in \mathcal{S}} H^0\bigl(X,\mathcal{O}_X\bigl(\lfloor D \rfloor\bigr)\bigr).
\] \par Now suppose that $Z$ has a dualizing complex $\omega_Z^\bullet$, and denote by $K_X$ a canonical divisor defined using $\omega_X^\bullet = \pi^!\omega_Z^\bullet$. If divisors $D_1,D_2,\ldots,D_\ell$ are generators of $\mathcal{S}$ and if $D_i \sim_\mathbf{Q} k_i(K_X+\Delta_i)$ for effective $\mathbf{Q}$-divisors $\Delta_i$ and for $k_i \in \mathbf{Q}_{\ge0}$, the algebra $R(X/Z;\mathcal{S})$ is called the \textsl{relative adjoint ring associated to $\mathcal{S}$}, and the \textsl{relative adjoint ring associated to the sequence $D_1,D_2,\ldots,D_\ell$} is the $\mathbf{N}^\ell$-graded $H^0(Z,\mathcal{O}_Z)$-algebra \[ R\bigl(X/Z;D_1,D_2,\ldots,D_\ell\bigr) \coloneqq \bigoplus_{(m_1,m_2,\ldots,m_\ell) \in \mathbf{N}^\ell} H^0\Biggl(X,\mathcal{O}_X\Biggl(\Biggl\lfloor \sum_{i=1}^\ell m_iD_i \Biggr\rfloor \Biggr)\Biggr). \] Note that there is a natural projection map $R(X/Z;D_1,D_2,\ldots,D_\ell) \to R(X/Z;\mathcal{S})$. The \textsl{support} of $R(X/Z;D_1,D_2,\ldots,D_\ell)$ is \[ \Supp\Bigl(R\bigl(X/Z;D_1,D_2,\ldots,D_\ell\bigr)\Bigr) \coloneqq \biggl( \sum_{i=1}^\ell \mathbf{R}_{\ge0} \cdot D_i \biggr) \cap \Div_\mathbf{R}^\mathrm{eff}(X) \subseteq \Div_\mathbf{R}(X). \] If $\mathcal{C} \subseteq \Div_\mathbf{R}(X)$ is a rational polyhedral cone, then Gordan's lemma \cite[\S1.2, Proposition 1]{Ful93} implies that $\mathcal{S} = \mathcal{C} \cap \Div(X)$ is a finitely generated monoid, and we define the \textsl{adjoint ring associated to $\mathcal{C}$} to be \[ R\bigl(X/Z;\mathcal{C}\bigr) \coloneqq R\bigl(X/Z;\mathcal{S}\bigr). \] \end{definition} \begin{definition}[cf.\ {\cite[Definition 2.23]{CL12}}] Let $\pi\colon X \to Z$ be a proper morphism of integral Noetherian schemes, where $X$ is regular and $Z$ is affine. Let $S$ be a regular prime divisor on $X$ and let $D$ be an effective divisor on $X$. Using Proposition \ref{prop:har29} (see also \cite[\href{https://stacks.math.columbia.edu/tag/01X0}{Tag 01X0}]{stacks-project}), we fix $1_S \in H^0(X,\mathcal{O}_X(S))$ such that $Z(1_S) = S$. Consider the exact sequence \begin{equation}\label{eq:restrictionexactseq} 0 \longrightarrow H^0\bigl(X,\mathcal{O}_X(D-S)\bigr) \longrightarrow H^0\bigl(X,\mathcal{O}_X(D)\bigr) \overset{\rho_S}{\longrightarrow} H^0\bigl(S,\mathcal{O}_S(D)\bigr), \end{equation} which is obtained by twisting the short exact sequence $0 \to \mathcal{O}_X(-S) \to \mathcal{O}_X \to \mathcal{O}_S \to 0$ corresponding to $1_S$ by $\mathcal{O}_X(D)$ and applying global sections. For $\sigma \in H^0(X,\mathcal{O}_X(D))$, we denote by $\sigma_{\vert S} \in H^0(X\vert S,\mathcal{O}_X(D))$ the image of $\sigma$ under $\rho_S$, where $H^0(X\vert S,\mathcal{O}_X(D))$ is the image of $\rho_S$ as defined in Definition \ref{def:restrictedlinearsystem}. \par If $\mathcal{S} \subseteq \Div_\mathbf{Q}(X)$ is a monoid generated by divisors $D_1,D_2,\ldots,D_\ell$, the \textsl{restriction of $R(X/Z;\mathcal{S})$ to $S$} is the $\mathcal{S}$-graded $H^0(Z,\mathcal{O}_Z)$-algebra \begin{align*} \res_S\bigl(R(X/Z;\mathcal{S})\bigr) &\coloneqq \bigoplus_{D \in \mathcal{S}} H^0\bigl(X \vert S,\mathcal{O}_X\bigl(\lfloor D \rfloor\bigr)\bigr), \intertext{and the \textsl{restriction of $R(X/Z;D_1,D_2,\ldots,D_\ell)$ to $S$} is the $\mathbf{N}^\ell$-graded $H^0(Z,\mathcal{O}_Z)$-algebra} \res_S\bigl(R(X/Z;D_1,D_2,\ldots,D_\ell)\bigr) &\coloneqq \bigoplus_{(m_1,m_2,\ldots,m_\ell) \in \mathbf{N}^\ell} H^0\Biggl(X\vert S,\mathcal{O}_X\Biggl(\Biggl\lfloor \sum_{i=1}^\ell m_iD_i \Biggr\rfloor \Biggr)\Biggr).
\end{align*} \end{definition} We give two lemmas about finite generation of relative divisorial graded rings. \begin{lemma}[cf.\ {\cite[Corollary 2.26]{CL12}}]\label{lem:cl12226} Let $\pi\colon X \to Z$ be a proper morphism of integral Noetherian schemes, where $X$ is regular and $Z$ is affine. Let $f\colon Y \to X$ be a proper birational morphism, where $Y$ is regular. Let $D_1,D_2,\ldots,D_\ell \in \Div_\mathbf{Q}(X)$, let $D_1',D_2',\ldots,D_\ell' \in \Div_\mathbf{Q}(Y)$, and assume there exist positive rational numbers $r_i$ and $f$-exceptional $\mathbf{Q}$-divisors $E_i \ge 0$ such that \[ D_i' \sim_\mathbf{Q} r_if^*D_i + E_i \] for every $i$. Then, the ring \begin{align*} R &= R\bigl(X/Z;D_1,D_2,\ldots,D_\ell\bigr) \intertext{is finitely generated over $H^0(Z,\mathcal{O}_Z)$ if and only if the ring} R' &= R\bigl(Y/Z;D_1',D_2',\ldots,D_\ell'\bigr) \end{align*} is finitely generated over $H^0(Z,\mathcal{O}_Z)$. Similarly, suppose $S$ is a regular prime divisor on $X$, and let $T = f_*^{-1}S$. Then, the ring $\res_S(R)$ is finitely generated over $H^0(Z,\mathcal{O}_Z)$ if and only if the ring $\res_T(R')$ is finitely generated over $H^0(Z,\mathcal{O}_Z)$. \end{lemma} \begin{proof} The proof of \cite[Corollary 2.26]{CL12} works after replacing absolute divisorial rings with relative divisorial graded rings. For completeness, we write down the proof below. \par Let $k$ be a positive integer such that for all $i$, the divisors $kD_i$, $kD_i'$, $kr_iD_i$, and $kE_i$ are all integral and \[ kD_i' \sim kr_if^*D_i + kE_i. \] Then, the rings \begin{align*} R\bigl(X/Z;kD_1,kD_2,\ldots,kD_\ell\bigr) &\qquad\text{and}\qquad R\bigl(Y/Z;kD_1',kD_2',\ldots,kD_\ell'\bigr) \intertext{are Veronese subrings of finite index in $R$ and $R'$, respectively, and both rings are isomorphic to \[ R\bigl(Y/Z;kr_1f^*D_1 + kE_1,kr_2f^*D_2 + kE_2,\ldots,kr_\ell f^*D_\ell + kE_\ell\bigr). \] Similarly, the rings} \res_S\Bigl(R\bigl(X/Z;kD_1,kD_2,\ldots,kD_\ell\bigr)\Bigr) &\qquad\text{and}\qquad \res_T\Bigl(R\bigl(Y/Z;kD_1',kD_2',\ldots,kD_\ell'\bigr)\Bigr) \end{align*} are Veronese subrings of finite index in $\res_S(R)$ and $\res_T(R')$, respectively, and both rings are isomorphic to \[ \res_T\Bigl(R\bigl(Y/Z;kr_1f^*D_1 + kE_1,kr_2f^*D_2 + kE_2,\ldots,kr_\ell f^*D_\ell + kE_\ell\bigr)\Bigr). \] In either case, the conclusion follows from \cite[Propositions 1.2.2 and 1.2.4]{ADHL15}. \end{proof} \begin{lemma}[cf.\ {\cite[Lemma 2.27]{CL12}}]\label{lem:cl12227} Let $\pi\colon X \to Z$ be a proper morphism of integral Noetherian schemes, where $X$ is regular and $Z$ is affine. Let $D_1,D_2,\ldots,D_\ell \in \Div_\mathbf{Q}(X)$, and set \[ \mathcal{C} = \sum_{i=1}^\ell \mathbf{R}_{\ge0} \cdot D_i \subseteq \Div_\mathbf{R}(X). \] Then, we have the following: \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item\label{lem:cl12227i} If $R(X/Z;\mathcal{C})$ is finitely generated as an $H^0(Z,\mathcal{O}_Z)$-algebra, then $R(X/Z;D_1,D_2,\ldots,D_\ell)$ is finitely generated as an $H^0(Z,\mathcal{O}_Z)$-algebra. \item\label{lem:cl12227ii} Let $S$ be a regular prime divisor on $X$. If $\res_S(R(X/Z;\mathcal{C}))$ is finitely generated as an $H^0(Z,\mathcal{O}_Z)$-algebra, then $\res_S(R(X/Z;D_1,D_2,\ldots,D_\ell))$ is finitely generated as an $H^0(Z,\mathcal{O}_Z)$-algebra. \end{enumerate} \end{lemma} \begin{proof} The proof of \cite[Lemma 2.27]{CL12} works after replacing absolute divisorial rings with relative divisorial graded rings. For completeness, we write down the proof below.
\par Let $k$ be a positive integer such that $D_i' = k D_i \in \Div(X)$ for all $i$. The monoid \[ \mathcal{S} = \sum_{i=1}^\ell \mathbf{N} \cdot D_i' \] is a submonoid of $\mathcal{C} \cap \Div(X)$. If $R(X/Z;\mathcal{C})$ (resp.\ $\res_S(R(X/Z;\mathcal{C}))$) is finitely generated, then $R(X/Z;\mathcal{S})$ (resp.\ $\res_S(R(X/Z;\mathcal{S}))$) is also finitely generated by \cite[Proposition 1.2.2]{ADHL15}. Then, $R(X/Z;D_1',D_2',\ldots,D_\ell')$ (resp.\ $\res_S(R(X/Z;D_1',D_2',\ldots,D_\ell'))$) is finitely generated by \cite[Proposition 1.2.6]{ADHL15}, which implies that $R(X/Z;D_1,D_2,\ldots,D_\ell)$ (resp.\ $\res_S(R(X/Z;D_1,D_2,\ldots,D_\ell))$) is finitely generated by \cite[Proposition 1.2.4]{ADHL15}. \end{proof} \section{Asymptotic order of vanishing}\label{section:asymporder} Following \cite[\S3 and \S8]{CL13}, we define the asymptotic order of vanishing in our setting. We will not need this in the proof of our analogue of \cite[Theorem B]{CL12}, since we are able to derive it from the result in \cite{CL12}. On the other hand, we will need to use the asymptotic order of vanishing when running the minimal model program, as in \cite{CL13}. \par We will always work over an affine base and work with absolute linear systems as in Definition \ref{def:linearsystem}. \begin{definition}[see {\citeleft\citen{ELMNP06}\citemid p.\ 1713\citepunct \citen{CL13}\citemid p.\ 620\citeright}] \label{def:GeomVal} Let $X$ be an integral normal separated scheme. Let $v$ be a discrete valuation on the function field $K(X)$ of $X$ given by a morphism $\Spec(R) \to X$ from the spectrum of the valuation ring $R$ of $v$, which is uniquely determined by $v$ up to isomorphism. The \textsl{center} of $v$ is the image of the closed point of $\Spec(R)$. We say $v$ is a \textsl{geometric valuation on $X$} if $v$ is given by the order of vanishing at the generic point $\eta$ of a prime divisor $\Gamma$ on some birational model $f\colon Y \to X$ of $X$. In this case, the valuation is given by the composition $\Spec(\mathcal{O}_{Y,\eta}) \to Y \to X$. \end{definition} We now define the asymptotic order of vanishing for $\mathbf{R}$-Weil divisors $D$ such that $\lvert D\rvert_\mathbf{R}\neq \emptyset$. When $D$ is a big $\mathbf{R}$-Cartier divisor and $Z$ is a point, this notion coincides with the invariant $v(\lVert D \rVert)$ defined in \cite[Definition 2.2]{ELMNP06}, and when $v$ is furthermore a geometric valuation given by a prime divisor $\Gamma$, this notion coincides with the invariant $\sigma_\Gamma(D)$ from \cite[Chapter III, Definition 1.1]{Nak04}. See also Remark \ref{rem:OIsNotSigma}. \begin{definition}[see {\cite[p.\ 632]{CL13}; cf.\ \citeleft\citen{ELMNP06}\citemid Lemma 3.3\citepunct \citen{CDB13}\citemid Remark 2.16\citeright}] \label{def:AsymptoticOrder} Let $\pi\colon X \to Z$ be a projective morphism of integral Noetherian schemes, where $Z$ is affine. Let $D$ be an $\mathbf{R}$-Weil divisor on $X$ such that $\lvert D\rvert_\mathbf{R}\neq \emptyset$. For each discrete valuation $v$ on $K(X)$, the \textsl{asymptotic order of vanishing} of $D$ is \[ o_{v}(D)\coloneqq\inf_{E \in \lvert D \rvert_\mathbf{R}} \bigl\{v(E)\bigr\}. \] For every positive real number $a$, we have $o_v(aD)=a\cdot o_v(D)$. For every pair of elements $D,D' \in \Div_\mathbf{R}(X)$, we have $o_v(D+D')\leq o_v(D)+o_v(D')$ \cite[Proposition 2.4]{ELMNP06}. When $v$ comes from a prime divisor $S$, we write $o_S$ for $o_v$. \end{definition} \begin{remark}\label{rem:OIsNotSigma} Let $D$ be an $\mathbf{R}$-Weil divisor on a complex projective variety $X$.
If $\lvert D \rvert_{\mathbf{R}}\neq\emptyset$, then $D$ is pseudo-effective. However, the asymptotic order of vanishing $o_v(D)$ and the invariant $v(\lVert D \rVert)$ defined in \cite{ELMNP06} are not necessarily equal. See \cite[Remark 2.16]{CDB13}. \end{remark} \begingroup \makeatletter \renewcommand{\@secnumfont}{\bfseries} \part{Bertini theorems and fundamental theorems of the MMP} \label{part:bertiniandfund} \makeatother \endgroup In this part, we prove our new relative versions of Bertini theorems for schemes. These theorems will become necessary later to perturb klt pairs without having global Bertini theorems available, as one has for varieties over a field. We also show the fundamental theorems of the minimal model program (the Basepoint-free, Contraction, Rationality, and Cone theorems) for algebraic spaces, adapting the strategy in \cite{KMM87} for complex varieties.\bigskip \section{Bertini theorems}\label{sect:bertini} As in the mixed characteristic case considered in \cite{BMPSTWW}, we will need Bertini theorems that work for schemes that are of finite type over a Noetherian local domain containing $\mathbf{Q}$. \begin{theorem}[cf.\ {\cite[Theorem 2.15]{BMPSTWW}}]\label{thm:bertini} Let $(R,\mathfrak{m},k)$ be a Noetherian local domain containing $\mathbf{Q}$. Fix an integer $N \ge 1$. Let $f\colon X \to \mathbf{P}^N_R$ be a separated morphism of finite type from a regular Noetherian scheme $X$. Assume that every closed point of $X$ lies over the unique closed point of $\Spec(R)$. Let $T_0,T_1,\ldots,T_N$ be a basis of $H^0(\mathbf{P}^N_R,\mathcal{O}(1))$ as a free $R$-module. Then, there exists a nonempty Zariski open subset $W\subseteq \mathbf{A}^{N+1}_k$ with the following property: For all $a_0,a_1,\ldots,a_N\in R$, if \[ (\bar{a}_0,\bar{a}_1,\ldots,\bar{a}_N)\in W(k), \] where $\bar{a}_i \in k$ denotes the image of $a_i$, then the section \[ h = a_0T_0+a_1T_1+\cdots+a_NT_N \in H^0\bigl(\mathbf{P}^N_R,\mathcal{O}(1)\bigr) \] is such that $f^{-1}(V(h))$ is regular. \end{theorem} \begin{proof} Denote by $f_s\colon X_s \to \mathbf{P}^N_k$ the base change of $f$ along the closed embedding $\Spec(k) \hookrightarrow \Spec(R)$. Choose a stratification $\{U_j\}_{j \in J}$ of $X_s$ by locally closed subschemes such that each $U_j$ is connected and regular. By Jouanolou's Bertini theorem \cite[Theorem 6.10(2)]{Jou83}, since $k$ is of characteristic zero, there exists a Zariski open subset $W \subseteq \mathbf{A}^{N+1}_k$ such that for all $(\bar{a}_0,\bar{a}_1,\ldots,\bar{a}_N) \in W(k)$, the section \[ \bar{h} = \bar{a}_0T_0+\bar{a}_1T_1+\cdots+ \bar{a}_NT_N \in H^0\bigl(\mathbf{P}^N_k,\mathcal{O}(1)\bigr) \] is such that $f_s^{-1}(V(\bar{h})) \cap U_j$ is regular for all $j$. \par We claim that this choice of $W$ satisfies the conclusion of the theorem. Since the regular locus is stable under generization, it suffices to show that $f^{-1}(V(h))$ is regular at every closed point $x \in f^{-1}(V(h))$. Let $0 \ne g \in \mathcal{O}_{X,x}$ define $f^{-1}(V(h))$ at such a closed point $x$. By assumption, the image of $x$ in $\Spec(R)$ is $\mathfrak{m}$, and hence there exists a member $U_j$ of our stratification of $X_s$ containing $x$. We now consider the image of $g$ under the composition \[ \mathcal{O}_{X,x}/\mathfrak{m}_x^2 \longrightarrow \mathcal{O}_{X_s,x}/\mathfrak{m}_x^2 \longrightarrow \mathcal{O}_{U_j,x}/\mathfrak{m}_x^2.
\] By \cite[Chapitre 0, Proposition 17.1.7]{EGAIV1}, since $U_j$ and $f_s^{-1}(V(\bar{h})) \cap U_j$ are regular, we know that the image of $g$ in $\mathcal{O}_{U_j,x}/\mathfrak{m}_x^2$ is nonzero. Thus, the image of $g$ in $\mathcal{O}_{X,x}/\mathfrak{m}_x^2$ is also nonzero. Applying \cite[Chapitre 0, Proposition 17.1.7]{EGAIV1} again, we therefore see that $f^{-1}(V(h))$ is regular at $x$. \end{proof} \begin{remark}[cf.\ {\cite[Remark 2.16]{BMPSTWW}}]\label{rem:bertinisnc} Let $f\colon X \to \Spec(R)$ be a separated morphism of finite type that factors through $\mathbf{P}^N_R$ for some $N \ge 1$ and maps closed points of $X$ to the unique closed point of $\Spec(R)$, and let $B$ be an effective divisor on $X$ with simple normal crossings. Applying Theorem \ref{thm:bertini} to $X$ and the finitely many strata of $B$, we obtain a divisor $H = g^{-1}(V(h))$ such that $(X,H+B)$ and $(H,B \cap H)$ are log regular, where $g\colon X\to \mathbf{P}^N_R$ is a factorization of $f$. We may also require $H$ to avoid finitely many given points, for example the generic points of the components of $B$. We will use this version of Bertini's theorem when working with linear systems associated to $f$-generated Cartier divisors. \end{remark} When $X$ is proper over a non-local base, we can still find semi-ample regular divisors after passing to an affine open cover of the base. Below, a scheme is \textsl{J-2} if it admits an open affine covering $X = \bigcup_i \Spec(R_i)$ such that every $R_i$ is J-2 in the sense of Definition \ref{def:excellent}$(\ref{def:excellentj2})$ (see \cite[\href{https://stacks.math.columbia.edu/tag/07R3}{Tag 07R3} and \href{https://stacks.math.columbia.edu/tag/07R4}{Tag 07R4}]{stacks-project}). \begin{corollary}\label{cor:bertinionopencover} Let $R$ be a Noetherian domain containing $\mathbf{Q}$. Fix an integer $N \ge 1$. Let \[ \Set[\big]{f_i\colon X_i \to \mathbf{P}^N_R}_{i} \] be a finite collection of closed separated morphisms of finite type from regular Noetherian schemes $X_i$ that are J-2. Let $\Spec(R) = \bigcup_k V_k$ be a finite affine open cover of $\Spec(R)$. Then, there exists a finite affine open cover \[ \Spec(R) = \bigcup_{j} U_j \] refining $\Spec(R) = \bigcup_k V_k$, such that for each $j$, there exists a section $h_j \in H^0(\mathbf{P}^N_R,\mathcal{O}(1))$ whose preimage $f_i^{-1}(V(h_j))$ is regular along the preimage of $U_j$ in $X_i$. \end{corollary} \begin{proof} For each prime ideal $\mathfrak{p} \subseteq R$, we can construct sections $h_\mathfrak{p} \in H^0(\mathbf{P}^N_{R_\mathfrak{p}},\mathcal{O}(1))$ such that the preimage of $V(h_\mathfrak{p})$ in $X_{i} \otimes_R R_\mathfrak{p}$ is regular for every $i$ by Theorem \ref{thm:bertini}. Since $R$ is a domain, we can lift the sections $h_\mathfrak{p}$ to sections $\tilde{h}_\mathfrak{p} \in H^0(\mathbf{P}^N_R,\mathcal{O}(1))$ by clearing denominators. For each $\mathfrak{p}$ and $i$, denote by $\Sing(f_i^{-1}(V(\tilde{h}_\mathfrak{p})))$ the singular locus of $f_i^{-1}(V(\tilde{h}_\mathfrak{p}))$, which is closed by the J-2 condition.
Then, denoting by $\pi_i\colon X_i \to \Spec(R)$ the composition of $f_i$ with the projection morphism $\mathbf{P}^N_R \to \Spec(R)$, we have \[ \mathfrak{p} \in \Spec(R) - \bigcup_i \pi_i\Bigl( \Sing\Bigl(f_i^{-1}\bigl(V(\tilde{h}_\mathfrak{p})\bigr)\Bigr) \Bigr) \] since $f_i^{-1}(V(\tilde{h}_\mathfrak{p}))$ is regular along the preimage of $\mathfrak{p}$ by construction, and hence \[ \Spec(R) = \bigcup_{\mathfrak{p} \in \Spec(R)} \Biggl(\Spec(R) - \bigcup_i \pi_i\Bigl( \Sing\Bigl(f_i^{-1}\bigl(V(\tilde{h}_\mathfrak{p})\bigr)\Bigr) \Bigr)\Biggr) \] is an open cover. For each $\mathfrak{p}$, the corresponding member of this open cover contains an affine open $U_\mathfrak{p}$ such that $\mathfrak{p} \in U_\mathfrak{p} \subseteq V_k$ for some $k$, and since $\Spec(R)$ is quasi-compact, there is a finite subset $\{U_{\mathfrak{p}_j}\}\subseteq\{U_\mathfrak{p}\}$ that forms an affine open cover of $\Spec(R)$. Setting $U_j \coloneqq U_{\mathfrak{p}_j}$ and $h_j \coloneqq \tilde{h}_{\mathfrak{p}_j}$, we are done. \end{proof} Corollary \ref{cor:bertinionopencover} allows us to perturb klt pairs up to replacing the base by an affine open cover. \begin{corollary}\label{cor:BertiniKLTOpenCover} Let $\pi\colon X \to Z$ be a proper morphism of excellent locally Noetherian schemes of equal characteristic zero. Suppose that $X$ is integral and normal, and that $Z$ has a dualizing complex $\omega_Z^\bullet$. Denote by $K_X$ a canonical divisor on $X$ defined using $\omega_X^\bullet = \pi^!\omega_Z^\bullet$. Let $(X,\Delta)$ be a klt $\mathbf{k}$-pair for $\mathbf{k} \in \{\mathbf{Q},\mathbf{R}\}$. Let $A$ be a $\pi$-semi-ample $\mathbf{k}$-Cartier divisor on $X$. Then, there exists an open covering $Z=\bigcup_a V_a$ and \[ A_a\in \bigl\lvert A_{|\pi^{-1}(V_a)}\bigr\rvert_{\mathbf{k}} \] such that $(\pi^{-1}(V_a),\Delta_{|\pi^{-1}(V_a)}+A_a)$ is klt. \end{corollary} \begin{proof} The $\pi$-semi-ample $\mathbf{k}$-Cartier divisor $A$ is a $\mathbf{k}_{\geq 0}$-linear combination of $\pi$-semi-ample Cartier divisors on $X$, so it suffices to treat the case $A=rH$ where $r\in\mathbf{k}$, $0<r<1$, and $H$ is $\pi$-generated. We may assume that $Z=\Spec(R)$ is affine and integral. Let $f\colon Y\to X$ be a log resolution of $(X,\Delta)$, which exists by \cite[Theorem 2.3.6]{Tem08}. Write \[ K_Y+\sum_E a(E)\,E\sim_{\mathbf{k}}f^*(K_X+\Delta), \] where $a(E)\coloneqq -a(E,X,\Delta)$ is the negative of the discrepancy. The divisor $\Delta_Y\coloneqq \sum_E a(E)E$ has simple normal crossings support and coefficients less than $1$ since $(X,\Delta)$ is klt. Since $H$ is $\pi$-generated, it defines a morphism $h\colon X\to \mathbf{P}^N_R$. Applying Corollary \ref{cor:bertinionopencover} and passing to an open cover of $Z$ if necessary, we can find $H'\in \lvert H \rvert$ such that $f^*H'$ is reduced, does not share a component with $\Delta_Y$, and is such that $(Y,\Delta_Y+f^*H')$ is log regular. We have $A'\coloneqq rH'\in\lvert A\rvert_{\mathbf{k}}$, $f_*(\Delta_Y+rf^*H')=\Delta+A'$, and \[ K_Y+\Delta_Y+rf^*H'\sim_{\mathbf{k}}f^*(K_X+\Delta+A'), \] so $a(E,X,\Delta+A')=a(E,Y,\Delta_Y+rf^*H')$ for all divisors $E$ over $X$ (cf.\ \cite[Lemma 2.30]{KM98}). Since $r<1$, the coefficients of $\Delta_Y+rf^*H'$ are less than $1$. Moreover, since $(Y,\Delta_Y+rf^*H')$ is log regular, we see that $(Y,\Delta_Y+rf^*H')$ is klt by \cite[Corollary 2.11]{Kol13}. Thus $(X,\Delta+A')$ is klt, as desired. \end{proof} When $X$ is projective over an affine base, we can find ample divisors avoiding finitely many points in $X$, even without passing to an affine open cover of the base.
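Before stating the lemma announced above, we record a toy example showing why a multiple $nA$ with $n > 1$ must be allowed in general; the example assumes the simplest possible setting $X = \mathbf{P}^1_k$ over the finite field $k = \mathbf{F}_2$ with $Z = \Spec(k)$, and is not used elsewhere.
\begin{remark}
Let $k = \mathbf{F}_2$, let $X = \mathbf{P}^1_k$ with homogeneous coordinates $x,y$, and let $A$ be a $k$-rational hyperplane, so that $\mathcal{O}_X(A) \cong \mathcal{O}_{\mathbf{P}^1_k}(1)$. Take the three $k$-rational points $x_1 = [0:1]$, $x_2 = [1:0]$, and $x_3 = [1:1]$. The nonzero sections of $\mathcal{O}_{\mathbf{P}^1_k}(1)$ are the linear forms $x$, $y$, and $x+y$, and each of them vanishes at one of the $x_i$, so no divisor in $\lvert A \rvert$ avoids all three points. On the other hand, the form $x^2+xy+y^2 \in H^0(\mathbf{P}^1_k,\mathcal{O}(2))$ takes the value $1$ at each $x_i$, so its zero divisor $A' \in \lvert 2A \rvert$ satisfies $\mult_{x_i}(A') = 0$ for all $i$.
\end{remark}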
\begin{lemma}\label{lem:AmpleAvoid} Let $\pi\colon X \to Z$ be a projective morphism of integral Noetherian schemes, where $Z$ is affine. Let $\mathbf{k} \in \{\mathbf{Z},\mathbf{Q},\mathbf{R}\}$. For a $\pi$-ample $\mathbf{k}$-Cartier divisor $A$ on $X$ and finitely many points $x_i\in X$, there exist a positive integer $n$ and a divisor $A'\in \lvert nA\rvert_{\mathbf{k}}$ with $\mult_{x_i}(A')=0$ for all $i$. \end{lemma} \begin{proof} Since $\pi$-ample $\mathbf{k}$-Cartier divisors are $\mathbf{k}_{>0}$-linear combinations of $\pi$-ample Cartier divisors, we may assume that $\mathbf{k}=\mathbf{Z}$. The statement now follows by the graded version of prime avoidance \cite[Chapter III, \S1, n\textsuperscript{o} 4, Proposition 8]{BouCA}. \end{proof} \section{Basepoint-free, Contraction, Rationality, and Cone theorems}\label{sect:bpfcontrratcone} In this section, we prove that the Basepoint-free, Contraction, Rationality, and Cone theorems hold for projective morphisms of quasi-excellent algebraic spaces of equal characteristic zero with dualizing complexes, adapting the proofs in \cite{KMM87}. Later, in \S\ref{sect:fundrevisit}, we will prove dual versions of these statements in the vein of \cite{Kaw11} using our finite generation result (Theorem \ref{thm:cl12a}), as is done for varieties in \cite{CL13}. We have stated these results using the notion of weakly log terminal pairs (see Definition \ref{def:wlt}). Dlt pairs are weakly log terminal by Remark \ref{rem:dltiswlt}. \subsection{Basepoint-free theorem} \par We start with the Basepoint-free theorem. A version of the statement for schemes below appeared in \cite[Proposition 2.42]{BMPSTWW}. The statement for algebraic spaces when $Z = \Spec(k)$ for a field $k$ appears in \cite[Basepoint-free theorem 1.4.4]{Kol91}. \par We have included the statement for $\mathbf{R}$-pairs to illustrate that for schemes that are not necessarily of finite type over a field, one cannot simply perturb boundary divisors directly at the beginning, because we do not have Bertini theorems available. If $\pi$ is projective, one can instead replace $Z$ by an affine cover and use an appropriate version of Corollary \ref{cor:BertiniKLTOpenCover}. \begin{theorem}[Basepoint-free theorem; cf.\ {\cite[Theorem 3-1-1 and Remark 3-1-2(1)]{KMM87}}]\label{thm:bpf} Let $\pi\colon X \to Z$ be a proper surjective morphism of integral quasi-excellent Noetherian algebraic spaces of equal characteristic zero over a scheme $S$. Suppose that $X$ is normal and that $Z$ admits a dualizing complex $\omega_Z^\bullet$. Denote by $K_X$ a canonical divisor on $X$ defined using $\omega_X^\bullet = \pi^!\omega_Z^\bullet$. \par Let $(X,\Delta)$ be an $\mathbf{R}$-pair, and let $H \in \Pic(X)$ be $\pi$-nef. Suppose one of the following holds: \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item\label{thm:bpfdlt} $(X,\Delta)$ is weakly log terminal and $aH - (K_X+\Delta)$ is $\pi$-ample for some $a \in \mathbf{Z}_{>0}$. \item\label{thm:bpfklt} $(X,\Delta)$ is klt and $aH - (K_X+\Delta)$ is $\pi$-big and $\pi$-nef for some $a \in \mathbf{Z}_{>0}$. \end{enumerate} Then, there exists $m_0 \in \mathbf{Z}_{>0}$ such that $mH$ is $\pi$-generated for all $m \ge m_0$. \end{theorem} \begin{proof} After replacing $\pi$ by its Stein factorization \cite[\href{https://stacks.math.columbia.edu/tag/0A1B}{Tag 0A1B}]{stacks-project}, we may assume that $Z$ is normal and that $\pi$ has geometrically connected fibers \cite[\href{https://stacks.math.columbia.edu/tag/0AYI}{Tag 0AYI}]{stacks-project}.
For $(\ref{thm:bpfklt})$, this does not change the $\pi$-bigness or the $\pi$-nefness of $aH - (K_X+\Delta)$ since it changes volumes and intersections on $\kappa(\eta)$ by the factor $[H^0(X_\eta,\mathcal{O}_{X_\eta}) : \kappa(\eta)]$.\smallskip \par We claim we may replace $Z$ by a scheme $Z'$ \'etale over $Z$. Let $Z' \to Z$ be an \'etale morphism where $Z'$ is a quasi-compact scheme, and consider the associated Cartesian diagram \[ \begin{tikzcd} X' \rar{f'}\dar[swap]{\pi'} & X\dar{\pi}\\ Z' \rar{f} & Z \end{tikzcd} \] By flat base change \cite[\href{https://stacks.math.columbia.edu/tag/073K}{Tag 073K}]{stacks-project}, it suffices to show that $m\,f^{\prime*}H$ is $\pi'$-generated for all $m \gg 0$. Note that the assumptions on $(X,\Delta)$ are inherited by $(X',f^{\prime*}\Delta)$ by Remark \ref{rem:singsetalelocal}. Moreover, we have \[ f^{\prime*}\bigl(aH - (K_X+\Delta)\bigr) = a\,f^{\prime*}H - (K_{X'}+ f^{\prime*}\Delta), \] where $f^{\prime*}\Delta$ is the \'etale pullback of $\Delta$, since the formation of canonical divisors is compatible with \'etale base change (see the proof of Lemma \ref{lem:dualizingcomplexpullback}). This $\mathbf{R}$-invertible sheaf is $\pi'$-nef by Lemma \ref{lem:nefbasechange}$(\ref{lem:nefbasechangepullback})$ and is $\pi'$-big by flat base change \cite[\href{https://stacks.math.columbia.edu/tag/073K}{Tag 073K}]{stacks-project}. We can then replace $\pi$ by $\pi'$ to assume that $X$ and $Z$ are schemes. To assume that $X$ is integral, we work one connected component of $X$ at a time and replace $Z$ by the scheme-theoretic image of that component.\smallskip \par We now prove the theorem for schemes. Let $f_1\colon Y_1 \to X$ be a projective log resolution of $(X,\Delta)$, where for $(\ref{thm:bpfdlt})$ we assume the hypotheses in Definition \ref{def:wlt}, and for $(\ref{thm:bpfklt})$ we first apply Chow's lemma \cite[Th\'eor\`eme 5.6.1]{EGAII} and then resolve using \cite[Theorem 1.1.6]{Tem18} to assume that $Y_1$ is projective over $Z$. Then, we know that $f_1^*(aH-(K_X+\Delta))$ is $(\pi\circ f_1)$-big and $(\pi\circ f_1)$-nef by Lemmas \ref{lem:nefpullback} and \ref{lem:bigpullback}. By Kodaira's lemma (Corollary \ref{lem:kodairachar}), the $\mathbf{R}$-divisor \[ f_1^*\bigl(aH-(K_X+\Delta)\bigr) + \delta\,f_{1*}^{-1}\Delta - \sum_i \delta_{1i} G_i \] is $(\pi\circ f_1)$-ample for some $\delta,\delta_{1i} \in \mathbf{R}$ with $0 < \delta \ll \min_{\delta_{1i} \ne 0}\{\delta_{1i}\} \ll 1$, where $\{G_i\}$ is a family of effective Cartier divisors on $Y_1$ with normal crossings, $\Supp(\sum_i \delta_{1i} G_i)$ is $f_1$-exceptional, and \[ K_{Y_1} + \delta\,f_{1*}^{-1}\Delta \sim_\mathbf{R} f_1^*(K_X+\Delta) + \sum_i b_iG_i \] for $b_i \in \mathbf{R}$ with $b_i > -1$. Let $C \coloneqq \sum_i (b_i-\delta_{1i})G_i$. After perturbing the $\delta_{1i}$ using Theorem \ref{thm:ampisinterior}, we may assume that $C$ is a $\mathbf{Q}$-divisor.
Letting $\eta \in \lvert Z \rvert$ be the generic point, we can apply the Non-vanishing theorem \cite[Theorem 2-1-1]{KMM87} to a connected component of the geometric generic fiber $Y_{1\bar{\eta}}$ and the pullbacks of $f_1^*H$ and $C$ to $Y_{1\bar{\eta}}$ to see that \[ \bigl( (\pi \circ f_1)_*\mathcal{O}_{Y_1}\bigl(m\,f_1^*H+\lceil C \rceil\bigr)\bigr)_\eta \cong H^0\bigl(Y_{1\eta},\mathcal{O}_{Y_{1\eta}}\bigl(m\,f_1^*H_\eta+\lceil C_\eta \rceil\bigr)\bigr) \ne 0 \] for $m \gg 0$ by flat base change \cite[Proposition 1.4.15]{EGAIII1}, since \begin{align*} a\,f_1^*H+C-K_{Y_1} &\sim_\mathbf{R} a\,f_1^*H+C-\biggl(f_1^*(K_X+\Delta) + \sum_i b_iG_i - \delta\,f_{1*}^{-1}\Delta\biggr)\\ &\sim_\mathbf{R} f_1^*\bigl(aH-(K_X+\Delta)\bigr) + \delta\,f_{1*}^{-1}\Delta - \sum_i \delta_{1i}G_i \end{align*} is $(\pi\circ f_1)$-ample. In particular, we have \[ \pi_*\mathcal{O}_X(mH) \cong (\pi \circ f_1)_*\mathcal{O}_{Y_1}\bigl(m\,f_1^*H+\lceil C \rceil \bigr) \ne 0 \] by the projection formula, since $\lceil C \rceil$ is $f_1$-exceptional. \par We now make the following claim: \begin{claim}\label{claim:pnbpf} For every prime number $p$, the divisor $p^nH$ is $\pi$-generated for $n \gg 0$. \end{claim} Showing Claim \ref{claim:pnbpf} would imply the theorem, since then the monoid of natural numbers $m \in \mathbf{N}$ such that $mH$ is $\pi$-generated would contain all sufficiently large integers by \cite[Theorem 1.0.1]{RA05}. \par Choose $n_0 > 0$ such that $\pi_*\mathcal{O}_X(p^{n_0}H) \ne 0$ as above. If $p^{n_0}H$ is $\pi$-generated, there is nothing to show. We will therefore assume that $p^{n_0}H$ is not $\pi$-generated. \par First, let $f_1\colon Y_1 \to X$ be a projective log resolution of $(X,\Delta)$ as above. Taking successive blowups along regular centers (see \cite{Consnc}), there is a projective birational morphism $f_2\colon Y \to Y_1$ with a family of effective Cartier divisors $\{F_j\}$ with only \emph{simple} normal crossings such that setting $f \coloneqq f_1 \circ f_2$, the $\mathbf{R}$-divisor \begin{align*} f_2^*\biggl( f_1^*\bigl(aH-(K_X+\Delta)\bigr) + \delta\,f_{1*}^{-1}\Delta - \sum_i \delta_{1i} G_i\biggr) &- \delta'A_2\\ = f^*\bigl(aH-(K_X+\Delta)\bigr) + \delta\,f_2^*f_{1*}^{-1}\Delta &- \sum_j \delta_jF_j \end{align*} is $(\pi \circ f)$-ample for an $f_2$-exceptional $\mathbf{R}$-divisor $A_2$ with $0 < \delta' \ll \delta$, again using Kodaira's lemma (Corollary \ref{lem:kodairachar}). Moreover, we have \[ K_Y +\delta\,f_2^*f_{1*}^{-1}\Delta \sim_\mathbf{R} f^*(K_X+\Delta) + \sum_j a_jF_j \] for $a_j \in \mathbf{R}$ with $a_j > -1$, and after possibly using \cite[Theorem 1.1.6]{Tem18} to replace $f$ by a resolution that also resolves the $\pi$-base ideal of $\mathcal{O}_X(p^{n_0}H)$, we have \[ (\pi \circ f)^*(\pi \circ f)_*\mathcal{O}_Y(f^*p^{n_0}H) \mathrel{\text{\longtwo@rightarrow}} \mathcal{O}_Y\biggl(f^*p^{n_0}H - \sum_j r_jF_j\biggr) \subseteq \mathcal{O}_Y(f^*p^{n_0}H) \] for some non-negative integers $r_j$ not all equal to zero. \par Next, since $0 < \delta' \ll \delta \ll \min_{\delta_{1i}\ne0}\{\delta_{1i}\} \ll 1$, we know that $a_j + 1 - \delta_j > 0$ for all $j$ by \cite[Corollary 2.11]{Kol13}. Set \[ c \coloneqq \min_j \biggl\{ \frac{a_j+1-\delta_j}{r_j} \biggr\}, \] where we set $\frac{a_j+1-\delta_j}{r_j} = \infty$ if $r_j = 0$.
After possibly perturbing $A_2$ (and hence the $\delta_j$) slightly using Theorem \ref{thm:ampisinterior}, we may assume that the minimum $c$ is attained at a unique index $j$, which we relabel as $j = 0$, and that $a_j - \delta_j \in \mathbf{Q}$ for all $j$. Set \[ A \coloneqq \sum_{j \ne 0} (-cr_j + a_j - \delta_j)F_j \qquad \text{and} \qquad B \coloneqq F_0. \] Then, the $\mathbf{Q}$-divisor \begin{alignat*}{3} N &\coloneqq p^{n'}f^*H + A - B - K_Y\\ &\sim_\mathbf{R} c\biggl(f^*p^{n_0}H - \sum_j r_jF_j\biggr) &{}+{}& f^*\bigl( (p^{n'}-cp^{n_0})H - (K_X+\Delta)\bigr)\\ &&{}+{}& \delta\,f_2^*f_{1*}^{-1}\Delta - \sum_j \delta_j F_j \end{alignat*} is $(\pi \circ f)$-ample for all $n' \in \mathbf{N}$ such that $p^{n'} \ge cp^{n_0}+a$. Since \begin{align*} R^1(\pi\circ f)_*\mathcal{O}_Y\bigl(p^{n'}f^*H &+ \lceil A \rceil - B \bigr) = R^1(\pi\circ f)_*\mathcal{O}_Y\bigl(\lceil N \rceil + K_Y\bigr) = 0 \intertext{by \cite[Theorem A]{Mur}, the morphism} (\pi \circ f)_*\mathcal{O}_Y\bigl(p^{n'}f^*H &+ \lceil A \rceil\bigr) \longrightarrow (\pi \circ f)_*\mathcal{O}_B\Bigl(\bigl(p^{n'}f^*H + \lceil A \rceil\bigr)_{\vert B}\Bigr) \end{align*} is surjective. Now by the Non-vanishing theorem \cite[Theorem 2-1-1]{KMM87} applied to a connected component of the geometric generic fiber $B_{\bar{\eta}}$ and the pullbacks of $p^{n'}f^*H$ and $A$ to $B_{\bar{\eta}}$, we see that \[ (\pi \circ f)_*\mathcal{O}_B\Bigl(\bigl(p^{n'}f^*H + \lceil A \rceil\bigr)_{\vert B}\Bigr) \ne 0 \] for $n' \gg 0$. Since $(\pi \circ f)_*\mathcal{O}_Y(p^{n'}f^*H + \lceil A \rceil) \cong \pi_*\mathcal{O}_X(p^{n'}H)$ by the projection formula and the fact that $\lceil A \rceil$ is $f$-exceptional, we have \[ f(B) \not\subseteq \Supp\Bigl(\cok\bigl(\pi^*\pi_*\mathcal{O}_X(p^{n'}H) \longrightarrow \mathcal{O}_X(p^{n'}H)\bigr)\Bigr). \] Thus, we have \begin{align*} \MoveEqLeft[5] \Supp\Bigl(\cok\bigl(\pi^*\pi_*\mathcal{O}_X(p^{n'}H) \longrightarrow \mathcal{O}_X(p^{n'}H)\bigr)\Bigr)\\ &\subsetneq \Supp\Bigl(\cok\bigl(\pi^*\pi_*\mathcal{O}_X(p^{n_0}H) \longrightarrow \mathcal{O}_X(p^{n_0}H)\bigr)\Bigr). \end{align*} By Noetherian induction, we therefore have \[ \Supp\Bigl(\cok\bigl(\pi^*\pi_*\mathcal{O}_X(p^{n}H) \longrightarrow \mathcal{O}_X(p^{n}H)\bigr)\Bigr) = \emptyset, \] which proves Claim \ref{claim:pnbpf}. \end{proof} \subsection{Contraction theorem} Next, we consider the Contraction theorem. Showing uniqueness of contraction morphisms is more involved than in the variety case because we also need to consider integral one-dimensional closed subspaces of fibers of $\pi$ over non-closed points. The following lemma fills this gap. \begin{lemma}\label{lem:ContractsNonClosed} Let $Z$ be a Noetherian algebraic space over a scheme $S$ and let $f\colon X\to Y$ and $f'\colon X\to Y'$ be morphisms of proper algebraic spaces over $Z$. Suppose that for every integral one-dimensional closed subspace $C \subseteq X$ such that $f(C)$ is a point, we have that $f'(C)$ is a point. Then, for every $y\in \lvert Y \rvert$ and every connected component $W$ of $f^{-1}(y)$, we have that $f'(W)$ is a point. \end{lemma} \begin{proof} We fix the following notation for the structure morphisms of $X$, $Y$, and $Y'$: \[ \begin{tikzcd} Y \arrow{dr}[swap]{h} & \lar[swap]{f} X \dar{\pi} \rar{f'} & Y' \arrow{dl}{h'}\\ & Z \end{tikzcd} \] Let $y \in \lvert Y \rvert$. It suffices to show that for each integral one-dimensional closed subspace $\Gamma$ of $f^{-1}(y)$, the image $f'(\Gamma)$ is a point.
We may replace $X$ by the closure of $\Gamma$ equipped with the reduced induced structure, in which case $X$ is integral. After replacing $Y$, $Y'$, and $Z$ by the scheme-theoretic images of $X$, we may assume that $X$ maps surjectively onto $Y$, $Y'$, and $Z$, and that $Y$, $Y'$, and $Z$ are integral. In this case, we have $\pi^{-1}(\eta)=\Gamma$ where $\eta$ is the generic point of $Z$. \par Let $z\in \lvert Z \rvert$ be a closed point where the local ring of $Z$ at $z$ has minimal dimension $d$. We proceed by induction on $d$. If $d=0$ there is nothing to prove. If $d>0$, pick $\eta_1\in \lvert Z \rvert$ such that the local ring of $Z$ at $\eta_1$ is one-dimensional, $\eta_1\leadsto z$, and the dimension of $\overline{\{\eta_1\}}$ at $z$ is $< d$. By the inductive hypothesis, we see that the conclusion holds for the base change of $X$, $Y$, and $Y'$ to $\overline{\{\eta_1\}}$. The assumptions also hold for the base change of $X$, $Y$, and $Y'$ to an elementary \'etale neighborhood of $\eta_1$, and hence we may assume that $Z$ is an affine local scheme of dimension $1$. \par Since $f$ is surjective, we have $f(\Gamma)=h^{-1}(\eta)$, which means $h^{-1}(\eta)=\{y\}$ is (set-theoretically) a point. Thus $Y\to Z$ is generically finite, so $\dim(Y)\leq 1$. Since $Y$ is integral, the closed fiber $h^{-1}(z)$ must be finite. Now each integral one-dimensional closed subspace $C \subseteq X$ such that $\pi(C)$ is a point is also such that $f(C)$ is a point, and hence $f'(C)$ is a point by assumption. Thus $f'(\pi^{-1}(z))$ is finite, and this set is just $h^{\prime-1}(z)$. Therefore $h'$ is finite and we see that \[ \dim\bigl(f'(\Gamma)\bigr) \leq \dim\bigl(h^{\prime-1}(\eta)\bigr)=0, \] as desired. \end{proof} We can now prove the Contraction theorem. \begin{theorem}[Contraction theorem; cf.\ {\cite[Theorem 3-2-1]{KMM87}}] \label{thm:contraction} Let $\pi\colon X \to Z$ be a projective surjective morphism of integral quasi-excellent Noetherian algebraic spaces of equal characteristic zero over a scheme $S$. Suppose that $X$ is normal and that $Z$ admits a dualizing complex $\omega_Z^\bullet$. Denote by $K_X$ a canonical divisor on $X$ defined using $\omega_X^\bullet = \pi^!\omega_Z^\bullet$. \par Let $(X,\Delta)$ be a weakly log terminal $\mathbf{R}$-pair, and let $H \in \Pic(X)$ be $\pi$-nef and such that \[ F \coloneqq \bigl(H^\perp \cap \NEbar(X/Z)\bigr) - \{0\} \subseteq \Set[\big]{z \in N_1(X/Z) \given \bigl( (K_X+\Delta) \cdot z \bigr) < 0 }, \] where $H^\perp \coloneqq \Set{z \in N_1(X/Z) \given (H \cdot z) = 0}$. Then, there exists a projective surjective morphism $\varphi\colon X \to Y$ to an integral normal quasi-excellent Noetherian algebraic space projective over $Z$ making the diagram \[ \begin{tikzcd}[column sep=tiny] X \arrow{rr}{\varphi}\arrow{dr}[swap]{\pi} & & Y \arrow{dl}{\sigma}\\ & Z \end{tikzcd} \] commute and satisfying the following properties: \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item\label{thm:contractioni} For every integral one-dimensional closed subspace $C \subseteq X$ such that $\pi(C)$ is a point, we have the property that $\varphi(C)$ is a point if and only if $(H \cdot C) = 0$, i.e., if and only if $[C] \in F$. \item\label{thm:contractionii} $\mathcal{O}_Y \to \varphi_*\mathcal{O}_X$ is an isomorphism. \item\label{thm:contractioniii} $H = \varphi^*A$ for some $\sigma$-ample $A \in \Pic(Y)$. \end{enumerate} Moreover, $\varphi$ is characterized by the properties $(\ref{thm:contractioni})$ and $(\ref{thm:contractionii})$.
\end{theorem} \begin{proof} By Kleiman's criterion for $\pi$-ampleness (Proposition \ref{lem:AmpleIsPositiveOnNE}), there exists $a \in \mathbf{N}$ such that $aH - (K_X+\Delta)$ is $\pi$-ample. Thus, by the Basepoint-free theorem \ref{thm:bpf}, we know that $mH$ is $\pi$-generated for $m \gg 0$. \par We claim that the relative section ring \begin{align*} R\bigl(X/Z;H\bigr) \coloneqq{}& \bigoplus_{m=0}^\infty \pi_*\bigl(\mathcal{O}_X(mH)\bigr) \intertext{is an $\mathcal{O}_Z$-algebra of finite type. It suffices to show that for every affine scheme $U = \Spec(R)$ \'etale over $Z$, the pullback of $R(X/Z;H)$ to $U$ is an $R$-algebra of finite type. Setting $X_U \coloneqq X \times_Z U$, flat base change \cite[\href{https://stacks.math.columbia.edu/tag/073K}{Tag 073K}]{stacks-project} yields} R\bigl(X/Z;H\bigr)_{\vert U} \cong{}& \bigoplus_{m=0}^\infty H^0\bigl(X_U,\mathcal{O}_{X_U}(mH_{\vert X_U})\bigr). \end{align*} Base changing along the morphism $U \to Z$, we reduce to the case when $Z$ is an affine scheme. We can also replace $\pi$ by its Stein factorization \cite[Th\'eor\`eme 4.3.1]{EGAIII1} to assume that $H^0(X,\mathcal{O}_X) = R$. \par Since $mH$ is globally generated, we have a surjection \[ H^0\bigl(X,\mathcal{O}_X(mH)\bigr) \otimes_R \mathcal{O}_X \twoheadrightarrow \mathcal{O}_X(mH), \] which induces a morphism \[ \psi_m\colon X \xrightarrow{\lvert mH \rvert} \mathbf{P}_Z\Bigl(H^0\bigl(X,\mathcal{O}_X(mH)\bigr)\Bigr) \eqqcolon \mathbf{P}_m \] such that $\psi_m^*\mathcal{O}_{\mathbf{P}_m}(1) \cong \mathcal{O}_X(mH)$. By the projection formula, we know that \[ R(X;mH) \coloneqq \bigoplus_{m'=0}^\infty H^0\bigl(X,\mathcal{O}_X(mm'H)\bigr) \cong \bigoplus_{m'=0}^\infty H^0\bigl(\mathbf{P}_m,\mathcal{O}_{\mathbf{P}_m}(m')\bigr). \] Since the right-hand side is a finitely generated $R$-algebra by \cite[Proposition 2.3.4.1]{EGAIII1}, we see that $R(X;H)$ is a finitely generated $R$-algebra by \cite[Proposition 1.2.2]{ADHL15}. \par We now claim the morphism $\varphi$ in the Stein factorization \[ X \overset{\varphi}{\longrightarrow} Y \longrightarrow \Proj_Z\Biggl( \bigoplus_{m=0}^\infty \pi_*\mathcal{O}_X(mH)\Biggr) \] satisfies $(\ref{thm:contractioni})$ and $(\ref{thm:contractionii})$, where the composition is the natural morphism from \cite[\href{https://stacks.math.columbia.edu/tag/0D2Z}{Tag 0D2Z}]{stacks-project}. $(\ref{thm:contractioni})$ holds by the projection formula for intersection products \cite[\href{https://stacks.math.columbia.edu/tag/0EDJ}{Tag 0EDJ}]{stacks-project}, and $(\ref{thm:contractionii})$ holds by construction of the Stein factorization in \cite[\href{https://stacks.math.columbia.edu/tag/0A1B}{Tag 0A1B}]{stacks-project}. \par Next, we show that $(\ref{thm:contractioni})$ and $(\ref{thm:contractionii})$ characterize $\varphi$ after pulling back along every \'etale morphism $U \to Z$ from a scheme $U$. In this case, by Lemma \ref{lem:ContractsNonClosed}, $(\ref{thm:contractioni})$ characterizes $\varphi$ topologically. The isomorphism $\mathcal{O}_Y \overset{\sim}{\to} \varphi_*\mathcal{O}_X$ characterizes $\varphi$ as a morphism of ringed spaces. \par Finally, we show that $(\ref{thm:contractioniii})$ holds for $\varphi$ as defined above. We have \[ \psi_{m+1}^*\mathcal{O}_{\mathbf{P}_{m+1}}(1) \otimes_{\mathcal{O}_X} \psi_m^*\mathcal{O}_{\mathbf{P}_m}(-1) \cong \mathcal{O}_X\bigl((m+1)H - mH\bigr) = \mathcal{O}_X(H).
\] Since the respective Stein factorizations $\phi_m\colon X \to Y_m$ and $\phi_{m+1}\colon X \to Y_{m+1}$ of $\psi_m$ and $\psi_{m+1}$ satisfy $(\ref{thm:contractioni})$ and $(\ref{thm:contractionii})$, they are both isomorphic to $\varphi$. Thus, setting \[ \mathcal{O}_Y(A) = \mathcal{O}_{\mathbf{P}_{m+1}}(1)_{\vert Y} \otimes_{\mathcal{O}_Y} \mathcal{O}_{\mathbf{P}_m}(-1)_{\vert Y}, \] we see $\mathcal{O}_X(H)=\varphi^*\mathcal{O}_Y(A)$. Finally, since $\mathcal{O}_X(mH)\cong\varphi^*\mathcal{O}_{\mathbf{P}_m}(1)_{\vert Y}$, we see that $\mathcal{O}_Y(mA)\cong\mathcal{O}_{\mathbf{P}_m}(1)_{\vert Y}$ by $(\ref{thm:contractionii})$, so $A$ is $\sigma$-ample. \end{proof} \begin{remark} Suppose $X$ is a scheme. Then, since both $X$ and $Y$ are normal, the condition in $(\ref{thm:contractionii})$ holds if and only if $K(Y)$ is algebraically closed in $K(X)$, which holds if and only if the fibers of $\varphi$ are geometrically connected by \cite[Remarque 4.3.4 and Corollaire 4.3.12]{EGAIII1}. \end{remark} We use Theorem \ref{thm:contraction} to define extremal faces and extremal rays. \begin{definition}[cf.\ {\cite[Definition 3-2-3]{KMM87}}]\label{def:contraction} Fix notation as in Theorem \ref{thm:contraction}. Since $\varphi$ is characterized by properties which only depend on $F$ and not on $H$, we call $\varphi$ the \textsl{contraction} of $F$. If $H$ is a $\pi$-nef Cartier divisor on $X$ such that $F = (H^\perp \cap \NEbar(X/Z)) - \{0\}$, we say that $H$ is a \textsl{supporting function} of $F$. We then say that $F$ is an \textsl{extremal face of $\NEbar(X/Z)$ for $(X,\Delta)$} (or \textsl{for $K_X+\Delta$}). If $\dim_\mathbf{R}(F) = 1$, we say that $F$ is an \textsl{extremal ray}. \end{definition} \begin{definition}\label{def:GoodContrOfR} Fix notation as in Definition \ref{def:contraction}. We say a contraction $f\colon X\to Y$ is \textsl{small} if the exceptional locus of $f$ is of codimension at least 2 in $X$. In particular, $f$ is birational when $X$ is integral. \par Let $R\subseteq \NEbar(X/Z)$ be an extremal face. We say that a contraction $f\colon X\to Y$ is \textsl{a contraction of $R$} if a $\pi$-contracted curve $C$ is $f$-contracted exactly when $[C]\in R$. A contraction of $R$ is an isomorphism if and only if $R$ does not contain the class of any $\pi$-contracted curve. If $f$ is not an isomorphism and $R$ is a ray, then $R=\mathbf{R}_{\geq 0}\cdot[C]$ for any $f$-contracted curve $C$. Therefore we see $R=\NEbar(X/Y)$. We say a contraction $f\colon X\to Y$ of an extremal ray $R$ is \textsl{good} if, for all $\mathscr{L}\in\Pic(X)_{\mathbf{Q}}$, we have $(\mathscr{L}\cdot R)=0$ if and only if there exists an element $\mathscr{K}\in\Pic(Y)_{\mathbf{Q}}$ such that $\mathscr{L}=f^*\mathscr{K}\in \Pic(X)_{\mathbf{Q}}$. In this case $\NE(X/Y)\subseteq R$ canonically, and $\mathscr{L}\in \Pic(X)_{\mathbf{Q}}$ is $f$-ample if $(\mathscr{L}\cdot R)>0$. In general, when $Y$ is projective over $Z$, we have $\NEbar(X/Y)= R$; see the proof of \cite[Lemma 3-2-4]{KMM87}. For a good contraction $f$ of an extremal ray $R$, we always have \[ \dim (N^1(Y/Z)_{\mathbf{R}})=\dim (N^1(X/Z)_{\mathbf{R}})-1. \] See \cite[Lemma 3-2-5]{KMM87} and its proof. \end{definition} \subsection{Rationality theorem} We now consider the Rationality theorem.
\begin{theorem}[Rationality theorem; cf.\ {\citeleft\citen{KMM87}\citemid Theorem 4-1-1\citepunct \citen{KM98}\citemid Theorem 3.5\citeright}]\label{thm:rationality} Let $\pi\colon X \to Z$ be a proper surjective morphism of integral quasi-excellent Noetherian algebraic spaces of equal characteristic zero over a scheme $S$. Suppose that $X$ is normal and that $Z$ admits a dualizing complex $\omega_Z^\bullet$. Denote by $K_X$ a canonical divisor on $X$ defined using $\omega_X^\bullet = \pi^!\omega_Z^\bullet$. \par Let $(X,\Delta)$ be a $\mathbf{Q}$-pair, and let $H \in \Pic(X)$ be such that one of the following holds: \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item\label{thm:ratdlt} $(X,\Delta)$ is weakly log terminal and $H$ is $\pi$-ample. \item\label{thm:ratklt} $(X,\Delta)$ is klt and $H$ is $\pi$-big and $\pi$-nef. \end{enumerate} If $K_X+\Delta$ is not $\pi$-nef, then \begin{align*} r &\coloneqq \max\Set[\big]{t \in \mathbf{R} \given H+t(K_X+\Delta)\ \text{is $\pi$-nef}} \intertext{is a rational number. Moreover, expressing $r/a = u/v$ with $u,v \in \mathbf{Z}_{>0}$ and $(u,v) = 1$, we have $v \le a(b+1)$, where} a &\coloneqq \min\Set[\big]{e \in \mathbf{Z}_{>0} \given e(K_X+\Delta)\ \text{is Cartier}},\\ b &\coloneqq \max_{\substack{z \in Z\\\text{closed}}} \Set[\big]{\dim_{\kappa(z)}\bigl(\pi^{-1}(z)\bigr)}. \end{align*} \end{theorem} \begin{proof} We claim we may replace $Z$ by a scheme $Z'$ \'etale over $Z$. Let $f\colon Z' \to Z$ be a surjective \'etale morphism where $Z'$ is a quasi-compact scheme, and consider the associated Cartesian diagram \[ \begin{tikzcd} X' \rar{f'}\dar[swap]{\pi'} & X\dar{\pi}\\ Z' \rar{f} & Z \end{tikzcd} \] As in the proof of Theorem \ref{thm:bpf}, the conditions on $(X,\Delta)$ are preserved. Since $f$ is surjective, nefness is invariant under base change by Lemma \ref{lem:nefbasechange}. The number $b$ is invariant because $f$ is quasi-finite. The number $a$ is invariant because of the definition of $\Pic(X)$.\medskip \par We now prove the theorem when $Z$ is a scheme. We will derive a contradiction assuming that either $r \notin \mathbf{Q}$, or that $r \in \mathbf{Q}$ and $v > a(b+1)$.\smallskip \par We first claim that we may assume that $H$ is $\pi$-generated and that $H - (K_X+\Delta)$ is $\pi$-ample in case $(\ref{thm:ratdlt})$, and $\pi$-big and $\pi$-nef in case $(\ref{thm:ratklt})$. Let $c$ be a sufficiently large positive integer such that $a < cr$ and, when $r \in \mathbf{Q}$, such that $(c,v) = 1$. We then see that \[ cH + a(K_X+\Delta) \] is $\pi$-nef since $a/c < r$. Moreover, we claim that \[ cH + (a-1)(K_X+\Delta) = \frac{c}{a}H + \frac{a-1}{a}\bigl(cH + a(K_X+\Delta)\bigr) \] is $\pi$-ample in case $(\ref{thm:ratdlt})$, and $\pi$-big and $\pi$-nef in case $(\ref{thm:ratklt})$. Case $(\ref{thm:ratdlt})$ is clear from Theorem \ref{thm:ampisinterior}, since it is the sum of a $\pi$-ample and a $\pi$-nef $\mathbf{Q}$-invertible sheaf. In case $(\ref{thm:ratklt})$, we see that $cH + (a-1)(K_X+\Delta)$ is $\pi$-nef since it is the sum of two $\pi$-nef $\mathbf{Q}$-invertible sheaves, and is $\pi$-big by Lemma \ref{lem:bigplusnef} since it is the sum of a $\pi$-big and a $\pi$-nef $\mathbf{Q}$-invertible sheaf. Since $cH+(a-1)(K_X+\Delta)$ is $\pi$-big and $\pi$-nef, the Basepoint-free theorem \ref{thm:bpf} implies \[ H' \coloneqq n\bigl(cH+a(K_X+\Delta)\bigr) \] is $\pi$-generated for $n \gg 0$. We moreover choose $n$ such that $(nc,v) = 1$. Setting \[ r' \coloneqq \max\Set[\big]{t \in \mathbf{R}\given H' + t(K_X+\Delta)\ \text{is $\pi$-nef}}, \] we have $r'/a = ncr/a - n$.
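To spell out the last equality (a direct computation, using that $H$ is $\pi$-nef in both cases): for $t \ge 0$ we have \[ H' + t(K_X+\Delta) = nc\biggl(H + \frac{na+t}{nc}(K_X+\Delta)\biggr), \] and since nefness is preserved under positive scaling, this is $\pi$-nef if and only if $\frac{na+t}{nc} \le r$, that is, if and only if $t \le ncr - na$. Hence $r' = ncr - na$, and dividing by $a$ gives $r'/a = ncr/a - n$.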
Thus, we have $r \in \mathbf{Q}$ if and only if $r' \in \mathbf{Q}$. In this case, writing $r'/a = u'/v'$ with $u',v' \in \mathbf{Z}_{>0}$ and $(u',v') = 1$, we have $v = v'$ by the choice of $c$ and $n$. We therefore also have $v \le a(b+1)$ if and only if $v' \le a(b+1)$. We can therefore replace $H$ by $H'$ to assume that $H$ is $\pi$-generated. We also know that \[ H' - (K_X+\Delta) = (n-1)\bigl(cH+a(K_X+\Delta)\bigr) + cH+(a-1)(K_X+\Delta) \] is $\pi$-ample in case $(\ref{thm:ratdlt})$, and $\pi$-big and $\pi$-nef in case $(\ref{thm:ratklt})$ by the same argument as above.\smallskip \par We can now proceed as in the proof of \cite[Theorem 4-1-1]{KMM87}, replacing the Kawamata--Viehweg vanishing theorem \cite[Theorem 1-2-3]{KMM87} by \cite[Theorem A]{Mur}, the Basepoint-free theorem \cite[Theorem 3-1-1]{KMM87} by the Basepoint-free theorem \ref{thm:bpf}, and noting that \cite[Lemma 4-1-2]{KMM87} may be applied to a connected component of the geometric generic fiber of $\pi \circ f$. The necessary log resolutions are constructed as in the proof of Theorem \ref{thm:bpf}. \end{proof} \subsection{Cone theorem} Finally, we consider the Cone theorem. \begin{theorem}[Cone theorem; cf.\ {\cite[Theorem 4-2-1]{KMM87}}]\label{thm:kmmcone} Let $\pi\colon X \to Z$ be a projective surjective morphism of integral quasi-excellent Noetherian algebraic spaces of equal characteristic zero over a scheme $S$. Suppose that $X$ is normal and that $Z$ admits a dualizing complex $\omega_Z^\bullet$. Denote by $K_X$ a canonical divisor on $X$ defined using $\omega_X^\bullet = \pi^!\omega_Z^\bullet$. \par Let $(X,\Delta)$ be a weakly log terminal $\mathbf{Q}$-pair. Then, \[ \NEbar(X/Z) = \NEbar_{K_X+\Delta\ge0}(X/Z) + \sum_j R_j, \] where $R_j$ are extremal rays of $\NEbar(X/Z)$ for $(X,\Delta)$. Moreover, if $C_j \subseteq X$ is an integral one-dimensional closed subspace such that $R_j = \mathbf{R}_{\ge0} \cdot [C_j]$, then for every $\pi$-ample $A\in \Pic(X)$, expressing \[ \frac{(A\cdot C_j)}{a((K_X+\Delta)\cdot C_j)} = - \frac{u_j}{v_j} \] with $u_j,v_j \in \mathbf{Z}_{>0}$ and $(u_j,v_j) = 1$, we have $v_j \le a(b+1)$, where \begin{align*} a &\coloneqq \min\Set[\big]{e \in \mathbf{Z}_{>0} \given e(K_X+\Delta)\ \text{is Cartier}},\\ b &\coloneqq \max_{\substack{z \in Z\\\text{closed}}} \Set[\big]{\dim_{\kappa(z)}\bigl(\pi^{-1}(z)\bigr)}. \end{align*} In particular, the $R_j$ are discrete in the half space \[ \Set[\big]{z \in N_1(X/Z) \given \bigl( (K_X+\Delta) \cdot z \bigr) < 0}. \] \end{theorem} \begin{proof} The proof of \cite[Theorem 4-2-1]{KMM87} applies using our versions of the Basepoint-free, Contraction, and Rationality theorems (Theorems \ref{thm:bpf}, \ref{thm:contraction}, and \ref{thm:rationality}). These theorems are also used to show the preliminary results \cite[Lemmas 3-2-4, 3-2-5, and 4-2-2]{KMM87}, which hold in our setting using Kleiman's criterion (Proposition \ref{lem:AmpleIsPositiveOnNE}). \end{proof} \begingroup \makeatletter \renewcommand{\@secnumfont}{\bfseries} \part{Finite generation of relative adjoint rings}\label{part:fingen} \makeatother \endgroup In this part, we prove Theorem \ref{thm:introfinitegen} for schemes and algebraic spaces by adapting the strategy in \cite{CL12} that was used for complex varieties. We then prove dual versions of the Rationality, Cone, and Contraction theorems in the vein of \cite{Kaw11} using our finite generation result (Theorem \ref{thm:introfinitegen}), as is done for varieties in \cite{CL13}.
These versions of the results will be used later when showing termination with scaling.\bigskip \section{Statements of theorems}\label{sect:cl12statements} \par We state our version of \cite[Theorem A]{CL12}, which is very close to the original. \begin{theorem}[{cf.\ \cite[Theorem A]{CL12}}]\label{thm:cl12a} Let $\pi\colon X \to Z$ be a projective morphism of integral Noetherian excellent schemes of equal characteristic zero, such that $X$ is regular of dimension $n$ and such that $Z$ is affine and has a dualizing complex $\omega_Z^\bullet$. Denote by $K_X$ a canonical divisor on $X$ defined using $\omega_X^\bullet = \pi^!\omega_Z^\bullet$. \par Let $B_1,B_2,\ldots,B_k$ be $\mathbf{Q}$-divisors on $X$ such that $\lfloor B_i \rfloor = 0$ for all $i$, and such that $\sum_{i=1}^k B_i$ has simple normal crossings support. Let $A$ be a $\pi$-ample $\mathbf{Q}$-divisor on $X$, and set $D_i = K_X+A+B_i$ for every $i$. Then, the relative adjoint ring \[ R\bigl(X/Z;D_1,D_2,\ldots,D_k\bigr) = \bigoplus_{(m_1,m_2,\ldots,m_k) \in \mathbf{N}^k} H^0\bigl(X,\mathcal{O}_X\bigl( \lfloor m_1D_1 + m_2D_2 + \cdots + m_kD_k\rfloor \bigr)\bigr) \] is finitely generated as an $H^0(Z,\mathcal{O}_Z)$-algebra. \end{theorem} As in \cite{CL12}, we will prove Theorem \ref{thm:cl12a} by induction. We therefore adopt the following: \begin{convention} In this paper, we write ``Theorem $\text{\ref{thm:cl12a}}_{n}$ holds'' to mean ``Theorem \ref{thm:cl12a} holds when $\dim(X) = n$.'' \end{convention} Next, we state our version of \cite[Theorem B]{CL12}. Note that in this statement $Z$ need not be an excellent scheme of equal characteristic zero. \begin{theorem}[cf.\ {\cite[Theorem B]{CL12}}]\label{thm:cl12b} Let $\pi\colon X \to Z$ be a projective morphism of integral Noetherian schemes, such that $X$ is regular of dimension $n$ and such that $Z$ is affine and has a dualizing complex $\omega_Z^\bullet$. Assume that the function field of $X$ has characteristic zero. \par Let $S_1,S_2,\ldots,S_p$ be distinct prime divisors on $X$ such that $(X,\sum_{i=1}^p S_i)$ is log regular, and consider a $\pi$-ample $\mathbf{Q}$-divisor $A$ on $X$. Then, setting \begin{align*} V &= \sum_{i=1}^p \mathbf{R} \cdot S_i \subseteq \Div_\mathbf{R}(X),\\ \mathcal{L}(V) &= \biggl\{B = \sum_{i=1}^p b_iS_i \in V \nonscript\:\bigg\vert\allowbreak\nonscript\:\mathopen{} 0 \le b_i \le 1\ \text{for all}\ i\biggr\}, \intertext{the set} \mathcal{E}_A(V) &= \bigl\{B \in \mathcal{L}(V) \nonscript\:\big\vert\allowbreak\nonscript\:\mathopen{} \lvert K_X+A+B \rvert_\mathbf{R} \ne \emptyset\bigr\} \end{align*} is a rational polytope. \end{theorem} In \cite{CL12}, Cascini and Lazi\'c prove \cite[Theorems A and B]{CL12} simultaneously by induction on $n$. We will deduce Theorem \ref{thm:cl12b} directly from their work, which yields this possibly mixed-characteristic version of \cite[Theorem B]{CL12}. \section{\texorpdfstring{$\mathcal{E}_A(V)$}{E\_A(V)} is a rational polytope} The goal of this section is to prove Theorem \ref{thm:cl12b}, which is our version of \cite[Theorem B]{CL12}. We can reduce Theorem \ref{thm:cl12b} to \cite[Theorem B]{CL12}. To this end, we show some localization results for certain asymptotic loci of divisors. Among those, only Corollary \ref{cor:LociLocalize}$(\ref{cor:EAVlocalize})$ is used in this section; the other results will be needed later. These results are quick corollaries of Lemma \ref{lem:LinearSysLocalizes} and Corollary \ref{cor:EffIffEffAtGenFiber}, so we call them corollaries.
\begin{corollary}\label{cor:BBsFixLocalize} Let $\pi\colon X\to Z$ be a projective surjective morphism of integral Noetherian schemes with $X$ regular and $Z$ affine. Consider a point $z \in Z$, and set $R \coloneqq \mathcal{O}_{Z,z}$ and $X_R \coloneqq X \times_Z \Spec(R)$. For $\square\in\{\Bs,\Fix\}$ (resp.\ $\square\in\{\SB,\FFix\}$) and $D$ a divisor (resp.\ an $\mathbf{R}$-divisor) on $X$, we have $\square(D_R)=\square(D)_R$. \end{corollary} \begin{proof} In all cases, $\square(D_R)\subseteq\square(D)_R$ trivially. The other inclusion follows from Lemma \ref{lem:LinearSysLocalizes}. \end{proof} \begin{corollary}\label{cor:LociLocalize} Let $\pi\colon X\to Z$ be a projective surjective morphism of integral Noetherian schemes with $X$ regular and $Z$ affine with a dualizing complex $\omega^{\bullet}_Z$. Consider a point $z \in Z$, and set $R \coloneqq \mathcal{O}_{Z,z}$ and $X_R \coloneqq X \times_Z \Spec(R)$. \par Let $S_1,S_2,\ldots,S_p$ be distinct prime divisors on $X$ such that $(X,\sum_{i=1}^p S_i)$ is log regular. Renumber the $S_i$ so that there exists $a \in \{1,2,\ldots,p\}$ such that $z \in \pi(S_i)$ for all $i \le a$ while $z \notin \pi(S_i)$ for all $i \ge a+1$. Let \[ V_R=\sum_{i\leq a}{\mathbf R} \cdot (S_i)_R \subseteq \Div_\mathbf{R}(X_R), \] and consider a $\pi$-ample $\mathbf{Q}$-divisor $A$ on $X$. Define $\mathcal L(V_R)$ as in Definition \ref{def:cl1224} for the morphism $X_R \to \Spec(R)$, and identify $\mathcal L(V)$ with $\mathcal L(V_R)\times [0,1]^{p-a}$. \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item\label{cor:EAVlocalize} Define $\mathcal E_{A_R}(V_R)$ as in Definition \ref{def:cl1224} for the morphism $X_R \to \Spec(R)$. We then have \[ \mathcal E_{A}(V)=\mathcal E_{A_R}(V_R)\times [0,1]^{p-a}. \] \item\label{cor:BSAVlocalize} Let $S$ be a prime divisor on $X$ distinct from the $S_i$ such that $(X,S+\sum_{i=1}^p S_i)$ is log regular and $z \in \pi(S)$. Define $\mathcal B^{S_R}_{A_R}(V_R)$ as in Definition \ref{def:cl1224} for the morphism $X_R \to \Spec(R)$. We then have \[ \mathcal B^S_{A}(V)=\mathcal B^{S_R}_{A_R}(V_R)\times [0,1]^{p-a}. \] \end{enumerate} \end{corollary} \begin{proof} These follow immediately from Corollary \ref{cor:EffIffEffAtGenFiber} and Corollary \ref{cor:BBsFixLocalize}, respectively. \end{proof} We remark that the objects considered above also behave well with respect to field extensions. This is mostly trivial with $\mathbf{Q}$-coefficients, but we take extra care here because we need to deal with $\mathbf{R}$-coefficients. We only record the results necessary for the proof of our Theorem \ref{thm:cl12b}; therefore we restrict our attention to $\lvert \cdot \rvert_{\mathbf{R}}$ and $\mathcal{E}_A(V)$, whereas similar results hold for $\SB(\cdot)$, $\mathcal{B}^S_A(V)$, etc. \begin{lemma}\label{lem:LinearSystemFieldExtn} Let $k$ be a field, and let $X$ be a normal geometrically connected scheme of finite type over $k$. Let $L/k$ be a separable field extension. Let $D$ be an $\mathbf{R}$-Weil divisor on $X$ and $D_L$ its pullback to $X_L$. Then $\lvert D \rvert_{\mathbf{R}}\neq \emptyset$ if and only if $\lvert D_L \rvert_{\mathbf{R}}\neq \emptyset$. \end{lemma} \begin{proof} We denote by $K(-)$ the function field of an integral scheme. Assume $\lvert D \rvert_{\mathbf{R}}\neq \emptyset$, so $D=E+\sum_i a_i\,\prdiv_X(f_i)$ where $E$ is an effective $\mathbf{R}$-Weil divisor and $f_i\in K(X)^\times$. Then, $D_L=E_L+\sum_i a_i\,\prdiv_{X_L}(f_i)$, and thus $\lvert D_L \rvert_{\mathbf{R}}\neq \emptyset$.
Conversely, assume $\lvert D_L \rvert_{\mathbf{R}}\neq \emptyset$, so there exist an effective $\mathbf{R}$-Weil divisor $F$ on $X_L$ and $g_j\in K(X_L)^\times$ with $D_L=F+\sum_j b_j\,\prdiv_{X_L}(g_j)$. There exists a finitely generated subextension $L'/k$ of $L/k$ such that $F$ is the pullback of an effective $F'$ on $X_{L'}$ and all $g_j\in K(X_{L'})^\times$, so $D_{L'}=F'+\sum_j b_j\,\prdiv_{X_{L'}}(g_j)$. Therefore we may assume that $L/k$ is of finite type, and since $L/k$ is separable, $L$ is the fraction field of a smooth integral $k$-algebra $S$; see for example \cite[\href{https://stacks.math.columbia.edu/tag/00TV}{Tag 00TV}]{stacks-project}. Now $K(X_L)=K(X_S)$, so we have the divisor $\sum_j b_j\,\prdiv_{X_{S}}(g_j)$. After possibly replacing $S$ by a localization, we have an effective $\mathbf{R}$-Weil divisor $\mathfrak{F}$ on $X_S$ with $D_S=\mathfrak{F}+\sum_j b_j\,\prdiv_{X_{S}}(g_j)$. Therefore, for a suitable maximal ideal $\mathfrak{m}$ of $S$, we have a well-defined effective $\mathbf{R}$-divisor $\mathfrak{F}_{S/\mathfrak{m}}$ and well-defined elements $\overline{g_j}\in K(X_{S/\mathfrak{m}})^\times$ such that \[ D_{S/\mathfrak{m}}=\mathfrak{F}_{S/\mathfrak{m}}+\sum_j b_j\,\prdiv_{X_{S/\mathfrak{m}}}(\overline{g_j}). \] The degree $d$ of $S/\mathfrak{m}$ over $k$ is finite, thus $h\colon X_{S/\mathfrak{m}}\to X$ is finite flat of degree $d$. Thus the proper pushforward $h_*\colon \WDiv_{\mathbf{R}}(X_{S/\mathfrak{m}})\to \WDiv_{\mathbf{R}}(X)$ satisfies $h_*(D_{S/\mathfrak{m}})=dD$, so we have \[ D=\frac{1}{d}h_*(\mathfrak{F}_{S/\mathfrak{m}})+\frac{1}{d}\sum_j b_j\,h_*\bigl(\prdiv_{X_{S/\mathfrak{m}}}(\overline{g_j})\bigr). \] Since $\mathfrak{F}_{S/\mathfrak{m}}$ is effective, so is $h_*(\mathfrak{F}_{S/\mathfrak{m}})$; and if $\operatorname{Norm}$ is the norm function for the field extension $K(X_{S/\mathfrak{m}})/K(X)$, then $h_*(\prdiv_{X_{S/\mathfrak{m}}}(\overline{g_j}))=\prdiv_{X}(\operatorname{Norm}(\overline{g_j}))$. Therefore $\frac{1}{d}h_*(\mathfrak{F}_{S/\mathfrak{m}})\in\lvert D\rvert_{\mathbf{R}}$ and $\lvert D\rvert_{\mathbf{R}}\neq\emptyset$ as desired. \end{proof} \begin{lemma}\label{lem:EA(V)FIELDEXTENSION} Let $k$ be a field and let $X$ be a scheme of finite type over $k$. Let $S_1,S_2,\ldots,S_p$ be distinct prime divisors on $X$ such that $(X,\sum_{i=1}^p S_i)$ is log regular. Let $L/k$ be a separable extension of fields. Let $T_{i1},T_{i2},\ldots, T_{iq_i}$ be all the irreducible components of $(S_i)_L\subseteq X_L$, so $(X_L,\sum_{i=1}^p \sum_{j=1}^{q_i} T_{ij})$ is log regular, and consider $V$ and $\mathcal{L}(V)$ as defined in Definition \ref{def:cl1224}. Set \[ W=\sum_i\sum_{j\leq q_i}\mathbf{R}\cdot T_{ij}\subseteq \Div_{\mathbf{R}}(X_L), \] so there is a canonical injective linear map $\varphi\colon V\to W$ sending $S_i$ to $\sum_{j=1}^{q_i} T_{ij}$. Let $A$ be an ample $\mathbf{Q}$-divisor on $X$, so $A_L$ is an ample $\mathbf{Q}$-divisor on $X_L$. Then, with notation as in Definition \ref{def:cl1224}, we have \[ \varphi\bigl(\mathcal{E}_A(V)\bigr)=\mathcal{E}_{A_L}(W)\cap \varphi(V). \] \end{lemma} \begin{proof} Let $B\in \mathcal{E}_A(V)$. Then $\varphi(B)=B_L$, since $L/k$ is separable. Since $\lvert K_X+A+B \rvert_{\mathbf{R}}\neq \emptyset$, Lemma \ref{lem:LinearSystemFieldExtn} implies $\lvert K_{X_L}+A_L+\varphi(B)\rvert_{\mathbf{R}}\neq \emptyset$. Thus, $\varphi(\mathcal{E}_A(V))\subseteq\mathcal{E}_{A_L}(W)\cap \varphi(V)$. Conversely, let $C\in \mathcal{E}_{A_L}(W)\cap \varphi(V)$, so $C=\varphi(B)$ for some $B\in V$.
It is clear that $B\in\mathcal{L}(V)$, and that $\lvert K_{X_L}+A_L+\varphi(B) \rvert_{\mathbf{R}}\neq \emptyset$ by the definition of $\mathcal{E}_{A_L}(W)$. By Lemma \ref{lem:LinearSystemFieldExtn}, we conclude that $B\in \mathcal{E}_A(V)$, as desired. \end{proof} With these results, we conclude that our Theorem \ref{thm:cl12b} follows from \cite[Theorem B]{CL12}. \begin{proof}[Proof of Theorem \ref{thm:cl12b}] Since the $\mathbf{R}$-linear system $\lvert K_X+A+B\rvert_\mathbf{R}$ does not change when replacing $\pi\colon X \to Z$ by its Stein factorization, we may assume that $\pi$ is surjective with geometrically connected fibers. Let $K$ be the function field of $Z$. By Corollary \ref{cor:LociLocalize}$(\ref{cor:EAVlocalize})$, we may assume $Z=\Spec(K)$. If $K=\mathbf{C}$, this is exactly \cite[Theorem B]{CL12}; the general case then follows from the Lefschetz principle and Lemma \ref{lem:EA(V)FIELDEXTENSION}. \end{proof} \section{Lifting sections}\label{sect:cl12s3} The main result in this section is Theorem \ref{thm:cl1234}. This result is a version of Cascini and Lazi\'c's lifting theorem \cite[Theorem 3.4]{CL12}, which in turn is a version of Hacon and M\textsuperscript{c}Kernan's lifting theorem \cite[Theorem 6.3]{HM10}. To prove these results for schemes, we require the version of the Kawamata--Viehweg vanishing theorem for proper morphisms of $\mathbf{Q}$-schemes proved by the second author \cite[Theorem A]{Mur}. In this context, log resolutions exist by \cite{Tem08,Tem12,Tem18}. \par On the other hand, one difficulty unique to our situation is the lack of Bertini theorems. To use our version of Bertini theorems over local domains (Theorem \ref{thm:bertini} and Remark \ref{rem:bertinisnc}), we need to rephrase everything in terms of restriction maps on global sections and then, using flat base change, reduce to the case when we work over the spectrum of an excellent local $\mathbf{Q}$-algebra.\medskip \par We prove each result in \cite[\S3]{CL12}. When the proof is not too different from that in \cite{CL12}, we indicate how the proof therein can be adapted. We start with the following consequences of the Kawamata--Viehweg vanishing theorem for $\mathbf{Q}$-schemes \cite[Theorem A]{Mur}. \begin{lemma}[{cf.\ \cite[Lemma 3.1]{CL12}}]\label{lem:cl1231} Let $\pi\colon X \to Z$ be a proper morphism of integral Noetherian schemes of equal characteristic zero such that $X$ is regular of dimension $n$ and such that $Z$ is affine with a dualizing complex $\omega_Z^\bullet$. Denote by $K_X$ a canonical divisor defined using $\omega_X^\bullet = \pi^!\omega_Z^\bullet$. \par Let $B$ be an effective $\mathbf{Q}$-divisor on $X$ such that $(X,B)$ is log regular and $\lfloor B \rfloor = 0$. Let $A$ be a $\pi$-nef and $\pi$-big $\mathbf{Q}$-divisor. \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item\label{lem:cl1231i} Let $S \subseteq X$ be a prime divisor such that $S \not\subseteq \Supp(B)$. Consider a divisor $G$ on $X$ such that \[ G \sim_\mathbf{Q} K_X+S+A+B. \] Then, the restriction map \[ H^0\bigl(X,\mathcal{O}_X(G)\bigr) \longrightarrow H^0\bigl(S,\mathcal{O}_S(G)\bigr) \] is surjective. In particular, we have $\lvert G_{\vert S} \rvert = \lvert G \rvert_S$. \item\label{lem:cl1231ii} Let $f\colon X \to Y$ be a birational morphism of integral excellent Noetherian schemes of equal characteristic zero such that the diagram \[ \begin{tikzcd}[column sep=tiny] X \arrow{rr}{f}\arrow{dr}[swap]{\pi} & & Y\arrow{dl}\\ & Z \end{tikzcd} \] commutes, where $Y \to Z$ is projective.
Let $U \subseteq X$ be an open subset such that $f_{\vert U}$ is an isomorphism and such that $U$ intersects at most one irreducible component of $B$. Let $H'$ be a Cartier divisor on $Y$ that is very ample over $Z$, and let $H = f^*H'$. If $F$ is a divisor on $X$ such that \[ F \sim_\mathbf{Q} K_X+(n+1)H+A+B, \] then $\mathcal{O}_X(F)$ is $\pi$-generated at every point of $U$. In particular, $\lvert F \rvert$ is basepoint-free at every point of $U$. \end{enumerate} \end{lemma} \begin{proof} The ``in particular'' statements follow from Proposition \ref{prop:har29}, and hence it suffices to show the sheaf-theoretic statements in $(\ref{lem:cl1231i})$ and $(\ref{lem:cl1231ii})$. By flat base change, it suffices to show each statement after replacing $Z$ with $\Spec(\mathcal{O}_{Z,z})$ for every point $z \in Z$. This will allow us to use our version of the Bertini theorem (Theorem \ref{thm:bertini} and Remark \ref{rem:bertinisnc}). \par For $(\ref{lem:cl1231i})$, we consider the exact sequence \[ 0 \longrightarrow \mathcal{O}_X(G-S) \longrightarrow \mathcal{O}_X(G) \longrightarrow \mathcal{O}_S(G) \longrightarrow 0. \] By Kawamata--Viehweg vanishing \cite[Theorem A]{Mur}, we have $H^1(X,\mathcal{O}_X(G-S)) = 0$, and hence $H^0(X,\mathcal{O}_X(G)) \to H^0(S,\mathcal{O}_S(G))$ is surjective. \par For $(\ref{lem:cl1231ii})$, we argue by induction on $n = \dim(X)$. The case when $n = 0$ holds because in this case $X$ is affine. Now suppose $n > 0$. Since the locus where $\pi^*\pi_*\mathcal{O}_X(F) \to \mathcal{O}_X(F)$ is not surjective is closed, it suffices to show that for every closed point $x \in U$, the morphism $\pi^*\pi_*\mathcal{O}_X(F) \to \mathcal{O}_X(F)$ is surjective at $x$. We claim there exists a divisor $T \sim H$ such that $T$ is regular and passes through $x$. Consider the blowup $\mu\colon X' \to X$ of $X$ at $x$ with exceptional divisor $E$, and consider the divisor $\mu^*H - E$. The sheaf $\mathcal{O}_{X'}(\mu^*H - E)$ is $(\pi \circ \mu)$-generated, and hence we can apply Theorem \ref{thm:bertini} and Remark \ref{rem:bertinisnc} to produce a divisor $T' \sim \mu^*H - E$ on $X'$ that is regular, intersects $E$ and the preimage of $B$ in $X'$ transversely, and maps birationally onto its image in $Y$. The image of $T'$ in $X$ is then a divisor $T \sim H$ that is regular, passes through $x$, and intersects $B$ transversely, and hence $(X,T+B)$ is log regular. Since \[ F_{\vert T} \sim_\mathbf{Q} K_T + n\,H_{\vert T} + A_{\vert T} + B_{\vert T}, \] by the inductive hypothesis we know that $\mathcal{O}_T(F_{\vert T})$ is $\pi_{\vert T}$-generated at every point of $U \cap T$ (we note that $T$ may decompose into finitely many connected components, but the conclusion of $(\ref{lem:cl1231ii})$ still holds by working with each component separately). We now have the commutative diagram \[ \begin{tikzcd} & \pi^*\pi_*\bigl(\mathcal{O}_X(F-T)\bigr) \rar\dar & \pi^*\pi_*\bigl(\mathcal{O}_X(F)\bigr) \rar\dar & (\pi_{\vert T})^*(\pi_{\vert T})_* \bigl(\mathcal{O}_T(F_{\vert T})\bigr) \rar\dar & 0\\ 0 \rar & \mathcal{O}_X(F-T) \rar & \mathcal{O}_X(F) \rar & \mathcal{O}_T(F_{\vert T}) \rar & 0 \end{tikzcd} \] with exact rows, where the vertical arrows come from the counits of the adjunctions $\pi^* \dashv \pi_*$ and $(\pi_{\vert T})^* \dashv (\pi_{\vert T})_*$, and the top row is exact since $H^1(X,\mathcal{O}_X(F-T)) = 0$ by the Kawamata--Viehweg vanishing theorem \cite[Theorem A]{Mur}. By the inductive hypothesis, we see that the right vertical arrow is surjective at $x$.
By Nakayama's lemma \cite[Theorem 2.3]{Mat89}, this implies that the middle vertical arrow is also surjective at $x$: indeed, a diagram chase shows that the image $N \subseteq \mathcal{O}_X(F)_x$ of the middle vertical arrow satisfies $\mathcal{O}_X(F)_x = N + \mathcal{O}_X(F-T)_x$, and $\mathcal{O}_X(F-T)_x \subseteq \mathfrak{m}_x\,\mathcal{O}_X(F)_x$ since $T$ passes through $x$. \end{proof} \begin{lemma}[cf.\ {\cite[Lemma 3.2]{CL12}}]\label{lem:cl1232} Let $\pi\colon X \to Z$ be a proper morphism of integral Noetherian schemes of equal characteristic zero such that $X$ is regular and such that $Z$ is affine and excellent with a dualizing complex $\omega_Z^\bullet$. Denote by $K_X$ a canonical divisor defined using $\omega_X^\bullet = \pi^!\omega_Z^\bullet$. \par Let $S$ be a regular prime divisor on $X$ and let $B$ be an effective $\mathbf{Q}$-divisor on $X$ such that $S \not\subseteq \Supp(B)$. Let $A$ be a $\pi$-nef and $\pi$-big $\mathbf{Q}$-divisor on $X$. Assume that $D$ is a divisor on $X$ such that \[ D \sim_\mathbf{Q} K_X+S+A+B, \] and let $\sigma \in H^0(S,\mathcal{O}_S(D_{\vert S}))$ be a nonzero global section with corresponding divisor $\Sigma$. Let $\Phi$ be an effective $\mathbf{Q}$-divisor on $S$ such that the $\mathbf{Q}$-pair $(S,\Phi)$ is klt and such that $B_{\vert S} \le \Sigma+\Phi$. Then, $\sigma \in H^0(X \vert S,\mathcal{O}_X(D))$. In particular, we have $\Sigma \in \lvert D \rvert_S$. \end{lemma} \begin{proof} The ``in particular'' statement follows from Proposition \ref{prop:har29}, and hence it suffices to show the module-theoretic statement. By \cite[Theorem 2.3.6]{Tem08}, we get a log resolution $f\colon Y\to X$ of $(X,S+B)$. Write $T=f_*^{-1}S$. Fix a choice of the canonical divisor $K_X$. Let $K_Y$ be the unique canonical divisor such that $K_Y-f^*K_X$ is $f$-exceptional. Then there are $f$-exceptional divisors $\Theta\geq 0$ and $E\geq 0$ on $Y$ with no common components such that \[ K_Y + T + \Theta = f^*(K_X + S) + E\in \mathrm{Div}(Y). \] Let $g=f_{\vert T}\colon T\to S$. Restricting the corresponding invertible sheaves to $T$, we see that there exist canonical divisors $K_S$ of $S$ and $K_T$ of $T$ such that \[ K_T + \Theta_{\vert T} = g^*K_S + E_{\vert T}\in \mathrm{Div}(T). \] Therefore, \[ K_T + \Theta_{\vert T} + g^*\Phi = g^*(K_S +\Phi) + E_{\vert T}\in \mathrm{Div}_{\mathbf{Q}}(T). \] Since $\Theta_{\vert T}$ and $E_{\vert T}$ are $g$-exceptional, the coefficients of $E_{\vert T}-\Theta_{\vert T}-g^*\Phi$ are the discrepancies of the klt pair $(S,\Phi)$, and are thus greater than $-1$. Therefore \begin{equation}\label{MinusPhi32} \lceil-g^*\Phi\rceil\geq \Theta_{\vert T}-E_{\vert T}. \end{equation} Now, by assumption $\Sigma\geq B_{\vert S}-\Phi$, so $g^*\Sigma\geq g^*B_{\vert S}-g^*\Phi$, thus $g^*\Sigma\geq \lceil -g^*\Phi\rceil+\lfloor g^*(B_{\vert S})\rfloor$ as $g^*\Sigma$ is an integral divisor. Combining with the inequality \eqref{MinusPhi32}, we get \begin{equation}\label{SigmaIneq32} g^*\Sigma\geq \Theta_{\vert T}+\lfloor g^*(B_{\vert S})\rfloor-E_{\vert T}. \end{equation} Let $\Gamma=\Theta+f^*B\in \mathrm{Div}_{\mathbf{Q}}(Y)$, so that $T\not\subseteq \Supp(\Gamma)$, $\Gamma$ and $E$ have no common components, and we have \[ K_Y + T + \Gamma = f^*(K_X + S + B) + E\in \mathrm{Div}_{\mathbf{Q}}(Y). \] Let $C = \Gamma -E$ and \begin{equation}\label{DefineG32} G = f^*D -\lfloor C\rfloor = f^*D -\lfloor\Gamma\rfloor + E. \end{equation} Then, the $\mathbf{Q}$-divisor \[G -\bigl(K_Y + T + \{C\}\bigr) \sim_{\mathbf Q} f^*(K_X + S + A + B)-(K_Y + T + C) = f^*A\] is $(\pi\circ f)$-nef and $(\pi\circ f)$-big, and Lemma \ref{lem:cl1231}$(\ref{lem:cl1231i})$ implies that \begin{equation}\label{SectionLiftG32} H^0\bigl(T,\mathcal{O}_T(G_{\vert T})\bigr)=H^0\bigl(Y|T,\mathcal{O}_Y(G)\bigr).
\end{equation} Recalling that $g=f_{\vert T}\colon T\to S$, we consider the composition \[ \mathcal{O}_T\longrightarrow \mathcal{O}_T(E_{\vert T})\longrightarrow \mathcal{O}_T\bigl(E_{\vert T}+g^*(D_{\vert S})\bigr) \] where the second map is defined by $g^*\sigma$. This gives a section \[ \sigma'\in H^0\bigl(T,\mathcal{O}_T\bigl(E_{\vert T}+g^*(D_{\vert S})\bigr)\bigr) \] with divisor $E_{\vert T}+g^*\Sigma$. By \eqref{SigmaIneq32}, $E_{\vert T}+g^*\Sigma\geq \Theta_{\vert T}+\lfloor g^*(B_{\vert S})\rfloor=\lfloor \Gamma\rfloor_{\vert T}$, so the section $\sigma'$ comes from a section \[ \tau\in H^0\bigl(T,\mathcal{O}_T\bigl(E_{\vert T}+g^*(D_{\vert S})-\lfloor \Gamma\rfloor_{\vert T}\bigr)\bigr)=H^0\bigl(T,\mathcal{O}_T(G_{\vert T})\bigr), \] where the last equality holds by the definition of $G$ in \eqref{DefineG32}. Therefore by \eqref{SectionLiftG32}, $\tau$ lifts to $\tilde\tau\in H^0(Y,\mathcal{O}_Y(G))$, which in turn gives rise to an element \[ \rho\in H^0\bigl(Y,\mathcal{O}_Y(G+\lfloor \Gamma\rfloor)\bigr)=H^0\bigl(Y,\mathcal{O}_Y(f^*D+E)\bigr). \] By construction, we have $\rho_{\vert T}=\sigma'$. Since $E$ is $f$-exceptional, pushing forward we see that $\sigma\in H^0(X|S,\mathcal{O}_X(D))$ as desired. \end{proof} \begin{lemma}[cf.\ {\cite[Lemma 3.3]{CL12}}]\label{lem:cl1233} Let $\pi\colon X \to Z$ be a projective morphism of integral Noetherian schemes of equal characteristic zero such that $X$ is regular and such that $Z$ is affine and excellent with a dualizing complex $\omega_Z^\bullet$. Denote by $K_X$ a canonical divisor defined using $\omega_X^\bullet = \pi^!\omega_Z^\bullet$. \par Let $S$ be a prime divisor on $X$, let $B$ be an effective $\mathbf{Q}$-divisor on $X$, and let $D$ be an effective $\mathbf{Q}$-divisor on $X$ such that the $\mathbf{Q}$-pair $(X,S+B+D)$ is log regular, $S \not\subseteq \Supp(B)$, $\lfloor B \rfloor = 0$, and $D$ and $S+B$ have no common components. Let $P$ be a $\pi$-nef $\mathbf{Q}$-divisor, and set $\Delta = S+B+P$. Assume that \[ K_X+\Delta \sim_\mathbf{Q} D. \] Let $k$ be a positive integer such that the divisors $kP$ and $kB$ are integral, and write $\Omega = (B+P)_{\vert S}$. Then, there is a $\pi$-very ample divisor $H$ on $X$ such that, for all sections $\sigma \in H^0(S,\mathcal{O}_S(k(K_S+\Omega)))$ and $u\in H^0\bigl(S,\mathcal{O}_S(H_{\vert S})\bigr)$ and all positive integers $l$, we have \[\sigma^lu\in H^0\bigl(X|S,\mathcal{O}_X\bigl(lk(K_X+\Delta)+H\bigr)\bigr).\] In particular, if $\Sigma$ (resp.\ $U$) is the divisor of $\sigma$ (resp.\ $u$), we have $l\Sigma+U\in \lvert lk(K_X+\Delta)+H\rvert_S.$ \end{lemma} \begin{proof} For each $m \geq 0$, let $l_m = \bigl\lfloor\frac{m}{k}\bigr\rfloor$, let $r_m = m - l_m k \in \{0, 1,\ldots, k -1\}$, define $B_m = \lceil mB\rceil - \lceil (m-1)B\rceil$, and set $P_m = kP$ if $r_m = 0$ and $P_m = 0$ otherwise. Let \begin{equation}\label{DefineDm33} D_m =\sum_{i=1}^m(K_X + S + P_i + B_i) = m(K_X + S) + l_m kP + \lceil mB\rceil, \end{equation} and note that $D_m$ is integral and \begin{equation}\label{DmANDDrm33} D_m = l_m k(K_X + \Delta) + D_{r_m}. \end{equation} We choose a suitable $\pi$-very ample divisor $H$ as follows. First, we choose an arbitrary $\pi$-very ample divisor $H'$ on $X$. Then, there exists an integer $n > 0$ such that $\mathcal{O}_X(nH'+D_j)$ is $\pi$-generated for every $j \in \{0,1,\ldots,k-1\}$ by \cite[Proposition 2.6.8$(i)$]{EGAII}. Now $\mathcal{O}_X((n+m)H'+D_j)$ is $\pi$-very ample for every $j \in \{0,1,\ldots,k-1\}$ and every integer $m > 0$ by \cite[Proposition 4.4.8]{EGAII}.
Finally, by relative Serre vanishing \cite[Th\'eor\`eme 2.2.1$(ii)$]{EGAIII1}, choosing $m$ large enough and setting $H = (n+m)H'$, we have $H^1(X, \mathcal{O}_X(D_k + H-S))=0$. Therefore, our $H$ satisfies \begin{equation}\label{Casek33} H^0\bigl(X|S,\mathcal{O}_X(D_k + H)\bigr) = H^0\bigl(S,\mathcal{O}_S( (D_k+H)_{\vert S})\bigr) \end{equation} and $\mathcal{O}_X(H+D_j)$ is $\pi$-very ample for every $j \in \{0,1,\ldots,k-1\}$. \par We claim the following. For all $m\geq k$ and all sections $u_m\in H^0(S,\mathcal{O}_S((D_{r_m}+H)_{\vert S}))$, we have \[\sigma^{l_m}u_m\in H^0\bigl(X|S,\mathcal{O}_X(D_m+H)\bigr).\] The case $r_m=0$ is what we want. The claim is local, so after replacing $Z$ with $\Spec(\mathcal{O}_{Z,z})$ for every point $z \in Z$, we may assume that $Z$ is local, in which case we may use our version of the Bertini theorem (Theorem \ref{thm:bertini} and Remark \ref{rem:bertinisnc}). \par We prove the claim by induction on $m$. The case $m = k$ is covered by \eqref{Casek33}. Now let $m > k$, and pick a small positive rational number $\delta$ such that $D_{r_{m-1}} + H + \delta B_m$ is $\pi$-ample. Note that $0 \leq B_m \leq \lceil B\rceil$, that $(X, S + B + D)$ is log regular, and that $D$ and $S + B$ have no common components. Thus, there exists a small positive rational number $\varepsilon$ such that, if we define \begin{equation}\label{DefineF33} F = (1 - \varepsilon\delta)B_m + l_{m-1}k\varepsilon D, \end{equation} then $(X, S + F)$ is log regular, $\lfloor F\rfloor = 0$ and $S \not\subseteq \Supp(F)$. In particular, by Theorem \ref{thm:bertini} and Remark \ref{rem:bertinisnc} applied to $S\to Z$, there exists an element $W$ of the $\pi$-generated (in fact $\pi$-very ample) linear system $\lvert(D_{r_{m-1}} + H)_{|S}\rvert$ such that $W$ is reduced, does not share a component with $F_{|S}$, and such that $(S,W+F_{|S})$ is log regular. Thus, if we let \begin{equation}\label{DefinePhi33} \Phi = F_{|S} + (1 -\varepsilon)W, \end{equation} then $(S, \Phi)$ is klt. By induction, there is a divisor $\Theta\in \lvert D_{m-1} + H\rvert$ whose support does not contain $S$ and such that $\Theta_{|S} = l_{m-1}\Sigma + W$. Note that while the claim is a statement about sections, each section determines a divisor, which is how $\Theta$ arises. \par Denoting $C = (1 -\varepsilon)\Theta + F$, by \eqref{DefineF33} we have \begin{equation}\label{DefineC33} C \sim_{\mathbf Q} (1 -\varepsilon)(D_{m-1} + H) + (1 - \varepsilon\delta)B_m + l_{m-1}k\varepsilon D, \end{equation} and \eqref{DefinePhi33} yields \begin{equation}\label{RestrictC33} C_{|S} = (1 -\varepsilon)\Theta_{|S} + F_{|S} = (1-\varepsilon)l_{m-1}\Sigma +\Phi \leq \bigl(l_m\Sigma + \prdiv(u_m)\bigr) + \Phi. \end{equation} By the choice of $\delta$ and since $P_m= kP$ or $0$ is $\pi$-nef, the $\mathbf{Q}$-divisor \[ A = \varepsilon (D_{r_{m-1}} + H + \delta B_m) + P_m\] is $\pi$-ample. Then by \eqref{DefineDm33}, \eqref{DmANDDrm33}, and \eqref{DefineC33}, we have \begin{align*} D_m + H &= K_X + S + D_{m-1} + B_m + P_m + H\\ &= K_X + S + (1-\varepsilon)D_{m-1} + l_{m-1}k\varepsilon(K_X + \Delta) + \varepsilon D_{r_{m-1}} + B_m + P_m + H\\ &\sim_{\mathbf Q} K_X + S + A + (1-\varepsilon)D_{m-1} + l_{m-1}k\varepsilon D + (1 -\varepsilon\delta)B_m + (1 -\varepsilon)H\\ &\sim_{\mathbf Q} K_X + S + A + C, \end{align*} and thus $\sigma^{l_m}u_m\in H^0\bigl(X|S, \mathcal{O}_X(D_{m} + H)\bigr)$ by \eqref{RestrictC33} and Lemma \ref{lem:cl1232}.
\end{proof} \begin{theorem}[{cf.\ \cite[Theorem 3.4]{CL12}}]\label{thm:cl1234} Let $\pi\colon X \to Z$ be a projective morphism of integral Noetherian schemes of equal characteristic zero such that $X$ is regular and such that $Z$ is affine and excellent with a dualizing complex $\omega_Z^\bullet$. Denote by $K_X$ a canonical divisor defined using $\omega_X^\bullet = \pi^!\omega_Z^\bullet$. \par Let $S$ be a prime divisor on $X$ and let $B$ be an effective $\mathbf{Q}$-divisor on $X$ such that $(X,S+B)$ is log regular, $S \not\subseteq \Supp(B)$, and $\lfloor B \rfloor = 0$. Let $A$ be a $\pi$-ample $\mathbf{Q}$-divisor on $X$, and set $\Delta = S + A + B$. Let $C$ be an effective $\mathbf{Q}$-divisor on $S$ such that $(S,C)$ is canonical, and let $m$ be a positive integer such that $mA$, $mB$, and $mC$ are integral. \par Assume that there exists a positive integer $q$ such that $qA$ is $\pi$-very ample, and that we have \begin{gather*} S \not\subseteq \Bs\Bigl\lvert qm\Bigl(K_X+\Delta+\frac{1}{m}A\Bigr)\Bigr\rvert\\ C \le B_{\vert S} - B_{\vert S} \wedge \frac{1}{qm} \Fix\Bigl\lvert qm\Bigl(K_X+\Delta+\frac{1}{m}A \Bigr) \Bigr\rvert_S. \end{gather*} Then, for every nonzero global section $\sigma \in H^0(S,\mathcal{O}_S(m(K_S+A_{\vert S}+C)))$, the image of $\sigma$ under the map \[ H^0\bigl(S,\mathcal{O}_S\bigl(m(K_S+A_{\vert S}+C)\bigr)\bigr) \xrightarrow{\cdot m(B_{\vert S} - C)} H^0\bigl(S,\mathcal{O}_S\bigl(m(K_X+\Delta)_{\vert S}\bigr)\bigr) \] lies in $H^0(X\vert S,\mathcal{O}_X(m(K_X+\Delta)))$. In particular, we have \[ \bigl\lvert m(K_S+A_{\vert S}+C) \bigr\rvert + m(B_{\vert S} - C) \subseteq \bigl\lvert m(K_X+\Delta)\bigr\rvert_S, \] and if $\lvert m(K_S+A_{\vert S}+C) \rvert \ne \emptyset$, then $\lvert m(K_X+\Delta)\rvert_S \ne \emptyset$, and \[ \Fix\bigl\lvert m(K_S+A_{\vert S}+C) \bigr\rvert + m( B_{\vert S} - C ) \ge \Fix\bigl\lvert m(K_X+\Delta)\bigr\rvert_S \ge m \FFix_S(K_X+\Delta). \] \end{theorem} \begin{proof} The ``in particular'' statements follow from Proposition \ref{prop:har29}, and hence it suffices to show the module-theoretic statement. By flat base change and \cite[Chapter II, \S3, n\textsuperscript{o} 3, Corollary 1 to Theorem 1]{BouCA}, it suffices to show the statement after replacing $Z$ with $\Spec(\mathcal{O}_{Z,z})$ for every point $z \in Z$. We may therefore assume $Z$ is local, in which case we may use our version of the Bertini theorem (Theorem \ref{thm:bertini} and Remark \ref{rem:bertinisnc}). \par By \cite[Chapter I, \S3, Main Theorem I$(n)$]{Hir64}, we can find a simultaneous log resolution $f\colon Y\to X$ of $(X,S\cup \Supp(B))$ and the base ideal $\mathfrak{b}(\lvert qm(K_X+\Delta+\frac{1}{m}A)\rvert)$. Then, for some choice of the canonical divisor $K_Y$, there are ${\mathbf Q}$-divisors $B', E \geq 0$ on $Y$ with no common components, such that $E$ is $f$-exceptional and \begin{align*} K_Y + T + B'&= f^*(K_X + S + B) + E, \intertext{where $T = f_*^{-1}S$. Note that this implies} K_T + B'_{|T} &= g^*(K_S + B_{|S}) + E_{|T} \end{align*} where $g=f_{\vert T}\colon T\to S$ and $K_T$ and $K_S$ are some choices of canonical divisors of $T$ and $S$ respectively. Since $(Y, T + B' + E)$ is log regular and $B'$ and $E$ do not have common components, it follows that $B'_{\vert T}$ and $E_{|T}$ do not have common components. In particular, $E_{|T}$ is $g$-exceptional and $g_*B'_{|T} = B_{|S}$.
Let $\Gamma = T + f^*A + B'$, and define \[F_q =\frac{1}{qm} \Fix \Bigl|qm\Bigl(K_Y + \Gamma + \frac{1}{m}f^*A\Bigr)\Bigr|.\] We notice that $qm(K_Y + \Gamma + \frac{1}{m}f^*A)=f^*(qm(K_X+\Delta+\frac{1}{m}A))+qmE$ and that $E$ is $f$-exceptional. Therefore, $\mathfrak{b}(\lvert qm(K_Y + \Gamma + \frac{1}{m}f^*A)\rvert)$ is the product of $\mathcal O_Y(-qmE)$ and $\mathfrak{b}(\lvert f^*(qm(K_X+\Delta+\frac{1}{m}A))\rvert)$, the latter being equal to $f^*\mathfrak{b}(\lvert qm(K_X+\Delta+\frac{1}{m}A)\rvert)$. Since we resolved $\mathfrak{b}(\lvert qm(K_X+\Delta+\frac{1}{m}A)\rvert)$, its pullback is an invertible ideal, hence so is $\mathfrak{b}(\lvert qm(K_Y + \Gamma + \frac{1}{m}f^*A)\rvert)$. Therefore the mobile part \[ \Mob\Bigl(qm\Bigl(K_Y + \Gamma + \frac{1}{m}f^*A\Bigr)\Bigr)=qm\Bigl(K_Y+\Gamma +\frac{1}{m}f^*A-F_q\Bigr) \] is $(\pi\circ f)$-generated. By Theorem \ref{thm:bertini} and Remark \ref{rem:bertinisnc}, we may take $D^\circ\in \lvert K_Y+\Gamma +\frac{1}{m}f^*A-F_q\rvert_{\mathbf Q}$ such that $(Y,T+B'+F_q+D^\circ)$ is log regular and such that $D^\circ$ does not contain any component of $T+B'$. Now define \[B'_q = B'- B' \wedge F_q, \qquad \Gamma_q = T + B'_q + f^*A, \qquad D=D^\circ+F_q-B' \wedge F_q.\] Then, \[ D\sim_{\mathbf Q}K_Y+\Gamma_q+\frac{1}{m}f^*A,\] the pair $(Y,T+B'_q+D)$ is log regular, and $D$ does not contain any component of $T+B'_q$. Let $C'=g^{-1}_*C$, where as before $g=f_{\vert T}\colon T\to S$. We claim that $C'\leq B'_{q|T}$. Assuming the claim, let us show how it implies the theorem. By Lemma \ref{lem:cl1233}, there exists a $(\pi\circ f)$-very ample divisor $H$ on $Y$ such that for all divisors $\Sigma'\in\bigl\lvert qm\bigl(K_T+(B'_q+(1+\frac{1}{m})f^*A)_{\vert T}\bigr)\bigr\rvert$ and $U\in \lvert H_{\vert T}\rvert$ and for all positive integers $p$, we have \[p\Sigma'+U\in \Bigl|pqm\Bigl(K_Y+\Gamma_q+\frac{1}{m}f^*A\Bigr)+H\Bigr|_T.\] Since $f$ is constructed as a composition of blowups of $X$ along regular centers, there exists an effective $f$-exceptional divisor $G$ such that $-G$ is $f$-ample. After possibly replacing $G$ by a small rational multiple, we therefore see that $f^*A-G$ is $(\pi\circ f)$-ample and $\lfloor B' + \frac{1}{m}G \rfloor = 0$, in which case $(T,(B'+\frac{1}{m}G)_{\vert T})$ is klt. Now, we choose a positive integer $k$ so large that for $l=kq$ the $\mathbf Q$-divisor \begin{displaymath} A_0=\frac{1}{m}(f^*A-G)-\frac{m-1}{ml}H \end{displaymath} is $(\pi\circ f)$-ample. This is possible because $f^*A-G$ is $(\pi\circ f)$-ample. By Theorem \ref{thm:bertini} and Remark \ref{rem:bertinisnc}, we may find reduced divisors $W_1\in \lvert q(f^*A)_{|T}\rvert$ and $W_2\in\lvert H_{|T}\rvert$ such that $(B'+\frac{1}{m}G)_{|T}, W_1$ and $W_2$ share no common components and such that $(T,(B'+\frac{1}{m}G)_{|T}+W_1+W_2)$ is log regular. For $W=kW_1+W_2$ and \[ \Phi=B'_{q|T}+\frac{1}{m}G_{|T}+\frac{1}{l}W=B'_{q|T}+\frac{1}{m}G_{|T}+\frac{1}{q}W_1+\frac{1}{l}W_2, \] the pair $(T,\Phi)$ is klt, since $\lfloor B'+\frac{1}{m}G\rfloor=0$. Now the proof of \cite[Theorem 3.4]{CL12} applies verbatim, except that \cite[Lemma 3.2]{CL12} should be replaced by Lemma \ref{lem:cl1232}. It remains to verify the claim $C'\leq B'_{q|T}$. This is also identical to the corresponding part of the proof of \cite[Theorem 3.4]{CL12}, except that ``basepoint-free'' should be replaced by ``$(\pi\circ f)$-generated.'' \end{proof} As in \cite{CL12}, we immediately obtain the following version of the lifting theorem of Hacon and M\textsuperscript{c}Kernan \cite[Theorem 6.3]{HM10}.
\begin{corollary}[{cf.\ \citeleft\citen{CL12}\citemid Corollary 3.5\citeright}] \label{cor:cl1235} Let $\pi\colon X \to Z$ be a projective morphism of integral Noetherian schemes of equal characteristic zero such that $X$ is regular and such that $Z$ is affine and excellent with a dualizing complex $\omega_Z^\bullet$. Denote by $K_X$ a canonical divisor defined using $\omega_X^\bullet = \pi^!\omega_Z^\bullet$. \par Let $S$ be a prime divisor on $X$ and let $B$ be an effective $\mathbf{Q}$-divisor on $X$ such that $(X,S+B)$ is log regular, $S \not\subseteq \Supp(B)$, and $\lfloor B \rfloor = 0$. Suppose that $(S,B_{\vert S})$ is canonical. Let $A$ be a $\pi$-ample $\mathbf{Q}$-divisor on $X$, and set $\Delta = S + A + B$. Let $m$ be a positive integer such that $mA$ and $mB$ are integral and such that $S \not\subseteq \Bs\lvert m(K_X+\Delta)\rvert$. Set \[ \Phi_m = B_{\vert S} - B_{\vert S} \wedge \frac{1}{m} \Fix\bigl\lvert m(K_X+\Delta) \bigr\rvert_S. \] Then, we have \[ \bigl\lvert m(K_S+A_{\vert S}+\Phi_m) \bigr\rvert + m(B_{\vert S} - \Phi_m) \subseteq \bigl\lvert m(K_X+\Delta)\bigr\rvert_S. \] \end{corollary} \begin{proof} The proof of \cite[Corollary 3.5]{CL12} applies after replacing \cite[Theorem 3.4]{CL12} with our Theorem \ref{thm:cl1234}. \end{proof} \begin{lemma}[cf.\ {\cite[Lemma 3.6]{CL12}}]\label{lem:cl1236} Let $\pi\colon X \to Z$ be a projective morphism of integral Noetherian $\mathbf{Q}$-schemes such that $X$ is regular and such that $Z$ is affine and excellent with a dualizing complex $\omega_Z^\bullet$. Denote by $K_X$ a canonical divisor defined using $\omega_X^\bullet = \pi^!\omega_Z^\bullet$. \par Let $S$ be a regular prime divisor on $X$, let $D$ be a $\mathbf{Q}$-divisor on $X$ such that $S \not\subseteq \SB(D)$, and let $A$ be a $\pi$-ample $\mathbf{Q}$-divisor. Then, we have \[ \frac{1}{q} \Fix\bigl\lvert q(D+A) \bigr\rvert_S \le \FFix_S(D) \] for all sufficiently divisible positive integers $q$. \end{lemma} \begin{proof} The proof of \cite[Lemma 3.6]{CL12} carries over word for word with the following changes: \begin{itemize} \item All instances of the words ``ample'' and ``very ample'' become ``$\pi$-ample'' and ``$\pi$-very ample,'' respectively. \item The sentence ``In particular, if $V\in \lvert F\rvert$ is a general element, then $P\not\subseteq \Supp f_*V$'' becomes ``In particular, for some $V\in \lvert F\rvert$ we have $P\not\subseteq \Supp f_*V$.'' \item The reference \cite[Lemma 3.1]{CL12} should be replaced by Lemma \ref{lem:cl1231}. \end{itemize} We note that the $\mathbf{Q}$-divisor $D'$ in the proof of \cite[Lemma 3.6]{CL12} does not come from Bertini's theorem, since the existence of a $\mathbf{Q}$-divisor $D' \sim_\mathbf{Q} D$ satisfying $S \not\subseteq\Supp(D')$ and $\mult_P(D'_{\vert S}) < 1/q$ follows from the definition of $\FFix_S(D)$. \end{proof} \section{\texorpdfstring{$\mathcal{B}_A^S(V)$}{B\_A\textasciicircum S(V)} is a rational polytope} Following \cite[\S4]{CL12}, we prove that the set $\mathcal{B}_A^S(V)$ defined in Definition \ref{def:cl1224} is a rational polytope. Given the work we have done in \S\ref{sect:cl12s3}, the proof in \cite[\S4]{CL12} applies almost verbatim.\medskip \par We replace \cite[Setup 4.1]{CL12} with the following setup. In the rest of this section, we write ``Setup $\text{\ref{setup:cl1241}}_n$'' to mean ``Setup \ref{setup:cl1241} when $\dim(X) = n$.'' We have only written down the notation from \cite[Setup 4.1]{CL12} that will be used in the statements in the rest of this section.
\begin{setup}[cf.\ {\cite[Setup 4.1]{CL12}}]\label{setup:cl1241} Let $\pi\colon X \to Z$ be a projective morphism of integral Noetherian schemes of equal characteristic zero, such that $X$ is regular of dimension $n$ and such that $Z$ is affine and excellent and has a dualizing complex $\omega_Z^\bullet$. Let $S,S_1,S_2,\ldots,S_p$ be distinct prime divisors on $X$ such that $(X,S+\sum_{i=1}^p S_i)$ is log regular. We assume that Theorem $\text{\ref{thm:cl12a}}_{n-1}$ holds. Note that we have already shown that Theorem \ref{thm:cl12b} holds. \par Consider a $\pi$-ample $\mathbf{Q}$-divisor $A$ on $X$. Let \[ V = \sum_{i=1}^p \mathbf{R} \cdot S_i \subseteq \Div_\mathbf{R}(X), \] and let $W \subseteq \Div_\mathbf{R}(S)$ be the subspace spanned by the components of $\sum_i (S_i)_{\vert S}$. By Theorem $\text{\ref{thm:cl12b}}$, the set \[ \mathcal{E}_{A_{\vert S}}(W)=\Set[\big]{E\in\mathcal L(W) \given \lvert K_S+A_{\vert S}+E\rvert_{\mathbf{R}}\neq \emptyset} \] is a rational polytope. If $E_1,E_2,\ldots,E_d$ are its extreme points, then the ring \[ R\bigl(S/Z;K_{S}+A_{\vert S}+E_1,K_{S}+A_{\vert S}+E_2,\ldots,K_{S}+A_{\vert S}+E_d\bigr) \] is finitely generated as a $H^0(Z,\mathcal{O}_Z)$-algebra by Theorem $\textup{\ref{thm:cl12a}}_{n-1}$. Therefore, if we set \[ \mathbf{F}(E) = \FFix(K_{S}+A_{\vert S}+E) \] for a $\mathbf{Q}$-divisor $E \in \mathcal{E}_{A_{\vert S}}(W)$, then \cite[Lemma 2.28]{CL12} implies that $\mathbf{F}$ extends to a rational piecewise affine function on $\mathcal{E}_{A_{\vert S}}(W)$, and there exists a positive integer $k$ such that \[ \mathbf{F}(E) = \frac{1}{m} \Fix\bigl\lvert m(K_{S}+ A_{\vert S}+E)\bigr\rvert \] for every $E \in \mathcal{E}_{A_{\vert S}}(W)$ and every $m \in \mathbf{N}$ such that $mA/k$ and $mE/k$ are integral. \par For a $\mathbf{Q}$-divisor $B \in \mathcal{B}_{A}^{S}(V)$, set \[ \mathbf{F}_{S}(B) = \FFix_{S}(K_X+S+A+B), \] and for every positive integer $m$ such that $mA$ and $mB$ are integral and $S \not\subseteq \Bs\lvert m(K_X+S+A+B)\rvert$, denote \[ \Phi_m(B) = B_{\vert S} - B_{\vert S} \wedge \frac{1}{m} \Fix\bigl\lvert m(K_X+S+A+B)\bigr\rvert_S. \] Let $\Phi(B) = B_{\vert S} - B_{\vert S} \wedge \mathbf{F}_S(B)$, where we note that $\Phi(B) = \limsup_{m \to \infty} \Phi_m(B)$. \end{setup} The analogue of the main result in \cite[\S4]{CL12} is the following: \begin{theorem}[cf.\ {\cite[Theorem 4.3]{CL12}}]\label{thm:cl1243} Let the assumptions of Setup $\text{\ref{setup:cl1241}}_n$ hold. Let $\mathcal{G}$ be a rational polytope contained in the interior of $\mathcal{L}(V)$, and assume that $(S,G_{\vert S})$ is terminal for every $G \in \mathcal{G}$. Denote $\mathcal{P} = \mathcal{G} \cap \mathcal{B}_A^S(V)$. We then have the following: \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item $\mathcal{P}$ is a rational polytope. \item $\Phi$ extends to a rational piecewise affine function on $\mathcal{P}$, and there exists a positive integer $\ell$ such that $\Phi(P) = \Phi_m(P)$ for every $P \in \mathcal{P}$ and every positive integer $m$ such that $mP/\ell$ is integral. \end{enumerate} \end{theorem} \begin{proof} We work through the proofs of \cite[Lemma 4.2]{CL12}, \cite[Lemma 4.4]{CL12}, and \cite[Theorem 4.3]{CL12}. Throughout, \cite[Theorem 3.4]{CL12} and \cite[Lemma 3.6]{CL12} should be replaced by our Theorem \ref{thm:cl1234} and Lemma \ref{lem:cl1236}, respectively. \par The proof of \cite[Lemma 4.2]{CL12} works with no changes. 
The proof of \cite[Lemma 4.4]{CL12} works with the following changes: \begin{itemize} \item In Step 2, the rational number $0 < \varepsilon \ll 1$ should be chosen such that the divisors $D+A/4$ and $\varepsilon(K_X+S+A+B)+A/4$ are $\pi$-ample. \item In the first paragraph of \cite[p.\ 2442]{CL12}, the divisors \[ H = \Gamma - B_\delta + \frac{1}{4m}A \qquad \text{and} \qquad G = \frac{\varepsilon}{m} (K_X+S+A+B_\delta) +\frac{1}{4m}A \] are $\pi$-ample. \end{itemize} The proof of \cite[Theorem 4.3]{CL12} works with no changes. \end{proof} As a result, we obtain the following corollary. \begin{corollary}[cf.\ {\cite[Corollary 4.6]{CL12}}]\label{cor:cl1246} Assume Theorem $\text{\ref{thm:cl12a}}_{n-1}$ holds. Let $\pi\colon X \to Z$ be a projective morphism of integral Noetherian schemes of equal characteristic zero, such that $X$ is regular of dimension $n$ and such that $Z$ is affine and excellent with a dualizing complex $\omega_Z^\bullet$. Let $S,S_1,S_2,\ldots,S_p$ be distinct prime divisors on $X$ such that $(X,S+\sum_{i=1}^p S_i)$ is log regular. \par Let \[ V = \sum_{i=1}^p \mathbf{R} \cdot S_i \subseteq \Div_\mathbf{R}(X), \] and let $A$ be a $\pi$-ample $\mathbf{Q}$-divisor on $X$. Then $\mathcal{B}^S_A(V)$ is a rational polytope. $\mathcal{B}^S_A(V) = \Set{B \in \mathcal{L}(V) \given \sigma_S(K_X+S+A+B) = 0}$. \end{corollary} \begin{proof} The proof of \cite[Corollary 4.6]{CL12} applies with the following changes: \begin{itemize} \item In the first paragraph, \cite[Theorem 4.3]{CL12} should be replaced by our Theorem \ref{thm:cl1243}. \item In the second paragraph, \cite[Lemma 2.2]{CL12} holds for the pair $(X,S+B^G)$ since log resolutions exist \cite[Theorem 1.1.6]{Tem18}, and the proof of \cite[Proposition 2.36(1)]{KM98} works in this setting as well. Later, we choose $f^*A^G - F$ to be $(\pi\circ f)$-ample, where if $F$ is small enough, then $(T,(C+F)_{\vert T})$ is terminal. Here, the choice of $F$ is exactly like the choice of $G$ in the proof of Theorem \ref{thm:cl1234}, which works since Temkin's log resolutions are constructed by blowing up regular centers (see also \cite[Claim 8.1]{Kol21qfac}).\qedhere \end{itemize} \end{proof} \section{Finite generation} In this section, we prove Theorem $\textup{\ref{thm:cl12a}}_n$ assuming Theorem $\textup{\ref{thm:cl12a}}_{n-1}$. Again, we note that we have already shown Theorem \ref{thm:cl12b}. \begin{lemma}[cf.\ {\cite[Lemma 6.1]{CL12}}]\label{lem:cl1261} Let $\pi\colon X \to Z$ be a proper morphism of integral Noetherian schemes such that $X$ is regular and such that $Z$ is affine. \par Let $S_1,S_2,\ldots,S_p$ be distinct prime divisors on $X$ such that $(X,\sum_{i=1}^p S_i)$ is log regular. Let \[ \mathcal{C} \subseteq \sum_{i=1}^p \mathbf{R}_{\ge0}\cdot S_i \subseteq \Div_\mathbf{R}(X) \] be a rational polyhedral cone, and let $\mathcal{C} = \bigcup_{j=1}^q \mathcal{C}_j$ be a rational polyhedral decomposition. Set $\mathcal{S} = \mathcal{C} \cap \Div(X)$ and $\mathcal{S}_j = \mathcal{C}_j \cap \Div(X)$ for all $j$. Assume the following: \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item\label{lem:cl1261condi} There exits a real number $M > 0$ such that if $\sum_i \alpha_iS_i \in \mathcal{C}_j$ for some $j$ and for some $\alpha_i \in \mathbf{N}$ where $\sum_i \alpha_i \ge M$, then $\sum_i \alpha_i S_i - S_j \in \mathcal{C}$; and \item\label{lem:cl1261condii} The ring $\res_{S_j}(R(X/Z;\mathcal{S}_j))$ is finitely generated as a $H^0(X \vert S_j,\mathcal{O}_{S_j})$-algebra for every $j \in \{1,2,\ldots,p\}$. 
\end{enumerate} Then, the relative divisorial ring $R(X/Z;\mathcal{S})$ is finitely generated as an $H^0(Z,\mathcal{O}_Z)$-algebra. \end{lemma} \begin{proof} After replacing $\pi\colon X \to Z$ by its Stein factorization \cite[Th\'eor\`eme 4.3.1]{EGAIII1}, we may assume that $H^0(Z,\mathcal{O}_Z)$ is the degree zero piece of $R(X/Z;\mathcal{S})$. We now follow the proof of \cite[Lemma 6.1]{CL12}. For every $i \in \{1,2,\ldots,p\}$, we use Proposition \ref{prop:har29} to choose sections $\sigma_i \in H^0(X,\mathcal{O}_X(S_i))$ such that $\prdiv(\sigma_i) = S_i$. Let $\mathfrak{R} \subseteq R(X/Z;S_1,S_2,\ldots,S_p)$ be the $H^0(Z,\mathcal{O}_Z)$-subalgebra generated by $R(X/Z;\mathcal{S})$ and $\sigma_1,\sigma_2,\ldots,\sigma_p$. Note that $\mathfrak{R}$ is graded by $\sum_{i=1}^p \mathbf{N} \cdot S_i \subseteq \Div(X)$. By \cite[Proposition 1.2.2]{ADHL15}, since $R(X/Z;\mathcal{S})$ is a Veronese subring of $\mathfrak{R}$, it suffices to show that $\mathfrak{R}$ is finitely generated as an $H^0(Z,\mathcal{O}_Z)$-algebra. \par For each $\alpha = (\alpha_1,\alpha_2,\ldots,\alpha_p) \in \mathbf{N}^p$, set $D_\alpha = \sum_i \alpha_i S_i$ and $\deg(\alpha) = \sum_i \alpha_i$, and for a section $\sigma \in H^0(X,\mathcal{O}_X(D_\alpha))$, set $\deg(\sigma) = \deg(\alpha)$. By $(\ref{lem:cl1261condii})$, for each $j \in \{1,2,\ldots,p\}$, there exists a finite set $\mathcal{H}_j \subseteq R(X/Z;\mathcal{S}_j)$ such that $\res_{S_j}( R(X/Z;\mathcal{S}_j))$ is generated by the set \[ \Set[\big]{\sigma_{\vert S_j} \given \sigma \in \mathcal{H}_j} \] over $H^0(X \vert S_j,\mathcal{O}_X)$. Since the $H^0(Z,\mathcal{O}_Z)$-module $H^0(X,\mathcal{O}_X(D_\alpha))$ is finitely generated for every $\alpha \in \mathbf{N}^p$, there is a finite set $\mathcal{H} \subseteq R(X/Z;\mathcal{S}_j)$ such that \[ \{\sigma_1,\sigma_2,\ldots,\sigma_p\} \cup \mathcal{H}_1 \cup \mathcal{H}_2 \cup \cdots \cup \mathcal{H}_p \subseteq \mathcal{H} \] and such that \[ H^0\bigl(X,\mathcal{O}_X(D_\alpha)\bigr) \subseteq \bigl(H^0(Z,\mathcal{O}_Z)\bigr)[\mathcal{H}] \] inside of $\mathfrak{R}$ for all $\alpha \in \mathbf{N}^p$ with $D_\alpha \in \mathcal{S}$ and $\deg(\alpha) \le M$, where $(H^0(Z,\mathcal{O}_Z))[\mathcal{H}] \subseteq \mathfrak{R}$ holds by definition of $\mathfrak{R}$ and $\mathcal{H}$. To show that $\mathfrak{R}$ is finitely generated as an $H^0(Z,\mathcal{O}_Z)$-algebra, it therefore suffices to show that $\mathfrak{R} \subseteq (H^0(Z,\mathcal{O}_Z))[\mathcal{H}]$. \par Let $\chi \in \mathfrak{R}$. By definition of $\mathfrak{R}$, we can write \[ \chi = \sum_i \sigma_1^{\lambda_{1,i}}\sigma_2^{\lambda_{2,i}} \cdots \sigma_p^{\lambda_{p,i}} \chi_i, \] where $\chi_i \in H^0(X,\mathcal{O}_X(D_{\alpha_i}))$ for some $D_{\alpha_i} \in \mathcal{S}$ and $\lambda_{j,i} \in \mathbf{N}$. It therefore suffices to show that $\chi_i \in (H^0(Z,\mathcal{O}_Z))[\mathcal{H}]$. After replacing $\chi$ by $\chi_i$, we may assume that $\chi \in H^0(X,\mathcal{O}_X(D_\alpha))$ for some $D_\alpha \in \mathcal{S}$. We induce on $\deg(\chi)$. If $\deg(\chi) \le M$, then $\chi \in (H^0(Z,\mathcal{O}_Z))[\mathcal{H}]$ by the definition of $\mathcal{H}$ in the previous paragraph. Now suppose $\deg(\chi) > M$. 
Then, there exists $j \in \{1,2,\ldots,p\}$ such that $D_\alpha \in \mathcal{S}_j$, and hence there exist $\theta_1,\theta_2,\ldots,\theta_z \in \mathcal{H}$ and a polynomial $\varphi \in (H^0(Z,\mathcal{O}_Z))[X_1,X_2,\ldots,X_z]$ such that \[ \chi_{\vert S_j} = \varphi\bigl(\theta_{1\vert S_j},\theta_{2\vert S_j},\ldots,\theta_{z\vert S_j}\bigr). \] By the exact sequence \[ 0 \longrightarrow H^0\bigl(X,\mathcal{O}_X(D_\alpha - S_j)\bigr) \xrightarrow{\sigma_j\cdot} H^0\bigl(X,\mathcal{O}_X(D_\alpha)\bigr) \longrightarrow H^0\bigl(S_j,\mathcal{O}_{S_j}(D_\alpha)\bigr), \] we therefore obtain \[ \chi - \varphi(\theta_1,\theta_2,\ldots,\theta_z) = \sigma_j \cdot \chi' \] for some $\chi' \in H^0(X,\mathcal{O}_X(D_\alpha-S_j))$. Since $D_\alpha - S_j \in \mathcal{S}$ by $(\ref{lem:cl1261condi})$ and since $\deg(\chi') < \deg(\chi)$, by the inductive hypotheses we see that $\chi' \in (H^0(Z,\mathcal{O}_Z))[\mathcal{H}]$. Thus, we have \[ \chi = \sigma_j \cdot \chi' + \varphi(\theta_1,\theta_2,\ldots,\theta_z) \in \bigl(H^0(Z,\mathcal{O}_Z)\bigr)[\mathcal{H}] \] as desired. \end{proof} \begin{lemma}[cf.\ {\cite[Lemma 6.2]{CL12}}]\label{lem:cl1262} Assume Theorem $\text{\ref{thm:cl12a}}_{n-1}$ holds. Let $\pi\colon X \to Z$ be a projective morphism of integral Noetherian schemes of equal characteristic zero such that $X$ is regular of dimension $n$ and such that $Z$ is affine and excellent with a dualizing complex $\omega_Z^\bullet$. Denote by $K_X$ a canonical divisor defined using $\omega_X^\bullet = \pi^!\omega_Z^\bullet$. Let $S,S_1,S_2,\ldots,S_p$ be distinct prime divisors on $X$ such that $(X,S+\sum_{i=1}^p S_i)$ is log regular. \par Let \[ V = \sum_{i=1}^p \mathbf{R} \cdot S_i \subseteq \Div_\mathbf{R}(X), \] let $A$ be a $\pi$-ample $\mathbf{Q}$-divisor on $X$, and let $B_1,B_2,\ldots,B_m \in \mathcal{E}_{S+A}(V)$ be $\mathbf{Q}$-divisors. Set $D_i = K_X+S+A+B_i$. Then, the ring \[ \res_S\bigl(R\bigl(X/Z;D_1,D_2,\ldots,D_m\bigr)\bigr) \] is finitely generated as an $H^0(Z,\mathcal{O}_Z)$-algebra. \end{lemma} \begin{proof} Following the proof of \cite[Lemma 6.2]{CL12}, we first prove the lemma under the additional assumption that the $B_i$ lie in the interior of $\mathcal{L}(V)$, and that the pairs $(S,B_{i\vert S})$ are all terminal. This part of the proof of \cite[Lemma 6.2]{CL12} applies with the following changes: \begin{itemize} \item In the second paragraph, \cite[Lemma 2.27]{CL12} should be replaced by our Lemma \ref{lem:cl12227}. \item In the third and fourth paragraphs, \cite[Setup 4.1]{CL12} and \cite[Theorem 4.3]{CL12} should be replaced by our Setup \ref{setup:cl1241} and Theorem \ref{thm:cl1243}, respectively. \item In the fourth paragraph, \cite[Corollary 3.5]{CL12} and \cite[Theorem $\textup{A}_{n-1}$]{CL12} should be replaced by our Corollary \ref{cor:cl1235} and Theorem $\text{\ref{thm:cl12a}}_{n-1}$, respectively. \end{itemize} \par We now prove the general case of the lemma. For every $i$, we choose a $\mathbf{Q}$-divisor $G_i \in V$ such that $A - G_i$ is $\pi$-ample and such that $B_i+G_i$ is in the interior of $\mathcal{L}(V)$. Let $A'$ be a $\pi$-ample $\mathbf{Q}$-divisor such that every $A-G_i-A'$ is also ample. 
We claim that there exists a finite open affine cover $Z = \bigcup_j U_j$ and effective $\mathbf{Q}$-divisors $A_{ij} \sim_\mathbf{Q} A-G_i-A'$ such that setting $X_j = \pi^{-1}(U_j)$, we have the following: \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item For every $j$, $\lfloor A_{i\vert X_j} \rfloor = 0$; \item For every $j$, the pair $(X,S+\sum_{i=1}^p S_i + \sum_{i=1}^m A_{ij})$ is log regular along $X_j$; and \item For every $j$, the support of $\sum_{i=1}^m A_{ij\vert X_j}$ does not contain any of the divisors $S_{\vert X_j},S_{1 \vert X_j},\ldots,$ $S_{p \vert X_j}$. \end{enumerate} We induce on $m$. The case $m=0$ follows by assumption. Now suppose $m > 0$. By the inductive hypothesis, there exists a finite affine open cover $Z = \bigcup_k V_k$ and $\pi$-ample $\mathbf{Q}$-divisors $B_{ik} \sim_\mathbf{Q} A-G_1-A'$ for $i \in \{1,2,\ldots,m-1\}$ such that for every $k$, setting $X_k = \pi^{-1}(V_k)$, we have $\lfloor A_{i\vert X_k} \rfloor = 0$, the pair $(X,S+\sum_{i=1}^p S_i + \sum_{i=1}^{m-1} B_{ik})$ is log regular along $X_k$, and the support of $\sum_{i=1}^{m-1} B_{ik\vert X_j}$ does not contain any of the divisors $S_{\vert X_k},S_{1 \vert X_k},\ldots,S_{p \vert X_k}$. We can now apply Corollary \ref{cor:bertinionopencover} to the strata of the pair $(X,S+\sum_{i=1}^p S_i + \sum_{i=1}^{m-1} B_{ik})$ to construct a finite affine open cover $Z = \bigcup_j U_j$ refining $Z = \bigcup_k V_k$ and effective $\mathbf{Q}$-divisors $A_{mj} \sim_\mathbf{Q} A-G_i-A'$ satisfying the requirements above. Finally, by \cite[Corollaire 6.3.9]{EGAInew} and flat base change, to show that $\res_S(R(X/Z;D_1,D_2,\ldots,D_m))$ is finitely generated as an $H^0(Z,\mathcal{O}_Z)$-algebra, it suffices to show that \[ \res_{S \vert X_j}\bigl(R\bigl(X_j/Z_j;D_{1 \vert X_j},D_{2 \vert X_j}, \ldots,D_{m \vert X_j}\bigr)\bigr) \] is finitely generated as an $H^0(U_j,\mathcal{O}_{U_j})$-algebra for every $j$. Replacing $\pi\colon X \to Z$ by $\pi_{\vert X_j}\colon X_j \to U_j$, we may assume that the open affine cover $Z = \bigcup_j U_j$ has only one member. We now proceed as in the proof of \cite[Lemma 6.2]{CL12} with the following changes in the last paragraph: \begin{itemize} \item In the first line, \cite[Lemma 2.2]{CL12} holds for the pair $(X,S+B)$ since log resolutions exist \cite[Theorem 1.1.6]{Tem18}, and the proof of \cite[Proposition 2.36(1)]{KM98} works in this setting as well. \item Later, the $\mathbf{Q}$-divisor $A^\circ$ is $\pi$-ample. \item In the last line, \cite[Corollary 2.26]{CL12} should be replaced by our Lemma \ref{lem:cl12226}.\qedhere \end{itemize} \end{proof} \begin{theorem}[cf.\ {\cite[Theorem 6.3]{CL12}}] Theorem $\text{\ref{thm:cl12a}}_{n-1}$ implies Theorem $\text{\ref{thm:cl12a}}_{n}$. Thus, Theorem \ref{thm:cl12a} holds. \end{theorem} \begin{proof} The proof of \cite[Theorem 6.3]{CL12} applies with the following changes: \begin{itemize} \item In (69), the words ``log smooth'' should be replaced by ``log regular.'' \item Throughout, the references to \cite[Corollary 2.26]{CL12} and \cite[Lemma 2.27]{CL12} should be replaced by references to our Lemmas \ref{lem:cl12226} and \ref{lem:cl12227}, respectively. \item After $(iii)$ on p.\ 2463, \cite[Lemma 6.1]{CL12} should be replaced by our Lemma \ref{lem:cl1261}. \item At the bottom of p.\ 2464, \cite[Lemma 6.2]{CL12} should be replaced by our Lemma \ref{lem:cl1262}. 
\item In the second paragraph on p.\ 2465, \cite[Theorem $\text{B}_n$]{CL12} should be replaced by our Theorem \ref{thm:cl12b}, which we have already shown holds when $\dim(X)$ is arbitrary. \item In the last paragraph, the log resolution $f\colon Y \to X$ exists by \cite[Theorem 1.1.6]{Tem18}. Later, we choose $A^\circ = f^*A-H$ to be $(\pi \circ f)$-ample and $C_i^\circ = C_i+H$ such that $\lfloor C_i^\circ \rfloor = 0$ for all $i$, where the choice of $H$ is exactly like the choice of $G$ in the proof of Theorem \ref{thm:cl1234}, which works since Temkin's log resolutions are constructed by blowing up regular centers (see also \cite[Claim 8.1]{Kol21qfac}). \end{itemize} \par Finally, to show Theorem \ref{thm:cl12a}, we need to prove the base case when $\dim(X) = 0$. Let $m$ be an integer such that $mD_1,mD_2,\ldots,mD_k$ are integral. Then, $R(X/Z;mD_1,mD_2,\ldots,mD_k)$ is finitely generated over $H^0(Z,\mathcal{O}_Z)$, since it is isomorphic to a polynomial ring with variables $x_1,x_2,\ldots,x_k$ corresponding to $mD_1,mD_2,\ldots,mD_k$ in the direct sum decomposition in Definition \ref{def:cl12222}. Finally, $R(X/Z;D_1,D_2,\ldots,D_k)$ contains $R(X/Z;mD_1,mD_2,\ldots,mD_k)$ as a Veronese subring of finite index, and hence $R(X/Z;D_1,D_2,\ldots,D_k)$ is finitely generated by \cite[Proposition 1.2.2]{ADHL15}. \end{proof} \section{Finite generation for klt pairs}\label{sect:cl13s3} In this section, we prove finite generation of relative adjoint rings for klt pairs, adapting corresponding results in \cite[\S3]{CL13} to our setting. We also adapt other results from \cite[\S3]{CL13}, which will be used in the proofs of other theorems but are of independent interest as well. In contrast to previous sections in Part \ref{part:fingen}, where we worked with log regular pairs, we work with normal schemes and klt pairs. We will frequently use the continuity of kltness (Lemma \ref{lem:kltFacts}$(\ref{lem:kltContinuous})$) in this and the following sections. We sometimes do not explicitly refer to the lemma and just say ``by continuity.'' We note that log resolutions exist for quasi-excellent schemes of equal characteristic zero by \cite[Theorem 2.3.6]{Tem08}, and thus the lemma is applicable. \begin{lemma}[{cf.\ \cite[Lemma 1]{CL13}}]\label{lem:cl1331} Let $\pi\colon X \to Z$ be a projective morphism of integral Noetherian schemes with $Z$ affine. Let $D_1, D_2,\ldots, D_\ell$ be $\mathbf{Q}$-Cartier divisors on $X$. The ring \[ R = R\bigl(X/Z; D_1,D_2, \ldots, D_\ell\bigr) \] is finitely generated over $H^0(Z,\mathcal{O}_Z)$ if and only if one of its Veronese subrings of finite index is finitely generated over $H^0(Z,\mathcal{O}_Z)$. In particular, if $D'_i \sim_{\mathbf Q} e_iD_i$ for some $e_i\in\mathbf{Q}_{>0}$ and if $R$ is finitely generated over $H^0(Z,\mathcal{O}_Z)$, then the ring $R' = R(X/Z; D'_1,D'_2,\ldots, D'_\ell)$ is finitely generated over $H^0(Z,\mathcal{O}_Z)$. \end{lemma} \begin{proof} If $D'_i \sim_{\mathbf Q} e_iD_i$, then $R'$ and $R$ have isomorphic Veronese subrings of finite index, hence the ``in particular'' statement. The principal statement follows from \cite[Propositions 1.2.2 and 1.2.4]{ADHL15}. \end{proof} We also notice the following fact. \begin{lemma}\label{lem:BigDeltaIsDeltaPlusH} Let $\pi\colon X \to Z$ be a projective morphism of integral Noetherian schemes of equal characteristic zero, such that $X$ is normal and such that $Z$ is affine, excellent, and has a dualizing complex $\omega_Z^\bullet$. 
Denote by $K_X$ a canonical divisor on $X$ defined using $\omega_X^\bullet = \pi^!\omega_Z^\bullet$. Let $\Delta$ be an effective $\mathbf Q$-Weil divisor on $X$ such that $K_X+\Delta$ is $\mathbf{Q}$-Cartier and $(X,\Delta)$ is klt. Assume that there exists a rational number $c\in (-\infty,1]$ such that $cK_X+\Delta$ is $\mathbf{Q}$-Cartier and $\pi$-big. Then there exists a rational number $e>0$, an effective $\mathbf Q$-Weil divisor $\Gamma$ on $X$ such that $(X,\Gamma)$ is klt, and a $\pi$-ample $\mathbf{Q}$-Cartier divisor $A$ such that $K_X+\Delta\sim_{\mathbf{Q}}e(K_X+\Gamma+A)$. \end{lemma} \begin{proof} By Kodaira's lemma (Corollary \ref{lem:kodairachar}), there exist a $\pi$-ample $\mathbf{Q}$-Cartier divisor $H$ and an effective $\mathbf{Q}$-Weil divisor $E$ such that $cK_X+\Delta \sim_\mathbf{Q} H+E$. For a sufficiently small $\varepsilon\in \mathbf{Q}_{>0}$, we have \begin{align*} K_X+\Delta&\sim_\mathbf{Q} (1-c)K_X+(1-\varepsilon)(cK_X+\Delta)+\varepsilon(H+E)\\ &=(1-c\varepsilon )K_X+(1-\varepsilon)\Delta+\varepsilon E+\varepsilon H\\ &=(1-c\varepsilon)\left(K_X+\frac{1-\varepsilon}{1-c\varepsilon}\Delta+\frac{\varepsilon}{1-c\varepsilon} E+\frac{\varepsilon}{1-c\varepsilon} H\right). \end{align*} By Lemma \ref{lem:kltFacts}$(\ref{lem:kltContinuousConvex})$ when $c<1$ (with $\Delta'$ there defined to be $\frac{1}{1-c}E$) and Lemma \ref{lem:kltFacts}$(\ref{lem:kltContinuous})$ when $c=1$, for sufficiently small $\varepsilon\in \mathbf{Q}_{>0}$, setting $\Gamma=\frac{1-\varepsilon}{1-c\varepsilon}\Delta+\frac{\varepsilon}{1-c\varepsilon} E$, the pair $(X,\Gamma)$ is klt. We may thus fix such an $\varepsilon$ and set $e=1-c\varepsilon,A=\frac{\varepsilon}{1-c\varepsilon} H$ to conclude. \end{proof} \begin{theorem}[{cf.\ \cite[Theorem 2]{CL13}}]\label{thm:cl1332} Let $\pi\colon X \to Z$ be a projective morphism of integral Noetherian schemes of equal characteristic zero, such that $X$ is normal and such that $Z$ is affine and excellent and has a dualizing complex $\omega_Z^\bullet$. Denote by $K_X$ a canonical divisor on $X$ defined using $\omega_X^\bullet = \pi^!\omega_Z^\bullet$. Let $\Delta_i$ be effective $\mathbf Q$-Weil divisors on $X$ for $i \in \{1,2,\ldots,\ell\}$ such that $K_X+\Delta_i$ is $\mathbf{Q}$-Cartier and $(X,\Delta_i)$ is klt for each $i$. Let $A_i$ be $\pi$-nef $\mathbf{Q}$-Cartier divisors for $i \in \{1,2,\ldots,\ell\}$. Assume that for each $i$, either $A_i$ is $\pi$-ample, or that there exists a rational number $c_i\in (-\infty,1]$ such that $c_iK_X+\Delta_i$ is $\mathbf{Q}$-Cartier and $\pi$-big. Then the relative adjoint ring \[ R\bigl(X/Z;K_X+\Delta_1+A_1,K_X+\Delta_2+A_2,\ldots,K_X+\Delta_\ell+A_\ell\bigr) \] is a finitely generated $H^0(Z,\mathcal{O}_Z)$-algebra. \end{theorem} \begin{proof} If there exists a rational number $c_i\in (-\infty,1]$ such that $c_iK_X+\Delta_i$ is $\mathbf{Q}$-Cartier and $\pi$-big, then by Lemma \ref{lem:BigDeltaIsDeltaPlusH} we may write $K_X+\Delta_i\sim_{\mathbf{Q}}e_i(K_X+\Theta_i+H_i)$ where $e_i\in\mathbf{Q}_{>0},$ $H_i$ is $\mathbf{Q}$-Cartier and $\pi$-ample, and $\Theta_i$ is effective with $(X,\Theta_i)$ klt. Thus $K_X+\Delta_i+A_i\sim_{\mathbf{Q}}e_i(K_X+\Theta_i+H_i+\frac{1}{e_i}A_i)$ and $H_i+\frac{1}{e_i}A_i$ is $\pi$-ample. By Lemma \ref{lem:cl1331} we see that we may assume $A_i$ $\pi$-ample for all $i$. Let $f\colon Y\to X$ be a log resolution of $(X,\sum_i\Delta_i)$, which exists by \cite[Theorem 1.1.6]{Tem18}. 
Since Temkin's log resolutions are constructed by blowing up regular centers, we may assume that there exists an $f$-exceptional effective Cartier divisor $F$ such that $-F$ is $f$-ample (see also \cite[Claim 8.1]{Kol21qfac}). Take a $\pi$-ample $\mathbf{Q}$-Cartier divisor $A$ on $X$ such that $A_i-A$ are all $\pi$-ample. Write \[ f^*(K_X+\Delta_i)+E_i\sim_{\mathbf{Q}} K_Y+\Gamma_i \] where $E_i\geq 0$ is $f$-exceptional, all coefficients of $\Gamma_i$ are in $(0,1)$, and $E_i$ and $\Gamma_i$ do not share common components. This is possible since $\Delta_i\geq 0$ and $(X,\Delta_i)$ is klt. By Lemma \ref{lem:cl12226}, it suffices to show \[ R=R\bigl(Y/Z;K_Y+\Gamma_1+f^*A_1,K_Y+\Gamma_2+f^*A_2,\ldots,K_Y+\Gamma_\ell+f^*A_\ell\bigr) \] is finitely generated. Let $r\in\mathbf{Q}_{>0}$ be sufficiently small such that $H\coloneqq f^*A-rF$ is $(\pi\circ f)$-ample and such that all coefficients of $\Gamma'_i\coloneqq \Gamma_i+rF$ are less than 1. Let $H_i=f^*(A_i-A)$, which is $(\pi\circ f)$-semi-ample by our choice. Then, we have \[ R=R\bigl(Y/Z;K_Y+\Gamma'_1+H_1+H,K_Y+\Gamma'_2+H_2+H,\ldots,K_Y+\Gamma'_\ell+H_\ell+H\bigr). \] Let $q$ be a positive integer such that every $qH_i$ is integral and $(\pi\circ f)$-generated, and such that all coefficients of $\Gamma'_i$ are less than $1-\frac{1}{q}$. By Corollary \ref{cor:bertinionopencover}, after replacing $Z$ by the scheme theoretic image of $\pi$ (thus making it integral) and passing to an affine open cover (allowed by \cite[Corollaire 6.3.9]{EGAInew} and flat base change), we may assume that there exists $H_i'\in \lvert qH_i \rvert$ such that $H_i'$ is regular and such that $\sum_i H_i'+\sum_i\Gamma_i$ has simple normal crossings support. Since all coefficients of $\Gamma'_i$ are less than $1-\frac{1}{q}$, all coefficients of $\Gamma'_i+\frac{1}{q}H_i'$ are less than 1, regardless of possible shared components. Therefore, the relative adjoint ring \[ R\Bigl(Y/Z;K_Y+\Gamma'_1+\frac{1}{q}H_1'+H,K_Y+\Gamma'_2+\frac{1}{q}H_2'+H,\ldots,K_Y+\Gamma'_\ell+\frac{1}{q}H_\ell'+H\Bigr) \] is finitely generated by Theorem \ref{thm:cl12a}. Since $\frac{1}{q}H_i'\sim_{\mathbf{Q}}H_i$, Lemma \ref{lem:cl1331} gives the finite generation of $R$. \end{proof} We therefore obtain Theorem \ref{thm:introfinitegen} for algebraic spaces where the base is no longer affine. \begin{theorem}\label{thm:finitegenerationalgspaces} Let $\pi\colon X \to Z$ be a proper morphism of integral quasi-excellent locally Noetherian algebraic spaces of equal characteristic zero over a scheme $S$. Suppose that $X$ is normal and that $Z$ admits a dualizing complex $\omega_Z^\bullet$. Denote by $K_X$ a canonical divisor on $X$ defined using $\omega_X^\bullet = \pi^!\omega_Z^\bullet$. \par Let $\Delta_i$ be effective $\mathbf Q$-Weil divisors on $X$ for $i \in \{1,2,\ldots,\ell\}$ such that $K_X+\Delta_i$ is $\mathbf{Q}$-Cartier and $(X,\Delta_i)$ is klt for each $i$. Let $A_i$ be $\pi$-nef $\mathbf{Q}$-invertible sheaves for $i \in \{1,2,\ldots,\ell\}$. Assume that for each $i$, either $A_i$ is $\pi$-ample, or that there exists a rational number $c_i\in (-\infty,1]$ such that $c_iK_X+\Delta_i$ is $\mathbf{Q}$-Cartier and $\pi$-big. Then, the relative adjoint ring \[ \bigoplus_{(m_1,m_2,\ldots,m_\ell) \in \mathbf{N}^\ell} \pi_*\mathcal{O}_X\Biggl( \Biggl\lfloor \sum_{i=1}^\ell m_i(K_X+\Delta_i+A_i)\Biggr\rfloor \Biggr) \] is an $\mathcal{O}_Z$-algebra locally of finite type. 
\end{theorem} \begin{proof} By definition and flat base change \cite[\href{https://stacks.math.columbia.edu/tag/073K}{Tag 073K}]{stacks-project}, we can pullback along \'etale morphisms from affine schemes $\Spec(R) \to Z$ to reduce to the case proved in Theorem \ref{thm:cl1332}. \end{proof} For later use, we prove some other consequences of finite generation, adapting the proofs from \cite{CL13} for complex varieties. \begin{theorem}[{cf.\ \cite[Theorem 3]{CL13}}]\label{thm:cl1335} Let $\pi\colon X \to Z$ be a projective morphism of integral Noetherian schemes such that $Z$ is affine. Let $D_1,D_2,\ldots, D_\ell$ be $\mathbf{Q}$-Cartier divisors on $X$. Assume that the ring \[ R = R(X/Z; D_1,D_2,\ldots, D_\ell) \] is finitely generated over $H^0(Z,\mathcal{O}_Z)$, and let \[ \begin{tikzcd}[row sep=0,column sep=1.475em] \mathllap{\mathbf{D}\colon}\mathbf{R}^\ell \rar & \Div_\mathbf{R}(X)\\ (\lambda_1,\lambda_2,\ldots, \lambda_\ell) \rar[mapsto] & \displaystyle \sum_{i=1}^\ell \lambda_iD_i \end{tikzcd} \] be the \textsl{tautological map} from \emph{\cite[p.\ 620]{CL13}}. We then have the following: \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item\label{thm:cl13351} The support $\Supp(R)$ of $R$ is a rational polyhedral cone. \item\label{thm:cl13352} Suppose that $\Supp(R)$ contains a $\pi$-big $\mathbf{R}$-Cartier divisor. If $D \in \sum_i {\mathbf{R}}_{\ge0}D_i$ is $\pi$-pseudo effective, then $D \in\Supp(R)$. \item\label{thm:cl13353} There is a finite rational polyhedral subdivision $\Supp(R)=\bigsqcup_i \mathcal{C}_i $ such that $o_v$ is a linear function on $\mathcal{C}_i$ for every geometric valuation $v$ of $X$. Furthermore, there is a coarsest subdivision with this property in the sense that, if $i$ and $j$ are distinct, there is at least one geometric valuation $v$ of $X$ such that (the linear extensions of) $(o_v)_{|\mathcal{C}_i}$ and $(o_v)_{|\mathcal{C}_j}$ are different. \item\label{thm:cl13354} There is a finite index subgroup $L \subseteq \mathbf{Z}^\ell$ such that for all $\mathbf{n} \in \mathbf{N}^\ell\cap L$, if $\mathbf{D}(\mathbf{n}) \in \Supp(R)$, then \[ o_v\bigl(\mathbf{D}(\mathbf{n})\bigr)=\inf_{E\in\lvert\mathbf{D}(\mathbf{n})\rvert}\big\{\mult_v(E)\bigr\} \] for all geometric valuations $v$ of $X$. \end{enumerate} \end{theorem} \begin{proof} The proof of \cite[Theorem 3]{CL13} carries verbatim here, noting that the external reference \cite[Proposition 4.7]{ELMNP06} holds for arbitrary Noetherian schemes. \end{proof} In the next result, for the same reason as the case of Theorem \ref{thm:cl12b}, we do not need to assume from the outset that $Z$ is an excellent $\mathbf{Q}$-scheme. \begin{corollary}[{cf.\ \cite[Corollary 1]{CL13}}]\label{cor:cl1336} Let $\pi\colon X \to Z$ be a projective morphism of integral Noetherian schemes, such that $X$ is normal and such that $Z$ is affine and has a dualizing complex $\omega_Z^\bullet$. Denote by $K_X$ a canonical divisor on $X$ defined using $\omega_X^\bullet = \pi^!\omega_Z^\bullet$. Assume that the function field of $X$ has characateristic zero. Let $\Delta$ be an effective $\mathbf{Q}$-Weil divisor on $X$ such that $K_X+\Delta$ is $\mathbf{Q}$-Cartier and $(X,\Delta)$ is klt. Let $A$ be a $\pi$-nef $\mathbf{Q}$-Cartier divisor on $X$. Assume that either $A$ is $\pi$-ample or $\Delta$ is $\pi$-big, and assume that $K_X+\Delta+A$ is $\pi$-pseudoeffective. Then $\lvert K_X+\Delta+A\rvert_{\mathbf{Q}}\neq\emptyset$. \end{corollary} \begin{proof} We may assume $\pi$ surjective. Let $\eta$ be the generic point of $Z$. 
We know (Definition \ref{def:fbig}) that $K_X+\Delta+A+H$ is $\pi$-big for all $\pi$-ample Cartier divisors $H$ on $X$. Since there exists such an $H$, it follows that $K_{X_\eta}+\Delta_{|X_\eta}+A_{|X_\eta}+H$ is $\pi_{|X_\eta}$-big for all $\pi_{|X_\eta}$-ample Cartier divisors $H$ on $X_\eta$, so $K_{X_\eta}+\Delta_{|X_\eta}+A_{|X_\eta}$ is $\pi_{|X_\eta}$-pseudoeffective. By Corollary \ref{cor:EffIffEffAtGenFiber}, it suffices to show $|K_{X_\eta}+\Delta_{|X_\eta}+A_{|X_\eta}|_{\mathbf{Q}}\neq\emptyset$, so we may replace $Z$ by the spectrum of its function field and assume that $Z$ is an excellent $\mathbf{Q}$-scheme. Let $H$ be a $\pi$-ample Cartier divisor on $X$. By Theorem \ref{thm:cl1332}, the adjoint ring $R=R(X/Z;K_X+\Delta+A,K_X+\Delta+A+H)$ is finitely generated over $H^0(Z,\mathcal{O}_Z)$. Its support contains the $\pi$-big $\mathbf{Q}$-Cartier divisor $K_X+\Delta+A+H$. Thus, Theorem \ref{thm:cl1335}$(\ref{thm:cl13352})$ applies and shows $K_X+\Delta+A\in\Supp(R)$, i.e., $\lvert K_X+\Delta+A \rvert_{\mathbf{R}}\neq\emptyset$. By Lemma \ref{lem:RatlIsDense} we are done. \end{proof} \begin{lemma}[{cf.\ \cite[Lemma 3]{CL13}}]\label{lem:cl1337} Let $\pi\colon X \to Z$ be a projective morphism of integral Noetherian schemes with $Z$ affine. Let $D$ be a $\mathbf{Q}$-Cartier divisor on $X$. We then have the following: \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item\label{lem:cl13371} If $D$ is $\pi$-semi-ample, then $o_v(D)=0$ for all geometric valuations $v$ of $X$. \item\label{lem:cl13372} Assume that there exist $\mathbf{Q}$-Cartier divisors $D_1,D_2,\ldots, D_\ell$ on $X$ such that the ring \[ R = R\bigl(X/Z; D_1,D_2,\ldots, D_\ell\bigr) \] is finitely generated over $H^0(Z,\mathcal{O}_Z)$, and suppose $D\in \Supp(R)$. If $o_v(D)=0$ for all geometric valuations $v$ of $X$, then $D$ is $\pi$-semi-ample. \end{enumerate} \end{lemma} \begin{proof} Assume $D$ is $\pi$-semi-ample. Then, $\mathscr{L}\coloneqq\mathcal{O}_X(pD)$ is a $\pi$-generated line bundle for some $p>0$. Since $Z$ is affine, for each geometric valuation $v$ of $X$, there exists a section $s$ of $\mathscr{L}$ that avoids the center of $v$. Then $\frac{1}{p}\prdiv(s)\in \lvert D\rvert_{\mathbf{Q}}$ has order zero with respect to $v$, and thus $o_v(D)=0$. Now suppose the assumptions in $(\ref{lem:cl13372})$ hold and suppose $o_v(D)=0$ for all geometric valuations $v$ on $X$. By Theorem \ref{thm:cl1335}$(\ref{thm:cl13354})$, there exists a positive integer $p$ such that $pD$ Cartier and such that \[ o_v(pD)=\inf_{E\in \lvert pD\rvert}\bigl\{\mult_v(E)\bigr\} \] for all geometric valuations $v$ on $X$. Since $o_v(pD)=p\cdot o_v(D)=0$, we see that the center of $v$ is not in $\Bs\lvert pD\rvert$. Since each closed point of $X$ is the center of a geometric valuation (unless $\dim(X)=0$, in which case the result is trivially true), we see that $\Bs\lvert pD\rvert=\emptyset$ and hence $pD$ is $\pi$-generated. \end{proof} \begin{corollary}[{cf.\ \cite[Corollary 2]{CL13}}]\label{cor:cl1338} Let $\pi\colon X \to Z$ be a projective morphism of integral Noetherian $\mathbf{Q}$-schemes, such that $X$ is normal and such that $Z$ is excellent and has a dualizing complex $\omega_Z^\bullet$. Denote by $K_X$ a canonical divisor on $X$ defined using $\omega_X^\bullet = \pi^!\omega_Z^\bullet$. Let $\Delta$ be an effective $\mathbf{Q}$-Weil divisor on $X$ such that $K_X+\Delta$ is $\mathbf{Q}$-Cartier and $(X,\Delta)$ is klt. Let $A$ be a $\pi$-nef $\mathbf{Q}$-Cartier divisor on $X$. 
Assume that either $A$ is $\pi$-ample or $\Delta$ is $\pi$-big. If $K_X+\Delta+A$ is $\pi$-nef, then it is $\pi$-semi-ample. \end{corollary} \begin{proof} Being $\pi$-semi-ample is local on the base, so we may assume $Z$ affine. Let $H$ be a $\pi$-ample Cartier divisor on $X$. By Theorem \ref{thm:cl1332}, the adjoint ring $R=R(X/Z;K_X+\Delta+A,K_X+\Delta+A+H)$ is finitely generated over $H^0(Z,\mathcal{O}_Z)$. By Corollary \ref{cor:cl1336}, we have $\lvert K_X+\Delta+A\rvert_{\mathbf{Q}}\neq\emptyset$, and hence $\lvert K_X+\Delta+A+H\rvert_{\mathbf{Q}}\neq\emptyset$. Therefore \[ \Supp(R)\supseteq \mathbf{R}_{\geq 0}\cdot(K_X+\Delta+A)+\mathbf{R}_{\geq 0}\cdot(K_X+\Delta+A+H). \] Since $K_X+\Delta+A$ is $\pi$-nef, we see $K_X+\Delta+A+\varepsilon H$ is $\pi$-ample for all $\varepsilon\in \mathbf{Q}_{>0}$. Therefore, for each geometric valuation $v$ of $X$ and each $\varepsilon\in \mathbf{Q}_{>0}$, we have $o_v(K_X+\Delta+A+\varepsilon H)=0$. Since $o_v$ is continuous on $\Supp(R)$ by Theorem \ref{thm:cl1335}$(\ref{thm:cl13353})$, we see that $o_v(K_X+\Delta+A)=0$ as well. By Lemma \ref{lem:cl1337}$(\ref{lem:cl13372})$, we conclude that $K_X+\Delta+A$ is $\pi$-semi-ample. \end{proof} \section{Rationality, Cone, and Contraction theorems revisited}\label{sect:fundrevisit} We now prove the rationality, cone, and contraction theorems, modeled after Kawamata's reformulation \cite{Kaw11} of the statements that appear in \cite{KMM87}.\medskip \par We start with the following preliminary result. \begin{lemma}[{cf.\ \cite[Corollary 3]{CL13}}]\label{lem:cl1339} Let $\pi\colon X \to Z$ be a projective morphism of integral Noetherian schemes with $Z$ affine. Let $D_1,D_2,\ldots, D_\ell$ be $\mathbf{Q}$-Cartier divisors on $X$. Let \[ \varphi\colon \sum_{i=1}^\ell \mathbf{R}\cdot D_i\longrightarrow N^1(X/Z)_{\mathbf{R}} \] be the natural projection map. Assume that the ring $R = R(X/Z; D_1,D_2,\ldots, D_\ell)$ is finitely generated over $H^0(Z,\mathcal{O}_Z)$. Let $\Supp(R)=\bigsqcup_j \mathcal{C}_j$ be a finite rational polyhedral subdivision such that $o_v$ is a linear function on $\mathcal{C}_j$ for every geometric valuation $v$ of $X$, as in Theorem \ref{thm:cl1335}$(\ref{thm:cl13353})$. Fix an index $k$. Assume that $\mathcal{C}_k\cap \varphi^{-1}(\Amp(X/Z))\neq\emptyset$. Then $\mathcal{C}_k\subseteq \varphi^{-1}(\Nef(X/Z))$. If additionally the decomposition $\Supp(R)=\bigsqcup_j \mathcal{C}_j$ is the coarest subdivision satisfying the hypotheses above, then $\mathcal{C}_k=\Supp(R)\cap \varphi^{-1}(\Nef(X/Z))$, in which case $\mathcal{C}_k$ is convex. \end{lemma} \begin{proof} Note that by Theorem \ref{thm:cl1335}$(\ref{thm:cl13353})$, all asymptotic order functions $o_v$ are identically zero on $\mathcal{C}_k$, because they are identically zero on the subset $\mathcal{C}_k\cap \varphi^{-1}(\Amp(X/Z))$, which is nonempty and open in the relative topology of $\mathcal{C}_k$. By Lemma \ref{lem:cl1337}$(\ref{lem:cl13372})$, all rational members of $\mathcal{C}_k$ are $\pi$-semiample, thus $\pi$-nef, and thus all members of $\mathcal{C}_k$ are $\pi$-nef since rational members are dense in the rational polyhedron $\mathcal{C}_k$. Now suppose that the decomposition $\Supp(R)=\bigsqcup_j \mathcal{C}_j$ is coarsest in the sense stated above. Since all asymptotic order functions $o_v$ are identically zero on every cell $\mathcal{C}_j$ that touches $\varphi^{-1}(\Amp(X/Z))$, if the decomposition is coarsest then $\varphi^{-1}(\Amp(X/Z))\subseteq \mathcal{C}_k$. 
Since $\varphi^{-1}(\Amp(X/Z))\neq \emptyset$, every $\pi$-nef member of $\Supp(R)$ is a limit of elements of $\varphi^{-1}(\Amp(X/Z))$, and is therefore contained in the closed subset $\mathcal{C}_k$. Since the other inclusion is already established, we conclude that $\mathcal{C}_k=\Supp(R)\cap \varphi^{-1}(\Nef(X/Z))$. The statement that $\mathcal{C}_k$ is convex follows from the fact that both $\Supp(R)$ and $\Nef(X/Z)$ are convex. \end{proof} \begin{theorem}[{cf.\ \cite[Theorem 4]{CL13}}]\label{thm:cl1342} Let $\pi\colon X \to Z$ be a projective morphism of integral noetherian $\mathbf{Q}$-schemes, such that $X$ is normal and such that $Z$ is excellent and has a dualizing complex $\omega_Z^\bullet$. Let $\mathcal{A}=\mathcal{A}(X/Z)$ be the set of classes $\mathbf{u}\in N^1(X/Z)_{\mathbf{R}}$ that satisfies the following condition. There exists an open covering $Z=\cup_a V_a$ such that for each index $a$, there exists a $\mathbf Q$-Weil divisor $\Delta_a\geq 0$ on $\pi^{-1}(V_a)$ with $K_{\pi^{-1}(V_a)}+\Delta_a$ $\mathbf{Q}$-Cartier and $(\pi^{-1}(V_a),\Delta_a)$ klt, a positive real number $c_a$, and a class $\mathbf{w}_a\in \Amp(\pi^{-1}(V_a)/V_a)$ such that the restriction of $\mathbf{u}$ to $N^1(\pi^{-1}(V_a)/V_a)$ (Lemma \ref{lem:NumericalClassOpenSubset}) equals to $c_a[K_{\pi^{-1}(V_a)}+\Delta_a]+\mathbf{w}_a$. Let $V^\circ=\mathcal{A}\cap\partial\Nef(X/Z)$. We then have the following: \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item \label{thm:cl1342Precise} Let $\mathbf{u}\in \mathcal{A}\cap\Nef(X/Z)$. There exists a closed convex rational polytope $P$ containing $\mathbf{u}$ in its interior such that $P\cap \Nef(X/Z)$ is a closed convex rational polytope with nonempty interior. \item \label{thm:cl1342Formal} For $P$ as in $(\ref{thm:cl1342Precise})$, let $F_1,F_2,\ldots,F_m$ be all the codimension one faces of $P\cap \Nef(X/Z)$ that intersects the interior of $P$. Then each $F_i$ span a supporting hyperplane (Definition \ref{def:AmpleConeAndNefCone}) of $\Nef(X/Z)$, and $\Int(P)\cap \partial \Nef(X/Z)=\Int(P)\cap (F_1\cup F_2\cup\cdots\cup F_m)$. \item \label{thm:cl13421} Every compact subset of $V^\circ$ is contained in a finite union of supporting hyperplanes. \item \label{thm:cl13422} Let $D$ be a $\mathbf{Q}$-Cartier divisor on $X$ such that $[D]\in \mathcal{A}\cap\Nef(X/Z)$. Then $D$ is $\pi$-semi-ample. \end{enumerate} \end{theorem} \begin{remark} We do not require any compatibility of the divisors $\Delta_a$ and classes $\mathbf{w}_a$ in the definition of $\mathcal{A}$. Since $\mathcal{A}\cap\Nef(X/Z)\subseteq \Amp(X/Z)\cup V^\circ$, item $(\ref{thm:cl1342Precise})$ (resp. $(\ref{thm:cl13422})$) is only nontrivial for those $\mathbf{u}$ (resp. $[D]$) in $V^\circ$. However, $\mathcal{A}\cap\Nef(X/Z)$ behaves better when we pass to an open cover of $Z$. \end{remark} \begin{proof} Since $\Amp(X/Z)$ is open and convex, it is clear that $\mathcal{A}$ is open and convex and that \[ \mathcal{A}\cap N^1(X/Z)_{\mathbf{Q}}=\Set[\big]{a\mathbf{v}+\mathbf{w}\given a\in\mathbf{Q}_{>0},\ \mathbf{w}\in\Amp(X/Z)\cap N^1(X/Z)_{\mathbf{Q}}}. \] We first prove $(\ref{thm:cl1342Precise})$. 
By the definition of $\mathcal{A}$, we can find a finite affine cover $V_1,\ldots,V_t$ of $Z$, a $\mathbf Q$-Weil divisor $\Delta_a\geq 0$ on $\pi^{-1}(V_a)$ with $K_{\pi^{-1}(V_a)}+\Delta_a$ $\mathbf{Q}$-Cartier and $(\pi^{-1}(V_a),\Delta_a)$ klt, a positive real number $c_a$, and a class $\mathbf{w}_a\in \Amp(\pi^{-1}(V_a)/V_a)$ such that the restriction of $\mathbf{u}$ to $N^1(\pi^{-1}(V_a)/V_a)$ equals to $c_a[K_{\pi^{-1}(V_a)}+\Delta_a]+\mathbf{w}_a$. We use the notations $\rho_a:N^1(X/Z)_{\mathbf{R}}\to N^1(\pi^{-1}(V_a)/V_a)_{\mathbf{R}}$ for restriction of divisors. Assume for each $a$ we have a rational polytope $P_a$ in $N^1(\pi^{-1}(V_a)/V_a)_{\mathbf{R}}$ for $\rho_a(\mathbf{u})$ that fulfills $(\ref{thm:cl1342Precise})$. If $P_0$ is any closed convex rational polytope containing $\mathbf{u}$ in its interior, so is $P:=P_0\cap \rho_1^{-1}(P_1)\cap\ldots\cap\rho_t^{-1}(P_t)$, and since $\Nef(X/Z)=\cap_a\rho_a^{-1}\Nef(\pi^{-1}(V_a)/V_a)$ (by definition and Lemma \ref{lem:NumericalClassOpenSubset}), we see that \[ P\cap \Nef(X/Z)=P_0\cap \rho_1^{-1}\left(P_1\cap\Nef(\pi^{-1}(V_1)/V_1)\right)\cap\ldots\cap\rho_t^{-1}\left(P_t\cap\Nef(\pi^{-1}(V_t)/V_t)\right) \] is a closed convex rational polytope. Since $P$ contains $\mathbf{u}\in\Nef(X/Z)$ in its interior, $\mathrm{int}(P)\cap \Amp(X/Z)\neq\emptyset$, thus $P\cap \Nef(X/Z)$ has nonempty interior. Thus we may assume $Z$ affine, that there exists a $\mathbf Q$-Weil divisor $\Delta\geq 0$ on $X$ with $K_X+\Delta$ $\mathbf{Q}$-Cartier and $(X,\Delta)$ klt, and that $\mathbf{u}$ lies in the subset \begin{align*} \mathcal{A}_0{}\coloneqq{}&\Set[\big]{c[K_X+\Delta]+\mathbf{w}\given c\in\mathbf{R}_{>0},\ \mathbf{w}\in\Amp(X/Z)} \intertext{of $N^1(X/Z)_{\mathbf{R}}$. It is easy to see that $\mathcal{A}_0$ is open and convex, and that} \mathcal{A}_0\cap N^1(X/Z)_{\mathbf{Q}}{}={}&\Set[\big]{c[K_X+\Delta]+\mathbf{w}\given c\in\mathbf{Q}_{>0},\ \mathbf{w}\in\Amp(X/Z)\cap N^1(X/Z)_{\mathbf{Q}}}. \end{align*} For a sufficiently small closed convex rational polytope $P$ whose interior $\Int(P)$ contains $\mathbf{u}$, we have $P\subseteq \mathcal{A}_0$. Notice again that $ P\cap\Amp(X/Z)\neq\emptyset$, as $P$ contains $\mathbf{u}\in\Nef(X/Z)$ in its interior. Each vertex of $P$ has the form $c[K_X+\Delta]+\mathbf{w}=c([K_X+\Delta]+c^{-1}\mathbf{w})$ where $c\in\mathbf{Q}_{>0}$ and $\mathbf{w}\in\mathrm{Amp}(X/Z)$ rational. Therefore, we can find $\ell\in\mathbf{Z}_{>0}$, $c_i\in \mathbf{Q}_{>0}$ and $\pi$-ample $\mathbf{Q}$-Cartier divisors $A_i\ (i \in \{1,2,\ldots,\ell\})$, such that $c_i[K_X+\Delta+A_i]\ (i \in \{1,2,\ldots,\ell\})$ are the vertices of $P$. Write $D_i=K_X+\Delta+A_i$. \par Consider the adjoint ring \[ R=R\bigl(X/Z;D_1,D_2,\ldots,D_\ell\bigr), \] which is finitely generated by Theorem \ref{thm:cl1332}. Every element $\mathbf{x}\in P$ is a convex combination of the classes $c_i[D_i]$, and thus is a $\mathbf{R}_{\geq 0}$-combination of the classes $[D_i]$. In particular, $\Supp(R)$ contains a $\pi$-ample divisor since $P\cap\Amp(X/Z)\neq\emptyset$. By Theorem \ref{thm:cl1335}$(\ref{thm:cl13352})$, we see that every element $\mathrm{x}\in P\cap\Nef(X/Z)$ is the class of an element of $\Supp(R)$. In other words, if $\varphi$ is the canonical map from Lemma \ref{lem:cl1339}, we have $\Nef(X/Z)\cap P\subseteq \varphi(\Supp(R))$. 
Let $\Supp(R)=\bigsqcup_j \mathcal{C}_j$ be the coarest finite rational polyhedral subdivision such that $o_v$ is a linear function on $\mathcal{C}_j$ for every geometric valuation $v$ of $X$, as in Theorem \ref{thm:cl1335}$(\ref{thm:cl13353})$. Since $\Supp(R)$ contains a $\pi$-ample divisor, there exists an index $k$ with $\mathcal{C}_k\cap \varphi^{-1}(\Amp(X/Z))\neq\emptyset$. By Lemma \ref{lem:cl1339}, the set $\mathcal{C}_k= \varphi^{-1}(\Nef(X/Z))$ is convex, and $\varphi(\mathcal{C}_k)=P\cap \Nef(X/Z)$, as desired.\smallskip We now show $(\ref{thm:cl1342Formal})$. Let $W_i$ be the linear span of $F_i$. To show $W_i$ is a supporting hyperplane of $\Nef(X/Z)$, it suffices to show $W_i\cap \Amp(X/Z)=\emptyset$. However, since $F_i$ is convex and contained in $\Nef(X/Z)$, we see that $W_i\cap \Amp(X/Z)\neq\emptyset$ will imply $F_i\cap \Amp(X/Z)\neq\emptyset$, which is impossible since $F_i$ is a face of $P\cap\Nef(X/Z)$, so $F_i\subseteq \partial\Nef(X/Z)$. \par This argument also tells us that \[ \Int(P)\cap \partial \Nef(X/Z)\supseteq\Int(P)\cap (F_1\cup F_2\cup\cdots\cup F_m). \] Conversely, if $\mathrm{x}\in \Int(P)\cap \partial \Nef(X/Z)$, it is in the boundary of $P\cap\Nef(X/Z)$, and thus is contained in some $F_i$. Therefore we get the identity of sets.\smallskip Since $(\ref{thm:cl13421})$ follows immediately from $(\ref{thm:cl1342Formal})$, it remains to show $(\ref{thm:cl13422})$. By the discussion above, upon passing to a (finite) affine open covering of $Z$, we may assume that there exists a $\mathbf Q$-Weil divisor $\Delta\geq 0$ on $X$ with $K_X+\Delta$ $\mathbf{Q}$-Cartier and $(X,\Delta)$ klt, and our divisor $D$ satisfies $[D]=c[K_X+\Delta]+\mathrm{w}$ for some $c\in \mathbf{Q}_{>0}$ and $\mathrm{w}\in\Amp(X/Z)$. Therefore the $\mathbf{Q}$-Cartier divisor $A\coloneqq c^{-1}D-K_X-\Delta$ is $\pi$-ample. We have that $K_X+\Delta+A=c^{-1}D$ is $\pi$-nef, since $[D]\in \Nef(X/Z)$. By Corollary \ref{cor:cl1338}, we see that $K_X+\Delta+A$ is $\pi$-semi-ample and hence so is $D$. \end{proof} With uniqueness we can prove the following result. \begin{lemma}[{cf.\ \cite[p.\ 85, Step 9]{KM98}}]\label{lem:CL13S4ProducesGoodContractions} Let $\pi\colon X \to Z$ be a projective morphism of integral Noetherian schemes of equal characteristic zero, such that $X$ is normal and such that $Z$ is excellent and has a dualizing complex $\omega_Z^\bullet$. Let $W$ be a supporting hyperplane spanned by a face $F$ as in Theorem \ref{thm:cl1342}$(\ref{thm:cl1342Formal})$. Then, the extremal ray $R$ dual to $W$ (Definition \ref{rem:DualRay}) has a good contraction with target $Y$ projective over $Z$. \end{lemma} \begin{proof} By Theorem \ref{thm:cl1342}$(\ref{thm:cl1342Precise})$, it is clear that $W$ has a basis consisting of rational members of $\Nef(X/Z)$, and that there exists $[D_1]\in W_{\mathbf{Q}}\cap\mathcal{A}$. By Remark \ref{rem:DualToOneVector}, there exists a rational member $[D_0]$ of $W\cap \Nef(X/Z)$ such that \[ R=\Set[\big]{\gamma\in \NEbar(X/Z) \given (D_0\cdot\gamma)=0}. \] Since $(\Nef(X/Z)\cdot\NEbar(X/Z))\geq 0$, it is clear that \[ R=\Set[\big]{\gamma\in \NEbar(X/Z) \given (D_1+\varepsilon D_0\cdot\gamma)=0} \] for all $\varepsilon\in\mathbf{R}_{>0}$, and we know that for $\varepsilon$ rational and sufficiently small, $D_1+\varepsilon D_0$ is $\pi$-semi-ample, by Theorem \ref{thm:cl1342}$(\ref{thm:cl13422})$. Fix such an $\varepsilon$ an fix an $m\in\mathbf{Z}_{>0}$ such that $D_2\coloneqq m(D_1+\varepsilon D_0)$ is integral and $\pi$-generated. 
Then $\lvert D_2\rvert$ defines a morphism $X\to \mathbf{P}_Z(\pi_*\mathcal{O}_X(D_2))$, and we denote by $f\colon X\to Y$ the Stein factorization of this morphism. Then $Y$ is projective over $Z$, $f$ is proper, $f_*\mathcal{O}_X=\mathcal{O}_Y$, and $D_2\sim f^*A$ for some Cartier divisor $A$ on $Y$ ample over $Z$. Since $(D_2\cdot R)=0$, $D_2$ is not $\pi$-ample, so $f$ is not an isomorphism and thus there exists an $f$-contracted curve $C$. Now $(D_2\cdot C)=0$, so $[C]\in R$ and $R=\mathbf{R}_{\geq 0}[C]$. In particular, for each $\mathbf{Q}$-Cartier divisor $E$ on $Y$, we have $(f^*E\cdot R)=0$. Conversely, let $D\in \Div_{\mathbf{Q}}(X)$ be such that $(D\cdot R)=0$. Then $D$ and $D_2$ both induce a linear functional on the real vector space $U\coloneqq N_1(X/Z)_{\mathbf{R}}/\mathbf{R} [C]$. The image $\mathcal{C}$ of $\NEbar(X/Z)$ in $U$ is a compact cone and $D_2$ maps $\mathcal{C}\setminus\{0\}$ to $\mathbf{R}_{>0}$. By local compactness, for sufficiently small $\sigma\in \mathbf{Q}_{>0}$, $D_2+\sigma D$ maps $\mathcal{C}\setminus\{0\}$ to $\mathbf{R}_{>0}$ as well. Thus $D_3\coloneqq D_2+\sigma D$ is $\pi$-nef, \[ R=\Set[\big]{\gamma\in \NEbar(X/Z) \given (D_3\cdot\gamma)=0}, \] and $[D_3]\in W$ since $W$ is the subspace dual to $\mathbf{R}[C]$. Decreasing $\sigma$, we may assume $[D_3]\in \mathcal{A}$, so $D_3$ is $\pi$-semi-ample by Theorem \ref{thm:cl1342}$(\ref{thm:cl13422})$. By the same argument as that for $D_1+\varepsilon D_0$, a multiple of $D_3$ is $\pi$-generated and is pulled back from a contraction $f'\colon X\to Y'$ of $R$. However, by uniqueness of contraction (see the proof of Theorem \ref{thm:contraction}), this implies that $D_3\sim_{\mathbf{Q}}f^*E_3$ for some $E_3\in\Div_{\mathbf{Q}}(Y)$. Thus $D\sim_{\mathbf{Q}}f^*(\sigma^{-1}(E_3-A))$, as desired. \end{proof} \begingroup \makeatletter \renewcommand{\@secnumfont}{\bfseries} \part{The relative MMP with scaling for schemes and algebraic spaces}\label{part:relativemmpforschemes} \makeatother \endgroup In this part, we establish the existence of flips and termination with scaling for schemes and algebraic spaces using Theorem \ref{thm:introfinitegen}. This completes the proof of Theorem \ref{thm:introrelativemmp}$(\ref{setup:introalgebraicspaces})$. We then give some applications of these results by showing that $\mathbf{Q}$-factorializations and terminalizations exist, which for simplicity we prove only for schemes.\bigskip \section{Birational contractions and \texorpdfstring{$\mathbf{Q}$}{Q}-factoriality} We start by setting up the necessary preliminaries for birational contractions. See \cite[\href{https://stacks.math.columbia.edu/tag/0ED7}{Tag 0ED7}]{stacks-project} for the definition of universally catenary algebraic spaces. \begin{lemma}[{Negativity Lemma; cf.\ \cite[Lemma 2.14]{BMPSTWW}}]\label{lem:Negativity} Let $h\colon X \to Y$ be a proper birational morphism of integral normal quasi-excellent Noetherian algebraic spaces over a scheme $S$ that are universally catenary or have dualizing complexes. Let $B$ be a Weil divisor on $X$ such that $[B]$ is the class of an invertible sheaf $\mathscr{L}$. Assume that $\mathscr{L}^{-1}$ is $h$-nef and that $h_*B$ is effective. Then $B$ is effective. \end{lemma} \begin{proof} After replacing $Y$ by an \'etale cover $Y' \to Y$, we may reduce to the case of schemes. Note that $Y'$ is quasi-excellent by definition, and is moreover excellent either because $Y$ is universally catenary, or because $Y'$ has a dualizing complex. 
Nefness of $\mathscr{L}^{-1}$ is preserved by Lemma \ref{lem:nefpullback}$(\ref{lem:nefpullbackalways})$. The effectivity of $B$ can be checked after flat base change. \par When $Y$ is a scheme and $h$ is projective, this is \cite[Lemma 2.14]{BMPSTWW}. The general case follows from Chow's Lemma \cite[Th\'eor\`eme 5.6.1]{EGAII}, since we may pass to an affine open cover of $Y$ and pullback along a birational morphism $X'\to X$. \end{proof} We now characterize the types of contractions that are possible as outputs of Theorem \ref{thm:contraction}. \begin{lemma}[cf.\ {\cite[Proposition 2.5]{KM98}}] \label{lem:Contraction3Types} Let $\pi\colon X\to Z$ be a projective surjective morphism of integral quasi-excellent Noetherian algebraic spaces over a scheme $S$ with $X$ normal and $\mathbf{Q}$-factorial. Let $R\subseteq \NEbar(X/Z)$ be an extremal ray. Let $f\colon X\to Y$ be a contraction of $R$ over $Z$. Then, exactly one of the following holds. \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item \label{lem:ContractionMori} $\dim X>\dim Y$. \item \label{lem:ContractionDivisorial} $f$ is birational, $\mathrm{Ex}(f)\subseteq X$ is a prime divisor. \item \label{lem:ContractionFlip} $f$ is birational, $\mathrm{Ex}(f)\subseteq X$ is of codimension $\geq 2$; i.e., $f$ is small (Definition \ref{def:GoodContrOfR}). \end{enumerate} \end{lemma} \begin{proof} It suffices to prove that if $f$ is birational and $\mathrm{Ex}(f)\subseteq X$ contains a prime divisor $E$, then $\mathrm{Ex}(f)=E$. Fix $n\in\mathbf{Z}_{>0}$ such that $[nE]$ is the Weil divisor class associated to an invertible sheaf. Assume not. Then there exists a point $\zeta\in Y$, not necessarily closed, such that $\pi^{-1}(\zeta)=\mathrm{Ex}(f)\cap \pi^{-1}(\zeta)\supsetneq E\cap \pi^{-1}(\zeta)$. By Zariski's Main Theorem, each irreducible component of $\pi^{-1}(\zeta)$ is positive-dimensional, and at least one of such is not contained in $E\cap\pi^{-1}(\zeta).$ Therefore there exists a one-dimensional integral closed subspace $C$ of $\pi^{-1}(\zeta)$ that is not contained in $E\cap \pi^{-1}(\zeta)$, so $(nE\cdot C)\geq 0$, where we use Remark \ref{rem:NefAgainstNonClosedContracted} to make sense of this intersection number. Since $\pi$ is projective, the class $[C]$ defined using Lemma \ref{lem:NefAgainstNonClosedContracted} is nonzero, and it belongs to $\NEbar(X/Y)$ by Lemma \ref{lem:NumericalClassOpenSubset}. As noted in Definition \ref{def:GoodContrOfR}, we have $R=\mathbf{R}_{\geq 0}[C]$, thus $nE$ is $f$-nef. Applying Lemma \ref{lem:Negativity} to the divisor $B=-nE$, we get a contradiction. \end{proof} \begin{lemma}[cf.\ \citeleft\citen{KMM87}\citemid Lemma 5-1-5 and Proposition 5-1-6\citepunct \citen{KM98}\citemid Corollary 3.18\citeright]\label{lem:ContractionQfac} In cases $(\ref{lem:ContractionMori})$ and $(\ref{lem:ContractionDivisorial})$ in Lemma \ref{lem:Contraction3Types}, if $f$ is a good contraction then $Y$ is $\mathbf{Q}$-factorial. \end{lemma} \begin{proof} In case $(\ref{lem:ContractionDivisorial})$, the proof is identical to the proof of \cite[Corollary 3.18]{KM98}. In case $(\ref{lem:ContractionMori})$, let $Y^\circ$ be the regular locus of $Y,$ which is open since $Y$ is quasi-excellent, and its complement is of codimension at least 2 since $Y$ is normal. Let $B$ be a prime divisor on $Y$. Then $B\cap Y^\circ$ is a prime divisor on $Y^\circ$ and is Cartier since $Y^\circ$ is regular. Therefore $f^{-1}(B\cap Y^\circ)$ is an effective Cartier divisor of $f^{-1}(Y^\circ)$ and we let $D$ be its closure in $X$. 
The class of $D$ is the class associated to a $\mathbf{Q}$-invertible sheaf $\tilde{D}$ since $X$ is $\mathbf{Q}$-factorial. Take $y\in Y^\circ$ not in $B$, and consider an integral curve $C\subseteq f^{-1}(y)$. As in the proof of Lemma \ref{lem:Contraction3Types}, $C$ defines a class $[C]\in N_1(X/Y)_{\mathbf{R}}$ and $R=\mathbf{R}_{\geq 0}[C]$. Since $D\cap f^{-1}(Y^\circ)=f^{-1}(B\cap Y^\circ)$, $(\tilde{D}\cdot C)=0$. Thus $(\tilde{D}\cdot R)=0$ and $\tilde{D}\sim_{\mathbf{Q}}f^*E$ for some $E\in\Pic_{\mathbf{Q}}(Y)$ as $f$ is a good contraction. Take $m\in\mathbf{Z}_{>0}$ such that $mE$ is integral and $m\tilde{D}\sim f^*(mE)$. Then there exists a global section $s$ of $\mathcal{O}_X(f^*(mE))$ with $\mathrm{div}(s)=mD$. Since $f_*\mathcal{O}_X=\mathcal{O}_Y$, we have a well-defined global section $f_*(s)$ of $\mathcal{O}_Y(mE)$ with $f^{-1}\mathrm{div}(f_*(s))=mD$. Thus by construction $\mathrm{div}(f_*(s))\cap Y^\circ=mB\cap Y^\circ$ and $\mathrm{div}(f_*(s))=mB$ since the complement of $Y^\circ$ is of codimension at least 2. Thus $mB\sim mE$ is Cartier and $B$ is $\mathbf{Q}$-Cartier. \end{proof} \section{Existence of flips} In this section, we show that flips exist. To do so, we first define flips. \begin{definition}[cf.\ {\cite[p.\ 335]{KMM87}}]\label{def:FLIPS} Let $\pi\colon X \to Z$ be a projective surjective morphism of integral quasi-excellent Noetherian algebraic spaces over a scheme $S$. Suppose that $X$ is normal and that $Z$ admits a dualizing complex $\omega_Z^\bullet$. Denote by $K_X$ a canonical divisor on $X$ defined using $\omega_X^\bullet = \pi^!\omega_Z^\bullet$. Let $\Delta\geq 0$ be a $\mathbf{Q}$-divisor on $X$ such that $K_X+\Delta$ is $\mathbf{Q}$-Cartier and that $(X,\Delta)$ is klt. Let $f\colon X\to Y$ be a small birational contraction over $Z$ (Definition \ref{def:GoodContrOfR}) such that $-(K_X+\Delta)$ is $f$-ample. A \textsl{flip} of $f$ is a proper birational morphism $f^+\colon X^+\to Y$ with the following properties. \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item\label{def:FLIPSnormal} $X^+$ is normal (and integral). \item\label{def:FLIPSsmall} The morphism $f^+$ is a small contraction. \item\label{def:FLIPSample} $K_{X^+}+\Delta^+$ is $\mathbf{Q}$-Cartier and $f^+$-ample where $\Delta^+$ is the strict transform of $\Delta$. \end{enumerate} Note that since $\mathrm{Ex}(f^+)\subseteq X^+$ is of codimension $\geq 2$, the strict transform operation $D\mapsto D^+$ induces an isomophism $\WDiv_{\mathbf{k}}(X)\cong \WDiv_{\mathbf{k}}(X^+)\ (\mathbf{k}=\mathbf{Z},\mathbf{Q}$ or $\mathbf{R}$) that preserves principal divisors and maps $K_X$ to a canonical divisor of $X^+$. Moreover, $f_*\mathcal{O}_X(D)=f^+_*\mathcal{O}_{X^+}(D^+)$ for all $D\in \Div(X)$. See for example \cite[Lemma 4(3)]{CL13}. A birational map $h\colon X\dashrightarrow X'$ of algebraic spaces over $Z$ is called a \textsl{flip} of the pair $(X,\Delta)$ if $h$ is isomorphic to the birational map $(f^+)^{-1}\circ f:X\dashrightarrow X^+$ for some $f,X^+$ as above. \end{definition} We can now show flips exist. The case for complex quasi-projective varieties is \cite[Corollary 1.4.1]{BCHM10} (cf.\ \cite[Theorem 5]{CL13}). \begin{theorem}\label{thm:FLIP} Let $\pi\colon X \to Z$ be a projective surjective morphism of integral quasi-excellent Noetherian algebraic spaces over a scheme $S$. Suppose that $X$ is normal and that $Z$ admits a dualizing complex $\omega_Z^\bullet$. Denote by $K_X$ a canonical divisor on $X$ defined using $\omega_X^\bullet = \pi^!\omega_Z^\bullet$. 
Let $\Delta\geq 0$ be a $\mathbf{Q}$-Weil divisor on $X$ such that $(X,\Delta)$ is klt. Let $f\colon X\to Y$ be a small contraction over $Z$ such that $-(K_X+\Delta)$ is $f$-ample. Then the following hold. \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item\label{thm:FLIPunique} A flip of $f$ is unique up to unique isomophism. \item\label{thm:FLIPisProj} If $Z$ is of equal characteristic zero, then the quasi-coherent $\mathcal{O}_Y$-algebra \[ \mathcal{A}\coloneqq\bigoplus_m f_*\mathcal{O}_X(\lfloor m(K_X+\Delta)\rfloor) \] is of finite type and $\PProj(\mathcal{A})$ is a flip of $f$. \end{enumerate} \end{theorem} \begin{proof} It is clear that if $f^+\colon X^+\to Y$ is a flip, then $X^+\cong\PProj(\mathcal{A})$ (see the proof of \cite[Lemma 6.2]{KM98}). Thus it suffices to show $(\ref{thm:FLIPisProj})$. Since $f$ is birational, $\Delta$ is $f$-big, thus Theorem \ref{thm:finitegenerationalgspaces} applies to show $\mathcal{A}$ is of finite type. Thus $X^+\coloneqq\PProj(\mathcal{A})$ is locally projective, in particular proper, over $Y$, and $X^+$ is normal and birational to $Y$ since $X$ is. The proof of the properties $(\ref{def:FLIPSsmall})$ and $(\ref{def:FLIPSample})$ as in Definition \ref{def:FLIPS} is the same as the proof of \cite[Proposition 5-1-11(2)]{KMM87}. \end{proof} \begin{lemma}\label{lem:FLIPQfacAndN1same} Notations and assumptions in Theorem \ref{thm:FLIP}. Assume further that $f$ is a good contraction of some extremal ray $R\subseteq \NEbar(X/Z)$. Let $X^+$ be a flip of $f$. Then the following hold. \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item\label{lem:FLIPQfac} $X^+$ is $\mathbf{Q}$-factorial if $X$ is. \item\label{lem:FLIPNefPullback} If $D\in \Pic_{\mathbf{Q}}(X)$ is $\pi$-nef and satisfies $(D\cdot R)=0$, then $D\sim_{\mathbf{Q}}f^*E$ for some $E\in\Pic_{\mathbf{Q}}(Y)$ nef over $Z$. \item\label{lem:FLIPN1Same} $D\mapsto D^+$ induces an isomorphism $N^1(X/Z)_{\mathbf{R}}\cong N^1(X^+/Z)_{\mathbf{R}}$. \end{enumerate} \end{lemma} \begin{proof} Let $D$ be a $\mathbf{Q}$-invertible sheaf on $X$. Since $R$ is a ray, there exists $a\in \mathbf{Q}$ such that \[ \bigl(D+a(K_X+\Delta)\cdot R\bigr)=0. \] Since $f$ is a good contraction, $D+a(K_X+\Delta)\sim_{\mathbf{Q}}f^*E$ for some $E\in \Pic_{\mathbf{Q}}(Y)$. Thus \begin{equation}\label{eq:FLIPQfacAndN1same} D^++a(K_{X^+}+\Delta^+)\sim_{\mathbf{Q}}f^{+*}E. \end{equation} Since $K_{X^+}+\Delta^+$ is $\mathbf{Q}$-Cartier (Definition \ref{def:FLIPS}$(\ref{def:FLIPSample})$), we see that $D^+$ is $\mathbf{Q}$-Cartier. Since every $\mathbf{Q}$-Weil divisor on $X^+$ is of the form $D^+$ for some $\mathbf{Q}$-Weil divisor $D$ on $X$, we see that $X^+$ is $\mathbf{Q}$-factorial if $X$ is. If $D\in \Pic_{\mathbf{Q}}(X)$ is $\pi$-nef and satisfies $(D\cdot R)=0$, then $a=0$, and $D\sim_{\mathbf{Q}}f^*E$ for some $E\in\Pic_{\mathbf{Q}}(Y)$. For each $(Y\to Z)$-contracted curve $C$, there exists a $\pi$-contracted curve $C'$ such that $C'$ maps finite surjectively to $C$. We know $(D\cdot C')\geq 0$, thus $(E\cdot C)\geq 0$ and $E$ is nef over $Z$ since $C$ was arbitrary. Now $[D^+]=[f^{+*}E]$ is nef over $Z$. If $[D]=0\in N^1(X/Z)_{\mathbf{R}}$, then we get $[(-D)^+]=-[D^+]$ nef over $Z$ as well, so $[D^+]=0$. This shows that $D\mapsto D^+$ induces a linear map $N^1(X/Z)_{\mathbf{R}}\to N^1(X^+/Z)_{\mathbf{R}}$, which is automatically surjective. 
If $[D^+]=0\in N^1(X^+/Z)_{\mathbf{R}}$, from the equation \eqref{eq:FLIPQfacAndN1same} and the fact $K_{X^+}+\Delta^+$ ample over $Y$ we see that $a=0$, so by the same argument we get $[E]=0\in N^1(Y/Z)_{\mathbf{R}}$ and thus $[D]=[f^*E]=0\in N^1(X/Z)_{\mathbf{R}}$. Thus the linear map $N^1(X/Z)_{\mathbf{R}}\to N^1(X^+/Z)_{\mathbf{R}}$ is an isomorphism. \end{proof} \begin{lemma}[cf. {\cite[Lemma 3.38]{KM98}}]\label{lem:DiscrepancyNonIncreasing} Let $Y$ be a quasi-excellent integral Noetherian algebraic space over a scheme $S$. Let $X$ and $X'$ be algebraic spaces projective over $Y$ that are integral, normal, and birational to $Y$. Suppose that $Y$ admits a dualizing complex $\omega_Y^\bullet$. Denote by $K_X$ and $K_X'$ canonical divisors on $X$ and $X'$ defined using the exceptional pullbacks of $\omega_Y^\bullet$. Let $\Delta\geq 0$ be a $\mathbf Q$-Weil divisor on $X$ such that $K_X+\Delta$ is $\mathbf{Q}$-Cartier. Let $\Delta'\geq 0$ be the birational transform of $\Delta$ on $X'$ and assume that $K_{X'}+\Delta'$ is $\mathbf{Q}$-Cartier. Assume that the following hold. \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item\label{lem:ConditionKNegative} $-(K_X+\Delta)$ is nef over $Y$. \item\label{lem:ConditionKPositive} $K_{X'}+\Delta'$ is nef over $Y$. \end{enumerate} Then for all divisors $E$ over $Y$, $a(E,X',\Delta')\leq a(E,X,\Delta)$, and if at least one of $K_X+\Delta$ and $K_{X'}+\Delta'$ is not numerically trivial over $Y$, then strict inequality holds for at least one such $E$. \end{lemma} \begin{proof} Consider a commutative diagram \[ \begin{tikzcd} W\rar{g'}\dar[swap]{g} &X'\dar\\ X\rar &Y \end{tikzcd} \] where $g,g'$ are birational and $W$ is integral and normal. We write \begin{align*} K_W\sim_{\mathbf{Q}} g^*(K_X+\Delta)&+\sum_F a(F,X,\Delta)F \intertext{and} K_W\sim_{\mathbf{Q}} {g'}^*(K_{X'}+\Delta')&+\sum_F a(F,X',\Delta')F \end{align*} as usual, so \[ {g'}^*(K_{X'}+\Delta')-g^*(K_X+\Delta)\sim_{\mathbf{Q}} \sum_F \left( a(F,X',\Delta')-a(F,X,\Delta) \right) F. \] By our assumptions $(\ref{lem:ConditionKNegative})$ and $(\ref{lem:ConditionKPositive})$, ${g'}^*(K_{X'}+\Delta')-g^*(K_X+\Delta)$ is nef. On the other hand, since $\Delta'$ is the birational transform of $\Delta$, $B\coloneqq-\sum_F\left( a(F,X',\Delta')-a(F,X,\Delta) \right) F$ is exceptional over $Y$. Therefore Lemma \ref{lem:Negativity} applies and shows that $B$ is effective, i.e., $a(F,X',\Delta')\leq a(F,X,\Delta)$ for all $F$ in the sum. Now for each divisor $E$ over $Y$, we may always find a diagram as above such that $E$ occurs as a prime divisor on $W$, so $a(E,X',\Delta')\leq a(E,X,\Delta)$. If at least one of $K_X+\Delta$ and $K_{X'}+\Delta'$ is not numerically trivial over $Y$, then $B$ is not numerically trivial over $Y$ so strict inequality must hold for some $F$. \end{proof} \begin{corollary}\label{cor:MMPpreservesKLT} Let $\pi\colon X \to Z$ be a projective surjective morphism of integral quasi-excellent Noetherian algebraic spaces over a scheme $S$. Suppose that $X$ is normal and that $Z$ admits a dualizing complex $\omega_Z^\bullet$. Denote by $K_X$ a canonical divisor on $X$ defined using $\omega_X^\bullet = \pi^!\omega_Z^\bullet$. Let $\Delta\geq 0$ be a $\mathbf{Q}$-Weil divisor on $X$ such that $K_X+\Delta$ is $\mathbf{Q}$-Cartier and that $(X,\Delta)$ is klt (resp.\ terminal). Let $f\colon X\to Y$ be a birational contraction over $Z$ such that $-(K_X+\Delta)$ is $f$-ample. Then the followings hold. 
\begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item\label{lem:ContrPreservesKLT} If $K_Y+f_*\Delta$ is $\mathbf{Q}$-Cartier, then $(Y,f_*\Delta)$ is klt (resp.\ terminal). \item\label{lem:FlipPreservesKLT} Assume that $f$ is small and assume that a flip $(X^+,\Delta^+)$ of $f$ exists. Then $(X^+,\Delta^+)$ is klt (resp.\ terminal). \end{enumerate} \end{corollary} \begin{proof} Immediate from definitions and Lemma \ref{lem:DiscrepancyNonIncreasing}. \end{proof} \begin{corollary}\label{cor:MMPpreservesKeffective} Let $\pi\colon X \to Z$ be a projective surjective morphism of integral quasi-excellent Noetherian algebraic spaces over a scheme $S$. Suppose that $X$ is normal and that $Z$ admits a dualizing complex $\omega_Z^\bullet$. Denote by $K_X$ a canonical divisor on $X$ defined using $\omega_X^\bullet = \pi^!\omega_Z^\bullet$. Let $\Delta\geq 0$ be a $\mathbf{Q}$-Weil divisor on $X$ such that $K_X+\Delta$ is $\mathbf{Q}$-Cartier and that $(X,\Delta)$ is klt (resp.\ terminal). Let $f\colon X\to Y$ be a birational contraction over $Z$ such that $-(K_X+\Delta)$ is $f$-ample. Then, the following hold. \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item\label{lem:ContrPreservesKeffective} Assume that $K_Y+f_*\Delta$ is $\mathbf{Q}$-Cartier. Then $K_Y+f_*\Delta$ is pseudoeffective over $Z$ if and if $K_X+\Delta$ is. \item\label{lem:FlipPreservesKeffective} Assume that $f$ is small and assume that a flip $(X^+,\Delta^+)$ of $f$ exists. Then $K_{X^+}+\Delta^+$ is pseudoeffective over $Z$ if and if $K_X+\Delta$ is. \end{enumerate} \end{corollary} \begin{proof} In either case we may replace $Z$ by the Stein factorization of $Y\to Z$. Furthermore, it is clear that taking generic fiber of $Z$ preserves assumptions and conclusions (see Definitions \ref{def:fbig}, \ref{def:GoodContrOfR}, \ref{def:FLIPS}), so we may assume that $Z$ is the spectrum of a field.\smallskip In case $(\ref{lem:ContrPreservesKeffective})$, by Lemma \ref{lem:DiscrepancyNonIncreasing} or rather its proof, we have \[ K_X+\Delta\sim_{\mathbf{Q}}f^*(K_Y+f_*\Delta)+E \] where $E$ is an effective exceptional $\mathbf{Q}$-Cartier divisor. Assume that $K_Y+f_*\Delta$ is pseudoeffective and let $D$ be an ample $\mathbf{R}$-Cartier divisor on $X$. We want to show $K_X+\Delta+D$ big. Let $H$ be an ample divisor on $X$ such that $D-f^*f_*H$ ample. By Lemma \ref{lem:ContrPreservesBig}, $f_*H$ is big, thus so is $K_Y+f_*\Delta+f_*H$. For a sufficiently divisible $m\in\mathbf{Z}_{>0}$ and $C \in \lvert m(K_Y+f_*\Delta+f_*H) \rvert$, we have \[ f^*C\sim mf^*(K_Y+f_*\Delta+f_*H)\sim m(K_X+\Delta-E)+mf^*f_*H. \] Therefore $\dim\lvert m(K_X+\Delta+f^*f_*H) \rvert\geq \dim\lvert m(K_Y+f_*\Delta+f_*H)\rvert$. This shows that $K_X+\Delta+f^*f_*H$ is big, hence so is $K_X+\Delta+D$. Conversely, assume that $K_X+\Delta$ is pseudoeffective and let $D$ be an ample $\mathbf{R}$-Cartier divisor on $Y$. We want to show $K_Y+f_*\Delta+D$ big. By perturbing the coefficients on $D$, we may assume $D$ is a $\mathbf{Q}$-Cartier divisor. We have \[ f^*(K_Y+f_*\Delta+D)\sim_{\mathbf{Q}}K_X+\Delta-E+f^*D \] and thus \[ K_Y+f_*\Delta+D\sim_{\mathbf{Q}}f_*(K_X+\Delta+f^*D). \] Since $f^*D$ is big and $K_X+\Delta$ is pseudoeffective, $K_X+\Delta+f^*D$ is big. By Lemma \ref{lem:ContrPreservesBig} we conclude.\smallskip Item $(\ref{lem:FlipPreservesKeffective})$ follows immediately from Lemma \ref{lem:ContrPreservesPsEff}. 
\end{proof} \begin{corollary}\label{cor:MMPnotCycleBack} Let $\pi\colon X \to Z$ be a projective surjective morphism of integral quasi-excellent Noetherian algebraic spaces over a scheme $S$. Suppose that $X$ is normal and that $Z$ admits a dualizing complex $\omega_Z^\bullet$. Denote by $K_X$ a canonical divisor on $X$ defined using $\omega_X^\bullet = \pi^!\omega_Z^\bullet$. Let $\Delta\geq 0$ be a $\mathbf{Q}$-Weil divisor on $X$ such that $K_X+\Delta$ is $\mathbf{Q}$-Cartier. Let $m\in \mathbf{Z}_{>0}$ and \[ X \coloneqq X_1 \overset{f_1}{\dashrightarrow} X_2 \overset{f_2}{\dashrightarrow} \cdots \overset{f_{m-1}}{\dashrightarrow} X_m \] be a sequence of birational maps over $Z$ such that each $X_i$ is normal. Let $\Delta_i$ be the birational transform of $\Delta$ on $X_i$ and assume that $K_{X_i}+\Delta_i$ is $\mathbf{Q}$-Cartier for all $i\leq m$. Assume that for each $i<m$, either $f_i$ is a morphism and a contraction with $-(K_{X_i}+\Delta_i)$ $f_i$-ample, or that $f_i$ is a flip of the pair $(X_i,\Delta_i)$; and assume that there exists an index $i_0<m$ such that $f_{i_0}$ is not an isomorphism. Then the composition $X\dashrightarrow X_m$ is not an isomorphism. \end{corollary} \begin{proof} By Lemma \ref{lem:DiscrepancyNonIncreasing}, there exists a divisor $E$ over $X_{i_0}$ such that \begin{align*} a(E,X_{i_0},\Delta_{i_0})&>a(E,X_{i_0+1},\Delta_{i_0+1}). \intertext{This divisor defines a divisor over each $X_i$ and we have} a(E,X,\Delta)&\geq a(E,X_{i_0},\Delta_{i_0})\\ a(E,X_{i_0+1},\Delta_{i_0+1})&\geq a(E,X_m,\Delta_m) \end{align*} by the same lemma. Thus $a(E,X,\Delta)>a(E,X_m,\Delta_m)$ and $X\dashrightarrow X_m$ is not an isomorphism. \end{proof} We check that contractions and flips behave well when we pass to an open subset of the base $Z$. The assumption on Picard groups below is satisfied when, for example, $X$ is integral, normal, and $\mathbf{Q}$-factorial. \begin{lemma}\label{lem:ContSmallerOpen} Let $\pi\colon X \to Z$ be a projective surjective morphism of integral quasi-excellent Noetherian algebraic spaces over a scheme $S$. Suppose that $X$ is normal and that $Z$ admits a dualizing complex $\omega_Z^\bullet$. Denote by $K_X$ a canonical divisor on $X$ defined using $\omega_X^\bullet = \pi^!\omega_Z^\bullet$. Let $R\subseteq \NEbar(X/Z)$ be an extremal ray and let $f\colon X\to Y$ be a contraction of $R$. Let $W$ be an open subspace of $Z$ and denote by $\square_W$ the base change of a $Z$-space or a $Z$-morphism $\square$ to $W$. Assume that $\Pic(X)_{\mathbf{Q}}\to \Pic(X_W)_{\mathbf{Q}}$ is surjective and that $f_W\colon X_W\to Y_W$ is not an isomorphism. Then $f_W$ is a contraction of an extremal ray $R^W\subseteq \NEbar(X_W/W)$. Moreover, if $f$ is a good contraction of $R$, $f_W$ is also a good contraction of $R^W$. \end{lemma} \begin{proof} Since $f_W\colon X_W\to Y_W$ is not an isomorphism, there exists a closed point $z\in W$ such that $\mathrm{Ex}(f)\subset X$ intersects the fiber $f^{-1}(z)$. In particular, $R^W\coloneqq\NEbar(X_W/Y_W)$ is nontrivial. Since $\Pic(X)_{\mathbf{Q}}\to \Pic(X_W)_{\mathbf{Q}}$ is surjective, $N^1(X/Z)_{\mathbf{R}}\to N^1(X_W/W)_{\mathbf{R}}$ is also surjective, thus the canonical map $N_1(X_W/W)_{\mathbf{R}}\to N_1(X/Z)_{\mathbf{R}}$ is injective. By Lemma \ref{lem:NumericalClassOpenSubset}, $R^W$ is sent into $\NEbar(X/Y)$, which equals to $R$ as noticed in Definition \ref{def:GoodContrOfR}. Thus $R^W$ is a ray, and it is clear that $f_W$ is the contraction of $R^W$. 
Now assume that $f$ is a good contraction and let $\mathscr{L}^W$ be an element in $\Pic(X_W)_{\mathbf{Q}}$. If we can write $\mathscr{L}^W=f^*(\mathscr{K}^W)\in\Pic(X_W)_{\mathbf{Q}}$ for some $\mathscr{K}^W\in\Pic(Y_W)_{\mathbf{Q}}$, then $(\mathscr{L}^W\cdot R^W)=0$ by the definition of $R^W$. Thus it suffices to show the converse. Since $\Pic(X)_{\mathbf{Q}}\to \Pic(X_W)_{\mathbf{Q}}$ is surjective, there exists $\mathscr{L}\in\Pic(X)_{\mathbf{Q}}$ such that $\mathscr{L}_{|X_W}=\mathscr{L}^W$. Now, if $(\mathscr{L}^W\cdot R^W)=0$, then $(\mathscr{L}\cdot R)=0$ since $R$ is a ray, and thus there exists $\mathscr{K}\in \Pic(Y)_{\mathbf{Q}}$ such that $\mathscr{L}=f^*\mathscr{K}\in\Pic(X)_{\mathbf{Q}}$. Thus $\mathscr{L}^W=f^*(\mathscr{K}_{|Y_W})\in\Pic(X_W)_{\mathbf{Q}}$, as desired. \end{proof} We now prove two lemmas that are important to the proof of termination. The first one is about the asymptotic order of vanishing (Definition \ref{def:AsymptoticOrder}). \begin{lemma} \label{lem:cl1352} Let $\pi_i\colon X_i \to Z\ (i=1,2)$ be two proper morphisms of Noetherian schemes, such that $X_1$ and $X_2$ are integral and normal and $Z$ is affine. Let $g\colon X_1\dashrightarrow X_2$ be a birational map over $Z$ that is an isomorphism in codimension 1. Let $v$ be a geometric valuation on $X_1$ (Definition \ref{def:GeomVal}). Then $v$ induces canonically a geometric valuation $g_*v$ on $X_2$, and for each $\mathbf{R}$-Weil divisor $D$ on $X_1$ with $|D|_\mathbf{R}\neq\emptyset$, we have $|g_*D|_\mathbf{R}\neq\emptyset$ and $o_v(D)=o_{g_*v}(g_*D)$. \end{lemma} \begin{proof} By definition, $v$ is given by a prime divisor $\Gamma$ in a scheme $Y$ birational and proper over $X_1$. By taking a resolution of the composition $Y\to X_1\dashrightarrow X_2$ we find $g_*v$. It is clear that for each effective $\mathbf{R}$-Weil divisor $E$ on $X_1$, we have $v(E)=g_*v(g_*E)$. If $D$ is an $\mathbf{R}$-Weil divisor on $X_1$ with $|D|_\mathbf{R}\neq\emptyset$, $g_*$ induces a bijection $|D|_\mathbf{R}\to |g_*D|_\mathbf{R}$, thus by definition $o_v(D)=o_{g_*v}(g_*D)$. \end{proof} The second is about a sufficient condition for a birational map to be a morphism. \begin{lemma}[cf. {\cite[Lemma 6]{CL13}}]\label{lem:cl1353} Let $\pi_i\colon X_i \to Z$ for $i \in \{1,2\}$ be two proper morphisms of excellent Noetherian schemes, such that $X_1$ and $X_2$ are integral and normal. Let $g\colon X_1\dashrightarrow X_2$ be a birational map over $Z$ that is an isomorphism in codimension 1. Assume that there exists a $\pi_1$-ample effective $\mathbf{Q}$-Cartier divisor $A$ on $X_1$ such that the birational transform $B\coloneqq g_*A$ is $\mathbf{Q}$-Cartier and $\pi_2$-nef. Then $g^{-1}$ is a morphism. \end{lemma} \begin{proof} By taking the normalization of the fiber product $X_1 \times_Z X_2$, there exists an integral normal scheme $W$ with proper birational morphisms $h_i\colon W\to X_i$ for $i \in \{1,2\}$ such that $g=h_2\circ h_1^{-1}$ as rational maps. Since $B=g_*A$ and since $g$ is an isomorphism in codimension 1, the $h_1$-exceptional divisors are exactly the $h_2$-exceptional divisors and we can write $h_2^*B+E=h_1^*A+F$ where $E,F$ are $h_1$-exceptional divisors. Since $B$ is $\pi_2$-nef, $h_2^*B$ is $h_1$-nef and thus so is $F-E=h_2^*B-h_1^*A$. By Lemma \ref{lem:Negativity} we see $E-F$ is effective, and by the same reason $F-E$ is effective. Thus $h_2^*B=h_1^*A$. Since $A$ is $\pi_1$-ample, we see that every $h_2$-contracted curve on $W$ must be $h_1$-contracted. 
By Lemma \ref{lem:ContractsNonClosed}, we see that every fiber of $h_2$ is mapped to a point under $h_1$, so there exists a continuous map of topological spaces $u\colon X_2\to X_1$ compatible with $h_1$ and $h_2$. Since $\mathcal{O}_{X_i}=h_{i*}\mathcal{O}_W$, this continuous map upgrades to a morphism of schemes and is the inverse of $g$ as a rational map. \end{proof} \section{Existence and termination of the relative MMP with scaling} In this section, following \cite{CL13}, we prove the termination of MMP under suitable assumptions. \begin{definition}[cf.\ {\cite[Definition 6.1]{CL13}}]\label{def:cl1361} Let $\pi\colon X \to Z$ be a projective surjective morphism of integral quasi-excellent Noetherian algebraic spaces of equal characteristic zero over a scheme $S$. Suppose that $X$ is normal and that $Z$ admits a dualizing complex $\omega_Z^\bullet$. Denote by $K_X$ a canonical divisor on $X$ defined using $\omega_X^\bullet = \pi^!\omega_Z^\bullet$. Let $\Delta\geq 0$ be a $\mathbf Q$-Weil divisor on $X$ such that $K_X+\Delta$ is $\mathbf{Q}$-Cartier and such that $(X,\Delta)$ is klt. For a $\mathbf{Q}$-invertible sheaf $D$ on $X$, the \textsl{$\pi$-nef threshold of the pair $(X,\Delta)$ with respect to $D$} is \[ \lambda(X/Z,\Delta,D)\coloneqq\inf\Set{t\in\mathbf{R}_{\geq 0}\given K_X+\Delta+tD\text{\ is\ }\pi\text{-nef}}\in \mathbf{R}_{\geq 0}\cup \{\infty\}. \] \end{definition} We now introduce a concept for the scaling divisor similar to that in \cite{CL13}. Note that in item $(\ref{lem:cl1362condition1})$ we need to pass to an open covering of the base, since we do not assume $Z$ affine. Even if $Z$ was affine, we still need to pass to an open covering since we do not have a global Bertini theorem. \begin{definition}\label{def:GoodScalingDivisors} Let $\pi\colon X \to Z$ be a projective surjective morphism of integral quasi-excellent Noetherian algebraic spaces of equal characteristic zero over a scheme $S$. Suppose that $X$ is normal and that $Z$ admits a dualizing complex $\omega_Z^\bullet$. Denote by $K_X$ a canonical divisor on $X$ defined using $\omega_X^\bullet = \pi^!\omega_Z^\bullet$. Let $\Delta\geq 0$ be a $\mathbf Q$-Weil divisor on $X$ such that $K_X+\Delta$ is $\mathbf{Q}$-Cartier and such that $(X,\Delta)$ is klt. We say a $\mathbf{Q}$-invertible sheaf $A$ on $X$ is a \textsl{good scaling divisor for the pair $(X,\Delta)$}, if the following conditions hold. \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item\label{lem:Abig} $A$ is $\pi$-big. \item\label{lem:K+Anef} $K_X+\Delta+A$ is $\pi$-nef. \item\label{lem:cl1362condition1} There exists an \'etale covering $Z=\bigcup_a V_a$ and $\mathbf{Q}$-Weil divisors $A_a\in |A_{|\pi^{-1}(V_a)}|_{\mathbf{Q}}$ such that $(\pi^{-1}(V_a),\Delta_{|\pi^{-1}(V_a)}+A_a)$ is klt. \end{enumerate} It is clear that base change to an open subset of the base preserves this property. \end{definition} The following lemma tells us that it is always possible to find a good scaling divisor. \begin{lemma}\label{lem:AmpleIsGoodScaling} Let $\pi\colon X \to Z$ be a projective surjective morphism of integral quasi-excellent Noetherian algebraic spaces of equal characteristic zero over a scheme $S$. Suppose that $X$ is normal and that $Z$ admits a dualizing complex $\omega_Z^\bullet$. Denote by $K_X$ a canonical divisor on $X$ defined using $\omega_X^\bullet = \pi^!\omega_Z^\bullet$. Let $\Delta\geq 0$ be a $\mathbf Q$-Weil divisor on $X$ such that $K_X+\Delta$ is $\mathbf{Q}$-Cartier and such that $(X,\Delta)$ is klt. 
Let $A$ be a $\pi$-ample $\mathbf{Q}$-Cartier divisor on $X$ such that $K_X+\Delta+A$ is $\pi$-nef. Then $A$ is a good scaling divisor for the pair $(X,\Delta)$. Moreover, if $Z$ is a scheme, then the cover in $(\ref{lem:cl1362condition1})$ can be chosen to be an affine cover. \end{lemma} \begin{proof} Items $(\ref{lem:Abig})$ and $(\ref{lem:K+Anef})$ in Definition \ref{def:GoodScalingDivisors} are clear. $(\ref{lem:cl1362condition1})$ follows from Corollary \ref{cor:BertiniKLTOpenCover} after passing to an \'etale cover by affine schemes. When $Z$ is a scheme, we can instead choose an open cover by affine schemes. \end{proof} We now prepare to prove the existence of the minimal model program with scaling. We start with the following definition, which is a version of a condition stated in Theorem \ref{thm:cl1342} for algebraic spaces. \begin{definition}\label{def:cl13thm4cond} Let $\pi\colon X \to Z$ be a projective surjective morphism of Noetherian algebraic spaces of equal characteristic zero over a scheme $S$, such that $X$ is integral and normal and such that $Z$ is quasi-excellent and has a dualizing complex $\omega_Z^\bullet$. Denote by $K_X$ a canonical divisor on $X$ defined using $\omega_X^\bullet = \pi^!\omega_Z^\bullet$. We define $\mathcal{A}=\mathcal{A}(X/Z)$ to be the set of classes $\mathbf{u}\in N^1(X/Z)_{\mathbf{R}}$ that satisfies the following condition: \begin{quote} There exists an \'etale covering $Z=\bigcup_a V_a$ such that for each index $a$, there exists a $\mathbf Q$-Weil divisor $\Delta_a\geq 0$ on $\pi^{-1}(V_a)$ with $K_{\pi^{-1}(V_a)}+\Delta_a$ $\mathbf{Q}$-Cartier and $(\pi^{-1}(V_a),\Delta_a)$ klt, a positive real number $c_a$, and a class $\mathbf{w}_a\in \Amp(\pi^{-1}(V_a)/V_a)$ such that the restriction of $\mathbf{u}$ to $N^1(\pi^{-1}(V_a)/V_a)$ (Lemma \ref{lem:NumericalClassOpenSubset}) equals to $c_a[K_{\pi^{-1}(V_a)}+\Delta_a]+\mathbf{w}_a$. \end{quote} \end{definition} \begin{lemma}\label{lem:K+Delta+lambdaAInmathcalA} Let $\pi\colon X \to Z$ be a projective surjective morphism of Noetherian algebraic spaces of equal characteristic zero over a scheme $S$, such that $X$ is integral and normal and such that $Z$ is quasi-excellent and has a dualizing complex $\omega_Z^\bullet$. Denote by $K_X$ a canonical divisor on $X$ defined using $\omega_X^\bullet = \pi^!\omega_Z^\bullet$. Let $\Delta\geq 0$ be a $\mathbf Q$-Weil divisor on $X$ such that $K_X+\Delta$ is $\mathbf{Q}$-Cartier and such that $(X,\Delta)$ is klt. Assume that $K_X +\Delta$ is not $\pi$-nef, and let $A$ be a good scaling divisor for the pair $(X,\Delta)$. Let $\lambda\in [0,1]\subseteq \mathbf{R}$. Then, the class $\mathbf{u}\coloneqq[K_X+\Delta+\lambda A]$ belongs to the set $\mathcal{A}$ as in Definition \ref{def:cl13thm4cond}, and we can further require that the numbers $c_a=1$. \end{lemma} \begin{proof} Passing to an affine \'etale covering of $Z$, we may assume that $Z$ is an affine scheme, and that \begin{align}\label{Originalcl1362condition1} A\geq 0\qquad \text{and}\qquad (X,\Delta+A)\ \text{is klt}. \end{align} Write $A= H+E$, where $H$ is a $\pi$-ample $\mathbf{Q}$-Cartier divisor and $E\geq 0$. This is possible by Lemma \ref{lem:BigWeilIsAmplePlusEffective}. 
Choose $\epsilon\in\mathbf{Q}_{>0}$ such that $\epsilon<\lambda$ and that $(X,A+\Delta+\epsilon E)$ klt, which is possible by Lemma \ref{lem:kltFacts}$(\ref{lem:kltContinuous})$, since log resolutions exist for excellent $\mathbf{Q}$-schemes \cite[Theorem 2.3.6]{Tem08}; and we choose $\delta\in\mathbf{R}_{>0}$ such that $\lambda-\epsilon-\delta\in\mathbf{Q}_{>0}$ and that $\epsilon H+\delta A$ is $\pi$-ample. Set $\Delta'=\Delta+(\lambda-\epsilon-\delta)A+\epsilon E$ and $H'=\epsilon H+\delta A$. Then, by our choice (and Lemma \ref{lem:kltFacts}$(\ref{lem:kltSmaller})$), $H'$ is a $\pi$-ample $\mathbf{R}$-divisor, $\Delta'$ is an effective $\mathbf{Q}$-Weil divisor with $K_X+\Delta'$ $\mathbf{Q}$-Cartier and $(X,\Delta')$ klt and we have \[ K_X+\Delta+\lambda A=K_X+\Delta+\epsilon E+(\lambda-\epsilon)A+\epsilon H=K_X+\Delta'+H', \] as desired. \end{proof} \begin{lemma}[cf.\ {\citeleft\citen{KM98}\citemid \S3.1\citepunct \citen{CL13}\citemid Lemma 8\citeright}]\label{lem:cl1362} Let $\pi\colon X \to Z$ be a projective surjective morphism of Noetherian algebraic spaces of equal characteristic zero over a scheme $S$, such that $X$ is integral and normal and such that $Z$ is quasi-excellent and has a dualizing complex $\omega_Z^\bullet$. Denote by $K_X$ a canonical divisor on $X$ defined using $\omega_X^\bullet = \pi^!\omega_Z^\bullet$. Let $\Delta\geq 0$ be a $\mathbf Q$-Weil divisor on $X$ such that $K_X+\Delta$ is $\mathbf{Q}$-Cartier and such that $(X,\Delta)$ is klt. Assume that $K_X +\Delta$ is not $\pi$-nef, and let $A$ be a good scaling divisor for the pair $(X,\Delta)$. Let $\lambda = \lambda(X/Z, \Delta, A)$ be the $\pi$-nef threshold. Then, $\lambda\in\mathbf{Q}_{>0}$, and there exists an extremal ray $R\subseteq\overline{\mathrm{NE}}(X/Z)$ with a good contraction with target projective over $Z$, and satisfies $(K_X +\Delta +\lambda A)\cdot R = 0$ and $(K_X +\Delta )\cdot R\setminus\{0\} < 0.$ \end{lemma} \begin{proof} By Lemma \ref{lem:K+Delta+lambdaAInmathcalA}, $\mathbf{u}:=[K_X+\Delta+\lambda A]$ belongs to the set $\mathcal{A}$ as in Definition \ref{def:cl13thm4cond}. By the definition of $\lambda$, $\mathbf{u}\in\partial\Nef(X/Z)$, so we can apply the Cone Theorem \ref{thm:kmmcone} (or Theorem \ref{thm:cl1342}$(\ref{thm:cl1342Precise})(\ref{thm:cl1342Formal})$ in the scheme case) to conclude that there exist finitely many rational supporting hyperplanes $W_1,\ldots,W_m$ of $\mathrm{Nef}(X/Z)$ cutting out closed half-spaces $W_1^+,\ldots,W_m^+$ such that, for some small open rational polytope $P$ containing $\mathbf{u}$, $P\cap \mathrm{Nef}(X/Z)=P\cap(W_1^+\cup\ldots\cup W_m^+).$ Since the spaces $W_i$ are rational, it is now clear that $\lambda\in\mathbf{Q}$ by the definition, and $\lambda\in\mathbf{Q}_{>0}$ since $K_X+\Delta$ is not $\pi$-nef. Finally, we show the existence of a desired ray $R$. Shrinking $P$ if necessary, we may assume $\mathbf{u}\in W_i$ for all $i$. Since $\mathbf{u}-\sigma [A]\not\in \mathrm{Nef}(X/Z)$ for all $\sigma\in (0,\lambda)$ by the definition of $\lambda$, we see that $-[A]\not\in W_i^+$ for some $i$. We may thus take $R$ to be the extremal ray dual to $W_i$, see Definition \ref{def:SuppPlaneAndDualRay}. Then $R$ is an extremal ray and $(K_X+\Delta+\lambda A)\cdot R=0$ since $\mathbf{u}\in W_i$. Since $-[A]\not\in W_i^+$, $A\cdot R>0$, so $(K_X+\Delta)\cdot R<0$. The fact $R$ has a good contraction with projective target follows from Lemma \ref{lem:CL13S4ProducesGoodContractions}. 
\end{proof} We can now prove the existence of the relative minimal model program with scaling. By Lemma \ref{lem:AmpleIsGoodScaling}, this implies the existence part of Theorem \ref{thm:introrelativemmp}$(\ref{setup:introalgebraicspaces})$. \begin{theorem}\label{rem:MMP} Let $\pi\colon X \to Z$ be a projective surjective morphism of Noetherian algebraic spaces of equal characteristic zero over a scheme $S$, such that $X$ is integral and normal and such that $Z$ is quasi-excellent and has a dualizing complex $\omega_Z^\bullet$. Denote by $K_X$ a canonical divisor on $X$ defined using $\omega_X^\bullet = \pi^!\omega_Z^\bullet$. \par Suppose $X$ is $\mathbf{Q}$-factorial and let $\Delta$ be a $\mathbf{Q}$-divisor such that $(X,\Delta)$ is klt. Let $A$ be a good scaling divisor for $(X,\Delta)$. Then, the relative minimal model program with scaling of $A$ over $Z$ exists. \end{theorem} \begin{proof} First, find a ray $R$ as in Lemma \ref{lem:cl1362}, and let $h\colon X\to Y$ be a good contraction of $R$. If $\dim Y<\dim X$, we do nothing further and say that the minimal model program of $(X,\Delta)$ over $Z$ with the scaling of $A$ terminates with a Mori fibration. Otherwise $f$ is birational. By Lemma \ref{lem:Contraction3Types}, $\mathrm{Ex}(h)\subseteq X$ is either a prime divisor, in which case we let $X'=Y$, or is of codimension $\geq 2$, in which case we let $X'$ be a flip of $h$, which exists (Theorem \ref{thm:FLIP}$(\ref{thm:FLIPisProj})$). Denote by $h'$ and $\pi'$ the maps from $X'$ to $Y$ and $Z$ respectively. Let $K_{X'}$ be the birational transform of $K_X$ on $X'$, which is a canonical divisor of $X'$; let $\Delta'$ and $A'$ be the birational transforms of $\Delta$ and $A$ respectively. We note that $Y$ is projective over $Z$, hence so is $X'$; $X'$ is integral and normal, see Definitions \ref{def:GoodContrOfR} and \ref{def:FLIPS}; $X'$ is $\mathbf{Q}$-factorial, see Lemmas \ref{lem:ContractionQfac} and \ref{lem:FLIPQfacAndN1same}. The pair $(X',\Delta')$ is klt by Corollary \ref{cor:MMPpreservesKLT}. \smallskip We now verify that $\lambda A'$ is a good scaling divisor for the pair $(X',\Delta')$. Since $A$ is $\pi$-big, so is $\lambda A$, and we see that $\lambda A'$ is $\pi'$-big from Lemma \ref{lem:ContrPreservesBig}. We know that $K_X+\Delta+\lambda A\sim_{\mathbf{Q}}h^*E$ for some effective $\mathbf{Q}$-Cartier divisor $E$ on $Y$, so $K_{X'}+\Delta'+\lambda A'\sim_{\mathbf{Q}}{h'}^*E$ and therefore $K_{X'}+\Delta'+\lambda A'$ is $\pi'$-nef. It remains to verify $(\ref{lem:cl1362condition1})$ in Definition \ref{def:GoodScalingDivisors}. Notice that birational transform preserves $\mathbf{Q}$-linear equivalence, thus after passing to an \'etale covering of $Z$, we may assume $A\geq 0$ and $(X,\Delta+A)$ klt, and thus $(X,\Delta+\lambda A)$ is klt. By construction, $K_X+\Delta+\lambda A$ is $h$-numerically trivial, and $K_{X'}+\Delta'+\lambda A'$ is $h'$-numerically trivial. By Lemma \ref{lem:DiscrepancyNonIncreasing}, $a(E,X',\Delta'+\lambda A')\leq a(E,X,\Delta+\lambda A)$ for all divisors $E$ over $Y$ and thus $(X',\Delta'+\lambda A')$ is klt, as desired. \smallskip Therefore, the new datum $(X',\Delta',\lambda A')$ satisfies the same assumptions as the datum $(X,\Delta,A)$, except that it is now possible (and desirable) that $K_{X'}+\Delta'$ is $\pi'$-nef, in which case we say that the minimal model program of $(X,\Delta)$ over $Z$ with the scaling of $A$ terminates with a minimal model. Otherwise, we start over with $(X',\Delta',\lambda A')$. 
If, after finitely many steps, we arrive at the situation $\dim X>\dim Y$ (resp. $K_{X'}+\Delta'$ is $\pi'$-nef), we say the minimal model program of $(X,\Delta)$ over $Z$ with the scaling of $A$ terminates with a Mori fibration (resp. a minimal model). Otherwise, we will get an infinite sequence \[ (X_1, \Delta_1, \lambda_1A_1) \overset{f_1}{\dashrightarrow} \cdots \overset{f_{i-1}}{\dashrightarrow} (X_i, \Delta_i, \lambda_iA_i) \overset{f_{i}}{\dashrightarrow} (X_{i+1}, \Delta_{i+1}, \lambda_{i+1}A_{i+1}) \overset{f_{i+1}}{\dashrightarrow} \cdots \] where $X_1=X$, $\Delta_i$ and $A_i$ are the birational transforms of $\Delta,A$ respectively, $(X_i,\Delta_i,\lambda_{i-1}A_i)$ satisfies the assumptions of Lemma \ref{lem:cl1362}, $\lambda_i=\lambda_{i-1}.\lambda(X_i/Z,\Delta_i,\lambda_{i-1}A_i)\leq \lambda_{i-1}$, and $f_i$ is either a birational contraction or a flip corresponding to a ray as in Lemma \ref{lem:cl1362}. Since the number $\dim\left(N^1(X_i/Z)_{\mathbf{R}}\right)$ decreases for a (good) contraction $f_i$ (Definition \ref{def:GoodContrOfR}) and remains unchanged for a flip $f_i$ (Lemma \ref{lem:FLIPQfacAndN1same}$(\ref{lem:FLIPN1Same})$), we see that all but finitely many $f_i$ are flips. \end{proof} We now prove that a sequence of flips always terminates with additional bigness conditions. By Lemma \ref{lem:AmpleIsGoodScaling}, this completes the proof of Theorem \ref{thm:introrelativemmp}$(\ref{setup:introalgebraicspaces})$. \begin{theorem}[cf. {\cite[Theorem 6]{CL13}}]\label{thm:cl1365} Let $\pi_1\colon X_1 \to Z$ be a projective morphism of Noetherian algebraic spaces of equal characteristic zero over a scheme $S$, such that $X_1$ is integral, normal, and $\mathbf{Q}$-factorial, and such that $Z$ is quasi-excellent and has a dualizing complex $\omega_Z^\bullet$. Denote by $K_{X_1}$ a canonical divisor on $X_1$ defined using $\omega_{X_1}^\bullet = \pi^!\omega_Z^\bullet$. Let $\Delta_1$ be an effective $\mathbf{Q}$-divisor on $X_1$ such that $(X_1,\Delta_1)$ is klt. Let $A_1$ be a good scaling divisor for the pair $(X_1, \Delta_1)$, and let $\lambda_1 = \lambda(X_1, \Delta_1, A_1)$. Assume that $cK_{X_1}+\Delta_1$ is $\pi_1$-big for some rational number $c\in(-\infty, 1]$. Then, any sequence \[ (X_1, \Delta_1, \lambda_1A_1) \overset{f_1}{\dashrightarrow} \cdots \overset{f_{i-1}}{\dashrightarrow} (X_i, \Delta_i, \lambda_iA_i) \overset{f_{i}}{\dashrightarrow} (X_{i+1}, \Delta_{i+1}, \lambda_{i+1}A_{i+1}) \overset{f_{i+1}}{\dashrightarrow} \cdots \] of flips of the Minimal Model Program with scaling of $A_1$ terminates. \end{theorem} \begin{proof} If the sequence of flips does not terminate, we can find an \'etale affine $W \to Z$ such that, denoting by $U_i$ the base change $X_i\times_Z W$, the birational map $f_{i|U_i}:U_i\dashrightarrow U_{i+1}$ is not an isomorphism (thus a flip of a suitable contraction, see Lemma \ref{lem:ContSmallerOpen}) for infinitely many $i$. By Lemma \ref{lem:K+Delta+lambdaAInmathcalA}, $[K_{X_1}+\Delta_1+\lambda_1 A_1]$ belongs to the set $\mathcal{A}(X_1/Z)$ as in Definition \ref{def:cl13thm4cond}, and we may require the numbers $c_a=1$. Therefore, after possibly shrinking $W$, we may assume that there exists a $\mathbf{R}$-divisor $\Delta_1'\geq 0$ and a $\pi_{|U_1}$-ample $\mathbf{R}$-divisor $H_1'$ on $U_1$ such that $(U_1,\Delta_1')$ is klt and $[K_{U_1}+\Delta_{1U_1}+\lambda_1 A_{1U_1}]=[K_{X_1}+\Delta_1'+H_1']\in N^1(U_1/V)_{\mathbf{R}}$. 
Since $K_{U_1}+\Delta_{1U_1}+\lambda_1 A_{1U_1}$ is a $\mathbf{Q}$-divisor by Lemma \ref{lem:cl1362}, we may assume that $\Delta_1'$ and $H_1'$ are $\mathbf{Q}$-divisors. We may find $\mathbf{Q}$-divisors $D_1^1,\cdots,D_1^m$ on $U_1$ such that the convex hull $P_1$ of $\{[D_1^1],\cdots,[D_1^m]\}$ is a rational polytope containing $[K_{U_1}+\Delta_{1|U_1}+\lambda_1 A_{1|U_1}]$ in its interior and is contained in $\mathcal{A}$, and that $D_1^a-K_{U_1}-\Delta_1'$ is ample for all indices $a$. Let $R_1:=R(U_1/V;K_{U_1}+\Delta_{1|U_1},D_1^1,\cdots,D_1^m)$. By Theorem \ref{thm:cl1332}, $R_1$ is finitely generated over $H^0(W,\mathcal{O}_W)$. Write $g_i=f_{i|U_i}\circ\cdots\circ f_{1|U_1}:U_1\dashrightarrow U_{i+1}\ (i\geq 0)$, $D_{i+1}^a=g_{i*}D_1^a$ and $R_{i}=R(U_{i+1}/V;K_{U_i}+\Delta_{i|U_i},D_i^1,\cdots,D_i^m).$ Then each $g_i$ induces an isomorphism $R_1\cong R_{i+1}$, so each $R_{i}$ is finitely generated over $H^0(W,\mathcal{O}_W)$. Put $V_{i}=\mathbf{R}(K_{U_i}+\Delta_{i|U_i})+\sum_a\mathbf{R} D_i^a$, so we have a commutative diagram \[ \begin{tikzcd} V_1\rar{f_{1*}}\dar{\varphi_1} &\cdots\rar{f_{i-1,*}} &V_i\dar{\varphi_i}\rar{f_{i*}}& V_{i+1}\dar{\varphi_{i+1}}\rar{f_{i+1,*}} &\cdots\\ N^1(U_1/W)_{\mathbf{R}}\rar{f_{1*}} &\cdots\rar{f_{i-1,*}} &N^1(U_i/W)_{\mathbf{R}}\rar{f_{i*}}& N^1(U_{i+1}/W)_{\mathbf{R}}\rar{f_{i+1,*}} &\cdots \end{tikzcd} \] in which $\varphi_i$ are the canonical maps as in Lemma \ref{lem:cl1339}, and $f_j$ by abuse of notation means $f_{j|U_j}$. Notice that by Lemma \ref{lem:FLIPQfacAndN1same}, the horizontal arrows are all isomorphisms of real vector spaces. Let $Q_i$ be the convex hull of $\{[K_{U_i}+\Delta_{i|U_i}],[D_i^1],\cdots,[D_i^m]\}$. Then by construction, $Q_1$ contains $[K_{U_1}+\Delta_{1|U_1}+\lambda A_{1|U_1}]$ in its interior for all positive $\lambda\leq \lambda_1$, thus $Q_i$ contains $[K_{U_i}+\Delta_{i|U_i}+\lambda_i A_{i|U_i}]\in\Nef(U_i/W)$ in its interior. Therefore $Q_i\cap \Amp(U_i/W)\neq\emptyset$, so $\mathrm{Supp}(R_i)\cap \Amp(U_i/W)\neq\emptyset$. Let $\mathrm{Supp}(R_1)=\sqcup_p \mathcal{C}_1^p$ be the coarsest subdivision into rational polyhedral cones such that $o_v$ is linear on each $\mathcal{C}_1^p$, see Theorem \ref{thm:cl1335}$(\ref{thm:cl13353})$. Write $\mathcal{C}_{i+1}^p=g_{i*}\mathcal{C}_1^p$, we see from Lemma \ref{lem:cl1352} that $\mathrm{Supp}(R_i)=\sqcup_p \mathcal{C}_i^p$ is the coarsest subdivision into rational polyhedral cones such that $o_v$ is linear on each $\mathcal{C}_i^p$. Now since $\mathrm{Supp}(R_i)\cap \Amp(U_i/W)\neq\emptyset$, by Lemma \ref{lem:cl1339} we see that for each $i$ there exists a $p_i$ such that $\varphi_i^{-1}(\Nef(X_i/Z))\cap\mathrm{Supp}(R_i)=\mathcal{C}_i^{p_i}$. Since there exists only finitely many indices $p$, there exists an $i$ and infinitely many $j>i\geq 1$ such that $p_j=p_i$. We pick a $j$ such that there exists $k\in\mathbf{Z}$, $i\leq k<j$ with $f_{k|U_k}$ not an isomorphism. Since $p_i=p_j$, there exists a $\pi$-ample $\mathbf{Q}$-Cartier divisor $H_i$ on $U_i$ and a $\pi$-nef $\mathbf{Q}$-Cartier divisor $H_j$ on $U_j$ such that $H_j$ is the birational transform of $H_i$. By Lemma \ref{lem:cl1353}, the rational map $f_{j-1|U_{j-1}}\circ\cdots\circ f_{i|U_i}:U_i\dashrightarrow U_j$ is a morphism. By symmetry, we have that $f_{i|U_i}^{-1}\circ\cdots\circ f_{j-1|U_{j-1}}^{-1}:U_j\dashrightarrow U_i$ is a morphism as well. By \cite[Lemma 7]{CL13}, they are isomorphisms inverse to each other. 
However, $f_{j-1|U_{j-1}}\circ\cdots\circ f_{i|U_i}$ is a composition of several isomorphisms and at least one flip, so it cannot be an isomorphism by Corollary \ref{cor:MMPnotCycleBack}, a contradiction. \end{proof} \begin{corollary}[cf. {\cite[Corollary 4]{CL13}}]\label{cor:cl1366} Let $\pi\colon X \to Z$ be a projective morphism of Noetherian algebraic spaces of equal characteristic zero over a scheme $S$, such that $X$ is integral, normal and $\mathbf{Q}$-factorial and such that $Z$ is quasi-excellent and has a dualizing complex $\omega_Z^\bullet$. Denote by $K_{X_1}$ a canonical divisor on $X_1$ defined using $\omega_{X_1}^\bullet = \pi^!\omega_Z^\bullet$. Let $\Delta\geq 0$ be a $\mathbf Q$-Weil divisor on $X$ such that $K_X+\Delta$ is $\mathbf{Q}$-Cartier and such that $(X,\Delta)$ is klt. Let $A$ be a good scaling divisor for the pair $(X,\Delta)$. Assume that $cK_{X}+\Delta$ is $\pi$-big for some rational number $c\in(-\infty, 1]$. Then the following hold. \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item\label{cor:cl13661} If $K_X+\Delta$ is $\pi$-pseudoeffective and $\Delta$ is $\pi$-big, then any process of the Minimal Model Program of the pair $(X,\Delta)$ over $Z$ with the scaling of $A$ terminates with a minimal model. \item\label{cor:cl13662} If $K_X+\Delta$ is not $\pi$-pseudoeffective, then any process of the Minimal Model Program of the pair $(X,\Delta)$ over $Z$ with the scaling of $A$ terminates with a Mori fibration. \end{enumerate} \end{corollary} \begin{proof} At the end of the proof of Theorem \ref{rem:MMP}, we have noticed that if the process does not terminate, we will have an infinite sequence of flips. Our assumption and Theorem \ref{thm:cl1365} ensures that such an infinite sequence cannot exist, so the process terminates. Since whether or not $K_X+\Delta$ is $\pi$-pseudoeffective will not change in the process (Corollary \ref{cor:MMPpreservesKeffective}), we see that if the process terminates and $K_X+\Delta$ is $\pi$-pseudoeffective (resp.\ not $\pi$-pseudoeffective) then the process terminates with a minimal model (resp.\ Mori fibration), as desired. \end{proof} \begin{corollary}[cf. {\cite[Corollary 5]{CL13}}]\label{cor:cl1367} Let $\pi\colon X \to Z$ be a projective morphism of Noetherian algebraic spaces of equal characteristic zero over a scheme $S$, such that $X$ is integral, normal and $\mathbf{Q}$-factorial and such that $Z$ is quasi-excellent and has a dualizing complex $\omega_Z^\bullet$. Denote by $K_{X_1}$ a canonical divisor on $X_1$ defined using $\omega_{X_1}^\bullet = \pi^!\omega_Z^\bullet$. Let $\Delta\geq 0$ be a $\mathbf Q$-Weil divisor on $X$ such that $K_X+\Delta$ is $\mathbf{Q}$-Cartier and such that $(X,\Delta)$ is klt. Let $A$ be a good scaling divisor for the pair $(X,\Delta)$. If $K_X+\Delta$ is not $\pi$-pseudoeffective, then any process of the Minimal Model Program of the pair $(X,\Delta)$ over $Z$ with the scaling of $A$ terminates with a Mori fibration. \end{corollary} \begin{proof} Reasoning as in the proof of Corollary \ref{cor:cl1366}, it suffices to show any sequence of flips \[ (X_1, \Delta_1, \lambda_1A_1) \overset{f_1}{\dashrightarrow} \cdots \overset{f_{i-1}}{\dashrightarrow} (X_i, \Delta_i, \lambda_iA_i) \overset{f_{i}}{\dashrightarrow} (X_{i+1}, \Delta_{i+1}, \lambda_{i+1}A_{i+1}) \overset{f_{i+1}}{\dashrightarrow} \cdots \] as in the proof of Theorem \ref{rem:MMP} terminates, and as in the proof of Theorem \ref{thm:cl1365}, we may replace $Z$ by any \'etale affine whose image in $Z$ intersects $\pi(X)$. 
By Condition $(\ref{lem:cl1362condition1})$ in Definition \ref{def:GoodScalingDivisors}, we may thus assume that there exists $A'\in |A|_{\mathbf{Q}}$ such that $(X,\Delta+A')$ is klt. Let $A'_i\in |A_i|_{\mathbf{Q}}$ be the birational transform of $A'$ on $X_i.$ Let $\mu\in \mathbf{Q}_{>0}$ be such that $K_X+\Delta+\mu A$ not $\pi$-pseudoeffective. We know from Lemma \ref{lem:ContrPreservesPsEff} that $K_{X_i}+\Delta_i+\mu A_i$ is not pseudoeffective over $Z$ for each $i$, so $\lambda_i>\mu$. The divisor $K_{X_i}+\Delta_i+\mu A'_i$ is $\mathbf{Q}$-linearly equivalent to the combination $(1-r)(K_{X_i}+\Delta_i)+r(K_{X_i}+\Delta_i+\lambda_iA_i)$ where $r=r_i:=\frac{\mu}{\lambda_i}\in (0,1)$. Thus the sequence of flips of concern is also a sequence of flips for the pair $(X,\Delta+\mu A')$ with the scaling of $(1-\mu)A$, in symbols \[ \biggl(X_1, \Delta_1+\mu A'_1, \frac{\lambda_1-\mu}{1-\mu}(1-\mu)A_1\biggr) \overset{f_1}{\dashrightarrow} \cdots \overset{f_{i-1}}{\dashrightarrow} \cdots \biggl(X_i, \Delta_i+\mu A'_i, \frac{\lambda_i-\mu}{1-\mu}(1-\mu)A_i\biggr) \overset{f_{i}}{\dashrightarrow} \cdots. \] Such a sequence terminates by Theorem \ref{thm:cl1365} (with $c=0$), as $\Delta+\mu A'$ is $\pi$-big. \end{proof} \section{Existence of \texorpdfstring{$\mathbf{Q}$}{Q}-factorializations and terminalizations for schemes} In this section, we show that $\mathbf{Q}$-factorializations and terminalizations exist for klt pairs. For simplicity, we restrict to the case of schemes. \begin{theorem}\label{thm:TerminalizationGeneral} Let $X$ be an integral normal Noetherian scheme that has a dualizing complex $\omega_X^\bullet$ with associated canonical divisor $K_X$. Let $\Delta$ an effective $\mathbf{R}$-Weil divisor on $X$ such that $K_X+\Delta$ is $\mathbf{R}$-Cartier and $(X,\Delta)$ is klt. Assume that $X$ is an excellent scheme of equal characteristic zero. Let $g\colon Y\to X$ be a projective log resolution, and let $\mathcal{E}$ be a set of $g$-exceptional prime divisors with negative discrepancy with respect to $(X,\Delta)$. Then, there exists a projective birational morphism $h\colon Z\to X$ with $Z$ $\mathbf{Q}$-factorial, such that the $h$-exceptional prime divisors are exactly the birational transforms of divisors in the set $\mathcal{E}$. \end{theorem} \begin{proof} By \cite[Proposition 2.21]{Kol13}, there exists an effective $\mathbf{Q}$-Weil divisor $\Delta'$ on $X$ such that the support of $\Delta$ and $\Delta'$ are the same, that $K_X+\Delta'$ is $\mathbf{Q}$-Cartier and $(X,\Delta')$ is klt, and that each divisor in $\mathcal{E}$ has negative discrepancy with respect to the pair $(X,\Delta')$. We may thus replace $\Delta$ by $\Delta'$ and assume our $\Delta$ is a $\mathbf{Q}$-divisor such that $K_X+\Delta$ is $\mathbf{Q}$-Cartier. Write \[ K_Y+\Delta_Y\sim_\mathbf{Q} g^*(K_X+\Delta)+\Gamma \] where \[ \Delta_Y=\sum_{E\subset Y,a(E,X,\Delta)\leq 0}-a(E,X,\Delta)E. \] and \[ \Gamma=\sum_{E\subset Y,a(E,X,\Delta)> 0}a(E,X,\Delta)E. \] Since $(X,\Delta)$ is klt, the coefficients of $\Delta_Y$ are less than 1, so $(Y,\Delta_Y)$ is klt by \cite[Corollary 2.13]{Kol13}. Let $F$ be the sum of the $g$-exceptional prime divisors not in $\mathcal{E}$, so $F$ is an effective Cartier divisor on $Y$ since $Y$ is regular. Let $\epsilon\in\mathbf{Q}_{>0}$ be sufficiently small such that $(Y,\Delta_Y+\epsilon F)$ is klt, see Lemma \ref{lem:kltFacts}$(\ref{lem:kltContinuous})$. 
There exists a $g$-ample Cartier divisor $A$ on $Y$, and $Y$ is $\mathbf{Q}$-factorial since it is regular, so we may run the MMP for the pair $(Y,\Delta_Y+\epsilon F)$ over $X$ with the scaling of $A$, see Theorem \ref{rem:MMP} and Lemma \ref{lem:AmpleIsGoodScaling}. Since $g$ is birational, $\Delta_{Y}$ is $g$-big and $K_{Y}+\Delta_{Y}+\epsilon F$ is $g$-pseudoeffective, so the MMP terminates with a minimal model $h:(Z,\varphi_*(\Delta_{Y}+\epsilon F))\to X$ where $\varphi:Y\dashrightarrow Z$ is a composition of divisorial contractions and flips, see Corollary \ref{cor:cl1366}$(\ref{cor:cl13661})$. Note that $h$ is projective and $Z$ is $\mathbf{Q}$-factorial, as noted in the proof of Theorem \ref{rem:MMP}. Since the rational map $\varphi^{-1}:Z\dashrightarrow Y$ does not contract any divisor, we see that $h$-exceptional divisors are birational transforms of $g$-exceptional divisors, and \[ K_Z+\varphi_*(\Delta_Y+\epsilon F)\sim_\mathbf{Q} h^*(K_X+\Delta)+\varphi_*(\Gamma+\epsilon F). \] As $h$ is a minimal model, we see $\varphi_*(\Gamma+\epsilon F)$ is $h$-nef. By Lemma \ref{lem:Negativity}, we have $\varphi_*(\Gamma+\epsilon F)=0$. By the definition of $F$, this means that all $g$-exceptional prime divisors not in $\mathcal{E}$ are contracted by $\varphi$. It now suffices to show that no divisor in $\mathcal{E}$ is contracted by $\varphi$. Assume not. Then there is a step $\psi_j:Y_j\rightarrow Y_{j+1}$ of the MMP that is a divisorial contraction, and the divisor $E_j$ contracted is the birational transform of some $E\in \mathcal{E}$. Denote by $\varphi_j:Y\dashrightarrow Y_j$ the rational map coming from the previous steps of the MMP. Then $\varphi_j^{-1}$ does not contract any divisor, and \[ K_{Y_j}+(\varphi_j)_*(\Delta_j+\epsilon F)\sim_\mathbf{Q} h_j^*(K_X+\Delta)+(\varphi_j)_*(\Gamma+\epsilon F), \] where $h_j$ is the map from the $X$-scheme $Y_j$ to $X$. Since $\psi_j$ is a step of the MMP, we know that $-\left(K_{Y_j}+(\varphi_j)_*(\Delta_j+\epsilon F)\right)$ is $\psi_j$-ample, so $-(\varphi_j)_*(\Gamma+\epsilon F)$ is $\psi_j$-ample. Since $Y_j$ is $\mathbf{Q}$-factorial, $-(\varphi_j)_*(\Gamma+\epsilon F)+\sigma E_j$ is $\mathbf{Q}$-Cartier and $\psi_j$-ample for sufficiently small $\sigma\in\mathbf{Q}_{>0}$. Lemma \ref{lem:Negativity} applies and we see $(\varphi_j)_*(\Gamma+\epsilon F)-\sigma E_j$ is effective, so $E_j$ is a component of $(\varphi_j)_*(\Gamma+\epsilon F)$, thus $E\in\mathcal{E}$ is a component of $\Gamma+\epsilon F$, contraction. \end{proof} \begin{definition}\label{def:QFactorialization} Let $X$ be an integral normal Noetherian scheme. A $\mathbf{Q}$\textsl{-factorialization of} $X$ is an integral $\mathbf{Q}$-factorial Noetherian scheme $Y$ together with a proper birational morphism $g\colon Y\to X$ such that no prime divisor on $Y$ is $g$-exceptional. \end{definition} \begin{corollary}\label{cor:QfactorializationsExist} Let $X$ be an integral normal Noetherian scheme that has a dualizing complex $\omega_X^\bullet$ with associated canonical divisor $K_X$. Let $\Delta$ an effective $\mathbf{R}$-Weil divisor on $X$ such that $K_X+\Delta$ is $\mathbf{R}$-Cartier and $(X,\Delta)$ is klt. Assume that $X$ is an excellent scheme of equal characteristic zero. Then, there exists a projective $\mathbf{Q}$-factorialization $h\colon Z\to X$. \end{corollary} \begin{proof} Let $g\colon Y\to X$ be a log resolution constructed by blowing up regular centers, which exists by \cite[Theorem 1.1.6]{Tem18}, so $g$ is projective. 
Now take $\mathcal{E}$ to be the empty set in Theorem \ref{thm:TerminalizationGeneral}. The resulting $h\colon Z\to X$ has no exceptional divisors, and $Z$ is $\mathbf{Q}$-factorial, as desired. \end{proof} \begin{definition}\label{def:Terminalization} Let $X$ be an integral normal Noetherian scheme that has a dualizing complex $\omega_X^\bullet$ with associated canonical divisor $K_X$. Let $\Delta$ be an effective $\mathbf{Q}$-Weil divisor on $X$ such that $K_X+\Delta$ is $\mathbf{Q}$-Cartier and $(X,\Delta)$ is klt. A \textsl{terminalization of the pair} $(X,\Delta)$ is a terminal pair $(Y,\Delta_Y)$ together with a proper birational morphism $g\colon Y\to X$ such that $g_*\Delta_Y=\Delta$ and that $g^*(K_X+\Delta)\sim_{\mathbf{Q}}K_Y+\Delta_Y$. The condition is equivalent to saying that $a(E,X,\Delta)\leq 0$ for all prime divisors $E\subset Y$, and that the pair \[ \left(Y,\sum_{E\subset Y}-a(E,X,\Delta)E\right) \] is terminal. \end{definition} \begin{lemma}\label{lem:ResolveKLTtoTerminal} Let $X$ be an integral normal Noetherian scheme that has a dualizing complex $\omega_X^\bullet$ with associated canonical divisor $K_X$. Let $\Delta$ an effective $\mathbf{R}$-Weil divisor on $X$ such that $K_X+\Delta$ is $\mathbf{R}$-Cartier and $(X,\Delta)$ is klt. For an integral normal scheme $Y$ proper birational over $X$, we set \[ \Delta_Y=\sum_{E\subset Y,a(E,X,\Delta)\leq 0}-a(E,X,\Delta)E. \] Assume that $X$ is an excellent scheme of equal characteristic zero. Then there exists a log resolution $g\colon Y\to X$ constructed by blowing up regular centers such that components of $\Delta_Y$ are disjoint. Moreover, for any such resolution, the following hold: \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item \label{Yterminal} $(Y,\Delta_Y)$ is terminal. \item \label{YhasallNegativeDiscrep} For every proper birational map $Y'\to X$ and every prime Weil divisor $E'$ on $Y'$, if $a(E',X,\Delta)<0$, then $E'$ is not contracted by the rational map $Y'\dashrightarrow Y$. \end{enumerate} \end{lemma} \begin{proof} Let $g_0\colon Y_0\to X$ be a log resolution constructed by blowing up regular centers, which exists by \cite[Theorem 1.1.6]{Tem18}. All coefficients of $\Delta_{Y_0}$ are less than $1$ since $(X,\Delta)$ is klt, so $\delta_0:=1-\max\{\text{coefficients of }\Delta_{Y_0}\}>0.$ If $t\geq 2$ components $E_1,\ldots,E_t$ with coefficients $a_1,\ldots,a_t$ meet and no other component of $\Delta_{Y_0}$ meet with $Z:=E_1\cap\cdots\cap E_t$, we consider the blow up $Y_1=\mathrm{Bl}_Z Y_0$. Note that $Z$ is a regular scheme of equidimension $(\dim Y_0-t)$ and may have several connected components. The preimage of each connected component $C$ of $Z$ in $Y_1$ is a prime divisor $E_C$, and \[ a(E_C,X,\Delta)\leq a(E_C,Y_0,\Delta_{Y_0})=1-t+\sum a_i\leq 1-t\delta_0\leq 1-\delta_0 \] where the equlity follows from \cite[Lemma 2.29]{KM98}. Along $E_C$, at most $t-1$ of the birational transforms of $E_i$ meet, and thus at most $t$ components of $\Delta_{Y_1}$ meet; if that happens, the sum of their coefficients is at most $a(E_C,X,\Delta)+\sum a_i-\min_ia_i\leq 1-t+\sum a_i+(t-1)(1-\delta_0)=\sum a_i-t\delta_0$. It is now clear that after finitely many such blow ups we get the desired $Y$. Now, the coefficients of $\Delta_Y$ are less than 1 since $(X,\Delta)$ is klt, and the components of $\Delta_Y$ are disjoint. By \cite[Corollary 2.11]{Kol13}, $(Y,\Delta_Y)$ is terminal. To show $(\ref{YhasallNegativeDiscrep})$, we may assume that $Y'$ is given by a proper birational map $h:Y'\to Y$. 
Write \[ K_Y+\Delta_Y\sim_\mathbf{R} g^*(K_X+\Delta)+\Gamma \] where the components of $\Gamma$ are exactly the exceptional divisors of $g$ that is not a component of $\Delta_Y$. Then $\Gamma$ is effective by the definition of $\Delta_Y$. Since $Y$ is regular, every component of $\Gamma$ is Cartier, and we have \[ h^*(K_Y+\Delta_Y) \sim_\mathbf{R} (h\circ g)^*(K_X+\Delta) + h^*\Gamma, \] therefore $a(E',Y,\Delta_Y)\leq a(E',X,\Delta)<0$. Since $(Y,\Delta_Y)$ is terminal, $E'$ must not be $h$-exceptional, as desired. \end{proof} \begin{corollary}\label{cor:TerminalizationsExist} Let $X$ be an integral normal noetherian scheme that has a dualizing complex, $\Delta$ an effective $\mathbf{R}$-Weil divisor on $X$ such that $K_X+\Delta$ is $\mathbf{R}$-Cartier and $(X,\Delta)$ is klt. Assume that $X$ is an excellent $\mathbf{Q}$-scheme. Then there exists a projective terminalization $h:Z\to X$ with $Z$ $\mathbf{Q}$-factorial. \end{corollary} \begin{proof} Let $g:Y\to X$ and $\Delta_Y$ be as in Lemma \ref{lem:ResolveKLTtoTerminal}, and take $\mathcal{E}$ to be the set of components of $\Delta_Y$ in Theorem \ref{thm:TerminalizationGeneral}. The exceptional prime divisors of the resulting map $h:Z\to X$ are exactly the birational transforms of the components of $\Delta_Y$. Thus we have $K_Z+\varphi_*\Delta_Y\sim_{\mathbf{R}} h^*(K_X+\Delta)$, and it suffices to show $(Z,\varphi_*\Delta_Y)$ terminal. For every proper birational map $Y'\to Z$ and prime divisor $E'$ on $Y'$, the $\mathbf{R}$-linear equivalence above gives $a(E',Z,\varphi_*\Delta_Y)=a(E',X,\Delta)$. If $a(E',Z,\varphi_*\Delta_Y)<0$, then $a(E',X,\Delta)<0$, so by Lemma \ref{lem:ResolveKLTtoTerminal}$(\ref{YhasallNegativeDiscrep})$, $E'$ is not contracted by the rational map $Y'\dashrightarrow Y$, and its birational transform is thus a component of $\Delta_Y$ since it has negative discrepancy. Thus $E'$ is not exceptional over $Z$, and $(Z,\varphi_*\Delta_Y)$ is terminal. \end{proof} \begingroup \makeatletter \renewcommand{\@secnumfont}{\bfseries} \part{Extensions to other categories}\label{part:othercats} \makeatother \endgroup In this part, we extend the relative minimal model program to projective morphisms of algebraic spaces, formal schemes, complex analytic spaces, Berkovich analytic spaces, and rigid analytic spaces, all in equal characteristic zero. We will also assume the existence of dualizing complexes. To do so, we first collect some preliminaries for each of these different categories.\bigskip \section{Quasi-excellence and dualizing complexes}\label{sect:qedualizingothers} In this section, we review the notions of quasi-excellence and dualizing complexes that are analogous to those for schemes in \S\ref{sect:excellentschemes}. \subsection{Formal schemes} We use the definition of formal schemes and Noetherian formal schemes from \cite[D\'efinition 10.4.2]{EGAInew}. Quasi-excellence is defined as follows. \begin{citeddef}[{\citeleft\citen{Tem08}\citemid \S3.1\citepunct \citen{Tem12}\citemid \S2.4.3\citeright}] Let $X$ be a locally Noetherian formal scheme. We say that $X$ is \textsl{quasi-excellent} if for every morphism $\Spf(A) \to X$ of finite type, the ring $A$ is quasi-excellent. \end{citeddef} \begin{remark} The definition above is from \cite{Tem12}, and is equivalent to the original definition in \cite{Tem08} by a theorem of Gabber \cite[Theorem 5.1]{KS}. See \cite[Remark 3.1.1]{Tem08} and \cite[\S2.4.3]{Tem12}. \end{remark} We use the notion of c-dualizing complexes from \cite{ATJLL99}. 
Compare the notion of \textsl{t-dualizing complexes} from \cite[Definition 2.5.1]{ATJLL99}. This latter notion coincides with the notion of dualizing complexes from \cite[Definition 5.2]{Yek98} (see \cite[Remark (3) on p.\ 25]{ATJLL99}). \begin{citeddef}[{\cite[Definition 2.5.1]{ATJLL99}}] Let $X$ be a Noetherian formal scheme. A complex $\omega_X^\bullet$ on $X$ is a \textsl{c-dualizing complex} if the following conditions are satisfied. \begin{enumerate}[label=$(\roman*)$] \item $\omega_X^\bullet$ is an object of $\mathbf{D}^+_{\mathrm{c}}(X)$. \item The natural morphism $\mathcal{O}_X \to \RRHHom(\omega_X^\bullet,\omega_X^\bullet)$ is an isomorphism. \item There is an integer $b$ such that for every coherent torsion sheaf $\mathscr{M}$ and for every $i > b$, we have $\mathbf{h}^i\RRHHom(\mathscr{M},\omega_X^\bullet) = 0$. \end{enumerate} \end{citeddef} There is a notion of relative analytification for formal schemes and corresponding GAGA results \cite[\S5]{EGAIII1}, which we will refer to by \textsl{formal GAGA}. Exceptional pullbacks in the sense of Grothendieck duality exist, preserve dualizing complexes, and are compatible with formal GAGA in the following sense. \begin{remark}\label{rem:formalgaga} Let $f\colon X \to Y$ be a pseudo-proper morphism of Noetherian formal schemes in the sense of \cite[1.2.2]{ATJLL99}. Consider the functor $f^\sharp$ constructed in \cite[Theorem 2$(b)$]{ATJLL99}. \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item If $\omega_Y^\bullet$ is a $c$-dualizing complex on $Y$, then $\omega_X^\bullet \coloneqq f^\sharp\omega_Y^\bullet$ is a $c$-dualizing complex on $X$ by \cite[Proposition 2.5.11]{ATJLL99}. \item\label{rem:formalgagasharp} If $f$ is proper in the sense of \cite[(3.4.1)]{EGAIII1}, and if locally on $Y$ the morphism $f$ is the completion of a morphism of schemes, then $f^\sharp$ is compatible with formal GAGA by \cite[Corollaries 3.3.8 and 6.1.7$(a)$]{ATJLL99}. \end{enumerate} \end{remark} \subsection{Semianalytic germs of complex analytic spaces} We use the definition of complex analytic spaces from \cite[1.1.5]{GR84}. We start with the definition of a semianalytic subset of a complex analytic space. \begin{citeddef}[{\citeleft\citen{Loj64}\citemid \S1, I\citepunct \citen{Fri67}\citemid p.\ 120\citeright}] Let $\mathcal{X}$ be a complex analytic space, and let $a \in X$ be a point. Denote by $\mathscr{S}_a$ the minimal class of germs at $a$ of subsets of $X$ such that the following hold: \begin{enumerate}[label=$(\roman*)$] \item $\mathscr{S}_a$ is stable under finite unions. \item $\mathscr{S}_a$ is stable under complements. \item $\mathscr{S}_a$ contains all germs of the form $\Set{x \in \mathcal{X} \given f(x) < 0}_a$, where $f(x)$ is a real analytic function in a neighborhood of $a$. \end{enumerate} A subset $X \subseteq \mathcal{X}$ is \textsl{semianalytic} if, for every $x \in X$, the local germ of $X$ at $x$ is an element of $\mathscr{S}_x$. \end{citeddef} We can now define semianalytic germs of complex analytic spaces in the sense of \cite{AT19}. \begin{citeddef}[{\cite[\S\S B.2--B.3]{AT19}}] A \textsl{semianalytic germ of a complex analytic space} is a pair $(\mathcal{X},X)$ consisting of a complex analytic space $\mathcal{X}$ and a semianalytic subset $X \subseteq \mathcal{X}$. We call $X$ the \textsl{support} of $(\mathcal{X},X)$ and $\mathcal{X}$ a \textsl{representative} of $(\mathcal{X},X)$. We sometimes use the shorter notation $X$ for the germ $(\mathcal{X},X)$. 
The \textsl{structure sheaf} on $X$ is \[ \mathcal{O}_X \coloneqq (\mathcal{O}_{\mathcal{X}})_{\vert X} = i^{-1}\mathcal{O}_{\mathcal{X}}, \] where $i\colon X \hookrightarrow \mathcal{X}$ is the embedding. \par A \textsl{morphism} $\phi\colon (\mathcal{X},X) \to (\mathcal{Y},Y)$ of semianalytic germs of complex analytic spaces consists of a neighborhood $\mathcal{X}'$ of $X$ and an analytic map $f\colon \mathcal{X}' \to \mathcal{Y}$ taking $X$ to $Y$. We say that $f$ is a \textsl{representative} of $\phi$. \end{citeddef} We define proper morphisms and closed embeddings of semianalytic germs as follows. \begin{citeddef}[{\citeleft\citen{AT19}\citemid \S B.5\citeright}] Let $\phi\colon (\mathcal{X},X) \to (\mathcal{Y},Y)$ be a morphism of semianalytic germs of complex analytic spaces. \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item We say that $\phi$ is \textsl{without boundary} if there exists a representative $f\colon \mathcal{X}' \to \mathcal{Y}$ of $\phi$ that satisfies $X = f^{-1}(Y)$. \item We say that $\phi$ is an \textsl{open immersion} (resp.\ a \textsl{closed immersion}) if $\phi$ is without boundary and there exists a representative $f\colon \mathcal{X}' \to \mathcal{Y}$ of $\phi$ that is an open immersion (resp.\ a closed embedding). \item We say that $\phi$ is \textsl{proper} (resp.\ \textsl{projective}) if $\phi$ is without boundary and there exists a representative $f\colon \mathcal{X}' \to \mathcal{Y}$ of $\phi$ that is proper (resp.\ projective) and satisfies $X = f^{-1}(Y)$. \end{enumerate} \end{citeddef} We can then define affinoid semianalytic germs as follows. \begin{citeddef}[{\cite[\S B.6 and \S6.2.4]{AT19}}] Let $(\mathcal{X},X)$ be a semianalytic germ of a complex analytic space. We say that $X$ is \textsl{affinoid} if it admits a closed immersion into a germ of the form $(\mathbf{C}^n,D)$, where $D$ is a closed polydisc. A covering $X = \bigcup_i X_i$ of $X$ by affinoids is \textsl{admissible} if it admits a finite refinement. \end{citeddef} We define dualizing complexes on semianalytic germs of complex analytic spaces. \begin{definition}[{cf.\ \cite[p.\ 89]{RR71}}]\label{def:complexanalyticdualizing} Let $(\mathcal{X},X)$ be a semianalytic germ of a complex analytic space. A \textsl{dualizing complex} on $X$ is an object $\omega_X^\bullet$ in $\mathbf{D}_\textup{c}^+(X)$ such that the following hold: \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item For every $x \in X$, there exists $n(x) \in \mathbf{Z}$ such that $\Ext^i_{\mathcal{O}_{X,x}}(\mathbf{C},\omega_{X,x}^\bullet) = 0$ for all $i > n(x)$. \item\label{def:complexanalyticdualizingreflexive} The natural morphism \[ \mathrm{id} \longrightarrow \RRHHom_{\mathcal{O}_X}\bigl(\RRHHom_{\mathcal{O}_X}(-,\omega_X^\bullet), \omega_X^\bullet\bigr) \] of $\delta$-functors on $\mathbf{D}_\textup{c}(X)$ is an isomorphism. \end{enumerate} \end{definition} \begin{remark} By \cite[\S5]{RR71} (see also \cite[Chapter VII, Theorem 2.6]{BS76}), every complex analytic space has a dualizing complex. This dualizing complex lies in $\mathbf{D}_\textup{c}^b(X)$ if $X$ is finite-dimensional \citeleft\citen{RR71}\citemid p.\ 89\citepunct \citen{BS76}\citemid Theorem 2.6$(iii)$\citeright. 
Since both conditions in Definition \ref{def:complexanalyticdualizing} can be checked at the level of stalks, if $(\mathcal{X},X)$ is a semianalytic germ of a complex analytic space, then setting $\omega_X^\bullet = i^{-1}\omega_\mathcal{X}^\bullet$ gives a dualizing complex on $X$, where $i\colon X \hookrightarrow \mathcal{X}$ is the embedding. \end{remark} \begin{convention} For semianalytic germs of complex analytic spaces, we will always use the dualizing complex $\omega_X^\bullet$ constructed using \cite[\S5]{RR71}. \end{convention} \subsection{Non-Archimedean analytic spaces} Let $k$ be a complete non-trivially valued non-\penalty0\hskip0pt\relax{}Archimedean field. We use the definition of rigid $k$-analytic spaces from \cite[Definition 9.3.1/4]{BGR84} and the definition of (good) $k$-analytic spaces from \cite[\S1]{Ber93}. We sometimes refer to the $k$-analytic spaces from \cite{Ber93} as \textsl{Berkovich spaces}. \par Instead of defining dualizing complexes on rigid $k$-analytic or $k$-analytic spaces in a similar fashion to complex analytic spaces (Definition \ref{def:complexanalyticdualizing}), we adopt a definition that is more easily comparable to the scheme-theoretic notion of a \textsl{weakly pointwise dualizing complex} from \cite[p.\ 120]{Con00}. Below, $X_G$ denotes the ringed site where the Grothendieck topology is the \textsl{$G$-topology} in the sense of \citeleft\citen{BGR84}\citemid Definition 9.3.1/4\citepunct \citen{Ber93}\citemid p.\ 25\citeright. \begin{definition}\label{def:dualizingcomplexrigid} Let $X$ be one of the following: \begin{enumerate}[label=$(\alph*)$] \item A rigid $k$-analytic space, where $k$ is a complete non-trivially valued non-Archimedean field. \item A $k$-analytic space, where $k$ is a complete non-Archimedean field. \end{enumerate} A \textsl{dualizing complex} on $X$ is an object $\omega_X^\bullet$ in $\mathbf{D}^+_\textup{c}(X_G)$ such that for every $x \in X$, the object $\omega_{X,x}^\bullet$ in $\mathbf{D}_\textup{c}^+(\mathcal{O}_{X_G,x})$ is a dualizing complex in the sense of Definition \ref{def:dualizingcomplexschemes} (see also \cite[p.\ 118 and Lemma 3.1.4]{Con00}). \end{definition} \begin{convention} If $X$ is a good $k$-analytic space in the sense of \cite[Remark 1.2.16]{Ber93}, then we drop the subscript $G$ in $X_G$, since in this case there is a good notion of a structure sheaf $\mathcal{O}_X$ on $X$ such that the categories of coherent $\mathcal{O}_X$-modules and coherent $\mathcal{O}_{X_G}$-modules coincide \cite[Proposition 1.3.4$(ii)$]{Ber93}. Note that affinoid $k$-analytic spaces and all proper $k$-analytic spaces over affinoids are good by \citeleft\citen{Ber90}\citemid \S3.1\citepunct \citen{Ber93}\citemid \S1.6\citeright. \end{convention} \section{Grothendieck duality, dualizing complexes, and GAGA}\label{sect:dualityandgaga} To check that dualizing complexes are compatible with exceptional pullbacks under the relative GAGA correspondence for semianalytic germs of complex analytic spaces from \cite[Appendix C]{AT19} and for rigid analytic spaces from \cite{Kop74}, we need versions of \cite[Theorem C.1.1]{AT19} and \citeleft\citen{Kop74}\citemid Folgerung 6.6, Folgerung 6.7, and Theorem 6.8\citeright\ (see also \cite[Example 3.2.6]{Con06}) for bounded derived categories. \begin{convention}\label{convention:analytification} We denote the analytification functor in each setting by $(-) \mapsto (-)^\textup{an}$, and similarly for sheaves and complexes.
For objects in the essential image of this functor, we denote by $(-)^\textup{al}$ the corresponding algebraic object, and call this process \textsl{algebraization}. \end{convention} \subsection{Equivalences of categories of coherent sheaves yield equivalences of bounded derived categories} We start with the following result deducing equivalences of bounded derived categories from equivalences of (weak) Serre subcategories of categories of modules. The statements $(\ref{thm:weakserreequivgivesderivedext})$ and $(\ref{thm:weakserreequivgivesderived})$ below are versions of the first half of the proof of \cite[Theorem 3.7]{Lim}, but we write down the proof for completeness. The result in \cite{Lim} gives the stronger conclusion that $\mathbf{D}^-_{\mathscr{A}_X}(X) \to \mathbf{D}^-_{\mathscr{A}_Y}(Y)$ is an equivalence of categories under stronger hypotheses. \begin{theorem}\label{thm:weakserreequivgivesderivedmain} Let $h\colon (Y,\mathcal{O}_Y) \to (X,\mathcal{O}_X)$ be a flat morphism of ringed sites. Fix weak Serre subcategories $\mathscr{A}_Y$ in $\Mod(Y)$ and $\mathscr{A}_X$ in $\Mod(X)$. Suppose the pullback functor $h^*\colon \Mod(X) \to \Mod(Y)$ restricts to a functor \begin{align}\label{eq:cohequivalencecomplex} h^*\colon \mathscr{A}_X &\longrightarrow \mathscr{A}_Y, \intertext{and consider the associated derived functor} \label{eq:dbequivalencecomplex} h^*\colon \mathbf{D}^b_{\mathscr{A}_X}(X) &\longrightarrow \mathbf{D}^b_{\mathscr{A}_Y}(Y). \end{align} \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item\label{thm:weakserreequivgivesderivedext} Suppose the natural morphisms \begin{align}\label{eq:extsareisos} \Ext^n_{\mathcal{O}_X}(\mathscr{F},\mathscr{G}) &\longrightarrow \Ext^n_{\mathcal{O}_{Y}}(h^*\mathscr{F},h^*\mathscr{G}) \intertext{are isomorphisms for all objects $\mathscr{F},\mathscr{G}$ in $\mathscr{A}_X$ and for all $n \in \mathbf{Z}$. Then, the natural morphisms} \mathbf{R}\Hom_{\mathcal{O}_X}(\mathscr{F},\mathscr{G}) &\longrightarrow \mathbf{R}\Hom_{\mathcal{O}_{Y}}(h^*\mathscr{F},h^*\mathscr{G})\label{eq:dbfullyfaithful} \end{align} are isomorphisms for all objects $\mathscr{F},\mathscr{G}$ in $\mathbf{D}^b_{\mathscr{A}_X}(X)$. \item\label{thm:weakserreequivgivesderivedsheafext} Suppose the natural morphisms \begin{align}\label{eq:sheafextsareisos} h^*\EExt^n_{\mathcal{O}_X}(\mathscr{F},\mathscr{G}) &\longrightarrow \EExt^n_{\mathcal{O}_{Y}}(h^*\mathscr{F},h^*\mathscr{G}) \intertext{are isomorphisms for all objects $\mathscr{F},\mathscr{G}$ in $\mathscr{A}_X$ and for all $n \in \mathbf{Z}$. Then, the natural morphisms} h^*\RRHHom_{\mathcal{O}_X}(\mathscr{F},\mathscr{G}) &\longrightarrow \RRHHom_{\mathcal{O}_{Y}}(h^*\mathscr{F},h^*\mathscr{G})\nonumber \end{align} are isomorphisms for all objects $\mathscr{F},\mathscr{G}$ in $\mathbf{D}^b_{\mathscr{A}_X}(X)$. \item\label{thm:weakserreequivgivesderived} Suppose \eqref{eq:cohequivalencecomplex} is an equivalence of categories, and that the natural morphisms \eqref{eq:extsareisos} are isomorphisms for all objects $\mathscr{F},\mathscr{G}$ in $\mathscr{A}_X$ and for all $n \in \mathbf{Z}$. Then, \eqref{eq:dbequivalencecomplex} is an equivalence of categories. \item\label{thm:weakserrecohgiveshypercoh} If \eqref{eq:cohequivalencecomplex} induces isomorphisms on cohomology modules, then \eqref{eq:dbequivalencecomplex} induces isomorphisms on hypercohomology modules. 
In this case, if the natural morphisms \eqref{eq:sheafextsareisos} are isomorphisms, then the natural morphisms in $(\ref{thm:weakserreequivgivesderivedext})$ and $(\ref{thm:weakserreequivgivesderivedsheafext})$ are all isomorphisms. \end{enumerate} \end{theorem} \begin{proof} We note that $h^*$ sends bounded objects in $\mathbf{D}_{\mathscr{A}_X}(X)$ to bounded objects in $\mathbf{D}_{\mathscr{A}_Y}(Y)$ since $h$ is flat, and hence $h^*$ commutes with cohomology.\smallskip \par For $(\ref{thm:weakserreequivgivesderivedext})$, we first assume that $\mathscr{G}$ is concentrated in one degree. We induct on the length of $\mathscr{F}$. If $\mathscr{F}$ is concentrated in one degree, then the isomorphism follows from the isomorphism \eqref{eq:extsareisos}. \par We now show that \eqref{eq:dbfullyfaithful} is an isomorphism for general $\mathscr{F}$ when $\mathscr{G}$ is concentrated in one degree. Let $n$ be the smallest degree where $\mathbf{h}^n(\mathscr{F}) \ne 0$. Consider the exact triangle \[ \mathbf{h}^n(\mathscr{F})[-n] \longrightarrow \mathscr{F} \longrightarrow \tau_{\ge n+1}\mathscr{F} \xrightarrow{+1}. \] We then have the commutative diagram \[ \begin{tikzcd}[column sep=1.25em] \mathbf{R}\Hom_{\mathcal{O}_X}(\tau_{\ge n+1} \mathscr{F},\mathscr{G}) \rar\dar[sloped]{\sim} & \mathbf{R}\Hom_{\mathcal{O}_X}(\mathscr{F},\mathscr{G}) \rar\dar & \mathbf{R}\Hom_{\mathcal{O}_X}(\mathbf{h}^n(\mathscr{F})[-n],\mathscr{G}) \rar{+1}\dar[sloped]{\sim} & {}\\ \mathbf{R}\Hom_{\mathcal{O}_{Y}}(h^*\tau_{\ge n+1} \mathscr{F},h^*\mathscr{G}) \rar & \mathbf{R}\Hom_{\mathcal{O}_{Y}}(h^*\mathscr{F},h^*\mathscr{G}) \rar & \mathbf{R}\Hom_{\mathcal{O}_{Y}}(h^*\mathbf{h}^n(\mathscr{F})[-n],h^*\mathscr{G}) \rar{+1} & {} \end{tikzcd} \] where the left and right vertical arrows are quasi-isomorphisms by the inductive hypothesis. By \cite[Proposition 1.1.11]{BBD82}, we see the middle vertical arrow is a quasi-isomorphism. This shows \eqref{eq:dbfullyfaithful} is an isomorphism when $\mathscr{G}$ is concentrated in one degree. \par To show \eqref{eq:dbfullyfaithful} is an isomorphism for general $\mathscr{F}$ and general $\mathscr{G}$, we repeat the same argument, inducting on the length of $\mathscr{G}$. The case when $\mathscr{G}$ is concentrated in one degree was shown above. If $n$ is the smallest degree where $\mathbf{h}^n(\mathscr{G}) \ne 0$, the exact triangle \[ \mathbf{h}^n(\mathscr{G})[-n] \longrightarrow \mathscr{G} \longrightarrow \tau_{\ge n+1}\mathscr{G} \xrightarrow{+1} \] yields the commutative diagram \[ \begin{tikzcd}[column sep=scriptsize] \mathbf{R}\Hom_{\mathcal{O}_X}(\mathscr{F},\mathbf{h}^n(\mathscr{G})[-n]) \rar\dar[sloped]{\sim} & \mathbf{R}\Hom_{\mathcal{O}_X}(\mathscr{F},\mathscr{G}) \rar\dar & \mathbf{R}\Hom_{\mathcal{O}_X}(\mathscr{F},\tau_{\ge n+1}\mathscr{G}) \rar{+1}\dar[sloped]{\sim} & {}\\ \mathbf{R}\Hom_{\mathcal{O}_{Y}}(h^*\mathscr{F},h^*\mathbf{h}^n(\mathscr{G})[-n]) \rar & \mathbf{R}\Hom_{\mathcal{O}_{Y}}(h^*\mathscr{F},h^*\mathscr{G}) \rar & \mathbf{R}\Hom_{\mathcal{O}_{Y}}(h^*\mathscr{F},h^*\tau_{\ge n+1}\mathscr{G}) \rar{+1} & {} \end{tikzcd} \] where the left and right vertical arrows are quasi-isomorphisms by the inductive hypothesis. By \cite[Proposition 1.1.11]{BBD82}, we see the middle vertical arrow is a quasi-isomorphism.
This shows \eqref{eq:dbfullyfaithful} is an isomorphism for all $\mathscr{F},\mathscr{G}$.\smallskip \par For $(\ref{thm:weakserreequivgivesderivedsheafext})$, we can repeat the same argument as in $(\ref{thm:weakserreequivgivesderivedext})$, replacing $\mathbf{R}\Hom$ with $\RRHHom$.\smallskip \par For $(\ref{thm:weakserreequivgivesderived})$, since the functor \eqref{eq:dbequivalencecomplex} is fully faithful by $(\ref{thm:weakserreequivgivesderivedext})$, it suffices to show the functor \eqref{eq:dbequivalencecomplex} is essentially surjective. Fix an object $\mathscr{G}$ in $\mathbf{D}^b_{\mathscr{A}_Y}(Y)$. We proceed by induction on the length of $\mathscr{G}$. If $\mathscr{G}$ is concentrated in one degree, this follows from the equivalence \eqref{eq:cohequivalencecomplex}. For general $\mathscr{G}$, let $n$ be the smallest degree where $\mathbf{h}^n(\mathscr{G}) \ne 0$, and consider the exact triangle \[ (\tau_{\ge n+1}\mathscr{G})[-1] \longrightarrow \mathbf{h}^n(\mathscr{G})[-n] \longrightarrow \mathscr{G} \xrightarrow{+1}. \] By \eqref{eq:cohequivalencecomplex} and the inductive hypothesis, there exist objects $\mathscr{F},\mathscr{F}'$ in $\mathbf{D}^b_{\mathscr{A}_X}(X)$ such that \[ h^*\mathscr{F} \cong (\tau_{\ge n+1}\mathscr{G})[-1] \qquad \text{and} \qquad h^*\mathscr{F}' \cong \mathbf{h}^n(\mathscr{G})[-n]. \] By $(\ref{thm:weakserreequivgivesderivedext})$, we know that the morphism $(\tau_{\ge n+1}\mathscr{G})[-1] \to \mathbf{h}^n(\mathscr{G})[-n]$ is the pullback of a morphism $\varphi\colon \mathscr{F} \to \mathscr{F}'$ in $\mathbf{D}^b_{\mathscr{A}_X}(X)$. It follows that $\mathscr{G} \cong h^*\Cone(\varphi)$ since $h$ is flat. This concludes the proof of $(\ref{thm:weakserreequivgivesderived})$.\medskip \par It remains to show $(\ref{thm:weakserrecohgiveshypercoh})$. Let $\mathscr{F}$ be an object in $\mathbf{D}^b_{\mathscr{A}_X}(X)$. We induct on the length of $\mathscr{F}$. If $\mathscr{F}$ is concentrated in one degree, this follows from the assumption that $h$ preserves cohomology modules. In general, let $n$ be the smallest degree where $\mathbf{h}^n(\mathscr{F}) \ne 0$, and consider the exact triangle \[ \mathbf{h}^n(\mathscr{F})[-n] \longrightarrow \mathscr{F} \longrightarrow \tau_{\ge n+1}\mathscr{F} \xrightarrow{+1}. \] We then have the commutative diagram \[ \begin{tikzcd}[column sep=scriptsize] \mathbf{R}\Gamma\bigl(X,\mathbf{h}^n(\mathscr{F})[-n]\bigr) \rar\dar[sloped]{\sim} & \mathbf{R}\Gamma(X,\mathscr{F}) \rar\dar & \mathbf{R}\Gamma(X,\tau_{\ge n+1}\mathscr{F}) \rar{+1}\dar[sloped]{\sim} & {}\\ \mathbf{R}\Gamma\bigl(Y,h^*\mathbf{h}^n(\mathscr{F})[-n]\bigr) \rar & \mathbf{R}\Gamma(Y,h^*\mathscr{F}) \rar & \mathbf{R}\Gamma(Y,h^*\tau_{\ge n+1}\mathscr{F}) \rar{+1} & {} \end{tikzcd} \] where the left and right vertical arrows are quasi-isomorphisms by the inductive hypothesis. By \cite[Proposition 1.1.11]{BBD82}, we see the middle vertical arrow is a quasi-isomorphism. The last statement in $(\ref{thm:weakserrecohgiveshypercoh})$ now follows by applying $\mathbf{H}^0$. \end{proof} \subsection{Dualizing complexes and relative GAGA for semianalytic germs of complex analytic spaces} We first deduce the relative GAGA theorem for bounded derived categories of semianalytic germs of complex analytic spaces from the statement for categories of coherent sheaves in \cite{AT19}.
\begin{theorem}[cf.\ {\cite[Theorem C.1.1]{AT19}}]\label{thm:dbgagagerms} Let $(\mathcal{Z},Z)$ be an affinoid semianalytic germ of a complex analytic space with ring of global analytic functions $A$. Let $X$ be a projective scheme over $\Spec(A)$. Then, the pullback functor \begin{equation}\label{eq:derivedequiv} h^*\colon \mathbf{D}^b_\textup{c}(X) \longrightarrow \mathbf{D}^b_\textup{c}(X^\textup{an}) \end{equation} is an equivalence of categories that induces isomorphisms on hypercohomology modules, $\mathbf{R}\Hom$, and $\RRHHom$. \end{theorem} \begin{proof} We verify the hypotheses in Theorem \ref{thm:weakserreequivgivesderivedmain}$(\ref{thm:weakserreequivgivesderived})$ and \ref{thm:weakserreequivgivesderivedmain}$(\ref{thm:weakserrecohgiveshypercoh})$ for the relative analytification morphism $h\colon X^\textup{an} \to X$ from \cite[Appendix C]{AT19} when $\mathscr{A}_{X^\textup{an}} = \Coh(X^\textup{an})$ and $\mathscr{A}_X = \Coh(X)$. By \cite[Theorem C.1.1]{AT19}, we have an equivalence of categories \begin{equation*} h^*\colon \Coh(X) \overset{\sim}{\longrightarrow} \Coh(X^\textup{an}) \end{equation*} that induces isomorphisms on cohomology modules. Since $h\colon X^\textup{an} \to X$ is flat \cite[p.\ 421]{AT19}, the natural morphisms \[ h^*\EExt^n_{\mathcal{O}_X}(\mathscr{F},\mathscr{G}) \longrightarrow \EExt^n_{\mathcal{O}_{X^\textup{an}}}(\mathscr{F}^\textup{an},\mathscr{G}^\textup{an}) \] are isomorphisms for all objects $\mathscr{F},\mathscr{G}$ in $\Coh(X)$ by \cite[Proposition 12.3.5]{EGAIII1}. We therefore see that \eqref{eq:derivedequiv} is an equivalence by Theorem \ref{thm:weakserreequivgivesderivedmain}$(\ref{thm:weakserreequivgivesderived})$. This equivalence induces isomorphisms on hypercohomology modules, $\mathbf{R}\Hom$, and $\RRHHom$ by Theorem \ref{thm:weakserreequivgivesderivedmain}$(\ref{thm:weakserrecohgiveshypercoh})$. \end{proof} We can now show that dualizing complexes are compatible with GAGA. \begin{theorem}\label{thm:dualizingcomplexcompatcomplex} Let $(\mathcal{Z},Z)$ be an affinoid semianalytic germ of a complex analytic space with ring of global analytic functions $A$. \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item\label{thm:dualizingcomplexcompatcomplexaffinoid} Let $\omega_A^\bullet$ denote the object in $\mathbf{D}^b_\textup{c}(\Spec(A))$ corresponding to $\omega_{Z}^\bullet$ under the equivalence in Theorem \ref{thm:dbgagagerms}. Then, $\omega_A^\bullet$ is a dualizing complex on $\Spec(A)$. \item\label{thm:rrvisexceptionalpullback} Let $f\colon Y \to X$ be a morphism of schemes projective over $\Spec(A)$. We then have the following commutative diagram of functors: \begin{equation}\label{eq:rrvleftadjoint} \begin{tikzcd}[column sep=17em] \mathbf{D}^b_\textup{c}(X^\textup{an}) \rar{\RRHHom_{\mathcal{O}_{Y^\textup{an}}}(\mathbf{L} f^{\textup{an}*}\RRHHom_{\mathcal{O}_{X^\textup{an}}}(-,\omega_{X^\textup{an}}^\bullet),\omega_{Y^\textup{an}}^\bullet)} &\mathbf{D}^b_\textup{c}(Y^\textup{an})\\ \mathbf{D}^b_\textup{c}(X) \rar{f^!}\arrow[u, "h^*"', "\sim" sloped] &\mathbf{D}^b_\textup{c}(Y)\arrow[u, "h^*"', "\sim" sloped] \end{tikzcd} \end{equation} \item\label{thm:dualizingcomplexcompatcomplexexcpullback} Let $f\colon Y \to X$ be a morphism of schemes projective over $\Spec(A)$. We have $(f^!\omega_X^\bullet)^\textup{an} \cong \omega_{Y^\textup{an}}^\bullet$, and the analytification of the Grothendieck trace $\mathbf{R} f_* \omega_Y^\bullet \to \omega_X^\bullet$ is the relative trace from \emph{\cite{RRV71}}.
\end{enumerate} \end{theorem} \begin{proof} For $(\ref{thm:dualizingcomplexcompatcomplexaffinoid})$, we first note that $\Spec(A)$ is Noetherian of finite Krull dimension \cite[Lemma B.6.1$(i)$]{AT19}. Thus, it suffices to show that $\omega_A^\bullet$ is locally a dualizing complex at every $x \in \Spec(A)$ by \cite[Chapter V, Proposition 8.2]{Har66} (see also \cite[p.\ 120]{Con00}). Moreover, it suffices to show that $\omega_A^\bullet$ is locally a dualizing complex at every closed point $x \in \Spec(A)$ by \cite[Chapter V, Corollary 2.3]{Har66}. But this follows from the conditions in Definition \ref{def:complexanalyticdualizing} together with the fact that $h$ is flat \cite[Lemma B.6.1$(iv)$]{AT19} and induces a bijection on closed points \cite[Lemma B.6.1$(iii)$]{AT19}, since finite injective dimension can be tested with modules of the form $\Ext^i_{\mathcal{O}_{X,x}}(\mathbf{C},-)$ \cite[\href{https://stacks.math.columbia.edu/tag/0AVJ}{Tag 0AVJ}]{stacks-project}, and both the formation of $\Ext$ and $\RRHHom$ commute with $h^*$ by Theorem \ref{thm:dbgagagerms}. Note here that while the condition in Definition \ref{def:complexanalyticdualizing}$(\ref{def:complexanalyticdualizingreflexive})$ is a statement about functors on $\mathbf{D}_\textup{c}(X)$, it suffices to check that the morphism is an isomorphism when plugging in $\mathcal{O}_X$ (resp.\ $\mathcal{O}_{X^\textup{an}}$) by \cite[Chapter V, Proposition 2.1]{Har66} (resp.\ \cite[Proposition 1]{RR71}).\smallskip \par For $(\ref{thm:rrvisexceptionalpullback})$, we apply Grothendieck duality for proper morphisms of complex analytic spaces \cite[p.\ 261]{RRV71} to a proper representative of $f^\textup{an}$, and then restrict to the supports of the germs using the proper base change theorem from topology \cite[Chapter VII, Corollary 1.5]{Ive86} to obtain the isomorphism \begin{align*} \MoveEqLeft[3]\mathbf{R} f^\textup{an}_* \RRHHom_{\mathcal{O}_{Y^\textup{an}}}\Bigl(\mathscr{F}^\bullet,\RRHHom_{\mathcal{O}_{Y^\textup{an}}}\bigl(\mathbf{L} f^{\textup{an}*}\RRHHom_{\mathcal{O}_{X^\textup{an}}}(\mathscr{G}^\bullet,\omega_{X^\textup{an}}^\bullet),\omega_{Y^\textup{an}}^\bullet \bigr)\Bigr)\\ &\overset{\sim}{\longrightarrow} \RRHHom_{\mathcal{O}_{X^\textup{an}}}(\mathbf{R} f_*^\textup{an}\mathscr{F}^\bullet,\mathscr{G}^\bullet) \end{align*} natural in objects $\mathscr{F}^\bullet$ and $\mathscr{G}^\bullet$ in $\mathbf{D}^b_\textup{c}(Y^\textup{an})$ and $\mathbf{D}^b_\textup{c}(X^\textup{an})$, respectively. Taking $\mathbf{H}^0$, and applying the equivalence of categories $h^*$ from Theorem \ref{thm:dbgagagerms}, we see that the top functor in \eqref{eq:rrvleftadjoint} is a right adjoint of $\mathbf{R} f^\textup{an}_*$. Finally, we obtain the diagram \eqref{eq:rrvleftadjoint} since right adjoints are unique.\smallskip \par We now show $(\ref{thm:dualizingcomplexcompatcomplexexcpullback})$. By $(\ref{thm:rrvisexceptionalpullback})$, it suffices to note that \begin{align*} \RRHHom_{\mathcal{O}_{Y^\textup{an}}}\Bigl(\mathbf{L} f^{\textup{an}*}\RRHHom_{\mathcal{O}_{X^\textup{an}}}\bigl(\omega_{X^\textup{an}}^\bullet,\omega_{X^\textup{an}}^\bullet\bigr),\omega_{Y^\textup{an}}^\bullet\Bigr) &\cong \RRHHom_{\mathcal{O}_{Y^\textup{an}}}\Bigl(\mathbf{L} f^{\textup{an}*}\mathcal{O}_{X^\textup{an}},\omega_{Y^\textup{an}}^\bullet\Bigr)\\ &\cong \RRHHom_{\mathcal{O}_{Y^\textup{an}}}\Bigl(\mathcal{O}_{Y^\textup{an}},\omega_{Y^\textup{an}}^\bullet\Bigr)\\ &\cong \omega_{Y^\textup{an}}^\bullet.
\end{align*} The last statement about trace now follows since in both settings, the trace is the counit morphism for the adjunction from $(\ref{thm:rrvisexceptionalpullback})$. \end{proof} \subsection{Dualizing complexes and relative GAGA for rigid analytic spaces} We first deduce the relative GAGA theorem for bounded derived categories of rigid analytic spaces and Berkovich spaces from the statements for categories of coherent sheaves in \cite{Kop74,Poi10} (see also \cite{Con06,Hal}). One can also show an adic version using \cite[\S6]{Hub07}. \begin{theorem}[{cf.\ \citeleft\citen{Kop74}\citemid Folgerung 6.6, Folgerung 6.7, and Theorem 6.8\citepunct \citen{Poi10}\citemid Th\'eor\`eme A.1\citeright}]\label{thm:derivedgagarigid} Let $Z$ be one of the following: \begin{enumerate}[label=$(\alph*)$,ref=\roman*] \item An affinoid rigid $k$-analytic space, where $k$ is a complete non-trivially valued non-\penalty0\hskip0pt\relax{}Archimedean field. \item An affinoid $k$-analytic space, where $k$ is a complete non-Archimedean field. \end{enumerate} Let $R$ be the ring of global functions on $Z$, and let $X$ be a proper scheme over $\Spec(R)$. Then, the pullback functor \begin{equation}\label{eq:derivedequivnonarch} h^*\colon \mathbf{D}^b_\textup{c}(X) \longrightarrow \mathbf{D}^b_\textup{c}(X^\textup{an}) \end{equation} is an equivalence of categories that induces isomorphisms on hypercohomology modules, $\mathbf{R}\Hom$, and $\RRHHom$. \end{theorem} \begin{proof} We verify the hypotheses in Theorem \ref{thm:weakserreequivgivesderivedmain}$(\ref{thm:weakserreequivgivesderived})$ and \ref{thm:weakserreequivgivesderivedmain}$(\ref{thm:weakserrecohgiveshypercoh})$ for the relative analytification morphism $h\colon X^\textup{an} \to X$ from \cite[Definition 1.4]{Kop74} (see also \cite[Example 2.2.11]{Con06}) and \cite[\S2.6]{Ber93} when $\mathscr{A}_{X^\textup{an}} = \Coh(X^\textup{an})$ and $\mathscr{A}_X = \Coh(X)$. \par By \cite[Folgerung 6.6, Folgerung 6.7, and Theorem 6.8]{Kop74} (see also \cite[Example 3.2.6]{Con06}) and \cite[Th\'eor\`eme A.1]{Poi10}, respectively, we have an equivalence of categories \begin{equation}\label{eq:rigidcohequiv} h^*\colon \Coh(X) \overset{\sim}{\longrightarrow} \Coh(X^\textup{an}) \end{equation} that induces isomorphisms on cohomology modules (see also \cite[Example 9.4]{Hal}). We note that $h^*$ induces isomorphisms on $\EExt$ sheaves by \cite[Satz 3.9]{Kop74} in the rigid analytic case and by \cite[Proposition 12.3.5]{EGAIII1} in the Berkovich case since $h$ is flat \cite[Proposition 2.6.2]{Ber93}. We therefore see that \eqref{eq:derivedequivnonarch} is an equivalence by Theorem \ref{thm:weakserreequivgivesderivedmain}$(\ref{thm:weakserreequivgivesderived})$. Finally, \eqref{eq:derivedequivnonarch} induces isomorphisms on hypercohomology modules, $\mathbf{R}\Hom$, and $\RRHHom$ by Theorem \ref{thm:weakserreequivgivesderivedmain}$(\ref{thm:weakserrecohgiveshypercoh})$. \end{proof} We can now show that dualizing complexes are compatible with GAGA. See \citeleft\citen{CM98}\citemid p.\ 14\citepunct \citen{Con99}\citemid p.\ 496\citeright\ (in the rigid analytic case) and \cite[Definition 3.5.1]{Ber93} (in the Berkovich analytic case) for the definition of equidimensionality used below. \begin{theorem}\label{thm:dualizingcomplexcompatrigid} Let $Z$ be one of the following: \begin{enumerate}[label=$(\alph*)$,ref=\roman*] \item An affinoid rigid $k$-analytic space, where $k$ is a complete non-trivially valued non-\penalty0\hskip0pt\relax{}Archimedean field. 
\item An affinoid Berkovich $k$-analytic space, where $k$ is a complete non-Archimedean field. \end{enumerate} Let $A$ be the ring of global functions on $Z$. Let $\pi\colon X \to \Spec(A)$ be a finite type morphism of schemes. \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item\label{thm:dualizingcomplexcompatrigidaffinoid} Let $\mathscr{K}$ be an object in $\mathbf{D}^b_\textup{c}(X)$. Then, $\mathscr{K}$ is a dualizing complex on $X$ if and only if $\mathscr{K}^\textup{an}$ is a dualizing complex on $X^\textup{an}$. \item\label{thm:dualizingcomplexcompatrigidexcpullback} Suppose $\pi$ is separated, and let $\omega_Z^\bullet$ be a dualizing complex on $Z$. Then, $(\pi^!\omega_Z^{\bullet\textup{al}})^\textup{an}$ is a dualizing complex on $X^\textup{an}$. \item\label{thm:dualizingcomplexcompatrigidexcpullbacksmooth} Suppose $\pi$ is separated. If $X^\textup{an}$ is smooth of equal dimension $d$ over $k$, then the sheaf $\omega_{X^\textup{an}/k}[d]$ of top differential forms shifted by $d$ is a dualizing complex on $X^\textup{an}$ for which there exists a dualizing complex $\omega_Z^\bullet$ on $Z$ such that \[ \omega_{X^\textup{an}/k}[d] \cong (\pi^!\omega_Z^{\bullet\textup{al}})^\textup{an}. \] \end{enumerate} \begin{proof} For $(\ref{thm:dualizingcomplexcompatrigidaffinoid})$, we note that $\mathscr{K}$ is a dualizing complex on $X$ if and only if $\mathscr{K}_x$ is a dualizing complex on $\mathcal{O}_{X,x}$ for every $x \in X$ \cite[p.\ 120]{Con00}. Since $X^\textup{an} \to X$ is surjective \cite[Proposition 2.6.2]{Ber93}, it therefore suffices to prove that for every point $\tilde{x} \in X^\textup{an}$ with image $x = h(\tilde{x}) \in X$, the object $\mathscr{K}^\textup{an}_{\tilde{x}}$ is a dualizing complex on $\mathcal{O}_{X^\textup{an},\tilde{x}}$ if and only if $\mathscr{K}_x$ is a dualizing complex on $\mathcal{O}_{X,x}$. This equivalence holds by \cite[Theorem 5.1]{AF92} since $\mathcal{O}_{X,x} \to \mathcal{O}_{X^\textup{an},\tilde{x}}$ is a regular ring map \cite[Th\'eor\`eme 3.3]{Duc09}.\smallskip \par Next, $(\ref{thm:dualizingcomplexcompatrigidexcpullback})$ follows from $(\ref{thm:dualizingcomplexcompatrigidaffinoid})$ since $\pi^!\omega_Z^{\bullet\textup{al}}$ is a dualizing complex on $X$ by Lemma \ref{lem:dualizingcomplexpullback}.\smallskip \par Finally, we show $(\ref{thm:dualizingcomplexcompatrigidexcpullbacksmooth})$. Since $Z$ is affinoid, there exists a surjection \[ k\{r^{-1}T\} \twoheadrightarrow A, \] where $T = (T_1,\ldots,T_n)$ and, in the rigid analytic case, we can assume $r = (1,1,\ldots,1)$. Let $i\colon \Spec(A) \hookrightarrow \Spec(k\{r^{-1}T\})$ be the associated closed immersion, with corresponding closed immersion $i^\textup{an}\colon Z \hookrightarrow D$ of rigid $k$-analytic spaces or $k$-analytic spaces. We can replace $\pi$ by $i \circ \pi$ to assume that $Z = D$, since $\pi^!i^! \cong (i \circ \pi)^!$ by \cite[Chapter VII, Corollary 3.4$(a)$]{Har66}, and hence if $\omega_D^\bullet$ works for $i \circ \pi$, then $(i^!\omega_D^{\bullet\textup{al}})^\textup{an}$ works for $\pi$. \par We now prove $(\ref{thm:dualizingcomplexcompatrigidexcpullbacksmooth})$ assuming $Z$ is a polydisc with ring of analytic functions $A = k\{r^{-1}T\}$. By \cite[p.\ 144]{Har66}, we have \begin{align*} \pi^!\omega_{k\{r^{-1}T\}/k}[n] &= \pi^*\omega_{k\{r^{-1}T\}/k}[n] \otimes_{\mathcal{O}_X} \omega_{X/k\{r^{-1}T\}}[d-n].
\intertext{Applying $(-)^\textup{an}$, we obtain} \bigl(\pi^!\omega_{k\{r^{-1}T\}/k}[n]\bigr)^\textup{an} &= \bigl((\pi^*\omega_{k\{r^{-1}T\}/k})^\textup{an} \otimes_{\mathcal{O}_{X^\textup{an}}} \omega_{X/k\{r^{-1}T\}}^\textup{an}\bigr)[d] \end{align*} since sheaves of differentials are compatible with analytification \cite[Proposition 3.3.11]{Ber93}. The right-hand side is isomorphic to $\omega_{X^\textup{an}/k}[d]$ by taking determinants in \cite[Corollary 3.5.10]{Ber93}. Thus, by $(\ref{thm:dualizingcomplexcompatrigidaffinoid})$ and $(\ref{thm:dualizingcomplexcompatrigidexcpullback})$, we can take $\omega_Z^\bullet = (\omega_{k\{r^{-1}T\}/k}[n])^\textup{an}$, where $\omega_{k\{r^{-1}T\}/k}[n]$ is a dualizing complex by \cite[Chapter V, Example 2.2 and Theorem 3.1]{Har66}. \end{proof} \section{Setup for the relative MMP with scaling}\label{sect:setupforothercats} We now give our setup for the relative MMP with scaling in categories other than schemes and algebraic spaces. We have made an effort to make definitions consistent with those in the literature. \subsection{Categories of spaces} \par We will work in the following categories of spaces. We have included $(\ref{setup:algebraicspaces})$ to simplify our discussion in the rest of this section, although the necessary preliminaries are already covered in Part \ref{part:prelim}. \begin{setup}[cf.\ {\cite[\S6.2.1]{AT19}}]\label{setup:spaces} A \textsl{category of spaces} is one of the following categories. \begin{enumerate}[label=$(\textup{\Roman*})$,ref=\textup{\Roman*}] \item[$(0)$] \makeatletter \protected@edef\@currentlabel{0} \phantomsection \label{setup:algebraicspaces} \makeatother The category of quasi-excellent Noetherian algebraic spaces over a scheme $S$ admitting dualizing complexes. \item\label{setup:formalqschemes} The category of quasi-excellent Noetherian formal schemes admitting $c$-dualizing complexes. \item\label{setup:complexanalyticgerms} The category of semianalytic germs $X = (\mathcal{X},X)$ of complex analytic spaces. \item \label{setup:berkovichspaces} The category of $k$-analytic spaces, where $k$ is a complete non-Archimedean field. \makeatletter \item[{$(\ref*{setup:berkovichspaces}')$}] \protected@edef\@currentlabel{\ref*{setup:berkovichspaces}'} \phantomsection \label{setup:rigidanalyticspaces} \makeatother The category of rigid $k$-analytic spaces, where $k$ is a complete non-trivially valued non-Archimedean field. \end{enumerate} We denote any such category by $\Sp$. A \textsl{space} is an object in $\Sp$. \par A \textsl{category of $\mathbf{Q}$-spaces} is a category of spaces as above, except that in $(\ref{setup:algebraicspaces})$ and $(\ref{setup:formalqschemes})$, we assume that the algebraic spaces and formal schemes are over $\Spec(\mathbf{Q})$, and in $(\ref{setup:berkovichspaces})$ and $(\ref{setup:rigidanalyticspaces})$, we assume that the field $k$ is of characteristic zero. We denote any such category by $\Sp_\mathbf{Q}$. A \textsl{$\mathbf{Q}$-space} is an object in $\Sp_\mathbf{Q}$. \end{setup} In each category, there are good notions of affinoid subdomains, admissible affinoid coverings, regularity, and smooth and regular morphisms \cite[\S6.2]{AT19}. Moreover, there is a relative GAGA theorem for proper schemes over $\Spec(\mathcal{O}_U(U))$ when $U$ is affinoid, which induces equivalences on categories of coherent sheaves and isomorphisms on cohomology modules (see \cite[\S6.3]{AT19}).
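To orient the reader, the affinoid-local transfer pattern used repeatedly in the rest of this part can be summarized schematically as follows; this is only a restatement of the GAGA results just cited, under their hypotheses, and not a new statement. For an affinoid subdomain $U \subseteq Z$ and a proper morphism $\pi\colon X \to Z$, analytification gives \[ \Coh\bigl(\pi^{-1}(U)^\textup{al}\bigr) \overset{\sim}{\longrightarrow} \Coh\bigl(\pi^{-1}(U)\bigr) \qquad \text{and} \qquad H^i\bigl(\pi^{-1}(U)^\textup{al},\mathscr{F}\bigr) \cong H^i\bigl(\pi^{-1}(U),\mathscr{F}^\textup{an}\bigr), \] so that conditions formulated in terms of coherent sheaves and their cohomology (for example the ampleness, nefness, and bigness conditions below) can be checked on either side of the correspondence.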
For spaces $X$ and schemes $X_0$ that these GAGA theorems apply to, we use the notions of \textsl{analytification} and \textsl{algebraization} as in Convention \ref{convention:analytification}. \subsubsection{Divisors and \texorpdfstring{$\mathbf{Q}$}{Q}-factoriality} Let $X$ be an irreducible normal space. \par Weil divisors are defined as formal sums of integral closed subspaces of codimension $1$ (see \cite[p.\ 34-08]{Bos83} for $(\ref{setup:rigidanalyticspaces})$). \par We use Cartier divisors in cases $(\ref{setup:complexanalyticgerms})$, $(\ref{setup:berkovichspaces})$, and $(\ref{setup:rigidanalyticspaces})$. For $(\ref{setup:complexanalyticgerms})$, Cartier divisors are defined as a special type of Weil divisor, following \cite[p.\ 555]{Nak87}. For $(\ref{setup:berkovichspaces})$ and $(\ref{setup:rigidanalyticspaces})$, we use the definition of Cartier divisors on $G$-ringed spaces from \cite[Definition 2.2]{Gub98}. In each of these cases, we have a cycle map \[ \mathrm{cyc}\colon \Div(X) \longrightarrow \WDiv(X). \] This follows from the definition of Weil and Cartier divisors for $(\ref{setup:complexanalyticgerms})$. For $(\ref{setup:berkovichspaces})$ and $(\ref{setup:rigidanalyticspaces})$, see the construction in \cite[2.5]{Gub98} (note the construction also works for $k$-analytic spaces since inclusions of affinoid subdomains induce flat maps on rings of sections \cite[Proposition 2.2.4$(ii)$]{Ber90}). \begin{remark} For formal schemes $(\ref{setup:formalqschemes})$, we will not use the language of Cartier divisors because of difficulties that exist in defining cycle maps (see \cite[pp.\ 59--60]{Smi17}). This also affects our definition of $\mathbf{Q}$-factoriality below. \end{remark} \par We now define $\mathbf{k}$-Weil and $\mathbf{k}$-Cartier divisors and the corresponding notion of $\mathbf{Q}$-factoriality. See also \cite[Definition 4.13]{Nak87} for the complex analytic case. \begin{definition} Let $X$ be an irreducible and normal space. Let $\mathbf{k} \in \{\mathbf{Q},\mathbf{R}\}$. For $(\ref{setup:complexanalyticgerms})$, $(\ref{setup:berkovichspaces})$, and $(\ref{setup:rigidanalyticspaces})$, we define $\mathbf{k}$-Weil divisors and $\mathbf{k}$-Cartier divisors as in Definition \ref{def:kcartdiv}, and say a $\mathbf{k}$-Weil divisor \textsl{is $\mathbf{k}$-Cartier} if it is in the image of the map $\mathrm{cyc}_\mathbf{k} \coloneqq \mathrm{cyc} \otimes_\mathbf{Z} \mathbf{k}$. We say $X$ is \textsl{$\mathbf{Q}$-factorial} if $\mathrm{cyc}_\mathbf{Q}$ is surjective. \par For case $(\ref{setup:formalqschemes})$, let $\pi\colon X \to Z$ be a projective morphism. We say that $X$ is \textsl{$\mathbf{Q}$-factorial} if, for every $\mathbf{Q}$-Weil divisor $\Delta$ on $X$, there exists a $\mathbf{Q}$-invertible sheaf $D$ on $X$ such that for every affine open $\Spf(A) \subseteq Z$, the $\mathbf{Q}$-invertible sheaf on $\pi^{-1}(\Spf(A))^\textup{al}$ corresponding to $D_{\vert \pi^{-1}(\Spf(A))}$ via formal GAGA \cite[Corollaire 5.1.6]{EGAIII1} has associated Weil divisor equal to the $\mathbf{Q}$-Weil divisor on $\pi^{-1}(\Spf(A))$ corresponding to $\Delta_{\vert \pi^{-1}(\Spf(A))}$ via formal GAGA \cite[Corollaire 5.1.6]{EGAIII1}. \end{definition} We note that regular rigid analytic spaces are $\mathbf{Q}$-factorial (in fact, $\mathrm{cyc}$ is an isomorphism) by \cite[Theorem A.9]{Mit11}. \begin{lemma}\label{lem:gagaqfac} Let $\pi\colon X \to Z$ be a projective morphism in $\Sp$.
If $X$ is $\mathbf{Q}$-factorial, then $\pi^{-1}(U)^\textup{al}$ is $\mathbf{Q}$-factorial for every affinoid subdomain $U \subseteq Z$. \end{lemma} \begin{proof} This holds since GAGA induces isomorphisms on Picard groups and groups of Weil divisors. The isomorphisms on Picard groups hold since GAGA induces equivalences of categories of coherent sheaves, and a coherent sheaf $\mathscr{F}$ is invertible if and only if \[ \mathscr{F} \otimes_{\mathcal{O}_X} \HHom_{\mathcal{O}_X}(\mathscr{F},\mathcal{O}_X) \longrightarrow \mathcal{O}_X \] is an isomorphism \cite[\href{https://stacks.math.columbia.edu/tag/0B8N}{Tag 0B8N}]{stacks-project}. \end{proof} \subsubsection{Ampleness} We have good notions of relative ampleness for the categories $(\ref{setup:formalqschemes})$, $(\ref{setup:complexanalyticgerms})$, and $(\ref{setup:rigidanalyticspaces})$. We have adopted a definition for Berkovich spaces that allows us to apply the relative GAGA theorem in this setting. These ample invertible sheaves correspond to ample invertible sheaves under the GAGA correspondence (see \cite[Remark 3.1.3]{Con06} for $(\ref{setup:rigidanalyticspaces})$). \begin{definition} Let $\pi\colon X \to Z$ be a proper morphism in $\Sp$ in the sense of \cite[(3.4.1)]{EGAIII1}, \cite[p.\ 91]{BS76} (with the adjustment to germs as in \cite[\S B.5]{AT19}), \cite[Definition 9.6.2/2]{BGR84}, and \cite[Example 1.5.3$(iii)$]{Ber93}, respectively. Let $\mathscr{L}$ be an invertible sheaf on $X$. We say that $\mathscr{L}$ is \textsl{$\pi$-ample} in each setting of Setup \ref{setup:spaces} if the following hold: \begin{enumerate} \item[$(\ref{setup:formalqschemes})$] For every affine open $\Spf(A) \subseteq Z$, if $I$ is the ideal of definition of $A$, then $\mathscr{L}$ restricts to a relatively ample invertible sheaf on $X \times_Z \Spec(A/I)$ (see \cite[Th\'eor\`eme 5.4.5]{EGAIII1}). \item[$(\ref{setup:complexanalyticgerms})$] There exists a proper representative $\mathcal{X} \to \mathcal{Z}$ of $\pi$ and an invertible sheaf on $\mathcal{X}$ restricting to $\mathscr{L}$ on $X$ that is $\pi$-ample in the sense of \cite[p.\ 141]{BS76}. \item[$(\ref{setup:rigidanalyticspaces})$] The invertible sheaf $\mathscr{L}$ is ample relative to $Z$ in the sense of \cite[Definition 3.2.2]{Con06}. \end{enumerate} If there exists a $\pi$-ample invertible sheaf on $X$, we say that $\pi$ is \textsl{projective}. \par For $(\ref{setup:berkovichspaces})$, we call $\pi\colon X \to Z$ \textsl{projective} if $\pi$ is proper and there exists an invertible sheaf $\mathscr{L}$ on $X$ such that for every affinoid subdomain $V \subseteq Z$, the restriction of $\pi$ to $\pi^{-1}(V)$ is the analytification of a projective morphism over $\Spec(\mathcal{O}_V(V))$ with $\mathscr{L}_{\vert \pi^{-1}(V)} = (\mathcal{O}(1))^\textup{an}$. In this case, we refer to $\mathscr{L}$ as a \textsl{$\pi$-ample invertible sheaf}. \par A $\mathbf{k}$-invertible sheaf for $\mathbf{k} \in \{\mathbf{Q},\mathbf{R}\}$ is \textsl{$\pi$-ample} if it is a nonzero $\mathbf{k}_{>0}$-linear combination of $\pi$-ample invertible sheaves. \end{definition} \subsubsection{Nefness} We can define nefness using GAGA. \begin{definition}\label{def:nefothercats} Let $\pi\colon X \to Z$ be a proper morphism in $\Sp$. \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item A closed subspace $Y \subseteq X$ is \textsl{$\pi$-contracted} if $\pi(Y)$ is a zero-dimensional (closed) subspace of $Z$. A \textsl{$\pi$-contracted curve} is a $\pi$-contracted closed subspace that is integral and of dimension one.
\item\label{def:nefothercatscontr} Suppose that every $\pi$-contracted curve $C \subseteq X$ is the analytification of a scheme $C^\textup{al}$ over $\{z\}^\textup{al}$, where $z$ denotes the point $\pi(C)$. Let $D \in \Pic_\mathbf{k}(X)$ for $\mathbf{k} \in \{\mathbf{Z},\mathbf{Q},\mathbf{R}\}$. We say that $D$ is \textsl{$\pi$-nef} if, for every $\pi$-contracted curve $C \subseteq X$, we have $\deg_{C^\textup{al}}(D^\textup{al}) \ge 0$. \end{enumerate} \end{definition} We note that this definition is consistent with that for schemes and algebraic spaces, since nefness for schemes and algebraic spaces can be checked at closed points when the base is (for example) Noetherian (Lemma \ref{lem:NefAgainstNonClosedContracted}). It is also consistent with the complex analytic case defined in \cite[Definition 1.7]{Nak87} and with the notions of degree and intersection theory on rigid analytic spaces of dimension $\le 2$ from \cite[\S\S A.4--A.5]{Mit11}, since in either case the GAGA correspondence preserves cohomology modules \citeleft\citen{AT19}\citemid Theorem C.1.1\citepunct \citen{Kop74}\citemid 1. GAGA-Satz 4.7\citeright. \begin{remark} The condition in Definition \ref{def:nefothercats}$(\ref{def:nefothercatscontr})$ on $\pi$-contracted curves holds for $(\ref{setup:formalqschemes})$ when $\pi$ is projective. In the categories $(\ref{setup:complexanalyticgerms})$, $(\ref{setup:berkovichspaces})$, and $(\ref{setup:rigidanalyticspaces})$, every $\pi$-contracted curve is the analytification of a scheme $C^\textup{al}$ over $\{z\}^\textup{al}$. See \cite{nfdc23} for $(\ref{setup:complexanalyticgerms})$, see \cite[Th\'eor\`eme 3.7.2]{Duc} for $(\ref{setup:berkovichspaces})$, and see \cite[Theorem A.12]{Mit11} for $(\ref{setup:rigidanalyticspaces})$. \end{remark} \subsubsection{Bigness} For $\mathbf{k} \in \{\mathbf{Q},\mathbf{R}\}$ and projective morphisms in $\Sp$, we define relatively big $\mathbf{k}$-invertible sheaves by passing to affinoid subdomains in $Z$, and then using Definition \ref{def:fbig}. \subsubsection{Canonical divisors and singularities of pairs} We can define canonical sheaves and divisors in the same way as in Definition \ref{def:canonicalsheaf} using the notion of dualizing complexes from \S\ref{sect:qedualizingothers}. We define singularities of $\mathbf{Q}$-pairs as in Definition \ref{def:singpairs}, where we note that the requisite trace morphisms $f_*\omega_Y\to \omega_X$ between canonical sheaves exist by analytifying the corresponding Grothendieck trace morphisms on schemes. Since we are working with $\mathbf{Q}$-pairs, however, instead of $\mathbf{Q}$-linear equivalences as in Definition \ref{def:canonicalsheaf}, we can work with isomorphisms of coherent sheaves as in \cite[(2.4.1)]{Kol13}, which behave better under the GAGA correspondence. \par In case $(\ref{setup:formalqschemes})$, the trace morphism from \cite{ATJLL99} is the analytification of the trace morphism in the scheme case by Remark \ref{rem:formalgaga}$(\ref{rem:formalgagasharp})$. In case $(\ref{setup:complexanalyticgerms})$, the trace morphism from \cite{RRV71} is the analytification of the trace morphism in the scheme case by Theorem \ref{thm:dualizingcomplexcompatcomplex}$(\ref{thm:dualizingcomplexcompatcomplexexcpullback})$. In cases $(\ref{setup:berkovichspaces})$ and $(\ref{setup:rigidanalyticspaces})$, we note that one can define discrepancies using isomorphisms of the form in \cite[(2.4.1)]{Kol13}.
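For the reader's convenience, we also record the shape of the discrepancy formalism in the form used below; this is only a schematic restatement, and the precise formulation is that of Definition \ref{def:singpairs}. For a proper birational morphism $f\colon Y \to X$ from a normal space $Y$, with $K_X+\Delta$ $\mathbf{Q}$-Cartier, one writes \[ K_Y + f_*^{-1}\Delta \sim_\mathbf{Q} f^*(K_X+\Delta) + \sum_{\text{$f$-exceptional $E$}} a(E,X,\Delta)\,E, \] and $(X,\Delta)$ is klt if $\lfloor\Delta\rfloor \le 0$ and $a(E,X,\Delta) > -1$ for every $f$-exceptional prime divisor $E$ over $X$.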
\par Moreover, in cases $(\ref{setup:complexanalyticgerms})$, $(\ref{setup:berkovichspaces})$, and $(\ref{setup:rigidanalyticspaces})$, the canonical divisors $K_X$ and canonical sheaves $\omega_X$ have concrete descriptions as sheaves of top differential forms after restricting to the smooth locus of $X$ by Theorems \ref{thm:dualizingcomplexcompatcomplex}$(\ref{thm:dualizingcomplexcompatcomplexexcpullback})$ and \ref{thm:dualizingcomplexcompatrigid}$(\ref{thm:dualizingcomplexcompatrigidexcpullbacksmooth})$.\smallskip \par To reduce to the scheme setting, we prove the following: \begin{lemma}\label{lem:kltgaga} Let $\pi\colon X \to Z$ be a projective morphism in $\Sp_\mathbf{Q}$, and let $\Delta$ be a $\mathbf{Q}$-Weil divisor on $X$ such that $K_X+\Delta$ is $\mathbf{Q}$-Cartier and $(X,\Delta)$ is klt. Then, for every affinoid subdomain $U \subseteq Z$, we have that $(\pi^{-1}(U)^\textup{al},(\Delta_{\vert \pi^{-1}(U)})^\textup{al})$ is klt. \end{lemma} \begin{proof} Replacing $Z$ by an affinoid subdomain $U$, we may assume that $U = Z$. Fix a proper log resolution $f\colon Y \to X^\textup{al}$ of $(X^\textup{al},\Delta^\textup{al})$, which exists by \cite[Theorem 2.3.6]{Tem08}. Then, $f^\textup{an}\colon Y^\textup{an} \to X$ is a log resolution of $(X,\Delta)$ by \cite[Proposition 6.3.6]{AT19}. The claim about klt singularities holds since the expression \[ K_{Y^\textup{an}} + (f^\textup{an})_*^{-1}\Delta \sim_\mathbf{Q} f^{\textup{an}*}(K_X+\Delta) + \sum_{\text{$f$-exceptional $E$}} a(E,X,\Delta)E \] (or more canonically, the sheaf-theoretic version of this $\mathbf{Q}$-linear equivalence in \cite[(2.4.1)]{Kol13}) also holds after algebraization. \end{proof} \begin{remark} Lemma \ref{lem:kltgaga} holds for other singularities of pairs, since we showed that the discrepancies are well-behaved under algebraization. \end{remark} \section{The relative MMP with scaling (Proofs of Theorems \texorpdfstring{\ref*{thm:introrelativemmp}}{A} and \texorpdfstring{\ref*{thm:introfinitegen}}{B})}\label{sect:mmpforothercats} \subsection{Conventions} We now set our conventions for the relative minimal model program with scaling. We adopt the conventions from \cite{Kol21qfac,VP} where one contracts extremal faces instead of extremal rays. \par We will fix the following notation. \begin{enumerate}[ref=\dagger] \makeatletter \item[$(\dagger)$] \protected@edef\@currentlabel{\dagger} \phantomsection\label{setup:dagger} \makeatother Let $\pi\colon X \to Z$ be a projective morphism of normal spaces in $\Sp$, such that $Z$ is irreducible. Let $D$ be an $\mathbf{R}$-invertible sheaf on $X$, and let $H$ be an $\mathbf{R}$-invertible sheaf on $X$ such that $D+r_XH$ is $\pi$-ample for some $r_X \in \mathbf{R}_{>0}$. \end{enumerate} \begin{citeddef}[\citeleft\citen{Kol21qfac}\citemid Definition 1\citepunct \citen{VP}\citemid \S2\citeright]\label{def:mmpconventions} Fix notation as in $(\ref{setup:dagger})$. By decreasing the constant $r_X$ in $(\ref{setup:dagger})$ to a real number $r \ge 0$, we arrive at one of the following situations: \begin{enumerate}[label=$(\roman*)$] \item $r > 0$ is a real number such that $D+rH$ is $\pi$-nef but not $\pi$-ample, and $D+(r+\varepsilon)H$ is $\pi$-ample for all $\varepsilon > 0$. \item $r = 0$, in which case $D$ is $\pi$-nef. \end{enumerate} For such $r$, we define $\phi^r \colon X \to Y$ to be the morphism (if it exists) that contracts all $\pi$-contracted curves $C \subseteq X$ such that $(D+rH) \cdot C = 0$.
We then assume we have a diagram \[ \begin{tikzcd} X \arrow[dashed]{rr}{f^r}\arrow[bend right=15]{ddr}[swap]{\pi} \arrow{dr}{\phi^r} & & X^r \arrow[bend left=15]{ddl}{\pi^r} \arrow{dl}[swap]{\psi^r}\\ & Y\dar\\ & Z \end{tikzcd} \] where $\psi^r$ is birational and contracts no divisors, and, assuming $D^r = (f^r)_*D$ and $H^r = (f^r)_*H$ are $\mathbf{R}$-Cartier, the divisor $D^r + (r-\varepsilon)H^r$ on $X^r$ is $\pi^r$-ample for sufficiently small $\varepsilon > 0$. The rational map $f^r\colon X \dashrightarrow X^r$ is the first step of the relative $D$-MMP with scaling of $H$. We then replace $X$ by $X^r$ and decrease the coefficient $(r-\varepsilon)$ in $D^r + (r-\varepsilon)H^r$ to obtain the next step of the $\pi$-relative $D$-MMP with scaling of $H$. \par To discuss the outputs of the relative $D$-MMP, we index these steps continuously, following \cite[\S2]{VP}. For each $r \in \mathbf{R}_{>0}$, the \textsl{$r$-th output of the $\pi$-relative $D$-MMP with scaling of $H$} (if it exists), denoted by $f^r\colon X \dashrightarrow X^r$, will mean the composite \begin{equation}\label{eq:rthoutputdmmp} X \dashrightarrow X^{r_1} \dashrightarrow \cdots \dashrightarrow X^{r_n} \end{equation} for real numbers $r_1 > \cdots > r_n \ge r$, where each $r_i$ is such that \begin{itemize} \item $D^{r_{i-1}} + r_iH^{r_{i-1}}$ is $\pi$-nef, but not $\pi$-ample; \item $D^{r_{i-1}} + (r_i+\varepsilon)H^{r_{i-1}}$ is $\pi$-ample for all $\varepsilon > 0$; and \item $D^{r_n} + (r-\varepsilon)H^{r_n}$ is $\pi$-ample for sufficiently small $\varepsilon > 0$. \end{itemize} Each birational map in \eqref{eq:rthoutputdmmp} is the composition of steps in the $\pi$-relative $D$-MMP with scaling of $H$ as described above. \end{citeddef} \begin{remark}\label{rem:diffinconventions} Compared to the conventions in \cite[Remark 3.10.10]{BCHM10} and in the proof of Theorem \ref{rem:MMP}, the steps of this version of the relative minimal model program are uniquely determined by the starting data, and correspond to compositions of steps in Theorem \ref{rem:MMP}. This will be useful in extending our results to other categories as was done for algebraic spaces in \cite{VP}. See \cite[Warning 1.8]{Kol21qfac} and \cite[pp.\ 2--3]{VP} for more discussion. \end{remark} \par We have the following uniqueness result for the outputs of the relative $D$-MMP with scaling. \begin{citedlem}[{\cite[Lemma 2.1]{VP}}]\label{lem:vp21} Let $\Sp$ be as in $(\ref{setup:algebraicspaces})$. Suppose the hypotheses in $(\ref{setup:dagger})$ are satisfied. The $r$-th output of the $\pi$-relative $D$-MMP with scaling of $H$, if it exists, is characterized by the following properties: \begin{enumerate}[label=$(\roman*)$,ref=\roman*] \item\label{lem:vp21i} $f^r\colon X \dashrightarrow X^r$ is a birational contraction to an integral normal algebraic space proper over $Z$. \item\label{lem:vp21ii} $f^r_*(D+(r-\varepsilon)H)$ is ample over $Z$ for sufficiently small $\varepsilon > 0$. \item\label{lem:vp21iii} $f^r$ only contracts Weil divisors $E$ for which the restriction $(D+rH)_{\vert E}$ is not big over $Z$. \end{enumerate} \end{citedlem} \subsection{Gluing} We now prove a variation of \cite[Theorem 2.2]{VP} that shows that the steps of the relative $D$-MMP with scaling are compatible with certain base changes $V \to Z$. Compared to \cite[Theorem 2.2]{VP}, we have weakened the quasi-finiteness assumption: we only require that the morphism $V \to Z$ be quasi-finite in the sense of \cite[p.\ 2]{SGA1} over all closed points.
Note that the definition of quasi-finiteness in \cite{SGA1} does not include finite type assumptions. This weakening is necessary to work with transition morphisms between rings of sections of affinoid subdomains in Theorem \ref{thm:gluemmp} below. For simplicity, we assume that $Z$ is a scheme. \begin{theorem}\label{thm:vp22alt} Let $\Sp$ be as in $(\ref{setup:algebraicspaces})$ of Setup \ref{setup:spaces}. Suppose the hypotheses in $(\ref{setup:dagger})$ are satisfied, and suppose that $Z$ is a scheme. Let $V$ be a quasi-excellent normal scheme and let $V \to Z$ be a universally open morphism that is quasi-finite over all closed points in $Z$. If $X^r$ exists, then $(X_V)^r$ exists and $(X_V)^r \cong (X^r \times_Z V)^\nu$, where $(-)^\nu$ denotes normalization. \end{theorem} \begin{proof} We assume that $f^r\colon X \dashrightarrow X^r$ exists. Consider the normalized base change \[ X_V \coloneqq (X \times_Z V)^\nu \overset{f^r_V}{\dashrightarrow} (X^r \times_Z V)^\nu, \] which fits into the following commutative diagram: \[ \begin{tikzcd} X_V \rar[dashed]{f_V^r}\dar[swap]{p_V} & (X^r \times_Z V)^\nu \dar{p_V^r}\\ X \rar[dashed]{f^r} & X^r \end{tikzcd} \] We claim that $f^r_V$ satisfies the properties in Lemma \ref{lem:vp21}. \par We first show $(\ref{lem:vp21i})$. By construction, $(X^r \times_Z V)^\nu$ is normal, and is proper over $V$. Now suppose that the birational inverse $(f^r_V)^{-1}$ contracts a Weil divisor $E$. Since $X^r$ is proper over $Z$, we see that $p_V^r$ is quasi-finite over all closed points in $X^r$, and similarly for $p_V$. Thus, $p_V^r(E)$ is a Weil divisor in $X^r$ whose birational transform in $X$ is of codimension $\ge 2$. This contradicts the assumption that $f^r$ is a birational contraction. Thus, $f^r_V$ is a birational contraction as well. \par Next, we show $(\ref{lem:vp21ii})$. We know that $f^r_*(D+(r-\varepsilon)H)$ is ample over $Z$. Since $p_V^r$ is quasi-finite over all closed points in $X^r$, we see that the pullback $p_V^{r*}f^r_*(D+(r-\varepsilon)H)$ is ample over $V$ (here we use that ampleness can be detected over closed points by combining \cite[\href{https://stacks.math.columbia.edu/tag/0D36}{Tag 0D36}]{stacks-project} and \cite[Proposition 2.7]{Kee03}). Since both $X_V$ and $(X^r \times_Z V)^\nu$ are normal schemes, we have \[ p_V^{r*}f^r_*(D+(r-\varepsilon)H) = (f^r_V)_*\bigl(D_V+(r-\varepsilon)H_V\bigr) \] since they agree in codimension $1$. \par Finally, we show $(\ref{lem:vp21iii})$. Suppose $f^r_V$ contracts a Weil divisor $E$. Since $p_V$ is quasi-finite over all closed points in $X$, we see that $p_V(E)$ is a divisor in $X$ that is contracted by $f^r$. Thus, $(D+rH)_{\vert p_V(E)}$ is not big. This implies $(D_V+rH_V)_{\vert E}$ is not big, since $V \to Z$ maps generic points to generic points \cite[Propositions 3.9.3$(i)$ and 3.9.5$(ii)$]{EGAInew}, and volumes are compatible with field extensions by flat base change \cite[Proposition 1.4.15]{EGAIII1}. \end{proof} We can now glue steps of the relative $D$-MMP together. See \cite[Corollary 2.3]{VP} for the corresponding gluing statement for steps of the MMP for algebraic spaces. \begin{theorem}\label{thm:gluemmp} Let $\Sp$ be as in $(\ref{setup:formalqschemes})$, $(\ref{setup:complexanalyticgerms})$, $(\ref{setup:berkovichspaces})$, or $(\ref{setup:rigidanalyticspaces})$ of Setup \ref{setup:spaces}. Suppose the hypotheses in $(\ref{setup:dagger})$ are satisfied.
Let $Z = \bigcup_a V_a$ be an affinoid covering, and define $X_a = X \times_Z V_a$, $\pi_a = \pi_{\vert X_a}$, $D_a = D_{\vert X_a}$, and $H_a = H_{\vert X_a}$. Suppose that for each $a$, the $r$-th output of the $\pi_a$-relative $D_a$-MMP with scaling of $H_a$ exists. Then, the $r$-th output of the $\pi$-relative $D$-MMP with scaling of $H$ exists. \end{theorem} \begin{proof} It suffices to show that for every affinoid subdomain $W \subseteq V_a \cap V_b$, the restriction to $W$ of the $r$-th output of the $\pi_a$-relative $D_a$-MMP with scaling coincides with that of the $\pi_b$-relative $D_b$-MMP with scaling. \par Let $A_a = \mathcal{O}_{V_a}(V_a)$, $A_b = \mathcal{O}_{V_b}(V_b)$, and $B = \mathcal{O}_W(W)$. It suffices to show that the corresponding steps of the relative $D$-MMP with scaling over the schemes $\Spec(A_a)$ and $\Spec(A_b)$ under the GAGA correspondences in \cite[\S6.3]{AT19} coincide with those over $\Spec(B)$, since all objects involved are projective over $Z$. By Theorem \ref{thm:vp22alt}, it suffices to show that the maps $\Spec(B) \to \Spec(A_a)$ and $\Spec(B) \to \Spec(A_b)$ are universally open and quasi-finite over closed points. That they are universally open follows since these morphisms are regular \cite[Lemma 6.2.8]{AT19}, and hence flat. It therefore suffices to show that if $W \subseteq V$ is an inclusion of affinoid subdomains in $Z$, then the map $\Spec(\mathcal{O}_W(W)) \to \Spec(\mathcal{O}_{V}(V))$ is quasi-finite over closed points. For $(\ref{setup:formalqschemes})$, this follows by definition of the structure sheaf on $\Spf$. For $(\ref{setup:complexanalyticgerms})$ and $(\ref{setup:rigidanalyticspaces})$, we consider the commutative diagram \[ \begin{tikzcd} W \rar[hook]\dar[swap]{h_W} & V\dar{h_V}\\ \Spec\bigl(\mathcal{O}_W(W)\bigr) \rar & \Spec\bigl(\mathcal{O}_V(V)\bigr) \end{tikzcd} \] of ($G$-)ringed spaces, where the vertical arrows are the analytification morphisms from GAGA as in \cite[\S6.3]{AT19}. Then, $h_W$ and $h_V$ induce bijections on closed points by \cite[Lemma B.6.1$(iii)$]{AT19} for $(\ref{setup:complexanalyticgerms})$ and by combining \cite[Corollary 6.1.2/3]{BGR84} and \cite[Lemma 2.6.3]{Ber93} for $(\ref{setup:rigidanalyticspaces})$. In both cases, we therefore see that the bottom horizontal arrow is finite-to-one over closed points by counting points in fibers. \par For $(\ref{setup:berkovichspaces})$, we can repeat the same argument as above after passing to a complete non-trivially valued non-Archimedean field extension $k \subseteq K_r$ such that $W \hat{\otimes}_k K_r$ and $V \hat{\otimes}_k K_r$ are strictly $K_r$-affinoid (such a $K_r$ exists as in the proof of \cite[Proposition 2.2.4]{Ber90}). This is because the morphisms $W \hat{\otimes}_k K_r \to W$ and $V \hat{\otimes}_k K_r \to V$ are faithfully flat by \cite[Lemma 2.1.2]{Ber93}, and because the normality conditions in $(\ref{setup:dagger})$ are preserved, since the extension $k \subseteq K_r$ is analytically separable in the sense of \cite[D\'efinition 1.6]{Duc09} by \cite[Proposition 2.6.7(3)]{Duc18}. \end{proof} \subsection{Proof of Theorem \texorpdfstring{\ref{thm:introrelativemmp}}{A}} We can now prove Theorem \ref{thm:introrelativemmp}. \begin{proof}[Proof of Theorem \ref{thm:introrelativemmp}] First, we already showed case $(\ref{setup:introalgebraicspaces})$ in Theorems \ref{rem:MMP} and \ref{thm:cl1365}.
It therefore suffices to show Theorem \ref{thm:introrelativemmp} in cases $(\ref{setup:introformalqschemes})$, $(\ref{setup:introcomplexanalyticgerms})$, $(\ref{setup:introberkovichspaces})$, and $(\ref{setup:introrigidanalyticspaces})$. \par Let $A$ be a $\mathbf{Q}$-invertible sheaf as in the statement of Theorem \ref{thm:introrelativemmp}, and let $Z = \bigcup_a V_a$ be the associated affinoid covering. Using Lemma \ref{lem:AmpleIsGoodScaling} and after possibly shrinking the $V_a$, we have divisors $A_{a} \in \lvert A_{\vert \pi^{-1}(V_a)} \rvert$ such that $(X_a,\Delta_{\vert X_a}+A_a)$ is klt. We want to apply Theorem \ref{thm:gluemmp} for $D = K_X+\Delta$, $H = A$, and the affinoid covering $Z = \bigcup_a V_a$. By Theorems \ref{rem:MMP} and \ref{thm:cl1365}, each of the steps of the $\pi_a$-relative $(K_X+\Delta)_a$-MMP with scaling of $A_a$ exists (note that because of the difference in conventions, the steps of the relative MMP in Definition \ref{def:mmpconventions} are compositions of steps in Theorem \ref{rem:MMP}; see Remark \ref{rem:diffinconventions}) and the $\pi_a$-relative $(K_X+\Delta)_a$-MMP with scaling of $A_a$ terminates. In order to apply these theorems, we note that the positivity conditions on $A$ and $K_X+\Delta+A$ are preserved under algebraization, as well as the klt condition on $(X_a,\Delta_{\vert X_a}+A_a)$ (see Lemma \ref{lem:kltgaga}). By Theorem \ref{thm:gluemmp}, we can glue these relative MMP steps to obtain global MMP steps over $Z$. By construction, we see that this relative MMP terminates over each $V_a$. \end{proof} \subsection{Proof of Theorem \texorpdfstring{\ref{thm:introfinitegen}}{B}} Finally, we prove Theorem \ref{thm:introfinitegen}. \begin{proof}[Proof of Theorem \ref{thm:introfinitegen}] As before, we have already shown case $(\ref{setup:introalgebraicspaces})$ in Theorem \ref{thm:finitegenerationalgspaces}. It therefore suffices to show Theorem \ref{thm:introfinitegen} in cases $(\ref{setup:introformalqschemes})$, $(\ref{setup:introcomplexanalyticgerms})$, $(\ref{setup:introberkovichspaces})$, and $(\ref{setup:introrigidanalyticspaces})$. \par The positivity conditions on the $A_i$ and $c_iK_X+\Delta_i$ are preserved under algebraization over every affinoid subdomain $U \subseteq Z$, as well as the klt condition on $(X,\Delta_i)$ (see Lemma \ref{lem:kltgaga}). Since GAGA preserves cohomology groups \citeleft\citen{EGAIII1}\citemid Proposition 5.1.2\citepunct \citen{AT19}\citemid Theorem C.1.1\citepunct \citen{Poi10}\citemid Th\'eor\`eme A.1$(i)$\citepunct \citen{Kop74}\citemid Folgerung 6.6\citeright\ (see also \citeleft\citen{Con06}\citemid Example 3.2.6\citepunct \citen{Hal}\citemid Example 9.4\citeright), we can apply Theorem \ref{thm:finitegenerationalgspaces} over $\Spec(\mathcal{O}_U(U))$ to deduce Theorem \ref{thm:introfinitegen}. \end{proof} \addtocontents{toc}{\protect\bigskip} \bookmarksetup{startatroot}
{ "timestamp": "2022-09-20T02:23:52", "yymm": "2209", "arxiv_id": "2209.08732", "language": "en", "url": "https://arxiv.org/abs/2209.08732" }
\section{Robot Design} The robot used for collecting photographic data of the lettuce plants consists of two subsystems: a cable-driven parallel robot (CDPR) and a robot arm mounted on its end effector. The purpose of the robot arm is to collect large numbers of photos of a plant from various, repeatable angles for use in SfM and other analysis techniques. Meanwhile, the CDPR enables analyzing a larger quantity of plants by moving the robot arm from plant to plant, enlarging the workspace of the robot arm to cover dozens of plants. \subsection{Mechanical Design} \subsubsection{CDPR} The cable-based robot platform is chosen for its scalability and economy \cite{Kirchgessner17fpb_cablerobot_phenotyping,Bai19cea_cablerobot_agriculture}, which allow the robot to reach multiple plants and remain permanently installed for complete autonomy. Although our demonstrated robot is only 2.9m\,$\times$\,2.3m in size, in principle it can scale to vertical grow towers of almost any size. Compared to, e.g., gantry- or conveyor-type systems, the price of a cable-based system remains nearly constant relative to its size. The planar design is chosen for its favorable tradeoff between capability and cost/complexity, since collisions with plants would limit the utility of out-of-plane motions anyway. The CDPR is an 8-cable, 4-motor planar CDPR with a workspace of roughly 2.9m\,$\times$\,2.3m. Details on the design can be found in \cite{Chen22icra_GTGraffiti}, with the primary distinctions being that (a) the robot arm shown in Fig. \ref{fig:arm_photo} is used in place of the spray paint carriage, and (b) the cables are doubled to provide more out-of-plane stability. The doubled cables consist of two cables spooled with two drums on a shared shaft driven by a single motor, as depicted in Fig. \ref{fig:doubled_cable}. Although certain 4-motor planar CDPR geometries can control both translation \emph{and rotation} in the plane, we choose a geometry which largely precludes rotational motion. This choice was made because the benefit of the additional stiffness enabled by the chosen geometry outweighs the lack of CDPR rotation, especially when coupled with the robot arm's shoulder joint. The CDPR with robot arm is shown in Fig. \ref{fig:cdpr}. \begin{figure} \centering \includegraphics[height=0.4239417642\linewidth]{figs/mock1.JPG} \includegraphics[height=0.4239417642\linewidth]{figs/mock3.JPG} \caption{Left: CDPR consists of 4 pairs of cables controlling a moving platform on which a robot arm is mounted. Right: Each pair of cables is crossed to provide additional out-of-plane stability, but both cables in each pair are driven by the same motor.} \label{fig:doubled_cable} \label{fig:cdpr} \end{figure} \begin{figure} \centering \includegraphics[width=0.49\linewidth,valign=t]{figs/daniel_cropped.JPG} \includegraphics[width=0.49\linewidth,valign=t]{figs/arm.jpg} \caption{4DoF robot arm with camera used to take a large number of photos from various angles of a single plant. Left: Arm (without wooden cover or CDPR) taking photos of a plant. Right: Arm inside wooden protective cover.} \label{fig:arm_photo} \end{figure} \subsubsection{Robot Arm} The robot arm (Fig. \ref{fig:arm_photo}) is chosen to supplement the planar CDPR with the dexterity to reach around a plant and take photos from a variety of viewpoints. The robot arm was chosen to have 4DoF in a configuration that, combined with the 2DoF of the CDPR, provides the robot with full SE(3) motion.
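To make this degree-of-freedom accounting concrete, the following minimal forward-kinematics sketch (in Python) composes the CDPR's in-plane translation with the arm's four revolute joints into a single camera pose. The link lengths match those reported below, while the joint axis conventions, frame placements, and example joint values are illustrative assumptions rather than the exact robot geometry.
\begin{verbatim}
import numpy as np

def se3(R, p):
    # Assemble a 4x4 homogeneous transform from rotation R and translation p.
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, p
    return T

def rot(axis, q):
    # Rotation matrix about a principal axis ('x', 'y', or 'z') by angle q.
    c, s = np.cos(q), np.sin(q)
    m = {'x': [[1, 0, 0], [0, c, -s], [0, s, c]],
         'y': [[c, 0, s], [0, 1, 0], [-s, 0, c]],
         'z': [[c, -s, 0], [s, c, 0], [0, 0, 1]]}
    return np.array(m[axis])

def joint(axis, q, link_len):
    # Revolute joint: rotate about `axis` by q, then advance along the
    # rotated frame's z-axis by the following link's length.
    return se3(rot(axis, q), np.zeros(3)) @ se3(np.eye(3), [0.0, 0.0, link_len])

# Link lengths (m) after the shoulder, elbow, and wrist joints (see below).
L1, L2, L3 = 0.107, 0.194, 0.032

def camera_pose(cdpr_yz, q_base, q_shoulder, q_elbow, q_wrist):
    # 2 DoF from the CDPR (in-plane translation) plus 4 DoF from the arm.
    T = se3(np.eye(3), [0.0, cdpr_yz[0], cdpr_yz[1]])  # CDPR platform position
    T = T @ joint('z', q_base, 0.0)                    # base yaw
    T = T @ joint('y', q_shoulder, L1)                 # shoulder + first link
    T = T @ joint('y', q_elbow, L2)                    # elbow + second link
    T = T @ joint('y', q_wrist, L3)                    # wrist + camera offset
    return T

# Example: platform near the workspace center, arm reaching toward a plant.
print(camera_pose((1.45, 1.15), 0.3, 0.8, -1.2, 0.4).round(3))
\end{verbatim}
An inverse-kinematics routine built on such a chain is what allows a desired camera viewpoint to be split between a CDPR position command and arm joint angle commands.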
Although in theory 1DoF is redundant with camera-axis rotation and another 2DoF are unnecessary with a sufficiently wide field-of-view camera, we found in practice that they are helpful when reaching around plants to avoid collisions with neighboring plants. The robot arm is adapted from a Trossen Robotics PhantomX Pincher Mark II \cite{trossen_arm}, which is a 4DoF robot manipulator using Dynamixel AX-12A servos. The links were extended to have lengths of \SI{0.107}{\m}, \SI{0.194}{\m}, and \SI{0.032}{\m} after the shoulder, elbow, and wrist joints respectively. We also replaced the gripper with a Raspberry Pi Camera Module v2, which uses an IMX219 8MP sensor. The 4 DoF allow rotation in $\theta$ with the base joint and both translation and rotation in the $x-r$ plane (see Fig. \ref{fig:coords}). The completed robot arm is shown in Fig. \ref{fig:arm_photo}. \begin{figure} \centering \subfloat[Top view\label{fig:coords_top}]{% \includegraphics[scale=0.18, page=1,trim=0 3.39in 6.99in 0, clip, valign=c]{figs/figs.pdf} \vphantom{ \includegraphics[scale=0.18, page=2,trim=0 0in 8.59in 0, clip, valign=c]{figs/figs.pdf}} } \hspace*{4em}% \subfloat[Front view\label{fig:coords_front}]{% \includegraphics[scale=0.18, page=2,trim=0 0in 8.59in 0, clip, valign=c]{figs/figs.pdf} } \caption{Coordinate frame of the camera with respect to a lettuce plant.} \label{fig:coords} \end{figure} \subsection{Electrical and Communication Design} A Raspberry Pi 4 controls the camera, robot arm, and CDPR using ROS, as overviewed in Fig. \ref{fig:communication}. The electronics are shown in Fig. \ref{fig:arm_box_annotated}, except for the motor controllers and motors, which are shown in \cite[Fig. 6 (right)]{Chen22icra_GTGraffiti}. The camera is connected to and controlled directly by the Pi. The arm is controlled by an Arbotix-M microcontroller which receives joint angle position commands from the Pi and sends back joint angle position feedback. The CDPR is controlled by a Teensy 4.1 which receives high-level cartesian position commands from the Pi and applies low-level motor torque commands to the motor controllers using the algorithm from \cite[Sec. III.A]{Chen22iros_lqg_cdpr}, which is based on factor graphs \cite{Dellaert17fnt_factorgraphsforrobotperception,Chen19blog_lqr-blogpost,Yang21icra_ecLQR}. \begin{figure} \centering \includegraphics[width=\linewidth, page=3,trim=0 0.72in 2.49in 0,clip]{figs/figs.pdf} \caption{System communication overview.} \label{fig:communication} \end{figure} \subsection{Data Collection Algorithm} The algorithm used for a data collection session consists of using the CDPR to move to a plant, then using the robot arm to take photos from a variety of viewpoints. The positions of the plants on the grow towers are known and pre-programmed for the CDPR to move to, and the set of camera poses is adjusted based on the age of the plant: the view angles remain consistent while the distance to the plant increases with plant age. \begin{figure} \centering \includegraphics[width=0.7\linewidth]{figs/arm_box_annotated_compressed.jpg} \caption{Electronics mounted on the CDPR moving platform inside a protective wooden cover.} \label{fig:arm_box_annotated} \end{figure} \subsection{Lettuce Growing and Dataset Collection Procedure} \label{ssec:grow_procedure} The data collection procedure was designed to collect photos, masses, and elemental nutrient contents of 72 total plants distributed across various growth stages and 2 different nutrient schedules (``Experiment 1'' and ``Experiment 2'').
We used Bibb Butterhead Lettuce plants because they are economically well-suited to indoor farming, grow fast, and have qualities that make them challenging for computer vision approaches (e.g. a high degree of self-occlusion and self-similar texture). Due to lessons learned from Experiment 1, the experiments used slightly different procedures, but the growing procedure for each individual plant remained the same: \begin{enumerate} \item To germinate, place seeds in a dampened rockwool substrate, and place the substrate in an incubator next to grow lights for 14 days. \item Transplant the seedlings (with substrate) into the vertical hydroponic grow towers (``Day 0''). \item Take 64 photos (1 top-down, and 21 from each of 3 rings with constant $\phi$) per day. \item Harvest and measure ground truth (GT) data at the scheduled time. \end{enumerate} The harvest process consists of cutting the plant at the base, weighing immediately (``Fresh Mass''), dehydrating for 48 hours, weighing again (``Dry Mass''), and performing a nutrient analysis. The fresh mass must be weighed immediately since, once cut, transpiration causes the plant to lose mass so quickly that the reading on a gram scale will observably decrease during the few seconds it is being weighed. For Experiment 1, the \textit{General Hydroponics Flora Series} fertilizer is used with ratios 3:2:1 of \textit{FloraGro}, \textit{FloraMicro}, and \textit{FloraBloom}, totalling 138ml of fertilizer per 100L of water. The pH is monitored and buffered daily, and the hydroponic system is flushed/replenished every 2 weeks. For Experiment 2, the Modified Sonneveld's solution from \cite{Mattson14ig_hydroponic_nutrient_formula} is used. The pH is monitored and buffered daily, and the hydroponic system is flushed/replenished 3 times per week. For Experiment 1, 48 plants were planted in sets of 12 each week and harvested at the same time, when the oldest plant reached maturity (28 days after transplant). For Experiment 2, 24 plants were planted at the same time and harvested 3/day from 21 to 28 days after transplant. The Experiment 2 plants were planted at the same time to reduce variability due to germination conditions. Consequently, they reached maturity at similar times, so they were planted at half the density of Experiment 1 to reduce overcrowding. The harvest schedule was designed to capture the most ``interesting'' portion of the logistic growth curve. The specifications for the indoor vertical hydroponic setup will be available in \cite{Sharkey23_pilot_site_tbd}, and the vertical grow rig with cable robot is shown in Fig. \ref{fig:system_with_plants}. \subsection{Mass Estimation using SfM} \label{ssec:mass_estimation} We validated the efficacy of our robot and data collection by estimating the masses of the lettuces using SfM. For each plant, we first used COLMAP to generate a dense point cloud of the plant from the photos taken by the robot. Using robot forward kinematics as camera pose priors, we transformed the point cloud into a canonical frame to resolve the monocular scale ambiguity. We then applied a number of programmatic cleaning steps to discard outlier and background points from the point cloud. Next, we generated a mesh of the plant using Poisson Surface Reconstruction and applied Poisson Disk Sampling to make a more uniform mesh while maintaining a point density sufficient to voxelize without bias on a 3mm grid size.
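As a rough illustration of these geometry steps, the following is a minimal sketch using the \texttt{open3d} library; the input file name and all parameter values are illustrative assumptions, and our actual cleaning procedure involves more steps than the single statistical-outlier filter shown here.
\begin{verbatim}
import open3d as o3d

# Dense point cloud exported from COLMAP (hypothetical file name), assumed
# already transformed into the canonical metric frame.
pcd = o3d.io.read_point_cloud("plant_dense.ply")

# Simplified stand-in for the programmatic outlier/background cleaning.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Poisson surface reconstruction, then Poisson disk sampling for uniformity.
pcd.estimate_normals()
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
pts = mesh.sample_points_poisson_disk(number_of_points=50000)

# Metrics used below: mesh surface area and occupied-voxel volume (3 mm grid).
area = mesh.get_surface_area()
grid = 0.003
voxels = o3d.geometry.VoxelGrid.create_from_point_cloud(pts, voxel_size=grid)
volume = len(voxels.get_voxels()) * grid**3
print(f"surface area = {area:.4f} m^2, voxel volume = {volume:.6f} m^3")
\end{verbatim}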
We computed both (1) the surface area of the mesh and (2) a volume estimate obtained by voxelizing the mesh at a 3mm grid size and counting the total occupied voxels. Finally, we applied linear regressions to estimate mass from either surface area or volume. \subsection{Baseline Methods} \label{ssec:baseline_methods} We evaluated our approach against (a) high-throughput methods (satellite, UAV, or conveyor-belt imagery) by using subsets of the total photos collected, and (b) high-accuracy methods by using only the robot arm without the CDPR to replicate similar high-accuracy methods in the literature. \subsubsection{Baseline 1} Here we used only a single top-down photo of each plant. We applied a 2-layer CNN to segment the plant (foreground) from the background in the undistorted photos and count the number of pixels occupied by the plant. We then used the known pose of the camera relative to the base of the plant, combined with the calibrated camera intrinsics, to approximate the projected area of the plant. \subsubsection{Baseline 2} Here we simulated over-canopy approaches (such as UAV or conveyor-belt) by using only the photos with camera poses that do not ``reach around'' the plant. Specifically, we set the minimum threshold for the x-position of the camera to be 17cm (the maximum observed x-dimension of any plant in the datasets) and used only photos from the dataset with camera poses beyond the threshold. We applied the same mass estimation algorithm as in \ref{ssec:mass_estimation}. \subsubsection{Baseline 3} Here we replicated a low-throughput, high-accuracy method by using only the robot arm without the CDPR, instead manually placing the base of the robot arm in front of each plant. For each plant, we commanded the robot arm to a ``top-down photo'' pose for reference, fixed the arm's base in place on a stand (see Fig. \ref{fig:arm_photo}, left), and took the same set of photos per plant we used for our method. We compared the throughput and camera pose consistency. \subsection{Data Collection Throughput} Our robot system was capable of autonomously collecting data at approximately 2640 photos/hour, spanning 56 plants at a density of 350 cm$^2$/plant (although the experiments had fewer viable plants than the robot was capable of imaging). Given the inherent scalability of cable robots, increasing the size of the cable robot to reach a greater number of plants should be possible. Additionally, higher quality cameras can dramatically increase the photo capture rate by enabling faster shutter speeds or even continuous robot arm motion (currently, the arm must stop for each photo to eliminate motion blur and rolling shutter effects; motion input-shaping may also improve the capture rate). Whereas Baseline 3 required a pair of skilled humans and collected only 300-600 photos/hour (64 photos/plant, depending on the skills of the operators), during Experiment 2 our robot system was demonstrated to run autonomously without human supervision across several days. We anticipate that the robot can be run 100\% autonomously in future growth cycles. \subsection{Mass Estimation Regression Results} \label{ssec:regression} Based on $R^2$ and leave-one-out cross-validation MAE values, our estimated metrics using the full dataset correlate better with the GT masses than the baseline metrics do.
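The following is a minimal sketch of this regression and its leave-one-out evaluation using \texttt{scikit-learn}; the data values are hypothetical placeholders rather than measurements from our dataset.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Hypothetical data: SfM surface area (m^2) vs. ground-truth dry mass (g).
surface_area = np.array([[0.010], [0.018], [0.031], [0.042], [0.055], [0.071]])
dry_mass = np.array([0.6, 1.1, 2.0, 2.6, 3.4, 4.5])

model = LinearRegression().fit(surface_area, dry_mass)
r2 = model.score(surface_area, dry_mass)        # R^2 on the full dataset
loo = cross_val_predict(LinearRegression(), surface_area, dry_mass,
                        cv=LeaveOneOut())       # leave-one-out predictions
mae = np.mean(np.abs(dry_mass - loo))           # leave-one-out MAE
print(f"R^2 = {r2:.3f}, LOO MAE = {mae:.3f} g")
\end{verbatim}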
Figure \ref{fig:regressions} shows the linear regressions and $R^2$ values for representative pairs, and Table \ref{tab:regressions} shows the $R^2$ and MAE values for linear regressions between estimated metrics and ground truth masses. \begin{figure} \centering \includegraphics[width=0.85\linewidth]{figs/Regression_DryMassg_vs_EstimatedSurfaceAream2-eps-converted-to.pdf} \includegraphics[width=0.85\linewidth]{figs/Regression_DryMassg_vs_Baseline12DProjectedAream2-eps-converted-to.pdf} \caption{Sample linear regression results estimating dry mass (GT) using the surface area computed from SfM (top) and the projected area estimate from a single top-down photo of each plant (bottom). Our regression is better than the baseline's, which demonstrates that our robot design's ability to capture many viewpoints obtains better performance than a simulated high-throughput approach with limited viewpoints of a plant.} \label{fig:regressions} \end{figure} \begin{table} \centering \caption{Linear Regression Results} \label{tab:regressions} \begin{tabular}{l|cc|cc} \multirow{2}{*}{Estimation Metric} & \multicolumn{2}{c|}{GT: Fresh Mass} & \multicolumn{2}{c}{GT: Dry Mass} \\ & $R^2$ & MAE (g) & $R^2$ & MAE (g) \\ \hline Surface Area (ours) & \bf0.845 & \bf11.216 & \bf0.846 & \bf0.586 \\ Volume (ours) & 0.833 & 11.671 & 0.832 & 0.617 \\ Baseline 1: Projected Area & 0.537 & 19.976 & 0.505 & 1.084 \\ Baseline 2: Surface Area & 0.292 & 26.049 & 0.285 & 1.401 \\ Baseline 2: Volume & 0.277 & 26.439 & 0.269 & 1.422 \\ \end{tabular} \end{table} \subsection{Statistical Power} Ultimately, one primary application of non-destructive phenotyping is to draw scientific conclusions from the computed metrics. To this end, we evaluated our method by running statistical significance tests using (1) ground-truth masses, (2) estimated masses from our method in \ref{ssec:mass_estimation}, and (3) estimated masses from the baseline methods in \ref{ssec:baseline_methods}. We used ANOVA \cite{Mishra19ca_statistical_tests} to test 2 hypotheses: (a) the age of the plant is correlated with the mass of the plant, and (b) the nutrient schedule is correlated with the mass of the plant. To test the hypothesis that age is correlated with mass, we grouped the plants by harvest day and ran one-way ANOVA tests to determine whether the different groups have different means for each of the mass metrics. Due to the non-uniform frequencies of ages in the harvested plants, shown in Table \ref{tab:plant_age}, we performed separate tests for Experiments 1 and 2, and for Experiment 1 we only used the 15 and 21 day-old age groups (note that a 2-group ANOVA is equivalent to a t-test with independent samples \cite{Mishra19ca_statistical_tests}). The results are presented in columns 2 and 3 of Table \ref{tab:anova} and show that both our estimated volume and surface area have statistical power similar to that of the GT masses ($p<0.005$), while the baselines are nearly an order of magnitude less powerful. The exception is Baseline 1 for Experiment 1, which is likely an artifact of the camera-positioning algorithm in Experiment 1, which used different camera positions for the different age groups. To test the hypothesis that the nutrient schedule is correlated with plant mass, we tested whether Experiments 1 and 2 have different mean mass metrics (since Experiments 1 and 2 were executed with different nutrient schedules, as in \ref{ssec:grow_procedure}). To balance the data, only 28 day-old plants were used.
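Both hypotheses reduce to one-way ANOVA over groups of plants. The following is a minimal sketch of such a test using \texttt{scipy}; the group labels and mass values are hypothetical placeholders.
\begin{verbatim}
import numpy as np
from scipy.stats import f_oneway

# Hypothetical estimated dry masses (g) grouped by harvest age (days).
mass_by_age = {
    15: [0.9, 1.1, 1.0, 1.2],
    21: [2.8, 3.1, 2.6, 3.0],
}
f_stat, p = f_oneway(*mass_by_age.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p:.5f}")
\end{verbatim}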
The results are presented in column 4 of Table \ref{tab:anova} and show that our method's metrics are less powerful than the GT masses, but still more powerful than the baseline methods. \begin{table} \centering \caption{Plant Harvest Age Distribution} \label{tab:plant_age} \vspace*{-1.2em}\# of Samples for Each Age\\[1.2em] \scriptsize \begin{tabular}{c|cccccccccc} Age (days) & 8 & 15 & 21 & 22 & 23 & 24 & 25 & 26 & 27 & 28 \\ \hline Experiment 1 & 6 & 11 & 11 & ~ & ~ & ~ & ~ & ~ & ~ & 3 \\ Experiment 2 & ~ & ~ & 3 & 3 & 2 & 3 & 3 & 3 & 3 & 3 \\ \end{tabular} \end{table} \begin{table} \caption{ANOVA Statistical Significance Tests}\label{tab:anova} \centering \begin{tabular}{l|cc|c} \multirow{3}{*}{Metric} & \multicolumn{2}{c|}{p-value for} & p-value for \\ & \multicolumn{2}{c|}{Age Discrimination} & Nutrient Schedule \\ & Exp. 1 & Exp. 2 & Discrimination \\ \hline Fresh Mass (GT) & 0.00156 & 0.00037 & 0.00284 \\ Dry Mass (GT) & 0.00137 & 0.00263 & 0.00288 \\ Surface Area (ours) & 0.00219 & 0.00352 & 0.03134 \\ Volume (ours) & 0.00204 & 0.00338 & 0.03766 \\ Baseline 1: Projected Area & 0.00086 & 0.02661 & 0.32745 \\ Baseline 2: Surface Area & 0.00287 & 0.31166 & 0.32066 \\ Baseline 2: Volume & 0.00265 & 0.26535 & 0.28106 \\ \end{tabular} \end{table} \subsection{Point Cloud Occlusion} We claim that a key advantage of our robot design is that the dexterity of the robot arm allows us to see more of the lettuce plant, thereby reducing occlusions (as compared to high-throughput, over-canopy methods). To assess the degree of occlusion, we used a number of assumptions to generate a metric for the ``occlusion proportion''. Intuitively, we observed that the smallest plants had negligible occlusion, so their estimates should be the most accurate. This is reflected by the slightly convex shapes of the data in Figure \ref{fig:regressions}, indicating that extrapolating the lettuce density from small plants would produce under-estimates for the masses of large plants. Assuming that underestimates in the mass are due primarily to occlusion, we approximated the occlusion proportion as $d\approx 1-\frac{\hat{m}}{m}$, where $m$ is the true mass, $\hat{m}:=\rho X_{est}$ is a na\"ively estimated mass (because it does not compensate for occlusions), and $X_{est}$ is the computed metric from the plant photos (surface area, volume, or projected area). We also approximated $d$ to be proportional to the depth of a cube with the same mass and density as the lettuce: $d\approx k \hat{m}^{1/3}$, where $k$ (the ``occlusion coefficient'') indicates a method's susceptibility to occlusion (smaller $k$ is better). Finally, we can derive a new equation for the occlusion-compensated mass estimate: \begin{align} \hat{m}_{occ} &= \frac{\hat{m}}{1-d} = \frac{\rho X_{est}}{1- k\left(\rho X_{est}\right)^{1/3}}. \label{eq:occlusion_compensated} \end{align} By running a new regression using this model instead of the linear one in \ref{ssec:regression}, we can compare $k$ for different methods. The occlusion-compensated regression results show that our method has the smallest occlusion coefficient, indicating that the dexterity of the robot arm was helpful in reducing occlusions and capturing a more complete reconstruction of the plant. Representative regressions are shown in Figure \ref{fig:regressions_occ} and the occlusion coefficients are shown in Table \ref{tab:occlusion}.
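The following is a minimal sketch of fitting the occlusion-compensated model of Eq. \eqref{eq:occlusion_compensated} using \texttt{scipy}; the data points and initial parameter guesses are hypothetical placeholders.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def occlusion_model(x_est, rho, k):
    # m_occ = rho*X / (1 - k*(rho*X)^(1/3)), cf. Eq. (occlusion_compensated).
    m_naive = rho * x_est
    return m_naive / (1.0 - k * np.cbrt(m_naive))

# Hypothetical (metric, ground-truth fresh mass) pairs.
x_est = np.array([0.010, 0.020, 0.040, 0.060, 0.080])  # e.g. surface area, m^2
m_gt = np.array([10.0, 22.0, 50.0, 85.0, 125.0])       # grams

(rho, k), _ = curve_fit(occlusion_model, x_est, m_gt, p0=(1000.0, 0.1))
print(f"density rho = {rho:.1f}, occlusion coefficient k = {k:.4f}")
\end{verbatim}
Comparing the fitted $k$ across estimation methods yields the comparison reported in Table \ref{tab:occlusion}.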
\begin{figure} \vspace*{-2.5em} \centering \includegraphics[width=0.6\linewidth]{figs/Regression_Wet_vs_SA_occlusion-eps-converted-to.pdf} \includegraphics[width=0.6\linewidth]{figs/Regression_Wet_vs_proj_occlusion_exp3only-eps-converted-to.pdf} \caption{Representative regression results using the occlusion-compensated model (Eq. \eqref{eq:occlusion_compensated}) to estimate fresh mass (ground truth) from the surface area computed from SfM (top) and the projected area estimated from a single top-down photo of each plant (bottom). These results suggest that our method is less susceptible to occlusion; lower $k$ signifies less occlusion (better).} \label{fig:regressions_occ} \end{figure} \begin{table} \centering \caption{Susceptibility of Different Methods to Occlusion}\label{tab:occlusion} \begin{tabular}{l|cc} \multirow{3}{*}{Estimation Method} & \multicolumn{2}{c}{Occlusion coefficient, $k$ (\SI{}{\per\g})} \\ & \multicolumn{2}{c}{(lower is better)}\\ & GT: Fresh Mass & GT: Dry Mass \\ \hline Surface Area & \bf 0.236 & \bf 0.593 \\ Volume & 0.261 & 0.659 \\ Baseline 1: Projected Area & 0.519 & 0.883 \\ Baseline 2: Surface Area & 0.333 & 0.680 \\ Baseline 2: Volume & 0.350 & 0.743 \\ \end{tabular} \end{table} \subsection{Point Cloud Visualizations and Qualitative Descriptions} Both the photos (Fig. \ref{fig:plants_mosaic}) and the resulting point clouds (Fig. \ref{fig:pointclouds}) show that our robot is capable of capturing lettuce plant photos from a variety of viewpoints for use in non-destructively estimating plant mass. Fig. \ref{fig:pointclouds} identifies gaps in the reconstruction due to occlusion that would result from an over-canopy approach (Baseline 2), thereby supporting the claim that the additional dexterity afforded by our hybrid CDPR with robot arm is helpful in reducing occlusions. A comparison between photos taken by our robot design and photos taken by Baseline 3 (the arm alone), depicted in Fig. \ref{fig:plants_mosaic}, demonstrates the improved consistency with which our robot can position the camera for photos. With Baseline 3, camera pose relative to the plant center is highly variable due to human placement error, and the human labor/oversight required is significant (6-9 plants / hour vs no supervision required for the CDPR). 3D reconstructions for Baseline 3 are not shown: aside from the occasional missing points due to improperly framed photos (human error), the reconstructions were similar to those obtained using our robot design. \begin{figure} \centering \includegraphics[width=0.49\linewidth,trim={0 450px 0 0}, clip]{figs/img_mod-8.jpg} \includegraphics[width=0.49\linewidth,angle=180,origin=c]{figs/arm_only_compressed.jpg} \caption{Left: Example photos from our plant dataset depict the consistency with which our robot is able to photograph different plants from the same relative camera angle.
Right: In contrast, photos taken using only the robot arm (Baseline 3) are inconsistent and labor-intensive.} \label{fig:plants_mosaic} \end{figure} \begin{figure} \centering \includegraphics[width=0.49\linewidth, trim={300 300 300 300}, clip]{figs/pcd_plant018.jpg} \includegraphics[width=0.49\linewidth, trim={320 320 313 320}, clip]{figs/pcd_reduced_plant018_annotated.jpg} \caption{Dense reconstructions of an example lettuce plant using all the photos (left) vs a subset of photos according to Baseline 2 (right) collected by our robot show that the additional camera viewpoints enabled by our hybrid CDPR+arm design are helpful in reducing occlusions (circled in yellow) and capturing a more complete reconstruction of the plant.} \label{fig:pointclouds} \end{figure} \section{INTRODUCTION} \input{1_intro} \section{APPROACH} \input{2_approach} \section{EXPERIMENTAL METHODS} \input{3_experiment_methods} \section{EXPERIMENTAL RESULTS} \input{4_experiment_results} \section{CONCLUSIONS and FUTURE WORK} \input{6_conclusions} \addtolength{\textheight}{-2cm} \section*{ACKNOWLEDGMENTS} We thank Andrew Sharkey, Thomas Igou, and Teagan Groh for their growing assistance and horticultural expertise. \IEEEtriggeratref{36} \bibliographystyle{IEEEtran}
{ "timestamp": "2022-09-20T02:22:42", "yymm": "2209", "arxiv_id": "2209.08690", "language": "en", "url": "https://arxiv.org/abs/2209.08690" }
\section{Introduction} We consider the two-dimensional fractional Navier-Stokes equations \begin{equation}\label{eq:NS} \tag{NS$_\beta$} \left\lbrace \begin{aligned} \partial_t u + (u \cdot \nabla) u + \nabla p+\Lambda^\beta u &= f \\ \div u &= 0 \end{aligned} \right. \quad \text{ in } \mathbb{R}^2 \times \mathbb{R}_+ \end{equation} where $\beta \in (0,2]$ and $\Lambda^\beta = (-\Delta)^{\beta/2}$ is the fractional Laplacian. We consider the Cauchy problem for~\eqref{eq:NS} with divergence-free initial velocity \begin{equation}\label{e:Cauchy} u (\cdot, 0) = u_0 \in L^2(\mathbb{R}^2). \end{equation} It is not difficult to adapt the seminal work~\cite{leray} of Leray to demonstrate the existence of a global-in-time \emph{Leray-Hopf solution} for each divergence-free initial velocity $u_0 \in L^2(\mathbb{R}^2)$ and body force $f \in L^1_t L^2_x(\mathbb{R}^2 \times \mathbb{R}_+)$. These are distributional solutions $u \in L^\infty_t L^2_x \cap L^2_t \dot H^{\frac{\beta}{2}}_x(\mathbb{R}^2 \times \mathbb{R}_+)$ to~\eqref{eq:NS} satisfying $\| u(\cdot,t) - u_0 \|_{L^2(\mathbb{R}^2)} \to 0$ as $t \to 0^+$ and the \emph{energy inequality} for all $t >0$: \begin{equation}\label{e:energy_ineq} \frac{1}{2}\int_{\mathbb{R}^2} |u(x,t)|^2 \, dx + \int_0^t \int_{\mathbb{R}^2} |\Lambda^{\frac{\beta}{2}} u|^2 \, dx\, ds \leq \frac{1}{2}\int_{\mathbb{R}^2} | u_0(x) |^2 \, dx + \int_0^t \int_{\mathbb{R}^2} f \cdot u \, dx \, ds \, . \end{equation} It is well known that Leray's solutions are unique in the classical setting $\beta = 2$. In this paper, we demonstrate non-uniqueness when $\beta < 2$: \begin{theorem}[Non-uniqueness of Leray-Hopf solutions] \label{thm:main} For all $\beta \in (0,2)$, there exist a force $f \in L^1_t L^2_x(\mathbb{R}^2 \times \mathbb{R}_+)$ and two distinct Leray-Hopf solutions, $u$ and $\bar{u}$, with zero initial velocity $u_0 \equiv 0$ and force $f$: \begin{equation} u \neq \bar{u} \text{ on } \mathbb{R}^2 \times (0,T) \text{ for all } T > 0 \, . \end{equation} \end{theorem} The solutions are constructed explicitly and satisfy many desirable properties, among which are smoothness on $\mathbb{R}^2 \times (0,T)$ for $T \ll 1$ and energy equality in~\eqref{e:energy_ineq}. In a recent work~\cite{albritton2021non} with Bru{\'e}, we constructed non-unique Leray solutions of the three-dimensional Navier-Stokes equations. While the method therein is also applicable to~\eqref{eq:NS}, at least when $\beta > 1$, our goal is to highlight a different but equally natural way of incorporating viscosity into the scheme. The main idea of the proof is to perturb the examples of non-uniqueness discovered by Vishik in~\cite{Vishik1,Vishik2} to the viscous setting. To begin, we recall the basic dimensional analysis for the Euler equations \begin{equation} \label{eq:evelocity} \partial_t u + u \cdot \nabla u + \nabla p = f \, , \quad \div u = 0 \, , \end{equation} which can be equivalently rewritten in vorticity formulation as \begin{equation} \label{eq:evorticity} \partial_t \omega + u \cdot \nabla \omega = g \, , \quad u = \Delta^{-1} \nabla^\perp \omega \, , \end{equation} where $\omega =\curl u$ is the vorticity and $g = \curl f$. They have a two-parameter scaling symmetry \begin{equation} u_{\mu,\eta} = \frac{\eta}{\mu} u(\mu x, \eta t) \,, \qquad \omega_{\mu,\eta} = \eta \, \omega(\mu x, \eta t) \end{equation} corresponding to the physical dimensions $[x] = L$, $[t] = T$, $[u] = L/T$, $[\omega] = 1/T$, $[f] = L/T^2$, and $[g] = 1/T^2$. We suppose a relationship $T = L^\alpha$ between the dimensions.
This yields a one-parameter scaling symmetry: \begin{equation} u_\lambda = \lambda^{\alpha - 1} u(\lambda x, \lambda^\alpha t) \, , \qquad \omega_\lambda = \lambda^\alpha \omega(\lambda x, \lambda^\alpha t) \, . \end{equation} One may seek special solutions $\bar{\omega}$ which are invariant under the above symmetry and necessarily have the form \begin{equation} \label{eq:baromegadef} \bar{\omega}(x,t) = \frac{1}{t} \bar{\Omega} (\xi) \, , \end{equation} where $\xi = x/t^{\frac{1}{\alpha}}$. In fact, we may analyze an arbitrary Euler solution $\omega$ in \emph{similarity variables} $(\xi,\tau)$, where $\tau = \log t$ and the \emph{similarity profile} $\Omega$ is defined by \begin{equation} \label{eq:omegadef} \omega(x,t) = \frac{1}{t} \Omega (\xi,\tau) \,, \qquad g(x,t) = \frac{1}{t^2} G (\xi,\tau) \,. \end{equation} The Euler equations~\eqref{eq:evorticity} in similarity variables are \begin{equation} \label{eq:similarityeuler} \partial_\tau \Omega + \Big(-1- \frac{\xi}{\alpha} \cdot \nabla\Big) \Omega + U \cdot \nabla \Omega = G \, , \quad U = \nabla^\perp \Delta^{-1} \Omega \, . \end{equation} Self-similar solutions $\bar{\omega}$ correspond precisely to steady states $\bar{\Omega}$ in the new variables. If $\bar{\Omega}$ is an unstable steady state of~\eqref{eq:similarityeuler}, then it is natural to seek its \emph{unstable manifold}. Suppose that $\Omega^{\rm lin}$ is an exponentially growing solution of the linearization of~\eqref{eq:similarityeuler} around $\bar{\Omega}$. One seeks a trajectory \begin{equation} \label{eq:basicansatz} \Omega = \bar{\Omega} + \Omega^{\rm lin} + \Omega^{\rm per} \, , \end{equation} where $\Omega^{\rm per} = o_{\tau \to -\infty}(|\Omega^{\rm lin}|)$ ensures that $\Omega$ is not identically equal to $\bar{\Omega}$. On the other hand, $\Omega \overset{\tau \to -\infty}{\longrightarrow} \bar{\Omega}$. This is the non-uniqueness scenario in~\cite{Vishik1}, which demonstrated the sharpness of the Yudovich class~\cite{Yudovich1963} in the `forced' category. How does the above non-uniqueness scenario look with viscosity? The fractional Navier-Stokes equations~\eqref{eq:NS} in self-similarity variables are \begin{equation} \label{eq:NSss} \partial_\tau \Omega + (-1-\xi \cdot \nabla_\xi / \alpha) \Omega + U \cdot \nabla \Omega + e^{\tau \gamma} \Lambda^\beta \Omega = G \, , \quad U = \nabla^\perp \Delta^{-1} \Omega \, , \end{equation} where \begin{equation} \gamma = 1-\frac{\beta}{\alpha} \, . \end{equation} The traditional choice of exponent is $\alpha = \beta$, so that the PDE~\eqref{eq:NSss} is $\tau$-autonomous. (The choice $\alpha = \beta = 2$ in~\cite{jiasverakselfsim,albritton2021non} is the classical Navier-Stokes scaling.) This choice is determined by the dimensions of the viscosity: $[\nu] = L^\beta/T$. With this choice, it is possible to follow the strategy in~\cite{albritton2021non} to prove Theorem~\ref{thm:main} when $\beta \in (1,2)$. The same strategy would be valid in dimension three with $\beta \in (1,5/2)$, that is, up to the Lions exponent. The most delicate part of the analysis in~\cite{albritton2021non} is an eigenvalue perturbation argument which shows that not only the linearized $2$D Euler equations, but also the linearized $3$D Navier-Stokes equations, admit an unstable eigenvalue around a well chosen background solution. In this paper, we instead consider $\alpha > \beta$, so that $\gamma > 0$ and the dissipation term $e^{\tau \gamma} \Lambda^\beta$ becomes perturbative (formally) as $\tau \to -\infty$.
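For the reader's convenience, we record the routine computation behind~\eqref{eq:NSss}. With $\omega(x,t) = t^{-1} \Omega(\xi,\tau)$, $\xi = x/t^{1/\alpha}$, and $\tau = \log t$ as above, the chain rule gives \begin{equation} \partial_t \omega = \frac{1}{t^2} \Big( \partial_\tau \Omega - \Omega - \frac{\xi}{\alpha} \cdot \nabla_\xi \Omega \Big) \, , \qquad u \cdot \nabla_x \omega = \frac{1}{t^2} \, U \cdot \nabla_\xi \Omega \, , \qquad \Lambda^\beta_x \omega = t^{-1-\frac{\beta}{\alpha}} \, \Lambda^\beta_\xi \Omega \, , \end{equation} so multiplying the vorticity equation by $t^2$ and using $g = t^{-2} G$ yields~\eqref{eq:NSss}; the viscous term carries the factor $t^2 \cdot t^{-1-\beta/\alpha} = t^{1-\beta/\alpha} = e^{\tau \gamma}$, which indeed vanishes as $\tau \to -\infty$ precisely when $\alpha > \beta$.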
Then we consider the ansatz~\eqref{eq:basicansatz} exactly as in the inviscid setting, in particular, choosing $\Omega^{\rm lin}$ as an unstable mode for $2$D Euler rather than Navier-Stokes, and study $\Omega^{\rm per}$. We take the existence of an unstable inviscid self-similar solution for granted.\footnote{Indeed, the majority of the present work was actually written before we realized that the spectral problem in~\cite{albritton2021non} was manageable. This work is independent from~\cite{albritton2021non} except for the observation that Vishik's vortex can be truncated.} In this way, we turn our attention away from the spectral problem at the heart of~\cite{albritton2021non} and toward the nonlinear arguments, which become quasilinear in nature and inherently more difficult. The nonlinear scheme will be outlined in Section~\ref{sec:strategy}. Our strategy can be compared to analogous strategies in the singularity formation literature, which seek \emph{stable manifolds} as $\tau \to +\infty$ of (potentially unstable) steady states in backward self-similarity variables. In contrast, we seek \emph{unstable manifolds} as $\tau \to -\infty$ of unstable steady states in forward self-similarity variables. Notably, the three-dimensional compressible Navier-Stokes `implosion' singularity in~\cite{merle2019implosion} is a perturbation of a compressible Euler singularity; in similarity variables, the viscous term appears with a factor $e^{-\tau \gamma}$. See~\cite{oh2021gradient,chickering2021asymptotically} for perturbative dissipation in the fractional Burgers equation. The above comparison highlights the following point: in the absence of forcing, the desired self-similar solutions and corresponding exponents are \emph{chosen for you}. When the scaling of the self-similar solution is supercritical with respect to the scaling of the viscous equation, the formal calculations give hope to perturb inviscid solutions to viscous solutions. For this reason, we choose to work with general (supercritical) exponents below. The present work provides another general method to incorporate a supercritical viscosity into non-uniqueness scenarios; we hope that this will inform future studies on non-uniqueness scenarios and continuation past singularities of various PDEs in fluid mechanics. \emph{Previous work}. The important works~\cite{jiasverakselfsim,jiasverakillposed,guillodsverak} proposed a route, accompanied by significant numerical evidence, to the (yet unproven) non-uniqueness of Leray solutions \emph{without force}. The unstable manifold in similarity variables is already present in~\cite[Theorem 4.1]{jiasverakillposed}. A related program for the two-dimensional Euler equations was initiated in~\cite{BressanAposteriori,BressanSelfSimilar}. Non-uniqueness of weak Navier-Stokes solutions was demonstrated via convex integration in~\cite{BuckmasterVicolAnnals}. With hypoviscosity $\beta < 2/3$, convex integration was able to achieve the Leray class~\cite{mariahypodissipativeonefifth,DeRosa19}. See~\cite{albritton2021non} for a more comprehensive review. Finally, we recently demonstrated non-uniqueness of Leray solutions to the forced Navier-Stokes equations in bounded domains via a gluing method~\cite{albritton2022gluing}. \section{Preliminaries} \label{sec:preliminaries} Let $ 0 < \beta < \alpha < 2$.
Given a compactly supported, smooth velocity field $\bar{U} \in C^\infty_0(B_2;\mathbb{R}^2)$, $\bar{U}(x) = V(r) e_\theta$, with corresponding vorticity $\bar{\Omega} = \curl \bar{U}$, we define the linearized operator around $\bar{\Omega}$, \begin{equation} - \L_{\ss} \Omega = \left( - 1 - \frac{\xi}{\alpha} \cdot \nabla \right) \Omega + \bar{U} \cdot \nabla \Omega + U \cdot \nabla \bar{\Omega} \, , \end{equation} where $U = \nabla^\perp \Delta^{-1} \Omega$. The above operator is considered as an unbounded operator $\L_\ss \colon D(\L_\ss) \subset L^2_m \to L^2_m$, where $L^2_m \subset L^2$ consists of $m$-fold rotationally symmetric $L^2$ functions and $D(\L_\ss) := \{ \Omega \in L^2_m : \L_\ss \Omega \in L^2 \}$. The following proposition was essentially demonstrated in~\cite{Vishik1,Vishik2,OurLectureNotes,albritton2021non}. \begin{proposition}[Unstable vortex]\label{pro:spectral} Let $a_0 \geq 0$. There exists a compactly supported, smooth velocity field $\bar{U} \in C^\infty_0(B_2;\mathbb{R}^2)$, $\bar{U}(x) = V(r) e_\theta$, with corresponding vorticity $\bar{\Omega} = \curl \bar{U}$ and satisfying the following properties. 1. \textbf{Linear instability}. There exists an integer $m\geq 2$ such that $\L_{\ss}$ has an unstable eigenvalue. More specifically, there exist $\lambda \in \mathbb{C}$ with $\Re \lambda =: a \geq a_0$ and $\eta\in L^2_m\setminus \{0\}$ satisfying $\L_{\rm ss} \eta = \lambda \eta$. 2. \textbf{Semigroup estimate}. The operator $\L_{\ss}$ generates a continuous semigroup in $L^2_m$, and for all $\delta > 0$, we have the semigroup estimate \begin{equation}\label{eqn:semigr} \norm{e^{\tau\L_\ss}}_{L^2_m \to L^2} \lesssim_\delta e^{(a+\delta) \tau} \text{ for all } \tau \geq 0 \, . \end{equation} 3. \textbf{Eigenfunction estimates}. Any eigenfunction $\eta$ with unstable eigenvalue $\lambda$ is compactly supported in $B_1$. For any $k \in \mathbb{N}_0$ with $\Re \lambda > k/\alpha - 1$, the above eigenfunction belongs to $H^k$, and its velocity field ${\rm BS}[\eta]$ belongs to $H^{k+1}$. \end{proposition} \begin{proof} Parts 1 and 2 above are contained in~\cite[Theorem 2.4.2]{OurLectureNotes}, except for the compact support property of $\bar{U}$, which was pointed out in~\cite[Proposition 2.2]{albritton2021non}. Therefore, we focus on \textbf{3. Eigenfunction estimates}. We present an alternative to the approach in~\cite[Lemma 5.0.3]{OurLectureNotes}, where it was shown that $\eta \in W^{2,\infty}$. Suppose that $\eta \in D(\L_{\rm ss})$ is an eigenfunction with unstable eigenvalue $\lambda = \lambda' - 1$: \begin{equation} \left( \lambda' - \frac{\xi}{\alpha} \cdot \nabla + \bar{U} \cdot \nabla \right) \eta + {\rm BS}[\eta] \cdot \nabla \bar{\Omega} = 0 \, . \end{equation} First, we demonstrate that $\eta$ is compactly supported. When $|\xi| \geq 2$ is beyond the support of $\bar{U} \in C^\infty_0(B_2)$, we have \begin{equation} \frac{r}{\alpha} \partial_r \eta = \lambda' \eta \, , \quad r \geq 2 \, , \end{equation} whose only decaying solutions are identically zero in $\mathbb{R}^2 \setminus B_2$. Second, we prove that $\eta \in H^k$ provided that $\Re \lambda' > k/\alpha$. We consider ${\rm BS}[\eta] \cdot \nabla \bar{\Omega}$ to be a `forcing term' in the resolvent problem, and we rewrite the above equation via the Laplace transform characterization of the resolvent: $R(\lambda',\L_\ss) = \int_0^{+\infty} e^{-\lambda' t} e^{t \L_\ss} \, dt$. Let $X_t$ be the flow map associated to the autonomous vector field $V(\xi) := \xi/\alpha - \bar{U}(\xi)$.
Then \begin{equation} \eta = - \int_0^{+\infty} e^{-\lambda' t} ({\rm BS}[\eta] \cdot \nabla \bar{\Omega}) \circ X_t \, dt \, . \end{equation} To estimate $\eta$, we recall basic properties of $X_t$, which is explicitly \begin{equation} X_t(r_0 e^{i \theta_0}) = r(t,r_0) e^{i\theta(t,\theta_0,r_0)} \, , \end{equation} \begin{equation} r(t,r_0) = e^{t/\alpha} r_0 \, , \quad \theta(t,\theta_0,r_0) = \theta_0 - \int_0^t \bar{U}_\theta(r(s,r_0)) \, ds \, . \end{equation} By the general formulas \begin{equation} (\nabla X_t e_r)(r_0 e^{i \theta_0}) = \partial_{r_0} r \; (e_r \circ X_t) + r \partial_{r_0} \theta \; (e_\theta \circ X_t) \, , \end{equation} \begin{equation} (\nabla X_t e_\theta)(r_0 e^{i \theta_0}) = \frac{1}{r_0} \partial_{\theta_0} r\; (e_r \circ X_t) + \frac{r}{r_0} \partial_{\theta_0} \theta \; (e_\theta \circ X_t) \, , \end{equation} it is not difficult to demonstrate that $|\nabla X_t| \lesssim e^{t/\alpha}$. A different way of saying this is that solutions of the vector-valued ODE $\dot{\vec{y}} = (\nabla V)(X_t) \vec{y}$ satisfy $|\vec{y}(t)| \lesssim e^{t/\alpha} |\vec{y}(0)|$. By taking spatial derivatives of the ODE $\dot X_t = V(X_t)$, this is enough to bootstrap the estimate on $\nabla X_t$ to $|\nabla^k X_t| \lesssim_k e^{k t/\alpha}$ for all $k \geq 1$. In particular, we have \begin{equation} \| \eta \|_{H^k} \lesssim \int_0^{+\infty} e^{(k/\alpha - \Re \lambda')t} \, dt \; \| {\rm BS}[\eta] \cdot \nabla \bar{\Omega} \|_{H^k} \lesssim \| \eta \|_{H^{k-1}} \, , \end{equation} where the implied constant depends on various quantities. Hence, one can bootstrap $\eta \in L^2$ to $\eta \in H^k$ provided that $\Re \lambda > k/\alpha - 1$. Finally, since $\eta$ is compactly supported and mean zero, we have $|{\rm BS}[\eta]| \lesssim |\xi|^{-2}$ for $|\xi| \geq 2$. In particular, ${\rm BS}[\eta] \in L^2$. \end{proof} \begin{remark} It is natural that the unstable eigenfunctions should have smoothness related to $\Re \lambda$, since solutions to the ODE $\lambda f = r \partial_r f$, $\Re \lambda > 0$, are $f = \mathrm{const} \times r^{\lambda}$. In Proposition~\ref{pro:spectral}, once it is known that the eigenfunctions are $C^\beta$, it is possible to bootstrap them in the $C^{k,\beta}$ scale, but we do not require this here. \end{remark} Recall the similarity variables $\xi = x/t^{1/\alpha}$ and $\tau = \log t$ defined above~\eqref{eq:omegadef}. Given a similarity profile $\bar{\Omega}$ as in Proposition~\ref{pro:spectral}, we define the self-similar solution $\bar{\omega}$ according to~\eqref{eq:baromegadef}. Its velocity field is $\bar{u}(x,t) = t^{-1+1/\alpha} \bar{U}(\xi)$. More generally, given a vorticity $\omega$, we extract its similarity profile $\Omega$ according to~\eqref{eq:omegadef}. Its velocity field is $u(x,t) = t^{-1+1/\alpha} U(\xi,\tau)$. We make the convention that, given a lowercase variable representing vorticity or velocity, its similarity profile is represented by the corresponding uppercase variable, and vice versa. We now state basic properties of the force, which is induced entirely by the time derivative and fractional Laplacian of $\bar{\omega}$. \begin{lemma}[Finite energy forcing] \label{lem:forcinglemma} Define $h := \partial_t \bar{\omega} + \Lambda^\beta \bar{\omega}$ and $f := \Delta^{-1} \nabla^\perp h = \partial_t \bar{u} + \Lambda^\beta \bar{u}$. Then the background velocity $\bar{u}$ solves the fractional Navier-Stokes equations with velocity forcing $f$, which belongs to $L^1_t L^2_x(\mathbb{R}^2 \times (0,T))$ for all $T > 0$.
Additionally, $f \in L^\infty_t H^k_x(\mathbb{R}^2 \times (\varepsilon,T))$ for all $k \geq 0$ and $0 < \varepsilon < T < +\infty$. \end{lemma} This is a simple consequence of scaling. \subsection{Strategy} \label{sec:strategy} We revisit the ansatz~\eqref{eq:basicansatz}: \begin{equation} \Omega = \bar{\Omega} + \Omega^{\rm lin} + \Omega^{\rm per} \end{equation} where \begin{equation}\label{e:Omega_lin} \Omega^{\rm lin} (\xi, \tau) := \Re (e^{\lambda \tau} \eta) \, , \end{equation} $\lambda$ is a maximally unstable eigenvalue of $\L_\ss$, and $\eta$ is an associated non-trivial eigenfunction. Then $\Omega^{\rm lin}$ is a solution to the linearized evolution equation \begin{equation}\label{e:evolution_of_Omega_lin} \partial_\tau \Omega^{\rm lin} - \L_{\rm ss} \Omega^{\rm lin} = 0 \, , \end{equation} and our goal is to solve for the higher-order correction $\Omega^{\rm per}$. The PDE satisfied by $\Omega^{\rm per}$ is \begin{equation} \label{eq:pdeforomegaper} \begin{aligned} &\partial_\tau \Omega^{\rm per} - \L_\ss \Omega^{\rm per} + (U^{\rm lin} + U^{\rm per}) \cdot \nabla \Omega^{\rm per} + U^{\rm per} \cdot \nabla \Omega^{\rm lin} + e^{\tau \gamma} \Lambda^{\beta} \Omega^{\rm per} \\ &\quad = - U^{\rm lin} \cdot \nabla \Omega^{\rm lin} - e^{\tau \gamma} \Lambda^{\beta} \Omega^{\rm lin} \, , \end{aligned} \end{equation} where the right-hand side is considered a source term. The fractional Laplacian $e^{\tau \gamma} \Lambda^\beta$ will be treated perturbatively. Hence, our approach is essentially quasilinear in nature, and it will be necessary to incorporate the transport $(U^{\rm lin} + U^{\rm per}) \cdot \nabla$ into the operator. Define \begin{equation} - \L[W] = - \L_{\rm ss} + (U^{\rm lin} + W) \cdot \nabla + e^{\tau \gamma} \Lambda^\beta \, , \end{equation} the `main operator' in the PDE, and the associated (formal) solution operator $S[W] = S[W](\tau,s)$. We seek solutions satisfying Duhamel's formula: \begin{equation}\label{eqn:omega-per} \Omega^{\rm per}(\cdot,\tau) = - \int_{-\infty}^\tau S[U^{\rm per}](\tau,s) [( U^{\rm lin}+U^{\rm per}) \cdot \nabla \Omega^{\rm lin} + e^{s \gamma} \Lambda^{\beta} \Omega^{\rm lin}](\cdot,s) \, ds \, . \end{equation} It will be necessary to develop the (strong) solution theory for $\Omega^{\rm per}$ and the solution operator $S$ in high regularity spaces, whose corresponding velocity fields are Lipschitz continuous, e.g., $\Omega \in H^s$, $s > d/2$. It will be more convenient to estimate $S$ in a function space $X \subset W^{1,4}$ which will be defined in Section~\ref{sec:linear}. There are two main difficulties, which are already present in the Euler equations and have been encountered in previous work, e.g.,~\cite{bardosguostrauss}; the difficulties will only be exacerbated by the dissipation $e^{\tau \gamma} \Lambda^\beta$. First, there is the apparent derivative loss in adding the transport operator $(U^{\rm lin} + U^{\rm per}) \cdot \nabla$ onto $\L_\ss$. On the one hand, it is necessary to use the spectral information on $\L_\ss$ through the semigroup estimate in $L^2$. On the other hand, it is too na{\"i}ve to write Duhamel's formula, containing expressions like $\int_0^\tau e^{(\tau - s) \L_\ss} (U^{\rm lin} \cdot \nabla \Omega^{\rm per}) \, ds$, and hope to close an estimate at the $L^2$ level. Moreover, in our setting, adding the dissipation operator $e^{\tau \gamma} \Lambda^\beta$ further exhibits a loss of $\beta$ derivatives. 
The key observation for handling the loss is that the compact term $U^{\rm per} \cdot \nabla \bar{\Omega}$, responsible for the creation of the unstable eigenvalues, gains regularity. Hence, the strategy will be to lose derivatives while applying the semigroup estimate in Duhamel's formula and regain them in a bootstrapping procedure based on energy estimates.\footnote{A different approach is to work with the Lagrangian formulation, in which there is no derivative loss, see the construction of unstable manifolds in~\cite{LinZengCPAM2013,LinZengCorrigendum2014}.} The second difficulty concerns the operator $\bar{U} \cdot \nabla$ in high regularity spaces. One way to see the difficulty is to pass a derivative into the equation: the PDE for $\partial_i \Omega^{\rm per}$ contains the term $\partial_i \bar{U} \cdot \nabla \Omega^{\rm per}$, which in principle can change the growth rate. In~\cite{bardosguostrauss}, it was observed that it is enough to assume that $\Re \lambda$ is greater than the maximal fluid Lyapunov exponent. Our approach will be Eulerian rather than Lagrangian. It is an energy method specific to shear flows and vortices (in short, the idea is to apply $\partial_\theta$ first), and crucially, it is well behaved even with the term $e^{\tau \gamma} \Lambda^\beta$. In fact, when $\beta > 1$, the term $e^{\tau \gamma} \Lambda^\beta$ will help us in the bootstrapping procedure: once we have propagated one derivative, the other $\beta-1$ derivatives needed to close the argument are regained by smoothing. Heuristically, a gain of $\beta$ derivatives costs a factor $e^{\tau \gamma}$, which is why the dissipation term cannot be used to control everything. This is seen in the tracking of the dissipation norm $D$ below. \section{Linear theory}\label{sec:linear} In the following, we will assume that $a_0 = 8/\alpha$ and $\delta\leq \delta_0 := \min\{ \gamma/4, 1/8, a_0/2\}$. We also fix $\bar \Omega$, $\lambda=a+ib$, $\eta$ and $m$ as in Proposition~\ref{pro:spectral}. We denote by $C$ constants whose values depend only on \begin{equation} {\rm data} = (\alpha,\beta, \bar \Omega, a, b, \eta, m, \delta) \, . \end{equation} The dependence on $\bar \Omega$ and $\eta$ can be made more precise through their regularity and the size of their supports, but this is not needed for the rest of our arguments, and we do not pursue it here. The value of $C$ may change from line to line. When we need to recall a specific constant of this type, we denote it by $C_1, C_2$, etc. This section is devoted to estimates on the solution operator $S[W]$ of the PDE \begin{equation}\label{eqn:lin-pde} \partial_\tau \Omega + (-1-\xi \cdot \nabla/\alpha) \Omega + \bar{U} \cdot \nabla \Omega + U \cdot \nabla \bar{\Omega} + (U^{\rm lin} + W) \cdot \nabla \Omega + e^{\tau \gamma} \Lambda^\beta \Omega = 0 \end{equation} with initial data $\Omega(\cdot,\tau_0) = \Omega_0$ and divergence-free $W$ satisfying the property~\eqref{eq:Wrequirement} below. We assume that $\Omega_0$ and $W$ are $m$-fold rotationally symmetric, so that the semigroup estimate in Proposition~\ref{pro:spectral} can be applied. The remaining terms in the PDE~\eqref{eq:pdeforomegaper} will be incorporated in Section~\ref{sec:nonlin} by Duhamel's formula. Let us fix a parameter $Q \in [1,3/2)$.
In view of the precise structure of the terms in our equation, it turns out that a natural space to perform such estimates is given by the norm \begin{equation} \label{eq:Xnorm} \| \Omega \|_X = \| \Omega \|_{L^Q \cap L^2} +\| \partial_\theta \Omega \|_{L^Q \cap L^4} + \| \nabla \Omega \|_{L^2 \cap L^4} \, , \end{equation} which is equivalent to $\| \Omega \|_{L^Q} + \| \partial_\theta \Omega \|_{L^Q \cap L^4} + \| \nabla \Omega \|_{L^2 \cap L^4}$, but the description in~\eqref{eq:Xnorm} is perhaps more indicative of the estimates we perform. We also keep track of the quantity \begin{equation} \| \Omega \|_{D(\tau,\tau_0)}^2 = \int_{\tau_0}^{\tau} e^{s \gamma} \| \Lambda^{1+\frac{\beta}{2}} \Omega \|_{L^2}^2 \, ds \end{equation} related to the dissipation. The space $X$ has some useful properties listed below. Let us consider $W \in L^2$ such that $\curl W \in X$. Since $K_2 = 1_{B_1} K_2 + 1_{B_1^c} K_2 \in L^{p} + L^{q}$ for every $p<2$, $q>2$, we have by Young's convolution inequality that \begin{equation} \label{eqn:U-infty} \| W \|_{L^\infty} = \| K_2 \ast \curl W \|_{L^\infty} \leq C ( \| \curl W \|_{L^Q}+ \| \curl W \|_{L^4}) \leq C \| \curl W \|_{X} \, , \end{equation} and similarly, \begin{equation} \label{eqn:p-theta-W-infty} \| \partial_\theta W \|_{L^\infty} = \| K_2 \ast \partial_\theta \curl W \|_{L^\infty} \leq C ( \| \partial_\theta \curl W \|_{L^Q} + \| \partial_\theta \curl W \|_{L^4}) \leq C \| \curl W \|_X \, . \end{equation} The main result of this section is \begin{proposition}[Estimates for $S$] \label{pro:transportestimate} Let $W \in L^\infty((-\infty, 0]; L^2(\mathbb{R}^2))$ be such that \begin{equation} \label{eq:Wrequirement} \sup_{\tau \in (-\infty,0)} e^{-a \tau} \norm{\curl W}_X \leq 1. \end{equation} Then for every $\delta \leq \delta_0$, there exist constants $T_{\rm max}:= T_{\rm max}({\rm data}) \leq 0$ and $\bar{C} := \bar{C}({\rm data}) > 1$ such that for every $\tau_0 \in (-\infty,T_{\rm max}]$ and $\tau \in [\tau_0,T_{\rm max}]$, we have \begin{equation} \label{eq:transportestimate} \| S[W](\tau,\tau_0) \Omega_0 \|_X + \| S[W](\cdot,\tau_0) \Omega_0 \|_{D(\tau_0,\tau)} \leq \bar{C} e^{(a + \delta) (\tau-\tau_0)} \| \Omega_0 \|_X. \end{equation} \end{proposition} \begin{proof}[Proof of Proposition~\ref{pro:transportestimate}] The constants $T_{\rm max} \leq 0$ and $\bar{C} > 1$ will be chosen later. For $\tau \in [\tau_0,T_{\rm max}]$, we denote by $\Omega(\cdot,\tau) = S[W](\tau,\tau_0) \Omega_0$ the solution of \eqref{eqn:lin-pde}. The proof of Proposition \ref{pro:transportestimate} will be established through the following claim. \emph{Let $\tau_1 \in [\tau_0, T_{\rm max}]$ satisfy the following bootstrap assumption: \begin{equation}\label{eqn:bootstrap} \mbox{For all $\tau \in [\tau_0,\tau_1]$, the estimate~\eqref{eq:transportestimate} holds.} \end{equation} Then for all $\tau \in [\tau_0,\tau_1]$ we have \begin{equation} \label{eqn:finalbound} \| \Omega(\cdot,\tau) \|_{X} + \| \Omega \|_{D(\tau_0,\tau)} \leq C_{1}\bigl(1 + \bar C^{1/2}e^{\zeta T_{\rm max}}\bigr) \bar C^{1/2} e^{(a+\delta)(\tau-\tau_0)} \| \Omega_0 \|_X \, , \end{equation} where $C_{1}:= C_1({\rm data})>1$ and $\zeta := \zeta({\rm data}) > 0$.} Once this claim is established, we easily conclude the proof of Proposition~\ref{pro:transportestimate} by the following argument. We fix $\bar C = (4 C_{1})^{2}$ and $T_{\rm max}$ in terms of ${\rm data}$, but otherwise independent of $\tau_0$, such that $\bar C^{1/2} e^{\zeta T_{\rm max}} \leq 1$.
Next, we consider $\bar \tau_1$ to be the largest value of $\tau_1 \in (\tau_0, T_{\rm max})$ such that~\eqref{eq:transportestimate} holds. If $\bar \tau_1 =T_{\rm max}$, our main claim is proved. The case $\bar \tau_1 <T_{\rm max}$ is ruled out by the continuity of $\tau \mapsto \| \Omega(\cdot,\tau) \|_{X}$ and by \eqref{eqn:finalbound}, which implies with our choice of $T_{\rm max}$ that the bound \eqref{eq:transportestimate} is not saturated at $\bar \tau_1$, namely, \begin{equation} \| \Omega(\cdot,\bar \tau_1) \|_{X} + \| \Omega \|_{D(\tau_0,\bar \tau_1)} \leq \frac{\bar C}{2} e^{(a+\delta)(\bar \tau_1-\tau_0)} \| \Omega_0 \|_X \, . \end{equation} Hence, the proof of Proposition~\ref{pro:transportestimate} is complete provided that \eqref{eqn:finalbound} is established. \bigskip {\bf Step 1. [Baseline estimate: Improved $L^2$ bound]} {\it Suppose that the bootstrap assumption \eqref{eqn:bootstrap} holds. Then, for all $\tau \in [\tau_0,\tau_1]$, \begin{equation}\label{eqn:baseline-est} \| \Omega(\cdot,\tau) \|_{L^2} \leq C (1+ \bar C e^{2\zeta T_{\rm max}}) e^{(a+\delta)(\tau-\tau_0)} \| \Omega_0 \|_X. \end{equation} } Indeed, by Duhamel's formula: \begin{equation}\label{eqn:duham} \Omega (\cdot,\tau) = e^{(\tau-\tau_0)\L_{\rm ss}} \Omega_0 - \int_{\tau_0}^\tau e^{(\tau-s)\L_{\rm ss}} [(W + U^{\rm lin}) \cdot \nabla \Omega + e^{s \gamma} \Lambda^\beta \Omega](\cdot,s) \, ds. \end{equation} According to the semigroup estimate \eqref{eqn:semigr}, we have\footnote{Crucially, this is where we use assumptions on the semigroup $e^{\tau\L_\ss}$ generated by the linearized operator. While the above Duhamel formula `loses derivatives' in the sense that we are estimating in $L^2$ according to `higher' quantities, e.g., $\| \nabla \Omega \|_{L^2}$, it is acceptable to lose derivatives at this level, though it will not be acceptable at the `top tier' ($\| \nabla \Omega \|_{L^4}$). This is common in quasi-linear perturbation arguments and can already be seen in standard proofs of local-in-time existence for the Euler equations.} \begin{equation}\label{eqn:duham-est} \begin{split} \|\Omega (\cdot,\tau)\|_{L^2} &\leq \| e^{(\tau-\tau_0)\L_{\rm ss}} \Omega_0\|_{L^2} + \int_{\tau_0}^\tau \bigl\|e^{(\tau-s)\L_{\rm ss}} [(W + U^{\rm lin}) \cdot \nabla \Omega + e^{s \gamma} \Lambda^\beta \Omega ](\cdot,s)\bigr\|_{L^2} \, ds \\& \leq C e^{(a + \delta)(\tau - \tau_0)} \| \Omega_0 \|_{L^2} + C\int_{\tau_0}^\tau e^{(a+\delta) (\tau - s)} \| (W + U^{\rm lin}) \cdot \nabla \Omega (\cdot,s) \|_{L^2} \, ds \\&\qquad + C\int_{\tau_0}^\tau e^{(a+\delta) (\tau - s)} e^{s \gamma} \|\Lambda^\beta \Omega (\cdot,s)\|_{L^2} \, ds \, . \end{split} \end{equation} Next, we estimate the first integrand on the right-hand side of \eqref{eqn:duham-est}. By \eqref{eqn:U-infty}, the smoothness of $\eta$ in Proposition~\ref{pro:spectral}, the assumption \eqref{eq:Wrequirement} on $W$, and the bootstrap assumption \eqref{eqn:bootstrap}, we obtain that \begin{align} \| (W + U^{\rm lin}) \cdot \nabla \Omega(\cdot,s) \|_{L^2} &\leq \| W \|_{L^\infty} \| \nabla \Omega \|_{L^2}+ \| U^{\rm lin} \|_{L^\infty} \| \nabla \Omega \|_{L^2} \\&\leq C e^{a s} \| \Omega(\cdot,s) \|_X \lesssim \bar C e^{2s \zeta}e^{(a + \delta) (s - \tau_0)} \| \Omega_0 \|_X. \label{eqn:100} \end{align} Plugging this estimate into the first integral in \eqref{eqn:duham-est}, we can bound that integral by $C\bar C e^{2\zeta T_{\rm max}} e^{(a+\delta)(\tau-\tau_0)} \| \Omega_0 \|_X$. Next, we consider the dissipation term.
First, we interpolate between $L^2$ and $\dot H^{1+\frac{\beta}{2}}$, and by Young's inequality we have \begin{equation} \label{eq:thingiusedtopass} \begin{aligned} \Big( \int_{\tau_0}^\tau e^{s \gamma} \| \Lambda^\beta \Omega \|_{L^2}^2 \, ds \Big)^{1/2} &\leq \Big(\int_{\tau_0}^\tau e^{s \gamma} \| \Lambda^{1+\frac{\beta}{2}} \Omega \|_{L^2}^{\frac{4\beta}{2+\beta}} \| \Omega \|_{L^2}^{2 \frac{2-\beta}{2+\beta}} \, ds \Big)^{1/2}\\ &\leq C\Big(\int_{\tau_0}^\tau e^{s \gamma} \big(\| \Lambda^{1+\frac{\beta}{2}} \Omega \|_{L^2}^{2}+ \| \Omega \|_{L^2}^{2}\big) \, ds \Big)^{1/2}\\ &\le C\| \Omega \|_{D(\tau_0,\tau)} + C\bar{C} \Big(\int_{\tau_0}^\tau e^{s\gamma} e^{2(a+\delta)(s-\tau_0)} \| \Omega_0 \|_{X}^2\, ds \Big)^{1/2} \\ &\leq C\bar{C} e^{(a+\delta)(\tau - \tau_0)} \| \Omega_0 \|_{X}, \end{aligned} \end{equation} where we used the bootstrap assumption both on $\| \Omega \|_{D(\tau_0,\tau)}$ and on $\| \Omega \|_{L^2}$. Then, using the Cauchy-Schwarz inequality in time and~\eqref{eq:thingiusedtopass}, \begin{equation}\label{diss-baseline} \begin{aligned} \int_{\tau_0}^\tau e^{(\tau- s)(a+\delta)} e^{s \gamma/2} \| e^{s \gamma/2} \Lambda^{\beta} \Omega(\cdot,s) \|_{L^2}\, ds &\leq C e^{\tau \gamma/2} (\bar{C} + 1) e^{(a+\delta)(\tau-\tau_0)} \| \Omega_0 \|_X . \end{aligned} \end{equation} This completes the proof of \eqref{eqn:baseline-est}. We notice that from an interpolation inequality, \eqref{eqn:baseline-est} and the bootstrap assumption, an improved $L^4$ bound also follows, of the form: \begin{equation}\label{eqn:omega-4-impr} \begin{split} \| \Omega \|_{L^4} &\leq C \| \Omega \|_{L^2}^{1/2} \| \nabla \Omega \|_{L^2}^{1/2} \\&\leq [C(1+ \bar C e^{2\zeta T_{\rm max}}) e^{(a+\delta)(\tau-\tau_0)}]^{1/2} \bar C^{1/2}e^{\frac{a+\delta}{2}(\tau-\tau_0)} \| \Omega_0 \|_X \\& \leq C(1+ \bar C^{1/2} e^{\zeta T_{\rm max}}) \bar C^{1/2}e^{(a+\delta)(\tau-\tau_0)} \| \Omega_0 \|_X. \end{split} \end{equation} Notice that this estimate is worse than \eqref{eqn:baseline-est} due to the presence of $\bar C^{1/2}$; however, since its power is strictly less than $1$, the estimate is still good enough. \bigskip {\bf Step 2. [Improved $L^Q$ bound]} {\it Suppose that the bootstrap assumption \eqref{eqn:bootstrap} holds. Then for all $\tau \in [\tau_0,\tau_1]$ \begin{equation}\label{eqn:ts-Q} \| \Omega(\cdot,\tau) \|_{L^Q} \leq C(C_{\delta_0}+ \bar C e^{2\zeta T_{\rm max}}) e^{(a+\delta)(\tau-\tau_0)} \| \Omega_0 \|_X. \end{equation} } We perform an $L^Q$ energy estimate on the equation for $\Omega$, namely we multiply the equation by $Q |\Omega|^{Q-2} \Omega$ and integrate in space. We obtain \begin{equation}\label{eqn:Q-est} \frac{d}{d\tau} \int |\Omega|^Q \, d \xi \leq \Big(- Q+\frac 2 \alpha \Big) \int |\Omega|^Q \, d \xi + Q \int |U \cdot \nabla \bar{\Omega}| | \Omega|^{Q-1}\, d \xi - Q\int e^{\tau \gamma} \Lambda^\beta \Omega |\Omega|^{Q-2} \Omega\, d \xi \, . \end{equation} The last term, related to the dissipation, is known to be nonpositive via the C{\'o}rdoba-C{\'o}rdoba inequality (see, for instance,~\cite[Lemma 2.4]{CoCo}). The main term of interest is the `forcing' $U \cdot \nabla \bar{\Omega}$, which we estimate thanks to \eqref{eqn:U-infty} and the fact that $\bar \Omega$ is compactly supported: \begin{equation} \| U \cdot \nabla \bar{\Omega}\|_{L^Q}\leq \| U\|_{L^\infty}\| \nabla \bar{\Omega}\|_{L^Q} \leq C \| \Omega\|_{X} \, . \end{equation} We conclude, via a Gronwall estimate on the inequality \eqref{eqn:Q-est}, that \eqref{eqn:ts-Q} holds.
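In detail, dividing \eqref{eqn:Q-est} by $Q \| \Omega \|_{L^Q}^{Q-1}$ after an application of H{\"o}lder's inequality to the forcing term, we arrive at the differential inequality \begin{equation*} \frac{d}{d\tau} \| \Omega \|_{L^Q} \leq \Big( \frac{2}{\alpha Q} - 1 \Big) \| \Omega \|_{L^Q} + C \| \Omega \|_{X} \, , \end{equation*} to which Gr{\"o}nwall's lemma applies once the bootstrap bound on $\| \Omega \|_X$ is inserted.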
Here we take advantage of the fact that $a_0>\frac{2}{\alpha}$ to see that the term $\big(- Q+\frac{2}{\alpha} \big) \int |\Omega|^Q \, d \xi$ can be neglected in this estimate, since it does not change the exponential-in-time behavior of the solution. \bigskip {\bf Step 3. [Improved $\partial_\theta\Omega$ bound]} {\it Suppose that the bootstrap assumption \eqref{eqn:bootstrap} holds. Then for all $\tau \in [\tau_0,\tau_1]$ and for $p = Q,2,4$, we have \begin{equation}\label{eqn:improvedpthetabound} \| \partial_\theta \Omega(\cdot,\tau) \|_{L^p} \leq C(1+ \bar C^{1/2} e^{2\zeta T_{\rm max}})\bar C^{1/2} e^{(a+\delta)(\tau-\tau_0)} \| \Omega_0 \|_X . \end{equation} } We first observe that \begin{equation} \partial_\theta \Lambda^\beta = \Lambda^\beta \partial_\theta \, , \end{equation} which follows from differentiating the rotation symmetry ${\rm Rot}_\theta \Lambda^\beta = \Lambda^\beta {\rm Rot}_\theta$ in the angle $\theta \in \mathbb{R}$ at $\theta = 0$. Similarly, $\partial_\theta \curl W = \curl \partial_\theta W$, and $\partial_\theta$ also commutes with the Biot-Savart law. Commuting $\partial_\theta$ into the PDE, we have \begin{equation} \partial_\tau \partial_\theta \Omega + (-1 - \frac{\xi}{\alpha} \cdot \nabla) \partial_\theta \Omega + (\bar{U} + W + U^{\rm lin}) \cdot \nabla \partial_\theta \Omega + e^{\tau \gamma} \Lambda^\beta \partial_\theta \Omega = F, \end{equation} where \begin{equation} -F = \partial_\theta U \cdot \nabla \bar{\Omega} + (\partial_\theta W + \partial_\theta U^{\rm lin}) \cdot \nabla \Omega. \end{equation} As regards the first term in the force, we estimate it in $L^p$ via Calder{\'o}n-Zygmund as follows, using that $|\partial_\theta U| \leq |x| |\nabla U|$: $$\|\partial_\theta U \cdot \nabla \bar{\Omega} \|_{L^p} \leq \Big\|\frac 1 {|x|} \partial_\theta U \Big\|_{L^p} \| |x| |\nabla \bar{\Omega}| \|_{L^\infty} \leq C \|\nabla U \|_{L^p} \leq C \| \Omega \|_{L^p} \, .$$ Using the baseline estimate \eqref{eqn:baseline-est} if $p=2$, the improved bound \eqref{eqn:ts-Q} if $p=Q$ and the interpolation bound \eqref{eqn:omega-4-impr} if $p=4$, we get \begin{equation*} \begin{split} \|\partial_\theta U \cdot \nabla \bar{\Omega} \|_{L^4} &\leq C \| \Omega \|_{L^4} \leq C(1+ \bar C^{1/2} e^{\zeta T_{\rm max}}) \bar C^{1/2} e^{(a+\delta)(\tau-\tau_0)} \| \Omega_0 \|_X . \end{split} \end{equation*} By \eqref{eqn:U-infty}, \eqref{eqn:p-theta-W-infty}, \eqref{eq:Wrequirement} and the bootstrap assumption we have, for $p=2,4$, \begin{equation} \begin{split} \| (\partial_\theta W + \partial_\theta U^{\rm lin}) \cdot \nabla \Omega \|_{L^p} &\leq C (\| \partial_\theta W \|_{L^\infty} + \| \partial_\theta U^{\rm lin} \|_{L^\infty} ) \| \nabla \Omega \|_{L^p} \\&\leq C \bar C e^{a\tau} e^{(a+\delta) (\tau - \tau_0)} \| \Omega_0 \|_X . \end{split} \end{equation} For $p=Q$ we modify the previous computation as follows \begin{equation} \begin{split} \| (\partial_\theta W + \partial_\theta U^{\rm lin}) \cdot \nabla \Omega \|_{L^Q} &\leq C (\| \partial_\theta W \|_{L^{\frac{2Q}{2-Q}}} + \| \partial_\theta U^{\rm lin} \|_{L^{\frac{2Q}{2-Q}}} ) \| \nabla \Omega \|_{L^2} \\&\leq C (\| \partial_\theta \curl W \|_{L^{Q}} + \| \partial_\theta U^{\rm lin} \|_{L^{\frac{2Q}{2-Q}}} ) \| \nabla \Omega \|_{L^2} \\& \leq C \bar C e^{a\tau}e^{(a+\delta) (\tau - \tau_0)} \| \Omega_0 \|_X . \end{split} \end{equation} We multiply the equation by $p |\partial_\theta \Omega|^{p-2} \partial_\theta \Omega$ for $p=Q,2,4$ and integrate by parts.
After rewriting the fractional dissipation term and using H{\"o}lder's inequality to estimate the term which includes $F$, we get \begin{equation} \frac{d}{d\tau} \int |\partial_\theta \Omega|^p \, d\xi + \Big(\frac{2}{\alpha}-p\Big) \int | \partial_\theta \Omega|^p \, d\xi + c e^{\tau \gamma} \int \big|\Lambda^{\frac{\beta}{2}} |\partial_\theta \Omega|^{p/2} \big|^2 \, d\xi \leq C \| F \|_{L^p} \|\partial_\theta \Omega\|_{L^p}^{p-1}. \end{equation} We neglect the last term in the left-hand side, which has the right sign, divide by $p \|\partial_\theta \Omega\|_{L^p}^{p-1}$, sum over $p=Q,2,4$ and conclude via Gronwall as before. \bigskip {\bf Step 4. [Improved $\nabla \Omega$ bound]} {\it Suppose that the bootstrap assumption \eqref{eqn:bootstrap} holds. Then for all $\tau \in [\tau_0,\tau_1]$ and for $p=2,4$, we have \begin{equation}\label{ts:grad1} \| \nabla \Omega \|_{L^p} \leq C(1+ \bar C^{1/2} e^{2\zeta T_{\rm max}}) e^{(a+\delta)(\tau-\tau_0)} \bar C^{1/2} \| \Omega_0 \|_X \, , \end{equation} \begin{equation}\label{ts:grad2} \| \Omega \|_{D(\tau_0,\tau)} \leq C(1+ \bar C^{1/2} e^{2\zeta T_{\rm max}}) e^{(a+\delta)(\tau-\tau_0)} \bar C^{1/2} \| \Omega_0 \|_X \, . \end{equation} } Let $i=1,2$ and consider the equation for $\partial_i \Omega$: \begin{equation} \begin{aligned} \label{eq:piomegaequation} &\partial_\tau \partial_i \Omega + (-1-1/\alpha - \xi \cdot \nabla/\alpha) \partial_i \Omega + (\bar{U} + U^{\rm lin} + W) \cdot \nabla \partial_i \Omega \\ &\quad + \partial_i \bar{U} \cdot \nabla \Omega + \partial_i( U^{\rm lin} + W) \cdot \nabla \Omega + e^{\tau \gamma} \Lambda^\beta \partial_i \Omega = F_i, \end{aligned} \end{equation} where \begin{equation}\label{eqn:for-grad} -F_i = \partial_i U \cdot \nabla \bar{\Omega} + U \cdot \nabla \partial_i \bar{\Omega} . \end{equation} We multiply by $|\nabla \Omega |^{p-2}\partial_i \Omega$, sum over $i=1,2$ and integrate by parts. The last term in the first line of \eqref{eq:piomegaequation} vanishes, after this computation, due to the transport structure. We have \begin{equation}\label{eqn:full-grad} \begin{aligned} &\frac{1}{p} \frac{d}{d\tau} \int |\nabla \Omega|^p \, d\xi + \Big(\frac{2}{\alpha p} -1 -\frac 1 \alpha \Big) \int |\nabla \Omega|^p \, d\xi + \sum_{i=1}^2 \int e^{\tau \gamma} \Lambda^{\beta} \partial_i \Omega \, |\nabla \Omega|^{p-2} \partial_i \Omega \, d\xi \\ &\quad + \sum_{i=1}^2 \int \Big( \partial_i \bar{U} \cdot \nabla \Omega + \partial_i ( U^{\rm lin} + W) \cdot \nabla \Omega \Big) \partial_i \Omega \, |\nabla \Omega|^{p-2} \, d\xi \\ &\quad = \sum_{i=1}^2 \int F_i \partial_i \Omega |\nabla \Omega|^{p-2} \, d\xi \, . \end{aligned} \end{equation} We remark that \begin{equation}\label{eqn:cancellaz} \partial_i \bar{U} \cdot \nabla \Omega = \zeta' (|x|) \frac{x_i}{|x|} \partial_\theta \Omega + \zeta(|x|) \partial_i x^\perp \cdot \nabla \Omega = \zeta' (|x|) \frac{x_i}{|x|} \partial_\theta \Omega - \zeta(|x|) \big(\nabla^\perp \Omega\big)_i \end{equation} and we exploit the cancellation of the last term in \eqref{eqn:cancellaz} when multiplied by $\nabla \Omega$.
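Explicitly, since $\nabla^\perp \Omega \cdot \nabla \Omega = 0$ pointwise, \begin{equation*} \sum_{i=1}^2 \int \zeta(|x|) \big(\nabla^\perp \Omega\big)_i \, \partial_i \Omega \, |\nabla \Omega|^{p-2} \, d\xi = \int \zeta(|x|) \, |\nabla \Omega|^{p-2} \, \nabla^\perp \Omega \cdot \nabla \Omega \, d\xi = 0 \, . \end{equation*}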
Therefore \eqref{eqn:full-grad} rewrites as \begin{equation} \begin{aligned} &\frac{1}{p} \frac{d}{d\tau} \int |\nabla \Omega|^p \, d\xi + \Big(\frac{2}{\alpha p} -1 -\frac 1 \alpha \Big) \int |\nabla \Omega|^p \, d\xi + \sum_{i=1}^2 \int e^{\tau \gamma} \Lambda^{\beta} \partial_i \Omega \, |\nabla \Omega|^{p-2} \partial_i \Omega \, d\xi \\ &\quad = - \sum_{i=1}^2 \int \Big( \zeta' (|x|) \frac{x_i}{|x|} \partial_\theta \Omega + \partial_i ( U^{\rm lin} + W) \cdot \nabla \Omega +F_i \Big) \partial_i \Omega |\nabla \Omega|^{p-2} \, d\xi \\& \leq C \sum_{i=1}^2 \Big\| \zeta' (|x|) \frac{x_i}{|x|} \partial_\theta \Omega + \partial_i ( U^{\rm lin} + W) \cdot \nabla \Omega +F_i \Big\|_{L^p}^p + \frac 1 2 \int |\nabla \Omega|^p \, d\xi \, . \end{aligned} \end{equation} We estimate each term in the $L^p$ norm in the right-hand side. By \eqref{eqn:improvedpthetabound} and since $\zeta'$ is bounded, we have \begin{equation} \Big\|\zeta' (|x|) \frac{x_i}{|x|} \partial_\theta \Omega\Big\|_{L^p} \leq \|\zeta' (|x|) \|_{L^\infty} \| \partial_\theta \Omega\|_{L^p} \leq C(1+ \bar C^{1/2} e^{2\zeta T_{\rm max}}) e^{(a+\delta)(\tau-\tau_0)} \bar C^{1/2} \| \Omega_0 \|_X. \end{equation} By the explicit expression of $\Omega^{\rm lin}$, the regularity of $\eta$ in Proposition~\ref{pro:spectral}, and \eqref{eqn:U-infty}, we have $\| \nabla U^{\rm lin} \|_{L^\infty} \lesssim e^{a \tau}$. Using the Gagliardo-Nirenberg interpolation inequality, Calder{\'o}n-Zygmund and \eqref{eq:Wrequirement}, we have $$ \| \nabla W \|_{L^\infty} \leq C\| \nabla ^2 W\|_{L^4}^\theta \|\nabla W\|_{L^Q}^{1-\theta} \leq C\| \nabla \curl W \|_{L^4}^\theta \|\curl W\|_{L^Q}^{1-\theta} \leq C e^{a\tau}$$ for a suitable choice of $\theta \in (0,1)$. We estimate \begin{equation*} \begin{split} \| \partial_i ( U^{\rm lin} + W) \cdot \nabla \Omega \|_{L^p} &\leq \big( \| \nabla U^{\rm lin} \|_{L^\infty} + \| \nabla W \|_{L^\infty}\big) \|\nabla \Omega \|_{L^p} \\&\leq C \bar C e^{a\tau} e^{(a + \delta) (\tau-\tau_0)} \| \Omega_0 \|_X. \end{split} \end{equation*} Finally, as regards the force appearing in \eqref{eqn:for-grad}, we use the baseline estimate \eqref{eqn:baseline-est} and \eqref{eqn:omega-4-impr}. For the first term in the force, also by Calder{\'o}n-Zygmund, we have \begin{equation*} \begin{split} \| \partial_i U \cdot \nabla \bar{\Omega} \|_{L^p}&\leq \|\nabla \bar \Omega (\cdot, \tau)\|_{L^\infty} \|\nabla U (\cdot, \tau)\|_{L^p} \leq C \|\Omega (\cdot, \tau)\|_{L^p} \\&\leq C(C_{\delta_0}+ \bar C e^{\zeta T_{\rm max}}) \bar C^{1/2}e^{(a+\delta)(\tau-\tau_0)} \| \Omega_0 \|_X. \end{split} \end{equation*} For the second term in the force, given that $D^2 \bar \Omega \in L^1 \cap L^\infty$ since $\bar \Omega$ is smooth and compactly supported, and by the $L^\infty$ bound on $U$ in \eqref{eqn:U-infty}, we obtain \begin{align} \| U \cdot \nabla \partial_i \bar{\Omega}\|_{L^p}&\leq\|U \|_{L^{\infty}} \|D^2 \bar \Omega (\cdot, \tau)\|_{L^p} \leq C (\|\Omega\|_{L^Q}+ \|\Omega\|_{L^4}) \\& \leq C(1+ \bar C^{1/2} e^{\zeta T_{\rm max}}) \bar C^{1/2}e^{(a+\delta)(\tau-\tau_0)} \| \Omega_0 \|_X. \end{align} By the previous estimates, \eqref{eqn:full-grad} rewrites for $p=2$ as \begin{equation}\label{eqn:full-grad2} \begin{aligned} &\frac{1}{2} \frac{d}{d\tau} \int |\nabla \Omega|^2 \, d\xi -\int |\nabla \Omega|^2 \, d\xi + \int e^{\tau \gamma} |\Lambda^{\beta/2} \nabla \Omega|^2 \, d\xi \\ & \leq C[ (1+ \bar C^{1/2} e^{2\zeta T_{\rm max}})\bar C^{1/2} e^{(a+\delta)(\tau-\tau_0)} \| \Omega_0 \|_X ]^2.
\end{aligned} \end{equation} We now observe that, when applying the formula \eqref{eqn:full-grad} for $p=4$, the term related to the fractional dissipation (last term in the first line) is nonnegative. This is a technical computation that can be seen, for instance, through the Caffarelli-Silvestre extension $(\partial_i \Omega)^*$ of $\partial_i \Omega$ in $\{ (\xi,\xi_3) \in \mathbb{R}^2 \times \mathbb{R}: \xi_3 \geq 0\}$ and through the characterization of the fractional Laplacian in terms of the extension, as follows. By means of the divergence theorem and the fact that $(\partial_i \Omega)^*$ is $\xi_3^b$-harmonic, i.e. $\overline{\div} ( \xi_3^b \overline{\nabla} (\partial_i \Omega)^* ) = 0$, we compute \begin{align*} &\sum_{i=1}^2\int_{\mathbb{R}^2} \Lambda^{\beta} \partial_i \Omega \, |\nabla \Omega|^{2} \, \partial_i \Omega \, d\xi \\ &= -C\sum_{i=1}^2 \lim_{\xi_3 \to 0^+} \int_{\mathbb{R}^2} \xi_3^b \, \partial_{\xi_3} (\partial_i \Omega)^* \, |(\nabla \Omega)^*|^{2} \, (\partial_i \Omega)^* \, d\xi \\ &= C \sum_{i=1}^2\int_{\mathbb{R}^3_+} \overline{\div} \big( \xi_3^b \, \overline{\nabla} (\partial_i \Omega)^* \, |(\nabla \Omega)^*|^{2} \, (\partial_i \Omega)^* \big) \, d\xi \, d\xi_3 \\ &= C \sum_{i=1}^2\int_{\mathbb{R}^3_+} \Big( \xi_3^b \, \overline{\nabla} \frac{ ((\partial_i \Omega)^*)^2}{2} \cdot \overline{\nabla} \big[ |(\nabla \Omega)^*|^{2} \big] +\xi_3^b \, |\overline{\nabla} (\partial_i \Omega)^*|^2 \, |(\nabla \Omega)^*|^{2} \Big) \, d\xi \, d\xi_3 \\ &= C \int_{\mathbb{R}^3_+} \Big( \xi_3^b \, \overline{\nabla} \frac{| ({\nabla} \Omega)^*|^2}{2} \cdot \overline{\nabla} \big[ |(\nabla \Omega)^*|^{2} \big] +\xi_3^b \Big(\sum_{i=1}^2|\overline{\nabla} (\partial_i \Omega)^*|^2\Big) |(\nabla \Omega)^*|^{2} \Big) \, d\xi \, d\xi_3 \geq 0\,, \end{align*} where the constant depends only on $\beta$ and, in the last three lines, all integrands are evaluated at $(\xi,\xi_3,\tau)$. Thanks to the previous estimates, \eqref{eqn:full-grad} then implies \begin{equation}\label{eqn:full-grad4} \begin{aligned} &\frac{1}{4} \frac{d}{d\tau} \int |\nabla \Omega|^4 \, d\xi - \Big( 1 +\frac{1}{2\alpha } \Big)\int |\nabla \Omega|^4 \, d\xi \leq C[ (1+ \bar C^{1/2} e^{2\zeta T_{\rm max}})\bar C^{1/2} e^{(a+\delta)(\tau-\tau_0)} \| \Omega_0 \|_X ]^4. \end{aligned} \end{equation} Integrating \eqref{eqn:full-grad4} and \eqref{eqn:full-grad2} between $\tau_0$ and $\tau$, and taking advantage of the fact that the linear terms in the left-hand side do not change the exponential-in-time behavior via Gronwall, as in Step 2, we deduce that \eqref{ts:grad1} holds and, with $p=2$, that also \eqref{ts:grad2} holds. \end{proof} \section{Nonlinear theory}\label{sec:nonlin} In this section, we complete the proof of Theorem~\ref{thm:main}. \begin{proof}[Proof of Theorem~\ref{thm:main}] Let $\bar \Omega, a, b, \eta, m, \delta_0$ be fixed as in the beginning of Section~\ref{sec:linear}. \emph{Step 1. Construction of two distinct solutions via a limiting procedure}. For $k \in \mathbb{N}$, consider $t_k := e^{-k}$. We solve the Cauchy problem for the fractional Navier-Stokes equations~\eqref{eq:NS} with smooth `initial' datum $u^{(k)}(\cdot,t_k) = \bar{u}(\cdot,t_k) + u^{\rm lin}(\cdot,t_k)$ and smooth forcing term $f$ defined in Lemma~\ref{lem:forcinglemma}.
For each $k$, a short-time solution $u^{(k)}$ on $\mathbb{R}^2 \times (t_k,T_k)$ exists and is unique in $C([t_k,T_k];L^2 \cap W^{1,4})$, among other function spaces, including $X$.\footnote{When $\beta \leq 1$, one must argue existence and uniqueness in a quasilinear fashion, whereas when $\beta \in (1,2)$ it is possible to develop the well-posedness theory in a semilinear fashion.} Furthermore, these solutions are regular enough to satisfy the energy equality: \begin{equation} \label{eq:energyconservation} \frac{1}{2} \int |u^{(k)}|^2(x,t') \, dx + \int_{t}^{t'} \int |\Lambda^{\frac{\beta}{2}} u^{(k)}|^2 \, dx \,ds = \frac{1}{2} \int |u^{(k)}|^2(x,t) \, dx + \int_{t}^{t'} \int f \cdot u^{(k)} \, dx \, ds. \end{equation} Our desired second solution, violating uniqueness, should arise as $k \to +\infty$. For this purpose, it is necessary to continue the solutions up to some time $\bar t$ independent of $k$. In particular, we will demonstrate that there exists a time $\bar{t} > 0$, independent of $k$, such that the solutions $u^{(k)}$ can be continued to $\mathbb{R}^2 \times (t_k,\bar{t})$, and writing $u^{(k)} = \bar{u} + u^{\rm lin}+ u^{\rm per,(k)}$, the associated vorticity in self-similar variables $\Omega^{\rm per,(k)}$ solves \eqref{eqn:omega-per} in $[-k, \bar \tau]$ and satisfies \begin{equation} \label{eq:mainestimateattheendofthepaper} \| \Omega^{{\rm per},(k)}(\cdot,\tau) \|_{X} \leq e^{\tau(a + \delta_0)}, \quad \tau \in [-k,\bar{\tau}], \end{equation} where $\bar{\tau} = \log \bar{t}$. Once we have the above estimate, we may employ standard weak compactness arguments to take the limit $k\to \infty$ and obtain a solution $u= \bar{u} + u^{\rm lin}+ u^{\rm per}$ satisfying the estimate~\eqref{eq:mainestimateattheendofthepaper} with $\Omega^{\rm per}$ in place of $\Omega^{{\rm per},(k)}$ on the time interval $(-\infty,\bar{\tau}]$. In particular, we will have \begin{equation} \| \Omega(\cdot,\tau) - \bar{\Omega}(\cdot,\tau) \|_X \geq \| \Omega^{\rm lin}(\cdot,\tau) \|_X - \| \Omega^{{\rm per}}(\cdot,\tau) \|_{X} \geq Ce^{\tau a} - e^{\tau(a + \delta_0)} > 0 \, , \end{equation} when $\tau \ll 0$, which implies non-uniqueness. To demonstrate that $u$ is a Leray solution of~\eqref{eq:NS}, we use the energy equality~\eqref{eq:energyconservation} with initial time $t = t_k$. The first term on the right-hand side of~\eqref{eq:energyconservation} is simply $\| \bar{u}(\cdot,t_k) + u^{\rm lin}(\cdot,t_k) \|^2_{L^2}/2$, which converges to zero as $t_k \to 0^+$. Hence, we have the energy equality \begin{equation} \label{eq:energyconservation2} \frac{1}{2} \int |u|^2(x,t') \, dx + \int_{0}^{t'} \int |\Lambda^{\frac{\beta}{2}} u|^2 \, dx \,ds = \int_{0}^{t'} \int f \cdot u \, dx \, ds \, . \end{equation} \emph{Step 2. The estimate~\eqref{eq:mainestimateattheendofthepaper} via Duhamel's formula}. It remains to verify~\eqref{eq:mainestimateattheendofthepaper}. By continuity in $X$, the estimate~\eqref{eq:mainestimateattheendofthepaper} holds on a small neighborhood of $-k$. We define $\bar \tau_k>-k$ to be the largest non-positive time such that~\eqref{eq:mainestimateattheendofthepaper} holds on $[-k,\bar\tau_k]$, and we claim that there exists $\bar \tau$ independent of $k$ (which will be fixed at the end of the proof) such that $\bar \tau_k \geq \bar \tau$. Therefore, on $[-k, \bar \tau_k]$ we are in a position to apply Duhamel's formula and Proposition~\ref{pro:transportestimate} with $\delta =\delta_0$ and $W = U^{\rm per}$, which satisfies the requirement~\eqref{eq:Wrequirement}.
We conclude that for every $\tau \in[-k, \bar \tau_k]$, \begin{equation} \Omega^{\rm per}(\cdot,\tau) = \int_{-k}^\tau S[U^{\rm per}](\tau,s) [-(U^{\rm per}+ U^{\rm lin}) \cdot \nabla \Omega^{\rm lin} +e^{s \gamma} \Lambda^{\beta} \Omega^{\rm lin}](\cdot,s) \, ds \, , \end{equation} and that \begin{align}\label{eqn:duhamel-nonlin-est}\nonumber \|\Omega^{\rm per}(\cdot,\tau)\|_{X}&\leq \int_{-k}^\tau \|S[U^{\rm per}](\tau,s)[-(U^{\rm per}+ U^{\rm lin}) \cdot \nabla \Omega^{\rm lin} +e^{s \gamma} \Lambda^{\beta} \Omega^{\rm lin}](\cdot,s) \|_{X} \, ds \\& \leq \bar C \int_{-k}^\tau e^{(a_0+\delta_0)( \tau - s)}\|[-(U^{\rm per}+ U^{\rm lin}) \cdot \nabla \Omega^{\rm lin} +e^{s \gamma} \Lambda^{\beta} \Omega^{\rm lin}](\cdot,s)\|_{X} \, ds \, . \end{align} We claim that the integrands in the right-hand side satisfy \begin{equation}\label{eqn:force-claim} \| (U^{\rm per}+ U^{\rm lin}) \cdot \nabla \Omega^{\rm lin}(\cdot,s) \|_X + \| e^{s \gamma} \Lambda^{\beta} \Omega^{\rm lin}(\cdot,s) \|_X \lesssim e^{s(a+2\delta_0)} \, . \end{equation} Indeed, by the eigenfunction properties in Proposition~\ref{pro:spectral}, the structure of $\eta$ in the variable $\theta$ and elementary interpolation, we have \begin{equation} \| e^{s \gamma} \Lambda^{\beta} \Omega^{\rm lin}(\cdot,s) \|_X \leq C e^{s (a+\gamma)}(\| \eta\|_{C^3({\rm supp}\, \eta) }+ \|\partial_\theta \eta\|_{C^2({\rm supp}\, \eta) }) \leq C e^{s (a+\gamma)}. \end{equation} By \eqref{eqn:U-infty} and \eqref{eqn:p-theta-W-infty} applied to $U^{\rm per}$, the bound \eqref{eq:mainestimateattheendofthepaper}, which by assumption holds in $[-k, \bar \tau_k]$, and the bounds on $\Omega ^{\rm lin}$ in terms of $e^{a_0 s}$, we estimate all the terms composing $\| (U^{\rm per}+ U^{\rm lin}) \cdot \nabla \Omega^{\rm lin} (\cdot,s)\|_{X}$ as \begin{equation*} \begin{split} \| (U^{\rm per}+ U^{\rm lin}) \cdot \nabla \Omega^{\rm lin}\|_{L^p} &\leq \big(\| U^{\rm per}\|_{L^\infty}+\| U^{\rm lin}\|_{L^\infty}\big) \| \nabla \Omega^{\rm lin}\|_{L^p} \\& \leq C \big(\|\Omega^{\rm per}\|_X +\|\Omega^{\rm lin}\|_X \big) \|\Omega^{\rm lin}\|_{C^2({\rm supp}\, \eta)} \leq C e^{2a_0s} \end{split} \end{equation*} $$\| \big(\partial_\theta U^{\rm per}+\partial_\theta U^{\rm lin}\big) \cdot \nabla \Omega^{\rm lin}\|_{L^p} \leq \big(\| \partial_\theta U^{\rm per}\|_{L^\infty} +\| \partial_\theta U^{\rm lin}\|_{L^\infty}\big) \| \nabla \Omega^{\rm lin}\|_{L^p} \leq C e^{2a_0s} $$ $$\| \big(U ^{\rm per}+U ^{\rm lin}\big)\cdot \partial_\theta \nabla \Omega^{\rm lin}\|_{L^p} \leq \big( \| U^{\rm per}\|_{L^\infty}+\| U^{\rm lin}\|_{L^\infty}\big) \|\partial_\theta \nabla \Omega^{\rm lin}\|_{L^p} \leq C e^{2a_0s} $$ for $p=Q,2,4$ and \begin{equation*} \begin{split} \| \big( \nabla &U^{\rm per} + \nabla U^{\rm lin} \big) \cdot \nabla \Omega^{\rm lin}\|_{L^p} + \| \big( U^{\rm per}+U^{\rm lin}\big) \cdot \nabla^2 \Omega^{\rm lin}\|_{L^p} \\&\leq \big(\| \nabla U^{\rm per}\|_{L^p}+\| \nabla U^{\rm lin}\|_{L^p}\big) \| \nabla \Omega^{\rm lin}\|_{L^\infty} +\big( \| U^{\rm per}\|_{L^\infty}+\| U^{\rm lin}\|_{L^\infty}\big) \| \nabla^2 \Omega^{\rm lin}\|_{L^p} \\&\leq C e^{2a_0s} \end{split} \end{equation*} for $p=2,4$. Therefore, since our choice of $\delta_0$ satisfies $\delta_0 \leq \frac 1 4 \min\{\gamma, a_0\}$, the previous inequalities imply the validity of \eqref{eqn:force-claim}.
We obtain then from \eqref{eqn:duhamel-nonlin-est} and these estimates that $$\|\Omega^{\rm per}(\cdot,\tau)\|_{X} \leq C e^{(a_0+\delta_0)\tau} \int_{-\infty}^\tau e^{\delta_0 s} \, ds \leq C e^{(a_0+2 \delta_0)\tau}$$ for every $\tau \in [-k, \bar \tau_k]$. Choosing finally $\bar \tau$ so that $C e^{\delta_0\bar \tau}<1/2$, where $C$ is the constant appearing in the right-hand side of the previous formula, the bound \eqref{eq:mainestimateattheendofthepaper} is never saturated on $[-k,\bar \tau_k]$, and we find that $\bar \tau_k\geq \bar \tau$. \end{proof} \subsubsection{Acknowledgments} DA was supported by NSF Postdoctoral Fellowship Grant No.\ 2002023 and Simons Foundation Grant No.\ 816048. MC was supported by the SNSF Grant 182565. \bibliographystyle{alpha}
{ "timestamp": "2022-09-20T02:23:36", "yymm": "2209", "arxiv_id": "2209.08713", "language": "en", "url": "https://arxiv.org/abs/2209.08713" }
\section{INTRODUCTION} \vspace{-1mm} The Koopman operator describes the evolution of a nonlinear dynamical system in terms of a possibly infinite-dimensional but linear operator acting on a lifted space of observables. Starting from applications to dimension reduction of high-dimensional nonlinear systems such as turbulent flows\cite{Spectral_analysis_of_nonlinear_flows,P_shmidt_DMD_2008}, the Koopman operator has gained great popularity in recent years as a data-driven modeling approach for dynamical systems. Among its many applications is data-driven control. On the basis of finite-dimensional approximations of the Koopman operator derived from Extended Dynamic Mode Decomposition (EDMD)\cite{Williams2015}, which yield Linear Time-Invariant (LTI) data-driven models, several linear controllers have been applied, such as the Linear Quadratic Regulator (LQR)\cite{derivative_based_Murphy,local_Koopman,soft_robot_arm} and Model Predictive Control (MPC)\cite{KORDA_Koopman_MPC,Koopman_Lyapunov_based_MPC,Koopman_generators_Peitz,Soft_robot,tube_based_MPC}. While there are many frameworks and methods in the literature to incorporate the Koopman operator into data-driven control, not much attention has been paid to the data-driven modeling aspect in the context of control applications. In particular, there do not appear to be many investigations focused on the modeling error, i.e., the practically unavoidable discrepancy between the theory and actual implementations. It has been recognized that the convergence property of the EDMD algorithm\cite{on_convergence_of_EDMD} does not hold for non-autonomous types of Koopman models\cite{KORDA_Koopman_MPC}. This motivates the use of neural networks to learn the observable functions themselves along with the finite-dimensional approximation of the Koopman operator. We show a necessary and sufficient condition for the state prediction of the Koopman control models of interest to achieve exactly zero error, which implies that high state-predictive accuracy is attainable provided one has access to sufficient data and computational resources that afford large and complex model structures. On the other hand, we show that the modeling error of the Koopman models interacts with the closed-loop system in a different way than with the state prediction, and we demonstrate by example that the controller performance can greatly suffer from the modeling error, on which the complexity and dimension of the observables have a large influence. To improve the possibly undesirable closed-loop behavior induced by Koopman-based control models, a control-consistent learning method is proposed, in which the model is refined after the initial training using exclusively data points sampled from the closed-loop dynamics. This modification of the model aims to directly reduce the impact of the modeling error on the controller performance. Moreover, with the same intent as the control-consistent learning method, we also present a simple yet effective data sampling strategy that only uses input signals deterministically sampled from continuous functions. This paper is organized as follows. In Section \ref{section 2}, the Koopman operator framework for non-autonomous systems is presented. In Section \ref{section 3}, we discuss the manifestation of the modeling error in both prediction and control and propose a control-consistent learning method along with a data sampling strategy for input signals to improve the actual closed-loop behavior.
Finally, several dynamical systems are tested to show the effectiveness of the proposed method in Section \ref{section. numerical examples}. \vspace{-3mm} \section{KOOPMAN OPERATOR THEORY FOR NON-AUTONOMOUS SYSTEMS} \vspace{-2mm} \label{section 2} We consider a dynamical system: \vspace{-2mm} \begin{align} \dot{x}(t)=f(x(t),u(t)), \label{eq. governing eq in ODE} \end{align} where $x(t)\in \mathcal{X}\subseteq \mathbb{R}^n$, $u(t)\in \mathcal{U}\subseteq \mathbb{R}^p$, and $f:\mathcal{X}\times \mathcal{U}\rightarrow \mathcal{X}$ are the state, the input signal, and the possibly nonlinear mapping describing the dynamics of the system, respectively. Throughout the paper, we assume the solution $x(t)$ to \eqref{eq. governing eq in ODE} to be continuous with respect to $t$. With a first-order time discretization, \eqref{eq. governing eq in ODE} yields the following difference equation: \vspace{-2mm} \begin{align} x_{k+1}=F(x_k, u_k), \label{eq. governing eq} \end{align} where $x_k:=x(k\Delta t)$, $u_k:=u(k\Delta t)$, and $\Delta t$ denotes the sampling period. On the assumption $\Delta t\ll 1$, we regard \eqref{eq. governing eq} as a discrete-time system whose dynamics is equivalent to that of \eqref{eq. governing eq in ODE}. It is assumed that $f$ (i.e., $F$) is unknown and that its dynamics is modeled in a data-driven manner. In the Koopman operator formalism, the dynamics is characterized through functions called observables, which are mappings from the state-space into $\mathbb{R}$. While the Koopman operator was first introduced in the context of autonomous systems, there have also been several efforts extending it to non-autonomous systems with control inputs\cite{DMDc,EDMDc,KORDA_Koopman_MPC}. In a formal extension\cite{KORDA_Koopman_MPC}, the state-space is extended to the augmented space $\mathcal{X}\times l (\mathcal{U})$, where \vspace{-2mm} \begin{align} l (\mathcal{U}):=\{ \bm{U}:=(u_k,u_{k+1},\cdots) \mid u_i\in \mathcal{U},\forall i \}, \end{align} is the space of sequences of input signals, and the observables $g$ are of the form: \vspace{-2mm} \begin{align} g:\mathcal{X}\times l (\mathcal{U}) \rightarrow \mathbb{R}: (x_k,\bm{U}) \mapsto g(x_k,\bm{U}). \label{eq. obs general description} \end{align} In practice, the observables $g$ may be considered as feature maps that are either specified by users or learned from data in the modeling procedure. The Koopman operator corresponding to the non-autonomous system \eqref{eq. governing eq} is defined as an infinite-dimensional linear operator $\mathcal{K}:\mathcal{F}\rightarrow \mathcal{F}$ ($\mathcal{F}$: space of functions $g$) s.t. \vspace{-2mm} \begin{align} &\mathcal{K}g = g\circ \hat{F} \ \Leftrightarrow\ (\mathcal{K}g)(x_k, \bm{U})= g(\hat{F}(x_k, \bm{U})), & \label{eq. def of Koopman operator} \end{align} where the mapping $\hat{F}:\mathcal{X}\times l (\mathcal{U})\rightarrow \mathcal{X}\times l (\mathcal{U})$ is defined by \vspace{-4mm} \begin{align} \label{eq. def of augmented dynamics} \hat{F}(x_k, \bm{U}) := \left(F(x_k,u_k), \mathcal{S}\bm{U}\right) = \left( x_{k+1}, \mathcal{S}\bm{U} \right), \end{align} and $\mathcal{S}$ denotes the shift operator s.t. \vspace{-2mm} \begin{align} \mathcal{S}\bm{U}= \mathcal{S}(u_k,u_{k+1},\cdots):= (u_{k+1},u_{k+2},\cdots). \end{align} It is easily inferred from \eqref{eq. def of Koopman operator} that $\mathcal{K}$ is a linear operator. Note that while the time evolution of $x_k$ is specified by \eqref{eq.
governing eq}, there is at this point no mapping governing the evolution of $u_k$, which requires introducing the sequence $\bm{U}=(u_k,u_{k+1},\cdots)$ of input signals to formally define the Koopman operator $\mathcal{K}$. The equation \eqref{eq. def of Koopman operator}, along with the definition \eqref{eq. def of augmented dynamics}, can be viewed as the evolution of the dynamics \eqref{eq. governing eq} through the observables $g$. \vspace{-1mm} \subsection{Finite Dimensional Approximation of the Koopman Operator and Data-Driven Koopman Models} To apply the Koopman operator formalism to dynamical systems modeling, a finite-dimensional approximation $K$ of the Koopman operator $\mathcal{K}$ is introduced as follows. \vspace{2mm} \begin{prop} \rm{} Given observables $g_i\in \mathcal{F}$ ($i=1,\cdots,D$), let $g$ be an arbitrary element of $\text{span}(g_1,\cdots, g_D)$. Then, $\mathcal{K}g\in \text{span}(g_1,\cdots, g_D)$, i.e., $\text{span}(g_1,\cdots, g_D)$ is an invariant subspace under the action of the Koopman operator $\mathcal{K}:\mathcal{F}\rightarrow \mathcal{F}$, if and only if there exists $K\in \mathbb{R}^{D\times D}$ s.t. \begin{align} [\mathcal{K}g_1\ \cdots \ \mathcal{K}g_D]^\mathsf{T} = K[g_1\ \cdots \ g_D]^\mathsf{T}. \label{eq. finite dimensional approx of K} \end{align} \end{prop} \vspace{2mm} From an engineering perspective, it is of great interest to introduce observables that allow practical models for control applications. One major choice of $g_i$ for the Koopman control problem takes the following structure of observables\cite{Model_based_control,Soft_robot,tube_based_MPC}: \vspace{-2mm} \begin{align} [g_1(x_k,\bm{U})\cdots g_{D}(x_k,\bm{U})]^\mathsf{T} = \left[ x_k^\mathsf{T}\ \tilde{g}(x_k)^\mathsf{T}\ u_k^\mathsf{T} \right]^\mathsf{T}, \label{eq. def of obs} \end{align} where $D=n+N+p$ and $\tilde{g}(x_k)\in \mathbb{R}^N$ represents a vector-valued function from $\mathcal{X}$ into $\mathbb{R}^N$ for some $N\in \mathbb{N}$. Note that only the first element $u_k$ of the sequence $\bm{U}=(u_k,u_{k+1},\cdots)$ appears in the definition \eqref{eq. def of obs}, which leads to a practical form of data-driven models consistent with many linear controller designs such as LQR and MPC. On the assumption that we have access to $x_k$ and $u_k$ as data, we consider the following finite-dimensional approximation $K_c\in\mathbb{R}^{(n+N+p)\times (n+N+p)}$ of the Koopman operator $\mathcal{K}$: \vspace{-2mm} \begin{align} \left[ \begin{array}{c} x_{k+1} \\ \tilde{g}(x_{k+1}) \\ u_{k+1} \end{array} \right] \approx \underset{=:K_c}{ \underbrace{ \left[ \begin{array}{c} \begin{array}{cc} A & B \end{array} \\ * \end{array} \right] }} \left[ \begin{array}{c} x_{k} \\ \tilde{g}(x_{k}) \\ u_{k} \end{array} \right], \label{eq. intro to Koopman model} \end{align} where the matrices $A\in\mathbb{R}^{(n+N)\times (n+N)}$ and $B\in\mathbb{R}^{(n+N)\times p}$ are to be learned along with the feature maps $\tilde{g}$. Note that \eqref{eq. intro to Koopman model} is approximate since $\text{span}(g_1,\cdots,g_D)$ defined by \eqref{eq. def of obs} may not be invariant under the action of $\mathcal{K}$. Noticing that the first $n+N$ rows of \eqref{eq. intro to Koopman model} are enough to specify the evolution of the state $x_k$ s.t. \vspace{-1mm} \begin{align} \left[ \begin{array}{c} x_{k+1} \\ \tilde{g}(x_{k+1}) \end{array} \right] \approx A \left[ \begin{array}{c} x_{k} \\ \tilde{g}(x_{k}) \end{array} \right] + Bu_k, \label{eq.
Koopman model 1} \end{align} we are only interested in learning \eqref{eq. Koopman model 1}, and the last $p$ rows of $K_c$ in \eqref{eq. intro to Koopman model} are ignored in the subsequent formulations. From \eqref{eq. Koopman model 1}, the modeling error is defined as: \vspace{-2mm} \begin{align} r(x,u):= \left[ \begin{array}{c} F(x,u) \\ \tilde{g}(F(x,u)) \end{array} \right] - \left( A \left[ \begin{array}{c} x \\ \tilde{g}(x) \end{array} \right] + Bu \right), \label{eq. def of modeling error} \end{align} and its norm: \vspace{-4mm} \begin{align} \| r \|= \sqrt{ \int_{\mathcal{X}\times \mathcal{U}} \| r(x,u) \|_2^2 \, dx \, du }, \label{eq. norm of r} \end{align} may be used as a characteristic to evaluate the model; e.g., \eqref{eq. Koopman model 1} is exact almost everywhere if and only if $\|r\|=0$. The model \eqref{eq. Koopman model 1} is an LTI system in the new coordinates $[x_k^\mathsf{T}\ \tilde{g}(x_k)^\mathsf{T}]^\mathsf{T}$, and linear controller designs can be applied to control \eqref{eq. governing eq}. In this paper, the following feedback controller with a static gain $\bm{K}\in \mathbb{R}^{p\times (n+N)}$ is considered: \vspace{-1mm} \begin{align} u_k=\bm{K} [x_k^\mathsf{T}\ \tilde{g}(x_k)^\mathsf{T}]^\mathsf{T}. \label{eq. controller} \end{align} \section{Control-Consistent Learning of Koopman Embedding} \vspace{-1mm} \label{section 3} \subsection{Motivating Example} \label{section. motivating example} We consider the following one-dimensional system as a guiding example to motivate the proposed control-consistent learning: \vspace{-1mm} \begin{align} x_{k+1}={x^2_k} e^{-x_k}+u_k, \ \ x_k,u_k\in \mathbb{R}. \end{align} Suppose that we create Model 1, defined as: \vspace{-2mm} \begin{align} & \left[ \begin{array}{c} x_{k+1} \\ \hspace{-2mm}{x^2_{k+1}}e^{-x_{k\hspace{-0.5mm}+\hspace{-0.5mm}1}}\hspace{-2.5mm} \end{array} \right] \hspace{-1.5mm}\approx \hspace{-1mm} A\hspace{-1mm} \left[ \begin{array}{c} x_{k} \\ \hspace{-2mm}{x^2_k}e^{-x_k}\hspace{-2.5mm} \end{array} \right] \hspace{-1.5mm}+\hspace{-1mm} Bu_k,\hspace{-0.5mm} \left( \tilde{g}(x_k)\hspace{-1mm}=\hspace{-1mm} {x^2_k}e^{-x_{k}} \right)\hspace{-1mm}. & \label{eq. ex1 model 1} \end{align} From Proposition \ref{prop. state prediction} in Section \ref{section. second training}, perfect state prediction with no error is possible with $A$ and $B$ given in the following form: \vspace{-1mm} \begin{align} A= \left[ \begin{array}{cc} 0 & 1 \\ \alpha_1 & \alpha_2 \end{array} \right], B= \left[ \begin{array}{c} 1 \\ \alpha_3 \end{array} \right], \alpha_i\in \mathbb{R}. \end{align} The modeling error \eqref{eq. def of modeling error} is then represented by \vspace{-2mm} \begin{align} r(x_k,u_k)\hspace{-1mm}=\hspace{-1mm} \left[ \begin{array}{c} 0 \\ \begin{array}{l} \hspace{-4mm} ({x^2_k}e^{-x_k} + u_k)^2 \exp(-{x^2_k}e^{-x_{k}} \hspace{-1mm}-\hspace{-1mm} u_k) \\ \hspace{14mm}-\hspace{-0.1mm} \alpha_1 x_k \hspace{-1mm}-\hspace{-1mm} \alpha_2 {x^2_k}e^{-x_{k}} \hspace{-1mm}-\hspace{-1mm} \alpha_3 u_k \hspace{-4mm} \end{array} \end{array} \right]\hspace{-1mm}. \label{eq.
r1} \end{align} Suppose we also have Model 2, which has richer features: \vspace{-5mm} \begin{align} & \left[ \begin{array}{c} x_{k+1} \\ \hspace{-2.5mm}{x^2_{k+1}}e^{-x_{k\hspace{-0.5mm}+\hspace{-0.5mm}1}}\hspace{-2.8mm} \\ x_{k+1}^2 \end{array} \right] \hspace{-1.5mm}\approx\hspace{-1mm} A\hspace{-1.5mm} \left[ \begin{array}{c} x_{k} \\ \hspace{-2.5mm}{x^2_k}e^{-x_{k}}\hspace{-2.8mm} \\ x_{k}^2 \end{array} \right] \hspace{-2mm}+\hspace{-1mm} Bu_k, \hspace{-1mm} \left( \hspace{-1mm} \tilde{g}(x_k) \hspace{-1mm}=\hspace{-1.5mm} \left[ \begin{array}{c} \hspace{-2.3mm} {x^2_k}e^{\hspace{-0.5mm}-\hspace{-0.2mm}x_{k}}\hspace{-2.5mm} \\ x_k^2 \end{array} \right] \hspace{-0.5mm} \right)\hspace{-1.2mm}. & \label{eq. ex1 model 2} \end{align} Perfect state prediction can also be achieved with \vspace{-2mm} \begin{align} A= \left[ \begin{array}{ccc} 0 & 1 & 0 \\ \beta_1 & \beta_2 & \beta_3 \\ \beta_4 & \beta_5 & \beta_6 \end{array} \right], B= \left[ \begin{array}{c} 1 \\ \beta_7 \\ \beta_8 \end{array} \right], \beta_i\in \mathbb{R}. \end{align} In this case, however, the modeling error $r(x_k,u_k)$ takes a different form: \vspace{-2mm} \begin{align} r(x_k,\hspace{-0.7mm}u_k)\hspace{-1mm}=\hspace{-1.5mm} \left[ \begin{array}{c} 0 \\ \begin{array}{l} \hspace{-4mm} ({x^2_k}e^{-x_k} + u_k)^2 \exp(-{x^2_k}e^{-x_{k}}-u_k) \\ \hspace{7mm} - \beta_1 x_k - \beta_2 {x^2_k}e^{-x_{k}} - \beta_3 x_k^2 - \beta_7 u_k\hspace{-4mm} \end{array} \\ \begin{array}{l} \hspace{-4mm}({x^2_k}e^{-x_{k}}+u_k)^2 - \beta_4 x_k \\ \hspace{18mm} - \beta_5 {x^2_k}e^{-x_{k}} - \beta_6 x_k^2 - \beta_8 u_k\hspace{-4mm} \end{array} \end{array} \right]\hspace{-1mm}. \label{eq. r2} \end{align} Figures \ref{subfig. ex1 heat map r1} and \ref{subfig. ex1 heat map r2} show the heat maps of $\|r(x_k,u_k)\|_2$, where the model parameters $\alpha_i$ and $\beta_i$ are obtained by EDMD\cite{KORDA_Koopman_MPC}. Model 2 in \eqref{eq. ex1 model 2} is highly erroneous for $x_k<0$ compared to Model 1 in \eqref{eq. ex1 model 1}, and the modeling error accumulates according to \eqref{eq. error accumulation of control model} in Section \ref{section. second training}, leading to undesirable closed-loop behavior. Figure \ref{fig. control performance ex1} shows the controller performance of both models, where $\bm{K}$ is computed as an LQR gain with the cost function $\sum_{k=0}^{\infty}{x^2_k}+{u^2_k}$. Despite the fact that both models achieve exactly zero state-prediction error, the controller designed for Model 2 causes an undesirable oscillation, even though the open-loop dynamics quickly converges to the origin after $k=1$. It is also emphasized that, from the state-prediction point of view, complex and large models such as Model 2 may seem preferable in general, since they are more likely to achieve better state prediction by Corollary \ref{col. state prediction} in Section \ref{section. second training}. To deal with this degradation of the controller performance, which is not revealed by the state-prediction accuracy, a control-consistent learning approach is proposed along with a simple data sampling strategy. \vspace{-0.5mm} \begin{figure} \centering \begin{subfigure}{0.45\linewidth} \centering \includegraphics[width=0.95\linewidth]{fig/ex1/r1.png} \caption{Model 1 \eqref{eq. r1}.} \label{subfig. ex1 heat map r1} \end{subfigure} \begin{subfigure}{0.45\linewidth} \centering \includegraphics[width=0.95\linewidth]{fig/ex1/r2.png} \caption{Model 2 \eqref{eq. r2}.} \label{subfig.
ex1 heat map r2} \end{subfigure} \begin{subfigure}{0.45\linewidth} \centering \includegraphics[width=0.95\linewidth]{fig/ex1/ex1_cl.png} \caption{Control simulations.} \label{fig. control performance ex1} \end{subfigure} \caption{\em (a), (b): Heat maps of $\|r(x_k,u_k)\|_2$. (c): Controller performance with $x_0=-3.4298$.} \vspace{-7mm} \end{figure} \vspace{-3mm} \subsection{Modeling Errors of Koopman Control Models} \vspace{-2mm} Several methods have been proposed to learn the model \eqref{eq. Koopman model 1}, among which the most straightforward one is to first specify the feature maps $\tilde{g}$ and then infer $A$ and $B$ from data. This reduces to a linear regression problem, and a unique solution can be obtained in the same manner as in Dynamic Mode Decomposition (DMD)\cite{P_shmidt_DMD_2008,Spectral_analysis_of_nonlinear_flows}. This learning algorithm with nonlinear observables is called Extended Dynamic Mode Decomposition (EDMD)\cite{Williams2015}, based on which many data-driven Koopman controller designs have been developed\cite{KORDA_Koopman_MPC,Koopman_Lyapunov_based_MPC,Model_based_control,PEITZ_switched_control,Soft_robot,data-driven_Koopman_H2,local_Koopman,soft_robot_arm,tube_based_MPC,Koopman_generators_Peitz}. An important feature of the EDMD algorithm is its convergence property. Letting $\mathcal{F}=L_2(\mathcal{X}\times l(\mathcal{U}))$, the approximation obtained by EDMD is, under several assumptions, shown to converge to the true Koopman operator in the strong operator topology as the number $M$ of data points and the number $D$ of observables $g_i$ tend to infinity\cite{On_convergence_Klus,on_convergence_of_EDMD}. Specifically, this convergence property is stated as follows: for all $g\in L_2(\mathcal{X}\times l(\mathcal{U}))$, \vspace{-2mm} \begin{align} \lim_{D\rightarrow \infty} &\int_{\mathcal{X}\times l(\mathcal{U})} |(P_D^\mu \mathcal{K})P_D^\mu g - \mathcal{K}g|^2d\mu =0, & \label{eq. convergence of EDMD 1} \end{align} where $P_D^\mu$ and $\mu$ are the $L_2$ projection onto $\text{span}(g_1,\cdots,g_D)$ and a measure with which $L_2(\mathcal{X}\times l(\mathcal{U}))$ is endowed, respectively. The operator $P_D^\mu \mathcal{K}$ in \eqref{eq. convergence of EDMD 1} is related to the finite-dimensional approximation $K$ computed by EDMD in the following manner: \vspace{-0.5mm} \begin{align} &\lim_{M\rightarrow \infty} \| a^\mathsf{T} K[g_1\cdots g_D]^\mathsf{T} - P_D^{\mu}\mathcal{K}g\|=0,& \nonumber \\ & \forall g=a^\mathsf{T} [g_1\cdots g_D]^\mathsf{T} \in \text{span}(g_1,\cdots,g_D), a\in \mathbb{R}^D, & \label{eq. convergence of EDMD 2} \end{align} where $\|\cdot \|$ is an arbitrary norm on $\text{span}(g_1,\cdots,g_D)$. The convergence property \eqref{eq. convergence of EDMD 1} implies that data-driven Koopman models can provide reasonable approximations with sufficiently large $M$ and $D$. However, this is not the case for non-autonomous systems, as seen in the following remark. \vspace{2mm} \begin{remark} \label{remark convergenve property does not hold} Given feature maps $\tilde{g}(x_k)$, the approximation $K_c$ in \eqref{eq. intro to Koopman model} does not possess the convergence property \eqref{eq. convergence of EDMD 1} since, for \eqref{eq. convergence of EDMD 1} to hold, it is necessary that $\{g_i\}_{i=1}^D$ become an orthonormal basis of $L_2(\mathcal{X}\times l(\mathcal{U}))$ as $D\rightarrow \infty$\cite{on_convergence_of_EDMD}.
As described in \cite{KORDA_Koopman_MPC}, no elements of $\bm{U}=(u_k,u_{k+1},\cdots)$ except for the first one $u_k$ enter the definition \eqref{eq. def of obs}, and it is obvious that such observables cannot form any basis of $L_2(\mathcal{X}\times l(\mathcal{U}))$. \end{remark} \vspace{2mm} The lack of a convergence property for the model \eqref{eq. intro to Koopman model} or \eqref{eq. Koopman model 1} implies that $\|r\|$ may not be negligibly small even if $D$ and $M$ are sufficiently large. For this reason, we adopt another learning formulation, where the nonlinear feature maps $\tilde{g}$ are also learned from data along with the matrices $A$ and $B$. \vspace{-1mm} \subsection{Initial Training: Simultaneous Learning of the Feature Maps and the System Matrices} \vspace{-1mm} Another class of methods to learn the model \eqref{eq. Koopman model 1} estimates the system matrices $A,B$, and the feature maps $\tilde{g}$ simultaneously. The resulting models are expected to achieve better predictive accuracy than those of linear formulations such as EDMD, since they have greater model expressivity, with the feature maps $\tilde{g}$ also learned from data along with the matrices $A$ and $B$. In particular, the use of neural networks has been shown to be promising for Koopman operator-based modeling, analyses, and control\cite{Learning_DNN_ACC2019,Learning_Koopman_Invariant_Subspaces,Physics-based_robabilistic_learning,deep_learning_representation_CDC,Wiener_MIMO_ID}. Hence, the proposed method characterizes the feature maps $\tilde{g}$ in \eqref{eq. Koopman model 1} as a neural network aiming at high predictive accuracy and solves a nonlinear regression problem to learn $A,B$, and $\tilde{g}$, which is formulated as Step \ref{problem. initial training}. \begin{table} \centering \begin{screen} \begin{prob} \label{problem. initial training} \rm{} \ (Initial Training) \\ Find $g$, $A\in\mathbb{R}^{(n+N)\times (n+N)}$, and $B\in \mathbb{R}^{(n+N)\times p}$ s.t. \vspace{-2mm} \begin{align} &\left\{ g,\ A,\ B \right\}= \underset{\left\{ g,A,B \right\}}{ \text{argmin} }\ J(g,A,B), & \label{eq. argmin of learning} \\ & \text{where } J(g,A,B) := \lambda_1 \| AG_x + BU - G_y \|_F^2 &\nonumber \\ & \hspace{16mm}+ \lambda_2 \| W ( AG_x + BU ) - Y \|_F^2, & \label{eq. loss of initial training} \\ & G_x:= [g(x_1)\cdots g(x_{M})], \ U:= [u_1\cdots u_{M}], &\nonumber \\ & G_y:= [g(y_1)\cdots g(y_{M})], \ Y := [y_1\cdots y_{M}], &\nonumber \\ & y_k=F(x_k,u_k), \ k=1,\cdots, M, &\nonumber \\ & W:= \left[ \begin{array}{cc} I & 0 \end{array} \right], & \label{eq. decoder} \\[-1ex] & g:\mathcal{X}\rightarrow \mathbb{R}^{n+N}: x_k\mapsto \left[ \begin{array}{c} x_k \\ \tilde{g}(x_k) \end{array} \right],& \label{eq. g subsystemized} \\ & \tilde{g}(x_k)= \text{NN}(x_k;w) \ (\text{a neural network}). & \end{align} \end{prob} \end{screen} \end{table} The loss function $J(g,A,B)$ in \eqref{eq. loss of initial training} consists of two terms. The first one, weighted by the hyperparameter $\lambda_1$, accounts for (approximately) minimizing $\|r\|$ in \eqref{eq. norm of r}. The other, weighted by the hyperparameter $\lambda_2$, directly minimizes the state-reconstruction error by applying the decoder $W$ to the model prediction and comparing it with the state $y_k$. While the decoder could also be characterized by a neural network in general, the specific structure \eqref{eq. def of obs} of the observables, which explicitly includes the state $x_k$, allows the analytical expression of the decoder $W$ in \eqref{eq. decoder}.
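As a concrete illustration, a minimal TensorFlow sketch of the loss \eqref{eq. loss of initial training} is given below; the dimensions, layer sizes, and weights are illustrative assumptions rather than the exact settings used in our experiments. \begin{verbatim}
import tensorflow as tf

# Illustrative dimensions and weights (assumptions, not the
# exact settings of the experiments in this paper).
n, N, p = 2, 1, 1        # state, feature, and input dimensions
lam1, lam2 = 1.0, 1.0    # hyperparameters lambda_1 and lambda_2

# Feature maps g_tilde = NN(x; w): a small feed-forward network.
g_tilde = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="swish", input_shape=(n,)),
    tf.keras.layers.Dense(N),
])
A = tf.Variable(tf.eye(n + N))          # system matrix A
B = tf.Variable(tf.zeros((n + N, p)))   # input matrix B

def lift(x):
    # g(x) = [x; g_tilde(x)]: the observable structure of Step 1.
    return tf.concat([x, g_tilde(x)], axis=-1)

def loss(x, u, y):
    # x, y: (M, n) batches with y_k = F(x_k, u_k); u: (M, p).
    gx, gy = lift(x), lift(y)
    pred = gx @ tf.transpose(A) + u @ tf.transpose(B)
    residual = pred - gy       # rows of A G_x + B U - G_y
    recon = pred[:, :n] - y    # rows of W (A G_x + B U) - Y
    return (lam1 * tf.reduce_sum(residual ** 2)
            + lam2 * tf.reduce_sum(recon ** 2))
\end{verbatim} Minimizing this loss over the network weights $w$ and the entries of $A$ and $B$, e.g., with a stochastic gradient-based optimizer, realizes the simultaneous learning of the feature maps and the system matrices.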
The nonlinear feature maps $\tilde{g}$ are defined as a fully-connected feed-forward neural network: \vspace{-2mm} \begin{align} &\text{NN}\hspace{-0.3mm}(\hspace{-0.5mm}x_k\hspace{-0.2mm};\hspace{-0.5mm}w\hspace{-0.5mm}) \hspace{-1mm} := \hspace{-1mm} \sigma \hspace{-0.7mm} \left( \Theta_{l} \sigma \hspace{-0.5mm} \left( \hspace{-0.5mm}\cdots\hspace{-0.5mm} \Theta_2 \sigma \hspace{-0.5mm} \left( \Theta_1 x_k \hspace{-1mm}+\hspace{-0.5mm} b_1 \right) \hspace{-0.7mm}+\hspace{-0.8mm} b_2 \hspace{-0.5mm}\cdots\hspace{-0.5mm} \right) \hspace{-0.7mm}+\hspace{-0.7mm} b_l \right)\hspace{-1mm}, & \label{eq. NN} \\ & w:=\{ \Theta_{i}, b_i \}_{i=1}^l, & \end{align} where $\Theta_i$, $b_i$, and $\sigma$ are a kernel, a bias, and an activation function, respectively. In this paper, we only consider continuous activation functions so that \eqref{eq. NN} is continuous. \vspace{-2mm} \subsection{Second Training: Modification of the Initial Model} \vspace{-1mm} \label{section. second training} Characterizing the observables by a neural network allows greater expressivity, which can lead to higher accuracy of the data-driven model if the optimization problem is solved successfully. On the other hand, from the controller-design perspective, including high-order nonlinearities in the observables is not preferable, since the modeling error may then introduce unexpected or undesirable effects on the closed-loop system and could even render the actual closed-loop system unstable, whereas it is easier to eliminate the effect of the modeling error on the state prediction. Specifically, the state prediction is implemented as follows: \vspace{-2mm} \begin{align} &x^{\text{est}}_{k+1} \hspace{-1mm}=\hspace{-1mm} \underset{=W}{ \underbrace{ \left[ \begin{array}{cc} \hspace{-1mm}I\hspace{-0.2mm} & \hspace{-0.2mm}0\hspace{-1mm} \end{array} \right]}} \hspace{-1mm} \left\{ \hspace{-0.5mm} A \left[ \begin{array}{c} \hspace{-1mm}x^{\text{est}}_{k}\hspace{-1mm} \\ \hspace{-1mm}\tilde{g}(x^{\text{est}}_{k})\hspace{-1mm} \end{array} \right] \hspace{-1mm}+\hspace{-1mm} Bu_k \hspace{-0.5mm} \right\}\hspace{-1mm}, k=0,\hspace{-0.5mm}1,\cdots\hspace{-0.5mm}, & \label{eq. state prediction} \end{align} where $x^\text{est}_k$ denotes the state prediction at time $k$ and $x^\text{est}_0=x_0$ is given. From \eqref{eq. state prediction}, the effect of the modeling error on the state prediction is evaluated as: \vspace{-2mm} \begin{align} x_{k+1} \hspace{-1mm}=\hspace{-1mm} \left[ \begin{array}{cc} \hspace{-1mm}I\hspace{-0.5mm} & \hspace{-0.5mm}0\hspace{-1mm} \end{array} \right] \hspace{-1mm} \left\{ \hspace{-1mm} A \left[ \begin{array}{c} \hspace{-1mm}x_{k}\hspace{-1mm} \\ \hspace{-1mm}\tilde{g}(x_{k})\hspace{-1mm} \end{array} \right] \hspace{-1mm}+\hspace{-1mm} Bu_k \hspace{-1mm} \right\} \hspace{-1mm}+\hspace{-1mm} \left[ \begin{array}{cc} \hspace{-1mm}I\hspace{-0.5mm} & \hspace{-0.5mm}0\hspace{-1mm} \end{array} \right] \hspace{-1mm} r(x_k\hspace{-.2mm},\hspace{-0.2mm}u_k). \label{eq. state prediction exact relation} \end{align} \vspace{2mm} \begin{prop} \label{prop. state prediction} Let $x_k$ and $u_k$ be arbitrary. There exist $\tilde{g}$, $A_1\in \mathbb{R}^{n\times n}$, $A_2\in \mathbb{R}^{n\times N}$ and $B_1\in \mathbb{R}^{n\times p}$ s.t. \vspace{-2mm} \begin{align} F(x_k,u_k)= A_1 x_k + A_2 \tilde{g}(x_k) + B_1 u_k, \label{eq.
condition for perfect state prediction} \end{align} if and only if \vspace{-2mm} \begin{align} \left[ \begin{array}{cc} \hspace{-1mm}I\hspace{-0.5mm} & \hspace{-0.5mm}0\hspace{-1mm} \end{array} \right] r(x_k,u_k)=0, \end{align} i.e., \eqref{eq. state prediction} has no state-prediction error. \end{prop} \begin{proof} Let $[A_1\ A_2]\in \mathbb{R}^{n\times (n+N)}$ and $B_1\in \mathbb{R}^{n\times p}$ be the first $n$ rows of $A$ and $B$ in \eqref{eq. state prediction exact relation}, respectively. From \eqref{eq. governing eq}, the equation \eqref{eq. state prediction exact relation} reads \vspace{-2mm} \begin{align} F(x_k,u_k) \hspace{-1mm}=\hspace{-1mm} A_1 x_k \hspace{-1mm}+\hspace{-1mm} A_2 \tilde{g}(x_k) \hspace{-1mm}+\hspace{-1mm} B_1 u_k \hspace{-1mm}+\hspace{-1mm} \left[ \begin{array}{cc} \hspace{-1mm}I\hspace{-0.7mm} & \hspace{-0.7mm}0\hspace{-1mm} \end{array} \right]\hspace{-1mm} r(x_k,u_k), \end{align} which implies the statement of the proposition. \end{proof} \vspace{2mm} Note that there exist $\tilde{g}$, $A_1$, $A_2$, and $B_1$ that satisfy \eqref{eq. condition for perfect state prediction} with $A_2=0$ or $\tilde{g}(x_k)\equiv 0$ if and only if the original dynamics \eqref{eq. governing eq} is linear. \vspace{2mm} \begin{col} \label{col. state prediction} Suppose \eqref{eq. governing eq} is nonlinear. It is necessary and sufficient for \eqref{eq. state prediction} to be able to achieve zero state-prediction error that the feature maps $\tilde{g}$ include a sufficient number of distinct features to reconstruct $F$. \end{col} \vspace{2mm} While the accuracy of the state prediction of the model \eqref{eq. Koopman model 1} is evaluated by Proposition \ref{prop. state prediction}, its accuracy in terms of controller design is characterized in a different way. Note that the system to be controlled by the feedback controller $\bm{K}$ in \eqref{eq. controller} is assumed to be a linear time-invariant system: \vspace{-2mm} \begin{align} \xi_{k+1} = A\xi_k + Bu_k, \ \ \xi_k \in \mathbb{R}^{n+N}, \label{eq. control model} \end{align} and we can only ensure properties of the closed-loop system: \vspace{-5mm} \begin{align} \xi_{k+1} &= (A+B\bm{K})\xi_{k} = (A+B\bm{K})^{k+1}\xi_{0}. & \end{align} Clearly, \eqref{eq. control model} is identical to the Koopman control model \eqref{eq. Koopman model 1} if $\|r\|=0$ and $\xi_0=[x_0^\mathsf{T}\ \tilde{g}(x_0)^\mathsf{T}]^\mathsf{T}$. However, in general cases where $r(x_k,u_k)\not\equiv 0$, the modeling error may persist at all times and accumulate as follows: \vspace{-2mm} \begin{align} \left[ \begin{array}{c} \hspace{-1mm}x_{k+1}\hspace{-1mm} \\ \hspace{-1mm}\tilde{g}(x_{k+1})\hspace{-1mm} \end{array} \right] &= (A+B\bm{K})^{k+1} \left[ \begin{array}{c} \hspace{-1mm}x_{0}\hspace{-1mm} \\ \hspace{-1mm}\tilde{g}(x_{0})\hspace{-1mm} \end{array} \right] &\nonumber \\ &\hspace{-15mm} + \sum_{i=0}^{k} (A+B\bm{K})^i r \left( x_{k-i}, \bm{K} \left[ \begin{array}{c} \hspace{-1mm}x_{k-i}\hspace{-1mm} \\ \hspace{-1mm}\tilde{g}(x_{k-i})\hspace{-1mm} \end{array} \right] \right), & \label{eq. error accumulation of control model} \end{align} in which \eqref{eq. controller} is substituted. The second term of the r.h.s. of \eqref{eq. error accumulation of control model} represents the discrepancy between \eqref{eq. control model} and \eqref{eq. Koopman model 1}, which directly leads to the degradation of the controller performance. Since the error propagation in \eqref{eq.
error accumulation of control model} reflects all components of $r(x_k,u_k)$, unlike the state prediction in \eqref{eq. state prediction exact relation}, the actual closed-loop system can greatly suffer from the modeling error depending on the design and/or the dimensions of $\tilde{g}$, as illustrated in Section \ref{section. motivating example}. In the proposed method, considering that the controller performance of the Koopman control model \eqref{eq. Koopman model 1} can deteriorate due to the modeling error (while it is relatively easy to achieve high predictive accuracy, as shown in Corollary \ref{col. state prediction}), the second training process formulated as Step \ref{prob. second training} follows Step \ref{problem. initial training} in case the control objective is not achieved by the initially learned model. Specifically, data points sampled from the closed-loop system formed by \eqref{eq. controller} are exclusively used so that the learning problem can directly minimize the modeling error in the regime of the closed-loop dynamics. The modification of the initial model is implemented by updating the matrices $A$ and $B$ to $A+\Delta A$ and $B+\Delta B$, where we impose the constraint that the modified model retain dynamics close to those of the initial one. This constraint prevents the additional learning, which retrains the model with closed-loop data only, from degrading the high predictive accuracy obtained in the initial training. Specifically, $\Delta A$ and $\Delta B$ are constrained so that their induced 2-norms $\| \Delta A \|:=\text{sup}_{\|x\|=1}\| \Delta A x \|_2$ and $\|\Delta B\|$ are bounded by the hyperparameters $\epsilon_A$ and $\epsilon_B$, respectively. \begin{table} \centering \begin{screen} \begin{prob} \label{prob. second training} \rm{} (Modification of the initial model) \\ Given $g$, $A$, $B$, and a controller gain $\bm{K}$ that is designed for ($A$, $B$), find $\Delta A$ and $\Delta B$ s.t. \vspace{-2mm} \begin{align} &\left\{ \Delta A,\ \Delta B \right\}:= \underset{\left\{ \Delta A,\Delta B \right\}}{ \text{argmin} }\ J_c(\Delta A,\Delta B), & \label{eq. argmin of learning 2} \\ & \text{subject to:} \hspace{5mm} \| \Delta A \|\leq \epsilon_A, & \\ &\hspace{20mm} \| \Delta B \|\leq \epsilon_B, & \\[-1ex] &\text{where}&\nonumber \\[-1ex] &J_c(\Delta A,\Delta B) \hspace{-1mm}:=\hspace{-1mm} \hspace{0mm} \lambda_1 \| (A \hspace{-1mm}+\hspace{-1mm} \Delta A) G_x \hspace{-0.5mm}+\hspace{-0.5mm} (B \hspace{-1mm}+\hspace{-1mm} \Delta B)U \hspace{-0.5mm}-\hspace{-0.5mm} G_y \|_F^2 &\nonumber \\ &\hspace{-1mm} +\hspace{-1mm} \lambda_2 \| W ( (A \hspace{-1mm}+\hspace{-1mm} \Delta A) G_x \hspace{-0.5mm} + \hspace{-0.5mm} (B \hspace{-1mm}+\hspace{-1mm} \Delta B)U ) - Y \|_F^2, & \label{eq. loss of second training} \\ & y_k=F(x_k,u_k), u_k=\bm{K}g(x_k). &\nonumber \end{align} \end{prob} \end{screen} \vspace{-5mm} \end{table} \vspace{-1mm} \subsection{Restricting the Input Signal of the Data Set} \vspace{0mm} \label{section. using restricted inputs} In addition to the modification of the initially learned model, we also propose a data sampling strategy that generates the input signal $u_k$ deterministically such that $(u_k,u_{k+1},\cdots)$ is a sequence of data points sampled from continuous functions. From the modeling perspective, it is in general advisable to sample data points $(x_k, u_k)$ randomly, i.e., from some probability distribution. Indeed, assuming that the data points $(x_k, u_k)$ are i.i.d.
\subsection{Restricting the Input Signal of the Data Set}
\label{section. using restricted inputs}
In addition to the modification of the initially learned model, we also propose a data sampling strategy that generates the input signal $u_k$ deterministically, such that $(u_k,u_{k+1},\cdots)$ is a sequence of data points sampled from continuous functions. From the modeling perspective, it is in general advisable to sample data points $(x_k, u_k)$ randomly, i.e., from some probability distribution. Indeed, assuming that the data points $(x_k, u_k)$ are i.i.d. random variables, minimizing the loss function $J(g,A,B)$ in \eqref{eq. loss of initial training} corresponds to minimizing the norm \eqref{eq. norm of r} of the modeling error $r$, since the first term of $J(g,A,B)$ is related to $\|r\|$ as
\begin{align}
\cfrac{1}{M} \| AG_x + BU - G_y \|_F^2 &= \cfrac{1}{M} \sum_{k=1}^{M} \| r(x_k,u_k) \|_2^2 \nonumber \\
&\overset{\text{a.s.}}{ \rightarrow } \int_{\mathcal{X}\times \mathcal{U}} \| r(x,u) \|_2^2\, dxdu = \|r\|^2 \ \ (M\rightarrow \infty),
\label{eq. relation of loss and the norm of r}
\end{align}
where the almost sure convergence follows from the strong law of large numbers. However, sampling a very large number of data points across the entire space is often impractical due to limitations of experimental or computational resources. Indeed, for non-autonomous systems, the space from which the data $(x_k,u_k)$ are sampled is the product space $\mathcal{X}\times \mathcal{U}$, not the original state space $\mathcal{X}$, and it is especially difficult to sample enough data in applications. As a result, learned models may be overfitted or biased and need not exhibit a modeling-error profile consistent with the analysis in \eqref{eq. relation of loss and the norm of r}. In this paper, we propose to use only deterministic $u_k$ sampled from continuous functions. Since the solution $x(t)$ to \eqref{eq. governing eq in ODE} is assumed to be continuous and $\tilde{g}$ defined in \eqref{eq. NN} is also continuous, the control inputs \eqref{eq. controller} are always discretized points sampled from continuous functions. Hence, for the same reason as the modification of the initial model in Section \ref{section. second training}, we use only $u_k$ sampled from continuous functions, so that Step \ref{problem. initial training} minimizes $J(g,A,B)$ only over the regime of dynamics realizable by the controller \eqref{eq. controller}. This strategy does not affect learning the autonomous part $A$ of the model, since $u_k$ enters the dynamics of the model only through $B$, which is separate from $A$ as in \eqref{eq. Koopman model 1}.
\section{Numerical Examples}
\label{section. numerical examples}
In this section, the control objective is to stabilize the system at the origin while minimizing the cost, and LQR is used for the controller design.
\subsection{Simple Pendulum}
The simple pendulum is considered as the first example:
\begin{align}
\ddot{\theta} = -\sin\theta + u,\ \ (x_1:=\theta, x_2:=\dot{\theta}).
\label{eq. pendulum}
\end{align}
We collect 300 data sets generated by \eqref{eq. pendulum}, each consisting of a single trajectory of 50 steps with sampling period $\Delta t=0.1$, starting from an initial condition $x_0\sim \text{Uniform}[-3,3]^2$. The Runge-Kutta method with a step size of 0.01 is used to solve \eqref{eq. pendulum}. We include one nonlinear feature $\tilde{g}(x_k)\in \mathbb{R}$ ($N=1$) in the model, namely a neural network with a single hidden layer of 10 neurons and the swish activation function. Step \ref{problem. initial training} is implemented in TensorFlow. The following two types of $u_k$ are considered to evaluate the efficacy of the sampling strategy in Section \ref{section. using restricted inputs}:
\begin{align}
u_k&\sim \text{Uniform}[-1,1], & \label{eq. uk random} \\
u_k&= \cos (\omega_i k\Delta t), \ \omega_i:= 20i, \ i=0,1,\cdots,5. & \label{eq. uk cos}
\end{align}
In the proposed method adopting \eqref{eq. uk cos}, the single-trajectory data sets are split evenly into six groups $\mathcal{D}_i$, and the input $u_k$ with frequency $\omega_i$ is used for the trajectories in $\mathcal{D}_i$ ($i=0,1,\cdots,5$). Figure \ref{fig. pendulum} shows the results of the models obtained by Step \ref{problem. initial training}, where the state predictions (Figs. \ref{subfig. pendulum pred random} and \ref{subfig. pendulum pred cos}) are computed according to \eqref{eq. state prediction} and the control simulations (Figs. \ref{subfig. pendulum cl random} and \ref{subfig. pendulum cl cos}) use LQR gains computed with the cost $\sum_{k=0}^{\infty} 100x_{k,1}^2+x_{k,2}^2+u_k^2$. Whereas the state-prediction accuracy barely changes between the two types of $u_k$, the controller performance deteriorates greatly when the model is trained with the randomly generated input \eqref{eq. uk random}. The controller designed for the model trained with the proposed sampling strategy successfully makes the state converge to the origin. Since the control objective is already achieved by the initial model, Step \ref{prob. second training} is not applied in this example.
\begin{figure}
\centering
\begin{subfigure}{0.45\linewidth} \centering \includegraphics[width=0.95\linewidth]{fig/pendulum_const_off/figs/pred_g_constrained.png} \caption{State prediction of the model trained with \eqref{eq. uk random}.} \label{subfig. pendulum pred random} \end{subfigure}
\begin{subfigure}{0.45\linewidth} \centering \includegraphics[width=0.95\linewidth]{fig/pendulum_const_off/figs/cl_g_constrained.png} \caption{Controller performance of the model trained with \eqref{eq. uk random}.} \label{subfig. pendulum cl random} \end{subfigure}
\begin{subfigure}{0.45\linewidth} \centering \includegraphics[width=0.95\linewidth]{fig/pendulum/figs/pred_g_constrained.png} \caption{State prediction of the model trained with \eqref{eq. uk cos}.} \label{subfig. pendulum pred cos} \end{subfigure}
\begin{subfigure}{0.45\linewidth} \centering \includegraphics[width=0.95\linewidth]{fig/pendulum/figs/cl_g_constrained.png} \caption{Controller performance of the model trained with \eqref{eq. uk cos}.} \label{subfig. pendulum cl cos} \end{subfigure}
\caption{\em Results of the simple pendulum.}
\label{fig. pendulum}
\end{figure}
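As a concrete illustration of the data-collection protocol above, the following is a minimal Python sketch. It assumes a zero-order hold of $u_k$ over each sampling period and the even grouping of the 300 trajectories over the six frequencies; the RK4 integrator and step sizes follow the text.
\begin{verbatim}
import numpy as np

def pendulum_rhs(x, u):
    # x = [theta, theta_dot]; dynamics: theta'' = -sin(theta) + u.
    return np.array([x[1], -np.sin(x[0]) + u])

def rk4_step(x, u, h):
    k1 = pendulum_rhs(x, u)
    k2 = pendulum_rhs(x + 0.5 * h * k1, u)
    k3 = pendulum_rhs(x + 0.5 * h * k2, u)
    k4 = pendulum_rhs(x + h * k3, u)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def collect_trajectory(omega, steps=50, dt=0.1, h=0.01):
    # One trajectory with deterministic input u_k = cos(omega k dt),
    # held constant over each sampling period (zero-order hold).
    x = np.random.uniform(-3, 3, size=2)
    X, Uin, Y = [], [], []
    for k in range(steps):
        u = np.cos(omega * k * dt)
        x_next = x.copy()
        for _ in range(int(round(dt / h))):  # sub-step the ODE solver
            x_next = rk4_step(x_next, u, h)
        X.append(x); Uin.append(u); Y.append(x_next)
        x = x_next
    return np.array(X), np.array(Uin), np.array(Y)

# 300 trajectories split evenly over the frequencies omega_i = 20 i.
data = [collect_trajectory(20 * (j % 6)) for j in range(300)]
\end{verbatim}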
\subsection{Inverted Pendulum on a Cart}
The inverted pendulum on a cart is considered as the second example, whose dynamics is given as follows\cite{Data_driven_book}:
\begin{align}
\left\{
\begin{array}{l}
\dot{x}_1=x_2, \\
\dot{x}_2= \cfrac{ -m^2L^2g\cos x_3\sin x_3 + mL^2A(x_2,x_3,x_4) + mL^2u }{mL^2(M+m(1-\cos^2 x_3))}, \\
\dot{x}_3=x_4, \\
\dot{x}_4= \cfrac{ (m+M)mgL\sin x_3 - mL\cos x_3\, A(x_2,x_3,x_4) + mL\cos x_3\, u }{mL^2(M+m(1-\cos^2 x_3))},
\end{array}
\right. \nonumber
\end{align}
where $A(x_2,x_3,x_4)=mL{x_4}^2\sin x_3 -\delta x_2$, $m=1$, $M=5$, $L=2$, $g=-10$, and $\delta=1$. Step \ref{problem. initial training} is applied under the same conditions as in the first example, except for the width of the hidden layer, which is changed to 25 neurons. Also, only the deterministic sampling \eqref{eq. uk cos} of $u_k$ is considered in this example. For the controller design, the cost is defined as $\sum_{k=0}^{\infty} 100x_{k,1}^2+x_{k,2}^2+100x_{k,3}^2+x_{k,4}^2+u_k^2$. While the initially learned model has reasonable predictive accuracy, as shown in Fig. \ref{subfig. invp pred initial}, the controller performance suffers from the effect of the modeling error, as shown in Fig. \ref{subfig. invp control initial}. Thus, Step \ref{prob. second training} is applied to modify the model, for which we additionally collect data points in the same way as in Step \ref{problem. initial training}. The TensorFlow Constrained Optimization module\cite{tfco_paper} is used to solve the optimization problem in Step \ref{prob. second training} with $\epsilon_A=\epsilon_B=0.1$. Figure \ref{subfig. invp control modified} shows that the modified model achieves the control objective. As a more quantitative analysis, we estimate the basin of attraction of the closed-loop systems by testing various initial conditions; the results are shown in Figs. \ref{subfig. invp basin of attraction initial} and \ref{subfig. invp basin of attraction modified}. Some initial conditions do not converge to the origin for the closed-loop system formed with the initial model, whereas the entire tested set of initial conditions lies in the basin of attraction for the system formed with the modified model. Moreover, thanks to the constraints on $\Delta A$ and $\Delta B$, the modified model retains good state-prediction accuracy comparable to that of the initial model (Figs. \ref{subfig. invp pred initial} and \ref{subfig. invp pred modified}). Finally, the profiles of the state-prediction errors are evaluated. Figures \ref{subfig. invp error contour initial} and \ref{subfig. invp error contour modified} show heat maps of $\| [I\ 0]r(x,0) \|_2$ with $x_2=x_4=0$, overlaid with the data points used in Step \ref{problem. initial training}. It is confirmed that both models have quite similar error profiles and that the good state-prediction accuracy of the initial model is successfully preserved after the modification.
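For reference, the LQR gains used throughout this section can be computed from the learned lifted model as in the short Python sketch below. The matrices $A$ and $B$ stand for the learned lifted-model matrices (random placeholders here), the feature dimension $N$ is an illustrative value, and the state-cost weights act only on the physical coordinates of $\xi_k=[x_k^\mathsf{T}\ \tilde{g}(x_k)^\mathsf{T}]^\mathsf{T}$; note that SciPy's sign convention yields $u_k=-K\xi_k$.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_discrete_are

def lqr_gain(A, B, Q, R):
    # Discrete-time LQR: u_k = -K xi_k minimizing
    # sum_k xi_k' Q xi_k + u_k' R u_k for xi_{k+1} = A xi_k + B u_k.
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

n, N, p = 4, 10, 1       # states, learned features, inputs (illustrative)
Q = np.zeros((n + N, n + N))
Q[:n, :n] = np.diag([100.0, 1.0, 100.0, 1.0])  # weights on x only
R = np.eye(p)
rng = np.random.default_rng(0)
A = 0.05 * rng.standard_normal((n + N, n + N))  # placeholder learned A
B = rng.standard_normal((n + N, p))             # placeholder learned B
K = lqr_gain(A, B, Q, R)  # feedback u_k = -K [x_k; g_tilde(x_k)]
\end{verbatim}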
\begin{figure}
\centering
\begin{subfigure}{0.45\linewidth} \centering \includegraphics[width=0.95\linewidth]{fig/invp/figs/pred_g_constrained.png} \caption{State prediction of the initial model.} \label{subfig. invp pred initial} \end{subfigure}
\begin{subfigure}{0.45\linewidth} \centering \includegraphics[width=0.95\linewidth]{fig/invp/figs/pred_gAB_constrained.png} \caption{State prediction of the modified model.} \label{subfig. invp pred modified} \end{subfigure}
\begin{subfigure}{0.45\linewidth} \centering \includegraphics[width=0.95\linewidth]{fig/invp/error_contour_plot/inverted_pendulum/1/fold_2/g_constrained/error_contour.png} \caption{State prediction error of the initial model.} \label{subfig. invp error contour initial} \end{subfigure}
\begin{subfigure}{0.45\linewidth} \centering \includegraphics[width=0.95\linewidth]{fig/invp/error_contour_plot/inverted_pendulum/1/fold_2/gAB_constrained/error_contour.png} \caption{State prediction error of the modified model.} \label{subfig. invp error contour modified} \end{subfigure}
\begin{subfigure}{0.45\linewidth} \centering \includegraphics[width=0.95\linewidth]{fig/invp/figs/cl_g_constrained.png} \caption{Controller performance of the initial model.} \label{subfig. invp control initial} \end{subfigure}
\begin{subfigure}{0.45\linewidth} \centering \includegraphics[width=0.95\linewidth]{fig/invp/figs/cl_gAB_constrained.png} \caption{Controller performance of the modified model.} \label{subfig. invp control modified} \end{subfigure}
\begin{subfigure}{0.45\linewidth} \centering \includegraphics[width=0.95\linewidth]{fig/invp/basin_of_attraction/inverted_pendulum/1/fold_2/g_constrained/basin_of_attraction_ts.png} \caption{Estimate of the basin of attraction of the initial model.} \label{subfig. invp basin of attraction initial} \end{subfigure}
\begin{subfigure}{0.45\linewidth} \centering \includegraphics[width=0.95\linewidth]{fig/invp/basin_of_attraction/inverted_pendulum/1/fold_2/gAB_constrained/basin_of_attraction_ts.png} \caption{Estimate of the basin of attraction of the modified model.} \label{subfig. invp basin of attraction modified} \end{subfigure}
\caption{\em Results of the inverted pendulum on a cart. (c),(d): $x_2$ and $x_4$ are fixed to 0. (g),(h): Tested ranges are $x_1(0)\in [-13,6.9]$ and $x_3(0)\in [-2.5,2]$ with $x_2(0)=x_4(0)=0$.}
\label{fig. invp}
\end{figure}
\section{Conclusion}
A control-consistent learning method is proposed, along with a simple but effective data sampling strategy, for Koopman operator-based control. The initial learning of the observables and the operator matrices is augmented by a second step that improves the controller performance. The use of input data deterministically sampled from continuous functions improves the controller performance, and the proposed two-stage learning contributes to improved closed-loop behavior by updating the operator matrices while retaining the high state-prediction accuracy obtained by the initially learned model.
} Starting from applications \cc{in} dimension reduction of high-dimensional nonlinear systems such as turbulent flows\cite{Spectral_analysis_of_nonlinear_flows,P_shmidt_DMD_2008}, the Koopman operator has gained high popularity in recent years \cc{as a data-driven modeling approach}. Among a number of successful applications of the data-driven Koopman frameworks is utilizing the Koopman operator for data-driven control. On the basis of finite-dimensional approximation of the Koopman operator derived from Extended Dynamic Mode Decomposition (EDMD)\cite{Williams2015}, which yields a Linear Time-Invariant (LTI) data-driven model, several linear controller designs have been applied such as Linear Quadratic Regulator (LQR)\cite{derivative_based_Murphy,local_Koopman,soft_robot_arm} and Model Predictive Control (MPC)\cite{KORDA_Koopman_MPC,Koopman_Lyapunov_based_MPC,Koopman_generators_Peitz,Soft_robot,tube_based_MPC}. While there are many frameworks and methods in literature to incorporate the Koopman operator into data-driven control, not much attention has been paid to the data-driven modeling \cc{aspect} in the context of control application\cc{s.} \cc{T}here do not \cc{appear to be} many efforts \cc{focused} on the modeling error, which pertains to practically unavoidable discrepancy between theories and actual implementations. \cc{It has been recognized} that the convergence property of the EDMD algorithm\cite{on_convergence_of_EDMD} does not hold for non-autonomous types of Koopman models\cite{KORDA_Koopman_MPC}. \cc{This}\hyperref[comment 1]{\textcolor{blue}{\fbox{Please see 1)}}} motivates \cc{the use of} neural networks and learn the observable functions themselves along with the finite-dimensional approximation of the Koopman operator. \cc{W}e show the sufficient and necessary condition for the state-prediction of the Koopman control models of interest to achieve precisely zero error, which implies \cc{that} obtaining high state-predictive accuracy is \cc{achievable} if \cc{one has} access to \cc{significant data and} computational resources that afford large and complex model structures. On the other hand, \cc{we show} that the modeling error of the Koopman models \cc{interacts} with the closed-loop system in a different way from the state-prediction and we exemplify that the controller performance can greatly suffer from the modeling error, on which the complexity and dimensions of observables have \cc{a large} influence. \cc{T}o improve the possibly undesirable closed-loop behavior induced by Koopman\cc{-based} control models, a two-stage learning method is proposed, in which some model parameters are modified after the initial training with exclusive use of data points sampled from closed-loop dynamics. This modification of the model aims \cc{to} directly \cc{reduce} the \cc{impact} of the modeling error on the controller performance. Moreover, with the same intent as the two-stage learning method, we also present a simple yet effective data sampling strategy that only use\cc{s} input signals deterministically sampled from continuous functions. This paper is organized as follows. In Section \ref{section 2}, the Koopman operator framework for non-autonomous systems is presented. In Section \ref{section 3}, we discuss the \cc{manifestation of the} modeling error \cc{on} both prediction and control and propose a two-stage learning method along with a data sampling strategy about input signals to improve the actual closed-loop behavior. 
Finally, several dynamical systems are tested to show the effectiveness of the proposed method in Section \ref{section. numerical examples}. \vspace{-3mm} \section{KOOPMAN OPERATOR THEORY FOR NON-AUTONOMOUS SYSTEMS} \vspace{-2mm} \label{section 2} We consider a dynamical system: \vspace{-2mm} \begin{align} \dot{x}\cc{(t)}=f(x\cc{(t)},u\cc{(t)}), \label{eq. governing eq in ODE} \end{align} where $x\cc{(t)}\in \mathcal{X}\subseteq \mathbb{R}^n$, $u\cc{(t)}\in \mathcal{U}\subseteq \mathbb{R}^p$, and $f:\mathcal{X}\times \mathcal{U}\rightarrow \mathcal{X}$ are the state, the input signal, and the possibly nonlinear mapping describing dynamics of the system, respectively. Throughout the paper, we assume the solution $x\cc{(t)}$ to \eqref{eq. governing eq in ODE} \cc{to be} continuous \cc{with respect to $t$}. With a first-order time discretization, \eqref{eq. governing eq in ODE} yields the following difference equation: \vspace{-2mm} \begin{align} x_{k+1}=F(x_k, u_k), \hyperref[comment 1]{\textcolor{blue}{\fbox{Please see 2)}}} \label{eq. governing eq} \end{align} where $x_k:=x(k\Delta t)$, $u_k:=u(k\Delta t)$, and $\Delta t$ denotes the sampling period. On the assumption $\Delta t\ll 1$, we consider \eqref{eq. governing eq} as the discrete-time system whose dynamics is identical to that of \eqref{eq. governing eq in ODE}. It is assumed that $f$ (i.e., $F$) is unknown and its dynamics is modeled in fully data-driven manners. In the Koopman operator formalism, dynamics of the system of interest is characterized through functions called observables, which are mappings from the state-space into $\mathbb{R}$. While the Koopman operator was first introduced in the context of autonomous systems, there have been also several efforts extending it to non-autonomous systems with control inputs\cite{DMDc,EDMDc,KORDA_Koopman_MPC}. In a formal extension\cite{KORDA_Koopman_MPC}, the state-space is extended to the augmented space $\mathcal{X}\times l (\mathcal{U})$ (not the original state-space $\mathcal{X}$), where \vspace{-2mm} \begin{align} l (\mathcal{U}):=\{ \bm{U}:=(u_k,u_{k+1},\cdots) \mid u_i\in \mathcal{U},\forall i \}, \end{align} is the space of sequences of input signals, and the observables $g$ are of the form: \vspace{-2mm} \begin{align} g:\mathcal{X}\times l (\mathcal{U}) \rightarrow \mathbb{R}: (x_k,\bm{U}) \mapsto g(x_k,\bm{U}). \label{eq. obs general description} \end{align} In practice, the observables $g$ may be considered as the feature maps that are either specified by users or learned from data in the modeling procedure. The Koopman operator corresponding to the non-autonomous system \eqref{eq. governing eq} is defined as an infinite-dimensional linear operator $\mathcal{K}:\mathcal{F}\rightarrow \mathcal{F}$ ($\mathcal{F}$: space of functions $g$) s.t. \vspace{-2mm} \begin{align} &\mathcal{K}g = g\circ \hat{F} \ \Leftrightarrow\ (\mathcal{K}g)(x_k, \bm{U})= g(\hat{F}(x_k, \bm{U})), & \label{eq. def of Koopman operator} \end{align} where the mapping $\hat{F}:\mathcal{X}\times l (\mathcal{U})\rightarrow \mathcal{X}\times l (\mathcal{U})$ is defined by \vspace{-4mm} \begin{align} \label{eq. def of augmented dynamics} \hat{F}(x_k, \bm{U}) := \left(F(x_k,u_k), \mathcal{S}\bm{U}\right) = \left( x_{k+1}, \mathcal{S}\bm{U} \right), \end{align} and $\mathcal{S}$ denotes the shift operator s.t. \vspace{-2mm} \begin{align} \mathcal{S}\bm{U}= \mathcal{S}(u_k,u_{k+1},\cdots):= (u_{k+1},u_{k+2},\cdots). \end{align} It is easily \cc{inferred} from \eqref{eq. 
def of Koopman operator} that $\mathcal{K}$ is a linear operator. Note that while the time evolution of $x_k$ is specified by \eqref{eq. governing eq}, there is at this point no mapping governing the evolution of $u_k$, which requires to introduce the sequence $\bm{U}=(u_k,u_{k+1},\cdots)$ of input signals to formally define the Koopman operator $\mathcal{K}$. The equation \eqref{eq. def of Koopman operator} along with the definition \eqref{eq. def of augmented dynamics} can be viewed as the evolution of the dynamics \eqref{eq. governing eq} through observables $g$. \vspace{-1mm} \subsection{Finite Dimensional Approximation of the Koopman Operator and Data-Driven Koopman Models} \cc{T}o apply the Koopman operator formalism to dynamical systems modeling, finite-dimensional approximation $K$ of the Koopman operator $\mathcal{K}$ is introduced as follows. \begin{prop} \rm{} Given observables $g_i\in \mathcal{F}$ ($i=1,\cdots,D$), let $g$ be an arbitrary element of $\text{span}(g_1\cdots, g_D)$. Then, $\mathcal{K}g\in \text{span}(g_1\cdots, g_D)$, i.e., $\text{span}(g_1\cdots, g_D)$ is an invariant subspace under the action of the Koopman operator $\mathcal{K}:\mathcal{F}\rightarrow \mathcal{F}$, if and only if there exits $K\in \mathbb{R}^{D\times D}$ s.t. \begin{align} [\mathcal{K}g_1\ \cdots \ \mathcal{K}g_D]^\mathsf{T} = K[g_1\ \cdots \ g_D]^\mathsf{T}. \label{eq. finite dimensional approx of K} \end{align} \end{prop} From the engineering perspective, it is of great interest to introduce the observables such that they allow practical models for control application\cc{s}. One major choice of $g_i$ for the Koopman control problem takes the following structure of observables\cite{Model_based_control,Soft_robot,tube_based_MPC}: \vspace{-2mm} \begin{align} [g_1(x_k,\bm{U})\cdots g_{D}(x_k,\bm{U})]^\mathsf{T} = \left[ x_k^\mathsf{T}\ \tilde{g}(x_k)^\mathsf{T}\ u_k^\mathsf{T} \right]^\mathsf{T}, \label{eq. def of obs} \end{align} where $D=n+N+p$ and $\tilde{g}(x_k)\in \mathbb{R}^N$ represents a vector-valued function from $\mathcal{X}$ into $\mathbb{R}^N$ for some $N\in \mathbb{N}$. Note that only the first element $u_k$ in the sequence $\bm{U}=(u_k,u_{k+1},\cdots)$ appears in the definition \eqref{eq. def of obs}, which leads to a practical form of data-driven models consistent with many linear controller designs such as Linear Quadratic Regulator (LQR) and Model Predictive Control (MPC). On the assumption that we have access to $x_k$ and $u_k$ as data, we consider the following finite-dimensional approximation $K_c\in\mathbb{R}^{(n+N+p)\times (n+N+p)}$ of the Koopman operator $\mathcal{K}$: \vspace{-2mm} \begin{align} \left[ \begin{array}{c} x_{k+1} \\ \tilde{g}(x_{k+1}) \\ u_{k+1} \end{array} \right] \approx \underset{=:K_c}{ \underbrace{ \left[ \begin{array}{c} \begin{array}{cc} A & B \end{array} \\ * \end{array} \right] }} \left[ \begin{array}{c} x_{k} \\ \tilde{g}(x_{k}) \\ u_{k} \end{array} \right], \label{eq. intro to Koopman model} \end{align} where matrices $A\in\mathbb{R}^{(n+N)\times (n+N)}$ and $B\in\mathbb{R}^{(n+N)\times p}$ are to be learned along with the feature maps $\tilde{g}$. The both sides of \eqref{eq. intro to Koopman model} are not necessarily equal to each other since $\text{span}(g_1,\cdots,g_D)$ defined by \eqref{eq. def of obs} may not be invariant under the action of $\mathcal{K}$. Noticing that the first $n+N$ rows of \eqref{eq. intro to Koopman model} are enough to specify the evolution of the states $x_k$ s.t. 
\vspace{-2mm} \begin{align} \left[ \begin{array}{c} x_{k+1} \\ \tilde{g}(x_{k+1}) \end{array} \right] \approx A \left[ \begin{array}{c} x_{k} \\ \tilde{g}(x_{k}) \end{array} \right] + Bu_k, \label{eq. Koopman model 1} \end{align} we are only interested in learning \eqref{eq. Koopman model 1} and the last $p$ rows of $K_c$ in \eqref{eq. intro to Koopman model} are ignored in the proceeding formulations. From \eqref{eq. Koopman model 1}, the modeling error is defined as: \vspace{-2mm} \begin{align} r(x,u):= \left[ \begin{array}{c} F(x,u) \\ \tilde{g}(F(x,u)) \end{array} \right] - \left( A \left[ \begin{array}{c} x \\ \tilde{g}(x) \end{array} \right] + Bu \right), \label{eq. def of modeling error} \end{align} and its norm: \vspace{-4mm} \begin{align} \| r \|= \sqrt{ \int_{\mathcal{X}\times \mathcal{U}} \| r(x,u) \|_2^2 dxdu }, \label{eq. norm of r} \end{align} may be used as a characteristic to evaluate the model, e.g., \eqref{eq. Koopman model 1} is exact almost everywhere if and only if $\|r\|=0$. The model \eqref{eq. Koopman model 1} is an LTI \cc{system} in the new coordinates $[x_k^\mathsf{T}\ \tilde{g}(x_k)^\mathsf{T}]^\mathsf{T}$ and linear controller designs can be applied to control \eqref{eq. governing eq}. In this paper, the following feedback controller with a static gain $\bm{K}\in \mathbb{R}^{p\times (n+N)}$ is considered: \vspace{-1mm} \begin{align} u_k=\bm{K} [x_k^\mathsf{T}\ \tilde{g}(x_k)^\mathsf{T}]^\mathsf{T}. \label{eq. controller} \end{align} \section{Two-Stage Learning of Koopman Embedding} \vspace{-1mm} \label{section 3} \subsection{Motivating Example} \label{section. motivating example} We consider the following one-dimensional system as a guiding example to motivate the proposed two-stage learning. \vspace{-5mm} \begin{align} x_{k+1}={x^2_k} e^{-x_k}+u_k, \ \ x_k,u_k\in \mathbb{R}. \end{align} Suppose that we create the Model 1 defined as: \vspace{-2mm} \begin{align} & \left[ \begin{array}{c} x_{k+1} \\ \hspace{-2mm}{x^2_{k+1}}e^{-x_{k\hspace{-0.5mm}+\hspace{-0.5mm}1}}\hspace{-2.5mm} \end{array} \right] \hspace{-1.5mm}\approx \hspace{-1mm} A\hspace{-1mm} \left[ \begin{array}{c} x_{k} \\ \hspace{-2mm}{x^2_k}e^{-x_k}\hspace{-2.5mm} \end{array} \right] \hspace{-1.5mm}+\hspace{-1mm} Bu_k,\hspace{-0.5mm} \left( \tilde{g}(x_k)\hspace{-1mm}=\hspace{-1mm} {x^2_k}e^{-x_{k}} \right)\hspace{-1mm}. & \label{eq. ex1 model 1} \end{align} From Proposition \ref{prop. state prediction} in Section \ref{section. second training}, perfect state prediction with no error is possible with $A$ and $B$ given as the following forms: \vspace{-1mm} \begin{align} A= \left[ \begin{array}{cc} 0 & 1 \\ \alpha_1 & \alpha_2 \end{array} \right], B= \left[ \begin{array}{c} 1 \\ \alpha_3 \end{array} \right], \alpha_i\in \mathbb{R}. \end{align} The modeling error \eqref{eq. def of modeling error} is then represented by \vspace{-2mm} \begin{align} r(x_k,u_k)\hspace{-1mm}=\hspace{-1mm} \left[ \begin{array}{c} 0 \\ \begin{array}{l} \hspace{-4mm} ({x^2_k}e^{-x_k} + u_k)^2 \exp(-{x^2_k}e^{-x_{k}} \hspace{-1mm}-\hspace{-1mm} u_k) \\ \hspace{14mm}-\hspace{-0.1mm} \alpha_1 x_k \hspace{-1mm}-\hspace{-1mm} \alpha_2 {x^2_k}e^{-x_{k}} \hspace{-1mm}-\hspace{-1mm} \alpha_3 u_k \hspace{-4mm} \end{array} \end{array} \right]\hspace{-1mm}. \label{eq. 
r1} \end{align} Suppose we also have Model 2, which has richer features: \vspace{-5mm} \begin{align} & \left[ \begin{array}{c} x_{k+1} \\ \hspace{-2.5mm}{x^2_{k+1}}e^{-x_{k\hspace{-0.5mm}+\hspace{-0.5mm}1}}\hspace{-2.8mm} \\ x_{k+1}^2 \end{array} \right] \hspace{-1.5mm}\approx\hspace{-1mm} A\hspace{-1.5mm} \left[ \begin{array}{c} x_{k} \\ \hspace{-2.5mm}{x^2_k}e^{-x_{k}}\hspace{-2.8mm} \\ x_{k}^2 \end{array} \right] \hspace{-2mm}+\hspace{-1mm} Bu_k, \hspace{-1mm} \left( \hspace{-1mm} \tilde{g}(x_k) \hspace{-1mm}=\hspace{-1.5mm} \left[ \begin{array}{c} \hspace{-2.3mm} {x^2_k}e^{\hspace{-0.5mm}-\hspace{-0.2mm}x_{k}}\hspace{-2.5mm} \\ x_k^2 \end{array} \right] \hspace{-0.5mm} \right)\hspace{-1.2mm}. & \label{eq. ex1 model 2} \end{align} \cc{P}erfect state prediction can be also achieved with \vspace{-2mm} \begin{align} A= \left[ \begin{array}{ccc} 0 & 1 & 0 \\ \beta_1 & \beta_2 & \beta_3 \\ \beta_4 & \beta_5 & \beta_6 \end{array} \right], B= \left[ \begin{array}{c} 1 \\ \beta_7 \\ \beta_8 \end{array} \right], \beta_i\in \mathbb{R}. \end{align} In this case, however, the modeling error $r(x_k,u_k)$ takes a different form: \vspace{-2mm} \begin{align} r(x_k,\hspace{-0.7mm}u_k)\hspace{-1mm}=\hspace{-1.5mm} \left[ \begin{array}{c} 0 \\ \begin{array}{l} \hspace{-4mm} ({x^2_k}e^{-x_k} + u_k)^2 \exp(-{x^2_k}e^{-x_{k}}-u_k) \\ \hspace{7mm} - \beta_1 x_k - \beta_2 {x^2_k}e^{-x_{k}} - \beta_3 x_k^2 - \beta_7 u_k\hspace{-4mm} \end{array} \\ \begin{array}{l} \hspace{-4mm}({x^2_k}e^{-x_{k}}+u_k)^3 - \beta_5 x_k \\ \hspace{18mm} - \beta_6 {x^2_k}e^{-x_{k}} - \beta_7 x_k^2 - \beta_8 u_k\hspace{-4mm} \end{array} \end{array} \right]\hspace{-1mm}. \label{eq. r2} \end{align} Figures \ref{subfig. ex1 heat map r1} and \ref{subfig. ex1 heat map r2} show the heat maps of $\|r(x_k,u_k)\|_2$, where the model parameters $\alpha_i$ and $\beta_i$ are obtained by EDMD\cite{KORDA_Koopman_MPC}. Model 2 in \eqref{eq. ex1 model 2} \cc{is highly erroneous} for $x_k<0$ compared to the Model 1 in \eqref{eq. ex1 model 1}, which accumulates according to \eqref{eq. error accumulation of control model} in Section \ref{section. second training} \cc{leading to undesirable} closed-loop behavior. Figure \ref{fig. control performance ex1} shows the controller performance of both models, where $\bm{K}$ is computed as an LQR gain with the cost function $\sum_{k=0}^{\infty}{x^2_k}+{u^2_k}$. Despite the fact that the both models achieve precisely zero state prediction error, the controller designed for Model 2 causes an undesirable oscillation even \cc{as} the open-loop dynamics quickly converges to the origin after $k=1$. Herein, it is also emphasized that from the state-prediction point of view, complex and large models such as the Model 2 \cc{may be} considered as preferable options in general since it is more likely to achieve better state-prediction by Corollary \ref{col. state prediction} in Section \ref{section. second training}. \cc{T}o deal with this degradation of the controller performance, which is not be revealed from the state-prediction accuracy, a two-stage learning \cc{approach is proposed to improve} controller performance. \vspace{-3mm} \begin{figure} \centering \begin{subfigure}{0.45\linewidth} \centering \includegraphics[width=0.95\linewidth]{fig/ex1/r1.png} \caption{Model 1 \eqref{eq. r1}.} \label{subfig. ex1 heat map r1} \end{subfigure} \begin{subfigure}{0.45\linewidth} \centering \includegraphics[width=0.95\linewidth]{fig/ex1/r2.png} \caption{Model 2 \eqref{eq. r2}.} \label{subfig. 
ex1 heat map r2} \end{subfigure} \begin{subfigure}{0.45\linewidth} \centering \includegraphics[width=0.95\linewidth]{fig/ex1/ex1_cl.png} \caption{Control simulations.} \label{fig. control performance ex1} \end{subfigure} \caption{(a), (b): Heat maps of $\|r(x_k,u_k)\|_2$. (c): Controller performance with $x_0=-3.4298$.} \end{figure} \subsection{Modeling Errors of Koopman Control Models} \vspace{-2mm} Several methods \cc{have been proposed} to learn the model \eqref{eq. Koopman model 1}, among of which the most straightforward one is to first specify the feature maps $\tilde{g}$ and \cc{infer} $A$ and $B$ from data. This is reduced to a linear regression problem and a unique solution can be obtained in the same manner as the Dynamic Mode Decomposition (DMD)\cite{P_shmidt_DMD_2008,Spectral_analysis_of_nonlinear_flows}. This learning algorithm with nonlinear observables is called the Extended Dynamic Mode Decomposition (EDMD)\cite{Williams2015}, based on which many data-driven Koopman controller designs are developed\cite{KORDA_Koopman_MPC,Koopman_Lyapunov_based_MPC,Model_based_control,PEITZ_switched_control,Soft_robot,data-driven_Koopman_H2,local_Koopman,soft_robot_arm,tube_based_MPC,Koopman_generators_Peitz}. \cc{An important feature of the EDMD algorithm is its convergence property.} Letting $\mathcal{F}=L_2(\mathcal{X}\times l(\mathcal{U}))$, the approximation obtained by EDMD is, \cc{under} several assumptions, shown to converge to the true Koopman operator in the strong operator topology as the number $M$ of data points and the number $D$ of observables $g_i$ tend to infinity\cite{On_convergence_Klus,on_convergence_of_EDMD}. Specifically, this convergence property is stated as follows: for $\forall g\in L_2(\mathcal{X}\times l(\mathcal{U}))$, \vspace{-2mm} \begin{align} \lim_{D\rightarrow \infty} &\int_{\mathcal{X}\times l(\mathcal{U})} |(P_D^\mu \mathcal{K})P_D^\mu g - \mathcal{K}g|^2d\mu =0, & \label{eq. convergence of EDMD 1} \end{align} where $P_D^\mu $ and $\mu$ are the $L_2$ projection onto $\text{span}(g_1,\hspace{-0.5mm}\cdots\hspace{-0.5mm},g_D)$ and a measure with which $L_2(\mathcal{X}\times l(\mathcal{U}))$ is endowed, respectively. The operator $P_D^\mu \mathcal{K}$ in \eqref{eq. convergence of EDMD 1} is related to the finite dimensional approximation $K$ computed by EDMD in the following manner: \vspace{-2mm} \begin{align} &\lim_{M\rightarrow \infty} \| a^\mathsf{T} K[g_1\cdots g_D]^\mathsf{T} - P_D^{\mu}\mathcal{K}g\|=0,& \nonumber \\ & \forall g=a^\mathsf{T} [g_1\cdots g_D]^\mathsf{T} \in \text{span}(g_1,\cdots,g_D), a\in \mathbb{R}^D, & \label{eq. convergence of EDMD 2} \end{align} where $\|\cdot \|$ is an arbitrary norm on $\text{span}(g_1,\cdots,g_D)$. The convergence property \eqref{eq. convergence of EDMD 1} implies that data-driven Koopman models may provide reasonable approximation with sufficiently large $M$ and $D$. However, it is not the case for the non-autonomous systems setting as seen in the following remark. \begin{remark} \label{remark convergenve property does not hold} Given feature maps $\tilde{g}(x_k)$, the approximation $K_c$ in \eqref{eq. intro to Koopman model} does not possess the the convergence property \eqref{eq. convergence of EDMD 1} since for \eqref{eq. convergence of EDMD 1} to hold, it is necessary that $\{g_i\}_{i=1}^D$ form an orthonormal basis of $L_2(\mathcal{X}\times l(\mathcal{U}))$ as $D\rightarrow \infty$\cite{on_convergence_of_EDMD}. 
As described in \cite{KORDA_Koopman_MPC}, no elements of $\bm{U}=(u_k,u_{k+1},\cdots)$ except for the first one $u_k$ depend on the definition \eqref{eq. def of obs} and it is obvious that they cannot form any basis of $L_2(\mathcal{X}\times l(\mathcal{U}))$. \end{remark} No convergence property of the model \eqref{eq. intro to Koopman model} or \eqref{eq. Koopman model 1} implies $\|r\|$ may not be negligibly small with the absence of the convergence property of the model even if $D$ and $M$ are sufficiently large. To this end, we adopt another learning formulation, where the nonlinear feature maps $\tilde{g}$ are also learned from data along with matrices $A$ and $B$. \vspace{-3mm} \subsection{Initial Training: Simultaneous Learning of the Feature Maps and the System Matrices} \vspace{-1.8mm} Another class of methods to learn the model \eqref{eq. Koopman model 1} estimates both system matrices $A,B$ and the feature maps $\tilde{g}$ simultaneously, which usually falls into nonlinear regression problems. The resulting models are expected to achieve better predictive accuracy than those of the linear formulations such as EDMD since they can have greater model expressivity with the feature maps $\tilde{g}$ also learned from data along with the matrices $A$ and $B$. Especially, the use of neural networks has been shown to be promising to incorporate into the Koopman operator-based modeling, analyses, and control\cite{Learning_DNN_ACC2019,Learning_Koopman_Invariant_Subspaces,Physics-based_robabilistic_learning,deep_learning_representation_CDC,Wiener_MIMO_ID}. Hence, the proposed method characterizes the feature maps $\tilde{g}$ in \eqref{eq. Koopman model 1} as a neural network aiming at high predictive accuracy and solves a nonlinear regression problem to learn $A,B$, and $\tilde{g}$, which is formulated as Problem \ref{problem. initial training}. \begin{table} \centering \begin{screen} \begin{prob} \label{problem. initial training} \rm{} \ (Initial Training) \hyperref[comment 2]{\textcolor{blue}{\fbox{Please see 3)}}} \\ Find $g$, $A\in\mathbb{R}^{(n+N)\times (n+N)}$, and $B\in \mathbb{R}^{(n+N)\times p}$ s.t. \vspace{-2mm} \begin{align} &\left\{ g,\ A,\ B \right\}= \underset{\left\{ g,A,B \right\}}{ \text{argmin} }\ J(g,A,B), & \label{eq. argmin of learning} \\ & \text{where } J(g,A,B) := \lambda_1 \| AG_x + BU - G_y \|_F^2 &\nonumber \\ & \hspace{16mm}+ \lambda_2 \| W ( AG_x + BU ) - Y \|_F^2, & \label{eq. loss of initial training} \\ & G_x:= [g(x_1)\cdots g(x_{M})], \ U:= [u_1\cdots u_{M}], &\nonumber \\ & G_y:= [g(y_1)\cdots g(y_{M})], \ Y := [y_1\cdots y_{M}], &\nonumber \\ & y_k=F(x_k,u_k), \ k=1,\cdots, M, &\nonumber \\ & W:= \left[ \begin{array}{cc} I & 0 \end{array} \right], & \label{eq. decoder} \\[-1ex] & g:\mathcal{X}\rightarrow \mathbb{R}^{n+N}: x_k\mapsto \left[ \begin{array}{c} x_k \\ \tilde{g}(x_k) \end{array} \right],& \label{eq. g subsystemized} \\ & \tilde{g}(x_k)= \text{NN}(x_k\cc{;w}) \ (\text{a neural network}). & \end{align} \end{prob} \end{screen} \end{table} The loss function $J(g(\cdot),A,B)$ in \eqref{eq. loss of initial training} consists of two terms. The first one multiplied by a hyperparameter $\lambda_1$ accounts for (approximately) minimizing $\|r\|$ in \eqref{eq. norm of r}. The other one with a hyperparameter $\lambda_2$ intends to directly minimize the state-reconstruction error by applying the decoder $W$ to the model prediction in order to compare it with the state $y_k$. 
While the decoder may be also characterized by a neural network in general, the specific structure \eqref{eq. def of obs} of observables, which explicitly includes the state $x_k$, allows an analytical expression of the decoder $W$ in \eqref{eq. decoder}. The nonlinear feature maps $\tilde{g}$ are defined as a fully-connected feed-forward neural network: \vspace{-2mm} \begin{align} &\text{NN}\hspace{-0.3mm}(\hspace{-0.5mm}x_k\cc{\hspace{-0.2mm};\hspace{-0.5mm}w}\hspace{-0.5mm}) \hspace{-1mm} := \hspace{-1mm} \sigma \hspace{-0.7mm} \left( \Theta_{l} \sigma \hspace{-0.5mm} \left( \hspace{-0.5mm}\cdots\hspace{-0.5mm} \Theta_2 \sigma \hspace{-0.5mm} \left( \Theta_1 x_k \hspace{-1mm}+\hspace{-0.5mm} b_1 \right) \hspace{-0.7mm}+\hspace{-0.8mm} b_2 \hspace{-0.5mm}\cdots\hspace{-0.5mm} \right) \hspace{-0.7mm}+\hspace{-0.7mm} b_l \right)\hspace{-1mm}, & \label{eq. NN} \\ & \cc{w:=\{ \Theta_{i}, b_i \}_{i=1}^l,} & \end{align} where $\Theta_i$, $b_i$, and $\sigma$ are a kernel, a bias, and an activation function, respectively. In this paper, we only consider continuous activation functions so that \eqref{eq. NN} is continuous. \vspace{-2mm} \subsection{Second Training: Modification of the Initial \textcolor{blue}{Model}} \vspace{-1mm} \label{section. second training} \cc{Characterizing observables} by a neural network allows greater expressivity, which \cc{can} lead to higher accuracy of the data-driven model if the optimization problem is feasible. On the other hand, from the controller design perspective, including high\cc{-order} nonlinearit\cc{ies} in the observables is not preferable since it may introduce unexpected or undesirable effect on the closed-loop system due to the modeling error, which could even alter the actual closed-loop system unstable, while it is easier for the state prediction to eliminate the modeling error. Specifically, the state prediction is implemented as follows: \vspace{-2mm} \begin{align} &x^{\text{est}}_{k+1} \hspace{-1mm}=\hspace{-1mm} \underset{=W}{ \underbrace{ \left[ \begin{array}{cc} \hspace{-1mm}I\hspace{-0.2mm} & \hspace{-0.2mm}0\hspace{-1mm} \end{array} \right]}} \hspace{-1mm} \left\{ \hspace{-0.5mm} A \left[ \begin{array}{c} \hspace{-1mm}x^{\text{est}}_{k}\hspace{-1mm} \\ \hspace{-1mm}\tilde{g}(x^{\text{est}}_{k})\hspace{-1mm} \end{array} \right] \hspace{-1mm}+\hspace{-1mm} Bu_k \hspace{-0.5mm} \right\}\hspace{-1mm}, k=0,\hspace{-0.5mm}1,\cdots\hspace{-0.5mm}, & \label{eq. state prediction} \end{align} where $x^\text{est}_k$ denotes the state prediction at time $k$ and $x^\text{est}_0=x_0$ is given. From \eqref{eq. state prediction}, the modeling error \cc{in} the state prediction is evaluated \cc{as}: \vspace{-2mm} \begin{align} x_{k+1} \hspace{-1mm}=\hspace{-1mm} \left[ \begin{array}{cc} \hspace{-1mm}I\hspace{-0.5mm} & \hspace{-0.5mm}0\hspace{-1mm} \end{array} \right] \hspace{-1mm} \left\{ \hspace{-1mm} A \left[ \begin{array}{c} \hspace{-1mm}x_{k}\hspace{-1mm} \\ \hspace{-1mm}\tilde{g}(x_{k})\hspace{-1mm} \end{array} \right] \hspace{-1mm}+\hspace{-1mm} Bu_k \hspace{-1mm} \right\} \hspace{-1mm}+\hspace{-1mm} \left[ \begin{array}{cc} \hspace{-1mm}I\hspace{-0.5mm} & \hspace{-0.5mm}0\hspace{-1mm} \end{array} \right] \hspace{-1mm} r(x_k\hspace{-.2mm},\hspace{-0.2mm}u_k). \label{eq. state prediction exact relation} \end{align} \begin{prop} \label{prop. state prediction} There exist $\tilde{g}$, $A_1\in \mathbb{R}^{n\times n}$, $A_2\in \mathbb{R}^{n\times N}$ and $B_1\in \mathbb{R}^{n\times p}$ s.t. 
\vspace{-2mm} \begin{align} F(x_k,u_k)= A_1 x_k + A_2 \tilde{g}(x_k) + B_1 u_k, \label{eq. condition for perfect state prediction} \end{align} if and only if \vspace{-5mm} \begin{align} \left[ \begin{array}{cc} \hspace{-1mm}I\hspace{-0.5mm} & \hspace{-0.5mm}0\hspace{-1mm} \end{array} \right] r(x_k,u_k)=0, \end{align} i.e., \eqref{eq. state prediction} has no state prediction error. \end{prop} \begin{proof} Let $[A_1\ A_2]\in \mathbb{R}^{n\times (n+N)}$ and $B_1\in \mathbb{R}^{n\times p}$ be the first $n$ rows of $A$ and $B$ in \eqref{eq. state prediction exact relation}, respectively. From \eqref{eq. governing eq}, the equation \eqref{eq. state prediction exact relation} reads \vspace{-2mm} \begin{align} F(x_k,u_k) \hspace{-1mm}=\hspace{-1mm} A_1 x_k \hspace{-1mm}+\hspace{-1mm} A_2 \tilde{g}(x_k) \hspace{-1mm}+\hspace{-1mm} B_1 u_k \hspace{-1mm}+\hspace{-1mm} \left[ \begin{array}{cc} \hspace{-1mm}I\hspace{-0.7mm} & \hspace{-0.7mm}0\hspace{-1mm} \end{array} \right]\hspace{-1mm} r(x_k,u_k), \end{align} which implies the statement of the proposition. \end{proof} Note that there exist $\tilde{g}$, $A_1$, $A_2$, and $B_1$ that satisfy \eqref{eq. condition for perfect state prediction} with $A_2=0$ or $\tilde{g}(x_k)\equiv 0$ if and only if the original dynamics \eqref{eq. governing eq} is linear. \begin{col} \label{col. state prediction} Suppose \eqref{eq. governing eq} is nonlinear. It is necessary and sufficient for \eqref{eq. state prediction} to be able to achieve zero state prediction error that the feature maps $\tilde{g}$ includes enough number of different features to reconstruct $F$. \end{col} While the accuracy of the state prediction of the model \eqref{eq. Koopman model 1} is evaluated by Proposition \ref{prop. state prediction}, its accuracy in terms of controller design is characterized in a different way. Note that the system to be controlled by the feedback controller $\bm{K}$ in \eqref{eq. controller} is assumed to be a linear time-invariant system: \vspace{-2mm} \begin{align} \xi_{k+1} = A\xi_k + Bu_k, \ \ \xi_k \in \mathbb{R}^{n+N}, \label{eq. control model} \end{align} and we can only ensure properties of the closed-loop system: \vspace{-5mm} \begin{align} \xi_{k+1} &= (A+B\bm{K})\xi_{k} = (A+B\bm{K})^{k+1}\xi_{0}. & \end{align} \cc{Clearly}, \eqref{eq. control model} is identical to the Koopman control model \eqref{eq. Koopman model 1} if $\|r\|=0$ and $\xi_0=[x_0^\mathsf{T}\ \tilde{g}(x_0)^\mathsf{T}]^\mathsf{T}$. However, in general cases where $r(x_k,u_k)\not\equiv 0$, the modeling error may persist at any time and accumulate as follows: \vspace{-2mm} \begin{align} \left[ \begin{array}{c} \hspace{-1mm}x_{k+1}\hspace{-1mm} \\ \hspace{-1mm}\tilde{g}(x_{k+1})\hspace{-1mm} \end{array} \right] &= (A+B\bm{K})^{k+1} \left[ \begin{array}{c} \hspace{-1mm}x_{0}\hspace{-1mm} \\ \hspace{-1mm}\tilde{g}(x_{0})\hspace{-1mm} \end{array} \right] &\nonumber \\ &\hspace{-15mm} + \sum_{i=0}^{k} (A+B\bm{K})^i r \left( x_{k-i}, \bm{K} \left[ \begin{array}{c} \hspace{-1mm}x_{k-i}\hspace{-1mm} \\ \hspace{-1mm}\tilde{g}(x_{k-i})\hspace{-1mm} \end{array} \right] \right), & \label{eq. error accumulation of control model} \end{align} in which \eqref{eq. controller} is substituted. The second term of the r.h.s. of \eqref{eq. error accumulation of control model} represents the discrepancy between \eqref{eq. control model} and \eqref{eq. Koopman model 1}, which directly leads to degradation of controller performance. Since \cc{the error propagation in \eqref{eq. 
error accumulation of control model}} reflects all components of $r(x_k,u_k)$ unlike the state prediction, the actual closed-loop system could greatly suffer from the modeling error depending on the design and/or the dimensions of $\tilde{g}$, as illustrated in Section \ref{section. motivating example}. In the proposed method, considering that the controller performance of the Koopman control model \eqref{eq. Koopman model 1} can be deteriorated due to the modeling error \cc{(}while it is relatively easy to achieve high predictive accuracy as shown in Corollary \ref{col. state prediction}\cc{)}, the second training process formulated as Problem \ref{prob. second training} follows Problem \ref{problem. initial training} in case the control objective is not achieved by the initially learned model. Specifically, data points sampled from a closed-loop system formed by \eqref{eq. controller} are exclusively used so that the learning problem can directly minimize the modeling error in the regime of closed-loop dynamics. The modification of the initial model is implemented by updating matrices $A$ and $B$ to $A+\Delta A$ and $B+\Delta B$, where we impose the constraint that the modified model retains dynamics close to the initial one so that the high predictive accuracy obtained in the initial training will not be degraded by retraining the model with closed-loop data points only. Specifically, $\Delta A$ and $\Delta B$ have the constraints that their induced 2-norms $\| \Delta A \|:=\text{sup}_{\|x\|=1}\| \Delta A x \|_2$ and $\|\Delta B\|$ are bounded by hyperparameters $\epsilon_A$ and $\epsilon_B$, respectively. \begin{table} \centering \begin{screen} \begin{prob} \label{prob. second training} \rm{} (Modification of the initial model) \\ Given $g$, $A$, $B$, and a controller gain $\bm{K}$ that is designed for ($A$, $B$), find $\Delta A$ and $\Delta B$ s.t. \vspace{-2mm} \begin{align} &\left\{ \Delta A,\ \Delta B \right\}:= \underset{\left\{ \Delta A,\Delta B \right\}}{ \text{argmin} }\ J_c(\Delta A,\Delta B), & \label{eq. argmin of learning 2} \\ & \text{subject to:} \hspace{5mm} \| \Delta A \|\leq \epsilon_A, & \\ &\hspace{20mm} \| \Delta B \|\leq \epsilon_B, & \\[-1ex] &\text{where}&\nonumber \\[-1ex] &J_c(\Delta A,\Delta B) \hspace{-1mm}:=\hspace{-1mm} \hspace{0mm} \lambda_1 \| (A \hspace{-1mm}+\hspace{-1mm} \Delta A) G_x \hspace{-0.5mm}+\hspace{-0.5mm} (B \hspace{-1mm}+\hspace{-1mm} \Delta B)U \hspace{-0.5mm}-\hspace{-0.5mm} G_y \|_F^2 &\nonumber \\ &\hspace{-1mm} +\hspace{-1mm} \lambda_2 \| W ( (A \hspace{-1mm}+\hspace{-1mm} \Delta A) G_x \hspace{-0.5mm} + \hspace{-0.5mm} (B \hspace{-1mm}+\hspace{-1mm} \Delta B)U ) - Y \|_F^2, & \label{eq. loss of second training} \\ & y_k=F(x_k,u_k), u_k=\bm{K}g(x_k). &\nonumber \end{align} \end{prob} \end{screen} \end{table} \vspace{-2.5mm} \subsection{Restricting \cc{the} Input Signal of the Data Set} \vspace{-2mm} \label{section. using restricted inputs} In addition to the modification of the initially learned model, we also propose a data sampling strategy that generates the input signal $u_k$ deterministically \cc{such} that $(u_k,u_{k+1},\cdots)$ will be a sequence of data points sampled from continuous functions. From the modeling perspective, it is in general advisable to sample data points $(x_k, u_k)$ randomly, i.e., from some probability distribution. Indeed, assuming that data points $(x_k, u_k)$ are i.i.d. random variables, it is confirmed that minimizing the loss function $J(g,A,B)$ in \eqref{eq. 
loss of initial training} corresponds to minimizing the norm \eqref{eq. norm of r} of the modeling error $r$ since the first term of $J(g,A,B)$ is related to $\|r\|$ as: \vspace{-3mm} \begin{align} \cfrac{1}{M} \| AG_x \hspace{-1mm}+\hspace{-1mm} BU \hspace{-1mm}-\hspace{-1mm} G_y \|_F^2 &= \cfrac{1}{M} \sum_{k=1}^{M} \| r(x_k,u_k) \|_2^2 &\nonumber \\[-1ex] &\overset{\text{a.s.}}{ \rightarrow } \int_{\mathcal{X}\times \mathcal{U}} \hspace{-2mm} \| r(x,u) \|_2^2 dxdu \ \ (M\hspace{-1mm}\rightarrow\hspace{-1mm} \infty ) &\nonumber \\ &= \|r\|^2, & \label{eq. relation of loss and the norm of r} \end{align} where the almost sure convergence follows from the strong law of large numbers. However, it is often the case that sampling a \cc{very} large number of data points across the entire space is not practical due to limitations of experimental or computational resources. For non-autonomous systems, the space from which the data $(x_k,u_k)$ is sampled is the product space $\mathcal{X}\times \mathcal{U}$, not the original state space $\mathcal{X}$, and it is especially difficult to sample enough data in applications. As a result, learned models may be overfitted or biased, which do not necessarily lead to the modeling error profile consistent with the analysis in \eqref{eq. relation of loss and the norm of r}. In this paper, we propose to only use deterministic $u_k$ sampled from continuous functions. Since the solution $x(t)$ to \eqref{eq. governing eq in ODE} is assumed to be continuous and $\tilde{g}$ defined in \eqref{eq. NN} is also continuous, the control inputs \eqref{eq. controller} are always discretized points sampled from continuous functions. Hence, for the same reason as that of the modification of the initial model in Section \ref{section. second training}, \textcolor{blue}{ we only use $u_k$ sampled from continuous functions so that Problem \ref{problem. initial training} can minimize $J(g,A,B)$ over possible regime of dynamics realized by the controller \eqref{eq. controller} only. This strategy does not affect learning the autonomous part $A$ of the model since $u_k$ only enters the dynamics of the model through $B$, which is separate from $A$ as in \eqref{eq. Koopman model 1}. } \vspace{-3mm} \section{Numerical Examples} \vspace{-2mm} \label{section. numerical examples} In this section, the control objective is defined as stabilizing the system at the origin while minimizing the cost and LQR is used for the controller design. \vspace{-3mm} \subsection{Simple Pendulum} \vspace{-1.5mm} The simple pendulum is considered as the first example: \vspace{-1mm} \begin{align} \ddot{\theta} = -\sin\theta + u,\ \ (x_1:=\theta, x_2:=\dot{\theta}). \label{eq. pendulum} \end{align} We collect 300 data sets generated by \eqref{eq. pendulum}, each of which consists of a single trajectory of a length of 50 steps with the sampling period $\Delta t=0.1$ starting from an initial condition $x_0\sim \text{Uniform}[-3,3]^2$. The Runge-Kutta method is used to solve \eqref{eq. pendulum} with a step size of 0.01. We include one nonlinear feature $\tilde{g}(x_k)\in \mathbb{R}$ ($N=1$) in the model, which is a neural network with a single hidden layer consisting of 10 neurons and the swish function is used as the activation. Problem \ref{problem. initial training} is implemented \cc{in} the Python TensorFlow module. The following two types of $u_k$ are considered to evaluate the efficacy of the sampling strategy in Section \ref{section. 
using restricted inputs}: \vspace{-2mm} \begin{align} u_k&\sim \text{Uniform}[-1,1],& \label{eq. uk random} \\[-0.5ex] u_k&= \cos (\omega_i k\Delta t), \ \omega_i:= 20i, \ i=0,1,\cdots,5. & \label{eq. uk cos} \end{align} In the proposed method that adopts \eqref{eq. uk cos}, each single trajectory data set is split evenly into six groups $\mathcal{D}_i$ and $u_k$ with $\omega_i$ is included in $\mathcal{D}_i$ ($i=0,1,\cdots,5$). Figure \ref{fig. pendulum} shows the results of the models obtained by Problem \ref{problem. initial training}, where the state predictions (Figs. \ref{subfig. pendulum pred random} and \ref{subfig. pendulum pred cos}) are implemented according to \eqref{eq. state prediction} and the control simulations (Figs. \ref{subfig. pendulum cl random} and \ref{subfig. pendulum cl cos}) use LQR gains computed with the cost $\sum_{k=0}^{\infty} 100x_{k,1}^2+x_{k,2}^2+u_k^2$. Whereas the state-prediction accuracy barely changes with different types of $u_k$, the controller performance is greatly deteriorated if the model uses the randomly generated input \eqref{eq. uk random}. The controller designed for the model trained with the proposed sampling strategy successfully make the state converge to the origin. Since the control objective is already achieved by the initial model, Problem \ref{prob. second training} is not solved in this example. \begin{figure} \centering \begin{subfigure}{0.45\linewidth} \centering \includegraphics[width=0.95\linewidth]{fig/pendulum_const_off/figs/pred_g_constrained.png} \caption{State prediction of the model trained with \eqref{eq. uk random}.} \label{subfig. pendulum pred random} \end{subfigure} \begin{subfigure}{0.45\linewidth} \centering \includegraphics[width=0.95\linewidth]{fig/pendulum_const_off/figs/cl_g_constrained.png} \caption{Controller performance of the model trained with \eqref{eq. uk random}.} \label{subfig. pendulum cl random} \end{subfigure} \begin{subfigure}{0.45\linewidth} \centering \includegraphics[width=0.95\linewidth]{fig/pendulum/figs/pred_g_constrained.png} \caption{State prediction of the model trained with \eqref{eq. uk cos}.} \label{subfig. pendulum pred cos} \end{subfigure} \begin{subfigure}{0.45\linewidth} \centering \includegraphics[width=0.95\linewidth]{fig/pendulum/figs/cl_g_constrained.png} \caption{Controller performance of the model trained with \eqref{eq. uk cos}.} \label{subfig. pendulum cl cos} \end{subfigure} \caption{Results of the simple pendulum.} \label{fig. pendulum} \end{figure} \vspace{-3.8mm} \subsection{Inverted Pendulum on A Cart} \vspace{-1.5mm} The inverted pendulum on a cart is considered as the second example, whose dynamics is given as follows\cite{Data_driven_book}: \vspace{-2mm} \begin{align} & \dot{x}_2= \cfrac{ -\hspace{-1mm}m^2L^2g\cos x_3\sin x_3 \hspace{-1mm}+\hspace{-1mm} mL^2A(x_2,x_3,x_4) \hspace{-1mm}+\hspace{-1mm} mL^2u } {mL^2(M+m(1-\cos^2 x_3))}, &\nonumber \\ & \dot{x}_4=\hspace{-1mm} \cfrac{ \begin{array}{l} (m+M)mgL\sin x_3 -mL\cos x_3 A(x_2,x_3,x_4) \\ \hspace{47mm}+mL\cos x_3 u \end{array} } {mL^2(M+m(1-\cos^2 x_3))}, &\nonumber \end{align} \begin{comment} \begin{align} \left\{ \hspace{-2mm} \begin{array}{l} \ddot{x}_1= \cfrac{ -\hspace{-1mm}m^2L^2g\cos x_3\sin x_3 \hspace{-1mm}+\hspace{-1mm} mL^2A(x_2,x_3,x_4) \hspace{-1mm}+\hspace{-1mm} mL^2u } {mL^2(M+m(1-\cos^2 x_3))}, \\ \ddot{x}_3=\hspace{-1mm} \cfrac{ \begin{array}{l} (m+M)mgL\sin x_3 -mL\cos x_3 A(x_2,x_3,x_4) \\ \hspace{47mm}+mL\cos x_3 u \end{array} } {mL^2(M+m(1-\cos^2 x_3))}, \end{array} \right. 
\nonumber \end{align} \end{comment} $\dot{x}_1=x_2$, $\dot{x}_3=x_4$, where $A(x_2,x_3,x_4)=mL{x_4}^2\sin x_3 -\delta x_2$, $m=1$, $M=5$, $L=2$, $g=-10$, and $\delta=1$. Problem \ref{problem. initial training} is solved with the same conditions as the first example except for the number of hidden layers, which is changed to 25. Also, only the deterministic sampling \eqref{eq. uk cos} for $u_k$ is considered in this example. For the controller design, the cost is defined as $\sum_{k=0}^{\infty} 100x_{k,1}^2+x_{k,2}^2+100x_{k,3}^2+x_{k,4}^2+u_k^2$. While the initially learned model has reasonable predictive accuracy as in Figs. \ref{subfig. invp pred initial} and \ref{subfig. invp error contour initial}, the controller performance suffers from the undesirable modeling error effect, which is shown in Fig. \ref{subfig. invp control initial}. Thus, Problem \ref{prob. second training} is solved to modify the model, for which we additionally collect \cc{data points in the same way as Problem \ref{problem. initial training}}. The Python TensorFlow Constrained Optimization module\cite{tfco_paper} is used to solve Problem \ref{prob. second training} with $\epsilon_A=\epsilon_B=0.1$. It is shown in Fig. \ref{subfig. invp control modified} that the modified model achieves the control objective by updating the model parameters. As a more quantitative analysis, we estimate the basin of attraction of the closed-loop systems by testing various initial conditions, whose results are shown in Figs. \ref{subfig. invp basin of attraction initial} and \ref{subfig. invp basin of attraction modified}. Some initial conditions do not converge to the origin for the closed-loop system formed by the initial model, while it is shown that the given set of initial conditions is a basin of attraction for the one formed by the modified model. Moreover, the modified model retains good state-prediction accuracy thanks to the constraints on $\Delta A$ and $\Delta B$ and it is comparable to that of the initial model (Figs. \ref{subfig. invp pred initial} and \ref{subfig. invp pred modified}). Finally, the profiles of the state prediction errors are evaluated. Figures \ref{subfig. invp error contour initial} and \ref{subfig. invp error contour modified} show the heat maps of $\| [I\ 0]r(x,0) \|_2$ with $x_2=x_4=0$, which are overlaid with the data points used in Problem \ref{problem. initial training}. It is confirmed that both models have quite similar error profile and the good state-prediction accuracy of the initial model is successfully preserved after the modification. \begin{figure} \centering \begin{subfigure}{0.44\linewidth} \centering \includegraphics[width=0.95\linewidth]{fig/invp/figs/pred_g_constrained.png} \caption{State prediction of the initial model.} \label{subfig. invp pred initial} \end{subfigure} \begin{subfigure}{0.44\linewidth} \centering \includegraphics[width=0.95\linewidth]{fig/invp/figs/pred_gAB_constrained.png} \caption{State prediction of the modified model.} \label{subfig. invp pred modified} \end{subfigure} \begin{subfigure}{0.44\linewidth} \centering \includegraphics[width=0.95\linewidth]{fig/invp/error_contour_plot/inverted_pendulum/1/fold_2/g_constrained/error_contour.png} \caption{State prediction error of the initial model.} \label{subfig. 
invp error contour initial} \end{subfigure} \begin{subfigure}{0.44\linewidth} \centering \includegraphics[width=0.95\linewidth]{fig/invp/error_contour_plot/inverted_pendulum/1/fold_2/gAB_constrained/error_contour.png} \caption{State prediction error of the modified model.} \label{subfig. invp error contour modified} \end{subfigure} \begin{subfigure}{0.44\linewidth} \centering \includegraphics[width=0.95\linewidth]{fig/invp/figs/cl_g_constrained.png} \caption{Controller performance of the initial model.} \label{subfig. invp control initial} \end{subfigure} \begin{subfigure}{0.44\linewidth} \centering \includegraphics[width=0.95\linewidth]{fig/invp/figs/cl_gAB_constrained.png} \caption{Controller performance of the modified model.} \label{subfig. invp control modified} \end{subfigure} \begin{subfigure}{0.44\linewidth} \centering \includegraphics[width=0.95\linewidth]{fig/invp/basin_of_attraction/inverted_pendulum/1/fold_2/g_constrained/basin_of_attraction_ts.png} \caption{Estimate of the basin of attraction of the initial model.} \label{subfig. invp basin of attraction initial} \end{subfigure} \begin{subfigure}{0.44\linewidth} \centering \includegraphics[width=0.95\linewidth]{fig/invp/basin_of_attraction/inverted_pendulum/1/fold_2/gAB_constrained/basin_of_attraction_ts.png} \caption{Estimate of the basin of attraction of the modified model.} \label{subfig. invp basin of attraction modified} \end{subfigure} \caption{Results of the inverted pendulum on a cart. (c),(d): $x_2$ and $x_4$ are fixed to 0. (g),(h): Tested ranges are $x_1(0)\in [-13,6.9]$ and $x_3(0)\in [-2.5,2]$ with $x_2(0)=x_4(0)=0$.} \label{fig. invp} \end{figure} \vspace{-4.5mm} \section{Conclusion} \vspace{-3mm} In this paper, we propose a two-stage learning method, together with a simple but effective data sampling strategy, for Koopman operator-based control models, in which an additional learning stage follows the initial training in order to improve the controller performance of the models. Numerical examples showed that using input signal data deterministically sampled from continuous functions can improve the controller performance, and that the proposed two-stage learning further contributes to better closed-loop behavior by updating the model parameters while retaining the high state-prediction accuracy obtained by the initially learned model.
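For reference, the two input-sampling schemes \eqref{eq. uk random} and \eqref{eq. uk cos} compared in the experiments can be reproduced in a few lines. The sketch below is illustrative only: the step count, sampling period, and function names are our own choices rather than part of the original setup.

```python
import numpy as np

def sample_inputs_random(n_steps, seed=0):
    """Random excitation: u_k ~ Uniform[-1, 1] at every step."""
    rng = np.random.default_rng(seed)
    return rng.uniform(-1.0, 1.0, size=n_steps)

def sample_inputs_cosine(n_steps, dt, n_freqs=6):
    """Deterministic excitation: u_k = cos(omega_i * k * dt), omega_i = 20 i.
    One input sequence is returned per frequency, so the collected trajectory
    data can be split evenly into the groups D_0, ..., D_5."""
    k = np.arange(n_steps)
    return [np.cos(20.0 * i * k * dt) for i in range(n_freqs)]

# Example: 200 steps at an assumed sampling period of 0.01 s.
u_random = sample_inputs_random(200)
u_by_group = sample_inputs_cosine(200, dt=0.01)
```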
{ "timestamp": "2022-09-20T02:21:08", "yymm": "2209", "arxiv_id": "2209.08637", "language": "en", "url": "https://arxiv.org/abs/2209.08637" }
\section{Introduction} Vehicle similarity learning, also called vehicle re-identification (ReID), is a crucial technique for intelligent surveillance systems in a smart city. Its goal is to track vehicles with the same identity within a set of images captured by multiple cameras from various viewpoints. With the development of deep convolutional neural networks (DCNNs), several approaches for vehicle ReID~\cite{he2019part,he2021transreid,meng2020parsing,zhao2021heterogeneous} have been proposed and have achieved impressive performance. Similar to other high-level vision applications such as object detection and semantic segmentation~\cite{chen2021all}, although existing methods can handle vehicle ReID effectively on normal images, they have limited performance under inclement weather, especially in hazy scenarios. Haze is a common and inevitable weather phenomenon that deteriorates image content, leading to poor visual appearance and the loss of discriminative information for vehicle ReID. Thus, this field still has room for improvement. Inspired by previous dehazing work~\cite{cai2016dehazenet}, we can apply an atmospheric scattering model~\cite{koschmieder1924theorie} to synthesize hazy images and then train vehicle ReID models on the rendered images and the corresponding ID labels. Though this strategy can achieve decent performance on synthetic images, it has limited performance on real haze images due to the domain gap between synthetic and real-world images~\cite{chen2022sjdl}. While this issue could be resolved by adopting real haze images in the training stage, collecting real haze data and labeling the correct ground truths is difficult and troublesome. Another possible baseline strategy is to adopt existing dehazing approaches~\cite{chen2019pms,zhang2018densely} or a comprehensive image restoration method~\cite{zamir2021multi} as a pre-processing technique and then apply the ReID. Although the above strategies are shown to achieve promising results in haze removal, there is no guarantee that the selected pre-processing techniques would improve ReID, since the two tasks are performed separately and existing dehazing methods are designed for human perception rather than for ReID. Moreover, most existing dehazing methods require paired data to train the model, but it is infeasible to attain the ground truths of haze images in real-world scenes. Though we can adopt synthetic data to train the network, the domain gap problem may still exist, which may produce undesired dehazed results in real haze scenes and further limit the performance of ReID. \begin{figure*}[t!] \centering \includegraphics[width=1.0\textwidth]{images/teasor.png} \caption{\textbf{Illustration of different strategies to solve the hazy vehicle ReID problem.} One can see that our method outperforms other existing methods in terms of the mean average precision (mAP) and CMC@1. Moreover, other strategies may suffer from limited performance in real-world scenarios. We adopt CAL~\cite{rao2021counterfactual} and MPR-Net~\cite{zamir2021multi} as the ReID model and the dehazing model, respectively.} \label{fig:teasor_fig} \end{figure*} By the above analysis, there are two reasons that hinder the development of ReID in real haze scenes: (i) the scarcity of labels for real-world data and (ii) the lack of appropriate guidance for real haze.
In this paper, to mitigate these problems, we construct a novel training architecture based on the deep convolutional neural network (DCNN). Inspired by CycleGAN~\cite{isola2017image}, which can transform images between any two domains, we introduce a domain transfer technique in the proposed network and combine it with vehicle ReID. Specifically, the proposed method is trained in an end-to-end fashion under a semi-supervised paradigm, and the training process has two parts: supervised training for synthetic data and unsupervised training for real-world data. For the former part, the network learns the transformation between the two domains and extracts more discriminative features for vehicle ReID with full supervision from paired data (i.e., synthetic hazy images and the corresponding clear ground truths). For the latter part, we only leverage two sets of unpaired data (i.e., real hazy images and clear images) to strengthen the robustness of the domain transformation and the ReID in real-world scenes in an unsupervised manner. The idea of our method is that the domain transformation network transfers the input image (hazy or clear) between the two domains while preserving the same background information, and the ReID network extracts latent features for classification from the two images (i.e., the input and the transferred image). Inspired by cycle consistency~\cite{isola2017image,yan2020optical}, the two extracted embedding features should be identical since they come from the same vehicle. Thus, we can calculate the consistency between the two extracted features to optimize the network. Based on the semi-supervised training scheme, the use of synthetic data can guide the unsupervised stage and prevent unstable training of the network~\cite{isola2017image}. On the other hand, the use of real-world data can improve the generalization ability of our model to real data and further mitigate the domain gap problem that arises when synthetic data is applied in the training process~\cite{li2019semi}. Moreover, the domain transformation network can assist the ReID network in learning more discriminative features for ReID under real-world haze scenarios. By our design, the proposed method can perform vehicle ReID in hazy scenarios effectively without additional annotations of real hazy data, which are usually hard to obtain. Furthermore, our proposed training scheme can also be applied when vehicle ID annotations are available, achieving even better performance. The contributions of this paper are summarized as follows. \begin{itemize} \itemsep0em \item {A novel training paradigm based on semi-supervised learning and domain transformation is proposed to learn hazy vehicle ReID without the labels or clear ground truths of real-world data. We term it \textbf{R}obust \textbf{V}ehicle \textbf{S}imilarity \textbf{L}earning (\textbf{RVSL}). As depicted in \figref{fig:teasor_fig}, by combining the domain transfer technique with the ReID network, the proposed method can achieve decent performance in learning discriminative features under real haze scenes without using ID labels. Surprisingly, the proposed method achieves competitive performance compared with other existing methods trained with complete ID information.} \item {To constrain the unsupervised stage in the training process, we develop several loss functions, such as the embedding consistency loss, the colinear relation constraint, and the monotonously increasing dark channel loss, to improve the performance.
These loss functions enable the network to learn both domain transformation and ReID effectively in an unsupervised way. Experimental results prove the effectiveness of these loss functions.} \end{itemize} \section{Related Works} \noindent \textbf{Vehicle Re-identification.} With the great effort of data collection and annotation, several large-scale benchmarks for vehicle ReID, such as VehicleID~\cite{liu2016deep}, VeRi-776~\cite{liu2017provid}, VERI-Wild~\cite{lou2019veri}, and Vehicle-1M~\cite{guo2018learning}, have been proposed. Based on these well-developed benchmarks, several approaches~\cite{hermans2017defense,gao2020vehicle,he2020multi,li2021self,rao2021counterfactual,wang2017orientation} have been developed, most of which rely on DCNNs. We can divide them into the following categories. \noindent \smallskip\\ 1) \textit{Meta-information-based methods}, which integrate meta-information for feature learning. For example, Zheng \textit{et al.}~\cite{zheng2019attributes} leveraged additional information such as the camera view and the vehicle type and color to guide the network. Shen \textit{et al.}~\cite{shen2017learning} integrated visual-spatio-temporal path proposals and spatial-temporal relations into a Siamese-CNN+Path-LSTM network. Rao \textit{et al.}~\cite{rao2021counterfactual} proposed an attention mechanism with counterfactual causality, which enables the network to learn more useful attention for fine-grained features for ReID. \noindent \smallskip\\ 2) \textit{Local information-based methods}: Meng \textit{et al.}~\cite{meng2020parsing} adopted the common region information extracted by a vehicle part parser to improve the mutual representation information between different viewpoints. Khorramshahi \textit{et al.}~\cite{khorramshahi2020devil} applied a Variational Auto-Encoder (VAE) to find crucial detailed information, which can be regarded as a pseudo-attention map for highlighting discriminative regions. He \textit{et al.}~\cite{he2019part} combined local and non-local information based on a part-regularized mechanism. These strategies can preserve the variance among near-duplicate vehicles to improve the performance of vehicle ReID. Zhao \textit{et al.}~\cite{zhao2021heterogeneous} applied the Cross-camera Generalization Measure technique and integrated region-specific features and cross-level features together to improve the performance of ReID. \noindent \smallskip\\ 3) \textit{Generative Adversarial Network (GAN) based methods}: Zhou \textit{et al.}~\cite{zhou2018aware} applied a conditional multi-view generative network to extract global feature representations from various viewpoints and then adopted adversarial learning to facilitate feature generation. Lou \textit{et al.}~\cite{lou2019veri} designed FDA-Net to generate hard examples in the feature space based on the GAN to improve the robustness of ReID. Yao \textit{et al.}~\cite{yao2020simulating} proposed to adopt a 3D graphics engine to reduce the content gap between the existing datasets to suppress the domain gap problem. \noindent \smallskip\\ 4) \textit{Vision Transformer (ViT) based methods}: He \textit{et al.}~\cite{he2021transreid} leveraged the ViT to encode input images as a vector for embedding representation. To further improve representation learning, a jigsaw patch module and side information were adopted in the training scheme. Though the above methods can achieve decent vehicle ReID performance on clear images, they are still limited in real-world hazy image scenarios.
\noindent \smallskip\\ \textbf{Single Image Haze Removal.} Based on Koschmieder's model \cite{koschmieder1924theorie}, the formation of haze can be modeled by: \begin{equation} I\left(x\right)=J\left(x\right)t\left(x\right)+A\left(1-t\left(x\right)\right), \label{eq:fog model} \end{equation} where $I(x)$ is the hazy image, $J(x)$ is the haze-free image, and $A$ is the global atmospheric light. $t\left(x\right)=e^{-\beta d\left(x\right)}$ is the medium transmission map, where $\beta$ is the scattering coefficient and $d(x)$ is the depth from the camera to the object. Numerous haze removal methods have been proposed in the past decades. They can be classified into prior-based and deep learning-based methods. The former class explores prior knowledge relating hazy and haze-free images. For example, He \textit{et al.}~\cite{he2010single} proposed the dark channel prior, Zhu \textit{et al.}~\cite{zhu2015fast} developed the color attenuation prior, and Berman \textit{et al.} proposed the haze-line~\cite{berman2016non} to estimate the dehazed results. The other class applies DCNNs. For instance, Qu \textit{et al.}~\cite{Qu_2019_CVPR} proposed multi-resolution generators and discriminators for dehazing in a coarse-to-fine way. Dong \textit{et al.}~\cite{dong2020multi} used the strengthen-operate-subtract boosting strategy to improve the dehazing network. Wu \textit{et al.}~\cite{wu2021contrastive} proposed an auto-encoder-like framework with an additive mixup operation and a dynamic feature enhancement module to improve the quality of extracted features for dehazing. Zamir \textit{et al.}~\cite{zamir2021multi} proposed a multi-stage architecture that can encode a diverse set of features simultaneously to restore accurate outputs. Chen \textit{et al.}~\cite{chen2022learning} proposed a unified architecture which can handle multiple types of adverse weather with a single architecture. \begin{figure*}[t!] \centering \includegraphics[width=1.0\textwidth]{images/architecture.png} \caption{\textbf{The architecture of the proposed semi-supervised hazy vehicle ReID network.} Our method consists of supervised and unsupervised training stages for synthetic and real-world data.} \label{fig:architecture} \end{figure*} \section{Proposed Method} \subsection{Overview of the Proposed Method} As shown in \figref{fig:architecture}, there are five modules in the proposed network: two encoders for hazy and clear scenes ($\mathbf{E_{H}}$ and $\mathbf{E_{C}}$), and three decoders for hazy scenes, clear scenes, and ReID ($\mathbf{D_{H}}$, $\mathbf{D_{C}}$, and $\mathbf{D_{ReID}}$), respectively. These modules can be combined into two sub-networks, called the domain transformation network and the re-identification network. The features extracted by $\mathbf{E_{H}}$ and $\mathbf{E_{C}}$ are termed $F_{H}$ and $F_{C}$, respectively. As mentioned in Section 1, due to the lack of real haze data, in this paper, inspired by semi-supervision~\cite{li2019semi}, we apply both synthetic data and real data simultaneously in the learning process. At the supervised training stage, the synthetic data allow the network to learn the transformation between the two domains stably. At the unsupervised stage, we first take real clear images and then real hazy images as inputs. This operation enables our network to learn real-world information about hazy scenes, clear scenes, and ReID through the domain transformation network and the ReID network simultaneously. We illustrate the details in the following subsections.
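Before detailing the two sub-networks, the following minimal sketch shows how hazy training images can be rendered from clear ones with Koschmieder's model in \eqref{eq:fog model}. The depth map is assumed to come from a monocular depth estimator, as in the data-preparation step described later; the array shapes, parameter values, and function names are illustrative assumptions.

```python
import numpy as np

def synthesize_haze(clear, depth, beta, A):
    """Render a hazy image I = J*t + A*(1 - t), with t = exp(-beta * d).

    clear: HxWx3 float array in [0, 1] (the haze-free image J)
    depth: HxW float array (scene depth d, e.g. from a monocular estimator)
    beta:  scattering coefficient; A: global atmospheric light
    """
    t = np.exp(-beta * depth)[..., None]   # transmission map, HxWx1
    return clear * t + A * (1.0 - t)

# Example with the sampling ranges used later: beta in [0.4, 1.6], A in [0.5, 1].
rng = np.random.default_rng(0)
J = rng.random((256, 256, 3))
d = rng.random((256, 256)) * 5.0
I = synthesize_haze(J, d, beta=rng.uniform(0.4, 1.6), A=rng.uniform(0.5, 1.0))
```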
\subsection{Domain Transformation Network} The goal of the domain transformation network (DT-Net) is to help the ReID network learn haze-invariant features by transforming the domain of the input data. The details of this network are as follows. \\\\ \noindent\textbf{Architecture.} The DT-Net consists of two encoders ($\mathbf{E_{H}}$ and $\mathbf{E_{C}}$) and two decoders ($\mathbf{D_{H}}$ and $\mathbf{D_{C}}$). Given a hazy input, the decoder $\mathbf{D_{C}}$ generates the corresponding clear image based on the features $F_{H}$ extracted by the encoder $\mathbf{E_{H}}$. On the other hand, the decoder $\mathbf{D_{H}}$ takes the features $F_{C}$ extracted by the encoder $\mathbf{E_{C}}$ to produce the hazy image. The features (i.e., $F_{C}$ and $F_{H}$) extracted by the two encoders pass through a double convolution block and a deconvolution block for dimension matching. Then, the upsampled features are concatenated with the features extracted by the first convolution blocks in the encoders to improve feature diversity. This operation is based on the fact that the features in the shallow layers of the network contain richer spatial and contextual information, which benefits domain transformation~\cite{chen2021Desmoke,hui2020image}, while the deeper layers usually encode more high-level semantics. The concatenated features are passed through a double convolution block and a deconvolution block to reconstruct the final domain transformation results. The quality of the domain transformation is crucial for the ReID network since it affects the feature extraction of the input image. Thus, for synthetic data, we adopt the supervised loss $\mathcal{L}_{DT_{s}}$ to optimize the networks; for real data, we adopt the unsupervised losses $\mathcal{L}_{DT_{u}}$. \\\\ \noindent\textbf{Supervised Training Stage.} At this stage, we can train the network in a fully supervised way since the corresponding clear ground truths and ID labels are available. First, we adopt synthetic image pairs to train the DT-Net (i.e., $\mathbf{E_{H}}$, $\mathbf{E_{C}}$, $\mathbf{D_{H}}$, and $\mathbf{D_{C}}$). Specifically, the synthetic haze image $K_{H}^{S}$ and the corresponding clear ground truth $(K_{H}^{S})_{GT}$ are fed into the domain transformation network to calculate the domain transformation loss $\mathcal{L}_{DT_{s}}$ for synthetic data. This operation constrains the distance between the predicted results (i.e., the rendered hazy images and the rendered clear images) and the corresponding ground truths. The domain transformation loss $\mathcal{L}_{DT_{s}}$ can be formulated as follows: \begin{equation} \resizebox{0.55\hsize}{!}{ $\begin{split} \mathcal{L}_{DT_{s}}^{H\rightarrow C}=\frac{1}{M}\sum\limits_{i=1}^{M}\Vert \mathbf{D_{C}}[\mathbf{E_{H}}[K_{H}^{S}(i)]]-(K_{H}^{S})_{GT}(i)\Vert_{1}, \end{split}$} \end{equation} \begin{equation} \resizebox{0.55\hsize}{!}{ $\begin{split} \mathcal{L}_{DT_{s}}^{C\rightarrow H}=\frac{1}{M}\sum\limits_{i=1}^{M}\Vert \mathbf{D_{H}}[\mathbf{E_{C}}[(K_{H}^{S})_{GT}(i)]]-K_{H}^{S}(i)\Vert_{1}, \end{split}$} \end{equation} where $\Vert\cdot\Vert_{1}$ denotes the $L_{1}$ norm and $M$ indicates the number of images. $\mathcal{L}_{DT_{s}}^{C\rightarrow H}$ and $\mathcal{L}_{DT_{s}}^{H\rightarrow C}$ denote the domain transformation losses for `clear to haze' and `haze to clear', respectively. The $\mathcal{L}_{DT_{s}}$ loss is the summation of $\mathcal{L}_{DT_{s}}^{C\rightarrow H}$ and $\mathcal{L}_{DT_{s}}^{H\rightarrow C}$.
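As a minimal PyTorch sketch of the supervised domain-transformation loss (the encoder/decoder objects stand for $\mathbf{E_{H}}$, $\mathbf{E_{C}}$, $\mathbf{D_{H}}$, and $\mathbf{D_{C}}$; their implementations and the batch tensors are assumed given):

```python
import torch.nn.functional as F

def domain_transformation_loss(enc_h, enc_c, dec_h, dec_c, hazy, clear):
    """L_DTs = L1(D_C(E_H(hazy)), clear) + L1(D_H(E_C(clear)), hazy),
    i.e. the sum of the 'haze to clear' and 'clear to haze' terms."""
    haze_to_clear = dec_c(enc_h(hazy))   # rendered clear image
    clear_to_haze = dec_h(enc_c(clear))  # rendered hazy image
    return F.l1_loss(haze_to_clear, clear) + F.l1_loss(clear_to_haze, hazy)
```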
\\\\ \noindent\textbf{Unsupervised Training Stage for Clear Data.} At this stage, to train the DT-Net without hazy image ground truths, we first adopt the cycle-consistency mechanism. The input clear image $K_{C}^{R}$ is fed into the DT-Net to render the hazy image $K_{H}^{R'}$, where $K_{H}^{R'}=\mathbf{D_{H}}(\mathbf{E_{C}}(K_{C}^{R}))$. Then, we further feed the rendered image into the DT-Net to generate the rendered clear image $K_{C}^{R''}$, where $K_{C}^{R''}=\mathbf{D_{C}}(\mathbf{E_{H}}(K_{H}^{R'}))$. At the same time, several loss functions are adopted to optimize the network. The loss at this stage, $\mathcal{L}_{DT_{uc}}$, consists of the rendering consistency loss ($\mathcal{L}_{RC}$), the monotonously increasing dark channel loss ($\mathcal{L}_{MIDC}$), the colinear relation constraint ($\mathcal{L}_{CR}$), and the discriminative loss ($\mathcal{L}_{Dis}$). We illustrate each of them as follows. \\\\ (i) \textit{Rendering Consistency Loss.} This loss constrains the learning process of the domain transformation network (i.e., $\mathbf{E_{H}}$, $\mathbf{E_{C}}$, $\mathbf{D_{H}}$, and $\mathbf{D_{C}}$). We adopt the pixel-wise difference between the clear input image $K_{C}^{R}$ and the rendered clear image $K_{C}^{R''}$ to ensure that the domain transformation can be conducted between the two domains robustly. This loss is formulated as follows: \begin{equation} \centering \resizebox{0.38\hsize}{!}{ $\begin{split} \mathcal{L}_{RC} = \frac{1}{M} \sum_{i=1}^{M} \vert\vert K_{C}^{R}(i)-K_{C}^{R''}(i)\vert\vert_{1}. \label{eq:RC_loss} \end{split}$} \end{equation} \\ \smallskip (ii) \textit{Monotonously Increasing Dark Channel Loss.} To further improve the image quality of the rendered haze images, inspired by the dark channel prior (DCP)~\cite{he2010single}, we propose the monotonously increasing dark channel loss $\mathcal{L}_{MIDC}$. The DCP demonstrates that for most natural clear images, the dark channel values tend to be close to zero. Specifically, it can be defined as: \begin{equation} \centering \resizebox{0.45\hsize}{!}{ $\begin{split} \textit{DC}(\textit{J})\left(x\right)=\underset{y\in\Omega\left(x\right)}{min}\left(\underset{c\in\left\{r,g,b\right\}}{min}J^{c}\left(y\right)\right)\approxeq0, \label{eq:dcp} \end{split}$ } \end{equation} where \textit{DC}($\cdot$) is the dark channel operation, $J^{c}(y)$ is the intensity in the color channel \textit{c}, and \textit{\ensuremath{\Omega }}(\textit{x}) is a local patch with a fixed size centered at \textit{x}. From this prior, it follows that, given an image deteriorated by haze, its dark channel value is generally higher than that of the original clear image (i.e., $DC(I)(x)\geq DC(J)(x)$). Based on this idea, we propose $\mathcal{L}_{MIDC}$, which is defined as: \begin{equation} \centering \resizebox{0.6\hsize}{!}{ $\begin{split} \mathcal{L}_{MIDC} = \frac{1}{M} \sum_{i=1}^{M} {DM}(i) \vert\vert DC(K_{H}^{R'})(i)-DC(K_{C}^{R})(i)\vert\vert_{1}, \label{eq:MIDC_loss} \end{split}$} \end{equation} where $DM(i)$ is a binary map that identifies the region where the dark channel values of the clear image $K_{C}^{R}$ are higher than those of its rendered haze result $K_{H}^{R'}$ (i.e., $DC(K_{H}^{R'})(x) < DC(K_{C}^{R})(x)$). With this mask, we can prevent the rendered pixels from taking irrational values and further improve the robustness of the domain transformation.
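A possible PyTorch realisation of the dark channel operation and $\mathcal{L}_{MIDC}$ is sketched below (a $5\times5$ local patch is assumed, matching the implementation details later; tensor shapes are (B, 3, H, W) and all names are illustrative):

```python
import torch
import torch.nn.functional as F

def dark_channel(img, patch=5):
    """DC(J)(x): min over a local patch of the per-pixel min over RGB.
    A patch-wise min is computed as the negated max-pool of the negation."""
    min_rgb = img.min(dim=1, keepdim=True).values                    # (B,1,H,W)
    return -F.max_pool2d(-min_rgb, patch, stride=1, padding=patch // 2)

def midc_loss(rendered_hazy, clear):
    """Penalise only the pixels where the rendered haze has a *lower* dark
    channel than the clear input, i.e. where DC(K_H') < DC(K_C)."""
    dc_hazy, dc_clear = dark_channel(rendered_hazy), dark_channel(clear)
    mask = (dc_hazy < dc_clear).float()          # the binary map DM
    return (mask * (dc_hazy - dc_clear).abs()).mean()
```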
\smallskip\\\\ (iii) \textit{Colinear Relation Constraint.} Due to the inaccessibility of the ground truths of real-world hazy images, the domain transformation process may generate undesired fake image content without appropriate training. Although we can leverage the synthetic data to guide the network at the synthetic-data training stage, this is not applicable at the real-world-data training stage. To further strengthen the robustness of the transformation (i.e., $\mathbf{E_{C}}$ and $\mathbf{D_{H}}$), inspired by the haze-line prior~\cite{berman2016non,yan2020optical}, we develop the colinear relation constraint $\mathcal{L}_{CR}$. Based on the physical model of haze illustrated in \eqref{eq:fog model}, Berman \textit{et al.}~\cite{berman2016non} observe that the clear image $J$, the hazy image $I$, and the atmospheric light $A$ are colinear in RGB space (i.e., $I(x)-A=t(x)(J(x)-A)$). We can adopt this relation to constrain the training process of the network for real-world scenarios. We define the colinear relation constraint as follows: \begin{equation} \centering \resizebox{0.6\hsize}{!}{ $\begin{split} \mathcal{L}_{CR} = \frac{1}{M} \sum_{i=1}^{M} \left[1-\phi(K_{C}^{R}(i)-A(i),K_{H}^{R'}(i)-A(i))\right], \label{eq:cr_loss} \end{split}$} \end{equation} where $A(i)$ is the atmospheric light estimated from the rendered hazy image $K_{H}^{R'}(i)$ and $\phi(\cdot)$ denotes the cosine similarity. Different from~\cite{yan2020optical}, we adopt the atmospheric light estimation method in~\cite{he2010single} in this loss. With this loss, the consistency of structure and color can be further constrained. \\\\ \smallskip (iv) \textit{Discriminative Loss.} To further constrain the unsupervised domain transformation, we adopt the discriminative loss~\cite{goodfellow2014generative} in the training process to distinguish whether the rendered hazy image $K_{H}^{R'}$ is real or fake. In our method, we adopt the saturating discriminative loss~\cite{gui2021review}. \\\\ \noindent\textbf{Unsupervised Training Stage for Real Hazy Data.} At this stage, real hazy images are adopted to optimize the network without clean ground truths and ID labels. As in the previous stage, the hazy images are fed into the DT-Net (i.e., $\mathbf{E_{H}}$ and $\mathbf{D_{C}}$) to generate the clear images $K_{C}^{R'}$. Subsequently, the rendered clear images are fed into the DT-Net to obtain the rendered hazy images $K_{H}^{R''}$. To optimize our framework, apart from the monotonously increasing dark channel loss ($\mathcal{L}_{MIDC}$) and the colinear relation constraint ($\mathcal{L}_{CR}$), the remaining losses in $\mathcal{L}_{DT_{uc}}$ are adopted. Moreover, to improve the clear images predicted by the DT-Net, we introduce two losses: the dark channel loss $\mathcal{L}_{DC}$, to curb residual haze, and the total variation loss $\mathcal{L}_{TV}$, to prevent noise generation. They can be formulated as follows: \begin{equation} \resizebox{0.35\hsize}{!}{ $\begin{split} \mathcal{L}_{DC}=\frac{1}{M}\sum\limits_{i=1}^{M}\Vert DC(K_{C}^{R'}(i))\Vert_{1}, \end{split}$} \end{equation} \begin{equation} \resizebox{0.53\hsize}{!}{ $\begin{split} \mathcal{L}_{TV}=\frac{1}{M}\sum\limits_{i=1}^{M}\Vert \bigtriangledown_{x}K_{C}^{R'}(i)\Vert_{1}+\Vert\bigtriangledown_{y}K_{C}^{R'}(i)\Vert_{1}, \end{split}$} \end{equation} where $\bigtriangledown_{x}$ and $\bigtriangledown_{y}$ denote the gradient operations along the horizontal and vertical directions, respectively.
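The remaining unsupervised losses admit equally compact sketches (again illustrative PyTorch, not the authors' implementation; `A` is the estimated atmospheric light, assumed broadcastable to the image shape, and `dark_channel` is the operator from the previous sketch, passed in here to keep the snippet self-contained):

```python
import torch.nn.functional as F

def colinear_loss(clear, rendered_hazy, A):
    """L_CR: per-pixel (1 - cos) between (J - A) and (I - A) in RGB space,
    following the colinearity I - A = t * (J - A) of the haze model."""
    cos = F.cosine_similarity(clear - A, rendered_hazy - A, dim=1)  # (B,H,W)
    return (1.0 - cos).mean()

def dc_loss(dehazed, dark_channel):
    """L_DC: push the dark channel of the predicted clear image towards zero."""
    return dark_channel(dehazed).abs().mean()

def tv_loss(dehazed):
    """L_TV: L1 norms of the horizontal and vertical image gradients."""
    dx = (dehazed[..., :, 1:] - dehazed[..., :, :-1]).abs().mean()
    dy = (dehazed[..., 1:, :] - dehazed[..., :-1, :]).abs().mean()
    return dx + dy
```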
\subsection{Re-identification Network} The re-identification network (ReID-Net) aims to extract discriminative features to search for images with the same identity in the gallery. The details of the architecture and training are illustrated as follows. \\\\ \noindent\textbf{Architecture.} The ReID-Net consists of two encoders (i.e., $\mathbf{E_{H}}$ and $\mathbf{E_{C}}$) and one decoder (i.e., $\mathbf{D_{ReID}}$). It adopts ResNet-50~\cite{he2016deep} as the backbone, where we apply the first two convolution blocks as the architecture of the two encoders. As shown in~\figref{fig:architecture}, the extracted features $F_{H}$ or $F_{C}$ are fed to the decoder $\mathbf{D_{ReID}}$ to generate the ReID results, where the decoder consists of the remaining convolution blocks of ResNet-50, and the extracted features are down-scaled by global average pooling (GAP) and batch normalization (BN) to generate 2048-d embedding features $F_{ReID}$. Last, we adopt a fully connected layer (FC layer) to match the number of identities for the classification. For the supervised learning of synthetic data, since we have the corresponding ID labels, we adopt the triplet loss $\mathcal{L}_{Tri}$ and the ID loss $\mathcal{L}_{ID}$. For the unsupervised learning stage, due to the lack of ID labels, the embedding consistency loss $\mathcal{L}_{EC}$ is adopted to constrain the network. This architecture enables our two encoders to learn domain-adaptive features because the features extracted by the two encoders, which work on different domains, are fed into the same decoder. \\\\ \noindent\textbf{Supervised Training Stage.} At this stage, we train the re-identification network (i.e., $\mathbf{E_{C}}$, $\mathbf{E_{H}}$, and $\mathbf{D_{ReID}}$) by adopting the triplet loss~\cite{hermans2017defense} $\mathcal{L}_{Tri}$ and the ID loss $\mathcal{L}_{ID}$, which can be defined as follows: \begin{equation} \resizebox{0.7\hsize}{!}{ $\begin{split} \mathcal{L}_{Tri}=\frac{1}{M} \sum_{i=1}^{M}\sum_{k}\left[\max_{z_{p}\in\mathcal{P}(z_{i}^{k})}D(z_{i}^{k},z_{p})-\min_{z_{n}\in\mathcal{N}(z_{i}^{k})}D(z_{i}^{k},z_{n})+\delta \right]_{+} \end{split}$} \label{eq:triploss} \end{equation} \begin{equation} \resizebox{0.4\hsize}{!}{ $\begin{split} \mathcal{L}_{ID} = - \frac{1}{M} \sum_{i=1}^{M}\sum_{k} \log \frac{\exp(\sigma_{i}^{y_{i}^{k}})}{\sum_{j = 1}^{C} \exp(\sigma_{i}^{j})} \end{split}$} \label{eq:idloss} \end{equation} where $k\in\{(K_{H}^{S})_{GT},K_{H}^{S}\}$. $\mathcal{P}(z_{i}^{k})$ and $\mathcal{N}(z_{i}^{k})$ denote the positive and negative sample sets, respectively. $z_{i}^{k}$ represents the embedding features extracted from the $i^{th}$ input sample (i.e., $(K_{H}^{S})_{GT}(i)$ or $K_{H}^{S}(i)$). $\delta$ is the margin of the triplet loss, $D(\cdot,\cdot)$ is the Euclidean distance, and $[\cdot]_{+}$ equals $\max(\cdot,0)$. For $\mathcal{L}_{ID}$, $\sigma_{i}^{j}$ is the output of the FC layer for class $j$ based on the $i^{th}$ input image, $C$ denotes the total number of classes, and $y_{i}^{k}$ denotes the ground-truth class. The ReID loss $\mathcal{L}_{ReID_{s}}$ at this stage is the combination of $\mathcal{L}_{ID}$ and $\mathcal{L}_{Tri}$. \\\\ \noindent\textbf{Unsupervised Training Stage.} At this stage, we feed both real clear data and real hazy data separately.
Due to the lack of ID labels, to train the ReID network (i.e., $\mathbf{E_{C}}$, $\mathbf{E_{H}}$, and $\mathbf{D_{ReID}}$) with real clear inputs, we develop the \textit{embedding consistency loss} ($\mathcal{L}_{EC}$), which measures the distance between the two embedding features extracted from the input clear image and the rendered haze image. Initially, a given clear image $K_{C}^{R}$ is fed into the DT-Net to render the hazy image $K_{H}^{R'}$ and into the ReID-Net to extract the embedding feature $(F_{ReID})_{C}^{R}$. Then, we further feed the rendered image into the ReID-Net to produce the embedding feature $(F_{ReID})_{H}^{R'}$. We can calculate the loss between $(F_{ReID})_{H}^{R'}$ and $(F_{ReID})_{C}^{R}$ because they correspond to the same vehicle. By using this loss, haze-invariant features can be learned by the ReID-Net effectively. The mathematical expression of this loss is defined as follows: \begin{equation} \centering \resizebox{0.5\hsize}{!}{ $\begin{split} \mathcal{L}_{EC} = \frac{1}{M} \sum_{i=1}^{M} \vert\vert [F_{ReID}(i)]_{C}^{R}-[F_{ReID}(i)]_{H}^{R'}\vert\vert_{1} \label{eq:EC_loss} \end{split}$} \end{equation} Likewise, when the input is a real hazy image (i.e., $K_{H}^{R}$), the same mechanism is adopted. \section{Experiments} \subsection{Dataset and Evaluation Protocols} \noindent \textbf{Dataset Preparation.} The proposed semi-supervised scheme is trained with both synthetic and real-world haze data. We select haze-free images from the Vehicle-1M and VERI-Wild datasets. Subsequently, we apply the haze synthesis procedure proposed in~\cite{li2018benchmarking} to these images. First, we adopt the method in~\cite{liu2015learning} to estimate the depth map $d$. Then, we render haze on these clear images by \eqref{eq:fog model} with the predicted depth maps and set $\beta\in[0.4, 1.6]$ and $A\in[0.5, 1]$. Each clear image generates exactly one hazy image, and all rendered images are divided into training and testing sets. For the real haze data, we survey all existing datasets and find that only the Vehicle-1M and VERI-Wild datasets contain cases in hazy weather. Thus, we carefully select vehicle images under hazy scenarios from the two datasets. The selected images are split into training and testing sets. The details and examples of the two types of data are presented in \tabref{tab:syndataset}, \tabref{tab:realdataset}, and \figref{fig:dataset-ex}, respectively. \noindent \smallskip\\ \textbf{Evaluation Protocols.} Following the evaluation protocols proposed in~\cite{lou2019veri,guo2018learning}, we randomly select one hazy image for each vehicle and put it into the probe set. The remaining images form the gallery set. We adopt the cumulative matching characteristic (CMC) curve and mean average precision (mAP) to evaluate the performance. \begin{figure}[t!] \centering \includegraphics[width=0.6\textwidth]{images/dataset.png} \caption{\textbf{Examples of the images in the synthetic dataset and the real-world datasets for vehicle ReID.}} \label{fig:dataset-ex} \end{figure} \begin{figure}[t!]
\begin{minipage}[t]{1.0\textwidth} \begin{minipage}[t]{0.5\textwidth} \captionof{table}{\textbf{Details of the synthetic dataset.} (IDs/Images)} \centering \label{tab:syndataset} \scalebox{0.7}{ \begin{tabular}{cccc} \toprule \textbf{Set} & \textbf{Train} & \textbf{Probe} & \textbf{Gallery} \\ \hline\hline \textbf{VERI-Wild} & 1167/19532 & 389/389 & 389/6125 \\ \textbf{Vehicle-1M} & 1833/23026 & 611/611 & 611/7093 \\ \textbf{Total} & 3000/42558 & 1000/1000 & 1000/13218 \\ \bottomrule \end{tabular} } \end{minipage} \hspace{0.3em} \begin{minipage}[t]{0.5\textwidth} \captionof{table}{\textbf{Details of the real-world dataset.} (IDs/Images)} \centering \label{tab:realdataset} \scalebox{0.7}{ \begin{tabular}{cccc} \toprule \textbf{Set} & \textbf{Train} & \textbf{Probe} & \textbf{Gallery} \\ \hline\hline \textbf{VERI-Wild} & 156/2472 & 389/389 & 389/5985 \\ \textbf{Vehicle-1M} & 247/2579 & 611/611 & 611/6242 \\ \textbf{Total} & 403/5051 & 1000/1000 & 1000/12227 \\ \bottomrule \end{tabular} } \end{minipage} \end{minipage} \end{figure} \subsection{Implementation Details} \noindent \textbf{Training Stage.}\footnote{More details about training each stage and results are presented in the Supplementary Material.} For the proposed ReID network, ResNet-50 is adopted as the backbone, whose weights are initialized from a model pre-trained on ImageNet. In the synthetic-data training stage, the dimension of the FC layer is set to 3000. The weights of the domain transformation network are initialized by Kaiming normalization~\cite{he2015delving}. The whole network is trained in an end-to-end fashion on the training sets of the synthetic and real-world datasets, learning domain transformation, vehicle ReID, and ID classification simultaneously. The input image is resized to 384 $\times$ 384. The training batch sizes at the synthetic-data and real-world-data stages are 72 ($2M$) and 36 ($M$), respectively. The local patch size in the dark channel operation is $5\times5$. We apply data augmentation in the training process, including random cropping and horizontal flipping. The warm-up training strategy is adopted for 120 epochs. The Adam optimizer is adopted with a decay rate of 0.6. The initial learning rate is $1.09\times10^{-5}$, which increases to $10^{-4}$ after the $10^{th}$ epoch. At the synthetic-data training stage, we adopt the synthetic dataset in \tabref{tab:syndataset} and randomly select one hazy image for each vehicle for the probe set; the rest of the images form the gallery set. At the real-world-data training stage, we apply 5051 clear images and 5051 hazy images, respectively, without ID labels. The network is trained on an Nvidia Tesla V100 GPU for 3 days, and we implement it using PyTorch. \noindent \smallskip\\ \textbf{Inference Stage.} At the inference stage, the encoder $\mathbf{E_{C}}$ and the two decoders of the domain transformation network ($\mathbf{D_{C}}$ and $\mathbf{D_{H}}$) are not involved, so the computational burden caused by them can be ignored. The Euclidean distance $D$ is computed between embedding features to evaluate the performance. \begin{figure}[t!] \begin{minipage}[t]{0.5\textwidth} \captionof{table}{\textbf{Quantitative evaluation on the hazy vehicle ReID scenario.} The words in \textbf{boldface} indicate the best results, and those with \underline{underline} indicate the second-best results.
The texts 'S' and 'R' indicate synthetic and real-world datasets.} \label{tab:quantitative} \centering \scalebox{0.65}{\begin{tabular}{ccccccccc} \toprule \multirow{2}{*}{\textbf{Method}} & \multicolumn{2}{c}{\textbf{mAP}} & \multicolumn{2}{c}{\textbf{CMC@1}} & \multicolumn{2}{c}{\textbf{CMC@5}} & \multicolumn{2}{c}{\textbf{CMC@10}} \\ \cline{2-9} & \textbf{S} & \textbf{R} & \textbf{S} & \textbf{R} & \textbf{S} & \textbf{R} & \textbf{S} & \textbf{R} \\ \hline \textbf{VRCF} & 25.90 & 36.60 & 61.70 & 63.70 & 76.50 & 78.80 & 81.30 & 83.20 \\ \textbf{VRCF-dehaze} & 61.50 & 50.80 & 85.40 & 78.00 & 95.10 & 92.00 & 97.20 & 95.40 \\ \textbf{VRCF-haze} & 69.00 & 58.00 & 88.60 & 81.10 & 97.60 & 93.80 & 98.40 & 96.80 \\ \textbf{VRCF-all} & 73.00 & 64.40 & 90.90 & 85.50 & 97.30 & 95.70 & 98.60 & 97.80 \\ \textbf{VOC} & 59.70 & 57.40 & 86.10 & 82.80 & 94.30 & 94.00 & 95.60 & 96.60 \\ \textbf{VOC-dehaze} & 63.40 & 49.20 & 87.00 & 74.10 & 94.80 & 89.90 & 96.50 & 94.30 \\ \textbf{VOC-haze} & 67.10 & 59.90 & 88.70 & 83.50 & 95.10 & 94.00 & 96.50 & 97.20 \\ \textbf{VOC-all} & 84.20 & 78.70 & 93.60 & 91.00 & 97.60 & 96.30 & 98.30 & 98.30 \\ \textbf{DMT} & 73.90 & 71.70 & 93.40 & 93.20 & 97.20 & 97.40 & 97.90 & 98.50 \\ \textbf{DMT-dehaze} & 75.10 & 71.60 & 93.40 & 92.40 & 96.90 & 97.50 & 98.30 & 98.40 \\ \textbf{DMT-haze} & 77.30 & 73.40 & 94.00 & 93.40 & 97.60 & 97.60 & 98.60 & 98.80 \\ \textbf{DMT-all} & 82.50 & 80.90 & 98.30 & 96.10 & 98.20 & 98.20 & 98.80 & 99.00 \\ \textbf{VehicleX} & 63.64 & 61.56 & 86.50 & 83.20 & 95.00 & 95.20 & 97.40 & 97.90 \\ \textbf{VehicleX-dehaze} & 73.06 & 64.82 & 89.70 & 83.90 & 96.70 & 95.10 & 98.20 & 97.60 \\ \textbf{VehicleX-haze} & 77.86 & 69.01 & 91.20 & 84.80 & 97.10 & 96.10 & 98.70 & 98.10 \\ \textbf{VehicleX-all} & 80.75 & 76.39 & 93.10 & 89.90 & 97.60 & 96.90 & 98.60 & 98.40 \\ \textbf{TransReID} & 62.90 & 64.00 & 82.40 & 77.70 & 92.30 & 88.80 & 98.40 & 94.00 \\ \textbf{TransReID-dehaze} & 66.80 & 65.30 & 83.00 & 76.60 & 94.10 & 89.90 & 98.10 & 94.60 \\ \textbf{TransReID-haze} & 73.90 & 72.10 & 84.80 & 82.60 & 95.20 & 90.70 & 98.70 & 95.60 \\ \textbf{TransReID-all} & 79.20 & 76.90 & 89.40 & 84.50 & 96.80 & 93.20 & 98.90 & 97.30 \\ \textbf{PVEN} & 72.83 & 75.36 & 63.73 & 66.48 & 84.39 & 86.53 & 89.65 & 91.20 \\ \textbf{PVEN-dehaze} & 81.70 & 78.13 & 73.29 & 69.47 & 92.50 & 89.16 & 96.04 & 93.43 \\ \textbf{PVEN-haze} & 84.55 & 81.92 & 76.60 & 74.09 & 95.02 & 92.15 & 97.84 & 95.66 \\ \textbf{PVEN-all} & \underline{88.63} & \underline{84.08} & 83.55 & 78.31 & 98.45 & 95.40 & 99.20 & 97.76 \\ \textbf{HRCN} & 81.22 & 71.77 & 92.00 & 85.30 & 97.60 & 95.40 & 99.10 & 97.50 \\ \textbf{HRCN-dehaze} & 83.44 & 72.78 & 92.20 & 84.60 & 98.00 & 96.10 & 99.00 & 97.80 \\ \textbf{HRCN-haze} & 85.40 & 78.64 & 92.80 & 89.40 & \underline{98.50} & 96.70 & 99.10 & 98.40 \\ \textbf{HRCN-all} & 87.91 & 81.41 & 94.60 & 91.80 & 98.20 & 97.30 & \textbf{99.30} & 99.00 \\ \textbf{CAL} & 75.52 & 75.94 & 92.50 & 91.70 & 96.50 & 97.60 & 97.90 & 98.40 \\ \textbf{CAL-dehaze} & 83.21 & 77.49 & 94.80 & 94.00 & 98.30 & 98.00 & 98.90 & 98.80 \\ \textbf{CAL-haze} & 86.00 & 80.31 & 95.00 & 94.20 & 97.90 & \underline{98.30} & 98.90 & \underline{99.10} \\ \textbf{CAL-all} & 88.20 & 83.84 & \underline{96.30} & \textbf{96.00} & 98.40 & 98.20 & 98.90 & 99.00 \\ \hline \textbf{Ours} & \textbf{88.66} & \textbf{84.12} & \textbf{96.70} & \underline{95.60} & \textbf{98.60} & \textbf{98.60} & \textbf{99.30} & \textbf{99.30} \\ \textbf{Ours-F} & \textbf{89.14} & \textbf{87.72} & \textbf{96.50} & \textbf{96.90} & 
\textbf{98.60} & \textbf{98.40} & \textbf{99.40} & \textbf{99.60} \\ \bottomrule \end{tabular} } \end{minipage} \hspace{0.5em} \begin{minipage}[t]{0.7\textwidth} \begin{minipage}[t]{0.7\textwidth} \captionof{table}{\textbf{Ablation study for each module in the real-world test set.}} \centering \scalebox{0.7}{\begin{tabular}{ccccc} \toprule \multirow{2}{*}{\textbf{Method}} & \multicolumn{4}{c}{\textbf{Metric}} \\ \cline{2-5} & \textbf{mAP} & \textbf{CMC@1} & \textbf{CMC@5} & \textbf{CMC@10} \\ \hline\hline \textbf{Baseline-haze} & 76.17 & 93.40 & 97.50 & 98.50 \\ \textbf{Baseline-all} & 77.34 & 93.40 & 97.60 & 98.60 \\ \cdashline{1-5} \textbf{Ours w/o} $\mathcal{L}_{CR}$\& $\mathcal{L}_{MIDC}$ & 76.02 & 93.50 & 97.20 & 98.60 \\ \textbf{Ours w/o} $\mathcal{L}_{MIDC}$ & 81.19 & 94.20 & 98.20 & 99.00 \\ \textbf{Ours w/o} $\mathcal{L}_{CR}$ & 82.27 & 95.30 & 98.00 & 99.20 \\ \cdashline{1-5} \textbf{Ours w/o} $\mathcal{L}_{DC}$\&$\mathcal{L}_{TV}$ & 77.31 & 93.63 & 97.38 & 98.80 \\ \textbf{Ours w/o} $\mathcal{L}_{DC}$ & 81.20 & 94.30 & 98.30 & 98.85 \\ \textbf{Ours w/o} $\mathcal{L}_{TV}$ & 82.50 & 94.60 & 98.50 & 98.90 \\ \cdashline{1-5} \textbf{Ours}& \textbf{84.12} & \textbf{95.60} & \textbf{98.60} & \textbf{99.30} \\ \bottomrule \end{tabular} } \label{tab:ablation} \end{minipage} \\ \hfill \begin{minipage}[t]{0.7\textwidth} \captionof{table}{\textbf{Comparison of performance for using different blocks as encoders $\mathbf{E_{C}}$ and $\mathbf{E_{H}}$ in the real-world test set.}} \centering \scalebox{0.8}{\begin{tabular}{ccccc} \toprule \multirow{1}{*}{\textbf{}} & \multicolumn{1}{c}{\textbf{mAP}} & \multicolumn{1}{c}{\textbf{CMC@1}} & \multicolumn{1}{c}{\textbf{CMC@5}} & \multicolumn{1}{c}{\textbf{CMC@10}} \\ \hline\hline \textbf{Conv\_2} & \textbf{84.12} & \textbf{95.60} & \textbf{98.60} & \textbf{99.30} \\ \textbf{Conv\_3} & 83.84 & 94.90 & 98.20 & 98.90 \\ \textbf{Conv\_4} & 82.56 & 94.40 & 98.20 & 99.00 \\ \textbf{Conv\_5} & 80.82 & 94.90 & 98.10 & 99.00 \\ \bottomrule \end{tabular} } \label{tab:convblock_compare} \end{minipage} \leavevmode \begin{minipage}[t]{0.7\textwidth} \captionof{table}{\textbf{Ablation study for using different training data stages in real-world test set.}} \centering \scalebox{0.75}{ \begin{tabular}{ccccc} \toprule \multirow{2}{*}{\textbf{Stage}} & \multicolumn{4}{c}{\textbf{Metric}} \\ \cline{2-5} & \textbf{mAP} & \textbf{CMC@1} & \textbf{CMC@5} & \textbf{CMC@10} \\ \hline\hline \textbf{Syn} & 78.17 & 92.20 & 96.70 & 97.20 \\ \textbf{Syn+RC} & 80.03 & 94.10 & 98.20 & 98.90 \\ \textbf{Syn+RH} & 81.26 & 94.60 & 98.10 & 98.90 \\ \textbf{Ours}& \textbf{84.12} & \textbf{95.60} & \textbf{98.60} & \textbf{99.30} \\ \bottomrule \end{tabular} } \label{tab:multistage} \end{minipage} \end{minipage} \end{figure} \begin{figure*}[t!] \centering \includegraphics[width=1.0\textwidth]{images/rank10_sample1.png}{} \makeatother \caption{\textbf{Visual results of the ranking list on the real-world dataset.} The query images are in the first column and the retrieved top-10 ranking results are in the rest columns. We denote the correct retrieved images with a green border while the false instances are with a red border.} \label{fig:rank_10} \end{figure*} \begin{figure}[t!] 
\centering \includegraphics[width=0.8\textwidth]{images/C2F_result.png} \caption{\textbf{Visual comparison of using the unsupervised losses $\mathcal{L}_{CR}$ and $\mathcal{L}_{MIDC}$ in the domain transformation network.} These loss functions benefit the rendering process and generate more desirable results.} \label{fig:visual_defog_compare} \end{figure} \subsection{Comparison with the Existing Methods} To evaluate the performance of the proposed method, we compare the proposed algorithm with state-of-the-art ReID methods: VRCF~\cite{gao2020vehicle}, VOC~\cite{zhu2020voc}, DMT~\cite{he2020multi}, CAL~\cite{rao2021counterfactual}, VehicleX~\cite{yao2020simulating}, TransReID~\cite{he2021transreid}, PVEN~\cite{meng2020parsing}, and HRCN~\cite{zhao2021heterogeneous}. For a fair and comprehensive comparison, these methods are retrained with the following training sets: (i) the ground truth clear images in the synthetic dataset; (ii) the hazy images from the training sets of both the synthetic and real-world haze datasets (denoted with '-haze'); (iii) the two-stage strategy (i.e., dehazing+ReID), denoted with '-dehaze'. Specifically, this strategy combines a dehazing method for pre-processing with the ReID models trained under setting (i). For the dehazing method, we adopt one of the state-of-the-art dehazing methods, MPR-Net~\cite{zamir2021multi}, which was retrained on hazy vehicle images. (iv) The same training images used in our method, including the synthetic haze, real-world clear, and real-world haze datasets (denoted with '-all'). All the aforementioned settings use complete ID labels. The results are reported in~\tabref{tab:quantitative}. We can observe the following. First, compared with other strategies, the proposed method achieves competitive performance on vehicle ReID in hazy weather on both synthetic and real-world datasets in terms of mAP and CMC. Second, existing methods trained on all data obtain better performance than those trained with the other settings. Third, other methods may have limited performance in real-world scenarios, especially when they are trained only on synthetic images. Last, \textit{surprisingly, though our method is trained without ID labels on real-world data, it can outperform most supervised methods trained with complete ID labels}. We also report our method trained with ID labels at the real-world data training stage, denoted with the suffix 'F'. Specifically, we introduce the triplet loss and the ID loss defined in \eqref{eq:triploss} and \eqref{eq:idloss} to train the real-world data stage. The result indicates that the performance can be improved if we use complete ID labels in the training stage. Hence, our method can also be adopted in fully supervised scenarios and obtains decent performance. \subsection{Ablation Studies} \noindent \textbf{Effectiveness of the Semi-supervised Strategy.} In this paper, we propose a semi-supervised training technique to solve the hazy vehicle ReID problem. It uses the domain transformation mechanism, which enables us to train on real-world data without ID labels. We present the effectiveness of this strategy in \tabref{tab:ablation}. We adopt our ReID network as the baseline and train it with two settings for comparison, that is, settings (ii) and (iv) reported in subsection 4.3 with complete ID labels. One can see that our method compares favorably against the first setting.
Moreover, even without the ID labels of real-world data, our method can outperform the baseline trained with complete ID labels, since our method integrates the domain transformation technique, which helps the ReID network learn better representations under hazy scenes. We also show the visual results of the ranking list on the real-world dataset in \figref{fig:rank_10}. One can see that the baseline retrieves wrong instances because important features such as the window and lights become ambiguous due to the degradation of haze, which deteriorates the performance of ReID. Moreover, in \tabref{tab:convblock_compare}, we show the results of assigning different convolution blocks as the encoders. One can see that adopting the first two convolution blocks as the encoders obtains the best performance. Regardless of the choice, the proposed domain transformation architecture assists the network in learning accurate ReID in hazy scenarios. \noindent \smallskip\\ \textbf{Effectiveness of the Loss Functions.} In \tabref{tab:ablation}, we verify the effectiveness of the adopted loss functions: the monotonously increasing dark channel loss $\mathcal{L}_{MIDC}$ and the colinear relation constraint $\mathcal{L}_{CR}$. One can see that, with the two loss functions, the performance of ReID is improved in both mAP and CMC metrics, since appropriate constraints on the DT-Net help the encoders learn more robust features, which further benefits ReID. Furthermore, using both the dark channel loss $\mathcal{L}_{DC}$ and the total variation loss $\mathcal{L}_{TV}$ improves the performance of the network. \figref{fig:visual_defog_compare} shows that, with the proposed loss functions, the rendered results are more realistic than those of the ablated variants; the rendered results may suffer from color distortion without $\mathcal{L}_{CR}$. \noindent \smallskip\\ \textbf{Effectiveness of Each Training Stage.} We verify the effectiveness of using real clear data or real hazy data in the training process. We construct three settings for the comparison. Specifically, we adopt: (i) only the synthetic data stage (\textbf{Syn}), (ii) \textbf{Syn} with the real clear data stage (\textbf{Syn+RC}), and (iii) \textbf{Syn} with the real haze data stage (\textbf{Syn+RH}). The results are reported in \tabref{tab:multistage}. We can see that adopting only the synthetic data may cause limited performance in real-world scenarios due to the domain gap problem. \section{Conclusion} In this paper, to address the vehicle ReID problem under hazy scenarios, a semi-supervised training framework that integrates the domain transformation network and the ReID network is proposed. Moreover, to constrain the unsupervised training stage, several loss functions that bound the two networks are proposed. With these techniques, the proposed method can learn haze-invariant features for robust vehicle ReID. Experimental results show that, compared to existing methods trained with complete ID labels, the proposed method achieves decent performance even without using the ID labels of real-world data. \section{Acknowledgement} We thank the National Center for High-performance Computing (NCHC) for providing computational and storage resources. This research was supported by the Ministry of Science and Technology, Taiwan under Grants MOST 108-2221-E-002-072-MY3, MOST 108-2638-E-002-002-MY2, and MOST 111-2221-E-002-136-MY3. \bibliographystyle{splncs04}
{ "timestamp": "2022-09-20T02:20:48", "yymm": "2209", "arxiv_id": "2209.08630", "language": "en", "url": "https://arxiv.org/abs/2209.08630" }
\section{Introduction} Task-oriented dialog (TOD) systems \cite{young2013,budzianowski2018multiwoz} have become prominent and drawn much attention from both academia and industries. Their mission is to help users accomplish specific tasks such as booking restaurants and reserving hotels through natural language conversations, where an external knowledge base (KB) is usually needed to support the generation of a system response. For example, when trying to recommend a restaurant, they will retrieve its address from the KB and generate a response. Many recent state-of-the-art TOD systems \cite{mehri2019,hosseiniasl2020simpletod,Li2021} take a pipeline route that decomposes the task into modules that rely on intermediate annotations such as belief state and dialog act for supervision. These modules can be trained individually and assembled into a dialog system, mitigating the difficulty of generating a desired response directly from the dialog context and user utterance. Another motivation for the pipeline architecture is the necessity of querying KB with belief state, as shown in Figure \ref{fig:intro}, which would otherwise be non-trivial to realize in an end-to-end system. However, these annotations have to be crafted by human annotators, which is hardly realistic in practical scenes such as intelligent customer services where huge amounts of unannotated natural language conversations are accumulated. Besides, errors made in upstream modules may be propagated to downstream modules if they are not trained jointly. \begin{figure}[t] \centering \includegraphics[width=0.46\textwidth]{figures/entity_example3.pdf} \caption{ \label{fig:intro} An example to show that task-oriented dialog systems need to retrieve information (middle) from a knowledge base (KB) to generate a qualified system response. Entity values in the KB are color-highlighted. } \end{figure} There are mainly two approaches to eliminating the reliance on intermediate annotations and generating system response in an end-to-end manner. First, entity information in the KB can be accessed by soft attention \cite{madotto2018mem2seq,reddy2019mlmn,qin2020dfnet}. To this end, a memory network is usually used to encode the KB, and attention and pointer are then utilized to retrieve entity information from the memory. These attention-based methods tend to become cumbersome when the KB scales up. Second, the KB information can be stored in model parameters to avoid direct interaction with the KB during response generation \cite{madotto2020gptke}. This is partly motivated by the observation that pre-trained models such as BERT \cite{devlin2018bert} can carry certain relational and factual knowledge \cite{petroni2019}.~To embed the KB into model parameters, this approach first augments the original training set with KB entries and then encodes the training samples with a powerful encoder. Despite the success in end-to-end TOD systems, one of the remaining problems is entity inconsistency during response generation \cite{qin2019kbret}, which means that the systems usually generate conflicting entity information in system responses. For example, they may generate a response ``\texttt{Gourmet Kitchen is an Italian restaurant}'' while \texttt{Gourmet Kitchen} is actually a \texttt{North American} restaurant. In this work, we aim to address this issue more scalably in our end-to-end TOD system. 
Following GPT-KE \cite{madotto2020gptke}, we first insert the KB into natural language dialogs by data augmentation, so that the KB can be embedded into model parameters whose size does not scale with the KB. Then, we predict the entity that will appear in the response autoregressively. To avoid generating an inconsistent entity, we impose a trie constraint on the decoding to ensure that the generated entity truly belongs to the KB. The generated entity is taken as an extra input to generate an entity-consistent system response. Besides, since tokens in the entity are integers, which hinders gradient backpropagation, we propose logit concatenation to allow for end-to-end training. We evaluate our system on MultiWOZ 2.1 single \cite{budzianowski2018multiwoz} and CAMREST \cite{wen2017camr}, which are two task-oriented dialog benchmarks widely used in the literature. Experimental results show that it compares favorably with all the baselines. Particularly, it outperforms GPT-KE, a strong end-to-end TOD system that we follow, by a large margin. By ablation studies, we demonstrate that autoregressive entity generation assists in producing entity-consistent system responses in an end-to-end manner. To our best knowledge, this is the first work that attempts to alleviate the entity inconsistency problem in TOD systems by generating the entity first and taking it as an input for response generation. The system can be trained end-to-end without accessing external KBs during response generation. \section{Related Work} \label{sec:related works} End-to-end task-oriented dialog systems have drawn increasing attention in recent years. In one line of work, researchers propose to train the modules of a pipeline system jointly in an end-to-end framework, though they still require intermediate annotations for supervision. Among these works, SimpleTOD \cite{hosseiniasl2020simpletod}, SOLOIST \cite{peng2020soloist}, and UBAR \cite{yang2021ubar} attempt to concatenate the dialog history, user utterance, belief state, dialog act, and system response into a long sequence, which is then modeled by a sequence-to-sequence generation model. HyKnow \cite{gao2021hyknow} extends the belief state to handle both structured and unstructured knowledge and trains the dialog state tracking and response generation modules jointly. Nevertheless, these systems are not the end-to-end solutions we pursue in this work since they still need intermediate annotations. There are mainly two approaches to implementing intermediate annotations free end-to-end TOD systems. First, entity information in the KB can be accessed by soft attention. Mem2Seq \cite{madotto2018mem2seq} combines the ideas of multi-hop attention over memory and a pointer network to incorporate KB information. Wen et al. \shortcite{wen2018dsr} proposed to compute a dialogue state representation from the dialog history and use it to interact with KB representations to retrieve entity information for response generation. GLMP \cite{wu2019glmp} encodes the representations of dialog history and structural KB with a memory network and then passes the result to a decoder for response generation. DF-Net \cite{qin2020dfnet} includes a dynamic fusion module to generate a fused representation that explicitly captures the correlation between domains and uses it to query the KB. When the KB scales up, however, attention-based methods become less efficient. 
Second, the KB information can be stored in model parameters to avoid further interaction with the KB during response generation. The motivation comes from the observation that pre-trained language models such as BERT \cite{devlin2018bert} and T5 \cite{raffed2020t5} can already carry certain relational and factual knowledge \cite{petroni2019}. GPT-KE \cite{madotto2020gptke} is the seminal dialog system toward this goal. It first augments the training set with KB entries and then learns a response generation model from the augmented set in an end-to-end fashion, thus abandoning the KB during response generation. \section{Methodology} \begin{figure}[t] \centering \includegraphics[width=0.47\textwidth]{figures/model5.pdf} \caption{ \label{fig:model} The architecture of our ECO system. The entity generation module takes the context and user utterance as input and generates a relevant entity. The response generation module takes the context, user utterance, and the generated entity as input to generate a system response. The two modules share the same encoder but have separate decoders. A trie constraint is imposed when generating the entity, and LogitConcat is proposed to facilitate end-to-end optimization. } \end{figure} As shown in Figure \ref{fig:model}, our Entity-COnsistent end-to-end (ECO) task-oriented dialog system begins by embedding the KB into training dialogs (\secref{sec:preprocess}). Following GPT-KE \cite{madotto2020gptke}, we augment the original training set with KB entries and abandon the KB afterward. Unlike GPT-KE, which conducts data augmentation during pre-processing before training, ECO conducts augmentation for each batch of training samples, which reduces the size of the augmented training set while maintaining high coverage of the KB. We then predict the entity (\secref{sec:auto_entity}) that will appear in the response and incorporate it into response generation (\secref{sec:supervised}) to ensure entity consistency, where LogitConcat is proposed to facilitate end-to-end optimization. \subsection{Notations} \label{sec:predefine} Given a training set $\mathcal{D}_{tr} = \left\{D_1, D_2, \dots, D_N\right\}$ of dialogs, where $D_i = \left\{U_{i,1}, R_{i,1}, \dots, U_{i,T}, R_{i,T}\right\}$ contains $T$ turns of user utterances and system responses, we denote the conversational context of the $t$-th turn in dialog $D_i$ as $C_{i,t} = \left\{U_{i,1}, R_{i,1}, \dots, U_{i,t-1}, R_{i,t-1}\right\}$.~A structured knowledge base is given in the form of a set of entities $\mathit{KB} = \left\{E_1, E_2, \dots, E_M\right\}$, each of which is represented as a sequence $E_i = \left\{a_1, v_{i,1}, a_2, v_{i,2}, \dots, a_K, v_{i,K}\right\}$, in which $a_j$ and $v_{i,j}$ denote the $j$-th attribute and its value for entity $E_i$, respectively. For simplicity, we assume each turn of dialog relates to only one entity and reformulate it as $\left\{U_{i,t}, R_{i,t}, E_{i,t}\right\}$. A user goal \cite{schatzmann2007user_goal} is defined for each dialog as $G_i=(G_{i,c},G_{i,r})$, where $G_{i,c}$ specifies the constrained information (e.g., \{location=\texttt{center}, price=\texttt{cheap}\}) and $G_{i,r}$ denotes the required information (e.g., \texttt{address}, \texttt{name}). \begin{figure}[t] \centering \fbox{ \includegraphics[width=0.45\textwidth]{figures/knowledge_embed.pdf} } \caption{ \label{fig:knowledge_embed} An example showing how to construct a template from a training sample and generate a new sample from the template. Attributes are color-highlighted.
} \end{figure} \subsection{Knowledge Base Embedding} \label{sec:preprocess} To embed the KB into the training set, we first extract all mentioned entity values in both user utterances and ground truth responses based on the span annotations given in the original training set. Then, we match entity values with the KB to identify which entity is mentioned in the current turn of conversation. Templates are then constructed by replacing entity-related tokens in the conversation with special attribute placeholders.~For example, \texttt{north american} in Figure \ref{fig:knowledge_embed} is replaced with the corresponding attribute placeholder \texttt{[food]}. This template generation function is denoted as $\mathrm{DELEX}(\cdot)$, which is used to generate a set $\mathcal{D}_{tm}$ of templates from the original training set $\mathcal{D}_{tr}$: \begin{equation} \mathcal{D}_{tm}=\mathrm{DELEX}(\mathcal{D}_{tr}). \end{equation} Next, we generate new dialog samples with the templates in $\mathcal{D}_{tm}$. We refer to this process as data augmentation. To begin with, we obtain a set of KB entities, $\mathcal{G}_{mt} = \left\{E_1, E_2, \dots, E_{G}\right\}$, that match the predefined user goals. Then, we randomly select an entity $E_i$ from $\mathcal{G}_{mt}$ and replace the placeholders with the corresponding values of $E_i$. For instance, we replace \texttt{[food]} and \texttt{[area]} in Figure \ref{fig:knowledge_embed} with \texttt{italian} and \texttt{north}, respectively. The function of generating samples from templates is defined as $\mathrm{RELEX}(\cdot)$, which is executed $P$ times to insert $P$ entities, producing a new set $\mathcal{D}_{au}$: \begin{equation} \mathcal{D}_{au}=\bigcup_{p=1}^{P} \mathrm{RELEX}(\mathcal{D}_{tm}). \end{equation} Note that usually only a subset of the samples in $\mathcal{D}_{tr}$ can be successfully matched with entities in the KB during data augmentation, so $\mathcal{D}_{au}$ does not cover all the samples of $\mathcal{D}_{tr}$. For this reason, we take the union of $\mathcal{D}_{tr}$ and $\mathcal{D}_{au}$ to get our final training set $\mathcal{D}_{fn}$: \begin{equation} \mathcal{D}_{fn}=\mathcal{D}_{tr} \cup \mathcal{D}_{au}. \end{equation} The entities selected during the above augmentation process are treated as ground truth entities for the corresponding dialog samples.~This means that only the samples in $\mathcal{D}_{au}$ have entity labels while the samples in $\mathcal{D}_{tr}$ do not. Since all the placeholders in a template are replaced with values from the same entity, this data augmentation process ensures that the augmented training samples contain consistent entity information. \subsection{Autoregressive Entity Generation} \label{sec:auto_entity} To predict the entity that will appear in the response, we propose to generate it autoregressively. For brevity, we use $C_t$ and $U_t$ to represent the current dialog context and user utterance, respectively. We then concatenate $C_t$ and $U_t$, encode them into a vector representation, and take it as input for entity generation: \begin{equation} \label{eq:shared_encoder} \textbf{g}_{t}=\mathrm{Enc}(\mathrm{Emb}([C_t;U_t])), \end{equation} where $\mathrm{Emb}(\cdot)$ is the embedding function implemented by a global embedding matrix $\textbf{W}_e$, and $\mathrm{Enc}(\cdot)$ denotes the encoder, which is shared with the response generation module (\secref{sec:supervised}).
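As a concrete companion to the template-based augmentation of Section~\ref{sec:preprocess}, the following minimal Python sketch illustrates $\mathrm{DELEX}(\cdot)$ and $\mathrm{RELEX}(\cdot)$ on a toy KB. The toy entities, the simple string-replacement matching, and all identifiers are illustrative assumptions, not our actual implementation, which additionally restricts $\mathrm{RELEX}(\cdot)$ to goal-matching entities $\mathcal{G}_{mt}$.
\begin{verbatim}
import random

# Toy KB: each entity maps attribute -> value (illustrative values).
KB = [
    {"name": "gourmet kitchen", "food": "north american", "area": "centre"},
    {"name": "pizza hut", "food": "italian", "area": "north"},
]

def delex(utterance, entity):
    # DELEX: replace mentioned entity values with attribute
    # placeholders, e.g. "north american" -> "[food]".
    for attr, value in entity.items():
        utterance = utterance.replace(value, "[" + attr + "]")
    return utterance

def relex(template, entity):
    # RELEX: fill every placeholder with values from a *single* entity,
    # so the augmented sample is entity-consistent by construction.
    for attr, value in entity.items():
        template = template.replace("[" + attr + "]", value)
    return template

turn = "gourmet kitchen is a north american restaurant in the centre ."
template = delex(turn, KB[0])
# Executing RELEX P times with randomly drawn entities yields D_au.
augmented = [relex(template, random.choice(KB)) for _ in range(2)]
\end{verbatim}
Because every placeholder in a template is filled from the same entity, any sample produced this way carries consistent entity information, mirroring the construction of $\mathcal{D}_{au}$ above.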
To generate an entity $\hat{E}_t$ autoregressively, the decoder iteratively predicts a token $\hat{e}_{t,k}$ based on the already generated sequence $\hat{E}_{t,<k}$ and the vector representation $\textbf{g}_{t}$: \begin{equation} \label{eq:ent_dec} \hat{P}(\hat{e}_{t,k})=\mathrm{Dec}_{e}({\hat{e}}_{t,k}|\hat{E}_{t,<k},\textbf{g}_{t}). \end{equation} Since the gold entities of the samples in $\mathcal{D}_{au}$ are known, the cross-entropy loss of entity generation on $\mathcal{D}_{au}$ is defined as: \begin{equation} \label{eqn:lentity} \mathcal{L}_{en} = \sum_{D\in \mathcal{D}_{au}}\sum_{E_t \in D}\mathrm{CELoss}(\hat{E}_t,E_t), \end{equation} where $E_t$ denotes the ground truth entity for the $t$-th turn of conversation. For the samples in $\mathcal{D}_{tr}$, which have no entity labels, we do not calculate a loss during entity generation, but instead calculate their loss in response generation (\secref{sec:supervised}) to realize end-to-end optimization, like DualTKB \cite{dognin2020dualtkb}. \subsubsection{Trie Constraint} \label{sec:tree} Inspired by GENRE \cite{de2020autoregressive}, we construct a trie structure (a prefix tree) to ensure that the generated entity truly belongs to the KB. For each entity in the KB, we construct a sequence as follows.~For each value in the entity, we prepend its attribute placeholder and concatenate all attribute-value pairs to form a sequence such as \texttt{[name] cityroomz [area] centre [type] hotel}.~As depicted in Figure \ref{fig:tree}, a node in the trie denotes a token, and its child nodes denote all the succeeding tokens. When decoding the $k$-th token $\hat{e}_{t,k}$ during the generation of entity $\hat{E}_t$, we have the decoded sequence $\hat{E}_{t,<k}=\{\hat{e}_{t,1},\hat{e}_{t,2},\dots,\hat{e}_{t,k-1}\}$ in hand and walk through the trie along the path of $\hat{E}_{t,<k}$ to generate the next token. We use $\mathcal{E}_{t,k}$ to represent the set of admissible tokens at this time step and renormalize $\hat{P}(\hat{e}_{t,k})$ as: \begin{equation} \label{eq:ent_gen_trie} {P}(\hat{e}_{t,k})=\left\{ \begin{array}{ll} \frac{\hat{P}(\hat{e}_{t,k})}{Z}, & \hat{e}_{t,k} \in \mathcal{E}_{t,k} \\ 0, & \mathrm{else} \\ \end{array}\right. \end{equation} where \begin{equation} Z=\sum_{\hat{e}_{t,k} \in \mathcal{E}_{t,k}}\hat{P}(\hat{e}_{t,k}). \end{equation} Since only tokens from $\mathcal{E}_{t,k}$ have non-zero probabilities in ${P}(\hat{e}_{t,k})$, the model always samples a token from $\mathcal{E}_{t,k}$. Therefore, the generated entity is always guaranteed to be valid. \begin{figure}[] \centering \includegraphics[width=0.42\textwidth]{figures/tree3.pdf} \caption{ \label{fig:tree} A trie with three entity sequences: [\texttt{day}] \texttt{saturday} [\texttt{departure}] \texttt{cambridge}, [\texttt{day}] \texttt{saturday} [\texttt{departure}] \texttt{kings} \texttt{lynn}, and [\texttt{day}] \texttt{friday} [\texttt{departure}] \texttt{peterborough}. } \end{figure} \subsection{Response Generation} \label{sec:supervised} During training, for each sample in $\mathcal{D}_{au}$, the model generates a response based on the context, user utterance, and the corresponding ground truth entity by concatenating and encoding them with the shared encoder defined in Eq. (\ref{eq:shared_encoder}): \begin{equation} \textbf{h}_t=\mathrm{Enc}(\mathrm{Emb}([C_t;U_t;E_t])).
\end{equation} For each sample in $\mathcal{D}_{tr}$, which has no ground truth entity label, the generated entity is used instead: \begin{equation} \label{eq:dec_tr} \textbf{h}_t=\mathrm{Enc}(\mathrm{Emb}([C_t;U_t;\hat{E}_t])). \end{equation} The response decoder then takes $\textbf{h}_t$ as input and generates the response $\hat{R}_t$ token by token: \begin{equation} P(\hat{r}_{t,k})=\mathrm{Dec}_r(\hat{r}_{t,k}|\hat{R}_{t,<k}, \textbf{h}_t). \end{equation} The cross-entropy loss is calculated between the generated response $\hat{R}_t$ and the ground truth response $R_t$: \begin{equation} \label{eqn:lresp} \mathcal{L}_{re}=\sum_{D\in \mathcal{D}^{}_{fn}}\sum_{R_t \in D}\mathrm{CELoss}(\hat{R}_t,R_t). \end{equation} \subsubsection{Logit Concatenation} \label{sec:LogitConcat} For the samples in $\mathcal{D}_{tr}$, since the tokens in each generated entity are integers, the gradients of response generation cannot be directly passed to the encoder during training. To address this, we modify Eq. (\ref{eq:dec_tr}) and feed the distributions of the generated entity tokens to the encoder. Specifically, for the $k$-th token $\hat{e}_{t,k}$ of a generated entity $\hat{E}_t$, its output distribution ${P}(\hat{e}_{t,k})$ over the vocabulary from the entity decoder is first computed using Eq. (\ref{eq:ent_gen_trie}) and then used to approximate $\hat{e}_{t,k}$ for gradient propagation. ${P}(\hat{e}_{t,k})$ can be encoded as: \begin{equation} \label{eq:prob_emb} \textbf{h}_{t,k}={P}(\hat{e}_{t,k})\textbf{W}_e^T, \end{equation} where $\textbf{W}_e$ is the global embedding matrix introduced above. If both ${P}(\hat{e}_{t,k})$ and $\textbf{W}_e$ receive gradients during backpropagation, the training may collapse, since it is much easier to update $\textbf{W}_e$ than ${P}(\hat{e}_{t,k})$, which requires understanding the context and utterance to obtain relevant information. Therefore, we alter the equation by stopping gradients on $\textbf{W}_e$: \begin{equation} \hat{\textbf{h}}_{t,k}={P}(\hat{e}_{t,k})\cdot \mathrm{StopGrad}(\textbf{W}_e^T), \end{equation} \begin{equation} \hat{\textbf{h}}_{t}=\{\hat{\textbf{h}}_{t,1},\dots,\hat{\textbf{h}}_{t,|\hat{E}_t|}\}. \end{equation} We use $\hat{\textbf{h}}_{t}$ as the representation of entity $\hat{E}_t$ and concatenate it with the embedded context $C_t$ and user utterance $U_t$, which is then encoded to replace Eq. (\ref{eq:dec_tr}) during training: \begin{equation} \textbf{h}_{t}=\mathrm{Enc}([\mathrm{Emb}([C_t;U_t]);\hat{\textbf{h}}_{t}]). \end{equation} Since ${P}(\hat{e}_{t,k})$ is a distribution vector rather than an integer, gradients can be backpropagated to the encoder during training.~At inference time, we take the generated entity tokens rather than ${P}(\hat{e}_{t,k})$ as input for response generation, as described in Eq. (\ref{eq:dec_tr}). This introduces a gap between training and inference, which will be studied in Section \ref{sec:abla}. \subsection{Joint Training} The final system is trained by minimizing the sum of the entity loss $\mathcal{L}_{en}$ and the response loss $\mathcal{L}_{re}$: \begin{equation} \mathcal{L}=\mathcal{L}_{en} + \mathcal{L}_{re}.
\end{equation} \section{Experiments} \label{sec:experiment} \begin{table*}[t] \centering \scalebox{0.83}{ \begin{tabular}{lcccccc} \toprule \multicolumn{1}{l}{} & \textbf{BLEU} & \textbf{Inform} & \textbf{Success} & \textbf{Score} & \textbf{F1} & \textbf{Consistency} \\ \midrule Mem2Seq \cite{madotto2018mem2seq} & 6.60 & - & - & - & 21.62 & - \\ DSR \cite{wen2018dsr} & 9.10 & - & - & - & 30.00 & - \\ GLMP \cite{wu2019glmp} & 6.90 & - & - & - & 32.40 & -\\ DF-Net \cite{qin2020dfnet} & 9.40 & - & - & - & 35.10 & -\\ GPT2 \cite{radford2019language} & 14.33 & 64.60 & 51.77 & 72.52 & 30.38 & -\\ GPT-KE \cite{madotto2020gptke} & \textbf{15.05} & 72.57 & 64.16 & 83.42 & 39.58 & 54.46\\ \midrule BART-KE & 12.80$\pm$0.22 & 70.94$\pm$2.05 & 61.36$\pm$2.12 & 78.95$\pm$2.05 & 39.31$\pm$0.22 & 52.96$\pm$0.48\\ ECO (ours) & 12.61$\pm$0.20 & \textbf{83.63$\pm$0.63} & \textbf{75.37$\pm$0.21} & \textbf{92.11$\pm$0.20} & \textbf{40.87$\pm$0.24} & \textbf{56.84$\pm$0.36}\\ \bottomrule \end{tabular} } \caption{ \label{tab:main_woz} Main results on MultiWOZ. Scores of baselines except BART-KE are from the original papers, and ``-'' denotes scores that are not available. BART-KE is the baseline implemented by replacing GPT-2 in GPT-KE with BART.} \end{table*} \subsection{Dataset} We conduct experiments on MultiWOZ 2.1~single \cite{budzianowski2018multiwoz} and CAMREST \cite{wen2017camr}.~CAMREST consists of the single domain of Cambridge restaurant booking, while MultiWOZ 2.1 single consists of five domains:~\texttt{Attraction}, \texttt{Hotel}, \texttt{Restaurant}, \texttt{Taxi}, and \texttt{Train}.~Following previous work \cite{qin2020dfnet,madotto2020gptke}, we select only the dialogs that involve a single domain from MultiWOZ 2.1 to form the MultiWOZ 2.1 single dataset. We follow the same pre-processing and augmentation procedures as GPT-KE \cite{madotto2020gptke}.~Note that not all dialogs in the original training set can be successfully used to generate templates, due to the diversity of entity values.~On MultiWOZ 2.1 single, 63/116/289/59 templates are generated for the domains \texttt{Attraction}/\texttt{Hotel}/\texttt{Restaurant}/\texttt{Train}, respectively, and no template is generated for the \texttt{Taxi} domain since MultiWOZ 2.1 single does not provide a KB for this domain. On CAMREST, 161 templates are constructed for data augmentation. Following previous work \cite{qin2020dfnet,madotto2020gptke}, we adopt BLEU, Inform, Success, and F1 as the metrics to evaluate model performance on MultiWOZ 2.1 single, and employ BLEU, F1, and Success on CAMREST. Inform and Success are calculated based on the given user goal of a dialog session, and inconsistent entity information lowers both metrics. Meanwhile, an overall score is also calculated: $\mbox{Score}=\mbox{BLEU}+(\mbox{Inform}+\mbox{Success})/2$. \subsection{Measuring Entity Consistency} Measuring the entity consistency of a given system response remains an open problem in task-oriented dialog systems. Qin et al. \shortcite{qin2021citod} had human experts annotate three kinds of inconsistency on the KVRET \cite{eric2017kvret} dataset, i.e., user query inconsistency, dialog history inconsistency, and knowledge base inconsistency. They then trained models as automatic metrics to identify which kind of inconsistency appears in system responses. However, their method is trained on KVRET and may not be suitable for measuring inconsistency on other datasets.
Furthermore, the first few turns may include irrelevant entity information, such as several hotels offered for the user to choose from, which makes it difficult to identify whether the generated response is consistent with the dialog history. The above analysis motivates us to propose a new consistency metric that focuses on user query consistency and knowledge base consistency. It is a turn-level metric that requires all entity information in the user utterance and the system response to belong to the same entity in the KB. To be specific, we first extract all entity information in the user utterance and the system response, and then search the knowledge base. If there is an entity that contains all the extracted information, this turn of conversation scores 1, and 0 otherwise. The final Consistency metric is calculated as the average score over all conversation turns. The method of extracting entity information from utterances and responses is the same as in calculating F1. \subsection{Experiment Settings} Different from GPT-KE, we use BART \cite{lewis2020bart} as our backbone model due to limited computational resources. We also replace GPT-2 \cite{radford2019language} in GPT-KE with BART to form a new baseline, BART-KE. We set the maximum input sequence length to 256, the number of repetitions $P$ of $\mathrm{RELEX}$ to 12, and the batch size to 12. Experiments are conducted on a single NVIDIA 2080 Ti and require about 11 GB of GPU memory. We conduct ablation studies on MultiWOZ 2.1 single, as it is the more challenging dataset with multiple domains of dialogs. For most variants of our method, we run 30 epochs and evaluate every 5 epochs, saving a model checkpoint after each evaluation. We then select the best checkpoint based on model performance on the development set and report the test results. For the ablation setting \textit{w/ tr}, which lacks supervision from gold entity labels during training, we run 50 epochs and select the best checkpoint in the same way. \subsection{Main Results} \label{sec:main_result} The overall results are shown in Table \ref{tab:main_woz} and Table \ref{tab:main_camr}. We observe that ECO outperforms GPT-KE and the other baselines by a large margin in all metrics except BLEU, showing that ECO reaches the user goals of these dialog datasets more effectively. The improvement of ECO over BART-KE suggests that ECO's success mainly comes from the model design rather than from BART itself. Specifically, by generating an entity with the trie constraint to help response generation, ECO obtains consistent entity information and improves the entity consistency of the generated responses. On the other hand, we note that the BART-based methods (BART-KE and ECO) achieve relatively lower BLEU scores than the GPT-2 family baselines (GPT-2 and GPT-KE) on MultiWOZ. The main reason is likely that we do not post-train BART with language modeling objectives on the training set, which affects the fluency of generated responses, while responses in MultiWOZ are more diverse across domains. \begin{table}[t] \centering \scalebox{0.8}{ \begin{tabular}{lccc} \toprule \multicolumn{1}{l}{} & \textbf{BLEU} & \textbf{F1} & \textbf{Success}\\ \midrule KB-Trs & 14.80 & 45.30 & - \\ MLMN & 13.61 & 54.85 & - \\ BoSsNet & 15.20 & 43.10 & - \\ KBRet & \textbf{18.64} & 55.76 & 62.03\\ GPT-KE & 18.00 & 54.85 & 74.68 \\ \midrule BART-KE & 17.84$\pm$0.28 & 70.42$\pm$0.37 & 75.06$\pm$1.52 \\ ECO (ours) & 18.42$\pm$0.27 & \textbf{71.56$\pm$0.39} & \textbf{78.77$\pm$1.85}\\ \bottomrule \end{tabular} } \caption{ \label{tab:main_camr} Main results on CAMREST.
KB-Trs \cite{haihong2019kbtrs}, MLMN \cite{reddy2019mlmn}, BoSsNet \cite{raghu2019bossnet}, KBRet \cite{qin2019kbret}, and GPT-KE \cite{madotto2020gptke} are baselines for comparison. } \end{table} \begin{table*}[t] \centering \scalebox{0.98}{ \begin{tabular}{lcccc} \toprule & \textbf{Percentage (\%)} & \textbf{Inform} & \textbf{Success} & \textbf{F1}\\ \midrule single inform & 46.0 & 91.67$\pm$1.63 & 83.01$\pm$1.98 & 61.09$\pm$0.81 \\ multi inform & 54.0 & 76.78$\pm$1.55 & 68.85$\pm$1.77 & 31.03$\pm$0.73 \\ single success & 84.5 & 83.60$\pm$1.23 & 77.49$\pm$0.43 & 42.74$\pm$0.10 \\ multi success & 15.5 & 83.81$\pm$2.69 & 63.81$\pm$1.35 & 33.62$\pm$1.28 \\ total & 100.0 & 83.63$\pm$0.63 & 75.37$\pm$0.21 & 40.87$\pm$0.24 \\ \bottomrule \end{tabular} } \caption{ \label{tab:multi_success} Results of the study on how multiple matched entities affect the evaluation metrics, where single/multi inform/success refer to the situations with single/multiple matched entities when calculating Inform/Success, and Percentage (\%) is the proportion of such samples in the test set. } \end{table*} We also analyze why the improvement in F1 is much smaller than that in Inform and Success on MultiWOZ. Inform and Success are calculated based on user goals, and in some circumstances there are multiple entities that match a user goal. However, only the one mentioned in the ground truth response is counted as correct in F1. Therefore, a large improvement on Inform and Success means ECO achieves user goals better, but the entity mentioned in the generated response may differ from the one in the ground truth. As shown in Table \ref{tab:multi_success}, over 50\% of the test samples have multiple matched entities when calculating Inform, and the percentage is 15.5\% when calculating Success. Multiple matched entities reduce model performance on all metrics, especially on F1. \begin{table*}[t] \centering \scalebox{0.93}{ \begin{tabular}{lcccccc} \toprule \multicolumn{1}{l}{} & \textbf{BLEU} & \textbf{Inform} & \textbf{Success} & \textbf{Score} & \textbf{F1} & \textbf{Consistency}\\ \midrule GPT-KE & \textbf{15.05} & 72.57 & 64.16 & 83.42 & 39.58 & 54.46\\ BART-KE & 12.80$\pm$0.22 & 70.94$\pm$2.05 & 61.36$\pm$2.12 & 78.95$\pm$2.05 & 39.31$\pm$0.22 & 52.96$\pm$0.48\\ ECO & 12.61$\pm$0.20 & \textbf{83.63$\pm$0.63} & \textbf{75.37$\pm$0.21} & \textbf{92.11$\pm$0.20} & \textbf{40.87$\pm$0.24} & \textbf{56.84$\pm$0.36}\\ \textit{\quad w/ au} & 8.94$\pm$0.06 & 79.20$\pm$2.26 & 56.34$\pm$0.55 & 76.71$\pm$0.94 & 30.38$\pm$1.67 & 55.49$\pm$0.33\\ \textit{\quad w/ tr} & 11.21$\pm$0.37 & 67.55$\pm$4.41 & 54.87$\pm$4.38 & 72.42$\pm$4.07 & 36.45$\pm$1.17 & 52.43$\pm$1.89 \\ \textit{\quad w/o trie} & 12.40$\pm$0.36 & 80.68$\pm$0.91 & 72.12$\pm$1.25 & 88.80$\pm$0.81 & 39.81$\pm$0.25 & 56.31$\pm$0.62 \\ \textit{\quad w/o LogitConcat} & 12.52$\pm$0.11 & 78.32$\pm$0.96 & 70.50$\pm$2.46 & 86.93$\pm$1.64 & 39.88$\pm$0.40 & 55.15$\pm$0.84 \\ \textit{\quad w/ LogitEval} & 12.85$\pm$0.28 & 71.98$\pm$0.55 & 65.04$\pm$0.96 & 81.36$\pm$1.00 & 40.58$\pm$0.46 & 53.62$\pm$0.81 \\ \bottomrule \end{tabular} } \caption{ \label{tab:abla} Results of the ablation studies. ECO \textit{w/ au} represents the ECO variant trained on samples of $\mathcal{D}_{au}$, and ECO \textit{w/ tr} represents the ECO variant trained on samples of $\mathcal{D}_{tr}$. ECO \textit{w/o trie} means that ECO does not apply the trie constraint during entity generation, while ECO \textit{w/o LogitConcat} means it does not apply LogitConcat during training.
ECO \textit{w/o StopGrad} represents the ECO variant that drops $\mathrm{StopGrad}$ in LogitConcat. ECO \textit{w/ LogitEval} represents the ECO variant that applies LogitConcat during inference. } \end{table*} \begin{figure*} \centering \includegraphics[width=1\textwidth]{figures/entity_heat_4.pdf} \caption{An example of decoding an entity with the trie constraint. Darker backgrounds represent higher decoding probabilities. The trie constraint filters out some tokens during decoding and results in a different decoding path, avoiding the red path, which has a higher probability but generates inconsistent entity information.} \label{fig:entity_heat} \end{figure*} \begin{figure*}[!htbp] \centering \includegraphics[width=0.65\textwidth]{figures/tempNum_multi.pdf} \caption{Ablation study of how the number of templates affects model performance on MultiWOZ, where \emph{full} means all the templates are used for knowledge base embedding.} \label{fig:temp_num} \end{figure*} \subsection{Ablation Studies} \label{sec:abla} \subsubsection{Training Datasets} Unlike the samples in $\mathcal{D}_{tr}$, the samples in $\mathcal{D}_{au}$ have ground truth entity labels, so their training objectives are different. We use ECO \textit{w/ tr} to denote the variant trained only on samples from $\mathcal{D}_{tr}$ in an end-to-end manner, and ECO \textit{w/ au} to denote the variant trained only on samples from $\mathcal{D}_{au}$. As the results in Table \ref{tab:abla} show, ECO \textit{w/ tr} has drops of 16.08 on Inform, 20.5 on Success, 4.42 on F1, and 4.41 on Consistency compared to ECO. In fact, without entity labels, the entity generation process hardly converges, leaving response generation without entity information as input and lowering model performance as a result. On the other hand, ECO \textit{w/ au} shows smaller drops than ECO \textit{w/ tr} relative to ECO on Inform, Success, and Consistency. However, its drop is more pronounced on F1, which is caused by the fact that $\mathcal{D}_{au}$ does not include the \texttt{Taxi} domain. These results demonstrate that the augmentation introduces more important KB information for response generation than the original training set. \subsubsection{Trie Constraint} In this work, the trie constraint is the key to guaranteeing entity consistency in a system response. Figure \ref{fig:entity_heat} presents an example of decoding an entity on the trie. By filtering out tokens that are not child nodes, decoding is restricted to a path on the trie. From Table \ref{tab:abla}, we note that ECO outperforms ECO \textit{w/o trie} by 2.95 on Inform, 3.25 on Success, 1.06 on F1, and 0.53 on Consistency. Thus, we can conclude that the model generates more informative responses with the help of consistently generated entities, which accounts for the improvement. \subsubsection{Logit Concatenation} LogitConcat is proposed to enable backpropagation after concatenating the dialog context, user utterance, and the generated entity when training on $\mathcal{D}_{tr}$. Without this component, the model parameters of the entity generator would not be updated. In Table \ref{tab:abla}, ECO shows a promising improvement of 5.31 on Inform, 4.87 on Success, 0.99 on F1, and 1.69 on Consistency over ECO \textit{w/o LogitConcat}. \subsubsection{Gap between Training and Evaluation} During evaluation, ECO uses the generated entity sequence as an input for response generation, which differs from the training phase, where LogitConcat is used.
To study the impact, we conduct an experiment that also applies LogitConcat during evaluation. From the results (ECO \emph{w/ LogitEval}) in Table \ref{tab:abla}, we observe clear drops on Inform, Success, and Consistency. This phenomenon shows that applying LogitConcat during evaluation weakens the consistency of the generated entities, since the probability distribution used in LogitConcat is not a valid entity from the KB. \subsubsection{The Number of Templates} To study how the number of templates affects model performance, we randomly select several subsets of templates from the whole template set. As shown in Figure \ref{fig:temp_num}, the performance on all the metrics generally improves as the number of templates increases, but there are fluctuations when the number changes from 100 to 400. \section{Conclusion} \label{sec:conclusion} We proposed an end-to-end task-oriented dialog system that encodes external knowledge into model parameters. To address entity inconsistency in system responses, we proposed to first generate the entities and then take them as input for response generation. To ensure the generated entities are valid, a trie constraint was imposed on the generation, and a logit concatenation strategy was introduced to facilitate backpropagation for end-to-end training. Experiments demonstrate that this system can produce higher-quality and more entity-consistent responses in an end-to-end manner. For future work, we will extend this system to handle multiple entities involved in each turn of conversation. \section*{Acknowledgments} This work was supported by the National Natural Science Foundation of China (No. 62176270) and the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (No. 2017ZT07X355).
\section{Method} \vspace{-5pt} \label{sec:method} We consider a \emph{value function approach} \citep[see e.g.,][]{outrata1990numerical,ye1995optimality, liu2021value}, which yields natural first-order algorithms for non-convex $g$ and requires no computation of Hessian matrices. It is based on the observation that \eqref{eq:bo} is equivalent to the following constrained optimization (even for non-convex $g$): \begin{equation} \min_{v, \theta} \, f(v, \th)~~~~~{s.t.}~~~~~ q(v, \th) := g(v,\th) - g^*(v) \leq 0, \label{eq:bo_constrain} \end{equation} where $ g^*(v) := \min_{\th} g(v, \th) = g(v, \th^*(v))$ is known as the value function. Compared with the hypergradient approach, this formulation \textbf{does not require calculation of the implicit derivative $\nabla_v \th^*(v)$}: although $g^*(v)$ depends on $\vs$, its derivative $\nabla_v g^*(v)$ does not depend on $\nabla_v \th^*(v)$, by Danskin's theorem: \bbb \nabla_v g^*(v) = \nabla_1 g(v, \vs) + \dd_v \vs \nabla_2 g(v, \vs) = \nabla_1 g(v, \vs), \label{eq:dfd} \eee where the second term in \eqref{eq:dfd} vanishes because $\nabla_2 g(v, \vs) = 0$ by definition of the optimum $\vs$. Therefore, provided that we can evaluate $\vs$ at each iteration, solving \eqref{eq:bo_constrain} yields an algorithm for BO that requires no Hessian computation. In this work, we make use of the dynamic barrier gradient descent algorithm of \citet{gong2021automatic} to solve \eqref{eq:bo_constrain}. This is an elementary first-order algorithm for solving constrained optimization, but it applies only to a special case of the bilevel problem and must be extended to handle the general case we consider here. \textbf{Dynamic Barrier Gradient Descent.} The idea is to iteratively update the parameters $(v,\th)$ to reduce $f$ while controlling the decrease of the constraint ${q}$, ensuring that ${q}$ decreases whenever $q>0$. Specifically, denoting the step size by $\xi$, the update at each step is \bbb \label{eq:updadd} (\v_{k+1}, \ss_{k+1}) \gets (\v_{k}, \ss_{k}) - \xi \delta_k, \eee \bbb \label{equ:bar_opt} \text{where}~~~\delta_k = \argmin_{\delta} \norm{\dd f(v_k, \th_k) - \delta }^2 ~~\text{s.t.}~~ \langle \dd q(v_k, \th_k), \delta \rangle \geq \phi_k. \eee Here $\dd f_k := \dd_{(\v,\th)} f(v_k, \th_k)$, $\dd q_k := \dd_{(\v,\th)} q(v_k, \th_k)$, and $\phi_k\geq 0$ is a non-negative control barrier that should be strictly positive, $\phi_k >0$, at non-stationary points of $q$: the lower bound on the inner product of $\dd q(v_k,\theta_k)$ and $\delta_k$ ensures that the update in \eqref{eq:updadd} can only decrease $q$ (when the step size $\xi$ is sufficiently small) until it reaches a stationary point. In addition, by enforcing $\delta_k$ to be close to $\dd f(v_k, \theta_k)$ in \eqref{equ:bar_opt}, we decrease the objective $f$ as much as possible so long as this does not conflict with the descent of $q$. Two straightforward choices of $\phi_k$ that satisfy the condition above are $\phi_{k} = \eta q(v_k, \th_k)$ and $\phi_k = \eta \norm{\dd q(v_k, \th_k)}^2$ with $\eta > 0$. We find that both choices of $\phi_k$ work well empirically and use $\phi_k = \eta \norm{\dd q(v_k, \th_k)}^2$ as the default (see Section~\ref{sec:observations}).
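As a quick numerical sanity check of the Danskin identity \eqref{eq:dfd}, the following Python sketch compares a finite-difference gradient of the value function $g^*(v)$ with the partial derivative $\nabla_1 g(v,\th^*(v))$ on a toy quadratic whose inner solution $\th^*(v)=Av$ is available in closed form; the toy problem and all names are illustrative assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.normal(size=(3, 2)), rng.normal(size=(3, 2))

# g(v, th) = 0.5 ||th - A v||^2 + 0.5 ||B v||^2, so th*(v) = A v
# and the value function is g*(v) = 0.5 ||B v||^2.
def g(v, th):
    return 0.5 * np.sum((th - A @ v) ** 2) + 0.5 * np.sum((B @ v) ** 2)

def g_star(v):
    return g(v, A @ v)

def grad1_g(v, th):  # partial derivative w.r.t. v, holding th fixed
    return -A.T @ (th - A @ v) + B.T @ (B @ v)

v, eps, e = rng.normal(size=2), 1e-6, np.eye(2)
fd = np.array([(g_star(v + eps * e[i]) - g_star(v - eps * e[i]))
               / (2 * eps) for i in range(2)])
print(np.allclose(fd, grad1_g(v, A @ v), atol=1e-5))  # True
\end{verbatim}
The check passes without ever forming $\nabla_v \th^*(v)$, which is precisely the property that the value-function reformulation exploits.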
The optimization in \eqref{equ:bar_opt} yields a simple closed-form solution: \begin{align*} \delta_{k}=\nabla f(v_k,\th_k)+\lambda_{k}\nabla q(v_k,\th_k), ~~ \text{with }~~ \lambda_{k}=\max\left(\frac{\phi_{k}-\left\langle \nabla f(v_k,\th_k),\nabla q(v_k,\th_k)\right\rangle }{||\nabla q(v_k,\th_k)||^{2}},~~0\right), \end{align*} and $\lambda_k=0$ in the case of $||\nabla q(v_k,\th_k)||=0$. \textbf{Practical Approximation.} The main bottleneck of the method above is calculating $q(v_k,\th_k)$ and $\nabla q(v_{k}, \theta_k)$, which requires evaluation of $\th^{*}(v_{k})$. In practice, we approximate $\th^{*}(v_{k})$ by $\th_{k}^{(T)}$, where $\th_{k}^{(T)}$ is obtained by running $T$ steps of gradient descent of $g(v_k, \cdot)$ w.r.t. $\th$ starting from $\th_{k}$. That is, we set $\th_{k}^{(0)}=\th_{k}$ and let \begin{align}\label{equ:thetaTT} \hspace{-0.3cm} \th^{(t+1)}_k = \th^{(t)}_k - \alpha \nabla_\th g(v_k, \th^{(t)}_k), ~~~ t = 0,\ldots, T-1, \end{align} for some step size parameter $\alpha>0$. We obtain an estimate of $q(v,\th)$ at iteration $k$ by replacing $\th^{*}(v_{k})$ with $\th_{k}^{(T)}$: $ \hat{q}(v,\th)=g(v,\th)-g(v,\th^{(T)}_k). $ We substitute $\hat{q}(v_k,\th_k)$ into (\ref{equ:bar_opt}) to obtain the update direction $\delta_k$. The full procedure is summarized in Algorithm~\ref{alg:bome}. Note that $\th^{(T)}_k$ is viewed as a constant when defining $\hat{q}(v,\th)$, and hence no differentiation of $\th^{(T)}_k$ is performed when calculating the gradient $\dd \hat{q}$. This differs from truncated back-propagation methods \citep[e.g.,][]{shaban2019truncated}, which differentiate through $\theta^{(T)}_k$ as a function of $v$. Alternatively, $\dd \hat{q}$ can be viewed as a plug-in estimator. We know that \begin{align*} \nabla_{v_{k}}q(v_{k},\th_{k}) & =\nabla_{v_{k}}g(v_{k},\th_{k})-\nabla_{v_{k}}g(v_{k},\th^{*}(v_{k}))\\ & =\nabla_{v_{k}}g(v_{k},\th_{k})-\left[\nabla_{1}g(v_{k},\th^{*}(v_{k}))+\nabla_{v_{k}}\th^{*}(v_{k})\nabla_{2}g(v_{k},\th^{*}(v_{k}))\right]\\ & =\nabla_{v_{k}}g(v_{k},\th_{k})-\nabla_{1}g(v_{k},\th^{*}(v_{k})), \end{align*} where $\nabla_{1}$ denotes taking the derivative w.r.t. the first variable. Since $\th^{*}(v_{k})$ is unknown, we estimate $\nabla_{1}g(v_{k},\th^{*}(v_{k}))$ by plugging in $\th_{k}^{(T)}$ to approximate $\th^{*}(v_{k})$: \[ \nabla_{v_{k}}\hat{q}(v_{k},\th_{k})=\nabla_{v_{k}}g(v_{k},\th_{k})-\nabla_{1}g(v_{k},\th_{k}^{(T)}). \] Each step of Algorithm~\ref{alg:bome} can be viewed as taking one step (starting from $v_k,\th_k$) toward solving an approximate constrained optimization problem: \bbb \label{eq:ghatopt} \min_{\v, \th} f(\v, \th) ~~~~~s.t.~~~~~ g(\v, \th) \leq g(\v, \th^{(T)}_k), \eee which can be viewed as a relaxation of the exact constrained optimization formulation \eqref{eq:bo_constrain}, because $\{(\v,\th)\colon g(\v,\th)\leq g^*(v)\}$ is a subset of $\{(\v,\th)\colon g(\v,\th) \leq g(\v, \theta^{(T)}_k)\}$. \begin{algorithm*}[t] \vspace{-0.04cm} \caption{Bilevel Optimization Made Easy (BOME!)} \begin{algorithmic} \STATE \textbf{Goal}: Solve $\min_{v,\theta} f(v,\theta)$ ~~$s.t.$~~ $\theta \in \argmin g(v, \cdot)$. \STATE \textbf{Input}: Initialization $(v_0, \th_0)$; inner steps $T$; outer and inner stepsizes $\xi$, $\alpha$ (set $\alpha = \xi$ by default). \vspace{.05em} \FOR{ iteration $k$} \vspace{.1em} \STATE 1. Get $\textcolor{MidnightBlue}{\theta_k^{(T)}}$ by $T$ steps of gradient descent on $g(v_k, \cdot)$ starting from $\th_k$ (see Eq.~\eqref{equ:thetaTT}).
\STATE 2. Set $\hat{q}(v, \theta)=g(v,\th)-g(v,\textcolor{MidnightBlue}{\th_{k}^{(T)}})$. \vspace{.2em} \STATE 3. Update $(v, \theta)$: $(v_{k+1},\theta_{k+1}) \gets (v_{k},\theta_k) - \xi ( \dd f(v_{k},\theta_k) + \lambda_k \dd \hat q(v_{k},\theta_k))$ \bb ~~~~~~\text{where}~~~~~ \lambda_k = \max\left ( \frac{\phi_k - \langle\dd f(v_k, \theta_k),~~ \dd\hat q(v_k,\theta_k) \rangle }{\norm{\dd\hat q(v_k,\theta_k) }^2},~~0 \right ), \vspace{-.5em} \ee \vspace{-.2em} ~~~~and $\phi_k = \eta ||\dd \hat q(v_{k},\theta_k) ||^2$ (default), or $\phi_k = \eta \hat{q}(v_{k},\theta_k)$ with $\eta>0$. \vspace{.5\baselineskip} \STATE \textbf{Remark}: ~1) We treat $\textcolor{MidnightBlue}{\theta_k^{(T)}}$ as a constant when taking the derivative of $\hat q$; ~2) in practice, step 3 can use separate stepsizes $(\xi_v, \xi_\th)$ and standard optimizers like Adam~\citep{kingma2014adam}; ~3) we use $\eta=0.5$ and $T=10$ by default. \ENDFOR \end{algorithmic} \label{alg:bome} \end{algorithm*} \section{Introduction} \vspace{-5pt} We consider the bilevel optimization (BO) problem: \begin{equation} \min_{v,\theta} f\big(v, \theta\big)~~~~~s.t.~~~~~\theta \in \argmin_{\theta'} g\big(v, \theta'\big), \label{eq:bo} \end{equation} where the goal is to minimize an \emph{outer objective} $f$ whose variables include the solution of another minimization problem w.r.t. an \emph{inner objective} $g$. Here $\theta$ and $v$ are the \emph{inner} and \emph{outer} variables, respectively. We assume that $v \in \RR^m, \theta \in \RR^n$ and that $g(v,\cdot)$ attains a minimum for each $v$. BO is useful in a variety of machine learning tasks. A canonical example is hyperparameter optimization, in which case $f$ (resp. $g$) is the validation (resp. training) loss associated with a model parameter $\theta$ and a hyperparameter $v$, and we want to find the optimal hyperparameter $v$ that minimizes the validation loss $f$ when $\theta$ is determined by minimizing the training loss; see e.g., \citet{pedregosa2016hyperparameter, franceschi2018bilevel}. Other applications include meta learning~\citep{franceschi2018bilevel}, continual learning~\citep{pham2020contextual}, reinforcement learning~\citep{yang2019provably}, and adversarial learning~\citep{jiang2021learning}. See \citet{liu2021investigating} for a recent survey. BO is notoriously challenging due to its nested nature. Despite the large literature, most existing methods for BO are slow and unsatisfactory in various ways. For example, a major class of BO methods is based on direct gradient descent on the outer variable $v$ while viewing the optimal inner variable $\theta^*(v) = \arg \min_\theta g(v,\theta)$ as a (uniquely defined) function of $v$. The key difficulty is to calculate the derivative $\dthetadv$, which may require expensive manipulation of the Hessian matrix of $g$ via the implicit function theorem. Another approach is to replace the lower-level optimization with the stationarity condition $\dd_\theta g(v, \theta) =0$. This still requires Hessian information and, more importantly, is unsuitable for non-convex $g$ since it allows $\theta$ to be any stationary point of $g(v, \cdot)$, not necessarily a minimizer.
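To make Algorithm~\ref{alg:bome} above concrete, here is a minimal, self-contained NumPy sketch of the full BOME loop on a one-dimensional toy instance with known solution $v^*=\th^*=1$; the toy objectives, step sizes, and iteration counts are illustrative choices, not our experimental settings.
\begin{verbatim}
import numpy as np

# Toy instance: f(v, th) = (v - 1)^2 + (th - 1)^2 and
# g(v, th) = 0.5 (th - v)^2, so th*(v) = v and v* = th* = 1.
df = lambda v, th: np.array([2.0 * (v - 1.0), 2.0 * (th - 1.0)])
dg = lambda v, th: np.array([-(th - v), th - v])  # [d/dv, d/dth]

def bome(v, th, xi=0.05, alpha=0.05, eta=0.5, T=10, K=500):
    for _ in range(K):
        # Step 1: T inner gradient steps on g(v, .) give th^(T).
        thT = th
        for _ in range(T):
            thT -= alpha * dg(v, thT)[1]
        # Step 2: hat q(v, th) = g(v, th) - g(v, th^(T)); treating
        # th^(T) as a constant, its gradient is the plug-in estimator.
        dq = np.array([dg(v, th)[0] - dg(v, thT)[0], dg(v, th)[1]])
        # Step 3: dynamic-barrier update with closed-form lambda_k.
        phi = eta * (dq @ dq)
        lam = max((phi - df(v, th) @ dq) / (dq @ dq + 1e-12), 0.0)
        v, th = np.array([v, th]) - xi * (df(v, th) + lam * dq)
    return v, th

print(bome(v=3.0, th=-2.0))  # should approach (1.0, 1.0)
\end{verbatim}
On this strongly convex toy instance the iterates should approach $(1,1)$; the $10^{-12}$ term merely guards against division by zero when $\dd\hat q$ vanishes.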
To the best of our knowledge, the only existing fully first-order BO algorithms\footnote{By fully first-order, we mean methods that only require information of $f,g,\dd f,\dd g$; this excludes methods that apply auto-differentiation or conjugate gradient, which need multiple steps of matrix-vector computation.} are BSG-1~\citep{giovannelli2021bilevel} and BVFSM with its variants~\citep{liu2021value, liu2021valueseq, liu2021towards}; but BSG-1 relies on a non-vanishing approximation that does not yield convergence to the correct solution in general, and BVFSM is sensitive to hyper-parameters on large-scale practical problems and lacks a complete non-asymptotic analysis for the practically implemented algorithm. In this work, we seek a \emph{simple and fast, fully first-order} BO method that can be used with non-convex functions, including those appearing in deep learning applications. The idea is to reformulate \eqref{eq:bo} as a single-level constrained optimization problem using the so-called value-function-based approach~\citep{dinh2010subdifferentials,dempe2020bilevel}. The constrained problem is then solved by stopping the gradient on the single variable that contains the higher-order information and applying a simple first-order dynamic barrier gradient descent method based on the method of \citet{gong2021automatic}. Our contributions are: \textbf{1)} we introduce a novel and fast BO method by applying a modified dynamic barrier gradient descent to the value-function reformulation of BO; \textbf{2)} theoretically, we establish the non-asymptotic convergence of our method to local stationary points (as measured by a special KKT loss) for non-convex $f$ and $g$; importantly, to the best of our knowledge, this work is the first to establish a non-asymptotic convergence rate for a fully first-order BO method, a result that also goes well beyond those of \citet{gong2021automatic} and \citet{ji2021bilevel}; \textbf{3)} empirically, the proposed method achieves better or comparable performance while being more efficient than state-of-the-art BO methods on a variety of benchmarks. \section{Conclusion and Future Work} \vspace{-5pt} BOME, a simple fully first-order bilevel method with a non-asymptotic convergence guarantee, is proposed in this work. While the current theory requires the number of inner-loop iterations to scale logarithmically with the number of outer-loop iterations, we do not observe this requirement empirically. A further study to understand this mechanism is an interesting future direction. \section{Analysis} \vspace{-5pt} We first elaborate on the KKT condition of \eqref{eq:bo_constrain} (Section~\ref{sec:kkt}) and then quantify the convergence of the method by how fast it meets the KKT condition. We consider both the case when $g$ satisfies the Polyak-{\L}ojasiewicz (PL) inequality w.r.t. $\theta$, hence having a unique global optimum (Section~\ref{sec:pl_theory}), and the case when $g$ has multiple local minima (Section~\ref{sec:kl_theory}). \vspace{-5pt} \subsection{KKT Conditions} \label{sec:kkt} \vspace{-5pt} Consider a general constrained optimization problem of the form $\min f(v,\ss)$ s.t. $q(v,\ss)\leq0$.
Under proper regularity conditions, known as constraint qualifications \citep{nw2006numerical}, the first-order KKT condition gives a necessary condition for a feasible point $(v^*, \ss^*)$ with $q(v^*,\ss^*)\leq 0$ to be a local optimum of \eqref{eq:bo_constrain}: there exists a Lagrangian multiplier $\lambda^* \in [0, +\infty)$ such that \bbb \label{eq:kkt0} \begin{split} & \dd f(v^*, \ss^*) + \lambda^* \dd q(v^*, \ss^*) =0, \\ \end{split} \eee and $\lambda^*$ satisfies the complementary slackness condition $\lambda^* q(v^*, \ss^*) =0$. A common regularity condition to ensure \eqref{eq:kkt0} is the \emph{constant rank constraint qualification (CRCQ)} condition \citep{janin1984directional}. \begin{definition} A point $(v^*, \theta^*)$ is said to satisfy CRCQ with a function $h$ if the rank of the Jacobian matrix $\dd h(v,\theta)$ is constant in a neighborhood of $(v^*,\theta^*)$. \end{definition} Unfortunately, \textbf{the KKT condition in \eqref{eq:kkt0} does not hold for the bilevel optimization in \eqref{eq:bo_constrain}.} The CRCQ condition does not typically hold for this problem. This is because the minimum of $q$ is zero, and hence if $(v^*,\theta^*)$ is feasible for \eqref{eq:bo_constrain}, then $(v^*,\theta^*)$ must attain the minimum of $q$, yielding $q(v^*,\theta^*)=0$ and $\dd q(v^*, \theta^*)=0$ if $q$ is smooth; but we could not have $\dd q(v,\theta) =0$ uniformly in a neighborhood of $(v^*,\theta^*)$ (hence CRCQ fails) unless $q$ is constant around $(v^*,\theta^*)$. In addition, if the KKT condition \eqref{eq:kkt0} held, we would have $\dd f(v^*, \ss^*) =-\lambda^*\dd q(v^*, \theta^*)= 0$, which happens only in the rare case when $(v^*,\theta^*)$ is a stationary point of both $f$ and $g$. Instead, one can establish a KKT condition of BO through the form in \eqref{eq:gbo}, because there is nothing special that prevents $(v^*,\theta^*)$ from satisfying CRCQ with $\dd_\theta q = \dd_\theta g$ (even though we just showed that it is difficult to have CRCQ with $q$). Assume $f$ and $\dd_\theta q$ are continuously differentiable, and $(v^*, \th^*)$ is a point satisfying $\dd_\theta q(v^*, \theta^*) =0$ and CRCQ with $\dd_\theta q$. Then, by the typical first-order KKT condition of \eqref{eq:gbo}, there exists a Lagrange multiplier $\omega^* \in \RR^n$ such that \bbb\label{eq:kkt2} \nabla f(v^*,\th^*)+\nabla (\dd_\theta q(v^*,\th^*))\omega^*=0. \eee This condition can be viewed as the limit of a sequence of conditions of the form \eqref{eq:kkt0} in the following way: if we relax the constraint in \eqref{eq:bo_constrain} to $q(v,\theta)\leq c_k$, where $c_k$ is a sequence of positive numbers converging to zero, then we can establish \eqref{eq:kkt0} for each $c_k>0$ and pass to the limit to yield \eqref{eq:kkt2}. \begin{proposition} \label{pro:kkt} Assume that $f$, $q$, $\nabla q$ are continuously differentiable and that $\norm{\nabla{f}}$ and $f$ are bounded. For a feasible point $(v^*, \theta^*)$ of \eqref{eq:bo_constrain} that satisfies CRCQ with $\dd_\theta q$, if $(v^*, \theta^*)$ is the limit of a sequence $\{(v_k, \theta_k)\}_{k=1}^\infty$ satisfying $q(v_{k},\th_{k})\neq0$ for all $k$, and there exists a sequence $\{\lambda_k\} \subset [0,\infty)$ such that \bb \nabla f(v_{k},\th_{k})+\lambda_k \nabla q(v_{k},\th_{k}) \to 0, && q(v_k, \th_k) \to 0, \ee as $k\to +\infty$, then $(v^*,\th^*)$ satisfies \eqref{eq:kkt2}.
\end{proposition} This motivates us to use the following quantity as a measure of the stationarity of the solution returned by the algorithm: \[ \K(v,\th)=\underset{\text{local\ improvement}}{\underbrace{{\textstyle \min_{\lambda\ge0}}||\nabla f(v,\th)+\lambda\nabla q(v,\th)||^{2}}}+\underset{\text{feasibility}}{\underbrace{q(v,\th)}}. \] The hope is to have an algorithm that generates a sequence $\{(v_k, \theta_k)\}_{k=0}^\infty$ satisfying $\K(v_k, \theta_k) \to 0$ as $k\to+\infty$. Intuitively, the first term in $\K(v,\th)$ measures how much $\dd f$ conflicts with $\dd q$ (i.e., how much we can decrease $f$ without increasing $q$), as it is equal to the squared $\ell_2$ norm of the solution to the problem $ \min_{\delta}||\nabla f-\delta||^{2}\ s.t.\ \left\langle \nabla q,\delta\right\rangle \ge0. $ The second term in $\K$ measures how well the $\argmin g$ constraint is satisfied. \vspace{-5pt} \subsection{Convergence with unimodal $g$} \label{sec:pl_theory} \vspace{-5pt} We first present the convergence rate when assuming that $g(v,\cdot)$ has a unique minimizer and satisfies the Polyak-{\L}ojasiewicz (PL) inequality for all $v$, which guarantees a linear convergence rate of gradient descent on the lower-level problem. \vspace{-3pt} \begin{assumption}[PL-inequality] \label{asm:PL-inequality} Given any $v$, assume $g(v,\cdot)$ has a unique minimizer denoted as $\th^*(v)$. Also assume there exists $\kappa>0$ such that for any $(v,\theta)$, $\left\Vert \nabla_{\th}g(v,\th)\right\Vert ^{2}\ge\kappa(g(v,\th)-g(v,\th^*(v)))$. \end{assumption} \vspace{-3pt} The PL inequality gives a characterization of how a small gradient norm implies global optimality. It is implied by, but weaker than, strong convexity. The PL-inequality is more appealing than convexity because some modern over-parameterized deep neural networks have been shown to satisfy the PL-inequality along the trajectory of gradient descent. See, for example, \citet{frei2021proxy,song2021subquadratic,liu2022loss} for more discussion. \vspace{-3pt} \begin{assumption}[Smoothness] \label{asm:Smoothness} $f$ and $g$ are differentiable, and $\dd f$ and $\dd g$ are $L$-Lipschitz w.r.t. the joint inputs $(v,\theta)$ for some $L\in(0,+\infty)$. \end{assumption} \vspace{-3pt} \begin{assumption}[Boundedness] \label{asm:Boundedness} There exists a constant $M<\infty$ such that $\norm{\nabla g(v,\th)}$, $\norm{\nabla f(v,\th)}$, $|f(v,\th)|$, and $|g(v,\th)|$ are all upper bounded by $M$ for any $(v,\th)$. \end{assumption} \vspace{-3pt} Assumptions \ref{asm:Smoothness} and \ref{asm:Boundedness} are both standard in optimization. \vspace{-3pt} \begin{theorem} \label{thm:Convergence k} Consider Algorithm~\ref{alg:bome} with $\xi, \alpha\le1/L$, $\phi_k = \eta \norm{\nabla \hat{q}(v_k,\th_k)}^2$, and $\ul>0$. Suppose that Assumptions~\ref{asm:PL-inequality}, \ref{asm:Smoothness}, and \ref{asm:Boundedness} hold. Then there exists a constant $c$ depending on $\alpha,\kappa,\ul ,L$ such that when $T \geq c$, we have for any $K \geq 0$, \begin{align*} \min_{k\leq K}\, \K(v_{k},\th_{k})\!=\!O\!\left ( \sqrt{\xi } + \sqrt{\frac{q_0}{\xi K}} + \frac{1}{\xi K} + \exp(- b T) \right ) \end{align*} where $q_0 = q(v_{0},\th_{0})$ and $b>0$ is a constant depending on $\kappa$, $L$, and $\alpha$. \end{theorem} \begin{remark} Note that one of the dominant terms depends on the initial value $q_0 = q(v_{0},\th_{0})$. Therefore, we can obtain a better rate if we start from a $\th_0$ with small $q_0$ (hence near the optimum of $g(v_0,\cdot)$).
In particular, when $q(v_{0},\th_{0})=O(1)$, choosing $\xi=O(K^{-1/2})$ gives a $ \min_{k\leq K}\K(v_{k},\th_{k}) = O(K^{-1/4} + \exp(-bT))$ rate. On the other hand, if we start from a better initialization such that $q(v_{0},\th_{0})=O((\xi K)^{-1})$, then choosing $\xi=O(K^{-2/3})$ gives $\min_{k\leq K}\K(v_{k},\th_{k}) = O(K^{-1/3}+\exp(-bT))$. \end{remark} \vspace{-5pt} \subsection{Convergence with multimodal $g$} \vspace{-5pt} \label{sec:kl_theory} The PL-inequality eliminates the possibility of stationary points that are not global optima. To study cases in which $g$ has multiple local optima, we introduce the notion of attraction points under gradient descent. \vspace{-3pt} \begin{definition} [Attraction points] Given any $(v,\th)$, we say that $\th^{\diamond}(v,\th)$ is the attraction point of $(v,\th)$ with step size $\alpha>0$ if the sequence $ \{\theta^{(t)}\}_{t=0}^\infty$ generated by gradient descent $\th^{(t)}=\th^{(t-1)}-\alpha\nabla_{\th}g(v,\th^{(t-1)})$ starting from $\th^{(0)}=\th$ converges to $\th^{\diamond}(v,\th)$. \end{definition} \vspace{-3pt} Assuming the step size ${\alpha}\le1/L$, where $L$ is the smoothness constant defined in Assumption \ref{asm:Smoothness}, one can show the existence and uniqueness of the attraction point of any $(v,\th)$ using Proposition 1.1 of \citet{traonmilin2020basins}. Intuitively, the attraction point of $(v,\th)$ is where the gradient descent algorithm cannot make further improvement. In fact, when ${\alpha}\le1/L$, one can show that $g(v,\theta) \leq g(v,\theta^{\diamond}(v,\theta))$ is equivalent to the stationarity condition $\dd_\theta g(v,\theta)=0$. The set of points $(v,\theta)$ that share the same attraction point forms an attraction basin. Our analysis assumes the PL-inequality within individual attraction basins. \vspace{-3pt} \begin{assumption}[Local PL-inequality within attraction basins] \label{asm:KL-inequality} Assume that for any $(v,\th)$, $\theta^\diamond(v,\theta)$ exists. Also assume that there exists $\kappa>0$ such that for any $(v,\th)$, $\left\Vert \nabla_{\th}g(v,\th)\right\Vert ^{2}\ge\kappa(g(v,\th)-g(v,\theta^{\diamond}(v,\theta)))$. \end{assumption} \vspace{-3pt} We can also define local variants of $q$ and $\mathcal K$ as follows: \[ q^\diamond(v,\theta) = g(v,\th) - g(v, \th^{\diamond}(v,\th)), ~~~ \K^{\diamond}(v,\th)= \min_{\lambda\ge0}\norm{\nabla f(v,\th)+\lambda\nabla q^{\diamond}(v,\th)}^{2} + {q^{\diamond}(v,\th)}. \] Compared with Section~\ref{sec:pl_theory}, a key technical challenge is that $\theta^\diamond(v,\theta)$, and hence $q^\diamond(v,\theta)$, can be discontinuous w.r.t. $\th$ on the boundary between different attraction basins, where $\K^{\diamond}$ is not well defined. However, these boundary points are not stable stationary points, and it is possible to use arguments based on the stable manifold theorem to show that an algorithm with random initialization will almost surely not visit them \citep{shub2013global,lee2016gradient}. \vspace{-3pt} \begin{theorem} \label{thm:nonconvex_converge} Consider Algorithm~\ref{alg:bome} with $\xi, \alpha\le1/L$, $\phi_k = \eta \norm{\nabla \hat{q}(v_k,\th_k)}^2$, and $\ul>0$. Suppose that Assumptions~\ref{asm:Smoothness}, \ref{asm:Boundedness}, and \ref{asm:KL-inequality} hold and that $q^\diamond$ is differentiable at $(v_k,\th_k)$ at every iteration $k\geq 0$.
Then there exists a constant $c$ depending on $\alpha,\kappa,\ul ,L$, such that when $T \geq c$, we have \[ \min_{k\le K}\, \K^{\diamond}(v_{k},\th_{k})=O\left ( \sqrt{\xi} + \sqrt{\frac{1}{\xi K}} + \exp(-bT) \right ), \] where $b$ is a positive constant depending on $\kappa$, $L$, and $\alpha$. \end{theorem} \vspace{-3pt} Unlike Theorem~\ref{thm:Convergence k}, the rate does not improve when $q_0^\diamond := q^{\diamond}(v_0, \th_0)$ is small, because the attraction basin may change across iterations, eliminating the benefit of starting from a good initialization. Choosing $\xi=O(K^{-1/2})$ gives an $O(K^{-1/4}+\exp(-bT))$ rate for $\min_{k\leq K}\K^{\diamond}(v_{k},\th_{k})$. \section{Related Works} \vspace{-5pt} The value-function formulation \eqref{eq:bo_constrain} is a classical approach in bilevel optimization \citep{outrata1990numerical,ye1995optimality,dempe2020bilevel}. However, despite its attractive properties, it has mostly been used as a theoretical tool and much less exploited for practical algorithms compared with the more widely known hypergradient approach (Section~\ref{sec:background}), especially for challenging non-convex functions $f$ and $g$ such as those encountered in deep learning. One exception is \citet{liu2021value}, which proposes a BO method that solves the value-function formulation using an interior-point method combined with a smoothed approximation. This was later improved with a pessimistic trajectory truncation approach \citep{liu2021towards} and a sequential minimization approach \citep{liu2021valueseq} (BVFSM). Similar to our approach, these methods do not require computation of Hessians, thanks to the use of the value function. However, as we observe in experiments (Section~\ref{sec:observations}), BVFSM tends to be dominated by our method in both accuracy and speed, and is sensitive to some hyperparameters that are difficult to tune (such as the coefficients of the log-barrier function in the interior-point method). Theoretically, \citet{liu2021value, liu2021valueseq, liu2021towards} provide only an asymptotic analysis of the convergence of the smoothed and penalized surrogate loss to the target loss; they do not give an analysis for the algorithm that is actually implemented. Our algorithm is built upon the dynamic control barrier method of \citet{gong2021automatic}, an elementary approach for constrained optimization. \citet{gong2021automatic} also applied their approach to a lexicographical optimization of the form $\min_{\theta} f(\theta)$ s.t. $\theta\in \argmin_{\theta'} g(\theta')$, which is a bilevel optimization without an outer variable (known as \emph{simple bilevel optimization} \citep{dempe2021simple}). Our method extends theirs to general bilevel optimization. Such an extension is not straightforward, especially when the lower-level problem is non-convex, and requires introducing the stop-gradient operation in a mathematically correct way. We also provide a non-asymptotic analysis for our method, which goes beyond the continuous-time analysis in~\citet{gong2021automatic}. A key difficulty in the theoretical analysis is that we need to control the approximation error of $\theta^*(v_k)$ by $\theta_k^{(T)}$ at each step, which requires an analysis significantly different from that of \citet{gong2021automatic}. Indeed, non-asymptotic results have not yet been obtained for many BO algorithms; even for the classic hypergradient-based approach, such results were established only recently in \citet{ji2021bilevel}.
We believe that we are the first to establish a non-asymptotic rate for a purely first-order BO algorithm under general assumptions, e.g., the lower-level problem can be either convex or non-convex. Another recent body of theoretical work on BO focuses on how to optimize when only a stochastic approximation of the objectives is available \citep{ghadimi2018approximation,hong2020two,ji2021bilevel,yang2021provably,guo2021randomized,chen2021single,khanduri2021near}; there are also recent works on lower bounds and minimax optimal algorithms \citep{ji2021lower,ji2021bilevel}. These algorithms and analyses are based on hypergradient descent and hence require Hessian-vector products in implementation. \vspace{-5pt} \section{Experiment} \vspace{-5pt} \label{sec:experiment} We conduct experiments (1) to study the correctness, basic properties, and robustness to hyperparameters of BOME, and (2) to test its performance and computational efficiency on challenging ML applications, compared with state-of-the-art bilevel algorithms. In the following, we first list the baseline methods and how we set the hyperparameters. We then introduce the experimental problems in Section~\ref{sec:problems}, which include 3 toy problems and 3 ML applications, and provide the experimental results. Finally, we summarize observations and findings in Section~\ref{sec:observations}. \textbf{Baselines} A comprehensive set of state-of-the-art BO methods are chosen as baselines. These include the \emph{fully first-order} methods BSG-1~\citep{giovannelli2021bilevel} and BVFSM~\citep{liu2021valueseq}; a \emph{stationary-seeking} method, Penalty~\citep{mehra2021penalty}; and \emph{explicit/implicit} methods: ITD~\citep{ji2021bilevel}, AID-CG (using conjugate gradient) and AID-FP (using the fixed point method)~\citep{grazzi2020iteration}, reverse (using reverse auto-differentiation)~\citep{franceschi2017forward}, stocBiO~\citep{ji2021bilevel}, and VRBO~\citep{yang2021provably}. \textbf{Hyperparameters} Unless otherwise specified, BOME strictly follows Algorithm~\ref{alg:bome} with $\phi_k =\eta \norm{\nabla \hat{q}(v_k, \th_k)}^2$, $\eta=0.5$, and $T = 10$. The inner stepsize $\alpha$ is set to be the same as the outer stepsize $\xi$. The stepsizes of all methods are set by a grid search over the set $\{0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, 100, 500, 1000\}$. All toy problems use vanilla gradient descent (GD), and the hyperparameter optimization applications use GD with a momentum of $0.9$. Details are provided in Appendix~\ref{sec:apx-exp}. \vspace{-5pt} \subsection{Experiment Problems and Results} \vspace{-5pt} \label{sec:problems} \textbf{Toy Coreset Problem} To validate the \emph{convergence} property of BOME, we consider: \begin{equation*} \begin{split} \textstyle{\min}_{v, \theta} \norm{\theta - x_0}^2 ~~~s.t.~~~ \theta \in \textstyle{\argmin}_\theta \norm{\theta - X\sigma(v)}^2, \end{split} \label{eq:toy-coreset} \end{equation*} where $\sigma(v) = \exp(v)/\sum_{i=1}^4 \exp(v_i)$ is the softmax function, $v \in \mathbb{R}^4, \theta\in \mathbb{R}^2$, and $X= [x_1, x_2, x_3, x_4] \in \mathbb{R}^{2\times 4}$. The goal is to find the closest point to a target point $x_0$ within the convex hull of $\{x_1,\ldots, x_4\}$. See Fig.~\ref{fig:toy_convergence_and_minimax} (upper row) for the illustration and results. \begin{figure*}[t!]
\centering \vspace{-20pt} \includegraphics[width=\textwidth]{figures/toy_conv_and_minimax.png} \vspace{-10pt} \caption{Results on the toy coreset problem and mini-max problem. (a)-(c): the trajectories of $(v_k,\theta_k)$, and of $f(v_k,\theta_k)$ and $\hat q(v_k,\theta_k)$, for BOME (our method), BSG-1~\citep{giovannelli2021bilevel}, BVFSM~\citep{liu2021valueseq}, Penalty~\citep{mehra2021penalty} and Optimistic GD~\citep{daskalakis2017training} (only for the minimax problem). (d)-(e): trajectories of BOME with different choices of the number of inner gradient steps $T$ and the control coefficient $\eta$. } \label{fig:toy_convergence_and_minimax} \vspace{-0.6cm} \end{figure*} \textbf{Toy Mini-Max Game} The mini-max game is a special and challenging case of BO in which $f$ and $g$ completely contradict each other (e.g., $f=-g$). We consider \begin{equation} \textstyle{\min}_{v,\theta \in \mathbb{R}}~v\th ~~~~s.t.~~~~ \th \in \textstyle{\argmax}_{\th' \in \mathbb{R}}~v\th'. \label{eq:toy-minimax} \end{equation} The optimal solution is $v^* = \th^* = 0$. Note that the naive gradient descent-ascent algorithm diverges to infinity on this problem, and a standard alternative is to use optimistic gradient descent~\citep{daskalakis2017training}. Figure~\ref{fig:toy_convergence_and_minimax} (lower row) shows that BOME works on this problem while other first-order BO methods fail. \textbf{Degenerate Low-Level Problem} Many existing BO algorithms require the low-level singleton (LLS) assumption, which BOME does not. To test this, we consider an example from \citet{liu2020generic}: \begin{equation*} \begin{split} \textstyle{\min}_{v\in \RR, \theta\in\RR^2} \norm{\theta - [v;1]}_2^2 ~~~~ s.t.~~~~ \th \in \textstyle{\argmin}_{(\theta_1',\theta_2')\in \RR^2} (\theta_1' - v)^2, \end{split} \label{eq:toy-lls} \end{equation*} where $\th = (\th_1, \th_2) $ and the optimal solution is $v^*=1, \th^*=(1,1)$. See Fig.~\ref{fig:toy_lls_full} in Appendix~\ref{sec:apx-exp-lls} for the result. \textbf{Data Hyper-cleaning} We are given a noisy training set $\mathcal D_{\text{train}}:= \{x_i, y_i\}_{i=1}^m$ and a clean validation set $\mathcal D_{\text{val}}$. The goal is to optimally weight the training data points so that the model trained on the weighted training set yields good performance on the validation set: \begin{align*} \textstyle{\min}_{v, \theta} \ell^{\text{val}}(\theta), ~~~ s.t.~~~\theta = \textstyle{\argmin}_{\theta'} \left\{ \ell^{\text{train}}(\theta', v) + c \norm{\theta'}^2 \right\}, \end{align*} where $\ell^{\text{val}}$ is the validation loss on $\mathcal D^{\text{val}}$, and $\ell^{\text{train}}$ is a weighted training loss: $\ell^{\text{train}} = \sum_{i=1}^m \sigma(v_i) \ell(x_i, y_i, \theta)$ with $\sigma(v)= \text{Clip}(v, [0,1])$ and $v \in \RR^m$. We set $c = 0.001$. For the dataset, we use MNIST~\citep{deng2012mnist} and FashionMNIST~\citep{xiao2017fashion}. We corrupt 50\% of the training points by assigning them randomly sampled labels. See Fig.~\ref{fig:ho} (upper panel) for the results. (Results for FashionMNIST are reported in Appendix~\ref{sec:apx-exp-hyperclean}.) \begin{figure*}[t] \centering \vspace{-0.8cm} \includegraphics[width=\textwidth]{figures/hyperparameter_copy_3.pdf} \vspace{-18pt} \caption{Result for hyperparameter optimization. \textbf{Top:} data hyper-cleaning on the MNIST dataset. The solid black line is the performance of a model trained purely on the validation set, and the dashed black line is the performance of a model trained on the validation set plus the part of the training set that has correct labels.
\textbf{Bottom:} learnable regularization on the 20 Newsgroup dataset. The solid black line indicates the model performance without any regularization. All results are averaged over 5 random trials. See Appendix~\ref{sec:apx-exp-hyperclean} and~\ref{sec:apx-exp-regularization} for results on FashionMNIST and more details.} \vspace{-1pt} \label{fig:ho} \end{figure*} \textbf{Learnable Regularization} We apply bilevel optimization to learn the optimal regularization coefficients on the 20 Newsgroup dataset:\footnote{Dataset from \url{https://scikit-learn.org/0.19/datasets/twenty_newsgroups.html}.} \begin{align*} \vspace{-0.3cm} \textstyle{\min}_{v,\theta}\, \ell^{\text{val}}(\theta) ~~~\text{s.t.}~~~\theta \in \textstyle{\argmin}_{\theta'} \left\{ \ell^{\text{train}} (\theta') + \norm{W_v \theta'}^2_2 \right\}, \vspace{-0.3cm} \end{align*} where $W_v$ is a matrix depending on $v$, e.g., $W_v = \diag(\exp(v))$. See Fig.~\ref{fig:ho} (lower panel) for results. \textbf{Continual Learning (CL)} CL studies how to learn on a sequence of tasks in an online fashion without catastrophic forgetting of previously learned tasks. We follow the setting of the contextual transformation network (CTN) from \citet{pham2020contextual}, which trains a deep neural network consisting of a quickly updated backbone network (parameterized by $\theta$) and a slowly updated controller network (parameterized by $v$). When training the $\tau$-th task, we update $(v, \theta)$ by $$ \textstyle{\min}_{v, \theta} ~\ell_{1:\tau}^\text{val}\big(v, \th\big) ~~~~~s.t.~~~~~\th \in \textstyle{\argmin}_{\th'} ~\ell_{1:\tau}^\text{train}\big(v, \th'\big), $$ where $\ell_{1:\tau}^\text{train}$ and $\ell_{1:\tau}^\text{val}$ are the training and validation losses available up to task $\tau$. The goal is to update the controller such that the long-term loss $\ell_{1:\tau}^\text{val}$ is minimized, assuming $\theta$ is adapted to the available training loss when new tasks come. \newcommand{\tend}{t} Assume the CL process terminates at time $\tend$. Denote by $a^s_\tau$ the test accuracy of task $s$ after training on task $\tau$. We measure the performance of CL by 1) the final mean accuracy on all seen tasks ($\text{ACC} = \frac{1}{\tend}\sum_{\tau\leq \tend} a^\tau_{\tend}$), 2) how much the model forgets, as measured by the negative backward transfer $\text{NBT} = \frac{1}{\tend}\sum_{\tau \leq \tend} (a^\tau_\tau - a^\tau_\tend)$, and 3) how fast the model learns on new tasks, as measured by the forward transfer $\text{FT} = \frac{1}{\tend}\sum_{\tau\leq \tend} a^\tau_\tau$. Note that $\text{FT} = \text{ACC} + \text{NBT}$. We follow the setting of \citet{pham2020contextual} closely, except that we replace their bilevel optimizer (which is essentially ITD~\citep{ji2021bilevel}) with BOME. See Appendix~\ref{sec:apx-exp-cl} for experiment details. The results are shown in Table~\ref{tab:cl}, where in addition to the bilevel algorithms, we also compare with a set of state-of-the-art CL algorithms, including MER~\citep{riemer2018learning}, ER~\citep{chaudhry2019tiny}, and GEM~\citep{lopez2017gradient}. Table~\ref{tab:cl} also includes an `Offline' baseline -- learning all $\tend$ tasks simultaneously using a single model (which is an upper bound on performance).
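For concreteness, the following minimal Python sketch computes the three CL metrics from a matrix of test accuracies. This is an illustration we add here (not part of the evaluation code); the accuracy matrix \texttt{acc} is a hypothetical input with \texttt{acc[tau, s]} holding $a^s_\tau$.
\begin{verbatim}
import numpy as np

def cl_metrics(acc: np.ndarray):
    """acc[tau, s]: test accuracy on task s after training on task tau."""
    t = acc.shape[0]
    final = acc[t - 1, :]                # a^s_t: accuracy after the last task
    just_learned = np.diag(acc)          # a^tau_tau: accuracy right after task tau
    ACC = final.mean()                   # final mean accuracy over all seen tasks
    NBT = (just_learned - final).mean()  # forgetting from just-learned to final
    FT = just_learned.mean()             # forward transfer; note FT = ACC + NBT
    return ACC, NBT, FT
\end{verbatim}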
\begin{table*}[t] \centering \vspace{-0.1cm} \resizebox{\textwidth}{!}{ \begin{tabular}{lcccccc} \toprule \multirow{3}{*}{Method} & \multicolumn{3}{c}{PMNIST} & \multicolumn{3}{c}{Split CIFAR}\\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} & ACC~$(\uparrow)$ & NBT~$(\downarrow)$ & FT~$(\uparrow)$ & ACC~$(\uparrow)$ & NBT~$(\downarrow)$ & FT~$(\uparrow)$\tabularnewline \midrule Offline & $84.95\pm0.95$ & - & - & $74.11\pm0.66$ & - & -\tabularnewline [0.05cm] MER & $76.59\pm0.74$ & $5.73\pm0.59$ & $82.32\pm0.34$ & $60.32\pm0.86$ & $8.91\pm0.86$ & $69.23\pm0.40$\tabularnewline [0.05cm] CTN~(+ITD) & $78.40\pm0.28$ & $5.62\pm0.39$ & $84.02\pm0.29$ & $67.76\pm0.96$ & $4.88\pm0.77$ & $72.58\pm0.62$\tabularnewline [0.05cm] CTN~(+BVFSM) & $77.78\pm0.32$ & $7.25\pm0.28$ & $\pmb{85.03}\pm0.28$ & $67.04\pm0.76$ & $6.97\pm0.62$ & $\pmb{74.01}\pm0.57$ \tabularnewline [0.05cm] CTN~(+BOME) & $\pmb{80.70}\pm0.26$ & $\pmb{4.09}\pm0.27$ & $\pmb{84.79}\pm0.25$ & $\pmb{68.16}\pm0.60$ & $\pmb{4.72}\pm0.75$ & $72.88\pm0.48$\tabularnewline \bottomrule \end{tabular} } \vspace{-5pt} \caption{Results of continual learning as bilevel optimization. We compute the mean and standard error of each method's results over 5 independent runs. Best results are \textbf{bolded}. Full results with comparisons against other methods are provided in Table~\ref{tab:cl-full} in the Appendix. } \vspace{-0.65cm} \label{tab:cl} \end{table*} \vspace{-8pt} \subsection{Observations} \vspace{-5pt} \label{sec:observations} \textbf{BOME yields faster learning and better solutions at convergence} Figures~\ref{fig:toy_convergence_and_minimax}-\ref{fig:toy_lls_full} show that BOME converges to the optimum of the corresponding bilevel problems and works well on the mini-max optimization and the degenerate low-level problem; see also Fig.~\ref{fig:toy_convergence} in Appendix~\ref{sec:apx-exp-coreset}. In comparison, the other methods like BSG-1, BVFSM, and Penalty fail to converge to the true optimum even with a grid search over their hyperparameters. Moreover, in all three toy examples, BOME guarantees that $\hat{q}$, which is a proxy for the optimality of the inner problem, decreases to $0$. Fig.~\ref{fig:ho} shows that BOME achieves comparable or better performance than the state-of-the-art bilevel methods for hyperparameter optimization. Moreover, BOME exhibits better computational efficiency (Fig.~\ref{fig:ho}), especially on the 20 Newsgroup dataset, where the dimension of $\theta$ is large. In Table~\ref{tab:cl}, we find that directly plugging BOME into the CL problem yields a substantial performance boost. \vspace{-2pt} \textbf{Robustness to parameter choices} Besides the standard step size $\xi$ of typical optimizers, BOME has only three parameters: the control coefficient $\eta$, the number of inner iterations $T$, and the inner step size $\alpha$. We use the default setting of $\eta = 0.5$, $T = 10$ and $\alpha = \xi$ across the experiments. From Fig.~\ref{fig:toy_convergence_and_minimax} (d,e) and Fig.~\ref{fig:ho} (b,d), BOME is robust to the choice of $\eta$, $T$ and $\alpha$, as varying them results in almost identical performance. Specifically, $T=1$ works well in many cases (see Figures \ref{fig:toy_convergence_and_minimax} (e) and \ref{fig:ho} (b)). The fact that BOME works well with a small $T$ makes it computationally attractive in practice. \vspace{-4pt} \textbf{Choice of control barrier $\phi_k$} The control barrier is set to $\phi_k = \eta\norm{\nabla \hat{q}(v_k,\theta_k)}^2$ by default.
Another option is to use $\phi_k = \eta\hat{q}(v_k,\theta_k)$. We test both options on the data hyper-cleaning and learnable regularization experiments in Fig.~\ref{fig:ho} (d), and observe no significant difference (we choose $\eta$ appropriately so that both choices of $\phi_k$ are of the same order). Hence we use $\phi_k = \eta\norm{\nabla \hat{q}(v_k,\theta_k)}^2$ as the default. \vspace{-2pt} \textbf{Comparison against BVFSM} The most relevant baseline to BOME is BVFSM, which similarly adopts the value-function reformulation of the bilevel problem. However, BOME consistently outperforms BVFSM in both the converged results and computational efficiency, across all experiments. More importantly, BOME has fewer hyperparameters and is robust to them, whereas we find BVFSM to be sensitive to its hyperparameters. This makes BOME a better fit for large practical bilevel problems. \section{Background} \vspace{-5pt} \label{sec:background} This section provides a brief background on traditional BO methods. Please see \citet{bard2013practical, dempe2020bilevel, dempe2002foundations} for overviews, and \citet{liu2021investigating} for a survey of recent ML applications. \textbf{Hypergradient Descent} Assume that the minimum of $g(v, \cdot)$ is unique for all $v$, so that we can write $\theta^*(v) = \argmin_{\theta} g(v,\theta)$ as a function of $v$; this is known as the low-level singleton (LLS) assumption. The most straightforward approach to solving \eqref{eq:bo} is to conduct gradient descent on $f(\v, \sv)$ as a function of $v$. Note that $$ \dd_v f(v,\th^*(v)) = \ddv f(\v, \sv) + \textcolor{RedViolet}{\dthetadv} \ddt f(\v, \sv). $$ The difficulty is to compute $\textcolor{RedViolet}{\dthetadv}$. By the implicit function theorem, it satisfies a linear equation: \begin{equation} \label{eq:lineareq} \ddvt g(\v, \sv) + \ddtt g(\v, \sv) \textcolor{RedViolet}{\dthetadv} = 0. \end{equation} If $\ddtt g$ is invertible, we can solve for $\textcolor{RedViolet}{\dthetadv}$ and obtain a gradient update rule on $v$: \begin{equation*} \v_{k+1} \gets \v_k - \xi \left ( \ddv f_k - \big(\ddvt g_k\big)^\top \big(\ddtt g_k \big)^{-1} \ddt f_k \right ), \end{equation*} where $k$ denotes the iteration, $\dd_1 f_k = \dd_1 f(\v_k, \ss^*(\v_k))$, and similarly for the other terms. This approach is sometimes known as \emph{hypergradient descent}. However, hypergradient descent is computationally expensive: besides requiring evaluation of the inner optimum $\theta^*(v_k)$, the main computational bottleneck is solving the linear equation in \eqref{eq:lineareq}. Methods have been developed that approximate \eqref{eq:lineareq} using conjugate gradient~\citep{pedregosa2016hyperparameter,rajeswaran2019meta,grazzi2020iteration}, Neumann series~\citep{liao2018reviving,lorraine2020optimizing}, and related variants \citep{ghadimi2018approximation}. Another popular approximation approach is to replace $\dthetadv$ with $\dd_v \th^{(T)}(\v)$, where $\th^{(T)}(v)$ denotes the $T$-th iterate of gradient descent or other optimization steps on $g(v,\th)$ w.r.t. $\th$ starting from a certain initialization. The gradient $\dd_v \th^{(T)}(\v)$ can be calculated with auto-differentiation (AD) in either forward mode~\citep{franceschi2017forward} or backward mode~\citep{franceschi2018bilevel,franceschi2017forward,shaban2019truncated,li2021fully,arbel2021amortized}, or with their variants \citep{liu2020generic}.
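To make this unrolling strategy concrete, the following PyTorch-style sketch differentiates $f$ through $T$ unrolled inner gradient steps. This is an illustrative sketch we add under the notation above (the callables \texttt{f} and \texttt{g} and the stepsizes are hypothetical placeholders); it is not the implementation of any specific baseline.
\begin{verbatim}
import torch

def itd_hypergradient(f, g, v, theta0, T=10, alpha=0.1):
    """Approximate d/dv f(v, theta*(v)) by differentiating through T
    unrolled gradient steps on g(v, .), starting from theta0.
    Assumes v.requires_grad is True and f, g return scalar tensors."""
    theta = theta0.clone().requires_grad_(True)
    for _ in range(T):
        # create_graph=True keeps the graph so theta^(T) stays a function of v
        g_grad, = torch.autograd.grad(g(v, theta), theta, create_graph=True)
        theta = theta - alpha * g_grad
    # backward pass through the unrolled trajectory; this is where the
    # Hessian-vector products arise
    hyper_grad, = torch.autograd.grad(f(v, theta), v)
    return hyper_grad
\end{verbatim}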
While these approaches claim to be first-order, they require many Hessian-vector or Jacobian-vector products at each iteration and are slow for large problems. Other examples of approximation methods include a neural surrogate method that approximates $\theta^*(v)$ and its gradient $\dthetadv$ with neural networks~\citep{mackay2019self}, and a Gauss-Newton approximation of the Hessian matrix using the covariance of the gradients \citep{giovannelli2021bilevel}. Both approaches introduce a non-vanishing approximation error that is difficult to control. The neural surrogate method also suffers from the high training cost of the neural network. \textbf{Stationary-Seeking Methods.} An alternative method is to replace the argmin constraint in \eqref{eq:bo} with the stationarity condition $\dd_\theta g(\v, \theta) =0,$ yielding a constrained optimization: \begin{equation} \label{eq:gbo} \min_{\v,\th} f(\v, \th)~~~~~s.t.~~~~~ \dd_\th g(\v, \th) = 0. \end{equation} Algorithms for nonlinear equality-constrained optimization can then be applied~\citep{mehra2021penalty}. The constraint in \eqref{eq:gbo} guarantees only that $\theta$ is a stationary point of $g(v,\cdot)$, so it is equivalent to \eqref{eq:bo} only when $g$ is convex w.r.t. $\theta$. Otherwise, the solution of \eqref{eq:gbo} can be a maximum or saddle point of $g$. This makes it problematic for deep learning, where non-convex functions are pervasive. \section{Proof of the Result in Section \ref{sec:kkt}} We prove Proposition \ref{pro:kkt} using Proposition 6.3 (restated below in our notation) of \citet{gong2021automatic}, by checking that all the assumptions required by that proposition are satisfied. Specifically, it remains to show that $\lambda_k<\infty$ for any $k$, that $\lim_{k\to\infty}\nabla q(v_{k},\th_{k})=0$, and that $q$ is lower bounded (this is trivial as $q\ge 0$ by its definition), which we prove below. Firstly, a simple calculation shows that for any $k$, \[ \lambda_{k}=\left[\frac{-\left\langle \nabla f(v_{k},\th_{k}),\nabla q(v_{k},\th_{k})\right\rangle }{||\nabla q(v_{k},\th_{k})||^{2}}\right]_+\le\frac{\sup_{v,\th}||\nabla f(v,\th)||}{||\nabla q(v_{k},\th_{k})||}<\infty. \] Here the last inequality is by $||\nabla q(v_k,\th_k)||>0$. Secondly, since $\nabla q$ is assumed continuous, we have \[ \lim_{k\to\infty}\nabla q(v_{k},\th_{k})=\nabla q(v^{*},\th^{*}). \] As $q(v^{*},\th^{*})=0$ and $q\ge0$, the point $(v^{*},\th^{*})$ is a global minimum of $q$, and hence $\nabla q(v^{*},\th^{*})=0$. Using Proposition 6.3 in \citet{gong2021automatic} gives the desired result. \begin{pro}[Proposition 6.3 in \citet{gong2021automatic}] Assume $f,q,\nabla q$ are continuously differentiable. Let $\{[v_{k},\th_{k},\lambda_{k}]:k=1,2,\ldots\}$ be a sequence which satisfies $\lim_{k\to\infty}||\nabla q(v_{k},\th_{k})||=0$ and $\lim_{k\to\infty}||\nabla f(v_{k},\th_{k})+\lambda_{k}\nabla q(v_{k},\th_{k})||=0$. Assume that $[v^{*},\th^{*}]$ is a limit point of $[v_{k},\th_{k}]$ as $k\to\infty$ and that $[v^{*},\th^{*}]$ satisfies CRCQ with $\nabla_{\th}q$. Then there exists a vector-valued Lagrange multiplier $\omega^{*} \in \RR^m$ (of the same length as $\th$) such that \[ \nabla f(v^{*},\th^{*})+\nabla(\nabla_{\th}q(v^{*},\th^{*}))\omega^{*}=0. \] \end{pro} \section{Proof of the Result in Section \ref{sec:pl_theory}} \label{apx:pl_theory} We define $L_{q}:=2L(L/\kappa+1)$; using Assumptions \ref{asm:Smoothness} and \ref{asm:PL-inequality}, we can show that $q(v,\th)$ is $L_{q}$-smooth (see Lemma \ref{lem:q smoothness} for details).
For simplicity, we also assume that $\xi\le 1$ throughout the proof. We use $b$ with subscripts to denote generic $O(1)$ constants and refer the reader to Section~\ref{misc:constant} for their detailed values. Note that $\hat{q}$ defined in Section \ref{sec:method} changes across iterations (as it depends on $\th_k^{(T)}$), and so does $\nabla \hat{q}$. To avoid confusion, we introduce some additional notation. Firstly, given $v$ and $\th$, $\th^{(T)}$ denotes the result of $T$ gradient descent steps on $g(v,\cdot)$ w.r.t. $\th$, starting from $\th$ with step size $\alpha$ (similar to the definition in (\ref{equ:thetaTT})). Note that $\th^{(T)}$ depends on $v$, $\th$ and $\alpha$. Our notation does not reflect the dependence on $v$ and $\alpha$, as we find this introduces no ambiguity while greatly simplifying the notation. Also note that when taking the gradient of $\hat{q}$, the $\th_k^{(T)}$ at iteration $k$ is treated as a constant and the gradient does not pass through it. To be precise, we define $\hdq(v,\th)=\nabla g(v,\th)-\left[\nabla_{1}^{\top}g(v,\th^{(T)}),\textbf{0}^{\top}\right]^{\top}$, where $\textbf{0}$ denotes a zero vector with the same dimension as $\th$. Using this definition, $\hdq(v_k,\th_k) = \nabla \hat{q}(v_k,\th_k)$ at iteration $k$. We also let $\lm(v,\th)$ be the solution of the dual problem of \begin{equation} \label{eq:proof_primal} \min_{\delta}||\delta-\nabla f(v,\th)||^{2}\ s.t.\ \left\langle \hdq(v,\th),\delta\right\rangle \ge\ul||\hdq(v,\th)||^{2}. \end{equation} That is, \begin{equation} \label{eq:local solution} \lm(v,\th)=\begin{cases} \frac{[\ul||\hdq(v,\th)||^{2}-\left\langle \hdq(v,\th),\nabla f(v,\th)\right\rangle ]_{+}}{||\hdq(v,\th)||^{2}} & \text{when}\ ||\hdq(v,\th)||>0,\\ 0 & \text{when}\ ||\hdq(v,\th)||=0. \end{cases} \end{equation} We may write $\lm$ for $\lm(v,\th)$ when no confusion arises. Also, denote $\d(v,\th) = \lm(v,\th) \hdq(v,\th) + \nabla f(v,\th)$, so that $\delta_k = \d(v_k,\th_k)$. We start with several technical lemmas establishing basic properties of the relevant functions. \subsection{Technical Lemmas} \begin{lemma} \label{lem:Lower bound} Under Assumption \ref{asm:PL-inequality}, for any $v,\th$, $g(v,\th)-g(v,\th^*(v))\ge\frac{\kappa}{4}||\th-\th^{*}(v)||^{2}$. \end{lemma} \begin{lemma} \label{lem:bound hat dq} Under Assumptions \ref{asm:PL-inequality} and \ref{asm:Smoothness}, we have $||\nabla q(v,\th)-\hdq(v,\th)||\le L||\th^{(T)}-\th^{*}(v)||$ for any $v,\th$. Also, when $||\hat{\nabla}q(v,\th)||=0$, $q(v,\th)=0$. \end{lemma} \begin{lemma} \label{lem:th implicit lipschitz} Under Assumptions \ref{asm:Smoothness} and \ref{asm:PL-inequality}, we have $||\th^{*}(v_{2})-\th^{*}(v_{1})||\le\frac{2L}{\kappa}||v_{1}-v_{2}||.$ \end{lemma} \begin{lemma} \label{lem:q smoothness} Under Assumption \ref{asm:Smoothness}, we have $||\nabla_{\th}q(v,\th_{1})-\nabla_{\th}q(v,\th_{2})||\le L||\th_{1}-\th_{2}||$ for any $v$. Further assuming Assumption \ref{asm:PL-inequality}, we have \[ \left\Vert \nabla q(v_{1},\th_{1})-\nabla q(v_{2},\th_{2})\right\Vert \le L_{q}||[v_{1},\th_{1}]-[v_{2},\th_{2}]||, \] where $L_{q}:=2L(L/\kappa+1)$. \end{lemma} \begin{lemma} \label{lem:opt th_T} Under Assumptions \ref{asm:PL-inequality} and \ref{asm:Smoothness}, and assuming that $\alpha<2/L$.
Given any $v,\th$, suppose $\th^{(0)}=\th$ and $\th^{(t+1)}=\th^{(t)}-\alpha\nabla_{\th}q(v,\th^{(t)})$. Then for any $t$, we have $q(v,\th^{(t)})\le\exp(-\bi(\alpha,L,\kappa)t)q(v,\th)$, where $\bi(\alpha,L,\kappa)$ is a strictly positive constant that depends on $\alpha$, $L$ and $\kappa$. \end{lemma} \begin{lemma} \label{lem:bound d} Under Assumption \ref{asm:Boundedness}, for any $[v,\th]$, we have $||\d(v,\th)||,||\nabla q(v,\th)||,||\hdq(v,\th)||\le\bii(M,\ul )$, where $\bii(M,\ul )=3(1+\ul)M$. \end{lemma} \begin{lemma} \label{lem:bound lambda psi} Under Assumption \ref{asm:Boundedness}, for any $[v,\th]$, we have $\lm||\hdq||^{2}\le \ul ||\hdq||^{2}+M||\hdq||$, where $\lm$ is defined in (\ref{eq:local solution}). \end{lemma} \begin{lemma} \label{lem:bound dq} Under Assumptions \ref{asm:PL-inequality} and \ref{asm:Smoothness}, we have $||\nabla q(v,\th)||\le2\kappa^{-1/2}L_{q}q^{1/2}(v,\th).$ \end{lemma} \subsubsection{Main Lemmas} We now give the main lemmas used to prove the result in Section \ref{sec:pl_theory}. \begin{lemma} \label{lem:one step descent of q} Under Assumptions \ref{asm:PL-inequality}, \ref{asm:Smoothness} and \ref{asm:Boundedness}, when $||\hdq(v_{k},\th_{k})||>0$, we have \begin{align*} q(v_{k+1},\th_{k+1})-q(v_{k},\th_{k}) & \le-\xi \ul ||\nabla q(v_{k},\th_{k})||^{2}+\xi \ul L_{q}||\th_{k}^{(T)}-\th^{*}(v_{k})||\ (L_{q}||\th_{k}^{(T)}-\th^{*}(v_{k})||+2L_{q}||\th_{k}-\th^{*}(v_{k})||)\\ & +\xi\bii L||\th_{k}^{(T)}-\th^{*}(v_{k})||+L_{q}\xi^{2}\bii^{2}/2. \end{align*} When $||\hdq(v_{k},\th_{k})||=0$, we have $q(v_{k+1},\th_{k+1})-q(v_{k},\th_{k})\le\xi^{2}L_{q}\bii^{2}/2$. \end{lemma} \begin{lemma} \label{lem:decay q} Under Assumptions \ref{asm:PL-inequality}, \ref{asm:Smoothness} and \ref{asm:Boundedness}, choosing $T\ge\biii(\ul ,\alpha,\kappa,L)$, we have \[ q(v_{k},\th_{k})\le\exp(-\biv k)q(v_{0},\th_{0})+\Delta, \] where $\biv=-\log(1-\frac{\xi}{4}\ul \kappa)$ is a strictly positive constant and $\Delta=O(\exp(-\bi T)+\xi)$. \end{lemma} \begin{lemma} \label{lem:decay hdq} Under Assumptions \ref{asm:PL-inequality}, \ref{asm:Smoothness} and \ref{asm:Boundedness}, we have \[ \sum_{k=0}^{K-1}||\nabla q(v_{k},\th_{k})||^{2}\le\frac{\bv q(v_{0},\th_{0})}{\xi}+K\bvi\Delta, \] where $\bv$ is a constant depending on $L_{q},\ul ,\kappa$; $\bvi$ is a constant depending on $\kappa$, $L$; and $\Delta$ is defined in Lemma \ref{lem:decay q}. \end{lemma} \begin{lemma} \label{lem:Convergence hk} Under Assumptions \ref{asm:PL-inequality}, \ref{asm:Smoothness} and \ref{asm:Boundedness}, choosing $T\ge\biii(\ul ,\alpha,\kappa,L)$ and assuming that $\alpha,\xi\le1/L$, we have \[ \sum_{k=0}^{K-1}\left[||\d(v_{k},\th_{k})||^{2}+q(v_{k},\th_{k})\right]=O(\xi^{-1}+K\exp(-\bi T/2)+K\xi^{1/2}+\xi^{-1/2}K^{1/2}q^{1/2}(v_{0},\th_{0})). \] \end{lemma} \subsection{Proof of Theorem \ref{thm:Convergence k}} Using our definition of $\lm$ in (\ref{eq:local solution}), we have \begin{align*} ||\nabla f(v,\th)+\lm(v,\th)\nabla q(v,\th)|| & \le||\nabla f(v,\th)+\lm(v,\th)\hdq(v,\th)||+||\lm(v,\th)(\hdq(v,\th)-\nabla q(v,\th))||\\ & =||\d(v,\th)||+||\lm(v,\th)(\hdq(v,\th)-\nabla q(v,\th))||. \end{align*} By Lemma \ref{lem:bound hat dq}, when $||\hdq||=0$ we have $q=0$ and thus $||\nabla q||=0$. In this case, $||\lm(\hdq-\nabla q)||=0$.
When $||\hdq||>0$, some algebra shows that \begin{align*} ||\lm(\hdq-\nabla q)|| & \le\left[\ul -\left\langle \nabla f,\hdq/||\hdq||\right\rangle ||\hdq||^{-1}\right]_{+}||\hdq-\nabla q||. \end{align*} Notice that \begin{align*} ||\hdq(v,\th)-\nabla q(v,\th)|| & \le L||\th^{(T)}-\th^{*}(v)||\\ & \le2L\kappa^{-1/2}q^{1/2}(v,\th^{(T)})\\ & \le2L\kappa^{-1/2}\exp(-\bi T/2)q^{1/2}(v,\th)\\ & \le2L\kappa^{-1}\exp(-\bi T/2)||\nabla q(v,\th)||. \end{align*} Here the first inequality is by Lemma \ref{lem:bound hat dq}, the second inequality is by Lemma \ref{lem:Lower bound}, the third inequality is by Lemma \ref{lem:opt th_T}, and the last inequality is by Assumption \ref{asm:PL-inequality} (using $||\nabla q(v,\th)||\ge||\nabla_{\th} g(v,\th)||$). Similarly, under the assumption that $T\ge\left\lceil -b_{1}^{-1}\log(\frac{1}{16}\kappa^{2}L^{-2})\right\rceil $, we have $L\kappa^{-1}\exp(-\bi T/2)\le1/4$, and hence \begin{align*} ||\hdq(v,\th)|| & =||\hdq(v,\th)-\nabla q(v,\th)+\nabla q(v,\th)||\\ & \ge||\nabla q(v,\th)||-||\hdq(v,\th)-\nabla q(v,\th)||\\ & \ge||\nabla q(v,\th)||(1-2L\kappa^{-1}\exp(-\bi T/2))\\ & \ge\frac{1}{2}||\nabla q(v,\th)||. \end{align*} This implies that \[ \frac{||\hdq-\nabla q||}{||\hdq||}\le2\frac{||\hdq-\nabla q||}{||\nabla q||}\le4L\kappa^{-1}\exp(-\bi T/2). \] We thus have \begin{align*} ||\lm(v,\th)(\hdq(v,\th)-\nabla q(v,\th))|| & \le \ul ||\hdq-\nabla q||+\left\langle \nabla f,\frac{\hdq}{||\hdq||}\right\rangle \frac{||\hdq-\nabla q||}{||\hdq||}\\ & \le2L\kappa^{-1}\exp(-\bi T/2)\left[\ul ||\nabla q(v,\th)||+2\left\langle \nabla f,\frac{\hdq}{||\hdq||}\right\rangle \right]\\ & \le2L\kappa^{-1}\exp(-\bi T/2)(\ul +2)\bii, \end{align*} where the last inequality is by Lemma \ref{lem:bound d}. Combining all the results and using $||\nabla q(v_{k},\th_{k})||\le2\kappa^{-1/2}L_{q}q^{1/2}(v_{k},\th_{k})$ from Lemma \ref{lem:bound dq}, we have \begin{align*} \K(v,\th) & \le||\nabla f(v,\th)+\lm(v,\th)\nabla q(v,\th)||^{2}+q(v,\th)\\ & \le2||\nabla f(v,\th)+\lm(v,\th)\hdq(v,\th)||^{2}+q(v,\th)+2||\lm(v,\th)(\hdq(v,\th)-\nabla q(v,\th))||^{2}\\ & \le2||\d(v,\th)||^{2}+q(v,\th)+8L^{2}\kappa^{-2}\exp(-\bi T)(\ul +2)^{2}\bii^{2}. \end{align*} Using Lemma \ref{lem:Convergence hk}, we have \begin{align*} \min_{k}\K(v_{k},\th_{k}) & =O\left(\frac{1}{K}\sum_{k=0}^{K-1}(||\d(v_{k},\th_{k})||^{2}+q(v_{k},\th_{k}))+\exp(-\bi T)\right)\\ & =O\left(\frac{1}{\xi K}+\exp(-\bi T/2)+\xi^{1/2}+\frac{q^{1/2}(v_{0},\th_{0})}{\xi^{1/2}K^{1/2}}\right). \end{align*} \subsection{Proof of Lemmas} \subsubsection{Proof of Lemma \ref{lem:one step descent of q}} When $||\hdq(v_{k},\th_{k})||>0$: by Lemma \ref{lem:q smoothness}, $q$ is $L_{q}$-smooth, and we have \begin{align*} q(v_{k+1},\th_{k+1})-q(v_{k},\th_{k}) & \le-\xi\left\langle \nabla q(v_{k},\th_{k}),\d(v_{k},\th_{k})\right\rangle +\frac{L_{q}\xi^{2}}{2}||\d(v_{k},\th_{k})||^{2}\\ & \le-\xi\left\langle \hdq(v_{k},\th_{k}),\d(v_{k},\th_{k})\right\rangle -\xi\left\langle \nabla q(v_{k},\th_{k})-\hdq(v_{k},\th_{k}),\d(v_{k},\th_{k})\right\rangle +L_{q}\xi^{2}\bii^{2}/2\\ & \le-\xi \ul ||\hdq(v_{k},\th_{k})||^{2}-\xi\left\langle \nabla q(v_{k},\th_{k})-\hdq(v_{k},\th_{k}),\d(v_{k},\th_{k})\right\rangle +L_{q}\xi^{2}\bii^{2}/2\\ & \le-\xi \ul ||\hdq(v_{k},\th_{k})||^{2}+\xi\bii||\nabla q(v_{k},\th_{k})-\hdq(v_{k},\th_{k})||+L_{q}\xi^{2}\bii^{2}/2,
\end{align*} where the second and last inequalities are by Lemma \ref{lem:bound d}, and the third inequality is ensured by the constraint in the local subproblem ($\left\langle \hdq(v_{k},\th_{k}),\d(v_{k},\th_{k})\right\rangle \ge \ul ||\hdq(v_k,\th_k)||^{2}$). By Lemma \ref{lem:bound hat dq}, we have $||\nabla q(v_{k},\th_{k})-\hdq(v_{k},\th_{k})||\le L||\th_{k}^{(T)}-\th^{*}(v_{k})||$. Plugging in this bound, we have \[ q(v_{k+1},\th_{k+1})-q(v_{k},\th_{k})\le-\xi \ul ||\hdq(v_{k},\th_{k})||^{2}+\xi\bii L||\th_{k}^{(T)}-\th^{*}(v_{k})||+L_{q}\xi^{2}\bii^{2}/2. \] Also notice that \begin{align*} \left|||\hdq(v_{k},\th_{k})||^{2}-||\nabla q(v_{k},\th_{k})||^{2}\right| & \le||\hdq(v_{k},\th_{k})-\nabla q(v_{k},\th_{k})||\ ||\hdq(v_{k},\th_{k})+\nabla q(v_{k},\th_{k})||\\ & \le||\hdq(v_{k},\th_{k})-\nabla q(v_{k},\th_{k})||\ (||\hdq(v_{k},\th_{k})-\nabla q(v_{k},\th_{k})||+2||\nabla q(v_{k},\th_{k})||)\\ & \le L_{q}||\th_{k}^{(T)}-\th^{*}(v_{k})||\ (L_{q}||\th_{k}^{(T)}-\th^{*}(v_{k})||+2||\nabla q(v_{k},\th_{k})||)\\ & =L_{q}||\th_{k}^{(T)}-\th^{*}(v_{k})||\ (L_{q}||\th_{k}^{(T)}-\th^{*}(v_{k})||+2||\nabla q(v_{k},\th_{k})-\nabla q(v_{k},\th^{*}(v_{k}))||)\\ & \le L_{q}||\th_{k}^{(T)}-\th^{*}(v_{k})||\ (L_{q}||\th_{k}^{(T)}-\th^{*}(v_{k})||+2L_{q}||\th_{k}-\th^{*}(v_{k})||), \end{align*} where the third inequality is by Lemma \ref{lem:bound hat dq}, the equality is by $\nabla q(v_k, \th^*(v_k))=0$, and the last inequality is by Lemma \ref{lem:q smoothness}. Using this bound, we further have \begin{align*} q(v_{k+1},\th_{k+1})-q(v_{k},\th_{k}) & \le-\xi \ul ||\nabla q(v_{k},\th_{k})||^{2}+\xi \ul \left|||\hdq(v_{k},\th_{k})||^{2}-||\nabla q(v_{k},\th_{k})||^{2}\right|\\ & +\xi\bii||\nabla q(v_{k},\th_{k})-\hdq(v_{k},\th_{k})||+L_{q}\xi^{2}\bii^{2}/2\\ & \le-\xi \ul ||\nabla q(v_{k},\th_{k})||^{2}+\xi \ul L_{q}||\th_{k}^{(T)}-\th^{*}(v_{k})||\ (L_{q}||\th_{k}^{(T)}-\th^{*}(v_{k})||+2L_{q}||\th_{k}-\th^{*}(v_{k})||)\\ & +\xi\bii L||\th_{k}^{(T)}-\th^{*}(v_{k})||+L_{q}\xi^{2}\bii^{2}/2. \end{align*} When $||\hdq(v_{k},\th_{k})||=0$, by Lemma \ref{lem:bound hat dq}, $q(v_{k},\th_{k})=0$ and hence $\nabla q(v_{k},\th_{k})=0$. We thus have \begin{align*} q(v_{k+1},\th_{k+1})-q(v_{k},\th_{k}) & \le-\xi\left\langle \nabla q(v_{k},\th_{k}),\d(v_{k},\th_{k})\right\rangle +\frac{L_{q}\xi^{2}}{2}||\d(v_{k},\th_{k})||^{2}\\ & =\frac{L_{q}\xi^{2}}{2}||\d(v_{k},\th_{k})||^{2}\\ & \le\xi^{2}L_{q}\bii^{2}/2. \end{align*} \subsubsection{Proof of Lemma \ref{lem:decay q}} By Lemma \ref{lem:one step descent of q}, when $||\hdq(v_{k},\th_{k})||>0$, we have \begin{align*} q(v_{k+1},\th_{k+1})-q(v_{k},\th_{k}) & \le-\xi \ul ||\nabla q(v_{k},\th_{k})||^{2}+\xi \ul L_{q}||\th_{k}^{(T)}-\th^{*}(v_{k})||\ (L_{q}||\th_{k}^{(T)}-\th^{*}(v_{k})||+2L_{q}||\th_{k}-\th^{*}(v_{k})||)\\ & +\xi\bii L||\th_{k}^{(T)}-\th^{*}(v_{k})||+L_{q}\xi^{2}\bii^{2}/2. \end{align*} By Lemmas \ref{lem:Lower bound} and \ref{lem:opt th_T}, \begin{align*} ||\th_{k}^{(T)}-\th^{*}(v_{k})|| & \le2\kappa^{-1/2}q^{1/2}(v_{k},\th_{k}^{(T)})\le2\kappa^{-1/2}\exp(-\bi T/2)q^{1/2}(v_{k},\th_{k}),\\ ||\th_{k}-\th^{*}(v_{k})|| & \le2\kappa^{-1/2}q^{1/2}(v_{k},\th_{k}).
\end{align*} Using these bounds, we have \[ L_{q}||\th_{k}^{(T)}-\th^{*}(v_{k})||\ (L_{q}||\th_{k}^{(T)}-\th^{*}(v_{k})||+2L_{q}||\th_{k}-\th^{*}(v_{k})||)\le12L_{q}^{2}\kappa^{-1}\exp(-\bi T)q(v_{k},\th_{k}). \] This implies that \begin{align*} & q(v_{k+1},\th_{k+1})-q(v_{k},\th_{k})\\ \le & -\xi \ul ||\nabla q(v_{k},\th_{k})||^{2}+12\xi \ul L_{q}^{2}\kappa^{-1}\exp(-\bi T)q(v_{k},\th_{k})\\ + & 2\xi\bii L\kappa^{-1/2}\exp(-\bi T/2)q^{1/2}(v_{k},\th_{k})+L_{q}\xi^{2}\bii^{2}/2\\ \le & -\xi \ul \kappa q(v_{k},\th_{k})+12\xi \ul L_{q}^{2}\kappa^{-1}\exp(-\bi T)q(v_{k},\th_{k})\\ + & 2\xi\bii L\kappa^{-1/2}\exp(-\bi T/2)q^{1/2}(v_{k},\th_{k})+L_{q}\xi^{2}\bii^{2}/2. \end{align*} Choosing $T$ such that $T\ge\biii(\ul ,\alpha,\kappa,L)$, where \[ \biii(\ul ,\alpha,\kappa,L)=\left\lceil -b_{1}^{-1}\log(\frac{\ul \kappa}{64\ul L_{q}^{2}})\right\rceil, \] we have \[ q(v_{k+1},\th_{k+1})-q(v_{k},\th_{k})\le-\frac{3}{4}\xi \ul \kappa q(v_{k},\th_{k})+2\xi\bii L\kappa^{-1/2}\exp(-\bi T/2)q^{1/2}(v_{k},\th_{k})+L_{q}\xi^{2}\bii^{2}/2. \] This implies that when $\frac{64\bii^{2}L^{2}}{\ul ^{2}\kappa}\exp(-\bi T)\le q(v_{k},\th_{k})$ and $\frac{2L_{q}\xi\bii^{2}}{\ul \kappa}\le q(v_{k},\th_{k})$, \[ q(v_{k+1},\th_{k+1})-q(v_{k},\th_{k})\le-\frac{1}{4}\xi \ul \kappa q(v_{k},\th_{k}). \] Let $a=\max(\frac{64\bii^{2}L^{2}}{\ul ^{2}\kappa}\exp(-\bi T),\frac{2L_{q}\xi\bii^{2}}{\ul \kappa})$. Also, when $q(v_{k},\th_{k})<a$, \begin{align*} q(v_{k+1},\th_{k+1}) & \le q(v_{k},\th_{k})+2\xi\bii L\kappa^{-1/2}\exp(-\bi T/2)q^{1/2}(v_{k},\th_{k})+L_{q}\xi^{2}\bii^{2}/2\\ & <a+2\xi\bii L\kappa^{-1/2}\exp(-\bi T/2)\sqrt{a}+L_{q}\xi^{2}\bii^{2}/2. \end{align*} Note that \begin{align*} 2\xi\bii L\kappa^{-1/2}\exp(-\bi T/2) & \le\frac{\xi \ul \kappa}{4}\sqrt{a},\\ L_{q}\xi^{2}\bii^{2}/2 & \le\frac{\xi \ul \kappa}{4}a. \end{align*} This gives that, in the case of $q(v_k,\th_k)<a$, \[ q(v_{k+1},\th_{k+1})<(1+\frac{\xi \ul \kappa}{4})a. \] Define $k_{0}$ as the first iteration such that $q(v_{k},\th_{k})<a$. This implies that, for any $k\le k_{0}$, \[ q(v_{k},\th_{k})\le(1-\frac{\xi}{4}\ul \kappa)^{k}q(v_{0},\th_{0}). \] For any $k>k_{0}$, we show that $q(v_{k},\th_{k})\le(1+\frac{\xi \ul \kappa}{4})a$. This can be proved by induction: if $q(v_{k},\th_{k})<a$, then by the bound above $q(v_{k+1},\th_{k+1})<(1+\frac{\xi \ul \kappa}{4})a$; otherwise $a\le q(v_{k},\th_{k})\le(1+\frac{\xi \ul \kappa}{4})a$, and the descent inequality gives $q(v_{k+1},\th_{k+1})\le q(v_{k},\th_{k})\le(1+\frac{\xi \ul \kappa}{4})a$. We thus conclude that for any $k>k_{0}$, $q(v_{k},\th_{k})\le(1+\frac{\xi \ul \kappa}{4})a$. Combining the results, we have \[ q(v_{k},\th_{k})\le(1-\frac{\xi}{4}\ul \kappa)^{k}q(v_{0},\th_{0})+\Delta, \] where we denote \begin{equation} \label{eq:proof_delta} \Delta=(1+\frac{\xi \ul \kappa}{4})(\frac{64\bii^{2}L^{2}}{\ul ^{2}\kappa^{3}}\exp(-\bi T)+\frac{2L_{q}\xi\bii^{2}}{\ul \kappa})+L_{q}\xi^{2}\bii^{2}/2=O(\exp(-\bi T)+\xi). \end{equation} Letting $\biv(\ul ,\kappa,\xi)=-\log(1-\frac{\xi}{4}\ul \kappa)$, we obtain the desired result. \subsubsection{Proof of Lemma \ref{lem:decay hdq}} By Lemmas \ref{lem:bound dq} and \ref{lem:decay q}, we have \begin{align*} ||\nabla q(v_{k},\th_{k})||^{2} & \le2\kappa^{-1}L_{q}^{2}q(v_{k},\th_{k})\\ & \le2\kappa^{-1}L_{q}^{2}\left[\exp(-\biv k)q(v_{0},\th_{0})+\Delta\right], \end{align*} where $\Delta$ is defined in (\ref{eq:proof_delta}).
Also notice that \begin{align*} ||\hdq(v,\th)|| & \le||\hdq(v,\th)-\nabla q(v,\th)||+||\nabla q(v,\th)||\\ & \le L||\th^{(T)}-\th^{*}(v)||+||\nabla q(v,\th)||\\ & \le2L\kappa^{-1/2}q^{1/2}(v,\th^{(T)})+||\nabla q(v,\th)||\\ & \le2L\kappa^{-1/2}\exp(-\bi T/2)q^{1/2}(v,\th)+||\nabla q(v,\th)||\\ & \le(2L\kappa^{-1}\exp(-\bi T/2)+1)||\nabla q(v,\th)||\\ & \le(2L\kappa^{-1}+1)||\nabla q(v,\th)||. \end{align*} Here the first inequality is by the triangle inequality, the second inequality is by Lemma \ref{lem:bound hat dq}, the third inequality is by Lemma \ref{lem:Lower bound}, the fourth inequality is by Lemma \ref{lem:opt th_T}, and the fifth inequality is by Assumption \ref{asm:PL-inequality}. Summing over iterations and using Lemma \ref{lem:decay q}, we have \begin{align*} \sum_{k=0}^{K-1}||\hdq(v_{k},\th_{k})||^{2} & \le(2L\kappa^{-1}+1)^{2}\sum_{k=0}^{K-1}||\nabla q(v_{k},\th_{k})||^{2}\\ & \le(2L\kappa^{-1}+1)^{2}\left[2\kappa^{-1}L_{q}^{2}q(v_{0},\th_{0})\sum_{k=0}^{K-1}\exp(-\biv k)+K\Delta\right]\\ & \le(2L\kappa^{-1}+1)^{2}\left[\frac{2\kappa^{-1}L_{q}^{2}q(v_{0},\th_{0})}{1-\exp(-\biv)}+K\Delta\right]\\ & =\frac{\bv q(v_{0},\th_{0})}{\xi}+K\bvi\Delta, \end{align*} where we define $\bv(L_{q},\ul ,\kappa)=\frac{16L_{q}^{2}}{\ul \kappa^{2}}(2L\kappa^{-1}+1)^{2}$ and $\bvi(\kappa,L)=(2L\kappa^{-1}+1)^{2}$. \subsubsection{Proof of Lemma \ref{lem:Convergence hk}} Recall that by our definition of $\lm$ in (\ref{eq:local solution}) and Assumption \ref{asm:Smoothness}, we have \begin{align*} f(v_{k+1},\th_{k+1})-f(v_{k},\th_{k}) & \le-\xi\left\langle \nabla f(v_{k},\th_{k}),\d(v_{k},\th_{k})\right\rangle +\frac{L\xi^{2}}{2}||\d(v_{k},\th_{k})||^{2}\\ & =-\xi\left\langle \d(v_{k},\th_{k})-\lm(v_{k},\th_{k})\hdq(v_{k},\th_{k}),\d(v_{k},\th_{k})\right\rangle +\frac{L\xi^{2}}{2}||\d(v_{k},\th_{k})||^{2}\\ & =-(\xi-\frac{L\xi^{2}}{2})||\d(v_{k},\th_{k})||^{2}+\xi\lm(v_{k},\th_{k})\left\langle \hdq(v_{k},\th_{k}),\d(v_{k},\th_{k})\right\rangle \\ & \le-(\xi-\frac{L\xi^{2}}{2})||\d(v_{k},\th_{k})||^{2}+\xi \ul \lm(v_{k},\th_{k})||\hdq(v_{k},\th_{k})||^{2}\\ & \le-\frac{\xi}{2}||\d(v_{k},\th_{k})||^{2}+\xi \ul \lm(v_{k},\th_{k})||\hdq(v_{k},\th_{k})||^{2}, \end{align*} where the last inequality is by the assumption $\xi\le1/L$. To show the second inequality, we use the complementary slackness of Problem (\ref{eq:proof_primal}), that is, \[ \lm(v_{k},\th_{k})\left[\left\langle \hdq(v_{k},\th_{k}),\d(v_{k},\th_{k})\right\rangle -\ul||\hdq(v_{k},\th_{k})||^{2}\right]=0. \] By telescoping, \begin{align*} \sum_{k=0}^{K-1}[f(v_{k+1},\th_{k+1})-f(v_{k},\th_{k})] & \le-\frac{\xi}{2}\sum_{k=0}^{K-1}||\d(v_{k},\th_{k})||^{2}+\xi \ul \sum_{k=0}^{K-1}\lm(v_{k},\th_{k})||\hdq(v_{k},\th_{k})||^{2}\\ & \le-\frac{\xi}{2}\sum_{k=0}^{K-1}||\d(v_{k},\th_{k})||^{2}+\xi \ul \sum_{k=0}^{K-1}(\ul ||\hdq(v_{k},\th_{k})||^{2}+M||\hdq(v_{k},\th_{k})||)\\ & =-\frac{\xi}{2}\sum_{k=0}^{K-1}||\d(v_{k},\th_{k})||^{2}+\xi \ul ^{2}\sum_{k=0}^{K-1}||\hdq(v_{k},\th_{k})||^{2}+\xi \ul M\sum_{k=0}^{K-1}||\hdq(v_{k},\th_{k})||\\ & \le-\frac{\xi}{2}\sum_{k=0}^{K-1}||\d(v_{k},\th_{k})||^{2}+\xi \ul ^{2}\sum_{k=0}^{K-1}||\hdq(v_{k},\th_{k})||^{2}+\xi \ul M\sqrt{K}\sqrt{\sum_{k=0}^{K-1}||\hdq(v_{k},\th_{k})||^{2}}, \end{align*} where the second inequality is by Lemma \ref{lem:bound lambda psi} and the last inequality is by H\"older's inequality.
Since $\sum_{k=0}^{K-1}[f(v_{k+1},\th_{k+1})-f(v_{k},\th_{k})]=f(v_{K},\th_{K})-f(v_{0},\th_{0})$, rearranging the terms, we have \[ \xi\sum_{k=0}^{K-1}||\d(v_{k},\th_{k})||^{2}\le2(f(v_{0},\th_{0})-f(v_{K},\th_{K}))+2\xi \ul ^{2}\sum_{k=0}^{K-1}||\hdq(v_{k},\th_{k})||^{2}+2\xi \ul M\sqrt{K}\sqrt{\sum_{k=0}^{K-1}||\hdq(v_{k},\th_{k})||^{2}}. \] This implies that \begin{align*} \xi\sum_{k=0}^{K-1}\left[||\d(v_{k},\th_{k})||^{2}+q(v_{k},\th_{k})\right] & \le2(f(v_{0},\th_{0})-f(v_{K},\th_{K}))+2\xi \ul ^{2}\sum_{k=0}^{K-1}||\hdq(v_{k},\th_{k})||^{2}\\ & +2\xi \ul M\sqrt{K}\sqrt{\sum_{k=0}^{K-1}||\hdq(v_{k},\th_{k})||^{2}}+\xi\sum_{k=0}^{K-1}q(v_{k},\th_{k}). \end{align*} Using Lemma \ref{lem:decay q}, we know that \[ q(v_{k},\th_{k})\le(1-\frac{\xi}{4}\ul \kappa)^{k}q(v_{0},\th_{0})+\Delta. \] This gives that \[ \xi\sum_{k=0}^{K-1}q(v_{k},\th_{k})\le\frac{4q(v_{0},\th_{0})}{\ul \kappa}+\xi K\Delta. \] Using Lemmas \ref{lem:decay hdq} and \ref{lem:decay q} and $\sqrt{x+y}\le\sqrt{x}+\sqrt{y}$, we have \begin{align*} 2\xi \ul ^{2}\sum_{k=0}^{K-1}||\hdq(v_{k},\th_{k})||^{2} & \le2\ul ^{2}\bv q(v_{0},\th_{0})+2K\ul ^{2}\xi\bvi\Delta,\\ 2\xi \ul M\sqrt{K}\sqrt{\sum_{k=0}^{K-1}||\hdq(v_{k},\th_{k})||^{2}} & \le2\xi^{1/2}K^{1/2}\bv^{1/2}\ul Mq^{1/2}(v_{0},\th_{0})+2K\xi\bvi^{1/2}\ul M\Delta^{1/2}. \end{align*} This implies that \begin{align*} & \xi\sum_{k=0}^{K-1}\left[||\d(v_{k},\th_{k})||^{2}+q(v_{k},\th_{k})\right]\\ \le & 2(f(v_{0},\th_{0})-f(v_{K},\th_{K}))+2\ul ^{2}\bv q(v_{0},\th_{0})+2K\ul ^{2}\xi\bvi\Delta+2\xi^{1/2}K^{1/2}\bv^{1/2}\ul Mq^{1/2}(v_{0},\th_{0})\\ + & 2K\xi\bvi^{1/2}\ul M\Delta^{1/2}+\frac{4q(v_{0},\th_{0})}{\ul \kappa}+\xi K\Delta\\ \le & 2(f(v_{0},\th_{0})-f(v_{K},\th_{K}))+(2\ul ^{2}\bv+\frac{4}{\ul \kappa})q(v_{0},\th_{0})+2K\xi(\bvi^{1/2}\ul M\Delta^{1/2}+(\bvi \ul ^{2}+1/2)\Delta)\\ + & 2\xi^{1/2}K^{1/2}\bv^{1/2}\ul Mq^{1/2}(v_{0},\th_{0}). \end{align*} We thus have \begin{align*} & \sum_{k=0}^{K-1}\left[||\d(v_{k},\th_{k})||^{2}+q(v_{k},\th_{k})\right]\\ = & O(\xi^{-1}+K\Delta^{1/2}+\xi^{-1/2}K^{1/2}q^{1/2}(v_{0},\th_{0}))\\ = & O(\xi^{-1}+K\exp(-\bi T/2)+K\xi^{1/2}+\xi^{-1/2}K^{1/2}q^{1/2}(v_{0},\th_{0})). \end{align*} \subsection{Proofs of Technical Lemmas} \subsubsection{Proof of Lemma \ref{lem:Lower bound}} Please see the proof of Theorem 2 in \citet{karimi2016linear}. \subsubsection{Proof of Lemma \ref{lem:bound hat dq}} Since $\nabla_{2}g(v,\th^{*}(v))=0$, we have $\nabla_{v}g(v,\th^{*}(v))=\nabla_{1}g(v,\th^{*}(v))+\nabla_{v}\th^{*}(v)\nabla_{2}g(v,\th^{*}(v))=\nabla_{1}g(v,\th^{*}(v))$. Thus \[ \nabla q(v,\th)=\left[\begin{array}{c} \nabla_{v}g(v,\th)-\nabla_{v}g(v,\th^{*}(v))\\ \nabla_{\th}g(v,\th) \end{array}\right]=\left[\begin{array}{c} \nabla_{v}g(v,\th)-\nabla_{1}g(v,\th^{*}(v))\\ \nabla_{\th}g(v,\th) \end{array}\right]. \] Also note that \[ \hat{\nabla}q(v,\th)=\left[\begin{array}{c} \nabla_{v}g(v,\th)-\nabla_{1}g(v,\th^{(T)})\\ \nabla_{\th}g(v,\th) \end{array}\right]. \] This gives that \begin{align*} ||\nabla q(v,\th)-\hdq(v,\th)|| & =||\nabla_{1}g(v,\th^{(T)})-\nabla_{1}g(v,\th^{*}(v))||\\ & \le L||\th^{(T)}-\th^{*}(v)||. \end{align*} Also, when $0=||\hat{\nabla}q(v,\th)||=\sqrt{||\nabla_{v}g(v,\th)-\nabla_{1}g(v,\th^{(T)})||^{2}+||\nabla_{\th}g(v,\th)||^{2}}$, we have $||\nabla_{\th}g(v,\th)||=0$. Under Assumption \ref{asm:PL-inequality}, \[ 0=||\nabla_{\th}g(v,\th)||^{2}\ge\kappa(g(v,\th)-g(v,\th^{*}(v)))=\kappa q(v,\th).
\] Since $q\ge0$, this gives $q(v,\th)=0$. \subsubsection{Proof of Lemma \ref{lem:th implicit lipschitz}} Using Assumption \ref{asm:PL-inequality} and $\nabla_{2}g(v_{1},\th^{*}(v_{1}))=0$, we have \[ ||\nabla_{2}g(v_{1},\th^{*}(v_{2}))||\ge\sqrt{\kappa(g(v_{1},\th^{*}(v_{2}))-g(v_{1},\th^{*}(v_{1})))}. \] Also, by Lemma \ref{lem:Lower bound}, we have $g(v_{1},\th^{*}(v_{2}))-g(v_{1},\th^{*}(v_{1}))\ge\frac{1}{4}\kappa||\th^{*}(v_{2})-\th^{*}(v_{1})||^{2}$. These imply that \[ ||\nabla_{2}g(v_{1},\th^{*}(v_{2}))||\ge\frac{1}{2}\kappa||\th^{*}(v_{2})-\th^{*}(v_{1})||. \] Also, \begin{align*} & ||\nabla_{2}g(v_{1},\th^{*}(v_{2}))||\\ = & ||\nabla_{2}g(v_{1},\th^{*}(v_{2}))-\nabla_{\th}g(v_{2},\th^{*}(v_{2}))||\\ = & ||\nabla_{2}[g(v_{1},\th^{*}(v_{2}))-g(v_{2},\th^{*}(v_{2}))]||\\ \le & ||\nabla_{[1,2]}[g(v_{1},\th^{*}(v_{2}))-g(v_{2},\th^{*}(v_{2}))]||\\ \le & L||v_{1}-v_{2}||, \end{align*} where $\nabla_{[1,2]}$ denotes taking the derivative w.r.t. both the first and second variables. We thus conclude that \[ ||\th^{*}(v_{2})-\th^{*}(v_{1})||\le\frac{2L}{\kappa}||v_{1}-v_{2}||. \] \subsubsection{Proof of Lemma \ref{lem:q smoothness}} To prove the first property, note that \begin{align*} ||\nabla_{\th}q(v,\th_{1})-\nabla_{\th}q(v,\th_{2})|| & =||\nabla_{\th}g(v,\th_{1})-\nabla_{\th}g(v,\th_{2})||\\ & \le L||\th_{1}-\th_{2}||. \end{align*} Also, \begin{align*} \left\Vert \nabla q(v_{1},\th_{1})-\nabla q(v_{2},\th_{2})\right\Vert & =\left\Vert \nabla g(v_{1},\th_{1})-\nabla g(v_{2},\th_{2})-\nabla g(v_{1},\th^{*}(v_{1}))+\nabla g(v_{2},\th^{*}(v_{2}))\right\Vert \\ & \le\left\Vert \nabla g(v_{1},\th_{1})-\nabla g(v_{2},\th_{2})\right\Vert +\left\Vert \nabla_{1}g(v_{1},\th^{*}(v_{1}))-\nabla_{1}g(v_{2},\th^{*}(v_{2}))\right\Vert . \end{align*} By Assumption \ref{asm:Smoothness} (Lipschitz continuity of $\nabla g$), \begin{align*} ||\nabla_{1}g(v_{1},\th^{*}(v_{1}))-\nabla_{1}g(v_{2},\th^{*}(v_{2}))|| & \le||\nabla_{[1,2]}g(v_{1},\th^{*}(v_{1}))-\nabla_{[1,2]}g(v_{2},\th^{*}(v_{2}))||\\ & \le L\sqrt{||\th^{*}(v_{1})-\th^{*}(v_{2})||^{2}+||v_{1}-v_{2}||^{2}}, \end{align*} where $\nabla_{[1,2]}$ denotes taking the derivative w.r.t. both the first and second variables. Also, by Lemma \ref{lem:th implicit lipschitz}, \begin{align*} L\sqrt{||\th^{*}(v_{1})-\th^{*}(v_{2})||^{2}+||v_{1}-v_{2}||^{2}} & \le L\sqrt{\frac{4L^{2}}{\kappa^{2}}||v_{1}-v_{2}||^{2}+||v_{1}-v_{2}||^{2}}\\ & \le L(\frac{2L}{\kappa}+1)||v_{1}-v_{2}||. \end{align*} This gives that \begin{align*} \left\Vert \nabla q(v_{1},\th_{1})-\nabla q(v_{2},\th_{2})\right\Vert & \le\left\Vert \nabla g(v_{1},\th_{1})-\nabla g(v_{2},\th_{2})\right\Vert +\left\Vert \nabla_{1}g(v_{1},\th^{*}(v_{1}))-\nabla_{1}g(v_{2},\th^{*}(v_{2}))\right\Vert \\ & \le L\sqrt{||v_{1}-v_{2}||^{2}+||\th_{1}-\th_{2}||^{2}}+\left\Vert \nabla_{1}g(v_{1},\th^{*}(v_{1}))-\nabla_{1}g(v_{2},\th^{*}(v_{2}))\right\Vert \\ & \le L\sqrt{||v_{1}-v_{2}||^{2}+||\th_{1}-\th_{2}||^{2}}+L(\frac{2L}{\kappa}+1)||v_{1}-v_{2}||\\ & \le L_{q}\sqrt{||v_{1}-v_{2}||^{2}+||\th_{1}-\th_{2}||^{2}}, \end{align*} where $L_{q}:=2L(L/\kappa+1)$. \subsubsection{Proof of Lemma \ref{lem:opt th_T}} By Lemma \ref{lem:q smoothness}, we have \[ q(v,\th^{(t+1)})-q(v,\th^{(t)})\le-(\alpha-\frac{L\alpha^{2}}{2})||\nabla_{\th}q(v,\th^{(t)})||^{2}. \] By Assumption \ref{asm:PL-inequality}, we have \[ ||\nabla_{\th}q(v,\th^{(t)})||^{2}=||\nabla_{2}g(v,\th^{(t)})||^{2}\ge\kappa(g(v,\th^{(t)})-g(v,\th^{*}(v)))=\kappa q(v,\th^{(t)}). \] Plugging in, we have \[ q(v,\th^{(t+1)})\le(1-(\alpha-\frac{L\alpha^{2}}{2})\kappa)q(v,\th^{(t)}).
\] Applying this inequality recursively, we have \[ q(v,\th^{(t)})\le(1-(\alpha-\frac{L\alpha^{2}}{2})\kappa)^{t}q(v,\th). \] Letting $\bi(\alpha,L,\kappa)=-\log(1-(\alpha-L\alpha^{2}/2)\kappa)$, we obtain the desired result. \subsubsection{Proof of Lemma \ref{lem:bound d}} Notice that $||\nabla q(v,\th)||\le||\nabla g(v,\th)||+||\nabla g(v,\th^{*}(v))||\le2M$ and $||\hdq(v,\th)||\le||\nabla_{v}g(v,\th)||+||\nabla_{1}g(v,\th^{(T)})||+||\nabla_{\th}g(v,\th)||\le3M$. When $||\hdq||=0$, $||\d||=||\nabla f||\le M$. When $||\hdq||>0$, \begin{align*} ||\d|| & =||[\ul||\hdq||^{2}-\left\langle \nabla f,\hdq\right\rangle ]_{+}/||\hdq||^{2}\,\hdq+\nabla f||\\ & \le\ul||\hdq||+2||\nabla f||\le(2+3\ul)M. \end{align*} All three quantities are therefore bounded by $\bii(M,\ul)=3(1+\ul)M$. \subsubsection{Proof of Lemma \ref{lem:bound lambda psi}} In the case that $\left\langle \nabla f,\hdq\right\rangle <\ul ||\hdq||^{2}$, $\lm||\hdq||^{2}=\ul ||\hdq||^{2}-\left\langle \nabla f,\hdq\right\rangle$; in the other case, $\lm||\hdq||^{2}=0$. Thus in all cases, \begin{align*} \lm||\hdq||^{2} & \le \ul ||\hdq||^{2}+||\nabla f||\ ||\hdq||\\ & \le \ul ||\hdq||^{2}+M||\hdq||. \end{align*} \subsubsection{Proof of Lemma \ref{lem:bound dq}} Notice that since $\nabla q(v,\th^{*}(v))=0$, we have \[ ||\nabla q(v,\th)||=||\nabla q(v,\th)-\nabla q(v,\th^{*}(v))||\le L_{q}||\th-\th^{*}(v)||\le2\kappa^{-1/2}L_{q}q^{1/2}(v,\th), \] where the first inequality is by Lemma \ref{lem:q smoothness} and the second inequality is by Lemma \ref{lem:Lower bound}. \section{Proof of the Result in Section \ref{sec:kl_theory}} We use $b$ with subscripts to denote generic $O(1)$ constants and refer the reader to Section~\ref{misc:constant} for their detailed values. For notational simplicity, given $v$ and $\th$, $\th^{(T)}$ denotes the result of $T$ gradient descent steps on $g(v,\cdot)$ w.r.t. $\th$, starting from $\th$ with step size $\alpha$ (similar to the definition in (\ref{equ:thetaTT})). Note that $\hdq(v,\th)=\nabla g(v,\th)-\left[\nabla_{1}^{\top}g(v,\th^{(T)}),\textbf{0}^{\top}\right]^{\top}$, where $\textbf{0}$ denotes a zero vector with the same dimension as $\th$. We refer readers to the beginning of Appendix \ref{apx:pl_theory} for a discussion of the design of this extra notation and how it relates to the notation used in Section \ref{sec:method}. For simplicity, we omit the superscript $\diamond$ in $q^{\diamond}$ and simply write $q$ for $q^{\diamond}$ in this proof. We start with the following two lemmas. \begin{lemma} \label{lem:Lower bound local} Under Assumption \ref{asm:KL-inequality}, and assuming $\alpha\le 1/L$, for any $v,\th$ we have $g(v,\th)-g(v,\th^{\diamond}(v,\th))\ge\frac{\kappa}{4}||\th-\th^{\diamond}(v,\th)||^{2}$. \end{lemma} \begin{proof} It is easy to show that \[g(v,\th^{(t+1)})\le g(v,\th^{(t)})-(\alpha-\frac{L\alpha^{2}}{2})||\nabla_{\th}g(v,\th^{(t)})||^{2}\le g(v,\th^{(t)}). \] We thus have $g(v,\th^{\diamond}(v,\th))\le g(v,\th)$. The rest of the proof follows the proof of Theorem 2 in \citet{karimi2016linear}. \end{proof} \begin{lemma} \label{lem:implicit smooth 2} Under Assumptions \ref{asm:Smoothness} and \ref{asm:KL-inequality}, $||\th^{\diamond}(v_{2},\th)-\th^{\diamond}(v_{1},\th)||\le\frac{4L}{\kappa}||v_{1}-v_{2}||$ for any $v_{1},v_{2}$. \end{lemma} \begin{proof} Since $\nabla q(v_{2},\th^{\diamond}(v_{2},\th))=0$, we have \[ ||\nabla q(v_{1},\th^{\diamond}(v_{2},\th))-\nabla q(v_{2},\th^{\diamond}(v_{2},\th))||=||\nabla q(v_{1},\th^{\diamond}(v_{2},\th))||.
\] By Assumption \ref{asm:KL-inequality}, we have $||\nabla q(v_{1},\th^{\diamond}(v_{2},\th))||\ge\sqrt{\kappa(g(v_{1},\th^{\diamond}(v_{2},\th))-g(v_{1},\th^{\diamond}(v_{1},\th)))}$. And by Lemma \ref{lem:Lower bound local}, we have \[ g(v_{1},\th^{\diamond}(v_{2},\th))-g(v_{1},\th^{\diamond}(v_{1},\th))\ge\frac{\kappa}{4}||\th^{\diamond}(v_{2},\th)-\th^{\diamond}(v_{1},\th)||^{2}. \] Combining all the bounds gives \[ 2L||v_{1}-v_{2}||\ge||\nabla q(v_{1},\th^{\diamond}(v_{2},\th))-\nabla q(v_{2},\th^{\diamond}(v_{2},\th))||=||\nabla q(v_{1},\th^{\diamond}(v_{2},\th))||\ge\frac{\kappa}{2}||\th^{\diamond}(v_{2},\th)-\th^{\diamond}(v_{1},\th)||. \] This implies that $||\th^{\diamond}(v_{2},\th)-\th^{\diamond}(v_{1},\th)||\le\frac{4L}{\kappa}||v_{1}-v_{2}||.$ \end{proof} Now we proceed to the proof of Theorem \ref{thm:nonconvex_converge}. Note that \begin{align*} q(v_{k+1},\th_{k+1})-q(v_{k},\th_{k}) & =[g(v_{k+1},\th_{k+1})-g(v_{k+1},\th^{\diamond}(v_{k+1},\th_{k+1}))]-[g(v_{k},\th_{k})-g(v_{k},\th^{\diamond}(v_{k},\th_{k}))]\\ & =[g(v_{k+1},\th_{k+1})-g(v_{k+1},\th^{\diamond}(v_{k+1},\th_{k}))]-[g(v_{k},\th_{k})-g(v_{k},\th^{\diamond}(v_{k},\th_{k}))]\\ & +[g(v_{k+1},\th^{\diamond}(v_{k+1},\th_{k}))-g(v_{k+1},\th^{\diamond}(v_{k+1},\th_{k+1}))]\\ & =[g(v_{k+1},\th_{k+1})-g(v_{k},\th_{k})]-[g(v_{k+1},\th^{\diamond}(v_{k+1},\th_{k}))-g(v_{k},\th^{\diamond}(v_{k},\th_{k}))]\\ & +[g(v_{k+1},\th^{\diamond}(v_{k+1},\th_{k}))-g(v_{k+1},\th^{\diamond}(v_{k+1},\th_{k+1}))]. \end{align*} Note that \begin{align*} g(v_{k+1},\th_{k+1})-g(v_{k},\th_{k}) & \le-\xi\left\langle \nabla g(v_{k},\th_{k}),\d(v_{k},\th_{k})\right\rangle +\frac{L\xi^{2}}{2}||\d(v_{k},\th_{k})||^{2}\\ -[g(v_{k+1},\th^{\diamond}(v_{k+1},\th_{k}))-g(v_{k},\th^{\diamond}(v_{k},\th_{k}))] & \le\left\langle \nabla_{[1,2]} g(v_{k},\th^{\diamond}(v_{k},\th_{k})),[v_{k+1},\th^{\diamond}(v_{k+1},\th_{k})]-[v_{k},\th^{\diamond}(v_{k},\th_{k})]\right\rangle \\ & +\frac{L}{2}||[v_{k+1},\th^{\diamond}(v_{k+1},\th_{k})]-[v_{k},\th^{\diamond}(v_{k},\th_{k})]||^{2}. \end{align*} Notice that, as $\nabla_{2}g(v_{k},\th^{\diamond}(v_{k},\th_{k}))=0$, \[\left\langle \nabla_{[1,2]} g(v_{k},\th^{\diamond}(v_{k},\th_{k})),[v_{k+1},\th^{\diamond}(v_{k+1},\th_{k})]-[v_{k},\th^{\diamond}(v_{k},\th_{k})]\right\rangle =\xi\left\langle \nabla_{[1,2]} g(v_{k},\th^{\diamond}(v_{k},\th_{k})),\d(v_{k},\th_{k})\right\rangle. \] Also, using Lemma \ref{lem:implicit smooth 2}, we have \[ ||\th^{\diamond}(v_{k+1},\th_{k})-\th^{\diamond}(v_{k},\th_{k})||\le\frac{4L}{\kappa}||v_{k+1}-v_{k}||. \] This implies that \[ ||[v_{k+1},\th^{\diamond}(v_{k+1},\th_{k})]-[v_{k},\th^{\diamond}(v_{k},\th_{k})]||^{2}\le(\frac{16L^{2}}{\kappa^{2}}+1)||v_{k+1}-v_{k}||^{2}\le(\frac{16L^{2}}{\kappa^{2}}+1)\xi^{2}||\d(v_{k},\th_{k})||^{2}.
\] We thus have \[ q(v_{k+1},\th_{k+1})-q(v_{k},\th_{k})\le-\xi\left\langle \nabla q(v_{k},\th_{k}),\d(v_{k},\th_{k})\right\rangle +L_{q}\xi^{2}||\d(v_{k},\th_{k})||^{2}/2+\chi_{k}, \] where we define $L_{q}=(\frac{16L^{2}}{\kappa^{2}}+2)$ and $\chi_{k}=g(v_{k+1},\th^{\diamond}(v_{k+1},\th_{k}))-g(v_{k+1},\th^{\diamond}(v_{k+1},\th_{k+1}))$. Using the same argument as in the proofs of Lemmas \ref{lem:decay q} and \ref{lem:decay hdq}, we have \begin{align*} & q(v_{k+1},\th_{k+1})-q(v_{k},\th_{k})\\ \le & -\xi \ul ||\nabla q(v_{k},\th_{k})||^{2}+12\xi \ul L_{q}^{2}\kappa^{-1}\exp(-\bi T)q(v_{k},\th_{k})\\ + & 2\xi\bii L\kappa^{-1/2}\exp(-\bi T/2)q^{1/2}(v_{k},\th_{k})+L_{q}\xi^{2}\bii^{2}/2+\chi_{k}\\ \le & -\xi \ul ||\nabla q(v_{k},\th_{k})||^{2}+12\xi \ul L_{q}^{2}\kappa^{-2}\exp(-\bi T)||\nabla q(v_{k},\th_{k})||^{2}\\ + & 2\xi\bii L\kappa^{-1}\exp(-\bi T/2)||\nabla q(v_{k},\th_{k})||+L_{q}\xi^{2}\bii^{2}/2+\chi_{k}. \end{align*} Here the second inequality is by Assumption \ref{asm:KL-inequality}. Choosing $T$ such that $T\ge\bviii(\ul ,\alpha,\kappa,L)$, where \[ \bviii(\ul ,\alpha,\kappa,L)=\left\lceil -b_{1}^{-1}\log(\frac{\kappa^{2}}{48\ul L_{q}^{2}})\right\rceil, \] we have \[ q(v_{k+1},\th_{k+1})-q(v_{k},\th_{k})\le-\frac{3}{4}\xi \ul ||\nabla q(v_{k},\th_{k})||^{2}+2\xi\bii L\kappa^{-1}\exp(-\bi T/2)||\nabla q(v_{k},\th_{k})||+L_{q}\xi^{2}\bii^{2}/2+\chi_{k}. \] Using Young's inequality, for any $x>0$, \[ \exp(-\bi T/2)||\nabla q(v_{k},\th_{k})||\le x\exp(-\bi T)+\frac{1}{x}||\nabla q(v_{k},\th_{k})||^{2}. \] Choosing $x=\frac{4L\bii}{\ul \kappa}$, we have \[ q(v_{k+1},\th_{k+1})-q(v_{k},\th_{k})\le-\frac{1}{4}\xi \ul ||\nabla q(v_{k},\th_{k})||^{2}+\Delta+\chi_{k}, \] where we denote $\Delta=\xi\frac{8L^{2}\bii^{2}}{\ul \kappa^{2}}\exp(-\bi T)+\frac{1}{2}L_{q}\xi^{2}\bii^{2}$. This gives that \[ \frac{1}{4}\xi \ul \sum_{k=0}^{K-1}||\nabla q(v_{k},\th_{k})||^{2}\le q(v_{0},\th_{0})-q(v_{K},\th_{K})+K\Delta+\sum_{k=0}^{K-1}\chi_{k}. \] Using the same argument as in the proof of Lemma \ref{lem:decay hdq}, \[ ||\hdq(v,\th)||\le(2L\kappa^{-1}+1)||\nabla q(v,\th)||. \] We hence have \begin{align*} \sum_{k=0}^{K-1}||\hdq(v_{k},\th_{k})||^{2} & \le(2L\kappa^{-1}+1)^{2}\sum_{k=0}^{K-1}||\nabla q(v_{k},\th_{k})||^{2}\\ & \le\frac{4(2L\kappa^{-1}+1)^{2}}{\xi \ul }(q(v_{0},\th_{0})-q(v_{K},\th_{K})+K\Delta+\sum_{k=0}^{K-1}\chi_{k}). \end{align*} Similar to the proof of Lemma \ref{lem:Convergence hk}, \[ \sum_{k=0}^{K-1}||\d(v_{k},\th_{k})||^{2}\le\frac{2(f(v_{0},\th_{0})-f(v_{K},\th_{K}))}{\xi}+2\ul ^{2}\sum_{k=0}^{K-1}||\hdq(v_{k},\th_{k})||^{2}+2\ul M\sqrt{K}\sqrt{\sum_{k=0}^{K-1}||\hdq(v_{k},\th_{k})||^{2}}. \] Using $\sqrt{x+y}\le\sqrt{x}+\sqrt{y}$, we have \begin{align*} 2\ul ^{2}\sum_{k=0}^{K-1}||\hdq(v_{k},\th_{k})||^{2} & \le\frac{8\ul (2L\kappa^{-1}+1)^{2}}{\xi }(q(v_{0},\th_{0})-q(v_{K},\th_{K})+K\Delta+\sum_{k=0}^{K-1}\chi_{k}),\\ 2\ul M\sqrt{K}\sqrt{\sum_{k=0}^{K-1}||\hdq(v_{k},\th_{k})||^{2}} & \le\sqrt{K}\frac{4\ul^{1/2} M(2L\kappa^{-1}+1)}{\xi^{1/2}}(\sqrt{q(v_{0},\th_{0})-q(v_{K},\th_{K})}+K^{1/2}\Delta^{1/2}+\sqrt{\left[\sum_{k=0}^{K-1}\chi_{k}\right]_{+}}).
\end{align*} Also notice that, by Assumption \ref{asm:KL-inequality}, \begin{align*} \sum_{k=0}^{K-1}q(v_{k},\th_{k}) & \le\sum_{k=0}^{K-1}\frac{1}{\kappa}||\nabla q(v_{k},\th_{k})||^{2}\\ & \le\frac{4}{\ul \kappa\xi}(q(v_{0},\th_{0})-q(v_{K},\th_{K})+K\Delta+\sum_{k=0}^{K-1}\chi_{k}). \end{align*} We hence have \begin{align*} \sum_{k=0}^{K-1}(||\d(v_{k},\th_{k})||^{2}+q(v_{k},\th_{k})) & =O\left(\frac{1}{\xi}+\frac{K\Delta}{\xi}+\frac{K^{1/2}}{\xi^{1/2}}+\frac{K\Delta^{1/2}}{\xi^{1/2}}+K^{1/2}\left(\left[\sum_{k=0}^{K-1}\chi_{k}\right]_+\right)^{1/2}\right)\\ & =O\left(\frac{1}{\xi}+K\exp(-\bi T/2)+K\xi^{1/2}+\frac{K^{1/2}}{\xi^{1/2}}+\left(K\left[\sum_{k=0}^{K-1}\chi_{k}\right]_+\right)^{1/2}\right). \end{align*} Using the same argument as in the proof of Theorem \ref{thm:Convergence k}, when $T\ge\left\lceil -b_{1}^{-1}\log(\frac{1}{16}\kappa^2 L^{-2})\right\rceil $, \[ \K^{\diamond}(v,\th)\le2||\d(v,\th)||^{2}+q(v,\th)+8L^{2}\exp(-\bi T)\kappa^{-2}(\ul +2)^{2}\bii^{2}. \] This implies that \begin{align*} \min_{k}\K^{\diamond}(v_{k},\th_{k}) & \le\frac{1}{K}\sum_{k=0}^{K-1}[2||\d(v,\th)||^{2}+q(v,\th)]+8L^{2}\exp(-\bi T)\kappa^{-2}(\ul +2)^{2}\bii^{2}\\ & =O\left(\frac{1}{\xi K}+\exp(-\bi T/2)+\xi^{1/2}+\frac{1}{\xi^{1/2}K^{1/2}}+\left(\left[\frac{1}{K}\sum_{k=0}^{K-1}\chi_{k}\right]_+\right)^{1/2}\right). \end{align*} Now we proceed to bound $\frac{1}{K}\sum_{k=0}^{K-1}\chi_{k}$. Notice that \begin{align*} \chi_{k} & =g(v_{k+1},\th^{\diamond}(v_{k+1},\th_{k}))-g(v_{k+1},\th^{\diamond}(v_{k+1},\th_{k+1}))\\ & =g(v_{k+1},\th^{\diamond}(v_{k+1},\th_{k}))-g(v_{k},\th^{\diamond}(v_{k},\th_{k}))+g(v_{k},\th^{\diamond}(v_{k},\th_{k}))-g(v_{k+1},\th^{\diamond}(v_{k+1},\th_{k+1})). \end{align*} Notice that, using Assumption \ref{asm:Smoothness} and Lemma \ref{lem:implicit smooth 2}, \begin{align*} g(v_{k+1},\th^{\diamond}(v_{k+1},\th_{k}))-g(v_{k},\th^{\diamond}(v_{k},\th_{k})) & \le L||[v_{k+1},\th^{\diamond}(v_{k+1},\th_{k})]-[v_{k},\th^{\diamond}(v_{k},\th_{k})]||\\ & \le L(||v_{k+1}-v_{k}||+||\th^{\diamond}(v_{k+1},\th_{k})-\th^{\diamond}(v_{k},\th_{k})||)\\ & \le L(1+\frac{4L}{\kappa})||v_{k+1}-v_{k}||\\ & \le L(1+\frac{4L}{\kappa})\xi||\d(v_{k},\th_{k})||. \end{align*} Note that, by the same procedure as in the proof of Lemma \ref{lem:bound d}, $||\d(v_{k},\th_{k})||\le\bii$. We thus conclude that \begin{align*} \sum_{k=0}^{K-1}\chi_{k} & \le\sum_{k=0}^{K-1}[g(v_{k},\th^{\diamond}(v_{k},\th_{k}))-g(v_{k+1},\th^{\diamond}(v_{k+1},\th_{k+1}))]\\ & +L(1+\frac{4L}{\kappa})\xi\sum_{k=0}^{K-1}||\d(v_{k},\th_{k})||\\ & \le\sum_{k=0}^{K-1}[g(v_{k},\th^{\diamond}(v_{k},\th_{k}))-g(v_{k+1},\th^{\diamond}(v_{k+1},\th_{k+1}))]+L(1+\frac{4L}{\kappa})\bii\xi K\\ & =g(v_{0},\th^{\diamond}(v_{0},\th_{0}))-g(v_{K},\th^{\diamond}(v_{K},\th_{K}))+L(1+\frac{4L}{\kappa})\bii\xi K. \end{align*} We thus have $\frac{1}{K}\sum_{k=0}^{K-1}\chi_{k}=O(\frac{1}{K}+\xi).$ \section{List of absolute constants used in the proofs} \label{misc:constant} Here we summarize the absolute constants used in the proofs.
\begin{align*} b_{1}(\alpha,L,\kappa) & =-\log(1-(\alpha-L\alpha^{2}/2)\kappa)\\ \bii(M,\ul ) & =3(1+\ul )M\\ \biii(\ul ,\alpha,\kappa,L) & =\left\lceil -b_{1}^{-1}\log(\frac{\ul \kappa}{64\ul L_{q}^{2}})\right\rceil \\ \biv(\ul ,\kappa,\xi) & =-\log(1-\frac{\xi}{4}\ul \kappa)\\ \bv(L_{q},\ul ,\kappa) & =\frac{16L_{q}^{2}}{\ul \kappa^{2}}(2L\kappa^{-1}+1)^{2}\\ \bvi(\kappa,L) & =(2L\kappa^{-1}+1)^{2}\\ \bviii(\ul ,\alpha,\kappa,L) & =\left\lceil -b_{1}^{-1}\log(\frac{\kappa^{2}}{48\ul L_{q}^{2}})\right\rceil \end{align*} \section{Experiment Details} \label{sec:apx-exp} We provide details about each experiment in this section. Regarding the implementation of baseline methods: \begin{itemize} \item BVFSM's implementation is adapted from \url{https://github.com/vis-opt-group/BVFSM}. \item Penalty's implementation is adapted from \url{https://github.com/jihunhamm/bilevel-penalty}. \item VRBO's implementation is adapted from \url{https://github.com/JunjieYang97/MRVRBO}. \item AID-CG and AID-FP implementations are adapted from \url{https://github.com/prolearner/hypertorch}. \item The ITD implementation is adapted from \url{https://github.com/JunjieYang97/stocBiO}. \end{itemize} \subsection{Toy Coreset Problem} \label{sec:apx-exp-coreset} The problem is: \begin{equation*} \begin{split} \min_{v, \theta} \norm{\theta - x_0}^2 ~~~~s.t.~~~~ \theta \in \argmin_\theta \norm{\theta - X\sigma(v)}^2, \end{split} \label{eq:apx-toy-coreset} \end{equation*} where $\sigma(v) = \exp(v)/\sum_{i=1}^4 \exp(v_i)$ is the softmax function, $v \in \mathbb{R}^4, \theta\in \mathbb{R}^2$, and $X= [x_1, x_2, x_3, x_4] \in \mathbb{R}^{2\times 4}$. Here the outer objective $f$ pushes $\theta$ towards $x_0$, while the inner objective $g$ ensures that $\theta$ remains in the convex hull formed by the 4 points $x_1,\ldots,x_4$ in the 2D plane (the columns of $X$). We choose $x_0 = (3, -2)$ and the four points $x_1 = (1,3)$, $x_2 = (3, 1)$, $x_3 = (-2, 2)$ and $x_4 = (-3, 2)$. We set $v_0 = (0, 0, 0, 0)$ and $\th_0 \in \{(0, 3), (-3, 1), (3.5, 1)\}$. For all methods, we fix both the inner stepsize $\alpha$ and the outer stepsize $\xi$ to $0.05$ and set $T=10$. For BVFSM and Penalty, we grid search the best hyperparameters from $\{0.001, 0.01, 0.1\}$. For BOME, we choose $\phi = \eta \norm{\nabla \hat{q}}^2$ and ablate over $\eta \in \{0.1, 0.5, 0.9\}$ and $T \in \{1, 10, 100\}$. The optimization trajectories from the 3 initial points are plotted in Fig.~\ref{fig:toy_convergence}. \begin{figure*}[h!] \centering \includegraphics[width=\textwidth]{figures/toy_convergence.png} \vspace{-12pt} \caption{Trajectories of $(v_k, \theta_k)$ on the toy coreset problem \eqref{eq:toy-coreset} obtained from BOME (\textcolor{BOME}{blue}) and three recent first-order bilevel methods: BSG-1~\citep{giovannelli2021bilevel} (\textcolor{BSG1}{green}), BVFSM~\citep{liu2021valueseq} (\textcolor{BVFSM}{orange}), and Penalty~\citep{mehra2021penalty} (\textcolor{Penalty}{red}). The goal of the problem is to find the closest point (marked by \textcolor{magenta}{opt.}) to the \textcolor{BSG1}{goal} $x_0$ within the convex envelope of the four vertices. All methods start from 3 initial points (\textcolor{red}{start 1-3}), and the converged points are shown in \textcolor{BlueViolet}{darkblue}.
For BOME, we also plot the trajectory of $\{\hat{\theta}_k^T\}$ in \textcolor{cyan}{cyan}.} \label{fig:toy_convergence} \end{figure*} As shown, BOME successfully converges to the optimal solution regardless of the initial $\theta_0$, while the BSG-1, BVFSM, and Penalty methods converge to non-optimal points. We emphasize that for BVFSM and Penalty, the convergence point \emph{depends on} the choice of hyperparameters. \subsection{Toy Mini-max Game} \label{sec:apx-exp-minimax} The toy mini-max game we consider is: \begin{equation} \min_{v \in \mathbb{R}}~v\th^*(v) ~~~~s.t.~~~~ \th^*(v) = \argmax_{\th \in \mathbb{R}}~v\th. \label{eq:toy-minimax} \end{equation} For the BOME, BSG-1, BVFSM, and Penalty methods, we again set both the inner stepsize $\alpha$ and the outer stepsize $\xi$ to $0.05$, as no significant difference is observed by varying the stepsizes. For all methods, we set the inner iteration $T=10$. For BVFSM and Penalty, we grid search the best hyperparameters from $\{0.001, 0.01, 0.1\}$. \subsection{Without LLS assumption} \label{sec:apx-exp-lls} The toy example to validate whether BOME requires the low-level singleton assumption is borrowed from \citet{liu2020generic}: \begin{equation*} \begin{split} \min_{v\in \RR, \theta\in\RR^2} \norm{\theta - [v;1]}_2^2 ~~~~ s.t.~~~~ \th \in \argmin_{(\theta_1',\theta_2')\in \RR^2} (\theta_1' - v)^2, \end{split} \label{eq:toy-lls} \end{equation*} where $\th = (\th_1, \th_2)$ and the optimal solution is $v^*=1, \th^*=(1,1)$. Note that the inner objective has infinitely many optimal solutions $\theta^*(v)$ since it is degenerate. We set both the inner and outer stepsizes to $0.5$ and $T=10$ for all methods. For BVFSM and Penalty, we grid search the best hyperparameters from $\{0.001, 0.01, 0.1\}$. In Fig.~\ref{fig:toy_lls_full}, we provide the distances of $f(v_k, \theta_k)$, $g(v_k, \theta_k)$, $\theta_k$, and $v_k$ to their corresponding optimal values over training time in seconds. Note that BOME ensures that $\hat{q}(v_k, \theta_k) = g(v_k, \th_k) - g(v^*, \th^*)$ decreases to 0. \begin{figure*}[h!] \centering \includegraphics[width=\columnwidth]{figures/toy_lls_full.png} \vspace{-15pt} \caption{Results on Problem~\eqref{eq:toy-lls}, which violates the low-level singleton (LLS) assumption. We compare BOME against BSG-1, BVFSM, and Penalty. $(v^*,\theta^*)$ denotes the true optimum. The four plots show how fast $f(v_k,\theta_k)$, $g(v_k,\theta_k)$, $\theta_k$, and $v_k$ converge to the corresponding optimal values w.r.t. the training time in seconds. } \label{fig:toy_lls_full} \end{figure*} \subsection{Data Hyper-cleaning} \label{sec:apx-exp-hyperclean} The bilevel problem for data hyper-cleaning is \[ \min_{v, \theta} \ell^{\text{val}}(\theta), ~~~ \text{s.t.}~~~\theta = \argmin_\theta \ell^{\text{train}}(\theta, v) + c \norm{\theta}^2, \] where $\ell^{\text{val}}$ is the validation loss on $\mathcal D^{\text{val}}$, and $\ell^{\text{train}}$ is a weighted training loss: $\ell^{\text{train}} = \sum_{i=1}^m \sigma(v_i) \ell(x_i, y_i, \theta)$ with $\sigma(v)= \text{Clip}(v, [0,1])$ and $v \in \mathbb{R}^m$. The training data is of size $m=50000$, and hence the weights satisfy $\sigma(v) \in [0,1]^{50000}$. The validation data is of size $5000$. The model is a linear model $\theta = (W, b)$ with weight $W \in \mathbb{R}^{10\times 784}$ and bias $b \in \mathbb{R}^{10}$. For this problem, we set the inner stepsize $\alpha = 0.01$ for both the MNIST and FashionMNIST datasets for all methods, as larger or smaller $\alpha$ results in worse performance.
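To make the two objectives concrete, the following is a minimal NumPy sketch of the weighted inner loss and the validation outer loss (the cross-entropy helper, the array shapes, and the value of the regularization constant \texttt{c} are illustrative assumptions, not the exact choices of our implementation):
\begin{verbatim}
import numpy as np

def log_softmax(logits):
    # Numerically stable log-softmax over the class axis.
    z = logits - logits.max(axis=1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

def cross_entropy(X, y, W, b):
    # Per-example loss l(x_i, y_i, theta) of the linear model theta = (W, b).
    logp = log_softmax(X @ W.T + b)
    return -logp[np.arange(len(y)), y]

def inner_loss(v, W, b, X_tr, y_tr, c=1e-4):
    # Weighted training loss: sum_i sigma(v_i) * l_i + c * ||theta||^2,
    # with sigma(v) = Clip(v, [0, 1]) as in the text.
    w = np.clip(v, 0.0, 1.0)
    return (w * cross_entropy(X_tr, y_tr, W, b)).sum() \
        + c * ((W ** 2).sum() + (b ** 2).sum())

def outer_loss(W, b, X_val, y_val):
    # Unweighted validation loss l^val(theta).
    return cross_entropy(X_val, y_val, W, b).mean()
\end{verbatim}
After the outer optimization, samples whose labels were corrupted should receive weights $\sigma(v_i)$ close to 0, which is what the hyper-cleaning formulation is designed to achieve.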
As we observe that $v$'s gradient norm is much smaller than $\theta$'s in practice, we conduct a grid search over $\xi_v$ from $\{10.0, 50.0, 100.0, 500.0, 1000.0\}$, and also search over whether to apply momentum (from $\{0.0, 0.9\}$) for gradient descent, for all methods. For the BVFSM and Penalty methods, we also search for their best hyperparameters from $\{0.001, 0.01, 0.1, 1\}$. The model's initial parameter $\theta_0$ is initialized from a pretrained model learned only on the corrupted data. We split the dataset into $4$ parts: train set, validation set 1, validation set 2, and the test set. For each method, the model is learned on the train set, and the hyperparameter $v$ is tuned using validation set 1. The best hyperparameters of each algorithm (e.g. stepsize, barrier coefficient, etc.) are then chosen based on the best validation performance on validation set 2. Then we report the final performance of the model on the test set. To conduct the ablation on $\alpha$ for BOME, we search for $\alpha \in \{0.25\xi, 0.5 \xi, \xi, 2\xi\}$, where $\xi=0.01$ is the best stepsize we found for BOME. Results on the MNIST and FashionMNIST datasets are provided in Fig.~\ref{fig:ho_full}. In the first column of Fig.~\ref{fig:ho_full}, we compare BOME (with $T=1$ and $T=20$) against baseline methods whose $T$ is chosen based on the best performance on validation set 2. \textbf{Remark:} In Fig.~\ref{fig:ho} (top row), we do not include the performance of BSG-1, as we fail to find a set of hyperparameters for BSG-1 that makes it work on these data hyper-cleaning problems. VRBO's performance at convergence is tuned by hyperparameter search. However, we observe that VRBO learns slowly in practice, as it requires multiple steps of Hessian-vector products at each step. We notice that this is slightly inconsistent with the findings in the original paper~\citep{yang2021provably}. We adapt the code from \url{https://github.com/JunjieYang97/MRVRBO} and find the original implementation is also slow. It is possible that a good set of hyperparameters can result in better performance. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{figures/ho_full.pdf} \vspace{-15pt} \caption{Bilevel optimization for hyperparameter optimization. \textbf{Top:} data hyper-cleaning on the MNIST dataset. The solid black line is the performance of a model trained purely on the validation set, and the dashed black line is the performance of a model trained on the validation set and on the part of the training set that has correct labels. \textbf{Middle:} data hyper-cleaning on the FashionMNIST dataset. \textbf{Bottom:} learnable regularization on the 20 Newsgroup dataset. The solid black line indicates the model performance without any regularization. The results for each method are averaged over 5 independent runs.} \label{fig:ho_full} \end{figure*} \subsection{Learnable regularization} \label{sec:apx-exp-regularization} The bilevel optimization formulation of the learnable regularization problem is: \begin{align*} \min_{v,\theta} \ell^{\text{val}}(\theta) ~~~~s.t.~~~~\theta \in \argmin_{\theta'} \ell^{\text{train}} (\theta') + \norm{W_v \theta'}^2_2. \end{align*} We use a linear model whose parameter $\theta$ is a matrix ($\theta \in \mathbb{R}^{20\times 130107}$), and hence $v \in \mathbb{R}^{130107}$. For this experiment, the inner stepsize $\alpha$ of all methods is searched from $\{1, 10, 100, 1000\}$. The outer stepsize $\xi$ is searched from $\{0.5, 1, 5, 10, 50, 100, 500, 1000\}$.
For the BVFSM and Penalty methods, we also search for their best hyperparameters from $\{0.001, 0.01, 0.1, 1\}$. Similar to the data hyper-cleaning experiment, we split the dataset into $4$ parts: train set, validation set 1, validation set 2, and the test set. The initial model parameter $\theta_0$ is initialized from a pretrained model without any regularization (i.e. $v = 0$) to speed up the learning. To conduct the ablation on $\alpha$ for BOME, we search for $\alpha \in \{0.25\xi, 0.5 \xi, \xi, 2\xi\}$, where $\xi=100$ is the best stepsize we found for BOME. In the bottom left of Fig.~\ref{fig:ho_full}, we compare BOME (with $T=1$ and $T=20$) against baseline methods whose $T$ is chosen based on the best performance on validation set 2. \textbf{Remark:} In Fig.~\ref{fig:ho} (bottom row), we do not include the performance of VRBO, as we fail to find a set of hyperparameters for VRBO that works well on the learnable regularization experiment. \subsection{Continual Learning} \label{sec:apx-exp-cl} The continual learning (CL) experiment closely follows the setup of the contextual transformation network (CTN) from \citet{pham2020contextual}, which trains a deep neural network consisting of a quickly updated backbone network (parameterized by $\theta$) and a slowly updated controller network (parameterized by $v$). When training the $\tau$-th task, the update on $(v, \theta)$ is solved from a bilevel optimization: \begin{equation*} \begin{split} \min_{v, \theta} \ell_{1:\tau}^\text{val}\big(v, \th\big) ~~~~\text{s.t.}~~~~~\th \in \argmin_{\th'} \ell_{1:\tau}^\text{train}\big(v, \th'\big). \end{split} \end{equation*} More specifically, \begin{equation} \ell_{1:\tau}^\text{val}\big(v, \th\big) = L^\text{ctrl}\big(\{\th^*(v), v\}; \mathcal{M}^\text{sm}_{<\tau+1}\big),~~~~\text{and}~~~~ \ell_{1:\tau}^\text{train}\big(v, \th'\big) = L^\text{tr}\big(\{\th', v\}, D_\tau\cup \mathcal{M}^\text{em}_{<\tau}\big). \end{equation} Here, $\mathcal{M}^\text{sm}_\tau$ and $\mathcal{M}^\text{em}_\tau$ denote the semantic and episodic memory of task $\tau$ (e.g. they can be thought of as validation and training data, respectively) and $D_\tau$ is the training data of task $\tau$. Hence, the inner objective learns a backbone $\theta^*(v)$ that performs well on the training data, which consists of the current task data $D_\tau$ as well as the previous episodic memories $\mathcal{M}^\text{em}_{<\tau}$. Then, the outer objective encourages good generalization on the held-out validation data, which consists of the semantic memory $\mathcal{M}^\text{sm}_{<\tau+1}$. All hyperparameters of BOME are set to the same as those of CTN. We choose $\phi_k = \eta \norm{\nabla \hat{q}_k(v_k, \theta_k)}^2$, where $\eta = 2.0$ is chosen from $\{0.1, 0.5, 1.0, 2.0\}$.
\begin{table*}[h] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{lcccccc} \toprule \multirow{3}{*}{Method} & \multicolumn{3}{c}{PMNIST} & \multicolumn{3}{c}{Split CIFAR}\\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} & ACC~$(\uparrow)$ & NBT~$(\downarrow)$ & FT~$(\uparrow)$ & ACC~$(\uparrow)$ & NBT~$(\downarrow)$ & FT~$(\uparrow)$\tabularnewline \midrule Offline & $84.95\pm0.95$ & - & - & $74.11\pm0.66$ & - & -\tabularnewline [0.05cm] MER & $76.59\pm0.74$ & $6.88\pm0.59$ & $82.32\pm0.34$ & $60.32\pm0.86$ & $11.80\pm0.86$ & $69.23\pm0.40$\tabularnewline [0.05cm] GEM & $72.74\pm0.91$ & $9.45\pm0.95$ & $80.53\pm0.28$ & $61.33\pm1.16$ & $9.21\pm0.94$ & $69.37\pm0.72$\tabularnewline [0.05cm] ER-Ring & $72.11\pm0.46$ & $9.53\pm0.23$ & $80.06\pm0.37$ & $61.96\pm1.22$ & $8.48\pm1.54$ & $69.14\pm0.87$\tabularnewline \midrule CTN~(+ITD) & $78.40\pm0.28$ & $6.25\pm0.41$ & $84.02\pm0.29$ & $67.76\pm0.96$ & $5.76\pm0.69$ & $72.58\pm0.62$\tabularnewline [0.05cm] CTN~(+BVFSM) & $77.78\pm0.32$ & $7.86\pm0.32$ & $85.03\pm0.28$ & $67.04\pm0.76$ & $7.61\pm0.47$ & $\pmb{74.01}\pm0.57$ \tabularnewline [0.05cm] CTN~(+Penalty) & $67.74\pm0.42$ & $10.10\pm0.48$ & $77.28\pm0.61$ & $47.41\pm2.93$ & $9.34\pm2.74$ & $55.76\pm1.64$\tabularnewline [0.05cm] CTN~(+BOME) & $\pmb{80.70}\pm0.26$ & $\pmb{4.73}\pm0.23$ & $\pmb{84.79}\pm0.25$ & $\pmb{68.16}\pm0.60$ & $\pmb{5.32}\pm0.76$ & $72.88\pm0.48$\tabularnewline \bottomrule \end{tabular} } \caption{Results of continual learning as bilevel optimization. We compute the mean and standard error of each method's results over 5 independent runs. Best results are \textbf{bolded}. } \label{tab:cl-full} \end{table*} \section*{Societal Impacts} This paper proposes a simple first-order algorithm for bilevel optimization. Some specific instantiations of bilevel optimization, such as adversarial learning and data poisoning, might be harmful to real-world machine learning systems; as a general-purpose bilevel optimization algorithm, our method could serve as a tool in such processes. We also develop a substantial body of theoretical work and, to the best of our knowledge, we do not foresee any significant negative societal impact of our theoretical results.
\section{Introduction} Testing the theory of general relativity (GR) by observing the deflection of light by the Sun \citep{Dyson+1920} made GR famous and initiated the ``modern era'' of astronomy \citep{Johnson1983}. GR has been tested for over 100 years, especially in terms of gravitational light deflection, relativistic perihelion advance, and gravitational redshift. Complementary to light deflection is the Shapiro time delay \citep[see][]{Shapiro1964}. Celestial bodies in the solar system (which define the Barycentric Celestial Reference System, BCRS) act as gravitational sources and can play an important role in GR tests \citep{Ivanitskaia+1986, Soffel1989, Damour+1991, Damour+1992, Damour+1993, Damour+1994, Hees+2014, Will2014, Crosta+2017, Ni2017, Bernus+2019, Crosta2019}. One important parameter to be tested is the parameterized post-Newtonian (PPN) parameter $\gamma$, which measures the amount of curvature produced by a unit mass-energy and is equal to unity in GR \citep{Damour1989, Poisson-Will2014, Will2014}. The accuracy of $\gamma$ obtained by measurements has rapidly improved in recent years \citep[for more details see the review by][]{Will2015}. The highest accuracy of $\gamma$, $\sim 2\times10^{-5}$, was determined via data obtained with the Cassini spacecraft \citep{Shapiro1964,Bertotti+2003}. The latest accuracy of $\gamma$ obtained using the very long baseline interferometry (VLBI) technique has reached $\sim 9\times10^{-5}$ \citep{Titov+2018}. Planets, such as Jupiter, have also been used to test GR \citep[e.g.,][]{Fomalont-Kopeikin2003, Fomalont-Kopeikin2008, Abbas+2022, Li+2022}. Light deflection and the underlying theory (e.g., relativistic gravity, gravitational lensing theory, etc.) have become important tools for both astronomy and cosmology \citep{Will2015}. Measurements of $\gamma$ with high accuracy are intertwined with high-precision astrometry, and obtaining a high accuracy of $\gamma$ benefits from the rapid improvement of the relevant technologies and instruments. Ultra-precise ($\sim$ 1 $\mu$as or better) and ultra-sensitive \citep[e.g., several hundred nJy/beam @ 9.2 GHz for an 8 hr exposure with the Square Kilometre Array, SKA,][]{Bonaldi+2021} astrometry has been an immediate goal for the SKA \citep{Braun+2015}, the next-generation Very Large Array \citep[ngVLA,][]{Murphy+2018}, and their pathfinders \citep{Rioja-Dodson2020}, as well as for the Gaia mission from space \citep{Vecchiato+2003, Butkevich+2022}. Such improvements, together with other laboratory and space experiments (e.g., Solar System Odyssey, \citealt{Christophe+2009}; the series of Astrodynamical Space Test of Relativity using Optical Devices, ASTROD, \citealt{Ni1998, Ni2009}; Astrometric Science and Technology Roadmap for Astrophysics, ASTRA, \citealt{Gai+2020}; Transiting Exoplanet Survey Satellite mission, TESS, \citealt{Gai+2022}, etc.; see more experiments in \citealt{Ni2017}), have the potential to yield another 3--4 orders of magnitude of precision in testing GR \citep{Ni2017}. The development of the theoretical study of gravity is, in turn, vital to future ultra-precise and ultra-sensitive astrometry \citep{Sekido-Fukushima2006, Pertit-Luzum2010, Li+2022}.
High-precision astrometry has played a leading role in revealing the spiral structure of the Milky Way at both radio \citep[e.g.,][]{Xu+2006, Reid+2009arm, Xu+2016, Reid+2019} and optical \citep[e.g.,][]{Xu+2018raa, Xu+2018, Xu+2021, Poggio+2021} wavelengths, and in producing the third realization of the International Celestial Reference Frame \citep[ICRF3,][]{Charlot2020} and the Gaia DR2/(E)DR3 Celestial Reference Frame \citep[Gaia-CRF2/CRF3,][]{Gaia-Collaboration+2018,Gaia-Collaboration+2022b}. As astrometric precision steps into the regime of 10 $\mu$as or better (e.g., a record parallax precision of $\pm$ 3 $\mu$as was achieved by \citealt{Zhang+2013} using the Very Long Baseline Array, and the predicted parallax precision for Gaia data release 4 is 10 $\mu$as, \citealt{Gaia-Collaboration+2022}), the revealed structures of the Milky Way will be extended to distant regions beyond $\sim$ 10 kpc. High-precision astrometry is also helpful for studying the notable VLBI-Gaia positional offsets \citep[i.e., the offsets between measured optical and radio positions;][]{Mignard+2016, Kovalev+2017, Petrov+2017, Charlot2020}. High-precision astrometry can also shed light on other broad classes of sources and problems in astrophysics, e.g., providing electromagnetic counterparts of gravitational wave sources, monitoring the orbits of binaries and analyzing their physical parameters, investigating the evolution of galaxies, etc. \citep[see][]{Reid+2019baas}. High-precision astrometry suffers from the effects posed by the gravitational fields of objects in the solar system. For instance, the deflection angle predicted by GR for the planets, Pluto, and even large satellites exceeds several $\mu$as and can even reach a dozen mas \citep[see][]{Fienga+2011, Hees+2014a, Bertone+2014, Crosta+2015, Crosta+2017}. The critical angular distances between compact extragalactic sources (CESs, hereafter) and solar system objects (e.g., planets and the Moon), at which the deflection angles are still $\geq$ 1 $\mu$as, are larger than several arcminutes, or even tens of degrees or more \citep[e.g.,][]{Crosta-Mignard2006, Li+2022}. As a pilot work, we have evaluated the effect of light deflection caused by objects in the solar system on high-precision SKA astrometry. This evaluation may be helpful in saving computing resources for the vast data volumes expected to be collected by the SKA \citep[e.g.,][]{Quinn+2015}. In addition, this work may contribute to the selection of interesting regions and times to test GR or the PPN parameters via light deflection or the Shapiro time delay, especially when there are unknown weak sources that can only be detected by the ultra-sensitive SKA \citep[e.g.,][]{Bonaldi+2021}. Such regions and times may also be helpful in broader research \citep{Li+2022}, e.g., measuring the additional Shapiro delay relative to post-Newtonian corrections \citep[see][]{Will2014}, testing and developing multi-plane lensing theory \citep{Subramanian-Chitre1984, Erdl-Schneider1993, Ramesh+2021}, higher-order (e.g., second-order) PPN formalisms \citep{Crosta-Mignard2006, Kopeikin-Makarov2007}, etc. The remainder of this paper is organized as follows. Section \ref{sec-delfection-all-solar-system} overviews the deflection angles caused by 195 objects in the solar system, using the sample given in Section \ref{sec-sample}.
In Section \ref{sec:impact-regions-start}, we discuss the zones and durations of the perturbations posed by the gravitational fields of objects that revolve around the Sun and evaluate the impact of those objects on SKA astrometry. Finally, we summarize our conclusions in Section \ref{sec-summary}. \section{Sample}\label{sec-sample} The sample includes 195 objects in the solar system; i.e., the Sun, all eight planets, Pluto, 177 satellites (including the Moon), and eight asteroids with $GM > 0.1$ km$^3$ s$^{-2}$, where $G$ and $M$ are the gravitational constant and the mass of the solar system object, respectively. The parameters of the Sun were obtained from IAU 2015 Resolution B3 \citep[see][]{IAU2015}.\footnote{See \url{https://www.iau.org/administration/resolutions/general_assemblies/}} The physical and orbital parameters of the planets, Pluto, the satellites, and the asteroids were obtained from the Jet Propulsion Laboratory (JPL, hereafter).\footnote{Planets and Pluto: for physical parameters see \url{https://ssd.jpl.nasa.gov/?planet_phys_par}; for orbital parameters see \url{https://ssd.jpl.nasa.gov/planets/approx_pos.html}. Satellites: for physical parameters see \url{https://ssd.jpl.nasa.gov/?sat_phys_par}; for orbital parameters see \url{https://ssd.jpl.nasa.gov/?sat_elem}. The parameters of the asteroids were obtained from \url{https://ssd.jpl.nasa.gov/tools/sbdb_query.html}. The references therein are not listed here because they are very numerous, and we kindly refer the reader to the various references found on the aforementioned websites.} We replaced the orbital parameters of the satellites (other than the Moon) with those of their host planets or Pluto, because the ratios, $R_{\mathrm{sp}}$, between the orbital semi-major axes of these satellites and those of the corresponding planets or Pluto are small; i.e., all of them are $\lesssim 0.04$ (see Table \ref{tab:original-para}). The other parameters required in this work are also listed in Table \ref{tab:original-para}. \section{Deflection Angle Caused by Solar System Objects}\label{sec-delfection-all-solar-system} \subsection{Orbits of the Planets}\label{sec:orbit} To obtain the positions of the planets, Pluto, and the asteroids, it is convenient to use the heliocentric orbital-plane coordinate system. Assuming that a planet moves along a fixed elliptical orbit, the position of the planet at eccentric anomaly $E$ is \begin{equation}\label{equ:orbit} x = a(\cos E - e); \;\; y = a \sqrt{1 - e^2} \sin E; \;\; z = 0, \end{equation} where $a$ is the semi-major axis, $e$ is the eccentricity (see Table \ref{tab:original-para}), and $z = 0$ reflects the fact that the planet lies in its orbital plane. The calculated coordinates can be transformed into coordinates in the ecliptic coordinate system ($x_{\mathrm{ecl}}$, $y_{\mathrm{ecl}}$, $z_{\mathrm{ecl}}$) by \begin{equation}\label{equ:ecl} \left\{ \begin{aligned} x_{\mathrm{ecl}} & = (\cos \omega \cos \Omega - \sin \omega \sin \Omega \cos I) \;\; x & + & \;\;(- \sin \omega \cos \Omega - \cos \omega \sin \Omega \cos I) \;\;y \\ y_{\mathrm{ecl}} & = (\cos \omega \sin \Omega + \sin \omega \cos \Omega \cos I) \;\;x & + & \;\; (- \sin \omega \sin \Omega + \cos \omega \cos \Omega \cos I) \;\; y \\ z_{\mathrm{ecl}} & = (\sin \omega \sin I) \;\; x & + & \;\; (\cos \omega \sin I) \;\; y, \end{aligned} \right. \end{equation} where $I$, $\omega$, and $\Omega$ are the inclination, the argument of perihelion, and the longitude of the ascending node (see Table \ref{tab:original-para}), respectively.
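For concreteness, the following minimal Python sketch implements Equations (\ref{equ:orbit}) and (\ref{equ:ecl}), together with the rotation by the obliquity of the ecliptic into the J2000 equatorial frame used later in Section \ref{sec:imapact-regions-coverage}; the orbital elements in the example are rounded, Jupiter-like placeholder values rather than the JPL parameters adopted in Table \ref{tab:original-para}:
\begin{verbatim}
import numpy as np

def orbital_position(a, e, E):
    # Eq. (1): position in the orbital plane at eccentric anomaly E.
    x = a * (np.cos(E) - e)
    y = a * np.sqrt(1.0 - e**2) * np.sin(E)
    return x, y

def to_ecliptic(x, y, I, omega, Omega):
    # Eq. (2): rotate orbital-plane coordinates into the ecliptic frame.
    cw, sw = np.cos(omega), np.sin(omega)
    cO, sO = np.cos(Omega), np.sin(Omega)
    cI, sI = np.cos(I), np.sin(I)
    x_ecl = (cw*cO - sw*sO*cI) * x + (-sw*cO - cw*sO*cI) * y
    y_ecl = (cw*sO + sw*cO*cI) * x + (-sw*sO + cw*cO*cI) * y
    z_ecl = (sw*sI) * x + (cw*sI) * y
    return np.array([x_ecl, y_ecl, z_ecl])

def to_equatorial(r_ecl, eps=np.radians(23.43928)):
    # Rotation by the obliquity of the ecliptic into the J2000
    # equatorial frame.
    x, y, z = r_ecl
    return np.array([x,
                     np.cos(eps)*y - np.sin(eps)*z,
                     np.sin(eps)*y + np.cos(eps)*z])

# Jupiter-like placeholder elements (a in au, angles in radians).
a, e = 5.20, 0.049
I, omega, Omega = np.radians([1.30, 273.9, 100.5])
E = np.linspace(0.0, 2.0*np.pi, 7)   # sweep the full fixed ellipse
x, y = orbital_position(a, e, E)
print(to_equatorial(to_ecliptic(x, y, I, omega, Omega)))
\end{verbatim}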
These orbital elements are valid for the time interval 1800 AD--2050 AD. The computations below are based on the rough orbit determinations derived using the above formulas. \subsection{Light Deflection and Its Impact}\label{sec:deflection} \subsubsection{Calculation of Light Deflection} Light bends when it passes a massive body. When the impact parameter, $b$, is far less than the distances from the body to both the Earth (observer) and the target source, the deflection angle, $\alpha$, can be calculated as follows \citep{Hosokawa+1993}: \begin{equation}\label{equ:alpha1} \alpha = (1+\gamma)\frac{2GM}{c^2 b}, \end{equation} where $\gamma$ is the PPN parameter, which equals 1 in GR. When $b$ does not satisfy the condition described above, the expression for $\alpha$ \citep{Cowling1984, Will1993, Ni2017} is \begin{equation}\label{equ:alpha2} \alpha = (1+\gamma)\frac{GM}{c^2 b}(\cos \theta_1 - \cos \theta_0), \end{equation} where $\theta_0$ and $\theta_1$ are the angles between the light propagation vector and the directions to the CES and to the Earth, respectively \citep[see schematic diagrams of light deflection under the gravitational field of a lens in Figure 3 of][]{Li+2022}. We denote Equation (\ref{equ:alpha1}) as ``Simplified'' and Equation (\ref{equ:alpha2}) as ``Normal'', because Equation (\ref{equ:alpha2}) has a wider range of applicability. If the observed target is a distant CES, then $\theta_0 \approx 180^{\circ}$, $b = r \sin \beta$ \citep[see][]{Cowling1984}, and $\theta_1 \approx \alpha + \beta$, where $r$ is the distance from the observer to the celestial body, and $\beta$ is the angle between the CES (in the absence of gravitational or aberrational bending) and the celestial body as seen by the observer. When $\theta_1$ approaches 0$^{\circ}$, Equation (\ref{equ:alpha2}) approaches Equation (\ref{equ:alpha1}). When the celestial body is a solar system object, it is valid to replace $\theta_1$ with $\beta$ at a precision of $0.1$ $\mu$as \citep[see][]{Li+2022}. The maximum and minimum distances (i.e., $r_\mathrm{max}$ and $r_\mathrm{min}$, respectively) from the Earth to the celestial body are calculated based on the orbit determination in Section \ref{sec:orbit}. The corresponding angular radii of a celestial body seen from the Earth are $\Theta_{\mathrm{min}}$ and $\Theta_{\mathrm{max}}$ (see Table \ref{tab:betas}), respectively, assuming that the physical size of the celestial body is its mean radius. For satellites other than the Moon, the values of $r_\mathrm{max}$, $r_\mathrm{min}$, $\Theta_{\mathrm{min}}$, and $\Theta_{\mathrm{max}}$ are replaced with the corresponding values of their primaries (including Pluto). In this work, every solar system object is regarded as an isolated individual lens, for the following reasons. Firstly, in data correlation, a necessary processing step for all radio interferometric data, the total general relativistic time-delay correction is the linear superposition of the time delays of the individual lenses in the solar system \citep[see][]{Pertit-Luzum2010}. Secondly, it has been shown for Saturn and Jupiter that the effect of an uncertainty in $\beta$ (assumed to be 10 mas, which can be regarded as the combined deflection angle caused by other objects) decreases rapidly to less than 0.1 $\mu$as as $\beta$ increases \citep[see figure 7 of][]{Li+2022}. These facts indicate that it is reasonable to regard every solar system object as an isolated individual lens.
The combined deflection angle should thus not affect the results for individual lenses, and in this work it can be obtained as the linear superposition of the (vector) deflection angles of the individual lenses \citep[e.g.,][]{Li+2022}. Further study considering multi-plane lensing effects is needed to accurately determine the range of applicability of this linear superposition. \subsubsection{Maximum Deflection Angle}\label{sec-alpha-max} The maximum values of $\alpha$, calculated with Equation (\ref{equ:alpha1}) by setting $b$ to the mean radius of the celestial body (see Table \ref{tab:original-para}), are consistent with those calculated by \citet{Crosta-Mignard2006}. All planets can deflect light by up to $\sim$ 82.0 $\mu$as or more, and Pluto can bend light by $\sim$ 7.0 $\mu$as. Ceres, with $\alpha_{\max} = 1.2$ $\mu$as, is the only asteroid with $\alpha_{\max} > 1.0$ $\mu$as; six asteroids have $\alpha_{\max} > 0.1$ $\mu$as. Large satellites can also bend light by several or dozens of $\mu$as. The number of satellites with $\alpha_{\max} > 1.0$ $\mu$as is 14: the Moon; Ganymede, Callisto, Io, and Europa (four satellites of Jupiter); Titan, Rhea, Iapetus, and Dione (four satellites of Saturn); Titania, Oberon, Ariel, and Umbriel (four satellites of Uranus); Triton (a satellite of Neptune); and Charon (a satellite of Pluto). The number of satellites with $\alpha_{\max} > 0.1$ $\mu$as is 21, i.e., the Moon, plus four, seven, five, three, and one satellite(s) of Jupiter, Saturn, Uranus, Neptune, and Pluto, respectively. For satellites other than the Moon, $R_{\mathrm{sp}} < 3\%$ for those with $\alpha_{\max} \geq 0.1$ $\mu$as and $R_{\mathrm{sp}} < 0.2\%$ for those with $\alpha_{\max} \geq 1.0$ $\mu$as. Therefore, replacing $r_{\mathrm{max}}$ and $r_{\mathrm{min}}$ of these satellites with those of their primaries does not affect the conclusions of this work. \subsubsection{Impact Ranges}\label{sec-alpha-beta-impact} \begin{figure*}[!htb] \centering \subfigure[Jupiter with Maximum Distance from Earth]{\includegraphics[width=0.49\textwidth]{Jupiter-the-feasibility-of-the-simplest-model-long_v2.pdf}} \subfigure[Jupiter with Minimum Distance from Earth]{\includegraphics[width=0.49\textwidth]{Jupiter-the-feasibility-of-the-simplest-model-short_v2.pdf}} \subfigure[Mars with Maximum Distance from Earth]{\includegraphics[width=0.49\textwidth]{Mar-the-feasibility-of-the-simplest-model-long_v2.pdf}} \subfigure[Mars with Minimum Distance from Earth]{\includegraphics[width=0.49\textwidth]{Mar-the-feasibility-of-the-simplest-model-short_v2.pdf}} \subfigure[Sun with Maximum Distance from Earth]{\includegraphics[width=0.49\textwidth]{Sun-the-feasibility-of-the-simplest-model-long_v2.pdf}} \subfigure[Sun with Minimum Distance from Earth]{\includegraphics[width=0.49\textwidth]{Sun-the-feasibility-of-the-simplest-model-short_v2.pdf}} \caption{$\alpha$ as a function of $\beta$ under $r_\mathrm{max}$ (left) and $r_\mathrm{min}$ (right) for Jupiter (top), Mars (middle), and the Sun (bottom), respectively. The red and green lines present the results computed from Equations (\ref{equ:alpha1}) and (\ref{equ:alpha2}), respectively, and the blue line shows the difference between the two results (see Section \ref{sec-alpha-beta-validity}).
The solid and dashed black lines denote $\alpha$ values of 1.0 and 0.1 $\mu$as, respectively, for Jupiter and Mars, and of 1.0 and 0.1 mas, respectively, for the Sun.} \label{fig-alpha-position-rely} \end{figure*} Figure \ref{fig-alpha-position-rely} presents $\alpha$ as a function of $\beta$ for three objects, i.e., Jupiter, Mars, and the Sun, under both $r_\mathrm{max}$ (left) and $r_\mathrm{min}$ (right). The critical value of $\beta$ at which $\alpha$ equals a given value (e.g., 1.0 $\mu$as) under a given condition (e.g., $r_\mathrm{min}$) can be regarded as the impact range of an event of light deflection, and is thus denoted as $\beta$-impact for short hereafter. For instance, the $\beta$-impact for $\alpha$ = 1.0 $\mu$as under $r_\mathrm{min}$, denoted as $\beta_{1.0, \mathrm{max}}$ (the impact range is largest when the body is closest to the Earth), indicates that a celestial body can deflect light by an amount of 1.0 $\mu$as out to $\beta \leq \beta_{1.0, \mathrm{max}}$. This naming convention also applies to the other given values of $\alpha$ and conditions; i.e., $\beta_{1.0, \mathrm{min}}$, $\beta_{0.1, \mathrm{min}}$, and $\beta_{0.1, \mathrm{max}}$ represent the critical values of $\beta$ at which the deflection angle equals 1.0 $\mu$as under $r_{\mathrm{max}}$, 0.1 $\mu$as under $r_{\mathrm{max}}$, and 0.1 $\mu$as under $r_{\mathrm{min}}$, respectively; and $\beta_{1.0}$ and $\beta_{0.1}$ denote the critical values of $\beta$ at which the deflection angle equals 1.0 and 0.1 $\mu$as, respectively. Table \ref{tab:betas} lists the $\beta$-impact for more planets, satellites, and asteroids, where only objects with $\alpha_\mathrm{max} > 0.005$ $\mu$as and $\beta_{0.1, \mathrm{max}} > \Theta_\mathrm{max}$ are presented. The results of the $\beta$-impact for the planets, Pluto, and the Moon are consistent with those reported by \citet{Crosta-Mignard2006}.
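To illustrate how these critical angles can be obtained, the following is a minimal Python sketch that evaluates Equations (\ref{equ:alpha1}) and (\ref{equ:alpha2}) (with $\theta_0 = 180^{\circ}$ and $\theta_1 \approx \beta$ for a distant CES) and bisects for the $\beta$-impact; the $GM$ and distance are rounded, Jupiter-like illustrative numbers, not the JPL values underlying Table \ref{tab:betas}:
\begin{verbatim}
import numpy as np

C = 299792.458                    # speed of light [km/s]
MUAS = np.radians(1.0) / 3.6e9    # 1 microarcsecond [rad]

def alpha_simplified(gm, r, beta, gamma=1.0):
    # Eq. (3): alpha = (1 + gamma) 2GM / (c^2 b), with b = r sin(beta).
    return (1.0 + gamma) * 2.0 * gm / (C**2 * r * np.sin(beta))

def alpha_normal(gm, r, beta, gamma=1.0):
    # Eq. (4) with theta_0 = 180 deg and theta_1 ~ beta.
    return (1.0 + gamma) * gm * (np.cos(beta) + 1.0) / (C**2 * r * np.sin(beta))

def beta_impact(gm, r, alpha_target):
    # Bisect for the beta at which alpha_normal drops to alpha_target
    # (alpha_normal decreases monotonically with beta).
    lo, hi = 1e-9, np.pi - 1e-9
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if alpha_normal(gm, r, mid) > alpha_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

gm_jup = 1.267e8     # Jupiter-like GM [km^3 s^-2]
r_min = 5.91e8       # ~3.95 au, closest Earth-Jupiter distance [km]
beta = beta_impact(gm_jup, r_min, 1.0 * MUAS)
print(np.degrees(beta))   # ~89 deg, cf. Jupiter's beta_{1.0,max}
beta = np.radians(30.0)
print((alpha_simplified(gm_jup, r_min, beta)
       - alpha_normal(gm_jup, r_min, beta)) / MUAS)
# ~0.26 uas: at beta = 30 deg Eq. (3) is no longer accurate to 0.1 uas
\end{verbatim}
The same pair of functions also gives the difference underlying the $\beta$-validity discussed in Section \ref{sec-alpha-beta-validity}.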
\begin{deluxetable*}{lccccccccccc} \tablecolumns{12} \tabletypesize{\footnotesize} \setlength\tabcolsep{3pt} \renewcommand{\arraystretch}{1.2} \tablecaption{$\beta$-Impact and $\beta$-Validity \label{tab:betas}} \tablehead{ \colhead{Objects} & \colhead{Index} & \colhead{$\Theta_{\mathrm{min}}$} & \colhead{$\Theta_{\mathrm{max}}$} & \colhead{$\beta_\mathrm{1.0, min}$} & \colhead{$\beta_\mathrm{1.0, max}$} & \colhead{$\beta_\mathrm{0.1, min}$} & \colhead{$\beta_\mathrm{0.1, max}$} & \colhead{$\beta_\mathrm{d, 1.0, min}$} & \colhead{$\beta_\mathrm{d, 1.0, max}$} & \colhead{$\beta_\mathrm{d, 0.1, min}$} & \colhead{$\beta_\mathrm{d, 0.1, max}$} \\ \colhead{} & \colhead{} & \colhead{($''$)} & \colhead{($''$)} & \colhead{($''$)} & \colhead{($''$)} & \colhead{($''$)} & \colhead{($''$)} & \colhead{($^{\circ}$)} & \colhead{($^{\circ}$)} & \colhead{($^{\circ}$)} & \colhead{($^{\circ}$)} } \startdata Sun & 1 & 943.45 & 975.52 & 647896 & 647900 & 647989 & 647990 & 0.26 & 0.27 & 0.26 & 0.27 \\ Mercury & 2 & 2.32 & 6.13 & 191 & 507 & 1920 & 5078 & 179.95 & 179.86 & 179.47 & 178.59 \\ Venus & 3 & 4.81 & 31.58 & 2368 & 15555 & 23661 & 148812 & 179.34 & 175.68 & 173.43 & 138.66 \\ Moon & 5 & 883.38 & 987.00 & 22859 & 25534 & 208923 & 228881 & 173.65 & 172.91 & 121.97 & 116.42 \\ Mars & 6 & 1.75 & 12.16 & 203 & 1410 & 2035 & 14102 & 179.94 & 179.61 & 179.43 & 176.08 \\ Jupiter & 9 & 14.93 & 24.39 & 223618 & 320605 & 580120 & 606206 & 117.88 & 90.94 & 18.86 & 11.61 \\ Ganymede & 10 & 0.56 & 0.92 & 19 & 31 & 194 & 316 & 179.99 & 179.99 & 179.95 & 179.91 \\ Callisto & 11 & 0.51 & 0.84 & 14 & 23 & 141 & 230 & 180.00 & 179.99 & 179.96 & 179.94 \\ Io & 12 & 0.39 & 0.64 & 11 & 19 & 116 & 191 & 180.00 & 179.99 & 179.97 & 179.95 \\ Europa & 13 & 0.33 & 0.54 & 5 & 10 & 62 & 103 & 180.00 & 180.00 & 179.98 & 179.97 \\ Saturn & 77 & 7.28 & 9.99 & 43350 & 59304 & 334988 & 398610 & 167.96 & 163.53 & 86.95 & 69.27 \\ Titan & 78 & 0.32 & 0.44 & 9 & 13 & 102 & 140 & 180.00 & 180.00 & 179.97 & 179.96 \\ Rhea & 79 & 0.10 & 0.13 & NaN & NaN & 1 & 2 & 180.00 & 180.00 & 180.00 & 180.00 \\ Iapetus & 80 & 0.09 & 0.13 & NaN & NaN & 1 & 1 & 180.00 & 180.00 & 180.00 & 180.00 \\ Dione & 81 & 0.07 & 0.10 & NaN & NaN & NaN & 1 & 180.00 & 180.00 & 180.00 & 180.00 \\ Uranus & 139 & 1.66 & 2.02 & 3477 & 4241 & 34691 & 42269 & 179.03 & 178.82 & 170.36 & 168.26 \\ Titania & 140 & 0.05 & 0.06 & NaN & NaN & 1 & 1 & 180.00 & 180.00 & 180.00 & 180.00 \\ Oberon & 141 & 0.05 & 0.06 & NaN & NaN & 1 & 1 & 180.00 & 180.00 & 180.00 & 180.00 \\ Neptune & 167 & 1.08 & 1.18 & 2762 & 3001 & 27582 & 29965 & 179.23 & 179.17 & 172.34 & 171.68 \\ Triton & 168 & 0.06 & 0.06 & NaN & NaN & 5 & 6 & 180.00 & 180.00 & 180.00 & 180.00 \\ Pluto & 182 & 0.03 & 0.06 & NaN & NaN & 2 & 3 & 180.00 & 180.00 & 180.00 & 180.00 \\ Ceres & 188 & 0.16 & 0.41 & NaN & NaN & 1 & 4 & 180.00 & 180.00 & 180.00 & 180.00 \\ Pallas & 189 & 0.09 & 0.31 & NaN & NaN & NaN & 1 & 180.00 & 180.00 & 180.00 & 180.00 \\ \enddata \tablecomments{ The $\beta$-impact is the impact range of an event of light deflection (see Section \ref{sec-alpha-beta-impact}); the $\beta$-validity is the range of applicability of Equation (\ref{equ:alpha1}) (see Section \ref{sec-alpha-beta-validity}). NaN indicates that $\beta_\mathrm{1.0, min}$ or $\beta_\mathrm{0.1, min}$ is less than $\Theta_{\mathrm{min}}$, or that $\beta_\mathrm{1.0, max}$ or $\beta_\mathrm{0.1, max}$ is less than $\Theta_{\mathrm{max}}$; i.e., the required angular separation would fall within the apparent disk of the body.
} \end{deluxetable*} The Sun's gravitational field can bend light by more than $\sim$ 10 $\mu$as in any direction with $\beta \in [\Theta, 180^{\circ} - \Theta]$, where $\Theta$ is the apparent angular radius of the Sun. Most notably, Jupiter can deflect light by an amount of 0.1 $\mu$as out to $\beta_{0.1,\mathrm{max}} \approx 168.4^{\circ}$, and by an amount of 1.0 $\mu$as out to $\beta_{1.0,\mathrm{max}} \approx 89.1^{\circ}$ (see Figure \ref{fig-alpha-position-rely}). Such large values of $\beta_{0.1,\mathrm{max}}$ and $\beta_{1.0,\mathrm{max}}$ make Jupiter, like the Sun, an unavoidable issue in astrometry. For Saturn, the values of $\beta_{0.1,\mathrm{max}}$ and $\beta_{1.0,\mathrm{max}}$ are $\approx 110.7^{\circ}$ and $\approx 16.5^{\circ}$, respectively. For Mercury, Venus, Mars, Uranus, and Neptune, the values of ($\beta_{0.1,\mathrm{max}}$, $\beta_{1.0,\mathrm{max}}$) are $\approx$ ($1.4^{\circ}$, $0.1^{\circ}$), ($41.3^{\circ}$, $4.3^{\circ}$), ($3.9^{\circ}$, $0.4^{\circ}$), ($11.7^{\circ}$, $1.2^{\circ}$), and ($8.3^{\circ}$, $0.8^{\circ}$), respectively. The Moon's effect is comparable to that of the planets, with ($\beta_{0.1,\mathrm{max}}$, $\beta_{1.0,\mathrm{max}}$) $\approx$ ($63.6^{\circ}$, $7.1^{\circ}$) (see Table \ref{tab:betas}). For Ganymede, ($\beta_{0.1,\mathrm{max}}$, $\beta_{1.0,\mathrm{max}}$) $\approx$ (316$''$, 31$''$). Eleven satellites (including the Moon) can bend light by 0.1 $\mu$as, and six can bend light by 1.0 $\mu$as, out to $\beta \geq 1''$. The aforementioned values of ($\beta_{0.1,\mathrm{max}}$, $\beta_{1.0,\mathrm{max}}$) highlight the challenge of conducting high-precision astrometry, especially with next-generation observatories. \subsubsection{Validity of the Simplified Equation}\label{sec-alpha-beta-validity} Equation (\ref{equ:alpha1}) is much simpler than Equation (\ref{equ:alpha2}); using it where applicable may therefore help save computing resources. The difference between Equations (\ref{equ:alpha2}) and (\ref{equ:alpha1}) is presented to judge the range of applicability of Equation (\ref{equ:alpha1}) (see Figure \ref{fig-alpha-position-rely}). The critical value of $\beta$ at which the difference between the values of $\alpha$ calculated from the two equations equals a given value (e.g., 1.0 $\mu$as) under a given condition (e.g., $r_\mathrm{min}$) can be regarded as the validity range of Equation (\ref{equ:alpha1}), and is denoted as $\beta$-validity for short hereafter. The naming convention is similar to that for the $\beta$-impact, with the corresponding quantities denoted as, e.g., $\beta_{\mathrm{d, 1.0, min}}$, $\beta_{\mathrm{d, 1.0, max}}$, $\beta_{\mathrm{d, 0.1, min}}$, and $\beta_{\mathrm{d, 0.1, max}}$. The Sun and Mars can be seen as two extremes (see Figure \ref{fig-alpha-position-rely}). For the Sun, Equation (\ref{equ:alpha1}) is invalid at a precision of 1.0 $\mu$as even when light grazes its surface. For Jupiter and Saturn (see Table \ref{tab:betas}), $\beta_{\mathrm{d, 0.1, max}}$ is less than 90$^{\circ}$, but $\beta_{\mathrm{d, 1.0, max}}$ exceeds 90$^{\circ}$.
For the other objects, including the other five planets, Pluto, and all satellites and asteroids, $\beta_{\mathrm{d, 0.1, max}}$, $\beta_{\mathrm{d, 0.1, min}}$, $\beta_{\mathrm{d, 1.0, max}}$, and $\beta_{\mathrm{d, 1.0, min}}$ all exceed 90$^{\circ}$, indicating that Equation (\ref{equ:alpha1}) is valid even at a precision of 0.1 $\mu$as when $\beta \leq 90^{\circ}$. Moreover, since $\beta_{0.1,\mathrm{max}}$ is also below 90$^{\circ}$ for these objects, no correction for the positional offset caused by their gravitational fields is required at $\beta > 90^{\circ}$, even at a precision level of 0.1 $\mu$as. \section{Discussion}\label{sec:impact-regions-start} \subsection{Effect of the Maximum Deflection Angle and the Earth--Lens Distance on the Impact Range of Light Deflection}\label{sec:imapact-regins-beta} The value of the $\beta$-impact is related to $\alpha_{\mathrm{max}}$ and to the distance from the celestial body to the Earth (see Section \ref{sec:deflection}). Figure \ref{fig-beta-impact-relation} presents the relationship among $\alpha_{\mathrm{max}}$, the distance, and the values of the $\beta$-impact (i.e., $\beta_{1.0,\mathrm{min}}$, $\beta_{1.0,\mathrm{max}}$, $\beta_{0.1,\mathrm{min}}$, and $\beta_{0.1,\mathrm{max}}$). The results indicate that the values of the $\beta$-impact increase with increasing $\alpha_{\mathrm{max}}$ and with decreasing distance. \begin{figure*}[!htb] \centering \subfigure[$\beta_{1.0,\mathrm{min}}$ with Maximum Distance]{\includegraphics[width=0.49\textwidth]{relation_max_1.pdf}} \subfigure[$\beta_{1.0,\mathrm{max}}$ with Minimum Distance]{\includegraphics[width=0.49\textwidth]{relation_min_1.pdf}} \subfigure[$\beta_{0.1,\mathrm{min}}$ with Maximum Distance]{\includegraphics[width=0.49\textwidth]{relation_max_01.pdf}} \subfigure[$\beta_{0.1,\mathrm{max}}$ with Minimum Distance]{\includegraphics[width=0.49\textwidth]{relation_min_01.pdf}} \caption{The values of the $\beta$-impact (i.e., $\beta_{1.0,\mathrm{min}}$, $\beta_{1.0,\mathrm{max}}$, $\beta_{0.1,\mathrm{min}}$, and $\beta_{0.1,\mathrm{max}}$) as functions of $\alpha_{\mathrm{max}}$ and of the distances from the celestial bodies to the Earth. Only sources with $0.1 \leq \alpha_{\mathrm{max}} < 17\,000.0$ $\mu$as and corresponding $\beta$-impact values $\geq 1''$ are included.} \label{fig-beta-impact-relation} \end{figure*} \subsection{Zones of Perturbation}\label{sec:imapact-regions-coverage} This section roughly estimates the specific zones in which light from CESs would be deflected by a given angle (i.e., 1.0 or 0.1 $\mu$as) by Mercury, Venus, Mars, Uranus, Neptune, Pluto, and Ceres; the $\beta_\mathrm{0.1, max}$ values of these bodies are all $\geq 1''$. Jupiter and Saturn are not included because their values of $\beta_\mathrm{0.1, min}$ are larger than $90^{\circ}$; like the Sun, their impact on light from CESs is so heavy that they should be taken into consideration in most cases. For satellites, two orbital centers (both the Sun and the primaries, including Pluto) would have to be considered and, in addition, the perturbed zones for satellites other than the Moon are buried within those of their primaries. For the Moon, $\beta_\mathrm{0.1, min}$ is about $58^{\circ}$, and its impact is comparable to that of Saturn. Therefore, the perturbed zones of the satellites are not included in this work. We use the position of the Earth--Moon system as the position of the Earth, and assume that the planets, Pluto, and the asteroids move along fixed elliptical orbits around the Sun.
We assume that the CES, the Earth, and the celestial body (e.g., a planet, Pluto, or an asteroid) all lie along a single line. The position of a CES can be calculated by assuming two arbitrary values of $E$ for the Earth and the celestial body, respectively; see the definition of $E$ in Equation (\ref{equ:orbit}). The distance from the Earth to a CES is fixed to be $10^{14}$ times the distance from the Earth to the celestial body. This corresponds to a distance from the Earth to a fiducial CES of $\sim$ 500 Mpc if the distance from the Earth to the celestial body is $\sim$ 1 au. If the distance from the Earth to the CES fluctuates between 1 Mpc and 500 Mpc, the corresponding fluctuation of the computed position of the CES is about dozens of $\mu$as. This computed position is used to calculate the deflection angle caused by the gravitational field of solar system objects. A positional uncertainty of the CESs of dozens of $\mu$as does not affect the computed deflection angle at a precision of 0.1 $\mu$as; therefore, this fluctuation does not affect the conclusions of this work. The coverage area of the perturbed zone corresponding to $\beta_{1.0}$ (i.e., the zone where light from a CES would be deflected by the celestial body by 1.0 $\mu$as) is a circle at the given positions of the Earth and the celestial body, i.e., for two given values of $E$. The center of this circle is the computed CES position projected on the sky as seen from the Earth (coinciding with the projected point on the sky of the trajectory of the corresponding celestial body), and its radius is $\beta_{1.0}$. Similarly, the perturbed zone for $\beta_{0.1}$ is also a circle with the same center, but with radius $\beta_{0.1}$. The ecliptic coordinate system was used to compute the positions of the Earth, the celestial bodies, and the CESs. These positions can be transformed into the J2000 equatorial coordinate system as follows: \begin{equation}\label{equ:equatorial} \left\{ \begin{aligned} x_\mathrm{eq} & = x_\mathrm{ecl} \\ y_\mathrm{eq} & = \cos \varepsilon\; y_\mathrm{ecl} - \sin \varepsilon\; z_\mathrm{ecl} \\ z_\mathrm{eq} & = \sin \varepsilon\; y_\mathrm{ecl} + \cos \varepsilon\; z_\mathrm{ecl}, \end{aligned} \right. \end{equation} where $\varepsilon = 23.43928^{\circ}$ is the obliquity of the ecliptic. We have computed the whole zones of perturbation for both $\beta_{1.0}$ and $\beta_{0.1}$ (see Figure \ref{fig-coverage}). The perturbed zones for $\beta_{1.0}$ and $\beta_{0.1}$ are ribbons, and their widths are denoted as $W_{1.0}$ and $W_{0.1}$, respectively. For Pluto, $W_{1.0}$ and $W_{0.1}$ are less than $\sim$ 1$^{\circ}$ (see Figure \ref{fig-coverage} and Table \ref{tab:coverage}). For Mercury, Mars, and Ceres, $W_{1.0}$ and $W_{0.1}$ are similar, with values of $\sim$ 4$^{\circ}$--9$^{\circ}$, $\sim$ 2$^{\circ}$--6$^{\circ}$, and $\sim$ 5$^{\circ}$--10$^{\circ}$, respectively. The values of $W_{1.0}$ for Uranus and Neptune are $\sim$ 3$^{\circ}$ or less, close to that of Pluto. However, given $\beta_{\mathrm{0.1}}$ values of $\sim$ 10$^{\circ}$ for Uranus and Neptune, their values of $W_{0.1}$ range from $\sim$ 10$^{\circ}$ to $\sim$ 23$^{\circ}$, larger than those for Mercury, Mars, and Ceres. In particular, $W_{0.1}$ for Venus is close to $\sim$ 80$^{\circ}$, and its $W_{1.0}$ also reaches $\sim$ 13$^{\circ}$--16$^{\circ}$.
These results imply that the size of the perturbed zone is negatively correlated with the distance from the Earth to the celestial body and positively correlated with the value of $\beta_{\mathrm{0.1}}$ or $\beta_{\mathrm{1.0}}$. \begin{figure*}[!htb] \centering \subfigcapskip=-0.3cm \subfigure[Mercury]{\includegraphics[width=0.38\textwidth]{Mercury_coverage.png}} \subfigure[Venus]{\includegraphics[width=0.38\textwidth]{Venus_coverage.png}} \subfigure[Mars]{\includegraphics[width=0.38\textwidth]{Mar_coverage.png}} \subfigure[Uranus]{\includegraphics[width=0.38\textwidth]{Uranus_coverage.png}} \subfigure[Neptune]{\includegraphics[width=0.38\textwidth]{Neptune_coverage.png}} \subfigure[Pluto]{\includegraphics[width=0.38\textwidth]{Pluto_coverage.png}} \subfigure[Ceres]{\includegraphics[width=0.38\textwidth]{Ceres_coverage.png}} \caption{Whole perturbed zones for $\beta_{1.0}$ (red) and $\beta_{0.1}$ (blue). The corresponding celestial bodies are indicated in the subcaptions. The black points show the projected points on the sky of the trajectory of the corresponding celestial body.} \label{fig-coverage} \end{figure*} \begin{deluxetable*}{cccccccc} \tablecolumns{8} \tablecaption{Widths of the Ribbon-like Perturbed Zones \label{tab:coverage}} \tablehead{ \colhead{Quantity} & \colhead{Mercury} & \colhead{Venus} & \colhead{Mars} & \colhead{Uranus} & \colhead{Neptune} & \colhead{Pluto} & \colhead{Ceres} } \startdata $W_{1.0}$ & $\sim$ 4$^{\circ}$--7$^{\circ}$ & $\sim$ 13$^{\circ}$--16$^{\circ}$ & $\sim$ 2$^{\circ}$--5$^{\circ}$ & $\sim$ 2$^{\circ}$--3$^{\circ}$ & $\sim$ 1$^{\circ}$--2$^{\circ}$ & $\lesssim$ 1$^{\circ}$ & $\sim$ 5$^{\circ}$--10$^{\circ}$ \\ $W_{0.1}$ & $\sim$ 5$^{\circ}$--9$^{\circ}$ & $\sim$ 66$^{\circ}$--80$^{\circ}$ & $\sim$ 4$^{\circ}$--6$^{\circ}$ & $\sim$ 14$^{\circ}$--23$^{\circ}$ & $\sim$ 10$^{\circ}$--17$^{\circ}$ & $\lesssim$ 1$^{\circ}$ & $\sim$ 5$^{\circ}$--10$^{\circ}$ \\ \enddata \tablecomments{ See the detailed distributions of the whole perturbed zones in Figure \ref{fig-coverage}. } \end{deluxetable*} \subsection{Duration of Perturbation}\label{sec:imapact-regions-duration} It is interesting to investigate the durations over which light from CESs can be deflected by celestial bodies by 1.0 and 0.1 $\mu$as, referred to as $\tau_{1.0}$ and $\tau_{0.1}$, respectively. As for the perturbed zones, we focus on calculating these values for Mercury, Venus, Mars, Uranus, Neptune, Pluto, and Ceres. These bodies are classified into three categories: planets inside the Earth's orbit, planets outside the Earth's orbit, and others (i.e., Pluto and Ceres). \subsubsection{Trajectory of the Earth and Celestial Bodies}\label{sec:imapact-regions-duration:introduction} It is necessary to determine the positions of the Earth and the celestial body in the solar system at any given time. For celestial bodies inside and outside the Earth's orbit, two different methods are used to obtain the positions of the Earth and the celestial body. For celestial bodies inside the Earth's orbit, we divide the orbit of the given celestial body into segments and compute the movement time in each segment; the corresponding position of the Earth is then determined using this movement time. The reason for choosing the given celestial body as the reference body is that the angle swept by the Earth cannot exceed 180$^{\circ}$ if that of the given gravitational body is $\leq 180^{\circ}$, because the angular speed of the Earth is slower.
Therefore, in the process of summing the movement time over all eligible segments (i.e., the segments where the deflection angle is larger than 1.0 or 0.1 $\mu$as), the movement time of any segment is not double-counted, provided that the angular change of the given celestial body is within $\pm$ 180$^{\circ}$. For celestial bodies outside the Earth's orbit, the selected reference body is the Earth, for similar reasons. Specifically, for the computation of the planets inside and outside the Earth's orbit, $E$ is divided into 129\,601 parts for the reference bodies; i.e., the resolution of $E$ is about 10$''$. This resolution corresponds to a mean time resolution of $\sim$ 1.0 and $\sim$ 2.5 min for Mercury and Venus (inside the Earth's orbit), and of $\sim$ 4.1 min for Mars, Uranus, and Neptune (outside the Earth's orbit), respectively. When computing the values for the category of others (i.e., Pluto and Ceres, also outside the Earth's orbit), $E$ is divided into 5\,184\,001 parts, i.e., the resolution of $E$ is about 0.25$''$, because $\beta_{\mathrm{0.1, min}}$ is only a few arcseconds for Pluto and Ceres. The corresponding mean time resolution for Pluto and Ceres is about 6.0 s. As a pilot work, we divided the orbit of the given celestial body into 1000 points (enough to show the details; see Figures \ref{fig-times-Mercury}--\ref{fig-times-Ceres}) as the initial positions, $\theta_{\mathrm{CES}}$, i.e., $\theta_{\mathrm{CES}}$ = $\frac{360^{\circ}}{1000} \times 0, 1, 2, ..., 1000$. For each point, we calculate the value of $E$ for the Earth corresponding to the minimum distance from the Earth to the given gravitational body, and set $E + 0^{\circ}, 10^{\circ},..., 180^{\circ}$ as the initial positions of the Earth, $\phi_{\mathrm{CES}}$. The total number of initial combinations of the given celestial body and the Earth, $(\theta_{\mathrm{CES}}, \phi_{\mathrm{CES}})$, is thus 19\,000. The same method as presented in Section \ref{sec:imapact-regions-coverage} is used to determine the position of a CES under $(\theta_{\mathrm{CES}}, \phi_{\mathrm{CES}})$; the position of the CES remains unchanged during the calculation for a given initial combination $(\theta_{\mathrm{CES}}, \phi_{\mathrm{CES}})$. We then calculate the durations of perturbation during which the angle between the CES and the given celestial body, as seen from the Earth, is less than $\beta_{1.0}$ and $\beta_{0.1}$, i.e., $\tau_{1.0}$ and $\tau_{0.1}$, respectively. \subsubsection{Timescale Criteria Potentially Affecting SKA Astrometry} Here, we assume that the flux density of a CES is 0.15 Jy/beam @ 9200 MHz \citep[the median flux density in the X band for a short baseline is $\sim$ 0.15 Jy from ICRF3; see][]{Charlot2020}, and 0.30 Jy/beam @ 1400 and 560 MHz (the median flux density in the S band is $\sim$ 0.23 Jy from ICRF3). The sensitivity of the SKA is assumed to be 0.43 $\mu$Jy/beam @ 9200 MHz, 0.71 $\mu$Jy/beam @ 1400 MHz, and 2.88 $\mu$Jy/beam @ 560 MHz for an exposure time of 8 hr, and the full width at half-maximum (FWHM) beam size in the three wavebands is assumed to be 0.09$''$, 0.6$''$, and 1.5$''$, respectively \citep[see][]{Bonaldi+2021}. The integration time needed to achieve a positional accuracy of 1.0 $\mu$as is then $\sim$ 8.0 min @ 9200 MHz, $\sim$ 4.0 hr @ 1400 MHz, and $\sim$ 414.7 hr @ 560 MHz (the last value may be meaningless), respectively.
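These integration times follow from the radiometric scaling of the image noise with exposure time. As a cross-check, the minimal Python sketch below reproduces the three values, assuming (our simplification for illustration) that the achievable positional precision is roughly the beam FWHM divided by twice the signal-to-noise ratio, with the rms noise scaling as $t^{-1/2}$:
\begin{verbatim}
# (frequency [MHz], CES flux [Jy], SKA rms in 8 hr [uJy/beam], FWHM ["])
bands = [(9200, 0.15, 0.43, 0.09),
         (1400, 0.30, 0.71, 0.60),
         (560,  0.30, 2.88, 1.50)]

target = 1.0e-6    # target positional accuracy: 1.0 uas, in arcsec

for freq, flux, rms8, fwhm in bands:
    snr_needed = fwhm / (2.0 * target)        # sigma ~ FWHM / (2 SNR)
    snr_8hr = flux * 1e6 / rms8               # SNR after an 8 hr exposure
    t_hr = 8.0 * (snr_needed / snr_8hr) ** 2  # rms scales as 1 / sqrt(t)
    print(f"{freq} MHz: {t_hr:.2f} hr")
    # prints ~0.13 hr (8.0 min), ~4.0 hr, and ~414.7 hr, respectively
\end{verbatim}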
These three timescales can be regarded as the criteria for whether a predicted event potentially affects SKA astrometry at a precision level of 0.1 $\mu$as; they are denoted as $\tau_{c,0.1,9200}$, $\tau_{c,0.1,1400}$, and $\tau_{c,0.1,560}$ (see Table \ref{tab:timescale}), respectively. The three timescale criteria corresponding to a precision level of 1.0 $\mu$as are smaller than those of 0.1 $\mu$as by a factor of 100, and are denoted as $\tau_{c,1.0, 9200}$, $\tau_{c,1.0,1400}$, and $\tau_{c,1.0, 560}$, respectively. When the duration exceeds 48.0 hr (except for the case at 560 MHz), the events can affect SKA astrometry over multiple epochs; we denote this criterion as $\tau_{c,m}$. \begin{deluxetable*}{ccccccc} \tablecolumns{7} \setlength\tabcolsep{10pt} \tablecaption{Timescale Criteria Potentially Affecting SKA Astrometry \label{tab:timescale}} \tablehead{ \colhead{$\tau_{c,0.1, 9200}$} & \colhead{$\tau_{c,0.1, 1400}$} & \colhead{$\tau_{c,0.1, 560}$\tablenotemark{$\dagger$}} & \colhead{$\tau_{c,1.0, 9200}$} & \colhead{$\tau_{c,1.0, 1400}$} & \colhead{$\tau_{c,1.0, 560}$} & \colhead{$\tau_{c,m}$} \\ \colhead{(min)} & \colhead{(hr)} & \colhead{(hr)} & \colhead{(sec)} & \colhead{(min)} & \colhead{(hr)} & \colhead{(hr)} } \startdata 8.0 & 4.0 & 414.7 & 4.8 & 2.4 & 4.1 & 48.0 \\ \enddata \tablenotetext{\dagger}{This value may be meaningless.} \end{deluxetable*} \subsubsection{Category of Planets inside the Earth's Orbit}\label{sec:imapact-regions-duration:inside} This category includes Mercury and Venus. The positions computed in this work for this category manifest as a group of closed patterns (see Figures \ref{fig-times-Mercury} and \ref{fig-times-Venus}). \begin{figure*}[!htb] \centering \subfigcapskip=-0.3cm \subfigure[$\tau_{1.0}$ as a function of $\theta_{\mathrm{CES}}$ and $\phi_{\mathrm{CES}}$]{\includegraphics[height=0.36\textwidth]{Mercury_time_all_v2.pdf}} \subfigure[Distribution of $\tau_{1.0}$]{\includegraphics[height=0.36\textwidth]{Mercury_time_all_J2000_1.png}} \subfigure[$\tau_{0.1}$ as a function of $\theta_{\mathrm{CES}}$ and $\phi_{\mathrm{CES}}$]{\includegraphics[height=0.36\textwidth]{Mercury_time_all_01_v2.pdf}} \subfigure[Distribution of $\tau_{0.1}$]{\includegraphics[height=0.36\textwidth]{Mercury_time_all_J2000_01.png}} \caption{$\tau_{1.0}$ and $\tau_{0.1}$ for Mercury. The value of $\phi_{\mathrm{CES}}$ is labeled in the legend; see the main text for $\theta_{\mathrm{CES}}$ and $\phi_{\mathrm{CES}}$.} \label{fig-times-Mercury} \end{figure*} \begin{figure*}[!htb] \centering \subfigcapskip=-0.3cm \subfigure[$\tau_{1.0}$ as a function of $\theta_{\mathrm{CES}}$ and $\phi_{\mathrm{CES}}$]{\includegraphics[height=0.36\textwidth]{Venus_time_all_v2.pdf}} \subfigure[Distribution of $\tau_{1.0}$]{\includegraphics[height=0.363\textwidth]{Venus_time_all_J2000_1.png}} \subfigure[$\tau_{0.1}$ as a function of $\theta_{\mathrm{CES}}$ and $\phi_{\mathrm{CES}}$]{\includegraphics[height=0.36\textwidth]{Venus_time_all_01_v2.pdf}} \subfigure[Distribution of $\tau_{0.1}$]{\includegraphics[height=0.363\textwidth]{Venus_time_all_J2000_01.png}} \caption{$\tau_{1.0}$ and $\tau_{0.1}$ for Venus. The description of each map refers to Figure \ref{fig-times-Mercury}.} \label{fig-times-Venus} \end{figure*} Figure \ref{fig-times-Mercury} shows $\tau_{1.0}$ and $\tau_{0.1}$ as functions of $(\theta_{\mathrm{CES}}, \phi_{\mathrm{CES}})$ (left panels) and their distributions on the sky (right panels) for Mercury.
The minimum value of $\tau_{1.0}$ is 1.3 hr, which is larger than $\tau_{c,1.0,9200}$ and $\tau_{c,1.0,1400}$. This indicates that the gravitational field of Mercury affects SKA astrometry over single-epoch observations at 9200 and 1400 MHz at a precision level of 1.0 $\mu$as. This conclusion also applies to a precision level of 0.1 $\mu$as. $\tau_{1.0}$ is larger than 48.0 hr only in a few cases, indicating that the gravitational field of Mercury hardly affects SKA astrometry over multi-epoch observations at a precision level of 1.0 $\mu$as. However, $\tau_{0.1}$ is larger than 48.0 hr in most cases, indicating that the gravitational field of Mercury is likely to affect SKA astrometry over multi-epoch observations at a precision level of 0.1 $\mu$as. Table \ref{tab:duration-effect} summarizes our conclusions. It is worth noting that large values of $\tau_{1.0}$ and $\tau_{0.1}$ do not appear at $\phi_{\mathrm{CES}} = 0$ (i.e., at the minimum distance from the Earth to Mercury), but at $\phi_{\mathrm{CES}}$ of about 20$^{\circ}$--40$^{\circ}$ and about 20$^{\circ}$--50$^{\circ}$, respectively. \begin{deluxetable*}{lccccccc} \tablecolumns{8} \tablecaption{Effect of Celestial Bodies on SKA Astrometry \label{tab:duration-effect}} \tablehead{ \colhead{Objects} & \colhead{Index} & \colhead{Precision Level} & \multicolumn{3}{c}{Single-epoch} & & \colhead{Multi-epoch} \\ \cline{4-6} \colhead{} & \colhead{} & \colhead{($\mu$as)} & \colhead{9200 MHz} & \colhead{1400 MHz} & \colhead{560 MHz} & & \colhead{} } \startdata Mercury & 2 & 1.0 & Y & Y & N & & N? \\ & & 0.1 & Y & Y & N\tablenotemark{$\dagger$} & & Y? \\ Venus & 3 & 1.0 & Y & Y & Y & & Y? \\ & & 0.1 & Y & Y & Y?\tablenotemark{$\dagger$} & & Y \\ Mars & 6 & 1.0 & Y & Y & Y? & & N? \\ & & 0.1 & Y & Y & N?\tablenotemark{$\dagger$} & & Y? \\ Uranus &139& 1.0 & Y & Y & Y & & Y \\ & & 0.1 & Y & Y & Y\tablenotemark{$\dagger$} & & Y \\ Neptune &167& 1.0 & Y & Y & Y & & Y \\ & & 0.1 & Y & Y & Y\tablenotemark{$\dagger$} & & Y \\ Pluto &182& 1.0 & N & N & N & & N \\ & & 0.1 & Y & N?& N\tablenotemark{$\dagger$} & & N? \\ Ceres &188& 1.0 & N & N & N & & N \\ & & 0.1 & N?& N?& N\tablenotemark{$\dagger$} & & N \\ \enddata \tablenotetext{\dagger}{This result may be meaningless.} \tablecomments{Y = affects SKA astrometry, Y? = likely affects SKA astrometry, N? = hardly affects SKA astrometry, N = does not affect SKA astrometry.} \end{deluxetable*} From Figure \ref{fig-times-Venus}, it can be seen that the effect of Venus is larger than that of Mercury. The minimum values of $\tau_{1.0}$ and $\tau_{0.1}$ are 25.4 and 254.3 hr, respectively, and the maximum values of both $\tau_{1.0}$ and $\tau_{0.1}$ are larger than 912.5 hr (i.e., exceeding one month). Large values of $\tau_{1.0}$ and $\tau_{0.1}$ appear at $\phi_{\mathrm{CES}} \sim 0^{\circ}$--30$^{\circ}$ and $\sim 0^{\circ}$--40$^{\circ}$, respectively. Therefore, the gravitational field of Venus can largely affect SKA astrometry over both single-epoch and multi-epoch observations (see the summary in Table \ref{tab:duration-effect}). \subsubsection{Category of Planets outside the Earth's Orbit}\label{sec:imapact-regions-duration:outside} This category includes Mars, Uranus, and Neptune. Unlike the category inside the Earth's orbit, the computed points of this category form a group of ribbons with R.A.(J2000) ranging from 0$^{\circ}$ to 360$^{\circ}$ (see Figures \ref{fig-times-Mars}--\ref{fig-times-Neptune}).
\begin{figure*}[!htb] \centering \subfigure[$\tau_{1.0}$ as a function of $\theta_{\mathrm{CES}}$ and $\phi_{\mathrm{CES}}$]{\includegraphics[height=0.36\textwidth]{Mar_time_all_v2.pdf}} \subfigure[Distribution of $\tau_{1.0}$]{\includegraphics[height=0.36\textwidth]{Mar_time_all_J2000_1.png}} \subfigure[$\tau_{0.1}$ as a function of $\theta_{\mathrm{CES}}$ and $\phi_{\mathrm{CES}}$]{\includegraphics[height=0.354\textwidth]{Mar_time_all_01_v2.pdf}} \subfigure[Distribution of $\tau_{0.1}$]{\includegraphics[height=0.364\textwidth]{Mar_time_all_J2000_01.png}} \caption{$\tau_{1.0}$ and $\tau_{0.1}$ for Mars. For the description of each map, see Figure \ref{fig-times-Mercury}.} \label{fig-times-Mars} \end{figure*} \begin{figure*}[!htb] \centering \subfigure[$\tau_{1.0}$ as a function of $\theta_{\mathrm{CES}}$ and $\phi_{\mathrm{CES}}$]{\includegraphics[height=0.354\textwidth]{Uranus_time_all_v2.pdf}} \subfigure[Distribution of $\tau_{1.0}$]{\includegraphics[height=0.368\textwidth]{Uranus_time_all_J2000_1.png}} \subfigure[$\tau_{0.1}$ as a function of $\theta_{\mathrm{CES}}$ and $\phi_{\mathrm{CES}}$]{\includegraphics[height=0.346\textwidth]{Uranus_time_all_01_v2.pdf}} \subfigure[Distribution of $\tau_{0.1}$]{\includegraphics[height=0.365\textwidth]{Uranus_time_all_J2000_01.png}} \caption{$\tau_{1.0}$ and $\tau_{0.1}$ for Uranus. For the description of each map, see Figure \ref{fig-times-Mercury}.} \label{fig-times-Uranus} \end{figure*} \begin{figure*}[!htb] \centering \subfigure[$\tau_{1.0}$ as a function of $\theta_{\mathrm{CES}}$ and $\phi_{\mathrm{CES}}$]{\includegraphics[height=0.354\textwidth]{Neptune_time_all_v2.pdf}} \subfigure[Distribution of $\tau_{1.0}$]{\includegraphics[height=0.368\textwidth]{Neptune_time_all_J2000_1.png}} \subfigure[$\tau_{0.1}$ as a function of $\theta_{\mathrm{CES}}$ and $\phi_{\mathrm{CES}}$]{\includegraphics[height=0.35\textwidth]{Neptune_time_all_01_v2.pdf}} \subfigure[Distribution of $\tau_{0.1}$]{\includegraphics[height=0.371\textwidth]{Neptune_time_all_J2000_01.png}} \caption{$\tau_{1.0}$ and $\tau_{0.1}$ for Neptune. For the description of each map, see Figure \ref{fig-times-Mercury}.} \label{fig-times-Neptune} \end{figure*} The effect of Mars is slightly greater than that of Mercury (see Figure \ref{fig-times-Mars}). The minimum values of $\tau_{1.0}$ and $\tau_{0.1}$ are 3.9 and 38.8 hr, respectively. In most cases, $\tau_{1.0}$ is less than 10 hr and $\tau_{0.1}$ is less than 100 hr. The maximum values of $\tau_{1.0}$ and $\tau_{0.1}$ are 379.4 and 1585.3 hr, respectively. Large values of $\tau_{1.0}$ and $\tau_{0.1}$ appear at $\phi_{\mathrm{CES}} \sim 10^{\circ}$--20$^{\circ}$ and $\sim 0^{\circ}$--40$^{\circ}$, respectively. Mars's gravitational field can affect astrometry over single-epoch observations at precision levels of both 1.0 and 0.1 $\mu$as, and will likely affect multi-epoch observations at a precision level of 0.1 $\mu$as, but will hardly affect multi-epoch observations at a precision level of 1.0 $\mu$as (see Table \ref{tab:duration-effect}). From Figures \ref{fig-times-Uranus} and \ref{fig-times-Neptune}, it can be seen that the effects of Uranus and Neptune are larger than that of Venus. The minimum value of $\tau_{1.0}$ is larger than 994.9 hr (i.e., exceeding one month). The value of $\tau_{0.1}$ is $\sim$ 1 yr, which is the period of the Earth's revolution around the Sun, indicating that the obtained value reaches the calculation limit of this work.
Therefore, for Uranus and Neptune, their gravitational fields can largely affect astrometry for both single-epoch and multi-epoch observations (see Table \ref{tab:duration-effect}). \subsubsection{Category of Others}\label{sec:imapact-regions-duration:other} This category includes Pluto and Ceres. Objects in this category are also outside the Earth's orbit, and thus show trajectories similar to those of the planets outside the Earth's orbit, i.e., a group of ribbons with R.A.(J2000) ranging from 0$^{\circ}$ to 360$^{\circ}$ (see Figures \ref{fig-times-Pluto}--\ref{fig-times-Ceres}). For these two objects, the values of $\beta_{\mathrm{1.0, max}}$ are less than their angular radii seen from the Earth. Therefore, the gravitational fields of Pluto and Ceres do not affect SKA astrometry under a precision level of 1.0 $\mu$as, and we did not calculate $\tau_{1.0}$. \begin{figure*}[!htb] \centering \subfigure[$\tau_{0.1}$ as a function of $\theta_{\mathrm{CES}}$ and $\phi_{\mathrm{CES}}$]{\includegraphics[height=0.36\textwidth]{Pluto_time_all_01_v2.pdf}} \subfigure[Distribution of $\tau_{0.1}$]{\includegraphics[height=0.36\textwidth]{Pluto_time_all_J2000_01.png}} \caption{$\tau_{0.1}$ for Pluto. For the description of each map, see Figure \ref{fig-times-Mercury}.} \label{fig-times-Pluto} \end{figure*} \begin{figure*}[!htb] \centering \subfigure[$\tau_{0.1}$ as a function of $\theta_{\mathrm{CES}}$ and $\phi_{\mathrm{CES}}$]{\includegraphics[height=0.36\textwidth]{Ceres_time_all_01_v2.pdf}} \subfigure[Distribution of $\tau_{0.1}$]{\includegraphics[height=0.36\textwidth]{Ceres_time_all_J2000_01.png}} \caption{$\tau_{0.1}$ for Ceres. For the description of each map, see Figure \ref{fig-times-Mercury}.} \label{fig-times-Ceres} \end{figure*} For Pluto, the maximum value of $\tau_{0.1}$ is 108.0 hr, but it only appears at $\phi_{\mathrm{CES}} \sim 80^{\circ}$ (see Figure \ref{fig-times-Pluto}). The minimum value of $\tau_{0.1}$ exceeds 8.0 min, and values of $\tau_{0.1}$ exceeding 4.0 hr only appear at $\phi_{\mathrm{CES}}$ $\sim$ 60$^{\circ}$--90$^{\circ}$. Therefore, Pluto's gravitational field can affect astrometry over single-epoch observations at 9200 MHz under a precision level of 0.1 $\mu$as, but will hardly affect single-epoch observations at 1400 MHz, and will hardly affect multi-epoch observations (see Table \ref{tab:duration-effect}). For Ceres, the maximum value of $\tau_{0.1}$ is 10.2 hr, but it only appears at $\phi_{\mathrm{CES}} \sim 40^{\circ}$ (see Figure \ref{fig-times-Ceres}). The values of $\tau_{0.1}$ exceeding 8.0 min only appear at $\phi_{\mathrm{CES}}$ $\sim$ 0$^{\circ}$--60$^{\circ}$. Therefore, its gravitational field will hardly affect SKA astrometry (see Table \ref{tab:duration-effect}). \section{Summary and Conclusions}\label{sec-summary} We have calculated the maximum deflection angles caused by 195 objects in the solar system, including the Sun, all planets, 177 satellites (including the Moon), and eight asteroids with $GM > 0.1$ km$^{3}$ s$^{-2}$ (see Table \ref{tab:original-para}). An overview of the deflection angles caused by these objects is as follows (see Table \ref{tab:betas}). \begin{enumerate} \item Twenty-one satellites and six asteroids can deflect light from CESs by more than 0.1 $\mu$as, and 14 satellites and one asteroid (i.e., Ceres) can bend light by more than 1.0 $\mu$as. \item Jupiter and Saturn can bend light by more than 0.1 $\mu$as for sources up to 100$^{\circ}$ away, and by more than 1.0 $\mu$as up to dozens of degrees.
The ranges of influence under 0.1 $\mu$as and 1.0 $\mu$as for the other planets (excluding the Earth) and the Moon are $1.4^{\circ}$--63.6$^{\circ}$ and 0.1$^{\circ}$--7.1$^{\circ}$, respectively. For the satellites and Ceres, by contrast, the corresponding ranges of influence are all from a few arcseconds to a few hundred arcseconds. \end{enumerate} Further computations, regarding the zones and durations of the perturbations caused by the celestial bodies, were made for Mercury, Venus, Mars, Uranus, Neptune, Pluto, and Ceres; the main results are as follows. \begin{enumerate} \item The computed perturbation zones with deflection angles larger than 1.0 $\mu$as are ribbons with a width of several degrees or less, except for Venus, whose corresponding ribbon width is $\sim$ 13$^{\circ}$--16$^{\circ}$. For deflection angles larger than 0.1 $\mu$as, the widths increase to more than a dozen degrees for Uranus and Neptune, increase to close to 80$^{\circ}$ for Venus, and remain almost unchanged for Mercury, Mars, Pluto, and Ceres (see Table \ref{tab:coverage} and Figure \ref{fig-coverage}). \item The computed durations of the perturbations posed by the gravitational fields of solar system objects (see Figures \ref{fig-times-Mercury}--\ref{fig-times-Ceres}) indicate that the gravitational field of Ceres will hardly affect SKA astrometry, but that of Pluto may have a small effect on the astrometry of single-epoch observations under a precision level of 0.1 $\mu$as. The gravitational fields of Mercury and Mars may have a great influence on astrometry for single-epoch observations at precision levels of both 1.0 and 0.1 $\mu$as, and those of the other objects can largely affect astrometry for both single-epoch and multi-epoch observations (see Table \ref{tab:duration-effect}). \end{enumerate} \acknowledgments We would like to thank the anonymous referee for the helpful comments and suggestions that helped to improve the paper. This work was sponsored by the Natural Science Foundation of Jiangsu Province (Grant No. BK20210999), the Entrepreneurship and Innovation Program of Jiangsu Province, the NSFC (Grants Nos. 12203104, 11933011, and 11873019), and the Key Laboratory for Radio Astronomy, CAS.
{ "timestamp": "2022-09-20T02:23:04", "yymm": "2209", "arxiv_id": "2209.08702", "language": "en", "url": "https://arxiv.org/abs/2209.08702" }
\section{Introduction} \label{sec:Introduction} \emph{Bent functions}, first proposed by Rothaus in 1976 \cite{Rothaus1976}, have been extensively investigated during the past few decades due to their important applications in cryptography \cite{Carlet2021}, sequence design \cite{Olsen1982, Abdukhalikov2021} and coding theory \cite{Calderbank1986, Ding2016, Pott2011}. The most distinct and useful characterization of bent functions is the so-called flat absolute Walsh spectrum, i.e., all spectral values under the Walsh-Hadamard transform have the same absolute value. As cryptographic primitives, bent functions have the maximum distance to the set of all affine functions. This implies that bent functions can contribute the best confusion effect in cryptosystems. It is known that bent functions exist only on even numbers of variables, and their algebraic degrees are at most $\frac{n}{2}$ ($n$ is the number of variables, similarly hereinafter). Up to now, many methods for constructing bent functions have been proposed, of which a non-exhaustive list is \cite{Carlet1994, Carlet2016, Dillon1974, Dobbertin1995, Gao2012, Hodzic2020, McFarland1973, Mesnager2014, Su2017, Su2020, Tang2017, Zhang2020, Zhang2017}. The book \cite{Mesnager2016} provides a detailed survey of the results on bent functions. In \cite{Parker2000, Riera2006}, the bent criterion was generalized by using a transform composed of the tensor product of the identity matrix, the Walsh-Hadamard matrix and the nega-Hadamard matrix. A Boolean function is called \emph{negabent} if it has a flat absolute spectrum under the nega-Hadamard transform. Interestingly, negabent functions exist on both even and odd numbers of variables, and all affine functions are negabent \cite{Parker2007}. As with bent functions, the maximum possible algebraic degree of a negabent function is $\lceil \frac{n}{2} \rceil$ \cite{Stanica2012}. Some constructions and characterizations of negabent functions have been addressed in \cite{Parker2007, Sarkar2012, Schmidt2008, Stanica2010, Stanica2012, Su2013, Zhou2017}. A Boolean function is called \emph{bent-negabent} if it is both bent and negabent. Some constructions of bent-negabent functions in the Maiorana-McFarland class have been proposed in \cite{Parker2007, Schmidt2008, Stanica2012}. The algebraic degrees of these functions are upper bounded by $\lfloor \frac{n}{4} \rfloor +1$. In \cite{Su2013}, a construction of $n$-variable bent-negabent functions with any possible algebraic degree ranging from $2$ to the maximum $\frac{n}{2}$ has been presented. All bent-negabent functions generated from this construction are in the completed Maiorana-McFarland class. In \cite{Zhang2015}, bent-negabent functions outside the completed Maiorana-McFarland class have been constructed under the framework of the indirect sum construction. \emph{Symmetric} Boolean functions are a subclass of Boolean functions whose outputs are invariant under all permutations of the inputs. It has been proved that a symmetric function is bent if and only if it is quadratic \cite{Savicky1994}, and that a symmetric function is negabent if and only if it is affine \cite{Sarkar2009}. This directly implies the nonexistence of bent-negabent functions in the symmetric class. \emph{Rotation symmetric} Boolean functions are a subclass of Boolean functions whose outputs are invariant under cyclic shifts of the inputs. To date, several constructions of rotation symmetric bent functions have been addressed in \cite{Su2017, Gao2012, Tang2017}.
Nevertheless, whether there exist bent-negabent functions in the rotation symmetric class is still an open problem. The nonexistence of rotation symmetric bent-negabent functions has been investigated under several conditions \cite{Mandal2018, Mandal2020, Sarkar2015}. Moreover, recently in \cite{Sun2022}, it has been proved that there do not exist any rotation symmetric bent-negabent functions for almost all even numbers of variables. In \cite{Kavut2007321}, the rotation symmetric property was generalized to the \emph{$k$-rotation symmetric} property. A Boolean function is called $k$-rotation symmetric if it is invariant under the $k$-cyclic shift of the inputs, but not under the $l$-cyclic shift for all $1 \le l \le k-1$. Several constructions of 2-rotation symmetric bent functions have been proposed in \cite{Su2017, Su2020}. However, no constructions of bent-negabent functions in the generalized rotation symmetric class have been given until now. Among all methods for constructing bent functions, an effective one is to modify the truth tables of known bent functions. That is to say, given an input set and a known bent function, the outputs of the new function are complements of those of the given function for inputs in the set, and the same for inputs outside the set. This method was first proposed in \cite{Su2017} to construct rotation symmetric bent functions with any possible algebraic degrees by modifying the truth table of Rothaus' bent function. In \cite{Zhang2020}, a different construction was also given by modifying the truth table of Rothaus' bent function. Recently in \cite{Su2020}, three generic constructions of bent functions were presented by using linear subspaces and their orthogonal complement subspaces to construct the input sets, and modifying the truth tables of Rothaus' bent function and of the Maiorana-McFarland class of bent functions, among which the first construction contains the constructions in \cite{Su2017} and \cite{Zhang2020} as special cases. However, negabentness has not been considered in \cite{Su2020}. We find that the first construction cannot give rise to negabent functions. For the other two constructions, we have run simulations which show that some bent functions from these constructions are not negabent. In this paper, we generalize the constructions in \cite{Su2020} in order to obtain bent-negabent functions and 2-rotation symmetric bent-negabent functions. For the constructions in terms of modifying the truth tables of known bent-negabent functions, we analyze some sufficient conditions on the \emph{fragmentary Walsh-Hadamard transform} and the \emph{fragmentary nega-Hadamard transform}, which will be formally defined in Section \ref{section:insight}, such that the produced functions are bent-negabent. Using linear subspaces and coset leaders, we construct four vector sets, over which the two fragmentary transforms of given quadratic bent-negabent functions satisfy the required conditions, so we obtain four constructions of bent-negabent functions. First, based on a class of quadratic bent-negabent functions on $4t$ variables, we propose two methods for modifying the truth table of such a function to obtain new bent-negabent functions on $4k$ ($t = k$) and $8k$ ($t=2k$) variables, respectively. Second, based on a class of quadratic bent-negabent functions on $4t+2$ variables, we give two constructions of bent-negabent functions on $4k+2$ ($t=k$) and $8k+2$ ($t=2k$) variables, respectively.
All constructions of bent-negabent functions mentioned above use quadratic bent-negabent functions instead of Rothaus' bent functions, so they are not special cases of the former two generic constructions in \cite{Su2020}. Although the starting functions we use are in the Maiorana-McFarland class, our constructions are still not special cases of the third generic construction in \cite{Su2020}, because the required conditions in \cite{Su2020} are not satisfied. We will address this in detail in Section \ref{section:comparison}. We also investigate the necessary and sufficient conditions such that the constructed bent-negabent functions have the maximum algebraic degree. Finally, we present a construction of 2-rotation symmetric bent-negabent functions with any possible algebraic degrees by modifying the truth tables of a class of quadratic 2-rotation symmetric bent-negabent functions. Furthermore, the algebraic normal forms and duals of all these newly constructed bent-negabent functions are determined. The remainder of this paper is organized as follows. In Section \ref{section:preliminaries}, we review some definitions and notations of Boolean functions, and some basic properties of linear subspaces and cosets. In Section \ref{section:insight}, we introduce a new insight into the construction of bent-negabent functions. In Section \ref{section:bent_negabent_4k}, we present two constructions of bent-negabent functions on $4k$ and $8k$ variables by modifying the truth tables of a class of quadratic bent-negabent functions. In Section \ref{section:bent_negabent_4k+2}, we provide two constructions of bent-negabent functions on $4k+2$ and $8k+2$ variables. We also analyze the algebraic normal forms, algebraic degrees and duals of these bent-negabent functions in their respective corresponding sections. In Section \ref{section:2_RS_bent_negabent}, we give a construction of 2-rotation symmetric bent-negabent functions with any possible algebraic degrees. In Section \ref{section:comparison}, we compare our constructions to some known results. Section \ref{section:conclusion} concludes this paper. \section{Preliminaries} \label{section:preliminaries} Let $\F_2, \R$ and $\C$ be the binary field, the real number field and the complex field, respectively. We shall use $+$ to denote the addition in $\F_2$, $\R$ and $\C$, the intended addition being determined by the context. Let $\mathbb{F}_2^n$ be the $n$-dimensional vector space over $\mathbb{F}_2$, where $n$ is a positive integer. Given vectors $\bm{\alpha} = (a_0, \cdots, a_{n-1})$ and $\bm{\beta} = (b_0, \cdots, b_{n-1})$ in $\mathbb{F}_2^n$, we say that $\bm{\alpha}$ covers $\bm{\beta}$ if $a_i \ge b_i$ for all $0 \le i \le n-1$, and denote this relation by $\bm{\alpha} \succeq \bm{\beta}$. The usual scalar (or dot) product over $\mathbb{F}_2$ and the Hadamard (or term-wise) product of $\bm{\alpha}$ and $\bm{\beta}$, are respectively defined by \begin{align*} & \bm{\alpha} \cdot \bm{\beta} = a_0b_0 + \cdots + a_{n-1}b_{n-1}, \\ & \bm{\alpha} * \bm{\beta} = (a_0b_0, \cdots, a_{n-1}b_{n-1}). \end{align*} We shall denote by $\bm{0}_n$ ($\bm{1}_n$, respectively) the all-zero vector (all-one vector, respectively) in $\mathbb{F}_2^n$, and $\bm{\mathrm{e}}_n^{\varepsilon}$ the vector in $\mathbb{F}_2^n$ with $\varepsilon \in \F_2$ in the first position and $0$ elsewhere, i.e., $\bm{\mathrm{e}}_n^{\varepsilon} = (\varepsilon, \bm{0}_{n-1})$.
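For concreteness, these conventions can be realized in a few lines of Python. The following is a minimal sketch of the operations just defined, with vectors represented as 0/1 tuples; the function names are ours and serve only to fix the conventions:
\begin{verbatim}
def dot(a, b):
    """Scalar (dot) product over F_2."""
    return sum(x & y for x, y in zip(a, b)) & 1

def hadamard(a, b):
    """Hadamard (term-wise) product."""
    return tuple(x & y for x, y in zip(a, b))

def covers(a, b):
    """True if a covers b, i.e., a_i >= b_i for all i."""
    return all(x >= y for x, y in zip(a, b))

# Example in F_2^4:
a, b = (1, 1, 0, 1), (1, 0, 0, 1)
print(dot(a, b))       # 0, since 1 + 0 + 0 + 1 = 0 over F_2
print(hadamard(a, b))  # (1, 0, 0, 1)
print(covers(a, b))    # True
\end{verbatim}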
Given a complex number $c = s + r \imath \in \mathbb{C}$, where $s, r \in \mathbb{R}$ and $\imath = \sqrt{-1}$, we denote its absolute value by $|c| = \sqrt{s^2 + r^2}$. For a nonempty subset $H$ of $\mathbb{F}_2^n$, if $\albu + \bebu \in H$ for any vectors $\albu, \bebu \in H$, then $H$ is called a \emph{linear subspace} of $\mathbb{F}_2^n$. The set $H^{\perp} = \{\xbu \in \mathbb{F}_2^n : \albu \cdot \xbu = 0,\ \forall \ \albu \in H \}$ is called the \emph{orthogonal complement subspace} of $H$. Given $\bm{\alpha} \in \mathbb{F}_2^n$ and a linear subspace $H$ of $\mathbb{F}_2^n$, a \emph{coset} of $H$ in $\F_2^n$, denoted by $C_{\albu} (H)$, is defined as the affine subspace \begin{align*} C_{\albu} (H) = \bm{\alpha} + H = \{\albu + \bebu : \bebu \in H \}. \end{align*} Then the cosets of $H$ partition $\mathbb{F}_2^n$. Let $\dim H$ represent the dimension of $H$, i.e., $2^{\dim H} = |H|$, where $|H|$ denotes the cardinality of $H$. Then, we have \begin{align*} \mathbb{F}_2^n = \bigcup_{i=1}^{2^{n-\dim H}} C_{\albu_i} (H), \end{align*} where $C_{\albu_i} (H) \cap C_{\albu_j} (H) = \emptyset$ if $i \ne j$, and $\{\bm{\alpha}_i \}$ is called a \emph{complete set of coset representatives} of $H$ in $\F_2^n$, denoted by $R_H = \{\bm{\alpha}_i : i = 1, \cdots, 2^{n - \dim H} \}$. A Boolean function on $n$ variables is a mapping from $\mathbb{F}_2^n$ to $\mathbb{F}_2$. By convention, we shall denote the set of all $n$-variable Boolean functions by $\mathcal{B}_n$. The most basic representation of $f \in \mathcal{B}_n$ is the truth table, which is a sequence of all outputs of $f$ with inputs in lexicographic order, i.e., \[ f = [f(0, \cdots, 0), f(1, 0, \cdots, 0), f(0, 1, \cdots, 0), \cdots, f(1, \cdots, 1)]. \] Any $f \in \mathcal{B}_n$ can be uniquely expressed by the multivariate polynomial representation, called the algebraic normal form (ANF): \[ f(\bm{\mathrm{x}}) = \sum_{\bm{\mathrm{u}} \in \mathbb{F}_2^n} c_{\bm{\mathrm{u}}} \bm{\mathrm{x}}^{\bm{\mathrm{u}}}, \] where $\bm{\mathrm{x}} = (x_0, \cdots, x_{n-1}), \bm{\mathrm{u}} = (u_0, \cdots, u_{n-1}) \in \mathbb{F}_2^n$, $c_{\bm{\mathrm{u}}} \in \mathbb{F}_2$, and $\bm{\mathrm{x}}^{\bm{\mathrm{u}}} = \prod_{i=0}^{n-1} {x_i}^{u_i}$. The algebraic degree of $f$ is defined as $\deg(f) = \max_{\bm{\mathrm{u}} \in \mathbb{F}_2^n} \{\wt(\bm{\mathrm{u}}) : c_{\bm{\mathrm{u}}} \ne 0 \}$, where $\wt(\bm{\mathrm{u}}) = \sum_{i=0}^{n-1} u_i$ is the Hamming weight of $\bm{\mathrm{u}}$. Given a subset $S$ of $\mathbb{F}_2^n$, its characteristic function, denoted by $\chi_S$, is defined as the $n$-variable Boolean function $ \chi_S(\bm{\mathrm{x}}) = \begin{cases} 1,\ \bm{\mathrm{x}} \in S, \\ 0,\ \text{otherwise}. \end{cases} $ The Walsh-Hadamard transform and the nega-Hadamard transform of $f \in \mathcal{B}_n$ at $\bm{\mathrm{u}} \in \mathbb{F}_2^n$, denoted by $\W_f(\bm{\mathrm{u}})$ and $\N_f(\bm{\mathrm{u}})$, are respectively defined by \begin{align} & \W_f(\bm{\mathrm{u}}) = \sum_{\bm{\mathrm{x}} \in \mathbb{F}_2^n} (-1)^{f(\bm{\mathrm{x}}) + \bm{\mathrm{u}} \cdot \bm{\mathrm{x}}}, \label{align:WHT}\\ & \N_f(\bm{\mathrm{u}}) = \sum_{\bm{\mathrm{x}} \in \mathbb{F}_2^n} (-1)^{f(\bm{\mathrm{x}}) + \bm{\mathrm{u}} \cdot \bm{\mathrm{x}}} \cdot \imath^{\wt(\bm{\mathrm{x}})}. \label{align:NHT} \end{align} \begin{definition} \rm Let $n$ be a positive even integer. A Boolean function $f$ on $n$ variables is called \emph{bent} if $|\W_f(\bm{\mathrm{u}})| = 2^{\frac{n}{2}}$ for all $\ubu \in \F_2^n$.
\end{definition} \begin{definition} \rm Let $n$ be a positive integer. A Boolean function $f$ on $n$ variables is called \emph{negabent} if $|\N_f(\bm{\mathrm{u}})| = 2^{\frac{n}{2}}$ for all $\ubu \in \F_2^n$. \end{definition} For any bent function $f \in \mathcal{B}_n$, its \emph{dual}, denoted by $\tilde{f}\in \mathcal{B}_n$ and defined by \begin{align} 2^{\frac{n}{2}}(-1)^{\tilde{f}(\bm{\mathrm{x}})} = \W_f (\bm{\mathrm{x}}),\ \text{for all}\ \bm{\mathrm{x}} \in \mathbb{F}_2^n, \label{align:Dual_def} \end{align} is also bent. A Boolean function is called \emph{bent-negabent} if it is both bent and negabent. From \cite[Theorem 11]{Parker2007} we know that the dual of a bent-negabent function is also bent-negabent. The Maiorana-McFarland class of bent functions \cite{McFarland1973, Dillon1974} contains all $2m$-variable functions of the following form: \begin{align} f(\bm{\mathrm{x}}, \bm{\mathrm{y}}) = \bm{\mathrm{x}} \cdot \pi(\bm{\mathrm{y}}) + \varphi(\bm{\mathrm{y}}),\ \text{for all}\ \bm{\mathrm{x}}, \bm{\mathrm{y}} \in \mathbb{F}_2^{m}, \label{align:MM_class} \end{align} where $\pi$ is any permutation on $\mathbb{F}_2^{m}$ and $\varphi$ is any Boolean function on $m$ variables. It is known that the dual of $f$ in (\ref{align:MM_class}) is given by \begin{align} \tilde{f}(\bm{\mathrm{x}}, \bm{\mathrm{y}}) = \bm{\mathrm{y}} \cdot \pi^{-1}(\bm{\mathrm{x}}) + \varphi(\pi^{-1}(\bm{\mathrm{x}})), \label{align:dual_MM} \end{align} where $\pi^{-1}$ is the inverse of $\pi$ \cite{Dillon1974}. Given a vector $\bm{\mathrm{x}} = (x_0, \cdots, x_{n-1}) \in \mathbb{F}_2^n$ and $0 \le l \le n-1$, we shall denote the $l$-cyclic shift of $\bm{\mathrm{x}}$ by $\rho_n^{l}(\bm{\mathrm{x}}) = (x_l, \cdots, x_{n-1}, x_0, \cdots, x_{l-1})$. \begin{definition} \rm For $f \in \mathcal{B}_n$, given an integer $k|n$, if $f(\rho_n^k(\bm{\mathrm{x}})) = f(\bm{\mathrm{x}})$ for all $\bm{\mathrm{x}} \in \mathbb{F}_2^n$, and for each integer $0 < l < k$ there is at least one $\bm{\mathrm{x}} \in \mathbb{F}_2^n$ such that $f(\rho_n^l(\bm{\mathrm{x}})) \ne f(\bm{\mathrm{x}})$, then $f$ is called a \emph{$k$-rotation symmetric Boolean function}. In particular, 1-rotation symmetric Boolean functions are the usual rotation symmetric Boolean functions. \end{definition} \section{New Insight into the Construction of Bent-Negabent Functions} \label{section:insight} In this section, we first give the definitions of the fragmentary Walsh-Hadamard transform and the fragmentary nega-Hadamard transform of an $n$-variable Boolean function $f$ over $T$, where $T\subset \F_2^n$. Then, based on these notions, we provide a new insight into the construction of bent-negabent functions. The notion of the fragmentary Walsh-Hadamard transform, which was introduced in \cite{Zhang2019} to construct resilient Boolean functions on odd numbers of variables with strictly almost optimal nonlinearity, is presented as follows. \begin{definition} \rm (\cite[Definition 1]{Zhang2019}) \label{definition:fragWalsh} Given a function $f \in \mathcal{B}_n$ and a subset $T$ of $\F_2^n$, the \emph{fragmentary Walsh-Hadamard transform} of $f$ over $T$ at $\ubu \in \F_2^n$, denoted by $\W_{f, T}(\ubu)$, is defined by \begin{align} & \W_{f, T}(\bm{\mathrm{u}}) = \sum_{\bm{\mathrm{x}} \in T} (-1)^{f(\bm{\mathrm{x}}) + \bm{\mathrm{u}} \cdot \bm{\mathrm{x}}}. \label{align:PWHT} \end{align} \end{definition} Similarly to Definition \ref{definition:fragWalsh}, we define the fragmentary nega-Hadamard transform.
\begin{definition} \rm Given a function $f \in \mathcal{B}_n$ and a subset $T$ of $\F_2^n$, the \emph{fragmentary nega-Hadamard transform} of $f$ over $T$ at $\ubu \in \F_2^n$, denoted by $\N_{f, T}(\ubu)$, is defined by \begin{align} \N_{f, T}(\bm{\mathrm{u}}) = \sum_{\bm{\mathrm{x}} \in T} (-1)^{f(\bm{\mathrm{x}}) + \bm{\mathrm{u}} \cdot \bm{\mathrm{x}}} \cdot \imath^{\wt(\bm{\mathrm{x}})}. \label{align:PNHT} \end{align} \end{definition} Let $n$ be an even integer and $\xbu \in \F_2^n$. Given a Boolean function $f_0 \in \mathcal{B}_n$ and a nonempty subset $T$ of $\mathbb{F}_2^n$, we use $T$ to modify the truth table of $f_0$ to present a construction of $n$-variable Boolean functions as \begin{align} \label{align:f_frame_construction} f(\bm{\mathrm{x}}) = f_0(\bm{\mathrm{x}}) + \chi_T(\bm{\mathrm{x}}) = \begin{cases} f_0(\bm{\mathrm{x}}) + 1,\ \bm{\mathrm{x}} \in T, \\ f_0(\bm{\mathrm{x}}),\ \hspace{0.7cm}\text{otherwise}. \end{cases} \end{align} \begin{theorem} \rm \label{theorem:frame_construction} With the above notations, we have the following results. \begin{enumerate} [(1)] \item Given a bent function $f_0$, $f$ in (\ref{align:f_frame_construction}) is bent if $\W_{f_0, T} (\ombu) = c_{\ombu} \W_{f_0} (\ombu)$, where $c_{\ombu}\in \{0, 1 \}$, for any $\ombu \in \F_2^n$. Moreover, if $f$ is bent, the dual of $f$ is given by $\tilde{f}(\xbu) = \tilde{f}_0(\xbu) + \chi_{\{\ombu \in \F_2^n : c_{\ombu} = 1\}} (\xbu)$. \item Given a negabent function $f_0$, $f$ in (\ref{align:f_frame_construction}) is negabent if $\N_{f_0, T} (\ombu) = c_{\ombu} \N_{f_0} (\ombu)$, where $c_{\ombu} \in \{0, 1, \frac{1\pm \imath}{2} \}$, for any $\ombu \in \F_2^n$. \end{enumerate} \end{theorem} \begin{proof} By (\ref{align:WHT}), the Walsh-Hadamard transform of $f$ at $\ombu \in \F_2^n$ is given by \begin{align*} \W_f(\ombu) =& \sum_{\xbu \in \F_2^n \setminus T} (-1)^{f_0(\xbu) + \ombu \cdot \xbu} + \sum_{\xbu \in T} (-1)^{f_0(\xbu) + 1 + \ombu\cdot \xbu} \notag \\ =& \W_{f_0} (\ombu) - 2 \W_{f_0, T} (\ombu). \end{align*} Then we have \begin{align*} \W_f(\ombu) = \begin{cases} \W_{f_0} (\ombu),\ \hspace{0.3cm} \W_{f_0, T} (\ombu) = 0, \mathrm{i.e.}, c_{\ombu} = 0, \\ - \W_{f_0} (\ombu),\ \W_{f_0, T} (\ombu) = \W_{f_0} (\ombu), \mathrm{i.e.}, c_{\ombu} = 1. \end{cases} \end{align*} Hence, $f$ is bent if $c_{\ombu} \in \{0, 1 \}$ for any $\ombu \in \F_2^n$. Together with the definition of dual in (\ref{align:Dual_def}), we know \begin{align*} \tilde{f}(\xbu) = \begin{cases} \tilde{f}_0(\xbu),\ \hspace{0.6cm} c_{\xbu} = 0, \\ \tilde{f}_0(\xbu) + 1,\ c_{\xbu} = 1. \end{cases} \end{align*} Then the assertion (1) is established. Similarly, $\N_f$ can be expressed by $\N_{f_0}$ and $\N_{f_0, T}$ as \begin{align*} \N_f(\ombu) = \N_{f_0}(\ombu) - 2\N_{f_0, T}(\ombu),\ \text{for all}\ \ombu \in \F_2^n. \end{align*} Then we have \begin{align*} \N_f(\ombu) = \begin{cases} \N_{f_0} (\ombu),\ \ \ \ \N_{f_0, T} (\ombu) = 0, \\ - \N_{f_0} (\ombu),\ \N_{f_0, T} (\ombu) = \N_{f_0}(\ombu), \\ \imath \N_{f_0}(\ombu),\ \ \ \N_{f_0, T} (\ombu) = \frac{1-\imath}{2}\N_{f_0}(\ombu), \\ - \imath \N_{f_0}(\ombu),\ \N_{f_0, T} (\ombu) = \frac{1+\imath}{2}\N_{f_0}(\ombu). \end{cases} \end{align*} Hence, the assertion (2) is established.
\end{proof} In the following two sections, with quadratic bent-negabent functions serving as $f_0$, we construct suitable sets $T$, over which the fragmentary Walsh-Hadamard transform and the fragmentary nega-Hadamard transform of $f_0$ satisfy both conditions in Theorem \ref{theorem:frame_construction}, so that the functions produced from (\ref{align:f_frame_construction}) are bent-negabent. \section{Constructions of Bent-Negabent Functions on $4t$ Variables} \label{section:bent_negabent_4k} In this section, we present two constructions of bent-negabent functions on $4k$ variables and $8k$ variables by modifying the truth tables of a class of quadratic bent-negabent functions in the Maiorana-McFarland class. The ANFs, algebraic degrees and duals of the constructed bent-negabent functions are also analyzed. We first review a characterization of bent-negabent functions in the Maiorana-McFarland class \cite{Stanica2012}. \begin{theorem} \rm (\cite[Theorem 17]{Stanica2012}) \label{theorem:MM_BentNegabent} Let $\pi$ be a weight-sum invariant permutation on $\F_2^m$, i.e., $\wt(\bm{\mathrm{x}}+ \bm{\mathrm{y}}) = \wt(\pi(\bm{\mathrm{x}}) + \pi(\bm{\mathrm{y}}))$ for all $\bm{\mathrm{x}}, \bm{\mathrm{y}} \in \mathbb{F}_2^m$. Then $f$ in (\ref{align:MM_class}) is bent-negabent if and only if $\varphi$ is bent. \end{theorem} Since bent functions only exist on even numbers of variables, in Theorem \ref{theorem:MM_BentNegabent}, $m$ has to be even. Hence, bent-negabent functions are produced from Theorem \ref{theorem:MM_BentNegabent} only on $4t$ variables, where $t$ is a positive integer. Let $m = 2t$. We shall denote $\bm{\mathrm{x}}' = (x_0, \cdots, x_{t-1}),\ \bm{\mathrm{x}}'' = (x_t, \cdots, x_{m-1}), \bm{\mathrm{y}}' = (y_0, \cdots, y_{t-1}),\ \bm{\mathrm{y}}'' = (y_t, \cdots, y_{m-1}) \in \mathbb{F}_2^t$, and $\bm{\mathrm{x}} = (\bm{\mathrm{x}}', \bm{\mathrm{x}}''),\ \bm{\mathrm{y}} = (\bm{\mathrm{y}}', \bm{\mathrm{y}}'') \in \mathbb{F}_2^m$. In Theorem \ref{theorem:MM_BentNegabent}, by setting $\pi$ as the identity mapping, i.e., $\pi(\ybu) = \ybu$ (obviously a weight-sum invariant permutation) and $\varphi$ as Rothaus' bent function, i.e., $\varphi(\bm{\mathrm{y}}) = \bm{\mathrm{y}}'\cdot \bm{\mathrm{y}}''$, we immediately obtain a class of quadratic $4t$-variable bent-negabent functions of the following form: \begin{align} g_0(\bm{\mathrm{x}}, \bm{\mathrm{y}}) = \bm{\mathrm{x}}\cdot \bm{\mathrm{y}} + \bm{\mathrm{y}}'\cdot \bm{\mathrm{y}}'' = \sum_{i=0}^{m-1}x_i y_i + \sum_{i=0}^{t-1}y_i y_{t+i}. \label{align:g_0} \end{align} Similarly, we shall denote $\bm{\mathrm{u}}' = (u_0, \cdots, u_{t-1}), \bm{\mathrm{u}}'' = (u_t, \cdots, u_{m-1}), \bm{\mathrm{v}}' = (v_0, \cdots, v_{t-1}), \bm{\mathrm{v}}'' = (v_t, \cdots, v_{m-1}) \in \mathbb{F}_2^t$, and $\bm{\mathrm{u}} = (\bm{\mathrm{u}}', \bm{\mathrm{u}}''), \bm{\mathrm{v}} = (\bm{\mathrm{v}}', \bm{\mathrm{v}}'') \in \mathbb{F}_2^{m}$. From (\ref{align:dual_MM}) and the proof of \cite[Theorem 17]{Stanica2012}, we know that the Walsh-Hadamard transform and the nega-Hadamard transform of $g_0$ at $(\bm{\mathrm{u}}, \bm{\mathrm{v}}) \in \mathbb{F}_2^{4t}$ are respectively given by \begin{align} & \W_{g_0}(\bm{\mathrm{u}}, \bm{\mathrm{v}}) = 2^{2t} (-1)^{\bm{\mathrm{u}}'\cdot \bm{\mathrm{u}}''+ \bm{\mathrm{u}}\cdot \bm{\mathrm{v}}}, \label{align:Walsh_{g_0}} \\ & \N_{g_0}(\bm{\mathrm{u}}, \bm{\mathrm{v}}) = 2^{2t}(-1)^{(\bm{\mathrm{u}}'+ \bm{\mathrm{v}}')\cdot (\bm{\mathrm{u}}''+ \bm{\mathrm{v}}'')} \imath^{t - \wt(\bm{\mathrm{u}})}.
\label{align:Nega_{g_0}} \end{align} Given a nonempty subset $S$ of $\mathbb{F}_2^{4t}$, we use it to modify the truth table of $g_0$ and, in the sequel, obtain a systematic construction of $4t$-variable Boolean functions as \begin{align} \label{align:g_BentNegabent} g(\bm{\mathrm{x}}, \bm{\mathrm{y}}) = g_0(\bm{\mathrm{x}}, \bm{\mathrm{y}}) + \chi_S(\bm{\mathrm{x}}, \bm{\mathrm{y}}) = \begin{cases} g_0(\bm{\mathrm{x}}, \bm{\mathrm{y}}) + 1,\ (\bm{\mathrm{x}}, \bm{\mathrm{y}}) \in S, \\ g_0(\bm{\mathrm{x}}, \bm{\mathrm{y}}),\ \hspace{0.7cm}\text{otherwise}. \end{cases} \end{align} In the following subsections, we will present two methods to define $S$ such that $g$ in (\ref{align:g_BentNegabent}) is bent-negabent. To avoid confusion, we will use $S_1$ and $S_2$ instead of $S$. To investigate the fragmentary Walsh-Hadamard transform and the fragmentary nega-Hadamard transform, we will frequently use the exponential sums of linear functions, as shown in the following lemma. \begin{lemma} \rm \label{lemma:ex_sum} For any $\bm{\mathrm{\alpha}} \in \mathbb{F}_2^k$, we have \[ \sum_{\bm{\mathrm{x}} \in \mathbb{F}_2^k} (-1)^{\bm{\mathrm{\alpha}} \cdot \bm{\mathrm{x}}} = \begin{cases} 2^k,\ \bm{\mathrm{\alpha}} = \bm{\mathrm{0}}_k, \\ 0,\ \hspace{0.2cm} \text{otherwise}. \end{cases} \] \end{lemma} \subsection{Bent-Negabent Functions on $4k$ Variables} In this subsection, let $k$ be a positive integer, and $t = k$. For $\gmbu = (\gmbu_1, \gmbu_2) \in \F_2^{2k}$, where $\gmbu_i \in \mathbb{F}_2^{k}$ for $i = 1, 2$, we shall define \begin{align*} L_{\gmbu} = \{(\xbu, \ybu) \in \mathbb{F}_2^{4k}: (\xbu', \ybu') \in \mathbb{F}_2^{2k}, \xbu'' = \xbu' + \gmbu_1, \ybu'' = \ybu' + \gmbu_2 \}. \end{align*} Let $\Gamma$ be a nonempty subset of $\mathbb{F}_2^{2k}$, and $S_1$ be a subset of $\mathbb{F}_2^{4k}$ defined by \begin{align} S_1 = \bigcup_{\gmbu \in \Gamma} L_{\gmbu}. \label{align:S1_set} \end{align} We have the following result. \begin{theorem} \rm \label{theorem:4k} Given the subset $S_1$ of $\mathbb{F}_2^{4k}$ defined in (\ref{align:S1_set}) and $g_0 \in \mathcal{B}_{4k}$ defined in (\ref{align:g_0}), the $4k$-variable function $g$ in (\ref{align:g_BentNegabent}) is bent-negabent. \end{theorem} In order to prove this theorem, we need the following lemma, which gives the fragmentary Walsh-Hadamard transform and the fragmentary nega-Hadamard transform of $g_0$ over $S_1$. \begin{lemma} \rm \label{lemma:4k_WHT&NHT} Given the subset $S_1$ of $\mathbb{F}_2^{4k}$ defined in (\ref{align:S1_set}) and $g_0 \in \mathcal{B}_{4k}$ defined in (\ref{align:g_0}), the fragmentary Walsh-Hadamard transform and the fragmentary nega-Hadamard transform of $g_0$ over $S_1$ at $(\bm{\mathrm{u}}, \bm{\mathrm{v}}) \in \mathbb{F}_2^{4k}$ are respectively given by \begin{align} \W_{g_0, S_1}(\bm{\mathrm{u}}, \bm{\mathrm{v}}) =& \begin{cases} \W_{g_0}(\bm{\mathrm{u}}, \bm{\mathrm{v}}),\ \bm{\gamma}_1 = \bm{\mathrm{u}}' + \bm{\mathrm{u}}'' + \bm{\mathrm{v}}'+ \bm{\mathrm{v}}'' + \bm{1}_k, \bm{\gamma}_2 = \bm{\mathrm{u}}'+ \bm{\mathrm{u}}''\ \text{for some}\ \gmbu \in \Gamma, \\ 0,\ \hspace{1.4cm}\text{otherwise}, \label{align:WHT_4k} \end{cases} \\ \N_{g_0, S_1}(\bm{\mathrm{u}}, \bm{\mathrm{v}}) =& \begin{cases} \N_{g_0}(\bm{\mathrm{u}}, \bm{\mathrm{v}}),\ \bm{\gamma}_1 = \bm{\mathrm{v}}' + \bm{\mathrm{v}}'', \bm{\gamma}_2 = \bm{\mathrm{u}}'+ \bm{\mathrm{u}}'' + \bm{\mathrm{v}}' + \bm{\mathrm{v}}'' + \bm{1}_k\ \text{for some}\ \gmbu \in \Gamma, \\ 0,\ \hspace{1.3cm}\text{otherwise}.
\label{align:NHT_4k} \end{cases} \end{align} \end{lemma} \begin{proof} By (\ref{align:PWHT}), the fragmentary Walsh-Hadamard transform of $g_0$ over $S_1$ at $(\bm{\mathrm{u}}, \bm{\mathrm{v}}) \in \mathbb{F}_2^{4k}$ is given by \begin{align*} \W_{g_0, S_1}(\bm{\mathrm{u}}, \bm{\mathrm{v}}) =& \sum_{\gmbu \in \Gamma} \sum_{(\bm{\mathrm{x}}, \bm{\mathrm{y}})\in L_{\gmbu}}(-1)^{\bm{\mathrm{x}}\cdot \bm{\mathrm{y}}+ \bm{\mathrm{y}}'\cdot \bm{\mathrm{y}}'' + \bm{\mathrm{u}}\cdot \bm{\mathrm{x}} + \bm{\mathrm{v}} \cdot \bm{\mathrm{y}}} \\ =& \sum_{\bm{\gamma} \in \Gamma } \sum_{(\xbu', \ybu')\in \mathbb{F}_2^{2k}} (-1)^{((\bm{\mathrm{y}}', \bm{\mathrm{y}}'+ \bm{\gamma}_2)+ \bm{\mathrm{u}})\cdot (\bm{\mathrm{x}}', \bm{\mathrm{x}}' + \bm{\gamma}_1) + \bm{\mathrm{y}}'\cdot (\bm{\mathrm{y}}'+ \bm{\gamma}_2) + \bm{\mathrm{v}}\cdot (\bm{\mathrm{y}}', \bm{\mathrm{y}}'+ \bm{\gamma}_2) } \\ =& \sum_{\bm{\gamma} \in \Gamma }(-1)^{\bm{\mathrm{u}}'' \cdot \bm{\gamma}_1 + \bm{\mathrm{v}}'' \cdot \bm{\gamma}_2 + \bm{\gamma}_1 \cdot \bm{\gamma}_2} \sum_{\bm{\mathrm{x}}'\in \mathbb{F}_2^k} (-1)^{(\bm{\mathrm{u}}' + \bm{\mathrm{u}}'' + \bm{\gamma}_2)\cdot \bm{\mathrm{x}}'} \sum_{\bm{\mathrm{y}}'\in \mathbb{F}_2^k}(-1)^{(\bm{\mathrm{v}}'+ \bm{\mathrm{v}}''+ \bm{\gamma}_1 + \bm{\gamma}_2 + \bm{1}_k)\cdot \bm{\mathrm{y}}' }. \end{align*} We consider the following two cases. (1) If there does not exist a $\gmbu$ in $\Gamma$ such that $\bm{\mathrm{u}}' + \bm{\mathrm{u}}'' + \bm{\gamma}_2 = \bm{0}_k$ and $\bm{\mathrm{v}}'+ \bm{\mathrm{v}}''+ \bm{\gamma}_1 + \bm{\gamma}_2 + \bm{1}_k = \bm{0}_k$, then we have $\W_{g_0, S_1} (\bm{\mathrm{u}}, \bm{\mathrm{v}}) = 0$ by Lemma \ref{lemma:ex_sum}. (2) If there exists a $\gmbu$ in $\Gamma$ such that $\bm{\mathrm{u}}' + \bm{\mathrm{u}}'' + \bm{\gamma}_2 = \bm{0}_k$ and $\bm{\mathrm{v}}'+ \bm{\mathrm{v}}''+ \bm{\gamma}_1 + \bm{\gamma}_2 + \bm{1}_k = \bm{0}_k$, i.e., $\bm{\gamma}_1 = \bm{\mathrm{u}}' + \bm{\mathrm{u}}'' + \bm{\mathrm{v}}'+ \bm{\mathrm{v}}'' + \bm{1}_k$, $\bm{\gamma}_2 = \bm{\mathrm{u}}'+ \bm{\mathrm{u}}''$, it holds that \begin{align*} \bm{\mathrm{u}}'' \cdot \bm{\gamma}_1 + \bm{\mathrm{v}}''\cdot \bm{\gamma}_2 + \bm{\gamma}_1 \cdot \bm{\gamma}_2 =& \bm{\mathrm{u}}' \cdot \bm{\gamma}_1 + \bm{\mathrm{v}}'' \cdot \bm{\gamma}_2 \\ =& \bm{\mathrm{u}}'\cdot \bm{\mathrm{u}}'' + \bm{\mathrm{u}}' \cdot (\bm{\mathrm{v}}' + \bm{\mathrm{v}}'') + \bm{\mathrm{v}}'' \cdot (\bm{\mathrm{u}}'+ \bm{\mathrm{u}}'')\\ =& \bm{\mathrm{u}}'\cdot \bm{\mathrm{u}}''+ \bm{\mathrm{u}}\cdot \bm{\mathrm{v}}. \end{align*} Together with (\ref{align:Walsh_{g_0}}), we have $\W_{g_0, S_1} (\bm{\mathrm{u}}, \bm{\mathrm{v}}) = 2^{2k}(-1)^{\bm{\mathrm{u}}'\cdot \bm{\mathrm{u}}''+ \bm{\mathrm{u}}\cdot \bm{\mathrm{v}}} = \W_{g_0}(\bm{\mathrm{u}}, \bm{\mathrm{v}})$. From the two cases discussed above, (\ref{align:WHT_4k}) follows immediately.
By (\ref{align:PNHT}), the fragmentary nega-Hadamard transform of $g_0$ over $S_1$ at $(\bm{\mathrm{u}}, \bm{\mathrm{v}}) \in \mathbb{F}_2^{4k}$ is given by \begin{align*} \N_{g_0, S_1}(\bm{\mathrm{u}}, \bm{\mathrm{v}}) =& \sum_{\gmbu \in \Gamma} \sum_{(\bm{\mathrm{x}}, \bm{\mathrm{y}})\in L_{\gmbu}}(-1)^{\bm{\mathrm{x}}\cdot \bm{\mathrm{y}}+ \bm{\mathrm{y}}'\cdot \bm{\mathrm{y}}'' + \bm{\mathrm{u}}\cdot \bm{\mathrm{x}} + \bm{\mathrm{v}} \cdot \bm{\mathrm{y}}}\imath^{\wt(\bm{\mathrm{x}}, \bm{\mathrm{y}})} \\ =& \sum_{\bm{\gamma} \in \Gamma} \sum_{(\xbu', \ybu') \in \mathbb{F}_2^{2k}} (-1)^{((\bm{\mathrm{y}}', \bm{\mathrm{y}}'+ \bm{\gamma}_2)+ \bm{\mathrm{u}})\cdot (\bm{\mathrm{x}}', \bm{\mathrm{x}}' + \bm{\gamma}_1) + \bm{\mathrm{y}}'\cdot (\bm{\mathrm{y}}'+ \bm{\gamma}_2) + \bm{\mathrm{v}}\cdot (\bm{\mathrm{y}}', \bm{\mathrm{y}}'+ \bm{\gamma}_2) } \imath^{\wt(\bm{\mathrm{x}}', \bm{\mathrm{x}}' + \bm{\gamma}_1) + \wt(\bm{\mathrm{y}}', \bm{\mathrm{y}}'+ \bm{\gamma}_2)} \\ =& \sum_{\bm{\gamma} \in \Gamma} (-1)^{\bm{\mathrm{u}}'' \cdot \bm{\gamma}_1 + \bm{\mathrm{v}}'' \cdot \bm{\gamma}_2 + \bm{\gamma}_1\cdot \bm{\gamma}_2} \sum_{\bm{\mathrm{x}}'\in \mathbb{F}_2^k} (-1)^{(\bm{\mathrm{u}}' + \bm{\mathrm{u}}'' + \bm{\gamma}_2)\cdot \bm{\mathrm{x}}' } \imath^{\wt(\bm{\mathrm{x}}', \bm{\mathrm{x}}'+ \bm{\gamma}_1)} \\ & \hspace{2.0cm} \sum_{\bm{\mathrm{y}}'\in \mathbb{F}_2^k}(-1)^{(\bm{\mathrm{v}}'+ \bm{\mathrm{v}}''+ \bm{\gamma}_1 + \bm{\gamma}_2 + \bm{1}_k)\cdot \bm{\mathrm{y}}' } \imath^{\wt(\bm{\mathrm{y}}', \bm{\mathrm{y}}'+ \bm{\gamma}_2)} \\ =& \sum_{\bm{\gamma} \in \Gamma} (-1)^{\bm{\mathrm{u}}'' \cdot \bm{\gamma}_1 + \bm{\mathrm{v}}'' \cdot \bm{\gamma}_2 + \bm{\gamma}_1\cdot \bm{\gamma}_2} \imath^{\wt(\bm{\gamma}_1) + \wt(\bm{\gamma}_2)} \sum_{\bm{\mathrm{x}}'\in \mathbb{F}_2^k} (-1)^{(\bm{\mathrm{u}}' + \bm{\mathrm{u}}'' + \bm{\gamma}_1 + \bm{\gamma}_2 + \bm{1}_k)\cdot \bm{\mathrm{x}}'} \\ & \hspace{2.0cm} \sum_{\bm{\mathrm{y}}'\in \mathbb{F}_2^k}(-1)^{(\bm{\mathrm{v}}'+ \bm{\mathrm{v}}'' + \bm{\gamma}_1 ) \cdot \bm{\mathrm{y}}'}, \end{align*} where the last identity holds by $\imath^{\wt(\bm{\mathrm{y}}', \bm{\mathrm{y}}'+ \bm{\gamma}_2)} = \imath^ {2\wt(\bm{\mathrm{y}}') + \wt(\bm{\gamma}_2) - 2\wt(\bm{\gamma}_2 * \bm{\mathrm{y}}')} = (-1)^{(\bm{\gamma}_2 + \bm{1}_k) \cdot \bm{\mathrm{y}}'} \imath^{\wt(\bm{\gamma}_2)}$, together with the analogous identity for $\imath^{\wt(\bm{\mathrm{x}}', \bm{\mathrm{x}}'+ \bm{\gamma}_1)}$. We consider the following two cases. (1) If there does not exist a $\bm{\gamma} \in \Gamma$ such that $\bm{\mathrm{u}}'+ \bm{\mathrm{u}}'' + \gmbu_1 + \gmbu_2 + \bm{1}_k = \bm{0}_k$ and $\bm{\mathrm{v}}' + \bm{\mathrm{v}}'' + \gmbu_1 = \bm{0}_k$, then we have $\N_{g_0, S_1}(\bm{\mathrm{u}}, \bm{\mathrm{v}}) = 0$ by Lemma \ref{lemma:ex_sum}.
(2) If there exists a $\bm{\gamma} \in \Gamma$ such that $\bm{\mathrm{u}}'+ \bm{\mathrm{u}}'' + \gmbu_1 + \gmbu_2 + \bm{1}_k = \bm{0}_k$ and $\bm{\mathrm{v}}' + \bm{\mathrm{v}}'' + \gmbu_1 = \bm{0}_k$, i.e., $\bm{\gamma}_1 = \bm{\mathrm{v}}' + \bm{\mathrm{v}}''$ and $\bm{\gamma}_2 = \bm{\mathrm{u}}'+ \bm{\mathrm{u}}'' + \bm{\mathrm{v}}' + \bm{\mathrm{v}}'' + \bm{1}_k$, then we have the following derivation: \begin{align} (-1)^{\bm{\mathrm{u}}'' \cdot \bm{\gamma}_1 + \bm{\mathrm{v}}'' \cdot \bm{\gamma}_2 + \bm{\gamma}_1\cdot \bm{\gamma}_2} \imath^{\wt(\bm{\gamma}_1) + \wt(\bm{\gamma}_2)} =& (-1)^{\bm{\mathrm{u}}'' \cdot \bm{\gamma}_1 + \bm{\mathrm{v}}'\cdot \bm{\gamma}_2} \imath^{\wt(\bm{\gamma}_1 + \bm{\gamma}_2) + 2\wt(\bm{\gamma}_1 * \bm{\gamma}_2)} \notag \\ =& (-1)^{\bm{\mathrm{u}}'' \cdot(\bm{\mathrm{v}}' + \bm{\mathrm{v}}'') + \bm{\mathrm{v}}'\cdot (\bm{\mathrm{u}}'+ \bm{\mathrm{u}}''+ \bm{\mathrm{v}}'+ \bm{\mathrm{v}}''+ \bm{1}_k)} \imath^{\wt(\bm{\mathrm{u}}' + \bm{\mathrm{u}}'' + \bm{1}_k) + 2\wt((\bm{\mathrm{v}}'+ \bm{\mathrm{v}}'') * (\bm{\mathrm{u}}' + \bm{\mathrm{u}}''))} \notag \\ =& (-1)^{\bm{\mathrm{u}}'\cdot \bm{\mathrm{v}}'' + \bm{\mathrm{u}}''\cdot \bm{\mathrm{v}}' + \bm{\mathrm{v}}' \cdot \bm{\mathrm{v}}''} \imath^{k - \wt(\bm{\mathrm{u}}'+ \bm{\mathrm{u}}'')} \notag \\ =& (-1)^{\bm{\mathrm{u}}'\cdot \bm{\mathrm{v}}'' + \bm{\mathrm{u}}''\cdot \bm{\mathrm{v}}' + \bm{\mathrm{v}}' \cdot \bm{\mathrm{v}}''} \imath^{k - \wt(\bm{\mathrm{u}}) + 2\wt(\bm{\mathrm{u}}' * \bm{\mathrm{u}}'')} \notag \\ =& (-1)^{(\bm{\mathrm{u}}'+ \bm{\mathrm{v}}')\cdot (\bm{\mathrm{u}}''+ \bm{\mathrm{v}}'')} \imath^{k - \wt(\bm{\mathrm{u}})}. \label{align:inter_Res1} \end{align} Together with (\ref{align:Nega_{g_0}}), we have $\N_{g_0, S_1}(\bm{\mathrm{u}}, \bm{\mathrm{v}}) = 2^{2k} (-1)^{(\bm{\mathrm{u}}'+ \bm{\mathrm{v}}')\cdot (\bm{\mathrm{u}}''+ \bm{\mathrm{v}}'')} \imath^{k - \wt(\bm{\mathrm{u}})} = \N_{g_0} (\bm{\mathrm{u}}, \bm{\mathrm{v}})$. Then (\ref{align:NHT_4k}) follows from the cases discussed above. \end{proof} \textbf{Proof of Theorem \ref{theorem:4k}}: It is an immediate consequence of Lemma \ref{lemma:4k_WHT&NHT} and Theorem \ref{theorem:frame_construction}. {\hfill $\square$\par} Next, we analyze the ANF and the algebraic degree of $g$ in (\ref{align:g_BentNegabent}). We need the following lemma, which comes from the proof of \cite[Lemma 4]{Su2017}. \begin{lemma} \rm \label{lemma:chi_S_beta} Given $\bm{\beta} \in \mathbb{F}_2^k$, we denote by $S_{\bm{\beta}}$ the set $\{\bm{\mathrm{x}} \in \mathbb{F}_2^{2k} : \bm{\mathrm{x}}' \in \mathbb{F}_2^k, \bm{\mathrm{x}}'' = \bm{\mathrm{x}}' + \bm{\beta} \}$. Then the ANF of the characteristic function of $S_{\bm{\beta}}$ is given by \[ \chi_{S_{\bm{\beta}}} (\bm{\mathrm{x}})= \sum_{\mbox{\tiny $\begin{array} {c} \bm{\mathrm{u}}' * \bm{\mathrm{u}}'' = \bm{0}_k \\ \bm{\mathrm{u}}' + \bm{\mathrm{u}}'' \succeq \bm{\beta} \end{array}$}} \bm{\mathrm{x}}^{\bm{\mathrm{u}}}. \] \end{lemma} By Lemma \ref{lemma:chi_S_beta}, we give the ANF of $g$ in (\ref{align:g_BentNegabent}) in the following theorem.
\begin{theorem} \rm \label{theorem:ANF_4k} Given the set $S_1$ defined in (\ref{align:S1_set}), the ANF of $g \in \mathcal{B}_{4k}$ in (\ref{align:g_BentNegabent}) is given by \[ g(\bm{\mathrm{x}}, \bm{\mathrm{y}}) = g_0(\bm{\mathrm{x}}, \bm{\mathrm{y}}) + \sum_{\gmbu \in \Gamma} \left(\sum_{\mbox{\tiny $\begin{array} {c} \bm{\mathrm{u}}' * \bm{\mathrm{u}}'' = \bm{0}_k \\ \bm{\mathrm{u}}' + \bm{\mathrm{u}}'' \succeq \bm{\gamma}_1 \end{array}$}} \bm{\mathrm{x}}^{\bm{\mathrm{u}}}\right) \left(\sum_{\mbox{\tiny $\begin{array} {c} \bm{\mathrm{v}}' * \bm{\mathrm{v}}'' = \bm{0}_k \\ \bm{\mathrm{v}}' + \bm{\mathrm{v}}'' \succeq \bm{\gamma}_2 \end{array}$}} \bm{\mathrm{y}}^{\bm{\mathrm{v}}} \right). \] \end{theorem} In the following corollary, we show the necessary and sufficient condition under which the algebraic degree of $g$ is the maximum. \begin{corollary} \rm \label{corollary:Deg_4k} Given the set $S_1$ defined in (\ref{align:S1_set}), the algebraic degree of $g\in \mathcal{B}_{4k}$ in (\ref{align:g_BentNegabent}) is $2k$ if and only if $|\Gamma|$ is odd. \end{corollary} \begin{proof} For $\bm{\mathrm{u}} \in \mathbb{F}_2^{2k}$ satisfying $\bm{\mathrm{u}}' * \bm{\mathrm{u}}'' = \bm{0}_k$, it is clear that $\wt(\bm{\mathrm{u}}) \le k$. Furthermore, $\bm{\mathrm{u}}' * \bm{\mathrm{u}}'' = \bm{0}_k$ and $\wt(\bm{\mathrm{u}}) = k$ if and only if $\bm{\mathrm{u}}'+ \bm{\mathrm{u}}'' = \bm{1}_k$. In this case, $\bm{\mathrm{u}}'+ \bm{\mathrm{u}}'' \succeq \bm{\gamma}_1$ holds for arbitrary $\bm{\gamma}_1 \in \mathbb{F}_2^k$. For this reason, for any two vectors $(\bm{\gamma}_1, \bm{\gamma}_2)$ and $(\bm{\theta}_1, \bm{\theta}_2)$ in $\Gamma$, where $\gmbu_i, \thbu_i \in \F_2^k$ for $i = 1, 2$, we know that both the functions $\chi_{S_{\bm{\gamma}_1}}(\bm{\mathrm{x}})\chi_{S_{\bm{\gamma}_2}}(\bm{\mathrm{y}})$ and $\chi_{S_{\bm{\theta}_1}}(\bm{\mathrm{x}})\chi_{S_{\bm{\theta}_2}}(\bm{\mathrm{y}})$ have degree $2k$, and their monomial terms of degree $2k$ are the same. So, if $|\Gamma|$ is even, all the monomial terms of algebraic degree $2k$ are canceled. Thus, the algebraic degree of $g$ is $2k$ if and only if $|\Gamma|$ is odd. \end{proof} \begin{lemma} \rm (\cite[Theorem 11]{Parker2007}) \label{lemma:dual_bentnegabent} Let $n$ be an even integer, and $\varphi \in \mathcal{B}_n$ be a bent-negabent function. Then $\tilde{\varphi}$ (the dual of $\varphi$) is also bent-negabent. \end{lemma} The dual of $g$ is given in the following theorem. \begin{theorem} \rm \label{theorem:dual_4k} Given the set $S_1$ defined in (\ref{align:S1_set}), the dual of $g\in \mathcal{B}_{4k}$ in (\ref{align:g_BentNegabent}) is still bent-negabent and given by \begin{align} \label{align:dual_g} \tilde{g}(\bm{\mathrm{x}}, \bm{\mathrm{y}}) = \bm{\mathrm{x}}'\cdot \bm{\mathrm{x}}'' + \bm{\mathrm{x}} \cdot \bm{\mathrm{y}} + \chi_{\widetilde{S}_1}(\bm{\mathrm{x}}, \bm{\mathrm{y}}), \end{align} where $\widetilde{S}_1$ is a subset of $\mathbb{F}_2^{4k}$ defined by \[ \widetilde{S}_1 = \bigcup_{\gmbu \in \Gamma} \{(\bm{\mathrm{x}}, \bm{\mathrm{y}}) \in \mathbb{F}_2^{4k} : (\xbu', \ybu') \in \mathbb{F}_2^{2k}, \bm{\mathrm{x}}'' = \bm{\mathrm{x}}' + \bm{\gamma}_2, \bm{\mathrm{y}}'' = \bm{\mathrm{y}}' + \bm{\gamma}_1 + \bm{\gamma}_2 + \bm{1}_k\}. \] \end{theorem} \begin{proof} From Lemma \ref{lemma:dual_bentnegabent} we know that $\tilde{g}$ is also a bent-negabent function. From (\ref{align:dual_MM}) we know that the dual of $g_0$ is given by $\tilde{g}_0 (\bm{\mathrm{x}}, \bm{\mathrm{y}}) = \bm{\mathrm{x}}' \cdot \bm{\mathrm{x}}'' + \bm{\mathrm{x}} \cdot \bm{\mathrm{y}}$.
Then the dual of $g$ is obtained from Theorem \ref{theorem:frame_construction}-(1) and (\ref{align:WHT_4k}). \end{proof} We now show an example of an $8$-variable bent-negabent function with the maximum algebraic degree to illustrate this construction. \begin{example} \rm Let $k=2$ and $\Gamma = \{(0, 0, 0, 1) \}$. By (\ref{align:S1_set}), $S_1$ is given by \begin{align*} S_1 =& \{(\bm{\mathrm{x}}, \bm{\mathrm{y}}) \in \mathbb{F}_2^8 : \bm{\mathrm{x}}'' = \bm{\mathrm{x}}' \in \mathbb{F}_2^2, \bm{\mathrm{y}}' \in \mathbb{F}_2^2, \bm{\mathrm{y}}'' = \bm{\mathrm{y}}' + (0, 1)\} \\ =&\{(0,0, 0,0), (1, 0, 1, 0), (0, 1, 0, 1), (1, 1, 1, 1) \} \times \{(0, 0, 0, 1), (1, 0, 1, 1), (0, 1, 0, 0), (1, 1, 1, 0) \}, \end{align*} where $\times$ denotes the Cartesian product of two sets, i.e., $S\times T = \{(\bm{\alpha}, \bm{\beta}) \in \F_2^{4k} : \bm{\alpha} \in S, \bm{\beta} \in T\}$ for two subsets $S$ and $T$ of $\mathbb{F}_2^{2k}$. Using a SageMath program, we verified that the $8$-variable function $g$ generated by (\ref{align:g_BentNegabent}) is bent-negabent with algebraic degree $4$, and its ANF is given by \begin{center} \fcolorbox{white}{white}{ \parbox{.95\linewidth} {$g(x_0,\cdots, x_3, y_0, \cdots, y_3) = x_0x_1y_0y_1 + x_0x_1y_0y_3 + x_0x_1y_1y_2 + x_0x_1y_1 + x_0x_1y_2y_3 + x_0x_1y_3 + x_0x_3y_0y_1 + x_0x_3y_0y_3 + x_0x_3y_1y_2 + x_0x_3y_1 + x_0x_3y_2y_3 + x_0x_3y_3 + x_0y_0y_1 + x_0y_0y_3 + x_0y_0 + x_0y_1y_2 + x_0y_1 + x_0y_2y_3 + x_0y_3 + x_1x_2y_0y_1 + x_1x_2y_0y_3 + x_1x_2y_1y_2 + x_1x_2y_1 + x_1x_2y_2y_3 + x_1x_2y_3 + x_1y_0y_1 + x_1y_0y_3 + x_1y_1y_2 + x_1y_2y_3 + x_1y_3 + x_2x_3y_0y_1 + x_2x_3y_0y_3 + x_2x_3y_1y_2 + x_2x_3y_1 + x_2x_3y_2y_3 + x_2x_3y_3 + x_2y_0y_1 + x_2y_0y_3 + x_2y_1y_2 + x_2y_1 + x_2y_2y_3 + x_2y_2 + x_2y_3 + x_3y_0y_1 + x_3y_0y_3 + x_3y_1y_2 + x_3y_1 + x_3y_2y_3 + y_0y_1 + y_0y_2 + y_0y_3 + y_1y_2 + y_1y_3 + y_1 + y_2y_3 + y_3$.} } \end{center} \end{example} \subsection{Bent-Negabent Functions on $8k$ Variables} In this subsection, let $k$ be a positive integer, $t = 2k$ and $m=2t$. Let us define the following two sets of vectors of length $2d$ (the first of which is a repetition code): \begin{align*} &A_{2d} = \{ \underbrace{0 \cdots 0}_{2d}, \underbrace{1 \cdots 1}_{2d} \}, \\ &B_{2d} = \{ \underbrace{0 \cdots 0}_{d} \underbrace{1 \cdots 1}_{d}, \underbrace{1 \cdots 1}_{d} \underbrace{0 \cdots 0}_{d} \}. \end{align*} For example, \begin{align*} & d = 1,\ A_2 = \{00, 11 \},\ B_2 = \{01, 10 \}, \\ & d = 2,\ A_4 = \{0000, 1111 \},\ B_4 = \{0011, 1100 \}. \end{align*} We shall define \begin{align*} & A_{2d}^r = \{(\xbu_1, \cdots, \xbu_r) : \xbu_i \in A_{2d}\ \text{for}\ 1 \le i \le r \}, \\ & B_{2d}^r = \{(\xbu_1, \cdots, \xbu_r) : \xbu_i \in B_{2d}\ \text{for}\ 1 \le i \le r \}. \end{align*} In the following, we give the result for $d = 1$ in detail. For $d = 1$, $A_2^r$ is a subspace of $\mathbb{F}_2^{2r}$ and $B_2^r$ is a coset of $A_2^r$ in $\F_2^{2r}$. For $\bm{\gamma} = (\bm{\gamma}_1, \bm{\gamma}_2) \in \mathbb{F}_2^{4k}$, where $\bm{\gamma}_i \in \mathbb{F}_2^{2k}$ for $i=1, 2$, let us define \begin{align} C_{\gmbu, A_2^{2k}} = \{(\xbu, \ybu) \in \mathbb{F}_2^{8k} : \xbu \in A_2^{2k}, \ybu \in C_{\gmbu} (A_2^{2k}) \}. \end{align} Let $\Gamma$ be a nonempty subset of $R_{A_2^{2k}}$, i.e., a complete set of coset representatives of $A_2^{2k}$ in $\F_2^{4k}$. We shall define a subset $S_2$ of $\mathbb{F}_2^{8k}$ by \begin{align} S_2 = \bigcup_{\gmbu \in \Gamma} C_{\gmbu, A_2^{2k}}, \label{align:S2_set} \end{align} for which we have the following result.
\begin{theorem} \rm \label{theorem:8k_variable} Given the subset $S_2$ of $\mathbb{F}_2^{8k}$ defined in (\ref{align:S2_set}) and $g_0 \in \mathcal{B}_{8k}$ defined in (\ref{align:g_0}), the $8k$-variable function $g$ in (\ref{align:g_BentNegabent}) is bent-negabent. \end{theorem} In order to prove Theorem \ref{theorem:8k_variable}, we need the following two lemmas, which give the fragmentary Walsh-Hadamard and nega-Hadamard transforms of a linear function over $A_2^{k}$ and of $g_0$ over $S_2$, respectively. \begin{lemma} \rm \label{lemma:ex_sum_A^k} For any $\bm{\mathrm{u}} \in \mathbb{F}_2^{2k}$, we have the following results on the fragmentary Walsh-Hadamard transform and the fragmentary nega-Hadamard transform of a linear function: \begin{align} & \sum_{\bm{\mathrm{x}} \in A_2^k}(-1)^{\bm{\mathrm{u}}\cdot \bm{\mathrm{x}}} = \begin{cases} 2^k,\ \bm{\mathrm{u}} \in A_2^k, \\ 0,\ \hspace{0.2cm} \text{otherwise}, \end{cases} \label{align:ex_sum_A^k} \\ & \sum_{\bm{\mathrm{x}} \in A_2^k}(-1)^{\bm{\mathrm{u}}\cdot \bm{\mathrm{x}}}\imath^{\wt(\bm{\mathrm{x}})} = \begin{cases} 2^k,\ \bm{\mathrm{u}} \in B_2^k, \\ 0,\ \hspace{0.2cm}\text{otherwise}. \end{cases} \label{align:nega_ex_sum_A^k} \end{align} \end{lemma} \begin{proof} First, for any $\ubu \in A_2^k$, we have $\sum_{\bm{\mathrm{x}} \in A_2^k}(-1)^{\bm{\mathrm{u}}\cdot \bm{\mathrm{x}}} = \sum_{\bm{\mathrm{x}} \in A_2^k}(-1)^0 = |A_2^k| = 2^k$. On the other hand, for any $\ubu \notin A_2^k$, we know that $\sum_{\bm{\mathrm{x}} \in A_2^k}(-1)^{\bm{\mathrm{u}}\cdot \bm{\mathrm{x}}} = 0$ since $|\{\xbu \in A_2^k : \ubu \cdot \xbu = 0 \}| = \frac{|A_2^k|}{2}$. Hence, (\ref{align:ex_sum_A^k}) holds. Let $\bm{\mathrm{u}} = (\bm{\mathrm{u}}_0, \cdots, \bm{\mathrm{u}}_{k-1})$ and $\bm{\mathrm{x}} = (\bm{\mathrm{x}}_0, \cdots, \bm{\mathrm{x}}_{k-1})$, where $\bm{\mathrm{u}}_i, \bm{\mathrm{x}}_i \in \mathbb{F}_2^2$ for $i = 0, \cdots, k-1$. Then we have \[ \sum_{\bm{\mathrm{x}} \in A_2^k}(-1)^{\bm{\mathrm{u}}\cdot \bm{\mathrm{x}}}\imath^{\wt(\bm{\mathrm{x}})} =\prod_{i=0}^{k-1}\left(\sum_{\bm{\mathrm{x}}_i\in A_2}(-1)^{\bm{\mathrm{u}}_i\cdot \bm{\mathrm{x}}_i}\imath^{\wt(\bm{\mathrm{x}}_i)}\right). \] Clearly, it holds that $\sum_{\bm{\mathrm{x}}_i\in A_2}(-1)^{\bm{\mathrm{u}}_i\cdot \bm{\mathrm{x}}_i}\imath^{\wt(\bm{\mathrm{x}}_i)} = 1 - (-1)^{\bm{1}_2\cdot \bm{\mathrm{u}}_i}$, which equals $2$ for $\bm{\mathrm{u}}_i\in B_2$, and $0$, otherwise. Hence, (\ref{align:nega_ex_sum_A^k}) holds. \end{proof} The following lemma gives the fragmentary Walsh-Hadamard transform and the fragmentary nega-Hadamard transform of $g_0$ over $S_2$.
\begin{lemma} \rm \label{lemma:8k_WHT&NHT} Given the subset $S_2$ of $\mathbb{F}_2^{8k}$ defined in (\ref{align:S2_set}) and $g_0 \in \mathcal{B}_{8k}$ defined in (\ref{align:g_0}), the fragmentary Walsh-Hadamard transform and the fragmentary nega-Hadamard transform of $g_0$ over $S_2$ at $(\bm{\mathrm{u}}, \bm{\mathrm{v}}) \in \mathbb{F}_2^{8k}$ are respectively given by \begin{align} & \W_{g_0, S_2}(\bm{\mathrm{u}}, \bm{\mathrm{v}}) = \begin{cases} \W_{g_0}(\bm{\mathrm{u}}, \bm{\mathrm{v}}),\ \bm{\mathrm{u}} \in C_{\bm{\gamma}} (A_2^{2k}), \bm{\mathrm{v}} \in C_{(\gmbu_2, \gmbu_1)} (A_2^{2k})\ \text{for some}\ \gmbu \in \Gamma, \\ 0,\ \hspace{1.4cm}\text{otherwise}, \end{cases} \label{align:WHT_8k} \\ & \N_{g_0, S_2}(\bm{\mathrm{u}}, \bm{\mathrm{v}}) = \begin{cases} \N_{g_0}(\bm{\mathrm{u}}, \bm{\mathrm{v}}),\ \ubu + \gmbu \in B_2^{2k}, \vbu + \gmbu + (\gmbu_2, \gmbu_1) \in B_2^{2k}\ \text{for some}\ \gmbu \in \Gamma, \\ 0,\ \hspace{1.3cm}\text{otherwise}. \end{cases} \label{align:NHT_8k} \end{align} \end{lemma} \begin{proof} By (\ref{align:PWHT}), the fragmentary Walsh-Hadamard transform of $g_0$ over $S_2$ at $(\bm{\mathrm{u}}, \bm{\mathrm{v}}) \in \mathbb{F}_2^{8k}$ is given by \begin{align*} \W_{g_0, S_2}(\bm{\mathrm{u}}, \bm{\mathrm{v}}) =& \sum_{\gmbu \in \Gamma} \sum_{(\bm{\mathrm{x}}, \bm{\mathrm{y}})\in C_{\gmbu, A_2^{2k}}}(-1)^{\bm{\mathrm{x}}\cdot \bm{\mathrm{y}}+ \bm{\mathrm{y}}'\cdot \bm{\mathrm{y}}'' + \bm{\mathrm{u}}\cdot \bm{\mathrm{x}} + \bm{\mathrm{v}} \cdot \bm{\mathrm{y}}} \\ =& \sum_{\gmbu \in \Gamma} \sum_{\xbu \in A_2^{2k} } \sum_{\ztbu \in A_2^{2k} } (-1)^{\xbu \cdot (\gmbu + \ztbu) + (\gmbu_1 + \ztbu_1)\cdot (\gmbu_2 + \ztbu_2) + \bm{\mathrm{u}}\cdot \xbu + \bm{\mathrm{v}} \cdot (\gmbu + \ztbu) } \\ =& \sum_{\gmbu \in \Gamma} (-1)^{\gmbu_1\cdot \gmbu_2 + \vbu \cdot \gmbu} \sum_{\ztbu \in A_2^{2k}}(-1)^{[\vbu + (\gmbu_2, \gmbu_1)] \cdot \ztbu} \sum_{\xbu \in A_2^{2k}}(-1)^{(\ubu + \gmbu) \cdot \xbu} \\ =& 2^{4k} \sum_{\gmbu \in \Gamma_1(\bm{\mathrm{u}}, \bm{\mathrm{v}})}(-1)^{\bm{\gamma}_1 \cdot \bm{\gamma}_2 + \vbu \cdot \gmbu}, \end{align*} where $\ztbu_i \in \F_2^{2k}$ for $i = 1, 2$ and $\ztbu = (\ztbu_1, \ztbu_2)$, and $\Gamma_1(\bm{\mathrm{u}}, \bm{\mathrm{v}})$ is a subset of $\Gamma$ defined by \[ \Gamma_1(\bm{\mathrm{u}}, \bm{\mathrm{v}}) = \{\gmbu \in \Gamma : \bm{\mathrm{u}} \in C_{\bm{\gamma}} (A_2^{2k}), \bm{\mathrm{v}} \in C_{(\gmbu_2, \gmbu_1)} (A_2^{2k}) \}, \] and the second identity holds since $\bm{\mathrm{y}} \in C_{\bm{\gamma}} (A_2^{2k})$ if and only if $\bm{\mathrm{y}} = \bm{\gamma} + \bm{\zeta}$ for $\bm{\zeta} \in A_2^{2k}$, and the third identity holds by the fact that $\bm{\mathrm{x}} \cdot \bm{\zeta} = 0$ and $\ztbu_1 \cdot \ztbu_2 = 0$ for $\bm{\mathrm{x}}, \bm{\zeta} \in A_2^{2k}$, and the last identity holds by (\ref{align:ex_sum_A^k}). For $\gmbu \in \Gamma_1(\bm{\mathrm{u}}, \bm{\mathrm{v}})$, we know $(\ubu' + \vbu'') \cdot (\ubu'' + \vbu') = 0$ since both $\ubu' + \vbu''$ and $\ubu'' + \vbu'$ are in $A_2^k$. Then, we have \begin{align} \bm{\gamma}_1 \cdot \bm{\gamma}_2 + \vbu \cdot \gmbu =& (\vbu' + \bm{\gamma}_2) \cdot (\vbu'' + \bm{\gamma}_1) + \vbu' \cdot \vbu'' \notag \\ =& \vbu' \cdot \vbu'' \notag \\ =& \vbu' \cdot \vbu'' + (\ubu' + \vbu'') \cdot (\ubu'' + \vbu') \notag \\ =& \bm{\mathrm{u}}'\cdot \bm{\mathrm{u}}'' + \bm{\mathrm{u}} \cdot \bm{\mathrm{v}}.
\label{align:inter_Res3} \end{align} Recalling the Walsh-Hadamard transform of $g_0$ in (\ref{align:Walsh_{g_0}}), we have \[ \W_{g_0, S_2}(\bm{\mathrm{u}}, \bm{\mathrm{v}}) = |\Gamma_1(\bm{\mathrm{u}}, \bm{\mathrm{v}})| \W_{g_0}(\bm{\mathrm{u}}, \bm{\mathrm{v}}). \] To prove (\ref{align:WHT_8k}), it is sufficient to prove that the cardinality of $\Gamma_1(\bm{\mathrm{u}}, \bm{\mathrm{v}})$ is less than or equal to $1$. Assume that there are two elements $\gmbu$ and $\thbu$ in $\Gamma_1(\bm{\mathrm{u}}, \bm{\mathrm{v}})$. Then, there exist two vectors $\bm{\alpha}, \bm{\beta} \in A_2^{2k}$ such that $\begin{cases} \bm{\gamma} = \bm{\mathrm{u}}+ \bm{\alpha}, \\ \bm{\theta} = \bm{\mathrm{u}}+ \bm{\beta}, \end{cases} $ which implies $ \gmbu + A_2^{2k} = \ubu + A_2^{2k} = \thbu + A_2^{2k}. $ Then, we have $\gmbu = \thbu$ since they are coset representatives of $A_2^{2k}$ in $\F_2^{4k}$. Hence, we have $|\Gamma_1(\bm{\mathrm{u}}, \bm{\mathrm{v}})| \le 1$, and (\ref{align:WHT_8k}) follows immediately. By (\ref{align:PNHT}), the fragmentary nega-Hadamard transform of $g_0$ over $S_2$ at $(\bm{\mathrm{u}}, \bm{\mathrm{v}}) \in \mathbb{F}_2^{8k}$ is given by \begin{align*} \N_{g_0, S_2}(\bm{\mathrm{u}}, \bm{\mathrm{v}}) =& \sum_{\gmbu \in \Gamma} \sum_{(\bm{\mathrm{x}}, \bm{\mathrm{y}})\in C_{\gmbu, A_2^{2k}}}(-1)^{\bm{\mathrm{x}}\cdot \bm{\mathrm{y}}+ \bm{\mathrm{y}}'\cdot \bm{\mathrm{y}}'' + \bm{\mathrm{u}}\cdot \bm{\mathrm{x}} + \bm{\mathrm{v}} \cdot \bm{\mathrm{y}}}\imath^{\wt(\bm{\mathrm{x}}, \bm{\mathrm{y}})} \\ =& \sum_{\gmbu \in \Gamma} \sum_{\xbu \in A_2^{2k} } \sum_{ \ztbu \in A_2^{2k} } (-1)^{\xbu \cdot (\bm{\zeta} + \bm{\gamma}) +(\ztbu_1 + \gmbu_1) \cdot (\ztbu_2 + \gmbu_2) + \bm{\mathrm{u}} \cdot \xbu + \bm{\mathrm{v}}\cdot (\bm{\zeta} + \bm{\gamma}) } \imath^{\wt(\xbu) + \wt(\bm{\zeta} + \bm{\gamma})} \\ =& \sum_{\gmbu \in \Gamma}(-1)^{\vbu \cdot \gmbu + \bm{\gamma}_1\cdot \bm{\gamma}_2} \imath^{\wt(\gmbu)} \sum_{\ztbu \in A_2^{2k}} (-1)^{(\vbu + \gmbu + (\gmbu_2, \gmbu_1)) \cdot \ztbu} \imath^{\wt(\ztbu)} \sum_{\xbu \in A_2^{2k}}(-1)^{(\ubu + \gmbu) \cdot \xbu} \imath^{\wt(\xbu)} \\ =& 2^{4k} \sum_{\gmbu \in \Gamma_2(\bm{\mathrm{u}}, \bm{\mathrm{v}})}(-1)^{\vbu \cdot \gmbu + \bm{\gamma}_1\cdot \bm{\gamma}_2} \imath^{\wt(\gmbu)}, \end{align*} where $\ztbu_i \in \F_2^{2k}$ for $i = 1, 2$ and $\ztbu = (\ztbu_1, \ztbu_2)$, and $\Gamma_2(\bm{\mathrm{u}}, \bm{\mathrm{v}})$ is a subset of $\Gamma$ defined by \[ \Gamma_2(\bm{\mathrm{u}}, \bm{\mathrm{v}}) = \{\gmbu \in \Gamma : \ubu + \gmbu \in B_2^{2k}, \vbu + \gmbu + (\gmbu_2, \gmbu_1) \in B_2^{2k} \}, \] and the second identity holds since $\bm{\mathrm{y}}\in C_{\gmbu} (A_2^{2k})$ if and only if $\ybu = \bm{\gamma} + \bm{\zeta}$ for $\bm{\zeta} \in A_2^{2k}$, and the third identity holds by the fact that $\xbu \cdot \ztbu = \bm{\zeta}_1 \cdot \bm{\zeta}_2 = 0$ for $\xbu, \ztbu \in A_2^{2k}$, and the last identity holds by (\ref{align:nega_ex_sum_A^k}). Note that $B_2^k$ is an affine subspace of $\mathbb{F}_2^{2k}$, and can be expressed as $B_2^k = \bm{\xi} + A_2^k$, where $\bm{\xi}$ is an arbitrary vector in $B_2^k$. For $\gmbu \in \Gamma_2(\bm{\mathrm{u}}, \bm{\mathrm{v}})$, there exist $\bm{\lambda}_1, \bm{\lambda}_2, \bm{\lambda}_3, \bm{\lambda}_4 \in A_2^k$ such that $\bm{\mathrm{u}}' = \gmbu_1 + \bm{\xi} + \bm{\lambda}_1, \bm{\mathrm{u}}'' = \gmbu_2 + \bm{\xi} + \bm{\lambda}_2, \bm{\mathrm{v}}' = \gmbu_1 + \bm{\gamma}_2 + \bm{\xi} + \bm{\lambda}_3$ and $\bm{\mathrm{v}}'' = \bm{\gamma}_1 + \bm{\gamma}_2 + \bm{\xi} + \bm{\lambda}_4$.
Then we have \begin{align} (-1)^{\vbu \cdot \gmbu + \gmbu_1 \cdot \gmbu_2} \imath^{\wt(\gmbu)} =& (-1)^{(\gmbu_1 + \bm{\gamma}_2 + \bm{\xi} + \bm{\lambda}_3)\cdot \bm{\gamma}_1 + (\bm{\gamma}_1 + \bm{\gamma}_2 + \bm{\xi} + \bm{\lambda}_4)\cdot \bm{\gamma}_2 + \bm{\gamma}_1 \cdot \bm{\gamma}_2} \imath^{\wt(\gmbu)} \notag \\ =& (-1)^{(\ldbu_1 + \ldbu_3 + \ubu')\cdot \bm{\gamma}_1 + (\ldbu_2 + \ldbu_4 + \ubu'')\cdot \bm{\gamma}_2 + \bm{\gamma}_1 \cdot \bm{\gamma}_2} \imath^{\wt(\gmbu)} \notag \\ =& (-1)^{(\ldbu_1 + \ldbu_3)\cdot \bm{\gamma}_1 + (\ldbu_2 + \ldbu_4)\cdot \bm{\gamma}_2 + \bm{\gamma}_1 \cdot \bm{\gamma}_2} \imath^{\wt(\ubu' + \gmbu_1) - \wt(\ubu') + \wt(\ubu'' + \gmbu_2) - \wt(\ubu'')} \notag \\ =& (-1)^{(\ldbu_1 + \ldbu_3)\cdot \bm{\gamma}_1 + (\ldbu_2 + \ldbu_4)\cdot \bm{\gamma}_2 + \bm{\gamma}_1 \cdot \bm{\gamma}_2} \imath^{2k - \wt(\ubu)} \notag \\ =& (-1)^{(\ldbu_1 + \ldbu_3 + \gmbu_2)\cdot (\ldbu_2 + \ldbu_4 + \gmbu_1)} \imath^{2k - \wt(\ubu)} \notag \\ =& (-1)^{(\ubu' + \vbu')\cdot (\ubu'' + \vbu'')} \imath^{2k - \wt(\ubu)}, \label{align:inter_Res4} \end{align} where the fourth identity holds since both $\ubu' + \gmbu_1$ and $\ubu'' + \gmbu_2$ are in $B_2^k$, and the fifth identity holds since both $\ldbu_1 + \ldbu_3$ and $\ldbu_2 + \ldbu_4$ are in $A_2^k$. Together with (\ref{align:Nega_{g_0}}), we have \[ \N_{g_0, S_2}(\bm{\mathrm{u}}, \bm{\mathrm{v}}) = |\Gamma_2(\bm{\mathrm{u}}, \bm{\mathrm{v}})| \N_{g_0}(\bm{\mathrm{u}}, \bm{\mathrm{v}}). \] To prove (\ref{align:NHT_8k}), it suffices to prove that the cardinality of $\Gamma_2(\bm{\mathrm{u}}, \bm{\mathrm{v}})$ is less than or equal to $1$. Assume that there are two elements $\gmbu$ and $\thbu$ in $\Gamma_2(\bm{\mathrm{u}}, \bm{\mathrm{v}})$. Since $B_2^{2k} = (\bm{\xi}, \bm{\xi}) + A_2^{2k}$, there exist two vectors $\bm{\alpha}, \bm{\beta}$ in $A_2^{2k}$ such that $\begin{cases} \bm{\gamma} = \ubu + (\bm{\xi}, \bm{\xi}) + \bm{\alpha}, \\ \bm{\theta} = \ubu + (\bm{\xi}, \bm{\xi}) + \bm{\beta}. \end{cases}$ Then we have $\gmbu + A_2^{2k} = \ubu + (\bm{\xi}, \bm{\xi}) + A_2^{2k} = \thbu + A_2^{2k}$, which implies $\gmbu = \thbu$ by the definition of $\Gamma$. Hence, we have $|\Gamma_2(\bm{\mathrm{u}}, \bm{\mathrm{v}})| \le 1$, and (\ref{align:NHT_8k}) follows. \end{proof} \textbf{Proof of Theorem \ref{theorem:8k_variable}}: It follows from Lemma \ref{lemma:8k_WHT&NHT} and Theorem \ref{theorem:frame_construction} directly. {\hfill $\square$\par} To analyze the ANF of $g$, we need the following lemma. \begin{lemma} \rm \label{lemma:chi_S_alpha} For $\gmbu = (\gamma_0, \cdots, \gamma_{4k-1}) \in \F_2^{4k}$, the characteristic function of $C_{\gmbu, A_2^{2k}}$ is given by \[ \chi_{C_{\gmbu, A_2^{2k}}} (\xbu, \ybu) = \left(\prod_{i=0}^{2k-1}(x_{2i}+ x_{2i+1} + 1) \right) \left(\prod_{i=0}^{2k-1} (y_{2i} + y_{2i+1} + \gamma_{2i} + \gamma_{2i+1} + 1) \right). \] \end{lemma} In the following theorem, we give the ANF of $g$. \begin{theorem} \rm \label{theorem:ANF_8k} Given the set $S_2$ defined in (\ref{align:S2_set}), the ANF of $g\in \mathcal{B}_{8k}$ in (\ref{align:g_BentNegabent}) is given by \[ g(\bm{\mathrm{x}}, \bm{\mathrm{y}}) = g_0(\bm{\mathrm{x}}, \bm{\mathrm{y}}) + \sum_{\gmbu \in \Gamma} \chi_{C_{\gmbu, A_2^{2k}}} (\xbu, \ybu), \] where $\chi_{C_{\gmbu, A_2^{2k}}}$ is given by Lemma \ref{lemma:chi_S_alpha}. \end{theorem} \begin{remark} \rm \label{remark:4k&8k} It is obvious that the subset $S_2$ of $\mathbb{F}_2^{8k}$ in (\ref{align:S2_set}) is different from the subset $S_1$ of $\mathbb{F}_2^{8k}$ in (\ref{align:S1_set}).
Moreover, from Theorem \ref{theorem:ANF_8k} we know the ANF of $\chi_{S_2}$ does not contain any term with $x_{2i}x_{2i+1}$ or $y_{2i}y_{2i+1}$ for $0 \le i \le 2k-1$, but the ANF of $\chi_{S_1}$ contains some of these terms, by Theorem \ref{theorem:ANF_4k}. Hence, bent-negabent functions constructed by Theorem \ref{theorem:8k_variable} are different from those on $8k$ variables constructed by Theorem \ref{theorem:4k}. \end{remark} From Theorem \ref{theorem:ANF_8k}, we immediately obtain the necessary and sufficient condition under which $g$ has the maximum algebraic degree. \begin{corollary} \rm Given the set $S_2$ defined in (\ref{align:S2_set}), the algebraic degree of $g \in \mathcal{B}_{8k}$ in (\ref{align:g_BentNegabent}) is $4k$ if and only if $|\Gamma|$ is odd. \end{corollary} The dual of $g$ is given in the following theorem, whose proof is similar to that of Theorem \ref{theorem:dual_4k} and is omitted. \begin{theorem} \rm Given the set $S_2$ defined in (\ref{align:S2_set}), the dual of $g\in \mathcal{B}_{8k}$ in (\ref{align:g_BentNegabent}) is also bent-negabent, and given by \[ \tilde{g} (\bm{\mathrm{x}}, \bm{\mathrm{y}}) = \bm{\mathrm{x}}'\cdot \bm{\mathrm{x}}'' + \bm{\mathrm{x}} \cdot \bm{\mathrm{y}} + \chi_{\tilde{S}_2}(\bm{\mathrm{x}}, \bm{\mathrm{y}}), \] where $\tilde{S}_2$ is a subset of $\mathbb{F}_2^{8k}$ defined by \[ \tilde{S}_2 = \bigcup_{\gmbu \in \Gamma} \{(\bm{\mathrm{x}}, \bm{\mathrm{y}}) \in \mathbb{F}_2^{8k} : \xbu \in C_{\bm{\gamma}} (A_2^{2k}), \ybu \in C_{(\gmbu_2, \gmbu_1)} (A_2^{2k}) \}. \] \end{theorem} \section{Constructions of Bent-Negabent Functions on $4t+2$ Variables} \label{section:bent_negabent_4k+2} In this section, we present two constructions of bent-negabent functions on $4k+2$ and $8k+2$ variables by modifying the truth tables of a class of quadratic bent-negabent functions of simple form. Let $m=2t$, and let $\bm{\mathrm{x}}$ and $\bm{\mathrm{y}}$ have the same meaning as in Section \ref{section:bent_negabent_4k}. Let $x_m, y_m \in \mathbb{F}_2$, and denote $\bm{\mathrm{X}} = (\bm{\mathrm{x}}, x_m), \bm{\mathrm{Y}} = (\bm{\mathrm{y}}, y_m) \in \mathbb{F}_2^{2t+1}$. We present a class of $(4t+2)$-variable Boolean functions of the following form: \begin{align} h_0(\bm{\mathrm{X}}, \bm{\mathrm{Y}}) = \bm{\mathrm{X}} \cdot \bm{\mathrm{Y}} + x_0y_m + \bm{\mathrm{y}}'\cdot \bm{\mathrm{y}}'' = \sum_{i=0}^{m} x_iy_i + x_0y_{m} + \sum_{i = 0}^{t-1} y_iy_{t+i}. \label{align:h_0} \end{align} Similarly, let $\bm{\mathrm{u}}$ and $\bm{\mathrm{v}}$ have the same meaning as in Section \ref{section:bent_negabent_4k}, let $u_m, v_m \in \mathbb{F}_2$, and denote $\bm{\mathrm{U}} = (\bm{\mathrm{u}}, u_m), \bm{\mathrm{V}} = (\bm{\mathrm{v}}, v_m) \in \mathbb{F}_2^{2t+1}$.
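As a concrete sanity check, the plain-Python sketch below confirms by brute force that $h_0$ is bent-negabent for $t = 1$, i.e.\ on six variables, by computing the unnormalized transforms $\W_f(\bm{\mathrm{w}}) = \sum_{\bm{\mathrm{z}}} (-1)^{f(\bm{\mathrm{z}}) + \bm{\mathrm{w}} \cdot \bm{\mathrm{z}}}$ and $\N_f(\bm{\mathrm{w}}) = \sum_{\bm{\mathrm{z}}} (-1)^{f(\bm{\mathrm{z}}) + \bm{\mathrm{w}} \cdot \bm{\mathrm{z}}} \imath^{\wt(\bm{\mathrm{z}})}$ at every point and testing that all their moduli equal $2^{n/2}$. (The verifications reported in this paper use SageMath, but nothing Sage-specific is needed here.)
\begin{verbatim}
# Brute-force bent-negabent check of h_0 in (h_0) for t = 1 (plain Python).
from itertools import product

I = [1, 1j, -1, -1j]     # i^w, indexed by w mod 4

def h0(Z, t):            # Z = (x_0,...,x_m, y_0,...,y_m) with m = 2t
    m = 2 * t
    X, Y = Z[:m + 1], Z[m + 1:]
    return (sum(X[i] * Y[i] for i in range(m + 1)) + X[0] * Y[m]
            + sum(Y[i] * Y[t + i] for i in range(t))) % 2

def is_bent_negabent(f, n):
    pts = list(product((0, 1), repeat=n))
    for w in pts:
        corr = [(-1) ** (f(z) + sum(a * b for a, b in zip(w, z))) for z in pts]
        W = sum(corr)                                          # Walsh-Hadamard
        N = sum(c * I[sum(z) % 4] for c, z in zip(corr, pts))  # nega-Hadamard
        if abs(W) != 2 ** (n // 2) or abs(N) != 2 ** (n // 2):
            return False
    return True

t = 1
print(is_bent_negabent(lambda Z: h0(Z, t), 4 * t + 2))   # expected: True
\end{verbatim}
In the following lemma, we show that $h_0$ is a bent-negabent function and provide its Walsh-Hadamard transform and nega-Hadamard transform in terms of $g_0$.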
\begin{lemma} \rm The function $h_0$ in (\ref{align:h_0}) is bent-negabent, and the Walsh-Hadamard transform and the nega-Hadamard transform of $h_0$ at $(\bm{\mathrm{U}}, \bm{\mathrm{V}}) \in \mathbb{F}_2^{4t+2}$ are respectively given by \begin{align} \W_{h_0}(\bm{\mathrm{U}}, \bm{\mathrm{V}}) =& 2^{2t+1} (-1)^{\bm{\mathrm{u}}'\cdot \bm{\mathrm{u}}'' + \bm{\mathrm{U}}\cdot \bm{\mathrm{V}} + u_m(v_0 + u_t)}, \label{align:WHT_h0} \\ \N_{h_0}(\bm{\mathrm{U}}, \bm{\mathrm{V}}) =& \N_{g_0}(\bm{\mathrm{u}}, \bm{\mathrm{v}}) \left[1 + \imath (-1)^{u_m} + (-1)^{u_0 + u_t + v_t + v_m} - \imath(-1)^{u_m + u_0 + u_t + v_t + v_m} \right], \label{align:h0_NegaHadaTrans} \end{align} where $g_0 \in \mathcal{B}_{4t}$ is defined in (\ref{align:g_0}) and $\N_{g_0}$ is given in (\ref{align:Nega_{g_0}}). \end{lemma} \begin{proof} The function $h_0$ can be rewritten as $h_0(\bm{\mathrm{X}}, \bm{\mathrm{Y}}) = \bm{\mathrm{X}} \cdot (y_0 + y_m, y_1, \cdots, y_m) + \bm{\mathrm{y}}' \cdot \bm{\mathrm{y}}''$. Hence, $h_0$ is a bent function in the Maiorana-McFarland class. It is easy to verify that the inverse of the permutation $(y_0 + y_m, y_1, \cdots, y_m)$ is itself. By (\ref{align:dual_MM}) and (\ref{align:Dual_def}), the Walsh-Hadamard transform of $h_0$ at $(\bm{\mathrm{U}}, \bm{\mathrm{V}}) \in \mathbb{F}_2^{4t+2}$ is given by \begin{align*} \W_{h_0}(\bm{\mathrm{U}}, \bm{\mathrm{V}}) =& 2^{2t+1} (-1)^{\bm{\mathrm{V}} \cdot (u_0 + u_m, u_1, \cdots, u_m) + (u_0+ u_m, u_1, \cdots, u_{t-1}) \cdot \bm{\mathrm{u}}''} \notag \\ =& 2^{2t+1} (-1)^{\bm{\mathrm{u}}'\cdot \bm{\mathrm{u}}'' + \bm{\mathrm{U}}\cdot \bm{\mathrm{V}} + u_m\cdot (v_0 + u_t)}. \end{align*} By (\ref{align:NHT}), the nega-Hadamard transform of $h_0$ at $(\bm{\mathrm{U}}, \bm{\mathrm{V}}) \in \mathbb{F}_2^{4t+2}$ is given by \begin{align*} \N_{h_0}(\bm{\mathrm{U}}, \bm{\mathrm{V}}) =& \sum_{(\bm{\mathrm{X}}, \bm{\mathrm{Y}}) \in \mathbb{F}_2^{4t+2}} (-1)^{\bm{\mathrm{X}} \cdot \bm{\mathrm{Y}}+ x_0y_m + \bm{\mathrm{y}}'\cdot \bm{\mathrm{y}}'' + \bm{\mathrm{U}}\cdot \bm{\mathrm{X}}+ \bm{\mathrm{V}} \cdot \bm{\mathrm{Y}}} \imath^{\wt(\bm{\mathrm{X}}, \bm{\mathrm{Y}})} \\ =& \sum_{(\bm{\mathrm{x}}, \bm{\mathrm{y}}) \in \mathbb{F}_2^{4t}} (-1)^{\bm{\mathrm{x}}\cdot \bm{\mathrm{y}} + \bm{\mathrm{y}}'\cdot \bm{\mathrm{y}}'' + \bm{\mathrm{u}}\cdot \bm{\mathrm{x}} + \bm{\mathrm{v}} \cdot \bm{\mathrm{y}}} \imath^{\wt(\bm{\mathrm{x}}, \bm{\mathrm{y}})} \sum_{x_m \in \mathbb{F}_2}(-1)^{u_m\cdot x_m} \imath^{\wt(x_m)} \sum_{y_m \in \mathbb{F}_2}(-1)^{(x_0 + v_m + x_m)\cdot y_m} \imath^{\wt(y_m)} \\ =& \sum_{(\bm{\mathrm{x}}, \bm{\mathrm{y}}) \in \mathbb{F}_2^{4t}} (-1)^{\bm{\mathrm{x}}\cdot \bm{\mathrm{y}} + \bm{\mathrm{y}}'\cdot \bm{\mathrm{y}}'' + \bm{\mathrm{u}}\cdot \bm{\mathrm{x}} + \bm{\mathrm{v}} \cdot \bm{\mathrm{y}}} \imath^{\wt(\bm{\mathrm{x}}, \bm{\mathrm{y}})} \left[1 + (-1)^{x_0 + u_m + v_m} + \imath (-1)^{u_m} + \imath (-1)^{x_0 + v_m}\right] \\ =& \N_{g_0}(\bm{\mathrm{u}}, \bm{\mathrm{v}}) + (-1)^{u_m + v_m}\N_{g_0}(\bm{\mathrm{u}}+ \bm{\mathrm{e}}_m^1, \bm{\mathrm{v}}) + \imath(-1)^{u_m}\N_{g_0}(\bm{\mathrm{u}}, \bm{\mathrm{v}}) + \imath(-1)^{v_m}\N_{g_0}(\bm{\mathrm{u}}+ \bm{\mathrm{e}}_m^1, \bm{\mathrm{v}}). 
\end{align*} From (\ref{align:Nega_{g_0}}) we know \begin{align*} \N_{g_0}(\bm{\mathrm{u}}+ \bm{\mathrm{e}}_m^1, \bm{\mathrm{v}}) =& 2^{2t}(-1)^{(\bm{\mathrm{u}}'+ \bm{\mathrm{v}}'+ \bm{\mathrm{e}}_t^1)\cdot (\bm{\mathrm{u}}''+ \bm{\mathrm{v}}'')} \imath^{t - \wt(\bm{\mathrm{u}}+ \bm{\mathrm{e}}_m^1)} \\ =& 2^{2t}(-1)^{(\bm{\mathrm{u}}'+ \bm{\mathrm{v}}')\cdot (\bm{\mathrm{u}}''+ \bm{\mathrm{v}}'') + u_t + v_t} \imath^{t - \wt(\bm{\mathrm{u}}) - 1 + 2\wt(u_0)} \\ =& (-1)^{u_t + v_t+ u_0 + 1} \imath \N_{g_0}(\bm{\mathrm{u}}, \bm{\mathrm{v}}). \end{align*} Then, (\ref{align:h0_NegaHadaTrans}) follows immediately. It is easy to verify that $|1 + \imath (-1)^{u_m} + (-1)^{u_0 + u_t + v_t + v_m} - \imath(-1)^{u_m + u_0 + u_t + v_t + v_m}| = 2$ for all $(u_m, u_0 + u_t + v_t + v_m)$ in $\F_2^2$. Hence, $h_0$ is negabent. \end{proof} Given a nonempty subset $S$ of $\mathbb{F}_2^{4t+2}$, our systematic construction of $(4t+2)$-variable Boolean functions by modifying the truth table of $h_0 \in \mathcal{B}_{4t+2}$ in (\ref{align:h_0}) is given by \begin{align} \label{align:h_BentNegabent} h(\bm{\mathrm{X}}, \bm{\mathrm{Y}}) = h_0(\bm{\mathrm{X}}, \bm{\mathrm{Y}}) + \chi_S(\bm{\mathrm{X}}, \bm{\mathrm{Y}}) = \begin{cases} h_0(\bm{\mathrm{X}}, \bm{\mathrm{Y}}) + 1,\ (\bm{\mathrm{X}}, \bm{\mathrm{Y}}) \in S, \\ h_0(\bm{\mathrm{X}}, \bm{\mathrm{Y}}),\ \hspace{0.7cm}\text{otherwise}. \end{cases} \end{align} In the following subsections, we will provide two methods for defining $S$ such that $h$ in (\ref{align:h_BentNegabent}) is a bent-negabent function. To avoid confusion, we will use $S_3$ and $S_4$ instead of $S$. \subsection{Bent-Negabent Functions on $4k+2$ Variables} In this subsection, let $k$ be an integer, $t = k$ and $m = 2t$. Given $\bm{\gamma} = (\bm{\gamma}_1, \bm{\gamma}_2) \in \F_2^{2k}$, where $\gmbu_i \in \F_2^k$ for $i=1, 2$, let $E_{\bm{\gamma}}$ be an arbitrary nonempty subset of $\mathbb{F}_2$, i.e., $E_{\bm{\gamma}} = \{0 \}, \{1 \}$ or $\mathbb{F}_2$. We shall define \begin{align} L_{\gmbu, E_{\gmbu}} = \{(\Xbu, \Ybu) \in \mathbb{F}_2^{4k+2} : (\xbu, \ybu) \in L_{\gmbu}, x_m \in \mathbb{F}_2, y_m \in E_{\gmbu} \}. \end{align} Let $\Gamma$ be a nonempty subset of $\mathbb{F}_2^{2k}$, and $S_3$ be a subset of $\mathbb{F}_2^{4k+2}$ defined by \begin{align} S_3 = \bigcup_{\gmbu \in \Gamma} L_{\gmbu, E_{\gmbu}}. \label{align:S3_set} \end{align} We have the following result, of which the proof will be given later. \begin{theorem} \rm \label{theorem:4k+2} Given the subset $S_3$ of $\mathbb{F}_2^{4k+2}$ defined in (\ref{align:S3_set}) and $h_0 \in \mathcal{B}_{4k+2}$ defined in (\ref{align:h_0}), the $(4k+2)$-variable function $h$ in (\ref{align:h_BentNegabent}) is bent-negabent. \end{theorem} In the following lemma, we give the fragmentary Walsh-Hadamard transform and the fragmentary nega-Hadamard transform of $h_0$ over $S_3$, whose proof is presented in Appendix. 
\begin{lemma} \rm \label{lemma:4k+2_WHT&NHT} Given the subset $S_3$ of $\mathbb{F}_2^{4k+2}$ defined in (\ref{align:S3_set}) and $h_0 \in \mathcal{B}_{4k+2}$ defined in (\ref{align:h_0}), the fragmentary Walsh-Hadamard transform and the fragmentary nega-Hadamard transform of $h_0$ over $S_3$ at $(\bm{\mathrm{U}}, \bm{\mathrm{V}}) \in \mathbb{F}_2^{4k+2}$ are respectively given by \begin{align} & \W_{h_0, S_3}(\bm{\mathrm{U}}, \bm{\mathrm{V}}) = \begin{cases} \W_{h_0}(\bm{\mathrm{U}}, \bm{\mathrm{V}}),\ \bm{\gamma}_2 = \bm{\mathrm{u}}'+ \bm{\mathrm{u}}'' + \bm{\mathrm{e}}_{k}^{u_m}, \bm{\gamma}_1 + \bm{\gamma}_2 = \bm{\mathrm{v}}' + \bm{\mathrm{v}}'' + \bm{1}_k, \\ \hspace{2.1cm} \text{and}\ u_m \in E_{\bm{\gamma}}\ \text{for}\ \gmbu \in \Gamma, \\ 0,\ \hspace{1.6cm} \text{otherwise}, \end{cases} \label{align:WHT_4k+2} \\ & \N_{h_0, S_3}(\bm{\mathrm{U}}, \bm{\mathrm{V}}) = \begin{cases} \frac{1}{2}(1+\imath (-1)^{u_0+u_k+v_k+v_m+u_m+\varepsilon}) \N_{h_0} (\Ubu, \Vbu),\ \text{if there is only}\\ \hspace{2.0cm} \text{one vector}\ \bm{\gamma} = (\bm{\gamma}_1, \bm{\gamma}_2) \in \Gamma\ \text{such that}\ \bm{\gamma}_1 = \bm{\mathrm{v}}' + \bm{\mathrm{v}}'',\ \text{and} \\ \hspace{2.0cm} \bm{\gamma}_1 + \bm{\gamma}_2 = \bm{\mathrm{u}}'+ \bm{\mathrm{u}}''+ \bm{1}_k + \bm{\mathrm{e}}_{k}^{\varepsilon},\ \text{where}\ \varepsilon \in E_{\bm{\gamma}}, \\ \N_{h_0} (\Ubu, \Vbu),\ \text{if there are two vectors}\ \bm{\gamma} = (\bm{\gamma}_1, \bm{\gamma}_2), \hat{\bm{\gamma}} = (\bm{\gamma}_1, \hat{\bm{\gamma}}_2) \in \Gamma\ \\ \hspace{2.0cm} \text{such that}\ \bm{\gamma}_1 = \bm{\mathrm{v}}' + \bm{\mathrm{v}}'',\ \bm{\gamma}_1 + \bm{\gamma}_2 = \bm{\mathrm{u}}'+ \bm{\mathrm{u}}''+ \bm{1}_k + \bm{\mathrm{e}}_{k}^{\varepsilon},\ \text{and} \\ \hspace{2.0cm} \bm{\gamma}_1 + \hat{\bm{\gamma}}_2 = \bm{\mathrm{u}}'+ \bm{\mathrm{u}}''+ \bm{1}_k + \bm{\mathrm{e}}_{k}^{\hat{\varepsilon}},\ \text{where}\ \varepsilon \in E_{\bm{\gamma}}\ \text{and}\ \hat{\varepsilon} \in E_{\hat{\bm{\gamma}}}, \\ 0,\ \hspace{1.6cm}\text{otherwise}. \end{cases} \label{align:NHT_4k+2} \end{align} \end{lemma} \textbf{Proof of Theorem \ref{theorem:4k+2}}: It is an immediate consequence of Lemma \ref{lemma:4k+2_WHT&NHT} and Theorem \ref{theorem:frame_construction}. {\hfill $\square$\par} The ANF of $h$ is given in the following theorem. \begin{theorem} \rm Given the set $S_3$ defined in (\ref{align:S3_set}), the ANF of $h\in \mathcal{B}_{4k+2}$ in (\ref{align:h_BentNegabent}) is given by \[ h(\bm{\mathrm{X}}, \bm{\mathrm{Y}}) = h_0(\bm{\mathrm{X}}, \bm{\mathrm{Y}}) + \sum_{\gmbu \in \Gamma} \left[\left(\sum_{\mbox{\tiny $\begin{array} {c} \bm{\mathrm{u}}' * \bm{\mathrm{u}}'' = \bm{0}_k \\ \bm{\mathrm{u}}' + \bm{\mathrm{u}}'' \succeq \bm{\gamma}_1 \end{array}$}} \bm{\mathrm{x}}^{\bm{\mathrm{u}}}\right) \left(\sum_{\mbox{\tiny $\begin{array} {c} \bm{\mathrm{v}}' * \bm{\mathrm{v}}'' = \bm{0}_k \\ \bm{\mathrm{v}}' + \bm{\mathrm{v}}'' \succeq \bm{\gamma}_2 \end{array}$}} \bm{\mathrm{y}}^{\bm{\mathrm{v}}} \right)\chi_{E_{\bm{\gamma}}}(y_m)\right], \] where $\chi_{E_{\bm{\gamma}}}$ is the characteristic function of $E_{\bm{\gamma}}$, i.e., $\chi_{E_{\bm{\gamma}}}(y_m) = \begin{cases} y_m,\ \hspace{0.6cm} E_{\bm{\gamma}} = \{1\}, \\ y_m + 1,\ E_{\bm{\gamma}} = \{0\}, \\ 1,\ \hspace{0.9cm} E_{\bm{\gamma}} = \mathbb{F}_2.
\end{cases}$ \end{theorem} \begin{proof} The characteristic function of $S_3$ in (\ref{align:S3_set}) is given by \[ \chi_{S_3}(\bm{\mathrm{X}}, \bm{\mathrm{Y}}) = \sum_{\gmbu \in \Gamma} \chi_{S_{\bm{\gamma}_1}}(\bm{\mathrm{x}}) \chi_{S_{\bm{\gamma}_2}}(\bm{\mathrm{y}})\chi_{E_{\bm{\gamma}}}(y_m), \] where $\chi_{S_{\bm{\gamma}_i}}$ for $i=1, 2$ are given by Lemma \ref{lemma:chi_S_beta}. Then the desired result follows. \end{proof} In the following corollary, we give the necessary and sufficient condition under which the algebraic degree of $h$ reaches the maximum. \begin{corollary} \rm \label{corollary:Deg_4k+2} Given the set $S_3$ defined in (\ref{align:S3_set}), the algebraic degree of $h\in \mathcal{B}_{4k+2}$ in (\ref{align:h_BentNegabent}) is $2k+1$ if and only if $\sum_{\bm{\gamma} \in \Gamma} |E_{\bm{\gamma}}|$ is odd. \end{corollary} \begin{proof} For the same reason as Corollary \ref{corollary:Deg_4k}, for any two vectors $\bm{\gamma} = (\bm{\gamma}_1, \bm{\gamma}_2)$ and $\bm{\theta} = (\bm{\theta}_1, \bm{\theta}_2)$ in $\Gamma$ satisfying $|E_{\bm{\gamma}}| = |E_{\bm{\theta}}| = 1$, we know that the functions $\chi_{S_{\bm{\gamma}_1}}(\bm{\mathrm{x}}) \chi_{S_{\bm{\gamma}_2}}(\bm{\mathrm{y}})\chi_{E_{\bm{\gamma}}}(y_m)$ and $\chi_{S_{{\thbu}_1}}(\bm{\mathrm{x}}) \chi_{S_{{\thbu}_2}}(\bm{\mathrm{y}})\chi_{E_{\thbu}}(y_m)$ have degree $2k+1$, and their monomial terms with degree $2k+1$ are the same. So, if $\sum_{\bm{\gamma} \in \Gamma} |E_{\bm{\gamma}}|$ is even, all the monomial terms with algebraic degree $2k+1$ are canceled. Hence, the algebraic degree of $h$ is $2k+1$ if and only if $\sum_{\bm{\gamma} \in \Gamma} |E_{\bm{\gamma}}|$ is odd. \end{proof} From (\ref{align:Dual_def}) and (\ref{align:WHT_h0}) we know $\tilde{h}_0 (\bm{\mathrm{X}}, \bm{\mathrm{Y}}) = \bm{\mathrm{X}} \cdot \bm{\mathrm{Y}} + \bm{\mathrm{x}}'\cdot \bm{\mathrm{x}}'' + x_m\cdot (x_k + y_0)$. The dual of $h$ is given in the following theorem. \begin{theorem} \rm Given the set $S_3$ defined in (\ref{align:S3_set}), the dual of $h\in \mathcal{B}_{4k+2}$ in (\ref{align:h_BentNegabent}) is also bent-negabent and given by \[ \tilde{h} (\bm{\mathrm{X}}, \bm{\mathrm{Y}}) = \bm{\mathrm{X}} \cdot \bm{\mathrm{Y}} + \bm{\mathrm{x}}'\cdot \bm{\mathrm{x}}'' + x_m\cdot (x_k + y_0) + \chi_{\tilde{S}_3}(\bm{\mathrm{X}}, \bm{\mathrm{Y}}), \] where $\tilde{S}_3$ is a subset of $\mathbb{F}_2^{4k+2}$ defined by \begin{align*} \tilde{S}_3 = \bigcup_{\gmbu \in \Gamma} \{(\bm{\mathrm{X}}, \bm{\mathrm{Y}}) \in \mathbb{F}_2^{4k+2} : (\xbu', \ybu') \in \mathbb{F}_2^{2k}, \bm{\mathrm{x}}'' = \bm{\mathrm{x}}' + \bm{\mathrm{e}}_{k}^{x_m} + \bm{\gamma}_2, x_m \in E_{\bm{\gamma}}, \bm{\mathrm{y}}'' = \bm{\mathrm{y}}' + \bm{1}_k + \bm{\gamma}_1 + \bm{\gamma}_2, y_m \in \mathbb{F}_2 \}. \end{align*} \end{theorem} Based on Theorem \ref{theorem:4k+2}, we show an example of a $10$-variable bent-negabent function with the maximum algebraic degree. \begin{example} \rm Let $k=2$, $\Gamma = \{(1, 0, 0, 0), (0, 1, 0, 1) \}$, $E_{(1, 0, 0, 0)} = \{1 \}$, and $E_{(0, 1, 0, 1)} = \mathbb{F}_2$. By (\ref{align:S3_set}), $S_3$ is given by \begin{align*} S_3 =& \{ (0, 0, 1, 0), (1, 0, 0, 0), (0, 1, 1, 1), (1, 1, 0, 1) \} \times \mathbb{F}_2 \times \{(0, 0, 0, 0), (1, 0, 1, 0), (0, 1, 0, 1), (1, 1, 1, 1) \} \times \{1 \} \\ & \cup \{(0, 0, 0, 1), (1, 0, 1, 1), (0, 1, 0, 0), (1, 1, 1, 0) \} \times \mathbb{F}_2 \times \{(0, 0, 0, 1), (1, 0, 1, 1), (0, 1, 0, 0), (1, 1, 1, 0) \} \times \mathbb{F}_2.
\end{align*} Using a SageMath program, we verified that the $10$-variable function generated by (\ref{align:h_BentNegabent}) is bent-negabent with algebraic degree $5$, and its ANF is given by \begin{center} \fcolorbox{white}{white}{ \parbox{.95\linewidth} {$h(x_0, \cdots, x_4, y_0, \cdots, y_4) = x_0x_1y_0y_1y_4 + x_0x_1y_0y_1 + x_0x_1y_0y_3y_4 + x_0x_1y_0y_3 + x_0x_1y_0y_4 + x_0x_1y_1y_2y_4 + x_0x_1y_1y_2 + x_0x_1y_1y_4 + x_0x_1y_1 + x_0x_1y_2y_3y_4 + x_0x_1y_2y_3 + x_0x_1y_2y_4 + x_0x_1y_3y_4 + x_0x_1y_3 + x_0x_1y_4 + x_0x_3y_0y_1y_4 + x_0x_3y_0y_1 + x_0x_3y_0y_3y_4 + x_0x_3y_0y_3 + x_0x_3y_0y_4 + x_0x_3y_1y_2y_4 + x_0x_3y_1y_2 + x_0x_3y_1y_4 + x_0x_3y_1 + x_0x_3y_2y_3y_4 + x_0x_3y_2y_3 + x_0x_3y_2y_4 + x_0x_3y_3y_4 + x_0x_3y_3 + x_0x_3y_4 + x_0y_0y_1y_4 + x_0y_0y_3y_4 + x_0y_0y_4 + x_0y_0 + x_0y_1y_2y_4 + x_0y_1y_4 + x_0y_2y_3y_4 + x_0y_2y_4 + x_0y_3y_4 + x_1x_2y_0y_1y_4 + x_1x_2y_0y_1 + x_1x_2y_0y_3y_4 + x_1x_2y_0y_3 + x_1x_2y_0y_4 + x_1x_2y_1y_2y_4 + x_1x_2y_1y_2 + x_1x_2y_1y_4 + x_1x_2y_1 + x_1x_2y_2y_3y_4 + x_1x_2y_2y_3 + x_1x_2y_2y_4 + x_1x_2y_3y_4 + x_1x_2y_3 + x_1x_2y_4 + x_1y_0y_1 + x_1y_0y_3 + x_1y_1y_2 + x_1y_2y_3 + x_1y_3 + x_2x_3y_0y_1y_4 + x_2x_3y_0y_1 + x_2x_3y_0y_3y_4 + x_2x_3y_0y_3 + x_2x_3y_0y_4 + x_2x_3y_1y_2y_4 + x_2x_3y_1y_2 + x_2x_3y_1y_4 + x_2x_3y_1 + x_2x_3y_2y_3y_4 + x_2x_3y_2y_3 + x_2x_3y_2y_4 + x_2x_3y_3y_4 + x_2x_3y_3 + x_2x_3y_4 + x_2y_0y_1y_4 + x_2y_0y_3y_4 + x_2y_0y_4 + x_2y_1y_2y_4 + x_2y_1y_4 + x_2y_2y_3y_4 + x_2y_2y_4 + x_2y_2 + x_2y_3y_4 + x_2y_4 + x_3y_0y_1 + x_3y_0y_3 + x_3y_1y_2 + x_3y_1 + x_3y_2y_3 + x_4y_4 + y_0y_2 + y_1y_3$. } } \end{center} \end{example} \subsection{Bent-Negabent Functions on $8k+2$ Variables } \label{subsection:8k+2} In this subsection, let $k$ be an integer, $t = 2k$ and $m = 2t$. Given $\bm{\gamma} = (\bm{\gamma}_1, \bm{\gamma}_2) \in \mathbb{F}_2^{4k}$, where $\gmbu_i \in \F_2^{2k}$ for $i=1, 2$, let $E_{\bm{\gamma}}$ be an arbitrary nonempty subset of $\mathbb{F}_2$, i.e., $E_{\bm{\gamma}} = \{0 \}, \{1 \}$ or $\mathbb{F}_2$. We shall define \begin{align} C_{\gmbu, A_2^{2k}, E_{\gmbu}} = \{(\Xbu, \Ybu)\in \mathbb{F}_2^{8k+2} : (\xbu, \ybu) \in C_{\gmbu, A_2^{2k}}, x_m \in \mathbb{F}_2, y_m \in E_{\gmbu} \}. \end{align} Let $\Gamma$ be a nonempty subset of $R_{A_2^{2k}}$. We shall define a subset $S_4$ of $\mathbb{F}_2^{8k+2}$ by \begin{align} S_4 = \bigcup_{\gmbu \in \Gamma} C_{\gmbu, A_2^{2k}, E_{\gmbu}}. \label{align:S4_set} \end{align} We have the following result, of which the proof will be given later. \begin{theorem} \rm \label{theorem:8k+2} Given the subset $S_4$ of $\mathbb{F}_2^{8k+2}$ defined in (\ref{align:S4_set}) and $h_0 \in \mathcal{B}_{8k+2}$ defined in (\ref{align:h_0}), the $(8k+2)$-variable function $h$ in (\ref{align:h_BentNegabent}) is bent-negabent. \end{theorem} The following lemma gives the fragmentary Walsh-Hadamard transform and the fragmentary nega-Hadamard transform of $h_0$ over $S_4$, whose lengthy proof is given in Appendix. 
\begin{lemma} \rm \label{lemma:8k+2_WHT&NHT} Given the subset $S_4$ of $\mathbb{F}_2^{8k+2}$ defined in (\ref{align:S4_set}) and $h_0 \in \mathcal{B}_{8k+2}$ defined in (\ref{align:h_0}), the fragmentary Walsh-Hadamard transform and the fragmentary nega-Hadamard transform of $h_0$ over $S_4$ at $(\bm{\mathrm{U}}, \bm{\mathrm{V}}) \in \mathbb{F}_2^{8k+2}$ are respectively given by \begin{align} & \W_{h_0, S_4}(\bm{\mathrm{U}}, \bm{\mathrm{V}}) = \begin{cases} \W_{h_0}(\bm{\mathrm{U}}, \bm{\mathrm{V}}),\ u_m \in E_{\bm{\gamma}}, \bm{\mathrm{u}} + \bm{\mathrm{e}}_{4k}^{u_m} \in C_{\gmbu}(A_2^{2k}), \bm{\mathrm{v}} \in C_{(\gmbu_2, \gmbu_1)} (A_2^{2k})\ \text{for}\ \gmbu \in \Gamma, \\ 0,\ \hspace{1.6cm} \text{otherwise}. \end{cases} \label{align:WHT_8k+2} \\ & \N_{h_0, S_4}(\bm{\mathrm{U}}, \bm{\mathrm{V}}) = \begin{cases} \frac{1}{2}(1+\imath (-1)^{u_0+u_{2k}+v_{2k}+v_m+u_m+\varepsilon}) \N_{h_0} (\Ubu, \Vbu),\\ \hspace{2.0cm} \varepsilon \in E_{\bm{\gamma}}, \bm{\mathrm{u}} + \gmbu+ \bm{\mathrm{e}}_{4k}^{\varepsilon} \in B_2^{2k}, \bm{\mathrm{v}} + \gmbu + (\bm{\gamma}_2, \bm{\gamma}_1) \in B_2^{2k}\ \text{for}\ \gmbu \in \Gamma, \\ 0,\ \hspace{1.6cm}\text{otherwise}. \end{cases} \label{align:NHT_8k+2} \end{align} \end{lemma} \textbf{Proof of Theorem \ref{theorem:8k+2}}: It follows from Lemma \ref{lemma:8k+2_WHT&NHT} and Theorem \ref{theorem:frame_construction} immediately. {\hfill $\square$\par} In the following theorem, we give the ANF of $h$. \begin{theorem} \rm Given the set $S_4$ defined in (\ref{align:S4_set}), the ANF of $h \in \mathcal{B}_{8k+2}$ in (\ref{align:h_BentNegabent}) is given by \[ h(\bm{\mathrm{X}}, \bm{\mathrm{Y}}) = h_0(\bm{\mathrm{X}}, \bm{\mathrm{Y}}) + \sum_{\gmbu \in \Gamma} \chi_{C_{\gmbu, A_2^{2k}}}(\xbu, \ybu)\chi_{E_{\gmbu}} (y_m), \] where $\chi_{C_{\gmbu, A_2^{2k}}}$ is given by Lemma \ref{lemma:chi_S_alpha}. \end{theorem} \begin{remark} \rm For the same reason as Remark \ref{remark:4k&8k}, we know that bent-negabent functions constructed from Theorem \ref{theorem:8k+2} are different from those on $8k+2$ variables constructed from Theorem \ref{theorem:4k+2}. \end{remark} In the following corollary, we give the necessary and sufficient condition such that the algebraic degree of $h$ reaches the maximum. The proof can be completed similarly to Corollary \ref{corollary:Deg_4k+2} and we omit it. \begin{corollary} \rm Given the set $S_4$ defined in (\ref{align:S4_set}), the algebraic degree of $h\in \mathcal{B}_{8k+2}$ in (\ref{align:h_BentNegabent}) is $4k+1$ if and only if $\sum_{\bm{\gamma} \in \Gamma} |E_{\bm{\gamma}}|$ is odd. \end{corollary} We have the following result on the dual of $h$. \begin{theorem} \rm Given the set $S_4$ defined in (\ref{align:S4_set}), the dual of $h \in \mathcal{B}_{8k+2}$ in (\ref{align:h_BentNegabent}) is also bent-negabent, and given by \[ \tilde{h} (\bm{\mathrm{X}}, \bm{\mathrm{Y}}) = \bm{\mathrm{X}}\cdot \bm{\mathrm{Y}} + \bm{\mathrm{x}}'\cdot \bm{\mathrm{x}}'' + x_m\cdot (x_{2k} + y_0) + \chi_{\tilde{S}_4}(\bm{\mathrm{X}}, \bm{\mathrm{Y}}), \] where $\tilde{S}_4$ is a subset of $\mathbb{F}_2^{8k+2}$ defined by \begin{align*} \tilde{S}_4 = \bigcup_{\gmbu \in \Gamma } \{(\bm{\mathrm{X}}, \bm{\mathrm{Y}}) \in \mathbb{F}_2^{8k+2} :\ & x_m \in E_{\bm{\gamma}}, \bm{\mathrm{x}} + \bm{\mathrm{e}}_{4k}^{x_m} \in C_{\gmbu}(A_2^{2k}), \bm{\mathrm{y}} \in C_{(\gmbu_2, \gmbu_1)} (A_2^{2k}), y_m \in \mathbb{F}_2 \}. 
\end{align*} \end{theorem} \section{Construction of 2-Rotation Symmetric Bent-Negabent Functions with Any Possible Algebraic Degrees} \label{section:2_RS_bent_negabent} In this section, we present a construction of 2-rotation symmetric bent-negabent functions with any possible algebraic degrees by modifying the truth tables of a class of quadratic 2-rotation symmetric bent-negabent functions. Let $\bm{\mathrm{x}} = (x_0, \cdots, x_{2k-1}), \bm{\mathrm{y}} = (y_0, \cdots, y_{2k-1}) \in \mathbb{F}_2^{2k}$. For simplicity, we shall denote \[ \begin{cases} \bm{\mathrm{x}}_{ev} = (x_0, x_2, \cdots, x_{2k-2}), \\ \bm{\mathrm{x}}_{od} = (x_1, x_3, \cdots, x_{2k-1}), \\ \bm{\mathrm{y}}_{ev} = (y_0, y_2, \cdots, y_{2k-2}), \\ \bm{\mathrm{y}}_{od} = (y_1, y_3, \cdots, y_{2k-1}). \end{cases} \] Let $f_0 \in \mathcal{B}_{4k}$ be a 2-rotation symmetric Boolean function, which is affine equivalent to $g_0 \in \mathcal{B}_{4k}$ in (\ref{align:g_0}), with the following ANF: \begin{align} f_0(\bm{\mathrm{x}}, \bm{\mathrm{y}}) = g_0(\bm{\mathrm{x}}_{ev}, \bm{\mathrm{y}}_{ev}, \bm{\mathrm{x}}_{od}, \bm{\mathrm{y}}_{od}) =& \bm{\mathrm{x}}_{ev} \cdot \bm{\mathrm{x}}_{od} + \bm{\mathrm{y}}_{ev} \cdot \bm{\mathrm{y}}_{od} + \bm{\mathrm{x}}_{od} \cdot \bm{\mathrm{y}}_{od} \notag \\ =& \sum_{i = 0}^{k-1} (x_{2i}x_{2i+1} + y_{2i}y_{2i+1} + x_{2i+1}y_{2i+1}). \label{align:f_0} \end{align} From \cite[Theorem 2]{Schmidt2008} we know that $f_0$ is also bent-negabent. Next, we propose a method for constructing bent-negabent functions by modifying the truth table of $f_0$. Let $\Gamma$ be a nonempty subset of $\mathbb{F}_2^{2k}$, and $T$ a subset of $\mathbb{F}_2^{4k}$ defined by \begin{align} T = \bigcup_{\bm{\gamma} \in \Gamma} \{(\bm{\mathrm{x}}, \bm{\mathrm{y}}) \in \mathbb{F}_2^{4k}: \bm{\mathrm{x}}\in \mathbb{F}_2^{2k},\ \bm{\mathrm{y}} = \bm{\mathrm{x}} + \bm{\gamma} \}. \label{align:T_set} \end{align} The following corollary follows from Theorem \ref{theorem:4k} immediately. \begin{corollary} \rm \label{corollary:f} Given the subset $T$ of $\mathbb{F}_2^{4k}$ defined in (\ref{align:T_set}), the function $f \in \mathcal{B}_{4k}$ defined by \begin{align} \label{align:f_BentNegabent} f(\bm{\mathrm{x}}, \bm{\mathrm{y}}) = f_0(\bm{\mathrm{x}}, \bm{\mathrm{y}}) + \chi_T(\bm{\mathrm{x}}, \bm{\mathrm{y}}) = \begin{cases} f_0(\bm{\mathrm{x}}, \bm{\mathrm{y}}) + 1,\ (\bm{\mathrm{x}}, \bm{\mathrm{y}}) \in T, \\ f_0(\bm{\mathrm{x}}, \bm{\mathrm{y}}),\ \hspace{0.7cm}\text{otherwise}, \end{cases} \end{align} is bent-negabent. \end{corollary} We shall denote the orbit generated by $\bm{\mathrm{x}} \in \mathbb{F}_2^{2k}$ by $O_{2k}(\bm{\mathrm{x}}) = \{\rho_{2k}^i(\bm{\mathrm{x}}) : 0 \le i < 2k \}$. We choose a representative element from every orbit, and denote the set of all representative elements by $R_{2k}$. For example, for $k=2$, if we select the lexicographically first element as the representative element of every orbit, then $R_4 = \{(0,0,0,0), (1,0,0,0), (1,1,0,0), (1,0,1,0), (1, 1, 1, 0), (1,1,1,1) \}$. Based on Corollary \ref{corollary:f}, we present a construction of 2-rotation symmetric bent-negabent functions in the following theorem. \begin{theorem} \rm \label{theorem:RS_BentNegabent} Let $P$ be an arbitrary nonempty subset of $R_{2k}$, and $ \Gamma = \bigcup_{\bm{\beta} \in P} O_{2k}(\bm{\beta}). $ Then the $4k$-variable function $f$ defined by (\ref{align:f_BentNegabent}) is a 2-rotation symmetric bent-negabent function.
\end{theorem} \begin{proof} From Corollary \ref{corollary:f} we know that $f$ is bent-negabent. Since $f_0$ is a 2-rotation symmetric function, to prove the 2-rotation symmetry of $f$ it is sufficient to prove that $\chi_T$ is a rotation symmetric function, that is to say, that the set $T$ is the union of some orbits. Suppose that $(\bm{\mathrm{x}}, \bm{\mathrm{y}}) \in \mathbb{F}_2^{4k}$ is an arbitrary element of $T$, and $\bm{\mathrm{y}} = \bm{\mathrm{x}} + \bm{\gamma}$, where $\bm{\gamma} = (\gamma_0, \cdots, \gamma_{2k-1}) \in \Gamma$. We have \begin{align*} \rho_{4k}^1(\bm{\mathrm{x}}, \bm{\mathrm{y}}) =& \rho_{4k}^1(x_0, \cdots, x_{2k-1}, y_0, \cdots, y_{2k-1}) \\ =& (x_1, \cdots, x_{2k-1}, y_0, y_1, \cdots, y_{2k-1}, x_0) \\ =& (x_1, \cdots, x_{2k-1}, y_0, x_1 + \gamma_1, \cdots, x_{2k-1} + \gamma_{2k-1}, y_0 + \gamma_0) \\ =& (x_1, \cdots, x_{2k-1}, y_0, (x_1, \cdots, x_{2k-1}, y_0) + \rho_{2k}^1(\bm{\gamma}) ). \end{align*} From the definition of $\Gamma$ we know $\rho_{2k}^1(\bm{\gamma})\in \Gamma$. Then, $\rho_{4k}^1(\bm{\mathrm{x}}, \bm{\mathrm{y}})$ is also an element of $T$. Hence, $T$ is the union of some orbits. This completes the proof. \end{proof} The ANF of $f$ in Theorem \ref{theorem:RS_BentNegabent} is given in the following theorem. \begin{theorem} \rm \label{theorem:ANF_RS_BentNegabent} The ANF of $f \in \mathcal{B}_{4k}$ in Theorem \ref{theorem:RS_BentNegabent} is given by \begin{align*} f(\bm{\mathrm{x}}, \bm{\mathrm{y}}) =& f_0(\bm{\mathrm{x}}, \bm{\mathrm{y}}) + \sum_{\bm{\beta} \in P} \sum_{\bm{\gamma} \in O_{2k}(\bm{\beta})} \sum_{\mbox{\tiny $\begin{array} {c} \bm{\mathrm{u}} * \bm{\mathrm{v}} = \bm{0}_{2k} \\ \bm{\mathrm{u}} + \bm{\mathrm{v}} \succeq \bm{\gamma} \end{array}$}} (\bm{\mathrm{x}}, \bm{\mathrm{y}})^{(\bm{\mathrm{u}}, \bm{\mathrm{v}})}. \end{align*} \end{theorem} In the following corollary, we give the necessary and sufficient condition under which the algebraic degree of $f$ in Theorem \ref{theorem:RS_BentNegabent} reaches the maximum. \begin{corollary} \rm The algebraic degree of $f \in \mathcal{B}_{4k}$ in Theorem \ref{theorem:RS_BentNegabent} is $2k$ if and only if $\sum_{\bm{\beta} \in P} |O_{2k}(\bm{\beta})|$ is odd. \end{corollary} By Theorem \ref{theorem:dual_4k}, we directly give the dual of $f$ in Theorem \ref{theorem:RS_BentNegabent}. \begin{theorem} \rm The dual of $f \in \mathcal{B}_{4k}$ in Theorem \ref{theorem:RS_BentNegabent} is also a 2-rotation symmetric bent-negabent function, and given by \[ \tilde{f}(\bm{\mathrm{x}}, \bm{\mathrm{y}}) = \bm{\mathrm{x}}_{ev} \cdot \bm{\mathrm{x}}_{od} + \bm{\mathrm{y}}_{ev}\cdot \bm{\mathrm{y}}_{od} + \bm{\mathrm{x}}_{ev}\cdot \bm{\mathrm{y}}_{ev} + \chi_{\tilde{T}}(\bm{\mathrm{x}}, \bm{\mathrm{y}}), \] where $\tilde{T}$ is a subset of $\mathbb{F}_2^{4k}$ defined by \[ \tilde{T} = \{(\bm{\mathrm{x}}, \bm{\mathrm{y}}) \in \mathbb{F}_2^{4k} : (\bm{\mathrm{x}}_{ev} + \bm{\mathrm{x}}_{od}+ \bm{\mathrm{y}}_{ev} + \bm{\mathrm{y}}_{od} + \bm{1}_{k}, \bm{\mathrm{x}}_{ev} + \bm{\mathrm{y}}_{ev}) \in \bigcup_{\bm{\beta} \in P} O_{2k}(\bm{\beta})\}. \] \end{theorem} In what follows, we give two simplified forms of $f$ in Theorem \ref{theorem:RS_BentNegabent}. We first present a result on the linear combination of $\sum_{\bm{\mathrm{u}} * \bm{\mathrm{v}} = \bm{0}_{2k}, \bm{\mathrm{u}} + \bm{\mathrm{v}} \succeq \bm{\gamma}} (\bm{\mathrm{x}}, \bm{\mathrm{y}})^{(\bm{\mathrm{u}}, \bm{\mathrm{v}})}$.
\begin{lemma} \rm (\cite[Lemma 5]{Su2017}) \label{lemma:Su_RS} For each $\bm{\alpha} \in R_{2k}$, there exists a nonempty subset $A_{\bm{\alpha}} \subseteq R_{2k}$ such that \begin{align*} \sum_{\mbox{\tiny $\begin{array} {c} \bm{\mathrm{u}} * \bm{\mathrm{v}} = \bm{0}_{2k} \\ \bm{\mathrm{u}} + \bm{\mathrm{v}} \in O_{2k}(\bm{\alpha}) \end{array}$}} (\bm{\mathrm{x}}, \bm{\mathrm{y}})^{(\bm{\mathrm{u}}, \bm{\mathrm{v}})} = \sum_{\bm{\beta} \in A_{\bm{\alpha}}} \sum_{\bm{\gamma} \in O_{2k}(\bm{\beta})} \sum_{\mbox{\tiny $\begin{array} {c} \bm{\mathrm{u}} * \bm{\mathrm{v}} = \bm{0}_{2k} \\ \bm{\mathrm{u}} + \bm{\mathrm{v}} \succeq \bm{\gamma} \end{array}$}} (\bm{\mathrm{x}}, \bm{\mathrm{y}})^{(\bm{\mathrm{u}}, \bm{\mathrm{v}})}. \end{align*} \end{lemma} From Theorem \ref{theorem:ANF_RS_BentNegabent} and Lemma \ref{lemma:Su_RS}, the first simplified form of $f$ in Theorem \ref{theorem:RS_BentNegabent} is given as follows. \begin{corollary}\rm \label{corollary:2RS_BN} Let $A$ be a nonempty subset of $R_{2k}$. The function $f \in \mathcal{B}_{4k}$ defined by \begin{align*} f(\bm{\mathrm{x}}, \bm{\mathrm{y}}) = f_0(\bm{\mathrm{x}}, \bm{\mathrm{y}}) + \sum_{\bm{\gamma} \in A} \left( \sum_{\mbox{\tiny $\begin{array} {c} \bm{\mathrm{u}} * \bm{\mathrm{v}} = \bm{0}_{2k} \\ \bm{\mathrm{u}} + \bm{\mathrm{v}} \in O_{2k}(\bm{\gamma}) \end{array}$}} (\bm{\mathrm{x}}, \bm{\mathrm{y}})^{(\bm{\mathrm{u}}, \bm{\mathrm{v}})} \right) \end{align*} is a 2-rotation symmetric bent-negabent function. \end{corollary} In Corollary \ref{corollary:2RS_BN}, if $A$ only contains one element, then we obtain the following corollary. \begin{corollary}\rm \label{corollary:2RS_BN_anydegree} Let $\bm{\gamma}$ be an arbitrary vector in $ \mathbb{F}_2^{2k}$ and $\wt(\bm{\gamma}) \ge 2$. The function $f \in \mathcal{B}_{4k}$ defined by \begin{align*} f(\bm{\mathrm{x}}, \bm{\mathrm{y}}) = f_0(\bm{\mathrm{x}}, \bm{\mathrm{y}}) + \sum_{\mbox{\tiny $\begin{array} {c} \bm{\mathrm{u}} * \bm{\mathrm{v}} = \bm{0}_{2k} \\ \bm{\mathrm{u}} + \bm{\mathrm{v}} \in O_{2k}(\bm{\gamma}) \end{array}$}} (\bm{\mathrm{x}}, \bm{\mathrm{y}})^{(\bm{\mathrm{u}}, \bm{\mathrm{v}})} \end{align*} is a 2-rotation symmetric bent-negabent function, and $\deg(f) = \wt(\bm{\gamma})$. \end{corollary} \begin{remark} \rm Using Corollary \ref{corollary:2RS_BN_anydegree}, we can easily obtain 2-rotation symmetric bent-negabent functions with any possible algebraic degrees ranging from $2$ to $2k$. \end{remark} Next, according to Corollary \ref{corollary:2RS_BN_anydegree}, we give an example of an $8$-variable 2-rotation symmetric bent-negabent function with the maximum algebraic degree. \begin{example} \rm In Corollary \ref{corollary:2RS_BN_anydegree}, let $k=2$ and $\bm{\gamma} = \bm{1}_4$. Then the ANF of $f$ is given by \begin{center} \fcolorbox{white}{white}{ \parbox{.9\linewidth} {$f(x_0, \cdots, x_3, y_0, \cdots, y_3) = f_0(x_0, \cdots, x_3, y_0, \cdots, y_3) + x_0x_1x_2x_3 + x_0x_1x_2y_3 + x_0x_1x_3y_2 + x_0x_2x_3y_1 + x_1x_2x_3y_0 + x_0x_1y_2y_3 + x_0x_2y_1y_3 + x_0x_3y_1y_2 + x_1x_2y_0y_3 + x_1x_3y_0y_2 + x_2x_3y_0y_1 + x_0y_1y_2y_3 + x_1y_0y_2y_3 + x_2y_0y_1y_3 + x_3y_0y_1y_2 + y_0y_1y_2y_3$,} } \end{center} which was verified to be a 2-rotation symmetric bent-negabent function by using a SageMath program. 
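A minimal plain-Python analogue of this SageMath verification is sketched below. It assumes the coordinate order $(x_0, \cdots, x_3, y_0, \cdots, y_3)$ and the left cyclic shift used in the proof of Theorem \ref{theorem:RS_BentNegabent}; the sixteen quartic terms are generated by choosing one of $x_j, y_j$ for each index $j$, which matches the ANF displayed above. In agreement with the verification reported above, all three lines print \texttt{True}.
\begin{verbatim}
# Plain-Python check of the example: f = f_0 + (all 16 monomials that pick
# one of x_j, y_j for each j) on 8 variables, ordered (x_0..x_3, y_0..y_3).
from itertools import product

I = [1, 1j, -1, -1j]

def f(z):
    x, y = z[:4], z[4:]
    f0 = sum(x[2*i]*x[2*i+1] + y[2*i]*y[2*i+1] + x[2*i+1]*y[2*i+1]
             for i in range(2))
    quartic = sum(a*b*c*d for a in (x[0], y[0]) for b in (x[1], y[1])
                  for c in (x[2], y[2]) for d in (x[3], y[3]))
    return (f0 + quartic) % 2

pts = list(product((0, 1), repeat=8))
print(all(f(z[2:] + z[:2]) == f(z) for z in pts))      # 2-rotation symmetry
for phase in (lambda z: 1, lambda z: I[sum(z) % 4]):   # bent, then negabent
    print(all(abs(sum((-1) ** (f(z) + sum(a*b for a, b in zip(w, z)))
                      * phase(z) for z in pts)) == 16 for w in pts))
\end{verbatim}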
\end{example} \section{Comparisons with Known Results} \label{section:comparison} In this section, we compare several relations between bent and negabent functions, and show that the constructions of bent-negabent functions in Sections \ref{section:bent_negabent_4k} and \ref{section:bent_negabent_4k+2} are not special cases of the third generic construction in \cite{Su2020}. Interestingly, all the characteristic functions $\chi_{S_i}$ for $i = 1, 2, 3, 4$ are negabent but not bent, and $\chi_T$ is a rotation symmetric negabent function that is not bent. (The proof is not difficult, and we do not include it in this paper.) Hence, in our constructions, the sum of a quadratic bent-negabent function and a negabent (but not bent) function gives rise to a bent-negabent function. Recall the important characterization of negabent functions: a function $f$ on $2k$ variables is negabent if and only if $f + \sigma_2$ is bent, where $\sigma_2$ is the $2k$-variable quadratic homogeneous symmetric function. We summarize these results in Table \ref{table:someresults} to highlight the differences in the negabent case. Next, we explain that our constructions of bent-negabent functions are not special cases of the third generic construction in \cite{Su2020}, which we first restate in the following theorem. \begin{table}[t] \caption{Some relations on bent and negabent functions} \begin{center} \label{table:someresults} \small \begin{tabular}{lll} \toprule $f$ & \ \ quadratic function $\delta$ & \ \ $f+\delta$ \cr \midrule bent & \ \ $\sigma_2$: bent not negabent & \ \ negabent \cr negabent & \ \ $\sigma_2$: bent not negabent & \ \ bent \cr $\chi_{S_1}$/$\chi_{S_2}$: \emph{negabent not bent} & \ \ $g_0$: \emph{bent-negabent} & \ \ \emph{bent-negabent} \cr $\chi_{S_3}$/$\chi_{S_4}$: \emph{negabent not bent} & \ \ $h_0$: \emph{bent-negabent} & \ \ \emph{bent-negabent} \cr \tabincell{l}{$\chi_T$: \emph{rotation symmetric} \\ \hspace{0.6cm} \emph{negabent not bent}} & \ \ \tabincell{l}{$f_0$: \emph{2-rotation symmetric} \\ \hspace{0.5cm} \emph{bent-negabent}} & \ \ \tabincell{l}{\emph{2-rotation symmetric} \\ \emph{bent-negabent}} \cr \bottomrule \end{tabular} \end{center} \end{table} \begin{theorem} \rm (\cite[Theorem 8]{Su2020}) \label{theorem:Su3Construction} Let $f_0 \in \mathcal{B}_{2m}$ be a bent function of the form (\ref{align:MM_class}), and $L$ be a linear subspace of $\F_2^{m}$, and $\Theta$ be a nonempty subset of $R_L$, where $R_L$ is a complete set of coset representatives of $L$ in $\F_2^{m}$. For $\xbu, \ybu \in \F_2^m$, define a subset $S$ of $\F_2^{2m}$ by \begin{align} \label{align:Su_S_set} S = \bigcup_{\thbu \in \Theta} \{(\xbu, \ybu) \in \F_2^{2m} : \xbu \in L, \ybu \in C_{\thbu}(L^{\perp}) \}. \end{align} Suppose that $\pi, \varphi$ and $L$ satisfy the following two conditions: \begin{enumerate} [C-1] \item $\pi$ is linear and satisfies $\pi(L^{\perp}) = L^{\perp}$, \item $\varphi(\albu + L^{\perp}) = \varphi(\albu)$ for all $\albu \in \F_2^m$. \end{enumerate} Then the function defined by $f(\xbu, \ybu) = f_0(\xbu, \ybu) + \chi_S(\xbu, \ybu)$ is bent. \end{theorem} Comparing with Theorem \ref{theorem:Su3Construction}, we now discuss our four constructions, i.e., Theorems \ref{theorem:4k}, \ref{theorem:8k_variable}, \ref{theorem:4k+2} and \ref{theorem:8k+2}, and show that none of them is a special case of Theorem \ref{theorem:Su3Construction}.
(i) {\bf Theorem \ref{theorem:4k}:} For $m = 2k$, we know that $g_0$ is a function in the Maiorana-McFarland class with $\pi(\ybu) = \ybu$, $\varphi(\ybu) = \ybu'\cdot \ybu''$. For $S_1$ in (\ref{align:S1_set}), $\{\xbu \in \F_2^{2k}: \xbu' \in \F_2^k, \xbu'' = \xbu' + \gmbu_1\}$ is not a linear subspace except for $\gmbu_1 = \bm{0}_k$. If $\gmbu_1 = \bm{0}_k$, i.e., $L = \{\xbu \in \F_2^{2k}: \xbu' = \xbu'' \in \F_2^k \}$, then $L^{\perp} = L$ and $\gmbu_2 = \bm{0}_k$. That is to say, $\Gamma = \Theta = \{\bm{0}_m \}$, and $S_1 = S = \{(\xbu, \ybu)\in \F_2^{4k} : \xbu' = \xbu'' \in \F_2^k, \ybu' = \ybu'' \in \F_2^k \}$. For this case, the condition C-1 is satisfied, while C-2 is not satisfied. For instance, let $k = 2$ and $\albu = (1, 0, 0, 0)$. We have $\albu + L^{\perp} = \{(1, 0, 0, 0), (0, 0, 1, 0), (1, 1, 0, 1), (0, 1, 1, 1) \}$, and $\varphi(\albu + L^{\perp}) = \F_2 \ne \varphi(\albu) = 0$. (ii) {\bf Theorem \ref{theorem:8k_variable}:} It is clear that the set $S_2$ in (\ref{align:S2_set}) is a special case of $S$ in (\ref{align:Su_S_set}) with $L = L^{\perp} = A_2^{2k}$. Then we know that the condition C-1 is satisfied while C-2 is not satisfied. For example, let $k = 1$ (i.e., $m=4$) and $\albu = (1, 0, 0, 0)$. Then, $\albu + A_2^{2k} = \{(1, 0, 0, 0), (0, 1, 0, 0), (1, 0, 1, 1), (0, 1, 1, 1) \}$, and $\varphi(\albu + A_2^{2k}) = \F_2 \ne \varphi(\albu) = 0$. (iii) {\bf Theorem \ref{theorem:4k+2}:} In accordance with the notation of Section \ref{section:bent_negabent_4k+2}, we rewrite $S$ in (\ref{align:Su_S_set}) as \begin{align} \label{align:Su_S_set2} S = \bigcup_{\thbu \in \Theta} \{(\Xbu, \Ybu) \in \F_2^{2m+2} : \Xbu \in L, \Ybu \in C_{\thbu}(L^{\perp}) \}. \end{align} We know $h_0$ is a bent function in the Maiorana-McFarland class with $\pi(\Ybu) = (y_0 + y_m, y_1, \cdots, y_m)$, $\varphi(\Ybu) = \ybu'\cdot \ybu''$. For a reason similar to (i), $S_3$ in (\ref{align:S3_set}) is a special case of $S$ in (\ref{align:Su_S_set2}) if and only if $\Gamma = \Theta = \{\bm{0}_m \}$, and $L = \{\Xbu \in \F_2^{2t+1} : \xbu' = \xbu'' \in \F_2^t, x_m \in \F_2 \}$ and $L^{\perp} = \{\Xbu \in \F_2^{2t+1} : \xbu' = \xbu'' \in \F_2^t, x_m=0 \}$. For this case, the condition C-1 is satisfied while C-2 is not satisfied. For example, let $k=2$ and $\albu = (1, 0, 0, 0, 0)$. Then we have $\albu + L^{\perp} = \{(1, 0, 0, 0, 0), (1, 1, 0, 1, 0), (0, 0, 1, 0, 0), (0, 1, 1, 1, 0) \}$ and $\varphi(\albu + L^{\perp}) = \F_2 \ne \varphi(\albu) = 0$. (iv) {\bf Theorem \ref{theorem:8k+2}:} It is obvious that $S_4$ in (\ref{align:S4_set}) is a special case of $S$ in (\ref{align:Su_S_set2}) with $L = A_2^{2k} \times \F_2$ and $L^{\perp} = A_2^{2k} \times \{0 \}$. Then the condition C-1 is satisfied while C-2 is not satisfied. For example, let $k = 1$, i.e., $m=4$, and $\albu = (1, 0, 0, 0, 0)$. Then, $\albu + A_2^{2k} \times \{0 \} = \{(1, 0, 0, 0, 0), (0, 1, 0, 0, 0), (1, 0, 1, 1, 0), (0, 1, 1, 1, 0) \}$, and $\varphi(\albu + A_2^{2k} \times \{0 \}) = \F_2 \ne \varphi(\albu) = 0$.
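The failure of condition C-2 in cases (i)--(iv) is also easy to confirm by machine. The short plain-Python sketch below recomputes the four cosets listed above and evaluates $\varphi(\ybu) = \ybu' \cdot \ybu''$ on each of them; for an input of odd length, the last coordinate is ignored, as required in the two $(2m+2)$-variable cases.
\begin{verbatim}
# Check of cases (i)-(iv): phi is non-constant on alpha + L_perp although
# phi(alpha) = 0, so condition C-2 fails in each case.
def phi(y):                 # phi(y) = y'.y''; odd length: last bit ignored
    h = len(y) // 2
    return sum(a * b for a, b in zip(y[:h], y[h:])) % 2

cases = {
    "(i)":   ((1,0,0,0),   {(0,0,0,0), (1,0,1,0), (0,1,0,1), (1,1,1,1)}),
    "(ii)":  ((1,0,0,0),   {(0,0,0,0), (1,1,0,0), (0,0,1,1), (1,1,1,1)}),
    "(iii)": ((1,0,0,0,0), {(0,0,0,0,0), (1,0,1,0,0), (0,1,0,1,0), (1,1,1,1,0)}),
    "(iv)":  ((1,0,0,0,0), {(0,0,0,0,0), (1,1,0,0,0), (0,0,1,1,0), (1,1,1,1,0)}),
}
for name, (alpha, L_perp) in cases.items():
    coset = {tuple((a + l) % 2 for a, l in zip(alpha, v)) for v in L_perp}
    print(name, sorted({phi(c) for c in coset}), "vs phi(alpha) =", phi(alpha))
    # every line prints [0, 1] vs phi(alpha) = 0
\end{verbatim}
\section{Concluding Remarks} \label{section:conclusion} In this paper, we have focused on systematic methods for constructing bent-negabent functions.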
We have discussed three sets of new constructions: (1) bent-negabent functions on $4k$ and $8k$ variables by using the sets $S_1$ in (\ref{align:S1_set}) and $S_2$ in (\ref{align:S2_set}) to modify the truth table of $g_0$ in (\ref{align:g_0}); (2) bent-negabent functions on $4k+2$ and $8k+2$ variables by using the sets $S_3$ in (\ref{align:S3_set}) and $S_4$ in (\ref{align:S4_set}) to modify the truth table of $h_0$ in (\ref{align:h_0}); (3) 2-rotation symmetric bent-negabent functions on $4k$ variables with any possible algebraic degrees by modifying the truth table of $f_0$ in (\ref{align:f_0}). We also identified the necessary and sufficient conditions under which the algebraic degrees of bent-negabent functions from (1) and (2) reach the maximum. The ANFs and duals of all these constructed functions were also determined. Moreover, none of our constructions of bent-negabent functions mentioned above is a special case of the generic constructions of bent functions in \cite{Su2020}. On the other hand, it has been proved in \cite{Schmidt2008} that the algebraic degree of any $n$-variable ($n$ even and $n > 6$) bent-negabent function in the Maiorana-McFarland class is at most $\frac{n}{2}-1$. Hence, our newly constructed bent-negabent functions with the maximum algebraic degree cannot be in the Maiorana-McFarland class. Whether they lie in or outside the completed Maiorana-McFarland class is an interesting problem deserving further research. \section*{Appendix} \textbf{Proof of Lemma \ref{lemma:4k+2_WHT&NHT}}: By (\ref{align:PWHT}), the fragmentary Walsh-Hadamard transform of $h_0$ over $S_3$ at $(\bm{\mathrm{U}}, \bm{\mathrm{V}}) \in \mathbb{F}_2^{4k+2}$ is given by \begin{align*} \W_{h_0, S_3}(\bm{\mathrm{U}}, \bm{\mathrm{V}}) =& \sum_{\gmbu \in \Gamma} \sum_{(\bm{\mathrm{X}}, \bm{\mathrm{Y}}) \in L_{\gmbu, E_{\gmbu}}} (-1)^{\bm{\mathrm{X}} \cdot \bm{\mathrm{Y}}+ x_0y_m + \bm{\mathrm{y}}'\cdot \bm{\mathrm{y}}'' + \bm{\mathrm{U}}\cdot \bm{\mathrm{X}}+ \bm{\mathrm{V}} \cdot \bm{\mathrm{Y}}} \\ =& \sum_{\gmbu \in \Gamma}(-1)^{\bm{\mathrm{u}}'' \cdot \bm{\gamma}_1 + \bm{\mathrm{v}}''\cdot \bm{\gamma}_2 + \bm{\gamma}_1 \cdot \bm{\gamma}_2} \sum_{\bm{\mathrm{x}}' \in \mathbb{F}_2^k} (-1)^{(\bm{\mathrm{u}}'+ \bm{\mathrm{u}}''+ \bm{\gamma}_2)\cdot \bm{\mathrm{x}}'} \sum_{\bm{\mathrm{y}}' \in \mathbb{F}_2^k} (-1)^{(\bm{\mathrm{v}}'+ \bm{\mathrm{v}}'' + \bm{1}_k + \bm{\gamma}_1 + \bm{\gamma}_2) \cdot \bm{\mathrm{y}}'} \\ & \hspace{1.0cm} \sum_{y_m \in E_{\bm{\gamma}}} (-1)^{(x_0 + v_m) \cdot y_m} \sum_{x_m \in \mathbb{F}_2}(-1)^{(u_m + y_m) \cdot x_m}. \end{align*} From Lemma \ref{lemma:ex_sum} we know \begin{align} \sum_{y_m \in E_{\bm{\gamma}}} (-1)^{(x_0 + v_m) \cdot y_m} \sum_{x_m \in \mathbb{F}_2}(-1)^{(u_m + y_m) \cdot x_m} = \begin{cases} 2(-1)^{(x_0 + v_m) \cdot u_m},\ u_m \in E_{\bm{\gamma}}, \\ 0,\ \hspace{2.3cm} u_m \notin E_{\bm{\gamma}}. \end{cases} \label{align:inter_Res2} \end{align} Let $\Theta(\bm{\mathrm{U}}, \bm{\mathrm{V}})$ be a subset of $\Gamma$ defined by $\Theta(\bm{\mathrm{U}}, \bm{\mathrm{V}}) = \{\bm{\gamma} \in \Gamma : u_m \in E_{\bm{\gamma}} \}$.
Then, we have \begin{align*} \W_{h_0, S_3}(\bm{\mathrm{U}}, \bm{\mathrm{V}}) =& 2 \sum_{\gmbu \in \Theta(\bm{\mathrm{U}}, \bm{\mathrm{V}})}(-1)^{\bm{\mathrm{u}}'' \cdot \bm{\gamma}_1 + \bm{\mathrm{v}}''\cdot \bm{\gamma}_2 + \bm{\gamma}_1 \cdot \bm{\gamma}_2 + u_m\cdot v_m} \sum_{\bm{\mathrm{x}}' \in \mathbb{F}_2^k} (-1)^{(\bm{\mathrm{u}}'+ \bm{\mathrm{u}}'' + \bm{\mathrm{e}}_{k}^{u_m} + \bm{\gamma}_2)\cdot \bm{\mathrm{x}}'} \\ & \hspace{1.0cm} \sum_{\bm{\mathrm{y}}' \in \mathbb{F}_2^k} (-1)^{(\bm{\mathrm{v}}'+ \bm{\mathrm{v}}'' + \bm{1}_k + \bm{\gamma}_1 + \bm{\gamma}_2) \cdot \bm{\mathrm{y}}'}. \end{align*} (1) If there does not exist a $\gmbu$ in $\Theta(\bm{\mathrm{U}}, \bm{\mathrm{V}})$ such that $\bm{\gamma}_2 = \bm{\mathrm{u}}'+ \bm{\mathrm{u}}'' + \bm{\mathrm{e}}_{k}^{u_m}$ and $\bm{\gamma}_1 + \bm{\gamma}_2 = \bm{\mathrm{v}}' + \bm{\mathrm{v}}'' + \bm{1}_k$, then we have $\W_{h_0, S_3}(\bm{\mathrm{U}}, \bm{\mathrm{V}}) = 0$. (2) If there exists a $\gmbu$ in $\Theta(\bm{\mathrm{U}}, \bm{\mathrm{V}})$ such that $\bm{\gamma}_2 = \bm{\mathrm{u}}'+ \bm{\mathrm{u}}'' + \bm{\mathrm{e}}_{k}^{u_m}$ and $\bm{\gamma}_1 + \bm{\gamma}_2 = \bm{\mathrm{v}}' + \bm{\mathrm{v}}'' + \bm{1}_k$, it holds that \begin{align*} \bm{\mathrm{u}}''\cdot \bm{\gamma}_1 + \bm{\mathrm{v}}'' \cdot \bm{\gamma}_2 + \bm{\gamma}_1 \cdot \bm{\gamma}_2 + u_m \cdot v_m =& \bm{\mathrm{u}}'' \cdot \bm{\gamma}_1 + (\bm{\mathrm{v}}' + \bm{\gamma}_2 + \bm{1}_k) \cdot \bm{\gamma}_2 + u_m \cdot v_m \\ =& \bm{\mathrm{u}}''\cdot \bm{\gamma}_1 + \bm{\mathrm{v}}' \cdot \bm{\gamma}_2 + u_m \cdot v_m \\ =& \bm{\mathrm{u}}'' \cdot (\bm{\mathrm{v}}' + \bm{\mathrm{v}}'' + \bm{1}_k) + (\bm{\mathrm{u}}''+ \bm{\mathrm{v}}')\cdot \bm{\gamma}_2 + u_m \cdot v_m \\ =& \bm{\mathrm{u}}'\cdot \bm{\mathrm{u}}'' + \bm{\mathrm{u}}'\cdot \bm{\mathrm{v}}' + \bm{\mathrm{u}}''\cdot \bm{\mathrm{v}}'' + u_m\cdot v_m + u_m \cdot (u_k + v_0) \\ =& \bm{\mathrm{u}}'\cdot \bm{\mathrm{u}}'' + \bm{\mathrm{U}} \cdot \bm{\mathrm{V}} + u_m \cdot (u_k + v_0). \end{align*} Then we have $\W_{h_0, S_3}(\bm{\mathrm{U}}, \bm{\mathrm{V}}) = 2^{2k+1} (-1)^{\bm{\mathrm{u}}'\cdot \bm{\mathrm{u}}'' + \bm{\mathrm{U}} \cdot \bm{\mathrm{V}} + u_m \cdot (u_k + v_0)} = \W_{h_0} (\bm{\mathrm{U}}, \bm{\mathrm{V}})$, by (\ref{align:WHT_h0}). Then (\ref{align:WHT_4k+2}) follows from the two cases discussed above. 
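Before turning to the nega-Hadamard part, we note that (\ref{align:WHT_4k+2}) can also be cross-checked numerically for small parameters. The plain-Python sketch below does so for $k = 1$ and one sample choice of $\Gamma$ and $E_{\bm{\gamma}}$; following the computation above, $L_{\gmbu}$ is taken to consist of the pairs $(\xbu, \ybu)$ with $\xbu'' = \xbu' + \bm{\gamma}_1$ and $\ybu'' = \ybu' + \bm{\gamma}_2$.
\begin{verbatim}
# Cross-check of (WHT_4k+2) for k = 1 (plain Python).  Variables are
# ordered (x_0, x_1, x_m, y_0, y_1, y_m) with t = k = 1 and m = 2.
from itertools import product

def h0(Z):
    x, y = Z[:3], Z[3:]
    return (x[0]*y[0] + x[1]*y[1] + x[2]*y[2] + x[0]*y[2] + y[0]*y[1]) % 2

Gamma = [(((1,), (0,)), {1})]        # one pair (gamma, E_gamma), E_gamma = {1}
S3 = {(x0, x0 ^ g1[0], xm, y0, y0 ^ g2[0], ym)
      for (g1, g2), Eg in Gamma for x0 in (0, 1) for y0 in (0, 1)
      for xm in (0, 1) for ym in Eg}

pts = list(product((0, 1), repeat=6))
def wht(dom, W):                     # (fragmentary) Walsh-Hadamard sum over dom
    return sum((-1) ** (h0(Z) + sum(a*b for a, b in zip(W, Z))) for Z in dom)

for W in pts:
    u0, u1, um, v0, v1, vm = W
    hit = any(g2[0] == (u0 + u1 + um) % 2        # gamma_2 = u' + u'' + e_k^um
              and (g1[0] + g2[0]) % 2 == (v0 + v1 + 1) % 2 and um in Eg
              for (g1, g2), Eg in Gamma)
    assert wht(S3, W) == (wht(pts, W) if hit else 0)
print("(WHT_4k+2) confirmed for this choice of S_3")
\end{verbatim}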
By (\ref{align:PNHT}), the fragmentary nega-Hadamard transform of $h_0$ over $S_3$ at $(\bm{\mathrm{U}}, \bm{\mathrm{V}}) \in \mathbb{F}_2^{4k+2}$ is given by \begin{align*} \N_{h_0, S_3} (\bm{\mathrm{U}}, \bm{\mathrm{V}}) =& \sum_{\gmbu \in \Gamma} \sum_{(\bm{\mathrm{X}}, \bm{\mathrm{Y}}) \in L_{\gmbu, E_{\gmbu}}} (-1)^{\bm{\mathrm{X}} \cdot \bm{\mathrm{Y}} + x_0y_m + \bm{\mathrm{y}}'\cdot \bm{\mathrm{y}}'' + \bm{\mathrm{U}}\cdot \bm{\mathrm{X}} + \bm{\mathrm{V}} \cdot \bm{\mathrm{Y}}} \imath^{\wt(\bm{\mathrm{X}}, \bm{\mathrm{Y}})} \\ =& \sum_{\gmbu \in \Gamma} \sum_{\bm{\mathrm{x}}' \in \mathbb{F}_2^k} (-1)^{\ubu \cdot (\bm{\mathrm{x}}', \bm{\mathrm{x}}'+ \bm{\gamma}_1)} \imath^{\wt(\bm{\mathrm{x}}', \bm{\mathrm{x}}'+ \bm{\gamma}_1)} \\ & \hspace{1.0cm} \sum_{\bm{\mathrm{y}}' \in \mathbb{F}_2^k} (-1)^{(\bm{\mathrm{x}}', \bm{\mathrm{x}}'+ \bm{\gamma}_1)\cdot (\bm{\mathrm{y}}', \bm{\mathrm{y}}'+ \bm{\gamma}_2)+ \bm{\mathrm{y}}'\cdot (\bm{\mathrm{y}}' + \bm{\gamma}_2) + \vbu \cdot (\bm{\mathrm{y}}', \bm{\mathrm{y}}' + \bm{\gamma}_2)} \imath^{\wt(\bm{\mathrm{y}}', \bm{\mathrm{y}}' + \bm{\gamma}_2)} \\ & \hspace{1.0cm} \sum_{y_m \in E_{\bm{\gamma}}}(-1)^{(x_0 + v_m)\cdot y_m} \imath^{\wt(y_m)} \sum_{x_m \in \mathbb{F}_2}(-1)^{(u_m+ y_m) \cdot x_m} \imath^{\wt(x_m)} \\ =& \sum_{\gmbu \in \Gamma}(-1)^{\bm{\mathrm{u}}'' \cdot \bm{\gamma}_1 + \bm{\mathrm{v}}''\cdot \bm{\gamma}_2 + \bm{\gamma}_1 \cdot \bm{\gamma}_2} \imath^{\wt(\bm{\gamma}_1) + \wt(\bm{\gamma}_2)} \sum_{\bm{\mathrm{x}}' \in \mathbb{F}_2^k} (-1)^{(\bm{\mathrm{u}}'+ \bm{\mathrm{u}}''+ \bm{1}_k + \bm{\gamma}_1 + \bm{\gamma}_2)\cdot \bm{\mathrm{x}}'} \\ & \hspace{1.0cm} \sum_{\bm{\mathrm{y}}' \in \mathbb{F}_2^k} (-1)^{(\bm{\mathrm{v}}'+ \bm{\mathrm{v}}'' + \bm{\gamma}_1) \cdot \bm{\mathrm{y}}'} \sum_{y_m \in E_{\bm{\gamma}}} [1 + \imath (-1)^{u_m + y_m} ](-1)^{(x_0 + v_m)\cdot y_m} \imath^{\wt(y_m)} \\ =& \sum_{\gmbu \in \Gamma} (-1)^{\bm{\mathrm{u}}'' \cdot \bm{\gamma}_1 + \bm{\mathrm{v}}''\cdot \bm{\gamma}_2 + \bm{\gamma}_1 \cdot \bm{\gamma}_2} \imath^{\wt(\bm{\gamma}_1) + \wt(\bm{\gamma}_2)} \sum_{\bm{\mathrm{y}}' \in \mathbb{F}_2^k} (-1)^{(\bm{\mathrm{v}}'+ \bm{\mathrm{v}}'' + \bm{\gamma}_1) \cdot \bm{\mathrm{y}}'} \\ & \hspace{1.0cm} \sum_{y_m \in E_{\bm{\gamma}}} [1+ \imath (-1)^{u_m + y_m} ](-1)^{v_m \cdot y_m} \imath^{\wt(y_m)} \sum_{\bm{\mathrm{x}}' \in \mathbb{F}_2^k} (-1)^{(\bm{\mathrm{u}}'+ \bm{\mathrm{u}}''+ \bm{1}_k + \bm{\gamma}_1 + \bm{\gamma}_2 + \bm{\mathrm{e}}_{k}^{y_m})\cdot \bm{\mathrm{x}}'}. \end{align*} For simplicity, we shall denote the whole summand corresponding to $\gmbu$ in the last expression, namely $(-1)^{\bm{\mathrm{u}}'' \cdot \bm{\gamma}_1 + \bm{\mathrm{v}}''\cdot \bm{\gamma}_2 + \bm{\gamma}_1 \cdot \bm{\gamma}_2} \imath^{\wt(\bm{\gamma}_1) + \wt(\bm{\gamma}_2)} \cdots$, by $T(\bm{\mathrm{U}}, \bm{\mathrm{V}}, \gmbu)$, so that \[ \N_{h_0, S_3}(\bm{\mathrm{U}}, \bm{\mathrm{V}}) = \sum_{\gmbu \in \Gamma} T(\bm{\mathrm{U}}, \bm{\mathrm{V}}, \gmbu). \] For $(\bm{\mathrm{U}}, \bm{\mathrm{V}}) \in \mathbb{F}_2^{4k+2}$ satisfying $\bm{\mathrm{v}}' + \bm{\mathrm{v}}'' = \bm{\gamma}_1$, we know that there exist at most two different vectors $\gmbu = (\gmbu_1, \gmbu_2)$ and $\hat{\gmbu} = (\gmbu_1, \hat{\bm{\gamma}}_2)$ in $\Gamma$ such that $\bm{\mathrm{u}}'+ \bm{\mathrm{u}}''+ \bm{1}_k + \bm{\gamma}_1 + \bm{\gamma}_2 + \bm{\mathrm{e}}_{k}^{\varepsilon} = \bm{0}_k$ and $\bm{\mathrm{u}}'+ \bm{\mathrm{u}}''+ \bm{1}_k + \bm{\gamma}_1 + \hat{\bm{\gamma}}_2 + \bm{\mathrm{e}}_{k}^{\hat{\varepsilon}} = \bm{0}_k$, where $\varepsilon \in E_{\gmbu}$ and $\hat{\varepsilon} \in E_{\hat{\gmbu}}$.
Moreover, if both $\bm{\gamma}, \hat{\bm{\gamma}}$ exist, then $\varepsilon + \hat{\varepsilon} = 1$. We consider the following three cases. (1) If there does not exist a $\bm{\gamma} \in \Gamma$ such that $\bm{\mathrm{v}}' + \bm{\mathrm{v}}'' = \bm{\gamma}_1, \varepsilon \in E_{\bm{\gamma}}$, and $\bm{\mathrm{u}}'+ \bm{\mathrm{u}}''+ \bm{1}_k + \bm{\gamma}_1 + \bm{\gamma}_2 + \bm{\mathrm{e}}_{k}^{\varepsilon} = \bm{0}_k$, then $T(\bm{\mathrm{U}}, \bm{\mathrm{V}}, \gmbu) = 0$ for all $\gmbu \in \Gamma$, by Lemma \ref{lemma:ex_sum}. Then we have $\N_{h_0, S_3}(\bm{\mathrm{U}}, \bm{\mathrm{V}}) = 0$. (2) If there exists only one $\bm{\gamma} \in \Gamma$ such that $\bm{\mathrm{v}}' + \bm{\mathrm{v}}'' = \bm{\gamma}_1$ and $\bm{\mathrm{u}}'+ \bm{\mathrm{u}}''+ \bm{1}_k + \bm{\gamma}_1 + \bm{\gamma}_2 + \bm{\mathrm{e}}_{k}^{\varepsilon} = \bm{0}_k$, where $\varepsilon \in E_{\bm{\gamma}}$, then we have \[ T(\bm{\mathrm{U}}, \bm{\mathrm{V}}, \bm{\gamma}) = 2^{2k}(-1)^{\bm{\mathrm{u}}''\cdot \bm{\gamma}_1 + \bm{\mathrm{v}}''\cdot \bm{\gamma}_2 + \bm{\gamma}_1 \cdot \bm{\gamma}_2 + v_m \cdot \varepsilon} \imath^{\wt(\bm{\gamma}_1) + \wt(\bm{\gamma}_2) + \wt(\varepsilon)} [1+ \imath (-1)^{u_m + \varepsilon} ]. \] By (\ref{align:inter_Res1}), it holds that \begin{align*} (-1)^{(\bm{\mathrm{u}}'+ \bm{\mathrm{v}}')\cdot (\bm{\mathrm{u}}''+ \bm{\mathrm{v}}'')} \imath^{k - \wt(\bm{\mathrm{u}})} =& (-1)^{\bm{\mathrm{u}}''\cdot \bm{\gamma}_1 + \bm{\mathrm{v}}'' \cdot (\bm{\gamma}_2 + \bm{\mathrm{e}}_{k}^{\varepsilon}) + \bm{\gamma}_1 \cdot (\bm{\gamma}_2 + \bm{\mathrm{e}}_{k}^{\varepsilon})} \imath^{\wt(\bm{\gamma}_1) + \wt(\bm{\gamma}_2 + \bm{\mathrm{e}}_{k}^{\varepsilon})} \\ =& (-1)^{\bm{\mathrm{u}}''\cdot \bm{\gamma}_1 + \bm{\mathrm{v}}'' \cdot \bm{\gamma}_2 + \bm{\gamma}_1 \cdot \bm{\gamma}_2 + (\gamma_{1, 0} + v_k)\cdot \varepsilon} \imath^{\wt(\bm{\gamma}_1) + \wt(\bm{\gamma}_2) + \wt(\varepsilon) - 2 \wt(\gamma_{2, 0} \cdot \varepsilon)} \\ =& (-1)^{\bm{\mathrm{u}}''\cdot \bm{\gamma}_1 + \bm{\mathrm{v}}'' \cdot \bm{\gamma}_2 + \bm{\gamma}_1 \cdot \bm{\gamma}_2 + (\gamma_{1, 0} + \gamma_{2, 0} + v_k) \cdot \varepsilon} \imath^{\wt(\bm{\gamma}_1) + \wt(\bm{\gamma}_2) + \wt(\varepsilon)} \\ =& (-1)^{\bm{\mathrm{u}}''\cdot \bm{\gamma}_1 + \bm{\mathrm{v}}'' \cdot \bm{\gamma}_2 + \bm{\gamma}_1 \cdot \bm{\gamma}_2 + (u_0 + u_k + v_k) \cdot \varepsilon} \imath^{\wt(\bm{\gamma}_1) + \wt(\bm{\gamma}_2) + \wt(\varepsilon)}, \end{align*} where $\bm{\gamma_1} = (\gamma_{1, 0}, \gamma_{1, 1}, \cdots, \gamma_{1, k-1}), \bm{\gamma_2} = (\gamma_{2, 0}, \gamma_{2, 1}, \cdots, \gamma_{2, k-1}) \in \mathbb{F}_2^k$. Let $g_0 \in \mathcal{B}_{4k}$ be defined in (\ref{align:g_0}). Together with (\ref{align:Nega_{g_0}}) we have \begin{align} \N_{h_0, S_3} (\Ubu, \Vbu) =& T(\bm{\mathrm{U}}, \bm{\mathrm{V}}, \gmbu) = (-1)^{(u_0 + u_k + v_k+ v_m) \cdot \varepsilon} [1+ \imath (-1)^{u_m + \varepsilon} ] \N_{g_0}(\bm{\mathrm{u}}, \bm{\mathrm{v}}) \notag \\ =& \frac{1}{2}[(1 + (-1)^\varepsilon)(1+\imath(-1)^{u_m}) + (1-(-1)^\varepsilon) (1-\imath(-1)^{u_m}) (-1)^{u_0+u_k+v_k+v_m}]\N_{g_0}(\ubu, \vbu) \notag \\ =& \frac{1}{2}(1+\imath (-1)^{u_0+u_k+v_k+v_m+u_m+\varepsilon}) \N_{h_0} (\Ubu, \Vbu). 
\label{align:T(UV)} \end{align} (3) If there exist two vectors $\bm{\gamma} = (\bm{\gamma}_1, \bm{\gamma}_2), \hat{\bm{\gamma}} = (\bm{\gamma}_1, \hat{\bm{\gamma}}_2)$ in $\Gamma$ such that $\bm{\mathrm{v}}' + \bm{\mathrm{v}}'' = \bm{\gamma}_1$, $\bm{\mathrm{u}}'+ \bm{\mathrm{u}}''+ \bm{1}_k + \bm{\gamma}_1 + \bm{\gamma}_2 + \bm{\mathrm{e}}_{k}^{\varepsilon} = \bm{0}_k$ and $\bm{\mathrm{u}}'+ \bm{\mathrm{u}}''+ \bm{1}_k + \bm{\gamma}_1 + \hat{\bm{\gamma}}_2 + \bm{\mathrm{e}}_{k}^{\hat{\varepsilon}} = \bm{0}_k$, where $\varepsilon \in E_{\bm{\gamma}}$ and $\hat{\varepsilon} \in E_{\hat{\bm{\gamma}}}$, then the fragmentary nega-Hadamard transform of $h_0$ over $S_3$ at $(\Ubu, \Vbu) \in \F_2^{4k+2}$ is \begin{align*} \N_{h_0, S_3}(\bm{\mathrm{U}}, \bm{\mathrm{V}}) = T(\bm{\mathrm{U}}, \bm{\mathrm{V}}, \gmbu) + T(\bm{\mathrm{U}}, \bm{\mathrm{V}}, \hat{\gmbu}). \end{align*} Together with (\ref{align:T(UV)}) and $\varepsilon + \hat{\varepsilon} = 1$, we have \begin{align*} \N_{h_0, S_3}(\bm{\mathrm{U}}, \bm{\mathrm{V}}) = \N_{h_0}(\Ubu, \Vbu). \end{align*} Hence, (\ref{align:NHT_4k+2}) follows immediately from the three cases discussed above. {\hfill $\square$\par} To prove Lemma \ref{lemma:8k+2_WHT&NHT}, we need the following lemma. \begin{lemma} \rm \label{lemma:Gamma_4} Let the notations $\Gamma$ and $E_{\gmbu}$ be the same as those in Subsection \ref{subsection:8k+2}. For $(\ubu, \vbu) \in \F_2^{8k}$, there is at most one $\gmbu \in \Gamma$ satisfying $\ubu + \gmbu + \bm{\mathrm{e}}_{4k}^{\varepsilon} \in B_2^{2k}$ and $\vbu + \gmbu + (\gmbu_2, \gmbu_1) \in B_2^{2k}$, where $\varepsilon \in E_{\bm{\gamma}}$. \end{lemma} \begin{proof} Suppose that there are two elements $\gmbu, \thbu \in \Gamma$ satisfying $\begin{cases} \ubu + \gmbu + \bm{\mathrm{e}}_{4k}^{\varepsilon} \in B_2^{2k}, \\ \ubu + \thbu + \bm{\mathrm{e}}_{4k}^{\kappa} \in B_2^{2k}, \end{cases}$ and $\begin{cases} \vbu + \gmbu + (\gmbu_2, \gmbu_1) \in B_2^{2k}, \\ \vbu + \thbu + (\thbu_2, \thbu_1) \in B_2^{2k}, \end{cases}$ where $\varepsilon \in E_{\bm{\gamma}}, \kappa \in E_{\thbu}$. Let $\xibu$ be an arbitrary element in $B_2^{2k}$. Then there are $\ldbu_1, \ldbu_2, \ldbu_3, \ldbu_4$ in $A_2^{2k}$ such that $\begin{cases} \ubu + \gmbu + \bm{\mathrm{e}}_{4k}^{\varepsilon} = \xibu + \ldbu_1, \\ \ubu + \thbu + \bm{\mathrm{e}}_{4k}^{\kappa} = \xibu + \ldbu_2, \end{cases}$ and $\begin{cases} \vbu + \gmbu + (\gmbu_2, \gmbu_1) = \xibu + \ldbu_3, \\ \vbu + \thbu + (\thbu_2, \thbu_1) = \xibu + \ldbu_4. \end{cases}$ We consider the following two cases. (1) Suppose $\varepsilon = \kappa$. Then we have $\gmbu + A_2^{2k} = \ubu + \bm{\mathrm{e}}_{4k}^{\varepsilon} + \xibu + A_2^{2k} = \thbu + A_2^{2k}$, which indicates $\gmbu = \thbu$ by the definition of $\Gamma$. (2) Suppose $\varepsilon \ne \kappa$, i.e., $\varepsilon + \kappa = 1$. Then we have $\begin{cases} \gmbu + \thbu + \bm{\mathrm{e}}_{4k}^1 \in A_2^{2k}, \\ \gmbu + \thbu + (\gmbu_2, \gmbu_1) + (\thbu_2, \thbu_1) \in A_2^{2k}, \end{cases} \Rightarrow (\gmbu_2, \gmbu_1) + (\thbu_2, \thbu_1) + \bm{\mathrm{e}}_{4k}^1 \in A_2^{2k} \Rightarrow \gmbu_1 + \thbu_1 \in A_2^k$. This contradicts $\gmbu + \thbu + \bm{\mathrm{e}}_{4k}^1 \in A_2^{2k}$, so this case cannot occur. The expected result then follows from the two cases discussed above.
\end{proof} \textbf{Proof of Lemma \ref{lemma:8k+2_WHT&NHT}}: By (\ref{align:PWHT}), the fragmentary Walsh-Hadamard transform of $h_0$ over $S_4$ at $(\bm{\mathrm{U}}, \bm{\mathrm{V}}) \in \mathbb{F}_2^{8k+2}$ is given by \begin{align*} \W_{h_0, S_4}(\bm{\mathrm{U}}, \bm{\mathrm{V}}) =& \sum_{\gmbu \in \Gamma} \sum_{(\bm{\mathrm{X}}, \bm{\mathrm{Y}}) \in C_{\gmbu, A_2^{2k}, E_{\gmbu}}} (-1)^{\bm{\mathrm{X}}\cdot \bm{\mathrm{Y}}+ x_0y_m + \bm{\mathrm{y}}'\cdot \bm{\mathrm{y}}'' + \bm{\mathrm{U}} \cdot \bm{\mathrm{X}} + \bm{\mathrm{V}} \cdot \bm{\mathrm{Y}}} \\ =& \sum_{\gmbu \in \Gamma} \sum_{\xbu \in A_2^{2k} } \sum_{\ztbu \in A_2^{2k} } (-1)^{\xbu \cdot (\gmbu + \ztbu) + (\gmbu_1 + \ztbu_1)\cdot (\gmbu_2 + \ztbu_2) + \ubu \cdot \xbu + \vbu \cdot (\gmbu + \ztbu) } \\ & \hspace{1.0cm} \sum_{x_m \in \mathbb{F}_2} \sum_{y_m \in E_{\bm{\gamma}}} (-1)^{(x_0 + x_m) \cdot y_m+ u_m\cdot x_m + v_m \cdot y_m} \\ =& \sum_{\gmbu \in \Gamma} (-1)^{\vbu \cdot \gmbu + \bm{\gamma}_1 \cdot \bm{\gamma}_2} \sum_{\ztbu \in A_2^{2k}}(-1)^{(\vbu + (\gmbu_2, \gmbu_1)) \cdot \ztbu} \sum_{\xbu \in A_2^{2k}}(-1)^{(\ubu + \gmbu ) \cdot \xbu} \\ & \hspace{1.0cm} \sum_{y_m \in E_{\bm{\gamma}}} (-1)^{(x_0 + v_m) \cdot y_m} \sum_{x_m \in \mathbb{F}_2}(-1)^{(u_m + y_m) \cdot x_m}, \end{align*} where $\ztbu = (\ztbu_1, \ztbu_2)$ with $\ztbu_i \in \F_2^{2k}$ for $i=1, 2$, and the second identity holds since $\ybu \in C_{\gmbu} (A_2^{2k})$ if and only if $\ybu= \gmbu + \bm{\zeta}$ for $\bm{\zeta} \in A_2^{2k}$, and the last identity holds by the fact that $\ztbu \cdot \bm{\mathrm{x}} = \bm{\zeta}_1 \cdot \bm{\zeta}_2 = 0$ for $\bm{\mathrm{x}}, \bm{\zeta} \in A_2^{2k}$. Let $\Phi(\bm{\mathrm{U}}, \bm{\mathrm{V}})$ be a subset of $\Gamma$ defined by $\Phi(\bm{\mathrm{U}}, \bm{\mathrm{V}}) = \{\bm{\gamma} \in \Gamma : u_m \in E_{\bm{\gamma}} \}$. By (\ref{align:inter_Res2}), we have \begin{align*} \W_{h_0, S_4}(\bm{\mathrm{U}}, \bm{\mathrm{V}}) =& 2\sum_{\bm{\gamma} \in \Phi(\bm{\mathrm{U}}, \bm{\mathrm{V}})} (-1)^{\vbu \cdot \gmbu + \bm{\gamma}_1 \cdot \bm{\gamma}_2 + u_m \cdot v_m} \sum_{\ztbu \in A_2^{2k}}(-1)^{(\vbu + (\gmbu_2, \gmbu_1)) \cdot \ztbu} \sum_{\xbu \in A_2^{2k}}(-1)^{(\ubu + \gmbu + \bm{\mathrm{e}}_{4k}^{u_m}) \cdot \xbu} \\ =& 2^{4k+1} \sum_{\gmbu \in \Gamma_3(\bm{\mathrm{U}}, \bm{\mathrm{V}})} (-1)^{\vbu \cdot \bm{\gamma} + \bm{\gamma}_1 \cdot \bm{\gamma}_2 + u_m \cdot v_m}, \end{align*} where $\Gamma_3(\bm{\mathrm{U}}, \bm{\mathrm{V}})$ is a subset of $\Phi(\bm{\mathrm{U}}, \bm{\mathrm{V}})$ defined by \begin{align*} \Gamma_3(\bm{\mathrm{U}}, \bm{\mathrm{V}}) = \{\gmbu \in \Phi(\bm{\mathrm{U}}, \bm{\mathrm{V}}) : \ubu + \bm{\mathrm{e}}_{4k}^{u_m} \in C_{\gmbu} (A_2^{2k}), \bm{\mathrm{v}} \in C_{(\gmbu_2, \gmbu_1)} (A_2^{2k}) \}, \end{align*} and the second identity holds by (\ref{align:ex_sum_A^k}). For $\gmbu \in \Gamma_3(\bm{\mathrm{U}}, \bm{\mathrm{V}})$, by (\ref{align:inter_Res3}), we know \begin{align*} \vbu \cdot \gmbu + \bm{\gamma}_1 \cdot \bm{\gamma}_2 + u_m \cdot v_m =& (\bm{\mathrm{u}}'+ \bm{\mathrm{e}}_{2k}^{u_m}) \cdot \bm{\mathrm{u}}'' + (\ubu + \bm{\mathrm{e}}_{4k}^{u_m}) \cdot \bm{\mathrm{v}} + u_m \cdot v_m \\ =& \bm{\mathrm{u}}'\cdot \bm{\mathrm{u}}'' + \bm{\mathrm{U}} \cdot \bm{\mathrm{V}} + u_m \cdot (u_{2k} + v_0). \end{align*} Together with (\ref{align:WHT_h0}), it holds that \[ \W_{h_0, S_4}(\bm{\mathrm{U}}, \bm{\mathrm{V}}) = |\Gamma_3(\bm{\mathrm{U}}, \bm{\mathrm{V}})| \W_{h_0} (\bm{\mathrm{U}}, \bm{\mathrm{V}}).
\] Similarly to the proof of $|\Gamma_1(\bm{\mathrm{u}}, \bm{\mathrm{v}})| \le 1$ in Lemma \ref{lemma:8k_WHT&NHT}, we can easily prove $|\Gamma_3(\bm{\mathrm{U}}, \bm{\mathrm{V}})| \le 1$. Then (\ref{align:WHT_8k+2}) follows immediately. By (\ref{align:PNHT}), the fragmentary nega-Hadamard transform of $h_0$ over $S_4$ at $(\bm{\mathrm{U}}, \bm{\mathrm{V}}) \in \mathbb{F}_2^{8k+2}$ is given by \begin{align*} \N_{h_0, S_4}(\bm{\mathrm{U}}, \bm{\mathrm{V}}) =& \sum_{\gmbu \in \Gamma} \sum_{(\bm{\mathrm{X}}, \bm{\mathrm{Y}}) \in C_{\gmbu, A_2^{2k}, E_{\gmbu}}} (-1)^{\bm{\mathrm{X}}\cdot \bm{\mathrm{Y}}+ x_0y_m + \bm{\mathrm{y}}'\cdot \bm{\mathrm{y}}'' + \bm{\mathrm{U}}\cdot \bm{\mathrm{X}}+ \bm{\mathrm{V}} \cdot \bm{\mathrm{Y}}} \imath^{\wt(\bm{\mathrm{X}}, \bm{\mathrm{Y}})} \\ =& \sum_{\gmbu \in \Gamma} \sum_{\ztbu \in A_2^{2k} } \sum_{\xbu \in A_2^{2k}} (-1)^{\xbu \cdot (\gmbu + \ztbu) + (\gmbu_1 + \ztbu_1) \cdot (\gmbu_2 + \ztbu_2) + \ubu \cdot \xbu + \vbu \cdot (\gmbu + \ztbu)} \imath^{\wt(\xbu) + \wt(\gmbu + \ztbu)} \\ & \hspace{1.0cm} \sum_{y_m \in E_{\bm{\gamma}}}(-1)^{(x_0 + v_m)\cdot y_m} \imath^{\wt(y_m)} \sum_{x_m \in \mathbb{F}_2}(-1)^{(u_m+ y_m) \cdot x_m} \imath^{\wt(x_m)} \\ =& \sum_{\gmbu \in \Gamma} (-1)^{\vbu \cdot \gmbu + \bm{\gamma}_1\cdot \bm{\gamma}_2} \imath^{\wt(\bm{\gamma})} \sum_{\ztbu \in A_2^{2k}}(-1)^{(\vbu + \gmbu + (\gmbu_2, \gmbu_1)) \cdot \ztbu} \imath^{\wt(\ztbu)} \sum_{\xbu \in A_2^{2k}}(-1)^{(\ubu + \gmbu) \cdot \xbu} \imath^{\wt(\xbu)} \\ & \hspace{1.0cm} \sum_{y_m \in E_{\bm{\gamma}}} [1 + \imath(-1)^{u_m + y_m}] (-1)^{(x_0 + v_m)\cdot y_m} \imath^{\wt(y_m)} \\ =& \sum_{\gmbu \in \Gamma} (-1)^{\vbu \cdot \gmbu + \bm{\gamma}_1\cdot \bm{\gamma}_2} \imath^{\wt(\bm{\gamma})} \sum_{\ztbu \in A_2^{2k}}(-1)^{(\vbu + \gmbu + (\gmbu_2, \gmbu_1)) \cdot \ztbu} \imath^{\wt(\ztbu)} \\ & \hspace{1.0cm} \sum_{y_m \in E_{\bm{\gamma}}} [1 + \imath(-1)^{u_m + y_m}] (-1)^{v_m\cdot y_m} \imath^{\wt(y_m)} \sum_{\xbu \in A_2^{2k}}(-1)^{(\ubu + \gmbu + \bm{\mathrm{e}}_{4k}^{y_m}) \cdot \xbu} \imath^{\wt(\xbu)}, \end{align*} where $\ztbu = (\ztbu_1, \ztbu_2)$ with $\ztbu_i \in \F_2^{2k}$ for $i=1, 2$, and the second identity holds since $\ybu \in C_{\gmbu} (A_2^{2k})$ if and only if $\ybu= \gmbu + \bm{\zeta}$ for $\bm{\zeta} \in A_2^{2k}$, and the third identity holds by the fact that $\ztbu \cdot \bm{\mathrm{x}} = \bm{\zeta}_1 \cdot \bm{\zeta}_2 = 0$ for $\bm{\mathrm{x}}, \bm{\zeta} \in A_2^{2k}$. Let $\Gamma_4(\bm{\mathrm{U}}, \bm{\mathrm{V}})$ be a subset of $\Gamma$ defined by \[ \Gamma_4(\bm{\mathrm{U}}, \bm{\mathrm{V}}) = \{\gmbu \in \Gamma : \varepsilon \in E_{\bm{\gamma}}, \ubu + \gmbu + \bm{\mathrm{e}}_{4k}^{\varepsilon} \in B_2^{2k}, \vbu + \gmbu + (\gmbu_2, \gmbu_1) \in B_2^{2k} \}. \] Then, by (\ref{align:nega_ex_sum_A^k}) we have \[ \N_{h_0, S_4}(\bm{\mathrm{U}}, \bm{\mathrm{V}}) = 2^{4k} \sum_{\gmbu \in \Gamma_4(\bm{\mathrm{U}}, \bm{\mathrm{V}})} (-1)^{\bm{\mathrm{v}} \cdot \bm{\gamma} + \bm{\gamma}_1\cdot \bm{\gamma}_2 + v_m \cdot \varepsilon} \imath^{\wt(\bm{\gamma}) + \wt(\varepsilon)}[1 + \imath(-1)^{u_m + \varepsilon}]. \] From Lemma \ref{lemma:Gamma_4}, we know $|\Gamma_4(\Ubu, \Vbu)| \le 1$ for all $(\Ubu, \Vbu) \in \F_2^{8k+2}$. (1) If $|\Gamma_4(\Ubu, \Vbu)| = 0$, then $\N_{h_0, S_4}(\Ubu, \Vbu) = 0$.
(2) If $|\Gamma_4(\Ubu, \Vbu)| = 1$, then for the unique $\gmbu \in \Gamma_4(\bm{\mathrm{U}}, \bm{\mathrm{V}})$, from (\ref{align:inter_Res4}) we have \begin{align*} (-1)^{\bm{\mathrm{v}} \cdot \bm{\gamma} + \bm{\gamma}_1 \cdot \bm{\gamma}_2} \imath^{\wt(\gmbu)} =& (-1)^{(\bm{\mathrm{u}}'+ \bm{\mathrm{v}}' + \bm{\mathrm{e}}_{2k}^{\varepsilon}) \cdot (\bm{\mathrm{u}}''+ \bm{\mathrm{v}}'')} \imath^{2k -\wt(\bm{\mathrm{u}} + \bm{\mathrm{e}}_{4k}^{\varepsilon})} \\ =& (-1)^{(\bm{\mathrm{u}}' + \bm{\mathrm{v}}') \cdot (\bm{\mathrm{u}}'' + \bm{\mathrm{v}}'') + (u_{2k} + v_{2k})\cdot \varepsilon} \imath^{2k - \wt(\bm{\mathrm{u}}) - \wt(\varepsilon) + 2\wt(u_0 \cdot \varepsilon)} \\ =& (-1)^{(\bm{\mathrm{u}}' + \bm{\mathrm{v}}') \cdot (\bm{\mathrm{u}}'' + \bm{\mathrm{v}}'') + (u_0 + u_{2k} + v_{2k})\cdot \varepsilon} \imath^{2k - \wt(\bm{\mathrm{u}}) - \wt(\varepsilon)}. \end{align*} Then the fragmentary nega-Hadamard transform of $h_0$ over $S_4$ at $(\Ubu, \Vbu) \in \F_2^{8k+2}$ is given by \begin{align*} \N_{h_0, S_4}(\bm{\mathrm{U}}, \bm{\mathrm{V}}) =& 2^{4k} (1 + \imath(-1)^{u_m + \varepsilon}) (-1)^{(\bm{\mathrm{u}}' + \bm{\mathrm{v}}') \cdot (\bm{\mathrm{u}}'' + \bm{\mathrm{v}}'') + (u_0 + u_{2k} + v_{2k} + v_m)\cdot \varepsilon } \imath^{2k - \wt(\bm{\mathrm{u}})} \\ =& (-1)^{(u_0 + u_{2k} + v_{2k} + v_m)\cdot \varepsilon } (1 + \imath(-1)^{u_m + \varepsilon}) \N_{g_0}(\bm{\mathrm{u}}, \bm{\mathrm{v}}) \\ =& \frac{1}{2}(1+\imath (-1)^{u_0+u_{2k}+v_{2k}+v_m+u_m+\varepsilon}) \N_{h_0} (\Ubu, \Vbu), \end{align*} where $g_0 \in \mathcal{B}_{8k}$ is the function defined in (\ref{align:g_0}), and the second identity holds by (\ref{align:Nega_{g_0}}). Therefore, (\ref{align:NHT_8k+2}) follows immediately from the two cases discussed above. {\hfill $\square$\par}
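Identities of this kind are easy to sanity-check numerically in small dimensions. The following is a minimal Python sketch, assuming the standard unnormalized conventions $\W_f(\ubu) = \sum_{\xbu \in \F_2^n} (-1)^{f(\xbu) + \ubu \cdot \xbu}$ and $\N_f(\ubu) = \sum_{\xbu \in \F_2^n} (-1)^{f(\xbu) + \ubu \cdot \xbu}\, \imath^{\wt(\xbu)}$; the fragmentary versions simply restrict the sum to a subset $S$, and the example function below is an illustrative choice, not one of the functions constructed above.
\begin{verbatim}
from itertools import product

def wt(x):                 # Hamming weight of a bit tuple
    return sum(x)

def dot(u, x):             # inner product over F_2
    return sum(a & b for a, b in zip(u, x)) % 2

def walsh(f, u, S):        # fragmentary Walsh-Hadamard transform over S
    return sum((-1) ** ((f(x) + dot(u, x)) % 2) for x in S)

def nega(f, u, S):         # fragmentary nega-Hadamard transform over S
    return sum((-1) ** ((f(x) + dot(u, x)) % 2) * 1j ** wt(x) for x in S)

# Example: f(x) = x_0 x_1 on F_2^2, summed over the full space.
full = list(product((0, 1), repeat=2))
f = lambda x: x[0] & x[1]
print([walsh(f, u, full) for u in full])   # [2, 2, 2, -2]
\end{verbatim}
Restricting \texttt{full} to a proper subset gives the fragmentary transforms, so case analyses like the ones above can be checked exhaustively for small $k$.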
{ "timestamp": "2022-09-20T02:23:27", "yymm": "2209", "arxiv_id": "2209.08712", "language": "en", "url": "https://arxiv.org/abs/2209.08712" }
\section{Introduction} \label{sec-intro} Due to their increased usage within myriad software applications, artificial intelligence algorithms now influence many aspects of people's lives, particularly when they are embedded into decision-support tools used by educators, government agencies, and various industry sectors. Thus, it is crucial that these algorithms be scrutinized to ensure fairness and to remove unjust biases. Bias has been shown to exist in several deployed AI systems, including the well-known Correctional Offender Management Profiling for Alternative Sanctions (COMPAS). COMPAS is an automated decision-making system used by the US criminal justice system for assessing a criminal defendant's likelihood of re-offending. By exploring the risk scores assigned to individuals, this system has been shown to be biased against African Americans \cite{chouldechova2017fair}. Other examples include a version of Google's targeted advertising system in which highly paid jobs were advertised more frequently to men than to women \cite{lambrecht2019algorithmic}. Bias in computer vision is a major problem, often stemming from the training datasets used for computer vision models \cite{tommasi2017deeper}. There is evidence suggesting the existence of multiple types of bias, including capture and selection bias, in popular image datasets \cite{torralba2011unbiased}. The problems arising from bias in computer vision can manifest in different ways. For instance, it has been observed that in activity recognition models, when the datasets contain gender bias, the bias is further amplified by the models trained on those datasets \cite{zhao2017men}. Face recognition models may exhibit lower accuracy for some racial or gender groups \cite{buolamwini2018gender}. Works such as \cite{wang2020revise,yang2020towards} suggest methods to mitigate bias in visual datasets. Several studies have deployed GANs for bias mitigation in image datasets. For example, \cite{sattigeri2019fairness} modified the value function of a GAN to generate fair image datasets. FairFaceGAN~\cite{hwang2020fairfacegan} implements a facial image-to-image translation, preventing unwanted translation in protected attributes. Ramaswamy et al. propose a model to produce training data that is balanced for each protected attribute by perturbing the latent vector of a GAN \cite{ramaswamy2021fair}. Other studies employing GANs for fair data generation include \cite{choi2020fair, sharmanska2020contrastive}. A variety of techniques beyond GANs have been applied to the problems of fairness in AI. A deep information maximization adaptation network was used to reduce racial bias in face image datasets~\cite{wang2019racial}, and reinforcement learning was used to learn a race-balanced network in \cite{wang2019mitigate}. Wang et al. propose a generative few-shot cross-domain adaptation algorithm to perform fair cross-domain adaptation and improve performance on the minority category \cite{wang2021towards}. The work in \cite{xu2021consistent} proposes adding a penalty term to the softmax loss function to mitigate bias and improve fairness performance in face recognition. Quadrianto et al. \cite{quadrianto2019discovering} propose a method to discover fair representations of data with the same semantic meaning as the input data. Adversarial learning has also been successfully deployed for this task \cite{zhang2018mitigating, wang2019balanced}.
This paper addresses the issue of a decision-making process being dependent on \emph{protected attributes}, where this dependence should ideally be avoided. From a legal perspective, a protected attribute is an attribute upon which discrimination is illegal \cite{pessach2020algorithmic}, e.g.\ gender or race. Let $D = (\mathcal{X},\mathcal{S},\mathcal{Y})$ be a dataset, where $\mathcal{X}$ represents the unprotected attributes, $\mathcal{S}$ is the protected attribute, and $\mathcal{Y}$ is the target attribute. If in the dataset $D$ the target attribute is not independent of the protected attribute ($\mathcal{Y} \not\perp \mathcal{S}$), then it is very likely that the decisions $\mathcal{\hat{Y}}$ made by a decision-making system trained on $D$ are also not independent of the protected attribute ($\mathcal{\hat{Y}} \not\perp \mathcal{S}$). We propose a model that reconstructs an image dataset so as to reduce the statistical dependency between a protected attribute and the target attribute. We modify a U-net \cite{ronneberger2015u} to reconstruct the image dataset, and apply the Hilbert-Schmidt norm of the cross-covariance operator \cite{gretton2005measuring} between reproducing kernel Hilbert spaces of the target attribute and the protected attribute as a measure of statistical dependence. Unlike many previous algorithms, our proposed method does not require training new classifiers on the unbiased data; instead, it reconstructs the images in a way that reduces the bias entailed in using the existing classifiers. In Section~\ref{sec-method} we present the problem, the notion of independence, and our proposed methodology. In Section~\ref{sec-experiments} we describe the CelebA dataset and the choice of feature categorization, introduce the baseline model with which we compare our results \cite{ramaswamy2021fair}, describe our model's implementation details, and finally present the experiments and results. Bias mitigation methods can be divided into three general categories: \emph{pre-process}, \emph{in-process}, and \emph{post-process}. Pre-process methods modify the training dataset before feeding it to the machine learning model. In-process methods add regularizing terms to penalize some representation of bias during the training process. Finally, post-process methods modify the final decisions of the classifiers \cite{hardt2016equality}. Kamiran and Calders~\cite{kamiran2012data} propose methods such as suppression, which removes attributes highly correlated with the protected attribute; reweighing, i.e.\ assigning weights to different instances in the data; and massaging the data to change the labels of some objects. Bias mitigation methods often come at the expense of losing some accuracy, and these preliminary methods usually entail a higher fairness-utility cost. More sophisticated methods with better results include using generative models to augment the biased training dataset with unbiased data \cite{ramaswamy2021fair}, or training the models on entirely synthetic unbiased data \cite{rajabi2021tabfairgan}. Wang et al.~\cite{wang2020towards} provide a set of analyses and a benchmark to evaluate and compare bias mitigation techniques in visual recognition models. \section{Methodology} \label{sec-method} Consider a dataset $D = (\mathcal{X},\mathcal{S},\mathcal{Y})$, where $\mathcal{X}$ is the set of images, $\mathcal{Y} = \{+1, -1\}$ is the target attribute, such as attractiveness, and $\mathcal{S} = \{A,B,C,\ldots\}$ is the protected attribute, such as gender.
Assume there exists a classifier $f:\mathcal{X} \rightarrow \mathcal{Y}$ such that the classifier's predictions for the target attribute are not independent of the protected attribute, i.e.\ $f(\mathcal{X}) \not\perp \mathcal{S}$. Our objective is to design a transformation $g:\mathcal{X} \rightarrow \widetilde{\mathcal{X}}$ such that 1) $f(\widetilde{\mathcal{X}}) \perp \mathcal{S}$, i.e.\ the classifier's predictions for the target attribute are independent of the protected attribute, and 2) $f(\widetilde{\mathcal{X}}) \approx f(\mathcal{X})$, i.e.\ the classifier still achieves high accuracy. In other words, we want to train a network that transforms our original images such that, when classifiers trained on the original, unmodified images are used to predict the target attribute (attractiveness in our example) from the transformed version of an image, they still achieve high accuracy, while the predictions of those classifiers are independent of the protected attribute (gender in our example). It should be noted that we are not seeking to train new classifiers, but rather only aim to modify the input images. This is a main distinction between our methodology and most other techniques (e.g.\ \cite{quadrianto2019discovering} and \cite{ramaswamy2021fair}), in which the process includes training new classifiers on modified image datasets to obtain \emph{fair classifiers}. Our proposed model consists of a U-net \cite{ronneberger2015u} as the neural network that transforms the original images. This type of network was originally proposed for medical image segmentation, and has been widely used since its introduction. The encoder-decoder network consists of two paths: a contracting path consisting of convolution and max pooling layers, and a consecutive expansive path consisting of upsampling of the feature map and convolutions. Contrary to \cite{ronneberger2015u}, where each image is provided with a segmented image label, we provide our U-net with the exact same image as the label, and alter the loss function from cross-entropy to mean squared error, so that the network is trained to produce an image as close to the original image as possible, in a pixel-wise manner. \begin{figure}[t!] \centering \includegraphics[width=0.7\linewidth,keepaspectratio]{network_highq.png} \caption{Our model consists of an encoder-decoder (U-net) and a double-output pre-trained ResNet classifier. First, the output batch of the U-net (reconstructed images) is compared with the original batch of images by calculating the MSE loss. Then, the output batch of the U-net passes through the ResNet, and the statistical dependency of the two output vectors is calculated by HSIC. The detailed architecture of the U-net is described in the supplementary material.} \label{fig-network} \end{figure} While some previous fairness studies consider \textit{decorrelating} the target attribute from the protected attributes, what must ultimately be sought, however, is independence between the protected attribute and the target attribute. Uncorrelatedness is a strictly weaker condition than independence: two random variables might have zero correlation and still be dependent (e.g.\ two random variables $A$ and $B$ with observed values $A=[-2,-1,0,1,2]$ and $B=[4,1,0,1,4]$ have zero covariance, but since $B = A^2$ they are clearly not independent).
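As a quick numerical sanity check of this example (a minimal NumPy sketch, not part of the experiments in this paper):
\begin{verbatim}
import numpy as np

# B = A**2 is a deterministic function of A, so A and B are dependent,
# yet their sample covariance is exactly zero.
A = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
B = A ** 2
print(np.cov(A, B)[0, 1])  # prints 0.0
\end{verbatim}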
Given a Borel probability distribution $\mathbf{P}_{ab}$ defined on a domain $\mathcal{A} \times \mathcal{B}$, and respective marginal distributions $\mathbf{P}_a$ and $\mathbf{P}_b$ on $\mathcal{A}$ and $\mathcal{B}$, independence of $a$ and $b$ ($a \rotatebox[origin=c]{90}{$\models$} b$) is equivalent to $\mathbf{P}_{ab}$ factorizing as the product of $\mathbf{P}_a$ and $\mathbf{P}_b$. Furthermore, two random variables $a$ and $b$ are independent if and only if all bounded continuous functions of the two random variables are uncorrelated \cite{gretton2005kernel}. Let $\mathcal{F}$ and $\mathcal{G}$ denote all real-valued functions defined on the domains $\mathcal{A}$ and $\mathcal{B}$ respectively. Gretton et al.~\cite{gretton2005measuring} define the Hilbert-Schmidt norm of the cross-covariance operator: \begin{equation} HSIC(\mathbf{P}_{ab}, \mathcal{F}, \mathcal{G}) \coloneqq ||C_{ab}||^2_{HS} \label{eq-HSIC-def} \end{equation} \noindent where $C_{ab}$ is the cross-covariance operator. They show that if $||C_{ab}||^2_{HS}$ is zero, then $\mathrm{cov}(f,g)$ will be zero for any $f \in \mathcal{F}$ and $g \in \mathcal{G}$, and therefore the random variables $a$ and $b$ will be independent. Furthermore, they show that if $\mathcal{Z} \coloneqq \{(a_1, b_1),\ldots,(a_n,b_n)\} \subseteq \mathcal{A} \times \mathcal{B}$ is a series of $n$ independent observations drawn from $\mathbf{P}_{ab}$, then a (biased) estimator of \textbf{HSIC} is \cite{gretton2005measuring}: \begin{equation} HSIC(\mathcal{Z},\mathcal{F}, \mathcal{G}) \coloneqq (n-1)^{-2}\mathbf{tr}(KHLH) \label{eq-HSIC-est} \end{equation} \noindent where $H,K,L \in \mathbb{R}^{n \times n}$, $K$ and $L$ are Gram matrices \cite{horn2012matrix}, $K_{ij} \coloneqq k(a_i, a_j)$, $L_{ij} \coloneqq l(b_i, b_j)$, $k$ and $l$ are universal kernels, and $H_{ij} \coloneqq \delta_{ij} - n^{-1}$ centers the observations in feature space. We use the Hilbert-Schmidt independence criterion to penalize the model for dependence between the target attribute and the protected attribute. \begin{figure*} \begin{center} \includegraphics[width=0.8\linewidth,keepaspectratio]{faces_celebA.jpg} \caption{Examples of CelebA dataset original images. Images in the first row are labeled \texttt{not Male} and images in the second row are labeled \texttt{Male}. In each row, the first three images are labeled \texttt{Attractive} and the last three images are labeled \texttt{not Attractive}.} \label{fig-faces-explanatory} \end{center} \end{figure*} \subsection{Training Loss Function} \label{sec:training} We seek to modify a set of images such that 1) the produced images are close to the original images, and 2) the predicted target attribute is independent of the predicted protected attribute. In the optimization problem, image quality (1) is measured by the pixel-wise MSE loss. For independence (2), consider our U-net network as a mapping from the original image to the transformed image, i.e.\ $U_w(\mathbf{x}) = \widetilde{\mathbf{x}}$. Consider also a function $h:\mathcal{X} \rightarrow [0,1]\times[0,1]$, where $h(\mathbf{x}_i) = (h_1(\mathbf{x}_i), h_2(\mathbf{x}_i)) = (\mathrm{P}(y_i=1|\mathbf{x}_i), \mathrm{P}(s_i=1|\mathbf{x}_i))$. Our objective is to train the parameters of $U_w$ such that $h_1(U_w(\mathbf{x})) \rotatebox[origin=c]{90}{$\models$} h_2(U_w(\mathbf{x}))$, i.e.\ $h_1(U_w(\mathbf{x}))$ is independent of $h_2(U_w(\mathbf{x}))$.
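For concreteness, the biased estimator in Equation~(\ref{eq-HSIC-est}) with Gaussian RBF kernels takes only a few lines. The PyTorch sketch below is illustrative rather than our exact implementation; in particular, the bandwidth $\sigma$ is an arbitrary choice here.
\begin{verbatim}
import torch

def rbf_gram(x, sigma=1.0):
    # Gram matrix K[i, j] = exp(-(x_i - x_j)^2 / (2 sigma^2))
    d2 = (x[:, None] - x[None, :]) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

def hsic_biased(a, b, sigma=1.0):
    # Biased estimator (n-1)^{-2} tr(KHLH); H is the centering matrix.
    n = a.shape[0]
    K, L = rbf_gram(a, sigma), rbf_gram(b, sigma)
    H = torch.eye(n) - torch.full((n, n), 1.0 / n)
    return torch.trace(K @ H @ L @ H) / (n - 1) ** 2
\end{verbatim}
Every operation above is differentiable, so the estimator can be used directly as the independence penalty on $h_1(\widetilde{X})$ and $h_2(\widetilde{X})$ in the loss function below.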
Given $X$ representing a batch of $N$ training images and $\widetilde{X}$ representing the transformed batch, our formal optimization problem is as follows: \begin{equation} \begin{split} \mathop{\mathrm{minimize}}_{U_w} &\underbrace{\frac{1}{NCWH} \sum_{n=1}^{N} \sum_{i,j,k} (\mathbf{x}_{ijk}^{n} - \widetilde{\mathbf{x}}_{ijk}^{n})^2}_\textrm{image accuracy} \\ &+ \lambda \times \; \underbrace{HSIC(h_1(\widetilde{X}), h_2(\widetilde{X}))}_\textrm{independence} \end{split} \end{equation} \noindent where $N$ is the number of samples, $C$ is the number of channels of an image, $W$ is the width of an image, $H$ is the height of an image, and $\lambda$ is the parameter that controls the trade-off between the accuracy of the transformed images and independence (fairness). In practice, the mapping function $U_w$ that we use is a U-net, and the function $h(\cdot)$ is a pre-trained classifier with two outputs $h_1$ and $h_2$, each being the output of a Sigmoid function within the range $[0,1]$, where $h_1 = \mathrm{P}(Y = 1|X)$ (a vector of size $N$) and $h_2=\mathrm{P}(S = 1|X)$ (also a vector of size $N$); $HSIC(\cdot,\cdot)$ denotes the Hilbert-Schmidt independence criterion. Figure~\ref{fig-network} shows the network architecture and a schematic of the training procedure. Consider a batch of original images $X$ entering the U-net. The U-net then produces the reconstructed images $U_w(X) = \widetilde{X}$. To calculate the \emph{image accuracy} part of the loss function, the original image batch $X$ is provided as the label and the mean squared error is calculated to measure the accuracy of the reconstructed images. The ResNet component in Figure~\ref{fig-network} is our $h(\cdot)$ function as described before: a pre-trained ResNet classifier that takes as input a batch of images and returns two probability vectors. The second part of the loss function, \emph{independence}, is calculated by feeding the reconstructed images $\widetilde{X}$ into this ResNet classifier and calculating the HSIC between the two resulting vectors. As noted before, the image dataset is reconstructed in such a way that using the reconstructed images with the original biased classifiers results in improved (fairer) classifications. This is dissimilar to some previous works such as \cite{ramaswamy2021fair} and \cite{quadrianto2019discovering}, in which the model training process includes augmenting the original dataset with generated images and training new fair classifiers \cite{ramaswamy2021fair}, or discovering fair representations of images and subsequently training new classifiers \cite{quadrianto2019discovering}. \section{Experiments} \label{sec-experiments} In this section, we test the methodology described in Section~\ref{sec-method} on the CelebA dataset \cite{liu2015faceattributes}. We first introduce the CelebA dataset and the attribute categories in CelebA. We then describe the implementation details of our model. Subsequently, we introduce the baseline models with which we compare our results, including the method described in Ramaswamy et al.~\cite{ramaswamy2021fair}. Finally, we introduce the evaluation metrics and present the results. \subsection{CelebA dataset} \label{sec-celeba} CelebA is a popular dataset that is widely used for training and testing models for face detection, particularly recognizing facial attributes. It consists of 202,599 face images of celebrities, with 10,177 identities.
Each image is annotated with 40 binary attributes describing the image, including attributes such as \texttt{Black\_Hair}, \texttt{Pale\_Skin}, \texttt{Wavy\_Hair}, \texttt{Oval\_Face}, \texttt{Pointy\_Nose}, and other attributes such as \texttt{Male}, \texttt{Attractive}, \texttt{Smiling}, etc. The CelebA dataset is reported to be biased \cite{zhang2018examining}. In this experiment, we consider the \texttt{Male} attribute as the protected attribute (with $\texttt{Male} = 0$ indicating the image does not belong to a man and $\texttt{Male} = 1$ indicating the image belongs to a man), and \texttt{Attractive} as the target attribute. We divide the dataset into train and test sets, with the train set containing 182,599 images and the test set containing 20,000 images. In the training set, $67.91\%$ of images with $\texttt{Male}=0$ are annotated as attractive ($\texttt{Attractive}=1$), while only $27.93\%$ of images with $\texttt{Male}=1$ are annotated as attractive ($\texttt{Attractive}=1$). This shows that bias exists against images with $\texttt{Male}=1$. In order to compare our results with \cite{ramaswamy2021fair}, we follow their categorization of CelebA attributes. Leaving out gender (\texttt{Male}) as the protected attribute, among the remaining 39 attributes in the CelebA dataset, \cite{ramaswamy2021fair} eliminates some attributes, such as \texttt{Blurry} and \texttt{Bald}, as they contain less than 5\% positive images. The remaining 26 attributes are subsequently categorized into three groups. \emph{Inconsistently-labeled} attributes are those for which, upon visually examining sets of examples, the authors often disagreed with the labeling and could not distinguish between positive and negative examples \cite{ramaswamy2021fair}. This group includes \texttt{Straight\_Hair, Big\_Lips, Big\_Nose, Oval\_Face, Pale\_Skin}, and \texttt{Wavy\_Hair}. The second group of attributes are called \emph{gender-dependent}; for these, the images are labeled to have (or not have) the attribute based on the perceived gender \cite{ramaswamy2021fair}.
These include \texttt{Young, Arched\_Eyebrows, Attractive, Bushy\_Eyebrows, Pointy\_Nose}, and \texttt{Receding\_Hairline}. Finally, the last group of attributes are called \emph{gender-independent}. These attributes are fairly consistently labeled and do not depend much on gender expression. This group includes \texttt{Black\_Hair, Bangs, Blond\_Hair, Brown\_Hair, Chubby, Wearing\_Earrings, Bags\_Under\_Eyes, Eyeglasses, Gray\_Hair, High\_Cheekbones, Mouth\_Slightly\_Open, Narrow\_Eyes, Smiling}, and \texttt{Wearing\_Hat}. \begin{figure*} \centering \includegraphics[width=0.8\linewidth,keepaspectratio]{faces.jpg} \caption{Examples of CelebA dataset images and how the model reconstructs them. The first row shows a set of images from the original testing set, and the second row shows the reconstructed images.} \label{fig-faces} \end{figure*} \subsection{Attribute classifiers} For the attribute classifiers, we use a ResNet-18 pre-trained on ImageNet, in which the last layer is replaced with a layer of size one, followed by a Sigmoid activation for binary classification. We train all models for 5 epochs with a batch size of 128. We use the Stochastic Gradient Descent optimizer with a learning rate of 1e-3 and momentum of 0.9. We use a step learning rate decay with a step size of 7 and a factor of 0.1. After training, we have 26 classifiers, each of which receives an image and performs a binary classification on its respective attribute. \subsection{Implementation details} \label{sec-implementationdetails} As shown in Figure~\ref{fig-network}, a ResNet-18 network is used to accompany the U-net to produce predictions for \texttt{Male} and \texttt{Attractive}. Prior to training the U-net, the ResNet-18, pre-trained on ImageNet \cite{russakovsky2015imagenet}, is modified by replacing its output layer with a layer of size two, outputting the probabilities of attractiveness and gender. The ResNet-18 is then trained for 5 epochs on the train set, with a batch size of 128. We use the Stochastic Gradient Descent optimizer with a learning rate of 1e-3 and momentum of 0.9. We use a step learning rate decay with a step size of 7 and a factor of 0.1. After the ResNet is trained and prepared, we train the U-net as described in Section~\ref{sec-method} on the train set. The detailed architecture of the U-net is described in the supplementary material. In our implementation of the biased HSIC estimator in Equation~\ref{eq-HSIC-est}, we use the Gaussian RBF kernel for $k(\cdot,\cdot)$ and $l(\cdot,\cdot)$. The training was conducted on a machine with two NVIDIA GeForce RTX 3090 GPUs, and each training of the U-net took 1 hour. When the training is complete, the U-net is ready to reconstruct images. Figure~\ref{fig-faces} shows six examples of how the U-net modifies the original images. We train our model for 5 epochs with $\lambda = 0.07$. \subsection{Comparison with baseline models} We compare our results with Ramaswamy et al.'s method, described in their paper `Fair Attribute Classification through Latent Space De-biasing' \cite{ramaswamy2021fair}. Building on work by \cite{denton2019image}, which demonstrates a method to learn interpretable image modification directions, they develop an improved method by perturbing the latent vector of a GAN to produce training data that is balanced for each protected attribute. By augmenting the original dataset with the generated data, they train target classifiers on the augmented dataset and show that these classifiers will be fair, with high accuracy.
The second model that we compare our results with is the explicit removal of biases from neural network embeddings, presented in \cite{alvi2018turning}. The authors provide an algorithm to remove multiple sources of variation from the feature representation of a network. This is achieved by including secondary branches in a neural network with the aim of minimizing a confusion loss, which in turn seeks to change the feature representation of the data such that it becomes invariant to the spurious variations one wishes to remove. We implement Ramaswamy et al.'s method as follows: as mentioned in their paper, we use a progressive GAN with a 512-D latent space, trained on the CelebA training set, from the PyTorch GAN Zoo. We use 10,000 synthetic images, labeled with a ResNet-18 (modified by adding a fully connected layer with 1,000 neurons). We then train a linear SVM to learn the hyper-planes in the latent space, as proposed in the original paper. We generate $\mathcal{X}_{syn}$ (160,000 images), a synthetic dataset which aims to de-bias \texttt{Male} from each of the 26 attributes, one by one. Next, we train ResNet-18 classifiers on the new datasets formed by augmenting $\mathcal{X}$ with $\mathcal{X}_{syn}$. We call this model \emph{GanDeb}. We use the implementation of \cite{alvi2018turning} with the uniform confusion loss $-(1/\vert D \vert) \sum_{d}{\log q_d}$ provided in \cite{wang2020towards}. \subsection{Evaluation metrics} In comparing the results of our model with the baseline models, we use three metrics. To capture the accuracy of the classifiers, we measure the \emph{average precision}. This metric combines precision and recall at every position and computes the average. A higher average precision (\textbf{AP}) is desired. To measure fairness, multiple metrics have been proposed in the literature \cite{mehrabi2021survey}. Among the most commonly used is \emph{demographic parity} (\textbf{DP}). This metric captures the disparity of receiving a positive decision among different protected groups ($|P(\hat{Y}=1|S=0) - P(\hat{Y}=1|S=1)|$). A smaller \textbf{DP} indicates a fairer classification and is desired. Finally, for our last fairness measure, we follow \cite{lokhande2020fairalm} and \cite{ramaswamy2021fair} and use the \emph{difference in equality of opportunity} (\textbf{DEO}), i.e.\ the absolute difference between the true positive rates for the two gender expressions ($|TPR(S=0) - TPR(S=1)|$). A smaller \textbf{DEO} is desired. \subsection{Results} \label{sec-results} All the values reported in this section are evaluated on the same test set. Prior to comparing our method with the baseline models, and in order to assess the original training data, we present the performance of classifiers trained on the original train set and tested on the test set. The AP, DP, and DEO values of classifiers trained on the original training set are shown in Table~\ref{tab-results} under \emph{Baseline}. Looking at the Baseline values, the AP of the classifiers for the gender-independent category of attributes is higher than for the gender-dependent category, and the AP for the inconsistent category is lower than for the other two. As expected, the DP and DEO values for the gender-dependent category of attributes are higher than for the other two categories. In Table~\ref{tab-results}, we compare our model with GAN Debiasing (GanDeb) \cite{ramaswamy2021fair}, Adversarial debiasing (AdvDb) presented in \cite{alvi2018turning}, and the Baseline on the original data.
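Before turning to the comparison, we note that the two fairness metrics defined above reduce to a few lines of code on binary prediction and label arrays; the following NumPy sketch is included for reference (it illustrates the definitions, and is not our evaluation code).
\begin{verbatim}
import numpy as np

def dp_gap(y_hat, s):
    # DP: |P(Y_hat = 1 | S = 0) - P(Y_hat = 1 | S = 1)|
    return abs(y_hat[s == 0].mean() - y_hat[s == 1].mean())

def deo_gap(y_hat, y, s):
    # DEO: |TPR(S = 0) - TPR(S = 1)|, TPRs computed on true positives
    tpr0 = y_hat[(s == 0) & (y == 1)].mean()
    tpr1 = y_hat[(s == 1) & (y == 1)].mean()
    return abs(tpr0 - tpr1)
\end{verbatim}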
Looking at the average precision scores, GanDeb performs slightly better than Ours. This is anticipated, since half of the training data for GanDeb consists of the original images, and therefore a higher average precision is expected. AdvDb, on the other hand, performs poorly in terms of average precision, with scores far below the other models. Looking at the demographic parity scores, GanDeb falls behind the other two models in two out of three attribute categories, while Ours performs best for the gender-dependent and gender-independent attribute categories. On the third fairness measure, difference in equality of opportunity, AdvDb and Ours perform better than GanDeb in all three categories of attributes. Ours beats AdvDb for the inconsistent category, AdvDb beats Ours in the gender-dependent category, and AdvDb slightly beats Ours for the gender-independent category. In summary, Ours is close to GanDeb in terms of maintaining high average precision scores, which means higher prediction accuracy, while beating GanDeb on the fairness metrics. Also, while AdvDb's performance in terms of fairness enforcement is better than Ours in 3 out of 6 cases, it falls behind significantly in terms of average precision. To explore the trade-off between fairness and precision, we perform the following experiment: $\lambda$ is increased over the range $[0.01, 0.15]$ in steps of 0.01, and for each value of $\lambda$ the model is trained three times, each time for 1 epoch. Figure~\ref{fig-ablation} shows how AP, DEO, and DP change. The results show that as $\lambda$ increases, precision decreases while the fairness measures improve. \begin{table*}[ht] \resizebox{\linewidth}{!}{% \begin{tabular}{cl|ccc|ccc|ccc|} \cline{3-11} \multicolumn{2}{c|}{\textbf{}} & \multicolumn{3}{c|}{$\textbf{AP} \uparrow$} & \multicolumn{3}{c|}{$\textbf{DP} \downarrow$} & \multicolumn{3}{c|}{$\textbf{DEO} \downarrow$} \\ \cline{3-11} \multicolumn{2}{c|}{} & \multicolumn{1}{c|}{Incons.} & \multicolumn{1}{c|}{G-dep} & \multicolumn{1}{c|}{G-indep} & \multicolumn{1}{c|}{Incons.} & \multicolumn{1}{c|}{G-dep} & \multicolumn{1}{c|}{G-indep} & \multicolumn{1}{c|}{Incons.} & \multicolumn{1}{c|}{G-dep} & \multicolumn{1}{c|}{G-indep} \\ \hline \multicolumn{2}{|c|}{Baseline} & \multicolumn{1}{c|}{0.667} & \multicolumn{1}{c|}{0.79} & \multicolumn{1}{c|}{0.843} & \multicolumn{1}{c|}{0.147} & \multicolumn{1}{c|}{0.255} & \multicolumn{1}{c|}{0.137} & \multicolumn{1}{c|}{0.186} & \multicolumn{1}{c|}{0.243} & \multicolumn{1}{c|}{0.163} \\ \hline \multicolumn{2}{|c|}{GanDeb} & \multicolumn{1}{c|}{0.641} & \multicolumn{1}{c|}{0.763} & \multicolumn{1}{c|}{0.831} & \multicolumn{1}{c|}{0.106} & \multicolumn{1}{c|}{0.233} & \multicolumn{1}{c|}{0.119} & \multicolumn{1}{c|}{0.158} & \multicolumn{1}{c|}{0.24} & \multicolumn{1}{c|}{0.142} \\ \hline \multicolumn{2}{|c|}{AdvDb} & \multicolumn{1}{c|}{0.243} & \multicolumn{1}{c|}{0.333} & \multicolumn{1}{c|}{0.218} & \multicolumn{1}{c|}{0.091} & \multicolumn{1}{c|}{0.169} & \multicolumn{1}{c|}{0.121} & \multicolumn{1}{c|}{0.136} & \multicolumn{1}{c|}{0.149} & \multicolumn{1}{c|}{0.098} \\ \hline \multicolumn{2}{|c|}{Ours} & \multicolumn{1}{c|}{0.618} & \multicolumn{1}{c|}{0.732} & \multicolumn{1}{c|}{0.839} & \multicolumn{1}{c|}{0.097} & \multicolumn{1}{c|}{0.146} & \multicolumn{1}{c|}{0.118} & \multicolumn{1}{c|}{0.124} & \multicolumn{1}{c|}{0.172} & \multicolumn{1}{c|}{0.114} \\ \hline \end{tabular} }
\caption{Comparing the results of our model with the Baseline, GAN debiasing (GanDeb), and Adversarial debiasing (AdvDb), showing AP (average precision, higher is better), DP (demographic parity, lower is better), and DEO (difference in equality of opportunity, lower is better) values for each attribute category. Each number is the average over all attributes within that specific attribute category.} \label{tab-results} \end{table*} \begin{figure*}[t!] \centering \includegraphics[width=1.0\linewidth,keepaspectratio]{ablation.png} \caption{Exploring the trade-off between accuracy and fairness by incrementally increasing the parameter $\lambda$. Each data point is the average over three trainings, with the standard deviation of the three trainings shown as confidence intervals.} \label{fig-ablation} \end{figure*} \subsection{Interpretation and the effect on other attributes} In this section, we aim to display the correspondence between an attribute's relationship with the \texttt{Attractive} attribute and the extent to which the model modifies that attribute. To do so, for each attribute we record two values, namely the HSIC value between that attribute and the \texttt{Attractive} attribute, and the change in demographic parity. To calculate the change in demographic parity, we first calculate the demographic parity of the classifier for that specific attribute when the classifier classifies the original testing set images (similar to \emph{Baseline} in previous tables, but for each attribute separately). We then calculate the demographic parity of the classifier for that specific attribute when the classifier receives the modified training images \textbf{Ours(5,0.07)}. We then subtract the two values to get the change in demographic parity for that specific attribute. Figure~\ref{fig-attributes} presents the results, with the red bars showing the change in demographic parity for each attribute, and the blue bars showing the statistical dependence, measured by HSIC, between each attribute and the \texttt{Attractive} attribute in the original training data. The results show that the absolute change in demographic parity is positively correlated with an attribute's statistical dependence on the attribute \texttt{Attractive}, with a Pearson correlation coefficient of 0.757. For instance, we observe large changes in demographic parity for attributes such as \texttt{Young, Big\_Nose, Pointy\_Nose, Oval\_Face}, and \texttt{Arched\_Eyebrows}, as they are typically associated with being attractive, and this is therefore reflected in the CelebA dataset labels. \begin{figure*}[t!] \centering \includegraphics[width=0.6\linewidth,keepaspectratio]{attributes_change_edited.png} \caption{Displaying the relationship between an attribute's statistical dependence on the \texttt{Attractive} attribute, and the extent to which the model modifies that attribute. Blue bars show the HSIC between each attribute and the \texttt{Attractive} attribute in the original data. Red bars show the absolute difference in demographic parity of each attribute's classifier, acting on the original and transformed images, respectively.} \label{fig-attributes} \end{figure*} \section{Conclusions} \label{sec-conclusions} We proposed an image reconstruction process to mitigate bias against a protected attribute. The model's performance was evaluated on the CelebA dataset and compared with an augmentation-based method developed by \cite{ramaswamy2021fair}.
The proposed model showed promising results in mitigating bias while maintaining high precision for the classifiers. An interesting aspect of the results is that although we only explicitly train the U-net to remove the dependence between the target attribute (\texttt{Attractive}) and the protected attribute (\texttt{Male}), the classifiers for many other attributes, most of which have a statistical dependency with the target attribute, become `fairer'. An advantage of the proposed model is that it does not rely on modifying downstream classifiers, but rather only modifies the input data, making it easier and cheaper to deploy in an automated machine learning pipeline. As a potential future direction, we intend to consider the problem in a situation where multiple protected attributes are present and the attributes are non-binary. We also intend to apply a similar methodology to other data types, such as tabular data.
{ "timestamp": "2022-09-20T02:21:19", "yymm": "2209", "arxiv_id": "2209.08648", "language": "en", "url": "https://arxiv.org/abs/2209.08648" }
\section*{Introduction} An additive invariant on varieties over a base field $k$ with values in an abelian group $A$ is a function $\mu:\{\mathrm{Varieties}/k\} \rto A$ such that for any closed immersion $Y \rcofib X$, $[X] = [Y] + [X \smallsetminus Y]$. Such an invariant must factor uniquely through a homomorphism from the Grothendieck ring of varieties, $K_0(\Var_k)$, defined by \[K_0(\Var_k) \defeq \begin{array}{c} \hbox{free ab. gp. gen.} \\ \hbox{by varieties over $k$} \end{array} \left/ \begin{array}{ll} {} {Y \rcofib^{\mathrm{closed}} X} \\ {} [X] = [Y] + [X \smallsetminus Y]. \end{array}\right.\] In other words, $K_0(\Var_k)$ is the \emph{universal} additive invariant: if all structure on $K_0(\Var_k)$ could be understood then all additive invariants would also be understood. Such an invariant is called \emph{multiplicative} if it takes values in a ring and satisfies the additional condition that $\mu(X\times Y) = \mu(X)\mu(Y)$. Defining the ring structure on $K_0(\Var_k)$ to be $[X][Y] = [X\times Y]$ implies that any multiplicative additive invariant must factor uniquely through a ring homomorphism from $K_0(\Var_k)$. The group $K_0(\Var_k)$ can be modeled topologically as the connected components of a $K$-theory spectrum $K(\Var_k)$, introduced in \cite{Z-Kth-ass}. The higher homotopy groups of this spectrum encode further geometric information about piecewise-isomorphisms of varieties. The group $K_1(\Var_k)$ can be thought of (by analogy with the $K$-theory of a ring) as a ``determinant'' for piecewise-automorphisms of varieties, in the following manner. Consider the determinant of a matrix over a field $F$. The determinant is a collection of homomorphisms $\det_n: GL_n(F) \rto F^\times$ for each positive integer $n$ satisfying the following conditions: \begin{description} \item[additivity] for positive integers $m$ and $n$, the following diagram commutes: \begin{diagram} { GL_m(F) \oplus GL_n(F) & GL_{m+n}(F) \\ F^\times \oplus F^\times & F^\times \\}; \cofib{1-1}{1-2} \to{1-1}{2-1}_{\det_m\oplus \det_n} \to{1-2}{2-2}^{\det_{m+n}} \to{2-1}{2-2}^\cdot \end{diagram} \item[initial conditions] the following computations hold: \[\textstyle{\det_2} \left(\begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}\right) = -1 \qqand \det|_{GL_1(F)} = \mathrm{id}.\] \end{description} In particular, the additivity of the determinant ensures \emph{stability}: the homomorphisms $\det_n$ extend to a homomorphism $GL(F) \rto F^\times$. If we generalize $F$ to a ring we have a choice about whether to enforce the initial conditions or not. If we enforce the initial conditions we can use the standard formula for the determinant and the additivity condition will hold; this determinant will take values in $R^\times$. However, it also makes sense to say that the condition we truly care about is additivity; in this case, we can say that the determinant should take values in the largest abelian group possible (ensuring that additivity still holds)---the group $GL(R)^{ab} = K_1(R)$. The fact that $K_1(R)$ is not necessarily isomorphic to $R^\times$, as it is for a field, demonstrates that for some base rings the determinant contains more information than a single unit. Moreover, although the determinant on $GL_1(R)$ is no longer the identity, there is a natural inclusion $GL_1(R) \rto K_1(R)$. For a more in-depth discussion of $K_1(R)$, see \cite[Chapter III]{kbook}. 
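In either setting, additivity is concretely the familiar block-diagonal identity: for $A \in GL_m(F)$ and $B \in GL_n(F)$, \[\textstyle{\det_{m+n}}\left(\begin{array}{cc} A & 0 \\ 0 & B \end{array}\right) = \textstyle{\det_m}(A) \cdot \textstyle{\det_n}(B),\] which is exactly what allows the individual homomorphisms $\det_n$ to assemble into a single homomorphism out of $GL(F)$.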
In the case of varieties there is a similar description of the determinant for automorphisms of varieties.\footnote{This definition and discussion generalize directly to piecewise-automorphisms of varieties; however, in the interest of readability we focus entirely on honest automorphisms here.} For a variety $X$, write $\Aut(X)$ for the group of automorphisms of $X$. The additivity condition for the determinant can be described as the fact that for every variety $X$ there is a homomorphism \[\textstyle{\det_X}:\Aut(X) \rto K_1(\Var_k)\] satisfying the additivity condition that the diagram \begin{diagram} { \Aut(X) \oplus \Aut(Y) & \Aut(X\amalg Y) \\ K_1(\Var_k) \oplus K_1(\Var_k) & K_1(\Var_k) \\}; \cofib{1-1}{1-2} \to{1-1}{2-1}_{\det_X\oplus \det_Y} \to{1-2}{2-2}^{\det_{X\amalg Y}} \to{2-1}{2-2} \end{diagram} commutes for any varieties $X$ and $Y$ over $k$. The initial conditions are more complicated, although the condition on the swap has a natural interpretation. Finite sets can be considered $0$-dimensional varieties, and this induces a map $\S \simeq K(\mathbf{Fin}) \rto K(\Var_k)$. The non-identity element in the group $K_1(\mathbf{Fin}) \cong \pi_1\S \cong \Z/2$ thus has an image in $K_1(\Var_k)$; this is the element which is the determinant of the ``swap''. Note that this agrees with the classical definition: if finite sets are mapped to vector spaces of the appropriate dimension, then the two-point swap is mapped to the matrix $\left(\begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}\right)$ and the induced map on $K_1$ is exactly the homomorphism $\Z/2 \rto F^\times$ taking the nontrivial element to $-1$. \begin{remark*} It is natural to ask what the analog of the computation of the determinant on $GL_1(F)$ could be. One possible candidate is the following: the group $\Aut(\mathbb{P}^1)$ contains as a subgroup $k^\times$, where $a\in k^\times$ represents the automorphism $[x:y] \rgoesto [ax:y]$. There is therefore an induced map $k^\times \rto K_1(\Var_k)$. The question of whether this map is injective is still open. \end{remark*} In order to analyze elements in $K_1(\Var_k)$ it is possible to ``lift'' invariants of $K_0(\Var_k)$. More rigorously, suppose that we are given a homomorphism $K_0(\Var_k) \rto K_0(\C)$ for some category $\C$ which has an associated $K$-theory spectrum (such as finite sets, projective $R$-modules, etc.). It is often possible to lift this homomorphism to a map of spectra $K(\Var_k) \rto K(\C)$; this map will induce a map $K_n(\Var_k) \rto K_n(\C)$ for all integers $n$. For example, when $k$ is finite, consider the usual local zeta function of a variety $X$, written in terms of the action of Frobenius on $\ell$-adic cohomology, \[Z(X,t) = \prod_{i=0}^{2\dim X} \det (1- t \Frob|_{H^i_c(\bar X, \Q_\ell)})^{(-1)^{i+1}}.\] The local zeta function induces a homomorphism $K_0(\Var_k) \rto K_0(\mathrm{GalRep}(\Q_\ell))$, where $\mathrm{GalRep}(\Q_\ell)$ is the category of finitely-generated continuous Galois representations. In \cite{CWZ-zeta} the authors lift this homomorphism to a map of spectra and use it to show that when $|k| \equiv 3 \pmod 4$ the group $K_1(\Var_k)$ contains elements which are not in the image of $K_1(\mathbf{Fin})$; in particular, they show that the element $[\mathbb{P}^1, x \mapsto 1/x]$ of $K_1(\Var_k)$ is not in the image of $K_1(\mathbf{Fin})$.
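For example, for $X = \mathbb{P}^1$ over $k = \mathbb{F}_q$, the compactly supported cohomology is concentrated in degrees $0$ and $2$, where Frobenius acts by $1$ and $q$ respectively, so \[Z(\mathbb{P}^1,t) = \frac{1}{(1-t)(1-qt)};\] the same answer is recovered from the point counts $|\mathbb{P}^1(\mathbb{F}_{q^n})| = q^n + 1$ via the exponential formula appearing below.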
However, the authors were not able to show that the induced map $K(\Var_k) \rto K(\mathrm{GalRep}(\Q_\ell))$ is a map of $E_\infty$-ring spectra; in less technical language, the authors could lift the group homomorphism but not the multiplicative structure to the map of spectra. Although the spectrum $K(\Var_k)$ was shown in \cite{campbell14} to have an $E_\infty$-structure, the method for constructing the map of spectra could not ensure that the map is compatible with this structure. In this paper, we develop alternate machinery for constructing an $E_\infty$-structure on $K(\Var_k)$ and use it to construct an $E_\infty$-version of the local zeta function. In order to ensure that the map is $E_\infty$ we use a different, more combinatorial, model of the local zeta function: \[Z(X,t) = \exp \sum_{n \geq 1} \frac{|X(\mathbb{F}_{q^n})|}{n} t^n,\] when $k = \mathbb{F}_q$. Noting that the set $X(\mathbb{F}_{q^n})$ is uniquely determined by the set $X(\bar{\mathbb{F}}_q)$ together with the action of Frobenius, we consider the local zeta function to be a map $K_0(\Var_k) \rto K_0(\mathbf{AFSet}_{\hat{\Z}})$. Here $\mathbf{AFSet}_{\hat{\Z}}$ is the category of \emph{almost-finite sets}; see Section~\ref{sec:afsets} for a rigorous definition. This gives rise to the following theorem: \begin{maintheorem}[Theorems~\ref{thm:Einfty} and \ref{thm:ringhom}] \label{main:Einfty} The spectra $K(\Var_k)$ and $K(\mathbf{AFSet}_{\hat{\Z}})$ are $E_\infty$-ring spectra. The local zeta function induces a ring homomorphism $K_*(\Var_k) \rto K_*(\mathbf{AFSet}_{\hat{\Z}})$. \end{maintheorem} This ring structure allows us to do a more in-depth analysis of the structure of $K_1(\Var_k)$. The $E_\infty$-structure on $K(\Var_k)$ induces a multiplication \[K_0(\Var_k) \otimes K_1(\mathbf{Fin}) \rto K_0(\Var_k) \otimes K_1(\Var_k) \rto K_1(\Var_k).\] The elements in the image of this map are those which can be represented by an automorphism which takes two copies of some variety $X$ and swaps them. The nontrivial element coming from $K_1(\mathbf{Fin})$ is one such element, but there are others as well. We call elements in this image \emph{permutative}, and those not in the image \emph{non-permutative}. The model of the local zeta function constructed in this paper allows us to conclude the following: \begin{maintheorem}[Corollary~\ref{cor:gen-root}] There exist non-permutative elements in $K_1(\Var_k)$ for all finite fields $k$ with odd characteristic. In particular, let $b = \mathrm{ord}_2(|k|-1)$, and let $\alpha\in k$ be a primitive $2^b$-th root of unity. Then the element $[\mathbb{P}^1, x \mapsto \alpha x]$ is a non-permutative element of $K_1(\Var_k)$. \end{maintheorem} \begin{remark*} Permutative elements exist in $K_n(\Var_k)$ for all positive $n$, not just $n=1$; the proof of \cite[Theorem 6.6]{CWZ-zeta} uses such elements to show that when $k$ is a subfield of $\CC$ there are infinitely many nontrivial groups $K_n(\Var_k)$. (For finite fields this is clear, as the map $X \rgoesto X(k)$ induces a map $K(\Var_k) \rto K(\mathbf{Fin})$ which splits the map $K(\mathbf{Fin})\rto K(\Var_k)$.) \end{remark*} The proofs of these theorems use the machinery of \emph{assemblers}, originally introduced in \cite{Z-Kth-ass}. Assemblers contain the combinatorial data of how different objects of interest decompose but strip out the specific data of the context.
The proofs of these theorems use the machinery of \emph{assemblers}, originally introduced in \cite{Z-Kth-ass}. Assemblers contain the combinatorial data of how the objects of interest decompose, but strip out the context-specific data. Their combinatorial nature makes them amenable to analysis, equipping them with d\'evissage and localization theorems analogous to Quillen's theorems for abelian categories. The assembler of varieties over $k$, generally denoted $\Var_k$ to be consistent with the notation for the Grothendieck ring, has as objects the varieties over $k$ and as morphisms locally closed immersions. It is also equipped with a pretopology generated by the coverage $\{Y \rcofib X, X\smallsetminus Y \rcofib X\}$ (where $Y \rcofib X$ is a closed immersion). In this paper we show that the assembler definition of the $K$-theory of varieties gives rise to an $E_\infty$-structure on the $K$-theory, by showing that the $K$-theory functor on assemblers is ``almost monoidal,'' taking monoid objects in assemblers to $E_\infty$-ring spectra. In fact, this can be generalized a bit further: not only monoid objects in the category of assemblers but also ``monoidal'' assemblers (assemblers equipped with an operation which is associative, commutative, and unital up to natural isomorphism) are taken to $E_\infty$-ring spectra. \begin{maintheorem} \label{thm:maina} The $K$-theory functor $K: \mathbf{Asm} \rto \Sp$ is monoidal; it takes symmetric monoidal assemblers to $E_\infty$-ring spectra and symmetric monoidal morphisms of assemblers to ring homomorphisms on $K$-groups. \end{maintheorem} For more rigorous definitions and theorem statements, see Section~\ref{sec:Kmonoidal}. Moreover, we show in Theorem~\ref{thm:K1mult} that the generators and relations given in \cite[Theorem B]{Z-ass-pi1} for $K_1(\Var_k)$ interact in a natural manner with the multiplicative structure. \subsection*{Organization} This paper is targeted towards those interested in the applications, including the derived $\zeta$-function. In the service of this, we front-load the applications and leave the proofs of the structural theorems for later sections. Section~\ref{sec:review} contains a quick review of assemblers and the results vital for an understanding of the applications. Section~\ref{sec:ex} gives several examples of interest to this paper. Section~\ref{sec:Gsets} analyzes non-permutative elements in $G$-sets; this analysis is used in Section~\ref{sec:zetamor} to detect non-permutative elements in $K(\Var_k)$, and Section~\ref{sec:zetamor} contains the main applications of this paper. The last four sections cover the technical underpinnings of Theorem~\ref{thm:maina}. Section~\ref{sec:technical} gives a run-down of the technical results necessary for the proofs. Section~\ref{sec:monoidal} proves that the claimed product on assemblers produces a symmetric monoidal structure. Section~\ref{sec:Kmonoidal} proves Theorem~\ref{thm:maina}. Lastly, Section~\ref{sec:K1} proves that the generators of $K_1$ interact with the monoidal structure in the expected manner. \subsection*{Acknowledgements} The author is grateful to Anna-Marie Bohmann, David Corwin, Brian Huang, Niles Johnson, and Angelica Osorno for helpful conversations and for answering all sorts of questions. The author would also like to extend special thanks to Thomas Barnet-Lamb for his extreme patience with all of the different variations of basic number theory questions that came up during the writing of this paper.
\section{A quick run-down of assemblers} \label{sec:review} \begin{definition}[{\cite[Definition 2.4]{Z-Kth-ass}}] \label{def:ass} In a Grothendieck site $\C$ with an initial object, a \emph{covering family} is a family of morphisms which generates a covering sieve. A family $\mathscr{F}$ is \emph{disjoint} if for any two distinct morphisms $f:A \rto C$ and $g: B \rto C$ in $\mathscr{F}$, $A\times_C B$ exists and is equal to the initial object. An \emph{assembler} is a Grothendieck site $\C$ satisfying the following extra conditions: \begin{itemize} \item[(I)] $\C$ has an initial object $\initial$, and $\initial$ has an empty covering family. \item[(R)] For any two finite disjoint covering families of an object $A$, there is a common refinement which is itself finite and disjoint. \item[(M)] All morphisms in $\C$ are monic. \end{itemize} An assembler is \emph{closed} if the category $\C$ has all pullbacks; note that in this case axiom (R) holds automatically. We generally assume that the initial object in $\C$ is unique, although this assumption does not affect any of the constructions in this paper. For an assembler $\C$, we write $\C^\circ$ for the full subcategory of noninitial objects of $\C$. A morphism of assemblers is a functor which preserves the initial objects and disjointness and which is continuous with respect to the topology. Write $\mathbf{Asm}$ for the category of assemblers, and $c\mathbf{Asm}$ for the subcategory of closed assemblers and pullback-preserving morphisms of assemblers. \end{definition} The main examples of assemblers appearing in this paper are the following: \begin{example} Let $G$ be a discrete group. The assembler $\S^\mathbf{Asm}_G$ has two objects, $\initial$ and $*$, with one morphism $\initial \rto *$. In addition, $*$ has automorphism group $G$. The topology is generated by the trivial covering families on $\initial$ and $*$, together with the empty covering family of $\initial$. When $G$ is trivial we omit it from the notation. \end{example} \begin{example} Let $G$ be a group. The assembler $\mathbf{Fin}_G$ has as objects the finite $G$-sets, with morphisms the $G$-equivariant injections. A family is a covering family if it is mutually surjective; in other words, a family $\{f_i:A_i \rto A\}_{i\in I}$ is a covering family if $\bigcup_{i\in I} f_i(A_i) = A$. (As before, when $G$ is trivial we omit it from the notation.) When $G$ is profinite, the assembler $\mathbf{AFSet}_G$ is the assembler of \emph{almost-finite} $G$-sets: those $G$-sets $S$ such that $S^U$ is finite for any open subgroup $U \leq G$ and such that for all $x\in S$, the orbit $G \cdot x$ is finite. Again, the morphisms are $G$-equivariant inclusions and covering families are mutually surjective. \end{example} \begin{example} Let $k$ be a field. The assembler $\Var_k$ has as objects $k$-varieties (i.e., reduced separated schemes of finite type over $k$) and as morphisms locally closed immersions. The topology is generated by the coverage consisting of families $\{f:Y \rcofib X, X \smallsetminus Y \rcofib X\}$, where $f$ is a closed immersion. More generally, for a Noetherian scheme $S$ the assembler $\Var_S$, with objects the varieties over $S$, is defined analogously to $\Var_k$. \end{example}
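The covering condition in these examples is simple enough to script. The following Python sketch (an illustration of ours, not machinery used in the paper) models morphisms in $\mathbf{Fin}$ as injections of finite sets and checks whether a family is a finite disjoint covering family: pairwise disjoint images whose union is the whole target.
\begin{verbatim}
def is_injection(f, source, target):
    """f is a dict modelling a set map; check that it is an injection
    from source into target."""
    values = [f[a] for a in source]
    return set(values) <= set(target) and len(set(values)) == len(values)

def is_disjoint_covering_family(maps, target):
    """maps is a list of (f, source) pairs of injections into target.
    Checks the two conditions from the Fin example: the images are
    pairwise disjoint, and the family is mutually surjective."""
    assert all(is_injection(f, source, target) for f, source in maps)
    images = [frozenset(f[a] for a in source) for f, source in maps]
    disjoint = all(images[i].isdisjoint(images[j])
                   for i in range(len(images))
                   for j in range(i + 1, len(images)))
    surjective = frozenset().union(*images) == frozenset(target)
    return disjoint and surjective

inc = lambda s: ({a: a for a in s}, s)   # an inclusion of a subset
assert is_disjoint_covering_family([inc({0, 1}), inc({2, 3})], {0, 1, 2, 3})
assert not is_disjoint_covering_family([inc({0, 1}), inc({1, 2, 3})],
                                       {0, 1, 2, 3})
\end{verbatim}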
Assemblers have a $K$-theory which classifies ``scissors congruence'' of the objects of the assembler: \begin{theorem}[{\cite[Theorem A]{Z-Kth-ass}}] \label{thm:K0} There exists a functor $K: \mathbf{Asm} \rto \Sp$ from the category of assemblers to the category of spectra such that for any assembler $\C$, $\pi_0K(\C)$ is the free abelian group generated by objects of $\C$ modulo the relations \[[A] = \sum_{i\in I} [A_i] \qquad \hbox{for any finite disjoint covering family $\{A_i \rto A\}_{i\in I}$.}\] \end{theorem} \begin{definition} For an assembler $\C$, write $K_n(\C) \defeq \pi_n K(\C)$. \end{definition} Although generators and relations for $K_n(\C)$ for $n > 0$ akin to those given in the above theorem are difficult to come by, there is a simple description of those elements in $K_1(\C)$ which are of interest in the current context: \begin{theorem}[Corollary \ref{cor:K1sp}] \label{thm:K1} Let $\C$ be an assembler. Let $A$ be an object in $\C$, and let $\sigma\in \Aut(A)$. Then the pair $[A, \sigma]$ represents an element of $K_1(\C)$. These satisfy the following relations: \begin{itemize} \item For any finite disjoint covering family $\{f_i:A_i\rto A\}_{i\in I}$ such that for each $i\in I$, there is a $\sigma_i\in \Aut(A_i)$ making the square \begin{diagram} { A & A \\ A_i & A_i \\}; \arrowsquare{\sigma}{f_i}{f_i}{\sigma_i} \end{diagram} commute, \[[A,\sigma] = \sum_{i\in I} [A_i, \sigma_i].\] \item If $\sigma, \sigma'\in \Aut(A)$ then \[[A,\sigma] + [A,\sigma'] = [A, \sigma\circ \sigma'].\] \end{itemize} \end{theorem} This theorem does not claim that these are the \emph{only} relations satisfied by these elements, or that $K_1(\C)$ is generated by these elements. However, as these are the only elements of interest in this paper we restrict our attention to this simpler statement. For a more comprehensive analysis, see \cite[Theorem B]{Z-ass-pi1}. There is a monoidal structure on the category of closed assemblers.\footnote{Although it should be possible to extend this construction to all assemblers, this would require more technical work which would distract from the main idea. As all examples of interest to us are closed we focus on this subclass of examples.} The intuition behind it comes from the following example: \begin{example} \label{ex:seg} Let $\mathbf{Seg}$ be the assembler whose objects are closed intervals in $\mathbf{R}$ and whose morphisms are isometric injections. Let $\mathbf{Rec}$ be the assembler whose objects are sets of the form $[a,a'] \times [b,b']$ in $\mathbf{R}^2$, with morphisms isometric embeddings. (Again, the topologies consist of the mutually surjective families.)
An object in $\mathbf{Rec}$ is a ``pair of objects'' in $\mathbf{Seg}$, and any decomposition of the elements of the pair produces a decomposition of the whole object: \begin{center} \begin{tikzpicture}[anchor=base,baseline,yshift=-2em,xscale=1.3] \draw [|-|,yshift=-0.8em] (0,0) to node[font=\scriptsize,below] {$[a,a']$} (2,0); \draw [|-|,xshift=-0.8em] (0,0) to node[font=\scriptsize,left] {$[b,b']$} (0,2); \draw (0,0) rectangle (2,2); \end{tikzpicture} \setlen{4em}{\inlineArrow{->, decorate, decoration={snake}}} \begin{tikzpicture}[anchor=base,baseline,yshift=-2em,xscale=1.3] \draw [|-|,yshift=-0.8em] (0,0) to node[font=\scriptsize,below] {$[a,a']$} (2,0); \draw [|-|,xshift=-0.8em] (0,0) to node[font=\scriptsize,left] {$[b,b']$} (0,2); \draw (0,0) rectangle (2,2); \draw[red] (0.3,-0.2) -- (0.3,-0.4) (1.4,-0.2) -- (1.4,-0.4) (1.8,-0.2) -- (1.8,-0.4); \draw[red] (0.3,0) -- (0.3,2) (1.4,0) -- (1.4,2) (1.8,0) -- (1.8,2); \draw[blue] (-0.2,0.5) -- (-0.4,0.5) (-0.2,1.2) -- (-0.4,1.2) (-0.2,1.7) -- (-0.4,1.7); \draw[blue] (0,0.5) -- (2,0.5) (0,1.2) -- (2,1.2) (0,1.7) -- (2,1.7); \end{tikzpicture} . \end{center} \end{example} The idea of the monoidal structure on $\mathbf{Asm}$ is to produce covering families which are similarly ``gridded'' in the general setting. \begin{definition} Let $\C$ and $\D$ be two closed assemblers. The assembler $\C\sma \D$ has as underlying category the full subcategory of $\C\times \D$ consisting of those pairs $(C,D)$ where $C = \initial$ if and only if $D = \initial$. The topology on this assembler is generated by the coverage consisting of those families \[\{(A_i,B_j) \rto (A,B)\}_{(i,j)\in I\times J}\] where both $\{A_i \rto A\}_{i\in I}$ and $\{B_j \rto B\}_{j\in J}$ are covering families in $\C$ and $\D$, respectively. \end{definition} This is not the usual topology on the product of sites. The usual topology on the product would have as covering families those families $\{(A_i,B_i) \rto (A,B)\}_{i\in I}$ where $\{A_i \rto A\}_{i\in I}$ and $\{B_i \rto B\}_{i\in I}$ are covering families in $\C$ and $\D$, respectively. In particular, the family of shaded rectangles \begin{center} \begin{tikzpicture}[anchor=base,baseline,yshift=-2em,xscale=1.3] \draw [|-|,yshift=-0.8em] (0,0) to node[font=\scriptsize,below] {$[a,a']$} (2,0); \draw [|-|,xshift=-0.8em] (0,0) to node[font=\scriptsize,left] {$[b,b']$} (0,2); \draw (0,0) rectangle (2,2); \draw[fill=lightgray] (0,0) rectangle (0.3,0.5) rectangle (1.4,1.2) rectangle (1.8,1.7) rectangle (2,2); \draw[red] (0.3,-0.2) -- (0.3,-0.4) (1.4,-0.2) -- (1.4,-0.4) (1.8,-0.2) -- (1.8,-0.4); \draw[red] (0.3,0) -- (0.3,2) (1.4,0) -- (1.4,2) (1.8,0) -- (1.8,2); \draw[blue] (-0.2,0.5) -- (-0.4,0.5) (-0.2,1.2) -- (-0.4,1.2) (-0.2,1.7) -- (-0.4,1.7); \draw[blue] (0,0.5) -- (2,0.5) (0,1.2) -- (2,1.2) (0,1.7) -- (2,1.7); \end{tikzpicture} \end{center} gives a covering family under the standard topology, but not under the topology in $\mathbf{Seg} \sma \mathbf{Seg}$. In $\mathbf{Seg} \sma \mathbf{Seg}$ all $16$ rectangles in the picture are necessary for a covering family. \begin{remark} This definition of the topology is interesting in that it allows us to produce a natural example of a Waldhausen category which does not satisfy the Saturation Axiom. See Example~\ref{ex:nonsaturated}. 
\end{remark} The main technical result of this paper is the following: \begin{theorem}[Lemma~\ref{lem:smasym}, Section~\ref{sec:Kmonoidal}] \label{thm:monoidal} The structure $(c\mathbf{Asm},\sma, \S)$ is a symmetric monoidal structure on the category of closed assemblers. The $K$-theory functor is monoidal and thus takes monoid objects to ring spectra. Given a symmetric monoidal assembler $\C$ (see Definition~\ref{def:symmonass}), $K(\C)$ is an $E_\infty$-ring spectrum. A monoidal morphism of symmetric monoidal assemblers $\C \rto \D$ induces a ring homomorphism $K_*(\C) \rto K_*(\D)$. \end{theorem} Here, a ``symmetric monoidal assembler'' is a weakened form of a monoid object in assemblers; it is directly analogous to the definition of a symmetric monoidal category with the cartesian product of categories replaced by the $\sma$-product of assemblers. In particular, this theorem implies that if $\C$ is a symmetric monoidal assembler then $K_*(\C)$ is a graded-commutative ring. \begin{proposition} Let $\C$ be a symmetric monoidal assembler with multiplication map $\mu$, and let $[A],[A']\in K_0(\C)$ and $[B, \sigma]\in K_1(\C)$. Then \[[A][A'] = [\mu(A,A')] \qqand [A][B,\sigma] = [\mu(A,B), \mu(1_A, \sigma)].\] \end{proposition} \section{Examples} \label{sec:ex} \subsection{Finite $G$-sets} \label{ex:finG} \begin{notation} Let $G$ be a group. Denote by $C_G$ a set of representatives for conjugacy classes of subgroups of $G$. \end{notation} Let $G$ be a finite group, and let $\mathbf{Fin}_G$ be the assembler whose objects are finite $G$-sets and whose morphisms are $G$-equivariant injections. The topology on the assembler is generated by the mutually surjective families. Every finite $G$-set is a disjoint union of its $G$-orbits. Since all morphisms in $\mathbf{Fin}_G$ are injective, the image of any $G$-orbit is an isomorphic $G$-orbit. By picking a point in an orbit, we see that any $G$-orbit is isomorphic to $G/H$ for some subgroup $H$, and $G/H$ is isomorphic to $G/H'$ if and only if $H$ and $H'$ are conjugate in $G$. Let $\mathbf{Fin}_G^{[H]}$ be the full subassembler of $\mathbf{Fin}_G$ whose objects are disjoint unions of $G$-orbits, each of which is isomorphic to $G/H$. Then \[\mathbf{Fin}_G \cong \prod_{H\in C_G} \mathbf{Fin}_G^{[H]}.\] Since $K$-theory of assemblers commutes with finite products, in order to analyze the $K$-theory of $\mathbf{Fin}_G$ it is sufficient to understand the $K$-theory of each $\mathbf{Fin}_G^{[H]}$. Fix $H\in C_G$. Let $S\in \mathbf{Fin}_G^{[H]}$ have exactly one $G$-orbit. Then every other object in $\mathbf{Fin}_G^{[H]}$ is isomorphic to a disjoint union of copies of $S$, and the automorphism group of $S$ is $W_GH$, the Weyl group of $H$ in $G$. Thus by d\'evissage for assemblers \cite[Theorem B]{Z-Kth-ass}, $K(\mathbf{Fin}_G^{[H]}) \simeq K(\S^\mathbf{Asm}_{W_GH}) \simeq \Sigma^\infty_+ B(W_GH)$. Putting this together, we have \begin{equation} \label{eq:KGSet} K(\mathbf{Fin}_G) \simeq \prod_{H\in C_G} \Sigma^\infty_+ B(W_GH). \end{equation} The Cartesian product of finite $G$-sets produces a functor $\mu:\mathbf{Fin}_G \sma \mathbf{Fin}_G \rto \mathbf{Fin}_G$.
This preserves the initial object and disjointness by definition, so in order to check that it is a morphism of assemblers it suffices to show that it takes covering families to covering families; in particular, that for covering families $\{f_i:A_i \rto A\}_{i\in I}$ and $\{g_j:B_j \rto B\}_{j\in J}$, the family \[\{(f_i,g_j): A_i \times B_j \rto A\times B\}_{(i,j)\in I\times J}\] is a covering family. This is true because for any $(a,b)\in A\times B$, if we choose $i\in I$ such that $a\in \im f_i$ and $j\in J$ such that $b\in \im g_j$, then $(a,b)$ is in the image of $(f_i,g_j)$, as desired. By definition, \[K_0(\mathbf{Fin}_G) \cong A(G),\] the Burnside ring of $G$; the monoidal structure above extends the ring structure of the Burnside ring to the higher $K$-groups. To finish up this section we want to make a couple of observations which will be useful when we discuss almost-finite sets in Section~\ref{sec:afsets}. Suppose that $G = \Z/p^n$. We define $c_i: \mathbf{Fin}_{\Z/p^n} \rto \mathbf{Fin}$ by \[c_i(S) \defeq \big\{s\in S \,\big|\, |G\cdot s| \leq p^i\big\}.\] The map $\prod_{i=0}^n c_i: \mathbf{Fin}_G \rto \prod_{i=0}^n \mathbf{Fin}$ is the ``ghost coordinate'' map: on $K_0$ it takes a $G$-set to a tuple of coordinates which add and multiply coordinatewise, and from which all information about the relative sizes of the $G$-orbits can be recovered. This can also be coordinatized in an alternate manner, using ``Burnside coordinates.'' Define an ``orbit counting map'' $b_i: \mathbf{Fin}_{\Z/p^n} \rto \mathbf{Fin}$ by \[b_i(S) \defeq \big\{s\in S \,\big|\, |G\cdot s| = p^i\big\}_G,\] where the subscript $G$ denotes taking the set of $G$-orbits. Note that the relationship between $|b_i(S)|$ and $|c_i(S)|$ looks closely related to the relationship between the Witt coordinates and the ghost coordinates: \[|c_i(S)| = \sum_{j=0}^i p^j |b_j(S)|.\] We can then define $b = \prod_{i=0}^n b_i: \mathbf{Fin}_{\Z/p^n} \rto \mathbf{Fin}^n$; note that this is an (additive) isomorphism on $K_0$.\footnote{The terminology ``$b$'' for the orbit counting map may seem arbitrary, but it is chosen to agree with the ``$b$-coordinates'' defined for the isomorphism $\tau$ from the standard presentation of the Witt ring to the Burnside ring described in \cite[p7]{dresssiebeneicher89}.} We get the following commutative diagram: \begin{diagram}[4em] { \mathbf{Fin}^n \sma \mathbf{Fin}^n & \mathbf{Fin}_{\Z/p^n} \sma \mathbf{Fin}_{\Z/p^n} & \mathbf{Fin}^n \sma \mathbf{Fin}^n \\ \mathbf{Fin}^n & \mathbf{Fin}_{\Z/p^n} & \mathbf{Fin}^n \\}; \to{1-2}{1-1}_{b\sma b} \to{1-2}{1-3}^{c\sma c} \to{2-2}{2-1}_b \to{2-2}{2-3}^c \to{1-2}{2-2}^\times \to{1-3}{2-3}^\times \diagArrow{densely dotted, ->}{1-1}{2-1} \end{diagram} What must the dotted map be in order to make the diagram commute? On each pair of coordinates, the dotted map counts the number of each type of orbit which appears in the Cartesian product of orbits. Since $b$ forgets all $G$-action information, this is functorial and it does not matter which representatives are taken when the dotted map is defined. Thus the dotted map exists. All of the horizontal maps in this diagram are isomorphisms on $K_0$, and the conversion between ``ghost coordinates'' and ``Burnside coordinates'' is the conversion between the right-hand side of the diagram and the left-hand side of the diagram.
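The coordinate formalism above is easy to experiment with. The following Python sketch (our own; a $\Z/p^n$-set is encoded simply by the multiset of its orbit sizes) computes the Burnside coordinates $|b_j(S)|$ and the ghost coordinates $|c_i(S)|$ and checks the Witt-style relation $|c_i(S)| = \sum_{j=0}^i p^j |b_j(S)|$.
\begin{verbatim}
from collections import Counter

p, n = 2, 3  # G = Z/p^n; every orbit has size p^j for some 0 <= j <= n

def burnside_coords(orbit_sizes):
    """|b_j(S)|: the number of orbits of size exactly p^j, for j = 0..n."""
    count = Counter(orbit_sizes)
    return [count[p ** j] for j in range(n + 1)]

def ghost_coords(orbit_sizes):
    """|c_i(S)|: the number of points lying in orbits of size at most p^i."""
    return [sum(size for size in orbit_sizes if size <= p ** i)
            for i in range(n + 1)]

S = [1, 1, p, p ** 2, p ** 2, p ** 3]  # a Z/8-set given by its orbit sizes
b, c = burnside_coords(S), ghost_coords(S)
# the Witt-style relation |c_i| = sum_{j<=i} p^j |b_j|
assert all(c[i] == sum(p ** j * b[j] for j in range(i + 1))
           for i in range(n + 1))
\end{verbatim}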
\subsection{Almost-finite $G$-sets} \label{sec:afsets} Now let $G$ be a profinite group, and let $S$ be a $G$-set. $S$ is \emph{almost-finite} if for all open subgroups $U$ of $G$, $S^U$ is finite, and if for all $x\in S$, the orbit $G\cdot x$ is finite. For an in-depth discussion of almost-finite sets (and spaces), see \cite{dresssiebeneicher88}. We define $\mathbf{AFSet}_G$ to be the assembler whose objects are almost-finite $G$-sets and whose morphisms are $G$-equivariant inclusions. The topology on $\mathbf{AFSet}_G$ is given by the mutually surjective covering families. Although we would like to use d\'evissage for assemblers to compute the $K$-theory of $\mathbf{AFSet}_G$, this is not directly possible, since an almost-finite set can be the union of infinitely many different $G$-orbits. Thus we need to be a little bit more clever. For an open subgroup $U$ of $G$, let $\C_{G/U}$ be the full subcategory of $\SC(\mathbf{AFSet}_G)$ containing only those $G$-sets which are unions of orbits isomorphic to $G/U$. Each such set is a finite disjoint union of copies of $G/U$, and thus $\C_{G/U} \cong \SC(\S^\mathbf{Asm}_{W_GU})$. Thus \[\SC(\mathbf{AFSet}_G) \simeq \prod_{U\in C_G} \C_{G/U} \cong \prod_{U\in C_G} \SC(\S^\mathbf{Asm}_{W_GU}).\] For any conjugacy class $U$, the functor projecting to the $U$-coordinate is a morphism of assemblers, so it induces a map on $K$-theory; in particular, after applying $K$-theory there exists a map \begin{equation} \label{eq:decomp} K(\mathbf{AFSet}_G) \rto^\psi \prod_{U\in C_G} \Sigma_+^\infty B(W_GU). \end{equation} Write $\psi_U: K(\mathbf{AFSet}_G) \rto \Sigma_+^\infty B(W_GU)$ for the projection onto the $U$-th coordinate. The ring $K_0(\mathbf{AFSet}_{G})$ is exactly the Burnside ring of $G$, so $\psi$ induces an isomorphism on $K_0$. By \cite[Corollary 1 and Theorem 2.12.7]{dresssiebeneicher89}, the Burnside ring is isomorphic to the big Witt ring, where the coordinates can be considered to be the ``orbit counting'' maps (analogous to $b_i$ in the previous section). Addition is represented by the usual disjoint union of sets, while multiplication is given by Cartesian product of almost-finite sets, with the unit the singleton set with the trivial $G$-action. Note that this multiplication, while induced by a morphism of assemblers $\mathbf{AFSet}_G \sma \mathbf{AFSet}_G \rto \mathbf{AFSet}_G$, is not induced by multiplications on components $\S^\mathbf{Asm}_{W_GU} \sma \S^\mathbf{Asm}_{W_GU} \rto \S^\mathbf{Asm}_{W_GU}$, illustrating that the decomposition above is not compatible with the multiplicative structure. (More concretely: the product of two $G/U$-orbits decomposes into separate orbits; it cannot be modeled by a product of singleton sets.) This is where the interesting multiplicative properties of Witt vectors come from. To illustrate this last observation, consider the case $G = \Z_p$ and the maps $b$ and $c$ defined in the previous section. There is an analogous commutative diagram \begin{diagram} {\mathbf{Fin}^{\mathbf{N}} \sma \mathbf{Fin}^{\mathbf{N}} & \mathbf{Fin}_{\Z_p} \sma \mathbf{Fin}_{\Z_p} & \mathbf{Fin}^\mathbf{N} \sma \mathbf{Fin}^\mathbf{N} \\ \mathbf{Fin}^{\mathbf{N}} & \mathbf{Fin}_{\Z_p} & \mathbf{Fin}^{\mathbf{N}} \\}; \to{1-2}{1-1}_{b\sma b} \to{1-2}{1-3}^{c\sma c} \to{2-2}{2-1}_b \to{2-2}{2-3}^c \to{1-3}{2-3}^\times \to{1-2}{2-2}^\times \diagArrow{densely dotted,->}{1-1}{2-1} \end{diagram} Again, the multiplication induced on the $b$-coordinates is not a simple coordinate-wise one, as the product of two orbits of size $p^i$ (for example) is not a single orbit of size $p^i$.
The formula for the relationship between the $c$-coordinates and the $b$-coordinates is exactly the relationship between the ghost coordinates and the Burnside coordinates. If we compose with the canonical isomorphism between the Burnside ring and the Witt ring, this becomes the standard coordinate transformation between the Witt ring and the ghost coordinates. For more details, see \cite{dresssiebeneicher88}. Moreover, these two separate perspectives illustrate where many of the strange multiplicative properties of the Witt vectors come from. The Witt coordinates are closely related to the Burnside coordinates; however, under multiplication, the Burnside coordinates do not simply multiply. Consider that the product of a single orbit of size $p^i$ and an orbit of size $p^j$ (with $j \geq i$, without loss of generality) is $p^i$ orbits of size $p^j$; thus the product of a vector with a single $1$ in position $i$ with a vector with a single $1$ in position $j$ is a vector with a $p^i$ in position $j$. \begin{remark} In the discussion above it is tempting to use a result showing that $K$-theory commutes with infinite products, such as \cite{carlsson95} or \cite{kasprowskiwinges20}, to conclude that $\psi$ is an isomorphism. Unfortunately, the $K$-theory of assemblers cannot directly use either of these results---the former assumes Waldhausen categories with cylinder functors (which $\SC(\C)$ does not have) and the latter assumes exact categories---and thus these results are not accessible to us at this point. As we expect that the $K$-theory of assemblers does commute with infinite products, we conjecture that the map $\psi$ is actually a weak equivalence. \end{remark}
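Both phenomena can be checked directly. In the following Python sketch (ours; $\Z_p$-sets are again encoded by finite multisets of orbit sizes, all powers of $p$) the product of orbits of sizes $p^i$ and $p^j$ breaks into $p^{\min(i,j)}$ orbits of size $p^{\max(i,j)}$; under this rule the ghost coordinates multiply coordinatewise, while the Burnside coordinates do not.
\begin{verbatim}
p = 3

def product(S, T):
    """Orbit sizes of S x T: an orbit of size p^i times one of size p^j
    breaks into p^min(i,j) orbits of size p^max(i,j)."""
    out = []
    for s in S:
        for t in T:
            small, big = min(s, t), max(s, t)
            out.extend([big] * small)
    return out

def ghost(S, bound):
    """c_i(S) = number of points in orbits of size <= p^i, for i = 0..bound."""
    return [sum(size for size in S if size <= p ** i)
            for i in range(bound + 1)]

S, T = [1, p, p ** 2], [p, p]
# the ghost coordinates are multiplicative, coordinatewise
lhs = ghost(product(S, T), 3)
rhs = [a * b for a, b in zip(ghost(S, 3), ghost(T, 3))]
assert lhs == rhs
# but the orbit (Burnside) coordinates are not: p^1 x p^1 is p orbits of size p
assert product([p], [p]) == [p] * p
\end{verbatim}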
\subsection{Varieties} \label{sec:varieties} Let $\Var_k$ be the assembler of varieties over a base field $k$. The morphisms in the assembler are locally closed embeddings; the topology is generated by the coverage $\{Y \rcofib X, X \smallsetminus Y \rto X\}$, where $Y \rcofib X$ is a closed embedding. This example is discussed in detail in \cite[Section 5.1]{Z-Kth-ass}. The fiber product (over $\Spec k$) of varieties induces a product structure on $\Var_k$, with $\Spec k$ as the unit. This produces an $E_\infty$-ring structure on $K(\Var_k)$ which on $K_0$ gives the Grothendieck ring of varieties. If we would like to work over a base Noetherian scheme $S$ instead of a base field, an analogous construction works, with the ring structure given by taking the fiber product over $S$ (in which case $S$ is the unit). (Again, we assume that varieties over $S$ are reduced separated schemes of finite type over $S$.) It is important to note that the technical condition in the following lemma requires \textbf{equalities} and not \textbf{canonical isomorphisms}; the desired formulas always hold up to canonical isomorphism. \begin{lemma} Let $f:T \rto S$ be a morphism of schemes. Base change along $f$ induces a symmetric monoidal morphism of assemblers $\Var_S \rto \Var_T$. In particular, $K_*(\Var_S) \rto K_*(\Var_T)$ is a ring homomorphism. \end{lemma} \begin{proof} Base change is a morphism of assemblers because it preserves disjointness and covering families in the generating coverage. To check that it is a symmetric monoidal map it suffices to check that the following diagrams commute up to natural isomorphism: \[ \begin{inline-diagram}[3em] { \Var_S \sma \Var_S & \Var_T \sma \Var_T \\ \Var_S & \Var_T \\}; \arrowsquare{f'\sma f'}{\mu}{\mu}{f'} \end{inline-diagram} \qqand \begin{inline-diagram} { \S & \Var_S \\ & \Var_T. \\}; \to{1-1}{1-2}^\eta \to{1-1}{2-2}_\eta \to{1-2}{2-2}^{f'} \end{inline-diagram} \] Since $f'$ maps $S$ to $T$, the right-hand diagram commutes. Since both base change and the multiplication are given by fiber products, the left-hand diagram commutes up to natural isomorphism. \end{proof} \begin{remark} It may seem that this lemma is overly complicated: the given formulas always hold up to unique isomorphism, and for most purposes this is sufficient. However, this is \emph{not} sufficient in the current case: in order to define a monoid object in a category, the given diagrams must commute \emph{exactly}, not simply up to unique isomorphism. This is, in fact, precisely the problem that $\infty$-categories are designed to address: situations where coherence issues arise because of imprecise commutativity. In the current situation it ought to be the case that the weaker commutativity is sufficient, and that the map $K(\Var_S) \rto K(\Var_T)$ should be $E_\infty$ regardless of which model is taken. However, proving this would require keeping track of the $E_\infty$-operad structure through the $K$-theoretic machinery, which can be quite complicated. (See for example \cite{elmendorfmandell} for a description of this; note that their example is not sufficient for the current application and would need to be weakened further, as the current situation would not be bipermutative.) We therefore take the alternate track of focusing on the special case of interest to us. \end{remark} \begin{example} Let $A$ and $B$ be rings, with a map $f: \Spec B \rto \Spec A$. Write $\tilde \Var_A$ for the full subassembler of $\Var_{\Spec A}$ of reduced separated affine schemes of finite type over $\Spec A$. Since all varieties have a finite disjoint covering family by affines, the inclusion $\tilde \Var_A \rto \Var_{\Spec A}$ induces an equivalence on $K$-theory (by d\'evissage, \cite[Theorem B]{Z-Kth-ass}). Define the assembler $\tilde \Var_B$ analogously. The multiplication on $\tilde \Var_A$ and the base change to $B$ can be modeled by the tensor product of $A$-algebras, and it suffices to define a tensor product for which the formulas $A\otimes_A B = B$ and \[(R\otimes_A R')\otimes_A B = (R\otimes_A B)\otimes_B (R' \otimes_A B)\] hold. (Note, again, that these are equalities, not merely isomorphisms; the two sides are of course always canonically isomorphic.) For example, here is a method for building such a model. Pick a tensor product functor on $\tilde \Var_A$. For every $A$-algebra $R$ there is a $B$-algebra $R\otimes_AB$ which is generated by pairs $(r,b)$. Define a tensor product on $\tilde \Var_B$ by defining $(R\otimes_A B) \otimes_B (R'\otimes_A B)$ to be generated by classes of triples $(r,r',b)$ modulo the necessary relations. (This differs from the usual definition: usually we would take pairs of pairs $((r,b),(r',b'))$ and define the appropriate relations on those.) Thus, on the subcategory of those algebras which are in the image of base change from $\tilde\Var_A$, we impose the formula by definition. \end{example} In a more combinatorial direction, we can construct a derived $\zeta$-function. Let $k$ be a finite field, $\Var_k$ be the assembler of varieties over $k$, and $\mathbf{AFSet}_{\hat \Z}$ be the assembler of almost-finite $\hat \Z$-sets. There is a morphism of assemblers \begin{equation} \label{eq:zeta} \zeta: \Var_k \rto \mathbf{AFSet}_{\hat \Z} \qquad \hbox{given by } X \rgoesto X(\bar k).
\end{equation} \begin{lemma} \label{lem:zetamon} The morphism of assemblers $\zeta$ is a symmetric monoidal morphism, in the sense that the following diagrams commute up to natural isomorphism: \[ \begin{inline-diagram} {\Var_k \sma \Var_k & \mathbf{AFSet}_{\hat \Z} \sma \mathbf{AFSet}_{\hat \Z} \\ \Var_k & \mathbf{AFSet}_{\hat \Z}\\}; \arrowsquare{\zeta \sma \zeta}{\mu}{\mu}{\zeta} \end{inline-diagram} \qqand \begin{inline-diagram} {\S^\mathbf{Asm} & \Var_k \\ & \mathbf{AFSet}_{\hat \Z}.\\}; \to{1-1}{1-2}^\eta \to{1-1}{2-2}_\eta \to{1-2}{2-2}^\zeta \end{inline-diagram}\] In particular, $K_*(\zeta)$ is a ring homomorphism. \end{lemma} \section{Detecting non-permutative elements in $G$-sets} \label{sec:Gsets} In this section we discuss how to use the theory developed above to detect nontrivial elements in $K_1$ of an assembler. The particular elements we are concerned with are the ``non-permutative'' elements: \begin{definition} Let $E$ be a connective $E_\infty$-ring spectrum with unit map $\S \rto E$. An element in $\pi_nE$ is \emph{$0$-dimensional} if it is in the image of $\pi_n\S \rto \pi_nE$. An element in $\pi_nE$ is \emph{permutative} if it is in the image of the map \[\pi_0E \otimes \pi_n\S \rto \pi_0E \otimes \pi_n E \rto \pi_n E.\] If an element is not permutative then it is \emph{non-permutative}. \end{definition} In \cite{CWZ-zeta} the authors showed that there exist elements in $K_n(\Var_k)$ which are not $0$-dimensional; however, the question of whether there exist non-permutative elements was left open. The goal of the rest of this paper is to show that such elements exist in $K(\Var_k)$. The existence of non-permutative elements in the higher homotopy of $E$ demonstrates that $E$ is not uniquely determined by $\pi_0E$ and the higher homotopy groups of $\S$. In particular, we will use it to demonstrate that the higher $K$-groups of $\Var_k$ contain nontrivial information about the geometry of varieties. The key result for determining that certain elements are non-permutative is the following theorem, whose proof is deferred until later: \begin{theorem} \label{thm:K1mult} For a closed symmetric monoidal assembler $\C$, $[X]\in K_0(\C)$, and $[A,\tau]\in K_1(\C)$, \[[X] [A,\tau] = [X\times A, 1 \times \tau] \in K_1(\C).\] \end{theorem} This is a special case of Theorem~\ref{thm:K1mult-real}. \subsection{Finite $G$-sets} First, consider the simple case of finite $G$-sets. From (\ref{eq:KGSet}) it follows that \[K_1(\mathbf{Fin}_G) \cong \bigoplus_{H\in C_G} \Z/2 \oplus (W_GH)^\mathrm{ab}.\] By Theorem~\ref{thm:K1mult}, the product of an element in $K_0(\mathbf{Fin}_G)$, represented as $[A]$ for some finite $G$-set $A$, and the nontrivial element in $K_1(\mathbf{Fin})$, represented by $[\{1,2\}, \tau]$, is represented by the element $[A \amalg A, \tau]$, where $\tau$ swaps the two copies of $A$. In particular, the swap does not act nontrivially within any single $G$-orbit of $A$; thus this element lands in the subgroup $\bigoplus_{H\in C_G} \Z/2 \oplus 0$. Conversely, every element of this subgroup arises in this way, since $[G/H \amalg G/H, \tau]$ represents the element which is nontrivial in the $H$-coordinate and trivial everywhere else. Thus the non-permutative elements in the group are exactly those lying outside this subgroup.
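The bookkeeping behind these swap elements is concrete enough to script. The following Python sketch (an illustration of ours) encodes an equivariant automorphism of a finite free $\Z/n$-set by where it sends a chosen basepoint of each orbit, and returns the pair consisting of the sign of the induced permutation of orbits and the total twist in $\Z/n$; a swap of two copies of an orbit always has trivial twist, matching the discussion above.
\begin{verbatim}
def coordinates(f, n):
    """f: dict orbit -> (orbit', twist) encoding an equivariant automorphism
    of a free Z/n-set (the point (o, k) maps to (o', k + twist)).

    Returns (external, internal): the sign of the induced permutation of
    orbits, and the total twist in Z/n accumulated over the cycles."""
    external, internal, visited = 1, 0, set()
    for start in f:
        if start in visited:
            continue
        length, twist, o = 0, 0, start
        while o not in visited:           # follow one cycle of orbits
            visited.add(o)
            o, t = f[o]
            t, length = t, length + 1
            twist += t
        external *= (-1) ** (length + 1)  # a d-cycle of orbits has sign (-1)^{d+1}
        internal = (internal + twist) % n
    return external, internal

n = 4
# the swap of two copies of one orbit: external -1, internal 0
assert coordinates({'a': ('b', 0), 'b': ('a', 0)}, n) == (-1, 0)
# a single orbit twisted by 1: external +1, internal 1
assert coordinates({'a': ('a', 1)}, n) == (1, 1)
\end{verbatim}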
\subsection{Almost-finite $G$-sets} The analysis of the previous subsection can be extended to almost-finite $G$-sets, although there is again the additional difficulty that we do not know whether $K$-theory commutes with infinite products. However, we can show the following: \begin{proposition} Recall the map $\psi$ from (\ref{eq:decomp}). All permutative elements $\alpha\in K_1(\mathbf{AFSet}_G)$ have \[\psi_*\alpha \in \prod_{H \in C_G} \Z/2 \oplus 0 \subset \prod_{H\in C_G} \Z/2 \oplus (W_GH)^\mathrm{ab}.\] Moreover, the subgroup of permutative elements surjects onto this subgroup. \end{proposition} \begin{proof} The proof that the image of any permutative element lies in this subgroup is identical to the previous subsection. To see that every element of the subgroup is hit, take an element of the subgroup, which is of the form $\{\epsilon_H\}_{H\in C_G}$ with $\epsilon_H = \pm 1$. Consider the almost-finite $G$-set $A \defeq \coprod_{\substack{H\in C_G \\ \epsilon_H = -1}} G/H$. Then $[A\amalg A, \tau]$ exactly represents $\{\epsilon_H\}_{H\in C_G}$. \end{proof} \section{A combinatorial derived zeta function} \label{sec:zetamor} Let $k$ be a finite field, and let $L_n$ be the unique extension of $k$ of degree $n$. The local zeta function of a variety $X$ over $k$ is defined by \[Z(X,t) \defeq \exp \sum_{n \geq 1} |X(L_n)|\frac{t^n}{n}.\] There are other classical ways of writing this function. We can think of the function $Z(-,t)$ as taking a variety to a power series in $t$ with integer coefficients. (We know that it will have integer coefficients because we can rewrite the expression above as $\sum_{n \geq 0} |(\Sym^nX)(k)|t^n$.) We can see by analyzing the expression for $Z(X,t)$ that, for a closed embedding $Y \rcofib X$, \[Z(X,t) = Z(Y,t) Z(X\smallsetminus Y,t).\] Thus $Z(-,t)$ is actually a homomorphism $K_0(\Var_k) \rto (1+t\Z\llbracket t\rrbracket,\times)$. If we consider the codomain as the big Witt ring with the multiplication of Witt vectors, $Z(-,t)$ is a ring homomorphism. By the discussion in Section~\ref{ex:finG}, the big Witt ring is $K_0(\mathbf{AFSet}_{\hat \Z})$, and, indeed, we can see that all of the data necessary to construct $Z(X,t)$ is contained in the $\hat\Z$-structure of $X(\bar k)$. Thus the morphism of assemblers $\zeta$ in (\ref{eq:zeta}) gives the desired derived map \[K(\zeta): K(\Var_k) \rto K(\mathbf{AFSet}_{\hat \Z}).\] Since $\zeta$ is a symmetric monoidal morphism of assemblers, this map induces a map of rings. Moreover, because this map is compatible with the unit maps, it takes permutative elements to permutative elements. In order to demonstrate that an element in $K_1(\Var_k)$ is non-permutative it therefore suffices to show that it maps to a non-permutative element in $K_1(\mathbf{AFSet}_{\hat \Z})$. The map $\psi$ defined in Section~\ref{sec:afsets} induces a map \begin{equation} \label{eq:zeta-decomp} K_1(\mathbf{AFSet}_{\hat\Z}) \rto \prod_{n \geq 1} K_1(\S^\mathbf{Asm}_{\Z/n}) \cong \prod_{n \geq 1} \Z/2\oplus \Z/n. \end{equation} We write an element in the codomain as $\prod_{n \geq 1}(\pm 1, g)$. In each pair we call the first coordinate the \emph{external} coordinate and the second the \emph{internal} coordinate; these correspond to the sign of the permutation of $*$'s in the objects of $\SC(\S^\mathbf{Asm}_{\Z/n})$ and the action of $\Z/n$, respectively. \begin{lemma} \label{lem:mult-of-eta} Let $[X]\in K_0(\Var_k)$, let $\eta\in K_1(\mathbf{Fin})$ be the nonzero element, and write $L_j$ for the extension of $k$ of degree $j$.
Let \[X_n = \{x\in X(L_n)\,|\, x\notin X(L_m),\ m < n\}.\] Then \[\psi_* \circ \zeta_*([X]\eta) = \prod_{n \geq 1} ((-1)^{|X_n|/n},0).\] \end{lemma} \begin{proof} Write $\psi_n$ for the composition of $\psi_*$ and the projection onto the coordinate indexed by $n$. Write $\mathbf{Fin}^{\Z/n}$ for the assembler of finite sets with free $\Z/n$-action; then $K(\mathbf{Fin}^{\Z/n}) \simeq \Sigma_+^\infty B\Z/n$, and the map $\Var_k \rto \mathbf{Fin}^{\Z/n}$ mapping $X$ to $X_n$ gives the $n$-th coordinate of the map $\psi_*\zeta_*$. There is a map of assemblers $\mathbf{Fin} \rto \mathbf{Fin}^{\Z/n}$ defined by $S \rgoesto S\times \Z/n$, with $\Z/n$ acting on the second coordinate. It suffices to show that for all $n$ there exists a morphism $\sigma_n$ which makes the following square commute: \begin{diagram} { K_1(\Var_k) & K_1(\mathbf{Fin}^{\Z/n}) \\ K_0(\Var_k) \otimes K_1(\S^\mathbf{Asm}) & K_1(\mathbf{Fin}). \\}; \to{1-1}{1-2}^{\psi_n} \to{2-1}{2-2}^{\sigma_n} \to{2-1}{1-1}_{\mu} \cofib{2-2}{1-2} \end{diagram} Pick a generator of $K_0(\Var_k)$ represented by a variety $X$, together with the generator $[*\amalg *, \mathrm{swap}]$ of $K_1(\S^\mathbf{Asm})$. The map $\mu$ takes this pair to the element $[X \amalg X, \mathrm{swap}]$; the map $\psi_n$ takes this to \[\big[X_n \amalg X_n, \mathrm{swap}\big].\] Note that $X_n \cong (X_n)_{\Frob} \times \Z/n$. Thus if we define \[\sigma_n([X]\otimes \eta) \defeq \big[(X_n)_{\Frob} \amalg (X_n)_{\Frob}, \mathrm{swap}\big]\] the diagram commutes, as desired. Moreover, the sign of the induced permutation is exactly the sign of swapping $|X_n|/n$ orbits, giving the desired formula. \end{proof} Since $K_*(\zeta)$ is a ring homomorphism, the image of a permutative element in $K_*(\Var_k)$ is a permutative element in $K_*(\mathbf{AFSet}_{\hat\Z})$. In particular, permutative elements have all internal coordinates equal to $0$. In order to find a non-permutative element it therefore suffices to find an element which has a nonzero internal coordinate. By \cite[Theorem B]{Z-ass-pi1}, any automorphism of a variety represents an element of $K_1(\Var_k)$. The functor $\zeta$ takes this data to a $\hat\Z$-set $X(\bar k)$ together with a $\hat\Z$-equivariant permutation; projecting onto the $\Z/2$-coordinate in (\ref{eq:zeta-decomp}) induces a map \[\psi_2: K_1(\Var_k) \rto K_1(\S^\mathbf{Asm}_{\Z/2}) \cong \Z/2\oplus \Z/2.\] Here, the first coordinate is the external coordinate, and the second is the internal coordinate. As before, we write the external coordinate multiplicatively and the internal coordinate additively. \begin{definition} Let $S$ be a finite set equipped with an action of $\Z/m\oplus \Z/n$. Suppose that this action is free when restricted to both $\Z/m\oplus 0$ and $0\oplus \Z/n$. For a point $x\in S$ write $[x]$ for the orbit of $x$ under the $\Z/m\oplus \Z/n$ action. This orbit has \emph{type $(d,a)$} if $(d,0) \cdot x = (0,a) \cdot x$ for some $0 \leq a < n$, where $d$ is the minimal positive integer for which such an integer $a$ exists. Define $\# S_{(d,a)}$ to be the number of orbits of type $(d,a)$. Let $X$ be a variety over $\mathbb{F}_q$ equipped with an automorphism $\varphi$ which acts on $X_n$ freely with order $m$. We consider $X_n$ to be equipped with an action of $\Z/m\oplus \Z/n$ by having $(1,0)$ act by $\varphi$ and $(0,1)$ act by Frobenius. Define $\#X^{\varphi,n}_{(d,a)}$ to be the number of orbits of type $(d,a)$ in $X_n$; when $n$ and $\varphi$ are clear from context we omit them. \end{definition}
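For $\mathbb{P}^1$ with the automorphism $x \mapsto \lambda x$ the orbit types can be found by brute force, since a nonzero point of $\mathbb{F}_{q^n}$ may be encoded as the exponent of a fixed generator of $\mathbb{F}_{q^n}^\times$. The following Python sketch (ours; it assumes $m \mid q-1$ and $n > 1$, so that $0$ and $\infty$ do not intervene) computes $\psi_n([\mathbb{P}^1, \lambda x])$ this way, and in a sample case it agrees with the formulas proved below.
\begin{verbatim}
from collections import Counter

def psi_n(q, n, m):
    """Brute-force psi_n([P^1, x -> lam*x]) in Z/2 x Z/n, for lam in F_q^x of
    order m (assumes m | q - 1 and n > 1; q prime works as written).
    Nonzero elements of F_{q^n} are exponents y of a fixed generator g, so
    Frobenius is y -> q*y and multiplication by lam = g^L is y -> y + L."""
    N = q ** n - 1
    L = N // m                       # g^L has order m and lies in F_q
    types = Counter()
    for y in range(N):
        if any((q ** d * y - y) % N == 0 for d in range(1, n)):
            continue                 # g^y lies in a proper subextension
        d, a = next((d, a) for d in range(1, m + 1) for a in range(n)
                    if (y + d * L) % N == (q ** a * y) % N)
        types[(d, a)] += 1
    external, internal = 1, 0
    for (d, a), points in types.items():
        orbits = points // (d * n)   # an orbit of type (d, a) has d*n points
        external *= (-1) ** (orbits * (d + 1))
        internal = (internal + orbits * a) % n
    return external, internal

# q = 7, lam = -1 (order m = 2): the corollary below predicts
# psi_2 = ((-1)^{(q-1)/2}, (q-1)/2) = (-1, 1) in Z/2 x Z/2
assert psi_n(7, 2, 2) == (-1, 1)
\end{verbatim}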
If there exists an orbit of type $(d,a)$ in $S$ then it is necessarily the case that $d \mid m$ and that $\frac md = \frac{n}{(n,a)}$. In particular, $d = m$ if and only if $a = 0$. We can use this to describe the image of an element $[X,\varphi]\in K_1(\Var_k)$ under $\psi_n$ explicitly in terms of the actions on orbits. \begin{proposition} \label{prop:stable-unstable} Let $X$ be a variety over $k$ and let $\varphi$ be an automorphism of $X$; suppose that $\varphi$ acts freely on $X_n$ with order $m$. For each pair $(d,a)$ write $\#X_{(d,a)}$ for the number of orbits of $X_n$ of type $(d,a)$. Then \[\psi_n([X,\varphi]) = \sum_{(d,a)} (\#X_{(d,a)})((-1)^{d+1}, a).\] \end{proposition} \begin{proof} The element in $K_1(\mathbf{Fin}^{\Z/n})$ is the sum of its actions on orbits, so it suffices to consider a single orbit $[x]$ of type $(d,a)$. This orbit consists of $d$ disjoint Frobenius orbits. If we write $x_i = \varphi^i\cdot x$ for $i = 0,\ldots,d-1$, we can think of the $d$ Frobenius orbits as the sets $\{x_i,\Frob\cdot x_i, \ldots, \Frob^{n-1}\cdot x_i\}$; the action of $\varphi$ then takes the $i$-th orbit to the $(i+1)$-st orbit with no $\Z/n$-twist for $0 \leq i < d-1$, while for $i = d-1$ it takes the $i$-th orbit to the $0$-th orbit with an additional $\Z/n$-twist by $a$. We can thus write it as a cyclic permutation of $d$ orbits, followed by a twist by $a$ on a single orbit. Thus the representative in $K_1(\mathbf{Fin}^{\Z/n})$ is \[((-1)^{d+1},0) + (1,a) = ((-1)^{d+1}, a),\] as desired. Summing over all orbits gives the desired formula. \end{proof} Proposition~\ref{prop:stable-unstable} and Lemma~\ref{lem:mult-of-eta} can be used to find non-permutative elements in $K_1(\Var_{\mathbb{F}_q})$. Our first result specializes Proposition~\ref{prop:stable-unstable} to $\mathbb{P}^1$. \begin{proposition} \label{prop:su-special} Fix an integer $n > 1$. Let $\lambda \in k^\times$ have order $m$, and define $P_1 = \# (\mathbb{P}^1)_{(m,0)}$; for any other $d \mid (n,m)$ let $P_d = \# (\mathbb{P}^1)_{(m/d,n/d)}$, and for all other $d$ define $P_d=0$. Then \[\psi_n([\mathbb{P}^1,\lambda x]) = \bigg( (-1)^{P_1(m+1) + P_2(m/2+1)}, \sum_{d|(n,m)} P_{d} \phi(d) \frac{n}{d}\bigg), \] where $\phi$ is the Euler $\phi$-function. In particular, if $(n,m) = 1$ then the second coordinate is $0$. \end{proposition} \begin{proof} From the formula in Proposition~\ref{prop:stable-unstable}, and writing $X = \mathbb{P}^1$ for conciseness, \[\psi_n([\mathbb{P}^1,\lambda x]) = \sum_{(d,a)} (\#X_{(d,a)}) ((-1)^{d+1},a).\] Suppose that $d,a,a'$ are such that both $X_{(d,a)}$ and $X_{(d,a')}$ are nonempty. The necessary conditions on $a$ and $a'$ ensure that there exist a constant $c$ and two primitive roots $g,g'$ such that \[\lambda^a = g^c \qqand \lambda^{a'} = (g')^c.\] With these primitive roots we can construct a bijection between $X_{(d,a)}$ and $X_{(d,a')}$ in the following manner. Given an orbit $[x]\in X_{(d,a)}$, by definition \[x^{q^d} = \lambda^a x \Leftrightarrow x^{q^d-1} = \lambda^a.\] Writing $x = g^y$, such a point corresponds to a solution $y$ of the equation \[y(q^d-1) \equiv c \pmod{q^n-1}.\] In particular, this point $x$ corresponds to the point $x' \defeq (g')^y$. This gives a function $X_{(d,a)} \rto X_{(d,a')}$; the inverse is given by the reverse choice of primitive roots. This bijection shows that for any two $a$ and $a'$ satisfying $\frac md = \frac n{(n,a)}$ it is the case that $\#X_{(d,a)} = \# X_{(d,a')}$.
Thus in the sum above we can choose $a = (n,a) = \frac n{m/d}$ for every $d$ and multiply by the number of choices of $a$, which is exactly $\phi(m/d)$. Moreover, such a choice is possible only if $(m/d) \mathrel{|} n$. Thus the sum can be rewritten as \[\psi_n([\mathbb{P}^1,\lambda x]) = \sum_{\substack{d|m \\ m/d | n}} \phi\left(\frac md\right) \# X_{(d,\frac n{m/d})} \left((-1)^{d+1}, \frac md\right) = \sum_{d | (n,m)} \phi(d) \# X_{(m/d,n/d)} ((-1)^{m/d+1}, d).\] Since $\phi(d)$ is even unless $d = 1$ or $2$, the first coordinate of each summand will almost always be $1$. Taking this into account gives the desired formula. \end{proof} Using this we can do a complete analysis of the image under $\psi_n$ of $[\mathbb{P}^1,1/x] = [\mathbb{P}^1, -x]$, in order to illustrate both the benefits and the drawbacks of the approach. \begin{corollary} \label{cor:calc} Let $k = \mathbb{F}_q$. Writing $n = 2^m n'$ with $n'$ odd, \begin{equation} \label{eq:form} \psi_n([\mathbb{P}^1,-x]) = \begin{cases} \left((-1)^{\frac{q-1}{2}}, 0\right) \caseif n = 1, \\ \left((-1)^{\frac{q-1}{2}},\frac{q-1}{2}\right) \caseif n=2, \\ (1,0) \caseotherwise. \end{cases} \end{equation} In particular, if $q \equiv 3 \pmod 4$ then $[\mathbb{P}^1,-x] = [\mathbb{P}^1, 1/x]$ is non-permutative. \end{corollary} \begin{proof} When $n = 1$ Galois orbits are trivial, and thus the internal coordinate is $0$. The question thus reduces to computing the sign of the permutation that $1/x$ induces on $\mathbb{P}^1(\mathbb{F}_q)$. There are two fixed points ($\pm 1$), the points $0$ and $\infty$ are swapped, and the remaining points are paired up into transpositions, so the sign is $(-1)^{\frac{q-1}{2}}$, as desired. For $n>1$ we use the formula in Proposition~\ref{prop:su-special}. In this case the formula simplifies to \[\psi_n([\mathbb{P}^1,-x]) = \begin{cases} \left( (-1)^{P_1}, P_{2} \frac n2 \right) \caseif n \hbox{ is even} \\ ((-1)^{P_1},0) \caseotherwise. \end{cases}\] In particular, the result only depends on the parities of $P_1 = \# X_{(2,0)}$ and $P_2 = \# X_{(1,\frac n2)}$. Before we begin the more complicated cases, some notation for the rest of the proof. For integers $a$ and $b$, $M_a(b)$ is the number of aperiodic necklaces of length $b$ with beads of $a$ colors. The function $\mu(m)$ is the M\"obius function, which is $0$ if $m$ is not squarefree, and otherwise is $(-1)$ raised to the number of distinct prime factors of $m$. The symbol $\delta_{ij}$ is the Kronecker delta function. The important facts to know about these are that \[M_a(b) = \frac 1b \sum_{d|b} \mu\left(\frac bd\right) a^d \qqand \delta_{1a} = \sum_{d|a} \mu(d).\] Suppose $n$ is odd, so $X_{(1,\frac n2)}$ is empty. We have (by M\"obius inversion) \begin{align*} P_1 &= \frac{1}{2n} \left( q^n - \#\{\hbox{points in smaller extensions}\}\right) = \frac{1}{2n} \sum_{d|n} \mu\left(\frac nd\right) q^d. \end{align*} Since we only care about the parity of this number it suffices to consider the sum $\sum_{d|n} \mu(n/d) q^d$ modulo $4$. Since $n$ is odd, $d$ must also always be odd; in particular, $q^d \equiv q\pmod 4$ for all $d$. Thus, since $n > 1$, \[(2n)P_1 \equiv q \sum_{d|n} \mu\left(\frac nd \right) \equiv 0 \pmod 4.\] This completes the odd case. When $n$ is even, write $n = 2^m n'$; there are two types of orbits, $(2,0)$ and $(1,\frac n2)$. Consider first orbits of type $(1,\frac n2)$. A point in an orbit of type $(1,\frac n2)$ satisfies $-x = \Frob^{n/2} x$.
There are exactly $q^{n/2}-1$ of these; if any solution lies in a subextension of even index then it must lie in $\mathbb{F}_q$, which contains exactly $2$ solutions. Call a point $x$ \emph{good} if $x^{q^{n/2}-1} = -1$. Let $L_1,\ldots,L_b$ be the maximal proper subfields of $\mathbb{F}_{q^n}$ of odd index. Then, using the principle of inclusion/exclusion (or M\"obius inversion), \begin{align*} n P_2 &= (q^{n/2}-1) - \sum_{i=1}^b \#\{\hbox{good points in }L_i\} + \sum_{i,j} \#\{\hbox{good points in }L_i \cap L_j\} - \cdots\\ &= \sum_{\substack{d | n,\ \frac nd\ \mathrm{odd}}} \mu\left(\frac n d\right) (q^{d/2}-1) = \sum_{d|n'} \mu\left(\frac{n'}{d}\right)(q^{2^{m-1}})^d - \delta_{1n'}. \end{align*} Only the parity of $P_2$ matters, so it suffices to consider the right-hand side modulo $2^{m+1}$. If $m > 1$, since $q$ is odd we have $q^{2^{m-1}} \equiv 1 \pmod{2^{m+1}}$, so \[nP_2 \equiv \sum_{d|n'} \mu\left(\frac{n'}{d}\right) - \delta_{1n'} \equiv 0 \pmod{2^{m+1}}.\] On the other hand, if $m = 1$ we have $q^d \equiv q \pmod 4$ and thus \[nP_2 \equiv (q - 1)\delta_{1n'} \pmod 4,\] giving the desired formula. Now consider $P_1$. Using the fact that an orbit of type $(2,0)$ has $2n$ points and an orbit of type $(1,\frac n2)$ has $n$ points, in terms of point counts over $\mathbb{F}_{q^n}$ \begin{align*} P_1 &= \frac{1}{2n} \left( q^n - \#\{\hbox{points in smaller extensions}\} - \#\{\hbox{points in orbits of type }(1,\frac n2)\} \right)\\ &= \frac{1}{2n} \left(q^n - \#\{\hbox{points in smaller extensions}\} - nP_2\right). \end{align*} By M\"obius inversion, \[q^n - \#\{\hbox{points in smaller extensions}\} = \sum_{d|n} \mu\left(\frac nd\right) q^d.\] Thus the parity of $P_1$ can be determined by considering $2nP_1$ modulo $2^{m+2}$. \[2n P_1 \equiv \bigg(\sum_{d|n} \mu\left(\frac nd \right) q^d\bigg) - \delta_{1n'}(q^{2^{m-1}}-1) \pmod{2^{m+2}}.\] Consider the first sum. If $\mathrm{ord}_2 d < m-1$ then $\mu(n/d) = 0$. Thus the only summands which are nonzero must have $\mathrm{ord}_2(d) = m$ or $m-1$. Thus \[\sum_{d|n} \mu\left(\frac nd \right) q^d = \sum_{d|n'} \mu\left(\frac{n'}{d}\right) q^{2^md} - \sum_{d|n'} \mu\left(\frac{n'}{d}\right) q^{2^{m-1}d} = \sum_{d|n'} \mu\left(\frac{n'}{d}\right) (q^{2^md} - q^{2^{m-1}d}).\] Modulo $2^{m+2}$, $q^{2^m d} \equiv 1$. As $d$ is odd, $q^{2^{m-1}d} \equiv q^{2^{m-1}}$ modulo $2^{m+2}$. We can therefore conclude that \[2nP_1 \equiv - 2\delta_{1n'}(q^{2^{m-1}}-1) \pmod{2^{m+2}}.\] In particular, if $n' > 1$ this is $0$. If $m > 1$ then $q^{2^{m-1}}\equiv 1 \pmod{2^{m+1}}$, so this is again $0$. Lastly, if $n = 2$ this is $-2 (q-1) \pmod 8$; in other words, $P_1$ is even if $q \equiv 1 \pmod 4$ and odd if $q \equiv 3 \pmod 4$. \end{proof} \begin{remark} The question of whether $[\mathbb{P}^1,1/x]$ is non-permutative (or even non-$0$-dimensional!) when $q \equiv 1 \pmod 4$ remains open. \end{remark}
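The $n = 1$ case of the corollary is an elementary permutation-sign computation, which the following Python sketch (ours, for prime $q$) verifies by brute force; note that $1/x$ and $-x$ induce permutations of $\mathbb{P}^1(\mathbb{F}_q)$ of the same sign, consistent with the equality $[\mathbb{P}^1,1/x] = [\mathbb{P}^1,-x]$ used above.
\begin{verbatim}
def sign_on_projective_line(q, f):
    """Sign of the permutation that f induces on P^1(F_q) = F_q + {infinity}.

    Points are 0..q-1 and the symbol 'inf'; q should be prime here, since
    we model F_q as the integers modulo q."""
    points = list(range(q)) + ['inf']
    index = {x: i for i, x in enumerate(points)}
    perm = [index[f(x)] for x in points]
    sgn, visited = 1, set()
    for i in range(len(perm)):
        if i in visited:
            continue
        length, j = 0, i
        while j not in visited:
            visited.add(j)
            j, length = perm[j], length + 1
        sgn *= (-1) ** (length - 1)
    return sgn

def inv(q):   # x -> 1/x, swapping 0 and infinity
    return lambda x: 'inf' if x == 0 else (0 if x == 'inf' else pow(x, -1, q))

def neg(q):   # x -> -x, fixing 0 and infinity
    return lambda x: 'inf' if x == 'inf' else (-x) % q

for q in [3, 5, 7, 11, 13]:   # a few odd primes
    expected = (-1) ** ((q - 1) // 2)
    assert sign_on_projective_line(q, inv(q)) == expected
    assert sign_on_projective_line(q, neg(q)) == expected
\end{verbatim}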
The previous result implies that multiplying by $-1$ is non-permutative if $-1$ is not a square. We can use the intuition behind this to show that non-permutative elements always exist: \begin{corollary} \label{cor:gen-root} Let $\mathbb{F}_q$ be a finite field such that $\mathrm{ord}_2(q-1) = \ell$. Let $\alpha$ be a primitive $2^\ell$-th root of unity. Then the element $[\mathbb{P}^1, \alpha x]$ is non-permutative. \end{corollary} \begin{proof} When $\ell=1$ this is simply Corollary~\ref{cor:calc}, so we focus on the case $\ell > 1$. By Proposition~\ref{prop:su-special}, keeping in mind that $m = 2^\ell$, \[\psi_{2^\ell}[\mathbb{P}^1, \alpha x] = \bigg((-1)^{P_1+P_2}, 2^{\ell-1} \sum_{j=1}^\ell P_{2^j}\bigg).\] To show that $[\mathbb{P}^1,\alpha x]$ is non-permutative it therefore suffices to check that $\sum_{j=1}^\ell P_{2^j}$ is odd. In fact, we claim that $P_{2^\ell}$ is odd and $P_{2^j}$ is even for $1 \leq j < \ell$. First, consider $P_{2^\ell}$, which counts orbits of type $(1,1)$. Solutions to $x^q = \alpha x$ correspond to solutions to \[a(q-1) \equiv \frac{q^{2^\ell}-1}{2^\ell} \pmod{q^{2^\ell}-1}.\] Since $(q-1)/2^\ell$ divides each of $q-1$, $\frac{q^{2^\ell}-1}{2^\ell}$, and $q^{2^\ell}-1$, solutions to this equation exist exactly when solutions to \[2^\ell a \equiv q^{2^\ell-1} + \cdots + q + 1 \pmod{2^\ell(q^{2^{\ell}-1} + \cdots + 1)}\] exist. As there are $2^\ell$ terms on the right-hand side of the equivalence, each congruent to $1 \pmod{2^\ell}$, the right-hand side is divisible by $2^\ell$ and solutions exist---and thus there are exactly $q-1$ different solutions to the original equation. Note, in addition, that none of these are in extensions of lower degree. If we instead consider solutions to $x^q = \alpha x$ in $\mathbb{F}_{q^{2^d}}$ with $d < \ell$ then they correspond to solutions to \[2^\ell a \equiv q^{2^d-1} + \cdots + q + 1 \pmod{2^\ell(q^{2^d-1} + \cdots + q + 1)}.\] The left-hand side of the equivalence is $0$ mod $2^\ell$, while the right-hand side is congruent to $2^d \not\equiv 0 \pmod{2^\ell}$; thus there are no solutions in lower extensions. Thus \[P_{2^\ell} = \frac{q-1}{2^\ell} \equiv 1 \pmod 2.\] Now consider $P_{2^{\ell-1}}$, which counts orbits of type $(2,2)$. Solutions to $x^{q^2} = \alpha^2 x$ correspond to solutions to \[a(q^2 - 1) \equiv \frac{q^{2^\ell}-1}{2^{\ell-1}} \pmod{q^{2^\ell}-1}.\] As above, there are exactly $q^2-1$ solutions to this equation. Now consider $x^{q^2} = \alpha^2 x$ over $\mathbb{F}_{q^{2^d}}$ for $d < \ell$. If $d = 0,1$ there are no solutions, since $x^{q^2} = x$. If $d \geq 2$ then solutions to this equation correspond to solutions to \[a(q^2-1) \equiv \frac{q^{2^d}-1}{2^{\ell-1}} \pmod{q^{2^d}-1}.\] Dividing all three terms by $\frac{q^2-1}{2^{\ell-1}}$ gives \[2^{\ell-1}a \equiv q^{2^d-2} + q^{2^d-4} + \cdots + q^2 + 1 \pmod{2^{\ell-1}(q^{2^d-2} + q^{2^d-4} + \cdots + q^2 + 1)}.\] The left-hand side of the equivalence is $0$ mod $2^{\ell-1}$, but the right-hand side is congruent to $2^{d-1} \not\equiv 0 \pmod{2^{\ell-1}}$, since $d < \ell$. Thus there are no solutions, and thus no points over lower-degree extensions. However, every orbit of type $(1,1)$ also gives solutions to this equation, as do orbits of type $(1, 2^{\ell-1}+1)$ (which are in bijection with orbits of type $(1,1)$). Thus \[P_{2^{\ell-1}} = \frac{1}{2^{\ell+1}}\left((q^2-1) - 2^{\ell+1}P_{2^\ell}\right) = \frac{1}{2^{\ell+1}}\left((q^2-1) - 2(q-1)\right) = \frac{(q-1)^2}{2^{\ell+1}}.\] Thus $\mathrm{ord}_2 P_{2^{\ell-1}} = 2\ell - (\ell+1) = \ell -1$; since $\ell > 1$ this is positive, and thus $P_{2^{\ell-1}}$ is even. We now claim that for $1 \leq r \leq \ell$, \[P_{2^{\ell-r}} = \frac{(q^{2^{r-1}}-1)^2}{2^{\ell+r}},\] so that \[\mathrm{ord}_2 P_{2^{\ell-r}} = \ell + r - 2 \geq 1;\] in particular $P_{2^{\ell-r}}$ is always even, from which the result follows. We prove this by induction on $r$. The base cases $r = 0,1$ were done above, so we proceed to the inductive step, which is analogous to the base case $r=1$, although with slightly more bookkeeping. To compute $P_{2^{\ell-r}}$ we first count solutions to $x^{q^{2^r}} = \alpha^{2^r} x$.
In $\mathbb{F}_{q^{2^\ell}}$ solutions to this correspond to solutions to \[a(q^{2^r}-1) \equiv \frac{q^{2^\ell}-1}{2^{\ell-r}} \pmod{q^{2^\ell}-1}.\] Since all three terms are divisible by $q^{2^r}-1$, solutions to the equation exist, and thus there are $q^{2^r}-1$ solutions. For $d < \ell$, solutions to the equation in $\mathbb{F}_{q^{2^d}}$ correspond to solutions to \[a(q^{2^r}-1) \equiv \frac{q^{2^d}-1}{2^{\ell-r}} \pmod{q^{2^d}-1}.\] Dividing all three terms by $\frac{q^{2^r}-1}{2^{\ell-r}}$ shows that solutions to this equation exist exactly when there exist solutions to \[2^{\ell-r}a \equiv q^{2^d-2^r} + \cdots + q^{2^r} + 1 \pmod{2^{\ell-r}(q^{2^d-2^r} + \cdots + q^{2^r} + 1)}.\] Modulo $2^{\ell-r}$ the left-hand side is $0$ but the right-hand side is $2^{d-r}$, which is not $0$; thus there are no solutions over lower-degree extensions. However, some of these solutions come from orbits counted by $P_{2^{\ell-r'}}$ for $r' < r$, and we have \begin{align*} P_{2^{\ell-r}} &= \frac{1}{2^{\ell+r}}\bigg((q^{2^r}-1) - \sum_{j=0}^{r-1} 2^{r-j} \cdot 2^{\ell+j} P_{2^{\ell-j}} \bigg) = \frac{1}{2^{\ell+r}}\left(q^{2^{r-1}}-1\right)^2, \end{align*} by applying the induction hypothesis and the two base cases inside the summation. \end{proof} As an alternate approach, one can consider elliptic curves with complex multiplication by $i$. \begin{example} Let $k = \mathbb{F}_q$ with $q\equiv 1 \pmod 4$. Then the elliptic curve $E$ given by $y^2 = x^3 + x$ has an automorphism $\varphi: (x,y) \rgoesto (-x,iy)$. This has order 4 on all finite points except for those where $y = 0$, which are all defined over $\mathbb{F}_q$. Consider $\psi_2[E, \varphi]$. There are two types of orbits: $(4,0)$ and $(2,1)$. A point $(x,y)$ in an orbit of type $(2,1)$ has $(\bar x, \bar y) = (x, -y)$; in other words, if we write $\mathbb{F}_{q^2} = \mathbb{F}_q[\sqrt{\alpha}]$ for some $\alpha\in \mathbb{F}_q$ then $x \in \mathbb{F}_q$ and $y = y'\sqrt\alpha$. Thus the point $(x,y')$ is an $\mathbb{F}_q$-point of the curve $E'$ given by $\alpha y^2 = x^3 + x$, the quadratic twist of $E$. Conversely, any finite point on $E'$ where $y \neq 0$ corresponds to a point in an orbit of type $(2,1)$; thus the number of orbits of type $(2,1)$ is a quarter of the number of finite points with nonzero $y$-coordinate: \[\# X_{(2,1)} = \frac{1}{4}\#E'(\mathbb{F}_q) - 1.\] Therefore, using the fact that $\#E(\mathbb{F}_q) + \#E'(\mathbb{F}_q) = 2q+2$ for a quadratic twist, \begin{align*} \# X_{(4,0)} &= \frac{1}{8}\left(\#E(\mathbb{F}_{q^2}) - \#E(\mathbb{F}_q) - 4 \# X_{(2,1)}\right) = \frac18\#E(\mathbb{F}_{q^2}) - \frac{q-1}{4}. \end{align*} Using Proposition~\ref{prop:stable-unstable} we conclude that \[\psi_2[E,\varphi] = \left((-1)^{\# X_{(4,0)} + \# X_{(2,1)}}, \frac14 \#E'(\mathbb{F}_q)-1\right) \in \Z/2\times \Z/2.\] In particular, $[E,\varphi]$ is non-permutative if $\# E'(\mathbb{F}_q)$ is a multiple of $8$. For example, consider $q = 5$. In this case $\# E(\mathbb{F}_{q^2}) = 32$ and $\#E'(\mathbb{F}_q) = 8$. Thus this element is non-permutative. \end{example}
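The point counts quoted for $q = 5$ are easy to confirm by brute force. The following Python sketch (ours) counts affine points of $E: y^2 = x^3 + x$ over $\mathbb{F}_5$ and $\mathbb{F}_{25}$ and of the twist $E': 2y^2 = x^3 + x$ over $\mathbb{F}_5$, modelling $\mathbb{F}_{25}$ as $\mathbb{F}_5[u]/(u^2 - 2)$ (note that $2$ is not a square mod $5$).
\begin{verbatim}
# F_25 modelled as pairs (a, b) = a + b*u with u^2 = 2
p = 5

def mul(x, y):
    (a, b), (c, d) = x, y
    return ((a * c + 2 * b * d) % p, (a * d + b * c) % p)

def add(x, y):
    return ((x[0] + y[0]) % p, (x[1] + y[1]) % p)

def count_affine(field, twist=1):
    """Affine points of twist * y^2 = x^3 + x over the given field."""
    pts = 0
    for x in field:
        for y in field:
            lhs = mul((twist % p, 0), mul(y, y))
            rhs = add(mul(x, mul(x, x)), x)
            pts += lhs == rhs
    return pts

F5  = [(a, 0) for a in range(p)]
F25 = [(a, b) for a in range(p) for b in range(p)]

assert count_affine(F5) + 1 == 4     # |E(F_5)|  (affine points + infinity)
assert count_affine(F25) + 1 == 32   # |E(F_25)| as used in the example
assert count_affine(F5, 2) + 1 == 8  # |E'(F_5)| for the twist 2y^2 = x^3 + x
\end{verbatim}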
\end{corollary} \begin{proof} Consider the map $K(\Var_{\mathcal{O}_k}) \rto K(\Var_{\mathbb{F}_q})$ induced by the reduction to $\mathbb{F}_q$. This is an $E_\infty$-map, and thus any element whose image is non-permutative is itself non-permutative. In the first case, consider the element $[\mathbb{P}^1,-x]$; by Corollary~\ref{cor:calc} its image under base change is non-permutative, and therefore it is also non-permutative. In the second case, consider the automorphism of $\mathbb{P}^1$ given by $x \rgoesto \alpha x$, for $\alpha$ a primitive $2^{\ell-1}$-st root of unity. The image of $[\mathbb{P}^1,\alpha x]\in K_1(\Var_{\mathcal{O}_k})$ in $K_1(\Var_{\mathbb{F}_q})$ is non-permutative by Corollary~\ref{cor:gen-root}, and it is therefore non-permutative. \end{proof} \begin{remark} In \cite{CWZ-zeta}, the author and collaborators give an alternate construction of such a ``derived zeta function.'' Using the observation that $|X(L_n)|$ is exactly the number of fixed points of $\mathrm{Frob}_k^{n}$ and the Grothendieck--Lefschetz fixed point theorem, they construct a map of spectra $K(\Var_k) \rto K(\End(\Q_\ell))$ which on $K_0$ is exactly the zeta function (using a result of Almkvist \cite{grayson78} to show that the composition of that map with $K_0(\End(\Q_\ell)) \rto (1+t\Z\llbracket t\rrbracket,\times)$ is $Z(-,t)$). As $K_0(\End(\Q_\ell))$ is the rational Witt vectors, this produces a derived zeta function which remembers that zeta functions should be rational. Our current construction cannot do that, as the data about orbits of different sizes are independent of one another. However, the construction of \cite{CWZ-zeta} has the weakness that it is not a map of $E_\infty$-ring spectra, as it was constructed using two formal inverses to weak equivalences. The construction was also significantly more complicated than the construction in this paper, making it much more difficult to analyse. In future work, the author hopes to find a construction that unifies the strengths of these two approaches, so that it can retain the rationality data as well as the ring structure. \end{remark} \section{Technical Preliminaries} \label{sec:technical} In this section we review some of the technical preliminaries necessary for the proofs. Most of the results in this section can be found in \cite[Section 2]{Z-Kth-ass}; we revisit them here in the interest of readability. \begin{definition}[{\cite[Definition 2.1]{zakharevich10}}] Let $\C$ be an assembler. The category $\Tw(\C)$ is defined to have \begin{description} \item[objects] tuples $\SCob{A}{i}$, where $I$ is a finite set and each $A_i$ is a noninitial object in $\C$. \item[morphisms] A morphism $f:\SCob{A}{i} \rto \SCob{B}{j}$ is a map of finite sets $f:I \rto J$ (called the \emph{set map}), together with morphisms $f_i:A_i \rto B_{f(i)}$ in $\C$, for each $i\in I$ (called the \emph{component maps}). The component maps must satisfy the condition that for $i\neq i'$, if $f(i) = f(i')$ then $f_i$ and $f_{i'}$ are disjoint. \item[composition] The composition of $f:\SCob{A}{i} \rto \SCob{B}{j}$ and $g:\SCob{B}{j} \rto \SCob{C}{k}$ is given by the set map $g\circ f$, together with the component maps $g_{f(i)} \circ f_i:A_i \rto C_{gf(i)}$. \end{description} The category $\mathcal{W}(\C)$ is the subcategory of $\Tw(\C)$ containing all morphisms $\SCob{A}{i} \rto \SCob Bj$ such that for all $j\in J$, the family $\{f_i:A_i \rto B_j\}_{i\in f^{-1}(j)}$ is a finite disjoint covering family.
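For instance, if $I = \{1,2\}$ and $J = \{*\}$, then a morphism $\{A_1,A_2\} \rto \{B\}$ in $\mathcal{W}(\C)$ is precisely a pair of disjoint morphisms $f_1: A_1 \rto B$ and $f_2: A_2 \rto B$ forming a finite disjoint covering family; that is, it witnesses a decomposition of $B$ into the two pieces $A_1$ and $A_2$.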
\end{definition} In this paper, we use two distinct constructions of the $K$-theory of an assembler: the $\Gamma$-space definition from \cite{Z-Kth-ass} and the Waldhausen category definition from \cite{Z-ass-pi1}. In \cite[Theorem 2.1]{Z-ass-pi1} it is shown that these two constructions produce equivalent $K$-theories for closed assemblers; in this paper, we will also show that this equivalence respects the monoidal structure. We give a short review of these definitions here. Write $\Gamma\mathbf{Spc}$ for the category of $\Gamma$-spaces, $\mathbf{WaldCat}$ for the category of Waldhausen categories, and $\Sp$ for the category of symmetric spectra. We begin by recalling the two definitions. \begin{definition}[{\cite[Definition 2.12]{Z-Kth-ass}}] \label{def:Kass} Let $X$ be a pointed set; write $X^\circ \defeq X \smallsetminus \{*\}$. For an assembler $\C$, we write $X \sma \C$ for the assembler $\bigvee_{x\in X^\circ} \C$; here, the wedge of assemblers is obtained by taking their disjoint union and identifying the initial objects. This gives a functor $\FinSet_* \times \mathbf{Asm} \rto \mathbf{Asm}$. In addition, there is a natural transformation $\cdot \sma N\mathcal{W}(\C) \rto N\mathcal{W}(\cdot \sma \C)$ given for each $X$ by the composition \[X \sma N\mathcal{W}(\C) \cong \bigvee_{X^\circ} N\mathcal{W}(\C) \rto N\left(\bigoplus_{X^\circ} \mathcal{W}(\C)\right) \cong N\mathcal{W}(X \sma \C).\] For an assembler $\C$, we define \[K^\Gamma(\C) \defeq \mathbf{B}(X \rgoesto N\mathcal{W}(X \sma \C)).\] Here, $\mathbf{B}$ is the classifying spectrum functor which takes a $\Gamma$-space to a spectrum. When not comparing this construction to $K^W$ (defined below), we write $K$ instead of $K^\Gamma$. \end{definition} \begin{definition}[{\cite[Definition 1.7]{Z-ass-pi1}}] Suppose that $\C$ is a closed assembler. We define the category $\SC(\C)$ to have \begin{description} \item[objects] $\ob \Tw(\C)$ \item[morphisms] A morphism $f:\SCob Ai \rto \SCob Bj$ is represented by a span \[\SCob Ai \lto^p \SCob Ck \rto^\sigma \SCob Bj.\] Here $p$ is a morphism in $\Tw(\C)$, and $\sigma$ is represented by a set map $\sigma: K \rto J$, together with component maps $\sigma_k:C_k \rto B_{\sigma(k)}$ which are isomorphisms in $\C$. \item[composition] The composition of two morphisms $f: \SCob{A}{i} \rto \SCob{B}j$ and $g: \SCob Bj \rto \SCob Ck$ represented by a diagram \begin{squisheddiagram} { & \SCob{A'}{i'} & & \SCob{B'}{j'} \\ \SCob Ai && \SCob Bj && \SCob Ck \\}; \to{1-2}{2-1}_p \to{1-2}{2-3}^\sigma \to{1-4}{2-3}_q \to{1-4}{2-5}^\tau \end{squisheddiagram} is defined by pulling back $q$ along $\sigma$ and composing down the two sides. It is necessary to check that such a pullback produces a well-defined composition; see \cite[Lemma 6.4]{zakharevich10}. \end{description} We give $\SC(\C)$ the structure of a Waldhausen category by defining \begin{description} \item[cofibrations] to be those morphisms where $p$ is in $\mathcal{W}(\C)$ and $\sigma$ has an injective set map, and \item[weak equivalences] to be those cofibrations where $\sigma$ has a bijective set map. \end{description} For a closed assembler $\C$, we define \[K^W(\C) \defeq K(\SC(\C)).\] \end{definition} For assemblers it is often possible to compute the quotient of the $K$-theory of an assembler by the $K$-theory of a subassembler by simply ``removing'' the objects of the subassembler. \begin{definition}[{\cite[Definition 2.9]{Z-Kth-ass}}] Let $\C$ be an assembler and $\D$ a sieve in $\C$.
The assembler $\C \smallsetminus \D$ is defined to have as its underlying category the full subcategory of $\C$ containing all objects not in $\D^\circ$. A family $\{f_i:A_i \rto A\}_{i\in I}$ is a covering family in $\C\smallsetminus\D$ if there exists a family $\{f_j:A_j \rto A\}_{j\in J}$ with each $A_j\in \D$ such that $\{f_i:A_i \rto A\}_{i\in I\cup J}$ is a covering family in $\C$. \end{definition} Often it is the case that $K(\C)/K(\D) \simeq K(\C\smallsetminus\D)$; see \cite[Theorem D]{Z-Kth-ass} for more detail. Here, we consider a situation where this does \emph{not} hold, as it will be important intuition for the construction of the monoidal structure on $c\mathbf{Asm}$. Consider an object $A$ which has an empty covering family. The morphism $\{\}_\emptyset \rto \{A\}_{\{*\}}$ is a weak equivalence in $\SC(\C)$. Thus, morally speaking, $A$ should not contribute to the $K$-theory of $\C$. However, this can be deceiving, as the underlying categorical structure of $\C$ can contribute to the $K$-theory of $\C$ despite this. To help illustrate this, we present an example, which will be helpful in understanding the difference between $\C\boxtimes \D$ and $\C \sma \D$ in Section~\ref{sec:monoidal}: \begin{example} \label{ex:square} Let $\C$ be the assembler with the following underlying category: \begin{diagram} { & & B \\ \initial & A & & D. \\ & & C \\}; \to{2-1}{2-2} \to{2-2}{1-3} \to{1-3}{2-4} \to{2-2}{3-3} \to{3-3}{2-4} \end{diagram} The topology on $\C$ is generated by the covering families $\{B \rto D, C \rto D\}$ and the empty covering families of $A$ and $\initial$. Note that $D$ has \emph{no} nontrivial finite disjoint covering families. Thus, despite the fact that $D$ is covered by $B$ and $C$, \[K(\C) \simeq \S\vee\S\vee\S,\] with one copy of $\S$ for each of $B$, $C$, and $D$. Now consider $\C\smallsetminus \{\initial \rto A\}$; this assembler is given by the diagram \begin{diagram} { & B \\ \initial & & D. \\ & C\\}; \to{2-1}{1-2} \to{1-2}{2-3} \to{2-1}{3-2} \to{3-2}{2-3} \end{diagram} The topology on this assembler is generated by the covering family $\{B \rto D, C \rto D\}$ and the empty covering family on $\initial$. Here, $D$ \emph{does} have a nontrivial finite disjoint covering family; thus \[K(\C \smallsetminus \{\initial \rto A\}) \simeq \S \vee \S,\] with one copy of $\S$ for each of $B$ and $C$. \end{example} \section{A monoidal structure on the category of assemblers} \label{sec:monoidal} The goal of this section is to construct a symmetric monoidal structure on $c\mathbf{Asm}$ in such a way that the $K$-theory functor is symmetric monoidal and $\S^\mathbf{Asm}$ is the unit. We begin with a helper definition. \begin{definition} Let $\C,\D$ be two closed assemblers. The assembler $\C\boxtimes \D$ has as its underlying category the category $\C\times \D$, and its topology is generated by the coverage in which the covering families are families $\{(A_i,B_j) \rto (A,B)\}_{(i,j)\in I\times J}$, where $\{A_i \rto A\}_{i\in I}$ is a covering family in $\C$ and $\{B_j \rto B\}_{j\in J}$ is a covering family in $\D$. \end{definition} \begin{lemma} $\C\boxtimes \D$ is a closed assembler. \end{lemma} \begin{proof} First we need to check that the Grothendieck topology is well-defined; for this it suffices to show that the pullback of a family in the coverage is still in the coverage, which follows because covering families in $\C$ and $\D$ are closed under pullbacks. Axioms (I) and (M) follow directly from the definition. Axiom (R) holds because $\C\boxtimes \D$ has pullbacks.
\end{proof} \begin{remark} This construction is analogous to the construction of the product topology. In the product topology, the generating open sets are the products of open sets in the two factors. More concretely, suppose $U \subseteq X$ is covered by $\{A_1,\ldots,A_n\}$ and $V\subseteq Y$ is covered by $\{B_1,\ldots,B_m\}$. Then to cover $U\times V$ we need to take $\{A_i \times B_j \,|\, 1 \leq i \leq n, 1 \leq j \leq m\}$. Contrast this with the usual disjoint union topology on $\C\times \D$ (where $\C$ and $\D$ are sites) where $\{(A_i,B_i) \rto (A,B)\}_{i\in I}$ is a covering family if $\{A_i \rto A\}_{i\in I}$ and $\{B_i \rto B\}_{i\in I}$ are both covering families. \end{remark} Let $\alpha_{\C,\D,\E}: (\C\boxtimes \D) \boxtimes \E \rto \C\boxtimes (\D\boxtimes \E)$ be the functor taking $((A,B),C)$ to $(A,(B,C))$. Let $\gamma_{\C,\D}: \C\boxtimes \D \rto \D\boxtimes \C$ be the functor taking $(A,B)$ to $(B,A)$. Let $\lambda_\C: \S^\mathbf{Asm}\boxtimes \C \rto \C$ be the projection onto the second coordinate and let $\rho_\C: \C \boxtimes \S^\mathbf{Asm} \rto \C$ be the projection onto the first coordinate. \begin{lemma} \label{lem:boxtimes_nats} The natural transformations $\alpha$, $\gamma$, $\lambda$ and $\rho$ satisfy all of the axioms of a symmetric monoidal structure except the condition that $\lambda$ and $\rho$ be natural isomorphisms. \end{lemma} \begin{proof} The only part that is not direct from the definitions is checking that $\lambda_\C$ and $\rho_\C$ are well-defined morphisms of assemblers. We focus on $\lambda_\C$; the result for $\rho_\C$ will follow analogously. Since the topology on $\S^\mathbf{Asm}\boxtimes\C$ is generated by a pretopology and since $\lambda_\C$ commutes with pullbacks (since pullbacks in $\S^\mathbf{Asm}\boxtimes \C$ are done coordinatewise) it suffices to check that for any covering family $\mathscr{F}$, $\lambda_\C\mathscr{F}$ is a covering family. A covering family in $\S^\mathbf{Asm}\boxtimes \C$ is a finite refinement of families in the coverage. The projection of a family in the coverage is a covering family. The refinement of a covering family can either refine the projection, or it can add some morphisms to the covering family (which still keeps it a covering family). Either way, the projection of a covering family is a covering family, as desired. \end{proof} Thus $\boxtimes$ does not produce a symmetric monoidal structure on $\mathbf{Asm}$. To make this into a monoidal structure it is necessary to rectify this problem. \begin{definition} We define $\C\mathrel{\tilde\vee}\D$ to be the full subassembler of $\C\boxtimes \D$ containing those objects where one coordinate or the other is the initial object. The subassembler $\C\mathrel{\tilde\vee}\D$ is a sieve in $\C\boxtimes \D$. Define \[\C\sma \D \defeq (\C\boxtimes \D) \smallsetminus (\C\mathrel{\tilde\vee}\D).\] \end{definition} The relationship of $\C\sma\D$ to $\C\boxtimes \D$ has the exact flavor of Example~\ref{ex:square}, with objects in $\C\mathrel{\tilde \vee}\D$ being obstructions to objects in $\C\boxtimes\D$ being disjoint. Once these objects are removed, moreover, the natural transformations $\alpha, \gamma, \lambda,\rho$ are all well-defined with $\boxtimes$ replaced by $\sma$, since \emph{as categories} $\C\sma\D$ can be thought of as a full subcategory of $\C\boxtimes \D$. \begin{lemma} \label{lem:smasym} $(c\mathbf{Asm}, \S^\mathbf{Asm}, \sma)$ is a symmetric monoidal category.
\end{lemma} \begin{proof} By Lemma~\ref{lem:boxtimes_nats} all that remains to show is that $\lambda$ and $\rho$ are natural isomorphisms. We check that $\lambda_\C$ is an isomorphism $\S^\mathbf{Asm}\sma \C \rto \C$. The objects of $\S^\mathbf{Asm}\sma \C$ are the initial object and pairs $(*,A)$ with $A\in \C$. $\lambda_\C$ takes the initial object to the initial object and the pair $(*,A)$ to $A$. The structure on morphisms is analogous. Since the functor is a bijection on both objects and morphisms, it is an isomorphism, as desired. We must also check that $\lambda_{\S^\mathbf{Asm}} = \rho_{\S^\mathbf{Asm}}$. This is the case because $\S^\mathbf{Asm} \sma \S^\mathbf{Asm}$ has a unique nontrivial map to $\S^\mathbf{Asm}$; since both $\lambda_{\S^\mathbf{Asm}}$ and $\rho_{\S^\mathbf{Asm}}$ are isomorphisms, they must be equal. \end{proof} Each object of $\C\tilde\vee\D$ sitting inside $\C\boxtimes \D$ has an empty covering family. It is thus tempting to conclude that $K(\C\boxtimes \D)$ may already have the correct symmetric monoidal structure, at least up to homotopy. In general this is \emph{not} the case, as structures similar to those in Example~\ref{ex:square} arise. In fact, most objects in $\C^\circ\times \D^\circ$ have \emph{no} nontrivial finite disjoint covering families. Indeed, suppose that $(A,B)$ is an object for which the empty family is not a covering family. Then any nontrivial covering family will have two elements that share a coordinate, and will therefore not be disjoint. On the other hand, in $\C\sma \D$, any pair of finite disjoint covering families produces a finite disjoint covering family, since disjointness in a single coordinate already suffices for disjointness in $\C\sma\D$. For a more concrete example, consider $K(\S^\mathbf{Asm} \boxtimes \S^{\mathbf{Asm}})$. $\S^\mathbf{Asm} \boxtimes \S^\mathbf{Asm}$ has three noninitial objects and no nontrivial finite disjoint covering families. Thus $K(\S^\mathbf{Asm}\boxtimes \S^\mathbf{Asm}) \simeq \S \vee\S\vee\S$. If $\boxtimes$ gave the correct monoidal structure this would instead be $\S$, so we see that $\boxtimes$ is not the desired structure, even up to homotopy. The monoidal structure of assemblers gives rise to an interesting phenomenon: in general, the category $\SC(\C\sma\D)$ will \emph{not} be saturated. We give an explicit example to illustrate how this can arise, and we stress that such examples are the norm and not the exception: \begin{example} \label{ex:nonsaturated} Let $\C$ be the assembler $\mathbf{Seg}$, discussed in Example~\ref{ex:seg}.
In $\SC(\mathbf{Seg}\sma\mathbf{Seg})$ there is the following diagram of morphisms: \begin{center} \begin{tikzpicture}[scale=1.5] \draw (0,0) rectangle (1,1); \draw[xshift=-1pt] (2,2) rectangle (2.3,2.7); \draw[yshift=-1pt] (2.3,2) rectangle (3,2.3); \draw[yshift=1pt] (2,2.7) rectangle (2.7,3); \draw[xshift=1pt] (2.7,2.3) rectangle (3,3); \draw (2.3,2.3) rectangle (2.7,2.7); \draw[xshift=-1pt,yshift=-1pt] (4,0) rectangle (4.3,0.3); \draw[xshift=-1pt] (4,0.3) rectangle (4.3,0.7); \draw[xshift=-1pt, yshift=1pt] (4,0.7) rectangle (4.3,1); \draw[yshift=-1pt] (4.3,0) rectangle (4.7,0.3); \draw (4.3,0.3) rectangle (4.7,0.7); \draw[yshift=1pt] (4.3,0.7) rectangle (4.7,1); \draw[xshift=1pt, yshift=-1pt] (4.7,0) rectangle (5,0.3); \draw[xshift=1pt] (4.7,0.3) rectangle (5,0.7); \draw[xshift=1pt, yshift=1pt] (4.7,0.7) rectangle (5,1); \draw[->] (1.1,1.1) to (1.9,1.9); \draw[->] (3.1,1.9) to node[above=-.7ex,sloped] {$\sim$} (3.9,1.1); \draw[->] (1.1,0.5) to node[above=-.7ex,sloped] {$\sim$} (3.9,0.5); \end{tikzpicture} \end{center} A weak equivalence is a morphism that can be written as a finite composition of decompositions into ``grids'' on each rectangle; the two marked morphisms are therefore weak equivalences, but the unmarked one is not. Thus $\SC(\mathbf{Seg}\sma\mathbf{Seg})$ does not satisfy the saturation axiom. \end{example} \section{The interaction of $K$-theory and the monoidal structure} \label{sec:Kmonoidal} In an ideal world, the $K$-theory functor would be monoidal and we could construct ring spectra simply by finding monoid objects inside $\mathbf{Asm}$. However, that is not the case: even the category of pointed finite sets does not produce an honest ring spectrum, but rather an $E_\infty$-ring spectrum, as it is not possible to make a completely rigid model of both of the monoidal structures (disjoint union and product) on finite sets. However, we can produce the next best thing: a bipermutative category. \begin{definition} A category $\C$ is \emph{permutative} if it is equipped with a functor $\oplus: \C\times\C \rto \C$, an object $0\in \C$, and a natural isomorphism $\gamma: a\oplus b \cong b \oplus a$ satisfying the extra conditions that \begin{itemize} \item[(1)] $a \oplus (b \oplus c) = (a\oplus b) \oplus c$, \item[(2)] $a \oplus 0 = a = 0 \oplus a$, and \item[(3)] $\gamma_{a,0} = 1_a$ and the following diagrams commute: \[ \begin{inline-diagram} { a \oplus b & b\oplus a & a \oplus b\\}; \to{1-1}{1-2}^\gamma \to{1-2}{1-3}^\gamma \node (m-100-100) at (m-1-1) {\phantom{$a\oplus b$}}; \diagArrow{bend right}{100-100}{1-3}_{1_{a\oplus b}} \end{inline-diagram} \qquad \begin{inline-diagram} { a\oplus b \oplus c & & c\oplus a \oplus b \\ & a\oplus c \oplus b. \\}; \to{1-1}{1-3}^\gamma \to{1-1}{2-2}_{1\oplus \gamma} \to{2-2}{1-3}_{\gamma\oplus 1} \end{inline-diagram} \] \end{itemize} In other words, a permutative category is a symmetric monoidal category with strict associativity and unit. This is referred to as the \emph{additive} structure on the permutative category. A permutative category $\C$ is \emph{bipermutative} if it is equipped with a second permutative structure $(\C,\otimes,1)$ (which is referred to as the \emph{multiplicative} structure) and natural distributivity maps \[d_l: (a\otimes b) \oplus (a'\otimes b) \rto (a\oplus a') \otimes b\] and \[d_r: (a\otimes b) \oplus (a \otimes b') \rto a \otimes (b\oplus b')\] satisfying certain compatibility requirements, described in \cite[Definition 3.3, 3.6]{elmendorfmandell}.
\end{definition} It is not immediately obvious why the distributivity maps cannot be ``rigidified'' away, given that monoidal structures can generally be replaced with rigid versions. To help with this, and as it will be used in Proposition~\ref{prop:biperm}, we give an explicit description of a bipermutative structure on the category of finite sets. \begin{example} \label{ex:finset} Let $\FinSet$ be the category with \begin{description} \item[objects] the sets $\emptyset$ and $\{1,\ldots,n\}$ for all natural numbers $n$ and \item[morphisms] functions between finite sets. \end{description} The additive permutative structure on finite sets is given by disjoint union, where we think of ``concatenating the two sets in order''. Thus in $\{1,\ldots,k\} \oplus \{1,\ldots,\ell\} = \{1,\ldots,k+\ell\}$ we think of the first $k$ elements as coming from $\{1,\ldots,k\}$ and the rest as coming from $\{1,\ldots,\ell\}$, in the correct order. (Although morphisms are not required to preserve order, we keep track of it here so as to analyse the symmetry more precisely.) The map $\gamma$ is the $k,\ell$-shuffle which preserves the order of the two sets and moves the elements past one another. The multiplicative permutative structure is via the cartesian product of sets, in which the isomorphism $\{1,\ldots,k\} \times \{1,\ldots,\ell\} \cong \{1,\ldots,k\ell\}$ is given via the lexicographic ordering of pairs. With this second structure, the natural transformation $d_l$ is the identity map, but the natural transformation $d_r$ is \emph{not}, as illustrated below: \begin{center} \begin{tikzpicture} \node at (1,-1) {$a\otimes (b\oplus b')$}; \node at (6,-1) {$(a \otimes b)\oplus (a \otimes b')$}; \draw (0,0) rectangle (2,1) (0,1) rectangle (2,2); \foreach \i in {10,20,30,40,50,60,70,80,90} \draw[blue!\i!green,->] ({\i/50},0.1) -- ({\i/50},1.9); \draw (5,0) rectangle (7,1); \draw (5,1.1) rectangle (7,2.1); \foreach \i in {5,10,15,20,25,30,35,40,45} \draw[blue!\i!green,->] ({\i/25+5},1.2) -- ({\i/25+5},2); \foreach \i in {50,55,60,65,70,75,80,85,90} \draw[blue!\i!green,->] ({(\i-45)/25+5},0.1) -- ({(\i-45)/25+5},0.9); \end{tikzpicture} \end{center} In these pictures, each rectangle represents the set of pairs, with the arrows showing the induced ordering; greener arrows come before bluer arrows. Note that in the two pictures the orderings are not the same. \end{example} We now investigate the structures on assemblers that produce morphisms of bipermutative categories. We begin with a simple observation. \begin{lemma} Let $\C$ be an assembler. Then $\mathcal{W}(\C)$ is a permutative category with the permutative structure induced by the permutative structure on $\FinSet$. \end{lemma} \begin{proof} The category $\mathcal{W}(\C)$ has as objects tuples $\{A_i\}_{i\in I}$ with $I\in \FinSet$ and $A_i\in \C^\circ$ for all $i$. We define \[\{A_i\}_{i=1}^n \oplus \{B_j\}_{j=1}^m = \{C_k\}_{k=1}^{n+m},\] where $C_k = A_k$ if $k \leq n$ and $C_k = B_{k-n}$ for $k > n$. The $0$ object is the empty tuple $\{\}_\emptyset$. As $\FinSet$ has a permutative structure, this inherits the same structure. \end{proof} Given a sufficiently strict monoidal product on $\C$, this permutative structure can be extended to a bipermutative structure. \begin{notation} Given $I,J\in \FinSet$, write $I\otimes J$ for the set $I\times J$ with the lexicographic ordering, associated via this ordering with the set $\{1,\ldots,|I|\cdot|J|\}$. We will write $(i,j)\in I\otimes J$ for the image of $(i,j)$ under this ordering.
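For example, if $I = \{1,2\}$ and $J = \{1,2,3\}$, the lexicographic ordering on $I\otimes J$ is \[(1,1) < (1,2) < (1,3) < (2,1) < (2,2) < (2,3),\] so that $(2,1)\in I\otimes J$ denotes the element $4\in\{1,\ldots,6\}$.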
\end{notation} \begin{definition} Let $\C$ be an assembler with a multiplication $\mu: \C \sma \C \rto \C$. A \emph{symmetry for $\mu$} is a natural isomorphism $\gamma: \mu \Rto \mu\circ \tau$, where $\tau$ swaps the two factors of $\C$, satisfying the extra condition that $\gamma_{A,B} \circ \gamma_{B,A} = 1_{\mu(A,B)}$ for all $A,B$. \end{definition} \begin{proposition} \label{prop:biperm} Let $\C$ be a monoid object in $\mathbf{Asm}$ with product $\mu: \C\sma \C \rto \C$. Then the permutative structure on $\mathcal{W}(\C)$ extends to a ring structure, with \[\{A_i\}_{i\in I} \otimes \{B_j\}_{j\in J} \cong \{\mu(A_i,B_j)\}_{(i,j)\in I\otimes J}.\] If in addition $\C$ is equipped with a symmetry for $\mu$ then this ring structure is a bipermutative structure. \end{proposition} \begin{proof} First we must check that the given tensor product is a well-defined functor $\otimes:\mathcal{W}(\C) \times \mathcal{W}(\C) \rto \mathcal{W}(\C)$. We factor this as \[\mathcal{W}(\C) \times \mathcal{W}(\C) \rto^\nu \mathcal{W}(\C\sma\C) \rto^{\mathcal{W}(\mu)} \mathcal{W}(\C).\] Here, we define $\nu$ by \[\nu(\SCob{A}{i}, \SCob{A'}{i'}) = \{(A_i,A'_{i'})\}_{(i,i')\in I\times I'}\] on objects, and define it on morphisms by taking the pair $f: \SCob Ai \rto \SCob Bj$ and $f': \SCob{A'}{i'} \rto \SCob{B'}{j'}$ to the morphism defined by the set map $f\times f': I\times I' \rto J\times J'$ and the component maps $(f_i, f_{i'}'): (A_i, A'_{i'}) \rto (B_{f(i)}, B'_{f'(i')})$. It remains to check that this is well-defined: that if both $f$ and $f'$ were morphisms in $\mathcal{W}(\C)$ then $\nu(f,f')$ is a morphism in $\mathcal{W}(\C\sma \C)$. In particular it is necessary to check: \begin{description} \item[disjointness] Given distinct $(i_0,i'_0),(i_1,i'_1)\in I\times I'$ with $f(i_0) = f(i_1)$ and $f'(i_0') = f'(i_1')$ we must check that $(f_{i_0},f'_{i'_0})$ and $(f_{i_1},f'_{i'_1})$ are disjoint. It suffices to check that the pullback of these two morphisms is equal to $\initial$. However, since the pairs were distinct, one of the two coordinates must be different; suppose WLOG that it is the first one, so that $i_0 \neq i_1$. Then the maps $f_{i_0}$ and $f_{i_1}$ are disjoint, so $A_{i_0}\times_{B_{f(i_0)}} A_{i_1} = \initial$. But then the pullback has an initial coordinate when computed inside $\C\times \C$, and is therefore equal to $\initial$ in $\C\sma \C$, as desired. \item[covering] A morphism $f:\SCob Ai \rto \SCob Bj$ in $\mathcal{W}(\C)$ is a collection of finite disjoint covering families $\{f_i:A_i \rto B_j\}_{i\in f^{-1}(j)}$. We have already checked that $\nu(f,f')$ is a collection of disjoint morphisms; it remains to check that they give covering families. In particular, it must be the case that for all $(j,j')\in J\times J'$, the family \[\{(f_i,f'_{i'}): (A_i,A'_{i'}) \rto (B_j, B'_{j'})\}_{(i,i')\in (f,f')^{-1}(j,j')}\] is a covering family. It is an element of the coverage that generates the topology on $\C \sma \C$, so it is a covering family, as desired. \end{description} The unit map for $\C$ is a morphism of assemblers $\S \rto \C$; denote by $e\in \C$ the image of $*$. We claim that $\otimes$ is strictly associative and $\{e\}_{\{1\}}$ is a strict unit for $\otimes$. We have that \[(\SCob{A}{i} \otimes \SCob{B}{j}) \otimes \SCob Ck = \{\mu(\mu(A_i,B_j),C_k)\}_{((i,j),k)\in (I\otimes J)\otimes K}\] and \[\SCob{A}{i} \otimes (\SCob{B}{j} \otimes \SCob Ck) = \{\mu(A_i,\mu(B_j,C_k))\}_{((i,j),k)\in I\otimes (J\otimes K)}.\] Since $\FinSet$ is bipermutative, the two indexing sets are equal.
Since $\C$ is a monoid object, $\mu$ is strictly associative, so the two objects are equal, as well. Thus $\otimes$ is strictly associative. Similarly, \[\{e\}_{\{1\}} \otimes \SCob Ai = \{\mu(e, A_i)\}_{(1,i)\in \{1\}\otimes I};\] since $\mu$ is strictly unital, this is equal to $\SCob Ai$, as desired. The analogous proof works for $\SCob Ai \otimes \{e\}_{\{1\}}$. If we have the structure map $\gamma$ then symmetry also holds. Indeed, the map \[\SCob Ai \otimes \SCob Bj = \{\mu(A_i,B_j)\}_{(i,j)\in I\otimes J} \rto \{\mu(B_j,A_i)\}_{(j,i)\in J\otimes I} = \SCob Bj \otimes \SCob Ai\] is given by sending $(i,j)\in I\otimes J$ to $(j,i)\in J\otimes I$ using the symmetry map in $\FinSet$, and mapping $\mu(A_i,B_j) \rto \mu(B_j,A_i)$ via $\gamma_{A_i,B_j}$. This satisfies the relation for the symmetry map in a permutative category because the set map does (as $\FinSet$ is a bipermutative category) and the component maps satisfy it by the condition on $\gamma$. We now turn to checking the relations for the ring structure. We define distributivity maps \begin{align*} &d_l: (\SCob{A}{i} \otimes \SCob Bj)\oplus (\SCob{A'}{i'}\otimes \SCob Bj) \rto (\SCob Ai \oplus \SCob{A'}{i'}) \otimes \SCob Bj \\ & d_r: (\SCob Ai \otimes \SCob Bj) \oplus (\SCob Ai \otimes \SCob{B'}{j'}) \rto \SCob Ai \otimes (\SCob Bj \oplus \SCob{B'}{j'}) \end{align*} to be induced from the distributivity maps on $\FinSet$, with identity maps as the components. We follow the naming from \cite[Definition 3.3, Definition 3.6]{elmendorfmandell}. Axiom (a) holds because the product of any set with the empty set is empty. Axioms (b), (c), (d), (e) and (f) hold because they hold in $\FinSet$ and all component maps in the given diagrams are identities. Given the extra structure of the symmetry $\gamma$, Axiom (e') holds in $\FinSet$, and over each index this reduces to the diagram \begin{diagram} { \mu(A,B) & \mu(A,B) \\ \mu (B,A) & \mu(B,A) \\}; \arrowsquare{1}{\gamma_{A,B}}{\gamma_{A,B}}{1} \end{diagram} which commutes. \end{proof} \begin{corollary} The $K$-theory of any monoid object in $\mathbf{Asm}$ is an $A_\infty$-ring spectrum. The $K$-theory of a monoid object equipped with a symmetry for the multiplication is an $E_\infty$-ring spectrum. \end{corollary} Using this we can construct some examples of $E_\infty$-ring spectra. \begin{example} For a group $G$, let $\mathbf{Fin}_G$ be the assembler with \begin{description} \item[objects] pairs of an integer $m \geq 0$ together with a finite set of tuples of integers of length $m$ equipped with a $G$-action, and \item[morphisms] $G$-equivariant inclusions of sets. \end{description} This has a multiplication $\mu: \mathbf{Fin}_G \sma \mathbf{Fin}_G \rto \mathbf{Fin}_G$ taking a pair $(m,S) \sma (n,T)$ to $(m+n, S\times T)$, where an element in $S\times T$ is modeled as a tuple of length $m+n$, except when $m = 1$ and $|S| = 1$, in which case $(m, S) \sma (n,T) = (n,T)$ (and similarly for the case when $n=1$ and $|T| = 1$). The $G$-action on $S\times T$ is diagonal. The multiplication is strictly associative and strictly unital, with unit $(1, \{1\})$. It is also equipped with a symmetry for the multiplication, taking $S\times T$ to $T\times S$ via a shuffle swapping the first $m$ and last $n$ coordinates applied to each tuple. Thus, by the corollary, $K(\mathbf{Fin}_G)$ is an $E_\infty$-ring spectrum.
\end{example} Moreover, \cite[Theorem 9.3.8]{elmendorfmandell} immediately implies the following: \begin{proposition} A symmetric monoid map of symmetric monoid objects in assemblers induces an $E_\infty$-map on the $K$-theories. \end{proposition} Unfortunately, much of the time we do not immediately have a strict model for the multiplication. Consider the example we are most interested in: varieties. Given varieties $X$ and $Y$ over $k$ the product should be the fiber product $X\times_k Y$. However, to make $\Var_k$ a monoid object, we must be able to model this fiber product \emph{rigidly}. Although this is possible, we prefer to apply a more general technique and work with strong morphisms of monoidal assemblers. \begin{definition} \label{def:symmonass} A \emph{(symmetric) monoidal assembler} is an assembler $\C$ equipped with a map $\mu: \C \sma \C \rto \C$ and a map $\eta: \S \rto \C$ satisfying the axioms of a (symmetric) monoidal category. More concretely, a (symmetric) monoidal assembler is equipped with a natural isomorphism $\alpha_{A,B,C}: \mu(A,\mu(B,C)) \rto \mu(\mu(A,B),C)$ and natural isomorphisms $\lambda_A: \mu(\eta(*),A) \rto A$ and $\rho_A:\mu(A,\eta(*)) \rto A$ (as well as a natural isomorphism $\gamma_{A,B}:\mu(A,B) \rto \mu(B,A)$, in the symmetric case) satisfying the usual relations of a symmetric monoidal category. A symmetric monoidal morphism of symmetric monoidal assemblers $F:\C \rto \D$ is a morphism of assemblers together with a natural transformation $\nu_{A,B}: \mu_\D(F(A), F(B)) \rto F(\mu_\C(A,B))$ and a morphism $\epsilon: \eta_\D(*) \rto F(\eta_\C(*))$ satisfying the relations of a symmetric monoidal functor. \end{definition} We begin by showing that the $K$-theory of a symmetric monoidal assembler is an $E_\infty$-ring spectrum. \begin{definition} For any sequence of objects $[A_1,\ldots,A_n]$ in $\C$, define \[M[A_1,\ldots,A_n] \defeq \mu(A_1,\mu(A_2,\ldots,\mu(A_{n-1},A_n))).\] Define $M[] = \eta(*)$. Let $\hat \C$ be the assembler with \begin{description} \item[noninitial objects] finite sequences $[A_1,\ldots,A_n]$ of noninitial objects in $\C$, and \item[morphisms] given by \[\Hom_{\hat \C}([A_1,\ldots,A_n], [B_1,\ldots,B_m]) = \Hom_\C(M[A_1,\ldots,A_n], M[B_1,\ldots,B_m]).\] \item[topology] covering families are those which are mapped to covering families by $M$. \end{description} With these definitions, $M$ extends to a functor $M:\hat \C \rto \C$ which is essentially surjective, full, and faithful, and which is continuous by the definition of the topology on $\hat\C$. Then $M$ is a continuous equivalence of assemblers; the inverse equivalence is given by the functor mapping $A$ to the sequence $[A]$, except for objects isomorphic to $\eta(*)$, which are mapped to $[]$. We define a symmetric monoidal structure on $\hat \C$ by defining, on objects, \[\mu([A_1,\ldots,A_n],[B_1,\ldots,B_m]) = [A_1,\ldots,A_n,B_1,\ldots,B_m].\] On morphisms, we define $\mu(f:A \rto A',g:B \rto B')$ to be the morphism given by \[M(\mu(A,B)) \rto^{\cong} \mu(M(A),M(B)) \rto^{\mu(f,g)} \mu(M(A'),M(B')) \rto^{\cong} M(\mu(A',B')).\] The two marked isomorphisms are uniquely defined using the associator, so this is a well-defined morphism. The associator and the two unit isomorphisms are identities.
The symmetry map \[\gamma_{A,B}: \mu([A_1,\ldots,A_n], [B_1,\ldots,B_m]) \rto \mu([B_1,\ldots,B_m], [A_1,\ldots,A_n])\] is induced by the unique shuffle map \[\mu(M(A),M(B)) \rto^{\cong} \mu(M(B), M(A)).\] \end{definition} \begin{lemma} \label{lem:tomonoid} For a (symmetric) monoidal assembler $\C$, the functor $M$ induces a homotopy equivalence $K(\hat \C) \rto K(\C)$. In addition, $\hat \C$ is a monoid object in $\mathbf{Asm}$. \end{lemma} \begin{proof} By definition, $\C$ is the full subassembler of $\hat \C$ containing all sequences of length at most $1$. As this inclusion is an equivalence of categories it induces an equivalence $\mathcal{W}(\C) \rto \mathcal{W}(\hat \C)$, and thus induces an equivalence on $K$-theories, as desired. We now check that $\hat \C$ is a monoid object. First we show that $\mu$ and $\eta$ are well-defined morphisms of assemblers. For $\eta$ this is clear: since $\S^\mathbf{Asm}$ has no nontrivial covering families and no disjoint objects, this simply states that the map $\S^\mathbf{Asm} \rto \hat \C$ taking $*$ to $[]$ is a functor. Now consider $\mu$. It preserves the initial object by definition. To check that it preserves covering families it suffices to check that it preserves all families in the coverage. Thus consider a covering family $\{(A_i,B_j) \rto (A,B)\}_{(i,j)\in I\times J}$ in the coverage generating the topology of $\hat \C \sma \hat \C$. By definition, this implies that $\{M(A_i) \rto M(A)\}_{i\in I}$ and $\{M(B_j) \rto M(B)\}_{j\in J}$ are covering families in $\C$, and thus $\{\mu(M(A_i),M(B_j)) \rto \mu(M(A),M(B))\}_{(i,j)\in I\times J}$ is a covering family in $\C$. By definition, the associator induces an isomorphism \[\mu(M(X),M(Y)) \rto M(\mu(X,Y))\] for all $X$ and $Y$. Thus the covering family $\{\mu(M(A_i),M(B_j)) \rto \mu(M(A),M(B))\}_{(i,j)\in I\times J}$ implies that $\{M(\mu(A_i,B_j)) \rto M(\mu(A,B))\}_{(i,j)\in I\times J}$ is a covering family as well. Thus $\mu$ preserves covering families. To check that $\mu$ preserves disjointness, note that $M$ is fully faithful, so disjointness in $\hat \C$ can be checked in $\C$ after applying $M$, where it follows from the fact that the multiplication on $\C$ preserves disjointness. Since $\hat \C$ is strictly associative and strictly unital the relevant commutative diagrams in $\mathbf{Asm}$ commute on the nose, and $\hat \C$ is a monoid object. \end{proof} Together with \cite[Corollary 3.9]{elmendorfmandell}, the above results prove the following theorem. \begin{theorem} \label{thm:Einfty} Let $\C$ be a symmetric monoidal assembler. Then $K(\C)$ is equivalent to a strictly commutative ring symmetric spectrum. \end{theorem} \begin{example} Let $\Var_S$ be the assembler of varieties over $S$, and define the monoidal structure by $\mu(X,Y) \defeq X\times_S Y$. Then $K(\Var_S)$ is an $E_\infty$-ring spectrum. \end{example} We now turn our attention to symmetric monoidal morphisms. \begin{theorem} \label{thm:ringhom} A symmetric monoidal morphism of symmetric monoidal assemblers induces a ring homomorphism on $K$-groups. \end{theorem} \begin{proof} The main issue is the question of how strict the symmetric monoidal morphism is. Let $F: \C \rto \D$ be a symmetric monoidal morphism of symmetric monoidal assemblers. Using strictification, there is a diagram \begin{diagram} { \hat \C & \hat \D \\ \C & \D.
\\}; \arrowsquare{\hat F}{}{}{F} \diagArrow{implies, -implies}{2-1}{1-2} \end{diagram} commuting up to natural isomorphism; thus the map $K_*(\C) \rto K_*(\D)$ is the same as the composition \[K_*(\C) \rto K_*(\hat \C) \rto K_*(\hat \D) \rto K_*(\D).\] The functor $\mathcal{W}(\hat F)$ is strict on the additive and multiplicative structure, since $\hat F$ is strict on the multiplicative structure by definition, and the rest of the structure arises from the structure on $\FinSet$, which is the same in both domain and codomain. Thus by \cite[Theorem 9.3.7]{elmendorfmandell} it is equivalent to a map of strictly commutative ring spectra, and thus induces a ring homomorphism on homotopy groups. Since the diagram commutes up to natural isomorphism, it commutes up to homotopy after applying $K$-theory; in particular, the homomorphisms induced on $\pi_*$ must be the same. Thus, since the vertical maps are ring isomorphisms and the top map is a ring homomorphism, the induced map $K_*(\C) \rto K_*(\D)$ must also be a ring homomorphism. \end{proof} With this theorem we now have the two desired examples: \begin{example} Let $\Var_k$ be the assembler of $k$-varieties for $k$ a finite field, and let $G = \mathrm{Gal}(\bar k/k)$. Then the map $\zeta: \Var_k \rto \mathbf{AFSet}_G$ given by $X \rgoesto X(\bar k)$ is a strong monoidal map of assemblers, and thus induces an $E_\infty$-map on the $K$-theory. \end{example} \begin{example} Let $f: T \rto S$ be a map of finite type. Then base change induces an $E_\infty$-map of $K$-theories $K(\Var_S) \rto K(\Var_T)$. In particular, for $K$ a number field with $k$ a residue field of its ring of integers, the map $K(\Var_{\Spec \mathcal{O}_K}) \rto K(\Var_k)$ is an $E_\infty$-map of spectra. \end{example} \section{Interaction with $K_1$} \label{sec:K1} A model for the $K_1$ of an assembler was given in \cite{Z-ass-pi1}, which contains the following theorem: \begin{theorem}[{\cite[Theorem B]{Z-ass-pi1}}] \label{thm:Bpi1} For any assembler $\C$, every element of $K_1(\C)$ can be represented by a pair of morphisms \[\morpair{A}{B}{f}{g}\] in $\mathcal{W}(\C)$. These satisfy the relations \[ \big[\morpair{A}{B}{f}{f}\big] = 0, \qquad \big[\morpair{B}{C}{g_1}{g_2}\big] + \big[\morpair{A}{B}{f_1}{f_2}\big] = \big[\longmorpair{3em}{A}{C}{g_1f_1}{g_2f_2}\big] \] and \[{\noHorizArrowDetection \big[\morpair{A}{B}{f_1}{f_2}\big] + \big[\morpair{C}{D}{g_1}{g_2}\big] = \big[\begin{tikzpicture}[baseline] \node (A) at (0,0) {$A\amalg C$}; \node (B) at (5em,0) {$B\amalg D$}; \diagDrawArrow{->, bend left}{A}{B}^{f_1\amalg g_1} \diagDrawArrow{->, bend right}{A}{B}_{f_2\amalg g_2}% \end{tikzpicture}\big] } \] \end{theorem} The particular case of interest in this paper is when $A = B$ is an object of $\C$ (objects of $\C$ can be considered objects of $\mathcal{W}(\C)$ by indexing over a singleton) and $g$ is the identity morphism. Then $f$ must be an automorphism of $A$. We write the generator \[[A,f] \defeq \big[\morpair AAf{1_A}\big].\] When restricted to such generators, the above theorem implies the following: \begin{corollary}\label{cor:K1sp} Let $\C$ be an assembler which contains disjoint unions, in the sense that for any two objects $A$ and $B$, the pushout of the diagram \[A \lto \initial \rto B\] exists and is covered by the induced inclusions from $A$ and $B$; we denote this pushout by $A \amalg B$. Every element $[A, f]$ with $A\in \ob\C$ and $f\in \Aut(A)$ represents an element in $K_1(\C)$.
These elements satisfy the relations \[[A, 1_A] = 0, \qquad [A, g] + [A, f] = [A, gf], \qquad [A, f] + [B, g] = [A\amalg B, f\amalg g].\] \end{corollary} \begin{proof} The first two relations follow directly from Theorem~\ref{thm:Bpi1}, so we focus on proving the last relation. The $\amalg$ in the statement of the theorem is the disjoint union in $\mathcal{W}(\C)$, which simply takes disjoint unions on indexing sets, so it implies that \[[A,f] + [B,g] = \big[\morpair{\{A,B\}}{\{A,B\}}{(f,g)}{1}\big],\] where $(f,g): \{A,B\} \rto \{A,B\}$ denotes the morphism induced by $f$ and $g$ inside $\mathcal{W}(\C)$. It remains to check that the representative on the right is $[A\amalg B, f \amalg g]$, where $f\amalg g: A \amalg B \rto A \amalg B$ is the morphism on the disjoint union in $\C$. Let $\delta: \{A,B\} \rto \{A\amalg B\}$ be the morphism given by the covering family $\{A \rto A\amalg B, B \rto A\amalg B\}$ induced from the coproduct in $\C$. Inside $\mathcal{W}(\C)$, the square \begin{diagram} { \{A,B\} & \{A,B\} \\ \{A\amalg B\} & \{A \amalg B\} \\}; \arrowsquare{(f,g)}{\delta}{\delta}{f\amalg g} \end{diagram} commutes. Therefore in $K_1(\C)$, \begin{align*} \big[\morpair{\{A,B\}}{\{A,B\}}{(f,g)}{1}\big] &= \big[\morpair{\{A,B\}}{\{A,B\}}{(f,g)}{1}\big] + \big[\morpair{\{A,B\}}{\{A\amalg B\}}{\delta}{\delta}\big]\\ &= \big[\morpair{\{A,B\}}{\{A\amalg B\}}{\delta}{\delta}\big] + \big[\morpair{A\amalg B}{A \amalg B}{f\amalg g}{1_{A\amalg B}}\big] = [A\amalg B, f\amalg g], \end{align*} as desired. \end{proof} We now turn to examining how these elements interact with the product structure. From the previous section we know that for any pair of closed assemblers $\C$ and $\D$, there is a map \[\phi:K(\C) \sma K(\D) \rto K(\C \sma \D).\] Applying $\pi_1$ induces a map \[\big(K_0(\C) \otimes K_1(\D)\big) \oplus \big(K_1(\C) \otimes K_0(\D)\big) \rcofib \pi_1(K(\C) \sma K(\D)) \rto^{\pi_1\phi} K_1(\C\sma \D).\] Since $K_0(\C)$ is generated by symbols $[C]$, where $C\in \ob\SC(\C)$, and $K_1(\D)$ is generated by symbols $\big[\morpair{A}{B}{f}{g}\big]$, it makes sense to ask whether there is a good representative for \[\phi\left([C] \otimes \big[\morpair{A}{B}{f}{g}\big]\right).\] \begin{theorem} \label{thm:K1mult-real} Let $\C$ and $\D$ be closed assemblers, and let $\phi:K^W(\C) \sma K^W(\D) \rto K^W(\C\sma \D)$ be given by the monoidal structure of $K^W$. For any $[C]\in K_0(\C)$ and $\big[\morpair{A}{B}{f}{g}\big]$ in $K_1(\D)$, \[\phi_*\left( [C] \otimes \big[\morpair ABfg\big] \right) = \left[ \longmorpair{1.2}{(C,A)}{(C,B)}{(1_C,f)}{(1_C,g)} \right] \in K_1(\C\sma \D).\] Analogously, for any $\big[\morpair CDfg\big]$ in $K_1(\C)$ and $[A]\in K_0(\D)$, \[\phi_*\left( \big[\morpair CDfg\big]\otimes [A] \right) = \left[ \longmorpair{1.2}{(C,A)}{(D,A)}{(f,1_A)}{(g,1_A)} \right] \in K_1(\C\sma \D).\] \end{theorem} \begin{proof} In \cite[Theorem 2.5]{murotonks07} Muro and Tonks give a presentation for the $1$-type of a Waldhausen category in a way which is compatible with its multiplicative structure. More concretely, they show that for any Waldhausen category $\E$ there exists a stable quadratic module (see \cite[Definition 1.4]{murotonks07}) generated by the objects and weak equivalences in $\E$, whose homology encodes $K_0(\E)$ and $K_1(\E)$.
This is compatible with the multiplicative structures on Waldhausen categories, in the sense that for a biexact functor $F: \C\times \D \rto \E$ of Waldhausen categories, the generators map in a predictable manner, with (for example) $F_*([A], [C \rwe D]) = [F(A,C) \rwe F(A,D)]$. Muro and Tonks then give generators and relations for $K_1$ of a Waldhausen category based on the structure of the stable quadratic module. In \cite[Definition 3.2, Proposition 3.4]{Z-ass-pi1} a special case of this structure is worked out, namely the case when the Waldhausen structure arises from an assembler; generators and relations for $K_0$ and $K_1$ of an assembler follow from this in \cite[Theorem 3.8]{Z-ass-pi1}. The map $\phi$ in the statement of the theorem arises from the biexact functor $\SC(\C) \times \SC(\D) \rto \SC(\C\sma \D)$ induced by $\nu_{\C,\D}$. Applying the multiplicative structure given by Muro and Tonks to the presentations of $K_0$ and $K_1$ gives the desired result. \end{proof} \begin{remark} The map on $\pi_1$ also induces a map \[\operatorname{Tor}_1(K_0(\C), K_0(\D)) \rto K_1(\C\sma \D).\] If this term is nontrivial it may be possible to find an interesting interpretation for its image in $K_1(\C\sma\D)$. \end{remark} \begin{corollary} Let $\C$ and $\D$ be closed assemblers. For any objects $A\in \C$, $B\in \D$, $\alpha\in \Aut(A)$ and $\beta\in \Aut(B)$, \[\phi_*([A]\otimes [B,\beta]) = [(A,B), (1, \beta)]\in K_1(\C\sma\D)\] and \[\phi_*([A,\alpha]\otimes [B]) = [(A,B), (\alpha, 1)] \in K_1(\C\sma \D).\] \end{corollary} \bibliographystyle{IZ}
{ "timestamp": "2022-09-20T02:21:10", "yymm": "2209", "arxiv_id": "2209.08640", "language": "en", "url": "https://arxiv.org/abs/2209.08640" }
\section{\label{SI_Setup} SUPPLEMENTARY EXPERIMENTAL METHODS} \subsection{Experimental Setup} Optical measurements were performed in a home-built cryogenic confocal microscope with a helium flow cryostat (Janis ST-500 probe station). The confocal microscope contains two independent branches for excitation and detection at near-infrared (NIR) and visible wavelengths. Both branches are equipped with a scanning galvo system (Thorlabs GVS012). The two branches are combined with a pellicle beamsplitter (Thorlabs BP245B1) and directed into a 4f system. Finally, a NIR 50X objective lens (Olympus LCPLN50XIR) inside vacuum is used to focus excitation onto the sample and collect the fluorescence signal. The NIR branch of the confocal microscope was used for measurements of the neutral silicon vacancy (SiV$^{0}${}). The excitation channel and detection channel are combined with a 925 nm dichroic beamsplitter (Semrock FF925-Di01-25-D). The resonant excitation channel is combined with the detection channel with a 10/90 beamsplitter (Thorlabs BS044). Off-resonant excitation is performed with a tunable diode laser (Toptica DL pro 850 nm), and the signal is filtered using a 937 nm long pass filter (Semrock FF01-937/LP-25). The laser is pulsed using a home-built shutter system \cite{Shutter2015}, and its intensity is controlled by a variable optical attenuator (Thorlabs V800PA). Photoluminescence excitation spectroscopy at the zero-phonon line is performed using resonant excitation with a tunable diode laser (Toptica CTL 950) and detecting the sideband emission of SiV$^{0}${} with a 980~nm long pass filter (Semrock LP02-980RE-25). The signal is coupled into a single mode fiber and detected either by a CCD spectrometer (Princeton Instruments Acton SP-2300i with Pixis 100 CCD and 300 g/mm grating) or by a superconducting nanowire detector (Quantum Opus, optimized for 950 - 1100 nm). For ODMR, microwave (MW) excitation is applied using a thin wire stretched across the sample. The MW excitation is generated with a signal generator (Keysight N9310A) and then amplified by a high-power MW amplifier (Triad TB1003). Two 0.8 - 2 GHz MW circulators (Ditom D3C0802S) were added after the amplifier for circuit protection. The MW excitation is pulsed using a fast MW switch (Mini-Circuits ZASWA-2-50DR+) gated by a TTL pulse generator (Spincore PBESR-PRO-500-PCI). For ODMR measurements, the MW excitation is modulated with a 2 ms period at 50\% duty cycle. The visible branch of the confocal microscope was used for measurements of negatively charged silicon vacancy centers (SiV$^-${}) and for visible wavelength photodoping to form SiV$^{0}${}. The excitation and detection channels are combined with a 650 nm dichroic beamsplitter (Thorlabs DMLP650). The microscope is equipped with three different lasers for 532 nm (Lamdapro UG-100 mW), 594 nm (Newport R-39582), and 637 nm (Thorlabs LP637-SF70) excitation. The three lasers are coupled to a single fiber using a RGB Combiner (Thorlabs RGB26HF). The 637 nm laser is pulsed using a home-built shutter system \cite{Shutter2015}. The 532 nm laser is pulsed using an acousto-optic modulator and its intensity is controlled by a variable optical attenuator (Thorlabs V600A). For fluorescence measurement of SiV$^{0}${}, the signal is filtered by a bandpass filter (Thorlabs FB740-10) and detected by a single photon detector (Excelitas SPCM-AQRH-44-FC). \subsection{Sample Preparation} A \{110\} diamond (D1) grown by plasma chemical-vapor deposition (Element Six) was studied in the main text.
The diamond was doped with silicon during growth, and the silicon concentration in the sample is measured to be 0.8~ppm based on secondary ion mass spectrometry (SIMS). The concentration of SiV$^-${} centers is estimated to be $\sim$30 ppb by comparing the SiV$^-${} fluorescence with a sample of known SiV$^-${} concentration. This sample contains SiV$^{0}${} centers in some regions, and the optical spin polarization of these SiV$^{0}${} centers was previously studied in Ref.~\cite{Zhang2020}. Throughout this work, we focus on regions where no SiV$^{0}${} can be observed without photoactivated itinerant carriers. The concentration of nitrogen vacancy (NV$^-$) centers is estimated to be $\sim$0.03 ppb by comparing the NV$^-$ fluorescence with a sample of known NV$^-$ concentration~\cite{Rose2017}. Data based on a second silicon-doped diamond (D2) grown by plasma chemical-vapor deposition (Element Six) is presented in the supplemental material. The concentration of SiV$^-${} centers is estimated to be $\sim$40 ppb using UV-Vis absorption. The concentration of NV$^-$ centers in this sample is estimated to be $\sim$0.07 ppb. In this sample, we observe stabilization of nonequilibrium SiV$^{0}${}, and similar wavelength dependent photoactivation of itinerant carriers. Data based on a third silicon-doped diamond (D3) grown by plasma chemical-vapor deposition (Element Six) is presented in the supplemental material to demonstrate the heterogeneous charge dynamics behavior among different samples and the spectroscopic signature of SiV$^{2-}$. The sample is doped with nitrogen and silicon during growth, with an estimated nitrogen concentration of 50 ppb based on growth conditions and a silicon concentration of 300 ppb based on SIMS measurement. We estimate the resulting NV$^-$ concentration to be $\sim$0.01 ppb and the SiV$^-${} concentration to be $\sim$2 ppb by comparing the NV$^-$ fluorescence and the SiV$^-${} fluorescence to fluorescence in samples with known NV$^-$ and SiV$^-${} concentration. Data based on a fourth diamond (D4) grown by plasma chemical-vapor deposition (Element Six) is presented in the supplemental material for study of single center dynamics. The optical properties of the single SiV$^{0}${} centers in this sample were previously characterized in Ref.~\cite{Rose2018}. The concentration of NV$^-$ centers in this sample is not uniform, and is estimated to be in the range of 0.1 - 0.9 ppb. This sample was implanted with silicon with a fluence of $1\times 10^9$~cm$^{-2}$. After implantation, the sample was annealed up to 1200 $^\circ$C with the following steps: (1) Ramp to 100 $^\circ$C over 1 hour, hold for 11 hours; (2) Ramp to 400 $^\circ$C over 4 hours, hold for 8 hours; (3) Ramp to 800 $^\circ$C over 6 hours, hold for 8 hours; (4) Ramp to 1200 $^\circ$C over 6 hours, hold for 2 hours; (5) Let cool to room temperature. Single centers are observable after the annealing. \begin{figure}[h!] \centering \includegraphics[width = 129mm]{FigureSI/FigSI_FTIR_combined_0808.pdf} \caption{{\bf FTIR measurements}. (a) FTIR spectra on different diamond samples. A HPHT diamond with high nitrogen concentration is plotted for reference. The peak at 1344 cm$^{-1}$ is characteristic of $\mathrm{N}_s^0$, and the peak height can be converted to the $\mathrm{N}_s^0$ concentration with a conversion factor of 30 ppm per cm$^{-1}$ absorbance. Inset: FTIR spectra near the expected $\mathrm{N}_s^0$ peak. No peak can be identified except for the HPHT diamond.
(b) FTIR spectrum for sample D3 before and after the noise filtering. (c) Fourier transform of the FTIR spectrum on sample D3 (blue). The interference-induced oscillation leads to a peak at 412~Hz. A notch filter centered at 412~Hz with a bandwidth of 10~Hz is used to filter out the background oscillation (yellow).} \label{fig:FTIR} \end{figure} We use Fourier-transform infrared spectroscopy (FTIR) to estimate the nitrogen concentration in the samples. The 1344 cm$^{-1}$ peak in the FTIR spectrum is related to substitutional nitrogen ($\mathrm{N}_s^0$, also referred to as the P1 center) and can be used for quantitative estimation of the concentration \cite{Liggins2010} (Fig.~\ref{fig:FTIR}(a)). In some samples, background oscillations show up in the FTIR spectrum due to interference of reflections from the two surfaces. We mitigate the contribution from this periodic background by filtering the spectrum with a narrow-band notch filter (Fig.~\ref{fig:FTIR}(b) and Fig.~\ref{fig:FTIR}(c)). No peak from $\mathrm{N}_s^0$ can be observed. Based on the noise level (standard deviation) from 1360 cm$^{-1}$ to 1380 cm$^{-1}$, we put a conservative upper bound of 300 ppb on the $\mathrm{N}_s^0$ concentration in these samples. It was reported that the ratio between the total nitrogen concentration and the NV$^-$ concentration is typically around 300:1 in nitrogen doped CVD diamonds~\cite{Edmonds2012}. Using this conversion factor, the nitrogen concentration is projected to be 9 ppb, 21 ppb and 3 ppb for samples D1, D2 and D3 based on the estimated native NV$^-$ concentration. We note that the projected nitrogen concentration of 3 ppb for sample D3 is smaller than the estimation (50~ppb) from growth conditions. This difference suggests that the incorporation of silicon likely influences the concentration ratio between nitrogen and NV$^-$. Therefore, the projected nitrogen concentration here should be taken as a conservative lower bound. \section{Additional Measurements} \subsection{Charge state dynamics in the dark} We probe the population decay of nonequilibrium SiV$^{0}${} centers in the dark after they are generated. No decay can be observed after 30~min (Fig.~\ref{fig:LA_dark}), consistent with previous observations of long-lived nonequilibrium charge states for NV centers and SiV$^-${} centers \cite{Jayakumar2016,Dhomkar2018}. \begin{figure}[h!] \centering \includegraphics[width = 129mm]{FigureSI/FigSI_DarkDynamics_LA.pdf} \caption{{\bf Charge state stability in the dark on sample D1}. After fixed location 532~nm illumination, nonequilibrium SiV$^{0}${} centers are probed after dark periods of 15 minutes and 30 minutes with 0.28~mW of 857~nm excitation. The spatial distribution of SiV$^{0}${} remains stable, suggesting SiV$^{0}${} centers generated in this nonequilibrium charge environment are long lived in the dark.} \label{fig:LA_dark} \end{figure} \subsection{Time-dependent study of carrier transport} In this section, we analyze the time-dependent carrier transport and capture by studying the hole diffusion process as a function of illumination time and excitation power. The width of the SiV$^{0}${} distribution is extracted by detecting the edge of the bright torus with a fixed threshold. Its evolution as a function of 532~nm illumination duration is shown in Fig.~\ref{fig:Diffusion_Time}(a). For a diffusion process described by Brownian motion, the width ($\sigma$) of the diffusion can be described by $\sigma(t) = (2D_{\mathrm{eff}})^{1/2}t^{1/2}$, where $D_{\mathrm{eff}}$ is the effective diffusion coefficient.
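In practice, the scaling exponent can be read off from a linear fit in log-log space: for the generalized power-law model $\sigma(t) = D^{1/2}t^{1/n}$ used in the fits below, \[\log \sigma(t) = \tfrac{1}{2}\log D + \tfrac{1}{n}\log t,\] so the slope of $\log\sigma$ versus $\log t$ gives $1/n$, and simple Brownian diffusion corresponds to a slope of $1/2$ (that is, $n = 2$).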
We observe that the growth rate of the torus deviates significantly from the $t^{1/2}$ scaling, with a fitted scaling of $t^{1/4.6}$. This slower growth rate suggests that the carrier diffusion cannot be described with the simple diffusion model. By inspecting the SiV$^{0}${} profile as a function of time, we observe that the saturated SiV$^{0}${} intensity reaches a similar value at different locations, and that SiV$^-${} centers farther away are converted to SiV$^{0}${} only when the closer SiV$^-${} centers are already converted. This suggests that the SiV$^-${} centers are acting as a strong absorptive medium for the holes, limiting the rate of hole diffusion. \begin{figure}[h!] \centering \includegraphics[width = 129mm]{FigureSI/FigSI_DiffusionFit.pdf} \caption{{\bf Time dependent measurement of SiV$^{0}${} generation}. (a) Spatial distribution of SiV$^{0}${} as a function of 532~nm (0.95~mW) illumination time. The duration was set from 0.3~s to 199.5~s with logarithmic spacing. The threshold for edge detection was set to 12 kcps. (b) Width of the SiV$^{0}${} torus as a function of total 532~nm illumination time. The data was fitted with a model $\sigma(t) = D^{1/2}t^{1/n}$, where $t$ denotes the total 532~nm illumination time, $D$ is a free coefficient, and $n$ is the exponent describing the speed of the growth of the torus. The blue curve is a fit to the data with $n$ constrained to 2, while the yellow curve is a fit to the data with $n$ as a free parameter, yielding $n = 4.6 \pm 0.1$.} \label{fig:Diffusion_Time} \end{figure} To probe the hole diffusion dynamics before the saturation of SiV$^{0}${} centers, we performed the same measurement with lower 532~nm power. Two different scalings in the growth rate of the torus are observed (Fig.~\ref{fig:Diffusion_Time_2}(b)). At early times, the size of the torus grows with a scaling close to $n = 2$, consistent with the diffusion model. At late times, where the saturation of SiV$^{0}${} becomes more prominent, the growth rate of the torus slows down and follows a scaling of $n = 4.0 \pm 0.1$, similar to the scaling observed in the higher power measurements (Fig.~\ref{fig:Diffusion_Time}). \begin{figure}[h!] \centering \includegraphics[width = 129mm]{FigureSI/FigSI_DiffusionFit_lowerP.pdf} \caption{{\bf Time dependent measurement of SiV$^{0}${} generation at lower excitation power}. (a) Spatial distribution of SiV$^{0}${} as a function of 532~nm (0.24~mW) illumination time. The duration was set from 0.3~s to 199.5~s with logarithmic spacing. The threshold for edge detection was set to 12 kcps. (b) Width of the SiV$^{0}${} torus as a function of total 532~nm illumination time. Two different scalings are observed at early and late times. The data at different times was fitted with a model $\sigma(t) = D^{1/2}t^{1/n}$, where $t$ denotes the total 532~nm illumination time, $D$ is a free coefficient, and $n$ is the exponent describing the speed of the growth of the torus. The blue line is a fit to the data with $n$ constrained to 2. The yellow line is a fit to the early time data with $n$ as a free parameter, yielding $n = 1.8 \pm 0.1$. The red dashed line is a fit to the late time data with $n$ as a free parameter, yielding $n = 4.0 \pm 0.1$. } \label{fig:Diffusion_Time_2} \end{figure} In addition to the strong absorption of holes by the SiV$^-${} centers, other impurities in the sample can also affect the carrier diffusion process.
For example, in order to satisfy charge neutrality in the sample, positively charged defects (presumably positively charged substitutional nitrogen) should be present to compensate for the negative charges of SiV$^-${} prior to photoactivation of carriers. After the generation of SiV$^{0}${} centers via hole capture, one needs to consider the resulting space-charge potential from the remaining positively charged substitutional nitrogen and the itinerant electrons. This space-charge potential can affect the carrier diffusion significantly~\cite{Lozovoi2020}. \subsection{Ionization dynamics of SiV$^{0}${} centers} We probe the stability of SiV$^{0}${} centers under different excitation wavelengths. The power dependence of the ionization rate is summarized in Fig.~\ref{fig:ionization}(a). We observe that the ionization rates can vary by orders of magnitude depending on the wavelength. When exciting at 857~nm, below the SiV$^{0}${} ionization threshold ($\sim$1.5~eV~\cite{Collins1995}), ionization is slow and the rate saturates at high powers. When exciting above the ionization threshold at 637~nm, a linear scaling of the ionization rate is observed, consistent with a single-photon ionization process. For 532~nm, the ionization rate is much faster than that at 637~nm, and the rate saturates at higher powers. We note that previous photoconductivity measurements on SiV$^{0}${} showed relatively flat responses above 1.8~eV~\cite{Collins1995}. The difference in ionization rate between 532~nm and 637~nm excitation therefore suggests an influence of the charge dynamics of other coexisting defects. \begin{figure}[h!] \centering \includegraphics[width = 129mm]{FigureSI/FigSI_Ionization.pdf} \caption{{\bf Ionization dynamics of SiV$^{0}${} centers}. (a) Power-dependent ionization rate of SiV$^{0}${} centers at different wavelengths. The dashed lines are linear fits to the data. The fit for 637~nm is consistent with a linear scaling. The rates for 532~nm and 857~nm deviate from a linear scaling and saturate at higher powers. (b) Energy level diagram with the hypothesized microscopic model for SiV$^{0}${} ionization at different wavelengths.} \label{fig:ionization} \end{figure} The hypothesized ionization processes under different wavelengths are shown in Fig.~\ref{fig:ionization}(b). For 857~nm excitation, the excitation energy (1.45~eV) is below the ionization threshold of SiV$^{0}${}, leading to a suppressed ionization rate. The non-negligible below-threshold ionization may be facilitated by shallow charge traps in the sample. These charge traps are photo-inactive and have a finite lifetime for the trapped charges that is independent of optical excitation. The holes from the SiV$^{0}${} centers can tunnel to the charge traps, and the saturation of these traps leads to the saturation of the ionization rate. For 637~nm excitation, the excitation energy is higher than the ionization threshold, resulting in a single-photon ionization process with a linear power dependence. With 532~nm excitation, photoinduced processes from the coexisting NV centers and P1 centers need to be considered, as the interplay between these processes and SiV$^{0}${} ionization can modify the charge dynamics significantly. NV centers cycle between the negative charge state (NV$^-$) and the neutral charge state (NV$^0$) under 532~nm excitation, while this charge cycling is not possible under 637~nm excitation.
NV$^-$ centers can capture the holes generated from photoionization of SiV$^{0}${} efficiently due to their large hole-capture cross section \cite{Lozovoi2021}. Therefore, under 532~nm excitation, NV centers can serve as a continuously replenished sink for holes, which leads to a faster instantaneous ionization rate. Additionally, the P1 center ionization threshold is around 1.7~eV, and the ionization rate is around ten times larger at 2.3~eV (532~nm) than at 1.9~eV (637~nm) \cite{Farrer1969,Nesladek1998,Isberg2006,Heremans2009}. The resulting positively charged P1 centers can then form a local space-charge potential \cite{Lozovoi2020}, preventing the diffusion of holes and resulting in a saturation of the ionization rate at high powers. We note that without detailed knowledge of the concentrations of the different defects and the relevant capture and ionization rates for the photoinduced processes, it is difficult to disentangle the competing charge dynamics. A full model involving charge generation, transport, and the local space-charge potential within the excitation volume may help to provide a more definitive understanding and assignment of the underlying processes, and would be an interesting avenue for future exploration. \subsection{Comparing the photoactivation effect of 532 nm and 561 nm} To further confirm the origin of carriers in our sample, we performed fixed-location illumination using 561~nm and 532~nm light with the same illumination power and duration on two spots separated by 30 microns. Bright tori of SiV$^{0}${} fluorescence of similar sizes are observed, as shown in Fig.~\ref{fig:LA_561}. Together with the wavelength dependence shown in the main text (Fig.~5), it can be concluded that a sharp transition for the photoactivation of holes exists between 561~nm and 595~nm. Notably, these wavelengths are to the blue and red, respectively, of the 575~nm zero-phonon line of NV$^0$, below which continuous charge state cycling of NV centers is possible. This sharp transition strongly suggests NV centers as the main source of itinerant carriers in our sample. \begin{figure}[h!] \centering \includegraphics[width = 86mm]{FigureSI/FigSI_LA_561_doping.pdf} \caption{{\bf Stabilization of SiV$^{0}${} with photoactivation of carriers using 561 nm illumination}. Optical illumination at 561~nm (left) and 532~nm (right) is focused on two locations separated by 30 microns. The illumination power is 0.46~mW and the illumination time is 30~s. The asymmetric pattern generated under 532~nm may arise from astigmatism of the visible beam passing through the NIR optics at a slight tilt angle.} \label{fig:LA_561} \end{figure} \subsection{Photogeneration of SiV$^{0}${} using 637 nm excitation} In recent studies, it was observed that 638~nm excitation of SiV$^-${} centers can affect the charge state of nearby SiV$^-${} centers \cite{Gardill2021}. In sample D1, we observe a similar effect using 637~nm illumination, but the effect is less efficient compared to that of 532~nm illumination (Fig.~\ref{fig:LA_red_doping}). After fixed-location 637~nm illumination, we observe the generation of a small number of nonequilibrium SiV$^{0}${} centers. At the same time, the spatial distributions of SiV$^{0}${} and SiV$^-${} are inverted, suggesting that hole capture by SiV$^-${} is responsible for the stabilization of SiV$^{0}${}.
Under 637~nm illumination, the photoactivation of carriers cannot be accounted for by the charge dynamics of NV centers, as NV$^-$ centers photoionize to NV$^0$ centers and only produce a limited number of electrons. Similarly, substitutional nitrogen (N$_s^0$) photoionizes weakly under 637~nm excitation, but during this process only electrons are generated \cite{Dhomkar2018}. With the high concentration of SiV$^-${} centers in our sample, it is likely that the photogeneration of nonequilibrium SiV$^{0}${} under 637~nm illumination is related to continuous charge cycling of SiV centers \cite{Gardill2021}. \begin{figure}[h!] \centering \includegraphics[width = 86mm]{FigureSI/FigSI_LA_red_doping.pdf} \caption{{\bf Stabilization of SiV$^{0}${} with 637 nm excitation}. (a) SiV$^{0}${} spatial distribution after 532~nm laser illumination (left) or 637~nm laser illumination (right). (b) SiV$^-${} spatial distribution after 532~nm laser illumination (left) or 637~nm laser illumination (right). The whole area is initialized to be SiV$^-${} rich with low-power 532~nm raster scans prior to the laser illumination. Despite the higher excitation power and the longer illumination time for 637~nm illumination, the photoactivation of carriers (inferred from the photogeneration of SiV$^{0}${}) is much weaker compared to that of 532~nm illumination.} \label{fig:LA_red_doping} \end{figure} \subsection{Photogeneration of SiV$^{0}${} in sample D2} To check that the photogeneration of SiV$^{0}${} is not a phenomenon unique to a single sample, we repeat the photoactivation of itinerant carriers on another sample, D2. Similar to sample D1, we observe photogeneration of SiV$^{0}${} and depletion of SiV$^-${} centers with fixed-location 532~nm illumination~(Fig.~\ref{fig:MT_confocal}). \begin{figure}[h!] \centering \includegraphics[width = 86mm]{FigureSI/FigSI_MT_Confocal.pdf} \caption{{\bf Charge state conversion of SiV centers with 532~nm illumination on sample D2}. Spatial distribution of SiV$^{0}${} centers (left) and SiV$^-${} centers (right) after a fixed-location 532~nm illumination (0.98~mW for 60~s).} \label{fig:MT_confocal} \end{figure} In sample D2, under the same illumination conditions as for 532~nm, photogeneration of SiV$^{0}${} centers cannot be observed with 637~nm illumination, consistent with the observation in sample D1 (Fig.~\ref{fig:MT_wavelength}). Considering the similar NV concentrations in the two samples, this also suggests NV centers as the source of holes in sample D2. \begin{figure}[h!] \centering \includegraphics[width = 129mm]{FigureSI/FigSI_MT_Wavelength.pdf} \caption{{\bf Wavelength dependence of photogeneration of SiV$^{0}${} on sample D2}. Optical illumination at 637~nm and 532~nm is focused on the same location sequentially for 60~s with 0.68~mW. The arrows indicate the temporal sequence. SiV$^{0}${} centers were only observed after 532~nm illumination.} \label{fig:MT_wavelength} \end{figure} We probe the stability of the photogenerated SiV$^{0}${} centers in the dark in sample D2. No appreciable change of the population is observed after 15 minutes (Fig.~\ref{fig:MT_dark}), suggesting the nonequilibrium SiV$^{0}${} centers in this sample are also long-lived. \begin{figure}[h!] \centering \includegraphics[width = 86mm]{FigureSI/FigSI_DarkDynamics_MT.pdf} \caption{{\bf Charge state stability in the dark on sample D2}. After fixed-location 532~nm illumination, nonequilibrium SiV$^{0}${} centers are probed after a dark period of 15 minutes with 0.29~mW of 857~nm excitation.
The spatial distribution of SiV$^{0}${} remains stable.} \label{fig:MT_dark} \end{figure} \subsection{Photogeneration of SiV$^{0}${} in sample D3} In this section, we show that the charge state dynamics of SiV centers can be strongly sample dependent by studying the photodynamics in a third sample, D3. 532~nm illumination is focused at a fixed location on sample D3. Afterwards, the spatial distributions of SiV$^{0}${} and SiV$^-${} centers are probed~(Fig.~\ref{fig:SiV2m}). For SiV$^{0}${}, we observe the appearance of a bright torus (Fig.~\ref{fig:SiV2m}, left), similar to the observations in samples D1 and D2. However, the spatial distribution of SiV$^-${} is drastically different (Fig.~\ref{fig:SiV2m}, right). Four salient features are worth noting: (1) SiV$^-${} is bright under direct 532~nm illumination; (2) near the illumination location, a dark torus can be observed, the size of which is consistent with the bright torus of SiV$^{0}${}; (3) an additional bright torus is observed outside the smaller dark torus; (4) far away from the illumination location, the SiV centers are neither in the neutral charge state nor in the negative charge state. The first two features are consistent with our observations in samples D1 and D2, while the last two features are drastically different. First, the spatial distributions of SiV$^{0}${} and SiV$^-${} are no longer inverted. At the same time, both SiV$^{0}${} and SiV$^-${} are dark far away from the illumination location. Since optical illumination cannot create SiV centers but can only modify the SiV charge state, this suggests that a third charge state of SiV is present in this sample. Second, outside the dark torus of SiV$^-${}, a bright torus corresponding to photogeneration of SiV$^-${} is observed. Without any initial SiV$^{0}${} population, the photogeneration of SiV$^-${} centers suggests that the initial state prior to SiV$^-${} photogeneration is SiV$^{2-}$. In sample D3, the SiV centers are thus thermodynamically stable in the form of SiV$^{2-}$. With the photoactivated holes, the SiV$^{2-}$ centers are first converted to SiV$^-${} centers via single hole capture. Afterwards, the photogenerated SiV$^-${} centers can be converted to SiV$^{0}${} centers via additional hole capture. The larger torus size for SiV$^-${} compared to SiV$^{0}${} can then be explained by the fact that photogeneration of SiV$^{0}${} is possible only after photogeneration of SiV$^-${}. \begin{figure}[h!] \centering \includegraphics[width = 86mm]{FigureSI/FigSI_Lisbon_Confocal.pdf} \caption{{\bf Charge state dynamics of SiV centers in sample D3}. Spatial distribution of SiV$^{0}${} (left) and SiV$^-${} (right) after fixed-location illumination using 1~mW of 532~nm light for 60~s. The SiV$^{0}${} signal shows a bright torus around the illumination position. The SiV$^-${} signal shows an inner dark torus with the same size as the SiV$^{0}${} torus, as well as a larger bright torus. SiV$^{0}${} was read out using 8~mW of 857~nm excitation, while SiV$^-${} was read out using 0.37~mW of 637~nm excitation.} \label{fig:SiV2m} \end{figure} Based on our observations, the following model can account for the differences among these samples: the hole capture effect of SiV centers is robust across different samples, while the preferred charge state in equilibrium can vary depending on the details of the samples (SiV$^-${} in samples D1 and D2, SiV$^{2-}$ in sample D3).
This preferential charge state of SiV centers depends on the local Fermi level of the diamond \cite{Gali2013}: a higher Fermi level favors SiV$^{2-}$, while lowering the Fermi level favors SiV$^-${} and eventually SiV$^{0}${}. With a lower concentration of nitrogen, the Fermi level is closer to the middle of the bandgap, favoring SiV$^-${}. With a higher concentration of nitrogen, the Fermi level is pinned to 1.7~eV below the conduction band minimum, favoring the stabilization of SiV$^{2-}$. For samples D1, D2, and D3, we are unable to measure the differences in nitrogen concentration using FTIR due to the limited sensitivity. Additionally, charge traps in the samples can affect the Fermi-level pinning by nitrogen. Nevertheless, the observation of different thermodynamically preferred charge states in different samples may resolve the dispute in the previous literature about the dark state of SiV$^-${} centers \cite{Gardill2021,Dhomkar2018}. We note that the above discussion seems to resolve the apparent discrepancies in the charge capture process among samples, but the SiV charge state under direct 532~nm illumination remains unresolved. Specifically, in our samples, SiV$^-${} centers are bright under 532~nm excitation, while in some other works, SiV$^-${} centers are reported to enter a dark state under 515~nm or 532~nm excitation \cite{Gardill2021,Dhomkar2018}. More work is needed to resolve this discrepancy, but again, the concentration of local defects can play a big role. For example, even within a diffraction-limited spot, optical excitation can still address many SiV centers, NV centers, and P1 centers simultaneously, so carrier transport between these defects needs to be considered. The different concentrations of these three defects across samples may help resolve the discrepancy in the SiV$^-${} dynamics under direct 532~nm illumination. \subsection{Single center charge dynamics with 532~nm illumination} In this section, we present preliminary measurements of the influence of photoactivated carriers on the charge state dynamics of individual SiV centers. The single centers measured in this section are from two samples: (1) a region with low silicon doping in sample D1 and (2) an additional silicon-implanted sample, D4. Due to the low density of centers, single centers are resolvable as individual bright spots in the fluorescence scan~(Fig.~\ref{fig:AK_single}(a)). \begin{figure}[h!] \centering \includegraphics[width = 86mm]{FigureSI/FigSI_Alaska_Combined.pdf} \caption{{\bf Effect of remote 532~nm illumination on single centers}. (a) Fluorescence scan on sample D4 showing individual bright centers. The green star indicates the position for continuous 532~nm illumination. The bright spot in the white circle indicates the single center studied in (b). (b) Fluorescence count rate of the single center under 857~nm (7~mW) excitation with and without the 532~nm illumination (0.2~mW). The distance between the single center and the illumination location is $\sim$12.5 $\mu$m.} \label{fig:AK_single} \end{figure} For the single centers in these two samples, the emission shows visible telegraph switching between a bright state and a dark state (Fig.~\ref{fig:AK_single}(b)). The high-contrast charge state switching allows for quantification of the charge state stability. For these single centers, we take fluorescence time traces with and without continuous 532~nm illumination at a nearby location; a minimal sketch of how such a trace can be reduced to a bright-state population is given below.
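The following sketch illustrates one way the bright-state population could be extracted from a binned telegraph trace, by thresholding midway between the two count-rate levels. The synthetic trace, bin time, and count rates are hypothetical placeholders, not our measured parameters.
\begin{verbatim}
# Minimal sketch: estimate the bright-state occupation of a two-level
# (telegraph) fluorescence trace by thresholding the binned count rate.
# All numbers below are synthetic placeholders, not measured data.
import numpy as np

rng = np.random.default_rng(0)

bright_kcps, dark_kcps = 30.0, 5.0   # hypothetical count-rate levels
n_bins, bin_s = 5000, 0.01           # number of bins and 10 ms bin time

state = rng.random(n_bins) < 0.6                 # True = bright state
rate_kcps = np.where(state, bright_kcps, dark_kcps)
counts = rng.poisson(rate_kcps * 1e3 * bin_s)    # photons per bin
kcps = counts / (1e3 * bin_s)                    # back to a count rate

threshold = 0.5 * (bright_kcps + dark_kcps)      # midway between levels
bright_fraction = np.mean(kcps > threshold)
print(f"bright-state population: {bright_fraction:.2f}")  # ~0.60 here
\end{verbatim}
Comparing this fraction with and without the remote 532~nm illumination then quantifies the change in charge state stability.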
The emission statistics of several centers change upon remote photoactivation (Fig.~\ref{fig:single_hist}). For some centers, we observed a significant decrease of the population in the bright state, while the opposite change was observed for a center in sample D4 (Fig.~\ref{fig:single_hist}(a)). Additionally, no appreciable change was observed for one of the centers in sample D1 (Fig.~\ref{fig:single_hist}(d)). The variation among single centers may arise from an unknown source of local inhomogeneity; for example, high strain \cite{Rose2018} may modify the charge capture process. \begin{figure}[h!] \centering \includegraphics[width = 129mm]{FigureSI/FigSI_SC_Variation.pdf} \caption{{\bf Variation of charge state dynamics with remote 532~nm illumination for single centers}. Histograms of the fluorescence time traces of 5 different single centers in samples D1 and D4, measured with and without remote 532~nm illumination. The label in each plot denotes the distance between the 532~nm illumination and the single center. The 532~nm power was kept at 1.5~mW except for (a), where 0.2~mW was used. The lower count rate for the center in (a) was due to the use of a different detector with lower efficiency. (a) shows the histogram for the single center studied in Fig.~\ref{fig:AK_single}.} \label{fig:single_hist} \end{figure} \clearpage
\section{On the deterministic output of the decoder} \label{sec:appendix} In \cref{sec:2qrldc}, we assumed that the relaxed decoder $\ensuremath{\mathsf{Dec}}\xspace(i,\cdot)$ for a 2-query weak RLDC has deterministic output conditioned on the queries $j, k$. Here we explain the reason behind this assumption. Given an index $i \in [n]$ and queries $j,k$ made by $\ensuremath{\mathsf{Dec}}\xspace(i,\cdot)$, in the most general setting the output could be a random variable which depends on $i$ and $y_j$, $y_k$, where $y_j$, $y_k$ are the answers to queries $j$, $k$, respectively. An equivalent view is that the decoder picks a random function $f$ according to some distribution $D_{j,k}^{i}$, and outputs $f(y_j, y_k)$. Here $D_{j,k}^{i}$ is a distribution over all functions $\{f \colon \set{0,1}^2 \rightarrow \set{0,1,\perp} \}$. We claim that one can use a single \emph{deterministic} function $f_D\colon \set{0,1}^2 \rightarrow \set{0,1,\perp}$ in place of the random function $f \sim D_{j,k}^{i}$, while preserving perfect completeness and the relaxed decoding property (for the same parameters). We say a pair $(a,b) \in \ensuremath{{\{0,1\}}}\xspace^2$ is achievable if there exists $x \in \ensuremath{{\{0,1\}}}\xspace^n$ such that $(C(x)_j, C(x)_k)=(a,b)$. Otherwise we say $(a,b)$ is unachievable. A simple consequence of the perfect completeness property is that any two functions $f_1, f_2$ in the support of $D_{j,k}^{i}$ must agree on achievable pairs. Furthermore, the value they agree on cannot be $\perp$. Therefore, we can define $f_D$ as follows: \begin{align*} \forall (a, b) \in \ensuremath{{\{0,1\}}}\xspace^2, \quad f_D(a,b) = \begin{cases} 0 & \textup{if $f(a,b)=0$ for all $f \in \textup{supp}(D_{j,k}^{i})$} \\ 1 & \textup{if $f(a,b)=1$ for all $f \in \textup{supp}(D_{j,k}^{i})$} \\ \perp & \textup{otherwise} \end{cases}. \end{align*} It is easily verified that $f_D$ satisfies perfect completeness. Indeed, whenever the output is $\perp$, the decoder must have read an unachievable pair, which can only occur for a corrupted codeword. Also note that for any function $f \in \textup{supp}(D_{j,k}^{i})$, either $f_D$ agrees with $f$ on $(a,b)$, or $f_D(a,b)=\perp$. Therefore, $f(a,b) \in \set{x_i, \perp}$ implies $f_D(a,b) \in \set{x_i, \perp}$, and the relaxed decoding property is preserved. \subsection{Strong Insdel Relaxed Locally Correctable Codes}\label{sec:rLCC} In this section, we show how to tweak the above construction to obtain a strong Insdel relaxed locally correctable code (RLCC) with constant locality when the outer Hamming RLDC is replaced with a Hamming RLCC. We first give a formal definition of relaxed locally correctable codes. \begin{definition}[Relaxed Locally Correctable Code]\label{def:rLCC} A $(q,\delta,\alpha,\rho)$-Relaxed Locally Correctable Code (RLCC) $C \colon \Sigma^n \rightarrow \Sigma^m$ is a code for which there exists a randomized decoder that makes at most $q$ queries to the received word $y$, and satisfies the following properties: \begin{enumerate} \item (Perfect completeness) For every $i \in [m]$, if $y = C(x)$ for some message $x$, then the decoder, on input $i$, outputs $y_i$ with probability $1$. \item (Relaxed decoding) For every $i \in [m]$, if $y$ is such that $\mathrm{dist}\hspace{-1pt}(y, C(x)) \leq \delta$ for some unique codeword $C(x)$, then the decoder, on input $i$, outputs $C(x)_i$ or $\bot$ with probability $\geq \alpha$.
\item (Success rate) For every $y$ such that $\mathrm{dist}\hspace{-1pt}(y, C(x)) \leq \delta$ for some unique $C(x)$, there is a set $I$ of size $\geq \rho m$ such that for every $i \in I$ the decoder, on input $i$, correctly outputs $C(x)_i$ with probability $\geq \alpha$. \end{enumerate} We call an RLCC that satisfies all three conditions a strong RLCC, and one that satisfies only the first two conditions a weak RLCC. Furthermore, if the $q$ queries are made in advance, before seeing entries of the codeword, then the decoder is said to be {\em non-adaptive}; otherwise, it is called {\em adaptive}. \end{definition} As with \cref{def:strongRLDC}, the probabilities in the above definition are taken over the randomness of the decoding algorithm, and $\mathrm{dist}\hspace{-1pt}$ is a normalized metric. When $\mathrm{dist}\hspace{-1pt}$ is the normalized Hamming distance, we say that the code is a Hamming RLCC; similarly, when $\mathrm{dist}\hspace{-1pt}$ is the normalized edit distance, we say that the code is an Insdel RLCC. To obtain our strong Insdel RLCC, we replace the weak Hamming RLDC of \cref{alg:encoder} with the weak Hamming RLCC of \cref{lem:rlcc} due to \cite{AsadiS21}. \iffalse \begin{remark} \cite{AsadiS21} actually gives the construction of a strong RLCC, i.e. codes that satisfies a third property which requires that the decoder must successfully decode (does not output $\perp$) a constant fraction of the coordinates with high probability. \end{remark} \begin{theorem} \label{thm:main-rilcc-construction} For any $\gamma>0$ and $\ensuremath{\varepsilon} \in (0,1/2)$, there exist constants $\delta \in (0,1/2)$ and $q=q(\delta,\ensuremath{\varepsilon},\gamma)$, and strong $(q,\delta,1/2 + \ensuremath{\varepsilon}, 1/2 )$-insdel RLCCs $C\colon \set{0,1}^n\rightarrow \set{0,1}^m$ with $m=O(n^{1+\gamma})$. \end{theorem} \fi We now prove the following corollary. \strongirlcc* \begin{proof} The construction is based on a slight modification of the code presented in Section~\ref{sec:wrLDC}. For the encoding process, the outer code $C_{\text{out}} \colon \ensuremath{{\{0,1\}}}\xspace^n \rightarrow \ensuremath{{\{0,1\}}}\xspace^{k}$ is replaced by a non-adaptive weak $(q_{\text{out}},\delta_{\text{out}},1/2+\eps_{\text{out}})$-relaxed Hamming LCC. The existence of such a code is guaranteed by Lemma~\ref{lem:rlcc}. We pick $\eps_{\text{out}} = 2\ensuremath{\varepsilon}$ and $q_{\text{out}}(\delta, \ensuremath{\varepsilon}, \gamma)$ to be a sufficiently large constant such that $k = {n^{1+O(1/q)}}$ and $k\log k = O(n^{1+\gamma}) $. The encoding algorithm is the same as \cref{alg:encoder}. To ensure that the total length of the zero buffers is exactly half of the codeword length, we append another $t$ zeros at the end of the codeword output by \cref{alg:encoder}. By the analysis in \cref{sec:wlrdc-enc}, we know that for any message $x\in\ensuremath{{\{0,1\}}}\xspace^n$, the codeword $C(x)$ has length $\Theta(k\log k) = O(n^{1+\gamma})$. The decoding process is similar to \cref{alg:decoder}. Given an input index $i$, we do the following: \begin{enumerate} \item If the index $i$ belongs to a zero buffer, the decoder aborts and outputs $0$. \item The decoder checks if the oracle $w$ has length $m$. If not, it aborts and outputs $\bot$. \item Let $\tilde{i}$ be the index of the non-zero block that contains the $i$-th symbol. The decoder invokes $C_{\text{out}}.\ensuremath{\mathsf{Dec}}\xspace(\tilde{i})$.
\item For each query $j \in [k]$ received from $C_{\text{out}}.\ensuremath{\mathsf{Dec}}\xspace(\tilde{i})$ \begin{enumerate} \item The decoder computes the codewords $C_{\text{in}}(j, 0)$ and $C_{\text{in}}(j,1)$, and additionally computes the first index $i_0 \in [t]$ such that $C_{\text{in}}(j,0)[i_0] \neq C_{\text{in}}(j,1)[i_0]$. \item The decoder samples $i_1,\dotsc, i_d \in [t]$ uniformly and independently at random. \item The decoder sets a bit $\ensuremath{\widetilde{y}}\xspace_j$ as follows. If $w[ 2(j-1)t + i_\ell ] = C_{\text{in}}(j,0)[i_\ell]$ for all $\ell \in \{0,1,\dotsc, d\}$, then set $\ensuremath{\widetilde{y}}\xspace_j = 0$; else if $w[ 2(j-1)t + i_\ell ] = C_{\text{in}}(j,1)[i_\ell]$ for all $\ell \in \{0,1,\dotsc, d\}$, then set $\ensuremath{\widetilde{y}}\xspace_j = 1$; if neither case occurs, abort and output $\bot$. \item Answer $C_{\text{out}}.\ensuremath{\mathsf{Dec}}\xspace(\tilde{i})$ with the bit $\ensuremath{\widetilde{y}}\xspace_j$ and await the next query. \end{enumerate} \item Let $\tilde{y}_{\tilde{i}} = C_{\text{out}}.\ensuremath{\mathsf{Dec}}\xspace(\tilde{i})$. Compute the $\tilde{i}$-th non-zero block $C_{\text{in}}(\tilde{i}, \tilde{y}_{\tilde{i}})$. Output the bit in this block that is in the $i$-th position of the whole codeword. \end{enumerate} For perfect completeness, we note that our decoding algorithm always outputs $C(x)_i$ if there is no corruption, which is guaranteed by the perfect completeness of the weak Hamming RLCC. If the input index $i$ is in one of the zero buffers, the decoder always outputs $0$ correctly. For those indices, the success rate is $1$ and the relaxed decoding property holds trivially. For $i$ not in a zero buffer, let $\tilde{i}$ be the index of the non-zero block that contains the $i$-th symbol. By the relaxed decoding property of the weak Hamming RLCC, given access to $\mathbf{Y}$ (defined in the proof of Theorem~\ref{thm:main-rildc-construction}), the relaxed decoder $\ensuremath{\mathsf{Dec}}\xspace_{out}$ of $C_{\text{out}}$ will output $C_{\text{out}}(x)_{\tilde{i}}$ or $\bot$ with probability at least $1/2+\eps_{\text{out}}$. If $\ensuremath{\mathsf{Dec}}\xspace_{out}$ outputs $C_{\text{out}}(x)_{\tilde{i}}$ correctly, our decoder also outputs $C(x)_i$ correctly. From the analysis in the proof of Theorem~\ref{thm:main-rildc-construction}, we know the decoder will output $C(x)_i$ or $\bot$ with probability at least $1/2 + \ensuremath{\varepsilon}$. Thus, the relaxed decoding property holds. Finally, we let $I_y \subseteq [m]$ be the set of all indices that lie in a zero buffer. Since the total length of the zero buffers is exactly half of the codeword length, $\abs{I_y} = m/2$. We always output 0 correctly for $i\in I_y$. Thus, our code achieves a success rate $\rho \ge 1/2$. \end{proof} \section{Weak Insdel RLDC and Strong Insdel RLCC Constructions}\label{sec:wrLDC} We prove \cref{thm:main-rildc-construction,thm:main-rilcc-construction} in this section. As a reminder, a weak $(q, \delta, \alpha)$-Insdel RLDC satisfies the first two conditions of \cref{def:strongRLDC}. Our constructions will have constant locality $q$, constant error tolerance $\delta$, and codeword length $m = O(n^{1+\gamma})$ for any $\gamma \in (0,1)$. \paragraph{Notation.} We introduce some additional notation used throughout this section. We say that a set of integers $I \subset \mathbb{Z}$ of size $n$ is an \emph{interval} if $I = \{a, a+1, \dotsc, a+(n-1)\}$ for some integer $a$.
For two strings $x, y \in \ensuremath{{\{0,1\}}}\xspace^*$, we let $\ensuremath{\mathrm{LCS}}\xspace(x,y) \in \ensuremath{{\{0,1\}}}\xspace^*$ denote the \emph{longest common subsequence} between $x$ and $y$. We also let $\ensuremath{\mathbf{LCS}}\xspace(x,y)$ denote a matching between $x$ and $y$ given by $\ensuremath{\mathrm{LCS}}\xspace(x,y)$. Formally, $\ensuremath{\mathbf{LCS}}\xspace(x,y)$ denotes a sequence of tuples $(i_1, j_1), \dotsc, (i_k, j_k)$, where $k = |\ensuremath{\mathrm{LCS}}\xspace(x,y)|$, satisfying $i_1 < i_2 < \dotsc < i_k \leq |x|$, $j_1 < j_2 < \dotsc < j_k \leq |y|$, and $x_{i_\ell} = y_{j_\ell}$ for all $\ell \in [k]$. Note that $\ensuremath{\mathrm{LCS}}\xspace(x,y)$ need not be unique, but we can always fix one. It is well known that $\ED(x,y) = |x| + |y| - 2 \cdot |\ensuremath{\mathrm{LCS}}\xspace(x,y)|$; for instance, $x = 0110$ and $y = 1010$ have a longest common subsequence of length 3 (e.g., $010$), so $\ED(x,y) = 4 + 4 - 6 = 2$. \subsection{Encoding Algorithm}\label{sec:wlrdc-enc} The main ingredients of our encoding algorithm consist of an outer code $C_{\text{out}} \colon \set{0,1}^n \rightarrow \set{0,1}^{k}$ and an inner code $C_{\text{in}}$, which we shall view as a mapping $C_{\text{in}} \colon [k] \times \set{0,1} \rightarrow \set{0,1}^{t}$. The encoding of $x \in \set{0,1}^n$ is given by \begin{align*} C(x) \coloneqq C_{\text{in}}(1, y_1) \circ 0^t \circ C_{\text{in}}(2, y_2) \circ 0^t \circ \dots \circ 0^t \circ C_{\text{in}}(k, y_k)\; , \end{align*} where each $y_j \in \set{0,1}$ is obtained by writing $C_{\text{out}}(x)=y_1 \circ y_2 \circ \cdots \circ y_{k}$. Assuming the rate of $C_{\text{in}}$ is a constant $r_{\text{in}}=\lceil 1+\log_2{k} \rceil /t > 0$, the length of this encoding $C(x)$ is $m\coloneqq (2k-1) t = \Theta(k\log(k))$. We note that the concatenation introduces buffers of 0s (i.e., the string $0^t$), which slightly deviates from the standard code concatenation for Hamming errors. We present our formal encoding algorithm in \cref{alg:encoder}. Next we instantiate the concatenation framework by picking specific outer and inner codes. \paragraph{The outer code.} We take the outer code $C_{\text{out}} \colon \ensuremath{{\{0,1\}}}\xspace^n \rightarrow \ensuremath{{\{0,1\}}}\xspace^{k}$ to be a non-adaptive weak $(q_{\text{out}},\delta_{\text{out}},1/2 + \eps_{\text{out}})$-relaxed Hamming LDC given by \cref{lem:weak-ham-rldc} with $\gamma_{\text{out}} < \gamma$ and $\eps_{\text{out}} = 2\ensuremath{\varepsilon}$, where $\gamma$ and $\ensuremath{\varepsilon}$ are given in \cref{thm:main-rildc-construction}. In particular, we have that $k = O(n^{1+\gamma_{\text{out}}})$ and $k \log(k) = O(n^{1+\gamma})$. \paragraph{The inner code.} We take the inner code $C_{\text{in}} \colon [k] \times \ensuremath{{\{0,1\}}}\xspace \rightarrow \ensuremath{{\{0,1\}}}\xspace^t$ to be an insertion-deletion code (\text{i.e.}\xspace, it is a non-local code) due to Schulman and Zuckerman \cite{SchZuc99}, given by \cref{lem:sz-code}. In particular, $C_{\text{in}}$ has the following properties: \begin{enumerate} \item $C_{\text{in}}$ has constant rate $r_{\text{in}} = 1/\beta > 0$, where $\beta$ is given in \cref{lem:sz-code}. \item $C_{\text{in}}$ has constant minimum (normalized) edit distance $\delta_{\text{in}} \in (0,1/2)$. \item For any interval $I \subseteq [t]$ with $|I|\ge 2$ and any $(j,y) \in [k]\times\set{0,1}$, it holds that $\textsf{wt}(C_{\text{in}}(j,y)_{I}) \ge \floor{|I|/2}$, where $\textsf{wt}(\cdot)$ denotes the Hamming weight.
\end{enumerate} \input{alg-encoder} With our choices of outer and inner codes, we prove the following key lemma about the resulting concatenated code $C$ (\text{i.e.}\xspace, \cref{alg:encoder}). \begin{lemma}\label{lem:self-nonsimilarity} For any interval $I \subseteq [m]$ of length at most $(2-\delta_{\text{in}})t$, and any index $j \in [k]$, we have $\ED\tuple{C[I], C_{\text{in}}(j, 1-y_j)} \ge \delta_{\text{in}} t/2$. \end{lemma} \begin{remark} Note that for any interval $I$, we can lower bound the edit distance between $C[I]$ and $C_{\text{in}}(j, 1-y_j)$ by $\abs{|C[I]| - |C_{\text{in}}(j,1-y_j)|} = \abs{|I| - t}$. For any $I$ of length greater than $(2-\delta_{\text{in}}) t$, this implies a lower bound of $(1-\delta_{\text{in}}) t \geq \delta_{\text{in}} t / 2$ for all $\delta_{\text{in}} \leq 1/2$. \end{remark} \begin{proof} Consider an arbitrary LCS matching $M \coloneqq \ensuremath{\mathbf{LCS}}\xspace(C[I], C_{\text{in}}\tuple{j, 1-y_j}) \subseteq I \times [t]$ between $C[I]$ and $C_{\text{in}}\tuple{j,1-y_j}$. We prove that $|M| \le (1-\delta_{\text{in}}/2)t$, from which the lemma follows. We introduce some notation first. Given our LCS matching $M$ and an interval $J \subseteq I$, we let $M(J) \coloneqq \{ i \in J \colon \exists j \in [t], (i,j) \in M\}$ denote the set of indices in $J$ that appear in the matching $M$. Let $N(J) \coloneqq \set{j \in [t] \colon \exists i \in M(J), (i,j) \in M} \subseteq [t]$ denote the set of indices in $[t]$ which are matched with $M(J)$. Finally, we denote by $\textsf{Span}(J)$ the smallest interval that covers the set $N(J)$. We extend all of these definitions to unions of intervals as follows:\footnote{$\textsf{Span}(J_1\cup J_2)$ may not be well-defined if $J_1\cup J_2$ is itself an interval. However, in this paper we will only use this notation for disjoint and non-adjacent $J_1$ and $J_2$, in which case there is a unique way to partition $J_1 \cup J_2$ into disjoint intervals.} \begin{align*} M(J_1 \cup J_2) = M(J_1) \cup M(J_2), \ N(J_1 \cup J_2) = N(J_1) \cup N(J_2), \ \textsf{Span}(J_1 \cup J_2) = \textsf{Span}(J_1) \cup \textsf{Span}(J_2). \end{align*} See \Cref{fig:wrldc-notation} for a pictorial overview of $M(J)$, $N(J)$, and $\textsf{Span}(J)$. Since $M$ is a monotone matching, we have that $J_1 \cap J_2 = \varnothing$ implies $\textsf{Span}(J_1) \cap \textsf{Span}(J_2) = \varnothing$. We also have $|M(J)|=|N(J)|\le|\textsf{Span}(J)|$. \input{fig-wrldc-notation} Now we turn back to the proof. Since $|I|< 2t$, the interval $I$ spans at most 2 buffers and/or codewords. It is convenient to partition $I$ into $I_b \cup I_c$, where $I_b$ and $I_c$ are unions of at most 2 intervals which correspond to buffers and codewords, respectively. We first show that $\abs{\textsf{Span}(I_b)} \ge 2\abs{M(I_b)}$. Intuitively, this is because the density of ``1'' is at least $1/2$ in any interval of an inner codeword, so every matched ``0'' in a buffer has to be accompanied by an insertion of ``1''. Formally, due to Property 3 of $C_{\text{in}}$, we have \begin{align*} \abs{\textsf{Span}(I_b)} \le 2\textsf{wt}\bigg(C_{\text{in}}(j, 1-y_j)[\textsf{Span}(I_b)]\bigg) \le 2\abs{\textsf{Span}(I_b) \setminus N(I_b)} = 2\tuple{\abs{\textsf{Span}(I_b)} - \abs{M(I_b)}}, \end{align*} since any index $j \in N(I_b)$ is matched to an index in a buffer, which is necessarily a ``0''. We finish the proof in two cases. \textit{Case 1:} $I_c$ is the union of two intervals.
In this case, we can deduce that $I$ completely contains a buffer, \text{i.e.}\xspace, $|I_b|=t$, and that $|I_c|\le |I|-|I_b|\le (1-\delta_{\text{in}})t$. Therefore, we have \begin{align*} 2|M| \le 2|M(I_c)| + 2|M(I_b)| \le |I_c| + \abs{\textsf{Span}(I_c)} + \abs{\textsf{Span}(I_b)} \le (1-\delta_{\text{in}})t + t = 2(1-\delta_{\text{in}}/2)t. \end{align*} The last inequality is due to $\abs{\textsf{Span}(I_b)} + \abs{\textsf{Span}(I_c)} \le t$, since $\textsf{Span}(I_b), \textsf{Span}(I_c) \subseteq [t]$ are disjoint. \textit{Case 2:} $I_c$ is an interval. In this case, note that $M \cap (I_c \times [t])$ corresponds to a common subsequence between $C_{\text{in}}(j,1-y_j)$ and some other codeword $C_{\text{in}}(j', y_{j'})$. The distance property of $C_{\text{in}}$ implies $|M(I_c)| \le (1-\delta_{\text{in}})t$. Similarly, we can upper bound $|M|$ by \begin{align*} 2|M| = 2|M(I_c)| + 2|M(I_b)| \le (1-\delta_{\text{in}})t + \abs{\textsf{Span}(I_c)} + \abs{\textsf{Span}(I_b)} \le (1-\delta_{\text{in}})t + t = 2(1-\delta_{\text{in}}/2)t. \end{align*} To conclude, we have $|M| \le (1-\delta_{\text{in}}/2)t$ in both cases. It follows that $\ED\tuple{C[I], C_{\text{in}}(j,1-y_j)}\ge \delta_{\text{in}} t/2$. \end{proof} \subsection{The Decoding Algorithm}\label{sec:wrldc-dec} Our goal is to construct a relaxed decoder $\ensuremath{\mathsf{Dec}}\xspace$ that, given as input an index $i \in [n]$ and oracle access to some binary string $w$ which is $\delta$-close in edit distance to some codeword $C(x)$, outputs either the bit $x_i$ or $\bot$ with probability at least $1/2 + \ensuremath{\varepsilon}$. We present our formal decoding algorithm in \cref{alg:decoder}. In our construction, the decoding algorithm invokes the relaxed decoder for the outer code $C_{\text{out}}$ while providing it with access to a simulated oracle $\widetilde{y} \in \set{0,1}^k$, using the oracle $w$ which is the corrupted codeword. At a high level, the decoder operates as follows. \begin{enumerate} \item The decoder ensures that the oracle $w$ has length $m$; otherwise it outputs $\bot$. \item The decoder invokes $C_{\text{out}}.\ensuremath{\mathsf{Dec}}\xspace(i)$. \item For each query $j \in [k]$ received from $C_{\text{out}}.\ensuremath{\mathsf{Dec}}\xspace(i)$ \begin{enumerate} \item The decoder computes the codewords $C_{\text{in}}(j, 0)$ and $C_{\text{in}}(j,1)$, and additionally computes the first index $i_0 \in [t]$ such that $C_{\text{in}}(j,0)[i_0] \neq C_{\text{in}}(j,1)[i_0]$. \item The decoder samples $i_1,\dotsc, i_d \in [t]$ uniformly and independently at random. \item The decoder sets a bit $\ensuremath{\widetilde{y}}\xspace_j$ as follows. If $w[ 2(j-1)t + i_\ell ] = C_{\text{in}}(j,0)[i_\ell]$ for all $\ell \in \{0,1,\dotsc, d\}$, then set $\ensuremath{\widetilde{y}}\xspace_j = 0$; else if $w[ 2(j-1)t+ i_\ell ] = C_{\text{in}}(j,1)[i_\ell]$ for all $\ell \in \{0,1,\dotsc, d\}$, then set $\ensuremath{\widetilde{y}}\xspace_j = 1$; if neither case occurs, abort and output $\bot$. \item Answer $C_{\text{out}}.\ensuremath{\mathsf{Dec}}\xspace(i)$ with the bit $\ensuremath{\widetilde{y}}\xspace_j$ and await the next query. \end{enumerate} \item Output the symbol $\ensuremath{\widetilde{x}}\xspace = C_{\text{out}}.\ensuremath{\mathsf{Dec}}\xspace(i)$. \end{enumerate} \begin{remark} We choose to check that the received word has length $m$ to simplify the analysis. However, this check is not necessary, by the following observations.
First, if $|w| < m$ and the decoder ever queries a position $j > |w|$, then we assume the oracle returns $\bot$, at which point our decoder can abort and output $\bot$. Second, if $|w| > m$, then our decoder will simply ignore any bits beyond $w_m$. \end{remark} \input{alg-decoder} \paragraph{Analysis of the Decoder.} Clearly, the query complexity of \cref{alg:decoder} is $(d+1) \cdot q_{\text{out}} + 2$.\footnote{If $m$ is hard-coded in the decoder and the length $m'$ of the received word is additionally given as input, then $q = (d+1)\cdot q_{\text{out}}$.} We take $\delta\coloneqq \delta_{\text{in}}\delta_{\text{out}}/128$. Fix a message $x \in \ensuremath{{\{0,1\}}}\xspace^n$ and an oracle string $w$ such that $\ED(C(x), w) \leq \delta \cdot 2m$. Moreover, let $y = C_{\text{out}}(x) \in \ensuremath{{\{0,1\}}}\xspace^k$. For $j \in [k]$, we denote by $I_j$ the interval that corresponds to $C_{\text{in}}(j, y_j)$ in $C(x)$. Formally, \begin{align*} I_j \coloneqq \set{2(j-1)t+1, \dotsc, 2(j-1)t + t}. \end{align*} Let $\ensuremath{\mathbf{LCS}}\xspace \coloneqq \ensuremath{\mathbf{LCS}}\xspace(C(x), w)$. Note that $|\ensuremath{\mathbf{LCS}}\xspace| \geq (1-\delta)m$ since $\ED(C(x), w) \leq \delta \cdot 2m$. \begin{definition} We say $j\in [k]$ is \emph{dangerous} if $\ED\tuple{w[I_j], C_{\text{in}}(j, 1-y_j)} \le \delta_{\text{in}} t/4$. \end{definition} We first show that if a block $j$ is not dangerous, then $\widetilde{y}_j = 1-y_j$ happens with small probability. \begin{proposition}\label{prop:dangerous-bad} If $j$ is not dangerous, then $\Pr[\widetilde{y}_j = 1-y_j] \leq (1-\delta_{\text{in}}/8)^d$. \end{proposition} \begin{proof} It suffices to show that for a uniformly random $i \in [t]$, we have \begin{align*} \Pr\left[w[2(j-1)t+i] = C_{\text{in}}(j,1-y_j)[i]\right] \le 1-\delta_{\text{in}}/8. \end{align*} Since $j$ is not dangerous, we have $\ED(w[I_j], C_{\text{in}}(j,1-y_j))>\delta_{\text{in}} t/4$. Therefore, using the fact that $\ED(u,v) \le 2\,\HAM\xspace(u,v)$ for equal-length strings, \begin{align*} \Pr[w[{2(j-1)t+i}] \neq C_{\text{in}}(j,1-y_j)[i]] &= \frac{1}{t} \cdot \HAM\xspace\tuple{w[{I_j}], C_{\text{in}}(j,1-y_j)}\\ &\ge \frac{1}{2t} \cdot \ED(w[I_j], C_{\text{in}}(j,1-y_j)) > \frac{\delta_{\text{in}}}{8}.\qedhere \end{align*} \end{proof} Now the key step is to upper bound the number of dangerous blocks. \begin{lemma} \label{lem:dangerous-ub} The total number of dangerous blocks is at most $\delta_{\text{out}} k/2$. \end{lemma} \begin{proof} Let $\ensuremath{\mathbf{LCS}}\xspace \coloneqq \ensuremath{\mathbf{LCS}}\xspace(C(x), w)$ denote an arbitrary LCS matching between $C(x)$ and $w$. Let $U_C, U_w \subseteq [m]$ be the sets of unmatched bits in $C(x)$ and $w$, respectively; \text{i.e.}\xspace, for every $i \in U_C$ (resp., $j \in U_w$), we have that $(i,\ell) \not\in \ensuremath{\mathbf{LCS}}\xspace$ (resp., $(\ell, j) \not\in \ensuremath{\mathbf{LCS}}\xspace$) for all $\ell \in [m]$. For each $j \in [k]$, define the following set of indices which are matched to $I_j$: \begin{align*} N(I_j) \coloneqq \set{i \in [m] \colon \exists i' \in I_j, (i,i') \in \ensuremath{\mathbf{LCS}}\xspace}, \end{align*} and let $\textsf{Span}_j$ be the smallest interval covering $N(I_j)$. Since $\ensuremath{\mathbf{LCS}}\xspace$ is a monotone matching, the intervals $\textsf{Span}_1, \dots, \textsf{Span}_k$ are disjoint. We can thus lower bound the edit distance between $C(x)$ and $w$ by \begin{align*} \ED\tuple{C(x), w} = |U_C| + |U_w| \ge \sum_{j=1}^{k}\abs{U_C \cap \textsf{Span}_j} + \sum_{j=1}^{k}\abs{U_w \cap I_j}.
\end{align*} \begin{claim} \label{clm:dangerous-imply-error} If $j$ is dangerous, then either $\abs{U_C \cap \textsf{Span}_j} \ge \delta_{\text{in}} t/8$ or $\abs{U_w \cap I_j} \ge \delta_{\text{in}} t/16$. \end{claim} \begin{proof}[Proof of Claim~\ref{clm:dangerous-imply-error}] Assume for the sake of contradiction that $\abs{U_C \cap \textsf{Span}_j} < \delta_{\text{in}} t/8$ and $|U_w \cap I_j| < \delta_{\text{in}} t/16$. We first note that \begin{align*} |\textsf{Span}_j| = |U_C \cap \textsf{Span}_j| + |N(I_j)| < \delta_{\text{in}} t/8 + t. \end{align*} We also note that $\ensuremath{\mathbf{LCS}}\xspace \cap (\textsf{Span}_j \times I_j)$ corresponds to a common subsequence between $C(x)[\textsf{Span}_j]$ and $w[I_j]$, which has length \begin{align*} \abs{\ensuremath{\mathbf{LCS}}\xspace \cap (\textsf{Span}_j \times I_j)} = \abs{I_j \setminus U_w} > t-\delta_{\text{in}} t/16. \end{align*} In other words, we have \begin{align*} \ED\tuple{w[{I_j}], C(x)[{\textsf{Span}_j}]} &< |I_j| + |\textsf{Span}_j| - 2(t-\delta_{\text{in}} t/16) \\ &< t + (1+\delta_{\text{in}}/8)t - 2t + \delta_{\text{in}} t/8 = \delta_{\text{in}} t/4. \end{align*} Since $j$ is dangerous, we also have $\ED\tuple{w[I_j], C_{\text{in}}(j, 1-y_j)}\le \delta_{\text{in}} t/4$. The triangle inequality thus implies \begin{align*} \ED\tuple{C(x)[{\textsf{Span}_j}], C_{\text{in}}(j, 1-y_j)} &\le \ED\tuple{w[{I_j}], C(x)[{\textsf{Span}_j}]} + \ED\tuple{w[{I_j}], C_{\text{in}}(j, 1-y_j)}\\ &< \delta_{\text{in}} t/4 + \delta_{\text{in}} t/4 = \delta_{\text{in}} t/2. \end{align*} However, this contradicts \cref{lem:self-nonsimilarity}. \end{proof} Denote by $D$ the set of dangerous blocks. By \cref{clm:dangerous-imply-error} we have \begin{align*} \delta \cdot 2m \ge \ED\tuple{w, C(x)} \ge \sum_{j \in D}\tuple{\abs{U_C \cap \textsf{Span}_j} + \abs{U_w \cap I_j}} \ge |D| \cdot \frac{\delta_{\text{in}} t}{16}. \end{align*} Plugging in $\delta = \delta_{\text{in}}\delta_{\text{out}}/128$ and $m=(2k-1)t \leq 2kt$, we obtain $|D| \le \delta_{\text{out}} k/2$. \end{proof} Now we are ready to prove \cref{thm:main-rildc-construction}. We recall the theorem below. \weakirldcmain* \begin{proof} Consider the concatenated code $C$ (with buffers) of \cref{alg:encoder} and the relaxed decoder $\ensuremath{\mathsf{Dec}}\xspace$ defined in \cref{alg:decoder}. We fix an arbitrary message $x \in \set{0,1}^n$ and a string $w$ such that $\ED\tuple{C(x), w}\le \delta \cdot 2m$, where $\delta \coloneqq \delta_{\text{in}}\delta_{\text{out}}/128$. We also denote $y \coloneqq C_{\text{out}}(x) \in \set{0,1}^k$. For the remainder of the proof, unless otherwise stated, all lines referenced are from our decoder description in \cref{alg:decoder}. Perfect completeness follows directly via the index $i_0$ computed in \cref{line:perfect-completeness}, the computation of $\ensuremath{\widetilde{y}}\xspace_j$ in \cref{line:tildey}, and the perfect completeness of the outer code $C_{\text{out}}$. We now focus on proving relaxed decoding. Let $D \subseteq [k]$ be the subset containing the indices of all dangerous blocks in $w$. We have $|D|\le \delta_{\text{out}} k/2$ by \cref{lem:dangerous-ub}. Recall that in \cref{alg:decoder}, $\ensuremath{\mathsf{Dec}}\xspace$ invokes the relaxed decoder $\ensuremath{\mathsf{Dec}}\xspace_{out}$ of $C_{\text{out}}$ while providing it with oracle access to some string $\widetilde{y} \in \set{0,1}^k$.
Denote $\mathbf{Y}=\tuple{Y_1,Y_2,\dots,Y_k}$, where each $Y_j \in \set{0,1,\perp}$ is the result that would have been returned by the decoding algorithm on query $j$ in \cref{line:bot-check} of \cref{alg:decoder} (even if $j$ is never actually queried). Since the decoder uses independent samples in \cref{line:index-sampling} of \cref{alg:decoder} for each query $j$, the $Y_j$'s are independent random variables. Due to \cref{prop:dangerous-bad}, for every $j \in [k]\setminus D$ it holds that \begin{align*} \Pr\left[ Y_j = 1-y_j \right] \le (1-\delta_{\text{in}}/8)^d < e^{-d\delta_{\text{in}}/8} = \delta_{\text{out}}/4, \end{align*} where the last equality follows from choosing $d = 8\ln(4/\delta_{\text{out}})/\delta_{\text{in}}$. Denote $d(\mathbf{Y}, y)=\sum_{j \in [k]}\mathbf{1}\set{Y_j=1-y_j}$. Since the $Y_j$'s are independent, an application of the Chernoff bound shows that \begin{align*} \Pr\left[ d(\mathbf{Y},y) > \delta_{\text{out}} k \right] \le \Pr\left[ \sum_{j \in [k]\setminus D}\mathbf{1}\set{Y_j=1-y_j}\ge \frac{\delta_{\text{out}} k}{2} \right] \le \exp\tuple{-\delta_{\text{out}}^2(k-|D|)/8} \le \ensuremath{\varepsilon} \end{align*} for large enough $n$ (and thus $k$). Given $\mathbf{Y} \in \set{0,1,\perp}^k$, define a string $c(\mathbf{Y}) \in \set{0,1}^k$ as follows: \begin{align*} c(\mathbf{Y})_j \coloneqq \begin{cases} y_j & \textup{if $Y_j \in \set{y_j, \perp}$} \\ 1-y_j & \textup{if $Y_j = 1-y_j$} \end{cases}. \end{align*} According to \cref{line:bot-check} of \cref{alg:decoder}, $\ensuremath{\mathsf{Dec}}\xspace$ aborts with output $\perp$ whenever $Y_j=\perp$ for some query $j$. On the other hand, if $Y_j \neq \perp$ for all queries $j$, then $\ensuremath{\mathsf{Dec}}\xspace$ outputs the result returned by $\ensuremath{\mathsf{Dec}}\xspace_{out}$ as if the latter had oracle access to $c(\mathbf{Y})$. In either case, we have \begin{align*} \Pr\left[ \ensuremath{\mathsf{Dec}}\xspace(i, w) \in \set{x_i, \perp} \mid \mathbf{Y} \right] \ge \Pr\left[ \ensuremath{\mathsf{Dec}}\xspace_{out}(i, c(\mathbf{Y})) \in \set{x_i, \perp} \right]. \end{align*} To conclude, we have \begin{align*} \Pr\left[ \ensuremath{\mathsf{Dec}}\xspace(i, w) \in \set{x_i, \perp} \right] &= \E_{\mathbf{Y}}\left[ \Pr\left[ \ensuremath{\mathsf{Dec}}\xspace(i, w) \in \set{x_i, \perp} \mid \mathbf{Y} \right] \right] \\ &\ge \E_{\mathbf{Y}\colon d(\mathbf{Y}, y)\le \delta_{\text{out}} k}\left[ \Pr\left[ \ensuremath{\mathsf{Dec}}\xspace(i, w) \in \set{x_i, \perp} \mid \mathbf{Y} \right] \right] \cdot \Pr\left[ d(\mathbf{Y}, y) \le \delta_{\text{out}} k \right] \\ &\ge \E_{\mathbf{Y}\colon d(\mathbf{Y}, y)\le \delta_{\text{out}} k}\left[ \Pr\left[ \ensuremath{\mathsf{Dec}}\xspace_{out}(i, c(\mathbf{Y})) \in \set{x_i, \perp} \right] \right] - \ensuremath{\varepsilon}. \end{align*} By the definition of $d(\mathbf{Y}, y)$, it holds that $\HAM\xspace(c(\mathbf{Y}), y)\le \delta_{\text{out}} k$ whenever $d(\mathbf{Y}, y) \leq \delta_{\text{out}} k$. Since $C_{\text{out}}$ is a $(q_{\text{out}}, \delta_{\text{out}}, 1/2+\eps_{\text{out}})$-relaxed LDC, it holds that \begin{align*} \Pr\left[ \ensuremath{\mathsf{Dec}}\xspace_{out}(i, c(\mathbf{Y})) \in \set{x_i, \perp} \right] \ge \frac{1}{2} + \eps_{\text{out}}. \end{align*} By our choice of $\eps_{\text{out}} = 2\ensuremath{\varepsilon}$, we have \begin{align*} \Pr\left[ \ensuremath{\mathsf{Dec}}\xspace(i, w) \in \set{x_i, \perp} \right] \ge \frac{1}{2} + 2\ensuremath{\varepsilon} - \ensuremath{\varepsilon} = \frac{1}{2} + \ensuremath{\varepsilon}.
\end{align*} The query complexity is $q(\delta, \ensuremath{\varepsilon}, \gamma) \coloneqq (d+1)\cdot q_{\text{out}} + 2 = \Theta\tuple{q_{\text{out}} \cdot \log(1/\delta_{\text{out}})/\delta_{\text{in}}} = \Theta(1)$. \end{proof} \subsection{Lower bounds for adaptive 2-Query Hamming RLDCs} \label{subsec:adaptive-2qRLDC} Now we turn to the actual proof, which works even for possibly adaptive decoders. Let $C$ be a weak $(2,\delta,1/2+\ensuremath{\varepsilon})$-RLDC with perfect completeness. We fix a relaxed decoder $\ensuremath{\mathsf{Dec}}\xspace$ for $C$. Without loss of generality, we assume $\ensuremath{\mathsf{Dec}}\xspace$ works as follows: on input $i \in [n]$, $\ensuremath{\mathsf{Dec}}\xspace(i,\cdot)$ picks the first query $j \in [m]$ according to a distribution $\+D_i$. Let $b \in \set{0,1}$ be the answer to this query. Then $\ensuremath{\mathsf{Dec}}\xspace$ picks the second query $k \in [m]$ according to a distribution $\+D_{i;j,b}$, and obtains an answer $b' \in \set{0,1}$. Finally, $\ensuremath{\mathsf{Dec}}\xspace$ outputs a random variable $X_{i;j,b,k,b'}\in \set{0,1,\perp}$. We partition the support of $\+D_i$ into the following two sets: \begin{align*} F_i^{0} &\coloneqq \set{j \in \supp(\+D_i) \colon \forall b,b' \in \set{0,1}, k \in \supp(\+D_{i;j,b}), \Pr[X_{i;j,b,k,b'} = \perp] = 0}, \\ F_i^{>0} &\coloneqq \set{j \in \supp(\+D_i) \colon \exists b,b' \in \set{0,1}, k \in \supp(\+D_{i;j,b}), \Pr[X_{i;j,b,k,b'}=\perp] > 0}. \end{align*} We will still apply the restriction guaranteed by \Cref{lem:random-restriction} to $C$. The sets $S_i$, $T_j$, $W$, $S_{i,-}$, $S_{i,+}$ (and their counterparts for $C_{J|\rho}$) are defined in exactly the same way. The following claim is adapted from \Cref{clm:bot-fix}. \begin{claim} \label{clm:adaptive-bot-fix} $(\supp(\+D_i) \setminus S_i) \subseteq F_i^{0}$. \end{claim} \begin{proof} Let $j \in \supp(\+D_i) \setminus S_i$; we will show $j \in F_i^{0}$. By the definition of $S_i$, $j \notin S_i$ means that there are partial assignments $\sigma_{00}, \sigma_{01}, \sigma_{10}, \sigma_{11} \in \set{0,1}^{n-1}$ such that \begin{align*} C_j\tuple{\*x_{-i} = \sigma_{00}, x_i = 0} = 0, \quad C_j\tuple{\*x_{-i} = \sigma_{01}, x_i = 1} = 0, \\ C_j\tuple{\*x_{-i} = \sigma_{10}, x_i = 0} = 1, \quad C_j\tuple{\*x_{-i} = \sigma_{11}, x_i = 1} = 1, \end{align*} where $\*x_{-i}$ is defined as $\tuple{x_t \colon t \in [n]\setminus\set{i}}$. Let $C_{00}, C_{01}, C_{10}, C_{11}$ be the encodings of the corresponding assignments mentioned above. Consider an arbitrary query $k \in \supp(\+D_{i;j,0})$, and let $b_1', b_2'$ be the $k$-th bits of $C_{00}$ and $C_{01}$, respectively. We note that $X_{i;j,0,k,b_1'}$ is the output of $\ensuremath{\mathsf{Dec}}\xspace(i,C_{00})$ conditioned on the queries $j, k$, and $X_{i;j,0,k,b_2'}$ is the output of $\ensuremath{\mathsf{Dec}}\xspace(i,C_{01})$ conditioned on the queries $j, k$. Due to the perfect completeness of $\ensuremath{\mathsf{Dec}}\xspace$, we have \begin{align*} \Pr[X_{i;j,0,k,b_1'}=0] = 1, \quad \Pr[X_{i;j,0,k,b_2'}=1]=1. \end{align*} Therefore, it must be the case that $b_1' \neq b_2'$, which implies that $\Pr[X_{i;j,0,k,b'}=\perp]=0$ for any $b' \in \set{0,1}$. An identical argument shows that $\Pr[X_{i;j,1,k,b'}=\perp]=0$ for any $k \in \supp(\+D_{i;j,1})$ and $b' \in \set{0,1}$. Thus we have shown $j \in F_i^{0}$. \end{proof} We remark that the above claim also implies $F_i^{>0} \subseteq S_i$, since $\supp(\+D_i)$ is the disjoint union of $F_i^{0}$ and $F_i^{>0}$.
In other words, conditioned on the event that the first query $j$ is not contained in $S_i$, the decoder never outputs $\perp$. The next claim is adapted from \Cref{clm:fixable-or-useless}. \begin{claim} \label{clm:adaptive-fixable-or-useless} Let $j \in \supp(\+D_i)\cap S_i$. For any $b \in \set{0,1}$, one of the following three cases occurs: \begin{enumerate} \item $\supp(\+D_{i;j,b}) \subseteq S_i$; \item For any $k \in \supp(\+D_{i;j,b}) \setminus S_i$, $\Pr[X_{i;j,b,k,0} = b] = \Pr[X_{i;j,b,k,1} = b] = 1$; \item For any $k \in \supp(\+D_{i;j,b}) \setminus S_i$, $\Pr[X_{i;j,b,k,0} = 1-b] = \Pr[X_{i;j,b,k,1} = 1-b] = 1$. \end{enumerate} \end{claim} \begin{proof} Since $j \in S_i$, we may, without loss of generality, assume that $C_j \restriction_{x_i=0}$ is a constant function. Let us further assume $C_j \restriction_{x_i=0} \;\equiv 0$; the proofs for the other cases are similar. Suppose $\supp(\+D_{i;j,0}) \not\subseteq S_i$, and let $k \in \supp(\+D_{i;j,0}) \setminus S_i$. By the definition of $S_i$, $k \notin S_i$ means that there are partial assignments $\sigma_{00}, \sigma_{01} \in \set{0,1}^{n-1}$ such that \begin{align*} C_k(x_i=0, \*x_{-i}=\sigma_{00}) = 0, \quad C_k(x_i=0, \*x_{-i}=\sigma_{01}) = 1. \end{align*} Let $C_{00}$ and $C_{01}$ be the encodings of the corresponding assignments mentioned above. We note that $X_{i;j,0,k,0}$ and $X_{i;j,0,k,1}$ are the outputs of $\ensuremath{\mathsf{Dec}}\xspace(i,C_{00})$ and $\ensuremath{\mathsf{Dec}}\xspace(i,C_{01})$, respectively, conditioned on the queries $j$, $k$. Due to the perfect completeness of $\ensuremath{\mathsf{Dec}}\xspace$, we must have \begin{align*} \Pr[X_{i;j,0,k,0} = 0] = \Pr[X_{i;j,0,k,1} = 0] = 1, \end{align*} since both $C_{00}$ and $C_{01}$ encode messages with $x_i = 0$. Now we claim that $C_j\restriction_{x_i=1} \;\equiv 1$ must hold. Otherwise, there is a partial assignment $\sigma_{10} \in \ensuremath{{\{0,1\}}}\xspace^{n-1}$ such that \begin{align*} C_j(x_i=1, \*x_{-i}=\sigma_{10}) = 0. \end{align*} Let $C_{10}$ be the encoding of this assignment, and let $b' \in \set{0,1}$ be the $k$-th bit of $C_{10}$. On the one hand, $X_{i;j,0,k,b'}$ is the output of $\ensuremath{\mathsf{Dec}}\xspace(i,C_{10})$ conditioned on the queries $j$, $k$, and we have just established \begin{align*} \Pr[X_{i;j,0,k,b'} = 0] = 1. \end{align*} On the other hand, $\ensuremath{\mathsf{Dec}}\xspace(i,C_{10})$ should output $x_i=1$ with probability 1 due to perfect completeness. This contradiction shows that $C_j\restriction_{x_i=1} \;\equiv 1$. Similarly, suppose $\supp(\+D_{i;j,1}) \not\subseteq S_i$ and let $k \in \supp(\+D_{i;j,1}) \setminus S_i$, meaning that there are partial assignments $\sigma_{10}, \sigma_{11} \in \set{0,1}^{n-1}$ such that \begin{align*} C_k(x_i=1, \*x_{-i}=\sigma_{10}) = 0, \quad C_k(x_i=1, \*x_{-i}=\sigma_{11}) = 1. \end{align*} Let $C_{10}$ and $C_{11}$ be the corresponding encodings, and note that $X_{i;j,1,k,0}$ and $X_{i;j,1,k,1}$ are the outputs of $\ensuremath{\mathsf{Dec}}\xspace(i,C_{10})$ and $\ensuremath{\mathsf{Dec}}\xspace(i,C_{11})$, respectively, conditioned on the queries $j$, $k$. The perfect completeness of $\ensuremath{\mathsf{Dec}}\xspace$ implies \begin{align*} \Pr[X_{i;j,1,k,0} = 1] = \Pr[X_{i;j,1,k,1} = 1] = 1, \end{align*} since both $C_{10}$ and $C_{11}$ encode messages with $x_i = 1$.
So far, we have shown that for any $b \in \set{0,1}$ such that $\supp(\+D_{i;j,b}) \not\subseteq S_i$, it holds that \begin{align*} \forall k \in \supp(\+D_{i;j,b})\setminus S_i, \quad \Pr[X_{i;j,b,k,0}=b] = \Pr[X_{i;j,b,k,1}=b] = 1, \end{align*} provided that $C_j \restriction_{x_i = 0} \;\equiv 0$. In the case $C_j \restriction_{x_i = 0} \;\equiv 1$, an identical argument shows that for any $b \in \set{0,1}$ such that $\supp(\+D_{i;j,b}) \not\subseteq S_i$, it holds that \begin{align*} \forall k \in \supp(\+D_{i;j,b})\setminus S_i, \quad \Pr[X_{i;j,b,k,0}=1-b] = \Pr[X_{i;j,b,k,1}=1-b] = 1. \end{align*} \end{proof} Here is another way to view \cref{clm:adaptive-fixable-or-useless}: conditioned on the event that the first query $j$ is contained in $S_i$, either the second query $k$ is also contained in $S_i$, or the output $X_{i;j,b,k,b'}$ is independent of the answer $b'$ to query $k$. In either case, the decoder's output depends solely on the $S_i$-portion of the received string. Once again, the conclusions of \cref{clm:adaptive-bot-fix} and \cref{clm:adaptive-fixable-or-useless} hold for $C_{J|\rho}$, with $S_i$ replaced by $S_i'$. Finally, we are ready to prove \Cref{thm:main-2qRLDC}. We recall the theorem below. \twoqrldcmain* \begin{proof} The proof is almost identical to the one for \Cref{prop:non-adaptive-2qRLDC}. First, we can show that there exists $I \subseteq [n']$ of size $|I| \ge n'-O_{\delta}(1) = \Omega(n)$ such that $|S_{i,-}'| \le \delta m/4$ for all $i \in I$, and hence $|S_i'|=|S_{i,-}'|+|S_{i,+}'|\le \delta m/2$. Second, similarly to the proof of \cref{lem:LDC-reduction}, for each $i \in I$ we can construct a decoder $D_i$ for $x_i$ as follows: $D_i$ restarts $\ensuremath{\mathsf{Dec}}\xspace(i,\cdot)$ until it makes a first query $j \in [m']\setminus S_i'$, then finishes simulating $\ensuremath{\mathsf{Dec}}\xspace(i,\cdot)$ and returns its output. With the help of \Cref{clm:adaptive-bot-fix} and \Cref{clm:adaptive-fixable-or-useless}, the same analysis as in \Cref{lem:LDC-reduction} shows that $D_i$ never returns $\perp$, and that the probability of returning $x_i$ is at least $1/2+\ensuremath{\varepsilon}$. Finally, the theorem follows from \cref{thm:two-query-lb}. \end{proof} \section{Lower Bounds for 2-Query Hamming RLDCs} \label{sec:2qrldc} We prove \Cref{thm:main-2qRLDC} in this section. As a reminder, a weak $(q,\delta,\alpha)$-RLDC satisfies the first two conditions in \cref{def:strongRLDC}, and non-adaptive means that the decoder makes queries according to a distribution which is independent of the received string $y$. Here we are interested in the case $q=2$ and $\alpha=1/2+\ensuremath{\varepsilon}$. To avoid overloading first-time readers with heavy notation, we first present a proof of the lower bound for \emph{non-adaptive} decoders, i.e., decoders with a query distribution independent of the received string. This proof is easier to follow, while the crucial ideas behind it remain the same. The proof for the most general case is presented in the last subsection, with an emphasis on the nuances involved in dealing with adaptivity. \subsection{A Warm-up: the lower bound for non-adaptive decoders} In the following, we fix a relaxed decoder $\ensuremath{\mathsf{Dec}}\xspace$ for $C$. In this subsection, we assume that $\ensuremath{\mathsf{Dec}}\xspace$ is non-adaptive, and that it has the first two properties specified in \cref{def:strongRLDC}.
\section{Lower Bounds for 2-Query Hamming RLDCs}
\label{sec:2qrldc}
We prove \Cref{thm:main-2qRLDC} in this section. As a reminder, a weak $(q,\delta,\alpha)$-RLDC satisfies the first two conditions in \cref{def:strongRLDC}, and non-adaptive means the decoder makes queries according to a distribution which is independent of the received string $y$. Here we are interested in the case $q=2$ and $\alpha=1/2+\ensuremath{\varepsilon}$. To avoid overloading first-time readers with heavy notation, we first present a proof of the lower bound for \emph{non-adaptive} decoders. This proof will be easier to follow, while the crucial ideas behind it remain the same. The proof for the most general case is presented in the last subsection, with an emphasis on the nuances in dealing with adaptivity.

\subsection{A Warm-up: the lower bound for non-adaptive decoders}
In the following, we fix a relaxed decoder $\ensuremath{\mathsf{Dec}}\xspace$ for $C$. In this subsection, we assume that $\ensuremath{\mathsf{Dec}}\xspace$ is non-adaptive, and that it has the first two properties specified in \cref{def:strongRLDC}. To avoid technical details, we also assume $\ensuremath{\mathsf{Dec}}\xspace$ always makes exactly 2 queries (otherwise we can add dummy queries to make the query count exactly 2). Given an index $i \in [n]$ and queries $j,k$ made by $\ensuremath{\mathsf{Dec}}\xspace(i,\cdot)$, in the most general setting the output could be a random variable which depends on $i$, $y_j$, and $y_k$, where $y_j$, $y_k$ are the answers to queries $j$, $k$, respectively. An equivalent view is that the decoder picks a random function $f$ according to some distribution and outputs $f(y_j, y_k)$. Let $\mathtt{DF}^i_{j,k}$ be the set of all decoding functions $f \colon \set{0,1}^2 \rightarrow \set{0,1,\perp}$ which are selected by $\ensuremath{\mathsf{Dec}}\xspace(i,\cdot)$ with non-zero probability when querying $j,k$. We partition the query pairs into the following two sets
\begin{align*}
F_{i}^{0} &\coloneqq \set{ \set{j,k} \subseteq [m] \colon \forall f \in \mathtt{DF}^i_{j,k}, \textup{ the truth table of $f$ contains no ``$\perp$''}}, \\
F_{i}^{\ge 1} &\coloneqq \set{ \set{j,k} \subseteq [m] \colon \exists f \in \mathtt{DF}^i_{j,k} \textup{ such that the truth table of $f$ contains at least one ``$\perp$''}}.
\end{align*}
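To make the partition concrete, the following toy sketch (an illustration, not part of the construction) classifies a query pair by inspecting the truth tables of its decoding functions; a decoding function is represented as a dictionary from $\set{0,1}^2$ to $\set{0,1,\perp}$.
\begin{verbatim}
# Toy Python classifier for the partition {F_i^0, F_i^{>=1}}.
from itertools import product

BOT = "bot"  # placeholder for the symbol "perp"

def classify_pair(decoding_functions):
    """decoding_functions: the set DF^i_{j,k}, given as dicts mapping
    pairs (y_j, y_k) in {0,1}^2 to an output in {0, 1, BOT}."""
    for f in decoding_functions:
        if any(f[p] == BOT for p in product((0, 1), repeat=2)):
            return "F>=1"
    return "F0"

# Example: a function that outputs the common answer, else 'bot'.
f = {(0, 0): 0, (1, 1): 1, (0, 1): BOT, (1, 0): BOT}
assert classify_pair([f]) == "F>=1"
\end{verbatim}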
\paragraph{Notation.}
Given a string $w \in \ensuremath{{\{0,1\}}}\xspace^m$ and a subset $S\subseteq [m]$, we denote $w[S]\coloneqq (w_i)_{i\in S} \in \ensuremath{{\{0,1\}}}\xspace^{|S|}$. Given a Boolean function $f \colon \set{0,1}^{n} \rightarrow \set{0,1}$, and $\sigma \in \set{0,1}$, we write $f\restriction_{x_i = \sigma}$ to denote the restriction of $f$ to the domain $\set{\*x \in \set{0,1}^n \colon x_i = \sigma}$. For a sequence of restrictions, we simply write $f\restriction_{(x_{j_1}, \dots, x_{j_k})=(\sigma_1,\dots,\sigma_k)}$, or $f_{J|\sigma}$ where $J=[n]\setminus\set{j_1,\dots,j_k}$ and $\sigma=(\sigma_1,\dots,\sigma_k)$. Note that $f_{J|\sigma}$ is a Boolean function over the domain $\ensuremath{{\{0,1\}}}\xspace^{J}$.

We will identify the encoding function of $C$ with a collection of $m$ Boolean functions
\begin{align*}
\+C \coloneqq \set{C_1, \dots, C_m \colon \forall j \in [m], C_j \colon \set{0,1}^n \rightarrow \set{0,1}}.
\end{align*}
Namely, $C(x)=(C_1(x), C_2(x), \dots, C_m(x))$ for all $x \in \ensuremath{{\{0,1\}}}\xspace^n$. For $j \in [m]$, we say $C_j$ is \emph{fixable} by $x_i$ if at least one of the restrictions $C_j\restriction_{x_i=0}$ and $C_j\restriction_{x_i=1}$ is a constant function. Denote
\begin{align*}
S_i \coloneqq \set{j \in [m] \colon C_j\textup{ is fixable by }x_i}, \quad T_j \coloneqq \set{i \in [n] \colon C_j\textup{ is fixable by }x_i},
\end{align*}
and $w_{j} \coloneqq |T_j|$. Let
\begin{align*}
W \coloneqq \set{j \in [m] \colon w_{j} \ge 3\ln(8/\delta) }.
\end{align*}
For $i \in [n]$ define the sets $S_{i,+} \coloneqq S_{i} \cap W$, and $S_{i,-} \coloneqq S_i \cap \overline{W}$.
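On small examples, the sets $S_i$ and $T_j$ can be computed by brute force directly from the definition; the following sketch (exponential time, intended only for toy instances) does exactly that.
\begin{verbatim}
# Brute-force Python computation of the fixable sets S_i and T_j.
from itertools import product

def fixable_sets(C_funcs, n):
    """C_funcs: list of m Boolean functions, each mapping an n-tuple
    over {0,1} to {0,1}. Returns (S, T) with S[i] = {j : C_j fixable
    by x_i} and T[j] = {i : C_j fixable by x_i}."""
    m = len(C_funcs)
    S = [set() for _ in range(n)]
    T = [set() for _ in range(m)]
    for i in range(n):
        for j, Cj in enumerate(C_funcs):
            for sigma in (0, 1):
                vals = {Cj(x) for x in product((0, 1), repeat=n)
                        if x[i] == sigma}
                if len(vals) == 1:  # C_j restricted to x_i=sigma is constant
                    S[i].add(j)
                    T[j].add(i)
                    break
    return S, T
\end{verbatim}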
Let $J \subseteq [n]$ and $\rho \in \set{0,1}^{\overline{J}}$. A code $C \colon \set{0,1}^n \rightarrow \set{0,1}^m$ restricted to $\*x_{\overline{J}} = \rho$, denoted by $C_{J|\rho}$, is specified by the following collection of Boolean functions
\begin{align*}
\+C_{J|\rho} \coloneqq \set{C_j\restriction_{\*x_{\overline{J}}=\rho} \colon j \in [m], C_j\restriction_{\*x_{\overline{J}}=\rho}\textup{ is not a constant function}}.
\end{align*}
Namely, we restrict each function $C_j$ in $\+C$ to $\*x_{\overline{J}}=\rho$, and eliminate those that have become constant functions. $C_{J|\rho}$ encodes $n'$-bit messages into $m'$-bit codewords, where $n'=|J|$ and $m' = \abs{\+C_{J|\rho}} \le m$. We note that the local decoder $\ensuremath{\mathsf{Dec}}\xspace$ for $C$ can also be used as a local decoder for $C_{J|\rho}$, while preserving all the parameters. This is because $\ensuremath{\mathsf{Dec}}\xspace$ never actually needs to read a codeword bit which has become a constant function under the restriction $J|\rho$.

The lemma below will be useful later in the proof. It shows that a constant fraction of the message bits can be fixed so that most codeword bits $C_j$ with large $w_j$ become constants.
\begin{lemma}
\label{lem:random-restriction}
There exist a set $J \subseteq [n]$ and an assignment $\rho \in \set{0,1}^{\overline{J}}$ such that $|J| \ge n/6$, and $|W \setminus A| \le \delta m/4$, where $A \subseteq W$ collects all codeword bits $j \in W$ such that $C_j \restriction_{\*x_{\overline{J}}=\rho}$ is a constant function.
\end{lemma}
\begin{proof}
Let $J$ be a random subset formed by selecting each $i \in [n]$ independently with probability $1/3$. For each $i \in \overline{J}$, set $\rho_i = 0$ or $\rho_i = 1$ with probability $1/2$. We have $\E[|J|]=n/3$, and hence the Chernoff bound shows that $|J| < n/6$ with probability at most $\exp(-\Omega(n))$. Furthermore, for each $j \in W$, $C_j\restriction_{\*x_{\overline{J}}=\rho}$ becomes a constant function except with probability at most $\delta/8$. This is because for each $i \in T_j$, $C_j\restriction_{x_i=0}$ or $C_j\restriction_{x_i=1}$ is a constant function, and the random restriction sets $x_i$ to the corresponding fixing value with probability $1/3$. Therefore
\begin{align*}
\Pr\left[ C_j\restriction_{\*x_{\overline{J}}=\rho}\textup{ is not constant} \right] \le \tuple{1-\frac{1}{3}}^{|T_j|} < e^{-|T_j|/3} \le \frac{\delta}{8},
\end{align*}
where the last inequality is due to $w_j = |T_j| \ge 3\ln(8/\delta)$, since $j \in W$. By linearity of expectation and Markov's inequality, we have
\begin{align*}
& \Pr\left[ \sum_{j \in W}\mathbf{1}\set{C_j\restriction_{\*x_{\overline{J}}=\rho}\textup{ is not constant}} \ge \frac{\delta}{4}|W| \right] \\
\le& \frac{ \E\left[ \sum_{j \in W}\mathbf{1}\set{C_j\restriction_{\*x_{\overline{J}}=\rho}\textup{ is not constant}} \right]}{\delta|W|/4} \\
=& \frac{\sum_{j \in W}\Pr\left[ C_j\restriction_{\*x_{\overline{J}}=\rho}\textup{ is not constant} \right]}{\delta|W|/4} \\
\le& \frac{\delta/8 \cdot |W|}{\delta|W|/4} \le \frac{1}{2}.
\end{align*}
Applying a union bound gives
\begin{align*}
\Pr\left[ \tuple{|J| < n/6} \lor \tuple{\sum_{j \in W}\mathbf{1}\set{C_j\restriction_{\*x_{\overline{J}}=\rho}\textup{ is not constant}} \ge \frac{\delta}{4}|W|} \right] \le \exp\tuple{-\Omega(n)} + \frac{1}{2} < 1.
\end{align*}
Finally, we can conclude that there exist $J \subseteq [n]$ and $\rho\in\ensuremath{{\{0,1\}}}\xspace^{\overline{J}}$ such that $|J| \ge n/6$, and $C_j\restriction_{\*x_{\overline{J}}=\rho}$ becomes a constant function for all but a $\delta/4$ fraction of $j \in W$.
\end{proof}

Let $J \subseteq [n]$ and $\rho \in \set{0,1}^{\overline{J}}$ be given by \cref{lem:random-restriction}, and consider the restricted code $C_{J|\rho}$. By rearranging the message bits, we may assume $J=[n']$ where $n'=|J| \ge n/6$. Let $A \subseteq [m]$ be the set of codeword bits which get fixed to constants under $J|\rho$. We denote $W'\coloneqq W\setminus A$, $S_i'\coloneqq S_i \setminus A$, $S_{i,-}'\coloneqq S_{i,-}\setminus A$, and $S_{i,+}'\coloneqq S_{i,+}\setminus A$. Note that $|W'|=|W\setminus A| \le \delta m/4$, and thus $|S_{i,+}'|=|S_{i,+}\cap W'|\le \delta m/4$ for all $i \in [n']$. We emphasize that $S_i'$ does not necessarily contain all codeword bits fixable by $x_i$ in the restricted code $C_{J|\rho}$, as fixing some message bits may cause more codeword bits to become fixable by $x_i$.
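The probabilistic experiment in the proof of \cref{lem:random-restriction} can be sketched as follows, assuming the code is given as a list of explicit Boolean functions and that $W$ has been computed (e.g., by the brute-force sketch above); one resamples until both conclusions of the lemma hold.
\begin{verbatim}
# One sample of the random restriction (Python sketch; brute force
# over the free bits, so it is meant for toy-sized n only).
import random
from itertools import product

def try_restriction(C_funcs, n, W):
    """Leave each x_i free with prob. 1/3, else fix it to a uniform
    bit. Returns (J, rho, bad) with bad = #{j in W : C_j|rho is not
    constant}; resample until |J| >= n/6 and bad <= delta*len(W)/4."""
    J, rho = [], {}
    for i in range(n):
        r = random.randrange(3)
        if r == 2:
            J.append(i)       # x_i stays free
        else:
            rho[i] = r        # x_i fixed to 0 or 1
    bad = 0
    for j in W:
        vals = set()
        for free in product((0, 1), repeat=len(J)):
            x = [rho.get(i, 0) for i in range(n)]
            for pos, i in enumerate(J):
                x[i] = free[pos]
            vals.add(C_funcs[j](tuple(x)))
        bad += int(len(vals) > 1)
    return J, rho, bad
\end{verbatim}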
We first show that the queries made by $\ensuremath{\mathsf{Dec}}\xspace$ must have a certain structure. The following claim characterizes the queries in $F_i^{\ge 1}$.
\begin{claim}
\label{clm:bot-fix}
Suppose $\set{j,k} \in F_{i}^{\ge 1}$. Then we must have $j,k \in S_i$.
\end{claim}
\begin{proof}
Let $\set{j, k} \in F_{i}^{\ge 1}$. Suppose for the sake of contradiction that $j \notin S_i$. This implies there are partial assignments $\sigma_{00}, \sigma_{01}, \sigma_{10}, \sigma_{11} \in \set{0,1}^{n-1}$ such that
\begin{align*}
C_j\tuple{\*x_{-i} = \sigma_{00}, x_i = 0} = 0, \quad C_j\tuple{\*x_{-i} = \sigma_{01}, x_i = 1} = 0, \\
C_j\tuple{\*x_{-i} = \sigma_{10}, x_i = 0} = 1, \quad C_j\tuple{\*x_{-i} = \sigma_{11}, x_i = 1} = 1,
\end{align*}
where $\*x_{-i}$ is defined as $\tuple{x_t \colon t \in [n]\setminus\set{i}}$. Let $C_{00}, C_{01}, C_{10}, C_{11}$ be encodings of the corresponding assignments mentioned above. Since the relaxed decoder has perfect completeness, when $\ensuremath{\mathsf{Dec}}\xspace(i,\cdot)$ is given access to $C_{00}$ or $C_{10}$ it must output $x_i=0$. Note that the $j$-th bit is different in $C_{00}$ and $C_{10}$. Similarly, when $\ensuremath{\mathsf{Dec}}\xspace(i,\cdot)$ is given access to $C_{01}$ or $C_{11}$ it must output $x_i = 1$. However, this already takes up all 4 entries in the truth table of any decoding function $f \in \mathtt{DF}_{j,k}^i$ (the constraints coming from $x_i=0$ and $x_i=1$ must occupy distinct entries, since otherwise perfect completeness is violated), leaving no space for any ``$\perp$'' entry. This contradicts the assumption $\set{j,k} \in F_{i}^{\ge 1}$.
\end{proof}

Here is another way to view \cref{clm:bot-fix} which will be useful later: suppose $\set{j, k}$ is a query set such that $j \notin S_i$ (or $k \notin S_i$); then $\set{j, k} \in F_{i}^{0}$. In other words, conditioned on the event that some query is not contained in $S_i$, the decoder never outputs $\perp$. The following claim characterizes the queries in $F_i^{0}$.
\begin{claim}
\label{clm:fixable-or-useless}
Suppose $\set{j,k} \in F_{i}^{0}$, and $j \in S_i$. Then one of the following three cases occurs: (1) $k \in S_i$, (2) $C_j=x_i$, or (3) $C_j=\neg x_i$.
\end{claim}
\begin{proof}
Since $j \in S_i$, we may, without loss of generality, assume that $C_j\restriction_{x_i=0}$ is a constant function. Let us further assume it is the constant-zero function. The proofs for the other cases are similar. Consider any decoding function $f \in \mathtt{DF}_{j,k}^i$ that $\ensuremath{\mathsf{Dec}}\xspace(i,\cdot)$ may use upon reading $\set{j,k}$; it takes values in $\set{0,1}$ since $\set{j,k} \in F_i^{0}$. Suppose case (1) does not occur, meaning that $C_k\restriction_{x_i=0}$ is not a constant function. Then there must be partial assignments $\sigma_{00}, \sigma_{01} \in \set{0,1}^{n-1}$ such that
\begin{align*}
C_k(x_i=0, \*x_{-i}=\sigma_{00}) = 0, \quad C_k(x_i=0, \*x_{-i}=\sigma_{01}) = 1.
\end{align*}
Let $C_{00}$ and $C_{01}$ be the encodings of the corresponding assignments mentioned above. Due to perfect completeness of $\ensuremath{\mathsf{Dec}}\xspace$, it must always output $x_i = 0$ when given access to $C_{00}$ or $C_{01}$. That means $f(0,0)=f(0,1)=0$. Now we claim that $C_j\restriction_{x_i=1}$ must be the constant-one function. Otherwise there is a partial assignment $\sigma_{10} \in \ensuremath{{\{0,1\}}}\xspace^{n-1}$ such that
\begin{align*}
C_j(x_i=1, \*x_{-i}=\sigma_{10}) = 0.
\end{align*}
Let $C_{10}$ be the encoding of this assignment. On the one hand, due to perfect completeness $\ensuremath{\mathsf{Dec}}\xspace(i,\cdot)$ should always output $x_i=1$ when given access to $C_{10}$. On the other hand, $\ensuremath{\mathsf{Dec}}\xspace(i,\cdot)$ outputs $f((C_{10})_j,(C_{10})_k)=f(0,(C_{10})_k)=0$, since $f(0,0)=f(0,1)=0$. This contradiction shows that $C_j\restriction_{x_i=1}$ must be the constant-one function. Therefore $C_j=x_i$, i.e., case (2) occurs. Similarly, when $C_j\restriction_{x_i=0}$ is the constant-one function, we can deduce that $C_j=\neg x_i$, i.e., case (3) occurs.
\end{proof}

We remark that \cref{clm:bot-fix} and \cref{clm:fixable-or-useless} jointly show that for any query set $\set{j,k}$ made by $\ensuremath{\mathsf{Dec}}\xspace(i,\cdot)$ there are two essentially different cases: (1) both $j, k$ lie inside $S_i$, and (2) both $j, k$ lie outside $S_i$. The case $j \in S_i, k\notin S_i$ ($k \in S_i, j\notin S_i$, resp.) means that $k$ ($j$, resp.) is a dummy query which is not used for decoding. Furthermore, conditioned on case (2), the decoder never outputs $\perp$. Another important observation is that all properties of the decoder discussed above hold for the restricted code $C_{J|\rho}$, with $S_i$ replaced by $S_i'$. This is because $C_{J|\rho}$ uses essentially the same decoder, except that it does not actually query any codeword bit which has become a constant.
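The remark above can be summarized in code form (a toy sketch; the mixed branch reflects that the out-of-$S_i$ query is a dummy and does not affect the output).
\begin{verbatim}
# Case analysis of a query pair, per the two claims above (sketch).
def query_case(j, k, S_i):
    if j in S_i and k in S_i:
        return "inside"    # 'bot' outputs are possible only here
    if j not in S_i and k not in S_i:
        return "outside"   # the decoder never outputs 'bot'
    # Mixed pair: the query outside S_i does not affect the output,
    # so this is effectively the "inside" case.
    return "inside"
\end{verbatim}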
For a subset $S \subseteq [m]$, we say ``$\ensuremath{\mathsf{Dec}}\xspace(i,\cdot)$ reads $S$'' if the event ``$j \in S$ and $k \in S$'' occurs, where $j, k \in [m]$ are the queries made by $\ensuremath{\mathsf{Dec}}\xspace(i,\cdot)$. The following lemma says that, conditioned on $\ensuremath{\mathsf{Dec}}\xspace(i,\cdot)$ reading some subset $S$, there is a way of modifying the bits in $S$ that forces the decoder to output any prescribed value.
\begin{lemma}
\label{lem:conditional-zero}
Let $S \subseteq [m]$ be a subset such that $\Pr[\ensuremath{\mathsf{Dec}}\xspace(i,\cdot)\textup{ reads }S]>0$. Then for any string $s \in \set{0,1}^m$ and any bit $b \in \set{0,1}$, there exists a string $z \in \set{0,1}^m$ such that $z[[m]\setminus S]=s[[m]\setminus S]$, and
\begin{align*}
\Pr\left[ \ensuremath{\mathsf{Dec}}\xspace(i, z) = 1-b \mid \ensuremath{\mathsf{Dec}}\xspace(i,\cdot)\textup{ reads }S \right] = 1.
\end{align*}
\end{lemma}
\begin{proof}
Let $x \in \set{0,1}^{n}$ be a string with $x_i=1-b$. Let $z \in \set{0,1}^{m}$ be the string satisfying
\begin{align*}
z[S] = C(x)[S], \quad z[[m]\setminus S] = s[[m]\setminus S].
\end{align*}
Since $\ensuremath{\mathsf{Dec}}\xspace$ has perfect completeness, we have
\begin{align*}
1 = \Pr\left[ \ensuremath{\mathsf{Dec}}\xspace(i, C(x)) = x_i \mid \ensuremath{\mathsf{Dec}}\xspace(i,\cdot)\textup{ reads }S \right] = \Pr\left[ \ensuremath{\mathsf{Dec}}\xspace(i, z) = 1-b \mid \ensuremath{\mathsf{Dec}}\xspace(i,\cdot)\textup{ reads }S \right].
\end{align*}
\end{proof}
The next lemma is a key step in our proof. It roughly says that there is a local decoder for $x_i$ in the standard sense as long as the size of $S_{i}$ is not too large.
\begin{lemma}
\label{lem:LDC-reduction}
Suppose $i \in [n]$ is such that $\abs{S_{i}} \le \delta m/2$. Then there is a $(2,\delta/2,1/2+\ensuremath{\varepsilon})$-local decoder $D_i$ for $i$. In other words, for any $x \in \set{0,1}^{n}$ and $y \in \set{0,1}^{m}$ such that $\HAM\xspace(C(x), y) \le \delta m/2$, we have
\begin{align*}
\Pr\left[ D_i(y) = x_i \right] \ge \frac{1}{2} + \ensuremath{\varepsilon},
\end{align*}
and $D_i$ makes at most 2 queries into $y$.
\end{lemma}
\begin{proof}
Let $i \in [n]$ be such that $\abs{S_{i}} \le \delta m/2$. The local decoder $D_i$ works as follows. Given $x \in \set{0,1}^{n}$ and $y \in \set{0,1}^{m}$ such that $\HAM\xspace(C(x), y) \le \delta m/2$, $D_i$ obtains a query set $Q$ according to the query distribution of $\ensuremath{\mathsf{Dec}}\xspace(i,\cdot)$ conditioned on $Q \subseteq [m]\setminus S_i$. Then $D_i$ finishes by outputting the result returned by $\ensuremath{\mathsf{Dec}}\xspace(i,\cdot)$. Denote by $E_i$ the event ``$\ensuremath{\mathsf{Dec}}\xspace(i,\cdot)$ reads $[m]\setminus S_i$'', i.e., both queries made by $\ensuremath{\mathsf{Dec}}\xspace(i,\cdot)$ lie outside $S_i$. In order for the conditional distribution to be well-defined, we need to argue that $E_i$ occurs with non-zero probability. Suppose this is not the case, meaning that $Q \cap S_i \neq \varnothing$ for every possible query set $Q$. Let $z \in \set{0,1}^m$ be the string obtained by applying Lemma~\ref{lem:conditional-zero} with $S = S_i$, $s=C(x)$ and $b=x_i$. Claim~\ref{clm:bot-fix} and Claim~\ref{clm:fixable-or-useless} jointly show that either $Q \subseteq S_i$, or the decoder's output does not depend on the answers to queries in $Q \setminus S_i$. In either case, the output of $\ensuremath{\mathsf{Dec}}\xspace(i,z)$ depends only on $z[S_i]$.
However, by the choice of $z$ we now have a contradiction since
\begin{align*}
\frac{1}{2} + \ensuremath{\varepsilon} \le \Pr\left[ \ensuremath{\mathsf{Dec}}\xspace(i,z) \in \set{x_i, \perp} \right] = \Pr\left[ \ensuremath{\mathsf{Dec}}\xspace(i,z) \in \set{x_i, \perp} \mid \ensuremath{\mathsf{Dec}}\xspace(i,\cdot)\textup{ reads }S_i \right] = 0,
\end{align*}
where the first inequality is due to $\HAM\xspace(C(x),z)\le |S_i| < \delta m$ and the relaxed decoding property of $\ensuremath{\mathsf{Dec}}\xspace$, the first equality holds because the output of $\ensuremath{\mathsf{Dec}}\xspace(i,z)$ depends only on $z[S_i]$, and the last equality follows from Lemma~\ref{lem:conditional-zero}.

By definition of $D_i$, it makes at most 2 queries into $y$. Its success rate is given by
\begin{align*}
\Pr[D_i(y) = x_i] = \Pr[\ensuremath{\mathsf{Dec}}\xspace(i,y) = x_i \mid E_i].
\end{align*}
Therefore it remains to show that
\begin{align*}
\Pr\left[ \ensuremath{\mathsf{Dec}}\xspace(i,y) = x_i \mid E_i \right] \ge \frac{1}{2} + \ensuremath{\varepsilon}.
\end{align*}
Let $z$ be the string obtained by applying Lemma~\ref{lem:conditional-zero} with $S=S_i$, $s=y$ and $b=x_i$. From previous discussions we see that conditioned on $\overline{E_i}$ (i.e., the event $E_i$ does not occur), the output of $\ensuremath{\mathsf{Dec}}\xspace(i,z)$ only depends on $z[S_i]$. Therefore
\begin{align}
\Pr\left[ \ensuremath{\mathsf{Dec}}\xspace(i, z) \in \set{x_i, \perp} \mid \overline{E_i} \right] = 1-\Pr\left[ \ensuremath{\mathsf{Dec}}\xspace(i, z) = 1-x_i \mid \overline{E_i} \right] = 0. \label{eqn:conditional-zero}
\end{align}
We also have that $z$ is close to $C(x)$ since
\begin{align*}
\HAM\xspace(z, C(x)) \le \HAM\xspace(z, y) + \HAM\xspace(y, C(x)) \le \abs{S_i} + \delta m/2 \le \delta m.
\end{align*}
Thus, the relaxed decoding property of $\ensuremath{\mathsf{Dec}}\xspace$ gives
\begin{align*}
\Pr\left[ \ensuremath{\mathsf{Dec}}\xspace(i, z) \in \set{x_i, \perp} \right] \ge \frac{1}{2} + \ensuremath{\varepsilon}.
\end{align*}
On the other hand, we also have
\begin{align*}
& \Pr\left[ \ensuremath{\mathsf{Dec}}\xspace(i, z) \in \set{x_i, \perp} \right] \\
=& \Pr\left[ \ensuremath{\mathsf{Dec}}\xspace(i, z) \in \set{x_i, \perp} \mid \overline{E_i} \right] \cdot \Pr\left[ \overline{E_i} \right] + \Pr\left[ \ensuremath{\mathsf{Dec}}\xspace(i, z) \in \set{x_i, \perp} \mid E_i \right] \cdot \Pr\left[ E_i \right] \\
=& \Pr\left[ \ensuremath{\mathsf{Dec}}\xspace(i, z) \in \set{x_i, \perp} \mid \overline{E_i} \right] \cdot \Pr\left[ \overline{E_i} \right] + \Pr\left[ \ensuremath{\mathsf{Dec}}\xspace(i, y) \in \set{x_i, \perp} \mid E_i \right] \cdot \Pr\left[ E_i \right] \tag*{($z[[m]\setminus S_i]=y[[m]\setminus S_i]$)} \\
=& \Pr\left[ \ensuremath{\mathsf{Dec}}\xspace(i, y) \in \set{x_i, \perp} \mid E_i \right] \cdot \Pr\left[ E_i \right] \tag*{(Equation~(\ref{eqn:conditional-zero}))}\\
\le& \Pr\left[ \ensuremath{\mathsf{Dec}}\xspace(i, y) \in \set{x_i, \perp} \mid E_i \right].
\end{align*}
Note that by Claim~\ref{clm:bot-fix}, conditioned on $E_i$, $\ensuremath{\mathsf{Dec}}\xspace(i,\cdot)$ never outputs ``$\perp$''. We thus have
\begin{align*}
\Pr\left[ \ensuremath{\mathsf{Dec}}\xspace(i, y) = x_i \mid E_i \right] \ge \frac{1}{2} + \ensuremath{\varepsilon}.
\end{align*}
\end{proof}
We remark once again that the above lemma holds for the restricted code $C_{J|\rho}$, with $S_i$ replaced by $S_i'$.
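The string $z$ used repeatedly above (from Lemma~\ref{lem:conditional-zero}) has a very simple form, and the following sketch builds it explicitly. This is an illustration only: we assume $C$ is given as a callable encoder, and we make an arbitrary choice of the message $x$ with $x_i = 1-b$.
\begin{verbatim}
# Building the planted string z of the lemma (Python sketch).
def adversarial_string(C, n, S, s, i, b):
    """Return z agreeing with s outside S and with some codeword
    C(x), x_i = 1-b, on S. Conditioned on the decoder reading only
    positions in S, perfect completeness forces the output 1-b."""
    x = [0] * n
    x[i] = 1 - b          # any message with x_i = 1-b works
    cx = C(tuple(x))
    z = list(s)
    for j in S:
        z[j] = cx[j]
    return z
\end{verbatim}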
Below we prove an exponential lower bound for non-adaptive 2-query Hamming RLDCs.
\begin{proposition}
\label{prop:non-adaptive-2qRLDC}
Let $C \colon \set{0,1}^n \rightarrow \set{0,1}^m$ be a non-adaptive weak $(2,\delta,1/2+\ensuremath{\varepsilon})$-RLDC. Then $m = 2^{\Omega_{\delta,\ensuremath{\varepsilon}}(n)}$.
\end{proposition}
\begin{proof}
Let $C_{J|\rho}\colon \set{0,1}^{n'}\rightarrow \set{0,1}^{m'}$ be the restricted code where $J|\rho$ is given by \cref{lem:random-restriction}, and $A \subseteq [m]$ be the set of codeword bits which get fixed to constants. We also let $S_i'\coloneqq S_i\setminus A$, $S_{i,-}'=S_{i,-}\setminus A$, $S_{i,+}'=S_{i,+}\setminus A$. Denote $T_j'\coloneqq \set{i \in [n'] \colon j \in S_i'}$. Since $S_i'\subseteq S_i$ for each $i$, we also have $T_j'\subseteq T_j$ for each $j$. In particular, each surviving $j \in [m']\setminus W'$ satisfies $j \notin W$, and hence $|T_j'| \le |T_j| < 3\ln(8/\delta)$. Therefore
\begin{align*}
\underset{i \in [n']}{\mathbb{E}}\left[|S_{i,-}'|\right] = \frac{1}{n'}\sum_{i=1}^{n'}|S_{i,-}'| = \frac{1}{n'}\sum_{j \in [m']\setminus W'}|T_j'| \le 3\ln(8/\delta)\cdot \frac{m'}{n'}.
\end{align*}
Hence by Markov's inequality,
\begin{align*}
\underset{i \in [n']}{\Pr}\left[|S_{i,-}'| > \delta m'/4 \right] \le \frac{12\ln(8/\delta)}{\delta n'} = O_{\delta}\tuple{\frac{1}{n'}}.
\end{align*}
In other words, there exists $I \subseteq [n']$ of size $|I| \ge n'-O_{\delta}(1)$ such that $|S_{i,-}'| \le \delta m'/4$ for all $i \in I$. For any such $i \in I$, we have $|S_i'| = |S_{i,-}'| + |S_{i,+}'| \le \delta m'/4 + \delta m'/4 = \delta m'/2$. By \cref{lem:LDC-reduction}, we can view $C_{J|\rho}$ as a $(2,\delta/2,1/2+\ensuremath{\varepsilon})$-LDC for message bits in $I$ (for instance, we can arbitrarily fix the message bits outside $I$), where $|I| \ge n'-O_{\delta}(1) = \Omega(n)$. Finally, the statement of the proposition follows from \cref{thm:two-query-lb}.
\end{proof}
\input{hamming-rLDC-adaptive}

\section{Strong Insdel RLDCs}
The following definition is adapted from Definition 3.2 of~\cite{gur2019lower}.
\begin{definition}[Strong rILDC]
A code $C \colon \Sigma^n \rightarrow \Sigma^m$ is a $(q,\delta,\alpha,\rho)$-relaxed insdel LDC (rILDC) if there exists a probabilistic algorithm $\ensuremath{\mathsf{Dec}}\xspace$ with the following properties.
\begin{itemize}
\item \textsf{(Perfect) Completeness:} For every $x \in \Sigma^n$ and $i \in [n]$, it holds that
\begin{align*}
\Pr[\ensuremath{\mathsf{Dec}}\xspace(C(x), m, i) = x_i] = 1.
\end{align*}
\item \textsf{Relaxed Decoding:} For every $x \in \Sigma^n$ and $y \in \Sigma^{m'}$ such that $\ED\tuple{C(x), y} \le \delta \cdot 2m$, and for every $i \in [n]$, we have
\begin{align*}
\Pr\left[ \ensuremath{\mathsf{Dec}}\xspace(y, m', i) \in \set{x_i, \perp} \right] \ge \alpha.
\end{align*}
\item \textsf{Success Rate:} For every $x \in \Sigma^n$ and $y \in \Sigma^{m'}$ such that $\ED\tuple{C(x), y} \le \delta \cdot 2m$, there exists a set $I_y \subseteq [n]$ of size $\abs{I_y} \ge \rho n$ such that for every $i \in I_y$, we have
\begin{align*}
\Pr\left[ \ensuremath{\mathsf{Dec}}\xspace(y, m', i) = x_i \right] \ge \alpha.
\end{align*}
\end{itemize}
Furthermore, in every invocation, $\ensuremath{\mathsf{Dec}}\xspace$ reads at most $q$ symbols of $y$. The probabilities are taken over the randomness of $\ensuremath{\mathsf{Dec}}\xspace$, and $\ED\tuple{C(x),y}$ denotes the minimum number of insertions/deletions necessary to transform $C(x)$ into $y$.
\end{definition}
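For reference, the measure $\ED$ used throughout (insertions and deletions only, no substitutions) can be computed via the standard longest-common-subsequence identity $\ED(a,b) = |a| + |b| - 2\cdot\mathrm{LCS}(a,b)$; the following is a minimal sketch.
\begin{verbatim}
# Insertion/deletion edit distance via the LCS identity (Python).
def insdel_ed(a, b):
    """ED(a, b) = |a| + |b| - 2 * LCS(a, b), by the standard DP."""
    la, lb = len(a), len(b)
    lcs = [[0] * (lb + 1) for _ in range(la + 1)]
    for s in range(la):
        for t in range(lb):
            lcs[s + 1][t + 1] = (lcs[s][t] + 1 if a[s] == b[t]
                                 else max(lcs[s][t + 1], lcs[s + 1][t]))
    return la + lb - 2 * lcs[la][lb]

assert insdel_ed("0110", "010") == 1   # delete one '1'
\end{verbatim}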
The following theorem is implicit in~\cite{blocki2021exponential}.
\begin{theorem}
\label{thm:lb-channel}
Let $\delta \in (0,1)$ be a constant. There exists a channel $\mathfrak{D}=\mathfrak{D}_{m,\delta} \colon \set{0,1}^m \rightarrow \set{0,1}^{\le m}$ with the following properties.
\begin{itemize}
\item For every $s \in \set{0,1}^m$, $\ED\tuple{\mathfrak{D}(s), s} \le \delta\cdot 2m$.
\item Suppose $C \colon \set{0,1}^n \rightarrow \set{0,1}^m$ is a code which is locally decodable on average against $\mathfrak{D}$. Formally, there is a randomized algorithm $\ensuremath{\mathsf{Dec}}\xspace$ satisfying
\begin{align*}
\forall i \in [n], \quad \Pr_{\substack{x \in \set{0,1}^n \\ y \sim \mathfrak{D}(C(x))}}\left[ \ensuremath{\mathsf{Dec}}\xspace(y, |y|, i) = x_i \right] \ge \frac{1}{2} + \ensuremath{\varepsilon},
\end{align*}
and $\ensuremath{\mathsf{Dec}}\xspace$ makes at most $q$ queries to $y$ in each invocation. Then for $q\ge 3$ we have $m = \exp\tuple{\Omega_{\delta,\ensuremath{\varepsilon}}(n^{1/(2q-4)})}$.
\end{itemize}
\end{theorem}

\subsection{Exponential lower bound for strong rILDCs}
In this subsection, we prove the following theorem.
\begin{theorem}
\label{thm:srildc-main}
Let $C \colon \set{0,1}^n \rightarrow \set{0,1}^m$ be a strong $(q,\delta,\alpha,\rho)$-rILDC where $\alpha>1/2$. Then $m = \exp\tuple{n^{\Omega_{\delta,\alpha,\rho}(1/q)}}$.
\end{theorem}
The proof has two steps. The first step is a straightforward confidence amplification step which boosts the success rate $\alpha$ by running the decoding algorithm multiple times. In the second step, we show that if $\alpha$ is sufficiently close to 1, a strong rILDC will imply a (non-relaxed) insdel LDC that is decodable on average against the channel $\mathfrak{D}$ mentioned in Theorem~\ref{thm:lb-channel}. The exponential lower bound is thus obtained by applying Theorem~\ref{thm:lb-channel}.
\begin{lemma}
\label{lem:amplify}
Let $C \colon \set{0,1}^n \rightarrow \set{0,1}^m$ be a strong $(q,\delta,1/2+\beta,\rho)$-rILDC where $\beta > 0$. Then for any $\ensuremath{\varepsilon} > 0$, $C$ is also a strong $(q\cdot \ln(1/\ensuremath{\varepsilon})/(2\beta^2), \delta, 1-\ensuremath{\varepsilon}, \rho)$-rILDC.
\end{lemma}
\begin{proof}
Let $\ensuremath{\mathsf{Dec}}\xspace$ be a relaxed local decoder for $C$. For some integer $T$ to be determined, consider the following alternative local decoder $\ensuremath{\mathsf{Dec}}\xspace_{T}$ for $C$. On input $(y, m, i)$, $\ensuremath{\mathsf{Dec}}\xspace_T$ independently runs $\ensuremath{\mathsf{Dec}}\xspace(y, m, i)$ for $T$ times, and obtains outputs $r_1, r_2,\dots, r_T \in \set{0,1,\bot}$. For $b \in \set{0,1,\bot}$ we denote
\begin{align*}
S_b \coloneqq \set{t \in [T] \colon r_t = b}.
\end{align*}
$\ensuremath{\mathsf{Dec}}\xspace_T$ outputs $0$ or $1$ if $|S_0| \ge T/2$ or $|S_1| \ge T/2$, respectively (if both hold, the tie is broken arbitrarily). Otherwise $\ensuremath{\mathsf{Dec}}\xspace_T$ outputs $\bot$. Now we prove the three properties of $\ensuremath{\mathsf{Dec}}\xspace_T$. Perfect completeness is easy to see. The relaxed decoding property is violated only when $|S_{1-x_i}| \ge T/2$. By the Chernoff bound, this happens with probability at most $e^{-2\beta^2 T}$, since for each $t \in [T]$ we have
\begin{align*}
\Pr[r_t = 1-x_i] = 1-\Pr[r_t \in \set{x_i, \perp}] \le 1 - \tuple{\frac{1}{2} + \beta} = \frac{1}{2}-\beta.
\end{align*}
Let $I_y \subseteq [n]$ be the subset given by the third property of $\ensuremath{\mathsf{Dec}}\xspace$. That is, for each $i \in I_y$ and $t \in [T]$, we have
\begin{align*}
\Pr[r_t = x_i] \ge \frac{1}{2}+\beta.
\end{align*}
Again, by the Chernoff bound we have $\Pr[|S_{x_i}| < T/2] \le e^{-2\beta^2 T}$ for each $i \in I_y$. Finally, we take $T = \ln(1/\ensuremath{\varepsilon})/(2\beta^2)$, which ensures $e^{-2\beta^2 T} \le \ensuremath{\varepsilon}$. We note that $\ensuremath{\mathsf{Dec}}\xspace_{T}$ makes $q\cdot T = q\cdot \ln(1/\ensuremath{\varepsilon})/(2\beta^2)$ queries to $y$.
\end{proof}
\begin{lemma}
\label{lem:reduce-to-ILDC}
Let $C \colon \set{0,1}^n \rightarrow \set{0,1}^m$ be a $(q,\delta,\alpha,\rho)$-relaxed insdel LDC. Suppose $\rho\alpha + (1-\rho)\alpha/2 = 1/2+\ensuremath{\varepsilon}$ for some $\ensuremath{\varepsilon} > 0$. Then there exists a decoder $\ensuremath{\mathsf{Dec}}\xspace$ and a subset $I \subseteq [n]$ of size at least $\ensuremath{\varepsilon} n$ such that for every $i \in I$, we have
\begin{align*}
\Pr_{\substack{x \in \set{0,1}^n \\ y \sim \mathfrak{D}(C(x))}}\left[ \ensuremath{\mathsf{Dec}}\xspace(y, |y|, i) = x_i \right] \ge \frac{1}{2} + \frac{\ensuremath{\varepsilon}}{2}.
\end{align*}
The probability is taken over the uniform random choice of $x \in \set{0,1}^n$, the randomness of $\mathfrak{D}$, and the randomness of $\ensuremath{\mathsf{Dec}}\xspace$.
\end{lemma}
\begin{proof}
Let $\ensuremath{\mathsf{Dec}}\xspace_0$ be the relaxed decoder for $C$. The local decoder $\ensuremath{\mathsf{Dec}}\xspace$ will simulate $\ensuremath{\mathsf{Dec}}\xspace_0$ and output the result, except that when $\ensuremath{\mathsf{Dec}}\xspace_0$ returns ``$\perp$'', $\ensuremath{\mathsf{Dec}}\xspace$ instead returns a uniform random bit. We note that $\mathfrak{D}$ introduces at most $\delta \cdot 2m$ insdel errors. Therefore by definition of strong relaxed insdel LDCs, for a random index $i \in [n]$, we have
\begin{align*}
\Pr_{\substack{i \in [n] \\ x \in \set{0,1}^n \\ y \sim \mathfrak{D}(C(x))}}\left[ \ensuremath{\mathsf{Dec}}\xspace(y, |y|, i) = x_i \right] \ge \rho\alpha + \tuple{1 - \rho} \cdot \frac{\alpha}{2} = \frac{1}{2} + \ensuremath{\varepsilon}.
\end{align*}
By an averaging (Markov-type) argument, there is a subset $I \subseteq [n]$ of size at least $\ensuremath{\varepsilon} n$ such that for all $i \in I$, we have
\begin{align*}
\Pr_{\substack{x \in \set{0,1}^n \\ y \sim \mathfrak{D}(C(x))}}\left[ \ensuremath{\mathsf{Dec}}\xspace(y, |y|, i) = x_i \right] \ge \frac{1}{2} + \frac{\ensuremath{\varepsilon}}{2}.
\end{align*}
\end{proof}
Now we are ready to prove Theorem~\ref{thm:srildc-main}.
\begin{proof}[Proof of Theorem~\ref{thm:srildc-main}]
Write $\alpha=1/2+\beta$ for some constant $\beta>0$. Taking $\ensuremath{\varepsilon} = \rho/4$ in Lemma~\ref{lem:amplify}, we have that $C$ is also a strong $(q',\delta,\alpha', \rho)$-rILDC, where $q'=q\cdot \ln(4/\rho)/(2\beta^2)$ and $\alpha'=1-\rho/4$. Note that
\begin{align*}
\rho\alpha' + \frac{(1-\rho)\alpha'}{2} = \frac{(1+\rho)\alpha'}{2} = \frac{(1+\rho)(1-\rho/4)}{2} \ge \frac{1}{2} + \frac{\rho}{4}.
\end{align*}
Thus we can apply Lemma~\ref{lem:reduce-to-ILDC} with $\ensuremath{\varepsilon} = \rho/4$ to obtain a subset $I\subseteq [n]$ of size $|I| \ge \rho n/4$, and a decoder $\ensuremath{\mathsf{Dec}}\xspace$ for bits in $I$ which has success rate $1/2+\rho/8$ on average against $\mathfrak{D}$. By Theorem~\ref{thm:lb-channel}, this implies
\begin{align*}
m \ge \exp\tuple{\Omega_{\delta,\rho}\tuple{|I|^{1/(2q'-4)}}} \ge \exp\tuple{\Omega_{\delta,\rho}\tuple{\tuple{\rho n/4}^{\beta^2/(q\ln(4/\rho))}}} = \exp\tuple{n^{\Omega_{\delta,\beta,\rho}(1/q)}},
\end{align*}
as desired.
\end{proof}
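To make the amplification step concrete, the repetition decoder $\ensuremath{\mathsf{Dec}}\xspace_T$ from Lemma~\ref{lem:amplify} can be sketched as follows; here \texttt{dec} is an assumed callable modelling one independent run of the underlying relaxed decoder.
\begin{verbatim}
# The repetition decoder Dec_T of the amplification lemma (sketch).
from collections import Counter

def dec_T(dec, y, m, i, T):
    """Run the relaxed decoder T times; output the bit reaching the
    T/2 threshold, else 'bot'. `dec` returns 0, 1, or 'bot'."""
    counts = Counter(dec(y, m, i) for _ in range(T))
    if counts[0] >= T / 2:
        return 0            # ties broken arbitrarily in favour of 0
    if counts[1] >= T / 2:
        return 1
    return "bot"
\end{verbatim}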
\section{Introduction}\label{sec:intro}
Locally Decodable Codes (LDCs) \cite{KatzT00, SudanTV99} are error-correcting codes $C: \Sigma^n \rightarrow \Sigma^m$ with super-fast decoding algorithms that can recover individual symbols of a {\em message} $x\in \Sigma^n$, even when worst-case errors are introduced in the {\em codeword} $C(x)$. Similarly, Locally Correctable Codes (LCCs) are error-correcting codes $C: \Sigma^n \rightarrow \Sigma^m$ for which there exist very fast decoding algorithms that recover individual symbols of the {\em codeword} $C(x)\in \Sigma^m$, even when worst-case errors are introduced. LDCs/LCCs were first discovered by Katz and Trevisan \cite{KatzT00} and since then have proven to be crucial tools in many areas of computer science, including private information retrieval, probabilistically checkable proofs, self-correction, fault-tolerant circuits, hardness amplification, and data structures (e.g., \cite{BabaiFLS91,LundFKN92,BlumLR93,BlumK95,ChorKGS98,ChenGW13,AndoniLRW17} and surveys \cite{Tre04-survey,Gasarch04}). The {\em parameters} of interest of these codes are their {\em rate}, defined as the ratio between the message length $n$ and the codeword length $m$, their {\em relative minimum distance}, defined as the minimum normalized Hamming distance between any pair of codewords, and their {\em locality} or {\em query complexity}, defined as the number of queries a decoder makes to a received word $y\in \Sigma^m$.

Trade-offs between the achievable parameters of Hamming LDCs/LCCs have been studied extensively over the last two decades \cite{KerenidisW04, WehnerW05, GoldreichKST06, Woodruff07, Yekhanin08, Yekhanin12, DvirGY11, Efremenko12,GalM12, BhattacharyyaDS16, BhattacharyyaG17, bhattacharyya2017lower, DvirSW17,KoppartyMRS17, BhattacharyyaCG20} (see also surveys by Yekhanin \cite{Yekhanin12} and by Kopparty and Saraf \cite{KoppartyS16}). Specifically, for $2$-query Hamming LDCs/LCCs it is known that $m=2^{\Theta(n)}$ \cite{KerenidisW04, GoldreichKST06, Ben-AroyaRW08, bhattacharyya2017lower}. However, for $q>2$ queries, the current gap between upper and lower bounds is superpolynomial in $n$. In particular, the best constructions have super-polynomial codeword length \cite{Yekhanin08,DvirGY11,Efremenko12}, while the most general lower bounds for $q\geq 3$ are of the form $m=\Omega((\frac{n}{\log n})^{1+1/(\ceil{\frac{q}{2}}-1)})$ \cite{KatzT00,KerenidisW04}. In particular, for $q=3$, \cite{KatzT00} showed an $m=\Omega(n^{3/2})$ bound, which was improved in \cite{KerenidisW04} to $m=\Omega(n^2/\log^2 n)$. This was further improved by \cite{Woodruff07,Woodruff12} to $m=\Omega(n^2/\log n)$ for general codes and $m=\Omega(n^2)$ for linear codes. \cite{bhattacharyya2017lower} used new combinatorial techniques to obtain the same $m=\Omega(n^2/\log n)$ bound. A very recent paper \cite{AlrabiahGKM} breaks the quadratic barrier and proves that $m=\Omega(n^3/\poly\log n)$. We note that the exponential lower bound on the length of $3$-query LDCs from \cite{GalM12} holds only for some restricted parameter regimes, and does not apply to the natural ranges of the known upper bounds.

Motivated by this large gap in the constant-query regime, as well as by applications in constructions of Probabilistically Checkable Proofs (PCPs), Ben-Sasson, Goldreich, Harsha, Sudan, and Vadhan \cite{Ben-SassonGHSV06} introduced a relaxed version of LDCs for Hamming errors.
Specifically, the decoder is allowed to output a ``decoding failure'' answer (marked as ``$\bot$''), as long as it errs only with some small probability. More precisely, a \emph{$(q,\delta, \alpha, \rho)$-relaxed LDC} is an error-correcting code satisfying the following properties.
\begin{definition}\label{def:strongRLDC}
A $(q,\delta, \alpha, \rho)$-Relaxed Locally Decodable Code ${C}: \Sigma^n \rightarrow \Sigma^m$ is a code for which there exists a decoder that makes at most $q$ queries to the received word $y$, and satisfies the following further properties:
\begin{enumerate}
\item (Perfect completeness) For every $i\in [n]$, if $y=C(x)$ for some message $x$ then the decoder, on input $i$, outputs $x_i$ with probability $1.$
\item (Relaxed decoding) For every $i\in [n]$, if $y$ is such that $dist(y, C(x))\leq \delta $ for some unique $C(x)$, then the decoder, on input $i$, outputs $x_i$ or $\bot$ with probability $\geq \alpha$.
\item (Success rate) For every $y$ such that $dist(y, C(x))\leq \delta $ for some unique $C(x)$, there is a set $I$ of size $\geq \rho n$ such that for every $i\in I$ the decoder, on input $i$, correctly outputs $x_i$ with probability $\geq \alpha$.
\end{enumerate}
We call an RLDC that satisfies all three conditions a {\em strong} RLDC, and one that satisfies just the first two conditions a {\em weak} RLDC, in which case it is denoted a $(q,\delta, \alpha)$-RLDC. Furthermore, if the $q$ queries are made in advance, before seeing any entries of the received word, then the decoder is said to be {\em non-adaptive}; otherwise, it is called {\em adaptive}.
\end{definition}
The above definition is quite general, in the sense that $dist(a,b)$ can refer to several different distance metrics. In the most natural setting, we use $dist(a,b)$ to mean the ``relative'' Hamming distance between $a,b\in \Sigma^m$, namely $dist(a,b)=|\{i\colon a_i\ne b_i\}|/m$. This corresponds to the standard RLDCs for Hamming errors. As will be clear from the context, we also use $dist(a,b)$ to mean the ``relative'' Edit distance between $a,b \in \Sigma^*$, namely $dist(a, b)=\ED(a, b)/(|a|+|b|)$, where $\ED(a, b)$ is the minimum number of insertions and deletions needed to transform string $a$ into $b$. This corresponds to the new notion introduced and studied here, which we call {\em Insdel RLDCs}. Throughout this paper, we only consider the case where $\Sigma=\{0,1\}$.

Definition \ref{def:strongRLDC} has also been extended recently to the notion of {\em Relaxed Locally Correctable Codes (RLCCs)} by Gur, Ramnarayan, and Rothblum \cite{GurRR20}. RLDCs and RLCCs have been studied in a sequence of exciting works, where new upper and lower bounds have emerged, and new applications to probabilistic proof systems have been discovered \cite{gur2019lower, ChiesaGS20, GurRR20, AsadiS21, GurL21}. Surprisingly, Ben-Sasson \,{\it et~al.}\, \cite{Ben-SassonGHSV06} construct strong RLDCs with $q=O(1)$ queries and $m=n^{1+O(1/\sqrt{q})}$, and more recently Asadi and Shinkar \cite{AsadiS21} improve the bounds to $m=n^{1+O(1/q)}$, in stark contrast with the state-of-the-art constructions of standard LDCs. Gur and Lachish \cite{GurL21} show that these bounds are in fact tight, as for every $q\geq 2$, every weak $q$-query RLDC must have length $m=n^{1+1/O(q^{2})}$ for non-adaptive decoders.
We remark that the lower bounds of \cite{GurL21} hold even when the decoder does not have perfect completeness, and in particular valid message bits are decoded with success probability $2/3.$ Dall'Agnon, Gur, and Lachish \cite{dall2021structural} further extend these bounds to the setting where the decoder is adaptive, with $m=n^{1+1/O(q^{2}\log^2 q)}.$

\subsection{Our results}
As discussed before, since their introduction RLDCs, unlike standard LDCs, have displayed a behaviour amenable to nearly linear-size constructions, with almost matching upper and lower bounds. However, recently \cite{GurL21} conjectured that for $q=2$ queries, there is in fact an exponential lower bound, matching the bounds for standard LDCs. In this paper, our first contribution is a proof of their conjecture, namely a proof that Hamming $2$-query RLDCs require exponential length. In fact, our exponential lower bound for $q=2$ applies even to weak RLDCs, which only satisfy the first two properties (perfect completeness and relaxed decoding), and even for adaptive decoders.
\begin{restatable}{theorem}{twoqrldcmain}
\label{thm:main-2qRLDC}
Let $C \colon \set{0,1}^n \rightarrow \set{0,1}^m$ be a weak adaptive $(2,\delta,1/2+\ensuremath{\varepsilon})$-RLDC. Then $m = 2^{\Omega_{\delta,\ensuremath{\varepsilon}}(n)}$.
\end{restatable}
Our results are the first exponential bounds for RLDCs. Furthermore, combined with the constructions with nearly linear codeword length for some constant number of queries \cite{Ben-SassonGHSV06, AsadiS21}, our results imply that RLDCs experience a ``phase transition''-type phenomenon, where the codeword length drops from being exponential at $q=2$ queries to being almost linear at $q=c$ queries for some constant $c> 2$. In particular, this also implies that there is a query number $q$ where the codeword length drops from being super-polynomial at $q$ to being polynomial at $q+1$. Finding this exact threshold query complexity is an intriguing open question.

As our second contribution, we introduce and study the notion of RLDCs correcting {\em insertions and deletions}, namely Insdel RLDCs. The non-relaxed variants of Insdel LDCs were first introduced in \cite{Ostrovsky-InsdelLDC-Compiler}, and were further studied in \cite{BlockBGKZ20,ChengLZ20,block2021private}. Local decoding in the Insdel setting is motivated by DNA storage \cite{Olgica17}; in particular, \cite{Banaletal-nature2021} demonstrates recent advances in the bio-technological aspects of random access to data in precisely these settings. In \cite{Ostrovsky-InsdelLDC-Compiler,BlockBGKZ20}, the authors give Hamming-to-Insdel reductions which transform any Hamming LDC into an Insdel LDC with rate reduced by a constant multiplicative factor, and locality increased by a $\ensuremath{\mathrm{polylog}}\xspace(m)$ multiplicative factor. Unfortunately, these compilers do not imply constant-query Insdel LDCs, whose existence is still an open question. The results of \cite{blocki2021exponential} show strong lower bounds on the length of constant-query Insdel LDCs. In particular, they show that linear Insdel LDCs with $2$ queries do not exist, general Insdel LDCs for $q= 3$ queries must have $m=\exp(\Omega(\sqrt{n}))$, and for $q\geq 4$ they must have $m=\exp(n^{\Omega(1/q)}).$ In this work we continue the study of locally decodable codes in insertion and deletion channels by proving the first upper and lower bounds regarding the relaxed variants of Insdel LDCs.
We first consider strong Insdel RLDCs, which satisfy all three properties of Definition \ref{def:strongRLDC} and where the notion of distance is now that of relative edit distance. We adapt and extend the results of \cite{blocki2021exponential} to establish strong lower bounds on the codeword length of strong Insdel RLDCs. In particular, we prove that $m= \exp(n^{\Omega(1/q)})$ for any strong Insdel RLDC with locality $q$.
\begin{restatable}{theorem}{strirldcmain}
\label{thm:main-sinsdel RLDC}
Let $C \colon \set{0,1}^n \rightarrow \set{0,1}^m$ be a non-adaptive strong $(q,\delta,1/2+\beta,\rho)$-Insdel RLDC where $\beta>0$. Then for every $q\ge 2$ there is a constant $c_1=c_1(q,\delta,\beta,\rho)$ such that
\begin{align*}
m = \exp\tuple{c_1\cdot n^{\Omega_{\rho}(\beta^2/q)}}.
\end{align*}
Furthermore, the same bound holds even if $C$ does not have perfect completeness. If $C$ has an adaptive decoder, the same bound holds with $\beta$ replaced by $\beta/2^{q-1}$. Formally, there exists a constant $c_2=c_1(q,\delta,\beta/2^{q-1},\rho)$ such that
\begin{align*}
m = \exp\tuple{c_2\cdot n^{\Omega_{\rho}(\beta^2/(q2^{2q}))}}.
\end{align*}
\end{restatable}
Our reduction shown in the proof of \cref{thm:main-2qRLDC}, together with the impossibility results for standard {\em linear} or {\em affine} 2-query Insdel LDCs from \cite{blocki2021exponential}, shows a further impossibility result for linear and for affine $2$-query Insdel RLDCs (see remarks before \cref{cor:linear-2qIRLDC}). A linear code of length $m$ is defined over a finite field $\F$ and is a linear subspace of the vector space $\F^m$, while an affine code is an affine subspace of $\F^m$.

We then consider {\em weak} Insdel RLDCs that only satisfy the first two properties (perfect completeness and relaxed decoding). In contrast with \cref{thm:main-sinsdel RLDC}, we construct weak Insdel RLDCs with constant locality $q=O(1)$ and length $m=n^{1+\gamma}$ for some constant $\gamma \in (0, 1)$. To the best of our knowledge, this is the first positive result in the constant-query regime and the Insdel setting. However, the existence of a constant-query standard Insdel LDC (or even a constant-query strong Insdel RLDC) with any rate remains an open question. Finally, it is easy to see that our exponential lower bound for weak Hamming RLDCs with locality $q=2$ still applies in the Insdel setting, since Insdel errors are more general than Hamming errors. Thus, in the Insdel setting we discover the same ``phase transition''-type phenomenon as for Hamming RLDCs.
\begin{restatable}{theorem}{weakirldcmain}
\label{thm:main-rildc-construction}
For any $\gamma>0$ and $\ensuremath{\varepsilon} \in (0,1/2)$, there exist constants $\delta \in (0,1/2)$ and $q=q(\delta,\ensuremath{\varepsilon},\gamma)$, and non-adaptive weak $(q,\delta,1/2+\ensuremath{\varepsilon})$-Insdel RLDCs $C\colon\set{0,1}^n\rightarrow \set{0,1}^m$ with $m=O(n^{1+\gamma})$.
\end{restatable}
We remark that in the Hamming setting, \cite{Ben-SassonGHSV06} shows that the first two properties of \cref{def:strongRLDC} imply the third property for codes with constant query complexity and which can withstand a constant fraction of errors. Our results demonstrate that, in general, unlike in the Hamming case, the first two properties do not imply the third property for Insdel RLDCs from \cref{def:strongRLDC}.
Indeed, while for strong Insdel RLDCs we have $m= \exp(n^{\Omega(1/q)})$ for codes of locality $q$, there exists $q=O(1)$ for which we have constructions of weak Insdel RLDCs with $m=n^{1+\gamma}.$ This observation suggests that there are significant differences between Hamming RLDCs and Insdel RLDCs. We note that our construction of weak Insdel RLDCs can be modified to obtain strong Insdel Relaxed Locally Correctable Codes (Insdel RLCCs). Informally, an Insdel RLCC is a code for which codeword entries can be decoded to the correct value or $\bot$ with high probability, even in the presence of insdel errors. The formal definition of RLCC is given in \cref{sec:rLCC} (see \cref{def:rLCC}). We have the following corollary. \begin{restatable}{corollary}{strongirlcc}\label{thm:main-rilcc-construction} For any $\gamma>0$ and $\ensuremath{\varepsilon} \in (0,1/2)$, there exist constants $\delta \in (0,1/2)$ and $q=q(\delta,\ensuremath{\varepsilon},\gamma)$, and non-adaptive strong $(q,\delta,1/2 + \ensuremath{\varepsilon}, 1/2 )$-Insdel RLCCs $C\colon \set{0,1}^n\rightarrow \set{0,1}^m$ with $m=O(n^{1+\gamma})$. \end{restatable} \subsection{Overview of techniques} \subsubsection{Exponential Lower Bound for Weak Hamming RLDCs with \texorpdfstring{$q=2$}{q=2}} To simplify the presentation, we assume a non-adaptive decoder in this overview. While the exact same arguments do not directly apply to adaptive decoders\footnote{\label{foot:kt-obs}For standard LDCs Katz and Trevisan \cite{KatzT00} observed that an adaptive decoder could be converted into a non-adaptive decoder by randomly guessing the output $y_j$ of the first query $j$ to learn the second query $k$. Now we non-adaptively query the received codeword for both $y_j$ and $y_k$. If our guess for $y_j$ was correct then we continue simulating the adaptive decoder. Otherwise, we simply guess the output $x_i$. If the adaptive decoder succeeds with probability at least $p \geq 1/2 + \epsilon$ then the non-adaptive decoder succeeds with probability $p' \geq 1/4 + p/2 \geq 1/2 + \epsilon/2$. Unfortunately, this reduction does not preserve perfect completeness as required by our proofs for relaxed $2$-query Hamming RLDCs i.e., if $p=1$ then $p' = 3/4$.}, with a bit more care they can be adapted to work in those settings. At a high level we prove our lower bound by transforming any non-adaptive $2$-query weak Hamming RLDC for messages of length $n$ and $\delta$ fraction of errors into a standard $2$-query Hamming LDC for messages of length $n'=\Omega(n)$, with slightly reduced error tolerance of $\delta/2$. Kerenidis and de Wolf \cite{KerenidisW04} proved that any $2$-query Hamming LDC for messages of length $n$ must have codeword length $m = \exp(\Omega(n))$. Combining this result with our transformation, it immediately follows that any $2$-query weak Hamming RLDC must also have codeword length $m = \exp(\Omega(n))$. While our transformation does not need the third property (success rate) of a strong RLDC, we crucially rely on the property of {\em perfect completeness}, and that the decoder only makes $q=2$ queries. Let $C\colon\set{0,1}^n \rightarrow \set{0,1}^m$ be a weak $(2,\delta,1/2+\ensuremath{\varepsilon})$-RLDC. For simplicity (and without loss of generality), let us assume the decoder $\ensuremath{\mathsf{Dec}}\xspace$ works as follows. 
For message $x$ and input $i \in [n]$, the decoder non-adaptively makes two random queries $j, k \in [m]$, and outputs $f_{j,k}^{i}(y_j, y_k) \in \set{0,1,\perp}$, where $y_j, y_k$ are answers to the queries from a received word $y$, and $f_{j,k}^{i} \colon \ensuremath{{\{0,1\}}}\xspace^2 \rightarrow \set{0,1,\perp}$ is a deterministic function. When there is no error, we have $y_j = C(x)_j$ and $y_k = C(x)_k$. We present the main ideas below, and refer the readers to \cref{sec:2qrldc} for full details.

\paragraph{Fixable codeword bits.} The starting point of our proof is to take a closer look at those functions $f_{j,k}^{i}$ with $\perp$ entries in their truth tables. It turns out that when $f_{j,k}^{i}$ has at least one $\perp$ entry in the truth table, $C(x)_j$ can be fixed to a constant by setting either $x_i=0$ or $x_i=1$, and similarly for $C(x)_k$. To see this, note that the property of perfect completeness forces $f_{j,k}^{i}$ to be $0$ or $1$ whenever $x_i=0$ or $x_i=1$ and there is no error. Thus if neither $x_i=0$ nor $x_i=1$ fixes $C(x)_j$, then there must be two entries of $0$ and two entries of $1$ in the truth table of $f_{j,k}^{i}$, which leaves no space for $\perp$ (see \cref{clm:bot-fix}). Thus, when there is at least one $\perp$ entry in the truth table of $f_{j,k}^{i}$, we say that $C(x)_j$ and $C(x)_k$ are \emph{fixable} by $x_i$. This motivates the definition of the set $S_i$, which contains all indices $j \in [m]$ such that the codeword bits $C(x)_j$ are fixable by $x_i$; and the definition of $T_j$, the set of all indices $i \in [n]$ such that $C(x)_j$ is fixable by the message bit $x_i$.\ It is also natural to pay special attention to queries $j,k$ that are not both contained in $S_i$, since in this case the function $f_{j,k}^{i}$ never outputs $\perp$.

\paragraph{The query structure.} In general, a query set $\set{j,k}$ falls into one of the following three cases: (1) both $j,k$ lie inside $S_i$; (2) both $j,k$ lie outside of $S_i$; (3) one of them lies inside $S_i$ and the other lies outside of $S_i$. It turns out that case (3) essentially never occurs for a decoder with perfect completeness. The reason is that when, say, $j \in S_i$ and $k \notin S_i$, one can effectively pin down every entry in the truth table of $f_{j,k}^{i}$ by using the perfect completeness property, and observe that the output of $f_{j,k}^{i}$ does not depend on $y_k$ at all (see \cref{clm:fixable-or-useless}). Thus in this case we can equivalently view the decoder as only querying $y_j$ where $j \in S_i$, which leads us back to case (1). In what follows, we denote by $E_1$ the event that case (1) occurs, and by $E_2$ the event that case (2) occurs.

\paragraph{The transformation by polarizing conditional success probabilities.} We now give a high level description of our transformation from a weak RLDC to a standard LDC. Let $y$ be a string which contains at most $\delta m/2$ errors from the codeword $C(x)$. We have established that the success probability of the weak RLDC decoder on $y$ is a weighted average of two conditional probabilities
\begin{align*}
\Pr[\ensuremath{\mathsf{Dec}}\xspace(i,y) \in \set{x_i, \perp}] = p_1 \cdot \Pr[\ensuremath{\mathsf{Dec}}\xspace(i,y) \in \set{x_i, \perp} \mid E_1] + p_2 \cdot \Pr[\ensuremath{\mathsf{Dec}}\xspace(i,y) \in \set{x_i, \perp} \mid E_2],
\end{align*}
where $p_1 = \Pr[E_1]$ and $p_2 = \Pr[E_2]$. Let us assume for the moment that $S_i$ has a small size, e.g., $|S_i|\le \delta m/2$.
The idea in this step is to introduce additional errors to the $S_i$-portion of $y$, in a way that drops the conditional success probability $\Pr[\ensuremath{\mathsf{Dec}}\xspace(i,y) \in \set{x_i, \perp} \mid E_1]$ to 0 (see \cref{lem:conditional-zero}). In particular, we modify the bits in $S_i$ to make them consistent with the encoding of any message $\hat{x}$ with $\hat{x}_i=1-x_i$. Perfect completeness thus forces the decoder to output $1-x_i$ conditioned on $E_1$. Note that we have introduced at most $\delta m/2 + |S_i| \le \delta m$ errors in total, meaning that the decoder should still have an overall success probability of $1/2+\ensuremath{\varepsilon}$. Furthermore, now the conditional probability $\Pr[\ensuremath{\mathsf{Dec}}\xspace(i,y) \in \set{x_i, \perp} \mid E_2]$ accounts for all of the overall success probability. Combined with the observation that $\ensuremath{\mathsf{Dec}}\xspace$ never outputs $\perp$ given $E_2$, this suggests the following natural way to decode $x_i$ in the sense of a standard LDC: sample queries $j, k$ according to the conditional probability given $E_2$ (i.e., both $j,k$ lie outside $S_i$) and output $f_{j,k}^{i}(y_j, y_k)$. This gives a decoding algorithm for a standard LDC, with success probability $1/2+\ensuremath{\varepsilon}$ and error tolerance $\delta m/2$ (see \cref{lem:LDC-reduction}), modulo the assumption that $|S_i|\le \delta m/2$.

\paragraph{Upper bounding \texorpdfstring{$|S_i|$}{|S\_i|}.} The final piece in our transformation from weak RLDC to standard LDC is to address the assumption that $|S_i|\le \delta m/2$. This turns out not to be true in general, but it would still suffice to prove that $|S_i| \leq \delta m/2$ for $n' = \Omega(n)$ of the message bits $i$. If we could show that $|T_j|$ is small for most $j \in [m]$, then a double counting argument shows that $|S_i|$ is small for most $i \in [n]$. Unfortunately, if we had $C(x)_j = \bigwedge_{i=1}^n x_i$ for $m/2$ of the codeword bits $j$ then we would also have $|T_j| = n$ for $m/2$ codeword bits and $|S_i| \geq m/2 \geq \delta m/2$ for all message bits $i \in [n]$. We address this challenge by first arguing that any weak RLDC for $n$-bit messages can be transformed into another weak RLDC for $\Omega(n)$-bit messages for which we have $|T_j| \le 3\ln(8/\delta) $ for all but $\delta m/4$ codeword bits. The transformation works by fixing some of the message bits and then eliminating codeword bits that are fixed to constants. Intuitively, if some $C(x)_j$ is fixable by many message bits, it will have very low entropy (e.g., $C(x)_j$ is the AND of many message bits) and hence contain very little information and can (likely) be eliminated. We make this intuition rigorous through the idea of random restriction: for each $i \in [n]$, we fix $x_i=0$, $x_i=1$, or leave $x_i$ free, each with probability $1/3$. The probability that $C(x)_j$ is not fixed to a constant is at most $(1-1/3)^{|T_j|}\le \delta/8$, provided that $|T_j| \ge 3\ln(8/\delta)$. After eliminating codeword bits that are fixed to constants, we show that with probability at least $1/2$ at most $\delta m/4$ codeword bits $C(x)_j$ with $|T_j| \ge 3\ln(8/\delta)$ survive\footnote{We are oversimplifying a bit for ease of presentation. In particular, the random restriction process may cause a codeword bit $C(x)_j$ to be fixable by a new message bit $x_i$ that did not belong to $T_j$ before the restriction; we thank an anonymous reviewer for pointing this out to us.
Nevertheless, for our purpose it is sufficient to eliminate codeword bits that initially have a large $|T_j|$. See the formal proof for more details.}. Note that with high probability the random restriction leaves at least $n/6$ message bits free. Thus, there must exist a restriction which leaves at least $n/6$ message bits free while ensuring that $|T_j| \ge 3\ln(8/\delta)$ for at most $\delta m/4$ of the remaining codeword bits $C(x)_j$. We can now apply the double counting argument to conclude that $|S_i| \le \delta m/2$ for $\Omega(n)$ message bits, completing the transformation.

\paragraph{Adaptive decoders.} For possibly adaptive decoders, we follow the same proof strategy. The new idea and main difference is that we focus on the first query made by the decoder, which is always non-adaptive. We show that the first query determines a similar query structure, which is the key to the transformation to a standard LDC. More details can be found in \Cref{subsec:adaptive-2qRLDC}.

\subsubsection{Lower Bounds for Strong Insdel RLDCs}
We recall that a strong Insdel RLDC $C$ is a weak Insdel RLDC which satisfies an additional property: for every $x \in \ensuremath{{\{0,1\}}}\xspace^n$ and $y \in \ensuremath{{\{0,1\}}}\xspace^{m'}$ such that $\ED(C(x), y) \le \delta\cdot 2m$, there exists a set $I_y \subseteq [n]$ of size $|I_y|\ge \rho n$ such that for every $i \in I_y$, we have $\Pr[\ensuremath{\mathsf{Dec}}\xspace(i,y) = x_i] \ge \alpha$. In other words, for a $\rho$-fraction of the message bits, the decoder can correctly recover them with high probability, just like in a standard Insdel LDC. Towards obtaining a lower bound on the codeword length $m$, a natural idea would be to view $C$ as a standard Insdel LDC just for that $\rho$-fraction of message bits, and then apply the exponential lower bound for standard Insdel LDCs from \cite{blocki2021exponential}. This idea would succeed if the message bits correctly decoded with high probability were the same for all potential corrupted codewords $y$. However, it could be the case that $i \in I_y$ for some strings $y$, whereas $i \notin I_{y'}$ for other strings $y'$. Indeed, allowing the set $I_y$ to depend on $y$ is the main reason why very short constant-query Hamming RLDCs exist.

We further develop this observation to obtain our lower bound. We use an averaging argument to show the existence of a \emph{corruption-independent} set $I$ of message bits with $|I|=\Omega(n)$, which the decoder can recover with high probability. To this end, we need to open the ``black box'' of the lower bound result of Blocki \,{\it et~al.}\, \cite{blocki2021exponential}. The proof in \cite{blocki2021exponential} starts by constructing an error distribution $\+E$ with several nice properties, and deduces the exponential lower bound based solely on the fact that the Insdel LDC should, on average (i.e., for a uniformly random message $x$), correctly recover each bit with high probability under $\+E$ (see \cref{thm:lb-channel}). One of the nice properties of $\+E$ is that it is oblivious to the decoding algorithm $\ensuremath{\mathsf{Dec}}\xspace$. Therefore, it makes sense to consider the average success rate against $\+E$, i.e., $\Pr[\ensuremath{\mathsf{Dec}}\xspace(i,y)=x_i]$, where $i \in [n]$ is a uniformly random index, $x \in \ensuremath{{\{0,1\}}}\xspace^n$ is a uniformly random string, and $y$ is a random string obtained by applying $\+E$ to $C(x)$.
By replacing $\perp$ with a uniformly random bit in the output of $\ensuremath{\mathsf{Dec}}\xspace$, the average success rate is at least $\rho\alpha+(1-\rho)\alpha/2=(1+\rho)\alpha/2$, since there is a $\rho$-fraction of indices which $\ensuremath{\mathsf{Dec}}\xspace$ correctly recovers with probability at least $\alpha$, and for the remaining $(1-\rho)$-fraction of indices the random guess provides a success rate of at least $\alpha/2$. Assuming $\alpha$ is sufficiently close to 1, which we can achieve by repeating the queries independently a constant number of times and taking a (thresholded) majority vote, the average success rate against $\+E$ is strictly above $1/2$. Therefore, there exists a constant fraction of indices for which the success rate against $\+E$ is still strictly above $1/2$, and the number of queries remains constant. This is sufficient for the purpose of applying the argument in~\cite{blocki2021exponential} to get an exponential lower bound. Full details appear in \cref{sec:srldc}. \subsubsection{Constant-Query Weak Insdel RLDC} \input{construction-overview} \input{openquestions} \input{relatedwork} \subsection{Organization} The remainder of the paper is dedicated to proving all our results presented in \cref{sec:intro}. We give general preliminaries and recall some prior results used in our proofs in \cref{sec:prelims}. We prove \cref{thm:main-2qRLDC} in \cref{sec:2qrldc}, prove \cref{thm:main-sinsdel RLDC} in \cref{sec:srldc}, and prove \cref{thm:main-rildc-construction,thm:main-rilcc-construction} in \cref{sec:wrLDC}. \section{Acknowledgements} We are indebted to the anonymous reviewers who helped us improve the presentation of the paper. \bibliographystyle{alpha} \subsubsection{Exponential Lower Bound for Weak Hamming RLDCs with \texorpdfstring{$q=2$}{q=2}} At a high level we prove our lower bound by transforming any non-adaptive $2$-query weak Hamming RLDC for messages of length $n$ tolerating a $\delta$ fraction of errors into a standard $2$-query Hamming LDC for messages of length $n'=\Omega(n)$, with slightly reduced error tolerance $\delta m/2$. Kerenidis and de Wolf \cite{KerenidisW04} proved that any $2$-query Hamming LDC for messages of length $n$ must have codeword length $m = \exp(\Omega(n))$. Combining this result with our transformation, it immediately follows that any $2$-query weak Hamming RLDC must also have codeword length $m = \exp(\Omega(n))$. Our current proof only works for non-adaptive decoding, but we conjecture that the lower bound also holds for adaptive $2$-query weak Hamming RLDCs, which is left as an open question.\footnote{For standard LDCs Katz and Trevisan \cite{KatzT00} observed that an adaptive decoder could be converted into a non-adaptive decoder by randomly guessing the output $y_j$ of the first query $j$ to learn the second query $k$. We then non-adaptively query the received codeword for both $y_j$ and $y_k$. If our guess for $y_j$ was correct then we continue simulating the adaptive decoder. Otherwise, we simply guess the output $x_i$. If the adaptive decoder succeeds with probability at least $p \geq 1/2 + \epsilon$ then the non-adaptive decoder succeeds with probability $p' \geq 1/4 + p/2 \geq 1/2 + \epsilon/2$.
Unfortunately, this reduction does not preserve perfect completeness as required by our proofs for relaxed $2$-query Hamming RLDCs, i.e., if $p=1$ then $p' = 3/4$.} The key technical challenge is the transformation from weak Hamming RLDCs to standard Hamming LDCs. While our transformation does not need the third property (success rate) of a strong RLDC, we crucially rely on the property of {\em perfect completeness}, and on the fact that the decoder makes only $q=2$ queries. Intuitively, we argue that there is a large subset $Z \subseteq [n]$ of message indices, of size $|Z| = \Omega(n)$, which can be locally decoded in the standard sense whenever the remaining message bits $\overline{Z}$ are fixed appropriately, i.e., $x_{\overline{Z}} = \sigma$ for some fixed string $\sigma \in \{0,1\}^{|\overline{Z}|}$. Now if $C:\{0,1\}^n \rightarrow \{0,1\}^m$ is our relaxed LDC we can define another standard LDC $C':\{0,1\}^{|Z|} \rightarrow \{0,1\}^m$ as follows: given input $x' \in \{0,1\}^{|Z|}$ we define $C'(x') = C(x)$ where $x$ is the unique string such that $x_Z = x'$ and $x_{\overline{Z}} = \sigma$. The local decoder, on input $i'$, finds the corresponding index $i \in Z$ and simulates the relaxed decoder to obtain codeword queries $j,k$ conditioned on the event that $j,k$ both lie outside of some set $S_i$ (which we define below). For this purpose we use rejection sampling, and we define $S_i$ to ensure that if $j,k \not \in S_i$ then the relaxed decoder will never output $\bot$. Using this we will argue that as long as the received word is close to $C'(x')$, for any $i' \leq |Z|$ the local decoder will output the correct answer $x'_{i'}$ with probability {\em at least} $1/2 + \epsilon$. We now give an intuitive explanation of the sets $S_i$ and $Z$. In the following, we let $x \in \{0,1\}^n$ denote the input message for our RLDC, and $y$ denote the received word for $x$. If there is no error, then $y=C(x)$ is the corresponding codeword for $x$; from here onward, $y$ refers to this uncorrupted codeword. Without loss of generality we can assume that if the decoder wants to recover message bit $x_i$, and queries two bits $y_j$ and $y_k$, then the decoder will output $f^i_{j,k}(y_j,y_k)$ for some function $f^i_{j,k} :\{0,1\}^2 \rightarrow \{0,1,\bot\}$. Observe that for any pair $\{j,k\}$ of possible queries, the truth table of $f^i_{j,k}$ can contain {\em at most} two $\bot$'s. This is because, by perfect completeness, the truth table must contain at least one $1$ and at least one $0$, corresponding to the cases $x_i=1$ and $x_i=0$ respectively, and there are only $4$ entries in the truth table. Now, if the truth table contains no $\bot$'s, then $f^i_{j,k}$ behaves like a standard (non-relaxed) decoder. Otherwise, if the truth table contains at least one $\bot$, then either it contains exactly one $1$, or it contains exactly one $0$ (or exactly one of each). Suppose that the truth table contains exactly one $0$ (the case of exactly one $1$ is completely symmetric). In particular, for some $(b_1,b_2) \in \{0,1\}^2$ we have $f_{j,k}^i(b_1,b_2)=0$ and $f_{j,k}^{i}(b_1',b_2') \neq 0$ for any other input $(b_1',b_2') \neq (b_1,b_2)$. This means that if we fix the $i$'th bit of the message to $x_i=0$ then this {\em must} set bits $j$ and $k$ of the codeword to $y_j=b_1$ and $y_k=b_2$. In this case we say that $y_j$ (resp. $y_k$) is {\em fixable} by $x_i$. Observe that for a particular index $j \in [m]$ it is possible for $y_j$ to be fixable by multiple $x_i$'s, e.g., if $y_j = x_{i_1} \wedge x_{i_2} \wedge x_{i_3}$ (resp.
$y_j = x_{i_1} \vee x_{i_2} \vee x_{i_3}$) then $y_j$ can be fixed to $0$ (resp. $1$) by setting any one of the three input bits $x_{i_1},x_{i_2},x_{i_3}$ to $0$ (resp. $1$). Given an index $i \in [n]$ of the message, we can now define $S_i$ to be the set of all codeword indices $j \in [m]$ such that $y_j$ is fixable by $x_i$. Note that by the above discussion, if the decoder queries two indices $j, k \not \in S_i$, then the decoder will never output $\bot$ since there is no $\bot$ in the truth table of the function $f^i_{j,k}$. Similarly, given a codeword index $j \in [m]$, we can let $T_j$ denote the set of all message indices $i \in [n]$ for which $y_j$ is fixable by $x_i$. One might hope to argue that for at least $\Omega(n)$ indices $i \in [n]$, the size of $S_i$ is small. This is not necessarily true, so we in fact prove a slightly modified statement: we first fix all but $\Omega(n)$ of the message bits, and then show that for each of the remaining message bits the size of $S_i$ is small. To do this, we let $W$ be the set of all codeword indices $j\in [m]$ for which $|T_j| \geq 3 \ln (8/\delta)$, and define $S_{i,-} = S_i \cap \overline{W}$ and $S_{i,+} = S_i \cap W$. Our argument proceeds in several steps: \begin{enumerate} \item We argue that there are $o(n)$ indices $i$ for which $\left| S_{i,-}\right| \geq \delta m/4$. This follows from the observation that $\sum_{i=1}^n |S_{i,-}| = \sum_{j \in [m]\setminus W} |T_j| \leq 3 \ln(8/\delta)\, m$, so by Markov's inequality at most $12\ln(8/\delta)/\delta = O(1)$ indices $i$ can have $\left|S_{i,-}\right| \geq \delta m/4$. \item While we cannot directly argue that $\left| S_{i,+}\right|$ is small for most $i \in [n]$, we do have $\left|S_{i,+}\right| \leq |W|$ for all $i \in [n]$. Thus, if we could ensure that $|W| \leq \delta m/4$, it would immediately follow that $\left|S_{i,+}\right| \leq |W| \leq \delta m/4$ for all $i \in [n]$. A probabilistic argument shows that by fixing {\em at most} $5n/6$ of the message bits we can ensure that we fix all but $\delta m/4$ of the codeword bits with indices in $W$. In particular, we pick a subset $J \subseteq [n]$ by including each $i \in [n]$ independently with probability $1/3$, and then we pick a random string $\sigma \in \{0,1\}^{|\overline{J}|}$. Let $W' \subseteq W$ be the set of all codeword indices $j \in W$ whose corresponding codeword bits are not already fixed by the restriction $x_{\overline{J}} = \sigma$. We argue that the expected size of $W'$ is at most $ \delta m/8$. To see this, note that for each $j \in W$ we have $|T_j| \geq 3 \ln(8/\delta)$ independent choices of $x_i$ to fix the bit $y_j$, i.e., for each $i \in T_j$ there is some bit $b$ such that setting $x_i=b$ fixes the bit $y_j$. The probability of the event that $i \in \overline{J}$ and $x_i=b$ is $(2/3)(1/2)=1/3$. Thus, the probability that $y_j$ is not fixed is at most $(1-1/3)^{|T_j|} \leq (1-1/3)^{3 \ln (8/\delta)} \leq \delta/8$ and the expected size of $|W'|$ is at most $\delta m/8$. Markov's inequality now implies that $\Pr[|W'| \geq \delta m/4] \leq 1/2$, and standard concentration bounds imply that $|\overline{J}| \leq 5n/6$ with high probability. Thus, we can conclude that there exists a set $J \subseteq [n]$ of size $|J| \geq n/6$ and an assignment $\sigma \in \{0,1\}^{|\overline{J}|}$ of the remaining message bits in $\overline{J}$, which fixes all but $\delta m/4$ of the codeword bits with indices in $W$, i.e., such that $|W'| \leq \delta m/4$.
We can now use $|W'|$ to upper bound $\left|S_{i,+}\right|$ for the unfixed message bits $x_i$, i.e., $\left| S_{i,+} \cap W' \right| \le |W'| \leq \delta m/4 $. \item Fixing the set $J \subseteq [n]$ and the string $\sigma \in \{0,1\}^{|\overline{J}|}$ defined above, we can now define $Z \subseteq J$ as the set of indices $i \in J$ such that $|S_{i,-}| \leq \delta m/4$. Observe that $|Z| \geq n/6-o(n)$ and, in particular, we will obtain a standard $2$-query LDC for $|Z|$-bit messages with $|Z| = \Omega(n)$. Given a message $x' \in \{0,1\}^{|Z|}$ we encode $x'$ by picking a string $x \in \{0,1\}^n$ such that $x_Z = x'$ and $x_{\overline{J}} = \sigma$ (say, setting the bits in $J \setminus Z$ to $0$), and then output $C(x)$ where $C$ is the encoding function of the RLDC. Finally, we can argue that for each $i' \leq |Z|$ the bit $x'_{i'}$ can be locally decoded from any received word that is sufficiently close to $C(x)$, which we explain next. \item Let $U \subseteq [m]$ denote the set of all codeword bits whose output is {\em not} fixed by the restriction $x_{\overline{J}} = \sigma$. Let $x \in \{0,1\}^n$ be consistent with the restriction $x_{\overline{J}} = \sigma$ and consider some corrupted word $y'$ whose Hamming distance to $y=C(x)$ is upper bounded by $\delta m/2$. For an index $i \in Z$ let $p_i$ denote the probability that the local decoder, given oracle access to $y'$, outputs $x_i$ conditioned on the event that neither of the queries $j$ or $k$ is in the set $S_i \cap U$. Our goal is to prove that $p_i \geq 1/2 + \ensuremath{\varepsilon}$. If this is true, then our local decoding algorithm can simulate the relaxed local decoder to obtain queries $j,k \not \in S_i \cap U$: if the relaxed decoder outputs $j,k$ such that $j \in S_i \cap U$ or $k \in S_i \cap U$, we simply rewind and repeat the simulation until both $j,k \not \in S_i \cap U$. Suppose for contradiction that we did not have $p_i \geq 1/2 + \ensuremath{\varepsilon}$. Then we will construct a new word $y''$ with Hamming distance at most $\delta m/2$ (resp. $\delta m$) from $y'$ (resp. $y$) which contradicts the second property of our relaxed decoder, i.e., the probability that the relaxed decoder, given oracle access to $y''$, will output the wrong answer $1-x_i$ is unacceptably high. A key insight in our analysis is that for any index $i \in Z$ we have $|S_i \cap U| \leq |S_{i,-}|+ |S_{i,+} \cap U| \leq \delta m/4 + |W'| \leq \delta m/2$. Let $\hat{x} \in \{0,1\}^n$ match $x$ on every index except $i$, i.e., $\hat{x}_i=1-x_i$. We now define a string $y''$ which matches the codeword $C(\hat{x})$ on indices $S_i \cap U$ (i.e., $y''_j = C(\hat{x})_j$ for all $j \in S_i \cap U$) and matches the received word $y'$ on all other indices, i.e., $y''_j= y'_j$ for all $j \not \in S_i \cap U$. We make the following observations: (1) The Hamming distance between $y''$ and $y'$ is upper bounded by $|S_i \cap U| \leq \delta m/2$; by the triangle inequality the Hamming distance between $y''$ and $y$ is at most $\delta m$. (2) Conditioning on the event that the relaxed decoder queries $j,k \in S_i \cap U$, the output will {\em always} be incorrect, i.e., equal to $1-x_i$, when the received word is $y''$. This follows from perfect completeness because $y''$ matches $C(\hat{x})$ on indices $S_i \cap U$ and because $\hat{x}_i=1-x_i$. (3) Conditioning on the event that the decoder queries $j \in S_i \cap U$ and $k \not \in S_i \cap U$ (or vice versa), the output will {\em always} be incorrect, i.e., equal to $1-x_i$.
The argument in this case is a bit more subtle, but we observe that if $j \not \in S_i$ or $k \not \in S_i$ then the decoding function $f_{j,k}^i$ has no $\bot$'s in its truth table (otherwise we would have both $j,k \in S_i$). If $k \not \in U$ then $y_k$ was already fixed by the partial assignment $x_{\overline{J}}= \sigma$ and $y_j$ must encode $x_i$, that is, the encoding function either sets the $j$-th codeword bit to $y_j = x_i$ or sets it to $y_j = 1-x_i$. Similarly, in \cref{clm:fixable-or-useless}, we argue that if $j \in S_i \cap U$ and $k \not \in S_i$, then we either have $y_j=x_i$ (and, by perfect completeness, $f_{j,k}^i(b_1,b_2) = b_1$) or we have $y_j = 1-x_i$ (and, by perfect completeness, $f_{j,k}^i(b_1,b_2) =1- b_1$). In either case, we have $f_{j,k}^i(y_j'',y_k'') = \hat{x}_i = 1-x_i$. (4) Conditioning on the event that the relaxed decoder queries indices $j,k \not \in S_i \cap U$, the decoder will provide the same output for the strings $y''$ and $y'$, since $y''$ and $y'$ agree outside $S_i \cap U$. Note that if $j,k \not \in S_i$ then the decoding function $f_{j,k}^i$ cannot contain a $\bot$ in its truth table, i.e., conditioning on $j,k \not \in S_i$ the probability that we output $\bot$ is $0$. Thus, the probability of outputting the incorrect bit $1-x_i$ is at least $1-p_i$. In particular, the conditional probability of outputting the wrong bit $1-x_i$ is at least $1-p_i$ whether we condition on the event that $j,k \in S_i \cap U$, the event that $j,k \not \in S_i \cap U$, or the event that exactly one of $j,k$ lies in $S_i \cap U$. It follows that the overall probability that the relaxed decoder (given oracle access to $y''$) outputs the wrong answer $1-x_i$ is too high, i.e., at least $1-p_i > 1/2 - \ensuremath{\varepsilon}$. \end{enumerate} The full details appear in Section \ref{sec:2qrldc}. \section{Open Questions}\label{sec:openproblems} \paragraph{Exact ``phase-transition'' thresholds.} Our results show that both in the Hamming and Insdel settings there is a constant $q$ such that every $q$-query RLDC requires super-polynomial codeword length, while there exists a $(q+1)$-query RLDC of polynomial codeword length. Finding the precise $q$ remains an intriguing open question. Further, a more refined understanding of the codeword length of RLDCs making $3$, $4$, or $5$ queries is another important direction; analogous questions have led to much progress in the understanding of LDC variants. \paragraph{Constant-query strong Insdel RLDCs/RLCCs.} While we do construct the first weak RLDCs in the Insdel setting, the drawback of our constructions is that our codes do not satisfy the third property of Definition \ref{def:strongRLDC}. Building strong Insdel RLDCs remains an open question. We note that our lower bounds imply that for a constant number of queries, such codes (if they exist) must have exponential codeword length. \paragraph{Applications of local Insdel codes.} As previously mentioned, Hamming LDCs/RLDCs have so far found many applications, such as private information retrieval, probabilistically checkable proofs, self-correction, fault-tolerant circuits, hardness amplification, and data structures. Are there analogous or new applications of the Insdel variants in the broader area of computing? \paragraph{Lower bounds for Hamming RLDCs/LDCs.} Our $2$-query lower bound for Hamming RLDCs crucially uses the perfect completeness property of the decoder. An immediate question is whether the bound still holds if we allow the decoder to have imperfect completeness.
We also note that the argument in our exponential lower bound for $2$-query Hamming RLDCs fails to hold for alphabets other than the binary alphabet, and we leave the extension to larger alphabet sizes as an open problem. Another related question is to understand if one can leverage perfect completeness and/or random restrictions to obtain improved lower bounds for $q\geq 3$-query standard Hamming LDCs. Perfect completeness has been explicitly used before to show exponential lower bounds for $2$-query LCCs \cite{bhattacharyya2017lower}. \section{Preliminaries}\label{sec:prelims} For a natural number $n \in \ensuremath{\mathbb{N}}$, we let $[n] \coloneqq \{1,2,\dotsc, n\}$. We let ``$\circ$'' denote the standard string concatenation operation. For a string $x \in \ensuremath{{\{0,1\}}}\xspace^*$ of finite length, we let $|x|$ denote the length of $x$. For $i \in [|x|]$, we let $x[i]$ denote the $i$-th bit of $x$. Furthermore, for $I \subseteq [|x|]$, we let $x[I]$ denote the subsequence $x[i_1] \circ x[i_2] \circ \cdots \circ x[i_\ell]$, where $i_1 < i_2 < \cdots < i_\ell$ are the elements of $I$ and $\ell = |I|$. For two strings $x, y \in \ensuremath{{\{0,1\}}}\xspace^n$ of length $n$, we let $\ensuremath{\mathsf{HAM}}(x,y)$ denote the \emph{Hamming distance} between $x$ and $y$; \text{i.e.}\xspace, $\ensuremath{\mathsf{HAM}}(x,y) \coloneqq \abs{\{ i \in [n] \colon x_i \neq y_i \}}$. Similarly, we let $\ED(x,y)$ denote the \emph{edit distance} between $x$ and $y$; \text{i.e.}\xspace, $\ED(x,y)$ is the minimum number of insertions and deletions needed to transform the string $x$ into the string $y$. We often discuss the \emph{relative Hamming distance} (resp., \emph{relative edit distance}) between $x$ and $y$, which is simply the Hamming distance normalized by $n$, \text{i.e.}\xspace, $\ensuremath{\mathsf{HAM}}(x,y)/n$ (resp., the edit distance normalized by $|x| + |y|$, \text{i.e.}\xspace, $\ED(x,y)/(|x|+|y|)$). Finally, the \emph{Hamming weight} of a string $x$ is the number of non-zero entries of $x$, which we denote as $\mathsf{wt}(x) \coloneqq |\{i \in [|x|] \colon x_i \neq 0\}|$. For completeness, we recall the definition of a classical locally decodable code, or simply a \emph{locally decodable code}.
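Before stating it, we illustrate the edit distance just defined with a minimal Python sketch; the sketch and the helper name \texttt{edit\_distance} are our own illustration, not a routine from the paper. It computes $\ED(x,y)$ by the standard dynamic program over prefixes, using insertions and deletions only.

\begin{verbatim}
def edit_distance(x: str, y: str) -> int:
    # ED(x, y): minimum number of insertions and deletions
    # transforming x into y (substitutions are not allowed).
    n, m = len(x), len(y)
    # dp[i][j] = ED between the prefixes x[:i] and y[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i  # delete all i remaining symbols of x
    for j in range(m + 1):
        dp[0][j] = j  # insert all j remaining symbols of y
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]        # match, no edit
            else:
                dp[i][j] = 1 + min(dp[i - 1][j],   # delete x[i-1]
                                   dp[i][j - 1])   # insert y[j-1]
    return dp[n][m]

assert edit_distance("0110", "0101") == 2  # delete a 1, insert a 1
\end{verbatim}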
\begin{definition}[Locally Decodable Codes]\label{def:LDC} A $(q, \delta, \alpha)$-Locally Decodable Code $C \colon \Sigma^n \rightarrow \Sigma^m$ is a code for which there exists a randomized decoder that makes at most $q$ queries to the received word $y$ and satisfies the following property: for every $i \in [n]$, if $y$ is such that $\mathrm{dist}\hspace{-1pt}(y, C(x)) \leq \delta$ for some unique $C(x)$, then the decoder, on input $i$, outputs $x_i$ with probability $\geq \alpha$. Here, the randomness is taken over the random coins of the decoder, and $\mathrm{dist}\hspace{-1pt}$ is a normalized metric. If $\mathrm{dist}\hspace{-1pt}$ is the relative Hamming distance, then we say that the code is a Hamming LDC; similarly, if $\mathrm{dist}\hspace{-1pt}$ is the relative edit distance, then we say that the code is an Insdel LDC. \end{definition} We recall the general $2$-query Hamming LDC lower bound \cite{KerenidisW04, Ben-AroyaRW08}. \begin{theorem}[\cite{KerenidisW04, Ben-AroyaRW08}]\label{thm:two-query-lb} For constants $\delta, \ensuremath{\varepsilon} \in (0,1/2)$ there exists a constant $c = c(\delta, \ensuremath{\varepsilon}) \in (0,1)$ such that if $C \colon \ensuremath{{\{0,1\}}}\xspace^n \rightarrow \ensuremath{{\{0,1\}}}\xspace^m$ is a $(2, \delta, 1/2+\ensuremath{\varepsilon})$ Hamming LDC then $m \geq 2^{cn-1}$. \end{theorem} In our weak Insdel RLDC construction, we utilize a weak Hamming RLDC due to \cite{Ben-SassonGHSV06}. \begin{lemma}[\cite{Ben-SassonGHSV06}]\label{lem:weak-ham-rldc} For constants $\ensuremath{\varepsilon}, \delta \in (0,1/2)$ and $\gamma \in (0,1)$, there exists a constant $q = O_{\delta,\ensuremath{\varepsilon}}(1/\gamma^2)$ and a weak $(q,\delta, 1/2+\ensuremath{\varepsilon})$-Hamming RLDC $C \colon \ensuremath{{\{0,1\}}}\xspace^n \rightarrow \ensuremath{{\{0,1\}}}\xspace^m$ with $m = O(n^{1+\gamma})$. Moreover, the decoder of this code is non-adaptive. \end{lemma} Our construction additionally utilizes the well-known Schulman-Zuckerman Insdel codes \cite{SchZuc99}. \begin{lemma}[Schulman-Zuckerman (SZ) Code \cite{SchZuc99}]\label{lem:sz-code} There exist constants $\beta \geq 1$ and $\delta > 0$ such that for all large enough $t > 0$, there exists a code $C \colon \ensuremath{{\{0,1\}}}\xspace^t \rightarrow \ensuremath{{\{0,1\}}}\xspace^{\beta t}$ capable of decoding from a $\delta$-fraction of Insdel errors, with the additional property that for every $x \in \ensuremath{{\{0,1\}}}\xspace^t$ and $y = C(x)$, every substring $y'$ of $y$ with length at least $2$ has Hamming weight $\geq \floor{|y'|/2}$. \end{lemma} Our strong Insdel RLCC construction relies on a weak Hamming RLCC. We utilize the following weak Hamming RLCC implicit in \cite{AsadiS21}. \begin{lemma}[Implied by Theorem 1 of \cite{AsadiS21}] \label{lem:rlcc} For every sufficiently large $q\in \ensuremath{\mathbb{N}}$ and every $\ensuremath{\varepsilon}\in (0,1/2)$, there is a constant $\delta$ such that there exists a weak $(q, \delta, 1/2 + \ensuremath{\varepsilon})$-relaxed Hamming Locally Correctable Code $C\colon \set{0,1}^n\rightarrow \set{0,1}^m$ with $m = n^{1+O(1/q)}$. Moreover, the decoder of this code is non-adaptive. \end{lemma} \subsection{Further discussion about related work} \paragraph{Insdel codes.} The study of error correcting codes for insertions and deletions was initiated by Levenshtein \cite{Levenshtein_SPD66}.
Although constructing codes for insdel errors is strictly more challenging than for Hamming errors, and progress has accordingly been slow, strong recent interest in these codes has led to many exciting results \cite{SchZuc99, Kiwi_expectedlength, guruswami2017deletion,HaeuplerS17, GuruswamiL18,HaeuplerSS18,HaeuplerS18,BrakensiekGZ18, ChengJLW18, ChengHLSW19, ChengJ0W19, GuruswamiL19,HaeuplerRS19,Haeupler19, LiuTX20,GuruswamiHS20, ChengGHL21, ChengL21} (see also the excellent surveys \cite{Sloane2002OnSC,Mercier2010ASO,Mitzenmachen-survey, haeupler2021synchronization}). \paragraph{Insdel LDCs.} \cite{OPS07} gave private-key constructions of LDCs with $m=\Theta(n)$ and locality $\ensuremath{\mathrm{polylog}}\xspace(n)$. \cite{BlockiKZ19} extended the construction from \cite{OPS07} to settings where the sender/decoder do not share randomness, but the adversarial channel is resource bounded. \cite{block2021private} applied the \cite{BlockBGKZ20} compiler to the private-key Hamming LDC of \cite{OPS07} (resp. the resource-bounded LDCs of \cite{BlockiKZ19}) to obtain private-key Insdel LDCs (resp. resource-bounded Insdel LDCs) with constant rate and $\ensuremath{\mathrm{polylog}}\xspace(n)$ locality. Insdel LDCs have also recently been studied for {\em computationally bounded channels}, introduced in \cite{Lipton94}. Such channels can introduce a bounded number of adversarial errors but, unlike general Hamming channels, do not have unlimited computational power; instead, they operate with bounded resources. As expected, in many such limited-resource settings one can construct codes with strictly better parameters than is possible in general \cite{GopalanLD04,MicaliPSW05, Guruswami_Smith:2016, ShaltielS16}. LDCs for these channels under Hamming error were studied in \cite{OPS07, HemenwayO08, HemenwayOSW11, HemenwayOW15, BlockiGGZ19,BlockiKZ19}. \cite{block2021private} applied the \cite{BlockBGKZ20} compiler to the Hamming LDC of \cite{BlockiKZ19} to obtain a constant-rate Insdel LDC with $\ensuremath{\mathrm{polylog}}\xspace(n)$ locality for resource-bounded channels. The work of \cite{ChengLZ20} proposes the notion of locally decodable codes with randomized encoding, in both the Hamming and edit distance regimes, in the setting where the channel is oblivious to the encoded message or where the encoder and decoder share randomness. For edit errors they obtain codes with $m=O(n)$ or $m= n \log n$ and $\ensuremath{\mathrm{polylog}}\xspace(n)$ query complexity. However, even in settings with shared randomness, or where the channel is oblivious or resource bounded, there are no known constructions of Insdel LDCs with constant locality. Locality in the study of insdel codes was also considered in \cite{HaeuplerS18}, which constructs explicit synchronization strings that can be locally decoded. \section{Lower Bounds for Strong Insdel RLDCs}\label{sec:srldc}
In this section, we prove \Cref{thm:main-sinsdel RLDC}. We recall that a strong $(q,\delta,\alpha,\rho)$-Insdel RLDC satisfies all three conditions in \cref{def:strongRLDC}, and here we are mainly interested in the case where $q$ is a constant and $\alpha=1/2+\beta$ for some $\beta > 0$. In fact, \cref{thm:main-sinsdel RLDC} would still hold even without perfect completeness (i.e., Condition 1 in \cref{def:strongRLDC}), as our proof does not rely on this condition. A corollary of this observation is that essentially the same lower bound also holds for strong Insdel RLDCs with adaptive decoders. This is because the Katz--Trevisan reduction from adaptive to non-adaptive decoders preserves Conditions 2 and 3 in \cref{def:strongRLDC}, with the same $\rho$ and a mildly worse $\alpha=1/2+\beta/2^{q-1}$ (see \cref{foot:kt-obs}). Our proof relies on the following result, which is implicit in \cite{blocki2021exponential}; that work shows an exponential lower bound on the length of constant-query Insdel LDCs. The core of their argument is the construction of an error distribution $\+D$, from which they derive properties that any such code must satisfy, and these properties imply the exponential lower bound. As remarked in Section 4.1 of \cite{blocki2021exponential}, $\+D$ is oblivious to the decoding algorithm. That means their result holds even if the code is only required to handle an error pattern much more innocuous than adversarial errors. This stronger statement allows us to define the notion of ``locally decodable on average against $\+D$'', which would otherwise not be well-defined if the adversary were adaptive to the decoding strategy. \begin{theorem}[\cite{blocki2021exponential}]\label{thm:lb-channel} Let $\delta \in (0,1)$ be a constant. There exists a channel $\mathfrak{D}$ for $m$-bit strings with the following properties. \begin{itemize} \item For every $s \in \set{0,1}^m$, $\Pr_{s'\sim \mathfrak{D}(s)}[\ED\tuple{s', s} > \delta\cdot 2m] < \ensuremath{\mathsf{negl}}\xspace(m)$. \item Suppose $C \colon \set{0,1}^n \rightarrow \set{0,1}^m$ is a code which is locally decodable on average against $\mathfrak{D}$.
Formally, there is a randomized algorithm $\ensuremath{\mathsf{Dec}}\xspace$ satisfying \begin{align*} \forall i \in [n], \quad \Pr_{\substack{x \in \set{0,1}^n \\ y \sim \mathfrak{D}(C(x))}}\left[ \ensuremath{\mathsf{Dec}}\xspace(i, y) = x_i \right] \ge \frac{1}{2} + \ensuremath{\varepsilon}, \end{align*} where the probability is taken over the uniform random choice of $x \in \set{0,1}^n$, the randomness of $\mathfrak{D}$, and the randomness of $\ensuremath{\mathsf{Dec}}\xspace$. Furthermore, $\ensuremath{\mathsf{Dec}}\xspace$ makes at most $q$ non-adaptive queries into $y$ in each invocation. Then for every $q\ge 2$ there is a constant $\kappa = \kappa(q,\delta,\ensuremath{\varepsilon})$ such that \begin{align*} m = \exp\tuple{\kappa \cdot n^{1/(2q-3)}}. \end{align*} \end{itemize} \end{theorem} As a side note, in the definition of Insdel LDCs in \cite{blocki2021exponential}, the decoder also receives the length of the received string $y$ as an input. However, the channel $\mathfrak{D}$ constructed in \cite{blocki2021exponential} ensures that $|y|=m$ except with exponentially small probability, where $y \sim \mathfrak{D}(C(x))$. For this reason, we omit this extra input, as it gives almost no information to the decoder. The proof of Theorem~\ref{thm:main-sinsdel RLDC} consists of two steps, captured by Lemma~\ref{lem:amplify} and Lemma~\ref{lem:reduce-to-ILDC} below. The first step is a straightforward confidence amplification step which boosts the success rate $\alpha$ by running the decoding algorithm multiple times. In the second step, we show that if $\alpha$ is sufficiently close to 1, a strong Insdel RLDC implies a standard Insdel LDC that is decodable on average against the channel $\mathfrak{D}$ mentioned in Theorem~\ref{thm:lb-channel}, which is sufficient for deriving the exponential lower bound. \begin{lemma} \label{lem:amplify} Let $C \colon \set{0,1}^n \rightarrow \set{0,1}^m$ be a non-adaptive strong $(q,\delta,1/2+\beta,\rho)$-Insdel RLDC where $\beta > 0$. Then for any $\ensuremath{\varepsilon} > 0$, $C$ is also a non-adaptive strong $(q\cdot \ln(1/\ensuremath{\varepsilon})/(2\beta^2), \delta, 1-\ensuremath{\varepsilon}, \rho)$-Insdel RLDC. \end{lemma} \begin{proof} Let $\ensuremath{\mathsf{Dec}}\xspace$ be a relaxed local decoder for $C$. For some integer $T$ to be determined, consider the following alternative local decoder $\ensuremath{\mathsf{Dec}}\xspace_{T}$ for $C$. On input $(y, m, i)$, $\ensuremath{\mathsf{Dec}}\xspace_T$ independently runs $\ensuremath{\mathsf{Dec}}\xspace(y, m, i)$ $T$ times, and obtains outputs $r_1, r_2,\dots, r_T \in \set{0,1,\bot}$. For $b \in \set{0,1,\bot}$ we denote \begin{align*} S_b \coloneqq \set{t \in [T] \colon r_t = b}. \end{align*} $\ensuremath{\mathsf{Dec}}\xspace_T$ outputs $0$ or $1$ if $|S_0| \ge T/2$ or $|S_1| \ge T/2$, respectively. Otherwise $\ensuremath{\mathsf{Dec}}\xspace_T$ outputs $\bot$. Clearly, $\ensuremath{\mathsf{Dec}}\xspace_T$ is non-adaptive if $\ensuremath{\mathsf{Dec}}\xspace$ is non-adaptive. Now we prove the three properties of $\ensuremath{\mathsf{Dec}}\xspace_T$. Perfect completeness is easy to see. The relaxed decoding property can fail only when $|S_{1-x_i}| \ge T/2$. By the Chernoff bound, this happens with probability at most $e^{-2\beta^2 T}$, since for each $t \in [T]$ we have \begin{align*} \Pr[r_t = 1-x_i] = 1-\Pr[r_t \in \set{x_i, \perp}] \le 1 - \tuple{\frac{1}{2} + \beta} = \frac{1}{2}-\beta.
\end{align*} Let $I_y \subseteq [n]$ be the subset given by the third property of $\ensuremath{\mathsf{Dec}}\xspace$. That is, for each $i \in I_y$ and $t \in [T]$, we have \begin{align*} \Pr[r_t = x_i] \ge \frac{1}{2}+\beta. \end{align*} Again, by the Chernoff bound we have $\Pr[|S_{x_i}| < T/2] < e^{-2\beta^2 T}$, for each $i \in I_y$. Finally, we take $T = \ln(1/\ensuremath{\varepsilon})/(2\beta^2)$, which ensures $e^{-2\beta^2 T} \le \ensuremath{\varepsilon}$. We note that $\ensuremath{\mathsf{Dec}}\xspace_{T}$ makes $q\cdot T = q\cdot \ln(1/\ensuremath{\varepsilon})/(2\beta^2)$ queries to $y$. \end{proof} \begin{lemma} \label{lem:reduce-to-ILDC} Let $C \colon \set{0,1}^n \rightarrow \set{0,1}^m$ be a non-adaptive strong $(q,\delta,\alpha,\rho)$-Insdel RLDC. Suppose $\rho\alpha + (1-\rho)\alpha/2 = 1/2+\ensuremath{\varepsilon}$ for some $\ensuremath{\varepsilon} > 0$. Then there exists a non-adaptive decoder $\ensuremath{\mathsf{Dec}}\xspace$ and a subset $I \subseteq [n]$ of size at least $\ensuremath{\varepsilon} n/2$ such that for every $i \in I$, we have \begin{align*} \Pr_{\substack{x \in \set{0,1}^n \\ y \sim \mathfrak{D}(C(x))}}\left[ \ensuremath{\mathsf{Dec}}\xspace(i, y) = x_i \right] \ge \frac{1}{2} + \frac{\ensuremath{\varepsilon}}{4}. \end{align*} The probability is taken over the uniform random choice of $x \in \set{0,1}^n$, the randomness of $\mathfrak{D}$, and the randomness of $\ensuremath{\mathsf{Dec}}\xspace$. \end{lemma} \begin{proof} Let $\ensuremath{\mathsf{Dec}}\xspace_0$ be the relaxed decoder for $C$. The local decoder $\ensuremath{\mathsf{Dec}}\xspace$ simulates $\ensuremath{\mathsf{Dec}}\xspace_0$ and outputs the result, except that when $\ensuremath{\mathsf{Dec}}\xspace_0$ returns ``$\perp$'', $\ensuremath{\mathsf{Dec}}\xspace$ instead returns a uniformly random bit. Clearly, $\ensuremath{\mathsf{Dec}}\xspace$ is non-adaptive if $\ensuremath{\mathsf{Dec}}\xspace_0$ is non-adaptive. We note that $\mathfrak{D}$ introduces at most $\delta \cdot 2m$ insertions and deletions except with probability $\ensuremath{\mathsf{negl}}\xspace(m)$. Therefore, by the definition of strong Insdel RLDCs (specifically Conditions 2 and 3 in \cref{def:strongRLDC}), for a random index $i \in [n]$ we have \begin{align*} \Pr_{\substack{i \in [n] \\ x \in \set{0,1}^n \\ y \sim \mathfrak{D}(C(x))}}\left[ \ensuremath{\mathsf{Dec}}\xspace(i, y) = x_i \right] \ge \rho\alpha + \tuple{1 - \rho} \cdot \frac{\alpha}{2} - \ensuremath{\mathsf{negl}}\xspace(m) \ge \frac{1}{2} + \frac{\ensuremath{\varepsilon}}{2}, \end{align*} for large enough $m$ (and thus $n$). The first inequality holds because, conditioned on $i \in I_y$ (which happens with probability $\ge \rho$ for any $y$), $\ensuremath{\mathsf{Dec}}\xspace(i, y)=x_i$ with probability at least $\alpha$; and conditioned on $i \notin I_y$ (which happens with probability $\le 1-\rho$), the random guess provides a success rate of at least $\alpha/2$. We will show that the following subset of indices has large density: \begin{align*} I = \set{i \in [n] \colon \Pr_{\substack{x \in \set{0,1}^n \\ y \sim \mathfrak{D}(C(x))}}\left[ \ensuremath{\mathsf{Dec}}\xspace(i, y) = x_i \right] \ge \frac{1}{2} + \frac{\ensuremath{\varepsilon}}{4}}. \end{align*} Denote by $p = |I|/n$ the density of $I$.
We have \begin{align*} \frac{1}{2} + \frac{\ensuremath{\varepsilon}}{2} &\le \Pr_{\substack{i \in [n] \\ x \in \set{0,1}^n \\ y \sim \mathfrak{D}(C(x))}}\left[ \ensuremath{\mathsf{Dec}}\xspace(i, y) = x_i \right] \\ &= \Pr[i \in I] \cdot \Pr_{\substack{x \in \set{0,1}^n \\ y \sim \mathfrak{D}(C(x))}}\left[ \ensuremath{\mathsf{Dec}}\xspace(i, y) = x_i \mid i \in I \right] + \Pr[i \notin I] \cdot \Pr_{\substack{x \in \set{0,1}^n \\ y \sim \mathfrak{D}(C(x))}}\left[ \ensuremath{\mathsf{Dec}}\xspace(i, y) = x_i \mid i \notin I \right] \\ &\le p\cdot 1 + (1-p)\cdot \tuple{\frac{1}{2} + \frac{\ensuremath{\varepsilon}}{4}} \\ &\le \frac{1}{2} + \frac{\ensuremath{\varepsilon}}{4} + \frac{p}{2}. \end{align*} It follows that $p \ge \ensuremath{\varepsilon}/2$. \end{proof} Now we are ready to prove \Cref{thm:main-sinsdel RLDC}, which we recall below. \strirldcmain* \begin{proof} We first prove the bound for non-adaptive decoders. Taking $\ensuremath{\varepsilon} = \rho/4$ in Lemma~\ref{lem:amplify}, we have that $C$ is also a non-adaptive strong $(q',\delta,\alpha', \rho)$-Insdel RLDC, where $q'=q\cdot \ln(4/\rho)/(2\beta^2)$ and $\alpha'=1-\rho/4$. Note that \begin{align*} \rho\alpha' + \frac{(1-\rho)\alpha'}{2} = \frac{(1+\rho)\alpha'}{2} = \frac{(1+\rho)(1-\rho/4)}{2} \ge \frac{1}{2} + \frac{\rho}{4}. \end{align*} Thus we can apply Lemma~\ref{lem:reduce-to-ILDC} with $\ensuremath{\varepsilon} = \rho/4$ to obtain a subset $I\subseteq [n]$ of size $|I| \ge \rho n/8$, and a decoder $\ensuremath{\mathsf{Dec}}\xspace$ satisfying \begin{align*} \Pr_{\substack{x \in \set{0,1}^n \\ y \sim \mathfrak{D}(C(x))}}\left[ \ensuremath{\mathsf{Dec}}\xspace(i, y) = x_i \right] \ge \frac{1}{2} + \ensuremath{\varepsilon}' \coloneqq \frac{1}{2}+\frac{\rho}{16} \end{align*} for every $i \in I$. By \cref{thm:lb-channel}, for some constant $c_1=c_1(q,\delta,\beta,\rho)$ we have \begin{align*} m \ge \exp\tuple{\kappa(q',\delta,\ensuremath{\varepsilon}') \cdot |I|^{1/(2q'-3)}} \ge \exp\tuple{c_1 \cdot n^{1/(2q')}} = \exp\tuple{c_1\cdot n^{\beta^2/(q \cdot \ln(4/\rho))}} \end{align*} as desired. For the adaptive case, we note that the proof does not rely on perfect completeness of the decoder (i.e., Condition 1 in \cref{def:strongRLDC}). Therefore, we can apply the Katz--Trevisan reduction (see \cref{foot:kt-obs}) to obtain a non-adaptive decoder for $C$ which satisfies Conditions 2 and 3 in \cref{def:strongRLDC}, with the same $\rho$ and a mildly worse $\alpha=1/2+\beta/2^{q-1}$. The argument presented in this section still applies to such a non-adaptive decoder, yielding the same lower bound with $\beta$ replaced by $\beta/2^{q-1}$. \end{proof} We end this section with a remark on linear/affine weak $2$-query Insdel RLDCs. In \Cref{sec:2qrldc}, a transformation from RLDCs to standard LDCs was given for Hamming errors. That transformation fixes some message bits to $0$ or $1$ and modifies the decoding algorithm accordingly; the resulting code remains affine if the initial code is linear or affine. One key step in the analysis is using the perfect completeness condition to deduce structural properties of the queries. We note that the same argument yields the same query structure for weak Insdel RLDCs. Altogether, this allows us to use the impossibility result for affine $2$-query Insdel LDCs \cite{blocki2021exponential} to conclude the following.
\begin{corollary} \label{cor:linear-2qIRLDC} For any linear or affine weak $(2,\delta,\ensuremath{\varepsilon})$ insdel RLDC $C \colon \set{0,1}^n \rightarrow \set{0,1}^m$, we have $n=O_{\delta,\ensuremath{\varepsilon}}(1)$. \end{corollary}
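To make the confidence amplification of Lemma~\ref{lem:amplify} concrete, here is a minimal Python sketch (our own illustration, not part of the formal construction) of the threshold-vote decoder $\ensuremath{\mathsf{Dec}}\xspace_T$; the callable \texttt{dec} stands in for a single invocation of the base relaxed decoder, and the toy decoder below is purely illustrative.

\begin{verbatim}
import random

def dec_T(dec, y, m, i, T):
    # Run the base relaxed decoder T times independently and take a
    # threshold vote, mirroring Dec_T in Lemma lem:amplify: output a
    # bit b if it received at least T/2 of the votes, else reject.
    votes = [dec(y, m, i) for _ in range(T)]  # each in {0, 1, None}
    for b in (0, 1):
        if votes.count(b) >= T / 2:
            return b
    return None  # None plays the role of the reject symbol

# Toy base decoder that returns the correct bit except with
# probability p_err (so its success rate is 1 - p_err > 1/2):
def toy_dec(y, m, i, p_err=0.3):
    return y[i] if random.random() > p_err else 1 - y[i]

y = [1, 0, 1, 1]
print(dec_T(toy_dec, y, len(y), 2, T=101))  # prints 1 w.h.p.
\end{verbatim}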
{ "timestamp": "2022-09-20T02:22:41", "yymm": "2209", "arxiv_id": "2209.08688", "language": "en", "url": "https://arxiv.org/abs/2209.08688" }
\section{Introduction and main result} In~\cite{BrezisLieb} Brezis and Lieb posed the question whether it is possible to bound the `Sobolev deficit' $$ \Vert \nabla f \Vert^2_{\mathrm L^2({\mathord{\mathbb R}}^d)} - S_d\,\Vert f \Vert^2_{\mathrm L^{2^*}({\mathord{\mathbb R}}^d)} $$ from below in terms of some natural distance from the manifold of optimizers. Here $d \ge 3$, $2^*=2\,d/(d-2)$ is the `Sobolev exponent', and $$ S_d = \tfrac14\,d\,(d-2)\,|{\mathord{\mathbb S}}^d|^{2/d} $$ is the sharp Sobolev constant. The function $f$ belongs to the homogeneous Sobolev space $\dot{\mathrm H}^1({\mathord{\mathbb R}}^d)$, that is, it is in $\mathrm L^1_{\rm loc}({\mathord{\mathbb R}}^d)$, its distributional gradient is a square-integrable function and it vanishes at infinity in the sense that $|\{ x\in{\mathord{\mathbb R}}^d:\,|f(x)|>\epsilon\}|<\infty$ for all $\epsilon>0$. Here~$|A|$ denotes the Lebesgue measure of a measurable set $A$. Throughout this paper we deal with real-valued functions. With minor additional effort our arguments can be extended to the case of complex-valued functions. Rodemich~\cite{Rodemich}, Aubin~\cite{Aubin} and Talenti~\cite{Talenti} (see also~\cite{Rosen}) proved that the Sobolev deficit is non-negative. Moreover, it was shown by Lieb~\cite{Lieb}, Gidas, Ni and Nirenberg~\cite{GidasNiNirenberg} and Caffarelli, Gidas and Spruck~\cite{CaffarelliGidasSpruck} that the deficit vanishes if and only if the function $f$ is of the form \begin{equation} \label{eq:optimizers} f(x) = c\left(a + |x-b|^2\right)^{-\frac{d-2}2}\,, \end{equation} where $a\in(0,\infty)$, $b\in {\mathord{\mathbb R}}^d$ and $c\in {\mathord{\mathbb C}}$ are constants. These functions are often called `Aubin--Talenti functions'. Let $\mathcal M$ denote the $(d+2)$-dimensional manifold of functions of the form~\eqref{eq:optimizers}. The question of Brezis and Lieb was answered by Bianchi and Egnell~\cite{BianchiEgnell}: there is a strictly positive constant $c_{\rm BE}$ such that for any $f \in \dot{\mathrm H}^1({\mathord{\mathbb R}}^d)\setminus\mathcal M$ \begin{equation}\label{eq:bianchi-egnell} \mathcal E(f) :=\frac{ \Vert \nabla f \Vert^2_{\mathrm L^2({\mathord{\mathbb R}}^d)} - S_d\,\Vert f \Vert^2_{\mathrm L^{2^*}({\mathord{\mathbb R}}^d)}} {\inf_{g\in\mathcal M} \Vert \nabla f - \nabla g \Vert^2_{\mathrm L^2({\mathord{\mathbb R}}^d)}} \ge c_{\rm BE}\,. \end{equation} Lions~\cite{MR834360} has shown that, if the Sobolev deficit is small for some function $f$, then $f$ has to be close to the manifold $\mathcal M$ of Sobolev optimizers. The closeness is in the strongest possible sense, namely with respect to the norm in $\dot H^1({\mathord{\mathbb R}}^d)$. The Bianchi--Egnell inequality~\eqref{eq:bianchi-egnell} makes the qualitative result of Lions quantitative. In particular, it shows that the squared distance to the manifold is bounded by a multiple of the Sobolev deficit. Such `stability' estimates have been established in other contexts as well, e.g., for the isoperimetric inequality or for classical inequalities in real and harmonic analysis. In fact, stability has attracted a lot of attention in recent years and we refer to~\cite{FuscoMaggiPratelli,CianchiFuscoMaggiPratelli,FigalliMaggiPratelli,CicaleseLeonardi,ChenFrankWeth,CarlenFrankLieb,Christ_HY,Figallietal,Christ,MR3695890,FrankLieb2,FrankLieb,FigalliZhang,BonforteDolbeaultNazaretSimonov} for an incomplete list of works in this direction.
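As a quick sanity check on the form of the quotient in~\eqref{eq:bianchi-egnell} (a standard computation, recorded here for convenience), note that $\mathcal E$ is invariant under the scaling $f_\lambda(x) := \lambda^{\frac{d-2}2} f(\lambda x)$ with $\lambda>0$: a change of variables gives $$ \Vert \nabla f_\lambda \Vert^2_{\mathrm L^2({\mathord{\mathbb R}}^d)} = \lambda^{(d-2)+2-d}\, \Vert \nabla f \Vert^2_{\mathrm L^2({\mathord{\mathbb R}}^d)} = \Vert \nabla f \Vert^2_{\mathrm L^2({\mathord{\mathbb R}}^d)} \qquad\text{and}\qquad \Vert f_\lambda \Vert^{2^*}_{\mathrm L^{2^*}({\mathord{\mathbb R}}^d)} = \lambda^{d-d}\, \Vert f \Vert^{2^*}_{\mathrm L^{2^*}({\mathord{\mathbb R}}^d)} = \Vert f \Vert^{2^*}_{\mathrm L^{2^*}({\mathord{\mathbb R}}^d)}\,, $$ while $g\mapsto g_\lambda$ maps $\mathcal M$ onto itself, so that both the deficit and the distance to $\mathcal M$ in~\eqref{eq:bianchi-egnell} are unchanged.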
An interesting point about~\eqref{eq:bianchi-egnell} and other inequalities obtained by this method is that nothing seems to be known about the optimal value of the constant $c_{\rm BE}$ except for the fact that it is strictly positive. The proof in~\cite{BianchiEgnell} proceeds by a spectral estimate combined with a compactness argument and hence cannot give any information about $c_{\rm BE}$. Explicit quantitative estimates are known only for the distance to $\mathcal M$ measured in a norm weaker than that of~\eqref{eq:bianchi-egnell}, for functions in $\dot{\mathrm H}^1({\mathord{\mathbb R}}^d)$ satisfying additional constraints, or in the form of superquadratic estimates of the distance which degenerate in a neighbourhood of $\mathcal M$; much more is known for subcritical interpolation inequalities than for Sobolev-type inequalities: see~\cite{BDGV,MR3103175,MR3227280,MR3640894,BonforteDolbeaultNazaretSimonov,Frank_2022} for some references. It is the aim of this note to address the question of proving~\eqref{eq:bianchi-egnell} with an explicit lower bound on $c_{\rm BE}$ for any \emph{non-negative function} $f \in \dot{\mathrm H}^1({\mathord{\mathbb R}}^d)\setminus \mathcal M$. We did not succeed in proving a bound for sign-changing functions $f$ and, in particular, we did not succeed in proving the natural conjecture that $c_{\rm BE}$ is the infimum of $\mathcal E$ over non-negative functions in $ \dot{\mathrm H}^1({\mathord{\mathbb R}}^d)\setminus \mathcal M$. Whether this is true or not is an open problem. We shall assume, henceforth, that we consider the functional $\mathcal E(f)$ for non-negative functions only. Let us introduce the notation \begin{equation} \label{eq:nu} \nu(\delta):=\sqrt{\frac{\delta}{1-\delta}} \end{equation} for any $\delta\in(0,1)$ and let $q = 2^*$ denote the Sobolev exponent. Here is our theorem. \begin{theorem}\label{main} Let $d \ge 3$, $q=2\,d/(d-2)$ and let $f \in \dot{\mathrm H}^1({\mathord{\mathbb R}}^d)\setminus\mathcal M$ be any non-negative function. Then $\mathcal E(f)$ in~\eqref{eq:bianchi-egnell} is bounded from below by $$ \mathcal E(f)\ge \sup_{0<\delta<1} \delta\,\mu(\delta) $$ where $\mu(\delta)\ge\mathsf m\big(\nu(\delta)\big)$ with $\delta\mapsto\nu(\delta)$ as in~\eqref{eq:nu} and $\mathsf m$ explicitly defined by \begin{equation} \label{eq:mudelta} \begin{aligned} \mathsf m(\nu):=\,&\tfrac4{d+4} - \tfrac2q\,\nu^{q-2} && \text{if}\quad d\geq 6\,,\\ \mathsf m(\nu):=\,&\tfrac4{d+4} -\tfrac13 \,(q-1)\,(q-2)\,\nu - \tfrac2q\,\nu^{q-2} && \text{if}\quad d=4,5\,,\\ \mathsf m(\nu):=\,&\tfrac47 - \tfrac{20}3\,\nu - 5\,\nu^2- 2\,\nu^3 - \tfrac13\,\nu^4 && \text{if}\quad d=3\,. \end{aligned} \end{equation} \end{theorem} We refer to Appendix~\ref{numerical} for considerations on the numerical values of our lower bound on $\mathcal E(f)$. We also note the \emph{upper bound} \begin{equation} \label{eq:upperboundintro} \inf_{0\leq f\in\dot H^1({\mathord{\mathbb R}}^d)\setminus\mathcal M} \mathcal E(f) \leq \frac{4}{d+4} \,; \end{equation} see \eqref{eq:upperbound}. Apart from the extension to sign-changing functions, another question that remains open is what the optimal value of the constant $c_{\rm BE}$ in \eqref{eq:bianchi-egnell} is and whether it is attained by an extremal function. In particular, is it only attained in the limit of functions approaching the manifold~$\mathcal M$ and therefore given by a spectral gap? This would correspond to equality in \eqref{eq:upperboundintro}.
This is \emph{not} the case for the analogous question for the planar isoperimetric inequality, where the constant is strictly smaller than the constant in the corresponding spectral gap inequality and where one can prove the existence of an optimizing function; see~\cite{BianchiniCroceHenrot}. For further studies under an additional convexity assumption, see~\cite{Campi,AlvinoFeroneNitsch,CicaleseLeonardi2}. Let us describe the strategy of the proof of Theorem~\ref{main}. Superficially, it is analogous to that of Bianchi and Egnell~\cite{BianchiEgnell} in the sense that one splits the problem into two regions, one where $f$ is close to the manifold of Sobolev optimizers and the other where it is far away. These regions are defined in terms of the quantity $\inf_{g\in\mathcal M} \Vert \nabla f-\nabla g \Vert_{\mathrm L^2({\mathord{\mathbb R}}^d)}^2/ \Vert \nabla f \Vert_{\mathrm L^2({\mathord{\mathbb R}}^d)}^2$, specifically by requiring that this quantity is either at most $\delta$ or strictly greater than $\delta$. Here $\delta>0$ is a free parameter that we will optimize over at the end. Note that, since $\inf_{g\in\mathcal M} \Vert \nabla f-\nabla g \Vert_{\mathrm L^2({\mathord{\mathbb R}}^d)}^2 \le \Vert \nabla f \Vert_{\mathrm L^2({\mathord{\mathbb R}}^d)}^2$, we may always assume that $\delta \le 1$ and even $\delta<1$. In Section~\ref{sec:expansion}, for $\delta\in(0,1)$ small enough, we estimate $\mu(\delta)$ such that \begin{equation}\label{eq:smalldelta} \mathcal E(f) \ge \mu(\delta) \end{equation} whenever \begin{equation}\label{M:neighbourhood} 0<\inf_{g\in\mathcal M} \Vert \nabla f-\nabla g \Vert_{\mathrm L^2({\mathord{\mathbb R}}^d)}^2 \le \delta\,\Vert \nabla f \Vert_{\mathrm L^2({\mathord{\mathbb R}}^d)}^2 \end{equation} and prove that $\mu(\delta)$ is positive. The argument in this regime is similar to that of Bianchi and Egnell and is based on a spectral gap inequality. We make the qualitative expansions of~\cite{BianchiEgnell} quantitative and obtain for all $\delta\in(0,1)$ a remainder bound smaller than an explicit, strictly positive constant. In Section~\ref{sec:flow}, we obtain a lower bound on $\mathcal E(f)$ in the case $\inf_{g\in\mathcal M}\! \Vert \nabla f - \nabla g \Vert^2_{\mathrm L^2({\mathord{\mathbb R}}^d)}>\delta\,\Vert \nabla f \Vert_{\mathrm L^2({\mathord{\mathbb R}}^d)}^2$. To handle this regime we use an idea taken from a paper by Christ~\cite{Christ} in which he establishes a quantitative error term for the Riesz rearrangement inequality. The idea, in rough outline, is to construct a continuous family of rearrangements $f_\tau$, $0\le \tau < \infty$, such that $f_0=f$, $\Vert f_\tau \Vert_{\mathrm L^{2^*}({\mathord{\mathbb R}}^d)}= \Vert f \Vert_{\mathrm L^{2^*}({\mathord{\mathbb R}}^d)}$, $\tau \mapsto \Vert \nabla f_\tau \Vert_{\mathrm L^2({\mathord{\mathbb R}}^d)}$ is non-increasing and $\inf_{g\in\mathcal M} \Vert \nabla(f_\tau-g) \Vert_{\mathrm L^2({\mathord{\mathbb R}}^d)}^2 \to 0$ as $\tau \to \infty$.
Clearly $$ \mathcal E(f) \ge \frac{ \Vert \nabla f \Vert^2_{\mathrm L^2({\mathord{\mathbb R}}^d)} - S_d\,\Vert f \Vert^2_{\mathrm L^{2^*}({\mathord{\mathbb R}}^d)}} {\Vert \nabla f \Vert^2_{\mathrm L^2({\mathord{\mathbb R}}^d)}} = 1-S_d\,\frac{ \Vert f \Vert^2_{\mathrm L^{2^*}({\mathord{\mathbb R}}^d)}} {\Vert \nabla f \Vert^2_{\mathrm L^2({\mathord{\mathbb R}}^d)}} \ge \frac{ \Vert \nabla f_\tau \Vert^2_{\mathrm L^2({\mathord{\mathbb R}}^d)} - S_d\,\Vert f_\tau \Vert^2_{\mathrm L^{2^*}({\mathord{\mathbb R}}^d)}} {\Vert \nabla f_\tau \Vert^2_{\mathrm L^2({\mathord{\mathbb R}}^d)}}\,. $$ Starting with $\inf_{g\in\mathcal M} \Vert \nabla f-\nabla g \Vert_{\mathrm L^2({\mathord{\mathbb R}}^d)}^2 > \delta\,\Vert \nabla f \Vert_{\mathrm L^2({\mathord{\mathbb R}}^d)}^2$, one would like to run the flow until at a certain point $\tau_0$ one has \begin{equation}\label{eq:equality} \inf_{g\in\mathcal M} \Vert \nabla(f_{\tau_0}-g) \Vert_{\mathrm L^2({\mathord{\mathbb R}}^d)}^2 = \delta\,\Vert \nabla f_{\tau_0} \Vert_{\mathrm L^2({\mathord{\mathbb R}}^d)}^2 \end{equation} and, using~\eqref{eq:smalldelta}, one would conclude that $$ \mathcal E(f) \ge \frac{ \Vert \nabla f_{\tau_0} \Vert^2_{\mathrm L^2({\mathord{\mathbb R}}^d)} - S_d\,\Vert f_{\tau_0} \Vert^2_{\mathrm L^{2^*}({\mathord{\mathbb R}}^d)}} {\Vert \nabla f_{\tau_0} \Vert^2_{\mathrm L^2({\mathord{\mathbb R}}^d)}} =\delta\,\frac{ \Vert \nabla f_{\tau_0} \Vert^2_{\mathrm L^2({\mathord{\mathbb R}}^d)} - S_d\,\Vert f_{\tau_0} \Vert^2_{\mathrm L^{2^*}({\mathord{\mathbb R}}^d)}} {\inf_{g\in\mathcal M} \Vert \nabla(f_{\tau_0}-g) \Vert_{\mathrm L^2({\mathord{\mathbb R}}^d)}^2} \ge \delta\,\mu(\delta)\,. $$ The details of this argument are more involved than presented here, mostly because the function $\tau\mapsto\|\nabla f_\tau\|_{\mathrm L^2({\mathord{\mathbb R}}^d)}$ need not be continuous, so the existence of a $\tau_0$ as in~\eqref{eq:equality} is not guaranteed. The details are given in Section~\ref{sec:flow}. Additional results on the symmetric decreasing rearrangement and numerical values of the lower bound in Theorem~\ref{main} are given in two appendices, \ref{contrearr} and~\ref{numerical}, at the end of the paper. In order to lighten the notation, we write $\|\cdot\|_q=\|\cdot\|_{\mathrm L^q({\mathord{\mathbb R}}^d)}$ whenever the space is~${\mathord{\mathbb R}}^d$ with Lebesgue measure. \section{Competing symmetries, the sequence and the flow}\label{sec:flow} In a first step one uses `competing symmetries' to move the initial function $f$ close to, but not exactly into, the regime described by~\eqref{M:neighbourhood}. This is done by building a sequence $(f_n)_{n\in{\mathord{\mathbb N}}}$ and considering in Lemma~\ref{alternatives} two alternatives, (a) and (b). In case (a), the whole sequence, as well as its limit, stays outside the neighbourhood~\eqref{M:neighbourhood}; this is already enough to prove the result of Theorem~\ref{main}. In case (b), in a further step one uses a continuous rearrangement (flow) to achieve~\eqref{eq:equality} or, rather, a substitute for it. \subsection{Competing symmetries}\label{sec:competing} The functional $\mathcal E(f)$ is conformally invariant in the sense that if $C:{\mathord{\mathbb R}}^d\cup\{\infty\} \to {\mathord{\mathbb R}}^d\cup\{\infty\}$ is a conformal map, the function $$ f_C(x) = |{\rm det}\,DC(x)|^{1/2^*}f\big(C(x)\big) $$ satisfies $$ \mathcal E(f_C) = \mathcal E(f)\,.
$$ In order to verify this, we recall that any conformal map is a composition of scalings, translations, rotations and inversions. For scalings, translations and rotations in ${\mathord{\mathbb R}}^d$ the claimed invariance is easy to see. The additional map to consider is the inversion $I(x)= \frac{x}{|x|^2}$, and a straightforward change of variables shows that $$ \Vert \nabla f_I \Vert_2^2 = \Vert \nabla f\Vert_2^2\,, \quad \Vert f_I \Vert_{2^*}^2 = \Vert f \Vert_{2^*}^2\,. $$ The equality $$ \inf_{g\in\mathcal M} \Vert \nabla(f_I-g) \Vert_2^2 = \inf_{g\in\mathcal M} \Vert \nabla f-\nabla g \Vert_2^2 $$ follows from $$ \inf_{g\in\mathcal M} \Vert \nabla(f_I-g) \Vert_2^2=\inf_{g\in\mathcal M} \Vert \nabla(f-g_I) \Vert_2^2 = \inf_{g\in\mathcal M} \Vert \nabla f-\nabla g \Vert_2^2 $$ since $I\circ I=\mathrm{id}$ and $g \mapsto g_I$ maps the set $\mathcal M$ to itself in a one-to-one and onto fashion. Another and perhaps easier way to see the conformal invariance is to pull the problem up to the sphere via the stereographic projection. We denote by $s = (s_1, s_2, \dots, s_{d+1})$ the coordinates in ${\mathord{\mathbb R}}^{d+1}$. Then the unit sphere ${\mathord{\mathbb S}}^d \subset {\mathord{\mathbb R}}^{d+1}$ can be parametrized in terms of stereographic coordinates by $$ s_j = \frac{2\,x_j}{1+|x|^2}\,,\quad j=1, \dots, d\,,\quad s_{d+1} = \frac{1-|x|^2}{1+|x|^2}\,. $$ We set \begin{equation} \label{eq:euclidtosphere} F(s) = \left( \frac{1+|x|^2}2\right)^{\frac{d-2}2} f(x) \end{equation} and find by a straightforward computation that $$ \mathcal E(f) = \frac{ \Vert \nabla f \Vert^2_2 - S_d\,\Vert f \Vert^2_{2^*}} {\inf_{g\in\mathcal M} \Vert \nabla f - \nabla g \Vert^2_2} = \frac{ \Vert \nabla F \Vert^2_{\mathrm L^2({\mathord{\mathbb S}}^d)} + \tfrac14\,d\,(d-2)\,\Vert F\Vert_{\mathrm L^2({\mathord{\mathbb S}}^d)}^2 - S_d\,\Vert F \Vert^2_{\mathrm L^{2^*}({\mathord{\mathbb S}}^d)}} {\inf_{G \in \mathscr M} \left\{\Vert \nabla F- \nabla G \Vert^2_{\mathrm L^2({\mathord{\mathbb S}}^d)} + \tfrac14\,d\,(d-2)\,\Vert F-G\Vert_{\mathrm L^2({\mathord{\mathbb S}}^d)}^2\right\} }\,, $$ where $G$ is a function of the form $$ G(s) = c\,\big(a+b\cdot s\big)^{-\frac{d-2}2}\,, $$ with constants $a>0$, $b\in {\mathord{\mathbb R}}^{d+1}$ and $c \in {\mathord{\mathbb C}}$, and $\mathscr M$ is the corresponding set of functions. On the sphere the inversion $I$ takes the form of the reflection $(s_1, \dots, s_d, s_{d+1}) \to (s_1, \dots, s_d, -s_{d+1})$. A second ingredient for the construction of the flow is the technique of `competing symmetries', invented in~\cite{CarlenLoss}. Consider any non-negative function $f\in\dot{\mathrm H}^1({\mathord{\mathbb R}}^d)$ and its counterpart $F \in \mathrm H^1({\mathord{\mathbb S}}^d)$ given by~\eqref{eq:euclidtosphere}. Set $$ (UF)(s) = F(s_1, \dots, s_{d-1}, s_{d+1}, -s_d)\,, $$ which corresponds to a rotation by $\pi/2$ that maps the `north pole' axis $(0, 0, \dots, 1)$ to $(0,\dots,1,0)$. Reversing~\eqref{eq:euclidtosphere}, the function that corresponds to $UF$ on ${\mathord{\mathbb R}}^d$ is given by \begin{equation}\label{eq:U} (Uf)(x) = \left(\frac2{|x-e_d|^2}\right)^{\frac{d-2}2} f\left(\frac{x_1}{|x-e_d|^2}, \dots, \frac{x_{d-1}}{|x-e_d|^2}, \frac{|x|^2-1}{|x-e_d|^2}\right), \end{equation} where $e_d=(0,\dots, 0, 1)\in{\mathord{\mathbb R}}^d$. It follows that $$ \mathcal E(Uf) = \mathcal E(f)\,. $$ The operation $U$ is obviously linear, invertible and an isometry on $\mathrm L^{2^*}({\mathord{\mathbb R}}^d)$.
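The fact that the lift~\eqref{eq:euclidtosphere} preserves the $2^*$-norm, and hence that $U$ is an isometry on $\mathrm L^{2^*}({\mathord{\mathbb R}}^d)$, can be checked directly (a short computation, recorded here for convenience): the stereographic parametrization has surface measure $d\sigma(s) = \big(\tfrac{2}{1+|x|^2}\big)^d\,dx$, so that $$ \int_{{\mathord{\mathbb S}}^d} |F|^{2^*}\,d\sigma = \int_{{\mathord{\mathbb R}}^d} \left( \frac{1+|x|^2}2\right)^{\frac{d-2}2\cdot\frac{2\,d}{d-2}} |f(x)|^{2^*} \left( \frac{2}{1+|x|^2}\right)^{d} dx = \int_{{\mathord{\mathbb R}}^d} |f|^{2^*}\,dx\,, $$ since the two conformal factors cancel exactly; on the sphere side $U$ is a rotation and therefore preserves all $\mathrm L^p({\mathord{\mathbb S}}^d)$-norms.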
We also consider the symmetric decreasing rearrangement
$$ \mathcal R f(x) = f^*(x)\,. $$
The most important properties are that $f$ and $f^*$ are equimeasurable and that $\Vert \nabla f^* \Vert_2 \le \Vert \nabla f\Vert_2$. For elementary properties of rearrangements the reader may consult~\cite{LiebLoss}. Since $f$ and $f^*$ are equimeasurable, the map $\mathcal R$ is also an isometry on $\mathrm L^{2^*}({\mathord{\mathbb R}}^d)$. It is when using the decreasing rearrangement that we use the fact that $f$ is a non-negative function. For functions that change sign one conventionally defines their rearrangement as the rearrangement of their absolute value. Passing from a function to its absolute value does not alter the numerator of $\mathcal E(f)$, but we have not been able to quantify its influence on the denominator. On ${\mathord{\mathbb R}}^d$, let
\begin{equation} \label{gstar} g_*(x):=|{\mathord{\mathbb S}}^d|^{-\frac{d-2}{2\,d}} \left( \frac2{1+|x|^2}\right)^\frac{d-2}2\,. \end{equation}
Note that $\Vert g_*\Vert_{2^*} = 1$ because it is obtained as the stereographic projection of the constant function on ${\mathord{\mathbb S}}^d$ with $2^*$-norm equal to $1$. The following theorem was proved in~\cite{CarlenLoss}.
\begin{theorem}\label{thm:competingsymmetries} Let $f \in \mathrm L^{2^*}({\mathord{\mathbb R}}^d)$ be a non-negative function. Consider the sequence $(f_n)_{n\in{\mathord{\mathbb N}}}$ of functions
\begin{equation}\label{fn} f_n =(\mathcal RU)^n f\quad\forall\,n\in{\mathord{\mathbb N}}\,. \end{equation}
Then
$$ \lim_{n \to \infty} \Vert f_n - h_f\Vert_{2^*} = 0 $$
where $h_f=\Vert f \Vert_{2^*}\,g_*\in \mathcal M$. Moreover, if $f\in\dot{\mathrm H}^1({\mathord{\mathbb R}}^d)$, then $(\Vert \nabla f_n \Vert_2^2)_{n\in{\mathord{\mathbb N}}}$ is a non-increasing sequence. \end{theorem}
It does not seem clear whether the functional $\mathcal E(f)$ decreases or increases under rearrangement. The next lemma helps to explain this point. Define $\mathcal M_1$ to be the set of the elements in $\mathcal M$ with $2^*$-norm equal to $1$.
\begin{lemma}\label{innerproduct} For any $f\in\dot{\mathrm H}^1({\mathord{\mathbb R}}^d)$, we have
$$ \inf_{g\in\mathcal M} \Vert \nabla f - \nabla g\Vert^2_2 = \Vert \nabla f \Vert_2^2 - S_d\,\sup_{g\in\mathcal M_1} \left(f, g^{2^*-1}\right)^2\,. $$
\end{lemma}
Here $(\cdot,\cdot)$ is the $\mathrm L^2({\mathord{\mathbb R}}^d)$ inner product or, more precisely, the duality pairing between $\mathrm L^{2^*}({\mathord{\mathbb R}}^d)$ and~$\mathrm L^{(2^*)'}({\mathord{\mathbb R}}^d)$.
\begin{proof} Let $g$ be any Aubin--Talenti function with $\Vert g \Vert_{2^*}=1$. The function $g$ is an optimizer of the Sobolev inequality, i.e., $\Vert \nabla g \Vert_2^2 = S_d\,\Vert g \Vert_{2^*}^2=S_d$ and is a solution of the Sobolev equation
$$ -\Delta g = S_d\,\frac{g^{2^*-1}}{\Vert g \Vert_{2^*}^{2^*-2}} = S_d\,g^{2^*-1}\,. $$
Hence for any non-negative constant $c$ we find
$$ \Vert \nabla (f-c\,g)\Vert_2^2 = \Vert \nabla f \Vert_2^2 - 2\,c\,(\nabla f,\nabla g) + c^2\,\Vert \nabla g \Vert_2^2 = \Vert \nabla f \Vert_2^2 - 2\,c\,S_d\,\left(f, g^{2^*-1}\right) + S_d\,c^2 $$
and minimizing with respect to $c$ we find the minimal value $ \Vert \nabla f \Vert_2^2 - S_d\,\left(f,g^{2^*-1}\right)^2$; optimizing over all such $g$ proves the lemma. \end{proof}
Under the decreasing rearrangement, the term $\Vert \nabla f\Vert_2^2$ does not increase whereas the term $\sup_{g\in\mathcal M_1} \left(f, g^{2^*-1}\right)^2$ increases.
To see this, note that the supremum is attained at some Aubin--Talenti function of the form~\eqref{eq:optimizers}, which is a symmetric decreasing function about some point $b\in{\mathord{\mathbb R}}^d$. Replacing $f$ by its symmetric decreasing rearrangement about that point increases $\left(f, g^{2^*-1}\right)^2$, in fact strictly unless $f$ is already symmetric decreasing about the point $b$. Thus, while the numerator in $\mathcal E(f)$ decreases under rearrangements, so does the denominator, and there are no direct conclusions to be drawn from this. The next lemma summarizes what we have shown.
\begin{lemma}\label{lm:monotone} For the sequence $(f_n)_{n\in{\mathord{\mathbb N}}}$ in Theorem~\ref{thm:competingsymmetries} we have that $n\mapsto\sup_{g\in\mathcal M_1} \left(f_n, g^{2^*-1}\right)^2$ is strictly increasing, $n\mapsto \inf_{g\in\mathcal M} \Vert \nabla f_n - \nabla g\Vert_{2}^2$ is strictly decreasing and
$$ \lim_{n \to \infty} \inf_{g\in\mathcal M} \Vert \nabla f_n - \nabla g\Vert_2^2 = \lim_{n \to \infty} \Vert \nabla f_n \Vert^2_2 - S_d\,\Vert h_f \Vert_{2^*}^2= \lim_{n \to \infty} \Vert \nabla f_n \Vert^2_2 - S_d\,\Vert f \Vert_{2^*}^2\,. $$
\end{lemma}
\begin{proof} From
$$ \inf_{g\in\mathcal M} \Vert \nabla f_n - \nabla g\Vert_2^2 = \Vert \nabla f_n \Vert^2_2 - S_d\,\sup_{g\in\mathcal M_1} \left(f_n, g^{2^*-1}\right)^2 $$
we see that the first term converges since $(\Vert \nabla f_n \Vert^2_2)_{n\in{\mathord{\mathbb N}}}$ is a non-increasing sequence. For the second term, which is strictly increasing, we have by H\"older's inequality
$$ \sup_{g\in\mathcal M_1} \left(f_n, g^{2^*-1}\right)^2 \le \Vert f_n \Vert_{2^*}^2 = \Vert f \Vert_{2^*}^2 $$
and since $g_*$ as defined in~\eqref{gstar} is in $\mathcal M_1$ we have
$$ \liminf_{n\to \infty} \sup_{g\in\mathcal M_1} \left(f_n, g^{2^*-1}\right)^2 \ge \liminf_{n\to \infty} \left(f_n,g_*^{2^*-1}\right)^2 = \Vert f \Vert_{2^*}^2 $$
by Theorem~\ref{thm:competingsymmetries}, since $(h_f,g_*^{2^*-1})=\Vert f\Vert_{2^*}\,\Vert g_*\Vert_{2^*}^{2^*}=\Vert f\Vert_{2^*}$. \end{proof}
\begin{lemma} \label{alternatives} Assume that $0\leq f\in\dot{\mathrm H}^1({\mathord{\mathbb R}}^d)\setminus\mathcal M$ satisfies
$$ \inf_{g\in\mathcal M} \Vert \nabla f - \nabla g\Vert_2^2 \ge \delta\,\Vert \nabla f \Vert_2^2 $$
and let $(f_n)_{n\in{\mathord{\mathbb N}}}$ be the sequence defined by~\eqref{fn}. Then one of the following alternatives holds:
\begin{enumerate}
\item[(a)] for all $n=0,1,2 \dots$ we have
$$ \inf_{g\in\mathcal M} \Vert \nabla f_n - \nabla g\Vert_2^2 \ge \delta\,\Vert \nabla f_n \Vert_2^2 $$
\item[(b)] there is a natural number $n_0$ such that
$$ \inf_{g\in\mathcal M} \Vert \nabla f_{n_0} - \nabla g\Vert_2^2 \ge \delta\,\Vert \nabla f_{n_0} \Vert_2^2 $$
and
$$ \inf_{g\in\mathcal M} \Vert \nabla f_{n_0+1} - \nabla g\Vert_2^2 < \delta\,\Vert \nabla f_{n_0+1} \Vert_2^2\,. $$
\end{enumerate}
\end{lemma}
\begin{proof} Assume that alternative (a) does not hold. Then there is a smallest integer $n\geq1$ such that $\inf_{g\in\mathcal M} \Vert \nabla f_{n} - \nabla g\Vert_2^2 < \delta\,\Vert \nabla f_{n} \Vert_2^2$. Setting $n_0:=n-1\geq0$, the inequality $\inf_{g\in\mathcal M} \Vert \nabla f_{n_0} - \nabla g\Vert_2^2 \ge \delta\,\Vert \nabla f_{n_0} \Vert_2^2$ holds by minimality (for $n_0=0$ it is the assumption on $f$), so alternative (b) holds. \end{proof}
\begin{lemma}\label{alternativea} Assume that $0\leq f\in\dot{\mathrm H}^1({\mathord{\mathbb R}}^d)\setminus \mathcal M$ satisfies
$$ \inf_{g\in\mathcal M} \Vert \nabla f - \nabla g\Vert^2_2 \ge \delta\,\Vert \nabla f \Vert_2^2 $$
and suppose that in Lemma~\ref{alternatives} alternative {\rm (a)} holds for the sequence $(f_n)_{n\in{\mathord{\mathbb N}}}$ defined by~\eqref{fn}. Then
$$ \mathcal E(f) \ge \delta\,.
$$
\end{lemma}
\begin{proof} We have
\begin{equation}\label{Efn} \mathcal E(f) = \frac{\Vert \nabla f \Vert^2_2 - S_d\,\Vert f \Vert_{2^*}^2} {\inf_{g\in\mathcal M} \Vert \nabla f - \nabla g\Vert^2_2 } \ge \frac{\Vert \nabla f \Vert^2_2 - S_d\,\Vert f \Vert_{2^*}^2} {\Vert \nabla f \Vert^2_2 } \geq \frac{\Vert \nabla f_n \Vert^2_2 - S_d\,\Vert f \Vert_{2^*}^2} {\Vert \nabla f _n\Vert^2_2} \,, \end{equation}
where the first inequality follows since $\Vert \nabla f\Vert_2^2$ is the limit of $\Vert \nabla(f-c\,g)\Vert_2^2$ as $c\to0$ for any fixed $g\in\mathcal M$, and the second inequality is a consequence of $\Vert \nabla f _n\Vert^2_2\leq\Vert \nabla f\Vert^2_2$ for all $n=0$, $1$, $2$,\dots proved in Theorem~\ref{thm:competingsymmetries}. By the assumption that alternative {\rm (a)} holds and by Lemma~\ref{lm:monotone}, we learn that
$$ \lim_{n \to \infty}\Vert \nabla f _n\Vert^2_2\leq\frac1\delta\,\lim_{n \to \infty}\,\inf_{g\in\mathcal M} \Vert \nabla f_n - \nabla g\Vert_2^2= \frac1\delta\left(\lim_{n \to \infty} \Vert \nabla f_n \Vert^2_2 - S_d\,\Vert f \Vert_{2^*} ^2\right)\,. $$
Since
$$ \lim_{n \to \infty} \Vert \nabla f_n \Vert_2^2 - S_d\,\Vert f \Vert_{2^*}^2 \ge \delta\,\lim_{n \to \infty} \Vert \nabla f_n \Vert_2^2 \ge \delta\,S_d\,\lim_{n \to \infty} \Vert f_n\Vert_{2^*}^2= \delta\,S_d\,\Vert f \Vert_{2^*}^2 >0\,, $$
we can take the limit as $n\to\infty$ on the right side of~\eqref{Efn} and compute the limit of the quotient as the quotient of the limits, which is at least $\delta$ by the first of the two displayed estimates. This proves the lemma. \end{proof}
\subsection{Continuous rearrangement}
Next, we analyze the case where the alternative (b) in Lemma~\ref{alternatives} holds. Let us introduce
\begin{equation} \label{eq:mudelta2} \mu(\delta) := \inf\left\{ \mathcal E(f) :\,0\leq f\in\dot{\mathrm H}^1({\mathord{\mathbb R}}^d)\setminus\mathcal M\,,\inf_{g\in\mathcal M} \Vert \nabla f - \nabla g\Vert^2_2 \leq \delta\,\Vert \nabla f \Vert^2_2 \right\}. \end{equation}
\begin{lemma}\label{mu1} For any $\delta\in (0,1]$, we have $\mu(\delta)\leq 1$. \end{lemma}
\begin{proof} By Lemma~\ref{innerproduct}, we have
$$ \inf_{g\in\mathcal M} \|\nabla f-\nabla g\|_2^2 = \Vert \nabla f \Vert^2_2 - S_d\,\sup_{g\in\mathcal M_1} \left(f,g^{2^*-1}\right)^2 $$
and it follows from H\"older's inequality that
$$ \sup_{g\in\mathcal M_1} \left(f,g^{2^*-1}\right)^2 \leq \|f\|_{2^*}^2\,. $$
Thus, the denominator in $\mathcal E(f)$ that enters the definition of $\mu(\delta)$ is at least as large as the numerator, so the quotient is at most 1. \end{proof}
Our goal in this subsection is to prove the following lower bound.
\begin{lemma}\label{alternativeb} Assume that $f\in\dot{\mathrm H}^1({\mathord{\mathbb R}}^d)\setminus \mathcal M$ satisfies
$$ \inf_{g\in\mathcal M} \Vert \nabla f - \nabla g\Vert_2^2 \ge \delta\,\Vert \nabla f \Vert_2^2 $$
for some $\delta\in(0,1)$ and suppose that in Lemma~\ref{alternatives} alternative {\rm (b)} holds for the sequence $(f_n)_{n\in{\mathord{\mathbb N}}}$ of Theorem~\ref{thm:competingsymmetries} defined by~\eqref{fn}. Then, with $\mu(\delta)$ defined by~\eqref{eq:mudelta2}, we have
$$ \mathcal E(f)\ge \delta\,\mu(\delta)\,. $$
\end{lemma}
For the proof of this lemma we introduce a continuous rearrangement flow, which interpolates between a function and its symmetric decreasing rearrangement. The basic ingredient for this flow is similar to a flow that Brock introduced~\cite{Brock,Brock2} and which interpolates between a function and its Steiner symmetrization with respect to a given hyperplane. Brock's construction, in turn, is based on ideas of Rogers~\cite{Rogers} and Brascamp--Lieb--Luttinger~\cite{BrascampLiebLuttinger}.
Our flow is obtained by glueing together infinitely many copies of Brock's flows with respect to a sequence of judiciously chosen hyperplanes. A similar construction was performed by Bucur and Henrot~\cite{BucurHenrot}; see also~\cite{Christ}. More specifically, for a given hyperplane $H$, Brock's flow interpolates between a given function $f$ and $f^{*H}$, the Steiner symmetrized function with respect to $H$. The family that interpolates between $f$ and $f^{*H}$ is denoted by $f^H_\tau$, $\tau \in [0,\infty]$, and we have
$$ f^H_0=f\,,\quad f^H_\infty = f^{*H}\,. $$
Further, for any $\tau$, $f^H_\tau$ and $f$ are equimeasurable, i.e.,
$$ \left|\left\{x \in {\mathord{\mathbb R}}^d: f^H_\tau(x) >t \right\}\right| = \left|\left\{x \in {\mathord{\mathbb R}}^d: f(x) >t \right\}\right| \quad\forall\,t>0\,. $$
Moreover, if $f\in \mathrm L^p({\mathord{\mathbb R}}^d)$ for some $1\leq p<\infty$, then $\tau\mapsto f^H_\tau$ is continuous in $\mathrm L^p({\mathord{\mathbb R}}^d)$. By choosing a sequence of hyperplanes we construct another flow $\tau \mapsto f_\tau$ that has the same properties but interpolates between $f$ and $f^*$, the symmetric decreasing rearrangement. In Appendix~\ref{contrearr} we explain this in more detail and prove the following properties that are important for our proof, assuming $f\in\dot{\mathrm H}^1({\mathord{\mathbb R}}^d)$. From the $\mathrm L^{2^*}({\mathord{\mathbb R}}^d)$ continuity of the flow we will deduce that
\begin{equation} \label{eq:flowprop2} \lim_{\tau \to \tau_0} \sup_{g\in\mathcal M_1}\left(f_\tau,g^{2^*-1}\right)^2 = \sup_{g\in\mathcal M_1}\left(f_{\tau_0},g^{2^*-1}\right)^2\,. \end{equation}
Concerning the gradient we prove the monotonicity
\begin{equation} \label{eq:flowprop1} \Vert \nabla f_{\tau_2} \Vert_2 \le \Vert \nabla f_{\tau_1} \Vert_2\,,\quad 0\leq\tau_1\leq\tau_2\leq\infty\,, \end{equation}
and the right continuity
\begin{equation} \label{eq:flowprop3} \lim_{\tau_2\to\tau_1^+} \Vert \nabla f_{\tau_2} \Vert_2 = \Vert \nabla f_{\tau_1} \Vert_2\,,\quad 0\leq\tau_1<\infty\,. \end{equation}
\begin{proof}[Proof of Lemma~\ref{alternativeb}] We begin by motivating and explaining the strategy of the proof. As before, we bound
\begin{equation} \label{eq:flowargument} \mathcal E(f) = \frac{\Vert \nabla f \Vert^2_2 - S_d\,\Vert f \Vert_{2^*}^2} {\inf_{g\in\mathcal M} \Vert \nabla f - \nabla g\Vert^2_2 } \ge \frac{\Vert \nabla f \Vert^2_2 - S_d\,\Vert f \Vert_{2^*}^2} {\Vert \nabla f \Vert^2_2 } \ge \frac{\Vert \nabla f_{n_0} \Vert^2_2 - S_d\,\Vert f_{n_0} \Vert_{2^*}^2} {\Vert \nabla f_{n_0} \Vert^2_2 }\,. \end{equation}
We could bound the right side further from below by replacing $f_{n_0}$ by $f_{n_0+1}$. This bound, however, might be too crude for our purposes and we proceed differently. The move from $f_{n_0}$ to $f_{n_0+1}$ consists of two steps, namely first applying a conformal rotation and second applying symmetric decreasing rearrangement. The first step leaves all terms on the right side invariant and we do carry out this step. The second step leaves the $2^*$-norm invariant, while the gradient term does not go up. In fact, the gradient term might go down too far. Therefore, we replace the application of the rearrangement by a continuous rearrangement flow. In order to make the notation less cumbersome we shall denote $Uf_{n_0}$ by $\mathsf f_0$ where $U$ denotes the conformal rotation~\eqref{eq:U}.
We denote by $\mathsf f_\tau$, $0\leq\tau\leq\infty$, the continuous rearrangement starting at $\mathsf f_0$ and note that, by construction,
\begin{equation} \label{eq:finfty} \mathsf f_{\infty}=f_{n_0+1}\,. \end{equation}
Ideally, we would like to find $\tau_0\in[0,\infty)$ such that
$$ \inf_{g\in\mathcal M} \Vert \nabla \mathsf f_{\tau_0} - \nabla g\Vert^2_2 = \delta\,\Vert \nabla \mathsf f_{\tau_0} \Vert^2_2\,. $$
Then the right side of~\eqref{eq:flowargument} is equal to
$$ 1 - S_d\,\frac{\Vert \mathsf f_{0} \Vert_{2^*}^2} {\Vert \nabla \mathsf f_{0} \Vert^2_2 } \ge 1 - S_d\,\frac{\Vert \mathsf f_{\tau_0} \Vert_{2^*}^2} {\Vert \nabla \mathsf f_{\tau_0} \Vert^2_2 } = \delta\,\frac{\Vert \nabla \mathsf f_{\tau_0} \Vert^2_2 - S_d\,\Vert \mathsf f_{\tau_0} \Vert_{2^*}^2}{\inf_{g\in\mathcal M} \Vert \nabla \mathsf f_{\tau_0} - \nabla g\Vert^2_2}\,, $$
which can be bounded from below by $\delta\,\mu(\delta)$, since $\mathsf f_{\tau_0}$ is admissible in the infimum~\eqref{eq:mudelta2}. This would prove the desired bound. The problem with this argument is that the existence of such a $\tau_0\in[0,\infty)$ is in general not clear, since neither of the terms $\inf_{g\in\mathcal M} \Vert \nabla \mathsf f_{\tau} - \nabla g\Vert^2_2$ and $\Vert \nabla \mathsf f_{\tau} \Vert^2_2$ needs to be continuous in~$\tau$. Nevertheless, we will be able to adapt the above argument to yield the same conclusion. We now turn to the details of the argument. Recalling that
$$ \inf_{g\in\mathcal M} \|\nabla \mathsf f_0 - \nabla g\|_2^2 \geq \delta\,\|\nabla \mathsf f_0\|_2^2\,, $$
we define
$$ \tau_0 := \inf\left\{ \tau\geq 0 :\,\inf_{g\in\mathcal M} \|\nabla \mathsf f_\tau - \nabla g\|_2^2 < \delta\,\|\nabla \mathsf f_\tau \|_2^2 \right\} $$
with the convention that $\inf\emptyset = \infty$. If $\tau_0\in(0,\infty]$ and $\tau<\tau_0$, then, similarly as before, the right side of~\eqref{eq:flowargument} is equal to
$$ \frac{\Vert \nabla \mathsf f_{0} \Vert^2_2-S_d\,\Vert \mathsf f_{0} \Vert_{2^*}^2} {\Vert \nabla \mathsf f_{0} \Vert^2_2 } = 1 - S_d\,\frac{\Vert \mathsf f_{0} \Vert_{2^*}^2} {\Vert \nabla \mathsf f_{0} \Vert^2_2 } \ge \frac{\Vert \nabla \mathsf f_{\tau} \Vert^2_2 - S_d\,\Vert \mathsf f_{\tau_0} \Vert_{2^*}^2}{\Vert \nabla \mathsf f_{\tau} \Vert^2_2}\ge\delta\, \frac{\Vert \nabla \mathsf f_{\tau} \Vert^2_2 - S_d\,\Vert \mathsf f_{\tau_0} \Vert_{2^*}^2}{\inf_{g\in\mathcal M} \|\nabla \mathsf f_\tau - \nabla g\|_2^2} \,, $$
where the last inequality arises from $\inf_{g\in\mathcal M} \|\nabla \mathsf f_\tau - \nabla g\|_2^2 \geq \delta\,\|\nabla \mathsf f_\tau \|_2^2$ for any $\tau\in[0,\tau_0)$. Taking the limit inferior as $\tau\to\tau_0^-$, we obtain
\begin{equation} \label{limtau0} \frac{\Vert \nabla \mathsf f_{0} \Vert^2_2-S_d\,\Vert \mathsf f_{0} \Vert_{2^*}^2} {\Vert \nabla \mathsf f_{0} \Vert^2_2 }\geq \delta\,\frac{\lim_{\tau\to\tau_0^-} \Vert \nabla \mathsf f_{\tau} \Vert^2_2 - S_d\,\Vert \mathsf f_{\tau_0} \Vert_{2^*}^2}{\liminf_{\tau\to\tau_0^-} \inf_{g\in\mathcal M} \Vert \nabla \mathsf f_{\tau} - \nabla g \Vert^2_2}\,. \end{equation}
Note that the denominator appearing here does not vanish. Indeed, we have
$$ \inf_{g\in\mathcal M} \|\nabla \mathsf f_\tau - \nabla g\|_2^2 \geq \delta\,\|\nabla \mathsf f_\tau \|_2^2\geq \delta\,S_d\,\| \mathsf f_\tau \|_{2^*}^2= \delta\,S_d\,\|f\|_{2^*}^2>0\quad\forall\,\tau\in[0,\tau_0) $$
and, as a consequence,
\begin{equation} \label{eq:biggerequal} \liminf_{\tau\to\tau_0^-} \inf_{g\in\mathcal M} \Vert \nabla \mathsf f_{\tau} - \nabla g \Vert^2_2\geq \delta\,S_d\,\|f\|_{2^*}^2>0 \,.
\end{equation}
The same inequality \eqref{limtau0} remains valid if $\tau_0=0$ and if we interpret $\lim_{\tau \to \tau_0^-}$ and $\liminf_{\tau\to\tau_0^-}$ as evaluation at $\tau_0=0$. At this point we find it convenient to apply Lemma~\ref{innerproduct} and use the representation
$$ \inf_{g\in\mathcal M} \Vert \nabla \mathsf f_{\tau} - \nabla g\Vert^2_2 = \Vert \nabla \mathsf f_{\tau} \Vert_2^2 - S_d\,\sup_{g\in\mathcal M_1} \left(\mathsf f_{\tau}, g^{2^*-1}\right)^2\,. $$
Using~\eqref{eq:flowprop2}, that is, the continuity of $\tau\mapsto\sup_{g\in\mathcal M_1} \left(\mathsf f_\tau, g^{2^*-1}\right)^2$, we see that
$$ \liminf_{\tau\to\tau_0^-} \inf_{g\in\mathcal M} \Vert \nabla \mathsf f_{\tau} - \nabla g \Vert^2_2 = \lim_{\tau\to\tau_0^-} \Vert \nabla \mathsf f_{\tau} \Vert^2_2 - S_d\,\sup_{g\in\mathcal M_1} \left(\mathsf f_{\tau_0}, g^{2^*-1}\right)^2\,. $$
Thus, the relevant quotient is equal to
\begin{equation} \label{eq:quotientproof} \frac{\lim_{\tau\to\tau_0^-} \Vert \nabla \mathsf f_{\tau} \Vert^2_2 - S_d\,\Vert \mathsf f_{\tau_0} \Vert_{2^*}^2}{\lim_{\tau\to\tau_0^-} \Vert \nabla \mathsf f_{\tau} \Vert^2_2 - S_d\,\sup_{g\in\mathcal M_1} \left(\mathsf f_{\tau_0}, g^{2^*-1}\right)^2}\,. \end{equation}
Our goal in the remainder of this proof is to show that this quotient is greater than or equal to $\mu(\delta)$. We will use the fact that
\begin{equation} \label{eq:holder} \sup_{g\in\mathcal M_1} \left(\mathsf f_{\tau_0}, g^{2^*-1}\right)^2 \le \Vert \mathsf f_{\tau_0} \Vert_{2^*}^2\,, \end{equation}
which follows from H\"older's inequality. We also note that equality holds here if and only if $\mathsf f_{\tau_0}\in\mathcal M$. Let us first handle the case where $\mathsf f_{\tau_0}\in\mathcal M$. Then by~\eqref{eq:biggerequal} and because of equality in~\eqref{eq:holder}, the quotient~\eqref{eq:quotientproof} is equal to 1, which by Lemma~\ref{mu1} can be further bounded from below by~$\mu(\delta)$, leading to the claimed bound. This completes the proof in the case $\mathsf f_{\tau_0}\in\mathcal M$ and in what follows we assume
\begin{equation*} \mathsf f_{\tau_0}\not\in\mathcal M\,. \end{equation*}
As a consequence of this assumption and~\eqref{eq:holder}, we have
\begin{equation} \label{eq:proofuseass2} \|\nabla \mathsf f_{\tau_0}\|_2^2 > S_d\,\| \mathsf f_{\tau_0}\|_{2^*}^2 \geq S_d\,\sup_{g\in\mathcal M_1} \left(\mathsf f_{\tau_0}, g^{2^*-1}\right)^2\,. \end{equation}
Next, we observe that for $\alpha > \beta$ the function $x\mapsto(x-\alpha)/(x-\beta)$ is monotone increasing on the interval $(\beta,\infty)$, since its derivative equals $(\alpha-\beta)/(x-\beta)^2>0$. This, together with the strict inequality in~\eqref{eq:proofuseass2}, implies that the quotient~\eqref{eq:quotientproof} can be bounded from below by
\begin{equation} \label{eq:lowerboundproof} \frac{\lim_{\tau\to\tau_0^-} \Vert \nabla \mathsf f_{\tau} \Vert^2_2 - S_d\,\Vert \mathsf f_{\tau_0} \Vert_{2^*}^2}{\lim_{\tau\to\tau_0^-} \Vert \nabla \mathsf f_{\tau} \Vert^2_2 - S_d\,\sup_{g\in\mathcal M_1} \left(\mathsf f_{\tau_0}, g^{2^*-1}\right)^2} \geq \frac{\Vert \nabla \mathsf f_{\tau_0} \Vert^2_2 - S_d\,\Vert \mathsf f_{\tau_0} \Vert_{2^*}^2}{\Vert \nabla \mathsf f_{\tau_0} \Vert^2_2 - S_d\,\sup_{g\in\mathcal M_1} \left(\mathsf f_{\tau_0}, g^{2^*-1}\right)^2}\,. \end{equation}
We now claim that
\begin{equation} \label{eq:goodineq} \inf_{g\in\mathcal M} \|\nabla \mathsf f_{\tau_0} - \nabla g\|_2^2 \leq \delta\,\|\nabla \mathsf f_{\tau_0}\|_2^2\,. \end{equation}
Once this is proved, we can bound the right side of~\eqref{eq:lowerboundproof} from below by $\mu(\delta)$.
This inequality is the claimed inequality after taking into account~\eqref{limtau0}. To prove~\eqref{eq:goodineq}, we first note that it is verified if $\tau_0=\infty$. Indeed, $\mathsf f_{\infty}=f_{n_0+1}$ by~\eqref{eq:finfty} and therefore, by assumption of alternative (b), $\inf_{g\in\mathcal M} \|\nabla \mathsf f_{\infty} - \nabla g\|_2^2 <\delta\,\|\nabla \mathsf f_{\infty}\|_2^2$. Now let $\tau_0<\infty$. We argue by contradiction and assume that
\begin{equation} \label{eq:case2} \inf_{g\in\mathcal M} \|\nabla \mathsf f_{\tau_0} - \nabla g\|_2^2 > \delta\,\|\nabla \mathsf f_{\tau_0}\|_2^2\,. \end{equation}
Because of this strict inequality and the definition of $\tau_0$ there is a sequence $(\sigma_k)_{k\in{\mathord{\mathbb N}}}$ in $(\tau_0,\infty)$ with $\lim_{k\to\infty}\sigma_k=\tau_0$ such that $\inf_{g\in\mathcal M} \|\nabla \mathsf f_{\sigma_k} - \nabla g\|_2^2 < \delta\,\|\nabla \mathsf f_{\sigma_k}\|_2^2$, that is,
$$ \|\nabla \mathsf f_{\sigma_k}\|_2^2 - S_d\,\sup_{g\in\mathcal M_1} \left(\mathsf f_{\sigma_k},g^{2^*-1}\right)^2 < \delta\,\|\nabla \mathsf f_{\sigma_k} \|_2^2\quad\forall\,k\in{\mathord{\mathbb N}}\,. $$
Letting $k\to\infty$ and using~\eqref{eq:flowprop2} as well as the right continuity of $\|\nabla \mathsf f_\tau\|_2^2$, see~\eqref{eq:flowprop3}, we deduce that
$$ \|\nabla \mathsf f_{\tau_0}\|_2^2 - S_d\,\sup_{g\in\mathcal M_1} \left(\mathsf f_{\tau_0},g^{2^*-1}\right)^2 \leq \delta\,\|\nabla \mathsf f_{\tau_0} \|_2^2\,. $$
This is the same as $\inf_{g\in\mathcal M} \|\nabla \mathsf f_{\tau_0} - \nabla g\|_2^2 \leq \delta\,\|\nabla \mathsf f_{\tau_0}\|_2^2$ and contradicts~\eqref{eq:case2}. This proves~\eqref{eq:goodineq} and completes the proof of the lemma. \end{proof}
\begin{remark} The above argument would be simpler if $\tau\mapsto \Vert \nabla \mathsf f_\tau \Vert_2^2$ were continuous for an appropriate choice of hyperplanes (see Appendix~\ref{contrearr}) in the definition of the flow. Since the flow is weakly continuous in $\dot{\mathrm H}^1({\mathord{\mathbb R}}^d)$, continuity of the norm is equivalent to (strong) continuity of the flow in $\dot{\mathrm H}^1({\mathord{\mathbb R}}^d)$. Thus, for continuity of the norm for an appropriate choice of hyperplanes, it is necessary that there is such a choice for which the Steiner symmetrizations approximate $f^*$ in $\dot{\mathrm H}^1({\mathord{\mathbb R}}^d)$. According to a theorem of Burchard~\cite{Burchard} this holds if and only if $f$ is co-area regular, i.e., if and only if the distribution function
$$ h\mapsto |\{x \in {\mathord{\mathbb R}}^d: f(x)>h,\,\nabla f(x) = 0\}| $$
has no absolutely continuous component. As shown by Almgren and Lieb~\cite{AlmgrenLieb}, both co-area regular and co-area irregular functions are dense for $d\geq 2$. \end{remark}
\subsection{Summary}
Let us summarize the result of this section.
\begin{corollary}\label{Cor:Summary} Take $\delta\in(0,1)$ and assume that $0\leq f\in\dot{\mathrm H}^1({\mathord{\mathbb R}}^d)\setminus\mathcal M$ satisfies
$$ \inf_{g\in\mathcal M} \Vert \nabla f - \nabla g\Vert_2^2 \ge \delta\,\Vert \nabla f \Vert_2^2\,. $$
Then, with $\mu(\delta)$ defined by~\eqref{eq:mudelta2}, we have
$$ \mathcal E(f) \ge \delta\,\mu(\delta)\,. $$
\end{corollary}
Indeed, by Lemma~\ref{alternatives} either alternative (a) or (b) holds. In the first case, we apply Lemmas~\ref{alternativea} and~\ref{mu1}, and in the second case, we apply Lemma~\ref{alternativeb}. In the next section we will bound $\mu(\delta)$ from below by an explicit function of $\delta$.
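Before doing so, let us record, purely for the reader's convenience, the H\"older computation invoked several times in this section (in Lemmas~\ref{lm:monotone} and~\ref{mu1} and in~\eqref{eq:holder}): for $g\in\mathcal M_1$ one has, with $(2^*)'=2^*/(2^*-1)$,
$$ \left(f, g^{2^*-1}\right) \le \Vert f \Vert_{2^*}\,\big\Vert g^{2^*-1}\big\Vert_{(2^*)'} = \Vert f \Vert_{2^*}\,\Vert g \Vert_{2^*}^{2^*-1} = \Vert f \Vert_{2^*}\,, $$
since $\Vert g\Vert_{2^*}=1$ for every $g\in\mathcal M_1$.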
\section{Analysis close to the manifold of optimizers}\label{sec:expansion}
\subsection{Expansions with remainder terms}
Our goal in this subsection is to prove the following proposition.
\begin{proposition}\label{expand} Let $X$ be a measure space and $u,r\in \mathrm L^q(X)$ for some $q\geq 2$ with $u\geq 0$ and $u+r\geq 0$. Assume also that $\int_X u^{q-1}\,r\,dx =0$.
\begin{itemize}
\item If $2\leq q\leq 3$, then
$$ \|u+r\|_q^2 \leq \|u\|_q^2 + \|u\|_q^{2-q} \left( (q-1) \!\int_X u^{q-2}\,r^2\,dx + \frac2q \!\int_X r_+^q\,dx \right). $$
\item If $3\leq q\leq 4$, then
\begin{align*} \|u+r\|_q^2 \leq&\, \|u\|_q^2\\ &+ \|u\|_q^{2-q} \left( (q-1) \!\int_X u^{q-2}\,r^2\,dx + \tfrac13\,(q-1)\,(q-2) \!\int_X u^{q-3}\,r^3\,dx + \tfrac2q \!\int_X |r|^q\,dx \right). \end{align*}
\item If $q=6$, then
\begin{multline*} \|u+r\|_q^2 \leq\, \|u\|_q^2 + \|u\|_q^{2-q}\,\Big( 5\!\int_X u^{q-2}\,r^2\,dx + \tfrac{20}3\!\int_X u^{q-3}\,r^3\,dx \\ + 5\!\int_X u^{q-4}\,r^4\,dx + 2\!\int_X u^{q-5}\,r^5\,dx + \tfrac13\!\int_X r^6\,dx \Big)\,. \end{multline*}
\end{itemize}
\end{proposition}
Similar bounds can also be derived for $q\in(4,\infty)\setminus\{6\}$. They become increasingly more complicated as $q$ passes an integer. We restrict ourselves to the case $q=6$, which is the only case in $(4,\infty)$ that we need, as it corresponds to the Sobolev exponent in dimension $d=3$. For the proof of the proposition, we need two lemmas, which we discuss next.
\begin{lemma}\label{ineq1} We have the following upper bounds.
\begin{itemize}
\item If $2\leq q\leq 3$, then, for all $x\ge -\,1$,
$$ (1+x)^q \leq 1+ q\,x + \tfrac12\,q\,(q-1)\,x^2 + x_+^q\,. $$
\item If $3\leq q\leq 4$, then, for all $x\ge -\,1$,
$$ (1+x)^q \leq 1+ q\,x + \tfrac12\,q\,(q-1)\,x^2 + \tfrac16\,q\,(q-1)\,(q-2) \,x^3 + |x|^q\,. $$
\item If $q=6$, then, for all $x\ge -\,1$,
$$ (1+x)^6 = 1+ 6\,x + 15\,x^2 + 20\,x^3 + 15\,x^4 + 6\,x^5 + x^6\,. $$
\end{itemize}
\end{lemma}
\begin{proof} We begin with the case $2\leq q\leq 3$ and set
$$ f(x) := (1+x)^q - 1 - q\,x - \tfrac12\,q\,(q-1)\,x^2 - x_+^q\,. $$
For any $x\ge -\,1$, we compute
\begin{align*} f'(x) & = q \left( (1+x)^{q-1} - 1 - (q-1)\,x - x_+^{q-1} \right), \\ f''(x) & = q\,(q-1) \left( (1+x)^{q-2} -1 -x_+^{q-2} \right). \end{align*}
For $-1\leq x\leq 0$ we clearly have $(1+x)^{q-2} -1 -x_+^{q-2} = (1-|x|)^{q-2} - 1 \leq 0$. For $x\geq 0$ we have, by a well-known elementary inequality (namely $(1+x)^\alpha \leq 1 + x^\alpha$ for $0\leq\alpha\leq1$ and $x\geq0$), $(1+x)^{q-2} -1 -x_+^{q-2} = (1+x)^{q-2} - 1 -x^{q-2} \leq 0$. To summarize, $f$ is concave on $[-1,\infty)$. We conclude that, for all $x\ge -\,1$,
$$ f(x) \leq f(0) + f'(0)\,x\,. $$
Since $f(0)=f'(0)=0$, this is the claimed inequality. We now turn to the case $3\leq q\leq 4$ and set this time
$$ f(x) := (1+x)^q - 1 - q\,x - \tfrac12\,q\,(q-1)\,x^2 - \tfrac16\,q\,(q-1)\,(q-2) \,x^3 - |x|^q\,. $$
Again, we compute
\begin{align*} f'(x) & = q \left( (1+x)^{q-1} - 1 - (q-1)\,x - \tfrac12 \,(q-1)\,(q-2)\,x^2 - |x|^{q-2}\,x \right),\\ f''(x) & = q\,(q-1) \,\Big( (1+x)^{q-2} - 1 - (q-2)\,x - |x|^{q-2} \Big)\,. \end{align*}
Since again $f(0)=f'(0)=0$, the claimed inequality will follow if we can show concavity of $f$ on $[-1,\infty)$, that is, $g\leq 0$ on $[-1,\infty)$ where
$$ g(x):= (1+x)^{q-2} - 1 - (q-2)\,x - |x|^{q-2}\,. $$
We compute
\begin{align*} g'(x) & = (q-2) \left( (1+x)^{q-3} - 1 - |x|^{q-4}\,x \right),\\ g''(x) & = (q-2)\,(q-3) \left( (1+x)^{q-4} - |x|^{q-4} \right). \end{align*}
We discuss $g$ separately on $[-1,0]$ and on $(0,\infty)$.
\begin{itemize}
\item[$\circ$] We begin with the second case. For $x>0$ we have, by the same elementary inequality as before, $(1+x)^{q-3}-1-x^{q-3}< 0$. Thus, $g'<0$ on $(0,\infty)$. Since $g(0)=0$, we deduce $g<0$ on $(0,\infty)$.
\item[$\circ$] Now let us consider the interval $[-1,0]$. We see that $g''>0$ on $(-1,-1/2)$ and $g''<0$ on $(-1/2,0)$. Therefore $g'$ is increasing on $(-1,-1/2)$ and decreasing on $(-1/2,0)$. Since $g'(-1)=g'(0)=0$, we conclude that $g'>0$ on $(-1,0)$ and therefore $g$ is increasing on $(-1,0)$. Since $g(0)=0$ we conclude that $g<0$ on $[-1,0)$, as claimed.
\end{itemize}
If $q=6$ we simply expand $(1+x)^6$. This completes the proof of the lemma. \end{proof}
We will also use the following elementary lemma.
\begin{lemma}\label{ineq2} If $q\geq 2$, then, for all $x\geq 0$,
$$ (1+x)^{\frac 2q} \leq 1 + \tfrac2q\,x\,. $$
\end{lemma}
\begin{proof} Since $2/q\leq 1$, the function $x\mapsto(1+x)^{2/q}$ is concave on $[0,\infty)$ and therefore lies below its tangent line at $x=0$, which is the right side. \end{proof}
\begin{proof}[Proof of Proposition~\ref{expand}] We only give the proof in the case $2\leq q\leq 3$. We have, by Lemma~\ref{ineq1}, almost everywhere on $X$,
$$ (u+r)^q \leq u^q + q\,u^{q-1}\,r + \tfrac12\,q\,(q-1)\,u^{q-2}\,r^2 + r_+^q\,. $$
Integrating this and using the assumed orthogonality condition, we obtain
$$ \int_X (u+r)^q\,dx \leq \int_X u^q\,dx + \tfrac12\,q\,(q-1) \!\int_X u^{q-2}\,r^2\,dx + \int_X r_+^q\,dx\,. $$
Applying Lemma~\ref{ineq2}, we obtain
$$ \left( \int_X (u+r)^q\,dx \right)^\frac2q \leq \left( \int_X u^q\,dx \right)^\frac2q + \left( \int_X u^q\,dx \right)^\frac{2-q}q \left( (q-1)\! \int_X u^{q-2}\,r^2\,dx + \tfrac2q\! \int_X r_+^q\,dx \right). $$
This is the claimed inequality for $2\leq q\leq 3$. The proof in the remaining cases proceeds similarly. \end{proof}
\subsection{Application to the Sobolev functional}
Throughout this subsection, we assume that $d\geq 3$ and we set
$$ q=2^*=\frac{2\,d}{d-2}\,. $$
We recall that we denote by $S_d$ the optimal constant in the Sobolev inequality $\Vert \nabla f \Vert^2_2 \geq S_d\,\|f\|_q^2$ and by $\mathcal M$ the set of all optimizers in this inequality.
\begin{proposition}\label{beexpand} Let $q=2^*$, $0\leq f\in\dot{\mathrm H}^1({\mathord{\mathbb R}}^d)$ and $u\in\mathcal M$ be such that
$$ \|\nabla f-\nabla u\|_2 = \inf_{g\in\mathcal M} \|\nabla f-\nabla g\|_2\,. $$
Set $r:=f-u$ and $\sigma := \|r\|_q/\|u\|_q$.
\begin{itemize}
\item If $d\geq 6$, we have
$$ \Vert \nabla f \Vert^2_2 - S_d\,\|f\|_q^2 \geq \int_{{\mathord{\mathbb R}}^d}\Big(|\nabla r|^2 - S_d\,(q-1)\,\|u\|_q^{2-q}\,u^{q-2}\,r^2 \Big)\,dx - \tfrac2q\,\|\nabla r\|^2_2\,\sigma^{q-2}\,. $$
\item If $d=4$, $5$, we have
\begin{multline*} \Vert \nabla f \Vert^2_2 - S_d\,\|f\|_q^2 \\ \geq \int_{{\mathord{\mathbb R}}^d} \left( |\nabla r|^2 - S_d\,(q-1)\,\|u\|_q^{2-q}\,u^{q-2}\,r^2 \right)dx- \|\nabla r\|^2_2 \left( \tfrac13\,(q-1)\,(q-2)\,\sigma + \tfrac2q\,\sigma^{q-2} \right). \end{multline*}
\item If $d=3$, we have
$$ \Vert \nabla f \Vert^2_2 - S_d\,\|f\|_q^2 \geq \int_{{\mathord{\mathbb R}}^d} \left( |\nabla r|^2 - 5\,S_3\,\|u\|_6^{-4}\,u^4\,r^2 \right)dx - \|\nabla r\|^2_2 \left( \tfrac{20}3\,\sigma + 5\,\sigma^2 + 2\,\sigma^3 + \tfrac13\,\sigma^4 \right). $$
\end{itemize}
\end{proposition}
\begin{proof} This follows directly from Proposition~\ref{expand} in the previous subsection. We note that the orthogonality conditions
\begin{equation} \label{eq:orth} (\nabla r,\nabla u)=(\nabla f-\nabla u,\nabla u)=0\quad\text{and}\quad (r,u^{q-1})=(f-u,u^{q-1})=0 \end{equation}
are satisfied because of the choice of $u$.
Indeed, since $\mathcal M$ is closed under multiplication by a scalar, we find that
$$ 0 = \tfrac d{d\alpha}\|\nabla f-\alpha\,\nabla u\|^2_2\,\big|_{\alpha=1} = 2\,\|\nabla u\|^2_2 - 2\,(\nabla f,\nabla u) = -\,2\,(\nabla r,\nabla u) = -\,2\,c_u\,(r,u^{q-1})\,, $$
where, in the last equality, we used the equation $-\Delta u = c_u\,u^{q-1}$ with $c_u:=\|\nabla u\|^2_2/\|u\|_q^q$. Finally, we use the Sobolev inequality $S_d\,\|r\|_q^2 \leq \|\nabla r\|^2_2$ for the term multiplying the quantity involving $\sigma$. \end{proof}
Let us recall a spectral gap inequality which appears, for instance, in Rey's paper~\cite[Appendix~D]{Rey} slightly before the work of Bianchi and Egnell~\cite{BianchiEgnell}.
\begin{lemma}\label{gap} Let $d\ge3$, $q=2^*$, $f\in\dot{\mathrm H}^1({\mathord{\mathbb R}}^d)$ and $u\in\mathcal M$ be such that $\|\nabla f-\nabla u\|_2 = \inf_{g\in\mathcal M} \|\nabla f-\nabla g\|_2$. Then $r:=f-u$ satisfies
$$ \int_{{\mathord{\mathbb R}}^d} \left( |\nabla r|^2 - S_d\,(q-1)\,\|u\|_q^{2-q}\,|u|^{q-2}\,r^2 \right)dx \geq \frac4{d+4} \int_{{\mathord{\mathbb R}}^d} |\nabla r|^2\,dx\,. $$
\end{lemma}
\begin{proof} By translation and dilation invariance, we may assume that $u(x) = c\,\big(1+|x|^2\big)^{-(d-2)/2}$ for some constant $c>0$. Then, by inverse stereographic projection and the discussion in Subsection \ref{sec:competing}, the question becomes to prove the inequality
\begin{equation} \label{eq:gapproof} \int_{{\mathord{\mathbb S}}^d} \left( |\nabla R|^2 - d\,|R|^2 \right)d\omega \geq \frac4{d+4} \int_{{\mathord{\mathbb S}}^d} \left( |\nabla R|^2 + \tfrac14\,d\,(d-2)\,|R|^2 \right)d\omega \end{equation}
for all $R$ that are orthogonal to spherical harmonics of degrees $\ell\leq 1$. Diagonalizing the Laplace--Beltrami operator, the inequality becomes
$$ \ell\,(\ell+d-1) - d \geq \frac4{d+4} \left( \ell\,(\ell+d-1) + \tfrac14\,d\,(d-2)\right) \quad\text{for all}\,\ell\geq 2\,. $$
This is elementary to check. \end{proof}
\begin{remark} If we look at the quadratic form $r\mapsto\int_{{\mathord{\mathbb R}}^d} \left( |\nabla r|^2 - S_d\,(q-1)\,\|u\|_q^{2-q}\,|u|^{q-2}\,r^2 \right)dx$, one may wonder why no essential spectrum should be taken into account. This is indeed an issue (see for instance~\cite[Proposition~1.16]{BonforteDolbeaultNazaretSimonov}) if the form is defined on $\mathrm L^2({\mathord{\mathbb R}}^d)$, but not anymore if we consider the operator $-\,u^{2-q}\,\Delta-S_d\,(q-1)\,\|u\|_q^{2-q}$ on $\mathrm L^2({\mathord{\mathbb R}}^d,\,u^{q-2}\,dx)$ as the image of $\mathrm H^1({\mathord{\mathbb S}}^d)$ through the stereographic projection is compactly embedded in $\mathrm L^2({\mathord{\mathbb R}}^d,\,u^{q-2}\,dx)$, so that only discrete spectrum has to be taken into account. With $r\in\dot{\mathrm H}^1({\mathord{\mathbb R}}^d)\hookrightarrow\mathrm L^{2^*}({\mathord{\mathbb R}}^d)\hookrightarrow\mathrm L^2({\mathord{\mathbb R}}^d,\,u^{q-2}\,dx)$, the spectral gap computation in the proof of Lemma~\ref{gap} is justified as already noted in~\cite[Appendix]{BianchiEgnell}.\end{remark}
Now we insert the spectral gap inequality in the expansion and obtain the following.
\begin{corollary} Let $q=2^*$ and $0\leq f\in\dot{\mathrm H}^1({\mathord{\mathbb R}}^d)$. Set $\mathcal D(f):=\inf_{g\in\mathcal M} \|\nabla f-\nabla g\|_2$ and $\tau := \mathcal D(f)/(\Vert \nabla f \Vert^2_2-\mathcal D(f)^2)^{1/2}$.
\begin{itemize}
\item If $d\geq 6$, we have
\begin{align*} \Vert \nabla f \Vert^2_2 - S_d\,\|f\|_q^2 & \geq \big( \tfrac4{d+4} - \tfrac2q\,\tau^{q-2} \big)\, \mathcal D(f)^2\,.
\end{align*}
\item If $d=4$, $5$, we have
\begin{align*} \Vert \nabla f \Vert^2_2 - S_d\,\|f\|_q^2 & \geq \big( \tfrac4{d+4} - \tfrac13\,(q-1)\,(q-2)\,\tau - \tfrac2q\,\tau^{q-2} \big)\, \mathcal D(f)^2\,. \end{align*}
\item If $d=3$, we have
$$ \Vert \nabla f \Vert^2_2 - S_d\,\|f\|_q^2 \geq \left( \tfrac4{7} - \tfrac{20}3\,\tau - 5\,\tau^2 - 2\,\tau^3 - \tfrac13\,\tau^4 \right) \mathcal D(f)^2\,. $$
\end{itemize}
\end{corollary}
\begin{proof} Let $u\in\mathcal M$ be such that $r=f-u$ satisfies $\|\nabla r\|_2=\mathcal D(f)$. Set $\sigma := \|r\|_q/\|u\|_q$. Then, by combining Proposition~\ref{beexpand} and Lemma~\ref{gap}, we obtain the following bounds.
\begin{itemize}
\item[$\circ$] If $d\geq 6$,
\begin{align*} \Vert \nabla f \Vert^2_2 - S_d\,\|f\|_q^2 & \geq \big( \tfrac4{d+4} - \tfrac2q\,\sigma^{q-2} \big)\, \mathcal D(f)^2\,. \end{align*}
\item[$\circ$] If $d=4$, $5$,
\begin{align*} \Vert \nabla f \Vert^2_2 - S_d\,\|f\|_q^2 & \geq \big( \tfrac4{d+4} - \tfrac13\,(q-1)\,(q-2)\,\sigma - \tfrac2q\,\sigma^{q-2} \big)\, \mathcal D(f)^2\,. \end{align*}
\item[$\circ$] If $d=3$,
$$ \Vert \nabla f \Vert^2_2 - S_d\,\|f\|_q^2 \geq \left( \tfrac4{7} - \tfrac{20}3\,\sigma - 5\,\sigma^2 - 2\,\sigma^3 - \tfrac13\,\sigma^4 \right) \mathcal D(f)^2\,. $$
\end{itemize}
We slightly weaken these inequalities, but convert them into a more explicit form. Set
$$ \rho := \mathcal D(f)/\|\nabla f\|_2\,. $$
We recall that $(\nabla f,\nabla u) = \|\nabla u\|^2_2$ was shown in~\eqref{eq:orth}. This implies
$$ \rho^2\,\Vert \nabla f \Vert^2_2 = \mathcal D(f)^2 = \|\nabla f-\nabla u\|^2_2 = \Vert \nabla f \Vert^2_2 - \|\nabla u\|^2_2\,, $$
that is, $\|\nabla u\|_2 = \sqrt{1-\rho^2}\,\|\nabla f\|_2$. As a consequence,
\begin{multline*} \sigma = \|f-u\|_q/\|u\|_q \leq \|\nabla f-\nabla u\|_2/\|\nabla u\|_2\\ = (1-\rho^2)^{-1/2}\,\|\nabla f - \nabla u\|_2/\|\nabla f\|_2 = (1-\rho^2)^{-1/2}\,\rho = \tau\,. \end{multline*}
Thus we can replace $\sigma$ by $\tau$ in the above bounds and obtain the assertion of the corollary. \end{proof}
We can reformulate this corollary using the notation $\mathcal D(f)$, $\delta\mapsto\nu(\delta)$ as in~\eqref{eq:nu}, $\mathsf m$ explicitly defined by~\eqref{eq:mudelta} and, according to~\eqref{eq:mudelta2},
$$ \mu(\delta) = \inf\left\{ \frac{\Vert \nabla f \Vert^2_2 - S_d\,\|f\|_q^2}{\mathcal D(f)^2} :\,0\leq f\in\dot{\mathrm H}^1({\mathord{\mathbb R}}^d)\setminus\mathcal M\,,\,\mathcal D(f)^2\leq \delta\,\Vert \nabla f \Vert^2_2 \right\}. $$
\begin{corollary}\label{Cor:close} With the above notations, we have $\mu(\delta)\ge\mathsf m\big(\nu(\delta)\big)$. \end{corollary}
This follows immediately from the previous corollary if we note that
$$ \tau = \frac{\mathcal D(f)/\|\nabla f\|_2}{\sqrt{1-\mathcal D(f)^2/\Vert \nabla f \Vert^2_2}} \leq \nu(\delta)\,. $$
Combining Corollaries \ref{Cor:Summary} and \ref{Cor:close} completes the proof of Theorem~\ref{main}. \qed
A standard consequence of the above analysis is the estimate
\begin{equation} \label{eq:upperbound} \inf_{0\leq f\in\dot{\mathrm H}^1({\mathord{\mathbb R}}^d)\setminus\mathcal M} \mathcal E(f) \le\frac4{d+4}\,. \end{equation}
Indeed, this bound is derived using the fact that the constant $4/(d+4)$ in inequality \eqref{eq:gapproof} is optimal and attained if (and only if) $R$ is a spherical harmonic of degree two.
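As an elementary cross-check of this last claim (a short computation recorded only for convenience), note that the difference of the two sides of the inequality at the end of the proof of Lemma~\ref{gap} equals
$$ \ell\,(\ell+d-1) - d - \frac4{d+4} \left( \ell\,(\ell+d-1) + \tfrac14\,d\,(d-2)\right) = \frac{d}{d+4}\,\big(\ell\,(\ell+d-1)-2\,(d+1)\big)\,, $$
which vanishes at $\ell=2$, where both sides equal $d+2$, and is strictly positive for $\ell\geq3$; the constant $4/(d+4)$ is thus indeed determined by the degree-two spherical harmonics.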
\section{Introduction}
Uniform hyperbolicity was originally intended to encompass a residual, or at least dense, subset of all smooth dynamical systems \cite{Anosov1967, Smale1967}, although it was soon realized that this is not true \cite{AS1970,Newhouse1970}. Uniformly hyperbolic systems are structurally stable \cite{Anosov1967} and admit a very precise topological description of their behavior: there are finitely many compact transitive invariant subsets such that every forward orbit of the system accumulates on one of them \cite{Smale1967}. The dynamics near such attractors may be quite chaotic and thus essentially unpredictable after a long period of time. However, Sinai, Ruelle and Bowen demonstrated that these attractors are the supports of physical measures and thus behave well from a statistical point of view~\cite{Bowen1975, Ruelle1976, Sinai1972}. Kifer further showed that such systems are stochastically stable (i.e.~the physical measures vary continuously) under small random perturbations~\cite{Kifer1974}. Building on this background, in a conference in honor of Douady in 1995 (refer to \cite{Palis2000,Palis2005}), Palis developed a global picture recovering, in a more probabilistic formulation, much of the paradigm of uniformly hyperbolic systems. Namely, Palis conjectured that every smooth dynamical system can be approximated by systems having only finitely many attractors, which support physical measures that describe the time averages for Lebesgue almost all points and are stochastically stable under small random perturbations. An important contribution to the stochastic part of the global Palis conjecture was provided by Brin--Kifer~\cite{BK1987} and Ara\'ujo~\cite{Araujo2000}. Under some natural nondegeneracy assumptions on the noise, they found finitely many absolutely continuous ergodic stationary measures (in particular physical measures) with pairwise disjoint supports and whose statistical basins of attraction cover the whole ambient space almost everywhere. From now on, we refer to this property as \emph{finitude of physical measures} or~{\tt(FPM)} for short\footnote{All the measures considered in this paper will be probabilities. Also, the context of this paper is the study of absolutely continuous ergodic stationary probabilities (which are, in particular, physical). For that reason, the property is simply called ``finitude of physical measures''.}. See Definition~\ref{dfn:11} for a more precise description. To be more specific about the nondegeneracy assumptions, Brin and Kifer assumed that the transition probability has a continuous density, while Ara\'ujo assumed that the transition probability has a density whose support includes a ball whose diameter is uniformly bounded from below. In the last two decades, their nondegeneracy conditions have appeared as the main assumption in many works on stochastic stability around the Palis conjecture (especially for systems without uniform hyperbolicity); see e.g.~\cite{Araujo2001, APP2014, AT2005, BV1996, BV2006}.
\enlargethispage{3\baselineskip}
In this paper, we will refine the Brin--Kifer and Ara\'ujo conditions by showing that {\tt (FPM)} follows merely from the assumption that the transition probability is absolutely continuous. This is a significant improvement for applications. For instance, random dynamical systems generated by additive noise (the most common noise in applications; see Remark~\ref{rm:0302} for details) satisfy these conditions.
Our proof is quite different from~\cite{Araujo2000, BK1987}. It is based on the theory of Markov operators, which enables us to obtain much stronger properties than {\tt (FPM)}, such as the exponential decay of the annealed correlation functions of each physical measure for some iterate of the random map. Moreover, we give a \emph{necessary and sufficient} condition for {\tt (FPM)} in terms of Markov operators, extending a previous work by Inoue and Ishitani~\cite{II1991} for Perron--Frobenius operators. That is, we introduce the notion of \emph{mean constrictivity} for Markov operators and show its equivalence with both {\tt (FPM)} and the property of \emph{asymptotic periodicity in mean} introduced by Inoue and Ishitani. This is also a generalization of the result by Lasota, Li and Yorke in~\cite{LLY1984} on the equivalence between constrictivity and asymptotic periodicity of Markov operators, which served as a stepping-stone for several later papers studying the existence of absolutely continuous invariant densities for stochastic operators (see e.g.~\cite{LM}). This functional approach allows us to give a hierarchy of classes of Markov operators, which implies, together with plenty of examples indicating the difference between the classes, that {\tt (FPM)} is much weaker than the Brin--Kifer and Ara\'ujo conditions; refer to Figures~\ref{fig:hierarchy} and~\ref{fig:subhierarchy} (and Remark~\ref{rem:AraujoD}). For instance, we include some random dynamical systems generated by finitely many continuous maps (so-called iterated function systems) and by multiplicative noise with a common fixed point (another important class of noise in applications). These systems never have absolutely continuous transition~probabilities.
\subsection{Finitude of physical measures {\tt(FPM)}}\label{ss:1.1}
Let $X$ be a Polish space equipped with a probability measure $m$ on the Borel $\sigma$-field $\mathscr B$ of $X$. Let $(T , \mathscr A, p)$ be a probability space and consider the product space $( \Omega , \mathscr F, \mathbb P)=(T^\mathbb{N}, \mathscr{A}^\mathbb{N},p^\mathbb{N})$. In this paper, we will deal with a measurable map $f: T\times X \to X$ where we denote $f_t=f(t,\cdot)$ for $t\in T$ and consider the following nonautonomous iterations
$$ f^0_\omega=\mathrm{id} \quad \text{and} \quad f^n_\omega= f_{\omega_{n}}\circ \dots \circ f_{\omega_1} \ \ \text{for} \ n\in \mathbb N \ \text{and $\omega =(\omega _1, \omega _2, \ldots )\in \Omega$}. $$
Since we consider the Bernoulli probability $\mathbb{P}=p^\mathbb{N}$ on $\Omega$, the sequence $\{ \omega =(\omega _1, \omega _2, \ldots ) \mapsto \omega_n\} _{n\geq 1}$ of noises at each step is an independent and identically distributed random process. Thus, the sequence $\{f_\omega ^n(x_0)\} _{n\geq 0}$ can be viewed as a (time-homogeneous) discrete-time Markov chain $\{ X_n\} _{n\geq 0}$ on $( \Omega , \mathscr F, \mathbb P)$ with initial value $X_0(\omega)=x_0$ and transition probability given by
\begin{equation}\label{eq:0302a} P(x,A)= p(\{t\in T: \, f_t(x)\in A\})=\mathbb{P}(\{\omega\in \Omega: f_\omega(x)\in A\}) \quad \text{for $x\in X$, \ $A\in\mathscr{B}$}. \end{equation}
Recall that a nonnegative function $Q(x,A)$ defined for $x \in X$ and $A \in \mathscr{B}$ is called a \emph{(Markov) transition probability} if
\begin{enumerate}[label=(\roman*)]
\item \label{eq:Markov1} $Q(x,\cdot)$ is a probability measure for every fixed $x\in X$,
\item \label{eq:Markov2} $Q(\cdot, A)$ is a $\mathscr{B}$-measurable function for every fixed $A\in \mathscr{B}$.
\end{enumerate}
We shall also use the $n$-th transition probability $P^n(x,A)$ of the process $\{ X_n\} _{n\geq 0}$~given~by
\begin{align*} P^n(x,A)& = \mathbb{P}(\{\omega\in \Omega: f^n_\omega(x)\in A\}) \end{align*}
which is also a Markov transition probability. We refer to~\cite{Arnoldbook} and~\cite{MT2012} for general theories of random dynamical systems and Markov processes, respectively. A probability measure $\mu$ on $X$ is said to be a \emph{stationary measure} of $f$ if
\begin{equation} \label{def:stationary} \mu(A) = \int (f_t)^{}_*\mu(A)\, dp(t) \quad \text{for all $A \in \mathscr{B}$}. \end{equation}
Here, $g^{}_*\nu$ denotes the pushforward of a measure $\nu$ by a measurable map $g$, that is, $g^{}_*\nu (A)=\nu( g^{-1}A)$ for $A\in \mathscr B$. It is well known that $\mu$ is a stationary measure of $f$ if and only if $\mathbb{P}\times \mu$ is an invariant measure for the skew-product
\begin{equation}\label{eq:skew} F: \Omega\times X \to \Omega\times X, \quad F(\omega,x)=(\sigma\omega,f_\omega(x)) \end{equation}
where $\sigma$ is the shift operator defined on $\Omega$ (see~\cite{O83}). Moreover, we say that $\mu$ is ergodic if $\mathbb{P}\times \mu$ is an ergodic probability measure of $F$. This is equivalent to asking that any $A\in\mathscr{B}$ such that $P(x,A)\geq 1_A(x)$ for every $x\in X$, where $1_A$ is the indicator function of $A$, has $\mu$-measure 0 or 1 (see~\cite[Appendix A.1]{Kifer1986} and Appendix~\ref{appendix:B}). Finally, recall that a $\nu$-null set is a measurable set that has $\nu$-measure zero, and a measure $\nu$ is said to be absolutely continuous with respect to a measure $\lambda$ if any $\lambda$-null set is $\nu$-null. Our goal is to find a condition on $f$ under which {\tt (FPM)} holds.
\begin{dfn}\label{dfn:11} We say that $f$ satisfies {\tt (FPM)} if there exist finitely many ergodic stationary probability measures $\mu _1,\dots, \mu_r$ of $f$ such that
{ \begin{enumerate}[topsep=-0.1cm]
\item[1)] they are absolutely continuous with respect to $m$;
\item[2)] they have pairwise disjoint supports (up to an $m$-null set);
\item[3)] the union of the fiberwise statistical basins of attraction of these measures $\mathbb{P}$-almost surely covers $X$ up to a set of null $m$-measure. That is,
\begin{equation*}\label{eq:10100a} m\left(X\setminus (B_\omega(\mu _1)\cup \dots \cup B_\omega(\mu_r))\right)=0 \quad \text{for $\mathbb P$-almost every $ \omega \in \Omega $} \end{equation*}
where \\[-0.5cm]
\begin{equation*}\label{eq:10100b} B_\omega(\mu _i)=\left\{ x\in X : \, \lim _{n\to \infty} \frac{1}{n} \sum _{j=0}^{n-1} \delta_{f_{ \omega} ^{j}(x)} = \mu _i \right\} \quad \text{for $i=1,\dots,r$}. \end{equation*}
Here, the limits of measures are taken in the weak*-topology.
\end{enumerate}} \end{dfn}
It is not difficult to show (see Lemma~\ref{lem-mu1}) that, for the measure $\mu_i$ above, $B_\omega(\mu_i)$ has full $\mu_i$-measure for $\mathbb{P}$-almost all $\omega\in \Omega$ and $i=1,\dots,r$. Moreover, since $\mu_i$ is absolutely continuous with respect to $m$, it holds that $m(B_\omega(\mu _i))>0$ for $\mathbb{P}$-almost every $\omega\in \Omega$. Therefore, $\mu_i$ is a \emph{physical measure} with respect to the reference measure $m$ in the sense that $\mathbb{P}$-almost surely the fiberwise statistical basin of attraction $B_\omega(\mu_i)$ has positive $m$-measure for each $i=1, \ldots ,r$. See more details on the notion of physical measures for random maps in Remark~\ref{rem:physical}.
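Although the arguments in this paper are purely theoretical, the objects in Definition~\ref{dfn:11} are easy to probe numerically. The following short Python sketch (not part of the paper's arguments; the base map, the noise distribution and all parameter values are merely illustrative choices) approximates the empirical measure $\frac1n\sum_{j=0}^{n-1}\delta_{f_\omega^j(x_0)}$ along one sampled noise path $\omega$, for a circle map with additive noise in the spirit of Remark~\ref{rm:0302} below.

```python
import numpy as np

def empirical_measure(f0, x0, n=100_000, noise=0.05, bins=200, seed=0):
    """Approximate (1/n) * sum_j delta_{f_omega^j(x0)} along one noise path omega.

    f0    : deterministic part of the additive-noise map on the circle [0, 1)
    noise : increments t_j are uniform on [-noise, noise], an illustrative
            absolutely continuous distribution (any density would do)
    """
    rng = np.random.default_rng(seed)
    orbit = np.empty(n)
    x = x0
    for j in range(n):
        orbit[j] = x
        t = rng.uniform(-noise, noise)   # sample the noise omega_{j+1}
        x = (f0(x) + t) % 1.0            # f_t(x) = f0(x) + t  (mod 1)
    # Histogram of the orbit = density of the empirical measure w.r.t. m
    hist, edges = np.histogram(orbit, bins=bins, range=(0.0, 1.0), density=True)
    return hist, edges

if __name__ == "__main__":
    f0 = lambda x: (2.0 * x) % 1.0       # doubling map as the base map
    hist, _ = empirical_measure(f0, x0=0.1)
    # The doubling map preserves Lebesgue measure, and so does the additive
    # noise, so the estimated stationary density should be roughly flat:
    print(hist.min(), hist.max())
```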
In the deterministic case, that is, when $f_t=g$ for all $t\in T$ with some $g:X\to X$, {\tt(FPM)} means that the dynamics of $g$ can be statistically understood through finitely many ergodic invariant physical probability measures that describe the time averages of almost all points in $X$. When and where this property holds has attracted great interest in dynamical systems theory, and we refer e.g.~to~\cite{BDV2006, Palis2000}. Notice that there are some obstacles to the finitude of physical ergodic invariant probability measures, such as the so-called Newhouse domains~\cite{Colli1998,GST1993,Leal2008,Newhouse1979,PV1994} in which generically infinitely many attractors coexist. Moreover, Berger has recently shown that the coexistence of infinitely many attractors is locally Kolmogorov typical in parametric families of endomorphisms of surfaces and diffeomorphisms of higher dimensional manifolds, see~\cite{BR2021b,BR2021a,Berger2016,Berger2017}. In contrast to the Palis conjecture, the so-called Takens' last problem asked if it is possible to construct a persistent class of dynamics on a compact manifold where time averages do not exist (so-called historic behavior; cf.~\cite{Ruelle2001,Takens2008}) on a set of positive Lebesgue measure. Some important advances on this question have been obtained in~\cite{KS2017}, where the authors constructed a locally dense class of $C^r$-surface diffeomorphisms ($r\geq 2$) with historic behavior on a positive Lebesgue measure set. This result has been extended to $C^\infty$ and real analytic surface diffeomorphisms in \cite{BB2020} and to $C^r$-diffeomorphisms with $r\geq 1$ in dimension three and higher in~\cite{Barrientos2021} (see also~\cite{KNS2021} for a specific three-dimensional example). Ara\'ujo \cite{Araujo2000} gave a quite useful sufficient condition (see~(1) below) for~{\tt(FPM)} for the following class of random maps. First of all, here $X$ is a compact Riemannian manifold and $T$ is the unit ball of a Euclidean space, with $m$ and $p$ being the normalized Lebesgue measures on $X$ and $T$, respectively. Now,
\begin{enumerate}[label=(\arabic*),leftmargin=1cm]
\item $f: T \times X\to X$ is a continuous map and $f_t$ is a $C^1$-diffeomorphism for every~$t\in T$; there are $n_0\in \mathbb N$ and a positive number $\xi_0$ such that for all $n\geq n_0$ and $x\in X$
\begin{enumerate}[label=(\Alph*),leftmargin=1cm]
\item $\left\{ f^n_\omega (x) : \omega \in \Omega \right\}$ contains the ball of radius $\xi_0$ centered at $f^n_0(x)$;
\item $P^n(x,\cdot)$ is absolutely continuous with respect to $m$.
\end{enumerate}
\end{enumerate}
Here, we wrote $f^n_0$ for the usual $n$-th iteration of the single map $f_0 : X\to X$. Condition~(A) above is a topological requirement. On the other hand, since the $n$-th and $(n-1)$-th transition probabilities are related by the recurrence
\begin{equation} \label{eq:recurence} P^{n}(x,A)=\int P^{n-1}(y,A) P(x,dy) \end{equation}
we obtain condition (B) by only requiring that $P^{n_0}(x,\cdot)$ be absolutely continuous with respect to $m$ for all $x\in X$. A more demanding requirement is the continuity of $f$ in item~(1). For instance, such a requirement is not necessary for the approach by Brin and Kifer~\cite[Section 2]{BK1987} to get~{\tt(FPM)}.
These authors dealt with abstract Markov chains $\{ X_n\}_{n\geq 0}$ on a compact Riemannian manifold $X$ with transition probability $P(x,A)$ having a continuous density in the following sense:
\begin{enumerate}[resume] \label{Brin--Kifer}
\item there are an integer $n_0\geq 1$ and a nonnegative function $p(x, y)$ that is continuous in both variables such that for any $x\in X$ and $A \in \mathscr{B}$,
$$ P^{n_0}(x,A)=\int_{A} p(x,y) \, dm(y). $$
\end{enumerate}
Thus, $P^{n_0}(x,\cdot)$ is absolutely continuous with respect to the normalized Lebesgue measure~$m$ \emph{for all} $x\in X$ as in Ara\'ujo's condition (B). Although this abstract approach seems more general, this is not the case because any Markov chain in a Polish space can be represented by a random map as in \eqref{eq:0302a} (cf.~\cite{JKR2015, Kifer1986}). We will prove that condition~(1) and condition~(2), respectively, imply that there are $n_0\in\mathbb N$ and a nonnegative function $p(x,y)$ such that $P^{n_0}(x,dy)=p(x,y)\,dm(y)$ and $x \mapsto p(x,\cdot)$ is a continuous map from $X$ to $L^1(m)$. See Remarks~\ref{rem:gAraujo} and~\ref{rem:gBrin--Kifer}. Here $L^1(m)=L^1(X,\mathscr{B},m)$ denotes, as usual, the Banach space of all real-valued $\mathscr{B}$-measurable functions $\varphi$ on~$X$ whose $L^1$-norm $\|\varphi \|\coloneqq\int _X\vert \varphi \vert dm$ is finite, where two functions that coincide with each other $m$-almost everywhere are identified. In view of this, the following theorem is a notable improvement of Brin--Kifer's and Ara\'ujo's sufficient conditions to get~{\tt(FPM)}:
\begin{mainthm} \label{thm:A} Let $(X , \mathscr B, m)$ and $(T, \mathscr A, p)$ be a compact Polish probability space and a probability space, respectively. Consider a measurable map $f:T\times X\to X$ and let $P^n(x,A)$ denote the $n$-th transition probability of the Markov chain induced by $f$. Assume that for some $n_0\in \mathbb{N}$,
\begin{equation*} P^{n_0}(x,dy)=p(x,y)\,dm(y) \quad \text{such that} \quad x \in X \mapsto p(x,\cdot)\in L^1(m) \ \ \text{is continuous}. \end{equation*}
Then, $f$ satisfies {\tt(FPM)}. \end{mainthm}
In the following series of remarks, we present some easily checkable assumptions under which Theorem \ref{thm:A} is applicable, allowing us to compare it with the previous sufficient conditions from the literature. We say that $f: T\times X\to X$ is a \emph{continuous random map} if $f$ is measurable and $f_t=f(t,\cdot):X\to X$ is continuous for $p$-almost every $t\in T$.
\begin{rem} \label{rem:gAraujo} Firstly, {\tt(FPM)} follows from the following assumption.
\begin{enumerate}[label=(\roman*), leftmargin=1cm]
\item \label{item:araujo} {\it Let $(X , \mathscr B, m)$ and $(T, \mathscr A, p)$ be a compact Polish probability space and a probability space, respectively, such that for some $n_0\in \mathbb N$, it holds that
\begin{enumerate}
\item \label{item:Feller0} $f:T\times X\to X$ is a continuous random map,
\item $P^{n_0}(x,\cdot)$ is absolutely continuous with respect to $m$ for all $x\in X$.
\end{enumerate}}
\end{enumerate}
This generalizes Ara\'ujo's result in \cite{Araujo2000}. The fact that~(i) implies {\tt(FPM)} follows immediately from Corollary~\ref{cor:rem} and Theorem~\ref{thm:A}. We will prove in Proposition~\ref{prop:multi:ex1} that, in the context of (i), neither condition (a) nor condition (b) is sufficient to get~{\tt(FPM)}. Moreover, in Proposition~\ref{prop:multi:ex1} we also show that these conditions are not necessary to obtain {\tt(FPM)}.
Notice also that (i) generalizes Ara\'ujo's result by removing condition~(A), and in Theorem \ref{AAraujo2000-generalization} (1) we will give another practical sufficient condition for {\tt(FPM)} which utilizes condition~(A). \end{rem}
\begin{rem} \label{rem:gBrin--Kifer} Secondly, {\tt(FPM)} holds under the following assumption:
\begin{enumerate}[label=(\roman*), leftmargin=1cm] \stepcounter{enumi}
\item \label{item:L1-cont} \emph{Let $(X,\mathscr{B},m)$ be a compact Polish probability space and $P^{n_0}(x,dy)=p(x,y)\, dm(y)$ for some $n_0\in\mathbb N$, where $p(\cdot,y)$ is a continuous function on $X$ for $m$-almost every $y\in X$.}
\end{enumerate}
This clearly generalizes Brin--Kifer's result in \cite{BK1987}. The proof of this observation follows from the Scheff\'e--Riesz theorem, cf.~\cite{kusolitsch2010theorem}. Indeed, if $\{x_n\}_{n\in\mathbb N}$ is a sequence converging to $x$, then by the continuity of $p(\cdot,y)$ we get $p(x_n,y) \to p(x,y)$, and thus the Scheff\'e--Riesz theorem implies that $\|p(x_n,\cdot)-p(x,\cdot)\| \to 0$ as $n\to\infty$. This means that $p(x,\cdot)$ varies continuously in $L^1(m)$ with respect to $x$. Now, Theorem~\ref{thm:A} implies {\tt(FPM)}. See also Theorem \ref{AAraujo2000-generalization} (2) for another weakening of Brin--Kifer's condition. \end{rem}
\begin{rem}\label{rm:0302} One of the most important classes of random dynamical systems in real applications that satisfy the assumption in Theorem~\ref{thm:A} is that generated by \emph{additive noise with absolutely continuous distribution}. Here, $X=T$ is a compact Lie group whose algebraic operation is denoted by ``$+$'', $m$ is the Haar measure, and $p$ is an absolutely continuous probability with respect to $m$. The torus $\mathbb R^d/\mathbb Z^d$ is the example most often considered in the literature. The random map $f$ is defined by $f_t(x) =f_0(x) +t $ for some continuous map $f_0:X\to X$. As a slight abuse of notation, we denote the density function of $p$ by $p(x)$. Then, $P(x,A)$ can be written as
\[ P(x,A) = \int _A p(y-f_0(x))\, dm(y) \quad \text{for $x\in X$ and $A\in\mathscr B$} \]
(cf.~\cite[Equation (10.5.5)]{LM}). Hence, $P(x,\cdot )$ is absolutely continuous with respect to $m$ for all $x\in X$. Notice that if the support of $p$ does not include any open ball centered at $0$, then Ara\'ujo's condition (A) is violated in general, and if $p$ is not continuous, then Brin--Kifer's condition (2) is violated in general. Furthermore, it seems difficult to obtain the condition in Theorem~\ref{thm:A} with $n_0=1$ because $x\in X\mapsto p(y-f_0(x))\in \mathbb R$ is, in general, not continuous for some $y\in X$. However, surprisingly, by virtue of Remark~\ref{rem:gAraujo} we know that the condition in Theorem~\ref{thm:A} is always satisfied (and thus {\tt(FPM)} holds). In fact, by Corollary~\ref{cor:rem}, the condition in Theorem~\ref{thm:A} holds with~$n_0=2$. \end{rem}
\begin{rem} \label{rem:multiplicative-noise} Another important class of noise that appears in real applications is \emph{multiplicative noise with absolutely continuous distribution} (cf.~\cite{Athreya2003, Sumi2021}). For instance, consider the case when $X=T=[0,1]$, $m$ is the Lebesgue measure and $p$ is an absolutely continuous probability with respect to $m$ with density function $p(x)$. Define $f_t(x) = t g(x)$, where $g$ is some continuous map on $X$.
It is straightforward to see that $P(x,A)$ is of the form \[ P(x,A) = \int _A \frac{1}{g(x)}\, p\left(\frac{y}{g(x)}\right)dm(y) \quad \text{for $x\in X$ and $A\in\mathscr B$} \] (cf.~\cite[Equation (10.7.5)]{LM}). Therefore, if $g$ is bounded away from zero, then $P(x,\cdot )$ is absolutely continuous with respect to $m$, and by Remark~\ref{rem:gAraujo}, {\tt(FPM)} holds. Actually, the condition in Theorem~\ref{thm:A} holds (with $n_0=2$). In fact, this (together with Theorem~\ref{thm:B}) generalizes~\cite[Theorem 10.7.1]{LM}. On the other hand, in contrast to additive noise, if $g(x)=0$ for some $x\in X$, then the absolute continuity of $P(x,\cdot )$, the condition in Theorem~\ref{thm:A} and {\tt(FPM)} can fail to hold. See Section \ref{subsec:multiple} for details. \end{rem} \begin{rem}\label{rmk:IFS} Finally, we remark that the important class of random dynamical systems generated by iterated function systems with probabilities (see Section~\ref{ss:e} for its formal definition) does not meet the assumption in Theorem~\ref{thm:A}. Indeed, in this case, we have that $T$ is a finite set $\{1,\dots,k\}$ and thus $$ P(x,\cdot)= \sum_{i=1}^k p_i \delta_{f_i(x)} \quad \text{where $p_i =p(\{ i\})>0$} $$ which cannot be absolutely continuous with respect to $m$ in general. However, some of these systems fall within the scope of Section~\ref{s:MO}, in which several weaker versions of the condition in Theorem~\ref{thm:A} are given to obtain {\tt(FPM)}. \end{rem} The proof of Theorem~\ref{thm:A} is based on the analysis of the annealed Perron--Frobenius operator associated with the random dynamical system generated by $f$. We will show that this operator belongs to the class of constrictive Markov operators, which has been extensively studied in the literature. This observation allows us to generalize Theorem~\ref{thm:A} in terms of Markov operators. In particular, we can obtain new practical sufficient conditions implying {\tt(FPM)}, as indicated in Theorem~\ref{AAraujo2000-generalization}. As we will explain in Remark~\ref{rmk:6.7}, such conditions generalize the sufficient condition studied in the work by Ara\'ujo and Ayta\c{c}~\cite{AA2017} to get uniform ergodicity (a stronger property than~{\tt(FPM)}). \subsection{Markov operators}\label{s:MO} In order to provide a general definition of a Markov operator, assume that $(X,\mathscr{B},m)$ is any abstract probability space (not necessarily a Polish space as in the previous subsection). Let $D(m)=D(X,\mathscr{B},m)$ be the space of density functions, that is, \[ D(m) =\left\{h \in L^1(m) : h \geq 0 \ \text{$ m$-almost everywhere and $\Vert h \Vert =1$} \right\}. \] An operator $P: L^1(m)\to L^1 (m)$ is called a \emph{Markov operator} if $P$ is linear, positive (i.e.~$P\varphi \geq 0$ $m$-almost everywhere whenever $\varphi \geq 0$ $m$-almost everywhere) and \begin{equation}\label{eq:0219} \int P\varphi \, dm = \int \varphi \, dm \quad \text{ for all $\varphi \in L^1(m)$}. \end{equation} Note that a positive linear operator $P$ on $L^1(m)$ is a Markov operator\footnote{Any Markov operator $P$ is a bounded operator: Given $\varphi \in L^1(m)$, consider $\varphi _+\coloneqq \max\{\varphi ,0\}$ and $\varphi _-\coloneqq \max\{ -\varphi ,0\}$. Then, $P\varphi _+, P\varphi _- \geq 0$, and thus $\Vert P\varphi \Vert \leq \int (P\varphi _+ + P\varphi _- )\, dm = \int \varphi _+ dm + \int \varphi _-dm = \Vert \varphi \Vert$. } if and only if $P(D(m)) \subset D(m)$.
It is not difficult to see that this is equivalent to $P^*1_X = 1_X$, where $P^*$ is the adjoint operator of $P$, that is, $P^*$ is the bounded linear operator on $L^\infty (m)\cong (L^1(m))^*$ given by $\int P^*\psi \cdot \varphi \, dm = \int \psi \cdot P\varphi \, dm$ for $\psi\in L^\infty (m)$, $\varphi\in L^1(m)$. Recall that $L^\infty(m)=L^\infty(X,\mathscr{B},m)$ is the Banach space of real-valued $\mathscr{B}$-measurable $m$-essentially bounded functions defined on $X$. As usual, two functions that coincide $m$-almost everywhere are identified. A key property of Markov operators for the purpose of this paper is \emph{constrictivity}. ~\enlargethispage{-1.75cm} \begin{dfn}\label{dfn:1011} A sequence $(Q_n)_{n\geq 1}$ of Markov operators on $L^1(m)$ is called \begin{enumerate} \item {\it constrictive} if there exists a compact set $F$ of $L^1(m)$ such that for any $h \in D(m)$, \[ \lim_{n\to \infty} d(Q_n h,F)=0; \] \item {\it uniformly constrictive} if there exists a compact set $F$ of $L^1(m)$ such that \[ \lim _{n\to \infty} \sup _{h \in D(m)} d(Q_n h, F)=0. \] \end{enumerate} Here $d(\varphi,F)=\inf_{\psi\in F}\|\varphi -\psi \|$. In particular, a Markov operator $P$ on $L^1(m)$ is called \begin{enumerate}[resume] \item ({\it uniformly}) {\it constrictive} if the sequence $(P^n)_{n\geq 1}$ is (uniformly) constrictive. \item {\it mean constrictive} if the sequence $(A_n)_{n\geq 1}$ is constrictive, where $A_n$ is given by \[ A_n\varphi = \frac{1}{n}\sum_{i=0}^{n-1}P^i\varphi \quad \text{for $\varphi \in L^1(m)$}. \] \end{enumerate} \end{dfn} The compact set $F$ above is called a \emph{constrictor}. These conditions appeared in the context of mean ergodic theorems, see \cite{Emelyanov, LM}. Notice that, by definition, we have \[ \text{uniformly constrictive \ \ $\Rightarrow$ \ \ constrictive \ \ $\Rightarrow$ \ \ mean constrictive.} \] See also Figure~\ref{fig:hierarchy} for a global picture. Furthermore, the uniform constrictivity of $P$ is known to be equivalent to the quasi-compactness of $P$ in $L^1(m)$, cf.~\cite[Theorem 2]{Bart95}. Recall that an operator $Q$ is called \emph{quasi-compact} if there is a compact linear operator $R$ such that $\|Q^n-R\|_{\rm op}<1$ for some $n\in \mathbb{N}$. Hence, the class of quasi-compact operators contains the subclass of \emph{eventually compact operators}, that is, the linear operators $Q$ such that $Q^n$ is compact for some $n\in\mathbb{N}$. \subsubsection{Perron--Frobenius operators} One of the most important examples of Markov operators is the \emph{Perron--Frobenius operator} induced by a nonsingular transformation $g:X\to X$. Recall that $g$ is nonsingular (with respect to the reference measure $m$ on $X$) if the preimage of any $m$-null set by $g$ is $m$-null. The Perron--Frobenius operator $\mathcal{L}_{g}: L^1(m) \to L^1(m)$ of $g$ is defined by the formula $$ g_*m_\varphi(A) = \int_A \mathcal{L}_g\varphi \, dm \quad \text{ for all $\varphi \in L^1(m)$ and $A\in \mathscr{B}$} $$ where $m_\varphi$ is the finite signed measure given by \begin{equation}\label{eq:mphi} m_\varphi(A) =\int _A \varphi \, dm \quad \text{for $A\in \mathscr{B}$.} \end{equation} It is easy to see that $\mathcal{L}_g$ is a Markov operator and $m_{\mathcal{L}_g\varphi}=g_*m_\varphi$. As in this example, Markov operators $P$ naturally appear in the study of (random) dynamical systems, and $( P^n\varphi ) _{n\geq 0}$ is interpreted as the evolution of density functions driven by the system.
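To fix ideas, we record a standard instance of this construction (a routine computation, included only for illustration). For the doubling map $g(x)=2x \pmod 1$ on $X=[0,1]$ with $m$ the Lebesgue measure, every point has the two preimages $\frac{x}{2}$ and $\frac{x+1}{2}$, and the defining formula yields \[ \mathcal{L}_g\varphi (x) = \frac{1}{2}\,\varphi\left(\frac{x}{2}\right) + \frac{1}{2}\,\varphi\left(\frac{x+1}{2}\right) \quad \text{for $\varphi\in L^1(m)$,} \] so that $\mathcal{L}_g 1_X = 1_X$; that is, $1_X$ is an invariant density and $m$ itself is an absolutely continuous $g$-invariant probability measure.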
We refer to \cite{baladi2000positive, baladi2018dynamical,Emelyanov,foguel2007ergodic,LM}. Let $f: T \times X\to X$ be as in Section~\ref{ss:1.1}. We simply write $\mathcal{L}_{t}$ for the Perron--Frobenius operator of $f_{t}=f(t,\cdot)$ for $t\in T$ and define the \emph{annealed Perron--Frobenius operator} ${\mathcal{L}}_f: L^1(m) \to L^1(m)$ by \[ {\mathcal{L}}_f \varphi (x) =\int \mathcal{L}_{t} \varphi(x) \, dp(t). \] Then, it is straightforward to see that ${\mathcal{L}}_f$ is also a Markov operator. Now we are in a position to state two of our main results. \begin{mainthm} \label{thm:B} Under the assumption of Theorem~\ref{thm:A}, $\mathcal{L}_f$ is eventually compact and, in particular, uniformly constrictive. \end{mainthm} We will obtain the above result by proving an equivalent but apparently more general theorem stated in terms of Markov operators and Markov processes. See Theorem~\ref{thm:B-Markov} together with Proposition~\ref{prop:ultra-Feller+B} and Remark~\ref{rem:P=Lf}. In the setting of Theorem \ref{thm:B}, since $\mathcal{L}_f$ is quasi-compact, we can obtain exponential decay of annealed correlation functions, in the sense that there are finitely many absolutely continuous ergodic stationary measures $\mu_1, \ldots ,\mu_r$ with pairwise disjoint supports and constants $k\in \mathbb N$, $C>0$, $\rho \in (0,1)$ such that for any $\varphi , \psi\in L^\infty(m)$, $n\in \mathbb N$ and $ j=1,\ldots ,r$, we have \begin{equation}\label{eq:expmixing} \left\vert \int \psi \circ f^{kn}_\omega \cdot \varphi \, d(\mathbb P\times \mu _j) - \int \psi\, d\mu_j \int \varphi\, d\mu _j\right\vert \leq C\rho ^{kn}\Vert \varphi \psi\Vert _{\infty}. \end{equation} Refer to, for example,~\cite{Buzzi1999,Liverani1995} for background and~\cite{BG2009} for a proof\footnote{Although the paper \cite{BG2009} only dealt with deterministic maps, one can show the claim by literally repeating the argument in \cite[Appendix B]{BG2009} with $\mathcal L_f$ instead of a Perron--Frobenius operator $\mathcal L_g$ of a nonsingular map $g:X\to X$, after realizing the duality $ \int \psi \circ f^{n}_\omega \cdot \varphi \, d(\mathbb P\times m) = \int \psi \cdot \mathcal L_f^n \varphi \, dm$ corresponding to $ \int \psi \circ g^{n} \cdot \varphi \, dm = \int \psi \cdot \mathcal L_g^n \varphi \, dm$.}. Actually, this claim with $r=k=1$ was shown by Ara\'ujo and Ayta\c c in~\cite{AA2017} under a slightly stronger assumption than Ara\'ujo's conditions~(A) and~(B), in a different manner (using a purely probability-theoretic technique); see also Remark \ref{rmk:6.7}. Furthermore, the quasi-compactness of $\mathcal L_f$ may lead to several other limit theorems (such as the central limit theorem, large deviation principle, local limit theorem and almost sure invariance principle) via the so-called Nagaev--Guivarc'h perturbative spectral method; refer to \cite{ANV2015,Gouezel2015,HH2001}. \begin{mainthm} \label{thm:C} Let $(X , \mathscr B, m)$ and $(\Omega , \mathscr F, \mathbb{P})=(T^\mathbb{N}, \mathscr A^\mathbb{N}, p^\mathbb{N})$ be a locally compact Polish probability space and the infinite product space of a probability space $(T, \mathscr A, p)$, respectively. Consider a measurable map $f:T\times X\to X$. Then, the following are equivalent: \begin{enumerate}[label=(\roman*), leftmargin=1cm] \item ${\mathcal{L}}_f$ is mean constrictive; \item $f$ satisfies {\tt(FPM)}.
\end{enumerate} Moreover, if $f$ satisfies any of the above equivalent conditions and $\mu_1,\dots,\mu_r$ denote the measures that appear in the property~{\tt(FPM)}, then \begin{enumerate}[ leftmargin=1cm] \item $ (\mathbb{P}\times m)\left(B(\mu _1)\cup \dots \cup B(\mu_r)\right)=1$; \item $\mathbb{P}\left( B_x(\mu_1) \cup \dots \cup B_x(\mu_r)\right) = 1$ for $m$-almost every~$x \in X $; \end{enumerate} where for each $i=1,\dots,r$, $$ B(\mu _i)=\left\{ (\omega,x)\in \Omega\times X : \, \lim _{n\to \infty} \frac{1}{n} \sum _{j=0}^{n-1} \delta_{f_{ \omega} ^{j}(x)} = \mu _i \right\} $$ and \begin{align*} B_x(\mu_i)=\left\{\omega\in \Omega: \lim_{n\to\infty} \frac{1}{n}\sum_{j=0}^{n-1} \delta_{f^j_\omega(x)} = \mu_i \right\}. \end{align*} \end{mainthm} We note that since uniform constrictivity implies mean constrictivity, Theorem~\ref{thm:A} is just a consequence of Theorems~\ref{thm:B} and~\ref{thm:C}. In fact, conclusions (1) and (2) in Theorem~\ref{thm:C} also hold under the assumptions of Theorem~\ref{thm:A}. Notice that $B(\mu_i)$ is basically the statistical basin of attraction of the measure $\mathbb{P}\times \mu_i$ for the skew-product map $F$ in~\eqref{eq:skew}. Thus, the above theorem is actually a characterization of an absolutely continuous version of the Palis conjecture for the class of deterministic systems of the form~\eqref{eq:skew}, in terms of the annealed Perron--Frobenius operator of the random map $f$. In particular, when $T$ (or $\Omega$) is a singleton, Theorem~\ref{thm:C} provides a characterization of the existence of finitely many $m$-absolutely continuous invariant probability measures for deterministic dynamics such that the union of their basins has full $m$-measure, in terms of the Perron--Frobenius operator. \subsubsection{General Markov operators} We next generalize the equivalence in Theorem~\ref{thm:C} to general Markov operators. However, to do this, we need some preliminaries. Let $P$ be a Markov operator on $L^1(m)$ and consider the adjoint operator $P^*$ on $L^\infty(m)$. We define the support of a real-valued function $h$ (up to an $m$-null set) by $\operatorname*{supp} h = \{ x\in X : \, h(x) \not= 0\}$. We say that $h$ is an \emph{invariant density} of $P$ (or a \emph{$P$-invariant density}) if $h \in D(m)$ and $Ph =h $. As usual, $P^n$ and $(P^*)^n$ denote the $n$-th iterates of $P$ and $P^*$, respectively. Let $(P^n)^*$ be the adjoint operator of $P^n$. Using the duality relation recursively, it is not difficult to see that $(P^n)^*=(P^*)^n$. For simplicity of notation, we will simply write $P^{n*}$ when no confusion can arise. Having this notation in mind, we say that a $P$-invariant density $h$ has the \emph{maximal support} if \begin{align}\label{max.supp} \lim_{n\to\infty}P^{n*}1_{\operatorname*{supp} h}(x)=1 \quad \text{for $m$-almost all $x\in X$.} \end{align} A probability measure $\mu$ is called \emph{ergodic} if any $A\in\mathscr{B}$ such that $P^*1_A\geq 1_A$ satisfies $\mu(A)\in \{0,1\}$. We will say that a $P$-invariant density $h \in D(m)$ is \emph{ergodic} if the probability measure $m_h$ given in~\eqref{eq:mphi} is ergodic. See Appendix~\ref{appendix:B} for more details on equivalent definitions. \begin{mainthm}\label{thm:D} Let $(X,\mathscr{B},m)$ be an abstract probability space and consider a Markov operator $P: L^1(m) \to L^1(m)$.
Then, the following conditions are equivalent: \begin{enumerate} \item[\tt (MC)] $P$ is mean constrictive; \item[\tt (FED)] $P$ admits finitely many ergodic $P$-invariant densities $h_1,\dots,h_r$ with mutually disjoint supports (up to an $m$-null set) and the invariant density function $h=\frac{1}{r}(h_1+\dots+h_r)$ has the maximal support; \item[\tt (APM)] There exist finitely many ergodic $P$-invariant densities $h_1,\dots,h_r$ with mutually disjoint supports (up to an $m$-null set) and positive bounded linear functionals $\lambda_1, \dots, \lambda_r$ on $L^1(m)$ such that \begin{align*} \lim_{n\to\infty} \big\| A_n\varphi-\sum_{i=1}^r\lambda_i(\varphi)h_i\big\| =0 \quad \text{for any $\varphi\in L^1(m)$}. \end{align*} \end{enumerate} \end{mainthm} The condition {\tt(APM)}, named \emph{asymptotic periodicity in mean}, was introduced by Inoue and Ishitani for Perron--Frobenius operators in~\cite{II1991} as a weaker version of the classic property of \emph{asymptotic periodicity}. In turn, asymptotic periodicity was introduced and shown to be equivalent to constrictivity in~\cite{LLY1984}. The equivalence between the conditions {\tt(FED)} and {\tt(APM)} is the generalization of Inoue--Ishitani~\cite{II1991} to the case of general Markov operators. Similarly, the equivalence between~{\tt(APM)} and~{\tt(MC)} is a generalization of the spectral decomposition theorem (on $L^1(m)$) that was developed by Lasota, Li, Yorke, Komorn\'ik and Bartoszek during the eighties and nineties, and by Storozhuk and Toyokawa more recently. Refer to~\cite{Bartoszek2008,Toyokawa2020} and references therein. Actually, we will prove in Theorem~\ref{thmeq} a more complete version of Theorem~\ref{thm:D} where we will provide a sequence of equivalences between {\tt(MC)} and, a priori, weaker conditions. \begin{rem} \label{rem:ergodic-implies-MC} If $1_X$ is an ergodic $P$-invariant density, then $P$ is {\tt (MC)}. Indeed, since any two ergodic $P$-invariant densities are equal or have disjoint supports up to an $m$-null set, see Proposition~\ref{prop:ergodic} in Appendix~\ref{appendix:B}, $1_X$ is actually the unique ergodic invariant density of $P$. Then $P$ satisfies {\tt (FED)} and consequently, by Theorem~\ref{thm:D}, $P$ is {\tt (MC)}. \end{rem} \subsection{Hierarchy of classes of Markov operators} We have considered the classes of \emph{uniformly constrictive}~{\tt(UC)}, \emph{constrictive}~{\tt(C)} and \emph{mean constrictive}~{\tt(MC)} Markov operators introduced in Definition~\ref{dfn:1011}. In this subsection, we introduce in Definition~\ref{asy.const} a new class between them that we call \emph{asymptotic constrictivity}~{\tt(AC)}. In the next subsection, we will provide examples of random maps that show the difference between these classes. First, to show the global picture, we characterize the class of Markov operators in $L^1(m)$ for which there is an invariant density, which we denote~by~{\tt(S)}. \subsubsection{Straube class} In~\cite{straube1981existence} Straube studied the existence of invariant densities for the Perron--Frobenius operator associated with a nonsingular transformation $g:X\to X$. Namely, Straube showed that there exists a $g$-invariant probability measure which is absolutely continuous with respect to $m$ if and only if there exist $\delta>0$ and $0 < \alpha < 1$ such that $m(A)<\delta$ implies $m(g^{-k}(A))<\alpha$ for all $k\geq 0$.
Recall that a $g$-invariant absolutely continuous probability measure corresponds to an invariant density of the Perron--Frobenius operator $\mathcal{L}_{g}$. Moreover, \[ m\left(g^{-k}(A)\right)= \int 1_A \circ g^k(x) \, dm = \int_A \mathcal{L}^k_{g} 1_X \, dm. \] More recently, Islam, G\'{o}ra and Boyarsky in~\cite{islam2005generalization} also proved similar necessary and sufficient conditions, in terms of Markov operators, for the existence of absolutely continuous stationary measures for a certain class of iterated function systems with probabilities. The following theorem finally provides a characterization of the class {\tt(S)} in the spirit of {\tt (MC)}, generalizing the previous results in~\cite{islam2005generalization,straube1981existence}. A more complete version will be given in Theorem \ref{cor:E}. \begin{mainthm} \label{GST} For a Markov operator $P:L^1(m)\to L^1(m)$, the following assertions are equivalent: \begin{enumerate} \item \label{item1} There exists an invariant density for $P$; \item \label{item2} There exist $\alpha\in(0,1)$ and $\delta>0$ such that \[ \sup_{n\ge0}\int_AP^n1_X \, dm<\alpha \quad \text{for any $A\in\mathscr{B}$ with $m(A)<\delta$}; \] \item \label{item3} There exist $\alpha\in(0,1)$ and $\delta>0$ such that \[ \sup_{n\ge0}\int_AA_n1_X \, dm<\alpha \quad \text{for any $A\in\mathscr{B}$ with $m(A)<\delta$}. \] \end{enumerate} Moreover, the equivalence also holds by taking $\alpha=1$ in all the above items. \end{mainthm} \subsubsection{Weakly almost periodic class} Consider the class of Markov operators on $L^1(m)$ having an invariant density with the maximal support as in~\eqref{max.supp}. This class was characterized in~\cite{Toyokawa2020} as the well-known class of Markov operators in $L^1(m)$ called \emph{weakly almost periodic}. \begin{dfn}\label{wap} A Markov operator $P$ on $L^1(m)$ is {\it weakly almost periodic} if $(P^n\varphi)_{n\geq 1}$ is weakly precompact for any $\varphi\in L^1(m)$, that is, if any sequence $(P^{n_k}\varphi)_{k\geq 1}$ contains a further subsequence $(P^{n_{k_j}}\varphi)_{j\geq 1}$ that weakly converges to a function in~$L^1(m)$. \end{dfn} \enlargethispage{1cm} By the Dunford--Pettis theorem\footnote{\label{f:DP} The theorem states that $F\subset L^1(m)$ is weakly precompact (i.e.~any sequence in $F$ contains a weakly converging subsequence in $L^1(m)$) if and only if $F$ is bounded (i.e.~there is $M>0$ such that $\Vert \varphi \Vert<M$ for all $\varphi \in F$) and uniformly integrable (i.e.~for any $\varepsilon >0$, there is $\delta >0$ such that $\int _A \vert \varphi \vert \, dm <\varepsilon$ for any $\varphi \in F$ and $A\in\mathscr B$ with $m(A)<\delta$). Refer to e.g.~\cite[Theorem~4.7.18]{bogachev2007measure}.}, a Markov operator $P$ is weakly almost periodic if and only if \begin{enumerate}[leftmargin=2cm] \item[{\tt(WAP)}] {\it for any $\varepsilon>0$ and $\varphi\in L^1(m)$, there is $\delta>0$ such that \[ \int_A P^n\varphi\,dm<\varepsilon \quad \text{for any $n\in\mathbb{N}$ and $A\in\mathscr{B}$ with $m(A)<\delta$.} \]} \end{enumerate} As mentioned, due to \cite[Theorem~3.1]{Toyokawa2020}, we have that {\tt(WAP)} is equivalent to the existence of a $P$-invariant density with the maximal support. Therefore, {\tt(WAP)} implies {\tt(S)}, and moreover, {\tt (MC)} implies {\tt(WAP)} by Theorem~\ref{thm:D}.
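To illustrate how {\tt(S)} and {\tt(WAP)} may fail in the simplest possible situation (a sketch anticipating the examples of Section~\ref{ss:e}), consider the deterministic contraction $g(x)=\frac{x}{2}$ on $X=[0,1]$ with $m$ the Lebesgue measure. Since $g^{-k}([0,\epsilon])=[0,\min\{2^k\epsilon,1\}]$, one computes \[ \mathcal L_g^n 1_X = 2^n\, 1_{[0,2^{-n}]}, \qquad \int_{[0,2^{-n}]} \mathcal L_g^n 1_X \, dm = 1, \] so the mass of $\mathcal L_g^n 1_X$ concentrates on sets of arbitrarily small measure. Hence both Straube's criterion (item (2) of Theorem~\ref{GST}) and the uniform integrability required in {\tt(WAP)} fail, in accordance with the fact that the only $g$-invariant probability measure, the Dirac measure at $0$, is not absolutely continuous with respect to $m$.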
\subsubsection{Asymptotically constrictive class}\label{sss:acc} An equivalent formulation for the class of constrictive Markov operators $P:L^1(m)\to L^1(m)$ is \begin{enumerate}[leftmargin=1.75cm] \item[{\tt(C)}] \label{CDP} {\it for any $\varepsilon>0$ there is $\delta>0$ such that for any $\varphi\in D(m)$, there is $n_0\in\mathbb N$ satisfying \[ \int_A P^n\varphi \,dm<\varepsilon \quad \text{for any $n\geq n_0$ and $A\in\mathscr{B}$ with $m(A)<\delta$.} \]} \end{enumerate} One of the implications follows easily from the Dunford--Pettis theorem. Indeed, if $P$ is constrictive with a constrictor $F$, since $F$ is compact (in particular, weakly precompact) in $L^1(m)$ and $\int _AP^n \varphi\, dm\leq d(P^n \varphi ,F) +\sup _{\phi \in F}\int _A\phi \, dm$ for each $\varphi\in D(m)$, $n\in \mathbb N$ and $A\in \mathscr B$, we immediately obtain {\tt (C)}. One can find the proof of the converse e.g.~in~\cite{komornik1993asymptotic}. Having in mind these equivalent formulations of {\tt (C)} and {\tt (WAP)}, we introduce the following intermediate property between constrictivity and weak almost periodicity, which we came upon while looking for non-constrictive examples. \begin{dfn}\label{asy.const} A Markov operator $P$ on $L^1(m)$ is said to be {\it asymptotically constrictive}~{\tt(AC)} if for any $ \varepsilon>0$, there is $\delta>0$ such that \begin{equation}\label{dfn:AC} \limsup _{n \rightarrow \infty} \int_{A} P^{n} \varphi\, dm<\varepsilon \quad \text{for any $\varphi \in D(m)$ and $A \in \mathscr{B}$ with~$m(A)<\delta$.} \end{equation} \end{dfn} It is straightforward to see that {\tt (C)} implies {\tt(AC)}. We prove in Theorem~\ref{propC2}~that~{\tt(AC)} implies {\tt(MC)}. We summarize the hierarchy between these classes in the following~figure. \begin{figure}[H] \begin{center} \text{\tt (UC) \ $\relationarrow{Obvious}{Prop.~\ref{prop:figa}}{-3em}{-1.8em}$ \ \ (C) \ $\relationarrow{Obvious}{Prop.~\ref{prop:contracting}}{-3em}{-1.8em}$ \ \ (AC) \ $\relationarrow{Thm.~\ref{propC2}}{Prop.~\ref{prop:rotations}}{-3.1em}{-1.8em}$ \ \ (MC) \ $\relationarrow{Thm.~\ref{thm:D}}{Prop.~\ref{prop:rotations}}{-2.8em}{-1.8em}$ \ \ (WAP) \ $\relationarrow{Obvious}{Prop.~\ref{prop:figb}}{-3em}{-1.8em}$ \ \ (S)}. \end{center} \caption{The hierarchy of the classes between {\tt (UC)} and {\tt (S)}.} \label{fig:hierarchy} \end{figure} \subsubsection{Sub-hierarchy in {\tt(UC)}} \label{sec:(UC)} A transition probability $P(x,A)$ on $X\times \mathscr{B}$ is said to be \emph{$m$-nonsingular} if $P(x,A)=0$ for $m$-almost every $x\in X$ whenever $m(A)=0$. It is well-known that the theories of Markov operators and Markov processes (transition probabilities) are intimately related. Indeed, given an $m$-nonsingular transition probability $P(x,A)$ for $x\in X$ and $A\in \mathscr{B}$, we can induce a Markov operator $P$ on $L^1(m)$ such that $P\varphi$ is the Radon--Nikod\'{y}m derivative with respect to $m$ of the finite signed measure $$ \mu_\varphi(A)= \int \varphi(x) P(x,A)\, dm(x), \quad \text{for $A\in \mathscr{B}$}. $$ See~\cite{ito1964invariant} or~\cite[Proposition~V.4.2]{neveu1965mathematical} for more details on the construction. Conversely, given a Markov operator $P$ on $L^1(m)$ one can define $$ P^n(\cdot,A)=P^{n*}1_A \quad \text{for all $A\in \mathscr{B}$ and $n\geq 1$}. $$ Notice that $P^n(\cdot,A)$ is an equivalence class and thus, as a real-valued function, it is only defined up to $m$-null sets.
It is not hard to see that one may choose in each equivalence class $P^n(\cdot,A)$ a $\mathscr{B}$-measurable function $P^n(x,A)$ on $X$ for every fixed $A\in\mathscr{B}$ such that it differs from an $m$-nonsingular transition probability only on a negligible set of points. Nevertheless, if $X$ is a Polish space, Neveu proved in~\cite[Proposition~V.4.4]{neveu1965mathematical} that these representations can be chosen appropriately to get that $P^n(x,A)$ is a transition probability which induces the Markov operator $P^n$ on $L^1(m)$ as explained above. Taking into account this relation between Markov operators and Markov processes, in what follows we will introduce two subclasses in {\tt(UC)}. Let $(X,\mathscr{B},m)$ be a Polish probability space and consider a Markov operator $P$ on $L^1(m)$. Let $P^n(x,A)$ be an $n$-transition probability that induces $P^n$ as explained above. We first introduce the following classical condition from the theory of Markov chains adapted to Markov operators in $L^1(m)$:\footnote{ Recall that $P(x,A)$ is said to satisfy the \emph{Doeblin} condition if there exist $n_0\geq 1$, $\varepsilon >0$, $\delta <1$ and a probability $\mu$ such that $P^{n_0}(x,A)>\varepsilon$ for all $x\in X$ and $A\in\mathscr{B}$ with $\mu(A)>\delta$ (cf.~\cite{MT2012}). If $\mu$ is in addition required to be absolutely continuous with respect to $m$, then this condition is equivalent to {\tt (D)}.} \begin{enumerate} \it \item[{\tt(D)}] there exist $n_0\geq 1$, $0<\varepsilon<1$, $\delta>0$ and a probability $\mu$ absolutely continuous with respect to $m$ such that $P^{n_0}(x,A)<\varepsilon$ for all $x\in X$ and $A\in\mathscr{B}$ with $\mu(A)<\delta$. \end{enumerate} The second class that we introduce is also (an equivalent condition to) a classical property widely discussed and studied in the literature of Markov processes under the name of \emph{uniform ergodicity}, which we adapt to Markov operators on $L^1(m)$: \begin{enumerate} \it \item[{\tt(D*)}] there exist $n_0\geq 1$, $0<\varepsilon<1$, $\delta>\frac{1}{2}$ and a probability $\mu$ absolutely continuous with respect to $m$ such that $P^{n_0}(x,A)<\varepsilon$ for all $x\in X$ and $A\in\mathscr{B}$ with $\mu(A)<\delta$. \end{enumerate} Dorea and Pereira \cite{Dorea2006} introduced {\tt(D*)} without the absolute continuity of $\mu$ as an equivalent condition to uniform ergodicity. In Proposition~\ref{prop:equi-uni-erg} we will show equivalent conditions to {\tt(D*)}, including uniform ergodicity adapted to Markov operators in $L^1(m)$. Furthermore, we will show that the condition {\tt(D*)} implies that $P$ admits a unique invariant density, see Remark~\ref{rem:uni-erg}. In Proposition~\ref{prop:UC} we will prove that {\tt(UC)} is the class of Markov operators $P$ on $L^1(m)$ that satisfy the following condition: \begin{enumerate} \it \item[{\tt (UC)}] there are $n_0\geq 1$, $0<\varepsilon<1$, $\delta>0$ and a probability $\mu$ absolutely continuous with respect to $m$ such that $P^{n_0}(x,A)<\varepsilon$ for all $A\in\mathscr{B}$ with $\mu(A)<\delta$ and $m$-almost every $x\in X$ (depending on $A$). \end{enumerate} In view of these characterizations, one immediately obtains the following relations: \begin{figure}[H] \begin{center} \text{\tt (D*) \ $\relationarrow{Obvious}{Prop \ref{prop:dd}}{-2.8em}{-1.7em}$ \ \ (D) \ $\relationarrow{Prop~\ref{prop:UC}}{Prop \ref{prop:ucd}}{-3.0em}{-1.8em}$ \ \ (UC)}.
\end{center} \caption{The subhierarchy in {\tt (UC)}.} \label{fig:subhierarchy} \end{figure} In the next subsection, devoted to examples, we will show that neither of the converse implications above holds. However, if we restrict ourselves to the class of strong Feller operators, then {\tt (UC)} implies {\tt (D)}. See~Proposition~\ref{prop:Felle+UC=D}. Due to this result, the annealed Perron--Frobenius operator $\mathcal L_f$ satisfies {\tt(D)} when $f$ satisfies the Ara\'ujo or Brin--Kifer conditions (see Remark~\ref{rem:AraujoD}). Furthermore, $\mathcal{L}_f$ satisfies {\tt (D*)} under the conditions in Ara\'ujo--Ayta\c{c}~\cite{AA2017}. In fact, we will generalize the argument in Ara\'ujo--Ayta\c{c} to show that a version of Ara\'ujo's condition (other than the condition in Remark \ref{rem:gAraujo}) is sufficient to obtain {\tt (UC)}. See Remark~\ref{rmk:6.7}~for~details. \subsection{Examples and counterexamples}\label{ss:e} In this subsection, in order to complete Remark \ref{rem:gAraujo} and to show that the reverse implications in Figures \ref{fig:hierarchy} and \ref{fig:subhierarchy} fail, we will consider several examples coming from the three aforementioned important classes of random dynamical systems, i.e.~additive noise, multiplicative noise and iterated function systems. We will also explain that some examples can be easily modified to be deterministic systems. \subsubsection{Additive type noise} \label{subsec:additive} First, we consider some perturbed systems with additive type noise, which will show that the reverse implications in Figure~\ref{fig:subhierarchy} fail. Let $X$ and $T$ be the closed interval $[0,1]$ equipped with the Lebesgue measure, denoted by $m$ and $p$, respectively. Consider a measurable map $f_0:X\to X$ and the random map $f:T\times X\to X$ given by $f(t,x)=f_t(x)$, \[ f_t(x) = \begin{cases} 0 \quad & \text{for} \ x=0,\\ f_0(x) +t \pmod{1} \quad & \text{for} \ x\neq 0. \end{cases} \] Then the following proposition holds for this $f$. \begin{prop}\label{prop:ucd} $\mathcal L_f$ satisfies~{\tt(UC)}, but does not satisfy {\tt(D)}. \end{prop} Next, let us consider $X=X_-\cup X_+$ with $X_-=(-1,0]$, $X_+=(0,1]$ and $T=X_+$. We equip $X$ and $T$ with the Lebesgue measures $m$ and $p$, respectively. Let $\iota :X \to X$ be the involution given by $\iota (x) =x+1$ for $x\in X_-$ and $\iota (x) = x-1$ for $x\in X_+$, so that $\iota (X_-) = X_+$ and $\iota (X_+) = X_-$. Let $f_0:X\to X$ be a measurable map satisfying that $f_0(X_-) \subset X_+$, $f_0(X_+) \subset X_-$ and $ f_0 \circ \iota = \iota \circ f_0 $ on $ X_+$. Define the random map $\widetilde f: T\times X_+\to X_+$ by \[ \widetilde f_t(x) = \iota \circ f_0(x)+t \pmod{1} \] and let $f: T\times X\to X$ be the random map given by \begin{equation} f_t(x) = \begin{cases} \iota \circ \widetilde f_t(x) & \text{for $x\in X_+$},\\ \widetilde f_t\circ \iota (x) & \text{for $x\in X_-$}. \end{cases}\label{eq:dd} \end{equation} See Figure \ref{fig_eg_X}. Then, the following proposition holds for this $f$. \begin{prop}\label{prop:dd} $\mathcal L_f$ satisfies~{\tt(D)}, but does not satisfy {\tt(D*)}. \end{prop} \subsubsection{Multiplicative noise}\label{subsec:multiple} Secondly, we focus on a family of perturbed systems with multiplicative noise. As announced, these examples prove the assertions in Remark \ref{rem:gAraujo}.
Let us consider a random map $f:T\times X\to X$ given by the following multiplicative noise, \begin{equation}\label{eq:0621a} f_{t}(x)=(1-\varepsilon t) f_{0}(x), \quad x \in[0,1], \ \ t \in[0,1] \ \ \text{and} \ \ 0<\varepsilon<1. \end{equation} Here $f_{0}:[0,1] \to [0,1]$ is a measurable map and $X=T=[0,1]$ is equipped with the Lebesgue measure, which, as before, we denote by $m$ and $p$, respectively. The next three examples in Proposition \ref{prop:multi:ex1} show that conditions (a) and~(b) in Remark~\ref{rem:gAraujo} are neither sufficient nor necessary to get {\tt(FPM)}. Realize that the first example gives an important warning: the natural relaxation of condition~(b) from ``for all $x\in X$'' to ``for $m$-almost every $x\in X$'' (denoted by \emph{almost-(b)}) is not sufficient to get~{\tt(FPM)}. \begin{prop}\label{prop:multi:ex1} Consider the random map given in \eqref{eq:0621a}. \begin{enumerate} \item Let $f_0(x)=\frac{x}{2}$. Then, $f$ does not satisfy (b), but satisfies (a) and almost-(b). Moreover, {\tt(S)} does not hold. In particular, {\tt(FPM)} does not hold. However, we have that \[ \lim _{n\to \infty} \frac{1}{n}\sum _{j=0}^{n-1} \delta _{f^j_\omega (x)} = \delta _0 \quad \text{for all $x\in X$ and $\omega \in \Omega$.} \] \item Let $f_0(x)=\frac{x}{2}$ if $x\not=0$ and $f_0(0)=\frac{1}{2}$. Then, $f$ does not satisfy (a), but satisfies (b). Moreover, {\tt(S)} does not hold. In particular, {\tt(FPM)} does not hold. \item Let $f_0(x)=2x \text{ (mod 1)}$. Then, $f$ satisfies neither (a) nor~(b), but $\mathcal{L}_f$ satisfies {\tt (C)}. In particular, {\tt(FPM)} holds. \end{enumerate} \end{prop} Proposition~\ref{prop:multi:ex1} (1) provides a class of examples that do not satisfy {\tt(FPM)} although almost-(b) holds. However, this example still has a unique non-absolutely continuous physical measure. The next example shows that there is a drastic gap between (b) and almost-(b): finitely many versus infinitely many physical measures. Consider a random map $f$ under a multiplicative type noise, given by \begin{equation}\label{eq:0707c} f_t(x) =tx + (1- t)f_0(x), \quad x \in X=[0,1], \ \ t \in T=[0,1]. \end{equation} Now, we consider a concrete $C^1$ map $f_0: X\to X$ with infinitely many sinks, which was essentially given by Ara\'ujo in~\cite[Example 1]{Araujo2001}. Let $\phi : X \to \mathbb R$ be a $C^1$ function given by \begin{equation}\label{eq:0707b} \phi(x) = X^4 \sin \frac{1}{X} \qquad \text{where $\displaystyle X=\frac{2}{\pi} \left(x-\frac{1}{2}\right)$}. \end{equation} Note that $\phi$ can be seen as a $C^1$ function on the Lie group $S^1=\mathbb R/\mathbb Z$ under the identification of $S^1$ with $X=[0,1]$. Notice also that $\phi$ has (countably) infinitely many local maxima and minima, which accumulate at $\frac{1}{2}$. (Indeed, writing $u=1/X$, the critical points of $\phi$ with $X\neq 0$ satisfy $\tan u = \frac{u}{4}$, whose solutions $u_k\approx (k+\frac{1}{2})\pi$ accumulate at infinity, so the corresponding critical points accumulate at $X=0$, that is, at $x=\frac{1}{2}$.) Therefore, the time-one map $f_0$ of the gradient flow given by $\dot{x} = \nabla \phi (x)$ has infinitely many sinks that accumulate at $\frac{1}{2}$ and whose basins cover $X$ except for the sources of $f_0$.
Moreover, there are infinitely many points $(s_k) _{k\geq 1} \subset X$ such that \[ G(\delta_{s_k})=\left\{x\in X \, :\, \lim _{n\to \infty} \frac{1}{n}\sum _{j=0}^{n-1} \delta _{f^j_\omega (x)} =\delta _{s_k} \quad \text{for all $\omega\in\Omega$} \right\} \] is a non-empty open set (in particular, has positive $m$-measure) for all $k\geq 1$, and $\bigcup _{k=1}^\infty G(\delta_{s_k}) =X$ up to an $m$-null set. In particular, {\tt(FPM)} does not hold. \end{prop} \subsubsection{Iterated function systems}\label{sss:ifs} Finally, we consider some random maps generated by iterated function systems~(IFS). These examples will disprove the reverse implications in Figure~\ref{fig:hierarchy}. As explained in Remark~\ref{rmk:IFS}, an IFS with probabilities is a random map $f: T\times X\to X$ on a finite set $T=\{1, 2, \ldots ,k\}$ with a probability measure $p$ where $p(\{i\})=p_i>0$ for $i=1,\dots,k$. Notice that this setting allows the deterministic case, that is, the case $k=1$. As before, set $\Omega =T^{\mathbb N}$ and $\mathbb P=p^{\mathbb N}$. Then, $\mathbb P$ is the Bernoulli probability on $\Omega$, and the corresponding annealed Perron--Frobenius operator is of the form $$ {\mathcal{L}}_f \varphi = \int \mathcal{L}_t \varphi \, dp(t) = \sum_{i=1}^k p_i \mathcal{L}_i\varphi \quad \text{for $\varphi\in L^1(m)$} $$ where $\mathcal{L}_i$ is the Perron--Frobenius operator of $f_i=f(i ,\cdot )$ for $i=1,\dots, k$. Throughout this subsection, we keep this setting and notation. \vspace{0.1cm} \noindent (a) {\it Random expanding maps.} Let $X$ be the unit interval $[0,1]$ equipped with the Borel $\sigma$-field $\mathscr B$ and the normalized Lebesgue measure $m$. Let $f_i$ be a piecewise $C^2$ nonsingular transformation with a finite partition for each $i=1,\dots, k$. Assume the following expanding (on average) condition: \begin{eqnarray}\label{expanding_condition} \sum_{i=1}^k \frac{p_i}{\left\lvert f'_i(x)\right\rvert} <1 \quad\text{for all $x\in X$.} \end{eqnarray} When $x$ is a discontinuity point of some $f_i$, the left and right limits of $f'_i(x)$ are considered, and \eqref{expanding_condition} is required with these limits in place of $f_i'(x)$. Then, the following proposition holds for this $f$. \begin{prop}\label{prop:figa} ${\mathcal{L}}_f$ satisfies {\tt(C)}, but does not satisfy {\tt(UC)}. \end{prop} \vspace{0.1cm} \noindent (b) {\it Random contracting maps.} Let $(X,\mathscr{B},m)$ be as in the previous example. Consider the case $k=2$ and \[ f_1(x)=\frac{x}{2}, \quad f_2(x)=\frac{x}{2}+\frac{1}{2} \quad \text{($x\in X$)} \quad \text{with} \quad p_1=p_2=\frac{1}{2}, \] known as a special case of the Bernoulli convolution \cite{peres1996absolute}. Then the following proposition holds for this $f$. \begin{prop}\label{prop:contracting} ${\mathcal{L}}_f$ satisfies {\tt(AC)}, but does not satisfy {\tt(C)}. \end{prop} \begin{rem} Note that each $f_i$ fails to satisfy even {\tt (S)}: the Dirac measure at $0$ (resp.~$1$) is the only invariant probability measure of $f_1$ (resp.~$f_2$), but it is not absolutely continuous with respect to the Lebesgue measure. Namely, the random map $f$ has a property that each deterministic map $f_\omega$ does not have (such behaviors are called \emph{noise-induced phenomena}, and have been attracting attention among physicists, cf.~\cite{galatolo2020existence}). This contrasts with the other examples in Section~\ref{sss:ifs}, and it forces us to work a bit harder in Section~\ref{sss:dce}.
\end{rem} \begin{rem} A slightly weaker condition than {\tt (AC)} (i.e.~``some $ \varepsilon>0$'' instead of ``any $ \varepsilon>0$'') appeared in \cite{Komornik1989,komornik1991asymptotic} under the name of the {\it smoothing} property, and was later called {\it almost constrictivity} in~\cite{komornik1993asymptotic}. Note that the definition of smoothing in~\cite{KL1987} is stronger than the ones in~\cite{Komornik1989,komornik1991asymptotic}; the one in~\cite{KL1987} is indeed equivalent to constrictivity. The author of \cite{Komornik1989} asserted that a Markov operator $P$ is asymptotically periodic (or, equivalently, constrictive) when $P$ is smoothing. However, this claim cannot be true because the example in Proposition~\ref{prop:contracting} satisfies the smoothing property, but does not satisfy {\tt(C)}. We also remark that it was proven in~\cite{komornik1993asymptotic} that almost constrictivity implies {\tt (WAP)}; Figure~\ref{fig:hierarchy} shows that the results in this paper largely improve this. \end{rem} \vspace{0.1cm} \noindent (c) {\it Random rotations.} Now let $X=\mathbb{T}^1=\mathbb{R}/\mathbb{Z}$ be equipped with the Borel $\sigma$-field $\mathscr B$ and the normalized Lebesgue measure $m$. Let $k=2$ and consider two rotations $f_1$ and $f_2$ with angles $\alpha$ and $\beta$, that is, \[ f_1(x)=x+\alpha \pmod{1} \quad {\rm and}\quad f_2(x)=x+\beta \pmod{1} \] where $\alpha, \beta \in [0,1]$. Let $p_1=p_2=\frac{1}{2}$. \begin{prop}\label{prop:rotations} The following hold. \begin{enumerate} \item If $\alpha-\beta$ is irrational, then ${\mathcal{L}}_{f}$ satisfies~{\tt(C)}. \item If $\alpha$ and $\beta$ are irrational numbers such that $\alpha-\beta$ is rational, then ${\mathcal{L}}_{f}$ satisfies {\tt(MC)} but does not satisfy {\tt(AC)}. \item If $\alpha$ and $\beta$ are rational numbers, then ${\mathcal{L}}_{f}$ satisfies {\tt(WAP)} but does not satisfy {\tt(MC)}. \end{enumerate} \end{prop} \vspace{0.1cm} \noindent (d) {\it Direct sums of random contracting and expanding maps.} Let $k=2$ and consider the direct sum of random transformations, where one satisfies {\tt (S)} and the other does not. That is, let $X= X_-\sqcup X_+$ and equip $X$ with a probability measure $m$ for which both $X_-$ and $X_+$ have positive measure. With $p_1=p_2=\frac{1}{2}$, define $$ f_1(x)=\begin{cases} \tau_1^-(x) & \text{for $x\in X_-$}\\ \tau_1^+(x) & \text{for $x\in X_+$}\\ \end{cases} \quad {\rm and}\quad f_2(x)=\begin{cases} \tau_2^-(x) & \text{for $x\in X_-$}\\ \tau_2^+(x) & \text{for $x\in X_+$}\\ \end{cases} $$ where the random dynamics generated by $\tau_1^+,\tau_2^+:X_+\to X_+$ (with equal probabilities) satisfies {\tt (S)}, and the random dynamics generated by $\tau_1^-,\tau_2^-:X_-\to X_-$ does not satisfy {\tt (S)}. A typical example is the case when $\tau_1^+,\tau_2^+$ are one-dimensional piecewise $C^2$ nonsingular transformations with finite partitions satisfying the expanding condition \eqref{expanding_condition} (with $\tau _i^+$ instead of $f_i$), and $\tau _1^-(x) =\tau _2^-(x) = \frac{x}{2}$ on $X_-=[0,1]$. Then, the following proposition holds for this $f$. \begin{prop}\label{prop:figb} ${\mathcal{L}}_f$ satisfies {\tt(S)}, but does not satisfy {\tt(WAP)}. \end{prop} \subsubsection{Deterministic systems}\label{sss:dce} By slightly modifying the examples in Section \ref{sss:ifs}, one can easily produce examples of deterministic dynamical systems that disprove the reverse implications in Figure \ref{fig:hierarchy}, as follows.
Recall that the baker's transformation is a map $g: X\to X$ on $X=[0,1]^2$ (equipped with the Lebesgue measure $m$) defined by $ g(x,y)= (2x,\frac{y}{2})$ when $0\leq x<\frac {1}{2}$ and $g(x,y)=(2x-1,\frac{y}{2}+\frac{1}{2})$ when $\frac {1}{2}\leq x\leq 1$. \begin{prop}\label{prop:dce} The following hold. \begin{enumerate} \item Any one-dimensional piecewise $C^2$ nonsingular transformation with a finite partition satisfying \eqref{expanding_condition} is {\tt(C)} but not {\tt(UC)}. \item The baker's transformation is {\tt (AC)} but not {\tt (C)}. \item Any irrational rotation is {\tt(MC)} but not {\tt(AC)}. \item Any rational rotation is {\tt(WAP)} but not {\tt(MC)}. \item Define $g:X\to X$ as a measurable map preserving a splitting of $X$ by two positive measure sets $X_-$, $X_+$ such that the restriction of $g$ to $X_+$ satisfies {\tt (S)} and the restriction of $g$ to $X_-$ does not. Then, $g$ is {\tt (S)} but not {\tt (WAP)}. \end{enumerate} \end{prop} \subsection{Questions} \label{sec:1.4} We would like to leave some open questions on the notion of physical noise, on a possible generalization of Theorem~\ref{thm:C} to more general probabilities, and on statistical properties of Markov operators satisfying {\tt (AC)}. \subsubsection{On the physical noise} We call noise satisfying condition (b) in Remark~\ref{rem:gAraujo} \emph{physical noise}. This kind of noise is frequently used in the literature, as indicated in Remarks~\ref{rm:0302} and~\ref{rem:multiplicative-noise}. As an important consequence of Remark~\ref{rem:gAraujo}, any continuous random perturbation of a dynamical system on a compact space $X$ by physical noise satisfies {\tt(FPM)}. Recall that~{\tt(FPM)} asks for the existence of finitely many ergodic stationary measures with disjoint supports which are absolutely continuous with respect to $m$ and such that the union of their fiberwise statistical basins of attraction covers $X$ almost surely. Proposition~\ref{prop:multi:ex1} provides a class of examples of continuous random perturbations of a dynamical system by \emph{almost physical noise} (that is, noise satisfying condition almost-(b)) which do not satisfy~{\tt(FPM)}. However, this example still has a unique non-absolutely continuous physical measure whose fiberwise statistical basins of attraction cover the full space up to a null set. Then, a natural question is whether~{\tt(FPM)} could be weakened by asking for finitely many physical ergodic stationary measures instead of finitely many absolutely continuous ergodic stationary measures. Call this property {\tt(fpm)} for short. On the other hand, Proposition~\ref{prop:0707} shows that considering random perturbations by almost physical noise is still insufficient to get~{\tt(fpm)}. \begin{question} Find a reasonable sufficient condition to get~{\tt(fpm)} generalizing the physical noise. \end{question} \subsubsection{Generalization of Theorem \ref{thm:C}} Let $f:T\times X\to X$ be a $\mathbb{P}$-random map as defined in Appendix~\ref{sec:apendix}, where $\mathbb{P}$ is a shift-invariant probability measure on $(\Omega,\mathscr{F})=(T^\mathbb{N},\mathscr{A}^{\mathbb{N}})$ but not necessarily a Bernoulli probability as before. Consider the annealed Perron--Frobenius operator ${\mathcal{L}}_f$~given~by $$ {\mathcal{L}}_f \varphi = \int \mathcal{L}_\omega \varphi \, d\mathbb{P}(\omega) \quad \text{for $\varphi \in L^1(m)$} $$ where $\mathcal{L}_\omega$ is the Perron--Frobenius operator of $f_\omega=f_t$ for $\omega=(\omega_i)_{i\geq 0} \in \Omega$ with $\omega_0=t$. Then Theorem~\ref{thm:D} applies to ${\mathcal{L}}_f$.
However, we cannot conclude Theorem~\ref{thm:C} from this because an invariant density of ${\mathcal{L}}_f$ does not correspond in general to a stationary measure with respect to $\mathbb{P}$. To be more precise, an invariant density $h$ for $ {\mathcal{L}}_f$ defines a probability measure $\mu$ by $d\mu=h\, dm$ which satisfies \begin{equation} \label{eq:stationary} \mu(A) =\int (f_\omega)_*\mu(A) \, d\mathbb{P}(\omega) \quad \text{for all $A\in \mathscr{B}$}. \end{equation} However, a stationary measure $\mu$ with respect to $\mathbb{P}$ is not necessarily a measure that satisfies~\eqref{eq:stationary}. See, for instance,~\cite{matias_2021} where stationary measures for general Markov probabilities are characterized. This stationarity is necessary to obtain invariant measures for the skew-product $F$ in~\eqref{eq:skew} associated with $f$. One can see that if $\mu$ satisfies~\eqref{eq:stationary} and $\mathbb{P}$ is not a Bernoulli measure, then $\mathbb{P}\times\mu $ is not in general an invariant measure of $F$ (cf.~\cite[Example~1.4.7]{Arnoldbook}). Since the $F$-invariance of $\mathbb{P}\times\mu$ is essential for us to prove Theorem~\ref{thm:C}, we do not know how to obtain this result in this case. \begin{question} Is it possible to obtain a similar result to Theorem~\ref{thm:C} for the random iteration of a measurable map $f$ as above driven by a general probability $\mathbb{P}$ on $\Omega$? \end{question} \subsubsection{Equivalent conditions to {\tt (AC)} and {\tt(UC)}} \label{sec:Questions-(AC)-(UC)} We next consider the counterpart for {\tt (AC)} of the role that {\tt (APM)} plays for {\tt (MC)}. First, we introduce some definitions. An invariant density $h$ of a Markov operator $P: L^1(m) \to L^1(m)$ is called \emph{mixing} (resp.~\emph{exact}) if \[ \lim_{n\to\infty}P^n\varphi = h\int \varphi \, dm \quad \text{weakly (resp.~strongly) in $L^1(m)$} \] for any $\varphi\in L^1(m)$ whose support is included in $\operatorname*{supp} h$. If $h$ is exact and the above convergence occurs exponentially fast (i.e.~there are constants $C>0$ and $0<\rho <1$, independent of $\varphi$, such that $\Vert P^n\varphi - h\int \varphi \, dm\Vert \leq C\rho^n \Vert \varphi \Vert$ for each $n$), we say that $P$ is \emph{exponentially exact}. In particular, given a non-singular measurable map $g: X\to X$ and an $\mathcal L_g$-invariant density $h$, the probability measure $m_h$ given in \eqref{eq:mphi} is mixing (resp.~exact) for $g$ if and only if $h$ is mixing (resp.~exact) for $\mathcal L_g$. See Appendix~\ref{appendix:D} for more details. We also recall the well-known fact that a Markov operator $P$ is {\tt (C)} if and only if \begin{itemize} \item[{\tt (AP)}] \label{AP} \it $P$ is \emph{asymptotically periodic}, that is, there exist finitely many densities $g_1,\dots,g_r$ with mutually disjoint supports (up to an $m$-null set), positive bounded linear functionals $\lambda_1, \dots, \lambda_r$ on $L^1(m)$ and a permutation $\rho $ of $\{1,\ldots ,r\}$ such that $P g_i = g_{\rho (i)}$ for each $i=1,\dots,r$ and for $\varphi \in L^1(m)$, \begin{equation}\label{eq:0630exa} P^{n} \big( \varphi - \sum _{i=1}^r \lambda _i(\varphi ) g_i\big) \to 0 \quad \text{strongly in $L^1(m)$ as $n\to\infty$.} \end{equation} \end{itemize} It is also known that if $P$ satisfies {\tt (AP)} with $r=1$, then $g_1$ is exact. Refer to e.g.~\cite{LM}.
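As a toy illustration of the permutation appearing in {\tt (AP)} (a trivial example, included only to fix ideas), take $X=\{0,1\}$ with $m(\{0\})=m(\{1\})=\frac{1}{2}$ and let $P=\mathcal L_g$ for the swap $g(x)=1-x$, so that $P\varphi = \varphi\circ g$. The densities $g_1=2\cdot 1_{\{0\}}$ and $g_2=2\cdot 1_{\{1\}}$ satisfy $Pg_1=g_2$ and $Pg_2=g_1$, and every $\varphi\in L^1(m)$ is already of the form $\lambda_1(\varphi)g_1+\lambda_2(\varphi)g_2$ with $\lambda_i(\varphi)=\int_{\{i-1\}}\varphi\,dm$, so \eqref{eq:0630exa} holds trivially. Hence $P$ satisfies {\tt (AP)} with $r=2$ and $\rho$ the transposition, although its unique invariant density $1_X$ is not mixing.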
Mimicking the above asymptotically periodic condition, we introduce the following class of Markov operators: \begin{itemize} \item[{\tt (APW)}] \label{APW} \it $P$ is \emph{asymptotically periodic weakly}, that is, {\tt (AP)} holds with ``weakly'' instead of ``strongly'' in~\eqref{eq:0630exa}. \end{itemize} \begin{question} \label{q:AC} Is {\tt (AC)} equivalent to {\tt (APW)}? Moreover, is $g_1$ mixing if $r=1$? \end{question} The different classes considered in this paper can be classified into three categories according to the type of conditions involved in the definition of the class: conditions on a \emph{constrictor}, conditions \`a la \emph{Dunford--Pettis}, or conditions on \emph{periodicity} in the limit. Since many of these conditions are ultimately equivalent, we have avoided as far as possible introducing many names to indicate the different equivalent definitions. In Table~\ref{table:question} we organize this classification of classes, but first we introduce two more classes: \begin{enumerate}[itemsep=0.5cm] \it \item[{\tt (EAP)}] \label{EAP} $P$ is \emph{exponentially asymptotically periodic}, that is, {\tt (AP)} holds with \[ \bigg\| P^{n} \big( \varphi - \sum _{j=1}^r \lambda _j(\varphi ) g_j\big)\bigg \| \leq C\rho ^n \Vert\varphi\Vert \quad \text{for any $\varphi\in L^1(m)$} \] for some $C>0$ and $0< \rho <1$ independent of $n$ and $\varphi$, instead of \eqref{eq:0630exa}. \item[{\tt (CW)}] \label{def:CW} $P$ is \emph{constrictive weakly}\footnote{ There exists a quite similar name in the literature, \emph{weak constrictivity}: $P$ is said to be weakly constrictive if there is a weakly compact set $F$ of $L^1(m)$ such that $\lim _{n\to\infty}d(P^nh,F ) = 0$ for any $h\in D(m)$. It is known that weak constrictivity is an equivalent condition to {\tt (C)} (cf.~\cite{komornik1993asymptotic}). }, that is, there exists a weakly compact set $F$ of $D(m)$ such that for any $h\in D(m)$ there exists $(\psi _n)_{n\in \mathbb N} \subset F$ satisfying \[ \lim _{n\to\infty}(P^nh -\psi _n ) = 0\quad \text{weakly in $L^1(m)$.} \] \end{enumerate} In Proposition~\ref{prop:efinal} we will show that {\tt (APW)} $\Rightarrow$ {\tt (CW)} $\Rightarrow$ {\tt (AC)}. As we see below, these implications have some consequences. The first obvious consequence is that Question~\ref{q:AC} is actually reduced to proving that {\tt(AC)} $\Rightarrow$ {\tt(APW)}. Moreover, \begin{equation}\label{eq:implications} \text{{\tt (C)} $\Rightarrow$ {\tt(CW)} $\Rightarrow$ {\tt (MC)}} \quad \text{and} \quad \text{{\tt (AP)} $\Rightarrow$ {\tt (APW)} $\Rightarrow$ {\tt(APM)}.} \end{equation} Indeed, notice first that ``$\lim _{n\to\infty}d(P^nh , F ) = 0$'' in the notion of constrictive Markov operator given in Definition~\ref{dfn:1011} can be rephrased as: there is $(\psi _n)_{n\in \mathbb N} \subset F$ such that \mbox{$P^nh -\psi _n \to 0$} strongly in $L^1(m)$. Thus, clearly {\tt (C)} implies {\tt(CW)}. Also, since {\tt(AC)} implies {\tt (MC)}, from Proposition~\ref{prop:efinal} and Theorem~\ref{thm:D}, we have {\tt (CW)} $\Rightarrow$ {\tt(MC)}. Similarly, we have {\tt (AP)} $\Rightarrow$ {\tt (APW)} $\Rightarrow$ {\tt(APM)}. Finally, we mention that none of the converses of the implications in~\eqref{eq:implications} holds. To see this, we will prove in Proposition~\ref{prop:mixing-no-exact} that in example (b) of Section~\ref{sss:ifs}, $1_X$ is a $P$-invariant density (called the trivial density) which is mixing, but not exact.
In particular, $P$ satisfies {\tt (APW)} with $r=1$ and consequently {\tt(CW)} from Proposition~\ref{prop:efinal}. However, since any Markov operator whose trivial density is mixing but not exact is not constrictive (cf.~\cite{LM}), $P$ in this example does not satisfy~{\tt (C)}. Also, {\tt(MC)} (resp.~{\tt(APM)}) does not imply {\tt(CW)} (resp.~{\tt(APW)}) since {\tt (AC)} is not equivalent to {\tt (MC)} (and thus {\tt(APM)}) from Propositions~\ref{prop:rotations}~(2) and~\ref{prop:dce}~(3). \begin{table} \vspace{0.5cm} \centering \footnotesize \begin{tabular}{|c||c|c|c|c|} \cline{1-5} constrictor & \begin{tabular}{@{}c@{}} \tt(UC) \\[-0.1cm] \tiny Def.~\ref{dfn:1011} \end{tabular} & \begin{tabular}{@{}c@{}} \tt (C) \\[-0.1cm] \tiny Def.~\ref{dfn:1011} \end{tabular} & \begin{tabular}{@{}c@{}} \tt (CW)? \\[-0.1cm] \tiny p.~\pageref{def:CW} \end{tabular} & \begin{tabular}{@{}c@{}} {\tt (MC)} \\[-0.1cm] \tiny Def.~\ref{dfn:1011} \end{tabular} \\ \hline Dunford--Pettis & \begin{tabular}{@{}c@{}} \tt(UC) \\[-0.1cm] \tiny Prop.~\ref{prop:UC} (3) \end{tabular} & \begin{tabular}{@{}c@{}} {\tt (C)} \\[-0.1cm] \tiny p.~\pageref{CDP} \end{tabular} & \begin{tabular}{@{}c@{}} {\tt (AC)} \\[-0.1cm] \tiny Def.~\ref{asy.const} \end{tabular} & \begin{tabular}{@{}c@{}} {\tt (MC)} \\[-0.1cm] \tiny Thm.~\ref{thmeq} \end{tabular} \\ \hline periodicity & \begin{tabular}{@{}c@{}} {\tt(EAP)}? \\[-0.1cm] \tiny p.~\pageref{EAP} \end{tabular} & \begin{tabular}{@{}c@{}} {\tt (AP)} \\[-0.1cm] \tiny p.~\pageref{AP} \end{tabular} & \begin{tabular}{@{}c@{}} {\tt (APW)}? \\[-0.1cm] \tiny p.~\pageref{APW} \end{tabular} & \begin{tabular}{@{}c@{}} {\tt (APM)} \\[-0.1cm] \tiny Thm.~\ref{thm:D} \end{tabular} \\ \hline $r=1$ & exp.~exact ? & exact & mixing ? & ergodic \\ \hline \end{tabular} \caption{Classification of different notions of constrictive type classes.}\label{table:question} \end{table} In Theorem~\ref{thmeq} we prove the equivalence between the different definitions for the class of mean constrictive Markov operators. In Proposition~\ref{prop:UC}, we prove the equivalence between the definition of uniformly constrictive \`a la Dunford--Pettis and the version in terms of a constrictor. Finally, to complete Table~\ref{table:question} it remains to solve (together with {\tt(AC)} $\Rightarrow$ {\tt(APW)}) the following question: \begin{question} Is {\tt(UC)} equivalent to {\tt(EAP)}? Moreover, if {\tt(EAP)} holds with $r=1$, then is $g_1$ exponentially exact? \end{question} \subsection{Organization of the paper} In Section~\ref{sec:thmB} we will prove Theorem~\ref{thm:B} and show the generalization of Ara\'ujo's result mentioned after Theorem~\ref{thm:A}. The proofs of the most general versions of Theorems~\ref{thm:D} and~\ref{GST} are carried out in Sections~\ref{s:thmD} and~\ref{sec:GST}, respectively. Theorem~\ref{thm:C} is proved in Section~\ref{s:thmc}. In Section~\ref{s:UC} we study the sub-hierarchy in~{\tt(UC)}. Finally, in Section~\ref{s:examples} we provide the proofs of the propositions concerning the examples discussed in the introduction. We also include four appendices that may be of independent interest. In Appendix~\ref{sec:apendix} we briefly present, in a general framework, some properties of the annealed Perron--Frobenius operator that we will use. Appendix~\ref{sec:Markov-restriction2} studies and generalizes the restriction of a Markov operator to the support of a density. In Appendix~\ref{appendix:B}, we study the ergodicity of invariant densities.
Finally, in Appendix~\ref{appendix:D} we relate the definitions of mixing and exactness in this section to the classical definitions of mixing and exactness for deterministic maps. \enlargethispage{2\baselineskip} \section{Feller continuity and quasi-compactness: proof of Theorem~\ref{thm:B}} \label{sec:thmB} A Markov transition probability $P(x,A)$ is said to be \emph{Feller continuous}, \emph{strong Feller continuous} and \emph{ultra Feller continuous} if the family of probabilities $P(x,\cdot)$ varies continuously with respect to the weak* topology, setwise convergence and total variation distance on the space of probabilities, respectively. Namely, for any $x\in X$ and every sequence $\{x_n\}_{n\geq 1}$ with $x_n\to x$, \begin{enumerate}[itemsep=0.2cm] \item $P(x,A)$ is Feller if $P(x_n,\cdot) \to P(x,\cdot)$ in the weak* topology, i.e., if $$\int \varphi(y) \, P(x_n,dy) \to \int \varphi(y) \, P(x,dy)$$ \text{for all bounded continuous real-valued functions $\varphi$ on $X$.} \item $P(x,A)$ is strong Feller if $P(x_n,\cdot) \to P(x,\cdot)$ in setwise convergence, i.e., if $$P(x_n,A) \to P(x,A) \quad \text{for all $A\in\mathscr{B}$}.$$ \item $P(x,A)$ is ultra Feller if $P(x_n,\cdot) \to P(x,\cdot)$ in total variation distance, i.e., if $$\|P(x_n,\cdot)-P(x,\cdot)\|_{TV} \to 0$$ where the total variation distance of two Borel probability measures $\mu$ and $\nu$ on $X$~is~given~by $$ \|\mu-\nu\|_{TV}=2\cdot \sup _{A\in {\mathscr{B}}} |\mu (A)-\nu (A)|. $$ \end{enumerate} It is clear by definition that ultra Feller continuity implies strong Feller continuity. Moreover, since $X$ is a Polish space, an equivalent way of describing the setwise convergence of a sequence of measures $(\mu_n)_{n\geq 1}$ to $\mu$ is the following: $$ \lim_{n\to \infty} \int \varphi \, d\mu_n = \int \varphi \, d\mu, $$ for all bounded Borel measurable real-valued functions $\varphi$ on $X$. This is because the simple functions are dense among the bounded Borel measurable real-valued functions on $X$ under the supremum norm. Thus, as a consequence, strong Feller continuity implies Feller continuity. The converses of these implications are not true in general. However, although ultra Feller continuity seems, at first sight, to be stronger than strong Feller continuity, it turns out that the two are almost equivalent. More precisely, according to~\cite[Theorem~3.37]{hairer2009non}, if two Markov transition probabilities $Q(x,A)$ and $R(x,A)$ are strong Feller, then the convolution $QR$ given by $$ QR(x,A)=\int R(y,A)\, Q(x,dy) $$ is an ultra Feller continuous Markov transition probability. In view of the Chapman--Kolmogorov relation, $$ P^{n+k}(x,A)=\int P^n(y,A) P^k(x,dy) \quad \text{for all $n,k\in \mathbb{N}$}, $$ we have that $P^{n+k}(x,A)$ is the convolution of $P^{n}(x,A)$ and $P^{k}(x,A)$. In particular, we get the following remark: \begin{rem} \label{rem1} If $P^{n_0}(x,A)$ is strong Feller continuous for some $n_0\geq 1$, then $P^{2n_0}(x,A)$ is ultra Feller continuous. \end{rem} To prove Theorem~\ref{thm:B} we need the following proposition that shows some equivalent formulations of the assumption of Theorem~\ref{thm:A}. First, recall that a Markov transition probability $P(x,A)$ is said to be $m$-nonsingular if $m(A)=0$ implies that $P(x,A)=0$ for $m$-almost every $x\in X$. We also refer the reader to the beginning of Section \ref{sec:(UC)}, where it is explained how a transition probability can be associated with a Markov operator; see also~\cite[Chapter V.4]{neveu1965mathematical}.
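Before stating the proposition, let us record a standard example separating these continuity notions (a well-known observation, included here for convenience). For a deterministic kernel $P(x,\cdot)=\delta_{g(x)}$ with $g:X\to X$ continuous, we have $\int \varphi(y)\, P(x_n,dy)=\varphi(g(x_n))\to \varphi(g(x))$ for every bounded continuous $\varphi$, so $P(x,A)$ is Feller continuous. On the other hand, $P(\cdot,A)=1_{g^{-1}(A)}$ is an indicator function, so strong Feller continuity fails whenever $1_{g^{-1}(A)}$ is discontinuous at some point: for instance, for $g=\mathrm{id}$ on $[0,1]$ and $A=[0,\frac{1}{2}]$, taking $x_n=\frac{1}{2}+\frac{1}{n}\to\frac{1}{2}$ gives $P(x_n,A)=0\not\to 1=P(\frac{1}{2},A)$.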
\begin{prop} \label{prop:ultra-Feller+B} Let $(X,\mathscr{B},m)$ be a Polish probability space. Consider a Markov transition probability $P(x,A)$ with $x\in X$ and $A\in \mathscr{B}$. Then, the following conditions are equivalent: \begin{enumerate}[label=(\alph*),itemsep=0.2cm] \item there exists $n_0\geq 1$ such that \begin{enumerate}[label=(\arabic*)] \item $P^{n_0}(x,A)$ is strong Feller continuous, and \item $P^{n_0}(x,\cdot)$ is $m$-nonsingular; \end{enumerate} \item there exists $n_0\geq 1$ such that \begin{enumerate}[label=(\roman*)] \item $P^{n_0}(x,A)$ is ultra Feller continuous, and \item $P^{n_0}(x,\cdot)$ is absolutely continuous with respect to $m$ for all $x\in X$; \end{enumerate} \item there exists $n_0\geq 1$ such that $$P^{n_0}(x,dy)=p(x,y)\, dm(y) \ \ \text{with} \ \ x \in X\mapsto p(x,\cdot) \in L^1(m) \ \ \text{continuous.}$$ \end{enumerate} Moreover, if $P^n(x,A)$ is an $n$-th Markov transition probability associated to a Markov operator $P:L^1(m) \to L^1(m)$, then $P^n(x,A)$ is $m$-nonsingular for each $n\geq 1$. \end{prop} \begin{proof} Assume condition (a); we show (b). According to Remark~\ref{rem1}, condition (1) implies that $P^{2n_0}(x,A)$ is ultra Feller continuous. Moreover, since $P^{n}(x,A)$ is the convolution of $P$ and $P^{n-1}$ (see \eqref{eq:recurence}), we get from (2) that $P^n(x,\cdot)$ is $m$-nonsingular for all $n\geq n_0$ and $m$-almost every $x\in X$. In particular, $P^{2n_0}(x,\cdot)$ is $m$-nonsingular. In fact, the continuity with respect to the total variation distance implies that $P^{2n_0}(x,\cdot)$ is actually absolutely continuous with respect to $m$ for all $x\in X$. Indeed, take $A\in\mathscr{B}$ with $m(A)=0$ and $x\in X$, and consider $x_n\to x$ with $x_n\in X$ such that $P^{2n_0}(x_n,A)=0$ for all $n\geq 1$. By the ultra Feller continuity of $P^{2n_0}(\cdot,A)$ we have that $P^{2n_0}(x_n,A)\to P^{2n_0}(x,A)$ as $n\to \infty$. Then $P^{2n_0}(x,A)=0$ as required. Consequently, we get (i) and (ii) for the positive integer~$2n_0$. Conversely, (i) and (ii) clearly imply (1) and (2). We now prove that (i) and (ii) are equivalent to the $L^1(m)$-continuity of the Radon--Nikod\'{y}m derivative $p(x,\cdot)$ of $P^{n_0}(x,\cdot)$ with respect to $m$. But this follows immediately from the well-known fact that $$ \left\lVert P^{n_0}(x,\cdot)-P^{n_0}(x',\cdot)\right\rVert_{TV} = \left\lVert p(x,\cdot)-p(x',\cdot)\right\rVert \quad \text{for all $x,x'\in X$}. $$ See the equation before Lemma 2 of \cite{kusolitsch2010theorem}. This completes the proof of the equivalences. The last assertion follows immediately from the fact that $P^n(x,A)=P^{n*}1_A(x)$ for $m$-almost every $x\in X$. Indeed, by duality, $\int P^n(x,A) \, dm = \int_A P^n1_X \, dm =0$ whenever $m(A)=0$. Consequently, $P^n(x,A)=0$ for $m$-almost every $x\in X$, concluding that $P^n(x,A)$ is $m$-nonsingular. \end{proof} The following result is Theorem~\ref{thm:B} stated in terms of Markov processes and Markov operators. \begin{thm} \label{thm:B-Markov} Let $(X , \mathscr B, m)$ be a compact Polish probability space. Consider a Markov operator $P:L^1(m) \to L^1(m)$ and let $P^n(x,A)$ be an associated $n$-th transition probability. Assume that there exists $n_0\in \mathbb{N}$ such that $P^{n_0}(x,A)$ is strong Feller continuous. Then $P$ is eventually compact. \end{thm} \begin{proof} According to Proposition~\ref{prop:ultra-Feller+B}, we can assume that $P^{n_0}(x,A)$ satisfies conditions (i) and (ii) in that proposition.
Now, from~\cite[Lemma~1]{HK1964}, we have that the absolute continuity of $P^{n_0}(x,\cdot)$ with respect to $m$ is equivalent to the uniform absolute continuity of such a measure with respect to $m$ for all $x\in X$. That is, for any $x\in X$ and $\varepsilon>0$, there is $\delta_x=\delta_x(\varepsilon)>0$ such that $$P^{n_0}(x,A)<\varepsilon \quad \text{for all $A\in \mathscr{B}$ with $m(A)<\delta_x$.}$$ \begin{claim} \label{claim:compact} For each $\varepsilon>0$, there is $\delta=\delta(\varepsilon)>0$ such that $$\text{$P^{n_0}(x,A)<\varepsilon$ \ \ for all $A\in \mathscr{B}$ with $m(A)<\delta$ and $x\in X$.}$$ \end{claim} \begin{proof} By the continuity of the family of probability measures $P^{n_0}(x,\cdot)$ with respect to the total variation, for each $x\in X$, there exists a neighborhood $V(x,\varepsilon)$ of $x$ such that $P^{n_0}(x',A)<\varepsilon$ for all $x'\in V(x,\varepsilon)$ and $A \in \mathscr{B}$ with $m(A)<\delta_x$. Since the union of the open sets $V(x,\varepsilon)$ for $x\in X$ covers the compact set $X$, we can extract a finite subcover $V(x_1,\varepsilon), \dots, V(x_k,\varepsilon)$. Thus, the claim follows by taking $\delta=\min\{\delta_{x_1},\dots,\delta_{x_k}\}$. \end{proof} Finally, let us conclude the proof. First, recall that a \emph{bounded} family $F\subset L^1(m)$ is said to be \emph{uniformly integrable} (in $L^1(m)$) if for every $\varepsilon>0$ there is $\delta>0$ such that $$ \int_A |g| \, dm <\varepsilon \quad \text{for all $g\in F$ and $A\in\mathscr{B}$ with $m(A)<\delta$}.$$ Write $P^{n_0}(x,dy)=p(x,y)\, dm(y)$ and take $F=\{p(x,\cdot): x\in X\} \subset D(m)$. Claim~\ref{claim:compact} implies that $F$ is uniformly integrable (in $L^1(m)$). Then, according to~\cite[Corollary~2.5 (b)]{wu2000uniformly}, the operator $\pi^2$ is compact, where $\pi:L^\infty(m)\to L^\infty(m)$ is given by $$ \pi \psi (x) = \int \psi(y) p(x,y)\, dm(y) = \int \psi(y) P^{n_0}(x,dy) = P^{n_0*}\psi(x), \quad \psi\in L^\infty(m). $$ That is, $\pi$ is the adjoint operator $P^{n_0*}$ of $P^{n_0}$. Thus, $P^{2n_0*}=(P^{n_0*})^2=\pi^2$ is a compact operator. Hence, by Schauder's theorem~(cf.~\cite[Theorem~4.19]{rudin1991functional}), $P^{2n_0}$ is also compact, concluding the proof. \end{proof} We now observe that Theorem~\ref{thm:B} follows immediately from Proposition~\ref{prop:ultra-Feller+B} and Theorem~\ref{thm:B-Markov}. Actually, Theorem~\ref{thm:B} and Theorem~\ref{thm:B-Markov} are equivalent in view of the following observation. \begin{rem} \label{rem:P=Lf} As indicated in the introduction, it is well known that the theories of Markov operators and of Markov processes are intimately related. Perhaps less known is that the general theory of Markov operators is actually equivalent to the particular theory of annealed Perron--Frobenius operators associated with random maps on a Polish probability space $(X,\mathscr{B},m)$. Indeed, given a random map $f$, we clearly obtain a Markov operator by means of the annealed Perron--Frobenius operator $\mathcal{L}_f$. Conversely, given a Markov operator $P:L^1(m)\to L^1(m)$, we consider a transition probability $P(x,A)$ which induces $P$. See~\cite[Proposition~V.4.4]{neveu1965mathematical} and Section~\ref{sec:(UC)} for more details. Notice that $$P(x,A)=P^*1_A(x) \quad \text{for $m$-almost every $x\in X$}$$ {where $P^*$ is the adjoint operator of $P$}.
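For instance, given two $m$-nonsingular measurable maps $f_0,f_1:X\to X$, the transition probability $P(x,\cdot)=\frac{1}{2}\,\delta_{f_0(x)}+\frac{1}{2}\,\delta_{f_1(x)}$ is induced by the random map $f(t,x)=f_t(x)$ over the two-point probability space $T=\{0,1\}$ with $p=\frac{1}{2}(\delta_0+\delta_1)$, and the associated Markov operator is the annealed Perron--Frobenius operator $\mathcal{L}_f=\frac{1}{2}(\mathcal{L}_{f_0}+\mathcal{L}_{f_1})$, the average of the Perron--Frobenius operators of the two maps.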
Now, Kifer proved in~\cite[Theorem~1.1]{Kifer1986} that any transition probability in a Polish\footnote{Actually, the only requirement is that $(X,\mathscr{B})$ be a countably generated measurable space, i.e., that the $\sigma$-algebra $\mathscr{B}=\sigma(\mathcal{A})$ for some countable subset $\mathcal{A}$ of $\mathscr{B}$.} space $X$ can be represented by a random product of independent and identically distributed measurable maps. That is, there exists a probability space $(T,\mathscr{A},p)$ and a measurable map $f:T\times X \to X$ such that $$ P(x,A)=p(\{t\in T: f_t(x) \in A\}) $$ where $f_t=f(t,\cdot)$. This implies that $P(x,A)=\mathcal{L}^*_f1_A(x)$ for $m$-almost every $x\in X$, where {$\mathcal{L}^*_f$ is the adjoint operator of the annealed Perron--Frobenius operator $\mathcal{L}_f$ associated with~$f$}. Hence, $P^*1_A=\mathcal{L}^*_f1_A$ and therefore $P=\mathcal{L}_f$. To see this final assertion, observe first that any $g \in L^{\infty}(m)$ can be approximated uniformly by simple functions $g_n=\sum_{i=1}^N a_i 1_{A_i}$ where $a_i=a_i(n) \in \mathbb{R}$, $A_i=A_i(n) \in \mathscr{B}$ and $N=N(n)\in\mathbb{N}$. Then, $g_n$ converges to $g$ in the $L^\infty(m)$-norm and hence $$ P^*g = \lim_{n\to\infty} P^*g_n = \lim_{n\to\infty} \sum_{i=1}^N a_i \, P^*1_{A_i} = \lim_{n\to\infty} \sum_{i=1}^N a_i \, \mathcal{L}_f^*1_{A_i} = \lim_{n\to \infty} \mathcal{L}^*_f g_n = \mathcal{L}^*_f g. $$ This implies that $P^*=\mathcal{L}_f^*$ as well as $P=\mathcal{L}_f$ as required. \end{rem} \subsection{Generalization of Ara\'ujo's result} We will prove that under the condition (i) mentioned in Remark~\ref{rem:gAraujo}, the assumptions of Theorems~\ref{thm:A} and~\ref{thm:B} hold. To do this, we need some preliminaries. The following proposition provides a well-known sufficient condition under which a given Markov transition probability is Feller continuous. The proof is straightforward and can be found in~\cite[Theorem~4.22]{hairer2006ergodic}. Recall that a continuous random map is a measurable map $f:T\times X\to X$ such that $f_t=f(t,\cdot)$ is continuous for $p$-almost every $t\in T$. \begin{prop} \label{prop:sinprueba} If $P(x,A)$ is the transition probability associated to a continuous random map, then $P(x,A)$ is Feller continuous. \end{prop} The following result extends~\cite[Theorem~1.2 in Section~4]{kushner2001heavy} to the case of compact Polish probability spaces. \begin{thm} \label{thm:book-generalized} Let $(X,\mathscr{B},m)$ be a compact Polish probability space. Consider a Markov transition probability $P(x,A)$ with $x\in X$ and $A\in \mathscr{B}$. Assume that \begin{enumerate} \item $P(x,A)$ is Feller continuous, and \item $P(x,\cdot)$ is absolutely continuous with respect to $m$ for all $x\in X$. \end{enumerate} Then $P(x,A)$ is a strong Feller continuous transition probability. \end{thm} We will prove the above result by modifying the argument in \cite{kushner2001heavy}, where Theorem~\ref{thm:book-generalized} was shown for $\mathbb R^d$ instead of $X$. For this modification, we need the following. \begin{lem}\label{lem:0204} Let $(X, \mathscr B, m)$ be a Polish probability space. Then, for any $A\in \mathscr{B}$ and $\delta >0$, there exists $B\in \mathscr{B}$ such that \[ m(\partial B) =0 \quad \text{and} \quad m(A \Delta B) <\delta \] where $\partial B$ is the boundary of $B$ and $A \Delta B$ is the symmetric difference of $A$ and $B$.
\end{lem} \begin{proof} Consider the subalgebra $\mathcal{A}$ in $\mathscr{B}$ of $m$-continuity sets, i.e., $B\in \mathcal{A}$ if and only if $B\in \mathscr{B}$ and $m(\partial B)=0$. Let us prove that the $\sigma$-algebra generated by $\mathcal{A}$ is $\mathscr{B}$. To see this, fix any metric $d$ compatible with the topology of $X$. Note that for $x\in X$, $\partial{B}_r(x)\subset \{ y \in X: d(x,y)=r\}$ where $B_r(x)$ denotes the ball centered at $x$ and of radius $r>0$. In particular, $\partial{B}_r(x)\cap \partial B_s(x)=\emptyset$ for $r\not= s$. But in a space of finite measure, there can only be countably many pairwise disjoint sets of positive measure. Hence, $B_r(x) \in \mathcal{A}$ for all but at most countably many $r$. In particular, $\mathcal{A}$ contains a neighborhood base of $x$. This shows that $\sigma(\mathcal{A})=\mathscr{B}$ as desired. Now, since the subalgebra $\mathcal{A} \subset \mathscr{B}$ generates $\mathscr{B}$, according to the well-known result on approximation by generating subalgebras (cf.~\cite[Theorem~1.1]{liu2006smooth}), for every $A\in \mathscr{B}$ and $\delta>0$, there is $B\in \mathcal{A}$ such that $m(A\Delta B) < \delta$. This concludes the proof of the lemma. \end{proof} Now, we are in a position to prove Theorem~\ref{thm:book-generalized}. \begin{proof}[Proof of Theorem~\ref{thm:book-generalized}] Let us fix $A\in \mathscr{B}$ and $x\in X$, and consider a sequence $\{ x_n\} _{n\geq 1}$ converging to~$x$. We will prove that $P(x_n,A) \to P(x,A)$ as $n\to\infty$, concluding the strong Feller continuity of $P(x,A)$. It follows from the Portmanteau theorem~(cf.~\cite[Theorem~13.16]{Klenke2008}) that $P(x_n, \cdot )$ converges to $P(x, \cdot )$ in the weak$^*$ topology if and only if $P(x_n, B)$ converges to $P(x, B)$ for any continuity set $B$ of $P(x, \cdot )$, i.e.~any $B\in \mathscr{B}$ satisfying $P(x, \partial B) =0$. The former condition holds because we assumed that $P(x,A)$ is Feller continuous, and thus, recalling that $P(x, \cdot )$ is absolutely continuous with respect to $m$ by assumption, \begin{equation}\label{eq:0204c} P(x_n, B) \to P(x, B) \quad \text{for any $B\in \mathscr{B}$ satisfying $m( \partial B) =0$.} \end{equation} Now, according to~\cite[Lemma~1]{HK1964}, the absolute continuity of a probability measure $\nu$ with respect to $m$ is equivalent to the uniform absolute continuity of $\nu$ with respect to $m$. That is, $\nu (B) \to 0$ as $m(B) \to 0$. Thus, for each $\varepsilon >0$ and $n\geq 1$, by Lemma \ref{lem:0204} and the absolute continuity of $P(x_n ,\cdot )$ with respect to $m$, one can find $B_{\varepsilon ,n}\in \mathscr{B}$ such that \[ m(\partial B_{\varepsilon ,n}) =0 \quad \text{and} \quad P(x_n, A \Delta B_{\varepsilon ,n}) < \frac{\varepsilon }{2^n}. \] Set $$C_\varepsilon=\bigcup _{n\geq 1}B_{\varepsilon ,n} \quad \text{and} \quad D_\varepsilon=\bigcap _{n\geq 1}B_{\varepsilon ,n}.$$ \begin{claim} \label{claim-Yushi} It holds that $$ P(x, C_\varepsilon) -2\varepsilon \leq \liminf _{n\to\infty}P(x_n, A) \leq \limsup _{n\to\infty} P(x_n, A) \leq P(x, C_\varepsilon).
$$ \end{claim} \begin{proof} Since $$m(\partial (C_\varepsilon \setminus D_\varepsilon)) \leq m(\partial C_\varepsilon \cup \partial D_\varepsilon ) \leq 2 \sum _{n=1}^\infty m(\partial B_{\varepsilon ,n}) =0,$$ it follows from \eqref{eq:0204c} that \begin{align*} \limsup _{n\to\infty} P(x_n, C_\varepsilon \setminus B_{\varepsilon ,n}) &\leq \limsup _{n\to\infty} P(x_n, C_\varepsilon\setminus D_\varepsilon) =P(x, C_\varepsilon\setminus D_\varepsilon) \\ &\leq P(x, (A\Delta C_\varepsilon) \cup (A \Delta D_\varepsilon )) \leq 2\sum _{n=1}^\infty \frac{\varepsilon}{2^n} = 2\varepsilon . \end{align*} Therefore, since \begin{align*} P(x_n, C_\varepsilon) &\leq P(x_n, C_\varepsilon\setminus B_{\varepsilon ,n}) + P(x_n, A\cup B_{\varepsilon ,n}) \\ &\leq P(x_n, C_\varepsilon\setminus B_{\varepsilon ,n}) + P(x_n, A\Delta B_{\varepsilon ,n}) + P(x_n, A), \end{align*} by applying \eqref{eq:0204c} again (note that $m(\partial C_\varepsilon) \leq \sum _{n=1}^\infty m(\partial B_{\varepsilon ,n}) =0$) we have \[ \liminf _{n\to\infty}P(x_n, A) \geq \liminf _{n\to\infty}P(x_n, C_\varepsilon) -2\varepsilon -\lim_{n\to\infty} \frac{\varepsilon}{2^n} = P(x, C_\varepsilon) -2\varepsilon . \] On the other hand, since $$P(x_n, A) \leq P(x_n, B_{\varepsilon ,n}) + P(x_n, A\Delta B_{\varepsilon ,n}) \leq P(x_n, C_\varepsilon) + \frac{\varepsilon}{2^n},$$ we have \begin{align*} \limsup _{n\to\infty} P(x_n, A) \leq P(x, C_\varepsilon), \end{align*} concluding the proof of the claim. \end{proof} Finally, observe that \[ \vert P(x,C_\varepsilon ) - P(x,A)\vert \leq P(x,A\Delta C_\varepsilon) \leq \sum _{n=1}^\infty \frac{\varepsilon }{2^n} = \varepsilon . \] Thus, $P(x,C_\varepsilon) \to P(x,A)$ as $\varepsilon\to 0$. Hence, by Claim~\ref{claim-Yushi}, we get that \[ \lim _{n\to\infty} P(x_n, A) = \lim _{\varepsilon\to 0} P(x, C_\varepsilon ) = P(x, A). \] This concludes the proof of the theorem. \end{proof} \begin{rem} A converse of Theorem~\ref{thm:book-generalized} follows from~\cite[Proposition~12.1.7]{douc2018markov}. Also, from~\cite[Remark~2]{ito1964invariant}, by arguing as in the implication of (b) from (a) in Proposition~\ref{prop:ultra-Feller+B}, we immediately get the following more specific converse: if $P(x,A)$ is a strong Feller continuous Markov transition probability on a Polish probability space $(X,\mathscr{B},m)$, then there exists a probability measure $m^*$ on $(X,\mathscr{B})$ such that $m$ is absolutely continuous with respect to $m^*$ and $P(x,A)$ satisfies (1) and (2) with $m^*$ instead of~$m$. \end{rem} \begin{cor} \label{cor:rem} Let $(X,\mathscr{B},m)$ be a compact Polish probability space. Consider a continuous random map \mbox{$f:T\times X\to X$} where $(T,\mathscr{A},p)$ is a probability space and let $P^n(x,A)$ be the associated $n$-th transition probability. Assume that there is $n_0\geq 1$ such that $$P^{n_0}(x,\cdot) \ \ \text{is absolutely continuous with respect to $m$ for all $x\in X$}.$$ Then, for each $x\in X$, there exists $p(x,\cdot)\in D(m)$ such that $$P^{2n_0}(x,dy)=p(x,y) \, dm(y) \quad \text{and} \quad x \in X\mapsto p(x,\cdot)\in L^1(m) \ \ \text{is continuous}.$$ Moreover, the annealed Perron--Frobenius operator $\mathcal{L}_f$ associated with $f$ is eventually compact. \end{cor} \begin{proof} According to Proposition~\ref{prop:sinprueba}, $P^{n_0}(x,A)$ is a Feller continuous Markov transition probability. Hence, by Theorem~\ref{thm:book-generalized}, we have that $P^{n_0}(x,A)$ is actually strong Feller continuous.
Therefore, Remark~\ref{rem1}, Proposition~\ref{prop:ultra-Feller+B} and Theorem~\ref{thm:B-Markov} immediately imply the conclusion of the corollary. \end{proof} \section{Existence of invariant measures: proof of Theorem~\ref{GST}} \label{sec:GST} In what follows, $(X,\mathscr{B},m)$ denotes any abstract probability space. {We will prove in this section a generalization of Theorem~\ref{GST} which will be used to prove Theorem~\ref{thm:D} in the next section. But first, to prove these results} we will need some preliminaries that may be of interest in their own right. \subsection{Fundamental lemma} If $\phi$ belongs to $L^1(m)$, then the support of $\phi$, denoted by $\operatorname*{supp} \phi$, is not defined in a unique way, since $\phi$ can be represented by two functions whose values differ on a set of zero $m$-measure. However, as is common, we simplify terminology and regard $\operatorname*{supp} \phi$ as the set defined by the relation $\phi\not =0$. Moreover, if we want to emphasize that a relationship between sets holds except on a set of zero $m$-measure, we say that it holds \emph{up to an $m$-null set}. For instance, $A\subseteq B$ up to an $m$-null set if $m(A\setminus B)=0$. \begin{lem}\label{supp} Let $E$ be either $L^1(m)$ or $L^\infty(m)$. Let $P:E\to E$ be a positive linear bounded operator and consider $\phi, \psi \in E$ with $\psi\geq 0$. It holds that \begin{enumerate} \item $ \operatorname*{supp} P\phi \subset \operatorname*{supp} P1_{\operatorname*{supp} \phi }$ and $\operatorname*{supp} P\psi=\operatorname*{supp} P1_{\operatorname*{supp} \psi }$, \item if $\operatorname*{supp} \phi \subset \operatorname*{supp} \psi$, then $\operatorname*{supp} P\phi\subset \operatorname*{supp} P\psi$. \end{enumerate} \end{lem} \begin{proof} It is easy to see that (1) implies (2). Indeed, if $\operatorname*{supp} \phi \subset \operatorname*{supp} \psi$, we have that $1_{\operatorname*{supp} \phi } \leq 1_{\operatorname*{supp} \psi}$ and, since $P$ is a positive linear operator, $P1_{\operatorname*{supp} \phi } \leq P 1_{\operatorname*{supp} \psi}$. This implies that $\operatorname*{supp} P1_{\operatorname*{supp} \phi}\subset \operatorname*{supp} P1_{\operatorname*{supp} \psi}$. Now, (1) immediately implies the conclusion of~(2). But, actually, if (2) holds for any pair $\phi$ and $\psi$ in the assumptions of the lemma, we also get (1) as follows. Since $\operatorname*{supp} \phi = \operatorname*{supp} 1_{\operatorname*{supp} \phi}$, by (2) we get that $\operatorname*{supp} P\phi\subset \operatorname*{supp} P1_{\operatorname*{supp} \phi}$. The same argument gives the corresponding inclusion for $\psi$. But, in this case, taking into account that $\psi \geq 0$, we can also apply (2) to $\operatorname*{supp} 1_{\operatorname*{supp} \psi} = \operatorname*{supp} \psi$, getting the other inclusion and proving (1). In view of the above observation, the proof of the lemma is reduced to showing~(2). To do this, let $\phi$ and $\psi$ be as in the statement. By the approximation by simple functions, $\phi$ and $\psi$ can be written as the pointwise limits of sequences of simple functions $$ \phi_n=\sum_{k=1}^{N}a_k 1_{F_k} \quad \text{and} \quad \psi_n=\sum_{k=1}^{M}b_k 1_{G_k}, $$ respectively, where $N=N(n), M=M(n)\in \mathbb{N}$, $a_k=a_k(n) \not=0$, $b_k=b_k(n)>0$ and $F_k=F_k(n),G_k=G_k(n) \in \mathscr{B}$ are such that $F_\ell \cap F_j =\emptyset$ and $G_\ell\cap G_j = \emptyset$ if $\ell\not =j$.
Moreover, $\phi_n$ and $\psi_n$ can be chosen\footnote{For a non-negative measurable function $g$, we consider the mesh with width $2^{-n}$ from the level $2^{-n}$ up to $2^n$, i.e., $$g_n = \sum_{k=0}^{2^{2n}-1} (k+1)2^{-n}1_{g^{-1}(k2^{-n},\,(k+1)2^{-n}]}+2^n1_{g^{-1}(2^n,\,\infty)}.$$ Then we can see that $g_n$ pointwise converges to $g$ since $0\leq g_n(x)-g(x)<2^{-n}$ on the set where $g\leq 2^n$. Moreover, $\operatorname*{supp} g=\operatorname*{supp} g_n$. For a general $g$, we consider $g=g^+ - g^-$ where $g^+$ and $g^-$ are the positive and negative parts of $g$, which we approximate by simple functions $g^+_n$ and $g_n^-$ as before. Finally, it is simple to verify that $g_n=g^+_n-g^-_n$ converges to $g$ in the $L^1(m)$- or $L^{\infty}(m)$-norm, as appropriate, and $\operatorname*{supp} g_n = \operatorname*{supp} g$.} so that \begin{enumerate}[label=(\roman*)] \item $\operatorname*{supp} \phi_n = \operatorname*{supp} \phi \subset \operatorname*{supp} \psi = \operatorname*{supp} \psi_n$, and \item $\phi_n$ and $\psi_n$ converge in the norm of $E$ to $\phi$ and $\psi$, respectively. \end{enumerate} Since $P$ is a bounded operator, (ii) implies that $P\phi_n \to P\phi$ and $P\psi_n \to P\psi$. Then \begin{enumerate}[resume, label=(\roman*)] \item $m((\operatorname*{supp} P\phi_n) \Delta (\operatorname*{supp} P\phi))\to 0$ and $m((\operatorname*{supp} P\psi_n) \Delta (\operatorname*{supp} P\psi))\to 0$ as $n\to\infty$. \end{enumerate} Note that, by (i), $\phi_n$ and $\psi_n$ satisfy the assumption in (2). It is easy to see that if~(2) holds for $\phi_n$ and $\psi_n$, then (iii) implies that $\operatorname*{supp} P\phi \subset \operatorname*{supp} P\psi$ up to an $m$-null set. Hence, the proof reduces to checking the conclusion of (2) for $\phi_n$ and $\psi_n$. Taking into account that $a_k \not = 0$, $P1_{F_k} \geq 0$ and the linearity of $P$, we see that \begin{equation}\label{eq:lem1} \begin{aligned} \operatorname*{supp} P\phi_n=\operatorname*{supp} \sum_{k=1}^N a_k P1_{F_k} &\subset \bigcup_{k=1}^N \operatorname*{supp} P1_{F_k} \\ &= \operatorname*{supp} \sum_{k=1}^N P1_{F_k}=\operatorname*{supp} P1_{\cup_{k=1}^N F_k} = \operatorname*{supp} P1_{\operatorname*{supp} \phi_n}. \end{aligned} \end{equation} Now, using that $b_k> 0$, $P1_{G_k} \geq 0$ and the linearity of $P$, \begin{equation}\label{eq:lem3} \operatorname*{supp} P1_{\operatorname*{supp} \psi_n}=\operatorname*{supp} P1_{\cup_{k=1}^M G_k}=\operatorname*{supp} \sum_{k=1}^MP1_{G_k} = \operatorname*{supp} \sum_{k=1}^M b_k P1_{G_k} = \operatorname*{supp} P\psi_n. \end{equation} Observe that~\eqref{eq:lem1} and~\eqref{eq:lem3} imply (1) for $\phi_n$ and $\psi_n$. As we saw at the beginning of the proof, (1) implies (2), and we conclude the lemma. \end{proof} \subsection{Restrictions of Markov operators} \label{sec:Markov-restriction} Fix $S\in \mathscr{B}$ with $m(S)>0$. Let us consider the probability space $(S,\mathscr{B}_S,m_S)$ where $$ \mathscr{B}_S=\{A\in \mathscr{B}: A\subset S\} \quad \text{and} \quad m_S(A)=\frac{m(A)}{m(S)}, \ \ A\in \mathscr{B}_S \ \ \ \left(\text{i.e.,} \ dm_S=\frac{1_S}{m(S)}\, dm\right). $$ Denote $L^1(S,\mathscr{B}_S,m_S)$ by $L^1(m_S)$.
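For instance, if $X=[0,1]$, $m$ is the Lebesgue measure and $S=[0,\frac{1}{2}]$, then $\mathscr{B}_S$ consists of the Borel subsets of $[0,\frac{1}{2}]$ and $m_S$ is twice the restriction of $m$ to $S$, which makes $(S,\mathscr{B}_S,m_S)$ a probability space again.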
Observe that $L^1(m_S) \hookrightarrow L^1(m)$ by means of the inclusion \begin{equation}\label{eq:def-mS} 1_S: \phi\in L^1(m_S) \mapsto 1_S\phi \in L^1(m) \quad \text{given by} \quad 1_S\phi(x)= \begin{cases} \phi(x) & x\in S, \\ 0 & x\in X\setminus S.\end{cases} \end{equation} Although we use the same notation $1_S$ to designate both the characteristic function of $S$ and the inclusion induced by it, this should cause no confusion. Now, given a Markov operator \mbox{$P:L^1(m)\to L^1(m)$,} we define an operator \begin{equation}\label{def:P_S} P_S: L^1(m_S) \to L^1(m_S), \qquad P_S\phi = 1_S\cdot P(1_S\phi) \quad \text{for} \ \phi\in L^1(m_S). \end{equation} It is not difficult to see that $P_S$ is a positive contraction of $L^1(m_S)$, that is, $P_S\phi\geq 0$ if $\phi\geq 0$ and $\|P_S\|\leq 1$. Taking advantage of the abuse of notation, we can extend $P_S$ to $L^1(m)$ as follows: $$ P_S\phi = 1_S \cdot P(1_S \cdot \phi) \quad \text{for} \ \phi\in L^1(m). $$ When no confusion can arise, we will omit the dot in the above expression and in~\eqref{def:P_S}. We also identify $L^1(m_S)$ with $$1_S(L^1(m_S))=\{\phi \in L^1(m): \operatorname*{supp} \phi \subset S\} \subset L^1(m).$$ \begin{lem} \label{lem:P_S-Markov} Let $P:L^1(m)\to L^1(m)$ be a Markov operator and consider $S\in\mathscr{B}$ with $m(S)>0$. If $\operatorname*{supp} P1_S \subset S$ up to an $m$-null set, then $P_S:L^1(m_S) \to L^1(m_S)$ defined in~\eqref{def:P_S} is a Markov operator and $$P_S\phi=P(1_S\phi) \quad \text{for all $\phi\in L^1(m)$}.$$ \end{lem} \begin{proof} Let $\phi \in L^1(m)$. Note that $\operatorname*{supp} 1_S\phi \subset S = \operatorname*{supp} 1_S$. Thus, by Lemma~\ref{supp}, $\operatorname*{supp} P(1_S\phi) \subset \operatorname*{supp} P1_S \subset S$ up to an $m$-null set. In particular, $1_S P(1_S\phi) = P(1_S\phi)$. Then, for any $\phi \in L^1(m)$, $$ \int P_S\phi \, dm_S = \frac{1}{m(S)} \int_S P(1_S\phi) \, dm = \frac{1}{m(S)}\int P(1_S\phi) \, dm =\frac{1}{m(S)} \int_S \phi \, dm =\int \phi \, dm_S. $$ Therefore, $P_S$ is a Markov operator. \end{proof} The implication in the previous lemma is actually an equivalence. For this observation and other equivalent conditions, see Proposition~\ref{prop:P_h-Markov} in Appendix~\ref{sec:Markov-restriction2}, where the theory of the restriction of a Markov operator is extended and clarified. For the following result, recall the notion of weak almost periodicity {\tt(WAP)} in Definition~\ref{wap}. As mentioned, according to~\cite[Theorem~3.1]{Toyokawa2020}, {\tt(WAP)} is equivalent to the existence of an invariant density with the maximal support. \begin{prop} \label{prop:wap} Let $P:L^1(m)\to L^1(m)$ be a Markov operator and consider $h\in D(m)$ such that $Ph=h$ and set $S=\operatorname*{supp} h$. Then, $P_S:L^1(m_S) \to L^1(m_S)$ is a weakly almost periodic Markov operator. Moreover, $P_Sh=h$ and $P_S\phi = P(1_S\phi)$ for all $\phi \in L^1(m)$. \end{prop} \begin{proof} Note that $\operatorname*{supp} h = S = \operatorname*{supp} 1_S$ and, by Lemma~\ref{supp} and the $P$-invariance of $h$, it follows that $S=\operatorname*{supp} h = \operatorname*{supp} Ph = \operatorname*{supp} P1_S$ up to an $m$-null set, as well as $m(S)>0$. Hence, Lemma~\ref{lem:P_S-Markov} implies that $P_S$ is a Markov operator and $P_S \phi = P(1_S\phi)$ for all $\phi \in L^1(m)$.
To prove that $P_S$ is also weakly almost periodic, it suffices to see that $P_Sh=h$, since $P_S^*1_S =1_S$ by the Markov property and thus $h$ trivially has the maximal support (as a fixed point of $P_S$). But this is proved as follows: $ P_S h = P(1_S h) = Ph =h. $ \end{proof} \subsection{Proof of Theorem~\ref{GST}} In this subsection, we prove a more general version of Theorem \ref{GST}. In particular, we remark that the new item \eqref{item5} below was shown in~\cite{socala1988existence} to be a sufficient condition for the existence of a $P$-invariant density, and below we show that it is indeed also a necessary condition. \begin{thm} \label{cor:E} Let $(X,\mathscr{B},m)$ be an abstract probability space and consider a Markov operator $P: L^1(m) \to L^1(m)$. Then, the following conditions are equivalent: \begin{enumerate} \item \label{item1} There exists an invariant density for $P$; \item \label{item2} There exist $\alpha\in(0,1)$ and $\delta>0$ such that \[ \sup_{n\ge0}\int_AP^n1_X \, dm<\alpha \quad \text{for any $A\in\mathscr{B}$ with $m(A)<\delta$}; \] \item \label{item3} There exist $\alpha\in(0,1)$ and $\delta>0$ such that \[ \sup_{n\ge1}\int_AA_n1_X \, dm<\alpha \quad \text{for any $A\in\mathscr{B}$ with $m(A)<\delta$}; \] \item \label{item4} There exist $\alpha\in(0,1)$ and $\delta\geq 0$ such that $$ \inf_{n\ge1}\int_A A_n1_X \, dm>1-\alpha \quad \text{for any $A\in\mathscr{B}$ with $m(A)>\delta$}; $$ \item \label{item5} There exist $\alpha\in(0,1)$ and $\delta>0$ such that \[ \limsup_{n\to \infty} \int_AP^n1_X \, dm<\alpha \quad \text{for any $A\in\mathscr{B}$ with $m(A)<\delta$}; \] \item \label{item7} There exist $\alpha\in(0,1)$ and $\delta>0$ such that \[ \limsup_{n\to \infty} \int_A A_n1_X \, dm<\alpha \quad \text{for any $A\in\mathscr{B}$ with $m(A)<\delta$}. \] \end{enumerate} The function $1_X$ in (5) and (6) can be replaced by an arbitrary density $h\in D(m)$. Moreover, the equivalence also holds by taking $\alpha=1$ in all the above items. \end{thm} \begin{proof} We first prove that~(1) implies~(2). Let $g\in D(m)$ be an invariant density of $P$, that is, $Pg=g$. Hence, Proposition~\ref{prop:wap} implies that $P_S:L^1(m_S) \to L^1(m_S)$ is a weakly almost periodic Markov operator, where $S=\operatorname*{supp} g$. According to~\cite[Theorem~3.1]{Toyokawa2020}, this is equivalent to the weak compactness of $\{P_S^n1_S\}_{n\geq 0}$ in $L^1(m_S)$. Note that clearly $m(S)>0$ since $g$ is a density function. Thus, $m(X\setminus S) <1$ and hence we can take $\varepsilon>0$ small enough such that $\alpha\coloneqq\varepsilon \cdot m(S) +m(X\setminus S) <1$. Then, by the Dunford--Pettis characterization of weak compactness, there is $\delta>0$ such that \begin{equation} \label{eq:DP} \sup_{n\geq 0} \int_{A}P_S^n1_S \, dm_S <\varepsilon \quad \text{for any $A\in \mathscr{B}_S$ with $m_S(A)<\delta$}. \end{equation} Let $\delta'=\delta \cdot m(S)>0$. Observe that, according to Proposition~\ref{prop:wap}, $P_S 1_S = P1_S$ and then $P^n_S 1_S = P^n 1_S$ and $\operatorname*{supp} P^n1_S \subset S$ up to an $m$-null set for all $n\geq 0$.
Finally, for any $A\in \mathscr{B}$ with $m(A)<\delta'$ it holds that $m_S(A\cap S) <\delta$ and then \begin{align*} \sup_{n\ge 0}\int_A P^n1_X \, dm &= \sup_{n\ge0} \left(\int_A P^n 1_S \, dm + \int_A P^n1_{X\setminus S} \, dm\right) \\ &=\sup_{n\ge0} \left( m(S) \cdot \int_{A\cap S} P^n_S 1_S \, dm_S + \int_A P^n1_{X\setminus S} \, dm\right)\\ &< m(S) \cdot \varepsilon + \sup_{n\ge 0}\int_{X\setminus S} P^{n*}1_X \, dm = \varepsilon\cdot m(S) +m(X\setminus S)=\alpha. \end{align*} The implication from condition~(2) to~(3) follows immediately, taking into account that $$ \int_A A_n1_X \, dm = \frac{1}{n} \sum_{i=0}^{n-1} \int_A P^i1_X \, dm < \frac{1}{n} \sum_{i=0}^{n-1} \alpha =\alpha $$ for all $n\geq 1$ and $A\in\mathscr{B}$ with $m(A)<\delta$. {We show the implication from~(3) to~(4). Observe that, since $P$ is a Markov operator, $P^*1_X=1_X$ and thus $$\int A_n 1_X \, dm = \int A^*_n 1_X \, dm = \frac{1}{n}\sum_{k=0}^{n-1} \int P^{k*}1_X \,dm = 1.$$ From this and assumption~(3), we have \begin{align*} \inf_{n\geq 1}\int_{X\setminus E}A_n1_X \, dm = \inf_{n\geq 1} \int A_n 1_X \, dm - \sup_{n\geq 1} \int_E A_n1_X \, dm > 1 -\alpha \end{align*} for all $E\in\mathscr{B}$ with $m(E)<\delta$. Taking the $\delta$ in~(4) to be $1-\delta$ with $\delta$ from~(3), we get the desired implication. } Next we show the implication from~(4) to~(1). Suppose, on the contrary, that there is no invariant density for $P$. Then, according to~\cite[Theorem~4.6 and Lemma~4.5]{Krengel}, there exists $\psi\in L^\infty(m)$ with $\psi\geq 0$, fully supported on $X$ (i.e., $\operatorname*{supp} \psi =X$) and $\|A^*_n\psi\|_{\infty} \to 0$ as $n\to \infty$. Now, since $\psi$ is fully supported on $X$, for $\delta\geq 0$ as in condition~(4), we can find $\eta>0$ such that the set $E=\{\psi\geq \eta\}$ satisfies $m(E)>\delta$. Since $\psi\ge\eta$ on $E$, $ \eta^{-1} \psi \geq 1_E$. Hence, \begin{align*} \int_{E} A_n1_X \, dm &=\int 1_{E} A_n 1_{X} \, dm \leq \int \frac{\psi}{\eta} \, A_n 1_{X} \, dm =\frac{1}{\eta}\int A_n^*\psi \, dm \le\frac{1}{\eta}\left\lVert A_n^*\psi \right\rVert_{\infty} \to 0 \end{align*} as $n\to\infty$. This contradicts~(4) and the proof is done. Finally, observe that the previous arguments work assuming $\alpha=1$ in all of the items. To complete the proof, we now prove the equivalence with~(5) and~(6). Clearly (2) implies (5). Also, (5) immediately implies (6). On the other hand, from~\cite[Theorem~1]{socala1988existence}, we have that~(5) implies~(1). Moreover, a slight modification of the argument of~\cite[Theorem~1]{socala1988existence} also shows that~(6) implies~(1). Indeed, assume (6) instead of (5) (which is called (C) in \cite{socala1988existence} and where $1_X$ can be substituted by any density in $D(m)$). Define \[ \lambda(\varphi )= \mathrm{Lim} \, \int A_n1_X \, \varphi\, dm \quad \text{for $\varphi\in L^\infty(m)$} \] where $\mathrm{Lim}$ is a Banach limit (i.e.~replace $P^n$ in the definition of $\lambda (h)$ of \cite{socala1988existence} with $A_n$). The rest of the proof in \cite[Theorem~1]{socala1988existence} works literally to conclude~(1) in our case, except for proving that $\lambda(P^*\varphi)=\lambda(\varphi)$ for each $\varphi \in L^\infty (m)$, which is the only computation that needs to be carried out.
To see this, \begin{align*} \lambda(P^*\varphi) &= \mathrm{Lim} \, \int A_n1_X\, P^*\varphi\,dm = \mathrm{Lim} \, \int \frac{1}{n}\sum_{i=0}^{n-1} P^{i+1}1_X \, \varphi \, dm\\ & =\mathrm{Lim} \, \int \left( A_{n} 1_X + \frac{1}{n}P^n1_X - \frac{1}{n} 1_X\right) \, \varphi \, dm = \lambda (\varphi) + \mathrm{Lim} \, \frac{1}{n} \int ( P^n1_X - 1_X)\, \varphi \, dm. \end{align*} In the last equality we used the linearity of Banach limits. On the other hand, \[ \left\vert \int ( P^n1_X - 1_X)\, \varphi \, dm \right\vert \leq \Vert \varphi\Vert _{\infty} \ \int (P^n1_X + 1_X )\, dm = 2\Vert \varphi\Vert _{\infty} \] since $P$ is a Markov operator. Hence we get $\frac{1}{n} \int ( P^n1_X - 1_X)\, \varphi \, dm \to 0$ as $n\to \infty$, and obtain the desired equality $\lambda(P^*\varphi)=\lambda(\varphi)$. The version for $\alpha=1$ follows since, actually, the assumption in \cite[Theorem~1]{socala1988existence} is (5) for $\alpha=1$, where $1_X$ can be substituted by any density in $D(m)$. This completes the proof. \end{proof} The implications between~(1)--(4) in Theorem~\ref{cor:E} rely strongly on the fact that $P$ is a Markov operator. However, the equivalence between (1) and (4) for $\alpha=1$ holds even under the weaker assumption that the linear positive operator $P$ is just a contraction (i.e., $\|P\|_{\rm op}\leq 1$). As the final result of this section, we prove this as follows. Compare with~\cite[Theorem~4.2 and Corollary~4.7 in Chapter~3]{Krengel} {and~\cite[Theorem~2]{neveu1967existence}.} \begin{thm} If $P:L^1(m)\to L^1(m)$ is a linear positive contraction, then the following are equivalent: \begin{enumerate}[label=(\roman*)] \item \label{item11} there exists a $P$-invariant density; \item \label{item22} there exists $\delta\geq 0$ such that $$\inf_{n\geq 0} \int_E P^{n}1_X \, dm >0 \quad \text{for all $E\in\mathscr{B}$ with $m(E)>\delta$;}$$ \item \label{item33} there exists $\delta\geq 0$ such that $$\inf_{n\geq 1} \int_E A_{n}1_X \, dm >0 \quad \text{for all $E\in\mathscr{B}$ with $m(E)>\delta$.}$$ \end{enumerate} \end{thm} \begin{proof} Observe that the proof of the implication of (1) from (4) in Theorem~\ref{cor:E} does not require that $\int P\varphi \, dm =\int \varphi \, dm$ for all $\varphi\in L^1(m)$. This shows that~(iii) implies~(i). The equivalence between~(i) and~(ii) follows from~\cite[Theorem~4.2]{Krengel} as follows. This result says that if $h$ is a $P$-invariant density, then $$ \inf_{n\geq 0} \int_{\tilde{E}} P^n 1_X \, dm >0 \quad \text{for all $\tilde{E}\subset \operatorname*{supp} h$ with $m(\tilde{E})>0$}. $$ Taking $\delta=1-m(\operatorname*{supp} h)\geq 0$, we have that if $E\subset X$ with $m(E)>\delta$, then $m(\tilde{E})>0$ where $\tilde{E}=E\cap \operatorname*{supp} h$. Hence, from the above inequality it holds that $$ \inf_{n\geq 0} \int_{E} P^n1_X \, dm \geq \inf_{n\geq 0} \int_{\tilde{E}} P^n1_X \, dm >0. $$ Finally, since~(ii) implies (iii) immediately, we conclude the cycle of implications and thus the proof of the theorem. \end{proof} \section{Mean constrictivity: proof of Theorem~\ref{thm:D}}\label{s:thmD} In the sequel, we will prove the following result, which is a more general version of Theorem~\ref{thm:D}. \begin{thm}\label{thmeq} Let $(X,\mathscr{B},m)$ be an abstract probability space and consider a Markov operator $P: L^1(m) \to L^1(m)$.
Then the following conditions are equivalent: \begin{enumerate}[itemsep=0.2cm] \item[\tt (MC)]\label{MC} $P$ is \emph{mean constrictive}; \item[\tt (MC2)]\label{MC2} There is a compact set $F\subset L^1(m)$ and $\kappa<1$ such that \begin{align*} \limsup_{n\to\infty} d(A_n \varphi,F) \leq \kappa \quad \text{for any $\varphi\in D(m)$}; \end{align*} \item[\tt (WMC)]\label{WMC} $P$ is \emph{weakly mean constrictive}, i.e., there is a weakly compact set $F\subset L^1(m)$~such~that \begin{align*} \lim_{n\to\infty} d(A_n \varphi,F)=0 \quad \text{for any $\varphi\in D(m)$}; \end{align*} \item[\tt (WMC2)]\label{WMC2} There is a weakly compact set $F\subset L^1(m)$ and $\kappa<1$ such that \begin{align*} \limsup_{n\to\infty} d(A_n \varphi,F) \leq \kappa \quad \text{for any $\varphi\in D(m)$}; \end{align*} \item[\tt (MCDP)]\label{MC3} $P$ is \emph{mean constrictive \`a la Dunford--Pettis}, i.e., for every $\varepsilon>0$, there exists $\delta>0$ such that for any $\varphi\in D(m)$, there is $n_0=n_0(\varepsilon,\varphi)\geq 1$ satisfying that \begin{align*} \int_E A_n \varphi\,dm <\varepsilon \quad \text{for any $n\geq n_0$ and $E\in\mathscr{B}$ with $m(E)<\delta$}; \end{align*} \item[\tt (MCDP2)]\label{MC4} There exist $\kappa<1$ and $\delta>0$ such that for any $\varphi\in D(m)$, there is $n_0=n_0(\varphi)\geq 1$ satisfying that \begin{align*} \int_E A_n \varphi\,dm \leq\kappa \quad \text{for any $n\geq n_0$ and $E\in\mathscr{B}$ with $m(E)<\delta$}; \end{align*} \item[\tt (AMC)]\label{AMC} $P$ is \emph{asymptotically mean constrictive}, i.e., for every $\varepsilon>0$, there is $\delta>0$~such~that \begin{align*} \limsup_{n\to\infty} \int_E A_n \varphi\,dm <\varepsilon \quad \text{for any $\varphi \in D(m)$ and $E \in \mathscr{B}$ with~$m(E)<\delta$}; \end{align*} \item[\tt (AMC2)]\label{AMC2} There exist $\kappa<1$ and $\delta>0$ such that \begin{align*} \limsup_{n\to\infty} \int_E A_n \varphi\,dm <\kappa \quad \text{for any $\varphi \in D(m)$ and $E \in \mathscr{B}$ with~$m(E)<\delta$}; \end{align*} \item[\tt (FED)]\label{FPM} $P$ admits finitely many ergodic invariant densities $h_1,\dots,h_r$ with mutually disjoint supports (up to an $m$-null set) and the invariant density $h=\frac{1}{r}(h_1+\dots+h_r)$ has the maximal support; \item[\tt (APM)]\label{AD} $P$ is \emph{asymptotically periodic in mean}, i.e., there exist finitely many ergodic invariant densities $h_1,\dots,h_r$ with mutually disjoint supports (up to an $m$-null set) and positive bounded linear functionals $\lambda_1, \dots, \lambda_r$ on $L^1(m)$ such that \begin{align*} \lim_{n\to\infty} \left\lVert A_n\varphi-\sum_{i=1}^r\lambda_i(\varphi)h_i\right\rVert =0 \quad \text{for any $\varphi\in L^1(m)$}. \end{align*} \end{enumerate} \end{thm} \begin{figure} $$ \xymatrix{ \tt (MCDP) \ar@{=>}[rd]_{trivial} \ar@{=>}[dd]_{trivial} & & & & & & & \ar@{=>}[lllllll]_{Lemma~\tiny\ref{lemC1}} \tt (WMC) \\ & \tt (AMC) \ar@{=>}[d]_{trivial} & & & & & & \\ \tt (MCDP2) \ar@{=>}[r]_{trivial} & \tt (AMC2) \ar@{=>}[rr]^{Theorem~\tiny\ref{propC2}} & & \tt (FED) \ar@{=>}[rr]_{Lemma~\tiny\ref{lemC3}} & & \tt (APM) \ar@{=>}[rr]^{Lemma~\tiny\ref{lemC4}} & & \ar@{=>}[uu]_{trivial} \tt (MC) \ar@{=>}[d]^{trivial} \\ \tt (WMC2) \ar@{=>}[u]^{Lemma~\tiny\ref{lemC1}} & & & & & & & \tt (MC2) \ar@{=>}[lllllll]^{trivial} } $$ \caption{Diagram of the relations between the conditions of Theorem~\ref{thmeq}.} \label{fig2} \end{figure} We will prove Theorem~\ref{thmeq} following the implications described in Figure~\ref{fig2}.
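Before turning to the proofs, we illustrate the statement with an elementary example, included here only for orientation.

\begin{example}
Let $P\varphi = \big(\int \varphi\, dm\big)\, 1_X$ for $\varphi\in L^1(m)$. Then $P$ is a Markov operator with $P^n\varphi=\big(\int \varphi\, dm\big)\, 1_X$ for every $n\geq 1$, so that $A_n\varphi \to \big(\int \varphi\, dm\big)\, 1_X$ in $L^1(m)$-norm. Hence $P$ satisfies {\tt(APM)} with $r=1$, $h_1=1_X$ and $\lambda_1(\varphi)=\int \varphi\, dm$, and therefore all the conditions of the theorem; for instance, {\tt(MC)} holds with the compact constrictor $F=\{1_X\}$.
\end{example}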
To organize the argument, we divide the proof into two subsections. \subsection{Proof of the lemmas} \begin{lem}\label{lemC1} Condition {\tt(WMC)} (resp.~{\tt(WMC2)}) implies condition {\tt (MCDP)} (resp.~{\tt(MCDP2)}). \end{lem} \begin{proof} Let $F$ be as in the condition {\tt(WMC)}. Fix $\varepsilon>0$. The condition {\tt(WMC)} guarantees that for any $\varphi\in D(m)$ there exists $n_0=n_0(\varepsilon,\varphi)\geq 1$ such that \begin{align*} \inf_{\phi\in F}\left\lVert A_n \varphi-\phi\right\rVert <\frac{\varepsilon}{2} \quad \text{for any $n\geq n_0$}. \end{align*} Since $F$ is weakly compact, there exists $\delta=\delta(\varepsilon)>0$ such that for any $\phi\in F$ and $E\in \mathscr{B}$ with $m(E)<\delta$, we have $\int_E | \phi| \, dm<\frac{\varepsilon}{2}$. Thus, for each $n\geq n_0$ we may choose $\phi_n\in F$ with $\lVert A_n\varphi-\phi_n\rVert<\frac{\varepsilon}{2}$, and then, for any set $E$ with $m(E)<\delta$, \begin{align*} \int_EA_n \varphi\,dm&\le\left\lVert A_n \varphi-\phi_n\right\rVert+\int_E | \phi_n| \, dm <\varepsilon \quad \text{for any $n\ge n_0$.} \end{align*} This proves {\tt (MCDP)}. The implication from {\tt(WMC2)} to {\tt(MCDP2)} follows analogously; namely, we take $\kappa$ in {\tt(MCDP2)} to be the sum of the $\kappa$ in {\tt(WMC2)} and $\varepsilon=\frac{1-\kappa}{2}$, with $\delta$ provided by the weak compactness~of~$F$. \end{proof} \begin{lem}\label{lemC3} The condition {\tt(FED)} implies the condition {\tt(APM)}. \end{lem} \begin{proof} Suppose the condition {\tt(FED)}. In particular, the existence of an invariant density with the maximal support implies that $P$ is {\tt(WAP)}, cf.~\cite[Theorem~3.1]{Toyokawa2020}. Yosida and Kakutani proved in~\cite{yosida1941operator} that {\tt(WAP)} implies mean ergodicity.\footnote{The converse of this implication was recently proved in~\cite[Proposition~3.9]{Toyokawa2020}.} Recall that a Markov operator $P:L^1(m)\to L^1(m)$ is called mean ergodic if $A_n\varphi$ converges in $L^1(m)$-norm for any $\varphi\in L^1(m)$. For each density $h\in D(m)$, denote by $h^*$ the limit in $L^1(m)$-norm of $A_nh$. Observe that $h^*$ is an invariant density of $P$. Hence, from the finitude of the ergodic invariant densities $h_1,\dots,h_r$, or equivalently, of the extremal points of the space of invariant densities~(see Proposition~\ref{prop:ergodic} in Appendix~\ref{appendix:B}), we can find $\lambda_1(h), \dots, \lambda_r(h)$ with $\lambda_i(h)\ge 0$ and $\lambda_1(h)+\dots+\lambda_r(h)=1$ such that $h^*=\lambda_1(h) h_1 + \dots + \lambda_r(h) h_r$. This gives the equation in {\tt (APM)} for any $h\in D(m)$ and, by rescaling, also for any $\varphi\in L^1(m)$ with $\varphi\geq 0$. Then, considering the positive part and negative part of any function in $L^1(m)$, we can extend the above argument to $L^1(m)$. {Clearly,} each $\lambda_i:L^1(m)\to\mathbb{R}$ is positive, bounded and linear, so the proof is done. \end{proof} \begin{lem}\label{lemC4} The condition {\tt(APM)} implies the condition {\tt(MC)}. \end{lem} \begin{proof} The proof is straightforward, taking as the compact constrictor for $A_n$ the set \begin{align*} F=\left\{ \varphi\in L^1(m): \ \varphi\text{ is a convex combination of }h_1,\dots,h_r \right\}. \end{align*} Therefore, the proof is done. \end{proof} \subsection{Proof of the theorem} In this subsection, we will show that {\tt (AMC2)} implies~{\tt(FED)}. \begin{rem} \label{prop:invariant} Observe that, from Proposition~\ref{prop:wap}, the restriction $P_S$ of a Markov operator $P$ to the support $S$ of a $P$-invariant density is a Markov operator. Hence, Proposition~\ref{prop:P_h-Markov} in Appendix~\ref{sec:Markov-restriction2} implies that $P^*1_S\geq 1_S$.
In particular, the sequence $(P^{n*}1_S)_{n\geq 1}$ is increasing. \end{rem} \begin{prop}\label{propC22} {\tt (AMC2)} implies that the set of ergodic invariant densities of $P$ is nonempty and finite. Moreover, these ergodic invariant densities have mutually disjoint supports. \end{prop} \begin{proof} Observe first that, by virtue of Theorem~\ref{cor:E} (\ref{item7}), it immediately follows from {\tt (AMC2)} that there exists a $P$-invariant density. Thus, by Proposition~\ref{prop:ergodic}, $$\mathcal{E}=\{g \in L^1(m): \ \text{$g$ is an ergodic $P$-invariant density} \} \not = \emptyset.$$ The following claim is the key to obtaining the finitude of the ergodic $P$-invariant densities. \begin{claim} \label{claim:Edelta} Consider $E\in \mathscr{B}$ such that $m(E)> 0$ and $P^*1_E\geq 1_E$. Let $\delta >0$ be the constant given in {\tt (AMC2)}. Then, $m(E)\geq \delta$. \end{claim} \begin{proof} Assume that $m(E)<\delta$. Then, applying {\tt (AMC2)} to $h=\frac{1_E}{m(E)} \in D(m)$, we have \[ \limsup_{n\to\infty} \int_E A_n\frac{1_E}{m(E)} \, dm \leq \kappa <1. \] On the other hand, for any $n\geq 1$, \begin{align*} \int_E A_n\frac{1_E}{m(E)} \, dm &= \frac{1}{m(E)} \int _EA^*_n 1_E \, dm = \frac{1}{m(E)} \int _E\frac{1}{n} \sum_{i=0}^{n-1} P^{i*}1_E \, dm \geq \frac{1}{m(E)} \int _E 1_E \, dm =1. \end{align*} This is a contradiction, and thus we have $m(E)\geq \delta$. \end{proof} Now, by Remark~\ref{prop:invariant}, the support $S$ of any invariant density satisfies $P^*1_S\geq 1_S$ and, by Proposition~\ref{prop:ergodic} and Remark~\ref{rem:disjoin-support}, two distinct ergodic $P$-invariant densities have disjoint supports. Hence, it follows from Claim~\ref{claim:Edelta} that the cardinality of $\mathcal{E}$ is at most $\delta^{-1}$. Let $r$ be this cardinality (notice that $r>0$ since $\mathcal{E}\not=\emptyset$). Thus, we get that $P$ admits finitely many ergodic invariant densities $h_1,\dots,h_r$ in $D(m)$ with mutually disjoint supports. \end{proof} \begin{prop} \label{propWAP} {\tt(WAP)} holds if {\tt(AMC2)} holds. \end{prop} We postpone the proof of the above proposition and show how to use it to conclude the following main result of this subsection: \begin{thm}\label{propC2} {\tt (AMC2)} implies {\tt (FED)}. In particular, {\tt (AC)} implies {\tt (MC)}. \end{thm} \begin{proof} Let $h_1,\dots,h_r$ be the ergodic invariant densities of $P$ obtained in Proposition~\ref{propC22}. To conclude~{\tt (FED)} we need to prove that the $P$-invariant density $h= \frac{1}{r} (h_1+\dots+h_r)$ has the maximal support. According to Proposition~\ref{propWAP}, $P$ is {\tt(WAP)}. Then there is a $P$-invariant density $g$ with the maximal support, i.e., such that $\lim_{n\to\infty} P^{n*}1_{\operatorname*{supp} g}=1_X$. Since the set of $P$-invariant densities is convex and the ergodic densities are its extremal points, it follows that $g=\lambda_1 h_1 + \dots+ \lambda_r h_r$ with $\lambda_i \geq 0$ and $\lambda_1+\dots+\lambda_r=1$. Then, $\operatorname*{supp} g \subset \operatorname*{supp} h$ and thus we also have $\lim_{n\to\infty} P^{n*}1_{\operatorname*{supp} h}=1_X$. This shows that $h$ has the maximal support, concluding the proof. The final implication follows immediately from the first part of the theorem since, trivially, {\tt(AC)} $\Rightarrow$ {\tt (AMC2)}, and {\tt (FED)} $\Rightarrow$ {\tt(MC)} by Lemmas~\ref{lemC3} and~\ref{lemC4}. \end{proof} \subsubsection{Proof of Proposition~\ref{propWAP}} Let $h$ be a $P$-invariant density. Set $S=\operatorname*{supp} h$.
Define $$\varphi\stackrel{\scriptscriptstyle\rm def}{=} \lim_{n\to\infty} P^{n*}1_{S}.$$ This limit exists since the sequence $(P^{n*}1_{S})_{n\geq 1}$ is increasing; see Remark~\ref{prop:invariant}. The first observation is that $0\leq \varphi \leq 1$. Also, by the monotone continuity property of $P^*$ (see~\cite[Proposition V.4.1]{neveu1965mathematical}), $P^*\varphi =\varphi$. Moreover, since $P^*1_{S} \geq 1_{S}$, one has that $\varphi=1$ on $S$. Let us split $$X=E_0 \cup E \cup E_1, \quad E=\{x\in X: 0<\varphi<1\} \ \ \text{and} \ \ E_i=\{x\in X: \varphi=i\} \ \text{for $i=0,1$}.$$ \begin{lem} \label{lem1} We have $P^*1_E\leq 1_E$ and $P^*1_{E_0} \geq 1_{E_0}$. \end{lem} \begin{proof} Note that $\operatorname*{supp} 1_E \subset \operatorname*{supp} \varphi$ and $\operatorname*{supp} 1_E \subset \operatorname*{supp} (1_X-\varphi)$. Then, by Lemma~\ref{supp}, it follows that $$\operatorname*{supp} P^*1_E \subset \operatorname*{supp} P^*\varphi \cap \operatorname*{supp} P^*(1_X-\varphi)=\operatorname*{supp} \varphi \cap \operatorname*{supp} (1_X-\varphi)=E.$$ Then, Proposition~\ref{prop:P_h-Markov} in Appendix~\ref{sec:Markov-restriction2} implies that $P^*1_{E} \leq 1_{E}$. On the other hand, since $\lim_{n\to\infty} \min\{1_X,n\varphi\}=1_{\operatorname*{supp} \varphi}$, it holds that $$P^*1_{\operatorname*{supp} \varphi} =\lim_{n\to\infty} P^*\min\{1_X,n\varphi\} \leq \lim_{n\to\infty} \min\{1_X,n\varphi\}=1_{\operatorname*{supp} \varphi}.$$ Equivalently, $P^*1_{E_0} \geq 1_{E_0}$ since $E_0=X\setminus \operatorname*{supp} \varphi$. \end{proof} \begin{lem} \label{lem:anterior} For every $A\in\mathscr{B}$ with $A\subset \operatorname*{supp} \varphi \setminus S$, we have $\lim_{n\to\infty} P^{n*}(\varphi 1_A)=0$. \end{lem} \begin{proof} By assumption, $1_A \leq 1_{\operatorname*{supp} \varphi \setminus S}$. Hence, $0\leq \varphi 1_A \leq \varphi 1_{\operatorname*{supp} \varphi \setminus S}$. Then, since $$ \lim_{n\to\infty} P^{n*}(\varphi 1_{\operatorname*{supp} \varphi \setminus S}) = \lim_{n\to\infty} P^{n*}(\varphi 1_{\operatorname*{supp} \varphi} - \varphi 1_S)= \lim_{n\to\infty} P^{n*}\varphi - \lim_{n\to\infty} P^{n*}1_{S}= \varphi-\varphi=0, $$ it also follows that $\lim_{n\to\infty} P^{n*}(\varphi 1_A)=0$. \end{proof} Let us define $$\psi\stackrel{\scriptscriptstyle\rm def}{=}\lim_{n\to\infty} P^{n*}1_{E}.$$ As above, notice that this limit exists since, from Lemma~\ref{lem1}, the sequence $(P^{n*}1_{E})_{n\geq 1}$ is decreasing. Moreover, again by the monotone continuity property of $P^*$, we have that~$P^*\psi=\psi$ and $\operatorname*{supp} \psi \subset E$. \begin{lem} \label{lem:1-phi} If $m(E_0)=0$, then $\varphi = 1_X -\psi$. \end{lem} \begin{proof} Since $m(E_0)=0$, \begin{equation*}\label{eq:lim} 1_X=\lim_{n\to\infty} P^{n*}1_X= \lim_{n\to\infty} P^{n*}(1_{S}+1_{E_1\setminus S}+1_E) = \varphi +\lim_{n\to\infty} P^{n*}1_{E_1\setminus S} + \psi. \end{equation*} Bearing in mind that $1_{E_1\setminus S}=\varphi 1_{E_1\setminus S}$ and $ E_1\setminus S \subset \operatorname*{supp} \varphi \setminus S$, Lemma~\ref{lem:anterior} implies that $\lim_{n\to\infty} P^{n*}1_{E_1\setminus S}=0$. Substituting above, we get $1_X=\varphi + \psi$, concluding the proof. \end{proof} For each $\varepsilon>0$, let us denote $ B_{\varepsilon}=\{x\in X: 1-\varepsilon \leq \psi(x)\}. $ \begin{lem} \label{lem:epsilon} If $\varphi=1_X - \psi$, then $\lim_{n\to \infty} P^{n*}(\psi 1_{B_\varepsilon})=\psi$ for every $\varepsilon>0$.
Moreover, if $m(B_{\varepsilon})=0$ for some $\varepsilon>0$, then $\psi=0$ (and hence, $\varphi=1$) $m$-almost everywhere. \end{lem} \begin{proof} For each $k\geq 1$, define $C_k=\{x\in X: \, \frac{1}{k+1}\leq \varphi(x) < \frac{1}{k}\}$. Since $\varphi=1_X - \psi$, we have $$ C_k = \left\{ x\in X: \, 1-\frac{1}{k}< \psi(x) \leq 1-\frac{1}{k+1}\right\}. $$ Hence, $$ 1_{C_k}\psi \leq 1_{C_k}\left(1-\frac{1}{k+1}\right) =1_{C_k}\cdot\frac{k}{k+1} \leq k\, 1_{C_k}\varphi. $$ Moreover, from Lemma~\ref{lem:anterior}, since $C_k \subset E \subset \operatorname*{supp} \varphi \setminus S$, we have that $$ 0\leq \lim_{n\to\infty}P^{n*}\left(1_{C_k}\psi\right) \le k\lim_{n\to\infty}P^{n*}\left(1_{C_k}\varphi\right)=0 $$ and thus, for each $k\geq 1$, \begin{equation}\label{eq:lim=0} \lim_{n\to\infty}P^{n*}\left(1_{C_k}\psi\right)=0. \end{equation} On the other hand, for each $\varepsilon>0$, there exists $k_0\geq 1$ such that $$E\setminus B_\varepsilon =\{x\in X: 0<\psi(x)<1-\varepsilon\} \subset \bigcup_{k=1}^{k_0} C_k. $$ Then, from~\eqref{eq:lim=0} we get \begin{equation}\label{eq24} \lim_{n\to\infty}P^{n*}\left(1_{E\setminus B_\varepsilon}\psi\right)=0 \quad \text{for all $\varepsilon>0$.} \end{equation} Now, observe that $\psi=\psi1_E=\psi 1_{B_\varepsilon}+ \psi 1_{E\setminus B_\varepsilon}$. Hence, by~\eqref{eq24}, we conclude that \begin{align*} \lim_{n\to\infty}P^{n*}\left(\psi1_{B_{\varepsilon}}\right)=\lim_{n\to\infty} P^{n*}\psi = \psi. \end{align*} Finally, note that if $m(B_{\varepsilon})=0$ for some $\varepsilon>0$, then $\psi<1-\varepsilon$ $m$-almost everywhere. From this, it follows that \begin{align*} \psi=P^{n*}\psi=P^{n*}(\psi 1_E)\leq (1-\varepsilon)P^{n*}1_E\to(1-\varepsilon)\psi\quad \text{as $n\to\infty$.} \end{align*} Hence $\psi\leq (1-\varepsilon)\psi$ and then $\psi=0$ $m$-almost everywhere. \end{proof} Now we set \[ \mathcal{F}=\{S \subset X : S \ \ \text{is the support of a} \ P\text{-invariant density} \}. \] Then, according to Proposition~\ref{propC22}, there is at least one $P$-invariant density and thus $\mathcal{F}\not=\emptyset$. The inclusion up to an $m$-null set induces a partial order on $\mathcal{F}$. Moreover, any chain in $\mathcal{F}$ admits an upper bound in $\mathcal{F}$: taking a countable subchain whose measures increase to the supremum of the measures along the chain, the support of the corresponding convex series of invariant densities is an upper bound. Hence, by Zorn's lemma, there exists a maximal element $S\in \mathcal{F}$, in the sense that if $S \subset S'$ and $S'\in \mathcal{F}$, then $S'=S$ up to an $m$-null set. Let $h\in D(m)$ be a $P$-invariant density such that $S=\operatorname*{supp} h$ and consider for this density the sets $E_0$, $E$ and $E_1$ defined above. \begin{lem} \label{lem:MCDP2} {\tt(AMC2)} implies that $m(E_0)=0$ and $m(B_{\varepsilon})=0$ for some $\varepsilon>0$. \end{lem} \begin{proof} By Lemma~\ref{lem1}, $P^*1_{E_0}\geq 1_{E_0}$. If $m(E_0)>0$, then Proposition~\ref{prop:P_h-Markov} implies that $P_{E_0}:L^1(m_{E_0})\to L^1(m_{E_0})$ is a Markov operator. Therefore, by virtue of Theorem~\ref{cor:E}~(6), it immediately follows from~{\tt (AMC2)} that there exists a $P_{E_0}$-invariant density $g\in D(m_{E_0})$. In particular, it satisfies that $P(1_{E_0}g)=1_{E_0}g$ and $\int 1_{E_0} g \, dm = m(E_0)$. Hence, $\phi= \frac{1}{2}(h+\frac{1_{E_0}g}{m(E_0)})$ is a $P$-invariant density and thus $S'=\operatorname*{supp} \phi \in \mathcal{F}$ and $S=\operatorname*{supp} h \subsetneq S'$. This contradicts the maximality of $S$, concluding that $m(E_0)=0$. Next we will see that there exists $\varepsilon>0$ such that $m(B_\varepsilon)=0$.
From the assumption~{\tt (AMC2)}, there are $0<\kappa<1$ and $\delta>0$ such that \begin{align}\label{eq26} \limsup _{n\to\infty} \int_B A_n\phi \, dm \le\kappa \quad \text{for each $\phi \in D(m)$ and $B\in\mathscr{B}$ with $m(B)<\delta$.} \end{align} On the other hand, one can find $\varepsilon_0>0$ such that for any $0<\varepsilon\le\varepsilon_0$ it holds that $m(B_{\varepsilon})<\delta$, since $m(B_\varepsilon)\to0$ as $\varepsilon\to0$. We fix $0<\varepsilon<\min\{\varepsilon_0,1-\kappa\}$ and will see that $m(B_{\varepsilon})=0$. Arguing by contradiction, we assume that $m(B_\varepsilon)>0$. Then, for the function $\phi =\frac{1_{B_\varepsilon}}{m(B_\varepsilon)}$, it follows from~\eqref{eq26} that \begin{align*} \kappa&\ge\limsup _{n\to\infty}\int_{B_\varepsilon}A_n\phi\,dm\\ &=\limsup _{n\to\infty}\int_X \phi\cdot A_{n}^*1_{B_\varepsilon}\,dm \ge\limsup _{n\to\infty}\int_X \phi\cdot A_{n}^*(\psi1_{B_\varepsilon})\,dm =\int_X \phi \psi \,dm \end{align*} by the Lebesgue dominated convergence theorem and Lemma~\ref{lem:epsilon} (note that $\varphi=1_X-\psi$ by Lemma~\ref{lem:1-phi}, since $m(E_0)=0$). But we also have \begin{align*} \int_X \phi \psi \, dm= \frac{1}{m\left(B_\varepsilon\right)}\int_{B_\varepsilon}\psi \,dm \ge\frac{1}{m\left(B_\varepsilon\right)}\int_{B_\varepsilon}(1-\varepsilon)\,dm = 1-\varepsilon>\kappa. \end{align*} Therefore, we conclude $m(B_{\varepsilon})=0$. \end{proof} \begin{proof}[Proof of Proposition~\ref{propWAP}] By Lemma~\ref{lem:MCDP2}, we have $m(E_0)=0$ and $m(B_\varepsilon)=0$ for some $\varepsilon>0$. Hence, by Lemmas~\ref{lem:1-phi} and~\ref{lem:epsilon}, $\psi=0$ and thus $\varphi=\lim_{n\to\infty}P^{n*}1_S=1_X$ $m$-almost everywhere; that is, the invariant density $h$ has the maximal support, which is equivalent to {\tt(WAP)} by~\cite[Theorem~3.1]{Toyokawa2020}. \end{proof} {\subsection{On the classes {\tt (AC)}, {\tt(CW)} and {\tt (APW)} } In Theorem~\ref{propC2} we have proved that {\tt(AC)} $\Rightarrow$ {\tt (MC)} and, in particular, from Theorem~\ref{thm:D}, that {\tt(AC)} implies {\tt (APM)}. The following proposition shows that the conditions {\tt (APW)} and {\tt (CW)} introduced in Section~\ref{sec:Questions-(AC)-(UC)} are sufficient for {\tt(AC)}. Thus, as a consequence, {\tt(APW)} $\Rightarrow$ {\tt(APM)} and \mbox{{\tt (CW)} $\Rightarrow$ {\tt(MC)}.} \begin{prop}\label{prop:efinal} It holds that {\tt (CW)} implies {\tt (AC)}. Furthermore, {\tt (APW)} implies {\tt (CW)}. \end{prop} \begin{proof} Assume that {\tt (CW)} holds. Hence, given $h\in D(m)$, there is $(\psi _n)_{n\geq 1} \subset F$ such that $P^nh -\psi _n\to 0$ weakly in $L^1(m)$ as $n\to \infty$. Therefore, \begin{align*} \limsup _{n\to\infty}\int _AP^n h\, dm &= \limsup _{n\to\infty}\left(\int _A\psi _n \, dm +\int _A(P^n h - \psi _n) \, dm\right) \\ &= \limsup _{n\to\infty} \int _A\psi _n \, dm \leq \sup _{\psi \in F} \int _A \psi \, dm. \end{align*} Since $F$ is a weakly compact set, it follows from the Dunford--Pettis theorem that for any $\varepsilon >0$, there is $\delta >0$ such that $ \sup _{\psi \in F} \int _A \psi \, dm<\varepsilon$ for any $A\in\mathscr{B}$ with $m(A)<\delta$. This shows that $P$ satisfies {\tt (AC)}. Next we assume that {\tt (APW)} holds. Then, for $h \in D(m)$, since $P$ is a Markov operator, due to the weak convergence in {\tt(APW)}, \begin{align} \label{eq:convex-combination} 0 &=\lim _{n\to\infty} \int P^n\big(h -\sum _{i=1}^r\lambda _i(h )g_i\big) \, dm =\int \big(h -\sum _{i=1}^r\lambda _i(h )g_i \big) \, dm= 1- \sum _{i=1}^r\lambda _i(h ). \end{align} Set $$F\coloneqq \left\{\sum_{i=1}^r a_i \, g_i : \ a_i\geq 0,\; \sum _{i=1}^r a_i =1\right\} \subset D(m).$$ Notice that $F$ is a weakly compact set and also $P$-invariant since $Pg_i=g_{\rho(i)}$, where $\rho$ is the permutation of $\{1,\dots,r\}$ in the definition of {\tt(APW)}.
Moreover, in view of~\eqref{eq:convex-combination}, the weak convergence in {\tt (APW)} means that for any $h\in D(m)$, there exists $$ \psi _n\coloneqq \sum _{i=1}^r \lambda _i(h)P^ng_i = \sum _{i=1}^r \lambda _i(h)g_{\rho^n(i)} \in F $$ such that $P^nh -\psi _n \to 0$ weakly as $n\to \infty$. This shows that $P$ satisfies {\tt (CW)}. \end{proof} } \section{Characterization of finitude of physical measures: proof of Theorem~\ref{thm:C}}\label{s:thmc} Let $(X , \mathscr B, m)$ and $(\Omega , \mathscr F, \mathbb{P})=(T^\mathbb{N}, \mathscr A^\mathbb{N}, p^\mathbb{N})$ be a locally compact Polish probability space and the infinite product space of a probability space $(T, \mathscr A, p)$, respectively. Consider a measurable map $f: T\times X \to X$ and $$ f^0_\omega=\mathrm{id} \quad \text{and} \quad f^n_\omega= f_{\omega_{n-1}}\circ \dots \circ f_{\omega_0} \ \ \text{for} \ n>0 \ \text{and $\omega =(\omega _0, \omega _1, \ldots )\in \Omega$}, $$ where we denote $f_t=f(t,\cdot)$ for $t\in T$. Recall the notion of stationary measure of $f$ in~\eqref{def:stationary}. Also, recall that the convergence of measures that we are considering is in the weak* topology. That is, $\mu_n \to \mu$ if and only if \begin{equation} \label{eq:weak*convergence} \int \varphi \,d\mu_n \to \int \varphi \, d\mu \quad \text{for all $\varphi\in C_b(X)$} \end{equation} where $C_b(X)$ denotes the set of bounded real-valued continuous functions on $X$. However, since $X$ is a locally compact Polish space and the measures $\mu_n$ and $\mu$ are assumed to be probabilities, according to the portmanteau theorem~(cf.~\cite[Theorem~13.16]{Klenke2008}),~\eqref{eq:weak*convergence} is equivalent to \begin{equation*} \label{eq:vide-convergence} \int \varphi \,d\mu_n \to \int \varphi \, d\mu \quad \text{for all $\varphi\in C_c(X)$}. \end{equation*} Here $C_c(X) \subset C_b(X)$ denotes the set of compactly supported continuous functions. \begin{lem} \label{lem-mu1} Let $\mu$ be an ergodic stationary measure of $f$. Then, \begin{enumerate}[label=(\roman*)] \item $\bar{\mu}(B(\mu))=1$ where $\bar{\mu}=\mathbb{P}\times\mu$; \item $\mu(B_\omega(\mu))=1$ for $\mathbb{P}$-almost every $\omega\in \Omega$; \item $\mathbb{P}(B_x(\mu))=1$ for $\mu$-almost every $x\in X$. \end{enumerate} Here, \begin{equation} \label{eq:movida} B_\omega(\mu)=\big\{x\in X: \, (\omega,x)\in B(\mu)\big\} \ \quad \text{and} \quad \ B_x(\mu)=\big\{\omega\in \Omega: \, (\omega,x)\in B(\mu)\big\} \end{equation} and $$ B(\mu)=\left\{ (\omega,x)\in \Omega\times X: \, \lim _{n\to \infty} \frac{1}{n} \sum _{j=0}^{n-1} \delta_{f_{ \omega}^{j}(x)}=\mu\right\}. $$ \end{lem} \begin{proof} Since $\mu$ is an ergodic stationary measure of $f$, the measure $\bar{\mu}=\mathbb{P}\times \mu$ is an ergodic invariant probability measure of the corresponding skew-product transformation $F$. We now consider the set $B(\mu)$ of points $(\omega,x)\in \Omega\times X$ such that for any $\varphi \in C_c(X)$, it holds $$ \lim _{n\to \infty} \frac{1}{n} \sum _{j=0}^{n-1} \varphi \circ f_{ \omega}^{j}(x) = \int _X \varphi d\mu. $$ Similarly we define the set $B(\mu,\varphi)$ of points $(\omega,x)\in \Omega\times X$ for which the above limit holds for a fixed continuous function $\varphi:X\to \mathbb{R}$.
Taking into account that $$ \varphi \circ f_{ \omega}^{n}(x) = \bar{\varphi} \circ F^{n}(\omega,x) \quad \text{and} \quad \int\varphi \, d\mu = \int \bar{\varphi} \, d\bar{\mu} $$ where $$ \bar{\varphi}(\omega,x)=\varphi(x) \quad \text{for } (\omega,x)\in \Omega\times X \quad (\bar\varphi \in C_c(\Omega\times X)), $$ it follows from the Birkhoff Ergodic Theorem that $B(\mu,\varphi)$ has full $\bar{\mu}$-measure. Now, since we assumed that $X$ is a locally compact Polish space, $C_c(X)$ is separable {(cf.~\cite[Section~42]{willard2012general})}. Thus, taking a countable dense subset $S$ of $C_c(X)$, it is not difficult to see that $$B({\mu})=\bigcap_{\varphi\in S} B(\mu,\varphi)$$ and therefore $B(\mu)$ also has full $\bar{\mu}$-measure. Finally, we observe that $B_\omega(\mu)$ and $B_x(\mu)$ are the $\omega$-section and $x$-section of $B(\mu)$ respectively, i.e., it holds~\eqref{eq:movida}. Hence, by Fubini's theorem, we have that $\mu(B_\omega(\mu))=1$ and $\mathbb{P}(B_x(\mu))=1$ for $\mathbb{P}$-almost every $\omega\in \Omega$ and $\mu$-almost every $x\in X$ respectively. \end{proof} We highlight the following lemma for future reference. This lemma follows immediately from Fubini's theorem and the definition of $B_\omega(\mu)$ and $B_x(\mu)$ in~\eqref{eq:movida}. \begin{lem} \label{lem:equi-basin} The following are equivalent: \begin{enumerate} \item $\bar{m}(B(\mu_1)\cup \dots \cup B(\mu_r))=1$ where $\bar{m}=\mathbb{P}\times m$; \item $m(B_\omega(\mu_1)\cup \dots \cup B_\omega(\mu_r))=1$ for $\mathbb{P}$-almost every $\omega\in \Omega$; \item $\mathbb{P}(B_x(\mu_1)\cup \dots \cup B_x(\mu_r))=1$ for $m$-almost every $x\in X$. \end{enumerate} \end{lem} \begin{rem}\label{rem:physical} The equivalence in Lemma~\ref{lem:equi-basin} also holds with ``$>0$'' in place of ``$=1$''. Thus, an ergodic stationary probability $\mu$ of $f$ {is said to be a} \emph{physical measure} if one of the following equivalent conditions holds: \begin{enumerate} \item $\bar{m}(B(\mu))>0$ where $\bar{m}=\mathbb{P}\times m$; \item $m(B_\omega(\mu))>0$ for $\mathbb{P}$-almost every $\omega\in \Omega$; \item $\mathbb{P}(B_x(\mu))>0$ for $m$-almost every $x\in X$. \end{enumerate} This definition agrees with the classical notion of physical measure in the deterministic case (that is, when $\Omega$ is a singleton). However, this new definition differs from previous notions of physical measure for random maps introduced in the literature (see for instance~\cite[Equation~(2)]{alves2007stochastic} or \cite[page~313]{Araujo2000}). That notion was introduced by requiring $m(G(\mu))>0$, where $$ G(\mu)=\left\{x\in X: \, \lim _{n\to \infty} \frac{1}{n} \sum _{j=0}^{n-1} \delta_{f_{ \omega}^{j}(x)}=\mu \quad \text{for $\mathbb{P}$-almost every $\omega\in \Omega$} \right\}. $$ Since $\bar{m}(B(\mu))\geq m(G(\mu))$, one has that $m(G(\mu))>0$ implies $\bar{m}(B(\mu))>0$. However, the converse implication is not expected to hold a priori. Also, defining $G(\mu)$ as the statistical basin of attraction does not seem appropriate, as the following example shows. \begin{example} \label{example:figura4} Let $X=T=[-1,1]$ be equipped with the normalized Lebesgue measure and let $X_-=[-1,0)$, $X_+=(0,1]$. Consider a continuous map $f_0: X\to X$ which has exactly two sinks $p_- = -\frac{1}{2}$, $p_+ = \frac{1}{2}$ whose basins of attraction are $X_-$ and $X_+$, respectively, and such that $0$ is a fixed point of $f_0$.
With the notation $B(x,r)$ for the ball of radius $r$ centered at $x$, we also assume that $f_0(X )\subset B(0,\frac{3}{4})$ and $f_0( B(p_\pm , \frac{1}{4}))\subset B(p_\pm , \frac{1}{8})$. Then, for the random map $f: T\times X\to X$ with additive noise given by $f_t(x) = f_0(x) + \frac{1}{8} t$, it holds that $f_t(B(p_\pm , \frac{1}{4}))\subset B(p_\pm , \frac{1}{4})$ for any $t\in T$. See Figure \ref{fig_eg_Y}. \begin{figure}[h] \centering \begin{subfigure}{0.45\textwidth} \includegraphics[width=\textwidth]{fig_example_5_4_a} \caption{The map $f_0$ on $[-1,1]$.} \label{fig:eg_Y_a} \end{subfigure} \hfill \begin{subfigure}{0.45\textwidth} \includegraphics[width=\textwidth]{fig_example_5_4_b} \caption{The map $f_t$ on $[-1,1]$.} \label{fig:eg_Y_b} \end{subfigure} \caption{Illustrations of the random map $f$ in Example~\ref{example:figura4}.} \label{fig_eg_Y} \end{figure} By the argument in Remark \ref{rm:0302}, the annealed Perron--Frobenius operators associated with the restrictions of $f$ on both $B(p_- , \frac{1}{4})$ and $B(p_+ , \frac{1}{4})$ satisfy {\tt (FPM)}. Therefore, $f$ admits at least two absolutely continuous ergodic stationary measures $\mu _-$, $\mu _+$ whose supports are included in $B(p_- , \frac{1}{4})$, $B(p_+ , \frac{1}{4})$, respectively. Since the annealed Perron--Frobenius operator associated with $f$ itself also satisfies {\tt (FPM)}, $f$ admits finitely many absolutely continuous ergodic stationary measures $\mu _1,\ldots ,\mu _r$, two of which are $\mu _-$, $\mu _+$, such that \[ m( B_\omega (\mu _1) \cup \cdots \cup B_\omega (\mu _r)) =1 \quad \text{for $\mathbb P$-almost every $\omega$.} \] On the other hand, by the continuity of $f_0$, there is a neighborhood $U$ of $0$ such that $f_0(U) \subset B(0,\frac{1}{16})$, so that for each $x\in U$, both $\{ f_t(x) : t\in T\} \cap X_+$ and $\{ f_t(x) : t\in T\} \cap X_-$ have positive Lebesgue measure. Consequently, one can find positive $\mathbb P$-measure sets $\Gamma_-$, $\Gamma_+$ and $n_0\geq 1$ such that if $\omega \in \Gamma_\pm$ then $f_\omega ^n(x) \in B(p_\pm , \frac{1}{4})$ for any $n\geq n_0$ and $x\in U$. Since both $\Gamma_-$ and $\Gamma_+$ have positive $\mathbb{P}$-measure and produce time averages eventually concentrated on the disjoint regions $B(p_-,\frac{1}{4})$ and $B(p_+,\frac{1}{4})$, no point $x\in U$ can satisfy the almost-sure convergence required in the definition of $G(\mu_j)$, for any $j=1,\dots,r$. This shows that $U\cap \bigl(G(\mu _1)\cup \cdots \cup G(\mu _r)\bigr)=\emptyset$ and, since $m(U)>0$, \[ 0<m(G(\mu_1)\cup \dots \cup G(\mu_r))<1. \] In conclusion, if one expects finitely many physical measures whose basins of attraction together cover the whole space almost everywhere, then the appropriate notion of basin is the fiberwise statistical basin. \end{example} \end{rem} The following lemma will be essential to prove Theorem~\ref{thm:C}. {In this lemma, the support $\operatorname*{supp} \eta$ of a measure $\eta$ on $X$ (which is not necessarily absolutely continuous with respect to $m$) is defined as the set of all points $x$ in $X$ for which every open neighborhood $V$ of $x$ has positive $\eta$-measure. } \begin{lem} \label{lem:attraction-to-stationary} Let $\mu$ be a stationary measure of $f$. Then for any probability measure $\eta$ with {$\operatorname*{supp} \eta \subset B_\omega(\mu)$}, $$ \lim_{n\to \infty} \frac{1}{n}\sum_{j=0}^{n-1} (f^j_\omega)_* \eta = \mu. $$ In particular, if $x\in B_\omega(\mu)$, then \begin{equation} \label{eq:L1conv} \lim_{n\to \infty} \frac{1}{n}\sum_{j=0}^{n-1} \varphi(f^j_\omega(x)) = \int \varphi \, d\mu \quad \text{for all $\varphi \in C_b(X)$.} \end{equation} \end{lem} \begin{proof} Take any probability measure $\eta$ with support contained in $B_\omega(\mu)$.
Given any $\varphi \in C_c(X)$, by definition of $B_\omega(\mu)$, we have that for each $x\in \operatorname*{supp} \eta$ it holds $$ \lim_{n\to \infty} \frac{1}{n}\sum_{j=0}^{n-1} \varphi(f^j_\omega(x)) =\int \varphi \, d\mu. $$ Taking integrals over $\operatorname*{supp} \eta$ with respect to the probability measure $\eta$ on both sides of the equality, the dominated convergence theorem gives $$ \lim_{n\to \infty} \frac{1}{n}\sum_{j=0}^{n-1} \int \varphi(f^j_\omega(x)) \, d\eta =\int \varphi \, d\mu. $$ Since $$ \int \varphi \, d(f^j_\omega)_*\eta = \int \varphi\circ f^j_\omega \, d\eta \quad \text{for every $j\geq 0$} $$ we get the first part of the lemma. To prove~\eqref{eq:L1conv}, it suffices to apply the first part of this lemma to $\eta=\delta_x$ for $x\in B_\omega(\mu)$. \end{proof} Let us show that (ii) implies (i) in Theorem~\ref{thm:C}. To do this, according to the equivalences shown in Theorem~\ref{thm:D}, it suffices to show the following proposition. \begin{prop} \label{prop:FPM-FED} If $f$ satisfies {\tt(FPM)}, then ${\mathcal{L}}_f$ satisfies the condition {\tt(FED)}. \end{prop} \begin{proof} Since $f$ satisfies {\tt(FPM)} we have only finitely many absolutely continuous (with respect to $m$) ergodic stationary probability measures $\mu_1,\dots, \mu_r$ which have pairwise disjoint supports and $m(B_\omega(\mu_1)\cup\dots \cup B_\omega(\mu_r))=1$ for $\mathbb{P}$-almost every $\omega\in \Omega$. Let $h_i$ be the Radon--Nikod\'{y}m derivative of the measure $\mu_i$ with respect to $m$ for $i=1,\dots,r$. Observe that $h_i$ is an invariant density of ${\mathcal{L}}_f$ because $\mu_i$ is a stationary probability measure of $f$. Moreover, since the supports of such measures are pairwise disjoint, we also get that the densities $h_1,\dots,h_r$ have mutually disjoint supports. To conclude {\tt(FED)} for ${\mathcal{L}}_f$ we need to prove that the density $h=\frac{1}{r} (h_1+\dots+h_r)$ has the maximal support. This means that \begin{equation} \label{eq:maximal} \lim_{n\to\infty} {\mathcal{L}}_f^{n*}1_{\operatorname*{supp} h}(x) = 1 \quad \text{for $m$-almost every $x\in X$} \end{equation} where ${\mathcal{L}}_f^{\,*}$ is the adjoint operator of ${\mathcal{L}}_f$. Given $x\in X$ and $\omega \in B_x(\mu_i)$ for some $i\in \{1,\dots,r\}$, we have that $x\in B_\omega(\mu_i)$. Therefore, by Lemma~\ref{lem:attraction-to-stationary}, $$ \lim_{n\to \infty} \frac{1}{n}\sum_{j=0}^{n-1} 1_{\operatorname*{supp} h} \circ f^j_\omega(x) = \mu_i({\operatorname*{supp} h})=1. $$ That is, for every $x\in X$, \begin{equation}\label{eq:lim1} \lim_{n\to \infty} \frac{1}{n}\sum_{j=0}^{n-1} 1_{\operatorname*{supp} h} \circ f^j_\omega(x) =1 \quad \text{for all $\omega \in B_x(\mu_1)\cup \dots \cup B_x(\mu_r)$.} \end{equation} Taking into account Lemma~\ref{lem:equi-basin}, we have that $\mathbb{P}(B_x(\mu_1)\cup \dots \cup B_x(\mu_r))=1$ for $m$-almost every $x\in X$. Then, using the notation introduced in Appendix~\ref{sec:apendix}, Lemma~\ref{lem:A2}~(4), the dominated convergence theorem, Lemma~\ref{lem:A2}~(2) and~\eqref{eq:lim1} imply that, for $m$-almost every $x\in X$, \begin{align*} \lim_{n\to \infty} \frac{1}{n}\sum_{j=0}^{n-1} {\mathcal{L}}_f^{j*}1_{\operatorname*{supp} h}(x) &= \lim_{n\to \infty} \frac{1}{n}\sum_{j=0}^{n-1} \int \mathcal{L}_\omega^{j*} 1_{\operatorname*{supp} h}(x) \, d\mathbb{P}(\omega) \\ &=\int_{\cup_i B_x(\mu_i)} \, \lim_{n\to \infty} \frac{1}{n}\sum_{j=0}^{n-1} 1_{\operatorname*{supp} h} \circ f^j_\omega(x)\, d\mathbb{P}(\omega)=1.
\end{align*} Further, according to Remark~\ref{prop:invariant}, the sequence $({\mathcal{L}}_f^{j*}1_{\operatorname*{supp} h})_{j\geq 1}$ is monotonically increasing and thus the convergence of its Ces\`aro means implies the convergence of the sequence itself (a consequence of the Stolz--Ces\`aro theorem). That is, we obtain~\eqref{eq:maximal}, concluding the proof. \end{proof} Now we will prove that (i) implies (ii). According to Theorem~\ref{thm:D}, if ${\mathcal{L}}_f$ is mean constrictive, then ${\mathcal{L}}_f$ satisfies {\tt (APM)}. That is, ${\mathcal{L}}_f$ admits finitely many ergodic invariant densities $h_1,\dots,h_r$ in $D(m)$ with mutually disjoint supports and bounded linear functionals $\lambda_1, \dots, \lambda_r$ such that for any $\varphi\in L^1(m)$, \begin{align}\label{eq:AD} \lim_{n\to\infty} \left\lVert A_n\varphi-\sum_{i=1}^r\lambda_i(\varphi)h_i\right\rVert =0. \end{align} Since $h_i$ is an ergodic invariant density for ${\mathcal{L}}_f$, the measure $\mu_i$ given by $d\mu_i=h_i \,dm$ is an absolutely continuous (with respect to $m$) ergodic stationary measure of $f$. Moreover, since the support $\operatorname*{supp} \nu$ of a measure $\nu$ absolutely continuous with respect to $m$ is defined as the support of the Radon--Nikod\'{y}m derivative $\frac{d\nu}{dm}$ of $\nu$ with respect to $m$, we also obtain that $\operatorname*{supp} \mu_1,\dots,\operatorname*{supp} \mu_r$ are pairwise disjoint. To prove that $f$ satisfies~{\tt(FPM)} it remains to show that the union of the fiberwise basins of attraction of these measures is $X$ modulo a set of zero $m$-measure. This will be obtained in the following proposition. \begin{prop} \label{prop:APM-FPM} Assume that ${\mathcal{L}}_f$ satisfies~{\tt(APM)} as above. Then $$\text{$m(X\setminus (B_\omega(\mu_1)\cup \dots \cup B_\omega(\mu_r)))=0$ \ \ for \ \ $\mathbb{P}$-almost every $\omega\in \Omega$.}$$ \end{prop} \begin{proof} Let $\Omega _0 =\{ \omega \in \Omega : \mu_i(B_\omega(\mu _i)) =1, \ \text{for $i=1,\dots,r$}\}$. From item (ii) in Lemma~\ref{lem-mu1} we get $\mathbb{P}(\Omega_0)=1$. Set \[ A_\omega =X\setminus \left(B_\omega (\mu _1)\cup \dots \cup B_\omega(\mu_r) \right). \] Then, for any $\omega \in \Omega_0$ it holds that $\mu_i(A_\omega) = 0$ for all $i=1,\dots,r$ because $A_\omega \subset X \setminus B_\omega (\mu_i)$. Consequently, since $d\mu_i= h_i \, dm$, \begin{equation}\label{eq:3b} \int _{A_\omega} h_i \, dm =0 \quad \text{for all $i=1,\dots,r$ and $\omega\in \Omega_0$}. \end{equation} Set \[ \Omega _1 = \bigcap _{n=0}^\infty \sigma ^{-n} (\Omega _0), \] which is also a full $\mathbb P$-measure set. Notice that $\omega \in \Omega _1$ implies $\sigma ^n\omega \in \Omega _0$ for any $n\geq 0$. Thus, from~\eqref{eq:3b}, \begin{equation}\label{eq:4b} \int _{A_{\sigma^n\omega}} h_i \, dm =0 \quad \text{ for each $i=1,\ldots ,r$ and $n\geq 1$ and $\omega\in \Omega_1$.} \end{equation} For simplicity of notation, we write $\bar{m}=\mathbb P\times m$. \begin{claim}\label{claim3} For $\bar{m}$-almost every~$(\omega , x)\in \Omega \times X$, there is $k\geq 1$ such that $f_\omega^k(x)\in X\setminus A_{\sigma ^k\omega }$. \end{claim} \begin{proof} By contradiction, assume that the claim does not hold. Therefore, \[ \bar{m}(B_0)>0, \quad B_0= \{ (\omega , x)\in \Omega \times X : \, f_\omega^n(x)\in A_{\sigma ^n\omega } \ \ \text{for all $n\geq 1$}\}. \] Set $B= B_0 \cap (\Omega _1\times X)$, which is still a positive $\bar{m}$-measure set because $\Omega _1 \times X$ is a full measure set.
Denoting by $B_\omega = \{ x \in X: (\omega , x)\in B\}$ the $\omega$-section of $B$, Fubini's theorem implies that for every $n\geq 1$, \[ \int_\Omega \int_{B_\omega } 1_{A_{\sigma ^n\omega }}\circ f^{n}_\omega \ dm \, d\mathbb P = \int_\Omega \int_{B_\omega } \, d{m}\, d\mathbb P = \bar{m}(B) >0. \] Hence, with the notation from Appendix~\ref{sec:apendix} (see Lemma~\ref{lem:A2}~(2)), \begin{equation}\label{eq:contradition} \frac{1}{n}\sum_{i=0}^{n-1} \int_\Omega \int_{B_\omega } \mathcal{L}_{\omega}^{i*}1_{A_{\sigma^i\omega}} \, dm \, d\mathbb P =\bar{m}(B)>0 \end{equation} (observe that also the term with $i=0$ equals $\bar m(B)$, since $B_\omega\subset A_\omega$: if $x\in B_\omega(\mu_i)$ for some $i$, then $f_\omega(x)\in B_{\sigma\omega}(\mu_i)$, so $(\omega,x)\notin B_0$). Given $\varphi\in L^1(m)$ and $\omega\in \Omega$, let us write { $$ \mathcal{L}^n_\omega\varphi = \sum_{j=1}^r \lambda_j(\varphi)h_j + \mathcal{Q}^n_\omega \varphi \quad \text{and} \quad A_n\varphi = \sum_{j=1}^r \lambda_j(\varphi)h_j + \mathcal{Q}_n \varphi $$ where we recall that (see Lemma~\ref{lem:A2} (4)), $$ A_n\varphi = \frac{1}{n}\sum_{i=0}^{n-1} {\mathcal{L}}^i_f\varphi = \frac{1}{n}\sum_{i=0}^{n-1} \int \mathcal{L}_{\omega}^{i}\varphi \, d\mathbb P. $$ Then, \begin{align*} \frac{1}{n}\sum_{i=0}^{n-1} \int_\Omega \int _{X} &\mathcal Q_\omega ^i 1_X\, \, {dm}\,d\mathbb P = \int \big( A_n 1_X\ - \sum_{j=1}^r \lambda_j(1_X) h_j \big) \, {dm} = \int \mathcal Q_n1_X \, {dm} \leq \| \mathcal Q_n1_X\|. \end{align*} On the other hand, by Lemma~\ref{lem:A2}~(3), Equation~\eqref{eq:4b} and the above inequality, \begin{align*} \frac{1}{n}\sum_{i=0}^{n-1} &\int_\Omega \int_{B_\omega } \mathcal{L}_{\omega}^{i*}1_{A_{\sigma^i\omega}} \, dm \, d\mathbb P \\ &\leq \frac{1}{n}\sum_{i=0}^{n-1} \int_\Omega\int_X \mathcal{L}_{\omega}^{i*}1_{A_{\sigma^i\omega}} \, {dm}\,d\mathbb P = \frac{1}{n}\sum_{i=0}^{n-1} \int_\Omega \int _{A_{\sigma ^i\omega }} \mathcal L_\omega ^i 1_X \, \,{dm}\,d\mathbb P \\ &= \frac{1}{n}\sum_{i=0}^{n-1} \int_\Omega \int _{A_{\sigma ^i\omega }} \mathcal Q_\omega ^i 1_X\, \, {dm}\,d\mathbb P \leq \frac{1}{n}\sum_{i=0}^{n-1} \int_\Omega \int _{X} \mathcal Q_\omega ^i 1_X\, \, {dm}\,d\mathbb P \leq \Vert \mathcal Q_n1_X \Vert \to 0. \end{align*}} The last limit (as $n \to \infty$) follows from the assumption~\eqref{eq:AD}. But this limit contradicts~\eqref{eq:contradition}. \end{proof} Fix $(\omega , x)\in \Omega \times X$ (up to zero measure sets) and let $k\geq 1$ be the integer of Claim~\ref{claim3}. Set $y= f^k_\omega (x) \in X\setminus A_{\sigma ^k\omega }$. By definition of $A_{\sigma^k\omega}$, there is $i$ such that $y\in B_{\sigma ^k\omega }(\mu _i)$. {Then, in view of~\eqref{eq:movida}, $(\sigma^k\omega,y)\in B(\mu_i)$ and hence \[ \lim_{n\to\infty}\frac{1}{n}\sum_{j=0}^{n-1} \delta _{ f^j_{\sigma^k\omega} (y)} = \mu _i . \] On the other hand, $$ \frac{1}{n}\sum_{j=0}^{n-1}\delta_{f^j_\omega(x)}= \frac{n-k}{n} \left( \frac{1}{n-k}\sum_{j=0}^{n-k-1} \delta_{f_{\sigma^k\omega}^j(y)} \right) + \frac{1}{n}\sum_{j=0}^{k-1}\delta_{f^j_{\omega}(x)} \to \mu_i $$ as $n\to \infty$. This shows that $x \in B_\omega(\mu_i)$, concluding the proof. } \end{proof} Finally, we summarize and complete the last details of the proof of Theorem~\ref{thm:C}: \begin{proof}[Proof of Theorem~\ref{thm:C}] If $f$ satisfies {\tt (FPM)}, then $\mathcal{L}_f$ is {\tt (FED)} by Proposition~\ref{prop:FPM-FED} and hence Theorem~\ref{thm:D} implies that $\mathcal{L}_f$ is {\tt (MC)}. This proves (ii) implies (i) in Theorem~\ref{thm:C}.
On the other hand, if $\mathcal{L}_f$ is {\tt(MC)}, then it is also {\tt (APM)} by Theorem~\ref{thm:D} and hence, one concludes that $f$ satisfies {\tt(FPM)} from Proposition~\ref{prop:APM-FPM}. This shows (i) implies (ii). Finally, (1) and (2) follow from Lemma~\ref{lem:equi-basin}. \end{proof} \section{Sub-hierarchy in {\tt (UC)}} \label{s:UC} In this section, we complete the proof of the implications in the hierarchy of Figure~\ref{fig:subhierarchy}, as well as provide practical sufficient conditions for {\tt (FPM)} other than the conditions in Section~\ref{ss:1.1}. {In the sequel, $(X,\mathscr{B},m)$ denotes a Polish probability space, $P:L^1(m) \to L^1(m)$ is a Markov operator and $P^n(x,A)$ is an $n$-transition probability that induces $P^n$.} We first give equivalent conditions for {\tt (UC)}. \begin{prop} \label{prop:UC} The following assertions are equivalent: \begin{enumerate} \item $P$ satisfies {\tt (UC)}; \item $P:L^1(m)\to L^1(m)$ is a quasi-compact operator; \item for any $\varepsilon>0$, there are $\delta >0$ and $n_0 \geq 1$ such that $$ \sup_{\varphi\in D(m)} \int_A P^{n_0}\varphi < \varepsilon \quad \text{for all $A\in \mathscr{B}$ with $m(A)< \delta$;} $$ \item there are $n_0 \geq 1$, $0<\varepsilon<1$ and $\delta >0$ such that $$ \sup_{\varphi\in D(m)} \int_A P^{n_0}\varphi < \varepsilon \quad \text{for all $A\in \mathscr{B}$ with $m(A)< \delta$;} $$ \item for any $\varepsilon>0$, there are $\delta >0$ and $n_0 \geq 1$ such that $P^{n_0}(x,A)<\varepsilon$ for all $A\in \mathscr{B}$ with $m(A)< \delta$ and $m$-almost every $x\in X$ (depending on $A$); \item there are $n_0 \geq 1$, $0<\varepsilon<1$ and $\delta >0$ such that $P^{n_0}(x,A)<\varepsilon$ for all $A\in\mathscr{B}$ with $m(A)<\delta$ and $m$-almost every $x\in X$ (depending on $A$); \item for any $\varepsilon>0$, there are $\delta >0$, $n_0 \geq 1$ and a probability $\mu$ absolutely continuous with respect to $m$ such that $P^{n_0}(x,A)<\varepsilon$ for all $A\in\mathscr{B}$ with $\mu(A)<\delta$ and $m$-almost every $x\in X$ (depending on $A$); \item there are $n_0\geq 1$, $0<\varepsilon<1$, $\delta>0$ and a probability $\mu$ absolutely continuous with respect to $m$ such that $P^{n_0}(x,A)<\varepsilon$ for all $A\in\mathscr{B}$ with $\mu(A)<\delta$ and $m$-almost every $x\in X$ (depending on $A$). \end{enumerate} \end{prop} \begin{proof} The equivalence between (1)--(4) follows from~\cite[Theorem~2]{Bart95}. On the other hand, observe that \begin{equation} \label{eq:duality} \int_A P^n\varphi \, dm = \int \varphi(x) \, P^{n*}1_A(x) \, dm = \int \varphi(x) P^n(x,A) \, dm \end{equation} for all $\varphi \in L^1(m)$, $A\in \mathscr{B}$ and $n\geq 1$. Taking into account that if $\varphi\in D(m)$ then $\int \varphi\, dm =1$, from~\eqref{eq:duality} one immediately gets that~(5) implies~(3) and (6) implies (4). We are going to see now that (3) implies~(5). Arguing by contradiction, suppose that there exists $\varepsilon>0$ such that for any $\delta>0$ and $n\geq 1$ there are $A\in\mathscr{B}$ with $m(A)<\delta$ and a set $E\subset X$ with $m(E)>0$ such that $P^n(x,A)\geq \varepsilon$ for all $x\in E$. Let $\delta>0$ and $n_0\geq 1$ be as in (3) for this $\varepsilon$, and take $A$ and $E$ as above for these $\delta$ and $n=n_0$. For $\varphi=\frac{1}{m(E)} \, 1_E \in D(m)$, condition (3) gives $\int_A P^{n_0}\varphi \, dm <\varepsilon$. However, by~\eqref{eq:duality}, $$ \int_A P^{n_0}\varphi \, dm = \int \varphi(x) P^{n_0}(x,A) \,dm = \frac{1}{m(E)} \int_E P^{n_0}(x,A) \, dm \geq \varepsilon, $$ a contradiction. That (4) implies (6) follows analogously to the proof that (3) implies (5). This establishes the equivalence of (1)--(6).
To complete the proof, we will show that (5) is equivalent to (7) and (6) is equivalent to (8). Observe that clearly, (5) implies (7) and (6) implies (8) by taking $\mu=m$. As before, the converse implications are analogous and thus we only prove (7) implies (5). To do this, by (7) we have that $\mu$ is absolutely continuous with respect to $m$. In particular, according to~\cite[Lemma~1]{HK1964}, $\mu$ is uniformly absolutely continuous with respect to $m$. That is, for any $\eta>0$ there is $\alpha>0$ such that $\mu(A)<\eta$ for all $A\in\mathscr{B}$ with $m(A)<\alpha$. Then taking $\eta=\delta$ in (7), we get that for any $\varepsilon>0$, there are $\alpha >0$ and $n \geq 1$ such that $P^n(x,A)<\varepsilon$ for all $A\in \mathscr{B}$ with $m(A)< \alpha$ and $m$-almost every $x\in X$ (depending on $A$). This concludes (5). \end{proof} As already mentioned in Section \ref{sec:(UC)}, it follows from item~(8) of Proposition \ref{prop:UC} that {\tt (D)} implies {\tt (UC)}. The following proposition shows that the converse is also true if $P$ is strong Feller. \begin{prop} \label{prop:Felle+UC=D} Let $P:L^1(m) \to L^1(m)$ be a Markov operator. Assume that $P$ is strong Feller, that is, there exists $k\in\mathbb{N}$ such that an associated $k$-th transition probability $P^k(x,A)$ is strong Feller continuous. Then $P$ satisfies {\tt (UC)} if and only if $P$ satisfies {\tt (D)}. \end{prop} \begin{proof} In view of Proposition~\ref{prop:UC}, {\tt (D)} implies {\tt (UC)} even if we do not assume that $P$ is strong Feller. We will prove now the converse. Assume first that $P(x,A)$ is strong Feller continuous and that {\tt(UC)} holds. That is, $k=1$ in the statement of the proposition. The first observation in this case is that $P^n(x,A)$ is strong Feller continuous for all $n\geq 1$. To see this, recall that $P^{n+1}(x,A)=\int P(y,A) \, P^n(x,dy)$ for $n\geq 1$ where $P^1(x,A)=P(x,A)$ is strong Feller continuous by assumption. Arguing by induction, if $P^n(x,A)$ is strong Feller continuous, it is Feller continuous and thus $P^n(x,\cdot)$ varies continuously in the weak* topology. Moreover, according to Proposition \ref{prop:ultra-Feller+B}, since $P(x,A)$ is strong Feller continuous, $y\mapsto P(y,A)$ is a bounded continuous function for all $A\in\mathscr{B}$. Thus, $$ P^{n+1}(x',A)=\int P(y,A) \, P^n(x',dy) \to \int P(y,A) \, P^n(x,dy)= P^{n+1}(x,A) \quad \text{as $x'\to x$}. $$ Therefore, $P^{n+1}(x,A)$ is strong Feller continuous. Now, having the continuity of $x\mapsto P^{n_0}(x,A)$ in mind, we get that~(8) in Proposition~\ref{prop:UC} holds for all $x\in X$. This concludes {\tt (D)}. Let us prove the proposition for $k>1$, i.e.~assume now that $P$ satisfies {\tt(UC)} and $P^k(x,A)$ is strong Feller continuous for $k>1$. To see the conclusion, consider the operator $Q=P^k$. By definition, $Q$ satisfies {\tt(UC)}. Moreover, the associated transition probability of $Q$ is $Q(x,A)= P^k(x,A)$. Thus, we can apply the case with $k=1$ to $Q$, concluding that $Q$ satisfies~{\tt(D)}. In particular, $P$ satisfies~{\tt(D)}. \end{proof} \begin{rem} \label{rem:AraujoD} If a random map $f$ satisfies the Ara\'ujo or Brin--Kifer conditions, then Proposition~\ref{prop:sinprueba}, Theorem~\ref{thm:book-generalized} and Proposition~\ref{prop:Felle+UC=D} imply that the Perron--Frobenius operator $\mathcal L_f$ satisfies {\tt(D)}. \end{rem} Proposition~\ref{prop:UC} also yields the following practical sufficient conditions for {\tt (FPM)}.
\begin{thm} \label{AAraujo2000-generalization} Assume that $P^{n_0}(x,dy)=p(x,y)\,dm(y)$ and one of the following holds: \begin{enumerate}[leftmargin=1cm] \item there are $\gamma>0$ and $\alpha>0$ such that \begin{enumerate} \item $m(\operatorname*{supp} p(x,\cdot))>\gamma$ for $m$-almost every~$x\in X$, and \item $\alpha \leq p(x,y)$ for $m$-almost every~$x\in X$ and $m$-almost every~$y\in \operatorname*{supp} p(x,\cdot)$; \end{enumerate} \item there is $\beta>0$ such that $p(x,y) \leq \beta$ for $m$-almost every~$x\in X$ and $m$-almost every~$y\in \operatorname*{supp} p(x,\cdot)$. \end{enumerate} Then, $\mathcal L_f$ satisfies~{\tt(UC)}. In particular, {if $X$ is locally compact}, $f$ satisfies~{\tt(FPM)}. \end{thm} \begin{proof} The condition (2) implies that $P^{n_0}(x,A)=\int_A p(x,y)\,dm(y)\leq \beta\, m(A)$ for $m$-almost every $x\in X$, so that Proposition \ref{prop:UC} (6) applies (with, say, $\varepsilon=\frac{1}{2}$ and $\delta=\frac{1}{2\beta}$) and $\mathcal L_f$ satisfies~{\tt (UC)}. Therefore, it follows from the implications in Figure~\ref{fig:hierarchy} and Theorem~\ref{thm:C} that $f$ satisfies~{\tt (FPM)}. Assume the condition~(1). Then, for any $A\in \mathscr B$ with $m(A)>1-\frac{\gamma}{2}$, it holds \begin{align*} P^{n_0}(x,A)&=\int_A p(x,y)\, dm(y) = \int_{A\cap \mathrm{supp} \, p(x,\cdot)} p(x,y) \, dm(y)\\ & \geq \alpha m(A\cap \mathrm{supp}\, p(x,\cdot))>\alpha \, \frac{\gamma}{2}. \end{align*} In the last inequality, we used that $m(A\cap \mathrm{supp}\, p(x,\cdot)) =m(A) + m( \mathrm{supp}\, p(x,\cdot)) - m(A\cup \mathrm{supp}\, p(x,\cdot)) > (1-\frac{\gamma}{2}) + \gamma -1 = \frac{\gamma}{2}$. Hence, by considering complements and using Proposition \ref{prop:UC} (6) again (with $\delta=\frac{\gamma}{2}$ and $\varepsilon=1-\frac{\alpha\gamma}{2}$), we get~{\tt (UC)} for $\mathcal L_f$, and $f$ satisfies~{\tt (FPM)}. \end{proof} \begin{rem}\label{rmk:6.7} As previously mentioned, Theorem \ref{AAraujo2000-generalization} (2) is another weakening of Brin--Kifer's condition (see (2) on page~\pageref{Brin--Kifer}). Moreover, Theorem~\ref{AAraujo2000-generalization}~(1) should be compared with Ara\'ujo--Ayta\c{c}~\cite{AA2017}. Indeed, they considered the case when $X$ is a compact manifold equipped with the normalized Lebesgue measure $m$, and assumed that there exist $\alpha , \beta , \gamma >0$ and $t_*\in T$ such that for every $x\in X$, \begin{itemize} \item[(i)] $\operatorname*{supp} p(x,\cdot )$ includes the ball of radius $\gamma$ centered at $f_{t_*}(x)$, and \item[(ii)] $\alpha \leq p (x,y)\leq \beta$ for $m$-almost every $y\in \operatorname*{supp} p(x,\cdot ) $. \end{itemize} Clearly, the conditions in Theorem \ref{AAraujo2000-generalization} relax Ara\'ujo--Ayta\c{c}'s conditions in order to obtain {\tt (UC)}.\footnote{They also assumed an aperiodicity condition to ensure uniform ergodicity, meaning that $X$ cannot be decomposed into $\ell$ subsets $X=X_1\cup \cdots \cup X_\ell$ ($\ell \geq 2$) such that $P(x, X_{i +1\pmod{\ell}})=1$ for all $x\in X_i$ and $1\leq i\leq \ell$. We do not assume it, as indicated by the second example in Section \ref{subsec:additive}, which satisfies the conditions in Theorem~\ref{AAraujo2000-generalization} but violates the aperiodicity condition.} Moreover, in view of the following section, under the assumptions in~\cite[Thm.~A]{AA2017}, one gets {\tt (D*)}. \end{rem} \subsection{{\tt (D*)} and uniform ergodicity} Let us also give equivalent conditions for {\tt (D*)}.
\begin{prop} \label{prop:equi-uni-erg} The following assertions are equivalent: \begin{enumerate} \item $P$ satisfies {\tt (D*)}; \item there is a probability measure $\pi$ absolutely continuous with respect to $m$ such that $$ \lim_{n\to \infty} \sup_{x\in X} \|P^n(x,\cdot) - \pi\|_{TV}=0; $$ \item there are a probability measure $\pi$ absolutely continuous with respect to $m$ and constants $C>0$, $0<\lambda<1$ such that $$ \|P^n(x,\cdot)-\pi\|_{TV}\leq C \lambda^n \quad \text{for all $x\in X$}. $$ \end{enumerate} \end{prop} \begin{proof} The equivalence between (2) and (3) follows from~\cite[Theorem~16.0.2]{MT2012}. It is also clear that (2) implies (1) by taking the measure $\mu$ in (1) as the measure $\pi$ in (2). Finally, that (1) implies (2) essentially follows from~\cite[Theorem~2]{Dorea2006}. Namely, Dorea and Pereira proved that (1) implies the convergence in (2), but they do not conclude that $\pi$ is absolutely continuous with respect to $m$. To prove this, observe that the convergence in (2) implies that $\pi$ is the unique measure such that $$\pi(A)=\int P(x,A) \, d\pi(x) \quad \text{for all $A\in\mathscr{B}$}.$$ See Remark~\ref{rem:uni-erg} for more details. On the other hand, (1) implies condition {\tt(D)} adapted to Markov operators in $L^1(m)$ (see Section \ref{sec:(UC)}). In view of Theorem~\ref{GST} (together with the observation that $\int P^n(x,A)\,dm(x)=\int _AP^n1_X\,dm$), one has that {\tt(D)} implies {\tt (S)}. That is, there is $g\in D(m)$ such that $Pg=g$. Hence, \begin{align} \label{eq:uniqueness} m_g(A) &\stackrel{\scriptscriptstyle\rm def}{=} \int_A g \, dm = \int_A Pg \, dm = \int g \, P^*1_A \, dm = \int P(x,A) \, dm_g \quad \text{for all $A\in\mathscr{B}$}. \end{align} Therefore, by the uniqueness, $m_g=\pi$, proving that $\pi$ is absolutely continuous. \end{proof} \begin{rem} \label{rem:uni-erg} In Markov processes theory, condition (2) in Proposition~\ref{prop:equi-uni-erg} without the absolute continuity of $\pi$ with respect to $m$ is called uniform ergodicity (cf.~\cite{MT2012}). Apart from this background, it would still make sense to call any of the equivalent conditions in Proposition~\ref{prop:equi-uni-erg} uniform ergodicity (for Markov operators on $L^1(m)$) because the conditions imply that there exists a unique invariant density of $P$. To see this, observe first that (2) in Proposition~\ref{prop:equi-uni-erg} implies that $\pi$ is the unique invariant probability in the sense that $$\pi(A)=\int P(x,A) \, d\pi(x) \quad \text{for all $A\in\mathscr{B}$}.$$ Indeed, if $\nu$ is a probability satisfying also the above relation, then $$\nu(A)=\int P(x,A)\,d\nu(x) = \iint P(x,A)\, P(y,dx)\,d\nu(y) = \dots = \int P^n(x,A)\, d\nu(x).$$ Since $P^n(x,A)$ converges to $\pi (A)$ uniformly in $x$ as $n\to \infty$, the right-hand side of the above expression converges to $\pi(A)$, and therefore, $\nu=\pi$. Since $\pi$ is absolutely continuous with respect to $m$, let us write $d\pi= h \, dm$ with $h\in D(m)$. Then, \begin{align*} \int_A h(x) \,dm &= \pi (A) =\int P(x,A)\, d\pi = \int h(x) P(x,A) \, dm \\ &= \int h(x) P^*1_A(x) \, dm = \int_A Ph(x) \, dm. \end{align*} This implies that $Ph=h$. Moreover, by the uniqueness shown above, arguing as in~\eqref{eq:uniqueness}, we conclude that $h$ is the unique invariant density of $P$. \end{rem} \section{Examples and counterexamples: proof of propositions} \label{s:examples} In this section, we prove all the propositions given in Section \ref{ss:e}.
\subsection{Proof of Propositions \ref{prop:ucd} and \ref{prop:dd}: Additive type noise} We first prove Proposition~\ref{prop:ucd}. Recall that our random map is given by~\eqref{eq:dd}. To see {\tt(UC)}, notice that if $x\neq 0$ then $$\{ t \in T : f_t(x) \in A\} = \{ t \in T : f_0(x) +t \in A \pmod{1}\}$$ is the translated set of $A$ by $f_0(x)$. Hence, by the invariance of the Lebesgue measure for translations, \begin{equation}\label{eq:0617} P(x, A)= p\left(\{t \in T: f_t(x) \in A\}\right) =p(A) =m(A). \end{equation} Now, {\tt(UC)} immediately follows from the interpretation of {\tt(UC)} in Section \ref{sec:(UC)} (e.g.~for $n_0=1$, $\delta =\varepsilon =\frac{1}{2}$ and $\mu =m$). On the other hand, since $\{ t\in T: f_t(0) =0\} =T$, we have $P(0,\{0\}) = 1$, namely $P(0, \cdot )= \delta _0$. Thus it follows from \eqref{eq:recurence} that $P^n(0,\{0\}) = 1$ for any $n\geq 1$. This means that {\tt(D)} is violated with $x=0$ and $A=\{ 0\}$. \begin{figure} \centering \begin{subfigure}{0.47\textwidth} \includegraphics[width=\textwidth]{fig_prop_1_11_a} \caption{The maps $\iota$ and $f_0$ on $[-1,1]$.} \label{fig:eg_X_a} \end{subfigure} \hfill \begin{subfigure}{0.47\textwidth} \includegraphics[width=\textwidth]{fig_prop_1_11_b} \caption{The map $f_t$ on $[-1,1]$.} \label{fig:eg_X_b} \end{subfigure} \caption{Illustrations of the random map $f$ in Proposition~\ref{prop:ucd}.} \label{fig_eg_X} \end{figure} We next prove Proposition \ref{prop:dd}. Recall the random map displayed in Figure~\ref{fig_eg_X}. Notice that $p(A) =2m(A) $ for any Borel set $A\subset X_+$. Furthermore, by the argument obtaining \eqref{eq:0617}, \[ p\left(\left\{t : \widetilde f_t(y) \in B\right\}\right) = 2m(B) \quad \text{for any $y\in X_+$ and Borel set $B\subset X_+$} \] and thus \[ P(x,A) = \begin{cases} p\left(\left\{t : \widetilde f_t(x) \in \iota (A)\right\}\right) = 2m(\iota (A)) =2m(A) & \text{for $x\in X_+$}\\ p\left(\left\{t : \widetilde f_t(\iota (x)) \in A\right\}\right) = 2m(A)& \text{for $x\in X_-$} \end{cases} \] for each $A\in \mathscr B$. By \eqref{eq:recurence}, $P^n(x, A) =2m(A)$ for each $n\geq 1$, $x\in X$ and $A\in \mathscr B$. Therefore, obviously {\tt (D)} holds (e.g.~for $n_0=1$, $\delta =\varepsilon =\frac{1}{2}$ and $\mu =m$). On the other hand, given $n_0\geq 1$, $0< \varepsilon <1$, $\delta > \frac{1}{2}$ and a probability measure $\mu$, it holds that $\mu (X_+) \leq \frac{1}{2}$ or $\mu (X_-) \leq \frac{1}{2}$. For simplicity assume that $\mu (X_+) \leq \frac{1}{2}$, and set $A=X_+$. Then, $\mu (A)<\delta $ but $P^{n_0}(x,A) =2m(A) =1> \varepsilon$. This means that {\tt (D*)} is violated. \subsection{Proof of Propositions \ref{prop:multi:ex1} { and \ref{prop:0707}}: Multiplicative noise} Recall condition~(b) in Remark~\ref{rem:gAraujo}. {Also recall that condition almost-(b) is the natural relaxation of the condition~(b) from ``for all $x\in X$'' to ``for $m$-almost every $x\in X$''.} Let $f$ be the random map given in \eqref{eq:0621a}, i.e.~the random perturbation of a measurable map $f_0: [0,1]\to [0,1]$ by multiplicative noise. To prove Proposition \ref{prop:multi:ex1}, we need the following lemma. \begin{lem}\label{lem:multiple} Denote the set of fixed points and zeros of $f_{0}$ by $F$ and $Z$, respectively, that is, $F=\left\{x : f_{0}(x)=x\right\}$ and $Z=\left\{x : f_{0}(x)=0\right\}$. Then, the following holds: \begin{enumerate} \item If $0 \in F \cap Z$, then $f$ does not satisfy (b). \item If $Z=\emptyset$, then $f$ satisfies (b). \item If $m(Z)=0$, then $f$ satisfies almost-(b).
\end{enumerate} \end{lem} \begin{proof} Assume that $0 \in F \cap Z$. Then, $\left\{f_{\omega}^{n}(0) : \omega \in \Omega\right\}=\{0\}$ for all $n \geq 1$, implying that $P^{n}(0,\cdot )=\delta _0$ for all $n \geq 1$. Since $\delta_{0}$ is not absolutely continuous with respect to the Lebesgue measure $m$ of $[0,1]$, (b) does not hold. Take a point $x$ such that $x\not\in Z$. Then $f_{0}(x)>0$, so $\left\{f_{\omega}(x) : \omega \in \Omega\right\}$ is a nondegenerate closed interval, and $P(x, \cdot)$ is the normalized Lebesgue measure on that interval, implying that $P(x, \cdot)$ is absolutely continuous for all $x \not\in Z$. Hence, $Z=\emptyset$ implies (b), and $m(Z)=0$ implies almost-(b). \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:multi:ex1}] Let $f_0$ be the measurable map in (1) or (2) of Proposition~\ref{prop:multi:ex1}, that is, $f_0 (x) = \frac{x}{2}$ for $x\in (0,1]$ and $ f_0(0) =0$ or $\frac{1}{2}$. To see that $\mathcal L_f$ does not satisfy {\tt (S)} for this $f$, recall that for a given Markov operator $P:L^1(m)\to L^1(m)$, according to Theorem~\ref{GST}, the existence of a $P$-invariant density is equivalent to the following condition: there is $\delta>0$~such~that $$\sup_{n\geq 1} \int_A P^n1_X \, dm < 1 \quad \text{for any} \ \ A\in \mathscr{B} \ \ \text{with} \ \ m(A)<\delta.$$ By duality, this implies that $$ \int P^{n*}1_A \, dm <1 \quad \text{for all $n\geq 1$ and $A\in \mathscr{B}$ with $m(A)<\delta$.} $$ Since $P^{n*}1_A(x) =P^n(x, A)\leq 1$ for any $x\in X$, we conclude the following necessary condition for the existence of a $P$-invariant density: \begin{enumerate}[rightmargin=1.5cm, label=($\star$)] \item \label{*} there is $\delta>0$ such that for all $n\geq 1$ and $A\in\mathscr{B}$ with $m(A) <\delta $, there exists $E\in \mathscr{B}$ with $m(E)>0$ satisfying that $P^{n}(x,A) < 1$ for all $x\in E$. \end{enumerate} We will prove that $\mathcal L_f$ does not satisfy~($\star$). Notice that, for that purpose, it suffices to show that for any $\delta>0$ there are $n_0\geq 1$ and $A_0\in\mathscr{B}$ such that $m(A_0) <\delta $ but $P^{n_0}(x, A_0) =1$ for $m$-almost every $x$. Observe also that since $0< f_t(x) \leq f_0(x)$ for all $t\in[0,1]$ and $x\neq 0$, we have that, if $x\neq 0$ then \[ 0<f^n_\omega(x)\leq f_0^n(x) =\frac{1}{2^n} \quad \text{ for any $n\geq 1$ and $\omega\in \Omega$.} \] Hence, the support of $P^n(x,\cdot) =\mathbb P(\{ \omega : f^n_\omega(x) \in \, \cdot \, \})$ is contained in the interval $(0,\frac{1}{2^n}]$ of length $\frac{1}{2^n}$ if $x\neq 0$. Therefore, for any $\delta>0$, by taking {$n_0> -\frac{\log \delta }{\log 2}$} and $A_0=(0,\frac{1}{2^{n_0}}]$, we have that $m(A_0)<\delta $ and $P^{n_0}(x, A_0) =1$ whenever $x\neq 0$. From these observations, ($\star$) does not hold, and consequently, neither does {\tt (S)} for $\mathcal L_f$. In particular, due to the implications in Figure~\ref{fig:hierarchy}, {\tt (FPM)} does not hold for $\mathcal L_f$. Let us complete the proof of items (1) and (2) of Proposition \ref{prop:multi:ex1}. When $f_0$ is the continuous map in (1), obviously (a) in Remark~\ref{rem:gAraujo} holds and $Z=\{0\}$. So, by Lemma~\ref{lem:multiple}, $f$ does not satisfy (b) but satisfies almost-(b). When $f_0$ is the map in (2) having a discontinuity at $0$, (a) does not hold and $Z=\emptyset$. Therefore, by using Lemma~\ref{lem:multiple} again, we conclude that $f$ satisfies~(b). We next prove item (3) of Proposition \ref{prop:multi:ex1}. Let $f_0(x) =2x \pmod{1}$ for $x\in [0,1]$.
Then, since $0\in F\cap Z$, it follows from Lemma~\ref{lem:multiple} that $f$ does not satisfy (b). On the other hand, by \cite[Theorem 3.1]{Iwata2013}, we find that $\mathcal{L}_f$ satisfies {\tt (C)}. Therefore, $f$ satisfies {\tt (FPM)} by Theorem~\ref{thm:C} together with the implications in Figure~\ref{fig:hierarchy}. \end{proof} { Now, we will prove Proposition~\ref{prop:0707}. Hence, let $f$ be the random map given in~\eqref{eq:0707c}, i.e.~the random perturbation of a measurable map $f_0: [0,1]\to [0,1]$ by multiplicative type noise. First, we need the following lemma. \begin{lem}\label{lem:multiple2} Denote the set of fixed points of $f_{0}$ by $F$. The following hold: \begin{enumerate} \item $F= \emptyset$ if and only if $f$ satisfies (b); \item If $m(F)=0$, then $f$ satisfies almost-(b). \end{enumerate} \end{lem} \begin{proof} Take a point $x\not\in F$. Then $f_{0}(x) -x\neq 0$ and thus $I_x=\left\{f_{t}(x) :\, t \in T\right\}$ is a nondegenerate closed interval. Hence, $P(x, \cdot)$ is the normalized Lebesgue measure on the closed interval $I_x$. This implies that $P(x, \cdot)$ is absolutely continuous for all $x \not\in F$. On the other hand, if $x\in F$, then $\left\{f_{\omega}^{n}(x) :\, \omega \in \Omega\right\}=\{x\}$ for all $n \geq 1$. Consequently $P^{n}(x,\cdot )=\delta _x$ for all $n \geq 1$. Since $\delta_{x}$ is not absolutely continuous with respect to $m$, (b) does not hold. The claims follow immediately from these observations. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:0707}] Now let $f_0$ be the $C^1$ map given by the gradient flow with potential~\eqref{eq:0707b}. As mentioned, $f_0$ has infinitely many sinks $(s_k)_{k\geq 1}$ and sources $(r_k)_{k\geq 1}$ such that the union of the basins of $s_k$ for $f_0$ coincides with $X\setminus \{r_1, r_2,\ldots \}$. In particular, $F=\{r_k,s_k: \, k\geq 1\}$ is the set of fixed points of $f_0$. Hence, it follows from Lemma~\ref{lem:multiple2} that $f$ satisfies almost-(b), but not (b). Furthermore, take $x$ in the basin of $s_k$ for $f_0$. In view of \eqref{eq:0707c}, $f_t(x)$ is a convex combination of $x$ and $f_0(x)$. Thus, $f_t(x)$ lies in the closed interval between these points. In particular (since $f_0$ preserves orientation), it lies in the closed interval whose endpoints are $x$ and $s_k$. Arguing recursively, $f_\omega^n(x)$ lies in the closed interval whose endpoints are $s_k$ and $f_0^{n-1}(x)$ for any $\omega \in \Omega$ and $n\geq 1$. Since $f_0^n(x) \to s_k$, this implies that $f_\omega ^n(x)\to s_k$ as $n\to\infty$ for any $\omega \in \Omega$. This completes the proof of Proposition~\ref{prop:0707}. \end{proof}} \subsection{Proof of Proposition \ref{prop:figa}: Random expanding maps} The constrictivity of $\mathcal L_f$ under the condition \eqref{expanding_condition} is a direct consequence of the work by Boyarsky and Levesque \cite[Theorem 2]{BR1988}. Furthermore, it is not difficult to see, as follows, that $\mathcal L_f$ is not uniformly constrictive. Notice that, for any $n_0\geq 1$ and $x\in X$, \[ P^{n_0}(x, \cdot ) = \sum _{\omega \in \{ 1,\ldots ,k\}^{n_0}} p_{\omega} \delta _{f_\omega ^{n_0}(x)} \] where $p_{\omega} =p^{n_0}(\{\omega \})$ and $f_\omega ^{n_0} =f_{\omega _{{n_0}-1}}\circ \cdots \circ f_{\omega _0}$ for $\omega =(\omega _0,\ldots ,\omega _{{n_0}-1})$. Thus, if we consider the finite set $A:=\{f_\omega ^{n_0}(x) : \omega \in \{ 1,\ldots ,k\}^{n_0}\}$, then $P^{n_0}(x, A )=1$ but $\mu (A) =0$ for any $m$-absolutely continuous probability measure $\mu$.
By virtue of the interpretation of {\tt(UC)} in Section \ref{sec:(UC)}, this shows that {\tt (UC)} is violated. \subsection{Proof of Proposition \ref{prop:contracting}: Random contracting maps} For the proof of Proposition \ref{prop:contracting}, the next proposition is important. Recall the mixing property and exactness of an invariant density for a Markov operator given in Section~\ref{sec:1.4}. \begin{prop} \label{prop:mixing-no-exact} For $f$ in Proposition~\ref{prop:contracting}, $1_X$ is an $\mathcal L_f$-invariant density. Moreover, $1_X$ is mixing, but is not exact. \end{prop} \begin{proof} It is obvious that $1_X$ is an invariant density for $\mathcal L_f$. Furthermore, we immediately find that \begin{align*} \mathcal{L}_f^*\varphi(x)=\frac{1}{2}\mathcal{L}_1^*\varphi(x)+\frac{1}{2}\mathcal{L}_2^*\varphi(x)=\frac{1}{2}\varphi\left(\frac{x}{2}\right)+\frac{1}{2}\varphi\left(\frac{x+1}{2}\right) \end{align*} which coincides with the Perron--Frobenius operator $\mathcal L_g$ of the (deterministic) dyadic map $g(x)=2x\pmod{1}$. Then, for any $A,B\in\mathscr{B}$, we have \begin{align*} \int_X \mathcal{L}_f^n 1_A\cdot 1_B \, dm = \int_X 1_A\cdot \mathcal L_g^n1_B \, dm \to m(A)m(B) \end{align*} as $n\to\infty$ since $m$ is mixing for $g$. Therefore, (using a simple function approximation) we conclude that $1_X$ is mixing. On the other hand, for $\varphi =1_{A}-1_{A^c}$ with $A=[0,\frac{1}{2}]$ and any $n\in\mathbb{N}$, we have $$ \int_X|\mathcal{L}_f^n \varphi |dm=\sum_{k=0}^{2^n-1}\int_X\left|1_{\left[\frac{2k}{2^{n+1}},\frac{2k+1}{2^{n+1}}\right]}-1_{\left[\frac{2k+1}{2^{n+1}},\frac{2k+2}{2^{n+1}}\right]}\right|\,dm=\int_X1\, \,dm=1. $$ If $1_X$ were exact, then $\mathcal{L}_f^n\varphi \to \int \varphi\, dm =0$ in $L^1$-norm. Thus, the above computation implies that $1_X$ is not exact. \end{proof} Let us move to the proof of Proposition~\ref{prop:contracting}. It is well-known that, in the class of constrictive Markov operators preserving $1_X$, the mixing property implies exactness (cf.~\cite[Remark~5.5.1]{LM}). Thus, the above proposition already shows that $\mathcal L_f$ is not constrictive. As another proof, we can also check it directly as follows. Recall the Dunford--Pettis interpretation of {\tt (C)} (given in Section \ref{sss:acc}): $\mathcal L_f$ is constrictive if and only if for any $\varepsilon>0$ there exists $\delta>0$ such that for any $h \in D(m)$, there is $n_0\geq 1$ for which $$ \int_A \mathcal{L}_f^nh \, dm<\varepsilon \quad \text{for any $n\geq n_0$ and $A\in\mathscr{B}$ with $m(A)\leq\delta$}. $$ The key point is that the set $A$ may depend on $n$; compare this with the definition of~{\tt(AC)}. Fix $0<\varepsilon<1$ and take any $\delta>0$. Let $A=[0,\delta]$ and $h =\frac{1}{\delta}1_A \in D(m)$. Then $$ \mathcal{L}_f^n h =\frac{1}{\delta}\sum_{k=0}^{2^n-1} 1_{\left[\frac{k}{2^n},\frac{k}{2^n}+\frac{\delta}{2^n}\right]}. $$ Thus, by letting $B_n= \operatorname*{supp} \mathcal{L}_f^n h $, we have $$ \int_{B_n}\mathcal{L}_f^n h \, dm = 1\quad \text{and } \quad m(B_n )=\delta \, \text{ for any $n\geq1$}, $$ which implies that $\mathcal{L}_f$ is not constrictive. On the other hand, since $\mathcal{L}_f$ is mixing with respect to $m$, we can conclude that $\mathcal{L}_f$ is asymptotically constrictive as follows. For any $\varepsilon>0$, set $\delta=\varepsilon$. Then for any $h\in D(m)$ and $B\in\mathscr B$ with $m(B)<\delta$, by the mixing property, we have $$ \lim_{n\to\infty}\int_B \mathcal{L}_f^nh \, dm=\int_Xh \, dm\cdot m(B) < \delta =\varepsilon.
$$ This completes the proof of Proposition \ref{prop:contracting}. \subsection{Proof of Proposition \ref{prop:rotations}: Random rotations} To lighten notation, in this subsection we identify any closed interval in ${S}^1$ with its corresponding set in $[0,1)$ (a closed interval or a set $[0,a]\cup [b,1)$ with some $a < b$). \subsubsection{Case (1): Irrational $\alpha-\beta$} It is sufficient to show that $\mathcal L_f$ is asymptotically stable, that is, $\|\mathcal L_f^n\varphi-1_X\|\to 0$ as $n\to\infty$ for any $\varphi\in D(m)$, since an asymptotically stable Markov operator is constrictive (see \cite{LM}). For $\varphi\in D(m)$, we can calculate $\mathcal L_f^n\varphi$ as \begin{align*} \mathcal L_f^n\varphi(x) &=\frac{1}{2^n}\sum_{k=0}^n\binom{n}{k}\varphi(x-(n-k)\alpha-k\beta) =\frac{1}{2^n}\sum_{k=0}^n\binom{n}{k}\varphi(x-n\alpha+k(\alpha-\beta))\\ &=\frac{1}{2^n}\sum_{k=0}^n\binom{n}{k}\mathcal L_{\beta-\alpha}^{k}\varphi(x-n\alpha) \end{align*} where $\mathcal L_\gamma$ denotes the Perron--Frobenius operator for the irrational rotation with angle $\gamma$. Since $\mathcal L_f1_X=1_X$ and $\alpha-\beta$ is irrational, using the mean ergodic theorem weighted with binomial coefficients (see \cite[Theorem 4.1 and Corollary 4.3]{dykema2009brown}), $$ \mathcal L_f^n\mathcal L_{-\alpha}^n\varphi =\mathcal L_f^n\varphi(x+n\alpha) =\frac{1}{2^n}\sum_{k=0}^n\binom{n}{k}\mathcal L_{\beta-\alpha}^{k}\varphi(x)\to 1_X \text{ in $L^1(m)$ as $n\to\infty$.} $$ It is clear that $\mathcal L_f\mathcal L_{-\alpha}=\mathcal L_{-\alpha}\mathcal L_f$, $\mathcal L_{-\alpha}1_X=1_X$ and $\|\mathcal L_{-\alpha}\psi\|=\|\psi\|$ for any $\psi\in L^1(m)$. Therefore, we have $$ \|\mathcal L_f^n\varphi-1_X\| =\|\mathcal L_{-\alpha}^n(\mathcal L_f^n\varphi-1_X)\| =\|\mathcal L_{-\alpha}^n \mathcal L_f^n\varphi-\mathcal L_{-\alpha}^n 1_X\| =\|\mathcal L_f^n\mathcal L_{-\alpha}^n\varphi-1_X\| \to 0 $$ as $n\to\infty$, which completes the proof. \subsubsection{Case (2): Irrational $\alpha$ and $\beta$ with rational $\alpha-\beta$} Let $\alpha-\beta=\frac{\ell}{N}$ for some integers $\ell$ and $N\geq 1$. {Fix $0<\delta<1$.} Set $B_i=[\frac{i}{N},\frac{i}{N}+\frac{\delta}{N}]$ for $i=0,\cdots,N-1$ and {$B=B_0\cup \dots \cup B_{N-1}$. Notice that $m(B)=\delta$.} We first show that $\mathcal L_f 1_{B}=1_{B+\alpha}$. {Indeed, observe that} for any $i=0,\dots,N-1$ there is a unique $j=0,\dots,N-1$ such that $B_i+\alpha=B_{j}+\beta$, where $A+t=[a+t,b+t]$ for $A=[a,b]$ and $t\in[0,1)$. {This follows for $j=i+\ell \pmod{N}$, since} $$ \left(\frac{i}{N}+\alpha\right)-\left(\frac{j}{N}+\beta\right)= \frac{i}{N}-\frac{j}{N}+\alpha-\beta=\frac{i}{N}-\frac{j}{N}+\frac{\ell}{N}=0\pmod{1}, $$ and such $j$ is unique. Then we have $$ \mathcal{L}_f 1_B = \frac{1}{2}\sum_{i=0}^{N-1}(1_{B_i+\alpha}+1_{B_i+\beta})=\frac{1}{2}\sum_{i=0}^{N-1}(1_{B_i+\alpha}+1_{B_{i}+\alpha})=\sum_{i=0}^{N-1}1_{B_i+\alpha}= 1_{B+\alpha}. $$ Moreover, we have that $\mathcal{L}_f^n1_B=1_{B+\alpha n}$ for any $n\geq 1$. Since $m(B)=\delta$, the function $\varphi\coloneqq\frac{1}{\delta}1_{B}$ belongs to $D(m)$. Since $\alpha\not\in \mathbb{Q}$, we can consider a sequence $\{n_j\}_{j\geq 1}$ such that $\alpha n_j\to 0 \pmod{1}$ as $j\to\infty$. Hence, $$ \int_B \mathcal{L}_f^{n_j}\varphi \, dm\to 1 \quad\text{as $j\to\infty$}. $$ Therefore, $\mathcal{L}_f$ does not satisfy {\tt (AC)}. We next prove that $\mathcal{L}_f$ is mean constrictive. It is clear that the function $1_X$ is invariant for $\mathcal{L}_f$.
Then, from Remark~\ref{rem:ergodic-implies-MC}, it is sufficient to show that {$1_X$ is an ergodic density}. Let $A\in\mathscr{B}$ be such that $\mathcal{L}_f^*1_A=1_A$, where $\mathcal{L}_f^*$ is the adjoint operator for $\mathcal{L}_f$. We will prove that $m(A)\in \{0,1\}$. Since obviously $1_A \in L^2(m)$, we may write $\varphi\coloneqq 1_A$ as a Fourier series $\varphi(x)=\sum_{n=-\infty}^\infty a_n e^{2n\pi i x}$. Since $$ \mathcal{L}_f^*\varphi(x)=\frac{1}{2}\mathcal{L}_1^*\varphi(x)+\frac{1}{2}\mathcal{L}_2^*\varphi(x)=\frac{1}{2}\varphi(x+\alpha)+\frac{1}{2}\varphi(x+\beta), $$ we can calculate \[ \mathcal{L}_f^*\varphi(x) =\frac{1}{2}\sum_{n=-\infty}^\infty a_n e^{2n\pi i (x+\alpha)}+\frac{1}{2}\sum_{n=-\infty}^\infty a_n e^{2n\pi i (x+\beta)}. \] Then $\mathcal{L}_f^*\varphi=\varphi$ implies \[ \frac{1}{2}\sum_{n=-\infty}^\infty a_n e^{2n\pi i (x+\alpha)}+\frac{1}{2}\sum_{n=-\infty}^\infty a_n e^{2n\pi i (x+\beta)}=\sum_{n=-\infty}^\infty a_n e^{2n\pi i x}. \] Due to the uniqueness of Fourier coefficients, it must be satisfied that, for any $n\in\mathbb{Z}$ with $a_n\neq 0$, \[ \frac{1}{2} e^{2n\pi i \alpha}+\frac{1}{2} e^{2n\pi i \beta}= 1. \] It can be rewritten as \[ e^{2n\pi i \gamma}+1=2 e^{-2n\pi i \beta}\quad \text{with $\gamma\coloneqq\alpha-\beta$}. \] That is, \[ \cos(2n\pi\gamma)+1=2\cos(-2n\pi\beta)\quad{\rm and}\quad \sin(2n\pi\gamma)=2\sin(-2n\pi\beta). \] Hence \[ \left(2\cos(2n\pi\beta)-1\right)^2+\left(-2\sin(2n\pi\beta)\right)^2=1, \] which leads to $\cos(2n\pi\beta)=1$. This equality holds only for $n=0$ since $\beta$ is irrational. Thus, $a_n$ must be $0$ for every $n\neq0$, which shows that $1_A$ is constant $m$-almost everywhere and hence $m(A)=0$ or $1$. \subsubsection{Case (3): Rational $\alpha$ and $\beta$} By the same arguments, we have $\mathcal L_f1_B=1_{B+\alpha}$. Unlike the case $\alpha,\beta\in[0,1]\backslash\mathbb{Q}$, we further have $\mathcal{L}_f 1_B = 1_{B}$ by taking $N$ to be the least common multiple of the denominators of $\alpha$ and $\alpha-\beta$. (For instance, if $\alpha=\frac{1}{3}$ and $\beta=\frac{2}{3}$, then $N=3$ and $B+\alpha=B$ modulo $1$.) Then, for any $\delta>0$, taking $\varphi=\frac{1}{\delta}1_B$, we get $m(B)=\delta$ but \[ \int_B A_n \varphi \, dm=\frac{1}{n}\sum_{j=0}^{n-1}\int_B \mathcal{L}_f^j\varphi \, dm=1, \] since the supports of $\mathcal{L}_f^n\varphi$ and $A_n\varphi$ are always contained in $B$. This shows that $\mathcal{L}_f$ does not satisfy~{\tt (MC)}. Finally, since $1_X$ is clearly invariant for $\mathcal{L}_f$, by the fact that {\tt (WAP)} is equivalent to the existence of an invariant density with the maximal support \cite[Theorem~3.1]{Toyokawa2020}, $\mathcal{L}_f$ satisfies {\tt (WAP)}. This completes the proof. \subsection{Proof of Proposition \ref{prop:figb}: Direct sums of random maps} Notice that the random dynamics $f_+$ on $X_+$ generated by $\tau _1^+$, $\tau _2^+$ satisfies {\tt (S)} because $f$ satisfies {\tt (S)} and preserves $X_+$. That is, $\mathcal L_{f_+}$ admits an invariant density, denoted by $h_+$. Let $h$ be the density function on $X$ given by $h(x)=0$ on $X_-$ and $h(x)=h_+(x)$ on $X_+$. Then, $h$ is obviously an invariant density of $\mathcal L_f$, namely, {\tt (S)} holds for $f$. On the other hand, {\tt (WAP)} does not hold. If $\mathcal L_f$ had an invariant density $h$ with the maximal support, then the integrable function $h_-$ on $X_-$, given by $ h_-(x) = h(x)$, would be invariant for $\mathcal L_{f_-}$, where $f_-$ is the random dynamics generated by $\tau _1^-$, $\tau _2^-$, due to the invariance of $X_-$ and $X_+$ for $f$. This contradicts the assumption that $f_-$ does not satisfy~{\tt(S)}.
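\begin{rem}
Let us record a detail implicit in the last step of the proof above (a short verification, with ``maximal support'' understood in the sense of~\eqref{eq:maximal}, and $h_-$ normalized as indicated below). If $\operatorname*{supp} h$ were contained in $X_+$ up to an $m$-null set, then for $m$-almost every $x\in X_-$ the invariance of $X_-$ would give $f^n_\omega(x)\in X_-$ for every $\omega\in\Omega$ and $n\geq 1$, whence (recall Lemma~\ref{lem:A2})
\[
\mathcal L_f^{n*}1_{\operatorname*{supp} h}(x)=\int 1_{\operatorname*{supp} h}\circ f^n_\omega(x)\,d\mathbb P(\omega)=0 \quad \text{for all $n\geq 1$},
\]
which contradicts the maximality of the support because $m(X_-)>0$. Therefore $c\coloneqq\int_{X_-}h\,dm>0$, and the normalized restriction $\frac{1}{c}\,h_-$ is the $\mathcal L_{f_-}$-invariant density used to reach the contradiction.
\end{rem}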
\subsection{Proof of Proposition \ref{prop:dce}: Deterministic systems} All the statements except the one for the baker's transformation can be proven by repeating the arguments of the previous subsections almost verbatim. The statement for the baker's transformation, on the other hand, follows immediately from the fact that the baker's transformation is mixing but not exact with respect to the Lebesgue measure $m$ (cf.~\cite{alexander1984fat}). \section{Annealed Perron--Frobenius operators} \label{sec:apendix} Let $(X,\mathscr{B},m)$ be a Polish probability space and consider the infinite product space $(\Omega,\mathscr{F})=(T^\mathbb{N},\mathscr{A}^{\mathbb{N}})$ of a measurable space $(T,\mathscr{A})$. Now, we introduce a probability measure $\mathbb{P}$ on $(\Omega,\mathscr{F})$ which is shift invariant but not necessarily a Bernoulli probability. Let $f:T\times X \to X$ be a measurable map. We consider iterations $f^n_\omega=f_{\omega_{n}}\circ \dots \circ f_{\omega_1}$ for $n\geq 1$ where the sequence $\omega=(\omega_i)_{i\geq 1}$ is chosen from $\Omega$ according to the probability $\mathbb{P}$. To emphasize this random choice, we call $f$ a $\mathbb{P}$-random map. We can introduce the annealed Perron--Frobenius operator ${\mathcal{L}}_f$~given~by \begin{equation}\label{A:eq00} {\mathcal{L}}_f \varphi = \int \mathcal{L}_\omega \varphi \, d\mathbb{P}(\omega) \quad \text{for $\varphi \in L^1(m)$} \end{equation} where $\mathcal{L}_\omega$ is the Perron--Frobenius operator of $f_\omega\coloneqq f^1_\omega=f_{\omega_1}$. Recall that we introduced $\mathcal{L}_\omega\varphi$ as the Radon--Nikod\'ym derivative of the signed measure $(f_\omega)_* m_\varphi$ with respect to $m$ where $ m_\varphi (B) = \int_B \varphi \, dm \quad \text{for all $B\in\mathscr{B}$}. $ That is, for each $\varphi \in L^1(m)$, \begin{equation} \label{A:eq0} \mathcal{L}_\omega\varphi \in L^1(m) \ \ \text{such that} \ \ (f_{\omega})_*m_\varphi(B) = \int_B \mathcal{L}_\omega\varphi \, dm \ \ \text{for all $B\in\mathscr{B}$}. \end{equation} In other words, $(f_{\omega})_*m_\varphi = m_{\mathcal{L}_\omega\varphi}$, i.e., \begin{equation}\label{A:eq1} \int_{(f_\omega)^{-1}(B)} \varphi \, dm = \int_B \mathcal{L}_\omega\varphi \, dm \quad \text{for all $B\in\mathscr{B}$}. \end{equation} The following lemma characterizes $\mathcal{L}_f$ in similar terms. \begin{lem} \label{lem:A0} For each $\varphi\in L^1(m)$, ${\mathcal{L}}_f\varphi$ is the Radon--Nikod\'{y}m derivative of the measure $\bar{m}_\varphi$ with respect to $m$ where $$ \bar{m}_\varphi (B) = \int (f_\omega)_* m_\varphi(B) \, d\mathbb{P} \quad \text{for all $B\in\mathscr{B}$}. $$ That is, $$ \mathcal{L}_f\varphi \in L^1(m) \ \ \text{such that} \ \ \bar{m}_\varphi(B) = \int_B {\mathcal{L}}_f\varphi \, dm \ \ \text{for all $B\in\mathscr{B}$}. $$ In particular, \begin{equation}\label{eq:estationary} {\mathcal{L}}_f\varphi = \varphi \quad \iff \quad m_\varphi(B) = \int (f_\omega)_* m_\varphi (B) \, d\mathbb{P} \quad \text{for all $B\in\mathscr{B}$}. \end{equation} \end{lem} \begin{proof} According to the definition of the annealed operator $\mathcal{L}_f$ in~\eqref{A:eq00} and the Perron--Frobenius operator of $f_\omega$ in~\eqref{A:eq0}, it holds $$ \int_B \mathcal{L}_f\varphi \, dm = \int_B \int \mathcal{L}_\omega\varphi \, d\mathbb{P} \,dm = \int \int_B \mathcal{L}_\omega\varphi \, dm \,d\mathbb{P} = \int (f_\omega)_*m_\varphi (B)\, d\mathbb{P}. $$ This proves the first part of the lemma.
The second part follows immediately by observing that $\mathcal{L}_f\varphi=\varphi$ if and only if $\int_B \mathcal{L}_f \varphi \, dm = m_\varphi(B)$ for all $B\in\mathscr{B}$. Hence, the above equation reads as ${m}_\varphi =\bar{m}_\varphi$ and thus~\eqref{eq:estationary} follows. \end{proof} \begin{rem} Since the Perron--Frobenius operator also acts on the space of finite signed measures as $$ \mathcal{L}_f\mu (B)= \iint 1_B \circ f_\omega (x) \, d\mathbb{P}(\omega)\, d\mu(x) \quad \text{for all $B\in\mathscr{B}$,} $$ the equivalence~\eqref{eq:estationary} can be generalized as follows: \begin{equation*}\label{eq:estationarymeasure} {\mathcal{L}}_f\mu = \mu \quad \iff \quad \mu(B) = \int (f_\omega)_* \mu (B) \, d\mathbb{P} \quad \text{for all $B\in\mathscr{B}$}. \end{equation*} \end{rem} \begin{rem} \label{rem:anexo} Note that when $\mathbb{P}=p^\mathbb{N}$ is a Bernoulli measure on $\Omega$, $$ \int (f_\omega)_* m_\varphi (B) \, d\mathbb{P} = \int (f_{\omega_1})_*m_\varphi (B) \, dp(\omega_1). $$ Thus, from~\eqref{eq:estationary}, $\varphi$ is an $\mathcal{L}_f$-invariant density if and only if $m_\varphi$ is an $f$-stationary measure in the sense introduced in~\eqref{def:stationary} that is absolutely continuous~with~respect~to~$m$. \end{rem} Let us write \begin{equation*} \mathcal{L}^n_\omega = \mathcal{L}_{\omega_n}\circ \dots \circ \mathcal{L}_{\omega_1} \quad \text{for $n\geq 1$ and $\omega\in \Omega$,} \end{equation*} and let $\sigma$ denote the left shift on $\Omega$, which preserves $\mathbb{P}$. As usual, we introduce the adjoint operator of ${\mathcal{L}}_f:L^1(m)\to L^1(m)$ as the operator ${\mathcal{L}}^{\,*}_f:L^\infty(m)\to L^\infty(m)$ satisfying $$ \langle {\mathcal{L}}^{\,*}_f\psi, \varphi \rangle = \langle \psi, {\mathcal{L}}_{f}\varphi \rangle \quad \text{for $\psi\in L^\infty(m)$ and $\varphi \in L^1(m)$ \ \ where \ \ $\langle h,g\rangle = \int hg \, dm$.} $$ Recall that since $(\mathcal{L}^n_f)^*=(\mathcal{L}^*_f)^n$, where $\mathcal{L}^n_f$ and $(\mathcal{L}^*_f)^n$ are the $n$-th iterations of $\mathcal{L}_f$ and $\mathcal{L}^*_f$ respectively, we simply write it as $\mathcal{L}^{n*}_f$. Similarly, we write $(\mathcal{L}^{n}_\omega)^*=\mathcal{L}^{n*}_\omega$. \begin{lem} \label{lem:A2}For any $n\geq 1$, $\varphi\in L^1(m)$ and $\psi\in L^\infty(m)$, it holds \begin{enumerate}[itemsep=0.25cm] \item \label{lem:A2item1} $\mathcal{L}_\omega^n=\mathcal{L}_{f^n_\omega}$; \item \label{eq:P*} $\mathcal{L}^{n*}_\omega \psi = \psi \circ f^n_\omega$; \item \label{lem:A2item4}$\langle {\mathcal{L}}^{n*}_\omega\psi, \varphi \rangle = \langle \psi, {\mathcal{L}}^{\, n}_{\omega}\varphi \rangle$. \end{enumerate} Moreover, if $\mathbb{P}$ is a Bernoulli measure on $\Omega$ (i.e., an infinite product $p^\mathbb{N}$ of a probability measure ${p}$ on $T$), it holds \begin{enumerate}[resume] \item \label{lem:A2item2} ${\mathcal{L}}_f^n \varphi = \int \mathcal{L}^n_\omega \varphi \, d\mathbb{P}$; \item \label{lem:A2item3} ${\mathcal{L}}_f^{n*} \psi = \int \mathcal{L}_\omega^{n*} \psi \, d\mathbb{P}$. \end{enumerate} \end{lem} \begin{proof} Let us prove (1). To do this we proceed by induction. It is clear that (1) holds for $n=1$. Assume that $(f^{n-1}_\omega)_* m_\varphi = m_{\mathcal{L}^{n-1}_\omega\varphi}$. 
Then $$(f^n_\omega)_* m_\varphi=(f_{\sigma^{n-1}\omega})_* ((f^{n-1}_\omega)_* m_\varphi) =(f_{\sigma^{n-1}\omega})_*m_{\mathcal{L}^{n-1}_\omega\varphi}.$$ From this, having in mind~\eqref{A:eq1}, we get \begin{align*} (f^n_\omega)_* m_\varphi (B) = \int_{(f_{\sigma^{n-1}\omega})^{-1}(B)} \mathcal{L}_{\omega}^{n-1}\varphi \, dm = \int_B \mathcal{L}_{\sigma^{n-1}\omega} \circ \mathcal{L}_{\omega}^{n-1}\varphi \, dm = \int_B \mathcal{L}_{\omega}^{n}\varphi \, dm. \end{align*} This implies that $\mathcal{L}_{f^n_\omega}=\mathcal{L}_\omega^n$. Now, (2) follows immediately from (1) since $\mathcal{L}_\omega^{n*}= (\mathcal{L}_\omega^{n})^* =(\mathcal{L}_{f^n_\omega})^*$. Hence, as is well-known, $(\mathcal{L}_{f^n_\omega})^*\psi=\psi \circ f^n_\omega$. Finally, observe that~(3) is just the duality relation. To conclude the proof, from now on we assume that the probability $\mathbb{P}$ is Bernoulli. \begin{claim} \label{claimA1} For any $n\geq 1$, $$m_{\mathcal{L}^n_f\varphi}(B)=\int(f^n_\omega)_*m_\varphi(B) \, d\mathbb{P} \quad \text{for all $B\in\mathscr{B}$ and $\varphi\in L^1(m)$}.$$ \end{claim} \begin{proof} We prove the claim by induction. For $n=1$, we need to prove that $m_{\mathcal{L}_f\varphi}(B)=\int(f_\omega)_*m_\varphi(B) \, d\mathbb{P}$. Observe that this is just Lemma~\ref{lem:A0}. Thus, we can assume that the claim holds for~$n-1$. Then, using again Lemma~\ref{lem:A0}, \begin{align*} m_{\mathcal{L}^n_f\varphi} (B) &= \int_B \mathcal{L}^n_f\varphi \, dm = \int_B \mathcal{L}_f(\mathcal{L}^{n-1}_f\varphi)\, dm = \int (f_\omega)_*m_{\mathcal{L}^{n-1}_f\varphi}(B)\, d\mathbb{P}. \end{align*} By induction we have that \begin{align*} \int (f_\omega)_*m_{\mathcal{L}^{n-1}_f\varphi}(B)\, d\mathbb{P} &= \iint (f^{n-1}_{\bar{\omega}})_* m_\varphi(f^{-1}_\omega(B))\, d\mathbb{P}(\bar\omega)\,d\mathbb{P}(\omega) \\ &= \iint (f^{}_\omega\circ f^{n-1}_{\bar{\omega}})_* m_\varphi(B)\, d\mathbb{P}(\bar\omega)\,d\mathbb{P}(\omega). \end{align*} Taking into account that $\mathbb{P}$ is a Bernoulli measure and that $f_\omega=f_{\omega_1}$ and $f^{n-1}_{\bar{\omega}}=f_{\bar\omega_{n-1}}\circ \dots \circ f_{\bar\omega_{1}}$ where $\omega=(\omega_i)_{i\geq 1}$ and $\bar{\omega}=(\bar\omega_i)_{i\geq 1}$, we have $$ \iint (f^{}_\omega\circ f^{n-1}_{\bar{\omega}})_* m_\varphi(B)\, d\mathbb{P}(\bar\omega)\,d\mathbb{P}(\omega) = \int (f^{n}_\omega)_* m_\varphi(B)\, d\mathbb{P}. $$ Putting together all the above equations, we get the claim. \end{proof} Now we conclude (4). To do this, we use the first item in this lemma, the fact that the Perron--Frobenius operator of $f^n_\omega$ satisfies $m_{\mathcal{L}_{f^{n}_\omega}\varphi}=(f^n_\omega)_*m_\varphi$, and finally Claim~\ref{claimA1}. Then we get that \begin{align*} \int_B \int \mathcal{L}_\omega^n\varphi \, d\mathbb{P}\,dm &= \int \int_B \mathcal{L}_{f^{n}_\omega}\varphi \, dm \, d\mathbb{P} = \int (f^n_\omega)_*m_{\varphi}(B)\, d\mathbb{P} = \int_B \mathcal{L}^n_f\varphi \, dm. \end{align*} From this, since $B\in \mathscr{B}$ is arbitrary, we obtain that $\mathcal{L}_f^n\varphi=\int \mathcal{L}_\omega^n\varphi \, d\mathbb{P}$ as desired. Finally, we will prove (5). 
By (3), (4) and the duality, we have \begin{align*} \langle \mathcal{L}^{n*}_f\psi,\varphi \rangle &= \langle \psi,\mathcal{L}^{n}_f\varphi \rangle =\langle \psi,\int\mathcal{L}^{n}_\omega\varphi \, d\mathbb{P} \rangle = \int \langle \psi,\mathcal{L}^{n}_\omega\varphi \rangle \, d\mathbb{P} \\ &= \int \langle \mathcal{L}^{n*}_\omega\psi,\varphi \rangle \, d\mathbb{P} = \langle \int\mathcal{L}^{n*}_\omega\psi \, d\mathbb{P},\varphi \rangle. \end{align*} This implies $\mathcal{L}^{n*}_f\psi=\int\mathcal{L}^{n*}_\omega\psi \, d\mathbb{P}$ as required. \end{proof} \section{Generalized restrictions of Markov operators} \label{sec:Markov-restriction2} We will extend the theory of restrictions of Markov operators developed in Section~\ref{sec:Markov-restriction}. Roughly speaking, we want to replace the reference measure $m$ with an absolutely continuous invariant measure and restrict the Markov operator to the support of such a measure, recovering the Markov property among other ergodic properties. Let $P:L^1(m)\to L^1(m)$ be a Markov operator. Take $h \in D(m)$ and define the probability measure $m_h$ as usual by $dm_h=h\,dm$. Denote $S=\operatorname*{supp} h$ and note that $S\in \mathscr{B}$ with $m(S)>0$. Let us consider $L^1(m_h)=L^1(S, \mathscr{B}_{S}, m_{h})$ where $\mathscr{B}_{S}$ denotes the trace $\sigma$-algebra of $S$ in $\mathscr{B}$. As in Section~\ref{sec:Markov-restriction}, $L^1(m_h) \hookrightarrow L^1(m)$. Abusing notation, we denote this inclusion by $1_{S}$ and identify $L^1(m_h)$ with $$1_S(L^1(m_h))=\{\phi \in L^1(m): \operatorname*{supp} \phi \subset S\} \subset L^1(m).$$ We define the operator \begin{align*} P_{h}:L^1(m_h) \to L^1(m_h), \qquad P_{h}\phi = \frac{1_{S} P(1_S \phi h)}{h} \quad \text{for $\phi\in L^1(m_h)$}. \end{align*} Actually, $P_h$ acts on $L^1(m)$ by the same formula. Moreover, if $h=\frac{1_S }{ m(S)}$, then $P_h\phi=1_SP(1_S\phi)$. That is, $m_h$ and $P_h$ coincide with the measure $m_S$ and the operator $P_S$ introduced in~\eqref{eq:def-mS} and~\eqref{def:P_S}, respectively. It is clear that $P_h$ is a bounded linear positive operator on $L^1(m_h)$. Moreover, $P_h$ is a contraction on $L^1(m_h)$, that is, $\|P_h\|_{\rm op} \leq 1$. On the other hand, $L^{\infty}(m_h)\hookrightarrow L^{\infty}(m)$ by the same canonical inclusion $1_S$. As before, we identify $L^{\infty}(m_h)$ with $1_{S}(L^{\infty}(m_h))$. We also have that $P^*_{h}$, the adjoint operator of $P_{h}$, acting on $L^{\infty}(m_h)$ coincides with $1_{S}P^*1_{S}$. Indeed, for each $\varphi\in L^1(m_h)$ and $\psi\in L^{\infty}(m_h)$, \begin{align*} \int_S \varphi \cdot P_{h}^*\psi \, dm_h &=\int_S P_{h}\varphi\cdot \psi\, dm_h =\int_X 1_S \frac{P(\varphi h)}{h} \cdot \psi h \, dm \\ &=\int_X \varphi h\cdot P^*(1_{S}\psi) \, dm =\int_{S} \varphi \cdot P^*(1_{S}\psi) \, dm_h \end{align*} as desired. \begin{prop} \label{prop:P_h-Markov} Let $P:L^1(m)\to L^1(m)$ be a Markov operator and consider $S\in \mathscr{B}$ with $m(S)>0$. The following conditions are equivalent: \begin{enumerate} \item\label{PS:item1} $P^*1_{X\setminus S} \leq 1_{X\setminus S}$. Equivalently, $P^*1_{S} \geq 1_S$; \item\label{PS:item2} $\operatorname*{supp} P1_S \subset S$. Equivalently, $\operatorname*{supp} P^*1_{X\setminus S} \subset X\setminus S$; \item\label{PS:item3} $1_SP(1_S\phi)=P(1_S\phi)$ for all $\phi\in L^1(m)$. 
Equivalently, $$P_S\phi = P(1_S\phi) \ \ \text{for all $\phi\in L^1(m)$} \quad \text{or} \quad P(1_S\phi) \in L^1(m_S) \ \ \text{for all $\phi\in L^1(m)$};$$ \item\label{PS:item4} for every $\phi \in L^1(m)$ it holds that $$\int_S P(1_S\phi) \, dm = \int_S \phi \, dm;$$ \item\label{PS:item5} $P_S:L^1(m_S)\to L^1(m_S)$ is a Markov operator; \item \label{PS:item6} $P_h: L^1(m_h)\to L^1(m_h)$ is a Markov operator for any $h\in D(m)$ with $S=\operatorname*{supp} h$. \end{enumerate} \end{prop} \begin{proof} Let us prove the equivalence between the above conditions. First, we will prove \eqref{PS:item1} $\Rightarrow$ \eqref{PS:item2}. Suppose that~\eqref{PS:item2} does not hold. Then, there is $B \subset \operatorname*{supp} P1_S$ with $m(B\setminus S)>0$. From this and the condition~\eqref{PS:item1}, it follows $$ 0<\int_{B\setminus S} P1_{S} \, dm = \int 1_S P^*1_{B\setminus S} \, dm \leq \int 1_S P^*1_{X\setminus S} \, dm \leq \int 1_S 1_{X\setminus S} \, dm =0 $$ which is a contradiction. Note that the equivalence indicated in \eqref{PS:item1} immediately follows by using $1_X=P^*1_{S} + P^*1_{X\setminus S}$. Furthermore, the equivalence indicated in \eqref{PS:item2} follows immediately by the duality $$\int_{X\setminus S} P1_{S}\,dm = \int_{S} P^*1_{X\setminus S}\, dm.$$ Now, we will show that \eqref{PS:item2} $\Rightarrow$ \eqref{PS:item3}. Let $\phi \in L^1(m)$. Note that $\operatorname*{supp} 1_S\phi \subset S = \operatorname*{supp} 1_S$. Thus, by Lemma~\ref{supp} and condition~\eqref{PS:item2}, $\operatorname*{supp} P(1_S\phi) \subset \operatorname*{supp} P1_S \subset S$ up to an $m$-null set. In particular, $1_S P(1_S\phi) = P(1_S\phi)$. That is, condition~\eqref{PS:item3} holds. We will see that \eqref{PS:item3} $\Rightarrow$ \eqref{PS:item4}. Assuming~\eqref{PS:item3} and using that $P$ is a Markov operator, for any $\phi \in L^1(m)$, $$ \int_S P(1_S\phi)\, dm = \int 1_S P(1_S\phi) \, dm = \int P(1_S\phi)\, dm = \int 1_S\phi \, dm = \int_S \phi \, dm $$ and thus~\eqref{PS:item4} holds. Next, we will prove that \eqref{PS:item4} and \eqref{PS:item5} are equivalent. Clearly, $P_S$ is a bounded linear positive operator. Note that, by definition, for any $\phi\in L^1(m_S)$, $$ \int P_S\phi \, dm_S = \frac{1}{m(S)} \int_S P(1_S\phi) \, dm \quad \text{and} \quad \int \phi \, dm_S = \frac{1}{m(S)}\int_S \phi \, dm. $$ Thus, we clearly get that \eqref{PS:item4} implies $\int P_S\phi \, dm_S=\int \phi \, dm_S$ and vice versa. Next, we will obtain the implication \eqref{PS:item4} $\Rightarrow$ \eqref{PS:item6}. Similarly to the above, for any $\varphi\in L^1(m)$ and $h\in D(m)$ with $S=\operatorname*{supp} h$, $P_h$ is clearly a bounded linear positive operator. Moreover, taking into account that $\varphi h = 1_S \varphi h$, \begin{equation}\label{eq:Ph-final} \int P_h\varphi \, dm_h =\int_S P(1_S\varphi h) \, dm \quad \text{and} \quad \int \varphi \, dm_h = \int_S \varphi h \, dm. \end{equation} Hence, applying \eqref{PS:item4} to $\phi=\varphi h$, we immediately get $$\int P_h \varphi \, dm_h = \int \varphi \, dm_h.$$ Thus, $P_h$ is Markov and we conclude \eqref{PS:item6}. Since \eqref{PS:item6} $\Rightarrow$ \eqref{PS:item5} is clear, it remains to show \eqref{PS:item4} $\Rightarrow$ \eqref{PS:item1}. Suppose, on the contrary, that there is some $B\subset S$ such that $m(B)>0$ and $1_B P^* 1_S<1_B$. This implies that \[ \int_S 1_B\,dm>\int_S 1_BP^*1_S\,dm=\int_S P(1_B1_S)\,dm, \] which clearly contradicts the assumption \eqref{PS:item4} with $\phi=1_B$. 
\end{proof} \section{Ergodicity of invariant densities} \label{appendix:B} Let $P:L^1(m) \to L^1(m)$ be a Markov operator as introduced in Section~\ref{s:MO}. We associate $P$ with a new operator, which we will still denote by $P$, acting on the set of probability measures $\mu$ on $(X,\mathscr{B})$ which are absolutely continuous with~respect~to~$m$~by $$ P\mu(A)=\int P^*1_A(x) \, d\mu(x) \quad \text{for any $A\in \mathscr{B}$}. $$ Here $P^*:L^\infty(m)\to L^\infty(m)$ denotes the adjoint operator of $P:L^1(m)\to L^1(m)$. Recall that $D(m)=\{\phi \in L^1(m): \phi \geq 0, \ \|\phi\|=1 \}$ and, for a given $\phi \in D(m)$, we denote by $m_{\phi}$ the probability measure given by $dm_{\phi}=\phi\, dm$. We say that $\phi \in D(m)$ is a $P$-invariant density if $P\phi =\phi$. Similarly, a probability measure $\mu$ on $(X,\mathscr{B})$ is said to be $P$-invariant if $P\mu=\mu$. Denote by $D_P(m)$ and $I_P(m)$, respectively, the convex sets of $P$-invariant densities and $P$-invariant probability measures that are absolutely continuous with respect to $m$. \begin{lem} \label{lema:apendix-ergodicty} Let $\phi \in D(m)$. Then, $P\phi=\phi$ if and only if ${P}m_{\phi}=m_{\phi}$. In particular, $D_P(m)$ is identified with $I_{P}(m)$. \end{lem} \begin{proof} If $P\phi=\phi$, then for any $A\in\mathscr{B}$, $$ Pm_{\phi}(A)=\int P^*1_A \, dm_{\phi} = \int P^*1_A \phi \, dm = \int_A P\phi \, dm = \int_A \phi \, dm =m_\phi (A).$$ Conversely, if $Pm_{\phi} =m_{\phi}$, then for any $A\in \mathscr{B}$, $$ m_\phi(A) = \int P^*1_A \phi \, dm = \int_A P\phi \, dm. $$ Thus $P\phi$ is the Radon--Nikod\'{y}m derivative of $m_\phi$ with respect to $m$. Since this derivative is exactly $\phi$, we have $P\phi=\phi$. \end{proof} Recall that an invariant probability measure $\nu$ of a transformation $g$ of a measurable space $(Y,\mathscr{A})$, i.e., $\nu=g_*\nu$, is said to be \emph{ergodic} if $\nu(A)\in \{0,1\}$ for every set $A\in\mathscr{A}$ such that $A \subset g^{-1}(A)$; equivalently, for every $A\in \mathscr{A}$ such that $A=g^{-1}(A)$. Moreover, it is also well known that one can weaken both previous conditions of invariance of the set $A$ by simply asking that the relation holds up to a $\nu$-null set. That is, by asking that $\mathcal{L}^*_g1_A \geq 1_A$ or $\mathcal{L}^*_g1_A = 1_A$, respectively, $\nu$-almost everywhere, where $\mathcal{L}^*_g\psi=\psi \circ g$ for $\psi\in L^\infty(m)$ is the adjoint Perron--Frobenius operator of $g$. The following proposition shows some of these equivalences in a more general setting. \begin{prop}\label{thm:apendix-ergodic} Let $\mu\in I_P(m)$. Then the following conditions are equivalent: \begin{enumerate} \item $\mu(A)\in \{0,1\}$ for any $A\in \mathscr{B}$ such that $P^*1_A \geq 1_A$ $m$-almost everywhere; \item $\mu(A)\in \{0,1\}$ for any $A\in \mathscr{B}$ such that $P^*1_A \geq 1_A$ $\mu$-almost everywhere; \item $\mu(A)\in \{0,1\}$ for any $A\in \mathscr{B}$ such that $P^*1_A = 1_A$ $\mu$-almost everywhere. \end{enumerate} \end{prop} \begin{proof} (2) $\Rightarrow$ (3): Obvious. (3) $\Rightarrow$ (2): Let us consider $A\in \mathscr{B}$ such that $P^*1_A \geq 1_A$ $\mu$-almost everywhere. Since $\mu \in I_P(m)$, we have $\mu(A)=\int P^*1_A \, d\mu$. On the other hand, $\mu(A)=\int 1_A \,d\mu$. Thus, since $P^*1_A \geq 1_A$ $\mu$-almost everywhere, we get $0\leq \int (P^*1_A -1_A) \, d\mu =0$. It follows that $P^*1_A=1_A$ $\mu$-almost everywhere. Consequently, (3) implies (2). 
(2) $\Rightarrow$ (1): Let us consider $A\in \mathscr{B}$ such that $P^*1_A \geq 1_A$ $m$-almost everywhere. This means that there is $B\in \mathscr{B}$ such that $m(B)=0$ and $P^*1_A(x) \geq 1_A(x)$ for all $x\in X\setminus B$. Since $\mu$ is absolutely continuous with respect to $m$, then $\mu(B)=0$. Thus, $P^*1_A \geq 1_A$ $\mu$-almost everywhere. From this, it immediately follows that (2) implies (1). (1) $\Rightarrow$ (2): Let us consider $A\in \mathscr{B}$ such that $P^*1_A \geq 1_A$ $\mu$-almost everywhere. Since $\mu\in I_P(m)$, by Lemma~\ref{lema:apendix-ergodicty}, we have $\phi\in D_P(m)$ such that $\mu=m_\phi$ (i.e., $d\mu=\phi \, dm$). Then, by Proposition~\ref{prop:invariant}, we have that $P^*1_{X\setminus S} \leq 1_{X\setminus S}$ $m$-almost everywhere, where $S=\operatorname*{supp} \phi$. Then, \begin{equation}\label{eq:1SP} 1_SP^*1_A=1_SP^*1_{S\cap A} + 1_S P^*1_{(X\setminus S)\cap A}=1_SP^*1_{S\cap A} \leq P^*1_{S\cap A} \end{equation} $m$-almost everywhere. But observe that $\operatorname*{supp} 1_SP^*1_A \subset S$ and that $m$ and $\mu$ are equivalent on $S$. Thus, since $P^*1_A \geq 1_A$ $\mu$-almost everywhere, we have $1_{S\cap A}=1_{S}1_A \leq 1_SP^*1_A$ $m$-almost everywhere. Putting this together with~\eqref{eq:1SP}, we get that $1_{S\cap A} \leq P^*1_{S\cap A}$ $m$-almost everywhere. By assumption, $\mu(S\cap A)\,\mu(X\setminus (S\cap A))=0$. Consequently, since $\mu(A)=\mu(S\cap A)$ and $\mu(X\setminus A)=\mu(X\setminus (S\cap A))$, one has that $\mu(A)\mu(X\setminus A)=0$. \end{proof} Let $h \in D_P(m)$. From Lemma~\ref{lema:apendix-ergodicty}, we can apply Proposition~\ref{thm:apendix-ergodic} to $\mu=m_{h}$. \begin{dfn} \label{def:ergodic} Let $h \in D_P(m)$ and set $\mu =m_h$. If any of the equivalent items in Proposition~\ref{thm:apendix-ergodic} holds, we say that \emph{$\mu$ is an ergodic $P$-invariant measure}, \emph{$h$ is an ergodic $P$-invariant density}, \emph{$(P,\mu)$ is ergodic} or \emph{$(P,h)$ is ergodic}.\footnote{The reference measure $m$ is always implicit when we mention $P$.} \end{dfn} Observe that if $P1_X=1_X$, then according to Lemma~\ref{lema:apendix-ergodicty}, it holds that $Pm=m$, i.e., $m\in I_P(m)$. Then, Proposition~\ref{thm:apendix-ergodic} implies that $1_X$ is an ergodic $P$-invariant density if and only if $m(A)\in \{0,1\}$ for all $A\in\mathscr{B}$ with $P^*1_A=1_A$. Motivated by this last condition and following Krengel~\cite[page~126]{Krengel}, we introduce the following definition: \begin{dfn} A Markov operator $P:L^1(m)\to L^1(m)$ is called \emph{ergodic in the sense of Krengel} if the set of $P^*$-invariant sets $\mathscr{B}_i=\{A\in \mathscr{B}: P^*1_A=1_A\ \text{$m$-almost everywhere}\}$ is trivial, i.e., $\mathscr{B}_i=\{\emptyset, X\}$ up to an $m$-null set. \end{dfn} In view of this definition, the above observation can be written as follows: \begin{rem} \label{rem:apendix} If $P1_X=1_X$, then the following are equivalent: \begin{enumerate} \item $(P,1_X)$ is ergodic (or in other words, $(P,m)$ is ergodic); \item $P$ is ergodic in the sense of Krengel. \end{enumerate} In general, $P$ could be ergodic in the sense of Krengel while $1_X \not\in D_P(m)$; see, for instance, Remark~\ref{rem:reciprocal}. Note that Definition~\ref{def:ergodic} requires $1_X$ to be a $P$-invariant density in order for $(P,1_X)$ to be ergodic. 
\end{rem} Now, recalling the theory of restriction of Markov operators introduced in Section~\ref{sec:Markov-restriction} and generalized in Appendix~\ref{sec:Markov-restriction2}, we have the following: \begin{thm} \label{thm:ergodic2} Let $h\in D_P(m)$ and $S=\operatorname*{supp} h$. Then the following conditions are equivalent: \begin{enumerate} \item $(P,h)$ is ergodic (or in other words, $(P,m_h)$ is ergodic); \item $(P_h,1_S)$ is ergodic (or in other words, $(P_h,m_h)$ is ergodic); \item $P_h$ is ergodic in the sense of Krengel; \item $P_S$ is ergodic in the sense of Krengel. \end{enumerate} \end{thm} \begin{proof} Recall that $P^*_h \psi =1_S P^*(1_S\psi)$ for all $\psi \in L^\infty(m_h)$. (1) $\Rightarrow$ (3): Let us consider $A\in \mathscr{B}_S$ such that $P^*_h1_A = 1_A$ $m_h$-almost everywhere. Then $1_S P^*1_A=1_A$ $m_h$-almost everywhere and hence also $m$-almost everywhere. In particular, $P^*1_A \geq 1_A$ $m$-almost everywhere and thus, by assumption, $m_h(A)\in \{0,1\}$. (3) $\Leftrightarrow$ (2): Notice that $P_h1_S=1_S$. Thus, the equivalence follows from Remark~\ref{rem:apendix}. (2) $\Rightarrow$ (1): Let us consider $A\in \mathscr{B}$ such that $P^*1_A \geq 1_A$ $m_h$-almost everywhere. Since $h\in D_P(m)$, we have $P^*1_{X\setminus S} \leq 1_{X\setminus S}$ $m$-almost everywhere. Hence, as argued in the proof of Proposition~\ref{thm:apendix-ergodic} (see~\eqref{eq:1SP}), $$1_{S\cap A} \leq 1_SP^*1_A=1_SP^*1_{S\cap A} + 1_S P^*1_{(X\setminus S)\cap A}=1_SP^*1_{S\cap A} = P_h^*1_{S\cap A}$$ $m$-almost everywhere. Then, by assumption we have that $m_h(S\cap A)\in \{0,1\}$. But, since $m_h(A)=m_h(S\cap A)$, we conclude the implication. (3) $\Leftrightarrow$ (4): Notice that $P^*_S$ and $P^*_h$ are both of the form $1_SP^*1_S$ acting, respectively, on $L^1(m_S)$ and $L^1(m_h)$. Since the measures $m_S$ and $m_h$ are equivalent, we have that $P^*_S1_A=1_A$ $m_S$-almost everywhere if and only if $P^*_h1_A=1_A$ $m_h$-almost everywhere. From this, the equivalence follows immediately. \end{proof} \begin{dfn} A Markov operator $P:L^1(m)\to L^1(m)$ is called \emph{conservative} if there is $\varphi\in L^1(m)$ such that $\varphi>0$ on $X$ and $$ X=\left\{x \in X: \sum_{i=0}^\infty P^i\varphi(x)=\infty\right\} \quad \text{up to an $m$-null set}. $$ \end{dfn} \begin{prop} \label{prop:conservative} Let $h\in D(m)$ be a $P$-invariant density and set $S=\operatorname*{supp} h$. Then both $P_h$ and $P_S$ are conservative Markov operators. In particular, if $h\in D_P(m)$ is ergodic, then \begin{enumerate}[leftmargin=0.75cm] \item $P_S:L^1(m_S) \to L^1(m_S)$ is a conservative and ergodic Markov operator and $D_{P_S}(m_S)=\{h\}$; \item $P_h:L^1(m_h) \to L^1(m_h)$ is a conservative and ergodic Markov operator and \mbox{$D_{P_h}(m_h)=\{1_S\}$.} \end{enumerate} \end{prop} \begin{proof} Since $h$ is $P$-invariant, according to Proposition~\ref{prop:wap} and Proposition~\ref{prop:P_h-Markov}, $P_S$ and $P_h$ are Markov operators and $P_Sh=h$. Also, it is not difficult to check that $P_h1_S=1_S$. On the other hand, $P_S$ and $P_h$ are conservative since $h>0$ and $1_S>0$ on $S$, and $\sum_{i\geq 0} P_S^i h =\infty$ and $\sum_{i\geq 0} P_h^i1_S=\infty$ on $S$. If in addition $h$ is ergodic, then from Theorem~\ref{thm:ergodic2} we get that $P_S$ and $P_h$ are conservative and ergodic Markov operators. Finally, from~\cite[Theorem~A in Chapter~VI]{foguel2007ergodic} it follows that $D_{P_S}(m_S)=\{h\}$ and $D_{P_h}(m_h)=\{1_S\}$. 
\end{proof} \begin{rem} \label{rem:reciprocal} In view of the above proposition, if $h$ is an ergodic $P$-invariant density with $h>0$ on $X$ and $h \neq 1_X$, then $P$ is ergodic in the sense of Krengel but \mbox{$1_X \not \in D_{P}(m)=\{h\}$}. \end{rem} The next proposition is well known for the case of transformations. One can generalize it to the case of Markov operators in terms of invariant densities. \begin{prop} \label{prop:ergodic} Let $h \in D_P(m)$ and set $S=\operatorname*{supp} h$. Then, $h$ is ergodic if and only if $h$ is an extremal point of $D_P(m)$. That is, it cannot be decomposed as $$\text{$h = th_1 +(1-t)h_2$ with $t \in (0, 1)$ and $h_1, h_2 \in D_P(m)$, $h_1\neq h_2$.}$$ Moreover, if $h_1 \in D_P(m)$ is ergodic and $h_2\in D_P(m)$, then either $$h_1=h_2 \quad \text{or} \quad m(\operatorname*{supp} h_1 \cap \operatorname*{supp} h_2) =0.$$ \end{prop} \begin{proof} Suppose that $h$ is ergodic. By Proposition~\ref{prop:conservative}, $P_S$ is a Markov operator and $D_{P_S}(m_S)=\{h\}$. In particular, we have $P_Sg = P(1_Sg)$ for all $g \in L^1(m_S)$; see Proposition~\ref{prop:wap} or Proposition~\ref{prop:P_h-Markov}. Now, if $h=th_1+(1-t)h_2$ for some $0<t<1$ and $h_1,h_2\in D_P(m)$, then $h=h_1=h_2$. Indeed, observe that since $h_i$ is a $P$-invariant density, according again to Proposition~\ref{prop:conservative}, $P_{S_i}$ is a Markov operator and $P_{S_i}h_i=h_i$ where $S_i=\operatorname*{supp} h_i$ for $i=1,2$. Then, since $S_i\subset S$, we have $P_Sh_i=P(1_Sh_i) =P(1_{S_i}h_i)=P_{S_i}h_i=h_i$ for $i=1,2$. Consequently, it follows that $h=h_1=h_2$ since $D_{P_S}(m_S)=\{h\}$. This proves that $h$ is an extremal point of $D_P(m)$. Now, we will prove the converse. Suppose $h$ is not ergodic and set $\mu:=m_{h}$. Take a set $A\in \mathscr{B}$ such that $P^*1_A \geq 1_A$ $m$-almost everywhere and $0<\mu(A) < 1$. Hence, since $\mu(S\setminus A) =1 -\mu(A)$, we can write $h$ as a convex combination as follows: \begin{align*} h=\mu(A)\cdot\frac{1_Ah}{\mu(A)} + \mu({S\setminus A})\cdot\frac{1_{S\setminus A}h}{\mu({S\setminus A})}. \end{align*} Moreover, since $ (1_Ah)/\mu(A)$ and $(1_{S\setminus A}h)/\mu(S\setminus A)$ both have $L^1$-norm equal to one, to show that $h$ is not an extremal point of $D_P(m)$ it suffices to prove that $h 1_A$ and $h 1_{S\setminus A}$ are $P$-invariant. To prove this, observe first that since $h=1_A h+1_{S\setminus A} h$ and $h=Ph=P(1_Ah) + P( 1_{S\setminus A}h)$, one can write \begin{equation}\label{eq:anpendix-final} 1_A h - P(1_Ah) = P(1_{S\setminus A}h) -1_{S\setminus A} h. \end{equation} From Lemma~\ref{lem:P_S-Markov}, it follows that $\operatorname*{supp} P1_A \subseteq A$ and $\operatorname*{supp} P1_{X\setminus A} \subseteq X\setminus A$ up to an $m$-null set. Moreover, since $\operatorname*{supp} h = S = \operatorname*{supp} 1_{S}$, by the $P$-invariance of $h$ and Lemma~\ref{supp}, it follows that $ S=\operatorname*{supp} h = \operatorname*{supp} Ph = \operatorname*{supp} P1_{S}$ up to an $m$-null set. Hence, combining these facts, $\operatorname*{supp} P1_{S\setminus A}\subseteq S\setminus A$. Additionally, since $\operatorname*{supp} (1_Ah) \subseteq \operatorname*{supp} 1_A$ and $\operatorname*{supp} (1_{S\setminus A}h) \subseteq \operatorname*{supp} 1_{S\setminus A}$, Lemma~\ref{supp} implies that $\operatorname*{supp} P(1_A h) \subseteq \operatorname*{supp} P1_A \subseteq A$ and $\operatorname*{supp} P(1_{S\setminus A}h) \subseteq \operatorname*{supp} P1_{S\setminus A} \subseteq S\setminus A$. 
Consequently, $\operatorname*{supp} (1_A h - P(1_Ah)) \subseteq A$ and $\operatorname*{supp} (P(1_{S\setminus A}h) - 1_{S\setminus A}h ) \subseteq X\setminus A$, and thus it follows from~\eqref{eq:anpendix-final} that \begin{align*} P(1_Ah)=1_Ah \quad \text{ and } \quad P(1_{S\setminus A}h)=1_{S\setminus A}h \end{align*} as desired. Finally, we will prove the second part of the proposition. Let $h_1$ and $h_2$ be as in the statement. Clearly, if $h_1=h_2$ then $m(S)>0$ where $S=\operatorname*{supp} h_1 \cap \operatorname*{supp} h_2$. Conversely, suppose that $m(S)>0$. Since $P1_S \leq P1_{\operatorname*{supp} h_i}$, by Lemma~\ref{supp}, $\operatorname*{supp} P1_{S} \subset \operatorname*{supp} P1_{\operatorname*{supp} h_i} = \operatorname*{supp} Ph_i = \operatorname*{supp} h_i$ for $i=1,2$ and thus $\operatorname*{supp} P1_S \subset S$. According to Proposition~\ref{prop:P_h-Markov}, $P_S$ is a Markov operator, $P_S\phi=P(1_S\phi)$ and \begin{equation} \label{eq:contra} \int_{S} P(1_{S}\phi) \, dm = \int_{S} 1_{S}\phi \, dm \end{equation} for all $\phi\in L^1(m)$. This implies that $1_Sh_i$ is $P$-invariant. Indeed, since $Ph_i=h_i$, it follows that $1_SP(1_Sh_i)=P_Sh_i=P(1_{S}h_i) \leq h_i$. Hence $P(1_Sh_i) \leq 1_Sh_i$. On the other hand, if $P(1_{S}h_i)<1_{S}h_i$ on a set $A\in\mathscr{B}$ of positive $m$-measure, then $$ \int_{S} P(1_{S}h_i) \, dm < \int_{A} 1_Sh_i \, dm + \int_{S\setminus A} 1_Sh_i \, dm = \int_{S} 1_{S}h_i \, dm,$$ contradicting~\eqref{eq:contra}. Therefore, no such set $A$ exists and $P(1_Sh_i)=1_Sh_i$. Then, $g_i=\frac{1_Sh_i}{H_i} \in D_P(m)$ where $H_i=\int_S h_i \,dm$. Moreover, $m_{g_1}$ is absolutely continuous with respect to $m_{h_1}$ since $dm_{g_1}= \frac{1_S}{H_1}\, dm_{h_1}$. In particular, since $m_{h_1}$ is ergodic, we also have that $m_{g_1}$ is ergodic. Since $S=\operatorname*{supp} g_1$, we obtain from Proposition~\ref{prop:conservative} that $D_{P_S}(m_S)=\{g_1\}$. But, since $S=\operatorname*{supp} g_2$ and $g_2$ is $P$-invariant, we also have that $g_2 \in D_{P_S}(m_S)$. Thus, $g_2=g_1$. From this, it follows that $h_1=h_2$ as desired. \end{proof} \begin{rem} \label{rem:disjoin-support} Recall that the support of $h \in L^1(m)$ is only well-defined up to a set of measure zero. Thus, without loss of generality, from the above proposition, we can assume that the supports of any two ergodic $P$-invariant densities $h_1$ and $h_2$ are identical or disjoint. \end{rem} To conclude, we will relate the previous definition of ergodicity for invariant measures of Markov operators to the classical approach in random dynamical systems and probability theory. First of all, let us consider an annealed Perron--Frobenius operator $\mathcal{L}_f:L^1(m)\to L^1(m)$ associated with a random map $f:T\times X\to X$ (with respect to a Bernoulli probability measure $\mathbb{P}$). Recall that, as we showed in Remark~\ref{rem:P=Lf}, when $X$ is a Polish space, any Markov operator can be represented as an annealed Perron--Frobenius operator. As explained in Section~\ref{sec:(UC)}, associated with this map we have a transition probability $P(x,A)$ which coincides, for each $A\in \mathscr{B}$, with $\mathcal{L}_f^*1_A$ $m$-almost everywhere. Let us consider an absolutely continuous probability measure $\mu$ on $X$ which is $\mathcal{L}_f$-invariant. A classical approach to introducing the ergodicity of $\mu$ is first to lift this measure to an invariant measure of a deterministic dynamical system that represents $f$ in some sense. 
Then, ergodicity is defined through this lifted measure. A deterministic dynamical system that represents $f$ is its associated skew-product given by $$ F:\Omega \times X \to \Omega \times X, \quad F(\omega,x)=(\sigma\omega, f_\omega (x))$$ with $f_\omega = f(t,\cdot)$ if $\omega_1=t$, $\omega=(\omega_i)_{i\geq 1} \in \Omega=T^{\mathbb{N}}$ and $\sigma:\Omega\to \Omega$ the shift operator. The measure $\mu$ is lifted to the $F$-invariant measure $\mathbb{P}\times \mu$. But we can also consider the shift operator acting on $(X^\mathbb{Z},\mathscr{B}^\mathbb{Z})$. By the Kolmogorov extension theorem, one can construct a shift-invariant probability measure $\mathbb{P}_{\mu}$ on $X^\mathbb{Z}$ from the consistent sequence of measures $\{\mathbb{P}_{\mu}^n\}_{n\geq 0}$ where $\mathbb{P}_{\mu}^n$ is defined on $X^{2n+1}$ by $$ \int \varphi \, d\mathbb{P}^n_{\mu}= \int \varphi(x_{-n},\dots, x_{n}) \, P(x_{n-1},dx_{n})\dots P(x_{-n},dx_{-n+1})\mu(dx_{-n}). $$ The following result follows basically from classical results on Markov chains and random dynamical systems. We refer to~\cite[Section~5.2]{hairer2006ergodic} and~\cite[Proposition~1.3]{liu2006smooth} (see also~\cite{Kifer1986,O83}) for more details. \begin{thm} Let $f:T\times X\to X$ be a random map (with respect to a Bernoulli probability measure $\mathbb{P}$) and denote by $\mathcal{L}_f:L^1(m)\to L^1(m)$ the associated annealed Perron--Frobenius operator. Let $\mu \in I_{\mathcal{L}_f}(m)$. Then the following conditions are equivalent: \begin{enumerate} \item $(\mathcal{L}_f,\mu)$ is ergodic; \item $\mathbb{P}_\mu$ is an ergodic shift-invariant measure on $X^\mathbb{Z}$; \item $\mathbb{P}\times \mu$ is an ergodic $F$-invariant probability measure on $\Omega\times X$. \end{enumerate} \end{thm} \section{Mixing and exactness of invariant densities} \label{appendix:D} In this appendix we recall the definitions of mixing and exactness for (deterministic) dynamics, and relate them to mixing and exactness for Markov operators as defined in Section \ref{sec:1.4}. Although the content of this appendix should be folklore among experts, we include it for the convenience of the reader. Let $(X, \mathscr B, m)$ be a probability space. Let $g: X\to X$ be an $m$-nonsingular measurable map, and consider a $g$-invariant probability measure $\mu$ on $X$. Then, $\mu$ is called \emph{mixing} if $\mu (g^{-n} A \cap B)\to \mu(A)\mu(B)$ as $n\to\infty$ for any $A, B\in\mathscr B$ (cf.~\cite{W2000}). Recall also that $\mu$ is said to be \emph{exact} if \[ \bigcap_{n\ge0}g^{-n}\mathscr{B}=\{\emptyset,X\}\pmod{\mu}. \] By Lin's theorem (\cite{Lin}), $\mu$ is exact if and only if \[ \lim_{n\to\infty}\left\lVert\mathcal{L}_g^n\varphi\right\rVert_{L^1(\mu)}=0 \quad \text{for any $\varphi\in L^1_0(\mu) \coloneqq\left\{\psi\in L^1(\mu):\int_X\psi \, d\mu =0\right\}$.} \] Below, as in Appendix \ref{sec:Markov-restriction2}, we identify $\varphi\in L^1(m_S)$ with $1_S\varphi \in L^1(m)$ for an $m$-positive measure set $S$. \begin{lem} Assume that $\mu$ is a $g$-invariant probability measure absolutely continuous with respect to $m$ with density $h$. Let $S:=\operatorname*{supp} h$. 
Then, $\mu$ is mixing if and only if \[ \lim_{n\to\infty}\mathcal{L}_g^n\varphi=h\int_X\varphi \,dm \quad \text{weakly in $L^1(m)$ for any $\varphi\in L^1(m_S)$.} \] Furthermore, $\mu$ is exact if and only if \begin{equation}\label{eq:0701ex2} \lim_{n\to\infty}\mathcal{L}_g^n\varphi=h\int_X\varphi \, dm \quad \text{strongly in $L^1(m)$ for any $\varphi\in L^1(m_S)$.} \end{equation} \end{lem} \begin{proof} Note first that the mixing property of $\mu$ is equivalent to requiring that \[ \int \psi \circ g^{n} \cdot \varphi\,d\mu \to \int \psi \, d\mu \int \varphi \,d\mu \quad \text{as $n\to\infty$} \] for any $\psi \in L^\infty(\mu )$ and $\varphi \in L^1(\mu )$. Due to the duality between $\mathcal L_g$ and $\psi \mapsto \psi \circ g$, this can be written as \[ \int \psi \cdot \mathcal L_g^n( \varphi h)\,dm \to \int \psi \left(h \int (\varphi h )\, dm \right) dm \quad \text{as $n\to\infty$}. \] Hence, we immediately get the claim for mixing. We next show the claim for exactness. Since exactness is a hereditary property among absolutely continuous measures, it follows from Lin's theorem mentioned above that $\mu$ is exact if and only if \begin{equation}\label{eq:0701ex} \lim_{n\to\infty}\left\lVert\mathcal{L}_g^n\varphi\right\rVert_{L^1(m_S)}=0 \text{ for any $\varphi\in L^1_0(m_S)$.} \end{equation} Thus, it suffices to show the equivalence between \eqref{eq:0701ex} and \eqref{eq:0701ex2}. \eqref{eq:0701ex}$\Rightarrow$\eqref{eq:0701ex2}: Note that $\mathcal{L}_g^n\varphi-h\int_X\varphi \,dm=\mathcal{L}_g^n(\varphi-h\int_X\varphi \,dm)$ and, since $\int_X h\,dm=1$, \[ \int_X\left(\varphi-h\int_X\varphi \,dm\right)dm=\int_X\varphi \,dm-\int_Xh\,dm\cdot\int_X\varphi \,dm=0, \] so that $\varphi-h\int_X\varphi \,dm\in L^1_0(m_S)$ and \eqref{eq:0701ex} applies. \eqref{eq:0701ex2}$\Rightarrow$\eqref{eq:0701ex}: It is straightforward by taking $\varphi\in L^1_0(m_S)$ in \eqref{eq:0701ex2}. \end{proof} We finally remark that, if a map $g:X\to X$ admits an ergodic invariant probability measure $\mu$, then it holds by Birkhoff's pointwise ergodic theorem that \[ \lim _{n\to\infty} \frac{1}{n} \sum _{j=0}^{n-1} \int \varphi \circ g^j \cdot \psi \, d\mu = \int \psi \,d\mu \int \varphi \,d\mu \] for any $\varphi \in L^\infty(\mu )$, $\psi \in L^1 (\mu )$. Thus, in the case when $d\mu =h\, dm$ with some $h\in D(m)$, \[ \lim _{n\to\infty} \frac{1}{n} \sum _{j=0}^{n-1} \mathcal L_{ g} ^j\psi = h\int \psi \,d\mu \quad \text{weakly in $L^1(m)$ for each $\psi \in L^1(m_S)$.} \] On the other hand, since $h$ is an ergodic invariant density and the restriction of $\mathcal L_{ g }$ to $L^1(m_S)$ satisfies {\tt (WAP)}, it follows from Yosida--Kakutani's mean ergodic theorem (\cite[Theorem 1]{yosida1941operator}) that \[ \lim _{n\to\infty} \frac{1}{n} \sum _{j=0}^{n-1} \mathcal L_{ g } ^j\psi = h\int \psi \,d\mu \quad \text{strongly in $L^1(m)$ for each $\psi \in L^1(m_S)$.} \] \section*{Acknowledgement} We are sincerely grateful to V\'ictor Ara\'ujo, Hiroki Sumi, Mitsuhiro Shishikura and Masayuki Asaoka for their valuable comments. P.~G.~Barrientos was supported by grant PID2020-113052GB-I00 funded by MCIN, PQ 305352/2020-2 (CNPq) and JCNE E-26/201.305/2022 (FAPERJ). F.~Nakamura was supported by JSPS KAKENHI Grant Number 19K21834. Y.~Nakano was supported by JSPS KAKENHI Grant Numbers 19K14575, 19K21834, 21K03332 and 22K03342. H.~Toyokawa was supported by JSPS KAKENHI Grant Numbers 19K21834 and 21K20330. \bibliographystyle{siam}
{ "timestamp": "2022-09-21T02:13:26", "yymm": "2209", "arxiv_id": "2209.08714", "language": "en", "url": "https://arxiv.org/abs/2209.08714" }
\section{Introduction} \label{sec:intro} Object Re-IDentification (ReID) \cite{he2021transreid,bedagkar2014survey}, the task of matching a particular object across different camera views, has been widely studied due to its applications in visual surveillance, especially in the field of person and vehicle tracking. Most of the existing work on object ReID is mainly focused on tackling this problem in a normal surveillance domain, e.g., security cameras installed on top of a building. With the rapid development of the UAV industry, visual surveillance using UAV devices has received increasing attention, much like normal surveillance. However, ReID based on drones or UAVs \cite{zhang2020person,wang2019vehicle} has remained an under-researched topic. Unlike the normal domain, UAV-based object ReID is arguably more challenging because drone-based images often contain more uncertainty (e.g., view angles, camera distance, and weather conditions) than standard surveillance images. In the context of UAV-based object ReID, several obstacles arise from this increased uncertainty. As shown in Figure \ref{fig:intro-multi}, both scale and pose variations are important factors to consider because the altitude changes of the flying drones can be substantial. As the images are captured by UAVs flying at different altitudes, a ReID model operating on a single scale or pose cannot create a descriptive feature space that fully characterizes the entire aerial domain. Therefore, it is natural to develop a ReID model capable of learning multiscale and pose-invariant features. Moreover, to detect subtle changes of objects of interest (e.g., a person wearing different colored dresses, different glasses, shoes, etc.), we target a discriminative feature space that is generalizable to both coarse- and fine-grained features. \begin{figure}[t] \begin{center} \includegraphics[width=8cm]{multi-scale.png} \end{center} \caption{Example images showing the need for multiscale feature fusion. The images are taken by drones flying at multiple altitudes. A single scale can only capture specific parts of objects, lacking the capability to learn fine-scale features. Multiscale features can help a model learn a highly discriminative feature space by fusing information across multiple scales.} { \label{fig:intro-multi} } \vspace{-0.2in} \end{figure} Today, the convolutional neural network (CNN) is arguably the most popular backbone for designing ReID classifiers. However, the latest advances in deep learning have seen the great success of Transformers \cite{liu2021swin, zhou2021deepvit} in natural language processing and computer vision. This class of convolution-free models can be a better fit for the problem of object ReID thanks to self-attention. Most existing research on object (person, vehicle) ReID has been conducted for normal surveillance. Limited attention has been paid to object ReID in aerial surveillance. Most approaches (e.g., part-based convolutional baseline \cite{sun2018beyond}, generative adaptive alignment network \cite{wang2019rgb}, multiple granularity network \cite{wang2018learning}) follow CNN-based architectures. Transformers \cite{dosovitskiy2020image} are a new trend in object ReID, establishing strong baselines that beat current state-of-the-art methods. Motivated by the recent success of the transformer in normal surveillance object ReID \cite{he2021transreid}, we adopt an all-attention transformer-based approach for UAV-based object Re-ID. 
To address the challenges arising from aerial images, we propose an uncertainty-aware approach that exploits hierarchical feature maps and a global channel attention gate for object Re-identification in the UAV domain. Our proposed approach uses a modified version of the Pyramid Vision Transformer (PVT) \cite{wang2021pyramid}, a convolution-free architecture, as the backbone. Then we apply spatial attention \cite{woo2018cbam} to these multi-resolution feature maps to put more focus on important features, filtering out irrelevant ones. To incorporate camera information, we add an additional head to the PVT model and optimize the network jointly using camera ID, object ID, and center losses. Moreover, we normalize the style variance present in different cameras by incorporating Batch Instance Normalization (BIN) \cite{nam2018batch} in our model. Finally, with the help of the channel attention aggregation gate \cite{zhou2019omni}, the model selectively emphasizes the feature maps with higher weights. Inspired by recent work on modeling feature uncertainty for person Re-ID \cite{yu2019robust}, we propose to model aerial uncertainty by predicting the variance of the data as a model output. PVT-based object recognition with uncertainty estimation is particularly suitable for UAV-based Re-ID of people and vehicles in aerial surveillance and long-range biometrics \cite{ye2021deep}. Our \textbf{technical contributions} can be summarized as follows: \textbf{(I)} We explore a modified Pyramid Vision Transformer (PVT) tailored for object re-identification in UAV-based scenarios. The proposed model utilizes multiscale features to re-identify objects for aerial surveillance. \textbf{(II)} The spatial attention module helps focus on the relevant information of the feature maps by filtering out noise. The channel attention gate follows an adaptive fusion scheme, which dynamically selects the appropriate feature maps to exploit channel-wise dependencies. \textbf{(III)} We train our model considering uncertainty in terms of object identity and camera identity for multitask learning. The model is regularized using Batch Instance Normalization (BIN) to mitigate style variations across multiple cameras. \textbf{(IV)} Our proposed framework achieves state-of-the-art performance on two aerial surveillance datasets, PRAI-1581 \cite{zhang2020person} and VRAI \cite{wang2019vehicle}. \section{Methodology} \label{sec:format} In this section, we present our multiscale approach for object Re-Identification. An overview of our proposed method is outlined in Figure \ref{fig:model}. We propose a multi-task Pyramid Vision Transformer (PVT) \cite{wang2021pyramid}, a convolution-free backbone designed to learn multiscale feature maps. We use two heads for object and camera ID recognition, respectively. Batch Instance Normalization (BIN) is incorporated in our model to achieve camera style invariance. To make feature maps spatially aware of the location of important objects, we apply spatial attention \cite{woo2018cbam} to feature maps of different resolutions. Finally, we combine multiscale feature maps in an adaptive way using a Channel Attention Gate \cite{woo2018cbam, zhou2019omni}. To make the identifier robust to occlusion, we estimate the aleatoric uncertainty~\cite{wang2019aleatoric} present in the data while computing the loss for the model. 
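For concreteness, the attention and fusion components just described might be sketched in PyTorch as follows. This is our illustrative reconstruction rather than the authors' released code: the module names, the $1\times1$ projections to a common embedding dimension, the default channel widths and head sizes, and the single shared log-variance output are our assumptions, and the PVT backbone and BIN layers are omitted for brevity. \begin{verbatim}
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    # CBAM-style spatial attention: pool over channels, conv, sigmoid mask.
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                        # x: (B, C, H, W)
        avg = x.mean(dim=1, keepdim=True)        # average-pool over channels
        mx = x.amax(dim=1, keepdim=True)         # max-pool over channels
        mask = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * mask

class ChannelGate(nn.Module):
    # Shared channel-attention gate: GAP -> MLP with one ReLU-activated
    # hidden layer of reduced dimension -> sigmoid channel weights.
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                        # x: (B, C, H, W)
        w = self.mlp(x.mean(dim=(2, 3)))         # (B, C) channel weights
        return x * w[:, :, None, None]

class MultiScaleReIDHead(nn.Module):
    # Fuses the four PVT stage outputs and feeds the object ID head, the
    # camera ID head, and a log-variance output for the uncertainty terms.
    def __init__(self, stage_dims=(64, 128, 320, 512), embed_dim=512,
                 num_ids=782, num_cams=2):
        super().__init__()
        self.proj = nn.ModuleList(nn.Conv2d(c, embed_dim, 1)
                                  for c in stage_dims)
        self.sa = nn.ModuleList(SpatialAttention() for _ in stage_dims)
        self.gate = ChannelGate(embed_dim)       # one gate shared by scales
        self.id_head = nn.Linear(embed_dim, num_ids)
        self.cam_head = nn.Linear(embed_dim, num_cams)
        self.log_var = nn.Linear(embed_dim, 1)   # predicted data variance

    def forward(self, feats):                    # feats: 4 PVT feature maps
        fused = 0.0
        for f, proj, sa in zip(feats, self.proj, self.sa):
            g = self.gate(sa(proj(f)))           # attention, then shared gate
            fused = fused + g.mean(dim=(2, 3))   # GAP and sum across scales
        return self.id_head(fused), self.cam_head(fused), self.log_var(fused)

# Example with dummy PVT-shaped features for a 224x224 input:
feats = [torch.randn(2, c, s, s)
         for c, s in zip((64, 128, 320, 512), (56, 28, 14, 7))]
id_logits, cam_logits, log_var = MultiScaleReIDHead()(feats)
\end{verbatim} Summing the gated, globally pooled features is one simple way to realize the adaptive fusion; a learned weighted combination would be an equally plausible reading of the paper. 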
\subsection{Multi-task Pyramid Vision Transformer} \begin{figure*}[t] \begin{center} \includegraphics[width=\linewidth]{pvt-paper.png} \end{center} \caption{The overall architecture of our proposed model. The model uses a multi-task PVT as the backbone. We apply Spatial Attention (SA) on feature maps of individual scales, which helps to focus the network on the most informative features. The channel attention (CA) is a mini-network with shared weights. The Identification (ID) head is responsible for classifying the object instances, whereas the CamID head predicts camera ID labels, considering prediction uncertainty in both cases. The Batch Instance Normalization (BIN) layer helps the network to achieve style generalization among different cameras.} \label{fig:model} \vspace{-0.2in} \end{figure*} \label{sec:pagestyle} The Pyramid Vision Transformer (PVT) \cite{wang2021pyramid} generates hierarchical feature maps of multiple resolutions. This architecture has four stages, each responsible for generating a feature map of a certain resolution. Each stage consists of a patch embedding layer and a transformer encoder layer. Each transformer encoder layer is composed of a modified attention layer named spatial reduction attention (SRA) and a feedforward layer. SRA is designed to reduce memory cost so that high-resolution feature maps can be processed. First, the input image with a size of $H\times W\times3$ is divided into $\frac{HW}{4^2}$ patches of size $4\times4$. Then, each patch is flattened, passed through a linear projection, and processed by a number of transformer encoders. After that, we get the output feature map of size $\frac{H}{4}\times \frac{W}{4} \times C_1$. Similarly, we obtain the output feature maps of sizes $\frac{H}{8}\times \frac{W}{8} \times C_2$, $\frac{H}{16}\times \frac{W}{16} \times C_3$, and $\frac{H}{32}\times \frac{W}{32} \times C_4$, respectively. The original PVT uses non-overlapping patches. Inspired by the recent work of TransReID \cite{he2021transreid}, we generate overlapping patches using a sliding window (shifting) operation. The head in PVT is used to identify objects. Differently from the original PVT, we use an additional head for camera ID recognition. By doing this, we take advantage of the camera ID labels to fuse camera information into the original model. As the feature maps are produced by multiple transformer encoders, the resulting feature space is complex, and it can be difficult for the model to extract meaningful features for the ReID task. To obtain more refined feature maps so that the model can learn more informative spatial features, we apply spatial attention to the feature maps of different resolutions. In spatial attention, both MaxPooling and AveragePooling are performed along the channel dimension, and the pooled features are concatenated to generate the 2D spatial attention map. Additionally, we add Batch Instance Normalization (BIN) \cite{nam2018batch} to reduce style variations across multiple cameras. The purpose of using BIN is to normalize camera styles while preserving the discriminative features needed for object ReID. To fully exploit multiscale features, we combine feature maps of multiple resolutions using a global channel attention gate. To tune the channel weights of multiscale feature maps in a dynamic fashion, we use a shared channel attention gate to learn the inter-channel relationships within a feature map and the relative importance of the feature maps. 
We follow the design procedure mentioned in \cite{zhou2019omni} to implement this gate as a mini-network composed of global average pooling (GAP) layers and a multi-layer perceptron (MLP) with one ReLU-activated hidden layer of reduced dimension, followed by a sigmoid activation. \subsection{Loss Function Formulation} \label{sec:typestyle} To train our model, we use a combination of three uncertainty-aware losses: identity loss, camera ID loss, and center loss. The identity loss consists of a softmax cross-entropy loss and a triplet loss \cite{zhang2020rethinking,hermans2017defense}. Camera ID information is learned through a centroid triplet loss \cite{wieczorek2021unreasonable}, and the center loss considers the distance of features from their class centers.\\ \textbf{Uncertainty-aware ID loss:} This loss is computed using a hybrid of softmax cross-entropy loss and triplet loss \cite{hu2018person}, taking into account the uncertainty between classes. The uncertainty-aware softmax cross-entropy loss is calculated by \cite{zhang2020rethinking}: \begin{equation} \label{e:softmax} \mathcal{L}_{softmax}=\frac{1}{N}\sum_{i=1}^{N}\left[-\frac{1}{2\sigma(x_i)^2}\log p_{id}(h_i^{id},y_i)+ \frac{1}{2}\log\sigma(x_i)^2\right] \end{equation} where $N$ is the number of samples, $y_i$ is the ground truth label, and $\sigma(x_i)^2$ is the predicted variance of the data. We model our uncertainty-aware soft-margin triplet loss using the following formula \cite{hermans2017defense}: \begin{multline} \label{e:trip} \mathcal{L}_{triplet}=\frac{1}{\sigma(x_a)^2}\log\left(1+\exp(f(x_a,x_n)-f(x_a,x_p))\right)+\\ \frac{1}{2}\log\sigma(x_a)^2 \end{multline} where a triplet consists of $\langle x_a, x_p, x_n\rangle$, in which $x_a$ is the anchor image of a person, $x_p$ is the positive image belonging to the same person identity, and $x_n$ is the negative image belonging to a different person. Note that the triplet loss cannot measure the overall spatial distribution of features, while the cross-entropy loss does not have enough discriminative power among features. Therefore, it is better to combine these two as follows: \begin{equation} \label{e:id} \mathcal{L}_{ua\_id}=\mathcal{L}_{softmax}+\mathcal{L}_{triplet} \end{equation} \noindent\textbf{Uncertainty-aware camera ID loss:} To tackle intraclass variations arising from view angle, camera style, distance, etc., we apply a soft-margin version of the centroid triplet loss, since the class centroid can be considered as the mean representation for the retrieval task. Inspired by the unreasonable effectiveness of centroids in image retrieval \cite{wieczorek2021unreasonable}, we propose to calculate the uncertainty-aware camera ID loss based on centroids as follows: \begin{multline} \mathcal{L}_{ua\_camid}=\frac{1}{\sigma(x_a)^2}\log\left(1+\exp(f(x_a,c_n)-f(x_a,c_p))\right)+\\ \frac{1}{2}\log\sigma(x_a)^2, \end{multline} where $c_p$ and $c_n$ are the centroids of the positive and negative classes, respectively. \\ \noindent\textbf{Uncertainty-aware center loss:} We also employ an uncertainty-aware center loss \cite{luo2019bag} using the following formula: \begin{equation} \label{e:tl} \mathcal{L}_{ua\_center}=\frac{1}{2\sigma}\sum_{i=1}^{B} \left\lVert {f}_{t_i}-{c_{y_i}}\right\rVert^2_2, \end{equation} where $y_i$ is the label of the $i$-th image in a mini-batch, $B$ is the batch size, and $c_{y_i}$ is the center of the deep features of the $y_i$-th class. The overall loss can be formulated as follows. 
\begin{equation} \label{e:total} \mathcal{L}_{total}=\alpha_1\mathcal{L}_{ua\_id}+\alpha_2\mathcal{L}_{ua\_camid}+ \alpha_3 \mathcal{L}_{ua\_center} \end{equation} Here, $\alpha_1$, $\alpha_2$, and $\alpha_3$ are the regularization parameters for the corresponding losses; an illustrative code sketch of these loss terms is given after the experimental results below. \vspace{-0.2in} \section{Experiments} \label{sec:majhead} \subsection{Datasets} \vspace{-0.1in} We have conducted our experiments on two aerial surveillance datasets: the Person ReID for Aerial Imagery (PRAI-1581) dataset \cite{zhang2020person} and the Vehicle Re-identification for Aerial Image (VRAI) dataset \cite{wang2019vehicle}. \textbf{PRAI} is a newly released aerial surveillance dataset which contains 39,461 person images of 1581 classes captured by two UAV drones with a flight altitude ranging from 20 to 60 meters above the ground. The \textbf{VRAI} dataset consists of around 137,613 images of 13,022 vehicles taken by two UAV drones. This is the largest UAV-based vehicle dataset to date. \subsection{Implementation Details} \vspace{-0.1in} \label{ssec:subhead} For the PRAI dataset, the training set includes 19,523 images from 782 classes. For the test set, the numbers of query and gallery images are 4680 and 15258, respectively. For the VRAI dataset, the training set contains 66,113 images with 6,302 classes. For the test set, the query set contains 15,747 images and the gallery set contains 55,753 images. In our experiments, we investigated PVT, a multiscale transformer, as the backbone network. The backbone network is pre-trained on the ImageNet 2012 dataset. We train our model using 4 Titan 1080GTX GPUs. Before training, the images are resized to $224\times224$. The batch size is set to 128. The Adam optimizer is used with a momentum of 0.9 and a weight decay of $1e^{-4}$. The learning rate is initialized to 0.000015 with cosine rate decay. For performance evaluation, we use two metrics: Cumulative Matching Characteristic (CMC) and mean Average Precision (mAP). \begin{figure}% \centering \subfloat[\centering Person]{{\includegraphics[width=24mm,height=18mm]{00000021_0001_00000023.jpg} }}% \subfloat[\centering PVT-Large]{{\includegraphics[width=24mm,height=18mm]{fine_1.jpg} }}% \subfloat[\centering Ours]{{\includegraphics[width=24mm,height=18mm]{fine_1m.jpg} }}% \caption{Visualization of feature maps using guided back-propagation. The baseline PVT-Large \cite{wang2021pyramid} fails to retrieve fine features, while ours captures more discriminative features.}% \label{fig:example}% \vspace{-0.2in} \end{figure} \subsubsection{Comparison with the state-of-the-art} \label{sssec:subsubhead} We compare our proposed approach with the latest methods, and the results are reported in Table \ref{t:bigtable}. Our proposed approach outperforms the previous state-of-the-art on both the PRAI-1581 and VRAI datasets. For the VRAI dataset, our approach achieves 84.47\% Rank-1 accuracy and an mAP of 82.86\%. For the PRAI dataset, the Rank-1 accuracy is 59.18\% and the mAP is 51.45\%. We report results based on single-query settings for both datasets. It is worth mentioning that the gain over previous SOTA methods is consistent across object domains (person vs. vehicle) and performance metrics (Rank-1 vs. mAP). The performances of our model for mAP and different rank scores are presented in Figure \ref{fig:res} for the PRAI-1581 and VRAI datasets. It can be observed that PRAI is a lot more challenging than VRAI because of people's relatively smaller size, large pose variations, and deformable motion. 
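As referenced above, the three uncertainty-aware loss terms of the loss formulation subsection could be implemented along the following lines. This is an illustrative sketch under our own assumptions, not the authors' code: we realize $\sigma(x)^2$ as the exponential of a predicted per-sample log-variance (shape \texttt{(B,)}) for numerical stability, and we assume precomputed anchor--positive and anchor--negative distances \texttt{d\_ap} and \texttt{d\_an}. \begin{verbatim}
import torch
import torch.nn.functional as F

def ua_softmax_loss(logits, labels, log_var):
    # Eq. (1): per-sample cross-entropy attenuated by sigma^2 = exp(log_var),
    # plus a 0.5*log(sigma^2) penalty against inflating the variance.
    ce = F.cross_entropy(logits, labels, reduction="none")        # (B,)
    return (ce / (2.0 * torch.exp(log_var)) + 0.5 * log_var).mean()

def ua_soft_triplet_loss(d_ap, d_an, log_var):
    # Eqs. (2) and (4): soft-margin triplet log(1 + exp(d_ap - d_an)); for
    # the camera ID variant, the distances are taken to class centroids.
    sm = F.softplus(d_ap - d_an)                                  # (B,)
    return (sm / torch.exp(log_var) + 0.5 * log_var).mean()

def ua_center_loss(features, centers, labels, log_var):
    # Eq. (5): squared distance of each feature to its class center,
    # down-weighted by the predicted variance.
    diff = features - centers[labels]                             # (B, D)
    return (diff.pow(2).sum(dim=1) / (2.0 * torch.exp(log_var))).mean()

# Eq. (6): total = a1 * (ua_softmax + ua_triplet) + a2 * ua_camid
#                  + a3 * ua_center, with regularization weights a1, a2, a3.
\end{verbatim}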
\begin{figure}[htb] \vspace{-0.2in} \begin{minipage}[b]{.48\linewidth} \centering \centerline{\includegraphics[width=4.5cm,height=3.4cm]{cmc_3.png}} \centerline{(a) CMC curve}\medskip \end{minipage} \hfill \begin{minipage}[b]{0.48\linewidth} \centering \centerline{\includegraphics[width=4.5cm,height=3.2cm]{map.png}} \centerline{(b) mAP}\medskip \end{minipage} \caption{Performance of our model for PRAI-1581 and VRAI.} \vspace{-0.2in} \label{fig:res} \end{figure} \begin{table} \caption{Performance comparison with state-of-the-art methods for PRAI-1581 and VRAI.} \vspace{-0.2in} \begin{center} \begin{tabular}{l|c|c} \label{t:bigtable} Method (Person ReID)& Rank-1 & mAP \\ \hline\hline PCB \cite{sun2018beyond,zhang2020person} & 47.47 & 37.15\\ \hline SVDNet \cite{sun2017svdnet,zhang2020person}& 46.10& 36.70 \\ \hline MGN \cite{wang2018learning,zhang2020person} & 49.64& 40.86 \\ \hline OSNet \cite{zhou2019omni,zhang2020person}& 54.40& 42.10 \\ \hline TransReID \cite{he2021transreid}& 56.30& 49.81 \\ \hline \textbf{Ours} & \textbf{59.18} & \textbf{51.45}\\ \hline Method (Vehicle ReID) & Rank-1 & mAP \\ \hline\hline MGN \cite{wang2018learning,wang2019vehicle} & 67.84 & 69.49 \\ \hline RAM (ResNet-50) \cite{liu2018ram,wang2019vehicle} & 68.58 & 69.37\\ \hline RAM (VGG-16) \cite{liu2018ram,wang2019vehicle} & 72.05 & 57.33\\ \hline Multi-task+DP \cite{wang2019vehicle} & 80.30 & 78.63 \\ \hline TransReID \cite{he2021transreid}& 82.68 & 81.48 \\ \hline \textbf{Ours} & \textbf{84.47} & \textbf{82.86}\\ \hline \end{tabular} \end{center} \vspace{-0.2in} \end{table} \vspace{-0.2in} \section{Conclusion} \label{sec:print} We have presented an uncertainty-aware multiscale transformer-based approach for UAV-based object Re-ID. Our approach captures instance information at different levels of detail via a multi-task PVT-based backbone architecture. The proposed model addresses object Re-ID as a multitask learning problem using a unified framework trained with object ID, camera ID, and center losses. We quantitatively and qualitatively evaluated our proposed method on two UAV-based aerial surveillance datasets. The experimental results demonstrate the superiority of the proposed model over the previous state-of-the-art. \bibliographystyle{IEEEbib}
{ "timestamp": "2022-09-20T02:22:36", "yymm": "2209", "arxiv_id": "2209.08686", "language": "en", "url": "https://arxiv.org/abs/2209.08686" }
\section{Introduction}\label{sec:introduction} Quantum computing has recently received the spotlight because of its potential to solve complex problems faster than classical algorithms. In contrast to classical computation, which scales linearly in bits, quantum computing scales exponentially in qubits. This is because the superposition and entanglement of qubits allow multiple states to be represented simultaneously. As a result, quantum machine learning (QML) has attained linear or sublinear complexity, compared to the polynomial complexity of conventional machine learning, even in the current noisy intermediate-scale quantum (NISQ) era. Accordingly, various studies utilize QML to optimize their objectives~\cite{aimlab2022icte,aimlab2022dynn}. However, a challenging problem remains in utilizing QML, \textit{i.e.}, barren plateaus. Barren plateaus hinder the training of QML models, and prior work has shown that increasing the number of qubits in an ansatz induces barren plateaus~\cite{mcclean2018barren}. In this paper, we aim to optimize QML, especially quantum-based CNNs (QCNNs), using only a small, fixed number of qubits to prevent barren plateaus while maintaining reasonable performance. This approach is called fidelity-variation training (FV-Train) in this paper. The proposed FV-Train is evaluated numerically and experimentally, and we confirm that it achieves the desired performance improvements. \section{Related Work} \BfPara{Basic Quantum Gates}A qubit is a two-state quantum-mechanical computing unit whose quantum state is represented with the two basis states $|0\rangle ,|1\rangle$ on the Bloch sphere~\cite{guan2021quantum}. The quantum state can be described as $|\psi\rangle= \alpha|0\rangle + \beta|1\rangle$, where $|\alpha|^2+|\beta|^2=1$. To utilize a single quantum computing system, a classical datum $\delta$ is embedded into a quantum state with the rotation gates $R_x(\delta)$, $R_y(\delta)$, and $R_z(\delta)$, where each gate represents a rotation by $\delta$ about the $x$-, $y$-, and $z$-axis of the Bloch sphere, respectively. By entangling qubits with \textit{controlled-NOT} (CNOT) gates in a multi-qubit system, quantum computing achieves an advantage in processing speed. \BfPara{Quantum CNN (QCNN)}\label{sec:quanvolution} QCNN, also called a quanvolutional neural network, is a quantum version of a CNN for 2D images that uses quantum circuits as convolutional filters~\cite{DBLP:journals/qmi/HendersonSPC20}. To fully leverage massively parallel computation on the superposition of quantum states with a small number of qubits, QCNN designs random quantum circuit layers to extract the intrinsic features of target images. The main challenge in QCNN research is minimizing the number of qubits while the ansatz-based filters still carry out convolution and extract the full set of intrinsic features. \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{Figure/shorts_model.PNG} \caption{Overall Process of FV-Train.}\label{fig:data_reuploading} \end{figure} \section{FV-Train} This section presents our proposed QCNN training algorithm, named fidelity-variation training. As mentioned above, using only a small number of qubits while maintaining performance is challenging in QCNN. Note that in a QCNN, each ansatz acts like a convolutional filter in a classical CNN. To extract various features of the target image, we aim to vary the nature of the quanvolutional filters. For this purpose, we design an FV regularizer $\mathcal{L_{FV}}$, motivated by Uhlmann's fidelity function~\cite{jozsa1994fidelity}.
The fidelity of the output quantum states of two quanvolutional filters $q_X$ and $q_{Y}$ is defined as $\Phi(\rho_{q_{X}}, \rho_{q_{Y}}) = |\langle \psi_{q_X}|\psi_{q_{Y}}\rangle|^2$, where $\rho_{q_X} =|\psi_{q_X} \rangle \langle \psi_{q_X}|$ and $\rho_{q_{Y}} =|\psi_{q_{Y}} \rangle \langle \psi_{q_{Y}}|$. As the similarity between the two states increases, the fidelity converges to 1. Conversely, as the similarity between the two states decreases, the fidelity converges to 0, which means that $q_{Y}$ does not follow $q_X$. We assume that reducing the fidelity between the output states of the quanvolutional filters enables the extraction of more varied intrinsic features. Under this assumption, we define the FV regularizer as \begin{equation} \mathcal{L_{FV}} = 1 - {1 \over {}_{L}C_2} \sum_{l,l^{'} \in L} \nolimits \Phi(\rho_{q_{l}}, \rho_{q_{l^{'}}}). \end{equation} The training process of a QCNN with the FV regularizer is described in Algorithm 1. The pair ($x$, $y$) denotes the input data and its label, respectively. The predicted label is produced by applying a fully-connected layer to the concatenation of the observables of the filters. The cross-entropy loss is $\mathcal{L_{CE}} = -{1\over C}\sum^C_{c=1} \log{p(y_{pred} = y_c|x)}$, where $C$ represents the number of classes. Consequently, the total loss of the QCNN can be written as \begin{equation} \mathcal{L}_{total} ={1\over D} \sum_{(x, y) \in \zeta^k} \nolimits [\mathcal{L_{CE}} +\lambda \mathcal{L_{FV}}], \end{equation} where $D$ is the batch size and $\lambda$ is the hyper-parameter of the FV regularizer. \begin{algorithm2e}[t]\label{alg: FV-train} \SetCustomAlgoRuledWidth{0.44\textwidth} \caption{Fidelity Variation Train (FV-Train)}\label{alg:Training algorithm} \textbf{Initialization.} \texttt{QCNN} parameters $\ensuremath{\bm{\theta}}$; \For{ $e = \{1,2,\dots, E\}$} { \For{ $(x, y) \in \zeta^k$} { \For{$l, l^{'} \in \{1,2,\dots,L-1\}$} { Get features with $l$-th and $l^{'}$-th filter\; Calculate $\mathcal{L_{FV}}$\; Calculate loss gradients\; } Calculate $\mathcal{L}_e^k \leftarrow \mathcal{L}_{total}$\; $\ensuremath{\bm{\theta}}^k_{e+1}\leftarrow \ensuremath{\bm{\theta}}^k_{e}-\eta_e\nabla_{\theta^k_{e}}\ensuremath{\mathcal{L}}^{k}_{e}$\; } } \end{algorithm2e} \section{Experiments} \BfPara{Setting} To corroborate the performance of FV-Train, we train a QCNN to classify the MNIST dataset using a classical training method (Vanilla-Train) and FV-Train with various values of the fidelity regularizer parameter ($\lambda = 0.1, 0.5, 1$). We design two random ansatz-based convolutional filters to extract the intrinsic features. The initial fidelity between the two filters is set to $0.611$ in all experimental settings. \BfPara{Experimental Results} Fig.~\ref{exp:main} presents the performance difference between Vanilla-Train and FV-Train. Fig.~\ref{exp:main}(a) shows the fidelity for each training method. Note that fidelity is a measure of the closeness of two quantum states. FV-Train with different values of $\lambda$ shows that the features extracted through the FV-Trained filters are more diversified than the features from the Vanilla-Trained filters. Here, we confirm our assumption that diversified features result in good performance. In Fig.~\ref{exp:main}(b), we observe that FV-Train outperforms Vanilla-Train. FV-Train ($\lambda=0.1$) achieves 5.5\% higher top-1 accuracy than Vanilla-Train, and FV-Train ($\lambda=0.5, 1$) achieves slightly higher accuracy than Vanilla-Train.
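As a concrete illustration of the regularizer defined above, the pure-state fidelity and $\mathcal{L_{FV}}$ can be evaluated directly from the filters' output statevectors. The following is a minimal NumPy sketch written as stated in the equation; the variable and function names are our own, and we assume each filter's output state is available as a normalized statevector:
\begin{verbatim}
import numpy as np
from itertools import combinations

def fidelity(psi_x, psi_y):
    # Pure-state fidelity |<psi_x|psi_y>|^2; np.vdot conjugates psi_x.
    return np.abs(np.vdot(psi_x, psi_y)) ** 2

def fv_regularizer(states):
    # states: list of L (>= 2) normalized statevectors, one per filter.
    # L_FV = 1 - mean pairwise fidelity over all C(L, 2) filter pairs.
    fids = [fidelity(a, b) for a, b in combinations(states, 2)]
    return 1.0 - np.mean(fids)
\end{verbatim}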
From these results, we confirm that even with the same QCNN, FV-Train obtains more diverse features than Vanilla-Train. \section{Conclusion} In this paper, we propose a novel QCNN training framework based on the concepts of CNNs and QML. To obtain diverse filters with a small number of qubits, we design the FV regularizer. To corroborate the performance of FV-Train with a QCNN, we compare the performance of the QCNN under FV-Train and Vanilla-Train. In future research, we intend to address the barren plateaus problem of QML by extracting diverse intrinsic features through FV-Train while using only a small number of qubits. \begin{figure}[t] \centering \begin{tabular}{@{}c c@{}} \includegraphics[width=.485\columnwidth]{Figure/fidelity-eps-converted-to.pdf} &\includegraphics[width=.485\columnwidth]{Figure/accuracy-eps-converted-to.pdf}\\ ~~~~\small (a) Fidelity. &~~~~~\small (b) Top-1 accuracy. \end{tabular}\vspace{-3mm} \caption{Performance evaluation for FV-Train.} \vspace{-1mm} \label{exp:main} \end{figure} \small
{ "timestamp": "2022-09-20T02:23:51", "yymm": "2209", "arxiv_id": "2209.08727", "language": "en", "url": "https://arxiv.org/abs/2209.08727" }
\section{Introduction} \label{sec:introduction} Over the last two decades, studies focused on the formation and evolution of our Galaxy have been significantly advanced by the first generation of wide-field, digital imaging surveys and the Gaia astrometric mission. The extensive photometric databases that resulted have provided, for the first time, spectacular panoramic views of the Milky Way tidal streams \citep{belokurov_2006,ibata_2007,ibata2019a,mcconnachie_2009,shipp_2018} and revealed the existence of large stellar sub-structures in the halo, which have been interpreted as observational evidence of our home Galaxy's hierarchical formation. Furthermore, the PAndAS Survey \citep{mcconnachie_2009} has revealed a panoramic view of the Andromeda halo with a multitude of tidal streams, arcs, shells and other irregular structures that are possibly related to ancient merger events. These observations confirm the $\Lambda$CDM prediction that tidally disrupted dwarf galaxies are important contributors to the formation of Galactic stellar halos. The next generation of Galactic and extragalactic surveys (e.g.\ LSST) will dissect the stellar halo structure of these Local Group spirals with unprecedented detail, promising further improvements in our understanding of the early formation and merger history of the Milky Way. While some of the known Milky Way and M31 stellar streams can be well characterized over a wide parameter space, including through observations of their individual stars, results for individual systems are not easy to compare with numerical simulations due to the natural stochasticity of galaxy assembly histories in the $\Lambda$CDM model. Although statistical distributions, for example of halo assembly times or satellite luminosities, are well-defined for galaxies selected in a narrow range of stellar mass and/or halo mass, individual systems may show large deviations from the mean. To overcome this limitation, a search for streams and other merger debris in a larger sample of Milky Way-like galaxies is required. This is a daunting task: because of their extremely faint surface brightness, the observed frequency of stellar streams is very low even in ultra-deep imaging surveys; see \citet{hood_2018} for a modern review. In this paper, we will focus only on {\it stellar tidal streams}, arising from the tidal disruption of dwarf galaxies by more massive systems. We exploit the deep, wide-field imaging of the DESI Legacy Surveys \citep{dey2019} to systematically explore the frequency and photometric properties of streams in the stellar halos of 181 Milky Way analogue targets previously selected for the {\it Satellites Around Galactic Analogs} (SAGA) survey \citep{geha2017,mao2021}. \begin{figure} \centering \includegraphics[width=0.85 \columnwidth]{SAGA_Fig1_Jun9.jpg} \caption{Sample of images showing stellar streams around galaxies listed in Table \ref{tab:photometry}. For illustrative purposes, shallower colour images (also from the {\it DESI Legacy Imaging Surveys}) have been superimposed on the saturated central region of each host galaxy.} \label{fig-sample} \end{figure} \begin{table*} \centering {\small \caption{ Photometry of stellar streams around MW analogue galaxies. Column 1 gives the name of the host galaxy and column 2 its distance; column 3 shows the surface brightness limit in the $r$ band calculated in this work; columns 4 and 5 show the {\it Detection Significance Index}, as defined in \citet{martinez-delgado2021}.
Columns 6 to 8 show the surface brightness in the $g$ passband, the surface brightness in the $r$ passband, and the $(g - r)_\mathrm{0}$ colour of the streams, averaged over all the apertures placed on the stream; column 9 indicates whether the stream has been reported for the first time in {\it this work}, indicated by ($\ast$), or in one of the following previous works: (1) \citet{martinez-delgado2021}; (2) \citet{morales2018}; (3) \citet{ludwig2014}; (4) \citet{knierman2013}}. \label{tab:photometry} \begin{tabular}{lcccccccc} Host & D & $\mu_\mathrm{r, limit}$ & \multicolumn{2}{c}{$\mathrm{DSI}_\mathrm{stream}$} & $\langle \mu_{g}\rangle_\textrm{stream}$ & $\langle\mu_{r}\rangle_\mathrm{stream}$ & $\langle (g - r)_\mathrm{0} \rangle_\mathrm{stream}$ & Reference \\ & & & maximum & average & & & & \\ & Mpc & [mag arcsec$^{-2}$] & $\sigma$ & $\sigma$ & [mag arcsec$^{-2}$] & [mag arcsec$^{-2}$] & [mag] & \\ \hline\hline NGC0636 & 29.2 & 28.88 & 45.58 & 31.86 & 26.66 $\pm$ 0.03 & 25.86 $\pm$ 0.02 & 0.75 $\pm$ 0.04 & ($\ast$) \\ NGC1079 & 31.4 & 28.78 & 15.24 & 11.31 & 27.51 $\pm$ 0.05 & 27.00 $\pm$ 0.05 & 0.48 $\pm$ 0.07 & ($\ast$) \\ NGC1209 & 38.3 & 28.91 & 8.85 & 4.71 & 28.71 $\pm$ 0.05 & 27.98 $\pm$ 0.03 & 0.68 $\pm$ 0.07 & ($\ast$) \\ NGC1309 & 34.3 & 28.76 & 24.42 & 23.02 & 25.66 $\pm$ 0.02 & 26.26 $\pm$ 0.02 & 0.56 $\pm$ 0.02 & (1) \\ NGC2460 & 34.8 & 28.81 & 10.39 & 8.06 & 27.50 $\pm$ 0.05 & 26.57 $\pm$ 0.04 & 0.85 $\pm$ 0.02 & (3) \\ NGC2543 & 37.6 & 28.55 & 10.18 & 9.00 & 26.66 $\pm$ 0.06 & 25.86 $\pm$ 0.06 & 0.72 $\pm$ 0.08 & ($\ast$) \\ NGC2648 & 32.7 & 28.19 & 22.70 & 16.62 & 26.49 $\pm$ 0.03 & 25.96 $\pm$ 0.04 & 0.49 $\pm$ 0.05 & ($\ast$) \\ NGC2701 & 36.5 & 28.58 & 6.63 & 5.55 & 26.85 $\pm$ 0.07 & 26.47 $\pm$ 0.08 & 0.37 $\pm$ 0.10 & ($\ast$) \\ NGC2782 & 39.9 & 28.51 & 28.69 & 20.55 & 26.14 $\pm$ 0.01 & 25.63 $\pm$ 0.02 & 0.48 $\pm$ 0.02 & (4) \\ NGC3614 & 36.1 & 28.57 & 9.79 & 6.64 & 27.78 $\pm$ 0.06 & 27.07 $\pm$ 0.05 & 0.68 $\pm$ 0.08 & ($\ast$) \\ NGC3689 & 39.8 & 28.00 & 10.75 & 6.45 & 27.55 $\pm$ 0.05 & 26.82 $\pm$ 0.05 & 0.56 $\pm$ 0.07 & (1) \\ NGC4378 & 37.2 & 28.21 & 24.06 & 22.17 & 27.24 $\pm$ 0.03 & 26.53 $\pm$ 0.03 & 0.68 $\pm$ 0.04 & ($\ast$) \\ NGC4750 & 27.7 & 28.57 & 54.58 & 35.07 & 26.81 $\pm$ 0.02 & 26.30 $\pm$ 0.03 & 0.48 $\pm$ 0.03 & ($\ast$) \\ NGC4793 & 36.3 & 28.11 & 20.02 & 18.04 & 26.16 $\pm$ 0.04 & 25.60 $\pm$ 0.06 & 0.55 $\pm$ 0.07 & ($\ast$) \\ NGC4799 & 40.1 & 27.93 & 8.49 & 6.98 & 26.65 $\pm$ 0.04 & 26.20 $\pm$ 0.07 & 0.41 $\pm$ 0.08 & ($\ast$) \\ NGC5297 & 35.5 & 28.55 & 28.00 & 18.58 & 26.35 $\pm$ 0.04 & 25.70 $\pm$ 0.04 & 0.63 $\pm$ 0.05 & ($\ast$) \\ NGC5493 & 40.05 & 28.30 & 32.96 & 28.06 & 26.38 $\pm$ 0.02 & 25.69 $\pm$ 0.02 & 0.63 $\pm$ 0.003 & ($\ast$) \\ NGC5604 & 39.0 & 28.18 & 12.29 & 9.93 & 26.35 $\pm$ 0.05 & 25.81 $\pm$ 0.05 & 0.46 $\pm$ 0.07 & ($\ast$) \\ NGC5631 & 31.7 & 28.54 & 12.88 & 10.01 & 27.60 $\pm$ 0.04 & 26.98 $\pm$ 0.04 & 0.59 $\pm$ 0.06 & ($\ast$) \\ NGC5750 & 25.3 & 28.23 & 29.41 & 27.37 & 27.38 $\pm$ 0.05 & 26.69 $\pm$ 0.04 & 0.63 $\pm$ 0.06 & (2) \\ NGC5812 & 27.2 & 28.38 & 55.09 & 30.73 & 26.54 $\pm$ 0.04 & 25.67 $\pm$ 0.02 & 0.77 $\pm$ 0.04 & ($\ast$) \\ NGC7721 & 31.8 & 27.87 & 19.44 & 13.24 & 25.79 $\pm$ 0.03 & 25.23 $\pm$ 0.04 & 0.53 $\pm$ 0.04 & (3) \\ \hline \end{tabular} } \end{table*} \section{Methodology} \label{sec:methodology} \subsection{Image Sample} \label{sec:imagesample} The second phase of the SAGA survey \citep{mao2021} defines a parent sample of Milky Way-like host galaxies with absolute $K$-band magnitude in the range
$-24.6 < M_{K} < -23$ mag, approximately equivalent to the stellar mass range $10^{10} < M_{\star} < 10^{11}\,\mathrm{M}_{\odot}$. The sample excludes close pairs of hosts, defined by a host-satellite $K$-band magnitude difference of $\Delta K < 1.6$ mag. The SAGA survey only carried out spectroscopic follow-up for hosts in this parent sample with distances $25 < d < 40.75$~Mpc. Further details of the SAGA II parent sample can be found in \citet{mao2021}. We inspected the images of the resulting sample of 181 galaxies using the Legacy Survey Sky Viewer\footnote{\url{https://www.legacysurvey.org/viewer}} and selected for further analysis a subset of targets in which stellar tidal streams could be identified by eye. From this visual inspection, a total of 22 galaxies with detected streams were selected. Image cutouts of these selected targets were then computed from the raw data of the {\it DESI Legacy Imaging Surveys} \citep[LS;][]{dey2019} using a modified version of the LS reduction pipeline {\it Legacypipe}. This alters the way the image backgrounds (``sky models'') are computed; {\it Legacypipe} by default uses a flexible spline sky model which can over-subtract the outskirts of large galaxies. Instead, we assume a flat background level for each overlapping CCD. We first minimize the relative background levels between the overlapping CCDs in each band, and then, after detecting and masking sources as well as Gaia stars, we subtract the sigma-clipped median in the outer half of the image cutout (see \citet{martinez-delgado2021} for details). The resulting wide-field images reach surface brightness limits as faint as 29 mag arcsec$^{-2}$ in the $r$ band (see Section \ref{sec:dataanalysis}), ensuring sufficient image depth to measure very faint tidal structures. The images analysed in this work are listed in Table \ref{tab:photometry} and examples of them are shown in Figure \ref{fig-sample}. \begin{figure}[h!]
\centering \includegraphics[width=0.8\columnwidth]{Fig3_June13.jpg} \caption{Examples of our photometry measurement method, showing the apertures placed on the stellar streams around NGC5812 and NGC2543, along with their suspected progenitors, in order to measure their surface brightness and colours.} \label{fig-photometry} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{histogram+fits_sat-streams.png} \caption{Histogram showing the distribution of the average $(g - r)_\mathrm{0}$ colour of the stellar streams around the 22 galaxies in our sample (those listed in Table~\ref{tab:photometry}), together with the same colour for the 127 satellite galaxies from the 36-system SAGA sample.} \label{fig-histograms} \end{figure} \begin{table} \centering {\small \caption{Comparison between the average $(g - r)_\mathrm{0}$ colour of each stream and the corresponding colour of its visually identified progenitor.} \label{tab:colors} \begin{tabular}{lccc} Host & $\langle (g - r)_\mathrm{0} \rangle_\mathrm{stream}$ & $\langle (g - r)_\mathrm{0} \rangle_\mathrm{progenitor}$ & $\Delta$ \\ & [mag] & [mag] & [mag] \\ \hline\hline NGC2543 & 0.72 $\pm$0.08 & 0.51 $\pm$ 0.02 & 0.21 $\pm$ 0.08 \\ NGC2648 & 0.49 $\pm$0.05 & 0.56 $\pm$0.003 & -0.07 $\pm$ 0.05\\ NGC3614 & 0.68 $\pm$0.08 & 0.65 $\pm$0.08 & 0.03 $\pm$ 0.11\\ NGC3689 & 0.56 $\pm$0.07 & 0.59 $\pm$0.02 & -0.03 $\pm$ 0.07 \\ NGC4793 & 0.55 $\pm$0.07 & 0.39 $\pm$0.01 & 0.16 $\pm$ 0.07\\ NGC5297 & 0.63 $\pm$0.05 & 0.64 $\pm$0.004 & -0.01 $\pm$ 0.05\\ NGC5750 & 0.63 $\pm$0.06 & 0.57 $\pm$0.02 & 0.06 $\pm$ 0.06\\ NGC5812 & 0.77 $\pm$0.04 & 0.63 $\pm$0.005 & 0.14 $\pm$ 0.04\\ \hline \end{tabular} } \end{table} \subsection{Data Analysis} \label{sec:dataanalysis} We carried out the photometric analysis with the {\it GNU Astronomy Utilities} (Gnuastro)\footnote{\url{http://www.gnu.org/software/gnuastro}}. We made all the measurements by applying Gnuastro's {\sc MakeCatalog} subroutine to the sky-subtracted images generated by Gnuastro's {\sc NoiseChisel} \citep{Akhlaghi15,Akhlaghi19}. The program also provides the errors in the photometry, calculated as inversely proportional to the SNR\footnote{\url{https://www.gnu.org/software/gnuastro/manual/html_node/Magnitude-measurement-error-of-each-detection.html}}. Our photometric analysis includes measurements of surface brightness in the LS $r$ and $g$ passbands for each stream, and for their candidate progenitor satellites, where identified. Taking advantage of the depth and photometric quality of the LS survey images, we have also measured the $(g - r)_\mathrm{0}$ colour of the streams. We measure the surface brightness limit of the images in the $g$ and $r$ passbands following the approach of \cite{Roman2020}, i.e.\ we report the value corresponding to $+3\sigma$ of the sky background in an area of 100 arcsec$^2$. Table~\ref{tab:photometry} reports the surface brightness limit in the $r$ band, which is representative of the depth of the corresponding images in other bands. We measured surface brightness and colours on circular apertures, placed manually to follow closely the detection map of the stream generated by {\sc NoiseChisel}, once all foreground and background sources were masked. A succession of circular apertures allows us to measure colour gradients and can easily adapt to the stream contour, though in a few cases where the stream shape so allowed, larger polygonal apertures were used to reduce the measurement error.
Regions where the stream surface brightness was judged to be significantly blended with light from the host galaxy were avoided. As an illustration of the method, Figure \ref{fig-photometry} shows an example of a stream on which apertures have been placed manually in order to perform the measurement. We obtain a representative surface brightness and colour for each stream by taking the mean of the corresponding individual aperture measurements. \section{Results } \label{sec:results} Table \ref{tab:photometry} shows the results of our photometric analysis. We identified tidal streams around 22 galaxies from the sample of 181 MW analogues. This suggests that 12.2\% $\pm$ 2.4\% of the SAGA II galaxies have a stellar stream in the halo, for an $r$-band surface brightness limit of our images between 27.8 and $29\, \mathrm{mag\, arcsec}^{-2}$ (see Table \ref{tab:photometry}). This implies that, with 95\% confidence, the percentage of typical SAGA sample halos that have readily observable stellar streams is between 7.4\% and 16.9\%. This result is similar to that reported by \cite{morales2018} in their systematic assessment of the frequency of tidal streams around a different sample of Milky Way-like galaxies in the local Universe. \citeauthor{morales2018} used co-added SDSS DR9 $g$, $r$ and $i$ band images processed using an image-enhancing technique similar to that of \cite{miskolczi2011}, with a typical surface brightness limit of $28.1\ \pm 0.3\ \mathrm{mag~arcsec^{-2}}$. They reported a total of 28 tidal streams from a sample of 297 galaxies, providing a conservative estimate that only $\sim 10\%$ of galaxies show evidence of diffuse features that may be linked to satellite accretion events. The measured ranges of stream surface brightness are $25.66 < \mu_{g} < 28.71$ and $25.23 < \mu_{r} < 27.98$ $\mathrm{mag\, arcsec}^{-2}$. The {\it Detection Significance Index} (DSI), as defined in \citet{martinez-delgado2021}, is calculated by comparing the measurements for a given aperture with the median and standard deviation of $N$ random measurements in pixels with no source detection\footnote{\url{https://www.gnu.org/software/gnuastro/manual/html_node/Upper-limit-magnitude-of-each-detection.html}}. The {\it Reference} column of Table \ref{tab:photometry} indicates whether each stream has been reported in the literature or is reported for the first time in this work. Figure~\ref{fig-histograms} compares the $(g - r)_\mathrm{0}$ colour distribution of the stellar streams identified in Table \ref{tab:photometry}, shown in red, to that of the 127 spectroscopically confirmed satellite galaxies from the 36 SAGA systems presented in \citet{mao2021}, shown in blue. A normality hypothesis test shows that the null hypothesis that these colour distributions are drawn from Gaussian distributions cannot be rejected at the 99\% confidence level. We therefore fit Gaussian functions to each distribution, finding means and standard deviations of $0.59 \pm 0.12\,\mathrm{mag}$ for the streams and $0.39 \pm 0.13 \,\mathrm{mag}$ for the SAGA satellites. The mean colour of the streams is therefore 0.20 mag redder than that of the SAGA satellites. An equality-of-means hypothesis test shows that the null hypothesis can be rejected at a statistical confidence level larger than 99.999\% (p-value $< 10^{-10}$), and the alternative hypothesis that the mean colour of the streams is redder than the mean colour of the satellites can be accepted.
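The detection fraction and its uncertainty quoted above follow from standard binomial statistics applied to 22 detections among 181 hosts; the short sketch below (our own illustration of the arithmetic, using the normal approximation to the binomial proportion) reproduces the quoted numbers:
\begin{verbatim}
import math

n_host, n_stream = 181, 22
p = n_stream / n_host                  # detection fraction: 0.122 (12.2%)
se = math.sqrt(p * (1 - p) / n_host)   # standard error: 0.024 (2.4%)
lo, hi = p - 1.96 * se, p + 1.96 * se  # 95% interval: 0.074 to 0.169
print(f"{100*p:.1f}% +/- {100*se:.1f}% "
      f"(95% CI: {100*lo:.1f}% to {100*hi:.1f}%)")
\end{verbatim}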
The $(g - r)_\mathrm{0}$ colours we find are similar to those obtained for the streams described in the proof-of-concept study of \citet{martinez-delgado2021}, who reported a mean and standard deviation of $0.66\pm 0.12$~mag. In approximately $36\%$ of the streams in our sample, we have identified a highly likely progenitor by visual inspection. This allows us to explore similarities and differences in the stellar populations of satellites and their streams, including the presence of population gradients along the streams. As shown in Fig.~\ref{fig-photometry} for the cases of NGC 2543 and NGC 5812, we placed apertures on the likely progenitors as well as along the tidal features. Table~\ref{tab:colors} compares the $(g - r)_\mathrm{0}$ colour of each stream (averaged over the apertures as described in Section \ref{sec:dataanalysis}) with that measured in an aperture placed on the suspected progenitor. We see a significant difference in colour for the streams around NGC2543, NGC4793 and NGC5812, with the stream redder than its likely progenitor by 0.21, 0.16 and 0.14~mag, respectively. For the remaining streams where a progenitor is suspected, the colour difference is within the uncertainties of our colour measurement, and therefore not significant. To test whether the differences observed in our sample are statistically significant, we have performed a hypothesis test on the difference between the stream and progenitor colours, and we find that streams are, on average, $0.057 \pm 0.021$~mag redder than their progenitors, with a confidence level $>99.99\%$. \section{Conclusions} \label{sec:discussion} The main conclusions of this letter are as follows: \begin{itemize} \item We have developed a new methodology, based on Gnuastro, for measuring the surface brightness and colours of streams. \item We have applied this methodology to enhanced DESI Legacy Imaging Survey $grz$ data for a subset of the SAGA sample (a stellar mass-selected sample of Milky Way analogues at distances up to 40 Mpc). \item We have detected 16 previously unreported streams in this sample (see Table \ref{tab:photometry}, {\it Reference} column). The streams we have analyzed have $r$-band surface brightnesses in the range $25.23 < \mu_{r} < 27.98\,\mathrm{mag\,arcsec^{-2}}$. \item We have carried out a statistical comparison of $(g - r)_\mathrm{0}$ colours for the detectable stream and satellite populations in our sample, finding that the detectable stream population is significantly redder on average. \item In those systems where a progenitor of the stream could be identified by visual inspection, we find the stream is on average slightly redder than the progenitor. \end{itemize} We suggest that the differences we find between the stream and satellite colour distributions may be explained by a combination of selection bias and physical effects. We provide here a brief summary of possible explanations, and defer a detailed discussion to future work. The SAGA survey selects a sample of candidate satellites based on catalogue photometry and follows up a subset of these with multi-object fibre spectrographs to obtain redshifts. Extremely compact (M32-like) candidates were not followed up \citep{geha2017}; although such objects tend to be red, relatively few are known. More significantly, redshifts are more difficult to obtain for candidates with low mean surface brightness, which also tend to be redder.
\citet{mao2021} argue that this redshift incompleteness is a weak effect that does not significantly bias the distribution of star formation rates (hence colours) in the spectroscopic sample. However, the completeness of the initial target catalogue may also be important. \citet{font22} explore this issue in detail through comparison to the ARTEMIS suite of cosmological simulations. They suggest that the photometric SAGA candidate sample may have a significant bias against low surface brightness satellites, and that this bias has a much stronger effect on the resulting colour distribution. Comparing to a separate survey of satellites in the Local Volume (Exploration of Local VolumE Satellites, {\sc ELVES}, see \citet{carlsten2021}), they find evidence that fainter galaxies in SAGA are biased towards bluer colours. However, even with the small sample of stream colours presently available, we find at least two reasons to consider physical explanations for the colour differences in addition to selection effects. Firstly, \citet{font22} find that the potential selection bias in SAGA mostly affects the fainter satellite magnitudes ($M_{\mathrm{V}}>-12$), and that the colours of brighter (systematically bluer) satellites are not strongly biased. Although we cannot yet quantify the total luminosity of the streams in our sample, it is likely that readily detectable streams have some bias towards the brighter end of the luminosity function of disrupted progenitors (albeit with large uncertainty due to the wide variety of stream morphologies and viewing angles). If we were to compare the streams only to the brighter SAGA satellites, rather than the full sample, the discrepancy in colour would be reinforced. Put another way, we detect no streams as blue as the bluest SAGA satellites. Secondly, the difference in colour seen in the small number of stream-progenitor pairs in our sample suggests colour gradients may contribute alongside selection-driven differences between the stream and satellite samples (and other population-level effects, such as different average ages). Such gradients may be established either before disruption or during the disruption process. A wide variety of physical processes could create gradients through their effects on the relative timescales of gas removal (due to ejection and ram pressure stripping), star formation in residual cold gas, and tidal stripping. At the most basic level, complete tidal disruption will prevent further star formation, leading to the systematic reddening of dynamically older streams. Cosmological simulations are necessary to make quantitative predictions for colour distributions, accounting for the range of satellite star formation histories, gas fractions and orbits, and variations in the satellite accretion rate and disruption efficiency over the range of dark matter halo masses that may correspond to the SAGA sample. To make further progress, we are currently constructing a larger sample of galaxies within the Stellar Streams Legacy Survey \citep{martinez-delgado2021}. This sample will comprise more than 800 Milky Way-like galaxies. By analysing this sample using the techniques presented in this paper, we will be able to more robustly test our conclusions and carry out meaningful comparisons to physical models of satellite star formation, accretion and disruption. \begin{acknowledgements} We want to thank Yao-Yuan Mao, Marla Geha and Risa Wechsler for providing the original SAGA sample for this paper and for useful comments.
We also thank Dustin Lang and John Moustakas for running the modified {\it Legacypipe} code to produce the images used here. DMD acknowledges financial support from the Talentia Senior Program (through the incentive ASE-136) from Secretar\'\i a General de Universidades, Investigaci\'{o}n y Tecnolog\'\i a, de la Junta de Andaluc\'\i a. DMD acknowledges funding from the State Agency for Research of the Spanish MCIU through the ``Center of Excellence Severo Ochoa" award to the Instituto de Astrof{\'i}sica de Andaluc{\'i}a (SEV-2017-0709) and project (PDI2020-114581GB-C21/ AEI / 10.13039/501100011033). MAGF acknowledges financial support from the Spanish Ministry of Science and Innovation through the project PID2020-114581GB-C22. SRF acknowledges financial support from the Spanish Ministry of Economy and Competitiveness (MINECO) under grant numbers AYA2016-75808-R, AYA2017-90589-REDT and S2018/NMT-429, and from the CAM-UCM under grant number PR65/19-22462. SRF acknowledges support from a Spanish postdoctoral fellowship, under grant number 2017-T2/TIC-5592. APC is supported by the Taiwan Ministry of Education Yushan Fellowship and Taiwan National Science and Technology Council grant 109-2112-M-007-011-MY3. The photometry analysis in this work was partly done using GNU Astronomy Utilities (Gnuastro, ascl.net/1801.009) version $0.17$. Work on Gnuastro has been funded by the Japanese MEXT scholarship and its Grant-in-Aid for Scientific Research (21244012, 24253003), the European Research Council (ERC) advanced grant 339659-MUSICOS, and the Spanish Ministry of Economy and Competitiveness (MINECO) under grant number AYA2016-76219-P. The Leiden Observatory has provided facilities and computer infrastructure for carrying out part of this work. M.A. acknowledges financial support from the Spanish Ministry of Science and Innovation and the European Union - NextGenerationEU through the Recovery and Resilience Facility project ICTS-MRR-2021-03-CEFCA. \end{acknowledgements}
{ "timestamp": "2022-09-20T02:21:05", "yymm": "2209", "arxiv_id": "2209.08636", "language": "en", "url": "https://arxiv.org/abs/2209.08636" }
\section{Introduction} \label{introduction} Spiking neural networks (SNNs) closely resemble biological neural networks~\cite{ghosh2009spiking}. Each neuron has an internal state representing its current activation, and information transfer between neurons is sparsely transmitted via spikes that occur when a neuron's internal activation exceeds a threshold~\cite{gerstner2014neuronal}. Spiking networks, when deployed on tailored neuromorphic processors, have the potential to be extremely energy efficient and to process data with low latencies~\cite{frady2020neuromorphic,davies2021advancing}. \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{imgs/plot_final.pdf} \caption{Left: We train compact, localized spiking neural networks that solely recognize places in a local region of the environment. Middle: The independent training of these local networks leads to a lack of global regularization. This results in hyperactive neurons that strongly respond to places outside their training region. We detect and remove those hyperactive neurons. Right: At deployment time, a query image is fed to all networks in parallel. As hyperactive neurons were removed, the strongest response of \emph{all} remaining neurons in \emph{all} networks is used for the place matching decision.} \vspace*{-0.2cm} \label{fig:frontpage} \end{figure*} SNNs have thus been used in a number of robotics applications~\cite{abadia2021cerebellar, vitale2021event,dupeyroux2021neuromorphic,stagsted2020event, tieck2017towards,tieck2018controlling, kreiser2020chip, lele2021end}, including the visual place recognition (VPR) task~\cite{zhu2020spatio,Hussaini2022} that is considered in this paper. A VPR system has to find the matching reference image given a query image of a place, with the difficulty that the appearance of the query image can differ significantly from the reference image due to changes in season, time of day, or weather conditions~\cite{Garg2021,Lowry2015,masone2021survey,zhang2021visual}. VPR is crucial in a range of robot localization tasks, including loop closure detection for Simultaneous Localization and Mapping (SLAM)~\cite{Garg2021,Lowry2015,masone2021survey,zhang2021visual}. Thus far, SNNs have not been widely applied to VPR tasks. One key limitation of prior works~\cite{Hussaini2022,zhu2020spatio} is their specialization to \emph{only} small-scale environments. In this work, we aim to increase the capacity of SNNs to environments an order of magnitude larger. We do so by taking inspiration from the brain, which commonly uses a modular organization of neuron groups that act in parallel to efficiently perform complex recognition tasks~\cite{mountcastle1978organizing, krubitzer1995organization}. Specifically, there is evidence of an \emph{ensemble effect} for perception and learning tasks~\cite{varela2001brainweb, o2006modeling, bock2014anatomical, li2008aversive}. In our work, we implement such an ensemble by deriving SNNs for VPR that take a divide-and-conquer approach \cite{jacobs1991adaptive}. Each local region of the environment is encoded in a compact, localized SNN, which is responsible only for this local region. At deployment time, all localized encoders compete with each other and are free to respond to any place, resulting in a highly scalable and parallelizable system.
% This concept is also known as a mixture of experts, where each ensemble member is an expert on a sub-task (in our case a local region of the environment), and all ensemble members cooperate to perform prediction for complex learning tasks (in our case recognizing places in a large-scale environment) \cite{jacobs1991adaptive, happel1994design}. We note that there are other types of ensemble learning that average the predictions of, e.g., different classifiers, but within this paper, we refer to ensembles that specialize on distinct subsets of the training data. Such \emph{independent} processing overcomes the computational constraints that arise when increasing the network size in a non-modular spiking network \cite{auda1999modular}. While being highly scalable, localized SNNs do not interact with other ensemble members at training time and have no global regularization. As a result, some neurons erroneously respond to places outside their area of expertise. In this work, we refer to these neurons as \emph{hyperactive}. Our proposed regularization approach improves model performance by detecting and removing these problematic hyperactive neurons. % The key contributions of our work are: \begin{enumerate} \item We introduce the concept of ensemble spiking neural networks for scalable visual place recognition (\Cref{fig:frontpage}). Each ensemble member is compact and specializes in a local region of the environment at training time. At deployment time, the query image is provided to all ensemble members in parallel, followed by a fusion of the place predictions. \item As each ensemble member focuses independently on a local region of the environment, there is a lack of \emph{global} regularization. After training the ensemble members, we detect hyperactive neurons, i.e.~neurons that frequently respond to images \emph{outside} their training area, and ignore the responses of these hyperactive neurons at deployment time. % \item We demonstrate that our method outperforms prior spiking networks~\cite{Hussaini2022} both on small datasets (for which~\cite{Hussaini2022} was designed) and on large datasets containing over 2,500 images, where~\cite{Hussaini2022} catastrophically fails. Our method performs competitively when compared to conventional VPR methods, namely NetVLAD~\cite{Arandjelovic2018} and SAD~\cite{milford2012seqslam}, on the Nordland~\cite{sunderhauf2013we} and Oxford RobotCar~\cite{RobotCar} datasets. % \end{enumerate} To foster future research, we will make our code available upon paper acceptance. \section{Related works} In this section, we review spiking neural networks in robotics research (\Cref{SNN_robotics}), ensemble neural network concepts (\Cref{ensemble_SNN}), and key related works on visual place recognition (\Cref{bioinspired_VPR}). \subsection{Spiking neural networks in robotics research} \label{SNN_robotics} The neuromorphic computing field develops hardware, sensors and algorithms that are inspired by biological neural networks, with the aim of exploiting their advantages, including robustness, generalization capabilities, and remarkable energy efficiency~\cite{sandamirskaya2022neuromorphic,davies2021advancing}. Spiking neural networks are one class of algorithms considered within neuromorphic computing.
Such spiking neural networks can be trained via unsupervised methods such as Spike-Timing-Dependent Plasticity~\cite{feldman2012spike}, or by converting pre-trained conventional artificial neural networks to spiking networks~\cite{ding2021optimal,rueckauer2017conversion,bu2021optimal}. ANN-to-SNN conversion approaches have demonstrated comparable performance to their ANN equivalents; however, these approaches typically cannot fully exploit the advantages of SNNs. % We note that the non-differentiable nature of spikes in SNNs prevents direct application of supervised techniques such as back-propagation; however, some recent works proposed solutions to approximate back-propagation for SNNs~\cite{renner2021backpropagation, lee2020enabling}. Thanks to their desirable characteristics, SNNs have gathered interest in a range of robotics applications, including control~\cite{abadia2021cerebellar,vitale2021event,dupeyroux2021neuromorphic,stagsted2020event}, manipulation~\cite{tieck2017towards,tieck2018controlling}, scene understanding \cite{kreiser2020chip}, and object tracking~\cite{lele2021end}. Key works that use spiking networks for robot localization, the task considered in this paper, include an energy-efficient uni-dimensional SLAM~\cite{tang2019spiking}, a robot navigation controller system~\cite{tang2018gridbot}, a pose estimation and mapping system~\cite{kreiser2018pose}, and models of the place, grid and border cells of the rat hippocampus \cite{galluppi2012live} based on RatSLAM~\cite{milford2004ratslam}. However, thus far the performance of these methods has only been demonstrated in simulated~\cite{galluppi2012live, kreiser2018pose}, constrained indoor~\cite{tang2019spiking, tang2018gridbot}, or small-scale outdoor environments~\cite{Hussaini2022}. The most similar prior work is~\cite{Hussaini2022}, which introduced a high-performing SNN for VPR. However,~\cite{Hussaini2022} was limited to recognizing just 100 places, compared to several thousand places in our proposed ensemble spiking networks. \subsection{Ensemble neural networks} \label{ensemble_SNN} Ensemble neural networks contain multiple ensemble members, with each ensemble member being responsible for a simple sub-task \cite{jacobs1991adaptive,happel1994design, auda1999modular}. In this paper, we decompose the learning data so that each ensemble member is trained in parallel on a disjoint subset of the data~\cite{auda1999modular}. % Various ensemble schemes have been used in SNN research, including unsupervised ensembles for spiking expectation maximization networks~\cite{shim2016unsupervised}. % The most similar ensemble SNN is that of~\cite{panda2017ensemblesnn}. However, each ensemble member in~\cite{panda2017ensemblesnn} learns a portion of an input image, as opposed to different sections of the input data as in our work. \subsection{Visual place recognition} \label{bioinspired_VPR} Visual place recognition (VPR) is the task of recognizing a previously visited place despite changes in appearance and perceptual aliasing \cite{Garg2021,Lowry2015}. VPR is often considered a template matching problem, where the query image is matched to the most similar reference image. Recent works on VPR are dominated by deep learning. A widely-known deep learning approach is NetVLAD \cite{Arandjelovic2018}, which is based on the Vector of Locally Aggregated Descriptors (VLAD) \cite{jegou2010vlad}, trained end-to-end thanks to a differentiable pooling layer.
Many works have extended NetVLAD in several directions~\cite{yu2019spatial, hausler2021patch, khaliq2022multires, xu2021esa}. As NetVLAD still performs competitively, we use it for benchmarking in this work. % Bio-inspiration has a long history in VPR research: the hippocampus of rodent brains inspired RatSLAM~\cite{milford2004ratslam}, 3D grid cells and multilayer head direction cells inspired~\cite{yu2019neuroslam}, and the cognitive processes of fruit flies inspired~\cite{chancan2020hybrid}. Other works are based on spatio-temporal memory architectures~\cite{nguyen2013spatio,neubert2019neurologically}. We note that the detection and removal of hyperactive neurons in our approach is conceptually similar to using salient features of a place representation~\cite{newman2005slam}. \section{Methodology} The core idea of our method (\Cref{fig:frontpage}) is to train compact spiking networks that each learn a local region of the environment (\Cref{pre}). By combining the predictions of these localized networks at deployment time within an ensemble scheme (\Cref{ensemble}) and introducing global regularization (\Cref{hyp_detection}), we enable large-scale place recognition. \subsection{Preliminaries} \label{pre} Our ensemble is homogeneous, i.e.~each expert within the ensemble has the same architecture and uses the same hyperparameters. The experts only differ in their training data, which consists of geographically non-overlapping regions of the environment. The training of a single expert spiking network follows~\cite{diehl2015unsupervised,Hussaini2022} and is briefly introduced for completeness in this subsection. \textbf{Network structure: } Each expert module consists of three layers: 1) The input layer transforms each input image into Poisson-distributed spike trains via pixel-wise rate coding. The number of input neurons $K_P$ corresponds to the number of pixels in the input image: $K_{P} = W \times H$, where \mbox{$W$ and $H$} correspond to the width and height of the input image, respectively. 2) The $K_P$ input neurons are fully connected to $K_{E}$ excitatory neurons. Each excitatory neuron learns to represent a particular stimulus (place), and a high firing rate of an excitatory neuron indicates high similarity between the learned and presented stimuli. Note that multiple excitatory neurons can learn the same place. 3) Each excitatory neuron connects to exactly one inhibitory neuron. Each inhibitory neuron in turn inhibits all excitatory neurons except the one from which it receives its connection. This enables lateral inhibition, resulting in a winner-takes-all system. \textbf{Neuronal dynamics: } The neuronal dynamics of all neurons are implemented using the Leaky-Integrate-and-Fire (LIF) model~\cite{gerstner2014neuronal}, which describes the internal voltage of a spiking neuron in the following form: \begin{equation} \tau \frac{dV}{dt} = (E_{rest} - V) + g_{e} (E_{exc} - V) + g_{i} (E_{inh} - V), \end{equation} where $\tau$ is the neuron time constant, $E_{rest}$ is the membrane potential at rest, and $E_{exc}$ and $E_{inh}$ are the equilibrium potentials of the excitatory and inhibitory synapses with synaptic conductances $g_{e}$ and $g_{i}$, respectively. \textbf{Network connections: } The connections between the inhibitory and excitatory neurons are defined with constant synaptic weights.
The synaptic conductance between input neurons and excitatory neurons decays exponentially, as modeled by: \begin{equation} \tau_{g_e} \frac{dg_{e}}{dt} = -g_{e}, \end{equation} where $\tau_{g_e}$ is the excitatory postsynaptic potential time constant. The same model is used for the inhibitory synaptic conductance $g_{i}$, with the inhibitory postsynaptic potential time constant $\tau_{g_{i}}$. \textbf{Weight updates: } The biologically inspired unsupervised learning mechanism Spike-Timing-Dependent Plasticity (STDP) is used to learn the connection weights between the input layer and the excitatory neurons. Connection weights are increased if the presynaptic spike occurs before a postsynaptic spike, and decreased otherwise. % The synaptic weight change $\Delta w$ after receiving a postsynaptic spike is defined by: \begin{equation} \Delta w = \eta (\textit{x}_{pre} - \textit{x}_{tar})(w_{max} - w)^\mu, \end{equation} where $\eta$ is the learning rate, $\textit{x}_{pre}$ records the number of presynaptic spikes, $\textit{x}_{tar}$ is the presynaptic trace target value when a postsynaptic spike arrives, $w_{max}$ is the maximum weight, and $\mu$ is an exponent that controls the dependence of the update on the previous weight. \textbf{Local regularization: } To prevent individual neurons from dominating the response, homeostasis is implemented through an adaptive neuronal threshold. The voltage threshold of an excitatory neuron is increased by a constant $\Theta$ after the neuron fires a spike; otherwise, the voltage threshold decreases exponentially. We note that the homeostasis provides regularization only on the \emph{local}, expert-specific scale, not on the \emph{global} ensemble-level scale. \textbf{Neuronal assignment: } The network training encourages the network to discern the different patterns (i.e.~places) that were presented during training. As the training is unsupervised, one needs to assign each of the $K_E$ excitatory neurons to one of the $L$ training places ($K_E \gg L$). Following~\cite{diehl2015unsupervised}, we record the number of spikes $S_{e,l}$ of the $e$-th excitatory neuron when presented with an image of the $l$-th place. The highest average response of a neuron to the place labels across the local training data is then used for the assignment $A_e$, such that neuron $K_{e}$ is assigned to place $l^*$ if: \begin{equation} A_{e} = l^* = \argmax_l S_{e,l} \end{equation} \textbf{Place matching decisions: } Following~\cite{diehl2015unsupervised}, given a query image $q$, the matched place $\hat{l}$ is the label assigned to the group of neurons with the highest sum of spikes in response to the query image $\big(\text{i.e.~}A_e = \hat{l}\big)$. Formally: \begin{equation} \hat{l} = \argmax_l \sum_{e[A_e=l]} S_{e,l}^q \label{eq:matching} \end{equation} \subsection{Ensemble Scheme} \label{ensemble} The previous section described how to train individual spiking networks following~\cite{Hussaini2022}. In this section, we present our novel ensemble spiking network, which consists of a set of $\mathcal{M}=\{M_1,\dots,M_i,\dots,M_N\}$ experts. The $i$-th expert is tasked to learn the places contained in a non-overlapping subset $R_i \subset \mathcal{R}$ of the reference database $\mathcal{R}$, whereby % \begin{equation} \mathcal{R}=\bigcup_{i \in \{1,\dots,N\}} R_i\ \ \text{with}\ R_i \cap R_j = \varnothing\ \ \forall i\neq j \end{equation} All subsets are of equal size, i.e.~$|R_i|=\kappa$.
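A minimal sketch of this disjoint partition of the reference set is given below (the list-based representation and names are our own illustration; we assume the reference places are ordered geographically, so that each subset covers consecutive places):
\begin{verbatim}
def partition_reference(reference, kappa):
    # Split the ordered reference places R into disjoint subsets R_i of
    # equal size kappa, one subset per expert module.
    assert len(reference) % kappa == 0, "subsets must be of equal size"
    return [reference[i:i + kappa]
            for i in range(0, len(reference), kappa)]
\end{verbatim}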
Therefore, at training time the expert modules are independent and do not interact with each other, a key enabler of scalability. At deployment time, the query image $q$ is provided as input to \emph{all} experts \emph{in parallel}. The place matching decision is obtained by considering the spike outputs of \emph{all} ensemble members, rather than just a single expert as in Eq.~(\ref{eq:matching}). \subsection{Hyperactive neuron detection} \label{hyp_detection} The basic fusion approach that considers all spiking neurons of all ensemble members is problematic. As the expert members are only ever exposed to their local subset of the training data, there is no global regularization with respect to data outside that local subset. In the case of spiking networks, this phenomenon leads to ``hyperactive'' neurons that are spuriously activated when stimulated with images from outside their training data. % We therefore detect and remove these hyperactive neurons. To detect hyperactive neurons, we do not require access to query data. We use the cumulative number of spikes $S_{e,l}^i$ fired by the neurons $K_e^i$ of each module $M_i \in \mathcal{M}$ in response to the entire reference dataset $\mathcal{R}$. $S_{e,l}^i$ indicates the number of spikes fired by neuron $K_e$ of module $M_i$ in response to image $l\in\mathcal{R}$. Neuron $K_e^i$ is considered hyperactive if \begin{equation} \sum_l S_{e,l}^i\geq \theta, \end{equation} where $\theta$ is a threshold value that is determined as described in \Cref{hyperparameter}. The place match is then obtained from the highest response of the neurons that are assigned to place $\hat{l}$, after ignoring all hyperactive neurons: \begin{equation} \hat{l} = \argmax_{l} \sum_{e[A_e=l]} S_{e,l}^q \mathds{1}_{\sum_l S_{e,l}^i<\theta}, \end{equation} where the indicator function $\mathds{1}$ filters out all hyperactive neurons. \subsection{Hyperparameter search} \label{hyperparameter} We use a grid search to tune the network's hyperparameters: the time constant of the inhibitory synaptic conductance $\tau_{gi}$, and the threshold value $\theta$ used to detect hyperactive neurons. We train the modules multiple times using the reference images $\mathcal{R}$ introduced in \Cref{ensemble}, and vary $\tau_{gi}$ and $\theta$. We then observe the performance in response to a query set $\mathcal{C}$ which is geographically separate from the test set $\mathcal{T}$. Specifically, for each combination of the hyperparameter values, we evaluate the performance of the ensemble SNN model on the $\mathcal{C}$ calibration images using the precision at 100\% recall metric (see \Cref{evaluation_metrics}). We select the $\tau_{gi}$ and $\theta$ hyperparameter values that lead to the highest performance and use these values for all test images at deployment time. \section{Experimental Setup} \subsection{Implementation details} \label{imp_details} We implemented our ensemble spiking neural network in Python3 and the Brian2 simulator~\cite{stimberg2019brian}. % We pre-processed all reference and query input images by resizing images to $W\times H=28\times 28$ pixels, and patch-normalizing images~\cite{milford2012seqslam} using patches of size $W_P\times H_P=7\times 7$ pixels.% We use rate coding to convert input images to Poisson spike trains. % The number of $K_P=784$ neurons in the input layer corresponds to the number of pixels in the input image. We used $\kappa=25$ consecutive places to train each ensemble member.
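To make the hyperactivity test of \Cref{hyp_detection} concrete, the following is a compact NumPy sketch of the detection step and the filtered place-matching decision (the array layouts and names are our own illustration, not the released implementation):
\begin{verbatim}
import numpy as np

def detect_hyperactive(S, theta):
    # S: spike counts of one module, shape (num_neurons, num_ref_images).
    # A neuron is hyperactive if its summed response to the whole
    # reference set reaches the threshold theta.
    return S.sum(axis=1) >= theta

def match_place(spikes_q, assignments, hyperactive):
    # spikes_q: per-neuron spike counts for one query, pooled over all
    # modules; assignments: place label A_e of each neuron; hyperactive:
    # boolean flag per neuron. Hyperactive neurons are ignored.
    votes = {}
    for s, place, hyper in zip(spikes_q, assignments, hyperactive):
        if not hyper:
            votes[place] = votes.get(place, 0) + s
    return max(votes, key=votes.get)  # matched place label
\end{verbatim}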
The hyperparameter search in \Cref{hyperparameter} resulted in $\tau_{gi}=0.5$ and $\theta=100$ for the Nordland dataset, and $\tau_{gi}=0.5$ with $\theta=180$ for the Oxford RobotCar dataset. Given an input image, the number of spikes of the $K_E=400$ excitatory (output) neurons in the last 10 epochs is recorded. The SNN modules were trained in parallel for 60 epochs. \subsection{Datasets} We evaluated our ensemble spiking neural network on two widely used VPR datasets, Nordland \cite{sunderhauf2013we} and Oxford RobotCar \cite{RobotCar}. The Nordland dataset~\cite{sunderhauf2013we} captures a 728 km train journey in Norway, where the same traverse is recorded during spring, summer, fall and winter. As in prior works~\cite{molloy2020intelligent, hausler2019multi, hausler2021patch}, tunnels and sections where the train travels below 15 km/hr were removed. We trained our model on the spring and fall traverses, and we used the summer traverse as the query dataset. We subsampled places every 8 seconds (about 100 meters) from the entire dataset, resulting in 3,300 places. % The Oxford RobotCar dataset \cite{RobotCar} contains over 100 traversals captured under varying weather conditions, times of the day and seasons. As in \cite{molloy2020intelligent}, our reference dataset consists of the sun (2015-08-12-15-04-18) and rain (2015-10-29-12-18-17) traverses, and our query dataset is the dusk (2014-11-21-16-07-03) traverse. We sampled places roughly every 8 seconds (about 100 meters), resulting in 450 places.% \subsection{Evaluation metrics} \label{evaluation_metrics} The precision at 100\% recall (P@100R) is the percentage of correct matches when the system is forced to match each query image to one of the reference images. The recall at $N$ (R@N) metric is the percentage of correct matches if at least one of the top $N$ predicted place labels is correctly matched. % We consider a query image to be correctly matched only if it is matched \emph{exactly} to the correct place. In other words, we apply a strict ground truth tolerance of zero, noting that the distance between the sampled places within the datasets is relatively small. \subsection{Baseline methods} \label{baseline_methods} We compare the performance of our method against two conventional VPR approaches: firstly, the Sum-of-Absolute-Differences (SAD)~\cite{milford2012seqslam}, which computes the pixel-wise difference between each query image and all reference images. For a fair comparison, we applied the same resizing and patch-normalizing steps as in our approach (see \Cref{imp_details}). Secondly, NetVLAD~\cite{Arandjelovic2018}, which generalizes across different datasets and is robust to viewpoint and appearance changes. As NetVLAD is built on a VGG backbone, we used the original input image size of $640 \times 360$ pixels for Nordland and resized the input images to $640 \times 480$ pixels for Oxford RobotCar, potentially giving it an advantage over the low-dimensional input images in our proposed method. We also compare against a non-ensemble SNN~\cite{Hussaini2022}, which in~\cite{Hussaini2022} was limited to just 100 places because of a relatively small network size. To compare against~\cite{Hussaini2022} on our large datasets, we increase the network size of their approach to contain $K_E=|\mathcal{R}|$ output neurons (i.e.~one output neuron per place). We note that increasing the number of neurons in their SNN results in significantly longer training and inference times (\Cref{fig:scalability}).
Given the constraints in simulating such a large network, we train their network for two epochs in the case of the Nordland dataset, and for 26 epochs for the (smaller) Oxford RobotCar dataset. In~\Cref{small_comparison}, we additionally compare our method to~\cite{Hussaini2022} in a small-scale environment (for which~\cite{Hussaini2022} was designed). \begin{table}[t] \caption{Precision at 100\% recall comparison } \renewcommand{\arraystretch}{1.2} \setlength{\tabcolsep}{2.5pt} \label{T_results} \centering \begin{tabular}{c|cc} Method & Nordland & Oxford RobotCar\\\hline Hussaini et al.~\cite{Hussaini2022} & 0.3\% & 4.0\%\\ SAD~\cite{milford2012seqslam} & 45.1\% & 41.3\%\\ NetVLAD~\cite{Arandjelovic2018} & 35.1\% & \textbf{44.8\%}\\ Ensemble SNN with hyperactive neurons & 35.7\% & 30.1\%\\ Ensemble SNN (ours) & \textbf{52.6\%} & 40.5\%\\ \end{tabular} \vspace{-0.2cm} \end{table} \section{Results} In this section, we first provide a performance comparison of our ensemble SNN model against NetVLAD~\cite{Arandjelovic2018}, Sum-of-Absolute-Differences (SAD)~\cite{milford2012seqslam}, and the currently best-performing SNN~\cite{Hussaini2022} (\Cref{comparison}). We then evaluate the effect of removing hyperactive neurons in \Cref{detection_perf}. Finally, \Cref{small_comparison} provides an ablation study where we demonstrate that in the small-scale environments for which~\cite{Hussaini2022} was designed, the performance of our ensemble SNN matches or exceeds that of the prior non-ensemble SNN~\cite{Hussaini2022}. \begin{figure}[t] \centering \includegraphics[width=0.99\linewidth,trim={1mm 1mm 1mm 1mm},clip]{imgs/legend.pdf} \subfloat[PR Nordland]{ \includegraphics[width=0.48\linewidth]{imgs/PR_NRD_SFS_L25_S8_O3275_SA.pdf} } \subfloat[PR ORC]{ \centering \includegraphics[width=0.48\linewidth]{imgs/PR_ORC_L25_S8_O425_SA.pdf} }\\[0.1cm] \hrule \vspace*{-0.1cm} \subfloat[R@N Nordland]{ \includegraphics[width=0.48\linewidth]{imgs/NRD_RecallAtN_SA.pdf} } \subfloat[R@N ORC]{ \includegraphics[width=0.48\linewidth]{imgs/ORC_RecallAtN_SA.pdf} } \caption{Precision--recall curves (top) and recall@N plots (bottom) of our ensemble SNN model, the ensemble SNN model where hyperactive neurons are not ignored, the previous best-performing SNN by Hussaini et al.~\cite{Hussaini2022}, and the conventional methods NetVLAD~\cite{Arandjelovic2018} and SAD~\cite{milford2012seqslam}.} \label{fig:PR_R@N} \vspace*{-0.25cm} \end{figure} \subsection{Comparison to state-of-the-art approaches} \label{comparison} We first compare our ensemble SNN to conventional VPR techniques, with the aim of demonstrating the potential of SNN-based approaches rather than outperforming these VPR techniques. The results are summarized in \Cref{T_results}. For the Nordland dataset, our ensemble SNN model obtains a P@100R of 52.6\%, outperforming both SAD (45.1\%) and NetVLAD (35.1\%). We note that NetVLAD is known to perform relatively poorly on the rural Nordland dataset, as it was trained on urban data. For the Oxford RobotCar dataset, the P@100R of our ensemble SNN model is 40.5\%; the SAD approach achieves a similar 41.3\%, while NetVLAD obtains a higher 44.8\%. Our ensemble SNN achieves an R@20 similar to that of NetVLAD (\Cref{fig:PR_R@N}), demonstrating the performance capability of our method. \Cref{T_results} also presents the performance of the previous non-ensemble SNN model~\cite{Hussaini2022}, which is our main competitor.
The method of \cite{Hussaini2022} fails catastrophically at performing place recognition at large scale, with a precision at 100\% recall of just 0.3\% on the Nordland and 4.0\% on the Oxford RobotCar dataset. We note that \cite{Hussaini2022} was designed for place recognition on small datasets. In addition to the poor performance, the large number of output neurons \emph{within a single network} in~\cite{Hussaini2022} results in significantly increased inference times. This is in contrast to our modular approach, where the neuronal dynamics of \emph{independent and compact} ensemble members are cheaper to compute. We highlight the computational advantages and better scalability of our method in \Cref{fig:scalability}. \subsection{Importance of hyperactivity detection} \label{detection_perf} This section demonstrates that it is crucial to introduce global regularization by detecting and ignoring hyperactive neurons. As shown in \Cref{T_results} and \Cref{fig:PR_R@N}, our ensemble SNN model where hyperactive neurons are ignored improves the precision at 100\% recall compared to the base ensemble SNN model (which includes hyperactive neurons) on both Nordland (absolute increase of 16.9\%) and Oxford RobotCar (absolute increase of 10.4\%). \Cref{fig:hyp} compares the neuron precision of hyperactive and non-hyperactive neurons trained on the Nordland dataset and highlights that non-hyperactive neurons recognize correct places with significantly higher precision than hyperactive neurons. We further evaluate the sensitivity of our ensemble SNN with respect to the threshold value $\theta$ (see \Cref{hyperparameter}). Specifically, we evaluate the precision at 100\% recall at different threshold values ($0 \leq \theta < \theta_{max}=200$). Note that $\theta=0$ corresponds to the baseline performance where both hyperactive and non-hyperactive neurons are used. For both the Nordland and Oxford RobotCar datasets, \Cref{fig:abl_thr} shows that our method is not sensitive to particular values of $\theta$, with a wide range of high-performing settings. Importantly, any $\theta>0$ improves performance compared to the baseline model where hyperactive neurons are included ($\theta=0$). \begin{figure}[t] \centering \includegraphics[width=0.7\columnwidth]{imgs/scalability_time_64_updated_v3.pdf} \caption{Query time versus network size for our proposed ensemble SNN in comparison to~\cite{Hussaini2022}. We measured the time taken for the network to process a query image at increasing network sizes. Our ensemble SNN approach scales linearly with network size. In comparison, the non-modular SNN~\cite{Hussaini2022} could only be tested with up to 6400 output neurons on a CPU, and its query time does not scale to large networks.} \label{fig:scalability} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.49\linewidth]{imgs/NRD_boxplot_neuron_accuracy_exclude_zeros.pdf} \vspace*{-0.2cm} \caption{ We compare the neuron precision of hyperactive and non-hyperactive neurons trained on the Nordland dataset. The neuron precision is the \textit{number of times} a neuron fired for the correct place over the \textit{total number of times} it fired across the entire query dataset. The lower the neuron precision, the more (incorrect) places a neuron is responsive to, beyond the single correct place.
Non-hyperactive neurons have significantly higher precision in responding to correct places than hyperactive neurons, supporting our proposal of removing hyperactive neurons at deployment time. } \label{fig:hyp} \vspace*{-0.1cm} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{imgs/NumSpikes_thresholding_accuracy_A0.pdf} \vspace*{-0.2cm} \caption{Threshold hyperparameter selection ablation study: on the $x$-axis we plot the threshold value $\theta$ used to ignore hyperactive neurons, and on the $y$-axis the precision at 100\% recall on the test set. There is a wide range of high-performing values, indicating robustness against the precise value of $\theta$. The highest-performing threshold value from the calibration process (green circle) is used to select the threshold for deployment. The ideal threshold that would have led to the highest performance at test time is indicated by a red circle.} \label{fig:abl_thr} \vspace*{-0.1cm} \end{figure} \subsection{Comparison to prior SNN in small environments} \label{small_comparison} In this ablation study, we present a like-for-like comparison of our proposed ensemble SNN and the non-ensemble SNN model from~\cite{Hussaini2022}, which facilitates a direct comparison to the previous state-of-the-art SNN-based VPR system. As~\cite{Hussaini2022} was designed for small-scale datasets, we do so by considering a much smaller dataset limited to 100 places. Note that the previous section has already shown that~\cite{Hussaini2022} fails catastrophically in large environments. Specifically, we trained $N=5$ ensemble members, each containing $K_E=400$ excitatory neurons. We used the first $\kappa_{cal}=25$ places to calibrate $\tau_{gi}$ and $\theta$. The P@100R of our proposed ensemble SNN model, at 91.0\%, is considerably higher than the 79.0\% of the prior non-ensemble SNN~\cite{Hussaini2022}. In conjunction with the results in \Cref{comparison}, this demonstrates that our ensemble method is scalable and improves performance in both small- and large-scale environments. The PR curve for these experiments is shown in \Cref{fig:A2A}. \begin{figure}[t] \centering \includegraphics[width=0.49\columnwidth]{imgs/PR_NRD_SFS_small_L25_S8_O175_SA.pdf} \caption{Precision--recall curves of our proposed ensemble SNN in comparison to~\cite{Hussaini2022} in a small-scale environment.} \label{fig:A2A} \end{figure} \section{Conclusions and Future Work} In this paper, we demonstrated that an ensemble of spiking neural networks can perform the visual place recognition task in a massively parallelized manner. Each ensemble member specializes in recognizing a small subset of places within a local region. These local ensemble members operate independently, without any global regularization. We introduced global regularization by detecting and ignoring hyperactive neurons, which respond strongly to previously unseen places. Our experiments demonstrated significant performance gains and scalability improvements compared to prior SNNs, and comparable performance to NetVLAD and SAD. Future work will follow a number of research directions. We will investigate how our approach can be made robust to significant viewpoint changes. We will also investigate using event streams (from event cameras) as input data, instead of converting images to spike trains via rate coding, to further reduce the power requirements and localization latency.
We are working towards implementing our method on Intel's neuromorphic processor, Loihi~\cite{davies2021advancing}. Deployment on neuromorphic hardware in similar applications~\cite{frady2020neuromorphic} has demonstrated high energy efficiency, high throughput, and low latency. Finally, we will investigate integrating our VPR system into a full SNN-based SLAM pipeline.
{ "timestamp": "2022-09-20T02:23:43", "yymm": "2209", "arxiv_id": "2209.08723", "language": "en", "url": "https://arxiv.org/abs/2209.08723" }
\section{Conclusions} \label{sec:Conclusion} In conclusion, we introduced an effective adaptive-frequency MPC and optimization framework for bipedal locomotion over terrains with discontinuities, such as stepping stones, with varied gait periods and step lengths. In addition, we introduced an adaptive-frequency trajectory optimization framework to generate optimal gait periods for each step, the CoM trajectory, and the foot positions based on the terrain. We paired the MPC with WBC for more accurate tracking control performance. Through numerical validation in simulation, we showed that the robot can walk over a series of uneven stepping stones under terrain perturbations while maintaining an average linear velocity of 1.5 $\unit{m/s}$. \section{Adaptive-frequency Control with Varied Gait Periods} \label{sec:trackingControl} \begin{figure*}[!t] \hspace{0.2cm} \center \begin{subfigure}[b]{0.78\textwidth} \centering \includegraphics[width=\textwidth]{Figures/optimization1.png} \caption{Snapshot of Optimization Results} \label{fig:optresults} \end{subfigure} \\ \begin{subfigure}[b]{0.85\textwidth} \centering \includegraphics[clip, trim=0.2cm 0cm 0.3cm 0cm, width=\columnwidth]{Figures/snapshots1.png} \caption{Snapshot of Controller Tracking Results in Simulation with Terrain Perturbations (all using results from (a))} \label{fig:trackingresults} \end{subfigure} \caption{{\bfseries Motion Snapshots of Uneven Stepping Stone Locomotion} a). Optimization results. b). Simulation results of various cases with terrain perturbations.} \label{fig:snapshots} \vspace{-1.5em} \end{figure*} In this section, we present a force-and-moment-based MPC with adaptive frequency for a bipedal walking gait with varied step lengths, allowing the robot to overcome discontinuous terrain without slowing down or coming to a complete stop. The optimization introduced in Section~\ref{subsec:Optimization} outputs optimized sampling times for the MPC, which can also be interpreted as the gait period of each step. Hence, it is important to modify these controllers to accept a walking gait with varying gait periods. \subsection{Adaptive-frequency MPC for Bipedal Locomotion} \label{subsec:VGP-MPC} First, we present the adaptive-frequency MPC. The MPC framework works with the varied gait periods obtained from the optimization results. Both the MPC and the optimization use the same simplified dynamics model, shown in Figure~\ref{fig:design}. To form a linear state-space dynamics equation for the MPC, we include gravity $\bm g$ as a dummy state variable, $\bm x = [{\bm \Theta};{\bm p}_c;{\bm \omega};\dot {{\bm p}}_c; \bm g] \in \mathbb{R}^{15}$, in equation (\ref{eq:simpDyn}) to form \begin{align} \label{eq:linearSS} \dot{{\bm { x}}}(t) = {\hat{\bm A_c}} {{\bm {x}}} + {\hat{\bm B_c}} \bm u, \end{align} where the continuous-time matrices ${\hat{\bm A_c} \in \mathbb{R}^{15\times15}}$ and ${\hat{\bm B_c} \in \mathbb{R}^{15\times10}}$ are modified from $\bm A$ and $\bm B$.
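Equation (\ref{eq:linearSS}) is then discretized per horizon step using the optimized sampling times $dt_i$. As a minimal sketch, assuming a forward-Euler discretization (the exact scheme is an assumption here, not stated above):

\begin{verbatim}
import numpy as np

def discretize(Ac, Bc, dt):
    # Forward-Euler discretization of x_dot = Ac x + Bc u with the
    # per-step sampling time dt; a zero-order hold via the matrix
    # exponential would be an alternative.
    Ad = np.eye(Ac.shape[0]) + Ac * dt
    Bd = Bc * dt
    return Ad, Bd

# Adaptive frequency: each horizon step i uses its own optimized dt_i,
# so the matrices A_hat[i], B_hat[i] vary along the prediction horizon.
Ac, Bc = np.zeros((15, 15)), np.zeros((15, 10))   # placeholder values
dts = [0.02, 0.02, 0.03, 0.05, 0.05]              # example dt_i values
models = [discretize(Ac, Bc, dt) for dt in dts]
\end{verbatim}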
A formulation of the MPC problem with finite horizon $k$ can be written in the following form, \begin{align} \label{eq:MPCform} \underset{\bm{x,u}}{\operatorname{min}} \:\: & \sum_{i = 0}^{k-1}(\bm x_{i+1}- \bm x_{i+1}^{ref})^T\bm Q_i(\bm x_{i+1}- \bm x_{i+1}^{ref}) + \bm{u}_i^T \bm{R}_i \bm{u}_i \end{align} \begin{subequations} \begin{align} \label{eq:dynamicCons} \:\:\operatorname{s.t.} \quad {\bm {x}}[i+1] = \bm {\hat{A}}[i]\bm x[i] + \bm {\hat{B}}[i]\bm u[i], \\ \label{eq:frictionCons} \nonumber -\mu {F}_{iz} \leq F_{ix} \leq \mu {F}_{iz} \quad \quad\\ -\mu {F}_{iz} \leq F_{iy} \leq \mu {F}_{iz} \quad \quad\\ \label{eq:forceCons} 0< {F}_{min} \leq F_{iz} \leq {F}_{max} \quad \quad\\ \label{eq:MPCeqCons} \bm D_i \bm u_i = 0 \quad \quad \quad \quad \end{align} \end{subequations} The objective of the problem is to drive the state $\bm x$ close to the commanded reference while minimizing $\bm u$. These objectives are weighted by the diagonal matrices $\bm Q_i\in \mathbb{R}^{15\times15}$ and $\bm R_i\in \mathbb{R}^{10\times10}$. Equations (\ref{eq:dynamicCons}) to (\ref{eq:MPCeqCons}) are the constraints of the MPC problem. Equation (\ref{eq:dynamicCons}) is an equality constraint encoding the linearized dynamics in discrete time at the $i$th time step, derived from equation (\ref{eq:linearSS}). Equation (\ref{eq:frictionCons}) describes the inequality constraints of the contact friction pyramid. Equation (\ref{eq:forceCons}) describes the bounds on the reaction forces. Equation (\ref{eq:MPCeqCons}) enforces the gait constraint, ensuring that the swing leg exerts zero control input. The translation of the proposed MPC problem into a Quadratic Programming (QP) form that can be solved efficiently can be found in related and previous works (e.g., \cite{di2018dynamic}, \cite{li2021force}). \subsection{Whole-body Control} \label{subsec:WBC} With adaptive-frequency MPC, in a step with a long gait period the sampling frequency can be as low as 20 $\unit{Hz}$. Such low-frequency MPC cannot guarantee optimal tracking performance. Hence, we combine the MPC with WBC to ensure more accurate tracking control. WBC is an established low-level control method for mapping reaction forces to joint torques on legged robots \cite{kim2019highly,chignoli2021humanoid}. We adapt the WBC to work with the force-and-moment-based MPC control input and to allow a bipedal walking gait with varied gait periods. The WBCs used in \cite{kim2019highly} and \cite{kim2020dynamic} are paired with a high-frequency joint PD controller to track desired joint positions and velocities in addition to computing joint torques based on prioritized tasks. Both CoM and swing-foot position control are part of the WBC tasks. Our WBC framework uses only the torque output from the QP optimization and does not require joint tracking. Instead, we choose to continue using Cartesian-space PD swing-foot control \cite{li2021force} to track the optimal foot placement from the optimization. With this approach, the WBC tasks reduce to driving the CoM position and rotation $\bm x_c = [ p_{c,x},\: p_{c,y},\: p_{c,z},\:\phi,\:\theta,\:\psi ]^\intercal$ to the desired input (i.e., trajectory tracking). Hence, it avoids the computationally costly evaluation of the derivative of the contact Jacobian $\dot{\bm J_c}$ for the 5-DoF bipedal robot leg.
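The Cartesian-space swing-foot controller just mentioned admits a short sketch; the PD law and the Jacobian-transpose mapping are stated formally in the next section, and the gains below are placeholders rather than the tuned values used in this work:

\begin{verbatim}
import numpy as np

def swing_foot_torques(Jv, p, p_des, pdot, pdot_des, Kp, Kd):
    # Cartesian-space PD force on the swing foot, mapped to the five
    # joint torques of the leg via the Jacobian transpose.
    # Jv: 3x5 linear-velocity Jacobian of the swing foot (assumed shape).
    F = Kp @ (p_des - p) + Kd @ (pdot_des - pdot)
    return Jv.T @ F

# Placeholder gains (assumptions, not the values used in the paper):
Kp = np.diag([300.0, 300.0, 300.0])
Kd = np.diag([10.0, 10.0, 10.0])
\end{verbatim}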
The full joint-space equation of motion for the bipedal robot has the form \begin{align} \label{eq:EOM} \mathbf M \ddot{\mathbf q} + \mathbf C + \mathbf g = \left[\begin{array}{c} \mathbf 0 \\ \bm \tau \end{array} \right] + \bm \tau_b \end{align} where $\ddot{\mathbf q}$ is a vector stacking the body-state components (i.e., the CoM position vector and Euler angles) and the joint-state components, $\ddot{\mathbf q} = [\ddot{\mathbf q}_b;\: \ddot{\mathbf q}_j]$, with $\ddot{\mathbf q}_b \in \mathbb R^6$, $\ddot{\mathbf q}_j \in \mathbb R^{10}$, and $\bm \tau_b = \bm {J}_c^\intercal \bm u$. The desired acceleration of the CoM tracking task uses the optimal CoM trajectory from the trajectory optimization as the reference $\bm x_c^{des}$, and is computed based on a PD control law, \begin{align} \label{eq:desAcc} \ddot{\bm x}_c^{des} = \bm K_P^{WBC}(\bm x_c^{des} - \bm x_c) + \bm K_D^{WBC}(\dot{\bm x}_c^{des} - \dot{\bm x}_c) \end{align} The acceleration command $\ddot {\mathbf {q}}_{cmd}$ is calculated by a task-space projection algorithm similar to that in \cite{kim2019highly}. The WBC-QP problem, which computes the minimized relaxation components $\Delta \bm u$ of the MPC ground reaction forces and $\Delta \ddot{\mathbf q}$ of the joint acceleration command, is as follows: \begin{align} \label{eq:WBC-QP} \underset{{\Delta \ddot{\mathbf q},\Delta \bm u}}{\operatorname{min}} \:\: & \Delta \ddot{\mathbf q}^\intercal {\mathbf H} \Delta \ddot{\mathbf q} + \Delta \bm u^\intercal {\mathbf K} \Delta \bm u \vspace{0.5cm} \end{align} \begin{subequations} \begin{align} \label{eq:WBC_cons1} \nonumber \operatorname{s.t.} \quad \mathbf S_{b}\{\mathbf M (\Delta \ddot{\mathbf q} + \ddot{\mathbf q}_{cmd}) + \mathbf C + \mathbf g \\ - \bm J_c^\intercal (\Delta \bm u + \bm u)\} = \mathbf 0 \\ \label{eq:WBC_cons3} \quad \quad \quad \bm u_{min} \leq \Delta \bm u + \bm u \leq \bm u_{max} \quad\\ \label{eq:WBC_cons4} \quad \quad \quad \bm \tau_{min} \leq \bm \tau \leq \bm \tau_{max} \quad \end{align} \end{subequations} In equation (\ref{eq:WBC-QP}), $\mathbf H \in \mathbb{R}^{16\times16}$ and $\mathbf K \in \mathbb{R}^{10\times10}$ are diagonal weighting matrices for each objective. Equation (\ref{eq:WBC_cons1}) is a dynamics constraint enforcing the floating-base dynamics. The selection matrix $\mathbf S_{b}\in \mathbb{R}^{6\times16}$ consists of 1s and 0s and identifies the floating-base coordinates. The final joint torques can be calculated as \begin{align} \label{eq:torque} \left[\begin{array}{c} \bm 0 \\ \bm \tau \end{array} \right] = \mathbf M (\Delta \ddot{\mathbf q} + \ddot{\mathbf q}_{cmd}) + \mathbf C + \mathbf g - \bm J_c^\intercal (\Delta \bm u + \bm u) \end{align} As for the swing leg, the joint torques $\bm {\tau}_{swing,n} \in \mathbb{R}^{5}$ are computed separately via the Jacobian transpose $\bm J_{v,n}^\intercal$ of leg $n$, \begin{align} \label{eq:forceTorqueMapSwing} \bm {\tau}_{swing,n} = \bm J_{v,n}^\intercal \bm F_{swing,n}, \end{align} where the swing-foot force $\bm F_{swing,n}$ is determined by a simple PD control law, \begin{align} \label{eq:pdlaw} \bm F_{swing,n}=\bm K_P(\bm p_{n,des}-\bm p_{n})+\bm K_D(\dot{\bm p}_{n,des}-\dot{\bm p}_n) \end{align} \section{Introduction} \label{sec:Introduction} Uneven-terrain locomotion has long been one of the most important problems that researchers aim to solve for bipedal robots via motion planning and control. Such a capability would allow bipedal robots to perform robust locomotion in many real-world tasks, such as rescue and exploration missions over unknown terrain.
Recent advances in control strategies have enabled many successful integrations of control frameworks with bipedal robots. For instance, on one hand, the Hybrid Zero Dynamics (HZD) model \cite{westervelt2003hybrid} is an effective control scheme employed on bipedal robots such as MABEL \cite{sreenath2011compliant}. HZD on the ATRIAS robot \cite{rezazadeh2015spring} has allowed more intricate motion planning strategies to be integrated, such as gait libraries for stepping stones \cite{nguyen2018dynamic}. The gait library collected from offline optimization allowed ATRIAS (2-D) to precisely place its foot on the stepping stones through online motion planning and position control. This position-control-based approach requires accurate terrain information, including the distance and height of the next stone, and is not robust to uneven terrain perturbations. On the other hand, force-based control schemes on quadruped robots have become more popular. Such control frameworks can be used with linearized dynamics models and constraints. Quadratic Programming (QP)-based force control and Model Predictive Control (MPC) on quadruped robots (\cite{di2018dynamic,nguyen2019optimized}) both employ simplified rigid-body dynamics and have demonstrated effectiveness in stable locomotion over uneven terrain. We believe bipedal robots can also benefit from the uneven-terrain robustness of force-based locomotion control. \begin{figure}[t] \center \includegraphics[width=1 \columnwidth]{Figures/title4.png} \caption{{\bfseries Bipedal Robot Traversing Terrain with Uneven Stepping Stones} Simulation video: \protect\url{https://youtu.be/8hLihy96lCg}. } \label{fig:title} \vspace{-1.5em} \end{figure} Our recent work on force-and-moment-based MPC schemes on a 16-Degree-of-Freedom (DOF) bipedal robot \cite{li2021force} has allowed stable 3-D locomotion with fixed gait periods (i.e., fixed-frequency MPC). \update{However, because the controller is unaware of the terrain, the robot cannot adapt its footsteps to it.} The next-step foot placement \cite{raibert1986legged} in bipedal locomotion depends on both the linear velocity and the gait period. Hence, varied step lengths at a constant walking velocity can be achieved by adjusting the gait period of each step. We introduce adaptive frequency into the MPC to allow the robot to walk with a different gait period at each step and thus achieve varied step lengths at a constant walking speed. Kino-dynamics-based trajectory optimization has been introduced and used in many works on legged robots (e.g., \cite{dai2014whole, herzog2016structured}). The framework has the advantage of simplified system dynamics while still being able to apply robot joint constraints. To synchronize the motion control and the optimization, we use the same simplified dynamics model in both the MPC and the optimization, the same foot placement policy in the swing-foot control and the optimization foot placement, and the same discrete time steps in the MPC and the optimization. Many related works (e.g., \cite{kryczka2015online,khadiv2016step,guo2021fast,daneshmand2021variable}) that use trajectory optimization/planning for bipedal gait and trajectory generation share the feature that foot placement adaptation is included in the framework to optimize capture-point locations. In our work, to allow bipedal robots to overcome very narrow stepping stones, exact foot placement on the stone is required.
We pre-define the desired step locations in the optimization to optimize the gait periods and CoM trajectory based on each stride length. Tracking the optimal trajectory with MPC alone is not ideal due to its inherently low sampling frequency, which becomes even lower with a long gait period. We therefore pair the MPC with a higher-frequency Whole-body Control (WBC) for more accurate trajectory tracking. The MIT Mini Cheetah \cite{katz2019mini,kim2019highly} quadruped robot and the MIT Humanoid robot \cite{chignoli2021humanoid} have both demonstrated outstanding balancing performance during dynamic motions with the force-based MPC and WBC combination. We develop the WBC strategy to work with our bipedal force-and-moment-based MPC. The WBC in \cite{kim2020dynamic}, employed on the bipedal robots of \cite{kim2016stabilizing, ahn2019control}, validated the feasibility of a WBC-type control strategy for dynamic locomotion with periodic gaits. In our approach, we combine kino-dynamics trajectory optimization with an adaptive-frequency MPC framework for a bipedal robot traversing stepping stones, and use WBC for low-level force-to-torque mapping and trajectory tracking control. The main contributions of the paper are as follows: \begin{itemize} \item \update{We allow the bipedal robot to have adaptive foot placements and gait periods for each step, and realize this in control with adaptive-frequency MPC as our main locomotion controller.} \item \update{We enhance the adaptive-frequency MPC with kino-dynamics trajectory optimization for optimal trajectory generation and WBC for tracking control.} \item \update{We use the proposed framework for bipedal locomotion over uneven stepping stones. The proposed method allows the bipedal robot to maintain a high speed of around 1.5 $\unit{m/s}$ when traversing uneven stepping stone terrains with height, width, and stone surface shape perturbations, while requiring only minimal terrain knowledge.} \end{itemize} The rest of the paper is organized as follows. Section~\ref{sec:robotModel} introduces the physical design parameters of the bipedal robot and an overview of the system architecture, including optimization and control. Section~\ref{subsec:Optimization} presents the adaptive-frequency trajectory optimization framework with the bipedal kino-dynamics model. Section~\ref{sec:trackingControl} presents the adaptive-frequency MPC framework. Simulation result highlights and comparisons are presented in Section~\ref{sec:Results}. \section{Kino-dynamics-based Adaptive-frequency Trajectory Optimization} \label{subsec:Optimization} Humans walk with different step lengths at every step to adapt to the terrain and can let the swing foot remain in the air for different periods of time. We intend our adaptive-frequency trajectory optimization framework to allow bipedal robots to walk with such characteristics. We choose the kino-dynamics model in our optimization framework in order to reduce the computational cost compared to a full-dynamics model. The average solving time of the offline trajectory optimization in our approach is shown in Table~\ref{tab:solvingTime}. \subsection{Simplified Dynamics Model} We first present the force-and-moment-based simplified dynamics model used in both the kino-dynamics trajectory optimization and the adaptive-frequency MPC framework, introduced in the authors' previous work \cite{li2021force}. The simplified force-based dynamics model with ground reaction force and moment control inputs is shown in Figure~\ref{fig:design}.
The control input consists of $ \bm u=[\bm F_1;\:\bm F_2;\:\bm M_1;\:\bm M_2]^\intercal \in \mathbb{R}^{10}$, where $ \bm F_n = [ F_{nx},\: F_{ny},\: F_{nz}]^\intercal$ and $\bm M_n = [ M_{ny},\: M_{nz}]^\intercal$ for leg $n = 1, 2$. Choosing the state variables as $[{\bm \Theta};{\bm p}_c;{\bm \omega};\dot {{\bm p}}_c]$ and the control inputs as $\bm u$, the simplified dynamics equation can be represented as \begin{align} \label{eq:simpDyn} \frac{d}{dt}\left[\begin{array}{c} {\bm \Theta}\\{\bm p}_c\\{\bm \omega}\\\dot {{\bm p}}_c \end{array} \right] = \bm A \left[\begin{array}{c} {\bm \Theta}\\{\bm p}_c\\{\bm \omega}\\\dot {{\bm p}}_c \end{array} \right] + \bm B \bm u + \left[\begin{array}{c} \mathbf 0_{3\times1}\\\mathbf 0_{3\times1}\\\mathbf 0_{3\times1}\\\bm g \end{array} \right] \end{align} \begin{align} \label{eq:A} \bm A = \left[\begin{array}{cccc} \mathbf 0_3 & \mathbf 0_3 & \mathbf R_z & \mathbf 0_3 \\ \mathbf 0_3 & \mathbf 0_3 & \mathbf 0_3 & \mathbf I_3\\ \mathbf 0_3 & \mathbf 0_3 & \mathbf 0_3 & \mathbf 0_3\\ \mathbf 0_3 & \mathbf 0_3 & \mathbf 0_3 & \mathbf 0_3 \end{array} \right], \mathbf R_z = \left[\begin{array}{ccc} {c_\psi} & -{s_\psi} & 0 \\ {s_\psi} & c_\psi & 0 \\ 0 & 0 & 1 \end{array} \right] \end{align} \begin{align} \label{eq:B} \bm B = \left[\begin{array}{cccc} \mathbf 0_3 & \mathbf 0_3 & \mathbf 0_{3\times2} & \mathbf 0_{3\times2} \\ \mathbf 0_3 & \mathbf 0_3 & \mathbf 0_{3\times2} & \mathbf 0_{3\times2} \\ \bm I_G^{-1}\left[(\bm p_1 - \bm p_c)\times\right] & \bm I_G^{-1}\left[(\bm p_2 - \bm p_c)\times\right] & \bm I_G^{-1}\mathbf L & \bm I_G^{-1}\mathbf L \\ \mathbf {I}_{3}/m_{trunk} & \mathbf {I}_{3}/m_{trunk} & \mathbf {0}_{3\times2} & \mathbf {0}_{3\times2} \end{array} \right] \end{align} where $s_\psi$ and $c_\psi$ denote $\sin\psi$ and $\cos\psi$, respectively. Note that $\mathbf R_z$ is simplified under the assumption of small roll and pitch angles, $\phi \approx 0, \: \theta \approx 0$ \cite{li2021force}. In equation (\ref{eq:B}), $\bm {I}_G \in \mathbb{R}^{3\times3}$ represents the rotational inertia of the rigid body in the world frame, and $\bm p_n$ represents the Cartesian coordinates of the contact point of the $n$th foot. $\mathbf L$ is the selection matrix enforcing the 5-D control input of each leg, $\mathbf L = [0, 0; 1, 0; 0, 1]$. \subsection{Optimization Problem Formulation} The adaptive-frequency trajectory optimization is an offline multiple-shooting discretization method \cite{bulirsch2002introduction} that optimizes the robot's CoM trajectory, foot placements, and the gait period of each step based on the terrain map. It also keeps the linear velocity close to the reference input to generate a smoother walking trajectory. The optimization variable $\mathbf X \in \mathbb{R}^{39(N+1)}$ includes \begin{align} \label{eq:X} \mathbf X = [\bm x_N ;\:\: \bm p_{N,1} ;\:\: \bm p_{N,2} ; \:\: \mathbf q_N ;\:\: \bm u_N ;\:\: dt_0\dots dt_{N}] \end{align} where $dt_0,\dots, dt_{N}$ are the discrete sampling times associated with the $N+1$ time steps. The subscript $N$ indicates that the variable is a column vector stacking the values at all $N+1$ time steps. For the bipedal walking gait, we define both the number of time steps the stance leg spends on the ground and the number of time steps the swing leg spends in the air to be 5; hence one complete two-step gait period consists of 10 time steps. The MPC prediction horizon is also 10 time steps, which means it predicts a full cycle of the periodic gait.
It is important to ensure that each group of five consecutive $dt_i$ has the same length, so that the gait period of each step $l$ is the sum of its five time steps. The formulation of the nonlinear programming (NLP) problem is as follows. The optimization objective is to drive the linear velocity close to the command and to minimize the ground reaction forces to maximize efficiency. \begin{align} \label{eq:cost} \underset{\bm{x},\:dt_0\dots dt_{N} }{\operatorname{minimize}} \:\: \sum_{i = 0}^{N} \bm \alpha_i(\bm{ \dot p}_{c,x}[i] -\bm {\dot p}_{c,x}^{ref})^2 + \bm u[i]^\intercal \bm \beta _i \bm u[i] \end{align} \begin{subequations} \begin{align} \label{eq:cons1} \operatorname{s.t.} \:\: \text{Initial Condition}:\:\bm x_0 = \bm x[0] \\ \label{eq:cons2} \:\: \text{End Condition}:\:\bm x_N = \bm x[N] \\ \label{eq:cons3} \text{Simplified Dynamics: equation (\ref{eq:simpDyn})} \\ \label{eq:cons4} \:\: \text{Periodic Gait Constraint} \\ \label{eq:cons5} \mathbf q_{min} \leq \mathbf q_n[i] = \texttt{IK}(\bm x[i],\: \bm p_n[i]) \leq \mathbf q_{max} \\ \label{eq:cons6} \bm \tau_{min} \leq \bm J_n^\intercal(\mathbf q_n[i])\bm u_n[i] \leq \bm \tau_{max} \\ \label{eq:cons7} \bm p_l(\texttt{terrain}) = \bm p_{n,l} = \bm p_{hip,l}+\frac{t_{stance}}{2}\bm { \dot p}_{c,l} \\ \label{eq:cons8} 0.02 \leq dt_i \leq 0.05 \end{align} \end{subequations} Equation (\ref{eq:cons4}) enforces the periodic walking gait of the bipedal robot with a 5-time-step stance phase and a 5-time-step swing phase. Equation (\ref{eq:cons5}) enforces the joint angle limits. Equation (\ref{eq:cons6}) constrains the joint torques via the contact Jacobians. Lastly, the swing foot placement is enforced by the inverted-pendulum-based foot placement policy (\cite{raibert1986legged,di2018dynamic,li2021force}). With this foot placement policy, the optimization framework can select the optimal gait period based on how far each step must reach to overcome the terrain while keeping the robot's linear velocity constant. $t_{stance}$ represents the total time the stance foot spends on the ground, which is the sum of the 5 time steps of step $l$. The placement at touch-down for each step $l$ is adapted to the terrain (i.e., each step lands on a stepping stone). \section{Results} \label{sec:Results} In this section, we present highlighted results validating our proposed adaptive-frequency control and optimization framework. The associated simulation videos can be found via the link under Figure~\ref{fig:title}. We validate our proposed approach in a high-fidelity, physically realistic simulation in MATLAB Simulink with the Simscape Multibody library. We also use the Spatial v2 software package \cite{featherstone2014rigid} to acquire the coefficients of the dynamics equations in the WBC, and CasADi \cite{Andersson2019} for the offline optimization. Firstly, we present the comparison between MPC-only control and MPC+WBC in tracking a sinusoidal height command in double-leg stance. Due to the low sampling frequency, previous works usually use MPC only as the locomotion controller and use QP-based force control as the balance/stance controller for its higher frequency (e.g., \cite{nguyen2019optimized,chignoli2021humanoid,li2021force}). Figure~\ref{fig:heightcomparison} shows the comparison of simulation snapshots between the two approaches; the proposed MPC+WBC approach tracks the height command closely, while the MPC-only approach fails over time.
\begin{figure}[h] \vspace{0.2cm} \center \includegraphics[width=1 \columnwidth]{Figures/comparison2.png} \caption{{\bfseries Height Command Tracking Results} Simulation snapshots at several time steps. } \label{fig:heightcomparison} \vspace{-0.2cm} \end{figure} Secondly, we compare the locomotion performance over stepping stones in simulation with the following approaches: \begin{enumerate} \item fixed-frequency MPC + WBC, with a fixed gait period of 0.3 $\unit{s}$; \item adaptive-frequency MPC + WBC; \item adaptive-frequency MPC + WBC + optimization. \end{enumerate} As can be seen in Figure~\ref{fig:fixed_gait_mpc}, the approach with fixed-frequency control cannot adapt the foot placement to the stepping stone gap distance. In Figure~\ref{fig:no_opt}, the adaptive-frequency MPC+WBC framework with manually input gait periods based on the terrain shows an improvement over the fixed-gait-period case. However, it can neither achieve precise foot placement on the stepping stones nor maintain a desirable trajectory, and therefore fails after only a few stones. Our proposed approach, shown in Figure~\ref{fig:with_opt}, with both adaptive-frequency control and optimization, allows the bipedal robot to traverse the stepping stone terrain. Figure~\ref{fig:velocity_tracking} shows the velocity tracking performance with our proposed approach 3). The simulated velocity stays smoothly close to the desired trajectory throughout the stepping stone terrain. \begin{figure}[t] \vspace{0.2cm} \center \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[clip, trim=0cm 8.2cm 0cm 0cm, width=\columnwidth]{Figures/comparison1.png} \caption{Simulation results: fixed-frequency control} \label{fig:fixed_gait_mpc} \end{subfigure} \\ \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[clip, trim=0cm 4.1cm 0cm 4.1cm, width=\columnwidth]{Figures/comparison1.png} \caption{Simulation results: adaptive-frequency control only} \label{fig:no_opt} \end{subfigure} \\ \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[clip, trim=0cm 0cm 0cm 8.1cm, width=\columnwidth]{Figures/comparison1.png} \caption{Simulation results: adaptive-frequency control + optimization (proposed approach)} \label{fig:with_opt} \end{subfigure} \caption{{\bfseries Motion Snapshots of Uneven Stepping Stone Locomotion} Comparison of fixed-frequency control vs. adaptive-frequency control vs. adaptive-frequency control + optimization } \label{fig:snapshotsComparison} \vspace{-0.0cm} \end{figure} \begin{figure}[!h] \vspace{0.2cm} \center \includegraphics[width=1 \columnwidth]{Figures/velocity_tracking.pdf} \caption{{\bfseries Velocity Tracking Results} Simulation with perturbed stone shapes } \label{fig:velocity_tracking} \end{figure} We also present the solver computation times for several tasks in CasADi with the IPOPT solver in MATLAB R2021b. As a benchmark, the PC platform we use for the offline optimization has an AMD Ryzen 5 5600X CPU clocked at 4.65 $\unit{GHz}$. In Table~\ref{tab:solvingTime}, we measure the solving time of the proposed adaptive-frequency trajectory optimization. The cases are categorized by the number of stepping stones in the terrain. We run the optimization with 30 randomized terrain setups for each case and compute the average time. Lastly, we present the uneven stepping stone terrain locomotion results with our proposed approach.
In realistic scenarios, the stepping stone surface shapes, heights, and widths may vary; hence errors and disturbances in a vision-based terrain map acquisition system may hinder the accuracy of the terrain information. In our approach, we allow the terrain map in the optimization framework to be simplified to uniformly sized stepping stones with varied center-to-center distances, shown in Figure~\ref{fig:optresults}. We then use this optimization result to control the robot to traverse terrains with various perturbations, shown in Figure~\ref{fig:trackingresults}. These terrain perturbations include varied stepping stone widths, heights, and surface shapes. In the above simulation results, the linear velocity the robot maintains during the task is 1.5 $\unit{m/s}$. The stone center-to-center gap distance is between 15 $\unit{cm}$ and 30 $\unit{cm}$. The maximum stone height perturbation is 5 $\unit{cm}$. The stone width perturbation varies between 4 $\unit{cm}$ and 10 $\unit{cm}$. \begin{table}[!t] \vspace{0.2cm} \centering \caption{Offline Optimization Solving Time} \label{tab:solvingTime} \begin{tabular}{ccccc} \hline Cases: & 4 stones & 5 stones & 6 stones & 7 stones\\ \hline Solving time: & 6.72$\unit{s}$ & 7.93$\unit{s}$ & 10.15$\unit{s}$ & 12.23$\unit{s}$ \\ \hline \end{tabular} \vspace{-0.3cm} \end{table} \section{Bipedal Robot Model and System Overview} \label{sec:robotModel} \subsection{Bipedal Robot Model} In this section, we present the bipedal robot model used in this work. Our bipedal robot model is enhanced from our previous design in \cite{li2021force}, a small-scale bipedal robot with 5-DoF legs. As presented in Figure~\ref{fig:design}, each of the robot legs consists of ab/ad, hip, thigh, calf, and ankle joints, which are all actuated by Unitree A1 torque-controlled motors. The A1 motor is a powerful joint motor with a 33.5 $\unit{Nm}$ maximum torque output and a 21.0 $\unit{rad/s}$ maximum joint speed output, while weighing only 0.6 $\unit{kg}$. \begin{figure}[!h] \vspace{0.1cm} \center \includegraphics[width=1 \columnwidth]{Figures/robotAndLeg2.png} \caption{{\bfseries Bipedal Robot Configuration and Simplified Dynamics Model}} \label{fig:design} \vspace{-0.5em} \end{figure} In this bipedal leg design, we strategically placed all joint actuators on the upper part of the thigh links, close to the hips, to concentrate the mass and thereby minimize the leg dynamics during locomotion. Negligible leg mass is an important assumption in our force-and-moment-based simplified dynamics model in the MPC \cite{li2021force}. The trunk mass of the bipedal robot is 5.8 $\unit{kg}$ and the overall mass is around 11 $\unit{kg}$. More details about the physical design parameters can be found in \cite{li2021force}. \subsection{System Overview} \label{sec:sysoverview} The optimization and control system block diagram is shown in Figure~\ref{fig:controlArchi}. We aim to achieve varied step lengths for each step in bipedal locomotion by varying the gait frequency in the adaptive-frequency MPC. The proposed framework is built around this controller. To allow more stable and efficient locomotion, we pair the MPC control framework with offline trajectory optimization to generate the desired trajectories.
Since the MPC has a low sampling frequency that is moreover determined by the varied gait periods, we use a higher-frequency, task-oriented WBC scheme to allow more accurate trajectory tracking control. \begin{figure}[!h] \vspace{-0.3cm} \center \includegraphics[width=1 \columnwidth]{Figures/system2.pdf} \caption{{\bfseries System Block Diagram} Optimization and control architecture.} \label{fig:controlArchi} \vspace{-.2cm} \end{figure} The optimization framework uses the terrain map to generate discrete optimization data, including the desired body CoM trajectory $\bm x_{des} \in \mathbb{R}^3$, the desired foot position $\bm p_{n,des} \in \mathbb{R}^3$ for the $n$th foot, and the discrete sampling time $dt_i$ at time step $i$ for the MPC. The CoM trajectory and foot positions are linearly interpolated to a sampling frequency of 1 $\unit{kHz}$ to match the frequency of the swing-leg control and the WBC. The MPC accepts the optimization data at its native frequency due to the synchronization of the sampling times. Reaction forces from the MPC and the swing-leg control are input into the WBC to be mapped to joint torques $\bm \tau \in \mathbb{R}^{10}$. The robot state feedback $\bm x \in \mathbb{R}^{12}$ includes the body Euler angles (roll, pitch, and yaw) ${\Theta = [\phi,\:\theta,\:\psi]}^\intercal$, the position $\bm p_c$ and velocity $\dot{\bm p}_c$ of the body CoM, and the angular velocity $\bm \omega$. The joint feedback $\mathbf q \in \mathbb{R}^{10}$ consists of the joint positions of the bipedal robot.
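Because the optimization outputs are sampled at the varied time steps $dt_i$, mapping them onto the 1 $\unit{kHz}$ controller clock is a simple linear interpolation; a minimal sketch (function and variable names are illustrative, not from the implementation):

\begin{verbatim}
import numpy as np

def resample_to_1khz(t_opt, x_opt):
    # Linearly interpolate optimized trajectories (CoM, foot positions),
    # sampled at the optimization time stamps t_opt, onto a 1 kHz grid.
    t_ctrl = np.arange(t_opt[0], t_opt[-1], 1e-3)
    x_ctrl = np.column_stack([np.interp(t_ctrl, t_opt, x_opt[:, k])
                              for k in range(x_opt.shape[1])])
    return t_ctrl, x_ctrl
\end{verbatim}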
{ "timestamp": "2022-09-20T02:21:53", "yymm": "2209", "arxiv_id": "2209.08664", "language": "en", "url": "https://arxiv.org/abs/2209.08664" }
\section{Introduction} Isometric actions of Lie groups on Riemannian manifolds have been widely studied in the literature, as they constitute a powerful tool for dealing with several interesting problems in both mathematics and physics, yielding important geometric and topological consequences. They appear in all branches of science where symmetries preserving length and angle measures play a role. In this paper we propose a generalization of the classical setting of isometric actions by studying a notion of isometric action of a categorified version of a Lie group \cite{BD,BS} on a categorified version of a Riemannian manifold \cite{dHF,dHF2,GGHR,PPT}. More precisely, we are interested in studying isometric actions of Lie $2$-groups on Riemannian groupoids. On the one hand, as explicitly mentioned by Baez and Lauda in \cite{BD}, the notion of Lie 2-group goes back to Brown and Spencer \cite{BS}, where it became clear that classical group theory is just the beginning of a larger subject sometimes called higher-dimensional group theory. In many contexts where we are tempted to use groups to tackle problems involving symmetries, it turns out to be more natural to use a richer kind of structure in which, in addition to group elements describing symmetries, we also have isomorphisms between these, thus describing symmetries between symmetries. On the other hand, Riemannian groupoids recently entered the picture as a differentiable model that allows Riemannian-geometric techniques to be applied to singular spaces more general than orbifolds or leaf spaces of singular foliations. Since the seminal works \cite{dHF,dHF2}, the notion of Riemannian groupoid (stack) defined and studied therein has been applied to satisfactorily extend several theories in which Riemannian manifolds have played an important role. For instance, some of the recent contributions in which Riemannian groupoid metrics have been used as a tool to describe topological and geometric features of Lie groupoids and their differentiable stacks are the following. \begin{itemize} \item Resolutions of proper Riemannian groupoids were introduced in \cite{PTW} in order to obtain a desingularization of their underlying differentiable stacks via a successive blow-up construction. \item A theory of stacky geodesics on Riemannian stacks was developed in \cite{dHdM}, allowing the authors to establish a stacky version of the Hopf--Rinow Theorem. \item The problem of understanding invariant linearization of proper Lie groupoids was addressed in \cite{dHdM2}, where the authors fixed and extended previous results in the literature and provided a sufficient criterion that uses compatible complete metrics and covers the case of proper group actions. \item Recently, a notion of Morse Lie groupoid morphism was introduced in \cite{OV}, allowing the main results of classical Morse theory to be extended to the context of Lie groupoids. It is worth mentioning that the notion of isometric 2-action we introduce in this paper was used in \cite{OV} as one of the fundamental ingredients needed to construct an equivariant double Morse--Bott complex which computes the equivariant cohomology of a $2$-action as defined in \cite{OBT}. \end{itemize} The paper is organized as follows. In Section \ref{S:2} we briefly introduce the necessary terminology and notions about Riemannian Lie groupoids and Lie $2$-groups that we will use throughout the paper.
In Section \ref{S:3} we define isometric Lie $2$-group actions on Riemannian groupoids and exhibit some of their immediate properties. We provide sufficient conditions ensuring the existence of Riemannian groupoid metrics that are invariant by the action of a Lie $2$-group, and we use the ideas behind the proof of the groupoid linearization theorem in \cite{dHF} to describe an equivariant weak groupoid linearization result. Based on the classical constructions, we state versions of both the Slice Theorem and the Equivariant Tubular Neighborhood Theorem for proper and free Lie $2$-group actions. We also quickly explain a way to define a ``2-group orbit type topological stratification'' for the Lie groupoid we are working with, define orthogonal Lie $2$-groups and describe their infinitesimal counterpart, exhibit a few examples, and provide some interesting applications using principal connection warpings and Cheeger deformations. Finally, in Section \ref{S:4} we present a description of the Lie $2$-group of strong (weak) groupoid isometries of a Lie groupoid equipped with a $0$-metric and determine a model for its Lie $2$-algebra of strong (weak) Killing multiplicative vector fields. We apply some of the results of this section to the case in which we are given an isometric Lie 2-group action. It is important to mention that our infinitesimal descriptions are motivated by the results developed in \cite{OW} to describe the Lie $2$-algebra structure carried by the set of multiplicative vector fields on a Lie groupoid. In particular, we show that the Lie $2$-algebra of weak Killing multiplicative vector fields is Morita invariant, thus yielding a good notion of geometric Killing vector field on a quotient Riemannian stack. This notion of geometric Killing vector field recovers the classical notions of Killing vector field on both a Riemannian manifold and a Riemannian orbifold, as defined for instance in \cite{BZ}, as well as the notion of transverse Killing vector field on a regular Riemannian foliation, as defined in \cite[p. 84]{Mo}. We end the section by proving that the algebra of geometric Killing vector fields on a quotient Riemannian stack represented by a Riemannian foliation groupoid is finite-dimensional. \medskip {\bf Acknowledgments:} We would like to thank Mateus de Melo and Cristian Ortiz for several enlightening discussions, comments, and suggestions which allowed us to improve this work. Herrera was supported by the Brazilian Federal Foundation CAPES - Finance Code 001. Valencia was supported by Grant 2020/07704-7, S\~ao Paulo Research Foundation (FAPESP). \section{Preliminaries}\label{S:2} In this short section we briefly introduce the basics on Lie groupoids that we shall use throughout. For specific details the reader may consult, for instance, \cite{BD,BS,dH,Ma,dHF}. A \emph{Lie groupoid} $X_1\rightrightarrows X_0$ consists of a manifold $X_0$ of objects and a manifold $X_1$ of arrows, two surjective submersions $s_X,t_X:X_1\to X_0$ indicating respectively the source and the target of arrows, and a smooth associative composition $m_X:X_2\to X_1$ defined on the set of composable arrows $X_2=X_1\times_{X_0} X_1$, admitting a unit $u_X:X_0\to X_1$ and an inverse $i_X:X_1\to X_1$, subject to the usual groupoid axioms. The collection of maps mentioned above are called the \emph{structural maps} of the Lie groupoid $X_1\rightrightarrows X_0$. We shall drop the sub-index notation for the structural maps only if there is no risk of confusion.
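Before listing further instances, we spell out one standard example for the reader's convenience. A smooth action of a Lie group $G$ on a manifold $M$ gives rise to the \emph{action groupoid} $G\times M\rightrightarrows M$, with structural maps $$s(g,x)=x,\quad t(g,x)=gx,\quad m((h,gx),(g,x))=(hg,x),\quad u(x)=(e,x),\quad i(g,x)=(g^{-1},gx),$$ whose orbits and isotropy groups are precisely the orbits and stabilizers of the action.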
Special instances of Lie groupoids are given by manifolds, Lie groups, Lie group actions, surjective submersions, foliations, pseudogroups, principal bundles, and vector bundles, among others \cite{dH, Ma}. Let $X_1\rightrightarrows X_0$ be a Lie groupoid. For each $x\in X_0$, its \emph{isotropy group} $X_x:=s_X^{-1}(x)\cap t_X^{-1}(x)$ is a Lie group and an embedded submanifold of $X_1$. There is an equivalence relation on $X_0$ defined by $x\sim y$ if there exists $p\in X_1$ with $s_X(p)=x$ and $t_X(p)=y$. The corresponding equivalence class of $x\in X_0$ is denoted by $\mathcal{O}_x\subseteq X_0$ and called the \emph{orbit} of $x$. This equivalence relation defines a quotient space $X_0/X_1$ called the \emph{orbit space} of $X_1\rightrightarrows X_0$. This space, equipped with the quotient topology, is in general a \emph{singular space}; that is, it does not carry a differentiable structure making the quotient projection $X_0\to X_0/X_1$ a surjective submersion. A \emph{Lie groupoid morphism} between two Lie groupoids $X_1\rightrightarrows X_0$ and $X_1'\rightrightarrows X_0'$ is a pair $\phi:=(\phi^1,\phi^0)$ where $\phi^1:X_1\to X_1'$ and $\phi^0:X_0\to X_0'$ are smooth maps commuting with both the source and target maps and preserving the composition maps. Finally, we recall that the \emph{Lie algebroid} associated to the Lie groupoid $X_1\rightrightarrows X_0$ is defined to be the vector bundle $A_X:=\textnormal{ker}(ds_X)|_{X_0}\subset TX_1$ with anchor map $\rho:A_X\to TX_0$ obtained by restricting $dt_X:TX_1\to TX_0$ to $A_X$. {\bf Riemannian groupoids:} The notion of Riemannian metric on a Lie groupoid that we will be working with along the paper was introduced in \cite{dHF} (see also \cite{GGHR,PPT}). Such a notion is compatible with the groupoid composition, so it plays an important role in several parts of our work. For our purposes we mainly focus on using both $1$-metrics and $0$-metrics, which were initially defined in \cite{GGHR} and \cite{PPT}, respectively, and were later recovered in \cite{dHF} as particular cases of a more general notion. We start by recalling that a submersion $\pi:(E,\eta^E)\to B$, with $(E,\eta^E)$ a Riemannian manifold, is said to be \emph{Riemannian} if its fibers are equidistant (transverse condition). In this case the base $B$ gets an induced metric $\eta^B:=\pi_\ast \eta^E$ for which the linear map $d\pi(e):(\textnormal{ker}(d\pi(e)))^\perp\to T_{\pi(e)}B$ is an isometry for all $e\in E$. If $(\eta^{E})^\ast$ denotes the dual metric associated to $\eta^{E}$, then the condition for a Riemannian submersion can be rephrased as follows: for all $e\in E$ the map $d\pi(e)^\ast: T_{\pi(e)}^\ast B \to \textnormal{ker}(d\pi(e))^\circ$ is an isometry, where $\textnormal{ker}(d\pi(e))^\circ$ denotes the annihilator of the vectors tangent to the fiber. It is well known that, given a Lie groupoid $X_1\rightrightarrows X_0$, every pair of composable arrows in $X_2$ may be identified with a commutative triangle, so that $X_2$ admits an action of $S_3$ determined by permuting the vertices of such triangles. In these terms, a \emph{Riemannian groupoid} is a pair $(X_1\rightrightarrows X_0,\eta)$ where $X_1\rightrightarrows X_0$ is a Lie groupoid and $\eta=\eta^{(2)}$ is a Riemannian metric on $X_2$ that is invariant by the $S_3$-action and transverse to the composition map $m_X:X_2\to X_1$.
The metric $\eta^{(2)}$ induces metrics $\eta^{(1)}=((\pi_2)_X)_\ast \eta^{(2)}=(m_X)_\ast \eta^{(2)}=((\pi_1)_X)_\ast \eta^{(2)}$ on $X_1$ and $\eta^{(0)}=(s_X)_\ast \eta^{(1)}=(t_X)_\ast \eta^{(1)}$ on $X_0$ such that $(\pi_2)_X, m_X, (\pi_1)_X:X_2\to X_1$ and $s_X,t_X:X_1\to X_0$ are Riemannian submersions and $i_X:X_1\to X_1$ is an isometry. This is because the $S_3$-action permutes these face maps. The metric $\eta^{(j)}$, for $j=2,1,0$, is called a $j$-\emph{metric}. It is important to mention that every proper groupoid can be endowed with a $2$-metric (more generally, an $n$-metric in the sense of \cite{dHF}), and that if a Lie groupoid admits a $2$-metric then it is weakly linearizable around any saturated submanifold. For more details see \cite{dHF}. {\bf Lie 2-groups:} A \emph{Lie 2-group} is a Lie groupoid in the category of Lie groups. In other words, a Lie 2-group is a Lie groupoid $G_1\rightrightarrows G_0$ where both $G_1$ and $G_0$ are Lie groups and the structural maps of $G_1\rightrightarrows G_0$ are Lie group homomorphisms \cite{BD,BS}. We will denote by $\ast$ the composition of arrows in $G_1\rightrightarrows G_0$ and by $\cdot$ the group multiplication of $G_1$. For instance, the fact that the multiplication is a morphism of groups amounts to the identity: \begin{equation}\label{MultiProduct} (g_1\ast g_2)\cdot(g_1'\ast g_2')=(g_1\cdot g_1')\ast(g_2\cdot g_2'),\quad \forall (g_1,g_2),(g_1',g_2')\in G_2, \end{equation} where $G_2$ is the space of pairs of composable arrows of $G_1\rightrightarrows G_0$. It is well known that there are several alternative ways to think of Lie 2-groups. One of them is described in terms of crossed modules. Recall that a \emph{crossed module of Lie groups} is a quadruple $(G,H,\rho,\alpha)$ consisting of a Lie group homomorphism $\rho:H\to G$ together with an action $\alpha$ of $G$ on $H$, $(g,h)\mapsto g h$, by Lie group automorphisms such that: \[ \rho(g h)=g \rho(h) g^{-1},\quad \rho(h)h'=h h' h^{-1}, \quad g\in G,\ h,h'\in H. \] To such a crossed module one associates a Lie $2$-group with objects the Lie group $G_0:=G$ and arrows the semi-direct product $G_1:=H\rtimes G$, so that: \[ (h_1,g_1)\cdot (h_2,g_2):=(h_1 (g_1h_2), g_1 g_2), \] and structure maps: \[ s_G(h,g)=g,\quad t_G(h,g)=\rho(h)g, \quad (h_1,\rho(h_2)g_2)\ast (h_2,g_2)=(h_1h_2,g_2). \] Conversely, any Lie 2-group $G_1 \rightrightarrows G_0$ has an associated crossed module of Lie groups $(G,H,\rho,\alpha)$, where $G=G_0$, $H=\textnormal{ker}(s_G)$, $\rho:=t_G|_H:H\to G$, and $G$ acts on $H$ by conjugation via the identity bisection: $g h:=1_g\cdot h\cdot 1_{g^{-1}}$. This defines an equivalence of categories between the category of Lie 2-groups and the category of crossed modules of Lie groups \cite{BD}. A \emph{Lie 2-algebra} is a Lie groupoid in the category of Lie algebras. More concretely, a Lie 2-algebra is a Lie groupoid $\mathfrak{g}_1\rightrightarrows \mathfrak{g}_0$ where both $\mathfrak{g}_1$ and $\mathfrak{g}_0$ are Lie algebras and the structural maps of $\mathfrak{g}_1\rightrightarrows \mathfrak{g}_0$ are Lie algebra homomorphisms. As expected, the Lie functor provides a one-to-one correspondence between Lie 2-groups and Lie 2-algebras. The infinitesimal object associated to a crossed module of Lie groups is the so-called \emph{crossed module of Lie algebras}.
This is a quadruple $(\mathfrak{g},\mathfrak{h},\partial,\mathcal{L})$ where $\mathfrak{g}$ and $\mathfrak{h}$ are Lie algebras and $\partial:\mathfrak{h}\to\mathfrak{g}$ and $\mathcal{L}:\mathfrak{g}\to\textnormal{Der}(\mathfrak{h})$ are Lie algebra homomorphisms verifying
$$\partial(\mathcal{L}_xy)=[x,\partial(y)]_\mathfrak{g},\quad x\in\mathfrak{g},\ y\in\mathfrak{h},\quad\textnormal{and}\quad \mathcal{L}_{\partial(x)}y=[x,y]_\mathfrak{h},\quad x,y\in\mathfrak{h}.$$
A Lie $2$-algebra $\mathfrak{g}_1\rightrightarrows \mathfrak{g}_0$ has an associated crossed module given by the data $\mathfrak{h}=\textnormal{ker}(s)$, $\mathfrak{g}=\mathfrak{g}_0$, $\partial=t|_\mathfrak{h}$ and $\mathcal{L}_x=\textnormal{ad}^{1}_{u(x)}$ for all $x\in \mathfrak{g}$. Conversely, a crossed module of Lie algebras $(\mathfrak{g},\mathfrak{h},\partial,\mathcal{L})$ has an associated Lie $2$-algebra given by the data $\mathfrak{g}_1=\mathfrak{h}\rtimes \mathfrak{g}$, with Lie algebra structure provided by the semi-direct product with respect to $\mathcal{L}$, $\mathfrak{g}_0=\mathfrak{g}$, and structural maps
$$s(x,y)=y,\quad t(x,y)=\partial(x)+y,\quad u(y)=(0,y),\quad i(x,y)=(-x,y+\partial(x)), $$
$$m((x',y+\partial(x)),(x,y))=(x+x',y).$$
A \emph{morphism} of crossed modules $f:(\mathfrak{g},\mathfrak{h},\partial,\mathcal{L})\to (\mathfrak{g}',\mathfrak{h}',\partial',\mathcal{L}')$ consists of two Lie algebra homomorphisms $f_0:\mathfrak{g}\to \mathfrak{g}'$ and $f_1:\mathfrak{h}\to \mathfrak{h}'$ such that $f_0\circ \partial =\partial'\circ f_1$ and $f_1(\mathcal{L}_xy)=\mathcal{L}'_{f_0(x)}(f_1(y))$. This induces a pair of Lie algebra homomorphisms $\textnormal{ker}(\partial)\to \textnormal{ker}(\partial')$ and $\textnormal{coker}(\partial)\to \textnormal{coker}(\partial')$. A morphism of crossed modules is a \emph{quasi-isomorphism} if both of these Lie algebra morphisms are isomorphisms. The \emph{derived category} of crossed modules of Lie algebras is defined to be the localization of the category of crossed modules of Lie algebras obtained by inverting all quasi-isomorphisms.
\section{Isometric Lie 2-group actions}\label{S:3}
In the sequel we shall denote by $G_1\rightrightarrows G_0$ a Lie 2-group and by $(X_1\rightrightarrows X_0,\eta)$ a Riemannian groupoid. A \emph{left 2-action} of $G_1\rightrightarrows G_0$ on $X_1\rightrightarrows X_0$ is defined to be a Lie groupoid morphism $\theta=(\theta^1,\theta^0):(G_1\times X_1\rightrightarrows G_0\times X_0)\to(X_1\rightrightarrows X_0)$ such that both maps $\theta^1$ and $\theta^0$ are usual left Lie group actions. Here $G_1\times X_1\rightrightarrows G_0\times X_0$ denotes the product Lie groupoid. Note that given a left 2-action of $G_1\rightrightarrows G_0$ on $X_1\rightrightarrows X_0$ we immediately get that the structural maps of $X_1\rightrightarrows X_0$ are equivariant with respect to the structural maps of $G_1\rightrightarrows G_0$. More precisely, if $p\in X_1, x\in X_0$ and $g\in G_1, g_0\in G_0$ then we obtain that
\begin{equation}\label{Equivariant}
s_X(gp)=s_G(g)s_X(p),\quad t_X(gp)=t_G(g)t_X(p),\quad u_X(g_0x)=u_G(g_0)u_X(x).
\end{equation}
Moreover, the action on arrows is \emph{multiplicative}, meaning that for pairs of composable arrows $(p,q)\in X_2$ and $(g,h)\in G_2$ the following identity holds:
\begin{equation}\label{MultAction}
(g*h)(p*q)=(gp)*(hq).
\end{equation}
Here we denote $m_X(p,q)=p*q$ and $m_G(g,h)=g*h$.
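For instance (anticipating the subsection on orthogonal Lie $2$-groups below), every Lie $2$-group acts on itself on the left by taking $\theta^j$ to be the multiplication of $G_j$, $j=0,1$: the identities \eqref{Equivariant} then express that the structural maps of $G_1\rightrightarrows G_0$ are group homomorphisms, while the multiplicativity \eqref{MultAction} is precisely the interchange identity \eqref{MultiProduct}.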
\begin{definition}
A $2$-action of $G_1\rightrightarrows G_0$ on $(X_1\rightrightarrows X_0,\eta)$ is said to be \emph{isometric} if $G_1$ acts by isometries on $(X_1,\eta^{(1)})$.
\end{definition}
An immediate consequence of the previous definition is the following.
\begin{lemma}\label{Rmk1}
The action of $G_0$ on $(X_0,\eta^{(0)})$ is also by isometries.
\end{lemma}
\begin{proof}
Let $v,w\in T_{x}X_0$ and $g_0\in G_0$. It is clear that there are $p\in X_1$ with $s_X(p)=x$ and $\tilde{v},\tilde{w}\in \textnormal{ker}(d(s_X)_p)^{\perp_{\eta^{(1)}}}$ such that $d(s_X)_p(\tilde{v})=v$ and $d(s_X)_p(\tilde{w})=w$, as well as $g\in G_1$ such that $s_G(g)=g_0$. Therefore,
\begin{eqnarray*}
(\theta^0_{g_0})_x^\ast\eta^{(0)}(v,w) & = & \eta^{(0)}_{g_0x}(d(\theta^0_{g_0})_x(v),d(\theta^0_{g_0})_x(w))=\eta^{(0)}_{g_0x}(d(\theta^0_{g_0}\circ s_X)_p(\tilde{v}),d(\theta^0_{g_0}\circ s_X)_p(\tilde{w}))\\
& \overset{\star}{=} & \eta^{(0)}_{g_0x}(d(s_X\circ \theta^1_{g})_p(\tilde{v}),d(s_X\circ \theta^1_{g})_p(\tilde{w}))=\eta^{(1)}_{gp}(d(\theta^1_{g})_p(\tilde{v}),d(\theta^1_{g})_p(\tilde{w}))\\
& = & \eta^{(1)}_{p}(\tilde{v},\tilde{w})=\eta^{(0)}_{x}(v,w).
\end{eqnarray*}
In the equality $\star$ above we used the fact that $s_X$ is a Riemannian submersion and that the action $\theta^1$ preserves the horizontal distribution $\textnormal{ker}(ds_X)^{\perp_{\eta^{(1)}}}$. Note that this computation does not depend on the choice of $\tilde{v},\tilde{w},p$, and $g$. Furthermore, we may obtain the same conclusion by choosing $\eta^{(0)}=(t_X)_\ast \eta^{(1)}$ instead of $\eta^{(0)}=(s_X)_\ast \eta^{(1)}$.
\end{proof}
\begin{remark}\label{Rmk2}
It is clear that $G_2$ is also a Lie group with the structure induced from the direct product $G_1\times G_1$. With this in mind it is simple to see that the $2$-action $\theta$ induces a canonical left action $\theta^2$ of $G_2$ on $X_2$. We could have defined the notion of isometric $2$-action by requiring that $G_2$ acts on $(X_2,\eta^{(2)})$ by isometries. If we do so then, just as in Lemma \ref{Rmk1}, we can prove that the action of $G_1$ on $(X_1,\eta^{(1)})$ is by isometries and, in turn, that the action of $G_0$ on $(X_0,\eta^{(0)})$ is also by isometries. This is because $\eta^{(1)}=((\pi_2)_X)_\ast \eta^{(2)}=(m_X)_\ast \eta^{(2)}=((\pi_1)_X)_\ast \eta^{(2)}$ and $(\pi_2)_X, m_X, (\pi_1)_X:X_2\to X_1$ are Riemannian submersions. More generally, if $\eta^{(n)}$ is an $n$-metric on $X_n$ in the sense of \cite{dHF} and the induced left action $\theta^n$ of $G_n$ on $(X_n,\eta^{(n)})$ is by isometries, then the action $\theta^k$ of $G_k$ on $(X_k,\eta^{(k)})$ is by isometries for all $0\leq k\leq n-1$. Conversely, if we suppose that $G_0$ and $G_1$ act by isometries on $(X_0,\eta^{(0)})$ and $(X_1,\eta^{(1)})$, respectively, then the induced action of $G_n$ on $(X_n,\eta^{(n)})$ is by isometries for all $n\geq 2$. The last assertion follows from the formula
$$\eta^{(2)}=((\pi_1)_X)^\ast \eta^{(1)}+((\pi_2)_X)^\ast \eta^{(1)}-(s_X\circ (\pi_1)_X)^\ast \eta^{(0)},$$
which is derived from \cite[Rem. 2.5]{dHF} once the $n$-metric has been chosen.
\end{remark}
For the purposes of this paper we only work with $1$-metrics and $0$-metrics unless otherwise stated. Most of the notions we introduce below have analogous statements when the simplicial approach of Remark \ref{Rmk2} is considered.
\begin{remark}\label{Rmk3}
There is a different formulation of the notion of $2$-action in terms of crossed modules \cite{hsz}.
Namely, an action $\beta=(\beta^0, \beta^1, \beta^2)$ of a crossed module of Lie groups $(G,H,\rho,\alpha)$ on $X_1\rightrightarrows X_0$ consists of three smooth actions $\beta^0:G\times X_0\to X_0$, $\beta^1:G\times X_1\to X_1$, and $\beta^{2}:H\times X_1\to X_1$ satisfying the compatibility conditions (6.3.8)–(6.3.12) from \cite[Def. 6.3.4]{hsz}. Furthermore, from \cite[Lem. 6.3.13]{hsz}, if $\theta=(\theta^1,\theta^0)$ is a $2$-action of $G_1\rightrightarrows G_0$ on $X_1\rightrightarrows X_0$ then $\beta^0=\theta^0$, $\beta^1=\theta^1\circ (u_G\times \textnormal{id}_{X_1})$, and $\beta^2=\theta^1|_{H\times X_1}$ define an action of the associated crossed module on $X_1\rightrightarrows X_0$. Conversely, given an action $\beta=(\beta^0, \beta^1, \beta^2)$ of a crossed module $(G,H,\rho,\alpha)$ on $X_1\rightrightarrows X_0$, the maps $\theta^0=\beta^0$ and $\theta^1_{(h,g)}=\beta^2_{h}\circ \beta^1_g$, for all $(h,g)\in H\rtimes G$, define a $2$-action of the associated Lie $2$-group.
\end{remark}
Motivated by this, we make the following definition.
\begin{definition}
We say that an action $\beta=(\beta^0, \beta^1, \beta^2)$ of a crossed module of Lie groups $(G,H,\rho,\alpha)$ on $(X_1\rightrightarrows X_0,\eta)$ is isometric if both $\beta^1$ and $\beta^2$ determine isometric actions on $(X_1,\eta^{(1)})$.
\end{definition}
Thus, the following result is clear:
\begin{lemma}
There exists a one-to-one correspondence between isometric actions of Lie $2$-groups and isometric actions of crossed modules of Lie groups.
\end{lemma}
An interesting consequence of our definition of isometric Lie $2$-group action is the following. The notion of Riemannian submersion in the Lie groupoid context was introduced in \cite{dHF2}, so the next result is to be expected:
\begin{proposition}\label{PQuotient}
If $\theta$ is a free and proper isometric $2$-action of $G_1\rightrightarrows G_0$ on $(X_1\rightrightarrows X_0,\eta)$ then there is a structure of Riemannian groupoid $(X_1/G_1\rightrightarrows X_0/G_0, \overline{\eta})$ such that the canonical projection $\pi=(\pi_1,\pi_0):(X_1\rightrightarrows X_0)\to (X_1/G_1\rightrightarrows X_0/G_0)$ becomes a Riemannian groupoid submersion.
\end{proposition}
\begin{proof}
We start by exhibiting the Lie groupoid structure on $X_1/G_1\rightrightarrows X_0/G_0$. This fact was stated, for instance, in \cite{GZ} but without proof, so we provide one for the sake of completeness. It is well known that $X_j/G_j$ admits a unique manifold structure such that $\pi_j$ (for $j=0,1$) is a surjective submersion. We define source and target maps respectively as $\overline{s}([p])=[s_X(p)]$ and $\overline{t}([p])=[t_X(p)]$ for all $p\in X_1$. As a consequence of Identities \eqref{Equivariant} these maps are well defined and, moreover, both of them are surjective submersions since $\pi_j$, $s_X$, and $t_X$ are so. We have that $([p],[q])\in (X_1/G_1)_2$ if and only if $\overline{s}([p])=\overline{t}([q])$, which in turn holds if and only if $s_X(p)=g_0t_X(q)$ for some $g_0\in G_0$. Thus, we define $\overline{m}([p],[q])=[m_X(p,gq)]$ for some $g\in G_1$ such that $t_G(g)=g_0$. It is simple to check that $\overline{m}$ does not depend on the choice of $g$, and it is well defined because of Property \eqref{MultAction}. It is also clearly smooth and associative since $m_X$ is associative and Identity \eqref{MultAction} is satisfied.
The unit map and the inversion are respectively defined by $\overline{u}([x])=[u_X(x)]$ and $\overline{i}([p])=[i_X(p)]$ for all $x\in X_0$ and $p\in X_1$. It is also easy to verify that these maps are well defined and smooth, and that they satisfy the required groupoid conditions. Consider, for $j=0,1$, the induced Riemannian metric $\overline{\eta}^{(j)}=(\pi_j)_\ast \eta^{(j)}$ on $X_j/G_j$ making $\pi_j$ a Riemannian submersion. Moreover, note that
$$\overline{i}_\ast \overline{\eta}^{(1)}=(\overline{i}\circ \pi_1)_\ast \eta^{(1)}=(\pi_1\circ i_X)_\ast\eta^{(1)}=(\pi_1)_\ast \eta^{(1)}=\overline{\eta}^{(1)},$$
since $i_X$ is an isometry, and
$$\overline{s}_\ast \overline{\eta}^{(1)}=(\overline{s}\circ \pi_1)_\ast \eta^{(1)}=(\pi_0\circ s_X)_\ast\eta^{(1)}=(\pi_0)_\ast \eta^{(0)}=(\pi_0\circ t_X)_\ast\eta^{(1)}=(\overline{t}\circ \pi_1)_\ast \eta^{(1)}=\overline{t}_\ast \overline{\eta}^{(1)},$$
so that $\overline{s}_\ast \overline{\eta}^{(1)}=\overline{t}_\ast \overline{\eta}^{(1)}=\overline{\eta}^{(0)}$. Therefore, $(X_1/G_1\rightrightarrows X_0/G_0,\overline{\eta})$ with $\overline{\eta}=(\overline{\eta}^{(1)},\overline{\eta}^{(0)})$ is a Riemannian groupoid and $\pi=(\pi_1,\pi_0)$ is a Riemannian submersion of groupoids, as desired.
\end{proof}
\begin{remark}
Let us denote by $\eta^\ast$ the dual metric associated to a Riemannian metric $\eta$. From \cite{dHF} we know that if $\pi:E\to B$ is a submersion and $\lbrace \eta_1,\cdots,\eta_k\rbrace$ is a collection of $\pi$-transverse metrics then their tangent average $\frac{1}{k}\sum_{l=1}^k\eta_l$ fails, in general, to be $\pi$-transverse. Nevertheless, their cotangent average $\frac{1}{k}\left( \sum_{l=1}^k\eta_l^\ast\right)^\ast$ is always $\pi$-transverse, which sometimes makes it more advantageous to take a cotangent point of view in the study of Riemannian submersions.
\end{remark}
We now adapt the classical averaging approach used to ensure the existence of invariant Riemannian metrics on $G$-manifolds in order to provide a similar construction in our context. Namely:
\begin{lemma}\label{Existence1}
If $G$ is a compact Lie group, seen as a unit Lie 2-group $G\rightrightarrows G$, acting on $(X_1\rightrightarrows X_0,\eta)$, then there exists a $1$-metric $\overline{\eta}$ making such an action isometric. In particular, if $G$ is compact and $X_1\rightrightarrows X_0$ is proper then there always exists a $1$-metric making such a $2$-action isometric.
\end{lemma}
\begin{proof}
Let us consider the normalized Haar measure $\mu$ on $G$. We define $\overline{\eta}=(\overline{\eta}^{(1)},\overline{\eta}^{(0)})$ by averaging its dual as follows:
$$(\overline{\eta}^{(1)})^\ast:=\int_{G}(\theta^1_g)^\ast (\eta^{(1)})^\ast d\mu(g).$$
The dual metric $(\overline{\eta}^{(0)})^\ast$ has a similar defining formula, using $\eta^{(0)}$ and $\theta^0$ instead of $\eta^{(1)}$ and $\theta^1$. It is simple to check that $\overline{\eta}=(\overline{\eta}^{(1)},\overline{\eta}^{(0)})$ is the $1$-metric we are looking for. The last part of the assertion follows from the fact that every proper Lie groupoid admits $n$-metrics \cite{dHF}.
\end{proof}
A similar result can be obtained by only requiring that $G\rightrightarrows G$ acts properly on $(X_1\rightrightarrows X_0,\eta)$; the same proof as in Lemma \ref{Existence1} goes through after applying \cite[Lem. 4.2]{K}. Let us assume for a moment that $G_1$ is compact, so that $G_0$ is also compact.
If $\mu_1$ is the normalized Haar measure on $G_1$ then, by uniqueness, the pushforward measure $s_{G\ast}\mu_1$ agrees with the normalized Haar measure $\mu_0$ on $G_0$, since $s_G$ is a surjective Lie group homomorphism and
\begin{equation}\label{PushHaar}
\int_{G_0}f d(s_{G\ast}\mu_1)=\int_{G_1} f\circ s_G d\mu_1,
\end{equation}
for each continuous function $f:G_0\to \mathbb{R}$. Analogously, $t_{G\ast}\mu_1=\mu_0$ and $i_{G\ast}\mu_1=\mu_1$. Note also that the identities $i_{G\ast}\mu_1=\mu_1$ and $s_G\circ i_G=t_G$ immediately imply that $t_{G\ast}\mu_1=s_{G\ast}\mu_1$. Thus, we are in a position to state:
\begin{theorem}\label{Existence2}
If $G_1\rightrightarrows G_0$ is a Lie $2$-group, with $G_1$ compact, acting on $(X_1\rightrightarrows X_0,\eta)$, then there exists a $1$-metric $\overline{\eta}$ making such a $2$-action isometric.
\end{theorem}
\begin{proof}
As in Lemma \ref{Existence1}, by dual averaging we define:
$$(\overline{\eta}^{(1)})^\ast:=\int_{G_1}(\theta^1_g)^\ast(\eta^{(1)})^\ast d\mu_1(g)\quad\textnormal{and}\quad (\overline{\eta}^{(0)})^\ast:=\int_{G_0}(\theta^0_{g_0})^\ast(\eta^{(0)})^\ast d\mu_0(g_0).$$
The result follows from the fact that $\theta$ is a $2$-action, that $\eta$ is already a $1$-metric, and that Identity \eqref{PushHaar} holds. Indeed, on the one hand, a straightforward computation using the fact that $i_X$ is an isometry of $(X_1,\eta^{(1)})$, together with Identity \eqref{Equivariant} and $i_{G\ast}\mu_1=\mu_1$, shows that $i_X$ is an isometry of $(X_1,\overline{\eta}^{(1)})$ as well. On the other hand, the fact that $s_X:X_1\to X_0$ is a Riemannian submersion tells us that $ds_X(p)^\ast: T_{s_X(p)}^\ast X_0 \to \textnormal{ker}(ds_X(p))^\circ$ is an isometry for all $p\in X_1$, where $\textnormal{ker}(ds_X(p))^\circ$ denotes the annihilator of the vectors tangent to the fiber. Given $p\in X_1$, covectors $\alpha,\beta\in T_{s_X(p)}^\ast X_0$, and $g\in G_1$ such that $s_G(g)=g_0$, we get the following chain of equalities:
\begin{eqnarray*}
(\overline{\eta}^{(0)}_{s_X(p)})^\ast (\alpha,\beta) &=& \int_{G_0}(\eta^{(0)}_{g_0s_X(p)})^\ast ((\theta^0_{g_0})^\ast(\alpha ),(\theta^0_{g_0})^\ast(\beta ))d\mu_0\\
& \overset{\eqref{PushHaar}}{=} & \int_{G_1}(\eta^{(0)}_{s_G(g)s_X(p)})^\ast ((\theta^0_{s_G(g)})^\ast(\alpha ),(\theta^0_{s_G(g)})^\ast(\beta ))d\mu_1\\
& = & \int_{G_1}(\eta^{(1)}_{gp})^\ast (ds_X(gp)^\ast((\theta^0_{s_G(g)})^\ast(\alpha )),ds_X(gp)^\ast((\theta^0_{s_G(g)})^\ast(\beta )))d\mu_1\\
& = & \int_{G_1}(\eta^{(1)}_{gp})^\ast ((\theta^1_{g})^\ast(ds_X(p)^\ast(\alpha )),(\theta^1_{g})^\ast(ds_X(p)^\ast(\beta )))d\mu_1\\
&= & (\overline{\eta}^{(1)}_{p})^\ast (ds_X(p)^\ast(\alpha ),ds_X(p)^\ast(\beta )),
\end{eqnarray*}
from which we conclude that $s_X$ is also Riemannian for the averaged metrics. Analogous computations yield the same conclusion for $t_X:X_1\to X_0$, so the result follows as claimed.
\end{proof}
\begin{remark}\label{Rmk4}
It is worth noticing that if we consider an $n$-metric on $X_n$ and take into account the simplicial approach described in Remark \ref{Rmk2}, then by applying averaging arguments similar to those in the proof of Theorem \ref{Existence2} it is possible to show the existence of another $n$-metric on $X_n$ that is invariant under the action of $G_n$ on $X_n$.
\end{remark}
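As a consistency check of Identity \eqref{PushHaar}, which was used in the proof of Theorem \ref{Existence2}, consider a Lie $2$-group $G_1=H\rtimes G\rightrightarrows G$ arising from a crossed module $(G,H,\rho,\alpha)$ of compact Lie groups. Since automorphisms of a compact Lie group preserve its normalized Haar measure, the normalized Haar measure of $H\rtimes G$ is the product $\mu_H\otimes \mu_G$, and therefore
$$\int_{G_1}f\circ s_G\, d\mu_1=\int_{H\times G}f(g)\, d\mu_H(h)\, d\mu_G(g)=\int_{G_0}f\, d\mu_0,$$
recovering $s_{G\ast}\mu_1=\mu_0$ directly.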
The \emph{tangent groupoid} $TX_1\rightrightarrows TX_0$ is obtained by applying the tangent functor to both the manifolds of arrows and objects and to all of the structural maps. If $S \subset X_0$ is a saturated submanifold (for instance, an orbit) then we can restrict the groupoid structure to $X_{S}=s_X^{-1}(S)=t_X^{-1}(S)$, thus obtaining a Lie subgroupoid $X_S\rightrightarrows S$ of $X_1\rightrightarrows X_0$. Furthermore, the Lie groupoid structure of $TX_1\rightrightarrows TX_0$ can be restricted to define a new Lie groupoid $\nu(X_S)\rightrightarrows \nu(S)$ having the property that all of its structural maps are fiberwise isomorphisms. The notion of \emph{linearization} around an orbit in the Lie groupoid setting is well known; it remained an open problem for a long time, even when restricted to, for instance, proper groupoids. We refer the reader to \cite{CS} for a quick introduction to and contextualization of this notion.
\begin{definition}
Suppose that we have a Lie $2$-action of $G_1\rightrightarrows G_0$ on $X_1\rightrightarrows X_0$ and let us pick a $G_0$-invariant saturated submanifold $S\subset X_0$. We say that $X_1\rightrightarrows X_0$ is \emph{equivariantly weakly linearizable} at $S$ if there exist $G$-invariant Lie groupoid neighborhoods $\widetilde{V}\rightrightarrows V$ of $X_{S}\rightrightarrows S$ in $\nu(X_S)\rightrightarrows \nu(S)$ (seen as the zero section) and $\widetilde{U}\rightrightarrows U$ of $X_S\rightrightarrows S$ in $X_1\rightrightarrows X_0$, and a $G$-equivariant isomorphism of Lie groupoids $\phi: (\widetilde{V}\rightrightarrows V)\xrightarrow[]{\cong} (\widetilde{U}\rightrightarrows U)$ which is the identity on $X_S\rightrightarrows S$.
\end{definition}
The $G$-invariance of the Lie groupoid neighborhood in $\nu(X_S)\rightrightarrows \nu(S)$, used in the previous definition, makes sense because of the following facts. Let us assume in this case that we are given a $2$-metric and that $G_2$ acts on $(X_2,\eta^{(2)})$ by isometries (see Remark \ref{Rmk2}). It is simple to check that every $2$-action $\theta=(\theta^1,\theta^0)$ of $G_1\rightrightarrows G_0$ on $X_1\rightrightarrows X_0$ induces a $2$-action $T\theta=(T\theta^1,T\theta^0)$ of $G_1\rightrightarrows G_0$ on $TX_1 \rightrightarrows TX_0$ by differentiating the actions $\theta^1$ and $\theta^0$. Let us pick a $G_0$-invariant saturated submanifold $S$ in $X_0$. This implies that $X_{S}$ is $G_1$-invariant and that $(X_{S})_2$ is $G_2$-invariant. If we respectively use $\eta^{(2)}$, $\eta^{(1)}$, and $\eta^{(0)}$ to identify $\nu(X_{S})_{2}\cong \nu((X_{S})_2)$ with $(T(X_{S})_2)^\perp$, $\nu(X_{S})$ with $TX_S^\perp$, and $\nu(S)$ with $TS^\perp$, then it follows that the $2$-action $T\theta$ restricts to a well defined $2$-action $\overline{T\theta}$ of $G_1\rightrightarrows G_0$ on $\nu(X_{S})\rightrightarrows \nu(S)$, since $\theta$ is an isometric $2$-action. Furthermore, the latter fact also implies that the exponential maps $\textnormal{exp}^{(2)}$, $\textnormal{exp}^{(1)}$, and $\textnormal{exp}^{(0)}$ are equivariant local diffeomorphisms. Since isometries preserve (horizontal) geodesics, by combining the classical proofs of the equivariant tubular neighborhood theorem in \cite[VI. Thm. 2.2]{B} and \cite[Thm. 4.4]{IK,K} with the proof of the weak linearization theorem given in \cite[Thm. 5.11]{dHF} we easily obtain:
\begin{proposition}[Equivariant weak groupoid linearization]\label{Lin1}
Let $G_1\rightrightarrows G_0$ be a Lie $2$-group, with $G_1$ compact, acting by isometries on $(X_1\rightrightarrows X_0,\eta)$.
Then there exists an equivariant weak linearization of $X_1\rightrightarrows X_0$ around any $G_0$-invariant saturated submanifold $S$ in $X_0$.
\end{proposition}
Observe that we may reach a similar conclusion by requiring either that $G_1\rightrightarrows G_0$ is proper (or $s$-proper) with $G_0$ compact, or else that $G_1\rightrightarrows G_0$ acts properly.
\begin{definition}\label{DefSlice}
Let $\theta$ be a $2$-action of $G_1\rightrightarrows G_0$ on $X_1\rightrightarrows X_0$ with $G_0$ acting freely. A \emph{groupoid slice} at $x_0\in X_0$ is defined to be a Lie subgroupoid $S_{1_{x_0}}\rightrightarrows S_{x_0}$ of $X_1\rightrightarrows X_0$ such that $S_{x_0}$ and $S_{1_{x_0}}$ are standard slices at $x_0$ and $1_{x_0}$, respectively.
\end{definition}
The following is a straightforward result.
\begin{lemma}\label{IsoOrbit}
Take any $x_0\in X_0$. There are natural structures of:
\begin{itemize}
\item a Lie 2-subgroup on the $G$-isotropy groups $\textnormal{Iso}_{G_1}(1_{x_0})\rightrightarrows \textnormal{Iso}_{G_0}(x_0)$, and
\item a Lie subgroupoid on the $G$-orbits $G_1\cdot 1_{x_0}\rightrightarrows G_0\cdot x_0$.
\end{itemize}
\end{lemma}
It is important to mention that we required $G_0$ to act freely in order to have a well defined groupoid composition on the Lie groupoids of the previous lemma. Let us now state a version of the Slice Theorem in our context. Namely:
\begin{proposition}[Groupoid slice]\label{SliceThm}
If $\theta$ is a free and proper $2$-action of a Lie $2$-group $G_1\rightrightarrows G_0$ on a proper groupoid $X_1\rightrightarrows X_0$ then there exists a groupoid slice $S_{1_{x_0}}\rightrightarrows S_{x_0}$ at each $x_0\in X_0$.
\end{proposition}
\begin{proof}
Consider the induced $2$-action of $\textnormal{Iso}_{G_1}(1_{x_0})\rightrightarrows \textnormal{Iso}_{G_0}(x_0)$ on $X_1\rightrightarrows X_0$. By the properness of $X_1\rightrightarrows X_0$, and by applying Theorem \ref{Existence2} together with Remark \ref{Rmk4}, we may fix a $2$-metric on $X_2$ and use it to construct another $2$-metric on $X_2$ in such a way that $\textnormal{Iso}_{G_1}(1_{x_0})\rightrightarrows \textnormal{Iso}_{G_0}(x_0)$ acts isometrically on $X_1\rightrightarrows X_0$. As in the classical case \cite[Thm. 3.49]{AB}, we define $S_{x_0}$ by setting $S_{x_0}=\exp^{(0)}_{x_0}(B_\epsilon(0))$, where $B_\epsilon(0)$ is an open ball of radius $\epsilon>0$ around the origin in the normal space $\nu_{x_0} (G_0 \cdot x_0)$ to the $G_0$-orbit through $x_0$ (normal domain). As $G_1\cdot 1_{x_0}\rightrightarrows G_0\cdot x_0$ is a Lie subgroupoid of $X_1\rightrightarrows X_0$, we have a well defined Lie subgroupoid $\nu(G_1\cdot 1_{x_0})\rightrightarrows \nu(G_0\cdot x_0)$ of $TX_1\rightrightarrows TX_0$, so that we may also consider the Lie groupoid $V_{B_\epsilon(0)}\rightrightarrows B_\epsilon(0)$ where $V_{B_\epsilon(0)}=\overline{ds}_{1_{x_0}}^{-1}(B_\epsilon(0))\cap\overline{dt}_{1_{x_0}}^{-1}(B_\epsilon(0))$. By shrinking $B_\epsilon(0)$ if necessary, we may assume that $V_{B_\epsilon(0)}$ is an open neighborhood of the origin in the normal space $\nu_{1_{x_0}} (G_1\cdot 1_{x_0})$ to the $G_1$-orbit through $1_{x_0}$ on which $\exp^{(1)}_{1_{x_0}}$ is well defined. Therefore, we now set $S_{1_{x_0}}= \exp^{(1)}_{1_{x_0}}(V_{B_\epsilon(0)})$.
Hence, by arguments similar to those used in \cite{dHF} to prove the multiplicative property of the exponential maps associated to a Riemannian $2$-metric, together with the equivariance of these exponential maps, we conclude that $S_{1_{x_0}}\rightrightarrows S_{x_0}$ is the Lie subgroupoid of $X_1\rightrightarrows X_0$ we are looking for.
\end{proof}
The previous result suggests that we may prove a kind of $(G_1\rightrightarrows G_0)$-equivariant ``Tubular Neighborhood Theorem'' for the $G$-orbit groupoid $G_1\cdot 1_{x_0}\rightrightarrows G_0\cdot x_0$. To do so, we need the following construction, which was defined in \cite{HOV} (a particular case of it can be found in \cite[Lem. 9.1.2]{hsz}).
\begin{remark}\label{AssociatedBundle}
Let $\pi: (P_1\rightrightarrows P_0)\to (X_1\rightrightarrows X_0)$ be a groupoid principal $2$-bundle with structural Lie $2$-group $G_1\rightrightarrows G_0$. Assume that there exists a left $2$-action of $G_1\rightrightarrows G_0$ on another Lie groupoid $F_1\rightrightarrows F_0$. Given this data we can construct two associated fiber bundles $E_j:= P_j\times_{G_j}F_j$ over $X_j$ for $j=0,1$. These are defined as the quotient spaces $(P_j\times F_j)/G_j$ with respect to the actions $g_j\cdot (p_j,f_j)=(p_jg_j^{-1},g_jf_j)$, for all $g_j\in G_j$, $p_j\in P_j$ and $f_j\in F_j$, together with the projections $\overline{\pi}_j([p_j,f_j])=\pi_j(p_j)$ onto $X_j$. From \cite{HOV} we know that there exists a natural Lie groupoid structure $E_1\rightrightarrows E_0$ for which the projection $\overline{\pi}: (E_1\rightrightarrows E_0)\to (X_1\rightrightarrows X_0)$ becomes a Lie groupoid fibration. The source and target maps are the obvious ones, namely:
$$s_E([p,f])=[s_P(p),s_F(f)]\qquad\textnormal{and}\qquad t_E([p,f])=[t_P(p),t_F(f)],$$
and the groupoid composition is defined as follows. If $s_E([p,f])=t_E([q,l])$ then there is $g_0\in G_0$ such that $(s_P(p)g_0^{-1},g_0s_F(f))=(t_P(q),t_F(l))$. So, we set
$$m_E([p,f],[q,l])=[(pg^{-1})\ast q,(gf)\ast l],$$
for some $g$ inside the $s_G$-fiber over $g_0$. The latter expression does not depend on the choice of $g\in s_G^{-1}(g_0)$.
\end{remark}
Let $\theta$ be a free and proper $2$-action of $G_1\rightrightarrows G_0$ on a proper groupoid $X_1\rightrightarrows X_0$ (as in Proposition \ref{SliceThm}), and consider the classical $G$-invariant tubular neighborhoods $\textnormal{Tub}(G_1\cdot 1_{x_0})=\theta^1(G_1,S_{1_{x_0}})$ and $\textnormal{Tub}(G_0\cdot x_0)=\theta^0(G_0,S_{x_0})$ of $G_1\cdot 1_{x_0}$ and $G_0\cdot x_0$, respectively. It is clear that, as a consequence of Proposition \ref{SliceThm}, $\textnormal{Tub}(G_1\cdot 1_{x_0})\rightrightarrows \textnormal{Tub}(G_0\cdot x_0)$ is a Lie subgroupoid of $X_1\rightrightarrows X_0$ and the $2$-action $\theta$ restricts to it. Furthermore, $\textnormal{Tub}(G_1\cdot 1_{x_0})\rightrightarrows \textnormal{Tub}(G_0\cdot x_0)$ determines an open Lie groupoid neighborhood of the $G$-orbit groupoid $G_1\cdot 1_{x_0}\rightrightarrows G_0\cdot x_0$.
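Before proceeding, let us point out what these constructions amount to in the classical setting (a sanity check): when a Lie group $G$, seen as the unit Lie $2$-group $G\rightrightarrows G$, acts properly on a unit groupoid $M\rightrightarrows M$, the groupoid slices, orbit groupoids, and tubes above reduce to the classical slices, orbits, and tubes, and Theorem \ref{ETubular} below takes the familiar form $G\times_{\textnormal{Iso}_G(x_0)}S_{x_0}\cong \textnormal{Tub}(G\cdot x_0)$ of the classical Tubular Neighborhood Theorem \cite[Thm. 3.57]{AB}.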
From Lemma \ref{IsoOrbit}, Proposition \ref{SliceThm}, the quotient construction described in Proposition \ref{PQuotient}, and Remark \ref{AssociatedBundle}, we easily deduce that:
\begin{lemma}
There exists a groupoid principal $2$-bundle $\pi:(G_1\rightrightarrows G_0)\to (G_1/\textnormal{Iso}_{G_1}(1_{x_0})\rightrightarrows G_0/\textnormal{Iso}_{G_0}(x_0))$ with structural Lie $2$-group $\textnormal{Iso}_{G_1}(1_{x_0})\rightrightarrows \textnormal{Iso}_{G_0}(x_0)$, yielding an associated groupoid fibration
$$(G_1\times_{\textnormal{Iso}_{G_1}(1_{x_0})}S_{1_{x_0}}\rightrightarrows G_0\times_{\textnormal{Iso}_{G_0}(x_0)}S_{x_0})\to (G_1/\textnormal{Iso}_{G_1}(1_{x_0})\rightrightarrows G_0/\textnormal{Iso}_{G_0}(x_0)),$$
with groupoid fiber $S_{1_{x_0}}\rightrightarrows S_{x_0}$.
\end{lemma}
We are now in a position to state our version of the Equivariant Tubular Neighborhood Theorem in this context.
\begin{theorem}[Equivariant groupoid tubular neighborhood]\label{ETubular}
Suppose that $\theta$ is a free and proper $2$-action of a Lie $2$-group $G_1\rightrightarrows G_0$ on a proper groupoid $X_1\rightrightarrows X_0$. For every $x_0\in X_0$ there exists a $(G_1\rightrightarrows G_0)$-equivariant Lie groupoid isomorphism
$$\Psi: (G_1\times_{\textnormal{Iso}_{G_1}(1_{x_0})}S_{1_{x_0}}\rightrightarrows G_0\times_{\textnormal{Iso}_{G_0}(x_0)}S_{x_0}) \xrightarrow[]{\cong} (\textnormal{Tub}(G_1\cdot 1_{x_0})\rightrightarrows \textnormal{Tub}(G_0\cdot x_0)).$$
\end{theorem}
\begin{proof}
The left $2$-action of $G_1\rightrightarrows G_0$ on $G_1\times_{\textnormal{Iso}_{G_1}(1_{x_0})}S_{1_{x_0}}\rightrightarrows G_0\times_{\textnormal{Iso}_{G_0}(x_0)}S_{x_0}$ that we consider here is given by $\overline{g_j}\cdot [g_j,f_j]=[\overline{g_j}g_j,f_j]$ for all $\overline{g_j},g_j\in G_j$ and $f_j\in S_j$, where $S_1=S_{1_{x_0}}$ and $S_0=S_{x_0}$. The Lie groupoid isomorphism $\Psi$ is defined as
$$\Psi^j([g_j,f_j])=\theta^j(g_j,f_j).$$
As a consequence of \cite[Thm. 3.57]{AB}, we only have to check that $\Psi$ indeed defines a Lie groupoid morphism. By using the structural maps defined in Remark \ref{AssociatedBundle} we have that
$$(\Psi^0\circ s_E)([g,f])=\Psi^0([s_G(g),s_X(f)])=s_G(g)s_X(f)=s_X(gf)=(s_X\circ \Psi^1)([g,f]).$$
We similarly obtain that $\Psi^0\circ t_E=t_X\circ \Psi^1$. Moreover, by applying Formula \eqref{MultAction} we get
\begin{eqnarray*}
\Psi^1([g,f]\ast [h,l]) &=& \Psi^1([(g\overline{g}^{-1})\ast h,(\overline{g}f)\ast l])=((g\overline{g}^{-1})\ast h)\cdot((\overline{g}f)\ast l)\\
&=& ((g\overline{g}^{-1})(\overline{g}f))\ast (hl)=(gf)\ast (hl)=\Psi^1([g,f])\ast \Psi^1([h,l]).
\end{eqnarray*}
Hence, the result follows as desired.
\end{proof}
We close this discussion by briefly commenting that it is possible to define a groupoid $G$-orbit type ``stratification'' for $X_1\rightrightarrows X_0$. We say that $x_0$ and $y_0$ in $X_0$ have the same $G$-\emph{orbit type} if there exists a Lie groupoid isomorphism $\Phi$ between $G_1\cdot 1_{x_0}\rightrightarrows G_0\cdot x_0$ and $G_1\cdot 1_{y_0}\rightrightarrows G_0\cdot y_0$ such that $\Phi^1$ is $G_1$-equivariant. This automatically implies that $\Phi^0$ is $G_0$-equivariant. It is clear that the $G$-orbit type requirement defines an equivalence relation $\sim$ on $X_0$; we denote by $M_{x_0}^\sim$ the equivalence class of $x_0\in X_0$. We claim that $M_{x_0}^\sim$ is saturated in $X_0$, that is, $s_X^{-1}(M_{x_0}^\sim)=t_X^{-1}(M_{x_0}^\sim)$.
If $p\in s_X^{-1}(M_{x_0}^\sim)$ then $s_X(p)\sim x_0$, so that there is a $(G_1\rightrightarrows G_0)$-equivariant isomorphism $\Phi$ between $G_1\cdot 1_{s_X(p)}\rightrightarrows G_0\cdot s_X(p)$ and $G_1\cdot 1_{x_0}\rightrightarrows G_0\cdot x_0$. It is simple to check that the pair $\Phi_p$ defined by $\Phi_p^1(g1_{t_X(p)})=g1_{s_X(p)}$ and $\Phi_p^0(g_0t_X(p))=g_0s_X(p)$ is another $(G_1\rightrightarrows G_0)$-equivariant isomorphism, between $G_1\cdot 1_{t_X(p)}\rightrightarrows G_0\cdot t_X(p)$ and $G_1\cdot 1_{s_X(p)}\rightrightarrows G_0\cdot s_X(p)$, so that by taking $\Phi\circ \Phi_p$ we conclude that $t_X(p)\sim x_0$. The other inclusion may be verified similarly. Thus, by setting $M_{1_{x_{0}}}^\sim=s_X^{-1}(M_{x_0}^\sim)=t_X^{-1}(M_{x_0}^\sim)$ we obtain a collection of topological groupoids $\lbrace M_{1_{x_{0}}}^\sim \rightrightarrows M_{x_0}^\sim\rbrace_{x_0\in X_0}$ which, in some sense, stratifies the Lie groupoid $X_1\rightrightarrows X_0$. The previous observation suggests that, by combining classical ideas from \cite[s. 3.5]{AB} with some of the results obtained in this section, it could be possible to show that each $M_{1_{x_{0}}}^\sim \rightrightarrows M_{x_0}^\sim$ is an honest Lie subgroupoid. Nevertheless, the local $G$-orbit type notion from the classical case does not extend directly to our setting. \emph{We conjecture that this is indeed the case}.
\subsection{Orthogonal Lie 2-groups}
Let $G_1\rightrightarrows G_0$ be a Lie 2-group. Consider the pairs $L=(L^1,L^0)$ and $R=(R^1,R^0)$ where $L^j$ and $R^j$, for $j=0,1$, are respectively the actions of $G_j$ on itself determined by left and right multiplication. It is simple to check, as a consequence of Identity \eqref{MultiProduct} and the fact that the structural maps of $G_1\rightrightarrows G_0$ are Lie group homomorphisms, that $L$ and $R$ determine left $2$-actions of $G_1\rightrightarrows G_0$ on itself.
\begin{definition}
A Lie $2$-group $G_1\rightrightarrows G_0$ is said to be \emph{orthogonal} if it may be equipped with a $1$-metric for which both $L$ and $R$ are isometric $2$-actions. Such a $1$-metric will be called \emph{bi-invariant}.
\end{definition}
We will think of $1$-metrics on a Lie $2$-algebra $\mathfrak{g}_1\rightrightarrows \mathfrak{g}_0$ as pairs of inner products $\langle \cdot,\cdot\rangle=(\langle \cdot,\cdot\rangle^{(1)},\langle \cdot,\cdot\rangle^{(0)})$ satisfying the defining conditions of a $1$-metric. Let $G_1\rightrightarrows G_0$ be a Lie 2-group with Lie 2-algebra $\mathfrak{g}_1\rightrightarrows \mathfrak{g}_0$. We denote by $\textnormal{Ad}=(\textnormal{Ad}^1,\textnormal{Ad}^0)$ the $2$-action of $G_1\rightrightarrows G_0$ on $\mathfrak{g}_1\rightrightarrows \mathfrak{g}_0$ determined by the adjoint actions $\textnormal{Ad}^j$ of $G_j$ on $\mathfrak{g}_j$ for $j=0,1$. This $2$-action will be called the \emph{adjoint $2$-action} of $G_1\rightrightarrows G_0$.
\begin{proposition}\label{Bi-invariant1}
A Lie $2$-group $G_1\rightrightarrows G_0$ is orthogonal if and only if there exists a $1$-metric $\langle \cdot,\cdot\rangle$ on its Lie algebra $\mathfrak{g}_1\rightrightarrows \mathfrak{g}_0$ for which the adjoint $2$-action is by linear isometries.
\end{proposition}
\begin{proof}
It is clear that if $\eta$ is a bi-invariant $1$-metric then $\eta_e=(\eta^{(1)}_{e_1},\eta^{(0)}_{e_0})$, where $e_j$ denotes the identity element of $G_j$, defines a $1$-metric on $\mathfrak{g}_1\rightrightarrows \mathfrak{g}_0$ for which the adjoint $2$-action is by linear isometries.
Conversely, given such a $\langle \cdot,\cdot\rangle$, we define $\eta$ on $G_1\rightrightarrows G_0$ by setting
$$\eta^{(1)}_g(v,w)=\langle d(L^1_{g^{-1}})_g(v),d(L^1_{g^{-1}})_g(w)\rangle^{(1)}.$$
The metric $\eta^{(0)}$ has a similar defining formula, using $L^0$ and $\langle \cdot,\cdot\rangle^{(0)}$ instead of $L^1$ and $\langle \cdot,\cdot\rangle^{(1)}$. The metric $\eta^{(1)}$ is left-invariant by construction, and it is bi-invariant since the adjoint action is by linear isometries. Therefore, it remains to prove that $\eta$ indeed defines a $1$-metric. Firstly, for $g\in G_1$ and $v,w\in T_g G_1$ we have
\begin{eqnarray*}
i^\ast_g\eta^{(1)}(v,w) & = & \langle d(L^1_{i(g)^{-1}})_{i(g)}(di_g(v)),d(L^1_{i(g)^{-1}})_{i(g)}(di_g(w))\rangle^{(1)}\\
&=& \langle d(L^1_{i(g^{-1})}\circ i)_g(v),d(L^1_{i(g^{-1})}\circ i)_g(w)\rangle^{(1)}\\
& = & \langle d(i\circ L^1_{g^{-1}})_g(v),d(i\circ L^1_{g^{-1}})_g(w)\rangle^{(1)}\\
&=&\langle di_{e_1}(d(L^1_{g^{-1}})_g(v)), di_{e_1}(d(L^1_{g^{-1}})_g(w))\rangle^{(1)}=\eta^{(1)}_g(v,w).
\end{eqnarray*}
Secondly, let $g_0\in G_0$ and $v,w\in T_{g_0}G_0$. It is clear that there are $g\in G_1$ with $s(g)=g_0$ and $\tilde{v},\tilde{w}\in \textnormal{ker}(ds(g))^{\perp_{\eta^{(1)}}}$ such that $ds_g(\tilde{v})=v$ and $ds_g(\tilde{w})=w$. Thus
\begin{eqnarray*}
(s_\ast \eta^{(1)})_{g_0}(v,w) & = & \eta^{(1)}_g(\tilde{v},\tilde{w}) = \langle d(L^1_{g^{-1}})_g(\tilde{v}),d(L^1_{g^{-1}})_g(\tilde{w})\rangle^{(1)}\\
& = & \langle ds_{e_1}(d(L^1_{g^{-1}})_g(\tilde{v})), ds_{e_1}(d(L^1_{g^{-1}})_g(\tilde{w}))\rangle^{(0)}\\
&=& \langle d(s\circ L^1_{g^{-1}})_g(\tilde{v}),d(s\circ L^1_{g^{-1}})_g(\tilde{w})\rangle^{(0)}\\
&=& \langle d(L^0_{g_0^{-1}}\circ s)_g(\tilde{v}),d(L^0_{g_0^{-1}}\circ s)_g(\tilde{w})\rangle^{(0)}\\
&=& \langle d(L^0_{g_0^{-1}})_{g_0}(v),d(L^0_{g_0^{-1}})_{g_0}(w)\rangle^{(0)}=\eta^{(0)}_{g_0}(v,w).
\end{eqnarray*}
Analogously, $t_\ast \eta^{(1)}=\eta^{(0)}$, so the result follows.
\end{proof}
From now on we assume that the Lie groups we work with are connected. It is well known that bi-invariant metrics on a Lie group are in one-to-one correspondence with inner products on its Lie algebra for which the adjoint representation determines infinitesimal isometries \cite{Me,Mi}. A Lie $2$-algebra $\mathfrak{g}_1\rightrightarrows \mathfrak{g}_0$ is said to be \emph{orthogonal} if it admits a $1$-metric $\langle \cdot,\cdot\rangle=(\langle \cdot,\cdot\rangle^{(1)},\langle \cdot,\cdot\rangle^{(0)})$ for which the adjoint representation $\textnormal{ad}^1:\mathfrak{g}_1\to \textnormal{Der}(\mathfrak{g}_1)$ acts by infinitesimal isometries on $(\mathfrak{g}_1,\langle \cdot,\cdot\rangle^{(1)})$. Therefore, as a consequence of Proposition \ref{Bi-invariant1} and \cite[Lem. 7.2]{Mi} we get:
\begin{corollary}\label{MilnorCharacterization}
A Lie $2$-group is orthogonal if and only if its Lie algebra $\mathfrak{g}_1\rightrightarrows \mathfrak{g}_0$ is orthogonal.
\end{corollary}
From Theorem \ref{Existence2} we also get:
\begin{corollary}
Every proper Lie $2$-group with either $G_1$ or $G_0$ compact can be endowed with a bi-invariant $1$-metric.
\end{corollary}
\begin{remark}
Suppose that we have a crossed module of Lie algebras $(\mathfrak{g},\mathfrak{h},\partial,\mathcal{L})$ which is $\mathcal{L}$-orthogonal in the sense of \cite{Ba,FMM}.
That is, there exist inner products $\langle \cdot,\cdot\rangle_\mathfrak{g}$ and $\langle \cdot,\cdot\rangle_\mathfrak{h}$ such that $\mathcal{L}$ acts by infinitesimal isometries on $(\mathfrak{h},\langle \cdot,\cdot\rangle_\mathfrak{h})$ and the adjoint representation of $\mathfrak{g}$ acts by infinitesimal isometries on $(\mathfrak{g},\langle \cdot,\cdot\rangle_\mathfrak{g})$. Note that this directly implies that the adjoint representation of $\mathfrak{h}$ acts by infinitesimal isometries on $(\mathfrak{h},\langle \cdot,\cdot\rangle_\mathfrak{h})$. Recall that the Lie $2$-algebra associated to this crossed module data has $\mathfrak{g}_1=\mathfrak{h}\rtimes \mathfrak{g}$, with Lie algebra structure provided by the semi-direct product with respect to $\mathcal{L}$. Therefore, a straightforward computation allows us to conclude that the adjoint representation of $\mathfrak{g}_1$ acts by infinitesimal isometries with respect to $\langle \cdot,\cdot\rangle_\mathfrak{h}+\langle \cdot,\cdot\rangle_\mathfrak{g}$ if and only if $\mathcal{L}=0$. As a consequence, there is no canonical correspondence between our notion of orthogonal Lie $2$-algebras and the notion of $\mathcal{L}$-orthogonal crossed modules of Lie algebras known in the literature.
\end{remark}
Given an orthogonal Lie $2$-algebra $\mathfrak{g}_1\rightrightarrows \mathfrak{g}_0$ we may split $\mathfrak{g}_1$ as a direct sum of ideals $\mathfrak{g}_1=\mathfrak{h}\oplus \mathfrak{h}^{\perp_1}$ where $\mathfrak{h}=\textnormal{ker}(s)$. As the unit map $u$ is a canonical bisection we would expect that $u(x)\in \mathfrak{h}^{\perp_1}$ for all $x\in \mathfrak{g}_0$; however, this is not true in general unless we allow ``non-canonical'' identifications. We say that an orthogonal Lie $2$-algebra is \emph{trivial} if the latter condition holds. This notion is motivated by the following simple result.
\begin{lemma}\label{0-orthogonal}
If $\mathfrak{g}_1\rightrightarrows \mathfrak{g}_0$ is a trivial orthogonal Lie $2$-algebra then $\textnormal{ad}^1_{u(x)}|_\mathfrak{h}=0$ for all $x\in \mathfrak{g}_0$. In consequence, $\mathfrak{h}$ is abelian and $\textnormal{im}(t|_\mathfrak{h})\subseteq \mathfrak{z}(\mathfrak{g}_0)$.
\end{lemma}
\begin{proof}
On the one hand, note that for any $y\in \mathfrak{h}$ and $z\in \mathfrak{g}_1$ one gets
$$\langle \textnormal{ad}^1_{u(x)}(y),z\rangle^{(1)}=\langle u(x),\textnormal{ad}^1_{y}(z)\rangle^{(1)}=0,$$
since $\mathfrak{h}$ is an ideal and $u(x)\in\mathfrak{h}^{\perp_1}$. Thus, as $\langle \cdot,\cdot\rangle^{(1)}$ is nondegenerate, we get that $\textnormal{ad}^1_{u(x)}|_\mathfrak{h}=0$ for all $x\in \mathfrak{g}_0$. On the other hand, for all $y,y'\in \mathfrak{h}$ and $x\in \mathfrak{g}_0$ it follows that $[y,y']=\textnormal{ad}^1_{u(t(y))}(y')=0$ and $0=t([u(x),y])=[x,t(y)]$, since $t$ is a Lie algebra homomorphism.
\end{proof}
Motivated by the previous result we introduce our notion of $0$-orthogonal crossed module. Namely:
\begin{definition}
A crossed module of Lie algebras $(\mathfrak{g},\mathfrak{h},\partial,0)$ is called 0-\emph{orthogonal} if there exist inner products $\langle \cdot,\cdot\rangle_\mathfrak{g}$ and $\langle \cdot,\cdot\rangle_\mathfrak{h}$ such that the adjoint representation of $\mathfrak{g}$ acts by infinitesimal isometries on $(\mathfrak{g},\langle \cdot,\cdot\rangle_\mathfrak{g})$ and the fibers of the canonical projection $\pi:\mathfrak{g}\to \mathfrak{g}/\textnormal{im}(\partial)$ are equidistant.
\end{definition}
Note that, directly from the definition, $\mathfrak{h}$ is abelian and $\textnormal{im}(\partial)\subseteq \mathfrak{z}(\mathfrak{g})$. With this in mind we have:
\begin{proposition}
There exists a one-to-one correspondence between trivial orthogonal Lie $2$-algebras and $0$-orthogonal crossed modules of Lie algebras.
\end{proposition}
\begin{proof}
If $\mathfrak{g}_1\rightrightarrows \mathfrak{g}_0$ is a trivial orthogonal Lie $2$-algebra then the result follows from Lemma \ref{0-orthogonal} after setting $\langle \cdot,\cdot\rangle_\mathfrak{g}=\langle \cdot,\cdot\rangle^{(0)}$ and $\langle \cdot,\cdot\rangle_\mathfrak{h}=\langle \cdot,\cdot\rangle^{(1)}|_{\mathfrak{h}\times \mathfrak{h}}$. Conversely, let us consider a $0$-orthogonal crossed module $(\mathfrak{g},\mathfrak{h},\partial,0)$ and set $\langle \cdot,\cdot\rangle^{(1)}=\langle \cdot,\cdot\rangle_\mathfrak{h}+\langle \cdot,\cdot\rangle_\mathfrak{g}$ and $\langle \cdot,\cdot\rangle^{(0)}=\langle \cdot,\cdot\rangle_\mathfrak{g}$. From the construction of the Lie $2$-algebra associated to $(\mathfrak{g},\mathfrak{h},\partial,0)$ we easily see that $s$ is a linear Riemannian submersion. Now, on $\mathfrak{g}_1=\mathfrak{h}\rtimes \mathfrak{g}$ we get
$$\langle i(x,y),i(x',y')\rangle^{(1)} = \langle x,x'\rangle_\mathfrak{h}+\langle y+\partial(x),y'+\partial(x')\rangle_\mathfrak{g}=\langle x,x'\rangle_\mathfrak{h}+\langle y,y'\rangle_\mathfrak{g}=\langle (x,y),(x',y')\rangle^{(1)},$$
since the fibers of the canonical projection $\pi:\mathfrak{g}\to \mathfrak{g}/\textnormal{im}(\partial)$ are equidistant. As $t=s\circ i$, it follows that $t$ is also a linear Riemannian submersion. The Lie bracket on $\mathfrak{g}_1$ is explicitly given by
$$[(x,y),(x',y')]=(0,[y,y']_\mathfrak{g}).$$
Hence, the adjoint representation of $\mathfrak{g}_1$ acts by infinitesimal isometries with respect to $\langle \cdot,\cdot\rangle_\mathfrak{h}+\langle \cdot,\cdot\rangle_\mathfrak{g}$ since the adjoint representation of $\mathfrak{g}$ acts by infinitesimal isometries with respect to $\langle \cdot,\cdot\rangle_\mathfrak{g}$.
\end{proof}
\subsection{Some examples and applications}
In this short subsection we exhibit some toy examples and interesting applications in which isometric Lie $2$-actions naturally appear.
\begin{example}
Classical isometric actions of Lie groups $G$ on Riemannian manifolds $M$ are recovered from isometric $2$-actions of unit Lie $2$-groups $G \rightrightarrows G$ acting upon unit groupoids $M \rightrightarrows M$.
\end{example}
\begin{example}
Let $(X_1\rightrightarrows X_0,\eta)$ be a Riemannian groupoid and let $(\xi,v)$ be a complete multiplicative Killing vector field on $X_1\rightrightarrows X_0$. Here $\xi$ is a Killing vector field on $(X_1,\eta^{(1)})$ and $v$ is a Killing vector field on $(X_0,\eta^{(0)})$ (as a consequence of Lemma \ref{Rmk1} it is actually enough to require that $\xi$ is Killing).
From \cite{MX} we know that the pair of flows defined by $(\xi,v)$ determines global automorphisms of $X_1\rightrightarrows X_0$, so that we get a well defined isometric $2$-action of $\mathbb{R}\rightrightarrows \mathbb{R}$ on $(X_1\rightrightarrows X_0,\eta)$. In particular, the flow of the multiplicative vector field $((1_\xi)_{X_1},\xi_{X_0})$ formed by the fundamental vector fields of an isometric $2$-action determines another isometric $2$-action.
\end{example}
\begin{example}\label{ExampleOrthogonal}
Let $G$ be an orthogonal Lie group and let $H\leq G$ be a normal Lie subgroup. It is clear that $H$ acts on $G$ by left multiplication, leading to the action Lie groupoid $H\times G\rightrightarrows G$. Note that its space of arrows has a group structure, namely the semi-direct product with respect to the conjugation action $C_g(h)=ghg^{-1}$ of $G$ on $H$, so that we get a well defined Lie $2$-group. More importantly, by applying the gauge trick construction behind Proposition 4.7 and Example 4.9 in \cite{dHF}, we conclude that it is possible to cook up a $1$-metric on $H\rtimes G\rightrightarrows G$, made out of the initial bi-invariant metric on $G$, in such a way that it becomes an orthogonal Lie $2$-group.
\end{example}
\begin{example}
Let $(M,\mathcal{F})$ be a regular Riemannian foliation and consider a free and proper isometric foliated action $G\times (M,\mathcal{F})\to (M,\mathcal{F})$ of a Lie group $G$. From \cite[Thm. 3.7]{GZ} it is known that the Lie $2$-group $G\rtimes G\rightrightarrows G$, as defined in Example \ref{ExampleOrthogonal}, determines a canonical $2$-action on the holonomy groupoid $\textnormal{Hol}(M,\mathcal{F})\rightrightarrows M$ which extends the given action of $G$ on $M$. The Riemannian metric on $M$ completely determines a $0$-metric on $\textnormal{Hol}(M,\mathcal{F})\rightrightarrows M$ which can be extended to a $1$-metric \cite[Ex. 3.12]{dHF}. Thus, the fact that $G$ acts on $M$ isometrically implies that the extended $2$-action of $G\rtimes G\rightrightarrows G$ on $\textnormal{Hol}(M,\mathcal{F})\rightrightarrows M$ is by isometries.
\end{example}
In the next example we consider the notion of a multiplicative $2$-connection $\omega=(\omega^1,\omega^0)$ on a groupoid principal $2$-bundle $\pi:(P_1\rightrightarrows P_0)\to (X_1\rightrightarrows X_0)$ with structural Lie $2$-group $G_1\rightrightarrows G_0$, as defined for instance in \cite{HOV} (see also \cite{CCK}).
\begin{example}[Principal groupoid warping]
Let us prove that if $(X_1\rightrightarrows X_0,\eta)$ is a Riemannian groupoid and $G_1\rightrightarrows G_0$ is orthogonal then there exists a $1$-metric $\overline{\eta}$ on $P_1\rightrightarrows P_0$ for which the $2$-action of $G_1\rightrightarrows G_0$ is isometric and such that $\pi=(\pi_1,\pi_0):(P_1\rightrightarrows P_0)\to (X_1\rightrightarrows X_0)$ is a Riemannian groupoid submersion. Consider the associated $1$-metric $\langle \cdot,\cdot\rangle$ on the Lie $2$-algebra $\mathfrak{g}_1\rightrightarrows \mathfrak{g}_0$ of $G_1\rightrightarrows G_0$ and define
$$\overline{\eta}^{(1)}(v,w)=\eta^{(1)}(d\pi_1(v),d\pi_1(w))+\langle \omega^1(v),\omega^1(w)\rangle^{(1)}.$$
The metric $\overline{\eta}^{(0)}$ is defined similarly, using instead $\eta^{(0)}$, $\pi_0$, $\langle \cdot,\cdot\rangle^{(0)}$, and $\omega^0$. It is simple to check that this expression yields a well defined right $G_j$-invariant metric on $P_j$ for which $\pi_j:P_j\to X_j$ becomes a Riemannian submersion ($j=0,1$).
This is because $\pi_j$ is constant along the action orbits, $\omega^j$ is $\textnormal{Ad}^j$-equivariant, and $\textnormal{ker}(d\pi_j)^{\perp_{\overline{\eta}^{(j)}}}=\textnormal{ker}(\omega^j)$. Let us verify that $\overline{\eta}=(\overline{\eta}^{(1)},\overline{\eta}^{(0)})$ determines a $1$-metric on $P_1\rightrightarrows P_0$. Note that, with a slight abuse of notation, we may rewrite $\overline{\eta}^{(j)}$ more simply as $\overline{\eta}^{(j)}=(\pi_j)^\ast\eta^{(j)}+(\omega^j)^\ast \langle \cdot,\cdot\rangle^{(j)}$. Recall that $\pi$ and $\omega:(TP_1 \rightrightarrows TP_0)\to (\mathfrak{g}_1\rightrightarrows \mathfrak{g}_0)$ are Lie groupoid morphisms. So, on the one hand we get
\begin{eqnarray*}
(i_P)^\ast \overline{\eta}^{(1)} & = & (\pi_1\circ i_P)^\ast\eta^{(1)}+(\omega^1\circ di_P)^\ast \langle \cdot,\cdot\rangle^{(1)}=(i_X\circ \pi_1)^\ast\eta^{(1)}+(d(i_G)_{e_1}\circ \omega^1)^\ast \langle \cdot,\cdot\rangle^{(1)}\\
&=& (\pi_1)^\ast\eta^{(1)}+(\omega^1)^\ast \langle \cdot,\cdot\rangle^{(1)}=\overline{\eta}^{(1)}.
\end{eqnarray*}
On the other hand, if $v\in \textnormal{ker}(d(s_P)_{p})^{\perp_{\overline{\eta}^{(1)}}}$ then it is simple to verify that the identities $\pi_0\circ s_P=s_X\circ \pi_1$ and $\omega^0\circ ds_P=d(s_G)_{e_1}\circ \omega^1$ imply that $d(\pi_1)_p(v)\in \textnormal{ker}(d(s_X)_{\pi_1(p)})^{\perp_{\eta^{(1)}}}$ and $\omega^1(v)\in \textnormal{ker}(d(s_G)_{e_1})^{\perp_{\langle \cdot,\cdot\rangle^{(1)}}}$. Let us pick $v_1,v_2\in \textnormal{ker}(d(s_P)_{p})^{\perp_{\overline{\eta}^{(1)}}}$. Thus,
\begin{eqnarray*}
\overline{\eta}^{(0)}_{s_P(p)}(ds_P(p)(v_1),ds_P(p)(v_2)) & = & \eta^{(0)}_{\pi_0(s_P(p))}(d(\pi_0)_{s_P(p)}(ds_P(p)(v_1)),d(\pi_0)_{s_P(p)}(ds_P(p)(v_2)))\\
& + & \langle \omega^0(ds_P(p)(v_1)),\omega^0(ds_P(p)(v_2))\rangle^{(0)}\\
& = & \eta^{(0)}_{s_X(\pi_1(p))}(d(s_X)_{\pi_1(p)}(d\pi_1(p)(v_1)),d(s_X)_{\pi_1(p)}(d\pi_1(p)(v_2)))\\
& + & \langle d(s_G)_{e_1}(\omega^1(v_1)),d(s_G)_{e_1}(\omega^1(v_2))\rangle^{(0)}\\
& = & \eta^{(1)}_{\pi_1(p)}(d\pi_1(p)(v_1),d\pi_1(p)(v_2)) + \langle \omega^1(v_1),\omega^1(v_2)\rangle^{(1)}\\
& = & \overline{\eta}^{(1)}(v_1,v_2).
\end{eqnarray*}
Analogously, it follows that $(t_P)_\ast \overline{\eta}^{(1)}=\overline{\eta}^{(0)}$. Hence, we have shown that $(P_1\rightrightarrows P_0,\overline{\eta})$ is a Riemannian groupoid for which $G_1\rightrightarrows G_0$ acts isometrically and $\pi$ is a Riemannian submersion of groupoids, as claimed.
\end{example}
The next example is motivated by a beautiful construction known in the literature as the \emph{Cheeger deformation}. The classical construction can be found, for instance, in \cite[s. 6.1]{AB}.
\begin{example}[Cheeger groupoid deformation]
Suppose that $(G_1\rightrightarrows G_0, Q)$ is an orthogonal Lie 2-group, with $G_1$ compact, acting isometrically on a Riemannian groupoid $(X_1\rightrightarrows X_0,\eta)$. On the Lie groupoid product $X_1\times G_1\rightrightarrows X_0\times G_0$ we can consider, for $\tau>0$, the 1-metric $\eta\oplus \frac{1}{\tau}Q$. There is a natural free $2$-action of $G_1\rightrightarrows G_0$ on $X_1\times G_1\rightrightarrows X_0\times G_0$ whose $G_j$-actions, $j=1,0$, on $X_j\times G_j$ are given by
\begin{equation}\label{CheegerD}
h_j\cdot(p_j,g_j)=(h_jp_j,h_jg_j),\qquad p_j\in X_j,\ h_j,g_j\in G_j.
\end{equation}
We claim that the quotient groupoid $\frac{X_1\times G_1}{G_1}\rightrightarrows\frac{X_0\times G_0}{G_0}$ determined by the previous actions is isomorphic to $X_1\rightrightarrows X_0$.
Let us consider the groupoid principal $2$-bundle $G_1\rightrightarrows G_0$ over the point groupoid $\lbrace e_1\rbrace\rightrightarrows\lbrace e_0\rbrace$ with structural Lie $2$-group $G_1\rightrightarrows G_0$. By applying Remark \ref{AssociatedBundle} we know that by using the $2$-action of $G_1\rightrightarrows G_0$ on $X_1\rightrightarrows X_0$ we can construct the associated Lie groupoid bundle $(X_1\times_{G_1}G_1\rightrightarrows X_0\times_{G_0}G_0)\to (\lbrace e_1\rbrace\rightrightarrows\lbrace e_0\rbrace)$. It is important to notice that $X_1\times_{G_1}G_1\rightrightarrows X_0\times_{G_0}G_0$ is precisely the quotient groupoid $\frac{X_1\times G_1}{G_1}\rightrightarrows\frac{X_0\times G_0}{G_0}$. Therefore, since the new Lie groupoid bundle has groupoid fiber $X_1\rightrightarrows X_0$ and base groupoid $\lbrace e_1\rbrace\rightrightarrows\lbrace e_0\rbrace$, we get the desired isomorphism. Under this identification, the canonical groupoid projection $\pi:(X_1\times G_1\rightrightarrows X_0\times G_0)\to (X_1\rightrightarrows X_0)$ is formed by the maps $\pi_j(p_j,g_j)=g_j^{-1}p_j$. Observe that the $2$-action \eqref{CheegerD} is also isometric. Thus, as a consequence of Proposition \ref{PQuotient} there is a unique 1-metric $\eta_\tau$ on $X_1\rightrightarrows X_0$ making the projection $\pi:(X_1\times G_1\rightrightarrows X_0\times G_0,\eta\oplus \frac{1}{\tau}Q)\to (X_1\rightrightarrows X_0,\eta_\tau)$ a Riemannian groupoid submersion. It is important to point out that by construction $\eta_\tau \to \eta$ when $\tau$ goes to $0$ and the $2$-action of $G_1\rightrightarrows G_0$ on $(X_1\rightrightarrows X_0,\eta_\tau)$ is still isometric when $\tau>0$. More importantly, the 1-parameter family of $1$-metrics $\eta_\tau$ on $X_1\rightrightarrows X_0$ varies smoothly with $\tau$ and extends smoothly to $\tau = 0$ with $\eta_0 = \eta$. Hence, $\eta_\tau$ with $\tau\geq 0$ is a deformation of $\eta$ by other $(G_1\rightrightarrows G_0)$-invariant metrics on $X_1\rightrightarrows X_0$, which we call the \emph{Cheeger groupoid deformation} of $\eta$. We refer the reader to \cite[s. 6.1]{AB} for specific details about the last assertions in the classical case. \end{example} \begin{remark} It is worth mentioning that the notion of isometric 2-action we have introduced in this section was used in \cite{OV} as one of the fundamental ingredients needed to construct an equivariant double Morse-Bott complex which computes the equivariant cohomology of a $2$-action as defined in \cite{OBT}. \end{remark} \section{Isometries and multiplicative Killing vector fields}\label{S:4} The aim of this section is to bring a description of the group of isometries associated to a $0$-metric on a Lie groupoid. This will give rise to a notion of geometric Killing vector field on a quotient Riemannian stack. We start by describing what would be our attempt to set the \emph{diffeomorphism group} of a differentiable stack. Some of the references we shall be following throughout are \cite{Ma,OW} and \cite[App. D]{An}. A \emph{bisection} of a Lie groupoid $X_1\rightrightarrows X_0$ is a smooth map $\sigma:X_0\to X_1$ such that $s_X\circ \sigma=\textnormal{id}_{X_0}$ and $\iota_{\sigma}:X_0\to X_0$ defined by $\iota_{\sigma}(x):=t_X(\sigma(x))$ is a diffeomorphism. The set of all bisections of $X_1\rightrightarrows X_0$ will be denoted by $\textnormal{Bis}(X)$.
This has the structure of an infinite-dimensional Lie group where the multiplication of two bisections $\sigma$ and $\sigma'$ is given by $\sigma\bullet \sigma'(x):=\sigma((t_X\circ \sigma')(x))*\sigma'(x)$ for all $x\in X_0$; see for instance \cite{SW} and \cite[s. 1.4]{Ma}. Let us denote by $\textnormal{Aut}(X)$ the group of Lie groupoid automorphisms of $X_1\rightrightarrows X_0$. Given a bisection $\sigma \in \textnormal{Bis}(X)$ one has an inner automorphism $I_{\sigma}:X_1\to X_1$ defined by $$I_{\sigma}(p):=\sigma(t_X(p))*p* i_X(\sigma(s_X(p))).$$ Clearly, $I_{\sigma}$ covers the map $\iota_{\sigma}$. This inner automorphism allows us to define what we call the \emph{crossed module of automorphisms of a Lie groupoid} $(\textnormal{Aut}(X),\textnormal{Bis}(X),I,\alpha)$ where the map $\alpha$ is defined as $\alpha_{\Phi}(\sigma):=\Phi\circ \sigma \circ \phi^{-1}$ for all $\Phi \in \textnormal{Aut}(X)$ covering $\phi:X_0\to X_0$ and $\sigma \in \textnormal{Bis}(X)$. Accordingly, we have a 2-group $\textnormal{Bis}(X)\ltimes \textnormal{Aut}(X)\rightrightarrows \textnormal{Aut}(X)$ called the \emph{2-group of Lie groupoid automorphisms}. \begin{proposition} The orbit space of the 2-group of Lie groupoid automorphisms equals the set of Lie groupoid automorphisms up to smooth natural equivalences. Namely: $$\textnormal{Aut}(X)/\textnormal{Bis}(X)=\left\lbrace [\Phi]\,|\, \Phi\in \textnormal{Aut}(X),\quad \Psi\sim \Phi \Leftrightarrow \exists_{\alpha}\left(\Psi\stackrel{\alpha}{\Rightarrow}\Phi\right)\right\rbrace.$$ \end{proposition} \begin{proof} Let us describe the orbit of an element $\Phi\in \textnormal{Aut}(X)$ covering $\phi$. If we pick $\sigma\in \textnormal{Bis}(X)$ and $\Psi \in \textnormal{Aut}(X)$ covering $\psi$ such that $I_{\sigma}\Phi=\Psi$ then it follows that $$\Psi(p)=I_{\sigma}(\Phi(p))=\sigma(\phi(t_X(p)))*\Phi(p)*i_X(\sigma(\phi(s_X(p)))),$$ for every $p \in X_1$, so that $ \Psi(p)*\sigma(\phi(s_X(p)))=\sigma(\phi(t_X(p)))*\Phi(p)$. Therefore, by setting $\alpha:=\sigma\circ \phi$ we get a smooth natural transformation $\Phi\stackrel{\alpha}{\Rightarrow}\Psi$. Conversely, note that if $\Phi\stackrel{\alpha}{\Rightarrow}\Psi$ is a smooth natural transformation then for every $p\in X_1$ it holds that $\alpha(t_X(p))*\Phi(p)=\Psi(p)*\alpha(s_X(p))$, thus obtaining that $t_X(\Phi(p))=s_X(\alpha(t_X(p)))$ and $s_X(\Psi(p))=t_X(\alpha(s_X(p)))$. On the one hand, by setting $\sigma:=\alpha\circ \phi^{-1}$ it follows that $s_X\circ\sigma=\textnormal{id}_{X_0}$. On the other hand, observe that $$ \psi(s_X(p))=t_X(\alpha(s_X(p)))=t_X(\sigma(\phi(s_X(p))))=\iota_{\sigma}(\phi(s_X(p))).$$ Hence, we have obtained that $\iota_{\sigma}=\psi\circ \phi^{-1}$, so that it is a diffeomorphism. \end{proof} Let $G_1\rightrightarrows G_0$ be a Lie $2$-group acting on $X_1\rightrightarrows X_0$ by the left. \begin{lemma}\label{2ActionDecomposition} The normal subgroup $H=\textnormal{ker}(s_{G})$ acts on $X_1\rightrightarrows X_0$ by bisections and $G_0$ acts by Lie groupoid automorphisms. Moreover, the right multiplication map defined on $s_X$-fibers for each arrow is $H$-equivariant. \end{lemma} \begin{proof} Consider $h\in H$ and define the map $\sigma_{h}:X_0\to X_1$ as $\sigma_h(x):=h1_x.$ It is clear that $s_X(\sigma_h(x))=x$ and $t_X(\sigma_h(x))=\rho(h)x$ so that $\sigma_h$ is a well defined bisection. Let us now take $g\in G_0$ and define the map $\Sigma_g:X_1\to X_1$ as $\Sigma_g(p):=1_{g}p$.
Equation \eqref{MultAction} implies that \[1_{g}(p*q)=(1_g*1_g)(p*q)=(1_{g}p)*(1_{g}q),\] for all $g\in G_0$ and $(p,q)\in X_2$, thus obtaining that $\Sigma_g$ is a Lie groupoid morphism which clearly satisfies $(\Sigma_g)^{-1}=\Sigma_{g^{-1}}$. Note that $s_{X}(hp)=s_{X}(p)$ for all $p\in X_1$ and $h\in H$ so that the left action of $H$ on $X_1$ preserves the $s_X$-fibers. Therefore, for each $y\xleftarrow[]{\it p}x$ the right action $R_{p}:s_X^{-1}(y)\to s_X^{-1}(x)$ satisfies that \[R_{p}(hq)=(hq)*(1_{e}p)=(h*1_e)(q*p)=hR_{p}(q).\] In consequence, $R_p$ is $H$-equivariant as claimed. \end{proof} The same result can be obtained if we consider right 2-actions instead of left ones. \begin{lemma}\label{LemmaIso1} There is a natural morphism of crossed modules of Lie groups $(\sigma,\Sigma):(G,H,\rho,\alpha) \to (\textnormal{Aut}(X),\textnormal{Bis}(X),I,\alpha)$ where $\sigma_h$ and $\Sigma_g$ are defined as in Lemma \ref{2ActionDecomposition}. \end{lemma} \begin{proof} Let us check that $\Sigma \circ \rho=I\circ \sigma$ and $\sigma_{\alpha_{g}(h)}=\alpha_{\Sigma_g}(\sigma_h)$ for all $g \in G$ and $h\in H$. Firstly, for $ p\in X_1$ and $h\in H$ we obtain \begin{eqnarray*} I_{\sigma_h}(p)&=&\sigma_h(t_X(p))*p*i_X(\sigma_h(s_X(p)))=(h1_{t_X(p)})*p*i_X(h1_{s_X(p)})\\ &=&(h*1_e)(1_{t_X(p)}*p)*i_X(h1_{s_X(p)})=hp*i_{G}(h)1_{s_X(p)}=1_{\rho(h)}p=\Sigma_{\rho(h)}(p). \end{eqnarray*} Secondly, for $x \in X_0$, $g\in G_0$ and $h\in H$ we get $$ \alpha_{\Sigma_g}(\sigma_h)(x) =\Sigma_g(h1_{(g^{-1}x)})=1_gh1_{g^{-1}}1_x=\alpha_{g}(h)1_x=\sigma_{\alpha_g(h)}(x). $$ \end{proof} We are now in a position to define an \emph{infinitesimal 2-action} associated to a right Lie 2-group action. Let $A_X$ be the Lie algebroid of $X_1\rightrightarrows X_0$ and, for each element $\xi$ of $\mathfrak{g}_1$ or $\mathfrak{g}_0$, let us denote its fundamental vector field by $\tilde{\xi}$. We also denote by $\mathfrak{X}_{m}(X)$ the set of multiplicative vector fields on $X_1\rightrightarrows X_0$ and by $(\mathfrak{X}_{m}(X),\Gamma(A_{X}),\delta,D)$ its associated crossed module. Here we have that $\delta(\alpha)=\alpha^r-\alpha^l$ and $D_{(\xi,v)}\alpha=[\xi,\alpha^r]|_{X_0}$ for all $\alpha\in \Gamma(A_X)$ and $(\xi,v)\in \mathfrak{X}_{m}(X)$; see \cite{OW} for specific details. \begin{theorem}\label{InfinitesimalAction} Suppose that we have a right 2-action of $G_1\rightrightarrows G_0$ on $X_1\rightrightarrows X_0$. Then there is a canonical homomorphism of Lie 2-algebras $j=(j_{-1},j_0):(\mathfrak{g},\mathfrak{h},\partial,\mathcal{L})\to (\mathfrak{X}_{m}(X),\Gamma(A_{X}),\delta,D)$ defined by $j_{-1}(\xi)=\left.\tilde{\xi}\right|_{X_0}$ and $j_0(\zeta)=(\tilde{1_{\zeta}},\tilde{\zeta})$ for all $\xi \in \mathfrak{h}$ and $\zeta \in \mathfrak{g}$. \end{theorem} \begin{proof} To see that $j$ is a well defined morphism we have to check that for any $\xi \in \mathfrak{h}$ its fundamental vector field $\tilde{\xi}$ belongs to $\mathfrak{X}_{inv}^{s}(X)$ and that for any $\zeta \in \mathfrak{g}$ it holds that $(\tilde{1_{\zeta}},\tilde{\zeta})$ is in $\mathfrak{X}_{m}(X)$. The latter assertion is clear: since $\exp(t\zeta)\in G_0$, Lemma \ref{2ActionDecomposition} implies that the pair of flows $(\varphi_t^{\tilde{1_\zeta}},\varphi_t^{\tilde{\zeta}})$ determines a Lie groupoid morphism. Now, if $\xi \in \mathfrak{h}$ then again from Lemma \ref{2ActionDecomposition} it follows that the flow of its fundamental vector field lies inside the $s$-fibers since $s_{X}(\varphi_t^{\tilde{\xi}}(p))=s_{X}(p\exp(t\xi))=s_{X}(p)$.
Therefore, $\tilde{\xi}\in \mathfrak{X}^s(X)$. Furthermore, for $y\xleftarrow[]{\it p}x$ and $q\in s_X^{-1}(y)$ one has that $$ R_{p}(\varphi^{\tilde{\xi}}_t(q)) =(q\exp(t\xi))*p=(q*p)(\exp(t\xi)*1_e)=\varphi_t^{\tilde{\xi}}(q*p)=\varphi_t^{\tilde{\xi}}(R_p(q)), $$ thus obtaining that $d(R_p)_q(\tilde{\xi}_q)=\tilde{\xi}_{q*p}$ so that $\tilde{\xi}\in \mathfrak{X}_{inv}^s(X)$ and $\left.\tilde{\xi}\right|_{X_0}\in \Gamma(A_X)$. Let us finally verify that for $\xi\in \mathfrak{h}$ and $\zeta \in \mathfrak{g}$ we have $\delta(j_{-1}(\xi))=j_{0}(\partial \xi)$ and $j_{-1}(\mathcal{L}_{\zeta}\xi)=D_{j_0(\zeta)}(j_{-1}(\xi))$. On the one hand, by using the flow of the vector field $\delta(j_{-1}(\xi))=\left.\tilde{\xi}\right|_{X_0}^r-\left.\tilde{\xi}\right|_{X_0}^l$ and Equation \eqref{MultAction} we get \begin{eqnarray*} \varphi_t^{\delta(j_{-1}(\xi))}(p)&=&\varphi_{t}^{\tilde{\xi}}(1_{t_{X}(p)})*p*i_X(\varphi_t^{\tilde{\xi}}(1_{s_X(p)}))=1_{t_{X}(p)}\exp(t\xi)*p*i_X(1_{s_{X}(p)}\exp(t\xi))\\ &=&(1_{t_X(p)}*p)\exp(t\xi)*1_{s_X(p)}i_G(\exp(t\xi))=p(\exp(t\xi)*i_G(\exp(t\xi)))\\ &=&p(1_{t_G(\exp(t\xi))})=p1_{\exp(t\partial(\xi))}=p\exp(t1_{\partial \xi})=\varphi_t^{{j_0(\partial \xi)}}(p). \end{eqnarray*} Hence, $\delta(j_{-1}(\xi))=j_{0}(\partial \xi)$. On the other hand, observe that \[ D_{j_0(\zeta)}(j_{-1}(\xi))=\left.\left[\tilde{1_{\zeta}},\tilde{\xi}\right]\right|_{X_0}=\left.\tilde{\left[1_{\zeta},\xi\right]}\right|_{X_0}=\left.\tilde{\mathcal{L}_{\zeta}(\xi)}\right|_{X_0}=j_{-1}(\mathcal{L}_{\zeta}\xi).\] \end{proof} \subsection{Isometries of a 0-metric} Recall that a $0$-metric on a Lie groupoid $X_1\rightrightarrows X_0$ is a Riemannian metric $\eta$ on $X_0$ which is transversely invariant by the canonical left action of $X_1\rightrightarrows X_0$ on $X_0$, compare \cite{dHF,PPT}. Note that this is the same as requiring that $\eta$ is transversely invariant by the action of the group of bisections $\textnormal{Bis}(X)\times X_0\to X_0$ which is defined by $\sigma\cdot x:=\iota_{\sigma}(x)$. It is clear that this action preserves the orbits so that it induces a well defined action on the normal space of an orbit. In consequence, $\eta$ is a 0-metric if and only if for all $\sigma \in \textnormal{Bis}(X)$ the map $\overline{d\iota_{\sigma}}:(\nu_x(\mathcal{O}),\overline{\eta})\to (\nu_{\iota_{\sigma}(x)}(\mathcal{O}),\overline{\eta})$ is a linear isometry. Let $(X_1\rightrightarrows X_0,\eta)$ be a Lie groupoid equipped with a 0-metric $\eta$ and consider the following sets $$\textnormal{Bis}_{\eta}(X)=\left\lbrace \sigma \in \textnormal{Bis}(X)\,|\, \iota_{\sigma}^*\eta=\eta\right\rbrace\quad\textnormal{and}\quad \mathrm{Iso}(X,\eta)=\left \lbrace (\Phi,\phi)\in \mathrm{Aut}(X)\,|\, \phi^*\eta=\eta \right\rbrace.$$ \begin{proposition}\label{IsoStrong1} The quadruple $(\mathrm{Iso}(X,\eta),\textnormal{Bis}_{\eta}(X),I,\alpha)$ determines a sub-crossed module structure of $(\textnormal{Aut}(X),\textnormal{Bis}(X),I,\alpha)$. \end{proposition} \begin{proof} It is clear that $I(\textnormal{Bis}_{\eta}(X))\subseteq \mathrm{Iso}(X,\eta)$. Since $I_{\alpha_{\Phi}(\sigma)}=\Phi I_{\sigma}\Phi^{-1}$ for all $\sigma \in \textnormal{Bis}_{\eta}(X)$ and $\Phi \in \textnormal{Iso}(X,\eta)$, when restricting to unities we have that $\iota_{\alpha_{\Phi}(\sigma)}=\phi\circ \iota_{\sigma}\circ \phi^{-1}$, thus obtaining an isometry.
\end{proof} The Lie 2-group associated to the crossed module $(\mathrm{Iso}(X,\eta),\textnormal{Bis}_{\eta}(X),I,\alpha)$ will be called \emph{Lie 2-group of strong isometries} of $(X_1\rightrightarrows X_0,\eta)$. Let us now consider the sets $$\Gamma_{\eta}(A_X)=\left\lbrace \alpha \in \Gamma(A_X)\,|\, \rho(\alpha) \in \mathfrak{o}(X_0,\eta)\right\rbrace\quad\textnormal{and}\quad \mathfrak{o}_{m}(X)=\left\lbrace (\xi,v)\in \mathfrak{X}_{m}(X)\,|\, v \in \mathfrak{o}(X_0,\eta)\right\rbrace$$ where $\mathfrak{o}(X_0,\eta)$ denotes the Lie algebra of Killing vector fields of $(X_0,\eta)$. In these terms we may describe the infinitesimal version of the Lie $2$-group of strong isometries as follows. \begin{proposition}\label{Strongkilling} The quadruple $(\mathfrak{o}_{m}(X),\Gamma_{\eta}(A_X),\delta,D)$ defines a sub-crossed module structure of $(\mathfrak{X}_{m}(X),\Gamma(A_{X}),\delta,D)$. \end{proposition} \begin{proof} Firstly, note that $\mathfrak{o}_{m}(X)$ is a Lie subalgebra of $\mathfrak{X}_{m}(X)$ since $\mathfrak{o}(X_0,\eta)$ is a Lie algebra. Secondly, if $\alpha, \beta \in \Gamma_{\eta}(A_X)$ then $\rho([\alpha,\beta])=[\rho(\alpha),\rho(\beta)]\in \mathfrak{o}(X_0,\eta)$ so that $[\alpha,\beta]\in \Gamma_{\eta}(A_X)$. If $(\xi,v)\in \mathfrak{o}_{m}(X)$ and $\alpha \in \Gamma_{\eta}(A_X)$ then we have by definition that $D_{\xi}(\alpha)=\left.[\xi,\alpha^r]\right|_{X_0} \in \Gamma(A_X)$. Moreover, the equivariance identity implies that $\delta(D_{\xi}(\alpha))=[\xi,\delta(\alpha)]$. Therefore, it holds that $\left.\delta(D_{\xi}\alpha)\right|_{X_0}=\left.[\xi,\delta(\alpha)]\right|_{X_0}$, which is the same as saying that $\rho(D_{\xi}\alpha)=[v,\rho(\alpha)] \in \mathfrak{o}(X_0,\eta)$ since $v$ is also a Killing vector field. \end{proof} The Lie $2$-algebra associated to the crossed module $(\mathfrak{o}_{m}(X),\Gamma_{\eta}(A_X),\delta,D)$ will be called \emph{Lie 2-algebra of strong multiplicative Killing vector fields} of $(X_1\rightrightarrows X_0,\eta)$. \begin{remark} It is worth noticing that if we have an isometric action of a Lie $2$-group $G_1\rightrightarrows G_0$ on $(X_1\rightrightarrows X_0,\eta)$ then the assertions of Lemma \ref{LemmaIso1} and Theorem \ref{InfinitesimalAction} can be rewritten in terms of $(\mathrm{Iso}(X,\eta),\textnormal{Bis}_{\eta}(X),I,\alpha)$ and $(\mathfrak{o}_{m}(X),\Gamma_{\eta}(A_X),\delta,D)$, respectively. \end{remark} A diffeomorphism $\phi:X_0\to X_0$ is said to be a \emph{transversal isometry} of $(X_0,\eta)$ if $\overline{d\phi}:\nu(\mathcal{O}_x)\to \nu(\mathcal{O}_{\phi(x)})$ is a fiberwise isometry for every groupoid orbit $\mathcal{O}_x$ in $X_0$. Consider the set $$\textnormal{Iso}_{\textnormal{w}}(X,\eta)=\left \lbrace (\Phi,\phi)\in \mathrm{Aut}(X)\,|\, \phi\ \textnormal{transversal isometry of}\ (X_0,\eta)\right\rbrace.$$ Note that for every $\sigma\in \textnormal{Bis}(X)$ it follows that $\iota_\sigma$ is a transversal isometry of $(X_0,\eta)$. Thus, by arguing as in Proposition \ref{IsoStrong1} we easily get that: \begin{lemma} The quadruple $(\mathrm{Iso}_{\textnormal{w}}(X,\eta),\textnormal{Bis}(X),I,\alpha)$ determines a sub-crossed module structure of $(\textnormal{Aut}(X),\textnormal{Bis}(X),I,\alpha)$. \end{lemma} The Lie $2$-group determined by the crossed module $(\mathrm{Iso}_{\textnormal{w}}(X,\eta),\textnormal{Bis}(X),I,\alpha)$ is called \emph{Lie 2-group of weak isometries} of $(X_1\rightrightarrows X_0,\eta)$.
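To see that weak isometries form a strictly larger class in general, consider the elementary example of a pair groupoid $M\times M\rightrightarrows M$: any Riemannian metric $\eta$ on $M$ is a $0$-metric since there is a single orbit and the normal spaces $\nu(\mathcal{O})$ vanish. For the same reason every automorphism is trivially a weak isometry, that is, $\mathrm{Iso}_{\textnormal{w}}(X,\eta)=\mathrm{Aut}(X)$, whereas $(\Phi,\phi)$ is a strong isometry only when $\phi^*\eta=\eta$.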
Let us denote by $\mathfrak{o}^{\textnormal{w}}(X_0,\eta)$ the set of vector fields on $X_0$ whose flow determines a (local) transversal isometry of $(X_0,\eta)$ and consider the set $\mathfrak{o}_{m}^{\textnormal{w}}(X)=\left\lbrace (\xi,v)\in \mathfrak{X}_{m}(X)\,|\, v \in \mathfrak{o}^{\textnormal{w}}(X_0,\eta)\right\rbrace$. Observe that if $v_1,v_2\in \mathfrak{o}^{\textnormal{w}}(X_0,\eta)$ then for $t$ small enough we have that the commutator flow $\varphi_{-\sqrt{t}}^{v_2}\varphi_{-\sqrt{t}}^{v_1}\varphi_{\sqrt{t}}^{v_2}\varphi_{\sqrt{t}}^{v_1}$ is a transversal isometry so that $\mathfrak{o}^{\textnormal{w}}(X_0,\eta)$ is a Lie subalgebra of $\mathfrak{X}(X_0)$. From \cite{SW} we know that $\textnormal{Lie}(\textnormal{Bis}(X))$ is identified with $\Gamma(A_X)$. Therefore, by using similar arguments as those in Proposition \ref{Strongkilling} we obtain a description of the Lie $2$-group of weak isometries of $(X_1\rightrightarrows X_0,\eta)$. Namely: \begin{proposition} The quadruple $(\mathfrak{o}_{m}^{\textnormal{w}}(X),\Gamma(A_X),\delta,D)$ defines a sub-crossed module structure of $(\mathfrak{X}_{m}(X),\Gamma(A_{X}),\delta,D)$. \end{proposition} Accordingly, the Lie $2$-algebra associated to the crossed module $(\mathfrak{o}_{m}^{\textnormal{w}}(X),\Gamma(A_X),\delta,D)$ will be called \emph{Lie 2-algebra of weak multiplicative Killing vector fields} of $(X_1\rightrightarrows X_0,\eta)$. \begin{theorem} Let $(X_1\rightrightarrows X_0,\eta)$ be a Riemannian groupoid with $\eta$ a 1-metric and $X_1\rightrightarrows X_0$ proper. Then there always exists a multiplicative vector field $(\xi,v)$ with $v \in \mathfrak{o}^{\textnormal{w}}(X_0,\eta)$. \end{theorem} \begin{proof} Let us pick $v_1 \in \mathfrak{o}^{\textnormal{w}}(X_0,\eta)$. Since $s_X$ is a surjective submersion there is a vector field $\xi_1$ on $X_1$ such that $\xi_1$ is $s_X$-related with $v_1$. The fact that $\overline{ds_X}:\nu(G_{\mathcal{O}})\to \nu(\mathcal{O})$ is a fiberwise isometry ($s_X$ is a Riemannian submersion) implies that the flow of $\xi_1$ is a local transversal isometry of $(X_1,\eta)$ since $\xi_1$ and $v_1$ are $s_X$-related. Let us now consider a proper Haar measure system $\lbrace \mu^x\rbrace$ for $X_1\rightrightarrows X_0$. By following results proved in \cite{CS} we can construct a multiplicative vector field $\xi:X_1\to TX_1$ by taking the average with respect to $\lbrace \mu^x\rbrace$: $$\xi(p)=\int_{a\in t_X^{-1}(s_X(p))}dm_{(pa,a^{-1})}\bigl(\xi_1(pa),d(i_X)_a(\xi_1(a))\bigr)\mu(a).$$ This vector field is $s_X$-related with the vector field $v$ on $X_0$ defined as $$v(x)=\int_{a\in t_X^{-1}(x)}d(t_X)_a(\xi_1(a))\mu(a).$$ Therefore, our result will follow once we prove that the flow of $v$ is a local transversal isometry of $(X_0,\eta)$ since $\xi$ and $v$ are $s_X$-related. However, this follows from a straightforward computation after noting that the flow of $v$ is given by $\varphi^v_r(x)=\int_{a\in t_X^{-1}(x)}(t_X\circ \varphi^{\xi_1}_r)(a)\mu(a)$ and $\overline{dt_X}:\nu(G_{\mathcal{O}})\to \nu(\mathcal{O})$ is a fiberwise isometry since $t_X$ is also a Riemannian submersion. \end{proof} \subsection{Morita invariance} In this subsection we apply some of the results from \cite{OW} to our context in order to define a notion of geometric Killing field on a quotient Riemannian stack. Let us start by introducing some necessary terminology.
A \emph{Morita map} is a groupoid morphism $\phi:(X_1\rightrightarrows X_0) \to (Y_1\rightrightarrows Y_0)$ which is \emph{fully faithful} and \emph{essentially surjective}, in the sense that the source/target maps define a fibred product of manifolds $X_1 \cong (X_0 \times X_0) \times_{ (Y_0\times Y_0)} Y_1$ and that the map $Y_1 \times _{Y_0}X_0\to Y_0$ sending $(\phi^0(x)\to y)\mapsto y$ is a surjective submersion \cite{dH,MM}. An important fact shown in \cite{dH} is that a Lie groupoid morphism is a Morita map if and only if it yields an isomorphism between \emph{transversal data}. That is, the morphism must induce: a homeomorphism between the orbit spaces, a Lie group isomorphism $X_x\cong Y_{\phi^0(x)}$ between the isotropies and isomorphisms between the normal representations $X_x\curvearrowright \nu_x\to Y_{\phi^0(x)}\curvearrowright \nu'_{\phi^0(x)}$. We think of a quotient stack as a Lie groupoid up to Morita equivalence in the sense that two Lie groupoids $X$ and $Y$ define the same stack if there is a third groupoid $Z$ and Morita maps $Z\to X$ and $Z\to Y$ \cite{dH}. These two Morita maps may be assumed to be Morita fibrations (surjective submersions at the level of objects) \cite{MM}. The quotient stack associated to the Lie groupoid $X_1\rightrightarrows X_0$ will be denoted by $[X_0/X_1]$. Two Riemannian metrics $\eta_1$ and $\eta_2$ on $X_1\rightrightarrows X_0$ are said to be \emph{equivalent} if they induce the same inner product on the normal vector spaces over the groupoid orbits \cite{dHF2}. More generally, we define a \emph{Riemannian Morita map} (\emph{fibration}) $\phi: (Z_1 \rightrightarrows Z_0) \to (X_1\rightrightarrows X_0)$ as a Morita map between Riemannian groupoids that induces isometries on the normal vector spaces to the groupoid orbits $\nu_z(\mathcal{O}^Z)\to \nu_{\phi(z)}(\mathcal{O}^X)$ (a Riemannian submersion at the level of objects). By using this terminology we have that $\eta_1$ and $\eta_2$ are equivalent if and only if the identity $\textnormal{id}: (X_1 \rightrightarrows X_0,\eta_1) \to (X_1\rightrightarrows X_0,\eta_2)$ is a Riemannian Morita map. Following \cite{OW}, given a Morita fibration $\phi: (Z_1 \rightrightarrows Z_0) \to (X_1\rightrightarrows X_0)$ we denote the set of projectable sections by $$\Gamma(A_Z)^\phi=\lbrace \alpha \in \Gamma(A_Z):\textnormal{there exists}\ \alpha'\in \Gamma(A_X)\ \textnormal{such that}\ \phi\alpha=\alpha'\phi\rbrace.$$ If $\alpha \in \Gamma(A_Z)$ then the surjectivity of $\phi$ at the level of objects implies that there exists at most one section $\alpha'\in \Gamma(A_X)$ such that $\phi\alpha=\alpha'\phi$, so that there is a natural linear map $\phi_\ast : \Gamma(A_Z)^\phi\to \Gamma(A_X)$. We denote by $\Gamma(A_Z)^\phi \hookrightarrow \Gamma(A_Z)$ the inclusion map. It is clear that we can similarly define the set of projectable multiplicative vector fields $\mathfrak{X}_m(Z)^\phi$, a natural map $\phi_\ast: \mathfrak{X}_m(Z)^\phi\to \mathfrak{X}_m(X)$ and an inclusion $\mathfrak{X}_m(Z)^\phi\hookrightarrow \mathfrak{X}_m(Z)$. From \cite{OW} it follows that $(\mathfrak{X}_m(Z)^\phi,\Gamma(A_{Z})^\phi,\delta,D)$ is a sub-crossed module of $(\mathfrak{X}_m(Z),\Gamma(A_{Z}),\delta,D)$ and both maps $\phi_\ast: (\mathfrak{X}_m(Z)^\phi,\Gamma(A_{Z})^\phi,\delta,D)\to (\mathfrak{X}_m(X),\Gamma(A_{X}),\delta,D)$ and $(\mathfrak{X}_m(Z)^\phi,\Gamma(A_{Z})^\phi,\delta,D)\hookrightarrow(\mathfrak{X}_m(Z),\Gamma(A_{Z}),\delta,D)$ are morphisms of crossed-modules.
More importantly, $$(\mathfrak{X}_m(Z),\Gamma(A_{Z}),\delta,D) \hookleftarrow (\mathfrak{X}_m(Z)^\phi,\Gamma(A_{Z})^\phi,\delta,D) \xrightarrow[]{\it \phi_\ast} (\mathfrak{X}_m(X),\Gamma(A_{X}),\delta,D),$$ are quasi-isomorphisms of crossed modules. For specific details see \cite{OW}. \begin{lemma}\label{MoritaLemma1} Let $\phi: (Z_1 \rightrightarrows Z_0,\eta^Z) \to (X_1\rightrightarrows X_0,\eta^X)$ be a Riemannian Morita fibration. Then \begin{itemize} \item $(\mathfrak{o}_m(Z)^\phi,\Gamma_{\eta}(A_{Z})^\phi,\delta,D)$ is a sub-crossed module of $(\mathfrak{o}_m(Z),\Gamma_\eta(A_{Z}),\delta,D)$, \item the inclusion $(\mathfrak{o}_m(Z)^\phi,\Gamma_{\eta}(A_{Z})^\phi,\delta,D)\hookrightarrow(\mathfrak{o}_m(Z),\Gamma_{\eta}(A_{Z}),\delta,D)$ is a morphism of crossed modules, and \item the projection $\phi_\ast: (\mathfrak{o}_m(Z)^\phi,\Gamma_\eta(A_{Z})^\phi,\delta,D)\to (\mathfrak{o}_m(X),\Gamma_\eta(A_{X}),\delta,D)$ is a morphism of crossed modules. \end{itemize} Moreover, $$(\mathfrak{o}_m(Z),\Gamma_\eta(A_{Z}),\delta,D) \hookleftarrow (\mathfrak{o}_m(Z)^\phi,\Gamma_\eta(A_{Z})^\phi,\delta,D) \xrightarrow[]{\it \phi_\ast} (\mathfrak{o}_m(X),\Gamma_\eta(A_{X}),\delta,D),$$ are quasi-isomorphisms of crossed modules. The same conclusion holds true for the weak counterpart. \end{lemma} \begin{proof} By using similar arguments as those in Lemma \ref{Rmk1} it follows that if $v$ is a (weak) Killing vector field on $(Z_0,\eta^Z)$ then $\phi_\ast(v)$ is also a (weak) Killing vector field on $(X_0,\eta^X)$. Therefore, the result follows by applying Proposition 7.4 and Theorem 7.3 from \cite{OW} after restricting the structure. \end{proof} The following result is clear. \begin{lemma}\label{MoritaLemma2} If $\eta_1$ and $\eta_2$ are equivalent Riemannian metrics on $X_1\rightrightarrows X_0$ then the crossed modules $(\mathfrak{o}_{m}^{\textnormal{w}}(X,\eta_1),\Gamma(A_X),\delta,D)$ and $(\mathfrak{o}_{m}^{\textnormal{w}}(X,\eta_2),\Gamma(A_X),\delta,D)$ agree. \end{lemma} Suppose that $X$ and $Y$ are Morita equivalent Lie groupoids so that there is a third Lie groupoid $Z$ with Morita fibrations $Z\to X$ and $Z\to Y$. From \cite{dHF2} we know that if $\eta^X$ is a Riemannian metric on $X$ then there exists a Riemannian metric $\eta^Z$ on $Z$ that makes the fibration $Z\to X$ Riemannian. We can slightly modify $\eta^Z$ by a cotangent averaging procedure so that we get another Riemannian metric $\tilde{\eta}^Z$ on $Z$ which descends to $Y$, defining a Riemannian metric $\eta^Y$ making the fibration $Z\to Y$ Riemannian. It turns out that these pullback and pushforward constructions are well-defined and mutually inverse modulo equivalence of metrics. This is because $\eta^Z$ and $\tilde{\eta}^Z$ turn out to be equivalent. In this case we refer to $(X,\eta^X)$ and $(Y,\eta^Y)$ as being \emph{Morita equivalent Riemannian groupoids}. This suggests a definition of Riemannian metrics on differentiable stacks. Namely, a stacky metric on the orbit stack $[X_0/X_1]$ associated to a Lie groupoid $X_1\rightrightarrows X_0$ is defined to be an equivalence class $[\eta]$ of a Riemannian metric $\eta$ on $X$. For further details we refer the reader to \cite{dHF2}.
\begin{theorem}\label{MoritaTheorem} If $(X_1 \rightrightarrows X_0,\eta^X)$ and $(Y_1 \rightrightarrows Y_0,\eta^Y)$ are Morita equivalent Riemannian groupoids then the crossed modules $(\mathfrak{o}_{m}^{\textnormal{w}}(X),\Gamma(A_X),\delta,D)$ and $(\mathfrak{o}_{m}^{\textnormal{w}}(Y),\Gamma(A_Y),\delta,D)$ are isomorphic in the derived category of crossed modules. In consequence, the following quotient spaces are isomorphic: $$\mathfrak{o}_{m}^{\textnormal{w}}(X)/\textnormal{im}(\delta)\cong \mathfrak{o}_{m}^{\textnormal{w}}(Y)/\textnormal{im}(\delta).$$ \end{theorem} \begin{proof} This result is a consequence of Lemmas \ref{MoritaLemma1} and \ref{MoritaLemma2} together with Theorem 7.4 and Corollary 7.2 from \cite{OW}. \end{proof} Motivated by the previous result and \cite[Def. 8.1]{OW} we set up the following definition. \begin{definition} Let $(X_1 \rightrightarrows X_0,\eta)$ be a Riemannian groupoid. A \emph{geometric Killing vector field} on the quotient Riemannian stack $([X_0/X_1],[\eta])$ is defined to be an element of $$\mathfrak{o}([X_0/X_1],[\eta]):=\mathfrak{o}_{m}^{\textnormal{w}}(X)/\textnormal{im}(\delta).$$ \end{definition} On the one hand, if we consider proper étale Riemannian groupoids then geometric Killing vector fields recover the classical notions of Killing vector fields on both Riemannian manifolds and Riemannian orbifolds as defined for instance in \cite{BZ}. On the other hand, if we consider the Riemannian groupoid $\textnormal{Hol}(M,\mathcal{F})\rightrightarrows M$ associated to a regular Riemannian foliation $(M,\mathcal{F})$ then geometric Killing vector fields recover the notion of transverse Killing vector fields as defined in \cite[p. 84]{Mo}. We finish this section by proving a finite-dimensionality result for Riemannian foliation groupoids. A \emph{foliation groupoid} is a Lie groupoid $X_1 \rightrightarrows X_0$ whose space of objects $X_0$ is Hausdorff and whose isotropy groups $X_x$ are discrete for all $x\in X_0$. For instance, every \'etale Lie groupoid with Hausdorff objects manifold is a foliation groupoid. The converse is not true; however, every foliation groupoid is Morita equivalent to an \'etale groupoid. As shown in \cite{C}, being a foliation groupoid is equivalent to the associated Lie algebroid anchor map $\rho:A_X\to TX_0$ being injective. As a consequence, the manifold $X_0$ comes with a regular foliation $\mathcal{F}$ tangent to the leaves of $\mathrm{im}(\rho)\subseteq TX_0$. Note that if $X_1 \rightrightarrows X_0$ is source-connected then the leaves of $\mathrm{im}(\rho)\subseteq TX_0$ coincide with the groupoid orbits. \begin{proposition} If $(X_1 \rightrightarrows X_0,\eta)$ is a Riemannian foliation groupoid then the algebra of geometric Killing vector fields on $([X_0/X_1],[\eta])$ has finite dimension. \end{proposition} \begin{proof} Let $\iota: T\hookrightarrow X_0$ be a complete transversal submanifold to the orbit foliation $\mathcal{F}$ of $X_1 \rightrightarrows X_0$ and consider its restricted groupoid $X_T\rightrightarrows T$. As $X_T\rightrightarrows T$ is étale and Morita equivalent to $X_1 \rightrightarrows X_0$ (see \cite[p. 136]{MM}), from Theorem \ref{MoritaTheorem} it follows that $$ \mathfrak{o}([X_0/X_1],[\eta])= \mathfrak{o}_m^{\textnormal{w}}(X_{T}, \iota^*\eta)/\textnormal{im}(\delta)\cong \mathfrak{o}(T)^{X}. $$ Here $\mathfrak{o}(T)^{X}$ denotes the transversal Killing vector fields that are invariant by the normal action.
Therefore, $\mathrm{dim}\mathfrak{o}([X_0/X_1],[\eta])\leq \frac{1}{2}\mathrm{codim}(\mathcal{F})(\mathrm{codim}(\mathcal{F})+1).$ \end{proof}
{ "timestamp": "2022-09-20T02:21:15", "yymm": "2209", "arxiv_id": "2209.08643", "language": "en", "url": "https://arxiv.org/abs/2209.08643" }
\section{Introduction} \textbf{Neural Ordinary Differential Equations.} In machine learning, we try to iteratively find a function that best describes the data. There are two basic approaches to finding this function. The first approach is to directly approximate the function by an analytical or numerical method. An ordinary linear regression falls into this category. The second approach is to approximate the derivative of the function. This results in an Ordinary Differential Equation (ODE); by solving it, we get the approximation of the function. We can parameterize the derivative of the function as a neural network. \par Now, consider a residual network \cite{he2016identity} where all the hidden states have the same dimension. Such networks generate an output by applying a sequence of transformations to a hidden state \cite{chen2018neural}: \begin{equation} \label{eq:1} \mathbf{h}_{t+1} = \mathbf{h}_t + f(\mathbf{h}_t, \theta_t) \end{equation} In the limit of infinitely many layers, we get the continuous dynamics of the hidden units as an ODE defined by a neural network \cite{chen2018neural}: \begin{equation} \frac{d\mathbf{h}(t)}{dt} = f(\mathbf{h}(t), t, \theta) \end{equation} where $f(\mathbf{h}(t), t, \theta)$ is a neural network layer parameterized by $\theta$ at layer $t$. By solving the integral: \begin{equation} \mathbf{h}(T) = \mathbf{h}(t_0) + \int_{t_0}^{T} f(\mathbf{h}(t), t, \theta) \, dt, \end{equation} we can get the output value of a hidden layer at some depth $T$. \par \textbf{Semantic Segmentation.} Semantic segmentation refers to the process of assigning each pixel in an image to a class label. Current state-of-the-art neural networks for semantic segmentation require a considerable amount of memory for training (especially with high-resolution images). Based on the fact that neural ODEs use less memory \cite{chen2018neural}, in this paper we propose a novel neural ODE design for the semantic segmentation task. We evaluate our model on the Cityscapes \cite{cordts2016cityscapes}, CamVid \cite{BrostowFC:PRL2008}, LIP \cite{gong2017look}, and PASCAL-Context \cite{mottaghi2014role} datasets and show that it is able to produce state-of-the-art results using 57\% less memory for training, 42\% less memory for testing, and 68\% fewer parameters. \begin{figure}[t] \includegraphics[width=\textwidth]{images/network} \caption{The overall structure of the proposed method. The down-sampling uses convolutions with $stride=2$. For the up-sampling, we use bilinear interpolation to avoid the checkerboard artifact \cite{odena2016deconvolution}.} \label{fig:segnode} \end{figure} \section{Related Work} {\bf Neural ODEs.} Recently, several works have analyzed the relationship between dynamical systems and deep neural networks. In \cite{weinan2017proposal}, the authors propose the idea of using continuous dynamical systems as a tool for machine learning. In \cite{lu2017beyond}, it has been shown that many effective networks, such as ResNet \cite{he2016identity}, PolyNet \cite{zhang2017polynet}, FractalNet \cite{larsson2016fractalnet}, and RevNet \cite{gomez2017reversible}, can be interpreted as different numerical discretizations of differential equations. When the discretization step approaches zero, this yields a family of neural networks, which are called neural ODEs \cite{chen2018neural}.
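As an illustrative sketch (not the architecture proposed in this paper), the continuous formulation above can be implemented with the \texttt{odeint} routine of the \texttt{torchdiffeq} package released with \cite{chen2018neural}, where the derivative $f(\mathbf{h}(t), t, \theta)$ is an arbitrary convolutional block: \begin{verbatim}
import torch
import torch.nn as nn
from torchdiffeq import odeint_adjoint as odeint

class ODEFunc(nn.Module):
    # The layer f(h(t), t, theta) defining dh/dt; t is unused here,
    # which corresponds to autonomous (time-independent) dynamics.
    def __init__(self, channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, t, h):
        return self.conv2(torch.relu(self.conv1(torch.relu(h))))

func = ODEFunc()
h0 = torch.randn(1, 64, 32, 32)   # hidden state h(t0)
t = torch.tensor([0.0, 1.0])      # integrate from t0 = 0 to T = 1
# odeint returns h at every requested time; [-1] picks h(T).
# odeint_adjoint backpropagates with the adjoint sensitivity method,
# so intermediate activations need not be stored.
hT = odeint(func, h0, t, method='rk4')[-1]
\end{verbatim}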
\cite{chen2018neural} proposes to compute gradients using the adjoint sensitivity method \cite{pontryagin2018mathematical}, in which there is no need to store intermediate quantities during the forward pass of the network. In \cite{zhu2018convolutional}, an interpretation of Dense Convolutional Networks (DenseNets) \cite{huang2017densely} and Convolutional Neural Networks with Alternately Updated Clique (CliqueNets) \cite{yang2018convolutional} is provided from a dynamical systems viewpoint. \par {\bf Memory Usage Reduction.} There are methods to reduce memory footprints. Reduced precision formats are binary floating-point formats that occupy less than 32 bits (four bytes) \cite{courbariaux2014low,micikevicius2017mixed}. These formats either reduce the accuracy or add some processing overhead for converting high precision to low precision. Many other memory reduction techniques are derivatives of binomial gradient checkpointing \cite{griewank2000algorithm,chen2016training,rota2018place}. The overall idea of gradient checkpointing is that the results of cheap operations such as batch normalization \cite{ioffe2015batch} or ReLU can be dropped and then recomputed later. All the gradient checkpointing approaches add processing overhead during training. \par {\bf Semantic Segmentation.} Current state-of-the-art methods for semantic segmentation are based on convolutional neural networks. These networks have different architectures. Encoder-decoder or hourglass networks are used in many computer vision tasks like object detection \cite{lin2017feature}, human pose estimation \cite{newell2016stacked}, image-based localization \cite{melekhov2017image}, and semantic segmentation \cite{long2015fully,badrinarayanan2017segnet,noh2015learning}. Generally, they are made of encoder and decoder parts: the encoder gradually reduces the feature-map resolution and captures high-level semantic information, and the decoder gradually recovers the low-level details. Because these networks lose the image details during the encoder path, they are not able to achieve the highest results without using skip connections. Spatial pyramid pooling models perform spatial pyramid pooling \cite{lazebnik2006beyond,grauman2005pyramid} at different grid scales or apply several parallel atrous convolutions \cite{chen2017deeplab} with different rates. These models include the two well-known PSPNet \cite{zhao2017pyramid} and DeepLab \cite{chen2017rethinking}. High-resolution representation networks \cite{wang2019deep,huang2017multi,fourure2017residual,zhou2015interlinked} try to maintain a high-resolution hidden state from input to output. By doing low-resolution convolutions in parallel streams, high-level features are gained while low-level details are not lost. Since these networks require a lot of memory, they first down-sample the input image to a lower resolution before the main body. Some approaches \cite{chen2017deeplab,chandra2016fast} do post-processing, such as conditional random fields, on the network's output to improve the segmentation details, especially around the object boundaries. These approaches add some processing overhead to training and testing. \par {\bf Semantic Segmentation using Neural ODEs.} There are only a few methods for semantic segmentation that have partially incorporated neural ODEs in the network design. In \cite{pinckaers2019neural}, a U-Net is modified to use neural ODEs.
In this design, the repeated residual blocks in each branch are replaced by a neural ODE that wraps around only one convolutional block. Although U-Net is a well-known network, more recent networks can achieve higher accuracy than U-Net. In this paper, we design our network based on HRNetV2 \cite{wang2019deep}, which can achieve state-of-the-art accuracy on the Cityscapes \cite{cordts2016cityscapes}, CamVid \cite{BrostowFC:PRL2008}, and LIP \cite{gong2017look} datasets. Similar to \cite{pinckaers2019neural}, another modified U-Net is introduced in \cite{li2021robust}. This time, instead of replacing a branch with a neural ODE block, a neural ODE block is added at the end of each branch. In \cite{valle2019neural}, a novel approach that combines neural ODEs and the Level Set method is proposed. This approach parameterizes the derivative of the contour as a neural ODE that implicitly learns a forcing function describing the evolution of the contour. This approach is limited to the segmentation of images with one target class. \begin{figure}[t] \includegraphics[width=\textwidth]{images/base-network} \caption{The baseline network is created by repeating the last module from HRNetV2 \cite{wang2019deep}, which has four branches with different feature-map resolutions. We use skip connections at the module level (not drawn), so each module is treated as a residual block. Each small block in a module consists of one set of batch normalization \cite{ioffe2015batch}, ReLU, and convolutional layers. This network is not a neural ODE and is trained similarly to HRNetV2.} \label{fig:base-network} \end{figure} \section{Method} We introduce a baseline network that is trained without the use of neural ODEs. Then we introduce a neural ODE equivalent to the baseline network. Our goal is to compare the results step by step, from a state-of-the-art network to the baseline network, then to the neural ODE network. \subsection{Baseline Network} At the time of writing, one of the state-of-the-art methods in semantic segmentation is HRNetV2 \cite{wang2019deep}. We adapt this network architecture and turn it into a residual form such that each module is like a residual block. To this aim, we repeat the last module in series multiple times and treat each of them as a residual unit (as depicted in Figure \ref{fig:base-network}). This way, the network consists of multiple residual modules, each module has four branches with different feature-map resolutions, and each branch has multiple residual blocks. In our experiments in this paper, we repeat the main module six times to keep the number of parameters close to HRNetV2. We use this baseline network to gradually evaluate the design of our neural ODE network. \subsection{SegNode} Since the baseline network has an overall residual form, we can turn it into a neural ODE. In this form, a single module or multiple modules act as the function $f$ in Equation \ref{eq:1}. This module (or modules) is wrapped in an ODE solver. Since the main module has four convolutional streams with different resolutions and numbers of channels, we use convolutional layers to create the input feature maps with the corresponding resolution and number of channels. The resulting four tensors are fed to the ODE solver. The output of the ODE solver has the same format as its input. By using four convolutional layers, the number of channels of the output feature maps is changed to the number of classes.
Then, the feature maps are re-scaled to the highest resolution using bilinear interpolation and added together to produce the final output (as shown in Figure \ref{fig:segnode}). Bilinear interpolation is used to avoid the checkerboard artifact \cite{odena2016deconvolution}. We call our network SegNode for short. \begin{table}[t] \caption{Comparison of results on four datasets. We use \dag ~to mark methods pretrained on Mapillary.} \label{tab:results} \centering \setlength\tabcolsep{8pt} \begin{tabular}{l|cccc} \hline Method & Cityscapes & CamVid & LIP & PASCAL-Context \\ \hline HRNetV2 \cite{wang2019deep} & 81.6 & 80.9 & 55.9 & 54.0 \\ HRNetV2+OCR \cite{yuan2020object} & 83.0 & 81.7 & 56.6 & 56.2 \\ HRNetV2+OCR\textsuperscript{\dag} \cite{yuan2020object} & 84.2 & - & - & - \\ \hline U-Node \cite{pinckaers2019neural} & 78.1 & 77.3 & 51.3 & 49.7 \\ NODEs-UNet \cite{li2021robust} & 79.5 & 78.8 & 52.9 & 50.9 \\ \hline Baseline network & 81.7 & 81.0 & 55.9 & 53.9 \\ SegNode & 81.8 & 81.1 & 55.8 & 54.1 \\ SegNode+OCR & 83.1 & {\bf 82.0} & {\bf 56.7} & {\bf 56.2} \\ SegNode+OCR\textsuperscript{\dag} & {\bf 84.5} & - & - & - \\ \hline \end{tabular} \end{table} \section{Experiments} We evaluate our approach on four datasets: Cityscapes \cite{cordts2016cityscapes}, CamVid \cite{BrostowFC:PRL2008}, LIP \cite{gong2017look}, and PASCAL-Context \cite{mottaghi2014role}. Additionally, since the existing neural ODE methods for semantic segmentation have not been evaluated on these datasets, we train and test the two U-Net based methods \cite{pinckaers2019neural,li2021robust} on these datasets and report their accuracy. \subsection{Setup} We pretrain our baseline and SegNode networks on ImageNet \cite{russakovsky2015imagenet} and use the pre-trained networks in all our experiments. We use the mean Intersection over Union (mIoU) metric to compare all the methods. \par For the baseline network, we use the AdamW optimizer \cite{loshchilov2017decoupled} with a weight decay of 0.05 and a batch size of 16. We apply the ``polynomial'' learning rate policy with a poly exponent of 0.9 and an initial learning rate of 0.0001. \par For SegNode, we use the Runge-Kutta ODE solver provided by \cite{chen2018neural}. Also, we use the adjoint sensitivity method \cite{pontryagin2018mathematical} which is available in the same implementation. We use the SGD optimizer with a base learning rate of 0.1, a momentum of 0.9, and no weight decay. The polynomial learning rate decay function is used with a poly exponent of 0.9. \par For both the baseline network and SegNode, similar to HRNetV2 \cite{wang2019deep}, we use a stem for the input image, which consists of two stride-2 3$\times$3 convolutions that decrease the resolution to $1/4$, and is connected to the main body. The main body outputs the feature maps at the same resolution ($1/4$), which are then up-sampled to the original resolution using bilinear interpolation. The streams in the main body have 48, 96, 192, and 384 channels, respectively, from the highest resolution to the lowest. We use two modules in the main body to achieve the highest accuracy.
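To make the data flow of Figure \ref{fig:segnode} concrete, the following is a minimal sketch of the SegNode forward pass in PyTorch using the same \texttt{torchdiffeq} solver interface as above. It is an illustration rather than our exact implementation: \texttt{hr\_module} is a hypothetical placeholder for the HRNetV2-style multi-resolution module, and the input convolutions are simplified. \begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchdiffeq import odeint_adjoint as odeint

CH = [48, 96, 192, 384]   # channels per stream, high to low resolution

def down(cin, cout, n):
    # n stride-2 convs creating one lower-resolution input stream;
    # for n=0 (the highest-resolution stream) just match channels.
    layers, c = [], cin
    for _ in range(n):
        layers += [nn.Conv2d(c, cout, 3, stride=2, padding=1), nn.ReLU()]
        c = cout
    return nn.Sequential(*layers) if layers else nn.Conv2d(cin, cout, 1)

class ODEFunc(nn.Module):
    def __init__(self, hr_module):
        super().__init__()
        self.hr_module = hr_module      # 4 feature maps -> 4 feature maps
    def forward(self, t, h):            # h is a tuple of four tensors
        return tuple(self.hr_module(list(h)))

class SegNode(nn.Module):
    def __init__(self, hr_module, in_ch=64, num_classes=19):
        super().__init__()
        self.in_convs = nn.ModuleList(
            down(in_ch, c, i) for i, c in enumerate(CH))
        self.func = ODEFunc(hr_module)
        self.out_convs = nn.ModuleList(
            nn.Conv2d(c, num_classes, 1) for c in CH)

    def forward(self, x):               # x: stem output at 1/4 resolution
        h0 = tuple(conv(x) for conv in self.in_convs)
        t = torch.tensor([0.0, 1.0])
        hT = [h[-1] for h in odeint(self.func, h0, t, method='rk4')]
        size = hT[0].shape[-2:]         # highest-resolution stream
        # per-stream class logits, up-sampled bilinearly and summed
        return sum(F.interpolate(conv(h), size=size, mode='bilinear',
                                 align_corners=False)
                   for conv, h in zip(self.out_convs, hT))
\end{verbatim}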
\begin{table}[t] \begin{center} \setlength\tabcolsep{4pt} \begin{tabular}{l|cccccc} Method & \rotatebox{90}{Maximum Batch Size for 32GB} & \rotatebox{90}{Memory Usage for Batch Size 24 (GB)} & \rotatebox{90}{Memory Usage for Testing One Image (GB)} & \rotatebox{90}{Training Time per Epoch (Seconds)} & \rotatebox{90}{Testing Time per Image (Milliseconds)} & \rotatebox{90}{Number of Parameters (Millions)} \\ \hline U-Net \cite{ronneberger2015u} & 36 & 21.8 & 0.8 & {\bf 10} & {\bf 4} & 31.0 \\ PSPNet \cite{zhao2017pyramid} & 24 & 31.2 & 0.8 & 18 & 5 & 23.7 \\ Deeplab v3 \cite{chen2017rethinking} & 24 & 31.3 & 1.0 & 24 & 16 & 58.6 \\ HRNetV2 \cite{wang2019deep} & 24 & 31.1 & 1.2 & 18 & 48 & 65.8 \\ \hline Baseline network & 24 & 31.9 & 1.2 & 19 & 49 & 70.9 \\ SegNode & {\bf 62} & {\bf 13.4} & {\bf 0.7} & 34 & 117 & {\bf 20.9} \\ \hline \end{tabular} \end{center} \caption{A comparison of a few important empirical computational measures on an NVIDIA Tesla V100 32GB for the CamVid \cite{BrostowFC:PRL2008} dataset. The training time per epoch is calculated using the maximum batch size possible. Our method requires the least amount of memory, but the longest computation time for training and testing.} \label{tab:computational-cost} \end{table} \subsection{Cityscapes} The Cityscapes dataset \cite{cordts2016cityscapes} contains 5k high-quality pixel-level finely annotated street images. The finely annotated images are divided into 2,975/500/1,525 images for training, validation, and testing. Also, the dataset contains an additional 20k coarsely annotated images. There are 30 classes, and 19 classes among them are used for evaluation. We train on the training, validation, and coarse sets to get the highest accuracy on the test set. \subsection{CamVid} Compared to Cityscapes \cite{cordts2016cityscapes}, CamVid \cite{BrostowFC:PRL2008} is a much smaller dataset focusing on semantic segmentation for driving scenarios. The original version is composed of 701 annotated images in 32 classes with size 960$\times$720 from five video sequences. However, most of the literature only focuses on the protocol proposed in \cite{badrinarayanan2017segnet}, which splits the dataset into 367 training, 101 validation, and 233 test images in 11 classes. We follow this protocol for training on CamVid. \subsection{LIP} The LIP dataset \cite{gong2017look} contains 50,462 human images with detailed annotations. The dataset is divided into 30,462 training, 10,000 validation, and 10,000 test images. The model evaluation is done on 20 categories (including the background label). We follow the common testing protocol \cite{ruan2019devil,wang2019deep} and resize the images to 473$\times$473. \subsection{PASCAL-Context} The PASCAL-Context dataset \cite{mottaghi2014role} adds annotations for more than 400 additional categories to the PASCAL VOC 2010 dataset. It contains 4,998 training and 5,105 validation images, subsets of the PASCAL VOC 2010 dataset. The dataset annotations cover 100\% of the pixels while the previous annotations covered around 29\%. We follow \cite{wang2019deep,yuan2020object} and evaluate our method on 59 sub-categories. \begin{figure}[t] \includegraphics[width=\textwidth]{images/trajectories} \caption{Segmentation results from trajectories at different times. This image shows how the gradual transformations correct the segmentation over time.} \label{fig:trajectories} \end{figure} \subsection{Results} Table \ref{tab:results} compares the results of our proposed method to different variants of HRNetV2 and to existing neural ODE methods.
On average, our method performs better than HRNetV2 and its variants by a small margin. \par We tuned and strengthened the existing neural ODE methods in our implementation by increasing their number of parameters and adjusting hyper-parameters. Still, our proposed design achieves higher accuracy by a large margin. The main reason is that we started our design from a better-performing network architecture and modified it step-by-step towards the final design. \begin{figure}[t] \centering \includegraphics[width=0.8\textwidth]{images/error} \caption{The average mean IoU error of the trajectories during solving time, calculated on the PASCAL-Context validation set.} \label{fig:error} \end{figure} \subsection{Empirical Computational Cost} \label{computational-cost} In this section, we provide an empirical comparison between our approach and a few well-known networks. We use an NVIDIA Tesla V100 32GB with the same network hyper-parameters as used before. All the experiments are implemented in Python using PyTorch. The results are calculated on the CamVid \cite{BrostowFC:PRL2008} dataset with an image size of $480\times360$. \par Table \ref{tab:computational-cost} compares a few important empirical computational measures. Our method requires the least amount of memory, but the longest computation time for training and testing by a large margin. In particular, compared to HRNetV2 \cite{wang2019deep}, our method has 68\% fewer parameters and requires 57\% less memory for training and 42\% less memory for testing. On the other hand, HRNetV2 requires 47\% less training time and 59\% less testing time. \subsection{Trajectory Error} In this section, we show how the ODE solver gradually improves its output during test time. Figure \ref{fig:trajectories} visualizes the segmentation output of the network trajectories over time for one sample image. This figure shows the steps that the ODE solver takes while solving the network's ODE. To generate each step, the integration-time hyper-parameter of the ODE solver is modified so that the input is only partially solved. \par Figure \ref{fig:error} shows the average mean IoU error of the trajectories over time for all the samples in the PASCAL-Context validation set. One of the biggest issues with neural ODEs is that they require more computational resources during test time. To alleviate this problem, it is possible to sacrifice accuracy for speed. As an example, by sacrificing 3\% of accuracy, the required computational time decreases by 50\%. \section{Conclusion} Based on a current state-of-the-art network, we proposed a novel neural ODE design for semantic segmentation. Neural ODEs helped us to reduce the memory requirement at the cost of more processing time. While using notably less memory, our method (SegNode) was able to achieve state-of-the-art results. The proposed method can be used for all computer vision tasks that can make use of dense 2D predictions, such as human pose estimation and object detection. \bibliographystyle{splncs04}
{ "timestamp": "2022-09-20T02:21:59", "yymm": "2209", "arxiv_id": "2209.08667", "language": "en", "url": "https://arxiv.org/abs/2209.08667" }
\section{Introduction} \label{sec:intro} The theory of graph quasirandomness implies that quasirandom graphons are the only graphons $W$ with the self-similarity property that densities of finite graphs are invariant across subgraphons of $W$ (see~\cite{CGW89} for graph quasirandomness and~\cite{Lov12} for graphons). An interesting weakening of this property, which we will motivate further below, is to require only that the family $\cF$ of finite graphs that have positive density is invariant across subgraphons of $W$. We call graphons with this property \emph{weakly random}. It is natural to ask which families $\cF$ can be realized in this way in some weakly random $W$. Since all constant graphons are quasirandom, thus also weakly random, three such families are the cliques, the anti-cliques and the family of all finite graphs. However, there are other families that can be realized in this way such as the family $\cF_{C_4}$ of induced subgraphs of some recursive blow-up of the $4$-cycle, corresponding to the weakly random graphon $W_{C_4}$ that is the limit of the balanced recursive blow-ups of the $4$-cycle. The work of this paper is to show that this notion of weak randomness supports a rich structure theory and provides an illuminating way of studying hereditary classes of graphs based on properties of their limit objects. Before stating our main results, let us further motivate why the study of weak randomness is both natural and tractable, which begins by asking what is special about large cliques and anti-cliques. Recall that the \Erdos--Hajnal Conjecture~\cite{EH89} says that for any proper hereditary class of graphs, there exists a constant $c > 0$ such that any graph of size $n$ in this class either has a clique or an anti-clique of size $n^c$. In~\cite{CM22}, we studied a natural variant of this question in the presence of convergence, which we called the approximate \Erdos--Hajnal property ($\AEHP$), in which we allow for a negligible amount of non-edges in the almost clique or a negligible amount of edges in the almost anti-clique, but require it to be linear-sized. The framework of $\AEHP$ naturally lends itself to analysis via limit theory, i.e., graphons~\cite{LS06} in the graph case, or more generally, flag algebras~\cite{Raz07} and theons~\cite{CR20a} in the case of universal theories in finite relational languages. Surprisingly, hereditary classes of graphs with $\AEHP$ can be characterized as precisely those that avoid containing the aforementioned family $\cF_{C_4}$, see~\cite[Theorem~8.10]{CM22}. In what follows, it will be more convenient to think about hereditary classes of graphs as the models of a particular universal first-order theory $T$ of graphs, so a graphon of $T$ is simply a limit of finite models of $T$. This shift in language supports the model theoretic perspective of studying the theory $T$ (i.e., a hereditary class of graphs) by studying the variation in the class of its infinite models (i.e., its graphons). In this language, a universal theory $T$ of graphs has $\AEHP$ if every graphon of $T$ has a (large) trivial subgraphon, i.e., an almost clique or an almost anti-clique, see~\cite[\S7]{CM22} and Definition~\ref{def:AEHP}. In the proof of the negative side of the characterization of $\AEHP$ for graphs, if all (induced subgraphs of all) finite recursive blow-ups of $C_4$ are models of $T$, then the limit $W_{C_4}$ is a graphon of $T$. 
Looking through the lens of weak randomness, it is clear that $W_{C_4}$ does not contain trivial subgraphons since both the edge and the non-edge must persistently have positive density in all subgraphons of $W_{C_4}$. Part of the characterization of $\AEHP$ involved showing that persistence of the edge and non-edge implies persistence of every graph in $\cF_{C_4}$. Thus, we are led to ask which families arise as \emph{persistent classes} of graphons, i.e., families $\cF$ of graphs that are precisely those that have positive density in all subgraphons of a given graphon $W$. A related notion is that of a \emph{strongly persistent class}, in which the graphon is further required to be weakly random. A priori these notions are different since a non-weakly random graphon can have finite graphs with positive density in only some of its subgraphons. The first theorem of the present paper shows the equivalence of strong persistence and persistence and characterizes such families as precisely those that are closed under substructures and substitutions (see Definition~\ref{def:subst}). This requires both understanding properties of substitutions and constructing appropriate weakly random limits. We prove this result first for graphs (Theorem~\ref{thm:graphpersistence}) and then a suitable generalization of it for structures in arbitrary finite relational languages (Theorem~\ref{thm:persistence}) after developing suitable extensions of the relevant concepts. The appearance of substitution in this characterization, and of the related notion of primality in what follows, is not completely unexpected as both the \Erdos--Hajnal property and its approximate version behave very well under substitution~\cite{APS01,CM22}. Since cliques and anti-cliques are weakly random, we can extend the picture of $\AEHP$ by defining the class $\WR$ as follows: a universal theory of graphs is in $\AEHP$ if all its graphons have trivial subgraphons and a universal theory of graphs is in $\WR$ if all its graphons have weakly random subgraphons. It is immediate that $\AEHP\subseteq\WR$; it is less immediate, but shown in the present paper, that this containment is proper and that not every universal theory is in $\WR$. Because of the nature and simplicity of the characterization of $\AEHP$ for graphs cited above, it becomes plausible that a characterization of the richer $\WR$ class may exist. In Theorem~\ref{thm:graphs:WR}, we characterize theories of graphs in $\WR$ under the additional natural assumption of closure under substitution as those that have ``few'' prime graphs in the sense that there are no infinite antichains of prime graphs in the induced subgraph partial order, a condition we call \emph{primally almost finite}. In one direction, we build on the analysis of persistence of Theorem~\ref{thm:graphpersistence}, and in the other direction, the technology of recursive blow-ups plays a key role. Note that without the assumption of closure under substitutions, it is obvious that $\WR$ is no longer characterized by the primally almost finite condition as, e.g., the theory of bipartite graphs is in $\WR$ (even in $\AEHP$) but has infinite antichains of prime graphs. Many further questions are discussed in the concluding Section~\ref{sec:concl}.
\medskip Let us point out that although~\cite{CM22} provides a good motivation for the current work, it is not a prerequisite for the present paper and we do not rely on any of the results of~\cite{CM22} for our study of weak randomness and the class $\WR$, except for a straightforward characterization of subgraphons and sub-objects~\cite[Lemmas~3.3 and 5.8]{CM22} (see also Section~\ref{sec:prelim} below). To read the current paper, it will be useful to have some familiarity with the theories of graphons and theons, but we repeat the relevant definitions and results in Section~\ref{sec:prelim} to set the notation. Now we describe the structure of the paper. In Section~\ref{sec:prelim}, we review the necessary preliminaries and set notation. Section~\ref{sec:subst} starts to develop the properties of substitution, primality and almost finiteness, which we will need for the rest of the paper. Section~\ref{sec:pers:graph} is devoted to proving the persistence result for graphs, Theorem~\ref{thm:graphpersistence}. Section~\ref{sec:WR:graph} defines the class $\WR$ for graphs and proves the characterization under the assumption of closure under substitutions, Theorem~\ref{thm:graphs:WR}. In Section~\ref{sec:VC:graph}, we study how the notions of weak randomness interact with VC~dimension, show that weakly random graphons of proper theories of graphs must be a.e.\ $\{0,1\}$-valued (Theorem~\ref{thm:weaklyrandom01}) and show that primally almost finite families of graphs must have bounded VC~dimension (Theorem~\ref{thm:primallyalmostfiniteNIP}). In Section~\ref{sec:pers:univ}, we prove the general characterization of strongly persistent classes of structures in finite relational languages (Theorem~\ref{thm:persistence}). In the brief Section~\ref{sec:WR:univ}, we point out which results concerning $\WR$ generalize easily to finite relational languages. In the final Section~\ref{sec:concl}, we summarize and discuss some open problems. \section{Preliminaries} \label{sec:prelim} In this section, we establish the notation and background results that will be used throughout the paper. The main points of the section concern why we can work with canonical theories as opposed to general universal theories, what graphons and theons are and how they interact with open interpretations, and the characterization of subgraphons and sub-objects. \medskip We denote the set of non-negative integers by $\NN$ and the set of positive integers by $\NN_+\df\NN\setminus\{0\}$ and given $n,k\in\NN$, we let $[n]\df\{1,\ldots,n\}$ and let $(n)_k\df n(n-1)\cdots(n-k+1)$ denote the falling factorial. Given a set $V$ and $k\in\NN$, we let $(V)_k$ be the set of \emph{injective} functions $[k]\to V$, we let $\binom{V}{k}\df\{A\subseteq V \mid\lvert A\rvert = k\}$ be the set of subsets of $V$ of size $k$, let $\binom{V}{\leq k}\df\bigcup_{\ell=0}^k\binom{V}{\ell}$ and we let $r(V)\df\bigcup_{k\in\NN_+}\binom{V}{k}$ be the set of non-empty finite subsets of $V$. We will often abuse notation and write $n$ in place of $[n]$ when $V=[n]$ in the notation above. Throughout this text, unless explicitly mentioned otherwise, all languages are assumed to be finite relational languages. The \emph{arity} of a predicate symbol $P$ in a language $\cL$ is denoted $k(P)\in\NN_+$. Given an $\cL$-structure $M$, we denote the \emph{set of vertices} (i.e., the domain of discourse) of $M$ by $V(M)$.
We allow structures and models to have empty vertex sets and the unique structure with empty vertex set, called the \emph{trivial structure}, is denoted $K_0$. Given an $\cL$-structure $M$, $V\subseteq V(M)$ and $v\in V(M)$, we denote the \emph{substructure} of $M$ induced by $V$ by $M\rest_V$ (i.e., we have $V(M\rest_V)\df V$ and $P^{M\rest_V}\df P^M\cap V^{k(P)}$ for every $P\in\cL$) and we let $M - v\df M\rest_{V(M)\setminus\{v\}}$. Our substructures and subgraphs will always be induced, but keeping with the tradition of the fields, we will use the short term ``substructure'' for the former but the full term ``induced subgraph'' for the latter. Given finite structures $M$ and $N$ in a language $\cL$, we let $\Tind(M,N)$ be the set of embeddings of $M$ in $N$ (i.e., the set of injective maps $f\colon V(M)\to V(N)$ that preserve all relations and their negations) and let \begin{align*} \tind(M,N) & \df \begin{dcases*} \frac{\lvert\Tind(M,N)\rvert}{(\lvert N\rvert)_{\lvert M\rvert}}, & if $\lvert M\rvert\leq\lvert N\rvert$, \\ 0, & otherwise \end{dcases*} \end{align*} be the \emph{labeled (induced) density} of $M$ in $N$. We also define the \emph{(induced) density} of $M$ in $N$ as the normalized number of substructures of $N$ that are isomorphic to $M$ given by \begin{align*} p(M,N) & \df \frac{ \lvert\{U\subseteq V(N) \mid N\rest_U\cong M\}\rvert }{ \binom{\lvert N\rvert}{\lvert M\rvert} } = \frac{\lvert M\rvert!}{\lvert\Aut(M)\rvert}\cdot\tind(M,N), \end{align*} when $\lvert M\rvert\leq\lvert N\rvert$ (and defined as $0$ otherwise), where $\Aut(M)$ is the group of automorphisms of $M$. Given a universal theory $T$ in $\cL$ and a set $V$, we let $\cK_V[T]$ be the set of all models $M$ of $T$ whose vertex set $V(M)$ is $V$. Given $n\in\NN$, we let $\cM_n[T]$ be the set of models of $T$ of size $n$ up to isomorphism; we typically think of $\cM_n[T]$ as a subset of $\cK_{[n]}[T]$ by putting one representative of each isomorphism class in $\cM_n[T]$. We also let $\cM[T]\df\bigcup_{n\in\NN}\cM_n[T]$. Recall that for universal theories $T_1$ and $T_2$ in finite relational languages $\cL_1$ and $\cL_2$, respectively, an \emph{open interpretation} (or \emph{definition}) from $T_1$ to $T_2$ is a function $I$ (denoted $I\colon T_1\leadsto T_2$) that maps each predicate symbol $P\in\cL_1$ to an open (i.e., quantifier-free) formula $I(P)(x_1,\ldots,x_{k(P)})$ in $\cL_2$ and such that for each axiom $\forall\vec{x}, F(\vec{x})$ of $T_1$, we have $T_2\vdash\forall\vec{x}, I(F)(\vec{x})$ when we declare $I$ to commute with logical connectives. Open interpretations of the form $I\colon T_1\leadsto T_2$ contra-variantly define maps $\cK_V[T_2]\to\cK_V[T_1]$ for each set $V$ given by $(I(M)\vDash P(\vec{x})) \iff (M\vDash I(P)(\vec{x}))$ for each $P\in\cL_1$.
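For a concrete illustration of these two density notions (in the theory of graphs, formally introduced below), let us compute them in a small case.

\begin{example} Let $P_3$ be the path with $V(P_3) = [3]$ and edges $\{1,2\}$ and $\{2,3\}$, and let $K_2$ be the graph consisting of a single edge. The embeddings of $K_2$ in $P_3$ are the injections $[2]\to[3]$ whose image is one of the two edges of $P_3$, each in one of two orientations, so $\lvert\Tind(K_2,P_3)\rvert = 4$ and \begin{align*} \tind(K_2,P_3) & = \frac{4}{(3)_2} = \frac{2}{3}, & p(K_2,P_3) & = \frac{2!}{\lvert\Aut(K_2)\rvert}\cdot\tind(K_2,P_3) = \frac{2}{3}, \end{align*} in agreement with the direct count: of the $\binom{3}{2} = 3$ two-element subsets of $V(P_3)$, exactly $\{1,2\}$ and $\{2,3\}$ induce a copy of $K_2$. \end{example}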
In turn, for an open interpretation $I\colon T_1\leadsto T_2$, we let $I(T_2)$ be the universal theory in the language of $T_1$ whose finite models are precisely those of the form $I(M)$ for some $M\in\cM[T_2]$, that is, the axioms of $I(T_2)$ are \begin{align*} \forall x_1,\ldots,x_n, \bigvee_{M\in\cK_n[T_2]}\Dopen(I(M))(x_1,\ldots,x_n) \qquad (n\in\NN), \end{align*} where $\Dopen(N)$ is the \emph{open diagram} of $N$, that is, the open formula \begin{align*} \bigwedge_{1\leq i < j\leq n} x_i\neq x_j\land \bigwedge_{P\in\cL_2}\left(\bigwedge_{\alpha\in P^N} P(x_{\alpha_1},\ldots,x_{\alpha_{k(P)}})\land \bigwedge_{\alpha\in V(N)^{k(P)}\setminus P^N} \neg P(x_{\alpha_1},\ldots,x_{\alpha_{k(P)}})\right) \end{align*} that completely encodes the quantifier-free type (over $\varnothing$) of the tuple $(1,\ldots,n)$ in $N$. To make sense out of $\Dopen(K_0)$ (which must be a quantifier-free formula on zero variables), we allow our formulas to use the \emph{tautological truth symbol} $\top$ so that $\Dopen(K_0)$ is defined as $\top$. An \emph{$\cat{Int}$-isomorphism} (or \emph{interdefinition}) is an open interpretation $I\colon T_1\leadsto T_2$ such that there exists an open interpretation $J\colon T_2\leadsto T_1$ such that for every set $V$, the compositions $J\comp I\colon\cK_V[T_2]\to\cK_V[T_2]$ and $I\comp J\colon\cK_V[T_1]\to\cK_V[T_1]$ are the identity maps. Since $p(M,N) = p(I(M),I(N))$ whenever $I\colon T_1\leadsto T_2$ is an $\cat{Int}$-isomorphism and $M$ and $N$ are finite models of $T_2$, we typically do not distinguish between $\cat{Int}$-isomorphic universal theories. A universal theory $T$ in $\cL$ is \emph{canonical} if $T$ entails \begin{align}\label{eq:canonical} \forall x_1,\ldots,x_{k(P)}, \left(\bigvee_{i\neq j} x_i = x_j \to \neg P(\vec{x})\right) \end{align} for every predicate symbol $P\in\cL$. By~\cite[Theorem~2.3]{CR20a} (see also~\cite[\S2.2]{AC14}), every universal theory is $\cat{Int}$-isomorphic to some canonical theory and as such, from this point forward, all theories are assumed to be canonical theories, unless explicitly mentioned otherwise. Some examples of canonical theories that will be used in the text include the \emph{theory of $k$-hypergraphs} $\TkHypergraph$, that is, the canonical theory with a single \emph{symmetric} \emph{irreflexive}\footnote{In the sense that the predicate is \emph{not} true in any non-injective tuple.} $k$-ary predicate $E$, and the \emph{theory of graphs} $\TGraph\df\TkHypergraph[2]$. In these theories, we denote by $K_n^{(k)}\in\cM_n[\TkHypergraph]$ the \emph{complete $k$-hypergraph on $n$ vertices} (i.e., we have $V(K_n^{(k)})\df[n]$ and $E^{K_n^{(k)}}\df ([n])_k$) and we let $K_n\df K_n^{(2)}$ be the \emph{complete graph on $n$ vertices}. Given a $k$-hypergraph $G$, we let $\overline{G}$ denote the \emph{complement} hypergraph of $G$ (given by $V(\overline{G})\df V(G)$ and $E^{\overline{G}}\df (V(G))_k\setminus E^G$). In particular, $\overline{K}_n^{(k)}$ is the \emph{empty $k$-hypergraph on $n$ vertices}. Another example of a canonical theory is the \emph{theory of (strict) linear orders} $\TLinOrder$, i.e., the theory with a binary predicate symbol $\prec$ and axioms \begin{gather*} \forall x, \neg(x\prec x),\\ \forall\vec{x}, (x_1\neq x_2 \to (x_1\prec x_2\lor x_2\prec x_1)),\\ \forall\vec{x}, (x_1\prec x_2\land x_2\prec x_3 \to x_1\prec x_3). \end{gather*} Another useful theory is the \emph{theory of permutations} $\TPerm\df\TLinOrder\cup\TLinOrder$, which is the theory of two (strict) linear orders on the same ground set.
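Before moving on, let us spell out a small instance of the open diagram notation in the theory of graphs.

\begin{example} For the complete graph $K_2$ (so that, under canonicity, $E^{K_2} = \{(1,2),(2,1)\}$), the open diagram is \begin{align*} \Dopen(K_2)(x_1,x_2) & = x_1\neq x_2\land E(x_1,x_2)\land E(x_2,x_1)\land\neg E(x_1,x_1)\land\neg E(x_2,x_2), \end{align*} and the open diagram of its complement $\overline{K}_2$ is obtained by negating the two middle conjuncts. In a canonical theory, the conjuncts $\neg E(x_1,x_1)$ and $\neg E(x_2,x_2)$ are entailed by~\eqref{eq:canonical}, but they are formally part of the diagram as $(1,1),(2,2)\in V(K_2)^2\setminus E^{K_2}$. \end{example}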
Its finite models (up to isomorphism) are in one-to-one correspondence with usual permutations via $S_n\ni\sigma\mapsto M_\sigma\in\cM_n[\TPerm]$, where the first order ${\prec_1^{M_\sigma}}$ of $M_\sigma$ is simply the natural order on $[n]$ and the second order is given by \begin{align*} (M_\sigma\vDash i\prec_2 j) & \iff \sigma^{-1}(i) < \sigma^{-1}(j). \end{align*} Some other examples of theories that can be obtained from $\TGraph$ by adding axioms that will be used are the \emph{theory of graphs of agreements of permutations} $\TPermGraph\df I(\TPerm)$, where $I\colon\TGraph\leadsto\TPerm$ is given by \begin{align*} I(E)(x,y) & \df (x\neq y\land (x\prec_1 y\tot x\prec_2 y)), \end{align*} the \emph{theory of bipartite graphs} $\TBipartite$, which is obtained from $\TGraph$ by adding the axioms $\forall\vec{x},\neg\Dopen(C_{2n+1})(\vec{x})$ for every $n\in\NN_+$, where $C_\ell$ is the $\ell$-cycle graph and the \emph{theory of perfect graphs} $\TPerfect$, which, by the Strong Perfect Graph Theorem~\cite{CRST06}, is obtained from $\TGraph$ by adding the axioms $\forall\vec{x},\neg(\Dopen(C_{2n+1})(\vec{x})\lor\Dopen(\overline{C}_{2n+1})(\vec{x}))$ for every $n\geq 2$. For every finite relational language $\cL$, we let $T_\cL$ be the \emph{pure canonical theory} in $\cL$, that is, the theory whose axioms are precisely~\eqref{eq:canonical} for each $P\in\cL$. Unless explicitly mentioned otherwise, all $\cL$-structures are assumed to be \emph{canonical structures}, that is, models of $T_\cL$. Given a family $\cF$ of models of a canonical theory $T$, we let $\Forb_T(\cF)$ be the theory of models of $T$ that do not have any copies of models in $\cF$, that is, $\Forb_T(\cF)$ is obtained from $T$ by adding the axioms $\forall\vec{x},\neg\Dopen(F)(\vec{x})$ for every $F\in\cF$ (note that if $F = K_0$, then this formula takes the form $\neg\top$, which is tautologically false, so $\Forb_T(\cF)$ has no models when $K_0\in\cF$). Another simple but useful construction is the following: if $\cF$ is a family of finite $\cL$-structures that is closed under substructures, then we let $\Th(\cF)\df\Forb_{T_\cL}(\cM[T_\cL]\setminus\cF)$ be the unique universal theory (up to reaxiomatization) such that $\cM[\Th(\cF)]=\cF$. We say that a canonical theory $T$ is \emph{non-degenerate} if it has some infinite model (equivalently, if $\cM_n[T]$ is non-empty for every $n\in\NN$). \medskip A sequence $(N_n)_{n\in\NN}$ of finite models of a canonical theory $T$ is called \emph{convergent} if it is \emph{increasing} in the sense that $\lvert N_n\rvert < \lvert N_{n+1}\rvert$ for every $n\in\NN$ and if for every $M\in\cM[T]$, the limit $\lim_{n\to\infty} p(M,N_n)$ exists. Another way of seeing this convergence is that each finite model $N$ of $T$ corresponds to the point $p(\place,N)\in [0,1]^{\cM[T]}$ and convergence of an increasing sequence $(N_n)_{n\in\NN}$ amounts to convergence in the (compact metrizable) product topology of $[0,1]^{\cM[T]}$ of the corresponding sequence $(p(\place,N_n))_{n\in\NN}$. There are essentially two ways to encode limits of convergent sequences. The first is algebraically/syntactically: we say that a function $\phi\colon\cM[T]\to[0,1]$ is the limit of a convergent sequence $(N_n)_{n\in\NN}$ if $\phi(M)=\lim_{n\to\infty} p(M,N_n)$ for every $M\in\cM[T]$.
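As a concrete instance of the interpretation $I$ above, let us compute one agreement graph.

\begin{example} Let $\sigma\in S_3$ be given by $\sigma(1)=2$, $\sigma(2)=3$ and $\sigma(3)=1$, so that $\sigma^{-1}(1)=3$, $\sigma^{-1}(2)=1$, $\sigma^{-1}(3)=2$ and the second order of $M_\sigma$ is $2\prec_2 3\prec_2 1$. The two orders agree on the pair $\{2,3\}$ and disagree on the pairs $\{1,2\}$ and $\{1,3\}$, so $I(M_\sigma)$ is the graph on $[3]$ whose unique edge is $\{2,3\}$. \end{example}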
The theory of flag algebras~\cite{Raz07} describes the set $\HomT{T}$ of functions that are limits of convergent sequences as precisely the ones that induce positive homomorphisms from a particular commutative $\RR$-algebra $\cA[T]$ to $\RR$, but for this work, a reader unfamiliar with flag algebras can safely think of $\HomT{T}$ as simply a fancy notation for the subset of $[0,1]^{\cM[T]}$ of all $\phi$ that are limits of some convergent sequence. Note that compactness of $[0,1]^{\cM[T]}$ implies that $\HomT{T}$ is non-empty if and only if $T$ is non-degenerate. For $\phi\in\HomT{T}$, the \emph{theory of positive models} of $\phi$ is the universal theory $\Th(\phi)$ whose finite models are precisely those models $M$ of $T$ such that $\phi(M) > 0$, that is, the axioms of $\Th(\phi)$ are \begin{align*} \forall\vec{x}, \bigvee_{\substack{M\in\cK_n[T]\\\phi(M) > 0}}\Dopen(M)(x_1,\ldots,x_n) \qquad (n\in\NN). \end{align*} \medskip The second way of encoding limits is geometrically/semantically. In the case of graphs, we can encode limits using a \emph{graphon} $W$ over an atomless standard probability space $\Omega=(X,\cA,\mu)$, that is, $W$ is a symmetric function $X\times X\to[0,1]$ measurable in the completion of the product $\sigma$-algebra (typically, the space $\Omega$ is taken to be $[0,1]$ equipped with the Lebesgue measure $\lambda$, in which case, a graphon is simply a symmetric Lebesgue measurable function $[0,1]^2\to[0,1]$). Given one such graphon $W$ over $\Omega=(X,\cA,\mu)$ and a finite graph $G$, the \emph{labeled (induced) density} and the \emph{(induced) density} of $G$ in $W$ are defined respectively as \begin{align*} \tind(G,W) & \df \int_{X^{V(G)}} \prod_{\{v,w\}\in E(G)} W(x_v,x_w) \prod_{\{v,w\}\in E(\overline{G})} (1 - W(x_v,x_w)) \ d\mu(x), \\ \phi_W(G) \df p(G,W) & \df \frac{\lvert G\rvert!}{\lvert\Aut(G)\rvert}\cdot\tind(G,W), \end{align*} where $E(G)\df\{\{v,w\} \mid G\vDash E(v,w)\}$ is the edge set of $G$ and $\overline{G}$ is the complement of $G$. We say that a convergent sequence $(H_n)_{n\in\NN}$ of graphs converges to $W$ if $\lim_{n\to\infty} p(G,H_n) = \phi_W(G)$ for every $G\in\cM[\TGraph]$. Another way of interpreting $\tind(G,W)$ above is to define the set $\Tind(G,W)$ of \emph{labeled (induced) copies} of $G$ in $W$ as \begin{multline} \Tind(G,W) \df \biggl\{(x,y)\in X^{V(G)}\times [0,1)^{\binom{V(G)}{2}} \;\bigg\vert\; \forall\{v,w\}\in\binom{V(G)}{2}, \\ (\{v,w\}\in E(G)\tot y_{\{v,w\}} < W(x_v,x_w)) \biggr\} \end{multline} and note that $\tind(G,W) = (\mu\otimes\lambda)(\Tind(G,W))$. We also use the shorthand $\Th(W)\df\Th(\phi_W)$ for the theory of positive graphs of $W$. Note that when $W$ is $\{0,1\}$-valued, we can interpret it as the adjacency matrix of a graph with vertex set $X$ and for $(x,y)\in X^{V(G)}\times [0,1)^{\binom{V(G)}{2}}$ such that all coordinates of $x$ are distinct, we have $(x,y)\in\Tind(G,W)$ if and only if $x$ is an embedding of $G$ in the graph encoded by the $\{0,1\}$-valued $W$. The main theorem of the theory of graphons~\cite{LS06} says that graphons precisely encode limits of convergent graph sequences. Along with the flag algebra description, this can be easily summarized as $\HomT{\TGraph} = \{\phi_W \mid W\text{ is a graphon}\}$. However, let us note that different graphons can represent the same limit; for example, for any graphon $W$ over $[0,1]$, the graphon $W'$ defined by $W'(x,y)\df W(2x\bmod 1, 2y\bmod 1)$ represents the same limit as $W$ (i.e., we have $\phi_W = \phi_{W'}$).
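The constant graphons make all of these quantities completely explicit and will reappear later as the simplest weakly random examples.

\begin{example} For $p\in[0,1]$, let $W_p\equiv p$ be the constant graphon over $[0,1]$. The integrand defining $\tind(G,W_p)$ is constant, so for every finite graph $G$ we have \begin{align*} \tind(G,W_p) & = p^{\lvert E(G)\rvert}\cdot(1-p)^{\binom{\lvert G\rvert}{2}-\lvert E(G)\rvert}; \end{align*} e.g., $\phi_{W_p}(K_3) = p^3$ and, for the path $P_3$ on three vertices, $\phi_{W_p}(P_3) = 3p^2(1-p)$ (as $\lvert\Aut(P_3)\rvert = 2$, the normalizing factor is $3!/2 = 3$). In particular, when $p\in(0,1)$, every finite graph has positive density in $W_p$. \end{example}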
Another very useful theorem is the Graphon Removal Lemma~\cite[Theorem~1]{Pet13}, which says that for any graphon $W$ over $\Omega$, there exists a graphon $W'$ that differs from $W$ only on a zero measure set (hence $\phi_W=\phi_{W'}$) and such that for every $G\in\cM[\TGraph]$, if $\tind(G,W')=0$, then $\Tind(G,W')\subseteq\cD_{V(G)}$, where \begin{align}\label{eq:graphondiagonal} \cD_V & \df \{(x,y)\in X^V\times [0,1)^{\binom{V}{2}} \mid \exists v,w\in V, (v\neq w\land x_v = x_w)\} \end{align} is the \emph{diagonal set}, that is, the Graphon Removal Lemma says that we only need to change $W$ in a zero measure set to remove all off-diagonal copies of finite graphs that have zero density in $W$. \medskip For the general case, we will use the theory of theons~\cite{CR20a} (see also~\cite{Aus08} and~\cite{AC14} for alternative semantic limits). Given an atomless standard probability space $\Omega=(X,\cA,\mu)$ and a set $V$, we let $\cE_V(\Omega)\df X^{r(V)}$ (recall that $r(V)\df\bigcup_{k\in\NN_+}\binom{V}{k}$), equipping it with the completion of the product measure, which is denoted $\mu$ as well, by abuse. For a predicate symbol $P$ in a finite relational language $\cL$, a \emph{$P$-on} over $\Omega$ is a measurable subset of $\cE_{k(P)}(\Omega)$. A \emph{Euclidean structure} in $\cL$ over $\Omega$ is a function $\cN$ that maps each predicate symbol $P\in\cL$ to a $P$-on $\cN_P\subseteq\cE_{k(P)}(\Omega)$. If we are further given a finite (canonical) $\cL$-structure $K$, we define the set of \emph{labeled (induced) copies} of $K$ in $\cN$ as \begin{align*} \Tind(K,\cN) & \df \bigcap_{P\in\cL}\left( \bigcap_{\alpha\in P^K} (\alpha^*)^{-1}(\cN_P)\cap \bigcap_{\alpha\in (V(K))_{k(P)}\setminus P^K} (\alpha^*)^{-1}(\cE_{k(P)}(\Omega)\setminus\cN_P) \right), \end{align*} where for each injection $\alpha\colon [k]\to V$, $\alpha^*\colon\cE_V(\Omega)\to\cE_k(\Omega)$ is the contra-variantly defined ``projection'' given by \begin{align}\label{eq:alpha*} \alpha^*(x)_A & \df x_{\alpha(A)} \qquad (x\in\cE_V(\Omega), A\in r(k)). \end{align} Similarly to the graphon case, we let \begin{align*} \tind(K,\cN) & \df \mu(\Tind(K,\cN)), & \phi_\cN(K) & \df \frac{\lvert K\rvert!}{\lvert\Aut(K)\rvert}\cdot\tind(K,\cN), \end{align*} and we say that a convergent sequence of finite structures $(N_n)_{n\in\NN}$ converges to $\cN$ if $\lim_{n\to\infty} p(K,N_n) = \phi_\cN(K)$ for every finite structure $K$. Similarly, we use the shorthand $\Th(\cN)\df\Th(\phi_\cN)$ for the theory of positive models of $\cN$. For a canonical theory $T$ in $\cL$, a \emph{(weak) $T$-on} over $\Omega$ is a Euclidean structure $\cN$ in $\cL$ over $\Omega$ such that $\phi_\cN(K)=0$ whenever $K$ is a finite $\cL$-structure that is \emph{not} a model of $T$. A \emph{strong $T$-on} over $\Omega$ is a $T$-on $\cN$ such that for every finite $\cL$-structure $K$ that is \emph{not} a model of $T$, we have $\Tind(K,\cN)\subseteq\cD_{V(K)}(\Omega)$, where \begin{align}\label{eq:diagonal} \cD_V(\Omega) & \df \{x\in\cE_V(\Omega) \mid \exists v,w\in V, (v\neq w\land x_{\{v\}} = x_{\{w\}})\} \end{align} denotes the \emph{diagonal set}. The main theorem of the theory of theons says that theons precisely encode limits of convergent sequences of models, that is, we have $\HomT{T} = \{\phi_\cN \mid \cN\text{ is a $T$-on}\}$.
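Before describing the correspondence further, it may help to see one genuinely higher-arity example of a theon, in which coordinates indexed by sets of different sizes play different roles.

\begin{example} Let $\Omega = ([0,1],\cA,\lambda)$ with $\lambda$ the Lebesgue measure and consider the theory $\TkHypergraph[3]$. One $\TkHypergraph[3]$-on is given by the peon $\cN_E\df\{x\in\cE_3(\Omega) \mid x_{\{1,2,3\}} < 1/2\}$, which uses only the top coordinate. Another is given by $\cN'_E\df\{x\in\cE_3(\Omega) \mid x_{\{1,2\}} + x_{\{1,3\}} + x_{\{2,3\}} < 3/2\}$, which uses only the pair coordinates; since this set is invariant under the natural action of $S_3$ on the coordinates, the resulting predicate is symmetric, as required by $\TkHypergraph[3]$. For the single-edge hypergraph $K_3^{(3)}$, both give $\tind(K_3^{(3)},\cN) = \tind(K_3^{(3)},\cN') = 1/2$ (for $\cN'$, because the sum of three independent uniform random variables is symmetrically distributed around $3/2$), but the two Euclidean structures represent different limits: in $\cN$, edges on more than three vertices behave independently, while in $\cN'$, edges sharing a pair of vertices are positively correlated. \end{example}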
In fact, the easy inclusion of this equality is worth spelling out: given a $T$-on $\cN$ over $\Omega$, for each $n\in\NN$, we sample $\rn{\theta}$ in $\cE_n(\Omega)$ according to $\mu$ and let $\rn{N_n}$ be the random element of $\cK_n[T_\cL]$ given by \begin{gather*} V(\rn{N_n}) \df [n],\\ (\rn{N_n}\vDash P(\alpha)) \iff \alpha^*(\rn{\theta})\in\cN_P \qquad (P\in\cL, \alpha\in ([n])_{k(P)}), \end{gather*} where $\alpha^*\colon\cE_n(\Omega)\to\cE_{k(P)}(\Omega)$ is given by~\eqref{eq:alpha*} for $V=[n]$. It is a straightforward concentration-of-measure exercise to check that, with probability $1$, the sequence $(\rn{N_n})_{n\in\NN}$ converges to $\phi_\cN$. In particular, this means that any limit $\phi\in\HomT{T}$ is also a limit of a sequence of models $(N_n)_{n\in\NN}$ that does not omit sizes in the sense that $\lvert N_n\rvert = n$ for every $n\in\NN$. However, similarly to graphons, different theons can represent the same limit. Similarly to graphons, another very useful theorem of the theory of theons is the Induced Euclidean Removal Lemma~\cite[Theorem~3.3]{CR20a}, which says that any weak $T$-on can be turned into a strong $T$-on by changing its peons only on a zero measure set (which in particular means the two theons represent the same limit). A fortiori, by viewing a $T$-on $\cN$ as a $\Th(\cN)$-on, the Induced Euclidean Removal Lemma implies that there exists a $T$-on $\cN'$ whose peons differ from those of $\cN$ only by a zero measure set and such that $\Tind(K,\cN')\subseteq\cD_{V(K)}(\Omega)$ whenever $\tind(K,\cN') = 0$. \medskip Given an open formula $F(x_1,\ldots,x_n)$ in $\cL$ and a Euclidean structure $\cN$ in $\cL$ over $\Omega$, the \emph{truth set} $T(F,\cN)\subseteq\cE_n(\Omega)$ of $F$ in $\cN$ is defined inductively as follows. \begin{enumerate} \item $T(x_i = x_i,\cN)\df\cE_n(\Omega)$. \item $T(x_i = x_j,\cN)\df\varnothing$, if $i\neq j$.\label{it:truthnoninjequal} \item $T(P(x_{\alpha_1},\ldots,x_{\alpha_{k(P)}}),\cN)\df\varnothing$, if $\alpha\colon[k(P)]\to[n]$ is not injective. \label{it:truthnotinj} \item $T(P(x_{\alpha_1},\ldots,x_{\alpha_{k(P)}}),\cN)\df(\alpha^*)^{-1}(\cN_P)$ if $\alpha\colon[k(P)]\to[n]$ is injective, where $\alpha^*$ is as in~\eqref{eq:alpha*} for $V=[n]$. \item $T(\place,\cN)$ commutes with logical connectives (so e.g., $T(\neg F,\cN)\df\cE_n(\Omega)\setminus T(F,\cN)$ and $T(F_1\lor F_2,\cN)\df T(F_1,\cN)\cup T(F_2,\cN)$). \end{enumerate} One might argue that items~\ref{it:truthnoninjequal} and~\ref{it:truthnotinj} above should be defined as particular subsets of the diagonal $\cD_n(\Omega)$ (see~\eqref{eq:diagonal}), but since all information is lost in $\cD_n(\Omega)$ (both in weak and strong theons), the definition above is just as good but simpler. It is straightforward to check that for $K\in\cK_n[T_\cL]$, we have $\Tind(K,\cN) = T(\Dopen(K),\cN)$. Open interpretations behave very well with the notion of convergence. Furthermore, there are natural operations in the theories of flag algebras~\cite[Definition~4 and Theorem~2.6]{Raz07} and theons~\cite[Remark~6]{CR20a} that capture this action in the limit.
Namely, if $I\colon T_1\leadsto T_2$ is an open interpretation and $(N_n)_{n\in\NN}$ is a convergent sequence of finite models of $T_2$ converging to $\phi\in\HomT{T_2}$ and to the $T_2$-on $\cN$, then $(I(N_n))_{n\in\NN}$ is a convergent sequence of models of $T_1$ converging to $\phi^I\in\HomT{T_1}$ and to the $T_1$-on $I(\cN)$ given by \begin{align} \phi^I(M) & \df \sum_{\substack{M'\in\cM[T_2]\\I(M')\cong M}} \phi(M') \qquad (M\in\cM[T_1]), \label{eq:phiI} \\ I(\cN)_P & \df T(I(P),\cN) \qquad (P\in\cL). \notag \end{align} \medskip Given $\phi\in\HomT{T}$, a \emph{sub-object} of $\phi$ of measure $c > 0$ is a $\psi\in\HomT{T}$ such that there exist a sequence $(N_n)_{n\in\NN}$ of models converging to $\phi$ and sets $A_n\subseteq V(N_n)$ such that $\lim_{n\to\infty}\lvert A_n\rvert/\lvert N_n\rvert = c$ and $(N_n\rest_{A_n})_{n\in\NN}$ converges to $\psi$. By a small abuse, we may use theons $\cN$ and $\cH$ in place of $\phi$ and/or $\psi$, respectively when $\phi_\cN=\phi$ and $\phi_\cH=\psi$. When the underlying theory is $\TGraph$, we will use the more natural name \emph{subgraphon} for the concept of sub-object and with a similar abuse, we will use graphons $W$ and $W'$ in place of $\phi$ and/or $\psi$, respectively when $\phi_W=\phi$ and $\phi_{W'}=\psi$. By~\cite[Lemma~3.3]{CM22}, $\psi\in\HomT{\TGraph}$ is a subgraphon of $W$ of measure $c > 0$ if and only if there exists a measurable function $f\colon X\to [0,1]$ with $\int_X f\ d\mu = c$ such that $\psi = \phi_{W_f}$, where $W_f$ is the graphon over the space $\Omega_f\df(X,\cA,\mu_f)$ defined by \begin{align} \mu_f(B) & \df \frac{1}{c}\int_B f(x)\ d\mu(x),\label{eq:muf} \\ W_f(x,y) & \df W(x,y).\notag \end{align} More generally, by~\cite[Lemma~5.8]{CM22}, $\psi\in\HomT{T}$ is a sub-object of a theon $\cN$ over $\Omega=(X,\cA,\mu)$ of measure $c > 0$ if and only if there exist a measurable function $f\colon X\to [0,1]$ with $\int_X f\ d\mu = c$ and a measure-isomorphism $F$ modulo $0$ from the space $\Omega_f=(X,\cA,\mu_f)$ given by~\eqref{eq:muf} to $\Omega$ such that $\psi=\phi_{\cN\rest_f^F}$ for the theon $\cN\rest_f^F$ over $\Omega_f$ defined by \begin{align*} (\cN\rest_f^F)_P & \df \{x\in\cE_{k(P)}(\Omega_f) \mid x^F\in\cN_P\}\qquad (P\in\cL), \end{align*} where $x^F\in\cE_{k(P)}(\Omega)$ is given by \begin{align*} x^F_B & \df \begin{dcases*} x_B, & if $\lvert B\rvert=1$,\\ F(x_B), & if $\lvert B\rvert\geq 2$. \end{dcases*} \end{align*} When the function $f$ in the above is the indicator function $\One_A$ of some positive measure set $A\subseteq X$, we use the shorthands $\mu_A\df\mu_{\One_A}$, $\Omega_A\df\Omega_{\One_A}$, $W_A\df W_{\One_A}$ and $\cN\rest_A^F\df\cN\rest_{\One_A}^F$ for the concepts above. However, we point out that not every sub-object of $\cN$ is necessarily of the form $\cN\rest_A^F$ for some positive measure set $A$ (see~\cite[Example~45]{CR20b}). \medskip We conclude this section by recalling the definition of the approximate \Erdos--Hajnal property ($\AEHP$) from~\cite[Definition~7.1]{CM22}. \begin{definition}\label{def:AEHP} A universal theory $T$ in a finite relational language $\cL$ has the approximate \Erdos--Hajnal property ($\AEHP$) if every limit $\phi$ of $T$ has a \emph{trivial} sub-object, i.e., a sub-object $\psi$ of the form $\psi=\phi_\cN$ for some $T$-on $\cN$ whose peons all have measure in $\{0,1\}$. 
In particular, if $T$ is a universal theory of graphs (i.e., $T\vdash\TGraph$), then $T\in\AEHP$ if every graphon that is a limit of $T$ has a \emph{trivial} subgraphon, i.e., a subgraphon that is either a.e.\ equal to $0$ or a.e.\ equal to $1$. \end{definition} An equivalent formulation of $\AEHP$ (see \cite[Theorem~7.11]{CM22}) is that for every convergent sequence $(N_n)_{n\in\NN}$ of models of $T$, there exist sets $U_n\subseteq V(N_n)$ such that $\lim_{n\to\infty}\lvert U_n\rvert/\lvert N_n\rvert > 0$ and $(N_n\rest_{U_n})_{n\in\NN}$ converges to a trivial limit. In other words, $\AEHP$ for graphs requires linear-sized almost cliques or almost anti-cliques in the presence of convergence. \section{Substitution and primality} \label{sec:subst} A simple operation that is useful in studying the \Erdos--Hajnal property and its approximate version ($\AEHP$, see Definition~\ref{def:AEHP}) for graphs is substitution. In this section we define a small generalization of this operation for structures in finite relational languages along with some associated notions (e.g., primality) and prove some basic facts about them. Another useful notion covered in this section is that of an almost finite family (Definition~\ref{def:almostfinite}). \begin{definition}\label{def:subst} Given two structures $F_1$ and $F_2$ in a finite relational language $\cL$ and $v\in V(F_1)$, a \emph{substitution} of $v$ in $F_1$ by $F_2$ is an $\cL$-structure $F$ such that there exist functions $f_1\colon V(F_1-v)\to V(F)$ and $f_2\colon V(F_2)\to V(F)$ such that \begin{enumerate} \item $V(F)=\im(f_1)\cup\im(f_2)$, \item $f_1$ is an embedding of $F_1 - v$ in $F$, \item $f_2$ is an embedding of $F_2$ in $F$, \item For every $u\in V(F_2)$, the extension of $f_1$ to a function $V(F_1)\to V(F)$ that maps $v$ to $f_2(u)$ is an embedding of $F_1$ in $F$ (for $\lvert F_2\rvert\geq 1$, this already implies that $f_1$ is an embedding of $F_1 - v$ in $F$). \end{enumerate} We call the substitution $F$ \emph{standard} if $V(F_1)\cap V(F_2)=\varnothing$ and the functions $f_1$ and $f_2$ act identically on their domains (thus $V(F)= (V(F_1)\setminus\{v\})\cup V(F_2)$). The unique substitution $F$ (up to isomorphism) of $v$ in $F_1$ by $F_2$ that has the smallest possible relation sets $P^F$ ($P\in\cL$) is called the \emph{conservative substitution of $v$ in $F_1$ by $F_2$} and is denoted $F_1^{v\to F_2}$. If $V(F_1)\cap V(F_2)=\varnothing$, then we can formally define $F_1^{v\to F_2}$ by \begin{align*} V(F_1^{v\to F_2}) & \df (V(F_1)\setminus\{v\})\cup V(F_2),\\ P^{F_1^{v\to F_2}} & \df P^{F_2}\cup\{\alpha\in P^{F_1} \mid v\notin\im(\alpha)\}\cup\{f_u\comp\alpha \mid \alpha\in P^{F_1}\land u\in V(F_2)\}, \end{align*} for every $P\in\cL$, where $f_u\colon V(F_1)\to (V(F_1)\setminus\{v\})\cup V(F_2)$ is the function that acts identically on $V(F_1)\setminus\{v\}$ and has $f_u(v)=u$ (the middle set above is only needed when $F_2 = K_0$; otherwise it is contained in the third set). We say that a family $\cF$ of $\cL$-structures (up to isomorphism) is \emph{strongly closed under substitutions} if for every $F_1,F_2\in\cF$, every $v\in V(F_1)$ and every substitution $F$ of $v$ in $F_1$ by $F_2$, we have $F\in\cF$. We say that $\cF$ is \emph{weakly closed under substitutions} if for every $F_1,F_2\in\cF$ and every $v\in V(F_1)$, there exists some substitution $F$ of $v$ in $F_1$ by $F_2$ such that $F\in\cF$. The \emph{strong closure under substitutions} of $\cF$ is the smallest family $S(\cF)$ containing $\cF$ that is strongly closed under substitutions. We say that a finite $\cL$-structure $F$ is \emph{prime}\footnote{This should not be confused with the notion of prime model/structure of model theory.} if it is not a substitution of $v$ in $F_1$ by $F_2$ for any $F_1,F_2$ and $v\in V(F_1)$ with $\lvert F_1\rvert,\lvert F_2\rvert < \lvert F\rvert$.
We say that an $\cL$-structure $F$ is \emph{monochromatic} if for every \emph{unary} predicate symbol $P\in\cL$, we have $F\vDash\forall x\forall y, P(x)\tot P(y)$, that is, each unary predicate is either true everywhere or true nowhere in $F$. \end{definition} \begin{remark}\label{rmk:substitutionunary} Note that if $F$ is a substitution of $v$ in $F_1$ by $F_2$, then for every unary predicate symbol $P\in\cL$, we must have \begin{align*} (F_1\vDash P(v)) \implies (F_2\vDash\forall x, P(x)),\\ (F_1\vDash \neg P(v)) \implies (F_2\vDash\forall x, \neg P(x)). \end{align*} In particular, this means that any $\cF$ that is weakly closed under substitution can have at most one structure $M_1$ of size $1$ (up to isomorphism); moreover, all structures $F$ of $\cF$ are monochromatic and of the same ``color'' in the sense that for every unary predicate symbol $P\in\cL$ and every $F\in\cF$ with $\lvert F\rvert\geq 1$, we have $M_1\vDash\forall x, P(x)$ if and only if $F\vDash\forall x, P(x)$. \end{remark} \begin{remark}\label{rmk:increasingsubstitution} Note that if $F$ is a substitution of $v$ in $F_1$ by $F_2$, then $\lvert F\rvert = \lvert F_1\rvert + \lvert F_2\rvert - 1$, so $\lvert F\rvert\leq\max\{\lvert F_1\rvert,\lvert F_2\rvert\}$ if and only if $\min\{\lvert F_1\rvert,\lvert F_2\rvert\}\leq 1$. When $F_1$ has a single vertex, then $F\cong F_2$; when $F_2$ has a single vertex, then $F\cong F_1$; and when $F_2$ has no vertices (i.e., $F_2=K_0$), then $F\cong F_1-v$. In particular, this means that every structure of size at most $2$ is prime. \end{remark} \begin{remark}\label{rmk:substarity2} If all predicates in $\cL$ have arity at most $2$ (which in particular covers the case of the theory of graphs), then all substitutions are conservative and the notions of weakly closed under substitutions and strongly closed under substitutions coincide. As such, in Sections~\ref{sec:pers:graph}, \ref{sec:WR:graph} and~\ref{sec:VC:graph} concerning $\TGraph$, we will drop the superfluous qualifiers ``weakly'' and ``strongly'' from the terminology. \end{remark} \begin{remark}\label{rmk:prime3} If all predicates have arity at least $3$, then the notion of prime structure completely degenerates: the only prime structures are the unique structures $K_0$, $M_1$ and $M_2$ of sizes $0$, $1$ and $2$, respectively. The reason why every structure $K$ of size at least $3$ is not prime is that for any $u\in V(K)$ and $v\in V(M_2)$, $K$ is a substitution of $v$ in $M_2$ by $K-u$ since all relations involving $u$ must involve at least two other vertices. \end{remark} \begin{remark}\label{rmk:nondegenerate} If $T$ is a universal theory such that $\cM[T]$ is weakly closed under substitution, then $T$ is non-degenerate if and only if $\cM_2[T]\neq\varnothing$. \end{remark} Let us now prove some basic facts about substitutions and primality. \begin{lemma}\label{lem:substitutionK0} Let $\cF$ be a non-empty family of $\cL$-structures (up to isomorphism) that is weakly closed under substitutions. Then $\cF$ is closed under substructures if and only if $\cF$ contains the trivial structure $K_0$ of size $0$. \end{lemma} \begin{proof} Follows since a substitution of $v$ in $F$ by $K_0$ is isomorphic to $F-v$, so that iterated substitutions by $K_0$ produce all substructures. \end{proof} \begin{lemma}\label{lem:substitutionsubstructure} Let $F_1,F_2$ be finite $\cL$-structures, let $v\in V(F_1)$. If $F$ is a substitution of $v$ in $F_1$ by $F_2$ and $U\subseteq V(F)$, then there exist sets $U_1\subseteq V(F_1)$ and $U_2\subseteq V(F_2)$ with $v\in U_1$ such that $F\rest_U$ is a substitution of $v$ in $F_1\rest_{U_1}$ by $F_2\rest_{U_2}$.
Conversely, if $U_1\subseteq V(F_1)$ and $U_2\subseteq V(F_2)$ are such that $v\in U_1$ and $F'$ is a substitution of $v$ in $F_1\rest_{U_1}$ by $F_2\rest_{U_2}$, then there exist a substitution $F$ of $v$ in $F_1$ by $F_2$ and a set $U\subseteq V(F)$ such that $F\rest_U\cong F'$. \end{lemma} \begin{proof} Without loss of generality, by possibly renaming vertices, we can consider only the case when $F$ is a standard substitution. Then it is straightforward to check that for $U_1\df (U\cap V(F_1))\cup\{v\}$ and $U_2\df U\cap V(F_2)$, we have that $F\rest_U$ is a substitution of $v$ in $F_1\rest_{U_1}$ by $F_2\rest_{U_2}$. \medskip For the second assertion, by possibly renaming vertices, we may suppose without loss of generality that $F'$ is also a standard substitution. Then it is straightforward to see that setting $U\df (U_1\setminus\{v\})\cup U_2$, there exists a standard substitution $F$ of $v$ in $F_1$ by $F_2$ such that $F\rest_U = F'$. \end{proof} \begin{lemma}\label{lem:primesubstructure} If $F\in S(\{F_1,\ldots,F_t\})$ for some finite $\cL$-structures $F_1,\ldots,F_t$ and $P$ is a prime substructure of $F$, then $P$ is a substructure of some $F_i$. \end{lemma} \begin{proof} The proof is by induction on the minimum length $\ell$ of a sequence of substitutions needed to obtain $F$ from $F_1,\ldots,F_t$. If $\ell = 0$, then $F\cong F_i$ for some $i\in[t]$, so $P$ is a substructure of $F_i$. If $\ell > 0$, then $F$ is a substitution of $v$ in $M_1$ by $M_2$ for some $M_1,M_2\in S(\{F_1,\ldots,F_t\})$ and some $v\in V(M_1)$ such that if the minimum lengths of sequences of substitutions needed to obtain $M_1$ and $M_2$ from $F_1,\ldots,F_t$ are $\ell_1$ and $\ell_2$, respectively, then $\ell_1 + \ell_2 + 1 = \ell$, which in particular implies $\ell_1,\ell_2 < \ell$. Without loss of generality, suppose $F$ is a standard substitution of $v$ in $M_1$ by $M_2$. By Lemma~\ref{lem:substitutionsubstructure}, we know that there exist $U_1\subseteq V(M_1)$ and $U_2\subseteq V(M_2)$ with $v\in U_1$ such that $P$ is isomorphic to a substitution of $v$ in $M_1\rest_{U_1}$ by $M_2\rest_{U_2}$. Since $P$ is prime and $\lvert P\rvert = \lvert U_1\rvert + \lvert U_2\rvert - 1$, we must have $\min\{\lvert U_1\rvert,\lvert U_2\rvert\}\leq 1$, so by Remark~\ref{rmk:increasingsubstitution}, either $P\cong M_1\rest_{U_1}$, $P\cong M_2\rest_{U_2}$ or $P\cong M_1\rest_{U_1}-v$, so $P$ is a substructure of either $M_1$ or $M_2$ and by inductive hypothesis, it follows that $P$ is a substructure of some $F_i$. \end{proof} As one might expect, prime structures play a major role in characterizing classes that are strongly closed under substitutions. This is made precise by the next two lemmas. \begin{lemma}\label{lem:strongclosuresubst} Let $\cF$ be a family of finite $\cL$-structures (up to isomorphism) that is strongly closed under substitutions and closed under substructures and let $\cP$ be the set of structures in $\cF$ that are prime. Then $\cF = S(\cP)$. Conversely, if $\cP'$ is a family of prime finite $\cL$-structures that is closed under prime substructures and $\cF = S(\cP')$, then $\cP'=\cP$. \end{lemma} \begin{proof} Let $\cF' = S(\cP)$. It is obvious that $\cF'\subseteq\cF$. Suppose toward a contradiction that $\cF\setminus\cF'\neq\varnothing$ and let $F$ be an $\cL$-structure in $\cF\setminus\cF'$ of minimum size. We claim that $F$ is prime. Indeed, if not, then $F$ is a substitution of some $v$ in some $F_1$ by some $F_2$ with $\lvert F_1\rvert,\lvert F_2\rvert<\lvert F\rvert$.
Since both $F_1$ and $F_2$ are proper substructures of $F$ and $\cF$ is closed under substructures, we have $F_1,F_2\in\cF$, so the minimality of $F$ gives $F_1,F_2\in\cF'$; since $\cF'$ is strongly closed under substitutions, it follows that $F\in\cF'$, contrary to the choice of $F$. Thus $F$ is prime, so $F\in\cP\subseteq\cF'$, which again contradicts $F\notin\cF'$. \medskip For the second assertion, if $\cF$ is empty, then both $\cP$ and $\cP'$ must also be empty. If $\cF$ is not empty, then each $P'\in\cP'$ is in $\cF=S(\cP)$, so Lemma~\ref{lem:primesubstructure} implies that $P'$ must be a substructure of some element in $\cP$, hence must also be in $\cP$, since $\cP$ is closed under prime substructures (because $\cF$ is closed under substructures). Similarly, every element of $\cP$ must be an element of $\cP'$ as the latter is also closed under prime substructures. \end{proof} \begin{lemma}\label{lem:stronglyclosedprimecF} Let $T$ be a canonical theory in a finite relational language $\cL$ and let $\cF$ be the set of minimal $\cL$-structures that are \emph{not} models of $T$, that is, the set of all $M\in\cM[T_\cL]\setminus\cM[T]$ such that every proper substructure of $M$ is a model of $T$. Then $\cM[T]$ is strongly closed under substitution if and only if $\cF$ contains only prime structures. \end{lemma} \begin{proof} For the forward implication, note that if $M\in\cM[T_\cL]\setminus\cM[T]$ is \emph{not} prime, then it is a substitution of $v$ in $M_1$ by $M_2$ for some $M_1,M_2\in\cM[T_\cL]$ with $\lvert M_1\rvert,\lvert M_2\rvert < \lvert M\rvert$ and $v\in V(M_1)$. Since $\cM[T]$ is strongly closed under substitutions, we must have either $M_1\notin\cM[T]$ or $M_2\notin\cM[T]$; since $M_1$ and $M_2$ are proper substructures of $M$, this shows that $M$ is not minimal, hence $M\notin\cF$. \medskip For the backward implication, first note that $T$ is a reaxiomatization of $\Forb_{T_\cL}(\cF)$. Let us show that if $M$ is a substitution of $v$ in $M_1\in\cM[T]$ by $M_2\in\cM[T]$, then $M\in\cM[T]$. Without loss of generality, let us assume the substitution to be standard so there is the natural identification of $V(M)$ with $(V(M_1)\setminus\{v\})\cup V(M_2)$. Suppose toward a contradiction that $M\notin\cM[T]$, that is, there exist $F\in\cF$ and $U\subseteq V(M)$ such that $M\rest_U\cong F$. Since $M_2\in\cM[T]$, we must have $U\cap V(M_1)\neq\varnothing$ (otherwise, $F$ would be a substructure of $M_2$) and since $M_1\in\cM[T]$, we must have $\lvert U\cap V(M_2)\rvert\geq 2$ (otherwise, $F$ would be a substructure of $M_1$). But then $F$ is a substitution of $v$ in $M_1\rest_{(U\cap V(M_1))\cup\{v\}}$ by $M_2\rest_{U\cap V(M_2)}$ and since \begin{align*} \lvert (U\cap V(M_1))\cup\{v\}\rvert & \leq \lvert U\rvert - \lvert U\cap V(M_2)\rvert + 1 < \lvert U\rvert = \lvert F\rvert, \\ \lvert U\cap V(M_2)\rvert & \leq \lvert U\rvert - \lvert U\cap V(M_1)\rvert < \lvert U\rvert = \lvert F\rvert, \end{align*} this contradicts the fact that $F$ is prime (as it is an element of $\cF$). Thus $\cM[T]$ is strongly closed under substitutions. \end{proof} Next we cover the notion of an almost finite family of structures and some associated notions. \begin{definition}\label{def:almostfinite} We say that a family $\cF$ of $\cL$-structures (up to isomorphism) is \emph{almost finite} if $\cF$ does not contain any infinite antichain in the substructure partial order. Equivalently, $\cF$ is almost finite if for every infinite $\cF'\subseteq\cF$, there exist $F_1,F_2\in\cF'$ such that $F_1$ is a proper substructure of $F_2$.
Given a family $\cF$ of $\cL$-structures (up to isomorphism), let $\cP\subseteq\cF$ be the set of prime structures in $\cF$ and $\cP'\subseteq\cP$ be the set of monochromatic prime structures in $\cF$. We say that $\cF$ is \begin{enumerate} \item \emph{primally finite}, if $\cP$ is finite, \item \emph{primally almost finite}, if $\cP$ is almost finite, \item \emph{monochromatically primally finite}, if $\cP'$ is finite, \item \emph{monochromatically primally almost finite}, if $\cP'$ is almost finite. \end{enumerate} \end{definition} \begin{example}\label{ex:substitutiongeneration} An example of a family of graphs that is primally almost finite, strongly (equivalently, weakly) closed under substitutions and closed under substructures but is not primally finite is $S(\{P_n \mid n\in\NN\})$, where $P_n$ is the path on $n$ vertices. An example of a proper family of graphs that is not primally almost finite, but is strongly closed under substitutions and closed under substructures is $S(\{K_0\}\cup\{C_n \mid n\geq 5\})$, where $C_n$ is the cycle on $n$ vertices (it is straightforward to check that when $n\geq 5$, $C_n$ is prime). Another very important such example is the family $S(\{G_n \mid n\geq 6\}\cup\{K_0\})$, where $G_n$ is the graph obtained from $P_n$ by adding four vertices $a,b,c,d$ and adding the edges $\{a,b\},\{c,d\},\{a,2\},\{b,3\},\{c,n-2\},\{d,n-1\}$, assuming that $V(P_n)=[n]$ in the natural order of the path (see Figure~\ref{fig:pathwithfourcycles}). It is straightforward to check that each $G_n$ is prime and that they are pairwise incomparable in the induced subgraph partial order. Note also that the paths $P_n$ are elements of $S(\{G_n \mid n\geq 6\}\cup\{K_0\})$ as $P_n$ is a substructure of $G_n$. \end{example} \begin{figure}[htb] \input{pathwithfourcycles} \end{figure} \begin{remark}\label{rmk:primallyalmostfinite} A family $\cF$ of the form $\cF = S(\cP')$ for some almost finite set of prime structures $\cP'$ is not necessarily primally almost finite; this is because the set $\cP$ of structures in $\cF$ that are prime is equal to the set of prime substructures of elements of $\cP'$, which may be a proper superset of $\cP'$ (see Lemma~\ref{lem:primesubstructure}). As an example, consider the graphs $G_n$ of Example~\ref{ex:substitutiongeneration} and for each $n\geq 6$, define the (prime) graph $H_n$ as the graph obtained from the disjoint union of $G_6,\ldots,G_n$ by adding all edges between the first vertices of the paths $P_k$ inside the graphs $G_k$, so that these first vertices form a clique (see Figure~\ref{fig:gluedpathswithfourcycles}). Obviously, each $H_n$ is an induced subgraph of $H_{n+1}$, so $\cP'\df\{K_0\}\cup\{H_n\mid n\geq 6\}$ is almost finite, but since $\{G_n\mid n\geq 6\}\subseteq S(\cP')$, it follows that $S(\cP')$ is not primally almost finite. \end{remark} \begin{figure}[htb] \input{gluedpathswithfourcycles} \end{figure} The next lemma uses the fact that the substructure partial order on finite structures is well-founded to provide a useful equivalent formulation of almost finiteness. \begin{lemma}\label{lem:almostfinite} The following are equivalent for a family $\cF$ of finite $\cL$-structures (up to isomorphism).
\begin{enumerate} \item The family $\cF$ is almost finite.\label{lem:almostfinite:almostfinite} \item For every sequence $(F_n)_{n\in\NN}$ in $\cF$, there exist $n,m\in\NN$ such that $n < m$ and $F_n$ is a substructure of $F_m$.\label{lem:almostfinite:noincreases} \end{enumerate} \begin{proof} We start with the implication~\ref{lem:almostfinite:almostfinite}$\implies$\ref{lem:almostfinite:noincreases}. If there exist $n,m\in\NN$ with $n < m$ and $F_n\cong F_m$, then $F_n$ is a substructure of $F_m$. Suppose then that the $F_n$ are pairwise non-isomorphic. Let $I$ be the set of all $n\in\NN$ such that for every $m\in\NN$, $F_m$ is not a proper substructure of $F_n$. We claim that $I$ is finite. Indeed, otherwise, $\cF'\df\{F_n\mid n\in I\}$ would be an infinite subfamily of $\cF$ such that for all $F,F'\in\cF'$, $F$ is not a proper substructure of $F'$. We now construct inductively a sequence $m_0,\ldots,m_t$ as follows. Let $m_0 > \max(I)$. Given $m_i$, if $m_i\notin I$, then there exists $m_{i+1}$ such that $F_{m_{i+1}}$ is a proper substructure of $F_{m_i}$; otherwise, we set $t\df i$ and stop the construction. Since $\lvert F_{m_{i+1}}\rvert < \lvert F_{m_i}\rvert$ and all structures are finite, the construction above must stop and by a simple induction, we have that $F_{m_t}$ is a proper substructure of $F_{m_0}$. Finally, since $m_t\in I$ and $m_0 > \max(I)$, we get $m_t < m_0$. \medskip Let us now show the implication~\ref{lem:almostfinite:noincreases}$\implies$\ref{lem:almostfinite:almostfinite}. Let $\cF'$ be an infinite subfamily of $\cF$ and enumerate it as $(F_n)_{n\in\NN}$ without repetitions. Then there exist $n,m\in\NN$ such that $n < m$ and $F_n$ is a substructure of $F_m$. Since $F_n\not\cong F_m$, it follows that $F_n$ is a proper substructure of $F_m$. \end{proof} We end this section with the following proposition that relates the notions of primally finite and primally almost finite for a family $\cF$ strongly closed under substitutions and closed under substructures with the number of subclasses of $\cF$ that are strongly closed under substitutions and closed under substructures. \begin{proposition}\label{prop:primcount} Let $\cF$ be a family of finite $\cL$-structures that is strongly closed under substitutions and closed under substructures, let $\cS$ be the set of subfamilies $\cF'$ of $\cF$ that are strongly closed under substitutions and closed under substructures. Then the following hold. \begin{enumerate} \item $\cF$ is primally finite if and only if $\cS$ is finite.\label{prop:primcount:finite} \item $\cF$ is primally almost finite if and only if $\cS$ is countable.\label{prop:primcount:almostfinite} \end{enumerate} \end{proposition} \begin{proof} Let $\cP$ be the set of prime structures of $\cF$ and let $\cS'$ be the set of subfamilies $\cP'\subseteq\cP$ that are closed under prime substructures. Then Lemma~\ref{lem:strongclosuresubst} gives a natural bijection between $\cS$ and $\cS'$. Since $\cP$ is finite if and only if $\cS'$ is finite, item~\ref{prop:primcount:finite} follows. The same bijection between $\cS$ and $\cS'$ implies that for item~\ref{prop:primcount:almostfinite}, it is sufficient to show that $\cP$ almost finite is equivalent to $\cS'$ countable. Suppose first that $\cP$ is not almost finite, that is, $\cP$ contains an infinite (countable) antichain $\cA\subseteq\cP$. Then for each $\cA'\subseteq\cA$, let $\cC_{\cA'}\subseteq\cP$ be the set of elements of $\cP$ that are substructures of some element of $\cA'$.
Since $\cA$ is an antichain, it follows that for $\cA',\cA''\subseteq\cA$ distinct, we have $\cC_{\cA'}\neq\cC_{\cA''}$, hence $\cS'$ has the cardinality of the continuum. \medskip Suppose now that $\cS'$ is uncountable. Let us define by induction on $n\in\NN$ two sequences $(P_n)_{n\in\NN}$ and $(\cS'_n)_{n\in\NN}$ with the following properties. \begin{enumerate}[label={\arabic*.}, ref={(\arabic*)}] \item $\cS'_n\subseteq\cS'$ is uncountable.\label{prop:primcount:cS'n} \item There exists $\cC\in\cS'_n$ such that $P_n\in\cC$.\label{prop:primcount:Pnin} \item For every $\cC'\in\cS'_{n+1}$, we have $P_n\notin\cC'$.\label{prop:primcount:Pnout} \end{enumerate} We start by setting $\cS'_0\df\cS'$. Given $\cS'_n$, let $\cP_n\df\bigcup_{\cC\in\cS'_n}\cC$ and for each $P\in\cP_n$, we let $\cS'_n(P)\df\{\cC\in\cS'_n \mid P\notin\cC\}$. Since $\cS'_n$ is uncountable, $\cP_n$ is countable (being a set of finite structures up to isomorphism) and $\cS'_n\subseteq\{\cP_n\}\cup\bigcup_{P\in\cP_n}\cS'_n(P)$, the pigeonhole principle implies that there exists $P_n\in\cP_n$ such that $\cS'_n(P_n)$ is uncountable. Set $\cS'_{n+1}\df\cS'_n(P_n)$ so that by definition, we have $P_n\in\cC$ for some $\cC\in\cS'_n$ but $P_n\notin\cC'$ for every $\cC'\in\cS'_{n+1}$. This concludes the construction. Since each $\cC\in\cS'$ is closed under prime substructures, items~\ref{prop:primcount:cS'n}, \ref{prop:primcount:Pnin} and~\ref{prop:primcount:Pnout} together imply that for every $n < m$, $P_n$ is not a substructure of $P_m$, so by Lemma~\ref{lem:almostfinite}, we get that $\cP$ is not almost finite. \end{proof} \section{Persistence in graphons} \label{sec:pers:graph} In this section we study the notion of (strongly) persistent families of graphs (Definition~\ref{def:persistence} below). The main objective of this section is to characterize persistence for theories of graphs in terms of closure under substitution and under induced subgraphs. We also remind the reader that in this section we drop the qualifiers ``weakly'' and ``strongly'' from ``closed under substitutions'' as they are superfluous for graphs (see Remark~\ref{rmk:substarity2}). \begin{definition}\label{def:persistence} Let $W$ be a graphon. The set of \emph{positive graphs in $W$} is the set $Q(W)\df\cM[\Th(W)]$ of all finite graphs $G$ (up to isomorphism) such that $\phi_W(G) > 0$. The set of \emph{persistently positive graphs in $W$} is the set $P(W)\df\bigcap_{W'} Q(W')$, where the intersection is over all subgraphons $W'$ of $W$. Clearly, $P(W)$ and $Q(W)$ depend only on the limit $\phi_W\in\HomT{\TGraph}$. Thus, for $\phi\in\HomT{\TGraph}$, we let $P(\phi)\df P(W)$ and $Q(\phi)\df Q(W)$ for any graphon $W$ such that $\phi = \phi_W$. A graphon $W$ (or the limit $\phi_W$ it represents) is called \emph{weakly random} if $P(W)=Q(W)$. Equivalently, $W$ is weakly random if $Q(W)=Q(W')$ for every subgraphon $W'$ of $W$. A family of graphs (up to isomorphism) $\cF$ is called \emph{persistent} if there exists a graphon $W$ such that $P(W)=\cF$. The family $\cF$ is called \emph{strongly persistent} if there exists a weakly random $W$ such that $P(W) = \cF$ (which must also equal $Q(W)$); in this case, we also say that $W$ (or rather $\phi_W$) is a \emph{universal weakly random limit of $\cF$}. If $T$ is a universal theory of graphs, then we say that $W$ is a universal weakly random limit of $T$ if it is a universal weakly random limit of $\cM[T]$.
\end{definition} Obviously, both $Q(W)$ and $P(W)$ are closed under induced subgraphs; moreover, $Q(W')\subseteq Q(W)$ whenever $W'$ is a subgraphon of $W$, $P(W)\subseteq Q(W)$ and every strongly persistent family is persistent. \begin{example} The simplest weakly random graphons are of course the two trivial graphons, that is, the clique $W\equiv 1$ and the empty graphon $W\equiv 0$. The next in line are the non-trivial quasirandom graphons $W_p\equiv p$ for some $p\in(0,1)$: this is because, just like the trivial graphons, the quasirandom graphons $W_p$ have the quasirandomness property that the only subgraphon of $W_p$ is $W_p$, up to zero-measure change (for general theories this property is called $\UInduce[1]$ in~\cite{CR20b}). Other examples of weakly random graphons are the recursive blow-up of $C_4$ (see Proposition~\ref{prop:recC4}) and the graphon of agreements of the quasirandom permuton (see Proposition~\ref{prop:agreementsofQRpermuton}). \end{example} The following lemma is a simple but very powerful observation about persistent families. \begin{lemma}\label{lem:PWsubgraphon} Let $W$ be a graphon over a space $\Omega=(X,\cA,\mu)$. Then $P(W) = \bigcap_A Q(W\rest_A)$, where the intersection is over all positive measure sets $A\subseteq X$. \end{lemma} \begin{proof} Since each $W\rest_A$ is a subgraphon of $W$, it is sufficient to show that if $H\in\bigcap_A Q(W\rest_A)$, then $H\in P(W)$. We prove this by the contrapositive: if there exists a subgraphon $W'$ of $W$ such that $H\notin Q(W')$, then by~\cite[Lemma~3.3]{CM22}, we have $\phi_{W'} = \phi_{W\rest_f}$ for some measurable function $f\colon X\to [0,1]$. Let $A\df\{x\in X \mid f(x) > 0\}$. It is easy to see that $Q(W\rest_f)=Q(W\rest_A)$, thus $H\notin\bigcap_A Q(W\rest_A)$. \end{proof} The objective of this section is to prove the following theorem that characterizes (strongly) persistent families in terms of substitutions. \begin{theorem}\label{thm:graphpersistence} The following are equivalent for a family $\cF$ of finite graphs (up to isomorphism) containing at least one graph of size at least $2$. \begin{enumerate} \item The family $\cF$ is strongly persistent.\label{thm:graphpersistence:stronglypersistent} \item The family $\cF$ is persistent.\label{thm:graphpersistence:persistent} \item The family $\cF$ is closed under substitutions and induced subgraphs.\label{thm:graphpersistence:closed} \end{enumerate} \end{theorem} \begin{example} As we will see, both the theory $\TPerfect$ of perfect graphs (Proposition~\ref{prop:perfectgraphtheory}) and the theory $\TPermGraph$ of graphs of agreements of permutations (Proposition~\ref{prop:agreementsofpermutationtheory}) have their corresponding $\cM[T]$ closed under substitutions, hence both these theories have a universal weakly random graphon (in fact, Proposition~\ref{prop:agreementsofQRpermuton} gives an example for the latter theory that is very different from the one produced in the proof of Theorem~\ref{thm:graphpersistence}). On the negative side, the theory of triangle-free graphs does not have $\cM[T]$ closed under substitutions (as $K_2\in\cM[T]$ but $K_3\cong K_2^{v\to K_2}\notin\cM[T]$), so it does not have a universal weakly random graphon. \end{example} We will prove Theorem~\ref{thm:graphpersistence} through a series of lemmas. As we noted before, the implication~\ref{thm:graphpersistence:stronglypersistent}$\implies$\ref{thm:graphpersistence:persistent} is trivial.
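Substitutions also reconnect the present discussion with the family $\cF_{C_4}$ from the introduction: once the edge and the non-edge persist, Lemma~\ref{lem:PW} below forces the persistence of every graph in $S(\{K_0,K_2,\overline{K}_2\})$, and this family is quite rich.

\begin{example} The $4$-cycle is obtained from the edge and the non-edge by two conservative substitutions: first, $K_2^{v\to\overline{K}_2}\cong P_3$ (substituting one endpoint $v$ of the edge $K_2$ by a non-edge yields the path on three vertices, whose middle vertex is the remaining endpoint), and then substituting the middle vertex $u$ of $P_3$ by another non-edge yields $P_3^{u\to\overline{K}_2}\cong C_4$, as the two new vertices are non-adjacent to each other and adjacent to both endpoints of the path. In particular, $C_4\in S(\{K_2,\overline{K}_2\})$ and $C_4$ is not prime. \end{example}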
The implication~\ref{thm:graphpersistence:persistent}$\implies$\ref{thm:graphpersistence:closed} is a corollary of the following lemma. \begin{lemma}\label{lem:PW} If $W$ is a graphon, then $P(W)$ is closed under substitutions and induced subgraphs. \end{lemma} \begin{proof} It is obvious that $K_0\in P(W)$, so by Lemma~\ref{lem:substitutionK0}, it is sufficient to show that $P(W)$ is closed under substitutions. Let $F_1,F_2\in P(W)$ and $v_0\in V(F_1)$ and let us show that if $W'$ is a subgraphon of $W$, then $\tind(F_1^{v_0\to F_2},W') > 0$. Without loss of generality, we suppose $V(F_1)\cap V(F_2) = \varnothing$. Suppose toward a contradiction that $\tind(F_1^{v_0\to F_2},W') = 0$. By possibly applying the Graphon Removal Lemma~\cite[Theorem~1]{Pet13} to $W'$, we may suppose that the set $\Tind(F_1^{v_0\to F_2},W')$ of copies of $F_1^{v_0\to F_2}$ in $W'$ is contained in the diagonal set $\cD_{V(F_1^{v_0\to F_2})}$ (see~\eqref{eq:graphondiagonal}). Since $F_1\in P(W)$, we must have $\tind(F_1,W') > 0$, that is, the set $\Tind(F_1,W')$ has positive measure. For every $(x,y)\in X^{V(F_1)\setminus\{v_0\}}\times [0,1)^{\binom{V(F_1)}{2}}$, let \begin{align*} U_{x,y} & \df \{z\in X^{\{v_0\}} \mid ((x,z),y)\in\Tind(F_1,W')\}. \end{align*} By Fubini's Theorem, there exists $(x,y)\in X^{V(F_1)\setminus\{v_0\}}\times [0,1)^{\binom{V(F_1)}{2}}$ with all $x$ coordinates distinct such that $U_{x,y}$ has positive measure. Since $W'\rest_{U_{x,y}}$ is a subgraphon of $W'$, hence of $W$, we must have $\tind(F_2,W'\rest_{U_{x,y}}) > 0$, which implies that there exists $(z,w)\in X^{V(F_2)}\times[0,1)^{\binom{V(F_2)}{2}}$ with all $z$ coordinates in $U_{x,y}$, distinct from one another and from the coordinates of $x$, such that $(z,w)\in\Tind(F_2,W')$. Thus, the point $(\widehat{x},\widehat{y})\in X^{V(F_1^{v_0\to F_2})}\times [0,1)^{\binom{V(F_1^{v_0\to F_2})}{2}}$ defined by \begin{align*} \widehat{x}_v & \df \begin{dcases*} x_v, & if $v\in V(F_1)\setminus\{v_0\}$,\\ z_v, & if $v\in V(F_2)$, \end{dcases*} & \widehat{y}_A & \df \begin{dcases*} y_A, & if $A\subseteq V(F_1)\setminus\{v_0\}$,\\ w_A, & if $A\subseteq V(F_2)$,\\ y_{(A\cap V(F_1))\cup\{v_0\}}, & otherwise \end{dcases*} \end{align*} is a point in $\Tind(F_1^{v_0\to F_2},W')\setminus\cD_{V(F_1^{v_0\to F_2})}$, a contradiction. \end{proof} To show the final implication~\ref{thm:graphpersistence:closed}$\implies$\ref{thm:graphpersistence:stronglypersistent} of Theorem~\ref{thm:graphpersistence}, we will use the repeating recursive blow-up relative to an infinite sequence of graphs defined below. \begin{definition}\label{def:graphrecursiveblowup} Given a sequence $V = (V_\ell)_{\ell\in\NN}$ of finite sets with $\lvert V_\ell\rvert\geq 2$ for every $\ell\in\NN$, the \emph{Cantor probability space} corresponding to $V$ is the space $\Omega^V = (\prod_{\ell\in\NN} V_\ell, \cA, \nu^V)$, where $\cA$ is the Borel $\sigma$-algebra of the product topology on $\prod_{\ell\in\NN} V_\ell$ and $\nu^V$ is the unique Borel measure such that $\nu^V(K_{\sigma,V}) = \prod_{\ell=0}^{t-1} \lvert V_\ell\rvert^{-1}$ for every $t\in\NN$ and every $\sigma\in\prod_{\ell=0}^{t-1} V_\ell$, where $K_{\sigma,V}$ is the basic clopen set given by \begin{align}\label{eq:KsigmaV} K_{\sigma,V} & \df \left\{\tau\in\prod_{\ell\in\NN} V_\ell \;\middle\vert\; \forall\ell\in\{0,\ldots,t-1\},\tau_\ell = \sigma_\ell \right\}.
\end{align} Let $G = (G_m)_{m\in\NN}$ be a sequence of finite graphs with $\lvert G_m\rvert\geq 2$ for every $m\in\NN$; we let the \emph{recursive blow-up relative to $G$} be the limit $\phi_G\in\HomT{\TGraph}$ defined as follows. We let $V=(V_\ell)_{\ell\in\NN}$ be defined by $V_\ell\df V(G_\ell)$ and define the graphon $W^G$ over the space $\Omega^V$ by \begin{align}\label{eq:WG} W^G(x,y) & \df \begin{dcases*} 1, & if $x\neq y$ and $\{x_\ell,y_\ell\}\in E(G_\ell)$,\\ 0, & otherwise, \end{dcases*} \end{align} where $\ell$ is the first position in which $x$ and $y$ differ. Finally, we define $\phi_G\df\phi_{W^G}\in\HomT{\TGraph}$ (see Example~\ref{ex:recC4} and Figures~\ref{fig:recC4} and~\ref{fig:recCsquares} for examples). We let the \emph{repeating recursive blow-up relative to $G$} be the limit $\phi_G^*\in\HomT{\TGraph}$ defined as follows. For each $\ell\in\NN$, we let \begin{align}\label{eq:mell} m_\ell & \df \max\{m\in\NN \mid 2^m \text{ divides } \ell+1\} \end{align} (that is, $m_\ell$ is the $2$-adic valuation of $\ell+1$, so the sequence $(m_\ell)_{\ell\in\NN}$ starts $0,1,0,2,0,1,0,3,\ldots$), we let $G^*\df(G_{m_\ell})_{\ell\in\NN}$ and we define $\phi_G^*\df\phi_{G^*}$. \end{definition} \begin{remark}\label{rmk:lexicographicproduct} Since $W^G$ is $\{0,1\}$-valued, we can interpret it as a (measurable) graph $H$ with vertex set $\Omega^V$ and the reader familiar with lexicographic products of graphs should note that $H$ is simply the infinite lexicographic product of $(G_m)_{m\in\NN}$. \end{remark} \begin{remark}\label{rmk:mell} The definition of the numbers $m_\ell$ in~\eqref{eq:mell} guarantees a simple but very useful property: for every $m\in\NN$ there exist infinitely many $\ell\in\NN$ with $m_\ell=m$. In fact, since $m_\ell = m$ precisely when $\ell+1$ is an odd multiple of $2^m$, and such numbers occur with period $2^{m+1}$, for every $m\in\NN$ and every $\ell\in\NN$, there exists $\ell'\in\NN$ with $\ell < \ell'\leq \ell + 2^{m+1}$ such that $m_{\ell'} = m$, that is, for every $m\in\NN$, we only need to wait at most $2^{m+1}$ steps to see $m$ in the sequence $(m_\ell)_{\ell\in\NN}$ regardless of where we start. \end{remark} \begin{example}\label{ex:recC4} If the sequence $G$ consists of only one graph $G'$ repeated infinitely many times, then $\phi_G$ is the limit of the usual recursive blow-ups of the single graph $G'$ on progressively more levels. \begin{figure}[htb] \input{recC4} \end{figure} For example, the limit $\phi_{C_4}$ of recursive blow-ups of $C_4$ used in~\cite[Definition~8.5]{CM22} (see Figure~\ref{fig:recC4}) is obtained as $\phi_G$ (or $\phi_G^*$) when $G$ is the sequence that is constant equal to $C_4$. Alternatively, $\phi_{C_4}$ is also obtained as $\phi_{G'}$ for the sequence $G'\df(K_2,\overline{K}_2,K_2,\overline{K}_2,\ldots)$ that infinitely alternates between the edge graph $K_2$ and the non-edge graph $\overline{K}_2$. Also alternatively, $\phi_{C_4}$ is obtained as $\phi_{G''}^*$ for the sequence $G''=(K_2,\overline{K}_2,\overline{K}_2,\ldots)$ whose first element is $K_2$ and all other elements are $\overline{K}_2$ (as $(G'')^* = G'$). \end{example}
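To make the bookkeeping sequence $(m_\ell)_{\ell\in\NN}$ concrete, the following minimal Python sketch (our own illustration, not part of the formal development) computes it and checks the waiting-time property of Remark~\ref{rmk:mell} on a small range.
\begin{verbatim}
def m(ell):
    """2-adic valuation of ell + 1, i.e., the largest m with 2^m | ell + 1."""
    v, n = 0, ell + 1
    while n % 2 == 0:
        n //= 2
        v += 1
    return v

# The "ruler sequence" 0, 1, 0, 2, 0, 1, 0, 3, ...
assert [m(ell) for ell in range(16)] == \
    [0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4]

# Every value mm recurs within at most 2^(mm+1) steps from any start:
for mm in range(4):
    for ell in range(64):
        assert any(m(l) == mm
                   for l in range(ell + 1, ell + 2 ** (mm + 1) + 1))
\end{verbatim}
In particular, every graph $G_m$ of the sequence reappears with bounded gaps in $G^*$, which is the property the proofs below actually use.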
Let us show a simple structural fact about the Cantor probability space $\Omega^V$. \begin{lemma}\label{lem:KsigmaV} Let $V = (V_\ell)_{\ell\in\NN}$ be a sequence of finite sets with $\lvert V_\ell\rvert\geq 2$ for every $\ell\in\NN$ and let $A\subseteq\Omega^V$ be a set with positive measure. Then for every $\epsilon > 0$, there exists $t_0\in\NN$ such that for every $t\geq t_0$, there exists $\sigma\in\prod_{\ell=0}^{t-1} V_\ell$ such that $\nu^V(A\cap K_{\sigma,V})\geq (1-\epsilon)\cdot\nu^V(K_{\sigma,V})$, where \begin{align*} K_{\sigma,V} & \df \left\{\tau\in\prod_{\ell\in\NN} V_\ell \;\middle\vert\; \forall\ell\in\{0,\ldots,t-1\}, \tau_\ell = \sigma_\ell \right\} \end{align*} is the basic clopen set defined in~\eqref{eq:KsigmaV}. \end{lemma} \begin{proof} Let $\cB$ be the Boolean algebra generated by $\cC\df\{K_{\sigma,V} \mid t\in\NN\land\sigma\in\prod_{\ell=0}^{t-1} V_\ell\}$ and note that every set in $\cB$ is a finite union of elements of $\cC$. In fact, since for every $t\in\NN$ and every $\sigma\in\prod_{\ell=0}^{t-1} V_\ell$, the collection $\{K_{(\sigma,v),V} \mid v\in V_t\}$ forms a partition of $K_{\sigma,V}$, it follows that for every $B\in\cB$, there exists $t_B\in\NN$ such that for every $t\geq t_B$, the set $B$ can be written as the \emph{disjoint} union $B = \bigcup_{\sigma\in\Sigma_{B,t}} K_{\sigma,V}$ where \begin{align*} \Sigma_{B,t} & \df \left\{\sigma\in\prod_{\ell=0}^{t-1} V_\ell \;\middle\vert\; K_{\sigma,V}\subseteq B \right\}. \end{align*} Namely, we can take any representation of $B$ as a finite union of elements of $\cC$ and let $t_B$ be the maximum length of a $\sigma$ used in this representation. Since $\cB$ generates the $\sigma$-algebra $\cA$ of $\Omega^V$, the standard approximation property of \Caratheodory's Extension Theorem implies that for every $\delta > 0$, there exists $B\in\cB$ such that $\nu^V(A\symdiff B)\leq\delta$. We claim that by taking $\delta\df\nu^V(A)\cdot\epsilon/(1+\epsilon)$ and $t_0\df t_B$, we get that for every $t\geq t_0$, there must exist $\sigma\in\Sigma_{B,t}$ such that $\nu^V(K_{\sigma,V}\setminus A)\leq\epsilon\cdot\nu^V(K_{\sigma,V})$. Suppose not. Then we have \begin{align*} \delta & \geq \nu^V(A\symdiff B) \geq \sum_{\sigma\in\Sigma_{B,t}} \nu^V(K_{\sigma,V}\setminus A) > \sum_{\sigma\in\Sigma_{B,t}} \epsilon\cdot\nu^V(K_{\sigma,V}) = \epsilon\cdot\nu^V(B) \geq \epsilon(\nu^V(A) - \delta), \end{align*} from which we conclude that \begin{align*} \delta\cdot\frac{1+\epsilon}{\epsilon} > \nu^V(A), \end{align*} contradicting the definition of $\delta$. Finally, from $\nu^V(K_{\sigma,V}\setminus A)\leq\epsilon\cdot\nu^V(K_{\sigma,V})$, we conclude that $\nu^V(A\cap K_{\sigma,V})\geq (1-\epsilon)\cdot\nu^V(K_{\sigma,V})$ as desired. \end{proof} Our next objective is to show that $P(\phi_G^*)$ is precisely $S(\{K_0\}\cup\{G_m \mid m\in\NN\})$. We start by showing the simpler fact $\{G_m \mid m\in\NN\}\subseteq Q(\phi_G)\subseteq S(\{K_0\}\cup\{G_m\mid m\in\NN\})$ in Lemma~\ref{lem:QphiG} below. Clearly this implies the same statement for $\phi_G^*$. \begin{lemma}\label{lem:QphiG} Let $G = (G_m)_{m\in\NN}$ be a sequence of finite graphs with $\lvert G_m\rvert\geq 2$ for every $m\in\NN$. Then $\{G_m\mid m\in\NN\}\subseteq Q(\phi_G)\subseteq S(\{K_0\}\cup\{G_m\mid m\in\NN\})$. \end{lemma} \begin{proof} To see that every $G_m$ has positive density in $\phi_G$, simply note that if we take an arbitrary $\sigma\in\prod_{\ell=0}^{m-1} V(G_\ell)$ and pick $x_v\in K_{(\sigma,v),V}$ for every $v\in V(G_m)$, then $x$ is an embedding of $G_m$ in $W^G$ and thus \begin{align*} \tind(G_m,W^G) & \geq \prod_{v\in V(G_m)} \nu^V(K_{(\sigma,v),V}) = \left(\prod_{\ell=0}^{m} \frac{1}{\lvert G_\ell\rvert}\right)^{\lvert G_m\rvert} > 0, \end{align*} hence $\{G_m\mid m\in\NN\}\subseteq Q(\phi_G)$.
\medskip To show that $Q(\phi_G)\subseteq S(\{K_0\}\cup\{G_m\mid m\in\NN\})$, we will prove a slightly stronger result: let us show that if $H$ is a finite graph such that $\Tind(H,W^G)\not\subseteq\cD_{V(H)}$, then $H\in S(\{K_0\}\cup\{G_m\mid m\in\NN\})$. The proof is by induction on $\lvert H\rvert$. The first two base cases are when $\lvert H\rvert\leq 1$ (i.e., $H\in\{K_0,K_1\}$), in which case trivially $H\in S(\{K_0\}\cup\{G_m\mid m\in\NN\})$. The next base cases are when $\lvert H\rvert\geq 2$ and $H$ is prime. In this case, we show that $H$ must be an induced subgraph of $G_m$ for some $m\in\NN$. Recall that since $W^G$ is $\{0,1\}$-valued, the set $\Tind(H,W^G)\setminus\cD_{V(H)}$ is alternatively described as the set of pairs $(x,y)\in(\Omega^V)^{V(H)}\times [0,1)^{\binom{V(H)}{2}}$ such that $x$ is an embedding of $H$ in $W^G$. Fix then one such point $(x,y)$ and let $t\in\NN$ be the length of the longest string $\sigma\in\prod_{\ell=0}^{t-1} V(G_\ell)$ that is common to all coordinates of $x$, that is, for every $v\in V(H)$, we have $x_v\rest_{\{0,\ldots,t-1\}} = \sigma$ and there exist $v,w\in V(H)$ such that $(x_v)_t \neq (x_w)_t$. For each $i\in V(G_t)$, let $U_i\df\{v\in V(H) \mid (x_v)_t = i\}$ and let $H_i\df H\rest_{U_i}$. Let also $I\df\{i\in V(G_t)\mid U_i\neq\varnothing\}$. The structure of $W^G$ implies that $H$ is obtained from $G_t\rest_I$ by substituting each $i\in I$ by $H_i$. Since $\lvert H_i\rvert < \lvert H\rvert$ for every $i\in I$ (note that $\lvert I\rvert\geq 2$ by the maximality of $\sigma$) and $H$ is prime, it follows that $\lvert G_t\rest_I\rvert = \lvert H\rvert$ and $\lvert U_i\rvert = 1$ for every $i\in I$, that is, the unique function $\beta\colon V(H)\rightarrowtail V(G_t)$ such that $v\in U_{\beta(v)}$ is an embedding of $H$ in $G_t$. Thus $H$ is an induced subgraph of $G_t$. \medskip We now consider the inductive step when $H$ is not prime. Then $H$ is of the form $F_1^{v\to F_2}$ for some graphs $F_1,F_2$ and $v\in V(F_1)$ with $\lvert F_1\rvert,\lvert F_2\rvert < \lvert H\rvert$. By inductive hypothesis, we have $F_1,F_2\in S(\{K_0\}\cup\{G_m\mid m\in\NN\})$ and since this set is closed under substitutions we get $H\in S(\{K_0\}\cup\{G_m\mid m\in\NN\})$. \end{proof} \begin{lemma}\label{lem:PphiGQphiG} Let $G = (G_m)_{m\in\NN}$ be a sequence of finite graphs with $\lvert G_m\rvert\geq 2$ for every $m\in\NN$. Then $P(\phi_G^*) = Q(\phi_G^*) = S(\{K_0\}\cup\{G_m\mid m\in\NN\})$. \end{lemma} \begin{proof} By Lemma~\ref{lem:QphiG}, we know that $Q(\phi_G^*)\subseteq S(\{K_0\}\cup\{G_m\mid m\in\NN\})$ and since $P(\phi_G^*)\subseteq Q(\phi_G^*)$, it is sufficient to prove that $S(\{K_0\}\cup\{G_m\mid m\in\NN\})\subseteq P(\phi_G^*)$. Let $H\in S(\{K_0\}\cup\{G_m\mid m\in\NN\})$ and let us show that $H\in P(\phi_G^*)$ by induction on $\lvert H\rvert$. The first base cases are when $\lvert H\rvert\leq 1$, in which case trivially $H\in P(\phi_G^*)$. The next base case is when $H$ is a prime graph. By Lemma~\ref{lem:primesubstructure}, we know there exists $\widehat{m}\in\NN$ such that $H$ is an induced subgraph of $G_{\widehat{m}}$, so it is sufficient to show that $G_{\widehat{m}}\in P(\phi_G^*)$. In turn, by Lemma~\ref{lem:PWsubgraphon}, it is sufficient to show that for every positive measure set $A\subseteq\Omega^V$, the graph $G_{\widehat{m}}$ has positive density in the subgraphon $W^{G^*}\rest_A$. Let $\epsilon$ be any positive number with $\epsilon < 1/\lvert G_{\widehat{m}}\rvert$.
By Lemma~\ref{lem:KsigmaV}, there exists $t_0\in\NN$ such that for every $t\geq t_0$, there exists $\sigma^t\in\prod_{\ell=0}^{t-1} V(G_{m_\ell})$ such that $\nu^V(A\cap K_{\sigma^t,V})\geq(1-\epsilon)\cdot\nu^V(K_{\sigma^t,V})$. Let $t\in\NN$ be such that $t_0 < t\leq t_0 + 2^{\widehat{m}+1}$ and $m_t=\widehat{m}$ as provided by Remark~\ref{rmk:mell}. Let also \begin{align*} T & \df \left\{\tau\in\prod_{\ell=0}^t V(G_{m_\ell}) \;\middle\vert\; \tau\rest_{\{0,\ldots,t-1\}} = \sigma^t \right\}. \end{align*} Since $\{K_{\tau,V} \mid \tau\in T\}$ partitions $K_{\sigma^t,V}$ into $\lvert T\rvert = \lvert G_{m_t}\rvert = \lvert G_{\widehat{m}}\rvert$ parts of equal measure, it follows that for every $\tau\in T$, we have \begin{align*} \nu^V(A\cap K_{\tau,V}) & \geq \left( 1 - \epsilon - \frac{\lvert G_{\widehat{m}}\rvert - 1}{\lvert G_{\widehat{m}}\rvert} \right) \nu^V(K_{\sigma^t,V}) > 0. \end{align*} Now, the definition of $W^{G^*}$ implies that if we pick $x_v\in K_{(\sigma^t,v),V}$ for each $v\in V(G_{\widehat{m}})$ (and pick any $y\in [0,1)^{\binom{V(G_{\widehat{m}})}{2}}$), then we get a copy of $G_{\widehat{m}}$ in $W^{G^*}$ and since for every $v\in V(G_{\widehat{m}})$, we have $\nu^V(A\cap K_{(\sigma^t,v),V}) > 0$ (as $(\sigma^t,v)\in T$), it follows that $G_{\widehat{m}}$ has positive density in $W^{G^*}\rest_A$. \medskip For the inductive step when $H$ is not prime, we must have $H = F_1^{v\to F_2}$ for some graphs $F_1,F_2$ and some $v\in V(F_1)$ with $\lvert F_1\rvert,\lvert F_2\rvert < \lvert H\rvert$. Since $F_1$ and $F_2$ are in $P(\phi_G^*)$ by inductive hypothesis and $P(\phi_G^*)$ is closed under substitutions by Lemma~\ref{lem:PW}, it follows that $H\in P(\phi_G^*)$. \end{proof} We can finally show Theorem~\ref{thm:graphpersistence} that says that a family $\cF$ of graphs with at least one graph of size at least $2$ is strongly persistent if and only if it is persistent if and only if it is closed under substitutions and under induced subgraphs. \begin{proofof}{Theorem~\ref{thm:graphpersistence}} The implication~\ref{thm:graphpersistence:stronglypersistent}$\implies$\ref{thm:graphpersistence:persistent} is trivial: every strongly persistent family is obviously persistent. \medskip For the implication~\ref{thm:graphpersistence:persistent}$\implies$\ref{thm:graphpersistence:closed}, if $\cF = P(W)$ for some graphon $W$, then Lemma~\ref{lem:PW} implies that it is closed under substitutions and induced subgraphs. \medskip For the final implication~\ref{thm:graphpersistence:closed}$\implies$\ref{thm:graphpersistence:stronglypersistent}, by Lemma~\ref{lem:strongclosuresubst}, we have $\cF = S(\cP)$, where $\cP$ is the set of graphs in $\cF$ that are prime. Since $\cF$ contains at least one graph of size at least $2$, $\cP$ must also contain one such graph (since $S(\{K_0,K_1\}) = \{K_0,K_1\}$). Let $G = (G_m)_{m\in\NN}$ be an enumeration of all graphs in $\cP$ of size at least $2$ (potentially with repetitions if $\cP$ is finite). Note that since $\cF = S(\cP)$ is closed under induced subgraphs, it follows that $\cF = S(\{K_0\}\cup\{G_m \mid m\in\NN\})$. By Lemma~\ref{lem:PphiGQphiG}, the repeating recursive blow-up $\phi_G^*$ relative to $G$ satisfies $P(\phi_G^*)=Q(\phi_G^*)=\cF$, hence $\cF$ is strongly persistent. \end{proofof} \section{Weak randomness in graphons} \label{sec:WR:graph} Recall that Theorem~\ref{thm:graphpersistence} characterizes all universal theories of graphs that contain a universal weakly random graphon.
In this section, we study a related natural question (see Definition~\ref{def:WR} below): when does every graphon of a universal theory of graphs contain some weakly random subgraphon? As mentioned in the introduction, this property is a generalization of $\AEHP$ (see Definition~\ref{def:AEHP}). We also remind the reader that in this section we drop the qualifiers ``weakly'' and ``strongly'' from ``closed under substitutions'' as they are superfluous for graphs (see Remark~\ref{rmk:substarity2}). \begin{definition}\label{def:WR} We say that a universal theory $T$ of graphs has the \emph{weakly random \Erdos--Hajnal property} (abbreviated $T\in\WR$) if every limit $W$ of $T$ contains a weakly random subgraphon. \end{definition} \begin{remark} Since trivial graphons are weakly random, we obviously have $\AEHP\subseteq\WR$. Furthermore, even though it is also natural to ask which theories have \emph{some} weakly random limit, it is clear that these are precisely the non-degenerate universal theories. This is because Ramsey's Theorem implies that any non-degenerate theory $T$ of graphs must either contain arbitrarily large cliques, in which case $W\equiv 1$ is a limit of $T$, or contain arbitrarily large anti-cliques, in which case $W\equiv 0$ is a limit of $T$. \end{remark} Similarly to Lemma~\ref{lem:PWsubgraphon}, the following lemma is a simple but very powerful observation about weakly random subgraphons. \begin{lemma}\label{lem:weaklyrandomsubgraphon} A graphon $W$ over a space $\Omega=(X,\cA,\mu)$ contains a weakly random subgraphon $W'$ if and only if there exists a positive measure set $A\subseteq X$ such that $W\rest_A$ is weakly random. \end{lemma} \begin{proof} The backward implication follows because $W\rest_A$ is a subgraphon of $W$. \medskip For the forward implication, we know that $\phi_{W'} = \phi_{W\rest_f}$ for some measurable function $f\colon X\to [0,1]$. Let $A\df\{x\in X \mid f(x) > 0\}$. We claim that $W\rest_A$ is weakly random. Indeed, this follows from Lemma~\ref{lem:PWsubgraphon} together with the fact that for every $B\subseteq X$ of positive $\mu_f$-measure, we have $Q(W\rest_A\rest_B) = Q(W\rest_f\rest_B)$. \end{proof} Our next main objective is to characterize the class $\WR$ under the assumption that the set of graphs of the theory is closed under substitutions. \begin{theorem}\label{thm:graphs:WR} Let $T$ be a universal theory of graphs such that $\cM[T]$ is closed under substitutions. Then $T\in\WR$ if and only if $\cM[T]$ is primally almost finite. \end{theorem} Before we prove Theorem~\ref{thm:graphs:WR} above, let us observe a simple corollary of it. \begin{corollary} There exists a universal theory of graphs $T$ with $\cM[T]\subsetneq\cM[\TGraph]$ and $T\notin\WR$. \end{corollary} \begin{proof} The family $\{C_n \mid n\geq 5\}$ of cycles of length at least $5$ is a family of prime graphs that is not almost finite (see Example~\ref{ex:substitutiongeneration}). Let $\cF=S(\{K_0\}\cup\{C_n \mid n\geq 5\})$; since $\cF$ is closed under substitutions and induced subgraphs but is not primally almost finite, the universal theory $T\df\Th(\cF)$ with $\cM[\Th(\cF)]=\cF$ satisfies $T\notin\WR$ by Theorem~\ref{thm:graphs:WR}. It is also easy to see that $\cM[T]\subsetneq\cM[\TGraph]$, as for example the prime graph $G_6$ of Example~\ref{ex:substitutiongeneration} is not in $\cM[T]$ by Lemma~\ref{lem:primesubstructure}. \end{proof}
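The corollary above hinges on having an infinite family of prime graphs. As a purely illustrative aside (our own sketch, exponential-time and only meant for tiny graphs; recall that a prime graph is one that cannot be written as a non-trivial substitution, or equivalently, in standard modular-decomposition terms, one with no module $U$ with $1<\lvert U\rvert<\lvert V(G)\rvert$), primality of small graphs such as the cycles used above can be checked by brute force in Python:
\begin{verbatim}
from itertools import combinations

def is_module(adj, U):
    """U is a module: every vertex outside U sees all of U or none of it."""
    return all(len(adj[w] & U) in (0, len(U)) for w in set(adj) - U)

def is_prime(adj):
    V = set(adj)
    return not any(is_module(adj, set(U))
                   for k in range(2, len(V))
                   for U in combinations(V, k))

def cycle(n):  # C_n on vertices 0, ..., n-1
    return {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}

assert all(is_prime(cycle(n)) for n in range(5, 10))  # C_5, ..., C_9
assert not is_prime(cycle(4))  # C_4 has the module {0, 2}
\end{verbatim}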
We start by proving the easier direction of Theorem~\ref{thm:graphs:WR} in the lemma below. In fact, for this direction, we do not even need $\cM[T]$ to be closed under substitutions. \begin{lemma}\label{lem:graphs:primallyalmostfinite->WR} If $T$ is a universal theory of graphs such that $\cM[T]$ is primally almost finite, then $T\in\WR$. \end{lemma} \begin{proof} We prove the lemma by its contra-positive. Suppose $T\notin\WR$ and let us show that the set $\cP$ of graphs of $T$ that are prime is not almost finite. By Lemma~\ref{lem:almostfinite}, it is sufficient to construct a sequence $(R_n)_{n\in\NN}$ in $\cP$ such that for every $n,m\in\NN$, if $n < m$, then $R_n$ is not an induced subgraph of $R_m$. Since $T\notin\WR$, there must exist a limit $\phi\in\HomT{T}$ of $T$ that does not contain any weakly random sub-object. We now construct sequences $(\phi_n)_{n\in\NN}$ of sub-objects of $\phi$ and $(R_n)_{n\in\NN}$ of prime graphs in $\cM[T]$ satisfying: \begin{enumerate} \item For every $n\in\NN$, $\phi_{n+1}$ is a sub-object of $\phi_n$. \item For every $n\in\NN$, $R_n\in Q(\phi_n)\setminus Q(\phi_{n+1})$. \end{enumerate} We construct these sequences inductively as follows. \begin{enumerate}[label={\arabic*.}] \item Set $\phi_0\df\phi$. \item Given $\phi_n\in\HomT{T}$, since $\phi_n$ is a sub-object of $\phi$, we know that $\phi_n$ is not weakly random, so there exists $G_n\in Q(\phi_n)\setminus P(\phi_n)$. Let $\cP_n$ be the set of induced subgraphs of $G_n$ that are prime. Since, by Lemma~\ref{lem:PW}, $P(\phi_n)$ is closed under substitutions and $G_n\in S(\cP_n)$, there exists $R_n\in\cP_n\setminus P(\phi_n)$ and since $Q(\phi_n)$ is closed under induced subgraphs, we get $R_n\in Q(\phi_n)\setminus P(\phi_n)$. From the definition of $P(\phi_n)$, it follows that there exists a sub-object $\phi_{n+1}$ of $\phi_n$ (hence $\phi_{n+1}$ is also a sub-object of $\phi$) such that $R_n\in Q(\phi_n)\setminus Q(\phi_{n+1})$. \end{enumerate} Let now $n,m\in\NN$ be such that $n < m$. By induction, we know that $\phi_m$ is a sub-object of $\phi_{n+1}$, so $Q(\phi_m)\subseteq Q(\phi_{n+1})$, which in turn implies that $R_n\in Q(\phi_n)\setminus Q(\phi_m)$. Since $R_m\in Q(\phi_m)$ and $Q(\phi_m)$ is closed under induced subgraphs, it follows that $R_n$ is not an induced subgraph of $R_m$, concluding the proof. \end{proof} For the other side of the characterization of $\WR$, the proposition below shows that under appropriate hypotheses, the recursive blow-up $\phi_R$ of Definition~\ref{def:graphrecursiveblowup} is a graphon without any weakly random subgraphon. \begin{proposition}\label{prop:noweaklyrandomsubgraphon} Let $(R_n)_{n\in\NN}$ be a sequence of prime graphs of size at least $2$ such that for each $n\in\NN$, there exist at most finitely many $m\in\NN$ such that $R_n$ is an induced subgraph of $R_m$. Suppose also that $\prod_{n\in\NN} (1 - 1/\lvert R_n\rvert) = 0$. Then $\phi_R$ does not contain any weakly random sub-object. \end{proposition} Let us first give some intuition on the proof of Proposition~\ref{prop:noweaklyrandomsubgraphon}. First, note that since all $R_n$ are prime graphs and $\phi_R$ is obtained via a limit of recursive blow-ups, which themselves are obtained from the $R_n$ via substitutions, it follows that copies of $R_n$ in $\phi_R$ need to correspond to copies of $R_n$ inside some $R_m$. The condition that each $R_n$ is contained in at most finitely many $R_m$ then ensures that the restrictions of $W^R$ to basic clopen sets $K_{\sigma,V}$ (see~\eqref{eq:KsigmaV}) with $\lvert\sigma\rvert$ large enough do not contain any copies of $R_n$.
Thus, for every positive measure set $A$, there is some $K_{\sigma,V}$ such that $R_n\notin Q(W^R\rest_{A\cap K_{\sigma,V}})$ and $A\cap K_{\sigma,V}$ has positive measure. However, to use this fact to show that $\phi_R$ does not contain any weakly random sub-object, we need to also ensure that every positive measure set $A$ contains at least one $R_n$ (with $n$ depending on $A$), so that we conclude that $Q(W^R\rest_A)\neq P(W^R\rest_A)$ since the above argument gives $R_n\in Q(W^R\rest_A)\setminus Q(W^R\rest_{A\cap K_{\sigma,V}})$. This is where the condition $\prod_{n\in\NN} (1 - 1/\lvert R_n\rvert) = 0$ comes in: we will show that any set $A$ avoiding all $R_n$ has measure at most $\prod_{n\in\NN} (1 - 1/\lvert R_n\rvert)$. \begin{proof} For every $t\in\NN$, let $R^t$ be the shifted sequence $(R_{n+t})_{n\in\NN}$. Also, for each $t\in\NN$, let $m_t$ be the maximum $m\in\NN$ such that $R_t$ is an induced subgraph of $R_m$. Note that for every $t\in\NN$, by Lemmas~\ref{lem:substitutionsubstructure} and~\ref{lem:QphiG}, we have $R_t\in Q(\phi_{R^t})\setminus Q(\phi_{R^{m_t+1}})$ since $R_t$ is prime and is not an induced subgraph of any $R_{t'}$ with $t' > m_t$\footnote{In fact, since $\phi_{R^{m_t+1}}$ is a sub-object of $\phi_{R^t}$, we have $Q(\phi_{R^{m_t+1}})\subsetneq Q(\phi_{R^t})$.}. To show that $\phi_R$ does not contain a weakly random sub-object, by Lemma~\ref{lem:weaklyrandomsubgraphon}, it is sufficient to show that for every positive measure set $A\subseteq\Omega^V$, the subgraphon $W^R\rest_A$ is not weakly random. We claim that there exists $t\in\NN$ such that $R_t\in Q(W^R\rest_A)$. Suppose not, and let $n\in\NN$ be large enough so that $\prod_{\ell=0}^{n-1} (1-1/\lvert R_\ell\rvert) < \nu^V(A)$ (recall from Definition~\ref{def:graphrecursiveblowup} that $\nu^V$ is the measure on the underlying space of $W^R$) and consider the set \begin{align*} \Sigma & \df \left\{\sigma\in\prod_{\ell=0}^{n-1} V(R_\ell) \;\middle\vert\; \nu^V(A\cap K_{\sigma,V}) > 0 \right\}. \end{align*} We claim that for every $m\in\{0,\ldots,n-1\}$ and every $\tau\in\prod_{\ell=0}^{m-1} V(R_\ell)$, there exists $v_\tau\in V(R_m)$ such that $(\tau,v_\tau)$ is not a prefix of any element of $\Sigma$. Indeed, otherwise, since mapping each $v\in V(R_m)$ to an element of $K_{(\tau,v),V}$ gives an embedding of $R_m$ in $W^R$, we would get \begin{align*} \tind(R_m,W^R\rest_A) & \geq \prod_{v\in V(R_m)} \frac{\nu^V(A\cap K_{(\tau,v),V})}{\nu^V(A)} > 0. \end{align*} Thus, the existence of $v_\tau$ is proved. Let then $\Sigma^*$ be the set of $\sigma\in\prod_{\ell=0}^{n-1} V(R_\ell)$ such that for every $m\in\{0,\ldots,n-1\}$, we have $v_{\sigma\rest_{\{0,\ldots,m-1\}}}\neq\sigma_m$. By the choice of the vertices $v_\tau$, we have $\Sigma\subseteq\Sigma^*$ (as every prefix of an element of $\Sigma$ is a prefix of an element of $\Sigma$). Now it is easy to see that \begin{align*} \nu^V(A) & = \sum_{\sigma\in\Sigma} \nu^V(A\cap K_{\sigma,V}) \leq \sum_{\sigma\in\Sigma^*} \nu^V(K_{\sigma,V}) = \prod_{\ell=0}^{n-1}\left(1 - \frac{1}{\lvert R_\ell\rvert}\right) < \nu^V(A), \end{align*} a contradiction. This concludes the proof that there exists $t\in\NN$ such that $R_t\in Q(W^R\rest_A)$. We will now show that $W^R\rest_A$ is not weakly random by showing that there exists a sub-object of $W^R\rest_A$ in which $R_t$ has density zero (so that we conclude $P(W^R\rest_A)\subsetneq Q(W^R\rest_A)$ as $R_t$ is in the latter set but not in the former set).
Since $\{K_{\sigma,V} \mid \sigma\in\prod_{\ell=0}^{m_t} V(R_\ell)\}$ partitions the space $\Omega^V$, there must exist $\sigma\in\prod_{\ell=0}^{m_t} V(R_\ell)$ such that $\nu^V(A\cap K_{\sigma,V}) > 0$. Note that $\phi_{W^R\rest_{K_{\sigma,V}}} = \phi_{R^{m_t+1}}$, and since $Q(W^R\rest_{A\cap K_{\sigma,V}})\subseteq Q(W^R\rest_{K_{\sigma,V}})=Q(\phi_{R^{m_t+1}})$, it follows that $R_t\notin Q(W^R\rest_{A\cap K_{\sigma,V}})$ as desired. Therefore $\phi_R$ does not contain any weakly random sub-object. \end{proof} \begin{remark}\label{rmk:recCsquares} The product condition $\prod_{n\in\NN} (1 - 1/\lvert R_n\rvert) = 0$ in Proposition~\ref{prop:noweaklyrandomsubgraphon} may seem very unnatural at first. However, it is easy to see that it is necessary for $\phi_R$ to not contain any weakly random sub-object: for example, consider the limit $\phi_R$ for the sequence $R\df (C_{n^2+5})_{n\in\NN}$ (see Figure~\ref{fig:recCsquares}) and, fixing $v_n\in V(C_{n^2+5})$ for each $n\in\NN$, let \begin{align*} A & \df \prod_{n\in\NN} (V(C_{n^2+5})\setminus\{v_n\}). \end{align*} \begin{figure}[htbp] \input{recCsquares} \end{figure} Note that $\nu^V(A) = \prod_{n\in\NN}(1 - 1/\lvert C_{n^2+5}\rvert) > 0$. On the other hand, since $C_{n^2+5}-v_n\cong P_{n^2+4}$ is the path with $n^2+4$ vertices, it is obvious that $W^R\rest_A$ represents the same limit as $W^{R'}$ for the sequence $R'=(P_{n^2+4})_{n\in\NN}$. In turn, by Lemma~\ref{lem:QphiG}, we have \begin{align*} Q(W^{R'})\subseteq S(\{K_0\}\cup\{P_{n^2+4} \mid n\in\NN\}) = S(\{P_n\mid n\in\NN\}) \end{align*} and since the family above is primally almost finite, Lemma~\ref{lem:graphs:primallyalmostfinite->WR} implies that $W^{R'}$ contains a weakly random subgraphon, hence so does $W^R$. In fact, with a bit more effort, one can also show that $W^{R'}$ itself is already weakly random, but we omit this proof. We will also see in Proposition~\ref{prop:WRproductcond} that not only does $\phi_R$ contain a weakly random sub-object, but we also have $\Th(\phi_R)\in\WR$. \end{remark} We can now prove Theorem~\ref{thm:graphs:WR} that says that a universal theory of graphs $T$ with $\cM[T]$ closed under substitutions is in $\WR$ if and only if $\cM[T]$ is primally almost finite. \begin{proofof}{Theorem~\ref{thm:graphs:WR}} For the backward direction, if $\cM[T]$ is primally almost finite, then by Lemma~\ref{lem:graphs:primallyalmostfinite->WR}, we have $T\in\WR$. \medskip We prove the forward direction by the contra-positive: suppose $\cM[T]$ is not primally almost finite, so there exists an infinite antichain $\{G_n \mid n\in\NN\}$ of prime graphs of $T$, and without loss of generality, suppose every $G_n$ has size at least $2$. For each $n\in\NN$, let $r_n\in\NN_+$ be large enough so that $(1 - 1/\lvert G_n\rvert)^{r_n} \leq 1/2$ and for each $\ell\in\NN$, let $R_\ell\df G_n$ for the unique $n\in\NN$ such that $\sum_{m=0}^{n-1} r_m\leq\ell < \sum_{m=0}^n r_m$. Clearly, for each $\ell\in\NN$, if $R_\ell = G_n$, then there exist exactly $r_n$ values of $t\in\NN$ such that $R_\ell$ is an induced subgraph of $R_t$; in particular, each $R_\ell$ is an induced subgraph of only finitely many $R_t$. On the other hand, we have \begin{align*} \prod_{\ell\in\NN} \left(1 - \frac{1}{\lvert R_\ell\rvert}\right) & = \prod_{n\in\NN} \left(1 - \frac{1}{\lvert G_n\rvert}\right)^{r_n} \leq \prod_{n\in\NN} \frac{1}{2} = 0.
\end{align*} By Proposition~\ref{prop:noweaklyrandomsubgraphon}, we know that $\phi_R$ does not contain any weakly random sub-object and by Lemma~\ref{lem:QphiG}, we know that $Q(\phi_R)\subseteq S(\{R_\ell \mid \ell\in\NN\})\subseteq\cM[T]$, so $\phi_R$ is a limit of $T$ without any weakly random sub-object. \end{proofof} We conclude this section with some natural examples of universal theories in $\WR$ and not in $\WR$. We start by showing that the universal theory of induced subgraphs of recursive blow-ups of $C_4$ studied in~\cite[\S 8]{CM22} (see Example~\ref{ex:recC4} and Figure~\ref{fig:recC4}) is the simplest example in $\WR\setminus\AEHP$. \begin{proposition}\label{prop:recC4} The limit recursive blow-up $\phi_{C_4}$ of $C_4$ is weakly random. In particular, the theory $T$ of induced subgraphs of the recursive blow-ups of $C_4$ satisfies $T\in\WR\setminus\AEHP$. \end{proposition} \begin{proof} Recall from Example~\ref{ex:recC4} that the limit $\phi_{C_4}$ of recursive blow-ups of $C_4$ can be viewed as the repeating recursive blow-up $\phi_{G''}^*$ for the sequence $G''=(K_2,\overline{K}_2,\overline{K}_2,\ldots)$ whose first element is $K_2$ and all other elements are $\overline{K}_2$. There are two ways of seeing that $\phi_{C_4}$ is weakly random. The first is using Lemma~\ref{lem:PphiGQphiG} to conclude that $P(\phi_{C_4})=Q(\phi_{C_4}) = S(\{K_0,K_2,\overline{K}_2\})$. Alternatively, the result follows directly from the results of~\cite{CM22} and Lemma~\ref{lem:PW}: by~\cite[Lemma~8.7]{CM22}, we know that $Q(\phi_{C_4})\subseteq S(\{K_0,K_2,\overline{K}_2\})$, so Lemma~\ref{lem:PW} implies that $P(\phi_{C_4})$ can only be one of $S(\{K_0,K_2\})$, $S(\{K_0,\overline{K}_2\})$ or $S(\{K_0,K_2,\overline{K}_2\})$ and since, by~\cite[Lemma~8.8]{CM22}, $\phi_{C_4}$ does not contain trivial subgraphons, the first two cases are ruled out, so $P(\phi_{C_4})=Q(\phi_{C_4})=S(\{K_0,K_2,\overline{K}_2\})$. \medskip Since the family of induced subgraphs of recursive blow-ups of $C_4$ is precisely the family $S(\{K_0,K_2,\overline{K}_2\})$, which is primally finite, the fact that $T\in\WR$ follows from Theorem~\ref{thm:graphs:WR}. On the other hand, $\phi_{C_4}$ does not contain trivial subgraphons (this follows directly from~\cite[Lemma~8.8]{CM22} or alternatively from the fact that a trivial subgraphon $W$ must have $Q(W)$ either equal to $S(\{K_0,K_2\})$ or $S(\{K_0,\overline{K}_2\})$), hence $T\notin\AEHP$. \end{proof} \begin{proposition}\label{prop:perfectgraphtheory} The theory $\TPerfect$ of perfect graphs is not in $\WR$. Furthermore, the set $\cM[\TPerfect]$ is closed under substitutions. \end{proposition} \begin{proof} We first show that $\cM[\TPerfect]$ is closed under substitutions. By the Strong Perfect Graph Theorem~\cite{CRST06}, we know that a graph $G$ is perfect if and only if neither $G$ nor its complement $\overline{G}$ contains an induced odd-cycle of length at least $5$. Let us show that if $F_1,F_2$ are perfect graphs and $v\in V(F_1)$, then $F_1^{v\to F_2}$ is also a perfect graph. Since $\overline{F_1^{v\to F_2}}\cong(\overline{F_1})^{v\to\overline{F_2}}$, it is sufficient to show that $F_1^{v\to F_2}$ does not contain any induced odd-cycles of length at least $5$. Without loss of generality, let us suppose $V(F_1)\cap V(F_2)=\varnothing$. Suppose toward a contradiction that $v_1,\ldots,v_{2\ell+1}$ forms an induced odd-cycle of $F_1^{v\to F_2}$ with $\ell\geq 2$. Since both $F_1$ and $F_2$ are perfect, this odd-cycle must contain vertices of both $V(F_1)\setminus\{v\}$ and $V(F_2)$; moreover, it must contain at least two vertices of $V(F_2)$, since an induced odd-cycle meeting $V(F_2)$ in a unique vertex would yield an induced odd-cycle of $F_1$ upon replacing that vertex with $v$.
Without loss of generality, suppose $v_i\in V(F_2)$ for every $i\in[k]$ for some $k\in [2\ell+1]$ and $v_{k+1}\in V(F_1)$. Since $v_{k+1}\in V(F_1)$ is adjacent to $v_k\in V(F_2)$, it follows from the structure of $F_1^{v\to F_2}$ that $v_{k+1}$ is adjacent to every vertex of the cycle that lies in $V(F_2)$. But the cycle is induced, so $v_{k+1}$ is adjacent to exactly two of its vertices, namely $v_k$ and $v_{k+2}$; hence the vertices of the cycle in $V(F_2)$ are precisely $v_k$ and $v_{k+2}$. Since $v_1,\ldots,v_k\in V(F_2)$, this forces $k\leq 2$. If $k=2$, then $v_1\in\{v_2,v_4\}$, which is impossible as the cycle has length at least $5$. If $k=1$, then the vertices of the cycle in $V(F_2)$ are $v_1$ and $v_3$, so $v_4\in V(F_1)$ is adjacent to $v_3$, hence also to $v_1$; but $v_4$ and $v_1$ are not consecutive on the cycle (again as the cycle has length at least $5$), contradicting that the cycle is induced. Therefore, $\cM[\TPerfect]$ is closed under substitutions. \medskip By Theorem~\ref{thm:graphs:WR}, to show that $\TPerfect\notin\WR$, it is sufficient to show that $\cM[\TPerfect]$ is not primally almost finite. But recall that the family of graphs $\{G_n \mid n\geq 6\}$ of Example~\ref{ex:substitutiongeneration} is a family of prime graphs that is not almost finite and since these graphs are bipartite, they are also perfect. \end{proof} Finally, we consider the theory $\TPermGraph\df I(\TPerm)$ of graphs of agreements of permutations, where $I\colon\TGraph\leadsto\TPerm$ is given by \begin{align*} I(E)(x,y) & \df (x\neq y\land (x\prec_1 y\tot x\prec_2 y)). \end{align*} The next proposition provides a natural universal weakly random limit of $\TPermGraph$ as the graphon of agreements of the quasirandom permuton (see Figure~\ref{fig:agreementsgraphon}). However, we defer its proof to Section~\ref{sec:pers:univ} as it will follow as an easy consequence of naturality of weak randomness (Proposition~\ref{prop:naturality}\ref{prop:naturality:WR}) and the fact that the quasirandom permuton is a universal weakly random limit of $\TPerm$ (Proposition~\ref{prop:QRpermuton}). \begin{proposition}\label{prop:agreementsofQRpermuton} The graphon $W$ over $[0,1]^2$ of agreements of the quasirandom permuton given by \begin{align*} W(x,y) & \df \One[\pi_1(x) < \pi_1(y) \tot \pi_2(x) < \pi_2(y)], \end{align*} where $\pi_i\colon[0,1]^2\to[0,1]$ is the projection onto the $i$th coordinate, is a universal weakly random limit of $\TPermGraph$. \end{proposition} \begin{figure}[htb] \input{agreementsgraphon} \end{figure} \begin{proposition}\label{prop:agreementsofpermutationtheory} The theory $\TPermGraph$ of graphs of agreements of permutations is not in $\WR$. Furthermore, $\cM[\TPermGraph]$ is closed under substitutions. \end{proposition} \begin{proof} First let us prove that $\cM[\TPermGraph]$ is closed under substitutions. Let $F,G\in\cM[\TPermGraph]$, let $v\in V(F)$ and, without loss of generality, suppose that $V(F) = [n]$ and $V(G)=[m]$ for some $n,m\in\NN$, and that $\sigma\in S_n$ and $\tau\in S_m$ are permutations representing $F$ and $G$, that is, $\{i,j\}\in E(F)$ if and only if $i < j\tot\sigma(i)<\sigma(j)$, and analogously for $G$ and $\tau$. It is now easy to check that $F^{v\to G}$ is the graph of agreements of the permutation $\pi\in S_{n+m-1}$ defined by \begin{align*} \pi(i) & \df \begin{dcases*} \sigma(i), & if $i < v$ and $\sigma(i) < \sigma(v)$,\\ \sigma(i) + m - 1, & if $i < v$ and $\sigma(v) < \sigma(i)$,\\ \sigma(v) + \tau(i-v+1) - 1, & if $v\leq i < v + m$,\\ \sigma(i-m+1), & if $v+m\leq i$ and $\sigma(i-m+1) < \sigma(v)$,\\ \sigma(i-m+1) + m - 1, & if $v+m\leq i$ and $\sigma(v) < \sigma(i-m+1)$. \end{dcases*} \end{align*} In fact, the above shows that $\cM[\TPerm]$ is weakly closed under substitutions, so $\cM[\TPermGraph]$ inherits this property. Now, by Theorem~\ref{thm:graphs:WR}, it is sufficient to show that $\cM[\TPermGraph]$ is not primally almost finite. Recall that the family $\{G_n \mid n\geq 6\}$ of Example~\ref{ex:substitutiongeneration} is a family of prime graphs that is not almost finite.
We claim that for every even\footnote{It is also true for odd $n$, but we only need an infinite subfamily, so even $n$ suffices.} $n\geq 6$, the graph $G_n$ is a graph of agreements of some permutation. Indeed, $G_n$ is the graph of agreements of the permutation $\pi_n\in S_{n+4}$ (see Figure~\ref{fig:primegraphofagreements}) given by \begin{align*} \pi_n(i) & \df \begin{dcases*} n+3, & if $i=1$,\\ n+1, & if $i=2$,\\ n-1, & if $i=3$,\\ n+4, & if $i=4$,\\ n-i+3, & if $6\leq i\leq n$ and $i$ is even,\\ n-i+7, & if $5\leq i\leq n-1$ and $i$ is odd,\\ 1, & if $i=n+1$,\\ 6, & if $i=n+2$,\\ 4, & if $i=n+3$,\\ 2, & if $i=n+4$. \end{dcases*} \end{align*} For example, the values of $\pi_{14}$ (in sequence) are \begin{align*} 17, 15, 13, 18, 16, 11, 14, 9, 12, 7, 10, 5, 8, 3, 1, 6, 4, 2. \end{align*} Thus, $\cM[\TPermGraph]$ is not primally almost finite, hence $\TPermGraph\notin\WR$ by Theorem~\ref{thm:graphs:WR}. \end{proof} \begin{figure}[htbp] \input{primegraphofagreements} \end{figure} We conclude this section with an example of a universal theory $T$ of graphs that is in $\WR$ essentially because of failure of the product condition of Proposition~\ref{prop:noweaklyrandomsubgraphon}. \begin{proposition}\label{prop:WRproductcond} Consider the sequence of graphs $G = (C_{n^2+5})_{n\in\NN}$ and let $T\df\Th(\phi_G)$ be the theory of positive models of the recursive blow-up $\phi_G$ relative to $G$ (see Definition~\ref{def:graphrecursiveblowup}). Then $T\in\WR$. \end{proposition} \begin{proof} Let $V=(V_n)_{n\in\NN}$ be given by $V_n\df V(C_{n^2+5})$ and let $W^G$ be the graphon representation of $\phi_G$ given by~\eqref{eq:WG}. Since $W^G$ is $\{0,1\}$-valued, we can view it as a continuum-sized graph $H$ with vertex set $\Omega^V$; consider the family $\cF$ of all finite graphs that are induced subgraphs of $H$. We claim that $\cF = Q(\phi_G)$. Indeed, we obviously have $Q(\phi_G)\subseteq\cF$ and any induced subgraph $M\in\cF$ of $H$ must be an induced subgraph of the (finite) recursive blow-up $H'_{m_0}\df R^{(C_5,C_6,C_9,\ldots,C_{m_0^2+5})}$ for some $m_0\in\NN$ (see Definition~\ref{def:compatibleblowup}), hence \begin{align*} \phi_G(M) & \geq p(M,H'_{m_0})\cdot\phi_G(H'_{m_0}) \geq p(M,H'_{m_0})\cdot\frac{\lvert H'_{m_0}\rvert!}{\lvert\Aut(H'_{m_0})\rvert}\cdot \left(\prod_{n=0}^{m_0}\frac{1}{n^2+5}\right)^{\lvert H'_{m_0}\rvert} > 0, \end{align*} so $M\in Q(\phi_G)$. Let us now show that $T\in\WR$. Let $\phi\in\HomT{T}$ be an arbitrary limit of $T$. Since $\cM[T]=\cF$, there exists a sequence $U = (U_n)_{n\in\NN}$ of finite subsets of $\Omega^V$ such that the sequence of finite graphs $(H\rest_{U_n})_{n\in\NN}$ converges to $\phi$. For each $k\in\NN$ and each $v\in V(C_{k^2+5})$, let \begin{align*} K_{k,v} & \df \{\sigma\in\Omega^V \mid \sigma_k = v\}. \end{align*} For each $n\in\NN$, let us construct a sequence $(U'_{n,k})_{k\in\NN}$ of subsets of $U_n$ inductively as follows. We set $U'_{n,0}\df U_n$ and given $U'_{n,k}$, let $v_{n,k}$ be a vertex $v\in V(C_{k^2+5})$ that minimizes $\lvert U'_{n,k}\cap K_{k,v}\rvert$ (which can be zero) and let $U'_{n,k+1}\df U'_{n,k}\setminus K_{k,v_{n,k}}$; note that \begin{align*} \lvert U'_{n,k+1}\rvert & \geq \left(1 - \frac{1}{k^2+5}\right)\cdot\lvert U'_{n,k}\rvert. \end{align*} We also let $U'_n\df\bigcap_{k\in\NN} U'_{n,k}$ and note that a simple induction gives \begin{align}\label{eq:Uratio} \frac{\lvert U'_n\rvert}{\lvert U_n\rvert} & \geq \prod_{k\in\NN}\left(1 - \frac{1}{k^2+5}\right) > 0.
\end{align} Note also that the definition of $U'_n$ implies that for every $k\in\NN$, there exists $v\in V(C_{k^2+5})$ such that $U'_n\cap K_{k,v} = \varnothing$, which along with the definition of $H$ implies that $H\rest_{U'_n}\in S(\{P_n\mid n\in\NN\})$, where $P_n$ is the path on $n$ vertices. Let then $(H\rest_{U'_{n_\ell}})_{\ell\in\NN}$ be a convergent subsequence of $(H\rest_{U'_n})_{n\in\NN}$ such that $(\lvert U'_{n_\ell}\rvert/\lvert U_{n_\ell}\rvert)_{\ell\in\NN}$ is also convergent and let $\psi\in\HomT{T}$ be the limit of $(H\rest_{U'_{n_\ell}})_{\ell\in\NN}$. Then~\eqref{eq:Uratio} implies that $\psi$ is a sub-object of $\phi$ of measure at least $\prod_{k\in\NN}(1-1/(k^2+5)) > 0$. But since $H\rest_{U'_{n_\ell}}\in S(\{P_n\mid n\in\NN\})$, it follows that $\cM[\Th(\psi)]$ is primally almost finite, which by Lemma~\ref{lem:graphs:primallyalmostfinite->WR} gives $\Th(\psi)\in\WR$, so $\psi$ has a weakly random sub-object, hence so does $\phi$. \end{proof} \section{VC~dimension and weak randomness} \label{sec:VC:graph} In this section we study how weak randomness and the class $\WR$ interact with the notion of VC~dimension. We remind the reader that in this section we drop the qualifiers ``weakly'' and ``strongly'' from ``closed under substitutions'' as they are superfluous for graphs (see Remark~\ref{rmk:substarity2}). Recall that for a non-trivial graph $G$, the \emph{Vapnik--Chervonenkis dimension}~\cite{VC71} (VC~dimension) of (neighborhoods of) $G$ is the largest size $\VC(G)$ of a set $U\subseteq V(G)$ that is \emph{shattered} by neighborhoods of vertices of $G$ in the sense that for every $A\subseteq U$, there exists $v\in V(G)$ such that $N_G(v)\cap U = A$, where $N_G(v)\df\{w\in V(G)\mid G\vDash E(v,w)\}$ is the neighborhood of $v$ in $G$. By convention, we also let $\VC(K_0)\df 0$. Recall also that (the edge relation in) a class of graphs $\cF$ (or the corresponding universal theory $\Th(\cF)$) is said to have NIP (standing for \emph{no independence property}), or \emph{bounded VC~dimension}, if $\sup\{\VC(G) \mid G\in\cF\} < \infty$. Finally, recall that by~\cite{LS10}, a universal theory $T$ of graphs has NIP if and only if all graphons of $T$ are a.e.\ $\{0,1\}$-valued\footnote{Let us warn the unfamiliar reader that even if $W$ is a.e.\ $\{0,1\}$-valued, its theory of positive graphs $\Th(W)$ is not necessarily NIP. For example, the construction in the proof of Theorem~\ref{thm:graphpersistence} always yields a $\{0,1\}$-valued graphon with $\cM[\Th(W)]=\cF$, even if $\cF=\cM[\TGraph]$, which clearly has unbounded VC~dimension.}. Thus, studying NIP is directly related to studying whether the theory has fractional-valued graphons. We start with a simple application of the theory of graph persistence developed so far. \begin{proposition}\label{prop:01subgraphon} If $W$ is a graphon such that there exists a finite graph $G$ with $\phi_W(G)=0$, then $W$ has an a.e.\ $\{0,1\}$-valued subgraphon $W'$. Furthermore, $W'$ can be taken of the form $W' = W\rest_A$ for some positive measure set $A$. \end{proposition} \begin{proof} Define a $\{0,1\}$-valued graphon $\widetilde{W}$ on the same space as $W$ by \begin{align*} \widetilde{W}(x,y) & \df \begin{dcases*} 1, & if $0 < W(x,y) < 1$,\\ 0, & if $W(x,y)\in\{0,1\}$. \end{dcases*} \end{align*} Note that a subgraphon $W'$ of $W$ represented as $W\rest_f$ is a.e.\ $\{0,1\}$-valued if and only if $K_2\notin Q(\widetilde{W}\rest_f)$. Hence, $W$ has an a.e.\ $\{0,1\}$-valued subgraphon if and only if $K_2\notin P(\widetilde{W})$.
Thus, to prove the proposition, it is sufficient to show that $K_2\in P(\widetilde{W})$ implies that every finite graph $G$ has positive density in $W$. Since $P(\widetilde{W})$ is closed under substitutions and induced subgraphs, it follows that $K_n\in P(\widetilde{W})$ for every $n\in\NN$ (as $K_n\cong K_2^{v\to K_{n-1}}$). But note that each copy of $K_n$ in $\widetilde{W}$ corresponds to points of $W$ whose pairs all have values in $(0,1)$, hence have strictly fractional (conditional) probability of yielding edges; thus the fact that $K_{\lvert G\rvert}$ has positive density in $\widetilde{W}$ implies that $G$ has positive density in $W$. \medskip For the final part, if $W'=W\rest_f$ for some function $f\colon X\to [0,1]$, then letting $A\df\{x\in X \mid f(x) > 0\}$ yields that $W\rest_A$ is an a.e.\ $\{0,1\}$-valued subgraphon of $W$. \end{proof} The reader might have noticed that with the notable exception of quasirandom graphons, all examples of weakly random graphons of Section~\ref{sec:WR:graph} are $\{0,1\}$-valued. The next theorem says this is not a coincidence: every universal weakly random limit of a \emph{proper} (strongly) persistent class of graphs $\cF$ (i.e., $\cF\subsetneq\cM[\TGraph]$) must necessarily be $\{0,1\}$-valued. \begin{theorem}\label{thm:weaklyrandom01} If $W$ is a weakly random graphon such that there exists a finite graph $G$ with $\phi_W(G) = 0$, then $W$ is a.e.\ $\{0,1\}$-valued. \end{theorem} \begin{proof} We prove this by the contra-positive. Suppose $W$ is a weakly random graphon over some space $\Omega=(X,\cA,\mu)$ that is not a.e.\ $\{0,1\}$-valued. Let us show by induction on $\lvert G\rvert$ that every finite graph $G$ has $\phi_W(G) > 0$. By possibly applying the Graphon Removal Lemma~\cite[Theorem~1]{Pet13}, it is enough to show that $\Tind(G,W)\not\subseteq\cD_{V(G)}$. Obviously $\phi_W(K_0) = \phi_W(K_1) = 1$. So suppose $\lvert G\rvert\geq 2$, let $v_0\in V(G)$ and $H\df G-v_0$. Since $W$ is not a.e.\ $\{0,1\}$-valued, there exists $x_{v_0}\in X$ such that the set \begin{align*} A & \df \{y\in X\setminus\{x_{v_0}\} \mid 0 < W(x_{v_0},y) < 1\} \end{align*} has positive measure. Since $W\rest_A$ is a subgraphon of $W$, $W\rest_A$ is also weakly random, and since by induction hypothesis $\phi_W(H) > 0$, we get that there exists a point $(z,w)\in A^{V(H)}\times [0,1)^{\binom{V(H)}{2}}$ that induces an off-diagonal copy of $H$ in $W\rest_A$ (i.e., we have $(z,w)\in\Tind(H,W\rest_A)$). Let us extend $(z,w)$ to a point in $X^{V(G)}\times [0,1)^{\binom{V(G)}{2}}$ by defining $z_{v_0}\df x_{v_0}$ and \begin{align*} w_{\{v_0,u\}} & \df \begin{dcases*} 0, & if $\{v_0,u\}\in E(G)$,\\ \frac{1 + W(x_{v_0},z_u)}{2}, & if $\{v_0,u\}\notin E(G)$, \end{dcases*} \end{align*} for every $u\in V(H)$. It is straightforward to check that $(z,w)$ yields an off-diagonal copy of $G$ in $W$, concluding the proof. \end{proof} Our next objective is to show that for families of graphs $\cF$ that are closed under substitutions and induced subgraphs, determining whether $\cF$ has NIP reduces to determining whether the family of prime graphs of $\cF$ has NIP. To do so, we need a variation of the definition of VC~dimension. \begin{definition} Given a non-trivial graph $G$, the \emph{$\VC'$~dimension} of $G$ (denoted $\VC'(G)$) is the largest size of a set $U\subseteq V(G)$ that is \emph{almost shattered} by the edge relation of $G$ in the sense that for every non-empty $A\subsetneq U$, there exists $v\in V(G)$ such that $N_G(v)\cap U = A\setminus\{v\}$.
By convention, we also let $\VC'(K_0)\df 0$. \end{definition} Note that the notion of almost shattering is weaker than the notion of shattering in \emph{two} ways: we only care about sets $A$ that are \emph{non-empty proper subsets} of $U$ and $N_G(v)\cap U$ only needs to match $A$ up to possibly removing $v$ from $A$. Note that for a non-trivial graph $G$, we always have $\VC'(G)\geq 1$ as any singleton set is almost shattered by the edge relation of $G$. \begin{lemma}\label{lem:VCVC'} For a non-trivial graph $G$, we have \begin{align*} \max\{F(n) \mid n\leq \VC'(G)\} & \leq \VC(G) \leq \VC'(G), \end{align*} where \begin{align*} F(n) & \df \begin{dcases*} \max\left\{t\in\NN \;\middle\vert\; \sum_{i=0}^{t-1}\binom{n}{i} < \frac{2^n-2}{n}\right\}, & if $n\geq 3$, \\ 0, & if $n\leq 2$. \end{dcases*} \end{align*} \end{lemma} \begin{proof} Since the notion of shattering implies the notion of almost shattering, it follows trivially that $\VC(G)\leq\VC'(G)$. \medskip Let now $n\leq\VC'(G)$ and let us show that $F(n)\leq\VC(G)$. Note that the result is trivial if $n\leq 2$ as $F(0)=F(1)=F(2)=0$, so let us suppose $n\geq 3$. Since $n\leq\VC'(G)$, we know that there exists a set $U\subseteq V(G)$ of size $n$ that is almost shattered by the edge relation of $G$. For each non-empty $A\subsetneq U$, let $v_A\in V(G)$ be such that $N_G(v_A)\cap U = A\setminus\{v_A\}$ and let $\cF\df\{N_G(v_A)\cap U \mid \varnothing\neq A\subsetneq U\}$. We claim that for every $B\subseteq U$, there are at most $n$ non-empty sets $A\subsetneq U$ such that $N_G(v_A)\cap U = B$. Indeed, since $N_G(v_A)\cap U = A\setminus\{v_A\}$, the set of non-empty $A\subsetneq U$ with $N_G(v_A)\cap U = B$ must be contained in \begin{align*} \{B\}\cup\{B\cup\{u\} \mid u\in U\}. \end{align*} When $B$ is non-empty, the set above has size at most $\lvert U\rvert = n$ and when $B$ is empty, the set above has size $n+1$ but $A$ cannot be equal to $B$. Since there are $2^n-2$ non-empty sets $A\subsetneq U$, we get $\lvert\cF\rvert\geq (2^n-2)/n$. On the other hand, by the definition of $F(n)$, we have \begin{align*} \lvert\cF\rvert & \geq \frac{2^n-2}{n} > \sum_{i=0}^{F(n)-1}\binom{n}{i}, \end{align*} so by the Sauer--Shelah Lemma~\cite{Sau72,She72}, the family $\cF$ shatters some $U'\subseteq U$ with $\lvert U'\rvert\geq F(n)$, thus $\VC(G)\geq F(n)$. \end{proof} \begin{remark}\label{rmk:NIPVC'} It is easy to see that the function $F$ of Lemma~\ref{lem:VCVC'} is unbounded. Indeed, if there were a bound $t_0\in\NN$ such that $F(n)\leq t_0$ for every $n\in\NN$, then we would have \begin{align*} \frac{2^n-2}{n} \leq \sum_{i=0}^{t_0}\binom{n}{i} \leq (t_0+1)\cdot n^{t_0} \end{align*} for every $n\in\NN$, which yields a contradiction when $n$ is sufficiently large. As a corollary of Lemma~\ref{lem:VCVC'}, it then follows that a universal theory of graphs $T$ has NIP if and only if there exists $k\in\NN$ such that $\VC'(G)\leq k$ for every graph $G$ of $T$. \end{remark} The next lemma shows that $\VC'$~dimension behaves very well with respect to the substitution operation. \begin{lemma}\label{lem:VC'subst} Let $F_1$ and $F_2$ be non-trivial finite graphs and $v_0\in V(F_1)$. Then we have $\VC'(F_1^{v_0\to F_2}) = \max\{\VC'(F_1),\VC'(F_2)\}$. \end{lemma} \begin{proof} Without loss of generality, suppose $V(F_1)\cap V(F_2)=\varnothing$ and let $G\df F_1^{v_0\to F_2}$. Since both $F_1$ and $F_2$ are induced subgraphs of $G$ (as both $F_1$ and $F_2$ are non-trivial), it follows that $\VC'(G)\geq\max\{\VC'(F_1),\VC'(F_2)\}$.
To prove the other inequality, let $U\subseteq V(G)$ be a set that is almost shattered by the edge relation of $G$ with $\lvert U\rvert = \VC'(G)$. Suppose first that $U\subseteq V(F_i)$ for some $i\in[2]$. Then we claim that the edge relation of $F_i$ also almost shatters $U$. Indeed, if $A\subsetneq U$ is a non-empty set, then we know that there exists $u\in V(G)$ such that $N_G(u)\cap U = A\setminus\{u\}$. If $u\in V(F_i)$, then $N_{F_i}(u)\cap U = A\setminus\{u\}$ and $u$ also witnesses $A$ in $F_i$. If instead $u\in V(F_{3-i})$, then $u\notin U$, so $N_G(u)\cap U = A$; when $i=2$, the vertex $u\in V(F_1)$ is adjacent either to all of $V(F_2)\supseteq U$ or to none of it, so $A\in\{\varnothing,U\}$, a contradiction, and when $i=1$, we have $N_G(u)\cap U = N_{F_1}(v_0)\cap U$, so $v_0$ witnesses $A$ in $F_1$ (note that $v_0\notin U$, hence $N_{F_1}(v_0)\cap U = A = A\setminus\{v_0\}$). Therefore, in this case, we get $\VC'(G)\leq\VC'(F_i)\leq\max\{\VC'(F_1),\VC'(F_2)\}$. Suppose then that $U\not\subseteq V(F_1)$ and $U\not\subseteq V(F_2)$. Then we claim that $\lvert U\cap V(F_2)\rvert = 1$. Suppose not and let $v_1\in U\cap V(F_1)$ and $v_2,w_2\in U\cap V(F_2)$ with $v_2\neq w_2$. We consider first the case when $\{v_0,v_1\}\in E(F_1)$ (recall that $v_0$ is the vertex of $F_1$ that is being substituted: $G=F_1^{v_0\to F_2}$), let $A\df\{v_2\}\subsetneq U$ and let $u\in V(G)$ be such that $N_G(u)\cap U = A\setminus\{u\}$. Since $\{v_0,v_1\}\in E(F_1)$ and $v_1\notin A$, we must have $u\notin V(F_2)$ (as every vertex of $V(F_2)$ is adjacent to $v_1$ in $G$) and since $w_2\notin A$ but $v_2\in A$, we must have $u\notin V(F_1)$ (as any vertex of $V(F_1)$ that is adjacent to $v_2$ in $G$ must also be adjacent to $w_2$), a contradiction. Consider then the case when $\{v_0,v_1\}\notin E(F_1)$, let $A\df\{v_1,v_2\}\subsetneq U$ and let $u\in V(G)$ be such that $N_G(u)\cap U = A\setminus\{u\}$. Since $\{v_0,v_1\}\notin E(F_1)$ and $v_1\in A$, we must have $u\notin V(F_2)$ and since $v_2\in A$, we must have $u\notin V(F_1)$ (as $u$ would be adjacent to $v_2$, hence to all of $V(F_2)$, in particular to $w_2\notin A$), a contradiction. This concludes the proof of the claim, that is, we have $\lvert U\cap V(F_2)\rvert = 1$. Let then $w_0$ be the unique element of $U\cap V(F_2)$. We now claim that the set $U'\df (U\setminus\{w_0\})\cup\{v_0\}$ is almost shattered by the edge relation of $F_1$. Let $A'\subsetneq U'$ be a non-empty set and let \begin{align*} A & \df \begin{dcases*} A', & if $v_0\notin A'$,\\ (A'\setminus\{v_0\})\cup\{w_0\}, & if $v_0\in A'$. \end{dcases*} \end{align*} Then there exists $u\in V(G)$ such that $N_G(u)\cap U = A\setminus\{u\}$. Consider first the case when $u\in V(F_1)$. Then we have \begin{align*} N_{F_1}(u)\cap U' & = \begin{dcases*} N_G(u)\cap U, & if $w_0\notin N_G(u)\cap U$,\\ ((N_G(u)\cap U)\setminus\{w_0\})\cup\{v_0\}, & if $w_0\in N_G(u)\cap U$, \end{dcases*} \end{align*} hence $N_{F_1}(u)\cap U' = A'\setminus\{u\}$ (as $N_G(u)\cap U = A\setminus\{u\}$). Consider now the case when $u\notin V(F_1)$ and note that \begin{align*} N_{F_1}(v_0)\cap U' = (N_G(u)\cap U)\setminus\{w_0\} = A\setminus\{u,w_0\} = A'\setminus\{v_0\} \end{align*} since $u\notin A'$ (as $A'\subseteq V(F_1)$). Thus $U'$ is almost shattered by the edge relation of $F_1$, hence $\VC'(G)\leq\VC'(F_1)\leq\max\{\VC'(F_1),\VC'(F_2)\}$. \end{proof}
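Both notions of dimension can be computed by brute force on small graphs. The following Python sketch (our own illustration, not part of the formal development; it is exponential-time and only meant for tiny examples) mirrors the two definitions and exhibits a graph where they differ.
\begin{verbatim}
from itertools import combinations

def shatters(adj, U):
    """U is shattered: every A <= U arises as N(v) & U for some v."""
    traces = {frozenset(adj[v] & U) for v in adj}
    return all(frozenset(A) in traces
               for k in range(len(U) + 1)
               for A in combinations(U, k))

def almost_shatters(adj, U):
    """Every nonempty proper A < U arises as N(v) & U = A - {v}."""
    return all(any(adj[v] & U == set(A) - {v} for v in adj)
               for k in range(1, len(U))
               for A in combinations(U, k))

def dim(adj, test):
    V = list(adj)
    return max(k for k in range(len(V) + 1)
               for S in combinations(V, k) if test(adj, set(S)))

# The path P_4 on 0-1-2-3 has VC dimension 1 but VC' dimension 2
# (the set {1, 2} is almost shattered but no 2-set is shattered):
P4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
assert dim(P4, shatters) == 1 and dim(P4, almost_shatters) == 2
\end{verbatim}
The values $\VC(P_4)=1$ and $\VC'(P_4)=2$ are consistent with Lemma~\ref{lem:VCVC'}, since $F(2)=0$.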
The following simple consequence of Lemmas~\ref{lem:VCVC'} and~\ref{lem:VC'subst} (and Remark~\ref{rmk:NIPVC'}) says that when $\cF = S(\cF')$, then $\cF$ has NIP if and only if $\cF'$ has NIP. \begin{theorem}\label{thm:VC} Let $\cF$ and $\cF'$ be families of finite graphs up to isomorphism and suppose $\cF=S(\cF')$. Then $\cF$ has NIP if and only if $\cF'$ has NIP. In particular, if $\cF$ is a family of finite graphs that is closed under substitutions and under induced subgraphs and $\cP$ is the family of all prime graphs of $\cF$, then $\cF$ has NIP if and only if $\cP$ has NIP. \end{theorem} \begin{proof} By Lemmas~\ref{lem:VCVC'} and~\ref{lem:VC'subst} and Remark~\ref{rmk:NIPVC'} with a simple induction, we have \begin{align*} \sup\{\VC(F) \mid F\in\cF\} < \infty & \iff \sup\{\VC'(F) \mid F\in\cF\} < \infty \\ & \iff \sup\{\VC'(F') \mid F'\in\cF'\} < \infty \\ & \iff \sup\{\VC(F') \mid F'\in\cF'\} < \infty, \end{align*} so the first statement follows. The second statement follows from the first one along with Lemma~\ref{lem:strongclosuresubst}. \end{proof} As a direct corollary of Theorem~\ref{thm:VC}, it follows that any primally finite family $\cF$ that is closed under substitutions and under induced subgraphs has NIP. Our next objective is to show that the same is true in the primally almost finite case. Before we do so, we need yet another example of a family of prime graphs that is not almost finite. \begin{example}\label{ex:cliquedpath} For each $n\geq 9$ odd, let $G_n'$ be the graph obtained from the path on $n$ vertices $P_n$ by adding two vertices $a$ and $b$ adjacent precisely to the fourth and fourth-from-last vertices of $P_n$, respectively, and connecting all even vertices into a clique (see Figure~\ref{fig:cliquedpath}). Formally, we have \begin{align*} V(G_n') & \df [n]\cup\{a,b\}, \\ E(G_n') & \df \begin{multlined}[t] \{\{i,i+1\} \mid i\in[n-1]\}\cup\{\{a,4\},\{b,n-3\}\}\\ \cup\{\{2i,2j\}\mid i,j\in[(n-1)/2]\land i\neq j\}. \end{multlined} \end{align*} It is straightforward to check that $\{G_n' \mid n\geq 9\text{ odd}\}$ is a family of prime graphs that is not almost finite. \end{example} \begin{figure}[htbp] \input{cliquedpath} \end{figure} Before we prove the theorem, we need a small consequence of Ramsey's Theorem. \begin{lemma}\label{lem:Ramseydensity} For every $n\in\NN$ and every graphon $W$, we have \begin{align*} \phi_W(K_n) + \phi_W(\overline{K}_n) & \geq \binom{R(n,n)}{n}^{-1}, \end{align*} where $R(n,n)$ is the $(n,n)$-Ramsey number. \end{lemma} \begin{proof} We have \begin{align*} \phi_W(K_n) + \phi_W(\overline{K}_n) & = \sum_{M\in\cM_{R(n,n)}[\TGraph]} (p(K_n,M) + p(\overline{K}_n,M))\cdot\phi_W(M) \\ & \geq \binom{R(n,n)}{n}^{-1}\sum_{M\in\cM_{R(n,n)}[\TGraph]}\phi_W(M) = \binom{R(n,n)}{n}^{-1}, \end{align*} where the inequality follows since at least one $n$-sized subset of each $M\in\cM_{R(n,n)}[\TGraph]$ must induce either $K_n$ or $\overline{K}_n$ in $M$. \end{proof} \begin{theorem}\label{thm:primallyalmostfiniteNIP} Let $T$ be a universal theory of graphs. If $\cM[T]$ is primally almost finite, then the edge relation in $T$ has NIP. In particular, the edge relation in every universal theory of graphs $T'\in\WR$ such that $\cM[T']$ is closed under substitutions has NIP. \end{theorem} \begin{proof} The second assertion follows from the first along with Theorem~\ref{thm:graphs:WR}. \medskip We prove the first assertion by the contra-positive: suppose that the edge relation in $T$ does not have NIP, that is, it has unbounded VC~dimension. Then by~\cite{LS10}, there exists a graphon $W$ that is a limit of $T$ and is \emph{not} a.e.\ $\{0,1\}$-valued. By possibly applying the Graphon Removal Lemma~\cite[Theorem~1]{Pet13}, we may suppose that every graph $G$ that has an off-diagonal copy in $W$ has positive density in $W$.
Our objective is to present a family of prime graphs that is not almost finite and such that all graphs in this family have an off-diagonal copy in $W$ (thus $\cM[T]$ is not primally almost finite). For this purpose, we will show that for each $n\in\NN$ with $n\geq 5$, one of the following graphs appears as an off-diagonal copy in $W$: \begin{enumerate} \item The graph $G_{2n-4}$ of Example~\ref{ex:substitutiongeneration}. \item The complement $\overline{G}_{2n-4}$ of the graph of Example~\ref{ex:substitutiongeneration}. \item The graph $G_{2n-1}'$ of Example~\ref{ex:cliquedpath}. \end{enumerate} Since each of these families is a family of prime graphs that is not almost finite (note that primality is preserved under complementation) and one of them must occur for infinitely many $n$, it will follow that $\cM[T]$ is not primally almost finite as desired. Without loss of generality, let us suppose that the underlying space of $W$ is $[0,1]$ and let $(x_0,y_0)\in(0,1)^2$ be a Lebesgue density point with respect to $\ell^\infty$-balls of the positive measure set $A\df W^{-1}((0,1))$ with $x_0\neq y_0$. Let $\epsilon > 0$ be such that $\epsilon < (n\cdot\binom{R(n,n)}{n})^{-2}$ and let $\delta > 0$ be small enough so that \begin{align*} \frac{ \lambda(A\cap B_\delta(x_0,y_0)) }{ \lambda(B_\delta(x_0,y_0)) } & \geq 1 - \epsilon, \end{align*} where $B_\delta(x_0,y_0)$ is the $\ell^\infty$-ball of radius $\delta$ centered at $(x_0,y_0)$. We may also suppose that $\delta > 0$ is small enough so that $(x_0-\delta,x_0+\delta)$ and $(y_0-\delta,y_0+\delta)$ are disjoint subsets of $[0,1]$. Consider the set \begin{align*} C & \df \{(x,y)\in B_\delta(x_0)^n\times B_\delta(y_0)^n \mid \forall i,j\in[n], (x_i,y_j)\in A\}. \end{align*} With a simple union bound, we have \begin{align}\label{eq:lambdaC} \lambda(C) & \geq (1 - n^2\epsilon)\cdot \lambda(B_\delta(x_0,y_0))^n > \left(1 - \binom{R(n,n)}{n}^{-2}\right) \lambda(B_\delta(x_0,y_0))^n. \end{align} Define also \begin{multline*} C' \df \{(x,y)\in B_\delta(x_0)^n\times B_\delta(y_0)^n \mid \exists z\in[0,1)^{\binom{[n]}{2}}, (x,z)\in\Tind(K_n,W)\cup\Tind(\overline{K}_n,W) \\ \land \exists w\in[0,1)^{\binom{[n]}{2}}, (y,w)\in\Tind(K_n,W)\cup\Tind(\overline{K}_n,W)\}. \end{multline*} By Lemma~\ref{lem:Ramseydensity}, we have \begin{align*} \lambda(C') & \geq \binom{R(n,n)}{n}^{-2}\cdot\lambda(B_\delta(x_0,y_0))^n. \end{align*} Putting this together with~\eqref{eq:lambdaC}, we conclude that $\lambda(C\cap C') > 0$. Let then $(x,y)\in C\cap C'$ be a point with all coordinates distinct. We now consider four cases. \begin{enumerate}[label={Case~\arabic*.}, ref={\arabic*}, wide] \item\label{case:emptyempty} There exist points $z,w\in[0,1)^{\binom{[n]}{2}}$ such that $(x,z),(y,w)\in\Tind(\overline{K}_n,W)$. In this case, we construct an off-diagonal copy $(\widehat{x},\widehat{y})\in [0,1]^{V(G_{2n-4})}\times [0,1)^{\binom{V(G_{2n-4})}{2}}$ of the graph $G_{2n-4}$ of Example~\ref{ex:substitutiongeneration} as follows. Recall that $V(G_{2n-4}) = [2n-4]\cup\{a,b,c,d\}$ and for convenience of notation, let us make the identifications $a\df 2n-3$, $b\df 2n-2$, $c\df 2n-1$ and $d\df 2n$. 
Then for each $i,j\in V(G_{2n-4})$ with $i\neq j$, let \begin{align*} \widehat{x}_i & \df \begin{dcases*} x_{i/2}, & if $i\in[2n]$ is even,\\ y_{(i+1)/2}, & if $i\in[2n]$ is odd, \end{dcases*} \\ \widehat{y}_{\{i,j\}} & \df \begin{dcases*} z_{\{i/2,j/2\}}, & if $i,j\in[2n]$ are both even,\\ w_{\{(i+1)/2,(j+1)/2\}}, & if $i,j\in[2n]$ are both odd,\\ 0, & if $i\in[2n]$ is even, $j\in[2n]$ is odd and $\{i,j\}\in E(G_{2n-4})$, \\ \frac{1 + W(x_{i/2},y_{(j+1)/2})}{2}, & if $i\in[2n]$ is even, $j\in[2n]$ is odd and $\{i,j\}\notin E(G_{2n-4})$. \end{dcases*} \end{align*} The fact that $(x,y)\in C\cap C'$ and all coordinates of $(x,y)$ are distinct guarantees that $(\widehat{x},\widehat{y})$ is an off-diagonal copy of $G_{2n-4}$. \item There exist points $z,w\in[0,1)^{\binom{[n]}{2}}$ such that $(x,z),(y,w)\in\Tind(K_n,W)$. In this case, a construction analogous to the one in case~\ref{case:emptyempty} yields an off-diagonal copy of the complement $\overline{G}_{2n-4}$ of the graph of Example~\ref{ex:substitutiongeneration}. \item\label{case:cliqueempty} There exist points $z,w\in[0,1)^{\binom{[n]}{2}}$ such that $(x,z)\in\Tind(K_n,W)$ and $(y,w)\in\Tind(\overline{K}_n,W)$. We construct an off-diagonal copy $(\widehat{x},\widehat{y})\in [0,1]^{V(G_{2n-1}')}\times [0,1)^{\binom{V(G_{2n-1}')}{2}}$ of the graph $G_{2n-1}'$ of Example~\ref{ex:cliquedpath} as follows. Recall that $V(G_{2n-1}') = [2n-1]\cup\{a,b\}$ and for convenience of notation, let us make the identifications $a\df 2n+1$ and $b\df 2n+3$ (nothing gets identified with the points $2n$ and $2n+2$). Then for each $i,j\in V(G_{2n-1}')$ with $i\neq j$, let \begin{align*} \widehat{x}_i & \df \begin{dcases*} x_{i/2}, & if $i\in[2n-1]$ is even,\\ y_{(i+1)/2}, & if $i\in[2n+3]$ is odd, \end{dcases*} \\ \widehat{y}_{\{i,j\}} & \df \begin{dcases*} z_{\{i/2,j/2\}}, & if $i,j\in[2n-1]$ are both even,\\ w_{\{(i+1)/2,(j+1)/2\}}, & if $i,j\in[2n+3]$ are both odd,\\ 0, & \parbox[t]{0.6\textwidth}{if $i\in[2n-1]$ is even, $j\in[2n+3]$ is odd and $\{i,j\}\in E(G_{2n-1}')$,} \\ \frac{1 + W(x_{i/2},y_{(j+1)/2})}{2}, & \parbox[t]{0.6\textwidth}{if $i\in[2n-1]$ is even, $j\in[2n+3]$ is odd and $\{i,j\}\notin E(G_{2n-1}')$.} \end{dcases*} \end{align*} The fact that $(x,y)\in C\cap C'$ and all coordinates of $(x,y)$ are distinct guarantees that $(\widehat{x},\widehat{y})$ is an off-diagonal copy of $G_{2n-1}'$. \item There exist points $z,w\in[0,1)^{\binom{[n]}{2}}$ such that $(x,z)\in\Tind(\overline{K}_n,W)$ and $(y,w)\in\Tind(K_n,W)$. This case follows from case~\ref{case:cliqueempty} by swapping the roles of $x$ and $y$. \end{enumerate} Therefore $\cM[T]$ is not primally almost finite. \end{proof} \begin{remark} The assumption that $\cM[T]$ is closed under substitution is crucial for the second part of Theorem~\ref{thm:primallyalmostfiniteNIP}, since, for example, the universal theory $\TBipartite$ of bipartite graphs clearly is in $\AEHP\subseteq\WR$ (as every limit contains an empty subgraphon of measure at least $1/2$) but has unbounded VC~dimension. 
For an example of a theory with unbounded VC~dimension that is in $\WR\setminus\AEHP$, let $T_{C_4}$ be the universal theory of graphs that are induced subgraphs of some (finite) recursive blow-up of $C_4$ (which has bounded VC~dimension by Theorem~\ref{thm:primallyalmostfiniteNIP} as $T_{C_4}$ is primally finite), let $\cF$ be the family of graphs $G$ such that there exists a partition $V(G) = A\cup B$ such that both $G\rest_A$ and $G\rest_B$ are models of $T_{C_4}$ and let $T\df\Th(\cF)$ be the corresponding universal theory of graphs. Obviously, every bipartite graph is a model of $T$, so $T$ has unbounded VC~dimension. Since at least one of $A$ or $B$ must have at least half of the vertices, it follows that every limit of $T$ has a subgraphon that is a limit of $T_{C_4}$ and since $T_{C_4}\in\WR$ (by Proposition~\ref{prop:recC4}), we get $T\in\WR$. On the other hand, since every model of $T_{C_4}$ is a model of $T$ and $T_{C_4}\notin\AEHP$ (by~\cite[Lemma~8.8]{CM22} or Proposition~\ref{prop:recC4} again), it follows that $T\notin\AEHP$. \end{remark} \section{Persistence for universal theories} \label{sec:pers:univ} In this section, we generalize the results of Section~\ref{sec:pers:graph} on (strongly) persistent classes to arbitrary universal theories in finite relational languages. Table~\ref{tab:corresp} below contains the correspondence between the theorems and lemmas of Sections~\ref{sec:pers:graph} and~\ref{sec:WR:graph} and their generalizations in this and the next section. \begin{table}[htbp] \begin{tabular}{*{2}{c}p{7.3cm}} Graph result & Universal theory result & Drawbacks \\ \hline Lemma~\ref{lem:PWsubgraphon} & Lemma~\ref{lem:PWRset}\ref{lem:PWRset:P} & None. \\ Theorem~\ref{thm:graphpersistence} & Theorem~\ref{thm:persistence} & Persistence (item~\ref{thm:persistence:persistent}) can only be added to the list of equivalences when all arities are at most $2$. \\ Lemma~\ref{lem:PW} & Lemma~\ref{lem:Pphi} & Requires either that all arities are at most $2$ (item~\ref{lem:Pphi:arity2}) or weak randomness (item~\ref{lem:Pphi:WR}). \\ Lemma~\ref{lem:QphiG} & \begin{tabular}[t]{c} Lemma~\ref{lem:comprecblowup} and\\ Proposition~\ref{prop:recblowup}\ref{prop:recblowup:conservative},\ref{prop:recblowup:nonconservative} \end{tabular} & When the recursive blow-ups are not conservative (see Definitions~\ref{def:compatibleblowup} and~\ref{def:recursiveblowup}), only partial information is known about the limit theon. \\ Lemma~\ref{lem:PphiGQphiG} & Proposition~\ref{prop:recblowup}\ref{prop:recblowup:P} & None. \\ Lemma~\ref{lem:weaklyrandomsubgraphon} & Lemma~\ref{lem:PWRset}\ref{lem:PWRset:WR} & None. \\ Theorem~\ref{thm:graphs:WR} & Propositions~\ref{prop:monprimalmostfinite->WR} and~\ref{prop:WR->primalmostfinite} & Backward direction requires all arities to be at most $2$. Forward direction is trivial if all arities are at least $3$. \\ Lemma~\ref{lem:graphs:primallyalmostfinite->WR} & Proposition~\ref{prop:monprimalmostfinite->WR} & Requires all arities to be at most $2$. \\ Proposition~\ref{prop:noweaklyrandomsubgraphon} & Propositions~\ref{prop:recblowup}\ref{prop:recblowup:zeroproduct} and~\ref{prop:WR->primalmostfinite} & When all arities are at least $3$, there are only finitely many prime structures (see Remark~\ref{rmk:prime3}). \end{tabular} \caption{Correspondence between theorems and lemmas of Sections~\ref{sec:pers:graph} and~\ref{sec:WR:graph} and their generalizations in Sections~\ref{sec:pers:univ} and~\ref{sec:WR:univ}. 
Some generalizations have drawbacks (e.g., extra hypotheses or the result might be trivial) as pointed out in the third column.} \label{tab:corresp} \end{table} \begin{definition} Let $\phi\in\HomT{T_\cL}$. The set of \emph{positive $\cL$-structures in $\phi$} is the set $Q(\phi)\df\cM[\Th(\phi)]$ of all finite $\cL$-structures $M$ (up to isomorphism) such that $\phi(M) > 0$. The set of \emph{persistently positive $\cL$-structures in $\phi$} is the set $P(\phi)\df\bigcap_\psi Q(\psi)$, where the intersection is over all sub-objects of $\phi$. We extend these definitions naturally to Euclidean structures $\cN$ in $\cL$ by $Q(\cN)\df Q(\phi_\cN)$ and $P(\cN)\df P(\phi_\cN)$. We say that $\phi$ is \emph{weakly random} if $P(\phi) = Q(\phi)$. A family $\cF$ of finite $\cL$-structures (up to isomorphism) is called \emph{persistent} if there exists $\phi$ such that $P(\phi) = \cF$. The family $\cF$ is called \emph{strongly persistent} if there exists a weakly random $\phi$ such that $P(\phi) = \cF$ (which must also equal $Q(\phi)$); in this case, we also say that $\phi$ is a \emph{universal weakly random limit of $\cF$}. \end{definition} \begin{lemma}\label{lem:PWRset} Let $\cN$ be a Euclidean structure in $\cL$ over $\Omega=(X,\cA,\mu)$. Then the following hold. \begin{enumerate} \item $P(\cN) = \bigcap_A Q(\cN\rest_A^F)$, where the intersection is over all positive measure $A\in\cA$ and all measure-isomorphisms $F$ modulo $0$ from $\Omega_A$ to $\Omega$ (equivalently, we can also use a single measure-isomorphism $F_A$ modulo $0$ for each positive measure $A\in\cA$) \label{lem:PWRset:P} \item $\phi_\cN$ has a weakly random sub-object if and only if there exists a positive measure $A\in\cA$ and a measure-isomorphism $F$ modulo $0$ from $\Omega_A$ to $\Omega$ such that $\phi_{\cN\rest_A^F}$ is weakly random \label{lem:PWRset:WR} \end{enumerate} \end{lemma} \begin{proof} Both items follow from the fact that if $f\colon X\to [0,1]$ is a measurable function with $\int_X f\ d\mu > 0$ and $F$ is a measure-isomorphism modulo $0$ from $\Omega_f$ to $\Omega$, then for $A = \{x\in X \mid f(x) > 0\}$ and any measure-isomorphism $\widetilde{F}$ modulo $0$ from $\Omega_A$ to $\Omega$, we have $Q(\cN\rest_f^F) = Q(\cN\rest_A^{\widetilde{F}})$. \end{proof} \begin{lemma}\label{lem:Pphi} The following hold for $\phi\in\HomT{T_\cL}$. \begin{enumerate} \item If all predicate symbols of $\cL$ have arity at most $2$, then $P(\phi)$ is strongly closed under substitutions and closed under substructures \label{lem:Pphi:arity2} \item If $\phi$ is weakly random, then $P(\phi)$ is weakly closed under substitutions and closed under substructures \label{lem:Pphi:WR} \end{enumerate} \end{lemma} \begin{proof} Since obviously $K_0\in P(\phi)$, by Lemma~\ref{lem:substitutionK0}, it is sufficient to prove the closure-under-substitutions assertion of each item. Let $F_1,F_2\in P(\phi)$ and $v\in V(F_1)$ and let $\cF$ be the set of standard substitutions of $v$ in $F_1$ by $F_2$. Note that in item~\ref{lem:Pphi:arity2}, by Remark~\ref{rmk:substarity2}, $\cF$ has a unique element (and the notions of strongly and weakly closed under substitutions coincide). Thus, in both items, our objective is to show that $\cF\cap P(\phi)$ is non-empty. Let $\psi$ be a sub-object of $\phi$ and let $\cN$ be a Euclidean structure in $\cL$ over some space $\Omega=(X,\cA,\mu)$ with $\phi_\cN = \psi$. We claim that $\cF\cap Q(\psi)$ is non-empty. Suppose not, that is, suppose $\tind(F,\cN) = 0$ for every $F\in\cF$.
By possibly applying the Induced Euclidean Removal Lemma~\cite[Theorem~3.3]{CR20a}, we may suppose that $\Tind(F,\cN)\subseteq\cD_V$ for every $F\in\cF$. Since $F_1\in P(\phi)$, we must have $F_1\in Q(\cN)$, that is, we have $\tind(F_1,\cN) > 0$. For every $x\in X^{r(V(F_1))\setminus\{\{v\}\}}$, let \begin{align*} U_x & \df \{y\in X \mid (x,y)\in\Tind(F_1,\cN)\}. \end{align*} By Fubini's Theorem, there exists $x\in X^{r(V(F_1))\setminus\{\{v\}\}}$ with all coordinates distinct such that $\mu(U_x) > 0$. Let $G$ be a measure-isomorphism modulo $0$ from $\Omega_{U_x}$ to $\Omega$. Since $\phi_{\cN\rest_{U_x}^G}$ is a sub-object of $\psi$, hence also of $\phi$, we must have $F_2\in Q(\cN\rest_{U_x}^G)$, which implies that there exists $z\in\cE_{V(F_2)}$ such that \begin{enumerate}[label={\alph*.}] \item all coordinates of $z$ are distinct; \item all coordinates of $z$ are distinct from the coordinates of $x$; \item for every $v\in V(F_2)$, we have $z_{\{v\}}\in U_x$; \item we have $z\in\Tind(F_2,\cN)$. \end{enumerate} Define then the point $w\in\cE_V$ by the following procedure. \begin{enumerate}[label={\arabic*.}, ref={(\arabic*)}] \item For each $A\subseteq r(V(F_1-v))$, let $w_A\df x_A$ \label{it:F1} \item For each $A\subseteq r(V(F_1-v))$ and each $u\in V(F_2)$, let $w_{A\cup\{u\}}\df x_{A\cup\{v\}}$ \label{it:F1F2} \item For each $A\subseteq r(V(F_2))$, let $w_A\df z_A$ \label{it:F2} \item Define all other coordinates of $w$ arbitrarily. \end{enumerate} Note that all coordinates of $w$ that are indexed by single vertices get defined in items~\ref{it:F1} and~\ref{it:F2} and their definitions guarantee that they are distinct from each other, that is, we have $w\notin\cD_V$. Let then $F$ be the unique $\cL$-structure with $w\in\Tind(F,\cN)$. Then items~\ref{it:F1} and~\ref{it:F1F2} ensure that all injections $V(F_1)\to V$ acting identically on $V(F_1-v)$ are embeddings of $F_1$ in $F$ and item~\ref{it:F2} ensures that the injection $V(F_2)\to V$ that acts identically on $V(F_2)$ is an embedding of $F_2$ in $F$. Thus, we must have $F\in\cF$. Therefore, we have shown that for every sub-object $\psi$ of $\phi$, we have $\cF\cap Q(\psi)\neq\varnothing$. In item~\ref{lem:Pphi:arity2}, since $\cF$ has a single element $F$, it follows that $F\in P(\phi)$, hence $P(\phi)$ is strongly (in this case, equivalently, weakly) closed under substitutions. In item~\ref{lem:Pphi:WR}, since $Q(\psi)=P(\phi)$ as $\phi$ is weakly random, it follows that $\cF\cap P(\phi)\neq\varnothing$, so $P(\phi)$ is weakly closed under substitutions. \end{proof} The next example shows why the hypotheses of Lemma~\ref{lem:Pphi} are crucial for $P(\phi)$ to be weakly closed under substitutions. \begin{example}\label{ex:Pphinotclosedundersubst} Consider $\phi\in\HomT{\TkHypergraph[3]}$ that is the disjoint union of a clique and an anti-clique of the same size, that is, $\phi=\phi_\cN$ for the $\TkHypergraph[3]$-on $\cN$ over $[0,1]$ given by \begin{align*} \cN_E & \df \left\{x\in\cE_3 \;\middle\vert\; \max\{x_{\{1\}},x_{\{2\}},x_{\{3\}}\} < \frac{1}{2}\right\}. \end{align*} Since $\phi$ contains both a clique and an anti-clique of positive measure, it follows that $P(\phi)$ does not contain any models of size at least $3$. However, since $\TkHypergraph[3]$ is $2$-categorical, $P(\phi)$ must contain the unique model $K^{(3)}_2$ of size $2$. It then follows that $P(\phi)$ is not even weakly closed under substitutions as any substitution of any vertex of $K^{(3)}_2$ by $K^{(3)}_2$ must have size $3$.
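To spell out the first claim (a sketch; the two restrictions below are simply the natural choices suggested by the definition of $\cN$): for any measure-isomorphisms modulo $0$ realizing them, the restriction of $\cN$ to $[0,1/2)$ is a sub-object in which every triple of distinct vertices spans an edge, so its $Q$-set contains only complete $3$-hypergraphs, while the restriction to $[1/2,1]$ is a sub-object whose $Q$-set contains only empty $3$-hypergraphs. Since every model of size at least $3$ contains at least one triple, no such model can have positive density in both restrictions, and hence none can belong to $P(\phi)$.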
\end{example} \begin{definition}\label{def:compatibleblowup} Given a finite sequence $N=(N_0,\ldots,N_n)$ of finite $\cL$-structures with $\lvert N_i\rvert\geq 2$ for every $i\in\{0,\ldots,n\}$, a \emph{recursive blow-up relative to $N$} is an $\cL$-structure $R$ with $V(R)\df\prod_{i=0}^n V(N_i)$ such that for every $j\in\{0,\ldots,n\}$ and every $\sigma\in\prod_{i=0}^{j-1} V(N_i)$, every function $f\colon V(N_j)\to V(R)$ such that $f(v)\rest_{\{0,\ldots,j-1\}} = \sigma$ and $f(v)_j = v$ for every $v\in V(N_j)$ is an embedding of $N_j$ in $R$. The unique recursive blow-up $R$ relative to $N$ that has the smallest possible relation sets $P^R$ ($P\in\cL$) is called the \emph{conservative recursive blow-up relative to $N$} and is denoted $R^N$. Formally, it is given by $V(R^N)\df\prod_{i=0}^n V(N_i)$ and \begin{align*} P^{R^N} \df \biggl\{(\sigma,\alpha_j,\tau^j)_{j=1}^{k(P)} \in (V(R^N))_{k(P)} \;\bigg\vert\; & \exists i\in\{0,\ldots,n\},\ \sigma\in\prod_{\ell=0}^{i-1} V(N_\ell)\land\alpha\in P^{N_i} \\ & \land \forall j\in [k(P)], \tau^j\in\prod_{\ell=i+1}^n V(N_\ell) \biggr\} \end{align*} for every $P\in\cL$. Given an infinite sequence $N=(N_i)_{i\in\NN}$ of finite $\cL$-structures with $\lvert N_i\rvert\geq 2$ for every $i\in\NN$, a \emph{compatible sequence of recursive blow-ups relative to $N$} is a sequence $R=(R_i)_{i\in\NN}$ such that \begin{enumerate} \item for every $i\in\NN$, $R_i$ is a recursive blow-up relative to $(N_0,\ldots,N_i)$; \item for every $i\in\NN$, every function $f\colon V(R_i)\to V(R_{i+1})$ such that $f(v)\rest_{\{0,\ldots,i\}} = v$ is an embedding of $R_i$ in $R_{i+1}$. \end{enumerate} We call $R$ \emph{conservative} if further $R_i = R^{(N_0,\ldots,N_i)}$ (it is easy to see that this is always compatible). \end{definition} \begin{lemma}\label{lem:comprecblowup} Let $\cF$ be a family of finite $\cL$-structures that is weakly closed under substitutions and closed under substructures and let $N = (N_i)_{i\in\NN}$ be a sequence in $\cF$ with $\lvert N_i\rvert\geq 2$ for every $i\in\NN$. Then there exists a compatible sequence $R=(R_i)_{i\in\NN}$ of recursive blow-ups relative to $N$ with $R_i\in\cF$ for every $i\in\NN$. \end{lemma} \begin{proof} We construct the compatible sequence $R=(R_i)_{i\in\NN}$ inductively by setting $R_0\df N_0$ and given $R_i$, we enumerate the vertices of $R_i$ as $v^i_1,\ldots,v^i_{t_i}$, inductively define $F^i_0,\ldots,F^i_{t_i}$ by $F^i_0\df R_i$, let $F^i_{j+1}\in\cF$ be a standard substitution of $v^i_{j+1}$ in $F^i_j$ by $N_{i+1}$ and set $R_{i+1}\df F^i_{t_i}$. It is straightforward to check by induction that $R$ is a compatible sequence of recursive blow-ups relative to $N$ with $R_i\in\cF$ for every $i\in\NN$. \end{proof} \begin{definition}\label{def:recursiveblowup} Given an infinite sequence $N=(N_i)_{i\in\NN}$ of finite $\cL$-structures with $\lvert N_i\rvert\geq 2$ for every $i\in\NN$, we let the \emph{conservative recursive blow-up relative to $N$} be the $T_\cL$-on $\cN^N$ defined as follows. We let $V = (V_\ell)_{\ell\in\NN}$ be defined by $V_\ell\df V(N_\ell)$ and we define $\cN^N$ over the Cantor probability space $\Omega^V=(\prod_{\ell\in\NN} V_\ell, \cA, \nu^V)$ (see Definition~\ref{def:graphrecursiveblowup}) by \begin{align*} \cN^N_P & \df \{x\in\cE_{k(P)}(\Omega^V) \mid \exists i\in\NN, R^{(N_0,\ldots,N_i)}\vDash P(t_i^P(x)) \}, \end{align*} where $t_i^P(x)\in (\prod_{\ell=0}^i V_\ell)^{k(P)}$ is given by \begin{align}\label{eq:tiP} t_i^P(x)_j & \df x_{\{j\}}\rest_{\{0,\ldots,i\}} \qquad (j\in [k(P)]).
\end{align} \end{definition} \begin{proposition}\label{prop:recblowup} Let $R=(R_i)_{i\in\NN}$ be a compatible sequence of recursive blow-ups relative to $N=(N_i)_{i\in\NN}$ and let $V = (V_i)_{i\in\NN}$ be given by $V_i\df V(N_i)$. Then $R$ is convergent and the following hold for its limit $\phi_R\in\HomT{T_\cL}$. \begin{enumerate} \item If $R$ is conservative, then $\phi_R = \phi_{\cN^N}$ \label{prop:recblowup:conservative} \item There exists a $T_\cL$-on $\cH$ over $\Omega^V$ with $\phi_R=\phi_\cH$ and \begin{align*} \cN^N_P & \subseteq \cH_P \subseteq \cE_{k(P)}(\Omega^V)\setminus\cN^{\overline{N}}_P\text{ a.e.} \end{align*} for every $P\in\cL$, where $\overline{N}=(\overline{N}_i)_{i\in\NN}$ is the sequence of complementary canonical $\cL$-structures given by \begin{align*} V(\overline{N}_i) & \df V(N_i), & P^{\overline{N}_i} & \df (V(N_i))_{k(P)}\setminus P^{N_i} \qquad (P\in\cL). \end{align*} \label{prop:recblowup:nonconservative} \item If $P(N)$ is the set of structures $M$ such that there exist infinitely many $i\in\NN$ with $M\cong N_i$, then $P(N)\subseteq P(\phi_R)$ \label{prop:recblowup:P} \item If $\prod_{i\in\NN} (1 - 1/\lvert N_i\rvert) = 0$, then for every positive measure $A\subseteq\Omega^V$, there exists $i\in\NN$ such that $\tind(N_i,\cH\rest_A^F) > 0$ for every measure-isomorphism $F$ modulo $0$ from $\Omega^V_A$ to $\Omega^V$ \label{prop:recblowup:zeroproduct} \end{enumerate} \end{proposition} \begin{proof} To show that $R$ is convergent, for each $i\in\NN$, define the Euclidean structure $\cN^i$ in $\cL$ over $\Omega^V$ by \begin{align*} \cN^i_P & \df \{x\in\cE_{k(P)}(\Omega^V) \mid R_i\vDash P(t_i^P(x))\}, \end{align*} where $t_i^P(x)$ is given by~\eqref{eq:tiP}, that is, $\cN^i$ is the natural ``step'' Euclidean structure associated with $R_i$ over $\Omega^V$. First note that since $R$ is compatible, for every $i,j\in\NN$, we have \begin{equation}\label{eq:L1bound} \begin{aligned} & \!\!\!\!\!\! \sum_{P\in\cL} \nu^V(\cN^i_P\symdiff\cN^{i+j}_P) \\ & \leq \sum_{P\in\cL} \nu^V(\{x\in\cE_{k(P)}(\Omega^V) \mid \exists a,b\in[k(P)], (a\neq b\land x_{\{a\}}\rest_{\{0,\ldots,i\}} = x_{\{b\}}\rest_{\{0,\ldots,i\}}) \}) \\ & \leq \sum_{P\in\cL} \binom{k(P)}{2}\cdot\sum_{\sigma\in\prod_{\ell=0}^i V_\ell} \nu^V(K_{\sigma,V})^2 \\ & = \sum_{P\in\cL} \binom{k(P)}{2}\cdot\prod_{\ell=0}^i \lvert V_\ell\rvert^{-1} \xrightarrow{i\to\infty} 0. \end{aligned} \end{equation} Therefore, it follows that for every finite $\cL$-structure $K$, the limit $\lim_{i\to\infty}\tind(K,\cN^i)$ exists. On the other hand, it is also straightforward to check that for every finite $\cL$-structure $K$, we have \begin{align*} \left\lvert \lvert R_i\rvert^{\lvert K\rvert}\cdot\tind(K,\cN^i) - \lvert\Tind(K,R_i)\rvert \right\rvert & \leq \lvert R_i\rvert^{\lvert K\rvert} - (\lvert R_i\rvert)_{\lvert K\rvert} \leq O_K(\lvert R_i\rvert^{\lvert K\rvert-1}), \end{align*} hence we get \begin{align*} \lim_{i\to\infty} \tind(K,R_i) = \lim_{i\to\infty} \tind(K,\cN^i), \end{align*} that is, $R = (R_i)_{i\in\NN}$ is convergent. \medskip Consider now the case when $R$ is conservative. Then the same argument used in~\eqref{eq:L1bound} gives \begin{align*} \sum_{P\in\cL} \nu^V(\cN^i_P\symdiff\cN^N_P) & \leq \sum_{P\in\cL} \binom{k(P)}{2}\cdot\prod_{\ell=0}^i \lvert V_\ell\rvert^{-1} \xrightarrow{i\to\infty} 0, \end{align*} so item~\ref{prop:recblowup:conservative} follows.
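Let us also record, for the reader's convenience, the cylinder-measure computation used in both estimates above (assuming, as the notation of Definition~\ref{def:graphrecursiveblowup} suggests, that $\nu^V$ is the uniform product measure on $\prod_{\ell\in\NN} V_\ell$): each cylinder $K_{\sigma,V}$ with $\sigma\in\prod_{\ell=0}^i V_\ell$ has measure $\nu^V(K_{\sigma,V}) = \prod_{\ell=0}^i \lvert V_\ell\rvert^{-1}$, hence \begin{align*} \sum_{\sigma\in\prod_{\ell=0}^i V_\ell} \nu^V(K_{\sigma,V})^2 & = \prod_{\ell=0}^i \lvert V_\ell\rvert\cdot\prod_{\ell=0}^i \lvert V_\ell\rvert^{-2} = \prod_{\ell=0}^i \lvert V_\ell\rvert^{-1} \xrightarrow{i\to\infty} 0, \end{align*} where the convergence follows since $\lvert V_\ell\rvert = \lvert N_\ell\rvert\geq 2$ for every $\ell\in\NN$.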
\medskip To prove item~\ref{prop:recblowup:nonconservative}, note that~\eqref{eq:L1bound} implies that for each $P\in\cL$, the sequence of indicator functions $(\One_{\cN^i_P})_{i\in\NN}$ is convergent in $L^1(\cE_{k(P)}(\Omega^V))$, so let $f_P$ be their $L^1$-limit. Since $f_P$ is also the a.e.\ limit of a subsequence of $(\One_{\cN^i_P})_{i\in\NN}$, it must be a.e.~$\{0,1\}$-valued, so there exists $\cH_P$ such that $f_P = \One_{\cH_P}$ a.e. Finally, $L^1$-convergence implies that $\lim_{i\to\infty}\tind(K,\cN^i) = \tind(K,\cH)$. We claim that for every $x\in\cN^N_P$, there exists $i_0\in\NN$ such that $x\in\cN^i_P$ for every $i\geq i_0$. Indeed, if $x\in\cN^N_P$, then there exists $i_0\in\NN$ such that $R^{(N_0,\ldots,N_{i_0})}\vDash P(t_{i_0}^P(x))$. The definition of the conservative recursive blow-ups $R^{(N_0,\ldots,N_i)}$ implies that $R^{(N_0,\ldots,N_i)}\vDash P(t_i^P(x))$ for every $i\geq i_0$. From the minimality of the conservative recursive blow-ups, we get $R_i\vDash P(t_i^P(x))$, hence $x\in\cN^i_P$ for every $i\geq i_0$. Since a subsequence of $\One_{\cN^i_P}$ converges a.e.\ to $\One_{\cH_P}$, we conclude that $\cN^N_P \subseteq\cH_P$ a.e. By a symmetric argument, it follows that for every $x\in\cN^{\overline{N}}_P$, there exists $i_0\in\NN$ such that $x\in\cE_{k(P)}(\Omega^V)\setminus\cN^i_P$ for every $i\geq i_0$, from which we conclude that $\cN^{\overline{N}}_P\subseteq\cE_{k(P)}(\Omega^V)\setminus\cH_P$ a.e.\ and thus $\cH_P\subseteq\cE_{k(P)}(\Omega^V)\setminus\cN^{\overline{N}}_P$ a.e. \medskip Let us now show item~\ref{prop:recblowup:P}. Fix $M\in P(N)$ and let us show that $M\in P(\phi_R)$. By Lemma~\ref{lem:PWRset}, it is sufficient to show that for every positive measure $A\subseteq\Omega^V$ and every measure-isomorphism $F$ modulo $0$ from $\Omega^V_A$ to $\Omega^V$, we have $M\in Q(\cH\rest_A^F)$. Let then $\epsilon > 0$ be such that $\epsilon < 1/\lvert M\rvert$. By Lemma~\ref{lem:KsigmaV}, there exists $t_0\in\NN$ such that for every $t\geq t_0$, there exists $\sigma\in\prod_{\ell=0}^{t-1} V_\ell$ such that $\nu^V(A\cap K_{\sigma,V})\geq(1-\epsilon)\cdot\nu^V(K_{\sigma,V})$. Since $M\in P(N)$, there exists $t\geq t_0$ such that $M\cong N_t$. Since $\{K_{(\sigma,u),V} \mid u\in V_t\}$ partitions $K_{\sigma,V}$ into $\lvert V_t\rvert = \lvert M\rvert$ parts of equal measure, it follows that for every $u\in V_t$, we have \begin{align*} \nu^V(A\cap K_{(\sigma,u),V}) & \geq \left( 1 - \epsilon - \frac{\lvert M\rvert - 1}{\lvert M\rvert} \right)\cdot\nu^V(K_{\sigma,V}) > 0. \end{align*} Note now that if $x\in\cE_{V_t}(\Omega^V)$ is such that $x_{\{u\}}\in A\cap K_{(\sigma,u),V}$ for every $u\in V_t$, then $x\in\Tind(N_t,\cH\rest_A^F)$. Thus $\tind(N_t,\cH\rest_A^F) > 0$, hence $M=N_t\in Q(\cH\rest_A^F)$, as desired. \medskip It remains to show item~\ref{prop:recblowup:zeroproduct}. Suppose not, that is, suppose that there exist some positive measure $A\subseteq\Omega^V$ and some measure-isomorphism $F$ modulo $0$ from $\Omega^V_A$ to $\Omega^V$ such that for every $i\in\NN$, we have $\tind(N_i,\cH\rest_A^F) = 0$. Let $n\in\NN$ be large enough so that $\prod_{i=0}^{n-1} (1-1/\lvert N_i\rvert) < \nu^V(A)$ and let \begin{align*} \Sigma & \df \left\{\sigma\in\prod_{i=0}^{n-1} V_i \;\middle\vert\; \nu^V(A\cap K_{\sigma,V}) > 0 \right\}. \end{align*} We claim that for every $m\in\{0,\ldots,n-1\}$ and every $\tau\in\prod_{i=0}^{m-1} V_i$, there exists $u_\tau\in V_m$ such that $(\tau,u_\tau)$ is not a prefix of any element of $\Sigma$.
Suppose not, that is, suppose that there exist $m\in\{0,\ldots,n-1\}$ and $\tau\in\prod_{i=0}^{m-1} V_i$ such that for every $u\in V_m$, there exists some $\sigma^u\in\Sigma$ such that $(\tau,u)$ is a prefix of $\sigma^u$. But then the set of $x\in\cE_{V_m}(\Omega^V)$ such that $x_{\{u\}}\in A\cap K_{\sigma^u,V}$ for every $u\in V_m$ is a positive measure set that is contained in $\Tind(N_m,\cH\rest_A^F)$, contradicting the fact that $\tind(N_m,\cH\rest_A^F) = 0$. Thus the claim is proved. Let now $\Sigma^*$ be the set of $\sigma\in\prod_{i=0}^{n-1} V_i$ such that for every $m\in\{0,\ldots,n-1\}$, we have $u_{\sigma\rest_{\{0,\ldots,m-1\}}}\neq\sigma_m$. Our last claim shows that $\Sigma\subseteq\Sigma^*$. Now it is easy to see that \begin{align*} \nu^V(A) & = \sum_{\sigma\in\Sigma} \nu^V(A\cap K_{\sigma,V}) \leq \sum_{\sigma\in\Sigma^*} \nu^V(K_{\sigma,V}) = \prod_{i=0}^{n-1}\left(1 - \frac{1}{\lvert N_i\rvert}\right) < \nu^V(A), \end{align*} a contradiction. Thus, item~\ref{prop:recblowup:zeroproduct} is proved. \end{proof} \begin{theorem}\label{thm:persistence} The following are equivalent for a family $\cF$ of finite $\cL$-structures (up to isomorphism) containing at least one structure of size at least $2$. \begin{enumerate} \item The family $\cF$ is strongly persistent \label{thm:persistence:stronglypersistent} \item The family $\cF$ is weakly closed under substitutions and closed under substructures \label{thm:persistence:closed} \end{enumerate} Furthermore, if all predicate symbols of $\cL$ have arity at most $2$, then the above are also equivalent to: \begin{enumerate}[resume] \item The family $\cF$ is persistent \label{thm:persistence:persistent} \end{enumerate} \end{theorem} \begin{proof} The implication~\ref{thm:persistence:stronglypersistent}$\implies$\ref{thm:persistence:closed} follows from Lemma~\ref{lem:Pphi}\ref{lem:Pphi:WR} as $\cF = P(\phi)$ for some \emph{weakly random} $\phi\in\HomT{T_\cL}$. \medskip For the implication~\ref{thm:persistence:closed}$\implies$\ref{thm:persistence:stronglypersistent}, let $\cF'$ be the set of elements of $\cF$ of size at least $2$ and let $N = (N_i)_{i\in\NN}$ be an enumeration of all elements of $\cF'$ that repeats each element of $\cF'$ infinitely often. Since $\cF$ is weakly closed under substitutions and closed under substructures, by Remark~\ref{rmk:substitutionunary}, it follows that $\cF = \cF'\cup\{K_0,F_1\}$ for some $\cL$-structure $F_1$ of size $1$ (and where $K_0$ is the trivial $\cL$-structure of size $0$). By Lemma~\ref{lem:comprecblowup}, there exists a compatible sequence $R=(R_i)_{i\in\NN}$ of recursive blow-ups relative to $N$ with $R_i\in\cF$ for every $i\in\NN$ and by Proposition~\ref{prop:recblowup}\ref{prop:recblowup:P}, we know that $R$ converges to some $\phi_R\in\HomT{T_\cL}$ such that $\cF'=P(N)\subseteq P(\phi_R)$ and since $P(\phi_R)$ is closed under substructures (see Lemma~\ref{lem:Pphi}) and $\cF = \cF'\cup\{K_0,F_1\}$, we must have $\cF\subseteq P(\phi_R)$. On the other hand, since $R_i\in\cF$, it follows that $P(\phi_R)\subseteq Q(\phi_R)\subseteq\cF$, hence $\cF = Q(\phi_R) = P(\phi_R)$ as desired. \medskip If all predicate symbols of $\cL$ have arity at most $2$, then implication~\ref{thm:persistence:persistent}$\implies$\ref{thm:persistence:closed} follows from Lemma~\ref{lem:Pphi}\ref{lem:Pphi:arity2} as $\cF = P(\phi)$ for some $\phi\in\HomT{T_\cL}$ and the implication~\ref{thm:persistence:stronglypersistent}$\implies$\ref{thm:persistence:persistent} is obvious.
\end{proof} Again, the assumption of arity at most $2$ is crucial for the inclusion of item~\ref{thm:persistence:persistent} in the equivalence of Theorem~\ref{thm:persistence} as illustrated by Example~\ref{ex:Pphinotclosedundersubst}. We conclude this section by observing operations that preserve the notions discussed so far. The next proposition shows naturality of the operators $Q$ and $P$ and of the weak randomness property in the sense that the operators $P$ and $Q$ commute with open interpretations and weak randomness is preserved by open interpretations. \begin{proposition}\label{prop:naturality} Let $I\colon T_1\leadsto T_2$ be an open interpretation. The following hold for $\phi\in\HomT{T_2}$. \begin{enumerate} \item We have $Q(\phi^I) = I(Q(\phi))$ \label{prop:naturality:Q} \item We have $P(\phi^I) = I(P(\phi))$ \label{prop:naturality:P} \item If $\phi$ is weakly random, then so is $\phi^I$ \label{prop:naturality:WR} \end{enumerate} \end{proposition} \begin{proof} Item~\ref{prop:naturality:Q} follows directly from the definition of $\phi^I$, see~\eqref{eq:phiI}. \medskip Item~\ref{prop:naturality:P} follows directly from item~\ref{prop:naturality:Q} and the fact that if $\psi$ is a sub-object of $\phi$, then $\psi^I$ is a sub-object of $\phi^I$ and conversely, every sub-object of $\phi^I$ is of the form $\psi^I$ for some sub-object $\psi$ of $\phi$. \medskip Item~\ref{prop:naturality:WR} follows trivially from items~\ref{prop:naturality:Q} and~\ref{prop:naturality:P}. \end{proof} Before we proceed, we recall the notion of couplings and independent couplings of limits from~\cite[Definitions~2.3, 2.4 and~2.5]{CR20b}, which played a key role in the study of the natural quasirandomness properties $\UCouple[\ell]$ and $\UInduce[\ell]$ in that work. \begin{definition}\label{def:couplings} Given canonical theories $T_1$ and $T_2$ in finite relational languages $\cL_1$ and $\cL_2$, respectively, the \emph{disjoint union} $T_1\cup T_2$ is the canonical theory in the disjoint union language $\cL_1\disjcup\cL_2$ whose axioms are those of $T_1$ (about predicate symbols in $\cL_1$) and those of $T_2$ (about predicate symbols in $\cL_2$), that is, the models of $T_1\cup T_2$ correspond to a model of $T_1$ and a model of $T_2$ \emph{on the same vertex set}. A \emph{coupling} of $\phi_1\in\HomT{T_1}$ and $\phi_2\in\HomT{T_2}$ is a limit $\psi\in\HomT{T_1\cup T_2}$ such that $\phi_i = \psi^{I_i}$ for every $i\in[2]$, where $I_i\colon T_i\leadsto T_1\cup T_2$ is the \emph{structure-erasing interpretation} that acts identically on predicate symbols of $T_i$. The \emph{independent coupling} of $\phi_1\in\HomT{T_1}$ and $\phi_2\in\HomT{T_2}$ is the limit $\phi_1\otimes\phi_2\in\HomT{T_1\cup T_2}$ given by \begin{align*} (\phi_1\otimes\phi_2)(M) & \df \frac{\lvert\Aut(M_1)\rvert\cdot\lvert\Aut(M_2)\rvert}{\lvert M\rvert!\cdot\lvert\Aut(M)\rvert} \cdot\phi_1(M_1)\cdot\phi_2(M_2), \end{align*} where $M_i\df I_i(M)$. Alternatively, if $\cN^i$ ($i\in[2]$) is a $T_i$-on over $\Omega_i$ with $\phi_{\cN^i}=\phi_i$, then we have $\phi_1\otimes\phi_2 = \phi_{\cN^1\otimes\cN^2}$ for the $(T_1\cup T_2)$-on $\cN^1\otimes\cN^2$ over the product space $\Omega_1\otimes\Omega_2$ given by \begin{align*} (\cN^1\otimes\cN^2)_P & \df \{x\in\cE_{k(P)}(\Omega_1\otimes\Omega_2) \mid \pi_{i,k(P)}(x)\in\cN^i_P\} \end{align*} whenever $P\in\cL_i$ ($i\in[2]$), where $\pi_{i,k(P)}\colon\cE_{k(P)}(\Omega_1\otimes\Omega_2)\to\cE_{k(P)}(\Omega_i)$ is the natural projection. 
\end{definition} The next proposition says that weak randomness is preserved under independent couplings. \begin{proposition}\label{prop:indepcoup} If $\phi_1\in\HomT{T_1}$ and $\phi_2\in\HomT{T_2}$ are weakly random, then so is their independent coupling $\phi_1\otimes\phi_2$. \end{proposition} \begin{proof} Let $\cN^i$ be a $T_i$-on over $\Omega_i=(X_i,\cA_i,\mu_i)$ such that $\phi_i=\phi_{\cN^i}$ and let $\Omega\df\Omega_1\otimes\Omega_2$. It is clear from the definition of $\phi_1\otimes\phi_2$ that for every $M\in\cM[T_1\cup T_2]$, we have $M\in Q(\phi_1\otimes\phi_2)$ if and only if $I_1(M)\in Q(\phi_1)$ and $I_2(M)\in Q(\phi_2)$, where $I_i\colon T_i\leadsto T_1\cup T_2$ ($i\in[2]$) is the structure-erasing interpretation. By Lemma~\ref{lem:PWRset}\ref{lem:PWRset:P}, to show that $\phi_1\otimes\phi_2$ is weakly random, it is sufficient to show that for every positive measure $A\subseteq\Omega$ and every measure-isomorphism $F$ modulo $0$ from $\Omega_A$ to $\Omega$, we have $Q(\phi_1\otimes\phi_2) = Q((\cN^1\otimes\cN^2)\rest_A^F)$. Let $M\in Q(\phi_1\otimes\phi_2)$ and let us show that $M\in Q((\cN^1\otimes\cN^2)\rest_A^F)$. For each $i\in[2]$, let $M_i\df I_i(M)$ and let \begin{align*} B & \df \{(x,y)\in\cE_{V(M)}(\Omega_1)\times X_2 \mid x\in\Tind(M_1,\cN^1) \land \forall v\in V(M), (x_{\{v\}},y)\in A\}. \end{align*} Our objective is to show that $(\mu_1\otimes\mu_2)(B) > 0$. To do so, for each $y\in X_2$, let \begin{align*} A(y) & \df \{x\in X_1 \mid (x,y)\in A\} \end{align*} and note that Fubini's Theorem implies that the set \begin{align*} \widetilde{X}_2 & \df \{y\in X_2 \mid \mu_1(A(y)) > 0\} \end{align*} has positive $\mu_2$-measure. Since $\phi_1$ is weakly random, for every $y\in\widetilde{X}_2$ and every measure isomorphism $\widetilde{F}_y$ modulo $0$ from $(\Omega_1)_{A(y)}$ to $\Omega_1$, we have $\tind(M_1,\cN^1\rest_{A(y)}^{\widetilde{F}_y}) > 0$, thus Fubini's Theorem gives \begin{align*} (\mu_1\otimes\mu_2)(B) & \geq \int_{\widetilde{X}_2} \tind(M_1,\cN^1\rest_{A(y)}^{\widetilde{F}_y})\cdot\mu_1(A(y))^{\lvert M\rvert}\ d\mu_2(y) > 0. \end{align*} For every $x\in\Tind(M_1,\cN^1)\subseteq\cE_{V(M)}(\Omega_1)$, define the set \begin{align*} B(x) & \df \{y\in X_2 \mid (x,y)\in B\} = \{y\in X_2 \mid \forall v\in V(M), (x_{\{v\}},y)\in A\} \end{align*} and note that Fubini's Theorem again implies that the set \begin{align*} \widetilde{\Tind}(M_1,\cN^1) & \df \{x\in\Tind(M_1,\cN^1) \mid \mu_2(B(x)) > 0\} \end{align*} has positive $\mu_1$-measure. Since $\phi_2$ is weakly random, for every $x\in\widetilde{\Tind}(M_1,\cN^1)$ and every measure isomorphism $\widetilde{G}_x$ modulo $0$ from $(\Omega_2)_{B(x)}$ to $\Omega_2$, we have $\tind(M_2,\cN^2\rest_{B(x)}^{\widetilde{G}_x}) > 0$, thus Fubini's Theorem gives \begin{align*} \tind(M,(\cN^1\otimes\cN^2)\rest_A^F) & \geq \int_{\widetilde{\Tind}(M_1,\cN^1)} \tind(M_2,\cN^2\rest_{B(x)}^{\widetilde{G}_x})\cdot\mu_2(B(x))^{\lvert M\rvert} \ d\mu_1(x) > 0, \end{align*} concluding the proof. \end{proof} \begin{remark} As we mentioned before, weak randomness can be seen as a weakening of the natural quasirandomness property $\UInduce[1]$ of~\cite{CR20b}. 
Since $\UInduce[1]$ (and more generally, $\UInduce[\ell]$) is not preserved under independent couplings, one can consider the class $\UInduce_\otimes[\ell]$ that is the closure of $\UInduce[\ell]$ under independent couplings and open interpretations and in~\cite[\S10]{CR20b}, it was asked if any of these classes yields a meaningful notion of randomness or if they are already ``too large''. It was already noted in~\cite{CR20b} that the quasirandom permuton (see Proposition~\ref{prop:QRpermuton}) is in $\UInduce_\otimes[\ell]$ for every $\ell\in\NN_+$ and that even the largest class $\UInduce_\otimes[1]$ among the $\UInduce_\otimes[\ell]$ does not contain all limits. Since $\UInduce[1]$ implies weak randomness, from Propositions~\ref{prop:naturality}\ref{prop:naturality:WR} and~\ref{prop:indepcoup} it follows that every element of $\UInduce_\otimes[1]$ is weakly random; this further justifies the adjective ``weak'' in weak randomness: it is a quasirandomness notion weaker than the weakening $\UInduce_\otimes[1]$ of $\UInduce[1]$ that is still meaningful. Let us point out that there are weakly random limits that are not in $\UInduce_\otimes[1]$: namely, one can show that if $W$ is a universal weakly random \emph{$\{0,1\}$-valued} graphon of $\TGraph$ (e.g., $\phi_W=\phi_G^*$ as in Lemma~\ref{lem:PphiGQphiG} for an enumeration $G = (G_m)_{m\in\NN}$ of all finite graphs of size at least $2$), then $\phi_W$ is weakly random but is not in $\UInduce_\otimes[1]$. However, since the length of the proof outweighs its enlightenment value, we omit it. \end{remark} Recall from Definition~\ref{def:AEHP} that a trivial limit $\phi\in\HomT{T}$ is any limit of the form $\phi=\phi_\cN$ for some theon $\cN$ whose peons all have measure in $\{0,1\}$. For general couplings, the next proposition says that the coupling of a trivial limit with a weakly random limit is weakly random. \begin{proposition}\label{prop:coup} If $\psi$ is a coupling of a trivial $\phi_1\in\HomT{T_1}$ and a weakly random $\phi_2\in\HomT{T_2}$, then $\psi$ is weakly random. \end{proposition} \begin{proof} Let $\cL_1$ and $\cL_2$ be the languages of $T_1$ and $T_2$, respectively. Since $\phi_1$ is trivial, it follows that $\cL_1$ can be partitioned into $\cL_1=\cL_1^0\cup\cL_1^1$ so that for every $M_1\in Q(\phi_1)$ and every $P\in\cL_1$, we have \begin{align*} P^{M_1} & = \begin{dcases*} \varnothing, & if $P\in\cL_1^0$,\\ (V(M_1))_{k(P)}, & if $P\in\cL_1^1$. \end{dcases*} \end{align*} This implies that if $\xi$ is a coupling of some $\zeta\in\HomT{T_2}$ with $\phi_1$, then \begin{align}\label{eq:Qxi} Q(\xi) & = \{\widehat{M}_2 \mid M_2\in Q(\zeta)\}, \end{align} where $\widehat{M}_2\in\cM_{V(M_2)}[T_1\cup T_2]$ is given by \begin{align*} P^{\widehat{M}_2} & \df \begin{dcases*} P^{M_2}, & if $P\in\cL_2$,\\ \varnothing, & if $P\in\cL_1^0$,\\ (V(M_2))_{k(P)}, & if $P\in\cL_1^1$. \end{dcases*} \end{align*} Now since $\phi_1$ is trivial, $\phi_1$ is the only sub-object of $\phi_1$, which means that every sub-object $\psi'$ of $\psi$ is a coupling of $\phi_1$ with some sub-object $\phi_2'$ of $\phi_2$. Since $\phi_2$ is weakly random, we have $Q(\phi_2')= Q(\phi_2)$, hence $Q(\psi') = Q(\psi)$ follows since the right-hand side of~\eqref{eq:Qxi} is the same for $(\xi,\zeta)=(\psi,\phi_2)$ and $(\xi,\zeta)=(\psi',\phi_2')$. Therefore $\psi$ is weakly random.
\end{proof} As a simple application of Propositions~\ref{prop:naturality} and~\ref{prop:indepcoup} above, let us prove Proposition~\ref{prop:agreementsofQRpermuton} that says that the graphon of agreements of the quasirandom permuton (see Figure~\ref{fig:agreementsgraphon}) is a universal weakly random limit of $\TPermGraph$ by showing that the quasirandom permuton $\psi_{\QR}\in\HomT{\TPerm}$ has the same property for $\TPerm$. Recall that the quasirandom permuton is given by $\psi_{\QR}\df\phi_{\cN^{\QR}}$, where $\cN^{\QR}$ is the $\TPerm$-on over $[0,1]^2$ given by \begin{align*} \cN^{\QR}_{\prec_i} & \df \{x\in\cE_2([0,1]^2) \mid \pi_i(x_{\{1\}}) < \pi_i(x_{\{2\}})\} \qquad (i\in[2]), \end{align*} where $\pi_i\colon[0,1]^2\to[0,1]$ is the projection onto the $i$th coordinate. \begin{proposition}\label{prop:QRpermuton} The quasirandom permuton $\psi_{\QR}$ is a universal weakly random limit of $\TPerm$. \end{proposition} \begin{proof} It is straightforward to check that $Q(\psi_{\QR}) = \cM[\TPerm]$. On the other hand, $\psi_{\QR}$ is the independent coupling $\psi\otimes\psi$ of the unique limit $\psi\in\HomT{\TLinOrder}$ of the theory of (strict) linear orders with itself. Since $\psi$ is obviously weakly random (as $\TLinOrder$ is finitely categorical), by Proposition~\ref{prop:indepcoup}, it follows that $\psi_{\QR}$ is weakly random. \end{proof} We can now derive Proposition~\ref{prop:agreementsofQRpermuton}, which says that the graphon of agreements of the quasirandom permuton is a universal weakly random limit of $\TPermGraph$, as an easy consequence. \begin{proofof}{Proposition~\ref{prop:agreementsofQRpermuton}} The graphon of agreements of the quasirandom permuton represents the limit $\phi\df\psi_{\QR}^I$ for the open interpretation $I\colon\TGraph\leadsto\TPerm$ given by \begin{align*} I(E)(x,y) & \df x\neq y \land (x\prec_1 y \tot x\prec_2 y), \end{align*} so $\phi$ is weakly random by Propositions~\ref{prop:naturality}\ref{prop:naturality:WR} and~\ref{prop:QRpermuton}. Finally, Proposition~\ref{prop:naturality}\ref{prop:naturality:Q} implies $Q(\phi) = I(Q(\psi_{\QR})) = I(\cM[\TPerm]) = \cM[\TPermGraph]$. \end{proofof} \begin{remark}\label{rmk:TPermWR} It is easy to see that the same permutations used in the proof of Proposition~\ref{prop:agreementsofpermutationtheory} can be used to show that $\cM[\TPerm]$ is closed under substitutions but not primally almost finite, hence $\TPerm\notin\WR$. However, let us point out that had we proved only the result for $\TPerm$, this would not have immediately implied Proposition~\ref{prop:agreementsofpermutationtheory} as primality is not necessarily preserved under open interpretations (even though closure under substitutions is). \end{remark} \section{What about weak randomness in general?} \label{sec:WR:univ} In this brief section we provide a partial generalization of Theorem~\ref{thm:graphs:WR} of Section~\ref{sec:WR:graph} to universal theories in finite relational languages. For the easier direction, we will only be able to generalize Lemma~\ref{lem:graphs:primallyalmostfinite->WR} when all arities are at most $2$ (Proposition~\ref{prop:monprimalmostfinite->WR}) and even though the harder direction will generalize directly in Proposition~\ref{prop:WR->primalmostfinite} below, this naive generalization is essentially empty when all arities are at least $3$, as in this case there are only finitely many prime structures (see Remark~\ref{rmk:prime3}).
It is not clear at this point what form a characterization of $\WR$ should take in the presence of higher-arity predicates. \begin{definition}\label{def:WRgeneral} We say that a canonical theory $T$ in a finite relational language has the \emph{weakly random \Erdos--Hajnal property} (abbreviated $T\in\WR$) if every $\phi\in\HomT{T}$ has a weakly random sub-object. \end{definition} \begin{proposition}\label{prop:monprimalmostfinite->WR} Let $\cL$ be a finite relational language whose predicate symbols have arity at most $2$ and let $T$ be a canonical theory in $\cL$. If $\cM[T]$ is monochromatically primally almost finite, then $T\in\WR$. \end{proposition} \begin{proof} We prove this by the contrapositive. Suppose $T\notin\WR$ and let us show that the set $\cP$ of monochromatic prime models of $T$ is not almost finite. By Lemma~\ref{lem:almostfinite}, it is sufficient to present a sequence $(F_n)_{n\in\NN}$ of finite monochromatic prime models of $T$ such that $F_n$ is not a substructure of $F_m$ whenever $n < m$. Since $T\notin\WR$, there must exist a limit $\phi\in\HomT{T}$ that does not contain any weakly random sub-object. We now construct a sequence $(\phi_n)_{n\in\NN}$ of sub-objects of $\phi$ and a sequence $(F_n)_{n\in\NN}$ of finite prime models of $T$ satisfying the following. \begin{enumerate} \item For every $n\in\NN$, $\phi_{n+1}$ is a sub-object of $\phi_n$. \item For every $n\in\NN$, $F_n\in Q(\phi_n)\setminus Q(\phi_{n+1})$. \end{enumerate} We construct these sequences inductively as follows. \begin{enumerate}[label={\arabic*.}] \item We claim that there exists a sub-object $\phi_0$ of $\phi$ such that there exists $M_1\in\cM_1[T]$ with $\phi_0(M_1) = 1$ (and thus all $M\in\cM_1[T]\setminus\{M_1\}$ have $\phi_0(M)=0$). Indeed, if $M_1\in\cM_1[T]$ is such that $\phi(M_1) > 0$ and $\cN$ is a Euclidean structure over $\Omega$ with $\phi_\cN=\phi$, then $A\df\Tind(M_1,\cN)$ is a positive measure set, so for any measure-isomorphism $F$ modulo $0$ from $\Omega_A$ to $\Omega$, the sub-object $\phi_0\df\phi_{\cN\rest_A^F}$ satisfies the desired property. \item Given $\phi_n\in\HomT{T}$, since $\phi_n$ is a sub-object of $\phi$, we know that $\phi_n$ is not weakly random, so there exists $N_n\in Q(\phi_n)\setminus P(\phi_n)$. Let $\cP_n$ be the set of substructures of $N_n$ that are prime. By Lemma~\ref{lem:Pphi}\ref{lem:Pphi:arity2}, we know that $P(\phi_n)$ is \emph{strongly} closed under substitutions and since $N_n\in S(\cP_n)$, there must exist $F_n\in\cP_n\setminus P(\phi_n)$ and since $Q(\phi_n)$ is closed under substructures, we get $F_n\in Q(\phi_n)\setminus P(\phi_n)$. From the definition of $P(\phi_n)$, it then follows that there exists a sub-object $\phi_{n+1}$ of $\phi_n$ (hence also of $\phi$) such that $F_n\in Q(\phi_n)\setminus Q(\phi_{n+1})$. \end{enumerate} Let now $n,m\in\NN$ be such that $n < m$. By induction, we know that $\phi_m$ is a sub-object of $\phi_{n+1}$, so $Q(\phi_m)\subseteq Q(\phi_{n+1})$, which in turn implies that $F_n\in Q(\phi_n)\setminus Q(\phi_m)$. Since $Q(\phi_m)$ is closed under substructures and $F_m\in Q(\phi_m)$, it follows that $F_n$ is not a substructure of $F_m$. Finally, since all $\phi_n$ are also sub-objects of $\phi_0$, we must have $Q(\phi_n)\cap\cM_1[T]\subseteq Q(\phi_0)\cap\cM_1[T] = \{M_1\}$. This implies that for every \emph{unary} predicate symbol $P\in\cL$ and every $n\in\NN$, we have $M_1\vDash\forall x, P(x)$ if and only if $F_n\vDash\forall x, P(x)$ (otherwise, we would have $Q(\phi_n)\cap\cM_1[T]\neq\{M_1\}$).
Thus the $F_n$ are monochromatic. \end{proof} \begin{lemma}\label{lem:noweaklyrandomsubobject} Let $N = (N_i)_{i\in\NN}$ be a sequence of prime $\cL$-structures of size at least $2$ such that for each $i\in\NN$, there are finitely many $j\in\NN$ such that $N_i$ is a substructure of $N_j$, let $R = (R_i)_{i\in\NN}$ be a compatible sequence of recursive blow-ups relative to $N$ and let $\phi_R\in\HomT{T_\cL}$ be the limit of $R$. If $\prod_{i\in\NN} (1 - 1/\lvert N_i\rvert) = 0$, then $\phi_R$ does not have any weakly random sub-object. \end{lemma} \begin{proof} Let $V$ and $\cH$ be as in Proposition~\ref{prop:recblowup}. Suppose toward a contradiction that $\phi_R$ has a weakly random sub-object. By Lemma~\ref{lem:PWRset}, there exists a positive measure set $A\subseteq\Omega^V$ and a measure-isomorphism $F$ modulo $0$ from $\Omega^V_A$ to $\Omega^V$ such that $\cH\rest_A^F$ is weakly random. By Proposition~\ref{prop:recblowup}\ref{prop:recblowup:zeroproduct}, there exists $i_0\in\NN$ such that $\tind(N_{i_0},\cH\rest_A^F) > 0$. Let $j_0\df\max\{j \mid N_{i_0}\text{ is a substructure of } N_j\} < \infty$. Since $\{K_{\sigma,V} \mid \sigma\in\prod_{\ell=0}^{j_0} V_\ell\}$ partitions $\Omega^V$, there must exist $\sigma\in\prod_{\ell=0}^{j_0} V_\ell$ such that $\nu^V(A\cap K_{\sigma,V}) > 0$. From the definition of $\cH$, it follows that for every measure-isomorphism $\widetilde{F}$ modulo $0$ from $\Omega^V_{K_{\sigma,V}}$ to $\Omega^V$, we have $\phi_{\cH\rest_{K_{\sigma,V}}^{\widetilde{F}}} = \phi_{R'}$ for the sequence $R' = (R'_i)_{i\in\NN}$ given by $R'_i\df R_{j_0 + 1 + i}\rest_{U_i}$, where \begin{align*} U_i & \df \left\{\tau\in\prod_{\ell=0}^{j_0 + 1 + i} V_\ell \;\middle\vert\; \tau\rest_{\{0,\ldots,j_0\}} = \sigma \right\}. \end{align*} Note also that $R'$ is a compatible sequence of recursive blow-ups relative to the shifted sequence $N' = (N'_i)_{i\in\NN}$ given by $N'_i\df N_{j_0+1+i}$. We claim now that $\tind(N_{i_0},\cH\rest_{K_{\sigma,V}}^{\widetilde{F}}) = 0$. Indeed, since $N_{i_0}$ is prime, for this density to be positive, $N_{i_0}$ must be a substructure of infinitely many $R'_i$, but since $R'_i\in S(\{N_j \mid j\geq j_0+1\})$, Lemma~\ref{lem:primesubstructure} says that this can only happen if $N_{i_0}$ is a substructure of some $N_j$ with $j\geq j_0+1$, which would contradict the definition of $j_0$. Finally, this is a contradiction since $\cH\rest_A^F$ was assumed to be weakly random but $N_{i_0}\in Q(\cH\rest_A^F)\setminus P(\cH\rest_A^F)$ as $\nu^V(A\cap K_{\sigma,V}) > 0$ and $\tind(N_{i_0},\cH\rest_{K_{\sigma,V}}^{\widetilde{F}}) = 0$. \end{proof} \begin{proposition}\label{prop:WR->primalmostfinite} Let $T$ be a canonical theory such that $\cM[T]$ is weakly closed under substitutions. If $T\in\WR$, then $\cM[T]$ is primally almost finite. \end{proposition} Before we prove Proposition~\ref{prop:WR->primalmostfinite}, let us note that it is completely trivial when all arities are at least $3$, as in this case there are only finitely many prime structures by Remark~\ref{rmk:prime3}. \begin{proof} We prove this by the contrapositive. Suppose $\{N'_i \mid i\in\NN\}$ is an infinite antichain of prime models of $T$ and without loss of generality, assume every $N'_i$ has size at least $2$ (as $\cM_0[T]\cup\cM_1[T]$ is finite). For each $n\in\NN$, let $r_n\in\NN_+$ be large enough so that $(1-1/\lvert N'_n\rvert)^{r_n}\leq 1/2$ and for each $\ell\in\NN$, let $N_\ell\df N'_n$ for the unique $n\in\NN$ such that $\sum_{m=0}^{n-1} r_m\leq\ell < \sum_{m=0}^n r_m$.
Clearly, for each $\ell\in\NN$, if $n\in\NN$ is such that $N_\ell = N'_n$, then there exist exactly $r_n$ values of $t\in\NN$ such that $N_\ell$ is a substructure of $N_t$; in particular, each $N_\ell$ is a substructure of only finitely many $N_t$. Note also that \begin{align*} \prod_{\ell\in\NN}\left(1 - \frac{1}{\lvert N_\ell\rvert}\right) & = \prod_{m\in\NN}\left(1 - \frac{1}{\lvert N'_m\rvert}\right)^{r_m} \leq \prod_{m\in\NN} \frac{1}{2} = 0. \end{align*} Since $\cM[T]$ is weakly closed under substitutions, by Lemma~\ref{lem:comprecblowup}, there exists a compatible sequence $R = (R_\ell)_{\ell\in\NN}$ of recursive blow-ups relative to $N=(N_\ell)_{\ell\in\NN}$ with $R_\ell\in\cM[T]$ for every $\ell\in\NN$, and by Lemma~\ref{lem:noweaklyrandomsubobject}, the limit $\phi_R\in\HomT{T}$ of $R$ does not have any weakly random sub-object, hence $T\notin\WR$. \end{proof} Let us conclude this section by observing operations that preserve $\WR$ (at the level of theories). The next proposition shows naturality (at the level of theories) of $\WR$, that is, it is preserved by open interpretations. \begin{proposition}\label{prop:WRnatural} If $I\colon T_1\leadsto T_2$ is an open interpretation and $T_2\in\WR$, then $I(T_2)\in\WR$. \end{proposition} \begin{proof} Follows from Proposition~\ref{prop:naturality}\ref{prop:naturality:WR}, the fact that every $\phi\in\HomT{I(T_2)}$ is of the form $\phi=\psi^I$ for some $\psi\in\HomT{T_2}$, and the fact that if $\psi$ is a sub-object of $\phi$, then $\psi^I$ is a sub-object of $\phi^I$ and conversely, every sub-object of $\phi^I$ is of the form $\psi^I$ for some sub-object $\psi$ of $\phi$. \end{proof} It is easy to see that $\WR$ is not preserved under disjoint unions of theories (see Definition~\ref{def:couplings}): the theory of linear orders $\TLinOrder$ satisfies $\WR$ (as it is finitely categorical) but the theory of permutations $\TPerm=\TLinOrder\cup\TLinOrder$ does not satisfy $\WR$ (see Remark~\ref{rmk:TPermWR}). However, the next proposition says that $\WR$ at least interacts well with disjoint unions with theories with $\AEHP$ (see Definition~\ref{def:AEHP}). \begin{proposition}\label{prop:AEHPWRindepcoup} Let $T_1$ and $T_2$ be universal theories and suppose $T_1\in\AEHP$. Then the following hold. \begin{enumerate} \item If $T_2\in\AEHP$, then $T_1\cup T_2\in\AEHP$ \label{prop:AEHPWRindepcoup:AEHP} \item If $T_2\in\WR$, then $T_1\cup T_2\in\WR$ \label{prop:AEHPWRindepcoup:WR} \end{enumerate} \end{proposition} To prove this proposition, we will need the following result from~\cite{CR20b} on theons representing couplings (see Definition~\ref{def:couplings}). \begin{proposition}[\protect{\cite[Proposition~4.3]{CR20b}}]\label{prop:theoncoupling} Let $\psi\in\HomT{T_1\cup T_2}$ be a coupling of $\phi_1\in\HomT{T_1}$ and $\phi_2\in\HomT{T_2}$ and let $\cN^1$ be a $T_1$-on over $\Omega$ such that $\phi_1 = \phi_{\cN^1}$. Then there exists a $(T_1\cup T_2)$-on $\cH$ over $\Omega\otimes\Omega$ such that $\psi=\phi_\cH$ and $\cH_P = \cN^1_P\times\cE_{k(P)}(\Omega)$ for every predicate symbol $P$ in the language of $T_1$ (when we naturally identify $\cE_{k(P)}(\Omega\otimes\Omega)$ with $\cE_{k(P)}(\Omega)\times\cE_{k(P)}(\Omega)$). \end{proposition} \begin{proofof}{Proposition~\ref{prop:AEHPWRindepcoup}} Let $\psi\in\HomT{T_1\cup T_2}$ and let $I_i\colon T_i\leadsto T_1\cup T_2$ ($i\in[2]$) be the structure-erasing interpretation. Then $\psi$ is a coupling of $\phi_1\df\psi^{I_1}$ and $\phi_2\df\psi^{I_2}$. Let $\cN^1$ be a $T_1$-on over $\Omega=(X,\cA,\mu)$ such that $\phi_1=\phi_{\cN^1}$.
Since $T_1\in\AEHP$, by~\cite[Theorem~5.11]{CM22}, there exists a positive measure set $A\subseteq X$ and a measure-isomorphism $F$ modulo $0$ from $\Omega_A$ to $\Omega$ such that $\phi_{\cN^1\rest_A^F}$ is trivial. Let now $\cH$ be the $(T_1\cup T_2)$-on over $\Omega'\df\Omega\otimes\Omega$ given by Proposition~\ref{prop:theoncoupling} and let $\mu'\df\mu\otimes\mu$ be the underlying measure of $\Omega'$. Let also $A'\df A\times X$ and let $F'\df F\otimes\id_X$ be the measure-isomorphism modulo $0$ from $\Omega'_{A'}$ to $\Omega'$ that acts as $F$ on the first coordinate and acts identically on the second coordinate. Suppose $T_2\in\AEHP$. Since $I_2(\cH\rest_{A'}^{F'}) = I_2(\cH)\rest_{A'}^{F'}$ is a $T_2$-on, by~\cite[Theorem~5.11]{CM22}, there exists a positive $\mu'_{A'}$-measure set $B\subseteq X\times X$ such that $I_2(\cH)\rest_{A'}^{F'}\rest_B^{\widetilde{F}}$ is trivial for every measure-isomorphism $\widetilde{F}$ modulo $0$ from $(\Omega'_{A'})_B$ to $\Omega'_{A'}$. Set $B'\df B\cap A'$ so that $B'$ is a positive $\mu'$-measure set such that $I_2(\cH)\rest_{B'}^{\widetilde{F}\comp F'}$ is trivial. Note now that since $\cH_P = \cN^1_P\times\cE_{k(P)}(\Omega)$ for every predicate symbol $P$ in the language of $T_1$, we get $\phi_{I_1(\cH)\rest_{A'}^{F'}} = \phi_{\cN^1\rest_A^F}$, which is a trivial limit. Since $\phi_{I_1(\cH)\rest_{B'}^{\widetilde{F}\comp F'}}$ is a sub-object of $\phi_{\cN^1\rest_A^F}$, it must also be trivial. Hence, the sub-object $\phi_{\cH\rest_{B'}^{\widetilde{F}\comp F'}}$ of $\psi$ must be trivial as it is a coupling of two trivial limits $\phi_{\cH\rest_{B'}^{\widetilde{F}\comp F'}}^{I_1}=\phi_{I_1(\cH)\rest_{B'}^{\widetilde{F}\comp F'}}$ and $\phi_{\cH\rest_{B'}^{\widetilde{F}\comp F'}}^{I_2}=\phi_{I_2(\cH)\rest_{B'}^{\widetilde{F}\comp F'}}$; as $\psi$ was arbitrary, item~\ref{prop:AEHPWRindepcoup:AEHP} is proved. \medskip For item~\ref{prop:AEHPWRindepcoup:WR}, we make the same construction but taking $B\subseteq X\times X$ with positive $\mu'_{A'}$-measure such that $I_2(\cH)\rest_{A'}^{F'}\rest_B^{\widetilde{F}}$ is weakly random as guaranteed by $T_2\in\WR$. Then the sub-object $\phi_{\cH\rest_{B'}^{\widetilde{F}\comp F'}}$ of $\psi$ must be weakly random by Proposition~\ref{prop:coup} as it is a coupling of a trivial limit $\phi_{\cH\rest_{B'}^{\widetilde{F}\comp F'}}^{I_1}=\phi_{I_1(\cH)\rest_{B'}^{\widetilde{F}\comp F'}}$ with a weakly random limit $\phi_{\cH\rest_{B'}^{\widetilde{F}\comp F'}}^{I_2}=\phi_{I_2(\cH)\rest_{B'}^{\widetilde{F}\comp F'}}$, hence $T_1\cup T_2\in\WR$. \end{proofof} \section{Conclusion and open problems} \label{sec:concl} In this paper we studied the notion of weak randomness, a weakening of the quasirandomness property $\UInduce[1]$ (see~\cite{CR20b}), which requires the limit $\phi\in\HomT{T}$ to be such that for every sub-object $\psi$ of $\phi$ and every finite structure $M$, we have $\phi(M) > 0$ if and only if $\psi(M) > 0$. We characterized (strongly) persistent families of structures, i.e., those that correspond to a theory $T$ that has a universal weakly random limit (that is, a weakly random $\phi$ such that $\Th(\phi)=T$), as precisely those that are closed under substructures and weakly closed under substitutions. We also studied a weakening $\WR$ of $\AEHP$ that requires every limit of $T$ to contain a weakly random sub-object (see Definitions~\ref{def:AEHP}, \ref{def:WR} and~\ref{def:WRgeneral}). We characterized $\WR$ for theories $T$ with maximum arity at most $2$ and $\cM[T]$ closed under substitutions as precisely the set of theories $T$ that are monochromatically primally almost finite.
\medskip A very natural open problem that was not addressed in this paper is to characterize weak randomness at the level of objects, that is, to provide an equivalent property to $\phi\in\HomT{T}$ being weakly random. Toward this goal, a natural first step is to ask how different two weakly random objects $\phi$ and $\psi$ can be. A first source of difference is obviously that they can have different persistence sets $P(\phi)\neq P(\psi)$. On the other hand, if $P(\phi)=P(\psi)$, then we can attempt to measure their difference based on the sub-object partial pre-order, and it is natural to ask what the structure of the partially pre-ordered set $\Phi_\cF\df\{\phi \mid P(\phi) = Q(\phi) = \cF\}$ is for some (strongly) persistent class $\cF$. Obviously, if $\cF=\{K_n \mid n\in\NN\}$ or $\cF=\{\overline{K}_n \mid n\in\NN\}$, then the set $\Phi_\cF$ has only one element, but even for the next simplest case $\cF=S(\{K_0,K_2,\overline{K}_2\})$ of induced subgraphs of recursive blow-ups of $C_4$, the structure of the partial pre-order on $\Phi_\cF$ is not clear: does it have incomparable elements? What about infinite antichains? By Proposition~\ref{prop:recblowup}, if $G=(G_n)_{n\in\NN}$ is a sequence in which each $G_n$ is either $K_2$ or $\overline{K}_2$ and both $K_2$ and $\overline{K}_2$ occur infinitely often, then the recursive blow-up $\phi_G$ satisfies $\phi_G\in\Phi_\cF$, and we believe that changing the asymptotic proportion of edges and non-edges in $G$ should produce incomparable elements of $\Phi_\cF$. As we mentioned in the introduction, the approximate \Erdos--Hajnal property ($\AEHP$) is a variation of the usual \Erdos--Hajnal property ($\EHP$) that allows for negligible errors, but requires linear-sized homogeneous sets in the presence of convergence. Since $\WR$ is a weakening of $\AEHP$, we would like to ask the following more abstract question: what is the polynomial-sized error-free version of $\WR$ in the finite? Furthermore, since $\AEHP$ implies $\EHP$ and $\WR$ is a larger class than $\AEHP$, is it still true that $\WR$ implies $\EHP$ for graphs? Of course, this implication must hold if the \Erdos--Hajnal Conjecture is true. As mentioned in Section~\ref{sec:WR:univ} (see also Table~\ref{tab:corresp}), several of the proofs on weak randomness and the class $\WR$ do not generalize very well in the presence of predicates of arity at least $3$. It is natural to ask if we can characterize $\WR$ in these cases in the presence of some simplifying assumption that would replace the closure under substitutions used in the binary case. As briefly mentioned before, weak randomness is a weakening of the property $\UInduce[1]$ of~\cite{CR20b}. Since $\UInduce[1]$ is part of a hierarchy of quasirandomness properties $\UInduce[\ell]$, one might expect that there exists a hierarchy of weak randomness as well. In turn, it may be that our difficulty in understanding $\WR$ in arity $3$ comes from the fact that there is a wide variety of $\UInduce[1]$ limits of $3$-hypergraphs. Since $\UInduce[2]$ for $3$-hypergraphons amounts again to only (full) quasirandom $3$-hypergraphons, one might expect that the corresponding $\WR[2]$ property in arity $3$ defined from an appropriate notion of ``weak $2$-randomness'' (or more generally $\WR[\ell-1]$ in arity $\ell$) could be easier to handle.
Since the definition of $\UInduce[2]$ is considerably more technical than that of $\UInduce[1]$ and our initial attempts at a weak $2$-randomness definition did not yet yield any interesting results, we refrain from elaborating further. Finally, in the absence of closure under substitutions, it is clear that $\WR$ is no longer characterized by the primally almost finite condition: obvious counter-examples include the theories $T_{\omega\leq k}$ ($T_{\chi\leq k}$, resp.) of graphs whose clique number (chromatic number, resp.) is at most $k$, which clearly satisfy $\AEHP$ but are not closed under substitutions when $k\geq 2$ (as $K_{k+1}\in S(\{K_2\})$ is not a model of $T_{\omega\leq k}$ or $T_{\chi\leq k}$). It is possible to upgrade Lemma~\ref{lem:graphs:primallyalmostfinite->WR} and Proposition~\ref{prop:monprimalmostfinite->WR} to also cover the theories $T_{\omega\leq k}$ and $T_{\chi\leq k}$ via an interactive proof (more precisely, a two-player game in which the first player is attempting to show that some sub-object $\psi$ must have $Q(\psi)$ monochromatically primally almost finite and the second player is attempting to deceive the first player), but we leave this result to future work. \bibliographystyle{alpha}
{ "timestamp": "2022-09-20T02:21:09", "yymm": "2209", "arxiv_id": "2209.08638", "language": "en", "url": "https://arxiv.org/abs/2209.08638" }
\section{Introduction} Although light-matter interactions were central to the development of quantum field theory, it is only recently that the interactions between microwave photons and magnetic materials have been explored in detail. Indeed, it was in 2009 that Imamo\u{g}lu pointed out that strong coupling is achieved between resonant cavity photons and a spin ensemble in a coupled spin-photon system \cite{Imamoglu}. A short time later, interactions between a nanomagnet and microwave photons in a spherical resonator were investigated by Soykal and Flatt{\'e} \cite{SoykalFlattePRL,SoykalFlattePRB}. Since 2010, developments in microwave resonator technology have pushed forward our ability to explore fundamental aspects of quantum physics \cite{LiberskySM}, and have led to the rapid growth of the new field of quantum magnonics, and associated hybrid quantum technologies \cite{LQReview,HuRoadmap,BhoiKim}. In this paper, we develop a general, finite temperature, quantum field theory that may be used to study light-matter interactions, including interactions between a quantum system and an oscillator bath environment \cite{Dicke, Hopfield, DeLiberato, FV, CaldeiraLeggett, LeggettSB}. The formalism is applicable to materials having strong Ising interactions between their constituent atoms, or spins, and materials with complicated, multilevel, single-site Hamiltonians, such as the quantum Ising magnet LiHoF$_4$ \cite{Bitko, Chakraborty, MckenzieStamp}, which undergoes a ferromagnetic to paramagnetic quantum phase transition in an applied transverse field. We analyze the transverse-field Ising model (TFIM) in the presence of an applied ac magnetic field along the easy axis of the material \begin{align} \label{eq:TFIM} \mathcal{H} &= \mathcal{H}_{TFIM} + B_z \cos{(\omega t)} \sum_i J_i^z \\ \label{eq:TFIMAC} \mathcal{H}_{TFIM} &= -\frac{1}{2} \sum_{i \neq j} V_{ij} J_i^z J_j^z - B_x \sum_i J_i^x. \end{align} This simple model of a quantum material in a microwave resonator can be quantized to obtain a quantum optics model in which the spins couple to an effective photon momentum operator ($p \sim i(a^{\dagger}-a)$) \begin{align} \label{eq:Hp} \mathcal{H} = \mathcal{H}_{TFIM} - i \alpha (a^{\dagger}-a) \frac{1}{\sqrt{N}} \sum_i J_i^z. \end{align} The magnetic insulating crystal LiHoF$_4$ is often considered an archetypical quantum Ising material, albeit with a strong hyperfine interaction between each holmium spin and its nucleus, and with the dominant coupling between spins being long range dipolar interactions \cite{Bitko}. The results of our theory are applied to LiHoF$_4$, and they accommodate the low energy electronuclear modes present in the material. The coupling between light and matter depends on the atomic density of the matter. We note that the spin density of LiHoF$_4$ is more than three times that of YIG, which has been a primary focus of quantum magnonics. Coupled light and matter modes will hybridize, forming polariton modes. The theory of polaritons, so named, stems from Hopfield's work \cite{Hopfield}, although earlier work on coupled light-matter modes is present in the literature \cite{Tolpygo, Huang}. The quantum optics model, given by equation (\ref{eq:Hp}), shares similarities with the Hopfield model \cite{Hopfield}, as well as the Dicke model \cite{Dicke}, and quantum environment models such as the Caldeira-Leggett and spin boson Hamiltonians \cite{CaldeiraLeggett, LeggettSB} (see Appendix \ref{ap:Models} for more details).
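To make the structure of equation (\ref{eq:Hp}) concrete, the following minimal numerical sketch (not part of our analysis; all parameters illustrative) diagonalizes a toy version of the model in which $N$ spin-$1/2$ sites with uniform couplings $V_{ij}=V_0/N$ stand in for the multilevel Ho ions, and the photon mode is truncated to a small Fock space.
\begin{verbatim}
# Minimal sketch: exact diagonalization of a toy version of the
# spin-photon Hamiltonian above.  N spin-1/2 sites with uniform
# couplings V_ij = V0/N stand in for the multilevel ions; the photon
# mode is truncated to n_ph Fock states.  All parameters illustrative.
import numpy as np
from functools import reduce

N, n_ph = 4, 8
V0, Bx, wr, alpha = 1.0, 0.6, 1.0, 0.1

sz = np.diag([0.5, -0.5])
sx = 0.5 * np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

def site_op(op, i):
    # embed a single-site operator at site i of the N-spin chain
    ops = [I2] * N
    ops[i] = op
    return reduce(np.kron, ops)

Jz = [site_op(sz, i) for i in range(N)]
Jz_tot = sum(Jz)

# TFIM with uniform couplings:
# -(V0/2N)*(Jz_tot^2 - sum_i Jz_i^2) - Bx*sum_i Jx_i
H_spin = -0.5 * (V0 / N) * (Jz_tot @ Jz_tot - sum(J @ J for J in Jz)) \
         - Bx * sum(site_op(sx, i) for i in range(N))

a = np.diag(np.sqrt(np.arange(1, n_ph)), 1)      # photon annihilation
H_ph = wr * (a.T @ a + 0.5 * np.eye(n_ph))

# H = H_spin + H_ph - i*alpha*(a^+ - a)*Jz_tot/sqrt(N)
H = np.kron(H_spin, np.eye(n_ph)) + np.kron(np.eye(2**N), H_ph) \
    - 1j * alpha * np.kron(Jz_tot / np.sqrt(N), a.T - a)

print("lowest spin-photon levels:",
      np.round(np.linalg.eigvalsh(H)[:5], 4))
\end{verbatim}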
A primary difference between the model given by equation (\ref{eq:Hp}) and the models introduced by Dicke and Hopfield is that in equation (\ref{eq:Hp}) we are considering a fixed system of spins, and no diamagnetic term is present in the Hamiltonian. A system composed of mobile spins or atoms, as in the Dicke and Hopfield models, will exhibit a diamagnetic response. As discussed below, the diamagnetic term in light-matter Hamiltonians has important consequences, so equation (\ref{eq:Hp}) should be considered a distinct model. The diamagnetic term has been the source of considerable controversy. In the absence of the diamagnetic term, as one increases the light-matter coupling strength, a superradiant quantum phase transition is expected to occur, in which photons spontaneously appear in the ground state of the system \cite{HeppLieb, Wang}. The presence of the diamagnetic term forestalls this transition \cite{Rzazewski}. Furthermore, with the diamagnetic term present, it was shown by De Liberato that as one increases the light-matter coupling, the light and matter modes will in fact decouple \cite{DeLiberato}. The source of this light-matter decoupling is the diamagnetic response, which localizes the photon modes away from the matter and shifts the resonant frequency of the light mode. As the coupling strength is increased, one finds the polaritonic modes have a predominantly light or matter character. The spin-photon Hamiltonian given in equation (\ref{eq:Hp}) leads to an effective magnon-photon Hamiltonian in which the diamagnetic term is absent. As the quantum Ising material is tuned through its critical point, the spectral weight of the soft mode, and hence the magnon-photon coupling strength, diverges. As no diamagnetic term is present, this ought to lead to a superradiant quantum phase transition. However, we show that including the effects of dissipation and decoherence of the magnon modes leads to a very different outcome. When environmental degrees of freedom are taken into account, the resulting dissipation and decoherence couple to the divergence of the soft mode, providing a new means to prevent superradiance. We substantiate this theoretical prediction in experiments on a model quantum Ising magnet. The remainder of this paper is structured as follows: To begin, in Section \ref{sec:ResPhys}, we provide a brief discussion of the magnon-polariton propagator and the resonator transmission function. This provides a primary connection between theoretical work and experimental results. The magnon-polariton theory is then developed in Section \ref{sec:MPtheory}. Starting with equations (\ref{eq:TFIM}) and (\ref{eq:TFIMAC}), we derive the magnon-polariton propagator for the coupled light-matter system, and an effective bosonic Hamiltonian describing the system. The calculation is lengthy, so we begin Section \ref{sec:MPtheory} with a detailed summary of the steps involved. Having obtained the magnon-polariton propagator, we discuss its application to calculating mode energies and spectral weights in Section \ref{sec:DisRes}, first in the absence of damping and then with frequency-independent (ohmic) damping of the magnon modes. This concludes the theoretical portion of this paper. In Section \ref{sec:Expt}, we compare the theory with experimental data on LiHoF$_4$ in a microwave resonator \cite{LiberskySM, Kovacevic}. An ansatz is used to account for decoherence of the spins comprising the collective magnon modes.
With dissipation and decoherence accounted for, we are able to make quantitative comparisons between results of this model and experimental measurements. \section{Resonator Physics} \label{sec:ResPhys} In a resonator experiment, one measures transmission of photons through the resonator, which is determined theoretically by the magnon-polariton propagator. Our quantum optics model is analyzed by making use of the imaginary time ordered magnon-polariton propagator of the coupled system \cite{MahanBook} \begin{align} \label{eq:MPprop} D_{mp}(\tau) = \bigr\langle T_{\tau} \bigr(a^{\dagger}(\tau)+a(\tau)\bigr) \bigr(a^{\dagger}+a\bigr) \bigr\rangle, \end{align} where $\langle T_{\tau}\ \cdots \rangle$ is an imaginary time ordered thermal average taken over the light and matter degrees of freedom. The results of this theory are applied to the quantum Ising magnet LiHoF$_4$ in a microwave resonator. In a two-port microwave resonator experiment, the transmission function is given by \cite{PozarBook,WallsMilburn} \begin{align} S_{21} = \frac{x_2^{OUT}}{x_1^{IN}}\biggr|_{x_2^{IN}=0}, \end{align} where $x_{1,2}^{IN/OUT}$ is a measure of the incoming and outgoing light at the resonator ports $1$ and $2$. The transmission function is the ratio of the outgoing photons at port $2$ to the incoming photons at port $1$ when no light is incident at port $2$. We assume the resonator transmission function is related to the magnon-polariton propagator by \cite{Harder} \begin{align} \label{eq:ResTrans} |S_{21}(\omega)|^2 \propto \text{Im}[D_{mp}^{ret}(\omega)], \end{align} where the proportionality constant depends on details of the resonator. The retarded magnon-polariton propagator $D_{mp}^{ret}(\omega)$, or photon response function, is defined by $D_{mp}^{ret}(\omega)=\beta D_{mp}(i\omega_n \rightarrow \omega+i0^+)$, with \begin{align} D_{mp}(i\omega_n) = \frac{1}{\beta}\int_0^{\beta} d\tau \ e^{i\omega_n \tau} D_{mp}(\tau), \end{align} where $\omega_n = 2\pi n/\beta$ are Bose-Matsubara frequencies \cite{MahanBook}. The imaginary component of $D_{mp}^{ret}(\omega)$ corresponds to the energy absorbed by the resonator photons. The transmission data varies over many orders of magnitude, and will be presented on a logarithmic scale \begin{align} 10 \log{|S_{21}|^2} = 10 \log{\bigr(A \text{Im}[D_{mp}^{ret}]\bigr)}. \end{align} The proportionality constant $A$ can be adjusted so that the scale of the experimental data matches that of the theoretical results. In what follows we set $A=1$, leaving a more quantitative comparison of the experimental resonator transmission and the theoretical results as a subject for future work. The magnon-polariton propagator is defined in terms of photon position operators, $x \sim a^{\dagger} + a$, whereas in equation (\ref{eq:Hp}) the spins couple to a photon momentum operator $p \sim i(a^{\dagger}-a)$. One can show that a canonical transformation that swaps the photon position and momentum operators leads to an equivalent formulation of the model in which the spins couple to an effective position operator \cite{Leggett84} (see Appendix \ref{ap:coupling}) \begin{align} \label{eq:Hx} \mathcal{H} = \mathcal{H}_{TFIM} - \alpha (a^{\dagger}+a) \frac{1}{\sqrt{N}} \sum_i J_i^z. \end{align} This canonical transformation facilitates the calculation of the magnon-polariton propagator for the interacting spin-photon system.
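As a minimal illustration of equation (\ref{eq:ResTrans}) and the logarithmic scale introduced above, the sketch below (illustrative frequency and linewidth; $A=1$) evaluates $10\log_{10}\text{Im}[D_{mp}^{ret}]$ for a bare damped resonator mode, the zero-coupling limit of the propagator derived in Section \ref{sec:MPtheory}.
\begin{verbatim}
# Minimal sketch: resonator transmission on the 10*log10 scale for a
# bare damped mode, D_ret = -2*wr/(w^2 - wr^2 + i*w*Gr).  Illustrative
# parameters; the proportionality constant A is set to 1 as in the text.
import numpy as np

wr, Gr = 1.0, 0.02
w = np.linspace(0.9, 1.1, 1001)

D_ret = -2.0 * wr / (w**2 - wr**2 + 1j * w * Gr)
S21_dB = 10.0 * np.log10(np.imag(D_ret))

print("peak at w =", w[np.argmax(S21_dB)])           # ~ wr
print("peak height (dB) =", np.round(S21_dB.max(), 2))
\end{verbatim}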
The cooperativity of a light-matter system is defined by $C \equiv 4g_m^2/(\Gamma_m \Gamma_r)$, where $\Gamma_r$ and $\Gamma_m$ are the linewidths (or dampings) of the light and matter modes, respectively \cite{Zhang}. In this expression, the coupling, $g_m^2=\alpha^2 A_m$, is between a magnon mode and a light mode, where $\alpha$ is the spin-photon coupling given in equation (\ref{eq:Hx}), and $A_m$ is the spectral weight of the relevant magnon mode. This expression for the magnon-photon coupling is derived in Section \ref{sec:MPprop}. When the coupling strength exceeds the damping of the system ($C>1$), the modes are said to be strongly coupled, and there will be coherent energy oscillations between the matter and the light. Regardless of whether or not the modes are strongly coupled, the use of perturbation theory and the rotating wave approximation requires $\eta = g/\omega \ll 1$. If $\eta>0.1$ the system is said to be in the ultrastrong coupling regime, and if $\eta>1$ the system is in the deep strong coupling regime \cite{Kockum}. Somewhat confusingly, a system in the ultrastrong, or deep strong, coupling regime may be weakly coupled if $C<1$. We have provided a brief description of the magnon-polariton propagator, the resonator transmission function, and a discussion of the cooperativity of a light-matter system. We will make use of this material in the development of the magnon-polariton theory, and the comparison between the theory and experimental results for LiHoF$_4$ in a microwave resonator. In the next section, we provide a detailed derivation of the magnon-polariton propagator beginning with the basic model given by equations (\ref{eq:TFIM}) and (\ref{eq:TFIMAC}). \section{Magnon-Polariton Theory} \label{sec:MPtheory} Our goal in this section is a detailed derivation of the magnon-polariton propagator, beginning with the basic spin model given by equations (\ref{eq:TFIM}) and (\ref{eq:TFIMAC}). Prior to delving into the calculation, we provide a brief summary of the required steps, and the terms which appear as the theory develops. In Section \ref{sec:Hsp}, we quantize the longitudinal ac magnetic field present in our basic model, assuming a plane wave basis for the photons, and we divide the spin Hamiltonian into its mean field part and interactions between fluctuations about the mean field. The photon part of the resulting spin-photon Hamiltonian contains a term describing the instantaneous Zeeman energy of the spins in the ac field. The spin-photon interaction is given by an effective photon momentum operator ($p \sim i(a^{\dagger}-a)$) coupled to fluctuations of the spins about their mean field. A canonical transformation is used to swap the photon momentum operator for a photon position operator in the interaction. A phenomenological filling factor is introduced to account for the coupling between spins and photons in a resonator where the plane wave assumption may break down. In Section \ref{sec:DS}, we discuss the dynamic susceptibility of a quantum Ising system having a multilevel single site Hamiltonian. The dynamic susceptibility is discussed in both the mean field (MF) and the random phase approximations (RPA). To go beyond the RPA, phenomenological damping parameters are introduced to account for damping of the magnon modes due to interactions between magnetic fluctuations, phonons, or any other environmental degrees of freedom. The dynamic susceptibility is central to the calculation of the magnon-polariton propagator.
In Section \ref{sec:AFT}, we return to the spin-photon Hamiltonian derived in Section \ref{sec:Hsp}. An auxiliary field is introduced to account for the interactions between magnetic fluctuations in the spin component of the Hamiltonian. A shift in the auxiliary field allows a trace to be performed over the microscopic spin degrees of freedom, resulting in an effective field theory which describes photons coupled to collective spin excitations, or magnons, present in the quantum Ising material. An expression for the propagator of the free auxiliary field is developed. The shift in the auxiliary field leads to a diamagnetic term in the photon component of the Hamiltonian, $\mathcal{H}_{\gamma}^D = D (a^{\dagger}+a)^2$, which shifts the frequency of the resonator mode. Although this diamagnetic term is present in an intermediate stage of the development of the theory, we find that the free auxiliary field propagator contains a term which restores the photon frequency to its original value in the final expression for the magnon-polariton propagator, given in Section \ref{sec:MPprop}, so the diamagnetic response term arising from the shift in the auxiliary field plays no role in the final theory. In Section \ref{sec:Hgamma}, we consider the photon component of the magnon-photon Hamiltonian and derive the free photon propagator. Finally, in Section \ref{sec:MPprop}, we consider the full magnon-photon Hamiltonian and derive the magnon-polariton propagator for the coupled light-matter system in terms of the dynamic susceptibility of the quantum Ising material. The spectral representation of the dynamic susceptibility is used to derive an equivalent bosonic Hamiltonian for the light-matter system. This completes the derivation of the magnon-polariton propagator. \subsection{Spin-Photon Hamiltonian} \label{sec:Hsp} We consider the transverse field Ising model (TFIM) in a longitudinal ac field, $\mathcal{H} = \mathcal{H}_{\gamma} + \mathcal{H}_{TFIM} + \mathcal{H}_{int}$, where $\mathcal{H}_{\gamma}$ is the photon Hamiltonian, the TFIM Hamiltonian is given in (\ref{eq:TFIM}), and the interaction between the spins and the magnetic field is \begin{align} \mathcal{H}_{int} = B_z \cos(\omega t) \sum_i J_i^z. \end{align} The TFIM may be treated in mean field (MF) theory, $\mathcal{H}_{TFIM} = \mathcal{H}_{MF} + \mathcal{H}_{fl}$, where the MF Hamiltonian is \begin{align} \label{eq:HMF} \mathcal{H}_{MF} = E_{gs} -H_z \sum_i J_i^z - B_x \sum_i J_i^x, \end{align} with $H_z = V_0 \langle J^z \rangle_{MF}$, where the zero wavevector component of the interaction between spins is $V_0 = \sum_j V_{ij}$. The constant contribution to the ground state energy, $E_{gs}= V_0 \langle J^z \rangle_{MF}^2/2$, will be dropped from the subsequent analysis. The MF spin polarization $\langle J^z \rangle_{MF}$ is determined self-consistently from the MF Hamiltonian \cite{SuzukiBook, DuttaBook}. The energy of the interactions between fluctuations in the longitudinal MF spin polarization is given by \begin{align} \mathcal{H}_{fl} = -\frac{1}{2} \sum_{i \neq j} V_{ij} \delta J_i^z \delta J_j^z, \end{align} where the fluctuation operator is defined by $\delta J_i^z = J_i^z - \langle J^z \rangle_{MF}$. We consider a single electromagnetic field mode, in which case \begin{align} \mathcal{H}_{\gamma} = \hbar \omega_r \biggr(a^{\dagger} a + \frac{1}{2}\biggr).
\end{align} Assuming the magnetic field is generated by a plane wave, the quantized ac magnetic field in a volume $V_{res}$ may be written \begin{align} B_z \cos(\omega t) \rightarrow \widehat{B}_z = -i \frac{g_L \mu_B}{c} \sqrt{\frac{\hbar \omega_r}{2 V_{res} \epsilon_0}} (a^{\dagger}-a), \end{align} where, on the right-hand side, the time dependence is implicit in the photon operators and the amplitude of the field depends on the photon density. The Land{\'e} g-factor and Bohr magneton written explicitly in the quantized expression were previously included in the definition of $B_z$. We assume photons with a wavelength much larger than the sample size so that $e^{i \boldsymbol{q} \cdot \boldsymbol{r}} \approx 1$, with $\omega_r = qc$. Transforming the spin operators to momentum space \begin{align} J_{\boldsymbol{k}}^z = \frac{1}{\sqrt{N}} \sum_i e^{i\boldsymbol{k} \cdot \boldsymbol{r}_i} J_i^z, \end{align} we find that the interaction is $\mathcal{H}_{int} = - i \alpha (a^{\dagger}-a) \delta J_0^z$, with \begin{align} \label{eq:MomentumCoupling} \alpha = g_L \mu_B \sqrt{\frac{\mu_0 \hbar \omega_r N}{2 V_{res}}}. \end{align} The interaction is between spin fluctuations and an effective momentum operator, $p \sim i(a^{\dagger}-a)$. In Appendix \ref{ap:coupling} we show that a canonical transformation that swaps the photon position and momentum operators leads to an equivalent formulation of the problem in which \begin{align} \label{eq:Hintx} \mathcal{H}_{int} = - \alpha (a^{\dagger}+a) \delta J_0^z. \end{align} We have dropped a term linear in the photon operators from the interaction, $\widehat{B}_z N \langle J^z \rangle_0 = - \alpha (a^{\dagger}+a) \sqrt{N} \langle J^z \rangle_0$. This is the (instantaneous) MF Zeeman energy of the spins in the longitudinal ac magnetic field. We will reintroduce this term as part of the photon Hamiltonian in Section \ref{sec:Hgamma}. In a system with $n$ atoms per unit cell, the total number of atoms is $N=nV_{sample}/V_{cell}$. The interaction strength may then be written \begin{align} \label{eq:BareCoupling} \alpha = \eta \sqrt{2\pi} \sqrt{\hbar \omega_r}\sqrt{\rho J_D} \quad \text{with} \quad J_D = \frac{\mu_0 (g_L \mu_B)^2}{4\pi}, \end{align} where in our plane-wave approximation the filling factor is $\eta=\sqrt{V_{sample}/V_{res}}$, and the spin density is $\rho=n/V_{cell}$. In YIG we have $\rho = 4.22 \times 10^{27}\,\mathrm{m}^{-3}$, whereas in LiHoF$_4$ the value is $\rho=1.39 \times 10^{28}\,\mathrm{m}^{-3}$, which is about 3.3 times the value in YIG. The dipolar energy scale of the LiHoF$_4$ system is given by $\rho J_D=13.52\,\mathrm{mK}$. For a discussion of the magnon-photon coupling strength in YIG, see references \cite{Zhang, Flower}. Our result for the filling factor was based on a plane-wave assumption. In a realistic model of a microwave resonator \cite{SoykalFlattePRL, SoykalFlattePRB}, the plane-wave assumption may break down, and the filling factor will depend on details of the resonator. One may express the filling factor as \cite{Zhang, Flower} \begin{align} \eta = \sqrt{\frac{\bigr(\int_{V_{sample}}\textbf{B}(\textbf{r}) \cdot \widehat{z} \ d\textbf{r}\bigr)^2 } {V_{sample} \int_{V_{res}}\bigr(\textbf{B}(\textbf{r})\bigr)^2\ d\textbf{r}}}, \end{align} where $\textbf{B}(\textbf{r})$ is the magnitude of the ac resonator field. In this work, we will treat the filling factor as a phenomenological parameter. The results of our theory are applied to experimental data on LiHoF$_4$ in a loop gap microwave resonator \cite{Libersky}.
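To give a feeling for the magnitude of the bare coupling, the sketch below evaluates equation (\ref{eq:BareCoupling}) in temperature units ($k_B=1$) using the LiHoF$_4$ value $\rho J_D = 13.52\,\mathrm{mK}$ quoted above. The resonator frequency and filling factor are assumed, illustrative inputs rather than the experimental values.
\begin{verbatim}
# Minimal sketch: the bare coupling alpha = eta*sqrt(2*pi*hw_r*rho*J_D)
# in temperature units.  The resonator frequency f_r and filling factor
# eta below are assumptions for illustration, not experimental values.
import numpy as np

h_over_kB = 47.992        # mK per GHz (h/k_B)
rho_JD    = 13.52         # mK, dipolar energy scale of LiHoF4 (text)
f_r       = 3.6           # GHz, assumed resonator frequency
eta       = 0.1           # assumed filling factor

hbar_wr = h_over_kB * f_r                       # photon energy in mK
alpha = eta * np.sqrt(2.0 * np.pi * hbar_wr * rho_JD)

print(f"alpha = {alpha:.1f} mK = {alpha / h_over_kB * 1e3:.0f} MHz")
\end{verbatim}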
\subsection{Dynamic Susceptibility} \label{sec:DS} The dynamic susceptibility of a quantum Ising material is central to the development of the magnon-polariton theory. We will make frequent use of the dynamic susceptibility and its spectral decomposition. We proceed to review the dynamic susceptibility in both the mean field and the random phase approximations (MF and RPA). For a more detailed discussion of the dynamic susceptibility of magnetic materials, see \textit{Rare Earth Magnetism} by Jensen and Mackintosh \cite{JensenMackintosh}. The MF Hamiltonian for each spin, and the matrix elements of the longitudinal spin operator, may be expressed in terms of eigenstates and energies of the single site MF Hamiltonian given by equation (\ref{eq:HMF}) \begin{align} \label{eq:MF} \mathcal{H}_{MF_i} = \sum_m E_m | m \rangle \langle m| \quad \text{and} \quad c_{mn} = \langle m | J^z | n\rangle_{MF}, \end{align} where $\{E_m\}$ are the single site energy levels of the system, and $\{|m\rangle\}$ are the associated eigenstates. We drop the constant shift in the ground state energy, $E_{gs}$, from subsequent analysis. The modes of the spin system, and their associated spectral weights, follow from the connected imaginary time correlation function, or Green function, $g(\tau) = -\langle T_{\tau} \delta J^z(\tau) \delta J^z \rangle_{MF}$, where $T_{\tau}$ is the imaginary time ordering operator. In MF theory, transforming the Green function to Matsubara frequency space ($\omega_n = 2\pi n/\beta$), we may write the MF Green function as \cite{JensenMackintosh, Stinchcombe} \begin{align} g(i\omega_n) = \frac{1}{\beta} \int_0^{\beta} e^{i\omega_n \tau} g(\tau) d\tau = \widetilde{g}(i\omega_n) - g_{el} \delta_{i\omega_n,0}, \end{align} where in the final expression the Green function is divided into an inelastic component and the quasi-elastic diffusive pole of the system. The longitudinal MF dynamic susceptibility and the Green function are related by $\chi_0(\omega) = - \beta g(i\omega_n \rightarrow \omega + i0^+) = \widetilde{\chi}_0(\omega) + \chi_{el}^0 \delta_{\omega,0}$. In terms of the MF energy levels and matrix elements of the longitudinal spin operator, one may write the dynamic susceptibility as \begin{align} \label{eq:chi0} \widetilde{\chi}_0(z) &= \sum_{n > m}|c_{mn}|^2 p_{mn} \frac{2 E_{nm}}{E_{nm}^2-z^2} \\ \nonumber \beta \chi_{el}^0 &= \sum_m c_{mm}^2p_m - \biggr[\sum_m c_{mm}p_m \biggr]^2. \end{align} The $p_{mn} = p_m-p_n$ are differences between population factors $p_m = e^{-\beta E_m}/Z_{MF}$, where $Z_{MF} = \text{Tr}[e^{-\beta \mathcal{H}_{MF_i}}]$. The poles of $\widetilde{\chi}_0(z)$, $E_{nm} = E_n-E_m$, are the MF modes of the system, and their spectral weights are $a_{mn} = |c_{mn}|^2 p_{mn}$. The elastic contribution to the dynamic susceptibility, $\chi_{el}^0$, vanishes in the paramagnetic phase of the system ($c_{mm}=0$), and decays exponentially with temperature $\bigr(\chi_{el}^0 \sim T e^{(-E_1/T)} \bigr)$. In the random phase approximation (RPA), the result for the dynamic susceptibility is $\chi(\boldsymbol{k},z) = \chi_0(z)/(1-V_{\boldsymbol{k}} \chi_0(z))$.
One may solve for the poles of this function, and their residues, in order to obtain its spectral representation \begin{align} \label{eq:specrep} \chi(\boldsymbol{k}, z) = \sum_m \biggr[\frac{A_{\boldsymbol{k}}^m 2E_{\boldsymbol{k}}^m} {(E_{\boldsymbol{k}}^m)^2-z^2} \biggr] + \chi_{\boldsymbol{k}}^{el} \delta_{z,0}, \end{align} where $A_{\boldsymbol{k}}^m$ is the spectral weight of the $m^{\text{th}}$ RPA mode $E_{\boldsymbol{k}}^m$. In the magnon-polariton theory, the wavelengths of the microwave photons are much larger than the size of the sample, so we are interested in the $\boldsymbol{k}=0$ limit of the dynamic susceptibility. In this limit we write $\chi(z) = \chi(\boldsymbol{k}=0, z)$, and we define $\{ \omega_m \} = \{ E_{\boldsymbol{k}=0}^m \}$, and $\{ A_m \} = \{ A_{\boldsymbol{k}=0}^m \}$, as the zero wavevector component of the magnon modes and their spectral weights. The spectral weights of the magnon modes are inversely proportional to the mode frequencies $A_m \sim 1/\omega_m$ (see Appendix \ref{ap:LiHo}), with the spectral weight of the soft mode diverging at the critical point of the system. The RPA expression for the dynamic susceptibility neglects any damping of the magnon modes. In reality, the modes are damped by interactions among the magnetic fluctuations, and by environmental degrees of freedom such as phonons and extraneous photons inside a resonator. If the modes are assumed to behave as damped harmonic oscillators, the dynamic susceptibility may be written ($\chi_{el}=0$) \begin{align} \chi(\omega) = \sum_m \frac{A_m 2\omega_m}{\omega_m^2-\omega^2 - i\omega\Gamma_m}. \end{align} We have analytically continued to real frequencies $z\rightarrow \omega + i0^+$, and introduced the phenomenological damping parameters $\{ \Gamma_m \}$. As will be shown, the magnon-polariton propagator may be written in the same way. In terms of its reactive and absorptive parts ($\chi = \chi' + i\chi''$), the dynamic susceptibility is \begin{align} \label{eq:chi1} \chi'(\omega) = \sum_m \frac{A_m 2\omega_m (\omega_m^2-\omega^2)} {(\omega_m^2-\omega^2)^2 + (\omega \Gamma_m)^2} + \frac{(\Gamma_0/2)^2 \chi_{el}}{\omega^2+(\Gamma_0/2)^2} \end{align} and \begin{align} \label{eq:chi2} \chi''(\omega) = \sum_m \frac{A_m 2\omega_m \omega \Gamma_m} {(\omega_m^2-\omega^2)^2 + (\omega \Gamma_m)^2} + \frac{\omega \Gamma_0/2 \chi_{el}}{\omega^2+(\Gamma_0/2)^2}, \end{align} where we have included the contribution from $\chi_{el}$ to illustrate its role in the theory. The damping parameter will downshift the resonant frequency of the mode, $\omega_m \rightarrow \widetilde{\omega}_m = \sqrt{\omega_m^2-(\Gamma_m/2)^2}$, and if the damping exceeds the mode energy, $\Gamma_m/2 > \omega_m$, the mode becomes overdamped. The shift in the mode energy may be eliminated by introducing a counterterm to the theory. This is accomplished by setting $z=\omega+i\Gamma_m/2$ for each mode in equation (\ref{eq:specrep}). The dynamic susceptibility is then \begin{widetext} \begin{align} \label{eq:chiprime} \chi'(\omega) = \sum_m \biggr[\frac{A_m (\omega+\omega_m)}{(\omega+\omega_m)^2 + (\Gamma_m/2)^2} -\frac{A_m (\omega-\omega_m)}{(\omega-\omega_m)^2 + (\Gamma_m/2)^2}\biggr] +\frac{\chi_{el} (\Gamma_0/2)^2}{\omega^2+(\Gamma_0/2)^2} , \end{align} and \begin{align} \label{eq:chiprime2} \chi''(\omega) = \sum_m \biggr[\frac{A_m \Gamma_m/2} {(\omega-\omega_m)^2 + (\Gamma_m/2)^2} -\frac{A_m \Gamma_m/2}{(\omega+\omega_m)^2 + (\Gamma_m/2)^2}\biggr] +\frac{\chi_{el} \omega \Gamma_0/2}{\omega^2+(\Gamma_0/2)^2} .
\end{align} \end{widetext} With the counterterm present, the effect of the damping is to broaden the delta function peaks associated with absorption and emission by the magnon modes into Lorentzians. Damping also broadens the quasielastic diffusive pole into an additional peak in the absorption spectrum, albeit with a different lineshape. The elastic contribution to the dynamic susceptibility vanishes in the paramagnetic phase of the system, and decays exponentially with temperature. In the time domain, the Lorentzian function describes exponentially decaying oscillations at a fixed frequency, rather than the strictly exponential decay of excitations seen in an overdamped harmonic oscillator. We make use of the spectral representation of the dynamic susceptibility to calculate the magnon-photon coupling strengths in the magnon-polariton theory. \subsection{Auxiliary Field Theory} \label{sec:AFT} To derive the magnon-photon Hamiltonian, we make use of the partition function as a means to renormalize the system. The interactions between spins are decoupled via the introduction of an auxiliary Hubbard-Stratonovich field, which allows us to average out the microscopic spin degrees of freedom. The resulting theory describes photons coupled to the collective spin excitations, or magnons, present in the material. We divide the total Hamiltonian of the spin-photon system into two terms $\mathcal{H} = \mathcal{H}_0 + \mathcal{H}'$, where $\mathcal{H}'$ contains the spin fluctuations \begin{align} \mathcal{H}' = -\frac{1}{2} \sum_{i \neq j} V_{ij} \delta J_i^z \delta J_j^z - \alpha (a^{\dagger}+a) \frac{1}{\sqrt{N}} \sum_i \delta J_i^z \end{align} and $\mathcal{H}_0 = \mathcal{H}_{MF} + \mathcal{H}_{\gamma}$. The photon Hamiltonian contains a contribution from the instantaneous Zeeman energy of the spins in the ac field as discussed following equation (\ref{eq:Hintx}). The partition function, written in the Matsubara formalism, is given by \cite{MahanBook} \begin{equation} \label{eq:PF} Z = Z_{\mathcal{H}_0} \biggr\langle T_{\tau} \exp \biggr[-\int_{\tau} \beta \mathcal{H}'(\tau) \biggr] \biggr\rangle_{0}, \end{equation} where $\int_{\tau} \equiv \int_0^{\beta} d\tau / \beta$. The interactions between spin fluctuations may be decoupled via the introduction of an auxiliary Hubbard-Stratonovich field \cite{MckenzieStamp} \begin{align} \frac{Z}{Z_{\mathcal{H}_0}} = \int \mathcal{D}\phi \exp\biggr(&-\frac{1}{2}\int_{\tau} \sum_{\boldsymbol{k}} |\phi_{\boldsymbol{k}}(\tau)|^{2}\biggr) \\ \nonumber & \times \biggr\langle T_{\tau} \exp\biggr(\int_{\tau} V(\tau)\biggr) \biggr\rangle_0, \end{align} where the integration measure is $\mathcal{D}\phi = d\phi_{\boldsymbol{k}}/\sqrt{2\pi}$, and (suppressing the $\tau$ dependence) \begin{align} V = \sum_{\boldsymbol{k}} \biggr[\phi_{-\boldsymbol{k}} \sqrt{\beta V_{\boldsymbol{k}}} + \beta \alpha [a^{\dagger}+a]\delta_{\boldsymbol{k},0}\biggr] \delta J_{\boldsymbol{k}}^z. \end{align} We proceed by shifting the auxiliary field so that the dependence of the interaction on the photons is in the Gaussian prefactor \begin{align} \label{eq:fieldshift} \phi_0 \rightarrow \phi_0 - \frac{\beta \alpha (a^{\dagger}+a)}{\sqrt{\beta V_0}}.
\end{align} Multiplying out the result for the zero wavevector component of the Gaussian prefactor, the partition function is \begin{align} \frac{Z}{Z_{\mathcal{H}_0}} &= \biggr\langle \biggr\langle T_{\tau} \int \mathcal{D}\phi \exp\biggr( \int_{\tau} \alpha_{\phi} \phi_0 (a^{\dagger}+a) \biggr) \\ \nonumber &\times \exp\biggr(-\frac{1}{2}\int_{\tau} \sum_{\boldsymbol{k}} |\phi_{\boldsymbol{k}}|^{2}\biggr) \times \exp\biggr(\int_\tau V_s \biggr) \biggr\rangle_s\biggr\rangle_{\gamma}, \end{align} where the dimensionless coupling between the photon operators and the magnetic fluctuations is $\alpha_{\phi} = \beta \alpha / \sqrt{\beta V_0}$. The interaction between the shifted auxiliary field and the spin fluctuations is \begin{align} V_s(\tau) = \sum_{\boldsymbol{k}} \phi_{-\boldsymbol{k}}(\tau) \sqrt{\beta V_{\boldsymbol{k}}}\ \delta J_{\boldsymbol{k}}^z(\tau). \end{align} The thermal average over the eigenstates of $\mathcal{H}_0$ has been written in terms of separate averages over the spin and photon eigenstates, $\langle \cdots \rangle_0 = \langle\langle \cdots \rangle_s \rangle_{\gamma}$. This is possible because in $\mathcal{H}_0$ the Hilbert spaces for the spins and the photons are disjoint. The square of the shifted auxiliary field contains a term independent of the field, $\mathcal{H}_{\gamma}^D=D(a^{\dagger}+a)^2$ with $D=\alpha_{\phi}^2/(2\beta) = \alpha^2/(2V_0)$, which has been shifted into the photon part of $\mathcal{H}_0$. We are now in a position to trace over the spin degrees of freedom. This has been dealt with in detail elsewhere \cite{MckenzieStamp}; here we simply quote the result for the partition function in the random phase approximation \begin{align} \label{eq:Pfunction} \frac{Z}{Z_{\mathcal{H}_0}Z_{\phi}} = \biggr\langle\biggr\langle T_{\tau} \exp\biggr(\alpha_{\phi} \int_{\tau} \phi_0 (a^{\dagger}+a) \biggr)\biggr\rangle_{\phi}\biggr\rangle_{\gamma} \end{align} where $\langle \cdots \rangle_{\phi}$ is an average taken with respect to the free auxiliary field. Transforming to Matsubara frequency space \begin{align} \phi(i\omega_n) = \int_{\tau} e^{i\omega_n \tau} \phi(\tau), \end{align} the partition function of the free auxiliary field is \begin{align} Z_{\phi} = \int \mathcal{D}\phi \exp\biggr(-\frac{1}{2} \sum_{n,\boldsymbol{k}} \bigl(\mathcal{D}_{\phi}^0(\boldsymbol{k},i\omega_n)\bigr)^{-1} |\phi_{\boldsymbol{k}}(i\omega_n)|^2\biggr), \end{align} where the free field propagator, $\mathcal{D}_{\phi}^0(\boldsymbol{k},i\omega_n) = \langle |\phi_{\boldsymbol{k}}(i\omega_n)|^2\rangle_{\phi}$, is \begin{align} \mathcal{D}_{\phi}^0(\boldsymbol{k},i\omega_n) = \frac{1}{1- V_{\boldsymbol{k}} \chi_0(i\omega_n)} = 1+V_{\boldsymbol{k}} \chi(\boldsymbol{k},i\omega_n). \end{align} One may make use of the spectral decomposition of the dynamic susceptibility (equation (\ref{eq:specrep})) to determine the spectral representation of the free field propagator. Beginning with a microscopic spin-photon Hamiltonian, we have developed an effective theory describing photons interacting with an auxiliary field which represents the collective magnetic excitations, or magnons, present in the system. We now turn to the photon component of the Hamiltonian, and make use of harmonic oscillator position and momentum operators to develop a path integral representation of the photonic degrees of freedom.
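The chain from single-site data to collective modes can be made concrete numerically. The sketch below (toy three-level spectrum; all numbers assumed for illustration) builds the inelastic MF susceptibility of equation (\ref{eq:chi0}) and locates the $\boldsymbol{k}=0$ RPA magnon modes, i.e.\ the poles of the free auxiliary-field propagator $\mathcal{D}_{\phi}^0 = (1-V_0\chi_0)^{-1}$, as the roots of $1-V_0\chi_0(\omega)=0$.
\begin{verbatim}
# Minimal sketch (toy numbers): MF susceptibility chi0 from assumed
# single-site levels E_m and matrix elements c_mn, and the k = 0 RPA
# magnon modes from the roots of 1 - V0*chi0(w) = 0, which are the
# poles of the free auxiliary-field propagator 1/(1 - V0*chi0).
import numpy as np

E = np.array([0.0, 1.0, 2.3])        # assumed single-site MF levels
c = np.array([[0.0, 0.8, 0.1],       # assumed c_mn = <m|J^z|n>
              [0.8, 0.0, 0.5],
              [0.1, 0.5, 0.0]])
beta, V0 = 5.0, 0.4

p = np.exp(-beta * E); p /= p.sum()  # population factors p_m

def chi0(w):
    s = np.zeros_like(w)
    for n in range(len(E)):
        for m in range(n):
            Enm = E[n] - E[m]
            s += abs(c[m, n])**2 * (p[m] - p[n]) \
                 * 2 * Enm / (Enm**2 - w**2)
    return s

w = np.linspace(1e-3, 3.0, 30001)
f = 1.0 - V0 * chi0(w)

# sign changes of f away from its divergences at the MF poles E_nm
roots = w[:-1][(f[:-1] * f[1:] < 0) & (np.abs(np.diff(f)) < 1.0)]
print("MF modes :", sorted(round(float(E[n] - E[m]), 2)
                           for n in range(len(E)) for m in range(n)))
print("RPA modes:", np.round(roots, 3))
\end{verbatim}
For a ferromagnetic $V_0>0$ the lowest RPA root lies below its parent MF mode, a toy version of the mode softening discussed below.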
\subsection{Photon Hamiltonian} \label{sec:Hgamma} Considering a single photon mode, the photon Hamiltonian is given by \begin{align} \label{eq:Hgamma} \mathcal{H}_{\gamma} = \omega_r \biggr(a^{\dagger} a + \frac{1}{2}\biggr) - \lambda (a^{\dagger}+a) + D (a^{\dagger}+a)^2, \end{align} where \begin{align} \label{eq:D} \lambda = \alpha \sqrt{N} \langle J^z \rangle_0 \qquad \text{and} \qquad D = \frac{\alpha_{\phi}^2}{2\beta} = \frac{\alpha^2}{2V_0} . \end{align} As discussed following equation (\ref{eq:MomentumCoupling}), the term linear in the photon operators is the instantaneous mean field Zeeman energy of the spins in the applied ac field, only now we are considering spins coupled to an effective photon position operator. The source of the diamagnetic term is the shift in the auxiliary field given in equation (\ref{eq:fieldshift}). Although the diamagnetic term is present in this intermediate stage of the development of the magnon-polariton theory, we find it does not play a role in the final expression for the magnon-polariton propagator given in Section \ref{sec:MPprop}. We proceed by representing the photons with harmonic oscillator position and momentum operators \begin{align} x = \sqrt{\frac{\hbar}{2m\omega}} (a^{\dagger}+a) && p = i \sqrt{\frac{\hbar m \omega}{2}} (a^{\dagger}-a). \end{align} In terms of these operators we have ($\hbar,m=1$) \begin{align} \mathcal{H}_{\gamma} = \frac{p^2}{2} + \frac{1}{2} \omega_r^2 x^2 - \sqrt{2\omega_r} \lambda x + 2 D \omega_r x^2. \end{align} The diamagnetic term shifts the oscillator frequency. In terms of the shifted variables \begin{align} \label{eq:freqshift} \omega_{\gamma} = \omega_r \sqrt{1+\frac{4D}{\omega_r}} \qquad \text{and} \qquad \lambda_{\gamma} = \lambda \biggr[1+\frac{4D}{\omega_r}\biggr]^{-\frac{1}{4}}, \end{align} the photon Hamiltonian is \begin{align} \mathcal{H}_{\gamma} = \frac{p^2}{2} + \frac{1}{2} \omega_{\gamma}^2 x^2 -\sqrt{2 \omega_{\gamma}} \lambda_{\gamma} x. \end{align} The term linear in the position operator re-zeros the oscillator, and leads to a shift in its ground state energy \begin{align} \mathcal{H}_{\gamma} = \frac{p^2}{2} + \frac{1}{2} \omega_{\gamma}^2 (x-x_0)^2 -\frac{1}{2} \omega_{\gamma}^2 x_0^2, \end{align} where $x_0 = \sqrt{2\omega_{\gamma}} \lambda_{\gamma}/\omega_{\gamma}^2$. This linear shift of the oscillator will not affect the photon propagator. In terms of photonic quasiparticle operators which create and annihilate photons with energy $\omega_{\gamma}$, the photon Hamiltonian may be written \begin{align} \mathcal{H}_{\gamma} = \omega_{\gamma} \biggr(a_{\gamma}^{\dagger} a_{\gamma} + \frac{1}{2}\biggr) -\frac{1}{2} \omega_{\gamma}^2 x_0^2. \end{align} The shift in the ground state energy may be included with the ground state energy of the spins $E_{gs}$ (see the discussion following equation (\ref{eq:HMF})), and dropped from subsequent consideration. In imaginary time, the propagator of the shifted photon modes is \begin{align} D_{\gamma}(\tau) = \bigr\langle T_{\tau} \bigr(a_{\gamma}^{\dagger}(\tau)+a_{\gamma}(\tau)\bigr) \bigr(a_{\gamma}^{\dagger}+a_{\gamma}\bigr) \bigr\rangle_{\gamma}, \end{align} where the average $\langle \cdots \rangle_{\gamma}$ is taken with respect to $\mathcal{H}_{\gamma}$.
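As a quick consistency check on equation (\ref{eq:freqshift}), the sketch below (arbitrary test parameters) diagonalizes $\mathcal{H}_{\gamma}$ of equation (\ref{eq:Hgamma}) in a truncated Fock basis and compares the numerical level spacing with $\omega_{\gamma}=\omega_r\sqrt{1+4D/\omega_r}$.
\begin{verbatim}
# Minimal sketch: numerical check of the diamagnetic frequency shift.
# Diagonalizing H = wr*(a^+a + 1/2) - lam*(a^+ + a) + D*(a^+ + a)^2
# in a truncated Fock basis reproduces the level spacing w_gamma.
import numpy as np

wr, lam, D, n_ph = 1.0, 0.3, 0.2, 60     # arbitrary test parameters
a = np.diag(np.sqrt(np.arange(1, n_ph)), 1)
x = a.T + a

H = wr * (a.T @ a + 0.5 * np.eye(n_ph)) - lam * x + D * (x @ x)
ev = np.linalg.eigvalsh(H)

print("numerical spacing :", round(ev[1] - ev[0], 6))
print("w_gamma predicted :", round(wr * np.sqrt(1.0 + 4.0 * D / wr), 6))
\end{verbatim}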
One may express the partition function of the photon system in terms of a path integral over the harmonic oscillator position operator \cite{ShankarBook} \begin{align} Z_{\gamma} = \text{Tr}[e^{-\beta \mathcal{H}_{\gamma}}] = \int \mathcal{D}x \exp{\biggr[-\int_0^{\beta} \mathcal{L}_{\gamma}[\dot{x},x] d\tau\biggr]}, \end{align} where $\mathcal{H}=\mathcal{L}$ in imaginary time, and the path integral is over the shifted harmonic oscillator variables. In the field theory, it is convenient to work with the dimensionless operator $x_{\gamma}=a_{\gamma}^{\dagger}+a_{\gamma}$. In Matsubara frequency space, the Euclidean action in terms of the dimensionless operator $x_{\gamma}$ is given by \begin{align} \label{eq:Lgamma} \int_0^{\beta} \mathcal{L}_{\gamma}(\tau) d\tau = \frac{1}{2}\frac{\beta}{2\omega_{\gamma}} \sum_n \bigr[-(i\omega_n)^2+\omega_{\gamma}^2\bigr] |x_{\gamma}(i\omega_n)|^2. \end{align} It follows that the photon propagator is given by \begin{align} D_{\gamma}(i\omega_n)=\frac{2\omega_{\gamma}}{\beta} \frac{1}{\omega_{\gamma}^2-(i\omega_n)^2}. \end{align} We now have the free propagators of the magnon and photon systems, $\mathcal{D}_{\phi}^0(\boldsymbol{k},i\omega_n)$ and $D_{\gamma}(i\omega_n)$. Equipped with these propagators, we may proceed to calculate the magnon-polariton propagator for the coupled magnon-photon system. \subsection{Magnon-Polariton Propagator} \label{sec:MPprop} We have developed a path integral representation of the partition function for a quantum Ising system in a resonator. We return now to the partition function of the full system, given by equation (\ref{eq:Pfunction}). The non-interacting component of the partition function may be rewritten as $Z_{\mathcal{H}_0} = Z_{MF}Z_{\gamma}$, where $Z_{MF}$ yields the mean field free energy of the spins, and $Z_{\gamma}$ yields the free energy of the free photons. Although the mean field free energy of the spins has important thermodynamic consequences, it has no bearing on the magnon-polariton propagator and may be dropped from subsequent analysis. We define the magnon-polariton partition function by \begin{align} \label{eq:Zmp} \frac{Z_{mp}}{Z_{\gamma}Z_{\phi}} = \biggr\langle\biggr\langle T_{\tau} \exp\biggr(\alpha_{\phi} \int_{\tau} \phi_0 (a^{\dagger}+a) \biggr) \biggr\rangle_{\phi}\biggr\rangle_{\gamma}. \end{align} We wish to determine the magnon-polariton propagator of the rescaled photon operators ($x_{\gamma} = a_{\gamma}^{\dagger}+a_{\gamma}$), \begin{align} D_{mp}^{\gamma}(\tau) = \bigr\langle T_{\tau} \bigr(a_{\gamma}^{\dagger}(\tau)+a_{\gamma}(\tau)\bigr) \bigr(a_{\gamma}^{\dagger}+a_{\gamma}\bigr) \bigr\rangle_{mp}, \end{align} but in order to do so, we must re-express the interaction in terms of the photon operators $a_{\gamma}$. In terms of the dimensionless operator $x_{\gamma}=a_{\gamma}^{\dagger}+a_{\gamma}$, we find that \begin{align} \alpha_{\phi} \int_{\tau} \phi_0 (a^{\dagger}+a) = \beta \alpha_{\gamma} \sum_n \phi(i\omega_n) x_{\gamma}(-i\omega_n) \end{align} where \begin{align} \label{eq:alpha} \alpha_{\gamma} = \frac{\alpha_{\phi}}{\beta} \biggr[1+\frac{4D}{\omega_r}\biggr]^{-\frac{1}{4}}. \end{align} Note that if $D_{mp}$ is the propagator for the original photonic operators, $x=a^{\dagger}+a$, which create and annihilate photons with frequency $\omega_r$, we have $D_{mp}^{\gamma} = (\omega_{\gamma}/ \omega_r) D_{mp}$. In order to calculate the magnon-polariton propagator, one may expand the interaction in (\ref{eq:Zmp}) and sum the resulting Dyson series.
The exact result for the magnon-polariton propagator is \begin{align} D_{mp}^{\gamma}(i\omega_n) = \frac{1}{D_{\gamma}^{-1}(i\omega_n) - \beta^2 \alpha_{\gamma}^2 \mathcal{D}_{\phi}(i\omega_n)}. \end{align} Recall that the free field propagator may be written in terms of the dynamic susceptibility as $\mathcal{D}_{\phi}^0(i\omega_n) = 1+V_0 \chi(i\omega_n)$, where $\chi(i\omega_n)$ is the zero wavevector component of the RPA susceptibility given by equation (\ref{eq:specrep}). This leads to \begin{align} D_{mp}^{\gamma}(i\omega_n) = -\frac{2\omega_{\gamma}}{\beta} \biggr[ \frac{1}{(i\omega_n)^2 - \omega_{c}^2 + (\alpha_c^2/\beta) \chi(i\omega_n)}\biggr], \end{align} where the effective frequency of the resonator and the effective coupling strength are now \begin{align} \omega_c^2 = \omega_{\gamma}^2 - 2\beta \alpha_{\gamma}^2 \omega_{\gamma} \quad \text{and} \quad \alpha_c^2 = 2 \beta^2 \alpha_{\gamma}^2 \omega_{\gamma} V_0. \end{align} The resonant frequency of the resonator is shifted by the diamagnetic response of the photons $\omega_r \rightarrow \omega_{\gamma}$ (equation (\ref{eq:freqshift})). The coupling between the photons and the auxiliary field again shifts the resonator frequency $\omega_{\gamma} \rightarrow \omega_c$. A short calculation shows that $\omega_c = \omega_r$, so the resonant photon frequency of the system is unchanged. This is as one might expect because the original spin-photon Hamiltonian does not contain a diamagnetic term. In terms of the original parameters of the spin-photon Hamiltonian, one may show that the rescaled coupling is $\alpha_c^2/\beta = \alpha^2 2\omega_r$. Using the fact that $D_{mp} = (\omega_r/\omega_{\gamma}) D_{mp}^{\gamma}$, we arrive at the magnon-polariton propagator of the original resonator photons ($x=a^{\dagger}+a$) \begin{align} \label{eq:MPprop2} D_{mp}(z) = -\frac{2\omega_r}{\beta} \biggr[ \frac{1}{z^2 - \omega_r^2 + \alpha^2 2\omega_r \chi(z)}\biggr]. \end{align} This propagator is a central result of the magnon-polariton theory. As discussed in Section \ref{sec:ResPhys}, it provides a primary connection between theoretical work and the experimentally measured resonator transmission function. Our result for the propagator includes the effects of counter-rotating terms which become important in the ultrastrong, or deep strong, coupling regimes \cite{Kockum}. The dynamic susceptibility is given in equation (\ref{eq:specrep}). With $\chi_{el}=0$, one may write down an effective bosonic magnon-photon Hamiltonian describing the system \begin{align} \label{eq:Hmp} \mathcal{H}_{mp} = \omega_r &a^{\dagger} a + \sum_{m} \omega_m b_m^{\dagger} b_m \\ \nonumber &+ (a^{\dagger}+a) \sum_{m} g_m (b_m^{\dagger}+b_m). \end{align} In the absence of damping, the magnon-polariton propagator for the theory is given by (see Appendix \ref{ap:CHO}) \begin{align} \label{eq:MPprop3} D_{mp}^{\mathcal{H}}(i\omega_n) = -\frac{2\omega_r}{\beta} \left[ \frac{1}{(i\omega_n)^2-\omega_r^2 -\sum_m \frac{4 g_m^2 \omega_m\omega_r}{(i\omega_n)^2-\omega_m^2}}\right]. \end{align} Comparing with equation (\ref{eq:MPprop2}), we see that the coupling in the effective bosonic theory is \begin{align} \label{eq:coupling} g_m^2 = \alpha^2 A_m. \end{align} The propagator then satisfies $D_{mp}^{\mathcal{H}}(a^{\dagger},a) = D_{mp}(a^{\dagger},a)$. Recall that the spectral weights of the magnon modes scale like $A_m \sim 1/\omega_m$, so that the couplings will also scale like the inverse of the mode energies.
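Because $\mathcal{H}_{mp}$ is bilinear, its normal modes can also be obtained by elementary means: in oscillator coordinates the potential matrix has diagonal entries $\omega_r^2$ and $\omega_m^2$ and off-diagonal entries $2g_m\sqrt{\omega_r\omega_m}$, and its eigenvalues are the squared polariton energies. The sketch below (illustrative parameters) verifies numerically that these eigenvalues coincide with the poles of equation (\ref{eq:MPprop3}).
\begin{verbatim}
# Minimal sketch (illustrative parameters): polariton modes of the
# effective bosonic Hamiltonian from the potential matrix K, with
# K_rr = wr^2, K_mm = wm^2, K_rm = 2*g_m*sqrt(wr*wm), cross-checked
# against the zeros of the propagator denominator.
import numpy as np

wr = 1.0
wm = np.array([0.8, 1.5])            # magnon mode energies
Am = 0.05 / wm                       # spectral weights, A_m ~ 1/w_m
alpha = 1.0
gm = alpha * np.sqrt(Am)             # g_m^2 = alpha^2 * A_m

K = np.diag(np.r_[wr**2, wm**2])
K[0, 1:] = K[1:, 0] = 2.0 * gm * np.sqrt(wr * wm)
w_p = np.sqrt(np.linalg.eigvalsh(K))
print("polariton modes:", np.round(w_p, 4))

# the same modes are zeros of the denominator of the propagator
Q = lambda z: z**2 - wr**2 - np.sum(4 * gm**2 * wm * wr / (z**2 - wm**2))
print("max |Q(w_p)| =", max(abs(Q(z)) for z in w_p))   # ~ 0
\end{verbatim}
For a single magnon mode the two eigenvalues of $K$ reduce to equation (\ref{eq:wpm}) below.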
The magnon mode energies, $\omega_m$, and the coupling strength, $g_m$, are temperature dependent due to the temperature dependence of the mean field and of the population factors which determine $A_m$. One sees that the effective bosonic magnon-photon Hamiltonian captures the propagator of the original resonator photons coupled to the quantum Ising spins, apart from the contribution from the quasielastic diffusive pole. Therefore, when $\chi_{el}=0$, we are free to use the bosonic theory to describe the magnon-photon system. Note that there is no diamagnetic term in the effective bosonic Hamiltonian. In the Dicke model, one is dealing with mobile charged particles, and the diamagnetic term comes from squaring the canonical momentum of the charge carriers. As we are dealing with a fixed system of spins, no such term is expected. In the development of the auxiliary field theory, we treated the magnetic fluctuations in the quantum Ising system in the RPA, and determined the exact magnon-polariton propagator within this approximation. In the rotating wave approximation (RWA), counter-rotating terms in the effective bosonic Hamiltonian are dropped, leading to an approximate result for the magnon-polariton propagator \cite{Harder} (assuming $\chi_{el}=0$, and with $D_{mp}^{ret}(\omega) = \beta D_{mp}(z\rightarrow\omega+i0^+)$) \begin{align} D_{mp}^{ret}(\omega)\ = &\ \frac{1}{\omega - \omega_{mp}^- + i\Gamma_{mp}^-/2} \\ \nonumber &\qquad -\frac{1}{\omega + \omega_{mp}^+ + i\Gamma_{mp}^+/2} , \end{align} where \begin{align} \label{eq:RPA1} \omega_{mp}^-(\omega) &= \omega_r + \sum_m \frac{g_m^2(\omega-\omega_m)}{(\omega-\omega_m)^2 + (\Gamma_m/2)^2} \\ \nonumber \omega_{mp}^+(\omega) &= \omega_r - \sum_m \frac{g_m^2(\omega+\omega_m)}{(\omega+\omega_m)^2 + (\Gamma_m/2)^2}, \end{align} and \begin{align} \label{eq:RPA2} \frac{\Gamma_{mp}^-(\omega)}{2} = \frac{\Gamma_r}{2} + \sum_m \frac{g_m^2\Gamma_m/2}{(\omega-\omega_m)^2 + (\Gamma_m/2)^2} \\ \nonumber \frac{\Gamma_{mp}^+(\omega)}{2} = \frac{\Gamma_r}{2} + \sum_m \frac{g_m^2\Gamma_m/2}{(\omega+\omega_m)^2 + (\Gamma_m/2)^2}. \end{align} A phenomenological damping parameter $\Gamma_r$ has been included to account for any intrinsic damping of the resonator photons. As a coherent quantum Ising system is tuned through its critical point, the spectral weight of the soft mode diverges, as will the coupling of the soft mode to the resonator photons. When $g_m \gg |\omega_r-\omega_m|$, one expects the RWA to break down, and it is necessary to make use of the full RPA magnon-polariton propagator to calculate resonator transmission. \section{Discussion of Results} \label{sec:DisRes} We have developed an effective field theory, and an equivalent bosonic Hamiltonian, describing a quantum Ising system in a microwave resonator. The theory has been used to calculate the magnon-polariton propagator of the light-matter system. In the Dicke and Hopfield models (see Appendix \ref{ap:Models}), the diamagnetic response of a light-matter system goes like the square of the coupling strength, $D \sim \alpha^2$, and the coupling strength varies like the square root of the atom or spin density, $\alpha \sim \rho^{\frac{1}{2}}$, as in equations (\ref{eq:BareCoupling}) and (\ref{eq:D}). With the diamagnetic term present, the effective resonator frequency diverges with the spin density (see Appendix \ref{ap:CHO}). This forestalls the superradiant quantum phase transition \cite{Rzazewski}, and leads to light-matter decoupling \cite{DeLiberato}. The situation here is different.
Importantly, the effective Hamiltonian describing the magnon-photon system (equation (\ref{eq:Hmp})) does not contain a diamagnetic term. The magnon-photon coupling strength depends on the spectral weight of the relevant magnon mode (see equation (\ref{eq:coupling})), and may be tuned by the applied transverse field independently of the resonator frequency. Consider a system with a single magnon mode. In the absence of damping, the upper and lower polariton modes follow from the poles of the magnon-polariton propagator (equation (\ref{eq:MPprop3})) \begin{align} \label{eq:wpm} \omega_{\pm}^2 = \frac{\omega_r^2+\omega_m^2}{2} \pm \sqrt{\biggr(\frac{\omega_r^2-\omega_m^2}{2}\biggr)^2+4g_m^2\omega_r\omega_m}. \end{align} As the system is tuned through a quantum critical point, the spectral weight of the soft mode will diverge, as will the coupling $g_m^2 \sim A_m \sim 1/\omega_m \rightarrow \infty$. At the degeneracy point, $\omega_r=\omega_m$, there ought to be an avoided level crossing in the magnon-polariton spectrum \begin{align} \label{eq:wmpdeg} \omega_{\pm} = \omega_r \sqrt{1 \pm 2 g_m/\omega_r} \qquad \text{if} \qquad \omega_m=\omega_r, \end{align} or possibly a superradiant quantum phase transition if $g_m > \sqrt{\omega_m\omega_r}/2$. Recall from the discussion following equation (\ref{eq:coupling}) that $g_m$ and $A_m$ are temperature dependent, so the condition for superradiance is valid at finite temperatures. We have neglected dissipation and decoherence of the soft mode. The divergent spectral weight of the soft mode will lead to strong coupling to the resonator photons; it will also lead to strong coupling with bath degrees of freedom such as extraneous photons and phonons. Prior to a discussion of the damped magnon-polariton system, we provide a brief analysis of the propagator in the random phase approximation, and in mean field theory. In the random phase approximation, we capture the coupling between photons and collective spin excitations in the system. At the mean field level, we capture single ion excitations. \subsection{Mean Field Theory} \label{sec:MF} In order to calculate the magnon-polariton propagator in the random phase approximation, an auxiliary field was introduced to account for the magnetic fluctuations. The resulting theory accounts for spins coupled to collective excitations in the material. In order to capture excitations at individual sites, a mean field theory (MF) is more appropriate. One may calculate the magnon-polariton propagator in MF theory without introducing the auxiliary field. Our starting point for the MF calculation is equation (\ref{eq:PF}), where $\mathcal{H}'$ is now \begin{align} \mathcal{H}' = - \alpha (a^{\dagger}+a) \frac{1}{\sqrt{N}} \sum_i \delta J_i^z. \end{align} We have dropped the interactions between the fluctuations of the spins about their MF. The photon Hamiltonian is given by \begin{align} \mathcal{H}_{\gamma} = \omega_r \biggr(a^{\dagger} a + \frac{1}{2}\biggr) - \lambda (a^{\dagger}+a). \end{align} At the MF level, there is no diamagnetic term in the photon Hamiltonian. The diamagnetic term came from a shift in the auxiliary field in the RPA theory. One may introduce harmonic oscillator variables, as in Section \ref{sec:Hgamma}, to obtain an effective action for the photons. The result is the same as in equation (\ref{eq:Lgamma}), with $\omega_{\gamma}$ replaced with the resonator frequency $\omega_r$. 
Recall that the shift in the photon frequencies ($\omega_r \rightarrow \omega_{\gamma}$) came from the diamagnetic term in the photon Hamiltonian, which is not present in the MF theory. The resulting MF magnon-polariton partition function is (recall $\int_{\tau} \equiv \int_0^{\beta} d\tau / \beta$) \begin{align} \frac{Z_{mp}^{MF}}{Z_{\gamma}} = \biggr\langle\biggr\langle T_{\tau} \exp\biggr(\beta \alpha \int_{\tau} x \delta J_0^z \biggr)\biggr\rangle_{s}\biggr\rangle_{\gamma}, \end{align} where $x=a^{\dagger}+a$, and $\delta J_0^z$ is the zero wavevector component of the electronic spin operators. One may perform a cumulant expansion and trace over the microscopic spin degrees of freedom \cite{MckenzieStamp}. The average over the spins $\langle \cdots \rangle_s$ is taken with respect to the MF spin Hamiltonian. We have dropped $Z_{MF}$ from $Z_{mp}^{MF}$ because the mean field partition function of the spins plays no further role in determining the magnon-polariton propagator. Truncating the result of the cumulant expansion at the RPA (or Gaussian) level, one finds \begin{align} \frac{Z_{mp}^{MF}}{Z_{\gamma}} = \biggr\langle \exp\biggr( \frac{\beta \alpha^{2}}{2} \sum_n \chi_0(i\omega_n) |x(i\omega_n)|^2\biggr) \biggr\rangle_{\gamma}, \end{align} and the resulting mean field magnon-polariton propagator is \begin{align} D_{mp}^{MF}(z) = -\frac{2\omega_r}{\beta} \biggr[ \frac{1}{z^2-\omega_r^2+\alpha^2 2\omega_r \chi_0(z)}\biggr]. \end{align} This result might easily have been anticipated from equation (\ref{eq:MPprop2}). Writing the RPA susceptibility as a Born series, we have \begin{align} \chi = \chi_0 + \chi_0 V_0 \chi_0 + \chi_0 V_0 \chi_0 V_0 \chi_0 + \cdots \end{align} Truncating the series after the first term leads to the MF result involving light scattering from individual ions. Summing the full series leads to the RPA result which describes light coupled to collective modes of the system. We have derived the MF result here to demonstrate the use of the magnon-polariton theory at the MF level, where the introduction of the auxiliary field is unnecessary. Using the spectral decomposition of the MF propagator, which follows from equation (\ref{eq:chi0}), and neglecting the quasielastic diffusive pole, one may write the propagator as \begin{align} \label{eq:DmpMF} D_{mp}^{MF}(z) = -\frac{2\omega_r}{\beta}\left[\frac{1}{z^2-\omega_r^2 -\sum_{n>m} \frac{4 g_{mn}^2 E_{nm} \omega_r}{z^2 - E_{nm}^2}} \right]. \end{align} The coupling strength is $g_{mn}^2 = \alpha^2 a_{mn}$, where $a_{mn} = |c_{mn}|^2 p_{mn}$ is the spectral weight of the MF transition between states $n$ and $m$. In a resonator experiment, one expects both single ion excitations and collective modes. The eigenstates of the collective modes may involve quantum coherent superpositions of many different single ion eigenstates. As the system is subject to decoherence, the collective mode behavior may give way to single ion excitations. As will be demonstrated for the LiHoF$_4$ system, the relative strengths of the single ion excitations and the collective modes can be compared by tuning their respective spectral weights. \subsection{Random Phase Approximation} \label{sec:RPA} In the absence of damping, we obtain the magnon-polariton propagator in the random phase approximation.
A spectral decomposition of the magnon-polariton propagator may be obtained by making use of equation (\ref{eq:MPprop3}), which we write as \begin{align} \beta D_{mp}(z)\biggr|_{RPA} = -\frac{P(z)}{Q(z)}, \end{align} where \begin{align} P(z) = 2\omega_r \prod_m (z^2-\omega_m^2), \end{align} and \begin{align} \label{eq:resSpec} Q(z) &= (z^2-\omega_r^2)\prod_m (z^2-\omega_m^2) \\ \nonumber & \qquad - \sum_m 4g_m^2 \omega_m\omega_r \prod_{m'\neq m} (z^2-\omega_{m'}^2). \end{align} The magnon-polariton modes $ \{ \omega_p \} $ follow from the zeros of $Q(z)$, which we may rewrite as $Q(z) = \prod_p (z^2-\omega_p^2)$. The spectral decomposition of the propagator is then \begin{align} \beta D_{mp}(z)\biggr|_{RPA} = \sum_p \frac{A_p 2\omega_p}{\omega_p^2-z^2}, \end{align} with \begin{align} \label{eq:ResSpecWeight} A_p = \frac{\omega_r \prod_m (\omega_p^2-\omega_m^2)}{\omega_p \prod_{q \neq p}(\omega_p^2-\omega_q^2)}. \end{align} We see that the magnon-polariton spectral weights scale like the inverse of the mode energy, $A_p \sim 1/\omega_p$. This RPA expression will be used to calculate the modes of LiHoF$_4$ in a microwave resonator. \subsection{Damped Magnon-Polariton Propagator} \label{sec:DampedProp} We have developed a theory of magnon-polaritons in quantum Ising systems, and discussed the resulting propagator in the random phase approximation and in mean field theory. We find that the magnon-photon coupling strength depends on the spectral weight of the relevant magnon mode. As a system is tuned through its quantum critical point, the divergent spectral weight of the soft mode leads to deep strong coupling between the soft mode and the resonator photons. As no diamagnetic term is present in the theory, one expects this to lead to a superradiant quantum phase transition. However, this neglects the effects of damping and decoherence due to the system's coupling to its environment, which we discuss here. Coupled light-matter systems, and associated quantum technologies, are generating considerable excitement \cite{Kockum, LQReview,HuRoadmap,BhoiKim}. Of course, in any real-world scenario, one must consider the impact of the environment on the system of interest. For recent research on this topic see \cite{CorteseDeLiberato} and references therein. In this work, we do not explore the full complexity of the memory effects, dissipation, and decoherence expected when a polaritonic system is coupled to a bath, or baths; rather, we introduce phenomenological parameters that may account for damping and decoherence in the magnon-polariton theory at a basic level. The results are then compared to experimental data in Section \ref{sec:Expt}. We assume ohmic (frequency independent) damping of the magnon modes, in which case the damped retarded magnon-polariton propagator may be written ($D_{mp}^{ret}(\omega) = \beta D_{mp}(z \rightarrow \omega+i0^+)$) \begin{align} \label{eq:dampedprop} D_{mp}^{ret}(\omega) = \frac{-2\omega_r}{\omega^2-\omega_{mp}^2 + i\omega \Gamma_{mp}} , \end{align} where from equation (\ref{eq:MPprop2}) \begin{align} \omega_{mp}^2 = \omega_r^2 + (\Gamma_r/2)^2 - 2 \alpha^2 \omega_r \chi'(\omega), \end{align} and \begin{align} \omega \Gamma_{mp} = \omega \Gamma_r + 2 \alpha^2 \omega_r \chi''(\omega). \end{align} A factor of $\Gamma_r$ has been included to account for any intrinsic damping of the resonator mode.
The $\Gamma_r$ term in the expression for $\omega_{mp}$ is a counterterm which eliminates a shift in the resonator frequency due to its damping (see the discussion in Section \ref{sec:DS}). The reactive and absorptive components of the dynamic susceptibility are given in equations (\ref{eq:chiprime}) and (\ref{eq:chiprime2}). The magnon damping functions $\{ \Gamma_m \}$ are assumed to be frequency independent, although they will vary with the transverse field. The magnon-polariton propagator can be viewed as a damped photon propagator, but the magnon ``bath'' leads to a frequency dependent damping function, and a complex set of magnon-polariton modes that follow from the zeros of $\omega^2-\omega_{mp}^2(\omega)$, or equivalently $\omega_{mp}(\omega_p) = \omega_p$. Consider a system with a single magnon mode for which the polariton modes follow from the real part of \begin{align} \omega_p^2 = \omega_r^2 - (\omega_p + i\Gamma_r/2)^2 - \frac{4g_m^2\omega_r\omega_m}{\omega_m^2-(\omega_p+i\Gamma_m/2)^2}. \end{align} When the damping is weak, we recover the upper and lower polariton modes given in equation (\ref{eq:wpm}). A superradiant phase transition will occur if the coupling strength is sufficiently strong to drive the lower polariton mode to zero. In the damped system, the condition for superradiance is \begin{align} \label{eq:gmDamped} g_m > \frac{\sqrt{\omega_m\omega_r}}{2} \biggr[\biggr(1+\frac{\Gamma_r^2}{4\omega_r^2}\biggr) \biggr(1+\frac{\Gamma_m^2}{4\omega_m^2}\biggr)\biggr]^{\frac{1}{2}}. \end{align} In the absence of damping and decoherence, if a magnon mode softens to zero, the magnon-polariton system will always be driven into a superradiant phase (recall $g_m \rightarrow \infty$ as $\omega_m \rightarrow 0$). With damping present, the divergence of $g_m$ may be matched by a divergence on the right hand side of equation (\ref{eq:gmDamped}) preventing the lower polariton mode from dropping to zero. Furthermore, if the constituent spins making up the soft mode are subject to decoherence, one expects a reduction in its spectral weight, and hence a reduction in the coupling strength $g_m$. This may lead to weak coupling and prevent superradiance. We will elaborate on this point in Section \ref{sec:Expt}. In a damped system composed of multiple magnon modes, the magnon-polariton mode and linewidth equations are \begin{widetext} \begin{align} \label{eq:wmp} \omega_{mp}^2 = \omega_r^2 + \biggr(\frac{\Gamma_r}{2}\biggr)^2 + \sum_m\frac{2 g_m^2 \omega_r (\omega-\omega_m)}{(\omega-\omega_m)^2 + (\Gamma_m/2)^2} - \sum_m\frac{2 g_m^2 \omega_r (\omega+\omega_m)}{(\omega+\omega_m)^2 + (\Gamma_m/2)^2} - 2 g_0^2 \omega_r \frac{(\Gamma_0/2)^2}{\omega^2-(\Gamma_0/2)^2} \end{align} and \begin{align} \label{eq:Gammamp} \omega \Gamma_{mp} = \omega \Gamma_r + \sum_m \frac{g_m^2 \omega_r \Gamma_m}{(\omega-\omega_m)^2 + (\Gamma_m/2)^2} - \sum_m \frac{g_m^2 \omega_r \Gamma_m}{(\omega+\omega_m)^2 + (\Gamma_m/2)^2} +2 g_0^2 \omega_r \frac{\omega \Gamma_0/2}{\omega^2 +(\Gamma_0/2)^2}. \end{align} \end{widetext} Note that the coupling to the zero mode, defined by $g_0^2 \equiv \alpha^2 \chi_{el}$, will decay exponentially with temperature, and vanish in the paramagnetic phase of the system. We will drop this mode from subsequent consideration. One may compare the results for $\omega_{mp}$ and $\Gamma_{mp}$ with the RWA results given in equations (\ref{eq:RPA1}) and (\ref{eq:RPA2}).
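A quick numerical reading of the damped superradiance condition, equation (\ref{eq:gmDamped}), is sketched below in Python. The soft-mode parameterization $g_m^2 = \alpha^2 A_m$ with $A_m = c/\omega_m$ is an assumption used only for illustration; whether the divergence of $g_m$ wins against the divergence of the right hand side then depends on the prefactors.
\begin{verbatim}
import numpy as np

def g_critical(wm, wr, Gm, Gr):
    """Critical coupling of eq. (gmDamped)."""
    return 0.5*np.sqrt(wm*wr)*np.sqrt(
        (1 + Gr**2/(4*wr**2))*(1 + Gm**2/(4*wm**2)))

# Assumed soft-mode behavior: g_m^2 = alpha^2*A_m with A_m = c/omega_m
alpha, c, wr, Gr = 0.5, 1.0, 2.0, 1e-4      # GHz, illustrative
wm = np.array([1.0, 0.1, 0.01, 0.001])      # soft mode tuned to zero
gm = alpha*np.sqrt(c/wm)

for Gm in (1e-3, 2.5):                      # weak vs strong damping
    print(f"Gamma_m = {Gm:g} GHz, superradiant:",
          gm > g_critical(wm, wr, Gm, Gr))
\end{verbatim}
With weak damping the diverging $g_m$ eventually crosses the threshold; with strong damping the right hand side diverges at the same rate as $g_m$ and, for these prefactors, the transition is avoided everywhere.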
As previously noted, as a coherent quantum Ising system is tuned through its critical point, one expects the RWA results to break down. Assuming that $g_m$ is sufficiently weak, or $\Gamma_m$ is sufficiently strong, to prevent superradiance, one finds the damping of the polariton mode at resonance ($\omega_p = \omega_m$) to be \begin{align} \Gamma_p = \Gamma_{mp}(\omega_p) \approx \Gamma_r \biggr[1+C\frac{\omega_r}{\omega_p}\biggr], \end{align} where the cooperativity of the system is $C=4g_m^2/(\Gamma_m \Gamma_r)$. Recall that $\omega_p$ is the polariton mode energy, $\omega_m$ is the magnon mode energy, and $\omega_r$ is the bare resonance frequency of the resonator. From inspection of equation (\ref{eq:Gammamp}), we see that as the soft magnon mode is tuned through $\omega_p$, a resonance is expected to appear in the linewidth of the polariton mode. The form and magnitude of the resonance will depend on how $g_m$ and $\Gamma_m$ vary with the transverse field. The collective magnon modes, in particular the soft mode, are entangled many-body eigenstates of the spin system. The quantum coherence of the collective magnon modes is not easily accounted for by the theory. When the quantum coherent superposition of spins comprising a particular magnon mode is in contact with its environment, one expects the superposition of spin states to give way to a classical mixture of spin states. In our analysis of the LiHoF$_4$ system below, we account for this decoherence by transferring spectral weight from the collective RPA excitations to the single ion excitation spectrum. This leads to mixed single ion and collective mode transmission in the magnon-polariton propagator. With dissipation and decoherence present, in the limit $g_m/\Gamma_m \rightarrow 0$, we have $\omega_{mp} = \omega_r$ and $\Gamma_{mp} = \Gamma_r$. The resonator shows no evidence of the magnon modes. We note, however, that this is distinct from the light-matter decoupling discussed by De Liberato \cite{DeLiberato}. In light-matter decoupling, the diamagnetic response of the system localizes the photon modes away from the matter modes and shifts the frequency of the photons, so that the polaritonic quasiparticle operators have a distinct light or matter character. The diamagnetic term is absent in the magnon-polariton theory, and the environment is an additional feature that may prevent superradiance. \section{Comparison to Experiment} \label{sec:Expt} So far, our analysis has been theoretical. In order to have confidence in the results, one must compare theoretical work to experimental data. We do so here by comparing the magnon-polariton theory to transmission spectra of LiHoF$_4$ in loop gap microwave resonators \cite{Libersky, LiberskySM}. Consider the low temperature effective Hamiltonian of the LiHoF$_4$ system \cite{Chakraborty, Tabei, MckenzieStamp} \begin{align} \label{eq:LiHo} \mathcal{H}_{eff} = - \frac{C_{zz}^2}{2} \sum_{i \neq j} V_{ij} \tau_{i}^{z}\tau_{j}^{z} - \frac{\Delta}{2} \sum_{i} \tau_{i}^{x} + \mathcal{H}_{hyp}, \end{align} where the interaction contains a dipolar component and a weaker antiferromagnetic component \begin{align} \label{eq:V} V_{ij} = J_D D_{ij}^{zz} - J_{nn}. \end{align} In what follows, we assume a LiHoF$_4$ sample with zero demagnetization field, consistent with a needle shaped sample, or a striped domain pattern in which the demagnetization field in the bulk of the sample averages to zero (see Section IIB of the supplement to reference \cite{LiberskySM} for more details).
The eigenstates of the $J=8$ holmium spins are mixed and split by the crystal electric field and an applied transverse field. The $\{\tau_i^{\mu}\}$ are Pauli operators describing the two lowest electronic spin states, and $C_{zz}(B_x)$ is a truncation parameter which depends on the applied transverse field, as does the effective transverse field, $\Delta(B_x)$, which splits the energies of the two lowest electronic spin eigenstates. The truncated longitudinal holmium electronic spin operator is $J^z = C_{zz} \tau^z$. The hyperfine component of the Hamiltonian contains the coupling of each effective spin-$1/2$ operator to its $I=7/2$ nucleus. This splits the single ion Hamiltonian into $16$ electronuclear levels, all of which can be accommodated using our formalism. Further details of the LiHoF$_4$ system are discussed in Appendix \ref{ap:LiHo}. The spin-photon interaction is assumed to be \begin{align} \mathcal{H}_{int} = -\alpha (a^{\dagger}+a) \delta J_0^{z}, \end{align} where $\delta J_0^z = C_{zz} \delta \tau_0^z$ is the $k=0$ wavevector component of the longitudinal spin fluctuation operator ($\delta J_0^z = J_0^z - \langle J^z \rangle_{MF}$) in Fourier space, and the coupling constant is (see equation \ref{eq:BareCoupling}) \begin{align} \label{eq:BareCoupling2} \alpha = \eta \sqrt{2\pi} \sqrt{\hbar \omega_r} \sqrt{\rho J_D}. \end{align} In LiHoF$_4$ we have four spins in each unit cell having volume $V_{cell} = 2.88 \times 10^{-28}m^3$, to give a total number of spins $N=4V_{sample}/V_{cell}$. The spin density is $\rho=4/V_{cell}=1.39 \times 10^{28} m^{-3}$, which is about 3.3 times the value in YIG, and the dipolar energy per unit cell is $\rho J_D = 13.52mK = 282MHz$. The filling factor $\eta$ is left as a free parameter which depends on details of the resonator. Consider, as an example, an $\omega_r/(2\pi)=1$GHz applied ac field. In temperature units, we have $\hbar \omega_r/k_B = 48mK$. Plugging in the numbers, we find the coupling at 1GHz to be \begin{align} \alpha \big|_{1GHz} \approx \eta \times 64mK = \eta \times 1.33GHz. \end{align} Using this value as a reference, the coupling for any given frequency (in GHz) is given by \begin{align} \alpha (f) \approx \eta \sqrt{f/f_0} \times 1.33 GHz, \end{align} where $f_0=1GHz$ is the reference frequency. The resonator transmission function is given by equation (\ref{eq:ResTrans}). It follows from equation (\ref{eq:dampedprop}) that \begin{align} |S_{21}|^2 \propto \text{Im}[D_{mp}^{ret}] = \frac{2\omega\omega_r\Gamma_{mp}}{(\omega^2-\omega_{mp}^2)^2 + (\omega\Gamma_{mp})^2}. \end{align} Without knowledge of the proportionality constant, one cannot obtain $\Gamma_{mp}$ from the amplitude and phase of the transmission function; however, one may still obtain the magnon-polariton modes, and compare qualitative features of their linewidths with theoretical results. If the coupling between the resonator photons and the magnetic excitations is weak, the modes of the resonator will differ little from the modes of LiHoF$_4$, apart from the appearance of an additional mode corresponding to the resonator frequency. In Figure \ref{fig:ModesWeakCoupling}, we illustrate the RPA transmission spectrum of a needle shaped sample of LiHoF$_4$ in a $1GHz$ resonator, at zero temperature, with a filling factor of $\eta=0.01$, along with the MF modes of LiHoF$_4$. The low energy RPA modes of the resonator differ little from the RPA modes of LiHoF$_4$, apart from the addition of the resonator mode.
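The coupling-constant estimate above is easy to reproduce; the short Python sketch below, with $k_B/h \approx 20.837$\,GHz/K as the only input constant, recovers $\alpha \approx \eta \times 64$\,mK $= \eta \times 1.33$\,GHz at 1\,GHz and evaluates $\alpha(f)$ at other frequencies.
\begin{verbatim}
import numpy as np

kB_over_h = 20.837                 # k_B/h in GHz per kelvin

rhoJD_mK = 13.52                   # rho*J_D = 13.52 mK = 282 MHz
f0 = 1.0                           # reference frequency (GHz)
hw_mK = 1e3*f0/kB_over_h           # hbar*omega_r/k_B at 1 GHz, in mK

# alpha = eta*sqrt(2*pi)*sqrt(hbar*omega_r)*sqrt(rho*J_D), per unit eta
alpha_mK = np.sqrt(2*np.pi)*np.sqrt(hw_mK*rhoJD_mK)
alpha_GHz = alpha_mK*1e-3*kB_over_h
print(f"alpha(1 GHz) = eta*{alpha_mK:.0f} mK = eta*{alpha_GHz:.2f} GHz")

def alpha_of_f(f, eta):
    """Coupling at frequency f in GHz: alpha scales like sqrt(f)."""
    return eta*np.sqrt(f/f0)*alpha_GHz

print(f"alpha(3.2 GHz) at eta = 0.25: {alpha_of_f(3.2, 0.25):.2f} GHz")
\end{verbatim}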
In the upper band of excitations, we see the gapped electronic mode which has been measured in neutron scattering experiments \cite{Ronnow2}. A comparison of spectral weights determined by equation (\ref{eq:ResSpecWeight}) shows that, under weak coupling, the RPA transmission spectrum of the resonator is dominated by the resonator mode. \begin{figure}[htp] \centering \includegraphics[width=8cm]{ResonatorModesWeakCouplingEta0p01.pdf} \caption{Modes of LiHoF$_4$, at zero temperature, in a 1GHz resonator with a filling factor of $\eta=0.01$. We assume the average demagnetization field in the sample is zero. Due to the weak coupling, the RPA modes of the resonator are much the same as the RPA modes of LiHoF$_4$, with an additional mode at the resonator frequency. The MF modes of the LiHoF$_4$ system are shown as dashed lines for comparison. The mode showing significant softening in the upper band of energy levels has dominant spectral weight. This mode has been measured in neutron scattering experiments \cite{Ronnow2}. In the inset, we see the lowest energy electronuclear mode soften to zero at the quantum critical point. A similar figure showing the electronuclear modes of LiHoF$_4$, and their spectral weights, is provided in reference \cite{MckenzieStamp}.} \label{fig:ModesWeakCoupling} \end{figure} When the coupling between the resonator photons and the magnetic excitations is weak, the resonator transmission spectrum does not exhibit any novel features. If the filling factor is increased to $\eta=0.25$, we see interesting features in the RPA transmission spectrum due to the hybridization of the magnon and photon modes. In Figures \ref{fig:DampedTransmission}--\ref{fig:DampedDecoherentTransmission2}, we show the effects of damping and decoherence on the theoretical resonator transmission, and we compare the results to experimental data. In Fig. \ref{fig:DampedTransmission}, we consider constant ohmic damping of the magnon modes. As discussed in Section \ref{sec:DampedProp}, we find that strong damping of the magnon modes may prevent the superradiant quantum phase transition expected as the quantum Ising material is tuned through its critical point. The theoretical results for the resonator transmission are in poor agreement with the experimental data, shown in Fig. \ref{fig:ExptTransmission}, and it is necessary to refine our treatment of the damped magnon modes. In Fig. \ref{fig:DampedModes}, we show the single ion and collective mode resonator transmission using a more realistic model for the damping parameters. Our estimates of the magnitudes of the damping parameters fall short of what is necessary to prevent superradiance. In order to account for this discrepancy, we introduce a phenomenological model to account for decoherence of the collective magnon modes. To explore the effects of decoherence, we assume spectral weight is transferred from collective magnon modes to single ion excitations in the magnon-polariton propagator. This leads to a reduced coupling between the collective magnon modes and the photons, and mixed single ion and collective mode transmission in the resonator. In Fig.
\ref{fig:DampedDecoherentTransmission}, we show the effects of tuning the decoherence rate of the collective magnon modes in the phenomenological model; as the decoherence rate is increased, the superradiant quantum phase transition gives way to an avoided level crossing between the magnon soft mode and the resonator mode; upon further increasing the decoherence rate, this in turn gives way to a resonance at the transverse field values where the resonator mode is degenerate with the soft mode. The model is then used to calculate mixed single ion and collective mode resonator transmission at frequencies where the resonator mode is degenerate with the lowest single ion excitation, and the results are compared to experimental data for a bimodal loop gap microwave resonator. We find good agreement between the experimental data and the theoretical results. \begin{figure} \centering \begin{minipage}{.23\textwidth} \centering \includegraphics[width=1\linewidth,trim={0.1cm 0.2cm 0.5cm 0.5cm},clip]{DampedTransmission.png} \end{minipage} \begin{minipage}{.23\textwidth} \centering \includegraphics[width=1\linewidth,trim={0.05cm 0.2cm 0.3cm 0.5cm},clip]{DampedTransmission2.png} \end{minipage} \caption{Damped RPA transmission function of LiHoF$_4$ in a 1.9GHz microwave resonator at zero temperature. We consider a sample of LiHoF$_4$ in which the average demagnetization field is zero, and assume a filling factor of $\eta=0.25$. In the upper left hand figure, the dampings of the magnon modes and the resonator mode are $\Gamma_m = 1\mu K =20.837kHz$ and $\Gamma_r=1nK=20.837Hz$, respectively. With this weak damping, the system is driven into a superradiant phase. On the right, the damping of the soft mode has been increased to $\Gamma_{m=1} = 0.5K=10.419GHz$ which stops the lower polariton mode from softening to zero, preventing the superradiant phase transition. The upper polariton mode is attenuated to the point where it is no longer visible in the transmission spectrum.} \label{fig:DampedTransmission} \end{figure} Consider Fig. \ref{fig:DampedTransmission}, in which we show the zero temperature transmission spectrum of LiHoF$_4$ in a $\omega_r/(2\pi) = 1.9GHz$ resonator with constant ohmic damping of the magnon modes. We see that when the modes are weakly damped, the lower polariton mode softens to zero marking a superradiant quantum phase transition in the system. As discussed following equation (\ref{eq:gmDamped}), by increasing the damping of the soft mode from $\Gamma_{m=1} = 1 \mu K=20.837kHz$ to $\Gamma_{m=1} = 0.5K=10.419GHz$, the lower polariton mode no longer softens to zero; however, the resulting transmission function is in poor agreement with the experimental data. In Fig. \ref{fig:ExptTransmission} we show the experimental resonator transmission and the inverse quality factor of the resonator mode ($1/Q$), which is proportional to the linewidth of the polariton mode. The inverse quality factor shows a resonance near the phase transition that may be decomposed into the sum of three distinct peaks. The central peak is due to absorption at the phase transition. As the LiHoF$_4$ sample is tuned through its critical point, one expects absorption at all frequencies, similar to critical opalescence \cite{LiberskySM}. The two satellite peaks correspond to resonances in the transmission function where the resonator polariton mode ($\omega_p$) is degenerate with the soft mode ($\omega_m$).
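At these degeneracy points, the resonant linewidth formula $\Gamma_p \approx \Gamma_r[1+C\,\omega_r/\omega_p]$ from Section \ref{sec:DampedProp} provides a quick way to gauge the size of the satellite peaks. A minimal Python sketch, with assumed (not fitted) parameter values:
\begin{verbatim}
def resonant_linewidth(gm, wm, wr, Gm, Gr):
    """Gamma_p ~ Gamma_r*(1 + C*wr/wp) at wp = wm, with the
    cooperativity C = 4*gm**2/(Gm*Gr); inputs in GHz."""
    C = 4*gm**2/(Gm*Gr)
    return C, Gr*(1 + C*wr/wm)

# Assumed values: a weakly coupled soft mode crossing the resonator
C, Gp = resonant_linewidth(gm=0.05, wm=2.0, wr=2.0, Gm=0.5, Gr=1e-3)
print(f"C = {C:.0f}, Gamma_p = {Gp*1e3:.1f} MHz (Gamma_r = 1 MHz)")
\end{verbatim}
Even a modest cooperativity thus produces an order-of-magnitude enhancement of $1/Q$ at the crossing.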
To better capture the experimental data, we consider a refined model for the damping parameters, and make use of an ansatz meant to capture the effects of decoherence of the collective magnon modes. \begin{figure} \centering \begin{minipage}{.23\textwidth} \centering \includegraphics[width=1\linewidth,trim={0.2cm 0.1cm 0.05cm 0.1cm},clip]{ExptTransmission1p9GHz.png} \end{minipage} \begin{minipage}{.23\textwidth} \centering \includegraphics[width=1\linewidth,trim={0.1cm 0.1cm 0.05cm 0.1cm},clip]{ExptQinv.png} \end{minipage} \caption{Measured transmission function of LiHoF$_4$ in a 1.9GHz microwave resonator. The inverse quality factor of the resonator mode is shown on the right. The measured value of $1/Q$ (blue) has been decomposed into the sum of three Gaussian peaks (red). The central peak corresponds to absorption at the phase transition. The satellite peaks to either side of the central peak occur where the soft mode is degenerate with the resonator mode.} \label{fig:ExptTransmission} \end{figure} In a more realistic model for the damping of the magnon modes, the damping parameters will vary as a function of transverse field and frequency. We neglect the memory effects associated with the frequency dependence of the damping parameters; however, we incorporate the transverse field dependence of the parameters by considering damping due to an oscillator bath environment at the frequency of the magnon mode. The damping of a mode at frequency $\omega_m$ is given by $\Gamma_m = \gamma'(\omega_m)$, where $\gamma'(\omega)$ is given in equation (\ref{eq:gamma}) of Appendix \ref{ap:CHO}. We find that \begin{align} \Gamma_m = 2\pi \sum_z g_{zm}^2 [\delta(\omega_m-\omega_z)-\delta(\omega_m+\omega_z)], \end{align} where the frequency-independent damping function $\Gamma_m$ is in agreement with what one obtains using a master equation approach \cite{Clerk}; the transverse field dependence of the damping function is due to the transverse field dependence of the magnon mode $\omega_m$. Converting the sum over bath modes to an integral, one obtains \begin{align} \Gamma_m = 2\pi g_{zm}^2 \rho_b(\omega_m) n(\omega_m), \end{align} where $\rho_b(\omega_m)$ is the density of states of the bath modes at frequency $\omega_m=\omega_z>0$, and $n(\omega_m)$ is the Bose-Einstein distribution function. Recall that the magnon-photon coupling strength in LiHoF$_4$ is given by $g_m^2=\alpha^2 A_m$ (equation (\ref{eq:coupling})), with $\alpha^2 = 2\pi \eta^2 (\rho J_D) \omega_r $, as in equation (\ref{eq:BareCoupling2}). We assume the coupling between magnons and bath modes has a similar form $g_{zm}^2 = g_0^2 A_m = Z \omega_m A_m$ (for light-matter coupling, one has $Z=2\pi \eta^2 \rho J_D$). Assuming a quadratic density of states, $\rho_b(\omega_m) = \rho_0 \omega_m^2$, in the high temperature limit ($\beta \omega_m \ll 1$), the damping parameter may be written \begin{align} \label{eq:Damping} \Gamma_m = C_0 A_m \omega_m^2 \quad \text{where} \quad C_0 = 2\pi Z \hbar \rho_0 k_B T. \end{align} The spectral weights of the magnon modes go like $A_m \sim 1/\omega_m$, so one expects a reduction in the damping of the soft mode as $\omega_m \rightarrow 0$. The damping of LiHoF$_4$ due to a phonon bath has been analyzed by Buchhold \textit{et al.} \cite{Buchhold} Specific heat measurements \cite{Aggarwal} in LiYF$_4$ and LiLuF$_4$ indicate Debye temperatures of $\theta_D=560K$ and $\theta_D=540K$, respectively. The Debye temperature of LiHoF$_4$ is expected to be similar.
In terms of the Debye temperature and the corresponding Debye frequency $\omega_D = k_B \theta_D /\hbar$, and assuming the phonon density of states is $\rho_{ph}(\omega_m) = \rho_0 \omega_m^2$, Buchhold \textit{et al.} find the damping of a magnon mode $\omega_m$ to be \begin{align} \Gamma_m \approx \gamma_D \frac{T}{\theta_D \omega_D^2} \omega_m^2 = \widetilde{\gamma}_D \frac{T}{\theta_D \omega_D^2} A_m \omega_m^2, \end{align} where in the final expression we have excluded $A_m$ from the decay rate at the Debye frequency and temperature, $\gamma_D$. Comparing with equation (\ref{eq:Damping}), we see that $C_0 = \widetilde{\gamma}_D T/ (\theta_D \omega_D^2)$. Little information on phonons in LiHoF$_4$ is available; however, for spin vacancies in diamond one has \cite{Buchhold} $\gamma_D / (\theta_D \omega_D^2) = 10^{-6} \rightarrow 10^{-5}\ (GHz\ K)^{-1}$. At the experimentally relevant temperature of $T=50mK$, this leads to $C_0 \approx 5 \times (10^{-8}\rightarrow 10^{-7}) GHz^{-1}$, and damping of the magnon modes of less than a kilohertz. This is very weak damping of the modes; however, interactions between magnetic fluctuations, and environmental degrees of freedom other than phonons, are expected to increase the damping. \begin{figure} \centering \begin{minipage}{.23\textwidth} \centering \includegraphics[width=1\linewidth,trim={0.2cm 0.2cm 0.5cm 0.5cm},clip]{DampedTransmissionRPA.png} \end{minipage} \begin{minipage}{.23\textwidth} \centering \includegraphics[width=1\linewidth,trim={0.1cm 0.2cm 0.3cm 0.5cm},clip]{DampedTransmissionMF.png} \end{minipage} \caption{Damped transmission function of LiHoF$_4$ in a $3.2GHz$ microwave resonator with a filling factor of $\eta=0.25$ in the RPA (left), and in MF theory (right). We consider the zero temperature transmission of a LiHoF$_4$ sample with zero average demagnetization field. The damping parameters are given by $\Gamma_m \propto A_m \omega_m^2$ (similarly for the MF modes). The proportionality constant is chosen so that the damping parameters are roughly in line with what one expects for spin vacancies in diamond (see text). In the RPA (left), the damping is insufficient to prevent superradiance in the system due to the divergent spectral weight of the magnon soft mode. The spectral weight carried by the lowest MF mode does not diverge, and the resultant coupling strength is not strong enough to cause superradiance in the single ion resonator spectrum (right). In a system subject to decoherence, one expects to see both single ion and collective mode transmission.} \label{fig:DampedModes} \end{figure} In Fig. \ref{fig:DampedModes}, we consider single ion and collective mode resonator transmission, with the damping of the collective modes given in (\ref{eq:Damping}), and the damping of the single ion excitations given by $\Gamma_{mn} = C_0 a_{mn} E_{nm}^2$, where $a_{mn}$ and $E_{nm}$ are discussed in Section \ref{sec:MF}. We consider a $3.2GHz$ resonator relevant to the experimental data shown in Fig. \ref{fig:DampedDecoherentTransmission2}; results for a $1.9GHz$ resonator are similar. The single ion resonator transmission follows from replacing $\chi$ with $\chi_0$ in equations (\ref{eq:wmp}) and (\ref{eq:Gammamp}), as discussed in Section \ref{sec:MF}. We set $C_0=10^{-5}\ K^{-1} = 4.8 \times 10^{-7}\ GHz^{-1}$, so the damping parameters are roughly in line with what one expects for spin vacancies in diamond.
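A minimal sketch of the resulting damping model, equation (\ref{eq:Damping}), using the quoted $C_0$ and an assumed spectral-weight scale $A_m = c/\omega_m$ (illustrative only):
\begin{verbatim}
C0 = 4.8e-7                      # 1/GHz, the diamond-based estimate

def magnon_damping(Am, wm):
    """Gamma_m = C0*A_m*omega_m^2, eq. (Damping); inputs in GHz."""
    return C0*Am*wm**2

# With an assumed weight scale A_m = c/omega_m, the soft mode is
# damped ever more weakly as it softens: Gamma_m = C0*c*omega_m.
c = 1.0
for wm in (5.0, 1.0, 0.1):
    print(f"omega_m = {wm:4.1f} GHz -> "
          f"Gamma_m = {magnon_damping(c/wm, wm)*1e6:.3f} kHz")
\end{verbatim}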
The MF and RPA resonator transmission is calculated at zero temperature, which accurately captures the dominant modes present at the experimentally relevant temperature of $T=50mK$. This validates using the $T=50mK$ estimate for the damping parameters in the zero temperature resonator transmission calculations. Modes corresponding to excitations between thermally excited states of the quantum Ising material will be the subject of future work. The damping of the collective magnon modes in Fig. \ref{fig:DampedModes} is insufficient to prevent a superradiant phase transition in the system, which is inconsistent with the experimental data. However, we have not accounted for the quantum coherence of the collective magnon modes. We attempt to do so by assuming that spectral weight is transferred from the collective magnon modes to the single ion excitation spectrum shown on the right hand side of Fig. \ref{fig:DampedModes}. Indeed, for each mode in equations (\ref{eq:wmp}) and (\ref{eq:Gammamp}), we assume (for example) \begin{align} & \qquad\qquad \frac{2g_m^2 \omega_r (\omega-\omega_m)}{(\omega-\omega_m)^2+(\Gamma_m/2)^2} \rightarrow \\ \nonumber &\frac{2 \widetilde{g}_m^2 \omega_r (\omega-\omega_m)}{(\omega-\omega_m)^2+(\Gamma_m/2)^2} + \frac{2 \widetilde{g}_{mn}^2 \omega_r (\omega-E_{nm})}{(\omega-E_{nm})^2+(\Gamma_{mn}/2)^2}, \end{align} where \begin{align} \widetilde{g}_m^2(\omega=\omega_m) = \alpha^2 A_m \biggr[\frac{\omega_m^2}{\gamma_{dec}^2+\omega_m^2}\biggr] \end{align} and \begin{align} \widetilde{g}_{mn}^2(\omega=\omega_m) = \alpha^2 a_{mn} \biggr[1-\frac{\omega_m^2}{\gamma_{dec}^2+\omega_m^2}\biggr]. \end{align} This leads to mixed single ion and collective mode transmission in the magnon-polariton propagator. Fourier transforming $\widetilde{g}_m^2(\omega)$, one finds that this ansatz corresponds to exponential decay of the collective mode spectral weight at a rate determined by $\gamma_{dec}$. We set $\omega=\omega_m$ to capture decoherence at the relevant frequency scale of the quantum Ising material. In the development of the magnon-polariton theory, the photons couple to an auxiliary field which describes magnetic fluctuations, and determines the magnon modes present in the material. The quantum coherence of the collective magnon modes is a tacit assumption which may not be valid if environmental degrees of freedom, or higher order interactions (beyond the RPA) between the magnetic fluctuations, lead to decoherence on timescales faster than the relevant timescales of the magnon modes. In Fig. \ref{fig:DampedDecoherentTransmission}, we show the mixed single ion and collective mode resonator transmission as one tunes the decoherence rate $\gamma_{dec}$. The damping parameters are chosen to be roughly consistent with what one expects for spin vacancies in diamond, as in Fig. \ref{fig:DampedModes}. With $\gamma_{dec} = 0.5GHz$, the reduction in the coupling strength is insufficient to prevent the superradiant quantum phase transition. Increasing the decoherence rate to $\gamma_{dec}=15GHz$, which is larger than the relevant magnon and resonator mode frequency, leads to an avoided level crossing in the transmission spectrum, rather than a superradiant phase transition, as shown in the upper right plot in Fig. \ref{fig:DampedDecoherentTransmission}. Further increasing the decoherence rate attenuates the soft mode, and weakens the avoided level crossing in the transmission spectrum.
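The weight-transfer ansatz itself is a one-line computation; the following Python sketch evaluates $\widetilde{g}_m^2$ and $\widetilde{g}_{mn}^2$ at $\omega=\omega_m$ for the decoherence rates used in the figures. The input weights are assumed values chosen for illustration, not LiHoF$_4$ matrix elements.
\begin{verbatim}
def split_coupling(alpha2, Am, amn, wm, gamma_dec):
    """Decoherence ansatz at omega = omega_m: spectral weight moves
    from the collective mode (A_m) to the single ion line (a_mn)."""
    f = wm**2/(gamma_dec**2 + wm**2)   # surviving collective fraction
    return alpha2*Am*f, alpha2*amn*(1 - f)

# Assumed inputs (GHz units): as gamma_dec grows past omega_m, the
# collective coupling is quenched and the single ion channel remains.
alpha2, Am, amn, wm = 0.1, 2.0, 0.5, 2.0
for gd in (0.5, 15.0, 100.0):
    gc2, gs2 = split_coupling(alpha2, Am, amn, wm, gd)
    print(f"gamma_dec = {gd:6.1f} GHz: collective g^2 = {gc2:.4f}, "
          f"single ion g^2 = {gs2:.4f}")
\end{verbatim}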
With the coherence time set to picoseconds, which is shorter than the timescale set by the inverse of the magnon mode frequency, the magnon soft mode will show up as a resonance in the resonator transmission spectrum, as seen in the experimental data. \begin{figure}[htp] \centering \begin{minipage}{.23\textwidth} \centering \includegraphics[width=1\linewidth,trim={0.1cm 0.1cm 0.5cm 0.8cm},clip]{./MixedTrans1p9GHzDamped0p5Deco.png} \end{minipage} \begin{minipage}{.23\textwidth} \centering \includegraphics[width=1\linewidth,trim={0.8cm 0.1cm 0.3cm 0.6cm},clip]{./MixedTrans1p9GHzDamped15Deco.png} \end{minipage} \begin{minipage}{.23\textwidth} \centering \includegraphics[width=1\linewidth,trim={0.1cm 0.2cm 0.5cm 0.5cm},clip]{./MixedTrans1p9GHzDamped100Deco.png} \end{minipage} \begin{minipage}{.23\textwidth} \centering \includegraphics[width=1\linewidth,trim={0.15cm 0.2cm 0.3cm 0.5cm},clip]{./MixedTrans1p9GHzDamped500Deco.png} \end{minipage} \caption{Mixed single ion and collective mode transmission of a LiHoF$_4$ sample in a 1.9GHz resonator at zero temperature. The filling factor is set to $\eta=0.25$ and we assume the average demagnetization field is zero. The damping parameters are chosen to be in line with what one might expect for spin vacancies in diamond. In the upper left figure the decoherence factor is set to $\gamma_{dec}=0.5GHz$. The sharp dip in the upper polariton mode occurs where the soft mode crosses $\gamma_{dec}$. In the upper right figure the decoherence factor has been increased to $\gamma_{dec}=15GHz$, which is sufficient to prevent superradiance. Further increasing the decoherence factor attenuates the soft mode and closes the avoided level crossing in the spectrum. In the experimental data, one expects a weak avoided level crossing to show up as a resonance in the inverse quality factor of the resonator.} \label{fig:DampedDecoherentTransmission} \end{figure} \begin{figure}[htp] \centering \begin{minipage}{.23\textwidth} \centering \includegraphics[width=1\linewidth,trim={0.2cm 0.2cm 0.5cm 0.8cm},clip]{./MixedTrans3p2GHzDamped100Deco.png} \end{minipage} \begin{minipage}{.23\textwidth} \centering \includegraphics[width=1\linewidth,trim={0.1cm 0.2cm 0.3cm 0.8cm},clip]{./MixedTrans3p7GHzDamped100Deco.png} \end{minipage} \begin{minipage}{.23\textwidth} \centering \includegraphics[width=1\linewidth,trim={0.2cm 0.2cm 0.5cm 0.5cm},clip]{./MixedTrans3p2GHz3p7GHzDamped100Deco.png} \end{minipage} \begin{minipage}{.23\textwidth} \centering \includegraphics[width=1\linewidth,trim={0.1cm 0.2cm 0.3cm 0.5cm},clip]{./ExptTransmission.png} \end{minipage} \caption{Mixed single ion and collective mode transmission of LiHoF$_4$ in 3.2GHz and 3.7GHz resonators at zero temperature. The filling factor is set to $\eta=0.25$, and the damping parameters are chosen to be in line with what one expects for spin vacancies in diamond. The decoherence factor is set to $\gamma_{dec}=100GHz$, a value for which, although faint, the soft mode is visible in the transmission spectrum. Comparing the avoided level crossing in the $3.2GHz$ resonator to the $3.7GHz$ resonator in the upper pair of figures, we see a larger avoided level crossing at the lower frequency. This is due to the increase of the spectral weight of the magnon mode at $3.2GHz$, which outweighs the reduction in coupling strength due to the lower resonator frequency.
In the lower pair of figures, we sum the calculated transmission from the $3.2GHz$ and the $3.7GHz$ resonators, and compare the results to transmission through a bimodal loop gap resonator. In the experimental data, interactions between the resonator modes lead to an antiresonance near $3.6GHz$ and hybridization of the polariton modes not accounted for in the theoretical calculation. The lowest polariton mode in the experimental data exhibits weak avoided level crossings consistent with the presence of the collective soft mode, and Walker modes, in the material (see text for details).} \label{fig:DampedDecoherentTransmission2} \end{figure} We have shown the effects of ohmic damping of the magnon modes, in conjunction with an ansatz meant to capture the impact of decoherence of the collective spin excitations comprising the magnon modes. When decoherence is accounted for, we find that the superradiant quantum phase transition, or strong avoided level crossing, expected as the spectral weight of the magnon soft mode diverges, gives way to a resonance in the resonator transmission function. In Fig. \ref{fig:DampedDecoherentTransmission2}, we consider resonator frequencies of $\omega_r/(2\pi) = 3.2GHz$ and $3.7GHz$, and compare the calculated transmission function to experimental data for a bimodal loop gap resonator. At these frequencies, the resonator modes are degenerate with the lowest single ion excitation in the system. We see strong avoided level crossings when the lowest single ion excitation is degenerate with the resonator modes. The increased spectral weight of the single ion excitation at 3.2GHz leads to a stronger avoided level crossing than at 3.7GHz, despite the reduction in frequency. This is consistent with the avoided level crossings seen in the experimental data. The experimental data is for a bimodal resonator. In the theoretical calculation we assume the two resonator modes are independent, and sum their response. This fails to capture interactions between the resonator modes, which lead to the antiresonance seen in the experimental data near $3.6GHz$, and mixing of the calculated polariton modes. Nevertheless, we find good agreement between the calculated resonator transmission and the experimental data. The lowest polariton mode exhibits a series of weak avoided level crossings in the ferromagnetic phase of the quantum Ising material. These avoided level crossings are due to the soft mode and Walker modes present in the material \cite{Walker1, Walker2}. An analysis of the Walker modes will be the subject of future work. Previously, in reference \cite{LiberskySM}, the magnon mode responsible for the avoided level crossings seen in the experimental data shown in Fig. \ref{fig:DampedDecoherentTransmission2} was attributed to an excited state transition; this was based on an RPA analysis of the LiHoF$_4$ crystal. In the current work, assuming mixed single ion and collective mode resonator transmission, we attribute these avoided level crossings to the lowest single ion excitation (ground state to first excited state) shown by the dashed line in the inset to Fig. \ref{fig:ModesWeakCoupling}. The structure and energy of the lowest single ion excitation, and the first excited state in the RPA calculation, are similar.
Accounting for, and exploring, the effects of dissipation when polariton modes are coupled to environmental degrees of freedom is an active research area \cite{WangHu, CorteseDeLiberato}; incorporating the effects of decoherence of the collective magnon modes comprising the magnon-polaritons in a quantum Ising material coupled to a resonator mode is a more difficult task. Here, we have developed a basic formalism amenable to investigating these problems in real materials, with LiHoF$_4$ being the magnetic system of primary interest. We have demonstrated the effects of ohmic damping of the magnon modes in LiHoF$_4$ in a microwave resonator, and we have explored the consequences of decoherence of the magnon modes present in the material via an ansatz in which spectral weight is transferred from the collective modes to single ion excitations. Our results are in good agreement with experimental data for LiHoF$_4$ in a microwave resonator; we leave further refinements of the theory, and more sophisticated numerical analysis, as a subject for future work. \section{Conclusions and Outlook} Beginning with a microscopic spin model for a quantum Ising system in a microwave resonator, we have derived an effective finite temperature quantum field theory for the magnon-photon system, and an effective Hamiltonian for the coupled bosonic modes. The theory has been used to calculate the magnon-polariton propagator, and the results have been applied to LiHoF$_4$, which has a complex, multilevel, single site Hamiltonian. One may also apply this formalism to the quantum optics models, and quantum environment models, discussed in Appendix \ref{ap:Models}. Our analysis of a quantum Ising material via the introduction of an auxiliary field describing the magnetic fluctuations goes beyond standard spin quantization techniques. The resulting theory captures multiple magnon modes, the quasi-elastic diffusive pole of the quantum Ising material, and excitations between thermally excited states of the material. Our treatment of the light in terms of harmonic oscillator variables is basic; however, we believe it provides clarity, and we have made contact between paradigmatic quantum optics models and oscillator bath theory. One may extend and refine the theory by treating the light, or the environment, in a more sophisticated manner. A key result of this paper is that tuning the applied transverse field allows one to tune the magnon-photon coupling strength. As one approaches the critical point of the quantum Ising material, the magnon-photon coupling strength will diverge. A fixed system of spins in an ac magnetic field will not exhibit a diamagnetic response, so deep strong coupling between the magnons and photons is achieved without the light-matter decoupling inherent in the Dicke \cite{Dicke}, Dicke-Ising \cite{Cortese}, and Hopfield models \cite{Hopfield}. However, in the real world, coupling to an environment will lead to dissipation and decoherence, which may lead to weak coupling between the magnon and photon modes. We have treated dissipation and decoherence phenomenologically, and compared the results of the theory to experimental data on LiHoF$_4$ in a loop gap microwave resonator. We consider the agreement between the experimental data and the theoretical results to be good, although further refinement of the theory, particularly more detailed modeling of the environment and the resulting decoherence, and a more sophisticated numerical analysis, would be beneficial.
We leave this as a subject for future work. We have focused on the magnon-polariton propagator because it may be the best way to make contact between theoretical work and experimental results, and our treatment of the light in terms of harmonic oscillator variables is the easiest way to obtain results. Harmonic oscillator variables, and eigenstates, have also been used to study entanglement, and the quantum-chaotic properties, of the Dicke model \cite{EmaryBrandesPRE, Lambert}. Alternatively, one may make use of a coherent state basis of eigenstates for the light. This was the original approach to the problem \cite{Wang} that allows for a more thorough investigation of the thermodynamics of the system. More recently, coherent states were used to study entanglement between a qubit and a field mode \cite{Everitt}. We see further investigation into the entanglement properties of light-matter systems as a promising area for research, with particular relevance to high sensitivity magnon detection \cite{LQScience}, and associated quantum technologies. This work provides a detailed microscopic theory of a quantum optics system. Such a theory is necessary in order to make progress in more topical research areas such as the non-equilibrium phases and phase transitions \cite{Sieberer, DallaTorre, Kirton}, and novel dynamics \cite{Henriet, Hanai}, present in damped-driven quantum systems \cite{Kasprzak, Muniz}. The formalism here is complementary to, and more general than, standard approaches which make use of bosonic or fermionic representations of the spin degrees of freedom, and the field theory is amenable to treatment via the Keldysh functional integral approach. Finally, the burgeoning field of quantum magnonics will require models of light-matter interactions, along the lines of the present investigation. \section{Acknowledgments} The authors would like to thank Yikai Yang and Philip Stamp for helpful discussions. Experimental work at Caltech was supported by U.S. Department of Energy Basic Energy Sciences, Award No. DE-SC0014866. \begin{appendices} \section{Other Models} \label{ap:Models} In the absence of spin-spin interactions, our model shares similarities with the Dicke model \cite{Dicke}, which is a paradigmatic model of quantum optics. The basic Dicke model describes an atomic cloud, approximated as a set of two level systems, coupled to a single photonic field mode ($\hbar=1$) \begin{align} \mathcal{H}_{Dicke} = \omega_r a^{\dagger} a + \omega_0 J^z + \frac{\alpha}{\sqrt{N}} (a^{\dagger}+a) J^x + \mathcal{H}_{A^2}. \end{align} The collective atomic operators are given by $J^{\mu} = \sum_i J_i^{\mu}= \sum_i \sigma_i^{\mu}/2$, where the $\sigma_i^{\mu}$ are Pauli operators. In this model, the atoms (or spins) are mobile charged particles. This collective set of atomic operators couples to a position operator of a single field mode, $x \sim a^{\dagger}+a$. The diamagnetic term, \begin{align} \label{eq:A2} \mathcal{H}_{A^2} = D (a^{\dagger}+a)^2, \end{align} comes from squaring the canonical momentum of the mobile charged particles. Invoking the Thomas-Reiche-Kuhn sum rule for a multilevel atom \cite{Rzazewski, Nataf}, one finds that in the two level approximation $D > \alpha^2/\omega_0$, so that the magnitude of the diamagnetic term diverges like the square of the coupling strength. If Ising interactions between the atoms in the Dicke model are included, one has the Dicke-Ising model \cite{Cortese}.
The absence of the diamagnetic term in equation (\ref{eq:Hx}) distinguishes it from the Dicke-Ising model. As the diamagnetic term has important consequences, these two models should be considered distinct. The Dicke model was introduced in 1954 to describe an atomic system in a light field. In 1958, Hopfield developed a model for dielectric materials in an electromagnetic field \cite{Hopfield}. Considering only a pair of modes in the resulting exciton-polariton theory, the Hopfield model is given by ($\hbar=1$) \begin{align} \label{eq:Hop} \mathcal{H}_{Hop.} = \omega_r a^{\dagger} a + \omega_0 b^{\dagger} b - ig (a^{\dagger}+a) (b^{\dagger}-b) + \mathcal{H}_{A^2}. \end{align} The coupling in the Hopfield model is between an effective position operator, $x\sim a^{\dagger}+a$, and an effective momentum operator $p \sim i(b^{\dagger}-b)$. The diamagnetic term is the same as for the Dicke model (equation \ref{eq:A2}), with $D = g^2/\omega_0$. As for the spin-photon model in equation (\ref{eq:Hp}), one can show that the substitution $i(b^{\dagger}-b) \rightarrow b^{\dagger}+b$ leads to an equivalent formulation of the model (see Appendix \ref{ap:coupling}). The formalism developed here encompasses both the Dicke and Hopfield models, and generalizes the basic Dicke model to include interactions between multilevel spins or atoms. The models developed by Dicke and Hopfield share a connection with work on quantum environments. In quantum optics, light is an intrinsic part of the system; in the theory of quantum environments, light, other bosonic or fermionic modes, and spin degrees of freedom, are extrinsic to the system of interest, and lead to dissipation and decoherence in the system. The quantum optics models discussed above share strong similarities with standard decoherence models describing a quantum system coupled to its environment, such as the Caldeira-Leggett model and the spin-boson model \cite{CaldeiraLeggett, LeggettSB}. In the decoherence models, the system is composed of the matter modes, and the environment is analogous to the light in the quantum optics models. The formalism developed in this work is applicable to all the models discussed above. Furthermore, it can be used to generalize the basic Dicke model and spin-boson model to include interactions between multilevel atoms or spins with complicated single ion Hamiltonians. \section{Momentum versus Position Coupling} \label{ap:coupling} The magnon-polariton theory has been derived for a system in which the spins couple to a photon position operator, as in equation (\ref{eq:Hx}). One can show that equation (\ref{eq:Hp}) is an equivalent formulation of the model. The two expressions are related by a canonical transformation that swaps photon position and momentum operators \cite{Leggett84}. Similarly, the position-momentum coupling in the Hopfield model, given by equation (\ref{eq:Hop}), may be replaced with a coupling between position operators. Consider the Hopfield model. In terms of harmonic oscillator variables, \begin{align} x = \sqrt{\frac{\hbar}{2m\omega}} (a^{\dagger}+a) \quad \text{and} \quad p = i \sqrt{\frac{\hbar m \omega}{2}} (a^{\dagger}-a), \end{align} the model is written \begin{align} \mathcal{H}_{Hop.} =\frac{P^2}{2M} + \frac{1}{2}M\omega_r^2 X^2 + \frac{(p-cmX)^2}{2m} + \frac{1}{2}m\omega_0^2 x^2, \end{align} with $c = 2g \sqrt{M\omega_r/m\omega_0}$.
One may replace the position-momentum coupling with a position coupling by making the change of variables $\widetilde{x} = p/(m\omega_0)$ and $\widetilde{p} = -m\omega_0 x$. In terms of creation and annihilation operators, this canonical transformation leads to \begin{align} \widetilde{\mathcal{H}}_{Hop.} = \hbar \omega_r &\biggr(a^{\dagger} a + \frac{1}{2}\biggr) + \hbar \omega_0 \biggr(\widetilde{b}^{\dagger} \widetilde{b} + \frac{1}{2}\biggr) \\ \nonumber &- \hbar g (a^{\dagger}+a)(\widetilde{b}^{\dagger}+\widetilde{b}) + \frac{\hbar g^2}{\omega_0}(a^{\dagger}+a)^2, \end{align} where the coupling is now between position operators. Note that this Hamiltonian is equivalent to the Caldeira-Leggett Hamiltonian (see Appendix \ref{ap:CHO}) if only a single bath mode is considered. For reference, we note that the roles of the $a$ and $b$ bosons in the interaction and in the diamagnetic term may be interchanged by making use of a gauge transformation \cite{Garziano}. Now consider the model given by equation (\ref{eq:Hp}). One may develop the magnon-polariton theory in the same manner as for the position coupling case. The photon component of the magnon-polariton Hamiltonian, $\mathcal{H}_{mp} = \mathcal{H}_{\gamma} + \mathcal{H}_{\phi} + \mathcal{H}_{int}$, is then \begin{align} \mathcal{H}_{\gamma} = \hbar \omega_r \biggr(a^{\dagger} a + \frac{1}{2}\biggr) - i \hbar \lambda (a^{\dagger}-a) - D (a^{\dagger}-a)^2, \end{align} or, in terms of harmonic oscillator variables, \begin{align} \mathcal{H}_{\gamma} = \frac{p^2}{2m_r} + \frac{1}{2} m_r \omega_r^2 x^2 -\sqrt{\frac{2\hbar}{m_r \omega_r}} \lambda p + \frac{2}{m_r \hbar \omega_r} D p^2. \end{align} One may rescale the mass and frequency of the oscillators, \begin{align} m_{\gamma} = m_r \biggr[1+\frac{4D}{\hbar \omega_r}\biggr]^{-1} \quad \text{and} \quad \omega_{\gamma} = \omega_r \sqrt{1+\frac{4D}{\hbar \omega_r}}, \end{align} to obtain \begin{align} \mathcal{H}_{\gamma} = \frac{p^2}{2m_{\gamma}} + \frac{1}{2} m_{\gamma} \omega_{\gamma}^2 x^2 -\sqrt{\frac{2\hbar}{m_{\gamma} \omega_{\gamma}}} \lambda_{\gamma} p, \end{align} where $\lambda_{\gamma}$ is given in equation (\ref{eq:freqshift}). When the spins couple to a photon momentum operator the interaction between the auxiliary field and the photons is given by \begin{align} \mathcal{H}_{int} = \hbar \alpha_{\gamma} \sqrt{\frac{2}{\hbar m_{\gamma} \omega_{\gamma}}} \phi_0 p, \end{align} with $\alpha_{\gamma}$ given in equation (\ref{eq:alpha}). Combining the terms involving photon operators, $\mathcal{H}_{\gamma \phi}=\mathcal{H}_{\gamma}+\mathcal{H}_{int}$, we have \begin{align} \mathcal{H}_{\gamma \phi} = \frac{p^2}{2m_{\gamma}} + \frac{1}{2} m_{\gamma} \omega_{\gamma}^2 x^2 - \frac{2\hbar}{m_{\gamma} \omega_{\gamma}} p (\lambda_{\gamma}-\alpha_{\gamma} \phi_0). \end{align} The canonical transformation between the photon position and momentum operators leads to \begin{align} \widetilde{\mathcal{H}}_{\gamma \phi} = \frac{\widetilde{p}^2}{2m_{\gamma}} + \frac{1}{2} m_{\gamma} \omega_{\gamma}^2 \widetilde{x}^2 - \sqrt{2\hbar m_{\gamma} \omega_{\gamma}} \widetilde{x} (\lambda_{\gamma} - \alpha_{\gamma} \phi_0), \end{align} which is equivalent to the result obtained if the spins are coupled to photon position operators in the original Hamiltonian (the rescaled mass of the oscillator does not affect the quantized theory).
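The equivalence of momentum and position coupling can also be checked numerically. The Python sketch below (with illustrative parameter values and a truncated Fock space) diagonalizes the Hopfield Hamiltonian of equation (\ref{eq:Hop}) and its position-coupled transform, and compares their excitation spectra.
\begin{verbatim}
import numpy as np

nmax = 30
a1 = np.diag(np.sqrt(np.arange(1, nmax)), 1)   # annihilation operator
I = np.eye(nmax)
A, B = np.kron(a1, I), np.kron(I, a1)          # photon and matter modes
xA, xB = A.conj().T + A, B.conj().T + B
pB = 1j*(B.conj().T - B)
num = lambda M: M.conj().T @ M                 # number operator

wr, w0, g = 1.0, 1.3, 0.4                      # assumed parameters
D = g**2/w0                                    # diamagnetic strength

H_xp = wr*num(A) + w0*num(B) - g*xA@pB + D*xA@xA   # eq. (Hop)
H_xx = wr*num(A) + w0*num(B) - g*xA@xB + D*xA@xA   # transformed form

E1 = np.linalg.eigvalsh(H_xp)
E2 = np.linalg.eigvalsh(H_xx)
# Excitation energies coincide, confirming the canonical equivalence
# (the connecting rotation is diagonal in the Fock basis, so the
# agreement survives the truncation).
print(np.round(E1[:6] - E1[0], 8))
print(np.round(E2[:6] - E2[0], 8))
\end{verbatim}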
\section{The L$\text{i}$H$\text{o}$F$_4$ System} \label{ap:LiHo} Consider the low temperature effective Hamiltonian of LiHoF$_4$ given in equation (\ref{eq:LiHo}) of the main text. The truncation of the LiHoF$_4$ system, and the low energy electronuclear modes present in the system, have been dealt with in detail elsewhere \cite{Chakraborty, Tabei, MckenzieStamp, EisenlohrVojta}; here we present details relevant to the calculation of the magnon-polariton propagator. In the random phase approximation (RPA), the longitudinal dynamic susceptibility may be written as $\chi(z) = \chi_0(z)/(1-V_0 \chi_0(z))$, where the mean field (MF) susceptibility, $\chi_0(z) = \widetilde{\chi}_0(z) + \chi_{el}^0 \delta_{z,0}$, is written explicitly in terms of the MF parameters of the system in equation (\ref{eq:chi0}). The inelastic component of the RPA susceptibility is $\widetilde{\chi}(z) = \widetilde{\chi}_0(z)/(1-V_0 \widetilde{\chi}_0(z))$, and the RPA expression for the quasi-elastic diffusive pole is \begin{align} \chi_{el} = \frac{\widetilde{\chi}_0(0)+\chi_{el}^0} {1-V_0 (\widetilde{\chi}_0(0)+\chi_{el}^0)} - \frac{\widetilde{\chi}_0(0)}{1-V_0 \widetilde{\chi}_0(0)}. \end{align} Defining the ratio of the MF and RPA modes of the system to be \begin{align} R \equiv \frac{1}{1-V_0 \widetilde{\chi}_0(0)} = \frac{\prod_{n>m} E_{nm}^2}{\prod_m \omega_m^2}, \end{align} the elastic component of the RPA susceptibility may be written \begin{align} \chi_{el} = \frac{R^2 \chi_{el}^0}{1-R V_0 \chi_{el}^0}. \end{align} The elastic component of the dynamic susceptibility has not been analyzed explicitly in this work; however, it is provided here for reference. The inelastic component of the dynamic susceptibility, given in equation (\ref{eq:specrep}), determines the RPA modes of the LiHoF$_4$ system and their spectral weights. These spectral weights determine the strength of the magnon-photon coupling in the magnon-polariton theory. In terms of the MF energy levels and matrix elements of the longitudinal spin operator, the RPA expression for the inelastic component of the longitudinal dynamic susceptibility at zero wavevector is \begin{widetext} \begin{align} \widetilde{\chi}(z) = \frac{-C_{zz}^2\sum_{n>m}|c_{mn}|^2 p_{mn} 2E_{nm} \prod_{t>s \neq nm} (E_{ts}^2-z^2)} {\prod_{n>m} (E_{nm}^2-z^2)-C_{zz}^2 V_0\sum_{n>m}|c_{mn}|^2 p_{mn} 2E_{nm} \prod_{t>s \neq nm} (E_{ts}^2-z^2)}. \end{align} \end{widetext} In a needle shaped sample of LiHoF$_4$, with zero demagnetization field, the zero wavevector component of the interaction strength (equation (\ref{eq:V})), is approximately $V_0 \approx 74mK$, and, as mentioned following equation (\ref{eq:V}), $C_{zz}$ is a truncation parameter, with $J^z=C_{zz}\tau^z$ in the truncated spin-1/2 electronic subspace. The remaining parameters are defined in Section \ref{sec:DS} following equation (\ref{eq:chi0}). The poles of the dynamic susceptibility determine the RPA magnon modes of the system, and their residues determine the spectral weights of the modes. The poles and residues can be calculated as in Section \ref{sec:RPA} of the body of the paper. One finds the spectral weights of the RPA magnon modes to be given by \begin{align} A_m = \frac{C_{zz}^2}{\omega_m}\sum_{n>m} |c_{mn}|^2 p_{mn} E_{nm} \frac{\prod_{t>s \neq nm} [E_{ts}^2-\omega_m^2]} {\prod_{s \neq m} [\omega_s^2-\omega_m^2]}, \end{align} where $\{\omega_m \}$ are the zero wavevector RPA modes of the system.
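This closed-form expression is straightforward to transcribe; the Python sketch below evaluates it for an assumed two-transition system (the inputs are illustrative, not LiHoF$_4$ matrix elements) and exhibits the soft-mode divergence discussed next.
\begin{verbatim}
import numpy as np

def rpa_weights(Czz, E, w, modes):
    """A_m from MF transition energies E and weights w = |c_mn|^2 p_mn.
    The RPA modes are taken as given here rather than solved for."""
    A = np.empty_like(modes)
    for i, x in enumerate(modes):
        num = sum(w[k]*E[k]*np.prod(np.delete(E, k)**2 - x**2)
                  for k in range(len(E)))
        den = np.prod(np.delete(modes, i)**2 - x**2)
        A[i] = (Czz**2/x)*num/den
    return A

E = np.array([2.0, 6.0])       # assumed MF transition energies (GHz)
w = np.array([0.7, 0.3])       # assumed transition weights
for soft in (1.5, 0.5, 0.05):  # soft RPA mode pushed toward zero
    modes = np.array([soft, 6.1])
    print(f"soft mode {soft:4.2f} GHz -> A =",
          np.round(rpa_weights(1.0, E, w, modes), 3))
\end{verbatim}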
The spectral weights of the RPA modes scale like $A_m \sim 1/\omega_m$, with the spectral weight of the soft mode diverging at the quantum critical point. \section{Coupled Harmonic Oscillators} \label{ap:CHO} Consider the Caldeira-Leggett Hamiltonian in which a harmonic oscillator is linearly coupled to a bath of harmonic oscillators (quantum Brownian motion) \cite{CaldeiraLeggett, Weiss, BreuerBook} \begin{align} \mathcal{H}_{CL} = \frac{P^2}{2M} + &\frac{1}{2}M\omega_s^2 X^2 +\sum_z \biggr[\frac{p_z^2}{2m_z} + \frac{1}{2}m_z\omega_z^2 x_z^2\biggr] \\ \nonumber & - \sum_z c_z X x_z + \sum_z \frac{c_z^2}{2m_z \omega_z^2} X^2. \end{align} The bath leads to damping and decoherence of the primary oscillator. In terms of bosonic creation and annihilation operators the Caldeira-Leggett Hamiltonian may be written \begin{align} \label{eq:CL} \mathcal{H}_{CL} = &\hbar \omega_s \biggr(b_0^{\dagger} b_0 + \frac{1}{2}\biggr) + \sum_z \hbar \omega_z \biggr(a_z^{\dagger} a_z + \frac{1}{2}\biggr) \\ \nonumber &- \sum_z \hbar g_z (a_z^{\dagger}+a_z) (b_0^{\dagger}+b_0) + \sum_z D_z (b_0^{\dagger}+b_0)^2, \end{align} where \begin{align} g_z = \frac{c_z}{2 \sqrt{m_z\omega_z M \omega_s}} \quad \text{and} \quad D_z = \frac{\hbar g_z^2}{\omega_z}. \end{align} The Caldeira-Leggett counterterm is equivalent to the diamagnetic term present in light-matter Hamiltonians. In order to determine the damping due to the bath, we calculate the propagator of the primary oscillator. The counterterm shifts the frequency of the primary harmonic oscillator \begin{align} \mathcal{H}_{CL} = \hbar \overline{\omega}_s \biggr(b^{\dagger} b &+ \frac{1}{2}\biggr) + \sum_z \hbar \omega_z \biggr(a_z^{\dagger} a_z + \frac{1}{2}\biggr) \\ \nonumber &- \sum_z \hbar \overline{g}_z (a_z^{\dagger}+a_z) (b^{\dagger}+b), \end{align} where the rescaled coupling and shifted frequencies are \begin{align} \label{eq:renormalizedcoupling} \overline{g}_z = \frac{c_z}{2 \sqrt{m_z\omega_z M \overline{\omega}_s}} \quad \text{and} \quad \overline{\omega}_s^2 = \omega_s^2\biggr[1 + \frac{4D_z}{\hbar \omega_s} \biggr]. \end{align} The propagator of the shifted oscillator modes is defined by \begin{align} D_b(\tau) = \bigr\langle T_{\tau} \bigr(b^{\dagger}(\tau)+b(\tau)\bigr) \bigr(b^{\dagger}+b\bigr) \bigr\rangle. \end{align} Treating the interaction between oscillators perturbatively using the Matsubara formalism, one finds the propagator of the primary oscillator in Matsubara frequency space to be \begin{align} D_b(i\omega_n) = -\frac{2\overline{\omega}_s}{\beta \hbar} \left[ \frac{1}{(i\omega_n)^2 - \overline{\omega}_s^2 - \sum_z \frac{4\overline{g}_z^2 \overline{\omega}_s \omega_z}{(i\omega_n)^2 - \omega_z^2}}\right]. \end{align} This is equivalent to equation (\ref{eq:MPprop3}) of the main text apart from the fact that the counterterm has shifted the frequency of the primary oscillator. Consider a single bath mode. The poles of the polariton propagator yield the upper and lower polariton modes \begin{align} \omega_{\pm}^2 = \frac{\overline{\omega}_s^2+\omega_z^2}{2} \pm \sqrt{\biggr(\frac{\overline{\omega}_s^2-\omega_z^2}{2}\biggr)^2 +4\overline{g}_z^2 \overline{\omega}_s\omega_z}. \end{align} In the absence of the counterterm ($D=0$), the lower polariton mode reaches zero at a critical value of $g_z = \sqrt{\omega_s \omega_z}/2$. In a light-matter system, this coupling strength marks a superradiant quantum phase transition \cite{HeppLieb, Wang}. The presence of the counterterm forestalls this transition so that $\omega_- > 0$ for any value of $g_z$.
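This interplay is easy to see numerically; a minimal Python sketch for one bath mode, in illustrative units with $\hbar=1$:
\begin{verbatim}
import numpy as np

def polariton_modes(ws, wz, g, counterterm=True):
    """Upper/lower modes for one system and one bath oscillator.
    With the counterterm (D = g^2/omega_z), omega_s is shifted and g
    rescaled; without it (D = 0) the lower mode reaches zero at the
    critical coupling g = sqrt(ws*wz)/2."""
    if counterterm:
        ws_bar = ws*np.sqrt(1 + 4*g**2/(ws*wz))
        g = g*np.sqrt(ws/ws_bar)
        ws = ws_bar
    mid = 0.5*(ws**2 + wz**2)
    gap = np.sqrt((0.5*(ws**2 - wz**2))**2 + 4*g**2*ws*wz)
    return np.sqrt(mid + gap), np.sqrt(max(mid - gap, 0.0))

for g in (0.3, 0.5, 0.8):    # g = 0.5 is critical for ws = wz = 1
    lo_free = polariton_modes(1.0, 1.0, g, counterterm=False)[1]
    lo_ct = polariton_modes(1.0, 1.0, g, counterterm=True)[1]
    print(f"g = {g}: lower mode {lo_free:.3f} (D = 0), "
          f"{lo_ct:.3f} (with counterterm)")
\end{verbatim}
A lower mode pinned at zero in the $D=0$ column signals the onset of the superradiant instability, while the counterterm keeps $\omega_->0$ at all couplings.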
The counterterm is also responsible for a decoupling of the light and matter modes (or system and bath modes) as the coupling strength is increased \cite{Rzazewski}. Indeed, consider what happens as the bare coupling between oscillators diverges, $c_z \rightarrow \infty$. The shifted frequency of the primary oscillator diverges linearly with the coupling, $\overline{\omega}_s \sim c_z$, and the rescaled coupling between the oscillators goes like $\overline{g}_z \sim c_z/\sqrt{\overline{\omega}_s} \sim \sqrt{c_z}$. Comparing the rescaled coupling strength to the shifted oscillator frequency we see that $\overline{\eta} \equiv \overline{g}_z/\overline{\omega}_s \sim 1/\sqrt{c_z} \rightarrow 0$. As the bare coupling between oscillators is increased, the bath mode will become an increasingly weak perturbation to the system. We return now to the oscillator bath environment. In order to make contact with standard results, we express the propagator for the shifted system modes in terms of the propagator of the original modes of the system $D_b = \overline{\omega}_s D_{b_0} / \omega_s$. Analytically continuing to real frequencies ($D_{b_0}^{ret}(\omega) = \beta D_{b_0}(i\omega_n \rightarrow \omega + i0^+)$), the retarded propagator of the original bosonic system modes may be written \begin{align} D_{b_0}^{ret}(\omega) = -\frac{2\omega_s}{\hbar} \left[\frac{1}{\omega^2 +i \gamma \omega- \omega_s^2}\right], \end{align} where the damping function is \begin{align} \gamma(\omega) = \frac{i}{\omega} \biggr[ \sum_z\frac{4g_z^2\omega_s}{\omega_z}+ \lim_{\epsilon \rightarrow 0} \sum_z \frac{4g_z^2 \omega_s \omega_z}{\omega^2 + i\omega \epsilon - \omega_z^2} \biggr]. \end{align} The real and imaginary parts of the damping function, $\gamma = \gamma' +i\gamma''$, are given by \begin{align} \omega \gamma''(\omega) = \sum_z\frac{4g_z^2\omega_s}{\omega_z}\biggr[ \frac{-\omega^2}{\omega_z^2-\omega^2}\biggr], \end{align} and \begin{align} \label{eq:gamma} \omega \gamma' = 2\pi \omega_s \sum_z g_z^2 \biggr[\delta(\omega-\omega_z)-\delta(\omega+\omega_z)\biggr]. \end{align} In terms of the original harmonic oscillator variables, the spectral density of the bath is defined by \begin{align} J(\omega) \equiv M \omega \gamma'(\omega) = \frac{\pi}{2} \sum_z \frac{c_z^2}{m_z \omega_z} \delta(\omega-\omega_z), \end{align} in agreement with the standard result. The counterterm (or equivalently, the diamagnetic term) eliminates a zero frequency shift in $\gamma''$. This term is absent in the magnon-polariton theory. In the magnon-polariton theory, the photons are considered to be the system, and the magnons, which are themselves subject to dissipation and decoherence, comprise a bath. \end{appendices}
{ "timestamp": "2022-09-20T02:22:08", "yymm": "2209", "arxiv_id": "2209.08674", "language": "en", "url": "https://arxiv.org/abs/2209.08674" }
\section{Introduction} This paper focuses on the representation of a random variable as an adapted Lebesgue (as opposed to stochastic) integral. We start the analysis with statements of our main results and then place them in the extant literature while offering motivation for their study. \medskip Let $(\Omega, (\mathcal{F}_t)_{t\leq T}, \mathcal{F},\mathbb{P})$ be a filtered probability space. Given an $\mathcal{F}_T$-mea\-su\-ra\-ble random variable $\xi$, we ask whether there exists a progressively measurable process $\beta$ such that \begin{align}\label{equ:ac-repr} \xi = \int_0^T \beta_u\, du, \text{ a.s.} \end{align} with $\beta$ in a given integrability class. We focus on the Lebesgue measure on a finite time horizon $[0,T]$ because other settings (alternative measures instead of the Lebesgue measure, alternative horizons, or discrete time on an infinite horizon instead of continuous time) lead to a similar analysis. \medskip Our main results apply to two integrability classes for $\beta$, but we discuss interesting features of some other classes, too, in Section \ref{sec:examples}. We say that $\beta$ is \emph{weakly regular} if $\int_0^T \beta_u^2\, du < \infty$, a.s., and \emph{strongly regular} if $\mathbb{E}\bigl[ \int_0^T \beta_u^2\, du\bigr]<\infty$. Assuming throughout that all $\mathcal{F}_t$-local martingales are continuous, we show in Theorem \ref{thm:main-strong} that the representation \eqref{equ:ac-repr} holds for some strongly regular $\beta$ if and only if $\xi \in \mathbb{L}^1$ and \begin{align*} \mathbb{E}\biggl[\int_0^T \frac{1}{T-t}\, d\langle M\rangle_t\biggr]<\infty \text{ where } M_t = \mathbb{E}\bigl[\xi \mid \mathcal{F}_t\bigr]. \end{align*} In the less restrictive, weakly regular case, our Theorem \ref{thm:main-weak} states that \eqref{equ:ac-repr} holds for a weakly regular $\beta$ if and only if there exist a probability measure $\mathbb{Q}$ equivalent to $\mathbb{P}$ and a $\mathbb{Q}$-local martingale $M$ with $M_T=\xi$ such that \begin{align} \label{equ:slow} \int_0^T \frac{1}{T-t}\, d\langle M\rangle_t<\infty,\text{ a.s.} \end{align} Intuitively, an absolutely continuous representation of the form \eqref{equ:ac-repr} with a weakly regular $\beta$ exists if and only if $\xi$ closes a local martingale whose quadratic variation grows slowly enough at $T$. This problem has an interesting link with the so-called ``fundamental theorem of asset pricing'' (see \cite[Theorem 1.1, p.~487]{DelSch94}). As is well known in the Mathematical Finance community, this theorem states that a locally bounded semimartingale $M$ is a local martingale under some measure $\mathbb{Q}$ equivalent to $\mathbb{P}$ if and only if it satisfies the condition NFLVR, a slightly stronger version of the classical NA (no arbitrage) condition of mathematical finance. We may think, informally, of a process that satisfies NFLVR as a measure-free version of a local martingale, or, similarly, as a semimartingale whose local-martingale part is everywhere more active than its finite-variation part. \\ When focusing on the representation \eqref{equ:ac-repr} of $\xi$ under the weaker, probability-free, condition on $\beta$, the question boils down to the relationship between $\xi$, the set of null events, and the filtration.
Rephrased in financial terms, what we show is that \eqref{equ:ac-repr} holds if and only if $\xi$ closes a price process which has the NFLVR property and, moreover, is a ``slow'' local martingale under a suitable $\mathbb{Q}$, in the sense of \eqref{equ:slow}. Such a ``slow'' local martingale that converges to $\xi$ can be used as a proxy for the good approximability of $\xi$ by $\mathcal{F}_t$-adapted random variables as $t\nearrow T$. \medskip Unlike in the case of martingale representation, the question of uniqueness of an absolutely continuous representation admits a trivially negative answer in many interesting integrability classes, including both the weak and strong regularity discussed above. That fact served as a prompt to look for a canonical, rather than unique, $\beta$. When $\mathbb{E}\bigl[\int_0^T \beta_u^2\, du\bigr]<\infty$ is required, the $\beta$ that minimizes $\mathbb{E}\bigl[\int_0^T \beta_u^2\, du\bigr]$ admits an easy-to-verify explicit form, namely \[ \hat{\beta}_t = \frac{1}{T} M_0 + \int_0^t \frac{1}{T-u}\, dM_u, \ t\in [0,T), \] where $M_t = \mathbb{E}\bigl[\xi \mid \mathcal{F}_t\bigr]$. Unfortunately, we could not identify an analogous natural notion of canonicity in the weakly regular case. \medskip Absolutely continuous representation issues arise quite easily in applications. For instance, in \cite{AidBia21} the authors deal with a linear-quadratic stochastic control problem on the Wiener space, arising from carbon regulation. In that problem, the controls are square-integrable \emph{rates}, i.e., state dynamics involve integrals of these controls with respect to $dt$. Furthermore, the objective function contains a terminal penalty term which is a function of an integral of one of the controls, $\beta$, so that the random variable $\xi = \int_0^T \beta_t\, dt$ appears in the objective function. Since the problem is not strictly convex in $\beta$, the authors of \cite{AidBia21} were only able to obtain an explicit expression for the optimal $\hat \xi$, and for the associated martingale $\hat M_t = \mathbb{E}\bigl[ \hat \xi \mid \mathcal{F}_t\bigr]$. They left open the problem of finding an optimal, square-integrable rate $\hat \beta$ that \emph{represents} the optimal $\hat \xi$ (see \cite[Remark 4.1]{AidBia21}). Integrable-enough absolutely continuous representations come in handy in other contexts, as well. For example, they provide useful estimates when proving existence of solutions to stochastic differential equations. The interested reader can consult the book \cite[Chapter 6]{FGS17} for a general treatment, or \cite{BGZ21} for an application to stochastic delayed differential equations in an optimal investment problem. \medskip The only existing result concerning absolutely continuous representations we are aware of is the ``factorization formula'' of Da Prato and Zabczyk (see \cite[Theorem 5.2.5, p.~58]{DaPZab96}). Set on an abstract Wiener space, it provides an explicit absolutely continuous representation of a random variable given by a stochastic integral. It relies on a version of a stochastic Fubini theorem (see \cite[Theorem 5.2.5]{DapZab14}) but does not address the regularity of the representation itself, or provide any necessary conditions. A deeper discussion of why their approach, based on the stochastic Fubini theorem, does not lead to the kinds of results we are interested in is given in Remark \ref{rem:Fubini}.
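Before describing our contributions in more detail, we record a small Monte Carlo sketch of the canonical $\hat{\beta}$ (this is our illustration only; the particular martingale below is a convenient test case and not part of the results). Taking $T=1$ and $M_t = \int_0^t (T-u)\, dW_u$, the integrability condition above clearly holds, $\hat{\beta}$ reduces to $\hat{\beta}_t = W_t$, and the Lebesgue integral of $\hat{\beta}$ recovers $\xi = M_T$ path by path:

```python
# Toy simulation (ours) of the canonical representation: for M_t = int_0^t (T-u) dW_u
# one has beta_hat_t = M_0/T + int_0^t dM_u/(T-u) = W_t, and int_0^T W_t dt = M_T = xi
# by stochastic integration by parts.
import numpy as np

rng = np.random.default_rng(0)
T, n, n_paths = 1.0, 20_000, 200
dt = T / n
t = np.linspace(0.0, T, n + 1)

dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))             # Brownian increments
W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)

xi = ((T - t[:-1]) * dW).sum(axis=1)       # xi = int_0^T (T-u) dW_u
lebesgue = (W[:, :-1] * dt).sum(axis=1)    # int_0^T beta_hat_t dt with beta_hat = W

print("max |xi - int beta_hat dt| over paths:", np.abs(xi - lebesgue).max())
# The gap is O(dt) and vanishes as the grid is refined.
```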
Our results extend the existing ones in several directions. First, we give \emph{necessary and sufficient} conditions on the random variable $\xi$ for the representation to exist under both weak and strong regularity. Furthermore, in the strongly regular case we show that the unique \emph{martingale} solution to the representation problem arises as the $\mathbb{L}^2$-norm minimizer on the product space. The paper is organized as follows: Section 2 treats the strongly regular and Section 3 the weakly regular case; Section 4 contains further examples, results and comments. \medskip \textbf{Setup and notation.} We consider a measurable space $(\Omega,\mathcal{F})$, together with a maximal family $\mathcal{P}$ of mutually equivalent probability measures on $\mathcal{F}$, as well as a right-continuous filtration $\mathbb{F}=(\mathcal{F}_t)_{t\in[0,T]}$. We impose the following standing assumption throughout: \begin{assumption} \label{asm:cont} The filtration $(\mathcal{F}_t)_{t\in[0,T]}$ is continuous, i.e., each $(\mathcal{F}_t)$-martingale with respect to some (and then all) $\mathbb{P} \in \mathcal{P}$ is continuous. \end{assumption} We say that a continuous process $M$ is a \emph{$\mathcal{P}$-local martingale} if it is a local martingale under some $\mathbb{P}\in\mathcal{P}$. The set of all $\mathcal{P}$-local martingales is denoted by $\mathcal{M}^{loc}$. For $\mathbb{P} \in \mathcal{P}$, $\mathbb{L}^p(\mathbb{P})$ is a shorthand for $\mathbb{L}^p(\Omega,\mathcal{F}, \mathbb{P})$, while $\mathbb{L}^{p,q}(\mathbb{P})$, $q \in [0,\infty)$, denotes the set of all $(\mathcal{F}_t)$-predictable processes $\beta$ with $\int_0^T |\beta_u|^p\, du \in \mathbb{L}^q(\mathbb{P})$. When $p\geq 1$, the space $\mathbb{L}^{p,1}(\mathbb{P})$ comes with the norm $\|\beta\|_{\mathbb{L}^{p,1}(\mathbb{P})}=\mathbb{E}^{\mathbb{P}}\bigl[ \int_0^T |\beta_u|^p\, du\bigr]^{1/p}$, while no topology on $\mathbb{L}^{p,0}(\mathbb{P})$ will be needed. Since the spaces $\mathbb{L}^{p,0}(\mathbb{P})$, $\mathbb{P} \in \mathcal{P}$, coincide, we omit the probability measure from the notation and simply write $\mathbb{L}^{p,0}$. For $\xi \in \mathcal{F}_T$ and $\mathbb{P} \in \mathcal{P}$, we set \begin{align*} \mathcal{B}^{p,q}(\xi,\mathbb{P}) := \Bigl\{ \beta \in \mathbb{L}^{p,q}(\mathbb{P}) : \textstyle\int_0^T \beta_u\, du = \xi\text{ a.s.}\Bigr\}. \end{align*} When $q=0$, we omit the measure $\mathbb{P}$ and write only $\mathcal{B}^{p,0}(\xi)$. \section{The strongly regular case} In this section we choose and fix a probability measure $\mathbb{P} \in \mathcal{P}$ and use it as the underlying measure in all probabilistic statements. In particular, we write $\mathbb{L}^2$ and $\mathbb{L}^{2,1}$ for $\mathbb{L}^2(\mathbb{P})$ and $\mathbb{L}^{2,1}(\mathbb{P})$. \begin{theorem} \label{thm:main-strong} For $\xi\in\mathbb{L}^1$, let $\{M_t\}_{t\in [0,T]}$ and $\{\hat{\beta}_t\}_{t\in [0,T)}$ be defined by \begin{align} \label{equ:mb-m} M _t &= \mathbb{E}\bigl[ \xi \mid \mathcal{F}_t\bigr],\ t\in [0,T], \\ \label{equ:mb-b} \hat{\beta}_t &= \frac{1}{T} M_0 + \int_0^t \frac{1}{T-u}\, dM_u, \ t\in [0,T).
\end{align} The following statements are equivalent under Assumption \ref{asm:cont}: \begin{enumerate}[label*=\arabic*.] \item $\mathcal{B}^{2,1}(\xi) \ne \emptyset$. \item $\hat{\beta} \in \mathcal{B}^{2,1}(\xi)$. \item $\hat{\beta} \in \mathbb{L}^{2,1}$. \item $\mathbb{E}\bigl[\int_0^T \frac{1}{T-t}\, d\langle M\rangle_t\bigr]<\infty$. \end{enumerate} When $\mathcal{B}^{2,1}(\xi) \ne \emptyset$, the process $\hat{\beta}$ given by \eqref{equ:mb-b} is, up to a version, \begin{enumerate} \item[(a)] the unique martingale on $[0,T)$ in $\mathcal{B}^{2,1}(\xi)$, \item[(b)] the minimal $\mathbb{L}^{2,1}$-norm element in $\mathcal{B}^{2,1}(\xi)$. \end{enumerate} \end{theorem} \begin{proof} \emph{1.$\to$ 2.} Assuming that $\mathcal{B}^{2,1}(\xi)$ is nonempty, consider the minimization problem \begin{align} \label{equ:inf} \inf_{\beta\in \mathcal{B}^{2,1}(\xi)} \mathbb{E}\biggl[\int_0^T \beta_u^2\, du\biggr] = \inf_{\beta\in \mathcal{B}^{2,1}(\xi)} \|\beta\|_{\mathbb{L}^{2,1}}^2. \end{align} The set $\mathcal{B}^{2,1}(\xi)$ is convex and closed in $\mathbb{L}^{2,1}$. By intersecting it with a large-enough ball in $\mathbb{L}^{2,1}$, we may assume that it is also bounded in $\mathbb{L}^{2,1}$. The Banach-Alaoglu theorem then ensures that this restricted subset of $\mathcal{B}^{2,1}(\xi)$ is \emph{weakly compact}. Since the $\mathbb{L}^{2,1}$-norm is a weakly lower semicontinuous function, so is its square, and thus there exists a $\tilde{\beta}$ which attains the minimum in \eqref{equ:inf}. This minimizer is also unique by strict convexity of the objective function. \medskip Next, we show that $\tilde{\beta}$ admits a modification which is a martingale on $[0,T)$. We start by perturbing $\tilde{\beta}$ in the direction of a process $\gamma \in \mathbb{L}^{2,1}$ with $\int_0^T \gamma_u\, du = 0$, a.s. Since $\tilde{\beta} + \varepsilon \gamma \in \mathcal{B}^{2,1}(\xi)$ and $\|\tilde{\beta}\pm \varepsilon \gamma\|^2\geq \|\tilde{\beta}\|^2$ for each $\varepsilon\in\mathbb{R}$, we obtain the following set of ``first-order conditions'' \begin{align} \label{equ:variational} \mathbb{E}\biggl[ \int_0^T \tilde{\beta}_u \gamma_u\, du\biggr] = 0, \ \forall\, \gamma\in \mathbb{L}^{2,1} \text{ with } \int_0^T \gamma_u\, du = 0, \text{ a.s.} \end{align} Given $t < s$ in $[0,T)$, for each $\mathcal{F}_t$-measurable random variable $\chi \in \mathbb{L}^2$ we define \begin{align*} \gamma^{\chi}_u = \begin{cases} \chi \frac{1}{s-t}, & u \in (t,s], \\ -\chi \frac{1}{T-s}, & u \in (s,T], \end{cases} \end{align*} so that $\int_0^T \gamma^{\chi}_u\, du =0$ for each $\chi$. By applying the equality in \eqref{equ:variational} to $\gamma^{\chi}$ for all $\mathcal{F}_t$-measurable $\chi \in \mathbb{L}^2$ we obtain \begin{align*} \mathbb{E}\biggl[ \frac{\tilde{A}_s - \tilde{A}_t}{s-t} \,\Big|\, \mathcal{F}_t\biggr] = \mathbb{E}\biggl[ \frac{\tilde{A}_T -\tilde{A}_s}{T-s} \,\Big|\, \mathcal{F}_t\biggr], \text{ a.s.,} \end{align*} where $\tilde{A}_t = \int_0^t \tilde{\beta}_u\, du$.
We rearrange the obtained equality further to conclude that \begin{align} \label{equ:iden} \frac{1}{T-s} \Big(\mathbb{E}\bigl[ \tilde{A}_s \mid \mathcal{F}_t\bigr] - M_t\Big) = \frac{1}{T-t}\Big( \tilde{A}_t - M_t \Big), \text{ a.s.,} \end{align} for all $t<s$ in $[0,T)$. The left-hand side of \eqref{equ:iden} above is a martingale in $t$ on $[0,s]$, and, therefore, so is the right-hand side. We apply It\^o's formula and set the $dt$-term to $0$ to conclude that \begin{align} \label{equ:betadrift} \tilde{\beta}_t = \frac{M_t - \tilde{A}_t}{T-t}\text{ a.s., for almost all $t$.} \end{align} It follows that $\tilde{\beta}$ has a continuous version on $[0,T)$, which we adopt from now on. Furthermore, the right-hand side of \eqref{equ:betadrift} is a semimartingale on $[0,T)$, so we can use It\^o's formula once more to conclude that \[ \tilde{\beta}_t = \frac{1}{T} M_0 + \int_0^t \frac{1}{T-u}\, dM_u \text{ for } t\in [0,T).\] Therefore, $\hat{\beta} = \tilde{\beta}$ and statement \emph{2.} follows immediately. \medskip \emph{2.}~$\to$ \emph{3.}~ Immediate. \medskip \emph{3.}~$\leftrightarrow$~\emph{4.} Either of the assumptions \emph{3.}~and \emph{4.}~guarantees that both $M$ and $\hat{\beta}$ are $\mathbb{L}^2$-bounded martingales on each compact subinterval of $[0,T)$. Therefore, the equivalence follows from the following computation, based on Fubini's theorem: \begin{equation*} \begin{split} \mathbb{E}\biggl[\int_0^T (\hat{\beta}_u - \hat{\beta}_0)^2\, du\biggr] = \int_0^T\mathbb{E}\biggl[\int_0^t \frac{1}{(T-r)^2}\, d\langle M\rangle_r\biggr]\, dt = \mathbb{E}\biggl[ \int_0^T \frac{1}{T-t}\, d\langle M\rangle_t\biggr]. \end{split} \end{equation*} \medskip \emph{3.}~$\to$~\emph{1.} It suffices to show that $\int_0^T \hat{\beta}_t\, dt = \xi$, a.s. The definition of $\hat{\beta}$ in \eqref{equ:mb-b} and integration by parts imply that for $t\in [0,T)$ we have \begin{align} \label{equ:ip-1} (T-t) \hat{\beta}_t = T \hat{\beta}_0 + \int_0^t \,dM_u - \int_0^t \hat{\beta}_u\, du = M_t - \int_0^t \hat{\beta}_u\, du. \end{align} Another round of integration by parts, this time applied to the stochastic integral $\int_0^t \frac{1}{T-u}\, dM_u$, implies that \begin{align} \label{equ:ip-2} \hat{\beta}_t = \frac{1}{T-t} M_t - \int_0^t \frac{M_u}{ (T-u)^2}\, du. \end{align} Put together, identities \eqref{equ:ip-1} and \eqref{equ:ip-2} give \begin{align*} \int_0^t \hat{\beta}_u \, du = (T-t) \int_0^t \frac{M_u}{(T-u)^2}\, du \text{ for } t\in [0,T). \end{align*} The final step is to use l'H\^opital's rule and the fact that $M_t\to \xi$, as $t\to T$. \medskip Concerning the last part of the statement of the theorem, (b) was established in the course of the proof above. For (a), we assume that there exists another martingale $\beta^*$ in $\mathcal{B}^{2,1}(\xi)$, so that $$ M_t = \mathbb{E}\biggl[\int_0^T \beta^*_u\,du \,\Big|\, \mathcal{F}_t\biggr] = \int_0^t \beta^*_u\, du + \beta^*_t (T-t).$$ The equality \eqref{equ:ip-1} above implies that $$ 0= \int_0^t (\beta^*_u - \hat{\beta}_u)\, du + (\beta^*_t - \hat{\beta}_t) (T-t) \text{ for all } t\in [0,T), \text{ a.s.} $$ It follows that $\beta^*_t - \hat{\beta}_t$ is continuously differentiable for $t\in [0,T)$, and the conclusion $\beta^* = \hat{\beta}$ follows by differentiation.
\end{proof} \section{The weakly regular case} For $\xi \in \mathcal{F}_T$, let $\mathcal{M}^{loc}(\xi)$ denote the set of all $M\in \mathcal{M}^{loc}$ such that $M_T = \xi$, a.s. Let $\mathcal{P}^1(\xi)$ be the set of probabilities in $\mathcal{P}$ which integrate $\xi$. For $\mathbb{P} \in\mathcal{P}^1(\xi)$, we set \[ M^{\mathbb{P},\xi}_t = \mathbb{E}^{\mathbb{P}}\bigl[\xi \mid \mathcal{F}_t\bigr],\ t\in [0,T],\] taken in its continuous version, so that $M^{\mathbb{P},\xi}$ is the unique $\mathbb{P}$-martingale in $\mathcal{M}^{loc}(\xi)$. Finally, for $M \in \mathcal{M}^{loc}$ we define \begin{align} \label{equ:beM} \hat{\beta}^M_t = \frac{1}{T} M_0 + \int_0^t \frac{1}{T-u}\, dM_u, \ t\in [0,T). \end{align} \begin{theorem} \label{thm:main-weak} For a random variable $\xi \in \mathcal{F}_{T}$, the following are equivalent: \begin{enumerate}[label*=\arabic*., itemsep = 0.5em] \item $\mathcal{B}^{2,0}(\xi) \ne \emptyset$. \item $\mathcal{B}^{2,1}(\xi, \mathbb{P})\ne \emptyset$ for some $\mathbb{P}\in\mathcal{P}^1(\xi)$. \item $\hat{\beta}^M \in \mathbb{L}^{2,0}$ for some $M \in \mathcal{M}^{loc}(\xi)$. \item $\int_0^T \frac{1}{T-t}\, d\langle M\rangle_t<\infty \text{ a.s.}$, for some $M \in \mathcal{M}^{loc}(\xi)$. \item $\mathbb{E}^{\mathbb{P}}\bigl[ \int_0^T \frac{1}{T-t}\, d\langle M^{\mathbb{P},\xi}\rangle_t \bigr]<\infty$, for some $\mathbb{P} \in \mathcal{P}^1(\xi)$. \end{enumerate} \end{theorem} \begin{proof} \emph{1.}~$\to$~\emph{2.} We pick $\beta \in \mathcal{B}^{2,0}(\xi)$ and $\mathbb{P}_0 \in \mathcal{P}$, and define $\mathbb{P} \in \mathcal{P}$ by \begin{align*} \frac{d\mathbb{P}}{d\mathbb{P}_0}= c\, \frac{1}{1+|\xi|+\int_0^T \beta_u^2\, du}, \end{align*} where $c$ is the normalizing constant. This way $\mathbb{P}\in \mathcal{P}^1(\xi)$ and the process $\beta$ belongs to $\mathbb{L}^{2,1}(\mathbb{P})$, and, hence, also to $\mathcal{B}^{2,1}(\xi,\mathbb{P})$. \medskip \emph{2.}~$\to$~\emph{5.} This is the content of the implication \emph{1.}~$\to$~\emph{4.}~in Theorem \ref{thm:main-strong}, but, possibly, under an equivalent probability measure. \medskip \emph{5.}~$\to$~\emph{4.} Immediate. \medskip \emph{4.}~$\to$~\emph{3.} Let $M\in \mathcal{M}^{loc}(\xi)$ be as in the statement, and let $\mathbb{P}\in\mathcal{P}$ be such that $M$ is a $\mathbb{P}$-local martingale. We define the nondecreasing sequence $\{T_m\}_{m\in\mathbb{N}}$ of stopping times by \[ T_m = \inf\Bigl\{t\geq 0 : \int_0^t \frac{1}{T-u}\, d\langle M\rangle_u \geq m\Bigr\},\] so that $\mathbb{P}[ T_m < T] \to 0$, as $m\to\infty$. The process $\hat{\beta}^M$, given by \eqref{equ:beM} above, is a continuous local $\mathbb{P}$-martingale on $[0,T)$, so there exists another nondecreasing sequence $\{\tau_n\}_{n\in\mathbb{N}}$ of stopping times, with the property that $\tau_n \to T$, a.s., such that $(\hat{\beta}^M)^{\tau_n}$ is an $\mathbb{L}^2(\mathbb{P})$-bounded martingale on $[0,T]$.
In particular, \[ \mathbb{E}^{\mathbb{P}}\bigl[ (\hat{\beta}^M_{u\wedge T_m \wedge \tau_n})^2\bigr] = (\hat{\beta}^M_0)^2 + \mathbb{E}^{\mathbb{P}}\bigl[ \langle\hat{\beta}^{M}\rangle_{u\wedge T_m \wedge \tau_n}\bigr] \text{ for all $m,n \in \mathbb{N}$ and $u\in [0,T)$.} \] We let $n\to\infty$ and use Fatou's lemma on the left-hand side and the monotone convergence theorem on the right to conclude that \begin{align} \label{equ:fatou} \mathbb{E}^{\mathbb{P}}\bigl[ (\hat{\beta}^M_{T_m \wedge u})^2 \bigr] \leq (\hat{\beta}^M_0)^2 + \mathbb{E}^{\mathbb{P}}\bigl[\langle\hat{\beta}^M\rangle_{T_m \wedge u}\bigr] = (\hat{\beta}^M_0)^2 + \mathbb{E}^{\mathbb{P}}\biggl[ \int_0^{ T_m \wedge u} \frac{1}{(T-r)^2}\, d\langle M\rangle_r\biggr] \end{align} for $u<T$ and $m\in\mathbb{N}$. By integrating \eqref{equ:fatou} above in $u$ over $[0,T]$ we obtain, via Fubini's theorem, that \begin{align*} \mathbb{E}\biggl[ \int_0^T \big(\hat{\beta}^M_{T_m \wedge u }\big)^2\, du\biggr] & \leq T\,(\hat{\beta}^M_0)^2 + \mathbb{E}\biggl[ \int_0^T \mathbf{1}_{\{u\geq r\}} \int_0^T \mathbf{1}_{\{r\leq T_m\}} \frac{1}{(T-r)^2}\, d\langle M\rangle_r\, du\biggr] \\ & = T\,(\hat{\beta}^M_0)^2 + \mathbb{E}\biggl[ \int_0^{T_m} \frac{1}{T-r}\, d\langle M\rangle_r\biggr] \leq T\,(\hat{\beta}^M_0)^2 + m. \end{align*} It follows that $\int_0^{T_m} (\hat{\beta}^M_u)^2\, du \leq \int_0^T ((\hat{\beta}^M)^{T_m}_u)^2\, du < \infty$, a.s., for each $m$. Since $\mathbb{P}[T_m = T] \to 1$ as $m\to\infty$, we conclude that $\int_0^T (\hat{\beta}^M_u)^2\, du < \infty$, a.s. \medskip \emph{3.}~$\to$~\emph{1.} The last argument in the proof of Theorem \ref{thm:main-strong} is based only on the integration-by-parts formula and on the property $\hat{\beta}^M\in \mathbb{L}^{1,0}$. Therefore it can be applied here, since $\hat{\beta}^M\in \mathbb{L}^{2,0} \subseteq \mathbb{L}^{1,0}$. \end{proof} \begin{remark}\ \label{rem:Fubini} As it aims for generality, but also operates within specific regularity classes, our proof of Theorem \ref{thm:main-weak} above does not use the stochastic Fubini theorem. To explain why in more detail, let us start with a brief description of how an argument based on it would play out. It would start with a choice of a measure $\mathbb{P}\in\mathcal{P}$ and a martingale $M$ with $M_T=\xi$, where, without loss of generality, we assume that $M_0=0$. When its conditions are satisfied, the stochastic Fubini theorem, applied to the function $\psi(s,t, \omega) = \frac{1}{T-t}I_{[0,s]}(t)$ and with integrals with respect to $dM$ and $ds$, yields \begin{align*} \int_0^T ds \int_0^s \frac{1}{T-t}\, dM_t = \int_0^T dM_t \,\frac{1}{T-t} \int_t^T ds = M_T = \xi, \end{align*} making \begin{align} \label{equ:cand-beta} \beta_s = \int_0^s \frac{1}{T-t}\, dM_t \end{align} an absolutely continuous representation of $\xi$. The weakest available condition for the above to hold is due to Veraar (see \cite[Theorem 2.2]{Ver12}), and in our case it can be stated as \begin{align} \label{equ:Veraar-cond} \int_0^T \biggl( \int_0^s \frac{1}{(T-t)^2} \, d \langle M\rangle_t\biggr)^{1/2}\, ds < \infty, \text{ a.s.} \end{align} It is superficially related to condition 4.~of our Theorem \ref{thm:main-weak}, but it does not automatically place $\beta$ in our, weak or strong, regularity classes. Indeed, when $\xi = W_T$, \eqref{equ:Veraar-cond} is clearly satisfied, but, as we will see in Proposition \ref{pro:markovian} below, $W_T$ does not admit an absolutely continuous representation with a weakly (or strongly) regular $\beta$.
Put differently: \begin{center} \emph{regularity conditions for the validity of the stochastic \\ Fubini theorem do not correspond to our regularity classes.} \end{center} \end{remark} \smallskip A natural question is whether a condition such as: \begin{enumerate}[label*=\arabic*., itemsep = 0.5em] \item[4'.] $\int_0^T \frac{1}{T-t}\, d\langle M\rangle_t<\infty \text{ a.s.}$, for \underline{all} $M \in \mathcal{M}^{loc}(\xi)$ \end{enumerate} can be inserted in Theorem \ref{thm:main-weak}. An equivalent question is whether condition $4.$ of Theorem \ref{thm:main-weak} implies condition $4'.$ above. We only have a partial (positive) answer to this problem. It states that, under certain regularity conditions, probability measures with finite relative entropy share the property $\mathbb{E}^{\mathbb{P}}\bigl[\int_0^T \frac{1}{T-t}\, d\langle M^{\mathbb{P}}\rangle_t\bigr]<\infty$. \begin{proposition}\label{pro:entropy} Suppose that the filtration is generated by a $\mathbb{P}$-Brownian motion $W$ and that $\xi$ is of the form \[ \xi = \int_0^T \sigma_u\, dW_u, \text{ where } \sigma \text{ is bounded.} \] If the martingale $M^{\mathbb{P}}_t = \mathbb{E}^{\mathbb{P}}[ \xi \mid \mathcal{F}_t]$ satisfies \[ \mathbb{E}^{\mathbb{P}}\biggl[\int_0^T \frac{1}{T-t}\, d\langle M^{\mathbb{P}}\rangle_t\biggr]<\infty,\] and $\mathbb{Q} \ll \mathbb{P}$ is such that \begin{align} \label{equ:entr} \mathbb{E}\biggl[ \frac{d\mathbb{Q}}{d\mathbb{P}} \log^+ \Bigl(\frac{d\mathbb{Q}}{d\mathbb{P}}\Bigr)\biggr]<\infty, \end{align} then \[ \mathbb{E}^{\mathbb{Q}}\biggl[\int_0^T \frac{1}{T-t}\, d\langle M^{\mathbb{Q}}\rangle_t\biggr]<\infty.\] \end{proposition} \begin{proof} With $\mathbb{P}$ and $\mathbb{Q}$ as in the statement, let $\theta$ be such that the dynamics of the density process $Z_t = \mathbb{E}\bigl[ \tfrac{d\mathbb{Q}}{d\mathbb{P}} \mid \mathcal{F}_t\bigr]$ is given by \[ dZ_t = -Z_t\, \theta_t\, dW_t.\] By Girsanov's theorem and boundedness of $\sigma$, the process \[ M'_t = \int_0^t \sigma_u\, (dW_u + \theta_u\, du) = M^{\mathbb{P}}_t + \int_0^t \sigma_u\theta_u\, du \] is a $\mathbb{Q}$-martingale. By boundedness of $\sigma$, $\xi$ is $\mathbb{Q}$-integrable. Thus, \begin{align*} M^{\mathbb{Q}}_t &= \mathbb{E}^{\mathbb{Q}}\bigl[ \xi \mid \mathcal{F}_t\bigr] = \mathbb{E}^{\mathbb{Q}}\bigl[ M^{\mathbb{P}}_T \mid \mathcal{F}_t\bigr] =\mathbb{E}^{\mathbb{Q}}\biggl[ M'_T - \int_0^T \sigma_u \theta_u\, du \,\Big|\, \mathcal{F}_t\biggr]\\ &= M^{\mathbb{P}}_t + \int_0^t \sigma_u \theta_u\, du - L_t, \end{align*} where \[ L_t = \mathbb{E}^{\mathbb{Q}}\biggl[\int_0^T \sigma_u \theta_u\, du \,\Big|\, \mathcal{F}_t\biggr].\] It follows that \begin{align*} \langle M^{\mathbb{Q}}\rangle_t = \langle M^{\mathbb{P}}-L\rangle_t . \end{align*} By the Kunita-Watanabe inequality, \begin{align*} \langle M^{\mathbb{P}}-L\rangle_t \leq 2\langle M^{\mathbb{P}}\rangle_t + 2\langle L\rangle_t, \end{align*} so it suffices to show that \begin{align} \label{Z-int} \mathbb{E}^{\mathbb{Q}}\biggl[\int_0^T \frac{1}{T-t}\, d\langle L\rangle_t\biggr]<\infty. \end{align} For that, we note that $\xi' = L_T$ admits an absolutely continuous representation by its very definition: \begin{align*} L_T = \int_0^T \beta'_t\, dt \text{ where } \beta'_t = \sigma_t \theta_t. \end{align*} Assumption \eqref{equ:entr} above, together with the fact that the function $x\mapsto x \log(x)$ is convex and bounded from below, implies that the process $Z\log(Z)$ is a continuous $\mathbb{P}$-submartingale on $[0,T]$.
Hence, there exists a finite constant $C$ such that \begin{align} \label{equ:tau-bound} \mathbb{E}\bigl[ Z_{\tau} \log(Z_{\tau}) \bigr] \leq C \text{ for each $[0,T]$-valued stopping time $\tau$.} \end{align} It\^o's formula applied to the semimartingale $Z \log(Z)$ yields \begin{align*} Z_{\tau} \log(Z_{\tau}) = N_{\tau} + \tfrac{1}{2} \int_0^{\tau} Z_u \theta_u^2\, du, \text{ where $N$ is a local martingale. } \end{align*} Let $\{\tau_n\}_{n\in\mathbb{N}}$ be a sequence of stopping times that reduces the local martingale $N$. The upper bound of \eqref{equ:tau-bound} above implies that \begin{align*} C \geq \mathbb{E}\bigl[Z_{\tau_n} \log(Z_{\tau_n})\bigr] = \tfrac{1}{2}\, \mathbb{E}\biggl[\int_0^{\tau_n} Z_u \theta_u^2\, du\biggr] = \tfrac{1}{2}\, \mathbb{E}^{\mathbb{Q}}\biggl[ \int_0^{\tau_n} \theta_u^2\, du\biggr]. \end{align*} It remains to use the monotone convergence theorem to conclude that \begin{align*} \mathbb{E}^{\mathbb{Q}}\biggl[\int_0^T \theta_u^2\, du\biggr] < \infty. \end{align*} Boundedness of $\sigma$ implies that $\mathbb{E}^{\mathbb{Q}}\bigl[\int_0^T (\beta'_u)^2\, du\bigr]<\infty$, and \eqref{Z-int} follows from the implication $1.\to 4.$ of Theorem \ref{thm:main-strong}. \end{proof} \section{Examples and further remarks} \label{sec:examples} \subsection{The Brownian, Markovian case} Unlike the martingale representation, an absolutely continuous representation requires at least some degree of non-Markovianity in order to be non-trivial. Let us start with a simple argument to provide some intuition. Suppose that the terminal value $W_T$ of the Brownian motion admits an absolutely continuous representation of the form \eqref{equ:ac-repr}. The process ${W}_t - \int_0^t \beta_u\, du$ cannot be a martingale under any equivalent measure. Otherwise, it would be a non-trivial martingale with a trivial terminal value. It follows that $\beta$ cannot be ``too regular'', in the sense that the stochastic exponential $\exp\bigl( \int_0^{t} \beta_u\, dW_u - \tfrac{1}{2} \int_0^t \beta_u^2\, du\bigr)$ cannot be a martingale (if it is well defined at all). In the next proposition, an argument based on our Theorem \ref{thm:main-weak} shows that such a $\beta$ does not exist at all. \begin{proposition} \label{pro:markovian} Suppose that $\xi = g(W_T)$ for some Brownian motion $W$ and $g\in C^2(\mathbb{R})$. Then $\mathcal{B}^{2,0}(\xi) \ne \emptyset$ if and only if $g$ is constant. \end{proposition} \begin{proof} Since $\int_0^T (g''(W_u))^2\, du<\infty$, a.s., It\^o's formula implies that $\xi$ admits an absolutely continuous representation under weak regularity, i.e., that $\mathcal{B}^{2,0}(\xi) \ne \emptyset$, if and only if $\mathcal{B}^{2,0}(\bar{\xi}) \ne \emptyset$, where \begin{align*} \bar{\xi} = \int_0^T g'(W_t)\, dW_t. \end{align*} Being deterministic, the quadratic variation of the Brownian motion has the same distribution under all $\mathbb{P} \in \mathcal{P}$, so by Theorem \ref{thm:main-weak}, this will be true if and only if \begin{align} \label{equ:g-prime-int} \int_0^T (g'(W_t))^2/(T-t)\, dt<\infty, \text{ a.s.} \end{align} Continuity of $g'$ and $W$ forces $g'(W_t)\to 0$ as $t\rightarrow T$, i.e., $g'(W_T)=0$, whenever \eqref{equ:g-prime-int} holds. Since $W_T$ has full support on $\mathbb{R}$, this means that $g'\equiv 0$, i.e., that $g$ is constant.
\end{proof} \subsection{The class $\mathcal{B}^{p,1}$} The ``factorization formula'' of Da Prato and Zabczyk (see \cite[Theorem 5.2.5, p.~58]{DaPZab96}) provides us with an absolutely continuous representation for random variables of the form \[ \xi = M_T \text{ where } M_t = \int_0^t \sigma_u\, dW_u,\] when $\sigma$ is bounded and the filtration is generated by the $\mathbb{P}$-Brownian motion $W$: \begin{align} \label{equ:fact} \xi = \int_0^T \beta_t\, dt \text{ where } \beta_t = \frac{\sin(\alpha \pi)}{\pi} (T-t)^{\alpha-1} \int_0^t (t-u)^{-\alpha}\, dM_u, \end{align} where $\alpha \in (0,1/2)$. Note that we can interpret $(T-t)^{1-\alpha}\beta_t = \int_0^t (t-u)^{-\alpha}\, dM_u$, up to a multiplicative constant, as the formal Riemann-Liouville fractional integral of order $1-\alpha$ of the noise $dM$. To complete the factorization formula, we then integrate the result multiplied by $(T-t)^{\alpha-1}$ with respect to $dt$, i.e., compute the Riemann-Liouville integral of the complementary order $\alpha$ of the result. The semigroup property of Riemann-Liouville integration would suggest that the result should coincide with the integral of order $(1-\alpha)+\alpha=1$ of $dM$, i.e., it should yield $M_T=\xi$ itself. We refer the reader to \cite{SamKilMar93} for details on fractional integration. Since no restriction other than boundedness is imposed on $\sigma$, it may appear that \eqref{equ:fact} contradicts Proposition \ref{pro:markovian} above, in that it provides a representation for $\xi = W_T$, for example. There is no contradiction, however, as the process $\beta$ produced in \eqref{equ:fact} does not belong to $\mathbb{L}^{2,0}$. To see that, we pick $1<p<2$ and use Doob's maximal inequality followed by the Burkholder-Davis-Gundy theorem to obtain the following estimate for $\alpha<1/2$: \begin{align*} \mathbb{E}\bigl[ |\beta_t|^p \bigr] &= \Bigl(\tfrac{\sin(\alpha\pi)}{\pi}\Bigr)^{p} (T-t)^{p(\alpha-1)}\, \mathbb{E}\biggl[ \Bigl|\int_0^t (t-u)^{-\alpha}\, dM_u\Bigr|^p\biggr]\\ & \lessapprox (T-t)^{p(\alpha-1)}\, \mathbb{E}\biggl[ \sup_{s \leq t} \Bigl|\int_0^s (t-u)^{-\alpha}\, dM_u\Bigr|^p\biggr]\\ & \lessapprox (T-t)^{p(\alpha-1)}\, \mathbb{E}\biggl[ \Bigl(\int_0^t (t-u)^{-2 \alpha}\, d\langle M\rangle_u\Bigr)^{p/2}\biggr], \end{align*} where $a \lessapprox b$ is a shorthand for $a \leq C\, b$, for some constant $C>0$ which depends only on $p$. Allowing $C$ to depend on $\sigma$ as well, we can go on to conclude that \begin{equation} \label{equ:fin-inf} \begin{split} \mathbb{E}\biggl[\int_0^T |\beta_t|^p\, dt\biggr] & \lessapprox \int_0^T (T-t)^{p(\alpha-1)} \Bigl(\int_0^t (t-u)^{-2\alpha }\, du\Bigr)^{p/2}\, dt \\ & \lessapprox \int_0^T (T-t)^{p(\alpha-1)}\, t^{p(1/2-\alpha)}\, dt. \end{split} \end{equation} The last integral is finite if and only if $\alpha \in (1-1/p,1/2)$. In particular, it is infinite for all $\alpha$ when $p=2$. It is interesting to observe, though, that the choice $\alpha = \tfrac{3}{4} - \tfrac{1}{2p}$ in \eqref{equ:fact} provides an absolutely continuous representation of $\xi$ in $\mathcal{B}^{p,1}$ for each $p\in (1,2)$.
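To see the mechanism behind \eqref{equ:fact} at work, the following rough numerical illustration (ours; the uniform-grid treatment of the singular kernels is deliberately crude) computes, for $\xi = W_T$ (i.e., $\sigma \equiv 1$), the effective weight that each Brownian increment $dW_u$ receives after both integrations. By the Beta integral $\int_u^T (T-t)^{\alpha-1}(t-u)^{-\alpha}\, dt = \pi/\sin(\alpha\pi)$, these weights should all tend to one in the continuum limit, so that $\int_0^T \beta_t\, dt$ reassembles $W_T$:

```python
# Discretized factorization weights (our sketch). Increments near t = T are
# under-weighted by this crude grid, so the recovery of W_T is only approximate.
import numpy as np

T, n, alpha = 1.0, 2000, 0.25                # alpha in (0, 1/2)
dt = T / n
mid = (np.arange(n) + 0.5) * dt              # midpoint grid for both u_j and t_k

c = np.sin(alpha * np.pi) / np.pi
diff = mid[None, :] - mid[:, None]           # diff[j, k] = t_k - u_j
mask = diff > 0
kern = np.zeros_like(diff)
outer = (T - mid) ** (alpha - 1.0)           # (T - t_k)^(alpha - 1)
kern[mask] = np.broadcast_to(outer, diff.shape)[mask] * diff[mask] ** (-alpha)
weights = c * kern.sum(axis=1) * dt          # weight received by each dW_j

print(f"weights: median {np.median(weights):.3f}, "
      f"min {weights.min():.3f}, max {weights.max():.3f}")

rng = np.random.default_rng(1)
dW = rng.normal(0.0, np.sqrt(dt), n)
print(f"W_T = {dW.sum():+.4f}   int_0^T beta_t dt = {(weights * dW).sum():+.4f}")
```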
We conclude the paper with a summary of the discussion above: \begin{proposition} A random variable of the form $\xi = \int_0^T \sigma_u\, dW_u$, with $|\sigma|$ bounded, admits an absolutely continuous representation in $\mathbb{L}^{p,1}$ for each $p < 2$. \end{proposition} The above result should be contrasted with the findings in our Theorem \ref{thm:main-strong}, which states that such a representation exists in $\mathbb{L}^{2,1}$ if and only if, additionally, $$\mathbb{E}\biggl[\int_0^T \frac{\sigma_t^2}{T-t}\, dt\biggr]<\infty.$$
{ "timestamp": "2022-09-20T02:20:45", "yymm": "2209", "arxiv_id": "2209.08629", "language": "en", "url": "https://arxiv.org/abs/2209.08629" }
\section{Introduction} Social media has recently been implicated in a host of societal dysfunctions, such as extremism, polarization, anti-science rhetoric and dissemination of hate. By allowing individuals with particular interests to connect irrespective of physical distances, the web allowed for the creation of countless niches, some of which can be problematic. Reddit (\url{www.reddit.com}) is a social media platform that has become one of the most popular meeting places for such interest-based niche groups, where people create thematic forums called ``subreddits.'' One negative externality caused by the ease of congregation in the digital world is that users with racist, misogynist, and similarly uncivil views are able to find others just like them and form communities dedicated to those harmful stances. The formation of such groups can then serve as a breeding ground for further deplorable opinions, which, through an echo chamber between community members, may become even more extreme and alienated from reality and good citizenry, potentially including offline hateful behavior. This study aims to answer the following research question: \textbf{Does becoming active in a hateful subreddit increase a person's usage of hate speech outside of that subreddit?} Answering that question will help us understand the impact of hateful communities as originators of hate speech on the platform as a whole. We create a case-controlled quasi-experimental causal study through an Interrupted Time Series (ITS) design and control-treatment user matching. We generate high-precision hate speech lexicons for multiple communities and domains, which are utilized to obtain measures of hate speech over time. We then find an immediate increase in a user's hate speech as soon as the user becomes active in the hateful subreddit. The causal nature of ITS models implies that becoming active in a hateful community is the factor leading to an increase in hate speech. We observe a repeated pattern of pre-join ramping up in hate speech, and post-join ramping down. These findings help us understand whether halting access to the most extreme communities can break the pathway in which users, despite already being hateful enough to join a hate community, become even further radicalized after becoming active in such communities. \section{Related Work} \textbf{Hate Speech in Social Media.} While it is difficult to devise a uniform definition of hate speech that is both implementable as a computer program and satisfying to critics \cite{silva2016analyzing}, it has been shown that the main targets of hate speech are groups based on race, behavior (i.e., insecure people, sensitive people) and physical appearance \cite{mondal2017measurement,silva2016analyzing}. Analysis of both directed and generalized hate speech on Twitter has found that directed hate speech is angrier than generalized hate speech, which in turn is angrier than general tweets \cite{elsherief2018hate}. One measurement of hate speech in politically fringe digital communities estimates that one-quarter of all posts contain some form of hate speech, with 13.7\% of posts containing explicit hate speech and 15.5\% containing implicit hate speech \cite{rieger2021assessing}. There is a positive association between time spent on Gab (a far-right social media platform) and hate speech \cite{gallacher2021hate}.
Further exploration of extremist subreddits reveals how those groups combine the up-vote / down-vote system, Reddit's feed algorithm, governance structure, and anonymity to surface the extremist content within them and promote extreme discourse against opposing groups, while self-validating their views \cite{gaudette2021upvoting,Massanari2017,gothard2020exploring}. \textbf{In-Group and Out-Group Behavior on Social Media.} Subreddits are distinctive enough that research has shown it is possible to identify subreddits' posts based on both their style (86.5\% accuracy) and topic (71.1\% accuracy) \cite{tran2016characterizing}. Moreover, ``the presence of directed political incivility in interactions between dissimilar users tends to discourage further cross-ideological engagement. Conversely, interactions in which ideologically like-minded users engage in derogation of the out-group are less likely to be sustained in the short term'' \cite{marchal2020polarizing}. Contrasting the aforementioned findings, on Facebook and Twitter posts about the political out-group were shared more than posts about the in-group, and the number of terms about the out-group increases the odds of a post being shared \cite{rathje2021out}. We note group dynamics are not unique to social media; as a single offline example, research has shown in-group favoritism and out-group derogation among Europeans and Maoris, but for different reasons: ethnic identity for Maoris and social dominance orientation for Europeans \cite{hamley2020ingroup}. \textbf{Hate Speech Identification and Prediction.} Another direction of research seeks to leverage machine learning to classify and predict hate speech at scale. Successful approaches have leveraged transfer learning \cite{almerekhi2020investigating} and moderation-minded work \cite{habib2021proactive}. Multiple studies have built datasets for hate speech identification \cite{mathew2020hatexplain,qian2019benchmark,mollas2020ethos,de2018hate}; in addition to building a dataset, De Gilbert et al. \cite{de2018hate} used Pointwise Mutual Information (PMI) comparison between hateful and non-hateful sentences, finding that the most hateful words are derogatory or refer to targeted groups of hate speech. \textbf{Moderation of Hate Speech.} Various moderation approaches could help reduce the amount of hate speech, including the banning of high-profile communities on Reddit \cite{Chandrasekharan2017}, as well as employing somewhat softer alternatives to bans, such as quarantines \cite{chandrasekharan2020quarantined,copland2020reddit}. Quarantines also led a portion of the communities' members to leave Reddit for more lightly regulated platforms, which merely forwards the issue of handling hate speech to another platform \cite{copland2020reddit}. Those findings corroborate a previous study showing that subreddits influence one another, and thus intervention in one subreddit can be expected to have additional effects on other subreddits, especially those with a large shared user base \cite{Zannettou2017}. After a subreddit's ban, top users become less active while most users reduce their community-derived language without a reduction in activity. Moreover, explicitly racist communities had a stronger reduction in activity from bans than dark humor communities \cite{trujillo2021echo}.
Previous research also shows that hateful users differ from regular users, e.g., in their vocabulary as well as engagement \cite{Chatzakou2017}; hence, to the extent that the top users of hateful subreddits are the most hateful users, this can at least partly explain the diverging fates of the top users in relation to regular members. \section{Methods}\label{sec:methods} \subsection{Identifying Communities of Interest} To select the communities of interest we identify subreddits that were known to be a breeding ground for hateful speech. We used previously published research on hateful communities on Reddit and selected the following subreddits: r/GreatApes, r/CoonTown, r/Incels and r/fatpeoplehate, as they appear to be the most notorious ones~\cite{Chandrasekharan2017,Ko2021,gothard2020exploring}. All the selected subreddits had already been banned when we started our analysis. \subsection{Hate Speech Lexicons} We employ a pragmatic definition of hate speech, with the intent of making hate speech detection a problem that can be defined in algorithms and computational tasks. We broadly define hate speech as the usage of words that direct hate and derogation towards a specific group of people based on their gender, appearance, or some other characteristic. We create a corpus of candidate hate words by employing Sparse Additive Generative Models of Text (SAGE) \cite{Eisenstein2011} to compare all posts within each studied subreddit against a global sample corpus for all of Reddit. The global Reddit corpus contains a sampling of content published across the entire platform to serve as a baseline of speech over all of Reddit. It consists of 10GB of randomly crawled posts, which were obtained directly through the Reddit API's ``fetch random'' functionality, and includes both submissions and comments. The subreddit-specific corpus contains all posts made inside the studied subreddit. We remove stopwords from both corpora. Many hate words are slang terms which are not well handled by stemming or lemmatizing, hence we do not apply any such techniques. SAGE then calculates the most distinctive words for the target corpus, which provides a list of unigrams most characteristic of each subreddit. We select the top 300 candidate words and use a 3-rater system, where the authors of this paper served as the raters. Each rater independently classified each candidate word as ``0 - Not a hate word'', ``1 - Might be a hate word'', or ``2 - Always a hate word''. The purpose of this approach is to reflect that certain speech cannot be confidently classified as hateful in the absence of context \cite{cervone2021language,paasch2021insult}. We then sum up the ratings and select only words which score 4 or above to form each subreddit's hate speech lexicon, ensuring the selected words are likely to be hate words. These lexicons allow us to calculate the level of hate speech: the number of hate words divided by the total number of words used by a user. The level of hate speech is calculated for each user for each day the user has been active on the platform, as sketched below. This approach to lexicon construction has the upside of allowing us to capture hate speech that is unusual and in-group specific, thus oftentimes being encrypted to the general population and hence masked from generalized lexicons \cite{Does2022,Gerrard2018}.
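The scoring step can be summarized in a few lines of Python (a schematic sketch of our procedure; the words, ratings, and function names are illustrative placeholders, not actual lexicon entries):

```python
# Keep candidate words whose three 0/1/2 rater scores sum to 4 or more, then
# score each user-day as (# lexicon words) / (# all words).
from collections import Counter

ratings = {                       # toy candidates with three independent ratings
    "slurA": (2, 2, 2), "slurB": (2, 1, 1), "banana": (0, 0, 0), "angry": (1, 1, 1),
}
lexicon = {w for w, r in ratings.items() if sum(r) >= 4}

def hate_level(tokens):
    """Share of a user's words on one day that belong to the hate lexicon."""
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    return sum(counts[w] for w in lexicon) / sum(counts.values())

print(sorted(lexicon))                                     # ['slurA', 'slurB']
print(hate_level("slurA banana banana slurB".split()))     # 0.5
```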
\subsection{Data Collection} \textbf{Treatment Data}. To obtain the post history for all active members of each hateful subreddit we used the PushShift API \cite{Baumgartner2020} to crawl all available submissions and comments made on each subreddit. Once all available data was obtained, we then parsed through the data and obtained the usernames of all users who posted on each subreddit; those are the treatment samples for our analysis. Having the usernames, we once again leverage PushShift, now to crawl the entire posting history of those users. We seek to remove automated posting bots from our sample by searching usernames for the following keywords: ``bot'', ``auto'', ``transcriber'', ``gif'', ``link'', ``twitter'', which were obtained by manual inspection of usernames in the largest files crawled. We manually analyze the usernames matching those keywords by inspecting a sample of their Reddit posts and subsequently remove accounts identified as bots. For each subreddit this resulted in the removal of few accounts ($< 1\%$), yet always at least the top 5 accounts by activity level were removed, as they were consistently bots. The resulting dataset contains 3140 members of r/GreatApes, 8927 members of r/CoonTown, 17859 members of r/Incels, and 21465 members of r/fatpeoplehate. \textbf{Control Data}. To increase the probability of choosing ``similar'' users for the control sample, the control candidates are selected from users who are likely to share topics of interest with the treatment users. Therefore, we select control candidates from users who are members of the subreddits frequently visited by the treatment candidates, but not members of the studied hateful subreddit itself. We refer to these as the ``control subreddits''\footnote{Not to be confused with the control users. ``Control subreddits'' are used as an aid to identify control users that are likely to match with the treatment users.}. To identify the ``control subreddits'' we find subreddits that have a high proportion of already identified treatment users, and select the top 30 such subreddits. We then gather all users active on those subreddits as the set of candidate control users. \textbf{Banned Subreddits}. With the goal of producing additional analyses, we obtain a non-exhaustive list of banned subreddits. Reddit does not divulge its bans, and hence all information comes from user-generated content and self-reports. The gathering process was entirely manual and consisted of browsing Reddit itself for posts compiling subreddit bans. Given the nature of the data gathering process, prominent bans and bans of large subreddits are more likely to be featured in our set. In total we obtained 3515 subreddits reported to be banned. \subsection{Matching} Using Mahalanobis distance-based matching \cite{Stuart2010}, we find a 1:1 pairing between treatment users and control candidates, such that the control candidates most similar to treatment users are selected. We limit our set of treatment users to 15k users, randomly chosen amongst all active subreddit members, and limit the control candidates to a 5:1 ratio on the actual number of treatment users obtained.
We considered the following set of features: \textit{account creation date}, \textit{Reddit karma (sum of all up-votes minus all down-votes)}, \textit{total number of submissions}, \textit{total number of comments}, and the \textit{count of posts made in each of the top 50 subreddits} (those with the highest ratio of treatment members, as per the list generated when defining the subreddits from which to sample control candidates). To ensure that the matching procedure does not influence the later analyses, matching only considers users' behavior (i.e., features) prior to the moment they join the hateful subreddit, as joining is akin to beginning treatment \cite{Ham2022}. The matching algorithm uses the following procedure: (1) select a treatment user, (2) check in which month that user became active on the hateful subreddit, (3) consider that user's features and the control candidates' features on the month prior to the activation month, (4) find the most similar control candidate via Mahalanobis distance matching, (5) store a triplet of (treatment, control, distance), (6) move to the next treatment user. Once the algorithm stops we have obtained a set of control users that equals the treatment users in number and is as similar to the pre-treatment treatment users as possible; a condensed sketch of this step is given below. We center treatment user data such that day 0 is the day each user became active in the hateful subreddit (i.e., when the treatment began). For control users, day 0 is the same as that assigned to their matched treatment user. This means both users in a given \{treatment user, control user\} pair have the same day 0, but the centering date differs between pairs.
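The following condensed sketch shows how such a 1:1 Mahalanobis matching can be implemented (our illustration; the feature matrices here are random placeholders standing in for the pre-treatment features described above):

```python
# 1:1 Mahalanobis matching without replacement (schematic).
import numpy as np

rng = np.random.default_rng(0)
X_treat = rng.normal(size=(100, 5))    # placeholder: features of treatment users
X_ctrl = rng.normal(size=(500, 5))     # placeholder: features of control candidates

VI = np.linalg.inv(np.cov(X_ctrl, rowvar=False))   # inverse covariance matrix
available = np.ones(len(X_ctrl), dtype=bool)
pairs = []
for i, x in enumerate(X_treat):
    d = x - X_ctrl                                  # differences to all candidates
    dist = np.sqrt(np.einsum("ij,jk,ik->i", d, VI, d))
    dist[~available] = np.inf                       # enforce matching without replacement
    j = int(np.argmin(dist))
    available[j] = False
    pairs.append((i, j, dist[j]))                   # (treatment, control, distance)

print(pairs[:3])
```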
\subsection{Interrupted Time Series} ITS is a subset of Regression Discontinuity Design models which emphasizes specifically the modelling of effects over a continuous time period. It is well established and regarded for its usage in causal modeling in various fields \cite{cattaneo_idrobo_titiunik_2020,Lee2010RDD,Ham2022}. Throughout this analysis, for the treatment user data we consider only the posts outside the studied subreddit, as we seek to understand the effects that joining one community has on the users' behavior outside of that community, and to control for in-group specific behavior. Additionally, the hate speech lexicons are overfit to each studied subreddit, which could affect results if the target subreddit were not removed from the treatment users' data. The treatment subreddit is by definition not present in the data for control users. ITS requires defining how many days around the treatment date (before and after) will be considered by the model. This analysis period is known as the bandwidth, and current best practices involve the use of cross-validation to find the optimal bandwidth \cite{Ludwig2005,Imbens2008}. Since we must be careful not to overfit the bandwidth choice to the treatment being analyzed, this procedure is done only on data prior to the treatment date (day 0). The cross-validation procedure consists of selecting several bandwidths to try (e.g., 30, 50, or 100 days), defining how many cross-validation rounds to run (e.g., 10, 25, 100), and an evaluation metric suitable for leave-one-out samples, with RMSE being the recommended choice and the one used in this analysis \cite{Jacob2012}. The cross-validation process then works as follows: for a bandwidth of 50 days, with 10 cross-validation rounds, we would fit a linear regression to days -51 to -2 and use the model to predict the \textit{proportion of hate speech} on day -1, then fit a regression on days -52 to -3 and predict on day -2, and so on, up to predicting on day -10 \cite{Baicker2019}. From that set of predictions and known truth values we can then calculate the cross-validation RMSE for a given bandwidth. After iterating through all bandwidths, we select the one with the smallest RMSE. Under this procedure, testing bandwidths in the range [30,365] in increments of 5 days, and using 100 cross-validation rounds, we find the optimal bandwidths for each of the studied subreddits to be: r/GreatApes = 85 days; r/CoonTown = 30 days; r/fatpeoplehate = 55 days; r/Incels = 310 days. Bandwidths below 30 days were not studied due to their lower statistical power. \textbf{Model Construction.} We fit a single Ordinary Least Squares (OLS) regression model to the entire dataset while using dummy variables to indicate which samples belong to the treatment and control groups, and which samples belong to the pre- and post-treatment periods \cite{Jacob2012}. In ITS terminology the treatment group is referred to as \textit{exposed}, such that \textit{exposed} = 0 means control group and \textit{exposed} = 1 means treatment group. The break-point in the time dimension when users begin to be treated is represented by the variable \textit{interrupted}, such that \textit{interrupted} = 0 means pre-treatment and \textit{interrupted} = 1 means post-treatment. In our case all samples up to but not including day 0 are pre-treatment samples (\textit{interrupted} = 0), and all samples from day 0 (inclusive) onward are post-treatment samples (\textit{interrupted} = 1). The OLS regression also includes the continuous \textit{time} variable, as well as all possible interaction terms between the three aforementioned variables (\textit{exposed}, \textit{interrupted}, and \textit{time}). The resulting set of coefficients is shown in Table \ref{tab:Coefficients}. Wald's F-test shows all fitted models are statistically significant (p-value $<10^{-20}$). \begin{table}[h] \footnotesize \centering \begin{tabular}{ ll } Parameter & Meaning \\ \midrule $const$ & Pre-treatment baseline \\ $time$ & Pre-treatment trend \\ $expos$ & Incremental baseline for the treatment group \\ $inter$ & Incremental baseline after treatment\\ $time \times expos$ & Incremental trend for the treatment group \\ $time \times inter$ & Incremental trend after treatment \\ $expos \times inter$ & Incremental baseline on treatment group \\ & after treatment \\ $time \times expos \times inter$ & Incremental trend for the treatment group \\ & after treatment \\ \bottomrule \end{tabular} \caption{Interpretation of coefficients obtained from the ITS model} \label{tab:Coefficients} \end{table} \textbf{Model Interpretation.} The coefficients in Table \ref{tab:Coefficients} can be used to measure the relative increase in hate speech, defined as the difference between pre-treatment and post-treatment hate speech divided by the pre-treatment hate speech. For treatment users, the pre/post gap can be measured by \textit{$inter + expos \times inter$}, whereas the pre-treatment hate speech is given by \textit{$const + expos$}.
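A minimal version of this regression can be written with \texttt{statsmodels} (our sketch; the synthetic data frame simply mimics the structure described above, with columns \textit{hate}, \textit{time}, \textit{exposed} and \textit{interrupted}):

```python
# ITS as a single OLS with all interactions between time, exposed, interrupted.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 4000
df = pd.DataFrame({"time": rng.integers(-55, 55, n), "exposed": rng.integers(0, 2, n)})
df["interrupted"] = (df["time"] >= 0).astype(int)
# Synthetic outcome: a post-join jump for the treatment group only.
df["hate"] = (0.002 + 0.002 * df["exposed"]
              + 0.001 * df["exposed"] * df["interrupted"]
              + rng.normal(0.0, 0.0005, n))

model = smf.ols("hate ~ time * exposed * interrupted", data=df).fit()
p = model.params
gap = p["interrupted"] + p["exposed:interrupted"]     # inter + expos x inter
base = p["Intercept"] + p["exposed"]                  # const + expos
print(f"relative post-join increase for treatment users: {gap / base:.2%}")
```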
This relative measure looks at the instantaneous effect of joining the treatment subreddit; that is, it leverages the ITS's best estimate for the immediate increase that happens as a result of joining, while being unaffected by time-related trends in the data. \textbf{Sensitivity Analysis.} We perform sensitivity analysis by fitting several OLS regression models using all bandwidths in the [30,365] range and then observing the coefficients for each of the ITS features as well as the corresponding p-values. This helps ensure that the results being considered are not outliers resulting from a very specific bandwidth size, and shows that all measured coefficients converge to consistent values regardless of bandwidth size. Sensitivity analysis confirms one challenge of ITS which was already to be expected: when using a small bandwidth the model is built with few samples, resulting in large variance and thus high p-values on the t-tests of individual coefficients. Then, as the bandwidth size increases, the confidence intervals and p-values shrink. \section{Results} The initial analyses show that users who joined hateful subreddits increase their usage of hateful speech outside of that subreddit by an average of $\approx 30\%$ immediately after joining. This estimate was made by using the individual coefficients obtained from the ITS model, and the pre/post gap measurement explained in the Model Interpretation section under Methods. In more detail, users who joined r/CoonTown show the smallest increase in hate speech outside of r/CoonTown, with just a $3\%$ average increase, while the users who joined the other three subreddits display substantially larger increases in hate speech usage, with $38\%$, $40\%$, and $30\%$ for r/fatpeoplehate, r/GreatApes and r/Incels respectively. The relative increase in hate speech immediately after users join the hateful subreddit is illustrated in Figure~\ref{fig:subreddit_Comparison_All}. Values are measured as the percentage increase of the users' hate speech immediately post-join compared to their immediately pre-join hate speech levels. \begin{figure}[h] \centering \includegraphics[width=0.9\linewidth]{figures/Excess_Hate_Speech_Bar_ALL.png} \caption{Relative increase in the rates of hate speech immediately after users join the hateful subreddit, as obtained from the ITS model} \label{fig:subreddit_Comparison_All} \end{figure} In addition to the immediate increase measured, there are further insights to be observed from plotting the results of the ITS model over time, as shown in Figure \ref{fig:ITS}. For each studied subreddit, combining the dummy variables (see Methods) through all possible interaction terms allows a single ITS model to generate four regression best-fit estimates of hate speech, both for treatment and control users, on both pre- and post-treatment periods. It is not surprising that the users in the treatment group (those who joined hateful subreddits) already had a relatively high rate of hate speech before joining the hateful subreddits compared to the control users. Still, the treatment groups show a significant increase in rates of hate speech after joining the hateful subreddits, while the control users show no change. Such a difference in trends suggests that joining the hateful subreddit has a major effect on the rate of hate speech expressed by the users who joined.
\begin{figure}[h] \centering \includegraphics[width=1\linewidth]{figures/ITS_Merged.pdf} \caption{Interrupted Time Series plots for the studied subreddits.} \label{fig:ITS} \end{figure} From the plots we observe that, with varying intensities, treatment groups were already ramping up their hate speech before they joined the hateful subreddit. The ITS design still allows us to conclude that joining the hateful subreddit increases a user's hate speech immediately, but this trend nevertheless raises questions regarding what drives users to become members of hateful communities in the first place. From the ITS model, we observe a post-join downward trend, indicating that the immediate post-join hate speech spike wanes over time, although at the end of the analysis period all subreddits were still above the hate speech values observed at the beginning of the analysis period. Fixing the hate speech baseline to the start of the analysis period, the end-of-period hate speech values saw increases of 189\%, 10\%, 25\%, and 320\% for r/GreatApes, r/CoonTown, r/Incels, and r/fatpeoplehate, respectively. To ensure the ITS model shows robust results regardless of the time bandwidth, we additionally performed a sensitivity analysis (see Methods). Here, we vary the bandwidth and observe the change of various model coefficients. Across all bandwidths we observe a high p-value ($>$0.05) in the three parameters that measure external effects occurring on all users: $time$, $inter$, and $time \times inter$. Those high p-values suggest that we cannot reject the null hypothesis that the true values of the aforementioned coefficients are zero. Since these three coefficients obtained by the ITS model are statistically indistinguishable from zero, we conclude that we have no evidence for significant global (external) effects acting on all users, as such effects would be captured by the three aforementioned variables. Hence we conclude that the control users express no change in their hate speech over time. This implies all measured effects are results of treatment users joining the hateful subreddit; the action of joining, denoted by the $expos$ variable and its interaction terms, is the only difference between treatments and controls. In summary, the sensitivity analysis shows that the observed results are replicable across bandwidths, and not merely a by-product of an overfit optimal bandwidth. We can observe a downward trend in the treatment group post-joining. Given Reddit's active role in moderation, this downward trend can, at least partially, result from Reddit's banning of individual users, which likely targets the most egregious offenders -- those that most elevate the group's hate speech. Considering this common practice of banning users who express uncivil behavior, the average hate speech for the group is expected to decrease over time as only samples for the non-banned, less-hateful members remain available. To test this hypothesis we analyze the link between hate speech level and account lifespan -- the time an account is active on Reddit. We measure the average post-join hate speech for users who quit Reddit less than one year after joining the studied subreddit and compare them to the users who stay longer. We observe that accounts with a longer lifespan have a lower average hate speech, as shown in Table \ref{tab:User-Lifespan}.
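As a minimal sketch, the lifespan split can be computed along the following lines (a \texttt{pandas} illustration; the \texttt{users} table and its column names are hypothetical, not those of our actual pipeline):

\begin{verbatim}
import pandas as pd

def lifespan_split(users: pd.DataFrame) -> pd.Series:
    # Lifespan: days between joining the hateful subreddit and the
    # account's last post anywhere on Reddit.
    lifespan = (users["last_post_date"] - users["join_date"]).dt.days
    bucket = lifespan.le(365).map({True: "Up to 365 days",
                                   False: "Over 365 days"})
    # Average post-join hate-speech rate per bucket.
    return users.groupby(bucket)["post_join_hate_rate"].mean()
\end{verbatim}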
\begin{table}[h] \centering \begin{tabular}{ lcc } Subreddit & Up to 365 days & Over 365 days \\ \midrule r/GreatApes & 0.421\% & 0.140\% \\ r/CoonTown & 0.465\% & 0.101\% \\ r/fatpeoplehate & 0.130\% & 0.001\% \\ r/Incels & 0.083\% & 0.029\% \\ \bottomrule \end{tabular} \caption{Average post-join hate speech for users with short and long lifespans on Reddit, as measured by the date of their last post in all of Reddit in relation to their becoming active on the hateful subreddit.} \label{tab:User-Lifespan} \end{table} Complementarily, we leverage our data to explore hate speech trends over time for an expanded set of contexts in which treatment users can be found. We analyze the hate speech in the following scenarios: \textit{in all subreddits}, \textit{inside the treatment subreddit}, \textit{outside the treatment subreddit}, \textit{inside banned subreddits} and \textit{inside non-banned subreddits}. The plots illustrating the rate of hate speech of treatment users over time are shown in Figure \ref{fig:Timeseries}. While effect sizes vary based on each subreddit and its unique lexicon, we generally observe that treatment users already had higher hate speech levels in the ``banned subreddits'' category, and that after joining the studied hateful subreddit, their hate speech displays a visually noticeable increase in the banned-subreddits category. It also increases in the ``all'' category, which is to be expected since this includes the hateful subreddit itself. Hate speech trends in the non-banned set closely track those of the outside set, which is to be expected as those sets are highly overlapping. Yet it is noteworthy that the time series of both sets still lie visually above that of the control set. \begin{figure}[h] \centering \includegraphics[width=1\linewidth]{figures/Timeseries_Merged.pdf} \caption{Temporal characteristics of expressed hate speech for treatment and control users in various scenarios.} \label{fig:Timeseries} \end{figure} All analyses were done based on the custom lexicons developed during this study. Figure \ref{fig:WF} illustrates the lexicons' most frequently written hate words in the in-group and out-group settings, considering posts after the users became active members of the hateful subreddit. We compare both distributions via a Spearman rank correlation test, which measures the monotonicity of the relationship between two datasets, here represented as the in-group and out-group word distributions. For all four subreddits studied, the test obtained a p-value below 0.05, meaning we can reject the hypothesis that both word distributions are generated by the same underlying system. This confirms there is a difference in the distributions with which hate words are chosen when communicating with the in-group versus the out-group. When looking at racism lexicons, the usage of the n-word nearly doubles when members are communicating with the in-group. Similarly significant increases happen with other highly offensive words such as ``obeast'' in the fat-shaming context. We also observe a reduction in group slang such as ``sheboon'' and ``roastie'' when speaking outside the hateful community. This effect happens across all categories of hate speech studied. Conversely, the out-group data shows an increase in derogatory yet not-as-societally-shunned terms such as ``fatlogic'' and ``fag''. In summary, discourse in the out-group displays a relative toning-down of the most offensive words and of insider slang, counterweighted by a relative increase in milder hate words.
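For reference, the comparison of the two word distributions can be sketched as follows (a hedged illustration assuming \texttt{scipy}; the word-count dictionaries are toy values, not our data):

\begin{verbatim}
from scipy.stats import spearmanr

def compare_word_distributions(in_counts, out_counts):
    """Spearman rank correlation between the in-group and out-group
    frequencies of the same lexicon words."""
    words = sorted(set(in_counts) | set(out_counts))
    in_freq = [in_counts.get(w, 0) for w in words]
    out_freq = [out_counts.get(w, 0) for w in words]
    return spearmanr(in_freq, out_freq)  # (rho, p-value)

# Toy example with four lexicon words:
# rho, p = compare_word_distributions(
#     {"word_a": 120, "word_b": 40, "word_c": 33, "word_d": 5},
#     {"word_a": 35, "word_b": 50, "word_c": 4, "word_d": 12})
\end{verbatim}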
\begin{figure}[h] \centering \includegraphics[width=1\linewidth]{figures/Word_Frequencies_Horizontal.pdf} \caption{Distributions of the most frequent hate words used inside and outside of each observed subreddit} \label{fig:WF} \end{figure} \section{Discussion} Through a robust ITS analysis we were able to identify a significant increase in hate speech resulting from joining subreddits known for their toxicity. This increase was observed outside of the community which incited it, thus implying a spread of hate arising from joining hateful communities. When analyzing the ITS models in Figure \ref{fig:ITS}, we noted that there is an upward trend in hate speech that predates the moment when treatment users become active members of the hate community. It seems probable that users were already becoming attracted to the specific theme, perhaps already being passive members of the studied community, known in Reddit lingo as ``lurkers'': users who read a subreddit's forum but do not engage in discussion. Due to dataset limitations, we measure the treatment start as the date on which a user made a first post in the studied subreddit. There is a looming possibility that treatment users were already lurkers before becoming active, and thus their external behavior had already begun reflecting their increasing extremism. Additionally, we observe a downward trend in the treatment group post-joining. We find evidence for a link between the decreased hate speech and the accounts' lifespan. The lifespan is affected either by a voluntary abandonment of accounts created for usage on the hateful community, or by banning by Reddit, although our data does not permit us to differentiate between those possibilities. Besides the effects of Reddit's banning policies, which were briefly explored in this study, there could be other reasons for this phenomenon. A commonplace explanation could be a simple regression to the mean -- a phenomenon in which those performing extremely well or extremely poorly (outliers) tend to move closer to the mean over time. Further research is needed to confirm this hypothesis. When looking at the percentage of hate speech produced by the members of the studied hateful subreddits in various contexts, it is observable that those users' hate speech also increases in other subreddits that were eventually banned at a later date, as illustrated in Figure \ref{fig:Timeseries}. We can see an expected connection between a subreddit adopting hateful words and the probability of it being banned. Moreover, since a subreddit cannot adopt hate words after it is already banned, it must be that the adoption of hate words precedes the ban, although this analysis cannot prove that the adoption causes the ban. Regarding hate speech vocabulary, in Figure \ref{fig:WF} we observe clear differences in hate word choices when subreddit members express themselves inside versus outside the community. Those differences were confirmed by a Spearman rank correlation test. We observe that users choose more ``insider'' hate words within their community, but less frequently bring this slang to the outside world, which is in line with some previous findings \cite{trujillo2021echo}. Such behavior might be one way of evoking group membership, by displaying knowledge of the group's unique vocabulary, which has been shown to entice greater responsiveness from the community \cite{tran2016characterizing}.
The converse is also true: users who employ more obscure in-group language outside of their group might simply find low engagement in other forums, either due to the out-group's disinterest in a foreign cause, or simply due to others' inability to comprehend messaging laden with group-specific language. Both scenarios would discourage the usage of group-specific language in favor of a more commonplace vocabulary and style when communicating outside of the users' community. From a qualitative perspective, in both the racist and the fat-shaming communities there appears to be a toning-down of the relative usage of the most egregiously offensive words, such as the n-word and ``obeast'', when communicating out-of-community. A possible explanation is that even hate group members are still somewhat affected by society's pressure towards civility. A secondary factor might once again be moderation, as those who chose the most offensive words in the out-group would be more likely to have their posting ability blocked, thus halting their spread of such vocabulary. From observing four different subreddits, covering three categories of hate speech, we have shown a causal relationship between a user becoming active in the community and the user's hate speech increasing immediately after. Such a negative externality arising from the growth of hateful communities makes a case for stronger moderation of such communities, an area where quarantining subreddits shows some promise. In general these findings corroborate previous research \cite{Chandrasekharan2017,trujillo2021echo} suggesting that positive hate-reducing effects can be obtained by restraining such hateful communities. \section{Conclusion} In this paper we presented a causal inference analysis of the impact that joining a hateful subreddit has on a user's ensuing hate speech in other subreddits. We looked at Reddit, one of the major social media platforms, and discovered that hate speech increases as an effect of joining hateful subreddits. We find this effect in three different categories of hate speech: racism, misogyny, and fat-shaming. This study is novel in using causal modeling to understand the influence of social media on hate speech, especially by leveraging the ITS design to identify the effects. Additionally, we performed sensitivity analyses by exploring the optimal time bandwidths and assessing the modulating effects of different time bandwidths on the calculated effect sizes. However, potential limitations warrant consideration. Our findings indicate that despite the matching technique employed, there were still pre-existing differences between the treatment and control groups, which stem at least in part from the need to match without using the target variable, as is commonly expected from such approaches \cite{Ham2022}. This causes the treatment group to have a higher baseline hate speech level, measured as \textit{const} + \textit{expos}. Nevertheless, the ITS model utilized is robust to different baselines, as it includes a control group precisely to allow measurement of such extraneous effects. The approach to lexicon construction or the choice of lexicon usage can also have an impact on the analyses, as hate speech is not binary but rather covers an entire spectrum, from speaking positively about the out-group, through mild offenses, to fully toxic discourse.
Certain outlets might be more conducive to different extents of hate speech, and thus different results could be found when lexicons differ. We were also limited in our data availability, as it is only possible to measure a user's active actions, namely posting and commenting. This constraint makes analyzing the effects of passive content consumption challenging. Similarly, we only study four subreddits, and additional analyses are needed to assess the generality of our conclusions on Reddit and on other social media platforms. Given the inability to obtain data regarding when a user formally chooses to become a member of a subreddit, or regarding which posts were being read by a user, the current approach could not investigate whether those factors relate to the observed pre-join upward trend, thus leaving an opportunity for future work. The mirror case is also true: exploring what drives the treatment users' hate speech downward after joining the hateful subreddit could unearth additional information to be leveraged in combating hate speech. \section*{Acknowledgements} Funding for this work is provided through the USC-ISI Exploratory Research Award and through DARPA (Award \# HR0011260595 and \# HR001121C0169). \section*{Conflicts of Interest} The authors declare no conflicts of interest. \bibliographystyle{plain}
{ "timestamp": "2022-10-04T02:18:12", "yymm": "2209", "arxiv_id": "2209.08697", "language": "en", "url": "https://arxiv.org/abs/2209.08697" }
\section{Introduction} The pro-$\var{V}$ topology on a group was defined by Hall in \cite{Hall49, Hall50}, where $\var{V}$ is a \emph{variety}, namely a class of finite groups closed with respect to subgroups, homomorphic images and direct products. Such a class is usually called a \emph{pseudovariety}, see \cite{MSW01, RZ94, Weil98}. The closure $Cl_{\var{V}}(S)$ of a subset $S$ in a finitely generated free group $F$ endowed with the pro-$\var{V}$ topology consists of those elements that cannot be distinguished from $S$ by any homomorphism from $F$ to a group in $\var{V}$. For instance, if $\var{V}$ is the variety $\var{Ab}$ of all finite abelian groups, then the closure of the trivial subgroup in $F$ is the derived subgroup $[F,F]$. Hall proved that every finitely generated subgroup is closed in $F$ when $\var{V}$ is the variety of all finite groups. This is not true for the pro-$\var{Ab}$ topology---the trivial subgroup is not closed and its closure $[F,F]$ is not even finitely generated. It is natural to ask the following question: given a variety $\var{V}$ and a finitely generated subgroup $H$, how can one determine the pro-$\var{V}$ closure of $H$? When $\var{V}$ is the variety $\var{G}_p$ ($p$ a prime) of all finite $p$-groups, Ribes-Zalesskii \cite{RZ94} proved that the pro-$\var{G}_p$ closure of a finitely generated subgroup is finitely generated and gave an algorithm to compute the closure. Margolis-Sapir-Weil modified Ribes-Zalesskii's algorithm in \cite{MSW01} to compute the pro-$\var{G}_p$ closure in polynomial time. They also proved that when $F$ is endowed with the pro-nilpotent topology, namely when $\var{V}$ is the variety $\var{Nil}$ of all finite nilpotent groups, the pro-$\var{Nil}$ closure of a finitely generated subgroup is finitely generated and computable. The main tool used in \cite{MSW01, RZ94} is the bijective correspondence between finitely generated subgroups and certain labeled graphs, which are called reduced inverse automata in \cite{MSW01} and reduced $A$-labeled graphs in this paper, where $A$ is a basis of $F$. The idea behind this correspondence goes back to Serre \cite{Ser77} and Stallings \cite{Sta83}, and it has turned out to be extremely useful since then. This correspondence allows one to define the notion of ``overgroups'' of a finitely generated subgroup. An overgroup of a finitely generated subgroup $H$ is always finitely generated, but whether a subgroup is an overgroup of $H$ depends on the chosen basis of $F$. When a basis is fixed, there are only finitely many overgroups of $H$. In \cite{MSW01}, Margolis-Sapir-Weil proved that the pro-$\var{Nil}$ closure of $H$ is the intersection of all pro-$\var{G}_p$ closures of $H$ and that each pro-$\var{G}_p$ closure is an overgroup of $H$ for a fixed basis $A$. So the intersection involves only finitely many finitely generated subgroups and is therefore finitely generated. We will describe this tool in Section \ref{sp3} and refer to \cite{KM02, MVW01}, where relevant statements and proofs can be found. For instance, a simple proof of Hall's extension theorem in \cite{Hall50}, which states that every finitely generated subgroup is a free factor of a subgroup of finite index in the free group, can be found in \cite{KM02}, Theorem 8.1. Motivated by Margolis-Sapir-Weil's work, we focus on the case where $\var{V}$ is the variety $\var{Su}$ of all finite supersolvable groups. It is a variety lying between $\var{Nil}$ and the variety $\var{Sol}$ of all finite solvable groups.
We prove \begin{mtm} Let $H$ be a finitely generated subgroup of $F$ endowed with the pro-$\var{Su}$ topology, then the closure of $H$ is finitely generated. \end{mtm} The proof depends on the following observation. A finite nilpotent group is a direct product of its Sylow subgroups and so can project onto a group in the subvariety $\var{G}_p$ of $\var{Nil}$ for each prime $p$ dividing its order. This allows the authors of \cite{MSW01} to prove that the pro-$\var{Nil}$ closure is the intersection of all pro-$\var{G}_p$ closures. A similar result holds for finite supersolvable groups, namely Lemma \ref{fp}, which states that a finite supersolvable group can project onto a group in the subvariety $\var{H}_p$ (defined in Section \ref{sp}) of $\var{Su}$ for each prime $p$ dividing its order. This is the key to proving that the pro-$\var{Su}$ closure of $H$ is the intersection of all pro-$\var{H}_p$ closures. We then prove that each pro-$\var{H}_p$ closure is an overgroup of $H$ for a fixed basis (Corollary \ref{hpclo}), and finally prove the main theorem using the fact that there are only finitely many overgroups of $H$. The organization of this paper is as follows: The definitions of some varieties with their notations are given in Section \ref{sp1}; some easy but frequently used results about the pro-$\var{V}$ topology on a free group are listed in Section \ref{sp2}; in Sections \ref{sp3} and \ref{sp4}, we give a brief introduction to the tool mentioned above and some important lemmas and propositions which are used in this paper; in Section \ref{t1} we show that the pro-$\var{H}_p$ closure of $H$ is an overgroup of $H$ (Corollary \ref{hpclo}) and provide a simple method to decide whether the pro-$\var{H}_p$ closure of a finitely generated subgroup is $F$ (Theorem \ref{hpdense}); in Section \ref{t2}, after introducing some notions and propositions about finite supersolvable groups, we prove the main theorem of this paper. \section{Preliminary}\label{sp} \subsection{Varieties of finite groups}\label{sp1} \begin{defn} A class of finite groups is called a \emph{variety} if it is closed with respect to subgroups, homomorphic images and direct products. The \emph{product} $\var{U}*\var{V}$ of two varieties $\var{U}$, $\var{V}$ consists of all groups $G$ having a normal subgroup $N\in \var{U}$ such that $G/N\in \var{V}$. For two varieties $\var{U}, \var{V}$, we say $\var{U}$ is a \emph{subvariety} of $\var{V}$ if $\var{U}\subset \var{V}$. \end{defn} \begin{defn} A group $G$ is called a \emph{supersolvable} group if there exists a normal series: $$1=H_0\leq H_1\leq \cdots\leq H_n=G$$ where each $H_i\lhd G$ and each $H_{i+1}/H_i$ is cyclic. \end{defn} The class of all finite supersolvable groups is a variety denoted by $\var{Su}$. We use the following notations for the varieties mentioned in this paper. \vspace{2mm} \begin{tabular}{|c|c|} \hline Notation & The corresponding variety\\ \hline $\var{G}_p$ & the variety of all $p$-groups ($p$ is prime)\\ $\var{Ab}$ & the variety of all abelian groups \\ $\var{Ab}_d$ & the variety of all abelian groups of exponent dividing $d$ \\ $\var{Nil}$ & the variety of all nilpotent groups \\ $\var{Su}$ & the variety of all supersolvable groups \\ $\var{Sol}$ & the variety of all solvable groups \\ $\var{H}_p$ & $\var{G}_p*\var{Ab}_{p-1}$ ($p$ is prime)\\ \hline \end{tabular} \vspace{2mm} \begin{rem} It is easy to check that $\var{G}_p\subset \var{Nil}\subset \var{Su} \subset \var{Sol}$. $\var{H}_p$ is also a subvariety of $\var{Su}$. \end{rem}
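As a concrete illustration of these inclusions we add the smallest nonabelian example (a standard one, not taken from the cited sources). \begin{exam} The symmetric group $S_3\cong (\mathbb Z/3\mathbb Z)\rtimes (\mathbb Z/2\mathbb Z)$ lies in $\var{H}_3=\var{G}_3*\var{Ab}_2$: its normal Sylow $3$-subgroup $A_3$ is in $\var{G}_3$, the quotient $S_3/A_3\cong \mathbb Z/2\mathbb Z$ is in $\var{Ab}_2$, and $2\mid 3-1$. Moreover, the normal series $1\lhd A_3\lhd S_3$ has cyclic factors, so $S_3\in \var{Su}$, while $S_3$ is not nilpotent; hence $\var{H}_3\not\subset \var{Nil}$. \end{exam}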
\subsection{The pro-$\var{V}$ topology on a free group}\label{sp2} In \cite{Hall50}, Hall introduced a topology for free groups which is nowadays called the pro-$\var{V}$ topology. Let $F$ be a free group and $\var{V}$ be a variety. Denote by $\mathcal{K}$ the family of normal subgroups $K$ of $F$ such that $F/K\in \var{V}$. We can define a topology on $F$ by taking $\{Kx~|~K\in \mathcal{K}, ~x\in F\}$ as a basis for open sets in $F$. Note that $\mathcal{K}$ is a basis of neighborhoods of the element $1$. This topology is called the pro-$\var{V}$ topology on $F$; it can also be defined for an arbitrary group. \begin{rem} The pro-$\var{V}$ topology on $F$ is the coarsest topology which makes all homomorphisms from $F$ to elements (equipped with the discrete topology) of $\var{V}$ continuous. \end{rem} In \cite{MSW01}, the authors gave another point of view on the pro-$\var{V}$ topology. We describe it below. For $x, y\in F$, we say that a finite group $V$ separates $x$ and $y$ if there exists a homomorphism $\phi_V: F\to V$ such that $\phi_V(x)\neq \phi_V(y)$. Let $$d(x,y)=\begin{cases} e^{-r(x,y)}, & \mbox{if }x,y \mbox{ can be separated by an element in }$\var{V}$ \\ 0, & \mbox{otherwise} \end{cases}$$ where $r(x,y)=\min\{|V|: V\in \var{V} ~\text{separates}~ x ~\text{and}~ y\}$. One can verify that $d$ is a pseudometric on $F$; for instance, for $\var{V}=\var{Ab}$ the elements $ab$ and $ba$ of $F(a,b)$ cannot be separated, so $d(ab,ba)=0$. The topology defined by $d$ is the same as the pro-$\var{V}$ topology. \begin{defn} Let $\var{V}$ be a variety. A subgroup $H$ of $F$ is said to be \emph{$\var{V}$-open} (resp. \emph{$\var{V}$-closed}) if it is open (resp. closed) in $F$ endowed with the pro-$\var{V}$ topology. The closure of a subset $S\subset F$ is called the $\var{V}$-closure of $S$ and denoted by $Cl_{\var{V}}(S)$. Moreover $S$ is called $\var{V}$-dense in $F$ if $Cl_{\var{V}}(S)=F$. \end{defn} \begin{prop}[\cite{MSW01}, Proposition 1.3] Let $H$ be a subgroup of $F$, then$$Cl_{\var{V}}(H)=\bigcap_{K \in \mathcal{K}} K=\bigcap_{\phi\in \Phi_\var{V}} \phi^{-1}(\phi(H)),$$ where $\mathcal{K}$ here denotes the set of all $\var{V}$-open subgroups containing $H$, and $\Phi_\var{V}$ is the set of all homomorphisms from $F$ to elements of $\var{V}$. \end{prop} Note that for each $\phi\in \Phi_\var{V}$, $\phi(F)$ lies in $\var{V}$, so $\phi$ induces a surjective homomorphism $\phi_s: F\to \phi(F)\in \var{V}$ such that $\phi_s^{-1}(\phi_s(H))=\phi^{-1}(\phi(H))$. We have \begin{cor}\label{dense} Let $H$ be a subgroup of $F$, then $H$ is $\var{V}$-dense in $F$ if and only if for each surjective $\phi_s\in \Phi_\var{V}$, $\phi_s^{-1}(\phi_s(H))=F$, namely $\phi_s(H)=\phi_s(F)$. \end{cor} \begin{lem}[\cite{MSW01}, Corollary 3.1]\label{subd} Suppose $\var{U},\var{V}$ are two varieties such that $\var{U}\subset \var{V}$. Let $H$ be a finitely generated subgroup of $F$. (1) If $H$ is $\var{V}$-dense, then $H$ is $\var{U}$-dense. (2) The $\var{V}$-closure of $H$ is contained in the $\var{U}$-closure of $H$. (3) If $H$ is $\var{U}$-closed, then $H$ is $\var{V}$-closed. \end{lem} \subsection{Representation of subgroups by labeled graphs}\label{sp3} A free group $F$ can be identified with the fundamental group of a bouquet of circles $R$, while each subgroup of $F$ corresponds to an associated graph covering $R$. From this point of view, Stallings introduced the useful notion of a folding of graphs in \cite{Sta83}, which has since been widely used in the study of the lattices of subgroups of free groups. This method is also useful for solving algorithmic problems concerning subgroups of $F$.
We briefly describe this technique below and refer to \cite{KM02, MVW01} for more details. \subsubsection{Reduced $A$-labeled graph} Denote by $F(A)$ the free group over a finite alphabet $A$. \begin{defn} An \emph{$A$-labeled graph} $\Gamma$ is a directed graph satisfying the following conditions: (1) the underlying undirected graph of $\Gamma$ is connected; (2) there is a prechosen vertex marked by 1 which we call the base vertex of $\Gamma$; (3) each edge of $\Gamma$ is labeled by a letter of $A$. \end{defn} For an $A$-labeled graph $\Gamma$, if there exist two different edges $e_1, e_2$ with the same origin (resp. with the same terminus) and the same label, then we can identify $e_1, e_2$ to produce a new $A$-labeled graph. This operation is called ``folding of graphs''; see \cite{KM02} for details. \begin{defn} [See \cite{MVW01}] An $A$-labeled graph $\Gamma$ is said to be \emph{reduced} if different edges with the same origin (resp. with the same terminus) have different labels, and if each vertex except the base vertex is adjacent to at least two different edges. \end{defn} By a path $p$ in an $A$-labeled graph $\Gamma$ we understand a path in the underlying undirected graph of $\Gamma$, where we are allowed to travel backwards along edges. The label of $p$ is the word (maybe not a reduced word) in $F(A)$ obtained by concatenating consecutively the labels of the edges crossed by $p$, concatenating $a^{-1}$ whenever an edge labeled by $a\in A$ is crossed backwards. The path $p$ is said to be reduced if the label of $p$ is a reduced word in $F(A)$. For a reduced $A$-labeled graph $\Gamma$, we can associate with it a subgroup $L(\Gamma)$ of $F(A)$, namely the set of words labeling reduced paths in $\Gamma$ from the base vertex 1 back to itself. Note that if $\Gamma$ is finite, then the associated subgroup $L(\Gamma)$ is finitely generated. Conversely, suppose $H=\langle h_1,...,h_k\rangle$ is a finitely generated subgroup of $F(A)$ where each $h_i$ is a non-empty reduced word over the alphabet $A$; then $H$ can be represented by a finite reduced $A$-labeled graph through the following steps (see Figure \ref{Stallings folding} for an example). \textbf{Step 1}. Suppose $h_1=a_{i_1}^{\epsilon_1}\cdots a_{i_m}^{\epsilon_m}$, $a_{i_1},...,a_{i_m}\in A$, $\epsilon_1,...,\epsilon_m=\pm 1$. We construct an $m$-subdivided circle labeled by $h_1$, based at a vertex marked by 1, in which each inverse letter $a_{i_j}^{\epsilon_j}$ with $\epsilon_j=-1$ gives rise to an $a_{i_j}$-labeled edge in the reverse direction on the circle. \textbf{Step 2}. For each $h_i \in \{h_2,...,h_k\}$, we do the same as in the first step and construct an $m_i$-subdivided circle attached to the vertex marked by 1, where $m_i$ is the length of $h_i$. Note that the circle corresponding to $h_i$ has $m_i$ edges and $m_i-1$ vertices besides the base vertex. We now get a finite $A$-labeled graph $\Gamma'$ consisting of $k$ circles based at the common vertex 1. \textbf{Step 3}. We perform the operation ``folding of graphs'' to make $\Gamma'$ reduced, namely we iteratively identify different edges with the same origin (resp. with the same terminus) and the same label. This process terminates since $\Gamma'$ is a finite graph, and it does not matter in which order the identifications take place. The resulting reduced $A$-labeled graph is denoted by $\Gamma_A(H)$.
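For readers who prefer code, the following is a minimal sketch of the above construction (our illustration, not taken from the cited sources). Words are strings over $\{a,b,A,B\}$ with capital letters denoting inverses; all function names are ours.

\begin{verbatim}
def build_bouquet(generators):
    """Steps 1-2: one subdivided circle per generator, wedged at the
    base vertex 0; edges are triples (origin, label, terminus)."""
    edges, next_vertex = set(), 1
    for word in generators:
        prev = 0
        for i, letter in enumerate(word):
            nxt = 0 if i == len(word) - 1 else next_vertex
            next_vertex += (nxt != 0)
            if letter.islower():
                edges.add((prev, letter, nxt))
            else:                      # inverse letter: reverse the edge
                edges.add((nxt, letter.lower(), prev))
            prev = nxt
    return edges

def identify(edges, u, v):
    """Merge vertex v into vertex u (keeping the smaller name, so the
    base vertex 0 is never merged away)."""
    u, v = min(u, v), max(u, v)
    return {(u if o == v else o, l, u if t == v else t)
            for (o, l, t) in edges}

def fold(edges):
    """Step 3: identify equally-labeled edges sharing an origin (or a
    terminus) until no such pair remains."""
    edges, changed = set(edges), True
    while changed:
        changed = False
        for e1 in list(edges):
            for e2 in list(edges):
                if e1 == e2 or e1 not in edges or e2 not in edges:
                    continue
                (o1, l1, t1), (o2, l2, t2) = e1, e2
                if l1 == l2 and o1 == o2 and t1 != t2:
                    edges, changed = identify(edges, t1, t2), True
                elif l1 == l2 and t1 == t2 and o1 != o2:
                    edges, changed = identify(edges, o1, o2), True
    return edges

# The example H = <bab^{-1}, b^2 a^{-1}> from the caption of the figure
# below; the output is the folded edge set of Gamma_A(H).
print(sorted(fold(build_bouquet(["baB", "bbA"]))))
\end{verbatim}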
\begin{figure}[H] \centering \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=new style 0] (0) at (-10.5, 3.75) {1}; \node [style=new style 0] (1) at (-11.75, 4.5) {2}; \node [style=new style 0] (2) at (-11.75, 3) {3}; \node [style=new style 0] (3) at (-9, 4.5) {5}; \node [style=new style 0] (4) at (-9, 3.25) {4}; \node [style=none] (5) at (-8, 3.75) {}; \node [style=none] (6) at (-6.75, 3.75) {}; \node [style=new style 0] (7) at (-4.75, 2.5) {1}; \node [style=new style 0] (8) at (-4.75, 5) {2,5}; \node [style=new style 0] (9) at (-6, 3.75) {3}; \node [style=new style 0] (10) at (-3.5, 3.75) {4}; \node [style=new style 0] (11) at (-8.5, 0.25) {2,3,5}; \node [style=new style 0] (12) at (-7, 1) {4}; \node [style=new style 0] (13) at (-7, -0.5) {1}; \node [style=none] (14) at (-12.75, 3.75) {$a$}; \node [style=none] (15) at (-11.25, 5.25) {$b$}; \node [style=none] (16) at (-9.75, 2.5) {$a$}; \node [style=none] (17) at (-9.75, 5.25) {$b$}; \node [style=none] (19) at (-11.25, 2.5) {$b$}; \node [style=none] (20) at (-6.25, 0.25) {$a$}; \node [style=none] (21) at (-8.5, 4) {$b$}; \node [style=none] (22) at (-8, -1.25) {$b$}; \node [style=none] (23) at (-5.75, 2.75) {$b$}; \node [style=none] (24) at (-5.75, 5) {$a$}; \node [style=none] (25) at (-5, 3.75) {$b$}; \node [style=none] (26) at (-3.5, 2.75) {$a$}; \node [style=none] (27) at (-3.75, 5) {$b$}; \node [style=none] (28) at (-9.5, 0.25) {$a$}; \node [style=none] (29) at (-8, 1.75) {$b$}; \node [style=none] (30) at (-5.25, 2) {}; \node [style=none] (31) at (-6.25, 1) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=new edge style 1, bend right=60, looseness=1.25] (0) to (1); \draw [style=new edge style 1, bend left=60, looseness=1.25] (0) to (2); \draw [style=new edge style 0, bend right=60] (1) to (2); \draw [style=new edge style 0, bend right=60, looseness=1.25] (0) to (4); \draw [style=new edge style 1, bend left=60, looseness=1.25] (0) to (3); \draw [style=new edge style 1, bend left] (3) to (4); \draw [style=new edge style 4] (5.center) to (6.center); \draw [style=new edge style 1] (7) to (8); \draw [style=new edge style 1, bend left] (8) to (10); \draw [style=new edge style 1, bend left] (7) to (9); \draw [style=new edge style 0, bend right] (8) to (9); \draw [style=new edge style 0, bend right] (7) to (10); \draw [style=new edge style 0, in=-135, out=135, loop] (11) to (); \draw [style=new edge style 1, bend left=60, looseness=1.25] (11) to (12); \draw [style=new edge style 1, bend right=300, looseness=1.25] (13) to (11); \draw [style=new edge style 0, bend right=45] (13) to (12); \draw [style=new edge style 4] (30.center) to (31.center); \end{pgfonlayer} \end{tikzpicture} \caption{Getting $\Gamma_A(H)$ for $H=\left \langle bab^{-1}, b^2 a^{-1} \right \rangle$ by ``folding of graphs''}\label{Stallings folding} \end{figure} A finite reduced $A$-labeled graph $\Gamma$ can be associated with a finitely generated subgroup $L(\Gamma)$, while a finitely generated subgroup $H$ can be represented by a finite reduced $A$-labeled graph $\Gamma_A(H)$. In fact, one can prove $L(\Gamma_A(H))=H$ (see \cite{KM02} Theorem 5.1, 5.2) and we have the following proposition. \begin{prop} There is a one-to-one correspondence between finitely generated subgroups of $F(A)$ and finite reduced $A$-labeled graphs. \end{prop} \subsubsection{Schreier graphs and Schreier free factors} There is another way to construct $\Gamma_A(H)$, described in \cite{KM02}.
Let $\Gamma(F(A),A)$ be the Cayley graph of $F(A)$ with respect to the basis $A$ and let $\Gamma_H= \Gamma(F(A),A)/H$ be the quotient graph of $\Gamma(F(A),A)$ by the action of $H$, which is usually called the Schreier graph or coset graph associated to $H$. The vertex set of $\Gamma_H$ is the set of cosets $\{Hg~|~g\in F(A)\}$, and there is an edge from $Hg_1$ to $Hg_2$ labeled by $a\in A$ if and only if $Hg_1a=Hg_2$. The vertex corresponding to the coset $H$ is marked by 1; then $\Gamma_H$ is an infinite $A$-labeled graph unless $H$ is of finite index in $F(A)$. Now $\Gamma_A(H)$ is the core of $\Gamma_H$ with respect to the base vertex 1, see \cite{KM02} Theorem 5.1. \begin{defn} Let $F(A)$ be the free group with basis $A$, whose elements are understood as reduced words over the alphabet $A$. A prefix-closed subset of $F(A)$ is called a \emph{Schreier set}. Let $H$ be a subgroup of $F(A)$; a complete Schreier set $T$ of representatives for all cosets of $H$ is called a \emph{Schreier transversal} for $H$, and its associated basis $B_T$ of $H$ is called a \emph{Schreier basis} of $H$. A \emph{Schreier free factor} (with respect to the chosen basis $A$) of $H$ is any subgroup of $H$ that is generated by a subset of some Schreier basis of $H$. \end{defn} We can obtain Schreier transversals and Schreier bases easily from the coset graph $\Gamma_H$ of $H$. Choose any spanning tree $L$ of $\Gamma_H$; then the set of all reduced words that label a path in $L$ starting at the base vertex 1 is a Schreier transversal $T$ for $H$. The Schreier basis $B_T$ of $H$ corresponding to $T$ can be obtained as follows. For each edge $e$ in $\Gamma_H$ that is not on the spanning tree $L$, consider the word $w_e:=w_{o_e} a_e w_{t_e}^{-1}$ where $o_e$ is the origin of $e$, $t_e$ is the terminus of $e$, $a_e\in A$ is the label of $e$, and $w_{o_e}$ (resp. $w_{t_e}$) is the representative in $T$ of the coset $o_e$ (resp. $t_e$). Then $B_T=\{w_e~|~e\in \Gamma_H-L\}$ is the corresponding Schreier basis of $H$. \begin{exam} Let $H=\left \langle aba^{-1}b,b^{-1}a^{-1}ba^{-1}b,a^{-1}b^{-1},b^{-1}ab^3 a^{-1}b \right \rangle$ be a finitely generated subgroup of $F(A)=\langle a, b\rangle$. $\Gamma_A(H)$ is shown in Figure \ref{Gamma(H) and L}, which is also the core of the coset graph $\Gamma_H$ of $H$. A spanning tree $L_0$ of $\Gamma_A(H)$ is depicted by dotted lines, which can be easily extended to a spanning tree $L$ of $\Gamma_H$.
The corresponding Schreier transversal is $$T=\{1,b^{-1}ab^{-1},b^{-1}a, b^{-1},b^{-1}ab, a^{-1},\cdots \},$$ and the Schreier basis of $H$ corresponding to $T$ is $$B_{T}=\{aba^{-1}b,b^{-1}ab^{-1}ab,b^{-1}abbba^{-1}b, ba \}.$$ \begin{figure}[H] \centering \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=new style 0] (0) at (0, 0) {1}; \node [style=new style 0] (1) at (2.5, 1.5) {2}; \node [style=new style 0] (2) at (2.5, -1.25) {4}; \node [style=new style 0] (3) at (5, 0) {3}; \node [style=new style 0] (4) at (5, 2) {5}; \node [style=new style 0] (5) at (-3, 0) {6}; \node [style=none] (6) at (-1.75, 1.25) {$b$}; \node [style=none] (7) at (-1.5, -1.25) {$a$}; \node [style=none] (8) at (1, -1) {$b$}; \node [style=none] (9) at (1, 1.25) {$a$}; \node [style=none] (10) at (4, 1.25) {$b$}; \node [style=none] (11) at (4, -1) {$a$}; \node [style=none] (12) at (5.75, 1) {$b$}; \node [style=none] (13) at (3.75, 2.5) {$b$}; \node [style=none] (14) at (2.75, 0) {$a$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=new edge style 0] (1) to (2); \draw [style=new edge style 3, bend left=15] (2) to (0); \draw [style=new edge style 2, bend right=15] (2) to (3); \draw [style=new edge style 3, bend left=15] (1) to (3); \draw [style=new edge style 1, bend right=45, looseness=1.50] (0) to (5); \draw [style=new edge style 0, bend left=15] (0) to (1); \draw [style=new edge style 1, bend right] (4) to (1); \draw [style=new edge style 3, in=-45, out=45] (3) to (4); \draw [style=new edge style 2, bend right=60, looseness=1.25] (5) to (0); \end{pgfonlayer} \end{tikzpicture} \caption{$\Gamma_A(H)$ and its spanning tree $L_0$}\label{Gamma(H) and L} \end{figure} \end{exam} \begin{lem}\label{Schf} Let $H$ be a finitely generated subgroup of $F(A)$ represented by a reduced $A$-labeled graph $\Gamma_A(H)$. Suppose $\Delta$ is a reduced $A$-labeled subgraph of $\Gamma_A(H)$, then the subgroup $L(\Delta)$ associated with $\Delta$ is a Schreier free factor of $H$. \end{lem} \begin{proof} See \cite{KM02} Lemma 6.1 and Proposition 6.2. \end{proof} \subsubsection{Morphisms between reduced $A$-labeled graphs} \begin{defn} A morphism $\phi$ between two reduced $A$-labeled graphs $\Gamma$ and $\Delta$ is a map from $\Gamma$ to $\Delta$ that preserves the structure of $A$-labeled graphs. Namely, $\phi$ sends the base vertex to the base vertex, and vertices to vertices, and edges to edges such that whenever $\Gamma$ has an edge $e$ labeled by $a\in A$ from $o_e$ to $t_e$, then $\phi(e)$ is an edge labeled by $a$ from $\phi(o_e)$ to $\phi(t_e)$ in $\Delta$. \end{defn} It is easy to observe that a morphism of reduced $A$-labeled graphs is necessarily locally injective (an immersion in \cite{Sta83}), namely for each vertex $v$ of $\Gamma$, different edges with the same origin (resp. the same terminus) $v$ have different images. The following properties are well known and proved in \cite{KM02} (or \cite{MVW01}). \begin{prop}\label{morbasic} Let $H, K$ be two finitely generated subgroups of $F(A)$. Then (1) A morphism $\Gamma_A(H)\to \Gamma_A(K)$ exists if and only if $H\leq K$. (2) If a morphism $\Gamma_A(H)\to \Gamma_A(K)$ exists, it is unique. We denote it by $\phi_{H,K}$. (3) If a morphism $\Gamma_A(H)\to \Gamma_A(K)$ is injective, namely $\phi_{H,K}$ is one-to-one (if and only if it is one-to-one on vertices), then $H$ is a free factor of $K$. 
\end{prop} \begin{figure}[H] \centering \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=new style 0] (0) at (-0.5, 0.5) {1}; \node [style=new style 0] (1) at (-1, -1) {5}; \node [style=new style 0] (3) at (1, 0.5) {3}; \node [style=new style 0] (4) at (1.5, -0.5) {4}; \node [style=new style 0] (6) at (0.5, -1.75) {6}; \node [style=none] (8) at (-0.75, -2) {$b$}; \node [style=none] (9) at (1.5, -1.75) {$b$}; \node [style=none] (11) at (-1.25, 0.25) {$a$}; \node [style=none] (13) at (0.25, 1.25) {$b$}; \node [style=none] (14) at (1.75, 0.25) {$a$}; \node [style=new style 0] (17) at (5.5, 0) {1}; \node [style=new style 0] (18) at (4.5, -1) {5}; \node [style=new style 0] (19) at (7, 0.75) {3}; \node [style=new style 0] (20) at (6.5, -1) {4}; \node [style=new style 0] (21) at (5.5, -2) {6}; \node [style=new style 0] (22) at (4.25, 0.75) {2}; \node [style=none] (23) at (4.5, -2) {$b$}; \node [style=none] (24) at (6.5, -2) {$b$}; \node [style=none] (25) at (3.5, -0.5) {$a$}; \node [style=none] (26) at (4.5, 0) {$a$}; \node [style=none] (27) at (5, 1) {$b$}; \node [style=none] (28) at (6, 1) {$b$}; \node [style=none] (29) at (6.75, -0.25) {$a$}; \node [style=none] (30) at (6, -0.5) {$a$}; \node [style=none] (31) at (7.75, -0.25) {$b$}; \node [style=none] (32) at (2.25, -0.5) {}; \node [style=none] (33) at (3, -0.5) {}; \node [style=none] (34) at (0.25, -2.75) {$H=\left \langle ab^2a^{-1}b,ab^2a \right \rangle$}; \node [style=none] (35) at (5.75, -2.75) {$K=\left \langle ba^{-1}b^4,ab^4,a^{-1}b^2,b^{-2}a^{-1}b \right \rangle$}; \node [style=none] (36) at (0.25, -0.75) {$a$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=new edge style 0, bend right] (0) to (1); \draw [style=new edge style 0, bend left=15] (3) to (4); \draw [style=new edge style 1, bend right] (3) to (0); \draw [style=new edge style 1, bend right=45] (1) to (6); \draw [style=new edge style 1, bend right] (6) to (4); \draw [style=new edge style 0, bend left=60, looseness=1.25] (18) to (22); \draw [style=new edge style 0, bend right] (17) to (18); \draw [style=new edge style 0, bend right] (20) to (17); \draw [style=new edge style 0, bend right] (19) to (20); \draw [style=new edge style 1, bend right] (19) to (17); \draw [style=new edge style 1, bend right] (17) to (22); \draw [style=new edge style 1, bend right] (18) to (21); \draw [style=new edge style 1, bend right] (21) to (20); \draw [style=new edge style 1, bend right=45, looseness=1.50] (20) to (19); \draw [style=new edge style 4] (32.center) to (33.center); \draw [style=new edge style 0, bend right=315] (4) to (0); \end{pgfonlayer} \end{tikzpicture} \caption{Injective morphism between reduced $A$-labeled graphs}\label{fig injective morphism} \end{figure} \begin{rem}\label{mSchf} If the morphism $\Gamma_A(H)\to \Gamma_A(K)$ is injective, namely the image of $\Gamma_A(H)$ is a reduced $A$-labeled subgraph of $\Gamma_A(K)$, then $H$ is a Schreier free factor of $K$ by Lemma \ref{Schf}. \end{rem} \begin{defn} Let $H, K$ be two finitely generated subgroups of $F(A)$. If $\phi_{H,K}:\Gamma_A(H)\to \Gamma_A(K)$ is surjective (both on vertices and on edges), then we say that $K$ is an overgroup of $H$ (with respect to $A$). The set of all overgroups of $H$ is called the \emph{$A$-fringe} of $H$, denoted by $\mathcal{O}_A(H)$. 
\end{defn} \begin{figure}[H] \centering \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=new style 0] (0) at (-5, 1.75) {2}; \node [style=new style 0] (1) at (-5.75, 0.5) {1}; \node [style=new style 0] (2) at (-3.25, 1.5) {3}; \node [style=new style 0] (3) at (-5, -0.75) {5}; \node [style=new style 0] (4) at (-3.25, -0.5) {4}; \node [style=none] (5) at (-6, 1.5) {$a$}; \node [style=none] (6) at (-6, -0.5) {$a$}; \node [style=none] (7) at (-3.75, -1.25) {$b$}; \node [style=none] (8) at (-2.5, 0.5) {$a$}; \node [style=none] (9) at (-4, 2.25) {$b$}; \node [style=new style 0] (10) at (0, 0.5) {1}; \node [style=new style 0] (11) at (1.5, 0.5) {2,5}; \node [style=new style 0] (12) at (3, 0.5) {3,4}; \node [style=none] (13) at (0.75, -0.5) {$a$}; \node [style=none] (14) at (2.25, -0.5) {$b$}; \node [style=none] (15) at (0.75, 1.5) {$a$}; \node [style=none] (16) at (2.25, 1.5) {$b$}; \node [style=none] (17) at (-1.5, 0.5) {}; \node [style=none] (18) at (-0.75, 0.5) {}; \node [style=none] (19) at (-4.5, -1.75) {$H=\left \langle ab^2a^{-1},aba^2b^{-1}a^{-1},ababa \right \rangle$}; \node [style=none] (20) at (2, -1.75) {$K=\left \langle a^2,ab^2a,ababa \right \rangle$}; \node [style=none] (26) at (4, 0.5) {$a$}; \node [style=none] (27) at (-4.25, 0.75) {$b$}; \node [style=none] (28) at (-4, 0) {$a$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=new edge style 0, bend left] (1) to (0); \draw [style=new edge style 0, bend left] (3) to (1); \draw [style=new edge style 1, bend left] (0) to (2); \draw [style=new edge style 1, bend left] (4) to (3); \draw [style=new edge style 1, bend left=60, looseness=1.25] (11) to (12); \draw [style=new edge style 0, bend left=60, looseness=1.25] (11) to (10); \draw [style=new edge style 4] (17.center) to (18.center); \draw [style=new edge style 0, bend left=45] (2) to (4); \draw [style=new edge style 0, bend left=60, looseness=1.25] (10) to (11); \draw [style=new edge style 1, bend left=60, looseness=1.25] (12) to (11); \draw [style=new edge style 0, in=-45, out=45, loop] (12) to (); \draw [style=new edge style 1, bend right=315] (2) to (0); \draw [style=new edge style 0, bend left] (4) to (2); \end{pgfonlayer} \end{tikzpicture} \caption{Surjective morphism between reduced $A$-labeled graphs}\label{fig surjective morphism} \end{figure} Since a finite reduced $A$-labeled graph can only be mapped onto finitely many reduced $A$-labeled graphs, we have \begin{prop} Let $H$ be a finitely generated subgroup of $F(A)$, then $\mathcal{O}_A(H)$ is a finite set. \end{prop} \subsection{$\var{V}$-preclosed subgroups}\label{sp4} \begin{defn}[\cite{Weil98}, Section 2.1] Let $\var{V}$ be a variety. A subgroup $H$ of $F(A)$ is said to be $\var{V}$-preclosed if there exists a $\var{V}$-open subgroup $K$ containing $H$ such that the natural morphism $\phi_{H,K}: \Gamma_A(H)\to \Gamma_A(K)$ is injective. The intersection of all $\var{V}$-preclosed subgroups containing $H$ is called the $\var{V}$-preclosure of $H$. \end{defn} A $\var{V}$-preclosed subgroup is a free factor of a $\var{V}$-open subgroup by Proposition \ref{morbasic} (3), but the converse does not hold in general; see \cite{Weil98} Example 2.3. This is because the notion of a $\var{V}$-preclosed subgroup is not an algebraic notion: it depends on the chosen basis $A$.
In \cite{Weil98} Section 2, the author discussed the relation between $\var{V}$-closed subgroups and $\var{V}$-preclosed subgroups, and proved the following. \begin{prop}[\cite{Weil98}, Proposition 2.8]\label{ctopc} A $\var{V}$-closed subgroup of the free group $F$ is $\var{V}$-preclosed. \end{prop} \begin{prop}[\cite{Weil98}, Section 2.1.2]\label{preover} Let $H$ be a finitely generated subgroup of $F(A)$, then the $\var{V}$-preclosure of $H$ is an overgroup of $H$. \end{prop} The author also found that under some assumptions on the variety $\var{V}$, these two notions coincide. \begin{defn} A variety $\var{V}$ is called \emph{extension-closed} if whenever a finite group $G$ has a normal subgroup $K$ such that both $K$ and $G/K$ lie in $\var{V}$, then $G$ is in $\var{V}$. \end{defn} \begin{prop}[\cite{Weil98}, Proposition 2.9]\label{cpclf} Assume $\var{V}$ is an extension-closed variety and let $H$ be a finitely generated subgroup of the free group. Then the following are equivalent: (1) $H$ is $\var{V}$-closed; (2) $H$ is $\var{V}$-preclosed; (3) $H$ is a free factor of a $\var{V}$-open subgroup. \end{prop} A similar result to Proposition \ref{cpclf} also holds for a variety with the M. Hall property, shortly called a Hall variety, defined below. \begin{defn}[\cite{AS}, Section 3.5]\label{dhall} A variety $\var{V}$ is said to be a Hall variety if for each finite alphabet $A$ and each $\var{V}$-open subgroup $K$ of $F(A)$, each Schreier free factor of $K$ is $\var{V}$-closed in $F(A)$. \end{defn} \begin{prop}\label{cpclfhall} Assume $\var{V}$ is a Hall variety and let $H$ be a finitely generated subgroup of the free group $F(A)$. Then the following are equivalent: (1) $H$ is $\var{V}$-closed; (2) $H$ is $\var{V}$-preclosed; (3) $H$ is a Schreier free factor of a $\var{V}$-open subgroup. \end{prop} \begin{proof} (3)$\Rightarrow$(1) is by Definition \ref{dhall}. (1)$\Rightarrow$(2) is by Proposition \ref{ctopc}. (2)$\Rightarrow$(3) is by Remark \ref{mSchf}. \end{proof} \section{Supersolvable closures of subgroups} \subsection{$\var{H}_p$-closures of subgroups}\label{t1} \subsubsection{$\var{H}_p$-closures are overgroups} In \cite{AS}, the authors gave a complete classification of the varieties of supersolvable groups that are Hall varieties. They proved \begin{thm}[\cite{AS}, Theorem 1.1] A variety $\var{V}$ of finite supersolvable groups is a Hall variety if and only if $\var{V}=\var{G}_p * \var{Ab}_d$ for some prime $p$ and some positive integer $d~|~p-1$. \end{thm} \begin{cor}\label{hpclo} Let $H$ be a finitely generated subgroup of $F(A)$, then the $\var{H}_p$-closure of $H$ is an overgroup of $H$, namely $Cl_{\var{H}_p}(H)$ lies in $\mathcal{O}_A(H)$. \end{cor} \begin{proof} By Proposition \ref{cpclfhall}, $Cl_{\var{H}_p}(H)$ is exactly the $\var{H}_p$-preclosure of $H$, and by Proposition \ref{preover} it is an overgroup of $H$. \end{proof} \subsubsection{Deciding $\var{H}_p$-denseness} \begin{defn} We call $(G,S)$ an \emph{$n$-marked group} if $S$ is a generating set of $G$ containing $n$ elements. A homomorphism between two $n$-marked groups $(G, s_1,...,s_n)$, $(G', s'_1,...,s'_n)$ is a group homomorphism $\phi: G\to G'$ such that $\phi(s_i)=s'_i$ for each $i=1,...,n$. \end{defn} \begin{defn} Let $\var{V}$ be a variety and $F=F(A)$ the free group over a finite alphabet $A$.
We call $F_\var{V}(A)$ a \emph{finite free object} in $\var{V}$ over $A$ if $F_\var{V}(A)$ is in $\var{V}$ and there exists a homomorphism $\sigma: (F,A)\to (F_\var{V}(A), \sigma(A))$, called the canonical projection, satisfying the following universal property: for any $\psi: (F,A)\to (G,S)\in\var{V}$, there exists $\hat{\psi}: (F_\var{V}(A), \sigma(A))\to (G,S)$ such that the following diagram commutes. \end{defn} \begin{center} \begin{tikzcd} {(F(A), A)} && {(G, S)} \\ \\ {(F_\var{V}(A),\sigma(A))} \arrow["\sigma"', from=1-1, to=3-1] \arrow["\psi", from=1-1, to=1-3] \arrow["{\hat{\psi}}"', dashed, from=3-1, to=1-3] \end{tikzcd} \end{center} \begin{rem} In \cite{MSW01} Section 2, the authors described the free object over $A$ with respect to $\var{V}$. When $\var{V}$ has a finite free object over $A$, it is exactly the $F_\var{V}(A)$ defined above. \end{rem} \begin{exam}\label{abm} For the variety $\var{Ab}_m$, $F_{\var{Ab}_m}(A)$ is isomorphic to $(\mathbb Z/m\mathbb Z)^n$ (usually denoted by $(\mathbb Z/m\mathbb Z)^A$) where $n$ is the cardinality of $A$. The kernel of the canonical projection $\sigma: (F,A)\to (F_{\var{Ab}_m}(A), \sigma(A))$ is $[F,F]F^m$. \end{exam} In \cite{MSW01}, Margolis-Sapir-Weil proved the following Lemma \ref{gpabp} and Corollary \ref{pdec}, stating that $\var{G}_p$-denseness is equivalent to $\var{Ab}_p$-denseness and is decidable. \begin{lem}[\cite{MSW01}, Corollary 3.2]\label{gpabp} Let $H$ be a finitely generated subgroup of $F=F(A)$, and let $\sigma: F(A)\to (\mathbb Z/p\mathbb Z)^A$ be the canonical projection. The following are equivalent. (1) $H$ is $\var{G}_p$-dense in $F$; (2) $H$ is $\var{Ab}_p$-dense in $F$; (3) $\sigma^{-1}\sigma(H)=F$; (4) $\sigma(H)=(\mathbb Z/p\mathbb Z)^A$. \end{lem} \begin{rem}\label{pdense} Let $N$ be the kernel of $\sigma$, that is $N=[F,F]F^p$, then $H$ satisfies the above condition (3) if and only if $H\cdot N=F$. \end{rem} \begin{cor}[\cite{MSW01}, Corollary 3.4]\label{pdec} It is decidable whether $H$ is $\var{G}_p$-dense. \end{cor} \begin{cor}[\cite{MSW01} Section 3.2 or \cite{RZ94} Theorem 4.2]\label{gpc} Let $H$ be a finitely generated subgroup of $F$, then the $\var{G}_p$-closure of $H$ is computable. \end{cor} There is a relationship between $\var{H}_p$ and its subvariety $\var{Ab}_p*\var{Ab}_{p-1}$ similar to the one in Lemma \ref{gpabp}. We will prove that $\var{H}_p$-denseness is equivalent to $\var{Ab}_p*\var{Ab}_{p-1}$-denseness (Theorem \ref{hpdense}) and that it is decidable (Corollary \ref{hpd}). A variety $\var{V}$ is called \emph{locally finite} if $F_\var{V}(A)$ exists for any finite alphabet $A$. For instance, the variety $\var{Ab}_m$ in Example \ref{abm} is locally finite. The product of two locally finite varieties is also locally finite. \begin{lem}\label{kernel} The variety $\var{Ab}_p*\var{Ab}_{p-1}$ is locally finite. Moreover, the kernel of the canonical projection $\sigma: (F(A),A)\to (F_{\var{Ab}_p*\var{Ab}_{p-1}}(A),\sigma(A))$ is $[N_{p-1},N_{p-1}]N^p_{p-1}$ where $N_{p-1}=[F,F]F^{p-1}$. \end{lem} \begin{proof} See \cite{Neu67} 21.12, 21.13 or \cite{AS} page 871. \end{proof} \begin{lem}\label{atot} Let $H$ be a finitely generated subgroup of $F$ and let $N_{p-1}=[F,F]F^{p-1}$ where $p$ is a prime number. Then $Cl_{\var{Ab}_p*\var{Ab}_{p-1}}(H)=H\cdot [N_{p-1},N_{p-1}]N^p_{p-1}$. \end{lem} \begin{proof} Let $A$ be a basis of $F$; then by Lemma \ref{kernel} the kernel of the canonical projection $\sigma: (F(A),A)\to (F_{\var{Ab}_p*\var{Ab}_{p-1}}(A),\sigma(A))$ is $[N_{p-1},N_{p-1}]N^p_{p-1}$.
Since $F_{\var{Ab}_p*\var{Ab}_{p-1}}(A)$ is in the variety $\var{Ab}_p*\var{Ab}_{p-1}$ and $\sigma$ is in the set $\Phi_{\var{Ab}_p*\var{Ab}_{p-1}}$ of all homomorphisms from $F$ to groups in $\var{Ab}_p*\var{Ab}_{p-1}$, we have $$Cl_{\var{Ab}_p*\var{Ab}_{p-1}}(H)=\bigcap_{\phi\in \Phi_{\var{Ab}_p*\var{Ab}_{p-1}}} \phi^{-1}(\phi(H))\leq \sigma^{-1}(\sigma(H)).$$ For any homomorphism $\phi\in \Phi_{\var{Ab}_p*\var{Ab}_{p-1}}$, denote also by $\phi$ the induced surjective homomorphism $(F,A)\to (\phi(F),\phi(A))$. Since $F_{\var{Ab}_p*\var{Ab}_{p-1}}(A)$ is the free object in $\var{Ab}_p*\var{Ab}_{p-1}$ over $A$, there exists $\hat{\phi}: (F_{\var{Ab}_p*\var{Ab}_{p-1}}(A), \sigma(A))\to (\phi(F),\phi(A))$ such that $\phi=\hat{\phi}\circ \sigma$. Thus $\sigma^{-1}(\sigma(H))\leq \sigma^{-1}(\hat{\phi}^{-1}(\hat{\phi}\circ\sigma(H)))= \phi^{-1}(\phi(H))$, and then $$\sigma^{-1} (\sigma(H))\leq \bigcap_{\phi\in \Phi_{\var{Ab}_p*\var{Ab}_{p-1}}} \phi^{-1}(\phi(H))=Cl_{\var{Ab}_p*\var{Ab}_{p-1}}(H).$$ So we have $Cl_{\var{Ab}_p*\var{Ab}_{p-1}}(H)=\sigma^{-1} (\sigma(H))=H\cdot [N_{p-1},N_{p-1}]N^p_{p-1}$. \end{proof} \begin{lem}\label{hptoa} Let $H$ be a finitely generated subgroup of $F$, then $H$ is $\var{H}_p$-dense in $F$ if and only if $H$ is $\var{Ab}_p*\var{Ab}_{p-1}$-dense in $F$. \end{lem} \begin{proof} Suppose $H$ is $\var{H}_p$-dense in $F$. The variety $\var{Ab}_p*\var{Ab}_{p-1}$ is a subvariety of $\var{H}_p$, so by Lemma \ref{subd} $H$ is $\var{Ab}_p*\var{Ab}_{p-1}$-dense in $F$. Conversely, assume $H$ is $\var{Ab}_p*\var{Ab}_{p-1}$-dense but not $\var{H}_p$-dense in $F$. Then by Corollary \ref{dense} there exists a surjective homomorphism $\phi: F\to G$ for some $G$ in the variety $\var{H}_p=\var{G}_p* \var{Ab}_{p-1}$ such that $\phi(H)\lneqq \phi(F)=G$. Suppose $M$ is a maximal subgroup of $G$ containing $\phi(H)$; then for the Frattini subgroup $\Phi(G)$ of $G$, which is the intersection of all maximal subgroups of $G$, we have $\phi(H)\cdot\Phi(G)\leq M$. Since $p$ is coprime with $p-1$, by the Schur-Zassenhaus theorem $G$ is a semidirect product $P\rtimes B$ for some $p$-group $P\in \var{G}_p$ and some $B\in \var{Ab}_{p-1}$. Note that $P/\Phi(P)$ is an abelian $p$-group of exponent $p$, that is, $P/\Phi(P)\in \var{Ab}_p$; hence $(P\rtimes B)/\Phi(P)\cong (P/\Phi(P))\rtimes B$ lies in $\var{Ab}_p*\var{Ab}_{p-1}$. Since $\Phi(P)\leq \Phi(G)$, $G/\Phi(G)$ is a quotient of $(P\rtimes B)/\Phi(P)$, and it also lies in the variety $\var{Ab}_p*\var{Ab}_{p-1}$. Let $\tau: G\to G/\Phi(G)$ be the natural quotient map; then $$\tau\circ\phi(H)=(\phi(H)\cdot \Phi(G))/\Phi(G)\leq M/\Phi(G),$$ which is proper in $\tau\circ\phi(F)=G/\Phi(G)$, so $H$ is not $\var{Ab}_p*\var{Ab}_{p-1}$-dense in $F$, contrary to the assumption. \end{proof} \begin{thm}\label{hpdense} Let $H$ be a finitely generated subgroup of $F$, then the following are equivalent: (1) $H$ is $\var{H}_p$-dense in $F$; (2) $H$ is $\var{Ab}_p*\var{Ab}_{p-1}$-dense in $F$; (3) $H\cdot [N_{p-1},N_{p-1}]N^p_{p-1}=F$ where $N_{p-1}=[F,F]F^{p-1}$; (4) $H$ is $\var{Ab}_{p-1}$-dense in $F$ and $H\cap N_{p-1}$ is $\var{Ab}_{p}$-dense in $N_{p-1}$. \end{thm} \begin{proof} By Lemmas \ref{atot} and \ref{hptoa}, we only need to prove (3)$\Leftrightarrow$(4).
Suppose $H\cdot [N_{p-1},N_{p-1}]N^p_{p-1}=F$. Then $H\cdot N_{p-1}=F$ since $[N_{p-1},N_{p-1}]N^p_{p-1}$ is a subgroup of $N_{p-1}$, and by Dedekind's modular law we have $$(H\cap N_{p-1})\cdot [N_{p-1},N_{p-1}]N^p_{p-1}=(H\cdot [N_{p-1},N_{p-1}]N^p_{p-1})\cap N_{p-1}=F\cap N_{p-1}=N_{p-1}.$$ So by Remark \ref{pdense}, $H$ is $\var{Ab}_{p-1}$-dense in $F$ and $H\cap N_{p-1}$ is $\var{Ab}_{p}$-dense in $N_{p-1}$. Conversely, assume $H$ is $\var{Ab}_{p-1}$-dense in $F$ and $H\cap N_{p-1}$ is $\var{Ab}_{p}$-dense in $N_{p-1}$. Then by Remark \ref{pdense}, $H\cdot N_{p-1}=F$ and $(H\cap N_{p-1})\cdot [N_{p-1},N_{p-1}]N^p_{p-1}=N_{p-1}$, and we have $$F=H\cdot N_{p-1}=H\cdot ((H\cap N_{p-1})\cdot [N_{p-1},N_{p-1}]N^p_{p-1})=H\cdot [N_{p-1},N_{p-1}]N^p_{p-1}.$$ \end{proof} \begin{lem}\label{abd} Let $H$ be a finitely generated subgroup of $F$, then $H$ is $\var{Ab}_d$-dense in $F$ if and only if for each prime factor $p$ of $d$, $H$ is $\var{Ab}_p$-dense in $F$. \end{lem} \begin{proof} Suppose $H$ is $\var{Ab}_d$-dense in $F$. Since $\var{Ab}_p$ is a subvariety of $\var{Ab}_d$, $H$ is $\var{Ab}_p$-dense by Lemma \ref{subd}. Conversely, assume $H$ is $\var{Ab}_p$-dense for each prime $p$ dividing $d$ but not $\var{Ab}_d$-dense. Then by Corollary \ref{dense} there exist $G\in \var{Ab}_d$ and a surjective homomorphism $\phi: F\to G$ such that $\phi(H)\lneqq \phi(F)=G$. The quotient group $G/\phi(H)$ is a nontrivial abelian group and so can project onto a cyclic group $\mathbb Z/p\mathbb Z$ for some prime $p$ dividing $d$. Then for the composition $\psi: F\to G\to G/\phi(H)\to \mathbb Z/p\mathbb Z$, $1=\psi(H)\lneqq \psi(F)=\mathbb Z/p\mathbb Z$, contrary to the assumption that $H$ is $\var{Ab}_p$-dense. \end{proof} By Lemma \ref{abd} and Corollary \ref{pdec}, it is decidable whether $H$ is $\var{Ab}_{p-1}$-dense in $F$ and whether $H\cap N_{p-1}$ is $\var{Ab}_{p}$-dense in $N_{p-1}$; thus by Theorem \ref{hpdense} we have \begin{cor}\label{hpd} Let $H$ be a finitely generated subgroup of $F$, then it is decidable whether $H$ is $\var{H}_p$-dense. \end{cor} \subsubsection{Computing $\var{H}_p$-closures} For two finitely generated subgroups $H,K$ of $F$ such that $H\leq K$, let $K$ be endowed with the pro-$\var{V}$ topology; we denote the $\var{V}$-closure of $H$ in $K$ by $Cl_{\var{V}}(H,K)$. The pro-$\var{V}$ topology of $K$ is usually not the restriction to $K$ of the pro-$\var{V}$ topology of $F$, but when $\var{V}$ is an extension-closed variety, then for an open subgroup $K$ these two topologies coincide (\cite{MSW01}, Proposition 1.6). This is the key for the authors of \cite{MSW01} to construct an algorithm to compute the $\var{G}_p$-closure of a subgroup. In \cite{AS}, Auinger-Steinberg proved that the $\var{V}$-closure of a finitely generated subgroup is computable for certain product varieties $\var{V}=\var{U}*\var{W}$, which applies to the case $\var{V}=\var{H}_p=\var{G}_p*\var{Ab}_{p-1}$. \begin{prop}[\cite{AS}, Proposition 4.4] Suppose that $\var{U}$ and $\var{W}$ are varieties of groups with $\var{U}$ having computable closures and $\var{W}$ being locally finite. Suppose, moreover, there is an algorithm that on input a finite set $A$ outputs generators of the kernel $K$ of the canonical projection $\rho: F(A)\to F_\var{W}(A)$. Then $\var{U}*\var{W}$ has computable closures.
\end{prop}
\begin{rem} In fact, the authors proved $$Cl_{\var{U}*\var{W}}(H)=\bigcup_{t\in T} Cl_{\var{U}}(H\cap K, K)t,$$ where $T$ is a Schreier transversal for the right cosets of $H\cap K$ in $H$, which is computable under the assumptions of the above proposition. \end{rem}
For $\var{U}=\var{G}_p$ and $\var{W}=\var{Ab}_{p-1}$, the $\var{G}_p$-closure can be computed by Corollary \ref{gpc}, $\var{W}$ is locally finite, and the kernel of the canonical projection $\rho: F(A)\to F_\var{W}(A)$ is the subgroup $N_{p-1}=[F,F]F^{p-1}$ mentioned above, whose generators can be expressed algorithmically as words over the alphabet $A$. We therefore have the following.
\begin{cor} Let $H$ be a finitely generated subgroup of $F$. Then the $\var{H}_p$-closure of $H$ is computable. \end{cor}
\subsection{Supersolvable closures of subgroups}\label{t2} Let $\mathbb{P}$ be the set of all prime numbers. For a nonempty subset $\pi\subset \mathbb{P}$, denote by $\pi'$ the complement of $\pi$ in $\mathbb{P}$; if $\pi$ consists of a single element $p$, we write $p'$ for $\pi'$. We denote the set of all prime numbers dividing $n$ by $\pi(n)$, and we write $\pi(G)$ for $\pi(|G|)$ when $G$ is a finite group.
\begin{defn}[$\pi$-group] Let $\pi$ be a nonempty subset of $\mathbb{P}$. A finite group $G$ is called a $\pi$-group if $\pi(G)\subset \pi$. \end{defn}
\begin{defn}[$\pi$-core] For a nonempty subset $\pi\subset \mathbb{P}$, the $\pi$-core $O_{\pi}(G)$ of $G$ is the unique largest normal $\pi$-subgroup of $G$. If $\pi$ consists of a single element $p$, we denote $O_{\{p\}}(G)$ by $O_p(G)$ and call it the $p$-core of $G$. \end{defn}
\begin{rem} The $p$-core $O_p(G)$ is the intersection of all Sylow $p$-subgroups of $G$. If $p\notin \pi(G)$, then $O_p(G)=1$ while $O_{p'}(G)=G$. \end{rem}
\begin{lem}\label{ff} Let $G$ be a finite group. Then $\bigcap_{p\in\pi(G)}O_{p'}(G)$ is trivial. \end{lem}
\begin{proof} For each $p\in\pi(G)$, the order of $\bigcap_{q\in\pi(G)}O_{q'}(G)$ divides the order of $O_{p'}(G)$, which is coprime to $p$. Hence the order of the intersection is coprime to every prime factor of $|G|$, so $\bigcap_{p\in\pi(G)}O_{p'}(G)$ is trivial. \end{proof}
\begin{lem}\label{fff} Let $M$ be a subgroup of a finite group $G$. Then $M=\bigcap_{p\in\pi(G)}MO_{p'}(G)$. \end{lem}
\begin{proof} Suppose $\pi(G)=\{p_1,...,p_k\}$, so that $|G|=p_1^{n_1}\cdots p_k^{n_k}$, $|M|=p_1^{m_1}\cdots p_k^{m_k}$ and $|O_{p'_i}(G)|=p_1^{\alpha_{i1}}\cdots p_i^{\alpha_{ii}}\cdots p_k^{\alpha_{ik}}$ for $i=1,...,k$. Note that $\alpha_{ii}=0$, since $p_i$ is coprime to $|O_{p'_i}(G)|$. The order of $MO_{p'_i}(G)$ divides $$|M||O_{p'_i}(G)|=p_1^{m_1+\alpha_{i1}}\cdots p_{i-1}^{m_{i-1}+\alpha_{i,i-1}}p_i^{m_i}p_{i+1}^{m_{i+1}+\alpha_{i,i+1}}\cdots p_k^{m_k+\alpha_{ik}},$$ so the order of $\bigcap_{i=1}^k MO_{p'_i}(G)$ divides the greatest common divisor $$\gcd(|M||O_{p'_1}(G)|,\cdots,|M||O_{p'_k}(G)|)=p_1^{m_1}\cdots p_k^{m_k}=|M|.$$ Thus $M=\bigcap_{i=1}^k MO_{p'_i}(G)$, because $\bigcap_{i=1}^k MO_{p'_i}(G)$ has the same order as its subgroup $M$. \end{proof}
\begin{defn}[$p$-nilpotent] Let $p$ be a prime number. A finite group $G$ is said to be $p$-nilpotent if for each chief factor $M/N$ of $G$, either $M/N$ is a $p'$-group or $M/N$ is in the center of $G/N$. For a finite group $G$, we denote the product of all normal $p$-nilpotent subgroups of $G$ by $F_{p}(G)$; it is a characteristic subgroup of $G$. \end{defn}
\begin{lem}[\cite{Guo}, Theorem 1.8.13 (2), page 41]\label{fpg} Let $G$ be a finite group. Then $F_p(G)/O_{p'}(G)=O_p(G/O_{p'}(G))$.
\end{lem}
\begin{lem}[\cite{Guo}, Example 4, page 98]\label{abp} Let $G$ be a finite group. Then $G$ is supersolvable if and only if $G/F_p(G)$ lies in the variety $\var{Ab}_{p-1}$ for each $p\in \pi(G)$. \end{lem}
\begin{lem}\label{fp} Suppose $G$ is a finite supersolvable group. Then $G/O_{p'}(G)$ lies in the variety $\var{H}_p$ for each $p\in \pi(G)$. \end{lem}
\begin{proof} We have an exact sequence of groups $$1\longrightarrow F_p(G)/O_{p'}(G)\longrightarrow G/O_{p'}(G)\longrightarrow G/F_p(G)\longrightarrow 1.$$ Here $G/F_p(G)$ lies in the variety $\var{Ab}_{p-1}$ by Lemma \ref{abp}, and $F_p(G)/O_{p'}(G)$ is a $p$-group by Lemma \ref{fpg}. Hence $G/O_{p'}(G)$ is an extension of a $p$-group by a group in $\var{Ab}_{p-1}$, and so lies in $\var{H}_p=\var{G}_p*\var{Ab}_{p-1}$. \end{proof}
\begin{thm}\label{mainthm} Let $H$ be a subgroup of $F$. Then $Cl_{\var{Su}}(H)=\bigcap_{p\in\mathbb{P}}Cl_{\var{H}_p}(H)$. \end{thm}
\begin{proof} We have $Cl_{\var{Su}}(H)\subset Cl_{\var{H}_p}(H)$, since $\var{H}_p$ is a subvariety of $\var{Su}$ for any $p\in \mathbb{P}$. If there exists an element $g\in \bigcap_{p\in\mathbb{P}}Cl_{\var{H}_p}(H)-Cl_{\var{Su}}(H)$, then $g$ satisfies both of the following two conditions: (1) for any $p\in \mathbb{P}$, any $S\in \var{H}_p$ and any homomorphism $\psi: F\to S$, $\psi(g)\in \psi(H)$; (2) there exist $G\in \var{Su}$ and a homomorphism $\phi: F\to G$ such that $\phi(g)\notin \phi(H)$. Denote the quotient homomorphism $G\to G/O_{p'}(G)$ by $\tau_p$ for $p\in \pi(G)$. By Lemma \ref{fp}, $G/O_{p'}(G)$ lies in the variety $\var{H}_p$, thus $\tau_p\circ\phi(g)\in \tau_p\circ\phi(H)$ by condition (1) above. We have $\phi(g)\in \tau_p^{-1}(\tau_p(\phi(H)))=\phi(H)O_{p'}(G)$ for each $p\in \pi(G)$, and then $$\phi(g)\in \bigcap_{p\in \pi(G)} \phi(H)O_{p'}(G)=\phi(H)$$ by Lemma \ref{fff}, contrary to condition (2) above. \end{proof}
\begin{cor}\label{mainres} Let $H$ be a finitely generated subgroup of $F$. Then the $\var{Su}$-closure of $H$ is finitely generated. \end{cor}
\begin{proof} Fix a basis $A$ of $F$. By Corollary \ref{hpclo}, each $\var{H}_p$-closure of $H$ lies in the $A$-fringe $\mathcal{O}_A(H)$ of $H$, which is a finite set. Then by Theorem \ref{mainthm}, the $\var{Su}$-closure of $H$ is the intersection of finitely many finitely generated subgroups, and so it is itself finitely generated. \end{proof}
\noindent\textbf{Acknowledgements.} The second named author acknowledges partial support from the National Natural Science Foundation of China (Grant No. 12271385).
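\noindent\textbf{Example.} To illustrate the criterion used in Lemma \ref{abd}, let $F=F(\{a,b\})$ and $H=\langle a^2b,\, ab^2\rangle$. By Remark \ref{pdense}, $H$ is $\var{Ab}_p$-dense in $F$ if and only if $H\cdot [F,F]F^p=F$, i.e., if and only if the images of the generators of $H$ span $F/[F,F]F^p\cong (\mathbb{Z}/p\mathbb{Z})^2$; this holds exactly when the exponent matrix $\left(\begin{smallmatrix} 2 & 1\\ 1 & 2 \end{smallmatrix}\right)$ is invertible modulo $p$. Its determinant is $3$, so $H$ is $\var{Ab}_2$-dense but not $\var{Ab}_3$-dense in $F$, and hence, by Lemma \ref{abd}, not $\var{Ab}_6$-dense.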
{ "timestamp": "2022-09-20T02:23:41", "yymm": "2209", "arxiv_id": "2209.08720", "language": "en", "url": "https://arxiv.org/abs/2209.08720" }
\section{Introduction} \vspace{-5mm} Driving has seen numerous phases of evolution, from steam-propelled vehicles to almost fully autonomous ones. Throughout these developments, one phenomenon has remained constant in the daily environment --- intense sunlight that can obstruct the view of a vision sensor (either eyes or cameras) while maneuvering a vehicle. When the sun descends toward the horizon, sun glare seeps below a car's visor and visually impairs vision sensors, causing difficulty in navigating everyday traffic. This temporary blindness makes it difficult to sense other cars and traffic signs, and often leads to accidents. A recent report \cite{DoTreport} by the Department of Transportation states that as many as 9,000 glare-related accidents occur each year, making sun glare one of the two most prominent environment-related causes of crashes. The combination of harsh sun glare with common driving risk factors contributes to more crashes and congestion in day-to-day driving, creating setbacks for the deployment of new automotive technologies. \vspace{-3mm}
There has been an upsurge of autonomous vehicles driving alongside everyday drivers, such as Tesla or Google's Waymo. These self-driving vehicles make their decisions through object detection and classification algorithms, which allow the autonomous system to ``see'' objects (such as traffic signs on the road), classify them, and make a real-time decision based on machine learning~\cite{bian2022machine} interpretations of the seen object. The functionality of these algorithms heavily depends on rich sets of data that are collected from real-world scenarios, annotated, and fed in to ``teach'' algorithms what they may experience on the road. One set of data frequently used to teach algorithms within autonomous cars is traffic signs---critical for navigating everyday traffic. While several publicly available datasets focus on traffic signs in regular weather conditions, there are {\em few traffic sign datasets focusing on traffic signs with sun glare}. Our experiments indicate that when vehicles are trained to recognize objects using data without sun glare, real-time algorithms within cars may fail to detect traffic signs and other objects when blinded by high-intensity visual noise, leading to catastrophic outcomes. \vspace{-3mm}
Datasets containing traffic signs with sun glare are often internal to autonomous driving companies and consequently are not publicly available for wider research purposes. Since existing public datasets (such as LISA) do not contain any sun glare at all, there is an emerging need to create a public dataset with a wide variety of traffic signs under sun glare interference to fill this gap.
\renewcommand{\thempfootnote}{\fnsymbol{mpfootnote}} \begin{table*}[ht] \begin{minipage}{\textwidth} \caption[xxx]{Comparison of Existing Traffic Sign Datasets.} \vspace{-3mm} \begin{center} \scalebox{1.1}{ \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Dataset & Images & Image Resolution & Classes & Tasks\footnote{We target the detection and recognition (classification) tasks here.} & Features & Country & Year \\ [0.5ex] \hline\hline STS~\cite{larsson2011correlating} & 3,777 & 1280$\times$960 & 20 & Both & w/ general occlusion & Sweden & 2011\\ \hline GTSRB~\cite{stallkamp2012man} & 51,839 & 15$\times$15 $\sim$ 250$\times$250 & 43 & Recognition & w/ general occlusion & Germany & 2011\\ \hline GTSDB~\cite{houben2013detection} & 900 & 1360$\times$1024 & 43 & Detection & w/ general occlusion & Germany & 2013\\ \hline LISA~\cite{mogelmose2012vision} & 6,610 & 640$\times$680 $\sim$ 1024$\times$522 & 49 & Both & w/ general occlusion & USA & 2012\\ \hline TT-100K~\cite{zhu2016traffic} & 100,000\footnote{Only 10,000 images contain traffic signs.} & 2048$\times$2048 & 221 & Both & w/ general occlusion & China & 2016\\ \hline MTSD~\cite{ertler2020mapillary} & 100,000 & 1000$\times$1000 $\sim$ 2048$\times$2048 & 313 & Both & w/ general occlusion & Worldwide & 2019\\ \hline DFG~\cite{tabernik2019deep}\footnote{The DFG Traffic Sign Dataset uses polygon annotations instead of bounding box annotations.} & 6,957 & 720$\times$576 $\sim$ 1920$\times$1080 & 200 & Both & w/ general occlusion & Slovenia & 2019\\ \hline \hline \textbf{\textbf{GLARE}{}} & 2,157 & 720$\times$480 $\sim$ 1920$\times$1080 & 41 & Both & \textbf{w/ heavy glares} & USA & 2022\\ \hline \end{tabular} \label{tab:comparison}} \end{center} \end{minipage} \end{table*} \vspace{-3mm}
\textbf{Contributions.} As an addition to the LISA Traffic Sign dataset, we establish the GLARE dataset --- a collection of images of traffic signs subject to heavy visual interference from strong sunlight. GLARE is a publicly available set of images for training real-time detection algorithms and more. The dataset and the evaluated algorithms are intended to act as a baseline for researchers developing, training, and examining their own models. The contributions of this work are threefold: \vspace{-6mm} \begin{itemize} \item We establish a fine-grained traffic sign dataset, GLARE, with abundant realistic glare on or near the traffic sign areas. To our knowledge, GLARE is the first traffic sign dataset with detailed annotations of glare, covering a rich set of glare conditions from daily driving. Compared to commonly used datasets (e.g., LISA~\cite{mogelmose2012vision}, GTSDB~\cite{houben2013detection}, and TT-100K~\cite{zhu2016traffic}), GLARE provides focused observations of traffic signs under glare, rather than sparse instances of general occlusion mixed into the data. We follow the standard format to annotate, calibrate, and reorganize the dataset for a wide range of research tasks (e.g., traffic sign detection, image classification, and temporal localization). \item We also release the full step-by-step procedure used to create the dataset, and we analyze its statistical features. \item We further showcase the research potential of the GLARE dataset by testing it on a large group of benchmarks. Specifically, we observe that the performance of mainstream traffic sign detection algorithms degrades drastically on the GLARE test set, whereas training with the GLARE dataset yields a significant performance gain.
\end{itemize} \textbf{Organization.} The rest of the paper is organized as follows: Section II summarizes existing related work on traffic sign datasets and cutting-edge object detection models. Section III details the dataset, including its collection, annotation, and statistics. Section IV reports experiments that test the performance of mainstream traffic sign detection algorithms with and without GLARE in the training phase. Section V concludes the paper.\footnote{GLARE is available at \url{https://github.com/NicholasCG/GLARE\_Dataset}.} \vspace{-5mm}
\section{Related Work} \vspace{-2mm} \subsection{Traffic Sign Datasets} \vspace{-3mm} With the advancement of autonomous driving, there has been an emphasis on collecting data covering all types of road conditions, signs, and other factors relevant to driving, leading to a plethora of datasets in the community specific to traffic sign detection. \vspace{-4mm}
Several datasets focus on traffic signs found around the world, each with variations. For example, the German Traffic Sign Recognition Benchmark~\cite{houben2013detection} focuses on traffic signs from Germany and captured images in different environments under varied weather conditions. Others that follow a similar pattern include the Tsinghua-Tencent 100K dataset~\cite{zhu2016traffic}, the Swedish Traffic Sign dataset~\cite{larsson2011correlating}, and the Belgium Traffic Signs dataset~\cite{timofte2014multi}. It is advantageous for the computer vision community to have access to traffic signs from around the world, but there is a significant drawback common to public traffic sign datasets: a lack of sun glare within their images. \vspace{-3mm}
The use of convolutional neural networks (CNNs) is prevalent throughout traffic sign datasets, often for the tasks of detection and recognition. To set the standard for these tasks, baselines are often attached to datasets in the form of varied CNNs. The Mapillary dataset~\cite{ertler2020mapillary}, for example, uses a Faster regional-based convolutional neural network (R-CNN) detector to produce mean average precision (mAP) results over all of its classes. The DFG Traffic Sign dataset~\cite{tabernik2019deep} is another example that uses such techniques to establish a baseline, utilizing a Faster R-CNN and a Mask R-CNN to provide mAP values in the upper 90s. Although the dataset includes traffic-sign instances with synthetic distortions that may resemble sun glare, these images are not comparable to those with natural sun glare, because generalization performance may suffer when training with datasets that lack natural sun glare. This phenomenon is found often throughout traffic sign datasets: models trained on data without obscuring conditions, or with only synthetic ones such as sun glare, are usually unsatisfactory when tested in real driving scenarios. \vspace{-3mm}
Severe conditions, such as sun glare or heavy rain, impede the visibility of traffic signs while driving. Just as they hinder human drivers, they also interfere with algorithmic vision. The CURE-TSD-Real dataset~\cite{temel2019traffic}, comprised of traffic sign images in simulated heavy road conditions, is an example of a dataset with severe conditions, which resulted in a 29\% drop in average precision. This motivates us to investigate the possibility that harsh sun glare causes a drop in algorithmic performance as well.
\vspace{-3mm} The GLARE dataset is intended as an extension of the LISA dataset~\cite{mogelmose2012vision}, one of the most commonly used American traffic sign datasets, with an emphasis on large variations within urban landscapes. The LISA dataset is comprised of videos and stand-alone images of traffic signs, amounting to about 6,610 images and 7,855 annotations. Source data for the LISA dataset comes both in color and in grayscale, and does not include images with excessive sun glare. The LISA dataset answered the need for a public dataset with US-based traffic signs and notably provides full traceability by releasing complete annotations of all images together with all associated tracks. We provide a full comparison of the existing related traffic sign datasets with GLARE in Table~\ref{tab:comparison}, which shows that GLARE is the latest traffic sign dataset (a two-year gap from DFG and MTSD, and a nine-year gap from the original LISA) and that it is formed by high-resolution images with heavy/harsh glare on traffic signs. \vspace{-6mm}
\subsection{Traffic Sign Classification} \vspace{-4mm} One of the most popular applications of the aforementioned datasets is traffic sign classification, where tremendous efforts span from statistical learning to the deep learning paradigm. For example, Soendoro and Supriana~\cite{soendoro2011traffic} first adopted SVMs with sparse representations to recognize the class of a traffic sign in an image. With the rise of deep learning, convolutional neural networks began to dominate recognition/classification performance in the traffic sign domain. Specifically, on the GTSRB dataset, a large number of CNN variants~\cite{mao2016hierarchical} show a powerful ability to generalize, with classification accuracy on the testing set even better than the performance of human experts (e.g., CNNs with spatial transformers~\cite{arcos2018deep} achieve roughly 99.7\% top-1 accuracy). Note that the reported high classification accuracy is based on cropped traffic sign images, which can be obtained via a specifically designed detection task. \vspace{-6mm}
\subsection{End-to-End Traffic Sign Detection} \vspace{-4mm} Originally, detection and classification were two independent tasks, with classification built upon properly detected images (i.e., the traffic sign is located and extracted intact). With the rise of CNNs, their powerful performance in image classification/recognition~\cite{krizhevsky2012imagenet} rapidly transferred to the object detection domain. Furthermore, it has become a consensus that the family of CNNs is capable of detecting a bounding box for a specific object while simultaneously classifying its category. The well-known R-CNN/Fast R-CNN~\cite{girshick2014rich,girshick2015fast} first generate potential bounding boxes in the frames and then classify the objects only within these bounding boxes. However, the final detection performance depends on the performance of multiple stages in a complex pipeline (i.e., pre-processing, classification, and post-processing to re-score the proposed bounding boxes), and the whole process is slow as well.
To address the efficiency and complexity issues, \cite{redmon2016you} proposes You Only Look Once (YOLO), the first edition of a series (v1 to v5) that treats object detection as a single regression task, directly establishing the connection between image pixels, bounding box coordinates, and labels with probabilities. To further improve performance, YOLOX~\cite{ge2021yolox} incorporates an anchor-free design and several cutting-edge detection techniques (e.g., a decoupled head and a leading label assignment strategy). \vspace{-3mm}
Another branch of detection strategies leverages the popular transformer~\cite{carion2020end} encoder-decoder architecture, removing complicated hand-designed components such as non-maximum suppression and anchor generation while optimizing a global loss that enables unique predictions via bipartite matching. Similar ideas have been brought into the traditional R-CNN architecture: a Swin Transformer~\cite{liu2021swin} backbone shows a great performance gain when replacing ResNet50. Almost all the aforementioned detection algorithms rely on supervised learning with labeled traffic signs, and it is rarely considered that strong noise (e.g., sun glare) in the testing phase may degrade detection performance. \vspace{-6mm}
\section{GLARE Dataset} \vspace{-4mm} This section presents the GLARE traffic sign dataset, a sun-glare-focused dataset to assist researchers and developers in building real-time traffic sign detection and classification systems for autonomous driving under sun glare conditions. The dataset includes 2,157 images and annotations, each image containing a single traffic sign annotation. The dataset can be used for classification, detection, and concurrent detection and classification. \vspace{-6mm}
\subsection{Video Collection and Processing} \vspace{-4mm} The initial collection process started with three dashboard cameras recording approximately 38 hours of footage around the Orlando area. Two cameras were forward facing, with one filming at 1920$\times$1080 (1080p), and the rearward-facing one filmed at 720$\times$480 (480p). In total, the cameras filmed 463 initial videos comprising 40 hours and 25 minutes of footage. \vspace{-3mm}
The first step of the video processing was to remove all videos that did not simultaneously contain both sun glare and traffic signs. This resulted in 163 videos that contained some amount of sun glare. The second step was to extract the sections of video with sun glare, referred to as {\em clips}. Each clip contained a continuous presence of sun glare, with about half a second of extra time at the start to allow for ease of finding the beginning of the sun glare. Since only footage that concurrently contains sun glare and traffic signs matters, any clips that did not contain traffic signs were discarded during a follow-up screening step. Following a similar procedure as the LISA dataset~\cite{mogelmose2012vision}, these short videos are referred to as {\em tracks}. 189 tracks were used in the creation of the GLARE dataset, totaling 18 minutes and 11 seconds of footage. The tracks were organized by their original source video, with 33 original source videos being used in total to produce the GLARE dataset. \vspace{-6mm}
\subsection{Annotation Process} \vspace{-4mm} The image annotation process was separated into two main steps: bounding box localization of traffic signs, and bounding box approval with cleanup.
The first step was completed by two individuals at the same time with a single open-source tool~\cite{videolabel} to allow for efficient labeling. The second step was completed after the initial processing of all the images, as a quality assurance step, by two individuals who worked on labeling and processing the initial bounding boxes in the LISA dataset format~\cite{mogelmose2012vision}. The automatic bounding box tracking algorithms available in the tool were Re3~\cite{gordon2018re3} and CMT (Consensus-based Matching and Tracking of Keypoints for Object Tracking)~\cite{Nebehay2015CVPR}; we used Re3 for all our annotations due to its stable tracking at all sizes. When annotations were saved, each exported image had a single associated annotation. \vspace{-3mm}
\noindent{\bf Traffic Sign Localization.} In the first step, tracks were processed together based on their original source video. Each track was played to completion, with bounding box labeling starting on the first frame with sun glare whose index was a multiple of 5. The process continued with the current bounding box being saved on every subsequent multiple of 5 until the track finished playing or sun glare was no longer on the screen. When labeling traffic signs using the bounding box localization tool, the annotator could directly choose the classification of the traffic sign, for increased efficiency. After the initial labeling, the annotation tool would continue annotating automatically until the annotator deemed the automatic annotation to have drifted too far; the annotator would then delete and reapply the bounding box, and continue annotating until all the tracks were processed. For each traffic sign in a track, no more than 30 annotations of that traffic sign were saved, to decrease overexposure of that sign.
\noindent{\bf Bounding Box Approval and Cleanup.} After a track was labeled, the annotations were reviewed and either approved or rejected. Any bounding boxes with background noise that could be removed by manual selection were removed and relabeled. After all tracks and annotations were processed, the annotations for each video were exported in a single CSV file (similar to the LISA dataset) for further processing, as demonstrated by Figure~\ref{fig:processer}. \begin{figure}[ht] \centering \vspace{-3mm} \includegraphics[width=.7\linewidth]{Images/image_selector.png} \caption{Bounding box processing and exporting.} \vspace{-2mm} \label{fig:processer} \end{figure} After all the tracks were processed and exported by original source video, the annotations were further processed to remove previously uncaught errors and to extract statistical information from the entire dataset. Any bounding box that did not localize a sign or that contained significant background noise was rejected, and any improperly labeled annotations were renamed. After removing the improper annotations, the remaining annotations were categorized based on whether the traffic signs were covered in any way, and whether they were on the current road or a side road~\cite{mogelmose2012vision}. These ``Occluded'' and ``On another road'' annotations~\cite{mogelmose2012vision} were then pooled. \vspace{-7mm}
\subsection{Dataset Statistics} \vspace{-4mm} The GLARE dataset contains 2,157 bounding box annotations and associated images distributed across 41 classes. Figure~\ref{fig:counts} shows the distribution of annotations per class. The annotations were created from multiple videos to ensure variety in location and glare conditions.
For each track in each source video, a maximum of 30 frames per traffic sign class was allowed, to minimize over-exposure of traffic signs in specific positions and sun glare conditions. Figure~\ref{fig:annot_counts} shows the distribution of annotations across the 33 processed source videos. The size of the bounding box annotations varies between 6 $\times$ 14 and 137 $\times$ 178 pixels, and the size of the images is either 810 $\times$ 540 or 937 $\times$ 540 pixels. The dataset works with the existing scripts released alongside the LISA dataset for annotation extraction and splitting~\cite{mogelmose2012vision}. \begin{figure}[ht] \vspace{-5mm} \begin{subfigure}{0.24\textwidth} \centering \includegraphics[width=\linewidth]{Images/ann_counts.png} \caption{Traffic sign classes.} \label{fig:counts} \end{subfigure} \hfill \begin{subfigure}{0.24\textwidth} \centering \includegraphics[width=\linewidth]{Images/ann_p_video.png} \caption{Annotations per source video.} \label{fig:annot_counts} \end{subfigure} \caption{Statistics/Distributions of the GLARE dataset.} \label{fig:stats} \end{figure} The types of visual interference labeled as sun glare can be broadly grouped into four categories. The categories are subjective, but they are recorded to give a better understanding of how we evaluated sun glare during the initial video processing for the dataset. Examples of each of the following categories can be seen in Figure~\ref{fig:image2}. The first category is a clear sun without any significant additional bright cloud noise or brightness interference from the camera: the sun appears as a bright ball, without obstruction by either clouds or other objects. In an upcoming section, we will describe a naïve detector for this type of sun glare to improve traffic sign detection results. The second category is a visible sun with additional clouds that add to the overall brightness of the image. The third category is minimal to no visible sun due to cloud interference: although the sun is not visible in the frame, there is still visual interference that makes traffic signs less visible than in clear conditions, decreasing detection performance. The fourth category is sun glare due to other interference; the sun being visible is not a requirement, as there is visual interference due to the camera settings. Either way, the visual interference appears similar to the interference caused by the other types of sun glare. The images themselves have not been labeled by type of sun glare, due to the subjective nature of the categories and the fact that some images can fit into multiple categories.
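Since the annotations follow the LISA format, statistics such as those in Figures~\ref{fig:counts} and~\ref{fig:annot_counts} can be recomputed directly from the released files. The following is a minimal sketch of ours (it assumes LISA-style semicolon-separated columns such as `Annotation tag' and the corner coordinates; the released files may differ in detail) that tallies per-class counts and bounding-box sizes:
\begin{verbatim}
import csv
from collections import Counter

def annotation_stats(csv_path):
    """Tally per-class counts and bounding-box sizes from a LISA-style
    annotation file (semicolon-separated, one row per bounding box)."""
    counts, sizes = Counter(), []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f, delimiter=";"):
            counts[row["Annotation tag"]] += 1
            w = int(row["Lower right corner X"]) - int(row["Upper left corner X"])
            h = int(row["Lower right corner Y"]) - int(row["Upper left corner Y"])
            sizes.append((w, h))
    return counts, sizes

# Hypothetical usage with a LISA-convention file name:
# counts, sizes = annotation_stats("allAnnotations.csv")
# print(counts.most_common(5), min(sizes), max(sizes))
\end{verbatim}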
\begin{figure}[ht] \begin{subfigure}{0.22\textwidth} \includegraphics[width=\linewidth]{Images/annot_glare_pics/annot_1.png} \caption{Sun with other interference} \end{subfigure} \hfill \begin{subfigure}{0.24\textwidth} \includegraphics[width=\linewidth]{Images/annot_glare_pics/annot_2.png} \caption{Clear sun without clouds} \end{subfigure} \hfill \begin{subfigure}{0.24\textwidth} \includegraphics[width=\linewidth]{Images/annot_glare_pics/annot_3.png} \caption{Clouds with non-visible sun} \end{subfigure} \hfill % \begin{subfigure}{0.24\textwidth} \includegraphics[width=\linewidth]{Images/annot_glare_pics/annot_4.png} \caption{Clear sun with clouds} \end{subfigure} \caption{Examples of images from the GLARE dataset with bounding boxes highlighted.} \label{fig:image2} \end{figure} \vspace{-4mm}
\section{Benchmarks} \vspace{-4mm} We tested multiple state-of-the-art methods for detecting traffic signs in sun glare conditions. For each method, two models are trained: one on a subset of the GLARE dataset with an 80/20 split for training (1,725 images) and validation (432 images), and one on the LISA dataset\footnote{For the models trained on LISA, the training set is the entire LISA dataset (7,855 images), and the validation set is the subset of the GLARE dataset whose classes lie within the set of classes of the LISA dataset (1,725 images).}. \vspace{-3mm}
\noindent{\bf Metrics.} The scoring metric used for comparing the performance of the models is the mean average precision (mAP) over the classes, as defined in COCO~\cite{lin2014microsoft} and implemented in Ultralytics' YOLOv5~\cite{glenn_jocher_2022_6222936} and OpenMMLab~\cite{mmdetection}. For each image, we produce multiple predicted bounding boxes and compare each predicted box against the ground truth of the traffic sign in the image. If the predicted bounding box's Intersection over Union (IoU) with the ground truth is over 0.5, then the prediction is counted as positive. In the case of multiple predictions for one ground truth bounding box, only the prediction with the highest IoU is counted. The Average Precision (AP) is then calculated as the area under the precision-recall curve; over all classes, we calculate mAP$_{0.5}$ as the mean over all classes at an IoU threshold of 0.5, and mAP$_{0.5:0.95}$ as the mean over all classes over IoU thresholds ranging from 0.5 to 0.95 with a step of 0.05~\cite{lin2014microsoft}. A minimal sketch of this computation is given before Table~\ref{table:benchmarks}. \vspace{-6mm}
\subsection{Configured Methods} \vspace{-4mm} To demonstrate the necessity and viability of the GLARE dataset, we experimented with six state-of-the-art methods, with the following detailed \textbf{fine-tuned} hyper-parameter settings: \vspace{-3mm} \begin{itemize}[leftmargin=*] \item \textbf{Faster-RCNN}~\cite{Ren_2017} with a \textbf{ResNet50}~\cite{resnet} backbone: We used batch size 2 with a model pretrained on COCO as the initial weights, and trained for 12 epochs with an initial learning rate of $2.5\times10^{-3}$ and a step decay by 10 after the 8th and 11th epochs. \item \textbf{Faster-RCNN} with a \textbf{Swin Transformer}~\cite{liu2021swin} backbone: We trained with batch size 1 for GLARE and 4 for LISA over 20 epochs. The learning rate was $1\times10^{-4}$ with an AdamW optimizer, decayed with a cosine annealing scheduler down to a minimum learning rate of $1\times10^{-7}$. The learning rate had a linear warmup over 10 epochs with a warmup ratio of $0.1$.
\item \textbf{YOLOv5}~\cite{glenn_jocher_2022_6222936}: We trained the small model for 300 epochs, as our dataset is meant for real-time applications, with the LISA model stopping after 150 epochs due to no improvement, and used a batch size of 64. We trained with an initial learning rate of $1\times10^{-2}$, with 5 warmup epochs and a linear decay over all epochs. \item \textbf{YOLOX}: We trained the small model for consistency with YOLOv5, for 300 epochs with a batch size of 8 and SGD with Nesterov acceleration. We trained with an initial learning rate of $1.25\times10^{-3}$ with a cosine annealing decay over 250 epochs for the GLARE dataset and 285 epochs for LISA, down to a final learning rate of $6.25\times10^{-5}$. The learning rate had an exponential warmup over 5 epochs with a warmup ratio of 1. \item \textbf{Deformable DETR}~\cite{zhu2021deformable}: We trained with a batch size of 2 and an AdamW optimizer for 100 epochs, with a gradient clipping norm of $0.1$. The initial learning rate was $2\times10^{-4}$ with a step decay by 10 after the 85th epoch for the GLARE dataset and after the 40th epoch for LISA. \item \textbf{TOOD}~\cite{feng2021tood}: We trained with a batch size of 2 over 100 epochs for GLARE and 30 epochs for LISA. We trained with an initial learning rate of $1.25\times10^{-3}$ with a step decay after the 16th and 22nd epochs. The learning rate had a linear warmup over 500 iterations with a warmup ratio of $1\times10^{-3}$. \end{itemize} \vspace{-3mm}
The YOLOv5 models were trained on Ultralytics' own testing platform, and the rest of the architectures were trained and validated using OpenMMLab's MMDetection toolbox~\cite{mmdetection}. All architectures used the provided training and testing pipelines for images, so as not to bias the detection results toward either dataset. Unless stated otherwise, the initial weights during training were random and training was performed using Stochastic Gradient Descent (SGD). The only alterations to the initially given parameters were the number of epochs and the learning rate configuration, so as to ensure the convergence of each model. All of the architectures were trained on a single GPU. \vspace{-7mm}
\subsection{Experimental Results} \vspace{-4mm} The results of testing on the GLARE dataset for baseline models trained on the GLARE dataset versus trained on the LISA dataset are shown in Table~\ref{table:benchmarks}\footnote{The architectures are listed by the initial release of the associated publication, or of the code itself if no publication is available.}.
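To make the evaluation protocol concrete, the following is a minimal sketch of the per-class AP computation described in the Metrics paragraph. This is our simplified illustration only; the numbers we report come from the YOLOv5 and MMDetection implementations, which interpolate the precision-recall curve somewhat differently.
\begin{verbatim}
import numpy as np

def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def average_precision(preds, gts, iou_thr=0.5):
    """AP for one class. preds: list of (image_id, score, box);
    gts: dict mapping image_id to a list of ground-truth boxes."""
    preds = sorted(preds, key=lambda p: -p[1])          # best-scoring first
    used = {img: [False] * len(b) for img, b in gts.items()}
    n_gt = sum(len(b) for b in gts.values())
    tp = np.zeros(len(preds))
    for i, (img, _, box) in enumerate(preds):
        ious = [iou(box, g) for g in gts.get(img, [])]
        j = int(np.argmax(ious)) if ious else -1
        # only the first (highest-scoring) match above the IoU
        # threshold counts as positive; the rest are false positives
        if j >= 0 and ious[j] >= iou_thr and not used[img][j]:
            tp[i], used[img][j] = 1.0, True
    recall = np.cumsum(tp) / max(n_gt, 1)
    precision = np.cumsum(tp) / np.arange(1, len(preds) + 1)
    # area under the precision-recall curve (simple step integration)
    return float(np.sum(np.diff(np.concatenate(([0.0], recall))) * precision))
\end{verbatim}
mAP$_{0.5}$ is then the mean of this AP over all classes, and mAP$_{0.5:0.95}$ further averages it over the IoU thresholds $0.5, 0.55, \ldots, 0.95$.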
\begin{table}[ht] \centering \caption{Detection results of trained architectures when tested on the GLARE dataset.} \scalebox{1}{ \begin{tabular}{@{}lrrrr@{}} \toprule & \multicolumn{2}{c}{mAP$_{0.5}$} & \multicolumn{2}{c}{mAP$_{0.5:0.95}$} \\\midrule & GLARE & \multicolumn{1}{l|}{LISA } & GLARE & LISA \\ \midrule Faster-RCNN$_{\textit{ResNet50}}$ & 86.00 & \multicolumn{1}{r|}{32.10} & 58.46 & 16.41 \\ \midrule Faster-RCNN$_{\textit{SwinT-Base}}$ & 90.80 & \multicolumn{1}{r|}{43.80} & 60.26 & 20.39 \\ \midrule YOLOv5 & 87.40 & \multicolumn{1}{r|}{20.90} & 60.70 & 11.10 \\ \midrule YOLOX & 89.60 & \multicolumn{1}{r|}{17.70} & 60.90 & 9.84 \\ \midrule Deformable DETR & 87.30 & \multicolumn{1}{r|}{37.70} & 52.77 & 18.20 \\ \midrule TOOD & 88.20 & \multicolumn{1}{r|}{43.60} & 58.63 & 20.75 \\ \bottomrule \end{tabular}} \label{table:benchmarks} \end{table} \vspace{-2mm}
For the models trained on our dataset, the average mAP is 88.22 for IoU = 0.5 and 58.62 for IoU = [0.5, 0.95]. For the models trained on the LISA dataset, the average mAP is 32.47 for IoU = 0.5 and 16.12 for IoU = [0.5, 0.95]. This represents an average decrease of 55.75 points for mAP$_{0.5}$ and of 42.5 points for mAP$_{0.5:0.95}$. \vspace{-3mm}
Newer architectures generally perform better when trained on GLARE, as can be seen from the 1.4-point gain of YOLOv5 over Faster-RCNN with ResNet50 and the 3.4-point gain of Faster-RCNN with a Swin Transformer over YOLOv5 on mAP$_{0.5}$. This behavior mostly carries over to mAP$_{0.5:0.95}$ as well, as all architectures except Deformable DETR performed better than Faster-RCNN with ResNet50. \vspace{-3mm}
For the models trained on LISA, the improvement is less consistent when tested on GLARE. The best performing models are Faster-RCNN with a Swin Transformer and the TOOD architecture. Both YOLOv5 and YOLOX show a notable decrease in mAP compared to the average. \vspace{-3mm}
All the architectures that perform well when trained on GLARE do poorly when trained on LISA. This indicates that sun glare has a notable effect on the ability of current architectures to accurately detect traffic signs. Newer architectures perform better at detecting traffic signs in both cases, although there is greater variation in the results on LISA. Sun glare affects real-time architectures (such as YOLOv5 and YOLOX) more when they are trained on LISA. \vspace{-5mm}
\section{Conclusion and Future Works} \vspace{-4mm} This paper introduces GLARE, a traffic sign dataset with a focus on sun glare and how it affects the recognition of traffic signs. The dataset includes 2,157 images with corresponding bounding box annotations of traffic signs across 41 classes, collected in the Orlando area. The GLARE dataset has a specific focus on images with sun glare present, which affects both human drivers and the cameras of autonomous driving systems. Our baseline benchmarks have shown that sun glare has a noticeable effect on the ability of current architectures to detect traffic signs. \vspace{-3mm}
The GLARE dataset is a starting point for future research on traffic sign detection in naturally noisy conditions and on the removal of sun glare. We believe this dataset can be used as a testing set for complete sun glare removal using the U-Net architecture~\cite{unet}, as seen in previous work removing sun flares from images~\cite{sunflare}.
We also believe this dataset can be extended to include traffic signs in other noisy or abnormal conditions, such as rain, fog, and night-time driving. Such an extension could be used to create and train architectures that detect traffic signs with greater precision in a wider variety of conditions, whether through image restoration or through detection and recognition alone. \vspace{-5mm} \bibliographystyle{unsrt}
{ "timestamp": "2022-09-20T02:23:37", "yymm": "2209", "arxiv_id": "2209.08716", "language": "en", "url": "https://arxiv.org/abs/2209.08716" }
\section*{Acknowledgements} This work is partially supported by the National Natural Science Foundation of China (NSFC) under Grant No. 61972098. Ting Su is partially supported by NSFC Project No. 62072178. \balance \bibliographystyle{ACM-Reference-Format}
\section{Approach} \label{sec:approach} \begin{figure}[h] \centering \includegraphics[width=0.48\textwidth]{figures/overview.pdf} \caption{Workflow of our approach.} \label{fig:overview} \end{figure} The workflow of our approach, implemented in {iFixDataloss}\xspace, is shown in Figure \ref{fig:overview}. It comprises two steps: (1) data loss issue detection and (2) data loss issue fixing. Given an APK under test, {iFixDataloss}\xspace first uses static analysis and dynamic exploration to identify activities in the app that have data loss issues. For each activity that exhibits a data loss issue, {iFixDataloss}\xspace uses a template-based approach to generate a patch that fixes the data loss issue. Lastly, {iFixDataloss}\xspace runs the app and explores the patched activity to check whether the data loss issue is correctly fixed.
\subsection{Data Loss Issue Detection} \input{sections/algorithm1} Algorithm~\ref{alg:detection} outlines {iFixDataloss}\xspace's data loss issue detection procedure, which consists of two phases: \emph{static analysis} and \emph{systematic testing}. In the static analysis phase (Lines 4-5), {iFixDataloss}\xspace extracts a static activity transition graph of the app under test, which is used to guide exploration so as to quickly reach more activities. Additionally, {iFixDataloss}\xspace identifies variables within the GUI that should be stored persistently. In the systematic testing phase (Lines 6-23), {iFixDataloss}\xspace runs the APK on an emulator and systematically explores the app. For each newly discovered state, {iFixDataloss}\xspace uses pre-defined strategies to create data loss scenarios and tests the state in these scenarios to find data loss issues.
\subsubsection{Static Analysis} {iFixDataloss}\xspace's static analysis component is built on FlowDroid~\cite{flowdroid}, a widely used program analysis tool that performs data flow analysis for Android apps. Using FlowDroid, {iFixDataloss}\xspace constructs a static activity transition graph and identifies persistent data within an app. \paragraph{Static Activity Transition Graph.} The static activity transition graph in {iFixDataloss}\xspace is defined as $G=(A, E, \Sigma)$, where a node $a \in A$ is an activity, an edge $e \in E$ between nodes represents an activity transition, and a label $\sigma \in \Sigma$ on an edge represents a widget whose associated events might trigger the activity transition. Due to limited space, we do not present our static activity transition graph construction algorithm in the paper. \paragraph{Persistent Data Identification.} In practice, persistent data is typically saved on internal storage via three storage solutions: \texttt{SharedPreferences}, SQLite databases, and the local file system. \texttt{SharedPreferences} is a lightweight mechanism built into Android for saving and restoring data. {iFixDataloss}\xspace identifies persistent data as follows: it analyzes the app code as well as the XML files that describe the GUI to track the data flow of variables within the GUI, and it reports variables whose values flow into invocations of APIs that are used to save persistent data. The APIs for saving persistent data are shown in Table \ref{tab:APIs}.
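As an illustration of this matching step, the sketch below is a simplified illustration of ours, with hypothetical signature strings standing in for the entries of Table~\ref{tab:APIs}; the actual tool relies on FlowDroid's data-flow analysis rather than this plain worklist over def-use chains. It flags a GUI variable as persistent when some value flowing out of it reaches a persistence API:
\begin{verbatim}
from dataclasses import dataclass, field

# Hypothetical sink signatures standing in for the entries of Table 2.
PERSISTENCE_SINKS = (
    "android.content.SharedPreferences$Editor: putString",
    "android.database.sqlite.SQLiteDatabase: insert",
    "java.io.FileOutputStream: write",
)

@dataclass
class Stmt:
    invoked_signature: str = ""   # callee signature, if the statement is a call
    defined_variables: list = field(default_factory=list)  # variables it defines

def is_persistent(def_use: dict, variable: str) -> bool:
    """Flag `variable` if a value flowing out of it reaches a persistence API."""
    worklist, seen = [variable], set()
    while worklist:
        v = worklist.pop()
        if v in seen:
            continue
        seen.add(v)
        for stmt in def_use.get(v, []):          # statements that use v
            if any(s in stmt.invoked_signature for s in PERSISTENCE_SINKS):
                return True
            worklist.extend(stmt.defined_variables)  # keep following the flow
    return False

# Toy example: an EditText value flows into a SharedPreferences write.
chains = {
    "noteText": [Stmt("android.content.SharedPreferences$Editor: putString")],
    "tempText": [Stmt("java.lang.StringBuilder: append", ["buf"])],
}
assert is_persistent(chains, "noteText") and not is_persistent(chains, "tempText")
\end{verbatim}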
{iFixDataloss}\xspace considers data stored in variables that match these criteria as persistent data. This practice is adopted in~\cite{krerror} as well. \input{tables/APIs}
\subsubsection{Systematic Testing} {iFixDataloss}\xspace runs the app on an emulator and performs a systematic exploration, i.e., it retrieves elements of the GUI and exercises them systematically. To test each state in data-loss-inducing scenarios, {iFixDataloss}\xspace implements the following modules: \begin{itemize}[leftmargin=*] \item \emph{Transition-Guided Exploration.} To quickly discover more states, {iFixDataloss}\xspace prioritizes exercising GUI elements that are more likely to trigger activity transitions. To achieve this, {iFixDataloss}\xspace queries the static activity transition graph generated in the previous step and localizes the elements in the current activity that might trigger transitions. \item \emph{Input Recording.} To test each state in all three data loss scenarios we designed, {iFixDataloss}\xspace also records the sequence of events that leads to a specific state. This allows {iFixDataloss}\xspace to restore the app state by replaying the recorded event sequence. \item \emph{State Identification.} To avoid repeatedly testing a state, {iFixDataloss}\xspace uniquely identifies a state by computing a hash over its widget hierarchy tree in which text-box values are removed (mitigating the state explosion problem). This state abstraction is widely used in Android testing work~\cite{stoat,timemachine,ape}. As shown in Lines 9-11, {iFixDataloss}\xspace skips states that have already been tested in the data loss scenarios and only tests newly discovered states. \end{itemize} As shown in Lines 14-22 of Algorithm~\ref{alg:detection}, {iFixDataloss}\xspace tests each state for data loss issues in the three scenarios and records the variables that exhibit data loss issues. The variables reported in scenarios $\mathcal{P}_{rotate}$, $\mathcal{P}_{back}$ and $\mathcal{P}_{kill}$ are stored in the set $V$, and $num$ stores the total number of times that other data loss issues, such as crashes, arise. In scenario $\mathcal{P}_{back}$, {iFixDataloss}\xspace needs to identify editable widgets, which is done by querying $T_{ew}$, the set of editable widget types. In scenario $\mathcal{P}_{kill}$, {iFixDataloss}\xspace requires the variables that have been identified as persistent data, which are stored in $W_{pr}$. In the end, {iFixDataloss}\xspace reports $R\langle ACT\_id, V, num\rangle$.
\subsection{Data Loss Issue Fixing} In this section, we present the templates used in patch generation and discuss how patches are evaluated in {iFixDataloss}\xspace. Note that {iFixDataloss}\xspace focuses on fixing data loss issues where variable values are lost; issues where a crash or hang occurs are left out of scope in this work. \paragraph{Patch Templates.} We fix data loss issues using templates derived from the official Android documentation. The recommended way to fix data loss issues is to save and restore data in the \emph{proper} lifecycle methods with a \emph{proper} data saving mechanism.
Following the suggestions in the documentation, we classify variables that have data loss issues into three categories and design patch templates accordingly: \begin{itemize}[leftmargin=*] \begin{figure}[t] \centering \includegraphics{figures/template1.pdf} \caption{The patch template for preserving data across app runs.} \label{fig:template-cross-session} \end{figure} \item \emph{Storing values of editable widgets that need to be preserved across app runs.} This kind of data is directly input by users and needs to be persistently saved; for example, the edits made while composing an email. The Android documentation suggests saving this kind of data in the \texttt{onPause()} method and restoring it in the \texttt{onResume()} method, to ensure nothing is lost in case the current activity is killed~\footnote{This callback is mostly used for saving any persistent state the activity is editing, to present a ``edit in place'' model to the user and making sure nothing is lost if there are not enough resources to start the new activity without first killing this one.}. To keep the \texttt{onPause()} and \texttt{onResume()} methods fast, we adopt the \texttt{SharedPreferences} framework to save and restore data, since it is relatively lightweight compared to databases and file systems. The template for saving and restoring this category of values is shown in Figure~\ref{fig:template-cross-session}. \begin{figure}[h] \centering \includegraphics{figures/template2.pdf} \caption{The patch template for preserving data in a single app run.} \label{fig:template-single-session} \end{figure} \item \emph{Storing values of editable widgets that need to be stored for a single app run.} Consider the scenario where a user is using a calculator app. If the user typed ``33*23+3'' in the \texttt{TextField}, this input should be saved for the duration of the computation session, but cleared for the next run. As suggested in the documentation, we save and restore this kind of data in the \texttt{onPause()} and \texttt{onResume()} methods, respectively. Since it is data for a single session, we save it in the \texttt{Bundle} object that is used for passing data between Android activities. The template for saving and restoring this category of values is shown in Figure~\ref{fig:template-single-session}. \begin{figure}[h] \centering \includegraphics{figures/template3.pdf} \caption{The patch template for preserving values of non-editable widgets.} \label{fig:template-non-editable} \end{figure} \item \emph{Storing values of non-editable widgets.} Data loss issues for non-editable widgets often occur during runtime configuration changes, e.g., screen rotation. The documentation suggests saving and restoring these values using the \texttt{onSaveInstanceState()} and \texttt{onRestoreInstanceState()} methods. The corresponding template is shown in Figure~\ref{fig:template-non-editable}. \end{itemize} \paragraph{Patch Generation.} Figure~\ref{fig:patchgeneration} shows the workflow of patch generation. Given a set of variables $V$ whose values need to be preserved, {iFixDataloss}\xspace first divides them into two categories: variables storing values of editable widgets and variables storing values of non-editable widgets. The variables storing values of editable widgets are further divided into two categories: (1) variables whose values are used across app runs, and (2) variables whose values are used in a single app run.
In the end, these variables fall into the three categories shown in Figure~\ref{fig:patchgeneration}. For each category of variables, {iFixDataloss}\xspace uses the corresponding template designed above to generate code for preserving their values, and it then assembles the code for preserving the three categories of variable values into a patch. Specifically, {iFixDataloss}\xspace converts the patch code into a set of \emph{AST} nodes and adds them into the \emph{AST} of the target activity's source code. \paragraph{Patch Evaluation.} To evaluate a patch, {iFixDataloss}\xspace runs the app and tests the patched activity in all three data loss scenarios to check whether any data loss issue remains, and then systematically explores the functionality related to the activity to check whether any crash or freezing issue occurs. If no error is found, {iFixDataloss}\xspace outputs the patch. \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{figures/patchgeneration.pdf} \caption{Workflow of patch generation.} \label{fig:patchgeneration} \end{figure}
\section{Conclusion} \label{sec:conclusion} We introduced a practical technique that automatically detects and fixes data loss issues in Android apps and implemented it in a tool, {iFixDataloss}\xspace. Our extensive experiments (66 apps) show that {iFixDataloss}\xspace is effective in detecting and fixing data loss issues in Android apps. In the 66 evaluated apps, {iFixDataloss}\xspace detected 374 data loss issues, 284 of which were previously unknown. {iFixDataloss}\xspace successfully generated 59 patches for 188 out of the 374 issues. Out of the 20 submitted patches, 16 have been accepted by developers. The experiments also show that {iFixDataloss}\xspace outperforms the state-of-the-art techniques in terms of the number of detected data loss issues and the quality of the generated patches. To facilitate future research on data loss issues, we make our tool and data set publicly available at \cite{iFixDataloss-artifact}.
\section{Data Loss Revealing Strategy Design} \label{sec:actdesign} In this section, we first introduce the relevant basic concepts in the Android framework and analyze possible scenarios in which data loss issues could occur. At the end, we present the data loss revealing strategies we designed for these scenarios. \subsection{Basics} \paragraph{Activity.} An activity is a fundamental component in Android that is used to implement an app page (also called a screen) and contains the logic to handle that app page. Apart from activities, the Android framework provides \emph{fragments}, which are smaller components for app page construction. An activity can have several fragments, each containing both graphical elements and the logic to handle them. Activities and fragments are the main components used in app page construction and are often affected by data loss issues. For the sake of simplicity, we refer only to activities in the following, but all our concepts apply to both activities and fragments. \paragraph{Activity Recreation.} This phenomenon frequently occurs during the execution of Android apps: an activity that a user is interacting with, or that has been put into the background, can be destroyed due to system constraints (e.g., memory pressure or runtime configuration changes). When the user navigates back, the system recreates the activity using a set of saved data that describes the state of the activity when it was destroyed.
\paragraph{Data Loss Issues.} Data loss is a state inconsistency issue that occurs upon activity recreation, in which certain values of the variables that describe the state are lost, or reset to initial values, when the previously destroyed activity is recreated. This state inconsistency may cause inconvenience to users. Data loss often affects the GUI state; for instance, certain widgets may be missing their text values. In some cases, it may affect the internal state of the app. In this work, we mainly focus on data loss issues affecting the GUI state. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{figures/lifecycle.pdf} \caption{Activity lifecycle.} \label{fig:lifecycle} \end{figure}
\subsection{Scenarios in Which Data Loss May Occur} While activity recreation frequently occurs during app execution, the Android framework does not provide a fully automated mechanism for saving status data when an activity is destroyed and restored during activity recreation. App developers have to handle such status data themselves. To support this, the Android framework provides callback methods (e.g., \texttt{onSaveInstanceState()} and \texttt{onRestoreInstanceState()}) that are invoked during the deconstruction and reconstruction of an activity, and developers can implement data saving and restoring functionality in these methods. However, an activity can be destroyed in multiple ways, for instance, by pressing the \texttt{Back} button or by being killed by the Android system to reclaim memory. In each scenario, the callback methods may be executed differently. In the scenarios where developers do not implement, or incorrectly implement, the callback methods that handle status data, data loss issues will occur. As shown in the activity lifecycle in Figure~\ref{fig:lifecycle}, there are four state paths from the state \texttt{Resumed} to \texttt{Shutdown}; that is, there are four theoretical lifecycle processes that an activity may go through when it is destroyed, each executing a different sequence of callback methods. However, in practice only three of them can occur in real-world scenarios. According to the official Android documentation\footnote{https://developer.android.com/guide/components/activities/activity-lifecycle}, the transition marked by the dashed line occurs only when the back button is pressed, and when the back button is pressed, the \texttt{onDestroy()} callback method is subsequently called. Therefore, the path marked by the triangle cannot occur\footnote{Except in the rare case that this path is explicitly implemented by developers.}. For the three scenarios that occur in practice, if developers do not properly save state data, data loss issues will happen. Thus, every activity in an app should be tested for these three scenarios: \begin{itemize}[leftmargin=*] \item \emph{Scenario 1 ($S1$):} \texttt{onPause()$\rightarrow$onSaveInstanceState()$\rightarrow$onSto\\p()}. This scenario often occurs when an activity is forcibly killed by interrupt actions, for instance, when it is killed by the user swiping it away, or killed by the Android system to reclaim memory after staying in the background for a long time. \item \emph{Scenario 2 ($S2$):} \texttt{onPause()$\rightarrow$onStop()$\rightarrow$onDestroy()}. This callback execution order occurs when an activity is destroyed by the user pressing the \texttt{Back} button or when the activity finishes itself.
\item \emph{Scenario 3 ($S3$):} \texttt{onPause()$\rightarrow$onSaveInstanceState()$\rightarrow$onSto\\p()$\rightarrow$onDestroy()}. This scenario occurs when an activity is destroyed and recreated due to runtime configuration changes, e.g., screen rotation. In this case, the status data of the activity needs to be completely saved during deconstruction and restored upon recreation. \end{itemize}
\subsection{Data Loss Issue Revealing Strategies} In this section, we outline the three strategies we designed to reveal data loss issues occurring in the three scenarios above. A strategy can be expressed as a tuple $\langle E_d, E_r, O\rangle$, in which $E_d$ is an event sequence that destroys the current activity, $E_r$ is an event sequence that leads to re-entering the state that was destroyed, and $O$ is a testing oracle that checks whether a data loss issue occurs during the activity recreation. The testing oracle $O$ is defined as follows. \begin{definition} A data loss issue occurs when $V=\langle v_0, v_1, \ldots, v_n \rangle$ differs from $V^\prime=\langle v_0^\prime, v_1^\prime, \ldots, v_n^\prime \rangle$, where $V$ represents the values of variables within the GUI and $V^\prime$ represents their values after the activity recreation process. \end{definition} The idea behind these strategies is that, given an app state, we execute events that trigger activity recreation and perform a state comparison to discover data loss issues. In the following, we introduce the three strategies: \begin{itemize}[leftmargin=*] \item $\mathcal{P}_{kill}:=\langle E_{kl}, E_{rp}, O_{kl}\rangle$. Strategy $\mathcal{P}_{kill}$ is designed to reveal data loss issues in scenario $S1$. $E_{kl}$ is an event sequence that simulates the user swiping action that kills the app. $E_{rp}$ is the event sequence that restarts the app and re-enters the state in which the app was killed; it can be recorded during state exploration (explained in Section~\ref{sec:approach}). $O_{kl}$ is the testing oracle checking for data loss issues that occur in scenario $S1$. In $O_{kl}$, $V$ indicates the variable values within the GUI that should be stored across app runs. This oracle is based on the suggestion shown in the box below, taken from an Android developer resource~\footnote{\url{https://www.geeksforgeeks.org/shared-preferences-in-android-with-examples/}}. As suggested there, persistent data should be stored persistently to ensure a smooth user experience, even if the app is killed or restarted. Persistent data can be identified based on data usage patterns, e.g., being stored in databases, which is further explained in Section~\ref{sec:approach}. \result{\emph{Persist data across user sessions, even if the app is killed and restarted, or the device is rebooted.}} \input{tables/EditWidget} \item $\mathcal{P}_{back}:=\langle E_{bk}, E_{nx}, O_{bk}\rangle$. Strategy $\mathcal{P}_{back}$ is designed to reveal data loss issues in scenario $S2$. $E_{bk}$ is the ``Back button'' event and $E_{nx}$ is an event sequence that re-enters the state that was just destroyed by pressing the Back button. The \texttt{Back button} is a system event that can be generated at any time, and the event sequence $E_{nx}$ can be recorded during state exploration. $O_{bk}$ is the testing oracle checking for data loss issues that occur in scenario $S2$. In $O_{bk}$, $V$ indicates the property values of \emph{editable} widgets within the GUI.
In the following, we introduce the three strategies: \begin{itemize}[leftmargin=*] \item $\mathcal{P}_{kill}:=\langle E_{kl}, E_{rp}, O_{kl}\rangle$. Strategy $\mathcal{P}_{kill}$ is designed to reveal data loss issues in scenario $S1$. $E_{kl}$ is an event sequence that simulates the user swiping action that kills the app. $E_{rp}$ is the event sequence that restarts the app and re-enters the state in which the app was killed; it can be recorded during state exploration (explained in Section~\ref{sec:approach}). $O_{kl}$ is the testing oracle checking for data loss issues that occur in scenario $S1$. In $O_{kl}$, $V$ indicates variable values within the GUI that should be stored across app runs. This oracle is based on the suggestion shown in the box below, taken from an Android development tutorial~\footnote{\url{https://www.geeksforgeeks.org/shared-preferences-in-android-with-examples/}}. As suggested there, data that users expect to keep should be stored persistently so that it survives even if the app is killed or restarted. Persistent data can be identified based on data usage patterns, e.g., being stored in databases, as further explained in Section~\ref{sec:approach}. \result{\emph{Persist data across user sessions, even if the app is killed and restarted, or the device is rebooted.}} \input{tables/EditWidget} \item $\mathcal{P}_{back}:=\langle E_{bk}, E_{nx}, O_{bk}\rangle$. Strategy $\mathcal{P}_{back}$ is designed to reveal data loss issues in scenario $S2$. $E_{bk}$ is the \texttt{Back button} event and $E_{nx}$ is an event sequence that re-enters the state that was just destroyed by pressing the Back button. The \texttt{Back button} is a system event that can be generated at any time, and the event sequence $E_{nx}$ can be recorded during state exploration. $O_{bk}$ is the testing oracle checking for data loss issues that occur in scenario $S2$. In $O_{bk}$, $V$ indicates property values of \emph{editable} widgets within the GUI. This oracle is derived from the suggestion shown in the box below, which is from the official Android developer documentation~\footnote{\url{https://developer.android.com/reference/android/app/Activity\#saving-persistent-state}}. As suggested in the document, the values of editable widgets should be preserved when the Back button is pressed. Editable widgets are listed in Table~\ref{tab:editable_widgets}. \result{\emph{The user pressing BACK from your activity does not mean ``cancel'' -- it means to leave the activity with its current contents saved away. Canceling edits in an activity must be provided through some other mechanism, such as an explicit ``revert'' or ``undo'' option.}} \item $\mathcal{P}_{rotate}:=\langle E_{rt}, \mathit{NOOP}, O_{rt}\rangle$. Strategy $\mathcal{P}_{rotate}$ is designed to reveal data loss issues in scenario $S3$. $E_{rt}$ is the system event \texttt{screen-rotation}. When a screen-rotation event is performed, the current state is destroyed and recreated. Since activity recreation is triggered automatically by the screen rotation operation, the activity re-entering event is $\mathit{NOOP}$, i.e., a \texttt{do-nothing} event. $O_{rt}$ is the testing oracle checking for data loss issues that occur in scenario $S3$. In $O_{rt}$, $V$ indicates the values of all widgets\footnote{Here, only app-related properties are considered; screen-related properties such as widget bounds are excluded.} within the GUI. For this oracle, we consider the property values of all widgets in the GUI because screen rotation is a neutral event whose execution should leave the state unchanged. \end{itemize} Apart from the three oracles described above, {iFixDataloss}\xspace also detects other types of data loss issues with generic oracles, such as crashes and disappearing widgets. A sketch of how the three activity-destroying events can be driven during testing is shown below.
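Concretely, the following sketch shows one plausible way to generate $E_{bk}$, $E_{rt}$, and $E_{kl}$ through UI Automator and an ADB shell command (the tools our implementation builds on; see Section~\ref{sec:impl}). This is an illustrative sketch, not the exact code of {iFixDataloss}\xspace:

\begin{verbatim}
import androidx.test.platform.app.InstrumentationRegistry;
import androidx.test.uiautomator.UiDevice;

public class DestroyEvents {
    private final UiDevice device =
            UiDevice.getInstance(InstrumentationRegistry.getInstrumentation());

    // E_bk: press the Back button (scenario S2).
    void back() {
        device.pressBack();
    }

    // E_rt: rotate the screen and back again (scenario S3).
    void rotate() throws Exception {
        device.setOrientationLeft();
        device.setOrientationNatural();
    }

    // E_kl: kill the app's processes, as the system would (scenario S1).
    void kill(String packageName) throws Exception {
        device.executeShellCommand("am kill " + packageName);
    }
}
\end{verbatim}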
\section{Evaluation} \label{sec:evaluation} In our experimental evaluation, we seek to answer the following research questions: \begin{itemize}[leftmargin=*] \item \textbf{RQ1}: How effective is {iFixDataloss}\xspace in finding data loss issues in Android apps? \item \textbf{RQ2}: What is the quality of the patches generated by {iFixDataloss}\xspace? \item \textbf{RQ3}: How useful is {iFixDataloss}\xspace in fixing real-world data loss issues in Android apps? \end{itemize} \subsection{Experimental Setup} \paragraph{Subject Apps.} We evaluated {iFixDataloss}\xspace on a data set containing 66 Android apps, constructed by merging the 48 benchmark apps used in Data Loss Detector~\cite{benchmark} and the 21 apps found to have data loss issues in LiveDroid~\cite{livedroid} (there are 7 duplicate apps between these two benchmarks), as well as 4 apps downloaded from the Google Play Store. These 4 apps were used to investigate data loss issues during the early stage of our project and are also included in the evaluation data set (marked by ``\#'' in Table~\ref{tab:result}). Due to limited space, we use asterisks to omit some suffix letters of app names. \paragraph{Data Loss Issue Reporting.} The experiments detect two types of data loss issues: GUI variables with missing values (indicated with $VE$) and critical errors such as crashing, hanging, and disappearing dialogues (indicated with $CE$). For each experiment, we report the number of GUI variables that are not preserved during activity destruction and the number of critical errors. For $VE$, we further classify the reported variables into two categories: \begin{itemize}[leftmargin=*] \item \emph{True Positives (TP):} variables whose values should be preserved. \item \emph{False Positives (FP):} variables whose values should not be preserved. \end{itemize} For each variable in $VE$, we manually check whether it has a data loss issue. Specifically, we explore the app and test the activity in which the variable resides using the following procedure. We first modify the values of all editable widgets, then perform a screen rotation, and check whether the variable's value remains the same before and after the rotation. We then repeat the procedure for the Back-button and killing scenarios. If the variable's value remains the same in all three scenarios, we deem that the value does not need to be preserved, i.e., the variable is a false positive. Otherwise, the variable is a true positive. \paragraph{Comparison Tool Selection.} We compare {iFixDataloss}\xspace with LiveDroid~\cite{livedroid} and Data Loss Detector~\cite{dld}. LiveDroid is the most recent technique that prevents data loss issues in Android apps by automatically patching them. Data Loss Detector is the most recent technique that detects data loss issues in Android apps; it focuses on data loss issues caused by screen rotation and detects them by performing the screen rotation operation during testing. \paragraph{Execution Environment.} Our experiments run on a 64-bit Windows 10 physical machine with a 2.30GHz Intel(R) Core(TM) i7-10510U CPU and 16GB RAM, and use an Android emulator for GUI exploration in the data loss detection phase. \input{tables/results} The emulator is configured with 2GB RAM and the Android Nougat operating system (Android 7.1, API level 25). For each technique in the evaluation, we use the default parameter values given on its website. For Data Loss Detector and {iFixDataloss}\xspace, we run each experiment for one hour. \subsection{RQ1: Data Loss Issue Detection} Table~\ref{tab:result} shows the results of {iFixDataloss}\xspace, LiveDroid, and Data Loss Detector on the 66 Android apps. Column ``T'' indicates the total number of detected data loss issues. Columns ``TP'' and ``FP'' indicate whether the reported issues are true positives or false positives. It is worth noting that TP and FP are not shown for DLD in Table~\ref{tab:result} because our comparison focuses on determining whether a variable value detected by a tool is supposed to be preserved (as described in the experimental setup above); DLD only detects data loss issues and does not specify the variable values that should be preserved, and thus has no TP and FP columns. Column ``CE'' indicates the number of critical errors detected by {iFixDataloss}\xspace. There are 14 apps in the data set on which LiveDroid failed to run due to compatibility issues; these are denoted by ``-'' in the table. \emph{Results.} {iFixDataloss}\xspace detected 374 data loss issues in the 66 Android apps. 188 of these 374 issues are GUI variables with missing values, none of which were false positives. The remaining 186 issues are critical errors. Our investigation shows that, out of the 374 data loss issues, 284 were previously unknown. In comparison with the state-of-the-art techniques, {iFixDataloss}\xspace detected the most data loss issues, followed by DLD (152) and LiveDroid (43). Regarding false positives, LiveDroid detected 296 GUI variables with missing values, but 253 of them are false positives.
LiveDroid reports a significant number of false positives because it uses static analysis to reason about variables whose values might change, and it suffers from the imprecision inherent in static analysis. We further compare the data loss issues detected by {iFixDataloss}\xspace and the state-of-the-art techniques. As shown in Figure~\ref{fig:intersection}, {iFixDataloss}\xspace detects all 43 issues detected by LiveDroid and 49 of the 152 issues detected by Data Loss Detector. Two possible reasons why {iFixDataloss}\xspace did not detect all of the issues reported by Data Loss Detector are: (1) Data Loss Detector adopts screenshot-based oracles and reports a data loss issue whenever a difference is detected between the screenshots taken before and after screen rotation; it therefore tends to report false positives, e.g., animations on an app page can cause screenshot differences even when the page has no data loss issue; (2) {iFixDataloss}\xspace adopts a state exploration strategy different from Data Loss Detector's and may miss certain states that Data Loss Detector covers. In total, however, {iFixDataloss}\xspace detected many more data loss issues than Data Loss Detector. \result{{iFixDataloss}\xspace detected 374 data loss issues in the 66 Android apps. 284 out of the 374 issues were previously undetected. {iFixDataloss}\xspace significantly outperforms the state-of-the-art techniques in terms of the number of detected data loss issues.} \begin{figure}[h] \centering \includegraphics[width=0.35\textwidth]{figures/intersection.pdf} \caption{Data loss issues comparisons between {iFixDataloss}\xspace and the other two data loss issue detection tools.} \label{fig:intersection} \end{figure} \subsection{RQ2: Patch Quality} \input{tables/patchtype} We evaluate the quality of the patches generated by {iFixDataloss}\xspace and LiveDroid based on two criteria. First, we check whether a patch fixes the data loss issues without introducing new errors. For each patch, we run the patched app on an emulator to check whether the data loss issue is fixed. Specifically, we manually test the patched activity in the data loss scenarios and examine whether the data loss issue still occurs. To ensure no new errors are introduced, we explore functionality related to the patched activity and check whether the app misbehaves, e.g., crashes or GUI elements disappear. Second, we check whether a patch introduces any false positives, i.e., GUI variables whose values should not be saved but are saved because of the patch. Furthermore, we classify patches into four categories: \begin{itemize}[leftmargin=*] \item \emph{Type 1.} The patch fixes all data loss issues without preserving false positives; \item \emph{Type 2.} The patch fixes all data loss issues but also preserves some false positives; \item \emph{Type 3.} The patch fixes some but not all data loss issues and also preserves some false positives; \item \emph{Type 4.} The patch only preserves false positives. \end{itemize} Note that {iFixDataloss}\xspace is a fully automated tool that can automatically evaluate patches, as described in Section~\ref{sec:approach}. In this experiment, we manually explored patched activities at runtime only to evaluate the quality of the patches generated by {iFixDataloss}\xspace and LiveDroid. \paragraph{Results.} 59 patches were generated to address the 188 missing-variable-value issues detected by {iFixDataloss}\xspace.
Note that an activity may have multiple variables with data loss issues; in such cases, {iFixDataloss}\xspace generates one patch that preserves the values of all variables exhibiting data loss issues. As shown in Table~\ref{tab:PatchType}, all 59 patches generated by {iFixDataloss}\xspace fall into Type 1, i.e., all 188 issues are fixed without preserving variable values that should not be preserved. In comparison, LiveDroid detected 296 issues involving variables with missing values and generated 44 patches in total to address them. As shown in Table~\ref{tab:PatchType}, 7 of the 44 patches fall into Type 1, i.e., only 7 patches fix the data loss issues without saving and restoring variable values that should not be preserved. The remaining patches have the \emph{over-saving} problem, i.e., they save and restore variable values that should not be preserved. Out of the 296 variable values identified by LiveDroid, 253 (85\%) are false positives, i.e., unnecessarily preserved. \result{{iFixDataloss}\xspace generated 59 patches that successfully fixed 188 missing-variable-value issues without preserving false positives (variable values that should not be preserved). {iFixDataloss}\xspace outperformed the state-of-the-art technique LiveDroid in terms of the number of preserved false positives.} \input{tables/PR} \subsection{RQ3: Usefulness} To evaluate how useful {iFixDataloss}\xspace is in practice, we selected the 10 most recently updated apps in the data set and submitted all 20 patches generated for them to the developers as 20 pull requests on GitHub. At the time of writing, 16 of the 20 pull requests have been accepted, with very positive comments: \begin{itemize}[leftmargin=*] \item ``Looks good - thanks again'' \item ``Super thank you for the contribution'' \item ``Thank you very much for this nice contribution. It looks really cool, overall.'' \item ``Thank you! I have tested this and merged by rebasing into the master.'' \item ... \end{itemize} We have not yet received a response for the remaining 4 pull requests. The details of the 20 patches are shown in Table~\ref{tab:PR}. As in Table~\ref{tab:result}, we use asterisks to omit some suffix letters of app names due to space limitations. \result{Out of the 20 pull requests with patches generated by {iFixDataloss}\xspace, 16 have been accepted with positive comments.} \subsection{Threats to Validity} The main threat to external validity lies in the selection of apps. {iFixDataloss}\xspace is evaluated on 66 Android apps, and our results may not generalize beyond them. To mitigate this threat, we chose apps from the benchmarks of two prior works, Data Loss Detector and LiveDroid. Threats to internal validity stem from our experimental methodology. We manually explored apps to reproduce the data loss issues reported in the experiments and may have failed to exercise certain functionality, which could affect our results. We also performed some manual checks during the evaluation, which is potentially error-prone. To minimize this threat, two people performed the manual checks and compared the experimental results to check for discrepancies.
We also note that the false positive rate of LiveDroid reported in our experiments differs from the rate reported in the LiveDroid paper. We checked with the authors of LiveDroid on this matter; they explained that they also consider non-UI property values when calculating their false positive rate. Non-UI property values do not apply to our scenarios, which results in a different false positive rate. We used the default parameters when running LiveDroid and DLD. For LiveDroid, we also tried running each app with different parameters and found no difference in the results. For DLD, we did the same, except that we kept the runtime length at 1 hour, and again found no difference. Two people ran these experiments to ensure that the results were consistent. Based on this, we concluded that running the experiments with default parameters did not affect the results. \section{Implementation} \label{sec:impl} {iFixDataloss}\xspace is implemented as a fully automated data loss detection and fixing framework that reuses or extends a set of off-the-shelf tools: ApkTool~\cite{Apktool}, FlowDroid~\cite{flowdroid}, UI Automator~\cite{UiAutomator}, Android Debug Bridge (ADB)~\cite{adb}, and JavaParser~\cite{Javaparser}. ApkTool is used to decompile an APK and extract its XML files. FlowDroid is extended to build the activity graph of an app and perform data flow analysis. UI Automator is used to dump the GUI layout and perform app state exploration. ADB is used to simulate the three revealing actions (i.e., kill, back, and rotate). During patch generation, we use JavaParser to parse the source code of apps and generate patches, as sketched below.
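For illustration, the following sketch shows how JavaParser can inject a lifecycle method into an activity class. The file name, class name, and the helper \texttt{saveFields()} are hypothetical; the code actually inserted by {iFixDataloss}\xspace follows the templates described in Section~\ref{sec:approach}, and a real patch must extend \texttt{onPause()} rather than add it when the activity already defines one:

\begin{verbatim}
import com.github.javaparser.StaticJavaParser;
import com.github.javaparser.ast.CompilationUnit;
import com.github.javaparser.ast.Modifier;
import com.github.javaparser.ast.body.ClassOrInterfaceDeclaration;
import com.github.javaparser.ast.body.MethodDeclaration;
import java.io.File;

public class PatchSketch {
    public static void main(String[] args) throws Exception {
        CompilationUnit cu =
                StaticJavaParser.parse(new File("FormActivity.java"));
        ClassOrInterfaceDeclaration activity =
                cu.getClassByName("FormActivity").orElseThrow();
        // Add an onPause() override whose body saves the widget values.
        MethodDeclaration m =
                activity.addMethod("onPause", Modifier.Keyword.PROTECTED);
        m.addMarkerAnnotation("Override");
        m.setBody(StaticJavaParser.parseBlock(
                "{ super.onPause(); saveFields(); }"));
        System.out.println(cu); // the patched source
    }
}
\end{verbatim}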
\section{Introduction} \label{sec:intro} A good user experience is essential for the success and popularity of mobile apps. Conversely, a poor user experience can make an app unpopular even if it provides useful functionality. Data loss is one of the most prominent frustrations users can experience with mobile apps. Imagine a user who is filling out a form with many fields and has almost completed it after tediously typing on a tiny virtual keyboard. She or he might mistakenly touch the \emph{Back} button or be forcibly switched away from the app, causing all the input data to be lost. This happens when an app page is destroyed by the mobile operating system, for example, when the system needs to conserve resources, when it determines that the app page is no longer needed (e.g., after the user exits via the \emph{Back} button), or when the app has stayed in the background for a long time. In such cases, if the app does not properly save user data, data loss issues occur. Recent studies show that data loss issues are extremely pervasive: Amalfitano et al.~\cite{amalfitano} reported that 60 out of the 68 studied apps (88.2\%) had data loss issues. Fixing data loss issues automatically is non-trivial. A straightforward fix is to save the values of all the state variables in an app page before it is destroyed and restore these values when the page is revisited. The problem with this solution is data \emph{over-saving}. An app page may comprise a large amount of data that is irrelevant to user input (e.g., widgets displaying static text), and saving this irrelevant data can slow down the app because the saving occurs on the UI thread. As suggested in the Android documentation~\footnote{\url{https://developer.android.com/training/articles/perf-anr\#Avoiding}}, ``any method that runs on the UI thread should do as little work as possible'' to avoid UI sluggishness. Additionally, restoring data that should not be preserved can lead to app misbehavior. For instance, suppose a \texttt{TextView} on an app page displays how many times the user has visited the page. Assume its current value, 5, is saved before the page is destroyed. When the user revisits the page, the value 5 is retrieved and displayed, which is confusing since the user expects 6. Both problems have been identified as crucial in fixing data loss issues in recent works~\cite{livedroid,krerror}. A recent work, LiveDroid~\cite{livedroid}, uses static analysis to reason about the program variables and GUI properties that might change during user interactions and generates patches that save and restore their values at runtime to avoid data loss issues. Although a significant portion of variables is ruled out by the static analysis, LiveDroid still suffers from the over-saving issue: as reported in the paper, it generates many false positives, i.e., it preserves variable values that should not be preserved, which reduces app performance and responsiveness due to the cost of saving unnecessary data. In this paper, we present a technique, called {iFixDataloss}\xspace, which automatically detects and fixes data loss issues in Android apps while eliminating the over-saving issue. The key insight of our technique is that the scenarios in which data loss issues occur can be simulated by generating particular events or composed events during testing. For instance, screen rotation, one of the most frequent data loss scenarios, can be triggered by executing an orientation change event at app runtime. Thus, we can detect data loss issues by testing each app page under these data loss scenarios and checking whether data is lost. To fix data loss issues, {iFixDataloss}\xspace only preserves data that exhibited loss during testing, thereby avoiding the saving of unnecessary data. To further minimize unnecessary data saving, we identify data related to user input (e.g., values of \texttt{TextField} widgets) and only save this kind of data in the patches. Specifically, {iFixDataloss}\xspace combines static and dynamic analysis to detect data loss issues. It first builds an \emph{activity transition graph} by performing static analysis on the app under test. Guided by this graph, {iFixDataloss}\xspace mimics user actions to exercise the app with a guided exploration strategy that steers the exploration towards app pages that may be affected by data loss issues. {iFixDataloss}\xspace tests each app page by executing a set of predefined events that generate data loss scenarios. Based on the Android activity lifecycle, we define a set of events or composed events that cover the possible data loss scenarios; thus, {iFixDataloss}\xspace can find data loss issues more thoroughly, whereas existing techniques~\cite{dld,livedroid} only cover a portion of those scenarios. For patch generation, {iFixDataloss}\xspace fixes not only data loss issues that impact the current app run but also issues that impact users across multiple runs. Most of the time, data loss issues only affect app usage in the current run, e.g., certain \texttt{TextView} values on the screen are lost after a change in screen orientation.
This kind of data typically only needs to be preserved for the current run and is no longer needed after the run terminates. For such data, {iFixDataloss}\xspace generates patches that save it in memory and retrieve it when needed. In certain cases, lost data needs to be preserved across runs; we call this \emph{persistent data}. For example, a membership registration form that has been fully filled in by the user but not yet submitted still needs to be saved even after the app is killed. For this kind of data, {iFixDataloss}\xspace generates patches that save the data using a storage mechanism that survives across app runs. To achieve this, we develop a strategy based on user data usage patterns to distinguish the two kinds of data. In other words, {iFixDataloss}\xspace fixes not only data loss issues that occur within a single app run but also data loss issues across app runs. By contrast, the existing technique LiveDroid~\cite{livedroid} only fixes data loss issues within a single app run and cannot fix data loss issues across multiple runs. Our experiments show that {iFixDataloss}\xspace is effective in both detecting and fixing data loss issues in Android apps. We evaluated {iFixDataloss}\xspace on 66 Android apps and detected 374 data loss issues, 284 of which were previously unknown. {iFixDataloss}\xspace outperforms the recent data loss detection techniques DLD~\cite{dld} and LiveDroid~\cite{livedroid} in the number of detected data loss issues. It also outperforms the data loss fixing tool LiveDroid~\cite{livedroid} in the quality of the generated patches, i.e., the number of over-saved variable values in patches. {iFixDataloss}\xspace successfully generated patches for 188 of the detected issues. For the previously unknown issues, we submitted 20 patches generated by {iFixDataloss}\xspace to the developers; 16 of the 20 have been accepted, with very positive feedback. Overall, our contributions can be summarized as follows: \begin{itemize} \item We identify the scenarios in which data loss may occur based on the Android activity lifecycle and design strategies to reveal data loss issues. Furthermore, we develop patch templates for fixing data loss issues and implement our approach in a fully automated tool, {iFixDataloss}\xspace, for detecting and fixing data loss issues in Android apps. \item We performed an extensive experiment in which we found 374 data loss issues in 66 Android apps, 284 of which were previously unknown, and successfully generated patches for 188 of the 374 issues. Out of 20 submitted patches, 16 have been accepted by developers.
\item To facilitate future research, we make our prototype tool {iFixDataloss}\xspace and the data set used in the experiments available at \url{https://github.com/iFixDataLoss/iFixDataloss22}. \end{itemize} \section{Motivating Example} \label{sec:example} \begin{figure}[t] \centering \subfigure[before back]{ \label{fig:example1} \begin{minipage}{0.44\linewidth} \centering \hspace{-0.34cm}\includegraphics[width=1.6in]{figures/accountdetails1.png} \end{minipage} } \quad \subfigure[after back and return]{ \label{fig:example2} \begin{minipage}{0.44\linewidth} \centering \hspace{-0.34cm}\includegraphics[width=1.6in]{figures/accountdetails2.png} \end{minipage} } \caption{Screenshots of the CycleStreets activity described in the example.} \label{fig:motivation} \end{figure} In this section, we describe a data loss issue in the popular mobile app CycleStreets (196 stars on GitHub and over 100K downloads on Google Play), explain why existing techniques fail to detect and fix it, and show how it is detected and fixed by {iFixDataloss}\xspace. \paragraph{Data Loss.} CycleStreets~\footnote{\url{https://www.cyclestreets.net/mobile/android/}} is a cycle journey planner app that is widely used in the UK. {iFixDataloss}\xspace found a data loss issue on its account registration page. Figure~\ref{fig:example1} shows the account registration process, which involves filling out the form with personal information, such as a user name and email address, and then clicking the \emph{Register} button to submit the completed form. The page has a data loss issue, which can be triggered by the following steps: (1) fill out the form without clicking the \emph{Register} button; (2) press the \emph{Back} button; (3) return to the registration page. As shown in Figure~\ref{fig:example2}, all the data entered in the previous step is lost after returning to the registration page. Users who encounter this issue must re-enter the data, which is tedious and time-consuming. \paragraph{Challenges.} There are difficulties in both detecting and fixing this data loss issue. Detecting such issues requires an oracle that checks whether user data is lost during testing; however, most existing mobile app testing tools~\cite{sapienz,stoat,ape,timemachine,gesda,combodroid} can only detect crashes and thus cannot be used to find and fix data loss issues. As mentioned earlier, the recent tool DLD~\cite{dld} is capable of detecting data loss issues; unfortunately, DLD mainly detects data loss issues that occur when the device orientation changes and fails to detect the issue in this example. Automatically fixing data loss issues is non-trivial as well. The app page in the example comprises $17$ widgets, each with more than $15$ properties, amounting to $311$ variable values in total. Restoring all of these values when entering the page can significantly slow down the app. We ran an experiment 10 times to measure the cost of saving and restoring all these variable values: on average, saving and restoring the 311 variable values takes 300ms, which is over 3 times the acceptable response time given in the Android documentation~\footnote{\url{https://developer.android.com/training/articles/perf-anr\#Reinforcing}}. Moreover, this time only accounts for saving and restoring.
In a real-world app, additional processing could further increase the response time beyond the acceptable limit. To save less data, the state-of-the-art technique LiveDroid~\cite{livedroid} leverages static analysis to identify the variable values in GUIs that might change during user interaction and saves only those. In total, LiveDroid preserved 166 variable values to fix this issue. Despite reducing the amount of data being saved, LiveDroid still exhibits the \emph{over-saving} issue: the majority of the 166 variable values are unnecessarily saved, since they are initialization values that never change during user interactions, e.g., \texttt{resource-id} and \texttt{content-desc}. \paragraph{Our approach.} The tool {iFixDataloss}\xspace, in which our approach is implemented, detected and fixed this issue via the following steps: \begin{itemize}[leftmargin=*] \item \emph{Data Loss Issue Detection.} {iFixDataloss}\xspace explores the app on an emulator and tests each discovered app page under the data loss scenarios that we define based on the Android documentation. One of the scenarios is \texttt{Back-ReEntering}, i.e., exiting an app page by pressing the \texttt{Back} button and re-entering the page. In the example, {iFixDataloss}\xspace found that the data filled in on the registration page was lost after executing the \texttt{Back-ReEntering} scenario, and it therefore reported the issue. During detection, {iFixDataloss}\xspace not only reports a data loss issue on an app page but also records the variable values that are lost in a data loss scenario. Specifically, before generating a data loss scenario, {iFixDataloss}\xspace modifies the values of variables on the page that may store user data, such as \texttt{TextField} widgets. If the value of a variable changes in the data loss scenario, {iFixDataloss}\xspace deems that the variable has a data loss issue and that its value needs to be saved. In the example, {iFixDataloss}\xspace found that the values of the five \texttt{TextField} widgets (marked in red in Figure~\ref{fig:example2}) changed in the \texttt{Back-ReEntering} scenario, so the resource IDs of these widgets were recorded for patch generation in the next step. \item \emph{Patch Generation.} {iFixDataloss}\xspace uses a template-based approach to generate patches. One of the templates is designed to fix the kind of data loss issue that occurs in the example. With this template, {iFixDataloss}\xspace generates a patch that fixes the issue while preserving only 5 variable values (shown in Figure~\ref{fig:patch}). \item \emph{Patch Evaluation.} {iFixDataloss}\xspace evaluates the generated patch by testing the patched APK. If data loss issues no longer exist on the corresponding app page and no crashes or freezes are found while exploring functionality related to the page, {iFixDataloss}\xspace reports the issue as fixed. In the example, {iFixDataloss}\xspace found neither data loss issues nor crashes or hangs in the evaluation and thus determined that the patch successfully fixed the issue. Furthermore, we created a pull request with the generated patch on the GitHub repository of CycleStreets, and the pull request was accepted. \end{itemize} In summary, {iFixDataloss}\xspace successfully detected and fixed the data loss issue in the example. In the generated patch, only five widget values are saved and restored, and no unnecessary data is saved.
The issue has been confirmed by the developers of CycleStreets, and the generated patch has been accepted as well. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{figures/patchexample.pdf} \caption{The patch generated by {iFixDataloss}\xspace for fixing the data loss issue in the example.} \label{fig:patch} \end{figure} \section{Related Work} \label{sec:relatedwork} \paragraph{Data Loss Issue Fixing.} A few research efforts attempt to automatically fix data loss issues in Android apps. A recent work, LiveDroid~\cite{livedroid}, leverages static analysis to identify the program variable values and GUI properties that may change during user interactions and preserves them across app life cycles to avoid data loss issues. Due to infeasible paths introduced by the static analysis, LiveDroid tends to report false positives. Furthermore, LiveDroid only handles data loss issues across app life cycles, such as activity recreation, and does not handle data loss issues across app runs. RuntimeDroid~\cite{runtimedroid} uses an online resource loading module to update GUI elements when certain configurations change at runtime, thereby avoiding activity restarts. By contrast, {iFixDataloss}\xspace fixes data loss issues by preserving the variable values whose loss has been witnessed during testing and restoring them in data loss scenarios; thus, {iFixDataloss}\xspace introduces no false positives. Apart from fixing data loss issues across app life cycles, {iFixDataloss}\xspace can also fix data loss issues across app runs. \paragraph{Automated Program Repair for Android Apps.} Several existing works focus on fixing other types of issues in Android apps. Droix~\cite{droix} employs a search-based approach to generate patches that fix crashes in Android apps. AppEvolve~\cite{appevolve} analyzes existing updates in other apps to generate patches that fix issues caused by API evolution. METER~\cite{meter} leverages computer vision techniques to fix broken GUI test scripts during app evolution. SapFix~\cite{sapfix}, an automated program repair tool deployed at Facebook, generates patches for more types of bugs in mobile apps using templates created by human engineers based on previous bug fixes. Compared to those works, {iFixDataloss}\xspace focuses on fixing data loss issues in Android apps and can complement those tools to fix more types of issues. \paragraph{Data Loss Issue Detection.} Similar to {iFixDataloss}\xspace, the data loss detection tools DLD~\cite{dld} and ALARic~\cite{alaric} exercise app pages by executing a screen rotation action and detect data loss issues by checking for differences in the GUI before and after the rotation. Thor~\cite{thor} augments existing test suites with neutral sequences of operations to reveal more failures; the injected event sequences create adverse conditions, such as disconnecting the network or turning off audio services, and may reveal data loss issues. Quantum~\cite{quantum} tests Android apps by generating test cases injected with operations that are more likely to reveal failures based on a study of previous bugs (e.g., zooming in and out). SetDroid~\cite{setdroid} executes test cases under different system settings to find setting-related failures. Despite their ability to find data loss issues, these techniques can only find a portion of them.
By contrast, we design operations that cover, based on the Android lifecycle, all the scenarios in which data loss issues occur, and we use them to discover more types of data loss issues. More importantly, {iFixDataloss}\xspace can not only find data loss issues but also fix them. \paragraph{Automated Android App Testing.} Another rich branch of research focuses on generating test inputs for Android apps. Sapienz~\cite{sapienz} uses evolutionary algorithms to generate test inputs that achieve higher code coverage. TimeMachine~\cite{timemachine} saves app states that have the potential to trigger new program behaviors and prioritizes exploring them to discover more app behaviors. Stoat~\cite{stoat}, APE~\cite{ape}, and DroidBot~\cite{droidbot} build models of the app under test and leverage them to guide input generation. A3E systematically generates inputs following a depth-first strategy. SwiftHand~\cite{swifthand} uses machine learning to learn a model that guides input generation. ACTEve~\cite{acteve} uses symbolic execution to generate inputs. While these techniques can effectively explore app behaviors, they are insufficient for detecting data loss issues because they lack oracles that check for data loss. \section*{Acknowledgements} This work is partially supported by the National Natural Science Foundation of China (NSFC) under Grant No. 61972098. Ting Su is partially supported by NSFC Project No. 62072178. \balance \bibliographystyle{ACM-Reference-Format} \section{Approach} \label{sec:approach} \begin{figure}[h] \centering \includegraphics[width=0.48\textwidth]{figures/overview.pdf} \caption{Workflow of our approach.} \label{fig:overview} \end{figure} The workflow of our approach, implemented in {iFixDataloss}\xspace, is shown in Figure~\ref{fig:overview}. It comprises two steps: (1) data loss issue detection and (2) data loss issue fixing. Given an APK under test, {iFixDataloss}\xspace first uses static analysis and dynamic exploration to identify the activities in the app that have data loss issues. For each such activity, {iFixDataloss}\xspace uses a template-based approach to generate a patch that fixes the data loss issue. Lastly, {iFixDataloss}\xspace runs the app and explores the patched activity to check whether the data loss issue is correctly fixed. \subsection{Data Loss Issue Detection} \input{sections/algorithm1} Algorithm~\ref{alg:detection} outlines {iFixDataloss}\xspace's data loss issue detection procedure, which consists of two phases: \emph{static analysis} and \emph{systematic testing}. In the static analysis phase (Lines 4-5), {iFixDataloss}\xspace extracts a static activity graph of the app under test, which is used to guide exploration so as to quickly reach more activities. Additionally, {iFixDataloss}\xspace identifies the variables within the GUI that should be stored persistently. In the systematic testing phase (Lines 6-23), {iFixDataloss}\xspace runs the APK on an emulator and systematically explores the app. For each newly discovered state, {iFixDataloss}\xspace uses the predefined strategies to create data loss scenarios and tests the state under these scenarios to find data loss issues. \subsubsection{Static Analysis} The static analysis component of {iFixDataloss}\xspace is built on FlowDroid~\cite{flowdroid}, a widely used program analysis tool that can perform data flow analysis for Android apps. Using FlowDroid, {iFixDataloss}\xspace constructs a static activity transition graph and identifies persistent data within the app.
\paragraph{Static Activity Transition Graph.} The static activity transition graph in {iFixDataloss}\xspace is defined as $G=(A, E, \Sigma)$, where a node $a \in A$ is an activity, an edge $e \in E$ represents an activity transition, and a label $\sigma \in \Sigma$ on an edge represents a widget whose associated events might trigger the transition. We omit our static activity graph construction algorithm due to limited space; a minimal sketch of the graph structure and of how exploration queries it is shown below.
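The following sketch, ours rather than the tool's exact implementation, illustrates one plausible in-memory representation of $G$ and the query used by the transition-guided exploration described later in this section:

\begin{verbatim}
import java.util.*;

public class ActivityTransitionGraph {
    // An edge label: the widget whose events may trigger the transition.
    public record Edge(String targetActivity, String widgetLabel) {}

    private final Map<String, List<Edge>> adjacency = new HashMap<>();

    public void addTransition(String from, String to, String widget) {
        adjacency.computeIfAbsent(from, k -> new ArrayList<>())
                 .add(new Edge(to, widget));
    }

    // Widgets on the current activity that may lead to unvisited activities.
    public List<String> transitionWidgets(String current, Set<String> visited) {
        List<String> widgets = new ArrayList<>();
        for (Edge e : adjacency.getOrDefault(current, List.of())) {
            if (!visited.contains(e.targetActivity())) {
                widgets.add(e.widgetLabel());
            }
        }
        return widgets;
    }
}
\end{verbatim}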
\paragraph{Persistent Data Identification.} In practice, persistent data is typically saved to internal storage via three storage solutions: \texttt{SharedPreferences}, SQLite databases, and the local file system. \texttt{SharedPreferences} is a lightweight mechanism built into Android for saving and restoring data. {iFixDataloss}\xspace identifies persistent data as follows: it analyzes the app code as well as the XML files that describe the GUI to track the data flow of variables within the GUI, and it reports the variables whose values flow into invocations of APIs used to save persistent data. These APIs are shown in Table~\ref{tab:APIs}. {iFixDataloss}\xspace considers the data stored in variables matching these criteria to be persistent data; this practice is also adopted in~\cite{krerror}. A hypothetical example of such a flow is sketched below. \input{tables/APIs}
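For instance, in the following hypothetical snippet, the content of a text field flows into \texttt{SharedPreferences.Editor.putString()}, a \texttt{SharedPreferences} saving API; the data flow analysis would therefore classify the widget's value as persistent data:

\begin{verbatim}
import android.app.Activity;
import android.content.SharedPreferences;
import android.widget.EditText;

public class NoteActivity extends Activity {
    private EditText noteField; // bound in onCreate() (omitted)

    private void saveNote() {
        SharedPreferences prefs =
                getSharedPreferences("notes", MODE_PRIVATE);
        prefs.edit()
             .putString("draft", noteField.getText().toString()) // sink API
             .apply();
    }
}
\end{verbatim}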
\subsubsection{Systematic Testing} {iFixDataloss}\xspace runs the app on an emulator and performs a systematic exploration, i.e., it retrieves GUI elements and exercises them systematically. To test each state under the data loss scenarios, {iFixDataloss}\xspace implements the following modules: \begin{itemize}[leftmargin=*] \item \emph{Transition-Guided Exploration.} To quickly discover more states, {iFixDataloss}\xspace prioritizes exercising GUI elements that are more likely to trigger activity transitions. To achieve this, it queries the static activity transition graph generated in the previous step and locates the elements in the current activity that might trigger transitions. \item \emph{Input Recording.} To test each state under all three data loss scenarios we designed, {iFixDataloss}\xspace records the sequence of events that leads to a specific state. This allows {iFixDataloss}\xspace to restore an app state by replaying the recorded event sequence. \item \emph{State Identification.} To avoid testing a state repeatedly, {iFixDataloss}\xspace uniquely identifies a state by computing a hash over its widget hierarchy tree with text-box values removed (mitigating the state explosion problem). This state abstraction is widely used in Android testing work~\cite{stoat,timemachine,ape}. As shown in Lines 9-11, {iFixDataloss}\xspace skips states that have already been tested under the data loss scenarios and only tests newly discovered states. \end{itemize} As shown in Lines 14-22 of Algorithm~\ref{alg:detection}, {iFixDataloss}\xspace tests each state for data loss issues under the three scenarios and records the variables that exhibit data loss issues. The variables reported under strategies $\mathcal{P}_{rotate}$, $\mathcal{P}_{back}$, and $\mathcal{P}_{kill}$ are stored in the set $V$, and $num$ stores the total number of times that other data loss issues, such as crashes, arise. Under strategy $\mathcal{P}_{back}$, {iFixDataloss}\xspace needs to identify editable widgets, which is done by querying the set $T_{ew}$ of editable widget types. Under strategy $\mathcal{P}_{kill}$, {iFixDataloss}\xspace needs the variables that have been identified as persistent data, which are stored in $W_{pr}$. In the end, {iFixDataloss}\xspace reports $R=\langle ACT\_id, V, num\rangle$. \subsection{Data Loss Issue Fixing} In this section, we present the templates used in patch generation and discuss how patches are evaluated in {iFixDataloss}\xspace. Note that, in this work, {iFixDataloss}\xspace focuses on fixing data loss issues in which variable values are lost and leaves out issues in which a crash or hang occurs. \paragraph{Patch Templates.} We fix data loss issues using templates derived from the official Android documentation. The recommended way to fix data loss issues is to save and restore data in the \emph{proper} lifecycle methods with a \emph{proper} data saving mechanism. Following the suggestions in the documentation, we classify variables that have data loss issues into three categories and design patch templates accordingly (a code sketch in the spirit of the first template follows the list): \begin{itemize}[leftmargin=*] \begin{figure}[t] \centering \includegraphics{figures/template1.pdf} \caption{The patch template for preserving data across app runs.} \label{fig:template-cross-session} \end{figure} \item \emph{Storing values of editable widgets that need to be preserved across app runs.} This kind of data is directly input by users and needs to be saved persistently, for example, edits made while composing an email. The Android documentation suggests saving this kind of data in the \texttt{onPause()} method and restoring it in the \texttt{onResume()} method to ensure nothing is lost in case the current activity is killed~\footnote{This callback is mostly used for saving any persistent state the activity is editing, to present a ``edit in place'' model to the user and making sure nothing is lost if there are not enough resources to start the new activity without first killing this one.}. To keep \texttt{onPause()} and \texttt{onResume()} fast, we adopt the \texttt{SharedPreferences} framework to save and restore the data, since it is relatively lightweight compared to databases and file systems. The template for this category of values is shown in Figure~\ref{fig:template-cross-session}. \begin{figure}[h] \centering \includegraphics{figures/template2.pdf} \caption{The patch template for preserving data in a single app run.} \label{fig:template-single-session} \end{figure} \item \emph{Storing values of editable widgets that need to be preserved for a single app run.} Consider a user using a Calculator app: if the user has typed ``33*23+3'' in the \texttt{TextField}, this input should be preserved for the duration of the computation session but cleared for the next run. As suggested in the documentation, we save and restore this kind of data in the \texttt{onPause()} and \texttt{onResume()} methods, respectively. Since the data is only needed for a single session, we save it in the \texttt{Bundle} object used for passing data between Android activities. The template for this category of values is shown in Figure~\ref{fig:template-single-session}. \begin{figure}[h] \centering \includegraphics{figures/template3.pdf} \caption{The patch template for preserving values of non-editable widgets.} \label{fig:template-non-editable} \end{figure} \item \emph{Storing values of non-editable widgets.} Data loss issues with non-editable widgets often occur during runtime configuration changes, e.g., screen rotation. The documentation suggests saving and restoring such values using the \texttt{onSaveInstanceState()} and \texttt{onRestoreInstanceState()} methods. The corresponding template is shown in Figure~\ref{fig:template-non-editable}. \end{itemize}
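To make the first template concrete, the following sketch saves an editable widget's value in \texttt{onPause()} and restores it in \texttt{onResume()} via \texttt{SharedPreferences}. Widget and key names are illustrative; the template actually emitted by {iFixDataloss}\xspace is the one shown in Figure~\ref{fig:template-cross-session}:

\begin{verbatim}
import android.app.Activity;
import android.content.SharedPreferences;
import android.widget.EditText;

public class PatchedFormActivity extends Activity {
    private EditText emailField; // bound in onCreate() (omitted)

    @Override
    protected void onPause() {
        super.onPause();
        // Persist the value so it survives even if the process is killed.
        getSharedPreferences("ifix_state", MODE_PRIVATE)
                .edit()
                .putString("emailField", emailField.getText().toString())
                .apply();
    }

    @Override
    protected void onResume() {
        super.onResume();
        // Restore the value when the activity returns to the foreground.
        SharedPreferences prefs =
                getSharedPreferences("ifix_state", MODE_PRIVATE);
        emailField.setText(prefs.getString("emailField", ""));
    }
}
\end{verbatim}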
\paragraph{Patch Generation.} Figure~\ref{fig:patchgeneration} shows the workflow of patch generation. Given a set of variables $V$ whose values need to be preserved, {iFixDataloss}\xspace first divides them into variables storing values of editable widgets and variables storing values of non-editable widgets. The former are further divided into (1) variables whose values are used across app runs and (2) variables whose values are used in a single app run. In the end, the variables fall into the three categories shown in Figure~\ref{fig:patchgeneration}. For each category, {iFixDataloss}\xspace uses the corresponding template designed above to generate code that preserves the variables' values, and it then assembles the code for the three categories into a patch. Specifically, {iFixDataloss}\xspace converts the patch code into a set of AST nodes and adds them to the AST of the target activity's source code. \paragraph{Patch Evaluation.} To evaluate a patch, {iFixDataloss}\xspace runs the app and tests the patched activity under all three data loss scenarios to check whether any data loss issue remains, and it then systematically explores functionality related to the activity to check whether any crashes or freezes occur. If no error is found, {iFixDataloss}\xspace outputs the patch. \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{figures/patchgeneration.pdf} \caption{Workflow of patch generation.} \label{fig:patchgeneration} \end{figure} \section{Conclusion} \label{sec:conclusion} We introduced a practical technique that automatically detects and fixes data loss issues in Android apps and implemented it in a tool, {iFixDataloss}\xspace. Our extensive experiments on 66 apps show that {iFixDataloss}\xspace is effective in detecting and fixing data loss issues: it detected 374 data loss issues, 284 of which were previously unknown, and successfully generated 59 patches fixing 188 of the 374 issues. Out of the 20 submitted patches, 16 have been accepted by developers. The experiments also show that {iFixDataloss}\xspace outperforms the state-of-the-art techniques in terms of the number of detected data loss issues and the quality of the generated patches. To facilitate future research on data loss issues, we make our tool and data set publicly available at~\cite{iFixDataloss-artifact}.
Subsequently, when the back button is pressed, the onDestroy() callback method is called. Therefore, the path marked by the triangle, cannot occur\footnote{Except in the rare case that this path is explicitly implemented by developers}. For the three scenarios that occur in practice, if developers do not properly save state data, data loss issues will happen. Thus, every activity in apps should be tested for these three scenarios: \begin{itemize}[leftmargin=*] \item \emph{Scenario 1 ($S1$):} \texttt{onPause()$\rightarrow$onSaveInstanceState()$\rightarrow$onSto\\p()}. This scenario often occurs when an activity is forcibly killed due to interrupt actions, for instance, being killed by the user swiping or killed by the Android system for memory reclaiming when it stays in the background for a long time. \item \emph{Scenario 2 (S2):} \texttt{onPause()$\rightarrow$onStop()$\rightarrow$onDestroy()}. This callback method execution order occurs when an activity is destroyed by the user pressing the \texttt{Back} button or the activity finishes itself. \item \emph{Scenario 3 (S3): \texttt{onPause()$\rightarrow$onSaveInstanceState()$\rightarrow$onSto\\p()$\rightarrow$OnDestroy()}}. This scenario occurs when an activity is destroyed and recreated due to runtime configuration changes e.g., screen rotation. For this case, status data in the activity needs to be completely saved in the deconstruction and restored in the recreation. \end{itemize} \subsection{Data Loss Issue Revealing Strategies} In this section, we outline the three strategies we have designed to reveal data loss issues that occur in the three scenarios above. A strategy can be expressed as a tuple $\langle E_d, E_r, O\rangle$, in which $E_d$ indicates an event sequence that destroys the current activity, $E_r$ indicates an event sequence that leads to re-entering of a state that was destroyed, and $O$ is a testing oracle that checks whether a data loss issue occurs during the activity recreation. The testing oracle $O$ is defined as \begin{definition} A data loss issue occurs when $V=\langle v_0, v_1 ..., v_n \rangle$ is different from $V^\prime=\langle v_0^\prime, v_1^\prime ..., v_n^\prime \rangle$, where $V$ represents values of variables within the GUI and $V^\prime$ represents their values after the activity recreation process. \end{definition} The idea behind these strategies is that given an app state, we execute events that trigger activity recreation and perform a state comparison to discover data loss issues. In the following, we introduce the three strategies: \begin{itemize}[leftmargin=*] \item $\mathcal{P}_{kill}:=\langle E_{kl}, E_{rp}, O_{kl}\rangle$. Strategy $\mathcal{P}_{kill}$ is designed to reveal data loss issues in scenario $S1$. $E_{kl}$ indicates an event sequence that simulates the user swiping action that kills the app. $E_{rp}$ represents the event sequence that restarts the app and re-enters the state when the app was killed. It can be recorded during state exploration (explained in Section~\ref{sec:approach}). $O_{kl}$ represents the testing oracle checking for data loss issues that occur in scenario S1. In $O_{kl}$, $V$ indicates variable values within the GUI that should be stored across app runs. This oracle is based on the suggestions shown in the box below, which is from an Android developer documentation~\footnote{\url{https://www.geeksforgeeks.org/shared-preferences-in-android-with-examples/}}. 
As suggested in the documentation, persistent data should be stored persistently to ensure a smooth user experience, even if the app is killed or restarted. Persistent data can be identified based on data usage patterns, e.g., stored in databases, which will be further explained in Section~\ref{sec:approach}. \result{\emph{Persist data across user sessions, even if the app is killed and restarted, or the device is rebooted.}} \input{tables/EditWidget} \item $\mathcal{P}_{back}:=\langle E_{bk}, E_{nx}, O_{bk}\rangle$. Strategy $\mathcal{P}_{back}$ is designed to reveal data loss issues in scenario $S2$. $E_{bk}$ indicates the event ``Back button'' and $E_{nx}$ represents an event sequence that re-enters the state that was just destroyed by pressing the Back button. The \texttt{Back button} is a system event that can be generated at any time and the event sequence $E_{nx}$ can be recorded during state exploration. $O_{bk}$ represents the testing oracle checking for data loss issues that occur in scenario S2. In $O_{bk}$, $V$ indicates property values of \emph{editable} widgets within the GUI. This oracle is derived from the suggestions shown in the box below, which is from the official Android developer documentation~\footnote{\url{https://developer.android.com/reference/android/app/Activity\#saving-persistent-state}}. As suggested in the document, the values of editable widgets should be preserved when the Back button is pressed. Editable widgets are listed in Table~\ref{tab:editable_widgets}. \result{\emph{The user pressing BACK from your activity does not mean "cancel" -- it means to leave the activity with its current contents saved away. Canceling edits in an activity must be provided through some other mechanism, such as an explicit "revert" or "undo" option.}} \item $\mathcal{P}_{rotate}:=\langle E_{rt}, $NOOP$, O_{rt}\rangle$. Strategy $\mathcal{P}_{rotate}$ is designed to reveal data loss issues in scenario $S3$. $E_{rt}$ is a system event \texttt{screen-rotation}. When performing a screen-rotation event, the current state will be destroyed and recreated. Since activity recreation is automatically triggered in the screen rotation operation, the activity re-entering event is $NOOP$, i.e., a \texttt{do-nothing} event. $O_{rt}$ represents the testing oracle checking for data loss issues that occur in scenario S3. In $O_{rt}$, $V$ indicates values of all the widgets~\footnote{Here, only app-related properties are considered and screen-related properties such as widget bounds are excluded.} within the GUI. For this oracle, we consider property values of all widgets in the GUI because screen-rotation is a neutral event and its execution expects no change in the state. \end{itemize} Apart from the three oracles described above, {iFixDataloss}\xspace also detects other types of data loss issues with generic oracles such as crashes, and widgets disappearing. \section{Evaluation} \label{sec:evaluation} In our experimental evaluation, we seek to answer the following research questions: \begin{itemize}[leftmargin=*] \item \textbf{RQ1}: How effective is {iFixDataloss}\xspace in finding data loss issues in Android apps? \item \textbf{RQ2}: How is the quality of patches generated by {iFixDataloss}\xspace? \item \textbf{RQ3}: How useful is {iFixDataloss}\xspace in fixing real-world data loss issues in Android apps? 
\end{itemize}
\subsection{Experimental Setup}
\paragraph{Subject Apps.} We evaluated {iFixDataloss}\xspace on a data set containing 66 Android apps, constructed by merging the 48 benchmark apps used in Data Loss Detector~\cite{benchmark} and the 21 apps found to have data loss issues in LiveDroid~\cite{livedroid} (there are 7 duplicate apps in these two benchmarks), as well as 4 apps downloaded from the Google Play Store. These 4 apps were used for the investigation of data loss issues during the early stage of our project and are also included in the evaluation data set (marked by ``\#'' in Table~\ref{tab:result}). Due to limited space, we use asterisks to omit some suffix letters of app names.
\paragraph{Data Loss Issue Reporting.} The experiments detect two types of data loss issues: GUI variables with missing values (indicated with $VE$) and critical errors such as crashing, hanging, and disappearing dialogues (indicated with $CE$). For each experiment, we report the number of GUI variables that are not preserved during activity destruction and the number of critical errors. For $VE$, we further classify the variables into two categories:
\begin{itemize}[leftmargin=*]
\item \emph{True Positives (TP):} variables whose values should be preserved.
\item \emph{False Positives (FP):} variables whose values should not be preserved.
\end{itemize}
For each variable in $VE$, we manually check if the variable has a data loss issue. Specifically, we explore the app and test the activity in which the variable resides using the following procedure. We first modify the values of all editable widgets, then perform a screen rotation operation, and check if the variable remains the same before and after the screen rotation. We then repeat the procedure for the Back-button-pressing and killing scenarios. If the variable value remains the same in all three scenarios, we deem that the variable's value does not need to be preserved, i.e., the variable is a false positive. Otherwise, the variable is a true positive.
\paragraph{Comparison Tool Selection.} We compare {iFixDataloss}\xspace with LiveDroid~\cite{livedroid} and Data Loss Detector~\cite{dld}. LiveDroid is the most recent technique that prevents data loss issues in Android apps by automatically patching them. Data Loss Detector is the most recent technique that detects data loss issues in Android apps; it focuses on data loss issues caused by screen rotation and detects them by performing the screen rotation operation during testing.
\paragraph{Execution Environment.} Our experiments run on a 64-bit Windows 10 physical machine with a 2.30GHz Intel(R) Core(TM) i7-10510U CPU and 16GB RAM, and use an Android emulator to run GUI exploration in the data loss detection phase.
\input{tables/results}
The emulator is configured with 2GB RAM and the Android Nougat operating system (SDK 7.1, API level 25). For each technique in the evaluation, we use the default parameter values given on its website. For Data Loss Detector and {iFixDataloss}\xspace, we run each experiment for one hour.
\subsection{RQ1: Data Loss Issue Detection}
Table~\ref{tab:result} shows the results of {iFixDataloss}\xspace, LiveDroid, and Data Loss Detector after running on the 66 Android apps. Column ``T'' indicates the total number of detected data loss issues. Columns ``TP'' and ``FP'' indicate whether the reported issues are true positives or false positives.
It is worth noting that TP and FP for DLD are not shown in Table~\ref{tab:result} because, in the comparison, we focused on determining whether a variable value detected by the tools is supposed to be preserved (described in Section~6.1). DLD only detects data loss issues and does not specify variable values that should be preserved, and thus has no TP or FP columns. Column ``CE'' indicates the number of critical errors detected by {iFixDataloss}\xspace. There are 14 apps in the data set on which LiveDroid failed to run due to compatibility issues, which are denoted by ``-'' in the table.

\emph{Results.} {iFixDataloss}\xspace detected 374 data loss issues in the 66 Android apps. 188 of these 374 issues are GUI variables with missing values, and none of them were false positives. The remaining 186 issues are critical errors. Our investigation shows that, out of the 374 data loss issues, 284 were previously unknown. In comparison with the state-of-the-art techniques, {iFixDataloss}\xspace detected the most data loss issues, followed by DLD (152) and LiveDroid (43). Regarding false positives, LiveDroid detected 296 GUI variables with missing values, but 253 of them are false positives. LiveDroid reports a large number of false positives because it uses static analysis to reason about variables whose values might change and thus suffers from the low precision of static analysis.

We further compare the data loss issues detected by {iFixDataloss}\xspace and the state-of-the-art techniques. As shown in Figure~\ref{fig:intersection}, {iFixDataloss}\xspace detected all of the 43 issues that were detected by LiveDroid and 49 of the 152 issues that were detected by Data Loss Detector. Two possible reasons why {iFixDataloss}\xspace could not detect all the issues are: (1) Data Loss Detector adopts screenshot-based oracles and reports a data loss issue whenever a difference is detected between the screenshots taken before and after screen rotation. Therefore, it tends to report false positives; e.g., animations in an app page can cause differences in the screenshots even if there are no data loss issues in the page. (2) {iFixDataloss}\xspace adopts a state exploration strategy different from Data Loss Detector's and may miss certain states that were covered by Data Loss Detector. In total, however, {iFixDataloss}\xspace detected many more data loss issues than Data Loss Detector.
\result{{iFixDataloss}\xspace detected 374 data loss issues in the 66 Android apps. 284 out of the 374 issues were previously unknown. {iFixDataloss}\xspace significantly outperforms the state-of-the-art techniques in terms of the number of detected data loss issues.}
\begin{figure}[h]
\centering
\includegraphics[width=0.35\textwidth]{figures/intersection.pdf}
\caption{Comparison of the data loss issues detected by {iFixDataloss}\xspace and the other two detection tools.}
\label{fig:intersection}
\end{figure}
\subsection{RQ2: Patch Quality}
\input{tables/patchtype}
We evaluate the quality of the patches generated by {iFixDataloss}\xspace and LiveDroid based on two criteria. First, we check if a patch successfully fixes the data loss issues without introducing new errors. For each patch, we run the patched app on an emulator to check if the data loss issue is fixed. Specifically, we manually test the patched activity in data loss scenarios and examine if the data loss issue still occurs.
To ensure no new errors are introduced, we explore functionalities related to the patched activity and check if the app misbehaves, e.g., crashes or GUI elements disappear. Second, we check if the patch produces any false positives, i.e., GUI variables whose values should not be saved but are saved because of the patch. Furthermore, we classify patches into four categories:
\begin{itemize}[leftmargin=*]
\item \emph{Type 1.} The patch fixes all data loss issues without preserving false positives;
\item \emph{Type 2.} The patch fixes all data loss issues but also preserves some false positives;
\item \emph{Type 3.} The patch fixes some but not all data loss issues and also preserves some false positives;
\item \emph{Type 4.} The patch only preserves false positives.
\end{itemize}
Note that {iFixDataloss}\xspace is a fully automated tool and can evaluate patches automatically, as shown in Section~\ref{sec:approach}. In the experimental evaluation, we manually explore patched activities at runtime only to evaluate the quality of the patches generated by {iFixDataloss}\xspace and LiveDroid.
\paragraph{Results.} {iFixDataloss}\xspace generated 59 patches to address the 188 issues involving missing variable values that it detected. Note that an activity may have multiple variables with data loss issues; in such cases, {iFixDataloss}\xspace generates one patch to preserve the values of all variables that exhibit data loss issues. As shown in Table~\ref{tab:PatchType}, all of the 59 patches generated by {iFixDataloss}\xspace fall into Type 1, i.e., all 188 issues are fixed without preserving variable values that should not be preserved. In comparison, LiveDroid detected 296 issues involving variables with missing values and generated 44 patches in total to address them. As shown in Table~\ref{tab:PatchType}, 7 of the 44 patches fall into Type 1, i.e., only 7 patches fix the data loss issues without saving and restoring variable values that should not be preserved. The remaining patches have the \emph{over-saving} problem, i.e., they save and restore variable values that should not be preserved. Out of the 296 variable values identified by LiveDroid, 253 are false positives, i.e., unnecessarily preserved. Overall, 85\% (253/296) of the variable values are values that should not be saved and thus exhibit the over-saving issue.
\result{{iFixDataloss}\xspace generated 59 patches that successfully fixed 188 missing-variable-value issues without preserving false positives (variable values that should not be preserved). {iFixDataloss}\xspace outperformed the state-of-the-art technique LiveDroid in terms of the number of preserved false positives.}
\input{tables/PR}
\subsection{RQ3: Usefulness}
To evaluate how useful {iFixDataloss}\xspace is in practice, we selected the 10 apps in the data set that had been updated most recently and submitted all the patches for these apps (20 in total) to their developers, creating 20 pull requests for the 10 apps on GitHub. At the time of writing, out of the 20 pull requests, 16 have been accepted with very positive comments:
\begin{itemize}[leftmargin=*]
\item ``Looks good - thanks again''
\item ``Super thank you for the contribution''
\item ``Thank you very much for this nice contribution. It looks really cool, overall.''
\item ``Thank you! I have tested this and merged by rebasing into the master.''
\item \ldots
\end{itemize}
We have not yet received a response for the remaining 4 pull requests.
The details of the 20 patches are shown in Table~\ref{tab:PR}. As in Table~\ref{tab:result}, we use asterisks to omit some suffix letters of app names due to space limitations.
\result{Out of the 20 pull requests with patches generated by {iFixDataloss}\xspace, 16 have been accepted with positive comments.}
\subsection{Threats to Validity}
The main threats to external validity lie in the selection of the apps. {iFixDataloss}\xspace is evaluated on 66 Android apps, and our results may not generalize beyond these apps. To mitigate this threat, we chose apps from the benchmarks of two works in the literature, Data Loss Detector and LiveDroid. Threats to internal validity stem from our experimental methodology and may affect our results. We manually explore apps to reproduce the data loss issues reported in the experiments and may fail to explore certain functionalities, which might affect our results. We also performed some manual checks during the evaluation, which are potentially error-prone. To minimize this threat, two people performed the manual checks and compared the experimental results to check for discrepancies. We also realise that the false positive rate of LiveDroid reported in our experiments differs from the rate reported in the LiveDroid paper. We checked with the authors of LiveDroid on this matter; they explained that they also consider non-UI property values in the calculation of their false positive rate. Non-UI property values do not apply to our scenarios, which results in a different false positive rate. We used the default parameters when running LiveDroid and DLD. For LiveDroid, we tried running each app with different parameters and found no difference in results compared to the default parameters. For DLD, we tried running each app with different parameters, except for the runtime length which we kept at 1 hour, and likewise found no difference in results. Two people ran these experiments to ensure that the results were consistent. Based on this, we concluded that running the experiments with default parameters did not affect the results.
\section{Implementation}
\label{sec:impl}
{iFixDataloss}\xspace is implemented as a fully automated data loss detection and fixing framework, which reuses or extends a set of off-the-shelf tools: Apktool~\cite{Apktool}, FlowDroid~\cite{flowdroid}, UI Automator~\cite{UiAutomator}, Android Debug Bridge (ADB)~\cite{adb}, and JavaParser~\cite{Javaparser}. Apktool is used to decompile an APK and extract its XML files. FlowDroid is extended to build the activity graph of an app and perform data flow analysis. UI Automator is used to dump the GUI layout and perform app state exploration. ADB is used to simulate the three revealing actions (i.e., kill, back, and rotate). During patch generation, we use JavaParser to parse the source code of apps and generate patches.
\section{Introduction}
\label{sec:intro}
A good user experience is essential for the success and popularity of mobile apps. Conversely, a poor user experience can make an app unpopular even if it provides useful functionality. Data loss is one of the prominent frustrations users can experience with mobile apps. Imagine a user who is filling out a form with many fields and has almost completed it after tediously typing through a tiny virtual keyboard.
She or he might mistakenly touch the \emph{Back} button or be forcibly switched away from the app, causing all the input data to be lost. This occurs when an app page is destroyed by the mobile operating system, for example, when the operating system needs to conserve resources, when it determines that the app page is no longer needed (e.g., the user exits via the \emph{Back} button), or when the app has stayed in the background for a long time. In such cases, if the app does not properly save user data, data loss issues occur. Recent studies show that data loss issues are extremely pervasive: Amalfitano et al.~\cite{amalfitano} reported that 60 out of the 68 studied apps (88.2\%) had data loss issues.

Fixing data loss issues automatically is non-trivial. A straightforward way to fix them is to save the values of all the state variables in an app page before it is destroyed and restore these values when the page is revisited. The problem with this solution is data \emph{over-saving}. An app page may comprise a large amount of data that is irrelevant to user input (e.g., widgets displaying text). Saving this irrelevant data could slow down the app because such data saving occurs on the UI thread. As suggested in the Android documentation~\footnote{\url{https://developer.android.com/training/articles/perf-anr\#Avoiding}}, ``any method that runs on the UI thread should do as little work as possible'' to avoid UI sluggishness. Additionally, over-saving can lead to app misbehavior. For instance, suppose a \texttt{TextView} on an app page displays how many times the user has visited the page. Assume the current value of the \texttt{TextView} is 5 and is saved before the app page is destroyed. When the user revisits the page, the value 5 is retrieved and displayed, which is confusing since the user expects 6. These have been identified as crucial problems in fixing data loss issues in recent works~\cite{livedroid,krerror}.

A recent work, LiveDroid~\cite{livedroid}, uses static analysis to reason about program variables and GUI properties that might be changed during user interactions and generates patches to save and restore their values at runtime to avoid data loss issues. Although a significant portion of variables is ruled out by the static analysis, LiveDroid still suffers from the over-saving issue. As reported in the paper, it generates too many false positives, i.e., it preserves variable values that should not be preserved. This reduces app performance and responsiveness due to the cost of saving unnecessary data.

In this paper, we present a technique, called {iFixDataloss}\xspace, which can automatically detect and fix data loss issues in Android apps while eliminating the over-saving issue. The key insight of our technique is that the scenarios in which data loss issues occur can be simulated by generating a particular event or composed events during testing. For instance, screen rotation, one of the most frequent data loss scenarios, can be triggered by executing an orientation change event at app runtime. Thus, we can detect data loss issues by testing each app page for these data loss scenarios and checking if data is lost. To fix data loss issues, {iFixDataloss}\xspace only preserves data that exhibits loss issues during testing, thereby avoiding the saving of unnecessary data. To minimize unnecessary data saving, we further identify data related to user input (e.g., values of \texttt{TextField} widgets) and only save this kind of data in the patches (see the sketch below).
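As a simplified illustration of this idea, the sketch below (our own, not the actual patch template generated by {iFixDataloss}\xspace; the class, widget, and key names are hypothetical) preserves a single user-input value in two ways: in the instance-state \texttt{Bundle} for data needed only in the current run, and in \texttt{SharedPreferences} for data that must survive across app runs. The distinction between these two kinds of data is elaborated below.
\begin{verbatim}
import android.app.Activity;
import android.os.Bundle;
import android.widget.EditText;

public class DraftActivity extends Activity {
    private EditText emailField; // hypothetical user-input widget

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_draft);
        emailField = (EditText) findViewById(R.id.email);
    }

    @Override
    protected void onSaveInstanceState(Bundle outState) {
        super.onSaveInstanceState(outState);
        // Current-run data: kept in the Bundle and restored on
        // recreation (e.g., after a screen rotation).
        outState.putString("email", emailField.getText().toString());
    }

    @Override
    protected void onPause() {
        super.onPause();
        // Cross-run (persistent) data: written to SharedPreferences so
        // the unsaved draft survives even if the app process is killed.
        getSharedPreferences("draft", MODE_PRIVATE)
                .edit()
                .putString("email", emailField.getText().toString())
                .apply();
    }
}
\end{verbatim}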
Specifically, {iFixDataloss}\xspace combines static and dynamic analysis to detect data loss issues. It first builds an \emph{activity transition graph} by performing static analysis on the app under test. Guided by this graph, {iFixDataloss}\xspace mimics user actions to exercise the app with a guided exploration strategy that steers the exploration towards app pages that may be affected by data loss issues. {iFixDataloss}\xspace tests each app page by executing a set of predefined events that generate data loss scenarios. Based on the Android activity lifecycle, we define a set of events or composed events that cover the possible data loss scenarios; thus, {iFixDataloss}\xspace can find data loss issues more thoroughly. In contrast, existing techniques~\cite{dld,livedroid} only cover a portion of those scenarios.

For patch generation, {iFixDataloss}\xspace fixes not only data loss issues that impact the current app run but also issues that impact users across multiple runs. Most of the time, data loss issues only affect app usage in the current run, e.g., certain \texttt{TextView} values on the screen are lost after a change in screen orientation. This kind of data typically only needs to be preserved for the current run and is no longer needed after the run terminates. For such data, {iFixDataloss}\xspace generates patches that save it in memory and retrieve it when needed. In certain cases, lost data needs to be preserved across runs; we call this persistent data. For example, a membership registration form that has been fully filled in by the user but not yet submitted still needs to be saved even after the app is killed. For this kind of data, {iFixDataloss}\xspace generates patches that save the data using a storage method that can be restored across app runs. To achieve this, we develop a strategy based on user data usage patterns to distinguish these two kinds of data. In other words, {iFixDataloss}\xspace fixes not only data loss issues that occur in a single app run but also data loss issues across app runs. By contrast, the existing technique LiveDroid~\cite{livedroid} only fixes data loss issues that occur in a single app run and cannot fix data loss issues across multiple runs.

Our experiments show that {iFixDataloss}\xspace is effective in both detecting and fixing data loss issues in Android apps. We evaluated {iFixDataloss}\xspace on 66 Android apps and detected 374 data loss issues, 284 of which were previously unknown. {iFixDataloss}\xspace outperforms the recent data loss detection techniques DLD~\cite{dld} and LiveDroid~\cite{livedroid} in terms of the number of detected data loss issues. It also outperforms the data loss issue fixing tool LiveDroid~\cite{livedroid} in terms of the quality of the generated patches, i.e., the number of over-saved variable values in patches. {iFixDataloss}\xspace successfully generated patches to fix 188 of the issues. For previously unknown issues, we submitted 20 patches generated by {iFixDataloss}\xspace to developers; 16 of these 20 have been accepted with very positive feedback. Overall, our contributions can be summarized as follows:
\begin{itemize}
\item We identify scenarios in which data loss may occur based on the Android activity lifecycle and design strategies to reveal data loss issues. Further, we develop patch templates for fixing data loss issues.
We also implement our approach in a fully automated tool, {iFixDataloss}\xspace, for detecting and fixing data loss issues in Android apps.
\item We performed an extensive experiment in which we found 374 data loss issues in 66 Android apps, 284 of which were previously unknown, and successfully generated patches for 188 of the 374 issues. Out of 20 submitted patches, 16 have been accepted by developers.
\item To facilitate future research, we make our prototype tool {iFixDataloss}\xspace and the data set used in the experiments available at \url{https://github.com/iFixDataLoss/iFixDataloss22}.
\end{itemize}
\section{Motivating Example}
\label{sec:example}
\begin{figure}[t]
\centering
\subfigure[before pressing Back]{
\label{fig:example1}
\begin{minipage}{0.44\linewidth}
\centering
\hspace{-0.34cm}\includegraphics[width=1.6in]{figures/accountdetails1.png}
\end{minipage}
}
\quad
\subfigure[after pressing Back and returning]{
\label{fig:example2}
\begin{minipage}{0.44\linewidth}
\centering
\hspace{-0.34cm}\includegraphics[width=1.6in]{figures/accountdetails2.png}
\end{minipage}
}
\caption{Screenshots of the CycleStreets activity described in the example.}
\label{fig:motivation}
\end{figure}
In this section, we describe a data loss issue in the popular mobile app CycleStreets (196 stars on GitHub and over 100K downloads on Google Play), as well as the inability of existing techniques to detect and fix this issue. We then explain how it is detected and fixed by {iFixDataloss}\xspace.
\paragraph{Data Loss.} CycleStreets~\footnote{\url{https://www.cyclestreets.net/mobile/android/}} is a cycle journey planner app that is widely used in the UK. {iFixDataloss}\xspace found a data loss issue on its account registration page. Figure~\ref{fig:example1} shows the account registration process, which involves filling out the form with personal information such as user name and email address. To finish registration, the user then clicks the \emph{Register} button to submit the completed form. Upon investigation, there is a data loss issue on this page, which can be triggered by the following steps: (1) fill out the form without clicking the \emph{Register} button; (2) press the \emph{Back} button; (3) return to the registration page. As shown in Figure~\ref{fig:example2}, all the data filled in is lost after returning to the registration page. This issue frustrates users, since refilling the data is tedious and time-consuming.
\paragraph{Challenges.} There are difficulties in both detecting and fixing this data loss issue. Detecting such issues requires an oracle that checks whether user data is lost during testing. However, most existing mobile app testing tools~\cite{sapienz,stoat,ape,timemachine,gesda,combodroid} can only detect crashes in apps and thus cannot be used to find and fix data loss issues. As mentioned earlier, the recent tool DLD~\cite{dld} is capable of detecting data loss issues. Unfortunately, DLD mainly detects data loss issues that occur when the device orientation is changed and fails to detect the issue in the example. Automatically fixing data loss issues is non-trivial as well. The app page in the example comprises 17 widgets, each containing more than 15 properties, amounting to 311 variable values in total. Restoring all of these values when entering the page can significantly slow down the app.
We ran an experiment 10 times to measure the cost of saving and restoring all these variable values. On average, saving and restoring the 311 variable values takes 300ms, which is over 3 times the acceptable response time given in the Android documentation~\footnote{\url{https://developer.android.com/training/articles/perf-anr\#Reinforcing}}. Moreover, this time only accounts for saving and restoring; in a real-world app, additional processing could further increase the response time beyond the acceptable limit. To save less data, the state-of-the-art technique LiveDroid~\cite{livedroid} leverages static analysis to reason about variable values in GUIs that might be changed during user interaction and saves only those. In total, LiveDroid preserved 166 variable values to fix this issue. Despite reducing the amount of data being saved, LiveDroid still exhibits an \emph{over-saving} issue: the majority of the 166 variable values are unnecessarily saved, since they are initialization values that never change during user interactions, e.g., \texttt{resource-id} and \texttt{content-desc}.
\paragraph{Our approach.} The tool {iFixDataloss}\xspace, in which our approach is implemented, detected and fixed this issue via the following steps:
\begin{itemize}[leftmargin=*]
\item \emph{Data Loss Issue Detection.} {iFixDataloss}\xspace explores the app on an emulator and tests each discovered app page in the data loss scenarios that we define based on the Android documentation. One of the scenarios is \texttt{Back-ReEntering}, i.e., exiting an app page by pressing the \texttt{Back} button and re-entering the page. In the example, {iFixDataloss}\xspace found that data filled in on the registration page was lost after the execution of the \texttt{Back-ReEntering} scenario and thus reported the issue. During detection, {iFixDataloss}\xspace not only reports a data loss issue in an app page but also records the variable values that are lost in a data loss scenario. Specifically, {iFixDataloss}\xspace modifies the values of variables on the page that may store user data, such as \texttt{TextField} values, before generating a data loss scenario. If the value of a variable is changed in the data loss scenario, {iFixDataloss}\xspace deems that the variable has a data loss issue and that its value needs to be saved. In the example, {iFixDataloss}\xspace found that the values of five \texttt{TextField} widgets (marked in red in Figure~\ref{fig:example2}) were changed in the \texttt{Back-ReEntering} scenario, and so the resource IDs of these widgets were recorded for patch generation in the next step.
\item \emph{Patch Generation.} {iFixDataloss}\xspace uses a template-based approach to generate patches. One template is designed to fix the kind of data loss issue that occurs in the example. With this template, {iFixDataloss}\xspace generated a patch to fix this issue, in which only 5 variable values are preserved (shown in Figure~\ref{fig:patch}).
\item \emph{Patch Evaluation.} {iFixDataloss}\xspace evaluates the generated patch by testing the patched APK. If data loss issues no longer exist in the corresponding app page and no crashes or freezes are found while exploring functionalities related to the page, {iFixDataloss}\xspace reports the issue as fixed. In the example, {iFixDataloss}\xspace found neither data loss issues nor crashes or hangs in the evaluation and thus determined that the patch successfully fixed the issue.
Furthermore, we created a pull request with the generated patch on the GitHub repository of CycleStreets, and the pull request was accepted.
\end{itemize}
In summary, {iFixDataloss}\xspace successfully detected and fixed the data loss issue in the example. In the generated patch, only five widget values are saved and restored, and no unnecessary data is saved. The issue has been confirmed by the developers of CycleStreets, and the generated patch has been accepted as well.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{figures/patchexample.pdf}
\caption{The patch generated by {iFixDataloss}\xspace for fixing the data loss issue in the example.}
\label{fig:patch}
\end{figure}
\section{Related Work}
\label{sec:relatedwork}
\paragraph{Data Loss Issue Fixing.} A few research efforts attempt to automatically fix data loss issues in Android apps. A recent work, LiveDroid~\cite{livedroid}, leverages static analysis to identify program variable values and GUI properties that may change during user interactions and preserves them across app life cycles to avoid data loss issues. Due to ``infeasible'' paths introduced by the static analysis, LiveDroid tends to report false positives. Furthermore, LiveDroid only handles data loss issues across app life cycles, such as activity recreation, and does not handle data loss issues across app runs. RuntimeDroid~\cite{runtimedroid} uses an online resource loading module to update GUI elements when certain configurations change at runtime, avoiding activity restarting. By contrast, {iFixDataloss}\xspace fixes data loss issues by preserving variable values whose loss has been witnessed during testing and restoring them in data loss scenarios; thus, {iFixDataloss}\xspace generates no false positives. Apart from fixing data loss issues across app life cycles, {iFixDataloss}\xspace can also fix data loss issues across app runs.
\paragraph{Automated Program Repair for Android Apps.} Several existing works focus on fixing other types of issues in Android apps. Droix~\cite{droix} employs a search-based approach to generate patches that fix crashes in Android apps. AppEvolve~\cite{appevolve} analyzes existing updates in other apps to generate patches that fix issues in Android apps caused by API evolution. METER~\cite{meter} leverages computer vision techniques to fix broken GUI test scripts during app evolution. SapFix~\cite{sapfix}, an automated program repair tool deployed at Facebook, generates patches for more types of bugs in mobile apps with templates created by human engineers based on previous bug fixes. Compared to those works, {iFixDataloss}\xspace focuses on fixing data loss issues in Android apps and can complement those tools to fix more types of issues.
\paragraph{Data Loss Issue Detection.} Similar to {iFixDataloss}\xspace, the data loss issue detection tools DLD~\cite{dld} and ALARic~\cite{alaric} exercise app pages by executing a screen rotation action and detect data loss issues by checking for differences in the GUI before and after rotation. Thor~\cite{thor} augments existing test suites with neutral sequences of operations to reveal more failures. The injected event sequences, which create adverse conditions such as disconnecting the internet and turning off audio services, may reveal data loss issues.
Quantum~\cite{quantum} tests Android apps by generating test cases injected with series of operations that are more likely to reveal failures, based on a study of previous bugs (e.g., zooming in and out). SetDroid~\cite{setdroid} executes test cases under different system settings to find system-setting-related failures. Despite their ability to find data loss issues, these techniques can only find a portion of them. By contrast, we design operations that cover all kinds of scenarios in which data loss issues occur, based on the Android lifecycle, and use them to discover more types of data loss issues. More importantly, {iFixDataloss}\xspace can not only find data loss issues but also fix them.
\paragraph{Automated Android App Testing.} Another rich branch of research focuses on generating test inputs for Android apps. For instance, Sapienz~\cite{sapienz} uses evolutionary algorithms to generate test inputs that achieve higher code coverage. TimeMachine~\cite{timemachine} saves app states that have the potential to trigger new program behaviors and prioritizes exploring them to discover more app behaviors. Stoat~\cite{stoat}, APE~\cite{ape}, and DroidBot~\cite{droidbot} leverage built models to guide input generation. A3E systematically generates inputs following a depth-first strategy. SwiftHand~\cite{swifthand} uses machine learning to learn a model that is used to guide input generation. ACTEve~\cite{acteve} uses symbolic execution to generate inputs. While these techniques can effectively explore app behaviors, they are insufficient for detecting data loss issues due to a lack of oracles that check for such issues.
{ "timestamp": "2022-09-20T02:23:40", "yymm": "2209", "arxiv_id": "2209.08719", "language": "en", "url": "https://arxiv.org/abs/2209.08719" }
\section{Introduction}
Intelligent devices, such as mobile phones, wearable devices, and autonomous cars, generate massive amounts of data every day. These data have a wide range of applications, for instance, next-word prediction \citep{konevcny2016federated}, scheduling traffic to avoid congestion \citep{accettura2013decentralized}, smoke detection \citep{khan2019energy}, and building health monitoring \citep{wu2020fedhome,scuro2018iot}. In general, a graph structure exists among devices \citep{bello2014intelligent,atzori2010internet}. To accomplish certain tasks, different devices can communicate with each other along the edges of such a graph, in addition to performing local computation. In fact, concerns about private information leakage \citep{voigt2017eu}, coupled with the increasing computing power of devices, have made it common practice to store data as well as train algorithms locally and transmit only parameters. Besides, heterogeneity among devices naturally arises. For instance, the storage, computing, and communication capabilities, and even the data distribution, of each device may differ, and not all devices are accessible for a real-time training procedure \citep{li2020federated,srinidhi2019network}. These issues pose fundamental challenges to conventional distributed learning methodologies.

Distributed statistical methods commonly presume that a large dataset is randomly split into $K$ subsets stored on local devices that are connected to a central machine \citep{gao2021review}. Inspired by the idea of divide-and-conquer, small tasks are solved on local devices in parallel, and local results are aggregated on the central machine to produce the final result \citep{Qin2022selective}. Most of these approaches are statistically efficient and only require one round of communication, but are limited by the condition $K=o(\sqrt{N})$, where $N$ denotes the total sample size; see, e.g., \citet{JMLR:v14:zhang13b}, \citet{battey2018}, and \citet{fan2019distributed}, among others. Although several methods have been proposed to relax the condition on the number of local devices \citep{fan2021,Jordan1029communication}, this line of work essentially requires that the task on each local device can be solved exactly, which can be problematic for devices with limited computing power. This issue is even more pronounced for streaming data and/or for parameters required to be updated in real time, e.g., automated cars \citep{Zhang2021real}. To accommodate these situations, \cite{stich2019local} proposed local stochastic gradient descent (SGD), which runs SGD independently in parallel on each device and averages the aggregated parameters on the central machine only once in a while. The iterations at which parameter averaging takes place are referred to as `global synchronization', and the others are called `local updates'. The number of global synchronizations, i.e., the communication cost, can be much smaller than the total number of iterations needed for algorithmic convergence. However, the aforementioned methods are all confined to homogeneous setups.

Federated learning \citep{konecny2016federated} is a machine learning technique tailored to the heterogeneity problem \citep{kairouz2019advances,zhang2020convergence}. One fundamental algorithm, federated averaging \citep{pmlr-v54-mcmahan17a}, generalizes local SGD in terms of communication heterogeneity by allowing certain devices to be randomly unavailable at each iteration.
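For concreteness, local SGD can be sketched as follows (a schematic summary, not the exact formulation of the cited works): with step sizes $\eta_t$, stochastic gradients $g_u$ computed on device $u$, and a set $\mathcal{S}$ of synchronization rounds,
\[
\boldsymbol{\theta}_u^{(t+1)} =
\begin{cases}
\boldsymbol{\theta}_u^{(t)} - \eta_t\, g_u\big(\boldsymbol{\theta}_u^{(t)}\big), & t \notin \mathcal{S} \quad \text{(local update)},\\[4pt]
\dfrac{1}{K}\displaystyle\sum_{v=1}^{K}\Big(\boldsymbol{\theta}_v^{(t)} - \eta_t\, g_v\big(\boldsymbol{\theta}_v^{(t)}\big)\Big), & t \in \mathcal{S} \quad \text{(global synchronization)}.
\end{cases}
\]
Federated averaging modifies the synchronization step by averaging only over the devices that happen to be available at round $t$.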
Notably, for both local SGD and federated averaging, the graph whose edges represent the pathways along which information can be transmitted is star-shaped, with each device connected to a central machine. Due to limited bandwidth, it is increasingly impractical to send local parameters to a central machine as the network size expands \citep{Pavel2017mobile}. To remove the central machine and also adapt to non-i.i.d.\ data, \cite{koloskova2020unified} proposed a decentralized SGD that samples random connected graphs at each iteration, where the parameters of the neighbors in a simulated random graph, instead of those of all devices, are averaged at the synchronization step. This kind of algorithm is also known as a gossip algorithm \citep{boyd2006randomized}, which is communication efficient since only connected devices interact with each other at each iteration.

Although these inspiring algorithms can be computed effectively, device heterogeneity therein lacks a rigorous statistical formulation. For example, both federated averaging and decentralized SGD are designed for the situation where the parameters of all devices are the same. However, parameter equality indeed implies distributional identity if the parameters are identifiable; see \citet[Chap.5]{van2000asymptotic} for details. Relating heterogeneous data distributions to parameter estimation, we can measure how well algorithms deal with heterogeneity in terms of a metric between the estimators and the true parameter \citep{cai2021shir}. This line of work commonly assumes that the effects of covariates on outcomes can be decomposed into a common effect shared by all local devices and device-specific effects that explain heterogeneity, e.g., \citet{Zhao2016partial} and \citet{duan2021heterogeneity}. Under this parametrization of heterogeneity, the covariates giving rise to device-specific effects need to be known, which demands strong prior knowledge. Moreover, the algorithms therein cannot be executed in real time, which makes them mostly suitable for multi-center research \citep{sidransky2009multicenter} instead of federated learning.

In this work, we encode data heterogeneity with an unknown graph $G_0=(V,E_0)$, where $V$ consists of all devices as nodes and $G_0$ is defined as a collection of multiple disjoint cliques. Each clique essentially defines a cluster of devices sharing the same distribution identified by some unknown $p$-dimensional parameter. We further assume that a graph $G=(V,E)$ is given a priori, with possibly $E\neq E_0$, whose edges not only represent communication pathways but also reveal certain similarities between connected devices. In social networks \citep{scott1988social,wasserman1994social}, the property that linked nodes act similarly is known as network cohesion \citep{li2019prediction}, a phenomenon observed in numerous social behavior studies \citep{christakis2007spread,fujimoto2012social}.
We promote the estimators on connected devices to be equal via the following $M$-estimation penalized by the fused Lasso:
\begin{align}\label{eq:penalized_m}
\min_{\boldsymbol{\theta}_u\in \boldsymbol{\Xi}, u\in V} \frac{1}{|V|}\sum_{u\in V}\frac{1}{n_u}\sum_{k=1}^{n_u}m_u(\mathbf{z}_k^{(u)};\boldsymbol{\theta}_u)+ \lambda\sum_{(i,j)\in E}\phi(\boldsymbol{\theta}_i-\boldsymbol{\theta}_j),
\end{align}
where $m_{u}(\cdot;\boldsymbol{\theta}), u\in V$, denote some known functions, $\boldsymbol{\Xi}\subset \mathbb{R}^p$ denotes the parameter space, which is compact, $\{\mathbf{z}_k^{(u)}\}_{k=1}^{n_u}$ denotes the dataset observed on device $u$, and $\phi(\cdot)$ is some norm defined on $\mathbb{R}^p$. Notably, a wide range of statistical models, including (generalized) linear models, Huber regression, and maximum likelihood estimation, are covered by choosing the corresponding $m_{u}(\cdot;\boldsymbol{\theta}), u\in V$.

We propose Fed-ADMM, a decentralized stochastic version of the alternating direction method of multipliers (ADMM) algorithm, to solve \eqref{eq:penalized_m}. Quite different from federated averaging, our algorithm does not require a central machine, and parameters are transmitted device-to-device. Our algorithm does not require a high computational capacity of local devices either; at each iteration, only a mini-batch of samples is processed on each device. We prove that it achieves an $O(T^{-1}\log T)$ convergence rate to the global minimizer of \eqref{eq:penalized_m}, where $T$ denotes the total number of iterations. We further show that our algorithm attains the same convergence rate under malicious random blocking of devices during the real-time optimization process. Moreover, we provide a deterministic theorem on the consistency of the global minimizer of \eqref{eq:penalized_m}. Interestingly, the estimation error can be decomposed into two components: the averaged variance of the $M$-estimation without regularization, arising from the data fidelity term in \eqref{eq:penalized_m}, and the bias introduced by wrongly shrinking $\boldsymbol{\theta}_i -\boldsymbol{\theta}_j, (i,j)\in E\setminus E_0$, towards zero due to the fused regularization. In certain specific cases, we further obtain probabilistic results for the estimation error. Our results show that, if the given graph $G$ is close enough to $G_0$, the global minimizer of \eqref{eq:penalized_m} performs optimally, as if we could aggregate the whole data set and knew which devices share the same distribution. For the case where $G$ provides too much misleading information, we propose an adaptive edge selection procedure based on multiple testing to ensure optimality.

\subsection{Related Work}
Our work is closely related to personalized federated learning, which, vaguely speaking, aims to personalize global models so that they perform better on individual devices. There is a growing body of literature on personalized federated learning. \cite{mansour2020three} proposed to cluster similar devices first and then apply federated averaging within each cluster. Taking the perspective of transfer learning, \cite{wang2019federated} proposed to train a global model first and then fine-tune some or all of its parameters on local devices. \cite{jiang2019improving} borrowed ideas from model-agnostic meta learning to deal with personalized federated learning problems. \cite{smith2017federated} proposed MOCHA to produce personalized solutions in light of multi-task learning.
For a more comprehensive review, see \cite{kulkarni2020survey}. These methods, however, either serve as pure algorithmic solutions that lack rigorous statistical analyses, or require strong computing power of local devices.

Another line of work related to this article is the network/fused Lasso \citep{hallac2015network,tibshirani2005sparsity,rudin1992nonlinear}. This line of work presumes that parameters are sparse over a given graph/network. \cite{hutter2016optimal} derived a sharp convergence rate for Gaussian mean estimation with total variation regularization. \cite{hallac2015network} used distributed ADMM to solve optimization problems of the same form as \eqref{eq:penalized_m}. Our work generalizes \cite{hutter2016optimal} to general $M$-estimation and \cite{hallac2015network} to a stochastic federated setting. \cite{richards2021distributed} considered the case where each node is associated with a sparse linear model, and two nodes are linked if the difference of their solutions is also sparse. They required the underlying graph to be a tree and the sparsity of the differences across nodes to be smaller than the sparsity at the root. We, however, consider arbitrary graphs.

\subsection{Organization of This Paper}
The rest of the paper is organized as follows. Some useful notation, the problem setup, and related assumptions are discussed in Section \ref{sec:pre}. Section \ref{sec:method} contains the details of our method, including statistical guarantees for our estimators and an edge selection procedure based on multiple testing. In Section \ref{sec:optim}, we introduce Fed-ADMM together with its extension and show their algorithmic consistency. Section \ref{sec:simulation} presents simulations, and a real-world data analysis is included in Section \ref{sec:real}.

\section{Preliminaries}\label{sec:pre}
We first introduce some notation used in this article.
\subsection{Notation}
For any set $S$, we denote by $|S|$ its cardinality. We denote by $G=(V,E)$ a graph with node set $V=\{1,\ldots,|V|\}$ and edge set $E$. For any $e=(i,j)\in E$, we let $e^+=\max\{i,j\}$ and $e^-=\min\{i,j\}$. The signed incidence matrix with respect to $E$ is denoted by $\bm{D}\in\{-1,0,1\}^{|E|\times |V|}$, whose $(e,i)$-th entry is $D_{e i}=\mathbf{1}\{i = e^+\} - \mathbf{1}\{i = e^-\}$ for $e\in E$ and $i\in V$, where $\mathbf{1}\{\cdot\}$ denotes the indicator function. We let $\bm{D}^{\dag}$ be the Moore--Penrose inverse of $\bm{D}$. We denote by $\mathcal{C}_1,\ldots,\mathcal{C}_{K}$ the connected components of a graph $G=(V,E)$, where $K$ is the number of $\mathcal{C}_i$s in $G$. We sometimes emphasize the dependence of $K$ on $G=(V,E)$ by writing $K(E)$. For any matrix $\bm{A}=(\bm{a}_1,\ldots,\bm{a}_m)^T\in\mathbb{R}^{m\times p}$, define the $\ell_1/\phi$-norm of $\bm{A}$ as $R(\bm{A})=\sum_{j=1}^m\phi(\bm{a}_j)$, where $\phi:\mathbb{R}^p\mapsto [0,\infty)$ denotes a norm on $\mathbb{R}^p$. Sometimes, we also write $\bm{A}=(\bm{a}_k: k\in S)$ for the $|S|\times p$ matrix whose rows are $\bm{a}_k^T, k\in S$. We denote by $\|\bm{a}\|_2$ the $\ell_2$-norm of $\bm{a}\in\mathbb{R}^p$. Let $\mathbb{B}(\bm{a};r)=\{\bm{x}: \|\bm{x} - \bm{a}\|_2\le r\}$ be the ball in $\mathbb{R}^p$ with center $\bm{a}$ and radius $r$. For a symmetric matrix $\bm{A}$, we denote by $\lambda_{\max}(\bm{A})$ its maximal eigenvalue and by $\lambda_{\min}(\bm{A})$ its minimal eigenvalue.
If $\bm{A}$ is positive semi-definite, we denote by $\lambda_{\min}^{+}(\bm{A})$ its smallest nonzero eigenvalue.
\subsection{Problem Setup and Identifiability of Heterogeneity}
We consider the heterogeneity of devices in federated learning. A device can differ from other devices in a variety of ways, including its data distribution, computing power, and availability patterns during communication. These types of spatial and temporal heterogeneity can be described by graphs whose nodes represent devices. Consider probability measures $P(\boldsymbol{\theta}_u^*)$, $u\in V$, with $\{\boldsymbol{\theta}_u^*\colon u\in V\}\subset \mathbb{B}(\mathbf{0}_p;r_0)\subset\mathbb{R}^p$ for some constant $r_0\in (0,\infty)$. Naturally, we denote by $\boldsymbol{\Xi}=\mathbb{B}(\mathbf{0}_p;r_0)$ the parameter space. We assume that, for each $u\in V$, $\mathbf{z}_k^{(u)}\sim P(\boldsymbol{\theta}_u^*), 1\le k\le n_u$, independently on device $u$. We denote by $\bm{\mathcal{Z}}_u\subset\mathbb{R}^{q_u}$ the support set of $P(\boldsymbol{\theta}_u^*)$ such that $\{\mathbf{z}_k^{(u)}\}_{k=1}^{n_u}\subset \bm{\mathcal{Z}}_u$. We assume that $\bm{\mathcal{Z}}_u$ is compact such that $\max_{u\in V}\sup_{\bm{z}_1,\bm{z}_2\in\bm{\mathcal{Z}}_u}\|\bm{z}_1-\bm{z}_2\|_2=r_z\in (0,\infty)$. Let $n=\min_{u\in V}n_u$.
\begin{definition}[Characteristic graph]\label{def:data_dist}
A graph $G_0=(V,E_0)$ is called a characteristic graph of a set of probability distributions $\{P(\boldsymbol{\theta}_u^*); u\in V\}$ if $\boldsymbol{\theta}_u^*=\boldsymbol{\theta}_v^*$ is equivalent to $(u,v)\in E_0$.
\end{definition}
A characteristic graph has a natural decomposition $G_0=\cup_{i=1}^{K(E_0)}\mathcal{C}_i^*$, where the $\mathcal{C}_i^*$s $\big(1\le i\le K(E_0)\big)$ denote disjoint cliques. This definition is, in fact, an intermediate model between local and global models, able to adaptively characterize the degree of heterogeneity. If $K(E_0)=1$, which means that $G_0$ is complete and thus the parameters of all local devices equal each other, our model reduces to the global model considered in \cite{stich2019local} and \cite{pmlr-v54-mcmahan17a}. If $K(E_0)=|V|$, which means that all parameters are distinct from each other, we arrive at the local model and recover the setting of \cite{smith2017federated}. In practice, $E_0$ and the number of clusters $K(E_0)$ are typically unknown.

Under a general $M$-estimation framework, $\boldsymbol{\theta}_u^*$ can be identified by
\begin{align}\label{eq:M_est}
\boldsymbol{\theta}_u^*=\argmin_{\boldsymbol{\theta}\in\boldsymbol{\Xi}}M_u(\boldsymbol{\theta})\equiv E[m_u(\mathbf{z};\boldsymbol{\theta})], \quad u\in V.
\end{align}
Notably, the framework \eqref{eq:M_est} covers a wide variety of statistical models, including linear regression, generalized linear regression, maximum likelihood estimation, and so on. Moreover, the methods employed for parameter estimation are allowed to vary across devices. For example, at device $u$ we may be interested in the population mean $\boldsymbol{\theta}_u^*=E(\mathbf{z}^{(u)})$, which can be estimated by maximizing the likelihood of the observed samples, while at device $v$ we may be given a classification task, where $\boldsymbol{\theta}_v^*$ is the coefficient vector of the classification hyperplane, which can be estimated by logistic regression. Regularity conditions are required to guarantee that \eqref{eq:M_est} is well-posed.
Denote by $\boldsymbol{\psi}_u(\cdot;\boldsymbol{\theta})$ the derivative of $m_u(\cdot;\boldsymbol{\theta})$ with respect to $\boldsymbol{\theta}$, and let $\boldsymbol{\Sigma}_u(\boldsymbol{\theta})=\mathrm{cov}(\boldsymbol{\psi}_u(\mathbf{z};\boldsymbol{\theta}))$ be the covariance matrix of $\boldsymbol{\psi}_u(\mathbf{z};\boldsymbol{\theta})$ with $\mathbf{z}\sim P(\boldsymbol{\theta}_u^*)$. Also, denote by $\mathbf{H}_u(\boldsymbol{\theta})$ the Hessian matrix of $M_u(\boldsymbol{\theta})$, and by $\widehat{\mathbf{H}}_u(\boldsymbol{\theta})$ its empirical counterpart, i.e., $\widehat{\mathbf{H}}_u(\boldsymbol{\theta}) = n_u^{-1}\sum_{k=1}^{n_u}\partial \boldsymbol{\psi}_u(\mathbf{z}_k^{(u)};\boldsymbol{\theta})/\partial \boldsymbol{\theta}$.
\begin{condition}[Identifiability]\label{con:identifiability}
For each $u\in V$, $m_u(\mathbf{z};\boldsymbol{\theta})$ is convex and twice differentiable with respect to $\boldsymbol{\theta}$ within $\boldsymbol{\Xi}$. Moreover, $\mathbf{H}_u(\boldsymbol{\theta})$ is Lipschitz continuous under the operator norm at $\boldsymbol{\theta}=\boldsymbol{\theta}_u^*$, i.e., for any $\boldsymbol{\theta}\in \boldsymbol{\Xi}$,
\[
\|\mathbf{H}_u(\boldsymbol{\theta}) - \mathbf{H}_u(\boldsymbol{\theta}_u^*)\|_2\le L \|\boldsymbol{\theta} - \boldsymbol{\theta}_u^*\|_2,
\]
and the eigenvalues of $\mathbf{H}_u(\boldsymbol{\theta}_u^*)$ are bounded, i.e.,
\[
\underline{\lambda} \le \min_{u\in V}\lambda_{\min}(\mathbf{H}_u(\boldsymbol{\theta}_u^*)) \le \max_{u\in V}\lambda_{\max}(\mathbf{H}_u(\boldsymbol{\theta}_u^*))\le \overline{\lambda} ,
\]
where $L, \underline{\lambda}$, and $\overline{\lambda}$ are constants independent of $u\in V$ such that $\boldsymbol{\Xi} \supset \cup_{u\in V}\mathbb{B}(\boldsymbol{\theta}_u^*;\underline{\lambda}/(2L))$.
\end{condition}
Condition \ref{con:identifiability} implies that, for each $u\in V$, $\boldsymbol{\theta}_u^*$ is locally identifiable; that is, it is the unique minimizer of $M_u(\boldsymbol{\theta})$ in the vicinity of $\boldsymbol{\theta}_u^*$; see Lemma \ref{lem:strong_c}. In light of Definition \ref{def:data_dist}, distribution heterogeneity among devices is uniquely characterized in terms of $\boldsymbol{\Theta}^*=(\boldsymbol{\theta}_u^*:u\in V)$. In order to estimate $\boldsymbol{\Theta}^*$ from samples, we also need to impose certain conditions on $P(\boldsymbol{\theta}_u^*), u\in V$.
\begin{condition}[Design and noise distribution]\label{con:distribution}
\begin{itemize}
\item[(\romannumeral1)]\textbf{Sub-Gaussian noises.} For each $u\in V$, the random vector $\boldsymbol{\psi}_u(\mathbf{z}_k^{(u)};\boldsymbol{\theta}^*_u)$ is sub-Gaussian with parameter $\sigma^2\in(0,\infty)$, i.e.,
\[
E [\exp \{( \bm{a}^T\boldsymbol{\psi}_u(\mathbf{z}_k^{(u)};\boldsymbol{\theta}^*_u) )^2 / \sigma^2\}] \le 2,\quad k = 1, \dots, n_u,
\]
for any $ \bm{a} \in \mathbb{R}^p$ with $\| \bm{a} \|_2 = 1$.
\item[(\romannumeral2)]\textbf{Fixed design.} It holds that
\begin{align*}
\kappa^{-1} &\le \min_{u\in V}\inf_{\boldsymbol{\theta}\in \mathbb{B}(\boldsymbol{\theta}_u^*;r_1)}\lambda_{\min}(\widehat{\mathbf{H}}_u(\boldsymbol{\theta}))\le \max_{u\in V}\sup_{\boldsymbol{\theta}\in \mathbb{B}(\boldsymbol{\theta}_u^*;r_1)}\lambda_{\max}(\widehat{\mathbf{H}}_u(\boldsymbol{\theta}))\le \kappa
\end{align*}
for some constants $\kappa\ge 1$ and $r_1>0$ with $\cup_{u\in V}\mathbb{B}(\boldsymbol{\theta}_u^*;r_1)\subset \boldsymbol{\Xi}$.
\item[(\romannumeral3)]\textbf{Random design.} For any $\boldsymbol{\theta}\in\boldsymbol{\Xi}$, the derivative of $\boldsymbol{\psi}_u(\cdot;\boldsymbol{\theta})$ with respect to $\boldsymbol{\theta}$ can be decomposed as
\[
\frac{\partial \boldsymbol{\psi}_u(\cdot;\boldsymbol{\theta})}{\partial \boldsymbol{\theta}} = h_u(\cdot;\boldsymbol{\theta}) \bm{f}_u(\cdot) \bm{f}_u(\cdot)^T,
\]
for a vector-valued function $\bm{f}_u(\cdot): \mathbb{R}^{q_u}\to \mathbb{R}^p$ and a scalar-valued function $h_u(\cdot;\boldsymbol{\theta})$ parametrized by $\boldsymbol{\theta}$. Moreover,
\[
\underline{h}\le \min_{u\in V}\inf_{\bm{z}\in\bm{\mathcal{Z}}_u}\inf_{\boldsymbol{\theta}\in \boldsymbol{\Xi}}|h_u(\bm{z};\boldsymbol{\theta})|\le \max_{u\in V}\sup_{\bm{z}\in\bm{\mathcal{Z}}_u}\sup_{\boldsymbol{\theta}\in \boldsymbol{\Xi}} |h_u(\bm{z};\boldsymbol{\theta})|\le \overline{h}
\]
holds for some constants $\overline{h}\ge \underline{h}>0$, and $\bm{f}_u(\mathbf{z}) \bm{f}_u(\mathbf{z})^T$ with $\|E(\bm{f}_u(\mathbf{z}) \bm{f}_u(\mathbf{z})^T)\|_2 \le \sigma_{1,z}^2$ is a sub-exponential random matrix with parameters $(\mv,\alpha)$ for $\mathbf{z}\sim P(\boldsymbol{\theta}_u^*)$, i.e.,
\[
E\left\{\exp\left[\lambda\left( \bm{f}_u(\mathbf{z}) \bm{f}_u(\mathbf{z})^T-E\left(\bm{f}_u(\mathbf{z}) \bm{f}_u(\mathbf{z})^T\right)\right)\right]\right\}\preceq \exp\left(\frac{\lambda^2\mv}{2}\right),
\]
for $ |\lambda|<\alpha^{-1}$, where the matrix-valued parameter $\mv\in\mathbb{R}^{p\times p}$ satisfies $ \|\mv\|_2\le \sigma_{2,z}^2$, and $ \sigma_{1,z}, \sigma_{2,z}$ are some positive constants.
\end{itemize}
\end{condition}
For supervised learning, part (\romannumeral1) of Condition \ref{con:distribution} imposes a sub-Gaussian tail on the distribution of the noises for technical convenience, which can be relaxed to distributions with bounded higher-order moments. Part (\romannumeral2) of Condition \ref{con:distribution} requires that the empirical Hessian matrices have a bounded condition number, which is standard in the federated learning literature; we show in Lemma \ref{lem:conc_emp_hes} that it holds with high probability under part (\romannumeral3). Part (\romannumeral3) requires that the Hessian matrix of $m_u(\cdot;\boldsymbol{\theta})$ can be factorized into a product such that one factor is a sub-exponential random matrix that does not depend on $\boldsymbol{\theta}$, and the other factor is parameter-dependent and uniformly bounded over the parameter space. We show in Section \ref{sec:method} that various popular statistical models satisfy this factorization. Notably, either part (\romannumeral2) or part (\romannumeral3) is enough to guarantee that $\boldsymbol{\Theta}^*$ can be recovered from finite samples.

Once \eqref{eq:M_est} is well-posed, a naive estimator of $\boldsymbol{\theta}_u^*$ is
\begin{align}\label{eq:naive_est}
\widehat{\boldsymbol{\theta}}_u^{\mathrm{loc}} = \argmin_{\boldsymbol{\theta}\in\boldsymbol{\Xi}}\widehat{M}_u(\boldsymbol{\theta})\equiv \frac{1}{n_u}\sum_{k=1}^{n_u}m_u(\mathbf{z}_k^{(u)};\boldsymbol{\theta}).
\end{align}
Here, the dimensionality $p$ is allowed to increase with $n$, but in a low-dimensional manner, i.e., $p= o(n)$. Under Conditions \ref{con:identifiability} and \ref{con:distribution}, $\widehat{\boldsymbol{\theta}}_u^{\mathrm{loc}} - \boldsymbol{\theta}_u^*$ is asymptotically normal; see Proposition \ref{prop:asymptotic_normality}.
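For intuition, this limit takes the classical $M$-estimation sandwich form (stated here schematically; the precise statement is Proposition \ref{prop:asymptotic_normality}):
\[
\sqrt{n_u}\,\big(\widehat{\boldsymbol{\theta}}_u^{\mathrm{loc}} - \boldsymbol{\theta}_u^*\big) \stackrel{d}{\longrightarrow} N\Big(\mathbf{0}_p,\; \mathbf{H}_u(\boldsymbol{\theta}_u^*)^{-1}\,\boldsymbol{\Sigma}_u(\boldsymbol{\theta}_u^*)\,\mathbf{H}_u(\boldsymbol{\theta}_u^*)^{-1}\Big),
\]
so the variance of each local estimator scales as $O(n_u^{-1})$, regardless of how many other devices share the same parameter.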
However, since $\boldsymbol{\theta}_u^*=\boldsymbol{\theta}_v^*$ for any $(u,v)\in E_0$, estimating each $\boldsymbol{\theta}_u^*$ separately incurs an efficiency loss compared to the ideal estimator obtained by aggregating all data points with the same distribution \citep{dobriban2021distributed}. To avoid this efficiency loss, the latent structure $G_0$ among local devices needs to be taken into account. We shall introduce our estimator in the next section and compare its performance with that of the naive estimator $\widehat{\boldsymbol{\Theta}}^{\mathrm{loc}}=(\widehat{\boldsymbol{\theta}}^{\mathrm{loc}}_u : u\in V)$.

\section{Methodology}\label{sec:method}
When the characteristic graph $G_0=(V,E_0)$ is known, our heterogeneous problem reduces to $K(E_0)$ independent homogeneous problems. Unfortunately, $G_0$ is unknown. In this article, we consider the case where $G=(V,E)$, a surrogate graph of $G_0$, is given a priori, and establish a detailed statistical analysis of the effect introduced by the wrong edges $(E\setminus E_0)\cup (E_0\setminus E)$ when $G$ deviates from $G_0$. In practice, $E$ can be acquired from prior knowledge of $E_0$ or from the communication pathways among devices.

When $E$ incorporates prior information about $E_0$, it is beneficial to introduce a penalty that encourages equal estimates on devices connected in $G$. By doing so, we borrow information from devices sharing the same data distribution. More specifically, we propose the following penalized $M$-estimation:
\begin{equation}\label{eq:ob}
\widehat{\boldsymbol{\Theta}}=(\widehat{\boldsymbol{\theta}}_u: u\in V)=\argmin_{\boldsymbol{\Theta}}F(\boldsymbol{\Theta})= \frac{1}{|V|}\sum_{u\in V} \widehat{M}_u(\boldsymbol{\theta}_u)+ \lambda R(\bm{D}\boldsymbol{\Theta}),
\end{equation}
where $\boldsymbol{\Theta}=(\boldsymbol{\theta}_u: u\in V)\in\mathbb{R}^{|V|\times p}$, $\bm{D}\boldsymbol{\Theta}=(\boldsymbol{\theta}_{e^+} - \boldsymbol{\theta}_{e^-}: e\in E)\in\mathbb{R}^{|E|\times p}$, $\lambda$ is a tuning parameter, and $R(\bm{D}\boldsymbol{\Theta})=\sum_{e\in E}\phi(\boldsymbol{\theta}_{e^+} - \boldsymbol{\theta}_{e^-})$, where $\phi(\cdot)$ is a norm defined on $\mathbb{R}^p$. We first show through several examples that a wide range of statistical models are covered by \eqref{eq:ob}.
\begin{example}[Mean estimation]
For some $u\in V$, samples are drawn independently from the following model:
\begin{align}
\mathbf{z}^{(u)}_k = \boldsymbol{\theta}_u^* + \boldsymbol{\varepsilon}^{(u)}_k,\quad 1\le k\le n_u,\ u\in V,
\label{eq:mean_Example}
\end{align}
where $\boldsymbol{\theta}_u^*\in\mathbb{R}^p$ is the unknown target mean vector on device $u$, and the $\boldsymbol{\varepsilon}^{(u)}_k$s are independent Gaussian noises with covariance matrix $\sigma^2\bm{I}_p$. Choose the negative log-likelihood as $m_u(\mathbf{z}_k^{(u)}; \boldsymbol{\theta})=\|\mathbf{z}^{(u)}_k-\boldsymbol{\theta}\|_2^2/(2\sigma^2)$, so that $\boldsymbol{\psi}_u(\mathbf{z}_k^{(u)}; \boldsymbol{\theta})=\sigma^{-2}(\boldsymbol{\theta}-\mathbf{z}^{(u)}_k)$. Since the noises are normally distributed with bounded variances, Conditions \ref{con:identifiability} and \ref{con:distribution} hold. For $n_u= 1\ (\forall u\in V)$, \eqref{eq:mean_Example} is essentially nonparametric \citep{tibshirani2005sparsity,Padilla2020Adaptive}, and $\boldsymbol{\theta}_u^*$ can be viewed as the value that some function $f_0(\cdot)$ takes on device $u$, in which case a device is a data point.
In particular, if furthermore $p=1$ and $G$ is a grid graph, \eqref{eq:mean_Example} reduces to total variation de-noising \citep{hutter2016optimal}.
\end{example}
\begin{example}[Linear regression]
Suppose that for each $u\in V$, $\mathbf{z}_k^{(u)}=(\mathbf{x}_k^{(u)}, \textnormal{y}_k^{(u)})\in\mathbb{R}^p\times \mathbb{R}$ is generated independently by
\[
\textnormal{y}_k^{(u)} = (\boldsymbol{\theta}_u^*)^T \mathbf{x}_k^{(u)} + \varepsilon_k^{(u)}, \quad 1\le k\le n_u,
\]
where the $\varepsilon_k^{(u)}$s are independent sub-Gaussian noises with parameter $\sigma^2$, and $\{\mathbf{x}_k^{(u)}\}_{k=1}^{n_u}$ is fixed with non-singular covariance matrix $\boldsymbol{\Sigma}_u$. Choose the squared error loss $m_u(\mathbf{z}_k^{(u)};\boldsymbol{\theta}) = (\textnormal{y}_k^{(u)} - \boldsymbol{\theta}^T \mathbf{x}_k^{(u)} )^2 /2$; then $\boldsymbol{\psi}_u(\mathbf{z}_k^{(u)};\boldsymbol{\theta}) = -(\textnormal{y}^{(u)}_k- \boldsymbol{\theta}^T \mathbf{x}_k^{(u)}) \mathbf{x}_k^{(u)}$ and $\boldsymbol{\Sigma}_u(\boldsymbol{\theta}_u^*) = \sigma^2\boldsymbol{\Sigma}_u$. It is straightforward to see that Condition \ref{con:identifiability} and parts (\romannumeral1) and (\romannumeral2) of Condition \ref{con:distribution} are satisfied.
\end{example}
\begin{example}[Logistic regression]
Suppose that for each $u\in V$, $\mathbf{z}_k^{(u)}=(\mathbf{x}_k^{(u)}, \textnormal{y}_k^{(u)})\in\mathbb{R}^p\times \{0,1\}$ is generated independently by
\[
P(\textnormal{y}_k^{(u)} \mid \mathbf{x}_k^{(u)})=\exp \left\{\textnormal{y}_k^{(u)} (\boldsymbol{\theta}_u^*)^T \mathbf{x}_k^{(u)} - \zeta((\boldsymbol{\theta}_u^*)^T \mathbf{x}_k^{(u)} )\right\}, \quad 1\le k\le n_u,
\]
where $\zeta(t) = \log(1+\exp(t))$. As a standard assumption, the $\{\mathbf{x}_k^{(u)}\}_{k=1}^{n_u}$ are i.i.d.\ samples from a distribution supported on the unit ball $\mathbb{B}(\mathbf{0}_p;1)$ with $c\bm{I}_p\succeq \mathrm{var}(\mathbf{x}_k^{(u)})\succeq c^{-1}\bm{I}_p$ for some constant $c>1$. Choose the negative log-likelihood $m_u(\mathbf{z}_k^{(u)};\boldsymbol{\theta}) = \zeta(\boldsymbol{\theta}^T \mathbf{x}_k^{(u)})- \textnormal{y}_k^{(u)} \boldsymbol{\theta}^T \mathbf{x}_k^{(u)}$. Then $\boldsymbol{\psi}_u(\mathbf{z}_k^{(u)};\boldsymbol{\theta}) = \zeta^\prime(\boldsymbol{\theta}^T \mathbf{x}_k^{(u)})\mathbf{x}_k^{(u)} -\textnormal{y}_k^{(u)} \mathbf{x}_k^{(u)}$ and $(\partial \boldsymbol{\psi}_u/\partial \boldsymbol{\theta})(\mathbf{z}_k^{(u)};\boldsymbol{\theta}) = \zeta^{\prime\prime}(\boldsymbol{\theta}^T \mathbf{x}_k^{(u)})\mathbf{x}_k^{(u)}(\mathbf{x}_k^{(u)})^T$. One can show that Conditions \ref{con:identifiability} and \ref{con:distribution} hold since $\sup_{\mathbf{x}\in \mathbb{B}(\mathbf{0}_p;1), \boldsymbol{\theta}\in \boldsymbol{\Xi}}|\boldsymbol{\theta}^T \mathbf{x}|\le \sup_{\boldsymbol{\theta}\in\boldsymbol{\Xi}}\|\boldsymbol{\theta}\|_2\le r_0$.
\end{example}
Clearly, the regularization term $\lambda R(\bm{D}\boldsymbol{\Theta})$ encourages $\boldsymbol{\theta}_{e^+} - \boldsymbol{\theta}_{e^-}$ to be zero for every $e\in E$. If $\lambda = 0$ (i.e., the extra information in $G$ plays no role), \eqref{eq:ob} reduces to the local estimator defined in \eqref{eq:naive_est}, since the data fidelity term $|V|^{-1}\sum_{u\in V} \widehat{M}_u(\boldsymbol{\theta}_u)$ is separable with respect to $\boldsymbol{\theta}_u$.
On the other hand, if $\lambda\to\infty$, \eqref{eq:ob} reduces to the global estimator that presumes $\boldsymbol{\theta}_u^*\equiv \boldsymbol{\theta}^*\ (\forall u\in V)$, ignoring the distribution heterogeneity among devices. In the following, we provide a general statistical convergence guarantee for $\widehat{\boldsymbol{\Theta}}$, which can be used to determine the rate of $\lambda$ and to adaptively adjust for distribution heterogeneity.
\subsection{Statistical Guarantees}\label{sec:sg}
Let $\boldsymbol{\Delta}^* = \bm{D}\boldsymbol{\Theta}^*=(\boldsymbol{\delta}_e^*:e\in E)$; then $\boldsymbol{\delta}_e^*=\boldsymbol{\theta}_{e^+}^* - \boldsymbol{\theta}_{e^-}^*\neq 0$ if and only if $e\in E\setminus E_0$. Inspired by the group lasso \citep{negahban2012unified}, we can recover the nonzero rows of $\boldsymbol{\Delta}^*$ with the help of the penalty $\lambda \sum_{e\in E}\phi(\boldsymbol{\delta}_e^*)$. In high-dimensional settings, a regularity condition on the design matrix, e.g., the compatibility condition \citep{buhlmann2011statistics} or the restricted eigenvalue condition \citep{bickel2009simultaneous}, is required for parameter estimation. In our case, the `design matrix' with respect to $\boldsymbol{\Delta}^*$ is not explicitly given, but is closely related to the incidence matrix $\bm{D}$. We use a similar notion, the compatibility factor \citep{hutter2016optimal}, to characterize the regularity of this `design matrix' for estimating $\boldsymbol{\Delta}^*$.
\begin{definition}[Compatibility factor]\label{def:cf}
The compatibility factor of $\bm{D}$ for a set $T\subset E$ with respect to $R(\cdot)$ is defined as
\[\kappa_{\emptyset}(\bm{D})\equiv 1,\quad \kappa_T(\bm{D})\equiv \inf_{\boldsymbol{\Theta} \in \mathbb{R}^{|V|\times p}}\frac{\sqrt{|T|} \|\boldsymbol{\Theta}\|_F}{R[(\bm{D}\boldsymbol{\Theta})_T]}\quad \mathrm{for}\ T\neq\emptyset,
\]
where $(\bm{D}\boldsymbol{\Theta})_{T} = (\boldsymbol{\theta}_{e^+} - \boldsymbol{\theta}_{e^-}: e\in T)\in\mathbb{R}^{|T|\times p}$ denotes the sub-matrix of $\bm{D}\boldsymbol{\Theta}$ with rows indexed by $T$.
\end{definition}
The compatibility factor $\kappa_T(\bm{D})$ is similar in spirit to the compatibility condition in the lasso literature. To see this, suppose $p=1$ and $\bm{D}^{\dag}\bm{D}= \bm{I}_{|V|}$; then for $\boldsymbol{\Theta}\in \mathbb{R}^{|V|}$ and $\boldsymbol{\Delta}=\bm{D}\boldsymbol{\Theta}\in \mathbb{R}^{|E|}$ we have $R[(\bm{D}\boldsymbol{\Theta})_{T}]=\|\boldsymbol{\Delta}_T\|_1$ and $\|\boldsymbol{\Theta}\|_F^2 = \boldsymbol{\Delta}^T (\bm{D}^{\dag})^T \bm{D}^{\dag} \boldsymbol{\Delta}$. Thus $(\bm{D}^{\dag})^T \bm{D}^{\dag}$, as the `design matrix' for estimating $\boldsymbol{\Delta}^*$, satisfies the compatibility condition with constant $\kappa_T^2(\bm{D})$. The compatibility factor also helps to identify the nonzero rows of $\boldsymbol{\Delta}^*$ through the following condition.
\begin{condition}\label{con:comp_graph}
The compatibility factor satisfies $\kappa_S(\bm{D})\ge \kappa_0$ for $S=E\setminus E_0$, where $\kappa_0>0$ is a universal constant.
\end{condition}
In fact, $\kappa_T(\bm{D})$ can also be viewed as a measure of the degree centrality \citep{Sharma2013} of $G=(V,E)$. Following \citet[Lemma 3]{hutter2016optimal}, we can show that $\kappa_T(\bm{D})\ge 1/(2\min\{\sqrt{d},\sqrt{|T|}\})$, where $d$ denotes the maximum degree of $G$. Hence, for graphs with bounded maximum degree, Condition \ref{con:comp_graph} is met.
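To illustrate the graph quantities entering these conditions and the bounds below, the following sketch (our illustration; the $4$-cycle and all names are hypothetical) builds the incidence matrix $\bm{D}$ of a small graph, computes the maximum degree $d$ controlling the lower bound $\kappa_T(\bm{D})\ge 1/(2\min\{\sqrt{d},\sqrt{|T|}\})$, and evaluates the smallest nonzero eigenvalue of $\bm{D}^T\bm{D}$, which appears as $\gamma_G^2$ in Theorem \ref{thm:lam_rho} below.
\begin{verbatim}
import numpy as np

def incidence_matrix(n_nodes, edges):
    # One row per edge e = (e+, e-): +1 at e+ and -1 at e-.
    D = np.zeros((len(edges), n_nodes))
    for r, (i, j) in enumerate(edges):
        D[r, i], D[r, j] = 1.0, -1.0
    return D

edges = [(1, 0), (2, 1), (3, 2), (3, 0)]   # a 4-cycle, with i > j
D = incidence_matrix(4, edges)

# gamma_G^2: smallest nonzero eigenvalue of D^T D (the graph Laplacian).
eigvals = np.linalg.eigvalsh(D.T @ D)
gamma_sq = min(v for v in eigvals if v > 1e-10)   # equals 2 here

# Maximum degree d of G, bounding the compatibility factor from below.
d_max = int(np.abs(D).sum(axis=0).max())          # equals 2 here
\end{verbatim}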
\subsubsection{Deterministic Results}
We have the following deterministic statement about the global minimizer of the convex program \eqref{eq:ob}. For convenience, let $\widehat{\boldsymbol{\Psi}}(\boldsymbol{\Theta}^*) = (\nabla_{\boldsymbol{\theta}_u} \widehat{M}_u(\boldsymbol{\theta}_u^*): u\in V)\in\mathbb{R}^{|V|\times p}$, and let $R^*(\cdot)$ be the dual norm of $R(\cdot)$ defined in Lemma \ref{lem:Rnorm}.
\begin{theorem}\label{thm:main}
Under Condition \ref{con:identifiability}, part (\romannumeral2) of Condition \ref{con:distribution}, and Condition \ref{con:comp_graph}, let $\widehat{\boldsymbol{\Theta}}$ be the global minimizer of \eqref{eq:ob}, $\rho = |V|^{-1/2}\|\boldsymbol{\Pi}_{\mathrm{Ker}(\bm{D})}\widehat{\boldsymbol{\Psi}}(\boldsymbol{\Theta}^*)\|_F$, and $\lambda = |V|^{-1/2} R^*\{(\bm{D}^\dag)^T\widehat{\boldsymbol{\Psi}}(\boldsymbol{\Theta}^*)\}$, where $\boldsymbol{\Pi}_{\mathrm{Ker}(\bm{D})}$ denotes the projection matrix mapping vectors in $\mathbb{R}^{|V|}$ onto the kernel of $\bm{D}$. If $\{\widehat{\boldsymbol{\theta}}_u\}_{u\in V}\subset \boldsymbol{\Xi}$, we have
\begin{align}\label{thm_main}
\frac{1}{|V|}\|\widehat{\boldsymbol{\Theta}}-\boldsymbol{\Theta}^*\|_F^2&\le 2\kappa^2 \left(\rho^2 + \frac{4|S|}{\kappa_0}\lambda^2\right).
\end{align}
\end{theorem}
We remark that the strong convexity condition, i.e., part (\romannumeral2) of Condition \ref{con:distribution}, is necessary. Owing to the rank deficiency of the incidence matrix $\bm{D}$, there exists a subspace of $\boldsymbol{\Theta}$ that cannot be penalized. For example, even if $\widehat{\boldsymbol{\theta}}_i=\widehat{\boldsymbol{\theta}}_j$ for $(i,j)\in E\cap E_0$, both $\widehat{\boldsymbol{\theta}}_i$ and $\widehat{\boldsymbol{\theta}}_j$ could share a common shift such that $\widehat{\boldsymbol{\theta}}_i-\boldsymbol{\theta}_i^*=\widehat{\boldsymbol{\theta}}_j-\boldsymbol{\theta}_j^*=c\neq 0$. The strong convexity of the empirical Hessian matrices ensures the uniqueness of each $\widehat{\boldsymbol{\theta}}_i$ and controls this shift effect. Our deterministic result builds on the condition $\lambda=|V|^{-1/2} R^*\{(\bm{D}^\dag)^T\widehat{\boldsymbol{\Psi}}(\boldsymbol{\Theta}^*)\}$, which helps us determine the rate of $\lambda$ for specific choices of $R(\cdot)$, $\{m_u(\cdot;\boldsymbol{\theta})\}_{u\in V}$, and $G=(V,E)$. We interpret $|S|\lambda^2/\kappa_0$ as the mean squared error incurred by estimating $\boldsymbol{\Delta}^*= \bm{D}\boldsymbol{\Theta}^*$. Owing to the $\ell_1$-type structure of the norm $R(\cdot)$, we can estimate such a row-wise sparse matrix at a lasso-type convergence rate. We highlight an interesting trade-off in $|S|\lambda^2/\kappa_0$. Specifically, when $\phi(\cdot)=\|\cdot\|_1$, the constants in \eqref{eq:lambda_highp} involve $\gamma_G$ and $\kappa_0$. For a connected graph $G$, the quantity $\gamma_G$ is the algebraic connectivity of $G$. A larger $\gamma_G$ indicates a more connected $G$, which potentially gives rise to a larger $|S|$ and a smaller $\kappa_0$. We interpret $\rho^2$ as the \textit{averaged intra-group variance} of the devices for estimating $\boldsymbol{\Theta}^*$ with respect to the given graph $G$.
Here a `group' refers to a connected component of $G$, since $\mathrm{Ker}(\bm{D})=\mathrm{span}\{\mathbf{1}\{\mathcal{C}_1\},\ldots,\mathbf{1}\{\mathcal{C}_{K(E)}\}\}$ \citep[see][Theorem 8.3.1]{Godsil2001}, where $\mathbf{1}\{\mathcal{C}_i\}\in\mathbb{R}^{|V|}$ denotes the indicator vector of $\mathcal{C}_i$, whose components in $\mathcal{C}_i$ equal $1$ and all others $0$. Notice that devices can only communicate within each connected component of $G$. A smaller $K(E)$, implied by stronger communication ability, gives rise to a smaller $\rho^2$. However, stronger communication ability comes at a price: the more connected $G$ is, the larger the size of $S = E\setminus E_0$ can potentially be. This reveals an interesting guideline for the choice of $G=(V,E)$: one needs to balance connectivity against the number of false edges. We emphasize that $\rho^2$ is unavoidable for identifying $\boldsymbol{\Theta}^*$, even if $E=E_0$, in which case $E\rho^2= O\big(\sigma^2 p K(E_0) /(n |V|)\big)$; this is a parametric rate, i.e., the noise level multiplied by the total number of parameters and divided by the total number of samples. In contrast, $|S|$ may be zero, in which case $|V|^{-1}\|\widehat{\boldsymbol{\Theta}} - \boldsymbol{\Theta}^*\|_F^2\le 2\kappa^2\rho^2$. As discussed above, to yield the smallest estimation error, $G$ should be as connected as possible within each $\mathcal{C}_i^*\ (1\le i\le K_*)$.
\subsubsection{Probabilistic Results}\label{sec:prob_result}
Under suitable distributional assumptions and specific choices of $\phi(\cdot)$, we can obtain probabilistic bounds for $\rho^2$ and $\lambda^2$.
\begin{theorem}\label{thm:lam_rho}
Part (\romannumeral1) of Condition \ref{con:distribution} implies that, for some constant $C_{\rho}>0$,
\begin{align}\label{eq:rho_highp}
\rho^2 \leq C_\rho\sigma^2\frac{K p \log(1/\xi)}{n |V|}
\end{align}
with probability at least $1-\xi$, where $n=\min_{k\in V}n_k$. If we choose $\phi(\cdot)=\|\cdot\|_1$, part (\romannumeral1) of Condition \ref{con:distribution} also implies that, for some constant $C_{\lambda}>0$,
\begin{align}\label{eq:lambda_highp}
\lambda^2 \leq \left(\frac{C_\lambda\sigma^2}{\gamma_G^2}\right)\frac{p\log(|E|/\xi)}{n |V|}
\end{align}
with probability at least $1-\xi$, where $\gamma_G^2$ denotes the smallest nonzero eigenvalue of $\bm{D}^T \bm{D}$. In particular, under Condition \ref{con:identifiability}, parts (\romannumeral1) and (\romannumeral3) of Condition \ref{con:distribution}, and Condition \ref{con:comp_graph}, choosing $\kappa = \max\{4\overline{\lambda}(\log 2)/ 3, 3/\underline{\lambda}\}$, it holds with probability at least $1-O(p |V|\exp\{-c n\}+ \xi)$ that
\begin{align}\label{eq: thm_prop}
\frac{1}{|V|}\|\widehat{\boldsymbol{\Theta}} - \boldsymbol{\Theta}^*\|_F^2&\le 2\kappa^2 \left\{C_\rho\sigma^2\frac{K p \log(1/\xi)}{n |V|} + \biggl(\frac{4C_\lambda\sigma^2}{\kappa_0\gamma_G^2}\biggr)\frac{p|S|\log(|E|/\xi)}{n |V|}\right\}.
\end{align}
\end{theorem}
By \eqref{eq: thm_prop}, the convergence rate of our estimator scales as $O_p(\sigma^2 p(K+|S|)/(n |V|))$ up to a logarithmic factor. We refer to the term
\[
\textnormal{GF}_{G_0}(G)\equiv K(E_0)/(K(E)+|E\setminus E_0|)
\]
as the \emph{graph fidelity} of $G=(V,E)$ with respect to $G_0=(V,E_0)$; it measures the faithfulness of $G$ to $G_0$. Interestingly, the performance of $\widehat{\boldsymbol{\Theta}}$ exhibits a phase transition as the graph fidelity of $G$ with respect to $G_0$ varies from its minimal to its maximal value.
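As a toy illustration (the configuration and numbers are ours), take $|V|=6$ devices forming $K(E_0)=2$ cliques of size $3$, so that $|E_0|=6$. Then $G=G_0$ attains $\textnormal{GF}_{G_0}(G)=2/(2+0)=1$; the empty graph gives $K(E)=6$ and $\textnormal{GF}_{G_0}(G)=2/6=K(E_0)/|V|$; and the complete graph gives $K(E)=1$ and $|E\setminus E_0|=15-6=9$, hence $\textnormal{GF}_{G_0}(G)=2/(1+9)=1/5$, which matches the general minimal value $\textnormal{GF}_{\min}$ derived next.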
Suppose that $K(E_0)\neq 1$; otherwise there is no distribution heterogeneity among devices. In this case,
\[
\textnormal{GF}_{\min}\equiv \frac{2K(E_0)}{|V|^2\big(1-K^{-1}(E_0)\big)+2}\le\textnormal{GF}_{G_0}(G)\le 1.
\]
For $G$ such that $\textnormal{GF}_{G_0}(G)=1$ (e.g., $G=G_0$), the convergence rate of the resulting estimator scales as $O_p(\sigma^2 pK(E_0)/(n |V|))$, which is the same as that of the \emph{oracle} estimator obtained by aggregating samples sharing the same distribution. For $G$ such that $\textnormal{GF}_{G_0}(G)=K(E_0)/|V|$ (e.g., $E=\emptyset$, so that local devices do not communicate with each other), the associated estimator converges at the same rate $O_p(\sigma^2 p/n)$ as the local estimator defined in \eqref{eq:naive_est}. For $G$ such that $\textnormal{GF}_{G_0}(G)=\textnormal{GF}_{\min}$ (e.g., $G$ is complete), the resulting estimator has convergence rate $O_p\big(\sigma^2p n^{-1}\{|V|(1-K^{-1}(E_0))\}\big)$, which is worse than that of the local estimator if $|V|>2$. Notice that the global estimator obtained by treating $\boldsymbol{\theta}_u^*\equiv \boldsymbol{\theta}^* \ (\forall u\in V)$ is inconsistent when $K(E_0)\neq 1$, since it ignores the distribution heterogeneity among local devices. Therefore, our method is most effective when the average clique size $|V|/K(E_0)$ is growing. Provided that $\textnormal{GF}_{G_0}(G)$ is close to $1$, the larger $|V|/K(E_0)$ is, the more substantially our method outperforms the local estimator \eqref{eq:naive_est}, which demonstrates its capability of dealing with a large number of heterogeneous devices simultaneously. In the subsequent section, we introduce a simple yet powerful method to enlarge the graph fidelity of any given graph $G$.
\subsection{Edge Selection by Multiple Testing}\label{sec:edge_selection}
In order to improve the performance of our method, we propose to find a subgraph $\widehat{G}=(V,\widehat{E})$ of $G=(V,E)$ with the largest graph fidelity with respect to $G_0$,
\begin{align}\label{eq:op_graph}
\widehat{E}\in \argmin_{\widetilde{E}\subset E} \left\{K(\widetilde{E}) + |\widetilde{E}\setminus E_0|\right\}.
\end{align}
Note that $\widehat{G}=(V,\widehat{E})$ represents the graph that gives rise to the best estimator based on $G=(V,E)$. If $G$ has multiple connected components, the objective function of \eqref{eq:op_graph} is separable across connected components. Without loss of generality, we assume $G$ is connected, so that $K(E)=1$. The following proposition relates problem \eqref{eq:op_graph} to selecting the true edges in $E$.
\begin{proposition}\label{prop:optimal_graph_mht}
For any graph $G=(V,E)$ with $K(E)=1$ and $E_0$ in Definition \ref{def:data_dist}, we have
\[
\min_{\widetilde{E}\subset E}\{K(\widetilde{E})+|\widetilde{E}\setminus E_0|\}=K(E\cap E_0),
\]
where $K(E\cap E_0)$ denotes the number of connected components of $G=(V,E\cap E_0)$. For any $\widetilde{E}\subset E$,
\begin{align}\label{eq:upper_bound_c}
K(E\cap E_0)\le K(\widetilde{E}) + |\widetilde{E} \setminus E_0|\le K(E\cap E_0) + |\widetilde{E} \setminus E_0| + |(E\cap E_0) \setminus \widetilde{E}|.
\end{align}
\end{proposition}
Proposition \ref{prop:optimal_graph_mht} implies that one of the minimizers of \eqref{eq:op_graph} is exactly $E\cap E_0$, which suggests finding an $\widetilde{E}\subset E$ with a small $|\widetilde{E} \setminus E_0| + |(E\cap E_0) \setminus \widetilde{E}|$. By Definition \ref{def:data_dist}, $(i,j)\in E_0$ is equivalent to $\boldsymbol{\theta}_i^*=\boldsymbol{\theta}_j^*$.
This motivates us to consider the simultaneous testing of the following null hypotheses:
\begin{align}\label{eq:mht}
H_{0,e}\colon \boldsymbol{\theta}_{e^+}^* = \boldsymbol{\theta}_{e^-}^*\quad \mathrm{versus} \quad H_{1,e}\colon \boldsymbol{\theta}_{e^+}^* \neq \boldsymbol{\theta}_{e^-}^*,\quad e\in E.
\end{align}
We impose Condition \ref{con:2ndbartlett} for technical convenience.
\begin{condition}\label{con:2ndbartlett}
For each $u\in V$, $\boldsymbol{\Sigma}_u(\boldsymbol{\theta}_u^*) = \mathbf{H}_u(\boldsymbol{\theta}_u^*)$.
\end{condition}
The requirement $\boldsymbol{\Sigma}_u(\boldsymbol{\theta}_u^*) = \mathbf{H}_u(\boldsymbol{\theta}_u^*)$, also known as the second Bartlett identity \citep{bartlett1953approximate}, is met if the probability density function $p_u(\bm{z};\boldsymbol{\theta})$ of $P(\boldsymbol{\theta}_{u}^*)$ is uniformly integrable with respect to $\bm{z}$ over $\boldsymbol{\theta}\in\boldsymbol{\Xi}$. As shown in Lemma \ref{lem:naive_consistency}, the local estimator $\widehat{\boldsymbol{\theta}}_u^{\mathrm{loc}}$ defined in \eqref{eq:naive_est} is asymptotically normal with asymptotic mean $\boldsymbol{\theta}_u^*$ and asymptotic variance $\mathbf{H}_u(\boldsymbol{\theta}_u^*)^{-1}\boldsymbol{\Sigma}_u(\boldsymbol{\theta}_u^*)\mathbf{H}_u(\boldsymbol{\theta}_u^*)^{-T}$. Condition \ref{con:2ndbartlett} is used to obtain a consistent estimator of this asymptotic variance. Thus, for $\boldsymbol{\theta}_{e^+}^* = \boldsymbol{\theta}_{e^-}^*$, we can construct a test statistic via
\begin{align}\label{eq:test_statistic_def}
\widehat{\textnormal{W}}_e = \Big\{\big(\widehat{\boldsymbol{\theta}}_{e^+}^{\mathrm{loc}} - \widehat{\boldsymbol{\theta}}_{e^{-}}^{\mathrm{loc}}\big)^{T}\big(\widehat{\boldsymbol{\Omega}}_{e^+} + \widehat{\boldsymbol{\Omega}}_{e^-}\big)^{-1}\big(\widehat{\boldsymbol{\theta}}_{e^+}^{\mathrm{loc}} - \widehat{\boldsymbol{\theta}}_{e^{-}}^{\mathrm{loc}}\big)\Big\}^{1/2},
\end{align}
where $\widehat{\boldsymbol{\Omega}}_u=\{n_u \widehat{\mathbf{H}}_u(\widehat{\boldsymbol{\theta}}_u^{\mathrm{loc}})\}^{-1}$ denotes the estimated asymptotic variance of $\widehat{\boldsymbol{\theta}}_u^\mathrm{loc}$. Adopting the Bonferroni correction, we select $E\cap E_0$ by
\begin{align}\label{eq:bonferroni}
\widehat{E} = \big\{e\in E : |\widehat{\textnormal{W}}_e|^2\le \chi_{p}^2(\alpha/|E|)\big\},
\end{align}
where $\chi_{p}^2(\alpha)$ is the upper $\alpha$-quantile of the $\chi_p^2$ distribution. To adaptively measure the distance between $\boldsymbol{\theta}_{e^+}^*$ and $\boldsymbol{\theta}_{e^-}^*$ when they differ, define $\mathrm{dist}(\boldsymbol{\theta}_1,\boldsymbol{\theta}_2)=\big\{\big(\boldsymbol{\theta}_1 - \boldsymbol{\theta}_2\big)^{T}\big(c_{e^+}\boldsymbol{\Omega}_{e^+}^*+c_{e^-}\boldsymbol{\Omega}_{e^-}^*\big)^{-1}\big(\boldsymbol{\theta}_1 - \boldsymbol{\theta}_2\big)\big\}^{1/2}$, where $c_{u}=\lim_{n_e\to +\infty}n_e/ n_{u}\ (n_e=\min\{n_{e^+},n_{e^-}\})$ and $\boldsymbol{\Omega}_u^*=\{\mathbf{H}_u(\boldsymbol{\theta}_u^*)\}^{-1}$ for $u\in\{e^+,e^-\}$. Under a minimum-signal condition, our procedure consistently selects the edges in $E_0$ with large probability, as formalized below.
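For concreteness, a minimal sketch of the selection rule \eqref{eq:bonferroni} is given below (our illustration; the function name and inputs are hypothetical). It assumes each device has already computed its local estimate $\widehat{\boldsymbol{\theta}}_u^{\mathrm{loc}}$ and variance estimate $\widehat{\boldsymbol{\Omega}}_u$, so that only these summaries, and no raw data, need to be exchanged.
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def select_edges(theta_loc, omega_loc, edges, alpha, p):
    # Bonferroni-corrected Wald-type edge selection.
    # theta_loc[u]: local estimate on device u, shape (p,);
    # omega_loc[u]: estimated asymptotic variance {n_u H_u}^{-1}.
    threshold = chi2.ppf(1 - alpha / len(edges), df=p)
    kept = []
    for (i, j) in edges:
        diff = theta_loc[i] - theta_loc[j]
        # Squared Wald statistic W_e^2.
        w_sq = diff @ np.linalg.solve(omega_loc[i] + omega_loc[j], diff)
        if w_sq <= threshold:      # retain edges that pass the test
            kept.append((i, j))
    return kept
\end{verbatim}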
\begin{theorem}\label{thm:sel_con}
Under the same conditions imposed in Proposition \ref{prop:asymptotic_normality}, if further
\begin{align}\label{con:min_signal}
\min_{e\in E\setminus E_0} n_e\Big\{\mathrm{dist}(\boldsymbol{\theta}_{e^+}^*,\boldsymbol{\theta}_{e^-}^*)\Big\}^2 \ge 4\chi^2_{p}(\alpha/|E|),
\end{align}
then $ \liminf_{n\to\infty} P\left(\widehat{E} = E\cap E_0\right) \ge 1 - \alpha$.
\end{theorem}
Theorem \ref{thm:sel_con} demonstrates that our selection procedure \eqref{eq:bonferroni} incurs no false negatives or false positives at confidence level $1-\alpha$. Remarkably, this edge selection procedure does not require any data exchange between devices. Theorem \ref{thm:sel_con} also suggests that the graph giving rise to the optimal convergence rate is not necessarily $G_0$, but any graph $G=(V,E)$ such that $K(E\cap E_0)=K(E_0)$, revealing that our method is robust to graph mis-specification. Nonetheless, \eqref{eq:bonferroni} is conservative in controlling the false positive rate \citep{benjamini1995controlling}. Although we have obtained the asymptotic distribution of the test statistic in \eqref{eq:test_statistic_def}, the dependence among the tests of $H_{0,e}, e\in E$, poses a challenge to controlling the false positive rate of our edge selection procedure, which is left for future work.
\section{Decentralized Stochastic ADMM}\label{sec:optim}
In this section, we focus on solving \eqref{eq:ob}. In fact, \eqref{eq:ob} has a form similar to trend filtering \citep{JMLR:wang2016} and convex clustering \citep{LindstenOL:2011,Tang2016fused}, which can be efficiently solved by the primal-dual interior-point method \citep{kim2009}, (stochastic) first-order primal-dual optimization algorithms \citep{Ho2019GlobalEB}, and the alternating direction method of multipliers (ADMM) \citep{aaditya2016}. However, none of these is directly applicable when data cannot be shared across devices. \cite{hallac2015network} proposed a decentralized version of ADMM to optimize \eqref{eq:ob}, where only parameters are transmitted across edges after being updated on each local device. However, it requires using all the data on each device in each iteration, and is thus not appropriate for online settings or for devices with weak computational capacity. We propose an algorithm called Fed-ADMM to solve the optimization problem \eqref{eq:ob}. Fed-ADMM is a decentralized stochastic version of ADMM. Apart from not transmitting local data, only a mini-batch of samples is used in each iteration on each device, and we allow local optimization speeds and availability patterns to be heterogeneous across devices. Without loss of generality, we assume that edges in $G$ point from the larger node towards the smaller node, so that $(i,j)\in E$ implies $i>j$. Let $N_i=\{j: (i,j)\in E\}\cup\{j:(j,i)\in E\}$ denote the neighbors of node $i$. We first consider the case where all devices are available throughout the training process. Similar to \cite{hallac2015network}, we introduce auxiliary vectors $\boldsymbol{\beta}_{i j}, \boldsymbol{\beta}_{j i}$ with the constraints $\boldsymbol{\beta}_{i j}=\boldsymbol{\theta}_i, \boldsymbol{\beta}_{j i}= \boldsymbol{\theta}_j$, for all $ (i ,j)\in E$.
The augmented Lagrangian \citep{hestenes1969multiplier} is
\begin{equation}\label{eq:lagrange}
\begin{aligned}
& L(\boldsymbol{\Theta},\mathbf{B},\malpha)= \frac{1}{|V|}\sum_{i\in V}\widehat{M}_i(\boldsymbol{\theta}_i) + \lambda \sum_{(i,j)\in E}\phi(\boldsymbol{\beta}_{i j} - \boldsymbol{\beta}_{j i}) \\
&- \sum_{(i,j)\in E}\left\{ \boldsymbol{\alpha}_{i j}^T (\boldsymbol{\theta}_i - \boldsymbol{\beta}_{i j}) + \boldsymbol{\alpha}_{j i}^T (\boldsymbol{\theta}_j - \boldsymbol{\beta}_{j i}) \right\} + \frac{\rho}{2}\sum_{(i,j)\in E} \left\{\|\boldsymbol{\theta}_i - \boldsymbol{\beta}_{i j}\|_2^2 + \|\boldsymbol{\theta}_j - \boldsymbol{\beta}_{j i}\|_2^2\right\},
\end{aligned}
\end{equation}
where $\mathbf{B} = (\boldsymbol{\beta}_{i j},\boldsymbol{\beta}_{j i}: (i,j)\in E)$ and $\malpha = (\boldsymbol{\alpha}_{i j},\boldsymbol{\alpha}_{j i}: (i,j)\in E)$. Generally, ADMM solves \eqref{eq:lagrange} iteratively by alternately minimizing $L(\boldsymbol{\Theta},\mathbf{B},\malpha)$ with respect to $\boldsymbol{\Theta}$ and $\mathbf{B}$, with the other held fixed, followed by an update of the Lagrange multiplier $\malpha$. Notably, $L(\boldsymbol{\Theta},\mathbf{B},\malpha)$ is separable, and the updates for $\boldsymbol{\Theta},\mathbf{B}$, and $\malpha$ can be executed in a distributed way. In practice, however, local devices cannot afford to optimize with the whole dataset. Motivated by stochastic gradient descent, instead of directly minimizing $L(\boldsymbol{\Theta},\mathbf{B}(t),\malpha(t))$ with respect to $\boldsymbol{\theta}_i$ on device $i$, we adopt a one-step stochastic gradient update in the $t$-th iteration:
\begin{equation}\label{eq:ADMM_primal}
\begin{aligned}
\boldsymbol{\theta}_i(t+1) = \boldsymbol{\theta}_i(t) - &\eta(t)\left\{\widetilde{\mathbf{g}}_i(t)+ \rho\sum_{j\in N_i} (\boldsymbol{\theta}_i(t) - \boldsymbol{\beta}_{i j}(t) - \rho^{-1}\boldsymbol{\alpha}_{i j}(t)) \right\},
\end{aligned}
\end{equation}
where $\widetilde{\mathbf{g}}_i(t) = |\mathcal{B}_i(t)|^{-1}\sum_{b\in \mathcal{B}_i(t)}\boldsymbol{\psi}_{i}(\mathbf{z}_b^{(i)};\boldsymbol{\theta}_i(t))$, $\eta(t)$ denotes the learning rate, and $\mathcal{B}_i(t)$ denotes the mini-batch randomly sampled from $\{\mathbf{z}_k^{(i)}\}_{k=1}^{n_i}$ on device $i$ in the $t$-th iteration; $\widetilde{\mathbf{g}}_i(t)$ is an unbiased estimator of $\nabla_{\boldsymbol{\theta}}\widehat{M}_i(\boldsymbol{\theta})$ evaluated at $\boldsymbol{\theta}_i(t)$. Apart from local samples, the update equation \eqref{eq:ADMM_primal} only requires $\boldsymbol{\beta}_{i j}(t)$ and $\boldsymbol{\alpha}_{i j}(t)$, which can be transmitted from device $j$. Thus, \eqref{eq:ADMM_primal} can be executed in parallel for all devices. With $\boldsymbol{\theta}_i(t+1), i\in V$, at hand, we then update $\boldsymbol{\beta}_{i j}(t)$ and $\boldsymbol{\beta}_{j i}(t)$ by
\begin{equation}\label{eq:update_beta}
\begin{aligned}
&\biggl(\begin{matrix}\boldsymbol{\beta}_{i j}(t+1)\\ \boldsymbol{\beta}_{j i}(t+1)\end{matrix}\biggr)= \argmin_{\boldsymbol{\beta}_{i j},\boldsymbol{\beta}_{j i}}\biggl\{ \lambda\phi(\boldsymbol{\beta}_{i j} - \boldsymbol{\beta}_{j i}) \biggr. \\
&\quad \quad \quad\quad\biggl.+\frac{\rho}{2}\biggl(\|\boldsymbol{\theta}_i(t+1)-\boldsymbol{\beta}_{i j} - \rho^{-1}\boldsymbol{\alpha}_{i j}(t)\|_2^2 + \|\boldsymbol{\theta}_j(t+1)-\boldsymbol{\beta}_{j i} - \rho^{-1}\boldsymbol{\alpha}_{j i}(t)\|_2^2\biggr) \biggr\}.
\end{aligned}
\end{equation}
The update equation \eqref{eq:update_beta} can be implemented on either device $i$ or device $j$, as long as $(\boldsymbol{\theta}_j(t+1), \boldsymbol{\beta}_{j i}(t), \boldsymbol{\alpha}_{j i}(t))$ or $(\boldsymbol{\theta}_i(t+1), \boldsymbol{\beta}_{i j}(t), \boldsymbol{\alpha}_{i j}(t))$, respectively, is transmitted to the corresponding device. We highlight that, for specific choices of $\phi(\cdot)$, for example $\phi(\cdot) = \|\cdot\|_1$ and $\phi(\cdot) = \|\cdot\|_2$, we can obtain an explicit update equation from \eqref{eq:update_beta}, as sketched below. Finally, we update $\boldsymbol{\alpha}_{i j}(t)$ and $\boldsymbol{\alpha}_{j i}(t)$ by
\begin{equation}\label{eq:update_alpha}
\biggl(\begin{matrix}\boldsymbol{\alpha}_{i j}(t+1)\\ \boldsymbol{\alpha}_{j i}(t+1)\end{matrix}\biggr) = \biggl(\begin{matrix}\boldsymbol{\alpha}_{i j}(t)\\ \boldsymbol{\alpha}_{j i}(t)\end{matrix}\biggr) - \rho\biggl(\begin{matrix}\boldsymbol{\theta}_i(t+1) - \boldsymbol{\beta}_{i j}(t+1)\\ \boldsymbol{\theta}_j(t+1) - \boldsymbol{\beta}_{j i}(t+1)\end{matrix}\biggr).
\end{equation}
Notice that the update equation \eqref{eq:update_alpha} also only requires parameter communication among connected devices. Both \eqref{eq:update_beta} and \eqref{eq:update_alpha} can be performed in parallel across edges. We refer to \eqref{eq:ADMM_primal} as the node optimization step, and to \eqref{eq:update_beta} and \eqref{eq:update_alpha} as the edge communication step. Fed-ADMM is summarized in Algorithm \ref{alg:admm}. Our algorithm bears a resemblance to \citet{ouyang2013stochastic} and \citet{suzuki2013dual}, who approximated $m_{i}(\cdot;\boldsymbol{\theta}_i)$ with the linear function $m_{i}(\cdot;\boldsymbol{\theta}_i(t)) + (\boldsymbol{\theta} - \boldsymbol{\theta}_i(t))^T\boldsymbol{\psi}_{i}(\cdot;\boldsymbol{\theta}_i(t))$ and used the proximal method \citep{rockafellar1976augmented} to update $\boldsymbol{\theta}_i(t)$,
\begin{align}
\boldsymbol{\theta}_i(t+1)= \argmin_{\boldsymbol{\theta}\in\boldsymbol{\Xi}}&\biggl\{ \boldsymbol{\theta}^T\widetilde{\mathbf{g}}_i(t)+ \frac{\rho}{2}\sum_{j\in N_i}\|\boldsymbol{\theta}-\boldsymbol{\beta}_{i j}(t) - \rho^{-1}\boldsymbol{\alpha}_{i j}(t)\|_2^2 +\frac{\|\boldsymbol{\theta} - \boldsymbol{\theta}_i(t)\|_2^2}{2\widetilde{\eta}(t)} \biggr\},\label{eq:ouyang_update_theta}
\end{align}
where $\widetilde{\eta}(t)$ is the step size. Different from \eqref{eq:ADMM_primal}, their work was motivated by the linearized proximal point method \citep{xu2011class}. Interestingly, \eqref{eq:ADMM_primal} can be viewed as an extension of \eqref{eq:ouyang_update_theta} with an adaptive learning rate. More specifically, choosing ${\eta}(t) = \widetilde\eta(t)/(1 + \rho|N_i|\widetilde\eta(t))$, one can show that \eqref{eq:ouyang_update_theta} is equivalent to \eqref{eq:ADMM_primal}.
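As an illustration of the explicit update mentioned above, consider $\phi(\cdot)=\|\cdot\|_2$ and write $\bm{a} = \boldsymbol{\theta}_i(t+1) - \rho^{-1}\boldsymbol{\alpha}_{i j}(t)$ and $\bm{b} = \boldsymbol{\theta}_j(t+1) - \rho^{-1}\boldsymbol{\alpha}_{j i}(t)$. Completing the square in \eqref{eq:update_beta} (a sketch of our own, paralleling the network lasso update of \cite{hallac2015network}) gives
\[
\boldsymbol{\beta}_{i j}(t+1) = \frac{\bm{a}+\bm{b}}{2} + \bm{d},\qquad
\boldsymbol{\beta}_{j i}(t+1) = \frac{\bm{a}+\bm{b}}{2} - \bm{d},\qquad
\bm{d} = \Big(1 - \frac{\lambda}{\rho\|(\bm{a}-\bm{b})/2\|_2}\Big)_{+}\frac{\bm{a}-\bm{b}}{2},
\]
a block soft-thresholding of half the difference; for $\phi(\cdot)=\|\cdot\|_1$, the same computation applies with coordinatewise soft-thresholding at level $\lambda/\rho$. In particular, whenever $\|\bm{a}-\bm{b}\|_2\le 2\lambda/\rho$, both auxiliary vectors are set to the common value $(\bm{a}+\bm{b})/2$, which is how the penalty enforces agreement across an edge.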
\begin{algorithm}[htp]
\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}
\Input{Initial values $\boldsymbol{\Theta}(0),\mathbf{B}(0),\malpha(0)$, number of iterations $T$, and the learning rates $\eta(t), 1\le t\le T$.}
\Repeat{$t> T$}{
Sample mini-batches $\mathcal{B}_i(t)$ on device $i$ in parallel\;
Obtain $\boldsymbol{\theta}_i(t + 1)$ on device $i$ by \eqref{eq:ADMM_primal} in parallel for each $i\in V$\;
Broadcast each $\boldsymbol{\theta}_i(t + 1)$ to neighbor devices\;
Obtain $\boldsymbol{\beta}_{i j}(t + 1)$ and $\boldsymbol{\beta}_{j i}(t+1)$ on device $i$ \footnotemark{} by \eqref{eq:update_beta} in parallel for $(i,j)\in E$\;
Obtain $\boldsymbol{\alpha}_{i j}(t + 1)$ and $\boldsymbol{\alpha}_{j i}(t+1)$ on device $i$ by \eqref{eq:update_alpha} in parallel for $(i,j)\in E$\;
Broadcast $(\boldsymbol{\beta}_{i j}(t+1),\boldsymbol{\beta}_{j i}(t+1))$ and $(\boldsymbol{\alpha}_{i j}(t+1),\boldsymbol{\alpha}_{j i}(t+1))$ from device $i$ to device $j$ in parallel for $(i,j)\in E$\;
$t \leftarrow t + 1$
}
\Output{$\overline{\boldsymbol{\Theta}}=T^{-1}\sum_{t=1}^{T}\boldsymbol{\Theta}(t-1)$}
\caption{Decentralized stochastic ADMM}
\label{alg:admm}
\end{algorithm}
\footnotetext{In fact, we can obtain $\boldsymbol{\beta}_{i j}(t + 1)$ and $\boldsymbol{\beta}_{j i}(t+1)$ on either device $i$ or device $j$. Here, we choose device $i$ as the working device without loss of generality. }
In the sequel, we show that the output of Algorithm \ref{alg:admm} converges to the global minimizer of \eqref{eq:lagrange}. Let $\big(\boldsymbol{\theta}_i(t), \boldsymbol{\beta}_{i j}(t), \boldsymbol{\beta}_{j i}(t), \boldsymbol{\alpha}_{i j}(t),\boldsymbol{\alpha}_{j i}(t)\big)$ be the output of Algorithm \ref{alg:admm} in the $t$-th iteration, $0\le t\le T$. Denote by $\big\{\widehat\boldsymbol{\theta}_i,\widehat\boldsymbol{\beta}_{i j},\widehat\boldsymbol{\beta}_{j i},\widehat\boldsymbol{\alpha}_{i j},\widehat\boldsymbol{\alpha}_{j i}; i,j\in V\big\}$ the global minimizer of \eqref{eq:lagrange}. Without loss of generality, we assume that $\big\{\widehat\boldsymbol{\theta}_i,\widehat\boldsymbol{\beta}_{i j},\widehat\boldsymbol{\beta}_{j i}, \boldsymbol{\theta}_i(t), \boldsymbol{\beta}_{i j}(t), \boldsymbol{\beta}_{j i}(t); i,j\in V, 0\le t\le T\big\} \subset \boldsymbol{\Xi}$, since we can always project them onto $\boldsymbol{\Xi}$.
\begin{theorem}\label{thm:conv_alg_admm}
Let $\kappa_{\alpha} = \max_{i\in V, j\in N_i} (\|\boldsymbol{\alpha}_{i j}(0)\|_2^2+\|\widehat{\boldsymbol{\alpha}}_{i j}\|_2^2)$. Suppose that $\kappa_\alpha\ge \lambda \sup_{\bm{a}\neq \mathbf{0}}\phi(\bm{a})\|\bm{a}\|_2^{-1}$, $C_\psi=\max_{i\in V}\sup_{\boldsymbol{\theta}\in\boldsymbol{\Xi}} {n_i}^{-1}\sum_{k=1}^{n_i}\|\boldsymbol{\psi}_i(\mathbf{z}_k^{(i)};\boldsymbol{\theta})\|_2^2<\infty$, and $r_0=\sup_{\bm{a}\in\boldsymbol{\Xi}}\|\bm{a}\|_2<\infty$. Under Condition \ref{con:identifiability} and part (\romannumeral2) of Condition \ref{con:distribution}, by choosing $\eta(t) = \kappa / t$, we have
\[
\frac{1}{|V|}E\left(\left.\|\overline{\boldsymbol{\Theta}}-\widehat{\boldsymbol{\Theta}}\|_F^2\right|\mathbf{z}_k^{(u)}, 1\le k\le n_u, u\in V\right)\le \frac{2\kappa^2 C_{\psi}\log T}{T},
\]
for sufficiently large $T$ such that $\kappa C_{\psi} |V|\log T \ge |E|(8\rho^{-1}\kappa_{\alpha} + 2r_0^2 \rho + 4r_0\kappa_\alpha)$, where the expectation is taken with respect to the choice of mini-batches $\{\mathcal{B}_u(t)\colon u\in V, 1\le t\le T\}$.
\end{theorem}
\subsection{Extension of Fed-ADMM to Heterogeneous Accessibility of Devices}\label{sec:fedadmm_off}
We now consider the case where a proportion of the devices can be inaccessible during the training process. Let $R_i(t)=\mathbf{1}\{\mathrm{device}\ i\ \mathrm{is\ available\ in\ iteration\ }t\}$ for $i\in V$, $t\in\mathbb{N}$, and let $S(t)=\{i: R_i(t)=1\}$ be the set of available devices in the $t$-th iteration. We model the inaccessibility by imposing randomness on $R_i(t)$. Suppose that $R_i(t)$ is a Bernoulli random variable whose mean $p_i$ does not depend on $t$. We assume that availability is independent across iterations, i.e., $\{R_i(t_1); i\in V\}$ is independent of $\{R_i(t_2); i\in V\}$ for any $t_1\neq t_2$. We allow heterogeneous offline rates among devices, i.e., $p_i\neq p_j$ for $i\neq j$, and we allow dependence among $(R_i(t), i\in V)$ for fixed $t$. Let $p_0=\min_{i\in V} p_i>0$. To begin with, we presume the existence of a central machine $\mathcal{S}$ and known $p_i, i\in V$. In the $t$-th iteration, for the node optimization step, device $i$ needs local samples to produce an unbiased estimator of $\nabla_{\boldsymbol{\theta}_i} \widehat{M}_i(\boldsymbol{\theta}_i(t))$. Motivated by the inverse probability weighting method \citep{wooldridge2007inverse,mansournia2016inverse} from the causal inference literature, an unbiased estimator of $\nabla_{\boldsymbol{\theta}_i} \widehat{M}_i(\boldsymbol{\theta}_i(t))$ is $|\mathcal{B}_i(t)|^{-1}\sum_{b\in \mathcal{B}_i(t)}p_i^{-1}\boldsymbol{\psi}_{i}(\mathbf{z}_b^{(i)};\boldsymbol{\theta}_i(t))R_i(t)$, owing to the independence between $R_i(t)$ and $\mathcal{B}_i(t)$. Therefore, we modify \eqref{eq:ADMM_primal} as
\begin{subequations}
\begin{align}
&\boldsymbol{\theta}_i(t+1) = \boldsymbol{\theta}_i(t) - \eta(t)\left\{\frac{1}{|\mathcal{B}_i(t)|}\sum_{b\in \mathcal{B}_i(t)}p_i^{-1}\boldsymbol{\psi}_{i}(\mathbf{z}_b^{(i)};\boldsymbol{\theta}_i(t))\right.\notag\\
&\quad\quad\quad\quad\quad\quad\quad\left.+ \rho\sum_{j\in N_i} (\boldsymbol{\theta}_i(t) - \boldsymbol{\beta}_{i j}(t) - \rho^{-1}\boldsymbol{\alpha}_{i j}(t)) \right\}\quad\mathrm{on\ device}\ i\ \mathrm{with}\ R_i(t) = 1;\label{eq:update_theta_online}\\
&\boldsymbol{\theta}_i(t+1) = \boldsymbol{\theta}_i(t) - \eta(t)\rho\sum_{j\in N_i} (\boldsymbol{\theta}_i(t) - \boldsymbol{\beta}_{i j}(t) - \rho^{-1}\boldsymbol{\alpha}_{i j}(t)) \quad\mathrm{on}\ \mathcal{S}\ \mathrm{with}\ R_i(t) = 0,\label{eq:update_theta_offline}
\end{align}
\end{subequations}
where for \eqref{eq:update_theta_online} we need to send $\{\boldsymbol{\beta}_{i j}(t),\boldsymbol{\alpha}_{i j}(t); j\in N_i\}$ from $\mathcal{S}$ to device $i$, and then send $\boldsymbol{\theta}_i(t+1)$ back from device $i$ to $\mathcal{S}$. This procedure applies to all devices in $S(t)$. After the $\boldsymbol{\theta}_i(t+1)\ (i\in V)$ have been updated and transmitted to $\mathcal{S}$, we can compute $\big(\boldsymbol{\beta}_{i j}(t+1),\boldsymbol{\beta}_{j i}(t+1)\big)$ by \eqref{eq:update_beta} and $\big(\boldsymbol{\alpha}_{i j}(t+1),\boldsymbol{\alpha}_{j i}(t+1)\big)$ by \eqref{eq:update_alpha} on $\mathcal{S}$. Similar to Theorem \ref{thm:conv_alg_admm}, we obtain the convergence rate of Algorithm \ref{alg:admm_offline}, which is given in the Supplementary Material.
\begin{corollary}\label{coro:conv_alg_admm}
Under the same conditions as Theorem \ref{thm:conv_alg_admm}, for the output of Algorithm \ref{alg:admm_offline} with known $\{p_i, i\in V\}$ and $p_0=\min_{i\in V}p_i>0$, we have
\[
\frac{1}{|V|}E\left(\left.\|\overline{\boldsymbol{\Theta}}-\widehat{\boldsymbol{\Theta}}\|_F^2\right|\mathbf{z}_k^{(u)}, 1\le k\le n_u, u\in V\right)\le \frac{2\kappa^2 C_{\psi}\log T}{p_0 T},
\]
for sufficiently large $T$, where the expectation is taken with respect to the choice of mini-batches $\{\mathcal{B}_u(t)\colon u\in V, 1\le t\le T\}$.
\end{corollary}
\section{Simulations}\label{sec:simulation}
In this section, we evaluate the performance of five methods, Fed-ADMM, Fed-ADMM-ES, Oracle, Local, and Global, in various settings. Here, for a specific graph $G=(V,E)$, Fed-ADMM denotes the output of Algorithm \ref{alg:admm} with $G$; Fed-ADMM-ES denotes the output of Algorithm \ref{alg:admm} after applying the adaptive edge selection procedure to $G=(V,E)$; Oracle denotes the output of Algorithm \ref{alg:admm} with the characteristic graph $G_0$; Local denotes the local estimator defined in \eqref{eq:naive_est}; and Global denotes the estimator obtained by ignoring the heterogeneity, i.e.,
\[
\widehat{\boldsymbol{\Theta}}^{\mathrm{gl}} = \argmin_{\boldsymbol{\Theta}} \frac{1}{|V|}\sum_{u\in V}\sum_{k=1}^{n_u} m_u(\mathbf{z}_k^{(u)};\boldsymbol{\theta}_u)\quad\mathrm{s.t.}\ \boldsymbol{\theta}_i=\boldsymbol{\theta}_j, \forall \{i,j\}\subset V.
\]
The performance metric is the average squared estimation error, i.e., $\|\widehat{\boldsymbol{\Theta}}-\boldsymbol{\Theta}^*\|_F^2/|V|$. We also compare the algorithmic convergence rates of Fed-ADMM and vanilla stochastic gradient descent (SGD) for solving the objective function \eqref{eq:penalized_m}, i.e., the total number of iterations required to converge. We first introduce the data generating processes of our simulations. We consider linear regression tasks on each device, where the covariates $\mathbf{x}_k^{(u)}\sim N_p(0,\bm{I}_p)$ and noises $\varepsilon_k^{(u)}\sim N(0,1)$ independently for $k=1,\ldots,n_u; u\in V$. The characteristic graph $G_0$ is generated by evenly partitioning $V$ into $K_0$ subsets, $V_1,\ldots,V_{K_0}$, and constructing a complete subgraph on each node set $V_j$. For each $V_j$ and $u\in V_j$, the responses are $\textnormal{y}_k^{(u)} = \big(\mathbf{x}_k^{(u)}\big)^T \boldsymbol{\vartheta}^{(j)} + \varepsilon_k^{(u)}$, where the $\boldsymbol{\vartheta}^{(j)}\ (1\le j\le K_0)$ are sampled independently from a Gaussian distribution with mean $0$ and covariance matrix $p^{-1/2}\bm{I}_p$. We store the adjacency matrix $\boldsymbol{\Lambda}_0$ of $G_0$.
\begin{figure}[htb]
\centering
\includegraphics[width=\textwidth]{S1.eps}
\caption{Average estimation error of Fed-ADMM, Fed-ADMM-ES, and the Oracle, Local, and Global estimators in linear regression, with randomly corrupted graphs. The number of clusters $K$ is fixed to be $5$ in all settings. The two rows correspond to corruption levels $\varrho=0.1$ and $0.2$.}
\label{fig:S1}
\end{figure}
We then generate $G=(V,E)$ by randomly corrupting $G_0$ with \textit{corruption level} $\varrho>0$. By sampling Bernoulli random variables $\textnormal{e}_{i j}$ independently with mean $\varrho$, we flip the connection status between device $i$ and device $j$ in $G_0$ if $\textnormal{e}_{i j}=1$; otherwise we keep it intact.
More specifically, we directly generate the adjacency matrix $\boldsymbol{\Lambda}(\varrho)$ by $ \boldsymbol{\Lambda}_{i j}(\varrho)=\boldsymbol{\Lambda}_{j i}(\varrho) = \textnormal{e}_{i j}\{1 - (\boldsymbol{\Lambda}_0)_{i j}\} + (1-\textnormal{e}_{i j})(\boldsymbol{\Lambda}_0)_{i j}$ for $i, j\in V$ with $i<j$, and choose $G$ as the graph defined by $\boldsymbol{\Lambda}(\varrho)$. The deviation between $G$ and $G_0$ can be adjusted through the choice of $\varrho$. Indeed, $|E\setminus E_0|+ |E_0\setminus E| = \sum_{i< j}\textnormal{e}_{i j}=\varrho|V|(|V| - 1) / 2 + O_p(\varrho|V|\log|V|)$ by Hoeffding's inequality. We choose $m_u(\mathbf{z}_k^{(u)};\boldsymbol{\theta})=\big(\textnormal{y}_k^{(u)}-\big(\mathbf{x}_k^{(u)}\big)^T \boldsymbol{\theta} \big)^2/2$ and $\phi(\cdot)=\|\cdot\|_1$ in \eqref{eq:ob}. The regularization parameter $\lambda$ is tuned by cross-validation.
\subsection{Estimation Error}
We compare the average estimation error of the five estimators above. For simplicity, we set $n_i = n$ for all $i\in V$. We choose $|V|\in \{20, 40, 60, 80, 100\}$ and $n\in \{50, 100, 200\}$, and fix $K=5$ and $p=20$. We then conduct simulations with corruption levels $\varrho=0.1$ and $\varrho=0.2$. In Figure \ref{fig:S1}, we report the average squared estimation error of each estimator. The first and second rows correspond to corruption levels $\varrho=0.1$ and $\varrho=0.2$, respectively. Each point is the mean of $100$ independent replications. The performance of Local and Global does not depend on $|V|$ in any setting, while the estimation error of Oracle decreases as $|V|$ increases. In the case $\varrho=0.1$, Fed-ADMM has smaller estimation errors than the local estimator, which shows the benefit of data federation. However, there still exists a non-vanishing performance gap between Fed-ADMM and the oracle estimator. Notably, the average error of Fed-ADMM does not decrease as $|V|$ increases: since the corruption level $\varrho$ is fixed as $|V|$ grows, the expected number of wrong edges increases as well, which hinders the performance, as shown in Theorem \ref{thm:lam_rho}. As shown in the second row of Figure \ref{fig:S1}, Fed-ADMM performs even worse than the local estimator when the graph is more heavily corrupted. We highlight that this issue can be resolved efficiently by edge selection. In all settings, Fed-ADMM-ES (Fed-ADMM with edge selection) outperforms Fed-ADMM and the local estimator, and is almost indistinguishable from Oracle when $n$ is large. This suggests that most of the misleading information in the graph can be eliminated by the edge selection procedure proposed in Section \ref{sec:edge_selection}.
\subsection{Sensitivity Analysis of Graphs}
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth]{S3.eps}
\caption{Average estimation error of Fed-ADMM, Fed-ADMM-ES, Fed-ADMM-Local-ES, Oracle, Local, and Global in linear regression, with randomly corrupted graphs. The $x$-axis corresponds to the corruption level $\varrho$. We fix $K=5$. The two rows correspond to $n=50$ and $100.$}
\label{fig:S3}
\end{figure}
To further study how the performance of Fed-ADMM degrades as $\varrho$ increases, we let $\varrho$ vary from $0$ to $0.9$ in steps of $0.1$, and compare the average estimation errors of Fed-ADMM and Fed-ADMM-ES. Besides, we also consider edge selection based only on the local estimators (Fed-ADMM-Local-ES).
That is, we perform edge selection using only the local estimators on each device, without using any information from the given graph. Notice that the estimation error of Fed-ADMM-Local-ES does not depend on $\varrho$, no matter how misleading the given graph $G$ is. On the other hand, this method cannot benefit from $G$ when $G$ is close to $G_0$. We set $K=5$, $|V|\in\{40, 60, 80\}$, and $n\in\{50, 100\}$. The results are reported in Figure \ref{fig:S3}. Each point is summarized from $100$ independent replications. The average estimation error of Fed-ADMM increases rapidly as $\varrho$ increases and exceeds that of the local estimator when $\varrho\geq 0.2$. In contrast, the performance of Fed-ADMM-ES is much more robust. Even when $\varrho=0.9$, i.e., the adjacency matrix is almost completely misleading, it still outperforms the local estimator. Besides, as expected, when $\varrho$ is small, i.e., $\boldsymbol{\Lambda}(\varrho)$ is close to the true adjacency matrix $\boldsymbol{\Lambda}_0$, Fed-ADMM-ES is better than Fed-ADMM-Local-ES; otherwise, using the local estimators to construct the incidence matrix is the more reasonable choice.
\subsection{Algorithmic Convergence Rates}
In this section, we study the algorithmic convergence rates of Fed-ADMM and its variants. Vanilla gradient descent (GD) and stochastic gradient descent (SGD), which have convergence rates of order $1/\sqrt{T}$ in this convex but nonsmooth setting, are also considered for comparison. For $K=5$, $p=20$, $n\in\{50, 100\}$, and $|V|\in\{40,60,80\}$, we run Fed-ADMM with full batches, Fed-ADMM with batch size $10$, GD, and SGD with batch size $10$. The results are reported in Figure \ref{fig:S2}. The $x$-axis and $y$-axis correspond to the number of optimization iterations and the average estimation error, respectively. In all settings, Fed-ADMM converges much faster than SGD, and Fed-ADMM with full batches converges much faster than GD.
\section{A Real-Data Study}\label{sec:real}
In this section, we use our proposed method to study the 2020 US presidential election results. The 2020 county-level election results are available from \url{https://github.com/tonmcg/US_County_Level_Election_Results_08-20}, and the county-level information can be derived from \url{https://www.kaggle.com/benhamner/2016-us-election}. The original data include 51 states, 3111 counties, and 52 county-level predictors. We treat each state as a device and its counties as its samples. We use the county-level information as the predictors and the election result of each county as the response. If the Democrats win a county, its label is encoded as $1$, and otherwise as $0$. We then use logistic regression to predict the election results. Due to limited data availability, not all states have a large enough sample size. We therefore include only the states with more than 50 counties, resulting in a total of $29$ selected states. We consider the following two approaches to obtain the graph used in Fed-ADMM: (a) use historical election results prior to 2016 to classify the states, yielding $\widehat{\mathbf{D}}_{\mathrm{his}}$; (b) use the local estimators of the states for edge selection, yielding $\widehat{\mathbf{D}}_{\mathrm{loc}}$. Here, $\widehat{\mathbf{D}}_{\mathrm{his}}$ is generated from the division into red, blue, and swing states. We use the local estimator and the global estimator as baselines.
We randomly select $2/3$ of the counties as training data, and the remaining counties serve as test samples for measuring prediction accuracy, defined as the proportion of correctly classified samples among all test samples. We repeat this procedure $50$ times and report the mean and standard deviation of each method's accuracy in Table \ref{tab:acc}.
\begin{table}[h]
\centering
\fontsize{8}{11}\selectfont
\caption{Accuracy (mean (standard deviation)) of Local, Global, and Fed-ADMM. }
\begin{tabular}{ccccc}
\toprule
\multicolumn{1}{c}{\multirow{2}{*}{Methods}} & \multirow{2}{*}{Local} & \multirow{2}{*}{Global} & \multicolumn{2}{c}{Fed-ADMM} \cr
\cmidrule(lr){4-5}
& & & $\widehat{\mathbf{D}}_{\mathrm{loc}}$ & $\widehat{\mathbf{D}}_{\mathrm{his}}$ \cr
\cmidrule(lr){1-5}
Accuracy & 0.741(0.034) & 0.752(0.012) & 0.793(0.019) & 0.742(0.011) \cr
\bottomrule
\end{tabular}\vspace{0cm}
\label{tab:acc}
\end{table}
Table \ref{tab:acc} shows that Fed-ADMM with $\widehat{\mathbf{D}}_{\mathrm{loc}}$ performs best. The global estimator outperforms the local one, suggesting that the heterogeneity among the considered states is not strong. Fed-ADMM with $\widehat{\mathbf{D}}_{\mathrm{his}}$ performs on par with the local estimator, which indicates that the heterogeneity does not mainly come from the division into red, blue, and swing states. Beyond prediction performance, we are also interested in the graph obtained by edge selection, which reflects the similarity between states in the 2020 presidential election. We plot the clustering of the $29$ considered states derived from $\widehat{\mathbf{D}}_{\mathrm{loc}}$ in Figure \ref{fig:map} in the Supplementary Material; a brief discussion can also be found therein.
\section{Discussion}
In this work, we consider parameter estimation across multiple devices, under strong constraints including the prohibition of data sharing, data distribution heterogeneity, limited computational capacity, and unstable accessibility of local devices. Under a general $M$-estimation framework, we propose an efficient, scalable, and decentralized algorithm, and we provide the convergence rate of our estimator in terms of the Frobenius norm. The clustering consistency, i.e., model selection consistency, of our estimator, as well as the statistical properties of the post-clustering estimator, can be analyzed in future work. High-dimensional parameter estimation and inference have received much less attention in the federated learning literature. Recent works \citep{battey2018,fan2021,cai2021shir} considered high-dimensional parameter estimation and inference under distributed settings without sharing local data. However, they need to pre-compute a local estimator on each device, which requires strong computational capacity of the local devices. Our method can be extended to high-dimensional settings, under all the aforementioned federated learning constraints, by simply adding an extra $\ell_1$-regularization term for each parameter, i.e.,
\[
\widehat{\boldsymbol{\Theta}}=\argmin_{\boldsymbol{\Theta}} \frac{1}{|V|}\sum_{i\in V} \big(\widehat{M}_i(\boldsymbol{\theta}_i) + \lambda_i \|\boldsymbol{\theta}_i\|_1\big)+ \lambda R(\bm{D}\boldsymbol{\Theta}).
\]
As discussed before, statistical inference in this case still requires further investigation.
\clearpage
\bibliographystyle{jasa}
{ "timestamp": "2022-09-20T02:23:56", "yymm": "2209", "arxiv_id": "2209.08737", "language": "en", "url": "https://arxiv.org/abs/2209.08737" }
\section{Notation} \label{notation} Let $k$ be a field (any characteristic is allowed). A symmetric bilinear form over $k$ means a finite-dimensional vector space with a nondegenerate symmetric bilinear form. For $k$ of characteristic not 2, these can be identified with quadratic forms. We write $\langle a_1,\ldots,a_n\rangle$ for the diagonal form $a_1 x_1y_1+\cdots+a_n x_ny_n$. The Grothendieck-Witt ring $GW(k)$ is the Grothendieck group of symmetric bilinear forms over $k$, with addition corresponding to orthogonal direct sum and multiplication to tensor product. For $k$ of characteristic not 2, Witt's cancellation theorem says that two symmetric bilinear forms over $k$ are isomorphic if and only if they have the same class in $GW(k)$ \cite[section II.1]{Lam}. The Witt ring $W(k)$ is the quotient of $GW(k)$ by the subgroup generated by the hyperbolic plane $\bf H$; this subgroup is in fact an ideal. For $k$ of any characteristic, the bilinear form $\langle 1,-1\rangle$ is zero in $W(k)$, by the isomorphism $\langle 1,1,-1\rangle\cong \langle 1\rangle\perp {\bf H}$ \cite[equation 1.16]{EKM}. We can identify the ideal $I=\ker(W(k)\rightarrow \text{\bf Z}/2)$ of even-dimensional forms with $\ker(\text{dim} : GW(k)\rightarrow \text{\bf Z})$. The Grothendieck-Witt ring $GW(k)$ has the advantage of being a $\lambda$-ring, using exterior powers of symmetric bilinear forms \cite[section 27.1]{GMS}. A reference on $\lambda$-rings is \cite{AT}. As is now standard, we call a $\lambda$-ring what \cite{AT} calls a special $\lambda$-ring. Following \cite[Definition 3.1]{BObook}, a divided power structure on an ideal $I$ in a commutative ring $R$ is a collection of functions $\gamma_n:I\rightarrow R$ for $n\geq 0$ such that: (1) $\gamma_0(x)=1$, $\gamma_1(x)=x$, $\gamma_n(x)\in I$ for $n>0$ and $x\in I$. (2) $\gamma_n(x+y)=\sum_{i=0}^n \gamma_i(x)\gamma_{n-i}(y)$ for $x,y\in I$. (3) $\gamma_n(ax)=a^n\gamma_n(x)$ for $a\in R$, $x\in I$. (4) $\gamma_m(x)\gamma_n(x)=\binom{m+n}{m} \gamma_{m+n}(x)$ for $x\in I$. (5) $\gamma_n(\gamma_m(x))=\frac{(mn)!}{(m!)^n n!}\gamma_{mn}(x)$ for $x\in I$. (The coefficient is an integer.) Here relation (4) implies that $n!\, \gamma_n(x)=x^n$, and so every ideal in a commutative $\text{\bf Q}$-algebra has a unique divided power structure defined by $\gamma_n(x)=x^n/n!$; this explains where the axioms come from. \section{The formula for divided powers} We give a formula for the divided power structure on the ideal $I$ of even-dimensional forms in the Witt ring localized at 2. In general, it is necessary to localize the Witt ring at 2 to have divided powers, as one sees by looking at the Witt ring of the real numbers, $W(\text{\bf R})=\text{\bf Z}$. Note that localizing at 2 makes very little difference for Witt rings. Indeed, Pfister showed that there is no odd torsion in $W(k)$, and so $W(k)$ always injects into $W(k)_{(2)}$. Moreover, if $-1$ is a sum of squares in $k$ (which holds in all fields of positive characteristic, for example) then $W(k)$ is killed by some power of 2 and hence is equal to its localization at 2 \cite[Theorem VIII.3.2]{Lam}. \begin{theorem} \label{main} Let $k$ be a field. Then the ideal $I_{(2)} =\ker(W(k)\rightarrow \text{\bf Z}/2)_{(2)}$ in the Witt ring localized at 2 has a divided power structure defined by $$\gamma_n(x)=\sum_{i\geq 0} (-1)^{(n-i)/2} (i!/n!) T(n,i)\lambda^i(x),$$ where $T(n,i)$ are the ``tangent numbers'' defined by $$\frac{(\tan t)^i}{i!}=\sum_{n\geq i}T(n,i)t^n/n! 
.$$ Here we identify $I_{(2)}$ with $\ker(GW(k)\rightarrow \text{\bf Z})_{(2)}$, in which exterior powers are defined. The coefficients in the formula for $\gamma_n(x)$ are 2-local integers, so that the formula makes sense in $W(k)_{(2)}$. \end{theorem} Note that the divided power operations are not the ``gamma operations'' which exist in any $\lambda$-ring. The sign in the formula makes sense because $T(n,i)$ is nonzero only for $i\equiv n\pmod{2}$. Various formulas for tangent numbers are discussed by Comtet \cite[pp.~259--260]{Comtet}, from which the following table is taken. Many references on tangent numbers can be found in \cite{Sloane}. $$\begin{tabular}{r|rrrrrrrrrr} $n\backslash i$ & 0&1&2&3&4&5&6&7&8&9 \\ \hline 0&1 & &&&&&&&& \\ 1& & 1 &&&&&&&& \\ 2&& & 1 &&&&&&& \\ 3&& 2 & & 1 &&&&&& \\ 4&& & 8 & & 1 &&&&& \\ 5&& 16 & & 20 && 1 &&&& \\ 6&&& 136 && 40 && 1 &&& \\ 7&& 272 && 616 && 70 && 1 && \\ 8&&& 3968 && 2016 && 112 && 1 & \\ 9&& 7936 && 28160 && 5376 && 168 && 1 \end{tabular}$$ The numbers in column 1, giving the coefficients of the Taylor series of the tangent function, are closely related to Bernoulli numbers, in the sense that $$T(2n+1,1)=(-1)^{n-1}B_{2n}4^n(4^n-1)/2n.$$ Therefore, one cannot expect too simple a formula for the coefficients of $\gamma_n$ in terms of exterior powers, for an arbitrary field. The first few formulas are: \begin{align*} \gamma_1(x)&=x,\\ \gamma_2(x)&=\lambda^2(x),\\ \gamma_3(x)&=\lambda^3(x)-(1/3)x,\\ \gamma_4(x)&=\lambda^4(x)-(2/3)\lambda^2(x). \end{align*} On the other hand, for fields in which $-1$ is a square, Corollary \ref{binom} gives a simple formula for the divided powers $\gamma_n$. {\bf Proof of Theorem \ref{main}. }The axioms imply that a divided power structure on an ideal $I$ in a commutative $\text{\bf Z}_{(2)}$-algebra $R$ is uniquely determined by the operation $\gamma_2$. Moreover, a function $\gamma_2:I\rightarrow I$ gives a divided power structure exactly when it satisfies $\gamma_2(x+y)=\gamma_2(x)+xy+\gamma_2(y)$ and $\gamma_2(ax)=a^2\gamma_2(x)$ for $x,y\in I$, $a\in R$. See Berthelot-Ogus \cite[Appendix]{BOpaper} for the analogous description of divided power structures in $\text{\bf Z}_{(p)}$-algebras for any prime number $p$. (They also assume that $2\gamma_2(x)=x^2$, but that follows from the formulas mentioned.) Thus we can define a divided power structure on the ideal $I_{(2)}=\ker(GW(k)\rightarrow \text{\bf Z})_{(2)}$ in $GW(k)_{(2)}$ by declaring that $\gamma_2(x)=\lambda^2(x)$ for $x\in I_{(2)}$. Here the first identity $\lambda^2(x+y)=\lambda^2(x)+xy+\lambda^2(y)$ is a standard property of exterior powers (it holds in any $\lambda$-ring). The second property is special to the Grothendieck-Witt ring. Indeed, for all elements $a,x$ of a $\lambda$-ring, we have $$\lambda^2(ax)=a^2\lambda^2(x)+\lambda^2(a)\psi^2(x),$$ where $\psi^2$ is the Adams operation defined by $\psi^2(x)=x^2-2\lambda^2(x)$. So it suffices to show that $\psi^2$ is zero on the ideal $I$ in $GW(k)$. Since Adams operations are ring homomorphisms, this is easy. View any element of $I$ as the difference of two symmetric bilinear forms of the same dimension, and we have: \begin{align*} \psi^2(\langle a_1,\ldots,a_n\rangle -\langle b_1,\ldots,b_n\rangle)&= \langle a_1^2,\ldots,a_n^2\rangle -\langle b_1^2,\ldots,b_n^2\rangle\\ &= \langle 1,\ldots, 1\rangle - \langle 1,\ldots, 1\rangle \\ &= 0. \end{align*} Thus we have constructed a divided power structure on the ideal $I_{(2)}=\ker(GW(k)\rightarrow \text{\bf Z})_{(2)}$.
This is the same divided power structure as that constructed by Marshall \cite{Marshall}. It is immediate that these operations give a divided power structure on $I_{(2)}$ as an ideal in the Witt ring, $I_{(2)}=\ker(W(k)\rightarrow \text{\bf Z}/2)_{(2)}$. From the axioms for a divided power structure, all the operations $\gamma_n$ on $I_{(2)}$ are $\text{\bf Z}_{(2)}$-polynomials in iterates of $\gamma_2 =\lambda^2$. But the Grothendieck-Witt ring of a field has extra properties not valid in an arbitrary $\lambda$-ring. In particular, $\lambda^a(x)\lambda^b(x)$ is a $\text{\bf Z}$-linear combination of the exterior powers $\lambda^m(x)$ for $m\leq a+b$, and $\lambda^a(\lambda^b(x))$ is a $\text{\bf Z}$-linear combination of $\lambda^m(x)$ for $m\leq ab$ \cite[chapter 27]{GMS}. Therefore we have $$\gamma_n(x)=\sum_{i\leq n}a(n,i)\lambda^i(x)$$ for some universal coefficients $a(n,i)$ in $\text{\bf Z}_{(2)}$ (independent of the field $k$). We want to determine these coefficients. We first give the formulas for $\lambda^a(x)\lambda^b(x)$ and $\lambda^2(\lambda^a(x))$, which can be proved by a direct computation using diagonal forms. \begin{lemma} \label{lambda} Let $k$ be a field. For any $x\in I=\ker(GW(k)\rightarrow \text{\bf Z})$, $$\lambda^a(x)\lambda^b(x)=\sum_{j\geq 0} \binom{a+b-2j}{a-j} \binom{-a-b+2j}{j}\lambda^{a+b-2j}(x)$$ and $$\lambda^2(\lambda^a(x))=\sum_{j\geq 0}\frac{1}{2} \binom{2a-2j}{a-j}\binom{-2a+2j}{j} \lambda^{2a-2j}(x).$$ The coefficients in these formulas are integers. \end{lemma} To determine the coefficients for $\gamma_n(x)$, we can work in a field $k$ in which $x,\lambda^2(x),\ldots,\lambda^n(x)$ are linearly independent in $W(k)\otimes\text{\bf Q}$ for some $x\in I$ (or, equivalently, $x,x^2,\ldots,x^n$ are linearly independent in $W(k)\otimes\text{\bf Q}$). For example, this holds for $k$ a suitable rational function field over the real numbers. In this situation, we do not lose any information by tensoring $W(k)$ with the rationals, where the divided powers are simply $\gamma_n(x)=x^n/n!$. In particular, we have $\gamma_{n+1}(x)=x\gamma_n(x)/(n+1)$ for all $n\geq 0$. Also, we have $$x\lambda^i(x)=(i+1)\lambda^{i+1}(x)-(i-1)\lambda^{i-1}(x)$$ by Lemma \ref{lambda}. This gives a recurrence relation for the numbers $a(n,i)\in \text{\bf Z}_{(2)}$: $$a(n+1,i)=\frac{i}{n+1}(a(n,i-1)-a(n,i+1)).$$ Since $\gamma_0(x)=1$, the numbers $a(0,i)$ in the 0th row are 1 for $i=0$ and $0$ otherwise. By the recurrence relation, we have $a(n,i)=0$ unless $n\equiv i\pmod{2}$. The statement we are trying to prove suggests defining rational numbers $b(n,i)$ by $a(n,i)=(-1)^{(n-i)/2} (i!/n!) b(n,i)$ for $i,n\geq 0$. The recurrence relation for $a(n,i)$ implies that $$b(n+1,i)=b(n,i-1)+i(i+1)b(n,i+1).$$ But this is exactly the recurrence relation satisfied by the tangent numbers $T(n,i)$ defined by $$(\tan t)^i/i! = \sum_{n\geq i}T(n,i)t^n/n!$$ \cite[p.~259]{Comtet}. To check that, differentiate this formula for $(\tan t)^i/i!$, using that the derivative of $\tan t$ is $1+(\tan t)^2$. \ QED \section{Divided powers when $-1$ is a square} We now show that the coefficients in the formula for divided powers in the Witt ring simplify to binomial coefficients modulo 2. This is relevant to fields $k$ in which $-1$ is a square, since then $W(k)$ is an $\text{\bf F}_2$-algebra. \begin{corollary} \label{binom} Let $k$ be a field in which $-1$ is a square. 
Then the ideal $I=\ker(W(k)\rightarrow \text{\bf Z}/2)$ has a divided power structure defined by the formula $$\gamma_n(x)=\sum_j \binom{n}{j}\lambda^{n-2j}(x).$$ Here we identify $I$ with $\ker(GW(k)\rightarrow \text{\bf Z})$, in which exterior powers are defined. \end{corollary} For example, when $-1$ is a square in $k$, we have: \begin{align*} \gamma_1(x)&=x\\ \gamma_2(x)&=\lambda^2(x)\\ \gamma_3(x)&=\lambda^3(x)+x\\ \gamma_4(x)&=\lambda^4(x). \end{align*} {\bf Proof. }As shown in the proof of Theorem \ref{main}, $I_{(2)}=\ker(GW(k)\rightarrow \text{\bf Z})_{(2)}$ has a unique divided power structure such that $\gamma_2(x)=\lambda^2(x)$. Since we now assume that $-1$ is a square in $k$, $I=I_{(2)}$ is killed by 2. By repeated application of the formula for $\gamma_m(\gamma_n(x))$ in a divided power ideal, it follows that $\gamma_{2^r}(x)=(\gamma_2)^r(x)$ for $x\in I$. Next, by Lemma \ref{lambda}, $$\lambda^2(\lambda^{2^{r}}(x))=\sum_{j\geq 0}\frac{1}{2} \binom{2^{r+1}-2j}{2^{r}-j}\binom{-2^{r+1}+2j}{j} \lambda^{2^{r+1}-2j}(x).$$ Consider the first binomial coefficient here: $(1/2)\binom{2n}{n}$ is 0 modulo 2 except when $n$ is a power of 2, where it is 1 modulo 2. Since we are working modulo 2 (as $-1$ is a square in $k$), most terms in the sum disappear and we have $$\lambda^2(\lambda^{2^{r}}(x))=\sum_{s=0}^r \binom{-2^{s+1}} {2^r-2^s}\lambda^{2^{s+1}}(x).$$ The coefficient here is 0 modulo 2 for $s<r$ and 1 modulo 2 for $s=r$. We conclude that $$\lambda^2(\lambda^{2^r}(x))=\lambda^{2^{r+1}}(x).$$ Therefore, by induction, $\gamma_{2^r}(x)=(\lambda^2)^r(x)=\lambda^{2^r}(x)$ for $x\in I$. This proves Corollary \ref{binom} for $n=2^r$, since $\binom{2^r}{j}=0\pmod {2}$ for $0<j<2^r$. Let $n=2^{i_0}+\cdots+2^{i_r}$ be the binary expansion of $n$. Then $\gamma_n(x)$ is a constant in $\text{\bf Z}_{(2)}^*$ times $\gamma_{2^{i_0}}(x)\cdots\gamma_{2^{i_r}}(x)$ for $x\in I$. Since we can work modulo 2, \begin{align*} \gamma_n(x)&=\gamma_{2^{i_0}}(x)\cdots\gamma_{2^{i_r}}(x)\\ &= \lambda^{2^{i_0}}(x)\cdots\lambda^{2^{i_r}}(x). \end{align*} We want to show that this equals $\sum_{j\geq 0} \binom{n}{j}\lambda^{n-2j}(x)$. We prove this formula by induction on the number of ones in the binary expansion of $n$. Thus, we suppose that the formula holds for $m:=2^{i_1}+\cdots + 2^{i_r}$, and we will prove it for $n=2^{i_0}+m$, where $i_0<i_1< \cdots < i_r$. We have, for $x\in I$, \begin{align*} \gamma_n(x) &= \lambda^{2^{i_0}}(x)\cdots\lambda^{2^{i_r}}(x) \\ &= \lambda^{2^{i_0}}(x)\sum_{j\geq 0}\binom{m}{j}\lambda^{m-2j}(x)\\ &= \sum_{j,l\geq 0} \binom{m}{j} \binom{2^{i_0}+m-2j-2l}{2^{i_0}-l} \binom{-2^{i_0}-m+2j+2l}{l}\lambda^{n-2j-2l}(x), \end{align*} using Lemma \ref{lambda}. Here $\binom{m}{j}$ is 0 modulo 2 unless $2^{i_1}|j$, since $m$ is a multiple of $2^{i_1}$; so we can assume that $2^{i_1}$ divides $j$ in the sum. So $2^{i_0}|(2^{i_0}+m-2j)$. If $2^{i_0}\nmid l$, then the bottom number in the binomial coefficient $\binom{2^{i_0}+m-2j-2l}{2^{i_0}-l}$ has lowest binary digit 1 where the top number has digit 0, and so the binomial coefficient is 0 modulo 2. So we can also assume that $2^{i_0}| l$ in the above sum. But the binomial coefficient just mentioned is also 0 if $l>2^{i_0}$, and so we have $l=0$ or $l=2^{i_0}$. That is, $$\gamma_n(x)=\sum_{j\geq 0} \binom{m}{j}\binom{n-2j}{2^{i_0}} \lambda^{n-2j}(x)+\sum_{j\geq 0}\binom{m}{j} \binom{2^{i_0}-m+2j} {2^{i_0}}\lambda^{n-2^{i_0+1}-2j}(x).$$ As we mentioned, the terms in these sums are 0 unless $2^{i_1}|j$. 
Given that $2^{i_1}|j$, we have $n-2j\equiv 2^{i_0}\pmod{2^{i_0+1}}$, and hence the binomial coefficient $\binom{n-2j}{2^{i_0}}$ in the left sum is 1 modulo 2 for $0\leq j\leq n/2$ and $2^{i_1}|j$. (We only need to consider $j$ at most $n/2$ since we are studying the coefficient of $\lambda^{n-2j}(x)$.) Likewise for the negative binomial coefficient in the right sum, above: $2^{i_1}$ divides $m-2j$, and so $\binom{2^{i_0}-m+2j}{2^{i_0}}$ is 1 modulo 2 for $0\leq j\leq n/2$ and $2^{i_1}|j$. Thus \begin{align*} \gamma_n(x)&=\sum_{j\geq 0}\binom{m}{j}\lambda^{n-2j}(x) +\sum_{j\geq 0}\binom{m}{j}\lambda^{n-2^{i_0+1}-2j}(x)\\ &=\sum_{j\geq 0}\bigg[ \binom{m}{j}+\binom{m}{j-2^{i_0}} \bigg] \lambda^{n-2j}(x). \end{align*} The coefficient in the last sum is 0 modulo 2 unless $2^{i_0}|j$; note that the index $j$ in this sum is either the $j$ in the previous sums, which is a multiple of $2^{i_1}$, or else the old $j$ plus $2^{i_0}$. We know that $\binom{u}{v}+\binom{u}{v-1}=\binom{u+1}{v}$. Since multiplying the top and bottom numbers by a power of 2 does not change a binomial coefficient modulo 2, we have $\binom{2^{i}u}{2^{i}v}+\binom{2^{i}u} {2^{i}v-2^{i}}\equiv \binom{2^{i}u+2^{i}}{2^{i}v}\pmod{2}$. Since $m+2^{i_0}=n$, this simplifies our formula for $\gamma_n(x)$ to: $$\gamma_n(x)=\sum_{j\geq 0}\binom{n}{j}\lambda^{n-2j}(x).$$ This completes the inductive proof of this formula when $-1$ is a square in $k$. \ QED \section{Comparison with Milnor $K$-theory} It is straightforward to compute the divided powers of Pfister forms in the Witt ring. (By definition, a 1-fold Pfister form $\langle \langle a \rangle \rangle$, for $a\in k^*$, means the 2-dimensional symmetric bilinear form $\langle 1,-a\rangle$, and an $r$-fold Pfister form $\langle \langle a_1,\ldots,a_r\rangle \rangle$ means the tensor product $\langle \langle a_1\rangle \rangle\cdots \langle \langle a_r\rangle \rangle$.) For example, one checks by induction on $r$ that \begin{align*}\gamma_2\langle \langle a_1,\ldots,a_r\rangle \rangle &=\langle \langle a_1,\ldots,a_r\rangle \rangle \langle \langle -1\rangle \rangle ^{r-1}\\ &= 2^{r-1}\langle \langle a_1,\ldots,a_r\rangle \rangle . \end{align*} In particular, if $-1$ is a square in $k$ (so that $2=0$ in $W(k)$), then $$\gamma_2\langle \langle a_1,\ldots,a_r\rangle \rangle=\begin{cases} \langle \langle a_1 \rangle \rangle & \text{if } r=1\\ 0 & \text{if }r\geq 2. \end{cases}$$ As a result, when $-1$ is a square, we get simple formulas for the divided powers of Pfister forms: for $n>0$, $$\gamma_n\langle \langle a \rangle \rangle=\begin{cases} \langle \langle a \rangle \rangle & \text{if }n\text{ is a power of 2}\\ 0 & \text{otherwise} \end{cases}$$ and, for $r\geq 2$ and $n>0$, $$\gamma_n\langle \langle a_1,\ldots,a_r \rangle \rangle=\begin{cases} \langle \langle a_1,\ldots,a_r \rangle \rangle & \text{if }n=1\\ 0 & \text{otherwise.} \end{cases}$$ This calculation plus the formal properties of divided powers imply a simple formula for the divided powers of any element of $I^r$, written as a sum $s_1+\cdots+s_N$ of $r$-fold Pfister forms, when $r\geq 2$ and $-1$ is a square: $$\gamma_n\bigg( \sum_{i=1}^N s_i\bigg) = \sum_{1\leq i_1<\cdots <i_n\leq N} s_{i_1}\cdots s_{i_n}.$$ This implies: \begin{corollary} If $-1$ is a square in a field $k$, then the divided power $\gamma_n$ maps $I^r\subset W(k)$ into $I^{nr}$ for $r\geq 2$. This operation is compatible with the divided power operation on Milnor $K$-theory modulo 2, $\gamma_n: K^M_r(k)/2=I^r/I^{r+1}\rightarrow K^M_{nr}(k)/2=I^{nr}/I^{nr+1}$.
\end{corollary} Indeed, divided powers on Milnor $K$-theory modulo 2 are defined exactly when $-1$ is a square and $r\geq 2$, and in that case they are defined by the same formula as above (where $s_i$ are symbols $\{ a_1,\ldots,a_r\}$ in $K^M_r(k)/2$) \cite{Kahn,Vial}.
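As a quick computational check of Theorem \ref{main}, the recurrence $a(n+1,i)=\frac{i}{n+1}\,(a(n,i-1)-a(n,i+1))$ from its proof can be iterated with exact rational arithmetic. The following short Python sketch (purely illustrative; it plays no role in the proofs) recovers the tangent-number table and the displayed formulas for $\gamma_1,\ldots,\gamma_4$.

\begin{verbatim}
# Illustrative check of the coefficients a(n, i) in the theorem:
# a(0, 0) = 1 and a(n+1, i) = (i/(n+1)) * (a(n, i-1) - a(n, i+1)).
from fractions import Fraction
from math import factorial

a = {(0, 0): Fraction(1)}
for n in range(9):
    for i in range(n + 2):
        a[(n + 1, i)] = Fraction(i, n + 1) * (
            a.get((n, i - 1), Fraction(0)) - a.get((n, i + 1), Fraction(0)))

# Tangent numbers via T(n, i) = (-1)^((n-i)/2) * (n!/i!) * a(n, i);
# the values below match the table taken from Comtet.
def T(n, i):
    return (-1) ** ((n - i) // 2) * Fraction(factorial(n), factorial(i)) * a[(n, i)]

assert T(5, 1) == 16 and T(6, 2) == 136 and T(7, 3) == 616

# gamma_3(x) = lambda^3(x) - (1/3) x  and  gamma_4(x) = lambda^4(x) - (2/3) lambda^2(x):
assert a[(3, 1)] == Fraction(-1, 3) and a[(4, 2)] == Fraction(-2, 3)
\end{verbatim}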
{ "timestamp": "2022-09-20T02:21:04", "yymm": "2209", "arxiv_id": "2209.08634", "language": "en", "url": "https://arxiv.org/abs/2209.08634" }
\section{Conclusion} This paper presents a new generative approach for learning 3D shape distributions and generating diverse, high-quality, and \new{possibly novel} 3D shapes. Unlike prior works, we operate in the frequency domain. By decomposing the implicit function in the form of a TSDF using biorthogonal wavelet\new{s}, we build a compact wavelet representation with a pair of coarse and detail coefficient volumes, as an encoding of a 3D shape. Then, we formulate our generator upon a probabilistic diffusion model to learn to generate diverse shapes in the form of coarse coefficient volumes from noise samples, and a detail predictor to further learn to generate compatible detail coefficient volumes for reconstructing fine details. Both quantitative and qualitative experimental results demonstrate the superiority of our method in generating diverse and realistic shapes that exhibit fine details, complex and thin structures, and clean surfaces, beyond the generation capability of the state-of-the-art methods. To the best of our knowledge, this is the first work that successfully adopts a compact wavelet representation for unconditional generative modeling of 3D shapes, enabling many directions for future research. As an immediate extension, our approach could benefit other downstream tasks with extra conditions,~\emph{e.g.}, shape reconstruction from images or point clouds, and shape editing with user inputs. Another promising future direction is to extend wavelet-based 3D generation to animation production,~\emph{e.g.}, generating sequences of character motion with spatio-temporal wavelet representations. Also, we would like to explore more challenging cases,~\emph{e.g.}, objects with extremely fine details and generation of 3D scenes. \section{Results and Experiments} \subsection{Galleries of our generated shapes} Besides Figure~\ref{fig:teaser}, we present Figure~\ref{fig:gallery} to showcase the compelling capability of our method in generating shapes of various categories. Our generated shapes exhibit {\em diverse topologies\/}, {\em fine details\/}, and also {\em clean surfaces without obvious artifacts\/}, covering a rich variety of small, thin, and complex structures that are typically very challenging for the existing approaches to produce. More 3D shape generation results are provided in the supplementary material. \vspace{-3pt} \subsection{Comparison with Other Methods} Next, we compare the shape generation capability of our method with four state-of-the-art methods: IM-GAN~\cite{chen2019learning}, Voxel-GAN~\cite{kleineberg2020adversarial}, Point-Diff~\cite{luo2021diffusion}, and SPAGHETTI~\cite{hertz2022spaghetti}. To the best of our knowledge, ours is the first work that generates implicit shape representations in the frequency domain and considers coarse and detail coefficients to enhance the generation of structures and fine details. Our experiments follow the same setting as the above works. Specifically, we leverage our models trained on the Chair and Airplane categories in ShapeNet~\cite{chang2015shapenet} to randomly generate 2,000 shapes for each category. Then, we uniformly sample 2,048 points on each generated shape and evaluate the shapes using the same set of metrics as in the previous methods (details to be presented later). As for the four state-of-the-art methods, we employ publicly released trained network models to generate shapes.
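For reference, the uniform surface sampling above is standard area-weighted triangle sampling; a minimal sketch (illustrative only, not our evaluation code; \texttt{trimesh.sample.sample\_surface} offers an equivalent built-in) is given below.

\begin{verbatim}
# Illustrative area-weighted uniform sampling of 2,048 points on a mesh
# surface; 'mesh' is assumed to be a trimesh.Trimesh of a generated shape.
import numpy as np
import trimesh

def sample_surface(mesh, n=2048):
    tri = mesh.vertices[mesh.faces]           # (F, 3, 3) triangle corners
    areas = mesh.area_faces
    idx = np.random.choice(len(mesh.faces), size=n, p=areas / areas.sum())
    # Uniform barycentric coordinates via the square-root trick.
    r1, r2 = np.random.rand(2, n)
    u, v = 1.0 - np.sqrt(r1), np.sqrt(r1) * (1.0 - r2)
    w = 1.0 - u - v
    t = tri[idx]
    return u[:, None] * t[:, 0] + v[:, None] * t[:, 1] + w[:, None] * t[:, 2]
\end{verbatim}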
\vspace{-3pt} \paragraph{Evaluation metrics.} Following~\cite{luo2021diffusion,hertz2022spaghetti}, we evaluate the generation quality using (i) minimum matching distance (MMD), which measures the fidelity of the generated shapes; (ii) coverage (COV), which indicates how well the generated shapes cover the shapes in the given 3D repository; and (iii) 1-NN classifier accuracy (1-NNA), which measures how well a classifier differentiates the generated shapes from those in the repository. Overall, a low MMD, a high COV, and a 1-NNA close to 50\% indicate good generation quality. \new{More details are provided in the supplementary material.} \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{images/query_comparison.png} \vspace{-2mm} \caption{Visual comparisons with state-of-the-art methods. Our generated shapes exhibit finer details and cleaner surfaces, without obvious artifacts. } \label{fig:query_comapre} \vspace{-2mm} \end{figure} \vspace{-3pt} \paragraph{Quantitative evaluation.} Table~\ref{tab:quanComparison} reports the quantitative comparison results, showing that our method surpasses all others for almost all the evaluation cases over the three metrics for both the Chair and Airplane categories. We employ the Chair category, due to its large variations in structure and topology, and the Airplane category, due to the fine details in its shapes. As discussed in~\cite{yang2019pointflow,luo2021diffusion}, the COV and MMD metrics have limited capabilities to account for details, so they are not suitable for measuring the fine quality of the generation results,~\emph{e.g.}, the generated shapes sometimes show a better performance even when compared with the ground-truth training shapes on these metrics. In contrast, 1-NNA is more robust and can better correlate with the generation quality. On this metric, our approach outperforms all others, with a significant margin in the Airplane category, manifesting the diversity and fidelity of our generated results. \begin{figure} \includegraphics[width=0.98\linewidth]{images/novel_analysis.png} \vspace{-3mm} \caption{ \new{ Shape novelty analysis. Top: From our generated shape (in green), we retrieve the top-four most similar shapes (in blue) in the training set by CD and LFD. Bottom: We generate 500 chairs using our method; for each chair, we retrieve the most similar shape in the training set by LFD; then, we plot the distribution of LFDs for all retrievals, showing that our method is able to generate shapes that are more similar (low LFDs) or more novel (high LFDs) compared to the training set. Note that the generated shape at the $50^{\text{th}}$ percentile is already not that similar to the associated training-set shape. } \vspace{-1mm} } \label{fig:novelty_analysis} \end{figure} \vspace{-3pt} \paragraph{Qualitative evaluation.} Figure~\ref{fig:query_comapre} shows some visual comparisons. For each random shape generated by our method, we find a similar shape (with similar structures and topology) generated by each of the other methods to make the visual comparison easier. \new{See supplementary material Sections B and D for more visual comparisons.} \new{Further, as different methods likely have different statistical modes in the shape generation distribution, we also take random shapes generated by IM-GAN and find similar shapes generated by our method for comparison; see supplementary material Section~C for the results.
} From \new{all} these results, we can see that the 3D shapes generated by our method clearly exhibit finer details, higher-fidelity structures, and cleaner surfaces, without obvious artifacts. \subsection{Model Analysis} \paragraph{Shape novelty analysis.} \new{Next, we analyze whether our method can generate shapes that are not necessarily the same as the training-set shapes, meaning that it does not simply memorize the training data.} To do so, we use our method to generate 500 random shapes and retrieve the top-four most similar shapes in the training set separately via two different metrics,~\emph{i.e.}, Chamfer Distance (CD) and \new{Light Field Distance (LFD)~\cite{chen2003visual}. \new{Note that LFD is computed from images rendered from multiple views of each shape, so it focuses more on the visual similarity between shapes and is considered to be more robust for shape retrieval. For the details on the metrics, please see the supplementary material. } } \new{ Figure~\ref{fig:novelty_analysis} (top) shows a shape generated by our method, together with the top-four most similar shapes retrieved from the training set by CD and LFD; due to the page limit, another ten examples are shown in the supplementary material. } Comparing our shapes with the retrieved ones, we can see that the shapes share similar structures, showing that our method is able to generate realistic-looking structures like those in the training set. Beyond that, our shapes exhibit noticeable differences in various local structures. \new{ As mentioned earlier, a good generator should produce diverse shapes that are not necessarily the same as the training shapes. So, we further statistically analyze the novelty of our generated shapes relative to the training set. To do so, we use our method to generate 500 random chairs; for each generated chair shape, we use LFD to retrieve the most similar shape in the training set. Figure~\ref{fig:novelty_analysis} (bottom) plots the distribution of LFDs between our generated shapes (in green) and retrieved shapes (in blue). Also, we show four shape pairs at various percentiles, revealing that shapes with larger LFDs are more different from the most similar shapes in the training set. From the LFD distribution, we can see that our method can learn a generation distribution that covers shapes in the training set (low LFD) and also generates novel and realistic-looking shapes that are more different (high LFD) from the training-set shapes. } \vspace*{-4pt} \paragraph{Ablation study.} \new{To evaluate the major components in our method, we conducted an ablation study by successively changing our full pipeline. First, we evaluate the generation performance with and without the detail predictor. Next, we study the importance of the diffusion model and the wavelet representation in the generator network. \input{tables/analysis} The results in Table 2 demonstrate the capability of the detail predictor, which introduces a substantial improvement on all metrics (first vs. second rows). Further, replacing our generator with the VAD model or directly predicting the TSDF leads to a performance degradation (second \& last two rows). Due to the page limit, please refer to the supplementary material for the details on how the ablation cases are implemented and for the visual comparison results. \vspace*{-4pt} \paragraph{Limitations.} Due to the page limit, please refer to Section~K of the supplementary material for the discussion on limitations.}
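The retrievals and distance-based metrics above all build on point-set distances; for reference, the symmetric Chamfer distance can be sketched as follows (illustrative only, not our evaluation code).

\begin{verbatim}
# Symmetric Chamfer distance between point sets P (N, 3) and Q (M, 3),
# e.g., 2,048 surface samples per shape.
import torch

def chamfer_distance(P, Q):
    d = torch.cdist(P, Q)    # (N, M) pairwise Euclidean distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()
\end{verbatim}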
\section{Introduction} \label{sec:intro} Generative modeling of 3D shapes enables rapid creation of 3D contents, supporting extensive applications across graphics, vision, and VR/AR. With the emergence of large-scale 3D datasets~\cite{chang2015shapenet}, data-driven shape generation has recently gained increasing attention from the research community. In general, a good 3D generative model should be able to produce diverse, realistic, and novel shapes, not necessarily the same as the existing ones. Existing shape generation models are developed mainly for voxels~\cite{girdhar2016learning,zhu2017rethinking,yang2018learning}, point clouds~\cite{fan2017point,jiang2018gal,achlioptas2018learning}, and meshes~\cite{wang2018pixel2mesh,groueix2018papier,smith2019geometrics,tang2019skeleton}. Typically, these representations cannot handle high resolutions or irregular topology, so they are unlikely to produce high-fidelity results. In contrast, implicit functions~\cite{mescheder2019occupancy,park2019deepsdf,chen2019learning} show improved performance in surface reconstructions. By representing a 3D shape as a level set of a discrete volume or \new{a} continuous field, we can flexibly extract a mesh object of arbitrary topology at the desired resolution. Existing generative models such as GANs and normalizing flows have shown great success \new{in} generating point clouds and voxels. Yet, they cannot effectively generate implicit functions. To represent a surface in 3D, a large number of point samples are required, even though many nearby samples are \new{redundant}. Taking the occupancy field as an example, only regions near the surface have \new{varying data values}, yet huge effort is needed to encode samples in constant and smoothly-varying regions. Such representation non-compactness and redundancy incurs a huge computational cost and hinders the efficiency of direct generative learning on implicit surfaces. To address these challenges, some methods attempt to sample in a pre-trained latent space built on the reconstruction task~\cite{chen2019learning,mescheder2019occupancy} or convert the generated implicits into point clouds or voxels for adversarial learning~\cite{kleineberg2020adversarial,luo2021surfgen}. However, these regularizations can only be applied indirectly to the generated implicit functions, so they are not able to ensure the generation of realistic objects. Hence, the visual quality of the generated shapes often shows a significant gap, as compared with the 3D reconstruction results, and the diversity of their generated shapes is also quite limited. This work introduces a new approach for 3D shape generation, enabling direct generative modeling on a continuous implicit representation in the wavelet frequency domain. Overall, we have three key contributions: (i) a compact wavelet representation (\emph{i.e.}, a pair of coarse and detail coefficient volumes) based on biorthogonal wavelet\new{s} and a truncated signed distance field to implicitly encode 3D shapes, facilitating effective learning of 3D shape distributions for shape generation; (ii) a generator network formulated based on the diffusion probabilistic model~\cite{sohl2015deep} to produce coarse coefficient volumes from random noise samples, promoting the generation of diverse and novel shapes; and (iii) a detail predictor network, formulated to produce compatible detail coefficients to enhance the fine details in the generated shapes.
With the two trained networks, we can start from random noise volumes and flexibly generate diverse and \new{realistic} shapes \new{that are not necessarily the same as the training shapes}. Both quantitative and qualitative experimental results manifest the 3D generation capabilities of our method, showing its superiority over \new{the} state-of-the-art approaches. As Figure~\ref{fig:teaser} shows, our generated shapes exhibit diverse topology, clean surfaces, sharp boundaries, and fine details, without obvious artifacts. \new{Fine details such as curved/thin beams, small pulleys, and complex cabinets are} very challenging for the existing 3D generation approaches \new{to synthesize}. \section{Method} \label{sec:architecture} \subsection{Compact Wavelet Representation} \new{Preparing a compact wavelet representation from an input shape (see Figure~\ref{fig:overview}(a)) involves the following two steps}: (i) implicitly represent the shape using a signed distance field (SDF); and (ii) decompose the implicit representation via a wavelet transform into coefficient volumes, each encoding a specific scale of the shape. In the first step, we scale each shape to fit \new{$[-0.45,+0.45]^3$} and sample an SDF of resolution $256^3$ to implicitly represent the shape. Importantly, we truncate the distance values in the SDF to $[-0.1,+0.1]$, so regions \new{not close to} the object surface are clipped to a constant. We denote the truncated signed distance field (TSDF) for the $i$-th shape in the training set as $S_i$. By using $S_i$, we can significantly reduce the shape representation redundancy and enable the shape learning process to better focus on the shape's structures and fine details. The second step is a multi-scale wavelet decomposition~\cite{mallat1989theory,daubechies1990wavelet,velho1994multiscale} on the TSDF. Here, we decompose $S_i$ into a high-frequency detail coefficient volume and a low-frequency coarse coefficient volume, which is roughly a compressed version of $S_i$. We repeat this process $J$ times on the coarse coefficient volume of each scale, decomposing $S_i$ into a series of multi-level coefficient volumes. We denote the coarse and detail coefficient volumes at the $j$-th step (scale) as $C^j_i$ and $D^j_i$, respectively, where $j \in \{1,\ldots,J\}$. The representation is lossless, meaning that the extracted coefficient volumes together can faithfully reconstruct the original TSDF via \new{a series of} inverse wavelet transforms. There are three important considerations in the data preparation. First, multi-scale decomposition can effectively separate rich structures, fine details, and noise in the TSDF. Empirically, we evaluate the reconstruction error on the TSDF by masking out all higher-scale detail coefficients and reconstructing $S_i$ only from the coefficients at scale $J=3$,~\emph{i.e.}, $C^3_i$ and $D^3_i$. We found that the reconstructed TSDF values have relatively small changes from the originals (only 2.8\% in magnitude), even without 97\% of the coefficients for the Chair category in ShapeNet~\cite{chang2015shapenet}. Comparing Figures~\ref{fig:compact_analysis} (a) vs. (b), we can see that reconstructing only from the coarse scale of $J=3$ already retains the chair's structure well. Motivated by this observation, we propose to construct the compact wavelet representation at a coarse scale ($J=3$) \new{and drop the other detail coefficient volumes,~\emph{i.e.}, $D^1_i$ and $D^2_i$,} for efficient shape learning.
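For illustration, this multi-scale decomposition can be sketched with PyWavelets (a CPU stand-in; our implementation adapts a GPU wavelet package, as noted in the implementation details). The \texttt{bior6.8} filter below anticipates the biorthogonal setting discussed next, and the small volume size is illustrative only.

\begin{verbatim}
# Illustrative multi-level 3D wavelet decomposition of a TSDF volume.
import numpy as np
import pywt

J = 3
tsdf = np.random.uniform(-0.1, 0.1, (64, 64, 64))  # stand-in for a 256^3 TSDF

# wavedecn returns [C^J, {D^J}, ..., {D^1}]; each detail entry is a dict
# of seven high-pass subbands ('aad', 'ada', ..., 'ddd').
coeffs = pywt.wavedecn(tsdf, wavelet='bior6.8', level=J)
C_J, D_J = coeffs[0], coeffs[1]

# Losslessness: reconstructing from all coefficients recovers the TSDF
# up to floating-point error (waverecn may pad, hence the slicing).
recon = pywt.waverecn(coeffs, wavelet='bior6.8')[:64, :64, :64]
assert np.allclose(recon, tsdf)

# Dropping the finer details D^1 and D^2 keeps the representation compact
# while changing the TSDF only slightly, as reported above. (Our pipeline
# instead stores a single detail volume in a Laplacian-pyramid style; see
# the discussion of filters below.)
for d in coeffs[2:]:
    for k in d:
        d[k][...] = 0.0
coarse_recon = pywt.waverecn(coeffs, wavelet='bior6.8')[:64, :64, :64]
\end{verbatim}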
\new{More details on the wavelet decomposition are given in the supplementary material.} \begin{figure}[!t] \centering \includegraphics[width=0.9\linewidth]{images/wavelet_analysis} \vspace*{-3mm} \caption{Reconstructions with different wavelet filters. (a) An input shape from ShapeNet. (b,c) Reconstructions from the $J$=3 coefficient volumes with biorthogonal wavelets. The two numbers denote the vanishing moments of the synthesis and analysis wavelets. (d) Reconstruction with the Haar wavelet.} \label{fig:compact_analysis} \vspace*{-5mm} \end{figure} Second, we need a suitable wavelet filter. While the Haar wavelet is a popular choice due to its simplicity, \new{using it to encode} smooth and continuous signals such as the SDF may introduce serious voxelization artifacts, see,~\emph{e.g.}, Figure~\ref{fig:compact_analysis} (d). In this work, we propose to adopt biorthogonal wavelet\new{s}~\cite{cohen1992biorthogonal}, since they enable a smoother decomposition of the TSDF. Specifically, we tried different settings of the biorthogonal wavelet\new{s} and \new{chose} high vanishing moments: six for the synthesis filter and eight for the analysis filter; see Figures~\ref{fig:compact_analysis} (b) vs. (c). Also, instead of storing the detail coefficient volumes with seven channels, as in traditional wavelet decomposition, we follow~\cite{velho1994multiscale} and efficiently compute the detail volume as the difference between the inverse-transformed $C^{j}_i$ and $C^{j-1}_i$, in a Laplacian-pyramid style. \new{Hence, the detail coefficient volume has a higher resolution than the coarser one, but both have much lower resolution than the original TSDF volume ($256^3$). } Last, it is important to truncate the SDF before constructing the wavelet representation for shape learning. By truncating the SDF, regions not close to the shape surface are cast to a constant, making the wavelet decomposition and shape learning efficient. Otherwise, we found that the shape learning process would collapse and the training loss could not be reduced. \subsection{Shape Learning} \label{ssec:shape_learning} Next, to learn the 3D shape distribution in the given shape set, we collect coefficient volumes $\{C_i^J , D_i^J\}$ from different input shapes for training (i) the {\em generator network\/} to learn to iteratively remove noise from a random Gaussian noise sample to generate $C_i^J$; and (ii) the {\em detail predictor network\/} to learn to predict $D_i^J$ from $C_i^J$ to enhance the details in the generated shapes. \vspace*{-3pt} \paragraph{Network structure.} To start, we formulate a simple but efficient neural network structure for both the generator and detail predictor networks. The two networks have the same structure, since they both take a 3D volume as input and then output a 3D volume of the same resolution as the input. Specifically, we adopt a modified 3D version of the U-Net architecture~\cite{nichol2021improved}. We first apply 3D convolutions to progressively encode and downsample the input into a set of multi-scale features and a bottleneck feature volume. Then, we apply a single self-attention layer to aggregate features in the bottleneck volume, so that we can efficiently incorporate non-local information into the features. Further, we upsample and concatenate features at the same scale and progressively perform an inverse convolution to produce an output of the same size as the input. Note also that for all convolution layers in the network structure, we use a filter size of three with \new{a} stride of one.
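A minimal skeleton of this design is sketched below (illustrative only: the channel widths and depth are placeholders, the time-step conditioning described next is omitted, and the input size is assumed divisible by four).

\begin{verbatim}
# Minimal 3D U-Net skeleton: 3x3x3 convolutions with stride one, a
# downsampling encoder, one self-attention layer at the bottleneck, and
# skip connections in the decoder.
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv3d(cin, cout, 3, padding=1), nn.SiLU(),
                         nn.Conv3d(cout, cout, 3, padding=1), nn.SiLU())

class UNet3D(nn.Module):
    def __init__(self, c=32):
        super().__init__()
        self.enc1, self.enc2 = block(1, c), block(c, 2 * c)
        self.down = nn.AvgPool3d(2)
        self.mid = block(2 * c, 2 * c)
        self.attn = nn.MultiheadAttention(2 * c, num_heads=4, batch_first=True)
        self.up = nn.Upsample(scale_factor=2, mode='trilinear',
                              align_corners=False)
        self.dec2 = block(4 * c, c)    # concat of upsampled mid + skip e2
        self.dec1 = block(2 * c, c)    # concat of upsampled d2 + skip e1
        self.out = nn.Conv3d(c, 1, 3, padding=1)

    def forward(self, x):
        e1 = self.enc1(x)                       # (B, c, D, H, W)
        e2 = self.enc2(self.down(e1))           # (B, 2c, D/2, H/2, W/2)
        m = self.mid(self.down(e2))             # bottleneck features
        B, C, D, H, W = m.shape
        seq = m.flatten(2).transpose(1, 2)      # (B, DHW, C) tokens
        m = m + self.attn(seq, seq, seq)[0].transpose(1, 2).reshape(B, C, D, H, W)
        d2 = self.dec2(torch.cat([self.up(m), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up(d2), e1], dim=1))
        return self.out(d1)
\end{verbatim}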
\begin{figure*}[t] \centering \includegraphics[width=0.97\linewidth]{images/gallery} \vspace*{-3.5mm} \caption{ Gallery of our generated shapes: Table, Chair, Cabinet, and Airplane (top to bottom). Our shapes exhibit complex structures, fine details, and clean surfaces, without obvious artifacts, compared with those generated by others; see Figure~\ref{fig:query_comapre}.} \label{fig:gallery} \vspace*{-3mm} \end{figure*} \vspace*{-3pt} \paragraph{Modeling the generator network.} We formulate the 3D shape generation process based on the denoising diffusion probabilistic model~\cite{sohl2015deep}. For simplicity, we drop the subscript and superscript in $C_i^J$, and denote $\{ C_{0}, \ldots, C_{T} \}$ as the shape generation sequence, where $C_0$ is the target, which is $C_i^J$; $C_T$ is a random noise volume from the Gaussian prior; and $T$ is the total number of time steps. As shown on top of Figure~\ref{fig:overview}(b), we have (i) a forward process (denoted as $q(C_{0:T})$) that progressively adds noise based on a Gaussian distribution to corrupt $C_0$ into a random noise volume; and (ii) a backward process (denoted as $p_{\theta}(C_{0:T})$) that employs the generator network (with network parameters $\theta$) to iteratively remove noise from $C_T$ to generate the target. \new{ Note that all elements of $\{ C_{0}, \ldots, C_{T} \}$ are 3D volumes, in which each voxel value is a wavelet coefficient at its spatial location. } Both the forward and backward processes are modeled as Markov processes. \new{ The generator network is optimized to maximize the generation probability of the target, \emph{i.e.}, $p_{\theta}(C_0)$. Also, as suggested in~\cite{ho2020denoising}, this training procedure can be further simplified by using the generator network $\epsilon_{\theta}$ to predict the noise volume. Hence, we adopt a mean-squared-error loss to train our framework: \begin{equation} \label{eq:objective_simp} L_2 = E_{t,C_0,\epsilon}[{\parallel} \epsilon - \epsilon_{\theta}(C_t, t) {\parallel}^2], \epsilon \sim \mathcal{N}(0,\mathbf{I}), \end{equation} where $t$ is a time step; $\epsilon$ is a noise volume; and $\mathcal{N}(0,\mathbf{I})$ denotes a unit Gaussian distribution. In particular, we first sample noise volume $\epsilon$ from a unit Gaussian distribution $\mathcal{N}(0,\mathbf{I})$ and a time step $t \in [1,\ldots,T]$ to corrupt $C_0$ into $C_t$. Then, our generator network learns to predict noise $\epsilon$ based on the corrupted coefficient volume $C_t$. Further, as the network takes time step $t$ as input, we convert value $t$ into an embedding via two MLP layers. Using this embedding, we can condition all the convolution modules in the prediction and enable the generator to be more aware of the amount of noise contaminating $C_t$. For more details on the derivation of the training objectives, please refer to the supplementary material. }
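To make this concrete, one training step can be sketched as follows (an illustrative PyTorch fragment, not our released implementation; \texttt{model} stands for the U-Net generator above, and \texttt{C0} for a batch of coarse coefficient volumes).

\begin{verbatim}
# Minimal sketch of one generator training step for Eq. (1).
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # linear schedule (see
alphas_bar = torch.cumprod(1.0 - betas, dim=0)   # Implementation Details)

def training_step(model, C0):
    # C0: (B, 1, D, H, W) batch of coarse wavelet coefficient volumes.
    B = C0.shape[0]
    t = torch.randint(0, T, (B,))                # a random time step per sample
    eps = torch.randn_like(C0)                   # unit Gaussian noise volume
    ab = alphas_bar[t].view(B, 1, 1, 1, 1)
    C_t = ab.sqrt() * C0 + (1.0 - ab).sqrt() * eps   # corrupt C_0 into C_t
    return ((eps - model(C_t, t)) ** 2).mean()   # mean-squared error on noise
\end{verbatim}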
\paragraph{Detail predictor network.} With the trained generator, we can obtain diverse and good-quality coarse coefficient volumes,~\emph{i.e.}, $C_0$. Next, we train the detail predictor network to produce detail coefficient volume $D_0$ from $C_0$ (see the bottom part of Figure~\ref{fig:overview}(b)), so that we can further enhance the details in our generated shapes. To train the detail predictor network, we leverage the paired coefficient volumes $\{ C_i^J, D_i^J \}$ from the data preparation. Importantly, detail coefficient volume $D_0$ should be highly correlated with coarse coefficient volume $C_0$. Hence, we pose detail prediction as a conditional regression for the detail coefficient volume, aiming at learning a neural network function $f: C_0 \rightarrow D_0$, which we optimize via a mean-squared-error loss. Overall, the detail predictor has the same network structure as the generator, but we include more convolution layers to accommodate the cubic growth in the number of nonzero values in the detail coefficient volume. \subsection{Shape Generation} Now, we are ready to generate 3D shapes. First, we randomize a 3D noise volume $C_T$ from the standard Gaussian distribution. Then, we employ the trained generator for $T$ iterations to produce $C_0$ from $C_T$. This process is iterative and its steps are inter-dependent: the operations in different iterations cannot be parallelized, leading to a very long computing time. To speed up the inference process, we adopt the approach of~\cite{song2020denoising} to sub-sample a set of time steps from $[1,\ldots, T]$ during the inference; in practice, we evenly sample $1/10$ of the total time steps in all our experiments. After we obtain the coarse coefficient volume $C_0$, we then use the detail predictor network to predict detail coefficient volume $D_0$ from $C_0$. After that, we perform a series of inverse wavelet transforms on $\{ C_0 , D_0 \}$ at scale $J$=$3$ to reconstruct the original TSDF. Finally, we extract an explicit 3D mesh from the reconstructed TSDF using the marching cubes algorithm~\cite{lorensen1987marching}. Figure~\ref{fig:overview}(c) illustrates the shape generation procedure.
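For concreteness, the full-length sampling loop can be sketched as below (illustrative PyTorch pseudocode, not our released implementation; the accelerated variant that keeps only $1/10$ of the time steps is omitted for brevity, and the coarse-volume shape is a placeholder).

\begin{verbatim}
# Minimal sketch of generating C_0 from Gaussian noise by ancestral sampling.
import torch

@torch.no_grad()
def sample_C0(model, shape=(1, 1, 46, 46, 46)):   # volume size: illustrative
    T = 1000
    betas = torch.linspace(1e-4, 0.02, T)
    alphas = 1.0 - betas
    alphas_bar = torch.cumprod(alphas, dim=0)
    C = torch.randn(shape)                        # C_T ~ N(0, I)
    for t in reversed(range(T)):
        eps = model(C, torch.full((shape[0],), t))
        C = (C - betas[t] / (1 - alphas_bar[t]).sqrt() * eps) / alphas[t].sqrt()
        if t > 0:                                 # add sigma_t noise (see
            var = (1 - alphas_bar[t - 1]) / (1 - alphas_bar[t]) * betas[t]
            C = C + var.sqrt() * torch.randn_like(C)   # Implementation Details)
    return C   # coarse volume; detail prediction and inverse DWT follow
\end{verbatim}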
\subsection{Implementation Details} \label{sec:implementation} We employed ShapeNet~\cite{chang2015shapenet} to prepare the training dataset used in all our experiments. Following the data split in~\cite{chen2019learning}, we use only the training split to supervise our network training. Also, similar to~\cite{hertz2022spaghetti,luo2021diffusion,li2021spgan}, we train a single model for generating shapes of each category in the ShapeNet dataset~\cite{chang2015shapenet}. We implement our networks using PyTorch and run all experiments on a GPU cluster with four RTX3090 GPUs. We follow~\cite{ho2020denoising} to set $\{\beta_t\}$ to increase linearly from $10^{-4}$ to $0.02$ over 1,000 time steps and set $\sigma_t = \frac{1-\bar{\alpha}_{t-1}}{1 - \bar{\alpha}_t} \beta_t$. We train the generator for 800,000 iterations and the detail predictor for 60,000 iterations, both using the Adam optimizer~\cite{kingma2014adam} with a learning rate of \new{$10^{-4}$}. Training the generator and detail predictor takes around three days and 12 hours, respectively. \new{ The inference takes around six seconds per shape on an RTX 3090 GPU. We adapt~\cite{cotter2020uses} to implement the 3D wavelet decomposition and {\em will release our code and training data upon the publication of this work.\/} } \section{Overview} \label{sec:overview} Our approach consists of the following three major procedures: \vspace*{4pt} (i) {\em Data preparation}~is a one-time process for preparing a compact wavelet representation \new{from} each input shape; see Figure~\ref{fig:overview}(a). For each shape, we sample a signed distance field (SDF) and truncate its distance values to avoid redundant information. Then, we transform the truncated SDF to the wavelet domain to produce a series of multi-scale coefficient volumes. Importantly, we take {\em a pair of coarse and detail coefficient volumes\/} at the same scale as our compact wavelet representation for implicitly encoding the input shape. \vspace*{4pt} (ii) {\em Shape learning\/}~aims to train a pair of neural networks to learn the 3D shape distribution from the coarse and detail coefficient volumes; see Figure~\ref{fig:overview}(b). First, we adopt the denoising diffusion probabilistic model~\cite{sohl2015deep} to formulate and train the {\em generator network\/} to learn to iteratively refine a random noise sample for generating diverse 3D shapes in the form of the coarse coefficient volume. Second, we design and train the {\em detail predictor network\/} to learn to produce the detail coefficient volume from the coarse coefficient volume for introducing further details in our generated shapes. Using our compact wavelet representation, it becomes feasible to train both the generator and detail predictor to successfully produce coarse coefficient volumes with plausible 3D structures and detail coefficient volumes with fine details. \vspace*{4pt} (iii) {\em Shape generation\/}~employs the two trained networks to generate 3D shapes; see Figure~\ref{fig:overview}(c). Starting from a random Gaussian noise sample, we first use the trained generator to produce the coarse coefficient volume and then use the detail predictor to produce an associated detail coefficient volume. After that, we can perform an inverse wavelet transform, followed by the marching cubes algorithm~\cite{lorensen1987marching}, to generate the output 3D shape. \section{Related Work} \label{sec:rw} \paragraph{3D reconstruction via implicit function.} Recently, many methods leverage the flexibility of implicit surfaces for 3D reconstruction from voxels~\cite{mescheder2019occupancy,chen2019learning}, \new{complete/partial point clouds~\cite{park2019deepsdf,Liu2021MLS,yan2022shapeformer}}, and RGB images~\cite{xu2019disn,xu2020ladybird,li2021d2im,tang2021skeletonnet}. On the other hand, besides ground-truth field values, various forms of supervision have been explored to train the generation of implicit surfaces,~\emph{e.g.}, multi-view images~\cite{liu2019learning,niemeyer2020differentiable} and unoriented point clouds~\cite{atzmon2020sal,gropp2020implicit,zhao2021sign}. Yet, the task of 3D reconstruction focuses mainly on synthesizing a high-quality 3D shape that best matches the input. So, it is fundamentally very different from the task of 3D shape generation, which aims to learn the shape distribution \new{of} a given set of shapes for generating diverse, high-quality, and \new{possibly novel} shapes accordingly. \vspace*{-4pt} \paragraph{3D shape generation via implicit function.} Unlike 3D reconstruction, the 3D shape generation task has no fixed ground truth to supervise the generation of each shape sample. Exploring efficient guidance for implicit surface generation is still an open problem. Some works attempt to use the reconstruction task to first learn a latent embedding~\new{\cite{mescheder2019occupancy,chen2019learning,hao2020dualsdf,ibing20213d}} and then generate new shapes by decoding \new{codes sampled from} the learned latent space. Recently,~\cite{hertz2022spaghetti} learn a latent space with a Gaussian-mixture-based autodecoder for shape generation and manipulation. Though these approaches ensure a simple training process, the diversity of the generated shapes is restricted by the pre-trained shape space.
Some other works attempt to convert implicit surfaces to other representations,~\emph{e.g.}, voxels\new{~\cite{kleineberg2020adversarial,zheng2022sdfstylegan}}, point clouds~\cite{kleineberg2020adversarial}, and meshes~\cite{luo2021surfgen}, for applying adversarial training. Yet, the conversion inevitably leads to information loss in the generated implicit surfaces, thus reducing the training efficiency and generation quality. In this work, we propose to adopt a compact wavelet representation for modeling the implicit surface and learn to \new{synthesize} it with a diffusion model. By this means, we can effectively learn to generate the implicit representation without a pre-trained latent space or a representation conversion. The results also show that our new approach is capable of producing diverse shapes of high visual quality, exceeding the state-of-the-art methods. \vspace*{-6pt} \paragraph{Other representations for 3D shape generation.} \cite{smith2017improved,wu2016learning} explore voxels, a natural grid-based extension of 2D images. Yet, the methods learn mainly coarse structures and fail to produce fine details due to memory restrictions. Some other methods explore point clouds via GANs~\cite{gal2020mrgan,hui2020progressive,li2021spgan}, flow-based models~\cite{kim2020softflow,cai2020learning}, and diffusion models~\cite{zhou20213d}. Due to the discrete nature of point clouds, 3D meshes reconstructed from them often contain artifacts. This work focuses on implicit surface generation, aiming at generating high-quality and diverse meshes with fine details and overcoming the limitations of the existing representations. \begin{figure*}[!t] \centering \includegraphics[width=1.0\linewidth]{images/overview_new} \vspace*{-5mm} \caption{Overview of our approach. (a) {\em Data preparation\/} builds a compact wavelet representation (a pair of coarse and detail coefficient volumes) for each input shape using a truncated signed distance field (TSDF) and a multi-scale wavelet decomposition. (b) {\em Shape learning\/} trains the generator network to produce coarse coefficient volumes from random noise samples and trains the detail predictor network to produce detail coefficient volumes from coarse coefficient volumes. (c) {\em Shape generation\/} employs the trained generator to produce a coarse coefficient volume and then the trained detail predictor to further predict a compatible detail coefficient volume, followed by an inverse wavelet transform and marching cubes, to generate the output 3D shape. } \label{fig:overview} \vspace*{-1.5mm} \end{figure*} \vspace*{-6pt} \paragraph{Multi-scale neural implicit representation.} This work also relates to multi-scale representations, so we discuss some 3D deep learning works in this area. \cite{liu2020neural,takikawa2021neural,martel2021acorn,chibane2020implicit,chen2021multiresolution} predict multi-scale latent codes in an adaptive octree to improve the reconstruction quality and inference efficiency. \cite{fathony2020multiplicative} propose a band-limited network to obtain a multi-scale representation by restricting the frequency magnitude of the basis functions. Recently,~\cite{saragadam2022miner} adopt the Laplacian pyramid to extract multi-scale coefficients for multiple neural networks. Unlike ours, that work overfits each input object with an individual representation for efficient storage and rendering.
In contrast to our work on shape generation, the above methods focus on improving 3D reconstruction performance by separately handling features at different levels. In our work, we adopt a multi-scale implicit representation based on wavelets (motivated by~\cite{velho1994multiscale}) to build a compact representation for high-quality shape generation. \vspace*{-6pt} \paragraph{Denoising diffusion models.} These models ~\cite{sohl2015deep,ho2020denoising,nichol2021improved,song2020denoising} recently show top performance in image generation, surpassing GAN-based models~\cite{dhariwal2021diffusion}. Very recently,~\cite{luo2021diffusion,zhou20213d} adopt diffusion models for point cloud generation. Yet, they fail to generate smooth surfaces and complex structures, as point clouds contain only discrete samples. Distinctively, we adopt diffusion model with a compact wavelet representation to model a continuous signed distance field, promoting shape generation with diverse structures and fine details. \section{Conclusion} This paper presents a new generative approach for learning 3D shape distribution and generating diverse, high-quality, and \new{possibly novel} 3D shapes. Unlike prior works, we operate on the frequency domain. By decomposing the implicit function in the form of TSDF using biorthogonal wavelet\new{s}, we build a compact wavelet representation with a pair of coarse and detail coefficient volumes, as an encoding of 3D shape. Then, we formulate our generator upon a probabilistic diffusion model to learn to generate diverse shapes in the form of coarse coefficient volumes from noise samples, and a detail predictor to further learn to generate compatible detail coefficient volumes for reconstructing fine details. Both quantitative and qualitative experimental results demonstrate the superiority of our method in generating diverse and realistic shapes that exhibit fine details, complex and thin structures, and clean surfaces, beyond the generation capability of the state-of-the-art methods. To our best knowledge, this is the first work that successfully adopts a compact wavelet representation for an unconditional generative modeling on 3D shape generation, enabling many directions for future research. At first glance, our benefits can be extended to other downstream tasks with extra conditions,~\emph{e.g.}, shape reconstruction from images or point clouds, and shape editing with user inputs. Another promising future direction is to adopt wavelet-based 3D generation to animation production,~\emph{e.g.}, generating sequences of character motion with spatio-temporal wavelet representations. Also, we would like to explore more challenging cases,~\emph{e.g.}, objects with extremely fine details and generation of 3D scenes. \section{Results and Experiments} \subsection{Galleries of our generated shapes} Besides Figure~\ref{fig:teaser}, we present Figure~\ref{fig:gallery} to showcase the compelling capability of our method on generating shapes of various categories. Our generated shapes exhibit {\em diverse topologies\/}, {\em fine details\/}, and also {\em clean surfaces without obvious artifacts\/}, covering a rich variety of small, thin, and complex structures that are typically very challenging for the existing approaches to produce. More 3D shape generation results are provided in the supplementary material. 
\vspace{-3pt} \subsection{Comparison with Other Methods} Next, we compare the shape generation capability of our method with four state-of-the-art methods: IM-GAN~\cite{chen2019learning}, Voxel-GAN~\cite{kleineberg2020adversarial}, Point-Diff~\cite{luo2021diffusion}, and SPAGHETTI~\cite{hertz2022spaghetti}. To our best knowledge, ours is the first work that generates implicit shape representations in frequency domain and considers coarse and detail coefficients to enhance the generation of structures and fine details. Our experiments follow the same setting as the above works. Specifically, we leverage our trained model on the Chair and Airplane categories in ShapeNet~\cite{chang2015shapenet} to randomly generate 2,000 shapes for each category. Then, we uniformly sample 2,048 points on each generated shape and evaluate the shapes using the same set of metrics as in the previous methods (details to be presented later). As for the four state-of-the-art methods, we employ publicly-released trained network models to generate shapes. \vspace{-3pt} \paragraph{Evaluation metrics.} Following~\cite{luo2021diffusion,hertz2022spaghetti}, we evaluate the generation quality using (i) minimum matching distance (MMD) measures the fidelity of the generated shapes; (ii) coverage (COV) indicates how well the generated shapes cover the shapes in the given 3D repository; and (iii) 1-NN classifier accuracy (1-NNA) measures how well a classifier differentiates the generated shapes from those in the repository. Overall, a low MMD, a high COV, and an 1-NNA close to 50\% indicate good generation quality. \new{More details are provided in the supplementary material.} \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{images/query_comparison.png} \vspace{-2mm} \caption{Visual comparisons with state-of-the-art methods. Our generated shapes exhibit finer details and cleaner surfaces, without obvious artifacts. } \label{fig:query_comapre} \vspace{-2mm} \end{figure} \vspace{-3pt} \paragraph{Quantitative evaluation.} Table~\ref{tab:quanComparison} reports the quantitative comparison results, showing that our method surpasses all others for almost all the evaluation cases over the three metrics for both the Chair and Airplane categories. We employ the Chair category, due to its large variations in structure and topology, and the Airplane category, due to the fine details in its shapes. As discussed in~\cite{yang2019pointflow,luo2021diffusion}, the COV and MMD metrics have limited capabilities to account for details, so they are not suitable for measuring the fine quality of the generation results,~\emph{e.g.}, the generated shapes sometimes show a better performance even when compared with the ground-truth training shapes on these metrics. In contrast, 1-NNA is more robust and can better correlate with the generation quality. In this metric, our approach outperforms all others, while having a significant margin in the Airplane category, manifesting the diversity and fidelity of our generated results. \begin{figure} \includegraphics[width=0.98\linewidth]{images/novel_analysis.png} \vspace{-3mm} \caption{ \new{ Shape novelty analysis. Top: From our generated shape (in green), we retrieve top-four most similar shapes (in blue) in training set by CD and LFD. 
Bottom: We generate 500 chairs using our method; for each chair, we retrieve the most similar shape in the training set by LFD; then, we plot the distribution of LFDs for all retrievals, showing that our method is able to generate shapes that are more similar (low LFDs) or more novel (high LFDs) compared to the training set. Note that the generated shape at $50^{\text{th}}$ percentile is already not that similar to the associated training-set shape. } \vspace{-1mm} } \label{fig:novelty_analysis} \end{figure} \vspace{-3pt} \paragraph{Qualitative Evaluation} Figure~\ref{fig:query_comapre} show some visual comparisons. For each random shape generated by our method, we find a similar shape (with similar structures and topology) generated by each of the other methods to make the visual comparison easier. \new{See supplementary material Sections B and D for more visual comparisons.} \new{Further, as different methods likely have different statistical modes in the shape generation distribution, we also take random shapes generated by IM-GAN and find similar shapes generated by our method for comparison; see supplementary material Section~C for the results. } From \new{all} these results, we can see that the 3D shapes generated by our method clearly exhibit finer details, higher fidelity structures, and cleaner surfaces, without obvious artifacts. \subsection{Model Analysis} \paragraph{Shape novelty analysis.} \new{Next, we analyze whether our method can generate shapes that are not necessarily the same as the training-set shapes, meaning that it does not simply memorize the training data.} To do so, we use our method to generate 500 random shapes and retrieve top-four most similar shapes in the training set separately via two different metrics,~\emph{i.e.}, Chamfer Distance (CD) and \new{Light Field Distance (LFD)~\cite{chen2003visual}. \new{It is noted that LFD is computed based on rendered images from multiple views on each shape, so it focuses more on the visual similarity between shapes and is considered to be more robust for shape retrieval. For the details on the metrics, please see the supplementary material. } } \new{ Figure~\ref{fig:novelty_analysis} (top) shows a shape generated by our method, together with top-four most similar shapes retrieved from the training set by CD and LFD; due to the page limit, another ten examples are shown in the supplementary material. } Comparing our shapes with the retrieved ones, we can see that the shapes share similar structures, showing that our method is able to generate realistic-looking structures like those in the training set. Beyond that, our shapes exhibit noticeable differences in various local structures. \new{ As mentioned earlier, a good generator should produce diverse shapes that are not necessarily the same as the training shapes. So, we further statistically analyze the novelty of our generated shapes relative to the training set. To do so, we use our method to generate 500 random chairs; for each generated chair shape, we use LFD to retrieve the most similar shape in the training set. Figure~\ref{fig:novelty_analysis} (bottom) plots the distribution of LFDs between our generated shapes (in green) and retrieved shapes (in blue). Also, we show four shape pairs at various percentiles, revealing that shapes with larger LFDs are more different from the most similar shapes in the training set. 
From the LFD distribution, we can see that our method can learn a generation distribution that covers shapes in the training set (low LFD) and also generates novel and realistic-looking shapes that are more different (high LFD) from the training-set shapes. } \vspace*{-4pt} \paragraph{Ablation Study} \new{To evaluate the major components in our method, we conducted an ablation study by successively changing our full pipeline. First, we evaluate the generation performance with/without the detail predictor. Next, we study the importance of the diffusion model and the wavelet representation in the generator network. \input{tables/analysis} The results in Table 2 demonstrate the capability of the detail predictor, which introduces a substantial improvement on all metrics (first vs. second rows). Further, replacing our generator with the VAD model or directly predicting TSDF leads to a performance degrade (second \& last two rows). Due to the page limit, please refer to the supplementary material for the details on how the ablation cases are implemented and the visual comparison results. \vspace*{-4pt} \paragraph{Limitations.} Due to the page limit, please refer to Section~K of the supplementary material for the discussion on limitations. } \section{Introduction} \label{sec:intro} Generative modeling of 3D shapes enables rapid creation of 3D contents, enriching extensive applications across graphics, vision, and VR/AR. With the emerging large-scale 3D datasets~\cite{chang2015shapenet}, data-driven shape generation has gained increasing attention from the research community recently. In general, a good 3D generative model should be able to produce diverse, realistic, and novel shapes, not necessarily the same as the existing ones. Existing shape generation models are developed mainly for voxels~\cite{girdhar2016learning,zhu2017rethinking,yang2018learning}, point clouds~\cite{fan2017point,jiang2018gal,achlioptas2018learning}, and meshes~\cite{wang2018pixel2mesh,groueix2018papier,smith2019geometrics,tang2019skeleton}. % Typically, these representations cannot handle high resolutions or irregular topology, thus unlikely producing high-fidelity results. In contrast, implicit functions~\cite{mescheder2019occupancy,park2019deepsdf,chen2019learning} show improved performance in surface reconstructions. By representing a 3D shape as a level set of discrete volume or \new{a} continuous field, we can flexibly extract a mesh object of arbitrary topology at desired resolution. Existing generative models such as GANs and normalizing flows have shown great success \new{in} generating point clouds and voxels. Yet, they cannot effectively generate implicit functions. To represent a surface in 3D, a large number of point samples are required, even though many nearby samples are \new{redundant}. Taking the occupancy field for instance, only regions near the surface have \new{varying data values}, yet we need huge efforts to encode samples in constant and smoothly-varying regions. Such representation non-compactness and redundancy demands a huge computational cost and hinders the efficiency of direct generative learning on implicit surfaces. To address these challenges, some methods attempt to sample in a pre-trained latent space built on the reconstruction task~\cite{chen2019learning,mescheder2019occupancy} or convert the generated implicits into point clouds or voxels for adversarial learning~\cite{kleineberg2020adversarial,luo2021surfgen}. 
However, these regularizations can only be indirectly applied to the generated implicit functions, so they are not able to ensure the generation of realistic objects. Hence, the visual quality of the generated shapes often shows a significant gap compared with 3D reconstruction results, and the diversity of the generated shapes is also quite limited. This work introduces a new approach for 3D shape generation, enabling direct generative modeling on a continuous implicit representation in the wavelet frequency domain. Overall, we have three key contributions: (i) a compact wavelet representation (\emph{i.e.}, a pair of coarse and detail coefficient volumes) based on biorthogonal wavelet\new{s} and a truncated signed distance field to implicitly encode 3D shapes, facilitating effective learning of the 3D shape distribution for shape generation; (ii) a generator network formulated based on the diffusion probabilistic model~\cite{sohl2015deep} to produce coarse coefficient volumes from random noise samples, promoting the generation of diverse and novel shapes; and (iii) a detail predictor network, formulated to produce compatible detail coefficients to enhance the fine details in the generated shapes. With the two trained networks, we can start from random noise volumes and flexibly generate diverse and \new{realistic} shapes \new{that are not necessarily the same as the training shapes}. Both quantitative and qualitative experimental results manifest the 3D generation capabilities of our method, showing its superiority over \new{the} state-of-the-art approaches. As Figure~\ref{fig:teaser} shows, our generated shapes exhibit diverse topology, clean surfaces, sharp boundaries, and fine details, without obvious artifacts. \new{Fine details such as curved/thin beams, small pulleys, and complex cabinets are} very challenging for the existing 3D generation approaches \new{to synthesize}. \section{Method} \label{sec:architecture} \subsection{Compact Wavelet Representation} \new{Preparing a compact wavelet representation from an input shape (see Figure~\ref{fig:overview}(a)) involves the following two steps}: (i) implicitly represent the shape using a signed distance field (SDF); and (ii) decompose the implicit representation via wavelet transform into coefficient volumes, each encoding a specific scale of the shape. In the first step, we scale each shape to fit \new{$[-0.45,+0.45]^3$} and sample an SDF of resolution $256^3$ to implicitly represent the shape. Importantly, we truncate the distance values in the SDF to $[-0.1,+0.1]$, so regions \new{not close to the} object surface are clipped to a constant. We denote the truncated signed distance field (TSDF) for the $i$-th shape in the training set as $S_i$. By using $S_i$, we can significantly reduce the shape representation redundancy and enable the shape learning process to better focus on the shape's structures and fine details. The second step is a multi-scale wavelet decomposition~\cite{mallat1989theory,daubechies1990wavelet,velho1994multiscale} on the TSDF. Here, we decompose $S_i$ into a high-frequency detail coefficient volume and a low-frequency coarse coefficient volume, which is roughly a compressed version of $S_i$. We repeat this process $J$ times on the coarse coefficient volume of each scale, decomposing $S_i$ into a series of multi-level coefficient volumes. We denote the coarse and detail coefficient volumes at the $j$-th step (scale) as $C^j_i$ and $D^j_i$, respectively, where $j \in \{1,...,J\}$.
The representation is lossless, meaning that the extracted coefficient volumes together can faithfully reconstruct the original TSDF via \new{a series of} inverse wavelet transforms. There are three important considerations in the data preparation. First, multi-scale decomposition can effectively separate rich structures, fine details, and noise in the TSDF. Empirically, we evaluate the reconstruction error on the TSDF by masking out all higher-scale detail coefficients and reconstructing $S_i$ only from the coefficients at scale $J=3$,~\emph{i.e.}, $C^3_i$ and $D^3_i$. We found that the reconstructed TSDF values have relatively small changes from the originals (only 2.8\% in magnitude), even after discarding 97\% of the coefficients, for the Chair category in ShapeNet~\cite{chang2015shapenet}. Comparing Figures~\ref{fig:compact_analysis} (a) vs. (b), we can see that reconstructing only from the coarse scale of $J=3$ already well retains the chair's structure. Motivated by this observation, we propose to construct the compact wavelet representation at a coarse scale ($J=3$) \new{and drop the other detail coefficient volumes,~\emph{i.e.}, $D^1_i$ and $D^2_i$,} for efficient shape learning. \new{More details on the wavelet decomposition are given in the supplementary material.} \begin{figure}[!t] \centering \includegraphics[width=0.9\linewidth]{images/wavelet_analysis} \vspace*{-3mm} \caption{Reconstructions with different wavelet filters. (a) An input shape from ShapeNet. (b,c) Reconstructions from the $J$=3 coefficient volumes with biorthogonal wavelets; the two numbers denote the vanishing moments of the synthesis and analysis wavelets. (d) Reconstruction with the Haar wavelet.} \label{fig:compact_analysis} \vspace*{-5mm} \end{figure} Second, we need a suitable wavelet filter. While the Haar wavelet is a popular choice due to its simplicity, \new{using it to encode} smooth and continuous signals such as the SDF may introduce serious voxelization artifacts; see,~\emph{e.g.}, Figure~\ref{fig:compact_analysis} (d). In this work, we propose to adopt biorthogonal wavelet\new{s}~\cite{cohen1992biorthogonal}, since they enable a smoother decomposition of the TSDF. Specifically, we tried different settings of the biorthogonal wavelet\new{s} and \new{chose} high vanishing moments: six for the synthesis filter and eight for the analysis filter; see Figures~\ref{fig:compact_analysis} (b) vs. (c). Also, instead of storing the detail coefficient volumes with seven channels, as in traditional wavelet decomposition, we follow~\cite{velho1994multiscale} to efficiently compute them as the difference between the inverse-transformed $C^{j}_i$ and $C^{j-1}_i$, in a Laplacian pyramid style. \new{Hence, the detail coefficient volume has a higher resolution than the coarse one, but both have a much lower resolution than the original TSDF volume ($256^3$). } Last, it is important to truncate the SDF before constructing the wavelet representation for shape learning. By truncating the SDF, regions not close to the shape surface are cast to a constant, making the wavelet decomposition and shape learning efficient. Otherwise, we found that the shape learning process would collapse and the training loss could not be reduced.
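To make the data preparation concrete, below is a minimal sketch of the truncation and decomposition steps using the PyWavelets package, where ``bior6.8'' matches the chosen vanishing moments (six synthesis, eight analysis). Our actual implementation adapts~\cite{cotter2020uses}, so details here -- in particular the boundary cropping and the Laplacian-style detail computation -- are assumptions of this sketch, not our released code.
\begin{verbatim}
# A minimal sketch (not the paper's implementation): truncate an SDF and
# extract the (C^3, D^3) pair used as the compact wavelet representation.
import numpy as np
import pywt

def compact_wavelet_representation(sdf, level=3, trunc=0.1,
                                   wavelet="bior6.8"):
    tsdf = np.clip(sdf, -trunc, trunc)   # TSDF: clip to [-0.1, +0.1]
    # Multi-level 3D decomposition; coeffs[0] is the coarse volume C^level
    # and coeffs[1] holds the seven detail subbands at the coarsest scale.
    coeffs = pywt.wavedecn(tsdf, wavelet=wavelet, level=level)
    coarse = coeffs[0]
    # Laplacian-pyramid-style detail volume: difference between C^{level-1}
    # and the inverse transform of C^{level} alone (details zeroed out).
    zeros = {k: np.zeros_like(v) for k, v in coeffs[1].items()}
    rec = pywt.idwtn({"aaa": coarse, **zeros}, wavelet)
    c_prev = pywt.wavedecn(tsdf, wavelet=wavelet, level=level - 1)[0]
    rec = rec[tuple(slice(0, s) for s in c_prev.shape)]  # crop boundary pad
    return coarse, c_prev - rec          # (C^3, D^3)
\end{verbatim}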
\subsection{Shape Learning} \label{ssec:shape_learning} Next, to learn the 3D shape distribution in the given shape set, we collect coefficient volumes $\{C_i^J , D_i^J\}$ from different input shapes for training (i) the {\em generator network\/} to learn to iteratively remove noise from a random Gaussian noise sample to generate $C_i^J$; and (ii) the {\em detail predictor network\/} to learn to predict $D_i^J$ from $C_i^J$ to enhance the details in the generated shapes. \vspace*{-3pt} \paragraph{Network structure} \ To start, we formulate a simple but efficient neural network structure for both the generator and detail predictor networks. The two networks have the same structure, since they both take a 3D volume as input and then output a 3D volume of the same resolution as the input. Specifically, we adopt a modified 3D version of the U-Net architecture~\cite{nichol2021improved}. We first apply 3D convolutions to progressively encode and downsample the input into a set of multi-scale features and a bottleneck feature volume. Then, we apply a single self-attention layer to aggregate features in the bottleneck volume, so that we can efficiently incorporate non-local information into the features. Further, we upsample and concatenate features at the same scale and progressively perform an inverse convolution to produce an output of the same size as the input. Note also that for all convolution layers in the network structure, we use a filter size of three with \new{a} stride of one. \begin{figure*}[t] \centering \includegraphics[width=0.97\linewidth]{images/gallery} \vspace*{-3.5mm} \caption{ Gallery of our generated shapes: Table, Chair, Cabinet, and Airplane (top to bottom). Our shapes exhibit complex structures, fine details, and clean surfaces, without obvious artifacts, compared with those generated by others; see Figure~\ref{fig:query_comapre}.} \label{fig:gallery} \vspace*{-3mm} \end{figure*} \vspace*{-3pt} \paragraph{Modeling the generator network.} \ We formulate the 3D shape generation process based on the denoising diffusion probabilistic model~\cite{sohl2015deep}. For simplicity, we drop the subscript and superscript in $C_i^J$, and denote $\{ C_{0}, ..., C_{T} \}$ as the shape generation sequence, where $C_0$ is the target, which is $C_i^J$; $C_T$ is a random noise volume from the Gaussian prior; and $T$ is the total number of time steps. As shown on top of Figure~\ref{fig:overview}(b), we have (i) a forward process (denoted as $q(C_{0:T})$) that progressively adds noise based on a Gaussian distribution to corrupt $C_0$ into a random noise volume; and (ii) a backward process (denoted as $p_{\theta}(C_{0:T})$) that employs the generator network (with network parameters $\theta$) to iteratively remove noise from $C_T$ to generate the target. \new{ Note that all 3D shapes $\{ C_{0}, ..., C_{T} \}$ are represented as 3D volumes and each voxel value is a wavelet coefficient at its spatial location. } Both the forward and backward processes are modeled as Markov processes. \new{ The generator network is optimized to maximize the generation probability of the target, \emph{i.e.}, $p_{\theta}(C_0)$. Also, as suggested in~\cite{ho2020denoising}, this training procedure can be further simplified by using the generator network to predict the noise volume $\epsilon_{\theta}$.
Hence, we adopt a mean-squared loss to train our framework: \begin{equation} \label{eq:objective_simp} L_2 = E_{t,C_0,\epsilon}\left[ \| \epsilon - \epsilon_{\theta}(C_t, t) \|^2 \right], \quad \epsilon \sim \mathcal{N}(0,\mathbf{I}), \end{equation} where $t$ is a time step; $\epsilon$ is a noise volume; and $\mathcal{N}(0,\mathbf{I})$ denotes a unit Gaussian distribution. In particular, we first sample a noise volume $\epsilon$ from the unit Gaussian distribution $\mathcal{N}(0,\mathbf{I})$ and a time step $t \in \{1,...,T\}$ to corrupt $C_0$ into $C_t$. Then, our generator network learns to predict the noise $\epsilon$ based on the corrupted coefficient volume $C_t$. Further, as the network takes time step $t$ as input, we convert the value $t$ into an embedding via two MLP layers. Using this embedding, we can condition all the convolution modules in the prediction and enable the generator to be more aware of the amount of noise contaminating $C_t$. For more details on the derivation of the training objectives, please refer to the supplementary material. } \paragraph{Detail predictor network} With the trained generator, we can obtain diverse and good-quality coarse coefficient volumes,~\emph{i.e.}, $C_0$. Next, we train the detail predictor network to produce the detail coefficient volume $D_0$ from $C_0$ (see the bottom part of Figure~\ref{fig:overview}(b)), so that we can further enhance the details in our generated shapes. To train the detail predictor network, we leverage the paired coefficient volumes $\{ C_i^J, D_i^J \}$ from the data preparation. Importantly, the detail coefficient volume $D_0$ should be highly correlated with the coarse coefficient volume $C_0$. Hence, we pose detail prediction as a conditional regression on the detail coefficient volume, aiming at learning a neural network function $f: C_0 \rightarrow D_0$, which we optimize via a mean squared error loss. Overall, the detail predictor has the same network structure as the generator, but we include more convolution layers to accommodate the cubic growth in the number of nonzero values in the detail coefficient volume. \subsection{Shape Generation} Now, we are ready to generate 3D shapes. First, we sample a 3D noise volume $C_T$ from the standard Gaussian distribution. Then, we employ the trained generator for $T$ iterations to produce $C_0$ from $C_T$. This process is iterative, and each step depends on the previous one; since the operations in different iterations cannot be parallelized, the computing time is very long. To speed up the inference process, we adopt the approach in~\cite{song2020denoising} to sub-sample a set of time steps from $[1,..., T]$ during the inference; in practice, we evenly sample $1/10$ of the total time steps in all our experiments. After we obtain the coarse coefficient volume $C_0$, we then use the detail predictor network to predict the detail coefficient volume $D_0$ from $C_0$. After that, we perform a series of inverse wavelet transforms from $\{ C_0 , D_0 \}$ at scale $J$=$3$ to reconstruct the original TSDF. Hence, we can further extract an explicit 3D mesh from the reconstructed TSDF using the marching cube algorithm~\cite{lorensen1987marching}. Figure~\ref{fig:overview}(c) illustrates the shape generation procedure. \subsection{Implementation Details} \label{sec:implementation} We employed ShapeNet~\cite{chang2015shapenet} to prepare the training dataset used in all our experiments. Following the data split in~\cite{chen2019learning}, we use only the training split to supervise our network training.
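For concreteness, before the remaining implementation details, the following is a minimal sketch of the noise-prediction training step (Eq.~\ref{eq:objective_simp}) and the sub-sampled denoising loop; \texttt{model} stands for the generator U-Net, the schedule values match those stated below, and the deterministic DDIM-style update is one possible instantiation of the acceleration in~\cite{song2020denoising}, not necessarily our exact sampler.
\begin{verbatim}
# A minimal sketch, not the released code. model(c, t) is assumed to be
# the generator U-Net predicting the noise volume epsilon_theta.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)          # linear beta schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

def training_step(model, c0):
    # Eq. (1): corrupt C_0 into C_t with sampled noise, then predict it.
    b = c0.shape[0]
    t = torch.randint(0, T, (b,), device=c0.device)
    eps = torch.randn_like(c0)
    ab = alpha_bar.to(c0.device)[t].view(b, 1, 1, 1, 1)  # (B,C,D,H,W) vols
    c_t = ab.sqrt() * c0 + (1.0 - ab).sqrt() * eps
    return torch.nn.functional.mse_loss(model(c_t, t), eps)

@torch.no_grad()
def sample_coarse_volume(model, shape, steps=T // 10):
    # Start from C_T ~ N(0, I) and denoise over 1/10 of the time steps,
    # using a deterministic DDIM-style update between sampled steps.
    c = torch.randn(shape)
    ts = torch.linspace(T - 1, 0, steps).long()
    for i, t in enumerate(ts):
        tb = torch.full((shape[0],), int(t), dtype=torch.long)
        eps = model(c, tb)
        ab_t = alpha_bar[t]
        c0_hat = (c - (1 - ab_t).sqrt() * eps) / ab_t.sqrt()
        if i + 1 == len(ts):
            return c0_hat                      # final estimate of C_0
        ab_p = alpha_bar[ts[i + 1]]
        c = ab_p.sqrt() * c0_hat + (1 - ab_p).sqrt() * eps
\end{verbatim}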
Also, similar to~\cite{hertz2022spaghetti,luo2021diffusion,li2021spgan}, we train a single model for generating shapes of each category in the ShapeNet dataset~\cite{chang2015shapenet}. We implement our networks using PyTorch and run all experiments on a GPU cluster with four RTX 3090 GPUs. We follow~\cite{ho2020denoising} to set $\{\beta_t\}$ to increase linearly from $10^{-4}$ to $0.02$ over 1,000 time steps and set $\sigma_t^2 = \frac{1-\bar{\alpha}_{t-1}}{1 - \bar{\alpha}_t} \beta_t$. We train the generator for 800,000 iterations and the detail predictor for 60,000 iterations, both using the Adam optimizer~\cite{kingma2014adam} with a learning rate of \new{$10^{-4}$}. Training the generator and detail predictor takes around three days and 12 hours, respectively. \new{ The inference takes around six seconds per shape on an RTX 3090 GPU. We adapt~\cite{cotter2020uses} to implement the 3D wavelet decomposition and {\em will release our code and training data upon the publication of this work.\/} } \section{Overview} \label{sec:overview} Our approach consists of the following three major procedures: \vspace*{4pt} (i) {\em Data preparation}~is a one-time process for preparing a compact wavelet representation \new{from} each input shape; see Figure~\ref{fig:overview}(a). For each shape, we sample a signed distance field (SDF) and truncate its distance values to avoid redundant information. Then, we transform the truncated SDF to the wavelet domain to produce a series of multi-scale coefficient volumes. Importantly, we take {\em a pair of coarse and detail coefficient volumes\/} at the same scale as our compact wavelet representation for implicitly encoding the input shape. \vspace*{4pt} (ii) {\em Shape learning\/}~aims to train a pair of neural networks to learn the 3D shape distribution from the coarse and detail coefficient volumes; see Figure~\ref{fig:overview}(b). First, we adopt the denoising diffusion probabilistic model~\cite{sohl2015deep} to formulate and train the {\em generator network\/} to learn to iteratively refine a random noise sample for generating diverse 3D shapes in the form of the coarse coefficient volume. Second, we design and train the {\em detail predictor network\/} to learn to produce the detail coefficient volume from the coarse coefficient volume for introducing further details in our generated shapes. Using our compact wavelet representation, it becomes feasible to train both the generator and detail predictor to successfully produce coarse coefficient volumes with plausible 3D structures and detail coefficient volumes with fine details. \vspace*{4pt} (iii) {\em Shape generation\/}~employs the two trained networks to generate 3D shapes; see Figure~\ref{fig:overview}(c). Starting from a random Gaussian noise sample, we first use the trained generator to produce the coarse coefficient volume and then the detail predictor to produce an associated detail coefficient volume. After that, we can perform an inverse wavelet transform, followed by the marching cube operator~\cite{lorensen1987marching}, to generate the output 3D shape. \section{Related Work} \label{sec:rw} \paragraph{3D reconstruction via implicit function.} Recently, many methods leverage the flexibility of implicit surfaces for 3D reconstruction from voxels~\cite{mescheder2019occupancy,chen2019learning}, \new{complete/partial point clouds~\cite{park2019deepsdf,Liu2021MLS,yan2022shapeformer}}, and RGB images~\cite{xu2019disn,xu2020ladybird,li2021d2im,tang2021skeletonnet}.
On the other hand, besides ground-truth field values, various supervisions have been explored to train the generation of implicit surfaces,~\emph{e.g.}, multi-view images~\cite{liu2019learning,niemeyer2020differentiable} and unoriented point clouds~\cite{atzmon2020sal,gropp2020implicit,zhao2021sign}. Yet, the task of 3D reconstruction focuses mainly on synthesizing a high-quality 3D shape that best matches the input. Hence, it is fundamentally different from the task of 3D shape generation, which aims to learn the shape distribution \new{of} a given set of shapes for generating diverse, high-quality, and \new{possibly novel} shapes accordingly. \vspace*{-4pt} \paragraph{3D shape generation via implicit function.} Unlike 3D reconstruction, the 3D shape generation task has no fixed ground truth to supervise the generation of each shape sample. Exploring efficient guidance for implicit surface generation is still an open problem. Some works attempt to use the reconstruction task to first learn a latent embedding~\new{\cite{mescheder2019occupancy,chen2019learning,hao2020dualsdf,ibing20213d}} and then generate new shapes by decoding \new{codes sampled from} the learned latent space. Recently,~\cite{hertz2022spaghetti} learn a latent space with a Gaussian-mixture-based autodecoder for shape generation and manipulation. Though these approaches ensure a simple training process, their generated shapes have limited diversity, restricted by the pre-trained shape space. Some other works attempt to convert implicit surfaces into other representations,~\emph{e.g.}, voxels\new{~\cite{kleineberg2020adversarial,zheng2022sdfstylegan}}, point clouds~\cite{kleineberg2020adversarial}, and meshes~\cite{luo2021surfgen}, for applying adversarial training. Yet, the conversion inevitably leads to information loss in the generated implicit surfaces, thus reducing the training efficiency and generation quality. In this work, we propose to adopt a compact wavelet representation for modeling the implicit surface and learn to \new{synthesize} it with a diffusion model. By this means, we can effectively learn to generate the implicit representation without a pre-trained latent space or a representation conversion. The results also show that our new approach is capable of producing diversified shapes of high visual quality, exceeding the state-of-the-art methods. \vspace*{-6pt} \paragraph{Other representations for 3D shape generation} \cite{smith2017improved,wu2016learning} explore voxels, a natural grid-based extension of 2D images. Yet, these methods learn mainly coarse structures and fail to produce fine details due to memory restrictions. Some other methods explore point clouds via GANs~\cite{gal2020mrgan,hui2020progressive,li2021spgan}, flow-based models~\cite{kim2020softflow,cai2020learning}, and diffusion models~\cite{zhou20213d}. Due to the discrete nature of point clouds, 3D meshes reconstructed from them often contain artifacts. This work focuses on implicit surface generation, aiming at generating high-quality and diverse meshes with fine details and overcoming the limitations of the existing representations. \begin{figure*}[!t] \centering \includegraphics[width=1.0\linewidth]{images/overview_new} \vspace*{-5mm} \caption{Overview of our approach. (a) {\em Data preparation\/} builds a compact wavelet representation (a pair of coarse and detail coefficient volumes) for each input shape using a truncated signed distance field (TSDF) and a multi-scale wavelet decomposition.
(b) {\em Shape learning\/} trains the generator network to produce coarse coefficient volumes from random noise samples and trains the detail predictor network to produce detail coefficient volumes from coarse coefficient volumes. (c) {\em Shape generation\/} employs the trained generator to produce a coarse coefficient volume and then the trained detail predictor to further predict a compatible detail coefficient volume, followed by an inverse wavelet transform and marching cube, to generate the output 3D shape. } \label{fig:overview} \vspace*{-1.5mm} \end{figure*} \vspace*{-6pt} \paragraph{Multi-scale neural implicit representation.} This work also relates to multi-scale representations, so we discuss some 3D deep learning works in this area. \cite{liu2020neural,takikawa2021neural,martel2021acorn,chibane2020implicit,chen2021multiresolution} predict multi-scale latent codes in an adaptive octree to improve the reconstruction quality and inference efficiency. \cite{fathony2020multiplicative} propose a band-limited network to obtain a multi-scale representation by restricting the frequency magnitude of the basis functions. Recently,~\cite{saragadam2022miner} adopt the Laplacian pyramid to extract multi-scale coefficients for multiple neural networks; unlike ours, that method overfits each input object with an individual representation for efficient storage and rendering. In contrast to our work on shape generation, the above methods focus on improving 3D reconstruction performance by separately handling features at different levels. In our work, we adopt a multi-scale implicit representation based on wavelets (motivated by~\cite{velho1994multiscale}) to build a compact representation for high-quality shape generation. \vspace*{-6pt} \paragraph{Denoising diffusion models.} These models~\cite{sohl2015deep,ho2020denoising,nichol2021improved,song2020denoising} recently show top performance in image generation, surpassing GAN-based models~\cite{dhariwal2021diffusion}. Very recently,~\cite{luo2021diffusion,zhou20213d} adopt diffusion models for point cloud generation. Yet, they fail to generate smooth surfaces and complex structures, as point clouds contain only discrete samples. Distinctively, we adopt a diffusion model with a compact wavelet representation to model a continuous signed distance field, promoting shape generation with diverse structures and fine details.
{ "timestamp": "2022-09-20T02:23:49", "yymm": "2209", "arxiv_id": "2209.08725", "language": "en", "url": "https://arxiv.org/abs/2209.08725" }
\section{Examples of Argument Pairs} We list several improbable argument pairs from the ``checklist''. \begin{table}[h] \resizebox{\textwidth}{!}{ \begin{tabular}{ll|ll} \toprule \multicolumn{2}{c|}{Argument 1} & \multicolumn{2}{c}{Argument 2} \\ \midrule Justice.Sentence.Unspecified & JudgeCourt & Life.Die.Unspecified & Victim \\ Justice.Sentence.Unspecified & Defendant & Life.Die.Unspecified & Victim \\ Control.ImpedeInterfereWith.Unspecified & Impeder & Justice.ArrestJailDetain.Unspecified & Jailer \\ Contact.RequestCommand.Unspecified & Recipient & Justice.ArrestJailDetain.Unspecified & Jailer \\ Life.Injure.Unspecified & Injurer & Transaction.ExchangeBuySell.Unspecified & Giver \\ Justice.TrialHearing.Unspecified & Defendant & Transaction.ExchangeBuySell.Unspecified & Giver \\ Justice.TrialHearing.Unspecified & Defendant & Transaction.ExchangeBuySell.Unspecified & Recipient \\ Conflict.Attack.DetonateExplode & Attacker & Contact.Contact.Broadcast & Communicator \\ Conflict.Attack.Unspecified & Attacker & Contact.Contact.Broadcast & Communicator \\ Conflict.Attack.DetonateExplode & Attacker & Contact.ThreatenCoerce.Unspecified & Communicator \\ Conflict.Attack.Unspecified & Attacker & Contact.ThreatenCoerce.Unspecified & Communicator \\ \bottomrule \end{tabular}} \end{table} \section{Hyperparameters used in the Experiments} \begin{table}[h] \begin{tabular}{l|l} \toprule train batch size & 2 \\ eval batch size & 1 \\ learning rate & 3e-5 \\ accumulate grad batches & 4 \\ training epochs & 5 \\ warmup steps & 0 \\ weight decay & 0 \\ \# gpus & 1 \\ \bottomrule \end{tabular} \caption{Hyperparameters.} \end{table} \end{document} \section{Introduction} An event is a specific occurrence involving participants (people, objects, etc.). Understanding events in text is necessary for building machine reading systems, as well as for downstream tasks such as information retrieval, knowledge base population, and trend analysis of real-life world events~\cite{sundheim-1992-overview}. Event Extraction has long been studied as a local sentence-level task~\cite{grishman-sundheim-1996-message, ji2008refining, grishman_2019,LinACL2020}. This has driven researchers to focus on developing approaches for sentence-level predicate-argument extraction. This is problematic when events and their arguments are spread across multiple sentences -- in real-world cases, events are often written throughout a document.\footnote{In {\sc WikiEvents}~\cite{li-etal-2021-document}, nearly 40\% of events have an argument outside the sentence containing the trigger.} \begin{figure}[t] \centering \resizebox{\columnwidth}{!}{ \includegraphics{./figures/example.pdf} } \caption{Document-level event argument extraction.} \label{fig:task} \end{figure} In Figure~\ref{fig:task}, the excerpt of a news article describes two events: one in the 3rd sentence (an arrest event triggered by ``captured'') and one in the 6th sentence (an attack event triggered by ``explosion''). S6 on its own contains little information about the arguments/participants of the explosion event, but together with the context of S3 and S7, we can find the informative arguments for the \textsc{attacker} role. In this work, we focus on the \textit{informative argument} extraction problem, which is more practical and requires a much broader view of cross-sentence context~\cite{li-etal-2021-document}. For example, although ``the brothers'' also refers to ``Tamerlan T.'' and ``Dzhokhar'' (and is closer to the trigger word), it should not be extracted as an informative argument.
In recent years, there have been efforts focusing on event extraction beyond sentence boundaries with end-to-end learning~\cite{ebner-etal-2020-multi, du-thesis-2021, li-etal-2021-document}. Most of the work still focuses on modeling each event independently~\cite{li-etal-2021-document} and ignores the global context, partially because of the pretrained models' length limit and their lack of attention to distant context~\cite{khandelwal-etal-2018-sharp}. \newcite{du-etal-2021-template} propose to model the dependency between events directly via the design of the generation output format, yet it is not able to handle longer documents with more events -- whereas in real-world news articles there are often more than fifteen inter-related events (Table~\ref{tab:datastats}). In addition, previous work often overlooks the consistency between extracted event structures across the long document. For example, if one person has been identified as a \textsc{jailer} in an event, it is unlikely that the same person is an \textsc{attacker} in another event in the document (Figure~\ref{fig:task}), according to world event knowledge~\cite{Sap-etal-2019-atomic, yao-etal-2020-weakly}. In this paper, to tackle these challenges and obtain more consistent/coherent extraction results, we propose a document-level memory-enhanced training and decoding framework (Figure~\ref{fig:framework}) for the problem. It can leverage relevant and necessary context beyond the length constraint of end-to-end models, by using the idea of a dynamic memory store. It helps the model leverage previously generated/extracted event information both during training (implicitly) and during testing/decoding (explicitly). More specifically, during training, it retrieves the most similar event sequence in the memory store as additional input context for the model. Plus, it performs constrained decoding based on the memory store and our harvested global knowledge-based argument pairs from the ontology. We conduct extensive experiments and analysis on the {\sc WikiEvents} corpus and show that our framework significantly outperforms previous methods based either on neural sequence labeling or on text generation. We also demonstrate that the framework achieves larger gains over non-memory-based baseline models as the number of events in the document grows, and that it is more robust to manually designed adversarial examples. \section{Task Definition} In this work, we focus on the challenging problem of extracting {\bf informative arguments of events}\footnote{Named entity mentions are recognized as more informative than nominal mentions.} from the document. Each event consists of (1) a trigger expression, a contiguous span in the document, whose type $E$ is predefined in an ontology; and (2) a set of arguments $\{arg_1, arg_2, ...\}$, each of which has a role predefined in the ontology for event type $E$. In the annotation guideline/ontology, the ``template'' that describes the connections between arguments of the event type is also provided. For example, when $E$ is {\it Arrest}, its corresponding arguments to be extracted should have the roles: \textsc{Jailer} (\verb|<arg1>|), {\sc Detainee} (\verb|<arg2>|), {\sc Crime} (\verb|<arg3>|), {\sc Place} (\verb|<arg4>|).
Its description template is: \begin{quote} \verb|<arg1>| arrested or jailed \verb|<arg2>| for \verb|<arg3>| crime at \verb|<arg4>| place \end{quote} Given a long news document $Doc = \{..., \text{<Trg1>}, ..., x_i, ..., \text{<Trg2>},..., x_n\}$ with given event triggers, our goal is to extract all the informative argument spans to fill in the roles of $E1$, $E2$, etc. For the example piece in Figure~\ref{fig:task}, $E1$ is {\it Arrest} (triggered by <Trg1> ``captured'') and $E2$ is {\it Attack-Detonate} (<Trg2> is ``explosion''). The ontology is constructed by the DARPA KAIROS project\footnote{\footnotesize https://www.darpa.mil/news-events/2019-01-04} for event annotation. It defines 67 event types in a three-level hierarchy, which is richer than the ACE05 ontology with only 33 event types for sentence-level extraction. \section{Methodology} In this section, we describe our memory-enhanced neural generation-based framework (Figure~\ref{fig:framework}) for extracting informative event arguments from the document. Our base model is a sequence-to-sequence pretrained language model for text generation. We first introduce how we leverage previously extracted events as additional context for training the text generation-based event extraction model, helping the model automatically capture event dependency knowledge (Section~\ref{subsec:memorytrain}). To {\it explicitly} help the model satisfy the global event knowledge-based constraints (e.g., it is improbable that one person would be a {\sc Jailer} in event A and then an {\sc Attacker} in event B), we propose a dynamic decoding process with world knowledge-based argument pair constraints (Section~\ref{subsec:dynamicde}). \subsection{Memory-enhanced Generation Model \\ for Argument Extraction} \label{subsec:memorytrain} Following \newcite{li-etal-2021-document}, the main model of our framework is based on the pretrained encoder-decoder model BART~\cite{lewis-etal-2020-bart}. The intuition behind using BART for the extraction task is that it is pre-trained as a denoising autoencoder -- reconstructing the original input sequence. This fits our objective of extracting argument spans from the input document because the extracted arguments' tokens come from the input sequence. The generation model takes as input (1) {\it context:} the concatenation of the piece of text $x$ (of document $D$) containing the current event trigger\footnote{Up to the maximum length limit of the pre-trained model.} and the event type's corresponding template in the ontology; and (2) {\it memory store} $m$: previously extracted events of the same document $D$. It learns a distribution $p(y|x,m)$ over possible outputs $y$. The ground-truth sequence $y$ is a template in which the placeholder $\verb|<arg>|$s are filled with the gold-standard argument spans of the current event.\footnote{The gold sequence for the 1st event in Figure~\ref{fig:task} would be ``[policemen including Collin] arrested or jailed [Tamerlan T. and Dzhokhar] for <arg> crime at <arg> place''} \begin{equation} p(y|x,m) = \prod_{i=1}^{N} p(y_i|y_{1:i-1},x,m) \end{equation} To be more specific, to build the dependency between events across the document, we use the most relevant event in the memory store $m$ as additional context, instead of the entire memory store. To retrieve the most relevant ``event'' (i.e., a generated sequence) from the memory store $m = \{m_1, m_2,...
\}$, we use S-BERT~\cite{reimers-gurevych-2019-sentence} for dense retrieval (i.e., retrieval with dense representations provided by a neural network). S-BERT is a modification of the BERT model~\cite{devlin-etal-2019-bert} that uses siamese and triplet network structures to obtain semantically meaningful embeddings for text sequences, allowing us to compare two input sequences via cosine similarity easily and quickly. Given a current input document piece $x$, we encode $x$ and all of the previously generated event sequences in the memory store. Then we calculate the similarity scores with vector-space cosine similarity and normalization: \begin{equation} \nonumber \begin{gathered} \text{score}(m_i|x) = \frac{\exp{f(x,m_i)}}{\sum_{m_j \in m}{\exp{f(x,m_j)}}} \\ f(x,m_i) = Embed(x)^T Embed(m_i) \end{gathered} \end{equation} Afterwards, we select the $m_i$ with the highest similarity score: $m^{R} = \argmax_i \text{score}(m_i|x)$. To summarize, the input sequence for the memory-enhanced model consists of the retrieved generated event sequence ($m^{R}$), the template for the current event type ($T$) -- provided by the ontology/dataset -- and the context words from the document ($x_1$, ..., $x_n$): \begin{equation} \begin{gathered} \nonumber \text{<S>} \ m^{R}_1, m^{R}_2, ..., \ \text{</S>} \\ \text{<S>} \ T_1, T_2, ... \ \text{</S>} \quad x_1, x_2, ..., x_n \ \text{[EOS]} \\ \end{gathered} \end{equation} During training, the memory store consists of gold-standard event sequences, while at test time it contains actually generated event sequences. The training objective is to minimize the negative log-likelihood over all $((x, m^{R}, T), y)$ instances. Since we fix the parameters of S-BERT, the retrieval module's parameters are not updated during training. Thus the training time cost of our memory-based training is almost the same as that of the simple generation-based model. \subsection{Constrained Decoding with \\ Global Knowledge-based Argument Pairs} \label{subsec:dynamicde} Constrained/dynamic decoding is an important stage in our framework. We first harvest a number of world knowledge-based event argument pairs that are probable/improbable to share the same entity as the argument. For example, (<Event Type: Arrest, Argument Role: {\sc Jailer}> | <Event Type: Attack-Detonate, Argument Role: {\sc Attacker}>) is an improbable pair. In the framework (Figure~\ref{fig:framework}), they are called ``argument pairs''. Then, based on the argument pair constraints, the dynamic decoding is conducted throughout the document -- if one entity is decoded in an event in the earlier part of the document, it should not be decoded later in another event if the result would be incompatible with the improbable argument pairs. \begin{algorithm}[t] \small \SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output} \Input{ Event Ontology $O$, consisting of $|O|$ events' information. For each event $E_i \in O$, it has a set of argument roles $(A^{i}_1, A^{i}_2,...)$.
} \Output{A set of (<Event Type, Argument Role> | <Event Type, Argument Role>) pairs with ``probable'' or ``improbable'' denotation.} \BlankLine $impro\_arg\_pairs \longleftarrow \{\}$\; $pro\_arg\_pairs \longleftarrow \{\}$\; \tcp{Enumerate event type pairs} \For{$i \leftarrow 1$ \KwTo $|O|$}{ \For{$j \leftarrow i+1$ \KwTo $|O|$}{ $cnt(i,j)$ = count \# of $(E_i, E_j)$ co-occurrences in the training documents; \lIf{$\text{cnt}(i,j) == 0$}{continue} \tcp{Enumerate argument pairs} \For{$A^{i}_k \in E_i$ args $(A^{i}_1, A^{i}_2,...)$}{ \For{$A^{j}_h \in E_j$ args $(A^{j}_1, A^{j}_2,...)$}{ \lIf{$entity\_type(A^{i}_k) != entity\_type(A^{j}_h)$}{continue} $cnt\_args(A^{i}_k, A^{j}_h)$ = count \# of $(A^{i}_k, A^{j}_h)$ being the same entity in the training set documents; \lIf{$\frac{cnt\_args(A^{i}_k, A^{j}_h)}{cnt(i,j)}>0.001$} {$pro\_arg\_pairs$.add($(<E_i, A^{i}_k> | <E_j, A^{j}_h>)$)} \Else{$impro\_arg\_pairs$.add($(<E_i, A^{i}_k> | <E_j, A^{j}_h>)$)} } } } } \caption{\small Automatically Harvesting Argument Pairs from the Event Ontology} \label{algo:harvest} \end{algorithm} \paragraph{Harvesting Global Knowledge-based Argument Pairs from the Ontology} We first run an algorithm to automatically harvest all candidate argument pairs (Algorithm~\ref{algo:harvest}). Basically, we \begin{itemize}[leftmargin=*] \item First enumerate all possible event type pairs, and count how many times they co-occur in the training set (Line 2--6). \item Then enumerate all possible argument type pairs that share the same entity type in the ontology (e.g., argument {\sc Organization} (ORG) and argument {\sc Victim} (PER) do not have the same entity type), and count how many times both of the arguments are filled by the same entity in the training documents (e.g., ``Dzhokhar'' is both {\sc Detainee} and {\sc Attacker} in two events in Figure~\ref{fig:task}) (Line 7--11). \item Finally, add into the set of probable argument pairs those whose normalized score is above a threshold (covering 99\% of the candidate pairs with non-zero score), and the rest into the set of improbable pairs (Line 11--14). \end{itemize} \begin{figure}[t] \centering \resizebox{\columnwidth}{!}{ \includegraphics{./figures/eg_decoding.pdf} } \caption{Constrained/Dynamic Decoding.} \label{fig:decoding} \end{figure} After automatic harvesting, since there is noise in the dataset as well as cases not covered, we conduct a human curation process to mark certain improbable argument pairs as probable, based on world knowledge. Finally, we obtain 1,568 improbable argument pairs and 687 probable pairs. \begin{table}[!h] \small \centering \begin{tabular}{l|cc} \toprule & \begin{tabular}[c]{@{}l@{}}\# pairs with global \\ co-occurrence stats\end{tabular} & \begin{tabular}[c]{@{}l@{}}\# pairs after \\ human curation\end{tabular} \\ \midrule improbable & 1,855 & 1,568 \\ probable & 400 & 687 \\ \bottomrule \end{tabular} \caption{Statistics of Harvested Argument Pairs.} \label{tab:harveststats} \end{table} \paragraph{Dynamic Decoding Process} During the decoding process, we keep an explicit data structure in the memory store to record which entities have been decoded and which argument roles they are assigned (Figure~\ref{fig:decoding}). When decoding the arguments of later events in the document, suppose we are at time step $t$, generating token $y_t$ of the sequence for event $E_i$; we first determine the argument role ($A_k$) that $y_t$ corresponds to, and then perform the memory lookup and probability adjustment described next (also sketched in code below).
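A minimal sketch of this lookup-and-adjust step is given below (not our actual implementation); all names (\texttt{memory}, \texttt{impro\_pairs}, \texttt{tokenizer}, \texttt{penalty}) are hypothetical placeholders, and while our method rescales post-softmax probabilities, the sketch equivalently penalizes pre-softmax logits.
\begin{verbatim}
# A minimal sketch of the improbable-pair constraint at one decoding step.
def apply_pair_constraints(logits, current_role, memory, impro_pairs,
                           tokenizer, penalty=5.0):
    # memory: list of (entity_string, role) records for already-decoded
    # arguments of earlier events in the same document.
    for entity, prev_role in memory:
        if (current_role, prev_role) in impro_pairs or \
           (prev_role, current_role) in impro_pairs:
            # Down-weight every token of the conflicting entity, which
            # decreases its generation probability after the softmax.
            for tok in tokenizer.encode(entity, add_special_tokens=False):
                logits[tok] -= penalty
    return logits
\end{verbatim}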
Then we search the memory store for extracted entities $e$ that hold an argument role $A_h$ such that $<A_k, A_h>$ is an improbable argument pair. When decoding the token at time step $t$, we decrease the probability (after softmax) of generating/extracting the tokens of entity $e$, following the improbable argument pair rule. Compared to decreasing the probability of extracting conflicting entities, we are more conservative in utilizing the probable argument pairs: only if the same entity has been assigned the argument role more than five times in the document do we increase the probability of extracting that entity (generating its tokens) for the corresponding (most co-occurring) argument role. After the generation process for the current event, we add the newly generated event sequence (extracted arguments) back into the memory store. \section{Experiments} \subsection{Dataset and Evaluation Metrics} We conduct evaluations on the newly released \textsc{WikiEvents} dataset~\cite{li-etal-2021-document}. As compared to the ACE05\footnote{ http://www.itl.nist.gov/iad/mig/tests/ace/2005/} sentence-level extraction benchmark, {\sc WikiEvents} focuses on annotations for informative arguments and for multiple events in the document-level event extraction setting, and is the only benchmark dataset for this purpose to date. It contains real-world news articles annotated with the DARPA KAIROS ontology. As shown in the dataset paper, the distance between informative arguments and the event trigger is ten times larger than the distance between local/uninformative arguments (including pronouns) and event triggers. This underscores the need for modeling long document context and event dependency, making it a good benchmark for evaluating our proposed models. The statistics of the dataset are shown in Table~\ref{tab:datastats}. We use the same data split and preprocessing steps as in previous work. \begin{table}[h] \small \centering \begin{tabular}{l|ccc} \toprule & Train & Dev & Test \\ \midrule Documents & 206 & 20 & 20 \\ Sentences & 5262 & 378 & 492 \\ Avg. number of events & 15.73 & 17.25 & 18.25 \\ Avg. number of tokens & 789.33 & 643.75 & 712.00 \\ \bottomrule \end{tabular} \caption{Dataset Statistics.} \label{tab:datastats} \end{table} \begin{table*}[t] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{l|cccccc|cccccc} \toprule & \multicolumn{6}{c|}{Argument Identification} & \multicolumn{6}{c}{Argument Classification} \\ Models & \multicolumn{3}{c}{Head Match} & \multicolumn{3}{l|}{Coref Match} & \multicolumn{3}{c}{Head Match} & \multicolumn{3}{l}{Coref Match} \\ & P & R & F1 & P & R & F1 & P & R & F1 & P & R & F1 \\ \midrule \begin{tabular}[c]{@{}l@{}}BERT-CRF \\ \cite{shi-lin-2019-BERTCRF}\end{tabular} & - & - & 52.71 & - & - & 58.12 & - & - & 43.29 & - & - & 47.70 \\ \begin{tabular}[c]{@{}l@{}}BART-Gen \\ \cite{li-etal-2021-document}\end{tabular} & 58.62 & 55.64 & 57.09 & 62.84 & 59.64 & 61.19 & 54.02 & 51.27 & 52.61 & 57.47 & 54.55 & 55.97 \\ \midrule Memory-based Training & 61.07 & 56.18 & 58.52 & 66.21 & 60.91 & 63.45 & 55.93 & 51.45 & 53.60 & 60.47 & 55.64 & 57.95 \\ \begin{tabular}[c]{@{}l@{}}\quad w/ knowledge \\ \quad constrained decoding\end{tabular} & 62.45 & 56.55 & {\bf 59.35} & 67.67 & 61.27 & {\bf 64.31}$^{*}$ & 57.23 & 51.82 & {\bf 54.39} & 61.85 & 56.00 & {\bf 58.78}$^{*}$ \\ \bottomrule \end{tabular}} \caption{Performance (\%) on the informative argument extraction task.
$^{*}$ indicates statistical significance ($p<0.05$).} \label{tab:performance-general} \end{table*} As for evaluation, we use the same criteria as in previous work. We consider an argument span to be correctly identified if its offsets match any of the gold/reference informative arguments of the current event (i.e., argument identification), and correctly classified if its semantic role also matches (i.e., argument classification)~\cite{li-etal-2013-joint}. To judge whether an extracted argument span matches the gold-standard argument, exact match is too strict: some correct candidates would be considered spurious (e.g., ``the 22 policemen'' and ``22 policemen'' do not match under the exact-match standard). Following~\newcite{huang2012modeling, li-etal-2021-document}, we therefore use head word match F1 ({\it Head F1}). We also report performance under a more lenient metric, ``{\it Coref F1}'': the extracted argument span gets full credit if it is coreferential with the gold-standard arguments~\cite{ji-grishman-2008-refining}. The coreference links between informative arguments across the document are given in the gold annotations. \subsection{Results} We compare our framework to a number of competitive baselines. \cite{shi-lin-2019-BERTCRF} is a popular baseline for semantic role labeling (predicate-argument prediction). It performs sequence labeling based on automatically extracted features from BERT~\cite{devlin-etal-2019-bert} and uses Conditional Random Fields~\cite{DBLP:conf/icml/LaffertyMP01} for structured prediction ({\bf BERT-CRF}). \newcite{li-etal-2021-document} propose a conditional neural text generation model for the document-level argument extraction problem; it handles each event in isolation ({\bf BART-Gen}). We denote our proposed memory-enhanced training with retrieved additional context as {\bf Memory-based Training}. We also present the argument pair constrained decoding results separately to see both components' contributions.\footnote{All significance tests for F-1 are computed using the paired bootstrap procedure over 5k samples of generated sequences~\cite{berg-etal-2012-empirical}.} In Table~\ref{tab:performance-general}, we present the main results for document-level informative argument extraction. The score for argument identification is strictly higher than that for argument classification since it only requires span offset match. We observe that: \begin{itemize} \item The neural generation-based models (BART-Gen and our framework) are superior in this document-level informative argument extraction problem, as compared to the sequence labeling-based approaches. Plus, generation-based methods only require one pass, as compared to span enumeration-based methods~\cite{wadden-etal-2019-entity, du-cardie-2020-event}. \item As compared to the raw BART-Gen, our memory-based training -- leveraging the closest previously extracted event information -- substantially helps increase precision (P) and F-1 scores, with smaller but notable improvement in recall, especially under Coref Match. \item With additional argument pair constrained decoding, there is a further significant improvement in precision and F-1 scores. This can be mainly attributed to two factors: (I) during constrained decoding, we rely more on ``improbable arg. pairs'' as a checklist to make sure that the same entity is not generated for conflicting argument roles in the same document, and only utilize very few top ``probable arg.
pairs'' for promoting the decoding of frequently appearing entities; (II) if an entity has been decoded in a previous event A by mistake, then under the argument pair rule it will not be decoded in event B even if it is correct -- which might hurt recall. \end{itemize} \begin{table}[t] \small \centering \begin{tabular}{l|cc} \toprule & \multicolumn{2}{c}{Arg. Classification} \\ & Head M. & Coref. M. \\ \midrule BART-Gen & 50.00 & 53.12 \\ Memory-based Training & 50.75 & 53.73 \\ \begin{tabular}[c]{@{}l@{}}Our Best Model (w/ knowledge \\ constrained decoding)\end{tabular} & 53.73 & 56.72 \\ \bottomrule \end{tabular} \caption{Performance (\%) on adversarial examples.} \label{tab:adv} \end{table} \paragraph{Robustness to Adversarial Examples} To test how the models react to specially designed adversarial examples, we select a quarter of the documents from the original test set and add one more adversarial event into each of them by adding a few new sentences. The additional event is designed to ``attract'' the model into making mistakes that violate our global knowledge-based argument pair rules.\footnote{In our open-sourced repository, readers will be able to find our designed adversarial examples under the data folder.} An excerpt from one example: \begin{quote} Tandy, then 19, {\bf talks} to his close friend, Stephen Silva, about ... Tandy and Silva both {\bf died} as lifeguards together at the Harvard pool. Later a kid was {\bf killed} by a Stephen Silva-lookalike guy. \end{quote} In this example, we know ``Stephen Silva'' died in the second event ``Life.Die'' triggered by {\bf died}. Although he is also mentioned in the last sentence, ``Stephen Silva'' should not be extracted as the {\sc Killer}. In Table~\ref{tab:adv}, we summarize the argument classification F-1 scores of the models. First, we see that on the adversarial examples the performance scores all drop compared to the normal setting (Table~\ref{tab:performance-general}), showing that it is harder to maintain robustness in this setting. Our best model with argument pair constrained decoding substantially outperforms both BART-Gen and our memory-based training model. The gap is larger than in the general evaluation setting, which shows the advantage of {\it explicitly} enforcing the reasoning/constraint rules. \section{Further Analysis} In this section, we provide more insights with quantitative and qualitative analysis, as well as error analysis of the remaining challenges. \paragraph{Influence of Similarity-based Retrieval} In Table~\ref{tab:ablation}, we first investigate what happens when our similarity-based retrieval module is removed -- we find that the F-1 scores substantially drop. There is also a drop across metrics when we retrieve a random event from the memory store. Interestingly, the model gets slightly better performance with random memory than with no retrieved/demonstration sequences at all. This corresponds to findings in other areas of NLP on how demonstrations lead to performance gains when using pre-trained language models (especially in the few-shot learning setting). \begin{table}[h] \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{l|cc|cc} \toprule \multirow{2}{*}{Models} & \multicolumn{2}{c|}{Arg. I.} & \multicolumn{2}{c}{Arg. C.} \\ & H. M. & C. M. & H. M. & C. M.
\\ \midrule Memory-based Training & 58.52 & 63.45 & 53.60 & 57.95 \\ \quad w/o retrieval & 56.84 & 61.82 & 51.29 & 55.69 \\ \quad w/ random memory & 57.65 & 62.69 & 52.22 & 57.17 \\ \bottomrule \end{tabular} } \caption{Ablation (\%) for similarity-based retrieval.} \label{tab:ablation} \end{table} \paragraph{Document Length and \# of Events} In Figure~\ref{fig:chart_doclen_evennum}, we examine how performance changes as the document length and the number of events per document grow. First, we observe that as the document length grows, challenges grow for both the baseline and our framework (F-1 drops from over 70\% to around 55\%), while our framework maintains a larger advantage when the document is longer than 250 words. As the number of events per document grows (from $\leq$8 to around 25), our model's performance is not affected much (F-1 stays above 60\%), while the baseline system's F-1 score drops to around 50\%. \begin{figure}[h] \begin{subfigure}[b]{\columnwidth} \centering \includegraphics[width=\columnwidth]{./figures/chart_doclen.pdf} \end{subfigure} \hfill \hfill \begin{subfigure}[b]{\columnwidth} \centering \includegraphics[width=\columnwidth]{./figures/chart_eventnum.pdf} \end{subfigure} \caption{Effect of document length and number of events per document.} \label{fig:chart_doclen_evennum} \end{figure} \begin{table*}[t] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{l|l|l|l} \toprule & BART-Gen Baseline & Memory-enhanced Training & w/ Constrained Decoding \\ \midrule Input Doc. 1 & \multicolumn{3}{l}{\begin{tabular}[c]{@{}l@{}}{[}S1{]} ... Accused New York bomber \textcolor{green!80!gray}{Ahmad Khan Rahimi} on Thursday to {\it federal charges} that he set off ...\\ {[}S4{]} ... He spoke only once, when U.S. District Judge Richard Berman asked him to ...\\ {[}S9{]} The confrontation left him with several gunshot {\bf wounds}, delaying the filing of {\it federal charges} ... \end{tabular}} \\ \midrule Decoded Seq. & \begin{tabular}[c]{@{}l@{}}\textcolor{red}{Richard Berman[{\sc Victim}]} \\ was injured by <arg> ... \end{tabular} & \begin{tabular}[c]{@{}l@{}}\textcolor{green!80!gray}{Ahmad Khan Rahimi[{\sc Victim}]} \\ was injured by <arg> ... \end{tabular} & \begin{tabular}[c]{@{}l@{}}\textcolor{green!80!gray}{Ahmad Khan Rahimi[{\sc Victim}]} \\ was injured by <arg> ... \end{tabular} \\ \midrule Input Doc. 2& \multicolumn{3}{l}{\begin{tabular}[c]{@{}l@{}}{[}S1{]} Cuba {\bf sidesteps} Colombia's request to ... \\ {[}S11{]} In November, Colombia asked Cuba to {\bf capture} ELN rebel commander \textcolor{green!80!gray}{Nicolas Rodriguez} \\ and provide information about the presence of other commanders in the Cuban territory. ... \\ {[}S13{]} The Cuban government did not respond publicly to that request or make a statement ...\end{tabular}} \\ \midrule Decoded Seq. & \begin{tabular}[c]{@{}l@{}}\textcolor{red}{Cuba[{\sc Jailer}]} arrested or jailed \\ \textcolor{green!80!gray}{Nicolas Rodriguez[{\sc Detainee}]} ... \end{tabular} & \begin{tabular}[c]{@{}l@{}}\textcolor{red}{Cuba[{\sc Jailer}]} arrested or jailed \\ \textcolor{green!80!gray}{Nicolas Rodriguez[{\sc Detainee}]} ... \end{tabular} & \begin{tabular}[c]{@{}l@{}}<arg> arrested or jailed \\ \textcolor{green!80!gray}{Nicolas Rodriguez[{\sc Detainee}]} ... \end{tabular} \\\bottomrule \end{tabular} } \caption{Decoded Seq. (Extracted Arguments) by BART-Gen and Our Models.} \label{tab:qualitative_eg} \end{table*} \paragraph{Qualitative Analysis} We present a couple of representative examples (Table~\ref{tab:qualitative_eg}).
In the first example, for the event triggered by {\bf wounds}, it is hard to find the {\sc Victim} argument ``Ahmad Khan Rahimi'' since it is explicitly mentioned far before the current sentence. But with the retrieved additional context, both our framework variants are able to extract the full name correctly. In the second example, ``Cuba'' was mentioned in two sentences with two events (an Impede event triggered by {\bf sidesteps} and an Arrest event triggered by {\bf capture}), but it only participated in the first event. According to our argument pair constraints -- it is improbable that one entity is both an {\sc Impeder} and a {\sc Jailer} -- our framework with constrained decoding conducts reasoning to avoid the wrong extraction. \begin{table}[t] \resizebox{\columnwidth}{!}{ \begin{tabular}{l|ccc} \toprule & Missing & Spurious & Misclassified \\\midrule Head M & 239 (52.88\%) & 187 (41.37\%) & 26 (5.75\%) \\ Coref M & 213 (52.85\%) & 161 (39.95\%) & 29 (7.20\%) \\\bottomrule \end{tabular}} \caption{Types of Errors Made by Our Framework.} \label{tab:error_types} \end{table} \paragraph{Error Analysis and Remaining Challenges} \begin{figure}[t] \resizebox{0.95\columnwidth}{!}{ \includegraphics{./figures/hist_arg_dist.pdf} } \caption{Distribution of Distance between Informative Arguments and the Gold-standard Triggers.} \label{fig:arg_dist} \end{figure} Table~\ref{tab:error_types} categorizes the types of argument extraction errors made by our best model. The majority of errors are missing arguments, and only around 7\% of cases are caused by incorrectly assigned argument roles (e.g., a {\sc Place} argument mistakenly labeled as a {\sc Target} argument). Interestingly, from the distribution in Figure~\ref{fig:arg_dist}, we see that, as compared to the distance of gold-standard informative arguments to the trigger (avg. 80.41 words), the missing arguments are much farther away (avg. 136.39 words) -- showing the difficulty of extracting distant arguments as compared to local arguments. Finally, we examine the example predictions more closely and categorize the reasons for errors into the following types: (1) {\it Challenges in obtaining an accurate boundary of the argument span.} In the example excerpt ``On Sunday, a suicide bombing in the southeastern province of [Logar] left eight ...'', our model extracts ``southeastern province'' as the {\sc Place}. Similarly, in ``... were transported to [Kabul] city..'', our model extracts ``city'' as the {\sc Destination}. In both cases the model gets no credit. To mitigate this problem, models should be able to identify certain noun phrase boundaries with external knowledge. Plus, improvements in data annotation and evaluation are also needed -- the model should get partial credit when the extracted span does not overlap with, but is related to, the gold argument. (2) {\it Long-distance dependency and deeper context understanding.} In news, most of the content is written by the author, while certain content is quoted from participants; models usually do not distinguish between the two or account for the large difference in stance. In the excerpt ``Bill Richard, whose son, Martin, was the youngest person killed in the bombing, said Tsarnaev could have backed out ... Instead, \uwave{Richard said, he chose hate. he chose destruction. He chose death. ...}'', the full name of the informative argument (``D. Tsarnaev'') was mentioned at the very beginning of the document.
Although our model can leverage previously decoded events, it is not able to fully understand the speaker's point of view and misses the full {\sc Killer} argument span. \section{Related Work} \paragraph{Event Knowledge} There has been work on acquiring event-event and subevent knowledge with heuristic-based rules or crowdsourcing-based methods. \newcite{Sap-etal-2019-atomic} propose to use crowdsourcing for obtaining \textit{if-then} relations between events. \newcite{bosselut-etal-2019-comet} use generative language models to generate new event knowledge based on crowdsourced triples. \newcite{yao-etal-2020-weakly} propose a weakly-supervised approach to extract subevent relation tuples from text. In our work, we focus on harvesting knowledge-based event argument pair constraints from the predefined ontology with training data co-occurrence statistics. Moreover, the work above on knowledge acquisition has not investigated explicitly encoding the knowledge/constraints to improve models on document-level event extraction tasks. \paragraph{Document-level Event Extraction} Event extraction has mainly been studied under the document-level setting (the template filling tasks from the MUC conferences~\cite{grishman-sundheim-1996-message}) and the sentence-level setting (using the ACE data~\cite{doddington-etal-2004-automatic} and BioNLP shared tasks~\cite{kim-etal-2009-overview}). In this paper, we focus on the document-level event argument extraction task, which is a less-explored and challenging topic~\cite{du-etal-2021-template, li-etal-2021-document}. To support progress on the problem, \newcite{ebner-etal-2020-multi} built the RAMS dataset, which contains annotations for cross-sentence arguments but only one event per document. Later, \newcite{li-etal-2021-document} built the benchmark {\sc WikiEvents} with complete event annotations for each document. Regarding methodology, neural text generation-based models have proved superior on this document-level task~\cite{huang-etal-2021-document, du-etal-2021-template, li-etal-2021-document}, but they are still limited by the maximum context length and mainly model one event at a time. \newcite{yang-mitchell-2016-joint} proposed a joint extraction approach that models cross-event dependencies, but it is restricted to events co-occurring within a sentence and only performs trigger typing. In our framework, utilizing the memory store helps better capture global context and avoids the document length constraint. Apart from event extraction, in the future it is worth investigating how to leverage the global memory idea for other document-level IE problems such as ($N$-ary) relation extraction~\cite{quirk-poon-2017-distant, yao-etal-2019-docred}. \section{Conclusions and Future Work} In this work, we examined the effect of a global document-level ``memory'' on informative event argument extraction. In our framework, we propose to leverage previously extracted events as additional context to help the model learn dependencies across events. At test time, we propose a dynamic decoding process to help the model satisfy global knowledge-based argument constraints. Experiments demonstrate that our approach achieves substantial improvements over prior methods and has a larger advantage as document length and the number of events increase. For future work, we plan to investigate how to extend our method to multi-document event extraction.
\section*{Acknowledgement} We thank the anonymous reviewers for their helpful suggestions. This research is based upon work supported by U.S. DARPA KAIROS Program No. FA8750-19-2-1004, U.S. DARPA AIDA Program No. FA8750-18-2-0014 and LORELEI Program No. HR0011-15-C-0115. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. \section{Task Definition} In this work, we focus on the challenging problem of extracting {\bf informative arguments of events}\footnote{Named entity mentions are recognized as more informative than nominal mentions.} from a document. Each event consists of (1) a trigger expression, a contiguous span in the document whose type $E$ is predefined in an ontology; and (2) a set of arguments $\{arg_1, arg_2, ...\}$, each of which has a role predefined in the ontology for event type $E$. The annotation guideline/ontology also provides a ``template'' that describes the connections between arguments of the event type. For example, when $E$ is {\it Arrest}, its corresponding arguments to be extracted should have the roles \textsc{Jailer} (\verb|<arg1>|), {\sc Detainee} (\verb|<arg2>|), {\sc Crime} (\verb|<arg3>|), and {\sc Place} (\verb|<arg4>|). Its description template is: \begin{quote} \verb|<arg1>| arrested or jailed \verb|<arg2>| for \verb|<arg3>| crime at \verb|<arg4>| place \end{quote} Given a long news document $Doc = \{..., \text{<Trg1>}, ..., x_i, ..., \text{<Trg2>},..., x_n\}$ with given event triggers, our goal is to extract all the informative argument spans filling the roles of $E1$, $E2$, etc. For the example piece in Figure~\ref{fig:task}, $E1$ is {\it Arrest} (triggered by <Trg1> ``captured'') and $E2$ is {\it Attack-Detonate} (<Trg2> is ``explosion''). The ontology is constructed by the DARPA KAIROS project\footnote{\footnotesize https://www.darpa.mil/news-events/2019-01-04} for event annotation. It defines 67 event types in a three-level hierarchy, which is richer than the ACE05 ontology with only 33 event types for sentence-level extraction. \section{Methodology} In this section, we describe our memory-enhanced neural generation-based framework (Figure~\ref{fig:framework}) for extracting informative event arguments from the document. Our base model builds on a pretrained sequence-to-sequence language model for text generation. We first introduce how we leverage previously extracted events as additional context for training the text generation-based event extraction model, helping the model automatically capture event dependency knowledge (Section~\ref{subsec:memorytrain}). To {\it explicitly} help the model satisfy global event knowledge-based constraints (e.g., it is improbable that one person would be a {\sc Jailer} in event A and then an {\sc Attacker} in event B), we propose a dynamic decoding process with world knowledge-based argument pair constraints (Section~\ref{subsec:dynamicde}).
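Before detailing the two components, we illustrate the template format from the task definition with a minimal sketch (ours, not the authors' released code): a gold target sequence is obtained by replacing a template's placeholders with annotated argument spans, while unfilled roles keep their placeholders. \begin{verbatim}
# Minimal sketch (ours): fill an ontology template with gold argument
# spans; roles without an annotated span keep their placeholder.
TEMPLATE = "<arg1> arrested or jailed <arg2> for <arg3> crime at <arg4> place"

def fill_template(template, role_to_span):
    out = template
    for slot, span in role_to_span.items():  # e.g. {"<arg1>": "policemen"}
        out = out.replace(slot, span)
    return out

# fill_template(TEMPLATE, {"<arg2>": "Tamerlan T. and Dzhokhar"})
# -> "<arg1> arrested or jailed Tamerlan T. and Dzhokhar for <arg3> ..."
\end{verbatim}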
\subsection{Memory-enhanced Generation Model \\ for Argument Extraction} \label{subsec:memorytrain} Following \newcite{li-etal-2021-document}, the main model of our framework is based on the pretrained encoder-decoder model BART~\cite{lewis-etal-2020-bart}. The intuition behind using BART for the extraction task is that it is pretrained as a denoising autoencoder -- reconstructing the original input sequence. This fits our objective of extracting argument spans from the input document, because the extracted arguments' tokens come from the input sequence. The generation model takes as input (1) {\it context:} the concatenation of the piece of text $x$ (of document $D$) containing the current event trigger\footnote{Up to the maximum length limit of the pre-trained model.} and the event type's corresponding template in the ontology; and (2) a {\it memory store} $m$ of previously extracted events of the same document $D$; it learns a distribution $p(y|x,m)$ over possible outputs $y$. The ground truth sequence $y$ is a template in which the \verb|<arg>| placeholders are filled by the gold-standard argument spans of the current event.\footnote{The gold sequence for the 1st event in Figure~\ref{fig:task} would be ``[policemen including Collin] arrested or jailed [Tamerlan T. and Dzhokhar] for <arg> crime at <arg> place''} \begin{equation} p(y|x,m) = \prod_{i=1}^{N} p(y_i|y_{1:i-1},x,m) \end{equation} To model the dependency between events across the document more directly, we use the most relevant event in the memory store $m$ as additional context, instead of the entire memory store. To retrieve the most relevant ``event'' (i.e., a generated sequence) from the memory store $m = \{m_1, m_2,... \}$, we use S-BERT~\cite{reimers-gurevych-2019-sentence} for dense retrieval (i.e., retrieval with dense representations produced by a neural network). S-BERT is a modification of the BERT model~\cite{devlin-etal-2019-bert} that uses siamese and triplet network structures to obtain semantically meaningful embeddings for text sequences, which allows two input sequences to be compared with cosine similarity easily and quickly. Given a current input document piece $x$, we encode $x$ and all of the previously generated event sequences in the memory store. Then we calculate the similarity scores between the embeddings, with normalization: \begin{equation} \nonumber \begin{gathered} \text{score}(m_i|x) = \frac{\exp{f(x,m_i)}}{\sum_{m_j \in m}{\exp{f(x,m_j)}}} \\ f(x,m_i) = Embed(x)^T Embed(m_i) \end{gathered} \end{equation} Afterwards, we select the $m_i$ with the highest similarity score: $m^{R} = \argmax_i \text{score}(m_i|x)$. To summarize, the input sequence for the memory-enhanced model consists of the retrieved generated event sequence ($m^{R}$), the template for the current event type ($T$) -- provided by the ontology/dataset -- and the context words from the document ($x_1$, ..., $x_n$): \begin{equation} \begin{gathered} \nonumber \text{<S>} \ m^{R}_1, m^{R}_2, ..., \ \text{</S>} \\ \text{<S>} \ T_1, T_2, ... \ \text{</S>} \quad x_1, x_2, ..., x_n \ \text{[EOS]} \\ \end{gathered} \end{equation} During training, the memory store consists of gold-standard event sequences, while at test time it contains actually generated event sequences. The training objective is to minimize the negative log-likelihood over all $((x, m^{R}, T), y)$ instances. Since we fix the S-BERT parameters, the retrieval module is not updated during training; thus, the training cost of our memory-based training is almost the same as that of the simple generation-based model.
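As an illustration, the retrieval step can be sketched as follows (ours; the specific S-BERT checkpoint is an assumption, as the paper does not name one): \begin{verbatim}
from sentence_transformers import SentenceTransformer, util

# Assumed S-BERT variant; the authors do not specify a checkpoint.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

def retrieve_most_relevant(context, memory):
    """Return the previously generated event sequence in `memory`
    most similar to the current document piece `context`."""
    if not memory:
        return None
    ctx = encoder.encode(context, convert_to_tensor=True)
    mem = encoder.encode(memory, convert_to_tensor=True)
    sims = util.cos_sim(ctx, mem)[0]   # similarity to each memory entry
    return memory[int(sims.argmax())]  # m^R = argmax_i score(m_i | x)
\end{verbatim}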
\subsection{Constrained Decoding with \\ Global Knowledge-based Argument Pairs} \label{subsec:dynamicde} The constrained/dynamic decoding is an important stage in our framework. We first harvest a number of world knowledge-based event argument pairs that are probable/improbable to share the same entity as the argument. For example, (<Event Type: Arrest, Argument Role: {\sc Jailer}> | <Event Type: Attack-Detonate, Argument Role: {\sc Attacker}>) is an improbable pair. In the framework (Figure~\ref{fig:framework}), they are called ``argument pairs''. Then, based on the argument pair constraints, dynamic decoding is conducted throughout the document -- if an entity is decoded for an event earlier in the document, it should not be decoded later for another event if the result would be incompatible with the improbable argument pairs. \begin{algorithm}[t] \small \SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output} \Input{ Event Ontology $O$, consisting of $|O|$ events' information. For each event $E_i \in O$, it has a set of argument roles $(A^{i}_1, A^{i}_2,...)$. } \Output{A set of (<Event Type, Argument Role> | <Event Type, Argument Role>) pairs with ``probable'' or ``improbable'' denotation.} \BlankLine $impro\_arg\_pairs \longleftarrow \{\}$\; $pro\_arg\_pairs \longleftarrow \{\}$\; \tcp{Enumerate event type pairs} \For{$i \leftarrow 1$ \KwTo $|O|$}{ \For{$j \leftarrow i+1$ \KwTo $|O|$}{ $cnt(i,j)$ = count \# of $(E_i, E_j)$ co-occurrences in the training documents; \lIf{$\text{cnt}(i,j) == 0$}{continue} \tcp{Enumerate argument pairs} \For{$A^{i}_k \in E_i$ args $(A^{i}_1, A^{i}_2,...)$}{ \For{$A^{j}_h \in E_j$ args $(A^{j}_1, A^{j}_2,...)$}{ \lIf{$entity\_type(A^{i}_k) != entity\_type(A^{j}_h)$}{continue} $cnt\_args(A^{i}_k, A^{j}_h)$ = count \# of $(A^{i}_k, A^{j}_h)$ being the same entity in the training set documents; \lIf{$\frac{cnt\_args(A^{i}_k, A^{j}_h)}{cnt(i,j)}>0.001$} {$pro\_arg\_pairs$.add($(<E_i, A^{i}_k> | <E_j, A^{j}_h>)$)} \Else{$impro\_arg\_pairs$.add($(<E_i, A^{i}_k> | <E_j, A^{j}_h>)$)} } } } } \caption{\small Automatic Harvesting of Argument Pairs from the Event Ontology} \label{algo:harvest} \end{algorithm} \paragraph{Harvesting Global Knowledge-based Argument Pairs from the Ontology} We first run an algorithm to automatically harvest all candidate argument pairs (Algorithm~\ref{algo:harvest}). Basically, we \begin{itemize}[leftmargin=*] \item First enumerate all possible event type pairs and count how many times they co-occur in the training set (Line 2--6). \item Then enumerate all possible argument type pairs that share the same entity type in the ontology (e.g., the argument {\sc Organization} (ORG) and the argument {\sc Victim} (PER) do not have the same entity type), and count how many times both arguments are the same entity in the training documents (e.g., ``Dzhokhar'' is both {\sc Detainee} and {\sc Attacker} in two events in Figure~\ref{fig:task}) (Line 7--11). \item Finally add into the set of probable argument pairs those whose normalized score is above a threshold (99\% of the candidate argument pairs with non-zero score), and the rest into the set of improbable pairs (Line 11--14).
\end{itemize} \begin{figure}[t] \centering \resizebox{\columnwidth}{!}{ \includegraphics{./figures/eg_decoding.pdf} } \caption{Constrained/Dynamic Decoding.} \label{fig:decoding} \end{figure} Since the dataset contains noise as well as uncovered cases, after the automatic harvesting we conduct a human curation process to mark certain improbable argument pairs as probable, based on world knowledge. Finally, we obtain 1,568 improbable argument pairs and 687 probable pairs. \begin{table}[!h] \small \centering \begin{tabular}{l|cc} \toprule & \begin{tabular}[c]{@{}l@{}}\# pairs with global \\ co-occurrence stats\end{tabular} & \begin{tabular}[c]{@{}l@{}}\# pairs after \\ human curation\end{tabular} \\ \midrule improbable & 1,855 & 1,568 \\ probable & 400 & 687 \\ \bottomrule \end{tabular} \caption{Statistics of Harvested Argument Pairs.} \label{tab:harveststats} \end{table} \paragraph{Dynamic Decoding Process} During the decoding process, we keep an explicit data structure in the memory store to record which entities have been decoded and which argument roles they were assigned (Figure~\ref{fig:decoding}). When decoding the arguments of later events in the document, suppose we are at time step $t$ generating the sequence for event $E_i$. To generate token $y_t$, we first determine the argument role ($A_k$) it corresponds to. We then search the memory store for extracted entities $e$ holding an argument role $A_h$ such that $<A_k, A_h>$ is an improbable argument pair; when decoding the token at time step $t$, we decrease the probability (after softmax) of generating/extracting the tokens of such an entity $e$, according to the improbable argument pair rule. Compared to decreasing the probability of extracting conflicting entities, we are more conservative in utilizing the probable argument pairs: only if the same entity has been assigned an argument role more than 5 times in the document do we increase the probability of extracting that entity (generating its tokens) for the corresponding (most co-occurring) argument role. After the generation process for the current event, we add the newly generated event sequence (extracted arguments) back into the memory store.
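A minimal sketch of this decoding-time adjustment is given below (ours; the function names, the tokenizer interface, and the penalty value are assumptions rather than the authors' exact implementation): \begin{verbatim}
# Sketch (ours) of the decoding-time probability adjustment. `memory`
# maps an already-extracted entity string to its assigned argument
# role; `improbable` is the harvested set of conflicting role pairs.
def adjust(probs, current_role, memory, improbable, tokenizer,
           penalty=0.1):
    for entity, prev_role in memory.items():
        if (current_role, prev_role) in improbable \
           or (prev_role, current_role) in improbable:
            # Down-weight every token of the conflicting entity
            # (HuggingFace-style tokenizer interface assumed).
            for tok in tokenizer.encode(entity, add_special_tokens=False):
                probs[tok] *= penalty
    return probs / probs.sum()  # renormalize the distribution
\end{verbatim}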
\section{Introduction} An event is a specific occurrence involving participants (people, objects, etc.). Understanding events in text is necessary for building machine reading systems, as well as for downstream tasks such as information retrieval, knowledge base population, and trend analysis of real-life world events~\cite{sundheim-1992-overview}. Event extraction has long been studied as a local sentence-level task~\cite{grishman-sundheim-1996-message, ji2008refining, grishman_2019,LinACL2020}. This has driven researchers to focus on developing approaches for sentence-level predicate-argument extraction, which is problematic when events and their arguments are spread across multiple sentences -- in real-world cases, events are often described throughout a document.\footnote{In {\sc WikiEvents}~\cite{li-etal-2021-document}, nearly 40\% of events have an argument outside the sentence containing the trigger.} \begin{figure}[t] \centering \resizebox{\columnwidth}{!}{ \includegraphics{./figures/example.pdf} } \caption{Document-level event argument extraction.} \label{fig:task} \end{figure} In Figure~\ref{fig:task}, the excerpt of a news article describes two events, in the 3rd sentence (an arrest event triggered by ``captured'') and the 6th sentence (an attack event triggered by ``explosion''). S6 on its own contains little information about the arguments/participants of the explosion event, but together with the context of S3 and S7, we can find the informative arguments for the \textsc{attacker} role. In this work, we focus on the \textit{informative argument} extraction problem, which is more practical and requires a much broader view of cross-sentence context~\cite{li-etal-2021-document}. For example, although ``the brothers'' also refers to ``Tamerlan T.'' and ``Dzhokhar'' (and is closer to the trigger word), it should not be extracted as an informative argument. In recent years, there have been efforts focusing on event extraction beyond sentence boundaries with end-to-end learning~\cite{ebner-etal-2020-multi, du-thesis-2021, li-etal-2021-document}.
Most of this work still focuses on modeling each event independently~\cite{li-etal-2021-document} and ignores the global context, partly because of pretrained models' length limits and their weak attention to distant context~\cite{khandelwal-etal-2018-sharp}. \newcite{du-etal-2021-template} propose to model the dependency between events directly via the design of the generation output format, yet their method is not able to handle longer documents with more events -- whereas in real-world news articles there are often more than fifteen inter-related events (Table~\ref{tab:datastats}). In addition, previous work often overlooks the consistency between extracted event structures across the long document. For example, if one person has been identified as a \textsc{jailer} in an event, it is unlikely that the same person is an \textsc{attacker} in another event in the document (Figure~\ref{fig:task}), according to world event knowledge~\cite{Sap-etal-2019-atomic, yao-etal-2020-weakly}. In this paper, to tackle these challenges and obtain more consistent/coherent extraction results, we propose a document-level memory-enhanced training and decoding framework (Figure~\ref{fig:framework}). Using the idea of a dynamic memory store, it can leverage relevant and necessary context beyond the length constraint of end-to-end models, helping the model exploit previously generated/extracted event information both during training (implicitly) and during testing/decoding (explicitly). More specifically, during training, it retrieves the most similar event sequence in the memory store as additional input context to the model. In addition, it performs constrained decoding based on the memory store and the global knowledge-based argument pairs we harvest from the ontology. We conduct extensive experiments and analysis on the {\sc WikiEvents} corpus and show that our framework significantly outperforms previous methods based on either neural sequence labeling or text generation. We also demonstrate that the framework achieves larger gains over non-memory-based baseline models as the number of events in the document grows, and that it is more robust to manually designed adversarial examples. \section{Experiments} \subsection{Dataset and Evaluation Metrics} We conduct evaluations on the newly released \textsc{WikiEvents} dataset~\cite{li-etal-2021-document}. Compared to the ACE05\footnote{ http://www.itl.nist.gov/iad/mig/tests/ace/2005/} sentence-level extraction benchmark, {\sc WikiEvents} focuses on annotations of informative arguments and of multiple events per document in the document-level event extraction setting, and it is the only benchmark dataset for this purpose to date. It contains real-world news articles annotated with the DARPA KAIROS ontology. As shown in the dataset paper, the distance between informative arguments and the event trigger is 10 times larger than that between local/uninformative arguments (including pronouns) and event triggers. This underscores the need to model long-document context and event dependencies, making it a good benchmark for evaluating our proposed models. The statistics of the dataset are shown in Table~\ref{tab:datastats}. We use the same data split and preprocessing steps as in previous work. \begin{table}[h] \small \centering \begin{tabular}{l|ccc} \toprule & Train & Dev & Test \\ \midrule Documents & 206 & 20 & 20 \\ Sentences & 5262 & 378 & 492 \\ Avg. number of events & 15.73 & 17.25 & 18.25 \\
Avg. number of tokens & 789.33 & 643.75 & 712.00 \\ \bottomrule \end{tabular} \caption{Dataset Statistics} \label{tab:datastats} \end{table} \begin{table*}[t] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{l|cccccc|cccccc} \toprule & \multicolumn{6}{c|}{Argument Identification} & \multicolumn{6}{c}{Argument Classification} \\ Models & \multicolumn{3}{c}{Head Match} & \multicolumn{3}{c|}{Coref Match} & \multicolumn{3}{c}{Head Match} & \multicolumn{3}{c}{Coref Match} \\ & P & R & F1 & P & R & F1 & P & R & F1 & P & R & F1 \\ \midrule \begin{tabular}[c]{@{}l@{}}BERT-CRF \\ \cite{shi-lin-2019-BERTCRF}\end{tabular} & - & - & 52.71 & - & - & 58.12 & - & - & 43.29 & - & - & 47.70 \\ \begin{tabular}[c]{@{}l@{}}BART-Gen \\ \cite{li-etal-2021-document}\end{tabular} & 58.62 & 55.64 & 57.09 & 62.84 & 59.64 & 61.19 & 54.02 & 51.27 & 52.61 & 57.47 & 54.55 & 55.97 \\ \midrule Memory-based Training & 61.07 & 56.18 & 58.52 & 66.21 & 60.91 & 63.45 & 55.93 & 51.45 & 53.60 & 60.47 & 55.64 & 57.95 \\ \begin{tabular}[c]{@{}l@{}}\quad w/ knowledge \\ \quad constrained decoding\end{tabular} & 62.45 & 56.55 & {\bf 59.35} & 67.67 & 61.27 & {\bf 64.31}$^{*}$ & 57.23 & 51.82 & {\bf 54.39} & 61.85 & 56.00 & {\bf 58.78}$^{*}$ \\ \bottomrule \end{tabular}} \caption{Performance (\%) on the informative argument extraction task. $^{*}$ indicates statistical significance ($p<0.05$).} \label{tab:performance-general} \end{table*} As for evaluation, we use the same criteria as in previous work. We consider an argument span to be correctly identified if its offsets match any of the gold/reference informative arguments of the current event (i.e., argument identification), and correctly classified if its semantic role also matches (i.e., argument classification)~\cite{li-etal-2013-joint}. To judge whether an extracted argument span matches the gold standard, exact match is too strict: some correct candidates would be considered spurious (e.g., ``the 22 policemen'' and ``22 policemen'' do not match under the exact-match standard). Following~\newcite{huang2012modeling, li-etal-2021-document}, we therefore use head word match F1 ({\it Head F1}). We also report performance under a more lenient metric, {\it Coref F1}: the extracted argument span gets full credit if it is coreferential with the gold-standard arguments~\cite{ji-grishman-2008-refining}. Coreference links between informative arguments across the document are given in the gold annotations.
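To make the scoring concrete, here is a minimal sketch (ours; the head extraction is naively the last token, whereas the actual evaluation relies on syntactic heads and the gold coreference links): \begin{verbatim}
# Minimal sketch (ours) of head-word-match F1 between predicted and
# gold argument spans; a naive "last token" head is assumed here.
def head(span):
    return span.split()[-1].lower()

def head_match_f1(pred_spans, gold_spans):
    pred = [head(s) for s in pred_spans]
    gold = [head(s) for s in gold_spans]
    tp = sum(1 for h in pred if h in gold)  # true positives by head word
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

# head_match_f1(["22 policemen"], ["the 22 policemen"]) -> 1.0
\end{verbatim}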
\subsection{Results} We compare our framework to a number of competitive baselines. \newcite{shi-lin-2019-BERTCRF} is a popular baseline for semantic role labeling (predicate-argument prediction); it performs sequence labeling based on features automatically extracted by BERT~\cite{devlin-etal-2019-bert} and uses Conditional Random Fields~\cite{DBLP:conf/icml/LaffertyMP01} for structured prediction ({\bf BERT-CRF}). \newcite{li-etal-2021-document} propose a conditional neural text generation model for the document-level argument extraction problem; it handles each event in isolation ({\bf BART-Gen}). We denote our proposed memory-enhanced training with retrieved additional context as {\bf Memory-based Training}. We also present the argument pair constrained decoding results separately to see both components' contributions.\footnote{All significance tests for F-1 are computed using the paired bootstrap procedure over 5k samples of generated sequences~\cite{berg-etal-2012-empirical}.} In Table~\ref{tab:performance-general}, we present the main results for document-level informative argument extraction. The score for argument identification is strictly higher than that for argument classification, since it only requires span offsets to match. We observe that: \begin{itemize} \item The neural generation-based models (BART-Gen and our framework) are superior on this document-level informative argument extraction problem compared to the sequence labeling-based approaches. In addition, generation-based methods only require one pass, as compared to span enumeration-based methods~\cite{wadden-etal-2019-entity, du-cardie-2020-event}. \item Compared to raw BART-Gen, our memory-based training -- leveraging the most relevant previously extracted event information -- substantially increases precision (P) and F-1 scores, with smaller but notable improvements in recall, especially under Coref Match. \item With the additional argument pair constrained decoding, there is a further significant improvement in precision and F-1 scores. This can mainly be attributed to two factors: (I) during constrained decoding, we rely more on the ``improbable arg. pairs'' as a checklist to make sure the same entity is not generated for conflicting argument roles in the same document, and only utilize a few top ``probable arg. pairs'' to promote decoding of frequently appearing entities; (II) if an entity has been decoded in a previous event A by mistake, then under the argument pair rule it will not be decoded in event B even if it is correct -- which might hurt recall. \end{itemize} \begin{table}[t] \small \centering \begin{tabular}{l|cc} \toprule & \multicolumn{2}{c}{Arg. Classification} \\ & Head M. & Coref. M. \\ \midrule BART-Gen & 50.00 & 53.12 \\ Memory-based Training & 50.75 & 53.73 \\ \begin{tabular}[c]{@{}l@{}}Our Best Model (w/ knowledge \\ constrained decoding)\end{tabular} & 53.73 & 56.72 \\ \bottomrule \end{tabular} \caption{Performance (\%) on adversarial examples.} \label{tab:adv} \end{table} \paragraph{Robustness to Adversarial Examples} To test how the models react to specially designed adversarial examples, we select a quarter of the documents from the original test set and add one more adversarial event to each of them by adding a few new sentences. The additional event is designed to ``attract'' the model into making mistakes that violate our global knowledge-based argument pair rules.\footnote{In our open-sourced repository, readers can find our designed adversarial examples under the data folder.} An excerpt from one example: \begin{quote} Tandy, then 19, {\bf talks} to his close friend, Stephen Silva, about ... Tandy and Silva both {\bf died} as lifeguards together at the Harvard pool. Later a kid was {\bf killed} by a Stephen Silva-lookalike guy. \end{quote} In this example, we know ``Stephen Silva'' died in the second event, ``Life.Die'', triggered by {\bf died}. Although he is also mentioned in the last sentence, ``Stephen Silva'' should not be extracted as the {\sc Killer}. In Table~\ref{tab:adv}, we summarize the F-1 scores for argument classification.
First, we see that on the adversarial examples all performance scores drop compared to the normal setting (Table~\ref{tab:performance-general}), showing that it is harder to maintain robustness in this setting. Our best model with argument pair constrained decoding substantially outperforms both BART-Gen and our memory-based training model. The gap is larger than in the general evaluation setting, which shows the advantage of {\it explicitly} enforcing the reasoning/constraint rules.
\section{Examples of Argument Pairs} We list a couple of improbable argument pairs from the ``checklist''.
\begin{table}[h] \resizebox{\textwidth}{!}{ \begin{tabular}{ll|ll} \toprule \multicolumn{2}{c|}{Argument 1} & \multicolumn{2}{c}{Argument 2} \\ \midrule Justice.Sentence.Unspecified & JudgeCourt & Life.Die.Unspecified & Victim \\ Justice.Sentence.Unspecified & Defendant & Life.Die.Unspecified & Victim \\ Control.ImpedeInterfereWith.Unspecified & Impeder & Justice.ArrestJailDetain.Unspecified & Jailer \\ Contact.RequestCommand.Unspecified & Recipient & Justice.ArrestJailDetain.Unspecified & Jailer \\ Life.Injure.Unspecified & Injurer & Transaction.ExchangeBuySell.Unspecified & Giver \\ Justice.TrialHearing.Unspecified & Defendant & Transaction.ExchangeBuySell.Unspecified & Giver \\ Justice.TrialHearing.Unspecified & Defendant & Transaction.ExchangeBuySell.Unspecified & Recipient \\ Conflict.Attack.DetonateExplode & Attacker & Contact.Contact.Broadcast & Communicator \\ Conflict.Attack.Unspecified & Attacker & Contact.Contact.Broadcast & Communicator \\ Conflict.Attack.DetonateExplode & Attacker & Contact.ThreatenCoerce.Unspecified & Communicator \\ Conflict.Attack.Unspecified & Attacker & Contact.ThreatenCoerce.Unspecified & Communicator \\ \bottomrule \end{tabular}} \caption{Examples of improbable argument pairs.} \end{table} \section{Hyperparameters Used in the Experiments} \begin{table}[h] \begin{tabular}{l|l} \toprule train batch size & 2 \\ eval batch size & 1 \\ learning rate & 3e-5 \\ accumulate grad batches & 4 \\ training epochs & 5 \\ warmup steps & 0 \\ weight decay & 0 \\ \# gpus & 1 \\ \bottomrule \end{tabular} \caption{Hyperparameters.} \end{table} \end{document}
{ "timestamp": "2022-09-20T02:22:27", "yymm": "2209", "arxiv_id": "2209.08679", "language": "en", "url": "https://arxiv.org/abs/2209.08679" }
\section{Introduction} In recent control applications, demands for sophisticated machine behavior and for machine-to-human contact have increased. Because these problems require machine behavior to stay within a range that is safe for both the machine itself and humans, the focus of control design is shifting from stability to {\it safety}. The transition is theoretically realized by the change from stabilization based on a control Lyapunov function to safety-critical control based on a control barrier function (CBF). In the last few years, research results on CBFs have been actively reported \cite{ames2017,ames2019,furusawa2021,nakamura2019,nguyen2016,shimizu2022,tezuka2022}. In the context of a CBF, the control objective is to render a specific subset of the state space, called a {\it safe set}, invariant forward in time (namely, forward invariance \cite{ames2019}). This policy is welcomed in various control application problems; in practice, it has promoted the simple realization of seemingly complicated commands \cite{ames2019,shimizu2022} and human assist control \cite{furusawa2021,nakamura2019,tezuka2022}. A safety-critical control law should be robust, because the system's state may slightly escape from a safe set due to modeling and measurement errors. If an initial state is outside a safe set, the safety-critical control should make the state quickly return to the safe set. Such control is designed using a CBF whose domain covers both the inside and the outside of a safe set. Unfortunately, this is difficult with a reciprocal control barrier function (RCBF), whose value diverges toward the boundary of a safe set. Therefore, a zeroing control barrier function (ZCBF), whose value tends to zero toward the boundary of a safe set, has been the main focus in recent years \cite{ames2017,ames2019,tezuka2022}. On the other hand, it is also desirable for a safe set to remain invariant even when violent and irregular disturbances influence the system. Recently, a few reports on CBFs for stochastic systems have been made in \cite{clark2021} and \cite{wang2021}. The literature implies that a ZCBF for a stochastic system is hard to derive from an RCBF in the same way as in the deterministic case. The reason is that a functional of the solution to a stochastic system is not invariant under a scaling operation \cite{nishimura2018automatica}. Moreover, the existence of an invariant set for a stochastic system is quite restrictive and generally depends on the properties of the diffusion term \cite{nishimura2016scl}. Suppose a control input exists that achieves the invariance of a safe set regardless of the diffusion coefficient. In that case, the control input generally diverges toward the boundary of the safe set \cite{wang2021}. Based on the above discussion, we propose a way of analyzing a CBF for a stochastic system via the following steps. First, in Section~\ref{sec:preliminary}, we define the mathematical notations, the target system, and the notion of a global solution used in this paper. In Section~\ref{sec:motivation}, by considering a simple example, we confirm that it is generally difficult for a stochastic system to keep a safe set invariant with probability one. In Section~\ref{sec:main}, we first propose an {\it almost sure RCBF (AS-RCBF)} and an {\it almost sure ZCBF (AS-ZCBF)} ensuring the invariance of a safe set with probability one.
Second, we design a safety-critical controller ensuring the conditions of an AS-RCBF and an AS-ZCBF, and we show that the controller diverges toward the boundary of the safe set. Third, we construct a new type of {\it stochastic ZCBF} that clarifies the probability of the invariance of a safe set and shows the convergence of a specific expectation related to the attractiveness of the safe set from outside the set. We then discuss the conditions under which an AS-ZCBF and a stochastic ZCBF ensure safety under changes of the diffusion coefficient. In Section~\ref{sec:example}, we confirm the usefulness of the proposed functions and the control design via simple examples with numerical simulations. Section~\ref{sec:conclusion} concludes this paper. \section{Preliminary}\label{sec:preliminary} \subsection{Notations} Let $\R^n$ be an $n$-dimensional Euclidean space and, in particular, $\R := \R^1$. The Lie derivative of a smooth mapping $W: \R^n \to \R$ along a mapping $F = (F_1,\ldots,F_q): \R^n \to \R^{n \times q}$ with $F_1,\ldots,F_q: \R^n \to \R^n$ is denoted by \begin{align} \lie{F}{W}(x) = \left( \pfrac{W}{x} F_1(x), \ldots, \pfrac{W}{x} F_q(x) \right). \end{align} For constants $a,b>0$, a continuous mapping $\alpha:[-b,a] \to \R$ is said to be an extended class $\mathcal{K}$ function if it is strictly increasing and satisfies $\alpha(0)=0$. The boundary of a set $\mathcal{A}$ is denoted by $\partial \mathcal{A}$. Let $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t \ge 0},\mathbb{P})$ be a filtered probability space, where $\Omega$ is the sample space, $\mathcal{F}$ is the $\sigma$-algebra collecting all events, $\{\mathcal{F}_t\}_{t \ge 0}$ is a filtration of $\mathcal{F}$, and $\mathbb{P}$ is a probability measure. In the filtered probability space, $\pr{A|B}$ denotes the probability of an event $A$ under a condition $B$, $\ex{y|B}$ denotes the expectation of a random variable $y$ under a condition $B$, and $w = (w_1,\ldots,w_d)^T: [0,\infty) \to \R^d$ is a $d$-dimensional standard Wiener process. For a Markov process $x(t) \in \R^n$ with an initial state $x(0)=x_0$, we often use the notations $\pri{x_0}{A} = \pr{A|x(0)=x_0}$ and $\exi{x_0}{y}=\ex{y|x(0)=x_0}$. The minimum of $a,b \in \R$ is denoted by $a \wedge b := \min(a,b)$. The differential form of an It\^o integral of $f:\R^n \to \R^n$ over $w$ is represented by $f(x) dw$. The trace of a square matrix $X$ is denoted by $\mathrm{tr}[X]$. \subsection{Target system, the related functions, and a global solution} In this subsection, we describe the target system, the related functions frequently used throughout the paper, and the definition of a solution in global time. First, let us consider the deterministic system \begin{align}\label{eq:sys-det-gen} \dot{x} & = f(x) + g(x) (u_o(x) + u), \end{align} where $x \in \R^n$ is the state vector, $u_{o}:\R^n \to \R^m$ is a pre-input assumed to be a continuous state feedback, $u \in U \subset \R^m$ is a compensator for safety-critical control, where $U$ denotes an acceptable control set, and the maps $f: \R^n \to \R^n$ and $g: \R^n \to \R^{n \times m}$ are assumed to be locally Lipschitz. The initial condition is denoted by $x_0 =x(0) \in \R^n$. Note that the local Lipschitz condition on $f$ and $g$ implies the existence of $T>0$ such that $x: [0,T] \to \R^n$ is a maximal solution to the system.
The main target of this paper is the following stochastic system \begin{align}\label{eq:sys-sto-gen} dx &= \{ f(x) + g(x) (u_o(x) + u) \} dt + \sigma(x) dw, \end{align} where $x,u,u_o,f,g$ and $x_0$ satisfy the same conditions as in system \eqref{eq:sys-det-gen}, $w:[0,\infty) \to \R^d$ is a standard Wiener process, and the mapping $\sigma: \R^n \to \R^{n \times d}$ is also assumed to be locally Lipschitz. The local Lipschitz condition on $f$, $g$ and $\sigma$ implies that there exists a stopping time $T>0$ such that $x: [0,T] \to \R^n$ is a maximal solution to the system. For simplicity, we further define some functions. For a $C^2$ mapping $y:M \to \R$ with some $M \subset \R^n$, let \begin{align} &L^D_{f,g}(u,u_o(x),y(x)) := (\lie{f}{y})(x) + (\lie{g}{y})(x) (u+u_o(x)),\\ &L^I_{\sigma}(y(x)) := \frac12 \mathrm{tr} \left[ \sigma(x) \sigma(x)^T \left[ \pfrac{}{x} \left[ \pfrac{y}{x}\right]^T \right](x) \right], \\ &\mathcal{L}_{f,g,\sigma}(u,u_o(x),y(x)) := L^D_{f,g}(u,u_o(x),y(x)) + L^I_{\sigma}(y(x)) \end{align} and \begin{align}\label{eq:H} H_{\sigma}(h(x)) := \frac12 \lie{\sigma}{h}(x) (\lie{\sigma}{h}(x))^T. \end{align} For a mapping $v:M \to (0,\infty)$ smooth in $M \subset \R^n$, we often use the relationship \begin{align}\label{eq:rel-bh} -(v(x))^{-2} L^D_{f,g}(u,u_o(x),v(x)) = L^D_{f,g}(u,u_o(x),(v(x))^{-1}). \end{align} Moreover, based on \cite{nishimura2018automatica}, we introduce the following notion concerning the existence of a global solution in forward time for the system \eqref{eq:sys-sto-gen}: \begin{definition}[FIiP and FCiP]\label{def:fcip} Let an open subset $M \subset \R^n$ and the system \eqref{eq:sys-sto-gen} be considered with $u=\phi(x)$, where $\phi: M \to \R^m$ is a continuous mapping. If there exist a proper $C^2$ mapping $Y: M \to [0,\infty)$ and a continuous mapping $\psi: [0,\infty) \times (0,1) \to [0,\infty)$ such that, for every $x_0 \in M$, \begin{align}\label{eq:fcip} \pri{x_0}{\forall t \in [0,l],\ Y(x(t)) \le \psi(l,\epsilon)} \ge 1- \epsilon \end{align} holds for all $l \in [0,\infty)$ and all $\epsilon \in (0,1)$, then the system is said to be forward invariant in probability (FIiP) in $M$. In addition, if $M =\R^n$ and $Y(\cdot)=|\cdot|$, the system is said to be forward complete in probability (FCiP). \eod \end{definition} \begin{theorem}\label{thm:fcip} Let us consider the system \eqref{eq:sys-sto-gen}, an open subset $M \subset \R^n$, a continuous mapping $\phi: M \to \R^m$ and an initial condition $x_0 \in M$. If there exists a proper $C^2$ mapping $Y: M \to [0,\infty)$ such that \begin{align} \mathcal{L}_{f,g,\sigma}(\phi(x),u_o(x),Y(x)) \le c_1 Y(x) + c_2 \end{align} is satisfied for all $x \in M$ and for some $c_1 \in [0,\infty)$ and $c_2 \in [0,\infty)$, then the system with $u=\phi(x)$ is FIiP in $M$. In addition, if $M=\R^n$, the system is FCiP. \eot \end{theorem}
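As a quick illustration of Theorem~\ref{thm:fcip} (an example we add here for concreteness; it is not taken from \cite{nishimura2018automatica}), consider the scalar system $dx = -x \, dt + c \, dw$ with a constant $c \in \R$, i.e., $f(x)=-x$, $g \equiv 0$ and $\sigma \equiv c$ in \eqref{eq:sys-sto-gen}. Taking the proper $C^2$ mapping $Y(x)=x^2$ on $M=\R$, we obtain, for any compensator $\phi$,
\begin{align}
\mathcal{L}_{f,g,\sigma}(\phi(x),u_o(x),Y(x)) = -2x^2 + c^2 \le 0 \cdot Y(x) + c^2,
\end{align}
so the condition of Theorem~\ref{thm:fcip} holds with $c_1=0$ and $c_2=c^2$; since $M=\R$, the system is FCiP.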
Definition~\ref{def:fcip} and Theorem~\ref{thm:fcip} are the same as (C2) and Proposition~17 in \cite{nishimura2018automatica}, respectively, except for two differences: in this paper, $x$ is restricted to $M$ and $Y$ is allowed to be non-positive-definite, whereas in the literature $x$ ranges over $\R^n$ and $Y$ is required to be positive definite. Because $M$ is an open set and $Y$ is non-negative and proper, $Y(x) \to \infty$ always holds as $x \to \partial M$ \cite{nakamura2019}. The positive definiteness of $Y$ is required only for stability analysis, and it can be omitted when analyzing just forward invariance and completeness. Therefore, Theorem~\ref{thm:fcip} is straightforwardly proven via the proof of Proposition~17 in \cite{nishimura2018automatica} by replacing $\R^n$ with $M$. Note that FIiP in $M$ implies that the probability of $Y(x) \to \infty$ is infinitesimal; in other words, the solution $x$ starting at $x_0 \in M$ stays in $M$ with probability at least $1-\epsilon$ for arbitrarily small $\epsilon$. \section{Motivating Example}\label{sec:motivation} \subsection{An example of safety-critical control for a deterministic system} First, let us consider a safety-critical control problem via the following theorem: \begin{theorem}[\cite{ames2019}]\label{thm:ames-zcbf} Assume that a set $C \subset \R^n$ is a superlevel set of a continuously differentiable mapping $h: C \to \R$ which satisfies $h(x) \ge 0$ for all $x \in C$, $h(x)=0$ for any $x \in \partial C$ and $\partial h/ \partial x \neq 0$ for all $x \in C$. For the system \eqref{eq:sys-det-gen}, if there exists a compensator $u=\phi(x)$ such that (there exists a global solution in forward time in $\R^n$ and) there exists an extended class $K_\infty$ function $\bar{\alpha}$ satisfying \begin{align}\label{eq:con-cbf-ames} L^D_{f,g}(\phi(x),u_o(x),h) \ge -\bar{\alpha}(h(x)), \end{align} then any solution $x(t)$ starting at $x_0 \in C$ satisfies $x(t) \in C$ for all $t \in [0,\infty)$. \eot \end{theorem} To simplify the discussion, we omit the convergence to $C$ from $D \setminus C$ with $C \subset D \subset \R^n$, which is treated in the original literature \cite{ames2019}. As an example, we consider \begin{align}\label{eq:sys-det} \dot{x} = u_o(x) + u, \end{align} where $x \in \R$, $u_o: \R \to \R$, $u \in U =\R$, and $x(0)=x_0 > \alpha > 0$. Now we take the safe set as $C = [\alpha,\infty)$; that is, we aim to design $u$ so that $x(t) \in C$ is satisfied for all $x_0 \in C$ and all $t \ge 0$. If, for $h_s(x)=x-\alpha$ and $\gamma>0$, we design $u=\phi_{h_s}$, where \begin{align} \phi_{h_s}(x) := \left\{ \begin{array}{ll} -u_o(x) -\gamma h_s(x),& u_o(x) + \gamma h_s(x) < 0 \\ 0, & u_o(x) + \gamma h_s(x) \ge 0, \end{array}\right. \end{align} then we obtain \begin{align} L^D_{0,1}(\phi_{h_s}(x),u_o(x),h_s(x)) \ge -\gamma h_s(x), \end{align} which satisfies \eqref{eq:con-cbf-ames} with $h=h_s$ and $\bar{\alpha}(h_s)=\gamma h_s$. Therefore, according to Theorem~\ref{thm:ames-zcbf}, $C$ is rendered safe by $u=\phi_{h_s}$. The same result as above is also derived by using the reciprocal function of $h_s$: \begin{align}\label{eq:rcbf} B_s(x) := (h_s(x))^{-1}=(x-\alpha)^{-1}, \end{align} provided that the function satisfies \begin{align}\label{eq:ex-mot-rcbf} L^D_{0,1}(\phi_{h_s}(x),u_o(x),B_s(x)) \le \gamma B_s(x) \end{align} for $x \in C \setminus \partial C$; this condition is somewhat different from an RCBF in \cite{ames2019} and similar to an extended RCBF in \cite{furusawa2021,nakamura2019}. (Strictly speaking, an extended RCBF further requires $B_s$ to be proper, and it is defined for a time-varying system.) What is important is that the ZCBF $h_s(x)$ is defined on the whole of $\R$, while $B_s(x)$ is finite only in the interior $C\setminus \partial C$. Therefore, $h_s(x)$ is generally more useful from the viewpoint of robust control, because modeling and measurement errors often place the initial value outside of $C$.
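For concreteness, we spell out a short verification of \eqref{eq:ex-mot-rcbf} (this computation is our addition; it follows directly from the definitions above). Since $\partial B_s/\partial x = -(x-\alpha)^{-2} = -(B_s(x))^2$, on the branch where $u_o(x)+\gamma h_s(x) < 0$ we have
\begin{align}
L^D_{0,1}(\phi_{h_s}(x),u_o(x),B_s(x)) = -(B_s(x))^2 \left( u_o(x)+\phi_{h_s}(x) \right) = \gamma (B_s(x))^2 h_s(x) = \gamma B_s(x),
\end{align}
while on the branch where $u_o(x)+\gamma h_s(x) \ge 0$ we have $L^D_{0,1}(0,u_o(x),B_s(x)) = -(B_s(x))^2 u_o(x) \le \gamma (B_s(x))^2 h_s(x) = \gamma B_s(x)$; hence \eqref{eq:ex-mot-rcbf} holds on $C \setminus \partial C$.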
\subsection{Trying extension of safety-critical control to a stochastic system} In this subsection, we try to extend the discussion in the previous subsection to a stochastic system. A stochastic version of a CBF is discussed in \cite{clark2021} and \cite{wang2021}. Roughly speaking, Theorem~3 in \cite{clark2021} claims that, considering the stochastic system \eqref{eq:sys-sto} with $x_0 \in C$, if the condition \eqref{eq:con-cbf-ames} is replaced by \begin{align}\label{eq:con-zcbf-clark} \mathcal{L}_{f,g,\sigma}(\phi(x),u_o(x),h(x)) \ge -h(x) \end{align} for $x \in C \setminus \partial C$, then any solution satisfies $x(t) \in C$ for all $t \ge 0$ {\it with probability one}. However, another previous result in \cite{wang2021} implies that the first exit time of $x(t)$ from $C$ is finite with non-zero probability. Here, we consider the answer to this apparent contradiction through safety-critical control for the stochastic system \begin{align}\label{eq:sys-sto} dx = (u_o(x) + u)dt + c dw, \end{align} which has the same form as \eqref{eq:sys-det} except for the diffusion term $c dw$, where $c \neq 0$ and $w: [0,\infty) \to \R$ is a standard Wiener process. As in the previous subsection, we consider $h_s(x)=x-\alpha$ as a candidate ZCBF. Because the Hessian of the function is always zero, we obtain \begin{align} L^I_{c}(h_s(x))= 0. \end{align} This implies that setting $u=\phi_{h_s}$ results in \begin{align} \mathcal{L}_{0,1,c} (\phi_{h_s}(x),u_o(x),h_s(x)) \ge - \gamma h_s(x), \end{align} which satisfies the condition \eqref{eq:con-zcbf-clark} with $\gamma=1$. However, a solution to the resulting system \begin{align} dx = \left\{ \begin{array}{ll} -\gamma(x-\alpha)dt + c dw,& u_o(x) + \gamma h_s(x) < 0,\\ u_o(x) dt + c dw,& u_o(x) + \gamma h_s(x) \ge 0, \end{array}\right. \end{align} has a non-zero probability of escaping $C$ even if $x_0 \in C$; for example, if $u_o=-1$ and $x_0 = \alpha$, the state equation instantly satisfies $dx = c dw$. This example implies that the condition \eqref{eq:con-zcbf-clark} ensures that $C$ is safe only ``in probability''. On the other hand, considering $B_s(x)=(x-\alpha)^{-1}$, the Hessian does not vanish, and we obtain \begin{align} L^I_{c}(B_s(x)) = c^2 (x-\alpha)^{-3}; \end{align} hence, we can expect that $B_s$ yields an answer to the control problem different from that of $h_s$. Letting \begin{align} &\Phi_s(x) = -\gamma h_s(x) + c^2 B_s(x) \end{align} and \begin{align}\label{eq:ctrl-zcbf-exam} \phi_{B_s}(x) &= \left\{ \begin{array}{ll} -u_o(x) + \Phi_s(x), & u_o(x) < \Phi_s(x), \\ 0, & u_o(x) \ge \Phi_s(x), \end{array}\right. \end{align} the compensator $u=\phi_{B_s}(x)$ yields \begin{align} \mathcal{L}_{0,1,c} (\phi_{B_s}(x),u_o(x),B_s(x)) \le \gamma B_s(x). \end{align} Because the compensator diverges at $\partial C$, it may have the potential to confine the solution $x$ to $C$ with probability one. The answer will be given in a later section.
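As a quick numerical illustration (our addition; the parameter values are arbitrary assumptions), the following Euler--Maruyama sketch simulates \eqref{eq:sys-sto} under the two compensators $\phi_{h_s}$ and $\phi_{B_s}$; with a finite step size, sample paths under $\phi_{h_s}$ typically cross $\partial C$, while $\phi_{B_s}$ pushes them back near the boundary. \begin{verbatim}
import numpy as np

# Euler--Maruyama sketch (ours) for dx = (u_o(x)+u)dt + c dw with the
# safe set C = [alpha, inf); parameter values are arbitrary assumptions.
rng = np.random.default_rng(0)
alpha, gamma, c, dt, T, x0 = 1.0, 1.0, 0.5, 1e-4, 10.0, 1.5
u_o = lambda x: -1.0

def phi_hs(x):  # ZCBF-based compensator (bounded near the boundary)
    v = u_o(x) + gamma * (x - alpha)
    return -v if v < 0.0 else 0.0

def phi_Bs(x):  # RCBF-based compensator (diverges as x -> alpha)
    Phi = -gamma * (x - alpha) + c**2 / (x - alpha)
    return -u_o(x) + Phi if u_o(x) < Phi else 0.0

for name, phi in [("phi_hs", phi_hs), ("phi_Bs", phi_Bs)]:
    x, exited = x0, False
    for _ in range(int(T / dt)):
        x += (u_o(x) + phi(x)) * dt + c * np.sqrt(dt) * rng.standard_normal()
        if x <= alpha:  # discretization can still step across the boundary
            exited = True
            break
    print(name, "exited C:", exited)
\end{verbatim}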
The difference between $\phi_{h_s}$ and $\phi_{B_s}$ arises because a functional of a solution to a stochastic system is not invariant under a scaling operation \cite{nishimura2018automatica}. More precisely, for a solution $x$ to \eqref{eq:sys-sto}, a non-zero, bounded and smooth mapping $v: \R \to (0,\infty)$ and a smooth mapping $s: \R \to \R$ satisfy \begin{align}\label{eq:ls} \mathcal{L}_{0,1,c}(u,u_o(x),s(v(x))) = &\pfrac{s}{v}(x) \mathcal{L}_{0,1,c}(u,u_o(x),v(x))+ \frac12 \pfrac{^2 s}{v^2} ((\lie{c}{v})(x))^2. \end{align} If we ``formally'' consider $s(v)=1/v$ and $v=B_s$, we obtain \begin{align}\label{eq:lbi} \mathcal{L}_{0,1,c}(u,u_o(x),s(B_s(x))) = u + u_o(x) + \frac{c^2}{x-\alpha}; \end{align} thus, $u=\phi_{B_s}(x)$ results in $\mathcal{L}_{0,1,c}(u,u_o(x),s(B_s(x))) = -\gamma (x-\alpha)$. However, a function $h':=s(B_s)$ has to satisfy \begin{align} \pfrac{h'}{x} = 1,\quad \pfrac{^2 h'}{x^2} = \frac{2}{x-\alpha}, \end{align} which cannot hold simultaneously; that is, such an $h'(x)$ does not exist. For a stochastic system, it is generally hard for a subset of the state space to be (almost surely) invariant because the diffusion coefficient is required to vanish at the boundary of the subset\footnote{The details are discussed in \cite{nishimura2016scl}, which aims to make the state of a stochastic system converge to the origin with probability one while confining the state to a specific subset with probability one; this aim is similar to that of a control barrier function.}. To avoid this restrictive condition on the coefficient, we should design a state-feedback law whose magnitude becomes large (in general, diverges) at the boundary of the subset so that the effect of the law overcomes the disturbance term. Moreover, a functional ensuring the (almost sure) invariance of the subset will typically diverge at the boundary of the set, as is the case for a global stochastic Lyapunov function \cite{khasminskii2012,kushner,mao2007} and an RCBF. The above discussion also implies that if a ZCBF is defined for a stochastic system and ensures ``safety with probability one,'' the favorable robustness property of the ZCBF is likely to be lost, because the associated state-feedback law generally diverges at the boundary of the safe set. Hence, the previous work \cite{wang2021} proposes a ZCBF together with an analysis of the exit time of the state from a safe set. In the next section, we consider another way to construct a ZCBF for a stochastic system; in particular, we propose two types of ZCBFs: an almost sure ZCBF (AS-ZCBF) and a stochastic ZCBF, whose conditions differ somewhat from those of the ZCBFs in \cite{clark2021} and \cite{wang2021}. Then, in Section~\ref{sec:example}, we confirm the usefulness of our ZCBFs for control design through a few examples with numerical simulations. \section{Main Claim}\label{sec:main} \subsection{Definitions of a safe set and safety for a stochastic system} Let us define a safe set $\chi \subset \R^n$ as an open set for which there exists a mapping $h:\R^n \to \R$ satisfying all of the following conditions: \begin{description} \item[(Z1)] $h(x)$ is $C^2$. \item[(Z2)] $h(x)$ is proper in $\chi$; that is, for any $L \in [0,\infty)$, any superlevel set $\{x \in \chi | h(x) \ge L\}$ is compact. \item[(Z3)] The closure of $\chi$ is the $0$-superlevel set of $h(x)$; that is, \begin{align} &\chi = \{x \in \R^n | h(x) > 0\},\\ &\partial \chi = \{ x \in \R^n | h(x)=0\}, \end{align} are both satisfied. \end{description} If needed, (Z2) is sometimes replaced by the following: \begin{description} \item[(Z2)'] $h(x)$ is proper in $\R^n$. \end{description} We also note that the reciprocal function $B(x) := (h(x))^{-1}$ is often used in what follows. Let $p \in [0,1]$ and $\chi' \subset \chi$. System \eqref{eq:sys-sto-gen} is said to be {\it safe in} $(\chi',\chi,p)$ if, for any $x_0 \in \chi'$, \begin{align} \pri{x_0}{x(t) \in \chi, \forall t \in [0,\infty)} \ge p \end{align} is satisfied. \subsection{CBFs ensuring almost sure safety} In this subsection, we describe sufficient conditions for $h(x)$ and $B(x)$ to ensure that the target system is FIiP in $\chi$. \begin{definition}[AS-RCBF] Let \eqref{eq:sys-sto-gen} be considered with $\chi$ and $h(x)$ satisfying (Z1), (Z2) and (Z3).
If there exist a continuous mapping $\phi: \chi \to \R^m$ and a constant $\gamma>0$ such that, for all $x \in \chi$, \begin{align}\label{eq:con-rcbf} \mathcal{L}_{f,g,\sigma}(\phi(x),u_o(x),B(x)) &\le \gamma B(x) \end{align} is satisfied, then $B(x)$ is said to be {\it an almost sure reciprocal control barrier function (AS-RCBF)}. \eod \end{definition} \begin{theorem}\label{thm:rcbf} If there exists an AS-RCBF $B(x)$ for the system \eqref{eq:sys-sto-gen}, then it is FIiP in $\chi$. \eot \end{theorem} \begin{proof} Applying the given condition \eqref{eq:con-rcbf} and (Z1)--(Z3) to Theorem~\ref{thm:fcip}, the system \eqref{eq:sys-sto-gen} is ensured to be FIiP in $\chi$. \end{proof} The above condition \eqref{eq:con-rcbf} is more relaxed than the condition of a stochastic RCBF shown in \cite{clark2021}, and it is similar to the condition of an extended RCBF for a deterministic system proposed in \cite{furusawa2021,nakamura2019}. The dual notion of the AS-RCBF is defined as follows. \begin{definition}[AS-ZCBF] Let \eqref{eq:sys-sto-gen} be considered with $\chi$ and $h(x)$ satisfying (Z1), (Z2) and (Z3). If there exist a continuous mapping $\phi: \chi \to \R^m$ and a constant $\gamma>0$ such that, for all $x \in \chi$, \begin{align}\label{eq:con-zcbf} \mathcal{L}_{f,g,\sigma}(\phi(x),u_o(x),h(x)) \ge &-\gamma h(x) + L^I_{\sigma}(h(x))+ (h(x))^2 L^I_{\sigma}(B(x)) \end{align} is satisfied, then $h(x)$ is said to be {\it an almost sure zeroing control barrier function (AS-ZCBF)}. \eod \end{definition} \begin{theorem}\label{thm:zcbf} If there exists an AS-ZCBF $h(x)$ for system \eqref{eq:sys-sto-gen}, then it is FIiP in $\chi$. \eot \end{theorem} \begin{proof} First, the condition \eqref{eq:con-zcbf} is transformed into \begin{align}\label{eq:con-zcbf2} L^D_{f,g}(\phi(x),u_o(x),h(x)) \ge -\gamma h(x) + (h(x))^2 L^I_{\sigma}(B(x)). \end{align} Using \eqref{eq:rel-bh} with $v=B$ and $M = \chi = \{ x \in \R^n | h(x)>0 \}$, the inequality is further transformed into \begin{align} -(B(x))^{-2} L^D_{f,g}(\phi(x),u_o(x),B(x)) \ge &-\gamma (B(x))^{-1} + (B(x))^{-2} L^I_{\sigma}(B) \end{align} for $x \in M$. Thus, we obtain \begin{align} -(B(x))^{-2} \mathcal{L}_{f,g,\sigma}(\phi(x),u_o(x),B(x)) \ge -\gamma (B(x))^{-1}, \end{align} which results in \eqref{eq:con-rcbf}. Therefore, the existence of an AS-ZCBF $h(x)$ ensures that the system \eqref{eq:sys-sto-gen} is FIiP in $\chi$ via Theorem~\ref{thm:rcbf}. \end{proof} Next, we show a control design of $u=\phi(x)$ using an AS-RCBF and an AS-ZCBF. \begin{corollary}\label{cor:ctrl-zcbf} Consider the system \eqref{eq:sys-sto-gen}, the safe set $\chi$, and $h(x)$ and $B(x)$ satisfying all of conditions (Z1)--(Z3). Let \begin{align} &I(u_o(x),h(x)) := L^D_{f,g}(0,u_o(x),h(x)) \label{eq:I} \\ &J(h(x)) := -\gamma h(x) + (h(x))^2 L^I_{\sigma}(B(x)) \label{eq:J} \end{align} and \begin{align} &\phi_N(x) := \left\{ \begin{array}{ll} -\frac{I(u_o(x),h(x))-J(h(x))}{\lie{g}{h}(x) (\lie{g}{h}(x))^T} (\lie{g}{h}(x))^T, & I < J \cap \lie{g}{h} \neq 0\\ 0 , & I \ge J \cup \lie{g}{h} = 0 \end{array}\right. \end{align} be designed. If \begin{align}\label{eq:ctrl-con} \lie{f}{h}(x) > - \gamma h(x) + (h(x))^2 L^I_\sigma(B(x)) \end{align} holds for $\lie{g}{h} = 0$, then the compensator $u=\phi_N(x)$ ensures that the system \eqref{eq:sys-sto-gen} is FIiP in $\chi$. \eot \end{corollary}
\begin{proof} First, we consider the case of $\lie{g}{h} \neq 0$. If $I < J$, we obtain \begin{align} \mathcal{L}_{f,g,\sigma}&(\phi_N(x),u_o(x),h(x)) = -\gamma h(x) +(h(x))^2 L^I_\sigma (B(x)) + L^I_\sigma (h(x)), \end{align} and if $I \ge J$, we obtain \begin{align} \mathcal{L}_{f,g,\sigma}(\phi_N(x),u_o(x),h(x)) &= I(u_o(x),h(x)) + L^I_\sigma (h(x)) \nonumber \\ &\ge J(h(x)) + L^I_\sigma (h(x)) \nonumber \\ &= -\gamma h(x) +(h(x))^2 L^I_\sigma (B(x)) + L^I_\sigma (h(x)). \end{align} Therefore, regardless of whether $I < J$ or $I \ge J$, the inequality \eqref{eq:con-zcbf} is satisfied. Then, we consider the other case, i.e., $\lie{g}{h} = 0$. The additional condition \eqref{eq:ctrl-con} implies that there exists a sufficiently small constant $\epsilon>0$ such that \begin{align} \lie{f}{h}(x) - \epsilon \ge - \gamma h(x) + (h(x))^2 L^I_\sigma(B(x)) \end{align} is satisfied. Combining this inequality with the assumption that $u_o$ is continuous, there exists a subset $G_o \subset \chi$, a neighborhood of $x_g \in \{ x \in \R^n | \lie{g}{h}(x)=0 \}$, in which \begin{align} ||\lie{g}{h}(x) u_o(x) || \le \epsilon \end{align} is satisfied. Thus, for $x \in G_o$, we obtain \begin{align} \lie{f}{h}(x) + \lie{g}{h}(x) u_o(x) \ge - \gamma h(x) + (h(x))^2 L^I_\sigma(B(x)), \end{align} which implies that $I \ge J$; namely, $\phi_N=0$ in $G_o$. Therefore, $\phi_N$ is continuous around $\lie{g}{h}(x)=0$. Consequently, $\phi_N$ is always continuous in $\chi$ and satisfies all the assumptions and conditions of Theorem~\ref{thm:zcbf}. This completes the proof. \end{proof} \begin{remark} The control design in Corollary~\ref{cor:ctrl-zcbf} is a stochastic version of the control design in \cite{tezuka2022}. As in that work, we could probably discuss the optimality of a stochastic system \eqref{eq:sys-sto} with $u=\phi_N(x)$. This issue is beyond the scope of this paper and is left as a topic for future work. \eor \end{remark} \begin{remark} If the condition \eqref{eq:con-zcbf} is made strict, i.e., ``$\ge$'' is replaced by ``$>$'', then the additional condition \eqref{eq:ctrl-con} obviously holds. \eor \end{remark}
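For implementation purposes, the following is a minimal sketch of the feedback law $\phi_N$ of Corollary~\ref{cor:ctrl-zcbf}; the callables supplying $f$, $g$, $\sigma$, $u_o$, $h$ and its derivatives, as well as the value of $\gamma$, are assumptions of the sketch, and $L^I_\sigma(B)$ is computed from the gradient and Hessian of $h$ using $B=1/h$.
\begin{verbatim}
import numpy as np

gamma = 1.0  # assumed design constant

def phi_N(x, f, g, sigma, u_o, h, grad_h, hess_h):
    """phi_N of the corollary; x is in the open safe set (h(x) > 0).
    f(x): (n,), g(x): (n,m), sigma(x): (n,d), u_o(x): (m,),
    h(x): scalar, grad_h(x): (n,), hess_h(x): (n,n)."""
    hx, dh = h(x), grad_h(x)
    s = sigma(x)
    Lgh = dh @ g(x)                      # L_g h, shape (m,)
    I = dh @ (f(x) + g(x) @ u_o(x))      # I = L^D_{f,g}(0, u_o, h)
    # L^I_sigma(B) for B = 1/h, expressed via grad/hess of h:
    #   L^I_sigma(B) = |sigma^T grad_h|^2 / h^3
    #                  - (1/2) tr[sigma sigma^T hess_h] / h^2
    sTdh = s.T @ dh
    LIB = (sTdh @ sTdh) / hx**3 \
          - 0.5 * np.trace(s @ s.T @ hess_h(x)) / hx**2
    J = -gamma * hx + hx**2 * LIB
    if I >= J or np.allclose(Lgh, 0.0):
        return np.zeros_like(Lgh)
    return -(I - J) / (Lgh @ Lgh) * Lgh
\end{verbatim}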
\subsection{A Stochastic ZCBF with quantitative evaluation of probability} In this subsection, we propose a new type of ZCBF for a stochastic system that yields a quantitative evaluation of how safe the system is in terms of probability. We also note that the condition of the stochastic ZCBF is somewhat different from that in \cite{wang2021}. Here, we define some sets and stopping times used in this subsection. For $\mu > 0$, let \begin{align} &\chi_\mu := \{x \in \R^n | h(x) \in (0, \mu ]\} \subset \chi, \\ &\chi_{h > \mu} := \chi \setminus \chi_\mu = \{x \in \R^n | h(x) > \mu \}, \\ &\R^n_{h \le \mu}:= \tilde{\chi} \cup \chi_\mu = \{x \in \R^n | h(x) \le \mu\}, \end{align} be defined. For a solution to the system \eqref{eq:sys-sto-gen} with $x_0 \in \chi_\mu$, the first exit time from $\chi_\mu$ is denoted by $\tau_{0\mu}$, and for the solution with $x_0 \in \chi$, the first exit time from $\chi$ is denoted by $\tau_0$. \begin{definition}[Stochastic ZCBF]\label{def:szcbf} Let \eqref{eq:sys-sto-gen} be considered with $\chi$ and $h(x)$ satisfying (Z1), (Z2)' and (Z3). If there exists a continuous mapping $\phi: \R^n \to \R^m$ such that, for all $x \in \R^n_{h \le \mu}$, \begin{align}\label{eq:con-prob} \mathcal{L}_{f,g,\sigma}(\phi(x),u_o(x),h(x)) \ge b H_{\sigma}(h(x)) \end{align} is satisfied with some $b>0$, then $h(x)$ is said to be {\it a stochastic ZCBF}. \eod \end{definition} \begin{lemma}\label{lem:safe-prob} If there exists a stochastic ZCBF $h(x)$ for the system \eqref{eq:sys-sto-gen}, then it is safe in $(\chi_\mu,\chi,1-e^{-b h(x_0)})$. \eot \end{lemma} \begin{proof} First, we prove that the existence of a stochastic ZCBF $h(x)$ ensures that the system \eqref{eq:sys-sto-gen} with $u=\phi(x)$ is FCiP. Let \begin{align}\label{eq:func-prob} h_b(x) := e^{bh(x)}. \end{align} Because \begin{align} L^D_{f,g}(\phi(x),u_o(x),h_b(x)) = b h_b(x) L^D_{f,g}(\phi,u_o(x),h(x)), \end{align} is satisfied, \eqref{eq:con-prob} is transformed as follows: \begin{align}\label{eq:con-prob3} L^D_{f,g}(\phi(x),u_o(x),h_b(x)) \ge b h_b(x) \left\{ b H_{\sigma}(h(x)) - L^I_{\sigma}(h(x))\right\}. \end{align} Moreover, letting \begin{align} B_b(x):=(h_b(x))^{-1}=e^{-bh(x)}, \end{align} we obtain \begin{align}\label{eq:htob} L^I_{\sigma}(B_b(x)) = b B_b(x) \left\{ b H_{\sigma}(h(x)) - L^I_{\sigma}(h(x)) \right\}, \end{align} which transforms \eqref{eq:con-prob3} into \begin{align}\label{eq:con-prob4} L^D_{f,g}(\phi,u_o(x),h_b(x)) \ge (h_b(x))^2 L^I_{\sigma}(B_b(x)). \end{align} Therefore, remembering \eqref{eq:rel-bh} with $v=B_b$, we obtain \begin{align} -L^D_{f,g}(\phi,u_o(x),B_b(x)) \ge L^I_{\sigma}(B_b(x)); \end{align} that is, \begin{align}\label{eq:exp-szcbf} \mathcal{L}_{f,g,\sigma}(\phi,u_o(x),B_b(x)) \le 0,\ x \in \R^n_{h \le \mu}. \end{align} Here, we consider the remaining region $\chi_{h > \mu}$, where the assumption (Z2)' implies that the region is bounded and that $h$ is bounded from above on it. In addition, $B_b$ is decreasing, $u_o$ is continuous, and $f$, $g$, and $\sigma$ are all locally Lipschitz. Therefore, $\mathcal{L}_{f,g,\sigma}(\phi,u_o(x),B_b(x))$ is bounded from above; that is, for sufficiently large values $c_1>0$ and $c_2>0$, we obtain \begin{align}\label{eq:exp-szcbf-soto} \mathcal{L}_{f,g,\sigma}(\phi,u_o(x),B_b(x)) \le c_1 B_b(x) + c_2,\ x \in \chi_{h > \mu}. \end{align} Considering \eqref{eq:exp-szcbf} and \eqref{eq:exp-szcbf-soto}, all the conditions of Theorem~\ref{thm:fcip} are satisfied with $Y=B_b$; that is, the system \eqref{eq:sys-sto-gen} with $u=\phi(x)$ is FCiP. Next, going back to \eqref{eq:exp-szcbf} and applying Dynkin's formula \cite{khasminskii2012, kushner, mao2007}, provided that we restrict $x_0 \in \chi_\mu$, we obtain \begin{align} &\ex{B_b(x(t\wedge \tau_{0\mu}))} - B_b(x_0) \nonumber \\ &= \ex{\int_0^{t \wedge \tau_{0\mu}} \mathcal{L}_{f,g,\sigma}(\phi(x(\tau)),u_o(x(\tau)),B_b(x(\tau))) d\tau} \le 0. \end{align} Further, applying Markov's inequality \cite{khasminskii2012, kushner, mao2007}, which is represented by \begin{align} \pri{x_0}{ B_b(x(\tau')) \ge m } \le \frac{\ex{ B_b(x(\tau'))}}{m} \end{align} for any stopping time $\tau' \ge 0$ with $x \in \R^n$ and $m>0$, we obtain \begin{align} \pri{x_0}{B_b(x(t \wedge \tau_{0\mu})) \ge m} \le \frac{B_b(x_0)}{m}. \end{align} Now, letting $x^* \in \partial \chi$, then $h(x^*)=0$ is satisfied by the condition (Z3). Thus, we obtain \begin{align} \sup_{x \in \partial \chi_\mu} B_b(x) = B_b(x^*) = e^{-b \cdot 0 } = 1. \end{align} This implies that we can choose $m=e^{-b \cdot 0}=1$ for $x_0 \in \chi_\mu$; thus, we obtain \begin{align}\label{eq:prob-zcbf-last} \pri{x_0}{h(x(t \wedge \tau_{0\mu})) \le 0 } &= \pri{x_0}{ B_b(x(t \wedge \tau_{0\mu})) \ge 1} \nonumber \\ &\le B_b(x_0) = e^{-b h(x_0)}. \end{align} This completes the proof. \end{proof} If the initial value satisfies $h(x_0) > \mu$, then the above lemma is improved so that the probability is independent of the initial value.
\begin{theorem}\label{thm:safe-prob} If there exists a stochastic ZCBF for the system \eqref{eq:sys-sto-gen}, then it is safe in $(\chi_{h > \mu}, \chi, 1-e^{-b \mu})$. \eot \end{theorem} \begin{proof} First, using \eqref{eq:prob-zcbf-last} with $x^*$ satisfying $h(x^*)=\mu$, we obtain \begin{align} \pri{x^*}{h(x(t \wedge \tau_{0\mu})) \le 0 } \le e^{-b \mu}. \end{align} Next, let us assume that the initial condition satisfies $x_0 \in \chi_{h > \mu}$. Because the subset $\chi_{h > \mu}$ is bounded, any sample path $x(t)$ has to reach the boundary if it leaves the subset. Moreover, because $f$, $g$ and $\sigma$ of the system \eqref{eq:sys-sto-gen} are all time-invariant, any solution $x(t)$ is memoryless (i.e., a time-homogeneous Markov process). These results imply that \begin{align} \pr{h(x(t \wedge \tau_{0})) \le 0 | x(0) = x_0 \in \chi_{h > \mu}} &\le \pr{h(x(t \wedge \tau_{0})) \le 0 | x(0) = x^*} \nonumber \\ &\le \pr{h(x(t \wedge \tau_{0\mu})) \le 0 | x(0) = x^*}. \end{align} Therefore, we obtain \begin{align} \pr{h(x(t \wedge \tau_{0})) \le 0 | x(0) = x_0 \in \chi_{h > \mu}} \le e^{-b \mu}. \end{align} \end{proof} \begin{remark} A characteristic feature of our stochastic ZCBF in Definition~\ref{def:szcbf} appears in the condition \eqref{eq:con-prob}, which does not impose any restriction for $x \in \chi_{h>\mu}$; we can design a compensator $u$ freely in this subset, provided that we assume $u$ to be continuous. \eor \end{remark} In addition, because the condition \eqref{eq:exp-szcbf} implies that $B_b(x)$ is a non-negative supermartingale \cite{khasminskii2012,mao2007} outside of the safe set $\chi$, sample paths approach the safe set $\chi$ in probability. More concretely, we can employ the analysis of $\mu$-zone mean-square convergence shown in \cite{poznyak2018}. Letting $\mu_b := e^{-b\mu}$ and \begin{align} &[\ex{B_b(x)} - \mu_b]_{+} := \left\{ \begin{array}{ll} \ex{B_b(x)}-\mu_b, & \ex{B_b(x)} \ge \mu_b \\ 0, & \ex{B_b(x)} < \mu_b \end{array}\right., \\ &W(x) := ([\ex{B_b(x)} - \mu_b]_{+})^2, \end{align} then the following holds. \begin{corollary} If there exists a stochastic ZCBF $h(x)$ for \eqref{eq:sys-sto-gen}, and moreover, for all $x \in \R^n_{h \le \mu}$, the condition \eqref{eq:con-prob} is replaced by \begin{align}\label{eq:con-prob-strict} \mathcal{L}_{f,g,\sigma}(\phi(x),u_o(x),h(x)) > b H_{\sigma}(h(x)), \end{align} then, for solutions of the system with $u=\phi(x)$ and $x_0 \in \R^n_{h \le \mu}$, \begin{align} \lim_{t \to \infty} W(x(t)) = 0 \end{align} is satisfied. \eot \end{corollary} \begin{proof} Because the condition \eqref{eq:con-prob-strict} holds, for any $x \in \R^n_{h \le \mu}$, $B_b(x)=e^{-bh(x)}$ satisfies $\mu_b \le B_b(x)$. Using this inequality, we obtain \begin{align}\label{eq:exp-szcbf-strict} \mathcal{L}_{f,g,\sigma}(\phi,u_o(x),B_b(x)) < 0,\ x \in \R^n_{h \le \mu} \end{align} in the same way as in the derivation of \eqref{eq:exp-szcbf}. Therefore, using Dynkin's formula, we obtain \begin{align} \frac{dW}{dt} &= 2 [\ex{B_b(x)} - \mu_b]_{+} \frac{d\ex{B_b(x)}}{dt} \nonumber \\ &= 2 [\ex{B_b(x)} - \mu_b]_{+} \ex{\mathcal{L}_{f,g,\sigma}(\phi(x),u_o(x),B_b(x))} \nonumber \\ &<0. \end{align} This completes the proof. \end{proof} \subsection{Robustness for changing a diffusion coefficient and the relationship with a deterministic system} In this subsection, we discuss the probability of safety under a change of the diffusion coefficient. In particular, we confirm that the probability converges to one as the diffusion coefficient vanishes.
The target system of this subsection is \begin{align}\label{eq:sys-sto-new} dx = \{ f(x)+g(x) (u+u_o(x)) \} dt + \sigma'(x)dw, \end{align} where $\sigma': \R^n \to \R^{n \times d}$ is locally Lipschitz and the other terms are the same as in \eqref{eq:sys-sto-gen}. \begin{corollary} Let us assume that there exists an AS-ZCBF $h(x)$ (or, equivalently, an AS-RCBF $B(x)=(h(x))^{-1}$) for the system \eqref{eq:sys-sto-gen}. If \begin{align}\label{eq:con-zcbf-new} L^I_{\sigma}(B) \ge L^I_{\sigma'}(B) \end{align} is satisfied for all $x \in \chi$, then the system \eqref{eq:sys-sto-new} is FIiP in $\chi$. \eot \end{corollary} \begin{proof} By the condition \eqref{eq:con-zcbf} in Theorem~\ref{thm:zcbf}, \begin{align} L^D_{f,g}(\phi(x),u_o(x),h(x)) \ge -\gamma h(x) + (h(x))^2 L^I_{\sigma}(B(x)) \end{align} is satisfied. Thus, we obtain \begin{align} \mathcal{L}_{f,g,\sigma'}&(\phi(x),u_o(x),h(x)) \ge -\gamma h(x) + (h(x))^2 L^I_{\sigma}(B(x)) + L^I_{\sigma'}(h(x)). \end{align} Here, assuming that the condition \eqref{eq:con-zcbf-new} holds, we also obtain \begin{align} \mathcal{L}_{f,g,\sigma'}&(\phi(x),u_o(x),h(x)) \ge -\gamma h(x) + (h(x))^2 L^I_{\sigma'}(B(x)) + L^I_{\sigma'}(h(x)), \end{align} which is the same as the condition \eqref{eq:con-zcbf}, provided that $\sigma$ is replaced by $\sigma'$. Therefore, the corollary is proven by using Theorem~\ref{thm:zcbf}. \end{proof} The above result immediately implies that, as long as $h(x)$ is designed so that $L^I_{\sigma}(B) \ge 0$ holds, the deterministic system \eqref{eq:sys-det-gen} with $u=\phi$ is safe in $\chi$ because \eqref{eq:sys-det-gen} has the same form as \eqref{eq:sys-sto-new} with $\sigma'=0$. \begin{corollary}\label{cor:mu-zone} Let us assume that there exists a stochastic ZCBF $h(x)$ for the system \eqref{eq:sys-sto-gen}. If \begin{align} L^I_{\sigma}(h(x)) \le L^I_{\sigma'}(h(x)) \end{align} holds and there exists $a \in (0,1]$ such that \begin{align} a^2 H_{\sigma}(h(x)) \ge H_{\sigma'}(h(x)) \end{align} is satisfied for any $x \in \chi$, then the system \eqref{eq:sys-sto-new} is safe in $(\chi_{h > \mu},\chi,1-e^{-b\mu/a^2})$. \eot \end{corollary} \begin{proof} Considering $u=\phi(x)$ and \eqref{eq:con-prob}, we obtain \begin{align} \mathcal{L}_{f,g,\sigma'}(\phi(x),u_o(x),h(x)) & = \mathcal{L}_{f,g,\sigma}(\phi(x),u_o(x),h(x)) + L^I_{\sigma'}(h(x)) -L^I_{\sigma}(h(x)) \nonumber \\ &\ge b H_{\sigma}(h(x)) \nonumber \\ &\ge \frac{b}{a^2} H_{\sigma'}(h(x)). \end{align} This inequality and Theorem~\ref{thm:safe-prob} immediately prove the corollary. \end{proof} The above result also implies that, if $L^I_{\sigma'}(h(x)) = 0$, then $a$ can be chosen arbitrarily small; in particular, \eqref{eq:con-cbf-ames} is satisfied with any $\bar{\alpha}(h(x))$; thus, the deterministic system \eqref{eq:sys-det-gen} with $u=\phi(x)$ and $x_0 \in \chi_{h > \mu}$ is safe in $\chi$. \section{Examples}\label{sec:example} \subsection{Revisiting the motivating example}\label{subsec:ex1} In this subsection, we revisit the motivating example dealt with in Section~\ref{sec:motivation}. Let us consider the stochastic system \eqref{eq:sys-sto}, where the safe set is now $\chi_1=(\alpha,\infty)$, in accordance with Section~\ref{sec:main}. First, we modify the functions $B_s(x)=1/(x-\alpha)$ and $h_s(x)=x-\alpha$ so that they are proper in $\chi_1$. Referring to \cite{nakamura2019}, set \begin{align} p_N(x) := \frac12 (x-N)^4 + \frac12 (x-N)^3|x-N| \end{align} for a sufficiently large $N>\alpha$.
Then, the functions \begin{align} &B_1(x) = \frac{1}{x-\alpha} + p_N(x),\\ &h_1(x)=(B_1(x))^{-1}= \frac{x-\alpha}{1 + (x-\alpha)p_N(x)}, \end{align} are proper in $\chi_1$. The shape of $h_1(x)$ is shown in Fig.~\ref{fig:h1shape}. Here, we consider $u=\phi_N(x)$ defined in Corollary~\ref{cor:ctrl-zcbf}. If $x \ge N$, $\phi_N$ is somewhat complicated because $p_N(x) = (x-N)^4$. However, if $x < N$, the calculation results in Section~\ref{sec:motivation} can be used directly because $p_N(x)=0$. That is, $\phi_{B_s}$ defined in \eqref{eq:ctrl-zcbf-exam} is equivalent to $\phi_N$ for any $x < N$. Thus, we conclude that $B_1$ and $h_1$ are an AS-RCBF and an AS-ZCBF, respectively. Next, we consider the same problem setting as above, provided that the amplitude of the input is bounded; that is, for some $U_M>0$, an extra condition $-U_M \le u_o(x) + u \le U_M$ is imposed. Setting \begin{align} J_2(x) := -\gamma h_s(x) + c^2 B_s(x), \end{align} we design a compensator $u=\phi_1(x)$ as \begin{align} \phi_1(x) := \left\{ \begin{array}{ll} -u_o(x) + U_M, & h_s \le 0 \cup \max(u_o,J_2,U_M) \neq U_M \\ -u_o(x) - U_M, & h_s > 0 \cap \max(u_o,J_2,-U_M) = -U_M \\ -u_o(x) + J_2(x) , & h_s > 0 \cap u_o < J_2 \in (-U_M,U_M) \\ 0, & \mathrm{otherwise} \end{array}\right. \end{align} and consider $h_1$ as a candidate stochastic ZCBF. Moreover, we design \begin{align} &h_1(x_{\mu_1}) = \frac{-U_M+\sqrt{U_M^2 + 4 \gamma c^2}}{2\gamma} =: \mu_1 , \label{eq:ex1-mu}\\ &N > \frac{U_M+\sqrt{U_M^2 + 4 \gamma c^2}}{2\gamma} =: D, \label{eq:ex1-d} \end{align} so that $u_o(x) + \phi_1(x)=U_M$ for any $x \le x_{\mu_1}$ with $x_{\mu_1} \in (\alpha,N)$ and $u_o(x)+\phi_1(x)=-U_M$ for any $x \ge D$ with $N>D$. For $x \le x_{\mu_1} < N$, we obtain \begin{align} &p_N(x) = \pfrac{p_N}{x}(x)= \pfrac{^2 p_N}{x^2}(x) = 0, \\ &\lie{c}{h_1}(x) = \lie{c}{h_s}(x) = c, \\ &\mathcal{L}_{0,1,c}(\phi_1(x),u_o(x),h_1(x)) = \phi_1(x) + u_o(x) = U_M; \end{align} thus, we obtain \begin{align} \mathcal{L}_{0,1,c}(\phi_1(x),u_o(x),h_1) \ge b H_{c}(h_1(x)) \end{align} with $b=2 U_M /c^2$ and \begin{align} \mathcal{L}_{0,1,c}(\phi_1(x),u_o(x),h_1) > b H_{c}(h_1(x)) \end{align} with $b=2 U_M /c^2 - \epsilon$ for any sufficiently small $\epsilon>0$. The above results imply that $h_1$ is a stochastic ZCBF and the system is safe in $(\chi_{\ge \mu_1},\chi_1,1-e^{- 2 U_M \mu_1 /c^2})$, where $\chi_{\ge \mu_1}=\{x \in \R | h_1(x) \ge \mu_1\}$. In addition, the condition in Corollary~\ref{cor:mu-zone} holds with $b = 2 U_M /c^2 - \epsilon$. Furthermore, suppose that the system \eqref{eq:sys-sto} is merely a model used for designing a compensator, and that a system \begin{align}\label{eq:sys-sto-act} dx = (u+u_o(x)) dt + c' dw \end{align} is the actual target system. If there exists $a \in [0,1]$ such that $c' = ac$, then the system \eqref{eq:sys-sto-act} is safe in $(\chi_{\ge \mu_1},\chi_1,1-e^{\frac{-2 U_M \mu_1}{a^2 c^2}})$. Therefore, by designing the compensator using $c >c'$, we can make the probability of safety of $\chi_1$ higher, although it is limited by the input constraint $U_M$. Of course, if $a=0$, the probability is $1$. \subsection{Confinement in a bounded subset}\label{subsec:ex2} For system \eqref{eq:sys-sto}, let us consider a safe set $\chi_2:=(\alpha-\beta,\alpha+\beta)$ with some $\beta>0$.
We consider the following function as a candidate for an AS-ZCBF or a stochastic ZCBF: \begin{align} h_2(x) := \left\{ \begin{array}{ll} \frac{2 \beta \kappa}{\pi} (x-\alpha+\beta),& x \le \alpha -\beta, \\ \frac{4 \beta^2 \kappa}{\pi^2} \sin (\theta(x)), & x \in \chi_2, \\ -\frac{2 \beta \kappa}{\pi} (x-\alpha-\beta),& x \ge \alpha +\beta, \\ \end{array}\right. \end{align} where $\kappa > 0$ and \begin{align} \theta(x) := \frac{\pi}{2\beta} (x-\alpha+\beta). \end{align} The reciprocal function is defined by $B_2(x) := (h_2(x))^{-1}$. The shape of $h_2(x)$ is shown in Fig.~\ref{fig:h2shape}. First, considering Corollary~\ref{cor:ctrl-zcbf} and setting $\gamma = \kappa c^2 > 1$, we obtain a compensator \begin{align}\label{eq:ex2a-ctrl} \phi_{N2}(x) = \left\{ \begin{array}{ll} -u_o(x) + \frac{2\beta\kappa c^2}{\pi\tan(\theta(x))}, & I < J \cap x \neq \alpha, \\ 0, & I \ge J \cup x = \alpha, \\ \end{array}\right. \end{align} where \begin{align} I(u_o(x),h_2(x)) &= \frac{2\beta \kappa}{\pi} \cos(\theta(x)) u_o(x), \\ J(h_2(x)) &= - \frac{ 4 \beta^2 \kappa^2 c^2}{\pi^2} \sin (\theta(x)) + \frac{4 \beta^2 \kappa^2 c^2}{\pi^2 \sin (\theta(x))} \nonumber \\ &= \frac{4 \beta^2 \kappa^2 c^2}{\pi^2} \frac{\cos^2(\theta(x))}{\sin(\theta(x))}. \end{align} This implies that $h_2(x)$ is an AS-ZCBF because the system \eqref{eq:sys-sto} with $u=\phi_{N2}$ and $\chi = \chi_2$ satisfies \eqref{eq:con-zcbf}. Next, considering the restriction $|u+u_o(x)|\le U_M$ and \begin{align} \Phi_2(x) := \frac{2\beta\kappa c^2}{\pi\tan(\theta(x))}, \end{align} we redesign the compensator $\phi_{N2}$ as follows: \begin{align} \phi_2(x) := \left\{ \begin{array}{ll} \phi_{N2}(x), & x \in \chi_2 \cap \Phi_2 \in (-U_M,U_M), \\ -u_o(x) + U_M, & x \le \alpha-\beta \cup \Phi_2 \ge U_M, \\ -u_o(x) - U_M, & x \ge \alpha+\beta \cup \Phi_2 \le -U_M. \end{array}\right. \end{align} In the rest of this subsection, we confirm that $h_2$ is a stochastic ZCBF with $u=\phi_2(x)$. First, for all $x \in \R \setminus \chi_2$, \begin{align} &\pfrac{h_2}{x} = -\sgn{x-\alpha} \frac{2\beta\kappa}{\pi}, \quad \pfrac{^2 h_2}{x^2} = 0, \end{align} are both satisfied. Thus, we obtain \begin{align} \mathcal{L}_{0,1,c}(\phi_2,u_o(x),h_2(x)) &= \frac{2\beta\kappa}{\pi}U_M \nonumber \\ &= \frac{1}{2} \frac{\pi}{\beta\kappa c^2}U_M \frac{4\beta^2\kappa^2c^2}{\pi^2} \nonumber\\ &= \frac{1}{2} \frac{\pi}{\beta\kappa c^2}U_M (\lie{c}{h_2}(x))^2; \end{align} therefore, choosing $b=b_{21}$ with \begin{align} b_{21} \le \frac{\pi U_M}{\beta\kappa c^2}, \end{align} we obtain \eqref{eq:con-prob} for $x \in \R\setminus \chi_2$. Next, we consider the situation of $x \in \chi_{\mu_2} = \{x \in \R | h_2(x) \in (0, \mu_2)\}$ for $\mu_2 = h_2(x_{\mu_2})$, where $x_{\mu_2}$ satisfies \begin{align} \Phi_2(x_{\mu_2})=\pm U_M. \label{eq:ex2-max} \end{align} In this situation, we obtain \begin{align} &\mathcal{L}_{0,1,c}(u,u_o,h_2(x)) = \frac{2 \beta \kappa}{\pi} \cos (\theta(x))(u+u_o) - \frac{c^2\kappa}{2}\sin(\theta(x))\\ &(\lie{c}{h_2}(x))^2 = \frac{2 \beta^2 \kappa^2}{\pi^2}\cos^2(\theta(x)). \end{align} Moreover, \begin{align} &\sin(\theta(x)) > 0 \label{ineq:ex2} \end{align} yields \begin{align} \sgn{\cos(\theta(x))} = \sgn{\Phi_2(x)}.
\end{align} Therefore, if $u+u_o(x) = \sgn{\Phi_2(x)} U_M$, we obtain \begin{align} &\mathcal{L}_{0,1,c}(u,u_o,h_2(x)) = \frac{2 \beta \kappa}{\pi} |\cos (\theta(x))| U_M - \frac{c^2\kappa}{2}\sin(\theta(x)), \end{align} which satisfies the condition \eqref{eq:con-prob} with $b=b_{22}$ if \begin{align}\label{eq:ex2-con-b} 0 < b_{22} \le \frac{\pi}{\beta \kappa} \left\{ \frac{U_M}{|\cos(\theta(x))|} - \frac{c^2 \pi \sin(\theta(x))}{4 \beta \cos^2(\theta(x))} \right\} \end{align} holds. Using \eqref{ineq:ex2}, we obtain \begin{align} \sup_{x \in \chi_{\mu_2}}|\tan(\theta(x))| = |\tan(\theta(x_{\mu_2}))| < \frac{4 \beta}{c^2 \pi}U_M \end{align} as a condition for $b_{22}$ to exist. Therefore, considering \eqref{eq:ex2-max}, \begin{align} |\tan(\theta(x_{\mu_2}))| < \frac{8 \beta^2 \kappa}{\pi^2 |\tan(\theta(x_{\mu_2}))|}; \end{align} thus, \begin{align}\label{eq:ex2-mu-con} |\tan(\theta(x_{\mu_2}))| < \frac{2 \sqrt{2 \kappa} \beta}{\pi}. \end{align} Consequently, for $x_{\mu_2}$ satisfying \eqref{eq:ex2-mu-con}, the compensator $u=\phi_2(x)$ ensures that the system \eqref{eq:sys-sto} is safe in $(\chi_{\ge \mu_2},\chi_2,1-e^{-b_2 \mu_2})$, where $\mu_2 = h_2(x_{\mu_2})$ and $b_2 := \min(b_{21},b_{22})$. \begin{figure}[!t] \begin{minipage}[t]{0.45\hsize} \centering \includegraphics[keepaspectratio, scale=0.3]{h1_shape.pdf} \caption{$h_1$ with $\alpha=1$ and $N=7$.} \label{fig:h1shape} \end{minipage} \begin{minipage}[t]{0.45\hsize} \centering \includegraphics[keepaspectratio, scale=0.3]{h2_shape.pdf} \caption{$h_2$ with $\alpha=0$, $\beta=1$ and $\kappa=20000$.} \label{fig:h2shape} \end{minipage} \\ \end{figure} \subsection{Numerical simulation} In this subsection, we confirm the validity of the derived compensators for $\chi_1$ and $\chi_2$ by computer simulation. \begin{example}\label{ex:ex1sim} Consider the system \eqref{eq:sys-sto} with the safe set $\chi_1$ and the compensator $u=\phi_1$ discussed in Subsection~\ref{subsec:ex1}. Letting $\alpha=1$, $\gamma = 1$, $c=0.1$ and $U_M=1$, we obtain $x_{\mu_1} \approx 1.01$, $\mu_1 \approx 0.01$, $b_1 \approx 200$ and $D \approx 1.01$. The value of $N$ does not affect the computer simulation as long as it is sufficiently large; for example, we set $N=10^{10}$. Then, the system \eqref{eq:sys-sto} is safe in $(\{ x > x_{\mu_1} \}, \{ x > 1 \}, 0.862)$. If $u_o=0$ for all $t \ge 0$, the compensator $\phi_1$ is illustrated in Fig.~\ref{fig:u1}. Here, assuming $u_o = -1$ and $x_0=4$, the simulation results of the time responses of the state $x$, the compensator $u$ together with the pre-input $u_o$, and the ZCBF $h_1$ are shown in Figs.~\ref{fig:ex1-x}, \ref{fig:ex1-u} and \ref{fig:ex1-zcbf}, respectively. In the simulation, we compute ten sample paths (grey lines), the average of the paths (red line), the result for the deterministic system (i.e., $\sigma'=0$; blue line), and the pre-input $u_o$ (green line). \end{example}
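For reproducibility, the following is a minimal Euler--Maruyama sketch of Example~\ref{ex:ex1sim} that estimates the probability of staying in $\chi_1$; the horizon, step size, and number of sample paths are assumptions of the sketch, and the four-case definition of $\phi_1$ is encoded compactly via a clipped maximum.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
alpha, gam, c, U_M, u_o, x0 = 1.0, 1.0, 0.1, 1.0, -1.0, 4.0

def closed_loop_drift(x):
    """u_o + phi_1(x): the four cases defining phi_1 reduce to a
    clipped max of the pre-input and the barrier term J_2."""
    hs = x - alpha
    if hs <= 0.0:
        return U_M                      # push back toward the safe set
    J2 = -gam * hs + c**2 / hs
    return float(np.clip(max(u_o, J2), -U_M, U_M))

T, dt, n_paths, stay = 10.0, 1e-3, 300, 0
for _ in range(n_paths):
    x, safe = x0, True
    for _ in range(int(T / dt)):
        x += closed_loop_drift(x) * dt \
             + c * np.sqrt(dt) * rng.standard_normal()
        if x <= alpha:
            safe = False
            break
    stay += safe
print("empirical stay probability:", stay / n_paths)  # bound: >= 0.862
\end{verbatim}
As in the figures, the empirical stay probability is typically much higher than the guaranteed lower bound $0.862$, since Theorem~\ref{thm:safe-prob} is conservative.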
\begin{example}\label{ex:ex2sim} Consider the system \eqref{eq:sys-sto} with the safe set $\chi_2$ and the compensator $u=\phi_2$ discussed in Subsection~\ref{subsec:ex2}. Letting $c=0.01$, $\alpha=0$, $\beta=1$, $U_M = 1.0$ and $\kappa = 20000$, we obtain $x_{\mu_2} \approx \pm 0.42$, $\mu_2 \approx 6375$ and $b_2 = b_{22} \approx 2.54 \times 10^{-4}$. Then, the system \eqref{eq:sys-sto} is safe in $(\{ |x| < |x_{\mu_2}| \}, \{ |x| < 1 \},0.802)$. If $u_o=0$ for all $t \ge 0$, the compensator $\phi_2$ is illustrated in Fig.~\ref{fig:u2}. Here, assuming $u_o = 1$ and $x_0=0.99$, the simulation results of the time responses of the state $x$, the compensator $u$ together with the pre-input $u_o$, and the ZCBF $h_2$ are shown in Figs.~\ref{fig:ex2-x}, \ref{fig:ex2-u} and \ref{fig:ex2-zcbf}, respectively. The colors of the lines have the same roles as in Example~\ref{ex:ex1sim}. \end{example} In the simulation results, safety is achieved with a higher probability than the estimate of Theorem~\ref{thm:safe-prob}; note that the theorem only guarantees a lower bound on the probability of remaining in the safe set. The results suggest that, for actual control problems influenced by white noise, the designed compensators perform well as safety-critical controllers. \begin{figure}[!t] \begin{tabular}{cc} \begin{minipage}[t]{0.45\hsize} \centering \includegraphics[keepaspectratio, scale=0.3]{ex1_compfunc.pdf} \caption{$\phi_1$ with $u_o=0$.} \label{fig:u1} \end{minipage} & \begin{minipage}[t]{0.45\hsize} \centering \includegraphics[keepaspectratio, scale=0.3]{ex1_state.pdf} \caption{$x$ in Ex.~\ref{ex:ex1sim}.} \label{fig:ex1-x} \end{minipage} \\ \begin{minipage}[t]{0.45\hsize} \centering \includegraphics[keepaspectratio, scale=0.3]{ex1_input.pdf} \caption{$\phi_1$ and $u_o$ in Ex.~\ref{ex:ex1sim}.} \label{fig:ex1-u} \end{minipage} & \begin{minipage}[t]{0.45\hsize} \centering \includegraphics[keepaspectratio, scale=0.3]{ex1_zcbf.pdf} \caption{$h_1$ in Ex.~\ref{ex:ex1sim}.} \label{fig:ex1-zcbf} \end{minipage} \end{tabular} \begin{tabular}{cc} \begin{minipage}[t]{0.45\hsize} \centering \includegraphics[keepaspectratio, scale=0.3]{ex2_compfunc.pdf} \caption{$\phi_2$ with $u_o=0$.} \label{fig:u2} \end{minipage} & \begin{minipage}[t]{0.45\hsize} \centering \includegraphics[keepaspectratio, scale=0.3]{ex2_state.pdf} \caption{$x$ in Ex.~\ref{ex:ex2sim}.} \label{fig:ex2-x} \end{minipage} \\ \begin{minipage}[t]{0.45\hsize} \centering \includegraphics[keepaspectratio, scale=0.3]{ex2_input.pdf} \caption{$\phi_2$ and $u_o$ in Ex.~\ref{ex:ex2sim}.} \label{fig:ex2-u} \end{minipage} & \begin{minipage}[t]{0.45\hsize} \centering \includegraphics[keepaspectratio, scale=0.3]{ex2_zcbf.pdf} \caption{$h_2$ in Ex.~\ref{ex:ex2sim}.} \label{fig:ex2-zcbf} \end{minipage} \end{tabular} \caption*{In the above figures, the grey lines denote sample paths, the red lines denote the average of the paths, the blue lines denote the results for the deterministic system (i.e., $\sigma'=0$), and the green lines denote the pre-input $u_o(x)$.} \end{figure} \section{Conclusion}\label{sec:conclusion} In this paper, we proposed an almost sure reciprocal/zeroing control barrier function and a stochastic zeroing control barrier function for designing a safety-critical control law for a stochastic control system. We also note that the present results are effective only for an autonomous system with a state-feedback-type pre-input. The extension to a non-autonomous system with a time-varying pre-input is a challenging topic for future work; such an extension would enable us to apply our results to recent control applications such as human-assist control \cite{furusawa2021,nakamura2019,tezuka2022}.
{ "timestamp": "2022-09-20T02:23:51", "yymm": "2209", "arxiv_id": "2209.08728", "language": "en", "url": "https://arxiv.org/abs/2209.08728" }
\section{Introduction} \label{sec:introduction} A well-known challenge of modern high-capacity recognition models, such as deep neural networks (DNNs), is their need for large quantities of training data. This is a major problem for tasks involving overhead (e.g., satellite) imagery due to the costs to purchase and label the imagery. Furthermore, there is tremendous variability in real-world overhead imagery (e.g., due to geography, weather, time-of-day, imaging hardware, and more), making it costly to capture in a dataset. Despite their tremendous benefits for the research community, recent benchmark datasets of overhead imagery (e.g., DSTL \cite{Iglovikov2018}, DeepGlobe \cite{Demir2018}, and Inria \cite{Maggiori2017}) encompass just a few geographic locations and environmental conditions. As a result, these datasets still capture a small fraction of real-world visual variability, and DNNs trained on these datasets have been found to perform unpredictably, and usually poorly, on new collections of overhead imagery \cite{Maggiori2017,Wang2017,Huang2018,Kong2020}. This limitation greatly undermines their value to researchers and real-world users. In this work we explore the use of \textit{synthetic} overhead imagery to address this problem. Synthetic imagery is captured using a virtual camera operating in a virtual world, where the designer can control the scene content and environmental conditions. By systematically varying the camera properties, scene content, and environmental conditions, it is possible to rapidly capture large quantities of imagery. Furthermore, because we have all information about the camera and the scene, we can also automatically generate a variety of high-quality ground truth annotations, such as full pixel-wise segmentation of scene content and objects, or image depth maps. Such annotations are often expensive or impracticable to collect. For these reasons, synthetic imagery potentially offers a fast and cost-effective means to collect large quantities of diverse training imagery for DNNs. Due to these potential benefits, interest in synthetic imagery has grown rapidly in recent years, and a large number of studies have demonstrated that it is beneficial for training DNNs on a variety of tasks \cite{Kong2020,Ward2018,Shermeyer2021,Xu,Hu2021,Han2017,Ros2016,Richter2016,Tobin2017,Zhang2017,Shafaei2016,Qureshi2008,Taylor2007,Tremblay2018,Sankaranarayanan2018}, including those involving overhead imagery \cite{Kong2020,Ward2018,Shermeyer2021,Xu,Hu2021,Han2017}. \begin{figure} \centering \includegraphics[height=5.6cm]{imgs/Overview.png} \caption{Overview of the meta-simulation problem formulation: the goal is to estimate the target design parameters and generate realistic synthetic images in the target domain for augmenting the training of a downstream task model.} \label{fig:overview} \end{figure} One major obstacle to the success of synthetic imagery is the \textit{sim-to-real gap} \cite{Kar2019a,Devaranjan2020}, which refers to systematic visual dissimilarities between the synthetic and real-world imagery. A large body of research has been focused on overcoming this gap \cite{Tremblay2018,Sankaranarayanan2018,Kar2019a,Devaranjan2020,hoffman_2018,huang2018multimodal}. The majority of this work has focused on overcoming so-called \textit{appearance} gaps, which refer to gaps caused primarily by limitations in the graphics engine that affect low-level imagery features, such as shading and texture \cite{hoffman_2018, huang2018multimodal}.
Relatively less attention has been given to overcoming the \textit{content} gap, which refers to differences in scene content between the virtual world and real-world scenes - these gaps then result in significant visual differences between the imagery. Content gaps can include differences in the types of objects in the scene, their material composition (which often affects object texture and color), and their spatial arrangement \cite{Kar2019a}. Content gaps can be overcome through careful manual design; however, this is costly and time-consuming \cite{Kong2020}. Furthermore, the design process must generally be repeated for each new context, or visual domain, one wishes to perform well upon. This has led to the recent development of \textit{meta-simulation} models which aim to (semi-) automate the design of the virtual world, mitigating one of the major limitations of synthetic imagery \cite{Kar2019a,Devaranjan2020,Kulkarni2015Deep,Du2021Auto-Tuned,Mansinghka2013Approximate,Kulkarni2015Picture:,Yildirim2015,Louppe2019Adversarial,Ganin2018Synthesizing,Mellor2019Unsupervised,Behl2020AutoSimulate:,Ruiz2019LEARNING}. \subsection{Meta-simulation} \label{subsec:introduction_meta_simulation} Meta-simulation methods attempt to optimize the design parameters of the simulator (which primarily influence the virtual scene content) in order to generate synthetic images that look similar to those in a set of \textit{unlabeled} target imagery, as illustrated in Fig. \ref{fig:overview}. We can then use the synthetic imagery for training a task model to perform well on the target unlabeled imagery. Solving the meta-simulation problem is difficult because we must simultaneously infer the types of content in the scene, their spatial arrangement and quantity, and their material composition - a vast search space. The optimization process is made more difficult because simulators are typically complex and non-differentiable black-box functions, and may also have non-ordinal categorical input parameters. Despite these challenges, a variety of meta-simulation methods have been proposed in recent years \cite{Kar2019a,Devaranjan2020,Kulkarni2015Deep,Du2021Auto-Tuned,Mansinghka2013Approximate,Kulkarni2015Picture:,Yildirim2015,Louppe2019Adversarial,Ganin2018Synthesizing,Mellor2019Unsupervised,Behl2020AutoSimulate:,Ruiz2019LEARNING}. In these seminal works, the authors demonstrated that meta-simulation can be used to infer appropriate design parameters, as well as improve the accuracy of recognition models trained on the resulting synthetic data. However, most of this existing work has focused upon natural imagery, which exhibits some important differences with respect to overhead imagery. First, there exist simulation engines for natural imagery that are computationally fast, and produce relatively realistic synthetic imagery (i.e., a low sim-to-real visual gap). By contrast, publicly-accessible simulators for overhead imagery are substantially slower, and exhibit a relatively large sim-to-real visual gap (see Sec. \ref{sec:related_work_syn}). Another important distinction is that natural imagery is relatively easy to collect, and it is therefore often abundant. By contrast, overhead imagery is costly to collect, and is often limited in quantity for novel target domains. Due to these differences, many existing meta-simulation methods are not directly applicable to overhead imagery (see Sec. \ref{subsec:related_work_MS}) and there has been little work investigating meta-simulation specifically for overhead imagery.
The authors in \cite{Kar2019a} and \cite{Devaranjan2020} demonstrated the effectiveness of several meta-simulation methods on overhead imagery, one of which, termed Meta-Sim2 (MS2), we also investigate here. However, these works only tested on synthetic target imagery (as opposed to real imagery), and did so on simplistic scenes with limited variability. As a result, it is unclear how well their methods would perform on more complex and realistic synthetic imagery, as well as real-world imagery. \subsection{Contributions of this work} \label{subsec:introduction_contributions} In this work we perform the first investigation of meta-simulation for real-world overhead imagery. We compare three meta-simulation methods: two existing methods, and one novel method. Given the unique challenges of overhead imagery, we found that two existing methods were well-suited for the overhead imagery meta-simulation task (see Sec. \ref{subsec:related_work_MS}): direct regression (DR) \cite{Du2021Auto-Tuned}, and MS2 \cite{Devaranjan2020}. DR is a relatively simple approach that has recently been applied to meta-simulation \cite{Du2021Auto-Tuned}, and is suitable for overhead imagery. MS2 \cite{Devaranjan2020} recently achieved state-of-the-art performance, outperforming its predecessor Meta-Sim \cite{Kar2019a} on several meta-simulation tasks. \begin{figure} \centering \includegraphics[height=5.0cm]{imgs/Exps.png} \caption{Sim-to-real gaps that impact the meta-simulation models are illustrated: (i) by searching among the designs (e.g., rooftop textures) accessible to the simulator, the content gap can be reduced, and the remaining content gap after the optimal design is selected is called the design gap; (ii) the appearance gap exists due to the limitation of computer graphics, and it cannot be reduced by the meta-simulator.} \label{fig:illustration_of_appearance_and_design_gaps} \end{figure} We evaluated these two meta-simulation methods on several controlled experiments in which we isolate and examine the impact of two major real-world factors on meta-simulation models: (i) the presence of sim-to-real \textit{design} gaps, and (ii) the presence of sim-to-real \textit{appearance} gaps. As illustrated in Fig. \ref{fig:illustration_of_appearance_and_design_gaps}, regarding (i), a design gap is the remaining content gap between the optimal synthetic scene design and the content in the real-world target imagery. As mentioned in Sec. \ref{sec:introduction}, meta-simulation models aim to reduce the content gap by optimizing the design of the virtual scene. However, in real applications, the content gap is unlikely to be completely eliminated. This may occur, for example, if the target imagery contains a rooftop with a texture that cannot be perfectly matched by any texture available to the meta-simulator. Therefore, there is a ``gap'' between the designs available to the meta-simulator and those present in the target scene. Given the variety of real-world scenes, design gaps are likely present whenever meta-simulating on any real-world target imagery. In these cases, the meta-simulator will ideally identify the design that results in synthetic imagery that is visually most similar to the real-world imagery. The second factor is the presence of sim-to-real appearance gaps, which are typically low-level gaps that cannot be (directly) influenced or reduced by the meta-simulator and, as we show, tend to make meta-simulation much more challenging.
Significant sim-to-real gaps are present in current synthetic overhead imagery simulators, and therefore this is an important factor to consider when evaluating meta-simulation models. We incrementally introduce the two aforementioned factors into our meta-simulation problems in order to isolate their impacts on the meta-simulation models. Our experiments reveal that existing meta-simulation approaches work relatively well when there are no sim-to-real gaps; however, both MS2 and DR degrade, to varying degrees, as these gaps are introduced. Most importantly, both methods perform much more poorly when applied to real-world overhead imagery, which we hypothesize exhibits both significant appearance and design gaps. MS2 - a state-of-the-art meta-simulation method - also suffers from several additional limitations. First, it requires a large number of simulations each time meta-simulation is performed, which quickly becomes impracticable if meta-simulation must be repeated (e.g., for unique collections of target imagery). This cost is exacerbated if the simulator is relatively slow, as is the case in overhead imagery. These limitations are described in greater detail in Sec. \ref{subsec:problem_setting_meta_simulation_2} and \ref{subsec:problem_setting_direct_regression}. To address the limitations of existing approaches, we propose Neural-Adjoint Meta-Simulation (NAMS), which is more effective for meta-simulation on real-world overhead imagery, and has better computational complexity. NAMS relies upon training a DNN to model the simulator, yielding a differentiable function that relates synthetic imagery to its design. Using this model, we can then use gradient descent to rapidly search for design parameters that yield synthetic imagery that matches some given target imagery. NAMS is therefore \textit{amortized}: it still requires an initial random sampling of simulations to perform meta-simulation; however, this simulation process only needs to be done once, after which NAMS can be used to rapidly perform design inference for new domains. In our experiments we find that NAMS outperforms MS2 (and DR) when using the quantity of simulations required by MS2 \textit{for just a single meta-simulation}. The contributions of this work are briefly summarized below: \begin{enumerate} \item \textit{A performance comparison of modern meta-simulation techniques for overhead imagery tasks.} We compare the effectiveness of several meta-simulation models for complex and challenging synthetic and real-world overhead imagery. To our knowledge we are the first to evaluate meta-simulation methods for real-world or (complex) synthetic overhead imagery. \item \textit{We investigate the behavior of meta-simulation methods under several important real-world scenarios.} We show that meta-simulation methods perform very differently depending upon whether (i) there is an appearance gap between the synthetic and real-world imagery, and (ii) whether the target imagery is within the design space of the simulator. \item \textit{Neural-Adjoint Meta-Simulation (NAMS), a novel meta-simulation model.} NAMS achieves similar performance to other existing methods, but it is better-suited to the challenges of meta-simulation in overhead imagery. In particular, it performs better on real-world overhead imagery, and scales much better with the number of unique meta-simulation tasks that must be performed.
\end{enumerate} The remainder of this paper is organized as follows: Section \ref{sec:related_work} discusses related work; Section \ref{sec:problem_setting} discusses the meta-simulation problem formulation and the existing methods; Section \ref{sec:nams_description} discusses the proposed NAMS methodology; Section \ref{sec:experimental_design} and \ref{sec:experimental_results} discuss our experiments and results; and Section \ref{sec:conclusion} presents our conclusions. \section{Related work} \label{sec:related_work} \subsection{Training models with synthetic data} \label{sec:related_work_syn} The exploration of synthetic training data as an alternative to real-world dataset collection and annotation has grown rapidly in recent years. Synthetic training data has been studied for a variety of tasks, such as segmentation and planning in driving \cite{Richter2016,Wrenninge2018,Prakash2019,Gaidon2016,Dosovitskiy2017,Alhaija2018,Ros2016}, indoor scene segmentation \cite{Zhang2017,Armeni2019,Handa2016,McCormac2017,Savva2019,Wang2017,Wu2018}, robotic control \cite{Tassa2018,Todorov2012,Sadeghi2016,Brockman2016}, optical flow estimation \cite{Butler2012}, \cite{Shugrina2019}, home robotics \cite{Puig2018,Kolve2017,Gao2019}, surveillance system design and evaluation \cite{Qureshi2008}, \cite{Taylor2007}, \cite{Qureshi2007}, and more. Synthetic \textit{overhead} imagery has also been recently explored for object detection \cite{Shermeyer2021,Hu2021,Xu,Han2017,Ward2018} and segmentation \cite{Kong2020}. In this work, we adopt the Synthinel-1 simulation approach \cite{Kong2020}, because it is the only method thus far that features a research-accessible simulator, and software to aid the generation of synthetic overhead imagery. However, as is stated in \cite{Kong2020}, it takes approximately 1 minute to create 36 synthetic overhead images, covering 1 square km of the ground, on a standard desktop, which is substantially slower than processes employed for synthetic natural imagery generation. Please refer to \cite{Kong2020} for more details about the simulator. In this work, the downstream task is building segmentation, and the main challenge is to choose building features that are appropriate for real-world target imagery. This is a challenging problem because of the substantial domain gap (noted in \cite{Kong2020}), and the substantial visual variability in buildings across different geographic regions. \subsection{Meta Simulation Methods} \label{subsec:related_work_MS} Many meta-simulation methods are presented as solutions to the aforementioned meta-simulation task \cite{Ruiz2019LEARNING,Behl2020AutoSimulate:,Kar2019a,Devaranjan2020,Mansinghka2013Approximate,Kulkarni2015Picture:,Yildirim2015,Louppe2019Adversarial,Ganin2018Synthesizing,Mellor2019Unsupervised}; however, most of them cannot be directly applied to our meta-simulation task on overhead imagery, due to three major unique properties of overhead imagery with respect to natural imagery: the presence of discrete and non-ordinal design parameters (e.g., selection of textures from a bank); the absence of labels in the target domain; and the large sim-to-real visual gap between synthetic and real overhead imagery. We next summarize existing methods, and how each is rendered ineffective or inapplicable due to these factors.
In \cite{Kulkarni2015Deep}, a neural network with an encoder-decoder structure is used to learn the design parameters from target images; however, the network is designed for continuous parameters and does not support discrete/categorical parameters natively, which may be impractical in real-world overhead imagery applications. In \cite{Du2021Auto-Tuned}, a search method is implemented to iteratively decide whether the given parameters are higher or lower than the true parameters in the target data, and to shift the parameters toward the target parameters. However, for categorical parameters, there is no intrinsic ordering to the different categories, so such a method likewise does not work for categorical parameters. In \cite{Ruiz2019LEARNING} and \cite{Behl2020AutoSimulate:}, reinforcement learning (RL) strategies are adopted to search for the target design parameters, with a bi-level optimization problem formulation. The lower level optimizes the variables in the downstream model, given the synthetic data generated with certain design parameter values, while the upper level optimizes the design parameter values, by maximizing the performance of downstream algorithms over a validation target dataset. RL techniques are applied to solve the aforementioned non-differentiable optimization. However, these methods rely on a set of labeled target data to form a validation dataset, which is very likely not available in real-world overhead imagery applications. Other RL-based methods \cite{Louppe2019Adversarial,Ganin2018Synthesizing,Mellor2019Unsupervised} do not rely on labeled target data; instead, adversarial learning strategies are applied. Discriminators are trained to measure the difference between the synthetic data and the target data. The design parameter values are then optimized by maximizing the capability to fool the discriminator, rather than by maximizing performance on a validation dataset. Therefore, these methods rely on only unlabeled target data for training the discriminator. However, as is discussed in \cite{Kar2019a}, learning with discriminators is known to suffer from mode collapse. Besides, due to the large sim-to-real gap in overhead imagery applications, discriminators can easily distinguish synthetic images from real-world target images, causing the learning of the parameters to fail due to vanishing gradients. There are some existing methods that are potentially applicable to overhead imagery. One such method is direct regression (DR) \cite{Du2021Auto-Tuned}, where a CNN model is used to directly infer design parameters from images. The model is trained using a set of synthetic images, paired with their corresponding design parameters. We include this method in our experiments because it is applicable, and to demonstrate that such a simple and straightforward approach does not work well on real-world overhead imagery. Another class of applicable approaches evaluates the difference between the synthetic data and the target data using hand-crafted distance metrics between synthetic and real-world imagery (as opposed to learned adversarial distances) \cite{Kar2019a,Devaranjan2020,Mansinghka2013Approximate,Kulkarni2015Picture:,Yildirim2015}. For example, Maximum Mean Discrepancy is used in \cite{Kar2019a} and feature distribution matching is used in \cite{Devaranjan2020}. Among these methods, MS2 \cite{Devaranjan2020} recently achieved state-of-the-art performance and therefore we include it in our experiments.
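To make the notion of a hand-crafted distance concrete, the following is a minimal sketch of a (biased) squared maximum mean discrepancy estimate with an RBF kernel, applied to feature vectors of synthetic and target images; the feature dimension, bandwidth, and data here are illustrative assumptions.
\begin{verbatim}
import numpy as np

def mmd2_rbf(X, Y, bandwidth=1.0):
    """Biased estimate of MMD^2 between samples X (n x p) and Y (m x p)
    under an RBF kernel: k(X,X).mean() + k(Y,Y).mean() - 2 k(X,Y).mean()."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * bandwidth ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

# Illustrative feature sets for synthetic (X) and target (Y) imagery:
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, (100, 32))
Y = rng.normal(0.5, 1.0, (120, 32))
print(mmd2_rbf(X, Y))   # larger values indicate a larger distribution gap
\end{verbatim}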
We use our experiments to demonstrate that both DR and MS2 suffer from unique limitations that undermine their performance on real-world overhead imagery. Additionally, MS2 exhibits computational properties that make it scale poorly to multiple target domains, which is discussed further in Sec. \ref{subsec:problem_setting_meta_simulation_2}. \subsection{The Neural Adjoint} \label{subsec:related_work_NA} The Neural-Adjoint (NA) is a recently-proposed method to solve ill-posed inverse problems \cite{Ren2020}, which is an extension of other similar methods (e.g., \cite{peurifoy2018nanophotonic,Gomez-Bombarelli2018}). Our meta-simulation task can be framed as an ill-posed inverse problem, where we observe some data, and then we must identify some hidden parameters (e.g., design parameters) that will give rise to the observed data. In \cite{Ren2020} the NA achieved superior overall performance compared to other contemporary approaches on a benchmark of nonlinear inverse problems, and therefore we adopt it here. \section{Problem Setting and Existing Methods} \label{sec:problem_setting} In this section we define the meta-simulation problem, and introduce some existing methods utilized in our experiments. In our context, the goal of meta-simulation is to train some type of supervised recognition model (e.g., segmentation, object detection) for overhead imagery, denoted $y = f_{\gamma}(x)$, where $x$ is some input image, $y$ are the labels corresponding to the imagery (e.g., pixel-wise labels, or object bounding boxes), and $\gamma$ represents the model parameters (e.g., weights of a DNN). We assume the availability of some set of labeled \textit{source domain} data $(X^{S},Y^{S}) = \{ (x^{S}_{i},y^{S}_{i})\} _{i=1}^{N^{S}}$ that can be used to infer $\gamma$ (i.e., train the task model). It is assumed that $x^{S}_{i} \sim p^{S}$, where $p^{S}$ is the distribution of the source domain imagery. We then wish to apply our trained model, $f_{\gamma}$, to a collection of unlabeled \textit{target domain} imagery $X^{T} = \{x^{T}_{i}\}_{i=1}^{N^{T}}$ where $x^{T}_{i} \sim p^{T}$, where $p^{T}$ is the target domain distribution. We assume in general that $p^{S} \neq p^{T}$ so that the trained model is likely to perform poorly when applied to predict the labels for any $x^{T}_{i} \in X^{T}$. This scenario arises frequently in remote sensing applications where we wish to apply a trained model to new imagery that was collected under different conditions than the training imagery: e.g., novel geographic locations, weather conditions, times-of-day, and imaging hardware. Recent research has indicated that such differences across image collections often cause significant reductions in the performance of DNNs, even when using a large and diverse training dataset \cite{Kong2020,Maggiori2017}. One strategy to overcome this problem is to generate a set of synthetic imagery that resembles the target domain imagery, and then use it to help train $f_{\gamma}$ so that it performs better when applied to $X^{T}$. More formally, assume that we have some set of synthetic data $(X^G,Y^G)=\{( x^{G}_{i}, y^{G}_{i}) \}_{i=1}^{N^{G}}$ that has been generated by some process, denoted $G(d,\zeta)$, where $(x^{G}_i,y^{G}_i) = G(d_{i},\zeta_{i})$ and $d_{i} \sim p^{d}$ and $\zeta_{i} \sim p^{\zeta}$.
Here $G$ refers to a simulation engine for generating synthetic imagery, and $d \in \mathcal{D}$ encodes the contents of the synthetic imagery that is provided to the simulator: e.g., the objects in the scene, their locations, and their appearance. We call $\mathcal{D}$ the design space, encompassing all scene content \textit{that we can control}. Note that $d$ may be composed of continuous or discrete values, or a mixture. The distribution $p^{d}$ therefore controls the properties of the synthetic data, and must be set by the designer. The vector $\zeta$ models the properties of the synthetic imagery that vary randomly each time synthetic imagery is generated, but that are not controlled by $d$. For simplicity we will often omit $\zeta$ when discussing $G$, since it is usually immaterial. Recent work has demonstrated that synthetic overhead imagery can be highly beneficial for training recognition models for overhead imagery \cite{Kong2020,Ward2018,Shermeyer2021,Xu,Hu2021,Han2017}, e.g., when training on an augmented dataset $(X^{S},Y^{S}) \cup (X^{G},Y^{G})$ \cite{Kong2020,Shermeyer2021,Hu2021,Han2017}. However, choosing $p^{d}$ is a costly and time-consuming task that greatly undermines the value of the synthetic data. \textit{The goal of meta-simulation is to automatically infer a setting of $p^{d}$ that maximizes the effectiveness of the resulting set of synthetic imagery, using only the unlabeled target domain data, $X^{T}$}. \subsection{Challenges and Modeling Strategies} \label{subsec:challenges_of_meta_simulation} Existing meta-simulation models generally work by trying to align the distribution of the synthetic imagery, denoted $p^{G}$, with the distribution of the target domain imagery, $p^{T}$. Mathematically, they generally try to solve the following optimization problem \begin{equation} \label{eq:meta_simulation_alignment} p^{d*} = \argmin_{p^d} \mathcal{L}(p^{G},p^{T}) \end{equation} where $p^{d*}$ is the distribution of $d$ that minimizes the difference, measured by $\mathcal{L}$, between the synthetic and target data distributions. Note that $x^{G} = G(d)$ where $d \sim p^{d}$, and therefore $p^{G}$ depends directly upon $p^{d}$, the distribution of $d$, which is controlled by the designer. This is a challenging optimization problem due to several factors. First, it is difficult to model $p^{G}$ and $p^{T}$ due to the high dimensionality of the imagery. Second, alignment of the distributions is difficult because $p^{G}$ depends upon $d$ through the function $G$, which is complex, non-differentiable, and potentially computationally costly to evaluate. Finally, some properties of the synthetic imagery vary randomly due to $\zeta$, making it unlikely to achieve a perfect alignment of distributions, and adding noise to any estimates of the distributions or their alignment. To simplify the meta-simulation problem, recent methods extract lower-dimensional features of the imagery using a pre-trained DNN \cite{Kar2019a,Devaranjan2020}, such as the ResNet model \cite{he2016deep}. In this case $x^{T}$, $x^{S}$, and $x^{G}$ above would represent feature vectors of imagery, rather than the original imagery. These substitutions can be made without any loss of generality in the problem description, or in the descriptions of the methods.
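For illustration, the following is a minimal Python sketch of this feature-extraction step; the ResNet-50 backbone here is a stand-in (our experiments use a ViT-L16 model, described in Sec. \ref{sec:experimental_design}), and the preprocessing constants are the standard ImageNet values.
\begin{verbatim}
import torch
import torchvision.models as models
import torchvision.transforms as T

# Frozen, pre-trained backbone used only as a fixed feature extractor; the
# 1000-D output of its final layer serves as the image feature vector.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.eval()

preprocess = T.Compose([
    T.Resize((384, 384)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(pil_images):
    """Return one 1000-D feature vector per image."""
    batch = torch.stack([preprocess(img) for img in pil_images])
    return backbone(batch)  # shape: (N, 1000)
\end{verbatim}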
\subsection{Meta-Sim2 (MS2)} \label{subsec:problem_setting_meta_simulation_2} MS2 \cite{Devaranjan2020} was proposed as an improvement to the Meta-Sim method \cite{Kar2019a}. MS2 models $p^{d}$ with different statistical assumptions depending on the type of variable in $d$; for example, the authors use a multinomial distribution for categorical entries of $d$, which has the advantage of continuous distribution parameters, denoted $\theta$. They then approximate $p^{G}$ and $p^{T}$ using kernel density estimation, with the approximations denoted by $\hat{p}^{G}$ and $\hat{p}^{T}$, respectively. The authors formulate the optimization in Eq. (\ref{eq:meta_simulation_alignment}) as \begin{equation} \label{eq:MS2} \min_{\theta} \textit{KL}[\hat{p}^{T}||\hat{p}^{G}], \end{equation} or, equivalently, \begin{equation} \label{eq:MS2_2} \min_{\theta} \mathbb{E}_{x^G \sim p^{G}} [\log \hat{p}^{G}(x^G) - \log \hat{p}^{T}(x^G)], \end{equation} where $\mathbb{E}$ represents the expectation operator, and $\textit{KL}$ represents the Kullback-Leibler divergence between the two distributions. Using the relationship that $x^G=G(d)$ with $d \sim p^{d}$, we can rewrite the loss as \begin{equation} \label{eq:MS2_3} \min_{\theta} \mathbb{E}_{d \sim p^{d}} [\log \hat{p}^{G}( G(d) ) - \log \hat{p}^{T}( G(d) )]. \end{equation} This objective is still difficult to optimize because of the (non-differentiable) simulation engine, $G$, present in its computation. To optimize this objective the REINFORCE estimator is utilized, whereby the gradient of the loss in Eq. (\ref{eq:MS2_3}) is approximated with \begin{multline} \label{eq:MS2_4} \nabla_{\theta}\mathcal{L} \approx \\ \frac{1}{N} \sum_{i=1}^N \left(\log \hat{p}^{G}( G(d_{i}) ) - \log \hat{p}^{T}( G(d_i)) \right) \nabla_{\theta} \log p^{d}(d_{i}). \end{multline} In \cite{Devaranjan2020} the authors use $N=500$ to obtain an accurate estimate of the gradient, which requires $N$ simulations with $G$, with $d \sim p^{d}$ drawn using the current estimate of $\theta$. Once the gradient is obtained, it is used to adjust $\theta$, and kernel density estimation is used to re-estimate $\hat{p}^{G}$. This process is repeated until convergence of the loss in Eq. (\ref{eq:MS2_3}); in \cite{Devaranjan2020} the authors used 200 iterations. A major drawback of MS2 is its computation time, due to the number of simulations that must be run per iteration of gradient descent. Furthermore, this process must be repeated each time meta-simulation must be performed (e.g., if a new domain is encountered). For this reason MS2 scales poorly with the number of meta-simulation runs. These problems are further exacerbated if the simulator is relatively slow, as in the case of overhead imagery. Collectively, these problems make MS2 computationally intensive for overhead imagery applications. In Section \ref{sec:experiment_computation_time} we provide a comparison of computation time between the meta-simulation methods considered in this work. One other important limitation of this approach is its dependence upon kernel density estimates for $\hat{p}^{T}$, which require a large number of samples of real imagery from the target domain (e.g., $N=500$ in \cite{Devaranjan2020}), which may not always be available due to the cost of overhead imagery.
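For concreteness, the following is a hedged sketch of a single REINFORCE step for Eq. (\ref{eq:MS2_4}), for one categorical design parameter; \texttt{simulate}, \texttt{log\_p\_G}, and \texttt{log\_p\_T} are stand-ins for the simulator-plus-feature-extractor and the two kernel density estimates.
\begin{verbatim}
import torch

K, N = 40, 500                  # design categories; samples per gradient step
theta = torch.zeros(K, requires_grad=True)  # logits of the multinomial p^d

def reinforce_step(simulate, log_p_G, log_p_T, lr=0.01):
    dist = torch.distributions.Categorical(logits=theta)
    d = dist.sample((N,))                    # d_i ~ p^d
    with torch.no_grad():
        x = simulate(d)                      # x_i = G(d_i); non-differentiable
        score = log_p_G(x) - log_p_T(x)      # per-sample KDE log-density gap
    # Surrogate whose gradient matches the REINFORCE estimator:
    # score_i * grad_theta log p^d(d_i), averaged over the N samples.
    surrogate = (score * dist.log_prob(d)).mean()
    surrogate.backward()
    with torch.no_grad():
        theta -= lr * theta.grad             # simple gradient-descent update
        theta.grad.zero_()
\end{verbatim}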
\subsection{Direct Regression (DR)} \label{subsec:problem_setting_direct_regression} Another way to solve Eq. (\ref{eq:meta_simulation_alignment}) is to infer the appropriate design for each $x^{T}_{i} \in X^T$ independently, resulting in a collection of designs, given by $\hat{D}^T=\{\hat{d}_{i}(x_{i}^{T})\}$, where each $\hat{d}_{i} \in \hat{D}^T$ corresponds to one target domain instance. The designs in $\hat{D}^T$ can then be used to estimate $p^{d}$, or be sampled directly to generate synthetic training imagery. This is the approach taken by direct regression (DR), which has been previously applied in \cite{Du2021Auto-Tuned}. In direct regression we train a model of the form $\hat{d} = f_{\lambda}^{\textit{DR}}(x)$ that directly predicts the design of a given image. This model is trained using a collection of $N^{\textit{DR}}$ randomly-sampled synthetic training images, $D^{\textit{DR}} = \{(G(d_i),d_i) \}_{i=1}^{N^{\textit{DR}}}$, where $d_{i} \sim U$. Here $U$ is a uniform distribution over $\mathcal{D}$, the domain of $d$, so that $D^{\textit{DR}}$ is representative of all possible designs, and therefore the trained regression model will be accurate across the design space. The regression model is trained to satisfy the following objective \begin{equation} \label{eq:direct_regression_optimization} \min_{\lambda} \mathbb{E}_{d \sim U} [\mathcal{L}( f_{\lambda}^{\textit{DR}}(G(d)), d)], \end{equation} where $\mathcal{L}$ is some loss function. In this work we use the mean-squared error for the loss, and we estimate the expectation in Eq. (\ref{eq:direct_regression_optimization}) using $D^{\textit{DR}}$. As discussed in Sec. \ref{subsec:challenges_of_meta_simulation}, $x$ in this case represents features of imagery extracted from a pre-trained DNN, rather than raw imagery, to reduce the dimensionality of the problem. We use a DNN for $f^{\textit{DR}}_{\lambda}$. This approach has the advantage that it relies upon training a standard regression model, making the training process relatively simple and fast. Furthermore, it only requires one initial collection of synthetic imagery for training, rather than requiring large quantities of simulation each time a new domain is encountered, as MS2 does. However, this approach has the limitation that it can make highly inaccurate predictions if the real-world target imagery is visually novel compared to the synthetic training imagery \cite{Du2021Auto-Tuned}. This can occur, for example, if there are (low-level) visual domain gaps between the synthetic and real-world imagery, or if the real-world imagery contains content that cannot be instantiated with the simulator, $G$.
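A minimal sketch of this DR training loop is shown below, assuming features have already been extracted; the network here is a shallow stand-in for the 5-layer architecture used in our experiments (Sec. \ref{sec:experimental_design}), and the continuous uniform sampling is an illustrative relaxation of sampling one-hot designs from $U$.
\begin{verbatim}
import torch
import torch.nn as nn

feat_dim, design_dim = 1000, 84
f_dr = nn.Sequential(                              # shallow stand-in regressor
    nn.Linear(feat_dim, 500), nn.BatchNorm1d(500), nn.ReLU(),
    nn.Linear(500, design_dim),
)

def train_dr(simulate_features, steps=1000, batch=256, lr=1e-3):
    opt = torch.optim.Adam(f_dr.parameters(), lr=lr)
    for _ in range(steps):
        d = torch.rand(batch, design_dim)          # d ~ U (continuous stand-in)
        x = simulate_features(d)                   # features of G(d)
        loss = nn.functional.mse_loss(f_dr(x), d)  # MSE form of the DR objective
        opt.zero_grad()
        loss.backward()
        opt.step()
\end{verbatim}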
\section{Neural-Adjoint Meta-Simulation (NAMS)} \label{sec:nams_description} Neural-Adjoint Meta-Simulation (NAMS) combines the advantages of DR and MS2 into a single model. Similar to DR, NAMS only needs to be trained once, after which it can be applied to rapidly infer designs for new target domains. NAMS also only requires a few target domain images to infer a design (e.g., we use 9 images in our experiments). However, NAMS is more robust than DR to visual gaps between the synthetic and real-world imagery, leading to much better design inference on real-world problems. As we show in Sec. \ref{sec:experimental_results}, DR can fail unpredictably and dramatically, while MS2 and NAMS perform more reliably. \begin{figure} \centering \includegraphics[height=8.5cm]{imgs/NAMS.png} \caption{Overview of NAMS in two stages: in the training stage, mappings between the continuous embedding $z$ and the design $d$ are learned by the networks $E$ and $D$, together with an adjoint neural network $P$, using the synthetic data $d$ and $x^G(d)$; in the searching stage, the target continuous embedding $z^*$ is found based on the target data $x^{T}$, and the corresponding design $d^*$ is reconstructed.} \label{fig:NAMS} \end{figure} \textbf{Model Overview.} A diagram of NAMS is presented in Fig. \ref{fig:NAMS} (top), where the functions $E$, $P$, and $D$ are all DNNs with parameters given by $\theta_{E}$, $\theta_{P}$, and $\theta_{D}$, respectively. A major obstacle of meta-simulation is the black-box graphics simulator, which makes it difficult to optimize Eq. (\ref{eq:meta_simulation_alignment}). NAMS addresses this issue by using two DNNs to model the simulator, given by $\hat{x} = P(E(d))$ in Fig. \ref{fig:NAMS}. With a DNN-based simulator we can perform gradient descent with respect to $d$ to minimize the visual differences between our simulated imagery and a specific target domain image. Similar to DR, we can identify the design for each available target domain image, resulting in a collection of estimated designs that can then be used to construct $p^{d}$ and minimize the meta-simulation objective in Eq. (\ref{eq:meta_simulation_alignment}). A second challenge of meta-simulation is the presence of discrete or non-ordinal entries of $d$, which can undermine gradient descent. To mitigate this problem, we learn a \textit{continuous} embedding of the designs, denoted $z$, such that similar embeddings yield similar-looking synthetic imagery. We use a stochastic embedding, where $z$ is sampled from a normal distribution with parameters that depend upon $d$. Mathematically, we have $z \sim \mathcal{N}(\mu,\sigma)$ where $(\mu,\sigma) = E(d)$. In the NAMS model, we use $z = \mu + e \odot \sigma$, where $e \sim p^{e}$ is a standard normal random variable with diagonal covariance and the operator $\odot$ performs element-wise multiplication of two vectors, following \cite{kingma2013auto}. The randomness of $z$ is intended to model visual features of the synthetic and real imagery that are not controlled by $d$, and therefore vary across imagery even while $d$ is fixed (e.g., the $\zeta$ parameter in Sec. \ref{sec:problem_setting}). When inferring the setting of $d$ that corresponds to some target image, we first perform gradient descent with respect to $z$. We then decode the optimized value, $z^{*}$, using $D$ to get $d^{*}$, as illustrated in Fig. \ref{fig:NAMS} (bottom). We next present the technical details of NAMS.
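First, as a concrete illustration, a minimal PyTorch-style sketch of this stochastic embedding is shown below; the layer sizes are illustrative (the exact architecture is given in Sec. \ref{sec:experimental_design}).
\begin{verbatim}
import torch
import torch.nn as nn

class DesignEncoder(nn.Module):
    """E: design d -> (mu, sigma); z = mu + e * sigma with e ~ N(0, I)."""
    def __init__(self, design_dim=84, z_dim=10):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(design_dim, 1024), nn.ReLU(),
                                  nn.Linear(1024, 2048), nn.ReLU())
        self.head = nn.Linear(2048, 2 * z_dim)   # outputs mu and log(sigma)

    def forward(self, d):
        mu, log_sigma = self.head(self.body(d)).chunk(2, dim=-1)
        sigma = log_sigma.exp()
        e = torch.randn_like(sigma)    # e ~ p^e, standard normal
        z = mu + e * sigma             # element-wise reparameterization
        return z, mu, sigma
\end{verbatim}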
\textbf{Model Training.} During training we minimize the following overall loss \begin{equation} \label{eq:nams_overall_loss} \mathcal{L}_{\textit{NAMS}} = \lambda_{P} \mathcal{L}_{P} + \lambda_{D} \mathcal{L}_{D} + \lambda_{\textit{KLD}} \mathcal{L}_{\textit{KLD}}, \end{equation} where $\lambda_{P}$, $\lambda_{D}$, and $\lambda_{\textit{KLD}}$ represent the weight of each loss term during training. The first loss, $\mathcal{L}_{P}$, encourages the model $\hat{x}=P(E(d))$ to make accurate predictions of the image features $x^G(d)$, given some design for the imagery. Mathematically, we have \begin{equation} \label{eq:nams_predictor_loss} \mathcal{L}_{P}(\theta_{E},\theta_{P}) = \mathbb{E}_{d \sim U, \zeta \sim p^{\zeta}, e \sim p^{e}} [ \mathcal{L}(P(E(d)),G(d,\zeta))], \end{equation} where $\zeta$ is the random noise vector used as input to the true simulator, $G$. Here $\mathcal{L}$ represents some measure of error, and we use the mean-squared error. Since both $E$ and $P$ are DNNs, we can infer their parameters using gradient descent. The second loss, $\mathcal{L}_{D}$, encourages the model to accurately decode the latent vector $z$ into its corresponding design. Mathematically we have \begin{equation} \label{eq:nams_decoder_loss} \mathcal{L}_{D}(\theta_{E},\theta_{D}) = \mathbb{E}_{d \sim U, e \sim p^{e}} [ \mathcal{L}(D(E(d)),d)]. \end{equation} Again, since $E$ and $D$ are both DNNs, this loss can be minimized via gradient descent. The third loss, $\mathcal{L}_{\textit{KLD}}$, encourages the design embeddings to be smooth and independent. Mathematically we have \begin{equation} \label{eq:nams_kl_loss} \mathcal{L}_{\textit{KLD}}(\theta_{E}) = \mathbb{E}_{d \sim U, e \sim p^{e}} [ \textit{KL}(p^{z}||p^{e})], \end{equation} where $p^{e}$ is a multivariate normal distribution with zero mean and diagonal covariance. In all three loss equations above, we replace the expectation operators with sample estimators when optimizing on real-world data. In Section \ref{sec:experimental_design} we discuss the specific implementation details of the DNNs in NAMS, and our hyperparameter settings.
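A hedged sketch of this combined objective is shown below, assuming the encoder sketched earlier together with decoder and predictor networks $D$ and $P$; the closed-form KL term is the standard expression for diagonal Gaussians, and the weights are illustrative.
\begin{verbatim}
import torch.nn.functional as F

def nams_loss(E, D, P, d, feats, lam_P=1.0, lam_D=1.0, lam_KLD=1e-3):
    """d: sampled designs; feats: features of G(d, zeta) for those designs."""
    z, mu, sigma = E(d)                       # stochastic embedding of d
    loss_P = F.mse_loss(P(z), feats)          # predictor loss (feature error)
    loss_D = F.binary_cross_entropy(D(z), d)  # decoder reconstruction loss
    # KL( N(mu, diag(sigma^2)) || N(0, I) ), closed form for diagonal Gaussians:
    loss_KLD = 0.5 * (sigma**2 + mu**2 - 1.0
                      - (sigma**2).log()).sum(-1).mean()
    return lam_P * loss_P + lam_D * loss_D + lam_KLD * loss_KLD
\end{verbatim}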
\textbf{Inference of Designs for Data Generation.} In the NAMS searching stage, we use the predictor $P$ to solve the optimization in Eq. (\ref{eq:meta_simulation_alignment}) in $z$ space, similar to the DR method, by inferring the appropriate design for each $x^{T}$ independently, as shown in Fig. \ref{fig:NAMS}. With the trained predictor, $P$, we can directly predict synthetic image features $\hat{x}=P(z) \approx x^G(d)$ given the embedding $z$. We can therefore substitute this into Eq. (\ref{eq:meta_simulation_alignment}), so that the searching optimization can be rewritten as: \begin{equation} \label{eq:na} \min_{z}\mathcal{L}_{S}[P(z), x^T]. \end{equation} Note that $P$ is a differentiable neural network. As a result, the optimization can be efficiently solved using a gradient-descent-based searching method. For this, we use the recently-proposed Neural-Adjoint approach \cite{Ren2020}, which freezes the network weights of $P$, treats the embedding $z$ as the free parameters, and then performs back-propagation with respect to the loss $\mathcal{L}_S$ to solve the optimization problem in Eq. (\ref{eq:na}). \textbf{Initialization and Majority Voting.} Due to the non-linearity of the problem, the search space is likely to contain many local minima. Hence, for each target image, we search from $M$ random initial points, uniformly distributed within a range of $\pm k \sigma_j$ in the $j^{th}$ dimension of the $z$ space, and perform majority voting among the $S$ search results with the smallest losses, which are considered near the global minimum. The hyper-parameters are set to $M=1000$, $k=3$, and $S=50$ for all of our experiments, and $\sigma_j$ is estimated as the standard deviation of the training data in the $j^{th}$ dimension of $z$. Together, the optimization that we solve during the search phase becomes: \begin{equation} \label{eq:z} \begin{split} &z^* = V_S[\{z^{(i)*}\}_{i=1}^M], \\ &\text{s.t.} \quad z^{(i)*} = \argmin_{z^{(i)}} L_2[P(z^{(i)}) - x^{T}], \\ \end{split} \end{equation} where the element-wise $L_2$-norm is used to measure the distance in feature space, and $V_S$ stands for the majority voting operation among the $S$ best results. Once we infer $z^*$, we can use the decoder to obtain the corresponding design, $d^*=D(z^*)$. This searching process is repeated for each individual real-world target image, resulting in a dataset of optimized $d^*$ values, which we can use as a population of designs with which to simulate data. Similarly, we can treat the population of designs as a sampling distribution for designs of our synthetic training data for a downstream task. \textbf{Augmentation and Feature Averaging.} To reduce the impact of the randomness caused by $\zeta$, data augmentation and feature averaging can be applied. In all of our experiments, both in the training and searching stages, we take 9 images with the same design $d$ to form a mini-batch each time. For each image, we create 8 augmented images with four rotations $\{0^{\circ},90^{\circ},180^{\circ},270^{\circ}\}$ and a horizontal flip. Then, we average the features of these 72 images in the mini-batch. Hence, $x^{T}$, $x^{S}$, and $x^{G}$ above represent the average features of 72 images with the same design $d$. \textbf{Diversify with Rejection.} In real applications, we randomly discard some proportion of the optimized designs and replace them with uniformly sampled designs (the default non-meta-simulation design approach). This has the effect of diversifying the simulated training data for the downstream task, and mitigates the risk of over-fitting to (possibly erroneous) $z^*$ estimates. To achieve this, we use a weight $r \in (0,1)$ to control the probability of rejecting the selected design and replacing it with a uniformly sampled design, which increases the probability of obtaining rare designs. The sampling method can be described as follows: \begin{equation} \label{eq:d} d^* = \begin{cases} D(z^*) , & u\geq r\\ d \sim U, & u < r \end{cases}, \end{equation} where $u$ is a uniformly sampled value in $(0,1)$, and $U$ again stands for a uniform distribution over $\mathcal{D}$. We summarize the overall algorithm of the proposed NAMS method in Algorithm \ref{alg:1}. \begin{algorithm}[h!] \caption{The NAMS training and searching stages} \label{alg:1} \begin{algorithmic} \IF{Training stage} \STATE Input a training dataset $\{(d_i,x^G_{i}(d_i) ) \}_{i=1}^{N^G}$. \STATE Do augmentation, extract features, and do averaging. \STATE Set weights $\lambda_D$, $\lambda_{KLD}$, and $\lambda_P$. \STATE Train \textit{E}, \textit{D}, \textit{P} with the optimization in Eq. (\ref{eq:nams_overall_loss}). \ENDIF \IF{Searching Stage} \STATE Input a real-world target dataset $X^{T}$. \STATE Set hyper-parameter $r$. \STATE Do augmentation, extract features, and do averaging. \FOR{Each $x^{T} \in X^{T}$} \STATE Search for the design embedding $z^*$ by Eq. (\ref{eq:z}). \STATE Obtain the optimal design $d^*$ by Eq. (\ref{eq:d}). \ENDFOR \STATE Generate a synthetic dataset $X^{G*}$ using all $d^*$ values in the simulator. \ENDIF \STATE Train the downstream task algorithm using $X^{G*}$. \end{algorithmic} \end{algorithm}
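The search stage can be sketched as follows; the rounding-plus-mode-counting implementation of the voting operator $V_S$ and the uniform replacement design are illustrative stand-ins for our exact implementation, not definitive choices.
\begin{verbatim}
import torch

def search_design(P, D, x_target, sigma, M=1000, k=3.0, S=50, steps=300, r=0.2):
    # M random starts, uniform within +/- k*sigma_j in each dimension of z.
    z = (torch.rand(M, sigma.numel()) * 2 - 1) * k * sigma
    z.requires_grad_(True)
    opt = torch.optim.Adam([z], lr=1e-2)
    for _ in range(steps):                    # back-prop w.r.t. z; P is frozen
        opt.zero_grad()
        loss = ((P(z) - x_target) ** 2).sum(dim=1)
        loss.sum().backward()
        opt.step()
    with torch.no_grad():
        final = ((P(z) - x_target) ** 2).sum(dim=1)
        best = final.argsort()[:S]            # S lowest-loss local optima
        designs = D(z[best]).round()          # decode and snap toward one-hot
        uniq, counts = designs.unique(dim=0, return_counts=True)
        d_star = uniq[counts.argmax()]        # majority vote V_S
    if torch.rand(()) < r:                    # rejection step: diversify
        d_star = torch.rand_like(d_star)      # stand-in for d ~ U over designs
    return d_star
\end{verbatim}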
\section{Experimental Datasets} \label{sec:experimental_datasets} We evaluate three meta-simulation methods on satellite imagery for the task of building segmentation. Building segmentation is a challenging and widely-studied problem in remote sensing \cite{Huang2018,Maggiori2017,Demir2018}, making it a useful and representative problem on which to examine meta-simulation for overhead imagery. To support these experiments, we construct both real-world and synthetic overhead imagery datasets, as described next. \subsection{Real-world Overhead Imagery.} We obtain real-world overhead imagery labeled with building footprints from DeepGlobe (DG) \cite{Demir2018} and Inria \cite{Maggiori2017}. These two popular benchmark datasets for building footprint segmentation feature $0.3m$/pixel resolution RGB imagery collected over nine diverse cities across the world, encompassing $713\,km^2$ of surface area. Together they provide a challenging and contemporary set of real-world overhead imagery for testing both our task models and our meta-simulation models. In our experiments, all of the images are cropped to the same size, $x\in \mathbb{R}^{384\times384}$. \subsection{Synthetic Overhead Imagery.} We create synthetic overhead imagery $x\in \mathbb{R}^{384\times384}$ using the simulator from \cite{Kong2020}, the only research-accessible simulator of this kind available to us. First, we procedurally generate virtual cities with numerous features (e.g., road network topology, building types, etc.). The content of each virtual city depends upon a large number of parameters. Some parameters are set to fixed values (e.g., lighting conditions), while others are sampled from distributions (e.g., object types and density). Detailed parameter settings can be found in \cite{Kong2020}. Then, synthetic satellite imagery is captured using a virtual camera (with $0.3m$/pixel resolution) overlooking the virtual city. The simulation requires nearly 1.6 seconds per synthetic image on our hardware, similar to the speed stated in \cite{Kong2020}. \section{Experimental Design} \label{sec:experimental_design} In our experiments, we assume that we have access to a set of real-world overhead imagery labeled with building footprints for training building segmentation models. We use the DG dataset as this labeled source domain data, $(X^{S},Y^{S})$. Our goal is to apply the models to novel unlabeled target datasets, which may be visually dissimilar to the labeled source imagery (i.e., the DG imagery in our case) used to train our task models. Due to the large difference between the training and target datasets, we use synthetic imagery to augment the training of the downstream building segmentation models. The ratio of real-world training images to synthetic augmenting images is set to 6:1, following \cite{Kong2020}. To improve the efficacy of the synthetic images, our meta-simulation problem aims to infer simulator design parameters that result in synthetic imagery that is most visually similar to the unlabeled target imagery.
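A minimal sketch of this 6:1 dataset mixing is shown below; the dataset objects are stand-ins for lists of (image, label) pairs.
\begin{verbatim}
import random

def mix_datasets(real_pairs, synthetic_pairs, ratio=6):
    """Combine real and synthetic (image, label) pairs at a ratio:1 mix."""
    n_syn = len(real_pairs) // ratio          # 6 real images per synthetic one
    mixed = list(real_pairs) + random.sample(list(synthetic_pairs), n_syn)
    random.shuffle(mixed)
    return mixed
\end{verbatim}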
In our experiments, we infer two design parameters, $d_{flat} \in [0,1]^{40}$ and $d_{slope} \in [0,1]^{44}$, which control the rooftop textures of (i) flat-roofed buildings and (ii) sloped-roof buildings, respectively. For simplicity, we infer only the dominant rooftop textures in each target image; in other words, only one $d_{flat}$ value and one $d_{slope}$ value are inferred per target image. For consistency, in the simulator, we use the same $d_{flat}$ for all flat-roofed buildings and the same $d_{slope}$ for all sloped-roof buildings in each virtual city. The rooftop textures accessible to the model are shown in the left column of Fig. \ref{fig:exmaples_textures}. We choose these two parameters because we hypothesize that building color and texture have a strong influence on the building segmentation task. We use a one-hot encoding for these two parameters, making them challenging categorical and non-ordinal variables to infer: e.g., the roof texture at index one may be much more similar to the texture at index forty than to the texture at index two. Because we use building rooftop textures as the design parameters of interest, we only include target imagery with at least 10,000 building pixels ($6.78\%$ of the total) in our test set. \begin{figure} \centering \includegraphics[height=5.6cm]{imgs/exp_texture.png} \caption{Rooftop textures of flat-roofed and sloped-roof buildings used in the experiments.} \label{fig:exmaples_textures} \end{figure} \subsection{Experimental Scenarios} As mentioned in Sec. \ref{subsec:introduction_contributions}, we examine the impact of sim-to-real design and appearance gaps on meta-simulation methods with three increasingly-challenging target scenarios, as shown in Fig. \ref{fig:experimental_design_illustration} and summarized in Table \ref{table:experimental_design_outline}. \begin{figure} \centering \includegraphics[height=5.6cm]{imgs/experimental_design_illustration_v2.PNG} \caption{Illustration of the relationship between the different sets of imagery employed in our meta-simulation experiments. For example, E1 denotes the set of target imagery employed for Experiment 1.} \label{fig:experimental_design_illustration} \end{figure} \setlength{\tabcolsep}{4pt} \begin{table} \begin{center} \caption{Experimental Design Outline} \label{table:experimental_design_outline} \begin{tabular}{ccccc} \hline \begin{tabular}[c]{@{}c@{}}Experiment \\ ID\end{tabular} & \begin{tabular}[c]{@{}c@{}}Training Domain \\ Data, $(X^{S},Y^{S})$ \end{tabular} & \begin{tabular}[c]{@{}c@{}}Target Domain \\ Data, $X^{T}$ \end{tabular} & \begin{tabular}[c]{@{}c@{}} Appearance \\ Gap? \end{tabular} & \begin{tabular}[c]{@{}c@{}} Design \\ Gap? \end{tabular} \\ \hline E1 & Synthetic & Synthetic & No & No \\ E2 & Synthetic & Synthetic & No & Yes \\ E3 & Real & Real & Yes & Yes \\ \hline \end{tabular} \end{center} \end{table} \textbf{E1: Synthetic Target Data with No Design Gap.} In the first scenario, we use imagery generated from our simulator, with no design gap, as target imagery. In other words, the target scene designs are accessible to the meta-simulator, and hence the meta-simulator is able to perfectly match the target scene designs. This scenario is the easiest, and although it is unrealistic, it serves as a useful baseline for analyzing the impact of adding more realistic conditions.
To increase the reliability of the experiment, we perform repeated trials using 4 target datasets, each containing 900 images collected from a synthetic city with a randomly sampled pair of $d_{flat}$ and $d_{slope}$ values that are accessible to the meta-simulator. \textbf{E2: Synthetic Target Data with a Design Gap.} In the second scenario, we again use synthetic target imagery; however, it is constructed using designs that are not accessible to the meta-simulator, meaning a design gap exists. In practice it is unlikely that we will have access to all designs, so the meta-simulator will not be able to find the exact design in the target imagery, but a good meta-simulator should output a design that matches the target imagery as closely as possible. Similarly, we perform four repeated trials, in which each target dataset contains 900 images generated from a virtual city with a pair of unseen $d_{flat}$ and $d_{slope}$ values (shown in the right column of Fig. \ref{fig:exmaples_textures}). \textbf{E3: Real-world Target Data.} The third scenario, with real-world target imagery, is the most challenging but also the most realistic. Both design and appearance gaps exist between the meta-simulator and the real-world target imagery, and we expect the meta-simulator to output the design that matches the target imagery most closely. The third target dataset is the Inria dataset, with 3600 real-world images per city. We assume that the ground-truth labels of the Inria data are not accessible to the meta-simulation models or the downstream building segmentation models during training; the labels are used only for evaluating the overall performance of these models. \textbf{E4: Computation Time Comparisons.} The computation time of the NAMS and MS2 methods is evaluated on our standard hardware, an Intel(R) Core(TM) i7-7700HQ CPU@2.80 GHz with an NVIDIA RTX 2080Ti GPU. Note that, for the MS2 method, we cannot afford the computational cost of the full training process in \cite{Devaranjan2020}; hence, we estimate the cost of its simulation process based on the number of required simulations and the cost of simulating a single image. \subsection{Meta-Simulation Training and Inference} For all meta-simulation models, we use a pre-trained ViT-L16 \cite{Dosovitskiy2020} model to extract 1000-D features $x^{T}$, $x^{S}$, and $x^{G}$ from images. For fair comparisons, we apply the same augmentation and feature-averaging strategy used in the NAMS method (averaging the features of 72 augmented images with the same design, as discussed in Sec. \ref{sec:nams_description}) to all of the other meta-simulation methods. \textbf{NAMS.} In the NAMS model, the encoder $E$, decoder $D$, and predictor $P$ networks are composed of fully-connected layers. The encoder $E$ takes an 84-D vector (a concatenation of $d_{flat} \in [0,1]^{40}$ and $d_{slope} \in [0,1]^{44}$) as input, and it consists of two fully-connected layers with 1024 and 2048 hidden units, respectively, each followed by a batch normalization layer, a leaky ReLU layer with a slope of 0.2, and a dropout layer with a dropout rate of 0.5. We use a 10-D vector as the continuous design embedding, $\boldsymbol{z}$, to support the richer design space. Hence, another fully-connected layer with 20 hidden units is used as the output layer, with 10 of the output dimensions serving as $\boldsymbol{\mu}$ and the other 10 as $\boldsymbol{\sigma}$ for sampling the continuous embedding $\boldsymbol{z}$. We use the same structure in the decoder.
The only difference is that the decoder takes the 10-D vector $\boldsymbol{z}$ as input and outputs an 84-D vector $\boldsymbol{d}$, with a sigmoid function in the output layer. The binary cross-entropy loss is used for the reconstruction loss of $\boldsymbol{d}$ in Eq. (\ref{eq:nams_decoder_loss}). We use the same structure in the predictor model, with the 10-D vector $\boldsymbol{z}$ as input and a 1000-D feature vector as output. We train the NAMS model using a synthetic dataset $(X^{G},Y^{G})$. To form this training dataset, we uniformly sample 1700 pairs of $d_{flat}$ and $d_{slope}$ values and create 9 synthetic images per pair, resulting in approximately 15,000 synthetic images. We train the NAMS model on the $(X^{G},Y^{G})$ dataset only once, and apply the same trained model to the target datasets in E1 to E3 for design parameter inference. Additional model training details can be found in the Appendix for each experiment. \textbf{DR.} In the DR model, a network consisting of 5 fully-connected hidden layers (500 hidden units per layer, each followed by batch normalization and ReLU layers) is used to predict the 84-D design vector from the 1000-D input features. It is trained with a regularizer strength of 0.5, a learning rate of 1e-3, a batch size of 1000, and a train/test split of 80/20, using the same training dataset $(X^{G},Y^{G})$ as NAMS. The model architecture and training hyper-parameters were selected by brute-force grid search. \textbf{MS2.} The MS2 method does not separate the training and inference stages; it searches for design parameters in each target dataset from scratch. The MS2 search requires 100,000 synthetic images for each target dataset (approximated based on \cite{Devaranjan2020}), which would cost 1.8 days with this slow simulator. To search all 13 target datasets in our experiments from E1 to E3, MS2 would cost an unaffordable 23.4 days, not counting the extra searching required for hyper-parameter tuning and the additional (8 times more) images needed for feature averaging. Hence, we run the simulator for about one month and generate a pool of 464,640 synthetic images, with 264 images for each design, which is much larger than the training dataset used for DR and NAMS. The synthetic images used in MS2 inference are drawn from this pool with replacement. We use the same model architecture and training strategies described in \cite{Devaranjan2020} for the best performance. We only change the input and output data dimensions and adjust the learning rate, to ensure that training converges within 20 epochs, similar to \cite{Devaranjan2020}. \subsection{Task Models and Training Details} The parameter values inferred by the meta-simulation methods are used in the simulator to generate the optimized synthetic data $(X^{G*},Y^{G*})$ for augmenting the training of the downstream task. We then train two contemporary networks for segmentation of satellite imagery, U-Net \cite{Iglovikov2018} and DeepLabV3 \cite{Chen2017}, which represent popular versions of general architectures for segmentation: the encoder-decoder structure and the feature pyramid structure, respectively. For the best performance, following \cite{Kong2020}, we use a pre-trained ResNet-50 encoder for both models, and we train the networks for 80,000 mini-batch iterations with a batch size of 7 and learning rates of 5e-5 and 1e-4 for the DeepLabV3 and U-Net models, respectively.
For both networks we drop the learning rate by one order of magnitude after 50,000 iterations of training. \subsection{Scoring Metrics} We measure the performance of the three meta-simulation methods with two kinds of metrics. \textbf{Accuracy of Design Inference.} In experiments E1 and E2, we know the true design of the target domain imagery. Hence, if we rank-order the incorrect textures according to their similarity to the true texture, we can evaluate whether the meta-simulation method recovers one of the nearest $n_{top}$ textures. For these experiments we use a simple measure of similarity between two image patches \cite{Belongie1998} in HSV color space, to rank the nearest textures with $n_{top}=1$ and $n_{top}=10$. Note that, in E1, the target texture is accessible to the simulator; hence, the nearest $n_{top}=1$ texture is exactly the target texture. In experiment E3, with real-world images, we visually compare the selected synthetic imagery and the real-world target imagery. \textbf{Performance of a Task Model.} We then examine the impact of synthetic imagery augmentation on training downstream task models. We train two building segmentation models, U-Net and DeepLabV3, with six augmentation strategies for each experiment from E1 to E3: (i) real satellite images only, with no augmentation; (ii) augmentation with synthetic images from uniform sampling; (iii) augmentation with synthetic images selected by DR; (iv) augmentation with MS2; (v) augmentation with NAMS; and (vi) augmentation with the ground truth target domain images. We then measure the performance of the segmentation models using the Intersection-over-Union (IoU) metric based on the ground truth labels.
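For reference, a minimal sketch of the IoU computation for binary building masks is shown below.
\begin{verbatim}
import numpy as np

def iou(pred_mask, gt_mask):
    """Intersection-over-Union of two boolean masks of equal shape."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return inter / union if union > 0 else 1.0
\end{verbatim}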
"GT" stands for the augmentation with the ground truth target domain images.} \label{table:task_performance_synthetic} \begin{tabular}{ccccccc} \hline\noalign{\smallskip} Model & No & Uniform & DR & MS2 & NAMS(ours) & GT \\ \hline U-Net & 47.8 & 77.6 & \textbf{83.4} & 79.1 & 81.8 & 84.3 \\ DeepLabV3 & 34.8 & 80.5 & \textbf{85.8} & 81.0 & 85.2 & 86.7 \\ \hline \end{tabular} \end{center} \end{table} \section{Experimental Results} \label{sec:experimental_results} In this section we present the results of the experiments described in Sec \ref{sec:experimental_design}, and summarized in Table \ref{table:experimental_design_outline}. We also report the estimated computation time of all of the methods we consider. \subsection{E1: Synthetic Target Data with No Design Gap } \label{subsec:experimental_results_e1} \setlength{\tabcolsep}{4pt} \begin{table*}[!t] \begin{center} \caption{E2 results: accuracy of design inference. The average rates of correctly selecting one of the $n_{top}$ nearest flat-roofed/sloped-roof textures to the target textures among 4 repeated trails in E2 are shown. } \label{table:Synthetic_result_out} \begin{tabular}{cccccccccc} \hline\noalign{\smallskip} & \multicolumn{4}{c}{Top-1} & & \multicolumn{4}{c}{Top-10}\\ \cline{2-5} \cline{7-10} & Uniform & DR & MS2 & NAMS(ours) & & Uniform & DR & MS2 & NAMS(ours) \\ \cline{2-5} \cline{7-10} Flat & 0.025 & \textbf{0.098} & 0.032 & 0.090 & & 0.250 & \textbf{0.890} & 0.328 & 0.708 \\ Sloped & 0.023 & 0.013 & 0.002 & \textbf{0.105} & & 0.230 & \textbf{0.818} & 0.110 & 0.623 \\ \hline \end{tabular} \end{center} \end{table*} \setlength{\tabcolsep}{4pt} \begin{table} \begin{center} \caption{E2 results: performance of a task model. We report the IoU performance of the U-Net and DeepLabV3 models on the target dataset, trained with 6 augmentation strategies.} \label{table:task_performance_synthetic_out} \begin{tabular}{ccccccc} \hline\noalign{\smallskip} Model & No & Uniform & DR & MS2 & NAMS(ours) & GT \\ \hline U-Net & 47.0 & 76.0 & 77.7 & 76.3 & \textbf{78.7} & 83.0 \\ DeepLabV3 & 34.5 & 78.6 & 80.7 & 78.5 & \textbf{81.2} & 85.6 \\ \hline \end{tabular} \end{center} \end{table} The accuracy of design inference with our NAMS method in E1 is presented Table \ref{table:Synthetic_result}, in the first column (Top-1) where we report the proportion of target images for which NAMS recovers the true parameters, compared with (i) uniform sampling, (ii) searching with MS2 and (iii) regressing with DR. The results in Table \ref{table:Synthetic_result} shown that DR still performs well for this in-domain test with no design gaps, and NAMS is substantially more accurate than MS2, even with a much smaller training dataset. In the second column (Top-10) in Table \ref{table:Synthetic_result}, we see that NAMS achieves much greater accuracy when evaluating whether it finds one of the top-10 best patches (by our measure), suggesting that it is still finds a more appropriate design, even if it is not the best design. The downstream task performance results of this experiment are presented in Table \ref{table:task_performance_synthetic}, for both the U-Net and DeepLabV3 models. The results indicate that DR performs pretty well as expected. NAMS improves average performance by 5.8\% and 6.1\% on average for the U-Net and DeepLabV3 models, which is better than MS2, and closes most of the performance gap between a uniform sampling and the best possible performance, achieved when design parameters perfectly match the testing synthetic dataset. 
\subsection{E2: Synthetic Target Data with a Design Gap} \label{subsec:experimental_results_e2} Here, it is impossible to check whether the meta-simulation methods inferred the exactly correct design parameters, since the target value is not in the training space. However, we can still rank the nearest $n_{top}$ textures according to their similarity to the true texture in HSV color space, as in the E1 experiments. As shown in Table \ref{table:Synthetic_result_out}, all of the accuracies decrease compared to the E1 tests in Table \ref{table:Synthetic_result}, and NAMS still outperforms MS2 in all cases. Interestingly, NAMS achieves better accuracy than DR when searching for the top-1 sloped-roof texture, suggesting that NAMS is more robust than DR when inferring out-of-domain targets. Next, as shown in Table \ref{table:task_performance_synthetic_out}, NAMS improves average performance for U-Net by 3.8\% and for DeepLabV3 by 3.5\%, which is much better than MS2, while the improvement of DR is only 2.5\% for U-Net and 2.8\% for DeepLabV3. This indicates that NAMS is more robust than DR when a design gap exists between the training and target datasets. \subsection{E3: Real-world Target Data} \label{subsec:experimental_results_e3} \begin{figure*}[!t] \centering \includegraphics[height=5.5cm]{imgs/Meta_sim_results.png} \caption{E3 results: accuracy of design inference. Examples of the target images and the images selected by NAMS are shown.} \label{fig:Meta_sim_result} \end{figure*} \setlength{\tabcolsep}{4pt} \begin{table} \begin{center} \caption{E3 results: performance of a task model. We report the IoU performance of the U-Net and DeepLabV3 models on the target dataset, trained with 6 augmentation strategies.} \label{table:task_performance_real} \begin{tabular}{ccccccc} \hline\noalign{\smallskip} Model & No & Uniform & DR & MS2 & NAMS(ours) & GT \\ \hline U-Net & 51.7 & 53.4 & 53.3 & 52.7 & \textbf{54.0} & 58.8 \\ DeepLabV3 & 46.9 & 49.8 & 50.2 & 50.4 & \textbf{50.6} & 58.9 \\ \hline \end{tabular} \end{center} \end{table} In experiment E3, we evaluate our method for building segmentation on real satellite imagery. Our goal is again to choose $d_{flat}$ and $d_{slope}$, the flat and sloped rooftop textures that best match the rooftops in the real-world target imagery. The results of E3 are presented in Table \ref{table:task_performance_real}, and show that NAMS outperforms both the MS2 and DR approaches, on average, across the five testing cities; the improvement over uniform sampling is 1.2\% for U-Net and 1.5\% for DeepLabV3. In contrast to the synthetic experiments E1 and E2, however, the benefits associated with NAMS are generally smaller. We hypothesize that this is likely due to the substantial sim-to-real appearance gap between our synthetic imagery and the real-world imagery. Fig. \ref{fig:Meta_sim_result} presents examples of the real-world target imagery and the synthetic designs that were chosen by NAMS for those cities. The designs chosen are qualitatively similar to those in the target cities; however, the sim-to-real gap is apparent in the imagery. \subsection{E4: Computation Time Comparison} \label{sec:experiment_computation_time} Here we present the computation time of NAMS on our benchmark tasks, and compare it with the computation time of the non-amortized method, MS2. Key metrics of each approach are summarized in Table \ref{table:Computation_time}. NAMS must first be trained on a collection of simulations.
After this training, however, it can infer design parameters for new target domains (or individual images) without additional simulation. By contrast, MS2 requires no initial simulation or training, but it requires a large number of simulations for each target domain, and this process must be repeated from scratch for each new target domain. Consider, for example, the overhead imagery task that motivated the development of NAMS. NAMS required 15,000 initial simulations and 13h of training, whereas MS2 requires no offline training or simulations. However, NAMS requires 0.7h to infer parameters for each new target domain, whereas MS2 requires 1.8 days (16.2 days if the feature-averaging strategy is applied) with the same training process as in \cite{Devaranjan2020}. Therefore NAMS scales much more efficiently with the number of target domains. Furthermore, NAMS can infer parameters from a small group of just 9 images. In this estimation we assumed that each target domain comprises 5000 images, following \cite{Devaranjan2020,Kar2019a}. If we are satisfied with parameter estimates obtained on just 50 target images, NAMS requires only 25s per domain (0.5 s/image $\times$ 50 images). \setlength{\tabcolsep}{4pt} \begin{table} \begin{center} \caption{Key computation time metrics for NAMS and MS2 on our benchmark task. $N_d$ denotes the number of target domains. Each target domain is assumed to comprise 5000 images.} \label{table:Computation_time} \begin{tabular}{clc} \hline & Experiments & Overhead imagery \\ \cline{2-3} & Simulation time & 1.6s/image \\ \hline \parbox[t]{2mm}{\multirow{ 4}{*}{\rotatebox[origin=c]{90}{NAMS}}} & Total simulations required & 15,000 \\ & Training time & 13h \\ & Inference per image & 0.5s\\ & Inference per target domain & 0.7h \\ \hline \parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{MS2}}} & Total simulations & 100,000*$N_d$ \\ & Training time & 0 \\ & Inference per image & N/A \\ & Inference per target domain & 1.8 days (estimated)\\ \hline \end{tabular} \end{center} \end{table} \begin{figure*}[!t] \centering \begin{tabular}{c} \includegraphics[height=4cm]{imgs/iter_in_domain.png} \\ \small (a) \end{tabular} \begin{tabular}{c} \includegraphics[height=4cm]{imgs/iter_out_domain.png} \\ \small (b) \end{tabular} \begin{tabular}{c} \includegraphics[height=4cm]{imgs/iter_real.png} \\ \small (c) \end{tabular} \caption{The searching process of the NAMS method in experiments E1 to E3, shown alongside the target design and image.} \label{fig:iter} \end{figure*} \section{Conclusions}\label{sec:conclusion} In this paper, we first investigate and compare existing meta-simulation methods for overhead imagery. We evaluate two state-of-the-art methods, direct regression and MS2, on several carefully-designed experiments. A novel NAMS method is then proposed to address the unique challenges associated with overhead imagery. NAMS is built on two key ideas. First, we propose a continuous embedding for discrete/categorical design parameters, using a VAE model. Second, we learn a Neural-Adjoint network that predicts image features from the design space, bypassing the black-box simulator. We show in several synthetic and real-world experiments that, by using NAMS, synthetic images with the target design can be rapidly and accurately generated for training downstream task algorithms. Compared to existing methods, NAMS is much more robust to sim-to-real gaps and computationally efficient as the number of unique target domains grows.
{ "timestamp": "2022-09-20T02:22:33", "yymm": "2209", "arxiv_id": "2209.08685", "language": "en", "url": "https://arxiv.org/abs/2209.08685" }
\section{Introduction} Energy system operators worldwide introduce carbon reduction measures to control the CO$_2$ content of energy supply~\cite{liu2021country}. Such measures include soft monetary penalties (e.g., a carbon tax) or hard emission constraints (e.g., a carbon cap or renewable portfolio standard) within operational planning routines. When operational planning is insufficient, more expensive yet effective long-term planning optimizes the network design in order to accommodate more environmentally friendly supply. In gas networks, which connect spatially distributed supply and demand hubs, carbon reduction measures can be used to prioritize pipeline-quality gas from renewable suppliers, such as biogas produced from organic matter, syngas produced by steam reforming, or hydrogen produced from electrolysis at large offshore wind sites, like the proposed North Sea hub~\cite{NSWPH}. However, solving such planning optimization problems with emission targets is challenging due to complex gas flow physics. \textbf{Contributions.} To address gas network planning under emission targets, we devise a new optimization method that substitutes the non-convex Weymouth equation of gas flows with a composition of trained input-convex and input-concave neural networks (ICNNs). Together, they capture the dependency of gas flows on nodal pressures. We embed the trained ICNNs into planning optimization problems, which are then solved using standard mixed-integer solvers. Tests on the Belgium gas network demonstrate the improvement of our methods over standard solvers, especially under strict emission targets. \subsection{Related work} \textbf{Gas network optimization.} Designing optimization methods to aid operational planning dates back to at least 1979 \cite{o1979mathematical}. Since then, solvers based on mixed-integer \cite{wilson1988steady}, piece-wise linear \cite{de2000gas}, quadratic \cite{singh2019natural,singh2020natural}, and semi-definite \cite{ojha2017solving} programming have been introduced. The CO$_{2}$ footprint of integrated gas and electricity networks has been addressed by integrating renewables \cite{ordoudis2019integrated,ratha2020affine,roald2020uncertainty,dvorkin2021stochastic,dvorkin2022multi} or by directly incorporating carbon reduction measures in operational \cite{piperagkas2011stochastic,cheng2019low} and long-term expansion planning problems \cite{degleris2021emissionsaware,qiu2014low,cheng2018planning}. We refer to \cite{conejo2020operations} for a comprehensive literature review. \textbf{Neural networks to aid optimization.} Using the mixed-integer neural network (NN) reformulation \cite{tjeng2017evaluating,xiao2018training,grimstad2019relu}, NNs can be used to approximate complex input-output dependencies within optimization, e.g., in power systems problems \cite{murzakhanov2020neural,donon2020neural,hu2020physics,kody2022modeling}. The reformulation represents the activation of each ReLU function using linear and binary constraints parameterized by the NN weights and biases, which can be computationally challenging at scale \cite{grimstad2019relu}. Here, we explore an alternative functional approximation that relies on \textit{input-convex} NNs, which constrain network weights to ensure the output is a convex function of the inputs \cite{amos2017input}.
Since trained ICNN mappings can be recast as linear optimization problems \cite{amos2017input,duchesne2021supervised}, we leverage them to convert non-convex optimization problems into bilevel optimization problems which are linear in both their upper and lower levels \cite{pozo2017basic}. \section{Emission-aware gas network planning problems} \textbf{Operational planning problem.} A gas network includes $n$ nodes, representing injections, extractions, or network junctions, and $\ell$ edges, representing pipelines. The operational planning problem identifies the least-cost supply allocation $\vartheta\in\mathbb{R}^{n}$ that satisfies nodal gas demands $\delta\in\mathbb{R}^{n}$, while ensuring that nodal pressures $\pi\in\mathbb{R}^{n}$ and gas flows $\varphi\in\mathbb{R}^{\ell}$ remain within technical limits. This problem is solved using the following optimization formulation~\cite{dvorkin2021stochastic}: \begin{subequations}\label{OGF} \begin{align} \minimize{{\varphi,\vartheta,\pi\in\mathcal{F}}}\quad&c^{\top}\vartheta\label{OGF:obj}\\ \text{subject to}\quad&A\varphi=\vartheta-\delta,\label{OGF:balance}\\ &\varphi\circ|\varphi|=\text{diag}[\omega]A^{\top}\pi,\label{OGF:weymouth} \end{align} \end{subequations} which minimizes linear gas supply costs subject to technical constraints. Using the graph admittance matrix $A\in\mathbb{R}^{n\times \ell}$, equation \eqref{OGF:balance} ensures the conservation of gas mass. Given the fixed friction coefficients $\omega\in\mathbb{R}^{\ell}$, the steady-state Weymouth equation \eqref{OGF:weymouth} enforces the non-convex dependency of gas flows on pressure variables. Finally, a convex set $\mathcal{F}$ is used to respect the technical limits on gas mass and pressures. Note that the vector $\pi$ contains squared nodal pressures to reduce the non-linearities in \eqref{OGF:weymouth} \cite{de2000gas}. We do not model compressors, which can be incorporated with fixed \cite{singh2019natural,singh2020natural} or varying \cite{de2000gas,dvorkin2021stochastic} compression rates without significant impacts on computational costs. Although the cost function \eqref{OGF:obj} typically includes only marginal production costs, it can also internalize an emission (carbon) tax to penalize gas producers with higher environmental impact. Alternatively, emissions can be regulated by a carbon cap constraint on the total emission level. Although the equivalence of the carbon tax and the carbon cap can be shown through the Karush–Kuhn–Tucker conditions of \eqref{OGF} \cite{brown2021decreasing}, the carbon cap is preferred due to the non-convexities in \eqref{OGF:weymouth}. Indeed, the same emission goal may not be achieved under a carbon tax, since local search algorithms may fail to minimize the penalty term globally; meanwhile, the carbon cap is introduced through the hard constraint \begin{align}\label{eq:emission_cap} e^{\top}\vartheta\leqslant \overline{e}, \end{align} with the vector $e\in\mathbb{R}^{n}$ of carbon intensities and carbon cap $\overline{e}$, which must be satisfied at all times. \textbf{Long-term planning problem.}\label{para_plan} Since a carbon cap may significantly affect the operating cost in \eqref{OGF:obj}, the long-term planning problem optimizes the network design to enable more economical satisfaction of the emission constraint \eqref{eq:emission_cap}. This problem is especially relevant for the design of future hydrogen gas transport networks, which governments are actively considering~\cite{khan2021techno}.
Let the diameter $d\in\mathbb{R}^{\ell}$ of gas pipelines be the design variable. Since pipeline friction is often modeled as linearly proportional to diameter~\cite{Sundar:2019}, a constant $\hat{\omega}_i$ can be used to relate friction and diameter via $\omega_{i}=\hat{\omega}_{i}d_{i}$. The diameter enters the operational problem \eqref{OGF} through the Weymouth equation \eqref{OGF:weymouth} as \begin{align}\label{expantion:Weymouth} & \text{diag}[d]^{-1}\varphi\circ|\varphi|=\text{diag}[\hat{\omega}]A^{\top}\pi, \end{align} where the right-hand side has no explicit dependence on the diameter. By defining a vector $\lambda\in\mathbb{R}^{\ell}$ of expansion costs, we obtain the long-term planning optimization from problem \eqref{OGF} by adding the total expansion cost $\lambda^{\top}d$ to \eqref{OGF:obj} and substituting equation \eqref{OGF:weymouth} with its counterpart in \eqref{expantion:Weymouth}. \section{Input-convex neural network approach to emission-aware planning} Addressing the non-convex equation \eqref{OGF:weymouth}, we observe that its left-hand side $f(\varphi_{l})=\varphi_{l}|\varphi_{l}|$ is convex for $\varphi_{l}\geqslant0$ and concave for $\varphi_{l}\leqslant0$. Hence, $f(\varphi_{l})$ can be approximated with the sum $f(\varphi_{l})\approx\Phi_{\shortplus}(\varphi_{l})+\Phi_{\shortminus}(\varphi_{l})$ of one input-convex $\Phi_{\shortplus}(\varphi_{l})$ and one input-concave $\Phi_{\shortminus}(\varphi_{l})$ neural network. We use the following $k$-layer architectures under ReLU activation functions of hidden neurons: \begin{align*} \begin{array}{l} \Phi_{\shortplus}(\varphi_{l}) \colon\quad z^{1}_{\shortplus} = \text{max}\left(0,W^{0}_{\shortplus}\varphi_{l}+b^{0}_{\shortplus}\right), \quad z^{i+1}_{\shortplus} = \text{max}\left(0,W^{i}_{\shortplus}z^{i}_{\shortplus}+b^{i}_{\shortplus}\right), \forall i=1,\dots,k-1,\\[1.0ex] \Phi_{\shortminus}(\varphi_{l}) \colon\quad z^{1}_{\shortminus} = \text{max}\left(0,W^{0}_{\shortminus}\varphi_{l}+b^{0}_{\shortminus}\right), \quad z^{i+1}_{\shortminus} = \text{max}\left(0,W^{i}_{\shortminus}z^{i}_{\shortminus}+b^{i}_{\shortminus}\right), \forall i=1,\dots,k-1, \end{array} \end{align*} with a scalar input $\varphi_{l}$, scalar output $z^{k}$, and weights and biases $W$ and $b$, respectively. In $\Phi_{\shortplus}(\varphi_{l})$, the weights $W^{i}_{\shortplus},\forall i=1,\dots,k-1$ are non-negative to render the output a convex function of the input. In $\Phi_{\shortminus}(\varphi_{l})$, the weights $W^{i}_{\shortminus}$ are also non-negative for $i=1,\dots,k-2$, but they are non-positive for $i=k-1$ to render the output a concave function of the input. With such architectures, we obtain the piece-wise functional approximation $f(\varphi_{l})\approx z_{\shortplus}^{k} + z_{\shortminus}^{k}$. Following \cite[Appendix B]{amos2017input}, we can retrieve the output of a trained ICNN from its input by solving a linear program, e.g., \begin{subequations}\label{prog:icnn_ref_base} \begin{align} \minimize{z^{1}_{\shortplus},\dots,z^{k}_{\shortplus}}\quad& z^{k}_{\shortplus}\\ \text{subject to}\quad&z^{1}_{\shortplus}\geqslant W^{0}_{\shortplus}\varphi_{l}+b^{0}_{\shortplus}, \quad z^{i+1}_{\shortplus}\geqslant W^{i}_{\shortplus}z^{i}_{\shortplus}+b^{i}_{\shortplus}, \quad z^{i}_{\shortplus}\geqslant \mymathbb{0}, \quad \forall i=1,\dots,k-1, \end{align} \end{subequations} for the $\Phi_{\shortplus}(\varphi_{l})$ architecture; it takes a similar form for the $\Phi_{\shortminus}(\varphi_{l})$ architecture.
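For illustration, the following is a minimal PyTorch-style sketch of the input-convex architecture $\Phi_{\shortplus}$; the input-concave $\Phi_{\shortminus}$ mirrors it with non-positive output-layer weights. Clamping the constrained weights after each optimizer step is one simple way to maintain non-negativity during training, and the omission of the output ReLU is an illustrative simplification that preserves convexity.
\begin{verbatim}
import torch
import torch.nn as nn

class InputConvexNet(nn.Module):
    """Phi_+: scalar flow input -> scalar output, convex in the input."""
    def __init__(self, hidden=15, layers=2):
        super().__init__()
        self.first = nn.Linear(1, hidden)     # W^0 may take any sign
        self.hidden = nn.ModuleList(
            [nn.Linear(hidden, hidden) for _ in range(layers - 2)])
        self.last = nn.Linear(hidden, 1)      # constrained: W^{k-1} >= 0
        self.act = nn.ReLU()

    def forward(self, phi):
        z = self.act(self.first(phi))
        for layer in self.hidden:
            z = self.act(layer(z))
        return self.last(z)

    def clamp_weights(self):
        # Enforce W^i >= 0 for i >= 1, keeping the mapping input-convex.
        for layer in list(self.hidden) + [self.last]:
            layer.weight.data.clamp_(min=0.0)
\end{verbatim}
Training $\Phi_{\shortplus}$ then amounts to regressing $f(\varphi)=\varphi|\varphi|$ on samples with $\varphi \geqslant 0$, calling \texttt{clamp\_weights()} after each optimizer step.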
Thus, to approximate the Weymouth equation, we need to embed two linear programs (one convex and one concave) for each pipeline. The computational burden, however, will depend on the number of hidden layers and neurons. To reduce the burden, we note that for $\varphi_{l}\geqslant0$, the solution $z^{k}_{\shortplus}$ is an {\it outer approximation} of the trained ICNN output, and the number of approximating hyperplanes is at most $2^{p}$, one for each on/off combination of the $p$ hidden neurons. For small architectures -- yet sufficient to represent a convex function -- we can screen the approximating hyperplanes and keep only the set $\mathbb{H}_{\shortplus}$ of {\it supporting} hyperplanes, i.e., those for which there exists an input $\varphi_{l}$ that makes them active (binding). Such hyperplane parameters are obtained from the trained ICNN as \begin{align*} &\textstyle\prod_{r=k}^{0}(s_{j}^{r}\circ W_{\shortplus}^{r})\varphi_{l}+\textstyle\sum_{i=0}^{k}\prod_{r=k}^{i}(s_{j}^{r}\circ W_{\shortplus}^{r})b^{i} = w_{\shortplus}^{j}\varphi_{l} + v_{\shortplus}^{j},\quad\forall j \in\mathbb{H}_{\shortplus}, \end{align*} with slope $w_{\shortplus}^{j}$ and intercept $v_{\shortplus}^{j}$. Vector $s_{j}\in\mathbb{R}^{p}$ collects a unique combination of ReLU activations (1 if active, 0 otherwise) of hyperplane $j$, and $s_{j}^{r}$ is the sub-vector of $s_{j}$ corresponding to the hidden neurons of layer $r$. Similarly, we obtain the supporting hyperplanes for the outer approximation of the concave part of $f(\varphi_{l})$. We now put forth the bilevel operational planning optimization which embeds the trained ICNNs: \begin{subequations}\label{bilevel} \begin{align} \minimize{{\varphi,\vartheta,\pi\in\mathcal{F}}}\quad&c^{\top}\vartheta\\ \text{subject to}\quad&\text{Constraints}\;\eqref{OGF:balance},\eqref{eq:emission_cap},\quad t_{\shortplus} + t_{\shortminus}=\text{diag}[\omega]A^{\top}\pi,\\ &\begin{matrix*}[l] t_{\shortplus}^{l} \in & \text{minimize}_{\;t_{\shortplus}^{l}} & t_{\shortplus}^{l}, \quad \text{subject to}\; w_{\shortplus}^{i}\varphi_{l}+ v_{\shortplus}^{i}\leqslant t_{\shortplus}^{l},\quad\forall i\in\mathbb{H}_{\shortplus}, \forall l=1,\dots,\ell \end{matrix*}\label{Bilevel:LL_convex}\\ &\begin{matrix*}[l] t_{\shortminus}^{l} \in & \text{maximize}_{\;t_{\shortminus}^{l}} & t_{\shortminus}^{l},\quad \text{subject to}\; w_{\shortminus}^{i}\varphi_{l}+v_{\shortminus}^{i}\geqslant t_{\shortminus}^{l},\quad\forall i\in\mathbb{H}_{\shortminus}, \forall l=1,\dots,\ell \end{matrix*}\label{Bilevel:LL_concave} \end{align} \end{subequations} where \eqref{Bilevel:LL_convex} and \eqref{Bilevel:LL_concave} are lower-level optimization problems, each including a single auxiliary variable $t^{l}$ which returns the ICNN output. Indeed, problem \eqref{Bilevel:LL_convex} is a lightweight version of \eqref{prog:icnn_ref_base} that produces the identical approximation result. Appendix \ref{app:ICNN_KKT_reformulation} provides a tractable mixed-integer reformulation of \eqref{bilevel} using Karush–Kuhn–Tucker (KKT) conditions of \eqref{Bilevel:LL_convex} and \eqref{Bilevel:LL_concave}. Then, in Appendix \ref{app:Weymputh_explantion_details}, we show that the diameter-dependent Weymouth equation \eqref{expantion:Weymouth} can also be approximated by an ICNN. \section{Numerical tests on the Belgium gas network} To demonstrate emission-aware planning, we use a modified Belgium system from \cite{de2000gas}, with a meshed topology, tighter pressure bounds, and more distributed gas supply and demand hubs.
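The screening of supporting hyperplanes admits a compact numerical illustration. In the sketch below (ours; random weights stand in for a trained network), we enumerate the ReLU activation patterns realised on an input grid and keep one hyperplane per realised pattern; since the network is convex and piecewise linear, its output coincides with the upper envelope of these supporting hyperplanes, which is exactly the value returned by the lower-level program \eqref{Bilevel:LL_convex}.

\begin{verbatim}
import numpy as np

# Hypothetical trained one-hidden-layer ICNN with p = 4 hidden neurons:
# Phi(x) = w1 @ max(0, w0 * x + b0) + b1, with w1 >= 0 (convex in x).
rng = np.random.default_rng(0)
p = 4
w0, b0 = rng.normal(size=p), rng.normal(size=p)
w1, b1 = np.abs(rng.normal(size=p)), 0.0

def icnn(x):
    return w1 @ np.maximum(0.0, w0 * x + b0) + b1

# Screening: out of the 2**p candidate activation patterns, keep only those
# realised by some input on a grid -- these give the supporting hyperplanes.
grid = np.linspace(-5.0, 5.0, 1001)
patterns = {tuple((w0 * x + b0 > 0).astype(int)) for x in grid}
planes = [(w1 @ (s * w0), w1 @ (s * b0) + b1)   # (slope, intercept) pairs
          for s in map(np.array, patterns)]

# The convex piecewise-linear ICNN equals the upper envelope of its
# supporting hyperplanes.
for x in (-2.0, 0.5, 3.0):
    envelope = max(w * x + v for w, v in planes)
    assert abs(envelope - icnn(x)) < 1e-9
print(len(planes), 'supporting hyperplanes out of', 2 ** p, 'candidates')
\end{verbatim}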
Using this system, we compare three methods to solve operation planning: 1) an interior point solver \texttt{IPOPT}~\cite{wachter2006implementation}, 2) a mixed-integer quadratic programming (MIQP) relaxation, detailed in Appendix \ref{app:MIQP_relaxation}, and 3) the proposed ICNN-aided optimization. The last two are solved with the mixed-integer Gurobi solver~\cite{gurobi}. The long-term planning problem is solved by the 1\textsuperscript{st} and 3\textsuperscript{rd} methods only, as no convex relaxation of equation \eqref{expantion:Weymouth} is known. The CPU time for all methods does not exceed several minutes. The NN architectures include 1 hidden layer with up to 15 neurons, which was sufficient to approximate the convex and concave parts of the Weymouth equation. Test data, details on the training procedure, and codes to replicate our results are available at \url{https://doi.org/10.5281/zenodo.7089330}. The CO$_{2}$ intensity of the gas supply in the test system varies between 0.6 and 2.7 kg/m$^3$, and solving the operational planning problem \eqref{OGF} without emission constraint \eqref{eq:emission_cap} results in up to 125.9 kT of emitted CO$_{2}$ with the \texttt{IPOPT} solver. To limit emissions, we select one moderate emission cap of 100 kT and one extreme cap of 48.9 kT, below which no method returns a feasible solution. The solutions for operation planning are collected in Table \ref{tab:operation}. As the emission cap decreases, the \texttt{IPOPT} solver becomes more sensitive to initialization and fails to provide a feasible solution with probability up to 39.0\%. Although the termination status of the MIQP relaxation is always optimal, its solution may not be feasible with respect to the original, non-relaxed Weymouth equation; using it as a warm start for \texttt{IPOPT}, however, we retrieve a feasible point, which is competitive with the best performance of the randomly initialized \texttt{IPOPT} solver. With either a moderate or no emission cap, the proposed ICNN-aided optimization improves on the MIQP solution and consistently returns the best solution found with \texttt{IPOPT}. In the most constrained case, with $\overline{e}=48.9$ kT, the ICNN-aided optimization yields the lowest operation cost, thus dominating both the \texttt{IPOPT} and MIQP solutions. Table \ref{tab:coopt} provides the summary of the long-term planning cost, which includes both the operating cost and the expansion cost adjusted to a single, peak hour. While the \texttt{IPOPT} solver exhibits a large variance and fails to produce any solution with probability up to 41.4\%, the ICNN-aided optimization always returns the best solution discovered with random \texttt{IPOPT} initializations. For the worst-case \texttt{IPOPT} outcomes, the ICNN-aided solution yields 3.2\%--5.9\% cost savings, as it requires less pipeline expansion; e.g., for $\overline{e}=48.9$ kT, it expands pipelines by 117\,mm less on average across the network. \begin{table}[h] \caption{Cost summary of the emission-aware operation planning (\euro1,000). } \label{tab:operation} \begin{tabular}{cc@{\hspace{1.25\tabcolsep}}c@{\hspace{1.25\tabcolsep}}c@{\hspace{1.25\tabcolsep}}cc@{\hspace{1.25\tabcolsep}}cc@{\hspace{1.25\tabcolsep}}c} \toprule \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Emission \\ cap, kT\end{tabular}} & \multicolumn{4}{c}{1,000 random \texttt{IPOPT} initializations} & \multicolumn{2}{c}{MIQP relaxation} & \multicolumn{2}{c}{ICNN-aided solution} \\ \cmidrule(lr){2-5} \cmidrule(lr){6-7} \cmidrule(lr){8-9} & min & mean & max & \begin{tabular}[c]{@{}c@{}}prob. of\\ failure\end{tabular} & optimal & \begin{tabular}[c]{@{}c@{}}warm start\\ for \texttt{IPOPT}\end{tabular} & optimal & \begin{tabular}[c]{@{}c@{}}warm start\\ for \texttt{IPOPT}\end{tabular} \\ \midrule $\infty$ & 1,923.3 & 1,927.2 & 1,929.2 & 16.6\% & 1,540.8 & 1,929.2 & 1,932.3 & 1,923.3 \\ 100 & 2,225.1 & 2,235.1 & 2,256.2 & 16.0\% & 2,137.2 & 2,225.1 & 2,241.3 & 2,225.1 \\ 48.9 & 4,344.6 & 4,344.6 & 4,344.6 & 39.0\% & 4,200.8 & 4,344.6 & 4,290.1 & 4,291.2 \\ \bottomrule \end{tabular} \end{table} \begin{table}[h] \centering \caption{Cost summary of the emission-aware long-term planning (\euro1,000). }\label{tab:coopt} \begin{tabular}{cc@{\hspace{1.5\tabcolsep}}c@{\hspace{1.5\tabcolsep}}c@{\hspace{1.5\tabcolsep}}cc@{\hspace{1.5\tabcolsep}}c} \toprule \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Emission \\ cap, kT\end{tabular}} & \multicolumn{4}{c}{1,000 random \texttt{IPOPT} initializations} & \multicolumn{2}{c}{ICNN-aided solution} \\ \cmidrule(lr){2-5} \cmidrule(lr){6-7} & min & mean & max & \begin{tabular}[c]{@{}c@{}}prob. of\\ failure\end{tabular} & optimal & \begin{tabular}[c]{@{}c@{}}warm start\\ for \texttt{IPOPT}\end{tabular} \\ \midrule $\infty$ & 2,671.7 & 2,701.8 & 2,829.5 & 28.6\% & 2,666.4 & 2,671.6 \\ 100 & 3,057.8 & 3,090.2 & 3,191.9 & 30.3\% & 3,056.6 & 3,057.8 \\ 48.9 & 5,079.1 & 5,138.7 & 5,247.9 & 41.4\% & 5,079.9 & 5,079.1 \\ \bottomrule \end{tabular} \end{table} \section{Conclusion} We developed a new method for operation and long-term planning of gas networks under emission constraints, based on embedding input-convex and input-concave neural networks into planning optimization. We provided empirical evidence that our method is robust even under the strictest emission targets, for which non-convex and relaxation-based solvers often fail to produce a feasible solution. The cost-saving potential of our method reaches 1.2\% in operational and 5.9\% in long-term planning. \section*{Acknowledgements} Vladimir Dvorkin is supported by the MSCA-COFUND Postdoctoral Program, Grant Agreement No. 101034297 -- project Learning ORDER. Samuel Chevalier is supported by the HORIZON-MSCA-2021 Postdoctoral Fellowship Program, Grant Agreement No. 101066991 -- project TRUST-ML. Spyros Chatzivasileiadis is supported by the ERC Starting Grant, Grant Agreement No. 949899 -- project VeriPhIED.
{ "timestamp": "2022-09-20T02:21:15", "yymm": "2209", "arxiv_id": "2209.08645", "language": "en", "url": "https://arxiv.org/abs/2209.08645" }
\section{Introduction} The result presented in this paper contributes to the study of principles of generic absoluteness (see \cite{bagaria_axioms_generic_absoluteness} for a survey of this area). These principles can be seen as generalizations of the bounded forcing axioms, like $MA$, $BPFA$, $BSPFA$ and $BMM$ (see \cite{bagaria_bfa_generic_absoluteness}). Here is the general form of such a principle: \begin{dfn} Let $W$ be a definable subclass of $V$, $\Phi$ a class of formulas with parameters and $\Gamma$ a class of forcing notions. $\mathcal{A}(W,\Phi,\Gamma)$ is the statement that for any formula $\phi$ that belongs to $\Phi$ and for any $P \in \Gamma$, $W^{V}\vDash \phi \iff W^{V^P}\vDash \phi$. \end{dfn} We denote $\mathcal{A}(H_{\omega_1}, \mathbf{\Sigma_2}, \Gamma)$ by $\Sigma^1_3$-Abs$(\Gamma)$, pronounced ``$\Sigma^1_3$-absoluteness (for $\Gamma$)'', as $\Sigma_2$ formulas over $H_{\omega_1}$ are equivalent to $\Sigma^1_3$ formulas. In this special case, the consistency strengths for various classes $\Gamma$ are known. See \cite{generic_absoluteness} and \cite{more_generic_absoluteness} for proofs (in the table, we denote by $\ccc$, $proper$, $semi-proper$ the obvious classes of forcing notions; $set$ denotes the class of all set-sized forcing notions). \begin{center} \begin{tabular}{ c | c | c | c } $\Sigma^1_3$-Abs$(\ccc)$ & $\Sigma^1_3$-Abs$(proper)$ & $\Sigma^1_3$-Abs$(semi-proper)$ & $\Sigma^1_3$-Abs$(set)$ \\ \hline ZFC & ZFC & ZFC & reflecting \end{tabular} \end{center} \begin{dfn}\cite{goldstern_shelah_bpfa}\label{reflecting} A regular cardinal $\kappa$ is \textit{reflecting} \textit{iff} $V_\kappa \prec_{\Sigma_2} V$, or, equivalently, \textit{iff} for any regular $\theta$ and any formula $\phi$ with parameters from $V_\kappa$ such that $H_\theta \vDash \phi$, there is a regular $\gamma < \kappa$ such that $H_\gamma \vDash \phi$. \end{dfn} Forcing axioms are often considered with respect to their interaction with other interesting propositions. For example, knowing how to construct a model of $MA$, one may ask what is needed to obtain a model of $MA\; \wedge$ ``every projective set of reals is Lebesgue measurable'' or $MA \;\wedge $ ``$\omega_1$ is inaccessible to reals'' (as in \cite{equicon_results}). \begin{dfn} We say $\omega_1$ is \textit{inaccessible to reals} \textit{iff} for any real $r$, ${\omega_1}^{L[r]} < \omega_1$. \end{dfn} The fact that $\Sigma^1_3$-Abs$(set)$ implies that $\omega_1$ is inaccessible to reals (in fact $\Sigma^1_3$-absoluteness for $\omega_1$-preserving forcing suffices) is pivotal in setting it apart from the weaker axioms, those that are equiconsistent with $ZFC$. One can show that $\Sigma^1_3$-Abs$(proper)$ together with the assumption that $\omega_1$ is inaccessible to reals has the full strength of a reflecting cardinal (\cite{more_generic_absoluteness}; see also \cite{thesis}). The question is: how does the additional assumption that $\omega_1$ is inaccessible to reals interact with those forcing axioms that do not directly imply it? The answer is given in the following table, where the additional hypothesis is indicated by ``$\Omega$''. We present a proof of the result on the far left\footnote{This material was part of my thesis \cite{thesis}; it was sketched during a talk at the Summer Workshop in Fine Structure Theory (SWIFT) in Bonn, July 2003, and was the content of a talk given in M\"unster in February 2004. I'd like to thank Ralph Schindler for his kind invitation and hospitality.}.
For this we introduce (in section \ref{indescribable}) a large cardinal property, ``lightface $\Sigma^1_2$-indescribable'' (denoted by ``lf-$\Sigma^1_2$-id'' in the table). \begin{center} \begin{tabular}{ c | c | c | c } $\Sigma^1_3$-Abs$(\ccc)$ & $\Sigma^1_3$-Abs$(proper)$ & $\Sigma^1_3$-Abs$(semi-proper)$ & $\Sigma^1_3$-Abs$(set)$ \\ $\wedge \;\Omega$ & $\wedge\; \Omega$ & $\wedge\; \Omega$ & $\wedge\; \Omega$ \\ \hline lf-$\Sigma^1_2$-id & reflecting & reflecting & reflecting \end{tabular} \end{center} The proof uses (among other things) a coding technique developed in \cite[theorem~C]{equicon_results}, which we adapt to our needs in section \ref{coding}. Also, the proof uses a bit of fine structure theory; as we are making an effort to make this paper very accessible, we will review some of the facts we rely on in the next section. \section{Notation, Facts, Definitions}\label{facts} To define the large cardinal property hinted at, we need a bit of second-order logic. We differentiate second-order from first-order variables or constant symbols by using upper case for the former and lower case for the latter. We remind the reader that $\Phi(X_0, \hdots, X_k)$ is a $\Sigma^1_n$-formula if $\Phi$ starts with a block of existential quantifiers over second-order variables, with $n$ changes of quantifier, followed by an arbitrary number of first-order quantifiers. $\Pi^1_n$ means negation of $\Sigma^1_n$. Remember that for a first-order formula $\Phi$ (mentioning some second-order variables $Y_0, \hdots, Y_r, X_0, \hdots, X_k$) \[ \langle M, X_0, \hdots, X_k \rangle \vDash \exists Y_0 \hdots Q Y_r \Phi(Y_0, \hdots, Y_r, X_0, \hdots, X_k)\] (where $Q$ denotes $\exists$ or $\forall$) exactly if \[ \begin{array}{c} \exists Y_0 \in \mathcal{P}(M) \hdots Q Y_r \in \mathcal{P}(M) \\ \text{ such that }\langle M, X_0, \hdots, X_k, Y_0, \hdots, Y_r \rangle \vDash \Phi(Y_0, \hdots, Y_r, X_0, \hdots, X_k) \end{array} \] For an elaborate definition, see \cite[p.~7f\,]{kanamori:03}. Let $(\phi_i)_{i \in \omega}$ enumerate all $\Delta_0$ formulas. Now we define the value of the $i$-th $\Sigma_1$-Skolem function for $L_\alpha$ ($\alpha$ a limit ordinal), denoted by $h^{L_\alpha}_i$: if $L_\alpha \vDash \exists v \exists w \phi_i(v,w,x)$ we say $h^{L_\alpha}_i(x)=y$ just if $(y,z)$ is the $\leq_L$-least pair such that $L_\alpha \vDash \phi_i(y,z,x)$; $h^{L_\alpha}_i(x)=\emptyset$ otherwise. By the $\Sigma_1$-Skolem hull of $M$ inside $L_\alpha$, denoted by $h^{L_\alpha}_{\Sigma_1}(M)$, we mean the least set containing $M$ and closed under all $h^{L_\alpha}_i$. In this case, $h^{L_\alpha}_{\Sigma_1}(M)=\bigcup_{i \in \omega} {h^{L_\alpha}_i }[ M ]$. Most importantly, $\Sigma_1$-Skolem functions are uniformly $\Sigma_1$-definable, i.e. there is a $\Delta_0$ formula $\Phi$ such that $h^{L_\alpha}_i(x)=y$ if and only if $L_\alpha \vDash \exists z \Phi(i,x,y,z)$. We will make use of simple facts about Skolem hulls, like: \begin{fct}\label{skolemlower} If $\langle L_\alpha, x \rangle$ is isomorphic to $\langle h^{L_{\bar \alpha} }_{\Sigma_1}(M\cup\{\bar x \}),\bar x \rangle$ and $M$ is transitive, then $\langle L_\alpha, x \rangle = \langle h^{L_\alpha}_{\Sigma_1}(M\cup\{x \}),x\rangle$. \end{fct} \begin{fct} \label{skolemupper} If $L_\alpha= h^{L_{\alpha}}_{\Sigma_1}(M)$ and $\sigma: L_\alpha \rightarrow_{\Sigma_1} L_\beta$, then $\ran(\sigma)= h^{L_{\beta}}_{\Sigma_1}(\sigma[M])$. \end{fct} For details, see \cite[II, 6]{devlin}.
\section{Lightface $\Sigma^1_2$-indescribable cardinals}\label{indescribable} \begin{dfn}\label{thedefinition} We say that a cardinal $\kappa$ has the $\Sigma^1_2$ reflection property if whenever \[ V_\kappa \vDash \exists X \forall Y \Phi(X,Y,p) \] where $\Phi$ is first-order in the language of set theory with $X$ and $Y$ as additional predicates, and $p \in V_\kappa$, then there is $\xi < \kappa$ such that $V_\xi \vDash \exists X \forall Y \Phi(X,Y,p)$. We say $\kappa$ is (lightface) $\Sigma^1_2$-indescribable if in addition $\kappa$ is inaccessible. \end{dfn} \begin{fct}\label{def2} $\kappa$ is lightface $\Sigma^1_2$-indescribable $\iff$ $\kappa$ is inaccessible and $H_\kappa \prec_{\Sigma_2} H_{\kappa^+}$. \end{fct} \begin{proof} First assume indescribability. Let $H_{\kappa^+}\vDash \exists x \forall y \phi(x,y,p)$, where $p \in H_\kappa$. Pick a witness $x_0$ in $H_{\kappa^+}$. For any transitive $M \in H_\kappa$ containing $x_0$ and $p$, we have $M\vDash \forall y \phi(x_0,y,p)$. So $H_{\kappa^+}\vDash \exists x \forall y \phi(x,y,p)$ is equivalent to a $\Sigma^1_2$ assertion over $H_\kappa$, and thus is reflected by some $H_\xi$, for inaccessible $\xi < \kappa$. Thus $H_{\xi^+}\vDash \exists x \forall y \phi(x,y,p)$, and as $\Sigma_2$ formulas are upward absolute for members of the $H$-hierarchy, \[ H_{\kappa}\vDash \exists x \forall y \phi(x,y,p). \] For the other direction, let $H_\kappa\vDash\exists X \forall Y \phi(X,Y,p)$. Then $H_{\kappa^+}$ thinks ``there is an ordinal $\theta$ such that $\exists x \subseteq V_\theta \forall y \subseteq V_\theta \langle V_\theta,x,y \rangle \vDash \phi(x,y,p)$''. As ``$z=V_\theta$'' is $\Pi_1$ in $z$ and $\theta$, this is seen to be a $\Sigma_2$ statement in the parameter $p$ and so holds in $H_\kappa$. \end{proof} \begin{fct}\label{fct:indescribable} \begin{enumerate} \item If $\kappa$ has the $\Sigma^1_2$ reflection property, it is a limit cardinal and is not equal to $2^\lambda$ for any $\lambda < \kappa$. \item \label{Mahlo}There is a stationary set of $\Sigma^1_2$-indescribable cardinals below any Mahlo cardinal. The least Mahlo is not $\Sigma^1_2$-indescribable. \item Reflecting implies $\Sigma^1_2$-indescribable which in turn implies the existence of many inaccessibles. \item If $P$ is a partial ordering of size less than $\kappa$, then forcing with $P$ preserves the $\Sigma^1_2$-indescribability of $\kappa$. \end{enumerate} \end{fct} \begin{proof} \begin{enumerate} \item If $\kappa = \lambda^+$, the $\Pi^1_1$ sentence (with $\lambda$ as a parameter) ``there is no function from $\lambda$ onto the ordinals'' holds in $V_\kappa$. But this sentence can't hold in any $V_\xi$ containing $\lambda$, $\xi < \kappa$. If $\kappa = 2^\delta$, look at the sentence ``there is a bijection between $\mathcal P (\delta)$ and the ordinals''. Argue as above. \item Let $\kappa$ be Mahlo. Consider the function that assigns to each ordinal $\eta < \kappa$ the least $\xi$ such that: if $\Phi$ is $\Sigma^1_2$ with a parameter from $V_\eta$ and there is $\alpha < \kappa$ such that $V_\alpha\vDash \Phi$, then there is $\alpha < \xi$ such that $V_\alpha\vDash \Phi$. Any closure point under this function has the $\Sigma^1_2$ reflection property, and by Replacement in $V_\kappa$, the closure points under this function form a $cub$ subset of $\kappa$. This proves the first assertion. To see that the least Mahlo cannot have the reflection property, observe that $\kappa$ being Mahlo is expressible by a $\Pi^1_1$ statement over $V_\kappa$.
\item The first assertion follows from the previous Fact, as $\Sigma_2$ sentences are upward absolute for members of the $H$-hierarchy. Secondly, being inaccessible is expressible as a $\Pi^1_1$ statement (for example: the power set of any set exists and can be mapped injectively into some ordinal, and there is no function with a set as domain but unbounded range). \item\label{forcing} By Fact \ref{def2}, it suffices to prove that $H_\kappa \prec_{\Sigma_2} H_{\kappa^+}$ holds in the extension, assuming it holds in the ground model. Observe that for a partial order $P$ and a regular $\alpha$ such that $P\in H_\alpha$, every element of $(H_\alpha)^{V^P}$ has a $P$-name in $H_\alpha$ (check straightforwardly, constructing names by induction on the rank; in fact, it suffices to assume that $P$ is a subset of $H_\kappa$ and has the $\kappa$-cc). Recall that for any given $\Delta_0$ formula $\phi$ and for any transitive $ZF^-$-model $M$ such that $P \in M$, the forcing relation for $\phi$ on $P\times (P$-names in $M)$ is uniformly $\Delta_1$-definable over $M$ (i.e. the definition is the same for all such $M$). Thus $\Vdash_P$``$H_{\kappa^+} \vDash \exists x \forall y \phi(x,y,\dot{p})$'' is equivalent to a statement of the form \[ \exists \dot{x} \in H_{\kappa^+} \quad \forall \dot{y} \in H_{\kappa^+} \quad H_{\kappa^+} \vDash \phi'(\dot{x},\dot{y},\dot{p},P), \] where $\phi'$ is $\Delta_1$. As $H_\kappa \prec_{\Sigma_2} H_{\kappa^+}$, the above holds with $\kappa^+$ replaced by $\kappa$. As $P \in H_\kappa$, $\Vdash_P$``$H_\kappa \vDash \exists x \forall y \phi(x,y,\dot{p})$''. \end{enumerate} \end{proof} Lightface $\Sigma^1_2$-indescribability does imply a certain fragment of Mahloness, as we observed in \cite{thesis}. For an application of the following notion see \cite[\S5]{bagaria_solovay_models}. \begin{dfn} Let us call a cardinal $\kappa$ $\Sigma_n$-Mahlo (resp. $\Pi_n$-Mahlo) \textit{iff} it is inaccessible and every $cub$ subset $C$ of $\kappa$ with a $\Sigma_n$ (resp. $\Pi_n$) definition in $H_\kappa$, with parameters, contains an inaccessible cardinal. Lightface $\Sigma_n$-Mahlo (resp. $\Pi_n$-Mahlo) is defined analogously, but without allowing parameters in the definition of $C$. \end{dfn} \begin{fct} If $\kappa$ is lightface $\Sigma^1_2$-indescribable, then it is $\Sigma_2$-Mahlo. In fact, $\kappa$ is an inaccessible limit of $\Sigma_2$-Mahlo cardinals. Any lightface $\Pi_2$-Mahlo cardinal is an inaccessible limit of $\Sigma^1_2$-indescribable cardinals. \end{fct} \begin{proof} First, assume $\kappa$ is $\Sigma^1_2$-indescribable. To prove that $\kappa$ is $\Sigma_2$-Mahlo, let $C$ be a $cub$ subset of $\kappa$ such that $\xi \in C \iff H_\kappa\vDash \phi(\xi)$, where $\phi(\xi)$ is $\Sigma_2$. $H_{\kappa^+}\vDash$``there is an inaccessible $\theta$ such that $H_\theta\vDash\forall \xi \exists \bar\xi>\xi \phi(\bar\xi)$''. This statement is itself $\Sigma_2$, so it also holds in $H_\kappa$. So there is an inaccessible $\theta < \kappa$ such that $H_\theta \vDash \forall \xi \exists \bar \xi > \xi \phi(\bar\xi)$. By upward absoluteness of $\phi(\xi)$ for members of the $H$-hierarchy and closedness of $C$, $\theta \in C$. This completes the proof of the first assertion. It is straightforward to check that ``$\kappa$ is $\Sigma_2$-Mahlo'' is $\Pi^1_1$ over $V_\kappa$, so since $\kappa$ is $\Sigma^1_2$-indescribable, there are unboundedly many $\Sigma_2$-Mahlo cardinals below $\kappa$. For the last assertion, assume $\kappa$ is lightface $\Pi_2$-Mahlo.
We follow the proof of Fact \ref{fct:indescribable}, (\ref{Mahlo}). Check that the $cub$ set mentioned there, consisting of $\xi < \kappa$ having the $\Sigma^1_2$ reflection property, has a $\Pi_2$ definition, without parameters, over $V_\kappa$. So there are unboundedly many $\Sigma^1_2$-indescribable cardinals below $\kappa$. \end{proof} \section{Coding using an Aronszajn-tree}\label{coding} We fix the following notation: for a tree $T$, we denote by $<_T$ (or $\leq_T$) the tree order, $T_\alpha$ denotes the $\alpha$-th level of $T$ and $T\upharpoonright \alpha$ denotes the subtree of $T$ consisting of all levels of height less than $\alpha$. By $\pred(t)$ we mean of course $\{ t' \in T | t' <_T t \}$. The following works for any Aronszajn-tree $T$, that is, a tree of height $\omega_1$ with countable levels and without any cofinal branches (i.e. linearly ordered sets of type $\omega_1$). Aronszajn trees can be ``specialized'' by a $\ccc$ forcing: that is, one adds an order preserving function from the tree into the rationals (a so-called specializing function). This ensures that one cannot add, by further forcing, cofinal branches without at the same time collapsing $\omega_1$. Applying this forcing to code a subset of $\omega_1$ by a real, \cite{equicon_results} proves that $MA$ together with ``$\omega_1$ is inaccessible to reals'' implies $\omega_1$ is weakly compact in $L$. We present a slight variation. \begin{fct}\label{fct:ccc-resh} Let $S=(s_\alpha)_{\alpha < \omega_1}$ be a sequence of reals. There is a $\ccc$ forcing $P$ that adds a real $r$ such that in the extension the following holds: whenever $M$ is a transitive model of $ZF^-$ such that $r \in M$, $ \langle T,\leq_T \rangle \in M$, we have $(s_\alpha)_{\alpha < \omega_1} \in M$. \end{fct} \begin{proof} To achieve this, we iterate the following notion of forcing: fix $Q_0$, $Q_1$, two disjoint dense sets whose union is all rational numbers. For any sequence $S=(s_\alpha)_{\alpha < \omega_1}$ consider $P^{S}_T$ consisting of all conditions $f$ such that \begin{enumerate} \item $f$ is a function with domain a finite subset of $T \times \omega$. \item For each $n \in \omega$, the function $t \mapsto f(t,n)$ is a partial order preserving mapping from $(T,<_T)$ into the rationals. \item For any $\alpha < \omega_1$, $t$ at the $\alpha$-th level of $T$ and $n \in \omega$, if $(t,n) \in \dom (f)$, then $f(t,n) \in Q_0$ if and only if $n \in s_\alpha$. \end{enumerate} \begin{lem}\label{lem:ccc-resh:basic} Let $F= \bigcup G$, where $G$ is generic. Then $F$ is a function from $T \times \omega$ into the rationals which is order preserving and continuous at limit nodes of $T$; moreover, for any $\alpha < \omega_1$, and any $t \in T_\alpha$, $\{ \; n \in \omega \; \vert \; F(t,n) \in Q_0 \; \}= s_\alpha$. \end{lem} \begin{proof} Clearly, $D_{(t,n)}:=\{\; p \in P^S_T \;\vert\; (t,n) \in \dom (p)\;\}$ is dense for any $(t,n) \in T \times \omega$: given a condition $p$, there is an interval of possible values for $p$ at $(t,n)$ (since $p$ has finite domain), so if $t$ is at level $\alpha$ of $T$, we can choose a value from $Q_0$ or $Q_1$, depending on whether $n \in s_\alpha$ or not. So $F$ is a total, order preserving function on $T \times \omega$, and the ``moreover'' clause holds by definition. 
$F$ is continuous as $D_{(t,n),\epsilon }:=\{ \; p \in P^S_T \;\vert\; \exists t' <_T t \; \lvert p(t',n)-p(t,n) \rvert < \epsilon \;\}$ is dense for any $n \in \omega$, $\epsilon > 0$ and $t$ at a limit level of $T$ (again, by the finiteness of the domain of any condition). \end{proof} \begin{lem} $P^S_T$ is $\ccc$. \end{lem} \begin{proof} Assume $(p_\alpha)_{\alpha < \omega_1}$ is an uncountable antichain; then $\{ \dom(p_\alpha) \vert \alpha < \omega_1 \}$ is an uncountable subset of $[ T \times \omega ]^{< \omega}$, so we can apply the delta-system lemma and assume that for each $\alpha$, $\dom(p_\alpha)=r \cup d_\alpha$, where $r, (d_\alpha)_{\alpha<\omega_1}$ are pairwise disjoint. Let us also assume that the $d_\alpha$ all have the same cardinality $k$. There are only countably many possibilities for the values of the $p_\alpha$ on $r$, so we assume that all the conditions agree on $r$. So for any $\alpha, \alpha' < \omega_1$, there is $t \in d_\alpha$, $t' \in d_{\alpha'}$ and $n \in \omega$ such that $p_\alpha \cup p_{\alpha'}$ is not order preserving on $\{(t,n), (t',n)\}$, whence in particular $t$ and $t'$ are comparable in the tree order. As any node of the tree has only countably many predecessors in the tree order, by thinning out $(p_\alpha)_{\alpha < \omega_1}$ we can further assume that for all $\alpha < \alpha' < \omega_1$, there are $t \in d_\alpha$, $t' \in d_{\alpha'}$ such that $t <_T t'$. Let us now enumerate the $d_\alpha$ as $t^0_\alpha, \hdots, t^{k-1}_\alpha$. We know that all the conditions in the antichain have comparable nodes in their domain; we will now find a sufficiently coherent subset of conditions to get a branch through $T$. Enlarge (using Zorn's lemma) the filter of co-initial subsets of $\omega_1$ to an ultrafilter $U$ ($U$ contains only sets of size $\omega_1$, i.e. $U$ is uniform). For any $\alpha < \omega_1$, we have $\{ \; \beta < \omega_1 \;\vert \; \exists i,j \;\; t^i_\alpha <_T t^j_\beta \; \} \in U$. So by finite additivity of $U$, for each $\alpha$, there are $i,j$ such that $ \{ \; \beta < \omega_1 \;\vert \; \; t^i_\alpha <_T t^j_\beta \; \} \in U$. Moreover, there is an uncountable set $I$ and $i,j$ such that the above holds for all $\alpha \in I $ and this particular pair $i,j$. So for any $\alpha, \alpha' \in I$, as elements of $U$ have non-empty (in fact large) intersection, there is $\beta$ such that $t^i_\alpha <_T t^j_\beta$ and $t^i_{\alpha'} <_T t^j_\beta$, so $t^i_\alpha$ and $t^i_{\alpha'}$ are comparable and $(t^i_\alpha)_{\alpha \in I}$ is an uncountable branch through $T$. \end{proof} Now we can prove Fact \ref{fct:ccc-resh}. We build $P$ as the finite support iteration of $(P_k)_{k \in \omega}$. Let $s^0_\alpha=s_\alpha$; $P_0$ is the forcing coding this sequence of reals into a specializing function for $T$. At stage $n$, we have added a specializing function $F_n$; let $s^{n+1}_\alpha$ be a real coding (in some absolute way) $F_n$ restricted to $(T\upharpoonright \alpha+2) \times \omega$. $P_{n+1}$ is the forcing for coding the sequence $(s^{n+1}_\alpha)_{\alpha< \omega_1}$. Let $r$ be a real coding all reals $(s^k_0)_{k \in \omega}$; we check by induction on $\eta \leq \omega_1$ that $r$ has the property promised in \ref{fct:ccc-resh}: assume that for all $k$, $(s^k_\xi)_{\xi < \eta} \in M$ (this holds by assumption if $\eta=1$). If $\eta=\zeta+1$, as $s^{k+1}_\zeta$ codes $F_k$ restricted to $T \upharpoonright \zeta +2$, for an arbitrary $t \in T_{\zeta+1}$, $s^k_{\zeta+1}=\{ n | F_k(t,n)\in Q_0 \} \in M$.
For limit $\eta$, using $(s^k_\xi)_{k \in \omega, \xi < \eta}$ we have $F_k$ restricted to $T\upharpoonright \eta$ inside $M$, and therefore, picking an arbitrary $t \in T_\eta$, $n \in s^k_\eta$ exactly if $\sup(\{ F_k(t',n) \vert t'<_T t\})\in Q_0$; so $s^k_\eta \in M$ for all $k$. \end{proof} \section{An equiconsistency} \begin{thm} ``$\Sigma^1_3$-Abs$(\ccc)$ and $\omega_1$ inaccessible to reals'' has the consistency strength of a $\Sigma^1_2$-indescribable cardinal. \end{thm} \begin{proof} First, observe that in order to prove that an inaccessible cardinal $\kappa$ has the $\Sigma^1_2$ reflection property, it suffices to prove the seemingly weaker property where we treat \textit{all second-order quantifiers as ranging over sets of ordinals}, rather than over arbitrary subsets of a structure. For notational reasons, we shall sometimes identify sets (those denoted by $X$, $\bar X$, $X^\star$ etc.) with their characteristic functions, and therefore write ``$X \upharpoonright \xi$'' for ``$X \cap \xi$''. Let $\kappa$ denote ${\omega_1}^V$, and work in $L$. Observe that $\kappa$ is inaccessible. Let $\Phi$ be some first-order formula (with parameter in $L_\kappa$, which we suppress), and let $X^\star \in L$ be some function from $\kappa$ into $2$, such that \[ \langle L_\kappa, X^\star \rangle \vDash \forall A \Phi(X^\star, A) \] We may naturally assume that for all $\xi < \kappa$ there is $A \subseteq \xi$ such that \[ \langle L_\xi, X^\star \upharpoonright \xi, A \rangle \vDash \neg \Phi(X^\star \upharpoonright \xi, A)\text{,} \] for otherwise, we are done\footnote{In other words, we may assume $\kappa$ is not weakly compact in $L$, as witnessed by $X^\star$ and $\Phi$.}. Varying the well-known construction of an Aronszajn-tree whose height is an inaccessible cardinal which is not weakly compact in $L$, we now define a tree $T$ and its ordering $\leq_T$: Elements of $T$ are tuples $(\beta,X)$, where $\beta < \kappa$ and $X \in {}^\delta 2$, for some $\delta$, and \begin{enumerate} \item $L_\beta = h^{L_\beta}_{\Sigma_1}(\card{X}\cup\{X\})$ (in particular, $X \in L_\beta$)\label{skolemhull} \item $X \upharpoonright \card{X} = X^\star \upharpoonright \card{X}$ \item for all $\xi \leq \dom(X)$, there is $A \in L_\beta$, a subset of $\xi$, such that \[ \langle L_\xi, X \upharpoonright \xi, A \rangle \vDash \neg \Phi( X \upharpoonright \xi, A). \] \label{counterex} \end{enumerate} Define $(\beta,X)\leq_T (\bar\beta, \bar X) \iff$ $X\leq_L \bar X$ and there is a $\Sigma_1$-elementary embedding $\sigma:L_\beta \rightarrow L_{\bar\beta}$ such that $\sigma(X)=\bar X$ and $\sigma$ is the identity on $\card{X}$. This can be motivated by observing that branches correspond to a failure of reflection, as will become clear in a moment. Let's check $\leq_T$ is a tree order. Clearly, $\leq_T$ is transitive and reflexive. Also, $\leq_T$ is antisymmetric: assume $(\beta, X) \leq (\beta',X')$ and $(\beta',X') \leq (\beta, X)$. Then $X \leq_L X'$ and $X' \leq_L X$, so $X=X'$; the embedding witnessing $(\beta, X)\leq_T (\beta',X')$ shows $L_\beta$ is isomorphic to $h^{L_{\beta'}}_{\Sigma_1}(\card{X'}\cup\{X'\})$ (by Fact \ref{skolemupper}); but the latter is just $L_{\beta'}$, by item (\ref{skolemhull}) in the definition of $T$. It remains to check that any two predecessors of a node are comparable: say $(\beta, X)$, $(\beta', X') \leq_T (\bar \beta, \bar X)$, as witnessed by embeddings $\sigma$ and $\sigma'$.
Without loss of generality assume $X \leq_L X'$, whence also $\card{X}\leq \card{X'}$ (if not, since $X'$ is a function on an ordinal, $X' \in L_{\card{X'}^+} \subseteq L_{\card{X}}$, contradiction). So (once more using Fact \ref{skolemupper}) $\ran(\sigma)=h^{L_{\bar \beta}}_{\Sigma_1}(\card{X}\cup\{ \bar X \}) \subseteq h^{L_{\bar \beta}}_{\Sigma_1}(\card{X'}\cup\{ \bar X \})=\ran(\sigma')$, whence $(\sigma')^{-1} \circ \sigma$ is a well-defined elementary embedding and so $(\beta, X) \leq_T (\beta', X')$. We now show $T$ is a $\kappa$-Aronszajn tree. First observe that for a node $(\bar \beta,\bar X)$ of $T$ and a cardinal $\alpha \leq \bar \beta$, there is \textit{exactly one} $t\leq_T (\bar \beta,\bar X)$ of cardinality $\alpha$. Existence: look at the transitive collapse $L_\beta$ of $h^{L_{\bar \beta}}_{\Sigma_1}(\alpha\cup\{ \bar X \})$ and let $X$ denote the image of $\bar X$ under the collapsing map (let $\sigma$ denote the inverse of this map). Then $\card{X}=\alpha$, so $L_\beta = h^{L_{\beta}}_{\Sigma_1}(\card{X}\cup\{ X \})$, by Fact \ref{skolemlower}. Item (\ref{counterex}) holds for $(\bar \beta, \bar X)$, so by a Skolem hull argument, it also holds for $(\beta,X)$. So $(\beta, X) \in T$. If $\alpha < \card{\bar X }$, $X \leq_L \bar X$, and $\sigma$ witnesses $(\beta,X) \leq_T (\bar \beta,\bar X)$. If $\alpha = \card{\bar X }$, by item (\ref{skolemhull}), $X=\bar X$ and $\beta=\bar \beta$. Uniqueness: say $(\beta, X)$, $(\beta', X') \leq_T (\bar \beta, \bar X)$, and $\alpha=\card{\beta}=\card{\beta'}$. By Fact \ref{skolemupper}, both $\langle L_\beta, X\rangle$ and $\langle L_{\beta'}, X' \rangle$ are isomorphic to $h^{L_{\bar \beta}}_{\Sigma_1}(\alpha\cup\{ \bar X \})$, so they are identical. As a corollary we obtain that if $(\beta,X) \in T$ and $\card{\beta} = \omega_\alpha$, $(\beta,X)$ has exactly $\alpha$ predecessors in $\leq_T$, i.e. the height of $(\beta,X)$ in T is $\alpha$. So $T\upharpoonright \alpha \subseteq L_{\omega_\alpha}$ ($T$ has small levels). $T$ has height at least $\kappa$: Let any $\alpha < \kappa$ be given. Let $X := X^\star \upharpoonright \omega_\alpha$ and let $L_\beta$ be the transitive collapse of $H:=h^{L_{\kappa}}_{\Sigma_1}(\omega_\alpha\cup\{ X \})$. It is easy to check that $(\beta,X) \in T$ (for item (\ref{counterex}), observe that $\dom(X) = \omega_\alpha \in H$) and we have seen its height is exactly $\alpha$. To conclude that $T$ is Aronszajn (in $V$), it remains to check: \begin{lem} $T$ does not have a branch of order-type $\kappa$ in $V$. \end{lem} \begin{proof} Else, let $(\beta(\alpha), X(\alpha))_{\alpha < \kappa}$ be such a branch. Let $\sigma^{\bar \alpha}_\alpha: L_{\beta(\alpha) } \rightarrow_{\Sigma_1}L_{\beta(\bar \alpha)}$ be the embedding witnessing $(\beta(\alpha), X(\alpha)) \leq_T (\beta(\bar \alpha), X(\bar \alpha))$. A straightforward argument involving the $\Sigma_1$-definable Skolem functions shows that for $\alpha < \alpha' < \bar \alpha$, $\sigma^{\bar \alpha}_{\alpha'}\circ \sigma^{\alpha'}_{\alpha}=\sigma^{\bar \alpha}_{\alpha}$. As $\kappa$ has uncountable cofinality, the direct limit of this chain of models is well-founded and a model of $V=L$, therefore isomorphic to some $L_\delta$. Each $L_{\beta(\alpha)}$ is $\Sigma_1$-elementarily embeddable into $L_\delta$ via a map that is the identity on $\card{\beta(\alpha)}^L$, and all the $X(\alpha)$ are mapped to one $X_0$ which must therefore end-extend $X^\star$ (in the sense that $X_0 \upharpoonright \kappa = X^\star)$. So $\delta > \kappa$ (as $X_0 \in L_\delta$).
By elementarity (and condition \ref{counterex} in the definition of $T$), there is $A \in L_\delta$, a subset of $\kappa$, such that $\langle L_\kappa, X_0 \upharpoonright \kappa , A \rangle \vDash \neg \Phi(X_0 \upharpoonright \kappa ,A)$, contradiction. \end{proof} Let's go back to working in $L$ again, for yet a little while. $T$ is not pruned (there are dying branches and branches that don't split), and $T$ needn't even have unique limit nodes (in the sense that for $t$ and $t'$ at a limit level $T_\lambda$, if $t$ and $t'$ have the same predecessors, then $t=t'$). The latter shortcoming has to be remedied, and this is accomplished easily by replacing $T$ by $T'$, where $T'\upharpoonright \omega = T \upharpoonright \omega$, $T'_{\alpha+1}=T_{\alpha}$ for any infinite ordinal $\alpha<\kappa$, while for limit ordinals $\lambda$ we set $T'_\lambda=\{\pred(t) | t \in T_\lambda \}$. $T'$ carries the obvious order ($t\leq_{T'} t'$ exactly if either $t \subseteq t'$ or $t \in t'$ or $t \subseteq \pred(t')$ or $t \leq_T t'$). Fix $\delta^\star$ such that $X^\star \in L_{\delta^\star}$. Pick $E$, a binary relation on $\kappa$, such that \begin{enumerate} \item $\langle \kappa, E \rangle \cong \langle L_{\delta^\star}, \in \rangle$, and \item $X^\star(\xi)=1 \iff (\xi +1)\; E \; \emptyset$. \end{enumerate} Define $C:= \{ \xi < \kappa | \xi$ is a cardinal and $\langle L_\xi, X^\star\upharpoonright\xi, E\cap(\xi\times\xi) \rangle \prec \langle L_\kappa, X^\star, E\rangle \}$. By inaccessibility of $\kappa$ this is a $cub$ set. Let $C$ be enumerated as $(c_\xi)_{\xi<\kappa}$. Now we work in $V$: let $s_\xi$ be a real coding, in some absolute manner, the tuple \[(T_{\xi+1},X^\star\upharpoonright c_\xi, E\cap (c_\xi \times c_\xi)).\] Apply the forcing just described (Fact \ref{fct:ccc-resh}) to code the sequence $S=(s_\xi)_{\xi<\omega_1}$ into a single real $r$, using $T'$ (which, abusing notation, we continue to denote by $T$). Consider any $\beta < \kappa$ such that $L_\beta[r]$ is a model of ``$ZF^-$ and $\omega_1$ exists''. Let $\alpha$ denote $\omega_1^{L_\beta[r]}$. We claim that for some $\xi \leq \alpha$, there is $x \in L_\beta \cap \mathcal P (\xi)$ such that for all $a \in L_\beta \cap \mathcal P (\xi)$, $\langle L_\xi, x, a\rangle \vDash \Phi(x,a)$, i.e. that from the point of view of $L_\beta$, reflection occurs before or at $\omega_1$. Assume otherwise; we show how to recursively reconstruct $(s_\xi)_{\xi < \alpha}$ inside $L_\beta[r]$, and then obtain a contradiction. We construct $(s_\xi)_{\xi < \eta}$ by recursion on $\eta \leq \alpha$. $s_0 \in L_\beta[r]$ is immediate. Now say $\eta=\gamma+1$: by induction hypothesis $(s_\xi)_{\xi \leq \gamma} \in L_\beta[r]$, so $T\upharpoonright \gamma+2 \in L_\beta[r]$. As in the proof of Fact \ref{fct:ccc-resh}, utilizing the specializing functions on that tree (coded recursively by $r$), we obtain $s_{\gamma+1} \in L_\beta[r]$. We shall now consider two cases simultaneously, since the next few steps of the argument are identical for both: \begin{enumerate} \item \label{omega1}$\eta < \alpha$ is a limit ordinal; in this case, we must show how to continue the construction of $(s_\xi)_{\xi<\alpha}$. \item \label{recursion}we have constructed $(s_\xi)_{\xi<\alpha}$ and $\eta=\alpha$; this leads to a contradiction. \end{enumerate} In any case, we may assume $(s_\xi)_{\xi<\eta} \in L_\beta[r]$, whence $X^\star \upharpoonright c_\eta$, $E \cap (c_\eta \times c_\eta) \in L_\beta[r]$.
$E \cap (c_\eta \times c_\eta)$ is of course a well-founded relation, and by the definition of $C$ and elementarity, its transitive collapse is equal to some $L_{\zeta^\star}$ such that $X^\star \upharpoonright c_\eta \in L_{\zeta^\star}$ and $\zeta^\star < \beta$. To be sure that the construction of $(s_\xi)_{\xi < \alpha}$ takes place entirely in $L_\beta[r]$, we note that since $(\omega_\eta)^L < \beta$, we have $(\omega_\eta)^L=(\omega_\eta)^{L_\beta}$. Work in $L$. Since $c_\eta \geq \omega_\eta$, $X^\star \upharpoonright \omega_\eta \in L_\beta$. For each $\xi < \eta$, look at the transitive collapse $L_{\beta(\xi)}$ of $h^{L_\beta}_{\Sigma_1}(\omega_\xi \cup \{X^\star \upharpoonright \omega_\eta\})$, and let $X(\xi)$ be the image of $X^\star \upharpoonright \omega_\eta$ under the collapsing map. By definability of the Skolem-hull operator and by Replacement in $L_\beta$, the sequence $(\beta(\xi),X(\xi))_{\xi < \eta}$ is an element of $L_\beta$. Observe that for $\xi < \eta$, $c_\xi$ is countable in $L_\beta[r]$, so $c_\eta \leq \alpha$ and thus $\omega_\eta \leq \alpha$. Hence, by assumption, for each $\xi \leq \omega_\eta$ there is $a \in L_\beta$, $a \subseteq \xi$, such that $\langle L_\xi,X^\star\upharpoonright \xi, a \rangle\vDash \neg \Phi(X^\star\upharpoonright \xi,a)$. This ensures item (\ref{counterex}) in the definition of $T$ holds for each $(\beta(\xi), X(\xi))$, so arguing just as in the proof showing $T$ is Aronszajn, we have $(\beta(\xi), X(\xi)) \in T$ and its height is $\xi$. So we have found a branch $b$ of order-type $\eta$ in $T \upharpoonright \eta$, $b \in L_\beta$. We work in $V$ again. Once more, as in the proof of Fact \ref{fct:ccc-resh}, we may recover, for all $k$, $F_k \upharpoonright (b\times \omega)$ from $b$ and $r$, inside $L_\beta[r]$ (observe that only one node on each level suffices for the construction). Now we proceed to argue by cases: in case (\ref{recursion}), $b$ has order-type $\alpha = {\omega_1}^{L_\beta[r]}$, contradicting $F_0 \upharpoonright (b\times \omega) \in L_\beta[r]$, as $F_0$ yields an order-preserving function from $b$ into the rationals. In case (\ref{omega1}), we must show $s_\eta \in L_\beta[r]$, and indeed, this holds as $n \in s_\eta \iff \sup\{ F_0(t,n) | t \in b \} \in Q_0$. This finishes the proof of the claim. So we have found, after forcing with a $\ccc$ partial order, a real $r$ with the $\Pi_1$ property \[ \begin{array}{c} \forall \beta < \omega_1 \text{, if }L_\beta[r]\vDash \text{``}ZF^-\text{ and $\omega_1$ exists'', then}\\ \exists \xi \leq (\omega_1)^{L_\beta[r]}\text{ such that }(L_\xi\vDash \exists X \forall A \Phi(X,A))^{L_\beta} \end{array} \] By $\Sigma^1_3$-absoluteness, we may assume that $r$ is in the ground model. But since $\omega_1$ is inaccessible to reals, we may look at $\beta:= (\omega_2)^{L[r]}$. By the above, for some $\xi \leq (\omega_1)^{L_\beta[r]}$, $(L_\xi\vDash \exists X \forall A \Phi(X,A))^{L_\beta}$, and therefore $(L_\xi\vDash \exists X \forall A \Phi(X,A))^L$. This completes one direction of the proof. For the other direction, assume $\kappa$ is $\Sigma^1_2$-indescribable. We show that after forcing with the L\'evy-collapse of $\kappa$, $\Sigma^1_3$-absoluteness for $\ccc$ forcing holds and $\kappa$ is inaccessible to reals. The latter is clear, as any real in the extension can be absorbed into an intermediate model where $\kappa$ is still inaccessible.
In $V^{Coll(\omega, \kappa )}$, let $P$ be a $\ccc$ partial order which forces a $\Sigma^1_3(r)$ statement $\phi(r)$, $r \in V^{Coll(\omega, \{ \alpha \} )}$, for some $\alpha < \kappa$. Firstly, we can assume that $\card{P} = \omega_1$: using the tree representation of $\Sigma^1_2$ sets, write $\phi(r)$ as ``there is a real $x$ such that $T(x)$ is well-founded''. Here, $T$ is a tree on $\omega_1$ which is $\Delta_0$ definable in the parameters $r$ and $\omega_1$. So $\Vdash_{P} ``\exists \dot{x}$ such that $T(\dot{x})$ is well-founded''. As $P$ has the $\ccc$, there is $\xi < \omega_2$ such that $\Vdash_{P} rank(T(\dot{x}))<\check{\xi}$, and there is a name $\dot{F}$ for a ranking function on $T(\dot{x})$, $\card{\dot{F}} = \omega_1$. Now let $M$ be an elementary submodel of $H_{\omega_2}$ such that $\dot x, \dot F$ and $ P$ are elements of $M$ and $\omega_1 +1 \subseteq M$. As the forcing relation for $\Delta_0$ sentences is uniformly $\Delta_0$ definable for transitive models of $ZF^-$, we can take the transitive collapse of $M$ and we have $\Vdash_{P'}$``$\dot F'$ is an order preserving function from $T(\dot x')$ into the ordinals'', where $\dot x', \dot F'$ and $P'$ are the images of $\dot x, \dot F$ and $ P$ under the collapsing map. Thus, since $P'$ preserves $\omega_1$, $\Vdash_{P'}\phi(r)$. This proves we can assume $P$ has size $\omega_1$. For the moment, we work in $ W:=V^{Coll(\omega, \{ \alpha \} )}$. Let $\dot P$ be a $Coll(\omega, \kappa)$-name for $P$. As $Coll(\omega, \kappa) \in H_{\kappa^+}$, we may assume $\dot P \in H_{\kappa^+}$, whence \begin{equation}\label{forcable} H_{\kappa^+}\vDash \exists Q \Vdash_Q \phi(r), \end{equation} as witnessed by $Q:=Coll(\omega, \kappa ) * P$. In $W^Q$, \[ \phi(r) \iff H_{\kappa^+}\vDash \exists u \forall w \psi (u,w,r) \] for a suitable $\Delta_0$ formula $\psi$ (e.g. such that $\forall w \psi(u,w,r)$ says that a certain tree $T(u,r)$ on $\omega$ is ill-founded, i.e. has no ranking function; use the tree representation for $\Pi^1_1$ sets). Using this equivalence and arguing as in \ref{fct:indescribable}(\ref{forcing}), in $W$ \eqref{forcable} is equivalent to \begin{equation}\label{forcable2} \exists Q \in H_{\kappa^+} \quad\exists \dot{u} \in H_{\kappa^+} \quad \forall \dot{w} \in H_{\kappa^+} \quad H_{\kappa^+} \vDash \psi'(\dot{u},\dot{w},r,Q), \end{equation} where $\psi'$ is $\Delta_1$. By $\Sigma^1_2$-indescribability of $\kappa$, \eqref{forcable2} holds with $\kappa^+$ replaced by $\kappa$. As \eqref{forcable} and \eqref{forcable2} are still equivalent when $\kappa^+$ is replaced by $\kappa$, there is $Q' \in H_\kappa$, $\Vdash_{Q'} \phi(r)$. In $V^{Coll(\omega, \kappa)}$, there is $H$ which is generic for $Q'$ over $W$, as $\card{\mathcal P (Q')}^W$ is collapsed to $\omega$. $\phi(r)$ holds in $W[H]$, and $\phi(r)$ is upward absolute between $W[H] \subseteq V^{Coll(\omega, \kappa)}$ and $V^{Coll(\omega, \kappa)}$. So $\phi(r)$ holds in $V^{Coll(\omega, \kappa)}$, whence $\Sigma^1_3$-absoluteness holds between this model and any subsequent $\ccc$ extension. \end{proof} \section*{Open questions} What are other applications of lightface indescribable cardinals? E.g. what is the consistency strength of ``two-step'' $\Sigma^1_3$-absoluteness for $\ccc$ forcing plus $\omega_1$ inaccessible to reals? 
Two-step $\Sigma^1_3$-absoluteness for $\ccc$ forcing means that for any $\ccc$ forcing $P$ and a $P$-name $\dot Q$ such that $P$ forces ``$\dot Q$ is a $\ccc$ partial ordering'', ${H_{\omega_1}}^V \prec_{\Sigma_2} {H_{\omega_1}}^{V^P}$ and $H_{\omega_1}^{V^P} \prec_{\Sigma_2} {H_{\omega_1}}^{V^{P\star \dot Q}}$.
{ "timestamp": "2022-09-20T02:22:51", "yymm": "2209", "arxiv_id": "2209.08693", "language": "en", "url": "https://arxiv.org/abs/2209.08693" }
\section{Introduction} \label{sec:intro} A nonnegative real function $f$ is said to be \emph{bell-shaped} if it is smooth, it converges to zero at $\pm\infty$, and for every $n = 0, 1, 2, \ldots$ the $n$th derivative $f^{(n)}$ changes sign exactly $n$ times. According to~\cite{karlin}, this notion of bell-shaped functions was introduced in the~1940s in the study of statistical games. Examples of bell-shaped functions include the densities of the normal distribution $\pi^{-1/2} \exp(-x^2)$, the Cauchy distribution $\pi^{-1} (1 + x^2)^{-1}$, and, more generally, stable distributions. The last claim became a popular conjecture after an incorrect proof appeared in~1983 in~\cite{gawronski}, and it was eventually proved in~\cite{kwasnicki}, extending a partial result due to Simon in~\cite{simon}. There are no compactly supported bell-shaped functions: this conjecture due to Schoenberg was proved already in~1950 by Hirschman in~\cite{hirschman}. However, many \emph{one-sided} functions, that is, functions supported in a half-line, are bell-shaped. Examples include the density functions of the Lévy distribution $\pi^{-1/2} x^{-3/2} e^{-1/x} \ind_{(0, \infty)}(x)$ and, more generally, of hitting times of 1-D diffusion processes; see~\cite{js}. The class of bell-shaped functions was completely characterised in~\cite{ks}, and it proved to be related to total positivity, infinite divisibility and the theory of Pick functions. The concept of bell-shaped functions has its obvious discrete analogue: \emph{bell-shaped sequences}. A two-sided nonnegative sequence $(a_k)$ (with $k \in \mathds{Z}$) is said to be \emph{bell-shaped} if it converges to zero at $\pm\infty$ and for every $n = 0, 1, 2, \ldots$ the $n$th iterated difference $(\Delta^n a_k)$ changes sign exactly $n$ times. A one-sided sequence is bell-shaped if the corresponding two-sided sequence, obtained by padding zeroes for negative indices, is bell-shaped. Although the notion of a bell-shaped sequence seems very natural, apparently it has not yet appeared in the mathematical literature. It is rather straightforward to verify that geometric sequences and, more generally, completely monotone sequences are bell-shaped. In Section~\ref{sec:examples} we provide a few more classes of bell-shaped sequences. The main purpose of this article is to prove the following characterisation theorem, which is a discrete analogue (for one-sided sequences) of the complete description of the class of bell-shaped functions developed in~\cite{js,kwasnicki,ks}. We say that a function $\varphi$ is \emph{increasing-after-rounding} if there is an increasing integer-valued function $\tilde{\varphi}$ such that $\tilde{\varphi} \le \varphi \le \tilde{\varphi} + 1$. Note that if $\lfloor \varphi \rfloor$ or $\lceil \varphi \rceil$ is increasing, then $\varphi$ is increasing-after-rounding, but the converse is not quite true: the latter condition is slightly more general.
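The sign-change condition in the definition of bell-shaped sequences is easy to probe numerically. The following Python sketch (our own illustration, not part of the article) verifies, in exact rational arithmetic, that the zero-padded geometric sequence $2^{-k}$ has exactly $n$ sign changes in its $n$th iterated difference for $n \le 8$; exact arithmetic matters here, since iterated differences suffer catastrophic cancellation in floating point.

\begin{verbatim}
from fractions import Fraction

def diff(seq):
    # one application of the forward difference operator Delta
    return [seq[i + 1] - seq[i] for i in range(len(seq) - 1)]

def sign_changes(seq):
    signs = [1 if v > 0 else -1 for v in seq if v != 0]
    return sum(s != t for s, t in zip(signs, signs[1:]))

# two-sided sequence: zeroes for negative indices, then (1/2)^k
a = [Fraction(0)] * 20 + [Fraction(1, 2) ** k for k in range(60)]

seq = a
for n in range(9):
    assert sign_changes(seq) == n, f"Delta^{n} should change sign {n} times"
    seq = diff(seq)
print("Delta^n changes sign exactly n times for n = 0, ..., 8")
\end{verbatim}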
\begin{theorem}\label{thm:main} For a nonnegative sequence $(a_k)$, the following are equivalent: \begin{enumerate}[label={\rm(\alph*)}] \item\label{thm:main:a} $(a_k)$ is a bell-shaped sequence; \item\label{thm:main:b} $(a_k)$ is the convolution of a summable Pólya frequency sequence and a completely monotone sequence which converges to zero; \item\label{thm:main:c} the generating function of $(a_k)$ is given by the formula \formula[eq:main]{ \sum_{k = 0}^\infty a_k x^k & = \exp \biggl( b x + c + \int_{-\infty}^\infty \biggl( \frac{1}{s - x} - \frac{s}{1 + s^2} \biggr) \varphi(s) ds \biggr) } for $x \in (0, 1)$, where $b \in [0, \infty)$, $c \in \mathds{R}$ and $\varphi$ is a nonnegative Borel function on $\mathds{R}$ such that: \begin{itemize} \item $\varphi$ is decreasing and integer-valued on $(-\infty, 0)$; \item $\varphi$ is equal to zero on $[0, 1]$; \item $\varphi$ is increasing-after-rounding on $(1, \infty)$; \item $\varphi(s) / s^2$ is integrable near $-\infty$ and near $\infty$; \item $(1 - \varphi(s)) / (1 - s)$ is nonintegrable in a right neighbourhood of $1$. \end{itemize} \end{enumerate} Furthermore, the right-hand side of~\eqref{eq:main} is the generating function of a bell-shaped sequence whenever the conditions on $b, c, \varphi$ listed in item~\ref{thm:main:c} are satisfied. \end{theorem} A sample function $\varphi$ satisfying the conditions listed in item~\ref{thm:main:c} is shown in Figure~\ref{fig:bs}. \begin{figure} \includegraphics[width=0.8\textwidth]{figure-1i.pdf} \caption{A sample function $\varphi$ in Theorem~\ref{thm:main}\ref{thm:main:c}. For an appropriate doubly infinite increasing sequence of \emph{points of increase} $s_k \in [-\infty, \infty]$ such that $s_{-1} < 0 < 1 = s_0$, we have $\varphi(s) = k$ for $s \in (s_{-k - 1}, s_{-k})$, and $k \le \varphi(s) \le k + 1$ for $s \in (s_k, s_{k + 1})$, where $k = 0, 1, 2, \ldots$} \label{fig:bs} \end{figure} \begin{remark} The above result looks very similar to Theorems~1.1 and~1.3 of~\cite{ks} (see also Theorem~1.1 in~\cite{kwasnicki}), which provide a similar description for bell-shaped functions. For a one-sided integrable function $f$, these results assert that the following three conditions are equivalent: (a)~$f$ is bell-shaped; (b) $f$ is the convolution of a Pólya frequency function and a completely monotone function; (c)~the Laplace transform of $f$ is given by an integral formula similar to~\eqref{eq:main}. The proof of our Theorem~\ref{thm:main} also follows the approach of~\cite{js,kwasnicki,ks}. Nevertheless, there are essential differences. The most important obstacle that needed to be overcome is related to the factorisation~\eqref{eq:bs:proof:3} of the expression appearing in the discrete variant of Post's inversion formula. In the case of bell-shaped functions, an analogous expression was automatically a product of two Laplace transforms: one of a Pólya frequency function and another one of a completely monotone function (or, for two-sided functions, an AM-CM function). In our case, the first factor in the right-hand side of~\eqref{eq:bs:proof:3} is indeed the generating function of a Pólya frequency sequence. The other one, however, is not the generating function of a completely monotone sequence, not even after passing to the limit as $n \to \infty$. In fact, the two factors need to be treated simultaneously, and additional arguments are needed in order to prove the desired factorisation of the limit. 
\end{remark} \begin{remark} Our motivation to study bell-shaped sequences, a subclass of unimodal sequences, comes from probability theory, where geometric properties of probability mass functions of discrete random variables play a certain role; we refer to~\cite{dj} for an account of unimodality, and to~\cite{bdk,dhkr,jw,ll,rao} for a sample of applications in statistics. Thus, we remark that a variant of Theorem~\ref{thm:main} for \emph{summable} bell-shaped sequences holds true, provided that in item~\ref{thm:main:b} the completely monotone sequence is required to be summable rather than to converge to zero, and in item~\ref{thm:main:c} the last condition on $\varphi$ is replaced by the following stronger one: $\varphi(s) / (s - 1)$ is integrable in a right neighbourhood of $1$. \end{remark} The remaining part of the article consists of five sections. Basic definitions and auxiliary results are gathered in Section~\ref{sec:pre}. Section~\ref{sec:repr} contains the core of the proof of Theorem~\ref{thm:main}, namely, the proof that condition~\ref{thm:main:a} implies condition~\ref{thm:main:c}. The proof of the other assertions of Theorem~\ref{thm:main} is given in Section~\ref{sec:exp}. In Section~\ref{sec:whale} we discuss briefly a closely related concept of whale-shaped sequences. Finally, we provide a number of examples in Section~\ref{sec:examples}. \section{Preliminaries} \label{sec:pre} Below we gather definitions and known results, as well as a few auxiliary lemmas, required for the proof of Theorem~\ref{thm:main}. \subsection{Sequences} Throughout the article, sequences (or one-sided sequences) are indexed with nonnegative integers $\mathds{N} = \{0, 1, 2, \ldots\}$, and doubly infinite sequences (or two-sided sequences) are indexed with integers $\mathds{Z} = \{\ldots, -1, 0, 1, 2, \ldots\}$. To improve clarity, we often use the function notation $a(k)$ for the $k$th term of a sequence instead of the more customary subscript notation $a_k$. We identify every sequence $(a(k) : k \in \mathds{N})$ with the corresponding doubly infinite sequence $(\bar{a}(k) : k \in \mathds{Z})$, defined by \formula{ \bar{a}(k) & = \begin{cases} a(k) & \text{for $k \ge 0$,} \\ 0 & \text{for $k < 0$.} \end{cases} } Whenever this causes no confusion, we do not distinguish between these two sequences, and we use the same symbol $a(k)$ to denote both of them. Additionally, whenever the meaning is clear from the context, we use the symbol $a(k)$ to denote both the $k$th term of a sequence and the entire sequence, and we tend to avoid the more formal, but less convenient notation $(a(k) : k \in \mathds{N})$ or $(a(k))$. The \emph{difference operator} $\Delta$ is defined by the formula \formula{ \Delta a(k) & = a(k + 1) - a(k) . } For $n = 0, 1, 2, \ldots$ we define the \emph{iterated difference operator} $\Delta^n$ inductively: we let $\Delta^0 a(k) = a(k)$ and $\Delta^{n + 1} a(k) = \Delta^n \Delta a(k)$ for $n = 0, 1, 2, \ldots$ The \emph{convolution} of sequences is defined in the usual way: \formula{ a * b(k) & = \sum_{j = -\infty}^\infty a(j) b(k - j) } whenever the series on the right-hand side converges. Note that the series reduces to a finite sum if both $a(k)$ and $b(k)$ are one-sided sequences. We have the following summation by parts formula: \formula{ \sum_{k = -\infty}^\infty a(k) \Delta b(k) & = -\sum_{k = -\infty}^\infty b(k) \Delta a(k - 1) } whenever either of the sums converges and the boundary terms go to zero: \formula{ \lim_{k \to \pm\infty} a(k) b(k) & = 0 .
} An $n$-fold application of summation by parts leads to a more general formula \formula{ \sum_{k = -\infty}^\infty a(k) \Delta^n b(k) & = (-1)^n \sum_{k = -\infty}^\infty b(k) \Delta^n a(k - n) , } again provided that all boundary terms go to zero: \formula{ \lim_{k \to \pm\infty} \Delta^j a(k - j) \Delta^{n - 1 - j} b(k) & = 0 } for $j = 0, 1, 2, \ldots, n - 1$. A sequence $a(k)$ is said to \emph{change sign} $N$ times, where $N = 0, 1, 2, \ldots$\,, if there exist indices $k_0 < k_1 < \ldots < k_N$ such that \formula{ a(k_{j - 1}) a(k_j) & < 0 & \text{for $j = 1, 2, \ldots, N$,} } and $N$ is the largest number with the above property. If such indices exist for every $N$, we say that $a(k)$ changes sign infinitely many times. A nonnegative sequence $a(k)$ is said to be: \begin{enumerate}[label=(\alph*)] \item a \emph{completely monotone sequence} if $(-1)^n \Delta^n a(k) \ge 0$ for $n, k = 0, 1, 2, \ldots$\,; \item a \emph{Pólya frequency sequence} if the infinite matrix $(a(k - l) : k, l \in \mathds{N})$ is \emph{totally positive}; \item a \emph{bell-shaped sequence} if $a(k)$ converges to zero as $k \to \infty$, and for every $n = 0, 1, 2, \ldots$ the doubly infinite sequence $\Delta^n a(k)$ changes sign $n$ times. \end{enumerate} To simplify the discussion, we exclude the sequence which is constant zero. All three notions are discussed in more detail later in this section, and the last one is the main subject of the present article. \subsection{Generating functions and moment sequences} The \emph{generating function} of a sequence $a(k)$ is given by \formula{ F(x) & = \sum_{k = 0}^\infty a(k) x^k } whenever the series converges. If $a(k)$ is summable, then the generating function is a holomorphic function in the unit disk in the complex plane, and it extends continuously to the unit circle. When $a(k)$ is merely bounded, then $F$ is still a holomorphic function in the unit disk, but it may fail to extend continuously to the boundary. The dual notion is the \emph{moment sequence}: if $\mu$ is a finite signed measure on $\mathds{R}$, then the moment sequence $a(k)$ of $\mu$ is defined as \formula{ a(k) & = \int_{\mathds{R}} s^k \mu(ds) } whenever the integral is well-defined. This is the case when, for example, $\mu$ is concentrated on a bounded interval. The following discrete variant of the classical Post's inversion formula for the Laplace transform plays a key role in our proof. For completeness, we provide a short probabilistic proof. \begin{theorem}[discrete Post's inversion formula; Theorem III.3 in~\cite{widder}] \label{thm:post} Suppose that $F$ is an integrable function on $(0, 1)$. Let \formula{ A(k) & = \int_{(0, 1)} x^k F(x) dx , } where $k = 0, 1, 2, \ldots$\,, be the moment sequence of $F$. Suppose furthermore that $F$ is continuous at $x \in (0, 1)$, and \formula{ \lim_{n \to \infty} \frac{j_n}{n} & = \frac{x}{1 - x} \, . } Then \formula{ F(x) & = \lim_{n \to \infty} (n + j_n + 1) \binom{n + j_n}{n} (-1)^n \Delta^n A(j_n) . } \end{theorem} \begin{proof} We first consider the case when $F$ is bounded on $(0, 1)$. Suppose that $X(1), X(2), \ldots$ is a sequence of i.i.d.\@ random variables uniformly distributed over $(0, 1)$. Let $X(j : k)$ denote the corresponding order statistic, i.e.\@ the $(j + 1)$th smallest value among $X(1), X(2), \ldots, X(k + 1)$. Suppose that \formula{ \lim_{n \to \infty} k_n & = \infty && \text{and} & \lim_{n \to \infty} \frac{j_n}{k_n} & = x . } We have \formula{ \lim_{n \to \infty} X(j_n : k_n) = x } with probability one.
(This relatively simple folklore fact follows easily from stronger results: asymptotic normality of order statistics, see~\cite{bahadur}, or the Glivenko--Cantelli theorem.) In particular, by Lebesgue's dominated convergence theorem, \formula{ \lim_{n \to \infty} \mathds{E} F(X(j_n : k_n)) & = F(x) . } On the other hand, \formula{ \mathds{P}(X(j : k) \in dx) & = (k + 1) \binom{k}{j} x^j (1 - x)^{k - j} dx , } and hence, if $0 < j < k$, \formula{ \mathds{E} F(X(j : k)) & = \int_0^1 (k + 1) \binom{k}{j} x^j (1 - x)^{k - j} F(x) dx \\ & = (k + 1) \binom{k}{j} \sum_{i = 0}^{k - j} \binom{k - j}{i} (-1)^i A(j + i) \\ & = (k + 1) \binom{k}{j} (-1)^{k - j} \Delta^{k - j} A(j) . } Choosing $k_n = n + j_n$ and observing that \formula{ & \text{if } \lim_{n \to \infty} \frac{j_n}{n} = \frac{x}{1 - x} \, , \text{ then } \lim_{n \to \infty} \frac{j_n}{k_n} = x , } we obtain the desired result. For a general integrable function $F$, the argument is very similar, but we use Vitali's convergence theorem instead of Lebesgue's dominated convergence theorem. Since we will not need this result here, we only sketch the proof. For a sufficiently small $\delta > 0$, the function $F$ is bounded on $(x - \delta, x + \delta)$. On the other hand, the density functions of $X(j_n : k_n)$ are easily shown to be bounded on $(0, 1) \setminus (x - \delta, x + \delta)$ uniformly with respect to $n = 1, 2, \ldots$\, This implies that the family of random variables $F(X(j_n : k_n))$ is uniformly integrable, and so Vitali's convergence theorem indeed applies to the limit of $\mathds{E} F(X(j_n : k_n))$. \ignore{It remains to show that if $j_n / n$ converges to $\frac{x}{1 - x}$ and $k_n = n + j_n$, then \formula{ f_n(s) & = (k_n + 1) \binom{k_n}{j_n} s^{j_n} (1 - s)^{k_n - j_n} } is bounded uniformly in $n = 1, 2, \ldots$ and $s \in (0, 1) \setminus (x - \delta, x + \delta)$. This is standard, but for completeness we provide a short argument. Recall that if $x_n = j_n / k_n$, then $x_n$ converges to $x$. The unimodal function $f_n$ (defined on $(0, 1)$) has a maximum at $x_n$, and so for $n$ sufficiently large, $f_n$ is increasing over $(0, x - \frac{\delta}{2}]$ and decreasing over $[x + \frac{\delta}{2}, 1)$. It follows that for $n$ sufficiently large and $s \in (0, x - \delta]$, we have \formula{ f_n(s) \le f_n(x - \delta) & \le \frac{2}{\delta} \int_{x - \delta}^{x - \delta/2} f_n(t) dt \\ & \le \frac{2}{\delta} \int_{(0, 1) \setminus (x - \delta/2, x + \delta/2)} f_n(t) dt \\ & \le \frac{2}{\delta} \, \mathds{P}(|X(j_n : k_n) - x| \ge \tfrac{\delta}{2}) . } A similar argument shows that the same estimate holds for $s \in [x + \delta, 1)$. Finally, since $X(j_n : k_n)$ converges to $x$ with probability one, the right-hand side of the above estimate clearly converges to zero as $n \to \infty$, and the desired uniform bound follows.} \end{proof} \subsection{Completely monotone sequences} Recall that a sequence $a(k)$ is \emph{completely monotone} if $(-1)^n \Delta^n a(k) \ge 0$ for $n, k = 0, 1, 2, \ldots$ We assume here that $a(k)$ is not constant zero. Every geometric sequence $s^k$ with $s \in [0, 1]$ is clearly completely monotone: if $a(k) = s^k$, then $\Delta^n a(k) = (-1)^n (1 - s)^n a(k)$ (here and below we assume that $0^0 = 1$). It follows that for every finite measure $\mu$ on $[0, 1]$ the moment sequence \formula[eq:hausdorff]{ a(k) & = \int_{[0, 1]} s^k \mu(ds) } is completely monotone.
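Although nothing in the formal development depends on numerical experiments, the implication just stated is easy to test in code. The following Python sketch is an illustration only: it takes for $\mu$ the Lebesgue measure on $[0, 1]$, so that $a(k) = 1 / (k + 1)$, and verifies the inequalities $(-1)^n \Delta^n a(k) \ge 0$ in exact rational arithmetic; the ranges of $n$ and $k$ are arbitrary.
\begin{verbatim}
from fractions import Fraction

# Moment sequence of Lebesgue measure on [0, 1]: a(k) = 1 / (k + 1).
def delta(seq):
    return [seq[k + 1] - seq[k] for k in range(len(seq) - 1)]

a = [Fraction(1, k + 1) for k in range(40)]
for n in range(12):
    # At this point the list holds Delta^n a(0), Delta^n a(1), ...
    assert all((-1) ** n * term >= 0 for term in a)
    a = delta(a)
print("(-1)^n Delta^n a(k) >= 0 verified for all n < 12")
\end{verbatim}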
By the famous theorem due to Hausdorff, the converse is true: every completely monotone sequence is the moment sequence of a finite measure on $[0, 1]$, that is, it is given by~\eqref{eq:hausdorff}. We refer to Section~III.4 in~\cite{widder} for further properties of completely monotone sequences. By Fubini's theorem, the generating function of a completely monotone sequence $a(k)$ given by~\eqref{eq:hausdorff} satisfies \formula[eq:cm:gen]{ F(x) & = \sum_{k = 0}^\infty a(k) x^k = \int_{[0, 1]} \sum_{k = 0}^\infty s^k x^k \mu(ds) = \int_{[0, 1]} \frac{1}{1 - s x} \, \mu(ds) } when $|x| < 1$. In particular, $F$ extends to a holomorphic function in $\mathds{C} \setminus [1, \infty)$, and it is a \emph{Pick function} on $(-\infty, 1)$, a notion discussed later in this section; see Section~\ref{sec:cm:pick} for details. As a direct consequence of Fubini's theorem, a completely monotone sequence $a(k)$ given by~\eqref{eq:hausdorff} converges to zero if and only if $\mu(\{1\}) = 0$, and $a(k)$ is summable if and only if additionally \formula{ \int_{[0, 1)} \frac{1}{1 - s} \, \mu(ds) < \infty . } In terms of the generating function $F$ given by~\eqref{eq:cm:gen}, $a(k)$ converges to zero if and only if \formula[eq:cm:zero]{ \lim_{x \to 1^-} (1 - x) F(x) & = 0 . } Indeed: the limit in the left-hand side is equal to $\mu(\{1\})$ by~\eqref{eq:cm:gen} and the dominated convergence theorem. Similarly, $a(k)$ is summable if and only if $F(1)$ is well-defined, or, equivalently, $F$ is bounded on $(0, 1)$. \subsection{Pólya frequency sequences} \label{sec:pf} By definition, a doubly infinite sequence $a(k)$ is a \emph{Pólya frequency sequence} if the infinite matrix $(a(k - l) : k, l \in \mathds{N})$ is \emph{totally positive}, that is, if all finite matrices $(a(k_i - l_j) : i, j \in \{1, 2, \ldots, n\})$, where $k_1 < k_2 < \ldots < k_n$ and $l_1 < l_2 < \ldots < l_n$, have nonnegative determinants. Again we assume here that $a(k)$ is not constant zero. The theory of Pólya frequency sequences is in large part due to Schoenberg. In particular, Edrei proved the conjecture of Schoenberg, which asserts that if $a(k)$ is a (one-sided) Pólya frequency sequence, then the generating function $F$ of $a(k)$ is given by \formula[eq:pf:gen]{ F(x) & = \sum_{k = 0}^\infty a(k) x^k = e^{b x + c} \prod_{m = 0}^\infty \frac{1 + q_m x}{1 - p_m x} \, , } where $b \in [0, \infty)$, $c \in \mathds{R}$, and $p_m$ and $q_m$ are summable nonnegative sequences. Conversely, the right-hand side of the above formula always defines the generating function of a Pólya frequency sequence. This result is given as Theorem~11.5.3 in Karlin's monograph~\cite{karlin} on total positivity. A Pólya frequency sequence $a(k)$ satisfying~\eqref{eq:pf:gen} is bounded if and only if $p_m \le 1$ for every $m$, and it is summable if and only if $p_m < 1$ for every $m$. For a summable Pólya frequency sequence $a(k)$, the generating function $F$ again extends to a holomorphic function in $\mathds{C} \setminus [1, \infty)$, and its complex logarithm (which is continuous and takes a real value at $0$) turns out to be a \emph{Pick function} on $(0, 1)$, as shown later in this section; see Section~\ref{sec:pf:pick} for details. We remark that for a general Pólya frequency sequence $a(k)$, the complex logarithm of the generating function $F$ is a Pick function on $(-(\max_{m \in \mathds{N}} q_m)^{-1}, (\max_{m \in \mathds{N}} p_m)^{-1})$, but we will not need this result. 
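As a concrete illustration of Edrei's representation (again, not used anywhere in the proofs), the following Python sketch expands the product $e^{b x} (1 + q x) / (1 - p x)$ into its Taylor coefficients and spot-checks total positivity of the associated Toeplitz matrix on all $2 \times 2$ minors with small indices. The parameter values and truncation level are arbitrary, and a small tolerance accounts for floating-point rounding.
\begin{verbatim}
import itertools, math

b, p, q, N = 0.3, 0.5, 0.7, 24

def conv(u, v):
    w = [0.0] * N
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            if i + j < N:
                w[i + j] += ui * vj
    return w

# Taylor coefficients of e^{bx}, of 1 + qx and of 1/(1 - px), convolved.
a = conv(conv([b ** k / math.factorial(k) for k in range(N)], [1.0, q]),
         [p ** k for k in range(N)])

def A(m):                    # two-sided extension: a(m) = 0 for m < 0
    return a[m] if m >= 0 else 0.0

for k1, k2 in itertools.combinations(range(12), 2):
    for l1, l2 in itertools.combinations(range(12), 2):
        assert A(k1 - l1) * A(k2 - l2) - A(k1 - l2) * A(k2 - l1) >= -1e-9
print("all 2x2 minors of (a(k - l)) with indices below 12 are nonnegative")
\end{verbatim}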
Every summable Pólya frequency sequence $a(k)$ has the \emph{variation diminishing property}: the convolution with $a(k)$ does not increase the number of sign changes; see Theorem~5.1.5 in~\cite{karlin}. Although this will not be needed in this article, we mention that the converse is also true: every summable nonnegative sequence with the variation diminishing property is a Pólya frequency sequence. The last property follows from the results of Chapter~5 in~\cite{karlin} in much the same way as Theorem~5.4.2 therein is proved. See also Chapter~IV in~\cite{hw} for a related discussion. We remark that if a random variable $X$ is a sum of independent random variables: a Poissonian one with parameter $b$, a series of geometric ones with parameters $p_m$, and a series of Bernoulli ones with parameters $q_m / (q_m + 1)$, then the sequence $a(k) = \mathds{P}(X = k)$ is a Pólya frequency sequence satisfying~\eqref{eq:pf:gen} with an appropriate normalisation constant $c$. \subsection{Pick functions} A \emph{Pick function} is a holomorphic map from the (open) upper complex half-plane to its closure. Other names are commonly used for this notion, including: \emph{Nevanlinna function} (not to be confused with the \emph{Nevanlinna class} of functions), \emph{Nevanlinna--Pick function}, \emph{Herglotz function}. By a classical result due to Herglotz, every Pick function $F$ admits the \emph{Stieltjes representation} \formula[eq:pick]{ F(x) & = b x + c + \int_{\mathds{R}} \biggl( \frac{1}{s - x} - \frac{s}{1 + s^2} \biggr) \mu(ds) , } where $b \in [0, \infty)$, $c \in \mathds{R}$ and $\mu$ is a measure on $\mathds{R}$ such that $\int_{\mathds{R}} (1 + s^2)^{-1} \mu(ds) < \infty$. Conversely, the right-hand side of~\eqref{eq:pick} defines a Pick function for every admissible $b$, $c$ and $\mu$. This result is given as Theorem II.1 in Donoghue's monograph~\cite{donoghue} on Loewner's theorem. The measure $\mu$ in~\eqref{eq:pick} can be recovered as the boundary limit of the imaginary part of $F$: \formula{ \mu(ds) & = \lim_{t \to 0^+} \frac{1}{\pi} \, \Im F(s + i t) ds , } in the sense of the vague limit of measures; see Lemma~II.1 in~\cite{donoghue}, or formula~(3.10) in Remling's book~\cite{remling} on canonical systems. Formula~\eqref{eq:pick} can be rephrased in the following convenient way: \formula[eq:pick:var]{ F(x) & = c + \int_{\mathds{R} \cup \{\infty\}} \frac{1 + s x}{s - x} \, \tilde{\mu}(ds) , } where $c \in \mathds{R}$ is the same constant as in~\eqref{eq:pick}, and $\tilde{\mu}$ is a finite measure on $\mathds{R} \cup \{\infty\}$, the one-point compactification of $\mathds{R}$, given by \formula{ \tilde{\mu}(ds) & = \frac{1}{1 + s^2} \, \mu(ds) + b \delta_\infty(ds) . } Here we understand that for $s = \infty$ the integrand is equal to $(1 + s x) / (s - x) = x$. We refer to formula~(3.9) in~\cite{remling} for further details on the above reformulation. Suppose that $F$ is a Pick function. If $F$ is not constant zero, then, by the maximum modulus principle, $F$ has no zeroes in the upper complex half-plane. Therefore, the complex logarithm $\log F$ of $F$ is well defined (in the sense of the principal branch), and since $\Im \log F = \Arg F$ is nonnegative, $\log F$ is again a Pick function. Furthermore, since $\Im \log F = \Arg F$ takes values in $[0, \pi]$, in the Stieltjes representation~\eqref{eq:pick} of $\log F$ we necessarily have $b = 0$ and $\mu$ absolutely continuous, with density function taking values in $[0, 1]$.
This leads to the \emph{exponential representation} of Pick functions: every nonzero Pick function $F$ is given by \formula[eq:pick:exp]{ F(x) & = \exp \biggl(\gamma + \int_{-\infty}^\infty \biggl( \frac{1}{s - x} - \frac{s}{1 + s^2} \biggr) \varphi(s) ds\biggr) } for some constant $\gamma \in \mathds{R}$ and some Borel function $\varphi$ on $\mathds{R}$ with values in $[0, 1]$. Conversely, the right-hand side of~\eqref{eq:pick:exp} defines a nonzero Pick function for all admissible parameters $\gamma$ and $\varphi$. Furthermore, \formula{ \varphi(s) ds & = \lim_{t \to 0^+} \frac{1}{\pi} \, \Arg F(s + i t) ds } in the sense of vague convergence of measures, and, in fact, $\varphi(s)$ is the pointwise limit of $\pi^{-1} \Arg F(s + i t)$ for almost every $s \in \mathds{R}$. We refer to formula~(II.6) in~\cite{donoghue}, and to Section~7.2 in~\cite{remling}. We say that $F$ is a \emph{Pick function on $(\alpha, \beta)$}, where $-\infty \le \alpha < \beta \le \infty$, if $F$ is a Pick function which extends continuously to the interval $(\alpha, \beta)$ on the real axis, and takes real values there. By Schwarz's reflection principle, in this case $F$ extends to a holomorphic function on $\mathds{C} \setminus ((-\infty, \alpha] \cup [\beta, \infty))$, satisfying $F(\overline{x}) = \overline{F(x)}$. In terms of the Stieltjes representation~\eqref{eq:pick}, a Pick function $F$ is a Pick function on $(\alpha, \beta)$ if and only if $\mu((\alpha, \beta)) = 0$; see Lemma~II.2 in~\cite{donoghue}. Thus, if $F$ is a Pick function on $(\alpha, \beta)$, then representations~\eqref{eq:pick} and~\eqref{eq:pick:var} are valid for all $x \in \mathds{C} \setminus ((-\infty, \alpha] \cup [\beta, \infty))$. In particular, Pick functions on $(\alpha, \beta)$ are increasing on $(\alpha, \beta)$. Similarly, a Pick function $F$ is a Pick function on $(\alpha, \beta)$ such that $F > 0$ on $(\alpha, \beta)$ if and only if $\varphi = 0$ almost everywhere on $(\alpha, \beta)$ in the exponential representation~\eqref{eq:pick:exp}. \subsection{Convergence of Pick functions} A sequence of Pick functions converges pointwise in the upper complex half-plane if and only if it converges uniformly on compact subsets of the upper complex half-plane, and the limit is necessarily again a Pick function. Furthermore, convergence of a sequence of Pick functions is equivalent to the convergence of the corresponding parameters $c$ (usual convergence of real numbers) and $\tilde{\mu}$ (weak convergence of measures on $\mathds{R} \cup \{\infty\}$) in the modified Stieltjes representation~\eqref{eq:pick:var}. This is essentially Theorem~7.3(a) in~\cite{remling}, see also Lemma~II.3 in~\cite{donoghue}. A sequence of Pick functions on $(\alpha, \beta)$ converges pointwise on $(\alpha, \beta)$ if and only if it converges in the sense described above, and the limit is necessarily again a Pick function on $(\alpha, \beta)$. This can be proved as Theorem~7.3(a) in~\cite{remling}, using a modified criterion for compactness of a set of Pick functions: Lemma~II.4 in~\cite{donoghue}. Another approach is to apply the much more general result given in Theorem~7.4(b) in~\cite{remling}. We remark that Pick functions on $(0, \infty)$ which are nonnegative on $(0, \infty)$ form the class of \emph{complete Bernstein functions}, while the class of functions $F$ such that $-F$ is a Pick function on $(0, \infty)$ which is nonpositive on $(0, \infty)$ is the class of \emph{Stieltjes functions}. 
Pick functions on $(0, \infty)$ are called \emph{extended complete Bernstein functions} in~\cite{ssv}. We refer to that book for a detailed discussion of these classes of functions, and here we only mention that Stieltjes functions have the Stieltjes representation \formula[eq:stieltjes]{ F(x) & = c + \int_{[0, \infty)} \frac{1}{s + x} \, \mu(ds) , } and that if $F$ is a nonzero Stieltjes function, then $1 / F$ is a Pick function on $(0, \infty)$ which is positive on $(0, \infty)$, and thus \formula[eq:stieltjes:exp]{ \frac{1}{F(x)} & = \exp \biggl(\gamma + \int_{-\infty}^0 \biggl( \frac{1}{s - x} - \frac{s}{1 + s^2} \biggr) \varphi(s) ds\biggr) , } where $\varphi$ is a Borel function with values in $[0, 1]$. \subsection{Generating functions of completely monotone sequences} \label{sec:cm:pick} We argue that, as already remarked above, the generating functions of completely monotone sequences are Pick functions. Let $a(k)$ be a completely monotone sequence, and let $F$ be the generating function of $a(k)$, given by~\eqref{eq:cm:gen}. It is straightforward to see that the right-hand side of this formula defines a Pick function on $(-\infty, 1)$ which is nonnegative on $(-\infty, 1)$. It follows that the exponential representation of $F$ takes the form \formula[eq:cm:exp]{ F(x) & = \exp \biggl(\gamma + \int_1^\infty \biggl( \frac{1}{s - x} - \frac{s}{1 + s^2} \biggr) \varphi(s) ds\biggr) , } where $\gamma \in \mathds{R}$ and $\varphi$ is a Borel function on $(1, \infty)$ taking values in $[0, 1]$; see Figure~\ref{fig:pff:cm}. Furthermore, \formula{ \varphi(s) & = \lim_{t \to 0^+} \frac{1}{\pi} \, \Arg F(s + i t) } for almost every $s \in (1, \infty)$. Conversely, it is easy to see that every $\gamma \in \mathds{R}$ and every Borel function $\varphi$ on $(1, \infty)$ with values in $[0, 1]$ correspond in the way described above to a completely monotone sequence. Observe that \formula{ \log \frac{1}{1 - x} & = -\frac{\log 2}{2} + \int_1^\infty \biggl( \frac{1}{s - x} - \frac{s}{1 + s^2} \biggr) ds . } Thus, with the notation introduced above, \formula{ \log \frac{1}{(1 - x) F(x)} & = -\frac{\log 2}{2} - \gamma + \int_1^\infty \biggl( \frac{1}{s - x} - \frac{s}{1 + s^2} \biggr) (1 - \varphi(s)) ds . } By the above identity and~\eqref{eq:cm:zero}, a completely monotone sequence $a(k)$ converges to zero if and only if \formula{ \lim_{x \to 1^-} \int_1^\infty \biggl( \frac{1}{s - x} - \frac{s}{1 + s^2} \biggr) (1 - \varphi(s)) ds & = \infty . } By the monotone convergence theorem, the above condition is equivalent to \formula{ \int_1^\infty \biggl( \frac{1}{s - 1} - \frac{s}{1 + s^2} \biggr) (1 - \varphi(s)) ds & = \infty , } that is, to nonintegrability of $(1 - \varphi(s)) / (s - 1)$ in a right neighbourhood of $1$. Finally, $a(k)$ is summable if and only if $F(x)$ is bounded on $(0, 1)$, or, equivalently, \formula{ \lim_{x \to 1^-} \int_1^\infty \biggl( \frac{1}{s - x} - \frac{s}{1 + s^2} \biggr) \varphi(s) ds & < \infty . } By the monotone convergence theorem, the above condition is equivalent to \formula{ \int_1^\infty \biggl( \frac{1}{s - 1} - \frac{s}{1 + s^2} \biggr) \varphi(s) ds & < \infty , } that is, to integrability of $\varphi(s) / (s - 1)$ in a right neighbourhood of $1$. \begin{figure} \includegraphics[width=0.8\textwidth]{figure-2i.pdf} \caption{A sample function $\varphi$ in the representation~\eqref{eq:cm:exp} of the generating function of a completely monotone sequence (purple) and in the representation~\eqref{eq:pf:exp} of the generating function of a Pólya frequency sequence (red).
The corresponding sequences are convolution factors of a bell-shaped sequence, which in turn corresponds to the function $\varphi$ depicted in Figure~\ref{fig:bs}.} \label{fig:pff:cm} \end{figure} \subsection{Generating functions of Pólya frequency sequences} \label{sec:pf:pick} We now show that the generating functions of Pólya frequency sequences are exponentials of Pick functions. Consider a summable Pólya frequency sequence $a(k)$, and again let $F$ denote the generating function of $a(k)$. By~\eqref{eq:pf:gen}, we have \formula[eq:pf:gen:log]{ F(x) & = \exp \biggl( b x + c + \sum_{m = 0}^\infty \log(1 + q_m x) - \sum_{m = 0}^\infty \log(1 - p_m x) \biggr) , } where $\log$ in the right-hand side denotes the principal branch of the complex logarithm. Recall that here $p_m$ and $q_m$ are summable nonnegative sequences, and $p_m < 1$ for every $m$. It is again straightforward to verify that the exponent in the right-hand side of~\eqref{eq:pf:gen:log} defines a Pick function on $(0, 1)$, and since we have \formula{ \log(1 + q x) & = \int_{-\infty}^{-1/q} \biggl( \frac{1}{s - x} - \frac{1}{s} \biggr) ds , \\ \log \frac{1}{1 - p x} & = \int_{1/p}^\infty \biggl( \frac{1}{s - x} - \frac{1}{s} \biggr) ds } (where we agree that $1/0 = \infty$ and $-1/0 = -\infty$), we find that the Stieltjes representation~\eqref{eq:pick} of $\log F$ reads \formula*[eq:pf:exp]{ F(x) & = \exp \biggl( b x + c + \int_{-\infty}^\infty \biggl( \frac{1}{s - x} - \frac{1}{s} \biggr) \varphi(s) ds \biggr) \\ & = \exp \biggl( b x + \tilde{c} + \int_{-\infty}^\infty \biggl( \frac{1}{s - x} - \frac{s}{1 + s^2} \biggr) \varphi(s) ds \biggr) , } where \formula[eq:pf:exp:phi]{ \varphi(s) & = \sum_{m = 0}^\infty \ind_{(-\infty, -1/q_m)}(s) + \sum_{m = 0}^\infty \ind_{(1/p_m, \infty)}(s) } and \formula{ \tilde{c} & = c + \int_{-\infty}^\infty \biggl( \frac{s}{1 + s^2} - \frac{1}{s} \biggr) \varphi(s) ds ; } see Figure~\ref{fig:pff:cm}. The integrals here converge, because $\varphi$ vanishes in a neighbourhood of zero and satisfies the integrability condition~\eqref{eq:pf:int} below. Note that in the derivation of~\eqref{eq:pf:exp}, we applied Fubini's theorem to exchange the sum and the integral. Another way to justify this step is to observe that adding a series of Pick functions is equivalent to adding the corresponding parameters $b$, $c$ and $\mu$ in their Stieltjes representations~\eqref{eq:pick}. The converse to the above representation is true: every $b \in [0, \infty)$, $c \in \mathds{R}$ and every Borel function $\varphi$ as in~\eqref{eq:pf:exp:phi}, where $p_m$ and $q_m$ are summable nonnegative sequences and $p_m < 1$ for every $m = 0, 1, 2, \ldots$\,, correspond to a summable Pólya frequency sequence. In the terminology introduced in the following section, $\varphi$ given by~\eqref{eq:pf:exp:phi} is stepwise decreasing on $(-\infty, 0)$ and stepwise increasing on $(1, \infty)$. Summability of the sequences $p_m$ and $q_m$ is easily found to be equivalent to the integrability condition \formula[eq:pf:int]{ & \text{$\varphi(s) / s^2$ is integrable near $-\infty$ and near $\infty$} } on the corresponding function $\varphi$ given by~\eqref{eq:pf:exp:phi}, and the inequality $p_m < 1$ for every $m = 0, 1, 2, \ldots$ simply means that $\varphi = 0$ in a right neighbourhood of $1$. Note that formula~\eqref{eq:pf:exp} can be treated as the exponential representation of the generating function $F$, similar to~\eqref{eq:pick:exp}. We stress, however, that typically $F$ fails to be a Pick function, and that the functions $\varphi$ defined by~\eqref{eq:pf:exp:phi} are typically not bounded from above by $1$.
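The two elementary integral identities displayed above are also easy to confirm numerically. The following Python sketch is an illustration only: the values of $p$, $q$ and $x$ are arbitrary, and the quadrature routine from \texttt{scipy} is used as a black box.
\begin{verbatim}
import math
from scipy.integrate import quad

p, q, x = 0.4, 0.5, 0.6

lhs1 = math.log(1.0 + q * x)
rhs1, _ = quad(lambda s: 1.0 / (s - x) - 1.0 / s, -math.inf, -1.0 / q)

lhs2 = math.log(1.0 / (1.0 - p * x))
rhs2, _ = quad(lambda s: 1.0 / (s - x) - 1.0 / s, 1.0 / p, math.inf)

print(lhs1, rhs1)   # both approximately log(1.30) = 0.2624
print(lhs2, rhs2)   # both approximately log(1/0.76) = 0.2744
\end{verbatim}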
\subsection{Auxiliary classes of monotone functions} \label{sec:step:rect} We say that a real-valued function $\varphi$ is \emph{stepwise increasing} on an interval $(\alpha, \beta)$, where $-\infty \le \alpha < \beta \le \infty$, if, on this interval, $\varphi$ is increasing and takes only integer values. \emph{Stepwise decreasing} functions are defined in a similar way; this condition already appeared in Theorem~\ref{thm:main}\ref{thm:main:c}. If $\varphi$ is a stepwise increasing function on $(\alpha, \beta)$, then we have $\varphi(s) = k$ for $s \in (s_k, s_{k + 1})$, where the \emph{points of increase} $s_k \in [\alpha, \beta]$ form a doubly infinite increasing sequence, which converges to $\alpha$ as $k \to -\infty$, and which converges to $\beta$ as $k \to \infty$. Furthermore, the points of increase are determined uniquely by $\varphi$. We say that a Borel real-valued function $\varphi$ is \emph{monotone-after-rounding} on $(\alpha, \beta)$ if for every integer $n$ the function $\varphi - n$ changes sign at most once in $(\alpha, \beta)$. More precisely, we say that $\varphi$ is \emph{increasing-after-rounding} if $\varphi - n$ changes sign from negative to positive for some $n \in \mathds{Z}$, and that $\varphi$ is \emph{decreasing-after-rounding} otherwise; both notions include the case when $\varphi - n$ has constant sign for every $n \in \mathds{Z}$. This definition is easily seen to be equivalent to the one given just before the statement of Theorem~\ref{thm:main}. Indeed: for a function $\varphi$ which is monotone-after-rounding on $(\alpha, \beta)$, we have $\varphi(s) \in [k, k + 1]$ for $s \in (s_k, s_{k + 1})$, where the \emph{points of increase} $s_k \in [\alpha, \beta]$ again form an appropriately chosen doubly infinite sequence convergent to $\alpha$ as $k \to -\infty$ and to $\beta$ as $k \to \infty$. Note, however, that the points of increase need not be defined uniquely. Observe that if $\varphi_1$ is stepwise increasing and $\varphi_2$ is a Borel function taking values in $[0, 1]$, then $\varphi_1 + \varphi_2$ is increasing-after-rounding on $(\alpha, \beta)$ (with the same points of increase $s_k$). Conversely, if $\varphi$ is increasing-after-rounding, then we can define a stepwise increasing function $\varphi_1$ (with the same numbers $s_k$) such that $\varphi_2 = \varphi - \varphi_1$ only takes values in $[0, 1]$. Note that with the above definition, formula~\eqref{eq:pf:exp:phi} can be equivalently phrased as follows: $\varphi$ is stepwise decreasing over $(-\infty, 0)$, stepwise increasing over $(1, \infty)$, and equal to zero in an open interval containing $[0, 1]$. Suppose that $\varphi_n$ are Borel functions with values in $[0, 1]$ on an interval $(\alpha, \beta)$, and that the measures $\varphi_n(s) ds$ converge vaguely to a measure $\mu$. Then $\mu$ has a density function on $(\alpha, \beta)$ which takes values in $[0, 1]$. Indeed: for every nonnegative continuous $f$ whose support is a compact subset of $(\alpha, \beta)$ we have \formula{ 0 & \le \int_{-\infty}^\infty f(s) \varphi_n(s) ds \le \int_{-\infty}^\infty f(s) ds , } and hence \formula{ 0 & \le \int_{-\infty}^\infty f(s) \mu(ds) \le \int_{-\infty}^\infty f(s) ds . } This implies that on $(\alpha, \beta)$ the measure $\mu$ is absolutely continuous with respect to the Lebesgue measure, with density function taking values in $[0, 1]$, as desired. 
The following result asserts that, similarly, if $\varphi_n$ are stepwise monotone or monotone-after-rounding on an interval $(\alpha, \beta)$ and the measures $\varphi_n(s) ds$ converge vaguely to a measure $\mu$, then $\mu$ has a density function on $(\alpha, \beta)$ which is stepwise monotone or monotone-after-rounding, respectively. The proof of this fact is elementary, but somewhat lengthy. \begin{lemma} \label{lem:vague} Suppose that $\varphi_n$ are real-valued functions on $\mathds{R}$ such that the sequence of signed measures $\varphi_n(s) ds$ converges vaguely to a signed measure $\mu$. \begin{enumerate}[label={\rm (\alph*)}] \item\label{lem:vague:a} If $\varphi_n$ are stepwise increasing on an interval $(\alpha, \beta)$ with points of increase $s_{n, k}$, then $\mu$ has a stepwise increasing density function on $(\alpha, \beta)$. More precisely, for every integer $k$ the sequence $s_{n, k}$ converges as $n \to \infty$ to some limit $s_k$, and $s_k$ is the sequence of points of increase of the density function of $\mu$ on $(\alpha, \beta)$. \item\label{lem:vague:b} If $\varphi_n$ are increasing-after-rounding on an interval $(\alpha, \beta)$, then $\mu$ has an increasing-after-rounding density function on $(\alpha, \beta)$. More precisely, if $s_{n, k}$ denote the points of increase of $\varphi_n$ and if $s_k$ denote any partial limit of $s_{n, k}$ as $n \to \infty$, then $s_k$ is a sequence of points of increase of the density function of $\mu$ on $(\alpha, \beta)$. \end{enumerate} Similar results hold true for stepwise decreasing functions and functions which are decreasing-after-rounding. \end{lemma} \begin{proof} The proof uses the following property: if each of the functions $\psi_n$ is nonpositive in $(\alpha, s_n)$ and nonnegative in $(s_n, \beta)$, and the sequence of measures $\psi_n(s) ds$ converges vaguely to a measure $\mu$, then for any partial limit $\tilde{s}$ of $s_n$ the measure $\mu$ is nonpositive on $(\alpha, \tilde{s})$ and nonnegative on $(\tilde{s}, \beta)$. This follows directly from the definition of vague convergence: if $f$ is a nonnegative continuous function whose support is a compact subset of $(\alpha, \tilde{s})$, then \formula{ \int_{-\infty}^\infty f(s) \psi_n(s) ds & \le 0 \quad \text{infinitely often,} } and hence \formula{ \int_{-\infty}^\infty f(s) \mu(ds) & \le 0. } Thus, $\mu$ is a nonpositive measure on $(\alpha, \tilde{s})$. A similar argument shows that $\mu$ is a nonnegative measure on $(\tilde{s}, \beta)$. We first prove part~\ref{lem:vague:b}. Suppose that $\varphi_n$ are increasing-after-rounding on $(\alpha, \beta)$ and that $\varphi_n(s) ds$ converge vaguely to $\mu$. By the above property applied to $\psi_n = \varphi_n - k$, for every integer $k$ there is a number $\tilde{s}_k \in [\alpha, \beta]$ such that $\mu(ds) - k \, ds$ is nonpositive on $(\alpha, \tilde{s}_k)$ and nonnegative on $(\tilde{s}_k, \beta)$. Clearly, $\tilde{s}_k$ are increasing, and if $\tilde{s}_i < \tilde{s}_j$, then on $(\tilde{s}_i, \tilde{s}_j)$ we have \formula{ i \, ds & \le \mu(ds) \le j \, ds . } Thus, $\mu$ has a density function $\varphi$ on $(\tilde{\alpha}, \tilde{\beta})$, where \formula{ \tilde{\alpha} & = \lim_{k \to -\infty} \tilde{s}_k , \\ \tilde{\beta} & = \lim_{k \to \infty} \tilde{s}_k , } and $k \le \varphi(s) \le k + 1$ for $s \in (\tilde{s}_k, \tilde{s}_{k + 1})$. In other words, $\varphi$ is increasing-after-rounding on $(\tilde{\alpha}, \tilde{\beta})$. Finally, suppose that $\tilde{\beta} < \beta$.
Since $(\tilde{\beta}, \beta) \subseteq (\tilde{s}_k, \beta)$ for every integer $k$, we have \formula{ \mu(ds) & \ge k \, ds } on $(\tilde{\beta}, \beta)$ for every integer $k$, which is absurd. Thus, $\tilde{\beta} = \beta$, and a similar argument shows that $\tilde{\alpha} = \alpha$. The desired result follows. The proof of part~\ref{lem:vague:a} is very similar. Suppose that $\varphi_n$ are stepwise increasing on $(\alpha, \beta)$, with points of increase denoted by $s_{n, k}$, and assume that $\varphi_n(s) ds$ converge vaguely to $\mu$. Fix an integer $k$, and choose $\lambda \in (k - 1, k)$. Observe that $\psi_n = \varphi_n - \lambda$ changes sign only once, at $s_{n, k}$, and the location of the sign change does not depend on $\lambda$. By the property discussed in the first part of the proof, there exists $\tilde{s}_k$ such that $\mu(ds) \le \lambda \, ds$ on $(\alpha, \tilde{s}_k)$ and $\mu(ds) \ge \lambda \, ds$ on $(\tilde{s}_k, \beta)$ for every $\lambda \in (k - 1, k)$. Thus, we have $\mu(ds) \le (k - 1) ds$ on $(\alpha, \tilde{s}_k)$ and $\mu(ds) \ge k \, ds$ on $(\tilde{s}_k, \beta)$. This shows that $\mu$ has a stepwise increasing density function on an interval $(\tilde{\alpha}, \tilde{\beta})$ defined as in the proof of part~\ref{lem:vague:b}, and the same argument as before shows that in fact $\tilde{\alpha} = \alpha$ and $\tilde{\beta} = \beta$. In order to complete the proof, it remains to observe that the numbers $\tilde{s}_k$ are determined uniquely by $\mu$, and so all partial limits of $s_{n, k}$ as $n \to \infty$ are necessarily equal to $\tilde{s}_k$. \end{proof} \subsection{Basic properties of bell-shaped sequences} \label{sec:bell} Recall that a sequence $a(k)$ is said to be \emph{bell-shaped} if it is nonnegative, it converges to zero, and, extended to a doubly infinite sequence in such a way that $a(k) = 0$ for $k < 0$, it satisfies the following sign-change condition: for $n = 0, 1, 2, \ldots$ the sequence $\Delta^n a(k)$ changes sign exactly $n$ times. Note that a sequence which is constant zero is not bell-shaped. Right from the definition it follows that if $a(k)$ is bell-shaped and $n = 0, 1, 2, \ldots$\,, then $(-1)^n \Delta^n a(k) \ge 0$ for $k$ large enough. Using this property with $n$ replaced by $n + 1$, we find that the sequence $(-1)^n \Delta^n a(k)$ is eventually decreasing (that is, decreasing for $k$ large enough). \begin{lemma} \label{lem:bell:bound} If $a(k)$ is a bell-shaped sequence and $n = 0, 1, 2, \ldots$\,, then \formula{ \lim_{k \to \pm\infty} k^n \Delta^n a(k) & = 0 . } If additionally $a(k)$ is summable, then \formula{ \lim_{k \to \pm\infty} k^{n + 1} \Delta^n a(k) & = 0 . } \end{lemma} \begin{proof} Note that $a(k)$ converges to zero as $k \to \infty$ by definition. Furthermore, if $a(k)$ is additionally summable, then \formula{ k a(k) & = \sum_{j = 0}^\infty a(k) \ind_{[1, k]}(j) . } Since $a(k) \ind_{[1, k]}(j) \le \sup_{i \ge j} a(i)$ for every $j$, where the dominating sequence $\sup_{i \ge j} a(i)$ is summable (it coincides with $a(j)$ for $j$ beyond the last mode of the unimodal sequence $a(k)$), and since $a(k) \ind_{[1, k]}(j)$ converges to $0$ as $k \to \infty$ for every fixed $j$, the dominated convergence theorem implies that $k a(k)$ converges to zero as $k \to \infty$. By the eventual monotonicity of $\Delta^{n + 1} a(k)$, for $k$ large enough we have \formula{ (-1)^n \Delta^n a(\lfloor \tfrac{k}{2} \rfloor) & \ge (-1)^n \Delta^n a(\lfloor \tfrac{k}{2} \rfloor) - (-1)^n \Delta^n a(k) \\ & = \sum_{j = 1}^{\lceil k/2 \rceil} (-1)^{n + 1} \Delta^{n + 1} a(k - j) \\ & \ge \lceil \tfrac{k}{2} \rceil (-1)^{n + 1} \Delta^{n + 1} a(k) \ge 0 .
} Therefore, \formula{ |k \Delta^{n + 1} a(k)| & \le 2 |\Delta^n a(\lfloor \tfrac{k}{2} \rfloor)| } for $k$ large enough. The desired results now follow easily by induction. \end{proof} By the discrete counterpart of Rolle's theorem, the sequence $\Delta a(k)$ changes sign at least once between each two consecutive changes of sign of $a(k)$. If additionally $a(k)$ converges to zero as $k \to \pm\infty$, then $\Delta a(k)$ also changes sign at least once before the first sign change of $a(k)$, and at least once after the last sign change of $a(k)$. By induction, we find that if $a(k)$ converges to zero as $k \to \pm\infty$, then $\Delta^n a(k)$ changes sign at least $n$ times for $n = 0, 1, 2, \ldots$ Consequently, in order to show that such a sequence is bell-shaped, we only need to show that $\Delta^n a(k)$ changes sign at most $n$ times for $n = 0, 1, 2, \ldots$ \section{Generating functions of bell-shaped sequences} \label{sec:repr} In this section we prove the crucial part of our main result: the implication~\ref{thm:main:a}$\implies$\ref{thm:main:c} in Theorem~\ref{thm:main}. For convenience, we state this implication as a separate result: a representation theorem for generating functions of bell-shaped sequences. \begin{theorem}\label{thm:repr} If $(a(k))$ is a bell-shaped sequence, then the generating function of $(a(k))$ is given by \formula[eq:repr]{ F(x) & = \sum_{k = 0}^\infty a(k) x^k = \exp \biggl( b x + c + \int_{-\infty}^\infty \biggl( \frac{1}{s - x} - \frac{s}{1 + s^2} \biggr) \varphi(s) ds \biggr) } for $x \in (0, 1)$, where $b \in [0, \infty)$, $c \in \mathds{R}$ and $\varphi$ is a nonnegative Borel function on $\mathds{R}$ such that \begin{itemize} \item $\varphi$ is stepwise decreasing on $(-\infty, 0)$; \item $\varphi$ is equal to zero on $(0, 1)$; \item $\varphi$ is increasing-after-rounding on $(1, \infty)$; \item $\varphi(s) / s^2$ is integrable near $-\infty$ and near $\infty$; \item $(1 - \varphi(s)) / (s - 1)$ is nonintegrable in a right neighbourhood of $1$. \end{itemize} \end{theorem} We first prove the result for summable bell-shaped sequences, and only then discuss the necessary modifications in the general case. \begin{proof}[Proof of Theorem~\ref{thm:repr} for summable bell-shaped sequences] We denote by $\alpha_{n, m}$ the location of the $m$th sign change of $\Delta^n a(k - n)$: we let $\alpha_{n, -1} = -\infty$ and \formula[eq:bs:proof:alpha]{ \alpha_{n, m} & = \min\{k > \alpha_{n, m - 1} : (-1)^m \Delta^n a(k - n) > 0\} } for $m = 0, 1, 2, \ldots, n - 1$. Note that $0 < \alpha_{n, 0} < \alpha_{n, 1} < \ldots < \alpha_{n, n - 1}$, and thus $\alpha_{n, m} \ge m + 1$. The argument is broken into ten steps. \emph{Step 1.} Let $a(k)$ be a summable bell-shaped sequence, and let \formula{ F(x) & = \sum_{k = 0}^\infty a(k) x^k } be its generating function; here $|x| < 1$. Note that $F$ is continuous on $(0, 1)$, and thanks to summability of $a(k)$, $F$ is additionally bounded on $(0, 1)$. Define the moment sequence \formula{ A(k) & = \int_0^1 x^k F(x) dx } for $k = 0, 1, 2, \ldots$ By the discrete Post's inversion formula (Theorem~\ref{thm:post}), \formula[eq:bs:proof:1]{ F(x) & = \lim_{n \to \infty} (n + j_n + 1) \binom{n + j_n}{n} (-1)^n \Delta^n A(j_n) } whenever $x \in (0, 1)$ and \formula{ \lim_{n \to \infty} \frac{j_n}{n} & = \frac{x}{1 - x} \, . } Below we transform the right-hand side of~\eqref{eq:bs:proof:1}.
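Before doing so, we remark that the limit in~\eqref{eq:bs:proof:1}, although it converges rather slowly, can be observed numerically. The following Python sketch is an illustration only: it applies the discrete Post's inversion formula, in exact rational arithmetic, to the completely monotone (hence, by Theorem~\ref{thm:main}, bell-shaped) sequence $a(k) = 2^{-k}$, truncated at $J$ terms; we take $x = \tfrac{1}{2}$, so that $j_n = n$ is an admissible choice and $F(\tfrac{1}{2}) = \tfrac{4}{3}$. The truncation level $J$ and the value of $n$ are arbitrary.
\begin{verbatim}
from fractions import Fraction
from math import comb

J, n = 80, 30
jn = n              # j_n / n -> x / (1 - x) = 1 for x = 1/2

def A(k):           # A(k) = int_0^1 x^k F(x) dx = sum_j a(j) / (j + k + 1)
    return sum(Fraction(1, 2 ** j * (j + k + 1)) for j in range(J))

delta_n = sum((-1) ** (n - i) * comb(n, i) * A(jn + i)
              for i in range(n + 1))
approx = (n + jn + 1) * comb(n + jn, n) * (-1) ** n * delta_n
print(float(approx))   # close to F(1/2) = 4/3 = 1.3333...
\end{verbatim}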
\emph{Step 2.} For $k = 0, 1, 2, \ldots$\,, by Fubini's theorem, \formula{ A(k) & = \int_0^1 x^k \biggl( \sum_{j = 0}^\infty a(j) x^j \biggr) dx \\ & = \sum_{j = 0}^\infty \frac{a(j)}{j + k + 1} \, . } Therefore, \formula{ \Delta^n A(k) & = \sum_{j = 0}^\infty a(j) \Delta_k^n \frac{1}{j + k + 1} \\ & = \sum_{j = 0}^\infty a(j) \Delta_j^n \frac{1}{j + k + 1} , } where $\Delta_k$ and $\Delta_j$ denote the difference operators acting with respect to variables $k$ and $j$, respectively. Let $P$ be a polynomial of degree at most $n$, and let \formula{ Q(j) & = \frac{P(j) - P(-k - 1)}{j + k + 1} \, . } Then $Q$ is a polynomial of degree at most $n - 1$, and \formula{ \frac{P(-k - 1)}{j + k + 1} & = \frac{P(j)}{j + k + 1} - Q(j) . } Additionally, $\Delta^n Q(j) = 0$ for all $j$. Thus, \formula{ P(-k - 1) \Delta^n A(k) & = \sum_{j = 0}^\infty a(j) \Delta_j^n \frac{P(-k - 1)}{j + k + 1} \\ & = \sum_{j = 0}^\infty a(j) \Delta_j^n \frac{P(j)}{j + k + 1} . } As usual, we extend $a(k)$ to a two-sided sequence so that $a(k) = 0$ for $k < 0$. The $n$-fold application of summation by parts leads to \formula*[eq:bs:proof:2]{ P(-k - 1) \Delta^n A(k) & = \sum_{j = -\infty}^\infty a(j) \Delta_j^n \frac{P(j)}{j + k + 1} \\ & = \sum_{j = -\infty}^\infty (-1)^n \Delta^n a(j - n) \, \frac{P(j)}{j + k + 1} \, ; } here $k = 0, 1, 2, \ldots$\,, and we use the first assertion of Lemma~\ref{lem:bell:bound} to find that for $m = 0, 1, 2, \ldots, n - 1$ the boundary terms \formula{ & (-1)^m \Delta^m a(j - m) \Delta_j^{n - m - 1} \frac{P(j)}{j + k + 1} } converge to zero as $j \to \infty$. \emph{Step 3.} By combining the results of the first two steps (formulae~\eqref{eq:bs:proof:1} and~\eqref{eq:bs:proof:2}), we find that for an arbitrary sequence of polynomials $P_n$ of degree at most $n$, we have \formula*[eq:bs:proof:lim]{ F(x) & = \lim_{n \to \infty} \frac{n + j_n + 1}{P_n(-j_n - 1)} \binom{n + j_n}{n} \sum_{j = 0}^\infty \Delta^n a(j - n) \, \frac{P_n(j)}{j + j_n + 1} \\ & = \lim_{n \to \infty} \frac{n + j_n + 1}{n! P_n(-j_n - 1)} \biggl( \prod_{m = 0}^{n - 1} (m + j_n + 1) \biggr) \sum_{j = 0}^\infty \frac{P_n(j) \Delta^n a(j - n)}{j + j_n + 1} } for $x \in (0, 1)$, provided that $j_n / n$ converges to $\frac{x}{1 - x}$ as $n \to \infty$. We choose $P_n$ in such a way that $P_n(j) \Delta^n a(j - n) \ge 0$ for all $j = 0, 1, 2, \ldots$ More precisely, we set \formula[eq:bs:proof:pn]{ P_n(j) & = \prod_{m = 0}^{n - 1} (\alpha_{n, m} - j) , } where $0 < \alpha_{n, 0} < \alpha_{n, 1} < \ldots < \alpha_{n, n - 1}$ are the locations of sign changes of $\Delta^n a(k - n)$. Thus, \formula*[eq:bs:proof:3]{ F(x) & = \lim_{n \to \infty} \frac{n + j_n + 1}{n!} \biggl(\prod_{m = 0}^{n - 1} \frac{m + j_n + 1}{\alpha_{n, m} + j_n + 1} \biggr) \sum_{j = 0}^\infty \frac{P_n(j) \Delta^n a(j - n)}{j + j_n + 1} } for $x \in (0, 1)$ and $j_n$ as described above. \emph{Step 4.} In the right-hand side of~\eqref{eq:bs:proof:3}, the only element that depends on $x$ is the sequence $j_n$: we require that $j_n / n$ converges to $\frac{x}{1 - x}$. We introduce approximations to $F$, by formally replacing $(j_n + 1) / n$ (which also converges to $\frac{x}{1 - x}$) by $\frac{x}{1 - x}$. More precisely, it will be easier to temporarily fix $n$ and work with variable $y = \frac{n x}{1 - x}$; we will return to the original variable $x$ only in step~9. 
Thus, we define auxiliary functions $G_n$ by formally replacing $j_n + 1$ by $y$ in the expression under the limit in~\eqref{eq:bs:proof:3}: \formula[eq:bs:proof:4]{ G_n(y) & = \frac{n + y}{n!} \biggl( \prod_{m = 0}^{n - 1} \frac{m + y}{\alpha_{n, m} + y} \biggr) \sum_{j = 0}^\infty \frac{P_n(j) \Delta^n a(j - n)}{j + y} \, . } With this definition, formula~\eqref{eq:bs:proof:3} simply states that $G_n(j_n + 1)$ converges to $F(x)$ as $n \to \infty$. \emph{Step 5.} Observe that since $P_n(j) \Delta^n a(j - n) \ge 0$, the series in the right-hand side of~\eqref{eq:bs:proof:4} defines a Stieltjes function \formula[eq:bs:proof:5]{ H_n(y) & = \sum_{j = 0}^\infty \frac{P_n(j) \Delta^n a(j - n)}{j + y} \, ; } see~\eqref{eq:stieltjes}. The exponential representation~\eqref{eq:stieltjes:exp} of the Pick function $1 / H_n$ reads \formula[eq:bs:proof:6]{ H_n(y) & = \exp\biggl( -\delta_n - \int_{-\infty}^0 \biggl( \frac{1}{s - y} - \frac{s}{1 + s^2} \biggr) \psi_n(s) ds \biggr) } for some $\delta_n \in \mathds{R}$ and some Borel function $\psi_n$ with values in $[0, 1]$; here $y \in \mathds{C} \setminus (-\infty, 0]$. Furthermore, $\psi_n(s)$ is given almost everywhere by the boundary limit \formula{ \psi_n(s) & = \lim_{t \to 0^+} \frac{1}{\pi} \, \Arg \frac{1}{H_n(s + i t)} \\ & = -\lim_{t \to 0^+} \frac{\Arg H_n(s + i t)}{\pi} \, . } However, by definition~\eqref{eq:bs:proof:5}, $H_n$ is a meromorphic function, it is real-valued on the real axis, the poles of $H_n$ are located at those numbers $-j$ for which we have $P_n(j) \Delta^n a(j - n) > 0$, $j = 0, 1, 2, \ldots$\,, and $H_n$ is strictly decreasing between every two consecutive poles. This implies that $\psi_n$ only takes values $0$ and $1$, or, more precisely, \formula{ \psi_n(s) & = \begin{cases} 1 & \text{if $H_n(s) < 0$,} \\ 0 & \text{if $H_n(s) \ge 0$.} \end{cases} } for almost all $s \in \mathds{R}$. For $m = 0, 1, 2, \ldots$ the function $H_n$ is strictly decreasing on $(-m - 1, -m)$, and hence \formula[eq:bs:proof:beta]{ \{ s \in (-m - 1, -m) : H_n(s) < 0 \} & = (-\beta_{n, m}, -m) } for some $\beta_{n, m} \in [m, m + 1]$. It follows that \formula[eq:bs:proof:7]{ \psi_n & = \sum_{m = 0}^\infty \ind_{[-\beta_{n, m}, -m)} } almost everywhere on $\mathds{R}$. \emph{Step 6.} We now derive the exponential representation of the remaining factors in the right-hand side of~\eqref{eq:bs:proof:4}. For simplicity, in this step we write $\alpha_m = \alpha_{n, m}$ and $\beta_m = \beta_{n, m}$. We have \formula{ \frac{m + y}{\alpha_m + y} & = \exp\biggl( \int_{-\alpha_m}^{-m} \frac{1}{s - y} \, ds \biggr) } and \formula{ n + y & = \sqrt{1 + n^2} \, \exp\biggl( \int_{-\infty}^{-n} \biggl( \frac{1}{s - y} - \frac{s}{1 + s^2} \biggr) ds \biggr) ; } the constant factor $\sqrt{1 + n^2}$ is absorbed into the constant $\gamma_n$ below. By combining~\eqref{eq:bs:proof:4}, \eqref{eq:bs:proof:6} and the above two formulae, we find that \formula[eq:bs:proof:8]{ G_n(y) & = \exp \biggl( \gamma_n + \int_{-\infty}^0 \biggl( \frac{1}{s - y} - \frac{s}{1 + s^2} \biggr) \varphi_n(s) ds \biggr) } for $y \in \mathds{C} \setminus (-\infty, 0]$, where $\gamma_n \in \mathds{R}$ and \formula{ \varphi_n & = \ind_{(-\infty, -n)} + \sum_{m = 0}^{n - 1} \ind_{[-\alpha_m, -m)} - \psi_n } almost everywhere on $\mathds{R}$.
Using additionally~\eqref{eq:bs:proof:7}, we obtain \formula{ \varphi_n & = \ind_{(-\infty, -n)} + \sum_{m = 0}^{n - 1} \ind_{[-\alpha_m, -m)} - \sum_{m = 0}^\infty \ind_{[-\beta_m, -m)} \\ & = \sum_{m = 0}^{n - 1} \bigl(\ind_{[-\alpha_m, -m)} - \ind_{[-\beta_m, -m)}\bigr) + \sum_{m = n}^\infty \bigl(\ind_{[-m - 1, -m)} - \ind_{[-\beta_m, -m)}\bigr) } almost everywhere. Recall that $m \le \beta_m \le m + 1 \le \alpha_m$. Thus, \formula[eq:bs:proof:9]{ \varphi_n & = \sum_{m = 0}^{n - 1} \ind_{[-\alpha_m, -\beta_m)} + \sum_{m = n}^\infty \ind_{[-m - 1, -\beta_m)} } almost everywhere on $\mathds{R}$. Clearly, $\varphi_n \ge 0$, and so formula~\eqref{eq:bs:proof:8} asserts that $G_n$ is the exponential of a Pick function on $(0, \infty)$ (with parameters $b = 0$ and $c = \gamma_n$, and measure $\mu(ds) = \varphi_n(s) \, ds$ in the Stieltjes representation~\eqref{eq:pick}). \emph{Step 7.} The second term in the right-hand side of~\eqref{eq:bs:proof:9} defines a function which takes values in $[0, 1]$ on $(-\infty, -n)$, and which is equal to zero on $[-n, 0)$. We claim that the first sum defines a function which is stepwise increasing on $(-\infty, -n)$ and stepwise decreasing on $(-n, 0)$. Since the sum of a stepwise monotone function and a function taking values in $[0, 1]$ is monotone-after-rounding, our claim implies that $\varphi_n$ is increasing-after-rounding on $(-\infty, -n)$ and stepwise decreasing on $(-n, 0)$. We continue to write $\alpha_m = \alpha_{n, m}$ and $\beta_m = \beta_{n, m}$. Clearly, the function $\tilde{\varphi}_n$ given by the first term in the right-hand side of~\eqref{eq:bs:proof:9}, that is, \formula{ \tilde{\varphi}_n & = \sum_{m = 0}^{n - 1} \ind_{[-\alpha_m, -\beta_m)} , } only takes integer values. All upward jumps of $\tilde{\varphi}_n$ are located at points $-\alpha_m$, where $m = 0, 1, 2, \ldots, n - 1$, and all downward jumps of $\tilde{\varphi}_n$ are located at points $-\beta_m$, where again $m = 0, 1, 2, \ldots, n - 1$. Since $-\alpha_m \le -\beta_m$ and $-n \le -\beta_m$ for every $m = 0, 1, 2, \ldots, n - 1$, the function $\tilde{\varphi}_n$ is stepwise increasing on $(-\infty, -n)$. Thus, in order to prove our claim, we only need to show that $\tilde{\varphi}_n$ is stepwise decreasing on $(-n, 0)$, that is, that every upward jump at $-\alpha_m$ in $(-n, 0)$ is cancelled by some downward jump. Suppose that $j = \alpha_m < n$. Clearly, $j \ge 1$, and $P_n(j) = \prod_{i = 0}^{n - 1} (\alpha_i - j) = 0$. It follows that $H_n$ does not have a pole at $-j$, and hence $H_n$ is decreasing on $(-j - 1, -j + 1)$. If $H_n(-j) < 0$, then $H_n(s) < 0$ for all $s \in (-j, -j + 1)$, and so \formula{ \beta_{j - 1} & = j . } Thus, the downward jump of $\tilde{\varphi}_n$ at $-\beta_{j - 1} = -j$ cancels the upward jump of $\tilde{\varphi}_n$ at $-\alpha_m = -j$, as desired. Similarly, if $H_n(-j) \ge 0$, then $H_n(s) \ge 0$ for all $s \in (-j - 1, -j)$, and therefore \formula{ \beta_j & = j . } Hence, in this case the downward jump of $\tilde{\varphi}_n$ at $-\beta_j = -j$ cancels the upward jump of $\tilde{\varphi}_n$ at $-\alpha_m$. Our claim follows. \emph{Step 8.} Let us denote by $\log G_n$ the exponent in the right-hand side of~\eqref{eq:bs:proof:8}, so that $\log G_n$ is the continuous complex logarithm of the function $G_n$, which is real-valued on $(0, \infty)$. By the result of step~6, $\log G_n$ is a Pick function on $(0, \infty)$, and hence it is increasing and concave on $(0, \infty)$ (this follows immediately from~\eqref{eq:bs:proof:8}).
We fix $x \in (0, 1)$, and we define \formula{ z & = \frac{x}{1 - x} \, , \\ w & = \frac{\frac{x}{2}}{1 - \frac{x}{2}} \, , \\ y_n & = \frac{n x}{1 - x} = n z , \\ j_n & = \biggl\lfloor \frac{n x}{1 - x} \biggr\rfloor = \lfloor y_n \rfloor = \lfloor n z \rfloor , \\ i_n & = \biggl\lfloor \frac{\frac{n x}{2}}{1 - \frac{x}{2}} \biggr\rfloor = \lfloor n w \rfloor . } Observe that $i_n + 1 < y_n < j_n + 1$ for $n$ large enough. By monotonicity, \formula{ \log G_n(y_n) & \le \log G_n(j_n + 1) , } while by concavity, \formula{ \log G_n(y_n) & \ge \frac{y_n - (i_n + 1)}{(j_n + 1) - (i_n + 1)} \, \log G_n(j_n + 1) + \frac{(j_n + 1) - y_n}{(j_n + 1) - (i_n + 1)} \, \log G_n(i_n + 1) . } By~\eqref{eq:bs:proof:3}, we have \formula{ \lim_{n \to \infty} G_n(j_n + 1) & = F(x) , \\ \lim_{n \to \infty} G_n(i_n + 1) & = F(\tfrac{x}{2}) . } It follows that \formula{ \limsup_{n \to \infty} \log G_n(y_n) & \le \lim_{n \to \infty} \log G_n(j_n + 1) = \log F(x) . } Furthermore, since $\frac{y_n}{n} = z$, $\frac{j_n + 1}{n} \to z$ and $\frac{i_n + 1}{n} \to w$ as $n \to \infty$, we also have \formula{ \liminf_{n \to \infty} \log G_n(y_n) & \ge \lim_{n \to \infty} \biggl(\frac{y_n - (i_n + 1)}{(j_n + 1) - (i_n + 1)} \, \log G_n(j_n + 1) \\ & \hspace*{10em} + \frac{(j_n + 1) - y_n}{(j_n + 1) - (i_n + 1)} \, \log G_n(i_n + 1) \biggr) \\ & = \frac{z - w}{z - w} \, \log F(x) + \frac{z - z}{z - w} \, \log F(\tfrac{x}{2}) \\ & = \log F(x) . } We have thus proved that for every $x \in (0, 1)$, \formula{ \lim_{n \to \infty} G_n\biggl(\frac{n x}{1 - x}\biggr) & = F(x) . } \emph{Step 9.} We return to the original variable $x$: we define $F_n$ by the formula \formula{ F_n(x) & = G_n(\tfrac{n x}{1 - x}) . } Here $x \in \mathds{C} \setminus ((-\infty, 0] \cup [1, \infty))$, so that $y = \frac{n x}{1 - x} \in \mathds{C} \setminus (-\infty, 0]$. By the result of the previous step, the functions $F_n$ converge pointwise to $F$ on $(0, 1)$. We claim that $F_n$ is the exponential of a Pick function on $(0, 1)$, and that the Stieltjes representation of this Pick function is equivalent to \formula[eq:bs:proof:10]{ F_n(x) & = \exp \biggl( b_n + \biggl( \int_{-\infty}^0 + \int_1^\infty \biggr) \biggl( \frac{1}{s - x} - \frac{s}{1 + s^2} \biggr) \eta_n(s) ds \biggr) } for some $b_n \in \mathds{R}$, where $\eta_n(s) = \varphi_n(\tfrac{n s}{1 - s})$. One way to show this is to observe that both $x \mapsto \frac{n x}{1 - x}$ and the inverse function $y \mapsto \frac{y}{n + y}$ are Pick functions which preserve the real axis, and so composition with the former function is an isomorphism between Pick functions on $(0, \infty)$ and Pick functions on $(0, 1)$. However, we choose a more direct approach. By~\eqref{eq:bs:proof:8}, for $x \in \mathds{C} \setminus ((-\infty, 0] \cup [1, \infty))$ we have \formula{ F_n(x) & = \exp \biggl( \gamma_n + \int_{-\infty}^0 \biggl( \frac{1 - x}{s (1 - x) - n x} - \frac{s}{1 + s^2} \biggr) \varphi_n(s) ds \biggr) . } All we need to do is to substitute $s = \frac{n r}{1 - r}$ in the right-hand side, and rearrange the integrand. Let us denote by $\log F_n$ the exponent in the right-hand side.
Thus, we have \formula{ \log F_n(x) & = \gamma_n + \biggl(\int_{-\infty}^0 + \int_1^\infty \biggr) \biggl( \frac{(1 - x) (1 - r)}{n r (1 - x) - n x (1 - r)} - \frac{n r (1 - r)}{(1 - r)^2 + n^2 r^2} \biggr) \frac{n \eta_n(r)}{(1 - r)^2} \, dr \\ & = \gamma_n + \biggl(\int_{-\infty}^0 + \int_1^\infty \biggr) \biggl( \frac{(1 - x) (1 - r)}{n (r - x)} - \frac{n r (1 - r)}{(1 - r)^2 + n^2 r^2} \biggr) \frac{n \eta_n(r)}{(1 - r)^2} \, dr \\ & = \gamma_n + \biggl(\int_{-\infty}^0 + \int_1^\infty \biggr) \biggl( \frac{1 - x}{(1 - r) (r - x)} - \frac{n^2 r}{(1 - r) ((1 - r)^2 + n^2 r^2)} \biggr) \eta_n(r) dr . } Observe that \formula{ \frac{1 - x}{(1 - r) (r - x)} & = \frac{1}{r - x} + \frac{1}{1 - r} \, , } and hence, by a straightforward calculation, \formula{ & \frac{1 - x}{(1 - r) (r - x)} - \frac{n^2 r}{(1 - r)^3 + n^2 r^2 (1 - r)} \\ & \qquad = \frac{1}{r - x} + \frac{1 - r - n^2 r}{(1 - r)^2 + n^2 r^2} \\ & \qquad = \biggl( \frac{1}{r - x} - \frac{r}{1 + r^2} \biggr) + \frac{r}{1 + r^2} + \frac{1 - r - n^2 r}{(1 - r)^2 + n^2 r^2} \, . } Therefore, \formula{ \log F_n(x) & = c_n + \biggl(\int_{-\infty}^0 + \int_1^\infty \biggr) \biggl( \frac{1}{r - x} - \frac{\sign r}{1 + |r|} \biggr) \eta_n(r) dr } for an appropriate $c_n \in \mathds{R}$ (the terms left out, integrated against $\eta_n(r) dr$, contribute a finite constant), as claimed. Recall that $\varphi_n$ is zero on $(0, \infty)$, $\varphi_n$ is increasing-after-rounding on $(-\infty, -n)$ and stepwise decreasing on $(-n, 0)$. Since $\eta_n(s) = \varphi_n(\frac{n s}{1 - s})$, we find that $\eta_n$ is zero on $(0, 1)$, $\eta_n$ is increasing-after-rounding on $(1, \infty)$, and $\eta_n$ is stepwise decreasing on $(-\infty, 0)$. \emph{Step 10.} We are now in a position to complete the proof. We already know that the exponent in the right-hand side of~\eqref{eq:bs:proof:10}, which we denote by $\log F_n$, is a Pick function on $(0, 1)$, and that the functions $\log F_n$ converge pointwise on $(0, 1)$ to $\log F$. Thus, $\log F$ is a Pick function on $(0, 1)$ (or, strictly speaking, $\log F$ extends to a holomorphic function on $\mathds{C} \setminus ((-\infty, 0] \cup [1, \infty))$, which is a Pick function on $(0, 1)$). Additionally, the parameters $c$ and $\tilde{\mu}$ in the modified Stieltjes representation~\eqref{eq:pick:var} of Pick functions $\log F_n$ converge to the corresponding parameters of $\log F$. It follows that \formula{ \log F(x) & = c + \int_{\mathds{R} \cup \{\infty\}} \frac{1 + s x}{s - x} \tilde{\mu}(ds) } for $x \in \mathds{C} \setminus ((-\infty, 0] \cup [1, \infty))$, where $c \in \mathds{R}$ is the limit of $c_n$, and the measure $\tilde{\mu}(ds)$ on $\mathds{R} \cup \{\infty\}$ is the weak limit of measures $(1 + s^2)^{-1} \eta_n(s) ds$ as $n \to \infty$. We recall that for $s = \infty$, we understand that $\frac{1 + s x}{s - x} = x$. We transform the above expression into the usual Stieltjes representation~\eqref{eq:pick}: if $\mu(ds) = (1 + s^2) \tilde{\mu}(ds)$ on $\mathds{R}$ and $b = \tilde{\mu}(\{\infty\})$, then \formula{ \log F(x) & = b x + c + \int_{\mathds{R}} \biggl( \frac{1}{s - x} - \frac{s}{1 + s^2} \biggr) \mu(ds) } for $x \in \mathds{C} \setminus ((-\infty, 0] \cup [1, \infty))$. Furthermore, $\mu$ is the vague limit of measures $\eta_n(s) ds$ on $\mathds{R}$. Clearly, $\mu((0, 1)) = 0$. Since functions $\eta_n$ are stepwise decreasing on $(-\infty, 0)$, by Lemma~\ref{lem:vague}\ref{lem:vague:a}, $\mu$ has a density function $\eta$ on $(-\infty, 0)$, and $\eta$ is stepwise decreasing on $(-\infty, 0)$.
Similarly, $\eta_n$ are increasing-after-rounding on $(1, \infty)$, and so Lemma~\ref{lem:vague}\ref{lem:vague:b} implies that $\mu$ has a density function $\eta$ on $(1, \infty)$, and $\eta$ is increasing-after-rounding on $(1, \infty)$. Finally, we have $\mu(\{0\}) = \mu(\{1\}) = 0$: perhaps the easiest way to see this is to note that $\eta_n$, and therefore also $\eta$, are in fact stepwise decreasing on $(-\infty, 1)$ and increasing-after-rounding on $(0, \infty)$, and so in particular $\mu$ is absolutely continuous on $(-\infty, 1) \cup (0, \infty) = \mathds{R}$. This completes the proof of~\eqref{eq:repr} (except that the role of $\varphi$ is played by the function $\eta$). The first integrability condition, integrability of $\eta(s) / s^2$ near $-\infty$ and $\infty$, is a consequence of the fact that $\log F$ is a Pick function. For a summable bell-shaped sequence $a(k)$, the other integrability condition is the integrability of $\eta(s) / (s - 1)$ in a right neighbourhood of $1$. This property follows from the monotone convergence theorem by the argument already discussed at the end of Section~\ref{sec:cm:pick}. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:repr} in the general case] In the proof for summable bell-shaped sequences, summability was only used in the very first step: for a general bell-shaped sequence $a(k)$ the generating function $F(x)$ may fail to be integrable on $(0, 1)$ (namely, when $a(k) / k$ is not summable), and so its moment sequence $A(k)$ is not even well-defined. For this reason, we modify the definition of $A(k)$ to \formula{ A(k) & = \int_0^1 (x^k - 1) F(x) dx . } The integral in the right-hand side converges, because we have $|(x^k - 1) F(x)| \le k (1 - x) F(x)$, and since $a(k)$ is bounded, the function $(1 - x) F(x)$ is bounded on $(0, 1)$. Furthermore, \formula{ -\Delta A(k) & = -\int_0^1 (x^{k + 1} - x^k) F(x) dx = \int_0^1 x^k (1 - x) F(x) dx } is the moment sequence of the function $(1 - x) F(x)$, which is bounded and continuous on $(0, 1)$. Thus, we may apply the discrete Post's inversion formula (Theorem~\ref{thm:post}) to find that \formula[eq:bs:proof:1:alt]{ (1 - x) F(x) & = \lim_{m \to \infty} (m + i_m + 1) \binom{m + i_m}{m} (-1)^{m + 1} \Delta^{m + 1} A(i_m) } whenever $x \in (0, 1)$ and $i_m / m$ converges to $\frac{x}{1 - x}$. But this is in fact equivalent to~\eqref{eq:bs:proof:1} if we substitute $i_m = j_{m + 1}$. Indeed: if $j_n / n$ converges to $\frac{x}{1 - x}$, then $i_m / m$ converges to $\frac{x}{1 - x}$, too, and so~\eqref{eq:bs:proof:1:alt} reads \formula{ (1 - x) F(x) & = \lim_{m \to \infty} (m + j_{m + 1} + 1) \binom{m + j_{m + 1}}{m} (-1)^{m + 1} \Delta^{m + 1} A(j_{m + 1}) \\ & = \lim_{n \to \infty} (n + j_n) \binom{n + j_n - 1}{n - 1} (-1)^n \Delta^n A(j_n) \\ & = \lim_{n \to \infty} \frac{n}{n + j_n + 1} \times \lim_{n \to \infty} (n + j_n + 1) \binom{n + j_n}{n} (-1)^n \Delta^n A(j_n) \\ & = (1 - x) \lim_{n \to \infty} (n + j_n + 1) \binom{n + j_n}{n} (-1)^n \Delta^n A(j_n) , } as claimed. The remaining part of the proof is exactly the same as in the summable case, with one exception: the final remark about integrability of $\eta(s) / (s - 1)$ in a right neighbourhood of $1$ needs to be replaced by a similar comment about nonintegrability of $(1 - \eta(s)) / (s - 1)$. \end{proof} \section{Exponential representations} \label{sec:exp} The implication~\ref{thm:main:a}$\implies$\ref{thm:main:c} in Theorem~\ref{thm:main} was shown above as Theorem~\ref{thm:repr}. Below we complete the proof of Theorem~\ref{thm:main}.
For convenience, the remaining two implications are also phrased as separate results. First, we combine implication~\ref{thm:main:c}$\implies$\ref{thm:main:b} with the final assertion of Theorem~\ref{thm:main}. \begin{theorem}\label{thm:fact} Suppose that $F$ is a function on $(0, 1)$ given by the formula \formula{ F(x) & = \exp \biggl( b x + c + \int_{-\infty}^\infty \biggl( \frac{1}{s - x} - \frac{s}{1 + s^2} \biggr) \varphi(s) ds \biggr) , } where $b \in [0, \infty)$, $c \in \mathds{R}$ and $\varphi$ is a nonnegative Borel function on $\mathds{R}$ such that \begin{itemize} \item $\varphi$ is stepwise decreasing on $(-\infty, 0)$; \item $\varphi$ is equal to zero on $(0, 1)$; \item $\varphi$ is increasing-after-rounding on $(1, \infty)$; \item $\varphi(s) / s^2$ is integrable near $-\infty$ and near $\infty$; \item $(1 - \varphi(s)) / (s - 1)$ is nonintegrable in a right neighbourhood of $1$. \end{itemize} Then $F$ is the generating function of a sequence, which is the convolution of a summable Pólya frequency sequence and a completely monotone sequence which converges to zero. \end{theorem} \begin{proof} As discussed in Section~\ref{sec:step:rect}, every function which is increasing-after-rounding is the sum of a stepwise increasing function and a function with values in $[0, 1]$. Thus, we can write $\varphi = \varphi_1 + \varphi_2$, where: \begin{itemize} \item $\varphi_1$ only takes nonnegative integer values, it is stepwise decreasing on $(-\infty, 0)$, zero on $(0, 1)$, and stepwise increasing on $(1, \infty)$; \item $\varphi_2$ is zero on $(-\infty, 1)$ and it takes values in $[0, 1]$ on $(1, \infty)$. \end{itemize} Furthermore, by the integrability conditions imposed on $\varphi$, we may assume that: \begin{itemize} \item the function $\varphi_1(s) / s^2$ is integrable near $-\infty$ and near $\infty$, and $\varphi_1 = 0$ in a right neighbourhood of $1$; \item $(1 - \varphi_2(s)) / (s - 1)$ is nonintegrable in a right neighbourhood of $1$. \end{itemize} By the arguments described in Section~\ref{sec:pf:pick}, there exists a summable Pólya frequency sequence $b(k)$ with generating function \formula{ G(x) = \sum_{k = 0}^\infty b(k) x^k & = \exp \biggl( b x + c + \int_{-\infty}^\infty \biggl( \frac{1}{s - x} - \frac{s}{1 + s^2} \biggr) \varphi_1(s) ds \biggr) ; } see~\eqref{eq:pf:exp}. Similarly, by the results of Section~\ref{sec:cm:pick}, there is a completely monotone sequence $c(k)$ which converges to zero, with generating function \formula{ H(x) = \sum_{k = 0}^\infty c(k) x^k & = \exp \biggl( \int_{-\infty}^\infty \biggl( \frac{1}{s - x} - \frac{s}{1 + s^2} \biggr) \varphi_2(s) ds \biggr) ; } see~\eqref{eq:cm:exp}. It follows that $F(x) = G(x) H(x)$, and consequently $F$ is the generating function of the convolution of the Pólya frequency sequence $b(k)$ and the completely monotone sequence $c(k)$. \end{proof} It remains to show the implication~\ref{thm:main:b}$\implies$\ref{thm:main:a} of Theorem~\ref{thm:main}. \begin{theorem}\label{thm:var:dim} Suppose that $a(k)$ is the convolution of a summable Pólya frequency sequence and a completely monotone sequence which converges to zero. Then $a(k)$ is a bell-shaped sequence. \end{theorem} \begin{proof} Suppose that $a(k)$ is the convolution of a summable Pólya frequency sequence $b(k)$ and a completely monotone sequence $c(k)$ which converges to zero. By Fatou's lemma, $a(k)$ converges to zero as $k \to \infty$.
In order to prove that $a(k)$ is bell-shaped, we only need to show that for $n = 0, 1, 2, \ldots$\,, the doubly infinite sequence $\Delta^n a(k)$ changes sign at most $n$ times. We first observe that $c(k)$ is bell-shaped: the sequence $\Delta^n c(k)$ has constant sign for $k \ge 0$, and it is zero for $k < -n$, and thus it changes sign at most $n$ times. By the variation diminishing property of summable Pólya frequency sequences (see Section~\ref{sec:pf}), for every $n = 0, 1, 2, \ldots$\,, the convolution of $\Delta^n c(k)$ and $b(k)$ has at most $n$ sign changes. It remains to observe that this convolution is precisely $\Delta^n a(k)$, and so $a(k)$ is indeed bell-shaped. \end{proof} \section{Whale-shaped sequences} \label{sec:whale} In Section~1.4 of~\cite{ks}, the authors introduce the notion of a \emph{whale-shaped function}, which is an intermediate concept between complete monotonicity and bell-shape. A smooth positive-valued function $f$ on $(0, \infty)$ is said to be whale-shaped of order $d \in \{0, 1, 2, \ldots, \infty\}$ if $f$ converges to $0$ at infinity, $f^{(n)}$ changes sign $\min\{n, d\}$ times on $(0, \infty)$ for $n = 0, 1, 2, \ldots$\,, and additionally $\lim_{x \to 0^+} f^{(n)}(x) = 0$ for $n = 0, 1, 2, \ldots, d - 1$. Note that whale-shaped functions of order $0$ are precisely completely monotone functions on $(0, \infty)$, while whale-shaped functions of infinite order are precisely strictly positive bell-shaped functions on $(0, \infty)$. We define the corresponding notion of \emph{whale-shaped sequences} in the following way. A (one-sided) strictly positive sequence $a(k)$ is whale-shaped of order $d \in \{0, 1, 2, \ldots, \infty\}$ if $a(k)$ converges to zero, and the sequence $\Delta^n a(k)$, restricted to $k \ge -d$, changes sign $\min\{n, d\}$ times for $n = 0, 1, 2, \ldots$\, Here, as usual, we extend the definition of $a(k)$ to all integers $k$ by setting $a(k) = 0$ for $k < 0$. Note that for a finite order $d$, one can phrase the main part of the above definition in the following equivalent way: after padding the sequence $a(k)$ with $d$ zeroes on the left, the $n$th iterated difference of this one-sided sequence changes sign $\min\{n, d\}$ times for $n = 0, 1, 2, \ldots$\, In this reformulation the sign-change condition is a direct analogue of the corresponding condition for whale-shaped functions, while padding with $d$ zeroes on the left is the discrete counterpart of the boundary condition $\lim_{x \to 0^+} f^{(n)}(x) = 0$ for $n = 0, 1, \ldots, d - 1$. Note that whale-shaped sequences of order $0$ are precisely completely monotone sequences, while whale-shaped sequences of infinite order coincide with strictly positive bell-shaped sequences. In contrast to the case of whale-shaped functions, it is easy to see that the class of whale-shaped sequences of order $d$ increases with $d$. Below we prove the following result, which is an analogue of the corresponding characterisation of whale-shaped functions given in Theorem~1.13 in~\cite{ks}; before stating it, we illustrate the definition with a simple example.
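To illustrate the definition, consider the following simple example (which is ours and is not needed in the sequel). Let $a(k)$ be the convolution of the geometric sequences $2^{-k}$ and $4^{-k}$: \formula{ a(k) & = \sum_{j = 0}^k 2^{-(k - j)} \, 4^{-j} = 2^{1 - k} - 4^{-k} } for $k \ge 0$, and $a(k) = 0$ for $k < 0$. A direct computation gives $\Delta a(k) = -2^{-k} + \tfrac{3}{4} \cdot 4^{-k}$ and $\Delta^2 a(k) = 2^{-k - 1} - \tfrac{9}{16} \cdot 4^{-k}$ for $k \ge 0$. Restricted to $k \ge -1$, the sequence $a(k)$ is nonnegative, $\Delta a(k)$ changes sign exactly once (it is equal to $1$ at $k = -1$ and negative for $k \ge 0$), and $\Delta^2 a(k)$ changes sign exactly once (it is negative for $k \in \{-1, 0\}$ and positive for $k \ge 1$). By the theorem below, $\Delta^n a(k)$ restricted to $k \ge -1$ changes sign exactly once for every $n \ge 1$, and so $a(k)$ is whale-shaped of order $1$; it is not completely monotone, since $\Delta^2 a(0) = -\tfrac{1}{16} < 0$.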
\begin{theorem} \label{thm:whale} For a nonnegative sequence $a(k)$ and $d \in \{0, 1, 2, \ldots\}$, the following are equivalent: \begin{enumerate}[label={\rm(\alph*)}] \item\label{thm:whale:a} the sequence $a(k)$ is whale-shaped of order $d$; \item\label{thm:whale:b} $a(k)$ is the convolution of a completely monotone sequence which converges to zero and no more than $d$ geometric sequences with quotient in $(0, 1)$; \item\label{thm:whale:c} the generating function of $a(k)$ is given by the formula \formula[eq:whale]{ \sum_{k = 0}^\infty a(k) x^k & = \exp \biggl( c + \int_1^\infty \biggl( \frac{1}{s - x} - \frac{s}{1 + s^2} \biggr) \varphi(s) ds \biggr) } for $x \in (0, 1)$, where $c \in \mathds{R}$ and $\varphi$ is a nonnegative Borel function on $(1, \infty)$ such that: \begin{itemize} \item $\varphi$ is increasing-after-rounding and bounded from above by $d + 1$; \item $(1 - \varphi(s)) / (1 - s)$ is nonintegrable in a right neighbourhood of $1$. \end{itemize} \end{enumerate} Furthermore, the right-hand side of~\eqref{eq:whale} is the generating function of a bell-shaped sequence whenever the conditions on $c, \varphi$ listed in item~\ref{thm:whale:c} are satisfied. \end{theorem} The argument is a variant of the proof of Theorem~\ref{thm:main}, and we only describe the necessary modifications. As was the case with Theorem~\ref{thm:main}, we divide the proof into three parts. The proof of Theorem~\ref{thm:fact} carries over with only minor changes: we additionally know that the function $\varphi_1$ is zero on $(-\infty, 0)$ and it is bounded from above by $d$ on $(1, \infty)$. Thus, condition~\ref{thm:whale:c} implies condition~\ref{thm:whale:b}, and additionally the final claim of Theorem~\ref{thm:fact} holds true. In order to see that condition~\ref{thm:whale:b} implies condition~\ref{thm:whale:a}, we essentially follow the proof of Theorem~\ref{thm:var:dim}, but we need the following well-known lemma. \begin{lemma} \label{lem:geometric} Suppose that $b(k)$ is a geometric sequence with nonnegative quotient, and that $c(k)$ is a sequence such that both $c(k)$ and $b * c(k)$ change sign $n$ times, at positions $\alpha_1, \alpha_2, \ldots, \alpha_n$ and $\beta_1, \beta_2, \ldots, \beta_n$ (arranged in increasing order), respectively. Then $\alpha_1 \le \beta_1 < \alpha_2 \le \beta_2 < \ldots < \alpha_n \le \beta_n$. \end{lemma} \begin{proof} Suppose that $b(k) = q^k$ with $q \ge 0$. If $q = 0$, we have $\beta_j = \alpha_j$ and there is nothing to prove. If $q > 0$, then \formula{ q^{-k} (b * c)(k) & = q^{-k} \sum_{j = 0}^k q^{k - j} c(j) = \sum_{j = 0}^k q^{-j} c(j) . } It remains to observe that the locations of sign changes of the sequence $q^{-j} c(j)$ and the sequence of its cumulative sums alternate exactly as in the statement of the lemma. \end{proof} \begin{proof}[Proof of the implication \ref{thm:whale:b}$\implies$\ref{thm:whale:a} in Theorem~\ref{thm:whale}] Suppose that $c(k)$ is a completely monotone sequence, and $b_1(k), b_2(k), \ldots, b_d(k)$ are geometric sequences with quotient in $[0, 1)$; the quotient equal to $0$ corresponds to the sequence $(1, 0, 0, \ldots)$, the neutral element with respect to convolution. We need to prove that the sequence $a(k) = b_1 * b_2 * \ldots * b_d * c(k)$ is whale-shaped of order $d$. Observe that \formula[eq:whale:proof:1]{ \Delta^n a(k - n) & = b_1 * b_2 * \ldots * b_d * \Delta^n c(k - n) } and that the one-sided sequence $\Delta^n c(k - n)$ changes sign $n$ times, at $k = 1, 2, \ldots, n$.
By Lemma~\ref{lem:geometric} and induction with respect to $d$, we see that the sequence~\eqref{eq:whale:proof:1} (which necessarily changes sign $n$ times, as it is the $n$th iterated difference of a sequence which converges to zero) changes sign at $k = 1, 2, \ldots, n - d$, and additionally $d$ times for $k > n - d$. This, in turn, implies that the sequence $a(k)$ is whale-shaped of order~$d$. \end{proof} Finally, we prove that condition~\ref{thm:whale:a} implies condition~\ref{thm:whale:c}, by following closely the proof of Theorem~\ref{thm:repr}. \begin{proof}[Proof of the implication \ref{thm:whale:a}$\implies$\ref{thm:whale:c} in Theorem~\ref{thm:whale}] We consider a whale-shaped sequence $a(k)$ of order $d$ and $n > d$, and we use the notation introduced in the proof of Theorem~\ref{thm:repr}. By definition, the sequence $\Delta^n a(k - n)$ changes sign $d$ times for indices $k > n - d$, so that the remaining $n - d$ sign changes are necessarily located at $k = 1, 2, \ldots, n - d$. Thus, the locations $\alpha_{n, m}$ of sign changes of that sequence, defined in~\eqref{eq:bs:proof:alpha}, satisfy \formula{ \alpha_{n, m} & = m + 1 && \text{for $m = 0, 1, 2, \ldots, n - d - 1$.} } It follows that the definition~\eqref{eq:bs:proof:pn} of the polynomial $P_n$ reads \formula{ P_n(j) & = \prod_{m = 0}^{n - 1} (\alpha_{n, m} - j) \\ & = (1 - j) (2 - j) \ldots (n - d - j) \prod_{m = n - d}^{n - 1} (\alpha_{n, m} - j) . } In the remaining part of the argument, however, we replace $\alpha_{n, m}$ and $P_n$ with \formula{ \tilde{\alpha}_{n, m} & = \begin{cases} m & \text{for $m = 0, 1, 2, \ldots, n - d - 1$,} \\ \alpha_{n, m} & \text{for $m = n - d, n - d + 1, \ldots, n - 1$,} \end{cases} } and \formula{ \tilde{P}_n(j) & = \prod_{m = 0}^{n - 1} (\tilde{\alpha}_{n, m} - j) \\ & = (-j) (1 - j) (2 - j) \ldots (n - d - 1 - j) \prod_{m = n - d}^{n - 1} (\alpha_{n, m} - j) . } Note that the expression under the limit in~\eqref{eq:bs:proof:lim} does not depend on the choice of the polynomial $P_n$, and hence the above modification affects neither the function $G_n$ defined in~\eqref{eq:bs:proof:4} nor the corresponding function $\varphi_n$ determined by~\eqref{eq:bs:proof:8}. After these modifications, however, the auxiliary function $H_n$ defined in~\eqref{eq:bs:proof:5} takes the form \formula{ \tilde{H}_n(y) & = \sum_{j = 0}^\infty \frac{\tilde{P}_n(j) \Delta^n a(j - n)}{j + y} \\ & = \sum_{j = n - d}^\infty \frac{\tilde{P}_n(j) \Delta^n a(j - n)}{j + y} \, . } We define the numbers $\tilde{\beta}_{n, m}$ as in~\eqref{eq:bs:proof:beta}: \formula{ \{ s \in (-m - 1, -m) : \tilde{H}_n(s) < 0 \} & = (-\tilde{\beta}_{n, m}, -m) . } Observe that $\tilde{H}_n$ is positive on $(-n + d, \infty)$. It follows that $\tilde{\beta}_{n, m} = m$ for $m = 0, 1, \ldots, n - d - 1$. Thus, the analogue of~\eqref{eq:bs:proof:9} reads \formula{ \varphi_n & = \sum_{m = 0}^{n - 1} \ind_{[-\tilde{\alpha}_{n, m}, -\tilde{\beta}_{n, m})} + \sum_{m = n}^\infty \ind_{[-m - 1, -\tilde{\beta}_{n, m})} \\ & = \sum_{m = n - d}^{n - 1} \ind_{[-\tilde{\alpha}_{n, m}, -\tilde{\beta}_{n, m})} + \sum_{m = n}^\infty \ind_{[-m - 1, -\tilde{\beta}_{n, m})} . } It follows that $\varphi_n = 0$ on $(-n + d, 0)$ and $\varphi_n \le d + 1$ everywhere. Consequently, the function $\eta_n$ introduced in step~9 of the proof of Theorem~\ref{thm:repr} is equal to zero on $(1 - \tfrac{n}{d}, 0)$ and it is bounded from above by $d + 1$ everywhere.
After passing to the limit as $n \to \infty$, we find that the parameter $b$ is necessarily equal to zero, the function $\eta$ is equal to zero on $(-\infty, 0)$, and it is bounded from above by $d + 1$ on $(1, \infty)$. This completes the proof of~\eqref{eq:whale} (again with the role of $\varphi$ played by the function $\eta$). \end{proof} \section{Examples} \label{sec:examples} Of course, all Pólya frequency sequences are bell-shaped. Thus, in particular, the probability mass functions of geometric distributions, Poisson distributions, binomial distributions, as well as their convolutions, are all bell-shaped. Similarly, all completely monotone sequences are bell-shaped. It is straightforward to check that a uniform distribution over $\{0, 1, 2, \ldots, n - 1\}$ is bell-shaped if and only if $n = 1$ or $n = 2$. A finitely supported distribution is bell-shaped if and only if it is the convolution of Bernoulli distributions. By a rather straightforward calculation, the probability mass functions $a(k)$ of negative binomial distributions are bell-shaped. Indeed: the generating function of such a sequence $a(k)$ is given by \formula{ F(x) = \sum_{k = 0}^\infty a(k) x^k & = \biggl(\frac{1 - p}{1 - p x}\biggr)^\lambda \\ & = \exp \biggl(c + \lambda \int_{1/p}^\infty \biggl(\frac{1}{s - x} - \frac{s}{1 + s^2} \biggr) ds \biggr) } for some $p \in (0, 1)$ and $\lambda \in (0, \infty)$, and an appropriate constant $c$, namely, $c = \lambda \log(1 - p) - \tfrac{1}{2} \lambda \log(1 + p^2)$, as one verifies by evaluating the integral using the antiderivative $\log \frac{s - x}{\sqrt{1 + s^2}}$ of the integrand. Thus, $F(x)$ is given by~\eqref{eq:main} with $b = 0$ and $\varphi(s) = \lambda \ind_{(1/p, \infty)}(s)$, and so $a(k)$ is bell-shaped. Alternatively, one can observe that $a(k)$ is the convolution of $\lfloor \lambda \rfloor$ geometric sequences with parameter $p$, and a completely monotone sequence: the probability mass function of a negative binomial distribution with parameters $p$ and $\lambda - \lfloor \lambda \rfloor$. Slightly less obvious examples of bell-shaped sequences are given by the probability mass functions $a(k)$ of \emph{discrete stable distributions}. Such sequences are characterised by their generating functions \formula{ F(x) = \sum_{k = 0}^\infty a(k) x^k & = \exp(-\lambda (1 - x)^\nu) , } where $\nu \in (0, 1]$ is a parameter which corresponds to the index of stability and $\lambda > 0$ is the shape parameter. When $\nu = 1$, we recover the usual Poisson distribution. For $\nu \in (0, 1)$, we have \formula{ F(x) & = \exp \biggl(c + \frac{\lambda \sin(\nu \pi)}{\pi} \int_1^\infty \biggl(\frac{1}{s - x} - \frac{s}{1 + s^2} \biggr) (s - 1)^\nu ds \biggr) } for an appropriate constant $c$, and hence $F(x)$ is given by~\eqref{eq:main} with $b = 0$ and $\varphi(s) = \frac{\lambda}{\pi} \sin(\nu \pi) (s - 1)^\nu \ind_{(1, \infty)}(s)$. Thus, in either case $a(k)$ is indeed bell-shaped. For a detailed discussion of discrete stable distributions, we refer to the original article~\cite{sv} by Steutel and van Harn, where this class of discrete distributions was introduced.
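These examples can also be explored numerically, at least for small $n$. The short Python sketch below (ours; a finite-truncation sanity check rather than a proof) counts the sign changes of the iterated differences $\Delta^n a(k - n)$ for the probability mass function of a negative binomial distribution; for a bell-shaped sequence, at most $n$ sign changes should be observed. Note that SciPy's \texttt{nbinom} is parametrised by the success probability, which is $1 - p$ in the notation above.
\begin{verbatim}
import numpy as np
from scipy.stats import nbinom

# Negative binomial pmf a(k), with generating function
# ((1 - p) / (1 - p x))^lam; SciPy's second parameter is 1 - p.
p, lam = 0.4, 2.5
K = 200                     # truncation; a(k) decays like p^k
a = nbinom.pmf(np.arange(K), lam, 1 - p)

def sign_changes(x, tol=1e-12):
    # Count sign changes, ignoring entries that vanish up to
    # floating-point noise (relevant only in the far tail).
    s = np.sign(x[np.abs(x) > tol])
    return int(np.sum(s[1:] != s[:-1]))

for n in range(6):
    # Delta^n a(k - n): pad n zeroes on the left, difference n times.
    d = np.diff(np.concatenate([np.zeros(n), a]), n=n)
    print(n, sign_changes(d))   # at most n changes for a bell shape
\end{verbatim}
An analogous check can be run for discrete stable distributions, with $a(k)$ obtained from the Taylor coefficients of $\exp(-\lambda (1 - x)^\nu)$.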
\chapter{Report of the Topical Group on Dark Energy and Cosmic Acceleration: Complementarity of Probes and New Facilities for Snowmass 2021} \chaptermark{\footnotesize Dark Energy and Cosmic Acceleration: Complementarity of Probes and New Facilities} \authorlist{Conveners: Brenna Flaugher, Vivian Miranda, David J.~Schlegel} {Adam J.~Anderson, Felipe Andrade-Oliveira, Eric J.~Baxter, Amy N.~Bender, Lindsey E.~Bleem, Chihway Chang, Clarence C.~Chang, Thomas Y.~Chen, Kyle S.~Dawson, Seth W.~Digel, Alex Drlica-Wagner, Simone Ferraro, Alyssa Garcia, Katrin Heitmann, Alex G.~Kim, Eric V.~Linder, Sayan Mandal, Rachel Mandelbaum, Phil Marshall, Joel Meyers, Laura Newburgh, Peter E.~Nugent, Antonella Palmese, M.~E.~S.~Pereira, Neelima Sehgal, Martin White, Yuanyuan Zhang } \section{Executive Summary} The mechanism(s) driving the early- and late-time accelerated expansion of the Universe represent one of the most compelling mysteries in fundamental physics today. The path to understanding the causes of early- and late-time acceleration depends on fully leveraging ongoing surveys, developing and demonstrating new technologies, and constructing and operating new instruments. This report presents a multi-faceted vision for the cosmic survey program in the 2030s and beyond that derives from these considerations. Cosmic surveys address a wide range of fundamental physics questions, and are thus a unique and powerful component of the HEP experimental portfolio. Wide-field surveys in the optical/near-infrared have played a critical role in establishing the standard model of cosmology, $\Lambda$CDM. We strongly advocate for continuing this extremely successful program into the coming decade and beyond. Regarding photometric imaging surveys, the HEP community sees three options for Rubin Observatory beyond LSST, each of which would require different investments with costs and benefits needing detailed study. These studies must be undertaken a few years into the LSST survey so that the range of opportunities and trade-offs between them can be informed by the then-current scientific findings and open questions in the field. The next generation of spectroscopic surveys has the opportunity to map a significant fraction of the observable Universe in three dimensions, tracking the expansion of the Universe and providing constraints on dark energy throughout most of cosmic history. The spectroscopic roadmap starts with continued operation of DESI (i.e., DESI-II), followed by a new wide-field spectroscopic facility that leverages and complements LSST imaging~\cite{2022arXiv220903585S}. Observations of the cosmic microwave background (CMB) have provided one of the most powerful probes of the origin, evolution, and contents of our Universe. Continuation of a strong CMB program will transform our understanding of the early Universe through measurements of tensor modes, test the particle content to unprecedented precision, and provide unique insights about gravity, dark energy, and new physics through cross-correlation with the wide-field galaxy surveys advocated in this report. HEP investment in CMB-S4 is critical to enable a diverse fundamental physics program. Following CMB-S4, higher-resolution observations of the CMB will open a new regime of microwave background cosmology. Advancement of emerging techniques for cosmology and the study of dark energy, and complementarity among methods, should also be a priority.
A number of concepts for mapping the Universe using radio or millimeter-wave spectroscopy have promise as unique probes of large-scale structure. Third-generation gravitational wave observatories now being studied have the potential to independently probe the expansion of the Universe and dark energy; this potential should be characterized and optimized. Across surveys and methods, priority should be given to the potential sensitivity gains from joint processing. This report arrives at several recommendations: \noindent {\bf Near-term Facilities} \begin{itemize} \item {Given the pivotal role of CMB experiments in the landscape of particle physics and cosmology, and their phenomenal successes thus far, we advocate for advancing the CMB program through strong support of the near-term construction and operation of CMB-S4, which will cross critical, well-motivated thresholds in the searches for inflationary gravitational waves and new particle species. } \item We advocate for the continued operations of DESI (DESI-II;~\cite{2022arXiv220903585S}) as an important part of the spectroscopic roadmap while a Stage V spectroscopic facility is designed and built. \item We advocate for support of small- and medium-scale projects that enhance the science reach of studies of transients discovered by Rubin LSST and ``standard sirens'' detected by gravitational wave facilities. Data from these projects should be combined with infrastructure that enables cross-experiment coordination and data transfer for time-domain astronomical sources, and with a US-HEP multi-messenger program with dedicated target-of-opportunity allocations on US-HEP and partner facilities. \end{itemize} \noindent {\bf Longer-term Facilities} \begin{itemize} \item Through the Snowmass2021 process, the HEP community has identified the pressing need for next-generation wide-field, massively multiplexed spectroscopic capabilities to complement LSST imaging. We strongly advocate for the establishment, support, and start of construction of a Stage V spectroscopic facility in the coming decade. \item Recognizing the wealth of fundamental physics that could be probed if much higher resolution and lower noise could be efficiently achieved over a wide-area CMB survey, we strongly advocate for support of studies of a Stage V CMB facility to bring it to conceptual readiness for the next decade. \item New approaches such as millimeter and 21-cm line-intensity mapping (LIM) hold the promise of exceptional cosmological constraining power. However, the technological readiness of these programs must be further demonstrated before the community is prepared to invest fully in a large-scale project using these technologies. Thus, we recommend a coordinated R\&D program to advance the technical readiness of these projects. \item We advocate for the continued operation of the Rubin Observatory after LSST. The Rubin Observatory will continue to be a groundbreaking facility in 2034 that can advance the state of the art by targeting the sky with new, innovative observation strategies and/or instruments. \end{itemize} \noindent {\bf Complementarity} \begin{itemize}[nosep] \item No single experiment can reveal the nature of dark energy. Such a breakthrough will require data from a network of experiments, small and large, probing the early- and late-time Universe in complementary ways. At present, cross-survey analyses are challenging to initiate, organize, and fund.
We advocate for the creation of clear pathways to support cross-survey analyses as part of the core mission of the HEP Cosmic Frontier. \item Multi-messenger measurements of gravitational wave events are an emerging complementary technique for probing cosmology through standard sirens. Support for coordination with future large facilities (such as the European Einstein Telescope) will enable maturation of this novel technique for measuring dark energy. \item We advocate for the creation of multi-site data archive centers, where data from cosmological surveys is replicated for robustness and continuous availability. The centers will provide the long-term preservation of datasets and simulations. Such centers should also supply computing resources for in-place analyses, making joint investigations attainable given the huge I/O bottleneck that arises when data must be downloaded from such centers. \item We advocate for a robust program to increase the available supercomputing resources to enable running, postprocessing, and validating a diverse set of numerical gravity-only and hydrodynamical simulations tailored to the specific requirements of different surveys. This program would enable the running and testing of data-driven methods involving, for example, machine learning or Bayesian methods. \end{itemize} \begin{figure*}[t] \includegraphics[width=6.5in]{Cosmic/figures/CF6-Facilities-Timeline-v7.png} \caption{Current and potential future facilities probing cosmic acceleration that are or may be supported by DOE or NSF. Dashed boxes indicate fully-funded facilities. Facilities in red are optical imaging, in orange are optical spectroscopy, in blue are CMB, in green are gravitational waves, and in purple are radio/mm spectroscopy. The fade-in regions indicate commissioning periods, while the boxes indicate full survey observations.} \label{fig:roadmap} \end{figure*} \section{Introduction} Cosmic surveys, including observations of the cosmic microwave background (CMB) and the distribution of stars and galaxies, enable investigations of the fundamental components of the Universe including dark energy, dark matter, inflation, the properties of neutrinos, and signatures of other ``dark sector'' particles. Cosmological and astrophysical measurements provide the only empirical measurements of dark energy and inflation, while measurements of dark matter and neutrinos both motivate and complement other terrestrial HEP experiments. Over the last several decades, cosmic surveys have resulted in the creation of a ``Standard Model'' of cosmology ($\Lambda$CDM), in which the Universe is currently composed of ${\sim} 68\%$ dark energy (assumed to be a cosmological constant, $\Lambda$) and ${\sim}27\%$ non-baryonic, collisionless, cold dark matter (CDM)~\citep[e.g.,][]{SDSS:2005xqv,DES:2018,DES:2019,eBOSS:2020yzd,DES:2022,SPT-3G:2021eoc}. The fact that cosmic surveys can address a wide range of fundamental physics questions makes them a unique and powerful component of the HEP experimental portfolio. \begin{figure*}[ht] \includegraphics[width=6.5in]{Cosmic/figures/SurveyFig_NoExp_26Jun2022v4.png} \caption{Key scientific opportunities in HEP targeted by cosmic survey facilities. The colored areas illustrate regions in spatial scale and redshift favored for various scientific targets. Dark energy and modified gravity favor measurements at lower redshift at large-to-moderate spatial scales. Inflationary signals are best explored at the largest spatial scales.
Small-scale, low-redshift surveys explore dark matter in the sub-halo regime, and precision measurements of the matter power spectrum at moderate-to-small scales out to high redshift are sensitive to neutrinos, dark matter, and new relativistic energy in the early Universe. Each independent technique explores physics over a broad range of spatial scales and cosmic history, while the full suite has multiple complementary measurements providing robust results. } \label{fig:spatial_redshift} \end{figure*} Figure \ref{fig:spatial_redshift} shows the breadth of HEP scientific opportunities enabled by cosmic surveys, stretching from the earliest moments of the Universe to the present day. The previous P5 science driver of ``Understanding dark energy and cosmic acceleration'' is still very relevant and will continue to be so for the next decade and beyond. The theorized epoch of inflation is shown at the highest redshifts and optimally probed through tracers at the largest spatial scales on the sky. In contrast, dark energy is shown at the left, as its impact is most significant on the growth of structure in the modern (late-time) universe. Dark energy is optimally probed at large to medium scales. Cosmic signals that probe inflation and dark energy include (but are not limited to) the cosmic microwave background from the very early universe, the gas and galaxies tracing the matter distribution as structure formed and evolved, and optical galaxies and transients in the late universe. Each of these signals has unique strengths that are discussed in further detail below. Additionally, cross-correlating cosmic signals can eliminate systematics and extend the scientific reach further than that of the individual measurements. Finally, it is important to note the wealth of physics beyond dark energy and inflation that these very same cosmic signals can probe. From neutrinos and new relativistic particles, to modified gravity and dark matter, cosmic signals have the ability to answer some of the biggest questions currently facing high-energy physics. \begin{figure*}[t] \begin{center} \includegraphics[width=4.5in]{Cosmic/figures/snowmass_cf6_v13.png} \caption{A high-level summary of the key scientific opportunities. The horizontal extent of each box corresponds to the redshift range of the tracer, while the coloring indicates the experimental technique used to measure the signal. The dashed grey box emphasizes dark energy and inflationary probes. \label{fig:redshift_complement}} \end{center} \end{figure*} Figure \ref{fig:redshift_complement} shows a simplified summary of these same scientific targets as well as the cosmic signals and techniques used to explore them. Four main techniques are shown and discussed further in this report. First, optical and near-infrared surveys combine both imaging and spectroscopic (SPEC) measurements to measure tracers of structure (such as galaxies) in the late-time universe. Section \ref{sec:optical} discusses the highly anticipated scientific impact of the Vera Rubin Observatory (LSST) and the Dark Energy Spectroscopic Instrument (DESI), as well as complementary facilities and a future envisioned Stage V spectroscopic survey. Next, Section \ref{sec:cmb} introduces CMB facilities, including the CMB-S4 experiment that was prioritized in the previous Snowmass and P5 process. Also discussed is one concept for a CMB facility that is a potential successor to CMB-S4.
Section \ref{sec:cross} highlights the power of cross-correlations between optical imaging, spectroscopic, and CMB surveys. This section also introduces transients and gravitational wave observations (GWO) from sources in the local universe as probes of fundamental physics; these probes also rely on complementary survey observations. Smaller projects and technology pathfinders are described in Section \ref{sec:smallproj}. Included is a discussion of line-intensity mapping (LIM) using both the 21-cm line from neutral hydrogen and mm-wavelength tracers, such as the rotational transitions of CO and the [CII] ionized carbon fine structure line. Finally, Section \ref{sec:multimessenger} details current and future gravitational wave observatories that will provide gravitational wave events for multi-messenger probes. Altogether, these observational techniques and cosmic survey facilities provide a unique and powerful means to explore dark energy and inflation in the coming decade, as well as to develop the technology and concepts needed to continue a vibrant and cutting-edge program in the years that follow. \section{Optical/Near-Infrared Surveys and Facilities} \label{sec:optical} \begin{figure*}[t] \centering\includegraphics[width=6in]{Cosmic/figures/optical_surveys.png} \caption{Summary of imaging and spectroscopic surveys and facilities, ongoing and planned, that are supported by DOE/NSF partnerships. The international ground and space-based landscape of optical wide-field surveys, ongoing and planned, is very rich but for clarity is not represented here. SDSS had both imaging and spectroscopic capabilities, the Blanco telescope was used to carry out the DES, and the Mayall is currently used for DESI. In the near future, the Rubin Observatory will begin LSST. A new spectroscopic facility would open up new scientific opportunities.} \label{fig:opt_facilities} \end{figure*} Wide-field surveys at optical and near-infrared wavelengths play a central role in the exploration of the physics of the dark Universe. The Sloan Digital Sky Survey (SDSS), the first major survey jointly supported by the DOE and NSF, delivered unprecedented measurements of the structure of the Universe at late times. SDSS had first light in 1998 and provided both imaging and spectroscopic data. DOE-supported upgrades to the instrumentation in 2007--2009 extended the cosmology reach to earlier cosmic times with the SDSS-III/BOSS and SDSS-IV/eBOSS programs. BOSS and eBOSS were spectroscopic surveys focused on refining measurements of the baryon acoustic oscillation (BAO) signal through extensions of the SDSS program. Building upon the tremendous success of SDSS, new optical surveys have been designed, constructed, and executed through continued partnership between DOE and NSF. The Dark Energy Survey (DES) is an imaging survey that was operated on the 4-m Blanco telescope in 2013--2019 and is currently extracting final cosmology results. DES has delivered exciting results on the fundamental physics of dark energy, modified gravity, and dark matter. The Rubin Observatory is under construction in Chile and will start the Legacy Survey of Space and Time (LSST) in 2024. LSST will survey the southern sky with an unprecedented combination of depth, visit frequency, spectral bands, and areal coverage to provide powerful constraints on dark energy, neutrinos, and dark matter over the course of its 10-year survey.
Recently, the Dark Energy Spectroscopic Instrument (DESI) started its observational campaign on the 4-m Mayall telescope in pursuit of measurements of dark energy, neutrino mass, and dark matter. Wide-field surveys in the optical/near-infrared have played a critical role in establishing the standard model of cosmology, $\Lambda$CDM, and have delivered a broad range of science in addition to dark energy studies. This exceptional success showcases the power of imaging and spectroscopic surveys, and we strongly advocate for continuing this extremely successful program into the coming decade and beyond. In particular, the unparalleled efficiency of DESI for wide-field spectroscopy and the unprecedented imaging survey data to be collected by the Rubin Observatory will open up many exciting directions for advances in cosmology. In the following, we first provide a brief summary of facilities that are currently operating (DESI) or will soon start operations (Rubin Observatory). Then we discuss future opportunities with either existing or new facilities. We emphasize the following priorities for the optical survey program: \begin{itemize} \item Support for extracting science from ongoing and near-future surveys; \item Support for small programs that use existing facilities to maximize the science from flagship facilities; \item Support for the development of new technology to enable future surveys; \item Support for the design and development of a Stage V spectroscopic survey. \end{itemize} \subsection{Rubin Observatory} The Vera C.\ Rubin Observatory is a powerful facility that will further our knowledge of the Universe in many ways by enabling studies of the nature of dark energy and dark matter, a deep census of the solar system, exploration of the transient optical sky, and surveys of the stellar populations of the Milky Way~\cite{2019ApJ...873..111I}. The Legacy Survey of Space and Time (LSST) to be undertaken with the observatory is due to start operations in 2024 and map the Southern sky for 10 years. LSST will deliver exciting science opportunities and we stress that support for LSST science will be crucial for the community. Precursor surveys have shown that data from a new survey always come with unexpected challenges but also opportunities. To address the challenges and to take advantage of new opportunities, sufficient support of the science programs is essential. After LSST is completed, Rubin will still be a state-of-the-art survey facility. The Rubin White Paper~\cite{Blum:2022dxi} describes possibilities for future endeavors for the observatory, and provides the scientific motivations for three post-LSST scenarios. Given that this CF6 report focuses on future facilities, we summarize them here and refer the reader to the White Paper and the CF4 report for the scientific justifications. The post-LSST opportunities for Rubin are in three broad categories, as described in Ref.~\cite{Blum:2022dxi}: \begin{itemize} \item \textbf{Continuing operations}: A strong science case for continued operation of Rubin relates to time-domain studies that would rely on modified observing cadence, exposure time, or filter selections relative to the LSST survey for greatly enhanced efficiency and target-of-opportunity observations of rare phenomena. Other scientific cases for continued operation of the observatory relate to follow-up observations of discoveries with LSST, focusing on studies that would enhance understanding of the fundamental nature of dark matter.
Continuing operations of Rubin, modifying only the observing strategy, could also provide synergistic observations that enable better scientific outcomes from combined analyses with overlapping large-area deep optical surveys in support of cosmology. In particular, the planned 2000 deg$^2$ High Latitude Survey with the Nancy Grace Roman Space Telescope would be an important target. \item \textbf{New filters}: Several scientific opportunities would be enabled by installation of new photometric filters. Examples discussed in Ref.~\cite{Blum:2022dxi} include a filter set complementary to the original six to improve photometric redshift estimates of the catalogued galaxy sample; a set of narrow-band or medium-band filters to enable emission line surveys for particular lines at redshift $z = 0$ or to select samples of galaxies at a set of discrete redshifts; and a set of patterned filters, which would enable multiple bandpasses to be sampled simultaneously across the field. \item \textbf{New instrument}: This would be the most expensive option but could transform the Rubin Observatory by providing truly new capabilities. For example, a wide-field spectrograph would provide the opportunity to follow up the rich LSST imaging dataset and open many new scientific approaches. This option would require a detailed feasibility and design study in the near future. \end{itemize} \subsection{Dark Energy Spectroscopic Instrument} The next decade promises exciting findings and a better understanding of the physics of the dark Universe. DESI, located on the 4-m Mayall Telescope at Kitt Peak, Arizona~\cite{desi:2016,desi2:2016}, is the first Stage IV dark energy experiment to begin science operations. DESI consists of a focal plane with 5,000 fiber positioners, a field-of-view with a diameter of 3.2 deg, and ten 3-channel spectrographs covering the wavelength range 0.36--0.98 $\mu$m. DESI is currently conducting a 5-year survey to measure redshifts of 40 million galaxies, plus a survey of gas in the intergalactic medium, to constrain dark energy and cosmological parameters using the BAO and redshift-space distortion (RSD) techniques. At the end of the survey in 2026, the instrument will still be competitive with all other multi-object spectrographs that will exist at the time. The proposed DESI-II survey would continue operating the instrument (possibly with upgrades), leveraging and complementing the first year or two of imaging data from Rubin LSST. Additional spectroscopic data can enhance Rubin science in several ways (e.g., in photometric redshift training). Additionally, the DESI instrument is being considered as a possible contributor to Snowmass CF4 programs, particularly a large-volume survey to study inflation, neutrinos, and early dark energy in the linear/quasi-linear regime~\citep{Ferraro:2022}, and a large number density survey to study dark matter physics, modified gravity, small-scale features in the primordial power spectrum, and possibly unknown physics~\citep{Dawson:2022}. The continued operation of the DESI instrument (DESI-II) is an important first step in the future spectroscopic roadmap, and it is currently at the early stages of conceptual design~\citep{DESI2:2021}. Several unique science opportunities are possible, either by continued operations of the current instrument, or with modest technological upgrades.
These include: dense surveys of the local volume for precision measurements of dark matter and dark energy, high-resolution studies of the cosmic web, and transient follow-up (for gravitational waves, supernovae, etc.); extension of the Luminous Red Galaxy (LRG) and Emission Line Galaxy (ELG) samples to higher redshift, to enable multi-tracer analyses and take advantage of sample-variance cancellation; and an increase in the observed volume, allowing access to larger scales, which provide the cleanest probes of primordial physics. A high-redshift ($z > 2$) survey of Lyman-alpha emitters (LAEs) and Lyman Break Galaxies (LBGs) would measure a volume comparable to the main DESI samples, but at a different cosmic time. This would allow measurements of the amplitude of fluctuations at high redshift, a particularly compelling measurement in light of the recent tension between the amplitude of structure at late times ($z<1$) and the predictions from the CMB (the so-called ``$S_8$ tension''). Measurements in the intermediate redshift regime ($2 \lesssim z \lesssim 4$) are particularly well-suited for understanding the origin of this tension. Moreover, measurements of expansion over this redshift range, deep into the matter-dominated epoch, will shed light on dynamical dark energy: many models mimic a cosmological constant at late times, but can differ significantly from it during matter domination. In addition to its science reach, such a survey would also serve as a pathfinder for extended wide-field observations of high-redshift galaxies by a future facility, as discussed in the next Section. Possible technology upgrades to the DESI instrument include replacement of detectors with low-read-noise Skipper CCDs, a replacement of the focal plane with a larger number of fiber positioners, and the addition of a 4th spectroscopic channel extending further into the IR to measure [OII]-emitting galaxies at $1.6<z<2.0$, which is not currently possible with the existing 3 channels. Moreover, the potential overlap between DESI and LSST is an impressive 14,000 square degrees of extragalactic sky if both instruments were to observe to their design limits ($-30^{\circ} < {\rm Dec.} < +30^{\circ}$) and represents a great opportunity to complement LSST observations with galaxy spectroscopy. The most ambitious upgrade of DESI would include the replacement of the primary mirror, effectively turning the instrument into MegaMapper, a candidate future Stage V spectroscopic facility described in the next Section. \subsection{Stage V Wide-field Multi-Object Spectroscopy} By 2030, Rubin LSST will have mapped at least $\sim$20,000 deg$^2$ of the sky at unprecedented depth from Cerro Pach\'{o}n in Chile. LSST will measure the expansion history and structure of the Universe through observations of type Ia supernovae, weak lensing, galaxy clustering, strong lensing, and ultra-faint galaxies. However, LSST provides only coarse spectral information, and spectroscopic capabilities are essential to maximize the fundamental physical output from cosmic surveys~\citep{Kavli:2016}. Current wide-field spectroscopic capabilities in the southern hemisphere are insufficient for the task of complementing Rubin LSST. Existing capability is dominated by the Anglo-Australian Observatory's 2dF, with 400 optical fibers covering a $\sim$3 deg$^2$ field-of-view on the 3.9-m AAT in Australia.
The 4MOST instrument~\citep{4MOSTSPIE}, currently under construction and scheduled to begin operations soon, will measure $\sim$2400 spectra simultaneously using the 4-m VISTA telescope at the European Southern Observatory. Larger instruments, such as the 6.5-m Magellan telescopes at Las Campanas Observatory, the 8-m Gemini Telescope at Cerro Pach\'{o}n, and the 8.2-m Very Large Telescope at the European Southern Observatory (all in Chile), have fields-of-view that are too small for wide-field surveys. Other facilities are planned with 8-m to 30-m mirrors, but also have fields-of-view that are insufficient for large-field surveys. The Snowmass2021 Cosmic Frontier is charged with synthesizing community input on future studies of dark energy, dark matter, inflation, neutrinos, and other light relics through observational cosmology within the HEP program. Through the Snowmass2021 process, the HEP community has identified the pressing need for additional wide-field spectroscopic capabilities to complement LSST imaging~\citep{Dawson:2022,Ferraro:2022}. Understanding of the needs has evolved from previous community studies on maximizing science from LSST in 2015--2016~\citep{NAS:2015,Kavli:2016} and from the HEP Cosmic Visions process in 2016--2018~\citep{Dodelson:2016a,Dodelson:2016b,Dawson:2018fob}. Several white papers have been submitted to Astro2020 and Snowmass2021 describing the physics program and facilities that could meet some or all of these needs including DESI-II~\citep{DESI2:2021}, MegaMapper~\cite{Schlegel:2019eqc,Schlegel:2021,2022arXiv220904322S}, the Maunakea Spectroscopic Explorer (MSE)~\cite{Marshall:2019wsa,Marshall:2021}, and SpecTel~\citep{Ellis:2019gnt}. In-depth discussion of the science opportunities, together with detailed forecasts for a number of experimental configurations, has been presented~\cite{Ferraro:2022, Sailer:2021, MegaMapper_science:2020}. The fundamental physics program of a future spectroscopic facility is diverse and multifaceted. Following the evolutionary history of the Universe from early to late times: \begin{itemize} \item {\bf Inflation}: A next-generation spectroscopic survey will access an extremely large volume of the Universe, which will enable it to measure a number of primordial quantities beyond the cosmic variance limit of the CMB. These include making exquisite measurements of the power spectrum, dramatically increasing the sensitivity to primordial features or oscillations that can be created by many models of inflation. Sharp features arise when there is a sudden transition during inflation such as a step in the potential. Resonant features arise when some component of the background oscillates with a frequency larger than the Hubble scale. Another important advance achievable by these surveys is measurement of primordial non-Gaussianity, with the goal of an order-of-magnitude improvement in sensitivity to reach $\sigma(f_{\rm NL}^{\rm local}) < 1$, allowing the two main inflationary scenarios (single field vs multi-field inflation) to be distinguished. Additionally, greatly improved measurements of the running of the spectral index and of spatial curvature will shed additional light on the physics of the early Universe. \item {\bf Neutrinos and Dark Radiation}: Measurements of the physics of the early Universe provide strong constraints on the dark sector via, for example, the determination of the number of light particles that are thermalized.
This is parameterized by $N_{\mathrm{eff}}$, the effective number of relativistic species other than photons. The Standard Model with three neutrino species predicts $N_{\mathrm{eff}}=3.045$. Measurements of the matter power spectrum can detect or exclude the existence of other particle species that decouple after the QCD phase transition, and tightly constrain particles that decouple earlier. Cosmological measurements from large galaxy surveys will complement CMB observations and other experimental efforts to detect low-mass dark sector particles (e.g., via quantum sensors, a 3\,GeV muon beam dump experiment, and DarkQuest). \item {\bf Dark Energy Throughout Cosmic History}: We are now in the domain of precision tests of the $\Lambda$CDM model. During this decade, experiments like DESI, Rubin LSST, Euclid, and the Roman Space Telescope will map the expansion of the Universe up to redshifts of $z \sim 2$ (when the Universe was roughly one-third of its current size). A wide-field multi-object spectroscopic facility is needed to map the expansion of the Universe to higher redshifts (earlier times). A detailed 3D map of at least $\sim 40$ million galaxy positions with redshifts in the range $2 < z < 5$ is needed to take the next step in dark energy research. Precision measurements of the redshifts of $>40$ million distant galaxies will require an increase of about an order of magnitude in the combination of the number of fibers and light collection capabilities over current spectroscopic instruments, driving the design of future facilities. Additionally, precision measurements of the matter power spectrum will be able to provide indirect percent-level constraints on Early Dark Energy (EDE) up to $z \sim 10^5$, when the Universe was only a few years old~\cite{Sailer:2021}. \end{itemize} \subsection{Complementary Facilities} The optical/near-infrared dark energy facilities described in this section will be complemented by several ground- and space-based observatories at similar wavelengths. They will be in various phases of planning, construction, and operation over the coming decades. Since these facilities are currently driven by support from NASA, NSF-AST, and private contributions, we summarize them briefly here. We note that future support from DOE or NSF-PHYS could come through future instruments, US Extremely Large Telescopes (US-ELTs), or support for joint analyses. \begin{itemize} \item {\bf US Extremely Large Telescopes} The US-ELT program consists of two 30-m-class telescopes: the Giant Magellan Telescope (GMT) to be sited in Chile and the Thirty Meter Telescope (TMT) to be sited in Hawai`i. These telescopes have relatively small fields-of-view and multiplexing, and thus are not optimal as wide-area spectroscopic survey facilities. However, the large light collecting area provided by a 30-m mirror allows these telescopes to observe extremely faint objects quickly. The US-ELT program was the highest-ranked ground-based program in the Astro2020 Decadal Survey, but it is unlikely that the HEP community will participate in the design or construction of these telescope facilities. However, US-ELTs could complement one of the surveys discussed in this section by providing, for example, deep spectroscopy for training photometric redshift estimators on the faintest galaxies observed by Rubin, or high-resolution imaging data to constrain dark matter through strong lensing.
The cost of an ELT instrument (${\sim}\$40$M) would be roughly comparable to the cost of other HEP cosmic survey construction projects (e.g., DECam or DESI). \item {\bf Small, Wide-field Optical Surveys} Both the Zwicky Transient Facility (ZTF) and the La Silla Schmidt Southern Survey (LS4) provide a complementary, and necessary, set of observations to those of Rubin and the space-based surveys. ZTF and LS4 have direct relevance to several cosmology and fundamental physics efforts including: peculiar velocity measurements, and hence fundamental constraints on general relativity, with supernovae as standardized candles; gravitational wave standard sirens as probes of the expansion of the Universe and gravity; and measurements of the Hubble constant through Type Ia and II-P supernovae. They provide a higher cadence than the aforementioned surveys, especially important for analyzing the light curves as well as for triggering follow-up of low-$z$ supernovae, and both have a robust target-of-opportunity (ToO) program for GW counterpart discovery in the optical. In addition, they open up the possibility of improved calibration for both Tully-Fisher and Fundamental Plane measurements (from spectroscopic surveys such as DESI) via supernova distances. \item {\bf Space-based Observatories} Space-based missions, including ESA's Euclid, NASA's Nancy Grace Roman Space Telescope, and SPHEREx, will provide wide-field imaging and spectroscopy from above the atmosphere, complementing the ground-based program with space-quality imaging, near-infrared coverage, and all-sky spectral mapping. \item {\bf Gravitational Wave Observatories} Gravitational wave facilities, from the current LIGO network to proposed next-generation observatories such as Cosmic Explorer, will provide standard-siren events that complement the surveys described here; they are discussed further in Section \ref{sec:multimessenger}. \end{itemize} \section{Cosmic Microwave Background Surveys} \label{sec:cmb} Wide-field surveys of the CMB play a central role in particle physics and cosmology. Missions such as {\it{COBE}}~\cite{Mather:1982}, {\it{WMAP}}~\cite{Bennett:2013}, and {\it{Planck}}~\cite{Planck:2011} have provided critical insight into the birth and early evolution of our Universe. In addition, many ground-based CMB experiments such as AdvACT~\cite{advact:2016}, SPT-3G~\cite{sobrin:2022}, BICEP/KECK~\cite{BK:2022}, and Simons Array~\cite{SA:2016} continue to push the frontiers of CMB measurements into lower-noise and higher-resolution regimes. Power spectra of CMB temperature and polarization data provide some of the tightest constraints on particle physics models, dark matter, and inflation, and -- combined with measurements of gravitational lensing of the CMB -- compelling evidence for dark energy. Building on the successes of precursor CMB experiments, the Simons Observatory~\cite{Simons:2019} and the South Pole Observatory are commencing observations in the early 2020s (see Figure~\ref{fig:CMB-timeline}). Looking ahead, the CMB-S4 project~\cite{CMB-S4:2022ght} is planned to make significant leaps in sensitivity. CMB-S4 is a joint DOE and NSF project that received DOE CD-0 approval in 2019, is advancing toward DOE CD-1 and NSF PDR, and currently has broad engagement from the majority of the US ground-based CMB science community. On a longer time-scale, CMB-HD is a proposed experimental concept that would have six times the resolution of current and planned CMB experiments, opening up a new regime of millimeter-wave science~\cite{CMB-HD:2022bsz}. Given the pivotal role that CMB experiments play in the landscape of particle physics and cosmology, and their phenomenal successes thus far, we strongly advocate for continuing the CMB program into the coming decade and beyond.
Similar to the optical survey program, we emphasize three priorities for the CMB survey program: \begin{itemize} \item Support for ongoing and near-future surveys, including CMB-S4; \item Support for the development of new technology to enable the next major survey post CMB-S4; \item Support for the design and development of the next major survey. \end{itemize} \begin{figure*}[t] \includegraphics[width=6.5in]{Cosmic/figures/CMB-Timeline-Edited.pdf} \caption{Timeline of current and future ground-based CMB experiments. For context, the timeline also includes a few sub-orbital and satellite experiments in grey. Dashed boxes indicate fully-funded facilities. The fade-in regions indicate commissioning periods, while the boxes indicate full survey observations.} \label{fig:CMB-timeline} \end{figure*} \subsection{CMB-S4} CMB-S4 is the next-generation (Stage IV) cosmic microwave background experiment~\cite{CMB-S4:2022ght}. CMB-S4 is designed to achieve an enormous increase in sensitivity compared to existing CMB experiments while simultaneously leveraging two premier observing sites. Combined, these unique features will enable CMB-S4 to make transformational measurements of primordial gravitational waves and inflation, and of the dark Universe~\cite{cmbs4:2019}. Both of these science themes are of significant interest to the high-energy physics and cosmology communities. Additionally, the unique properties of CMB-S4 will enable mapping of the matter in the cosmos and studies of the time-variable millimeter wavelength sky. CMB-S4 is a joint DOE and NSF project that has strong community support, as evidenced by the mature, large collaboration and endorsements in the previous Snowmass process and P5 report~\cite{HEPAPSubcommittee:2014bsm}, and more recently in the Astro2020 Decadal Survey Report~\cite{Astro:2020}. CMB-S4 will construct telescopes both in Chile and at the South Pole, taking best advantage of the features of each site to pursue its scientific goals. The South Pole site will host 18 small-aperture telescopes (SATs, diameter ${\sim}0.5$ meter) and one 5-meter large-aperture telescope (LAT). These telescopes will conduct an ultra-deep survey of 3\% of the sky, targeting the B-mode polarization measurements, at both large and small angular scales, needed to constrain inflation. Two 6-meter LATs will conduct a deep and wide survey of 60\% of the sky from the Chilean site, targeting the CMB-S4 science goals that benefit from additional sky area. Over 550,000 detectors will be deployed across the CMB-S4 telescopes, an enormous increase over all Stage III experiments combined, which will make the planned increase in sensitivity possible. As introduced above, CMB-S4 has four main science themes that drive the experiment design and the resulting exceptional measurement opportunities. Here, we emphasize CMB-S4's impact on two key themes of particular relevance to the science of cosmic acceleration (for more details see discussions in white papers~\cite{CMB-S4:2022ght,Chang:2022tzj,Baxter:2022enq,Achucarro:2022qrl,Amin:2022soj,Blazek:2022uzw}). \begin{itemize} \item \textbf{Primordial gravitational waves and inflation}: Cosmic inflation is a prominent theory for the origin of structure in the Universe. A detection of primordial gravitational waves from inflation would be historic, providing evidence for the quantization of gravity and opening a window into the very early Universe~\cite{Achucarro:2022qrl}.
The factor of five leap in sensitivity and exquisite systematics control embedded in the CMB-S4 design will enable the experiment to cross major theoretically motivated thresholds through either a detection of these primordial gravitational waves from an inflationary epoch or an upper limit that will rule out entire classes of the most compelling inflationary models. In either outcome, CMB-S4 will dramatically advance our understanding of the primordial Universe. \item \textbf{The dark Universe}: CMB-S4 will also provide multiple compelling probes of the late-time Universe, enabling stringent tests of dark energy and other models of the Universe's observed accelerated late-time expansion. These probes include precision measurements of the gravitational lensing of the CMB, the kSZ velocity field, and a large ($>$100,000) sample of massive galaxy clusters discovered via the tSZ effect. There is an additional wealth of information to be gained through cross-survey analyses between the CMB and other tracers of structure, as detailed below. \end{itemize} In addition to the fundamental physics above, the sensitivity and sky coverage of the CMB-S4 millimeter-wavelength survey will enable other important scientific opportunities in the themes of `mapping matter in the cosmos' and the `time-variable millimeter-wave sky'. Light relics are a well-motivated potential contributor to the energy density of the Universe that would lead to an observable signal in the CMB temperature and polarization~\cite{Dvorkin:2022jyg}. CMB-S4 will be able to constrain the effective number of neutrino species with a sensitivity to Weyl fermion and vector particles that froze out in the first fractions of a nanosecond. For explorations of the cosmological and astrophysical science of the growth of structure, maps of the ionized gas distribution at CMB-S4 sensitivity will lead to the detection of an order of magnitude more high-redshift ($z > 2$) galaxy clusters than found by Stage III experiments~\cite{cmbs4:2019}. This is just one example of the scientific potential of the ionized gas map; several others, including opportunities for complementarity, are described in the CMB-S4 white paper and its references~\cite{CMB-S4:2022ght}. Finally, CMB-S4 will provide new, key insights into millimeter-wavelength transient phenomena by making a repeated, systematic survey of a large area of the sky at a cadence of approximately one day. Only limited studies of the variable millimeter-wave sky exist, and the CMB-S4 survey will therefore open this discovery space. \subsection{CMB-HD} CMB-HD is a proposed CMB experiment that would have three times as many detectors as CMB-S4 and about six times the resolution of current and planned high-resolution CMB telescopes, opening a new regime for millimeter-wave science~\cite{CMB-HD:2022bsz}. CMB-HD would cross important thresholds for improving our understanding of fundamental physics, including the nature of dark matter and dark energy, the light particle content of the Universe, the mechanism of inflation, and whether the early Universe has new physics beyond the Standard Model, as suggested by recent H$_0$ measurements. The combination of CMB-HD with contemporary ground- and space-based experiments would also provide countless powerful synergies. The concept for the CMB-HD instrument is two new 30-meter-class off-axis crossed Dragone telescopes located on Cerro Toco in the Atacama Desert~\cite{CMB-HD:2022bsz, Sehgal:2019ewc, Sehgal:2020yja}.
Each telescope would host 800,000 detectors (200,000 pixels), for a total of 1.6 million detectors. The CMB-HD survey would cover half the sky over 7.5 years. This would result in an ultra-deep, ultra-high-resolution millimeter-wave survey over half the sky with 0.5~$\mu$K-arcmin instrument noise in temperature (0.7~$\mu$K-arcmin in polarization) in the combined 90 and 150 GHz channels and 15-arcsecond resolution at 150 GHz. CMB-HD would also observe at seven different frequencies between 30 and 350 GHz for mitigation of foreground contamination. CMB-HD would be able to measure the dark energy equation of state to sub-percent accuracy, $\sigma(w_{0})= 0.005$, by combining galaxy cluster abundance measurements, galaxy cluster lensing measurements, and measurements of the primary CMB power spectra~\citep{Raghunathan:2021zfi, Raghunathan:2021tdc}. CMB-HD would also constrain an epoch of inflation in several ways. CMB-HD would probe the existence of inflationary magnetic fields in the early Universe via tight constraints on anisotropic birefringence. It would have the sensitivity to obtain a $1\sigma$ uncertainty on the strength of scale-invariant inflationary magnetic fields, $B_{\rm SI}$, of $\sigma(B_{\rm SI})=0.036~\mathrm{nG}$, which is below the $0.1\,\mathrm{nG}$ threshold required for inflationary magnetic fields to explain the $\mu\mathrm{G}$-level magnetic fields observed in galaxies today~\cite{Mandal:2022tqu}. CMB-HD would therefore have the capability to detect inflationary magnetic fields with about $3\sigma$ significance or greater, and such a detection would provide compelling evidence for inflation. The cross-correlation of CMB-HD with galaxy surveys would also provide powerful constraints on inflation. CMB-HD would measure primordial local non-Gaussian fluctuations in the CMB, characterized by the parameter $f_\mathrm{NL}^{\rm local}$, with an uncertainty of $\sigma(f_\mathrm{NL}^{\rm local}) = 0.26$, by combining the kinetic Sunyaev-Zel'dovich (kSZ) signal from CMB-HD with an overlapping galaxy survey such as that from the Vera Rubin Observatory. This constraint is limited by the galaxy sample from the Rubin Observatory, rather than by CMB-HD, and a combination with future, even higher-resolution galaxy surveys would lead to even better constraints. Reaching a target of $\sigma(f_\mathrm{NL}^{\rm local}) < 1$ would rule out a wide class of multi-field inflation models, shedding light on how inflation happened~\cite{Alvarez:2014vva,Smith2018,Munchmeyer:2018eey,Deutsch2018,Contreras2019,Cayuso2018}. Moreover, the combination of the kSZ effect from CMB-HD with the Rubin Observatory galaxy survey can constrain the primordial trispectrum amplitude, $\tau_\mathrm{NL}^{\rm local}$, with $\sigma(\tau_\mathrm{NL}^{\rm local})<1$~\cite{AnilKumar:2022flx}. CMB-HD can also provide an independent constraint on primordial gravitational waves with an uncertainty of $\sigma(r)=0.005$ via the combination of the polarized Sunyaev-Zel'dovich effect from CMB-HD with Rubin Observatory galaxies~\cite{CMB-HD:2022bsz}. For further details see~\cite{CMB-HD:2022bsz} and~\href{https://cmb-hd.org}{https://cmb-hd.org}. \section{Opportunities from Cross-survey Analyses} \label{sec:cross} The next decade will see dramatic improvements in our ability to probe the Universe, with major leaps in capabilities occurring nearly simultaneously across many new facilities.
Each of these new facilities will enable transformative science, but joint analyses of the resultant datasets will be more powerful and robust than what can be achieved with any individual instrument. Notably, cross-survey analyses will improve the constraints on cosmic acceleration that drive the design and requirements for the cosmological surveys in which DOE has invested, and will also leverage those investments to constrain other aspects of fundamental physics that are important for our understanding of the Universe. At present, however, cross-survey analyses can be challenging to initiate, organize, and fund. We therefore advocate for the creation of clear pathways to support cross-survey analyses as part of the core mission of the DOE Cosmic Frontier. \subsection{Static Probes} We first consider cross-survey analyses between ``static'' probes of the Universe, i.e., those observables that do not change significantly over the time frame of a survey. This includes probes like galaxy positions, weak gravitational lensing, and the Sunyaev-Zel'dovich effect. Current and future cosmic surveys will obtain measurements of multiple static probes that overlap over significant fractions of the sky. Such measurements will enable many cross-survey analyses to obtain tighter and more robust constraints on the fundamental ingredients of our Universe. We illustrate the diversity and complementarity of overlapping cosmic probes in Fig.~\ref{fig:multiwavelength}. By combining overlapping probes from different surveys, new information about cosmological structure can be extracted, and the cosmological constraints from individual surveys can be made more robust to possible systematic biases. Some prominent examples include: \begin{itemize} \item {\bf Improved cosmological constraints.} By leveraging multi-wavelength data and combining imaging and spectroscopic surveys, cross-survey analyses will improve cosmological constraints from the evolution of large-scale structure. \item {\bf Improved robustness of cosmological constraints}. Analyses of cross-survey correlations help to isolate survey-specific systematic effects and break degeneracies between cosmological parameters and nuisance parameters, making cosmological constraints more robust. In addition, multi-wavelength data allow for improved understanding of baryonic processes, one of the main sources of systematic uncertainty in cosmological analyses of large-scale structure. \end{itemize} Measuring cross-correlations between different cosmological probes requires overlapping measurements on the sky. The survey strategies of several operational and planned DOE-funded cosmic surveys --- including optical imaging, spectroscopic, and CMB surveys --- have significant overlap. The potential therefore exists to harness the power of cross-correlations between them. However, modeling multi-survey correlations necessarily requires additional work beyond that typically undertaken by single surveys. In particular, there are significant technical challenges in simultaneously modeling and simulating observables that span a wide range of wavelengths and scales, and that involve multiple astrophysical processes. Beyond the technical challenges associated with cross-survey analyses, there are also practical difficulties associated with this work. Any such analysis necessarily requires detailed knowledge of data products generated by multiple surveys. Some of this information may be proprietary, and not easily shared.
Previous cross-survey analyses have typically waited until data products become public (thereby delaying results) or have operated through cross-survey memoranda of understanding (MoU). Relative to single-survey analyses, analyses conducted through MoU are often subject to additional bureaucratic hurdles that can delay progress and unnecessarily increase workloads. These difficulties can be significant enough to discourage cross-survey analyses, a clearly suboptimal outcome. \begin{figure*}[t] \centering \includegraphics[scale=0.65]{Cosmic/figures/mdpl2_maps} \caption{Simulated maps of the same patch of the Universe, as measured with several different cosmological probes (from left to right): dark matter halos (detectable via the galaxies they host), galaxy clusters (with the size of the circles indicating the cluster mass), gravitational lensing of the CMB ($\kappa_{\rm CMB}$), the thermal Sunyaev-Zel'dovich effect (tSZ), the kinematic Sunyaev-Zel'dovich effect (kSZ), the cosmic infrared background (CIB), and gravitational lensing of galaxy shapes (shading indicates the convergence, $\kappa_{\rm gal}$, while white lines indicate the shear, $\gamma$). Although each probe is very different, they are all sourced by the same underlying large-scale structure, and are therefore correlated. Joint analyses of these different probes can yield access to new cosmological information about the underlying structure. Simulated data from Omori (in prep.).} \label{fig:multiwavelength} \end{figure*} To capitalize upon these opportunities and address the associated challenges, a qualitatively new level of investment in cross-survey, joint-probe infrastructure is required -- this includes simulations, associated modeling, coordination of data sharing, survey strategy, and training for the next generation of scientists in a way that transcends any individual project or collaboration. The required investments are substantial, but they are critical for the next generation of cosmic surveys to fully realize their potential. Below we present a summary of future opportunities for growth that have the potential to multiplicatively enhance the scientific returns of cosmological surveys in the 2020s: \begin{itemize} \item \textbf{Joint simulations:} Nearly all of the multi-probe analyses discussed above require high-fidelity synthetic data that is validated against observational data. The computational demands of these simulations can be high, and an intensive human effort is required in order to generate synthetic data of sufficiently high quality to merit this expense. Considerable progress has been made in this area in recent years, but efforts are typically limited to an individual survey, or even an individual probe in isolation. For example, most CMB simulations do not include physically realistic models of galaxy populations at low redshift, and synthetic datasets tailored for optical surveys of galaxies do not commonly include realistic treatments of the diffuse gas that can be observed in CMB surveys via, e.g., the SZ effect. As a result, the need is increasing for simulations that are suitable for multi-wavelength cross-correlation analyses. Addressing this widespread need is a key opportunity for further growth in the area of generating multi-survey synthetic data, and the wider cosmology community stands to greatly benefit from increased support for these efforts.
\item \textbf{Joint modeling and analysis:} Current toolkits such as \texttt{Cobaya} \cite{Cobaya}, \texttt{Monte Python} \cite{MontePython}, \texttt{CosmoLike} \cite{cosmolike}, and \texttt{CosmoSIS} \cite{Zuntz:2015} have been successful in combining a number of ``standard'' large-scale structure probes in Bayesian analyses. Sophisticated modeling efforts with the capability to make multi-wavelength predictions are commonly implemented in custom codebases that require highly specialized techniques in order to infer cosmological parameters in a Bayesian fashion. Fully integrating a new generation of models with cosmological inference pipelines is another exciting opportunity, and would leverage new technologies such as machine learning methods, GPU interfaces, automatic gradient approaches, and likelihood-free inference methods. \item \textbf{New initiatives enabling joint analyses:} By construction, multi-survey analyses in the era of large collaborations are not hosted under one single collaboration with a well-established communication structure and analysis tools. Presently such analyses are enabled by MoUs and other agreements, or carried out with public data. This structure can create an inherent barrier for multi-survey analyses, and suppress potential opportunities for exciting discoveries. Conversely, new levels of effort in cross-survey collaboration could offer major benefits to the scientific returns of future surveys. Such initiatives could include coordination of survey strategy to ensure overlap, joint processing of data, and coordination of cross-survey blinding strategies. New funding lines that focus on multi-survey cross-correlation analyses could be an effective, modest way to address some of these limitations. The scope of these problems, however, warrants consideration of new ``centers'' focusing on the development of joint simulation/modeling/analysis tools, as well as training/education for the next generation of cosmologists, who, already in the 2020s, will be confronted with data of a qualitatively new character compared to previous decades. In addition, this effort should be combined with support for a healthy and equitable collaboration community \cite{2022arXiv220601849A,2022arXiv220403713H}. \item \textbf{Support for proposed cosmic survey instruments:} The enormous potential of joint analyses discussed in this white paper is necessarily built on the success of single-probe experiments. Enabling cross-survey analyses requires support for wide-field cosmic surveys, including those listed in Figure~\ref{fig:roadmap} and many more described in accompanying Snowmass white papers \cite{CMB-HD:2022bsz,2022arXiv220307291D,2022arXiv220306200C, 2022arXiv220307258K, Ferraro:2022, Schlegel:2019eqc}. In return, joint-probe analyses will provide critical and complementary information for understanding cosmic acceleration and other fundamental physics. \end{itemize} \subsection{Transient Probes} Transient science is a key frontier of modern cosmology, with profound implications for our understanding of dark energy, cosmological distances in the Universe, extreme strong-gravity environments, and high-energy physics. An extensive variety of transient science requires diverse data sets that can only be acquired via multiple experiments and surveys.
For example, optical telescopes are necessary to search for and associate the transient counterparts of gravitational-wave standard sirens detected by gravitational-wave observatories, enabling measurements of the Hubble constant $H_0$ \cite{2009PhRvD..80j4009C,AbbottH0,2019ApJ...876L...7S,2020ApJ...900L..33P,2021arXiv211103604T,palmese_StS_DESI,Mukherjee:2022afz}. Moreover, studies using transients in combination with data from neutrino experiments such as IceCube have been proposed to measure the neutrino masses \cite{Pagliaroli:2011zz,Nakamura:2016kkl}. To measure the properties of dark energy specifically, precise and accurate distance measurements will be needed for Rubin Observatory Type Ia supernovae via spectroscopic, near-infrared, and enhanced temporal sampling observations \cite{snpv}. A high-efficiency search and discovery program will also be needed for the electromagnetic counterparts of standard sirens, to enable a measurement of the Hubble constant that is independent of the systematic uncertainties affecting other dark energy probes~\cite{Kim:2022iud}. One can also test theories of gravity with GW sources, for both bright and dark standard sirens~\cite{Mukherjee:2020mha}. High spatial resolution and enhanced temporal sampling are also required to obtain precise time delays by modeling strongly lensed systems discovered by the Rubin Observatory, and therefore to independently measure the Hubble constant~\cite{Linder:2011,TreuMarshall2016,Wong:2020,BirrerTreu:2021}. Finally, peculiar velocities inferred from the distances of standard sirens and supernovae could be compared with the density perturbations within the DESI survey volume to measure the strength and length scale of gravity~\cite{palmese20,Diaz:2021pem,Mukherjee:2020hyn}. A critical issue currently facing the HEP community is the perceived inconsistency between different experiments and/or cosmological probes. A prime example is the Hubble tension, where measurements of the Hubble constant from the cosmic microwave background, baryon acoustic oscillations, and Type Ia supernovae are not in agreement. These tensions present an opportunity for our community to make a breakthrough in our understanding of dark energy. Their resolution may lie in new fundamental physics, or in unaccounted-for systematic errors. Transient science can play a crucial role in solving this challenging issue, given enough resources and support to develop its full potential (see Section 2 of~\cite{Kim:2022iud}, for example). \textit{No experiment alone can solve the dark energy problem. Such a breakthrough will require a complex network of experiments, small and large, working in tandem. As dark energy is a priority of our community, it is natural that we ramp up our efforts to build and operate those experiments, optimizing for dark energy science. Those efforts include near-, medium-, and long-term investments. For example, we need data from gravitational wave observatories, and from telescopes that can identify their transient counterparts and host galaxies. Therefore, supporting partnerships between ongoing projects (such as DES/DESI/LSST and the LIGO/Virgo/KAGRA Collaborations) as well as the development of a third-generation gravitational wave observatory (e.g.
Cosmic Explorer~\cite{2022arXiv220308228B}), which until recently had been considered outside the scope of the HEP community, is consistent with our goals.} Time-domain science with multiple experiments has unique considerations that do not arise for self-contained experiments, e.g., regarding experimental design. In a multi-experiment context, experimental designs can be optimized for a joint rather than a stand-alone project. The joint analysis of low-level data products (e.g., pixels) can preserve significantly more information than the combination of lossy final data products. To benefit from this kind of joint analysis, static and time-domain resources are necessary for developing a new infrastructure for real-time communication between experiments. New support is needed to enable this time-domain science to achieve and surpass the precision level of the current standard static experiments. As such, the analysis of multiple experiments requires resources beyond the sum allocated to the individual ones. We need to develop simulations that account for different probes to support self-consistent interpretation of the multi-experiment data. Ultimately, new experiments must be developed and supported when existing ones are insufficient. We advocate for the transient science initiatives detailed in Kim et al.\ (2022)~\citep{Kim:2022iud}: \begin{itemize} \item Small projects to acquire supplemental data to enhance the science reach of transients discovered by Rubin LSST. \item Use of the 4-m Blanco telescope hosting DECam for fast and effective search and discovery of transients, including gravitational wave events, strongly lensed quasars, and strongly lensed supernovae. \item Infrastructure that enables cross-experiment, cross-facility coordination and data transfer for time-domain astronomical sources. \item Theory/modeling that improves understanding of the transient astrophysical probes that are used to study cosmology. \item A US-HEP multi-messenger program, supported with dedicated target-of-opportunity allocations on US-HEP and partner facilities for the follow-up of gravitational waves and rare neutrino events. \item The development of a novel standard siren survey program using next-generation gravitational wave observatories to fully incorporate this new observable into the research portfolio for dark energy science. \item Construction of novel large-scale projects for a multi-messenger dark energy survey, including gravitational wave observatories and optical/NIR telescopes, designed to resolve the current tensions and advance understanding of dark energy and cosmic acceleration. \end{itemize} \section{Small Projects and Pathfinders} \label{sec:smallproj} In 2016 and 2017, the community held two workshops to discuss future opportunities for survey science and to develop a small-project portfolio that would include technology developments to enable a major new Stage V Spectroscopic Facility. The findings are summarized in Ref.~\cite{Dawson:2018fob}. In the following, we provide an overview of the findings that are relevant in particular to the development of new facilities to explore cosmic acceleration. \subsection{Spectroscopy Pathfinder} In Ref.~\cite{Dawson:2018fob}, the importance of new technology developments was highlighted. These developments are needed in the near future to enable a credible design for a Stage V spectroscopic facility.
In particular, near-term investigations of the following areas will be crucial: \begin{itemize} \item {\bf Detector technologies} to extend to higher redshift (e.g., germanium CCDs) and lower noise (e.g., Skipper CCDs). Current silicon CCD detectors have a wavelength cutoff due to the band gap of silicon. Lower-band-gap materials, such as germanium, offer the potential to extend to higher redshift. Precision measurements of faint, distant sources can be dominated by detector readout noise. Novel Skipper CCD detectors offer the ability to reduce noise through multiple non-destructive measurements of the charge in each detector pixel, with the read noise falling roughly as the inverse square root of the number of samples. A challenge in Skipper CCD technology is the readout time, which scales with the number of non-destructive measurements that are made. \item {\bf Fiber positioner technologies} to enable smaller pitch, denser packing, and greater robustness. Two technologies are currently considered for fiber positioners. The robotic twirling post design has been used by DESI. R\&D is ongoing to shrink the patrol radius and increase the packing density. Robustness is a current challenge faced by twirling post technology. The second technology is tilting spines, which are being used by the 4MOST spectrograph. R\&D is ongoing to shrink the pitch and demonstrate precise control of fiber positions. \item {\bf Wide-field optics} to enable larger focal planes that can hold more fibers. This is a critical component toward increasing the total fiber number. Advances have been made in the context of several telescope designs to allow $> 1$-meter diameter focal planes (e.g., MegaMapper, MSE, SpecTel). A current challenge is the fabrication of large-diameter lenses. \item {\bf Verification of high-redshift target viability} (e.g., Lyman-alpha emitters, Lyman-break galaxies, etc.). This work is currently ongoing with targeted observations by DESI. \item {\bf Narrow-band targeting} would use large-field imagers outfitted with multiple medium- or narrow-band filters to improve targeting efficiency for future spectroscopy. Such a campaign could be executed by DECam outfitted with a new set of filters for a moderate cost. \end{itemize} \subsection{21-cm Pathfinders} Neutral hydrogen has been ubiquitous in the Universe since the CMB was formed, and its 21\,cm emission can therefore trace large-scale structure across cosmic time. At low redshift, maps of the 21\,cm emission line can form a galaxy survey to constrain models of dark energy. At higher redshifts, they can improve measurements of the primordial power spectrum as a probe of inflation (described in the CF5 report). In all cases, the primary challenge is removing bright foreground emission from the resulting maps, which drives the instrument design. Maps of 21\,cm emission at redshifts $z<6$ form a galaxy survey using the signal from neutral hydrogen trapped in galaxies. Unlike their optical counterparts, these radio surveys naturally have wide fields of view and observe all redshifts in their band simultaneously, allowing them to quickly survey very large volumes spanning the redshift desert ($z\sim 1$--$3$) and beyond ($z\sim 3$--$6$), where optical spectroscopy is challenging or impossible.
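As a concrete illustration of the frequency--redshift mapping that underlies such surveys, the minimal sketch below (ours, purely illustrative) converts observed frequencies into 21\,cm redshifts via $z = \nu_{21}/\nu_{\rm obs} - 1$; the example frequencies correspond to the band edges of the PUMA reference design discussed below.
\begin{verbatim}
# Map observed radio frequencies to 21 cm redshifts: z = f_rest / f_obs - 1.
F21 = 1420.406  # MHz; rest frequency of the neutral-hydrogen hyperfine line

def z_of(f_obs_mhz):
    return F21 / f_obs_mhz - 1.0

for f in (1100.0, 400.0, 200.0):
    print(f"{f:6.0f} MHz -> z = {z_of(f):.2f}")
# 1100 MHz -> z = 0.29;  400 MHz -> z = 2.55;  200 MHz -> z = 6.10
\end{verbatim}
This is why a single wide-band radio instrument can cover the redshift desert and beyond in one survey: every frequency channel in the band is simultaneously a redshift slice.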
To detect cosmological neutral hydrogen across a wide redshift range and target inflation and dark energy science goals, a dedicated 21\,cm instrument will require a close-packed array of thousands of dishes at least 6\,m in diameter~\cite{2018arXiv181009572C, PUMAAPC}, resulting in a radio array with a physically large footprint ($\sim$ km scales) that requires efficient signal transfer and an extremely large digital correlator. Dedicated experiments to use 21\,cm emission to map structure have shown that the primary challenge is foreground removal~\cite{CHIMEresults,2015ApJ...809...62P,2019ApJ...883..133K,2019ApJ...887..141L,2016ApJ...833..102B,2016MNRAS.460.4320E}, which drives requirements for instrument calibration and design. Solving these design challenges requires targeted R\&D for a pathfinder that has uniform elements; a well-controlled bandpass; instrument stability and stabilization methods using digital signal processing and fast real-time analysis; robust real-time RFI flagging; and new calibration techniques for beam and gain measurements, potentially including drone-based calibration. It also requires analysis and simulations to fold in calibration measurements and assess their impact on cosmological parameter estimation~\cite{2018arXiv181009572C}. The primary US pathfinder targeting this R\&D is the Packed Ultra-wideband Mapping Array (PUMA)~\cite{PUMAAPC}, a proposed next-generation 21\,cm intensity mapping array which is optimized for cosmology in the post-reionization era. The reference design calls for PUMA to consist of a hexagonal close-packed array of 32,000 parabolic dishes 6\,m in diameter, observing at 200--1100\,MHz, corresponding to a redshift range of $0.3 < z < 6$. The pathfinder array for this experiment is the PUMA-5K array, a staged deployment of 5000 dishes that would be used to test the analog, digital, and calibration equipment at a scale large enough to assess success on the sky. Specific technology R\&D required includes: \begin{itemize}[nosep] \item Digital electronics at or near the dish foci. \item A timing distribution network that spans kilometers with relative timing accuracy better than a picosecond. \item Real-time data processing, including real-time calibration, to enable essentially real-time data compression across interferometer inputs. \item Analog system design that includes uniformity of all elements and smooth response across a wide bandwidth. \end{itemize} Finally, the Dark Ages ($20 < z < 150$) are a particularly clean probe of the primordial power spectrum and its statistics, including searches for non-Gaussianity. However, measurements during this era are extremely challenging because the resulting long wavelengths ($\lambda \simeq 7$--$70$\,m) require a physically large instrument and must contend with non-negligible effects from the Earth's ionosphere and significant contamination from human-generated radio sources (RFI). To assess whether the far side of the Moon is adequate to address these issues, the DOE and NASA are collaborating to launch the pathfinder experiment LuSEE-Night (Lunar Surface Electromagnetics Experiment at Night) in late 2025, deploying 4 steerable monopole antennas to characterize the radio sky at frequencies of 1--50\,MHz with percent-level absolute calibration and a $10^{-3}$ relative calibration between frequency bands.
With data collected over 12 nights, LuSEE-Night should provide measurements of the low-frequency radio sky below 50~MHz and demonstrate the feasibility of Dark Ages cosmology from the far side of the Moon. It should have sufficient sensitivity to exclude the presence of a monopole signal at about the 1~Kelvin level, about 1--2 orders of magnitude above the expected signal, yet sufficient to constrain some models predicting non-standard properties of baryon thermodynamics during the Dark Ages. \subsection{Line-Intensity Mapping} Line-intensity mapping (LIM) is a nascent technique for mapping the large-scale structure (LSS) in the Universe by measuring the spatial distribution of an atomic or molecular emission line with low-resolution spectrometers ($\lambda / \Delta \lambda < 1000$)~\cite{Kovetz:2017agg,Karkare:2022bai}. The ability to measure multiple emission lines over a wide range of redshifts $z>2$, beyond the range of current galaxy surveys, makes LIM a particularly promising technique for future surveys of large-scale structure. Although this method can be used with any emission line, LIM using mm-wavelength tracers, such as the rotational transitions of CO and the [CII] ionized carbon fine structure line, is of great experimental interest because such emission can be detected over the redshift range of $0<z<10$ from the ground using technology that is already in widespread use in CMB and sub-mm telescopes. In addition, the Galactic foregrounds are significantly less bright in these frequency ranges than in 21\,cm surveys using similar techniques. LIM with mm-wave tracers may be capable of making very significant improvements in constraints on primordial non-Gaussianity, neutrino properties, light thermal relics, and dark energy, but doing so will require experiments with significantly more receiver elements and longer integration times than currently exist, as well as the development of sophisticated analysis pipelines. A suite of small projects, including CCAT-p, COMAP, CONCERTO, EXCLAIM, mmIME, SPT-SLIM, TIM, and TIME, is currently prototyping various spectrometer and detector technologies, at the scale of $10^5$ spectrometer-hours or less. By contrast, constraining the amplitude of local-type primordial non-Gaussianity to a level $\sigma(f_\textrm{NL}) \sim 1$ that would distinguish between single- and multi-field inflation, or detecting the minimal sum of the neutrino masses at $5\sigma$, would require a survey with $10^8$ spectrometer-hours, three orders of magnitude larger than existing projects. Reaching this level of sensitivity requires investment in a program of technology development, complemented by the staged deployment of projects with increasingly large focal planes of detectors to demonstrate these technologies in the field, analogous to the way the CMB field has grown from few-pixel experiments to an experiment like CMB-S4 with 500,000 detectors. Concurrent, steady improvement in modeling, analysis techniques, tools, and pipelines is a must. Specific technological capabilities to develop include: \begin{itemize}[nosep] \item \textrm{{\bf On-chip spectrometers:}} A key challenge in scaling mm-wave spectrometers to very high channel counts is the spectrometer element itself. Traditional technologies, such as diffraction gratings, Fourier Transform or Fabry-Perot spectrometers, and heterodyne detection, perform well for the existing generation of small focal planes, but each has difficulties scaling to larger focal planes.
On-chip spectrometers, which channelize the incident radiation using a filter bank on the same silicon wafer as the pixel itself (similar to the current generation of multichroic CMB detectors, but with many more channels), offer a promising solution to the scaling problem by shrinking the physical size of the spectrometer and eliminating complex coupling optics between the telescope and the pixel. Despite these attractive features for mm-wave LIM, on-chip spectrometers are comparatively less mature than traditional technologies, and require field demonstration to test existing architectures and adapt the form factor to more efficiently use the focal-plane area of telescopes. \item \textrm{{\bf Multiplexed readout electronics:}} Spectrometers with $10^2$--$10^3$ spectral channels per spatial pixel require far more detectors or channels than broadband cameras. Increased multiplexing factors are essential in order to reduce the per-channel cost of the readout system to a manageable level. Advances in FPGA technologies, such as RF system-on-a-chip (RFSoC) devices, for example, may reduce the per-channel readout cost for kinetic inductance detectors to the level of \$1--2 per channel. \item \textrm{{\bf Telescopes and facilities:}} Detectors for mm-wave LIM, especially on-chip spectrometers, are compatible with the existing generation of small- and large-aperture telescopes built for CMB observations, including SPT, ACT, SO, and CCATp. In some cases, these existing facilities can be used to host mm-wave demonstration cameras without compromising other science goals (e.g., SPT-SLIM on SPT and PrimeCam on CCATp). A staged deployment of mm-wave LIM cameras of increasing size, using existing telescope infrastructure, is critical for achieving on-sky demonstrations of detector and readout technologies and for prototyping analysis pipelines. \end{itemize} Since mm-wave LIM is still a very young field, a staged program of surveys of increasing size will provide valuable data sets for developing analysis techniques and characterizing observational systematics. For example, the problem of interlopers --- lines from different transitions and redshifts that map to the same observed frequency --- is one well-known systematic with several proposed solutions, but these mitigations have yet to be tested on real data. Similarly, the effect of atmospheric lines at mm-wavelengths is not expected to corrupt cosmological LIM signals, but extrapolating this expectation to the required low noise levels is difficult. \section{Multi-Messenger Probes} \label{sec:multimessenger} \subsection{Gravitational Wave Observatories} Historically, gravitational wave (GW) observatories were outside the scientific scope of the US HEP community's efforts. However, since the discovery of GW150914 by the LIGO \& Virgo collaborations and the realization that gravitational wave standard sirens are a powerful dark energy probe (e.g., GW170817), the community has embraced this type of experiment. For that reason, we incorporate such observatories here in the discussion of this report. The next decade will see upgrades of existing facilities, as well as the development of new large-scale projects. Both are discussed below. \subsubsection{Current Ground-Based GW Facilities} Currently, there are two LIGO facilities, in Livingston, Louisiana (LLO) and Hanford, Washington (LHO). Each of these detectors has 4-km-long arms and is expected to have sensitivity for binary neutron star (BNS) mergers out to 160--190\,Mpc during the LIGO Fourth Observing run (O4).
Other facilities expected to be online during O4 and beyond are Virgo, in Italy, as well as the recently constructed KAGRA in Japan. Each of these facilities has 3-km-long arms, and the two are at different stages of sensitivity. During O3, Virgo reached a BNS range of 40--50\,Mpc and is expected to ramp up to 80--115\,Mpc during O4. KAGRA, on the other hand, will be online only for a portion of O4, and it is expected to reach $\sim 10$\,Mpc BNS sensitivity. During the O4 run, the LIGO/Virgo Collaboration (LVC) expects to detect $10^{+52}_{-10}$ BNS events. \subsubsection{Future Ground-Based GW Facilities} With the numerous GW discoveries in recent years, plans for new ground-based facilities are already underway. LIGO-India has been approved for construction and should be operational by the end of the decade. This detector is planned to be the same size and design as the current LIGO facilities and will come online at a sensitivity similar to that of current detectors~\cite{2019arXiv190402718C}. The addition of LIGO-India will greatly improve the localization of GW events, as well as help to measure the polarization of GWs. Additionally, plans for the Einstein Telescope have been moving forward~\cite{ETCern}. This facility would be underground with 10-km-long arms and would be a third-generation (3G) GW observatory. In the US, Cosmic Explorer (CE) is the current 3G proposal for the 2030s, and it is now in the conceptual design phase~\cite{2021arXiv210909882E}. One CE proposal calls for two detectors with 40-km-long arms that would be able to reach sources at $z \sim 100$ in a network with the Einstein Telescope. \subsubsection{Future Space-Based GW Facilities} Space-borne gravitational wave observatories are being planned or proposed for the 2030s. The Laser Interferometer Space Antenna (LISA), a constellation of three spacecraft forming an equilateral triangle with sides 2.5 million km long, is under study. LISA is led by the European Space Agency, with significant contributions from NASA and the US, along with several other countries. LISA will open a new window in the GW spectrum by detecting sources in the mHz frequency band. Its main detections will be the inspiral and merger of massive binary black holes (MBBHs), with masses ranging between $10^4$ and $10^7 M_{\odot}$, at redshifts out to $z \sim 10$. LISA will observe the early inspiral phase of stellar-mass binary systems months to years before they are observed in terrestrial detectors. This has the potential to open an entirely new chapter of the GW field by adding the power of multi-band observations. LISA's scientific objectives include measuring the expansion rate of the Universe by means of GW observations alone, and further constraining cosmological parameters through joint GW and electromagnetic (EM) observations. Another objective of LISA is to understand primordial stochastic gravitational wave backgrounds (SGWBs) and their implications for early-Universe and particle physics \cite{2022arXiv220405434A}. Complementary to LISA, the Deci-hertz Interferometer Gravitational Wave Observatory (DECIGO) is a proposed Japanese space mission in the decihertz frequency band. DECIGO consists of four clusters of three spacecraft (LISA-like) with an arm length of 1000\,km.
The main goals of DECIGO are the detection of primordial gravitational waves to verify and characterize the inflationary era, the measurement of the expansion rate of the Universe and characterization of dark energy, and the prediction of accurate times and directions for electromagnetic follow-up observations. DECIGO would detect $\sim 100,000$ gravitational wave events per year from binary neutron star mergers within $z \sim 5$ \cite{10.1093/ptep/ptab019}. A decihertz observatory like DECIGO is projected to determine the Hubble constant to $\sim 0.1\%$, and the dark-energy parameters $w_0$ and $w_a$ to $\sim 0.01$ and $\sim 0.1$, respectively \cite{2009PhRvD..80j4009C}.
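To make the standard-siren idea that recurs throughout this section concrete, the minimal sketch below (ours, purely illustrative) recovers a Hubble-constant estimate from a single low-redshift siren using the published GW170817 values; a real analysis marginalizes over inclination, peculiar velocity, and selection effects rather than taking a simple ratio.
\begin{verbatim}
# Toy standard-siren estimate: at low redshift, H0 ~ v_H / d_L.
# Inputs are the published GW170817 values (Abbott et al. 2017),
# used here only for illustration.
d_L = 43.8     # Mpc; luminosity distance inferred from the GW signal
v_H = 3017.0   # km/s; Hubble-flow velocity of the host galaxy NGC 4993

H0 = v_H / d_L
print(f"H0 ~ {H0:.1f} km/s/Mpc")  # ~68.9, consistent with the published 70 (+12, -8)
\end{verbatim}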
{ "timestamp": "2022-09-20T02:21:29", "yymm": "2209", "arxiv_id": "2209.08654", "language": "en", "url": "https://arxiv.org/abs/2209.08654" }
\section{Introduction} Off-policy evaluation~(OPE) refers to the task of estimating the expected utility of a decision-making policy without having to deploy the policy~\cite{levine2020offline}. Such an ability is critical for vetting policies in high-stakes decision problems such as in healthcare \citep{gottesman2018evaluating}, where deploying a policy directly is often risky or unethical. Therefore, we must rely on existing data collected from alternate policies deployed possibly in a different environment. Accurate evaluation of a policy is important so that stakeholders can identify beneficial policies and discard the harmful ones. In real world applications, OPE is a challenging task since the deployment environments often undergo changes (i.e., dataset shifts). It is critical for OPE methods to evaluate policies in a way that is robust to these changes. Prior work has proposed different solutions to address this problem. While some approaches address the scenario where potential shifts in the data are fully known in advance~\cite{kato2020off,killian2020counterfactually,zhang2020invariant}, others focus on the case where there is little to no knowledge of potential shifts in advance~\cite{subbaswamy2021evaluating,li2021evaluating}, since this is more common in real world applications. Prior works that focus on robust OPE under unseen dataset shifts predominantly model the shifts by considering bounded perturbations to the joint distribution of the data~\cite{si2020distributional,hatt2021generalizing,zhou2021tabular}, inspired by the adversarial machine learning literature. However, prior research has also demonstrated that such shifts can be overly conservative, and often result in pessimistic estimates of policy utilities~\cite{petrik2019beyond, duchi2019distributionally}. Moreover, accounting for many irrelevant shifts may result in poor performance on the shifts of interest. For instance, consider adversarial training methods that perturb training data in an $\ell_p$-norm ball and minimize the worst-case error across the perturbations \citep{goodfellow2014explaining,madry2018towards,sinha2018certifying}. Existing work shows that robustness to shifts modeled as $\ell_p$ perturbations does not transfer well to real world shifts \citep{taori2020measuring} or even to other $\ell_p$ norms \citep{maini20union}. Such methods can degrade performance on the train distribution \citep{raghunathan2020understanding} and lead to degenerate solutions \citep{hu2018does}. While prior research has often considered bounded perturbations to the joint distribution of the data, such perturbations are quite uncommon in practice and do not represent realistic dataset shifts~\cite{subbaswamy2021evaluating,taori2020measuring}. To illustrate, we plotted the percentage of features that shift in three different real world datasets comprising census, loan, and medical data (see Figure \ref{fig:count_shifts}). For each of these datasets, we often observe that only a subset of the covariates (and not the joint distribution of the data) undergoes a change when a dataset shift occurs. For example, only 4 (14\%) out of the 28 total features changed between the pre-2006 and post-2006 loan datasets (Figure \ref{fig:count_shifts} -- middle). In order to model realistic dataset shifts, it therefore becomes important to exploit domain knowledge and inputs from human experts, which can guide us to the plausible subset of features that are likely to shift.
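As a simplified illustration of how such per-feature shifts can be detected (Figure \ref{fig:count_shifts} uses the conditional-independence tests of \citet{kulinski2020feature}; the sketch below substitutes a marginal two-sample Kolmogorov--Smirnov test per feature, a deliberate simplification rather than the actual procedure):
\begin{verbatim}
# Sketch: flag features whose marginal distribution shifts between datasets.
# A per-feature two-sample KS test -- a simplified stand-in for the
# conditional tests P_train(X_i | X \ X_i) = P_test(X_i | X \ X_i).
import numpy as np
from scipy.stats import ks_2samp

def shifted_features(X_train, X_test, alpha=0.01):
    return [j for j in range(X_train.shape[1])
            if ks_2samp(X_train[:, j], X_test[:, j]).pvalue < alpha]

rng = np.random.default_rng(0)
X_tr = rng.normal(size=(2000, 5))
X_te = rng.normal(size=(2000, 5))
X_te[:, 0] += 0.5                    # only feature 0 shifts
print(shifted_features(X_tr, X_te))  # -> [0]
\end{verbatim}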
While such inputs have been incorporated in learning or evaluating classifiers that are robust to realistic dataset shifts~\cite{subbaswamy2021evaluating,li2021evaluating}, there is little to no work in the OPE literature that leverages domain knowledge and/or human inputs when modeling shifts in the data. \begin{figure*}[htbp!] \centering \begin{minipage}{0.33\textwidth} \centering \includegraphics[scale=0.25]{figures/supervised/adult_detection_results_MB-SM_p.png} US Census data (Adult Income) \end{minipage}% \begin{minipage}{0.33\textwidth} \centering \includegraphics[scale=0.25]{figures/supervised/sba_detection_results_MB-SM_p.png} Loan data (SBA) \end{minipage}% \begin{minipage}{0.33\textwidth} \centering \includegraphics[scale=0.25]{figures/supervised/eicu_detection_results_MB-SM_p.png} Medical data (eICU) \end{minipage} \caption{\textbf{Prevalence of shifts on subsets of features.} Plots show the percentage of features that shift across domains within three different datasets (US Census, Loan, Medical). More details are given in Appendix \ref{app:shift_detect_app}. The three datasets have 10, 2, and 11 domains, respectively. The x- and y-axes show the different domains (10 US state codes in US Census, 2 time periods in Loan, and 11 hospitals in Medical data). Each cell denotes the percentage of the total features (shown in each subtitle) that shift for the corresponding pair of domains. We detect and count feature shifts using conditional independence tests \citep{kulinski2020feature}, where we check whether $P_\text{train}(X_i | X\setminus X_i)=P_\text{test}(X_i | X\setminus X_i)$. First, the plots show that distribution shifts are quite prevalent. Second, we observe that only a few domain pairs in US Census undergo shifts in all features (value of 100\% in the leftmost plot). In all other pairs, only a fraction of the features shift (similar to the Loan and Medical data). This shows that shifts on feature subsets are prevalent in real world datasets and that methods which assume shifts in \textit{all} features are aiming for robustness to unrealistic shifts.} \label{fig:count_shifts} \end{figure*} In this work, we address the aforementioned challenges by investigating how domain knowledge can help provide more realistic estimates of the utilities of policies. To this end, we leverage human inputs on which aspects of the environments may plausibly change and adapt the OPE methods to only consider shifts on these aspects. The hope is that this enables a domain expert to constrain the shifts to only the most relevant or plausible ones. We then leverage the framework of \textit{distributionally robust optimization} \citep[DRO,][]{duchi2018learning} for carrying out robust policy evaluation for contextual bandits and Markov decision processes. More specifically, we make the following key contributions: \begin{itemize} \item We propose a novel framework, Robust OPE (ROPE), which considers shifts on a subset of covariates in the data based on user inputs and estimates the worst-case utility under these shifts. \item We develop computationally efficient algorithms for robust OPE via human inputs for contextual bandits and Markov decision processes. \item We theoretically analyze the sample complexity of the proposed algorithms. \item We carry out extensive experiments with synthetic and real world datasets from the healthcare domain. Our results demonstrate that $\texttt{ROPE}$ can successfully tackle the over-conservatism of existing robust policy evaluation methods.
\end{itemize} Our work paves the way for modeling realistic dataset shifts in the context of off-policy evaluation in reinforcement learning. \section{Related work} As robustness has been extensively studied in a variety of learning settings, we review only closely related works on approaches for handling unseen shifts, i.e., adversarial robustness, DRO, and causal robustness. \textbf{Adversarial Robustness}. Adversarial shifts modeled as $\ell_p$-norm perturbations have been widely considered to learn models robust to adversarial attacks \citep{goodfellow2014explaining,madry2018towards}. However, such methods provide limited gains in robustness to real world shifts \citep{taori2020measuring}. Recent work compensates for such non-realistic shifts by combining multiple $\ell_p$-norm balls \citep{maini20union}, considering shifts in a perceptual distance \citep{laidlaw2021perceptual}, or augmenting with additional datasets from adjacent domains \citep{taori2020measuring,miller20qamodels}. Similar methods have been extended to RL under perturbations to transition dynamics \citep{sinha2018certifying, pinto2017adversarial}, or to horizon length and initial state distribution \citep{qi2020robust}. \textbf{Distributionally Robust Optimization}. DRO generalizes adversarial shifts to perturbations in distributions rather than data points \citep{duchi2018learning}. The primary mechanism of DRO is to specify \textit{uncertainty sets} that encode the uncertainty about potential test distributions. These sets can be defined over the joint distribution of the data~\citep{ben2013robust, bertsimas2018data, blanchet2019quantifying,duchi2018learning}, marginals~\citep{duchi2019distributionally, li2021evaluating}, or conditionals \citep{subbaswamy2021evaluating}, and are well explored for supervised learning. Interestingly, adversarial training can be understood as solving DRO with a Wasserstein metric-based set \citep{staib2017distributionally, sinha2018certifying}. Applications of DRO have been explored in contextual bandits for policy learning~\citep{si2020distributional,mo2020learning,faury2020distributionally} and evaluation~\citep{kato2020off,jeong2020robust}. In robust MDPs, sets based on KL-divergence and $L_1$, $L_2$, and $L_\infty$ norms have been studied~\citep{nilim2005robust, iyengar2005robust}. Some approaches iteratively refine the sets with newly observed data, but the sets are still constructed using $L_1$-norm balls \citep{petrik2019beyond}. Thus, with the exception of some recent work \citep{subbaswamy2021evaluating}, DRO methods (including adversarial training) lack ways to add domain knowledge and constrain the uncertainty sets. \textbf{Causal Robustness}. Causal methods provide robustness to arbitrarily strong shifts by leveraging properties of the data generating process. For instance, using only features that cause the outcome leads to a model robust against arbitrary shifts in the features (under some structural assumptions), as shown in recent works \citep{rojas2018invariant, subbaswamy2019preventing, magliacane2018domain, rothenhausler2018anchor, peters2016causal}. A relaxation to bounded shifts has been proposed \citep{rothenhausler2018anchor,oberst2021regularizing,christiansen2020causal} for supervised learning, but only in the special cases of additive shifts or linear Gaussian causal models. In contrast, we do not make parametric assumptions on the shifts.
Under bounded shifts in conditional distributions, which is a broader class than what we consider, \citet{subbaswamy2021evaluating} propose methods for evaluating the performance of a given classification model. Importantly, there are gaps in the literature in applying these ideas beyond supervised learning. In RL, \citet{si2020distributional,hatt2021generalizing} perform OPE restricted to contextual bandits, and \citet{zhou2021tabular} generalizes the DRO approach to MDPs. However, all of these works only consider shifts in the joint distribution and do not investigate the use of domain knowledge to restrict the uncertainty sets. As RL starts to be deployed in critical applications such as mechanical ventilation in ICUs \citep{peine2021development}, faithful evaluation of RL policies under plausible data shifts is an important need. Thus, extending $\texttt{ROPE}$ to RL and demonstrating its utility in realistic settings is a useful contribution. \section{Preliminaries} \label{sec:prelim} We first introduce the robust evaluation framework based on distributionally robust optimization, and then give the necessary background on decision-making problems modelled as Contextual Bandits (CB) and Markov Decision Processes (MDPs). \textbf{Notation} We denote the random variable for all observable properties of a domain by $V$. The outcome variable is denoted by $Y$. This can represent labels in supervised learning, or rewards and states in RL. Features are denoted by $X$. For a subset of features $Z\subseteq X$, the remaining features are denoted by $X\setminus Z$. Train and test distributions over $V$ are denoted by $P$ and $Q$, respectively (using the same notation for their densities). An uncertainty set w.r.t. a distribution $P$ is denoted by $\mathcal{U}_P$. \subsection{Robust Evaluation using DRO} Say we want to evaluate a decision-making model, parameterized by $\theta$, on a test distribution $Q$. For a given reward function $r(\theta, V)$, this means we want to find the value of the expected reward $\mathbb{E}_{V\sim Q}[r(\theta, V)]$. However, we do not know the test distribution $Q$ a priori. Robust evaluation methods (e.g., \citep{subbaswamy2021evaluating,li2021evaluating}) address this challenge by performing a worst-case evaluation of the model. Instead of finding the expected reward under $Q$, the \textit{robust} value of a model is defined as its worst-case reward across a set of distributions $\mathcal{U}_P$: \begin{equation} \mathcal{R}(\theta, \mathcal{U}_P) {:=} \inf_{\tilde{P}\in\ \mathcal{U}_P}\ \mathbb{E}_{V\sim \tilde{P}}[r(\theta, V)] \label{eq:robust} \end{equation} where $\mathcal{U}_P$ is referred to as the uncertainty set. The key property of the solution to (\ref{eq:robust}) is that if $\mathcal{U}_P$ is suitably chosen such that it contains the test distribution $Q$, then $\mathcal{R}(\theta, \mathcal{U}_P)\leq \mathbb{E}_{V\sim Q}[r(\theta, V)]$. Thus, we get the guarantee that the reward of the model on $Q$ is at least as good as what our robust evaluation finds. Naturally, a major part of this framework is the choice of the uncertainty set $\mathcal{U}_P$. If we precisely choose $\mathcal{U}_P=\{Q\}$, then we recover the test set reward. But as we enlarge the uncertainty set, in the hope of including $Q$, we obtain progressively more pessimistic values due to the worst-case nature of the optimization (\ref{eq:robust}). Next we describe a broad class of uncertainty sets popular in past work. \textbf{Divergence-based Uncertainty Sets}.
The uncertainty set $\mathcal{U}_P$ is all distributions lying in a $\delta$-ball around the train distribution, defined using a divergence metric $\mathcal{D}$: \begin{equation} \label{eq:distance_set} \text{[Joint DRO set]} \qquad \mathcal{U}^\text{div}_{P} := \{Q\ll P\quad \text{s.t.}\quad \mathcal{D}\infdivx{Q}{P}\leq \delta\} \end{equation} where $Q{\ll} P$ denotes absolute continuity, i.e., $P(V)=0$ implies $Q(V)=0$. An example is the set with: $$\mathcal{D}_{\text{CVaR}}\infdivx{Q}{P} {=} \log{\sup_{V\in \texttt{dom}(V)}} \frac{Q(V)}{P(V)},$$ where CVaR stands for Conditional Value at Risk \citep{rockafellar2000optimization}. \textbf{Types of shifts} The above definition of $\mathcal{D}_{\text{CVaR}}$ assumes that $P$ and $Q$ may differ in distribution over all variables in $V$, namely shifts in the \textit{joint distribution}. There are many ways to limit the distributions considered by $\mathcal{U}^\text{div}_{P}$. \textit{Covariate shift} (e.g., \citet{duchi2019distributionally}) assumes that only the covariate distribution $P(X)$ may change at test time to $Q(X)$, while the conditional distribution $P(Y\vert X)$ remains the same. A slight generalization, considered by us, is \textit{subcovariate shift}, which posits that $P(Z)$ may change for a subset of covariates $Z$ while keeping the conditional distribution constant. This reduces the size of the set given by $\mathcal{D}_{\text{CVaR}}$, since many of the $Q$ are removed from the divergence set (\ref{eq:distance_set}) due to the constraint of matching the corresponding distributions in $P$. We leverage human input to specify (sub)covariate shifts and propose OPE methods in reinforcement learning. Next, we provide preliminaries for OPE in two widely-studied settings in RL -- contextual bandits and MDPs. \subsection{Contextual Bandits} \label{sec:prelim_cb} Here we have access to $n$ tuples $\{(Z_i, T_i, Y_i)\}_i$ collected with a known stochastic policy that applies treatment $T_i$ in context $Z_i$ and observes the corresponding outcome $Y_i$. Further, it is assumed that the tuples are i.i.d. and that the joint distribution $P$ factorizes as {\small{$\prod_i P(Z_i)P(T_i\vert Z_i)P(Y_i|Z_i,T_i)$}}. \textbf{OPE in CBs} Given data sampled from $P$, the goal in OPE is to evaluate the expected outcome $\mathbb{E}_Q[Y]$ under a distribution $Q$ induced by following a new policy in the \textit{same} environment \citep{dudik2014doubly, thomas2016data}. Importantly, the difference between the two distributions is assumed to be only due to the different policies, i.e., $P(T\vert Z){\neq} Q(T\vert Z)$, while the rest of the environment-related factors are the same. That is, only shifts on $T$ are considered, as the domain expert intends to implement a different policy, $Q(T\vert Z)$, instead of $P(T\vert Z)$. \subsection{Markov Decision Process} Markov Decision Processes are characterized by the tuple $(\mathcal{S}, \mathcal{A}, \mathcal{P}, r, \gamma)$, where $\mathcal{S}$ is the state-space, $\mathcal{A}$ is the action-space, $\mathcal{P} \triangleq P(\cdot\vert s,a)$ characterizes the dynamics, $r: \mathcal{S} \times \mathcal{A} {\,\rightarrow\,} \mathbb{R}$ is the reward model, and $\gamma\in [0,1)$ is the discount factor. We restrict ourselves to finite-state, finite-action, infinite-horizon MDPs in this work. The reward model and transition model are assumed to be fixed.
The value of a policy $\pi: \mathcal{S} {\,\rightarrow\,} \Delta(\mathcal{A})$ is defined as the expected cumulative reward received starting from $s_0$, $$V^\pi(s_0) = \mathbb{E}_{\pi,P}\left[\sum_{t=0}^\infty \gamma^t r(s_t,a_t)\vert s_0\right].$$ \textbf{OPE in MDPs}. In off-policy evaluation, the goal is to estimate the value of a policy $\pi$ given a batch of trajectories collected using an alternative policy $\mu$. We motivate the need for robustness in the OPE task in Section~\ref{sec:fullmdp}. \section{Our Framework} To evaluate policies, we rely on user input to characterize which covariates may indeed shift across domains. We then incorporate this domain knowledge by specifying an uncertainty set that characterizes potential shifts relative to the nominal training distribution. Our goal is then to provide realistic off-policy estimates of policy performance under the potential shifts. Let $Z \subseteq X$ be the subset of variables that shift across domains. In this case, we assume that the uncertainty set $\mathcal{U}^{\text{sub}}_P$ contains all distributions resulting from shifts in the subcovariates $Z$ which are bounded in some metric $\mathcal{D}$. Formally, \begin{equation} \label{eq:int_set} \text{[Subcovariate DRO set]} \qquad \mathcal{U}^{\text{sub}}_P := \Big\{ \nu(Z) \ll P(Z) \ \text{s.t.}\ \mathcal{D}\infdivx{\nu(Z)}{P(Z)} \leq \delta \Big\} \end{equation} A key question is the choice of divergence metric $\mathcal{D}$ that can allow us to incorporate expert knowledge. While it is easy for a domain expert to provide precise information on which covariates may shift across domains, the magnitude of the shift may not be precisely known. We capture this complexity using the $\mathcal{D}_{\text{CVaR}}$-based uncertainty set. Existing literature \citep{duchi2019distributionally,li2021evaluating,subbaswamy2021evaluating} has leveraged $\mathcal{D}_{\text{CVaR}}$ for specifying the magnitude of shifts and computing worst-case loss in supervised learning. Specifically, \citet{duchi2019distributionally} builds the uncertainty set such that every test distribution is a subpopulation of the training population of proportion at least $\alpha$: \begin{equation} \label{eq:marginal_set_sub_app} \begin{aligned} \mathcal{U}^\text{sub}_P := \{&Q(Z)P(V\setminus Z\vert Z)\ \text{s.t.} \\ &P(Z) = \alpha' Q(Z) + (1-\alpha') Q'(Z), \alpha \leq \alpha' \leq 1\} \end{aligned} \end{equation} where $Q'(Z)$ is any distribution and $\alpha \in (0,1]$ determines the minimum size of the subpopulation shared between train and test. Thus, $\mathcal{U}^\text{sub}_P$ constrains the ratio $\frac{Q(Z)}{P(Z)}\leq \frac{1}{\alpha}$ for all values of $Z$. Equivalently, the set can be expressed using $\mathcal{D}_{\text{CVaR}}\infdivx{Q}{P} = \log{\sup_{Z\in \texttt{dom}(Z)}} \frac{Q(Z)}{P(Z)}\leq \delta$ for $\delta=\log\frac{1}{\alpha}$. Using the subcovariate uncertainty set $\mathcal{U}^{\text{sub}}_P$, the worst-case value $\mathcal{R}(\theta, \mathcal{U}^{\text{sub}}_P)$ in Eq. (\ref{eq:robust}) can be found using techniques from convex duality~\citep{duchi2019distributionally}. Details of the resulting estimator of $\mathcal{R}(\theta, \mathcal{U}^{\text{sub}}_P)$ are presented in Appendix~\ref{app:dro_bound}. For the remaining discussion, we will assume that we can solve the worst-case optimization problem resulting from subcovariate shifts and focus on how to leverage this optimizer for OPE.
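For completeness, the ratio bound follows directly from the mixture representation in Eq.~(\ref{eq:marginal_set_sub_app}): since $Q'(Z)\geq 0$ and $\alpha'\geq\alpha$, $$P(Z) = \alpha' Q(Z) + (1-\alpha')Q'(Z) \geq \alpha' Q(Z) \geq \alpha\, Q(Z),$$ so $Q(Z)/P(Z)\leq 1/\alpha$ pointwise. Standard duality for this ratio-capped set \citep{rockafellar2000optimization, shapiro2014lectures} then reduces the worst case of any conditional-mean function $m(Z)$ to a one-dimensional search, written here in our sign convention for a reward infimum: $$\inf_{\nu \ll P:\ \nu(Z)\leq P(Z)/\alpha}\ \mathbb{E}_{\nu}[m(Z)] \;=\; \sup_{\eta\in\mathbb{R}}\Big\{\eta - \tfrac{1}{\alpha}\,\mathbb{E}_{P}\big[(\eta - m(Z))_{+}\big]\Big\},$$ which is, up to sign conventions, the scalar problem underlying the estimator detailed in Appendix~\ref{app:dro_bound}.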
We now present our main technical results, in particular the framework $\texttt{ROPE}$ for off-policy evaluation (OPE) under realistic shifts, where human input is incorporated as knowledge of the (sub)covariate shifts we anticipate in practice. \begin{figure}[t!] \centering \includegraphics[scale=0.5]{figures/causal_graph_cb_annot.png} \caption{\textbf{Graphical model for shifts in contextual bandits}. Expert input (shaded node $Z$) denotes that $P(Z)$ changes at test time and $P(Y|X,T)$ remains the same. For example, treatments $T$ are prescribed based on age $Z$ and have different outcomes $Y$ by age. In practice, only the age distribution may change across environments, e.g., across hospitals (as opposed to the joint distribution over age, treatment, and outcome). This motivates a less conservative approach to robustness by focusing on marginal shifts in age.} \label{fig:model_cb} \end{figure} \subsection{Robust OPE in Contextual Bandits} \label{sec:offpolicy} As suggested in Section \ref{sec:prelim_cb}, the bandit setup typically assumes that the joint distribution $P$ changes only due to the change in policy. OPE tasks in CBs focus on evaluating the value of the policy under this assumption. Departing from this assumption, we evaluate the utility of a policy in a \textit{new} environment in which the domain expert anticipates shifts in $Z$, which we characterize as unknown but bounded shifts in $Z$. The following assumption is the CB analogue of the covariate shift assumption in supervised learning. \begin{assumption}[Expert Input for CBs] \label{assum:inter_cb} Suppose the human expert specifies the subset of features $Z\subseteq X$, such that across environments, only $P(Z)$ changes while $P(Y|X,T)$ remains the same. \end{assumption} Following this input, the joint distribution in the test environment factorizes as $Q(X,T,Y) = Q(Z)Q(T,X\setminus Z\vert Z)P(Y\vert X, T)$. For simplicity, we will group the variables $T,X\setminus Z$ into $T$; the rest of the discussion remains the same, with an extra factor of $P(X\setminus Z\vert Z)$ suppressed. Thus, we define the uncertainty set containing distributions $Q$ that shift in the marginal distribution of $Z$ as \begin{equation}\label{eq:unc_cb} \begin{aligned} \mathcal{U}^{\text{CB}}_P = \{&Q\ll P\ \text{s.t.}\ Q=\nu(Z)Q(T\vert Z)P(Y\vert Z, T), \\ & \mathcal{D}\infdivx{\nu(Z)}{P(Z)}\leq \delta\} \end{aligned} \end{equation} The robust OPE problem aims to find the \textit{worst-case} average outcome under $\mathcal{U}^{\text{CB}}_P$ instead of the average, \begin{equation} \label{eq:robust_eval} \mathcal{R}(\mathcal{U}^{\text{CB}}_P) = \inf_{Q\in \mathcal{U}^{\text{CB}}_P}\ \mathbb{E}_{(Z, T, Y)\sim Q}[Y], \end{equation} where $Q(T\vert Z)$ is the policy to be evaluated and is considered to be known and fixed. Consider each distribution in the set $\mathcal{U}^{\text{CB}}_P$, $Q(\cdot)=\nu(Z)Q(T\vert Z)P(Y\vert Z, T)$, which differs from the train distribution in the factors for $Z$ and $T\vert Z$.
\begin{align}\label{eq:robust_ope_cb} &\mathcal{R}(\mathcal{U}^\text{CB}_P) = \inf_{Q\in \mathcal{U}^\text{CB}_P}\ \mathbb{E}_{Z\sim Q(Z)}\mathbb{E}_{P(Y|T,Z)Q(T | Z)}[Y] \\ &\stackrel{(\ref{eq:robust_ope_cb}a)}{=} \inf_{Q \in \mathcal{U}^\text{CB}_P} \mathbb{E}_{Z\sim Q(Z)}\mathbb{E}_{P(Y|T,Z)P(T | Z)}\left[\frac{Q(T|Z)}{P(T|Z)}Y\right] \nonumber \\ &\stackrel{(\ref{eq:robust_ope_cb}b)}{=}\inf_{\eta\in \mathbb{R}} \frac{1}{\delta}\mathbb{E}_{Z \sim P(Z)}\left[(\mathbb{E}\left[W{\times} Y\vert Z\right]-\eta)_+\right] + \eta \nonumber \end{align} To solve (\ref{eq:robust_eval}) for this $Q$, we first use importance sampling to account for the change in $T\vert Z$ due to the known policy $Q(T\vert Z)$, step~(\ref{eq:robust_ope_cb}a). As a result, the set $\mathcal{U}^{\text{CB}}_P$ now consists of shifts on $Z$ alone. Thus, the robust OPE problem reduces to solving $\inf_{Q\in \mathcal{U}^{\text{CB}}_P}\ \mathbb{E}_{V\sim Q}[W{\times} Y]$, where $W$ are the importance sampling weights: $$W(T, Z)=Q(T\vert Z)/P(T\vert Z)$$ Using convex duality arguments~\citep{shapiro2014lectures} for our choice of uncertainty set $\mathcal{D}_{\text{CVaR}}$, we obtain step~(\ref{eq:robust_ope_cb}b). This motivates the full optimization procedure summarized in Algorithm~\ref{alg:cb_ope}. We first compute importance sampling weights $W_i$ and create a re-weighted dataset $\{V_i = (Z_i, W_i \times Y_i)\}_i$. The risk defined in Eq.~\eqref{eq:robust_eval} is approximated by an estimate of its upper bound given in Eq.~\eqref{eq:marginal_smooth_app} when $Z$ is continuous-valued (see Appendix \ref{app:dro_bound} for the detailed derivation). \begin{algorithm}[t!] \caption{Robust OPE in CBs} \label{alg:cb_ope} \begin{algorithmic} \STATE {\bfseries Input:} Data $\{Z_{i}, T_i, Y_i\}_{i}$, target policy $Q(T|Z)$, behavior policy $P(T|Z)$, hyperparameters $\delta$, \texttt{L}, \texttt{lr}. \STATE \STATE Compute importance weights $\{W_i=\frac{Q(T_i|Z_i)}{P(T_i|Z_i)}\}_i$. \STATE Create dataset $\{V_i = (Z_{i}, W_i\times Y_i)\}_{i}$. \STATE Estimate $\widehat{R}(\mathcal{U}^{\text{CB}}_P)$ (Eq.~(\ref{eq:robust_eval})) with the worst-case risk estimator in Eq. (\ref{eq:marginal_smooth_app}) for the dataset. \STATE \STATE{{\bf{return}} $\widehat{R}(\mathcal{U}^{\text{CB}}_P)$} \end{algorithmic} \end{algorithm}
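For discrete $Z$, the estimator in Algorithm~\ref{alg:cb_ope} admits a simple plug-in form: estimate the group-conditional means $\mathbb{E}[W Y\vert Z=z]$ by sample averages and solve the worst case over the marginal of $Z$ in closed form by loading capped probability mass onto the lowest-mean groups. The following sketch is our illustration only (the function and policy interfaces are hypothetical names, and the paper's continuous-$Z$ estimator in Eq.~(\ref{eq:marginal_smooth_app}) instead smooths the dual objective):
\begin{verbatim}
import numpy as np

def robust_ope_cb(Z, T, Y, q_policy, p_policy, alpha):
    """Plug-in worst case of E[Y] under marginal shifts of discrete Z.

    q_policy(t, z) and p_policy(t, z) return the target and behavior
    propensities; shifted marginals nu(z) obey nu(z)/P(z) <= 1/alpha.
    """
    W = np.array([q_policy(t, z) / p_policy(t, z) for z, t in zip(Z, T)])
    WY = W * Y
    groups = np.unique(Z)
    p_z = np.array([(Z == z).mean() for z in groups])    # estimated P(Z=z)
    m_z = np.array([WY[Z == z].mean() for z in groups])  # estimated E[WY|Z=z]
    # worst-case marginal: fill capped mass onto the lowest-mean groups
    nu, mass_left = np.zeros_like(p_z), 1.0
    for i in np.argsort(m_z):
        nu[i] = min(p_z[i] / alpha, mass_left)
        mass_left -= nu[i]
        if mass_left <= 0.0:
            break
    return float(nu @ m_z)
\end{verbatim}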
\subsection{Robust OPE in MDPs} \label{sec:fullmdp} Off-policy evaluation is critical in sequential decision-making settings, often encountered in human-centered domains such as healthcare. Such environments are often best modeled as a Markov Decision Process (MDP), as introduced in Section~\ref{sec:prelim}. In this case, we have to consider shifts in the transition dynamics across environments, which can invalidate OPE methods for MDPs since they often assume stationary dynamics. Our goal is to evaluate the \textit{robust} value of a given policy $\pi$ to be deployed in a new environment with unknown transition dynamics. Hence, the uncertainty set for each state-action pair is defined over $P(s'|s, a)$, denoted by $\mathcal{U}(s,a)$.\footnote{We drop the dependence on the train environment's transition probabilities for conciseness and use $P$ to denote target probabilities instead of $Q$ to avoid confusion with the $Q$-value function in RL.} Specifically, we want to estimate the robust value for $\pi$ starting from $s_0$ as $V^\pi(s_0) = \inf_{P\in\mathcal{U}} \mathbb{E}_{\pi,P}\left[\sum_{t=0}^\infty \gamma^t r(s_t,a_t)\vert s_0\right]$. \citet{iyengar2005robust} proves that $V^\pi(\cdot)$ is the solution to the following fixed-point equation (the \textit{robust Bellman equation}) if we assume that the uncertainty sets for each state-action pair are constructed independently (known as \textit{SA-rectangularity}): \begin{equation} \label{eq:robust_bellman} V^\pi(s) = r(s,\pi(s)) + \inf_{P\in\mathcal{U}(s,\pi(s))} \gamma \mathbb{E}_{s'\sim P(s'\vert s,\pi(s))}[V^\pi(s')] \end{equation} \begin{figure}[t!] \centering \includegraphics[width=0.4\textwidth]{figures/causal_graph_mdp.png} \caption{\textbf{Graphical model representing a Markov Decision Process.} Expert input (shaded nodes in red) denotes the variables whose distribution changes across environments. Here, only $P(s_{t+1}^1| s_t,a_t)$ changes. Rewards $r_t$ are not shown for simplicity; each $r_t$ has directed edges from $s_t$ and $a_t$.} \label{fig:sel_mdp} \end{figure} Intuitively, SA-rectangularity implies that the uncertainty sets are constructed across time steps independently. This property yields a tractable method to compute the value function estimates using dynamic programming \citep{iyengar2005robust}. We state the assumption formally in Appendix \ref{app:assum_mdp} for completeness. Eq. (\ref{eq:robust_bellman}) can be solved iteratively by dynamic programming~\citep{sutton2018reinforcement}. Given the value function at any iteration, we additionally have to solve the minimization problem over $\mathcal{U}(s,a)$. Thus, the robust OPE problem in MDPs reduces to solving multiple DRO subproblems with a chosen uncertainty set. \paragraph{Applying $\texttt{ROPE}$ to OPE in MDPs.} Past work has only considered uncertainty sets for the joint distribution \citep{iyengar2005robust,tamar2014scaling,petrik2019beyond,zhou2021tabular}. In contrast, we consider sets based on shifts in parts of the state space of the MDP, i.e., leveraging human input. Figure \ref{fig:sel_mdp} pictorially represents the probabilistic model for the MDP, denoting the shifting state features. The following assumption is the RL analogue of the subcovariate shifts in supervised learning. For any time step $t>0$ in the MDP, consider a partitioning of the state feature vector $s_t$ into two feature sets, $(s^{1}_t,s^{2}_t)$. The factors of the joint distribution in any environment are given by: $$P(s_{t+1},r_{t+1}|s_t,a_t) = P(s^{1}_{t+1}\vert s_t, a_t)P(s^{2}_{t+1}\vert s_t, a_t, s^{1}_{t+1})P(r_{t+1}\vert s_t, a_t, s_{t+1})$$ \begin{assumption}[Expert Input for MDPs] \label{assum:inter_mdp} Suppose the human expert specifies the following: \begin{enumerate}[label=(\alph*)] \item $P(s^{1}_{t+1}\vert s_t, a_t)$ can shift across environments independently of state-action pairs at any other time step, while \item $P(s^{2}_{t+1}\vert s_t, a_t, s^{1}_{t+1})$ and $P(r_{t+1}\vert s_t, a_t, s_{t+1})$ are the same as those in the train environment. \end{enumerate} \end{assumption} \textbf{Remark}. Assumption \ref{assum:inter_mdp} implies that the MDP satisfies SA-rectangularity. For any state-action pair $(s_t,a_t)$ at time step $t$, the shifts that result in the uncertainty set $\mathcal{U}^{\text{MDP}}(s_t,a_t)$ are independent of the state-action pairs at other time steps. This implies that $\mathcal{U}^{\text{MDP}}(s_t,a_t)$ is determined independently of the uncertainty sets at other time steps. Thus, the collection $\mathcal{U}^{\text{MDP}}$ is all possible combinations of the sets $\mathcal{U}^{\text{MDP}}(s_t,a_t)$.
A scenario where SA-rectangularity does not hold is when uncertainty sets are constructed adaptively based on previous state-action pairs, which we do not address. Thus, the uncertainty set needs to be defined only for $P(s^{1}_{t+1}\vert s_t, a_t)$, denoted by $\mathcal{U}^{\text{MDP}}(s,a)$: \begin{align*} \mathcal{U}^{\text{MDP}}(s,a) := \Big\{ &P(\cdot\vert s,a)\ll P_0(\cdot\vert s,a)\quad \text{s.t.}\\ &\forall s', P(s'\vert s,a)=\nu(s'^{1}\vert s, a)P_0(s'^{2}\vert s, a, s'^{1}), \\ &\mathcal{D}\infdivx{\nu(s'^{1}\vert s, a)}{P_0(s'^{1}\vert s, a)}\leq \delta\Big\} \end{align*} where $P_0(\cdot)$ denotes the distribution of the train environment. With $\mathcal{U}^{\text{MDP}}$, the DRO subproblem in (\ref{eq:robust_bellman}) reduces to \begin{align*} &\inf_{P\in\mathcal{U}^{\text{MDP}}(s,\pi(s))} \mathbb{E}_P[V^\pi(s')] \\ &= \inf_{P\in\mathcal{U}^{\text{MDP}}(s,\pi(s))} \mathbb{E}_{P(s'^{1}\vert s, \pi(s))} \left[\mathbb{E}_{P_0(s'^{2}\vert s, \pi(s), s'^{1})}\left[V^\pi(s')\right]\right] \end{align*} We estimate the inner expectation with Monte-Carlo averaging on batch data, followed by solving the DRO problem as done in bandits (\ref{eq:robust_ope_cb}). Here, we use the maximum likelihood estimate of the transition model $P_0$, as we do not have access to the true model. Steps for estimation are outlined in Algorithm \ref{alg:cb_mdp}. \begin{algorithm}[t!] \caption{Robust OPE in MDPs} \label{alg:cb_mdp} \begin{algorithmic} \STATE {\bfseries Input:} Trajectories $\{(s,a,s',r)\}$ sampled using policy $\mu$ and transition model $P_0$, target policy $\pi$, discount factor $\gamma$, robustness level $\delta$. \STATE \STATE Learn transition models with sample averages across observed trajectories $\widehat{P_0}(s'^1\vert s,a)= \frac{\text{Count}\{(s,a,(s'^1,*),*)\}}{\text{Count}\{(s,a,(*,*),*)\}}$ and $\widehat{P_0}(s'^2\vert s,a,s'^1)= \frac{\text{Count}\{(s,a,(s'^1,s'^2),*)\}}{\text{Count}\{(s,a,(s'^1,*),*)\}}$, where the wildcard $(s,a,*,*)$ denotes transitions from trajectories that match the state-action pair $s,a$. \STATE \STATE Learn the reward model with sample averages across observed trajectories $\widehat{r}(s,a) = \frac{\sum_{(\_,\_,\_,r)\in \{(s,a,*,*)\}} r}{\text{Count}\{(s,a,*,*)\}}$. \STATE \STATE Initialize $V^\pi(s)=0$, for all $s\in\mathcal{S}$. \REPEAT \FOR{$s\in\mathcal{S}$} \STATE Update $V^\pi(s)$ using Eq. (\ref{eq:robust_mdp_app}) in Appendix \ref{app:algmdp} with $\widehat{P_0}$. \ENDFOR \UNTIL{$V^\pi$ converges} \STATE \STATE{{\bf{return}} $V^\pi$} \end{algorithmic} \end{algorithm}
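To make the resulting dynamic program concrete, the following sketch performs robust policy evaluation for a factored next state $s'=(s'^{1},s'^{2})$, applying the worst case only to the $s'^{1}$ factor. It is our illustration under a likelihood-ratio (CVaR-style) ball at level $\alpha$, with all names hypothetical; the exact update used in our experiments is Eq.~(\ref{eq:robust_mdp_app}) in the appendix:
\begin{verbatim}
import numpy as np

def cvar_inf(values, probs, alpha):
    """inf of sum_i q_i * values_i over q <= probs / alpha, sum(q) = 1."""
    q, mass_left = np.zeros_like(probs), 1.0
    for i in np.argsort(values):   # fill capped mass onto lowest values
        q[i] = min(probs[i] / alpha, mass_left)
        mass_left -= q[i]
        if mass_left <= 0.0:
            break
    return float(q @ values)

def robust_ope_mdp(P1, P2, r, pi, gamma, alpha, tol=1e-2):
    """Robust dynamic programming with shifts confined to P(s1' | s, a).

    P1: (S, A, S1) array, estimated P0(s1' | s, a); allowed to shift.
    P2: (S, A, S1, S2) array, estimated P0(s2' | s, a, s1'); held fixed.
    r:  (S, A) estimated rewards; pi: (S,) deterministic target policy.
    Assumes S == S1 * S2 with flat next-state index s' = s1' * S2 + s2'.
    """
    S, A, S1 = P1.shape
    S2 = P2.shape[-1]
    V = np.zeros(S)
    while True:
        V_grid = V.reshape(S1, S2)
        V_new = np.empty(S)
        for s in range(S):
            a = pi[s]
            # plug-in inner expectation over s2' for each candidate s1'
            V_bar = (P2[s, a] * V_grid).sum(axis=1)
            # robust Bellman backup with the ratio-ball inner infimum
            V_new[s] = r[s, a] + gamma * cvar_inf(V_bar, P1[s, a], alpha)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
\end{verbatim}
Replacing \texttt{cvar\_inf} with a KL-ball inner solver would recover the setting analyzed in Theorem~\ref{thm:error_ope} below.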
Given enough samples of the next state for each state-action pair, the robust value can still be estimated accurately. Let $V^\pi_{\mathcal{U}_{P_0}}$ be the robust value corresponding to the true transition model $P_0$, which we want to show is close to the robust value corresponding to the \textit{estimated} transition model $\smash{\widehat{P}_0}$, denoted by $\smash{V^\pi_{\mathcal{U}_{\widehat{P}_0}}}$. To show this, we restrict to uncertainty sets in Eq. (\ref{eq:int_set}) defined by the KL divergence $\mathcal{D}\infdivx{Q}{P} = \sum_z Q(z)\log(\frac{Q(z)}{P(z)})$ being at most $\delta$. \begin{theorem}[Robust OPE Estimation error] \label{thm:error_ope} Given at least $n$ samples from $P_0(\cdot\vert s,a)$ for all $s,a$, assuming that the rewards are bounded $r\in[0,r_\text{max}]$, and that $\mathcal{U}^{\text{MDP}}$ are defined by KL-divergence, then with probability at least $1-\alpha$, {\small{ \begin{align*} \|V^\pi_{\mathcal{U}_{P_0}} - V^\pi_{\mathcal{U}_{\widehat{P}_0}}\|_\infty \leq O\left(\frac{\gamma r_\text{max}|\mathcal{S}|}{(1-\gamma)^2} \sqrt{\frac{1}{n}\log\left(\frac{4|\mathcal{S}\times\mathcal{A}\times\mathcal{S}|}{\alpha}\right)}\right) \end{align*} }} \end{theorem} Thus, the error in evaluating the robust value with the estimated transition model as compared to the true model converges at the rate $\tilde{O}\left(\frac{\gamma|\mathcal{S}|}{\sqrt{n}(1-\gamma)^2}\right)$, ignoring logarithmic factors. The dependence on $1/\sqrt{n}$ matches that of the non-robust case but is sub-optimal in $|\mathcal{S}|$ \citep{li2020breaking}. The proof is included in Appendix \ref{app:error_ope}, in which we show an analogue of the \textit{simulation lemma}~\citep{kearns2002near} for robust MDPs. In terms of time complexity, each iteration of dynamic programming with DRO can be computed in $O(|\mathcal{A}| |\mathcal{S}|^2 \log|\mathcal{S}|)$, which is only a $\log|\mathcal{S}|$ factor more than the non-robust solution. \begin{figure*}[htbp!] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.8\textwidth]{figures/toylogpol_testshift_a0.8_r_4_error_test_line_conf.png} \caption{} \label{fig:synth_cb} \end{subfigure}% \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.8\textwidth]{figures/warfarin_data_nr_5_a_0.8_s_3_error_test_line_conf.png} \caption{} \label{fig:wf_cb} \end{subfigure} \caption{(a) \textbf{CB, Synthetic}. MSE in value estimates for test sets (y-axis) with varying levels of shift in $Z_1$ (x-axis). $\texttt{ROPE}$ performs well for moderate shifts. (b) \textbf{CB, Warfarin}. MSE in value estimates for test sets with shift in the race distribution. $\texttt{ROPE}$ achieves the right level of conservatism to match the value at test. Curves for $\texttt{Standard}$ and $\texttt{JointDRO}$ are \textbf{not visible} as they have high error (around 0.1 and 0.7, respectively) and lie outside the plotted y-axis range. The robustness level in Eq. (\ref{eq:unc_cb}) is set to $\delta=0.8$. Error bars are computed over $5$ random initializations.} \label{fig:results} \end{figure*} \section{Experimental Evaluation}\label{sec:expts} In this section, we study the robustness-utility trade-offs achieved by the proposed shifts and compare these with existing approaches\footnote{Code for replicating the experiments is at \url{https://github.com/AI4LIFE-GROUP/rise-against-distribution-shift}}. More specifically, we study whether optimizing for marginal shifts with $\texttt{ROPE}$ improves model performance under those shifts in comparison with optimizing for broader classes of shifts using existing methods. We start with subcovariate shifts in bandit settings, as described in Sec. \ref{sec:exp_bandit}. We specifically consider the partial feedback problem in the contextual bandit setup, since we only get to observe the feedback (outcomes) for the actions taken in the collected data; thus, we tackle robustness under partial feedback. We show that our method provides more faithful estimates of the efficacy of a drug dosing policy under patient population shifts.
In Sec. \ref{sec:exp_fullmdp}, we consider sequential decision problems under shifts in environment dynamics for MDPs. Within a simulated Sepsis environment, we show that the proposed sets provide significantly less conservative value estimates for a treatment policy. \subsection{Robust OPE in CBs} \label{sec:exp_bandit} We present results on a synthetic and a real-world dataset. \textbf{Synthetic}. We generate data with two features $Z{:=}(Z_1, Z_2)$, binary treatment $T$, and continuous outcome $Y$. Additional details on how the data are simulated are deferred to Appendix \ref{app:data_cb}. Here we assume that only the marginal distribution of $Z_1$ changes at evaluation. We simulate $n{=}2000$ samples in the train environment following a logistic policy and use them to estimate the robust value of a different policy in shifted environments. Baselines include: $\texttt{Standard}$, which returns the average value assuming no shift; Inverse Probability Weighting (\texttt{IPW}), which only corrects for the shift in policy using importance sampling; and $\texttt{JointDRO}$, which accounts for shifts in all variables $V$, as done in past work \citep{si2020distributional}. In Figure \ref{fig:synth_cb}, we plot the MSE between the estimated and the true policy value, evaluated using $20000$ samples from the test environment. We observe that when the test environment is close to the train one, not accounting for the shift ($\texttt{Standard}$) performs well. As the shift increases, our approach ($\texttt{ROPE}$) does better. With large shifts, larger uncertainty sets are required, and $\texttt{JointDRO}$ does better than the other methods. In summary, $\texttt{ROPE}$ performs well when the shift is significant but not too large. This highlights the importance of choosing the desired robustness level appropriately, which is a challenging open problem for DRO methods. \begin{table*} \centering \begin{subtable}[t]{0.6\linewidth} \centering \begin{tabular}{p{0.7cm}c|cccc} \toprule & $\texttt{Standard}$ & \multicolumn{2}{c}{$\texttt{JointDRO}$} & \multicolumn{2}{c}{$\texttt{ROPE}$} \\ $\delta$ & - & 0.4 & 0.8 & 0.4 & 0.8 \\ \midrule mean & -1136.43 & -1448.16 & -1221.78 & \textbf{-1416.39} & \textbf{-1190.57} \\ std. & 6.22 & 6.32 & 5.28 & 6.70 & 6.33 \\ median & -1136.64 & -1449.91 & -1222.76 & -1417.12 & -1190.92 \\ \bottomrule \end{tabular} \caption{Cliffwalking domain} \label{tab:cliff_dp} \end{subtable}\hfill \begin{subtable}[t]{0.4\linewidth} \centering \begin{tabular}{p{0.7cm}c|cc} \toprule & $\texttt{Standard}$ & $\texttt{JointDRO}$ & $\texttt{ROPE}$\\ $\delta$ & - & 0.8 & 0.8 \\ \midrule mean & -0.037 & -0.939 & \textbf{-0.662} \\ std. & 0.008 & 0.006 & 0.148 \\ median & -0.039 & -0.941 & -0.705 \\ \bottomrule \end{tabular} \caption{Sepsis domain} \label{tab:sepsis_dp} \end{subtable} \caption{\textbf{Robust OPE in MDPs.} Estimated value ($10$ random runs) with standard (i.e., non-robust) and robust dynamic programming. \texttt{ROPE}~provides a less conservative value than $\texttt{JointDRO}$, i.e., a smaller decrease in value from $\texttt{Standard}$.} \label{tab:mdp_tables} \end{table*} \textbf{Warfarin Dosing Policy}. Warfarin is an oral anticoagulant drug. The optimal dosage to assign a patient when initiating Warfarin therapy has been the subject of multiple clinical trials \citep{heneghan2010optimal}.
Using the public PharmGKB dataset~\citep{international2009estimation} of $5528$ patients, \citet{bastani2015online} learn contextual bandit policies that adapt doses based on patient covariates like demographics and clinical information.\footnote{A preprocessed version of the Warfarin dataset was downloaded from \url{https://github.com/khashayarkhv/contextual-bandits/blob/master/datasets/warfarin.csv}.} The reward, either $0$ or $1$, indicates whether the policy makes the correct dosing decision. However, the value of the policies is suspect when applied to patient populations different from the development cohort. Thus, we estimate the robust value of a policy under shifts in the race distribution. Note that the ground-truth optimal dose for each patient is available in the data, which enables evaluating different policies. We learn a dosing policy with linear regression on held-out data and estimate its value on test data with shifted race distributions. Specifically, we subsample (without replacement) fewer patients with a recorded race into our analysis set. The policy has lower performance on patients with \texttt{Unknown} race; thus, the value of the policy decreases with increasing shift as the relative proportion of this group increases. To estimate this value correctly, the robust method must choose the right level of conservatism. Figure \ref{fig:wf_cb} shows that the MSE between the estimated and true average reward is lower for $\texttt{ROPE}$ than for $\texttt{JointDRO}$ and \texttt{IPW}, as it constructs the sets for marginal shifts alone. \begin{figure}[htbp!] \centering \includegraphics[width=\linewidth]{figures/cliff/cliff_v_value_algdp_rTrue_tmarginal_ucvar_a0.8_s0.1_e10000_runs10/value_run0_g1.png} \caption{\textbf{Illustration of the Cliffwalking domain.} The plot shows (part of) the value function estimated using the robust Bellman equation with \texttt{ROPE}. The start position is (5,0), the goal is (5,5), and the cliff corresponds to the row from (5,0) to (5,5). The agent slips downward by 1 cell with probability 0.1 when taking actions in any column except the first and last. Results in Table \ref{tab:cliff_dp} report the value at the start position (5,0), which is -1182.45 here, averaged over 10 random initializations of the domain.} \label{fig:cliff} \end{figure} \subsection{Robust OPE in MDPs} \label{sec:exp_fullmdp} We present results on two simulated RL domains. \textbf{Cliffwalking Domain}. We consider a $6\times6$ gridworld in which an agent navigates from a start to a goal position while avoiding a cliff \citep[][Ex. 6.6]{sutton2018reinforcement}. Please refer to Appendix \ref{app:data_cliff} for more details. An illustration of the gridworld is provided in Figure \ref{fig:cliff}. With a constant slip probability, the agent slips towards the cliff instead of taking the prescribed action. The slip probability varies across environments, changing the transition dynamics and necessitating robustness in policy evaluation. We evaluate the value estimate for an agent following a uniform random policy using dynamic programming with the standard Bellman equation ($\texttt{Standard}$) or the robust one in (\ref{eq:robust_bellman}) ($\texttt{JointDRO}$, $\texttt{ROPE}$). To simulate subcovariate shifts, we duplicate the state into features $s^1,s^2$ such that $s^1$ follows the agent's actions while $s^2$ is random noise. Since the agent's actions affect only $s^1$, $\texttt{ROPE}$ correctly constructs sets on $P(s^1\vert s,a)$.
In contrast, $\texttt{JointDRO}$ ignores this structure and constructs sets on both features, $P(s^1,s^2\vert s,a)$. This is the same setting as considered in past work \citep{zhou2021tabular}, which used the KL divergence to define uncertainty sets. Table \ref{tab:cliff_dp} reports the value estimates for the start position. We observe that for a high level of desired robustness, $\delta=0.4$, $\texttt{JointDRO}$ decreases the value by 27.4\% (from $-1136$ to $-1448$) while $\texttt{ROPE}$ only decreases it by 24.6\% (to $-1416$). This validates that both DRO methods behave as expected in the MDP. \textbf{Sepsis Treatment Evaluation.} The Sepsis simulator~\citep{oberst19counterfactual} is a domain with more involved transition dynamics and has been used to test treatment policies \citep{oberst19counterfactual, namkoong2020off, killian2020counterfactually,futoma2020popcorn}. It has a total of $1440$ states, which include $4$ vital signs (blood pressure, glucose levels, heart rate, and oxygen concentration) and diabetic status. Actions correspond to combinations of $3$ treatments (antibiotics, mechanical ventilation, and vasopressors). Terminal states, discharge from the ICU or death, have rewards $+1$ or $-1$, respectively. Glucose levels fluctuate more for diabetics than non-diabetics. Our goal is to evaluate a policy learned using policy iteration (the RL policy) on a dataset with $20\%$ diabetics. We consider a setting where the percentage of diabetics and the fluctuation in their glucose levels vary in the test environment. To interrogate the RL policy for possible deployment, we find its robust value accounting for these shifts. $\texttt{JointDRO}$ constructs sets based on the full 1440 states, while $\texttt{ROPE}$ considers uncertainty only in glucose-level dynamics for diabetics and non-diabetics. Thus, $\texttt{ROPE}$ represents the actual shifts more faithfully compared to $\texttt{JointDRO}$. We check how conservative the OPE estimates under assumed joint shifts are relative to those leveraging domain knowledge to restrict to subcovariate shifts. Table \ref{tab:sepsis_dp} reports the value estimates for the RL policy obtained with standard and robust dynamic programming. We observe that $\texttt{JointDRO}$ reports a $21\times$ decrease in value relative to the train environment, while $\texttt{ROPE}$ reports only a $14\times$ decrease. Thus, the experiments demonstrate the benefit of curating the uncertainty sets using domain knowledge to balance utility and robustness. \paragraph{Optimization and system details} For the CB experiments, we perform gradient descent with the Adagrad \citep{duchi11adagrad} implementation in PyTorch \citep{paszke19pytorch} to solve Eq.~(\ref{eq:marginal_smooth_app}). For the synthetic CB dataset, we use a learning rate of $0.5$ for $\texttt{ROPE}$ and $0.1$ for $\texttt{JointDRO}$, a Lipschitz constant of $1$, and $100$ steps. For the Warfarin dataset, we use a learning rate of $0.1$ for both $\texttt{ROPE}$ and $\texttt{JointDRO}$, a Lipschitz constant of $0.01$, and $200$ steps. While solving the dynamic program iteratively in the MDP experiments, the convergence criterion for $V^\pi$ is a maximum change in $V^\pi(\cdot)$ of less than $10^{-2}$ after one sweep across all states. The minimization over $\eta$ while solving Eq. (\ref{eq:robust_marginal}) is performed using Brent's method, as implemented in the package \texttt{scipy} \citep{scipy}.
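For illustration, the scalar search takes the following shape in our sign convention for a reward infimum (a minimal sketch, not the exact objective of Eq.~(\ref{eq:robust_marginal}); \texttt{worst\_case\_mean\_dual} and its inputs are hypothetical names):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def worst_case_mean_dual(x, alpha):
    """sup_eta { eta - E[(eta - x)_+] / alpha } for samples x.

    By CVaR duality, this equals the smallest reweighted mean of x
    over likelihood ratios capped at 1/alpha (the worst-case value).
    """
    dual = lambda eta: eta - np.mean(np.maximum(eta - x, 0.0)) / alpha
    res = minimize_scalar(lambda eta: -dual(eta), method="brent")
    return dual(res.x)

x = np.random.default_rng(0).normal(size=10_000)
print(worst_case_mean_dual(x, alpha=0.8))  # below the sample mean of ~0.0
\end{verbatim}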
For the MDP experiments, the only hyperparameter is $\delta$, which is set to $0.8$ throughout (with $0.4$ also reported in Table \ref{tab:cliff_dp}). Experiments were run on a compute cluster using a single node with a 2.50 GHz Intel processor, 2 GPUs with 6 GB memory each, and 256 GB system memory. None of the datasets include personally identifiable information. \section{Conclusions and Future Work} \label{sec:conc} In this work, we focus on leveraging human expertise to provide value estimates of ML policies under realistic distribution shifts. We propose to represent domain knowledge via uncertainty sets over subcovariate shifts and argue that this enables representing more realistic shifts and leads to less conservative solutions. We then provide novel estimators for robust OPE in contextual bandits and MDPs, leveraging the distributionally robust optimization framework. Future directions include expanding human input to address shifts in conditional distributions for OPE (see, e.g., \citealp{subbaswamy2021evaluating} for supervised learning). Finally, applying the robust OPE method to continuous state-action spaces with function approximators (e.g., \citealp{tamar2014scaling}) is an interesting direction for future work. We hope that the perspective of leveraging human input to define uncertainty sets for robustness in off-policy evaluation opens up more possibilities to tackle the over-conservatism of robust learning as well as the challenging problem of model selection in off-policy evaluation. \paragraph{Broader implications} Although it is challenging to foresee the impact of using robust methods in real-world applications, one negative outcome could arise from over-reliance on results without adequate scrutiny of assumptions such as the causal knowledge used and the uncertainty about future deployment settings. Adversaries can also manipulate deployment environments such that the estimated robust values of policies have high errors. The design of uncertainty sets that account for such adversarial changes should be informed and validated by domain experts. \begin{acks} The authors would like to thank the anonymous reviewers for their feedback and all the funding agencies listed below for supporting this work. This work is supported in part by the Center for Research on Computation and Society (CRCS) at Harvard University, NSF awards \#IIS-2008461, \#IIS-2040989, and \#1922658, and research awards from Amazon, Harvard Data Science Institute, Bayer, and Google. The authors would also like to thank Rediet Abebe, Sara Kingsley, Elita Lobo, Aviva Prins, and Milind Tambe for early feedback on this work. Lastly, HL would like to thank Sujatha and Mohan Lakkaraju for their continued support and encouragement. The views expressed here are those of the authors and do not reflect the official policy or position of the funding agencies. \end{acks} \bibliographystyle{ACM-Reference-Format} \balance
% Importantly, there are gaps in the literature in applying these ideas beyond supervised learning. In RL, \citet{si2020distributional,hatt2021generalizing} perform OPE restricted to contextual bandits and \citet{zhou2021tabular} generalizes the DRO approach to MDPs. However, all of these works only consider shifts in the joint distribution and do not investigate the use of domain knowledge to restrict the uncertainty sets. As RL starts to be deployed in critical applications such as for mechanical ventilation in ICUs \citep{peine2021development}, faithful evaluation of RL policies under plausible data shifts is an important need. Thus, extending $\texttt{ROPE}$ to RL and demonstrating its utility in realistic settings is a useful contribution. \section{Preliminaries} \label{sec:prelim} We first introduce the robust evaluation framework based on distributionally robust optimization, then give necessary background on decision-making problems modelled as % Contextual Bandits (CB) and Markov Decision Processes (MDPs). \textbf{Notation} We denote the random variable for all observable properties of a domain by $V$. The outcome variable is denoted by $Y$. This can represent labels in supervised learning, or rewards and states in RL. Features are denoted by $X$. For a subset of features $Z\subseteq X$, the remaining features are denoted by $X\setminus Z$. Train and test distributions over $V$ are denoted by $P$ and $Q$ respectively (using the same notation for their densities). An uncertainty set w.r.t. a distribution $P$ is denoted by $\mathcal{U}_P$. % \subsection{Robust Evaluation using DRO} Say, we want to evaluate a decision-making model, parameterized by $\theta$, on a test distribution $Q$. For a given reward function $r(\theta, V)$, this means, we want to find the value of the expected reward $\mathbb{E}_{V\sim Q}[r(\theta, V)]$. However, we do not know the test distribution $Q$ a priori. Robust evaluation methods (e.g. \citep{subbaswamy2021evaluating,li2021evaluating}) address this challenge by performing a worst-case evaluation of the model. Instead of finding the expected reward under $Q$, the \textit{robust} value of a model is defined as its worst-case reward across a set of distributions $\mathcal{U}_P$. \begin{equation} % \mathcal{R}(\theta, \mathcal{U}_P) {:=} \inf_{\tilde{P}\in\ \mathcal{U}_P}\ \mathbb{E}_{V\sim \tilde{P}}[r(\theta, V)] \label{eq:robust} \end{equation} where $\mathcal{U}_P$ is referred to as the uncertainty set. The key property of the solution to (\ref{eq:robust}) is that if the $\mathcal{U}_P$ is suitably chosen such that it contains the test distribution $Q$, then $\mathcal{R}(\theta, \mathcal{U}_P)\leq \mathbb{E}_{V\sim Q}[r(\theta, V)]$. Thus, we get the guarantee that the reward of the model on $Q$ is at least as good as what our robust evaluation finds. Naturally, a major part of this framework is the choice of uncertainty set $\mathcal{U}_P$. If we precisely choose $\mathcal{U}_P=\{Q\}$, then we recover the test set reward. But as we keep on increasing the uncertainty set, in the hope of including $Q$, we gradually get worse values due to the worst-case nature of the optimization (\ref{eq:robust}). Next we describe a broad class of uncertainty sets popular in past work. \textbf{Divergence-based Uncertainty Sets}. 
The uncertainty set $\mathcal{U}_P$ is all distributions lying in a $\delta$-ball around the train distribution, defined using a divergence metric $\mathcal{D}$: \begin{equation} \label{eq:distance_set} \text{[Joint DRO set]} \qquad \mathcal{U}^\text{div}_{P} := \{Q\ll P\quad \text{s.t.}\quad \mathcal{D}\infdivx{Q}{P}\leq \delta\} \end{equation} where $Q{\ll} P$ denotes absolute continuity i.e. $P(V)=0$ implies $Q(V)=0$. An example is the set with: $$\mathcal{D}_{\text{CVaR}}\infdivx{Q}{P} {=} \log{\sup_{V\in \texttt{dom}(V)}} \frac{Q(V)}{P(V)},$$ where CVaR stands for Conditional Value at Risk \citep{rockafellar2000optimization}. \textbf{Types of shifts} The above definition of $\mathcal{D}_{\text{CVaR}}$ assumes that $P$ and $Q$ may differ in distribution over all variables in $V$, namely shifts in the \textit{joint distribution}. There are many ways to limit the distributions considered by $\mathcal{U}^\text{div}_{P}$. \textit{Covariate shift} (e.g. \citet{duchi2019distributionally}) assumes that only the covariate distribution $P(X)$ may change at test time to $Q(X)$, while the conditional distribution $P(Y\vert X)$ remains the same. A slight generalization is \textit{subcovariate shift}, considered by us, that posits that $P(Z)$ may change for a subset of covariates $Z$ keeping the conditional distribution constant. This reduces the size of the set given by $\mathcal{D}_{\text{CVaR}}$ since many of the $Q$ are removed from the divergence set (\ref{eq:distance_set}) due to the constraint of matching the corresponding distributions in $P$. We leverage human input to specify (sub)covariate shifts and propose OPE methods in reinforcement learning. Next, we provide preliminaries for OPE in two widely-studied settings in RL -- contextual bandits and MDPs. % \subsection{Contextual Bandits} \label{sec:prelim_cb} Here we have access to $n$ tuples $\{(Z_i, T_i, Y_i)\}_i$ collected with a known stochastic policy that applies treatment $T_i$ in context $Z_i$ and observes the corresponding outcome $Y_i$. Further, it is assumed that the tuples are i.i.d. % and the joint distribution $P$ factorizes as {\small{$\prod_i P(Z_i)P(T_i\vert Z_i)P(Y_i|Z_i,T_i)$}}. \textbf{OPE in CBs} Given data sampled from $P$, the goal in OPE is to evaluate the expected outcome $\mathbb{E}_Q[Y]$ under a distribution $Q$ induced by following a new policy in the \textit{same} environment \citep{dudik2014doubly, thomas2016data}. Importantly, the difference between the two distributions is assumed to be only due to different policies i.e. $P(T\vert Z){\neq} Q(T\vert Z)$ and the rest of the environment-related factors are the same. That is, only shifts on $T$ are considered as the domain expert intends to implement a different policy, $Q(T\vert Z)$ instead of $P(T\vert Z)$. \subsection{Markov Decision Process} Markov Decision Processes are characterized by the tuple $(\mathcal{S}, \mathcal{A}, \mathcal{P}, r, \gamma)$, where $\mathcal{S}$ is the state-space, $\mathcal{A}$ is the action-space, $\mathcal{P} \triangleq P(\cdot\vert s,a)$ characterizes the dynamics, $r: \mathcal{S} \times \mathcal{A} {\,\rightarrow\,} \mathbb{R}$ is the reward model and $\gamma\in [0,1)$ is the discount factor. We restrict ourselves to finite state, finite action infinite-horizon MDPs in this work. % The the reward model and transition model are assumed to be fixed. 
% Value of a policy $\pi: \mathcal{S} {\,\rightarrow\,} \Delta(\mathcal{A})$ is defined as the expected cumulative reward received starting from $s_0$, $$V^\pi(s_0) = \mathbb{E}_{\pi,P}\left[\sum_{t=0}^\infty \gamma^t r(s_t,a_t)\vert s_0\right].$$ \textbf{OPE in MDPs} In off-policy evaluation, the goal is to estimate the value of a policy $\pi$ given that we have collected a batch of trajectories obtained using an alternative policy $\mu$. We motivate the need for robustness in the OPE task in Section~\ref{sec:fullmdp}. \section{Our Framework} To evaluate policies, we rely on user input to characterize which covariates indeed shift across domains. We then incorporate this domain knowledge in the form of specifying an uncertainty set which characterizes potential shifts relative to the nominal training distribution. Our goal is then to provide realistic off-policy estimates of policy performance under the potential shifts. Let $Z \subseteq X$ be the subset of variables that shift across domains. % In this case, we assume that the uncertainty set $\mathcal{U}^{\text{sub}}_P$ % contains all distributions resulting from shifts in subcovariates $Z$ which are bounded in some metric $\mathcal{D}$. Formally, \begin{equation} \label{eq:int_set} \begin{aligned} \text{[Subcovariate DRO set]} \\ % \mathcal{U}^{\text{sub}}_P := \Big\{& \nu(Z) \ll P(Z) \ \text{s.t.}\ \ % \mathcal{D}\infdivx{\nu(Z)}{P(Z)} \leq \delta % \Big\} \end{aligned} \end{equation} A key question is the choice of divergence metric $\mathcal{D}$ that can allow us to incorporate expert knowledge. While it is easy for a domain expert to provide precise information on which covariates may shift across domains, the magnitude of shift may not be precisely clear. We capture this complexity using the $\mathcal{D}_{\text{CVaR}}$-based uncertainty set. Existing literature \citep{duchi2019distributionally,li2021evaluating,subbaswamy2021evaluating} has leveraged $\mathcal{D}_{\text{CVaR}}$ for specifying magnitude of shifts and compute worse-case loss in case of supervised learning. Specifically, \citet{duchi2019distributionally} builds the uncertainty set in such a way that the training population contains at least $\delta$ proportion of the test population. \begin{equation} \label{eq:marginal_set_sub_app} \begin{aligned} \mathcal{U}^\text{sub}_P := \{&Q(Z)P(V\setminus Z\vert Z)\ \text{s.t.} \\ &P(Z) = \alpha' Q(Z) + (1-\alpha') Q'(Z), \alpha \leq \alpha' \leq 1\} \end{aligned} \end{equation} where $Q'(Z)$ is any distribution and $\alpha \in (0,1]$ determines the minimum size of the subpopulation shared between train and test. Thus, $\mathcal{U}^\text{sub}_P$ constrains the ratio $\frac{Q(Z)}{P(Z)}\leq \frac{1}{\alpha}$ for all values of $Z$. Equivalently, the set can be expressed using $\mathcal{D}_{\text{CVaR}}\infdivx{Q}{P} = \log{\sup_{Z\in \texttt{dom}(Z)}} \frac{Q(Z)}{P(Z)}\leq \delta$ for $\delta=\frac{1}{\alpha}$. Using the subcovariate uncertainty set $\mathcal{U}^{\text{sub}}_P$, the worst-case value $\mathcal{R}(\theta, \mathcal{U}^{\text{sub}}_P)$ in Eq. (\ref{eq:robust}) can be found using techniques from convex duality~\citet{duchi2019distributionally}. Details of the resulting estimator of $\mathcal{R}(\theta, \mathcal{U}^{\text{sub}}_P)$ are presented in Appendix~\ref{app:dro_bound}. For the remaining discussion, we will assume that we can solve the worst-case optimization problem resulting from subcovariate shifts and focus on how to leverage this optimizer for OPE. 
% We now present our main technical results, particularly the framework $\texttt{ROPE}$ for off-policy evaluation (OPE) under realistic shifts where human input is incorporated in terms of knowledge of (sub)covariate shifts we anticipate in practice. \begin{figure}[t!] \centering \includegraphics[scale=0.5]{figures/causal_graph_cb_annot.png} \caption{\textbf{Graphical model for shifts in contextual bandits}. Expert input (shaded node $Z$) denotes that $P(Z)$ changes at test time and $P(Y|X,T)$ remains the same. For example, treatments $T$ are prescribed based on age $Z$ and have different outcomes $Y$ by age. In practice, only the age distribution may change across environments e.g. across hospitals (as opposed to the joint distribution over age, treatment, and outcome). This motivates a less conservative approach to robustness by focusing on marginal shifts in age.} \label{fig:model_cb} % \end{figure} \subsection{Robust OPE in Contextual Bandits} \label{sec:offpolicy} As suggested in Section \ref{sec:prelim_cb}, the bandit setup typically assumes that the joint distribution $P$ changes only due to the change in policy. OPE tasks in CBs focus on evaluating the value of the policy under this assumption. Departing from this assumption, we evaluate the utility of a policy under a \textit{new} environment in which the domain expert anticipates shifts in $Z$. We characterize these by unknown and bounded shifts in $Z$. The following assumption is the CB analogue of the covariate shift assumption in supervised learning. \begin{assumption}[Expert Input for CBs] \label{assum:inter_cb} Suppose the human expert specifies the subset of features $Z\subseteq X$, such that across environments, only $P(Z)$ changes while $P(Y|X,T)$ remains the same. \end{assumption} Following this input, the joint distribution at test environment factorizes as $Q(X,T,Y) = Q(Z)Q(T,X\setminus Z\vert Z)P(Y\vert X, T)$. For simplicity, we will group the variables $T,X\setminus Z$ into $T$. The rest of the discussion remains the same and an extra factor of $P(X\setminus Z\vert Z)$ will be suppressed. Thus, we define the uncertainty set containing distributions $Q$ that shift in the marginal distribution of $Z$ as, \begin{equation}\label{eq:unc_cb} \begin{aligned} \mathcal{U}^{\text{CB}}_P = \{&Q\ll P\ \text{s.t.}\ Q=\nu(Z)Q(T\vert Z)P(Y\vert Z, T), \\ & \mathcal{D}\infdivx{\nu(Z)}{P(Z)}\leq \delta\} % \end{aligned} \end{equation} The robust OPE problem aims to find the \textit{worst-case} average outcome under $\mathcal{U}^{\text{CB}}_P$ instead of the average, \begin{equation} \label{eq:robust_eval} % \mathcal{R}(\mathcal{U}^{\text{CB}}_P) = \inf_{Q\in\ \mathcal{U}^{\text{CB}}_P}\ \mathbb{E}_{(Z, T, Y)\sim Q}[Y], % \end{equation} where $Q(T\vert Z)$ is the policy to be evaluated and is considered to be known and fixed. Consider each distribution in the set $\mathcal{U}^{\text{CB}}_P$, $Q(\cdot){=}\nu(Z)Q(T\vert Z)P(Y\vert Z, T)$, which differs from the train distribution in the factors for $Z$ and $T\vert Z$. 
\begin{align}\label{eq:robust_ope_cb} &\mathcal{R}(\mathcal{U}^\text{CB}_P) = \inf_{Q\in \mathcal{U}^\text{CB}_P}\ \mathbb{E}_{Z\sim Q(Z)}\mathbb{E}_{P(Y|T,Z)Q(T | Z)}[Y] \\ &\stackrel{(\ref{eq:robust_ope_cb}a)}{=} \inf_{Q \in \mathcal{U}^\text{CB}_P} \mathbb{E}_{Z\sim Q(Z)}\mathbb{E}_{P(Y|T,Z)P(T | Z)}\left[\frac{Q(T|Z)}{P(T|Z)}Y\right] \\ &\stackrel{(\ref{eq:robust_ope_cb}b)}{=}\inf_{\eta\in \mathbf{R}} \frac{1}{\delta}\mathbb{E}_{Z \sim P(Z)}\left[(\mathbb{E}\left[W{\times} Y\vert Z\right]-\eta)_+\right] + \eta \end{align} To solve (\ref{eq:robust_eval}) for this $Q$, we first use importance sampling to account for the change in $T\vert Z$ due to the known policy $Q(T\vert Z)$, step~(\ref{eq:robust_ope_cb}a). As a result, the set $\mathcal{U}^{\text{CB}}_P$ now consists of shifts on $Z$ alone. Thus, the robust OPE problem reduces to solving $\inf_{Q\in\ \mathcal{U}^{\text{CB}}_P}\ \mathbb{E}_{V\sim Q}[W{\times} Y]$, where $W$ are the importance sampling weights: $$W(T, Z){=}Q(T\vert Z)/P(T\vert Z).$$ Using convex duality arguments~\citep{shapiro2014lectures} for our choice of uncertainty set $\mathcal{D}_{\text{CVaR}}$, we obtain (\ref{eq:robust_ope_cb}b). This motivates the full optimization procedure summarized in Algorithm~\ref{alg:cb_ope}. We first compute importance sampling weights $W_i$ and create a re-weighted dataset $\{V_i = (Z_i, W_i \times Y_i)\}_i$. The risk defined in Eq.~\eqref{eq:robust_eval} is approximated by an estimate of its upper bound given in Eq.~\eqref{eq:marginal_smooth_app} when $Z$ is continuous valued (see Appendix \ref{app:dro_bound} for the detailed derivation). \begin{algorithm}[t!] \caption{Robust OPE in CBs} \label{alg:cb_ope} \begin{algorithmic} \STATE {\bfseries Input:} Data $\{Z_{i}, T_i, Y_i\}_{i}$, target policy $Q(T|Z)$, behavior policy $P(T|Z)$, hyperparameters $\delta$, \texttt{L}, \texttt{lr}. \STATE \STATE Compute importance weights $\{W_i=\frac{Q(T_i|Z_i)}{P(T_i|Z_i)}\}_i$. \STATE Create dataset $\{V_i = (Z_{i}, W_i\times Y_i)\}_{i}$. \STATE Estimate $\widehat{R}(\mathcal{U}^{\text{CB}}_P)$ (Eq.~(\ref{eq:robust_eval})) with the worst-case risk estimator in Eq. (\ref{eq:marginal_smooth_app}) for the dataset. \STATE \STATE{{\bf{return}} $\widehat{R}(\mathcal{U}^{\text{CB}}_P)$} \end{algorithmic} \end{algorithm} \subsection{Robust OPE in MDPs} \label{sec:fullmdp} Off-policy evaluation is critical in sequential decision-making settings, often encountered in human-centered domains such as health. Such environments are often best modeled as a Markov Decision Process (MDP), as introduced in Section~\ref{sec:prelim}. In this case, we have to consider shifts in the transition dynamics across environments, which can invalidate OPE methods for MDPs, as these often assume stationary dynamics. Our goal is to evaluate the \textit{robust} value of a given policy $\pi$ to be deployed in a new environment with unknown transition dynamics. Hence, the uncertainty set for each state-action pair is defined over $P(s'|s, a)$, denoted by $\mathcal{U}(s,a)$.\footnote{We drop the dependence on the train environment's transition probabilities for conciseness and use $P$ to denote target probabilities instead of $Q$ to avoid confusion with the $Q$-value function in RL.} Specifically, we want to estimate the robust value of $\pi$ starting from $s_0$ as $V^\pi(s_0) = \inf_{P\in\mathcal{U}} \mathbb{E}_{\pi,P}\left[\sum_{t=0}^\infty \gamma^t r(s_t,a_t)\vert s_0\right]$.
\citet{iyengar2005robust} proves that $V^\pi(\cdot)$ is the solution to the following fixed-point equation (the robust Bellman equation), provided the uncertainty sets for each state-action pair are constructed independently (a condition known as \textit{SA-rectangularity}), \begin{equation} \label{eq:robust_bellman} \begin{aligned} V^\pi(s) = r(s,\pi(s)) + \inf_{P\in\mathcal{U}(s,\pi(s))} \gamma \mathbb{E}_{s'\sim P(s'\vert s,\pi(s))}[V^\pi(s')] \end{aligned} \end{equation} \begin{figure}[t!] \centering \includegraphics[width=0.4\textwidth]{figures/causal_graph_mdp.png} \caption{\textbf{Graphical model representing a Markov Decision Process.} Expert input (shaded nodes in red) denotes the variables whose distribution changes across environments. Here, only $P(s_{t+1}^1| s_t,a_t)$ changes. Rewards $r_t$ are not shown for simplicity. Each $r_t$ has directed edges from $s_t$ and $a_t$.} \label{fig:sel_mdp} \end{figure} Intuitively, SA-rectangularity implies that the uncertainty sets are constructed independently across time steps. This property yields a tractable method to compute the value function estimates using dynamic programming \citep{iyengar2005robust}. We state the assumption formally in Appendix \ref{app:assum_mdp} for completeness. Eq. (\ref{eq:robust_bellman}) can be solved iteratively by dynamic programming~\citep{sutton2018reinforcement}. Given the value function at any iteration, we additionally have to solve the minimization problem over $\mathcal{U}(s,a)$. Thus, the robust OPE problem in MDPs reduces to solving multiple DRO subproblems with a chosen uncertainty set. \paragraph{Applying $\texttt{ROPE}$ to OPE in MDPs.} Past work has only considered uncertainty sets for the joint distribution \citep{iyengar2005robust,tamar2014scaling,petrik2019beyond,zhou2021tabular}. In contrast, we consider sets based on shifts in parts of the state space of the MDP, i.e., leveraging human input. Figure \ref{fig:sel_mdp} pictorially represents the probabilistic model for the MDP, denoting the shifting state features. The following assumption is the RL analogue of subcovariate shifts in supervised learning. For any time step $t\geq 0$ in the MDP, consider a partitioning of the state feature vector $s_t$ into two feature sets, $(s^{1}_t,s^{2}_t)$. The factors of the joint distribution in any environment are given by: $$P(s_{t+1},r_{t+1}|s_t,a_t) = P(s^{1}_{t+1}\vert s_t, a_t)P(s^{2}_{t+1}\vert s_t, a_t, s^{1}_{t+1})P(r_{t+1}\vert s_t, a_t, s_{t+1})$$ \begin{assumption}[Expert Input for MDPs] \label{assum:inter_mdp} Suppose the human expert specifies the following, \begin{enumerate}[label=(\alph*)] \item $P(s^{1}_{t+1}\vert s_t, a_t)$ can shift across environments independently of state-action pairs at any other time step, while \item $P(s^{2}_{t+1}\vert s_t, a_t, s^{1}_{t+1})$ and $P(r_{t+1}\vert s_t, a_t, s_{t+1})$ are the same as in the train environment. \end{enumerate} \end{assumption} \textbf{Remark}. Assumption \ref{assum:inter_mdp} implies that the MDP satisfies SA-rectangularity. For any state-action pair $(s_t,a_t)$ at time step $t$, the shifts that define the uncertainty set $\mathcal{U}^{\text{MDP}}(s_t,a_t)$ are independent of the state-action pairs at other time steps. This implies that $\mathcal{U}^{\text{MDP}}(s_t,a_t)$ is determined independently of the uncertainty sets at other time steps. Thus, the collection $\mathcal{U}^{\text{MDP}}$ consists of all possible combinations of the sets $\mathcal{U}^{\text{MDP}}(s_t,a_t)$.
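In symbols, rectangularity means that the overall uncertainty set factorizes as a Cartesian product over state-action pairs, \begin{equation*} \mathcal{U}^{\text{MDP}} = \prod_{(s,a)\in \mathcal{S}\times\mathcal{A}} \mathcal{U}^{\text{MDP}}(s,a), \end{equation*} so the inner minimization in Eq. (\ref{eq:robust_bellman}) can be solved separately for each $(s,a)$.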
A scenario where SA-rectangularity does not hold is when uncertainty sets are constructed adaptively based on previous state-action pairs, which we do not address. Thus, the uncertainty set needs to be defined only for $P(s^{1}_{t+1}\vert s_t, a_t)$, denoted by $\mathcal{U}^{\text{MDP}}(s,a)$: \begin{align*} \mathcal{U}^{\text{MDP}}(s,a) {:=} \Big\{ &P(\cdot\vert s,a)\ll P_0(\cdot\vert s,a)\quad \text{s.t.}\\ &\forall s', P(s'\vert s,a)=\nu(s'^{1}\vert s, a)P_0(s'^{2}\vert s, a, s'^{1}), \\ &\mathcal{D}\infdivx{\nu(s'^{1}\vert s, a)}{P_0(s'^{1}\vert s, a)}\leq \delta\Big\} \end{align*} where $P_0(\cdot)$ denotes the distribution of the train environment. With $\mathcal{U}^{\text{MDP}}$, the DRO subproblem in (\ref{eq:robust_bellman}) reduces to, \begin{align*} &\inf_{P\in\mathcal{U}^{\text{MDP}}(s,\pi(s))} \mathbb{E}_P[V^\pi(s')] \\ &= \inf_{P\in\mathcal{U}^{\text{MDP}}(s,\pi(s))} \mathbb{E}_{P(s'^{1}\vert s, \pi(s))} \left[\mathbb{E}_{P_0(s'^{2}\vert s, \pi(s), s'^{1})}\left[V^\pi(s')\right]\right] \end{align*} We estimate the inner expectation with Monte-Carlo averaging on batch data, followed by solving the DRO problem as done in bandits (\ref{eq:robust_ope_cb}). Here, we use the maximum likelihood estimate of the transition model $P_0$, as we do not have access to the true model. The steps for estimation are outlined in Algorithm \ref{alg:cb_mdp}. \begin{algorithm}[t!] \caption{Robust OPE in MDPs} \label{alg:cb_mdp} \begin{algorithmic} \STATE {\bfseries Input:} Trajectories $\{(s,a,s',r)\}$ sampled using policy $\mu$ and transition model $P_0$, Target policy $\pi$, Discount factor $\gamma$, Robustness level $\delta$. \STATE \STATE Learn transition models with sample averages across observed trajectories $\widehat{P_0}(s'^1\vert s,a)= \frac{\text{Count}\{(s,a,(s'^1,*),*)\}}{\text{Count}\{(s,a,(*,*),*)\}}$ and $\widehat{P_0}(s'^2\vert s,a,s'^1)= \frac{\text{Count}\{(s,a,(s'^1,s'^2),*)\}}{\text{Count}\{(s,a,(s'^1,*),*)\}}$, where the wildcard $(s,a,*,*)$ denotes transitions from trajectories that match the state-action pair $s,a$. \STATE \STATE Learn reward model with sample averages across observed trajectories $\widehat{r}(s,a) = \frac{\sum_{(\_,\_,\_,r)\in \{(s,a,*,*)\}} r}{\text{Count}\{(s,a,*,*)\}}$. \STATE \STATE Initialize $V^\pi(s)=0$, for all $s\in\mathcal{S}$. \REPEAT \FOR{$s\in\mathcal{S}$} \STATE Update $V^\pi(s)$ using Eq. (\ref{eq:robust_mdp_app}) in Appendix \ref{app:algmdp} with $\widehat{P_0}$. \ENDFOR \UNTIL{$V^\pi$ converges} \STATE \STATE{{\bf{return}} $V^\pi$} \end{algorithmic} \end{algorithm} Given enough samples of the next state for each state-action pair, the robust value can still be estimated accurately. Let $V^\pi_{\mathcal{U}_{P_0}}$ be the robust value corresponding to the true transition model $P_0$, which we want to show is close to the robust value corresponding to the \textit{estimated} transition model $\smash{\widehat{P}_0}$, denoted by $\smash{V^\pi_{\mathcal{U}_{\widehat{P}_0}}}$. To show this, we will restrict to uncertainty sets in Eq. (\ref{eq:int_set}) defined by the KL-divergence $\mathcal{D}\infdivx{Q}{P} = \sum_z Q(z)\log(\frac{Q(z)}{P(z)})$ being smaller than $\delta$.
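Before stating the estimation guarantee, we give a minimal sketch of the robust dynamic program behind Algorithm~\ref{alg:cb_mdp} in the tabular case. It is written for the joint-shift inner problem over the full next-state distribution (as in $\texttt{JointDRO}$); the $\texttt{ROPE}$ variant applies the same one-dimensional dual only to the $\nu(s'^{1}\vert s,a)$ factor. The inner infimum assumes the $\mathcal{D}_{\text{CVaR}}$ ratio-bound set, and names such as \texttt{worst\_case\_mean} are illustrative rather than part of our released code.

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def worst_case_mean(values, probs, alpha):
    # inf of E_nu[values] over nu with nu/probs <= 1/alpha, via the
    # 1-d dual:  sup_eta  eta - (1/alpha) * E_probs[(eta - values)_+].
    obj = lambda eta: -(eta - probs @ np.maximum(eta - values, 0.0) / alpha)
    return -minimize_scalar(obj, method="brent").fun

def robust_policy_evaluation(P0, R, pi, gamma, alpha, tol=1e-2):
    # Iterates the robust Bellman equation: one DRO subproblem per state.
    # P0: (S, A, S) transition tensor, R: (S, A) rewards,
    # pi: length-S array of actions for a deterministic policy.
    S = R.shape[0]
    V = np.zeros(S)
    while True:
        V_new = np.array([R[s, pi[s]]
                          + gamma * worst_case_mean(V, P0[s, pi[s]], alpha)
                          for s in range(S)])
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
\end{verbatim}

Each sweep costs one one-dimensional minimization per state, consistent with the per-iteration complexity discussed below.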
\begin{theorem}[Robust OPE Estimation Error] \label{thm:error_ope} Given at least $n$ samples from $P_0(\cdot\vert s,a)$ for all $s,a$, assuming that the rewards are bounded, $r\in[0,r_\text{max}]$, and that the sets $\mathcal{U}^{\text{MDP}}$ are defined by the KL-divergence, with probability at least $1-\alpha$, {\small{ \begin{align*} \|V^\pi_{\mathcal{U}_{P_0}} - V^\pi_{\mathcal{U}_{\widehat{P}_0}}\|_\infty \leq O\left(\frac{\gamma r_\text{max}|\mathcal{S}|}{(1-\gamma)^2} \sqrt{\frac{1}{n}\log\left(\frac{4|\mathcal{S}\times\mathcal{A}\times\mathcal{S}|}{\alpha}\right)}\right) \end{align*} }} \end{theorem} Thus, the error in evaluating the robust value with the estimated transition model rather than the true model converges at the rate $\tilde{O}\left(\frac{\gamma|\mathcal{S}|}{\sqrt{n}(1-\gamma)^2}\right)$, ignoring logarithmic factors. The dependence on $1/\sqrt{n}$ matches that of the non-robust case but is sub-optimal in $|\mathcal{S}|$ \citep{li2020breaking}. The proof is included in Appendix \ref{app:error_ope}, in which we show an analogue of the \textit{simulation lemma}~\citep{kearns2002near} for robust MDPs. In terms of time complexity, each iteration of dynamic programming with DRO can be computed in $O(|\mathcal{A}| |\mathcal{S}|^2 \log|\mathcal{S}|)$, which is only a $\log|\mathcal{S}|$ factor more than the non-robust solution. \begin{figure*}[htbp!] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.8\textwidth]{figures/toylogpol_testshift_a0.8_r_4_error_test_line_conf.png} \caption{} \label{fig:synth_cb} \end{subfigure}% \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.8\textwidth]{figures/warfarin_data_nr_5_a_0.8_s_3_error_test_line_conf.png} \caption{} \label{fig:wf_cb} \end{subfigure} \caption{(a) \textbf{CB, Synthetic}. MSE in value estimates for test sets (y-axis) with varying levels of shift in $Z_1$ (x-axis). $\texttt{ROPE}$ performs well for moderate shifts. (b) \textbf{CB, Warfarin}. MSE in value estimates for test sets with shift in the race distribution. $\texttt{ROPE}$ achieves the right level of conservatism to match the value at test. Curves for $\texttt{Standard}$ and $\texttt{JointDRO}$ are \textbf{not visible} as they have high error (around 0.1 and 0.7, respectively) and lie outside the plotted y-axis range. The robustness level in Eq. (\ref{eq:unc_cb}) is set to $\delta=0.8$. Error bars are computed over $5$ random initializations.} \label{fig:results} \end{figure*} \section{Experimental Evaluation}\label{sec:expts} In this section, we study the robustness-utility trade-offs achieved by the proposed shifts and compare them with existing approaches\footnote{Code for replicating the experiments is at \url{https://github.com/AI4LIFE-GROUP/rise-against-distribution-shift}}. More specifically, we study whether optimizing for marginal shifts with $\texttt{ROPE}$ improves model performance under those shifts in comparison with optimizing for broader classes of shifts using existing methods. We start with subcovariate shifts in bandit settings, as described in Sec. \ref{sec:exp_bandit}. We specifically consider the partial feedback problem in the contextual bandits setup, since we only get to observe the feedback (outcomes) for the actions taken in the collected data. Thus, we tackle robustness under partial feedback. We show that our method provides more faithful estimates of the efficacy of a drug dosing policy under patient population shifts. In Sec.
\ref{sec:exp_fullmdp}, we consider sequential decision problems under shifts in environment dynamics for MDPs. Within a simulated Sepsis environment, we show that the proposed sets provide significantly less conservative value estimates for a treatment policy. \subsection{Robust OPE in CBs} \label{sec:exp_bandit} We present results on a synthetic and a real-world dataset. \textbf{Synthetic}. We generate data with two features $Z{:=}(Z_1, Z_2)$, binary treatment $T$, and continuous outcome $Y$. Additional details on how the data is simulated are deferred to Appendix \ref{app:data_cb}. Here we assume that the marginal distribution of $Z_1$ changes at evaluation time. We simulate $n{=}2000$ samples in the train environment following a logistic policy and use them to estimate the robust value of a different policy in shifted environments. Baselines include: $\texttt{Standard}$, which returns the average value assuming no shift; Inverse Probability Weighting \texttt{(IPW)}, which only corrects for the shift in policy using importance sampling; and $\texttt{JointDRO}$, which accounts for shifts in all variables $V$, as done in past work \citep{si2020distributional}. In Figure \ref{fig:synth_cb}, we plot the MSE between the estimated and the true policy value, evaluated using $20000$ samples from the test environment. We observe that when the test environment is close to the train one, not accounting for the shift ({$\texttt{Standard}$}) performs well. But as the shift increases, our approach ($\texttt{ROPE}$) does better. With large shifts, larger uncertainty sets are required, and {$\texttt{JointDRO}$} does better than the other methods. In summary, $\texttt{ROPE}$ performs well when the shift is significant but not too large. This highlights the importance of choosing the desired robustness level appropriately, which remains a challenging open problem for DRO methods. \begin{table*} \centering \begin{subtable}[t]{0.6\linewidth} \centering \begin{tabular}{p{0.7cm}c|cccc} \toprule & $\texttt{Standard}$ & \multicolumn{2}{c}{$\texttt{JointDRO}$ } & \multicolumn{2}{c}{$\texttt{ROPE}$} \\ $\delta$ & - & 0.4 & 0.8 & 0.4 & 0.8 \\ \midrule mean & -1136.43 & -1448.16 & -1221.78 & \textbf{-1416.39} & \textbf{-1190.57} \\ std. & 6.22 & 6.32 & 5.28 & 6.70 & 6.33 \\ median & -1136.64 & -1449.91 & -1222.76 & -1417.12 & -1190.92 \\ \bottomrule \end{tabular} \caption{Cliffwalking domain} \label{tab:cliff_dp} \end{subtable}\hfill \begin{subtable}[t]{0.4\linewidth} \centering \begin{tabular}{p{0.7cm}c|cc} \toprule & $\texttt{Standard}$ & $\texttt{JointDRO}$ & $\texttt{ROPE}$\\ $\delta$ & - & 0.8 & 0.8 \\ \midrule mean & -0.037 & -0.939 & \textbf{-0.662} \\ std. & 0.008 & 0.006 & 0.148 \\ median & -0.039 & -0.941 & -0.705 \\ \bottomrule \end{tabular} \caption{Sepsis domain} \label{tab:sepsis_dp} \end{subtable} \caption{\textbf{Robust OPE in MDP.} Estimated value ($10$ random runs) with standard (that is, non-robust) and robust dynamic programming. \texttt{ROPE}~provides a less conservative value than $\texttt{JointDRO}$, meaning a smaller decrease in value from $\texttt{Standard}$.} \label{tab:mdp_tables} \end{table*} \textbf{Warfarin Dosing Policy}. Warfarin is an oral anticoagulant drug. The optimal dosage to assign to a patient when initiating Warfarin therapy has been the subject of multiple clinical trials \citep{heneghan2010optimal}.
Using the public PharmGKB dataset~\citep{international2009estimation} of $5528$ patients, \citet{bastani2015online} learn contextual bandit policies that adapt doses based on patient covariates like demographics and clinical information.\footnote{A preprocessed version of the Warfarin dataset was downloaded from \url{https://github.com/khashayarkhv/contextual-bandits/blob/master/datasets/warfarin.csv}.} The reward, either 0 or 1, is defined by whether the policy makes the correct dosing decision. However, the value of the policies is suspect when applied to patient populations different from the development cohort. Thus, we estimate the robust value of a policy under shifts in the race distribution. Note that the ground-truth optimal dose for each patient is available in the data, which enables evaluating different policies. We learn a dosing policy with linear regression on held-out data and estimate its value on test data with shifted race distributions. Specifically, we subsample (without replacement) fewer patients with a recorded race into our analysis set. The policy has lower performance on patients with \texttt{Unknown} race. Thus, the value of the policy decreases with increasing shift as the relative proportion of this group increases. To estimate this value correctly, the robust method must consider the right level of conservativeness. Figure \ref{fig:wf_cb} shows that the MSE between the estimated and true average reward is lower for $\texttt{ROPE}$ than for $\texttt{JointDRO}$ and \texttt{IPW}, as it constructs the sets for marginal shifts alone. \begin{figure}[htbp!] \centering \includegraphics[width=\linewidth]{figures/cliff/cliff_v_value_algdp_rTrue_tmarginal_ucvar_a0.8_s0.1_e10000_runs10/value_run0_g1.png} \caption{\textbf{Illustration of the Cliffwalking domain.} The plot shows (part of) the value function estimated using the robust Bellman equation with \texttt{ROPE}. The start position is (5,0), the goal is (5,5), and the cliff occupies the cells of row 5 between them. The agent slips downward by one cell with probability 0.1 when taking actions in any of the columns except the first and last. Results in Table \ref{tab:cliff_dp} report the value at the start position (5,0), which is -1182.45 here, averaged over 10 random initializations of the domain.} \label{fig:cliff} \end{figure} \subsection{Robust OPE in MDPs} \label{sec:exp_fullmdp} We present results on two simulated RL domains. \textbf{Cliffwalking Domain}. We consider a $6\times6$ gridworld in which an agent navigates from a start to a goal position while avoiding a cliff \citep[][Ex. 6.6]{sutton2018reinforcement}. Please refer to Appendix \ref{app:data_cliff} for more details. An illustration of the gridworld is provided in Figure \ref{fig:cliff}. With a constant slip probability, the agent slips towards the cliff instead of taking the prescribed action. The slip probability varies across environments, changing the transition dynamics and necessitating robustness in policy evaluation. We evaluate the value estimate for an agent following a uniform random policy using dynamic programming with the standard Bellman equation ($\texttt{Standard}$) or the robust one in (\ref{eq:robust_bellman}) ($\texttt{JointDRO},\texttt{ROPE}$). To simulate subcovariate shifts, we duplicate the state features into $s^1,s^2$ such that $s^1$ follows the agent's actions while $s^2$ is random noise. Since the agent's actions affect only $s^1$, $\texttt{ROPE}$ correctly constructs sets on $P(s^1\vert s,a)$.
In contrast, $\texttt{JointDRO}$ ignores this structure and constructs sets over both features, $P(s^1,s^2\vert s,a)$. This is the same setting as considered in past work \citep{zhou2021tabular}, which used the KL divergence to define uncertainty sets. Table \ref{tab:cliff_dp} reports the value estimates for the start position. We observe that for a high level of desired robustness, $\delta=0.4$, $\texttt{JointDRO}$ decreases the value by 27.4\% (from $-1136$ to $-1448$) while $\texttt{ROPE}$ only decreases it by 24.6\% (to $-1416$). This validates that both DRO methods exhibit the expected behavior in the MDP. \textbf{Sepsis Treatment Evaluation.} The Sepsis simulator~\citep{oberst19counterfactual} is a domain with more involved transition dynamics and has been used to test treatment policies \citep{oberst19counterfactual, namkoong2020off, killian2020counterfactually,futoma2020popcorn}. It has a total of $1440$ states, which encode $4$ vital signs (blood pressure, glucose level, heart rate, and oxygen concentration) and diabetic status. Actions correspond to combinations of $3$ treatments (antibiotics, mechanical ventilation, and vasopressors). Terminal states, discharge from the ICU or death, have rewards $+1$ or $-1$, respectively. Glucose levels fluctuate more for diabetics than for non-diabetics. Our goal is to evaluate a policy learned using policy iteration (the RL policy) on a dataset with $20\%$ diabetics. We consider a setting where the percentage of diabetics and the fluctuation in their glucose levels vary in the test environment. To interrogate the RL policy for possible deployment, we find its robust value accounting for these shifts. $\texttt{JointDRO}$ constructs sets based on the full $1440$ states, while $\texttt{ROPE}$ considers uncertainty only in the glucose-level dynamics for diabetics and non-diabetics. Thus, $\texttt{ROPE}$ represents the actual shifts more faithfully than $\texttt{JointDRO}$. We check how conservative the OPE estimates obtained from assuming joint shifts are relative to leveraging domain knowledge to restrict to subcovariate shifts. Table \ref{tab:sepsis_dp} reports the value estimates for the RL policy obtained with standard and robust dynamic programming. We observe that $\texttt{JointDRO}$ reports a decrease in value of $21$ times relative to the value in the train environment, while $\texttt{ROPE}$ reports a decrease of only $14$ times. Thus, these experiments demonstrate the benefit of curating the uncertainty sets using domain knowledge to balance utility and robustness. \paragraph{Optimization and system details} For the CB experiments, we perform gradient descent with the Adagrad \citep{duchi11adagrad} implementation in PyTorch \citep{paszke19pytorch} to solve Eq. (\ref{eq:marginal_smooth_app}). For the synthetic CB dataset, we use a learning rate of $0.5$ for $\texttt{ROPE}$ and $0.1$ for $\texttt{JointDRO}$, a Lipschitz constant of $1$, and $100$ steps. For the Warfarin dataset, we use a learning rate of $0.1$ for both $\texttt{ROPE}$ and $\texttt{JointDRO}$, a Lipschitz constant of $0.01$, and $200$ steps. While solving the dynamic program iteratively in the MDP experiments, the convergence condition for $V^\pi$ is that the maximum change in $V^\pi(\cdot)$ is below $10^{-2}$ after one sweep across all states. The minimization over $\eta$ when solving Eq. (\ref{eq:robust_marginal}) is performed using Brent's method, as implemented in the package \texttt{scipy} \citep{scipy}.
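For concreteness, the following is a minimal sketch of this $\eta$-minimization for the CB estimator of Algorithm~\ref{alg:cb_ope}, treating each reweighted sample $W_iY_i$ as its own $Z$-cell (the continuous-$Z$ case instead uses the smoothed upper bound of Eq.~(\ref{eq:marginal_smooth_app})); it follows the dual form of step~(\ref{eq:robust_ope_cb}b), and the function name is illustrative.

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def eta_dual(values, delta):
    # inf_eta  (1/delta) * mean((values - eta)_+) + eta,
    # solved with Brent's method as in our experiments.
    obj = lambda eta: np.mean(np.maximum(values - eta, 0.0)) / delta + eta
    return minimize_scalar(obj, method="brent").fun

# usage: eta_dual(W * Y, delta=0.8) on the reweighted CB dataset
\end{verbatim}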
For the MDP experiments, the only hyperparameter is $\delta$, which is set to $0.8$ throughout, or to $0.4$ where indicated in Table \ref{tab:cliff_dp}. Experiments were run on a compute cluster using a single node with a 2.50 GHz Intel processor, 2 GPUs with 6 GB memory each, and 256 GB system memory. None of the datasets include personally identifiable information. \section{Conclusions and Future Work} \label{sec:conc} In this work, we focus on leveraging human expertise to provide value estimates of ML policies under realistic distribution shifts. We propose to represent domain knowledge via uncertainty sets over subcovariate shifts. We argue that this enables representing more realistic shifts and leads to less conservative solutions. We then provide novel estimators for robust OPE in contextual bandits and MDPs, leveraging the distributionally robust optimization framework. Future directions include expanding human input to address shifts in conditional distributions for OPE (see, e.g., \citealp{subbaswamy2021evaluating}, for supervised learning). Finally, applying the robust OPE method to continuous state-action spaces with function approximators (e.g., \citealp{tamar2014scaling}) is an interesting direction for future work. We hope that the perspective of leveraging human input to define uncertainty sets for robustness in off-policy evaluation opens up more possibilities to tackle the over-conservatism of robust learning, as well as the challenging problem of model selection in off-policy evaluation. \paragraph{Broader implications} Although it is challenging to foresee the impact of using robust methods in real-world applications, one negative outcome is over-reliance on results without adequate scrutiny of assumptions such as the causal knowledge and the uncertainty about future deployment settings. Adversaries can manipulate deployment environments such that the estimated robust values of policies have high errors. The design of uncertainty sets that account for such adversarial changes should be informed and validated by domain experts. \begin{acks} The authors would like to thank the anonymous reviewers for their feedback and all the funding agencies listed below for supporting this work. This work is supported in part by the Center for Research on Computation and Society (CRCS) at Harvard University, NSF awards \#IIS-2008461, \#IIS-2040989, and \#1922658, and research awards from Amazon, Harvard Data Science Institute, Bayer, and Google. The authors would also like to thank Rediet Abebe, Sara Kingsley, Elita Lobo, Aviva Prins, and Milind Tambe for early feedback on this work. Lastly, HL would like to thank Sujatha and Mohan Lakkaraju for their continued support and encouragement. The views expressed here are those of the authors and do not reflect the official policy or position of the funding agencies. \end{acks} \bibliographystyle{ACM-Reference-Format} \balance
{ "timestamp": "2022-09-20T02:22:27", "yymm": "2209", "arxiv_id": "2209.08682", "language": "en", "url": "https://arxiv.org/abs/2209.08682" }
\section{Analysis of Maximum Likelihood Estimation} In this section, we provide theoretical results on the estimation of $\Theta^*$ and $\Delta^*$. We denote by \$ L_1(\Theta) = -\EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} \log \Theta(S_t, Z_t) \right] \$ the population counterpart of $\hat L_1$. We define \#\label{eq:theta-hellinger} H^2(\Theta_1, \Theta_2) = \frac{1}{2} \cdot \EE \left[\frac{1}{T} \sum_{t = 0}^{T-1} \int \left( \sqrt{\Theta_1(S_t, z)} - \sqrt{\Theta_2(S_t, z)} \right)^2 \ud z \right]. \# We have the following supporting result. \begin{lemma}\label{lemma:iv-mle-1} Under Assumption \ref{ass:spaces}~\ref{ass:upper-bound-delta}, for any $\Theta_1, \Theta_2 \in \cF_1$, it holds with probability at least $1 - \delta$ for any $c / (NT)^2 \leq \delta \leq 1$ that \$ & \left| \left(L_1(\Theta_1) - L_1(\Theta_2)\right) - \left(\hat L_1(\Theta_1) - \hat L_1(\Theta_2)\right) \right| \\ & \qquad \leq c\cdot \frac{\log C_{\Theta^*}}{NT \kappa} \log \frac{1}{\delta} \log(NT) + c\cdot \sqrt{\frac{C_{\Theta^*}}{NT \kappa} H^2(\Theta_1, \Theta_2) \log\frac{1}{\delta} \log(NT) }, \$ where $H^2(\Theta_1, \Theta_2)$ is defined in \eqref{eq:theta-hellinger}. \end{lemma} \begin{proof} By Theorem \ref{thm:bernstein-mixing}, it holds with probability at least $1 - \delta$ that \#\label{eq:04939335} & \left| \left(L_1(\Theta_1) - L_1(\Theta_2)\right) - \left(\hat L_1(\Theta_1) - \hat L_1(\Theta_2)\right) \right| \\ & \quad \leq c\cdot \frac{\log C_{\Theta^*}}{NT \kappa} \log \frac{1}{\delta} \log(NT) + c\cdot \sqrt{\frac{1}{NT \kappa} \EE\left[\frac{1}{T} \sum_{t = 0}^{T-1} \left( \log\frac{\Theta_1(S_t,Z_t)}{\Theta_2(S_t,Z_t)} \right)^2 \right] \log\frac{1}{\delta} \log(NT) }. \# Now, it suffices to upper bound the variance term on the RHS of the above inequality. Note that $\log x \leq 2(\sqrt{x}-1)$ for any $x>0$, which follows from $\log u \leq u - 1$ applied to $u = \sqrt{x}$. Thus, for any $s\in \cS$, we have \#\label{eq:iv-kl-hell} & \int \Theta^*(s,z)\left(\log\frac{\Theta_1(s,z)}{\Theta_2(s,z)}\right)^2 \ud z\\ & \qquad \leq 4 \int \Theta^* \max\left\{\left(\sqrt{\frac{\Theta_2}{\Theta_1}} - 1\right)^2, \left(\sqrt{\frac{\Theta_1}{\Theta_2}} - 1\right)^2\right\} \ud z \\ & \qquad = 4 \int \max\left\{\frac{\Theta^*}{\Theta_1}\left(\sqrt{\Theta_2} - \sqrt{\Theta_1}\right)^2, \frac{\Theta^*}{\Theta_2}\left(\sqrt{\Theta_1} - \sqrt{\Theta_2}\right)^2\right\} \ud z \\ & \qquad \leq 4C_{\Theta^*} \int \left(\sqrt{\Theta_1(s,z)} - \sqrt{\Theta_2(s,z)}\right)^2 \ud z, \# which implies that \#\label{eq:049395} \EE\left[\frac{1}{T} \sum_{t = 0}^{T-1} \left( \log\frac{\Theta_1(S_t,Z_t)}{\Theta_2(S_t,Z_t)} \right)^2 \right] \leq 8C_{\Theta^*} H^2(\Theta_1, \Theta_2). \# By plugging \eqref{eq:049395} into \eqref{eq:04939335}, we conclude the proof of the lemma. \end{proof} \subsection{Proof of Theorem \ref{thm:iv-param-theta}}\label{prf:thm:iv-param-theta} \begin{proof} \vskip5pt \noindent\textbf{Proof of the first statement.} It suffices to show that with probability at least $1 - \delta$, we have \$ \hat L_1(\Theta^*) - \hat L_1(\hat \Theta) \leq \alpha_1. \$ By Corollary \ref{cor:vdg-param}, it holds with probability at least $1 - \delta$ that \#\label{eq:3824783} H^2(\Theta^*, \hat \Theta) \leq c \cdot \frac{d}{NT \kappa} \log\frac{\theta_{\max}}{\delta}, \# where $c>0$ is an absolute constant, which may vary from line to line.
Thus, by Lemma \ref{lemma:iv-mle-1}, it holds with probability at least $1 - \delta$ that \#\label{eq:qwe1f} & \left| \left(L_1(\Theta^*) - L_1(\hat \Theta)\right) - \left(\hat L_1(\Theta^*) - \hat L_1(\hat \Theta)\right) \right| \\ & \qquad \leq c\cdot \frac{\log C_{\Theta^*}}{NT \kappa} d \log \frac{\theta_{\max}}{\delta} \log(NT) + c\cdot \sqrt{\frac{C_{\Theta^*}}{NT \kappa} H^2(\Theta^*, \hat \Theta) d \log\frac{\theta_{\max}}{\delta} \log(NT) } \\ & \qquad \leq c\cdot \frac{C_{\Theta^*}d}{NT \kappa} \log\frac{\theta_{\max}}{\delta} \log(NT), \# where we use a covering argument and \eqref{eq:3824783} in the first and last inequalities, respectively. Further, by a similar idea as in \eqref{eq:iv-kl-hell}, we upper bound $|L_1(\Theta^*) - L_1(\hat \Theta)|$ as follows, \#\label{eq:qwe2f} |L_1(\Theta^*) - L_1(\hat \Theta)| & = \left|\EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} \int \Theta^*(S_t,z) \log \frac{\Theta^*(S_t,z)}{\hat \Theta(S_t,z)}\ud z\right] \right| \\ & \leq 2C_{\Theta^*} H^2(\Theta^*, \hat \Theta) \leq c \cdot \frac{C_{\Theta^*}d}{NT \kappa} \log \frac{\theta_{\max}}{\delta}, \# where the first inequality follows by the same argument as \eqref{eq:iv-kl-hell} and the last uses \eqref{eq:3824783} (i.e., Corollary \ref{cor:vdg-param}). Now, by combining \eqref{eq:qwe1f} and \eqref{eq:qwe2f}, it holds with probability at least $1 - \delta$ that \$ \hat L_1(\Theta^*) - \hat L_1(\hat \Theta) \leq c\cdot \frac{C_{\Theta^*}d}{NT \kappa} \log\frac{\theta_{\max}}{\delta} \log(NT) = \alpha_1, \$ which concludes the proof of the first statement. \vskip5pt \noindent\textbf{Proof of the second statement.} By Lemma \ref{lemma:iv-mle-1}, with probability at least $1 - \delta$, for any $\Theta \in \textsf{conf}^1_{\alpha_1}$, we have \#\label{eq:qwww2f} & \left| \left(L_1(\Theta^*) - L_1(\Theta)\right) - \left(\hat L_1(\Theta^*) - \hat L_1(\Theta)\right) \right| \\ & \quad \leq c\cdot \frac{\log C_{\Theta^*}}{NT \kappa} d \log \frac{\theta_{\max}}{\delta} \log(NT) + c\cdot \sqrt{\frac{C_{\Theta^*}}{NT \kappa} H^2(\Theta^*, \Theta) d \log\frac{\theta_{\max}}{\delta} \log(NT) }, \# where we use a covering argument. Meanwhile, by the first statement, we have $\Theta^*\in \textsf{conf}^1_{\alpha_1}$ with probability at least $1 - \delta$. Thus, we have \#\label{eq:qwww3f} \left| \hat L_1(\Theta^*) - \hat L_1(\Theta) \right| \leq \left| \hat L_1(\Theta^*) - \hat L_1(\hat \Theta) \right| + \left| \hat L_1(\hat \Theta) - \hat L_1(\Theta) \right| \leq 2\alpha_1, \# where we use the fact that $\Theta \in \textsf{conf}^1_{\alpha_1}$. By combining \eqref{eq:qwww2f} and \eqref{eq:qwww3f}, with probability at least $1- \delta$, it holds for any $\Theta \in \textsf{conf}^1_{\alpha_1}$ that \#\label{eq:qwww4f} L_1(\Theta) - L_1(\Theta^*) \leq & c\cdot \frac{\log C_{\Theta^*}}{NT \kappa} d \log \frac{\theta_{\max}}{\delta} \log(NT) \\ & + c\cdot \sqrt{\frac{C_{\Theta^*}}{NT \kappa} H^2(\Theta^*, \Theta) d \log\frac{\theta_{\max}}{\delta} \log(NT) }. \# On the other hand, it holds for any $s\in \cS$ that \$ -\int \Theta^*(s,z) \log\frac{\Theta(s,z)}{\Theta^*(s,z)}\ud z & \geq - 2 \int \Theta^*(s,z) \left(\sqrt{\frac{\Theta(s,z)}{\Theta^*(s,z)}}-1\right)\ud z \\ & = \int \left(\Theta^*(s,z) + \Theta(s,z) - 2\sqrt{\Theta(s,z) \Theta^*(s,z)}\right)\ud z \\ & = \int \left(\sqrt{\Theta^*(s,z)} - \sqrt{\Theta(s,z)} \right)^2\ud z, \$ where the first equality uses that $\Theta(s,\cdot)$ and $\Theta^*(s,\cdot)$ integrate to one. This implies that \#\label{eq:qwww5f} L_1(\Theta) - L_1(\Theta^*) \geq 2H^2(\Theta^*, \Theta).
\# By combining \eqref{eq:qwww4f} and \eqref{eq:qwww5f}, we have \$ H^2(\Theta^*, \Theta) \leq c\cdot \frac{\log C_{\Theta^*}}{NT \kappa} d \log \frac{\theta_{\max}}{\delta} \log(NT) + c\cdot \sqrt{\frac{C_{\Theta^*}}{NT \kappa} H^2(\Theta^*, \Theta) d \log\frac{\theta_{\max}}{\delta} \log(NT) }, \$ which implies, by solving the resulting quadratic inequality in $\sqrt{H^2(\Theta^*, \Theta)}$ (if $x \leq a + b\sqrt{x}$ with $a, b \geq 0$, then $x \leq 2(a + b^2)$), that \$ H^2(\Theta^*, \Theta) \leq c\cdot \frac{C_{\Theta^*}d}{NT \kappa} \log \frac{\theta_{\max}}{\delta} \log(NT). \$ Now, by Lemma \ref{lemma:vi-vdg-lemma}, with probability at least $1-\delta$, it holds for any $\Theta \in \textsf{conf}^1_{\alpha_1}$ that \$ \sqrt{\EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} \|\Theta(S_t,\cdot) - \Theta^*(S_t,\cdot)\|_1^2 \right]} \leq c\cdot \sqrt{\frac{C_{\Theta^*}d}{NT \kappa} \log \frac{\theta_{\max}}{\delta} \log(NT) }, \$ which concludes the proof of the second statement. \end{proof} \section{Proofs of Results in \S\ref{sec:iv-id}} We provide the proofs of the results in \S\ref{sec:iv-id}: first those for \S\ref{sec:iv-mis-id}, and then those for \S\ref{sec:iv-vf-id}. \subsection{Proof of Lemma \ref{lemma:iv-mis-j}}\label{prf:lemma:iv-mis-j} \begin{proof} We observe for any $t\in \{0, 1, \ldots, T-1\}$ that \$ & \EE\left[ \frac{Z_t^\top A_t \pi(A_t\given S_t)}{\Delta^*(S_t,A_t) \Theta^*(S_t,Z_t)} w^\pi(S_t) R_t \Biggiven S_t \right] \\ & \quad = \sum_{a\in \cA} \EE\left[ \frac{Z_t^\top a \pi(a\given S_t) d^\pi(S_t) \ind\{A_t = a\} }{\Delta^*(S_t,a) \Theta^*(S_t,Z_t) d^b(S_t)} R(S_t, U_t, a, S_{t+1}, U_{t+1}) \Biggiven S_t \right] \\ & \quad = \sum_{a\in \cA} \EE\left[ \frac{Z_t^\top a \pi(a\given S_t) d^\pi(S_t) \ind\{A_t = a\} }{\Delta^*(S_t,a) \Theta^*(S_t,Z_t) d^b(S_t)} r(S_t, U_t, a) \Biggiven S_t \right] \\ & \quad = \sum_{a\in \cA} \EE\left[ \frac{Z_t^\top a \pi(a\given S_t) d^\pi(S_t) \PP(A_t = a\given S_t, U_t, Z_t) }{\Delta^*(S_t, a) d^b(S_t) \Theta^*(S_t,Z_t)} r(S_t, U_t, a) \Biggiven S_t \right ] \\ & \quad = \sum_{a\in \cA}\EE\left[ \frac{\pi(a\given S_t) d^\pi(S_t) \PP(A_t = a\given S_t, U_t, Z_t=a) }{\Delta^*(S_t, a) d^b(S_t) } r(S_t, U_t, a) \Biggiven S_t \right ] \\ & \quad \quad - \sum_{a\in \cA}\sum_{z\in \cZ, z\neq a} \frac{1}{K-1}\EE\left[ \frac{\pi(a\given S_t) d^\pi(S_t) \PP(A_t = a\given S_t, U_t, Z_t=z) }{\Delta^*(S_t, a) d^b(S_t) } r(S_t, U_t, a) \Biggiven S_t \right ] \\ & \quad = \sum_{a\in \cA} \frac{\pi(a\given S_t) d^\pi(S_t)}{d^b(S_t)} \EE_{U_t}[r(S_t, U_t, a)\given S_t], \$ where in the first equality, we use the definition of $w^\pi(s)$ and Assumption \ref{ass:10}; in the second equality, we use Assumption \ref{ass:iv-common}~\ref{ass:4}; in the third equality, we use Assumption \ref{ass:iv-common}~\ref{ass:5}; in the fourth equality, we use Assumption \ref{ass:iv-common}~\ref{ass:iv-zu-ind}; while in the fifth equality, we use Assumption \ref{ass:iv-common}~\ref{ass:iv-compliance}. Now, by Assumption \ref{ass:10}, we know that $\EE_{U_t}[r(S_t, U_t, a)\given S_t] = \tilde r(S_t, a)$, where the function $\tilde r$ is independent of $t$.
Therefore, we have \$ & \EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} \frac{Z_t^\top A_t \pi(A_t\given S_t)}{\Delta^*(S_t,A_t) \Theta^*(S_t,Z_t)} w^\pi(S_t) R_t \right] \\ & \qquad = \frac{1}{T} \sum_{t = 0}^{T-1} \EE\left[ \sum_{a\in \cA} \frac{\pi(a\given S_t) d^\pi(S_t)}{d^b(S_t)} \tilde r(S_t, a) \right ] \\ & \qquad = \frac{1}{T} \sum_{t = 0}^{T-1} \sum_{a\in \cA} \int \frac{\pi(a\given s) d^\pi(s)}{d^b(s)} \tilde r(s, a) p_t^b(s) \ud s \\ & \qquad = \sum_{a\in \cA} \int \pi(a\given s) d^\pi(s) \tilde r(s, a) \ud s = J(\pi), \$ which concludes the proof of the lemma. \end{proof} \subsection{Proof of Lemma \ref{lemma:iv-est-eq}} \label{prf:lemma:iv-est-eq} \begin{proof} Similar to the proof of Lemma \ref{lemma:iv-mis-j} in \S\ref{prf:lemma:iv-mis-j}, we observe that \#\label{eq:iv-f11-vf} & \EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} \frac{Z_t^\top A_t d^\pi(S_t) \pi(A_t\given S_t)}{\Delta(S_t,A_t) d^b(S_t) P(Z_t\given S_t)} f(S_t) \right ] = \int f(s') d^\pi(s') \ud s'. \# Similarly, we have \#\label{eq:iv-f12-vf} & \EE\left[ \frac{1}{T} \sum_{t=0}^{T-1} \frac{Z_t^\top A_t d^\pi(S_t) \pi(A_t\given S_t)}{\Delta(S_t,A_t) d^b(S_t) P(Z_t\given S_t)} f(S_{t+1}) \right] \\ & \qquad = \int f(s') d^\pi(s) \pi(a\given s) \PP(S'=s'\given S=s, A=a) \ud s'\ud a \ud s. \# Meanwhile, by the definition of $d^\pi(s)$, we have \#\label{eq:iv-f13-vf} d^\pi(s') & = (1 - \gamma) \sum_{t = 0}^\infty \gamma^t p_t^\pi(s') \\ & = (1-\gamma) \nu(s') + (1 - \gamma) \sum_{t = 0}^\infty \gamma^{t+1} p_{t+1}^\pi(s') \\ & = (1-\gamma) \left ( \nu(s') + \gamma \sum_{t = 0}^\infty \gamma^t \int \PP(S_{t+1} = s' \given S_t = s, A_t = a) \pi(a\given s) p_t^\pi(s) \ud s \ud a \right ) \\ & = (1 - \gamma ) \nu(s') + \gamma \int \PP(S' = s' \given S = s, A = a) \pi(a\given s) d^\pi(s) \ud s \ud a, \# where we use the assumption that $S_{t+1}\given (S_t, A_t)$ is time-homogeneous. Combining \eqref{eq:iv-f11-vf} and \eqref{eq:iv-f12-vf}, we have \$ & \EE\left[ \frac{1}{T} \sum_{t=0}^{T-1} \frac{Z_t^\top A_t d^\pi(S_t) \pi(A_t\given S_t)}{\Delta(S_t,A_t) d^b(S_t) P(Z_t\given S_t)} \left ( f(S_t) - \gamma f(S_{t+1}) \right ) \right] \\ & \qquad = \int f(s') \left( d^\pi(s') - \gamma \int d^\pi(s) \pi(a\given s) \PP(S'=s'\given S=s,A=a) \ud s \ud a \right) \ud s' \\ & \qquad = (1-\gamma) \int f(s') \nu(s') \ud s' = (1-\gamma) \EE_{S\sim \nu}\left[f(S)\right], \$ where we use \eqref{eq:iv-f13-vf} in the second equality. This concludes the proof of the lemma. \end{proof} \subsection{Proof of Lemma \ref{lemma:iv-vf-id}} \label{prf:lemma:iv-vf-id} \begin{proof} Similar to the proof of Lemma \ref{lemma:iv-mis-j} in \S\ref{prf:lemma:iv-mis-j}, we observe that \#\label{eq:iv-feb71} & \EE \left[ \frac{Z_0^\top A_0 \pi(A_0 \given S_0)}{\Delta^*(S_0, A_0) P(Z_0 \given S_0) } R_0 \Biggiven S_0 = s \right] \\ & \qquad = \EE_{U_0}\left [ \sum_{a\in \cA} \pi(a\given S_0) r(S_0, U_0, a)\Biggiven S_0 = s \right] = \EE_\pi[R_0 \given S_0 = s], \# where $\EE_\pi[\cdot]$ denotes that the expectation is taken w.r.t. $A_0 \sim \pi(\cdot \given S_0)$. For notational convenience, we denote by \#\label{eq:def-rhot} \rho_t = \frac{Z_t^\top A_t \pi(A_t \given S_t)}{\Delta^*(S_t, A_t) \Theta^*(Z_t \given S_t)}.
\# Similarly, by Assumption \ref{ass:10}, we observe that \$ \EE \left[ \rho_0 \rho_1 R_1 \Biggiven S_0 = s \right] & = \EE\left [ \sum_{a\in \cA} \pi(a\given S_0) \EE\left[ \rho_1 R_1 \Biggiven S_0, U_0, A_0 = a \right] \Biggiven S_0 = s \right] \\ & = \EE\left [ \sum_{a\in \cA} \pi(a\given S_0) \EE\left[ \EE\left[ \rho_1 R_1 \Biggiven S_1, U_1 \right] \Biggiven S_0, U_0, A_0 = a \right] \Biggiven S_0 = s \right] \\ & = \EE\left [ \sum_{a\in \cA} \pi(a\given S_0) \EE\left[ \EE_\pi[R_1 \given S_1, U_1] \given S_0, U_0, A_0 = a \right] \Biggiven S_0 = s \right] \\ & = \EE_\pi[R_1 \given S_0 = s]. \$ Now, by induction, it holds for any $t \geq 0$ that \$ \EE\left[ R_t \cdot \prod_{j = 0}^t \rho_j \Biggiven S_0 = s \right] = \EE_\pi[R_t \given S_0 = s], \$ which implies that \$ V^\pi(s) = \EE\left[ \sum_{t = 0}^\infty \gamma^t R_t \prod_{j = 0}^t \rho_j \Biggiven S_0 = s \right]. \$ To show that the IV-aided Bellman equation holds, by a similar argument as in \eqref{eq:iv-feb71}, we observe that \$ \EE \left[ \rho_0 \gamma V^\pi(S_1) \Biggiven S_0 = s \right] & = \EE \left[\rho_0 \cdot \EE\left[ \sum_{t = 0}^\infty \gamma^{t+1} R_{t+1} \prod_{j = 1}^{t+1} \rho_j \Biggiven S_1 \right] \Biggiven S_0 = s\right] \\ & = \EE \left[\rho_0 \EE\left[ \sum_{t = 0}^\infty \gamma^{t+1} R_{t+1} \left( \prod_{j = 1}^{t+1} \rho_j \right ) \Biggiven S_1, U_1 \right] \Biggiven S_0 = s\right] \\ & = \EE\left[ \sum_{t = 1}^\infty \gamma^{t} R_{t} \left( \prod_{j = 0}^{t} \rho_j \right ) \Biggiven S_0 = s\right], \$ where the second equality relies on Assumption \ref{ass:10}. Combining this with \eqref{eq:iv-feb71} and the definition of $\rho_t$ in \eqref{eq:def-rhot}, we have \$ & \EE \left[ \frac{Z_0^\top A_0 \pi(A_0 \given S_0)}{\Delta^*(S_0, A_0) P(Z_0 \given S_0) } (R_0 + \gamma V^\pi(S_1) ) \Biggiven S_0 = s \right] \\ & \qquad = \EE\left[ \sum_{t = 0}^\infty \gamma^{t} R_{t} \left( \prod_{j = 0}^{t} \frac{Z_j^\top A_j \pi(A_j \given S_j)}{\Delta^*(S_j, A_j) P(Z_j \given S_j) } \right ) \Biggiven S_0 = s\right] = V^\pi(s). \$ By Assumption \ref{ass:10} again, we can show that for any $k \geq 0$, \$ V^\pi(s) = \EE\left[ \sum_{t = k}^\infty \gamma^{t-k} R_t \left( \prod_{j = k}^t \frac{Z_j^\top A_j \pi(A_j \given S_j)}{\Delta^*(S_j, A_j) P(Z_j \given S_j) } \right ) \bigggiven S_k = s \right], \$ which concludes the proof of the lemma. \end{proof} \subsection{Proof of Corollary \ref{cor:iv-vf-phi-0}} \label{prf:cor:iv-vf-phi-0} \begin{proof} We have \$ & \EE\left[ f(S_t) \frac{Z_t^\top A_t \pi(A_t\given S_t)}{\Delta^*(S_t,A_t) \Theta^*(S_t,Z_t)} \left(R_t + \gamma V^\pi(S_{t+1})\right) \right] \\ & \qquad = \EE\left[f(S_t) \EE\left[ \frac{Z_t^\top A_t \pi(A_t\given S_t)}{\Delta^*(S_t,A_t) \Theta^*(S_t,Z_t)} \left(R_t + \gamma V^\pi(S_{t+1})\right) \Biggiven S_t \right] \right] \\ & \qquad = \EE\left[f(S_t) V^\pi(S_t)\right], \$ where the last equality comes from Lemma \ref{lemma:iv-vf-id}. By summing over all $t\in \{0, 1, \ldots, T-1\}$, we conclude the proof of the corollary.
\end{proof} \section{Proofs of Results in \S\ref{sec:vf-theory}} \subsection{Proof of Theorem \ref{thm:iv-vf}} \label{prf:thm:iv-vf} \begin{proof} By the definition of $J(\pi)$ in \eqref{eq:iv-val-func}, we proceed as follows, \#\label{eq:iv-vf-pp1} & J(\pi^*) - J(\hat \pi_\textsf{vf}) \\ & \qquad = (1-\gamma) \EE_{S_0\sim \nu}\left [V^{\pi^*}(S_0) - V^{\hat \pi_\textsf{vf}}(S_0)\right] \\ & \qquad \leq (1-\gamma) \EE_{S_0\sim \nu} \left [V^{\pi^*}(S_0) \right] - \min_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1} } \min_{v \in \textsf{conf}_{\alpha_\textsf{vf}}^\textsf{vf}(\Delta, \Theta, \hat \pi)} (1-\gamma) \EE_{S_0\sim \nu} \left [v(S_0) \right] \\ & \qquad \leq (1-\gamma) \EE_{S_0\sim \nu} \left [V^{\pi^*}(S_0) \right] - \min_{ (\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1} } \min_{v \in \textsf{conf}_{\alpha_\textsf{vf}}^\textsf{vf}(\Delta, \Theta, \pi^*)} (1-\gamma) \EE_{S_0\sim \nu} \left [v(S_0) \right], \# where in the first inequality, we use Lemma \ref{lemma:iv-v-pi-in-conf}; while in the last inequality, we use the optimality of $\hat \pi_\textsf{vf}$. It suffices to characterize the RHS of the above. We continue from \eqref{eq:iv-vf-pp1} as follows, \#\label{eq:iv-vf-pp3} & J(\pi^*) - J(\hat \pi_\textsf{vf}) \\ & \quad \leq \max_{ (\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1} } \max_{v \in \textsf{conf}_{\alpha_\textsf{vf}}^\textsf{vf}(\Delta, \Theta, \pi^*)} \left | (1-\gamma) \EE_{S_0\sim \nu}[v(S_0)] - (1-\gamma) \EE_{S_0\sim \nu}\left[ V^{\pi^*}(S_0) \right] \right |. \# Meanwhile, by Lemmas \ref{lemma:iv-mis-j} and \ref{lemma:iv-est-eq}, we have \#\label{eq:iv-vf-pp2} & (1-\gamma) \EE_{S_0\sim \nu} \left [ V^{\pi^*}(S_0) \right] = J(\pi^*) = \EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} w^{\pi^*}(S_t) \frac{Z_t^\top A_t \pi^*(A_t\given S_t)}{\Delta^*(S_t,A_t) \Theta^*(S_t,Z_t)} R_t \right], \\ & (1-\gamma) \EE_{S_0\sim \nu} \left [ v(S_0) \right] = \EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} w^{\pi^*}(S_t) \frac{Z_t^\top A_t \pi^*(A_t\given S_t)}{\Delta^*(S_t,A_t) \Theta^*(S_t,Z_t)} \left ( v(S_t) - \gamma v(S_{t+1}) \right ) \right]. \# Now, by plugging \eqref{eq:iv-vf-pp2} into the RHS of \eqref{eq:iv-vf-pp3}, we obtain \$ & J(\pi^*) - J(\hat \pi_\textsf{vf}) \leq \max_{ (\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1} } \max_{v \in \textsf{conf}_{\alpha_\textsf{vf}}^\textsf{vf}(\Delta, \Theta, \pi^*)} \left | \Phi_\textsf{vf}^{\pi^*}(v, w^{\pi^*}; \Delta^*, \Theta^*) \right |.
\$ By continuing the above computation, we have \$ & J(\pi^*) - J(\hat \pi_\textsf{vf}) \\ & \quad \leq \max_{ (\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1} } \max_{v \in \textsf{conf}_{\alpha_\textsf{vf}}^\textsf{vf}(\Delta, \Theta, \pi^*)} \max_{g\in \cW} \left | \Phi_\textsf{vf}^{\pi^*}(v, g; \Delta^*, \Theta^*) \right | \\ & \quad = \max_{ (\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1} } \max_{v \in \textsf{conf}_{\alpha_\textsf{vf}}^\textsf{vf}(\Delta, \Theta, \pi^*)} \max_{g\in \cW} \max \left \{ \Phi_\textsf{vf}^{\pi^*}(v, g; \Delta^*, \Theta^*), -\Phi_\textsf{vf}^{\pi^*}(v, g; \Delta^*, \Theta^*) \right \} \\ & \quad = \max_{ (\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1} } \max_{v \in \textsf{conf}_{\alpha_\textsf{vf}}^\textsf{vf}(\Delta, \Theta, \pi^*)} \max_{g\in \cW} \max \left \{ \Phi_\textsf{vf}^{\pi^*}(v, g; \Delta^*, \Theta^*), \Phi_\textsf{vf}^{\pi^*}(v, -g; \Delta^*, \Theta^*) \right \} \\ & \quad = \max_{ (\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1} } \max_{v \in \textsf{conf}_{\alpha_\textsf{vf}}^\textsf{vf}(\Delta, \Theta, \pi^*)} \max_{g\in \cW} \Phi_\textsf{vf}^{\pi^*}(v, g; \Delta^*, \Theta^*) \\ & \quad \leq c\cdot \frac{ C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) L_\Pi \sqrt{\frac{1}{NT \kappa}\cdot \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi} \cdot \log\frac{1}{\delta} \log(NT)}, \$ where in the first inequality, we use Assumption \ref{ass:iv-vf-realizable}; in the third equality, we use the fact that $\cW$ is symmetric; while in the last inequality, we use Lemma \ref{lemma:iv-v-in-conf-good}. This concludes the proof of the theorem. \end{proof} \subsection{Proof of Lemma \ref{lemma:iv-v-pi-in-conf}} \label{prf:lemma:iv-v-pi-in-conf} \begin{proof} By Assumption \ref{ass:iv-vf-realizable}, we know that $V^\pi\in \cV$. Thus, to show that $V^\pi \in \textsf{conf}^\textsf{vf}_{\alpha_\textsf{vf}}(\Delta^*, \Theta^*, \pi)$ with high probability, it suffices to show that \#\label{eq:iv-vpi-in-ci-wtp} \max_{g\in \cW} \hat \Phi_\textsf{vf}^\pi(V^\pi, g; \Delta^*, \Theta^*) - \max_{g\in \cW} \hat \Phi_\textsf{vf}^\pi(\hat v_{\Delta^*, \Theta^*}^\pi, g; \Delta^*, \Theta^*) \leq \alpha_\textsf{vf}. \# In what follows, we show that \eqref{eq:iv-vpi-in-ci-wtp} holds with high probability. For notational simplicity, we write $\Phi_\textsf{vf}^\pi(v, g; *) = \Phi_\textsf{vf}^\pi(v, g; \Delta^*, \Theta^*)$ and $\hat v^\pi_* = \hat v^\pi_{\Delta^*, \Theta^*}$ for any $(\pi,v,g)$.
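The chains below repeatedly pass from a difference of maxima to a maximum of differences; the elementary bound being used is that, for any functions $A$ and $B$ on $\cW$, \$ \Big| \max_{g\in \cW} A(g) - \max_{g\in \cW} B(g) \Big| \leq \max_{g\in \cW} \big| A(g) - B(g) \big|, \$ which follows since $A(g) \leq B(g) + |A(g) - B(g)|$ for every $g$, and symmetrically with the roles of $A$ and $B$ exchanged.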
Note that \#\label{eq:iv-ee1-vf} & \max_{g \in \cW} \hat \Phi_\textsf{vf}^\pi(V^\pi, g; *) - \max_{g \in \cW} \hat \Phi_\textsf{vf}^\pi(\hat v^\pi_*, g; *) \\ & \qquad = \max_{g \in \cW} \hat \Phi_\textsf{vf}^\pi(V^\pi, g; *) - \max_{g \in \cW} \Phi_\textsf{vf}^\pi(V^\pi, g; *) + \max_{g \in \cW} \Phi_\textsf{vf}^\pi(V^\pi, g; *) - \max_{g \in \cW} \Phi_\textsf{vf}^\pi(\hat v^\pi_*, g; *) \\ & \qquad \qquad + \max_{g \in \cW} \Phi_\textsf{vf}^\pi(\hat v^\pi_*, g; *) - \max_{g \in \cW} \hat \Phi_\textsf{vf}^\pi(\hat v^\pi_*, g; *) \\ & \qquad \leq \max_{g \in \cW} \hat \Phi_\textsf{vf}^\pi(V^\pi, g; *) - \max_{g \in \cW} \Phi_\textsf{vf}^\pi(V^\pi, g; *) + \max_{g \in \cW} \Phi_\textsf{vf}^\pi(\hat v^\pi_*, g; *) - \max_{g \in \cW} \hat \Phi_\textsf{vf}^\pi(\hat v^\pi_*, g; *) \\ & \qquad \leq 2 \max_{v\in \cV} \left | \max_{g \in \cW} \hat \Phi_\textsf{vf}^\pi(v, g; *) - \max_{g \in \cW} \Phi_\textsf{vf}^\pi(v, g; *) \right | \\ & \qquad \leq 2 \max_{v\in \cV} \max_{g \in \cW} \left | \hat \Phi_\textsf{vf}^\pi(v, g; *) - \Phi_\textsf{vf}^\pi(v, g; *) \right |, \# where in the first inequality, we use the fact that $\max_{g \in \cW} \Phi_\textsf{vf}^\pi(V^\pi, g; *) = 0$ while $\max_{g \in \cW} \Phi_\textsf{vf}^\pi(v, g; *) \geq 0$ for any $v$. Meanwhile, by Theorem \ref{thm:hoeffding-mixing}, with probability at least $1 - \delta$, it holds for any $(\pi,v,g)\in \Pi \times \cV \times \cW$ that \#\label{eq:iv-ee2-vf} \left | \hat \Phi_\textsf{vf}^\pi(v, g; *) - \Phi_\textsf{vf}^\pi(v, g; *) \right | \leq c\cdot \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \sqrt{\frac{\mathfrak{C}_{\cW,\cV,\Pi}}{NT\kappa} \cdot \log\frac{1}{\delta} \log(NT) }, \# where we use Assumption \ref{ass:upper-bound-delta} and $\|g\|_\infty \leq C_*$ for any $g \in \cW$. Now, combining \eqref{eq:iv-ee1-vf} and \eqref{eq:iv-ee2-vf}, with probability at least $1 - \delta$, we have \$ & \max_{g \in \cW} \hat \Phi_\textsf{vf}^\pi(V^\pi, g; *) - \max_{g \in \cW} \hat \Phi_\textsf{vf}^\pi(\hat v^\pi_*, g; *) \leq c\cdot \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \sqrt{\frac{\mathfrak{C}_{\cW,\cV,\Pi}}{NT\kappa} \cdot \log\frac{1}{\delta} \log(NT) } = \alpha_\textsf{vf}, \$ which implies that $V^\pi \in \textsf{conf}^\textsf{vf}_{\alpha_\textsf{vf}}(\Delta^*, \Theta^*, \pi)$ for any $\pi\in \Pi$. This concludes the proof of the lemma. \end{proof} \subsection{Proof of Lemma \ref{lemma:iv-v-in-conf-good}} \label{prf:lemma:iv-v-in-conf-good} \begin{proof} Since $v \in \cup_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}} \textsf{conf}^\textsf{vf}_{\alpha_\textsf{vf}}(\Delta, \Theta, \pi)$, there exists a pair $(\tilde \Delta, \tilde\Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}$ such that $v \in \textsf{conf}^\textsf{vf}_{\alpha_\textsf{vf}}(\tilde \Delta, \tilde \Theta, \pi)$. For notational simplicity, we define \#\label{eq:iv-dd0-vf} \tilde v \in \argmin_{v\in \cV} \max_{g\in \cW} \hat \Phi^\pi_\textsf{vf}(v, g; \tilde \Delta, \tilde \Theta), \# i.e., $\tilde v = \hat v^\pi_{\tilde \Delta, \tilde \Theta}$, which is defined in \eqref{eq:v-pi-def}. By the definition of $\tilde v$ and $v \in \textsf{conf}^\textsf{vf}_{\alpha_\textsf{vf}}(\tilde \Delta, \tilde \Theta, \pi)$, we know that \#\label{eq:iv-dd1-vf} \max_{g\in \cW} \hat \Phi^\pi_\textsf{vf}(v,g;\tilde \Delta, \tilde \Theta) - \max_{g\in \cW} \hat \Phi^\pi_\textsf{vf}(\tilde v,g;\tilde \Delta, \tilde \Theta) \leq \alpha_\textsf{vf}.
\# Note that \#\label{eq:iv-dd2-vf} & \max_{g\in \cW} \Phi^\pi_\textsf{vf}(v,g;\Delta^*, \Theta^*) \\ & \qquad = \max_{g\in \cW} \Phi^\pi_\textsf{vf}(v,g;\Delta^*, \Theta^*) - \max_{g\in \cW} \hat \Phi^\pi_\textsf{vf}(v,g;\Delta^*, \Theta^*) + \max_{g\in \cW} \hat \Phi^\pi_\textsf{vf}(v,g;\Delta^*, \Theta^*) \\ & \qquad \qquad - \max_{g\in \cW} \hat \Phi^\pi_\textsf{vf}(v,g;\tilde \Delta, \tilde \Theta) + \max_{g\in \cW} \hat \Phi^\pi_\textsf{vf}(v,g;\tilde \Delta, \tilde \Theta) - \max_{g\in \cW} \hat \Phi^\pi_\textsf{vf}(\tilde v,g;\tilde \Delta, \tilde \Theta) \\ & \qquad \qquad + \max_{g\in \cW} \hat \Phi^\pi_\textsf{vf}(\tilde v,g;\tilde \Delta, \tilde \Theta) - \max_{g\in \cW} \Phi^\pi_\textsf{vf}(\tilde v,g;\tilde \Delta, \tilde \Theta) + \max_{g\in \cW} \Phi^\pi_\textsf{vf}(\tilde v,g;\tilde \Delta, \tilde \Theta) \\ & \qquad \leq 2 \underbrace{\max_{(v,g,\Delta,\Theta)\in \cV \times \cW \times \cF_0 \times \cF_1} \left | \Phi^\pi_\textsf{vf}(v,g;\Delta, \Theta) - \hat \Phi^\pi_\textsf{vf}(v,g;\Delta, \Theta) \right|}_{\text{Term (I)}} + \underbrace{\max_{g\in \cW} \Phi^\pi_\textsf{vf}(\tilde v,g;\tilde \Delta, \tilde \Theta)}_{\text{Term (II)}} \\ & \qquad \qquad + \underbrace{\max_{g\in \cW} \left | \hat \Phi^\pi_\textsf{vf}(v,g;\Delta^*, \Theta^*) - \hat \Phi^\pi_\textsf{vf}(v,g;\tilde \Delta, \tilde \Theta) \right|}_{\text{Term (III)}} + \alpha_\textsf{vf}, \# where we use \eqref{eq:iv-dd1-vf} in the last inequality. Now we upper bound terms (I), (II), and (III) on the RHS of \eqref{eq:iv-dd2-vf}. \vskip5pt \noindent\textbf{Upper Bounding Term (I).} By Theorem \ref{thm:hoeffding-mixing}, with probability at least $1 - \delta$, it holds for any $(v,g,\Delta,\Theta, \pi) \in \cV \times \cW \times \cF_0 \times \cF_1 \times \Pi$ that \$ \left | \hat \Phi_\textsf{vf}^\pi(v, g; \Delta, \Theta) - \Phi_\textsf{vf}^\pi(v, g; \Delta, \Theta) \right | \leq c\cdot \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \sqrt{\frac{\mathfrak{C}_{\cF_0,\cF_1,\cW,\cV, \Pi}}{NT \kappa}\log\frac{1}{\delta} \log(NT)}, \$ which implies that with probability at least $1 - \delta$, we have \#\label{eq:iv-dd3-vf} \text{Term (I)} \leq c\cdot \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \sqrt{\frac{\mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi}}{NT \kappa}\log\frac{1}{\delta}\log(NT) }. \# \vskip5pt \noindent\textbf{Upper Bounding Term (II).} We introduce the following lemma to help upper bound term (II). \begin{lemma} \label{lemma:iv-any-v-hat-small-loss} Suppose $\alpha_0$ and $\alpha_1$ are defined in Assumption \ref{ass:iv-sl-res}. With probability at least $1 - \delta$, for any $(\Delta, \Theta, \pi) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1} \times \Pi$, we have \$ \max_{g\in \cW} \Phi_\textsf{vf}^\pi(\hat v^\pi_{\Delta, \Theta}, g; \Delta, \Theta) \leq c\cdot \frac{C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) L_\Pi \sqrt{\frac{\mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi}}{NT \kappa}\log\frac{1}{\delta}\log(NT) }, \$ where $\hat v^\pi_{\Delta, \Theta}$ is defined in \eqref{eq:v-pi-def}, and $\xi_0$ and $\xi_1$ are the constants defined in Assumption \ref{ass:iv-sl-res}. \end{lemma} \begin{proof} See \S\ref{prf:lemma:iv-any-v-hat-small-loss} for a detailed proof.
\end{proof} By the definition of $\tilde v$ in \eqref{eq:iv-dd0-vf} and Lemma \ref{lemma:iv-any-v-hat-small-loss}, with probability at least $1 - \delta$, we have \#\label{eq:iv-dd4-vf} \text{Term (II)} \leq c\cdot \frac{C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) L_\Pi \sqrt{\frac{\mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi}}{NT \kappa}\log\frac{1}{\delta} \log(NT)}. \# \vskip5pt \noindent\textbf{Upper Bounding Term (III).} Note that \#\label{eq:iv-dd5-vf} & \left | \hat \Phi^\pi_\textsf{vf}(v,g;\Delta^*, \Theta^*) - \hat \Phi^\pi_\textsf{vf}(v,g;\tilde \Delta, \tilde \Theta) \right| \\ & \leq \left | \left(\hat \EE - \EE\right) \left[ \frac{1}{T} \sum_{t = 0}^{T-1} g(S_t) \left ( \frac{Z_t^\top A_t\pi(A_t\given S_t) }{\Delta^*(S_t,A_t)\Theta^*(S_t,Z_t)} - \frac{Z_t^\top A_t\pi(A_t\given S_t)}{\tilde \Delta(S_t,A_t) \tilde \Theta(S_t,Z_t)} \right) \left( R_t + \gamma v(S_{t+1}) \right) \right ] \right | \\ & \qquad + \left | \EE \left[ \frac{1}{T} \sum_{t = 0}^{T-1} g(S_t) \left ( \frac{Z_t^\top A_t\pi(A_t\given S_t) }{\Delta^*(S_t,A_t)\Theta^*(S_t,Z_t)} - \frac{Z_t^\top A_t\pi(A_t\given S_t)}{\tilde \Delta(S_t,A_t) \tilde \Theta(S_t,Z_t)} \right) \left( R_t + \gamma v(S_{t+1}) \right) \right ] \right |. \# For the first term on the RHS of \eqref{eq:iv-dd5-vf}, by Theorem \ref{thm:hoeffding-mixing}, with probability at least $1- \delta$, it holds for any $(v,g,\pi)\in \cV\times \cW \times \Pi$ that \#\label{eq:iv-dd5-vf-1} & \left | \left(\hat \EE - \EE\right) \left[ \frac{1}{T} \sum_{t = 0}^{T-1} g(S_t) \left ( \frac{Z_t^\top A_t\pi(A_t\given S_t) }{\Delta^*(S_t,A_t)\Theta^*(S_t,Z_t)} - \frac{Z_t^\top A_t\pi(A_t\given S_t)}{\tilde \Delta(S_t,A_t) \tilde \Theta(S_t,Z_t)} \right) \left( R_t + \gamma v(S_{t+1}) \right) \right ] \right | \\ & \qquad \leq c\cdot \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \sqrt{\frac{\mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi}}{NT \kappa}\log\frac{1}{\delta} \log(NT)}. 
\# For the second term on the RHS of \eqref{eq:iv-dd5-vf}, with probability at least $1- \delta$, it holds that \#\label{eq:iv-dd5-vf-2} & \left | \EE \left[ \frac{1}{T} \sum_{t = 0}^{T-1} g(S_t) \left ( \frac{Z_t^\top A_t\pi(A_t\given S_t) }{\Delta^*(S_t,A_t)\Theta^*(S_t,Z_t)} - \frac{Z_t^\top A_t\pi(A_t\given S_t)}{\tilde \Delta(S_t,A_t) \tilde \Theta(S_t,Z_t)} \right) \left( R_t + \gamma v(S_{t+1}) \right) \right ] \right | \\ & \qquad \leq \frac{C_*}{1-\gamma} \EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} \left | \frac{1}{\tilde \Delta(S_t,A_t) \tilde \Theta(S_t,Z_t)} - \frac{1}{\Delta^*(S_t,A_t) \Theta^*(S_t,Z_t)} \right| \right ] \\ & \qquad = \frac{C_*}{1-\gamma} \EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} \left | \frac{\Theta^*(S_t,Z_t) - \tilde \Theta(S_t,Z_t)}{\tilde \Delta(S_t,A_t)\tilde \Theta(S_t,Z_t)\Theta^*(S_t,Z_t)} - \frac{\Delta^*(S_t,A_t) - \tilde \Delta(S_t,A_t)}{\Delta^*(S_t,A_t)\tilde \Delta(S_t,A_t) \Theta^*(S_t,Z_t)} \right| \right ] \\ & \qquad \leq \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \Bigg( C_{\Theta^*} \EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} \left \| \Delta^*(S_t,\cdot) - \tilde \Delta(S_t,\cdot) \right \|_1 \right ] \\ & \qquad \qquad \qquad \qquad \qquad + C_{\Delta^*} \EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} \left \| \Theta^*(S_t,\cdot ) - \tilde \Theta(S_t,\cdot )\right\|_1 \right ] \Bigg) \\ & \qquad \leq \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \Bigg( C_{\Theta^*} \sqrt{\EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} \left \| \Delta^*(S_t,\cdot) - \tilde \Delta(S_t,\cdot) \right \|_1^2 \right ]} \\ & \qquad \qquad \qquad \qquad \qquad + C_{\Delta^*} \sqrt{\EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} \left \| \Theta^*(S_t,\cdot ) - \tilde \Theta(S_t,\cdot )\right\|_1^2 \right ]} \Bigg) \\ & \qquad \leq \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \Bigg(\xi_0 C_{\Theta^*} \sqrt{\frac{C_{\Delta^*}}{NT \kappa} \mathfrak{C}_{\cF_0}\log\frac{1}{\delta} \log(NT)} + \xi_1 C_{\Delta^*} \sqrt{\frac{C_{\Theta^*}}{NT \kappa} \mathfrak{C}_{\cF_1} \log\frac{1}{\delta} \log(NT)} \Bigg), \# where in the first inequality, we use the fact that $\|v\|_\infty \leq 1/ (1 - \gamma)$ and $\|g\|_\infty \leq C_*$; in the third inequality, we use the Cauchy-Schwarz inequality; while in the last inequality, we use Assumption \ref{ass:iv-sl-res} with the fact that $(\tilde \Delta, \tilde\Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}$. Now, by plugging \eqref{eq:iv-dd5-vf-1} and \eqref{eq:iv-dd5-vf-2} into \eqref{eq:iv-dd5-vf}, with probability at least $1 - \delta$, it holds for any $v \in \cup_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}} \textsf{conf}^\textsf{vf}_{\alpha_\textsf{vf}}(\Delta, \Theta, \pi)$, $g\in \cW$, and $\pi \in \Pi$ that \#\label{eq:879434} & \left | \hat \Phi^\pi_\textsf{vf}(v,g;\Delta^*, \Theta^*) - \hat \Phi^\pi_\textsf{vf}(v,g;\tilde \Delta, \tilde \Theta) \right| \\ & \qquad \leq c\cdot \frac{C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) \sqrt{\frac{1}{NT\kappa}\cdot \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi} \cdot \log\frac{1}{\delta} \log(NT)}.
\# \vskip5pt Now, by plugging \eqref{eq:iv-dd3-vf}, \eqref{eq:iv-dd4-vf}, and \eqref{eq:879434} into \eqref{eq:iv-dd2-vf}, with probability at least $1 - \delta$, it holds for any $v \in \cup_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}} \textsf{conf}^\textsf{vf}_{\alpha_\textsf{vf}}(\Delta, \Theta, \pi)$ and $\pi \in \Pi$ that \$ & \max_{g\in \cW} \Phi^\pi_\textsf{vf}(v,g;\Delta^*, \Theta^*) \leq c\cdot \frac{C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) L_\Pi \sqrt{\frac{1}{NT\kappa}\cdot \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi} \cdot \log\frac{1}{\delta} \log(NT)}, \$ which concludes the proof of the lemma. \end{proof} \subsection{Proof of Lemma \ref{lemma:iv-any-v-hat-small-loss}} \label{prf:lemma:iv-any-v-hat-small-loss} \begin{proof} Note that \#\label{eq:iv-ee3-vf} & \max_{g\in \cW} \Phi_\textsf{vf}^\pi(\hat v^\pi_{\Delta, \Theta}, g; \Delta, \Theta) \\ & \qquad = \max_{g\in \cW} \Phi_\textsf{vf}^\pi(\hat v^\pi_{\Delta, \Theta}, g; \Delta, \Theta) - \max_{g\in \cW} \hat \Phi_\textsf{vf}^\pi(\hat v^\pi_{\Delta, \Theta}, g; \Delta, \Theta) + \max_{g\in \cW} \hat \Phi_\textsf{vf}^\pi(\hat v^\pi_{\Delta, \Theta}, g; \Delta, \Theta) \\ & \qquad \qquad - \max_{g\in \cW} \hat \Phi_\textsf{vf}^\pi( V^\pi, g; \Delta, \Theta) + \max_{g\in \cW} \hat \Phi_\textsf{vf}^\pi( V^\pi, g; \Delta, \Theta) - \max_{g\in \cW} \Phi_\textsf{vf}^\pi(V^\pi, g; \Delta, \Theta) \\ & \qquad \qquad + \max_{g\in \cW} \Phi_\textsf{vf}^\pi(V^\pi, g; \Delta, \Theta) - \max_{g\in \cW} \Phi_\textsf{vf}^\pi(V^\pi, g; \Delta^*, \Theta^*) \\ & \qquad \leq 2 \max_{v\in \cV} \max_{g\in \cW} \left | \Phi_\textsf{vf}^\pi(v, g; \Delta, \Theta) - \hat \Phi_\textsf{vf}^\pi(v, g; \Delta, \Theta) \right | \\ & \qquad \qquad + \max_{g\in \cW} \left | \Phi_\textsf{vf}^\pi(V^\pi, g; \Delta, \Theta) - \Phi_\textsf{vf}^\pi(V^\pi, g; \Delta^*, \Theta^*) \right |, \# where we use the fact that $\hat v^\pi_{\Delta, \Theta} \in \argmin_{v\in \cV} \max_{g\in \cW} \hat \Phi^\pi_\textsf{vf}(v, g; \Delta, \Theta)$ in the last inequality. Meanwhile, by Theorem \ref{thm:hoeffding-mixing}, with probability at least $1 - \delta$, it holds for any $(v,g,\pi)\in \cV\times \cW \times \Pi$ that \#\label{eq:iv-ee4-vf} \left | \hat \Phi_\textsf{vf}^\pi(v, g; \Delta, \Theta) - \Phi_\textsf{vf}^\pi(v, g; \Delta, \Theta) \right | \leq c\cdot \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \sqrt{\frac{\mathfrak{C}_{\cV,\cW,\Pi}}{NT \kappa}\log\frac{1}{\delta} \log(NT)}. \# Also, we upper bound the second term on the RHS of \eqref{eq:iv-ee3-vf} with probability at least $1- \delta$ by a similar argument as in \eqref{eq:iv-dd5-vf-2}, \#\label{eq:iv-ee5-vf} & \left | \Phi_\textsf{vf}^\pi(V^\pi, g; \Delta, \Theta) - \Phi_\textsf{vf}^\pi(V^\pi, g; \Delta^*, \Theta^*) \right | \\ & \leq \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \left(\xi_0 C_{\Theta^*} \sqrt{\frac{C_{\Delta^*}}{NT \kappa} \mathfrak{C}_{\cF_0}\log\frac{1}{\delta} \log(NT)} + \xi_1 C_{\Delta^*} \sqrt{\frac{C_{\Theta^*}}{NT \kappa} \mathfrak{C}_{\cF_1} \log\frac{1}{\delta} \log(NT)} \right), \# where we argue exactly as in \eqref{eq:iv-dd5-vf-2}, using the facts that $\|V^\pi\|_\infty \leq 1/ (1 - \gamma)$ and $\|g\|_\infty \leq C_*$, the Cauchy-Schwarz inequality, and Assumption \ref{ass:iv-sl-res} with $(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}$.
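For the reader's convenience, we record the elementary decomposition behind \eqref{eq:iv-dd5-vf-2} and the display above: suppressing the arguments $(S_t, A_t)$ and $(S_t, Z_t)$, adding and subtracting $1 / (\tilde \Delta \Theta^*)$ gives \$ \frac{1}{\tilde \Delta \tilde \Theta} - \frac{1}{\Delta^* \Theta^*} = \frac{\Theta^* - \tilde \Theta}{\tilde \Delta \tilde \Theta \Theta^*} + \frac{\Delta^* - \tilde \Delta}{\Delta^* \tilde \Delta \Theta^*}. \$ Taking absolute values, applying the triangle inequality, and bounding the reciprocals by $C_{\Delta^*}$ and $C_{\Theta^*}$, as in the second inequality of \eqref{eq:iv-dd5-vf-2}, reduces the estimate to the $\ell_1$-errors of $\tilde \Delta$ and $\tilde \Theta$, which Assumption \ref{ass:iv-sl-res} controls.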
\# Now, by plugging \eqref{eq:iv-ee4-vf} and \eqref{eq:iv-ee5-vf} into \eqref{eq:iv-ee3-vf}, with probability at least $1 - \delta$, it holds for any $(\Delta, \Theta, \pi) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1} \times \Pi$ that \$ \max_{g\in \cW} \Phi_\textsf{vf}^\pi(\hat v^\pi_{\Delta, \Theta}, g; \Delta, \Theta) \leq c\cdot \frac{C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) L_\Pi \sqrt{\frac{\mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi}}{NT \kappa}\log\frac{1}{\delta} \log(NT)}, \$ which concludes the proof of the lemma. \end{proof} \section{Proofs of Results in \S\ref{sec:mis-theory}} \subsection{Proof of Theorem \ref{thm:iv-mis}} \label{prf:thm:iv-mis} \begin{proof} Before proving the theorem, we first introduce some supporting results. We define the population counterpart of $\hat L_\textsf{mis}(w, \pi; \Delta, \Theta)$ as \$ L_\textsf{mis}(w, \pi; \Delta, \Theta) = \EE \left[ \frac{1}{T} \sum_{t = 0}^{T-1} \frac{Z_t^\top A_t \pi(A_t\given S_t)}{\Delta(S_t,A_t) \Theta(S_t,Z_t)} w(S_t) R_t \right] \$ for any $(w, \pi, \Delta, \Theta)$. \begin{lemma} \label{lemma:iv-link-phi-l} It holds for any $(\pi, w)\in \Pi \times \cW$ that \$ L_\textsf{mis}(w^\pi, \pi; \Delta^*, \Theta^*) - L_\textsf{mis}(w, \pi; \Delta^*, \Theta^*) = \Phi_\textsf{mis}^\pi(w, V^\pi; \Delta^*, \Theta^*), \$ where $V^\pi$ is the state-value function defined in \eqref{eq:iv-val-func}. \end{lemma} \begin{proof} See \S\ref{prf:lemma:iv-link-phi-l} for a detailed proof. \end{proof} \begin{lemma} \label{lemma:iv-mis-min-delta-close} Suppose that $(\alpha_0, \alpha_1, \alpha_\textsf{mis})$ is defined in Assumption \ref{ass:iv-sl-res} and Lemma \ref{lemma:iv-w-pi-in-conf}. With probability at least $1 - \delta$, it holds for any $(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}$ that \$ & \left | \min_{w\in \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta^*, \Theta^*, \pi^*)} L_\textsf{mis}(w, \pi^*; \Delta^*, \Theta^*) - \min_{w\in \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta, \Theta, \pi^*)} L_\textsf{mis}(w, \pi^*; \Delta, \Theta) \right | \\ & \qquad \leq c\cdot \frac{ C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) \sqrt{\frac{1}{NT \kappa} \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV} \log\frac{1}{\delta} \log(NT)} = \varepsilon^*_L. \$ \end{lemma} \begin{proof} See \S\ref{prf:lemma:iv-mis-min-delta-close} for a detailed proof. \end{proof} \begin{lemma} \label{lemma:iv-mis-hatl-close} With probability at least $1 - \delta$, it holds for any $(w, \Delta, \Theta, \pi)\in \cW \times \cF_0 \times \cF_1 \times \Pi$ that \$ & \left | L_\textsf{mis}(w, \pi; \Delta, \Theta) - \hat L_\textsf{mis}(w, \pi; \Delta, \Theta) \right | \\ & \qquad \leq c\cdot C_{\Delta^*} C_{\Theta^*} C_* \sqrt{\frac{1}{NT \kappa} \mathfrak{C}_{\cF_0,\cF_1,\cW,\Pi} \log\frac{1}{\delta} \log(NT)} = \hat \varepsilon_L. \$ \end{lemma} \begin{proof} See \S\ref{prf:lemma:iv-mis-hatl-close} for a detailed proof. \end{proof} Now we start the proof of the theorem.
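As a purely illustrative aside, the empirical loss $\hat L_\textsf{mis}$ is the sample analogue of the display above, with the expectation $\EE$ replaced by the empirical average over the $N$ observed trajectories. A minimal NumPy sketch is given below; the array layout and the vectorised callables \texttt{pi}, \texttt{w}, \texttt{Delta}, and \texttt{Theta} are our own assumptions for the illustration and are not notation from the main text.
\begin{verbatim}
import numpy as np

def hat_L_mis(S, A, Z, R, pi, w, Delta, Theta):
    # S: states, shape (N, T, ...); A, Z: actions and instruments,
    # shape (N, T, d); R: rewards, shape (N, T).
    # pi, w, Delta, Theta: vectorised callables for pi(a|s), w(s),
    # Delta(s, a), Theta(s, z), each returning an (N, T) array.
    ZA = np.sum(Z * A, axis=-1)          # Z_t^T A_t per (trajectory, step)
    weight = ZA * pi(A, S) / (Delta(S, A) * Theta(S, Z))
    per_traj = np.mean(weight * w(S) * R, axis=1)   # (1/T) sum_t inside E[.]
    return np.mean(per_traj)             # empirical average over N trajectories
\end{verbatim}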
By the definition of $J(\pi)$, it holds with probability at least $1 - \delta$ that \#\label{eq:thm-mis-f11} J(\pi^*) - J(\hat \pi_\textsf{mis}) & = L_\textsf{mis}(w^{\pi^*}, \pi^*; \Delta^*, \Theta^*) - L_\textsf{mis}(w^{\hat \pi_\textsf{mis}}, \hat \pi_\textsf{mis}; \Delta^*, \Theta^*) \\ & \leq L_\textsf{mis}(w^{\pi^*}, \pi^*; \Delta^*, \Theta^*) - \min_{w\in \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta^*, \Theta^*, \hat \pi_\textsf{mis})} L_\textsf{mis}(w, \hat \pi_\textsf{mis}; \Delta^*, \Theta^*) \\ & \leq L_\textsf{mis}(w^{\pi^*}, \pi^*; \Delta^*, \Theta^*) \\ & \qquad - \min_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1}} \min_{w\in \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta, \Theta, \hat \pi_\textsf{mis})} L_\textsf{mis}(w, \hat \pi_\textsf{mis}; \Delta, \Theta) \\ & \leq L_\textsf{mis}(w^{\pi^*}, \pi^*; \Delta^*, \Theta^*) \\ & \qquad - \min_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1}} \min_{w\in \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta, \Theta, \hat \pi_\textsf{mis})} \hat L_\textsf{mis}(w, \hat \pi_\textsf{mis}; \Delta, \Theta) + \hat \varepsilon_L \\ & \leq L_\textsf{mis}(w^{\pi^*}, \pi^*; \Delta^*, \Theta^*) \\ & \qquad - \min_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1}} \min_{w\in \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta, \Theta, \pi^*)} \hat L_\textsf{mis}(w, \pi^*; \Delta, \Theta) + \hat \varepsilon_L, \# where we use Lemma \ref{lemma:iv-w-pi-in-conf} in the first inequality; we use Assumption \ref{ass:iv-sl-res} that $(\Delta^*, \Theta^*) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1}$ with probability at least $1 - \delta$ in the second inequality; we use Lemma \ref{lemma:iv-mis-hatl-close} in the third inequality; while we use the optimality of $\hat \pi_\textsf{mis}$ in the last inequality. 
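In the same illustrative spirit as the sketch above, the optimality property invoked in the last inequality is that $\hat \pi_\textsf{mis}$ maximises the pessimistic objective $\min_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1}} \min_{w\in \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta, \Theta, \pi)} \hat L_\textsf{mis}(w, \pi; \Delta, \Theta)$ over $\pi \in \Pi$. Over finite candidate classes this max-min can be written directly; the following schematic sketch reuses the hypothetical \texttt{hat\_L\_mis} above and, for brevity, treats the $w$-confidence set as a fixed finite collection rather than as a set depending on $(\Delta, \Theta, \pi)$.
\begin{verbatim}
def pessimistic_policy(policies, conf0, conf1, conf_w, data):
    # policies, conf0, conf1, conf_w: finite stand-ins for Pi,
    # conf^0_{alpha_0}, conf^1_{alpha_1}, and the w-confidence set.
    # data: the tuple (S, A, Z, R) of observed trajectories.
    def pessimistic_value(pi):
        return min(hat_L_mis(*data, pi=pi, w=w, Delta=D, Theta=Th)
                   for D in conf0 for Th in conf1 for w in conf_w)
    return max(policies, key=pessimistic_value)
\end{verbatim}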
Now, by applying Lemmas \ref{lemma:iv-mis-min-delta-close} and \ref{lemma:iv-mis-hatl-close}, we obtain from \eqref{eq:thm-mis-f11} that \#\label{eq:thm-mis-f12} J(\pi^*) - J(\hat \pi_\textsf{mis}) & \leq L_\textsf{mis}(w^{\pi^*}, \pi^*; \Delta^*, \Theta^*) - \min_{w\in \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta^*, \Theta^*, \pi^*)} L_\textsf{mis}(w, \pi^*; \Delta^*, \Theta^*) + \varepsilon_L^* + 2\hat \varepsilon_L \\ & \leq \max_{w\in \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta^*, \Theta^*, \pi^*)} \left |\Phi_\textsf{mis}^{\pi^*}(w, V^{\pi^*}; \Delta^*, \Theta^*) \right| + \varepsilon_L^* + 2\hat \varepsilon_L \\ & \leq \max_{w\in \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta^*, \Theta^*, \pi^*)} \max_{f\in \cV} \left |\Phi_\textsf{mis}^{\pi^*}(w, f; \Delta^*, \Theta^*) \right| + \varepsilon_L^* + 2\hat \varepsilon_L \\ & \leq \max_{w\in \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta^*, \Theta^*, \pi^*)} \max_{f\in \cV} \max \left \{ \Phi_\textsf{mis}^{\pi^*}(w, f; \Delta^*, \Theta^*), -\Phi_\textsf{mis}^{\pi^*}(w, f; \Delta^*, \Theta^*) \right \} \\ & \qquad + \varepsilon_L^* + 2\hat \varepsilon_L \\ & \leq \max_{w\in \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta^*, \Theta^*, \pi^*)} \max_{f\in \cV} \max \left \{ \Phi_\textsf{mis}^{\pi^*}(w, f; \Delta^*, \Theta^*), \Phi_\textsf{mis}^{\pi^*}(w, -f; \Delta^*, \Theta^*) \right \} \\ & \qquad + \varepsilon_L^* + 2\hat \varepsilon_L \\ & \leq \max_{w\in \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta^*, \Theta^*, \pi^*)} \max_{f\in \cV} \Phi_\textsf{mis}^{\pi^*}(w, f; \Delta^*, \Theta^*) + \varepsilon_L^* + 2\hat \varepsilon_L, \# where in the second inequality, we use Lemma \ref{lemma:iv-link-phi-l}; in the third inequality, we use Assumption \ref{ass:iv-mis-realizable}; while in the last inequality, we use the fact that $\cV$ is symmetric. Now, by Lemma \ref{lemma:iv-w-in-conf-good} and plugging the definition of $\hat \varepsilon_L$ and $\varepsilon^*_L$ into \eqref{eq:thm-mis-f12}, it holds with probability at least $1 - \delta$ that \$ J(\pi^*) - J(\hat \pi_\textsf{mis}) \leq c\cdot \frac{ C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) \sqrt{\frac{1}{NT \kappa}\cdot \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi} \cdot \log\frac{NT }{\delta} }, \$ which concludes the proof of the theorem. \end{proof} \subsection{Proof of Lemma \ref{lemma:iv-w-pi-in-conf}}\label{prf:lemma:iv-w-pi-in-conf} \begin{proof} First, by Assumption \ref{ass:iv-mis-realizable}, we know that $w^\pi \in \cW$. For notational simplicity, we write $\Phi_\textsf{mis}^\pi(w, f; *) = \Phi_\textsf{mis}^\pi(w, f; \Delta^*, \Theta^*)$ and $\hat w^\pi_* = \hat w^\pi_{\Delta^*, \Theta^*}$ for any $(\pi,w,f)$.
Note that \#\label{eq:iv-ee1} & \max_{f\in \cV} \hat \Phi_\textsf{mis}^\pi(w^\pi, f; *) - \max_{f\in \cV} \hat \Phi_\textsf{mis}^\pi(\hat w^\pi_*, f; *) \\ & \qquad = \max_{f\in \cV} \hat \Phi_\textsf{mis}^\pi(w^\pi, f; *) - \max_{f\in \cV} \Phi_\textsf{mis}^\pi(w^\pi, f; *) + \max_{f\in \cV} \Phi_\textsf{mis}^\pi(w^\pi, f; *) - \max_{f\in \cV} \Phi_\textsf{mis}^\pi(\hat w^\pi_*, f; *) \\ & \qquad \qquad + \max_{f\in \cV} \Phi_\textsf{mis}^\pi(\hat w^\pi_*, f; *) - \max_{f\in \cV} \hat \Phi_\textsf{mis}^\pi(\hat w^\pi_*, f; *) \\ & \qquad \leq \max_{f\in \cV} \hat \Phi_\textsf{mis}^\pi(w^\pi, f; *) - \max_{f\in \cV} \Phi_\textsf{mis}^\pi(w^\pi, f; *) + \max_{f\in \cV} \Phi_\textsf{mis}^\pi(\hat w^\pi_*, f; *) - \max_{f\in \cV} \hat \Phi_\textsf{mis}^\pi(\hat w^\pi_*, f; *) \\ & \qquad \leq 2 \max_{w\in \cW} \left | \max_{f\in \cV} \hat \Phi_\textsf{mis}^\pi(w, f; *) - \max_{f\in \cV} \Phi_\textsf{mis}^\pi(w, f; *) \right | \\ & \qquad \leq 2 \max_{w\in \cW} \max_{f\in \cV} \left | \hat \Phi_\textsf{mis}^\pi(w, f; *) - \Phi_\textsf{mis}^\pi(w, f; *) \right |, \# where in the first inequality, we use the fact that $w^\pi = \argmin_{w\in \cW} \max_{f\in \cV} \Phi_\textsf{mis}^\pi(w, f; *)$; while in the second inequality, we use $w^\pi \in \cW$ by Assumption \ref{ass:iv-mis-realizable}. Meanwhile, by Theorem \ref{thm:hoeffding-mixing}, with probability at least $1 - \delta$, it holds for any $(w,f,\pi)\in \cW\times \cV \times \Pi$ that \#\label{eq:iv-ee2} \left | \hat \Phi_\textsf{mis}^\pi(w, f; *) - \Phi_\textsf{mis}^\pi(w, f; *) \right | \leq c\cdot \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \sqrt{\frac{1}{NT \kappa} \mathfrak{C}_{\cV,\cW,\Pi}\log\frac{1}{\delta} \log(NT)}, \# where we use Assumption \ref{ass:upper-bound-delta}. Now, combining \eqref{eq:iv-ee1} and \eqref{eq:iv-ee2}, with probability at least $1 - \delta$, we have \$ & \max_{f\in \cV} \hat \Phi_\textsf{mis}^\pi(w^\pi, f; *) - \max_{f\in \cV} \hat \Phi_\textsf{mis}^\pi(\hat w^\pi_*, f; *) \\ & \qquad \leq c\cdot \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \sqrt{\frac{1}{NT \kappa} \mathfrak{C}_{\cV,\cW,\Pi}\log\frac{1}{\delta} \log(NT)} = \alpha_\textsf{mis}, \$ which implies that $w^\pi \in \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta^*, \Theta^*, \pi)$. This concludes the proof of the lemma. \end{proof} \subsection{Proof of Lemma \ref{lemma:iv-w-in-conf-good}} \label{prf:lemma:iv-w-in-conf-good} \begin{proof} Since $w \in \cup_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}} \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta, \Theta, \pi)$, there exists a pair $(\tilde \Delta, \tilde\Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}$ such that $w \in \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\tilde \Delta, \tilde \Theta, \pi)$. For simplicity of notation, we define \#\label{eq:iv-dd0} \tilde w \in \argmin_{w\in \cW} \max_{f\in \cV} \hat \Phi^\pi_\textsf{mis}(w, f;\tilde \Delta, \tilde \Theta), \# i.e., $\tilde w = \hat w^\pi_{\tilde \Delta, \tilde \Theta}$, which is defined in \eqref{eq:w-pi-def}.
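Schematically, and again over finite stand-ins for the function classes, the minimax computation in \eqref{eq:iv-dd0} is a direct $\argmin$ of a worst-case test value; in the sketch below, \texttt{hat\_Phi} denotes a hypothetical implementation of $\hat \Phi^\pi_\textsf{mis}(\cdot, \cdot; \tilde \Delta, \tilde \Theta)$, which we do not spell out here.
\begin{verbatim}
def minimax_w(W, V, hat_Phi):
    # w_tilde = argmin over W of the worst-case test value over V.
    return min(W, key=lambda w: max(hat_Phi(w, f) for f in V))
\end{verbatim}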
By the definition of $\tilde w$ and the definition of the confidence set $\textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\tilde \Delta, \tilde \Theta, \pi)$, with probability at least $1 - \delta$, it holds for any $\pi\in \Pi$ and $w \in \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\tilde \Delta, \tilde \Theta, \pi)$ that \#\label{eq:iv-dd1} \max_{f\in \cV} \hat \Phi^\pi_\textsf{mis}(w,f;\tilde \Delta, \tilde \Theta) - \max_{f\in \cV} \hat \Phi^\pi_\textsf{mis}(\tilde w,f;\tilde \Delta, \tilde \Theta) \leq \alpha_\textsf{mis}. \# Further, we observe that \#\label{eq:iv-dd2} & \max_{f\in \cV} \Phi^\pi_\textsf{mis}(w,f;\Delta^*, \Theta^*) \\ & \qquad \leq \underbrace{\max_{(w,f,\Delta,\Theta)\in (\cW, \cV, \cF_0, \cF_1)} \left | \Phi^\pi_\textsf{mis}(w,f;\Delta, \Theta) - \hat \Phi^\pi_\textsf{mis}(w,f;\Delta, \Theta) \right|}_{\text{Term (I)}} + \underbrace{\max_{f\in \cV} \Phi^\pi_\textsf{mis}(\tilde w,f;\tilde \Delta, \tilde \Theta)}_{\text{Term (II)}} \\ & \qquad \qquad + \underbrace{\max_{f\in \cV} \left | \hat \Phi^\pi_\textsf{mis}(w,f;\Delta^*, \Theta^*) - \hat \Phi^\pi_\textsf{mis}(w,f;\tilde \Delta, \tilde \Theta) \right|}_{\text{Term (III)}} + \alpha_\textsf{mis}, \# where we use \eqref{eq:iv-dd1} in the last inequality. Now we upper bound terms (I), (II), and (III) on the RHS of \eqref{eq:iv-dd2}. \vskip5pt \noindent\textbf{Upper Bounding Term (I).} By Theorem \ref{thm:hoeffding-mixing}, with probability at least $1 - \delta$, it holds for any $(w,f,\Delta,\Theta,\pi) \in (\cW, \cV, \cF_0, \cF_1, \Pi)$ that \$ \left | \hat \Phi_\textsf{mis}^\pi(w, f; \Delta, \Theta) - \Phi_\textsf{mis}^\pi(w, f; \Delta, \Theta) \right | \leq c\cdot \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \sqrt{\frac{1}{NT \kappa}\mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi}\log\frac{1}{\delta} \log(NT)}, \$ which implies that with probability at least $1 - \delta$, we have \#\label{eq:iv-dd3} \text{Term (I)} \leq c\cdot \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \sqrt{\frac{1}{NT \kappa}\mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi}\log\frac{1}{\delta} \log(NT)}. \# \vskip5pt \noindent\textbf{Upper Bounding Term (II).} We introduce the following lemma to help upper bound term (II). \begin{lemma} \label{lemma:iv-any-w-hat-small-loss} Suppose $(\alpha_0,\alpha_1)$ is defined in Assumption \ref{ass:iv-sl-res}. With probability at least $1 - \delta$, for any $(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}$ and $\pi\in \Pi$, we have \$ \max_{f\in \cV} \Phi_\textsf{mis}^\pi(\hat w^\pi_{\Delta, \Theta}, f; \Delta, \Theta) \leq c\cdot \frac{C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1 ) \sqrt{\frac{1}{NT \kappa}\mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi} \log\frac{1}{\delta} \log(NT)}, \$ where $\hat w^\pi_{\Delta, \Theta}$ is defined in \eqref{eq:w-pi-def}, and $\xi_0$ and $\xi_1$ are the constants defined in Assumption \ref{ass:iv-sl-res}. \end{lemma} \begin{proof} See \S\ref{prf:lemma:iv-any-w-hat-small-loss} for a detailed proof. \end{proof} By the definition of $\tilde w$ in \eqref{eq:iv-dd0} and Lemma \ref{lemma:iv-any-w-hat-small-loss}, with probability at least $1 - \delta$, we have \#\label{eq:iv-dd4} \text{Term (II)} \leq c\cdot \frac{C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1 ) \sqrt{\frac{1}{NT \kappa}\mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi} \log\frac{1}{\delta} \log(NT)}.
\# \vskip5pt \noindent\textbf{Upper Bounding Term (III).} Note that \#\label{eq:iv-dd5} & \left | \hat \Phi^\pi_\textsf{mis}(w,f;\Delta^*, \Theta^*) - \hat \Phi^\pi_\textsf{mis}(w,f;\tilde \Delta, \tilde \Theta) \right| \\ & \leq \left | \left(\hat \EE - \EE\right) \left[ \frac{1}{T} \sum_{t = 0}^{T-1} \left ( \frac{Z_t^\top A_t\pi(A_t\given S_t) w(S_t) }{\Delta^*(S_t,A_t)\Theta^*(S_t,Z_t)} - \frac{Z_t^\top A_t\pi(A_t\given S_t) w(S_t)}{\tilde \Delta(S_t,A_t) \tilde \Theta(S_t,Z_t)} \right) \left(f(S_t) - \gamma f(S_{t+1}) \right) \right ] \right | \\ & \qquad + \left | \EE \left[ \frac{1}{T} \sum_{t = 0}^{T-1} \left ( \frac{Z_t^\top A_t\pi(A_t\given S_t) w(S_t) }{\Delta^*(S_t,A_t)\Theta^*(S_t,Z_t)} - \frac{Z_t^\top A_t\pi(A_t\given S_t) w(S_t)}{\tilde \Delta(S_t,A_t) \tilde \Theta(S_t,Z_t)} \right) \left(f(S_t) - \gamma f(S_{t+1}) \right) \right ] \right |. \# For the first term on the RHS of \eqref{eq:iv-dd5}, by Theorem \ref{thm:hoeffding-mixing}, with probability at least $1- \delta$, it holds for any $(w,f,\pi)\in \cW\times \cV \times \Pi$ that \#\label{eq:iv-dd5-1} & \left | \left(\hat \EE - \EE\right) \left[ \frac{1}{T} \sum_{t = 0}^{T-1} \left ( \frac{Z_t^\top A_t\pi(A_t\given S_t) w(S_t) }{\Delta^*(S_t,A_t)\Theta^*(S_t,Z_t)} - \frac{Z_t^\top A_t\pi(A_t\given S_t) w(S_t)}{\tilde \Delta(S_t,A_t) \tilde \Theta(S_t,Z_t)} \right) \left(f(S_t) - \gamma f(S_{t+1}) \right) \right ] \right | \\ & \qquad \leq c\cdot \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \sqrt{\frac{\mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi}}{NT \kappa}\log\frac{1}{\delta} \log(NT)}. \# For the second term on the RHS of \eqref{eq:iv-dd5}, by a similar argument as in \eqref{eq:iv-dd5-vf-2}, it holds with probability at least $1 - \delta$ that \#\label{eq:iv-dd5-2} & \left | \EE \left[ \frac{1}{T} \sum_{t = 0}^{T-1} \left ( \frac{Z_t^\top A_t\pi(A_t\given S_t) w(S_t)}{\Delta^*(S_t,A_t)\Theta^*(S_t,Z_t)} - \frac{Z_t^\top A_t\pi(A_t\given S_t)w(S_t)}{\tilde \Delta(S_t,A_t) \tilde \Theta(S_t,Z_t)} \right) \left(f(S_t) - \gamma f(S_{t+1}) \right) \right ] \right | \\ & \leq \frac{2 C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \left(\xi_0 C_{\Theta^*} \sqrt{\frac{C_{\Delta^*}}{NT \kappa} \mathfrak{C}_{\cF_0}\log\frac{1}{\delta} \log(NT)} + \xi_1 C_{\Delta^*} \sqrt{\frac{C_{\Theta^*}}{NT \kappa} \mathfrak{C}_{\cF_1} \log\frac{1}{\delta} \log(NT)} \right), \# where we argue as in \eqref{eq:iv-dd5-vf-2}, using the facts that $\|f\|_\infty \leq 1/ (1 - \gamma)$ and $\|w\|_\infty \leq C_*$, the Cauchy-Schwarz inequality, and Assumption \ref{ass:iv-sl-res} with the fact that $(\tilde \Delta, \tilde\Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}$. Now, by plugging \eqref{eq:iv-dd5-1} and \eqref{eq:iv-dd5-2} into \eqref{eq:iv-dd5}, with probability at least $1 - \delta$, it holds for any $w \in \cup_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}} \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta, \Theta, \pi)$ and $(f,\pi)\in \cV\times \Pi$ that \#\label{eq:732847} & \left | \hat \Phi^\pi_\textsf{mis}(w,f;\Delta^*, \Theta^*) - \hat \Phi^\pi_\textsf{mis}(w,f;\tilde \Delta, \tilde \Theta) \right| \\ & \qquad \leq c \cdot \frac{ C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) \sqrt{\frac{1}{NT \kappa}\cdot \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi} \cdot \log\frac{1}{\delta} \log(NT) }.
\# \vskip5pt Now, by plugging \eqref{eq:iv-dd3}, \eqref{eq:iv-dd4}, and \eqref{eq:732847} into \eqref{eq:iv-dd2}, with probability at least $1 - \delta$, it holds for any $\pi \in \Pi$ and $w \in \cup_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}} \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta, \Theta, \pi)$ that \$ & \max_{f\in \cV} \Phi^\pi_\textsf{mis}(w,f;\Delta^*, \Theta^*) \leq c\cdot \frac{ C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) \sqrt{\frac{1}{NT \kappa}\cdot \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi} \cdot \log\frac{1}{\delta} \log(NT) }, \$ which concludes the proof of the lemma. \end{proof} \subsection{Proof of Lemma \ref{lemma:iv-link-phi-l}}\label{prf:lemma:iv-link-phi-l} \begin{proof} Since $\Phi^\pi_\textsf{mis}(w^\pi, V^\pi; \Delta^*, \Theta^*) = 0$, we have \$ & \Phi_\textsf{mis}^\pi(w, V^\pi; \Delta^*, \Theta^*) \\ & \qquad = \Phi_\textsf{mis}^\pi(w, V^\pi; \Delta^*, \Theta^*) - \Phi_\textsf{mis}^\pi(w^\pi, V^\pi; \Delta^*, \Theta^*) \\ & \qquad = \EE \left[ \frac{1}{T} \sum_{t = 0}^{T-1} \frac{Z_t^\top A_t \pi(A_t\given S_t)}{\Delta^*(S_t,A_t) \Theta^*(S_t,Z_t)} \left(w^\pi(S_t)-w(S_t)\right) \left (V^\pi(S_t) - \gamma V^\pi(S_{t+1}) \right ) \right] \\ & \qquad = \EE \left[ \frac{1}{T} \sum_{t = 0}^{T-1} \left(w^\pi(S_t)-w(S_t)\right) \EE_{\pi} \left [V^\pi(S_t) - \gamma V^\pi(S_{t+1}) \given S_t \right ] \right] \\ & \qquad = \EE \left[ \frac{1}{T} \sum_{t = 0}^{T-1} \left(w^\pi(S_t)-w(S_t)\right) \EE_\pi[R_t \given S_t] \right] \\ & \qquad = \EE \left[\frac{1}{T} \sum_{t = 0}^{T-1} \frac{Z_t^\top A_t \pi(A_t\given S_t)}{\Delta^*(S_t,A_t) \Theta^*(S_t,Z_t)} \left(w^\pi(S_t) - w(S_t)\right) R_t \right] \\ & \qquad = L_\textsf{mis}(w^\pi, \pi; \Delta^*, \Theta^*) - L_\textsf{mis}(w, \pi; \Delta^*, \Theta^*), \$ which concludes the proof of the lemma. \end{proof} \subsection{Proof of Lemma \ref{lemma:iv-mis-min-delta-close}} \label{prf:lemma:iv-mis-min-delta-close} \begin{proof} With a slight abuse of notation, we define \$ w_0 \in \argmin_{w\in \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta^*, \Theta^*, \pi^*)} L_\textsf{mis}(w, \pi^*; \Delta^*, \Theta^*), \qquad w_1 \in \argmin_{w\in \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta, \Theta, \pi^*)} L_\textsf{mis}(w, \pi^*; \Delta, \Theta).
\$ Then we have \#\label{eq:iv-kkk1} & \left | \min_{w\in \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta^*, \Theta^*, \pi^*)} L_\textsf{mis}(w, \pi^*; \Delta^*, \Theta^*) - \min_{w\in \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta, \Theta, \pi^*)} L_\textsf{mis}(w, \pi^*; \Delta, \Theta) \right | \\ & \qquad = \left | L_\textsf{mis}(w_0, \pi^*; \Delta^*, \Theta^*) - L_\textsf{mis}(w_1, \pi^*; \Delta, \Theta) \right| \\ & \qquad \leq \left | L_\textsf{mis}(w_0, \pi^*; \Delta^*, \Theta^*) - L_\textsf{mis}(w^{\pi^*}, \pi^*; \Delta^*, \Theta^*) \right | \\ & \qquad \qquad + \left | L_\textsf{mis}(w^{\pi^*}, \pi^*; \Delta^*, \Theta^*) - L_\textsf{mis}(w_1, \pi^*; \Delta^*, \Theta^*) \right | \\ & \qquad \qquad + \left |L_\textsf{mis}(w_1, \pi^*; \Delta^*, \Theta^*) - L_\textsf{mis}(w_1, \pi^*; \Delta, \Theta) \right | \\ & \qquad = \underbrace{\left | \Phi_\textsf{mis}^{\pi^*}(w_0, V^{\pi^*}; \Delta^*, \Theta^*) \right|}_{\text{Term (I)}} + \underbrace{\left| \Phi_\textsf{mis}^{\pi^*}(w_1, V^{\pi^*}; \Delta^*, \Theta^*) \right|}_{\text{Term (II)}} \\ & \qquad \qquad + \underbrace{\left | L_\textsf{mis}(w_1, \pi^*; \Delta^*, \Theta^*) - L_\textsf{mis}(w_1, \pi^*; \Delta, \Theta) \right|}_{\text{Term (III)}}. \# We upper bound terms (I), (II), and (III) on the RHS of \eqref{eq:iv-kkk1}, respectively. \vskip5pt \noindent\textbf{Upper Bounding Term (I).} Note that with probability at least $1 - \delta$, we have \#\label{eq:iv-kkk2} \left | \Phi_\textsf{mis}^{\pi^*}(w_0, V^{\pi^*}; \Delta^*, \Theta^*) \right| & \leq \max_{f\in \cV} \left | \Phi_\textsf{mis}^{\pi^*}(w_0, f; \Delta^*, \Theta^*) \right| \\ & = \max_{f\in \cV} \max \left\{ \Phi_\textsf{mis}^{\pi^*}(w_0, f; \Delta^*, \Theta^*), -\Phi_\textsf{mis}^{\pi^*}(w_0, f; \Delta^*, \Theta^*) \right \} \\ & = \max_{f\in \cV} \max \left\{ \Phi_\textsf{mis}^{\pi^*}(w_0, f; \Delta^*, \Theta^*), \Phi_\textsf{mis}^{\pi^*}(w_0, -f; \Delta^*, \Theta^*) \right \} \\ & = \max_{f\in \cV} \Phi_\textsf{mis}^{\pi^*}(w_0, f; \Delta^*, \Theta^*) \\ & \leq c\cdot \frac{ C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) \sqrt{\frac{1}{NT \kappa} \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV} \log\frac{1}{\delta} \log(NT)}, \# where in the first inequality, we use the fact that $V^{\pi^*} \in \cV$; in the third equality, we use the fact that $\cV$ is symmetric; in the last inequality, by noting that $w_0 \in \cup_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}} \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta, \Theta, \pi^*)$, we use Lemma \ref{lemma:iv-w-in-conf-good}. This upper bounds term (I) on the RHS of \eqref{eq:iv-kkk1}. \vskip5pt \noindent\textbf{Upper Bounding Term (II).} Similarly to \eqref{eq:iv-kkk2}, noting that $w_1 \in \cup_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}} \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta, \Theta, \pi^*)$, it holds with probability at least $1 - \delta$ that \#\label{eq:iv-kkk3} \left | \Phi_\textsf{mis}^{\pi^*}(w_1, V^{\pi^*}; \Delta^*, \Theta^*) \right| \leq c\cdot \frac{ C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) \sqrt{\frac{1}{NT \kappa} \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV} \log\frac{1}{\delta} \log(NT)}, \# which upper bounds term (II) on the RHS of \eqref{eq:iv-kkk1}.
\vskip5pt \noindent\textbf{Upper Bounding Term (III).} Note that with probability at least $1 - \delta$, it holds for any $(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}$ that \#\label{eq:iv-kkk4} & \left | L_\textsf{mis}(w_1, \pi^*; \Delta^*, \Theta^*) - L_\textsf{mis}(w_1, \pi^*; \Delta, \Theta) \right| \\ & \quad = \left | \EE \left[ \frac{1}{T} \sum_{t = 0}^{T-1} \left( \frac{Z_t^\top A_t \pi^*(A_t\given S_t)}{\Delta^*(S_t,A_t) \Theta^*(S_t,Z_t)} - \frac{Z_t^\top A_t \pi^*(A_t\given S_t)}{\Delta(S_t,A_t) \Theta(S_t,Z_t)} \right) w_1(S_t) R_t \right] \right | \\ & \quad \leq C_{\Delta^*} C_{\Theta^*} C_* \left(\xi_0 C_{\Theta^*} \sqrt{\frac{C_{\Delta^*}}{NT \kappa} \mathfrak{C}_{\cF_0} \log\frac{1}{\delta} \log(NT)} + \xi_1 C_{\Delta^*} \sqrt{\frac{C_{\Theta^*}}{NT \kappa} \mathfrak{C}_{\cF_1} \log\frac{1}{\delta} \log(NT)} \right), \# where we use the Cauchy-Schwarz inequality and Assumption \ref{ass:iv-sl-res} in the last inequality. \vskip5pt Now, by plugging \eqref{eq:iv-kkk2}, \eqref{eq:iv-kkk3}, and \eqref{eq:iv-kkk4} into \eqref{eq:iv-kkk1}, with probability at least $1 - \delta$, it holds for any $(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}$ that \$ & \left | \min_{w\in \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta^*, \Theta^*, \pi^*)} L_\textsf{mis}(w, \pi^*; \Delta^*, \Theta^*) - \min_{w\in \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta, \Theta, \pi^*)} L_\textsf{mis}(w, \pi^*; \Delta, \Theta) \right | \\ & \qquad \leq c\cdot \frac{ C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) \sqrt{\frac{1}{NT \kappa} \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV} \log\frac{1}{\delta} \log(NT)}, \$ which concludes the proof of the lemma. \end{proof} \subsection{Proof of Lemma \ref{lemma:iv-mis-hatl-close}} \label{prf:lemma:iv-mis-hatl-close} \begin{proof} By Theorem \ref{thm:hoeffding-mixing}, with probability at least $1 - \delta$, it holds for any $(w, \Delta, \Theta, \pi)\in \cW \times \cF_0 \times \cF_1 \times \Pi$ that \$ \left | L_\textsf{mis}(w, \pi; \Delta, \Theta) - \hat L_\textsf{mis}(w, \pi; \Delta, \Theta) \right | \leq c\cdot C_{\Delta^*} C_{\Theta^*} C_* \sqrt{\frac{1}{NT \kappa} \mathfrak{C}_{\cF_0,\cF_1,\cW,\Pi} \log\frac{1}{\delta} \log(NT)}, \$ which concludes the proof of the lemma.
\end{proof} \subsection{Proof of Lemma \ref{lemma:iv-any-w-hat-small-loss}} \label{prf:lemma:iv-any-w-hat-small-loss} \begin{proof} Note that \#\label{eq:iv-ee3} & \max_{f\in \cV} \Phi_\textsf{mis}^\pi(\hat w^\pi_{\Delta, \Theta}, f; \Delta, \Theta) \\ & \qquad = \max_{f\in \cV} \Phi_\textsf{mis}^\pi(\hat w^\pi_{\Delta, \Theta}, f; \Delta, \Theta) - \max_{f\in \cV} \hat \Phi_\textsf{mis}^\pi(\hat w^\pi_{\Delta, \Theta}, f; \Delta, \Theta) + \max_{f\in \cV} \hat \Phi_\textsf{mis}^\pi(\hat w^\pi_{\Delta, \Theta}, f; \Delta, \Theta) \\ & \qquad \qquad - \max_{f\in \cV} \hat \Phi_\textsf{mis}^\pi( w^\pi, f; \Delta, \Theta) + \max_{f\in \cV} \hat \Phi_\textsf{mis}^\pi( w^\pi, f; \Delta, \Theta) - \max_{f\in \cV} \Phi_\textsf{mis}^\pi(w^\pi, f; \Delta, \Theta) \\ & \qquad \qquad + \max_{f\in \cV} \Phi_\textsf{mis}^\pi(w^\pi, f; \Delta, \Theta) - \max_{f\in \cV} \Phi_\textsf{mis}^\pi(w^\pi, f; \Delta^*, \Theta^*) \\ & \qquad \leq 2 \max_{w\in \cW} \max_{f\in \cV} \left | \Phi_\textsf{mis}^\pi(w, f; \Delta, \Theta) - \hat \Phi_\textsf{mis}^\pi(w, f; \Delta, \Theta) \right | \\ & \qquad \qquad + \max_{f\in \cV} \left | \Phi_\textsf{mis}^\pi(w^\pi, f; \Delta, \Theta) - \Phi_\textsf{mis}^\pi(w^\pi, f; \Delta^*, \Theta^*) \right |, \# where we use the fact that $\hat w^\pi_{\Delta, \Theta} \in \argmin_{w\in \cW} \max_{f\in \cV} \hat \Phi^\pi_\textsf{mis}(w, f; \Delta, \Theta)$ in the last inequality. Meanwhile, by Theorem \ref{thm:hoeffding-mixing}, with probability at least $1 - \delta$, it holds for any $(w,f,\pi)\in \cW\times \cV \times \Pi$ that \#\label{eq:iv-ee4} \left | \hat \Phi_\textsf{mis}^\pi(w, f; \Delta, \Theta) - \Phi_\textsf{mis}^\pi(w, f; \Delta, \Theta) \right | \leq c\cdot \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \sqrt{\frac{1}{NT \kappa}\mathfrak{C}_{\cV,\cW,\Pi}\log\frac{1}{\delta} \log(NT)}. \# Also, we upper bound the second term on the RHS of \eqref{eq:iv-ee3} with probability at least $1- \delta$ as follows, \#\label{eq:iv-ee5} & \left | \Phi_\textsf{mis}^\pi(w^\pi, f; \Delta, \Theta) - \Phi_\textsf{mis}^\pi(w^\pi, f; \Delta^*, \Theta^*) \right | \\ & \quad = \left | \EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} \left ( \frac{Z_t^\top A_t\pi(A_t\given S_t)w^\pi(S_t) }{\Delta^*(S_t,A_t)\Theta^*(S_t,Z_t)} - \frac{Z_t^\top A_t\pi(A_t\given S_t) w^\pi(S_t)}{\Delta(S_t,A_t)\Theta(S_t,Z_t)} \right) \left(f(S_t) - \gamma f(S_{t+1})\right) \right ] \right | \\ & \quad \leq \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \left(\xi_0 C_{\Theta^*} \sqrt{\frac{C_{\Delta^*}}{NT \kappa}\mathfrak{C}_{\cF_0}\log\frac{1}{\delta} \log(NT)} + \xi_1 C_{\Delta^*} \sqrt{\frac{C_{\Theta^*}}{NT \kappa} \mathfrak{C}_{\cF_1} \log\frac{1}{\delta} \log(NT)} \right), \# where we use the Cauchy-Schwarz inequality and Assumption \ref{ass:iv-sl-res} in the last inequality. Now, by plugging \eqref{eq:iv-ee4} and \eqref{eq:iv-ee5} into \eqref{eq:iv-ee3}, with probability at least $1 - \delta$, it holds for any $(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}$ and $\pi\in \Pi$ that \$ \max_{f\in \cV} \Phi_\textsf{mis}^\pi(\hat w^\pi_{\Delta, \Theta}, f; \Delta, \Theta) \leq c\cdot \frac{C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1 ) \sqrt{\frac{1}{NT \kappa}\mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi} \log\frac{1}{\delta} \log(NT)}, \$ which concludes the proof of the lemma.
\end{proof} \section{Proofs of Results in \S\ref{sec:dr-theory}} \subsection{Proof of Theorem \ref{thm:iv-dr}}\label{prf:thm:iv-dr} \begin{proof} We split the proof into two cases: (i) Assumption \ref{ass:iv-vf-realizable} holds; (ii) Assumption \ref{ass:iv-mis-realizable} holds. \vskip5pt \noindent\textbf{Case (i): Assumption \ref{ass:iv-vf-realizable} holds.} We introduce the following supporting lemmas. \begin{lemma} \label{lemma:iv-mis-hatl-close-dr} With probability at least $1 - \delta$, where $c / (NT)^2 \leq \delta \leq 1$, it holds for any $(w,v, \Delta, \Theta, \pi)\in \cW \times \cV \times \cF_0 \times \cF_1 \times \Pi$ that \$ & \left | L_\textsf{dr}(w, v, \pi; \Delta, \Theta) - \hat L_\textsf{dr}(w, v, \pi; \Delta, \Theta) \right | \\ & \qquad \leq c\cdot \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \sqrt{\frac{1}{NT \kappa}\mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi} \log\frac{1}{\delta} \log(NT)} = \hat \epsilon_L. \$ \end{lemma} \begin{proof} See \S\ref{prf:lemma:iv-mis-hatl-close-dr} for a detailed proof. \end{proof} \begin{lemma} \label{lemma:iv-mis-min-delta-close-dr} Suppose that $(\alpha_0, \alpha_1, \alpha_\textsf{mis}, \alpha_\textsf{vf})$ is defined in Assumption \ref{ass:iv-sl-res}, Lemmas \ref{lemma:iv-w-pi-in-conf}, and \ref{lemma:iv-v-pi-in-conf}. With probability at least $1 - \delta$, it holds for any $(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}$ that \$ & \left | \min_{(w,v)\in \textsf{conf}_{\alpha_\textsf{mis},\alpha_\textsf{vf}}(\Delta^*, \Theta^*, \pi^*)} L_\textsf{dr}(w, v, \pi^*; \Delta^*, \Theta^*) - \min_{(w,v)\in \textsf{conf}_{\alpha_\textsf{mis},\alpha_\textsf{vf}}(\Delta, \Theta, \pi^*)} L_\textsf{dr}(w, v, \pi^*; \Delta, \Theta) \right | \\ & \qquad \leq c\cdot \frac{ C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) \sqrt{\frac{1}{NT \kappa} \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV} \log\frac{1}{\delta} \log(NT)} = \epsilon_L^*. \$ \end{lemma} \begin{proof} See \S\ref{prf:lemma:iv-mis-min-delta-close-dr} for a detailed proof.
\end{proof} By the definition of $L_{\textsf{dr}}$, it holds with probability at least $1 - \delta$ that \#\label{eq:koko1} & J(\pi^*) - J(\hat\pi_\textsf{dr}) \\ & \qquad = J(\pi^*) - L_\textsf{dr}(w, V^{\hat \pi_\textsf{dr}}, \hat \pi_\textsf{dr}; \Delta^*, \Theta^*) \\ & \qquad \leq J(\pi^*) - \min_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1}} \min_{(w,v) \in \textsf{conf}_{\alpha_\textsf{mis}, \alpha_\textsf{vf}}(\Delta, \Theta, \hat \pi_\textsf{dr})} L_\textsf{dr}(w, v, \hat \pi_\textsf{dr}; \Delta, \Theta) \\ & \qquad \leq J(\pi^*) - \min_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1}} \min_{(w,v) \in \textsf{conf}_{\alpha_\textsf{mis}, \alpha_\textsf{vf}}(\Delta, \Theta, \hat \pi_\textsf{dr})} \hat L_\textsf{dr}(w, v, \hat \pi_\textsf{dr}; \Delta, \Theta) + \hat \epsilon_L \\ & \qquad \leq J(\pi^*) - \min_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1}} \min_{(w,v) \in \textsf{conf}_{\alpha_\textsf{mis}, \alpha_\textsf{vf}}(\Delta, \Theta, \pi^*)} \hat L_\textsf{dr}(w, v, \pi^*; \Delta, \Theta) + \hat \epsilon_L \\ & \qquad \leq J(\pi^*) - \min_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1}} \min_{(w,v) \in \textsf{conf}_{\alpha_\textsf{mis}, \alpha_\textsf{vf}}(\Delta, \Theta, \pi^*)} L_\textsf{dr}(w, v, \pi^*; \Delta, \Theta) + 2 \hat \epsilon_L, \# where in the first inequality, we use Assumption \ref{ass:iv-sl-res} that $(\Delta^*, \Theta^*) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1}$ with probability at least $1 - \delta$, and Lemma \ref{lemma:iv-v-pi-in-conf} with Assumption \ref{ass:iv-vf-realizable} that $V^{\hat \pi_\textsf{dr}}\in \textsf{conf}^\textsf{vf}_{\alpha_\textsf{vf}}(\Delta^*, \Theta^*, \hat \pi_\textsf{dr})$ with probability at least $1 - \delta$; in the second inequality, we use Lemma \ref{lemma:iv-mis-hatl-close-dr}; in the third inequality, we use the optimality of $\hat \pi_\textsf{dr}$; while in the last inequality, we use Lemma \ref{lemma:iv-mis-hatl-close-dr} again. By combining Lemma \ref{lemma:iv-mis-min-delta-close-dr} and \eqref{eq:koko1}, we have \#\label{eq:koko3} & J(\pi^*) - J(\hat\pi_\textsf{dr}) \\ & \qquad \leq J(\pi^*) - \min_{(w,v) \in \textsf{conf}_{\alpha_\textsf{mis}, \alpha_\textsf{vf}}(\Delta^*, \Theta^*, \pi^*)} L_\textsf{dr}(w, v, \pi^*; \Delta^*, \Theta^*) + 2 \hat \epsilon_L + \epsilon_L^* \\ & \qquad = L_\textsf{dr}(w, V^{\pi^*}, \pi^*; \Delta^*, \Theta^*) - \min_{(w,v) \in \textsf{conf}_{\alpha_\textsf{mis}, \alpha_\textsf{vf}}(\Delta^*, \Theta^*, \pi^*)} L_\textsf{dr}(w, v, \pi^*; \Delta^*, \Theta^*) + 2 \hat \epsilon_L + \epsilon_L^* \\ & \qquad = \max_{(w,v) \in \textsf{conf}_{\alpha_\textsf{mis}, \alpha_\textsf{vf}}(\Delta^*, \Theta^*, \pi^*)} \left | L_\textsf{dr}(w, V^{\pi^*}, \pi^*; \Delta^*, \Theta^*) - L_\textsf{dr}(w, v, \pi^*; \Delta^*, \Theta^*) \right | + 2 \hat \epsilon_L + \epsilon_L^* \\ & \qquad = \max_{(w,v) \in \textsf{conf}_{\alpha_\textsf{mis}, \alpha_\textsf{vf}}(\Delta^*, \Theta^*, \pi^*)} \left | \Phi_\textsf{vf}^{\pi^*}(v, w^{\pi^*}; \Delta^*, \Theta^*) - \Phi_\textsf{vf}^{\pi^*}(v, w; \Delta^*, \Theta^*) \right | + 2 \hat \epsilon_L + \epsilon_L^*, \# where we use the following fact in the last equality, \$ L_\textsf{dr}(w, V^{\pi^*}, \pi^*; \Delta^*, \Theta^*) - L_\textsf{dr}(w, v, \pi^*; \Delta^*, \Theta^*) = \Phi_\textsf{vf}^{\pi^*}(v, w^{\pi^*}; \Delta^*, \Theta^*) - \Phi_\textsf{vf}^{\pi^*}(v, w; \Delta^*, \Theta^*). 
\$ Meanwhile, noting that $w^{\pi^*} \in \cW$ by Assumption \ref{ass:iv-vf-realizable}, we obtain from \eqref{eq:koko3} that \#\label{eq:iv-dr-case2} J(\pi^*) - J(\hat\pi_\textsf{dr}) & \leq 2 \max_{(w,v) \in \textsf{conf}_{\alpha_\textsf{mis}, \alpha_\textsf{vf}}(\Delta^*, \Theta^*, \pi^*)} \left | \Phi_\textsf{vf}^{\pi^*}(v, w; \Delta^*, \Theta^*) \right | + 2 \hat \epsilon_L + \epsilon_L^* \\ & \leq c\cdot \frac{C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) \sqrt{\frac{1}{NT \kappa} \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi} \log\frac{NT }{\delta}}, \# where we use Lemma \ref{lemma:iv-v-in-conf-good} and plug in the definition of $\hat \epsilon_L$ and $\epsilon_L^*$ in the last inequality. This concludes the proof of case (i). \vskip5pt \noindent\textbf{Case (ii): Assumption \ref{ass:iv-mis-realizable} holds.} It holds with probability at least $1 - \delta$ that \$ J(\pi^*) - J(\hat\pi_\textsf{dr}) & = J(\pi^*) - L_\textsf{dr}(w^{\hat \pi_\textsf{dr}}, v, \hat \pi_\textsf{dr}; \Delta^*, \Theta^*) \\ & \leq J(\pi^*) - \min_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1}} \min_{(w,v) \in \textsf{conf}_{\alpha_\textsf{mis}, \alpha_\textsf{vf}}(\Delta, \Theta, \hat \pi_\textsf{dr})} L_\textsf{dr}(w, v, \hat \pi_\textsf{dr}; \Delta, \Theta) \\ & \leq J(\pi^*) - \min_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1}} \min_{(w,v) \in \textsf{conf}_{\alpha_\textsf{mis}, \alpha_\textsf{vf}}(\Delta, \Theta, \hat \pi_\textsf{dr})} \hat L_\textsf{dr}(w, v, \hat \pi_\textsf{dr}; \Delta, \Theta) + \hat \epsilon_L \\ & \leq J(\pi^*) - \min_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1}} \min_{(w,v) \in \textsf{conf}_{\alpha_\textsf{mis}, \alpha_\textsf{vf}}(\Delta, \Theta, \pi^*)} \hat L_\textsf{dr}(w, v, \pi^*; \Delta, \Theta) + \hat \epsilon_L, \$ where we use Assumption \ref{ass:iv-sl-res} that $(\Delta^*, \Theta^*) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1}$ with probability at least $1-\delta$ and Assumption \ref{ass:iv-mis-realizable} that $w^\pi \in \cW$ for any $\pi\in \Pi$ in the first inequality; we use Lemma \ref{lemma:iv-mis-hatl-close-dr} in the second inequality; and we use the optimality of $\hat \pi_\textsf{dr}$ in the last inequality.
Further, by Lemmas \ref{lemma:iv-mis-hatl-close-dr} and \ref{lemma:iv-mis-min-delta-close-dr}, we have \$ & J(\pi^*) - J(\hat\pi_\textsf{dr}) \\ & \qquad \leq J(\pi^*) - \min_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1}} \min_{(w,v) \in \textsf{conf}_{\alpha_\textsf{mis}, \alpha_\textsf{vf}}(\Delta, \Theta, \pi^*)} L_\textsf{dr}(w, v, \pi^*; \Delta, \Theta) + 2 \hat \epsilon_L\\ & \qquad \leq J(\pi^*) - \min_{(w,v) \in \textsf{conf}_{\alpha_\textsf{mis}, \alpha_\textsf{vf}}(\Delta^*, \Theta^*, \pi^*)} L_\textsf{dr}(w, v, \pi^*; \Delta^*, \Theta^*) + 2\hat \epsilon_L + \epsilon^*_L \\ & \qquad \leq \max_{(w,v) \in \textsf{conf}_{\alpha_\textsf{mis}, \alpha_\textsf{vf}}(\Delta^*, \Theta^*, \pi^*)} \left | L_\textsf{dr}(w^{\pi^*}, v, \pi^*; \Delta^*, \Theta^*) - L_\textsf{dr}(w, v, \pi^*; \Delta^*, \Theta^*) \right | + 2\hat \epsilon_L + \epsilon^*_L \\ & \qquad \leq \max_{(w,v) \in \textsf{conf}_{\alpha_\textsf{mis}, \alpha_\textsf{vf}}(\Delta^*, \Theta^*, \pi^*)} \left | \Phi_\textsf{mis}^{\pi^*}(w, V^{\pi^*}; \Delta^*, \Theta^*) - \Phi_\textsf{mis}^{\pi^*}(w, v; \Delta^*, \Theta^*) \right | + 2\hat \epsilon_L + \epsilon^*_L \\ & \qquad \leq 2 \max_{(w,v) \in \textsf{conf}_{\alpha_\textsf{mis}, \alpha_\textsf{vf}}(\Delta^*, \Theta^*, \pi^*)} \left | \Phi_\textsf{mis}^{\pi^*}(w, v; \Delta^*, \Theta^*) \right | + 2 \hat \epsilon_L + \epsilon^*_L \\ & \qquad \leq 2 \max_{w \in \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}} (\Delta^*, \Theta^*, \pi^*)} \max_{v\in \cV} \left | \Phi_\textsf{mis}^{\pi^*}(w, v; \Delta^*, \Theta^*) \right | + 2 \hat \epsilon_L + \epsilon^*_L, \$ where in the third inequality, we use the fact that $J(\pi^*) = L_\textsf{dr}(w^{\pi^*}, v, \pi^*; \Delta^*, \Theta^*)$; in the fourth inequality, we use the following fact \$ L_\textsf{dr}(w,v,\pi^*; \Delta^*, \Theta^*) - L_\textsf{dr}(w^{\pi^*},v,\pi^*; \Delta^*, \Theta^*) = -\Phi_\textsf{mis}^{\pi^*}(w, V^{\pi^*}; \Delta^*, \Theta^*) + \Phi_\textsf{mis}^{\pi^*}(w, v; \Delta^*, \Theta^*) \$ for any $(w,v)\in \cW\times \cV$; in the fifth inequality, we use the fact that $V^{\pi^*}\in \cV$ by Assumption \ref{ass:iv-mis-realizable} and $V^{\pi^*}\in \textsf{conf}_{\alpha_\textsf{vf}}^\textsf{vf}(\Delta^*, \Theta^*, \pi^*)$ with probability at least $1 - \delta$ by Lemma \ref{lemma:iv-v-pi-in-conf}. Now, by Lemma \ref{lemma:iv-w-in-conf-good} and the fact that $\cV$ is symmetric, we obtain that \#\label{eq:iv-dr-case1} J(\pi^*) - J(\hat\pi_\textsf{dr}) \leq c\cdot \frac{C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) \sqrt{\frac{1}{NT \kappa} \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi} \log\frac{NT }{\delta}}, \# which concludes the proof of case (ii). \vskip5pt By combining \eqref{eq:iv-dr-case2} and \eqref{eq:iv-dr-case1}, we conclude the proof of the theorem. \end{proof} \subsection{Proof of Theorem \ref{thm:iv-dr-spec}}\label{prf:thm:iv-dr-spec} \begin{proof} Recall that \$ \tilde v^\pi \in \argmin_{v\in \cV} \max_{w\in \cW} \Phi_\textsf{vf}^\pi(v,w; \Delta^*, \Theta^*), \qquad \tilde w^\pi\in \argmin_{w\in \cW} \max_{v\in \cV} \Phi_\textsf{mis}^\pi(w, v; \Delta^*, \Theta^*). \$ We split the proof into the following two parts. \noindent\textbf{Part (i).} We first introduce the following lemmas. \begin{lemma} \label{lemma:vinconf-spec} Suppose $\alpha_\textsf{vf}$ is defined in Lemma \ref{lemma:iv-v-pi-in-conf} and $c / (NT)^2 \leq \delta \leq 1$.
Then under Assumptions \ref{ass:spaces}, \ref{ass:upper-bound-delta}, and \ref{ass:ergodic}, with probability at least $1 - \delta$, it holds for any $\pi\in \Pi$ that $\tilde v^\pi \in \textsf{conf}^\textsf{vf}_{\alpha_\textsf{vf}}(\Delta^*, \Theta^*, \pi)$. \end{lemma} \begin{proof} See \S\ref{prf:lemma:vinconf-spec} for a detailed proof. \end{proof} \begin{lemma} \label{lemma:vinconf-good-spec} Suppose that $(\alpha_0, \alpha_1, \alpha_\textsf{vf})$ is defined in Assumption \ref{ass:iv-sl-res} and Lemma \ref{lemma:iv-v-pi-in-conf} and $c / (NT)^2 \leq \delta \leq 1$. Then under Assumptions \ref{ass:iv-common}, \ref{ass:10}, \ref{ass:ergodic} and \ref{ass:iv-sl-res}, with probability at least $1 - \delta$, it holds for any policy $\pi\in \Pi$ and $v\in \cup_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}} \textsf{conf}^\textsf{vf}_{\alpha_\textsf{vf}}(\Delta, \Theta, \pi)$ that \$ \max_{g\in \cW} \Phi^\pi_\textsf{vf}(v,g;\Delta^*, \Theta^*) & \leq c\cdot \frac{C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) \sqrt{\frac{1}{NT\kappa}\cdot \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi} \cdot \log\frac{1}{\delta} \log(NT)} \\ & \qquad + \max_{g\in \cW} \Phi_\textsf{vf}^\pi(\tilde v^\pi, g; \Delta^*, \Theta^*). \$ \end{lemma} \begin{proof} See \S\ref{prf:lemma:vinconf-good-spec} for a detailed proof. \end{proof} By the definition of $L_{\textsf{dr}}$, it holds that \#\label{eq:0622-0} & J(\pi^*) - J(\hat\pi_\textsf{dr}) \\ & \qquad = J(\pi^*) - L_\textsf{dr}(w, V^{\hat \pi_\textsf{dr}}, \hat \pi_\textsf{dr}; \Delta^*, \Theta^*) \\ & \qquad = J(\pi^*) - L_\textsf{dr}(w, \tilde v^{\hat \pi_\textsf{dr}}, \hat \pi_\textsf{dr}; \Delta^*, \Theta^*) + L_\textsf{dr}(w, \tilde v^{\hat \pi_\textsf{dr}}, \hat \pi_\textsf{dr}; \Delta^*, \Theta^*) - L_\textsf{dr}(w, V^{\hat \pi_\textsf{dr}}, \hat \pi_\textsf{dr}; \Delta^*, \Theta^*). \# Note that, by Assumption \ref{ass:model-spec}, \#\label{eq:0622-1} & \left | L_\textsf{dr}(w, \tilde v^{\hat \pi_\textsf{dr}}, \hat \pi_\textsf{dr}; \Delta^*, \Theta^*) - L_\textsf{dr}(w, V^{\hat \pi_\textsf{dr}}, \hat \pi_\textsf{dr}; \Delta^*, \Theta^*) \right | \leq C_* C_{\Delta^*} C_{\Theta^*} \varepsilon^\cV_\textsf{vf}.
\# Meanwhile, by Assumption \ref{ass:iv-sl-res} and Lemma \ref{lemma:vinconf-spec}, it holds with probability at least $1 - \delta$ that \#\label{eq:0622-2} L_\textsf{dr}(w, \tilde v^{\hat \pi_\textsf{dr}}, \hat \pi_\textsf{dr}; \Delta^*, \Theta^*) & \geq \min_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1}} \min_{(w,v) \in \textsf{conf}_{\alpha_\textsf{mis}, \alpha_\textsf{vf}}(\Delta, \Theta, \hat \pi_\textsf{dr})} L_\textsf{dr}(w, v, \hat \pi_\textsf{dr}; \Delta, \Theta) \\ & \geq \min_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1}} \min_{(w,v) \in \textsf{conf}_{\alpha_\textsf{mis}, \alpha_\textsf{vf}}(\Delta, \Theta, \hat \pi_\textsf{dr})} \hat L_\textsf{dr}(w, v, \hat \pi_\textsf{dr}; \Delta, \Theta) - \hat \epsilon_L \\ & \geq \min_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1}} \min_{(w,v) \in \textsf{conf}_{\alpha_\textsf{mis}, \alpha_\textsf{vf}}(\Delta, \Theta, \pi^*)} \hat L_\textsf{dr}(w, v, \pi^*; \Delta, \Theta) - \hat \epsilon_L \\ & \geq \min_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1}} \min_{(w,v) \in \textsf{conf}_{\alpha_\textsf{mis}, \alpha_\textsf{vf}}(\Delta, \Theta, \pi^*)} L_\textsf{dr}(w, v, \pi^*; \Delta, \Theta) - 2 \hat \epsilon_L, \# where in the second inequality, we use Lemma \ref{lemma:iv-mis-hatl-close-dr}; in the third inequality, we use the optimality of $\hat \pi_\textsf{dr}$; in the fourth inequality, we again use Lemma \ref{lemma:iv-mis-hatl-close-dr}. Now, by plugging \eqref{eq:0622-1} and \eqref{eq:0622-2} into \eqref{eq:0622-0}, we have \#\label{eq:0622-3} & J(\pi^*) - J(\hat\pi_\textsf{dr}) \\ & \qquad \leq J(\pi^*) - \min_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1}} \min_{(w,v) \in \textsf{conf}_{\alpha_\textsf{mis}, \alpha_\textsf{vf}}(\Delta, \Theta, \pi^*)} L_\textsf{dr}(w, v, \pi^*; \Delta, \Theta) \\ & \qquad \qquad + 2 \hat \epsilon_L + C_* C_{\Delta^*} C_{\Theta^*} \varepsilon^\cV_\textsf{vf}.
\# By combining Lemma \ref{lemma:iv-mis-min-delta-close-dr} and \eqref{eq:0622-3}, we have \#\label{eq:koko3-2-spec} & J(\pi^*) - J(\hat\pi_\textsf{dr}) \\ & \quad \leq J(\pi^*) - \min_{(w,v)\in \textsf{conf}_{\alpha_\textsf{mis},\alpha_\textsf{vf}}(\Delta^*, \Theta^*, \pi^*) } L_\textsf{dr}(w, v, \pi^*; \Delta^*, \Theta^*) + 2 \hat \epsilon_L + \epsilon_L^* + C_* C_{\Delta^*} C_{\Theta^*} \varepsilon^\cV_\textsf{vf} \\ & \quad = L_\textsf{dr}(w, V^{\pi^*}, \pi^*; \Delta^*, \Theta^*) - \min_{(w,v)\in \textsf{conf}_{\alpha_\textsf{mis},\alpha_\textsf{vf}}(\Delta^*, \Theta^*, \pi^*)} L_\textsf{dr}(w, v, \pi^*; \Delta^*, \Theta^*) \\ & \qquad \qquad + 2 \hat \epsilon_L + \epsilon_L^* + C_* C_{\Delta^*} C_{\Theta^*} \varepsilon^\cV_\textsf{vf} \\ & \quad = \max_{(w,v)\in \textsf{conf}_{\alpha_\textsf{mis},\alpha_\textsf{vf}}(\Delta^*, \Theta^*, \pi^*)} \left | L_\textsf{dr}(w, V^{\pi^*}, \pi^*; \Delta^*, \Theta^*) - L_\textsf{dr}(w, v, \pi^*; \Delta^*, \Theta^*) \right | \\ & \qquad \qquad + 2 \hat \epsilon_L + \epsilon_L^* + C_* C_{\Delta^*} C_{\Theta^*} \varepsilon^\cV_\textsf{vf} \\ & \quad = \max_{(w,v)\in \textsf{conf}_{\alpha_\textsf{mis},\alpha_\textsf{vf}}(\Delta^*, \Theta^*, \pi^*)} \left | \Phi_\textsf{vf}^{\pi^*}(v, w^{\pi^*}; \Delta^*, \Theta^*) - \Phi_\textsf{vf}^{\pi^*}(v, w; \Delta^*, \Theta^*) \right | \\ & \qquad \qquad + 2 \hat \epsilon_L + \epsilon_L^* + C_* C_{\Delta^*} C_{\Theta^*} \varepsilon^\cV_\textsf{vf}, \# where we use the following fact in the last equality, \$ L_\textsf{dr}(w, V^{\pi^*}, \pi^*; \Delta^*, \Theta^*) - L_\textsf{dr}(w, v, \pi^*; \Delta^*, \Theta^*) = \Phi_\textsf{vf}^{\pi^*}(v, w^{\pi^*}; \Delta^*, \Theta^*) - \Phi_\textsf{vf}^{\pi^*}(v, w; \Delta^*, \Theta^*). \$ We upper bound the first term on the RHS of \eqref{eq:koko3-2-spec} as follows, \#\label{eq:456374637} & \max_{(w,v)\in \textsf{conf}_{\alpha_\textsf{mis},\alpha_\textsf{vf}}(\Delta^*, \Theta^*, \pi^*)} \left | \Phi_\textsf{vf}^{\pi^*}(v, w^{\pi^*}; \Delta^*, \Theta^*) - \Phi_\textsf{vf}^{\pi^*}(v, w; \Delta^*, \Theta^*) \right| \\ & \quad \leq \max_{v\in \textsf{conf}^\textsf{vf}_{\alpha_\textsf{vf}}(\Delta^*, \Theta^*, \pi^*)} \max_{w\in \cW} \left | \Phi_\textsf{vf}^{\pi^*}(v, \tilde w^{\pi^*}; \Delta^*, \Theta^*) - \Phi_\textsf{vf}^{\pi^*}(v, w; \Delta^*, \Theta^*) \right| \\ & \qquad\quad + \max_{v\in \cV} \left | \Phi_\textsf{vf}^{\pi^*}(v, w^{\pi^*}; \Delta^*, \Theta^*) - \Phi_\textsf{vf}^{\pi^*}(v, \tilde w^{\pi^*}; \Delta^*, \Theta^*) \right| \\ & \quad \leq 2 \max_{v\in \textsf{conf}^\textsf{vf}_{\alpha_\textsf{vf}}(\Delta^*, \Theta^*, \pi^*)} \max_{w\in \cW} \left |\Phi_\textsf{vf}^{\pi^*}(v, w; \Delta^*, \Theta^*) \right| \\ & \qquad \quad + \max_{v\in \cV} \left | \Phi_\textsf{vf}^{\pi^*}(v, w^{\pi^*}; \Delta^*, \Theta^*) - \Phi_\textsf{vf}^{\pi^*}(v, \tilde w^{\pi^*}; \Delta^*, \Theta^*) \right|, \# where in the first inequality, we use the triangle inequality; in the second inequality, we use the fact that $\tilde w^{\pi^*}\in \cW$ by its definition.
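In the next display \eqref{eq:3456789765} and again in \eqref{eq:0623-7} below, we also use the elementary fact that, for any real-valued functions $A$ and $B$ on $\cW$, \$ \Big | \max_{g\in \cW} A(g) - \max_{g\in \cW} B(g) \Big | \leq \max_{g\in \cW} \big | A(g) - B(g) \big |, \$ which follows since $A(g) \leq B(g) + |A(g) - B(g)|$ for every $g$, so that $\max_{g} A(g) \leq \max_{g} B(g) + \max_{g} |A(g) - B(g)|$, and symmetrically with the roles of $A$ and $B$ exchanged.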
By Lemma \ref{lemma:vinconf-good-spec} and Assumption \ref{ass:model-spec}, we obtain from \eqref{eq:456374637} that \#\label{eq:87343846} & \max_{(w,v)\in \textsf{conf}_{\alpha_\textsf{mis},\alpha_\textsf{vf}}(\Delta^*, \Theta^*, \pi^*)} \left | \Phi_\textsf{vf}^{\pi^*}(v, w^{\pi^*}; \Delta^*, \Theta^*) - \Phi_\textsf{vf}^{\pi^*}(v, w; \Delta^*, \Theta^*) \right | \\ & \qquad \leq c\cdot \frac{C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) \sqrt{\frac{1}{NT\kappa}\cdot \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi} \cdot \log\frac{1}{\delta} \log(NT)} \\ & \qquad \qquad + 2 \max_{g\in \cW} \Phi_\textsf{vf}^{\pi^*}(\tilde v^{\pi^*}, g; \Delta^*, \Theta^*) \\ & \qquad \qquad + C_{\Delta^*} C_{\Theta^*} \varepsilon^\cW_\textsf{vf}/(1-\gamma). \# Also, we have \#\label{eq:3456789765} & \max_{g\in \cW} \Phi_\textsf{vf}^{\pi^*}(\tilde v^{\pi^*}, g; \Delta^*, \Theta^*) \\ & \qquad = \max_{g\in \cW} \Phi_\textsf{vf}^{\pi^*}(\tilde v^{\pi^*}, g; \Delta^*, \Theta^*) - \max_{g\in \cW} \Phi_\textsf{vf}^{\pi^*}(V^{\pi^*}, g; \Delta^*, \Theta^*) + \max_{g\in \cW} \Phi_\textsf{vf}^{\pi^*}(V^{\pi^*}, g; \Delta^*, \Theta^*) \\ & \qquad= \max_{g\in \cW} \Phi_\textsf{vf}^{\pi^*}(\tilde v^{\pi^*}, g; \Delta^*, \Theta^*) - \max_{g\in \cW} \Phi_\textsf{vf}^{\pi^*}(V^{\pi^*}, g; \Delta^*, \Theta^*) \\ & \qquad\leq \max_{g\in \cW} \left | \Phi_\textsf{vf}^{\pi^*}(\tilde v^{\pi^*}, g; \Delta^*, \Theta^*) - \Phi_\textsf{vf}^{\pi^*}(V^{\pi^*}, g; \Delta^*, \Theta^*) \right | \\ & \qquad \leq C_* C_{\Delta^*} C_{\Theta^*} \varepsilon^\cV_\textsf{vf}, \# where the second equality holds since $\Phi_\textsf{vf}^{\pi^*}(V^{\pi^*}, g; \Delta^*, \Theta^*) = 0$ for any $g\in \cW$, and we use Assumption \ref{ass:model-spec} in the last inequality. By plugging \eqref{eq:87343846} and \eqref{eq:3456789765} into \eqref{eq:koko3-2-spec}, we have \#\label{eq:iv-dr-case2-2-spec} J(\pi^*) - J(\hat\pi_\textsf{dr}) & \leq c\cdot \frac{C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) \sqrt{\frac{1}{NT \kappa} \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi} \log\frac{NT }{\delta}} \\ & \qquad + 3 C_{\Delta^*} C_{\Theta^*} \left(C_* \varepsilon^\cV_\textsf{vf} + \varepsilon^\cW_\textsf{vf}/(1-\gamma)\right), \# where we plug in the definition of $\epsilon_L$ and $\epsilon_L^*$ in the last inequality. \vskip5pt \noindent\textbf{Part (ii).} We first introduce the following lemmas. \begin{lemma} \label{lemma:iv-w-pi-in-conf-spec} Suppose $\alpha_\textsf{mis}$ is defined in Lemma \ref{lemma:iv-w-pi-in-conf} and $c / (NT)^2 \leq \delta \leq 1$. Then under Assumptions \ref{ass:upper-bound-delta} and \ref{ass:ergodic}, with probability at least $1 - \delta$, it holds for any $\pi \in \Pi$ that $\tilde w^\pi \in \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta^*, \Theta^*, \pi)$. \end{lemma} \begin{proof} See \S\ref{prf:lemma:iv-w-pi-in-conf-spec} for a detailed proof. \end{proof} \begin{lemma} \label{lemma:iv-w-in-conf-good-spec} Suppose that $(\alpha_0, \alpha_1, \alpha_\textsf{mis})$ is defined in Assumption \ref{ass:iv-sl-res} and Lemma \ref{lemma:iv-w-pi-in-conf}, and $c / (NT)^2 \leq \delta \leq 1$.
Then under Assumptions \ref{ass:iv-common}, \ref{ass:10}, \ref{ass:ergodic}, and \ref{ass:iv-sl-res}, with probability at least $1 - \delta$, it holds for any $\pi\in \Pi$ and $w \in \cup_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}} \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta, \Theta, \pi)$ that \$ \max_{f\in \cV} \Phi^\pi_\textsf{mis}(w,f;\Delta^*, \Theta^*) & \leq c\cdot \frac{ C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) \sqrt{\frac{1}{NT \kappa}\cdot \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi} \cdot \log\frac{1}{\delta} \log(NT) } \\ & \qquad + \max_{f\in \cV} \Phi_\textsf{mis}^\pi(\tilde w^\pi, f; \Delta^*, \Theta^*). \$ \end{lemma} \begin{proof} See \S\ref{prf:lemma:iv-w-in-conf-good-spec} for a detailed proof. \end{proof} By the definition of $L_\textsf{dr}$, we have \#\label{eq:0623-1} & J(\pi^*) - J(\hat \pi_\textsf{dr}) \\ & \qquad = J(\pi^*) - L_\textsf{dr}(w^{\hat \pi_\textsf{dr}}, v, \hat \pi_\textsf{dr}; \Delta^*, \Theta^*) \\ & \qquad = J(\pi^*) - L_\textsf{dr}(\tilde w^{\hat \pi_\textsf{dr}}, v, \hat \pi_\textsf{dr}; \Delta^*, \Theta^*) + L_\textsf{dr}(\tilde w^{\hat \pi_\textsf{dr}}, v, \hat \pi_\textsf{dr}; \Delta^*, \Theta^*) - L_\textsf{dr}(w^{\hat \pi_\textsf{dr}}, v, \hat \pi_\textsf{dr}; \Delta^*, \Theta^*). \# By Assumption \ref{ass:model-spec}, we have \#\label{eq:0623-2} \left | L_\textsf{dr}(\tilde w^{\hat \pi_\textsf{dr}}, v, \hat \pi_\textsf{dr}; \Delta^*, \Theta^*) - L_\textsf{dr}(w^{\hat \pi_\textsf{dr}}, v, \hat \pi_\textsf{dr}; \Delta^*, \Theta^*)\right | \leq C_{\Delta^*} C_{\Theta^*} \varepsilon^\cW_\textsf{mis} / (1 - \gamma). \# Meanwhile, by Assumption \ref{ass:iv-sl-res} and Lemma \ref{lemma:iv-w-pi-in-conf-spec}, it holds with probability at least $1 - \delta$ that \#\label{eq:0623-3} L_\textsf{dr}(\tilde w^{\hat \pi_\textsf{dr}}, v, \hat \pi_\textsf{dr}; \Delta^*, \Theta^*) & \geq \min_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1}} \min_{(w,v) \in \textsf{conf}_{\alpha_\textsf{mis}, \alpha_\textsf{vf}}(\Delta, \Theta, \hat \pi_\textsf{dr})} L_\textsf{dr}(w, v, \hat \pi_\textsf{dr}; \Delta, \Theta) \\ & \geq \min_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1}} \min_{(w,v) \in \textsf{conf}_{\alpha_\textsf{mis}, \alpha_\textsf{vf}}(\Delta, \Theta, \hat \pi_\textsf{dr})} \hat L_\textsf{dr}(w, v, \hat \pi_\textsf{dr}; \Delta, \Theta) - \hat \epsilon_L \\ & \geq \min_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1}} \min_{(w,v) \in \textsf{conf}_{\alpha_\textsf{mis}, \alpha_\textsf{vf}}(\Delta, \Theta, \pi^*)} \hat L_\textsf{dr}(w, v, \pi^*; \Delta, \Theta) - \hat \epsilon_L \\ & \geq \min_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1}} \min_{(w,v) \in \textsf{conf}_{\alpha_\textsf{mis}, \alpha_\textsf{vf}}(\Delta, \Theta, \pi^*)} L_\textsf{dr}(w, v, \pi^*; \Delta, \Theta) - 2\hat \epsilon_L, \# where in the second inequality, we use Lemma \ref{lemma:iv-mis-hatl-close-dr}; in the third inequality, we use the optimality of $\hat \pi_\textsf{dr}$; in the fourth inequality, we again use Lemma \ref{lemma:iv-mis-hatl-close-dr}.
Further, combining Lemma \ref{lemma:iv-mis-min-delta-close-dr} and \eqref{eq:0623-3}, we have \#\label{eq:0623-4} L_\textsf{dr}(\tilde w^{\hat \pi_\textsf{dr}}, v, \hat \pi_\textsf{dr}; \Delta^*, \Theta^*) \geq \min_{(w,v) \in \textsf{conf}_{\alpha_\textsf{mis}, \alpha_\textsf{vf}}(\Delta^*, \Theta^*, \pi^*)} L_\textsf{dr}(w, v, \pi^*; \Delta^*, \Theta^*) - 2\hat \epsilon_L - \epsilon_L^*. \# Now, by plugging \eqref{eq:0623-2} and \eqref{eq:0623-4} into \eqref{eq:0623-1}, we have \#\label{eq:0623-5} & J(\pi^*) - J(\hat \pi_\textsf{dr}) \\ & \qquad \leq J(\pi^*) - \min_{(w,v) \in \textsf{conf}_{\alpha_\textsf{mis}, \alpha_\textsf{vf}}(\Delta^*, \Theta^*, \pi^*)} L_\textsf{dr}(w, v, \pi^*; \Delta^*, \Theta^*) \\ & \qquad \qquad + 2\hat \epsilon_L + \epsilon_L^* + C_{\Delta^*} C_{\Theta^*} \varepsilon^\cW_\textsf{mis} / (1 - \gamma) \\ & \qquad \leq \max_{(w,v) \in \textsf{conf}_{\alpha_\textsf{mis}, \alpha_\textsf{vf}}(\Delta^*, \Theta^*, \pi^*)} \left | L_\textsf{dr}(w^{\pi^*}, v, \pi^*; \Delta^*, \Theta^*) - L_\textsf{dr}(w, v, \pi^*; \Delta^*, \Theta^*) \right | \\ & \qquad \qquad + 2\hat \epsilon_L + \epsilon^*_L + C_{\Delta^*} C_{\Theta^*} \varepsilon^\cW_\textsf{mis} / (1 - \gamma) \\ & \qquad = \max_{(w,v) \in \textsf{conf}_{\alpha_\textsf{mis}, \alpha_\textsf{vf}}(\Delta^*, \Theta^*, \pi^*)} \left | \Phi_\textsf{mis}^{\pi^*}(w, V^{\pi^*}; \Delta^*, \Theta^*) - \Phi_\textsf{mis}^{\pi^*}(w, v; \Delta^*, \Theta^*) \right | \\ & \qquad \qquad + 2\hat \epsilon_L + \epsilon^*_L + C_{\Delta^*} C_{\Theta^*} \varepsilon^\cW_\textsf{mis} / (1 - \gamma), \# where in the second inequality, we use the fact that $J(\pi^*) = L_\textsf{dr}(w^{\pi^*}, v, \pi^*; \Delta^*, \Theta^*)$; in the last equality, we use the following fact \$ L_\textsf{dr}(w,v,\pi^*; \Delta^*, \Theta^*) - L_\textsf{dr}(w^{\pi^*},v,\pi^*; \Delta^*, \Theta^*) = -\Phi_\textsf{mis}^{\pi^*}(w, V^{\pi^*}; \Delta^*, \Theta^*) + \Phi_\textsf{mis}^{\pi^*}(w, v; \Delta^*, \Theta^*) \$ for any $(w,v)\in \cW\times \cV$. 
We upper bound the first term on the RHS of \eqref{eq:0623-5} as follows, \#\label{eq:0623-6} & \max_{(w,v) \in \textsf{conf}_{\alpha_\textsf{mis}, \alpha_\textsf{vf}}(\Delta^*, \Theta^*, \pi^*)} \left | \Phi_\textsf{mis}^{\pi^*}(w, V^{\pi^*}; \Delta^*, \Theta^*) - \Phi_\textsf{mis}^{\pi^*}(w, v; \Delta^*, \Theta^*) \right | \\ & \qquad \leq \max_{w \in \textsf{conf}_{\alpha_\textsf{mis}}^\textsf{mis}(\Delta^*, \Theta^*, \pi^*)} \max_{v\in \cV} \left | \Phi_\textsf{mis}^{\pi^*}(w, v; \Delta^*, \Theta^*) - \Phi_\textsf{mis}^{\pi^*}(w, \tilde v^{\pi^*}; \Delta^*, \Theta^*) \right | \\ & \qquad \qquad + \max_{w\in \cW} \left | \Phi_\textsf{mis}^{\pi^*}(w, V^{\pi^*}; \Delta^*, \Theta^*) - \Phi_\textsf{mis}^{\pi^*}(w, \tilde v^{\pi^*}; \Delta^*, \Theta^*) \right | \\ & \qquad \leq 2 \max_{w \in \textsf{conf}_{\alpha_\textsf{mis}}^\textsf{mis}(\Delta^*, \Theta^*, \pi^*)} \max_{v\in \cV} \left | \Phi_\textsf{mis}^{\pi^*}(w, v; \Delta^*, \Theta^*) \right | \\ & \qquad \qquad + \max_{w\in \cW} \left | \Phi_\textsf{mis}^{\pi^*}(w, V^{\pi^*}; \Delta^*, \Theta^*) - \Phi_\textsf{mis}^{\pi^*}(w, \tilde v^{\pi^*}; \Delta^*, \Theta^*) \right | \\ & \qquad \leq c\cdot \frac{ C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) \sqrt{\frac{1}{NT \kappa}\cdot \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi} \cdot \log\frac{1}{\delta} \log(NT) } \\ & \qquad \qquad + 2\max_{f\in \cV} \Phi_\textsf{mis}^{\pi^*}(\tilde w^{\pi^*}, f; \Delta^*, \Theta^*) + C_* C_{\Delta^*} C_{\Theta^*} \varepsilon_\textsf{mis}^\cV, \# where in the first inequality, we use the triangle inequality; in the second inequality, we use the fact that $\tilde v^{\pi^*}\in \cV$ by the definition of $\tilde v^{\pi^*}$; in the last inequality, we use Lemma \ref{lemma:iv-w-in-conf-good-spec} and Assumption \ref{ass:model-spec}. Meanwhile, by Assumption \ref{ass:model-spec}, we have \#\label{eq:0623-7} & \max_{f\in \cV} \Phi_\textsf{mis}^{\pi^*}(\tilde w^{\pi^*}, f; \Delta^*, \Theta^*) \\ & \qquad \leq \max_{f\in \cV} \Phi_\textsf{mis}^{\pi^*}(\tilde w^{\pi^*}, f; \Delta^*, \Theta^*) - \max_{f\in \cV} \Phi_\textsf{mis}^{\pi^*}(w^{\pi^*}, f; \Delta^*, \Theta^*) + \max_{f\in \cV} \Phi_\textsf{mis}^{\pi^*}(w^{\pi^*}, f; \Delta^*, \Theta^*) \\ & \qquad = \max_{f\in \cV} \Phi_\textsf{mis}^{\pi^*}(\tilde w^{\pi^*}, f; \Delta^*, \Theta^*) - \max_{f\in \cV} \Phi_\textsf{mis}^{\pi^*}(w^{\pi^*}, f; \Delta^*, \Theta^*) \\ & \qquad \leq \max_{f\in \cV} \left | \Phi_\textsf{mis}^{\pi^*}(\tilde w^{\pi^*}, f; \Delta^*, \Theta^*) - \Phi_\textsf{mis}^{\pi^*}(w^{\pi^*}, f; \Delta^*, \Theta^*) \right | \\ & \qquad \leq C_{\Delta^*} C_{\Theta^*} \varepsilon^\cW_\textsf{mis} / (1 -\gamma), \# where the equality holds since $\Phi_\textsf{mis}^{\pi^*}(w^{\pi^*}, f; \Delta^*, \Theta^*) = 0$ for any $f\in \cV$. Now, by plugging \eqref{eq:0623-6} and \eqref{eq:0623-7} into \eqref{eq:0623-5}, we have \#\label{eq:iv-dr-case1-2-spec} J(\pi^*) - J(\hat\pi_\textsf{dr}) & \leq c\cdot \frac{C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) \sqrt{\frac{1}{NT \kappa} \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi} \log\frac{NT }{\delta}} \\ & \qquad + 3 C_{\Delta^*} C_{\Theta^*} \left(C_* \varepsilon^\cV_\textsf{mis} + \varepsilon^\cW_\textsf{mis}/(1-\gamma)\right).
\# \vskip5pt By combining \eqref{eq:iv-dr-case2-2-spec} and \eqref{eq:iv-dr-case1-2-spec}, we have \$ J(\pi^*) - J(\hat\pi_\textsf{dr}) & \leq c\cdot \frac{C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) \sqrt{\frac{1}{NT \kappa} \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi} \log\frac{NT }{\delta}} \\ & \qquad + 3 C_{\Delta^*} C_{\Theta^*} \min \left\{ C_* \varepsilon^\cV_\textsf{vf} + \varepsilon^\cW_\textsf{vf}/(1-\gamma), ~C_* \varepsilon^\cV_\textsf{mis} + \varepsilon^\cW_\textsf{mis}/(1-\gamma) \right \}, \$ which concludes the proof. \end{proof} \subsection{Proof of Lemma \ref{lemma:iv-mis-hatl-close-dr}} \label{prf:lemma:iv-mis-hatl-close-dr} \begin{proof} By Theorem \ref{thm:hoeffding-mixing}, with probability at least $1 - \delta$, it holds for any $(w, v, \Delta, \Theta, \pi)\in \cW \times \cV \times \cF_0 \times \cF_1 \times \Pi$ that \$ & \left | L_\textsf{dr}(w, v, \pi; \Delta, \Theta) - \hat L_\textsf{dr}(w, v, \pi; \Delta, \Theta) \right | \\ & \qquad \leq c\cdot \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \sqrt{\frac{1}{NT \kappa}\mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi} \log\frac{1}{\delta} \log(NT)}, \$ which concludes the proof of the lemma. \end{proof} \subsection{Proof of Lemma \ref{lemma:iv-mis-min-delta-close-dr}} \label{prf:lemma:iv-mis-min-delta-close-dr} \begin{proof} With a slight abuse of notation, we denote by \$ & (w_0,v_0) \in \argmin_{(w,v)\in \textsf{conf}_{\alpha_\textsf{mis},\alpha_\textsf{vf}}(\Delta^*, \Theta^*, \pi^*)} L_\textsf{dr}(w, v, \pi^*; \Delta^*, \Theta^*), \\ & (w_1,v_1) \in \argmin_{(w,v)\in \textsf{conf}_{\alpha_\textsf{mis},\alpha_\textsf{vf}}(\Delta, \Theta, \pi^*)} L_\textsf{dr}(w, v, \pi^*; \Delta, \Theta). \$ Then we have \#\label{eq:iv-kkk1-dr} & \left | \min_{(w,v)\in \textsf{conf}_{\alpha_\textsf{mis},\alpha_\textsf{vf}}(\Delta^*, \Theta^*, \pi^*)} L_\textsf{dr}(w, v, \pi^*; \Delta^*, \Theta^*) - \min_{(w,v)\in \textsf{conf}_{\alpha_\textsf{mis},\alpha_\textsf{vf}}(\Delta, \Theta, \pi^*)} L_\textsf{dr}(w, v, \pi^*; \Delta, \Theta) \right | \\ & \qquad = \left | L_\textsf{dr}(w_0, v_0, \pi^*; \Delta^*, \Theta^*) - L_\textsf{dr}(w_1, v_1, \pi^*; \Delta, \Theta) \right| \\ & \qquad \leq \left | L_\textsf{dr}(w_0, v_0, \pi^*; \Delta^*, \Theta^*) - L_\textsf{dr}(w^{\pi^*}, v_0, \pi^*; \Delta^*, \Theta^*) \right | \\ & \qquad \qquad + \left | L_\textsf{dr}(w^{\pi^*}, v_1, \pi^*; \Delta^*, \Theta^*) - L_\textsf{dr}(w_1, v_1, \pi^*; \Delta^*, \Theta^*) \right | \\ & \qquad \qquad + \left |L_\textsf{dr}(w_1, v_1, \pi^*; \Delta^*, \Theta^*) - L_\textsf{dr}(w_1, v_1, \pi^*; \Delta, \Theta) \right | \\ & \qquad = \underbrace{\left | \Phi_\textsf{mis}^{\pi^*}(w_0, V^{\pi^*}; \Delta^*, \Theta^*)-\Phi_\textsf{mis}^{\pi^*}(w_0, v_0; \Delta^*, \Theta^*) \right|}_{\text{Term (I)}} \\ & \qquad \qquad + \underbrace{\left| \Phi_\textsf{mis}^{\pi^*}(w_1, V^{\pi^*}; \Delta^*, \Theta^*) - \Phi_\textsf{mis}^{\pi^*}(w_1, v_1; \Delta^*, \Theta^*) \right|}_{\text{Term (II)}} \\ & \qquad \qquad + \underbrace{\left | L_\textsf{dr}(w_1, v_1, \pi^*; \Delta^*, \Theta^*) - L_\textsf{dr}(w_1, v_1, \pi^*; \Delta, \Theta) \right|}_{\text{Term (III)}}, \# where in the first inequality, we use the triangle inequality and the fact that $L_\textsf{dr}(w^{\pi^*}, v, \pi^*; \Delta^*, \Theta^*) = J(\pi^*)$ for any function $v$; while in the last equality, we use the following equality for any $(w,v)$, \$ L_\textsf{dr}(w,v,\pi^*; \Delta^*, \Theta^*) - L_\textsf{dr}(w^{\pi^*},v,\pi^*; \Delta^*, \Theta^*) = -\Phi_\textsf{mis}^{\pi^*}(w, V^{\pi^*}; \Delta^*, \Theta^*) +
\Phi_\textsf{mis}^{\pi^*}(w, v; \Delta^*, \Theta^*). \$ We upper bound terms (I), (II), and (III) on the RHS of \eqref{eq:iv-kkk1-dr}, respectively. \vskip5pt \noindent\textbf{Upper Bounding Term (I).} Note that with probability at least $1 - \delta$, we have \$ \left | \Phi_\textsf{mis}^{\pi^*}(w_0, V^{\pi^*}; \Delta^*, \Theta^*) \right| & \leq \max_{f\in \cV} \left | \Phi_\textsf{mis}^{\pi^*}(w_0, f; \Delta^*, \Theta^*) \right| \\ & = \max_{f\in \cV} \max \left\{ \Phi_\textsf{mis}^{\pi^*}(w_0, f; \Delta^*, \Theta^*), -\Phi_\textsf{mis}^{\pi^*}(w_0, f; \Delta^*, \Theta^*) \right \} \\ & = \max_{f\in \cV} \max \left\{ \Phi_\textsf{mis}^{\pi^*}(w_0, f; \Delta^*, \Theta^*), \Phi_\textsf{mis}^{\pi^*}(w_0, -f; \Delta^*, \Theta^*) \right \} \\ & = \max_{f\in \cV} \Phi_\textsf{mis}^{\pi^*}(w_0, f; \Delta^*, \Theta^*) \\ & \leq c\cdot \frac{ C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) \sqrt{\frac{1}{NT \kappa} \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV} \log\frac{1}{\delta} \log(NT)}, \$ where in the first inequality, we use the fact that $V^{\pi^*} \in \cV$; in the third equality, we use the fact that $\cV$ is symmetric; in the last inequality, by noting that $w_0 \in \cup_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}} \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta, \Theta, \pi^*)$, we use Lemma \ref{lemma:iv-w-in-conf-good}. Similarly, we have \$ \left | \Phi_\textsf{mis}^{\pi^*}(w_0, v_0; \Delta^*, \Theta^*) \right| \leq c\cdot \frac{ C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) \sqrt{\frac{1}{NT \kappa} \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV} \log\frac{1}{\delta} \log(NT)}, \$ which implies that \#\label{eq:iv-kkk2-dr} \text{Term (I)} \leq c\cdot \frac{ C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) \sqrt{\frac{1}{NT \kappa} \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV} \log\frac{1}{\delta} \log(NT)}. \# \vskip5pt \noindent\textbf{Upper Bounding Term (II).} Similar to \eqref{eq:iv-kkk2-dr}, with probability at least $1 - \delta$, we have \#\label{eq:iv-kkk3-dr} \text{Term (II)} \leq c\cdot \frac{ C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) \sqrt{\frac{1}{NT \kappa} \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV} \log\frac{1}{\delta} \log(NT)}. \# \vskip5pt \noindent\textbf{Upper Bounding Term (III).} Note that with probability at least $1 - \delta$, it holds for any $(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}$ that \#\label{eq:iv-kkk4-dr} & \text{Term (III)} \\ & = \EE \left[\frac{1}{T}\sum_{t = 0}^{T-1} \left( \frac{Z_t^\top A_t \pi^*(A_t\given S_t)}{\Delta^*(S_t,A_t) \Theta^*(S_t,Z_t)} - \frac{Z_t^\top A_t \pi^*(A_t\given S_t)}{\Delta(S_t,A_t) \Theta(S_t,Z_t)} \right) w_1(S_t) \left(R_t + \gamma v_1(S_{t+1}) - v_1(S_t) \right) \right] \\ & \leq \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \left(\xi_0 C_{\Theta^*} \sqrt{\frac{C_{\Delta^*}}{NT \kappa} \mathfrak{C}_{\cF_0} \log\frac{1}{\delta} \log(NT)} + \xi_1 C_{\Delta^*} \sqrt{\frac{C_{\Theta^*}}{NT \kappa} \mathfrak{C}_{\cF_1} \log\frac{1}{\delta} \log(NT)} \right), \# where we use the triangle inequality and Assumption \ref{ass:iv-sl-res} in the last inequality. \vskip5pt Now, by plugging \eqref{eq:iv-kkk2-dr}, \eqref{eq:iv-kkk3-dr}, and \eqref{eq:iv-kkk4-dr} into \eqref{eq:iv-kkk1-dr}, we conclude the proof of the lemma. \end{proof} \subsection{Proof of Lemma \ref{lemma:vinconf-spec}}\label{prf:lemma:vinconf-spec} \begin{proof} By the definition of $\tilde v^\pi$ in \eqref{eq:v-tilde-def}, we know that $\tilde v^\pi \in \cV$.
Thus, to show that $\tilde v^\pi \in \textsf{conf}^\textsf{vf}_{\alpha_\textsf{vf}}(\Delta^*, \Theta^*, \pi)$ with high probability, it suffices to show that \#\label{eq:iv-vpi-in-ci-wtp-spec} \max_{g\in \cW} \hat \Phi_\textsf{vf}^\pi(\tilde v^\pi, g; \Delta^*, \Theta^*) - \max_{g\in \cW} \hat \Phi_\textsf{vf}^\pi(\hat v_{\Delta^*, \Theta^*}^\pi, g; \Delta^*, \Theta^*) \leq \alpha_\textsf{vf}. \# In what follows, we show that \eqref{eq:iv-vpi-in-ci-wtp-spec} holds with high probability. For simplicity of notation, we write $\Phi_\textsf{vf}^\pi(v, g; *) = \Phi_\textsf{vf}^\pi(v, g; \Delta^*, \Theta^*)$ and $\hat v^\pi_* = \hat v^\pi_{\Delta^*, \Theta^*}$ for any $(\pi,v,g)$. Note that \#\label{eq:iv-ee1-vf-spec} & \max_{g \in \cW} \hat \Phi_\textsf{vf}^\pi(\tilde v^\pi, g; *) - \max_{g \in \cW} \hat \Phi_\textsf{vf}^\pi(\hat v^\pi_*, g; *) \\ & \qquad = \max_{g \in \cW} \hat \Phi_\textsf{vf}^\pi(\tilde v^\pi, g; *) - \max_{g \in \cW} \Phi_\textsf{vf}^\pi(\tilde v^\pi, g; *) + \max_{g \in \cW} \Phi_\textsf{vf}^\pi(\tilde v^\pi, g; *) - \max_{g \in \cW} \Phi_\textsf{vf}^\pi(\hat v^\pi_*, g; *) \\ & \qquad \qquad + \max_{g \in \cW} \Phi_\textsf{vf}^\pi(\hat v^\pi_*, g; *) - \max_{g \in \cW} \hat \Phi_\textsf{vf}^\pi(\hat v^\pi_*, g; *) \\ & \qquad \leq \max_{g \in \cW} \hat \Phi_\textsf{vf}^\pi(\tilde v^\pi, g; *) - \max_{g \in \cW} \Phi_\textsf{vf}^\pi(\tilde v^\pi, g; *) + \max_{g \in \cW} \Phi_\textsf{vf}^\pi(\hat v^\pi_*, g; *) - \max_{g \in \cW} \hat \Phi_\textsf{vf}^\pi(\hat v^\pi_*, g; *) \\ & \qquad \leq 2 \max_{v\in \cV} \left | \max_{g \in \cW} \hat \Phi_\textsf{vf}^\pi(v, g; *) - \max_{g \in \cW} \Phi_\textsf{vf}^\pi(v, g; *) \right | \\ & \qquad \leq 2 \max_{v\in \cV} \max_{g \in \cW} \left | \hat \Phi_\textsf{vf}^\pi(v, g; *) - \Phi_\textsf{vf}^\pi(v, g; *) \right |, \# where in the first inequality, we use the fact that $\max_{g \in \cW} \Phi_\textsf{vf}^\pi(\tilde v^\pi, g; *) \leq \max_{g \in \cW} \Phi_\textsf{vf}^\pi(v, g; *)$ for any $v \in \cV$ due to the definition of $\tilde v^\pi$ in \eqref{eq:v-tilde-def}; while in the second inequality, we use the fact that $\tilde v^\pi, \hat v^\pi_* \in \cV$. Meanwhile, by Theorem \ref{thm:hoeffding-mixing}, with probability at least $1 - \delta$, it holds for any $(\pi,v,g)\in \Pi \times \cV \times \cW$ that \#\label{eq:iv-ee2-vf-spec} \left | \hat \Phi_\textsf{vf}^\pi(v, g; *) - \Phi_\textsf{vf}^\pi(v, g; *) \right | \leq c\cdot \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \sqrt{\frac{\mathfrak{C}_{\cW,\cV,\Pi}}{NT\kappa} \cdot \log\frac{1}{\delta} \log(NT) }, \# where we use Assumption \ref{ass:upper-bound-delta} and $\|g\|_\infty \leq C_*$ for any $g \in \cW$. Now, combining \eqref{eq:iv-ee1-vf-spec} and \eqref{eq:iv-ee2-vf-spec}, with probability at least $1 - \delta$, we have \$ & \max_{g \in \cW} \hat \Phi_\textsf{vf}^\pi(\tilde v^\pi, g; *) - \max_{g \in \cW} \hat \Phi_\textsf{vf}^\pi(\hat v^\pi_*, g; *) \leq c\cdot \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \sqrt{\frac{\mathfrak{C}_{\cW,\cV,\Pi}}{NT\kappa} \cdot \log\frac{1}{\delta} \log(NT) } = \alpha_\textsf{vf}, \$ which implies that $\tilde v^\pi \in \textsf{conf}^\textsf{vf}_{\alpha_\textsf{vf}}(\Delta^*, \Theta^*, \pi)$ for any $\pi\in \Pi$. This concludes the proof of the lemma.
\end{proof} \subsection{Proof of Lemma \ref{lemma:vinconf-good-spec}}\label{prf:lemma:vinconf-good-spec} \begin{proof} Since $v \in \cup_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}} \textsf{conf}^\textsf{vf}_{\alpha_\textsf{vf}}(\Delta, \Theta, \pi)$, there exists a pair $(\tilde \Delta, \tilde\Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}$ such that $v \in \textsf{conf}^\textsf{vf}_{\alpha_\textsf{vf}}(\tilde \Delta, \tilde \Theta, \pi)$. For simplicity of notation, we define \$ v^\dagger \in \argmin_{v\in \cV} \max_{g\in \cW} \hat \Phi^\pi_\textsf{vf}(v, g; \tilde \Delta, \tilde \Theta), \$ i.e., $v^\dagger = \hat v^\pi_{\tilde \Delta, \tilde \Theta}$, which is defined in \eqref{eq:v-pi-def}. By the definition of $v^\dagger$ and $v \in \textsf{conf}^\textsf{vf}_{\alpha_\textsf{vf}}(\tilde \Delta, \tilde \Theta, \pi)$, we know that \#\label{eq:iv-dd1-vf-spec} \max_{g\in \cW} \hat \Phi^\pi_\textsf{vf}(v,g;\tilde \Delta, \tilde \Theta) - \max_{g\in \cW} \hat \Phi^\pi_\textsf{vf}(v^\dagger,g;\tilde \Delta, \tilde \Theta) \leq \alpha_\textsf{vf}. \# Note that \#\label{eq:iv-dd2-vf-spec} & \max_{g\in \cW} \Phi^\pi_\textsf{vf}(v,g;\Delta^*, \Theta^*) \\ & \qquad \leq 2 \underbrace{\max_{(v,g,\Delta,\Theta)\in (\cV, \cW, \cF_0, \cF_1)} \left | \Phi^\pi_\textsf{vf}(v,g;\Delta, \Theta) - \hat \Phi^\pi_\textsf{vf}(v,g;\Delta, \Theta) \right|}_{\text{Term (I)}} + \underbrace{\max_{g\in \cW} \Phi^\pi_\textsf{vf}(v^\dagger,g;\tilde \Delta, \tilde \Theta)}_{\text{Term (II)}} \\ & \qquad \qquad + \underbrace{\max_{g\in \cW} \left | \hat \Phi^\pi_\textsf{vf}(v,g;\Delta^*, \Theta^*) - \hat \Phi^\pi_\textsf{vf}(v,g;\tilde \Delta, \tilde \Theta) \right|}_{\text{Term (III)}} + \alpha_\textsf{vf}, \# where we use \eqref{eq:iv-dd1-vf-spec} in the last inequality. Now we upper bound terms (I), (II), and (III) on the RHS of \eqref{eq:iv-dd2-vf-spec}. \vskip5pt \noindent\textbf{Upper Bounding Term (I).} By Theorem \ref{thm:hoeffding-mixing}, with probability at least $1 - \delta$, it holds for any $(v,g,\Delta,\Theta, \pi) \in (\cV, \cW, \cF_0, \cF_1, \Pi)$ that \$ \left | \hat \Phi_\textsf{vf}^\pi(v, g; \Delta, \Theta) - \Phi_\textsf{vf}^\pi(v, g; \Delta, \Theta) \right | \leq c\cdot \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \sqrt{\frac{\mathfrak{C}_{\cF_0,\cF_1,\cW,\cV, \Pi}}{NT \kappa}\log\frac{1}{\delta} \log(NT)}, \$ which implies that with probability at least $1 - \delta$, we have \#\label{eq:iv-dd3-vf-spec} \text{Term (I)} \leq c\cdot \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \sqrt{\frac{\mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi}}{NT \kappa}\log\frac{1}{\delta}\log(NT) }. \# \vskip5pt \noindent\textbf{Upper Bounding Term (II).} Recall that $\tilde v^\pi \in \argmin_{v\in \cV} \max_{w\in \cW} \Phi_\textsf{vf}^\pi(v,w; \Delta^*, \Theta^*)$.
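As an aside, when $\cV$ and $\cW$ are finite classes, the min--max estimators appearing here (e.g., $v^\dagger$ and $\tilde v^\pi$) reduce to minimising a row-wise maximum. The following Python sketch is purely illustrative and plays no role in the formal argument; the matrix \texttt{Phi} is a hypothetical table of values of $\Phi^\pi_\textsf{vf}(v, g; \cdot)$, not an object from the paper.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
Phi = rng.normal(size=(6, 4))        # Phi[v, g]: rows index v in V, cols g in W

worst_case = Phi.max(axis=1)         # max over g in W for each candidate v
v_dagger = int(worst_case.argmin())  # v minimising the worst-case criterion
saddle_value = worst_case[v_dagger]  # min_v max_g Phi(v, g)
\end{verbatim}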
Note that \#\label{eq:iv-ee3-vf-spec} & \max_{g\in \cW} \Phi_\textsf{vf}^\pi(v^\dagger, g; \tilde \Delta, \tilde \Theta) \\ & \qquad \leq 2 \max_{v\in \cV} \max_{g\in \cW} \left | \Phi_\textsf{vf}^\pi(v, g; \tilde \Delta, \tilde \Theta) - \hat \Phi_\textsf{vf}^\pi(v, g; \tilde \Delta, \tilde \Theta) \right | \\ & \qquad \qquad + \max_{g\in \cW} \hat \Phi_\textsf{vf}^\pi(v^\dagger, g; \tilde \Delta, \tilde \Theta) - \max_{g\in \cW} \hat \Phi_\textsf{vf}^\pi( \tilde v^\pi, g; \tilde \Delta, \tilde \Theta) + \max_{g\in \cW} \Phi_\textsf{vf}^\pi(\tilde v^\pi, g; \tilde \Delta, \tilde \Theta) \\ & \qquad \leq 2 \max_{v\in \cV} \max_{g\in \cW} \left | \Phi_\textsf{vf}^\pi(v, g; \tilde \Delta, \tilde \Theta) - \hat \Phi_\textsf{vf}^\pi(v, g; \tilde \Delta, \tilde \Theta) \right | \\ & \qquad \qquad + \max_{g\in \cW} \left | \Phi_\textsf{vf}^\pi(\tilde v^\pi, g; \tilde \Delta, \tilde \Theta) - \Phi_\textsf{vf}^\pi(\tilde v^\pi, g; \Delta^*, \Theta^*) \right | + \max_{g\in \cW} \Phi_\textsf{vf}^\pi(\tilde v^\pi, g; \Delta^*, \Theta^*), \# where we use the triangle inequality and the fact that $v^\dagger \in \argmin_{v\in \cV} \max_{g\in \cW} \hat \Phi^\pi_\textsf{vf}(v, g; \tilde \Delta, \tilde \Theta)$ in the last inequality. Meanwhile, by Theorem \ref{thm:hoeffding-mixing}, with probability at least $1 - \delta$, it holds for any $(v,g,\pi)\in \cV\times \cW \times \Pi$ that \#\label{eq:iv-ee4-vf-spec} \left | \Phi_\textsf{vf}^\pi(v, g; \tilde \Delta, \tilde \Theta) - \hat \Phi_\textsf{vf}^\pi(v, g; \tilde \Delta, \tilde \Theta) \right | \leq c\cdot \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \sqrt{\frac{\mathfrak{C}_{\cV,\cW,\Pi}}{NT \kappa}\log\frac{1}{\delta} \log(NT)}. \# Also, we upper bound the second term on the RHS of \eqref{eq:iv-ee3-vf-spec} with probability at least $1- \delta$ by a similar argument as in \eqref{eq:iv-dd5-vf-2}, \#\label{eq:iv-ee5-vf-spec} & \left | \Phi_\textsf{vf}^\pi(\tilde v^\pi, g; \tilde \Delta, \tilde \Theta) - \Phi_\textsf{vf}^\pi(\tilde v^\pi, g; \Delta^*, \Theta^*) \right | \\ & \leq \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \left(\xi_0 C_{\Theta^*} \sqrt{\frac{C_{\Delta^*}}{NT \kappa} \mathfrak{C}_{\cF_0}\log\frac{1}{\delta} \log(NT)} + \xi_1 C_{\Delta^*} \sqrt{\frac{C_{\Theta^*}}{NT \kappa} \mathfrak{C}_{\cF_1} \log\frac{1}{\delta} \log(NT)} \right), \# where we use the facts that $\|v\|_\infty \leq 1/ (1 - \gamma)$ and $\|g\|_\infty \leq C_*$, the Cauchy--Schwarz inequality, and Assumption \ref{ass:iv-sl-res} together with $(\tilde \Delta, \tilde \Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}$. Now, by plugging \eqref{eq:iv-ee4-vf-spec} and \eqref{eq:iv-ee5-vf-spec} into \eqref{eq:iv-ee3-vf-spec}, with probability at least $1 - \delta$, it holds for any $(\Delta, \Theta, \pi) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1} \times \Pi$ that \#\label{eq:iv-dd4-vf-spec} \text{Term (II)} \leq c\cdot \frac{C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) \sqrt{\frac{\mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi}}{NT \kappa}\log\frac{1}{\delta} \log(NT)} + \max_{g\in \cW} \Phi_\textsf{vf}^\pi(\tilde v^\pi, g; \Delta^*, \Theta^*).
\# \vskip5pt \noindent\textbf{Upper Bounding Term (III).} Note that \#\label{eq:iv-dd5-vf-spec} & \left | \hat \Phi^\pi_\textsf{vf}(v,g;\Delta^*, \Theta^*) - \hat \Phi^\pi_\textsf{vf}(v,g;\tilde \Delta, \tilde \Theta) \right| \\ & \leq \left | \left(\hat \EE - \EE\right) \left[ \frac{1}{T} \sum_{t = 0}^{T-1} g(S_t) \left ( \frac{Z_t^\top A_t\pi(A_t\given S_t) }{\Delta^*(S_t,A_t)\Theta^*(S_t,Z_t)} - \frac{Z_t^\top A_t\pi(A_t\given S_t)}{\tilde \Delta(S_t,A_t) \tilde \Theta(S_t,Z_t)} \right) \left( R_t + \gamma v(S_{t+1}) \right) \right ] \right | \\ & \qquad + \left | \EE \left[ \frac{1}{T} \sum_{t = 0}^{T-1} g(S_t) \left ( \frac{Z_t^\top A_t\pi(A_t\given S_t) }{\Delta^*(S_t,A_t)\Theta^*(S_t,Z_t)} - \frac{Z_t^\top A_t\pi(A_t\given S_t)}{\tilde \Delta(S_t,A_t) \tilde \Theta(S_t,Z_t)} \right) \left( R_t + \gamma v(S_{t+1}) \right) \right ] \right |. \# For the first term on the RHS of \eqref{eq:iv-dd5-vf-spec}, by Theorem \ref{thm:hoeffding-mixing}, with probability at least $1- \delta$, it holds for any $(v,g,\pi)\in \cV\times \cW \times \Pi$ that \#\label{eq:iv-dd5-vf-1-spec} & \left | \left(\hat \EE - \EE\right) \left[ \frac{1}{T} \sum_{t = 0}^{T-1} g(S_t) \left ( \frac{Z_t^\top A_t\pi(A_t\given S_t) }{\Delta^*(S_t,A_t)\Theta^*(S_t,Z_t)} - \frac{Z_t^\top A_t\pi(A_t\given S_t)}{\tilde \Delta(S_t,A_t) \tilde \Theta(S_t,Z_t)} \right) \left( R_t + \gamma v(S_{t+1}) \right) \right ] \right | \\ & \qquad \leq c\cdot \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \sqrt{\frac{\mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi}}{NT \kappa}\log\frac{1}{\delta} \log(NT)}. \# For the second term on the RHS of \eqref{eq:iv-dd5-vf-spec}, by a similar argument as in \eqref{eq:iv-dd5-vf-2}, with probability at least $1- \delta$, it holds that \#\label{eq:iv-dd5-vf-2-spec} & \left | \EE \left[ \frac{1}{T} \sum_{t = 0}^{T-1} g(S_t) \left ( \frac{Z_t^\top A_t\pi(A_t\given S_t) }{\Delta^*(S_t,A_t)\Theta^*(S_t,Z_t)} - \frac{Z_t^\top A_t\pi(A_t\given S_t)}{\tilde \Delta(S_t,A_t) \tilde \Theta(S_t,Z_t)} \right) \left( R_t + \gamma v(S_{t+1}) \right) \right ] \right | \\ & \leq \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \left(\xi_0 C_{\Theta^*} \sqrt{\frac{C_{\Delta^*}}{NT \kappa} \mathfrak{C}_{\cF_0}\log\frac{1}{\delta} \log(NT)} + \xi_1 C_{\Delta^*} \sqrt{\frac{C_{\Theta^*}}{NT \kappa} \mathfrak{C}_{\cF_1} \log\frac{1}{\delta} \log(NT)} \right), \# where we use the facts that $\|v\|_\infty \leq 1/ (1 - \gamma)$ and $\|g\|_\infty \leq C_*$, the Cauchy--Schwarz inequality, and Assumption \ref{ass:iv-sl-res} together with the fact that $(\tilde \Delta, \tilde\Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}$. Now, by plugging \eqref{eq:iv-dd5-vf-1-spec} and \eqref{eq:iv-dd5-vf-2-spec} into \eqref{eq:iv-dd5-vf-spec}, it holds with probability at least $1 - \delta$ that \#\label{eq:879434-spec} & \left | \hat \Phi^\pi_\textsf{vf}(v,g;\Delta^*, \Theta^*) - \hat \Phi^\pi_\textsf{vf}(v,g;\tilde \Delta, \tilde \Theta) \right| \\ & \qquad \leq c\cdot \frac{C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) \sqrt{\frac{1}{NT\kappa}\cdot \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi} \cdot \log\frac{1}{\delta} \log(NT)}.
\# \vskip5pt Now, by plugging \eqref{eq:iv-dd3-vf-spec}, \eqref{eq:iv-dd4-vf-spec}, and \eqref{eq:879434-spec} into \eqref{eq:iv-dd2-vf-spec}, with probability at least $1 - \delta$, it holds for any $v \in \cup_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}} \textsf{conf}^\textsf{vf}_{\alpha_\textsf{vf}}(\Delta, \Theta, \pi)$ and $\pi \in \Pi$ that \$ \max_{g\in \cW} \Phi^\pi_\textsf{vf}(v,g;\Delta^*, \Theta^*) & \leq c\cdot \frac{C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) \sqrt{\frac{1}{NT\kappa}\cdot \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi} \cdot \log\frac{1}{\delta} \log(NT)} \\ & \qquad + \max_{g\in \cW} \Phi_\textsf{vf}^\pi(\tilde v^\pi, g; \Delta^*, \Theta^*), \$ which concludes the proof of the lemma. \end{proof} \subsection{Proof of Lemma \ref{lemma:iv-w-pi-in-conf-spec}}\label{prf:lemma:iv-w-pi-in-conf-spec} \begin{proof} First, by the definition of $\tilde w^\pi$ in \eqref{eq:v-tilde-def}, we know that $\tilde w^\pi \in \cW$. For simplicity of notation, we write $\Phi_\textsf{mis}^\pi(w, f; *) = \Phi_\textsf{mis}^\pi(w, f; \Delta^*, \Theta^*)$ and $\hat w^\pi_* = \hat w^\pi_{\Delta^*, \Theta^*}$ for any $(\pi,w,f)$. Note that \#\label{eq:iv-ee1-spec} & \max_{f\in \cV} \hat \Phi_\textsf{mis}^\pi(\tilde w^\pi, f; *) - \max_{f\in \cV} \hat \Phi_\textsf{mis}^\pi(\hat w^\pi_*, f; *) \\ & \qquad = \max_{f\in \cV} \hat \Phi_\textsf{mis}^\pi(\tilde w^\pi, f; *) - \max_{f\in \cV} \Phi_\textsf{mis}^\pi(\tilde w^\pi, f; *) + \max_{f\in \cV} \Phi_\textsf{mis}^\pi(\tilde w^\pi, f; *) - \max_{f\in \cV} \Phi_\textsf{mis}^\pi(\hat w^\pi_*, f; *) \\ & \qquad \qquad + \max_{f\in \cV} \Phi_\textsf{mis}^\pi(\hat w^\pi_*, f; *) - \max_{f\in \cV} \hat \Phi_\textsf{mis}^\pi(\hat w^\pi_*, f; *) \\ & \qquad \leq \max_{f\in \cV} \hat \Phi_\textsf{mis}^\pi(\tilde w^\pi, f; *) - \max_{f\in \cV} \Phi_\textsf{mis}^\pi(\tilde w^\pi, f; *) + \max_{f\in \cV} \Phi_\textsf{mis}^\pi(\hat w^\pi_*, f; *) - \max_{f\in \cV} \hat \Phi_\textsf{mis}^\pi(\hat w^\pi_*, f; *) \\ & \qquad \leq 2 \max_{w\in \cW} \left | \max_{f\in \cV} \hat \Phi_\textsf{mis}^\pi(w, f; *) - \max_{f\in \cV} \Phi_\textsf{mis}^\pi(w, f; *) \right | \\ & \qquad \leq 2 \max_{w\in \cW} \max_{f\in \cV} \left | \hat \Phi_\textsf{mis}^\pi(w, f; *) - \Phi_\textsf{mis}^\pi(w, f; *) \right |, \# where in the first inequality, we use the fact that $\max_{f\in \cV} \Phi_\textsf{mis}^\pi(\tilde w^\pi, f; *) \leq \max_{f\in \cV} \Phi_\textsf{mis}^\pi(\hat w_*^\pi, f; *)$ by the definition of $\tilde w^\pi$ in \eqref{eq:v-tilde-def}; while in the second inequality, we use the fact that $\tilde w^\pi, \hat w^\pi_* \in \cW$. Meanwhile, by Theorem \ref{thm:hoeffding-mixing}, with probability at least $1 - \delta$, it holds for any $(w,f,\pi)\in \cW\times \cV \times \Pi$ that \#\label{eq:iv-ee2-spec} \left | \hat \Phi_\textsf{mis}^\pi(w, f; *) - \Phi_\textsf{mis}^\pi(w, f; *) \right | \leq c\cdot \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \sqrt{\frac{1}{NT \kappa} \mathfrak{C}_{\cV,\cW,\Pi}\log\frac{1}{\delta} \log(NT)}, \# where we use Assumption \ref{ass:upper-bound-delta}.
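For intuition only, the confidence set $\textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}$ has a simple finite-class analogue: a candidate $w$ is retained exactly when its empirical worst-case criterion is within $\alpha_\textsf{mis}$ of the minimum, so the empirical minimiser always survives. The following minimal Python sketch is illustrative only; the table \texttt{Phi\_hat} is a hypothetical stand-in for $\hat\Phi^\pi_\textsf{mis}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
Phi_hat = rng.normal(size=(8, 5))   # Phi_hat[w, f]: rows w in W, cols f in V
alpha_mis = 0.3

crit = Phi_hat.max(axis=1)          # max_f Phi_hat(w, f) for each w
w_hat = int(crit.argmin())          # empirical minimiser
conf = [w for w in range(len(crit)) if crit[w] - crit[w_hat] <= alpha_mis]
assert w_hat in conf                # the minimiser is always retained
\end{verbatim}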
Now, combining \eqref{eq:iv-ee1-spec} and \eqref{eq:iv-ee2-spec}, with probability at least $1 - \delta$, we have \$ & \max_{f\in \cV} \hat \Phi_\textsf{mis}^\pi(\tilde w^\pi, f; *) - \max_{f\in \cV} \hat \Phi_\textsf{mis}^\pi(\hat w^\pi_*, f; *) \\ & \qquad \leq c\cdot \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \sqrt{\frac{1}{NT \kappa} \mathfrak{C}_{\cV,\cW,\Pi}\log\frac{1}{\delta} \log(NT)} = \alpha_\textsf{mis}, \$ which implies that $\tilde w^\pi \in \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta^*, \Theta^*, \pi)$. This concludes the proof of the lemma. \end{proof} \subsection{Proof of Lemma \ref{lemma:iv-w-in-conf-good-spec}} \label{prf:lemma:iv-w-in-conf-good-spec} \begin{proof} Since $w \in \cup_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}} \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta, \Theta, \pi)$, there exists a pair $(\tilde \Delta, \tilde\Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}$ such that $w \in \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\tilde \Delta, \tilde \Theta, \pi)$. For simplicity of notation, we define \$ w^\dagger \in \argmin_{w\in \cW} \max_{f\in \cV} \hat \Phi^\pi_\textsf{mis}(w, f;\tilde \Delta, \tilde \Theta), \$ i.e., $w^\dagger = \hat w^\pi_{\tilde \Delta, \tilde \Theta}$, which is defined in \eqref{eq:w-pi-def}. By the definition of $w^\dagger$ and the fact that $w \in \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\tilde \Delta, \tilde \Theta, \pi)$, we have \#\label{eq:iv-dd1-spec} \max_{f\in \cV} \hat \Phi^\pi_\textsf{mis}(w,f;\tilde \Delta, \tilde \Theta) - \max_{f\in \cV} \hat \Phi^\pi_\textsf{mis}(w^\dagger,f;\tilde \Delta, \tilde \Theta) \leq \alpha_\textsf{mis}. \# Further, we observe that \#\label{eq:iv-dd2-spec} & \max_{f\in \cV} \Phi^\pi_\textsf{mis}(w,f;\Delta^*, \Theta^*) \\ & \qquad \leq \underbrace{\max_{(w,f,\Delta,\Theta)\in (\cW, \cV, \cF_0, \cF_1)} \left | \Phi^\pi_\textsf{mis}(w,f;\Delta, \Theta) - \hat \Phi^\pi_\textsf{mis}(w,f;\Delta, \Theta) \right|}_{\text{Term (I)}} + \underbrace{\max_{f\in \cV} \Phi^\pi_\textsf{mis}(w^\dagger,f;\tilde \Delta, \tilde \Theta)}_{\text{Term (II)}} \\ & \qquad \qquad + \underbrace{\max_{f\in \cV} \left | \hat \Phi^\pi_\textsf{mis}(w,f;\Delta^*, \Theta^*) - \hat \Phi^\pi_\textsf{mis}(w,f;\tilde \Delta, \tilde \Theta) \right|}_{\text{Term (III)}} + \alpha_\textsf{mis}, \# where we use \eqref{eq:iv-dd1-spec} in the last inequality. Now we upper bound terms (I), (II), and (III) on the RHS of \eqref{eq:iv-dd2-spec}. \vskip5pt \noindent\textbf{Upper Bounding Term (I).} By Theorem \ref{thm:hoeffding-mixing}, with probability at least $1 - \delta$, it holds for any $(w,f,\Delta,\Theta,\pi) \in (\cW, \cV, \cF_0, \cF_1, \Pi)$ that \$ \left | \hat \Phi_\textsf{mis}^\pi(w, f; \Delta, \Theta) - \Phi_\textsf{mis}^\pi(w, f; \Delta, \Theta) \right | \leq c\cdot \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \sqrt{\frac{1}{NT \kappa}\mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi}\log\frac{1}{\delta} \log(NT)}, \$ which implies that with probability at least $1 - \delta$, we have \#\label{eq:iv-dd3-spec} \text{Term (I)} \leq c\cdot \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \sqrt{\frac{1}{NT \kappa}\mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi}\log\frac{1}{\delta} \log(NT)}.
\# \vskip5pt \noindent\textbf{Upper Bounding Term (II).} Recall that $\tilde w^\pi\in \argmin_{w\in \cW} \max_{v\in \cV} \Phi_\textsf{mis}^\pi(w, v; \Delta^*, \Theta^*)$. Note that \#\label{eq:iv-ee3-spec} & \max_{f\in \cV} \Phi_\textsf{mis}^\pi(w^\dagger, f; \tilde \Delta, \tilde \Theta) \\ & \qquad = \max_{f\in \cV} \Phi_\textsf{mis}^\pi(w^\dagger, f; \tilde \Delta, \tilde \Theta) - \max_{f\in \cV} \hat \Phi_\textsf{mis}^\pi(w^\dagger, f; \tilde \Delta, \tilde \Theta) + \max_{f\in \cV} \hat \Phi_\textsf{mis}^\pi(w^\dagger, f; \tilde \Delta, \tilde \Theta) \\ & \qquad \qquad - \max_{f\in \cV} \hat \Phi_\textsf{mis}^\pi( \tilde w^\pi, f; \tilde \Delta, \tilde \Theta) + \max_{f\in \cV} \hat \Phi_\textsf{mis}^\pi( \tilde w^\pi, f; \tilde \Delta, \tilde \Theta) - \max_{f\in \cV} \Phi_\textsf{mis}^\pi(\tilde w^\pi, f; \tilde \Delta, \tilde \Theta) \\ & \qquad \qquad + \max_{f\in \cV} \Phi_\textsf{mis}^\pi(\tilde w^\pi, f; \tilde \Delta, \tilde \Theta) \\ & \qquad \leq 2 \max_{w\in \cW} \max_{f\in \cV} \left | \Phi_\textsf{mis}^\pi(w, f; \tilde \Delta, \tilde \Theta) - \hat \Phi_\textsf{mis}^\pi(w, f; \tilde \Delta, \tilde \Theta) \right | \\ & \qquad \qquad + \max_{f\in \cV} \left | \Phi_\textsf{mis}^\pi(\tilde w^\pi, f; \tilde \Delta, \tilde \Theta) - \Phi_\textsf{mis}^\pi(\tilde w^\pi, f; \Delta^*, \Theta^*) \right | + \max_{f\in \cV} \Phi_\textsf{mis}^\pi(\tilde w^\pi, f; \Delta^*, \Theta^*), \# where we use the triangle inequality and the fact that $w^\dagger \in \argmin_{w\in \cW} \max_{f\in \cV} \hat \Phi^\pi_\textsf{mis}(w, f; \tilde \Delta, \tilde \Theta)$ in the last inequality. Meanwhile, by Theorem \ref{thm:hoeffding-mixing}, with probability at least $1 - \delta$, it holds for any $(w,f,\pi)\in \cW\times \cV \times \Pi$ that \#\label{eq:iv-ee4-spec} \left | \Phi_\textsf{mis}^\pi(w, f; \tilde \Delta, \tilde \Theta) - \hat \Phi_\textsf{mis}^\pi(w, f; \tilde \Delta, \tilde \Theta) \right | \leq c\cdot \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \sqrt{\frac{1}{NT \kappa}\mathfrak{C}_{\cV,\cW,\Pi}\log\frac{1}{\delta} \log(NT)}. \# Also, we upper bound the second term on the RHS of \eqref{eq:iv-ee3-spec} with probability at least $1- \delta$ as follows, \#\label{eq:iv-ee5-spec} & \left | \Phi_\textsf{mis}^\pi(\tilde w^\pi, f; \tilde \Delta, \tilde \Theta) - \Phi_\textsf{mis}^\pi( \tilde w^\pi, f; \Delta^*, \Theta^*) \right | \\ & = \left | \EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} \left ( \frac{Z_t^\top A_t\pi(A_t\given S_t)\tilde w^\pi(S_t) }{\Delta^*(S_t,A_t)\Theta^*(S_t,Z_t)} - \frac{Z_t^\top A_t\pi(A_t\given S_t) \tilde w^\pi(S_t)}{\tilde \Delta(S_t,A_t) \tilde \Theta(S_t,Z_t)} \right) \left(f(S_t) - \gamma f(S_{t+1})\right) \right ] \right | \\ & \leq \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \left(\xi_0 C_{\Theta^*} \sqrt{\frac{C_{\Delta^*}}{NT \kappa}\mathfrak{C}_{\cF_0}\log\frac{1}{\delta} \log(NT)} + \xi_1 C_{\Delta^*} \sqrt{\frac{C_{\Theta^*}}{NT \kappa} \mathfrak{C}_{\cF_1} \log\frac{1}{\delta} \log(NT)} \right), \# where we use the Cauchy--Schwarz inequality and Assumption \ref{ass:iv-sl-res} in the last inequality. Now, by plugging \eqref{eq:iv-ee4-spec} and \eqref{eq:iv-ee5-spec} into \eqref{eq:iv-ee3-spec}, it holds with probability at least $1 - \delta$ that \#\label{eq:iv-dd4-spec} \text{Term (II)} \leq c\cdot \frac{C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1 ) \sqrt{\frac{\mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi}}{NT \kappa} \log\frac{1}{\delta} \log(NT)} + \max_{f\in \cV} \Phi_\textsf{mis}^\pi(\tilde w^\pi, f; \Delta^*, \Theta^*).
\# \vskip5pt \noindent\textbf{Upper Bounding Term (III).} Note that \#\label{eq:iv-dd5-spec} & \left | \hat \Phi^\pi_\textsf{mis}(w,f;\Delta^*, \Theta^*) - \hat \Phi^\pi_\textsf{mis}(w,f;\tilde \Delta, \tilde \Theta) \right| \\ & \leq \left | \left(\hat \EE - \EE\right) \left[ \frac{1}{T} \sum_{t = 0}^{T-1} \left ( \frac{Z_t^\top A_t\pi(A_t\given S_t) w(S_t) }{\Delta^*(S_t,A_t)\Theta^*(S_t,Z_t)} - \frac{Z_t^\top A_t\pi(A_t\given S_t) w(S_t)}{\tilde \Delta(S_t,A_t) \tilde \Theta(S_t,Z_t)} \right) \left(f(S_t) - \gamma f(S_{t+1}) \right) \right ] \right | \\ & \quad + \left | \EE \left[ \frac{1}{T} \sum_{t = 0}^{T-1} \left ( \frac{Z_t^\top A_t\pi(A_t\given S_t) w(S_t) }{\Delta^*(S_t,A_t)\Theta^*(S_t,Z_t)} - \frac{Z_t^\top A_t\pi(A_t\given S_t) w(S_t)}{\tilde \Delta(S_t,A_t) \tilde \Theta(S_t,Z_t)} \right) \left(f(S_t) - \gamma f(S_{t+1}) \right) \right ] \right |. \# For the first term on the RHS of \eqref{eq:iv-dd5-spec}, by Theorem \ref{thm:hoeffding-mixing}, with probability at least $1- \delta$, it holds for any $(w,f,\pi)\in \cW\times \cV \times \Pi$ that \#\label{eq:iv-dd5-1-spec} & \left | \left(\hat \EE - \EE\right) \left[ \frac{1}{T} \sum_{t = 0}^{T-1} \left ( \frac{Z_t^\top A_t\pi(A_t\given S_t) w(S_t) }{\Delta^*(S_t,A_t)\Theta^*(S_t,Z_t)} - \frac{Z_t^\top A_t\pi(A_t\given S_t) w(S_t)}{\tilde \Delta(S_t,A_t) \tilde \Theta(S_t,Z_t)} \right) \left(f(S_t) - \gamma f(S_{t+1}) \right) \right ] \right | \\ & \qquad \leq c\cdot \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \sqrt{\frac{\mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi}}{NT \kappa}\log\frac{1}{\delta} \log(NT)}. \# For the second term on the RHS of \eqref{eq:iv-dd5-spec}, by a similar argument as in \eqref{eq:iv-dd5-vf-2}, it holds with probability at least $1 - \delta$ that \#\label{eq:iv-dd5-2-spec} & \left | \EE \left[ \frac{1}{T} \sum_{t = 0}^{T-1} \left ( \frac{Z_t^\top A_t\pi(A_t\given S_t) w(S_t)}{\Delta^*(S_t,A_t)\Theta^*(S_t,Z_t)} - \frac{Z_t^\top A_t\pi(A_t\given S_t)w(S_t)}{\tilde \Delta(S_t,A_t) \tilde \Theta(S_t,Z_t)} \right) \left(f(S_t) - \gamma f(S_{t+1}) \right) \right ] \right | \\ & \leq \frac{2 C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \left(\xi_0 C_{\Theta^*} \sqrt{\frac{C_{\Delta^*}}{NT \kappa} \mathfrak{C}_{\cF_0}\log\frac{1}{\delta} \log(NT)} + \xi_1 C_{\Delta^*} \sqrt{\frac{C_{\Theta^*}}{NT \kappa} \mathfrak{C}_{\cF_1} \log\frac{1}{\delta} \log(NT)} \right), \# where we use the facts that $\|f\|_\infty \leq 1/ (1 - \gamma)$ and $\|w\|_\infty \leq C_*$, the Cauchy--Schwarz inequality, and Assumption \ref{ass:iv-sl-res} together with the fact that $(\tilde \Delta, \tilde\Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}$. Now, by plugging \eqref{eq:iv-dd5-1-spec} and \eqref{eq:iv-dd5-2-spec} into \eqref{eq:iv-dd5-spec}, with probability at least $1 - \delta$, it holds for any $w \in \cup_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}} \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta, \Theta, \pi)$ and $(f,\pi)\in \cV\times \Pi$ that \#\label{eq:732847-spec} & \left | \hat \Phi^\pi_\textsf{mis}(w,f;\Delta^*, \Theta^*) - \hat \Phi^\pi_\textsf{mis}(w,f;\tilde \Delta, \tilde \Theta) \right| \\ & \qquad \leq c \cdot \frac{ C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) \sqrt{\frac{1}{NT \kappa}\cdot \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi} \cdot \log\frac{1}{\delta} \log(NT) }.
\# \vskip5pt Now, by plugging \eqref{eq:iv-dd3-spec}, \eqref{eq:iv-dd4-spec}, and \eqref{eq:732847-spec} into \eqref{eq:iv-dd2-spec}, with probability at least $1 - \delta$, it holds for any $\pi \in \Pi$ and $w \in \cup_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}} \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta, \Theta, \pi)$ that \$ \max_{f\in \cV} \Phi^\pi_\textsf{mis}(w,f;\Delta^*, \Theta^*) & \leq c\cdot \frac{ C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) \sqrt{\frac{1}{NT \kappa}\cdot \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi} \cdot \log\frac{1}{\delta} \log(NT) } \\ & \qquad + \max_{f\in \cV} \Phi_\textsf{mis}^\pi(\tilde w^\pi, f; \Delta^*, \Theta^*), \$ which concludes the proof of the lemma. \end{proof} \section{Auxiliary Results} We introduce auxiliary results used in the paper. We provide the proofs of these results in \S\ref{sec:prf:aux}. We first introduce the following definition of the $\beta$-mixing coefficient. \begin{definition} Let $\{Z_t \}_{t \geq 0}$ be a sequence of random variables. For any $i,j\in \NN \cup\{\infty\}$, we denote by $\sigma_i^j$ the $\sigma$-algebra generated by $\{Z_k\}_{i\leq k \leq j}$. The $\beta$-mixing coefficient of $\{Z_t \}_{t \geq 0}$ is defined as $\beta(t) = \sup_n\EE [\sup_{A\in \sigma_{n + t}^\infty} |\PP(A\given \sigma_0^n) - \PP(A) |]$. \end{definition} We introduce the following upper bound on the $\beta$-mixing coefficient of Markov chains. \begin{lemma}\label{lemma:davydov} Suppose $\{Z_t \}_{t \geq 0}$ is a Markov chain with initial distribution $\zeta$. It holds that \$ \beta(t) \leq \frac{1}{2} \int \|p_{t'}(\cdot \given z) - p_\text{stat}(\cdot)\|_\text{TV} \ud p_\text{stat}(z) + \frac{3}{2} \int \|p_{t'}(\cdot \given z) - p_\text{stat}(\cdot)\|_\text{TV} \ud \zeta(z), \$ where $t' = \lfloor t/2 \rfloor$ and $p_n(\cdot \given z)$ is the marginal distribution of $Z_n$ given $Z_0 = z$ for any $n \geq 0$. \end{lemma} \begin{proof} See the proof of Lemma 1 in \cite{meitz2021subgeometric} for a detailed proof. \end{proof} Following from Lemma \ref{lemma:davydov}, we can upper bound the $\beta$-mixing coefficient for a Markov chain $\{Z_t \}_{t \geq 0}$. Before that, we impose the following assumption on $\{Z_t \}_{t \geq 0}$. \begin{assumption}\label{ass:mc-ass} The Markov chain $\{Z_t \}_{t \geq 0}$ with initial distribution $\zeta$ admits a unique stationary distribution $p_\text{stat}$ over $\cZ$ and is geometrically ergodic, i.e., there exists a function $\varphi\colon \cZ \to [0, \infty)$ and a constant $\kappa > 0$ such that, for any $t \geq 0$ and $z_0 \in \cZ$, \$ \left\| p_\text{stat}(\cdot) - p_t(\cdot \given z_0) \right\|_\text{TV} \leq \varphi(z_0) \cdot \exp\left(-2\kappa t\right), \$ where $p_t(\cdot \given z_0)$ is the marginal distribution of $Z_t$ given $Z_0 = z_0$ and there exists a positive absolute constant $c$ such that $\int \varphi(z) \ud \zeta(z) \leq c$ and $\int \varphi(z) \ud p_\text{stat}(z) \leq c$. \end{assumption} \begin{lemma}\label{lemma:davydov-adapt} Suppose $\{Z_t \}_{t \geq 0}$ is a Markov chain satisfying Assumption \ref{ass:mc-ass}. Then we have $\beta(t) \leq c\cdot \exp(-\kappa t)$ for any $t \geq 0$.
\end{lemma} \begin{proof} For any $t \geq 0$, by Lemma \ref{lemma:davydov}, we have \$ \beta(t) & \leq \frac{1}{2} \int \|p_{t'}(\cdot \given z) - p_\text{stat}(\cdot)\|_\text{TV} \ud p_\text{stat}(z) + \frac{3}{2} \int \|p_{t'}(\cdot \given z) - p_\text{stat}(\cdot)\|_\text{TV} \ud \zeta(z) \\ & \leq \frac{1}{2} \int \varphi(z) \cdot \exp\left(-\kappa t\right) \ud p_\text{stat}(z) + \frac{3}{2} \int \varphi(z) \cdot \exp\left(-\kappa t\right) \ud \zeta(z) \\ & \leq c\cdot \exp(-\kappa t), \$ where in the second and last inequalities, we use Assumption \ref{ass:mc-ass}. This concludes the proof of the lemma. \end{proof} \subsection{Concentration Inequality for Geometrically Ergodic Non-Stationary Sequence} We first introduce the following lemma, which is a straightforward generalization of Berbee's lemma \citep{Berbee1979RandomWW}. \begin{lemma}\label{lemma:berbee} For any $k > 0$ and a random sequence $\{Y_\ell\}_{\ell=1}^{k}$, there exists a random sequence $\{\tilde Y_\ell\}_{\ell=1}^{k}$ such that \begin{enumerate} \item $\{\tilde Y_\ell\}_{\ell=1}^{k}$ are independent; \item for any $1\leq \ell\leq k$, $\tilde Y_\ell$ and $Y_\ell$ have the same distribution; \item for any $1\leq \ell\leq k$, $\PP(\tilde Y_\ell \neq Y_\ell) = \beta(\sigma(\{Y_{\ell'}\}_{\ell'=1}^{\ell-1}), \sigma(\{Y_\ell\}))$. \end{enumerate} \end{lemma} \begin{proof} See Lemma 2.10 in \cite{barrera2021generalization} for a detailed proof. \end{proof} We introduce the following Hoeffding- and Bernstein-type inequalities for geometrically ergodic non-stationary sequences. \begin{theorem}[Hoeffding's Inequality for geometrically ergodic non-stationary sequences]\label{thm:hoeffding-mixing} We denote by $\{X_t\}_{t\geq 0}\subseteq \cX$ a Markov chain satisfying Assumption \ref{ass:mc-ass}. Then for any function $f\colon \cX\to [-f_{\max}, f_{\max}]$ and any $\delta$ with $c / (NT)^2 \leq \delta \leq 1$, it holds with probability at least $1 - \delta$ that \$ \left| \frac{1}{NT} \sum_{i\in [N]}\sum_{t = 0}^{T-1} f(X_t^i) - \EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} f(X_t) \right] \right| \leq c\cdot f_{\max} \sqrt{\frac{1}{NT\kappa} \log\frac{2}{\delta}\log(NT)}, \$ where $\{\{X_t^i\}_{t=0}^{T-1}\}_{i\in [N]}$ consists of $N$ i.i.d. trajectories of length $T > 0$ generated from the same distribution as $\{X_t\}_{t\geq 0}$. \end{theorem} \begin{proof} See \S\ref{prf:thm:hoeffding-mixing} for a detailed proof. \end{proof} \begin{theorem}[Bernstein's Inequality for geometrically ergodic non-stationary sequences]\label{thm:bernstein-mixing} We denote by $\{X_t\}_{t\geq 0}\subseteq \cX$ a Markov chain satisfying Assumption \ref{ass:mc-ass}. Then for any function $f\colon \cX\to [-f_{\max}, f_{\max}]$ and any $\delta$ with $c / (NT)^2 \leq \delta \leq 1$, it holds with probability at least $1 - \delta$ that \$ & \frac{1}{NT} \sum_{i\in [N]}\sum_{t = 0}^{T-1} f(X_t^i) - \EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} f(X_t) \right] \\ & \qquad \leq c_1 \cdot \frac{ f_{\max}}{NT\kappa} \log \frac{2}{\delta} \log (NT) + c_2 \cdot \sqrt{\frac{1}{NT\kappa} \EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} f(X_t)^2 \right] \log\frac{2}{\delta} \log(NT) }, \$ where $c_1$ and $c_2$ are positive absolute constants, and $\{\{X_t^i\}_{t=0}^{T-1}\}_{i\in [N]}$ consists of $N$ i.i.d. trajectories of length $T > 0$ generated from the same distribution as $\{X_t\}_{t\geq 0}$. \end{theorem} \begin{proof} See \S\ref{prf:thm:bernstein-mixing} for a detailed proof.
\end{proof} \subsection{Empirical Processes for Geometrically Ergodic Non-Stationary Sequence} For any conditional probabilities $p_1(y\given x)$ and $p_2(y\given x)$ such that $(x,y)\in \cX\times \cY$, we define the squared Hellinger distance as follows, \$ h^2\left( p_1(\cdot \given x), p_2(\cdot \given x) \right) = \frac{1}{2} \int \left( \sqrt{p_1(y\given x)} - \sqrt{p_2(y\given x)} \right)^2 \ud y. \$ We further assume that $\cY$ is a discrete space. We denote by $p^*(y\given x)$ the true conditional probability of $y\in \cY$ given $x\in \cX$. Also, let $\{(X_t, Y_t)\}_{t\geq 0}\subset \cX\times \cY$ be a Markov chain such that $Y_t \sim p^*(\cdot \given X_t)$ and Assumption \ref{ass:mc-ass} holds. Further, we denote by $\mu_t$ the marginal distribution of $X_t$ for any $t\geq 0$. Meanwhile, with $\mu = 1/T\cdot \sum_{t = 0}^{T-1}\mu_t$, we define the generalized squared Hellinger distance over $\mu$ as follows, \$ H^2(p_1, p_2) = \EE_{X\sim \mu}\left[ h^2\left( p_1(\cdot \given X), p_2(\cdot \given X) \right) \right]. \$ In addition, we are given a data set $\{\{(X_t^i, Y_t^i)\}_{t=0}^{T-1}\}_{i\in[N]}$ consisting of $N$ independent trajectories of length $T$, where $\{(X_t^i, Y_t^i)\}_{t=0}^{T-1}$ is generated from the same distribution as $\{(X_t, Y_t)\}_{t\geq 0}$. We construct the following maximum likelihood estimator for $p^*$, \#\label{eq:def-p-hat} \hat p \in \argmax_{p\in \cP} \hat \EE\left[ \log p(Y\given X) \right] = \frac{1}{NT}\sum_{i\in[N]} \sum_{t = 0}^{T-1} \log p(Y_t^i\given X_t^i). \# We also define \$ g_p(x,y) = \frac{1}{2} \log \frac{p(y\given x) + p^*(y\given x)}{2 p^*(y\given x)} \$ for any $(x,y)\in \cX \times \cY$. Now, we are ready to introduce the following lemma. \begin{lemma} \label{lemma:vi-vdg-lemma} We have \$ & H^2\left(\frac{\hat p + p^*}{2}, p^*\right) \leq \left( \hat \EE - \EE \right)\left[g_{\hat p}(X,Y)\right], \\ & H^2\left(\frac{p_1 + p^*}{2}, \frac{p_2 + p^*}{2}\right) \leq \frac{1}{2} H^2(p_1, p_2), \\ & H^2(p,p^*) \leq 16 H^2\left( \frac{p + p^*}{2}, p^* \right), \\ & \left\|p_1(\cdot \given x) - p_2(\cdot \given x) \right\|_1 \leq 2\sqrt{2} h\left( p_1(\cdot \given x), p_2(\cdot \given x) \right). \$ \end{lemma} \begin{proof} See \S\ref{prf:lemma:vi-vdg-lemma} for a detailed proof. \end{proof} We define the entropy integral as follows, \$ J_B(\delta, \overline\cP^{1/2}(\delta)) = \max\left\{ \int_{\delta^2 / 2^{10}}^\delta \left( H_{B}(u, \overline\cP^{1/2}(\delta)) \right)^{1/2} \ud u, \delta \right \}, \$ where $H_{B}(u, \overline\cP^{1/2}(\delta))$ is the bracketing entropy of the space $\overline\cP^{1/2}(\delta)$, and $\overline\cP^{1/2}(\delta)$ is defined as follows, \$ \overline\cP^{1/2}(\delta) = \{\overline{p}^{1/2}\colon p\in \cP \text{ and } H^2(\overline{p}, p^*) \leq \delta^2 \}. \$ Now, we introduce the following theorem, which upper bounds the distance between $\hat p$ and $p^*$. \begin{theorem} \label{thm:iv-vdg-bound} We take $\Psi(\delta) \geq J_B(\delta, \overline\cP^{1/2}(\delta))$ in such a way that $\Psi(\delta) / \delta^2$ is a non-increasing function of $\delta$. Then for a universal constant $c$ and any $\delta \geq \delta_{NT}$, where $\delta_{NT}$ satisfies $\sqrt{NT} \delta_{NT}^2 \geq c \Psi(\delta_{NT})$, it holds with probability at least $1 - c/\kappa \cdot \exp(-NT \kappa\delta^2/(c^2 \log(NT))) - c / (N^2 T^2) \cdot \log(4/\delta)$ that \$ H^2(\hat p, p^*) \leq \delta^2, \$ where $\hat p$ is defined in \eqref{eq:def-p-hat}.
\end{theorem} \begin{proof} See \S\ref{prf:thm:iv-vdg-bound} for a detailed proof. \end{proof} \iffalse Further, if $\cP$ is a convex set and $\mathfrak{C}_\cP < \infty$, we have the following result. \begin{corollary} \label{cor:iv-vdg-bound-convex} Suppose that $\cP$ is a convex set and $\mathfrak{C}_\cP < \infty$. Then it holds with probability at least $1 - \delta$ that \$ H^2(\hat p, p^*) \leq c\cdot \frac{\mathfrak{C}_\cP}{NT\kappa} \log\frac{1}{\delta}, \$ where $\hat p$ is defined in \eqref{eq:def-p-hat}. \end{corollary} \begin{proof} Since $\cP$ is a convex set, we have \$ J_B(\delta, \overline\cP^{1/2}(\delta)) & = \max\left\{ \int_{\delta^2 / 2^{10}}^\delta \left( H_{B}(u, \overline\cP^{1/2}(\delta)) \right)^{1/2} \ud u, \delta \right \} \\ & = \max\left\{ \int_{\delta^2 / 2^{10}}^\delta \left( H(u, \overline\cP^{1/2}(\delta)) \right)^{1/2} \ud u, \delta \right \} \\ & \leq \max\left\{ \int_{\delta^2 / 2^{10}}^\delta \left( c \cdot \mathfrak{C}_\cP \cdot \log\frac{1}{u} \right)^{1/2} \ud u, \delta \right \} \leq c \cdot \delta \sqrt{\mathfrak{C}_\cP \log\frac{1}{\delta}}, \$ where in the first equality, we use the fact that $H_{B}(u, \overline\cP^{1/2}(\delta)) = H(u, \overline\cP^{1/2}(\delta))$ since $\cP$ is convex; in the first inequality, we use the following fact \citep{mendelson2003entropy}, \$ H(u, \cG) \leq c\cdot \mathfrak{C}_\cG \log\frac{1}{u} \$ for any space $\cG$ and $u > 0$ with some constant $c > 0$. Thus, by taking $\Psi(\delta) = \delta \sqrt{\mathfrak{C}_\cP \log(1 / \delta)}$ and $\delta = c\sqrt{\mathfrak{C}_\cP / (NT\kappa) \cdot\log(1/\delta')}$, it holds with probability at least $1 - \delta'$ that \$ H^2(\hat p, p^*) \leq c \cdot \frac{\mathfrak{C}_\cP}{NT\kappa} \cdot\log\frac{1}{\delta'}, \$ which concludes the proof of the corollary. \end{proof} \fi We study the following case, where $\cP$ is a parametric class. \begin{corollary}[Parametric Class]\label{cor:vdg-param} Suppose $\cP = \{p_\theta\colon \theta\in \RR^d \text{ and } \|\theta\|_2 \leq \theta_{\max}\}$. Then for any $\delta$ with $c / (N^2 T^2) \cdot \log (NT) \leq \delta \leq 1$, with probability at least $1 - \delta$, we have \$ H^2(\hat p, p^*) \leq c \cdot \frac{d}{NT\kappa} \log\frac{\theta_{\max}}{\delta} \log(NT), \$ where $c>0$ is an absolute constant, which may vary from line to line. \end{corollary} \begin{proof} Note that \$ J_B(\delta, \overline\cP^{1/2}(\delta)) \leq \delta \sqrt{d \log \frac{\theta_{\max}}{\delta}}. \$ By taking $\Psi(\delta) = \delta \sqrt{d \log(\theta_{\max}/\delta)}$, we have \$ \PP\left( H^2(\hat p, p^*) \leq c \cdot \frac{d}{NT \kappa} \log\frac{\theta_{\max}}{\delta} \log(NT) \right) \geq 1 - \delta \$ with $c / (N^2 T^2) \cdot \log (NT) \leq \delta \leq 1$, which concludes the proof of the corollary. \end{proof} \section{Proofs of Auxiliary Results}\label{sec:prf:aux} \subsection{Proof of Theorem \ref{thm:hoeffding-mixing}}\label{prf:thm:hoeffding-mixing} \begin{proof} We take $\tau = \min\{T, 3/\kappa \cdot \log(NT)\}$, and denote by $\EE_\text{stat}[\cdot]$ the expectation taken with respect to the stationary distribution of $\{X_t\}_{t\geq 0}$.
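Before proceeding, we remark that the geometric ergodicity entering through $\kappa$ (Assumption \ref{ass:mc-ass}) can be verified in closed form for simple chains. The following Python sketch is a numerical sanity check only and plays no role in the argument; the two-state chain and all constants in it are illustrative.
\begin{verbatim}
import numpy as np

# Two-state chain: P = [[1-a, a], [b, 1-b]] has stationary law (b, a)/(a+b),
# and ||p_t(.|z0) - p_stat||_TV decays like |1-a-b|^t, so geometric
# ergodicity holds with exp(-2*kappa) = |1-a-b| and a constant phi.
a, b = 0.3, 0.2
P = np.array([[1 - a, a], [b, 1 - b]])
p_stat = np.array([b, a]) / (a + b)
kappa = -0.5 * np.log(abs(1 - a - b))

row = np.array([1.0, 0.0])          # start deterministically at state 0
for t in range(1, 25):
    row = row @ P
    tv = 0.5 * np.abs(row - p_stat).sum()
    assert tv <= np.exp(-2 * kappa * t) + 1e-12   # phi(z0) = 1 suffices here
\end{verbatim}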
We observe the following decomposition, \$ & \frac{1}{NT} \sum_{i\in [N]}\sum_{t = 0}^{T-1} f(X_t^i) - \EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} f(X_t) \right] \\ & \qquad = \frac{\tau}{T} \underbrace{ \left( \frac{1}{N\tau} \sum_{i\in [N]}\sum_{t = 0}^{\tau-1} f(X_t^i) - \EE\left[\frac{1}{\tau} \sum_{t = 0}^{\tau-1} f(X_t)\right] \right)}_{\text{(I)}} \\ & \qquad\qquad + \frac{T - \tau}{T} \underbrace{ \left( \frac{1}{N(T - \tau)} \sum_{i\in [N]}\sum_{t = \tau}^{T-1} f(X_t^i) - \EE_\text{stat}\left[f(X)\right] \right) }_{\text{(II)}} \\ & \qquad \qquad + \frac{T -\tau}{T} \underbrace{ \left( \EE_\text{stat}\left[f(X)\right] - \EE\left[\frac{1}{T - \tau} \sum_{t = \tau}^{T-1} f(X_t)\right] \right)}_{\text{(III)}}. \$ In what follows, we upper bound terms (I), (II), and (III), respectively. \vskip5pt \noindent\textbf{Upper Bounding Term (I).} Since $\{\{X_t^i\}_{t=0}^{T-1}\}_{i\in [N]}$ are i.i.d. across trajectories, by the standard Hoeffding inequality, with probability at least $1 - \delta$, we have \#\label{eq:hoeffding-1} \left|\text{(I)}\right| \leq f_{\max} \sqrt{\frac{2}{N} \log\frac{2}{\delta}}. \# \vskip5pt \noindent\textbf{Upper Bounding Term (II).} We consider an auxiliary Markov chain $\{\{\tilde X_t^i\}_{t=0}^{T-1}\}_{i\in [N]}$, where the $i$-th trajectory $\{\tilde X_t^i\}_{t=0}^{T-1}$ is sampled such that $\tilde X_0^i\sim p_\text{stat}$. Here $p_\text{stat}$ is the stationary distribution of $\{X_t\}_{t\geq 0}$. Similarly, we define the following quantity, \$ \text{($\tilde{\text{II}}$)} = \frac{1}{N(T - \tau)} \sum_{i\in [N]}\sum_{t = \tau}^{T-1} f(\tilde X_t^i) - \EE_\text{stat}\left[f(X)\right]. \$ Now, for any $x\geq 0$, we upper bound the difference $\PP( \text{(II)} \geq x) - \PP(\text{($\tilde{\text{II}}$)} \geq x)$ as follows, \#\label{eq:hhhhh1} \PP\left( \text{(II)} \geq x \right) - \PP\left( \text{($\tilde{\text{II}}$)} \geq x\right) \leq N \sum_{t = \tau}^{T-1} \EE \left[\| p_t(\cdot \given X_0) - p_\text{stat}(\cdot ) \|_\text{TV} \right] \leq c\cdot NT \exp(-\kappa \tau). \# Thus, by \eqref{eq:hhhhh1}, to upper bound $\PP( \text{(II)} \geq x )$, it suffices to upper bound $\PP(\text{($\tilde{\text{II}}$)} \geq x)$. To upper bound $\PP( \text{($\tilde{\text{II}}$)} \geq x)$, we take $T -\tau = 2 k s$, where $k$ and $s$ are two positive integers, for simplicity of presentation. We partition the set $\{\tau, \tau+1, \ldots, T-1\}$ as follows, \$ & J_1 = \{\tau, \tau+1, \ldots, \tau+s-1\}, \quad J_2 = \{\tau+s, \tau+s+1, \ldots, \tau+2s-1\}, \quad \ldots, \\ & J_{2k-1} = \{T-2s, T-2s+1, \ldots, T-s-1\}, \quad J_{2k} = \{T-s, T-s+1, \ldots, T-1\}. \$ Under such a partition, we see that $\cup_{\ell\in[2k]}J_\ell = \{\tau, \tau+1, \ldots, T-1\}$ and $J_\ell\cap J_{\ell'} = \emptyset$ for any $\ell\neq \ell'$. Also, for any $i\in [N]$, we define \$ Z_\ell^i = (\tilde X_t^i)_{t\in J_\ell} \$ for any $\ell\in [2k]$. Now, for any $i\in [N]$, by Lemma \ref{lemma:berbee}, there exists a sequence $\{W_\ell^i\}_{\ell\in [2k]}$, where $W_\ell^i = (\tilde Y_t^i)_{t\in J_\ell}$ such that \begin{enumerate} \item $\{W_\ell^i\}_{\ell\in[2k]}$ are independent; \item for any $\ell\in[2k]$, $W_\ell^i$ and $Z_\ell^i$ have the same distribution; \item for any $\ell\in[2k]$, $\PP(W_\ell^i \neq Z_\ell^i) = \beta(\sigma(\{Z_{\ell'}^i\}_{\ell'\in[\ell-1]}), \sigma(\{Z_\ell^i\}))$.
\end{enumerate} Note that the following inclusion relation holds, \$ \left\{ \frac{1}{s}\sum_{t\in J_\ell} f(\tilde X_t^i) - \EE_\text{stat}[f(X)] \geq x_\ell \right\} \subseteq \left\{ \frac{1}{s}\sum_{t\in J_\ell} f(\tilde Y_t^i) - \EE_\text{stat}[f(X)] \geq x_\ell \right\} \cup \{W_\ell^i \neq Z_\ell^i\} \$ for any $x_\ell\in \RR$. Thus, we have \$ \PP\left(\text{($\tilde{\text{II}}$)} \geq x\right) & \leq \PP\left( \frac{1}{kN} \sum_{i\in[N], \text{$\ell$ is odd}} \frac{1}{s}\sum_{t\in J_\ell} f(\tilde X_t^i) - \EE_\text{stat}[f(X)] \geq x\right) \\ & \qquad + \PP\left( \frac{1}{kN} \sum_{i\in[N], \text{$\ell$ is even}} \frac{1}{s}\sum_{t\in J_\ell} f(\tilde X_t^i) - \EE_\text{stat}[f(X)] \geq x\right) \\ & \leq \PP\left( \frac{1}{kN} \sum_{i\in[N], \text{$\ell$ is odd}} \frac{1}{s}\sum_{t\in J_\ell} f(\tilde Y_t^i) - \EE_\text{stat}[f(X)] \geq x\right) \\ & \qquad + \PP\left( \frac{1}{kN} \sum_{i\in[N], \text{$\ell$ is even}} \frac{1}{s}\sum_{t\in J_\ell} f(\tilde Y_t^i) - \EE_\text{stat}[f(X)] \geq x\right) + \sum_{i\in [N]}\sum_{\ell\in [2k]} \PP(W_\ell^i \neq Z_\ell^i) \\ & \leq 2 \exp\left(-\frac{kNx^2}{2f_{\max}^2}\right) + 2kN \beta(s), \$ where we use Hoeffding's inequality in the last line. Similarly, we have \$ \PP\left(\text{($\tilde{\text{II}}$)} \leq -x\right) \leq 2 \exp\left(-\frac{kNx^2}{2f_{\max}^2}\right) + 2kN \beta(s). \$ Thus, we have \$ \PP\left( \left| \text{(${\text{II}}$)} \right| \geq x\right) \leq 4 \exp\left(-\frac{kNx^2}{2f_{\max}^2}\right) + 4kN \beta(s) + c\cdot NT \exp(-\kappa \tau). \$ Now, by taking $s = 3\log(NT) / \kappa$, it holds with probability at least $1 - \delta$, where $c / (NT)^2 \leq \delta \leq 1$, that \#\label{eq:hoeffding-2} \left| \text{(${\text{II}}$)} \right| \leq f_{\max} \sqrt{\frac{24}{NT\kappa} \log\frac{4}{\delta}\log(NT)}. \# \vskip5pt \noindent\textbf{Upper Bounding Term (III).} We observe that \#\label{eq:hoeffding-3} \left |\text{(III)} \right | \leq f_{\max} \cdot \sum_{t = \tau}^{T-1} c\cdot \exp(-\kappa t) \leq c\cdot \frac{f_{\max}}{N^2T^2}. \# \vskip5pt \noindent\textbf{Combining Everything.} Now, by combining \eqref{eq:hoeffding-1}, \eqref{eq:hoeffding-2}, and \eqref{eq:hoeffding-3}, it holds with probability at least $1 - \delta$, where $c / (NT)^2 \leq \delta \leq 1$, that \$ \left| \frac{1}{NT} \sum_{i\in [N]}\sum_{t = 0}^{T-1} f(X_t^i) - \EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} f(X_t) \right] \right| \leq c\cdot f_{\max} \sqrt{\frac{48}{NT \kappa} \log\frac{4}{\delta}\log(NT)}, \$ which concludes the proof of the theorem. \end{proof} \subsection{Proof of Theorem \ref{thm:bernstein-mixing}}\label{prf:thm:bernstein-mixing} \begin{proof} The proof follows from the proof of Theorem \ref{thm:hoeffding-mixing} in \S\ref{prf:thm:hoeffding-mixing}. For completeness, we present it here. We take $\tau = \min\{T, 3/\kappa\cdot \log(NT)\}$, and denote by $\EE_\text{stat}[\cdot]$ the expectation taken with respect to the stationary distribution of $\{X_t\}_{t\geq 0}$.
We observe the following decomposition, \$ & \frac{1}{NT} \sum_{i\in [N]}\sum_{t = 0}^{T-1} f(X_t^i) - \EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} f(X_t) \right] \\ & \qquad = \frac{\tau}{T} \underbrace{ \left( \frac{1}{N\tau} \sum_{i\in [N]}\sum_{t = 0}^{\tau-1} f(X_t^i) - \EE\left[\frac{1}{\tau} \sum_{t = 0}^{\tau-1} f(X_t)\right] \right)}_{\text{(I)}} \\ & \qquad \qquad + \frac{T - \tau}{T} \underbrace{ \left( \frac{1}{N(T - \tau)} \sum_{i\in [N]}\sum_{t = \tau}^{T-1} f(X_t^i) - \EE_\text{stat}\left[f(X)\right] \right) }_{\text{(II)}} \\ & \qquad \qquad + \frac{T -\tau}{T} \underbrace{ \left( \EE_\text{stat}\left[f(X)\right] - \EE\left[\frac{1}{T - \tau} \sum_{t = \tau}^{T-1} f(X_t)\right] \right)}_{\text{(III)}}. \$ In what follows, we upper bound terms (I), (II), and (III), respectively. \vskip5pt \noindent\textbf{Upper Bounding Term (I).} Since $\{\{X_t^i\}_{t=0}^{T-1}\}_{i\in [N]}$ are i.i.d. across trajectories, by the standard Bernstein inequality, with probability at least $1 - \delta$, we have \#\label{eq:bern-1} \left|\text{(I)}\right| & \leq \frac{2f_{\max}}{3N}\log\frac{2}{\delta} + 4 \sqrt{ \frac{4}{N} \EE\left[\left(\frac{1}{\tau} \sum_{t = 0}^{\tau-1} f(X_t) \right)^2 \right] \log\frac{2}{\delta} } \\ & \leq \frac{2f_{\max}}{3N}\log\frac{2}{\delta} + 4 \sqrt{ \frac{4}{N} \EE\left[ \frac{1}{\tau} \sum_{t = 0}^{\tau-1} f(X_t)^2 \right] \log\frac{2}{\delta} }. \# Here the last inequality follows from Jensen's inequality, since the square of an average is at most the average of the squares. \vskip5pt \noindent\textbf{Upper Bounding Term (II).} We consider an auxiliary dataset $\{\{\tilde X_t^i\}_{t=0}^{T-1}\}_{i\in [N]}$, where the $i$-th trajectory $\{\tilde X_t^i\}_{t=0}^{T-1}$ is sampled such that $\tilde X_0^i\sim p_\text{stat}$. Here $p_\text{stat}$ is the stationary distribution of $\{X_t\}_{t\geq 0}$. Similarly, we define the following quantity, \$ \text{($\tilde{\text{II}}$)} = \frac{1}{N(T - \tau)} \sum_{i\in [N]}\sum_{t = \tau}^{T-1} f(\tilde X_t^i) - \EE_\text{stat}\left[f(X)\right]. \$ Now, for any $x\geq 0$, we upper bound the difference $\PP( \text{(II)} \geq x) - \PP(\text{($\tilde{\text{II}}$)} \geq x)$ as follows, \#\label{eq:hhhhh1bern} \PP\left( \text{(II)} \geq x \right) - \PP\left( \text{($\tilde{\text{II}}$)} \geq x\right) \leq N \sum_{t = \tau}^{T-1} \EE \left[\| p_t(\cdot \given X_0) - p_\text{stat}(\cdot ) \|_\text{TV} \right] \leq c\cdot NT \exp(-\kappa \tau). \# Thus, by \eqref{eq:hhhhh1bern}, to upper bound $\PP( \text{(II)} \geq x )$, it suffices to upper bound $\PP(\text{($\tilde{\text{II}}$)} \geq x)$. To upper bound $\PP( \text{($\tilde{\text{II}}$)} \geq x)$, we take $T -\tau = 2 k s$, where $k$ and $s$ are two positive integers, for simplicity of presentation. We partition the set $\{\tau, \tau+1, \ldots, T-1\}$ as follows, \$ & J_1 = \{\tau, \tau+1, \ldots, \tau+s-1\}, \quad J_2 = \{\tau+s, \tau+s+1, \ldots, \tau+2s-1\}, \quad \ldots, \\ & J_{2k-1} = \{T-2s, T-2s+1, \ldots, T-s-1\}, \quad J_{2k} = \{T-s, T-s+1, \ldots, T-1\}. \$ Under such a partition, we see that $\cup_{\ell\in[2k]}J_\ell = \{\tau, \tau+1, \ldots, T-1\}$ and $J_\ell\cap J_{\ell'} = \emptyset$ for any $\ell\neq \ell'$. Also, for any $i\in [N]$, we define \$ Z_\ell^i = (\tilde X_t^i)_{t\in J_\ell} \$ for any $\ell\in [2k]$.
Now, for any $i\in [N]$, by Lemma \ref{lemma:berbee}, there exists a sequence $\{W_\ell^i\}_{\ell\in [2k]}$, where $W_\ell^i = (\tilde Y_t^i)_{t\in J_\ell}$ such that \begin{enumerate} \item $\{W_\ell^i\}_{\ell\in[2k]}$ are independent; \item for any $\ell\in[2k]$, $W_\ell^i$ and $Z_\ell^i$ have the same distribution; \item for any $\ell\in[2k]$, $\PP(W_\ell^i \neq Z_\ell^i) = \beta(\sigma(\{Z_{\ell'}^i\}_{\ell'\in[\ell-1]}), \sigma(\{Z_\ell^i\}))$. \end{enumerate} Note that the following inclusion relation holds, \$ \left\{ \frac{1}{s}\sum_{t\in J_\ell} f(\tilde X_t^i) - \EE_\text{stat}[f(X)] \geq x_\ell \right\} \subseteq \left\{ \frac{1}{s}\sum_{t\in J_\ell} f(\tilde Y_t^i) - \EE_\text{stat}[f(X)] \geq x_\ell \right\} \cup \{W_\ell^i \neq Z_\ell^i\} \$ for any $x_\ell\in \RR$. Thus, we have \#\label{eq:bern1111} \PP\left(\text{($\tilde{\text{II}}$)} \geq x\right) & \leq \PP\left( \frac{1}{kN} \sum_{i\in[N], \text{$\ell$ is odd}} \frac{1}{s}\sum_{t\in J_\ell} f(\tilde X_t^i) - \EE_\text{stat}[f(X)] \geq x\right) \\ & \qquad + \PP\left( \frac{1}{kN} \sum_{i\in[N], \text{$\ell$ is even}} \frac{1}{s}\sum_{t\in J_\ell} f(\tilde X_t^i) - \EE_\text{stat}[f(X)] \geq x\right) \\ & \leq \PP\left( \frac{1}{kN} \sum_{i\in[N], \text{$\ell$ is odd}} \frac{1}{s}\sum_{t\in J_\ell} f(\tilde Y_t^i) - \EE_\text{stat}[f(X)] \geq x\right) \\ & \qquad + \PP\left( \frac{1}{kN} \sum_{i\in[N], \text{$\ell$ is even}} \frac{1}{s}\sum_{t\in J_\ell} f(\tilde Y_t^i) - \EE_\text{stat}[f(X)] \geq x\right) + \sum_{\ell\in [2k]} \PP(W_\ell^i \neq Z_\ell^i) \\ & \leq \exp\left(-\frac{3 k^2 N^2 x^2 }{ 6 \sum_{i\in [N], \text{$\ell$ is odd}} \EE\left[ \left( \frac{1}{s}\sum_{t\in J_\ell} f(\tilde Y_t^i) \right)^2 \right] + 2f_{\max} Nk x }\right) \\ & \qquad + \exp\left(-\frac{3 k^2 N^2 x^2 }{ 6 \sum_{i\in [N], \text{$\ell$ is even}} \EE\left[ \left( \frac{1}{s}\sum_{t\in J_\ell} f(\tilde Y_t^i) \right)^2 \right] + 2f_{\max} Nk x }\right) + 2k \beta(s), \# where we use Bernstein's inequality in the last line. Note that for any $(i, \ell)\in [N]\times[2k]$, we have \#\label{eq:bern1112} \EE\left[ \left( \frac{1}{s}\sum_{t\in J_\ell} f(\tilde Y_t^i) \right)^2 \right] \leq \EE_\text{stat}\left[f(X)^2\right] \leq \EE\left[ \frac{1}{T-\tau} \sum_{t = \tau}^{T-1} f(X_t)^2 \right] + f_{\max}^2 T c\cdot \exp(-\kappa \tau), \# where the first inequality follows from Jensen's inequality together with the fact that each $\tilde Y_t^i$ has the stationary marginal distribution. Combining \eqref{eq:bern1111} and \eqref{eq:bern1112}, we have \$ \PP\left(\text{($\tilde{\text{II}}$)} \geq x\right) & \leq 2 \exp\left(-\frac{3 k^2 N^2 x^2 }{ 6 kN \left( \EE\left[ \frac{1}{T-\tau} \sum_{t = \tau}^{T-1} f(X_t)^2 \right] + f_{\max}^2 T c\cdot \exp(-\kappa \tau) \right) + 2f_{\max} Nk x }\right) + 2k \beta(s). \$ Similarly, we have \$ \PP\left(\text{($\tilde{\text{II}}$)} \leq -x\right) \leq 2 \exp\left(-\frac{3 k^2 N^2 x^2 }{ 6 kN \left( \EE\left[ \frac{1}{T-\tau} \sum_{t = \tau}^{T-1} f(X_t)^2 \right] + f_{\max}^2 T c\cdot \exp(-\kappa \tau) \right) + 2f_{\max} Nk x }\right) + 2k \beta(s). \$ Thus, we have \$ \PP\left( \left| \text{(${\text{II}}$)} \right| \geq x\right) & \leq 4 \exp\left(-\frac{3 k^2 N^2 x^2 }{ 6 kN \left( \EE\left[ \frac{1}{T-\tau} \sum_{t = \tau}^{T-1} f(X_t)^2 \right] + f_{\max}^2 T c\cdot \exp(-\kappa \tau) \right) + 2f_{\max} Nk x }\right) \\ & \qquad \qquad + 4k \beta(s) + c\cdot NT \exp(-\kappa \tau).
\$ Now, by taking $s = 3\log( NT) / \kappa$, it holds with probability at least $1 - \delta$ with $c / (NT)^2 \leq \delta \leq 1$ that \#\label{eq:bern-2} \left| \text{(${\text{II}}$)} \right| \leq \frac{2 f_{\max}}{3NT \kappa} \log \frac{2}{\delta} + 4\sqrt{\frac{4}{NT \kappa} \EE\left[ \frac{1}{T-\tau} \sum_{t = \tau}^{T-1} f(X_t)^2 \right] \log\frac{2}{\delta} }. \# \vskip5pt \noindent\textbf{Upper Bounding Term (III).} We observe that \#\label{eq:bern-3} \left |\text{(III)} \right | \leq f_{\max} \cdot \sum_{t = \tau}^{T-1} c\cdot \exp(-\kappa t) \leq c\cdot \frac{f_{\max}}{N^2T^2}. \# \vskip5pt \noindent\textbf{Combining Everything.} Now, by combining \eqref{eq:bern-1}, \eqref{eq:bern-2}, and \eqref{eq:bern-3}, it holds with probability at least $1 - \delta$ with $c / (NT)^2 \leq \delta \leq 1$ that \$ & \frac{1}{NT} \sum_{i\in [N]}\sum_{t = 0}^{T-1} f(X_t^i) - \EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} f(X_t) \right] \\ & \qquad \leq \frac{24 f_{\max}}{NT\kappa} \log \frac{2}{\delta} \log (NT) + 48\sqrt{\frac{1}{NT\kappa} \EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} f(X_t)^2 \right] \log\frac{2}{\delta} \log(NT) }, \$ which concludes the proof of the theorem. \end{proof} \subsection{Proof of Lemma \ref{lemma:vi-vdg-lemma}} \label{prf:lemma:vi-vdg-lemma} \begin{proof} The proof follows from the proofs of Lemmas 4.1 and 4.2 in \cite{geer2000empirical}. \noindent\textbf{First Inequality.} By the optimality of $\hat p$, we have \$ \hat \EE\left[ \log \hat p(Y\given X) \right] \geq \hat \EE\left[ \log p^*(Y\given X) \right], \$ which implies that \$ \int \log \frac{\hat p}{p^*} \ud \tilde p^* \ud \tilde \mu \geq 0, \$ where $\tilde p^*$ and $\tilde \mu$ are the empirical counterparts of $p^*$ and $\mu$. Now, by the concavity of $\log(\cdot)$, we have \$ \log \frac{\hat p + p^*}{2p^*} \geq \frac{1}{2} \log\frac{\hat p}{p^*} + \frac{1}{2} \log\frac{p^*}{p^*} = \frac{1}{2} \log\frac{\hat p}{p^*}. \$ By the above two inequalities, we have \#\label{iv-vd-as1} 0 & \leq \frac{1}{4} \int \log \frac{\hat p}{p^*} \ud \tilde p^* \ud \tilde \mu \leq \frac{1}{2} \int \log \frac{\hat p + p^*}{2p^*} \ud \tilde p^* \ud \tilde \mu \\ & = \frac{1}{2} \int \log \frac{\hat p + p^*}{2p^*} \left (\ud \tilde p^* \ud \tilde \mu - \ud p^* \ud \mu \right) + \frac{1}{2} \int \log \frac{\hat p + p^*}{2p^*} \ud p^* \ud \mu \\ & = \left( \hat \EE - \EE \right)\left[g_{\hat p}\right] + \frac{1}{2} \int \log \frac{\hat p + p^*}{2p^*} \ud p^* \ud \mu.
\# Meanwhile, by the fact that $\log z \leq 2(\sqrt{z}- 1)$ for any $z > 0$, we have \#\label{iv-vd-as2} \frac{1}{2} \int \log \frac{\hat p + p^*}{2p^*} \ud p^* \ud \mu & \leq \int \left ( \sqrt{\frac{\hat p + p^*}{2p^*}} -1 \right) \ud p^* \ud\mu \\ & = \int \left ( \sqrt{\frac{\hat p + p^*}{2} \cdot p^*} - \frac{1}{2}p^* - \frac{1}{2} \cdot \frac{\hat p + p^*}{2}\right) \ud y \ud \mu \\ & = -\int \frac{1}{2} \left( \sqrt{\frac{\hat p + p^*}{2}} - \sqrt{p^*} \right )^2 \ud y \ud \mu \\ & = - H^2 \left( \frac{\hat p + p^*}{2}, p^* \right). \# By combining \eqref{iv-vd-as1} and \eqref{iv-vd-as2}, we conclude the proof of the first inequality. \vskip5pt \noindent\textbf{Second and Third Inequalities.} For any $p$, we write $\overline{p} = (p + p^*) / 2$. We note the following two facts, \$ & \frac{p_1^{1/2} + p_2^{1/2}}{\overline p_1^{1/2} + \overline p_2^{1/2}} \leq \sqrt{2}, \\ & \left | \overline p_1^{1/2} - \overline p_2^{1/2} \right | \left ( \overline p_1^{1/2} + \overline p_2^{1/2} \right ) = \left | \overline p_1 - \overline p_2 \right | = \left | \frac{p_1 - p_2}{2}\right | = \frac{1}{2} \left | p_1^{1/2} - p_2^{1/2}\right | \left ( p_1^{1/2} + p_2^{1/2}\right ). \$ Thus, we have \$ \left | \overline p_1 ^{1/2} - \overline p_2^{1/2} \right | = \frac{1}{2} \cdot \frac{p_1^{1/2} + p_2^{1/2}}{\overline p_1^{1/2} + \overline p_2^{1/2}} \cdot \left | p_1^{1/2} - p_2^{1/2} \right | \leq \frac{\sqrt{2}}{2} \cdot \left| p_1^{1/2} - p_2^{1/2} \right |, \$ which implies the second inequality. The third inequality can be proved in a similar way. \vskip5pt \noindent\textbf{Fourth Inequality.} We note that \$ \left\|p_1(\cdot \given x) - p_2(\cdot \given x) \right\|_1 & = \int \left | p_1(y \given x) - p_2(y \given x) \right | \ud y \\ & = \int \left | p_1(y \given x)^{1/2} - p_2(y \given x)^{1/2} \right | \left ( p_1(y \given x)^{1/2} + p_2(y \given x)^{1/2} \right ) \ud y \\ & \leq \sqrt{\int \left ( p_1(y \given x)^{1/2} - p_2(y \given x)^{1/2} \right )^2 \ud y } \sqrt{\int \left ( p_1(y \given x)^{1/2} + p_2(y \given x)^{1/2} \right )^2 \ud y} \\ & \leq 2 \sqrt{\int \left ( p_1(y \given x)^{1/2} - p_2(y \given x)^{1/2} \right )^2 \ud y } = 2\sqrt{2} h\left( p_1(\cdot \given x), p_2(\cdot \given x) \right), \$ where the first inequality is the Cauchy--Schwarz inequality and the second uses $\int ( p_1^{1/2} + p_2^{1/2} )^2 \ud y \leq 2\int (p_1 + p_2) \ud y = 4$, which concludes the proof. \end{proof} \subsection{Proof of Theorem \ref{thm:iv-vdg-bound}} \label{prf:thm:iv-vdg-bound} \begin{proof} The proof follows from the proof of Theorem 7.4 in \cite{geer2000empirical}. We define the event \$ \cE = \left\{ \omega \in \Omega\colon H^2(\hat p, p^*) > \delta^2 \right\}. \$ Conditioning on $\cE$, we have \#\label{eq:iv-388383} \left(\hat \EE - \EE \right)[g_{\hat p}] \geq H^2(\overline{\hat p}, p^*) \geq \frac{1}{16} H^2(\hat p, p^*) > \frac{\delta^2}{16}, \# where the first two inequalities come from Lemma \ref{lemma:vi-vdg-lemma}. We further define \$ \cE^\dagger = \left\{ \omega \in \Omega\colon \sup_{p\in \cP\colon H^2(\overline{p},p^*) > \delta^2/16} \left(\hat \EE - \EE \right)[g_p] - H^2(\overline{p}, p^*) \geq 0 \right\}. \$ By \eqref{eq:iv-388383} and the definitions of $\cE$ and $\cE^\dagger$, we observe that $\cE \subseteq \cE^\dagger$. Thus, we only need to upper bound $\PP(\cE^\dagger)$. To do so, we use a peeling argument as follows. Let $L = \min\{\ell\colon 2^{\ell+1}\delta^2 / 16 > 1\}$.
We observe that \#\label{eq:8943594} \PP(\cE^\dagger) \leq \sum_{\ell=0}^L \PP(\cE^\dagger_\ell), \# where \$ \cE^\dagger_\ell = \left\{ \omega\in \Omega\colon \sup_{p\in \cP_\ell } \left(\hat \EE[g_p] - \EE[g_p] \right) \geq 2^\ell \delta^2 / 16 \right\}. \$ Here $\cP_\ell = \{p\in \cP\colon H^2(\overline{p},p^*) \leq 2^{\ell+1}\delta^2/16\}$. To upper bound $\PP(\cE_\ell^\dagger)$, we introduce the following result. \begin{theorem} \label{thm:511-adapt} Let $\{Z_t\}_{t\geq 0}\subset \cZ$ be a Markov chain satisfying Assumption \ref{ass:mc-ass}, and take \begin{align} & v \leq C_1\sqrt{NT} R^2 /K, \label{eq:vd-cond1}\\ & v \leq 8\sqrt{NT} R, \label{eq:vd-cond2}\\ & v \geq C_0\cdot \max\left\{ \int_{v/(2^6\sqrt{NT})}^R \left( \mathcal{H}_{B,K}(u,\cG,P) \right)^{1/2} \ud u, R \right\}, \label{eq:vd-cond3} \\ & v \geq C_2 / (NT)^2, \label{eq:vd-cond5} \\ & C_0^2 \geq C^2 (C_1 + 1), \label{eq:vd-cond4} \end{align} where $\mathcal{H}_{B,K}(u,\cG,P)$ is the generalized entropy with bracketing. Then we have \$ & \PP\left( \sup_{g\in \cG} \sqrt{NT} \left( \frac{1}{NT} \sum_{i\in [N]} \sum_{t = 0}^{T-1} g(Z_t^i) - \EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} g(Z_t) \right] \right) \geq v \right) \\ & \qquad \leq \frac{4C}{\kappa}\exp\left(-\frac{v^2 \kappa}{18 C^2(C_1+1)R^2 \log(NT)}\right) + \frac{2}{N^2 T^2}, \$ where $\{Z_t^i\}_{t=0}^{T-1}$ is generated from the same distribution as $\{Z_t\}_{t\geq 0}$ for any $i\in [N]$. \end{theorem} \begin{proof} See \S\ref{prf:thm:511-adapt} for a detailed proof. \end{proof} To invoke Theorem \ref{thm:511-adapt}, we take \$ v = \sqrt{NT} \cdot 2^\ell \delta^2 / 16, \quad K = 1, \quad R = 2^{\ell/2}\delta, \quad C_1 = 15, \quad C = c/64, \quad C_0 = c/16, \quad C_2 = c. \$ It is easy to verify that \eqref{eq:vd-cond1}, \eqref{eq:vd-cond2}, \eqref{eq:vd-cond5}, and \eqref{eq:vd-cond4} hold. For \eqref{eq:vd-cond3}, note that $\sqrt{NT}\delta_{NT}^2 \geq c \Psi(\delta_{NT})$ implies \$ \sqrt{NT} \geq c \cdot \frac{ \Psi(\delta_{NT})}{\delta_{NT}^2} \geq c \cdot \frac{\Psi(2^{\ell/2}\delta)}{2^\ell \delta^2}, \$ where we use the fact that $\Psi(\delta) / \delta^2$ is a non-increasing function of $\delta$. Thus, we have \$ 16 v \geq c \cdot \max \left\{ \int_{v/(2^6\sqrt{NT})}^R \left(\mathcal{H}_{B,1}\left(u,\{g_p\colon p\in \cP_\ell \}, \mu_0 \right) \right)^{1/2} \ud u , R \right\}, \$ which justifies \eqref{eq:vd-cond3} by noting that $K = 1$. Here, we use the fact that \$ \mathcal{H}_{B,1}(u, \{g_p\colon p\in \cP_\ell \}, P) \leq H_B\left(\frac{u}{\sqrt{2}} , \{\overline{p}^{1/2}\colon p\in \cP_\ell\} \right). \$ Thus, by using Theorem \ref{thm:511-adapt}, we have \$ \PP(\cE_\ell^\dagger) \leq \frac{c}{\kappa} \exp\left(-\frac{NT \kappa 2^\ell \delta^2}{c^2 \log(NT)}\right) + \frac{2}{N^2 T^2}. \$ Further, combining the above with \eqref{eq:8943594}, we have \$ \PP(\cE^\dagger) \leq \frac{c}{\kappa} \exp\left(-\frac{NT \kappa \delta^2}{c^2\log(NT)}\right) + \frac{c}{N^2 T^2} \log\frac{4}{\delta}, \$ which concludes the proof of the theorem. \end{proof} \subsection{Proof of Theorem \ref{thm:511-adapt}}\label{prf:thm:511-adapt} \begin{proof} We take $\tau = \min\{T, 3/\kappa \cdot \log(\cG_{\max} NT)\}$, where $\cG_{\max} = \max\{\max_{g\in \cG}\max_{z\in \cZ} g(z), 1\}$, and denote by $\EE_\text{stat}[\cdot]$ the expectation taken with respect to the stationary distribution of $\{Z_t\}_{t\geq 0}$.
We have the following decomposition, \#\label{eq:van-flee-1} & \PP\left( \sup_{g\in \cG} \sqrt{NT} \left( \frac{1}{NT} \sum_{i\in [N]} \sum_{t = 0}^{T-1} g(Z_t^i) - \EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} g(Z_t) \right] \right) \geq v \right) \\ & \qquad = \PP\Biggl( \sup_{g\in \cG} \frac{\tau}{T} \biggl( \frac{1}{N\tau} \sum_{i\in [N]} \sum_{t = 0}^{\tau-1} g(Z_t^i) - \EE\biggl[ \frac{1}{\tau} \sum_{t = 0}^{\tau-1} g(Z_t) \biggr] \biggr) \\ & \qquad \qquad \qquad + \frac{T-\tau}{T} \biggl( \frac{1}{N(T-\tau)} \sum_{i\in [N]} \sum_{t = \tau}^{T-1} g(Z_t^i) - \EE_\text{stat}[g(Z)] \biggr) \\ & \qquad \qquad \qquad + \frac{T-\tau}{T} \biggl( \EE_\text{stat}[g(Z)] - \EE\biggl[ \frac{1}{T-\tau} \sum_{t = \tau}^{T-1} g(Z_t) \biggr] \biggr) \geq \frac{v}{\sqrt{NT}} \Biggr) \\ & \qquad \leq \PP\left( \sup_{g\in \cG} \frac{\tau}{T} \left( \frac{1}{N\tau} \sum_{i\in [N]} \sum_{t = 0}^{\tau-1} g(Z_t^i) - \EE\left[ \frac{1}{\tau} \sum_{t = 0}^{\tau-1} g(Z_t) \right] \right) \geq \frac{v}{3\sqrt{NT}} \right) \\ & \qquad \qquad + \PP\left( \sup_{g\in \cG} \frac{T-\tau}{T} \left( \frac{1}{N(T - \tau)} \sum_{i\in [N]} \sum_{t = \tau}^{T-1} g(Z_t^i) - \EE_\text{stat}\left[ g(Z) \right] \right) \geq \frac{v}{3\sqrt{NT}} \right), \# where the last inequality comes from the fact that \$ & \sup_{g\in \cG} \frac{T-\tau}{T} \left( \EE_\text{stat}[g(Z)] - \EE\left[ \frac{1}{T-\tau} \sum_{t = \tau}^{T-1} g(Z_t) \right] \right) \\ & \qquad \leq \sup_{g\in \cG} \frac{T-\tau}{T} \int g(z) \frac{1}{T-\tau} \sum_{t = \tau}^{T-1} \left( p_\text{stat}(z) - \int p_t(z\given z_0) \ud \zeta(z_0) \right) \ud z \\ & \qquad \leq \frac{1}{T} \cG_{\max} \sum_{t = \tau}^{T-1} c\cdot \exp(-\kappa t) \leq \cG_{\max} c \exp(-\kappa\tau) \leq \frac{v}{3\sqrt{NT}}, \$ where we use the fact that $v \geq C_2 / (NT)^2$ for some constant $C_2$. Thus, we only need to upper bound the two terms on the RHS of \eqref{eq:van-flee-1}. We first introduce the following supporting results. \begin{lemma} \label{lemma:511-geer} Take \$ & v \leq C_1\sqrt{n} R^2 /K,\\ & v \leq 8\sqrt{n} R, \\ & v \geq C_0\cdot \max\left\{ \int_{v/(2^6\sqrt{n})}^R \left( \log \mathcal N_{B,K}(u,\cG,P) \right)^{1/2} \ud u, R \right\}, \\ & C_0^2 \geq C^2 (C_1 + 1). \$ Then we have \$ \PP\left( \sup_{g\in \cG} \left| \sqrt{n} \left( \frac{1}{n} \sum_{i\in [n]} g(Z_i) - \EE\left[g(Z)\right] \right) \right| \geq v \right) \leq C \exp\left( -\frac{v^2}{C^2(C_1 + 1)R^2} \right), \$ where $\{Z_i\}_{i\in [n]}$ are i.i.d. samples drawn from the same distribution as $Z$. \end{lemma} \begin{proof} See Theorem 5.11 in \cite{geer2000empirical} for a detailed proof. \end{proof} \begin{lemma} \label{lemma:211-general} Let $\{Z_t\}_{t\geq 0}\subset \cZ$ be a $\beta$-mixing sequence with mixing coefficients $\beta(t)$, $t \geq 0$.
Then there exists a sequence $\{Z_t^*\}_{t= 0}^{T-1}\subset \cZ$ and a set $\cJ$ such that \begin{enumerate} \item $\cJ$ is a partition of $\{0,1,\ldots, T-1\}$, i.e., $\cup_{J\in \cJ} J = \{0, 1, \ldots, T-1\}$ and $J_1\cap J_2 = \emptyset$ for any $J_1,J_2 \in \cJ$; \item for any $0\leq t \leq T-1$, $Z_t^*$ and $Z_t$ have the same distribution; \item for any $J\in \cJ$, $\{Z_t^*\}_{t\in J}$ is an independent sequence; \item it holds for any $u \in \RR$ that \$ & \PP\left(\sup_{g\in \cG} \frac{1}{T} \sum_{t=0}^{T-1} g(Z_t) - \EE\left[\frac{1}{T} \sum_{t=0}^{T-1} g(Z_t)\right] \geq u \right) \\ & \qquad \leq \sum_{J\in \cJ} \PP\left(\sup_{g\in \cG} \frac{1}{|J|} \sum_{t\in J} g(Z_t^*) - \EE\left[\frac{1}{|J|} \sum_{t\in J} g(Z_t)\right] \geq u \right) \\ & \qquad \qquad + \sum_{J\in \cJ} |J| \cdot \beta\left(\min\{ |t_1-t_2|\colon t_1\neq t_2\in J \}\right). \$ \end{enumerate} \end{lemma} \begin{proof} See Theorem 2.11 in \cite{barrera2021generalization} for a detailed proof. \end{proof} We upper bound the two terms on the RHS of \eqref{eq:van-flee-1} as follows. \vskip5pt \noindent\textbf{Upper Bounding the First Term on the RHS of \eqref{eq:van-flee-1}.} To upper bound the first term, we invoke Lemma \ref{lemma:511-geer}. Since the sequence \$ \left\{ \frac{1}{\tau} \sum_{t = 0}^{\tau-1} g(Z_t^i) \right\}_{i\in [N]} \$ is i.i.d., we have \#\label{eq:van-flee-res-1} & \PP\left( \sup_{g\in \cG} \frac{\tau}{T} \left( \frac{1}{N\tau} \sum_{i\in [N]} \sum_{t = 0}^{\tau-1} g(Z_t^i) - \EE\left[ \frac{1}{\tau} \sum_{t = 0}^{\tau-1} g(Z_t) \right] \right) \geq \frac{v}{3\sqrt{NT}} \right) \\ & \qquad \leq C \exp\left( -\frac{v^2T}{9\tau^2 C^2 (C_1 + 1)R^2} \right) \leq C \exp\left( -\frac{v^2}{C^2 (C_1 + 1)R^2} \right). \# \vskip5pt \noindent\textbf{Upper Bounding the Second Term on the RHS of \eqref{eq:van-flee-1}.} To upper bound the second term, we note that \$ & \PP\left( \sup_{g\in \cG} \frac{T-\tau}{T} \left( \frac{1}{N(T - \tau)} \sum_{i\in [N]} \sum_{t = \tau}^{T-1} g(Z_t^i) - \EE_\text{stat}\left[ g(Z) \right] \right) \geq \frac{v}{3\sqrt{NT}} \right) = \text{(i)} + \text{(ii)}, \$ where \$ & \text{(i)} = \PP\left( \sup_{g\in \cG} \frac{T-\tau}{T} \left( \frac{1}{N(T - \tau)} \sum_{i\in [N]} \sum_{t = \tau}^{T-1} g(Z_t^i) - \EE_\text{stat}\left[ g(Z) \right] \right) \geq \frac{v}{3\sqrt{NT}} \right) \\ & \qquad \qquad - \PP\left( \sup_{g\in \cG} \frac{T-\tau}{T} \left( \frac{1}{N(T - \tau)} \sum_{i\in [N]} \sum_{t = \tau}^{T-1} g(\tilde Z_t^i) - \EE_\text{stat}\left[ g(Z) \right] \right) \geq \frac{v}{3\sqrt{NT}} \right), \\ & \text{(ii)} = \PP\left( \sup_{g\in \cG} \frac{T-\tau}{T} \left( \frac{1}{N(T - \tau)} \sum_{i\in [N]} \sum_{t = \tau}^{T-1} g(\tilde Z_t^i) - \EE_\text{stat}\left[ g(Z) \right] \right) \geq \frac{v}{3\sqrt{NT}} \right). \$ Here $\{\tilde Z_t^i\}_{t = 0}^{T-1}$ is an auxiliary sequence for any $i\in [N]$, where $\tilde Z_0^i$ is sampled from the stationary distribution of the sequence $\{Z_t\}_{t\geq 0}$. To upper bound $\text{(i)}$, we note that \#\label{eq:van-flee-res-2-1} \text{(i)} \leq \sum_{i\in [N]} \sum_{t=\tau}^{T-1} c\cdot \exp(-\kappa t) \leq NT c \exp(-\kappa \tau) \leq \frac{1}{N^2 T^2}. \# To upper bound $\text{(ii)}$, we invoke Lemma \ref{lemma:211-general} by taking \$ & \cJ = \{J_1, J_2, \ldots, J_s\}, \qquad J_j = \{\tau+j-1, \tau+j+s-1, \ldots, T-s+j-1\} \text{ for any $j\in [s]$}.
\$ Then there exists a sequence $\{\{\tilde Z_t^{i*}\}_{t = \tau}^{T-1}\}_{i\in [N]}$ such that $\tilde Z_t^{i*}$ and $\tilde Z_t^i$ have the same distribution for any $(i,t)$; $\{\{\tilde Z_t^{i*}\}_{t\in J}\}_{i\in [N]}$ are independent for any $J\in \cJ$; and it holds that \$ \text{(ii)} & \leq \sum_{j = 1}^s \PP\left(\sup_{g\in \cG} \frac{s}{N(T-\tau)} \sum_{i\in[N]} \sum_{t\in J_j} g(\tilde Z_t^{i*}) - \EE_\text{stat}\left[ g(Z) \right] \geq \frac{v}{3\sqrt{NT}} \frac{T}{T-\tau} \right) + (T-\tau) \cdot \beta(s) \\ & \leq s\cdot C\exp\left(-\frac{v^2}{9s C^2(C_1+1)R^2}\right) + (T-\tau)\beta(s), \$ where we use Lemma \ref{lemma:511-geer} in the last inequality. Now, by taking $s = \min\{T, 3/\kappa\cdot \log( NT)\}$, we have \#\label{eq:van-flee-res-2-2} \text{(ii)} \leq \frac{3C}{\kappa} \exp\left(-\frac{v^2 \kappa}{18 C^2(C_1+1)R^2 \log(NT)}\right) + \frac{1}{N^2 T^2}, \# where we use Lemma \ref{lemma:davydov-adapt} to upper bound $\beta(s)$. Now, by combining \eqref{eq:van-flee-res-2-1} and \eqref{eq:van-flee-res-2-2}, we have \#\label{eq:van-flee-res-2} & \PP\left( \sup_{g\in \cG} \frac{T-\tau}{T} \left( \frac{1}{N(T - \tau)} \sum_{i\in [N]} \sum_{t = \tau}^{T-1} g(Z_t^i) - \EE_\text{stat}\left[ g(Z) \right] \right) \geq \frac{v}{3\sqrt{NT}} \right) \\ & \qquad \leq \frac{3C}{\kappa}\exp\left(-\frac{v^2 \kappa}{18 C^2(C_1+1)R^2 \log(NT) }\right) + \frac{2}{N^2 T^2}. \# \vskip5pt \noindent\textbf{Combining Everything.} By plugging \eqref{eq:van-flee-res-1} and \eqref{eq:van-flee-res-2} into \eqref{eq:van-flee-1}, we have \$ & \PP\left( \sup_{g\in \cG} \sqrt{NT} \left( \frac{1}{NT} \sum_{i\in [N]} \sum_{t = 0}^{T-1} g(Z_t^i) - \EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} g(Z_t) \right] \right) \geq v \right) \\ & \qquad \leq \frac{4C}{\kappa}\exp\left(-\frac{v^2 \kappa}{18 C^2(C_1+1)R^2 \log(NT)}\right) + \frac{2}{N^2 T^2}, \$ which concludes the proof of the theorem. \end{proof} \section{Numerical Simulation}\label{sec:exppppp} We demonstrate the usefulness of the proposed methods by conducting a simulation study, in which we simulate a synthetic dataset that mimics a real-world electronic medical record dataset for kidney transplantation patients \citep{hua2020personalized}. Kidney transplantation is the primary treatment for patients with chronic kidney disease or end-stage renal disease \citep{Arshad2019}. After the transplant surgery, patients are usually instructed to have regular clinical visits for their long-term care. At each visit, patients' creatinine levels, an important biomarker for measuring kidney function, are measured. Then, based on patients' creatinine levels, physicians prescribe immunosuppressive drugs, such as tacrolimus, to keep their immune systems from rejecting the new kidney \citep{Kasiske2010}. Due to potential compliance and resistance issues, patients' whole blood tacrolimus concentrations are also measured so that their body responses to tacrolimus can be monitored. Our goal is to estimate the optimal therapeutic strategies (i.e., the optimal tacrolimus concentrations) for patients after kidney transplantation from observational data collected in the electronic medical database.
In this context, for any $t \geq 0$, at the $t$-th clinical visit, we denote by $S_t$ the patient's creatinine level, $Z_t$ the assigned dosage of the immunosuppressive drug tacrolimus, $A_t$ the actual effective dosage level (i.e., whole blood tacrolimus concentration), which can be different from $Z_t$ due to compliance/resistance issues, $U_t$ the unobserved confounders, such as the quality of care, and $R_t$ the reward. We introduce the simulation setup in detail as follows. \vskip5pt \noindent\textbf{Dynamics and Rewards.} For simplicity, we consider $S_t\in \RR$ and $U_t\in \RR$ for any $t$. Meanwhile, we assume that the IV and action spaces are ternary, i.e., $\cA = \cZ = \{0,1,2\}$, which represent the low, medium, and high dosage/concentration levels, respectively. Specifically, given the current state $S_t$ and action $A_t$, we assume that the reward $R_t$ and next state $S_{t+1}$ satisfy the following equations, \$ R_t = - S_t^2 + (U_t-2)\cdot (A_t-1), \qquad S_{t+1} = S_t + 0.5 \cdot (A_t - 1) + 3\cdot \ind\{S_t> 0\} \cdot (U_t-2), \$ where $U_t \sim \mathcal N(2, 0.1)$ is the unobserved confounder at the $t$-th step. The term $- S_t^2$ in defining $R_t$ reflects the clinical knowledge that creatinine levels that are either too high or too low can be harmful to patients. Also, we take the initial state $S_0\sim \mathcal N(5, 0.1)$. \vskip5pt \noindent\textbf{IV and Behavior Policy.} We assume that the IV $Z_t$ takes a value in $\cZ$ with certain probabilities. Specifically, given the current state $S_t$, the IV $Z_t$ is taken as follows. \begin{itemize} \item[(i)] if $S_t < -0.3$, we take $Z_t = z$ with probability $p_z$ for any $z\in \cZ$, where $(p_0, p_1, p_2) = (0.1,0.1,0.8)$; \item[(ii)] if $S_t > 0.3$, we take $Z_t = z$ with probability $p_z$ for any $z\in \cZ$, where $(p_0, p_1, p_2) = (0.8,0.1,0.1)$; \item[(iii)] if $-0.3\leq S_t \leq 0.3$, we take $Z_t = z$ with probability $p_z$ for any $z\in \cZ$, where $(p_0, p_1, p_2) = (0.1,0.8,0.1)$. \end{itemize} As for the behavior policy, given $S_t$, $U_t$, and $Z_t$, the action $A_t$ is taken as follows. \begin{itemize} \item if $U_t > 2$: \begin{itemize} \item[(i)] if $Z_t=0$, we take $A_t = a$ with probability $p_a$ for any $a\in \cA$, where $(p_0, p_1, p_2) = (0.8,0.1,0.1)$; \item[(ii)] if $Z_t=1$, we take $A_t = a$ with probability $p_a$ for any $a\in \cA$, where $(p_0, p_1, p_2) = (0.1,0.8,0.1)$; \item[(iii)] if $Z_t=2$, we take $A_t = a$ with probability $p_a$ for any $a\in \cA$, where $(p_0, p_1, p_2) = (0.1,0.1,0.8)$; \end{itemize} \item if $U_t \leq 2$: \begin{itemize} \item[(iv)] if $Z_t=0$, we take $A_t = a$ with probability $p_a$ for any $a\in \cA$, where $(p_0, p_1, p_2) = (0.78, 0.11, 0.11)$; \item[(v)] if $Z_t=1$, we take $A_t = a$ with probability $p_a$ for any $a\in \cA$, where $(p_0, p_1, p_2) = (0.05, 0.78, 0.17)$; \item[(vi)] if $Z_t=2$, we take $A_t = a$ with probability $p_a$ for any $a\in \cA$, where $(p_0, p_1, p_2) = (0.11, 0.05, 0.84)$. \end{itemize} \end{itemize} \vskip5pt \noindent\textbf{Simulation Setup.} Throughout the experiment, we consider the simplex encoding of $\cA$ and $\cZ$ as in \eqref{eq:iv-simplex-encoding}. Under the aforementioned setting, we generate $N=1000$ trajectories with a finite horizon $T=100$ following the behavior policy. We take the discount factor $\gamma = 0.9$.
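To make the data-generating process concrete, the following is a minimal Python sketch of the simulator described above. It is our own illustration rather than part of the formal setup: all function and variable names are ours, and we read $\mathcal N(\mu, 0.1)$ as having variance $0.1$ (an assumption, as the notation does not specify whether $0.1$ is the variance or the standard deviation).

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def iv_probs(s):
    # IV distribution Theta*(. | s): depends only on the observed state S_t.
    if s < -0.3:
        return [0.1, 0.1, 0.8]
    if s > 0.3:
        return [0.8, 0.1, 0.1]
    return [0.1, 0.8, 0.1]

def action_probs(z, u):
    # Behavior policy b(a | s, u, z): rows indexed by z, columns by a.
    if u > 2:
        table = [[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]]
    else:
        table = [[0.78, 0.11, 0.11], [0.05, 0.78, 0.17], [0.11, 0.05, 0.84]]
    return table[z]

def simulate(N=1000, T=100):
    # Draws N trajectories of length T following the behavior policy.
    data = []
    for _ in range(N):
        s, traj = rng.normal(5.0, np.sqrt(0.1)), []
        for _ in range(T):
            u = rng.normal(2.0, np.sqrt(0.1))        # unobserved confounder U_t
            z = rng.choice(3, p=iv_probs(s))         # instrumental variable Z_t
            a = rng.choice(3, p=action_probs(z, u))  # action A_t (dosage level)
            r = -s ** 2 + (u - 2.0) * (a - 1)        # reward R_t
            s_next = s + 0.5 * (a - 1) + 3.0 * (s > 0) * (u - 2.0)
            traj.append((s, z, a, r, s_next))
            s = s_next
        data.append(traj)
    return data

data = simulate()
\end{verbatim}

The resulting \texttt{data} plays the role of the offline dataset used by the estimators below.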
For simplicity, we parameterize $\cV$, $\cW$, and $\Pi$ as follows, \#\label{eq:iv-linear-param} & \cV = \left\{ v_{\omega_{\textsf{v}}}(\cdot) = \psi(\cdot)^\top \omega_{\textsf{v}} \colon \omega_{\textsf{v}} \in \RR^5 \right\}, \\ & \cW = \left\{ g_{\omega_{\textsf{g}}}(\cdot) = \psi(\cdot)^\top \omega_{\textsf{g}} \colon \omega_{\textsf{g}} \in \RR^5, \|\omega_{\textsf{g}}\|_\infty \leq 1 \right\}, \\ & \Pi = \left\{ \pi_{\omega_{\textsf{pi}}} \colon \pi_{\omega_{\textsf{pi}}}(a\given \cdot) \propto \exp(\psi(\cdot)^\top \omega_{a, \textsf{pi}}) \text{ where } \omega_{a, \textsf{pi}} \in \RR^5 \text{ for any $a\in \cA$} \right\}, \# where $\psi(s) = (1, s, s^2, s^3, s^4)^\top$ is the feature vector for any $s\in \RR$. We further assume that there exists an oracle that gives $\Delta^*(s,a)$, $\Theta^*(s,z)$, and $\PP(A=a\given S=s)$ for any $(s,z,a)\in \cS\times \cZ\times \cA$. In practice, such an oracle can be approximated by, e.g., logistic regression. Under the linear parameterization in \eqref{eq:iv-linear-param}, we construct (pessimistic) estimators of the expected total reward using the following methods. \begin{figure} \centering \includegraphics[width=0.7\textwidth]{ci.pdf} \caption{Expected total rewards by using \texttt{pess\_IV}, \texttt{no\_pess\_IV}, \texttt{pess\_no\_IV}, and \texttt{no\_pess\_no\_IV}. Here, the red dots represent the means, while the blue intervals represent the 95\% confidence intervals computed over 25 seeds. } \label{fig:exp-iv} \end{figure} \begin{itemize} \item \texttt{pess\_IV}. For any $\pi$, by adding random noise to the rewards, we formulate the following optimization problems, \$ \omega^j_{\textsf{v}} \in \argmin_{v\in \cV} \max_{g\in \cW} \frac{1}{NT} \sum_{i = 1}^N \sum_{t = 0}^{T-1} g(S_t^i) \left ( \frac{Z_t^{i \top} A_t^i \pi(A_t^i\given S_t^i)}{\Delta^*(S_t^i,A_t^i) \Theta^*(S_t^i,Z_t^i)} \left(\tilde R_t^{i,j} + \gamma v(S_{t+1}^i)\right) - v(S_t^i) \right ), \$ where $\tilde R_t^{i,j} = R_t^i + \chi_t^{i,j}$ and $\chi_t^{i,j} \sim \mathcal N(0, 0.1)$ for any $(t,i,j)\in [T]\times [N]\times [10]$. Then we take $(1-\gamma)\cdot \EE_{S_0\sim \nu}[v_{\omega^{j^*}_{\textsf{v}}}(S_0)]$ as a pessimistic estimator of the total expected reward, where $j^*$ is taken such that $\EE_{S_0\sim \nu}[v_{\omega^{j^*}_{\textsf{v}}}(S_0)]$ achieves the minimum among all $j\in [10]$. \item \texttt{no\_pess\_IV}. For any $\pi$, we solve the following optimization problem, \$ \omega_{\textsf{v}} \in \argmin_{v\in \cV} \max_{g\in \cW} \frac{1}{NT} \sum_{i = 1}^N \sum_{t = 0}^{T-1} g(S_t^i) \left ( \frac{Z_t^{i \top} A_t^i \pi(A_t^i\given S_t^i)}{\Delta^*(S_t^i,A_t^i) \Theta^*(S_t^i,Z_t^i)} \left(R_t^i + \gamma v(S_{t+1}^i)\right) - v(S_t^i) \right ). \$ Then we take $(1-\gamma)\cdot \EE_{S_0\sim \nu}[v_{\omega_{\textsf{v}}}(S_0)]$ as an estimator of the total expected reward. \item \texttt{pess\_no\_IV}. For any $\pi$, without using IVs, by adding random noise to the rewards, we formulate the following optimization problems, \$ \omega^j_{\textsf{v}} \in \argmin_{v\in \cV} \max_{g\in \cW} \frac{1}{NT} \sum_{i = 1}^N \sum_{t = 0}^{T-1} g(S_t^i) \left ( \frac{\pi(A_t^i\given S_t^i)}{\PP(A_t^i\given S_t^i)} \cdot \left(\tilde R_t^{i,j} + \gamma v(S_{t+1}^i)\right) - v(S_t^i) \right ), \$ where $\tilde R_t^{i,j} = R_t^i + \chi_t^{i,j}$ and $\chi_t^{i,j} \sim \mathcal N(0, 0.1)$ for any $(t,i,j)\in [T]\times [N]\times [10]$.
Then we take $(1-\gamma)\cdot \EE_{S_0\sim \nu}[v_{\omega^{j^*}_{\textsf{v}}}(S_0)]$ as a pessimistic estimator of the total expected reward, where $j^*$ is taken such that $\EE_{S_0\sim \nu}[v_{\omega^{j^*}_{\textsf{v}}}(S_0)]$ achieves the minimum among all $j\in [10]$. \item \texttt{no\_pess\_no\_IV}. For any $\pi$, without using IVs, we solve the following optimization problem, \$ \omega_{\textsf{v}} \in \argmin_{v\in \cV} \max_{g\in \cW} \frac{1}{NT} \sum_{i = 1}^N \sum_{t = 0}^{T-1} g(S_t^i) \left ( \frac{\pi(A_t^i\given S_t^i)}{\PP(A_t^i\given S_t^i)} \cdot \left(R_t^i + \gamma v(S_{t+1}^i)\right) - v(S_t^i) \right ). \$ Then we take $(1-\gamma)\cdot \EE_{S_0\sim \nu}[v_{\omega_{\textsf{v}}}(S_0)]$ as an estimator of the total expected reward. \end{itemize} Finally, by zeroth-order optimization, we update the policy to maximize the above estimators. We repeat the above procedure 25 times, and plot the 95\% confidence intervals of the total expected rewards of the output policies in Figure \ref{fig:exp-iv}. According to the figure, we observe that in the presence of IVs, \texttt{pess\_IV} and \texttt{no\_pess\_IV} are capable of learning better policies than the behavior policy, with \texttt{pess\_IV} achieving higher and more stable expected total rewards due to the use of pessimism; in contrast, without IVs, \texttt{pess\_no\_IV} and \texttt{no\_pess\_no\_IV} fail even to beat the behavior policy. \section{Introduction} Reinforcement learning (RL, \cite{sutton2018reinforcement}) with deep neural networks has achieved tremendous success in practice, e.g., in games \citep{silver2016mastering, OpenAI_dota}, robotics \citep{kalashnikov2018scalable}, and precision medicine \citep{kosorok2019precision,cho2022}. In many application domains, actively collecting data through interacting with the environment in an online fashion is usually either expensive or unethical, e.g., in healthcare \citep{raghu2017continuous, komorowski2018artificial,gottesman2019guidelines} and autonomous driving \citep{shalev2016safe}. Therefore, a growing body of literature focuses on designing RL methods in the offline setting, where the agent aims to learn an optimal policy $\pi^*$ in the infinite-horizon Markov decision process (MDP, \cite{puterman2014markov}) only through observational data, which consists of $N$ trajectories generated by a behavior policy $b$ with a finite horizon $T$. However, applying RL methods in the offline setting still poses the following challenges: (i) The agent may be confounded by unmeasured variables (confounders) in the observational data. We refer to the MDP with such unmeasured confounders as a confounded MDP. Such confounders usually come from private data or heuristic information not recorded \citep{brookhart2010confounding}. In the confounded MDP, the causal effects of actions on the transitions and rewards are not identifiable from the observational data, rendering most offline RL methods, which assume unconfoundedness, inapplicable in our setting. (ii) To learn an optimal policy from observational data, many prior methods \citep{precup2000eligibility, antos2008learning, chen2019information} require a data coverage assumption for any policy $\pi$, i.e., the density ratio between the state-action visitation measure induced by $\pi$ and that induced by the behavior policy $b$ is uniformly upper bounded for any $\pi$. However, such a data coverage assumption is hardly satisfied in practice, especially when the state or action spaces are large.
Further, many existing methods developed under this assumption are unstable or even fail to converge when the assumption is violated \citep{wang2020statistical, wang2021instabilities}. To tackle the above challenge (i), we study the confounded MDP via instrumental variables (IVs, \cite{angrist1996identification}). Informally, IVs are variables that affect the transitions and rewards only through actions. With the aid of IVs, we introduce two types of identification results: value function (VF)-based and marginalized importance sampling (MIS)-based. Specifically, with only finite-horizon data, the VF-based identification result helps identify the state-value function in the infinite-horizon confounded MDP and establish a new Bellman equation by leveraging IVs. On the other hand, we establish the MIS-based identification result for directly estimating the expected total reward $J(\pi)$ to be maximized. Both identification results rely on the memoryless assumption on the unmeasured confounders. The memoryless assumption rules out unmeasured confounders that can affect future rewards or future unmeasured confounders, allowing the confounders to affect only the immediate transition and reward. To tackle the above challenge (ii), we employ pessimism to achieve policy learning and systematically study its theoretical properties. Specifically, when using VF-based identification, we first formulate a min-max estimator of the state-value function via the newly established estimating equation. Then, we construct a confidence set of such a min-max estimator based on its loss function, so that the true state-value function lies within the confidence set with a high probability. Note that the construction of our confidence set does not require developing uniform confidence bands for the related estimators, which is different from many existing pessimistic algorithms \citep[e.g.,][]{jin2021pessimism,rashidinejad2021bridging,yan2022efficacy}. Finally, we search for the best policy that maximizes the most conservative expected total reward associated with the estimated state-value function within the confidence set. As a theoretical contribution, under the data coverage assumption only for an optimal (in-class) policy $\pi^*$ and the realizability assumption for all policies, we show that the suboptimality of the learned best policy is upper bounded by $O(\log(NT)(NT)^{-1/2})$, i.e., the regret of our algorithm in finding the optimal policy converges to $0$ as long as either the number of trajectories $N$ or the number of decision points at each trajectory $T$ goes to infinity. It is worth noting that our theoretical analysis does not assume that the observational data are generated from a stationary distribution, or even that they are independent; such assumptions have been widely imposed in the related literature \citep{farahmand2016regularized, nachum2019dualdice, tang2019doubly, xie2021bellman, kallus2022efficiently}, and our results are thus more general. Without imposing such a restrictive assumption, inspired by \cite{wang2021projected}, our convergence analysis relies on novel concentration inequalities for geometrically ergodic sequences, which significantly increases the applicability of our results in practice. Meanwhile, pessimism with MIS-based identification achieves a similar result by imposing a realizability assumption only for an optimal policy $\pi^*$ and a data coverage assumption for all policies. Furthermore, by combining VF- and MIS-based identification results, we propose a doubly robust (DR) estimator for learning the optimal policy $\pi^*$.
Theoretically, for such a DR estimator, we show a similar suboptimality rate of $O(\log(NT)(NT)^{-1/2})$, requiring only that either the assumptions imposed in the VF-based method or those imposed in the MIS-based one hold. Lastly, our proposed algorithms have different requirements on the identifiability of the expected total rewards $J(\pi)$. Specifically, for the VF-based algorithm we propose, we only require that the offline data distribution identify $J(\pi^*)$ uniquely, rather than $J(\pi)$ for all other policies. In contrast, the MIS-based pessimistic algorithm requires that all $J(\pi)$ be uniquely identified by our offline data distribution, which is thus a stronger requirement. Interestingly, neither approach requires the identifiability of all associated nuisance parameters. See Section \ref{sec: identification} for a more detailed discussion. In Table \ref{table:summary-assumptions} below, we summarize our main assumptions/requirements for the theoretical guarantees of the proposed algorithms. \begin{table}[h!] \centering \begin{tabular}{| c | c | c | c | } \hline Methods & Data Coverage & Realizability & Identifiability \\ \hline VF-based & $\|w^{\pi^*}\|_\infty \leq C_*$ & $w^{\pi^*}\in \cW$, $V^\pi\in \cV~\forall \pi\in \Pi$ & $J(\pi^*)$ is identifiable \\ \hline MIS-based & $\|w^\pi\|_\infty \leq C_*~\forall\pi\in \Pi$ & $V^{\pi^*}\in \cV$, $w^\pi\in \cW~\forall \pi\in \Pi$ & $J(\pi)$ is identifiable $\forall\pi\in \Pi$ \\ \hline DR-based & \multicolumn{3}{c|}{\makecell{$\|w^{\pi^*}\|_\infty \leq C_*$, $w^{\pi^*}\in \cW$, $V^\pi\in \cV~\forall\pi\in \Pi$, and $J(\pi^*)$ is identifiable; \\ \textit{\textbf{OR}} $\|w^\pi\|_\infty \leq C_*~\forall\pi\in \Pi$, $V^{\pi^*}\in \cV$, $w^\pi\in \cW~\forall \pi\in \Pi$, and $J(\pi)$ is identifiable $\forall\pi\in \Pi$}} \\ \hline \end{tabular} \caption{Assumptions required by our VF-, MIS-, and DR-based methods, where $w^\pi$ is the density ratio between visitation measures induced by the policy $\pi$ and the behavior policy $b$ (see \eqref{eq:iv-ratio-def} for a detailed definition), and $V^\pi$ is the state-value function of the policy $\pi$. Here $\cV, \cW$ and $\Pi$ are function classes, and $C_*$ is some generic constant.} \label{table:summary-assumptions} \end{table} \vskip5pt \noindent\textbf{Contribution.} In summary, our contribution is threefold. First, by leveraging IVs, we provide VF- and MIS-based identification results under the confounded MDP. Second, by employing pessimism based on the loss functions used for estimating nuisance parameters, we construct novel estimators of the optimal policy $\pi^*$ via VF- and MIS-based identification. Further, by combining VF- and MIS-based identification, we propose a DR-based algorithm for estimating $\pi^*$. Third, under mild conditions on data coverage and realizability, we show that the suboptimalities of the proposed algorithms in finding the optimal in-class policy are upper bounded by $O(\log(NT)(NT)^{-1/2})$, without requiring that the observational data are generated from a stationary distribution or are even independent. The success of our algorithm relies on novel constructions of confidence sets for the related nuisance parameters and on maximal inequalities for geometrically ergodic sequences over function classes, which may be of independent interest. \vskip5pt \noindent\textbf{Related Work.} ~Our work is related to the line of works that study RL in the presence of unmeasured confounders.
\cite{zhang2019near} propose an online RL method to solve dynamic treatment regimes in a finite-horizon setting in the presence of confounded observational data. Their method relies on sensitivity analysis, which constructs a set of possible models based on the confounded observational data to obtain partial identification. Also, to incorporate the observational data into finite-horizon RL, \cite{wang2021provably} propose deconfounded optimistic value iteration, which is an online algorithm with a provable regret guarantee. To ensure identifiability through the observational data, they impose the backdoor criterion \citep{pearl2009causality, peters2017elements} when confounders are partially observed, and the frontdoor criterion when confounders are unmeasured. Our work is also closely related to \cite{liao2021instrumental} and \cite{chen2021estimating}. Specifically, \cite{liao2021instrumental} propose an IV-aided value iteration algorithm to study confounded MDPs in the offline setting. It is worth mentioning that they only consider the finite-horizon MDP, where the transition dynamics is a linear function of some known feature map. To ensure identifiability, they assume that the unmeasured confounders are Gaussian noise, which does not affect the immediate reward and only affects the transition dynamics in an additive manner. In contrast, we consider an infinite-horizon confounded MDP without such restrictive assumptions on the structure of the model, which brings significant technical challenges. On the other hand, \cite{chen2021estimating} study partial identification using IVs for improving dynamic treatment regimes and require the data coverage assumption on all policies. In contrast, we establish point identification results with the help of IVs, and our proposed algorithm is valid under only partial coverage, which is thus more appealing. In addition, with unmeasured confounders, \cite{kallus2020confounding} study off-policy evaluation in the infinite-horizon setting based on sensitivity analysis, which imposes additional assumptions on how strong the unmeasured confounding can possibly be. Meanwhile, to ensure identifiability, \cite{namkoong2020off} consider the case where the unmeasured confounders affect only one of the decisions made. Very recently, a stream of research has focused on using proximal causal inference \citep{tchetgen2020introduction} for off-policy evaluation and learning in the partially observed MDP \citep{bennett2021off,shi2021minimax,lu2022pessimism}. Our work is also related to the line of research on policy evaluation and policy learning in the offline setting assuming unconfoundedness. In terms of off-policy evaluation, most works employ either a VF-based method \citep{ernst2005tree,ertefaie2018constructing, shi2020statistical, liao2021off, uehara2021finite,zhou2021estimating} or an MIS-based method \citep{liu2018breaking, nachum2019dualdice, zhang2020gendice, wang2021projected, uehara2021finite}. Our work is also related to those that propose DR estimators in off-policy evaluation. See \cite{jiang2016doubly, thomas2016data, tang2019doubly, kallus2020double, uehara2020minimax, kallus2022efficiently} for this line of research. As for policy learning in the offline setting, \cite{munos2008finite,antos2008learning} and \cite{luckett2019estimating} prove that fitted value and policy iterations converge to an optimal policy under the data coverage and realizability assumptions for all policies.
By employing pessimism, \cite{xie2021bellman} guarantee a near-optimal policy under the realizability and completeness assumptions for all policies, while \cite{jiang2020minimax} provide a similar guarantee under the data coverage assumption for the optimal policy and the realizability assumption for all policies. More recently, \cite{zhan2022offline} claim that they can learn a near-optimal policy under the data coverage and realizability assumptions for the optimal policy. Their method is built upon a regularized version of the LP formulation of MDPs and thus works with a class of regularized policies. However, due to regularization, the policy learned by \cite{zhan2022offline} is typically suboptimal even given infinite data. Moreover, their realizability assumption is imposed on the regularized value function, making it difficult to interpret and to compare with other works. In contrast, our method is proposed in a non-regularized setting and is valid given standard realizability and minimal data coverage assumptions, even under the presence of unmeasured confounders and dependent observational data. \vskip5pt \noindent\textbf{Roadmap.} In Section \ref{sec:iv-background}, we introduce the background of confounded MDPs and their assumptions. In Section \ref{sec:iv-id}, we introduce VF- and MIS-based identification results. In Section \ref{sec:iv-pess}, by employing pessimism, we first introduce estimators of the optimal policy via VF- and MIS-based identification; then, by combining both results, we introduce a DR estimator. Theoretical results upper bounding the suboptimalities of the proposed estimators are presented in Section \ref{sec:iv-theory}. In Section \ref{sec: dual form}, we provide dual formulations for our proposed algorithms under one additional assumption so that all estimated policies can be efficiently computed. Lastly, in Section \ref{sec: identification}, we discuss the implications of our algorithms on the identifiability related to total rewards and associated nuisance parameters. All technical proofs are provided in the Supplementary Material. In addition, in the Supplementary Material, we demonstrate the usefulness of the proposed methods by conducting a simulation study, in which we simulate a synthetic dataset that mimics a real-world electronic medical record dataset for kidney transplantation patients \citep{hua2020personalized}. \section{Confounded Markov Decision Processes} \label{sec:iv-background} In this section, we introduce the framework of confounded Markov decision processes with discrete instrumental variables. We aim to leverage the batch data to find an optimal in-class policy that maximizes the expected total rewards. \vskip5pt \noindent\textbf{Confounded MDPs.} In a confounded MDP, we observe $\{S_t, A_t, R_t\}_{t\geq 0}$ for each trajectory, where $S_t$ is the observed state, $A_t$ is the action taken after observing $S_t$, and $R_t$ is the immediate reward received after taking the action $A_t$, for $t\geq 0$. We denote by $\cS$ and $\cA$ the state and action spaces, respectively. Furthermore, we assume that at each decision point $t\geq 0$, there exist some unmeasured state variables $U_t \in \cU$, which may confound the effect of action $A_t$ on the rewards and future transitions. Due to such unobserved confounders, the (causal) effect of the action on the immediate and future rewards may not be non-parametrically identified, and directly applying standard RL algorithms for MDPs will produce sub-optimal policies.
To address this concern, we study the confounded MDP via the instrumental variable (IV) method \citep{angrist1995identification}, which has been widely used in the causal inference literature (e.g., \cite{pearl2009causality,hernan2010causal}) to identify the causal effect of a treatment under unmeasured confounding. Specifically, at each decision point $t$, we further assume that we also observe a time-varying IV $Z_t\in \cZ$, which is independent of $U_t$ and does not have a direct effect on the immediate reward $R_t$ and all future states, actions, and rewards. With such an IV, we observe $\{S_t, Z_t, A_t, R_t\}_{t\geq 0}$ for each trajectory in the confounded MDP. In this work, we consider finite action and IV spaces, i.e., $\cA = \{a_j\}_{j\in [K]}$ and $\cZ = \{z_j\}_{j\in [K]}$, where $K\geq 2$ is an integer. Furthermore, we consider a simplex encoding for both actions and IVs, which enjoys a nice interpretation \citep{zhang2014multicategory}. Specifically, for any $j\in [K]$, we let \#\label{eq:iv-simplex-encoding} a_j = z_j = \begin{cases} (K-1)^{-1/2}\mathbf{1}_{K-1} & \text{if $j = 1$,} \\ -\frac{1 + \sqrt{K}}{(K-1)^{3/2}} \mathbf{1}_{K-1} + \sqrt{\frac{K}{K-1}} e_{j-1} & \text{if $2\leq j \leq K$,} \end{cases} \# where $\mathbf{1}_{K-1}\in \RR^{K-1}$ is an all-one vector and $e_j\in \RR^{K-1}$ is a vector with all elements $0$ except for a $1$ in the $j$-th position. By the simplex encoding in \eqref{eq:iv-simplex-encoding}, one can see that $\sum_{j \in [K]}a_j = \sum_{j \in [K]}z_j = 0$ and $a_i^\top a_j = z_i^\top z_j = -\ind\{i\neq j\}/(K-1) + \ind\{i= j\}$ for any $i,j\in [K]$, where $\ind\{\cdot \}$ is an indicator function; these properties are verified numerically in a short sketch below. We remark that any other reasonable encoding mechanism can be adopted here and our results still hold. \vskip5pt \noindent\textbf{Value Function and Performance Metric.} In the confounded MDP, we aim to find an optimal in-class policy $\pi^*\in \Pi$ such that $\pi^*$ maximizes the expected total rewards, where $\Pi$ is a class of time-homogeneous policies mapping from the observed state space $\cS$ into the probability distribution over the action space $\cA$. In particular, $\pi(a \given s)$ refers to the probability of choosing action $a \in \cA$ given the state value $s \in \cS$. Formally, for any $\pi\in \Pi$, we define the value function $V^\pi$ and the expected total reward $J(\pi)$ as follows, \#\label{eq:iv-val-func} & V^\pi(s) = \EE_{\pi} \left[ \sum_{t = 0}^\infty \gamma^t R_t \Biggiven S_0 = s \right ], \qquad J(\pi) = (1-\gamma ) \cdot \EE_{S_0 \sim \nu} \left [ V^\pi(S_0) \right ], \# where the expectation $\EE_\pi[\cdot]$ is taken with respect to the distribution such that the action $A_t\sim \pi(\cdot \given S_t)$ for any $t\geq 0$, and $\nu$ is a known reference distribution over $\cS$. Given the definition of $J(\pi)$ in \eqref{eq:iv-val-func}, our goal is to leverage the batch data to estimate $\pi^*$, where $$ \pi^* \in \argmax_{\pi \in \Pi} J(\pi). $$ Suppose the batch data we have collected consist of $N$ independent and identically distributed copies of $\{S_t, Z_t, A_t, R_t\}_{t \geq 0}$ with a total number $T$ of decision points for each trajectory. Then we can summarize our batch data as $\cD = \{\{S_t^i, Z_t^i, A_t^i, R_t^i, S_{t+1}^i\}_{t = 0}^{T-1}\}_{i\in [N]}$.
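As a quick numerical check of the simplex encoding in \eqref{eq:iv-simplex-encoding}, the following Python sketch (our own illustration; the function name is ours) constructs the encoding for $K = 3$ and verifies that $\sum_{j\in[K]} a_j = 0$, $a_j^\top a_j = 1$, and $a_i^\top a_j = -1/(K-1)$ for $i\neq j$.

\begin{verbatim}
import numpy as np

def simplex_encoding(K):
    # Builds the K simplex vertices a_1, ..., a_K in R^{K-1}.
    vecs = [np.ones(K - 1) / np.sqrt(K - 1)]
    for j in range(2, K + 1):
        v = -(1 + np.sqrt(K)) / (K - 1) ** 1.5 * np.ones(K - 1)
        v[j - 2] += np.sqrt(K / (K - 1))  # add sqrt(K/(K-1)) * e_{j-1}
        vecs.append(v)
    return np.stack(vecs)

K = 3
A = simplex_encoding(K)
assert np.allclose(A.sum(axis=0), 0.0)        # sum_j a_j = 0
G = A @ A.T                                   # Gram matrix of a_i' a_j
assert np.allclose(np.diag(G), 1.0)           # unit norms on the diagonal
off_diag = G[~np.eye(K, dtype=bool)]
assert np.allclose(off_diag, -1.0 / (K - 1))  # -1/(K-1) off the diagonal
\end{verbatim}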
The assumption that the number of decision points is the same across trajectories is made for simplicity; indeed, the proposed method and theoretical results below remain valid as long as the number of decision points at each trajectory stays within a suitable range and is of the same order asymptotically. We define the performance metric as \$ \textsf{SubOpt}(\pi) = J(\pi^*) - J(\pi), \$ which characterizes the suboptimality of a policy $\pi$ compared with the optimal in-class policy $\pi^*$. \vskip5pt \noindent\textbf{Why is the Confounded MDP Challenging?} In the standard MDP \citep{sutton2018reinforcement}, all states are assumed fully observed and the trajectory $\{S_t, A_t, R_t\}_{t\geq 0}$ satisfies the Markov property. By leveraging the celebrated Bellman equation, under some mild conditions, one can non-parametrically identify $J(\pi)$ for any $\pi\in \Pi$, which serves as a foundation for many existing RL algorithms. However, in the confounded MDP, due to the unobserved states, the effect of actions on the rewards and future states cannot be identified even if we include all past history information at each decision point $t \geq 0$. Therefore, additional assumptions are needed to identify $J(\pi)$, and in this work we rely on IVs to deal with such challenges. \vskip5pt \noindent\textbf{Notation.} Throughout the paper, we denote by $c$ a positive absolute constant, which may vary from line to line. Without further explanation, we denote by $\EE_\pi[\cdot]$ the expectation taken with respect to the trajectory generated by the policy $\pi$, $\EE[\cdot]$ the expectation taken with respect to the trajectory generated by the behavior policy, and $\hat \EE[\cdot]$ the empirical average across all $N$ trajectories. \section{Assumptions and Identification Results}\label{sec:iv-id} In this section, we introduce several assumptions to help us identify $J(\pi)$ for any $\pi \in \Pi$ by using an IV. The first assumption is related to the trajectory $\{S_t, U_t, A_t, R_t\}_{t\geq 0}$, which we model as a time-homogeneous MDP. \begin{assumption}\label{ass:MDP} The following statements hold. \begin{enumerate}[label=(\alph*)] \item For any $t \geq 1$, we have $(S_{t+1}, U_{t+1})\indp \{S_j, U_j, A_j\}_{0\leq j<t} \given (S_t, U_t, A_t)$ and the transition probability is time-homogeneous; \item \label{ass:iv-reward} For any $t\geq 0$, we have $R_t = R(U_t, S_t, A_t, S_{t+1}, U_{t+1})$ for some deterministic function $R\colon \cU\times \cS\times\cA\times\cS\times\cU\to \RR$. Also, we assume $|R_t| \leq 1$ almost surely for any $t\geq 0$; \item The offline dataset $\cD$ is generated by an unknown initial distribution $\zeta$ over $\cS$ and a stationary policy $b$, which is a function mapping from $\cS \times \cU \times \cZ$ into a probability distribution over $\cA$. \end{enumerate} \end{assumption} Here $b$ is often called the behavior policy in the RL literature. Assumption \ref{ass:MDP} is standard in the RL literature and is mild, as $\{U_t\}_{t\geq0}$ is unobserved. The known reward structure in Assumption \ref{ass:MDP}~\ref{ass:iv-reward} is always satisfied, as one can include the observed reward $R_t$ as part of the information in the next state. The uniform boundedness assumption on the reward $R_t$ is used to simplify the technical analysis and can be relaxed by imposing some high-order moment condition on $R_t$ instead. Due to the unobserved state variables $U_t$, we make the following IV assumptions. \begin{assumption}\label{ass:iv-common} The following statements hold.
\begin{enumerate}[label=(\alph*)] \item \label{ass:4} For any $t\geq 0$, we have $(S_{t+1}, U_{t+1})\indp Z_t\given (S_t, U_t, A_t)$; \item \label{ass:5} For any $a\in \cA$ and $t\geq 0$, we have $\PP(A_t = a\given S_t, Z_t)\neq \PP(A_t = a\given S_t)$; \item \label{ass:iv-zu-ind} For any $t \geq 0$, we have $Z_t \indp U_t \given S_t$ and the probability distribution of $Z_t$ given $S_t$ is time-homogeneous. \item \label{ass:iv-compliance} For any $t\geq 0$, the behavior policy satisfies \$ & b(A_t = a\given S_t, U_t, Z_t = a) - \frac{1}{K-1}\sum_{z\in \cZ, z\neq a} b(A_t = a\given S_t, U_t, Z_t = z) \\ & \qquad = b(A_t = a\given S_t, Z_t = a) - \frac{1}{K-1}\sum_{z\in \cZ, z\neq a} b(A_t = a\given S_t, Z_t = z) = \Delta^*(S_t, a), \$ i.e., the compliance $\Delta^*(S_t,a)$ defined above is independent of the unobserved confounder $U_t$ almost surely. \end{enumerate} \end{assumption} Assumption \ref{ass:iv-common}\ref{ass:4} states that there is no direct effect of the IV $Z_t$ on the future states and rewards except through the action $A_t$, which is a typical assumption in the literature of causal inference with IVs \citep{angrist1995identification, angrist1996identification}. Note that by Assumption \ref{ass:MDP}~\ref{ass:iv-reward}, we have implicitly restricted the effect of $Z_t$ on the reward $R_t$ to act only through $A_t$ in this assumption. Assumption \ref{ass:iv-common}~\ref{ass:5} requires that the IV $Z_t$ influences the action $A_t$, which is called IV relevance in causal inference. Assumption \ref{ass:iv-common}\ref{ass:iv-zu-ind}, corresponding to IV independence, ensures that the effect of $Z_t$ on future states and rewards is unconfounded after adjusting for the current state $S_t$. The time-homogeneity assumption on the conditional distribution of $Z_t$ given $S_t$ is imposed here as our target parameter $J(\pi)$ is defined over the infinite horizon. Define a function $$ \Theta^*(s,z) = \PP(Z_t = z\given S_t=s) $$ for every $(s, z) \in \cS \times \cZ$, which is independent of the decision point due to such time-homogeneity. In addition, Assumption \ref{ass:iv-common}\ref{ass:iv-compliance} essentially indicates that there is no interaction between $U_t$ and $Z_t$ in affecting whether the action $A_t$ will comply with $Z_t$ or not. This so-called independent compliance assumption has been widely adopted in identifying the average treatment effect of binary treatments in causal inference \citep{wang2018bounded,cui2021semiparametric}. Here we generalize it to the setting of multiple treatments and instrumental variables, which may be of independent interest; a short numerical illustration of this condition is given below. A graphical illustration of Assumptions \ref{ass:MDP} and \ref{ass:iv-common} is presented in Figure \ref{fig:mdp}, which also illustrates how the offline data in the confounded MDP are generated. Moreover, we provide a numerical example in \S\ref{sec:exppppp} of the Supplementary Material to illustrate the IV structure of our data generating process. In the following subsections, we introduce value function (VF)-based identification and marginalized importance sampling (MIS)-based identification, respectively.
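To illustrate the independent compliance condition in Assumption \ref{ass:iv-common}\ref{ass:iv-compliance}, one can check that the behavior policy used in the simulation of \S\ref{sec:exppppp} of the Supplementary Material satisfies it: the compliance $\Delta^*(s,a)$ is the same on both sides of the confounder threshold. The following is a minimal Python sketch (our own illustration; all names are ours).

\begin{verbatim}
import numpy as np

# Behavior policy tables b(a | z) from the simulation: rows indexed by
# the IV z, columns by the action a, for K = 3 dosage levels.
b_high_u = np.array([[0.80, 0.10, 0.10],   # case U_t > 2
                     [0.10, 0.80, 0.10],
                     [0.10, 0.10, 0.80]])
b_low_u = np.array([[0.78, 0.11, 0.11],    # case U_t <= 2
                    [0.05, 0.78, 0.17],
                    [0.11, 0.05, 0.84]])

def compliance(b, K=3):
    # Delta(s, a) = b(a | z=a) - (1/(K-1)) * sum_{z != a} b(a | z).
    return np.array([b[a, a] - (b[:, a].sum() - b[a, a]) / (K - 1)
                     for a in range(K)])

print(compliance(b_high_u))  # [0.7 0.7 0.7]
print(compliance(b_low_u))   # [0.7 0.7 0.7]: the same for both cases
\end{verbatim}

Both cases give $\Delta^*(\cdot, a) = 0.7$ for every $a$, so the compliance is indeed free of the unmeasured confounder $U_t$.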
\begin{figure}[htbp] \centering \begin{tikzpicture}[scale=1.2] \node (so) at (-4.8,0) [label=below:{$S_{t-1}$}, circle, fill=black]{}; \node (zo) at (-4.4,1) [label=above:{$Z_{t-1}$}, circle, fill=black]{}; \node (uo) at (-2.2,1) [label=above:{$U_{t-1}$}, circle, draw]{}; \node (ao) at (-3.2,0) [label=below:{$A_{t-1}$}, circle, fill=black]{}; \node (s) at (-1.6,0) [label=below:{$S_t$}, circle, fill=black]{}; \node (z) at (-1.2,1) [label=above:{$Z_t$}, circle, fill=black]{}; \node (sp) at (1.6,0) [label=below:{$S_{t+1}$}, circle, fill=black]{}; \node (u) at (1,1) [label=above:{$U_t$}, circle, draw]{}; \node (a) at (0,0) [label=below:{$A_t$}, circle, fill=black]{}; \draw[-stealth] (ao) edge (s); \draw[-stealth] (so) edge (ao); \draw[-stealth] (zo) edge (ao); \draw[-stealth] (uo) edge (ao); \draw[dashed,->] (so) [out=-315,in=130] edge (u); \draw[dashed,->] (ao) edge (u); \draw[-stealth] (uo) edge (s); \draw[-stealth] (so) [out=-45,in=220] edge (s); \draw[-stealth] (so) edge (zo); \draw[-stealth] (s) edge (z); \draw[dashed,->] (uo) [out=45,in=140] edge (u); \draw[-stealth] (a) edge (sp); \draw[-stealth] (s) edge (a); \draw[-stealth] (z) edge (a); \draw[-stealth] (s) [out=-45,in=220] edge (sp); \draw[-stealth] (u) edge (a) edge (sp); \end{tikzpicture} \caption{A graphical illustration of the confounded MDP satisfying Assumptions \ref{ass:MDP} and \ref{ass:iv-common}. Here $\{Z_t\}_{t\geq 0}$ are IVs, which satisfy IV independence, i.e., $Z_t \indp U_t \given S_t$. Also, the trajectory $\{S_t, U_t, A_t\}_{t\geq 0}$ satisfies the Markov property, i.e., $(S_{t+1}, U_{t+1})\indp \{S_j, U_j, A_j\}_{0\leq j<t} \given (S_t, U_t, A_t)$. Meanwhile, the dependence of $U_t$ on $U_{t-1}$ and other variables given $S_t$ (dashed arrows) is prohibited by Assumption \ref{ass:10}. } \label{fig:mdp} \end{figure} \label{sec:iv-id} \subsection{Value Function-based Identification} \label{sec:iv-vf-id} In the unconfounded MDP, the value function defined in \eqref{eq:iv-val-func} can be used to identify $J(\pi)$, and the value function itself can be identified via the Bellman equation. However, due to the existence of unobserved confounders, the regular Bellman equation, which relies on the Markovian assumption, does not hold in general, and the effect of actions on the reward cannot be identified either. Fortunately, by leveraging the IV, we are able to identify $J(\pi)$ via the state-value function $V^\pi$, which can in turn be identified by an IV-aided Bellman equation. Before stating our result, we make one additional assumption. \begin{assumption}\label{ass:10} We have $(Z_t, U_t) \indp (\{S_j, U_j, A_j\}_{j < t})\given S_t$ for $t \geq 1$. \end{assumption} Assumption \ref{ass:10} ensures that $(Z_t, U_t)$ is ``memoryless'', i.e., it does not depend on past observations given the current state. This essentially ensures that the stochastic process $\{S_t, Z_t, A_t\}_{t\geq0}$ satisfies the Markov property. The memoryless assumption on the unobserved confounders has been commonly used in the confounded MDP literature. See \cite{kallus2020confounding,shi2022off} for more details. \begin{lemma} \label{lemma:iv-vf-id} Under Assumptions \ref{ass:MDP} and \ref{ass:iv-common}, for any $s \in \cS$ and $\pi \in \Pi$, we have \$ V^\pi(s) = \EE\left[ \sum_{t = 0}^\infty \gamma^t R_t \left( \prod_{j = 0}^t \frac{Z_j^\top A_j \pi(A_j \given S_j)}{\Delta^*(S_j, A_j) \Theta^*(S_j, Z_j) } \right ) \bigggiven S_0 = s \right].
\$ If additionally Assumption \ref{ass:10} is satisfied, it holds for any $t\geq 0$ that \$ V^\pi(s) = \EE \left[ \frac{Z_t^\top A_t \pi(A_t \given S_t)}{\Delta^*(S_t, A_t) \Theta^*(S_t, Z_t) } \cdot \left(R_t + \gamma V^\pi(S_{t+1})\right ) \Biggiven S_t = s \right]. \$ Then the policy value $J(\pi)$ for $\pi \in \Pi$ can be identified via \$ J(\pi) = (1-\gamma) \cdot \EE_{S_0 \sim \nu}\left[V^\pi(S_0)\right]. \$ \end{lemma} \begin{proof} See \S\ref{prf:lemma:iv-vf-id} of the Supplementary Material for a detailed proof. \end{proof} We remark that the Bellman equation in the unconfounded MDP takes the following form, \$ V^\pi_{\textsf{unconf}}(s) = \EE \left[ \frac{\pi(A_t \given S_t)}{\PP(A_t\given S_t)} \cdot \left(R_t + \gamma V_{\textsf{unconf}}^\pi(S_{t+1})\right ) \Biggiven S_t = s \right], \$ where $V^\pi_{\textsf{unconf}}$ is the corresponding state-value function in the unconfounded MDP. In comparison, to deal with the unobserved confounders, our identification result in Lemma \ref{lemma:iv-vf-id} incorporates the IVs into the action density ratio. It is also interesting to see that if one could observe the trajectory $\{S_t, Z_t, A_t, R_t\}_{t \geq 0}$ up to infinity, then Assumptions \ref{ass:MDP} and \ref{ass:iv-common} would be sufficient to identify $V^\pi(s)$ and $J(\pi)$ based on the first statement of Lemma \ref{lemma:iv-vf-id}. However, since we only observe trajectories up to a finite horizon, we impose Assumption \ref{ass:10} so that the Bellman equation holds and can be used to break the curse of the infinite horizon. Based on Lemma \ref{lemma:iv-vf-id}, we introduce the following VF-based estimating equation, which will be used later in \S\ref{sec:iv-vf} to construct an estimator of the value function $V^\pi$. \begin{corollary}[VF-based Estimating Equation] \label{cor:iv-vf-phi-0} Under Assumptions \ref{ass:MDP}, \ref{ass:iv-common}, and \ref{ass:10}, it holds for any function $g\colon \cS\to \RR$ that \$ \EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} g(S_t) \frac{Z_t^\top A_t \pi(A_t\given S_t)}{\Delta^*(S_t,A_t) \Theta^*(S_t,Z_t)} \cdot \left(R_t + \gamma V^\pi(S_{t+1})\right) \right] = \EE\left[\frac{1}{T}\sum_{t=0}^{T-1} g(S_t) V^\pi(S_t)\right]. \$ \end{corollary} \begin{proof} See \S\ref{prf:cor:iv-vf-phi-0} of the Supplementary Material for a detailed proof. \end{proof} \subsection{Marginalized Importance Sampling-based Identification} \label{sec:iv-mis-id} In this subsection, we propose another way to identify $J(\pi)$, via marginalized importance sampling. We first introduce the following notation. For any $t\geq 0$, we denote by $p_t^\pi(\cdot)$ the marginal distribution of $S_t$ under the known initial observed state distribution $\nu$ following the policy $\pi$. Meanwhile, with a slight abuse of notation, we denote by $p_t^b(\cdot)$ the marginal distribution of $S_t$ under the unknown offline data generating distribution $\zeta$ following the behavior policy $b$. In addition, for every $s \in \cS$, we define \#\label{eq:iv-ratio-def} d^\pi(s) = (1-\gamma) \sum_{t = 0}^\infty \gamma^t p_t^\pi(s), \qquad d^b(s) = \frac{1}{T} \sum_{t = 0}^{T-1} p_t^b(s), \qquad w^\pi(s) = \frac{d^\pi(s)}{d^b(s)}, \# which are the discounted state visitation measure under the policy $\pi$, the average state visitation measure under the behavior policy $b$, and their density ratio, respectively. As $T$ is the same for each trajectory, all trajectories share the same $d^b$.
In general, if $T$ differs across subjects, one can treat $T$ as a random variable and define the ratio function as a mixture of ratio functions. Motivated by the idea of marginalized importance sampling for off-policy evaluation in the standard MDP \citep{liu2018breaking}, we establish the following novel identification result for the expected total reward $J(\pi)$ in the confounded MDP. \begin{lemma} \label{lemma:iv-mis-j} Under Assumptions \ref{ass:MDP}--\ref{ass:10}, for any $\pi \in \Pi$, we have \$ J(\pi) = \EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} \frac{Z_t^\top A_t \pi(A_t\given S_t)}{\Delta^*(S_t,A_t) \Theta^*(S_t,Z_t)} \cdot w^\pi(S_t) R_t \right], \$ where $w^\pi$ is defined in \eqref{eq:iv-ratio-def}. \end{lemma} \begin{proof} See \S\ref{prf:lemma:iv-mis-j} of the Supplementary Material for a detailed proof. \end{proof} We remark that the expected total reward in the unconfounded MDP takes the following form, \$ J_{\textsf{unconf}}(\pi) = \EE \left[ \frac{1}{T} \sum_{t = 0}^{T-1} \frac{\pi(A_t \given S_t)}{\PP(A_t\given S_t)}\cdot w^\pi(S_t) R_t \right], \$ where $J_{\textsf{unconf}}(\pi)$ is the corresponding expected total reward in the unconfounded MDP, and the expectation $\EE[\cdot]$ is taken with respect to the trajectory generated by the behavior policy. In comparison, to deal with the unobserved confounders, our identification result in Lemma \ref{lemma:iv-mis-j} incorporates the IVs into the action density ratio. Based on Lemma \ref{lemma:iv-mis-j}, we introduce the following MIS-based estimating equation, which will be used later in \S\ref{sec:iv-mis} to construct an estimator of the density ratio $w^\pi$. \begin{lemma}[MIS-based Estimating Equation] \label{lemma:iv-est-eq} Under Assumptions \ref{ass:MDP}--\ref{ass:10}, for any $\pi \in \Pi$, it holds for any function $f\colon \cS \to \RR$ that \$ (1-\gamma) \EE_{S_0\sim \nu} \left [ f(S_0) \right] = \EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} \frac{Z_t^\top A_t \pi(A_t\given S_t)}{\Delta^*(S_t,A_t) \Theta^*(S_t,Z_t)} \cdot w^\pi(S_t) \left ( f(S_t) - \gamma f(S_{t+1}) \right ) \right]. \$ \end{lemma} \begin{proof} See \S\ref{prf:lemma:iv-est-eq} of the Supplementary Material for a detailed proof. \end{proof} Lemma \ref{lemma:iv-est-eq} shows that, with the help of IVs and Assumption \ref{ass:10}, the estimating equation for the density ratio $w^\pi$ holds under the confounded MDP. This is different from existing approaches such as \cite{liu2018breaking,zhang2020gendice} for estimating ratio functions in the standard MDP setting, which rely crucially on the no-unmeasured-confounding assumption. \section{Instrumental-Variable-Assisted RL with Pessimism}\label{sec:iv-pess} In this section, we introduce three pessimistic RL methods to estimate $\pi^*$ in our confounded MDP. Generally, pessimistic RL first employs the offline data to construct a conservative estimate of the value of any policy, and then selects the policy with the highest conservative value estimate. Though recently proposed pessimistic RL methods show promising performance in practice \citep{kumarConservativeQlearningOffline2020,yuMopoModelbasedOffline2020,kidambiMorelModelbasedOffline2020,deng2021score}, their theoretical understanding is far from complete and is limited to the fully observable MDP \citep{levine2020offline,shi2022pessimistic}. In this section, we adapt the idea of pessimism to our setting and present the related theoretical results in \S\ref{sec:iv-theory}.
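Before turning to the estimators, we illustrate how the identification result of Lemma \ref{lemma:iv-mis-j} translates into a plug-in Monte Carlo estimate of $J(\pi)$. The sketch below is purely illustrative Python: it assumes oracle access to $\Delta^*$, $\Theta^*$, and a candidate ratio $w$, and it reads the inner product $Z_t^\top A_t$ as the indicator $\mathbb{1}\{Z_t = A_t\}$ for one-hot encodings, which is our simplifying convention rather than a prescription of the theory.
\begin{verbatim}
import numpy as np

def J_hat(trajs, pi, w, Delta, Theta):
    """Plug-in Monte Carlo version of the MIS identification formula.

    trajs : list of N trajectories, each a list of (s, z, a, r) tuples;
    pi, w, Delta, Theta : callables standing in for pi(a | s), w^pi(s),
        Delta^*(s, a), Theta^*(s, z), assumed given by an oracle here.
    """
    per_traj = []
    for traj in trajs:
        terms = [float(z == a) * pi(a, s) * w(s) * r
                 / (Delta(s, a) * Theta(s, z))
                 for (s, z, a, r) in traj]
        per_traj.append(np.mean(terms))   # the (1/T)-average over decision points
    return float(np.mean(per_traj))       # empirical average over trajectories
\end{verbatim}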
Since both identification results in \S\ref{sec:iv-vf-id} and \S\ref{sec:iv-mis-id} require estimating the quantities $\Delta^*(s,a)$ and $\Theta^*(s,z)$, we first introduce the estimation procedure for these quantities. We assume that there exists an oracle that provides estimators of $\Delta^*(s,a)$ and $\Theta^*(s,z)$ via two loss functions $\hat L_0(\Delta)$ and $\hat L_1(\Theta)$ as follows, \$ \hat \Delta\in \argmin_{\Delta\in \cF_0} \hat L_0(\Delta),\qquad \hat \Theta \in \argmin_{\Theta \in \cF_1} \hat L_1(\Theta), \$ where $\cF_0$ and $\cF_1$ are two function classes. We remark that one can use the negative log-likelihood functions for $\hat L_0$ and $\hat L_1$ (see \S\ref{sec:iv-theory} for details). Meanwhile, we construct two confidence sets for $\Delta$ and $\Theta$, respectively, as follows, \#\label{eq:mle-conf-set} & \textsf{conf}^0_{\alpha_0} = \left \{\Delta \in \cF_0 \colon \hat L_0(\Delta) - \hat L_0(\hat \Delta) \leq \alpha_0 \right\},\\ & \textsf{conf}^1_{\alpha_1} = \left\{\Theta \in \cF_1 \colon \hat L_1(\Theta) - \hat L_1(\hat \Theta) \leq \alpha_1 \right\}, \# where $(\alpha_0,\alpha_1)$ are constants that will be specified later. These two confidence sets are used to construct conservative estimators of $J(\pi)$ via VF-, MIS-, or doubly robust (DR)-based estimation. \subsection{VF-based Pessimistic Method}\label{sec:iv-vf} We introduce the VF-based pessimistic RL method in this subsection. We first define the following quantity, \$ \hat \Phi^\pi_\textsf{vf}(v,g; \Delta, \Theta) = \hat \EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} g(S_t) \left ( \frac{Z_t^\top A_t \pi(A_t\given S_t)}{\Delta(S_t,A_t) \Theta(S_t,Z_t)} \left(R_t + \gamma v(S_{t+1})\right) - v(S_t) \right ) \right], \$ where $\hat \EE[\cdot]$ is the empirical measure defined by the offline data $\cD$. Meanwhile, we define its population counterpart as \$ \Phi^\pi_\textsf{vf}(v,g;\Delta, \Theta) = \EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} g(S_t) \left ( \frac{Z_t^\top A_t \pi(A_t\given S_t)}{\Delta(S_t,A_t) \Theta(S_t,Z_t)} \left(R_t + \gamma v(S_{t+1})\right) - v(S_t) \right ) \right] \$ for any $(v, g, \Delta, \Theta)$, where the expectation $\EE[\cdot]$ is taken with respect to the trajectory generated by the behavior policy. Then, by the VF-based estimating equation specified in Corollary \ref{cor:iv-vf-phi-0}, it is easy to see that $\Phi^\pi_\textsf{vf}(V^\pi,g;\Delta^*, \Theta^*) = 0$ for any function $g\colon \cS \to \RR$. With the aforementioned notions, for any $(\Delta, \Theta)$, we construct an estimator of $V^\pi$ by solving the following min-max optimization problem, \#\label{eq:v-pi-def} \hat v_{\Delta, \Theta}^\pi \in \argmin_{v\in \cV} \max_{g\in \cW} \hat \Phi_\textsf{vf}^\pi (v, g; \Delta, \Theta), \# where $\cV$ and $\cW$ are two sets to be specified later.
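As a computational aside, the min-max problem in \eqref{eq:v-pi-def} can be approached with simple gradient descent-ascent once $\cV$ and $\cW$ are parametrized. The sketch below is illustrative Python only: the linear classes, the feature map, and the step sizes are our own choices, and the theory above does not prescribe this particular solver (in particular, the norm constraints on $v$ and $g$ are omitted for brevity).
\begin{verbatim}
import numpy as np

def phi(s):                         # hypothetical feature map shared by V and W
    return np.array([1.0, s, s ** 2])

def fit_v(trajs, pi, Delta, Theta, gamma=0.9, lr=0.05, iters=2000):
    """Gradient descent-ascent on the empirical objective in (eq:v-pi-def),
    with v(s) = phi(s) @ th_v and g(s) = phi(s) @ th_g (linear classes)."""
    th_v, th_g = np.zeros(3), np.zeros(3)
    rng = np.random.default_rng(0)
    for _ in range(iters):
        traj = trajs[rng.integers(len(trajs))]       # sample one trajectory
        grad_v, grad_g = np.zeros(3), np.zeros(3)
        T = len(traj) - 1
        for t in range(T):
            s, z, a, r = traj[t]
            s_next = traj[t + 1][0]
            rho = float(z == a) * pi(a, s) / (Delta(s, a) * Theta(s, z))
            resid = rho * (r + gamma * phi(s_next) @ th_v) - phi(s) @ th_v
            grad_g += resid * phi(s) / T                         # ascent in g
            grad_v += (phi(s) @ th_g) * (rho * gamma * phi(s_next) - phi(s)) / T
        th_v -= lr * grad_v                                      # descent in v
        th_g += lr * grad_g
    return th_v
\end{verbatim}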
To obtain an estimator $\hat \pi_\textsf{vf}$ of the optimal in-class policy $\pi^\ast$ that maximizes the expected total reward $J(\pi)$ defined in \eqref{eq:iv-val-func}, we formulate the following optimization problem, \#\label{eq:iv-vf-hat-pi} & \hat \pi_\textsf{vf} = \argmax_{\pi\in \Pi} \min_{ (\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1} } \min_{v \in \textsf{conf}_{\alpha_\textsf{vf}}^\textsf{vf}(\Delta, \Theta, \pi)} (1-\gamma)\EE_{S \sim \nu}[v(S)], \\ & \text{with } \textsf{conf}_{\alpha_\textsf{vf}}^\textsf{vf}(\Delta, \Theta, \pi) = \left \{v\in \cV \colon \max_{g\in \cW} \hat \Phi_\textsf{vf}^\pi(v, g; \Delta, \Theta) - \max_{g\in \cW} \hat \Phi_\textsf{vf}^\pi(\hat v_{\Delta, \Theta}^\pi, g; \Delta, \Theta) \leq \alpha_\textsf{vf} \right \}, \# where $(\alpha_0, \alpha_1, \alpha_\textsf{vf})$ are constants to be specified, and $\textsf{conf}^0_{\alpha_0}$ and $\textsf{conf}^1_{\alpha_1}$ are the confidence sets defined in \eqref{eq:mle-conf-set}. Intuitively, the policy $\hat \pi_\textsf{vf}$ defined in \eqref{eq:iv-vf-hat-pi} aims to maximize the most pessimistic estimate of the expected total reward. As we will see in Theorem \ref{thm:iv-vf}, such a pessimistic method provably converges to an optimal policy under a data coverage assumption only on the optimal policy, together with some other mild conditions. \subsection{MIS-based Pessimistic Method}\label{sec:iv-mis} We introduce the MIS-based pessimistic RL method in this subsection. We first define the following quantity, \$ \hat \Phi^\pi_\textsf{mis}(w, f; \Delta, \Theta) = & \EE_{S_0 \sim \nu}\left[ (1-\gamma) f(S_0) \right] \\ & - \hat \EE \left[ \frac{1}{T} \sum_{t=0}^{T-1} \frac{Z_t^\top A_t \pi(A_t\given S_t)}{\Delta(S_t,A_t) \Theta(S_t,Z_t)}w(S_t) \left (f(S_t) - \gamma f(S_{t+1}) \right ) \right]. \$ We define its population counterpart as \$ \Phi^\pi_\textsf{mis}(w,f;\Delta, \Theta) = & \EE_{S_0 \sim \nu}\left[ (1-\gamma) f(S_0) \right] \\ & - \EE \left[ \frac{1}{T} \sum_{t=0}^{T-1} \frac{Z_t^\top A_t \pi(A_t\given S_t)}{\Delta(S_t,A_t) \Theta(S_t,Z_t)}w(S_t) \left (f(S_t) - \gamma f(S_{t+1}) \right ) \right] \$ for any $(w, f, \Delta, \Theta)$. Then, by the MIS-based estimating equation specified in Lemma \ref{lemma:iv-est-eq}, it can be seen that $\Phi^\pi_\textsf{mis}(w^\pi,f;\Delta^*, \Theta^*) = 0$ for any function $f\colon \cS \to \RR$. With the aforementioned notions, for any $(\Delta, \Theta)$, we construct an estimator of $w^\pi$ by solving the following min-max optimization problem, \#\label{eq:w-pi-def} & \hat w^\pi_{\Delta, \Theta} \in \argmin_{w\in \cW} \max_{f\in \cV} \hat \Phi^\pi_\textsf{mis}(w, f; \Delta, \Theta). \# With a slight abuse of notation, here $\cW$ and $\cV$ are again two sets to be specified later. We aim to obtain an optimal policy that maximizes the expected total reward $J(\pi)$ by utilizing the estimators constructed in \eqref{eq:w-pi-def}. To this end, we further define the following estimator of $J(\pi)$ via Lemma \ref{lemma:iv-mis-j}, \$ \hat L_\textsf{mis} (w, \pi; \Delta, \Theta) = \hat \EE \left[ \frac{1}{T} \sum_{t = 0}^{T-1} \frac{Z_t^\top A_t \pi(A_t\given S_t)}{\Delta(S_t,A_t) \Theta(S_t,Z_t)} w(S_t) R_t \right].
\$ Then we aim to solve the following optimization problem, \#\label{eq:iv-mis-hat-pi} & \hat \pi_\textsf{mis} \in \argmax_{\pi\in \Pi} \min_{ (\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1} } \min_{w \in \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta, \Theta, \pi) } \hat L_\textsf{mis}(w, \pi; \Delta, \Theta), \\ & \text{with } \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta, \Theta, \pi) = \Biggl\{ w\in \cW \colon \max_{f\in \cV} \hat \Phi_\textsf{mis}^\pi(w, f; \Delta, \Theta) - \max_{f\in \cV} \hat \Phi_\textsf{mis}^\pi(\hat w^\pi_{\Delta, \Theta}, f; \Delta, \Theta) < \alpha_\textsf{mis} \Biggr\}, \# where $(\alpha_0, \alpha_1, \alpha_\textsf{mis})$ are constants to be specified, and $\textsf{conf}^0_{\alpha_0}$ and $\textsf{conf}^1_{\alpha_1}$ are the confidence sets defined in \eqref{eq:mle-conf-set}. Similarly to \eqref{eq:iv-vf-hat-pi}, the policy $\hat \pi_\textsf{mis}$ defined in \eqref{eq:iv-mis-hat-pi} aims to maximize the most pessimistic estimate of the expected total reward. As we will see in Theorem \ref{thm:iv-mis}, such a pessimistic method provably converges to an optimal policy under a realizability assumption only for the optimal policy. \subsection{DR-based Pessimistic Method}\label{sec:iv-dr} As a combination of the VF-based and MIS-based policy optimization methods, we introduce a doubly robust (DR)-based pessimistic RL algorithm in this subsection. We define the following DR estimator together with its population counterpart, \$ & \hat L_\textsf{dr}(w, v, \pi; \Delta, \Theta) = \hat \EE \left[\frac{1}{T} \sum_{t = 0}^{T-1} \frac{Z_t^\top A_t \pi(A_t\given S_t)}{\Delta(S_t,A_t) \Theta(S_t,Z_t)} w(S_t) \left( R_t + \gamma v(S_{t+1}) - v(S_t) \right) \right] \\ & \qquad \qquad \qquad \qquad \quad + (1-\gamma) \EE_{S_0\sim \nu} \left[v(S_0)\right], \\ & L_\textsf{dr}(w, v, \pi; \Delta, \Theta) = \EE \left[\frac{1}{T} \sum_{t = 0}^{T-1} \frac{Z_t^\top A_t \pi(A_t\given S_t)}{\Delta(S_t,A_t) \Theta(S_t,Z_t)} w(S_t) \left( R_t + \gamma v(S_{t+1}) - v(S_t) \right) \right] \\ & \qquad \qquad \qquad \qquad \quad + (1-\gamma) \EE_{S_0\sim \nu} \left[v(S_0)\right]. \$ Note that $L_\textsf{dr}(w^\pi, v, \pi; \Delta^*, \Theta^*) = L_\textsf{dr}(w, V^\pi, \pi; \Delta^*, \Theta^*) = J(\pi)$ for any $(\pi, w, v)\in \Pi \times \cW\times \cV$. Thus, the quantity $\hat L_\textsf{dr}$ serves as a valid DR estimator of $J(\pi)$. In what follows, based on such a DR estimator of the expected total reward, we formulate the following optimization problem for estimating the optimal in-class policy, \# & \hat \pi_\textsf{dr} \in \argmax_{\pi\in \Pi} \min_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0}\times \textsf{conf}^1_{\alpha_1}} \min_{(w,v) \in \textsf{conf}_{\alpha_\textsf{mis}, \alpha_\textsf{vf}}(\Delta, \Theta, \pi)} \hat L_\textsf{dr}(w,v,\pi; \Delta, \Theta), \\ & \text{with } \textsf{conf}_{\alpha_\textsf{mis}, \alpha_\textsf{vf}}(\Delta, \Theta, \pi) = \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta, \Theta, \pi)\times \textsf{conf}_{\alpha_\textsf{vf}}^\textsf{vf}(\Delta, \Theta, \pi), \label{eq:iv-dr} \# where $(\alpha_0, \alpha_1, \alpha_\textsf{vf}, \alpha_\textsf{mis})$ are constants to be specified, and $\textsf{conf}_{\alpha_\textsf{vf}}^\textsf{vf}(\Delta, \Theta, \pi)$ and $\textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta, \Theta, \pi)$ are defined in \eqref{eq:iv-vf-hat-pi} and \eqref{eq:iv-mis-hat-pi}, respectively.
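Computationally, the inner objective $\hat L_\textsf{dr}$ is a plain empirical average and is cheap to evaluate. The sketch below is illustrative Python, with the same oracle conventions and the same indicator reading of $Z_t^\top A_t$ as in the earlier sketches; the sampling-based approximation of the initial distribution $\nu$ is our own choice.
\begin{verbatim}
import numpy as np

def L_dr_hat(trajs, pi, w, v, Delta, Theta, nu_samples, gamma=0.9):
    """Empirical DR objective: importance-weighted TD residual plus the
    (1 - gamma)-weighted initial value under nu (approximated by samples)."""
    per_traj = []
    for traj in trajs:
        terms = []
        for t in range(len(traj) - 1):
            s, z, a, r = traj[t]
            s_next = traj[t + 1][0]
            rho = float(z == a) * pi(a, s) / (Delta(s, a) * Theta(s, z))
            terms.append(rho * w(s) * (r + gamma * v(s_next) - v(s)))
        per_traj.append(np.mean(terms))
    init = np.mean([v(s0) for s0 in nu_samples])
    return float(np.mean(per_traj) + (1 - gamma) * init)
\end{verbatim}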
As we will see in Theorem \ref{thm:iv-dr}, such a DR-based pessimistic method provably converges to an optimal policy under a realizability assumption only for the optimal policy. \section{Theoretical Results} \label{sec:iv-theory} In this section, we investigate the theoretical properties of the aforementioned three methods. We aim to derive finite-sample upper bounds for the suboptimality of our estimated policies, i.e., $\textsf{SubOpt}(\hat \pi)$, where $\hat \pi$ is either $\hat \pi_\textsf{vf}$, $\hat \pi_\textsf{mis}$, or $\hat \pi_\textsf{dr}$. To begin with, we introduce the following definition of the covering number and then impose metric entropy conditions on the function classes used in the proposed algorithms. \begin{definition}[Covering Number] Let $(\cC, \|\cdot \|_\infty)$ be a normed space, and let $\cH \subseteq \cC$. The set $\{x_1, x_2, \ldots, x_{n}\}$ is an $\varepsilon$-covering of $\cH$ if $\cH\subseteq \cup_{i = 1}^n B(x_i, \varepsilon)$, where $B(x_i, \varepsilon)$ is the sup-norm ball centered at $x_i$ with radius $\varepsilon$. The covering number of $\cH$ is then defined as $N(\varepsilon, \cH, \|\cdot\|_\infty) = \min\{n\colon \exists \text{ $\varepsilon$-covering of $\cH$ of size $n$}\}$. \end{definition} \begin{assumption}\label{ass:spaces} The following statements hold. \begin{enumerate}[label=(\alph*)] \item \label{ass:bounded-covering} For any set $\cH \in \{\cF_0, \cF_1, \cV, \cW, \Pi\}$, there exists a constant $\mathfrak{C}_{\cH}$ such that \$ N(\varepsilon, \cH, \|\cdot\|_\infty) \leq c\cdot (1/\varepsilon)^{\mathfrak{C}_{\cH}}, \$ where $c > 0$ is a constant. Further, we write $\mathfrak{C}_{\cH_1, \cH_2, \ldots, \cH_k} = \sum_{j\in [k]}\mathfrak{C}_{\cH_j}$ for any collection of function classes $\{\cH_1, \cH_2, \ldots, \cH_k\}$. \item \label{ass:upper-bound-delta} There exist positive constants $C_{\Delta^*}$ and $C_{\Theta^*}$ such that $|\Delta^*(s, a)| \geq C_{\Delta^*}^{-1}$ and $\Theta^*(s, z) \geq C_{\Theta^*}^{-1}$ for any $(s,z,a)\in \cS\times \cZ\times \cA$, where $\Theta^*(s,z)$ and $\Delta^*(s,a)$ are defined in Assumption \ref{ass:iv-common}. \item \label{ass:bounded-mle} We have $|\Delta(s,a)| \geq C_{\Delta^*}^{-1}$ and $\Theta(s,z) \geq C_{\Theta^*}^{-1}$ for any $(\Delta, \Theta, s, a, z)\in \cF_0\times \cF_1 \times \cS \times \cA \times \cZ$. \item \label{ass:lip} We have $\sup_{s\in \cS}|V^{\pi_1}(s) - V^{\pi_2}(s)| \leq L_\Pi \cdot \sup_{(s,a)\in \cS\times \cA} |\pi_1(a\given s) - \pi_2(a\given s)|$ for any $\pi_1,\pi_2 \in \Pi$, where $L_\Pi$ is a positive constant. \item \label{ass:function-bound} We have $\|v\|_\infty \leq 1/(1-\gamma)$ and $\|w\|_\infty \leq C_*$ for any $(v,w)\in \cV\times \cW$, where $C_* > 0$ is a constant. \end{enumerate} \end{assumption} Assumption \ref{ass:spaces}\ref{ass:bounded-covering} states that the function classes have covering numbers growing at most polynomially in $1/\varepsilon$, a condition widely used in the existing literature \citep[e.g.,][]{antos2008learning}. Assumption \ref{ass:spaces}\ref{ass:upper-bound-delta} states that the conditional probability $\Theta^*$ and the compliance $\Delta^*$ are uniformly bounded away from zero. Essentially, we require data coverage of all the IVs and a non-negligible compliance gap, but we do not require a coverage assumption on all the actions. In practice, coverage of all IVs seems more plausible than coverage of all actions, due to noncompliance.
With Assumption \ref{ass:spaces}\ref{ass:upper-bound-delta}, we only need to consider a lower bounded function class to recover $\Theta^*$ and $\Delta^*$, which is imposed in Assumption \ref{ass:spaces}\ref{ass:bounded-mle}. Meanwhile, the Lipschitz condition imposed in Assumption \ref{ass:spaces}\ref{ass:lip} aims to control the complexity of the value function class induced by $\Pi$, i.e., the class $\{V^\pi(\cdot)\colon \pi\in \Pi\}$. Such an assumption is commonly imposed in the related literature \citep{zhou2017residual,liao2020batch}. Finally, Assumption \ref{ass:spaces}\ref{ass:function-bound} states that the sets $\cV$ and $\cW$ are uniformly bounded, which is needed for deriving the exponential inequalities. \begin{assumption}\label{ass:ergodic} The sequence $\{S_t, Z_t, U_t, A_t\}_{t\geq 0}$ admits a unique stationary distribution $G_\text{stat}$ over $\cS\times \cZ \times \cU \times \cA$ and is geometrically ergodic, i.e., there exist a function $\varphi\colon \cS\times \cZ \times \cU \times \cA \to \RR^+$ and a constant $\kappa > 0$ such that \$ \left\| G_\text{stat}(\cdot) - G_t(\cdot \given s_0, z_0, u_0, a_0 ) \right\|_\text{TV} \leq \varphi(s_0, z_0, u_0, a_0) \cdot \exp\left(-2\kappa t\right), \$ where $G_t(\cdot \given s_0, z_0, u_0, a_0)$ is the marginal distribution of $(S_t, Z_t, U_t, A_t)$ given $(S_0, Z_0, U_0, A_0) = (s_0, z_0, u_0, a_0)$ under the behavior policy $b$. Further, we have $\int \varphi(s,z,u,a) \ud \nu(s, z, u, a) \leq c$ and $\int \varphi(s,z,u,a) \ud G_\text{stat}(s, z, u, a) \leq c$ for some positive absolute constant $c$. \end{assumption} Assumption \ref{ass:ergodic} states that the Markov chain $\{S_t, Z_t, U_t, A_t\}_{t\geq 0}$ mixes geometrically. Such an assumption is widely adopted in the related literature \citep{van1998learning,wang2021projected} to deal with dependent data. To establish the upper bounds for the suboptimality of the resulting policies, we first need to show that, in our proposed algorithms, there exists at least one feasible solution that satisfies the constraints with properly chosen constants. In the following, we focus on $\textsf{conf}^0_{\alpha_0}$ and $\textsf{conf}^1_{\alpha_1}$ for $\Delta^*$ and $\Theta^*$, respectively. Since $\textsf{conf}^0_{\alpha_0}$ and $\textsf{conf}^1_{\alpha_1}$ can be constructed by many methods, to keep our theoretical results general, we assume that there exists a proper choice of $(\alpha_0,\alpha_1)$ that ensures $\Delta^* \in \textsf{conf}^0_{\alpha_0}$ and $\Theta^* \in \textsf{conf}^1_{\alpha_1}$, and we then give a concrete example that justifies this assumption. \begin{assumption}\label{ass:iv-sl-res} There exists $(\alpha_0, \alpha_1)$ such that with probability at least $1 - \delta$, we have \$ \Delta^* \in \textsf{conf}^0_{\alpha_0}, \qquad \Theta^* \in \textsf{conf}^1_{\alpha_1}. \$ Further, with probability at least $1 - \delta$, for any $(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}$, we have \$ & \EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} \left \| \Delta^*(S_t,\cdot) - \Delta(S_t,\cdot) \right \|_1^2 \right ] \leq \xi_0^2 \frac{C_{\Delta^*}}{NT \kappa} \cdot \mathfrak{C}_{\cF_0} \log\frac{2}{\delta}, \\ & \EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} \left \| \Theta^*(S_t,\cdot) - \Theta(S_t,\cdot) \right \|_1^2 \right ] \leq \xi_1^2 \frac{C_{\Theta^*}}{NT \kappa} \cdot \mathfrak{C}_{\cF_1} \log\frac{2}{\delta}.
\$ \end{assumption} We now illustrate that Assumption \ref{ass:iv-sl-res} can be realized via maximum likelihood estimation (MLE), with $\xi_0$ and $\xi_1$ replaced by proper quantities. Note that the estimation of $\Delta^*$ can be decomposed into the estimation of $\PP(A = a\given S=s,Z=z)$ for all $z\in \cZ$, which can also be carried out via MLE. This implies that estimating $\Delta^*$ is similar to estimating $\Theta^*$. Therefore, for simplicity of presentation, we only show how to estimate $\Theta^*$ so that Assumption \ref{ass:iv-sl-res} holds. By maximum likelihood, we construct the loss function $\hat L_1$ and the estimator $\hat \Theta$ as follows, \$ \hat L_1(\Theta) = - \hat \EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} \log \Theta(S_t, Z_t) \right] = - \frac{1}{NT} \sum_{i\in[N]} \sum_{t=0}^{T-1} \log \Theta(S_t^i, Z_t^i), ~~~ \hat \Theta \in \argmin_{\Theta\in \cF_1} \hat L_1(\Theta), \$ where, recalling our notation, $\hat \EE[\cdot]$ denotes the empirical measure generated by the offline data $\cD$. In addition, we assume that $\cF_1$ is a parametric class of the form $\cF_1 = \{\Theta_\theta\colon \theta\in \RR^d, \|\theta\|_2 \leq \theta_{\max}\}$. We introduce the following result. \begin{theorem}\label{thm:iv-param-theta} Suppose $\cF_1 = \{\Theta_\theta\colon \theta\in \RR^d \text{ and } \|\theta\|_2 \leq \theta_{\max}\}$, and \$ \alpha_1 = c \cdot \frac{C_{\Theta^*}}{NT \kappa} \cdot d \log \frac{\theta_{\max}}{\delta} \log(NT), \$ where $c / (N^2 T^2) \cdot \log (NT) \leq \delta \leq 1$. Then, under Assumptions \ref{ass:iv-common}, \ref{ass:spaces}\ref{ass:upper-bound-delta}, \ref{ass:spaces}\ref{ass:bounded-mle}, and \ref{ass:ergodic}, it holds with probability at least $1-\delta$ that $\Theta^* \in \textsf{conf}^1_{\alpha_1}$. Further, with probability at least $1 - \delta$, it holds for any $\Theta \in \textsf{conf}^1_{\alpha_1}$ that \$ \sqrt{\EE\left[ \|\Theta(S,\cdot) - \Theta^*(S,\cdot)\|_1^2 \right]} \leq c\sqrt{ \frac{C_{\Theta^*}}{NT \kappa} \cdot d\log \frac{\theta_{\max}}{\delta}}. \$ \end{theorem} \begin{proof} See \S\ref{prf:thm:iv-param-theta} of the Supplementary Material for a detailed proof. \end{proof} In light of Theorem \ref{thm:iv-param-theta}, we assume that Assumption \ref{ass:iv-sl-res} holds throughout this section. \subsection{Theoretical Results for the VF-based Pessimistic Method}\label{sec:vf-theory} We first impose the following assumption, which requires that $V^\pi$ is realizable in $\cV$ for any policy $\pi$, and that $w^{\pi^*}$ is realizable in $\cW$ only for the optimal policy $\pi^*$. \begin{assumption} \label{ass:iv-vf-realizable} We have $V^\pi \in \cV$ for any $\pi\in \Pi$ and $w^{\pi^*} \in \cW$. Further, we have $-w\in \cW$ for any $w \in \cW$. \end{assumption} In the following lemma, we show that with a proper choice of $\alpha_\textsf{vf}$, we have $V^\pi \in \textsf{conf}^\textsf{vf}_{\alpha_\textsf{vf}}(\Delta^*, \Theta^*, \pi)$ with high probability. \begin{lemma} \label{lemma:iv-v-pi-in-conf} Suppose \$ \alpha_\textsf{vf} = c\cdot \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \sqrt{\frac{\mathfrak{C}_{\cW,\cV,\Pi}}{NT\kappa} \cdot \log\frac{1}{\delta} \log(NT) } \$ and $c / (NT)^2 \leq \delta \leq 1$. Then, under Assumptions \ref{ass:spaces} and \ref{ass:iv-vf-realizable}, with probability at least $1 - \delta$, it holds for any $\pi\in \Pi$ that $V^\pi \in \textsf{conf}^\textsf{vf}_{\alpha_\textsf{vf}}(\Delta^*, \Theta^*, \pi)$.
\end{lemma} \begin{proof} See \S\ref{prf:lemma:iv-v-pi-in-conf} of the Supplementary Material for a detailed proof. \end{proof} In the following lemma, we show that for any $v\in \cup_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}} \textsf{conf}^\textsf{vf}_{\alpha_\textsf{vf}}(\Delta, \Theta, \pi)$, we can upper bound the risk $\max_{g\in \cW} \Phi^\pi_\textsf{vf}(v,g;\Delta^*, \Theta^*)$, which in turn bounds the suboptimality of the estimated policy. \begin{lemma} \label{lemma:iv-v-in-conf-good} Let $(\alpha_0, \alpha_1, \alpha_\textsf{vf})$ be those defined in Assumption \ref{ass:iv-sl-res} and Lemma \ref{lemma:iv-v-pi-in-conf}, and let $c / (NT)^2 \leq \delta \leq 1$. Then, under Assumptions \ref{ass:iv-common}--\ref{ass:iv-vf-realizable}, with probability at least $1 - \delta$, it holds for any policy $\pi\in \Pi$ and $v\in \cup_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}} \textsf{conf}^\textsf{vf}_{\alpha_\textsf{vf}}(\Delta, \Theta, \pi)$ that \$ \max_{g\in \cW} \Phi^\pi_\textsf{vf}(v,g;\Delta^*, \Theta^*) \leq c\cdot \frac{C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) L_\Pi \sqrt{\frac{1}{NT\kappa}\cdot \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi} \cdot \log\frac{1}{\delta} \log(NT)}. \$ \end{lemma} \begin{proof} See \S\ref{prf:lemma:iv-v-in-conf-good} of the Supplementary Material for a detailed proof. \end{proof} Equipped with the above results, we introduce the following theorem, which characterizes the suboptimality of the learned policy $\hat \pi_\textsf{vf}$ constructed in \eqref{eq:iv-vf-hat-pi}. \begin{theorem} \label{thm:iv-vf} Suppose $c / (NT)^2 \leq \delta \leq 1$. Under Assumptions \ref{ass:iv-common}--\ref{ass:iv-vf-realizable}, it holds with probability at least $1 - \delta$ that \$ \textsf{SubOpt}(\hat \pi_\textsf{vf}) \leq c\cdot \frac{ C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) L_\Pi \sqrt{\frac{1}{NT \kappa}\cdot \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi} \cdot \log\frac{1}{\delta} \log(NT)}. \$ \end{theorem} \begin{proof}[Proof Sketch] In this proof sketch, we assume full knowledge of $\Delta^*$ and $\Theta^*$. By the definition of $J(\pi)$ in \eqref{eq:iv-val-func}, we have \$ J(\pi^*) - J(\hat \pi_\textsf{vf}) & = (1-\gamma) \EE_{S_0\sim \nu}\left [V^{\pi^*}(S_0) - V^{\hat \pi_\textsf{vf}}(S_0)\right] \\ & \leq (1-\gamma) \EE_{S_0\sim \nu} \left [V^{\pi^*}(S_0) \right] - \min_{v \in \textsf{conf}_{\alpha_\textsf{vf}}^\textsf{vf}(\Delta^*, \Theta^*, \hat \pi_\textsf{vf})} (1-\gamma) \EE_{S_0\sim \nu} \left [v(S_0) \right] \\ & \leq (1-\gamma) \EE_{S_0\sim \nu} \left [V^{\pi^*}(S_0) \right] - \min_{v \in \textsf{conf}_{\alpha_\textsf{vf}}^\textsf{vf}(\Delta^*, \Theta^*, \pi^*)} (1-\gamma) \EE_{S_0\sim \nu} \left [v(S_0) \right] \\ & \leq (1-\gamma) \cdot \max_{v \in \textsf{conf}_{\alpha_\textsf{vf}}^\textsf{vf}(\Delta^*, \Theta^*, \pi^*)} \left | \EE_{S_0\sim \nu} \left [V^{\pi^*}(S_0) - v(S_0) \right] \right |, \$ where in the first inequality we use Lemma \ref{lemma:iv-v-pi-in-conf}, namely that $V^{\hat \pi_\textsf{vf}} \in \textsf{conf}_{\alpha_\textsf{vf}}^\textsf{vf}(\Delta^*, \Theta^*, \hat \pi_\textsf{vf})$ with high probability; and in the second inequality we use the optimality of $\hat \pi_\textsf{vf}$.
Meanwhile, by Lemmas \ref{lemma:iv-mis-j} and \ref{lemma:iv-est-eq}, we have the following decomposition, \#\label{eq:iv-vf-pp2-scketch} & (1-\gamma) \EE_{S_0\sim \nu} \left [ V^{\pi^*}(S_0) \right] = J(\pi^*) = \EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} w^{\pi^*}(S_t) \frac{Z_t^\top A_t \pi^*(A_t\given S_t)}{\Delta^*(S_t,A_t) \Theta^*(S_t,Z_t)} R_t \right], \\ & (1-\gamma) \EE_{S_0\sim \nu} \left [ v(S_0) \right] = \EE\left[ \frac{1}{T} \sum_{t = 0}^{T-1} w^{\pi^*}(S_t) \frac{Z_t^\top A_t \pi^*(A_t\given S_t)}{\Delta^*(S_t,A_t) \Theta^*(S_t,Z_t)} \left ( v(S_t) - \gamma v(S_{t+1}) \right ) \right]. \# Now, by plugging in \eqref{eq:iv-vf-pp2-scketch}, we have \#\label{eq:OPE error} & J(\pi^*) - J(\hat \pi_\textsf{vf}) \leq \max_{v \in \textsf{conf}_{\alpha_\textsf{vf}}^\textsf{vf}(\Delta^*, \Theta^*, \pi^*)} \left | \Phi_\textsf{vf}^{\pi^*}(v, w^{\pi^*}; \Delta^*, \Theta^*) \right |. \# We can then upper bound the above suboptimality by Lemma \ref{lemma:iv-v-in-conf-good}, which concludes the proof of the theorem. See \S\ref{prf:thm:iv-vf} of the Supplementary Material for a detailed proof. \end{proof} In Theorem \ref{thm:iv-vf}, we impose data coverage and realizability assumptions as in Assumption \ref{ass:iv-vf-realizable}, which only require that the offline data covers the trajectory generated by the optimal policy $\pi^*$ and that $V^\pi$ is realizable in $\cV$ for any $\pi$. Our upper bound on the suboptimality of the estimated policy indicates that the regret of finding an optimal policy converges to $0$ as long as the number of trajectories or the number of decision points on each trajectory goes to infinity. \subsection{Theoretical Results for the MIS-based Pessimistic Method}\label{sec:mis-theory} We first impose the following assumption, which requires that $w^\pi$ is realizable in $\cW$ for any policy $\pi$, and that $V^{\pi^*}$ is realizable in $\cV$ only for the optimal policy $\pi^*$. \begin{assumption} \label{ass:iv-mis-realizable} We have $w^\pi \in \cW$ for any $\pi\in \Pi$ and $V^{\pi^*} \in \cV$. Further, we have $-v\in \cV$ for any $v \in \cV$. \end{assumption} In the following lemma, we show that with a proper choice of $\alpha_\textsf{mis}$, we have $w^\pi \in \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta^*, \Theta^*, \pi)$ with high probability. \begin{lemma} \label{lemma:iv-w-pi-in-conf} Suppose \$ \alpha_\textsf{mis} = c\cdot \frac{C_{\Delta^*} C_{\Theta^*} C_*}{1-\gamma} \sqrt{\frac{1}{NT \kappa} \mathfrak{C}_{\cV,\cW,\Pi}\log\frac{1}{\delta} \log(NT)} \$ and $c / (NT)^2 \leq \delta \leq 1$. Then, under Assumptions \ref{ass:spaces} and \ref{ass:iv-mis-realizable}, with probability at least $1 - \delta$, it holds for any $\pi \in \Pi$ that $w^\pi \in \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta^*, \Theta^*, \pi)$. \end{lemma} \begin{proof} See \S\ref{prf:lemma:iv-w-pi-in-conf} of the Supplementary Material for a detailed proof. \end{proof} In the following lemma, we show that for any $w \in \cup_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}} \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta, \Theta, \pi)$, we can upper bound the risk $\max_{f\in \cV} \Phi^\pi_\textsf{mis}(w,f;\Delta^*, \Theta^*)$. \begin{lemma} \label{lemma:iv-w-in-conf-good} Let $(\alpha_0, \alpha_1, \alpha_\textsf{mis})$ be those defined in Assumption \ref{ass:iv-sl-res} and Lemma \ref{lemma:iv-w-pi-in-conf}, and let $c / (NT)^2 \leq \delta \leq 1$.
Then, under Assumptions \ref{ass:iv-common}--\ref{ass:iv-sl-res} and \ref{ass:iv-mis-realizable}, with probability at least $1 - \delta$, it holds for any $\pi\in \Pi$ and $w \in \cup_{(\Delta, \Theta) \in \textsf{conf}^0_{\alpha_0} \times \textsf{conf}^1_{\alpha_1}} \textsf{conf}^\textsf{mis}_{\alpha_\textsf{mis}}(\Delta, \Theta, \pi)$ that \$ \max_{f\in \cV} \Phi^\pi_\textsf{mis}(w,f;\Delta^*, \Theta^*) \leq c\cdot \frac{ C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) \sqrt{\frac{1}{NT \kappa}\cdot \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi} \cdot \log\frac{1}{\delta}\log(NT) }. \$ \end{lemma} \begin{proof} See \S\ref{prf:lemma:iv-w-in-conf-good} of the Supplementary Material for a detailed proof. \end{proof} Equipped with the above results, we introduce the following theorem, which characterizes the suboptimality of the learned policy $\hat \pi_\textsf{mis}$ constructed in \eqref{eq:iv-mis-hat-pi}. \begin{theorem} \label{thm:iv-mis} Suppose $c / (NT)^2 \leq \delta \leq 1$. Under Assumptions \ref{ass:iv-common}--\ref{ass:iv-sl-res} and \ref{ass:iv-mis-realizable}, it holds with probability at least $1 - \delta$ that \$ \textsf{SubOpt}(\hat \pi_\textsf{mis}) \leq c\cdot \frac{ C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) \sqrt{\frac{1}{NT \kappa}\cdot \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi} \cdot \log\frac{1}{\delta}\log(NT)}. \$ \end{theorem} \begin{proof} See \S\ref{prf:thm:iv-mis} of the Supplementary Material for a detailed proof. \end{proof} In Theorem \ref{thm:iv-mis}, we impose data coverage and realizability assumptions as in Assumption \ref{ass:iv-mis-realizable}, which only require that $V^{\pi^*}$ is realizable in $\cV$ and that the offline data covers the trajectory generated by the policy $\pi$ for any $\pi\in \Pi$. \subsection{Theoretical Results for the DR-based Pessimistic Method}\label{sec:dr-theory} In this subsection, we study the theoretical properties of our DR-based pessimistic method for the confounded MDP. \begin{theorem}\label{thm:iv-dr} Let $(\alpha_0, \alpha_1, \alpha_\textsf{mis}, \alpha_\textsf{vf})$ be those defined in Assumption \ref{ass:iv-sl-res} and Lemmas \ref{lemma:iv-v-pi-in-conf} and \ref{lemma:iv-w-pi-in-conf}, and suppose that one of Assumptions \ref{ass:iv-vf-realizable} and \ref{ass:iv-mis-realizable} holds. Then, under Assumptions \ref{ass:iv-common}--\ref{ass:iv-sl-res}, it holds with probability at least $1 - \delta$ for any $c / (NT)^2 \leq \delta \leq 1$ that \$ \textsf{SubOpt}(\hat \pi_\textsf{dr}) \leq c\cdot \frac{C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) \sqrt{\frac{1}{NT \kappa} \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi} \log\frac{1}{\delta}\log(NT) }, \$ where $\hat \pi_\textsf{dr}$ is defined in \eqref{eq:iv-dr}. \end{theorem} \begin{proof} See \S\ref{prf:thm:iv-dr} of the Supplementary Material for a detailed proof. \end{proof} Theorem \ref{thm:iv-dr} shows that $\hat \pi_\textsf{dr}$ is a doubly robust estimator of the optimal policy, in the sense that either Assumption \ref{ass:iv-vf-realizable} or Assumption \ref{ass:iv-mis-realizable} ensures the convergence of $\hat \pi_\textsf{dr}$. Our results so far hinge on the data coverage and realizability assumptions. We now consider the case where such assumptions are violated. We define \#\label{eq:v-tilde-def} \tilde v^\pi \in \argmin_{v\in \cV} \max_{w\in \cW} \Phi_\textsf{vf}^\pi(v,w; \Delta^*, \Theta^*), \qquad \tilde w^\pi\in \argmin_{w\in \cW} \max_{v\in \cV} \Phi_\textsf{mis}^\pi(w, v; \Delta^*, \Theta^*) \# and introduce the following assumption.
\begin{assumption}[Model Misspecification]\label{ass:model-spec} The following statements hold. \begin{enumerate}[label=(\alph*)] \item\label{ass:vf-spec} We have $\|V^\pi - \tilde v^\pi \|_\infty \leq \varepsilon^\cV_\textsf{vf}$ for any $\pi\in \Pi$ and $\|w^{\pi^*} - \tilde w^{\pi^*}\|_\infty \leq \varepsilon^\cW_\textsf{vf}$. \item\label{ass:mis-spec} We have $\|w^\pi - \tilde w^\pi \|_\infty \leq \varepsilon^\cW_\textsf{mis}$ for any $\pi\in \Pi$ and $\|V^{\pi^*} - \tilde v^{\pi^*}\|_\infty \leq \varepsilon^\cV_\textsf{mis}$. \end{enumerate} \end{assumption} Though Assumption \ref{ass:model-spec} requires that \ref{ass:vf-spec} and \ref{ass:mis-spec} hold simultaneously, we remark that the assumptions previously imposed for VF-, MIS-, and DR-based pessimism can be recovered from it. Specifically, Assumptions \ref{ass:iv-vf-realizable} and \ref{ass:iv-mis-realizable} can be recovered by taking $(\varepsilon^\cV_\textsf{vf}, \varepsilon^\cW_\textsf{vf}, \varepsilon^\cV_\textsf{mis}, \varepsilon^\cW_\textsf{mis}) = (0, 0, \infty, \infty)$ and $(\varepsilon^\cV_\textsf{vf}, \varepsilon^\cW_\textsf{vf}, \varepsilon^\cV_\textsf{mis}, \varepsilon^\cW_\textsf{mis}) = (\infty, \infty, 0, 0)$, respectively, in Assumption \ref{ass:model-spec}. Similarly, the data coverage and realizability assumptions in Theorem \ref{thm:iv-dr} can also be recovered by taking either $(\varepsilon^\cV_\textsf{vf}, \varepsilon^\cW_\textsf{vf}, \varepsilon^\cV_\textsf{mis}, \varepsilon^\cW_\textsf{mis}) = (0, 0, \infty, \infty)$ or $(\varepsilon^\cV_\textsf{vf}, \varepsilon^\cW_\textsf{vf}, \varepsilon^\cV_\textsf{mis}, \varepsilon^\cW_\textsf{mis}) = (\infty, \infty, 0, 0)$. To simplify the notation, a misspecification level of $\infty$ indicates that the corresponding condition imposes no constraint. \begin{theorem}\label{thm:iv-dr-spec} Let $(\alpha_0, \alpha_1, \alpha_\textsf{mis}, \alpha_\textsf{vf})$ be those defined in Assumption \ref{ass:iv-sl-res}, Lemma \ref{lemma:iv-v-pi-in-conf}, and Lemma \ref{lemma:iv-w-pi-in-conf}. Then, under Assumptions \ref{ass:iv-common}--\ref{ass:iv-sl-res} and \ref{ass:model-spec}, it holds with probability at least $1 - \delta$ for any $c / (NT)^2 \leq \delta \leq 1$ that \$ \textsf{SubOpt}(\hat \pi_\textsf{dr}) & \leq c\cdot \frac{C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) \sqrt{\frac{1}{NT \kappa} \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi} \log\frac{NT }{\delta}} \\ & \qquad + 3 C_{\Delta^*} C_{\Theta^*} \min\left\{ C_* \varepsilon^\cV_\textsf{vf} + \varepsilon^\cW_\textsf{vf}/(1-\gamma), ~C_* \varepsilon^\cV_\textsf{mis} + \varepsilon^\cW_\textsf{mis}/(1-\gamma) \right \}, \$ where $\hat \pi_\textsf{dr}$ is defined in \eqref{eq:iv-dr}. \end{theorem} \begin{proof} See \S\ref{prf:thm:iv-dr-spec} of the Supplementary Material for a detailed proof. \end{proof} In Theorem \ref{thm:iv-dr-spec}, the first term on the right-hand side of the suboptimality upper bound corresponds to the suboptimality of the DR-based estimator in Theorem \ref{thm:iv-dr}, and the second term characterizes the additional bias induced by model misspecification. We remark that either $(\varepsilon^\cV_\textsf{vf}, \varepsilon^\cW_\textsf{vf}) = (0,0)$ or $(\varepsilon^\cV_\textsf{mis}, \varepsilon^\cW_\textsf{mis}) = (0,0)$ in Theorem \ref{thm:iv-dr-spec} ensures zero bias, which corresponds to Theorem \ref{thm:iv-dr}.
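To connect Theorem \ref{thm:iv-param-theta} with implementation, the sketch below is illustrative Python; the softmax parametrization of $\cF_1$, the feature dimension, and the use of \texttt{scipy.optimize.minimize} are our own choices rather than prescriptions of the theory. It computes the MLE $\hat\Theta$ and the likelihood-ratio membership test defining $\textsf{conf}^1_{\alpha_1}$ in \eqref{eq:mle-conf-set}.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

K, d_feat = 2, 3    # hypothetical sizes: |Z| and the feature dimension

def theta_probs(params, s_feat):
    """Softmax model Theta_theta(z | s): our illustrative parametric class F_1."""
    logits = params.reshape(K, d_feat) @ s_feat
    e = np.exp(logits - logits.max())
    return e / e.sum()

def neg_loglik(params, data):
    """The loss hat L_1: negative log-likelihood pooled over all (s, z) pairs."""
    return -np.mean([np.log(theta_probs(params, s)[z]) for s, z in data])

def in_conf_set(params, params_hat, data, alpha1):
    """Likelihood-ratio membership test for conf^1_{alpha_1}."""
    return neg_loglik(params, data) - neg_loglik(params_hat, data) <= alpha1

# Usage sketch: params_hat = minimize(neg_loglik, np.zeros(K * d_feat),
#                                     args=(data,)).x
\end{verbatim}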
\section{Dual Formulation}\label{sec: dual form} To mitigate the computational burden that the confidence sets impose on estimating the optimal in-class policy, we propose a dual formulation of the aforementioned pessimistic methods. For clarity of illustration, we only consider the dual formulation of the VF-based pessimistic method proposed in \S\ref{sec:iv-vf}. Similar formulations for the MIS-based and DR-based methods can be derived accordingly. For ease of presentation, we assume that there exists an oracle that gives us $\Delta^*$ and $\Theta^*$. Without such an oracle, we would only need two additional dual variables to account for the uncertainty induced by estimating $\Delta^*$ and $\Theta^*$. We consider the following dual form of \eqref{eq:iv-vf-hat-pi}, \#\label{eq:vf-dual} & \hat \pi_\textsf{vf}^\dagger = \argmax_{\pi\in \Pi} \max_{\lambda \geq 0} \min_{v \in \cV} ~(1-\gamma)\EE_{S \sim \nu}[v(S)] + \lambda \cdot \left( \hat M_\textsf{vf}^\pi(v) - \alpha_\textsf{vf} \right ), \\ & \text{s.t. } \hat M_\textsf{vf}^\pi(v) = \max_{g\in \cW} \hat \Phi_\textsf{vf}^\pi(v, g; \Delta^*, \Theta^*) - \max_{g\in \cW} \hat \Phi_\textsf{vf}^\pi(\hat v_{\Delta^*, \Theta^*}^\pi, g; \Delta^*, \Theta^*), \# where $\hat v_{\Delta^*, \Theta^*}^\pi = \argmin_{v\in \cV}\max_{g\in \cW} \hat \Phi_\textsf{vf}^\pi(v, g; \Delta^*, \Theta^*)$ and $\lambda$ is the dual variable corresponding to the constraint $v\in \textsf{conf}^\textsf{vf}_{\alpha_\textsf{vf}}(\Delta^*, \Theta^*, \pi)$. In comparison to the constrained optimization problem in \eqref{eq:iv-vf-hat-pi}, the problem in \eqref{eq:vf-dual} can be solved efficiently using gradient-based methods. In the following theorem, we characterize the suboptimality of $\hat \pi_\textsf{vf}^\dagger$. \begin{theorem}\label{thm:dual} Suppose that $\cV$ is convex, $\alpha_\textsf{vf}$ is defined as in Lemma \ref{lemma:iv-v-in-conf-good}, and $c / (NT)^2 \leq \delta \leq 1$. Under Assumptions \ref{ass:iv-common}--\ref{ass:iv-vf-realizable}, it holds with probability at least $1 -\delta$ that \$ \textsf{SubOpt}(\hat \pi_\textsf{vf}^\dagger) \leq c\cdot \frac{C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) L_\Pi \sqrt{\frac{1}{NT\kappa}\cdot \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi} \cdot \log\frac{1}{\delta} \log(NT)}. \$ \end{theorem} \begin{proof} For notational convenience, we denote by $(\hat \pi, \hat \lambda, \hat v)$ the solution of \eqref{eq:vf-dual}. Note that $(1-\gamma) \EE_{S_0\sim \nu}[v(S_0)]$ is a lower bounded real-valued convex functional with respect to $v$, and $\hat M^{\pi^*}_\textsf{vf}$ is also a convex functional. Meanwhile, we have $\hat M^{\pi^*}_\textsf{vf}(\hat v^{\pi^*}_{\Delta^*, \Theta^*}) = 0$. Thus, by Theorem 1 of \S{8.6} in \cite{luenberger1997optimization}, strong duality holds, i.e., \#\label{eq:wefuir8-pp} & \max_{\lambda\geq 0} \min_{v\in \cV} \left\{ (1-\gamma) \EE_{S_0\sim \nu}\left [v(S_0)\right] + \lambda \cdot \left(\hat M_\textsf{vf}^{\pi^*}(v) - \alpha_\textsf{vf} \right ) \right\} \\ & \qquad = \min_{v\in \cV} \max_{\lambda\geq 0} \left\{ (1-\gamma) \EE_{S_0\sim \nu}\left [v(S_0)\right] + \lambda \cdot \left(\hat M_\textsf{vf}^{\pi^*}(v) - \alpha_\textsf{vf} \right ) \right\}. \# By Lemma \ref{lemma:iv-v-pi-in-conf}, it holds with probability at least $1 - \delta$ that $\hat M_\textsf{vf}^{\hat \pi} (V^{\hat \pi}) \leq \alpha_\textsf{vf}$.
Thus, with probability at least $1 - \delta$, we have \#\label{eq:fuiewhr} & J(\pi^*) - J(\hat \pi) \\ & \quad = (1-\gamma) \EE_{S_0\sim \nu}\left [V^{\pi^*}(S_0) - V^{\hat \pi}(S_0)\right] \\ & \quad \leq (1-\gamma) \EE_{S_0\sim \nu}\left [V^{\pi^*}(S_0)\right] - \left ( (1-\gamma) \EE_{S_0\sim \nu}\left [V^{\hat \pi}(S_0)\right] + \hat \lambda \cdot \left( \hat M_\textsf{vf}^{\hat \pi}(V^{\hat \pi}) - \alpha_\textsf{vf} \right ) \right ) \\ & \quad \leq (1-\gamma) \EE_{S_0\sim \nu}\left [V^{\pi^*}(S_0)\right] - \min_{v\in \cV} \left\{ (1-\gamma) \EE_{S_0\sim \nu}\left [v(S_0)\right] + \hat \lambda \cdot \left( \hat M_\textsf{vf}^{\hat \pi}(v) - \alpha_\textsf{vf} \right ) \right\} \\ & \quad = (1-\gamma) \EE_{S_0\sim \nu}\left [V^{\pi^*}(S_0)\right] - \max_{\pi\in \Pi} \max_{\lambda\geq 0} \min_{v\in \cV} \left\{ (1-\gamma) \EE_{S_0\sim \nu}\left [v(S_0)\right] + \lambda \cdot \left( \hat M_\textsf{vf}^{\pi}(v) - \alpha_\textsf{vf} \right ) \right\} \\ & \quad \leq (1-\gamma) \EE_{S_0\sim \nu}\left [V^{\pi^*}(S_0)\right] - \max_{\lambda\geq 0} \min_{v\in \cV} \left\{ (1-\gamma) \EE_{S_0\sim \nu}\left [v(S_0)\right] + \lambda \cdot \left( \hat M_\textsf{vf}^{\pi^*}(v) - \alpha_\textsf{vf} \right ) \right\}, \# where the first inequality uses $\hat \lambda \geq 0$ together with $\hat M_\textsf{vf}^{\hat \pi}(V^{\hat \pi}) \leq \alpha_\textsf{vf}$; the second inequality uses the fact that $V^{\hat \pi}\in \cV$; the third step holds by the definition of $\hat \pi$ and $\hat \lambda$; and the last inequality uses the fact that $\pi^* \in \Pi$. By combining \eqref{eq:wefuir8-pp} and \eqref{eq:fuiewhr}, we have \#\label{eq:erfuie} & J(\pi^*) - J(\hat \pi) \\ & \quad \leq (1-\gamma) \EE_{S_0\sim \nu}\left [V^{\pi^*}(S_0)\right] - \min_{v\in \cV} \max_{\lambda\geq 0} \left\{ (1-\gamma) \EE_{S_0\sim \nu}\left [v(S_0)\right] + \lambda \cdot \left( \hat M_\textsf{vf}^{\pi^*}(v) - \alpha_\textsf{vf} \right ) \right\} \\ & \quad \leq (1-\gamma) \EE_{S_0\sim \nu}\left [V^{\pi^*}(S_0)\right] - \min_{v\in \cV\colon \hat M_\textsf{vf}^{\pi^*}(v) \leq \alpha_\textsf{vf}} (1-\gamma) \EE_{S_0\sim \nu}\left [v(S_0)\right] \\ & \quad \leq \max_{v\in \cV\colon \hat M_\textsf{vf}^{\pi^*}(v) \leq \alpha_\textsf{vf}} \left | (1-\gamma) \EE_{S_0\sim \nu}\left [V^{\pi^*}(S_0) - v(S_0)\right] \right |. \# Note that by Lemmas \ref{lemma:iv-mis-j} and \ref{lemma:iv-est-eq}, we have \#\label{eq:eriufhwe} (1-\gamma) \EE_{S_0\sim \nu}\left [V^{\pi^*}(S_0) - v(S_0)\right] = \Phi_\textsf{vf}^{\pi^*}(v, w^{\pi^*}; \Delta^*, \Theta^*). \# By plugging \eqref{eq:eriufhwe} into \eqref{eq:erfuie}, we have \$ J(\pi^*) - J(\hat \pi) & \leq \max_{v\in \cV\colon \hat M_\textsf{vf}^{\pi^*}(v) \leq \alpha_\textsf{vf}} \left | \Phi_\textsf{vf}^{\pi^*}(v, w^{\pi^*}; \Delta^*, \Theta^*) \right | \\ & \leq c\cdot \frac{C_{\Delta^*}^2 C_{\Theta^*}^2 C_*}{1-\gamma} (\xi_0 + \xi_1) L_\Pi \sqrt{\frac{1}{NT\kappa}\cdot \mathfrak{C}_{\cF_0,\cF_1,\cW,\cV,\Pi} \cdot \log\frac{1}{\delta} \log(NT)}, \$ where in the last inequality we use Lemma \ref{lemma:iv-v-in-conf-good} together with the fact that $w^{\pi^*}\in \cW$. This concludes the proof. \end{proof} In Theorem \ref{thm:dual}, with the additional assumption that $\cV$ is convex, we show that a suboptimality bound similar to that of Theorem \ref{thm:iv-vf} holds for the VF-based method. Thus, to avoid the computational challenges induced by the confidence sets in \eqref{eq:iv-vf-hat-pi}, we only need to solve \eqref{eq:vf-dual} to obtain an optimal policy. We remark that similar dual formulations for the MIS-based and DR-based methods can be derived, along with their theoretical guarantees.
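For completeness, we sketch how a gradient-based solver for \eqref{eq:vf-dual} might look for a fixed policy $\pi$. The Python fragment below is illustrative only: the finite-difference gradient, the step sizes, and the reuse of the linear feature map from the earlier sketches are our own choices, and \texttt{M\_vf} stands in for a routine evaluating $\hat M^\pi_\textsf{vf}(v)$, assumed available.
\begin{verbatim}
import numpy as np

def phi(s):                              # same illustrative feature map as before
    return np.array([1.0, s, s ** 2])

def lagrangian(lam, th_v, alpha_vf, nu_samples, M_vf, gamma=0.9):
    """The inner objective of the dual form (eq:vf-dual) for fixed (lambda, v);
    M_vf is a callable returning hat M_vf^pi(v), assumed given."""
    init = (1 - gamma) * float(np.mean([phi(s0) @ th_v for s0 in nu_samples]))
    return init + lam * (M_vf(th_v) - alpha_vf)

def dual_step(lam, th_v, alpha_vf, nu_samples, M_vf, lr=0.05, eps=1e-4):
    """One projected primal-dual sweep (step sizes and the finite-difference
    gradient are schematic choices, not part of the theory)."""
    grad_v = np.zeros_like(th_v)
    for i in range(len(th_v)):           # finite-difference gradient in v
        e = np.zeros_like(th_v); e[i] = eps
        grad_v[i] = (lagrangian(lam, th_v + e, alpha_vf, nu_samples, M_vf)
                     - lagrangian(lam, th_v - e, alpha_vf, nu_samples, M_vf)) / (2 * eps)
    th_v = th_v - lr * grad_v            # inner minimization over v
    lam = max(0.0, lam + lr * (M_vf(th_v) - alpha_vf))  # dual ascent, lam >= 0
    return lam, th_v
\end{verbatim}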
\section{Identifiability}\label{sec: identification} In this section, we discuss interesting identifiability results implied by the proposed algorithms. Following convention, we say a parameter $\theta$ is identifiable if the map $\theta \mapsto \mathbb{P}_{\theta}$ is injective. By contrast, non-identifiability means that there exist two different parameters whose corresponding data distributions coincide. In the following, we use a tabular MDP as an example to illustrate the non-identifiability issue in RL, and we then discuss the identifiability required by our methods. \vskip5pt \noindent\textbf{Non-Identifiability in the Tabular MDP.} We consider a tabular MDP with states $\cS = \{s_1, s_2, \ldots, s_{|\cS|}\}$, where the behavior policy $b$ used to generate the offline data only covers the states $\{s_2, \ldots, s_{|\cS|}\}$. Let $J(\pi)$ denote the expected total reward of a policy $\pi$ in this tabular MDP. Since the offline data generated following $b$ never cover the state $s_1$, we cannot infer any information about the reward received at the state $s_1$. Thus, for any policy $\pi$ that visits the state $s_1$ with nonzero probability, we cannot identify the value $J(\pi)$ uniquely. Meanwhile, the state-value function $V^\pi\colon \cS\to \RR$ is not uniquely identifiable for any policy $\pi$ (even for $\pi^*$), since the value $V^\pi(s_1)$ is not identifiable. \vskip5pt \noindent\textbf{Identifiability Required by Our Methods.} In \S\ref{sec:iv-id} and \S\ref{sec:iv-pess}, we do not explicitly impose any identifiability assumptions, but certain identifiability assumptions are implied by our data coverage assumptions, as follows. For ease of presentation, we assume that there exists an oracle that gives us $\Delta^*$ and $\Theta^*$. \begin{itemize} \item VF-based pessimistic algorithm. As imposed in Assumption \ref{ass:iv-vf-realizable}, we require that $w^{\pi^*}$ is bounded and modelled correctly. Thus, the trajectory generated by the optimal policy $\pi^*$ is covered by the offline data, which implies that $J(\pi^*)$ is identifiable, but not necessarily $J(\pi)$ for $\pi \neq \pi^*$. This can also be seen from the min-max estimation procedure in \eqref{eq:v-pi-def}, which correctly upper bounds the policy evaluation error so that $J(\pi^*)$ is uniquely identified; see also the proof of Lemma \ref{lemma:iv-v-in-conf-good}. We remark that we do not require our data distribution to uniquely identify $V^\pi$ for any $\pi \in \Pi$, and the IV-aided Bellman equation in Lemma \ref{lemma:iv-vf-id} could have multiple fixed-point solutions. See \cite{chen2022well} for conditions under which uniqueness holds; note that the boundedness of $w^{\pi^*}$ does not imply that $V^{\pi^*}$ is uniquely identified. \item MIS-based pessimistic algorithm. As imposed in Assumption \ref{ass:iv-mis-realizable}, we require that $w^{\pi}$ is bounded and modelled correctly for any policy $\pi$. Thus, the trajectory generated by any policy $\pi$ is covered by the offline data, which implies that $J(\pi)$ is identifiable for any $\pi$. Similarly, we do not require our data distribution to uniquely identify or estimate $w^\pi$, as the MIS-based estimating equation defined in Lemma \ref{lemma:iv-est-eq} may have multiple fixed-point solutions. \item DR-based pessimistic algorithm. Since we require either Assumption \ref{ass:iv-vf-realizable} or Assumption \ref{ass:iv-mis-realizable} to hold, correspondingly either $J(\pi^*)$ is identifiable or $J(\pi)$ is identifiable for any $\pi$.
In either case, we do not require that $w^\pi$ and $V^\pi$ be uniquely identified by our data distribution. \end{itemize} \section{Conclusion} In this paper, we study offline RL in the face of unmeasured confounders. We focus on resolving the following two challenges: (i) the agent may be confounded by the unmeasured confounders; (ii) the offline data may not provide sufficient coverage. To resolve the first challenge, by employing IVs, we establish VF- and MIS-based identification results for the expected total reward in the confounded MDP. To resolve the second challenge, we employ pessimism for policy learning. Specifically, we propose VF- and MIS-based pessimistic policy estimators, which are constructed by maximizing the most conservative estimate of the expected total reward associated with the estimated value function and density ratio, respectively. Combining the two, we also propose a DR-based estimator. As for theoretical contributions, under mild coverage and realizability assumptions, we show that the suboptimalities of the proposed estimators are upper bounded by $O(\log(NT) (NT)^{-1/2})$. Further, we consider the case when the models are misspecified, i.e., when the previous coverage and realizability assumptions no longer hold; the resulting analysis provides a unified framework encompassing the aforementioned estimators.
{ "timestamp": "2022-09-20T02:21:54", "yymm": "2209", "arxiv_id": "2209.08666", "language": "en", "url": "https://arxiv.org/abs/2209.08666" }
\section*{\refname}} \newcommand{\pbref}[1]{\ref{#1} (\nameref*{#1})} \def\({\left(} \def\){\right)} \newcommand{\textnormal}{\textnormal} \newcommand{\displaystyle}{\displaystyle} \newcommand{\dsfrac}[2]{\displaystyle{\frac{#1}{#2}}} \newcommand{\textstyle{\bigoplus}}{\textstyle{\bigoplus}} \newcommand{\textstyle{\bigotimes}}{\textstyle{\bigotimes}} \newcommand{\textstyle{\bigcup}}{\textstyle{\bigcup}} \newcommand{\textstyle{\bigsqcup}}{\textstyle{\bigsqcup}} \newcommand{\textstyle{\bigcap}}{\textstyle{\bigcap}} \newcommand{\mc{S}}{\mc{S}} \newcommand{\mc{K}}{\mc{K}} \newcommand{\ddots}{\rotatebox[origin=t]{135}{$\cdots$}} \newcommand{\mathcal{S}}{\mathcal{S}} \newcommand{\mathcal{H}}{\mathcal{H}} \newcommand{\mathcal{V}}{\mathcal{V}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\ms}[1]{\mathscr{#1}} \newcommand{\mf}[1]{\mathfrak{#1}} \newcommand{\wh{\mc{U}}}{\wh{\mc{U}}} \newcommand{\wh}[1]{\widehat{#1}} \newcommand{\dwh}[1]{\wh{\rule{0ex}{1.3ex}\smash{\wh{\hfill{#1}\,}}}} \newcommand{\wt}[1]{\widetilde{#1}} \newcommand{\wht}[1]{\widehat{\widetilde{#1}}} \newcommand{\on}[1]{\operatorname{#1}} \newcommand{$\mathscr{U}$}{$\mathscr{U}$} \newcommand{$\mathscr{R}$}{$\mathscr{R}$} \newcommand{$\mathscr{UR}$}{$\mathscr{UR}$} \newcommand{$\mathscr{DR}$}{$\mathscr{DR}$} \newcommand{\mathbb{R}}{\mathbb{R}} \newcommand{\mathbb{C}}{\mathbb{C}} \newcommand{\mathbb{Z}}{\mathbb{Z}} \newcommand{\mathbb{K}}{\mathbb{K}} \newcommand{\mathbb{N}}{\mathbb{N}} \newcommand{\mathcal{P}}{\mathcal{P}} \newcommand{\abs}[1]{|#1|} \newcommand{\operatorname{d}}{\operatorname{d}} \newcommand{{\scriptstyle \mc{D}}}{{\scriptstyle \mc{D}}} \newcommand{\operatorname{tr}}{\operatorname{tr}} \newcommand{\operatorname{Im}}{\operatorname{Im}} \newcommand{\textit{i.e.}\ }{\textit{i.e.}\ } \newcommand{\textit{vs.}\ }{\textit{vs.}\ } \newcommand{\textit{e.g.}\ }{\textit{e.g.}\ } \newcommand{\textit{cf.}\ }{\textit{cf.}\ } \newcommand{\textit{etc}}{\textit{etc}} \newcommand{\textit{et al.}}{\textit{et al.}} \newcommand{\tn{span}}{\textnormal{span}} \newcommand{PDE}{PDE} \newcommand{\tn{U}}{\textnormal{U}} \newcommand{\tn{SU}}{\textnormal{SU}} \newcommand{\tn{GL}}{\textnormal{GL}} \newcommand{\tn{su}}{\textnormal{su}} \newcommand{Schr\"odinger}{Schr\"odinger} \newcommand{Liouville-von Neumann}{Liouville-von Neumann} \newcommand{Kochen-Specker}{Kochen-Specker} \newcommand{Leggett-Garg}{Leggett-Garg} \newcommand{\bra}[1]{\langle#1|} \newcommand{\ket}[1]{|#1\rangle} \newcommand{\kett}[1]{|\!\!|#1\rangle\!\!\rangle} \newcommand{\proj}[1]{\ket{#1}\bra{#1}} \newcommand{\braket}[2]{\langle#1|#2\rangle} \newcommand{\ketbra}[2]{|#1\rangle\langle#2|} \newcommand{\expectation}[1]{\langle#1\rangle} \newcommand{\tn{Herm}}{\textnormal{Herm}} \newcommand{\Sym}[1]{\textnormal{Sym}_{#1}} \newcommand{\meanvalue}[2]{\langle{#1}\rangle_{#2}} \newcommand{\tn{Prob}}{\textnormal{Prob}} \newcommand{\kjj}[3]{#1\!:\!#2,#3} \newcommand{\jk}[2]{#1,#2} \newcommand{\mf{j}}{\mf{j}} \newcommand{\pobs}[1]{\mathsf{#1}} \newcommand{\obs}[1]{\wh{\pobs{#1}}} \newcommand{\uop}[1]{\wh{\mathbf{#1}}} \newcommand{\weightU}[5]{\left[{#2}{}_{#3}\overset{#1}{\rightarrow}{#4}{}_{#5}\right]} \newcommand{\weightUT}[8]{\left[{#3}{}_{#4}\overset{#1}{\rightarrow}{#5}{}_{#6}\overset{#2}{\rightarrow}{#7}{}_{#8}\right]} \newcommand{\weight}[4]{\weightU{}{#1}{#2}{#3}{#4}} \newcommand{\weightT}[6]{\weightUT{}{}{#1}{#2}{#3}{#4}{#5}{#6}} \newcommand{\boxtimes}{\boxtimes} \newcommand{{\boxtimes_s}}{{\boxtimes_s}} \newcommand{\mathbf{(2\pi\hbar)}}{\mathbf{(2\pi\hbar)}} 
\newcommand{\fqproj}[1]{\Pi_{#1}}
\newcommand{\cqproj}[1]{\wh{\Pi}_{#1}}
\newcommand{\cproj}[1]{\wh{\Pi}^{\perp}_{#1}}
\newcommand{\V}[1]{\mathbb{V}_{#1}}
\newcommand{\F}[1]{\mathcal{F}_{#1}}
\newcommand{\nD}[1]{|{#1}|}
\newcommand{\vbundle}[4]{{#1}\to {#2} \stackrel{\pi_{#3}}{\to} {#4}}
\newcommand{\vbundlex}[1]{\vbundle{V_{#1}}{E_{#1}}{#1}{M_{#1}}}
\newcommand{\intl}[1]{\int\limits_{#1}}
\newcommand{\moyalBracket}[1]{\{\mskip-5mu\{#1\}\mskip-5mu\}}
\newcommand{\quot}[1]{``#1''}
\def\sref #1{\S\ref{#1}}
\newcommand{\image}[3]{
\begin{center}
\begin{figure}[!ht]
\includegraphics[width=#2\textwidth]{#1}
\caption{\small{\label{#1}#3}}
\end{figure}
\end{center}
\vspace{-0.40in}
}
\theoremstyle{definition}
\newtheorem{assumption}{Assumption}
\newtheorem{definition}{Definition}
\newtheorem{principle}{Principle}
\newtheorem{keyobservation}{Key Observation}
\newtheorem{qmProblem}{Problem}
\newtheorem{mwiProblem}{Problem}
\newtheorem{mstiAnswer}{MSTI Answer}
\newtheorem{tentativePostulate}{Tentative Postulate}
\newtheorem{thumbrule}{Rule of Thumb}
\theoremstyle{remark}
\newtheorem{remark}{Remark}
\title{COUNTING 3D-SPACES\texorpdfstring{\\}{ }Classicality and probability in standard and many-worlds quantum mechanics\texorpdfstring{\\}{ }from quantum-gravitational background-freedom}
\author{Ovidiu Cristinel Stoica}
\affiliation{
Dept. of Theoretical Physics, NIPNE---HH, Bucharest, Romania.\\
Email: \href{mailto:cristi.stoica@theory.nipne.ro}{cristi.stoica@theory.nipne.ro}, \href{mailto:holotronix@gmail.com}{holotronix@gmail.com}
}
\date{\today}
\begin{abstract}
I explain that background freedom in quantum gravity automatically leads to a dissociation of the quantum state into states having a classical 3d-space. That is, interference is not completely well-defined for states with different 3d-space geometries, even if their linear combination is.
The dissociation into 3d-space geometries still allows for interference at small scales, but precludes it at macro scales. It grants the possibility of classical-looking macroscopic objects, including measuring devices. Counting the 3d-space geometries automatically gives the Born rule. But the wavefunction collapse turns out to be even more ad-hoc. Fortunately, the dissociation entails a kind of absolute decoherence, making the wavefunction collapse unnecessary. This naturally leads to a new version of the many-worlds interpretation, while solving its major problems: 1) the classical-3d-space states form an absolute preferred basis, 2) at any instant, the resulting branches look like classical worlds, with objects in the 3d-space, 3) the 3d-space geometries converge at the Big-Bang, favoring branching towards the future, 4) macro-branches stop interfering, even though micro-branches can interfere, 5) the coefficients $\Psi[\gamma,\phi]$ become real by absorbing the complex phases in the global U(1) gauge, 6) the ontology is a state vector uniquely dissociable into many gauged classical-3d-space states, each of them counting as a world by having local beables (the classical fields), 7) the density of the classical-3d-space states automatically obeys the Born rule. \end{abstract} \keywords{Everett's many-worlds interpretation; Born rule; quantum gravity; background-independence; many-spacetimes interpretation. \maketitle \section{Introduction} \label{s:intro} I show that background free approaches to quantum gravity prevent most quantum state vectors from having physically meaningful superpositions. Interference effects require a way to relate the positions in space among different state vectors, but background freedom limits this possibility. Linear combinations exist mathematically, but interference effects are suppressed in most situations. This leads to a new explanation of the emergence of classicality at the macro level, and to a natural derivation of the Born rule by counting states with definite classical $3$d-space. The resulting approach to understand quantum mechanics works less naturally with the wavefunction collapse, but very well with the many-worlds interpretation, solving some of its main problems. \image{MSTIQM.pdf}{0.45}{\textbf{Wavefunctional dissociation due to quantum-gravitational background freedom.} The micro-states (in green) are $3$d-space states. They are very similar at the Big-Bang, then background freedom makes them dissociate and form a branching structure like in the many-worlds interpretation. The dissociation is reversible at micro scales, allowing interference, but it becomes irreversible when it manifests at macro scales. The branching structure (in yellow) corresponds to the macro-states (in blue). Counting the $3$d-space states for each macro-state or branch gives the Born rule.} In Sec. \sref{s:quantum-gravity} I sketch the generic features of wavefunctional formulations of background-free quantum gravity. This leads to the notion of classical $3$d-space states, having a definite classical $3$d-space (or other structure assumed to be more fundamental than the $3$d manifold). In Sec. \sref{s:dissociation} I explain how background freedom makes the state vector dissociate into classical $3$d-space states, by limiting their ability to interfere. In Sec. \sref{s:probabilities} I show how counting the $3$d-space states into which the state vector dissociates gives the Born rule. 
Each $3$d-space state is either absent from the wavefunctional, or it appears in it with equal amplitude but varying density (see Fig. \ref{3dspace-counting.pdf}). The density is real, since the complex phases are absorbed into the gauge of the classical fields defining the $3$d-space states. In Sec. \sref{s:collapse-or-many-worlds} I argue that the $3$d-space states approach works less well with the collapse postulate, but it works naturally with the many-worlds interpretation, resulting in a version of it named here the \emph{many-spacetimes interpretation} of quantum mechanics. In Sec. \sref{s:mwi} I explain how the many-spacetimes interpretation solves some of the main problems of the many-worlds interpretation (Fig. \ref{MSTIQM.pdf}). These include the existence of a preferred basis, the emergence of quasi-classical macro worlds, the existence of familiar, classical-looking objects in the $3$d-space, the time-asymmetry of the branching structure, probabilities by counting states, the appearance of complex numbers in quantum mechanics, and the ontology, including the local beables, which justify counting each $3$d-space state as a world. Sec. \sref{s:discussion} concludes the article with a discussion.
\section{3d-space states in quantum gravity}
\label{s:quantum-gravity}
\subsection{Classical 3d-space states}
\label{s:3d-space}
We do not yet have a final theory of quantum gravity, much less one that includes the other fields. But I will assume that such a theory is possible. Many of the currently known approaches to quantum gravity admit wavefunctional formulations. The Wheeler-de Witt equation
\begin{equation}
\label{eq:WdW}
\obs{H}\wt{\Psi}=0
\end{equation}
involves a wavefunctional $\wt{\Psi}=\wt{\Psi}[\gamma_{ab}]$ on the space $\textnormal{Riem}(\Sigma)$ of all possible Riemannian geometries $(\Sigma,\gamma_{ab})$, where $\gamma_{ab}$ is the intrinsic metric tensor on a three-dimensional manifold $\Sigma$, which is a time-dependent spacelike $3$d-slice of the spacetime manifold $M=\Sigma\times\mathbb{R}$. Equation \eqref{eq:WdW} was obtained \cite{Dewitt1967QuantumTheoryOfGravityI_TheCanonicalTheory} by quantizing the Hamiltonian formulation of classical general relativity by Arnowitt, Deser, and Misner (ADM) \cite{ADM1962TheDynamicsOfGeneralRelativity}. The quantization replaces the classical $3$d metric $\gamma_{ab}$ and its conjugate momentum $\pi_{\gamma}^{cd}$ by operators,
\begin{equation}
\label{eq:WdW-hp}
\begin{cases}
\wh{\gamma}_{ab}(\boldsymbol{x})\Psi[\gamma_{ab}]=\gamma_{ab}(\boldsymbol{x})\Psi[\gamma_{ab}],\\
\wh{\pi}_{\gamma}^{cd}(\boldsymbol{x})\Psi[\gamma_{ab}]=\dsfrac{\hbar}{i}\dsfrac{\delta\Psi[\gamma_{ab}]}{\delta \gamma_{cd}(\boldsymbol{x})},\\
\end{cases}
\end{equation}
subject to the canonical commutation relations
\begin{equation}
\label{eq:WdW-CCR}
\begin{cases}
\left[\wh{\gamma}_{ab}(\boldsymbol{x}),\wh{\pi}_{\gamma}^{cd}(\boldsymbol{y})\right]=i\hbar\delta^c_{(a}\delta^d_{b)}\delta(\boldsymbol{x},\boldsymbol{y}),\\
\left[\wh{\gamma}_{ab}(\boldsymbol{x}),\wh{\gamma}_{cd}(\boldsymbol{y})\right]=\left[\wh{\pi}_{\gamma}^{ab}(\boldsymbol{x}),\wh{\pi}_{\gamma}^{cd}(\boldsymbol{y})\right]=0,\\
\end{cases}
\end{equation}
where $\boldsymbol{x},\boldsymbol{y}\in\Sigma$ and $\delta/\delta \gamma_{cd}(\boldsymbol{x})$ is the functional derivative. The Wheeler-de Witt equation is a constraint equation, not an evolution equation, despite de Witt initially calling it the \emph{Einstein-{Schr\"odinger} equation}.
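To make the content of eq. \eqref{eq:WdW} more concrete, it may help to recall the standard minisuperspace truncation (an illustration only; the potential $U$ below is schematic and not part of the general formalism). For a homogeneous and isotropic universe with scale factor $a=e^{\alpha}$ and a single homogeneous scalar field $\phi$, all but two degrees of freedom are frozen, and, up to factor-ordering ambiguities and constant rescalings, eq. \eqref{eq:WdW} reduces to a Klein-Gordon-like equation
\begin{equation*}
\left(\frac{\partial^2}{\partial\alpha^2}-\frac{\partial^2}{\partial\phi^2}+U(\alpha,\phi)\right)\psi(\alpha,\phi)=0,
\end{equation*}
where $U(\alpha,\phi)$ collects the spatial curvature and potential terms. The wavefunctional $\wt{\Psi}[\gamma_{ab}]$ reduces here to an ordinary function on a two-dimensional configuration space, and the constraint character of eq. \eqref{eq:WdW} is manifest: there is no external time parameter, only an equation on the space of $3$d-geometries and matter configurations.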
The Wheeler-de Witt equation is complemented by three other constraint equations that factor out the space diffeomorphisms. The wavefunctional $\wt{\Psi}$ is a timeless solution. A proposal to recover a dynamical solution, made by Page and Wootters \cite{PageWootters1983EvolutionWithoutEvolution}, consists of interpreting it as a quantum system $\ket{\psi(\tau)}$ entangled with a clock $\ket{\tau}$, $\wt{\Psi}=\int_\mathbb{R}\ket{\tau}\ket{\psi(\tau)}\,d\tau$. This, and other proposals, were assessed critically in \cite{Isham1993CanonicalQuantumGravityAndTheProblemOfTime,Kuchavr2011TimeAndInterpretationsOfQuantumGravity}. According to Page and Wootters, we can consider that the state of the universe at time $t$ is represented by the vector $\Psi(t):=\ket{t}\ket{\psi(t)}$ (a toy finite-dimensional illustration of this mechanism is sketched below). In the following we will assume the existence of a quantum theory of gravity based on time-dependent states.
Ashtekar's formalism \cite{Ashtekar1986NewVariablesForClassicalAndQuantumGravity} is similar, except that instead of $\gamma$ and $\pi_{\gamma}$, its variables are an $\tn{su}(2)$ connection and its conjugate, a densitized frame field on $\Sigma$. At the classical level the ADM formalism and the Ashtekar variables are equivalent. When quantized, the resulting operators satisfy commutation relations similar to \eqref{eq:WdW-CCR} \cite{Kiefer2012QuantumGravity}. Its quantization was interpreted by Rovelli and Smolin in terms of \emph{loop variables} \cite{RovelliSmolin1990LoopSpaceRepresentationOfQuantumGeneralRelativity}.
We do not know with certainty that spacetime is continuous. Various approaches to quantum gravity are discrete, being based in general on structures that can be represented as graphs or hypergraphs, possibly with numbers attached to their vertices and (hyper)edges. For example, in the \emph{causal sets} approach \cite{Sorkin1990SpacetimeAndCausalSets}, the vertices of the graph are events from spacetime, and oriented edges join pairs of events in causal relation, in the sense that the first event is in the past lightcone of the second one. The \emph{Regge calculus} \cite{Regge1961GeneralRelativityWithoutCoordinates} is based on triangulations of spacetime into $4$-simplices further approximated as flat. Distances are attached to the edges, and the spacetime curvature is concentrated at $2$-faces and expressed in terms of deficit angles, \textit{etc}. The \emph{causal dynamical triangulation} approach is similar, but with fixed-length edges \cite{Loll2019QuantumGravityFromCausalDynamicalTriangulationsAReview}. \emph{Loop quantum gravity} can be formulated in terms of spin networks and spin foams. \emph{Spin networks} are graphs with the edges labeled by half-integer numbers corresponding to irreducible representations of $\tn{su}(2)$ \cite{Penrose1971AngularMomentumAnApproachToCombinatorialSpaceTime,RovelliSmolin1995SpinNetworksAndQuantumGravity,AshtekarBianchi2021AShortReviewOfLoopQuantumGravity}. Two spin networks at different times are joined by a \emph{spin foam}, a hypergraph used in the path integral formulation of loop quantum gravity. All these graph or hypergraph structures are background-independent. They can also be seen as equivalence classes of (hyper)graphs embedded in the $3$d-space $\Sigma$ or in the spacetime $M$, where two such embedded structures are equivalent if they can be related by a diffeomorphism of the background manifold.
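As promised, here is a minimal numerical sketch of the Page-Wootters mechanism, assuming a finite-dimensional clock and a qubit system; the dimension, time step, and Hamiltonian below are arbitrary choices made for the illustration, not part of the proposal.
\begin{verbatim}
import numpy as np

# Toy Page-Wootters construction: a D-tick clock tensored with a qubit.
# The "history state" is sum_k |t_k> (x) U^k |psi_0> / sqrt(D); conditioning
# on the clock reading t_k recovers the Schrodinger-evolved system state.

D, dt = 16, 0.1                                  # clock ticks, time step
H = np.array([[0.0, 1.0], [1.0, 0.0]])           # toy system Hamiltonian
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * dt)) @ V.conj().T   # U = exp(-i H dt)

psi0 = np.array([1.0, 0.0], dtype=complex)
hist = np.stack([np.linalg.matrix_power(U, k) @ psi0
                 for k in range(D)]) / np.sqrt(D)    # rows: clock ticks

for k in (0, 5, 10):                             # condition on clock = t_k
    conditioned = hist[k] * np.sqrt(D)
    evolved = np.linalg.matrix_power(U, k) @ psi0
    assert np.allclose(conditioned, evolved)     # "evolution without evolution"
\end{verbatim}
The timeless state lives in the clock-system tensor product, and the unitary evolution is encoded entirely in its entanglement structure, which is all the Page-Wootters proposal requires.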
Many of the discrete approaches described above use Feynman's path integral quantization, but in the end a complex coefficient is associated with each classical $3$d-space state, so it is likely that a wavefunctional representation always exists. I will assume that quantum gravity can be described by a theory admitting a wavefunctional representation.
Let $\mc{C}_S$ be the set of classical $3$d-space configurations. These may be the diffeomorphism equivalence classes of Riemannian geometries $(\Sigma,\gamma)$, or more fundamental structures approximated by such geometries at low energies. For example, if quantum gravity is one of the discrete theories whose classical configurations are labeled (hyper)graphs, these will be the elements of $\mc{C}_S$. While much of the following works well with both continuous and discrete spacetimes, we will see that continuous spacetimes have some advantages. I will assume that there is a {Schr\"odinger} formulation of quantum gravity in terms of wavefunctionals over $\mc{C}_S$, endowed with a measure $\mu_S$. We assume that problems like the nonexistence of an infinite-dimensional Lebesgue measure are solved or avoided. The states of the universe are represented by unit vectors $\Psi$ in the Hilbert space $\mathcal{H}_S$ spanned by states $\ket{\gamma}$, where $\gamma\in\mc{C}_S$ and $\Psi[\gamma]:=\braket{\gamma}{\Psi}$, with the Hermitian scalar product
\begin{equation}
\label{eq:scalar_prod_space}
\braket{\Psi}{\Psi'}:=\int_{\gamma\in\mc{C}_S}\Psi^\ast[\gamma]\Psi'[\gamma]{\scriptstyle \mc{D}}\mu_S[\gamma].
\end{equation}
For matter quantum fields I will assume, as in quantum field theory on the Minkowski spacetime, that there is a formulation in terms of wavefunctionals on the classical configuration space $\mc{C}_M$ of classical fields on $\Sigma$. The classical fields include bosonic fields, which commute, and fermionic fields, which are expressed using Grassmann numbers because they anticommute at equal times, see \textit{e.g.}\ \cite{Jackiw1988AnalysisInfDimManifoldsSchrodingerRepresentationForQuantizedFields}. If additional variables are needed to specify how the $3$d geometries integrate into $4$d manifolds, for example the shift and lapse variables, I will assume that these are included as well in $\mc{C}_M$. If the $3$d-space is a (hyper)graph $\gamma\in\mc{C}_S$, I will assume that matter can be described, in principle, by attaching various quantities or other structures to the elements of $\gamma$. Let us summarize all of the above into the following
\begin{assumption}
\label{as:matter-fields}
The complete state of the universe is represented by a wavefunctional on a configuration space $\mc{C}=\mc{C}_S\times\mc{C}_M$, where $\mc{C}_S$ is the $3$d-space configuration space and $\mc{C}_M$ is the matter configuration space. We assume a measure $\mu$ on $\mc{C}$, of the form $\mu[\gamma,\phi]=\mu_S[\gamma]\mu_M[\phi]$, where $(\gamma,\phi)\in\mc{C}$. Let the Hilbert space of such wavefunctionals be $\mathcal{H}\cong\mathcal{H}_S\otimes\mathcal{H}_M$, with a scalar product
\begin{equation}
\label{eq:scalar_prod}
\braket{\Psi}{\Psi'}:=\int_{(\gamma,\phi)\in\mc{C}}\Psi^\ast[\gamma,\phi]\Psi'[\gamma,\phi]{\scriptstyle \mc{D}}\mu[\gamma,\phi].
\end{equation}
\end{assumption}
\begin{definition}
\label{def:3dspace-state}
The states $\ket{\gamma,\phi}$ satisfying $\Psi[\gamma,\phi]=\braket{\gamma,\phi}{\Psi}$, where $\gamma$ represents the 3d-space and $\phi$ the matter fields, will be called \emph{$3$d-space states}.
\end{definition}
\subsection{3d-space states are fundamental}
\label{s:fundamental-3d-space-states}
Just because physicists discovered classical physics first, and later quantum theory, and formulated the latter by quantizing the former, it does not follow that quantum theory requires classical physics to exist. The universe is what it is, and it is fundamentally quantum. However, the Hilbert space is too symmetric as it is, and without the existence of preferred structures that break its symmetry, there would be no relation between Hilbert space vectors and physical reality, or between Hermitian operators and physical observables. Physical properties cannot simply emerge from the abstract state vector, even if the Hamiltonian is known, because if they did, infinitely many entities with the very same properties, but able to represent completely different physical realities, would emerge as well \cite{Stoica2021SpaceThePreferredBasisCannotUniquelyEmergeFromTheQuantumStructure,Stoica2022VersatilityOfTranslations}. Therefore, the basis $(\ket{\gamma,\phi})_{(\gamma,\phi)\in\mc{C}}$ of the Hilbert space $\mathcal{H}$ is special among the others, because of its physical meaning. This justifies
\begin{assumption}
\label{as:fundamental-3d-space-states}
The $3$d-space states are fundamental, in the sense that, by their physical meaning, they are special among the other states represented by $\mathcal{H}$.
\end{assumption}
As explained earlier, these $3$d-space states are not necessarily Riemannian geometries; they can be other structures approximated at low energies by such geometries. What is important is that they have a special physical meaning, in the same sense in which, in nonrelativistic quantum mechanics, the position operators and their eigenvectors have a special physical meaning compared to other operators or vectors in the Hilbert space.
\subsection{Background freedom}
\label{s:background-freedom}
To construct the configuration space $\mc{C}$, we eliminated the unphysical degrees of freedom due to diffeomorphisms and global gauge transformations. For example, two metric tensor fields on $\Sigma$ may look different, but a coordinate transformation, which corresponds to a diffeomorphism of $\Sigma$, may be able to map them into one another, showing that they are isometric. For this reason, we took the equivalence classes of metrics on $\Sigma$ under diffeomorphisms. Similarly, if the $3$d-space is a discrete structure like the ones that can be represented by graphs or hypergraphs from \sref{s:3d-space}, we took the configuration space to consist of such structures defined by their internal relations, not by particular embeddings in a $3$d manifold. But let us state this explicitly, since it will be central in the article:
\begin{assumption}
\label{as:background-freedom}
Our theory is background-free.
\end{assumption}
The case for background freedom was made, for example, by Smolin \cite{Smolin2006TheCaseForBackgroundIndependence}. General relativity already shows that the structures have to be relational: we use coordinates, but they are not absolute; they are just ways to assign numbers to points in space or spacetime. The \emph{hole argument} \cite{Stachel2014TheHoleArgumentAndSomePhysicalAndPhilosophicalImplications,Norton1988TheHoleArgument} shows that taking the points of the underlying manifold as having an independent reality from the intrinsic relations introduced by the metric tensor leads to indeterminacy.
This is why many of the approaches to quantum gravity seem to require background freedom, or even have it built-in, including the formulation based on the Wheeler-de Witt equation \eqref{eq:WdW} and the discrete approaches based on (hyper)graphs discussed earlier, like causal sets, Regge calculus, causal dynamical triangulations, loop quantum gravity, \textit{etc}. For a discussion of background independence in \emph{string theory} see Witten \cite{Witten1993QuantumBackgroundIndependenceInStringTheory}.
\section{Dissociation into classical 3d-space states}
\label{s:dissociation}
\subsection{Background freedom and dissociation}
\label{s:background-dissociation}
In general, we draw no distinction between the concepts of linear combination and superposition, except perhaps that a linear combination is understood as the mathematical expression of a superposition, which is a physical concept related to position in the $3$d-space and to phenomena like interference. The two usually coincide. In nonrelativistic quantum mechanics, any two wavefunctions can be superposed in the $3$d-space, because the underlying geometry is the same, and the reference frames are the same. In the wavefunctional formulation of quantum field theory on Minkowski spacetime, the local information about the wavefunctional of a scalar field is obtained by using local operators at $\boldsymbol{x}\in\Sigma=\mathbb{R}^3$, definable in terms of the operators $\wh{\phi}(\boldsymbol{x})$ and $\wh{\pi}_{\phi}(\boldsymbol{x})$ (to be rigorous, one uses operator-valued distributions, applied to a sequence of test functions that converge to the Dirac distribution $\delta_{\boldsymbol{x}}$). In background-dependent theories of quantum gravity we can define local operators in a similar way, in terms of the operators $\wh{\gamma}(\boldsymbol{x})$ and $\wh{\pi}_{\gamma}(\boldsymbol{x})$ from eq. \eqref{eq:WdW-hp}, and $\wh{\phi}_{\alpha}(\boldsymbol{x})$ and $\wh{\pi}_{\phi_{\alpha}}(\boldsymbol{x})$ for each matter field $\phi_{\alpha}$, where $\alpha$ stands for the spin and the internal degrees of freedom. But in background-free quantum gravity, local operations on the $3$d-space do not make sense for all states, and likewise superpositions do not, even though the linear combinations are always defined. If the theory is background-free, a difference appears when we apply local operators to linear combinations. Any local operator $\obs{A}(\boldsymbol{x})$ depends on $\boldsymbol{x}$, but background freedom prevents the matching of $\boldsymbol{x}$ for $\ket{\gamma,\phi}$ to $\boldsymbol{x}$ for $\ket{\gamma',\phi'}$, because in general $\gamma\neq\gamma'$. There is no definite correspondence between the points of $\Sigma$ for $\ket{\gamma,\phi}$ and those of $\Sigma$ for $\ket{\gamma',\phi'}$, because of background freedom. The situation is even more apparent in background-free theories where $(\Sigma,\gamma)$ is replaced by a labeled (hyper)graph. If $(\Sigma,\gamma)$ and $(\Sigma,\gamma')$ are isometric, a correspondence between the points of $\Sigma$ for $\ket{\gamma,\phi}$ and those of $\Sigma$ for $\ket{\gamma',\phi'}$ exists, although it is not necessarily unique. Sometimes such a correspondence exists only between some open regions of $\Sigma$. So the dissociation is not always ensured, and we will see that this is important.
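A concrete example (mine, chosen only for illustration) may help. Let $\Sigma$ be the $3$-torus $(\mathbb{R}/\mathbb{Z})^3$, and consider the flat metrics $\gamma_{ab}=L^2\delta_{ab}$ and $\gamma'_{ab}=L'^2\delta_{ab}$ with $L\neq L'$. Both geometries are flat, so any sufficiently small open region of $(\Sigma,\gamma)$ is isometric to a region of $(\Sigma,\gamma')$; but there is no global isometry, since the total volumes $L^3$ and $L'^3$ differ and isometries preserve volume. For states $\ket{\gamma,\phi}$ and $\ket{\gamma',\phi'}$ built on these geometries, the points of the two copies of $\Sigma$ can therefore be matched locally but not globally, which is precisely the intermediate situation just described.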
We arrived at the following:
\begin{keyobservation}
\label{obs:key:dissociation}
Background freedom implies the dissociation of the universal wavefunction into classical $3$d-space states, because local operators and superpositions are not completely well-defined in the absence of a common background.
\end{keyobservation}
The dissociation is not necessarily complete, and various cases are captured in the following definition.
\begin{definition}
\label{def:dissociation}
Two $3$d-space states $\ket{\gamma,\phi}$ and $\ket{\gamma',\phi'}$ are \emph{locally associable} if there exist two open subsets $U,U'\subseteq\Sigma$ and an isometry between $(U,\gamma)$ and $(U',\gamma')$. If $U=U'=\Sigma$, they are \emph{globally associable}. Two $3$d-space states are \emph{dissociated} if they are not globally associable. They are \emph{partially dissociated} if they are locally but not globally associable. They are \emph{completely dissociated} if they are neither locally nor globally associable.
\end{definition}
In the discrete case, in Definition \ref{def:dissociation}, (local) isometries are replaced by (local) isomorphisms between the labeled (hyper)graphs $\gamma$ and $\gamma'$. As long as the dissociation is not complete, the $3$d-space states can reassociate, at least partially. This allows quantum interference to exist at micro scales. This is the key to understanding why our quantum world looks quantum at small scales, and classical at macro scales.
\subsection{Macro-states and classical micro-states}
\label{s:classical-micro-macro-states}
Macro-states correspond to equivalence classes of micro-states. There is a complete set of commuting projectors $(\obs{P}_{\alpha})_{\alpha\in\mc{A}}$ on $\mathcal{H}$, so that $[\obs{P}_{\alpha},\obs{P}_{\beta}]=0$ for any $\alpha\neq\beta\in\mc{A}$, and $\textstyle{\bigoplus}_{\alpha\in\mc{A}}\obs{P}_{\alpha}\mathcal{H}=\mathcal{H}$. Any \emph{macro-state} is represented by a subspace of the form $\obs{P}_{\alpha}\mathcal{H}$. We will say that the states belonging to macro-states $\obs{P}_{\alpha}\mathcal{H}$ are \emph{quasi-classical}. Since the $3$d-space states are classical, it makes sense to assume that they are also quasi-classical, \textit{i.e.}\ every $3$d-space state $\ket{\gamma,\phi}\in\obs{P}_{\alpha}\mathcal{H}$ for some $\alpha$.
\begin{assumption}
\label{as:quasi-classical}
All $3$d-space states are quasi-classical.
\end{assumption}
If at a given time the state of the universe is a $3$d-space state, it immediately evolves into a linear combination of $3$d-space states. Dissociation and reassociation happen continuously. However, at the macro level, the state may remain quasi-classical for finite time intervals under unitary evolution. This accounts for the fact that macroscopic systems do not evolve all the time into linear combinations of macro-states like the {Schr\"odinger} cat, although it allows unitary evolution to lead to such linear combinations during quantum measurements.
\section{Probabilities from counting 3d-space states}
\label{s:probabilities}
\subsection{Taking dissociation seriously}
\label{s:dissociation-seriously}
Every vector $\ket{\Psi}$ from $\mathcal{H}$ has the form
\begin{equation}
\label{eq:state-in-basis}
\ket{\Psi} = \int_{(\gamma,\phi)\in\mc{C}}c_{\gamma,\phi}\ket{\gamma,\phi}{\scriptstyle \mc{D}}\mu[\gamma,\phi],
\end{equation}
where $c_{\gamma,\phi}=\Psi[\gamma,\phi]=\braket{\gamma,\phi}{\Psi}$.
We may be tempted to simply proclaim the Born rule, that the probability density is
\begin{equation}
\label{eq:born_rule}
P[\gamma,\phi]=\abs{c_{\gamma,\phi}}^2.
\end{equation}
But let us resist this for a while, and explore the consequences of the dissociation. If we explore the consequences of a physical principle, we should do it in its own terms, and if the result contradicts the observations, we should drop the starting principle. So let us bite the bullet and see where the idea of dissociation leads. We will see that it leads to the Born rule, but in a natural way, not by fiat. The dissociation into classical $3$d-space states suggests the following principle:
\begin{principle}
\label{pp:dissociation}
Each $3$d-space state is either not present in $\ket{\Psi(t)}$, or it is present once (\textit{i.e.}\ it cannot be ``half-present'', even if eq. \eqref{eq:state-in-basis} may suggest this possibility).
\end{principle}
This may seem to contradict everything we know. However, we will get quantum theory back, with the familiar complex numbers, which will receive a geometric meaning in terms of a global gauge, and, since $\mc{C}$ is continuous, with the Born rule as we know it, but resulting from counting the $3$d-space states.
\subsection{Making the wavefunctional real}
\label{s:real-wavefunctional}
Background freedom implies that the quantum state dissociates automatically into $3$d-space states, but since the coefficients $c_{\gamma,\phi}$ from eq. \eqref{eq:state-in-basis} are complex numbers, we need to understand their meaning. First, while $\ket{\gamma,\phi}$ is a classical state, $c_{\gamma,\phi}\ket{\gamma,\phi}$ is not classical. Let us for the moment ignore $\gamma$, and consider that $\phi$ is a scalar field. In general, $c\ket{\phi}\neq\ket{c\phi}$. Even if $c\in\mathbb{R}$, if $c\neq 1$, $\ket{c\phi}$ represents a classical state $c\phi$ completely different from $\phi$, so $\ket{c\phi}$ and $\ket{\phi}$ are orthogonal. But if $\phi$ is an electrically charged field and $\varphi\in\mathbb{R}$, $e^{i \varphi}\phi$ represents a global gauge transformation of $\phi$. The classical fields $\phi$ and $e^{i \varphi}\phi$ are physically the same. The state vectors $\ket{\phi}$ and $e^{i \varphi}\ket{\phi}$ are distinct, since they differ by a phase factor, but they represent the same physical state. This suggests the following interpretation:
\begin{keyobservation}
\label{obs:key:gauge}
If the matter fields admit a $\tn{U}(1)$ gauge symmetry, for any $\varphi\in\mathbb{R}$,
\begin{equation}
\label{eq:gauge}
e^{i \varphi}\ket{\gamma,\phi}=\ket{\gamma,e^{i \varphi}\phi}.
\end{equation}
\end{keyobservation}
This accounts for the fact that the physical equivalence of the classical fields $\phi$ and $e^{i \varphi}\phi$ corresponds to the physical equivalence of the state vectors $\ket{\phi}$ and $e^{i \varphi}\ket{\phi}$. This approach works for fields that admit a $\tn{U}(1)$ symmetry, like charged fields and spinor fields. Since the electromagnetic field can be put in a complex form, even the photon admits a $\tn{U}(1)$ symmetry \cite{BialynickiBirula1994OnTheWavefunctionOfThePhoton}. Let us express the complex coefficients $c_{\gamma,\phi}$ from eq. \eqref{eq:state-in-basis} in the polar form
\begin{equation}
\label{eq:polar-form}
c_{\gamma,\phi}=r[\gamma,\phi]e^{i \varphi[\gamma,\phi]},
\end{equation}
with $r[\gamma,\phi]\geq 0$. Then, eq.
\eqref{eq:state-in-basis} becomes
\begin{equation}
\label{eq:state-in-basis-polar}
\ket{\Psi} = \int_{(\gamma,\phi)\in\mc{C}}r[\gamma,\phi]\ket{\gamma,e^{i \varphi[\gamma,\phi]}\phi}{\scriptstyle \mc{D}}\mu[\gamma,\phi].
\end{equation}
We see that, whenever a physical classical field contributes to $\ket{\Psi}$, it contributes only once, with a uniquely determined gauge $e^{i \varphi[\gamma,\phi]}$ and real coefficient $r[\gamma,\phi]$. As $\ket{\Psi}$ evolves in time, the gauge and $r[\gamma,\phi]$ can change. It remains to explain the relation between $r[\gamma,\phi]$ and the probability density of the $3$d-space states.
\subsection{Emergence of the Born rule}
\label{s:Born}
Now that we have seen that gauge freedom allows the coefficients in the linear combination of $3$d-space states to be real numbers, let us see what their meaning is and how it relates to probabilities. I will assume that the configuration space $\mc{C}_S$ is continuous, so $\mc{C}$ is also continuous. This happens for example if $\Sigma$ is a $3$d manifold. I show that, under this assumption, the Born rule emerges by counting the $3$d-space states. A more general derivation can be found in \cite{Stoica2022ConservationLawsAndUnitarity}. Let us fix the gauge of all fields $\phi$ so that in eq. \eqref{eq:polar-form} $\varphi[\gamma,\phi]=0$. We define $\xi:=(\gamma,\phi)$. First, we notice that a state vector of the form $ \ket{\Psi}=\frac{1}{\sqrt{n}}\sum_{k=1}^n\ket{\xi_k}$, where $(\ket{\xi_k})_{k\in\{1,\ldots,n\}}$ are distinct basis vectors, leads to the Born rule. If $\obs{P}_{\alpha}$ is a macro projector and $n_{\alpha}$ basis vectors composing $\ket{\Psi}$ belong to $\obs{P}_{\alpha}\mathcal{H}$, then $\bra{\Psi}\obs{P}_{\alpha}\ket{\Psi}=n_{\alpha}/n$. Therefore, the Born rule simply coincides with the usual counting rule ``probability is the ratio of the number of favorable outcomes to the total number of possible outcomes''. But only a small subset of the possible state vectors have this form, so this idea fails if the basis is discrete. However, this idea works in the continuous case, since the basis vectors can be distributed with nonuniform density. More precisely, if $r[\xi]=r[\gamma,\phi]$ from eq. \eqref{eq:state-in-basis-polar} is $\mu$-measurable, we can define a new measure
\begin{equation}
\label{eq:density_measure}
{\scriptstyle \mc{D}}\wt{\mu}[\xi]:=r[\xi]{\scriptstyle \mc{D}}\mu[\xi],
\end{equation}
and obtain
\begin{equation}
\label{eq:psi_real_uniformized}
\ket{\Psi}=\int_{\xi\in\mc{C}}\ket{\xi} {\scriptstyle \mc{D}}\wt{\mu}[\xi].
\end{equation}
That's all. At first sight, one may think eq. \eqref{eq:psi_real_uniformized} cannot represent a normalized vector, so let us verify that it does:
\begin{equation}
\label{eq:check-normalized}
\begin{aligned}
\braket{\Psi}{\Psi}&=\int_{\xi\in\mc{C}}\bra{\xi} {\scriptstyle \mc{D}}\wt{\mu}[\xi]\int_{\xi'\in\mc{C}}\ket{\xi'} {\scriptstyle \mc{D}}\wt{\mu}[\xi']\\
&=\int_{\xi\in\mc{C}}\(\int_{\xi'\in\mc{C}}\braket{\xi}{\xi'} {\scriptstyle \mc{D}}\wt{\mu}[\xi']\){\scriptstyle \mc{D}}\wt{\mu}[\xi]\\
&=\int_{\xi\in\mc{C}}\(\int_{\xi'\in\mc{C}}\braket{\xi}{\xi'} r[\xi']{\scriptstyle \mc{D}}\mu[\xi']\){\scriptstyle \mc{D}}\wt{\mu}[\xi]\\
&=\int_{\xi\in\mc{C}}r[\xi]{\scriptstyle \mc{D}}\wt{\mu}[\xi] =\int_{\xi\in\mc{C}}r^2[\xi]{\scriptstyle \mc{D}}\mu[\xi]=1.\\
\end{aligned}
\end{equation}
Since $r[\xi]$ is $\mu$-measurable, the measure $\wt{\mu}$ is absolutely continuous with respect to $\mu$.
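A discretized numerical sketch may make the counting picture concrete; the configuration space, the function $r$, and the macro-region below are arbitrary choices made only for this illustration. Micro-states are sampled with density proportional to $r$ (the measure $\wt{\mu}$ of eq. \eqref{eq:density_measure}), and each carries a further weight $r$ at its location, so the weight fraction landing in a macro-region reproduces the integral of $r^2$ over that region.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Discretize a toy configuration space C = [0, 1) into n cells and pick a
# real, nonnegative amplitude r with (1/n) * sum(r^2) = 1 (unit norm).
n = 1000
x = (np.arange(n) + 0.5) / n
r = np.exp(-8 * (x - 0.3) ** 2) + 0.5 * np.exp(-50 * (x - 0.8) ** 2)
r /= np.sqrt(np.mean(r ** 2))

# Born weight of a "macro-region" C_alpha = [0, 0.5):
in_alpha = x < 0.5
born = np.sum(r[in_alpha] ** 2) / n

# Counting picture: distribute micro-states with density proportional to r
# (the measure mu-tilde), each contributing weight r at its location; the
# weight fraction landing in C_alpha reproduces the Born value.
N = 200_000
samples = rng.choice(n, size=N, p=r / r.sum())
w = r[samples]
counted = w[in_alpha[samples]].sum() / w.sum()

print(round(born, 3), round(counted, 3))   # agree up to sampling error
\end{verbatim}
Up to sampling error, \texttt{counted} agrees with \texttt{born}, illustrating how a uniform-amplitude family of micro-states with nonuniform density can reproduce squared-amplitude weights.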
Now, consider a macro projector $\obs{P}_{\alpha}$ so that the macro-state $\obs{P}_{\alpha}\mathcal{H}$ is the closure of a subspace spanned by $(\ket{\xi})_{\xi\in\mc{C}_{\alpha}}$, where $\mc{C}_{\alpha}$ is $\mu$-measurable. Then, from Assumption \ref{as:quasi-classical}, we get
\begin{equation}
\label{eq:born_rule_continuous}
\bra{\Psi}\obs{P}_{\alpha}\ket{\Psi}=\int_{\xi\in\mc{C}_{\alpha}}r[\xi]\,{\scriptstyle \mc{D}}\wt{\mu}[\xi],
\end{equation}
just like the Born rule says. Therefore, state counting gives the Born rule, in accord with Principle \ref{pp:dissociation} (Fig. \ref{3dspace-counting.pdf}).
\image{3dspace-counting.pdf}{0.45}{\textbf{The Born rule from counting $3$d-space states.} \\\textbf{A.} The usual interpretation of a wavefunction as a linear combination of basis state vectors of different lengths. \\\textbf{B.} The interpretation of the wavefunction in terms of constant length basis state vectors, but with inhomogeneous density.}
\begin{keyobservation}
\label{obs:key:distribution}
If $\mc{C}_S$ is continuous, any state vector $\ket{\Psi}\in\mathcal{H}$ consists of mutually orthogonal $3$d-space states whose density is $\abs{\Psi[\gamma,\phi]}{\scriptstyle \mc{D}}\mu[\gamma,\phi]$.
\end{keyobservation}
Therefore, the numbers from eq. \eqref{eq:state-in-basis-polar} have a direct meaning: Principle \ref{pp:dissociation} combined with the gauge freedom allows the interpretation of the states $\ket{\Psi}$ as consisting of $3$d-space states that are either present or not. We obtained the Born rule from counting $3$d-space states.
\begin{remark}
\label{rem:born_rule_more_general}
Note that the derivation of the Born rule from this Section is not limited to the case when the basis states are $3$d-space states \cite{Stoica2022ConservationLawsAndUnitarity}. What is important is that the basis is continuous, and that the basis vectors belong to macro-states. In quantum field theory in the {Schr\"odinger} wavefunctional representation, one can use the classical field configurations to obtain the basis. In nonrelativistic quantum mechanics, one can use the classical positions of the $\mathbf{n}$ particles, which are represented by points in the configuration space $\mathbb{R}^{3\mathbf{n}}$, and this is consistent with the fact that ultimately every quantum measurement translates to a position measurement. But the $3$d-space states have the advantage of dissociating in a natural way, and of including gravity. Moreover, the $3$d-space states are the only ones consisting of local beables, which are $\gamma$ and $\phi$ (see Sec. \sref{s:ontology}). This justifies counting these states to get the Born rule. \qed
\end{remark}
\section{Collapse postulate or many-worlds?}
\label{s:collapse-or-many-worlds}
Let us see how dissociation into $3$d-space states works with quantum measurements, and whether it works better by assuming the collapse postulate or with the many-worlds interpretation. A measuring device is a quasi-classical system. When it interacts with the observed system, assumed to be microscopic in the sense that it is not directly observable, the combined system evolves into a linear combination of macroscopically distinct states. Each of these states contains the observed system in a different state, and the pointer of the measuring device indicating that state. So the {Schr\"odinger} equation predicts that two or more stories describing the measurement are simultaneously true.
But we never observe such linear combinations: after the measurement, the pointer is always in a definite macro-state.
\begin{qmProblem}
\label{qmProblem:collapse}
Why can the state vector of the observed system be any linear combination at micro scales, but not at macro scales?
\end{qmProblem}
To resolve this problem, in standard quantum mechanics one invokes the \emph{collapse postulate} \cite{vonNeumann1955MathFoundationsQM}, which simply states that quantum measurements suspend the {Schr\"odinger} evolution, so that from the linear combination we keep only the term corresponding to one of the possible pointer states, removing the others. In doing this, standard quantum mechanics assumes, without explaining it, the pre-existence of measuring devices in quasi-classical states, but most quantum states are superpositions of quasi-classical states. So we have the following problem:
\begin{qmProblem}
\label{qmProblem:measuring-device}
Why is the measuring device already in a quasi-classical state?
\end{qmProblem}
The collapse postulate purports to solve Problem \ref{qmProblem:collapse} by assuming implicitly that Problem \ref{qmProblem:measuring-device} is already solved. And, because of the collapse postulate, the {Schr\"odinger} equation is considered valid in some situations, but it is suspended in other situations. \emph{There seems to be a double standard here}. On one hand, linear combinations and entangled states appear and evolve in parallel as long as no observation is made, and the experiments are consistent with this. On the other hand, if we measure them, since we do not observe more parallel sets of outcomes simultaneously, we allow only one of the stories, and censor the others, by appealing to the collapse postulate. Let us see how measurements happen in the approach based on dissociation into $3$d-space states proposed here. Consider a measuring device assumed to be almost classical, having a locally well-defined $3$d-space. Then, what enters its range can be any state of the observed system, in any linear combination. Since the measuring device is localized, its instances in the different $3$d-space branches can be compared, and collapse can be invoked. It may seem that the description of the measurement in terms of collapse has become clearer. But we still had to assume that Problem \ref{qmProblem:measuring-device} is solved. And the collapse is still arbitrary: there is still no clear rule for when it should be invoked. When no measurement is made, multiple $3$d-space states are allowed to coexist, dissociate and associate in interference patterns in the wavefunctional. But when a measurement is made, only some of the $3$d-space states seem to remain. Some linear combinations of $3$d-space states seem to be ``more equal'' than others. One may try to use the $3$d-space states approach to solve Problems \ref{qmProblem:collapse} and \ref{qmProblem:measuring-device} at once, by reformulating the collapse postulate in the following way:
\begin{tentativePostulate}[Alternative Collapse Postulate]
\label{altPost:collapse}
During the evolution of the system, the $3$d-space states may become irreversibly dissociated into two or more sets of $3$d-space states, determined by the macro projectors. Let us call these sets macro-branches. When this happens, only one of the macro-branches remains, and the others disappear.
\end{tentativePostulate}
This Tentative Postulate seems to provide a basis to explain macro systems, including measuring devices.
If so, it can solve both Problems \ref{qmProblem:collapse} and \ref{qmProblem:measuring-device} at once. But dissociation and reassociation happen all the time. Reassociation allows interference effects, but when dissociation is irreversible, these effects are suppressed automatically.
\begin{remark}
\label{rem:born_rule_collapse}
If we assume collapse and try to explain the Born rule by counting $3$d-space states as in Sec. \sref{s:probabilities}, we will have to accept that the wavefunction consists of many micro-states that exist simultaneously, and that some of them are eliminated by every collapse. But this would make quantum mechanics with the collapse postulate a strange version of the many-worlds interpretation, in which some of the micro-branches are removed with every collapse. On the other hand, the derivation of the Born rule from Sec. \sref{s:probabilities} works naturally with the many-worlds interpretation. \qed
\end{remark}
These remarks immediately prompt the following:
\begin{keyobservation}
\label{obs:key:mwi}
Tentative Postulate \ref{altPost:collapse} is unnecessary, because once the dissociation becomes irreversible, the macro-branches evolve independently and no longer interfere.
\end{keyobservation}
Therefore, since the macro-branches no longer interfere once dissociation becomes irreversible at macro scales, the $3$d-space states approach works more naturally with the many-worlds interpretation (MWI) than with the wavefunction collapse. The key idea of MWI is to take the {Schr\"odinger} equation seriously, without introducing any ad-hoc rule that applies only to macro scales. This implies that all possible components of the total wavefunction continue to exist after the measurement, but thanks to decoherence, they no longer ``see'' each other. The linearity of the {Schr\"odinger} equation allows the macroscopically distinct states resulting from a quantum measurement to evolve independently under unitary evolution; in addition, they no longer interfere. The wavefunction branches so that the different branches occupy different regions in the configuration space. Interference is suppressed because the copy of any measuring device in one branch is unable to detect anything from another branch, so the branches no longer ``know'' about one another. And the branches become macroscopically distinct, in the sense that they correspond to projections of the state vector on different macro-states $\obs{P}_{\alpha_1}\mathcal{H},\ldots,\obs{P}_{\alpha_n}\mathcal{H}$. Decoherence into macro-branches seems to explain the existence of measuring devices and to solve the measurement problem without suspending the {Schr\"odinger} equation and without invoking an ad-hoc wavefunction collapse. There are several problems that are not solved, at least not in a way that does not require a complete reinterpretation of well-established concepts like probabilities. They will be discussed in Sec. \sref{s:mwi}, where I will propose that these problems are solved, or at least alleviated, by the dissociation into $3$d-space states, which provides an absolute form of decoherence.
\section{The many-spacetimes interpretation}
\label{s:mwi}
We think that we are forced to suspend the {Schr\"odinger} equation as a result of measurements, because we observe only one of the stories that the {Schr\"odinger} equation describes as taking place in parallel. But could we observe more than one of these stories at once?
The {Schr\"odinger} equation predicts that even the observers would be ``multiplied'', each of its instances participates in one of the stories and not in the others, of which they are oblivious. And the laws of physics are the same in all of these stories. Everett noticed the perfect symmetry of the situation, and saw no reason to favor the story in which one gets an outcome against the competing stories. He proposed to trust the {Schr\"odinger} equation and accept that all stories continue to happen independently \cite{Everett1957RelativeStateFormulationOfQuantumMechanics,Everett1973TheTheoryOfTheUniversalWaveFunction}. {Schr\"odinger} himself proposed earlier something that he worried may ``seem lunatic'' along the same lines \cite{SchrodingerMWI1995TheInterpretationOfQuantumMechanics,Barrett1999TheQuantumMechanicsOfMindsAndWorlds,Deutsch2010ApartFromUniverses}. The result of Everett's realization is the \emph{many-worlds interpretation} (MWI) of quantum mechanics. But there are still open questions in MWI. Various proposals were made to solve them, and some researchers think they are solved. Others think that they cannot be solved and MWI does not deserve to be taken seriously. In this Section I argue that the $3$d-space states approach solves some of these problems, or provides a more natural way to solve them. This leads to a variant of the many-worlds interpretation, which may be called ``the many-$3$d-space states interpretation'', but I will call it \emph{the many-spacetimes interpretation} (MSTI). \subsection{Preferred basis: 3d-space states} \label{s:preferred-basis} Let us start with a problem whose solution is the key to solving other problems. \begin{mwiProblem}[Preferred basis] \label{mwiProblem:preferred-basis} In what basis does the branching take place, so that the worlds appear classical at the macro level? \end{mwiProblem} Presumably, this is solved by decoherence \cite{Schlosshauer2007DecoherenceAndTheQuantumToClassicalTransition}. However, there has to be more to the preferred basis than that it simply ``emerges''. Otherwise, if a preferred basis emerges, either for the entire universe, or for a subsystem, infinitely many others emerge \cite{Stoica2021SpaceThePreferredBasisCannotUniquelyEmergeFromTheQuantumStructure}. In nonrelativistic MWI, it is expected that the preferred basis is related to the positions in the configuration space. This would explain why branches no longer interfere -- it is because they no longer overlap in the configuration space. But the MSTI answer is different. \begin{mstiAnswer}[Preferred basis] \label{mstiAnswer:preferred-basis} The dissociation of the state vector automatically selects as the preferred basis the $3$d-space states basis. \end{mstiAnswer} \subsection{Macro world} \label{s:macro} Another important problem is the following \begin{mwiProblem}[Macro world] \label{mwiProblem:macro-world} How does the classical-looking macroscopic world emerge from the wavefunction? \end{mwiProblem} Often, Problem \ref{mwiProblem:macro-world} is considered solved by decoherence \cite{Schlosshauer2007DecoherenceAndTheQuantumToClassicalTransition,Zurek2022EmergenceOfTheClassicalWorldFromWithinOurQuantumUniverse,Kiefer2022FromQuantumToClassicalEssaysInHonourOfHDieterZeh}, which appeared in the first place to solve it. Without denying the importance of decoherence, the dissociation strengthens the idea, by introducing an absolute notion of decoherence. 
\begin{mstiAnswer}[Macro world]
\label{mstiAnswer:macro-world}
Each macro world corresponds to multiple classical $3$d-space states that belong to the same macro-state, because they are not distinguishable at the macro level.
\end{mstiAnswer}
The $3$d-space states gather together into macro-states (Assumption \ref{as:quasi-classical}). Since each $3$d-space state is also quasi-classical, and since they are not distinguished by the macro projectors, they can account for the macro world.
\subsection{Classicality as classicality}
\label{s:classicality}
Another problem is that the wavefunction is not defined on the $3$d-space, but on the much larger configuration space. This disturbed {Schr\"odinger} \cite{BacciagaluppiValentini2009SolvayConference}, Lorentz (\cite{Przibram1967LettersWaveMechanics}, p. 44), Einstein \cite{Howard1990EinsteinWorriesQM,FineBrown1988ShakyGameEinsteinRealismQT}, Heisenberg, Bohm \cite{Bohm2004CausalityChanceModernPhysics} \textit{etc}. This is true for the wavefunction of any state vector in the total Hilbert space. So even if MWI solves Problem \ref{mwiProblem:macro-world}, the following may remain:
\begin{mwiProblem}[Objects in space]
\label{mwiProblem:wavefunction-on-space}
Given that the wavefunction is defined on the high-dimensional configuration space, how do familiar, classical-looking objects localized in space emerge from the wavefunction?
\end{mwiProblem}
The wavefunction, being an element of a representation of the Galilei or the Poincar\'e group \cite{Wigner1959GroupTheoryAndItsApplicationToTheQuantumMechanicsOfAtomicSpectra}, is intrinsically associated with space or spacetime. Therefore, properly analyzed, it satisfies all expectations of standard geometric objects in space or spacetime \cite{Stoica2021WhyTheWavefunctionAlreadyIsAnObjectOnSpace}. Moreover, if one is not satisfied with this and wants the wavefunction to be expressed as classical-like fields in space or spacetime, this is also possible, albeit in an inaesthetic way that at least serves as a proof of concept \cite{Stoica2019RepresentationOfWavefunctionOn3D}. But in the case of quantum gravity, the representation from \cite{Stoica2019RepresentationOfWavefunctionOn3D} only works if the theory is background-dependent. And even if the wavefunction is, in the sense of group theory or as fields, an object in space, it does not look like the familiar, classical-looking objects we see. Maybe decoherence leads to branches that look like familiar, classical-looking objects localized in space. Wallace \cite{Wallace2012TheEmergentMultiverseQuantumTheoryEverettInterpretation} thinks that the branches form patterns in the sense of Dennett \cite{Dennett1991RealPatterns}, but are these patterns classical-looking enough? Maudlin \cite{Maudlin2010CanTheWorldBeOnlyWavefunction,Maudlin2014CriticalStudyDavidWallaceTheEmergentMultiverseQuantumTheoryEverettInterpretation,Maudlin2019PhilosophyofPhysicsQuantumTheory} and Norsen \cite{Norsen2017FoundationsQM} think that Problem \ref{mwiProblem:wavefunction-on-space} is not solved, and that it would be hard to solve even if Problems \ref{mwiProblem:preferred-basis} and \ref{mwiProblem:macro-world} were.
They contrast this with the pilot-wave theory (PWT) \cite{Bohm1952SuggestedInterpretationOfQuantumMechanicsInTermsOfHiddenVariables}, which includes, along with the wavefunction, point-particles at definite positions in space, and with the Ghirardi-Rimini-Weber (GRW) interpretation \cite{GhirardiRiminiWeber1986GRWInterpretation}, where the wavefunction collapses around well-localized points in the configuration space, thereby appearing classical. Their arguments can be seen as relying on the idea that the primitive ontologies of the PWT and GRW interpretation (especially in Bell's flash ontology \cite{Bell2004SpeakableFlashOntology}) are very similar to the classical ones. This similarity also seems to help solve the other problems of the PWT and GRW interpretations \footnote{But for these interpretations to work, the wavefunction governing the motion of the particles in PWT and the probability of the spontaneous localization in the GRW interpretation has to be itself well localized around the points of the configuration space, so the MWI Problem \ref{mwiProblem:wavefunction-on-space} applies to these interpretations as well.}. An important lesson that can be learned from their arguments is that classical physics is clearer, and so any interpretation of quantum mechanics that is closer to classical physics has an important advantage. This suggests the following heuristic rule
\begin{thumbrule}
\label{thumbrule:familiar}
If a solution is considered to work without problems in classical physics, and if it can be applied to an interpretation of quantum mechanics, it should also be considered to work without problems in that interpretation of quantum mechanics.
\end{thumbrule}
We can see that the MSTI Answers \ref{mstiAnswer:preferred-basis} and \ref{mstiAnswer:macro-world} already align MWI with this Rule of Thumb, except for the multiplicity of the worlds, which is not present in the classical theories. It is therefore desirable to have a solution of Problem \ref{mwiProblem:wavefunction-on-space} along the lines of Rule of Thumb \ref{thumbrule:familiar}, as in the PWT and GRW interpretations. Background freedom automatically makes this possible.
\begin{mstiAnswer}[Objects in space]
\label{mstiAnswer:wavefunction-on-space}
The $3$d-space states consist of classical fields on the $3$d-space.
\end{mstiAnswer}
What can be more classical than the classical itself?
\subsection{Branching asymmetry from Big-Bang symmetry}
\label{s:branching-asymmetry}
Another problem is the following
\begin{mwiProblem}[Branching asymmetry]
\label{mwiProblem:branching-asymmetry}
Why does the branching happen only towards the future, and why do the branches remain separated?
\end{mwiProblem}
This is also often claimed to be solved by decoherence, but since the {Schr\"odinger} equation is time-symmetric, without very fine-tuned initial conditions of the universe, decoherence would equally predict branching towards the past. In the standard framework of the many-worlds interpretation, Wallace acknowledged this problem, analyzed it, and concluded that the branching asymmetry is correlated with the \emph{thermodynamic arrow of time} \cite{Wallace2012TheEmergentMultiverseQuantumTheoryEverettInterpretation}. But we do not have an explanation for the thermodynamic arrow of time either, although the second law of thermodynamics is a well-established fact. The dissociation into $3$d-space states allows us to make some progress, by relating the branching asymmetry to the \emph{cosmological arrow of time}.
The cosmological arrow of time points from the Big-Bang in the direction of time in which the universe expands. The closer the state of the universe is to the Big-Bang, the more homogeneous and isotropic the universe is. Moreover, as the singularity is approached, the $3$d-space contracts. A way to interpret this is that it contracts to a point, which is the singularity. This would be problematic, since if $\Sigma$ is a point at $t=0$, we would need to explain how it evolves into a $3$d manifold. Another way to interpret it is that the $3$d-space components of the metric tensor tend to $0$ as $t\searrow 0$, but the topology of space does not contract to a point: space is still the $3$d manifold $(\Sigma,\gamma_{ab}(\boldsymbol{x}) \equiv 0)$. By avoiding the assumption that the topology derives from distance, we can obtain equations of general relativity that continue to be valid under more general conditions. For this we need an alternative formulation of semi-Riemannian geometry and Einstein's general relativity, which is equivalent to the standard ones outside the singularity, but well-defined and free of infinities at the singularity. This was achieved and shown to work in many situations in which the usual semi-Riemannian geometry is not defined \cite{Stoica2011FLRWBigBangSingularitiesAreWellBehaved,Stoica2012BeyondTheFLRWBigBangSingularity,Stoica2013SingularGeneralRelativityPhDThesis}. Moreover, this approach works well together with Penrose's \emph{Weyl curvature hypothesis}, whose motivation was to connect the cosmological and the thermodynamic arrows of time \cite{Penrose1979SingularitiesAndTimeAsymmetry,Stoica2012OnTheWeylCurvatureHypothesis}. Then, there is only one possible $3$d-space state at the Big-Bang singularity. Of course, as $t\searrow 0$ the system may be chaotic, as in the Mixmaster model \cite{Misner1969MixmasterUniverse} or the Belinski–Khalatnikov–Lifshitz model \cite{BKL1970OscillatoryApproachToASingularPointInTheRelativisticCosmology}. Then, while at the singularity there is still only one possible $3$d-space state, it can be approached in different ways as $t\searrow 0$. However, the limit $\gamma\to 0$ forces the solutions to depend on a small number of parameters as they converge to the unique $3$d-space $(\Sigma,0)$. Therefore, the severe constraint on the initial conditions for $(\Sigma,\gamma)$ implies that the branching structure of the wavefunctional is very asymmetric in time. This suggests a possible reason why, at macro scales, branching happens only towards the future.
\begin{mstiAnswer}[Branching asymmetry]
\label{mstiAnswer:branching-asymmetry}
Branching happens only towards the future because at the Big-Bang the $3$d-space states are characterized by a very small number of degrees of freedom and converge to the same initial $3$d-space state.
\end{mstiAnswer}
This answer is of course incomplete. We do not know why the initial state had to be the Big-Bang, and it is not even certain that there was a singularity; many researchers think that quantum gravity will be able to remove it.
\subsection{Probabilities from continuity}
\label{s:probabilities-continuity}
When a quantum measurement is made, the probability of obtaining a certain outcome is given by the Born rule as the squared norm of the projection of the state vector. Different outcomes may therefore have different probabilities. However, in MWI, there is only one branch for each of these outcomes.
A direct counting argument implies that all outcomes should be obtained with the same probability, contrary to the Born rule. Everett proposed that somehow the squared amplitude of the branch gives the probability that the observer ends up being the observer from that branch. \begin{mwiProblem}[Probabilities] \label{mwiProblem:probabilities} Why are the probabilities proportional to the squared amplitudes of the branches? \end{mwiProblem} There are various proposed solutions, based on many-minds \cite{AlbertLower1988InterpretingMWI_ManyMinds}, decision theory \cite{Deutsch1999QuantumTheoryOfProbabilityAndDecision,Wallace2002QuantumProbabilitiesAndDecisionRevisited}, the measure of existence \cite{Vaidman2012ProbabilityInMWI}, \textit{etc}. For a review see \cite{Vaidman2020DerivationsOfTheBornRule}. Proposals that somehow the amplitude of a branch yields probability have merits and have led to interesting insights into the nature of probability \cite{Wallace2012TheEmergentMultiverseQuantumTheoryEverettInterpretation}. But if there were a way to obtain probabilities in the old-fashioned way, for example by branch counting (Saunders advocates this \cite{Saunders2021BranchCountingInTheEverettInterpretationOfQuantumMechanics}) or as \emph{the ratio of the number of favorable outcomes to the total number of possible outcomes}, the result would be more palatable, without necessarily contradicting other proposals. The Rule of Thumb \ref{thumbrule:familiar} suggests that it is desirable to solve the problem by micro-state or micro-branch counting. Fortunately, MSTI does just this: \begin{mstiAnswer}[Probabilities] \label{mstiAnswer:probabilities} ``Counting'' the $3$d-space states, which are allowed to be distributed with an inhomogeneous density, gives probabilities proportional to the squared amplitudes. Counting each $3$d-space state is justified by the fact that only these states have local beables; see \sref{s:ontology}. \end{mstiAnswer} This derivation of the Born rule is consistent with, and provides a concrete realization of, Saunders' branch-counting \cite{Saunders2021BranchCountingInTheEverettInterpretationOfQuantumMechanics}, Vaidman's notion of measure of existence \cite{Vaidman2012ProbabilityInMWI}, and perhaps, though I am not sure, the Deutsch-Wallace decision-theoretic argument \cite{Deutsch1999QuantumTheoryOfProbabilityAndDecision,Wallace2002QuantumProbabilitiesAndDecisionRevisited}. \subsection{Real wavefunction} \label{s:wavefunction-gauge} The Rule of Thumb \ref{thumbrule:familiar} also suggests the following: \begin{mwiProblem}[Real-number-based probabilities] \label{mwiProblem:real-numbers} It is true that the norm of the (complex) wavefunction is real. But is there a deeper reason why we get real probabilities? \end{mwiProblem} MSTI suggests a solution according to the Rule of Thumb \ref{thumbrule:familiar} for this too: \begin{mstiAnswer}[Real-number-based probabilities] \label{mstiAnswer:real-numbers} The wavefunction is real, and the complex phases only represent a global $\tn{U}(1)$ gauge choice for the classical fields in the $3$d-space states. \end{mstiAnswer} \subsection{Ontology: a real wavefunction} \label{s:ontology} Another problem is that of ontology: \begin{mwiProblem}[Ontology] \label{mwiProblem:ontology} What is the ontology of MWI? What are the local beables?
\end{mwiProblem} Some researchers consider that the abstract state vector and the Hamiltonian are sufficient to specify the ontology of MWI, and that from them one can derive an essentially unique $3$d-space, the tensor product structure, the preferred basis, and all there is to be known about the universe \cite{CarrollSingh2019MadDogEverettianism}. This is impossible, because if any of these structures can be derived from the state vector and the Hamiltonian, infinitely many other solutions exist \cite{Stoica2021SpaceThePreferredBasisCannotUniquelyEmergeFromTheQuantumStructure}. Other researchers consider that the wavefunction is needed, in the sense that not only the state vector is required, but also the $3$d-space, and that this is sufficient to specify the complete ontology \cite{Vaidman2016AllIsPsi,SEP-Vaidman2018MWI}. Despite this, authors like Maudlin \cite{Maudlin2010CanTheWorldBeOnlyWavefunction,Maudlin2014CriticalStudyDavidWallaceTheEmergentMultiverseQuantumTheoryEverettInterpretation,Maudlin2019PhilosophyofPhysicsQuantumTheory} and Norsen \cite{Norsen2017FoundationsQM} consider that MWI does not have a primitive ontology in terms of local beables. But in every micro-world in MSTI there are local beables, just like in classical physics. \begin{mstiAnswer}[Ontology] \label{mstiAnswer:ontology} There is a wavefunctional, composed of $3$d-space states and dissociable into them. Each $3$d-space state consists of a $3$d-space $(\Sigma,\gamma)$, on which classical fields $\phi$ are defined, with a fixed gauge. Every $3$d-space state appears at most once in the composition of the wavefunctional, but these states can be distributed with a nonuniform density. The distribution gives the real wavefunctional, and the gauge gives the complex phase of each term in the wavefunctional. The local beables are the classical fields $\phi$ and $\gamma$ defined on the $3$d manifold $\Sigma$, so they are defined only for $3$d-space states. Because the $3$d-space states are the ones having definite local beables, they correspond to (micro-)worlds. This justifies counting them to obtain the probabilities. \end{mstiAnswer} Therefore, local beables exist, and the Rule of Thumb \ref{thumbrule:familiar} was followed. \section{Discussion} \label{s:discussion} It is uncommon to use the wavefunctional formulation of quantum field theory in the interpretation of quantum mechanics. For some reason, it is considered more natural to take nonrelativistic quantum mechanics as a benchmark for these interpretations. But the wavefunctional formulation is natural too, if not more natural. \begin{remark} \label{rem:measurement} When we perform a quantum measurement of a smaller system, we never observe its state directly, only the pointer state of the apparatus, which is macroscopic. A measuring device is dedicated to a particular location and type of quantum field (or subsystem in general), not to a particular particle (or subsystem). The result of any measurement translates into a change in the macro-state of the universe. All these are described adequately by the wavefunctional of the entire universe. \end{remark} Wheeler and Everett considered MWI as the interpretation of quantum mechanics that is suitable for quantum gravity \cite{Byrne2010TheManyWorldsOfHughEverettIII,Barrett2012EverettInterpretation}. According to DeWitt \cite{Dewitt1967QuantumTheoryOfGravityI_TheCanonicalTheory}, p.~1141:
\begin{quote} Everett's view of the world is a very natural one to adopt in the quantum theory of gravity, where one is accustomed to speak without embarrassment of the `wave function of the universe.' It is possible that Everett's view is not only natural but essential. \end{quote} Here, we have seen that background-free quantum gravity solves some foundational problems of quantum mechanics, and especially of MWI. It even suggests a version of MWI (namely MSTI) as the more natural interpretation of quantum mechanics. The relation between quantum gravity and MWI is therefore reciprocal. Finally, I argued that MSTI solves some of the main problems of standard quantum mechanics and MWI. The strategy to make this interpretation more palatable was to highlight similarities with classical physics, based on the Rule of Thumb \ref{thumbrule:familiar}. It turns out that, except for the existence of a multiplicity of worlds, MSTI is a more classical version of MWI, with respect to the appearance of classicality, the existence of local beables, the statistics, and even the understanding of the complex numbers inherent to the theory.
{ "timestamp": "2022-09-20T02:20:37", "yymm": "2209", "arxiv_id": "2209.08623", "language": "en", "url": "https://arxiv.org/abs/2209.08623" }
\section*{\uppercase{Acknowledgements}} This research is supported by Next Vision\footnote{Next Vision: https://www.nextvisionlab.it/} s.r.l., by MISE - PON I\&C 2014-2020 - Progetto ENIGMA - Prog n. F/190050/02/X44 – CUP: B61B19000520008, and MIUR AIM - Attrazione e Mobilita Internazionale Linea 1 - AIM1893589 - CUP: E64118002540007 \section{Conclusion} \label{sec:conclusion} We proposed MECCANO, a multimodal dataset to study egocentric human behavior understanding in an industrial-like scenario. We publicly release the dataset (\url{https://iplab.dmi.unict.it/MECCANO/}) with temporal (action and interaction segments) and spatial (active object, next-active object, and hand bounding boxes) annotations, considering a taxonomy of 12 verbs, 20 nouns and 61 unique actions. In addition, we performed baseline experiments on five challenging tasks, showing the usefulness of the multimodal signals of the MECCANO dataset. We argue that these multimodal signals are useful to develop real applications to support humans in real life. MECCANO is also suitable to explore different tasks \cite{EgoProceLECCV2022, mixup_active_obj_detction} other than those considered in this work. Future works will explore new approaches to improve performance on the proposed tasks. \section{The MECCANO Multimodal Dataset} \label{sec:dataset} In this Section, we describe the MECCANO multimodal dataset, a dataset of egocentric videos composed of multimodal data collected in an industrial-like domain. We acquired RGB videos, depth maps and the gaze signal simultaneously with two different devices. \subsection{Data Collection} \label{sec:meccano_collection} The MECCANO multimodal dataset has been acquired in an industrial-like scenario in which 20 subjects were asked to build a toy model of a motorbike (see Figure~\ref{fig:toy_model}). The motorbike is composed of 49 components with different shapes and sizes belonging to 19 classes. In addition, 2 tools, a \textit{screwdriver} and a \textit{wrench}, are available to facilitate the assembly of the toy model. In our settings, we have grouped components which are similar in their appearance and have similar roles in the assembly process. Specifically, we grouped the A054 and A051 components (shown in Figure~\ref{fig:toy_model}) under the ``screw'' class. These two types of components only differ in their lengths. We also grouped A053, A057 and A077 under the ``washers'' class. Note that these components only differ in the radius of their holes and in their thickness. As a result, we have 20 object classes in total: 16 classes are related to the 49 motorbike components, whereas the others are associated with the two tools, with the instruction booklet and with a ``partial model'' class, which indicates a set of components assembled together to form a part of the model (see Figure~\ref{fig:partial}). Note that multiple instances of each component are necessary to build the model. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{images/partial_model.png} \caption{Examples of objects belonging to the partial model class.} \label{fig:partial} \end{figure} For the data collection, the $49$ components related to the $16$ considered classes, the $2$ tools and the instruction booklet have been placed on a table to simulate an industrial-like environment.
Specifically, objects of the same component class have been grouped and placed in a heap, and heaps have been placed randomly on the table (see Figure~\ref{fig:dataset}). We have considered two types of tables: a light-colored table and a dark one. The dataset has been acquired in two different countries, Italy and the United Kingdom. Participants were of $8$ different nationalities, with ages between $18$ and $55$. Figure~\ref{fig:participants} reports some statistics about the participants. We asked participants to sit and build the model of the motorbike. No other particular instruction was given to the participants, who were free to use all the objects placed on the table as well as the instruction booklet. Some examples of the captured data are reported in Figure~\ref{fig:dataset}. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{images/participants.pdf} \caption{Statistics of the 20 participants.} \label{fig:participants} \end{figure} The dataset has been acquired using a custom headset (see Figure~\ref{fig:headset}) which was worn by participants for acquisition purposes. The headset was composed of an Intel RealSense SR300\footnote{https://ark.intel.com/content/www/it/it/ark/products/92329/intel-realsense-camera-sr300.html} and a Pupil Core\footnote{https://pupil-labs.com/} device. The headset was adjusted to control the point of view of the camera with respect to the different heights and postures of the participants, in order to have the hands located approximately in the middle of the scene during the object manipulations. For each participant, we acquired the RGB stream and the depth signal from the RealSense device, whereas the gaze signal was acquired through the Pupil Core device (see Figure~\ref{fig:headset}). The RGB videos acquired with the RealSense device were recorded at a resolution of $1920\times1080$ pixels. Depth videos were acquired with a resolution of $640\times480$ pixels. Both videos have a framerate of 12 fps. Finally, we acquired the gaze signal with the Pupil Core device with a frequency of 200 Hz. To acquire the RealSense and Pupil Core signals, we used the Pupil Capture software\footnote{https://pupil-labs.com/products/core/}, which allows acquiring the gaze signal simultaneously with the signals coming from the two devices. Each video includes a complete assembly of the toy model starting from the 49 pieces placed on the table. The average duration of the captured videos is 21.14\textit{min}, with the longest one being 35.45\textit{min} and the shortest one being~9.23\textit{min}. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{images/multimodality.png} \caption{The custom headset used to acquire the MECCANO dataset along with examples of the captured modalities. The headset is composed of two devices: an Intel RealSense SR300 and a Pupil Core.} \label{fig:headset} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{images/action-interaction.png} \caption{Example of the relation between the action \textit{``take screwdriver''} and its related interaction. $A_s$ and $A_e$ represent the start and the end times of the action, while $I_s$ represents the start of the interaction, marked by the physical contact between the hand and the object. The interaction ends when the physical contact is broken ($I_e$).} \label{fig:action_interaction} \end{figure*} \subsection{Data Alignment} We aligned the different signals both temporally and spatially to obtain a consistent association between modalities.
In this way, it is possible to use the set of annotations (e.g., temporal segments or bounding box annotations) independently of the chosen signal. The following sections detail the alignment of the different modalities to the source RGB videos. \subsubsection{Depth Alignment} There was a constant temporal misalignment of 0.4s between the depth and RGB signals, due to the fact that the streams have been acquired with two different sensors (depth sensor and RGB sensor). We temporally aligned the two streams, obtaining a total of 301016 depth frames\footnote{Due to the temporal misalignment, the number of depth frames differs from the number of RGB frames.}. Examples of RGB frames associated with the depth maps are shown in Figure~\ref{fig:dataset}. \subsubsection{Gaze Alignment} The gaze data consist of the 2D pixel coordinates (\textit{x, y}) of the gaze position in the RealSense RGB frame, together with a confidence score and a timestamp for each sample. For each RGB frame, we associated a gaze sample by selecting only gaze positions with a confidence larger than or equal to 0.6 and taking the one with the timestamp closest to the considered frame (see Figure~\ref{fig:dataset}; a minimal sketch of this association is reported below). \subsection{Data Annotation} \label{sec:meccano_data_ann} The MECCANO Multimodal dataset has been collected and annotated to study human behavior understanding in an industrial-like scenario. Similarly to recent datasets \cite{Ego4D2021, Damen2018EPICKITCHENS}, MECCANO is a ``multi-task'' dataset, due to its rich set of annotations which can be used and combined to solve different tasks. We provide temporal annotations for action and interaction understanding, which are useful to solve tasks that take into account the temporal dimension, such as action recognition, as reported in Section~\ref{sec:action_recognition}. Active objects have been labeled with bounding boxes and object classes with the aim to solve tasks like Active Object Detection and Recognition (Section~\ref{sec:AODR}). Combining interaction temporal annotations and active object bounding boxes, it is possible to address the task of Egocentric Human-Object Interaction (EHOI) Detection, as detailed in Section~\ref{sec:EHOI}. We also provide bounding boxes around the hands of the user during all object interactions. Hand and active object bounding boxes have also been tracked backward in time before the beginning of each interaction. These tracked bounding boxes are related to the task of ``next active objects''. We exploit the hand and next-active object bounding boxes to address the action anticipation task, as reported in Section~\ref{sec:Action_anticipation}. Moreover, bounding boxes of active and next-active objects have been used to address the Next-Active Object Detection task (see Section~\ref{sec:NAO}). In sum, to show the potential of the MECCANO Dataset and its annotations, we propose a benchmark which comprises five tasks using different sets of annotations and input modalities: 1) Action Recognition, 2) Active Object Detection and Recognition, 3) Egocentric Human-Object Interaction Detection, 4) Action Anticipation and 5) Next-Active Object Detection. Central to human-object annotations are user-object interactions and actions. Hence, we first investigate the relationship between these two concepts.
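As anticipated in the Gaze Alignment section above, the per-frame modality association can be summarized with the following minimal sketch. This is an illustration of the procedure, not the code we used; function and variable names, the data layout, and the sign of the depth offset are our assumptions.
\begin{verbatim}
import bisect

DEPTH_RGB_OFFSET = 0.4     # constant depth/RGB temporal misalignment (s)
MIN_GAZE_CONFIDENCE = 0.6  # gaze samples below this are discarded

def shift_depth_timestamps(depth_timestamps):
    # Compensate the constant offset so that depth frames can be
    # matched to RGB frames on a common timeline (sign assumed here).
    return [t - DEPTH_RGB_OFFSET for t in depth_timestamps]

def associate_gaze(frame_timestamps, gaze_samples):
    # gaze_samples: (timestamp, x, y, confidence) tuples, sorted by time.
    # Returns one (x, y) tuple (or None) per RGB frame: the reliable
    # gaze sample whose timestamp is closest to the frame.
    reliable = [g for g in gaze_samples if g[3] >= MIN_GAZE_CONFIDENCE]
    times = [g[0] for g in reliable]
    out = []
    for t in frame_timestamps:
        if not times:
            out.append(None)
            continue
        i = bisect.bisect_left(times, t)
        nearby = [j for j in (i - 1, i) if 0 <= j < len(times)]
        j = min(nearby, key=lambda k: abs(times[k] - t))
        out.append((reliable[j][1], reliable[j][2]))
    return out
\end{verbatim}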
\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{images/rel.png} \caption{Examples of relations between the action and interaction concepts.} \label{fig:action_interaction_table} \end{figure} \subsubsection{Action-Interaction Relations} \label{sec:action_vs_interactions} In the literature, the \textit{action} and \textit{interaction} concepts are often used interchangeably, especially when tasks related to \textit{action} understanding are considered. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{images/Temporal_annotations.pdf} \caption{Example of two overlapping temporal annotations along with the associated verbs.} \label{fig:temporal} \end{figure*} Let us consider the sentence ``take a screwdriver'', which is composed of the verb ``take'' and the object ``screwdriver''. We consider that the \textit{interaction} is strongly related to the physical contact between the human and the object, whereas we assume that the \textit{action} is more related to the motion of the hands of the user and of the related objects, as well as to the change of state of the object (i.e., from inert to in hand). These two entities are correlated along the temporal dimension, having different start and end times, even if they overlap (see Figure~\ref{fig:action_interaction}). Let $A_s$ and $A_e$ be the start and end time of an action, and let $I_s$ and $I_e$ denote when the related interaction begins and ends. Considering the annotation ``take a screwdriver'', the action begins when the hand of the human starts to move towards the target object, which is the screwdriver on the table. The interaction begins when the hand touches the target object. Hence, the interaction begins due to the physical contact between the hand and the object, while the action is still ongoing. When the screwdriver has been taken, which means that it has changed its state from inert to in hand, the action ends, while the interaction concludes when the physical contact is broken (e.g., when the human puts the object down on the table). The verb describing the action is the same which describes the related interaction. Note that the interaction can be associated with different verbs over time. For example, if after taking the screwdriver the physical contact is not broken and the human puts down the screwdriver, the verb ``put down'' will also describe the interaction until the end of that action. Distinguishing these two concepts and understanding their temporal correlation is fundamental to formalize and disambiguate these two related tasks. Action understanding tasks focus on the temporal dimension of actions, while the Human-Object Interaction detection task focuses more on the spatial position of the objects in the scene, related to the physical contact between human and objects. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{images/actions_stats.pdf} \caption{List of the actions present in the MECCANO Dataset (right) and their distribution (left). The action labels follow a long-tailed distribution, which highlights the complexity of the considered scenario.} \label{fig:action_stats} \end{figure*} Considering this distinction, we labeled the MECCANO dataset with action and interaction annotations as reported in Section~\ref{sec:temporal_ann}, and explored the two different tasks. Figure~\ref{fig:action_interaction_table} reports different examples of relations between the action and interaction concepts.
For each example, we report the temporal relation (first column), generic verbs (second column) and the verbs present in the MECCANO Dataset (third column) which belong to the considered relation. \subsubsection{Action and Interaction Temporal Annotations} \label{sec:temporal_ann} We considered 12 different verbs which describe the actions performed by the participants while building the toy model: \textit{take, put, check, browse, plug, pull, align, screw, unscrew, tighten, loosen} and \textit{fit}. We represent each temporal segment as a triplet composed of three different timestamps: 1) the start time, which indicates the start of the action, 2) the contact time, which indicates the first frame in which the contact between the hand and the object (or between the objects) is clearly visible, changing the state of the objects from \textit{passive} to \textit{active}, and 3) the end time of the performed action. We manually annotated both the contact and end times of each temporal segment. Since in the MECCANO Multimodal dataset there are three cases in which the action starts before the frame of contact (i.e., take, put and align), for these actions we automatically annotated the start time going back by 0.5 seconds with respect to the contact time, whereas for the other actions the start time coincides with the contact time. Only for the \textit{check} verb, where the user does not need to touch an object, we annotated the contact time when it is clear from the video sequence that the user is looking at the object (see Figure~\ref{fig:temporal}). With this procedure, we annotated $8857$ video segments. Since a participant can perform multiple interactions simultaneously, we allowed the annotated segments to overlap (see Figure~\ref{fig:temporal}). In particular, in the MECCANO Multimodal dataset there are 1401 segments (15.82\%) which overlap with at least another segment. We defined $61$ action classes composed of a verb and one or more objects, for example \textit{``align screwdriver to screw''}, in which the verb is \textit{align} and the objects are \textit{screwdriver} and \textit{screw}. Depending on the verb and objects involved in the interaction, each temporal segment has been associated with one of the $61$ considered action classes. We analyzed the combinations of our $12$ verb classes and $20$ object classes to find a compact, yet descriptive set of action classes. The action class selection process has been performed in two stages. In the first stage, we obtained the distributions of the number of active objects generally occurring with each of the $12$ verbs. In the second stage, we selected a subset of actions from all combinations of verbs and nouns. Let \begin{math} O = \{o_1, o_2, ..., o_n\} \end{math} and \begin{math} V = \{v_1, v_2, ..., v_m\} \end{math} be the sets of the object and verb classes respectively. For each verb $v \in V$, we considered all the object classes $o \in O$ involved in one or more temporal segments labeled with verb $v$. In total, we obtained the 61 action classes composing the MECCANO dataset, which are shown in Figure~\ref{fig:action_stats}. \subsubsection{Active Object Bounding Box Annotations} For each temporal segment, we annotated the \textit{active} objects in frames sampled every $0.2$ seconds. Each active object annotation consists of a \textit{(class, x, y, w, h)} tuple, where \textit{class} represents the class of the object and \textit{(x, y, w, h)} are the 2D coordinates, width and height defining the bounding box around the object in the frame.
We annotated multiple objects when they were \textit{active} simultaneously (see Figure~\ref{fig:bbox} - first row). If an active object is occluded, even just in a few frames, we annotated it with a \textit{(class, x, y)} tuple, specifying the class of the object and its estimated 2D position. An example of occluded active object annotation is reported in the second row of Figure~\ref{fig:bbox}. With this procedure, we labeled a total of 64349 frames. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{images/bbox_annotations.pdf} \caption{Example of bounding box annotations for \textit{active} objects (first row) and occluded \textit{active} objects (second row).} \label{fig:bbox} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{images/concept_HOI.png} \caption[]{Examples of Human-Object Interactions from the third person point of view (first row) and the first person point of view (second row)\footnotemark.} \label{fig:concept_HOI} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{images/next_annotations.png} \caption{Annotation procedure for next-active objects with bounding boxes in the past frames.} \label{fig:next_active_objects} \end{figure*} \subsubsection{EHOI Annotations} \label{sec:EHOI_ann} The HOI detection task consists in detecting the occurrence of human-object interactions, localizing both the humans taking part in the action and the interacted objects. HOI detection also aims to understand the relationships between humans and objects, which are usually described with a verb. Possible examples of HOIs are ``\textit{eat the sandwich}'' or ``\textit{throw the ball}'' (see Figure~\ref{fig:concept_HOI}-top). HOI detection models mostly consider a single object involved in the interaction \cite{Gupta2015VisualSR, HOI_Gupta_09, Gkioxari2018DetectingAR, HOI_Fei_Fei,Chao2018LearningTD}. Hence, an interaction is defined as a triplet in the form \textit{$<$human, verb, object$>$}, where the human is the subject of the action specified by a verb and an object. Considering the FPV domain, a first formalization of the HOI task has been proposed by \cite{Hands_in_contact_Shan20}, who represented an interaction as a triplet $<$hand, contact state, object$>$, where the ``contact state'' variable assumes one of the following values: none, self, other, portable or non-portable. This definition does not describe the interaction in terms of verb classes and assumes that only one object per hand can be involved in the interaction. Differently from \cite{Hands_in_contact_Shan20}, we aim to understand human behavior more closely and hence study the Egocentric Human-Object Interaction (EHOI) detection task, with the aim of predicting \textit{$<$verb, objects$>$} pairs describing the interaction observed from the egocentric point of view, with multiple objects. Note that in EHOIs the subject is always the camera wearer, so we do not require its localization in the frame, while one or more objects can be involved simultaneously in the interaction. The goal of EHOI detection is to infer the verb and object noun classes, and to localize each active object involved in the interaction. Let $O = \{o_1, o_2, ..., o_n\}$ and $V = \{v_1, v_2, ..., v_m\}$ be the sets of objects and verbs respectively.
We define an Egocentric Human-Object Interaction $e$ as: \begin{equation} \label{eq:1} e = (v_h, \{\overline{o}_1, \overline{o}_2, \ldots, \overline{o}_i\}) \end{equation} \noindent where \begin{math}v_h \in V\end{math} is the verb characterizing the interaction and \begin{math}\{\overline{o}_1, \overline{o}_2, \ldots, \overline{o}_i\} \subseteq O \end{math} are the active objects involved in the interaction. Given the previous definition, we considered all the observed combinations of verbs and objects to represent the EHOIs performed by the participants during the acquisition. Two examples are reported in Figure~\ref{fig:concept_HOI}-bottom. Each EHOI annotation is hence composed of a verb annotation and the bounding boxes of the \textit{active} objects. Differently from other datasets, the MECCANO multimodal dataset has hence been explicitly annotated for the EHOI detection task. \subsubsection{Next Active Object Annotations} \label{sec:nao_annotations} Due to the limited number of public datasets explicitly annotated to study the future intentions of humans, few past works have explored the task of predicting the next-active objects considering the first person point of view \cite{Furnari2017NextactiveobjectPF}, using both RGB and depth signals \cite{Bertasius2017FirstPA}, focusing on the hands \cite{JIANG2021212} or estimating also the time to contact with the future active objects \cite{Ego4D2021}. We annotated MECCANO with a set of labels useful to tackle the problem of \textit{Next Active Object} prediction, whose goal is to predict and localize the objects that will be involved in a future human-object interaction from the first person view. For each human-object interaction, we annotated, in the frames preceding the interaction, the objects which will be \textit{active} in the contact frame. Starting from the contact frame, we sampled frames every 0.2 seconds going backwards, up to 3 seconds before the beginning of the temporal segment, or less if there is an overlap with a previous segment\footnote{If an interaction overlaps with the previous one, we did not annotate past frames.} (see Figure~\ref{fig:next_active_objects}; a minimal sketch of this backward sampling is given below). Indeed, not all interactions have past frames. For example, if the interaction $E_1$ ends at timestamp $T_1$ and the interaction $E_2$ starts at timestamp $T_2 = T_1 + 0.1s$, past frames belonging to the interaction $E_2$ would overlap with frames belonging to the previous interaction $E_1$, and they are not annotated. With this sampling procedure, we obtained labels in past frames for 75.66\% (6656) of the total number of interactions (8857) present in the dataset. Considering the frames preceding an interaction, each next-active object annotation consists of a \textit{(class, x, y, w, h)} tuple, where \textit{class} represents the class of the object which will be active and the \textit{(x, y, w, h)} tuple defines a bounding box around the considered object. If an object is going to be taken from a pile, then the pile itself is labeled as the next active object. Note that a pile of objects is composed only of objects of the same type. We labeled the pile because we assume that, before a human-object interaction occurs, it is not feasible to infer which object of the pile will be active (see Figure~\ref{fig:next_active_objects}-left). As in the case of active objects, if an object is occluded, we annotated it with a \textit{(class, x, y)} tuple specifying the class of the object and its estimated 2D position.
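The backward sampling described above can be sketched as follows. This is a minimal illustration under our reading of the procedure (in particular, the first sampled past frame is assumed to be one step before the contact frame); all names are illustrative.
\begin{verbatim}
SAMPLING_STEP = 0.2   # seconds between sampled past frames
MAX_LOOKBACK = 3.0    # at most 3 s before the segment start

def past_frame_times(contact_time, segment_start, previous_end=None):
    # Timestamps of past frames to annotate with next-active objects.
    # previous_end: end time of the previous interaction, if any.
    if previous_end is not None and previous_end >= segment_start:
        return []  # overlapping interactions: no past frames annotated
    earliest = segment_start - MAX_LOOKBACK
    if previous_end is not None:
        earliest = max(earliest, previous_end)
    times = []
    t = contact_time - SAMPLING_STEP  # first past frame (assumption)
    while t >= earliest:
        times.append(t)
        t -= SAMPLING_STEP
    return list(reversed(times))      # oldest first
\end{verbatim}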
With this procedure, we labeled a total of 48024 frames with 74127 bounding boxes\footnote{See supplementary material for additional details.}. \begin{table*}[] \caption{Statistics of the three splits: Train, Validation and Test. The 4th column indicates the percentage of videos belonging to the related subset with respect to the total number of videos present in the MECCANO Dataset.} \label{tab:splits} \centering \resizebox{\textwidth}{!}{% \begin{tabular}{l|ccccccccc} \multicolumn{1}{c|}{\textbf{Split}} & \textbf{\#Videos} & \textbf{Duration (min)} & \textbf{\%} & \textbf{\#EHOIs Segments} & \textbf{Obj. BBoxes} & \textbf{Hands BBoxes} & \textbf{NAO BBoxes} & \textbf{Country (U.K/Italy)} & \textbf{Table (Light/Dark)} \\ \hline Train & 11 & 236.47 & 55\% & 5057 & 37386 & 96556 & 28152 & 6/5 & 6/5 \\ Val & 2 & 46.57 & 10\% & 977 & 6983 & 19636 & 5490 & 1/1 & 1/1 \\ Test & 7 & 134.93 & 35\% & 2824 & 19980 & 87924 & 14382 & 4/3 & 4/3 \\ \hline \end{tabular}% } \end{table*} \subsubsection{Hands Annotations} As MECCANO multimodal features actions and human-object interactions from the first person point of view, where the hands of the user are visible during object manipulation, knowledge of the position of the hands could be an important modality to explore. For each temporal segment, we annotated the hands of the participants with a bounding box on the set of frames belonging to the interaction (i.e., from the start frame to the end frame) and in the past frames preceding the interaction. Each hand annotation consists of a \textit{(class, x, y, w, h)} tuple, where \textit{class} represents the side of the hand (i.e., left or right) and \textit{(x, y, w, h)} defines a bounding box around the considered hand. We split this labeling procedure into two stages. First, we processed the frames with the Hand Object Detector introduced in \cite{Hands_in_contact_Shan20}. This detector infers whether a hand is involved in an interaction through the contact with active objects. In particular, the detector predicts the hand location, the side, a contact state, and a box around the object in contact. We considered only the hand location and the side for each of the processed frames. In the second stage, annotators checked whether the predicted bounding boxes and the associated classes were correct, and added a new annotation if there was a missing hand prediction. If a bounding box was not precise or its class was wrong, they refined the bounding box and corrected the class of the hand. With this procedure, we annotated \textit{89628} frames with \textit{169625} bounding boxes around the hands. See the supplementary material for additional details. We used this set of annotations as an additional modality to tackle the \textit{Action Anticipation} task, as described in Section~\ref{sec:Action_anticipation}. Hand annotations could also be useful to understand human-object interactions or to recognize the actions performed by the user. \section{Benchmarks and Baseline Results} \label{sec:benchmark} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{images/action_rec_architecture.pdf} \caption{SlowFast (RGB-Depth-Gaze) architecture, which takes as input three different signals (RGB, depth and gaze).} \label{fig:action_rec_architecture} \end{figure*} The MECCANO dataset is suitable to study a variety of tasks, considering its multimodality and the challenging industrial-like scenario in which it was acquired.
In this paper, we proposed five tasks related to human behavior understanding and provided baseline results: 1) \textit{Action Recognition}, 2) \textit{Active Object Detection and Recognition}, 3) \textit{Egocentric Human-Object Interaction (EHOI) Detection}, 4) \textit{Action Anticipation} and 5) \textit{Next Active Object Detection}. While some of these tasks have been studied in previous works, none of them has been studied in industrial scenarios from the egocentric perspective while also considering multimodal observations. Moreover, there are only a few datasets publicly available \cite{Damen2020RESCALING, Ego4D2021, Sener2022Assembly101AL} which can be used to study different tasks simultaneously and to develop a complete system for human behavior understanding taking into account different aspects (e.g., actions, interactions, objects, future intentions). MECCANO has been split into three subsets (\textit{Training, Validation} and \textit{Test}) designed to balance the different types of tables (light, dark) and the countries in which the videos have been acquired (Italy, U.K.). We used the training set to train the baselines, the validation set for hyperparameter tuning and the test set to test the trained models. Each video has been entirely assigned to one of the three different subsets. Table~\ref{tab:splits} reports some statistics about the three splits, such as the number of videos, the total duration (in minutes), the number of temporally annotated EHOIs and the number of bounding box annotations. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{images/act_rec_qual.png} \caption{Qualitative results of the action recognition models. Green and red colors represent the correct/wrong predictions for both the verbs and the objects which compose the action class. In the first column, we report the ground truth action class, in the second one the prediction of the model SlowFast (RGB-Gaze), in the third column the action class predicted by SlowFast (Depth-Gaze) and in the last column (\textit{All}) the prediction of SlowFast (RGB-Depth-Gaze).} \label{fig:action_rec_qual_res} \end{figure*} \begin{table*}[] \caption{Results for the action recognition task. The best results are reported in bold, whereas the second best results are underlined.} \label{tab:action_rec} \resizebox{\textwidth}{!}{% \begin{tabular}{l|ccccc} \textbf{Method} & \multicolumn{1}{l}{\textbf{Top-1 Accuracy}} & \multicolumn{1}{l}{\textbf{Top-5 Accuracy}} & \multicolumn{1}{l}{\textbf{AVG Class Precision}} & \multicolumn{1}{l}{\textbf{AVG Class Recall}} & \multicolumn{1}{l}{\textbf{AVG F1-score}} \\ \hline SlowFast (RGB) & 45.16 & 73.75 & 50.33 & 45.16 & 46.66 \\ SlowFast (Depth) & 45.13 & 72.19 & 50.28 & 45.13 & 46.96 \\ SlowFast (RGB-Depth) & \underline{49.49} & \underline{77.61} & \underline{56.13} & \underline{49.49} & \underline{51.90} \\ SlowFast (RGB-Gaze) & 45.34 & 73.61 & 49.83 & 44.60 & 46.25 \\ SlowFast (Depth-Gaze) & 45.27 & 72.30 & 50.62 & 45.27 & 47.23 \\ SlowFast (RGB-Depth-Gaze) & \textbf{49.66} & \textbf{77.82} & \textbf{56.69} & \textbf{49.66} & \textbf{52.25} \end{tabular} } \end{table*} \subsection{Action Recognition} \label{sec:action_recognition} Action Recognition consists in determining the action performed by the camera wearer from the observation of an egocentric video segment.
Specifically, let \begin{math}C_a = \{c_1, c_2, ..., c_n\} \end{math} be the set of action classes and let \begin{math}A_i = [t_{s_i}, t_{e_i}]\end{math} be a video segment, where $t_{s_i}$ and $t_{e_i}$ are the start and the end times of the action respectively. The aim is to assign the correct action class $c_i \in C_a$ to the segment $A_i$. We evaluate action recognition using Top-1 and Top-5 accuracy computed on the whole test set. As class-aware measures, we report class-mean precision, class-mean recall and the $F_1$-score. \subsubsection{Baseline} \label{sec:baseline_act_rec} As a baseline we considered SlowFast \cite{feichtenhofer2018slowfast}, which is a state-of-the-art method for action recognition. To explore the usefulness of multimodal signals for this task, we adopted different instances of the SlowFast architecture, as detailed in the following.\\ \textbf{SlowFast (RGB):\hspace{1mm}} the 3D network architecture which takes as input the RGB clip to perform action recognition, as implemented in PySlowFast \cite{fan2020pyslowfast}.\\ \textbf{SlowFast (Depth):\hspace{1mm}} a 3D network architecture which takes as input the clip composed of the depth maps related to the corresponding RGB frames.\\ \textbf{SlowFast (RGB-Depth):\hspace{1mm}} this model is composed of two SlowFast networks which process different input signals (RGB and the corresponding Depth frames). The two probability distributions obtained as output are averaged to obtain the final action prediction. \\ \textbf{SlowFast (RGB-Gaze):\hspace{1mm}} inspired by \cite{deep_attention_Minlong}, we integrated the human gaze into the SlowFast network. Specifically, we used the ground truth gaze fixation to obtain an attention map which focuses on the most relevant spatial regions of the video frames along the time dimension. This attention map is multiplied with the output feature maps of both the slow and fast pathways. Then, we fused the combined feature maps by concatenation and fed them to the fully connected layers to obtain the final prediction (a minimal sketch of this fusion is reported below).\\ \textbf{SlowFast (Depth-Gaze):\hspace{1mm}} this model is similar to SlowFast (RGB-Gaze), but it takes as input the depth maps of the video clips rather than the RGB frames.\\ \textbf{SlowFast (RGB-Depth-Gaze):\hspace{1mm}} this model is composed of two instances of the SlowFast model (one for each input signal) and integrates the human gaze as in the SlowFast (RGB-Gaze) architecture (Figure~\ref{fig:action_rec_architecture}). \subsubsection{Results} Table~\ref{tab:action_rec} reports the results obtained with the adopted baselines for the action recognition task. Baselines using only one modality (RGB or Depth) achieve similar performance in terms of Top-1 Accuracy (45.16\% vs. 45.13\%) and AVG F1-score (46.66\% vs. 46.96\%). Fusing the RGB and Depth signals, we obtain better results with respect to all the baselines which use a single modality or a single modality combined with gaze. Exploiting all the signals present in the MECCANO Dataset (RGB, Depth and Gaze), we obtain the best results considering all the evaluation measures (last row of Table~\ref{tab:action_rec}). Even if the gaze modality represents an additional signal to guide learning, the improvements brought by this modality are minor. The limited improvement obtained using this modality with respect to the model which uses RGB and Depth signals could be related to the nature of the adopted architecture, which is simple and can be optimized in future works.
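To make the adopted gaze integration concrete, the following is a minimal PyTorch-style sketch of the attention-map fusion used in the SlowFast (RGB-Gaze) baseline. The Gaussian form of the attention map, its width, the pooling step and all names are our assumptions; the actual implementation may differ.
\begin{verbatim}
import torch

def gaze_attention_map(gaze_xy, T, H, W, sigma=0.1):
    # Gaussian attention maps centred on the (normalised) gaze positions.
    # gaze_xy: (T, 2) tensor with gaze coordinates in [0, 1]; sigma is an
    # assumed width, not taken from the paper.
    ys = torch.linspace(0, 1, H).view(1, H, 1)
    xs = torch.linspace(0, 1, W).view(1, 1, W)
    gx = gaze_xy[:, 0].view(T, 1, 1)
    gy = gaze_xy[:, 1].view(T, 1, 1)
    d2 = (xs - gx) ** 2 + (ys - gy) ** 2       # (T, H, W)
    return torch.exp(-d2 / (2 * sigma ** 2))

def fuse_with_gaze(slow_feats, fast_feats, gaze_xy):
    # slow_feats: (B, C_s, T_s, H, W); fast_feats: (B, C_f, T_f, H, W);
    # gaze_xy: (T_f, 2) gaze positions for the frames of the fast pathway.
    B, _, T_f, H, W = fast_feats.shape
    att = gaze_attention_map(gaze_xy, T_f, H, W)          # (T_f, H, W)
    fast_att = fast_feats * att.view(1, 1, T_f, H, W)
    # the slow pathway samples frames at a lower rate: subsample the maps
    T_s = slow_feats.shape[2]
    idx = torch.linspace(0, T_f - 1, T_s).long()
    slow_att = slow_feats * att[idx].view(1, 1, T_s, H, W)
    # global average pooling per pathway, then concatenation; the result
    # is fed to the fully connected classifier
    pooled = [x.mean(dim=(2, 3, 4)) for x in (slow_att, fast_att)]
    return torch.cat(pooled, dim=1)                       # (B, C_s + C_f)
\end{verbatim}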
Qualitative results are reported in Figure~\ref{fig:action_rec_qual_res}, which highlights how the use of multiple signals improves the performance on the action recognition task (first and second row). In general, the results suggest that the use of multiple modalities allows improving the results on the MECCANO dataset. Moreover, the dataset is a challenging testbed for action recognition and offers a new scenario to compare classic and multimodal action recognition algorithms. \subsection{Active Object Detection and Recognition} \label{sec:AODR} Differently from previous works, we consider two distinct but related tasks: active object detection and active object recognition. In some cases, detecting the active objects manipulated by the human without considering the object class \cite{Hands_in_contact_Shan20} can be useful, for example when a taxonomy of the object classes is difficult to obtain, or to initialize a tracker \cite{TREK150}. However, when a taxonomy is available, predicting the object classes can enable practical applications (e.g., monitoring the usage time of specific objects). The aim of the Active Object Detection task is to detect all the \textit{active} objects. Let \begin{math} O_{act} = \{o_1, o_2, ..., o_n\} \end{math} be the set of \textit{active} objects in a given frame. The goal is to detect with a bounding box each \textit{active} object $o_i \in O_{act}$. As evaluation measure, we use the Average Precision~(AP), which is used in standard object detection benchmarks. We set the IoU threshold equal to~$0.5$ in our experiments, as in the standard Pascal VOC mAP measure \cite{PascalVOC_Zisserman_15}. The active object recognition task, instead, consists in detecting and recognizing the \textit{active} objects involved in EHOIs considering the $20$ object classes of the MECCANO dataset. Formally, let \begin{math} O_{act} = \{o_1, o_2, ..., o_n\}\end{math} be the set of \textit{active} objects in the image and let \begin{math} C_{o} = \{c_1, c_2, ..., c_m\} \end{math} be the set of object classes. The task consists in detecting the objects $o_i \in O_{act}$ and assigning them the correct class label $c_i \in C_{o}$. We use mAP \cite{PascalVOC_Zisserman_15} with an IoU threshold equal to $0.5$ for the evaluations. \begin{table}[] \caption{Baseline results for the \textit{active} object detection task.} \label{tab:active_det} \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{l|c} \multicolumn{1}{c|}{\textbf{Method}} & \multicolumn{1}{l}{\textbf{AP (IoU \textgreater 0.5)}} \\ \hline Hand Object Detector \cite{Hands_in_contact_Shan20} & 11.17\% \\ Hand Object Detector \cite{Hands_in_contact_Shan20} (Avg dist.) & 11.10\% \\ Hand Object Detector \cite{Hands_in_contact_Shan20} (All dist.) & 11.34\% \\ Hand Object Detector \cite{Hands_in_contact_Shan20} + Objs re-training & 20.18\% \\ Hand Object Detector \cite{Hands_in_contact_Shan20} + Objs re-training (Avg dist.) & 33.33\% \\ Hand Object Detector \cite{Hands_in_contact_Shan20} + Objs re-training (All dist.) & \textbf{38.14\%} \\ \hline \end{tabular}% } \end{table} \subsubsection{Methods} To address the problem of detecting active objects, the Hand-Object Detector proposed in \cite{Hands_in_contact_Shan20} has been considered as a baseline. The model has been designed to detect hands, as well as objects that are in contact with hands.
This architecture is based on Faster-RCNN \cite{ren2015faster} and predicts a box around the visible human hands, as well as boxes around the objects the hands are in contact with, and a link between them. We used the Hand-Object Detector \cite{Hands_in_contact_Shan20} pretrained on EPIC-Kitchens \cite{Damen2018EPICKITCHENS}, EGTEA \cite{Li2018_EGTEA-GAZE+} and CharadesEGO \cite{Sigurdsson2018Charades}, as provided by the authors \cite{Hands_in_contact_Shan20}. For our purpose, the model has been trained to recognize hands and to detect the \textit{active} objects regardless of their class. With the default parameters, the Hand-Object Detector can find at most two \textit{active} objects in contact with the hands. Since our dataset tends to contain more \textit{active} objects in a single EHOI (up to 7), we consider two variants of this model, obtained by changing the threshold on the distance between hands and detected objects (a minimal sketch of this filtering is given below). In the first variant, the threshold is set to the average distance between hands and \textit{active} objects in the MECCANO dataset. We named this variant ``\textit{Avg distance}''. In the second variant, we removed the thresholding operation and considered all detected objects as \textit{active} objects. We named this variant ``\textit{All objects}''. We further adapted the Hand-Object Detector \cite{Hands_in_contact_Shan20} by re-training the Faster-RCNN component to detect all the \textit{active} objects of the MECCANO dataset. Faster-RCNN has been trained on the training and validation sets using the provided \textit{active} object class labels. Since this task aims only to localize objects, we discard the predicted object classes at test time. For the active object recognition task, as a baseline, we used a standard Faster-RCNN \cite{ren2015faster} object detector. For each image, the object detector predicts \textit{(x, y, w, h, class)} tuples which represent the object bounding boxes and the associated classes. We used the same Faster-RCNN model adopted for the Active Object Detection task, retaining also the object classes at test time.
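The distance-based filtering behind the ``\textit{Avg distance}'' and ``\textit{All objects}'' variants can be sketched as follows. The use of a center-to-center pixel distance, and all names, are our assumptions.
\begin{verbatim}
import math

def box_center(box):
    # box: (x, y, w, h)
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def filter_active_objects(hand_boxes, object_boxes, threshold=None):
    # threshold=None reproduces the "All objects" variant (no filtering);
    # otherwise an object is kept only if it lies within `threshold` of
    # at least one hand, as in the "Avg distance" variant (where the
    # threshold is the average hand-object distance in MECCANO).
    if threshold is None:
        return list(object_boxes)
    kept = []
    for obj in object_boxes:
        c = box_center(obj)
        if any(math.dist(c, box_center(h)) <= threshold
               for h in hand_boxes):
            kept.append(obj)
    return kept
\end{verbatim}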
\begin{table}[] \caption{Baseline results for the \textit{active} object recognition task.} \label{tab:active_rec} \centering \resizebox{0.6\columnwidth}{!}{% \begin{tabular}{clc} \multicolumn{1}{l|}{\textbf{ID}} & \multicolumn{1}{c|}{\textbf{Class}} & \textbf{AP (per class)} \\ \hline \multicolumn{1}{c|}{0} & \multicolumn{1}{l|}{instruction booklet} & 46.18\% \\ \multicolumn{1}{c|}{1} & \multicolumn{1}{l|}{gray\_angled\_perforated\_bar} & 09.79\% \\ \multicolumn{1}{c|}{2} & \multicolumn{1}{l|}{partial\_model} & 36.40\% \\ \multicolumn{1}{c|}{3} & \multicolumn{1}{l|}{white\_angled\_perforated\_bar} & 30.48\% \\ \multicolumn{1}{c|}{4} & \multicolumn{1}{l|}{wrench} & 10.77\% \\ \multicolumn{1}{c|}{5} & \multicolumn{1}{l|}{screwdriver} & 60.50\% \\ \multicolumn{1}{c|}{6} & \multicolumn{1}{l|}{gray\_perforated\_bar} & 30.83\% \\ \multicolumn{1}{c|}{7} & \multicolumn{1}{l|}{wheels\_axle} & 10.86\% \\ \multicolumn{1}{c|}{8} & \multicolumn{1}{l|}{red\_angled\_perforated\_bar} & 07.57\% \\ \multicolumn{1}{c|}{9} & \multicolumn{1}{l|}{red\_perforated\_bar} & 22.74\% \\ \multicolumn{1}{c|}{10} & \multicolumn{1}{l|}{rod} & 15.98\% \\ \multicolumn{1}{c|}{11} & \multicolumn{1}{l|}{handlebar} & 32.67\% \\ \multicolumn{1}{c|}{12} & \multicolumn{1}{l|}{screw} & 38.96\% \\ \multicolumn{1}{c|}{13} & \multicolumn{1}{l|}{tire} & 58.91\% \\ \multicolumn{1}{c|}{14} & \multicolumn{1}{l|}{rim} & 50.35\% \\ \multicolumn{1}{c|}{15} & \multicolumn{1}{l|}{washer} & 30.92\% \\ \multicolumn{1}{c|}{16} & \multicolumn{1}{l|}{red\_perforated\_junction\_bar} & 19.80\% \\ \multicolumn{1}{c|}{17} & \multicolumn{1}{l|}{red\_4\_perforated\_junction\_bar} & 40.82\% \\ \multicolumn{1}{c|}{18} & \multicolumn{1}{l|}{bolt} & 23.44\% \\ \multicolumn{1}{c|}{19} & \multicolumn{1}{l|}{roller} & 16.02\% \\ \hline \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} \\ \cline{2-3} \multicolumn{1}{l}{} & \multicolumn{1}{c|}{\textbf{mAP}} & 30.39\% \end{tabular}% } \end{table} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{images/active_objects.png} \caption{Qualitative results for the Active Object Recognition task.} \label{fig:active_objects_qual} \end{figure} \subsubsection{Results} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{images/EHOI_arch.png} \caption{Proposed architecture for the EHOI detection task.} \label{fig:EHOI_arch} \end{figure*} \begin{table*}[] \caption{Results on the EHOI Detection task. 
The best results are reported in bold, whereas the second best results are underlined.} \label{tab:EHOI} \resizebox{\textwidth}{!}{ \begin{tabular}{lccc|ccc} & \multicolumn{3}{c|}{\textbf{mAP\textsubscript{verb}}} & \multicolumn{3}{c}{\textbf{mAP\textsubscript{verb,noun}}} \\ \textbf{Method} & \textbf{IoU@50} & \textbf{IoU@30} & \textbf{IoU@10} & \textbf{IoU@50} & \textbf{IoU@30} & \textbf{IoU@10} \\ \hline SlowFast (RGB) + Faster-RCNN & 26.44 & 28.83 & 30.45 & 19.14 & 21.36 & 22.05 \\ SlowFast (Depth) + Faster-RCNN & \textbf{29.10} & \textbf{31.81} & \textbf{33.81} & \underline{21.37} & \underline{24.01} & \underline{25.01} \\ SlowFast (RGB-Depth) + Faster-RCNN & 26.49 & 28.88 & 30.51 & 19.20 & 21.42 & 22.12 \\ SlowFast (RGB-Gaze) + Faster-RCNN & 28.82 & \underline{31.56} & \underline{33.55} & \textbf{21.38} & \textbf{24.04} & \textbf{25.04} \\ SlowFast (Depth-Gaze) + Faster-RCNN & \underline{28.92} & 31.51 & 33.38 & 20.79 & 23.28 & 24.20 \\ SlowFast (RGB-Depth-Gaze) + Faster-RCNN & 28.51 & 31.10 & 32.97 & 20.65 & 23.12 & 24.03 \end{tabular} } \end{table*} \begin{figure}[] \centering \includegraphics[width=\columnwidth]{images/ehoi_qual.png} \caption{Qualitative results of the SlowFast (Depth) + Faster-RCNN method for the EHOI Detection task. On the left, the SlowFast (Depth) verb prediction; on the right, the active objects detected by Faster-RCNN. Wrong verb predictions and missed object detections are reported with a dashed red bounding box.} \label{fig:ehoi_qual} \end{figure} Table~\ref{tab:active_det} shows the results obtained by the baselines on the \textit{active} object detection task. The results highlight that the Hand-Object Detector \cite{Hands_in_contact_Shan20} is not able to generalize to the challenging domain offered by MECCANO Multimodal. All three variants of the Hand-Object Detector using the original object detector obtained an AP approximately equal to 11\% (first three rows of Table~\ref{tab:active_det}). Re-training the object detector on the MECCANO dataset allowed improving the performance by significant margins. In particular, using the standard distance threshold value, we obtained an AP of 20.18\%. If we consider the average distance as the threshold to discriminate \textit{active} and \textit{passive} objects, we obtain an AP of 33.33\%. Removing the distance threshold (last row of Table~\ref{tab:active_det}) allows outperforming all the previous results, obtaining an AP equal to 38.14\%. Note that, since no distance threshold is considered, this baseline consists in just using a Faster R-CNN object detector trained on the target context. This suggests that adapting the general object detector to the challenging domain of the proposed dataset is key to performance. Indeed, training the object detector to detect only the \textit{active} objects in the scene already allows obtaining reasonable results, while there is still room for improvement. Table~\ref{tab:active_rec} reports the AP values obtained by the Faster R-CNN active object recognition baseline for each class, considering all the videos belonging to the test set of MECCANO. The last column reports the AP value for each class, and the last row reports the mAP value for the test set. The mAP was computed as the average of the mAP values obtained in each test video. The AP values in the last column show that large objects are easier to recognize (e.g., \textit{instruction booklet: 46.18\%; screwdriver: 60.50\%; tire: 58.91\%; rim: 50.35\%}).
These results suggest that the proposed dataset is challenging due to the presence of small objects. We report qualitative results in Figure~\ref{fig:active_objects_qual}. We leave the investigation of more specific approaches to active object detection to future studies. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{images/concept_AA.png} \caption{The goal of the action anticipation task is to predict egocentric actions from an observation of the past.} \label{fig:concept_action_ant} \end{figure*} \subsection{EHOI Detection} \label{sec:EHOI} The goal of this task is to detect all the egocentric human-object interactions (EHOIs) in contact frames. As per the definition of EHOIs as $<$verb, objects$>$ pairs (see Equation~\ref{eq:1}), methods should detect and recognize all the \textit{active} objects in the scene, as well as the verb describing the action performed by the human. Following previous works \cite{Gupta2015VisualSR, Gkioxari2018DetectingAR}, \textit{``AP\textsubscript{role}''} is used as the evaluation measure for this task. Formally, a detected EHOI is considered a true positive if 1) the predicted object bounding box has an IoU of 0.5 or higher with respect to a ground truth annotation and 2) the predicted verb matches the ground truth. Note that only the \textit{active} object bounding box location (not the correct class) is considered in this measure. Since we also want to recognize the \textit{active} objects, we consider a variant of \textit{``AP\textsubscript{role}''} adding the following condition: 3) the predicted object class matches the ground truth. We called this measure \textit{``AP\textsubscript{verb,noun}''}\footnote{Note that \textit{``AP\textsubscript{noun}''}, which considers only conditions 1) and 3), is the same measure computed in Table~\ref{tab:active_rec}.}. We used both measures to evaluate our method. To better highlight the difference between the two measures, we will refer to \textit{``AP\textsubscript{role}''} as \textit{``AP\textsubscript{verb}''}. Moreover, we used different IoU thresholds (i.e., 0.5, 0.3 and 0.1) to compute the different \textit{``AP''} values (the true-positive criterion behind both measures is sketched below). \subsubsection{Method} Our baseline is based on the combination of a SlowFast network \cite{fan2020pyslowfast}, trained to predict the verb of the EHOI considering a video clip sampled around the contact frame following the interaction temporal annotations, and Faster-RCNN \cite{ren2015faster}, which detects and recognizes all the \textit{active} objects in the frame, as shown in Figure~\ref{fig:EHOI_arch}. Similarly to the action recognition task, we explored the potential of the multimodal signals present in the MECCANO dataset, considering different instances of the SlowFast network which rely on the RGB, Depth and Gaze signals, similarly to what is described in Section~\ref{sec:action_recognition}. Note that, in this case, the SlowFast network has been trained on the 12 verb classes of the MECCANO dataset, which describe the interactions performed by the users, rather than on the action classes as done in Section~\ref{sec:action_recognition}. Moreover, due to the difference between the \textit{action} and \textit{interaction} concepts, we used the EHOI annotations, which are slightly different in terms of temporal boundaries with respect to the action annotations, as detailed in Section~\ref{sec:EHOI_ann}. For the object detector component, we used the same model trained for the active object recognition task.
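As anticipated, the true-positive criterion behind \textit{``AP\textsubscript{verb}''} and \textit{``AP\textsubscript{verb,noun}''} can be sketched as follows. This is a minimal illustration; the dictionary layout and names are ours.
\begin{verbatim}
def iou(a, b):
    # a, b: (x, y, w, h) boxes
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def is_true_positive(pred, gt, iou_thr=0.5, require_noun=False):
    # pred, gt: dicts with keys 'box', 'verb', 'noun'.
    # require_noun=False -> AP_verb; True -> AP_{verb,noun}.
    if iou(pred["box"], gt["box"]) < iou_thr:
        return False                 # condition 1) fails
    if pred["verb"] != gt["verb"]:
        return False                 # condition 2) fails
    return (not require_noun) or pred["noun"] == gt["noun"]  # 3)
\end{verbatim}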
\begin{figure*}[t] \centering \includegraphics[width=\textwidth]{images/act_ant_qual.png} \caption{Qualitative results of the proposed approach based on RULSTM which is composed of three branches: gaze, objects and hands.} \label{fig:act_ant_qual} \end{figure*} \subsubsection{Results} Table~\ref{tab:EHOI} reports the results for the EHOI detection task. The baseline which considers only the depth signal (second row) obtained the best results for all the mAP\textsubscript{verb} measures with different values of IoU (29.10, 31.81 and 33.81). This can be motivated by the fact that the depth signal helps to focus attention on the hands, which are the most relevant cue to understand the motion and discriminate the different verbs. Moreover, the 3D network does not aim to predict the object classes, which makes the RGB signal less useful. Interestingly, the use of the depth signal and the attention map computed from the gaze signal (fifth row) does not help the recognition of the interaction (29.10 versus 28.92), while using the attention map of the gaze with the RGB signal (fourth row) improves the recognition of the interactions, obtaining the second best performance on the IoU@30 and IoU@10 measures, with mAP values of 31.56 and 33.55 respectively. If we consider the mAP\textsubscript{verb,noun} measure, in which the object class also needs to be correctly predicted, the baseline which considers the RGB and Gaze signals (fourth row) obtained the best results for all the IoU values (21.38, 24.04 and 25.04). In general, the mAP\textsubscript{verb,noun} values are lower than the mAP\textsubscript{verb} values because the correct object class must also be predicted. Qualitative results of SlowFast (Depth) + Faster-RCNN are reported in Figure~\ref{fig:ehoi_qual}. Despite the promising performance of the proposed baseline, MECCANO Multimodal leaves room for further investigation of the proposed EHOI detection task due to the challenging nature of the considered industrial-like domain. \subsection{Action Anticipation} \label{sec:Action_anticipation} The goal of the action anticipation task is to predict egocentric actions from an observation of the past (see Figure~\ref{fig:concept_action_ant}). Let \begin{math}A = [t_{s}, t_{e}]\end{math} be a video segment, where $t_{s}$ and $t_{e}$ are the start and the end times of the action respectively. The aim is to assign the correct action class $c_i \in C_a$ to the segment $A$ by observing a $t_{o}$ (observation time) seconds long video segment preceding the start time of the action $t_{s}$ by $t_{a}$ seconds (anticipation time); the exact observed interval is illustrated in the sketch reported below. Following \cite{furnari2019rulstm} we used Top-k accuracy and Mean Top-5 Recall as evaluation measures and considered different anticipation times. \begin{figure*}[] \centering \includegraphics[width=\textwidth]{images/NAO_task.png} \caption{The aim of the Next-Active Object Detection task is to detect and recognize all the objects which will be involved in a future interaction.} \label{fig:nao_task} \end{figure*} \subsubsection{Method} We adopted the RULSTM approach proposed in \cite{furnari2019rulstm, furnari2020rulstm} to address the action anticipation task. This model is composed of different branches which take as input different signals (RGB, optical flow and object-centric features). We chose this model due to its state-of-the-art performance and because it has been explicitly designed to work with multimodal observations.
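As a concrete illustration of the evaluation protocol defined above, the observed video segment for an action starting at $t_{s}$ is the interval $[t_{s} - (t_{a} + t_{o}),\, t_{s} - t_{a}]$. The following minimal Python sketch (with illustrative names, not part of the RULSTM code) computes its boundaries:
\begin{verbatim}
def observed_segment(t_start, t_anticipation, t_observation):
    # Return the (begin, end) times, in seconds, of the video
    # segment observed to anticipate an action starting at t_start.
    # The segment is t_observation seconds long and precedes the
    # action start time by t_anticipation seconds.
    end = t_start - t_anticipation
    begin = end - t_observation
    return begin, end

# Example: for an action starting at t_s = 10 s, with t_a = 1 s
# and t_o = 3 s, the model observes the clip [6 s, 9 s].
print(observed_segment(10.0, 1.0, 3.0))  # (6.0, 9.0)
\end{verbatim}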
We extended this method to exploit the multimodal signals present in the MECCANO dataset. In particular, the adopted baseline is composed of 5 branches, one for each signal: RGB, Depth, Gaze, object-centric features and hands-centric features. Depth features have been extracted by running SlowFast (Depth) (see Section~\ref{sec:baseline_act_rec}) trained on MECCANO. We computed object-centric features following \cite{furnari2019rulstm}, while gaze features have been obtained by weighting the object-centric features with the distance between the centers of the object bounding boxes and the gaze position in the image. For the hands-centric branch, we used the hand annotations of the MECCANO dataset as input. Branches are trained and fused using the procedures explained in \cite{furnari2019rulstm}. Since we extracted the RGB features using a SlowFast network \cite{feichtenhofer2018slowfast}, which encodes both spatial and temporal features, we did not consider optical flow. \begin{table}[] \caption{The table reports the ablation study performed with the adopted baseline for action anticipation. All the combinations of the 5 branches representing different input signals (RGB, Depth, Objects, Gaze and Hands) are considered. We report the Mean Top-5 Recall (mt5r).} \label{tab:ablation_action_ant} \centering \resizebox{0.8\columnwidth}{!}{% \begin{tabular}{c|c|c|c|c|c} \textbf{RGB} & \textbf{Depth} & \textbf{OBJ} & \textbf{Gaze} & \textbf{Hands} & \textbf{mt5r} \\ \hline \textbf{\checkmark} & X & X & X & X & 22.88\% \\ X & \textbf{\checkmark} & X & X & X & 14.07\% \\ \textbf{\checkmark} & \textbf{\checkmark} & X & X & X & 14.12\% \\ X & X & \textbf{\checkmark} & X & X & 29.41\% \\ \textbf{\checkmark} & X & \textbf{\checkmark} & X & X & 29.03\% \\ X & \textbf{\checkmark} & \textbf{\checkmark} & X & X & 25.26\% \\ \textbf{\checkmark} & \textbf{\checkmark} & \textbf{\checkmark} & X & X & 21.74\% \\ X & X & X & \textbf{\checkmark} & X & 29.79\% \\ \textbf{\checkmark} & X & X & \textbf{\checkmark} & X & 29.63\% \\ X & \textbf{\checkmark} & X & \textbf{\checkmark} & X & 24.04\% \\ X & X & \textbf{\checkmark} & \textbf{\checkmark} & X & 31.46\% \\ \textbf{\checkmark} & \textbf{\checkmark} & X & \textbf{\checkmark} & X & 29.42\% \\ \textbf{\checkmark} & X & \textbf{\checkmark} & \textbf{\checkmark} & X & 32.01\% \\ \textbf{\checkmark} & \textbf{\checkmark} & \textbf{\checkmark} & \textbf{\checkmark} & X & 28.05\% \\ X & X & X & X & \textbf{\checkmark} & 30.06\% \\ \textbf{\checkmark} & X & X & X & \textbf{\checkmark} & 29.86\% \\ X & \textbf{\checkmark} & X & X & \textbf{\checkmark} & 24.43\% \\ X & X & \textbf{\checkmark} & X & \textbf{\checkmark} & 31.13\% \\ X & X & X & \textbf{\checkmark} & \textbf{\checkmark} & 31.33\% \\ \textbf{\checkmark} & \textbf{\checkmark} & X & X & \textbf{\checkmark} & 29.89\% \\ \textbf{\checkmark} & X & \textbf{\checkmark} & X & \textbf{\checkmark} & 31.31\% \\ \textbf{\checkmark} & X & X & \textbf{\checkmark} & \textbf{\checkmark} & 30.97\% \\ X & \textbf{\checkmark} & \textbf{\checkmark} & X & \textbf{\checkmark} & 27.69\% \\ X & \textbf{\checkmark} & X & \textbf{\checkmark} & \textbf{\checkmark} & 26.84\% \\ X & X & \textbf{\checkmark} & \textbf{\checkmark} & \textbf{\checkmark} & \textbf{32.25\%} \\ \textbf{\checkmark} & \textbf{\checkmark} & \textbf{\checkmark} & X & \textbf{\checkmark} & 29.21\% \\ \textbf{\checkmark} & \textbf{\checkmark} & X & \textbf{\checkmark} & \textbf{\checkmark} & 28.11\% \\ \textbf{\checkmark} & X & \textbf{\checkmark} & \textbf{\checkmark} & \textbf{\checkmark} & 31.30\% \\ X &
\textbf{\checkmark} & \textbf{\checkmark} & \textbf{\checkmark} & \textbf{\checkmark} & 27.77\% \\ \textbf{\checkmark} & \textbf{\checkmark} & \textbf{\checkmark} & \textbf{\checkmark} & \textbf{\checkmark} & 24.05\% \end{tabular} } \end{table} \begin{table}[] \caption{Results obtained for the action anticipation task considering different values of $t_a$ (anticipation time, in seconds).} \label{tab:act_ant} \resizebox{\columnwidth}{!}{% \begin{tabular}{lcccccccc} \multicolumn{1}{c}{\textbf{$t_a$}} & \textbf{2} & \textbf{1.75} & \textbf{1.50} & \textbf{1.25} & \textbf{1} & \textbf{0.75} & \textbf{0.50} & \textbf{0.25} \\ \hline \multicolumn{1}{l|}{\textbf{Top-1 Acc.}} & 23.37 & 23.48 & 23.30 & 23.97 & 24.08 & 24.50 & 25.60 & 28.87 \\ \multicolumn{1}{l|}{\textbf{Top-5 Acc.}} & 54.65 & 55.99 & 56.56 & 57.73 & 58.23 & 59.96 & 61.31 & 63.40 \\ \multicolumn{1}{l|}{\textbf{M. Top-5 Rec.}} & 18.57 & 18.73 & 21.24 & 21.26 & 22.38 & 24.67 & 24.93 & 26.01 \end{tabular} } \end{table} \subsubsection{Results} We performed an ablation study considering several instances of the adopted baseline, focusing on different combinations of the five branches (see Table~\ref{tab:ablation_action_ant}). Results show how hard it is to choose a combination of different signals to solve this task. For example, considering the combination of RGB and Depth signals (third row) decreases the performance with respect to using only the RGB signal (first row). Also, using all signals simultaneously (last row) does not guarantee the best performance, highlighting that it is not sufficient to use all the available signals to solve this task in this challenging environment. The best approach, which obtained a Mean Top-5 Recall of 32.25\% on the Validation Set of the MECCANO dataset, is composed of three branches which take as input the gaze signal, the object-centric features and the hands-centric features. Table~\ref{tab:act_ant} reports the results obtained on the Test set of the MECCANO dataset. We evaluated this baseline considering different anticipation times ($t_a$) ranging from 2~seconds to 0.25 seconds. Qualitative results are shown in Figure~\ref{fig:act_ant_qual}. \subsection{Next-Active Object Detection} \label{sec:NAO} The aim of the Next-Active Object Detection task is to detect and recognize all the objects which will be involved in a future interaction (see Figure~\ref{fig:nao_task}). Let \begin{math}I = [t_{s}, t_{e}]\end{math} be an EHOI segment, where $t_{s}$ and $t_{e}$ are the start and the end times of the interaction respectively, and let \begin{math} O_{act} = \{o_1, o_2, ..., o_n\} \end{math} be the set of active objects. The goal is to predict the set of active objects involved in the interaction $I$, \begin{math} O_I = \{o_1, o_2, ..., o_m\} \end{math} with $O_I \subseteq O_{act}$, and their bounding boxes \begin{math} B_I = \{b_{o_1}, b_{o_2}, ..., b_{o_m}\} \end{math}, where $b_{o_i} = (x,y,w,h)$ is the bounding box related to the object $o_i$, by observing a $t_{o}$ (observation time) seconds long temporal segment preceding the start time of the interaction $t_{s}$ by $t_{a}$ seconds (anticipation time). For evaluation purposes we used the mean Average Precision (mAP) measure, which considers both the class and the accuracy of the spatial detection of the objects. \subsubsection{Method} We explored the task by adopting different simple baselines based on the Faster R-CNN object detector \cite{ren2015faster}.
The first baseline has been trained using only the active object annotations (the same ones used for the active object detection and recognition task). The second baseline has been trained using only the next-active object annotations. The third baseline has been trained with the active object annotations and finetuned with the next-active object annotations. The fourth baseline has been trained using both active and next-active object annotations. \begin{table}[] \caption{Results obtained for the Next-active object detection task.} \label{tab:NAO} \resizebox{\columnwidth}{!}{% \begin{tabular}{lcc} \textbf{Method} & \textbf{mAP} & \textbf{mAP50} \\ \hline \multicolumn{1}{l|}{Faster R-CNN (active objects)} & \underline{14.10} & \underline{26.00} \\ \multicolumn{1}{l|}{Faster R-CNN (next-active objects)} & 9.90 & 18.20 \\ \multicolumn{1}{l|}{Faster R-CNN (active + finetuning next-active objects)} & 11.60 & 19.90 \\ \multicolumn{1}{l|}{Faster R-CNN (active + next-active objects)} & \textbf{14.20} & \textbf{26.40} \end{tabular} } \end{table} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{images/qual_detection_nao.png} \caption{Qualitative results of the best approach based on Faster-RCNN. The dashed bounding box indicates a ground truth object which has not been detected.} \label{fig:qual_nao} \end{figure} \subsubsection{Results} Table~\ref{tab:NAO} shows the obtained results. Using only the next-active object annotations is not enough to obtain reasonable performance on the prediction of the next-active objects. Training the model using both active and next-active object annotations achieves the best performance considering both the mAP (14.20) and the mAP50 (26.40). We report qualitative results in Figure~\ref{fig:qual_nao}. In general, the task needs to be explored in depth due to the challenging nature of the MECCANO dataset. \section{Introduction} \label{sec:intro} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{images/Toy_Model.png} \caption{Toy model built by subjects interacting with 2 tools, 49 components and the instruction booklet.} \label{fig:toy_model} \end{figure} Understanding human behavior from an egocentric perspective makes it possible to build intelligent systems able to assist humans equipped with a camera (e.g., Microsoft Hololens 2\footnote{\url{https://www.microsoft.com/en-us/hololens}}, Vuzix Blade\footnote{\url{https://www.vuzix.com/products/blade-smart-glasses}}, Nreal Light\footnote{\url{https://www.nreal.ai/light/}}, etc.) in many contexts, including cultural sites \cite{RagusaPRL, cucchiara2014visions, vedi2019}, home scenarios \cite{You-Do_Damen_14} and industrial environments \cite{DeepVisionShield_Colombo19}. For example, recognizing human-object interactions in an industrial environment from First Person Vision (FPV) can be useful to monitor the use of machines, to schedule calibration operations, to suggest to the operator how to use a specific machine or object, as well as to issue notifications about actions that may be missed in a production pipeline \cite{miss_actions_shapiro}. Furthermore, anticipating what a worker will do and which objects they will interact with provides information to improve safety in a factory, for example by notifying the user with an alert in case a dangerous action or interaction is anticipated.
Many recent works have investigated human behavior understanding considering different tasks such as action recognition \cite{feichtenhofer2018slowfast, TwoStream_convolutional_action_Zisserman_14, Two-Stream_Zisserman, Zhou2018TemporalRR, kazakos2019TBN, TSM_2019}, object detection \cite{girshick2014rich, girshick2015fast, ren2015faster, yolov3}, human-object interaction detection \cite{Gkioxari2018DetectingAR, Gupta2015VisualSR, Hands_in_contact_Shan20, Nagarajan2020EGOTOPOEA}, action anticipation \cite{Felsen_what_will_happen_17, Gao2017REDRE, furnari2019rulstm, slowfast_rulstm_ballan}, as well as the detection of the next active objects \cite{Bertasius2017FirstPA, Furnari2017NextactiveobjectPF, JIANG2021212, Ego4D2021}. Advances in these fields have been obtained thanks to the availability of public datasets \cite{Imagenet, lin2014COCO, Gupta2015VisualSR, HICO_Chao} considering different contexts such as kitchens \cite{Damen2018EPICKITCHENS,Damen2020RESCALING, Li2018_EGTEA-GAZE+, Torre2009CMU-MMAC}, homes and offices \cite{Ramanan_12_ADL, thu-read_17, You-Do_Damen_14, Ortis2017OrganizingEV}, different daily-living scenarios \cite{Ego4D2021}, and relying on different modalities such as depth \cite{thu-read_17} and gaze \cite{Li2018_EGTEA-GAZE+}. While these contexts provide interesting test-beds to study human behavior in general, the industrial domain (e.g., factories, building sites, mechanical workshops, etc.) has never been explored from FPV. This is mainly due to the fact that data acquisition in industrial domains is difficult because of privacy issues and the need to protect industrial secrets \cite{privacy_protection_Yee}. Nowadays many wearable glasses are able to capture different signals such as IMU data and depth maps, as well as pupil fixations and audio (e.g., Microsoft Hololens2, Nreal Light, Magic Leap). Multimodal data are particularly important because they can be used to represent the same observed scene with complementary information. Moreover, each different signal provides additional information about the observed environment and the camera wearer, such as semantic information (RGB), 3D information of the environment and the objects (depth), as well as the user's attention (gaze). Despite the availability of such multimodal signals in many wearable platforms available on the market, current datasets in egocentric vision seldom include rich multimodal signals. In this paper, we present MECCANO Multimodal, which comprises multimodal egocentric data acquired in an industrial-like domain. This dataset extends the previous MECCANO egocentric video dataset \cite{ragusa2020meccano} considering two extra modalities (i.e., depth and gaze signals), a new set of annotations (i.e., temporal action annotations and spatial bounding boxes of hands and next-active objects) and a new benchmark addressing 5 different tasks aimed at understanding human behavior by exploiting different signals (i.e., RGB, depth and gaze). To collect the dataset, we asked 20 subjects to build a toy model of a motorbike (see Figure~\ref{fig:toy_model}) which is composed of 49 components with different shapes and sizes. Similarly to what happens in an industrial scenario, the subjects interact with tools such as a screwdriver and a wrench, as well as with tiny objects such as screws and bolts, while executing a task involving sequential actions (e.g., take wrench, tighten bolt, put down wrench).
Despite the fact that this scenario is a simplification of what can be found in real industrial settings, it is still fairly complex to model, as our experiments show. The dataset has been acquired in two countries (Italy and the United Kingdom) using a custom headset. The multimodality is characterized by the gaze signal, depth maps and RGB videos acquired simultaneously with two different devices (additional details are reported in Section~\ref{sec:dataset}). We acquired 20 RGB videos associated with the 20 depth videos with an Intel RealSense SR300\footnote{https://ark.intel.com/content/www/us/en/ark/products/92329/intel-realsense-camera-sr300.html}. In addition, we captured the gaze signal using a Pupil Core device\footnote{https://pupil-labs.com/products/core/} and synchronized it with the RGB videos. MECCANO has been annotated to address different tasks related to human behavior understanding. Specifically, we provide temporal annotations indicating the start and the end times of each action performed by the participants and the contact time, which indicates the first frame of contact between the hand and the object, i.e., when the object changes its state from \textit{passive} to \textit{active}. We also spatially annotated the objects involved in the interactions (i.e., active objects) and the hands of the subjects with bounding boxes. Moreover, starting from the active object annotations, we labeled the same objects in the past to explore the task of predicting the future intentions of subjects by detecting and recognizing the next active objects (i.e., the next objects the user is going to interact with). The dataset is publicly released at the following link: \url{https://iplab.dmi.unict.it/MECCANO/}. To highlight the usefulness of the proposed multimodal dataset, we release baseline experiments related to five fundamental tasks focused on understanding human behavior from first person vision in the considered industrial-like context: 1) Action Recognition, 2) Active Objects Detection and Recognition, 3) Egocentric Human-Objects Interaction Detection, 4) Action Anticipation and 5) Next-Active Objects Detection and Recognition. Some of these tasks have been treated in the state of the art, while egocentric human-object interaction (EHOI) detection and next-active object (NAO) detection and recognition are still underexplored from the egocentric point of view. We revisit these tasks considering the FPV paradigm in Sections~\ref{sec:EHOI} and \ref{sec:NAO}, respectively. Results demonstrate that solving these problems in industrial settings from an egocentric point of view is challenging despite the availability of multimodal signals. In sum, the contributions of this work are as follows: 1) we present MECCANO Multimodal, a new challenging egocentric multimodal dataset related to the industrial domain; 2) we study in detail the HOI task considering the Egocentric Vision paradigm (EHOI); 3) we study the Next-Active Object Detection task from the egocentric perspective; 4) we propose a benchmark aimed at studying human behavior in the considered industrial-like scenario exploring five different tasks, showing that the current state-of-the-art approaches are not sufficient to solve the considered problems in industrial settings. The remainder of the paper is organized as follows. In Section~\ref{sec:related_work} we discuss related work. The proposed MECCANO Multimodal dataset is presented in Section~\ref{sec:dataset}.
Section~\ref{sec:benchmark} describes the benchmark and discusses the results. We conclude the paper and discuss insights for future work in Section~\ref{sec:conclusion}. \section{Related Work} \label{sec:related_work} Our work is related to different lines of research, including the collection of benchmark datasets, action recognition, HOI detection, Egocentric HOI detection, action anticipation and next-active object detection. The following sections discuss the relevant works belonging to the aforementioned research areas. \subsection{Datasets for Human Behavior Understanding} Different third person vision datasets have been proposed to study human behavior understanding exploiting the tasks of Human-Object Interaction (HOI) detection \cite{Gupta2015VisualSR, HICO_Chao, PPDM_liao2019} and action recognition \cite{caba2015activitynet, Kinetics_2017, Kinetics_Carreira2019ASN, Something_Something_Goyal}. These datasets are often composed of RGB images or videos as well as other modalities such as depth \cite{Li2010ActionRB, Wang2012MiningAE, Sung_CAD60, Koppula_cad120, Chen_UTD, Rahmani2016HistogramOO, Hu_3DHOI, Liu2020NTUR1}, especially after the release of the Microsoft Kinect \cite{zhang_kinect}. \cite{Gupta2015VisualSR} annotated the COCO dataset~\cite{lin2014COCO} with verbs (V-COCO) to study the problem of detecting HOI. V-COCO includes 10,346 images annotated with 26 actions. HICO-Det \cite{HICO_Chao} is a large-scale dataset composed of static images used as a benchmark to study the task of HOI detection. This dataset includes 47,766 images and has been annotated with 117 verbs and 80 objects (same objects as COCO). While these datasets focused on common and general actions, the HOI-A dataset~\cite{PPDM_liao2019} focused on a subset of actions, such as \textit{smoking cigarette} or \textit{talk on mobile phone}, which can be considered dangerous actions while driving. The dataset is composed of 38,668 images annotated with 10 verbs and 11 object classes. ActivityNet \cite{caba2015activitynet} is a large-scale dataset composed of videos depicting activities that are related to how humans spend their time in their daily lives, such as \textit{walking the dog} or \textit{hand-washing clothes}. The dataset is composed of a total of 849 video hours including 203 activity classes. \cite{Kinetics_2017} and \cite{Kinetics_Carreira2019ASN}~presented Kinetics, which is a third person video dataset related to human actions. The dataset is composed of 700 human action classes which include human-object interactions, such as \textit{play instrument}, and human-human interactions, such as \textit{shake hands}. For each action, there are at least 600 video clips taken from YouTube videos. The authors of~\cite{Something_Something_Goyal} proposed Something-Something, a video dataset which includes low-level concepts (``\textit{something-something}'') to represent simple everyday aspects of the world. It contains 108,499 short videos (from 2 to 6 seconds) annotated with 174 textual descriptions such as ``turning \textit{something} upside down'' or ``spilling \textit{something} next to \textit{something}''. Other works have considered the egocentric scenario investigating different domains. Egocentric activities related to daily living have been studied by the authors of~\cite{ADL_PirsiavashR12}, who proposed the ADL dataset. The dataset is composed of one million frames acquired by 20 people performing a set of 18 actions of daily activities in their own apartments.
The 3D~structure of the scenes has been explored by the authors of~\cite{moghimi_ego_RGBD}, who proposed a dataset composed of 5 sequences acquired using an RGB-D camera by 4 different users. The authors of~\cite{thu-read_17} proposed a video-based RGB-D egocentric dataset (THU-READ) including different types of daily-life actions. The egocentric videos have been captured in 5 scenarios, such as laboratory, bathroom, conference room, dormitory and restaurant, by 8 different subjects performing 40 different actions. The problem of 3D hand-object action recognition has been addressed by the authors of~\cite{GarciaHernando2018FirstPersonHA}, who released the Daily hand-object actions dataset containing 1175 videos belonging to 45 action categories. The dataset has been acquired by 6 actors over 3 different scenarios. A total of 105,459 RGB-D frames have been acquired and annotated with hand pose and action categories. Some works explored the kitchen domain from the first person point of view. Among these, the authors of~\cite{Torre2009CMU-MMAC} released the CMU Multi-Modal Activity Database (CMU-MMAC) to study human activities in a kitchen environment. The authors built a kitchen and acquired egocentric videos from 5 different subjects cooking 5 recipes. They captured RGB videos using different cameras, audio and motion capture information. EPIC-Kitchens and its extensions \cite{Damen2018EPICKITCHENS, Damen2020Collection, Damen2020RESCALING} are a series of egocentric datasets focused on unscripted activities related to kitchens. In particular, EPIC-Kitchens-55 \cite{Damen2018EPICKITCHENS} is composed of 432 videos annotated with 352 object classes and 125 different verb classes. EPIC-Kitchens-100 \cite{Damen2020Collection} is an extension of EPIC-Kitchens-55 in terms of videos (700), environments (45) and hours (100). Along with the dataset, the authors proposed 6 challenges to study human behavior understanding in kitchens: action recognition, action detection, action anticipation, domain adaptation for action recognition, object detection and multi-instance retrieval. The authors of~\cite{Li2018_EGTEA-GAZE+} studied egocentric video action recognition, considering both RGB and gaze signals to determine what a person is doing (action recognition) and where they are looking (gaze estimation). They presented the EGTEA Gaze+ dataset, where 32 subjects perform 7 different meal preparation tasks in different kitchens. EGTEA Gaze+ is composed of 106 action classes and includes gaze information collected at every frame. \\ Depth and gaze are not the only additional signals considered in past works. Sensor data such as accelerometer and gyroscope measurements have also been considered to recognize egocentric activities. The authors of~\cite{Song_sensor_data} captured a dataset of egocentric videos using a Google Glass, which acquired RGB videos and sensor information. In particular, 200 short sequences have been acquired by 20 different subjects who performed daily activities. \cite{Rogez_object_grasp} focused on object manipulation and proposed the Grasp UNderstanding (GUN-71) dataset, which is composed of 12,000 RGB-D images labeled with 71 grasp classes. The videos have been acquired by 8 different subjects who performed different grasps on personal objects in 5 different houses. The camera used is a chest-mounted Intel Senz3D\footnote{https://it.creative.com/p/archived-products/blasterx-senz3d}, which is a webcam paired with a depth sensor.
Beyond the aforementioned datasets, which considered only one extra modality in addition to the RGB signal, the authors of~\cite{Kothari2020GazeinwildAD} proposed the Gaze-in-Wild dataset, which comprises both gaze and depth streams. This dataset has been acquired by 19 participants who performed 4 activities: indoor navigation, ball catching, object search and tea making. They used a Pupil Labs eye tracker to acquire the gaze signal, an MPU sensor to obtain the IMU data and a ZED stereo camera to acquire the depth maps. Recently, Ego4D, a massive-scale egocentric video dataset, has been released \cite{Ego4D2021}. It has been acquired by 931 camera wearers from 9 different countries. Ego4D comprises videos, audio, 3D meshes of the environment, eye gaze, stereo and videos acquired by multiple egocentric cameras. The data has been collected in multiple domains and comprises different activities such as people playing cards, working at a desk, cleaning the garden, cooking something or practicing a musical instrument. In addition to the dataset, \cite{Ego4D2021} presented five benchmarks focused on egocentric perception in the past, present and future. Inspired by the first version of the MECCANO dataset \cite{ragusa2020meccano}, \cite{Sener2022Assembly101AL} proposed Assembly101, which is a procedural activity dataset comprising multi-view videos in which subjects assemble and disassemble toys. Contextually, they benchmarked three action understanding tasks (i.e., action recognition, action anticipation and temporal segmentation) and proposed a new task related to mistake detection. Despite the similar setup, differently from Assembly101 we focus on the multimodal nature of the acquired data. Moreover, they acquired egocentric videos with monochrome cameras using a device similar to Oculus Quest VR. In addition, Assembly101 can only address tasks related to the actions performed by the users. It is worth noting that previous egocentric datasets have considered scenarios related to kitchens, offices, and daily-life activities and that they have generally tackled the action recognition task rather than EHOI detection. Table~\ref{tab:datasets} compares the aforementioned datasets with respect to the proposed MECCANO Multimodal dataset, which is a substantial extension of the previous MECCANO dataset \cite{ragusa2020meccano}. As shown in Table~\ref{tab:datasets}, MECCANO Multimodal is the first egocentric multimodal dataset comprising both gaze and depth signals acquired in an industrial-like domain. Moreover, it has been explicitly annotated to tackle, with different modalities, different tasks useful to build a real system able to support humans in the industrial domain: 1) Action Recognition, 2) Active Object Detection and Recognition, 3) Egocentric Human-Object Interaction, 4) Action Anticipation and 5) Next-Active Object Detection. \begin{table*}[] \caption{Comparison of MECCANO with other datasets. AA: Action Anticipation. AD: Action Detection. AOD: Active Object Detection. AOR: Active Object Recognition. AR: Action Recognition. AVD: Audio-Video Diarization. AVL: Audio-Video Localization. AVT: Audio-Video Transcription. DA-AR: Domain Adaptation for Action Recognition. EHOI: EHOI Detection. FHP: Future Hand Prediction. HOI: HOI Detection. HPE: Hand Pose Estimation. LAM: Looking-at-Me. LOC: Localization. MD: Mistake Detection. MQ: Moment Queries. MR: Multi-Instance Retrieval. NAO: Next-Active Objects Detection. NLQ: Natural Language Queries. OD: Object Detection. OSCC: Object State Change Classification.
PNRTL: Point-of-No-Return Temporal Localization. SCOD: State Change Object Detection. S\&LTA: Short and Long Term Anticipation. TAS: Temporal Action Segmentation. TTM: Talking-to-Me. VQ2D\&3D: Visual Queries with 2D\&3D Localization.} \label{tab:datasets} \resizebox{\textwidth}{!}{% \setlength\tabcolsep{2pt} \begin{threeparttable} \begin{tabular}{lcccllcccccccc} \multicolumn{1}{c}{\textbf{Dataset}} & \textbf{Settings} & \textbf{EGO?} & \textbf{Video?} & \multicolumn{1}{c}{\textbf{Signals}} & \multicolumn{1}{c}{\textbf{Tasks}} & \textbf{Year} & \textbf{Frames} & \textbf{Sequences} & \textbf{AVG. video duration} & \textbf{Action classes} & \textbf{Object classes} & \textbf{Object BBs} & \textbf{Participants} \\ \hline MECCANO Multimodal & Industrial-like & \checkmark & \checkmark & RGB, depth, gaze & \begin{tabular}[c]{@{}l@{}}EHOI, AR, AOD, AOR,\\ AA, NAO\end{tabular} & 2022 & 299,376 & 20 & 20.79 min & 61 & 20 & 307,601 & 20 \\ \hline Assembly101 \cite{Sener2022Assembly101AL} & Industrial-like & \checkmark & \checkmark & RGB, multi-view, 3D hand-pose & AR, AA, TAS, MD & 2022 & 20M & 362 & 7.10 min & 1380 & 90 & 0 & 53 \\ EGO4D\tnote{1} \cite{Ego4D2021} & Multi Domain & \checkmark & \checkmark &\begin{tabular}[c]{@{}l@{}}RGB, Audio, 3D environments, \\ stereo, gaze, IMU, multi-view\end{tabular} &\begin{tabular}[c]{@{}l@{}}VQ2D\&3D, NLQ, MQ, PNRTL, \\ SCOD, OSCC, AVD, AVT, AVL,\\ LAM, TTM, S\&LTA, FHP\end{tabular} & 2022 & 418M\tnote{2} & 9650 & 24.11 min & 113\tnote{3} & 449\tnote{3} & 295,104\tnote{4}& 931 \\ EPIC-KITCHENS-100 \cite{Damen2020RESCALING} & Kitchens & \checkmark & \checkmark & RGB & AR, AD, AA, DA-AR, MR & 2021 & 20M & 700 & N/A & 97 & 300 & N/A & 37 \\ Gaze-in-Wild \cite{Kothari2020GazeinwildAD} & Daily activities & \checkmark & \checkmark & RGB, depth, gaze & AR & 2020 & N/A & N/A & N/A & 4 & 0 & 0 & 19 \\ EGTEA Gaze+ \cite{Li2018_EGTEA-GAZE+} & Kitchens & \checkmark & \checkmark & RGB, gaze & AR & 2018 & 2.4M & 86 & 0.05 min & 106 & 0 & 0 & 32 \\ Daily Hand-Object Actions \cite{GarciaHernando2018FirstPersonHA} & Daily activities & \checkmark & \checkmark & RGB, depth & AR, HPE & 2018 & 105,459 & 1175 & 0.05 min & 45 & 26 & N/A & 6 \\ THU-READ \cite{thu-read_17}& Daily activities & \checkmark & \checkmark & RGB, depth & AR & 2017 & 343,626 & 1920 & 7.44 min & 40 & 0 & 0 & 8 \\ Multimodal Egocentric Activity \cite{Song_sensor_data} & Daily activities & \checkmark & \checkmark & RGB, sensor data & AR & 2016 & 30,000 & 200 & 0.25 min & 20 & 0 & 0 & 20 \\ GUN-71 \cite{Rogez_object_grasp} & Daily activities & \checkmark & \checkmark & RGB, depth & AR & 2015 & 12,000 & N/A & N/A & 71 & 28 & 0 & 8 \\ Wearable Computer Vision System \cite{moghimi_ego_RGBD} & Daily activities & \checkmark & \checkmark & RGB, depth & AR & 2014 & N/A & 5 & N/A & 12 & 0 & 0 & 4 \\ ADL \cite{ADL_PirsiavashR12} & Daily activities & \checkmark & \checkmark & RGB & AR, AOR & 2012 & 1.0M & 20 & 30.0 min & 32 & 42 & 137,780 & 20 \\ CMU \cite{Torre2009CMU-MMAC} & Kitchens & \checkmark & \checkmark & RGB & AR & 2009 & 200,000 & 16 & 15.0 min & 31 & 0 & 0 & 16 \\ \hline NTU RGB+D 120 \cite{Liu2020NTUR1} & General & X & \checkmark & RGB, depth & AR & 2020 & 8M & 114,480 & N/A & 120 & 0 & 0 & 106 \\ UTD-MHAD \cite{Chen_UTD} & General & X & \checkmark & RGB, depth, sensor data & AR & 2017 & N/A & 861 & N/A & 27 & 0 & 0 & 8 \\ SYSU 3D Human-Object Interaction \cite{Hu_3DHOI} & General & X & \checkmark & RGB, depth & AR & 2017 & N/A & 480 & N/A & 12 & 0 & 0 & 40 \\ Something-Something \cite{Something_Something_Goyal}
& General & X & \checkmark & RGB & AR, HOI & 2017 & 5.2M & 108,499 & 0.07 min & 174 & N/A & 318,572 & N/A \\ Kinetics \cite{Kinetics_2017} & General & X & \checkmark & RGB & AR & 2017 & N/A & 455,000 & 0.17 min & 700 & 0 & 0 & N/A \\ UWA3D Multiview Activity \cite{Rahmani2016HistogramOO}& General & X & \checkmark & RGB, depth & AR & 2016 & N/A & 1200 & N/A & 30 & 0 & 0 & 10 \\ ActivityNet \cite{caba2015activitynet} & Daily activities & X & \checkmark & RGB & AR & 2015 & 91.6M & 19,994 & 2.55 min & 200 & N/A & N/A & N/A \\ CAD-120 \cite{Koppula_cad120} & General & X & \checkmark & RGB, depth & AR, AOR & 2013 & 61,585 & 120 & 0.28 min & 10 & 12 & N/A & 4 \\ MSRDailyActivity3D \cite{Wang2012MiningAE} & Daily activities & X & \checkmark & RGB, depth & AR & 2012 & N/A & 320 & N/A & 16 & 0 & 0 & N/A \\ Human Activity Detection \cite{Sung_CAD60} & Daily activities & X & \checkmark & RGB, depth & AR & 2011 & N/A & N/A & 0.75 min & 12 & 0 & 0 & 4 \\ MSR-Action3D \cite{Li2010ActionRB} & General & X & \checkmark & depth & AR & 2010 & 23,797 & 402 & 0.07 min & 20 & 0 & 0 & 7 \end{tabular} \begin{tablenotes} \item[1] The statistics have been obtained on May 15, 2022. \item[2] The number of frames has been obtained considering the canonical videos acquired at 30 fps. \item[3] This number has been obtained from the ``Long-Term Anticipation'' task considering both Training and Validation sets. \item[4] The number of bounding boxes has been obtained considering the ``State Change Object Detection'' task. \end{tablenotes} \end{threeparttable} } \end{table*} \subsection{Human Behavior Understanding Tasks} In this section we discuss the state of the art, focusing on relevant tasks which can be exploited to understand human behavior. \subsubsection{Action Recognition} Video action recognition has been thoroughly studied by researchers, especially from the third person view. Some works \cite{Learning_actions_movies_Laptev, Human_detection_Flow_Schmid_06, TwoStream_convolutional_action_Zisserman_14, Two-Stream_Zisserman, temporal_segNet} combined classic approaches based on hand-crafted features, such as optical flow, with deep networks, representing the motion of actions using two-stream networks. 3D ConvNets are commonly used to encode both spatial and temporal dimensions in a unified way \cite{ Conv_spatio-temporal_Taylor, Learning_spatio-temporal_Paluri, Carreira2017QuoVA}. Long-term filtering and pooling approaches focus on representing actions over their full temporal extent \cite{Long-term_action_Schmid, Two-Stream_Zisserman, temporal_segNet, Zhou2018TemporalRR}. Other works separately control the spatial and temporal dimensions by factoring convolutions into separate 2D spatial and 1D temporal filters \cite{Spatiotemporal_residual_action, closer_spatiotemp_action, rethinking_spatiotemporal, Learning_spatiotemporal_pseudo}. SlowFast networks \cite{feichtenhofer2018slowfast} avoid using pre-computed optical flow and encode the motion of actions with a ``fast'' pathway (which operates at a high frame rate), while a ``slow'' pathway simultaneously captures semantics (operating at a low frame rate). The authors of~\cite{Zhou2018TemporalRR} introduced a network module called Temporal Relation Network (TRN) to learn temporal relations between video frames at multiple time scales. The authors of~\cite{TSM_2019} proposed a temporal shift module (TSM). This module allows 2D architectures to obtain comparable performance to 3D CNNs.
Inspired by feature selection methods, the authors of~\cite{Feichtenhofer2020X3DEA} presented a family of video networks (X3D) which expand a 2D image classification architecture into a spatiotemporal one by expanding along multiple possible axes such as space, time, width and depth. The authors of~\cite{Hussein2019TimeceptionFC} revisited the definition of \textit{activity} and restricted it to \textit{complex action}, i.e., a set of simple one-actions which compose the activity (e.g., \textit{cooking a meal} can be considered as a set of one-actions: \textit{get, cook, put and wash}). Recently, the action recognition task has been addressed considering multi-modal signals. For example, the authors of~\cite{Shi_2021_ICCV} considered audio, visual and textual information to recognize actions using graph convolutional neural networks (GCN). Previous works also investigated egocentric action recognition by adapting third person vision approaches to the first person scenario \cite{TSM_2019, Zhou2018TemporalRR, feichtenhofer2018slowfast, Damen2018EPICKITCHENS}. In this work, we assess the performance of state-of-the-art action recognition methods on the proposed MECCANO dataset, considering the SlowFast network \cite{feichtenhofer2018slowfast} as a baseline. \subsubsection{HOI Detection} Previous works have investigated HOI detection mainly from a third person vision point of view. The authors of~\cite{Gupta2015VisualSR} were the first to explore the HOI detection task, annotating the COCO dataset \cite{lin2014COCO} with verbs. The authors proposed a method to detect people performing actions, able to localize the objects involved in the interactions in still images. The authors of~\cite{Gkioxari2018DetectingAR} proposed a human-centric approach based on a three-branch architecture (InteractNet) instantiated according to the classic definition of HOI in terms of a $<$human, verb, object$>$ triplet. This approach analyzes each human-object pair detected with an object detector~\cite{ren2015faster}, using a heat map to represent their relationship. Some works~\cite{Qi2018LearningHI, Chao2018LearningTD, RPN_Zhou} explored HOI detection using graph convolutional neural networks after detecting humans and objects in the scene. Recent works \cite{PPDM_liao2019, Wang_InteractionPoints_2020_CVPR} represented the relationship between humans and objects as an intermediate point connecting the centers of the human and object bounding boxes. The aforementioned works addressed the problem of HOI detection in the third person vision domain. In this work, we look at the task of HOI detection from an egocentric perspective, considering the proposed MECCANO Multimodal dataset. \subsubsection{Tasks related to EHOI Detection} The problem of Human-Object Interaction (HOI) detection has been systematically investigated only from Third Person Vision. Previous works have considered similar tasks related to Egocentric Human-Object Interaction (EHOI) detection, due to the limited availability of egocentric datasets explicitly labelled for this task. Some studies have modeled the relations between entities for interaction recognition as object affordances~\cite{Hotspots_Grauman19, Nagarajan2020EGOTOPOEA, affordance_Fang18}. Other studies tackled tasks related to EHOI recognition by proposing hand-centric methods \cite{Cai2016UnderstandingHM, Lending_Hand_Bambach_15, Hands_in_contact_Shan20, kwon2021h}.
The authors of~\cite{Hands_in_contact_Shan20} proposed to detect and localize hands in the scene, distinguishing left from right hands. Objects are classified into two classes: \textit{active} or \textit{passive}. In particular, if an object is in contact with at least one hand, it is considered \textit{active}; otherwise, it is considered \textit{passive}. The authors of~\cite{Li_Adaptive_2020_CVPR} proposed to search network structures with differentiable architecture search, constructing adaptive structures for different videos to facilitate interaction modeling. The method has been evaluated on the Something-Something dataset \cite{Something_Something_Goyal}, which contains egocentric-like videos. The authors of~\cite{kwon2021h} proposed a unified approach to recognize hand-object interactions by simultaneously predicting the 3D pose of the two interacting hands and the 6D pose of the manipulated objects. Although these works have considered tasks related to human-object interaction from an egocentric point of view, the EHOI detection task has not yet been studied systematically. In this work, we formalize the task of EHOI detection and focus on this problem in the industrial domain, considering the proposed MECCANO dataset. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{images/Meccano_Dataset.png} \caption{Examples of data acquired by the 20 different participants in two countries (Italy, United Kingdom).} \label{fig:dataset} \end{figure*} \subsubsection{Action Anticipation} The task of action anticipation has been investigated from the third person point of view \cite{Hierarchical_Repr_Savarese_14, Felsen_what_will_happen_17, Gao2017REDRE, Zeng2017VisualFB}. \cite{Hierarchical_Repr_Savarese_14} proposed a new representation of actions called \textit{hierarchical movemes} to anticipate future actions from still images or short video clips. This representation encodes the atomic components of human movements before an action is executed, representing the actions with high semantic and temporal granularity. The authors of~\cite{Felsen_what_will_happen_17} proposed a generic framework for forecasting future events in team sports videos related to water polo and basketball events. The authors of~\cite{Gao2017REDRE} considered multiple history representations of the past to anticipate a sequence of future representations. In recent years, the task has also been studied from the first person perspective. The authors of~\cite{robot-centric-anticipation_15} explored this task with the aim of assisting humans who cooperate with a robot. In particular, they anticipate future actions considering videos acquired from the point of view of a robot which interacts with a human. A series of works \cite{furnari2019rulstm, furnari2020rulstm, slowfast_rulstm_ballan} addressed the task using LSTM networks to encode the features related to the past. The authors of~\cite{Roy_2022_WACV} focused on the goal representation to predict the next action from the first person view. In this work, we adopt the RULSTM model \cite{furnari2020rulstm} to evaluate state-of-the-art action anticipation methods on the proposed MECCANO dataset, also considering the gaze and depth signals. \subsubsection{Next Active Objects Detection} The detection of next-active objects, i.e., the objects which will be involved in an EHOI, is a problem which has not been thoroughly studied due to the small number of public egocentric datasets suitable for the task.
There are no egocentric datasets specifically annotated for this task. The authors of~\cite{Furnari2017NextactiveobjectPF} were the first to explore the next-active object prediction problem. They performed experiments on the Activities of Daily Living (ADL) egocentric dataset, analyzing the trajectories of the next-active objects with a temporal sliding window. The authors of~\cite{liu_forecasting_HOI} addressed the task of anticipating egocentric actions by proposing an architecture composed of a motor attention module, which predicts the trajectory of the hands, and of a module which detects the contact area of the target object that will become active. These two outputs are fed into an anticipation module which predicts a spatio-temporal attention map indicating the possible locations of the next-active objects. The output is composed of the next action label, a ``hotspot'' which indicates the area of the object in which there will be a contact, and the hand trajectory. The two egocentric datasets ADL \cite{ADL_PirsiavashR12} and EPIC-Kitchens \cite{Damen2020RESCALING} have been re-annotated by \cite{JIANG2021212} to tackle the problem of short-term next-active object detection. They proposed a novel human-centered approach composed of two pathways: 1) the first pathway generates a human visual attention probability map and 2) the second one generates a human hand position probability map. These two maps are then fused by an interaction module which outputs the final map of the next-active object. In addition to the next-object location, \cite{Fan2017ForecastingHA} predicted the hand locations in future frames. The problem was tackled by designing a two-stream CNN architecture with an auto-encoder extending SSD, a state-of-the-art convolutional object detection network, and by using a regression network to infer future representations. The authors of~\cite{Dessalene_forecasting_contact} performed action anticipation and prediction through hand-object contact representations. They presented a new architecture composed of an anticipation module and of temporal relations represented using Graph Convolutional Networks (GCNs) and LSTMs to predict the final next action label. They treated the next-active objects involved in the future contact with hands through semantic segmentation masks. The authors of~\cite{Bertasius2017FirstPA} detected the important objects for the camera wearer (i.e., objects related to the intent of the user) considering an unsupervised learning approach. Closely related to next-active object prediction, the authors of~\cite{Ego4D2021} presented a set of tasks for the forecasting benchmark of the Ego4D dataset, including the task of short-term Object Interaction Anticipation. Although previous works have considered tasks related to the problem of next-active object detection, it has not yet been studied in depth. Moreover, the task has not been studied considering different types of signals (e.g., depth and gaze) nor in an industrial-like domain. In this work, we focus on the next-active object task on the MECCANO Multimodal dataset, which has been acquired in an industrial domain and annotated explicitly to tackle this challenging task. \section*{\uppercase{Supplementary Material}} \label{sec:supp_material} This document is intended for the convenience of the reader and reports additional information about the action-interaction relations, the proposed dataset and the annotation stage. This supplementary material is related to the following submission:\\ F.
Ragusa, A. Furnari, G. M. Farinella. MECCANO: A Multimodal Egocentric Dataset for Humans Behavior Understanding in the Industrial Domain. Submitted to Computer Vision and Image Understanding (CVIU), 2022. \\ The reader is referred to the manuscript and to our web page \url{https://iplab.dmi.unict.it/MECCANO/} to download the dataset and for further information. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{images/stats_nao_bbox.png} \caption{Long-tail distribution of bounding boxes over all object classes.} \label{fig:nao_bbox} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{images/VIA.png} \caption{Customized VIA project to support the labeling of next-active objects. Annotators were presented with a panel which allowed them to identify object classes through their thumbnails.} \label{fig:VIA} \end{figure*} \section{Next-Active Objects Annotations} \label{sec:nao} Figure~\ref{fig:nao_bbox} shows the distribution of bounding boxes over all object classes. As shown in the figure, the distribution is long-tailed, which highlights the complexity of this industrial scenario. Moreover, we report how many bounding boxes were annotated for each object class considering the three splits (Training, Validation and Test) of the dataset. For the annotation phase of the next-active objects, we used VGG Image Annotator (VIA) \cite{dutta2019vgg} with a customized project to facilitate and speed up the selection of the correct object class (see Figure~\ref{fig:VIA}). Moreover, we provided a document to the annotators, containing a set of key rules for the annotation of next-active objects, to support them and reduce ambiguities. In the annotation guidelines, we report the fundamental definitions (e.g., next active object, next active object in a pile, occluded next active object), showing visual examples (see Figure~\ref{fig:rules_ann}). \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{images/rules_nao.png} \caption{Next active object definition given to the labelers for the next active object bounding box annotation stage.} \label{fig:rules_ann} \end{figure} Figure~\ref{fig:past_interactions} shows the comparison between the number of interactions present in the MECCANO dataset and the number of interactions which include labeled past frames. \section{Hands Annotations} Figure~\ref{fig:hands_stats} reports some statistics related to the hand annotations. \begin{figure}[htp] \centering \includegraphics[width=\columnwidth]{images/stats_past_interactions.png} \caption{Comparison between the number of interactions with respect to the number of interactions which have past frames.} \label{fig:past_interactions} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{images/hands_stats.png} \caption{Distribution of the hand annotations.} \label{fig:hands_stats} \end{figure*} An example of the labeling procedure is shown in Figure~\ref{fig:hands_refine}. In the first column, we report the predictions of the Hand Object Detector \cite{Hands_in_contact_Shan20}. The second column shows the result after the annotators fixed the class errors and refined the bounding boxes around the hands.
\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{images/hands_refinment.png} \caption{Example of the labeling procedure of the hands.} \label{fig:hands_refine} \end{figure} \section*{\uppercase{Acknowledgements}} This research is supported by Next Vision\footnote{Next Vision: https://www.nextvisionlab.it/} s.r.l., by MISE - PON I\&C 2014-2020 - Progetto ENIGMA - Prog n. F/190050/02/X44 – CUP: B61B19000520008, and MIUR AIM - Attrazione e Mobilita Internazionale Linea 1 - AIM1893589 - CUP: E64118002540007 \input{sections/supplementary_material.tex} \clearpage {\small \bibliographystyle{ieee_fullname} \balance \section{Conclusion} \label{sec:conclusion} We proposed MECCANO, a multimodal dataset to study egocentric human behavior understanding in an industrial-like scenario. We publicly release the dataset (\url{https://iplab.dmi.unict.it/MECCANO/}) with temporal (action and interaction segments) and spatial (active, next-active object, and hands bounding boxes) annotations, considering a taxonomy of 12 verbs, 20 nouns and 61 unique actions. In addition, we performed baseline experiments on five challenging tasks, showing the usefulness of the multimodal nature of the MECCANO dataset. We argue that these multimodal signals are useful to develop real applications to support humans in real life. MECCANO is also suitable to explore different tasks \cite{EgoProceLECCV2022, mixup_active_obj_detction} other than those considered in this work. Future work will explore new approaches to improve performance on the proposed tasks. \section{The MECCANO Multimodal Dataset} \label{sec:dataset} In this section, we describe MECCANO Multimodal, a dataset of egocentric videos composed of multimodal data collected in an industrial-like domain. We acquired RGB videos, depth maps and the gaze signal simultaneously with two different devices. \subsection{Data Collection} \label{sec:meccano_collection} The MECCANO Multimodal dataset has been acquired in an industrial-like scenario in which 20 subjects were asked to build a toy model of a motorbike (see Figure~\ref{fig:toy_model}). The motorbike is composed of 49 components with different shapes and sizes belonging to 19 classes. In addition, 2 tools, a \textit{screwdriver} and a \textit{wrench}, are available to facilitate the assembly of the toy model. In our setting, we grouped components which are similar in their appearance and have similar roles in the assembly process. Specifically, we grouped the A054 and A051 components (shown in Figure~\ref{fig:toy_model}) under the ``screw'' class. These two types of components only differ in their lengths. We also grouped A053, A057 and A077 under the ``washers'' class. Note that these components only differ in the radius of their holes and in their thickness. As a result, we have 20 object classes in total: 16 classes are related to the 49 motorbike components, whereas the others are associated with the two tools, the instruction booklet and a ``partial model'' class, which indicates a set of components assembled together to form a part of the model (see Figure~\ref{fig:partial}). Note that multiple instances of each component are necessary to build the model.
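To make the grouping explicit, the mapping from component codes to object classes can be expressed as a simple lookup table. The following Python sketch is only illustrative and limited to the component codes mentioned above (the remaining components map one-to-one to their own classes):
\begin{verbatim}
# Component codes explicitly grouped in our object taxonomy
# (see the description above); all other components keep a
# one-to-one mapping to their own class.
COMPONENT_TO_CLASS = {
    'A054': 'screw',    # long and short screws differ only in length
    'A051': 'screw',
    'A053': 'washers',  # washers differ only in hole radius/thickness
    'A057': 'washers',
    'A077': 'washers',
}

def object_class(component_code):
    # Map a raw component code to its object class; ungrouped
    # components keep their own code as class label.
    return COMPONENT_TO_CLASS.get(component_code, component_code)
\end{verbatim}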
\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{images/partial_model.png} \caption{Examples of objects belonging to the partial model class.} \label{fig:partial} \end{figure} For the data collection, the $49$ components related to the $16$ considered classes, the $2$ tools and the instruction booklet have been placed on a table to simulate an industrial-like environment. Specifically, objects of the same component class have been grouped and placed in a heap, and heaps have been placed randomly on the table (see Figure~\ref{fig:dataset}). We have considered two types of tables: a light-colored table and a dark one. The dataset has been acquired in 2 different countries, Italy and the United Kingdom. Participants were of $8$ different nationalities, with ages between $18$ and $55$. Figure~\ref{fig:participants} reports some statistics about the participants. We asked participants to sit and build the model of the motorbike. No other particular instruction was given to the participants, who were free to use all the objects placed on the table as well as the instruction booklet. Some examples of the captured data are reported in Figure~\ref{fig:dataset}. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{images/participants.pdf} \caption{Statistics of the 20 participants.} \label{fig:participants} \end{figure} The dataset has been acquired using a custom headset (see Figure~\ref{fig:headset}), which was worn by the participants during the acquisition. The headset was composed of an Intel RealSense SR300\footnote{https://ark.intel.com/content/www/it/it/ark/products/92329/intel-realsense-camera-sr300.html} and a Pupil Core\footnote{https://pupil-labs.com/} device. The headset was adjusted to control the point of view of the camera with respect to the different heights and postures of the participants, in order to have the hands located approximately in the middle of the scene during the object manipulations. For each participant, we acquired the RGB stream and the depth signal from the RealSense device, whereas the gaze signal was acquired through the Pupil Core device (see Figure~\ref{fig:headset}). The RGB videos acquired with the RealSense device were recorded at a resolution of 1920x1080 pixels. Depth videos were acquired with a resolution of 640x480 pixels. Both streams have a framerate of 12 fps. Finally, we acquired the gaze signal with the Pupil Core device at a frequency of 200 Hz. To acquire the RealSense and Pupil Core signals, we used the Pupil Capture software\footnote{https://pupil-labs.com/products/core/}, which allows acquiring the gaze signal simultaneously with the signals coming from the two devices. Each video includes a complete assembly of the toy model starting from the 49 pieces placed on the table. The average duration of the captured videos is 21.14\textit{min}, with the longest one being 35.45\textit{min} and the shortest one being~9.23\textit{min}. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{images/multimodality.png} \caption{The custom headset used to acquire the MECCANO dataset along with examples of the captured modalities. The headset is composed of two devices: an Intel RealSense SR300 and a Pupil Core.} \label{fig:headset} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{images/action-interaction.png} \caption{Example of relation between the action \textit{``take screwdriver''} and its related interaction.
$A_s$ and $A_e$ represent the start and end times of the action, while $I_s$ represents the start of the interaction, marked by the physical contact between the hand and the object. The interaction ends when there is no more physical contact ($I_e$).} \label{fig:action_interaction} \end{figure*} \subsection{Data Alignment} We aligned the different signals both temporally and spatially to obtain a consistent association between modalities. In this way, it is possible to use the set of annotations independently of the chosen signal (e.g., temporal segments or bounding box annotations). The following sections detail the alignment of the different modalities to the source RGB videos. \subsubsection{Depth Alignment} There was a constant temporal misalignment of 0.4 s between the depth and RGB signals due to the fact that the streams have been acquired with two different sensors (a depth sensor and an RGB sensor). We temporally aligned the two streams, obtaining a total of 301016 depth frames\footnote{Due to the temporal misalignment, the number of depth frames differs from the number of RGB frames.}. Examples of RGB frames associated with the depth maps are shown in Figure~\ref{fig:dataset}. \subsubsection{Gaze Alignment} The gaze data consist of the 2D pixel coordinates (\textit{x, y}) of the gaze position in the RealSense RGB frame, together with a confidence score and a timestamp. For each RGB frame, we associated a gaze position by selecting, among the gaze samples with a confidence larger than or equal to 0.6, the one whose timestamp is closest to the considered frame (see Figure~\ref{fig:dataset}). \subsection{Data Annotation} \label{sec:meccano_data_ann} The MECCANO Multimodal dataset has been collected and annotated to study human behavior understanding in an industrial-like scenario. Similarly to recent datasets \cite{Ego4D2021, Damen2018EPICKITCHENS}, MECCANO is a ``multi-task'' dataset, due to its rich set of annotations which can be used and combined to solve different tasks. We provide temporal annotations for action and interaction understanding, useful to address tasks which take into account the temporal dimension, such as action recognition, as reported in Section~\ref{sec:action_recognition}. Active objects have been labeled with bounding boxes and object classes with the aim of addressing tasks such as Active Object Detection and Recognition (Section~\ref{sec:AODR}). Combining interaction temporal annotations and active object bounding boxes, it is possible to address the task of Egocentric Human-Object Interaction (EHOI) Detection, as detailed in Section~\ref{sec:EHOI}. We also provide bounding boxes around the hands of the user during all object interactions. Hand and active object bounding boxes have also been tracked backward in time before the beginning of each interaction. These tracked bounding boxes are related to the task of ``next active objects''. We exploit the hand and next-active object bounding boxes to address the action anticipation task, as reported in Section~\ref{sec:Action_anticipation}. Moreover, the bounding boxes of active and next-active objects have been used to address the Next-Active Object Detection task (see Section~\ref{sec:NAO}).
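As an illustration of the gaze-to-frame association described in the Gaze Alignment section above, the following Python sketch selects, for a given frame timestamp, the temporally closest gaze sample with sufficient confidence. The tuple layout and function name are illustrative assumptions, not the released data format. \begin{verbatim}
# Sketch of the gaze-to-frame association (illustrative data layout):
# gaze samples are (timestamp, x, y, confidence) tuples; frame
# timestamps are expressed in the same clock as the gaze timestamps.
def associate_gaze(frame_ts, gaze_samples, min_conf=0.6):
    valid = [g for g in gaze_samples if g[3] >= min_conf]
    if not valid:
        return None  # no reliable gaze sample for this frame
    t, x, y, _ = min(valid, key=lambda g: abs(g[0] - frame_ts))
    return (x, y)

frames = [0.0, 1 / 12, 2 / 12]       # 12 fps RGB stream
gaze = [(0.002, 910, 520, 0.9),      # 200 Hz gaze samples
        (0.007, 905, 522, 0.4),      # discarded: confidence < 0.6
        (0.083, 930, 540, 0.8)]
print([associate_gaze(t, gaze) for t in frames])
\end{verbatim}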
In sum, to show the potential of the MECCANO dataset and its annotations, we propose a benchmark which comprises five tasks using different sets of annotations and input modalities: 1) Action Recognition, 2) Active Object Detection and Recognition, 3) Egocentric Human-Object Interaction Detection, 4) Action Anticipation and 5) Next-Active Object Detection. Central to human-object annotations are user-object interactions and actions. Hence, we first investigate the relationship between these two concepts. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{images/rel.png} \caption{Examples of relations between the action and interaction concepts.} \label{fig:action_interaction_table} \end{figure} \subsubsection{Action-Interaction Relations} \label{sec:action_vs_interactions} In the literature, the \textit{action} and \textit{interaction} concepts are often used interchangeably, especially when tasks related to \textit{action} understanding are considered. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{images/Temporal_annotations.pdf} \caption{Example of two overlapping temporal annotations along with the associated verbs.} \label{fig:temporal} \end{figure*} Let us consider the sentence ``take a screwdriver'', which is composed of the verb ``take'' and the object ``screwdriver''. We consider the \textit{interaction} to be strongly related to the physical contact between the human and the object, whereas we assume that the \textit{action} is more related to the motion of the hands of the user and the related objects, as well as to the change of state of the object (i.e., from inert to in hand). These two entities are correlated along the temporal dimension, having different start and end times even though they overlap (see Figure~\ref{fig:action_interaction}). Let $A_s$ and $A_e$ be the start and end times of an action and let $I_s$ and $I_e$ denote when the related interaction begins and ends. Considering the annotation ``take a screwdriver'', the action begins when the hand of the human starts to move towards the target object, which is the screwdriver on the table. The interaction begins when the hand touches the target object. Hence, the interaction begins with the physical contact between the hand and the object, while the action is still ongoing. When the screwdriver has been taken, which means that it changed its state from inert to in hand, the action ends, while the interaction concludes when the physical contact is broken (e.g., when the human puts the object down on the table). The verb describing the action is the same as the one describing the related interaction. Note that the interaction can be associated with different verbs over time. For example, if after taking the screwdriver the physical contact is not broken and the human puts down the screwdriver, the verb ``put down'' will also describe the interaction until the end of that action. Distinguishing these two concepts and understanding their temporal correlation is fundamental to formalize and disambiguate the two related tasks. Action understanding tasks focus on the temporal dimension of actions, while the Human-Object Interaction detection task focuses more on the spatial position of the objects in the scene, as related to the physical contact between the human and the objects. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{images/actions_stats.pdf} \caption{List of the actions present in the MECCANO Dataset (right) and their distribution (left).
The action labels follow a long-tailed distribution, which highlights the complexity of the considered scenario.} \label{fig:action_stats} \end{figure*} Considering this distinction, we labeled the MECCANO dataset with action and interaction annotations, as reported in Section~\ref{sec:temporal_ann}, and explored the two different tasks. Figure~\ref{fig:action_interaction_table} reports different examples of relations between the action and interaction concepts. For each example, we report the temporal relation (first column), generic verbs (second column) and the verbs present in the MECCANO Dataset (third column) which belong to the considered relation. \subsubsection{Action and Interaction Temporal Annotations} \label{sec:temporal_ann} We considered 12 different verbs which describe the actions performed by the participants while building the toy model: \textit{take, put, check, browse, plug, pull, align, screw, unscrew, tighten, loosen} and \textit{fit}. We represent each temporal segment as a triplet of timestamps: 1) the start time, which indicates the start of the action, 2) the contact time, which indicates the first frame in which the contact between the hand and the object (or between the objects) is clearly visible, changing the state of the objects from \textit{passive} to \textit{active}, and 3) the end time of the performed action. We manually annotated both the contact and end times of each temporal segment. Since in the MECCANO Multimodal dataset there are three cases in which the action starts before the frame of contact (i.e., take, put and align), for these actions we automatically annotated the start time by going back 0.5 seconds with respect to the contact time, whereas for the other verbs the start time coincides with the contact time. Only for the \textit{check} verb, where the user does not need to touch an object, we annotated the contact time when it is clear from the video sequence that the user is looking at the object (see Figure~\ref{fig:temporal}). With this procedure, we annotated $8857$ video segments. Since a participant can perform multiple interactions simultaneously, we allowed the annotated segments to overlap (see Figure~\ref{fig:temporal}). In particular, in the MECCANO Multimodal dataset there are 1401 segments (15.82\%) which overlap with at least one other segment. We defined $61$ action classes composed of a verb and one or more objects, for example \textit{``align screwdriver to screw''}, in which the verb is \textit{align} and the objects are \textit{screwdriver} and \textit{screw}. Depending on the verb and objects involved in the interaction, each temporal segment has been associated with one of the $61$ considered action classes. We analyzed the combinations of our $12$ verb classes and $20$ object classes to find a compact, yet descriptive, set of action classes. The action class selection process has been performed in two stages, as sketched below. In the first stage, we obtained the distributions of the number of active objects generally occurring with each of the $12$ verbs. In the second stage, we selected a subset of actions from all combinations of verbs and nouns. Let \begin{math} O = \{o_1, o_2, ..., o_n\} \end{math} and \begin{math} V = \{v_1, v_2, ..., v_m\} \end{math} be the sets of object and verb classes, respectively. For each verb $v \in V$, we considered all the object classes $o \in O$ involved in one or more temporal segments labeled with verb $v$.
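The following Python sketch illustrates the two-stage selection above by deriving candidate action classes from annotated segments. The segment structure and field names are hypothetical and serve only to make the procedure explicit; the criteria used to prune the full set of combinations are not reproduced here. \begin{verbatim}
# Illustrative sketch of the two-stage action-class construction.
# Segments are assumed to carry a verb and the set of active object
# classes (hypothetical field names, not the released format).
from collections import defaultdict

segments = [
    {"verb": "take", "objects": ["screwdriver"]},
    {"verb": "align", "objects": ["screwdriver", "screw"]},
    {"verb": "take", "objects": ["screwdriver"]},
]

# Stage 1: per-verb distribution of the number of active objects.
n_active = defaultdict(list)
for s in segments:
    n_active[s["verb"]].append(len(s["objects"]))

# Stage 2: candidate action classes as observed verb-object
# combinations, from which the final 61 classes were selected.
candidates = {(s["verb"], tuple(sorted(s["objects"]))) for s in segments}
print(dict(n_active))
print(sorted(candidates))
\end{verbatim}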
In total, we obtained the 61 action classes composing the MECCANO dataset, which are shown in Figure~\ref{fig:action_stats}. \subsubsection{Active Object Bounding Box Annotations} For each temporal segment, we annotated the \textit{active} objects in frames sampled every $0.2$ seconds. Each active object annotation consists of a \textit{(class, x, y, w, h)} tuple, where \textit{class} represents the class of the object and \textit{(x, y, w, h)} are the 2D coordinates, width and height defining the bounding box around the object in the frame. We annotated multiple objects when they were \textit{active} simultaneously (see Figure~\ref{fig:bbox} - first row). If an active object is occluded, even just in a few frames, we annotated it with a \textit{(class, x, y)} tuple, specifying the class of the object and its estimated 2D position. An example of an occluded active object annotation is reported in the second row of Figure~\ref{fig:bbox}. With this procedure, we labeled a total of 64349 frames. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{images/bbox_annotations.pdf} \caption{Example of bounding box annotations for \textit{active} objects (first row) and occluded \textit{active} objects (second row).} \label{fig:bbox} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{images/concept_HOI.png} \caption[]{Examples of Human-Object Interactions from the third person point of view (first row) and the first person point of view (second row)\footnotemark.} \label{fig:concept_HOI} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{images/next_annotations.png} \caption{Annotation procedure for next-active objects with bounding boxes in the past frames.} \label{fig:next_active_objects} \end{figure*} \subsubsection{EHOI Annotations} \label{sec:EHOI_ann} The HOI detection task consists of detecting the occurrence of human-object interactions, localizing both the humans taking part in the action and the interacted objects. HOI detection also aims to understand the relationships between humans and objects, which are usually described with a verb. Possible examples of HOIs are ``\textit{eat the sandwich}'' or ``\textit{throw the ball}'' (see Figure~\ref{fig:concept_HOI}-top). HOI detection models mostly consider a single object involved in the interaction \cite{Gupta2015VisualSR, HOI_Gupta_09, Gkioxari2018DetectingAR, HOI_Fei_Fei,Chao2018LearningTD}. Hence, an interaction is defined as a triplet in the form \textit{$<$human, verb, object$>$}, where the human is the subject of the action specified by a verb and an object. Considering the FPV domain, a first formalization of the HOI task has been proposed by \cite{Hands_in_contact_Shan20}, who represented an interaction as a triplet $<$hand, contact state, object$>$, where the ``contact state'' variable assumes one of the following values: none, self, other, portable, non-portable. This definition does not describe the interaction in terms of verb classes and assumes that only one object per hand can be involved in the interaction. Differently from \cite{Hands_in_contact_Shan20}, we aim to understand human behavior more closely and hence study the Egocentric Human-Object Interaction (EHOI) detection task, whose aim is to predict \textit{$<$verb, objects$>$} pairs describing the interaction, possibly involving multiple objects, observed from the egocentric point of view.
Note that in EHOIs, the subject is always the camera wearer, so we do not require its localization in the frame, while one or more objects can be involved simultaneously in the interaction. The goal of EHOI detection is to infer the verb and object noun classes, and to localize each active object involved in the interaction. Let $O = \{o_1, o_2, ..., o_n\}$ and $V = \{v_1, v_2, ..., v_m\}$ be the sets of objects and verbs respectively. We define an Egocentric Human-Object Interaction $e$ as: \begin{equation} \label{eq:1} e = (v_h, \overline{\rm o_1}, \overline{\rm o_2}, ..., \overline{\rm o_i}) \end{equation} \noindent where \begin{math}v_h \in V\end{math} is the verb characterizing the interaction and \begin{math}\{\overline{\rm o_1}, \overline{\rm o_2}, ..., \overline{\rm o_i}\} \subseteq O \end{math} are the active objects involved in the interaction. Given the previous definition, we considered all the observed combinations of verbs and objects to represent the EHOIs performed by the participants during the acquisition. Two examples are reported in Figure~\ref{fig:concept_HOI}-bottom. Each EHOI annotation is hence composed of a verb annotation and the bounding boxes of the \textit{active} objects. Differently from other datasets, MECCANO Multimodal has been explicitly annotated for the EHOI detection task. \subsubsection{Next Active Object Annotations} \label{sec:nao_annotations} Due to the limited number of public datasets explicitly annotated to study the future intentions of humans, few past works explored the task of predicting the next-active objects considering the first person point of view \cite{Furnari2017NextactiveobjectPF}, using both RGB and depth signals \cite{Bertasius2017FirstPA}, focusing on the hands \cite{JIANG2021212} or also estimating the time to contact with the future active objects \cite{Ego4D2021}. We annotated MECCANO with a set of labels useful to tackle the problem of \textit{Next Active Object} prediction, whose goal is to predict and localize the objects that will be involved in a future human-object interaction from the first person view. For each human-object interaction, we annotated, in the frames preceding the interaction, the objects which will be \textit{active} in the contact frame. Starting from the contact frame, we sampled frames every 0.2 seconds going backwards, up to 3 seconds before the beginning of the temporal segment, or less if there is an overlap with a previous segment\footnote{If an interaction overlaps with the previous one, we did not annotate past frames.} (see Figure~\ref{fig:next_active_objects}). Indeed, not all interactions have past frames. For example, if the interaction $E_1$ ends at timestamp $T_1$ and the interaction $E_2$ starts at timestamp $T_2 = T_1 + 0.1s$, the past frames belonging to the interaction $E_2$ would overlap with frames belonging to the previous interaction $E_1$, and hence they are not annotated. With this sampling procedure, we obtained labels in past frames for 75.66\% (6656) of the total number of interactions (8857) present in the dataset. Considering the frames preceding an interaction, each next-active object annotation consists of a \textit{(class, x, y, w, h)} tuple, where \textit{class} represents the class of the object which will be active and the \textit{(x, y, w, h)} tuple defines a bounding box around the considered object. If an object is going to be taken from a pile, then the pile itself is labeled as the next active object. Note that a pile of objects is composed only of objects of the same type.
We labeled the pile because we assume that, before a human-object interaction occurs, it is not feasible to infer which object of the pile will become active (see Figure~\ref{fig:next_active_objects}-left). As in the case of active objects, if an object is occluded, we annotated it with a \textit{(class, x, y)} tuple specifying the class of the object and its estimated 2D position. With this procedure, we labeled a total of 48024 frames with 74127 bounding boxes\footnote{See supplementary material for additional details.}. \begin{table*}[] \caption{Statistics of the three splits: Train, Validation and Test. The 4th column indicates the percentage of videos belonging to the related subset with respect to the total number of videos present in the MECCANO Dataset.} \label{tab:splits} \centering \resizebox{\textwidth}{!}{% \begin{tabular}{l|ccccccccc} \multicolumn{1}{c|}{\textbf{Split}} & \textbf{\#Videos} & \textbf{Duration (min)} & \textbf{\%} & \textbf{\#EHOIs Segments} & \textbf{Obj. BBoxes} & \textbf{Hands BBoxes} & \textbf{NAO BBoxes} & \textbf{Country (U.K/Italy)} & \textbf{Table (Light/Dark)} \\ \hline Train & 11 & 236.47 & 55\% & 5057 & 37386 & 96556 & 28152 & 6/5 & 6/5 \\ Val & 2 & 46.57 & 10\% & 977 & 6983 & 19636 & 5490 & 1/1 & 1/1 \\ Test & 7 & 134.93 & 35\% & 2824 & 19980 & 87924 & 14382 & 4/3 & 4/3 \\ \hline \end{tabular}% } \end{table*} \subsubsection{Hands Annotations} As MECCANO Multimodal features actions and human-object interactions from the first person point of view, where the hands of the user are visible during object manipulation, knowledge of the position of the hands is an important modality to explore. For each temporal segment, we annotated the hands of the participants with a bounding box on the set of frames belonging to the interaction (i.e., from the start frame to the end frame) and in the past frames preceding the interaction. Each hand annotation consists of a \textit{(class, x, y, w, h)} tuple, where \textit{class} represents the side of the hand (i.e., left or right) and \textit{(x, y, w, h)} defines a bounding box around the considered hand. We split this labeling procedure into two stages. Firstly, we processed the frames with the Hand Object Detector introduced in \cite{Hands_in_contact_Shan20}. This detector infers whether a hand is involved in an interaction through the contact with active objects. In particular, the detector predicts the hand location, the side, a contact state, and a box around the object in contact. We considered only the hand location and the side for each processed frame. In the second stage, annotators checked whether the predicted bounding boxes and the associated classes were correct: they refined imprecise bounding boxes, corrected wrong hand-side classes, and added new annotations in case of missing hand predictions. With this procedure, we annotated 89628 frames with 169625 bounding boxes around the hands. See the supplementary material for additional details. We used this set of annotations as an additional modality to tackle the \textit{Action Anticipation} task, as described in Section~\ref{sec:Action_anticipation}. Hand annotations could also be useful to understand human-object interactions or to recognize the actions performed by the user.
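To make the backward sampling of Section~\ref{sec:nao_annotations} concrete, the following Python sketch computes the timestamps of the past frames annotated for one interaction, under the stated rules (0.2 s steps, a 3 s horizon before the segment start, and no past frames when overlapping a previous segment). The function name and argument layout are illustrative. \begin{verbatim}
# Sketch of the past-frame sampling for next-active-object labels.
# Times are in seconds; prev_end is the end time of the previous
# interaction, if any (illustrative interface, not the release code).
def past_frame_times(contact_t, seg_start, prev_end=None,
                     step=0.2, horizon=3.0):
    earliest = seg_start - horizon
    if prev_end is not None:
        earliest = max(earliest, prev_end)
    times, t = [], contact_t - step
    while t >= earliest:
        times.append(round(t, 2))
        t -= step
    return times  # empty if the previous interaction overlaps

print(past_frame_times(10.0, 10.0, prev_end=9.5))   # [9.8, 9.6]
print(past_frame_times(10.0, 10.0, prev_end=10.1))  # []
\end{verbatim}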
\section{Benchmarks and Baseline Results} \label{sec:benchmark} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{images/action_rec_architecture.pdf} \caption{SlowFast (RGB-Depth-Gaze) architecture, which takes as input three different signals (RGB, depth and gaze).} \label{fig:action_rec_architecture} \end{figure*} The MECCANO dataset is suitable to study a variety of tasks, considering its multimodality and the challenging industrial-like scenario in which it was acquired. In this paper, we propose five tasks related to human behavior understanding and provide baseline results: 1) \textit{Action Recognition}, 2) \textit{Active Object Detection and Recognition}, 3) \textit{Egocentric Human-Object Interaction (EHOI) Detection}, 4) \textit{Action Anticipation} and 5) \textit{Next Active Object Detection}. While some of these tasks have been studied in previous works, none of them has been studied in industrial scenarios from the egocentric perspective while also considering multimodal observations. Moreover, there are only a few publicly available datasets \cite{Damen2020RESCALING, Ego4D2021, Sener2022Assembly101AL} which can be used to study different tasks simultaneously and to develop a complete system for human behavior understanding taking into account different aspects (e.g., actions, interactions, objects, future intentions). MECCANO has been split into three subsets (\textit{Training, Validation} and \textit{Test}) designed to balance the different types of tables (light, dark) and the countries in which the videos have been acquired (Italy, U.K.). We used the training set to train the baselines, the validation set for hyperparameter tuning and the test set to test the trained models. Each video has been entirely assigned to one of the three different subsets. Table~\ref{tab:splits} reports some statistics about the three splits, such as the number of videos, the total duration (in minutes), the number of temporally annotated EHOIs and the number of bounding box annotations. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{images/act_rec_qual.png} \caption{Qualitative results of the action recognition models. Green and red colors represent correct and wrong predictions, respectively, for both the verbs and the objects which compose the action class. In the first column, we report the ground truth action class, in the second one the prediction of the SlowFast (RGB-Gaze) model, in the third column the action class predicted by SlowFast (Depth-Gaze) and in the last column (\textit{All}) the prediction of SlowFast (RGB-Depth-Gaze).} \label{fig:action_rec_qual_res} \end{figure*} \begin{table*}[] \caption{Results for the action recognition task.
The best results are reported in bold, whereas the second best results are underlined.} \label{tab:action_rec} \resizebox{\textwidth}{!}{% \begin{tabular}{l|ccccc} \textbf{Method} & \multicolumn{1}{l}{\textbf{Top-1 Accuracy}} & \multicolumn{1}{l}{\textbf{Top-5 Accuracy}} & \multicolumn{1}{l}{\textbf{AVG Class Precision}} & \multicolumn{1}{l}{\textbf{AVG Class Recall}} & \multicolumn{1}{l}{\textbf{AVG F1-score}} \\ \hline SlowFast (RGB) & 45.16 & 73.75 & 50.33 & 45.16 & 46.66 \\ SlowFast (Depth) & 45.13 & 72.19 & 50.28 & 45.13 & 46.96 \\ SlowFast (RGB-Depth) & \underline{49.49} & \underline{77.61} & \underline{56.13} & \underline{49.49} & \underline{51.90} \\ SlowFast (RGB-Gaze) & 45.34 & 73.61 & 49.83 & 44.60 & 46.25 \\ SlowFast (Depth-Gaze) & 45.27 & 72.30 & 50.62 & 45.27 & 47.23 \\ SlowFast (RGB-Depth-Gaze) & \textbf{49.66} & \textbf{77.82} & \textbf{56.69} & \textbf{49.66} & \textbf{52.25} \end{tabular} } \end{table*} \subsection{Action Recognition} \label{sec:action_recognition} Action Recognition consists of determining the action performed by the camera wearer from the observation of an egocentric video segment. Specifically, let \begin{math}C_a = \{c_1, c_2, ..., c_n\} \end{math} be the set of action classes and let \begin{math}A_i = [t_{s_i}, t_{e_i}]\end{math} be a video segment, where $t_{s_i}$ and $t_{e_i}$ are the start and end times of the action, respectively. The aim is to assign the correct action class $c_i \in C_a$ to the segment $A_i$. We evaluate action recognition using Top-1 and Top-5 accuracy computed on the whole test set. As class-aware measures, we report class-mean precision, class-mean recall and class-mean $F_1$-score. \subsubsection{Baseline} \label{sec:baseline_act_rec} As a baseline, we considered SlowFast \cite{feichtenhofer2018slowfast}, which is a state-of-the-art method for action recognition. To explore the usefulness of multimodal signals for this task, we adopted different instances of the SlowFast architecture, as detailed in the following.\\ \textbf{SlowFast (RGB):\hspace{1mm}} the 3D network architecture which takes as input the RGB clip to perform action recognition, as implemented in PySlowFast \cite{fan2020pyslowfast}.\\ \textbf{SlowFast (Depth):\hspace{1mm}} a 3D network architecture which takes as input the clip composed of the depth maps related to the corresponding RGB frames.\\ \textbf{SlowFast (RGB-Depth):\hspace{1mm}} this model is composed of two SlowFast networks which process different input signals (RGB and the corresponding depth frames). The two probability distributions obtained as output are averaged to obtain the final action prediction. \\ \textbf{SlowFast (RGB-Gaze):\hspace{1mm}} inspired by \cite{deep_attention_Minlong}, we integrated human gaze into the SlowFast network. Specifically, we used the ground truth gaze fixation to obtain an attention map which focuses on the most relevant spatial regions of the video frames along the time dimension. This attention map is multiplied with the output feature maps of both the slow and fast pathways.
Then, we fused the combined feature maps by concatenation and fed them to the fully connected layers to obtain the final prediction.\\ \textbf{SlowFast (Depth-Gaze):\hspace{1mm}} this model is similar to SlowFast (RGB-Gaze) but it takes as input the depth maps of the video clips rather than the RGB frames.\\ \textbf{SlowFast (RGB-Depth-Gaze):\hspace{1mm}} this model is composed of two instances of the SlowFast model (one for each input signal) and integrates human gaze as in the SlowFast (RGB-Gaze) architecture (Figure~\ref{fig:action_rec_architecture}). \subsubsection{Results} Table~\ref{tab:action_rec} reports the results obtained with the adopted baselines for the action recognition task. Baselines using only one modality (RGB or Depth) achieve similar performance in terms of Top-1 Accuracy (45.16\% vs. 45.13\%) and AVG F1-score (46.66\% vs. 46.96\%). Fusing the RGB and Depth signals, we obtain better results with respect to all the baselines which use one or two modalities. Exploiting all the signals present in the MECCANO Dataset (RGB, Depth and Gaze), we obtain the best results considering all the evaluation measures (last row of Table~\ref{tab:action_rec}). Even if the gaze modality represents an additional signal to guide learning, the improvements brought by this modality are minor. The limited improvement obtained using this modality with respect to the model which uses the RGB and Depth signals could be related to the nature of the adopted architecture, which is simple and can be optimized in future work. Qualitative results are reported in Figure~\ref{fig:action_rec_qual_res}, which highlights how the use of multiple signals improves the performance on the action recognition task (first and second rows). In general, the results suggest that the use of multiple modalities improves the results on the MECCANO dataset. Moreover, the dataset is a challenging testbed for action recognition and offers a new scenario to compare classic and multimodal action recognition algorithms. \subsection{Active Object Detection and Recognition} \label{sec:AODR} Differently from previous works, we consider two distinct but related tasks: active object detection and active object recognition. In some cases, detecting the active objects manipulated by the human without considering the object class \cite{Hands_in_contact_Shan20} can be useful, for example when a taxonomy of the object classes is difficult to obtain or to initialize a tracker \cite{TREK150}. However, when a taxonomy is available, predicting the object classes can enable practical applications (e.g., monitoring the usage time of specific objects). The aim of the Active Object Detection task is to detect all the \textit{active} objects. Let \begin{math} O_{act} = \{o_1, o_2, ..., o_n\} \end{math} be the set of \textit{active} objects in a given frame. The goal is to detect with a bounding box each \textit{active} object $o_i \in O_{act}$. As evaluation measure, we use the Average Precision~(AP), which is used in standard object detection benchmarks. We set the IoU threshold equal to~$0.5$ in our experiments, as in the standard Pascal VOC mAP measure \cite{PascalVOC_Zisserman_15}. The active object recognition task, instead, consists of detecting and recognizing the \textit{active} objects involved in EHOIs considering the $20$ object classes of the MECCANO dataset.
Formally, let \begin{math} O_{act} = \{o_1, o_2, ..., o_n\}\end{math} be the set of \textit{active} objects in the image and let \begin{math} C_{o} = \{c_1, c_2, ..., c_m\} \end{math} be the set of object classes. The task consists of detecting the objects $o_i \in O_{act}$ and assigning them the correct class label $c_i \in C_{o}$. We use mAP \cite{PascalVOC_Zisserman_15} with an IoU threshold equal to $0.5$ for the evaluations. \begin{table}[] \caption{Baseline results for the \textit{active} object detection task.} \label{tab:active_det} \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{l|c} \multicolumn{1}{c|}{\textbf{Method}} & \multicolumn{1}{l}{\textbf{AP (IoU \textgreater 0.5)}} \\ \hline Hand Object Detector \cite{Hands_in_contact_Shan20} & 11.17\% \\ Hand Object Detector \cite{Hands_in_contact_Shan20} (Avg dist.) & 11.10\% \\ Hand Object Detector \cite{Hands_in_contact_Shan20} (All dist.) & 11.34\% \\ Hand Object Detector \cite{Hands_in_contact_Shan20} + Objs re-training & 20.18\% \\ Hand Object Detector \cite{Hands_in_contact_Shan20} + Objs re-training (Avg dist.) & 33.33\% \\ Hand Object Detector \cite{Hands_in_contact_Shan20} + Objs re-training (All dist.) & \textbf{38.14\%} \\ \hline \end{tabular}% } \end{table} \subsubsection{Methods} To address the problem of detecting active objects, the Hand-Object Detector proposed in \cite{Hands_in_contact_Shan20} has been considered as a baseline. The model has been designed to detect hands and objects when the latter are in contact with hands. This architecture is based on Faster-RCNN \cite{ren2015faster} and predicts a box around the visible human hands, as well as boxes around the objects the hands are in contact with, and a link between them. We used the Hand-Object Detector \cite{Hands_in_contact_Shan20} pretrained on EPIC-Kitchens \cite{Damen2018EPICKITCHENS}, EGTEA \cite{Li2018_EGTEA-GAZE+} and CharadesEGO \cite{Sigurdsson2018Charades}, as provided by the authors \cite{Hands_in_contact_Shan20}. For our purposes, the model has been trained to recognize hands and to detect the \textit{active} objects regardless of their class. With the default parameters, the Hand-Object Detector can find at most two \textit{active} objects in contact with the hands. Since our dataset tends to contain more \textit{active} objects in a single EHOI (up to 7), we consider two variants of this model obtained by changing the threshold on the distance between hands and detected objects. In the first variant, the threshold is set to the average distance between hands and \textit{active} objects in the MECCANO dataset. We named this variant ``\textit{Avg distance}''. In the second variant, we removed the thresholding operation and considered all detected objects as \textit{active} objects. We named this variant ``\textit{All objects}''. We further adapted the Hand-Object Detector \cite{Hands_in_contact_Shan20} by re-training the Faster-RCNN component to detect all the \textit{active} objects of the MECCANO dataset. Faster-RCNN has been trained on the training and validation sets using the provided \textit{active} object class labels. Since the task aims only to localize objects, we discard the predicted object classes at test time. For the active object recognition task, as a baseline, we used a standard Faster-RCNN \cite{ren2015faster} object detector. For each image, the object detector predicts \textit{(x, y, w, h, class)} tuples which represent the object bounding boxes and the associated classes.
We used the same Faster-RCNN model adopted for the Active Object Detection task, in this case also retaining the predicted object classes at test time. \begin{table}[] \caption{Baseline results for the \textit{active} object recognition task.} \label{tab:active_rec} \centering \resizebox{0.6\columnwidth}{!}{% \begin{tabular}{clc} \multicolumn{1}{l|}{\textbf{ID}} & \multicolumn{1}{c|}{\textbf{Class}} & \textbf{AP (per class)} \\ \hline \multicolumn{1}{c|}{0} & \multicolumn{1}{l|}{instruction booklet} & 46.18\% \\ \multicolumn{1}{c|}{1} & \multicolumn{1}{l|}{gray\_angled\_perforated\_bar} & 09.79\% \\ \multicolumn{1}{c|}{2} & \multicolumn{1}{l|}{partial\_model} & 36.40\% \\ \multicolumn{1}{c|}{3} & \multicolumn{1}{l|}{white\_angled\_perforated\_bar} & 30.48\% \\ \multicolumn{1}{c|}{4} & \multicolumn{1}{l|}{wrench} & 10.77\% \\ \multicolumn{1}{c|}{5} & \multicolumn{1}{l|}{screwdriver} & 60.50\% \\ \multicolumn{1}{c|}{6} & \multicolumn{1}{l|}{gray\_perforated\_bar} & 30.83\% \\ \multicolumn{1}{c|}{7} & \multicolumn{1}{l|}{wheels\_axle} & 10.86\% \\ \multicolumn{1}{c|}{8} & \multicolumn{1}{l|}{red\_angled\_perforated\_bar} & 07.57\% \\ \multicolumn{1}{c|}{9} & \multicolumn{1}{l|}{red\_perforated\_bar} & 22.74\% \\ \multicolumn{1}{c|}{10} & \multicolumn{1}{l|}{rod} & 15.98\% \\ \multicolumn{1}{c|}{11} & \multicolumn{1}{l|}{handlebar} & 32.67\% \\ \multicolumn{1}{c|}{12} & \multicolumn{1}{l|}{screw} & 38.96\% \\ \multicolumn{1}{c|}{13} & \multicolumn{1}{l|}{tire} & 58.91\% \\ \multicolumn{1}{c|}{14} & \multicolumn{1}{l|}{rim} & 50.35\% \\ \multicolumn{1}{c|}{15} & \multicolumn{1}{l|}{washer} & 30.92\% \\ \multicolumn{1}{c|}{16} & \multicolumn{1}{l|}{red\_perforated\_junction\_bar} & 19.80\% \\ \multicolumn{1}{c|}{17} & \multicolumn{1}{l|}{red\_4\_perforated\_junction\_bar} & 40.82\% \\ \multicolumn{1}{c|}{18} & \multicolumn{1}{l|}{bolt} & 23.44\% \\ \multicolumn{1}{c|}{19} & \multicolumn{1}{l|}{roller} & 16.02\% \\ \hline \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} \\ \cline{2-3} \multicolumn{1}{l}{} & \multicolumn{1}{c|}{\textbf{mAP}} & 30.39\% \end{tabular}% } \end{table} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{images/active_objects.png} \caption{Qualitative results for the Active Object Recognition task.} \label{fig:active_objects_qual} \end{figure} \subsubsection{Results} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{images/EHOI_arch.png} \caption{Proposed architecture for the EHOI detection task.} \label{fig:EHOI_arch} \end{figure*} \begin{table*}[] \caption{Results on the EHOI Detection task.
The best results are reported in bold, whereas the second best results are underlined.} \label{tab:EHOI} \resizebox{\textwidth}{!}{ \begin{tabular}{lccc|ccc} & \multicolumn{3}{c|}{\textbf{mAP\textsubscript{verb}}} & \multicolumn{3}{c}{\textbf{mAP\textsubscript{verb,noun}}} \\ \textbf{Method} & \textbf{IoU@50} & \textbf{IoU@30} & \textbf{IoU@10} & \textbf{IoU@50} & \textbf{IoU@30} & \textbf{IoU@10} \\ \hline SlowFast (RGB) + Faster-RCNN & 26.44 & 28.83 & 30.45 & 19.14 & 21.36 & 22.05 \\ SlowFast (Depth) + Faster-RCNN & \textbf{29.10} & \textbf{31.81} & \textbf{33.81} & \underline{21.37} & \underline{24.01} & \underline{25.01} \\ SlowFast (RGB-Depth) + Faster-RCNN & 26.49 & 28.88 & 30.51 & 19.20 & 21.42 & 22.12 \\ SlowFast (RGB-Gaze) + Faster-RCNN & 28.82 & \underline{31.56} & \underline{33.55} & \textbf{21.38} & \textbf{24.04} & \textbf{25.04} \\ SlowFast (Depth-Gaze) + Faster-RCNN & \underline{28.92} & 31.51 & 33.38 & 20.79 & 23.28 & 24.20 \\ SlowFast (RGB-Depth-Gaze) + Faster-RCNN & 28.51 & 31.10 & 32.97 & 20.65 & 23.12 & 24.03 \end{tabular} } \end{table*} \begin{figure}[] \centering \includegraphics[width=\columnwidth]{images/ehoi_qual.png} \caption{Qualitative results of the SlowFast (Depth) + Faster-RCNN method for the EHOI Detection task. On the left, the SlowFast (Depth) verb prediction; on the right, the active objects detected by Faster-RCNN. Wrong verb predictions and missed object detections are reported with dashed red bounding boxes.} \label{fig:ehoi_qual} \end{figure} Table~\ref{tab:active_det} shows the results obtained by the \textit{active} object detection baselines. The results highlight that the Hand-Object Detector \cite{Hands_in_contact_Shan20} is not able to generalize to the challenging domain offered by MECCANO Multimodal. All three variants of the Hand-Object Detector using the original object detector obtained an AP approximately equal to 11\% (first three rows of Table~\ref{tab:active_det}). Re-training the object detector on the MECCANO dataset allowed us to improve performance by a significant margin. In particular, using the standard distance threshold value, we obtained an AP of 20.18\%. If we consider the average distance as the threshold to discriminate \textit{active} and \textit{passive} objects, we obtain an AP of 33.33\%. Removing the distance threshold (last row of Table~\ref{tab:active_det}) allows outperforming all the previous results, obtaining an AP equal to 38.14\%. Note that, since no distance threshold is considered, this baseline consists of just using a Faster R-CNN object detector trained on the target context. This suggests that adapting the general object detector to the challenging domain of the proposed dataset is key to performance. Indeed, training the object detector to detect only \textit{active} objects in the scene already allows obtaining reasonable results, although there is still room for improvement. Table~\ref{tab:active_rec} reports the AP values obtained by the Faster R-CNN active object recognition baseline for each class, considering all the videos belonging to the test set of MECCANO. The last column shows, for each class, the AP value averaged over the test videos, and the last row reports the mAP value for the test set. The mAP was computed as the average of the mAP values obtained in each test video. The AP values in the last column show that large objects are easier to recognize (e.g., \textit{instruction booklet: 46.18\%; screwdriver: 60.50\%; tire: 58.91\%; rim: 50.35\%}).
The results suggest that the proposed dataset is challenging due to the presence of small objects. We report qualitative results in Figure~\ref{fig:active_objects_qual}. We leave the investigation of more specific approaches to active object detection to future studies. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{images/concept_AA.png} \caption{The goal of the action anticipation task is to predict egocentric actions from an observation of the past.} \label{fig:concept_action_ant} \end{figure*} \subsection{EHOI Detection} \label{sec:EHOI} The goal of this task is to detect all egocentric human-object interactions (EHOIs) in contact frames. As per the definition of EHOIs as $<$verb, objects$>$ pairs (see Equation~\ref{eq:1}), methods should detect and recognize all the \textit{active} objects in the scene, as well as the verb describing the action performed by the human. Following previous works \cite{Gupta2015VisualSR, Gkioxari2018DetectingAR}, \textit{``AP\textsubscript{role}''} is used as the evaluation measure for this task. Formally, a detected EHOI is considered a true positive if 1) the predicted object bounding box has an IoU of 0.5 or higher with respect to a ground truth annotation and 2) the predicted verb matches the ground truth. Note that only the \textit{active} object bounding box location (not the correct class) is considered in this measure. Since we want to also recognize the \textit{active} objects, we consider a variant of \textit{``AP\textsubscript{role}''} adding the following condition: 3) the predicted object class matches the ground truth. We call this measure \textit{``AP\textsubscript{verb,noun}''}\footnote{Note that \textit{``AP\textsubscript{noun}''}, which considers only conditions 1) and 3), is the same measure computed in Table~\ref{tab:active_rec}.}. We used both measures to evaluate our method. To better highlight the difference between the two measures, we will refer to \textit{``AP\textsubscript{role}''} as \textit{``AP\textsubscript{verb}''}. Moreover, we used different IoU thresholds (i.e., 0.5, 0.3 and 0.1) to compute the different \textit{``AP''} values. \subsubsection{Method} Our baseline is based on the combination of a SlowFast network \cite{fan2020pyslowfast}, trained to predict the verb of the EHOI considering a video clip sampled around the contact frame following the interaction temporal annotations, and Faster-RCNN \cite{ren2015faster}, which detects and recognizes all the \textit{active} objects in the frame, as shown in Figure~\ref{fig:EHOI_arch}. Similarly to the action recognition task, we explored the potential of the multimodal signals present in the MECCANO dataset, considering different instances of the SlowFast network which rely on the RGB, Depth and Gaze signals, similarly to what is described in Section~\ref{sec:action_recognition}. Note that, in this case, the SlowFast network has been trained on the 12 verb classes of the MECCANO dataset, which describe the interactions performed by the users, rather than on the action classes as done in Section~\ref{sec:action_recognition}. Moreover, due to the difference between the \textit{action} and \textit{interaction} concepts, we used the EHOI annotations, which are slightly different in terms of temporal boundaries with respect to the action annotations, as detailed in Section~\ref{sec:EHOI_ann}. For the object detector component, we used the same model trained for the active object recognition task.
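To make the evaluation protocol above explicit, the following Python sketch implements the true-positive test behind \textit{AP\textsubscript{verb}} and \textit{AP\textsubscript{verb,noun}} for a single prediction-ground truth pair; the greedy matching and precision-recall integration needed to compute the full AP values are omitted, and the dictionary layout is illustrative. \begin{verbatim}
# True-positive test behind AP_verb and AP_verb,noun (sketch).
# Boxes are (x1, y1, x2, y2); dict fields are illustrative only.
def iou(a, b):
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def ehoi_true_positive(pred, gt, iou_thr=0.5, check_noun=False):
    ok = iou(pred["box"], gt["box"]) >= iou_thr   # condition 1
    ok = ok and pred["verb"] == gt["verb"]        # condition 2
    if check_noun:                                # condition 3,
        ok = ok and pred["noun"] == gt["noun"]    # AP_verb,noun only
    return ok

pred = {"box": (10, 10, 50, 50), "verb": "take", "noun": "screwdriver"}
gt   = {"box": (12, 11, 52, 49), "verb": "take", "noun": "screw"}
print(ehoi_true_positive(pred, gt))                   # True (AP_verb)
print(ehoi_true_positive(pred, gt, check_noun=True))  # False
\end{verbatim}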
\begin{figure*}[t] \centering \includegraphics[width=\textwidth]{images/act_ant_qual.png} \caption{Qualitative results of the proposed approach based on RULSTM, which is composed of three branches: gaze, objects and hands.} \label{fig:act_ant_qual} \end{figure*} \subsubsection{Results} Table~\ref{tab:EHOI} reports the results for the EHOI detection task. The baseline which considers only the depth signal (second row) obtained the best results for all the mAP\textsubscript{verb} measures with different IoU values (29.10, 31.81 and 33.81). This can be motivated by the fact that the depth signal helps to focus the attention on the hands, which are most relevant to understand the motion and discriminate the different verbs. Moreover, the 3D network does not aim to predict the object classes, which makes the RGB signal less useful. Interestingly, the use of the depth signal and of the attention map computed from the gaze signal (fifth row) does not help the recognition of the interaction (29.10 versus 28.92), while using the attention map of the gaze with the RGB signal (fourth row) improves the recognition of the interactions, obtaining the second best performance considering the IoU@30 and IoU@10 measures, with mAP values of 31.56 and 33.55, respectively. If we consider mAP\textsubscript{verb,noun}, in which also the object class needs to be correctly predicted, the baseline which considers the RGB and Gaze signals (fourth row) obtained the best results considering all the IoU values (21.38, 24.04 and 25.04). In general, the mAP\textsubscript{verb,noun} values are lower than the mAP\textsubscript{verb} values because we also consider the condition of predicting the correct object class. Qualitative results of SlowFast (Depth) + Faster-RCNN are reported in Figure~\ref{fig:ehoi_qual}. Despite the promising performance of the proposed baseline, MECCANO Multimodal leaves room for further investigation of the proposed EHOI detection task, due to the challenging nature of the considered industrial-like domain. \subsection{Action Anticipation} \label{sec:Action_anticipation} The goal of the action anticipation task is to predict egocentric actions from an observation of the past (see Figure~\ref{fig:concept_action_ant}). Let \begin{math}A = [t_{s}, t_{e}]\end{math} be a video segment, where $t_{s}$ and $t_{e}$ are the start and end times of the action, respectively. The aim is to assign the correct action class $c_i \in C_a$ to the segment $A$ by observing a $t_{o}$ (observation time) seconds long video segment preceding the start time of the action $t_{s}$ by $t_{a}$ seconds (anticipation time). Following \cite{furnari2019rulstm}, we used Top-k accuracy and Mean Top-5 Recall as evaluation measures and considered different anticipation times. \begin{figure*}[] \centering \includegraphics[width=\textwidth]{images/NAO_task.png} \caption{The aim of the Next-Active Object Detection task is to detect and recognize all the objects which will be involved in a future interaction.} \label{fig:nao_task} \end{figure*} \subsubsection{Method} We adopted the RULSTM approach proposed in \cite{furnari2019rulstm, furnari2020rulstm} to address the action anticipation task. This model is composed of different branches which take as input different signals (RGB, optical flow and object-centric features). We chose this model due to its state-of-the-art performance and because it has been explicitly designed to work with multimodal observations.
We extended this method to exploit the multimodal signals present in the MECCANO dataset. In particular, the adopted baseline is composed of 5 branches, one for each signal: RGB, Depth, Gaze, object-centric features and hand-centric features. Depth features have been extracted by running SlowFast (Depth) (see Section~\ref{sec:baseline_act_rec}) trained on MECCANO. We computed object-centric features following \cite{furnari2019rulstm}, while gaze features have been obtained by weighting the object-centric features with the distance between the center of the object bounding boxes and the gaze position in the image. For the hand-centric branch, we used the hand annotations of the MECCANO dataset as input. Branches are trained and fused using the procedures explained in \cite{furnari2019rulstm}. Since we extracted the RGB features using a SlowFast network \cite{feichtenhofer2018slowfast}, which encodes both spatial and temporal features, we did not consider optical flow. \begin{table}[] \caption{The table reports the ablation study performed with the adopted baseline for action anticipation. All the combinations of the 5 branches representing the different input signals (RGB, Depth, Objects, Gaze and Hands) are considered (mt5r: Mean Top-5 Recall).} \label{tab:ablation_action_ant} \centering \resizebox{0.8\columnwidth}{!}{% \begin{tabular}{c|c|c|c|c|c} \textbf{RGB} & \textbf{Depth} & \textbf{OBJ} & \textbf{Gaze} & \textbf{Hands} & \textbf{mt5r} \\ \hline \textbf{\checkmark} & X & X & X & X & 22.88\% \\ X & \textbf{\checkmark} & X & X & X & 14.07\% \\ \textbf{\checkmark} & \textbf{\checkmark} & X & X & X & 14.12\% \\ X & X & \textbf{\checkmark} & X & X & 29.41\% \\ \textbf{\checkmark} & X & \textbf{\checkmark} & X & X & 29.03\% \\ X & \textbf{\checkmark} & \textbf{\checkmark} & X & X & 25.26\% \\ \textbf{\checkmark} & \textbf{\checkmark} & \textbf{\checkmark} & X & X & 21.74\% \\ X & X & X & \textbf{\checkmark} & X & 29.79\% \\ \textbf{\checkmark} & X & X & \textbf{\checkmark} & X & 29.63\% \\ X & \textbf{\checkmark} & X & \textbf{\checkmark} & X & 24.04\% \\ X & X & \textbf{\checkmark} & \textbf{\checkmark} & X & 31.46\% \\ \textbf{\checkmark} & \textbf{\checkmark} & X & \textbf{\checkmark} & X & 29.42\% \\ \textbf{\checkmark} & X & \textbf{\checkmark} & \textbf{\checkmark} & X & 32.01\% \\ \textbf{\checkmark} & \textbf{\checkmark} & \textbf{\checkmark} & \textbf{\checkmark} & X & 28.05\% \\ X & X & X & X & \textbf{\checkmark} & 30.06\% \\ \textbf{\checkmark} & X & X & X & \textbf{\checkmark} & 29.86\% \\ X & \textbf{\checkmark} & X & X & \textbf{\checkmark} & 24.43\% \\ X & X & \textbf{\checkmark} & X & \textbf{\checkmark} & 31.13\% \\ X & X & X & \textbf{\checkmark} & \textbf{\checkmark} & 31.33\% \\ \textbf{\checkmark} & \textbf{\checkmark} & X & X & \textbf{\checkmark} & 29.89\% \\ \textbf{\checkmark} & X & \textbf{\checkmark} & X & \textbf{\checkmark} & 31.31\% \\ \textbf{\checkmark} & X & X & \textbf{\checkmark} & \textbf{\checkmark} & 30.97\% \\ X & \textbf{\checkmark} & \textbf{\checkmark} & X & \textbf{\checkmark} & 27.69\% \\ X & \textbf{\checkmark} & X & \textbf{\checkmark} & \textbf{\checkmark} & 26.84\% \\ X & X & \textbf{\checkmark} & \textbf{\checkmark} & \textbf{\checkmark} & \textbf{32.25\%} \\ \textbf{\checkmark} & \textbf{\checkmark} & \textbf{\checkmark} & X & \textbf{\checkmark} & 29.21\% \\ \textbf{\checkmark} & \textbf{\checkmark} & X & \textbf{\checkmark} & \textbf{\checkmark} & 28.11\% \\ \textbf{\checkmark} & X & \textbf{\checkmark} & \textbf{\checkmark} & \textbf{\checkmark} & 31.30\% \\ X &
\textbf{\checkmark} & \textbf{\checkmark} & \textbf{\checkmark} & \textbf{\checkmark} & 27.77\% \\ \textbf{\checkmark} & \textbf{\checkmark} & \textbf{\checkmark} & \textbf{\checkmark} & \textbf{\checkmark} & 24.05\% \end{tabular} } \end{table} \begin{table}[] \caption{Results obtained for the action anticipation task considering different values of $t_a$ (anticipation time, in seconds).} \label{tab:act_ant} \resizebox{\columnwidth}{!}{% \begin{tabular}{lcccccccc} \multicolumn{1}{c}{\textbf{$t_a$}} & \textbf{2} & \textbf{1.75} & \textbf{1.50} & \textbf{1.25} & \textbf{1} & \textbf{0.75} & \textbf{0.50} & \textbf{0.25} \\ \hline \multicolumn{1}{l|}{\textbf{Top-1 Acc.}} & 23.37 & 23.48 & 23.30 & 23.97 & 24.08 & 24.50 & 25.60 & 28.87 \\ \multicolumn{1}{l|}{\textbf{Top-5 Acc.}} & 54.65 & 55.99 & 56.56 & 57.73 & 58.23 & 59.96 & 61.31 & 63.40 \\ \multicolumn{1}{l|}{\textbf{M. Top-5 Rec.}} & 18.57 & 18.73 & 21.24 & 21.26 & 22.38 & 24.67 & 24.93 & 26.01 \end{tabular} } \end{table} \subsubsection{Results} We performed an ablation study considering several instances of the adopted baseline, focusing on the different combinations of the five branches (see Table~\ref{tab:ablation_action_ant}). The results show how hard it is to choose a combination of different signals to solve this task. For example, the combination of the RGB and Depth signals (third row) decreases the performance with respect to using only the RGB signal (first row). Also, using all signals simultaneously (last row) does not guarantee the best performance, highlighting that it is not sufficient to use all the available signals to solve this task in this challenging environment. We found that the best approach, which obtained a Mean Top-5 Recall of 32.25\% on the validation set of the MECCANO dataset, is composed of three branches which take as input the gaze signal, the object-centric features and the hand-centric features. Table~\ref{tab:act_ant} reports the results obtained by this baseline on the test set of the MECCANO dataset. We evaluated it considering different anticipation times ($t_a$) ranging from 2 seconds to 0.25 seconds. Qualitative results are shown in Figure~\ref{fig:act_ant_qual}. \subsection{Next-Active Object Detection} \label{sec:NAO} The aim of the Next-Active Object Detection task is to detect and recognize all the objects which will be involved in a future interaction (see Figure~\ref{fig:nao_task}). Let \begin{math}I = [t_{s}, t_{e}]\end{math} be an EHOI segment, where $t_{s}$ and $t_{e}$ are the start and end times of the interaction, respectively, and let \begin{math} O_{act} = \{o_1, o_2, ..., o_n\} \end{math} be the set of active objects. The goal is to predict the set of active objects involved in the interaction $I$, \begin{math} O_I = \{o_1, o_2, ..., o_m\} \end{math} with $O_I \subseteq O_{act}$, and their bounding boxes \begin{math} B_I = \{b_{o_1}, b_{o_2}, ..., b_{o_m}\} \end{math}, where $b_{o_i} = (x,y,w,h)$ is the bounding box related to the object $o_i$, by observing a $t_{o}$ (observation time) seconds long temporal segment preceding the start time of the interaction $t_{s}$ by $t_{a}$ seconds (anticipation time). For evaluation purposes, we used the mean Average Precision (mAP) measure, which considers both the class and the accuracy of the spatial detection of the objects. \subsubsection{Method} We explored the task adopting different simple baselines based on the Faster R-CNN object detector \cite{ren2015faster}.
The first baseline has been trained using only the active object annotations (the same ones used for the active object detection and recognition tasks). The second baseline has been trained using only the next-active object annotations. The third baseline has been trained with the active object annotations and fine-tuned with the next-active object annotations. The fourth baseline has been trained using both the active and next-active object annotations. \begin{table}[] \caption{Results obtained for the next-active object detection task.} \label{tab:NAO} \resizebox{\columnwidth}{!}{% \begin{tabular}{lcc} \textbf{Method} & \textbf{mAP} & \textbf{mAP50} \\ \hline \multicolumn{1}{l|}{Faster R-CNN (active objects)} & \underline{14.10} & \underline{26.00} \\ \multicolumn{1}{l|}{Faster R-CNN (next-active objects)} & 9.90 & 18.20 \\ \multicolumn{1}{l|}{Faster R-CNN (active + finetuning next-active objects)} & 11.60 & 19.90 \\ \multicolumn{1}{l|}{Faster R-CNN (active + next-active objects)} & \textbf{14.20} & \textbf{26.40} \end{tabular} } \end{table} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{images/qual_detection_nao.png} \caption{Qualitative results of the best approach based on Faster-RCNN. The dashed bounding box indicates a ground truth object which has not been detected.} \label{fig:qual_nao} \end{figure} \subsubsection{Results} Table~\ref{tab:NAO} shows the obtained results. Using only the next-active object annotations is not enough to obtain reasonable performance on the prediction of the next-active objects. Training the model using both the active and next-active object annotations allows obtaining the best performance in terms of both mAP (14.20) and mAP50 (26.40). We report qualitative results in Figure~\ref{fig:qual_nao}. In general, the task needs to be explored in more depth due to the challenging nature of the MECCANO dataset. \section{Introduction} \label{sec:intro} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{images/Toy_Model.png} \caption{Toy model built by the subjects interacting with 2 tools, 49 components and the instruction booklet.} \label{fig:toy_model} \end{figure} Understanding human behavior from an egocentric perspective allows building intelligent systems able to assist humans equipped with a camera (e.g., Microsoft HoloLens 2\footnote{\url{https://www.microsoft.com/en-us/hololens}}, Vuzix Blade\footnote{\url{https://www.vuzix.com/products/blade-smart-glasses}}, Nreal Light\footnote{\url{https://www.nreal.ai/light/}}, etc.) in many contexts, including cultural sites \cite{RagusaPRL, cucchiara2014visions, vedi2019}, home scenarios \cite{You-Do_Damen_14} and industrial environments \cite{DeepVisionShield_Colombo19}. For example, recognizing human-object interactions in an industrial environment from First Person Vision (FPV) can be useful to monitor the use of machines, to schedule calibration operations, to suggest to the operator how to use a specific machine or object, as well as to issue notifications about actions that may have been missed in a production pipeline \cite{miss_actions_shapiro}. Furthermore, anticipating what a worker will do and which objects they will interact with provides information to improve safety in a factory, for example by notifying the user with an alert in case a dangerous action or interaction is anticipated.
Many recent works investigated human behavior understanding considering different tasks, such as action recognition \cite{feichtenhofer2018slowfast, TwoStream_convolutional_action_Zisserman_14, Two-Stream_Zisserman, Zhou2018TemporalRR, kazakos2019TBN, TSM_2019}, object detection \cite{girshick2014rich, girshick2015fast, ren2015faster, yolov3}, human-object interaction detection \cite{Gkioxari2018DetectingAR, Gupta2015VisualSR, Hands_in_contact_Shan20, Nagarajan2020EGOTOPOEA}, action anticipation \cite{Felsen_what_will_happen_17, Gao2017REDRE, furnari2019rulstm, slowfast_rulstm_ballan}, as well as the detection of the next active objects \cite{Bertasius2017FirstPA, Furnari2017NextactiveobjectPF, JIANG2021212, Ego4D2021}. Advances in these fields have been obtained thanks to the availability of public datasets \cite{Imagenet, lin2014COCO, Gupta2015VisualSR, HICO_Chao} considering different contexts, such as kitchens \cite{Damen2018EPICKITCHENS,Damen2020RESCALING, Li2018_EGTEA-GAZE+, Torre2009CMU-MMAC}, homes and offices \cite{Ramanan_12_ADL, thu-read_17, You-Do_Damen_14, Ortis2017OrganizingEV} and different daily-living scenarios \cite{Ego4D2021}, and relying on different modalities, such as depth \cite{thu-read_17} and gaze \cite{Li2018_EGTEA-GAZE+}. While these contexts provide interesting test-beds to study human behavior in general, the industrial domain (e.g., factories, building sites, mechanical workshops, etc.) has never been explored from FPV. This is mainly due to the fact that data acquisition in industrial domains is difficult because of privacy issues and the need to protect industrial secrets \cite{privacy_protection_Yee}. Nowadays, many wearable glasses are able to capture different signals, such as IMU data, depth maps, as well as pupil fixations and audio (e.g., Microsoft HoloLens 2, Nreal Light, Magic Leap). Multimodal data are of particular importance because they can be used to represent the same observed scene with complementary information. Moreover, each signal provides additional information about the observed environment and the camera wearer, such as semantic information (RGB), 3D information about the environment and the objects (depth), as well as the user's attention (gaze). Despite the availability of such multimodal signals in many wearable platforms available on the market, current datasets in egocentric vision seldom include rich multimodal signals. In this paper, we present MECCANO Multimodal, which comprises multimodal egocentric data acquired in an industrial-like domain. This dataset extends the previous MECCANO egocentric video dataset \cite{ragusa2020meccano} considering two extra modalities (i.e., depth and gaze signals), a new set of annotations (i.e., temporal action annotations and spatial bounding boxes of hands and next-active objects) and a new benchmark addressing 5 different tasks aimed at understanding human behavior by exploiting different signals (i.e., RGB, depth and gaze). To collect the dataset, we asked 20 subjects to build a toy model of a motorbike (see Figure~\ref{fig:toy_model}) which is composed of 49 components with different shapes and sizes. Similarly to what happens in an industrial scenario, the subjects interact with tools such as a screwdriver and a wrench, as well as with tiny objects such as screws and bolts, while executing a task involving sequential actions (e.g., take wrench, tighten bolt, put down wrench).
Despite the fact that this scenario is a simplification of what can be found in real industrial settings, it is still fairly complex to model, as our experiments show. The dataset has been acquired in two countries (Italy and the United Kingdom) using a custom headset. The multimodality is characterized by the gaze signal, depth maps and RGB videos acquired simultaneously with two different devices (additional details are reported in Section~\ref{sec:dataset}). We acquired 20 RGB videos associated with the 20 depth videos using an Intel RealSense SR300\footnote{https://ark.intel.com/content/www/us/en/ark/products/92329/intel-realsense-camera-sr300.html}. In addition, we captured the gaze signal using a Pupil Core device\footnote{https://pupil-labs.com/products/core/} and synchronized it with the RGB videos. MECCANO has been annotated to address different tasks related to human behavior understanding. Specifically, we provide temporal annotations indicating the start and the end times of each action performed by the participants, as well as the contact time, which indicates the first frame of contact between the hand and the object, i.e., when the object changes its state from \textit{passive} to \textit{active}. We also spatially annotated the objects involved in the interactions (i.e., active objects) and the hands of the subjects with bounding boxes. Moreover, starting from the active object annotations, we labeled the same objects in past frames to explore the task of predicting the future intentions of subjects by detecting and recognizing the next active objects (i.e., the next objects the user is going to interact with). The dataset is publicly released at the following link: \url{https://iplab.dmi.unict.it/MECCANO/}. To highlight the usefulness of the proposed multimodal dataset, we release baseline experiments related to five fundamental tasks focused on understanding human behavior from first person vision in the considered industrial-like context: 1) Action Recognition, 2) Active Objects Detection and Recognition, 3) Egocentric Human-Objects Interaction Detection, 4) Action Anticipation and 5) Next-Active Objects Detection and Recognition. Some of these tasks have already been treated in the state of the art, while egocentric human-object interaction (EHOI) detection and next-active object (NAO) detection and recognition remain underexplored from the egocentric point of view. We revisit these tasks considering the FPV paradigm in Sections~\ref{sec:EHOI} and \ref{sec:NAO}, respectively. Results demonstrate that solving these problems in industrial settings from an egocentric point of view is challenging despite the availability of multimodal signals. In sum, the contributions of this work are as follows: 1) we present MECCANO Multimodal, a new challenging egocentric multimodal dataset related to the industrial domain; 2) we study in detail the HOI task considering the Egocentric Vision paradigm (EHOI); 3) we study the Next-Active Object Detection task from the egocentric perspective; 4) we propose a benchmark aimed at studying human behavior in the considered industrial-like scenario by exploring five different tasks, showing that current state-of-the-art approaches are not sufficient to solve the considered problems in industrial settings. The remainder of the paper is organized as follows. In Section~\ref{sec:related_work} we discuss related work. The proposed MECCANO Multimodal dataset is presented in Section~\ref{sec:dataset}.
Section~\ref{sec:benchmark} describes the benchmark and discusses the results. We conclude the paper and discuss insights for future work in Section~\ref{sec:conclusion}. \section{Related Work} \label{sec:related_work} Our work is related to different lines of research, including the collection of benchmark datasets, action recognition, HOI detection, egocentric HOI detection, action anticipation and next-active object detection. The following sections discuss the relevant works belonging to the aforementioned research areas. \subsection{Datasets for Human Behavior Understanding} Different third person vision datasets have been proposed to study human behavior understanding through the tasks of Human-Object Interaction (HOI) detection \cite{Gupta2015VisualSR, HICO_Chao, PPDM_liao2019} and action recognition \cite{caba2015activitynet, Kinetics_2017, Kinetics_Carreira2019ASN, Something_Something_Goyal}. These datasets are often composed of RGB images or videos, as well as of other modalities such as depth \cite{Li2010ActionRB, Wang2012MiningAE, Sung_CAD60, Koppula_cad120, Chen_UTD, Rahmani2016HistogramOO, Hu_3DHOI, Liu2020NTUR1}, especially after the release of the Microsoft Kinect \cite{zhang_kinect}. The authors of~\cite{Gupta2015VisualSR} annotated the COCO dataset~\cite{lin2014COCO} with verbs (V-COCO) to study the problem of detecting HOIs. V-COCO includes 10,346 images annotated with 26 actions. HICO-Det \cite{HICO_Chao} is a large-scale dataset composed of static images used as a benchmark to study the task of HOI detection. This dataset includes 47,766 images and has been annotated with 117 verbs and 80 objects (the same objects as COCO). While these datasets focused on common and general actions, the HOI-A dataset~\cite{PPDM_liao2019} focused on a subset of actions, such as \textit{smoking cigarette} or \textit{talk on mobile phone}, which can be considered dangerous while driving. The dataset is composed of 38,668 images annotated with 10 verbs and 11 object classes. ActivityNet \cite{caba2015activitynet} is a large-scale dataset composed of videos depicting activities that are related to how humans spend their time in their daily lives, such as \textit{walking the dog} or \textit{hand-washing clothes}. The dataset is composed of a total of 849 video hours covering 203 activity classes. The authors of~\cite{Kinetics_2017} and \cite{Kinetics_Carreira2019ASN} presented Kinetics, a third person video dataset related to human actions. The dataset is composed of 700 human action classes which include human-object interactions, such as \textit{play instrument}, and human-human interactions, such as \textit{shake hands}. For each action, there are at least 600 video clips taken from YouTube videos. The authors of~\cite{Something_Something_Goyal} proposed Something-Something, a video dataset which includes low-level concepts (``\textit{something-something}'') to represent simple everyday aspects of the world. It contains 108,499 short videos (from 2 to 6 seconds) annotated with 174 textual descriptions such as ``turning \textit{something} upside down'' or ``spilling \textit{something} next to \textit{something}''. Other works have considered the egocentric scenario investigating different domains. Egocentric activities related to daily living have been studied by the authors of~\cite{ADL_PirsiavashR12}, who proposed the ADL dataset. The dataset is composed of one million frames acquired by 20 people performing a set of 18 daily-living actions in their own apartments.
The 3D~structure of the scenes has been explored by the authors of~\cite{moghimi_ego_RGBD}, who proposed a dataset composed of 5 sequences acquired using an RGB-D camera by 4 different users. The authors of~\cite{thu-read_17} proposed a video-based RGB-D egocentric dataset (THU-READ) including different types of daily-life actions. The egocentric videos have been captured in 5 scenarios, such as laboratory, bathroom, conference room, dormitory and restaurant, by 8 different subjects performing 40 different actions. The problem of 3D hand-object action recognition has been addressed by the authors of~\cite{GarciaHernando2018FirstPersonHA}, who released the Daily hand-object actions dataset containing 1175 videos belonging to 45 action categories. The dataset has been acquired by 6 actors over 3 different scenarios. A total of 105,459 RGB-D frames have been acquired and annotated with hand poses and action categories. Some works explored the kitchen domain from the first person point of view. Among these, the authors of~\cite{Torre2009CMU-MMAC} released the CMU Multi-Modal Activity Database (CMU-MMAC) to study human activities in a kitchen environment. The authors built a kitchen and acquired egocentric videos from 5 different subjects cooking 5 recipes. They captured RGB videos using different cameras, audio and motion capture information. EPIC-Kitchens and its extensions \cite{Damen2018EPICKITCHENS, Damen2020Collection, Damen2020RESCALING} are a series of egocentric datasets focused on unscripted activities related to kitchens. In particular, EPIC-Kitchens-55 \cite{Damen2018EPICKITCHENS} is composed of 432 videos annotated with 352 object classes and 125 different verb classes. EPIC-Kitchens-100 \cite{Damen2020Collection} is an extension of EPIC-Kitchens-55 in terms of videos (700), environments (45) and hours (100). Along with the dataset, the authors proposed 6 challenges to study human behavior understanding in kitchens: action recognition, action detection, action anticipation, domain adaptation for action recognition, object detection and multi-instance retrieval. The authors of~\cite{Li2018_EGTEA-GAZE+} studied egocentric video action recognition, considering both RGB and gaze signals to determine what a person is doing (action recognition) and where they are looking (gaze estimation). They presented the EGTEA Gaze+ dataset, where 32 subjects perform 7 different meal preparation tasks in different kitchens. EGTEA Gaze+ is composed of 106 action classes and includes gaze information collected at every frame. \\ Depth and gaze are not the only additional signals considered in past works: sensor data such as accelerometer and gyroscope measurements have also been used to recognize egocentric activities. The authors of~\cite{Song_sensor_data} captured a dataset of egocentric videos using a Google Glass, which acquired RGB videos and sensor information. In particular, 200 short sequences have been acquired by 20 different subjects who performed daily activities. The authors of~\cite{Rogez_object_grasp} focused on object manipulation and proposed the Grasp UNderstanding (GUN-71) dataset, which is composed of 12,000 RGB-D images labeled with 71 grasp classes. The videos have been acquired by 8 different subjects who performed different grasps on personal objects in 5 different houses. The camera used is a chest-mounted Intel Senz3D\footnote{https://it.creative.com/p/archived-products/blasterx-senz3d}, which is a webcam paired with a depth sensor.
Beyond the aforementioned datasets, which considered only one extra modality in addition to the RGB signal, the authors of~\cite{Kothari2020GazeinwildAD} proposed the Gaze-in-Wild dataset, which comprises both gaze and depth streams. This dataset has been acquired by 19 participants who performed 4 activities: indoor navigation, ball catching, object search and tea making. They used a Pupil Labs eye tracker to acquire the gaze signal, an MPU to obtain IMU data and a ZED stereo camera to acquire the depth maps. Recently, Ego4D, a massive-scale egocentric video dataset, has been released \cite{Ego4D2021}. It has been acquired by 931 camera wearers from 9 different countries. Ego4D comprises videos, audio, 3D meshes of the environment, eye gaze, stereo and videos acquired by multiple egocentric cameras. The data has been collected in multiple domains and comprises different activities such as people playing cards, working at a desk, cleaning the garden, cooking or practicing a musical instrument. In addition to the dataset, \cite{Ego4D2021} presented five benchmarks focused on egocentric perception in the past, present and future. Inspired by the first version of the MECCANO dataset \cite{ragusa2020meccano}, the authors of~\cite{Sener2022Assembly101AL} proposed Assembly101, a procedural activity dataset comprising multi-view videos in which subjects assemble and disassemble toys. Contextually, they benchmarked three action understanding tasks (i.e., action recognition, action anticipation and temporal segmentation) and proposed a new task related to mistake detection. Despite the similar setup to MECCANO, we focus on the multimodal nature of the acquired data. Moreover, they acquired egocentric videos with monochrome cameras using a device similar to the Oculus Quest VR headset. In addition, Assembly101 can address only tasks related to the actions performed by the users. It is worth noting that previous egocentric datasets have considered scenarios related to kitchens, offices, and daily-life activities and that they have generally tackled the action recognition task rather than EHOI detection. Table~\ref{tab:datasets} compares the aforementioned datasets with the proposed MECCANO Multimodal dataset, which is a substantial extension of the previous MECCANO dataset \cite{ragusa2020meccano}. As shown in Table~\ref{tab:datasets}, MECCANO Multimodal is the first egocentric multimodal dataset comprising both gaze and depth signals acquired in an industrial-like domain. Moreover, it has been explicitly annotated to tackle different tasks with different modalities, useful to build a real system able to support humans in the industrial domain: 1) Action Recognition, 2) Active Object Detection and Recognition, 3) Egocentric Human-Object Interaction, 4) Action Anticipation and 5) Next-Active Object Detection. \begin{table*}[] \caption{Comparison of MECCANO with other datasets. AA: Action Anticipation. AD: Action Detection. AOD: Active Object Detection. AOR: Active Object Recognition. AR: Action Recognition. AVD: Audio-Video Diarization. AVL: Audio-Video Localization. AVT: Audio-Video Transcription. DA-AR: Domain Adaptation for Action Recognition. EHOI: EHOI Detection. FHP: Future Hand Prediction. HOI: HOI Detection. HPE: Hand Pose Estimation. LAM: Looking-at-Me. LOC: Localization. MD: Mistake Detection. MQ: Moment Queries. MR: Multi-Instance Retrieval. NAO: Next-Active Objects Detection. NLQ: Natural Language Queries. OD: Object Detection. OSCC: Object State Change Classification.
PNRTL: Point-of-No-return Temporal Localization. SCOD: State Change Object Detection. S\&LTA: Short and Long Term Anticipation. TAS: Temporal Action Segmentation. TTM: Talking-to-Me. VQ2D\&3D: Visual Queries with 2D\&3D Localization.} \label{tab:datasets} \resizebox{\textwidth}{!}{% \setlength\tabcolsep{2pt} \begin{threeparttable} \begin{tabular}{lcccllcccccccc} \multicolumn{1}{c}{\textbf{Dataset}} & \textbf{Settings} & \textbf{EGO?} & \textbf{Video?} & \multicolumn{1}{c}{\textbf{Signals}} & \multicolumn{1}{c}{\textbf{Tasks}} & \textbf{Year} & \textbf{Frames} & \textbf{Sequences} & \textbf{AVG. video duration} & \textbf{Action classes} & \textbf{Object classes} & \textbf{Object BBs} & \textbf{Participants} \\ \hline MECCANO Multimodal & Industrial-like & \checkmark & \checkmark & RGB, depth, gaze & \begin{tabular}[c]{@{}l@{}}EHOI, AR, AOD, AOR,\\ AA, NAO\end{tabular} & 2022 & 299,376 & 20 & 20.79 min & 61 & 20 & 307,601 & 20 \\ \hline Assembly101 \cite{Sener2022Assembly101AL} & Industrial-like & \checkmark & \checkmark & RGB, multi-view, 3D hand-pose & AR, AA, TAS, MD & 2022 & 20M & 362 & 7.10 min & 1380 & 90 & 0 & 53 \\ EGO4D\tnote{1} \cite{Ego4D2021} & Multi Domain & \checkmark & \checkmark &\begin{tabular}[c]{@{}l@{}}RGB, Audio, 3D environments, \\ stereo, gaze, IMU, multi-view\end{tabular} &\begin{tabular}[c]{@{}l@{}}VQ2D\&3D, NLQ, MQ, PNRTL, \\ SCOD, OSCC, AVD, AVT, AVL,\\ LAM, TTM, S\&LTA, FHP\end{tabular} & 2022 & 418M\tnote{2} & 9650 & 24.11 min & 113\tnote{3} & 449\tnote{3} & 295,104\tnote{4}& 931 \\ EPIC-KITCHENS-100 \cite{Damen2020RESCALING} & Kitchens & \checkmark & \checkmark & RGB & AR, AD, AA, DA-AR, MR & 2021 & 20M & 700 & N/A & 97 & 300 & N/A & 37 \\ Gaze-in-wild \cite{Kothari2020GazeinwildAD} & Daily activities & \checkmark & \checkmark & RGB, depth, gaze & AR & 2020 & N/A & N/A & N/A & 4 & 0 & 0 & 19 \\ EGTEA Gaze+ \cite{Li2018_EGTEA-GAZE+} & Kitchens & \checkmark & \checkmark & RGB, gaze & AR & 2018 & 2.4M & 86 & 0.05 min & 106 & 0 & 0 & 32 \\ Daily Hand-Object Actions \cite{GarciaHernando2018FirstPersonHA} & Daily activities & \checkmark & \checkmark & RGB, depth & AR, HPE & 2018 & 105,459 & 1175 & 0.05 min & 45 & 26 & N/A & 6 \\ THU-READ \cite{thu-read_17}& Daily activities & \checkmark & \checkmark & RGB, depth & AR & 2017 & 343,626 & 1920 & 7.44 min & 40 & 0 & 0 & 8 \\ Multimodal Egocentric Activity \cite{Song_sensor_data} & Daily activities & \checkmark & \checkmark & RGB, sensor data & AR & 2016 & 30,000 & 200 & 0.25 min & 20 & 0 & 0 & 20 \\ GUN-71 \cite{Rogez_object_grasp} & Daily activities & \checkmark & \checkmark & RGB, depth & AR & 2015 & 12,000 & N/A & N/A & 71 & 28 & 0 & 8 \\ Wearable Computer Vision System \cite{moghimi_ego_RGBD} & Daily activities & \checkmark & \checkmark & RGB, depth & AR & 2014 & N/A & 5 & N/A & 12 & 0 & 0 & 4 \\ ADL \cite{ADL_PirsiavashR12} & Daily activities & \checkmark & \checkmark & RGB & AR, AOR & 2012 & 1.0M & 20 & 30.0 min & 32 & 42 & 137,780 & 20 \\ CMU \cite{Torre2009CMU-MMAC} & Kitchens & \checkmark & \checkmark & RGB & AR & 2009 & 200,000 & 16 & 15.0 min & 31 & 0 & 0 & 16 \\ \hline NTU RGB+D 120 \cite{Liu2020NTUR1} & General & X & \checkmark & RGB, depth & AR & 2020 & 8M & 114,480 & N/A & 120 & 0 & 0 & 106 \\ UTD-MHAD \cite{Chen_UTD} & General & X & \checkmark & RGB, depth, sensor data & AR & 2017 & N/A & 861 & N/A & 27 & 0 & 0 & 8 \\ SYSU 3D Human-Object Interaction \cite{Hu_3DHOI} & General & X & \checkmark & RGB, depth & AR & 2017 & N/A & 480 & N/A & 12 & 0 & 0 & 40 \\ Something-Something \cite{Something_Something_Goyal}
& General & X & \checkmark & RGB & AR, HOI & 2017 & 5.2M & 108,499 & 0.07 min & 174 & N/A & 318,572 & N/A \\ Kinetics \cite{Kinetics_2017} & General & X & \checkmark & RGB & AR & 2017 & N/A & 455,000 & 0.17 min & 700 & 0 & 0 & N/A \\ UWA3D Multiview Activity \cite{Rahmani2016HistogramOO}& General & X & \checkmark & RGB, depth & AR & 2016 & N/A & 1200 & N/A & 30 & 0 & 0 & 10 \\ ActivityNet \cite{caba2015activitynet} & Daily activities & X & \checkmark & RGB & AR & 2015 & 91.6M & 19,994 & 2.55 min & 200 & N/A & N/A & N/A \\ CAD-120 \cite{Koppula_cad120} & General & X & \checkmark & RGB, depth & AR, AOR & 2013 & 61,585 & 120 & 0.28 min & 10 & 12 & N/A & 4 \\ MSRDailyActivity3D \cite{Wang2012MiningAE} & Daily activities & X & \checkmark & RGB, depth & AR & 2012 & N/A & 320 & N/A & 16 & 0 & 0 & N/A \\ Human Activity Detection \cite{Sung_CAD60} & Daily activities & X & \checkmark & RGB, depth & AR & 2011 & N/A & N/A & 0.75 min & 12 & 0 & 0 & 4 \\ MSR-Action3D \cite{Li2010ActionRB} & General & X & \checkmark & depth & AR & 2010 & 23,797 & 402 & 0.07 min & 20 & 0 & 0 & 7 \end{tabular} \begin{tablenotes} \item[1] The statistics have been obtained on May 15, 2022. \item[2] The number of frames has been obtained considering the canonical videos acquired at 30 fps. \item[3] This number has been obtained from the ``Long-Term Anticipation" task considering both Training and Validation sets. \item[4] The number of bounding boxes has been obtained considering the ``State Change Object Detection" task. \end{tablenotes} \end{threeparttable} } \end{table*} \subsection{Human Behavior Understanding Tasks} In this section, we discuss the state of the art, focusing on the tasks most relevant to understanding human behavior. \subsubsection{Action Recognition} Video action recognition has been thoroughly studied by researchers, especially from the third person view. Some works \cite{Learning_actions_movies_Laptev, Human_detection_Flow_Schmid_06, TwoStream_convolutional_action_Zisserman_14, Two-Stream_Zisserman, temporal_segNet} combined classic approaches based on hand-crafted features, such as optical flow, with deep networks, representing the motion of actions using two-stream networks. 3D ConvNets are commonly used to encode both spatial and temporal dimensions in a unified way \cite{ Conv_spatio-temporal_Taylor, Learning_spatio-temporal_Paluri, Carreira2017QuoVA}. Long-term filtering and pooling have been used to represent actions over their full temporal extent \cite{Long-term_action_Schmid, Two-Stream_Zisserman, temporal_segNet, Zhou2018TemporalRR}. Other works control the spatial and temporal dimensions separately, factoring convolutions into separate 2D spatial and 1D temporal filters \cite{Spatiotemporal_residual_action, closer_spatiotemp_action, rethinking_spatiotemporal, Learning_spatiotemporal_pseudo}. SlowFast networks \cite{feichtenhofer2018slowfast} avoid using pre-computed optical flow and encode the motion of actions with a ``fast'' pathway (which operates at a high frame rate), paired with a ``slow'' pathway which captures semantics (operating at a low frame rate). The authors of~\cite{Zhou2018TemporalRR} introduced a network module called Temporal Relation Network (TRN) to learn temporal relations between video frames at multiple time scales. The authors of~\cite{TSM_2019} proposed a temporal shift module (TSM), which shifts part of the feature channels along the temporal dimension. This module allows 2D architectures to obtain performance comparable to 3D CNNs.
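To make the shift operation concrete, the following is a minimal sketch in PyTorch of the channel-shift idea behind TSM; the tensor layout and the \texttt{fold\_div} fraction follow the description in the original paper, but this is an illustrative re-implementation, not the authors' code:
\begin{verbatim}
import torch

def temporal_shift(x: torch.Tensor, fold_div: int = 8) -> torch.Tensor:
    # x holds clip features shaped (N, T, C, H, W); 1/fold_div of the
    # channels is shifted one step towards the past, another 1/fold_div
    # one step towards the future, and the rest is left untouched.
    n, t, c, h, w = x.shape
    fold = c // fold_div
    out = torch.zeros_like(x)
    out[:, :-1, :fold] = x[:, 1:, :fold]                  # shift back
    out[:, 1:, fold:2 * fold] = x[:, :-1, fold:2 * fold]  # shift forward
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]             # no shift
    return out

# e.g., a batch of 2 clips, 8 frames, 64 channels, 14x14 feature maps
shifted = temporal_shift(torch.randn(2, 8, 64, 14, 14))
\end{verbatim}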
Inspired by feature selection methods, the authors of~\cite{Feichtenhofer2020X3DEA} presented a family of video networks (X3D) which expand a 2D image classification architecture into a spatiotemporal one by expanding along multiple possible axes, such as space, time, width and depth. The authors of~\cite{Hussein2019TimeceptionFC} revisited the definition of \textit{activity} and restricted it to \textit{complex action}, i.e., a set of simple one-actions which compose the activity (e.g., \textit{cooking a meal} can be considered a set of one-actions: \textit{get, cook, put and wash}). Recently, the action recognition task has been addressed considering multi-modal signals. For example, the authors of~\cite{Shi_2021_ICCV} considered audio, visual and textual information to recognize actions using graph convolutional neural networks (GCNs). Previous works also investigated egocentric action recognition by adapting third person vision approaches to the first person scenario \cite{TSM_2019, Zhou2018TemporalRR, feichtenhofer2018slowfast, Damen2018EPICKITCHENS}. In this work, we assess the performance of state-of-the-art action recognition methods on the proposed MECCANO dataset, adopting the SlowFast network \cite{feichtenhofer2018slowfast} as a baseline. \subsubsection{HOI Detection} Previous works have investigated HOI detection mainly from a third person vision point of view. The authors of~\cite{Gupta2015VisualSR} were the first to explore the HOI detection task, annotating the COCO dataset \cite{lin2014COCO} with verbs. The authors proposed a method to detect people performing actions and to localize the objects involved in the interactions in still images. The authors of~\cite{Gkioxari2018DetectingAR} proposed a human-centric approach based on a three-branch architecture (InteractNet) instantiated according to the classic definition of HOI in terms of a $<$human, verb, object$>$ triplet. This approach analyzes each human-object pair detected with an object detector~\cite{ren2015faster} using a heat map to represent their relationship. Some works~\cite{Qi2018LearningHI, Chao2018LearningTD, RPN_Zhou} explored HOI detection using graph convolutional neural networks after detecting humans and objects in the scene. Recent works \cite{PPDM_liao2019, Wang_InteractionPoints_2020_CVPR} represented the relationship between humans and objects as the intermediate point connecting the centers of the human and object bounding boxes. The aforementioned works addressed the problem of HOI detection in the third person vision domain. In this work, we look at the task of HOI detection from an egocentric perspective considering the proposed MECCANO Multimodal dataset. \subsubsection{Tasks related to EHOI Detection} The problem of Human-Object Interaction (HOI) detection has been systematically investigated only from the third person point of view. Previous works have considered tasks similar to Egocentric Human-Object Interaction (EHOI) detection, due to the limited availability of egocentric datasets explicitly labelled for this task. Some studies have modeled the relations between entities for interaction recognition as object affordances~\cite{Hotspots_Grauman19, Nagarajan2020EGOTOPOEA, affordance_Fang18}. Other studies tackled tasks related to EHOI recognition by proposing hand-centric methods \cite{Cai2016UnderstandingHM, Lending_Hand_Bambach_15, Hands_in_contact_Shan20, kwon2021h}.
The authors of~\cite{Hands_in_contact_Shan20} proposed to detect and localize hands in the scene, distinguishing left from right hands. Objects are classified into two classes: \textit{active} or \textit{passive}. In particular, if an object is in contact with at least one hand, it is considered \textit{active}; otherwise, it is considered \textit{passive}. The authors of~\cite{Li_Adaptive_2020_CVPR} proposed to search network structures with differentiable architectures, constructing adaptive structures for different videos to facilitate adaptive interaction modeling. The method has been evaluated on the Something-Something dataset \cite{Something_Something_Goyal}, which contains egocentric-like videos. The authors of~\cite{kwon2021h} proposed a unified approach to recognize hand-object interactions, simultaneously predicting the 3D pose of the two interacting hands and the 6D pose of the manipulated objects. Although these works have considered tasks related to human-object interaction from an egocentric point of view, the EHOI detection task has not yet been studied systematically. In this work, we formalize the task of EHOI detection and focus on this problem in the industrial domain, considering the proposed MECCANO dataset. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{images/Meccano_Dataset.png} \caption{Examples of data acquired by the 20 different participants in two countries (Italy, United Kingdom).} \label{fig:dataset} \end{figure*} \subsubsection{Action Anticipation} The task of action anticipation has been investigated from the third person point of view \cite{Hierarchical_Repr_Savarese_14, Felsen_what_will_happen_17, Gao2017REDRE, Zeng2017VisualFB}. The authors of~\cite{Hierarchical_Repr_Savarese_14} proposed a new representation of actions called \textit{hierarchical movemes} to anticipate future actions from still images or short video clips. They encode the atomic components of human movements before an action is executed, representing actions with high semantic and temporal granularity. The authors of~\cite{Felsen_what_will_happen_17} proposed a generic framework for forecasting future events in team sports videos related to water polo and basketball. The authors of~\cite{Gao2017REDRE} considered multiple history representations of the past to anticipate a sequence of future representations. In recent years, the task has also been studied from the first person perspective. The authors of~\cite{robot-centric-anticipation_15} explored this task with the aim of assisting humans who cooperate with a robot. In particular, they anticipate future actions considering videos acquired from the point of view of a robot which interacts with a human. A series of works \cite{furnari2019rulstm, furnari2020rulstm, slowfast_rulstm_ballan} addressed the task using LSTM networks to encode features related to the past. The authors of~\cite{Roy_2022_WACV} focused on the goal representation to predict the next action from the first person view. In this work, we adopt the RULSTM model \cite{furnari2020rulstm} to evaluate state-of-the-art action anticipation methods on the proposed MECCANO dataset, also considering the gaze and depth signals. \subsubsection{Next Active Objects Detection} The detection of next-active objects, i.e., the objects which will be involved in an EHOI, is a problem which has not been thoroughly studied due to the small number of public egocentric datasets suitable for the task.
Indeed, there are no egocentric datasets specifically annotated for this task. The authors of~\cite{Furnari2017NextactiveobjectPF} were the first to explore the next-active object prediction problem. They performed experiments on the Activities of Daily Living (ADL) egocentric dataset, analyzing the trajectories of the next-active objects with a temporal sliding window. The authors of~\cite{liu_forecasting_HOI} addressed the task of anticipating egocentric actions, proposing an architecture composed of a motor attention module, which predicts the trajectory of the hands, and a module which detects the contact area of the target object that will become active. These two outputs are fed into an anticipation module which predicts a spatio-temporal attention map indicating the possible locations of the next-active objects. The output is composed of the next action label, a ``hotspot'' which indicates the area of the object where contact will occur, and the hand trajectory. The two egocentric datasets ADL \cite{ADL_PirsiavashR12} and EPIC-Kitchens \cite{Damen2020RESCALING} have been re-annotated by the authors of~\cite{JIANG2021212} to tackle the problem of short-term next-active object detection. They proposed a novel human-centered approach composed of two pathways: 1) the first pathway generates a human visual attention probability map, and 2) the second one generates a human hand position probability map. These two maps are then fused by an interaction module which outputs the final map of the next-active object. In addition to the next-active object location, the authors of~\cite{Fan2017ForecastingHA} predicted the hand locations in future frames. The problem was tackled by designing a two-stream CNN architecture with an auto-encoder, extending SSD (a state-of-the-art convolutional object detection network) and using a regression network to infer future representations. The authors of~\cite{Dessalene_forecasting_contact} performed action anticipation and prediction through hand-object contact representations. They presented a new architecture composed of an anticipation module and of temporal relations represented using Graph Convolutional Networks (GCNs) and LSTMs to predict the final next action label. They represented the next-active objects involved in the future contact with hands through semantic segmentation masks. The authors of~\cite{Bertasius2017FirstPA} detected the objects important for the camera wearer (i.e., objects related to the intent of the user) using an unsupervised learning approach. Closely related to next-active object prediction, the authors of~\cite{Ego4D2021} presented a set of tasks for the forecasting benchmark of the Ego4D dataset, including the task of short-term Object Interaction Anticipation. Although previous works have considered tasks related to next-active object detection, the problem has not yet been studied in depth. Moreover, the task has not been studied considering different types of signals (e.g., depth and gaze) or in an industrial-like domain. In this work, we focus on the next-active object task on the MECCANO Multimodal dataset, which has been acquired in an industrial-like domain and annotated explicitly to tackle this challenging task. \section*{\uppercase{Supplementary Material}} \label{sec:supp_material} This document is intended for the convenience of the reader and reports additional information about the action-interaction relations, the proposed dataset and the annotation stage. This supplementary material is related to the following submission:\\ F.
Ragusa, A. Furnari, G. M. Farinella. MECCANO: A Multimodal Egocentric Dataset for Humans Behavior Understanding in the Industrial Domain. Submitted to Computer Vision and Image Understanding (CVIU), 2022. \\ The reader is referred to the manuscript and to our web page \url{https://iplab.dmi.unict.it/MECCANO/} to download the dataset and for further information. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{images/stats_nao_bbox.png} \caption{Long-tail distribution of bounding boxes over all object classes.} \label{fig:nao_bbox} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{images/VIA.png} \caption{Customized VIA project to support the labeling of next-active objects. Annotators were presented with a panel which allowed them to identify object classes through their thumbnails.} \label{fig:VIA} \end{figure*} \section{Next-Active Objects Annotations} \label{sec:nao} Figure~\ref{fig:nao_bbox} shows the distribution of bounding boxes over all object classes. As shown in the figure, the bounding boxes follow a long-tail distribution, which highlights the complexity of this industrial scenario. Moreover, we report how many bounding boxes were annotated for each object class considering the three splits (Training, Validation and Test) of the dataset. For the annotation phase of the next-active objects, we used the VGG Image Annotator (VIA) \cite{dutta2019vgg} with a customized project to facilitate and speed up the selection of the correct object class (see Figure~\ref{fig:VIA}). Moreover, we provided the annotators with a document containing a set of key rules for the annotation of next-active objects, to support them and reduce ambiguities. In the annotation guidelines, we reported the fundamental definitions (e.g., next-active object, next-active object in a pile, occluded next-active object), showing visual examples (see Figure~\ref{fig:rules_ann}). \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{images/rules_nao.png} \caption{Next active object definition given to the labelers for the next active object bounding box annotation stage.} \label{fig:rules_ann} \end{figure} Figure~\ref{fig:past_interactions} shows the comparison between the number of interactions present in the MECCANO dataset and the number of interactions which include labeled past frames. \section{Hands Annotations} Figure~\ref{fig:hands_stats} reports some statistics related to the hand annotations. \begin{figure}[htp] \centering \includegraphics[width=\columnwidth]{images/stats_past_interactions.png} \caption{Comparison between the number of interactions with respect to the number of interactions which have past frames.} \label{fig:past_interactions} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{images/hands_stats.png} \caption{Hands annotations distribution.} \label{fig:hands_stats} \end{figure*} An example of the labeling procedure is shown in Figure~\ref{fig:hands_refine}. In the first column, we report the predictions of the Hand Object Detector \cite{Hands_in_contact_Shan20}. In the second column, the annotators fixed the class errors and refined the bounding boxes around the hands. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{images/hands_refinment.png} \caption{Example of the labeling procedure of the hands.} \label{fig:hands_refine} \end{figure}
{ "timestamp": "2022-09-20T02:22:49", "yymm": "2209", "arxiv_id": "2209.08691", "language": "en", "url": "https://arxiv.org/abs/2209.08691" }
\section{Conclusion} In this article, we extend our previous line of research on suggesting MeSH terms for Boolean queries for systematic review literature search. This task adds to a recent stream of research that has focused on computational methods for the assisted creation~\cite{scells2020automatic,scells2020objective,scells2021comparison, agosti2019analysis} or refinement~\cite{wang2021mesh, scells2018generating, scells2019www, agosti2020post, alharbi2020refining} of Boolean queries for systematic review creation. In addition to the lexical methods we proposed previously, in this new line of work, we introduced a new set of BERT-based MeSH suggestion methods. We undertook a comprehensive evaluation and analysis of our new MeSH suggestion methods. We compared the effectiveness of the suggested MeSH terms from our new methods to both our existing lexical methods and the original queries formulated by information specialists. We found that the MeSH terms originally chosen by information specialists were often not the most effective choice and that more effective MeSH terms can be suggested automatically by our new methods. We also found that BERT methods can generally achieve higher effectiveness than the lexical methods in MeSH term suggestion: this may be due to the fact that BERT methods were often able to capture deeper semantic relationships. This finding motivates future work to combine lexical and BERT methods in order to reap the benefits of both approaches. Combining such sparse and dense approaches has seen much success in related areas of research, such as ad-hoc search~\cite{wang2021bert, li2022interpolate, karpukhin2020dense, ma2021replication}. In addition, we believe that the full potential of using MeSH entities in our suggestion method is unexplored. In future work, we envision three research directions that use more information from MeSH entities to achieve more effective MeSH term suggestions: (1) Use of the MeSH tree hierarchy: MeSH entities are organised in a tree hierarchy. The parent-child relationship of entities may further restrict the number of MeSH terms suggested by MeSH term suggestion methods (e.g., use the parent MeSH entity to restrict which child entities can appear in the suggestion list). (2) Use of MeSH categories: MeSH entities are categorised according to their natures (term, concept, descriptor and category). The nature of MeSH entities may be used in the fine-tuning process of the MeSH term suggestion methods to represent the MeSH entities. (3) Use of external MeSH definitions: each MeSH entity has a corresponding Wikipedia page explaining its content and uses. These comprehensive pages may be used to further fine-tune our MeSH term suggestion model to achieve more effective MeSH term suggestions. Identifying MeSH terms to add to a Boolean query for a systematic review literature search is a difficult task for information specialists. The findings of this article have implications for both the Information Retrieval and Systematic Review communities. Firstly, our methods can be used in automatic query formulation situations (see, e.g., work by~\citet{scells2021comparison}). Secondly, they can be integrated into existing tools to assist information specialists in formulating more effective queries~\cite{scells2018searchrefiner,li2020systematic}. \section{Introduction} A medical systematic review is a comprehensive review of literature for a highly focused research question.
Systematic reviews are seen as the highest form of evidence and are used extensively in healthcare decision making and clinical medical practice. In order to synthesise literature into a systematic review, a search must be undertaken. A major component of this search is a Boolean query. The Boolean query is often developed by a trained expert (i.e., an information specialist), who works closely with the research team to develop the search, and usually has some knowledge of the domain being searched. The most commonly used database for searching medical literature is PubMed. Due to the increasing size and scope of these databases, and of PubMed in particular, the Medical Subject Headings (MeSH) thesaurus was developed to conceptually index studies~\cite{zieman1997conceptual, richter2012using}. MeSH is a controlled vocabulary thesaurus arranged in a hierarchical tree structure (specificity increases with depth in a parent$\rightarrow$child relationship, e.g., \texttt{Anatomy}$\rightarrow$\texttt{Body Regions}$\rightarrow$\texttt{Head}$\rightarrow$\texttt{Eye}\dots etc.). Indexing and categorising studies with MeSH terms enables the development of queries which incorporate both free-text keywords \textit{and} MeSH terms --- allowing for more effective searches. The use of MeSH terms in queries has been shown to be more effective than free-text keywords alone~\cite{richter2012using, chang2006searching, abdou2008searching, tenopir1985full}, e.g., they increase precision~\cite{liu2017evaluating} and are far less ambiguous than free-text~\cite{wacholder1997disambiguation}. However, it is still difficult even for expert information specialists to be familiar with the entire MeSH controlled vocabulary~\cite{liu2009impact, liu2017evaluating} --- at the time of writing, MeSH contains 29,640 unique headings. PubMed has attempted to overcome this difficulty by developing a method called Automatic Term Mapping (ATM). ATM is an automatic query expansion method which attempts to seamlessly map free-text keywords in a query to one of three categories (index tables): MeSH, journal name or author name~\cite{nahin2003change}. Although ATM is applied by default to all queries issued to PubMed, it has several semantic limitations: it is inaccurate when used to expand free-text acronyms into MeSH terms~\cite{schulz2001indexing}; it produces different MeSH expansions even when synonymous free-text terms are used~\cite{adlassnig2009optimization}; and it has difficulty disambiguating between MeSH terms and journal names~\cite{smith2004examination}. Despite these limitations, the use of ATM for MeSH term suggestion has been shown to increase the precision of free-text searches in the genomic domain~\cite{lu2009evaluation}, and it is the state-of-the-art method for the MeSH term suggestion task. However, its use has, to the best of the authors' knowledge, not been empirically evaluated in the context of improving the effectiveness of systematic review literature search queries. Recent advances in the use of pre-trained language models (PLMs) such as BERT \cite{DBLP:conf/naacl/DevlinCLT19}, T5 \cite{raffel2019exploring}, and GPT-3 \cite{brown2020language} have delivered state-of-the-art performance in many natural language processing tasks. Typically, a pre-trained language model is trained on a large corpus using the transformer architecture to ``get familiar'' with language representations. The model is then fine-tuned on downstream tasks to achieve high effectiveness on the target task.
The transformer architecture is an encoder-decoder architecture that does not rely on recurrence or convolutions~\cite{vaswani2017attention}. Prior work showed that using PLMs can significantly increase effectiveness in ad-hoc search~\cite{lin2021pretrained} as well as in professional search~\cite{Eugene2022ecirtar, chalkidis-etal-2020-legal,QIN2021121,choe2022short}. In this article, we introduce the task of MeSH term suggestion for Boolean queries used in systematic review literature search\footnote{This article is an extension of our previous work published at the 2021 Australasian Document Computing Symposium~\cite{wang2021mesh}.}. We model this task within the context of an information specialist looking for MeSH terms to add to a query without MeSH terms currently present. We also propose a framework to evaluate the effectiveness of the suggestion of MeSH terms on established collections of systematic review literature search queries. This article adds to a recent stream of research that has focused on computational methods for the assisted formulation~\cite{scells2020automatic,scells2020objective,scells2021comparison, agosti2019analysis} or refinement~\cite{wang2021mesh, scells2018generating, scells2019www, agosti2020post, alharbi2020refining} of Boolean queries for systematic review creation, and more generally to research on computational methods for technology-assisted reviews~\cite{li2020stop, sneyd2021stopping, lee2022towards, lee2018seed, cormack2017technology}. Furthermore, we propose two categories of methods for the MeSH term suggestion task: methods based on the BERT pre-trained language model and methods not based on BERT (lexical methods). We show that our methods suggest MeSH terms that outperform the effectiveness of the MeSH terms selected by the information specialists and included in the original queries. Our methods are readily integrable into tools that help information specialists construct systematic review Boolean queries. The contributions of this article are: \begin{enumerate} \item The introduction of the new task of suggesting MeSH terms for systematic review literature search (Boolean queries), modelled within the context of an information specialist looking for MeSH terms to add to a query without MeSH terms present. \item The formulation of MeSH term suggestion methods to help information specialists and researchers construct Boolean queries for systematic review creation. \item An empirical evaluation of the effectiveness of different MeSH term suggestion methods. \item An understanding of how the MeSH terms suggested by the proposed automatic methods differ from those originally selected by information specialists formulating the query. \end{enumerate} \section{Material and methods} \subsection{Overview of the MeSH Term Suggestion Task} We start by outlining the task of MeSH term suggestion for Boolean queries that do not already contain MeSH terms. We assume the user has entered a Boolean query without MeSH terms. A Boolean query can be viewed as a tree where Boolean operators (e.g., AND, OR) represent the internal nodes of the tree, while free-text atomic clauses and MeSH terms are the leaves. Free-text atomic clauses are one or more words that express a concept, e.g., a disease, a treatment or a population aspect. We call each of the first-level nodes of the tree (i.e., the nodes at depth 1) a query fragment.
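To fix ideas, the following is a minimal sketch of this tree view of a Boolean query; the \texttt{Clause} and \texttt{Operator} types and the example query are illustrative, not our actual implementation:
\begin{verbatim}
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class Clause:              # leaf: a free-text atomic clause or a MeSH term
    text: str
    is_mesh: bool = False

@dataclass
class Operator:            # internal node: AND / OR
    op: str
    children: List[Union["Operator", Clause]] = field(default_factory=list)

def query_fragments(root: Operator):
    # Query fragments are the depth-1 nodes of the query tree.
    return list(root.children)

# (sepsis OR bacteremia OR Sepsis[MeSH]) AND (neonate OR infant)
query = Operator("AND", [
    Operator("OR", [Clause("sepsis"), Clause("bacteremia"),
                    Clause("Sepsis", is_mesh=True)]),
    Operator("OR", [Clause("neonate"), Clause("infant")]),
])
assert len(query_fragments(query)) == 2
\end{verbatim}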
Typically, a query fragment represents an individual aspect of an information need~\cite{suhail2013methods}; specifically, each query fragment corresponds to a different PICO element, i.e., population, intervention, control, and outcome~\cite{schardt2007utilization}. These concepts are shown in Figure~\ref{fig:query-concept}. The task of MeSH term suggestion is to identify appropriate MeSH terms to be added as leaves to a query fragment. In this article, we suggest MeSH terms for each query fragment independently of the others. We leave the investigation of dependencies between query fragments in MeSH term suggestion to future work. \begin{figure}[t!] \centering \includegraphics[width=0.9\textwidth]{figures/fragment_atc_explain} \caption{Example query showing a \textbf{Boolean Query}, two \textbf{Query Fragments}, several \textbf{Free text atomic clauses}, and a \textbf{MeSH term}.} \label{fig:query-concept} \end{figure} Figure~\ref{fig:query-overview} gives an intuition of how we obtain query fragments from a Boolean query, how MeSH terms are suggested for a given query fragment, and how we perform \textit{defragmentation} to construct a new Boolean query that includes MeSH terms. The figure shows that after fragmentation (i.e., the process of deriving query fragments), we remove all the MeSH terms from each query fragment. We then apply a MeSH term suggestion technique which adds new MeSH terms to each query fragment. The new query fragments that now contain suggested MeSH terms are then defragmented by combining all of the query fragments corresponding to the original query with the \texttt{AND} operator. \begin{figure}[t!] \centering \includegraphics[width=\textwidth]{figures/query_overview} \caption{Overview of the MeSH term suggestion procedure. Proposed methods using lexical MeSH term retrieval or BERT MeSH term retrieval facilitate the suggestion of MeSH terms. We evaluate each method that suggests MeSH terms in terms of (1) the ability of the suggested MeSH terms to effectively retrieve literature for a defragmented Boolean query, and (2) the overlap between suggested MeSH terms and MeSH terms included in the original query. Note that the number of MeSH terms suggested for a fragment may be lower or higher than the number of MeSH terms in the original query.} \label{fig:query-overview} \end{figure} This work extends our existing line of research into MeSH term suggestion \cite{wang2021mesh}, where we previously developed several techniques that depend on pre-existing lexical matching systems. One limitation of these systems is their dependence on manually crafted rules that are expensive to create and constrain how words are matched to MeSH terms (e.g., spelling variants, acronyms, misspellings). This article instead investigates the use of pre-trained language models, i.e., BERT, for the task of MeSH term suggestion. These neural models have been shown to be resilient to the shortcomings of lexical-based systems~\cite{DBLP:conf/naacl/DevlinCLT19, wang2021bert}. However, neural models have their own limitations, particularly requiring large amounts of training data. The following sections first provide a brief overview of our existing lexical-based techniques and then describe our new neural techniques in detail, specifically addressing the need for ad-hoc training data. \subsection{Lexical MeSH Term Suggestion} Our lexical-based methods are formulated as a pipeline of three steps: retrieval, ranking, and refinement.
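Before detailing each step, the following is a minimal end-to-end sketch of the fragment--suggest--defragment procedure of Figure~\ref{fig:query-overview}, reusing the illustrative \texttt{Clause}/\texttt{Operator} types from the previous sketch; \texttt{suggest} stands for any suggestion method described in this article, and the code is an illustration rather than our actual implementation:
\begin{verbatim}
from typing import Callable, List

def strip_mesh(frag):
    # Remove MeSH leaves, keeping only free-text atomic clauses.
    if isinstance(frag, Operator):
        frag.children = [c for c in frag.children
                         if not (isinstance(c, Clause) and c.is_mesh)]
    return frag

def suggest_and_defragment(query: Operator,
                           suggest: Callable[[Operator], List[str]]) -> Operator:
    # Fragment the query, suggest MeSH terms for each MeSH-free fragment,
    # then recombine all fragments with the AND operator.
    new_fragments = []
    for frag in query_fragments(query):
        if isinstance(frag, Clause):       # promote a bare leaf to an OR node
            frag = Operator("OR", [frag])
        frag = strip_mesh(frag)
        for term in suggest(frag):
            frag.children.append(Clause(term, is_mesh=True))
        new_fragments.append(frag)
    return Operator("AND", new_fragments)  # defragmentation
\end{verbatim}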
The following sections provide a brief overview of each of these steps. For a more comprehensive discussion of the lexical-based methods, refer to our previous work~\cite{wang2021mesh}. \paragraph{Retrieval} \label{section:mesh_retrieval} The first step in our MeSH term suggestion\xspace pipeline is the \textbf{retrieval} of MeSH terms\xspace. The retrieval of MeSH terms\xspace is facilitated by three different methods: \begin{description} \item[ATM] The entire free-text only query fragment is submitted to the PubMed Entrez API~\cite{sayers2010general} for \textit{automatic term mapping} (ATM). This is the default system used by PubMed for automatically adding MeSH terms to queries. \item[MetaMap] Each free-text atomic clause in a query fragment is submitted to MetaMap~\cite{aronson2001effective}.\footnote{Version 2018 with options set to default values.} The results are filtered to only include those entities derived from the MeSH source. All of the mapped MeSH terms\xspace are recorded for each of the free-text terms in a query fragment. Additionally, the score is recorded for each MeSH term. \item[UMLS] We index UMLS~\cite{bodenreider2004unified}\footnote{version 2019AB using the \texttt{MRCONSO}, \texttt{MRDEF}, \texttt{MRREL}, and \texttt{MRSTY} tables.} into Elasticsearch v7.6. Each free-text atomic clause in the query fragment with MeSH terms removed is submitted to the Elasticsearch index. The results are filtered to only include synonyms of concepts derived from the MeSH source. Additionally, the BM25 score is recorded for each MeSH term. \end{description} For the MetaMap and UMLS approaches, the same MeSH term may be retrieved multiple times for a given free-text fragment. To overcome this issue, we re-score the MeSH terms\xspace using rank fusion (CombSUM)~\cite{fox1994combination}. The intuition for this re-scoring is that MeSH terms\xspace that are both highly common and highly scored by these retrieval methods should be scored highly overall (thus ranked higher than MeSH terms\xspace that are merely common \textit{or} merely highly scoring). \paragraph{Ranking} \label{section:mesh_ranking} Once MeSH terms have been retrieved, they are ranked according to the approach for entity ranking described by Jimmy et al.~\cite{jimmy2019health}, adapting features proposed by Balog~\cite{balog2018entity}. In total, we use eleven entity features. Positive instances correspond to MeSH terms\xspace in the original query fragment; negative instances correspond to MeSH terms\xspace not in the original query fragment (binary labels). With these features and instance labels, we train a learning-to-rank (LTR) model for each retrieval method. In addition to LTR, we also investigate a rank fusion approach~\cite{fox1994combination}, where we combine the normalised MeSH term suggestion scores from each of the three methods to produce a new ranking that incorporates the highest-ranking MeSH terms from each method. The intuition for investigating rank fusion in this context is that each method may retrieve different MeSH terms, and those terms may be ranked differently by each method. Therefore, we boost MeSH terms that are retrieved and ranked highly by multiple methods.
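Both the re-scoring above and this fusion step rely on CombSUM. A minimal sketch of normalised CombSUM follows; the scores and MeSH terms are illustrative, and the code is not our actual implementation:
\begin{verbatim}
from collections import defaultdict

def combsum(rankings):
    # Normalised CombSUM: min-max normalise each method's scores,
    # then sum the normalised scores per MeSH term.
    fused = defaultdict(float)
    for ranking in rankings:     # one {mesh_term: score} dict per method
        lo, hi = min(ranking.values()), max(ranking.values())
        for term, score in ranking.items():
            fused[term] += (score - lo) / (hi - lo) if hi > lo else 1.0
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

# illustrative scores from, e.g., UMLS (BM25), MetaMap and ATM
print(combsum([
    {"Sepsis": 9.1, "Infant, Newborn": 4.0},
    {"Sepsis": 0.9, "Bacteremia": 0.6},
    {"Sepsis": 1.0, "Infant, Newborn": 1.0},
]))
\end{verbatim}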
\paragraph{Refinement} \label{section:mesh_refinement} Finally, we seek to refine the suggested MeSH terms by estimating a rank cut-off. We do this using a score-based gain function. Formally, the cumulative gain $CG$ for a MeSH term at rank $p$ is \begin{equation} CG_p = \sum_{i=1}^{p}\mathit{score}_i \end{equation} \noindent where the score for a MeSH term is equal to $1-\mathit{normalised\ score}$ (using min-max normalisation) for that MeSH term. We tune a parameter, $\kappa$, for each retrieval method, which controls the percentage of total $CG$ allowed to be observed before the ranking is cut off (i.e., a refinement of the ranking). We tune $\kappa$ from 5\% to 95\% in increments of 5\%. The intuition for re-scoring MeSH terms becomes apparent when used with the $\kappa$ parameter: the highest-ranking MeSH term will receive a score of 0, resulting in at least one MeSH term suggested for every query fragment. Note that MeSH terms may share the same score, i.e., they may be tied. We take a conservative approach to account for the problem of tied MeSH terms at the boundary of the cut-off specified by $\kappa$. Whenever we encounter ties, we treat all of the tied MeSH terms as a single accumulation of gain that equals the summed gain across the scores of the tied MeSH terms. This treatment has the effect that tied MeSH terms account for much larger accumulations of gain. Therefore, tied MeSH terms at the top of rankings are more likely to be included in the cut-off than tied MeSH terms at the bottom. In essence, either all tied MeSH terms are considered within the cut-off (i.e., ties at the top of the ranking), or no tied MeSH terms are considered (i.e., ties at the bottom of the ranking).
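A minimal sketch of this tie-aware cut-off follows, assuming the input pairs are sorted by decreasing normalised retrieval score; the example values are illustrative and the code is not our actual implementation:
\begin{verbatim}
from itertools import groupby

def refine(ranked, kappa):
    # ranked: (mesh_term, normalised_score) pairs, decreasing by score.
    # The gain of a term is 1 - score, so the top term has gain 0 and at
    # least one term is always kept. Tied terms form one block whose
    # gains accumulate together: a block is kept in full or dropped.
    total = sum(1 - s for _, s in ranked)
    kept, seen = [], 0.0
    for _, block in groupby(ranked, key=lambda ts: ts[1]):
        block = list(block)
        gain = sum(1 - s for _, s in block)
        if kept and seen + gain > kappa * total:
            break
        kept += [term for term, _ in block]
        seen += gain
    return kept

# e.g., kappa = 0.25 keeps terms until 25% of the total gain is observed
print(refine([("Sepsis", 1.0), ("Bacteremia", 0.5),
              ("Infant, Newborn", 0.5), ("Death", 0.0)], 0.25))
\end{verbatim}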
\subsection{BERT MeSH Term Suggestion} \label{sec:BERT_suggester} \begin{figure}[t!] \centering \includegraphics[width=\textwidth]{figures/BERT_suggestion} \caption{Overview of the MeSH term suggestion for the BERT methods. Note that Fusion of MeSH ranks may be optional in the pipeline.} \label{fig:BERT_suggestion} \end{figure} Next, we extend our MeSH term suggestion methods using fine-tuned PLMs. PLMs are typically chosen from the same domain in which the task is conducted. \paragraph{Architecture} We show the architecture of our fine-tuning and inference processes in Figure \ref{fig:architecture}. We use BioBERT~\cite{lee2020biobert} as the base PLM, as the context of this paper is medical systematic reviews. BioBERT is a PLM pre-trained on PubMed abstracts and PubMed Central (PMC)\footnote{PubMed Central is the repository containing full-text articles of the open-access part of the PubMed database.} full-text articles using the BERT\xspace training architecture \cite{DBLP:conf/naacl/DevlinCLT19}. After fine-tuning, BioBERT has achieved state-of-the-art performance on many medical-related tasks, including biomedical named entity recognition, relation extraction and question answering \cite{lee2020biobert}. \begin{figure}[t!] \centering \includegraphics[width=\textwidth]{figures/architecture} \caption{Architecture of model fine-tuning and inference.} \label{fig:architecture} \end{figure} Ideally, training data closely related to the target task should be used to fine-tune a PLM to achieve the highest effectiveness; in our case, we would ideally use professionally constructed medical systematic review Boolean queries to fine-tune our model. However, PLMs are typically data-hungry and require a large number of labelled training samples. In systematic review literature search, several public datasets are available with Boolean queries, such as the CLEF TAR collections~\cite{kanoulas2017clef,kanoulas2018clef,kanoulas2019clef}, the collection of~\citet{wang2022little}, and the collection of \citet{scells2017test}. Across these datasets, however, only 253 unique topics would be available to train the model: an insufficient amount to effectively fine-tune a BERT model. Instead, we create training samples by approximating the target task using data obtained from PubMed. We use the publicly available PubMed baseline to obtain the metadata about all published articles up to the start of 2022. The metadata contains information such as the title and abstract but, importantly for this work, it also includes the author-assigned keywords and the relevant MeSH terms for an article. We use the assigned keywords and MeSH terms of every article in the PubMed dataset to approximate the task of MeSH term suggestion. To maximise the amount of training data, we also extract keywords from the title (as not all PubMed articles contain keywords). To tokenise titles, we use the process described by \citet{wang2022seed}. Firstly, we tokenise the title using Gensim~\cite{khosrovian2008gensim}, and then we remove stopwords using NLTK~\cite{bird2004nltk}. We use the toolkit proposed by~\citet{Gao2022TevatronAE} to develop a dense retriever to suggest MeSH terms. The model is fine-tuned with localized contrastive loss using triples $\langle k_{a,i}, m_a^+, m_a^- \rangle$ where $a$ is a PubMed article, $k_{a,i}$ is the $i$th keyword in the PubMed article, $m_a^+$ are the MeSH terms for the PubMed article, and $m_a^-$ are ten randomly sampled MeSH terms from the MeSH thesaurus. Since many MeSH terms contain spaces or punctuation, our model considers each MeSH term a unique token in the model vocabulary. Once the model is fine-tuned, we pre-compute an encoding for every MeSH term. At inference time, we encode a keyword and score it against the encodings of all MeSH terms using the \texttt{[CLS]} token representations. Thus, our method scores and ranks all MeSH terms given a keyword. \begin{table*} \centering \small \begin{tabular}{c|p{75pt}|p{75pt}|p{75pt}|p{75pt}} \toprule MeSH Removed Fragment& \multicolumn{4}{c}{neonatal sepsis OR neonatal bacteremia OR neonatal infections OR death} \\ \midrule free text atomic clauses\xspace&neonatal sepsis & neonatal bacteremia& neonatal infections&death \\\midrule semantic group&\multicolumn{3}{c|}{neonatal sepsis, neonatal bacteremia, neonatal infections}&death \\ \bottomrule \end{tabular} \caption{Example query fragments with separation of semantic groups. In the example, `neonatal sepsis', `neonatal bacteremia' and `neonatal infections' are grouped to form one semantic group, while `death' forms another semantic group. } \label{table:semantic_example} \end{table*} \paragraph{Ranking Suggestions} The goal of MeSH term suggestion is to suggest MeSH terms for each query fragment. However, the result of the BERT suggestion method consists of a ranked list of suggested MeSH terms for each free text atomic clause\xspace. We therefore need to combine these rankings. We formulate this combination task as two steps: (1) choosing how we represent a MeSH term ranking, and (2) choosing where to cut off the ranking. We present an overview of the combination task in Figure \ref{fig:BERT_suggestion}.
First, we choose the best way to represent a ranking, which means deciding whether MeSH terms should be suggested individually for every free text atomic clause\xspace, as a whole for every fragment\xspace, or using other heuristics to decide how the representation should be computed. We design three ranking representation methods: \begin{enumerate} \item \textbf{Atomic BERT\xspace}: Firstly, we treat suggestions for each free text atomic clause\xspace individually, essentially applying no strategy to combine the suggestions. \item \textbf{Fragment BERT\xspace}: Next, we study the combination of all MeSH term rankings for a given query fragment. We apply rank fusion (normalised CombSUM~\cite{fox1994combination}) to all of the free text atomic clauses\xspace in a query fragment. For computational reasons, we only use the top 20 MeSH terms for each free text atomic clause\xspace. \item \textbf{Semantic BERT\xspace}: Finally, we study semantically grouping free text atomic clauses\xspace and apply the same rank fusion technique as above, but this time to each group. We show an example of a semantic group in Table \ref{table:semantic_example}. To derive semantic groups, we first take all free text atomic clauses\xspace from the fragment\xspace and obtain word2vec embeddings for each free text atomic clause\xspace. Then we compute cosine similarities between all free text atomic clauses\xspace to decide whether they are semantically related; in our experiments, we apply a threshold of 0.7 on the similarity. We use a word2vec model pre-trained on PubMed and Wikipedia~\cite{moen2013distributional}. There are two reasons we use word2vec rather than BERT for semantic groups. First, our proposed BERT model is fine-tuned on pairs of free text atomic clauses\xspace and MeSH terms; using it to calculate the similarity between two free text atomic clauses\xspace would therefore cause a model mismatch. Second, the use of an additional BERT model would increase the latency of producing suggestions at inference time, as each free text atomic clause\xspace would need to be encoded twice. \end{enumerate} Second, we choose where to cut off the ranking of MeSH terms from the ranking representations. We propose four strategies to cut off MeSH term rankings: \begin{enumerate} \item \textbf{First only (FO)}: The first MeSH term of the ranking is selected for each ranking representation. \item \textbf{Same as free text atomic clauses\xspace (SA)}: The number of MeSH terms selected equals the number of free text atomic clauses\xspace in each fragment\xspace (i.e., only applicable to \textbf{Fragment BERT}). \item \textbf{Same as original (SO)}: The number of MeSH terms selected equals the number of MeSH terms in the query fragment prior to removing MeSH terms (i.e., only applicable to \textbf{Fragment BERT}). \item \textbf{Linear (LN)}: The number of MeSH terms selected is learnt using a linear function with respect to the number of free text atomic clauses\xspace in the fragment (i.e., only applicable to \textbf{Fragment BERT}). \end{enumerate} \subsection{Evaluation} \label{methods:evaluation} The end goal of a systematic review literature search is to find all of the relevant literature at the minimum cost. Thus, an effective Boolean query minimises the number of documents retrieved while maximising the retrieval of relevant documents. In our MeSH term suggestion task, we use the retrieval effectiveness of defragmented Boolean queries to evaluate MeSH term suggestion.
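For reference, the retrieval measures used in this evaluation (formalised in the next paragraphs) can be computed as in the following sketch, operating on sets of retrieved and relevant document identifiers; the function and variable names are ours.
\begin{verbatim}
def retrieval_measures(retrieved, relevant, betas=(1, 3)):
    """Precision, recall and F_beta for one defragmented Boolean query."""
    tp = len(retrieved & relevant)        # relevant documents retrieved
    p = tp / len(retrieved) if retrieved else 0.0
    r = tp / len(relevant) if relevant else 0.0
    scores = {"precision": p, "recall": r}
    for beta in betas:                    # F1 and F3
        denom = beta * beta * p + r
        scores[f"F{beta}"] = (1 + beta * beta) * p * r / denom if denom else 0.0
    return scores
\end{verbatim}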
The MeSH terms included in the original query have often been derived after careful consideration by expert information specialists. We therefore consider how the MeSH terms included in the original queries differ from those suggested by the methods investigated in this work; specifically, we measure the overlap between the suggested MeSH terms and the MeSH terms included in the original query. We note that a MeSH term that is not in the original query may not necessarily be a less effective search term than one included in the original query. To evaluate the effectiveness of the suggested MeSH terms for the task of systematic review literature search, once query fragments are defragmented, the retrieval effectiveness is evaluated using typical systematic review literature search evaluation measures: precision, recall, and $F_\beta$, with $\beta\in\{1,3\}$. The PubMed Entrez API is used to directly issue defragmented Boolean queries to obtain retrieval results. As PubMed is constantly updated with new studies, we apply a date restriction to all queries for reproducibility purposes. We use the Jaccard index measure to evaluate the overlap of MeSH terms between those suggested by the investigated methods and those included in the original query. For both evaluation settings (i.e., Boolean query retrieval and evaluation against original MeSH terms), we evaluate the lexical suggestion method in two settings: (i) \textbf{all}, where all retrieved MeSH terms are considered; and (ii) \textbf{cut}, where the score-based cut-off is used. We also evaluate all BERT\xspace suggestion methods and compare their effectiveness with that of the original query and the lexical methods. \subsection{Experimental Setup} \begin{figure}[!t] \begin{minipage}{0.245\textwidth} \centering \includegraphics[width=1\linewidth]{plots/2017_learn.pdf} \end{minipage}\hfill \begin{minipage}{0.245\textwidth} \centering \includegraphics[width=1\linewidth]{plots/2018_learn.pdf} \end{minipage}\hfill \begin{minipage}{0.245\textwidth} \centering \includegraphics[width=1\linewidth]{plots/2019_dta_learn.pdf} \end{minipage}\hfill \begin{minipage}{0.245\textwidth} \centering \includegraphics[width=1\linewidth]{plots/2019_intervention_learn.pdf} \end{minipage} \caption{Linear regression performed on the number of keywords (x-axis) and the number of MeSH terms (y-axis) in query fragments for the training splits of CLEF TAR 2017, 2018, 2019-dta and 2019-intervention.} \label{fig:learn_J} \end{figure} For our experiments, we use topics from the CLEF TAR task from 2017, 2018, and 2019 \cite{kanoulas2017clef, kanoulas2018clef, kanoulas2019clef}. Fifteen topics are discarded due to lack of MeSH terms\footnote{Discarded topics are: \textbf{2017}: CD007427, CD010771, CD010772, CD010775, CD010783, CD010860, CD011145; \textbf{2018}: CD007427, CD009263, CD009694; \textbf{2019}: CD006715, CD007427, CD009263, CD009694, CD011768.}. An additional topic is discarded because of retrieval issues\footnote{The additional discarded topic is \textbf{2017}: CD010276.}, likely resulting from the fact that we translate queries automatically from one format (Ovid Medline) into another format (PubMed)~\cite{scells2018querylab}. In total, we used 245 topics across the three years (116 unique, as the topic sets of different years partially overlap). For each topic, we automatically divide the Boolean query for that topic into query fragments using the transmute tool~\cite{scells2018querylab}. Each fragment contains at least one MeSH term. This results in a total of 311 unique query fragments (2.68 fragments per query on average).
For each query fragment, we corrected any errors (e.g., spelling mistakes, syntactic errors) and extracted the MeSH terms, the keywords, the query fragment with MeSH terms, and the query fragment without MeSH terms. For training the LTR model for each lexical method, we use the pre-split training and test portions from the CLEF datasets. The 2019 topics are also split on systematic review type (intervention and diagnostic test accuracy --- indicated as `intervention' and `dta' respectively in the results), while those for 2017 and 2018 are all diagnostic test accuracy. We use the `quickrank' library~\cite{capannini2016quality} for LTR, instantiated with LambdaMART trained to maximise nDCG. We leave all other settings at their default values. For learning the linear function used by the BERT suggestion method to decide where to cut off the MeSH term ranking, as described in Section~\ref{sec:BERT_suggester}, we use the training portions of the CLEF TAR datasets. First, we obtain all the fragments\xspace from the CLEF TAR training splits. We count the number of free text atomic clauses\xspace and MeSH terms in each fragment\xspace. We then perform linear regression on these numbers to determine a function for each CLEF TAR dataset. We show the resulting linear regressions in Figure \ref{fig:learn_J}. \section{Appendices} \begin{landscape} \begin{table} \centering \footnotesize \begin{tabular}{p{1pt}l|cccc|cccc|cccc|cccc} \\ \toprule \multicolumn{2}{c|}{Dataset}&\multicolumn{4}{c|}{2017}&\multicolumn{4}{c|}{2018}&\multicolumn{4}{c|}{2019-dta}&\multicolumn{4}{c}{2019-intervention}\\ \midrule \multicolumn{2}{c|}{Method}&\multicolumn{1}{c}{P}&\multicolumn{1}{c}{F1}&\multicolumn{1}{c}{F3}&\multicolumn{1}{c|}{R}&\multicolumn{1}{c}{P}&\multicolumn{1}{c}{F1}&\multicolumn{1}{c}{F3}&\multicolumn{1}{c|}{R}&\multicolumn{1}{c}{P}&\multicolumn{1}{c}{F1}&\multicolumn{1}{c}{F3}&\multicolumn{1}{c|}{R}&\multicolumn{1}{c}{P}&\multicolumn{1}{c}{F1}&\multicolumn{1}{c}{F3}&\multicolumn{1}{c}{R}\\ \midrule &ORIGINAL&0.0288&0.0311&0.0440&0.7745&0.0323&0.0576&0.0965&0.8629&0.0227&\textbf{0.0421}&\textbf{0.0738}&0.8966&0.0165&0.0212&0.0309&0.7471\\\midrule \multirow{8}{*}{\rotatebox{90}{Lexical Method}}&ATM&0.0265&0.0262&0.0353&0.7549&0.0317&0.0552&0.0898&0.8190&0.0113&0.0211&0.0373&0.8916&0.0156&0.0183&0.0269&0.7073\\ &ATM-CUT&0.0316&0.0299&0.0404&0.7269&0.0354&0.0624&0.1033&0.7998&\textbf{0.0243}&0.0398&0.0637&0.8375&0.0173&0.0191&0.0288&0.6938\\ &MetaMap&0.0304&0.0287&0.0381&0.7519&0.0342&0.0599&0.0980&0.8150&0.0131&0.0245&0.0433&0.8791&0.0135&0.0218&0.0339&0.6974\\ &MetaMap-CUT&0.0337&0.0312&0.0423&0.7191&0.0360&0.0633&0.1043&0.8071&0.0193&0.0358&0.0625&0.8393&0.0159&0.0251&0.0382&0.6831\\ &UMLS&0.0275&0.0269&0.0355&0.7458&0.0297&0.0519&0.0847&0.8200&0.0114&0.0214&0.0384&0.8616&0.0118&0.0183&0.0275&0.6998\\ &UMLS-CUT&0.0335&0.0315&0.0430&0.7225&0.0384&\textbf{0.0681}&\textbf{0.1133}&0.7963&0.0174&0.0305&0.0508&0.8381&0.0173&0.0191&0.0295&0.6638\\ &Fusion&0.0218&0.0227&0.0300&0.7712&0.0284&0.0495&0.0800&0.8455&0.0103&0.0192&0.0342&0.9075&0.0109&0.0173&0.0263&0.7212\\ &Fusion-CUT&0.0323&0.0303&0.0409&0.7282&0.0333&0.0582&0.0951&0.8120&0.0147&0.0269&0.0465&0.8394&0.0161&0.0173&0.0262&0.6797\\\midrule \multirow{6}{*}{\rotatebox{90}{BERT Method}}&Atomic-BERT-FO&0.0257&0.0249&0.0330&\textbf{0.7830}&0.0289&0.0488&0.0795&0.8523&0.0092&0.0173&0.0310&0.8870&0.0070&0.0126&0.0219&0.7587\\ &Semantic-BERT-FO&0.0273&0.0272&0.0363&0.7633&0.0284&0.0501&0.0820&0.8502&0.0096&0.0181&0.0324&0.8870&0.0110&0.0183&0.0288&0.7483\\
&Fragment-BERT-FO&\textbf{0.0342}&\textbf{0.0324}&\textbf{0.0446}&0.7415&\textbf{0.0382}&0.0678&0.1132&0.8041&0.0169&0.0314&0.0548&0.8924&\textbf{0.0212}&\textbf{0.0276}&\textbf{0.0422}&0.7106\\ &Fragment-BERT-SA&0.0212&0.0216&0.0284&0.7699&0.0268&0.0471&0.0772&\textbf{0.8652}&0.0097&0.0181&0.0323&\textbf{0.9357}&0.0076&0.0137&0.0235&\textbf{0.7806}\\ &Fragment-BERT-SO&0.0265&0.0250&0.0335&0.7593&0.0328&0.0588&0.0991&0.8258&0.0129&0.0243&0.0433&0.8987&0.0176&0.0238&0.0358&0.7431\\ &Fragment-BERT-LN&0.0265&0.0274&0.0373&0.7615&0.0318&0.0561&0.0925&0.8355&0.0112&0.0211&0.0378&0.8969&0.0105&0.0167&0.0265&0.7428\\ \bottomrule \end{tabular} \caption{Search effectiveness of the Boolean queries using the suggested MeSH terms, evaluated by precision (P), F1, F3 and recall (R). Lexical methods: for each method, \textit{CUT} indicates cut-off ranks. BERT methods: \textit{FO}, \textit{SA}, \textit{SO}, \textit{LN} indicate different cut-off strategies. No statistically significant differences are detected between the ORIGINAL query and the queries obtained by the other methods (two-tailed t-test with Bonferroni correction, $p<0.05$).} \vspace{-10pt} \label{table:search_result} \end{table} \begin{table} \centering \footnotesize \begin{tabular}{p{1pt}l|cccc|cccc|cccc|cccc} \\ \toprule \multicolumn{2}{c|}{Dataset}&\multicolumn{4}{c|}{2017}&\multicolumn{4}{c|}{2018}&\multicolumn{4}{c|}{2019-dta}&\multicolumn{4}{c}{2019-intervention}\\ \midrule \multicolumn{2}{c|}{Method}&\multicolumn{1}{c}{P}&\multicolumn{1}{c}{F1}&\multicolumn{1}{c}{F3}&\multicolumn{1}{c|}{R}&\multicolumn{1}{c}{P}&\multicolumn{1}{c}{F1}&\multicolumn{1}{c}{F3}&\multicolumn{1}{c|}{R}&\multicolumn{1}{c}{P}&\multicolumn{1}{c}{F1}&\multicolumn{1}{c}{F3}&\multicolumn{1}{c|}{R}&\multicolumn{1}{c}{P}&\multicolumn{1}{c}{F1}&\multicolumn{1}{c}{F3}&\multicolumn{1}{c}{R}\\ \midrule \multirow{8}{*}{\rotatebox{90}{Lexical Method}}&ATM&0.9148&0.7515&0.6465&0.8570&0.9670&0.9216&0.8578&0.5820&0.4586&0.4573&0.4544&0.9674&0.9400&0.7730&0.7292&0.7467\\ &ATM-CUT&0.8992&0.9396&0.8480&0.6698&0.8426&0.8532&0.4265&0.9196&0.9378&0.8354&0.6629&0.9514&0.8275&0.8556&0.6688&0.8306\\ &MetaMap&0.9411&0.8784&0.7572&0.8366&0.9263&0.9668&0.5474&0.5297&0.5302&0.5294&0.8930&0.7702&0.9612&0.8361&0.6867&0.8956\\ &MetaMap-CUT&0.8279&0.9954&0.9320&0.6203&0.8159&0.8330&0.4813&0.8305&0.8271&0.8205&0.6691&0.9502&0.7414&0.6170&0.6078&0.7990\\ &UMLS&0.9468&0.7873&0.6529&0.7933&0.8132&0.7511&0.5876&0.4432&0.4475&0.4535&0.7712&0.6556&0.8076&0.8162&0.7007&0.8556\\ &UMLS-CUT&0.8323&0.9785&0.9583&0.6363&0.6625&0.6500&0.4027&0.7251&0.6698&0.6245&0.6667&0.9508&0.8238&0.8980&0.5085&0.6661\\ &Fusion&0.7208&0.5824&0.4505&0.9755&0.7373&0.6598&0.8258&0.4014&0.4031&0.4049&0.9199&0.5801&0.7308&0.7487&0.8293&0.7838\\ &Fusion-CUT&0.8768&0.9598&0.8729&0.6786&0.9817&0.9714&0.5223&0.5972&0.5856&0.5704&0.6722&0.9726&0.6744&0.6682&0.5919&0.9463\\\midrule \multirow{6}{*}{\rotatebox{90}{BERT Method}}&Atomic-BERT-FO&0.8905&0.7021&0.5662&0.9363&0.8109&0.7149&0.6493&0.8917&0.3493&0.3526&0.3566&0.9410&0.2777&0.3441&0.4487&0.9215\\ &Semantic-BERT-FO&0.9465&0.8073&0.6916&0.9164&0.7530&0.6962&0.8704&0.3615&0.3657&0.3711&0.9410&0.5684&0.7928&0.8853&0.9918&0.7841\\ &Fragment-BERT-FO&0.8075&0.9371&0.9712&0.7663&0.6737&0.6540&0.4589&0.7074&0.7015&0.6948&0.9715&0.7003&0.5612&0.4209&0.7636&0.6814\\ &Fragment-BERT-SA&0.7171&0.5460&0.3968&0.9658&0.6480&0.5888&0.9754&0.3723&0.3748&0.3777&0.6593&0.3085&0.4157&0.5481&0.7687&0.6837\\
&Fragment-BERT-SO&0.9135&0.6889&0.5615&0.8908&0.9592&0.9406&0.6374&0.5109&0.5154&0.5208&0.9856&0.9295&0.8150&0.7191&0.9736&0.9739\\ &Fragment-BERT-LN&0.9119&0.8152&0.7293&0.9053&0.9452&0.9110&0.7120&0.4368&0.4416&0.4474&0.9978&0.5192&0.6417&0.7200&0.9715&0.9681\\ \bottomrule \end{tabular} \caption{Two-tailed t-test results comparing the Boolean query search effectiveness of the ORIGINAL query with that of the queries obtained by the other methods, in terms of precision (P), F1, F3 and recall (R). Lexical methods: \textit{CUT} indicates cut-off ranks. BERT methods: \textit{FO}, \textit{SA}, \textit{SO}, \textit{LN} indicate different cut-off strategies.} \vspace{-10pt} \label{table:search_result_p} \end{table} \end{landscape} \begin{landscape} \begin{table*} \centering \scriptsize \begin{tabular}{l|p{175pt}|c|p{335pt}} \multicolumn{4}{c}{}\\ \toprule Topic&\multicolumn{3}{c}{CD009642}\\\midrule Fragments&\multicolumn{1}{c}{Fragment 1}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{Fragment 2} \\\midrule ORIGINAL& \textbf{Lidocaine} OR lidocain* OR Lignocain* OR Xylocain* &&\textbf{Pain} OR \textbf{Pain, Postoperative} OR \textbf{Postoperative Care} OR \textbf{Postoperative Complications} OR (post operative OR postoperative) AND (pain* OR recovery) \\\cmidrule{1-2}\cmidrule{4-4} ATM & lidocain* OR Lignocain* OR Xylocain*&& \textbf{Pain} OR (post operative OR postoperative) AND (pain* OR recovery) \\\cmidrule{1-2}\cmidrule{4-4} ATM-CUT & lidocain* OR Lignocain* OR Xylocain* &&\textbf{Pain} OR (post operative OR postoperative) AND (pain* OR recovery)\\\cmidrule{1-2}\cmidrule{4-4} MetaMap& \textbf{Lidocaine} OR lidocain* OR Lignocain* OR Xylocain*&& \textbf{Pain} OR (post operative OR postoperative) AND (pain* OR recovery) \\\cmidrule{1-2}\cmidrule{4-4} MetaMap-CUT &\textbf{Lidocaine} OR lidocain* OR Lignocain* OR Xylocain* &&\textbf{Pain} OR (post operative OR postoperative) AND (pain* OR recovery)\\\cmidrule{1-2}\cmidrule{4-4} UMLS& \textbf{Lidocaine} OR lidocain* OR Lignocain* OR Xylocain*&& \textbf{Pain} OR (post operative OR postoperative) AND (pain* OR recovery) \\\cmidrule{1-2}\cmidrule{4-4} UMLS-CUT & \textbf{Lidocaine} OR lidocain* OR Lignocain* OR Xylocain* && \textbf{Pain} OR (post operative OR postoperative) AND (pain* OR recovery) \\\cmidrule{1-2}\cmidrule{4-4} Fusion & \textbf{Lidocaine} OR lidocain* OR Lignocain* OR Xylocain*&\multirow{6}{*}{AND}& \textbf{Pain} OR (post operative OR postoperative) AND (pain* OR recovery) \\\cmidrule{1-2}\cmidrule{4-4} Fusion-CUT & \textbf{Lidocaine} OR lidocain* OR Lignocain* OR Xylocain*&& \textbf{Pain} OR (post operative OR postoperative) AND (pain* OR recovery) \\\cmidrule{1-2}\cmidrule{4-4} Atomic-BERT-FO & \textbf{Lidocaine} OR \textbf{Xylans} OR lidocain* OR Lignocain* OR Xylocain*&& \textbf{Postoperative Care} OR \textbf{Pain} OR \textbf{Recovery of Function} OR (post operative OR postoperative) AND (pain* OR recovery) \\\cmidrule{1-2}\cmidrule{4-4} Semantic-BERT-FO & \textbf{Lidocaine} OR \textbf{Xylans} OR lidocain* OR Lignocain* OR Xylocain*&& \textbf{Postoperative Care} OR \textbf{Pain} OR \textbf{Recovery of Function} OR (post operative OR postoperative) AND (pain* OR recovery) \\\cmidrule{1-2}\cmidrule{4-4} Fragment-BERT-FO & \textbf{Lidocaine} OR lidocain* OR Lignocain* OR Xylocain*&& \textbf{Pain, Postoperative} OR (post operative OR postoperative) AND (pain* OR recovery) \\\cmidrule{1-2}\cmidrule{4-4} Fragment-BERT-SA & \textbf{Lidocaine} OR \textbf{Procaine} OR \textbf{Xylans} OR lidocain* OR Lignocain* OR Xylocain*&& \textbf{Pain, Postoperative} OR
\textbf{Postoperative Care} OR \textbf{Postoperative Period} OR \textbf{Postoperative Complications} OR (post operative OR postoperative) AND (pain* OR recovery) \\\cmidrule{1-2}\cmidrule{4-4} Fragment-BERT-SO & \textbf{Lidocaine} OR lidocain* OR Lignocain* OR Xylocain*&& \textbf{Pain, Postoperative} OR \textbf{Postoperative Care} OR \textbf{Postoperative Period} OR \textbf{Postoperative Complications} OR (post operative OR postoperative) AND (pain* OR recovery) \\\cmidrule{1-2}\cmidrule{4-4} Fragment-BERT-LN & \textbf{Lidocaine} OR \textbf{Procaine} OR \textbf{Xylans} OR lidocain* OR Lignocain* OR Xylocain*&& \textbf{Pain, Postoperative} OR \textbf{Postoperative Care} OR \textbf{Postoperative Period} OR (post operative OR postoperative) AND (pain* OR recovery) \\ \bottomrule \end{tabular} \caption{Query fragments produced by each suggestion method for topic CD009642. For lexical methods, \textit{CUT} indicates cut-off ranks. For BERT methods, \textit{FO}, \textit{SA}, \textit{SO} and \textit{LN} indicate the cut-off strategy used. For each fragment, bold text indicates MeSH terms.} \label{table:casestudy_better} \end{table*} \end{landscape} \begin{landscape} \begin{table*} \centering \scriptsize \begin{tabular}{l|p{120pt}|c|p{360pt}} \multicolumn{4}{c}{}\\ \toprule Topic& \multicolumn{3}{c}{CD004414}\\\midrule Fragments&\multicolumn{1}{c}{Fragment 1} &\multicolumn{1}{c}{}&\multicolumn{1}{c}{Fragment 2}\\\midrule ORIGINAL& \textbf{Hand} OR hand* OR finger* OR palm*&& \textbf{Hand Dermatoses} OR (dermat* OR eczema) AND (occupation* OR irritant* OR contact) AND (hand* OR finger* OR palm*) \\\cmidrule{1-2}\cmidrule{4-4} ATM & \textbf{Hand} OR \textbf{Fingers} OR hand* OR finger* OR palm* && \textbf{Fingers} OR \textbf{Eczema} OR \textbf{Hand} OR \textbf{Irritants} OR \textbf{Occupations} OR (dermat* OR eczema) AND (occupation* OR irritant* OR contact) AND (hand* OR finger* OR palm*)\\\cmidrule{1-2}\cmidrule{4-4} ATM-CUT & \textbf{Hand} OR hand* OR finger* OR palm* && \textbf{Fingers} OR \textbf{Eczema} OR (dermat* OR eczema) AND (occupation* OR irritant* OR contact) AND (hand* OR finger* OR palm*)\\\cmidrule{1-2}\cmidrule{4-4} MetaMap & \textbf{Hand} OR \textbf{Fingers} OR hand* OR finger* OR palm* && \textbf{Hand} OR \textbf{Fingers} OR \textbf{Eczema} OR \textbf{Occupations} OR \textbf{Irritants} OR (dermat* OR eczema) AND (occupation* OR irritant* OR contact) AND (hand* OR finger* OR palm*)\\\cmidrule{1-2}\cmidrule{4-4} MetaMap-CUT & \textbf{Hand} OR hand* OR finger* OR palm* && \textbf{Hand} OR \textbf{Fingers} OR \textbf{Eczema} OR (dermat* OR eczema) AND (occupation* OR irritant* OR contact) AND (hand* OR finger* OR palm*)\\\cmidrule{1-2}\cmidrule{4-4} UMLS & \textbf{Fingers} OR \textbf{Hand} OR \textbf{Palm Oil} OR \textbf{Computers, Handheld} OR hand* OR finger* OR palm* && \textbf{Eczema} OR \textbf{Fingers} OR \textbf{Hand} OR \textbf{Dermatitis, Atopic} OR \textbf{Kaposi Varicelliform Eruption} OR \textbf{Retirement} OR \textbf{Computers, Handheld} OR \textbf{Occupations} OR \textbf{Palm Oil} OR \textbf{Irritants} OR (dermat* OR eczema) AND (occupation* OR irritant* OR contact) AND (hand* OR finger* OR palm*)\\\cmidrule{1-2}\cmidrule{4-4} UMLS-CUT & \textbf{Fingers} OR hand* OR finger* OR palm* && \textbf{Eczema} OR \textbf{Fingers} OR (dermat* OR eczema) AND (occupation* OR irritant* OR contact) AND (hand* OR finger* OR palm*)\\\cmidrule{1-2}\cmidrule{4-4} Fusion
& \textbf{Hand} OR \textbf{Fingers} OR \textbf{Palm Oil} OR \textbf{Computers, Handheld} OR hand* OR finger* OR palm* &\multirow{6}{*}{AND}& \textbf{Eczema} OR \textbf{Fingers} OR \textbf{Hand} OR \textbf{Dermatitis, Atopic} OR \textbf{Occupations} OR \textbf{Kaposi Varicelliform Eruption} OR \textbf{Retirement} OR \textbf{Irritants} OR \textbf{Computers, Handheld} OR \textbf{Palm Oil} OR (dermat* OR eczema) AND (occupation* OR irritant* OR contact) AND (hand* OR finger* OR palm*)\\\cmidrule{1-2}\cmidrule{4-4} Fusion-CUT & \textbf{Hand} OR hand* OR finger* OR palm* && \textbf{Eczema} OR \textbf{Fingers} OR \textbf{Hand} OR (dermat* OR eczema) AND (occupation* OR irritant* OR contact) AND (hand* OR finger* OR palm*)\\\cmidrule{1-2}\cmidrule{4-4} Atomic-BERT-FO & \textbf{Hand} OR \textbf{Fingers} OR \textbf{Palm Oil} OR hand* OR finger* OR palm* && \textbf{Hand} OR \textbf{Eczema} OR \textbf{Occupations} OR \textbf{Dermatology} OR \textbf{Irritants} OR \textbf{Dermatitis, Contact} OR \textbf{Fingers} OR \textbf{Palm Oil} OR (dermat* OR eczema) AND (occupation* OR irritant* OR contact) AND (hand* OR finger* OR palm*)\\\cmidrule{1-2}\cmidrule{4-4} Semantic-BERT-FO & \textbf{Hand} OR \textbf{Fingers} OR \textbf{Palm Oil} OR hand* OR finger* OR palm* && \textbf{Dermatology} OR \textbf{Eczema} OR \textbf{Occupations} OR \textbf{Irritants} OR \textbf{Dermatitis, Contact} OR \textbf{Hand} OR \textbf{Fingers} OR \textbf{Palm Oil} OR (dermat* OR eczema) AND (occupation* OR irritant* OR contact) AND (hand* OR finger* OR palm*)\\\cmidrule{1-2}\cmidrule{4-4} Fragment-BERT-FO & \textbf{Hand} OR hand* OR finger* OR palm* && \textbf{Dermatitis, Contact} OR (dermat* OR eczema) AND (occupation* OR irritant* OR contact) AND (hand* OR finger* OR palm*)\\\cmidrule{1-2}\cmidrule{4-4} Fragment-BERT-SA & \textbf{Hand} OR \textbf{Fingers} OR \textbf{Palm Oil} OR hand* OR finger* OR palm* && \textbf{Dermatitis, Contact} OR \textbf{Dermatitis, Allergic Contact} OR \textbf{Hand} OR \textbf{Fingers} OR \textbf{Eczema} OR \textbf{Dermatitis, Atopic} OR \textbf{Patch Tests} OR \textbf{Skin Diseases} OR (dermat* OR eczema) AND (occupation* OR irritant* OR contact) AND (hand* OR finger* OR palm*)\\\cmidrule{1-2}\cmidrule{4-4} Fragment-BERT-SO & \textbf{Hand} OR hand* OR finger* OR palm* && \textbf{Dermatitis, Contact} OR (dermat* OR eczema) AND (occupation* OR irritant* OR contact) AND (hand* OR finger* OR palm*)\\\cmidrule{1-2}\cmidrule{4-4} Fragment-BERT-LN & \textbf{Hand} OR \textbf{Fingers} OR \textbf{Palm Oil} OR hand* OR finger* OR palm* && \textbf{Dermatitis, Contact} OR \textbf{Dermatitis, Allergic Contact} OR \textbf{Hand} OR (dermat* OR eczema) AND (occupation* OR irritant* OR contact) AND (hand* OR finger* OR palm*)\\ \bottomrule \end{tabular} \caption{Query fragments produced by each suggestion method for topic CD004414. For lexical methods, \textit{CUT} indicates cut-off ranks. For BERT methods, \textit{FO}, \textit{SA}, \textit{SO} and \textit{LN} indicate the cut-off strategy used. For each fragment, bold text indicates MeSH terms.} \label{table:casestudy_worse} \end{table*} \end{landscape} \section{Acknowledgments} Shuai Wang is supported by a UQ Earmarked PhD Scholarship. This research is funded by the Australian Research Council Discovery Projects program (ARC Discovery Project DP210104043).
\bibliographystyle{elsarticle-num-names} \section{Results} Results in this section are presented on the test splits of the CLEF TAR datasets (i.e., 2017, 2018, 2019-dta, 2019-intervention). We first analyse the search effectiveness of the lexical methods versus our new BERT methods, and then analyse the MeSH suggestion effectiveness compared to the MeSH terms originally used. \subsection{Retrieval Effectiveness} \label{sec:searcb_effectiveness_nonBERT} \paragraph{Lexical Methods} \label{lexi_finding} The results of the lexical methods presented in Table~\ref{table:search_result} are the same as those reported in our previous work~\cite{wang2021mesh}. We discuss them briefly here for completeness. Unrefined methods generally obtain higher recall than the corresponding refined methods, at the cost of lower precision. This finding indicates that adding more MeSH terms to the query fragments can cause both more relevant and more irrelevant studies to be retrieved. When compared using F1 and F3, the refined methods consistently outperform the unrefined methods on each dataset. In terms of F1 and F3, UMLS-CUT achieves the highest effectiveness on CLEF 2017 and 2018, ATM-CUT on CLEF 2019-dta, and MetaMap-CUT on CLEF 2019-intervention. In terms of recall, the unrefined fusion method achieves the highest recall among all lexical suggestion methods. This gain in recall is likely because unrefined fusion combines all of the MeSH terms suggested by the other three methods (ATM, MetaMap, and UMLS) using `OR'. This suggests that the unrefined fusion method is not beneficial for improving the precision of a Boolean query. However, if semi-automatic MeSH term suggestion can be used,
information specialists may be able to use the suggestions and apply their expertise to decide which MeSH terms to include to achieve higher effectiveness. \paragraph{BERT Methods} \label{sec:searcb_effectiveness_BERT} We first compare the effectiveness of the BERT methods with the original Boolean query. The results show that, for each evaluation measure (precision, F1, F3 and recall), at least one BERT method outperforms the original query on CLEF TAR 2017, 2018 and 2019-intervention, while effectiveness is generally worse on CLEF TAR 2019-dta. Note that CLEF TAR 2019-dta only contains eight unique topics; the lower effectiveness is likely due to a handful of topics. Next, we compare the effectiveness of BERT suggestions against lexical suggestions. When compared with unrefined lexical methods, BERT suggestions show substantial gains in terms of F1 and F3 across all datasets. When compared with refined lexical methods, BERT suggestions generally obtain comparable results, except on CLEF TAR 2019-dta, where refined lexical suggestion methods achieve higher effectiveness. In terms of recall, BERT suggestions obtain slightly higher recall than unrefined lexical suggestions, and substantially higher recall than refined lexical suggestions. As mentioned in Section \ref{lexi_finding}, unrefined lexical methods are effective at achieving higher recall, while refined lexical methods are effective at achieving higher precision, F1 and F3. We find that the MeSH terms suggested by BERT obtain recall similar to that of unrefined lexical methods, while their F1 and F3 are comparable to those of refined lexical methods. Therefore, compared with lexical methods, BERT methods may be preferred for suggesting more effective MeSH terms. \subsection{Impact of BERT Ranking Representations} We compare the different ranking representations of BERT: Atomic BERT\xspace, Semantic BERT\xspace and Fragment BERT\xspace. We use the same cut-off strategy to compare these three representations fairly. We find that the precision, F1 and F3 values of Fragment BERT\xspace are the highest among the three methods, while the recall of Fragment BERT\xspace is the lowest. However, only one MeSH term is suggested for each fragment\xspace when Fragment BERT\xspace is used. This trade-off between precision and recall reflects the same finding we described for the lexical methods, where adding more MeSH terms can cause more studies to be retrieved. Between Semantic BERT\xspace and Atomic BERT\xspace, Semantic BERT\xspace obtains higher precision but lower recall than Atomic BERT\xspace. When compared using F1 or F3, Semantic BERT\xspace always achieves higher effectiveness. Therefore, the use of Semantic BERT\xspace is preferred over Atomic BERT\xspace. \subsection{Impact of Cut-off Strategy} When comparing different cut-off strategies for BERT suggestions\footnote{Only Fragment BERT\xspace is considered, as SA, SO and LN are only applicable to Fragment BERT\xspace.}, we find that FO consistently achieves the highest precision, F1 and F3 compared to the other cut-off methods. On the other hand, the recall of FO is the lowest among all methods, indicating that the trade-off between precision and recall is again driven by the number of MeSH terms added to the query.
For the other three cut-off strategies (SA, SO, and LN), we find that SO and LN consistently outperform SA, suggesting that information specialists have an intuition for how many MeSH terms to add to a query. \subsection{Are Suggested MeSH Terms the Same as those in the Original Queries?} \begin{table*} \centering \small \begin{tabular}{p{2pt}l|cc|cc|cc|cc} \\ \toprule \multicolumn{2}{c|}{Dataset}&\multicolumn{2}{c|}{2017}&\multicolumn{2}{c|}{2018}&\multicolumn{2}{c|}{2019-dta}&\multicolumn{2}{c}{2019-intervention}\\ \midrule \multicolumn{2}{c|}{Method}&\multicolumn{1}{c}{Jaccard}&\multicolumn{1}{c|}{Num}&\multicolumn{1}{c}{Jaccard}&\multicolumn{1}{c|}{Num}&\multicolumn{1}{c}{Jaccard}&\multicolumn{1}{c|}{Num}&\multicolumn{1}{c}{Jaccard}&\multicolumn{1}{c}{Num}\\ \midrule \multirow{8}{*}{\rotatebox{90}{Lexical Method}}&ATM&0.0999&5.5373&0.2368&6.0139&0.2117&5.1500&0.2356&4.8868\\ &ATM-CUT&0.1995$^{*}$&2.4179&0.1938&2.3056&0.2004&2.0500&0.2109&1.3019\\ &MetaMap&0.2654$^{*}$&4.6866&0.2218&4.0417&0.2163&4.8000&0.2069&4.5094\\ &MetaMap-CUT&0.2374$^{*}$&2.3134&0.1964&1.9028&0.2241&2.3500&0.1981&1.7736\\ &UMLS&0.2243$^{*}$&8.9254&0.2235&7.9722&0.1905&7.7000&0.2405&7.5660\\ &UMLS-CUT&0.2751$^{*}$&1.8955&0.2424&1.8611&0.1986&2.2000&0.2050&1.7547\\ &Fusion&0.2165$^{*}$&11.4776&0.2160&10.9444&0.1735&10.5000&0.2212&9.7358\\ &Fusion-CUT&0.2761$^{*}$&2.7761&0.2742&3.3194&0.2508&3.1000&0.2909&2.4340\\\midrule \multirow{6}{*}{\rotatebox{90}{BERT Method}}&Atomic-BERT-FO&0.2532$^{*}$&12.7313&0.3105&12.2639&0.1573&11.8500&0.2252&13.6226\\ &Semantic-BERT-FO&0.2370$^{*}$&11.0746&0.2963&10.6944&0.1654&10.7500&0.2219&11.5283\\ &Fragment-BERT-FO&0.3455$^{*}$&1.0000&0.3812$^{*}$&1.0000&0.1681&1.0000&0.2235&1.0000\\ &Fragment-BERT-SA&0.2233$^{*}$&\textbf{16.6269}&0.2639&\textbf{16.4861}&0.1790&\textbf{15.5000}&0.2531&\textbf{17.2264}\\ &Fragment-BERT-SO&\textbf{0.3921$^{*}$}&4.1343&\textbf{0.4634$^{*}$}&4.8333&0.2574&4.4000&\textbf{0.3301}&2.7547\\ &Fragment-BERT-LN&0.2780$^{*}$&5.2687&0.2689&3.7778&\textbf{0.2667}&3.8500&0.2415&3.8491\\ \bottomrule \end{tabular} \caption{Jaccard index (Jaccard) values quantifying the overlap between the MeSH terms suggested by the investigated methods and those in the original query, along with the average number (Num) of MeSH terms suggested by each method. In the original queries, there were on average 4.1343 MeSH terms for 2017, 4.8333 for 2018, 4.4000 for 2019-dta, and 2.7547 for 2019-intervention. Lexical methods: \textit{CUT} indicates cut-off ranks. BERT methods: \textit{FO}, \textit{SA}, \textit{SO}, \textit{LN} indicate different cut-off strategies. Two-tailed statistical significance (t-test, $p<0.05$) with Bonferroni correction between ATM and the other methods is indicated by $*$.} \label{table:suggestion_result} \end{table*} Next, we study the overlap between the MeSH terms suggested by the considered methods and those included in the original query; this is reported in Table~\ref{table:suggestion_result} and is measured with the Jaccard index. One immediate observation is that the overlap of the BERT-based methods is considerably higher than that of the lexical methods: the highest Jaccard index value on each dataset is always achieved by a BERT suggestion method. Moreover, applying the SO cut-off strategy to Fragment BERT\xspace obtains the highest overlap on three of the four datasets, which indicates that BERT suggestion methods also tend to agree with the terms chosen by systematic reviewers.
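A minimal sketch of the overlap computation used in Table~\ref{table:suggestion_result}, treating suggested and original MeSH terms as sets (the case normalisation is our assumption):
\begin{verbatim}
def jaccard_overlap(suggested, original):
    """Jaccard index between suggested MeSH terms and those
    in the original query."""
    s = {term.lower() for term in suggested}
    o = {term.lower() for term in original}
    union = s | o
    return len(s & o) / len(union) if union else 0.0
\end{verbatim}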
\begin{figure}[!thb] \begin{minipage}{0.245\textwidth} \centering \includegraphics[width=1\linewidth]{plots/corrolation/method/ATM.pdf} \end{minipage}\hfill \begin{minipage}{0.245\textwidth} \centering \includegraphics[width=1\linewidth]{plots/corrolation/method/ATM-CUT.pdf} \end{minipage} \begin{minipage}{0.245\textwidth} \centering \includegraphics[width=1\linewidth]{plots/corrolation/method/MetaMap.pdf} \end{minipage} \begin{minipage}{0.245\textwidth} \centering \includegraphics[width=1\linewidth]{plots/corrolation/method/MetaMap-CUT.pdf} \end{minipage} \begin{minipage}{0.245\textwidth} \centering \includegraphics[width=1\linewidth]{plots/corrolation/method/UMLS.pdf} \end{minipage} \begin{minipage}{0.245\textwidth} \centering \includegraphics[width=1\linewidth]{plots/corrolation/method/UMLS-CUT.pdf} \end{minipage} \begin{minipage}{0.245\textwidth} \centering \includegraphics[width=1\linewidth]{plots/corrolation/method/Fusion.pdf} \end{minipage} \begin{minipage}{0.245\textwidth} \centering \includegraphics[width=1\linewidth]{plots/corrolation/method/Fusion-CUT.pdf} \end{minipage} \begin{minipage}{0.245\textwidth} \centering \includegraphics[width=1\linewidth]{plots/corrolation/method/Atomic-BERT-FO.pdf} \end{minipage}\hfill \begin{minipage}{0.245\textwidth} \centering \includegraphics[width=1\linewidth]{plots/corrolation/method/Semantic-BERT-FO.pdf} \end{minipage}\hfill \begin{minipage}{0.245\textwidth} \centering \includegraphics[width=1\linewidth]{plots/corrolation/method/Fragment-BERT-FO.pdf} \end{minipage}\hfill \begin{minipage}{0.245\textwidth} \centering \includegraphics[width=1\linewidth]{plots/corrolation/method/Fragment-BERT-SA.pdf} \end{minipage} \begin{minipage}{0.245\textwidth} \centering \includegraphics[width=1\linewidth]{plots/corrolation/method/Fragment-BERT-SO.pdf} \end{minipage} \begin{minipage}{0.001\textwidth} \centering \end{minipage} \begin{minipage}{0.245\textwidth} \centering \includegraphics[width=1\linewidth]{plots/corrolation/method/Fragment-BERT-LN.pdf} \end{minipage} \caption{Correlation graph of search effectiveness versus the overlap of MeSH terms. The x-axis reports the Jaccard index for the overlap between suggested MeSH terms and MeSH terms included in the original query, and the y-axis reports the F1 values for search effectiveness for each topic.} \label{fig:correlation} \end{figure} The previous results reported in Table~\ref{table:search_result} highlighted that, in general, BERT methods were better than lexical methods at suggesting effective search terms; and these were more effective than those in the original queries, although the differences were not statistically significant. These results, in conjunction with the findings in Table~\ref{table:suggestion_result}, indicate that the BERT methods identify MeSH terms very similar to those present in the original queries -- and that the MeSH terms identified by BERT methods are more effective than those provided by the other methods. We further analyse whether search effectiveness correlates with the suggestion of MeSH terms that are included in the original query: a strong correlation would suggest that the MeSH terms used in the original Boolean query are of very high quality and could be used as a gold standard. The Jaccard index measure is used once more to represent the similarity between suggested MeSH terms and those present in the original query, and F1 is used to represent the search effectiveness of the associated query (with suggested MeSH terms included).
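The per-topic analysis behind Figure~\ref{fig:correlation} can be sketched as follows; we assume a Pearson correlation over (Jaccard, F1) pairs, one pair per topic, with names of our own choosing:
\begin{verbatim}
import numpy as np

def overlap_effectiveness_correlation(per_topic_pairs):
    """Correlation between MeSH term overlap (Jaccard) and
    search effectiveness (F1).

    per_topic_pairs: list of (jaccard, f1) tuples, one per review topic.
    """
    jaccard, f1 = np.asarray(per_topic_pairs, dtype=float).T
    return float(np.corrcoef(jaccard, f1)[0, 1])   # Pearson r
\end{verbatim}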
The results of this correlation analysis are reported in Figure~\ref{fig:correlation}. We find that, while for all lexical methods search effectiveness is weakly correlated with the overlap of MeSH terms, this is not the case for BERT methods. This indicates that the MeSH terms from the original query may not be the best MeSH terms to suggest. In fact, MeSH terms that are suggested but not included in the original query often provide higher search effectiveness than the original MeSH terms themselves. \begin{figure}[!thb] \begin{minipage}{\textwidth} \centering \includegraphics[width=1\linewidth]{plots/UMLS-CUTdif.pdf} \end{minipage} \begin{minipage}{\textwidth} \centering \includegraphics[width=1\linewidth]{plots/Fragment-BERT-FOdif.pdf} \end{minipage} \caption{Per-topic comparison against the original query; each bar represents a topic. The y-axis reports the effectiveness difference between the query with the suggested MeSH terms and the original query. Effectiveness is measured using F1.} \label{fig:search_stability} \end{figure} \subsection{Search Stability} We next analyse the search effectiveness stability of different MeSH term suggestion methods on a topic-by-topic basis. With search effectiveness stability we refer to the variance, across topics, of the search effectiveness obtained when using queries with MeSH terms suggested by a specific MeSH term suggestion method. The larger the variance, the lower the stability. We only analyse the best-performing lexical (UMLS-CUT) and BERT (Fragment-BERT-FO) methods. Figure~\ref{fig:search_stability}, which combines the topics of all of the CLEF TAR dataset test splits, shows that, for most of the topics, both kinds of MeSH suggestion methods outperform or match the effectiveness of the original queries. We also find that our MeSH term suggestion methods sometimes obtain lower effectiveness. It is unclear whether these are difficult topics for which to suggest MeSH terms, or whether there are mistakes in these queries that cause the poor effectiveness (e.g., spelling mistakes in the free text atomic clauses\xspace that were not detected at the time of data cleaning). \subsection{Case Study} Given the findings above, we next investigate the reasons for highly effective or ineffective results. We choose topic CD009642 and topic CD004414 from the CLEF TAR 2019-intervention dataset, as they are representative topics where suggestion methods outperform the effectiveness of the original query (CD009642) or struggle to match it (CD004414). Query fragments corresponding to these topics for all the suggestion methods are shown in Tables~\ref{table:casestudy_better} and~\ref{table:casestudy_worse}. We also show their search effectiveness in Table \ref{table:case_resulta}. Firstly, we find that both the suggested MeSH terms and the search effectiveness are similar for all lexical methods. One exception is UMLS in topic CD004414, which suggests more MeSH terms than the other two methods, causing a drop in effectiveness. On the other hand, MeSH terms suggested by BERT methods appear to differ greatly from those of the lexical methods. BERT methods capture both lexically similar MeSH terms and terms semantically related to the input free text atomic clauses\xspace. One example is shown in the first fragment of topic CD009642.
While most lexical methods suggest \textit{Lidocaine}, which lexically matches \textit{lidocain*}, BERT methods also suggest similar drugs such as \textit{Procaine}. Another example, in topic CD004414, shows that BERT methods can use this semantic matching ability to suggest MeSH terms indicating the method of intervention, as shown by the suggestion of \textit{Patch Tests}. Therefore, BERT methods suggest MeSH terms that are not bound to the surface form of a free text atomic clause\xspace. Another advantage of BERT methods is that they guarantee that at least one MeSH term will be suggested. For lexical methods, suggestions are based on pre-existing rule-based knowledge; thus, when free text atomic clauses\xspace cannot be matched, no MeSH terms can be suggested (e.g., ATM does not suggest any MeSH term in Fragment 1 of CD009642). Overall, we believe that the semantic matching of BERT may sometimes be detrimental to MeSH suggestion. We leave the investigation of how to prevent BERT from suggesting MeSH terms that are not relevant to the information need of a query fragment (e.g., suggesting a MeSH term related to the intervention for a query fragment about the outcome) for future work. \begin{table}[t!] \centering \normalsize \begin{tabular}{l|p{36pt}|p{36pt}|p{36pt}|p{36pt}|p{36pt}|p{36pt}|p{36pt}|p{36pt}} \multicolumn{9}{c}{}\\ \toprule Topic ID&\multicolumn{4}{c|}{CD009642}&\multicolumn{4}{c}{CD004414}\\\midrule Method&P&F1&F3&R&P&F1&F3&R\\ \midrule ORIGINAL& 0.0088& 0.0175& 0.0344& 1.0000& 0.0013& 0.0026& 0.0052& 0.6875\\\midrule ATM& 0.0109& 0.0215& 0.0421& 0.9194& 0.0018& 0.0035& 0.0070& 0.3125\\ ATM-CUT& 0.0109& 0.0215& 0.0421& 0.9194& 0.0020& 0.0040& 0.0078& 0.3125\\ MetaMap& 0.0109& 0.0215& 0.0421& 0.9194& 0.0018& 0.0035& 0.0070& 0.3125\\ MetaMap-CUT& 0.0109& 0.0215& 0.0421& 0.9194& 0.0014& 0.0027& 0.0054& 0.3125\\ UMLS& 0.0109& 0.0215& 0.0421& 0.9194& 0.0013& 0.0025& 0.0050& 0.3125\\ UMLS-CUT& 0.0109& 0.0215& 0.0421& 0.9194& 0.0020& 0.0040& 0.0078& 0.3125\\ Fusion& 0.0109& 0.0215& 0.0421& 0.9194& 0.0018& 0.0035& 0.0069& 0.3125\\ Fusion-CUT& 0.0109& 0.0215& 0.0421& 0.9194& 0.0014& 0.0027& 0.0054& 0.3125\\\midrule Atomic-BERT-FO& 0.0108& 0.0214& 0.0418& 0.9194& 0.0012& 0.0024& 0.0048& 0.3125\\ Semantic-BERT-FO& 0.0108& 0.0214& 0.0418& 0.9194& 0.0012& 0.0024& 0.0048& 0.3125\\ Fragment-BERT-SA& 0.0259& 0.0504& 0.0955& 0.9194& 0.0012& 0.0024& 0.0048& 0.3125\\ Fragment-BERT-SO& 0.0270& 0.0525& 0.0993& 0.9194& 0.0028& 0.0055& 0.0109& 0.3125\\ Fragment-BERT-LN& 0.0276& 0.0536& 0.1013& 0.9194& 0.0013& 0.0026& 0.0052& 0.3125\\ \bottomrule \end{tabular} \caption{Search effectiveness of the Boolean queries with the suggested MeSH terms, evaluated by precision (P), F1, F3 and recall (R). Lexical methods: \textit{CUT} indicates cut-off ranks. BERT methods: \textit{FO}, \textit{SA}, \textit{SO}, \textit{LN} indicate different cut-off strategies.} \label{table:case_resulta} \end{table}
{ "timestamp": "2022-09-20T02:22:38", "yymm": "2209", "arxiv_id": "2209.08687", "language": "en", "url": "https://arxiv.org/abs/2209.08687" }
\section*{Acknowledgement} We would like to thank the anonymous reviewers for their suggestions and comments. This material is in part based upon work supported by Berkeley DeepDrive and Berkeley Artificial Intelligence Research. \section{Experimental Setup Details} We describe additional details of our experimental setup including datasets and comparison methods in this section. \subsection{Datasets} \label{apx:data} We introduce the link prediction and triplet classification datasets as below. \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \begin{table*}[htbp] \centering \resizebox{1.0\linewidth}{!}{ \begin{tabular}{lcc} \toprule \textbf{Method} & \multicolumn{2}{c}{\textbf{Score Function}} \\ \midrule TransE & $-\norm{\textbf{h} + \textbf{r} - \textbf{t}}$ & $\textbf{h}, \textbf{r}, \textbf{t} \in \mathbb{R}^k$\\ TransH & $-\left\|\left(\mathbf{h}-\mathbf{w}_{r}^{\top} \mathbf{h} \mathbf{w}_{r}\right)+\mathbf{r}-\left(\mathbf{t}-\mathbf{w}_{r}^{\top} \mathbf{t} \mathbf{w}_{r}\right)\right\|$ & $\mathbf{h}, \mathbf{t}, \mathbf{r}, \mathbf{w}_{r} \in \mathbb{R}^{k}$\\ TransR & $-\norm{\mathbf{M}_r\mathbf{h} + \mathbf{r} - \mathbf{M}_r\mathbf{t}}$ & $\mathbf{h}, \mathbf{t}\in\mathbb{R}^k, \mathbf{M}_r\in\mathbb{R}^{k\times d}$ \\ TransD & $-\left\|\left(\mathbf{w}_{r} \mathbf{w}_{h}^{\top}+\mathbf{I}\right) \mathbf{h}+\mathbf{r}-\left(\mathbf{w}_{r} \mathbf{w}_{t}^{\top}+\mathbf{I}\right) \mathbf{t}\right\|$ & $\mathbf{h}, \mathbf{t}, \mathbf{w}_{h} \mathbf{w}_{t} \in \mathbb{R}^{k}, \mathbf{r}, \mathbf{w}_{r} \in \mathbb{R}^{d}$\\ TransG & $\sum_{i} \pi_{r}^{i} \exp \left(-\frac{\left\|\boldsymbol{\mu}_{h}+\boldsymbol{\mu}_{r}^{i}-\boldsymbol{\mu}_{t}\right\|}{\sigma_{h}^{2}+\sigma_{t}^{2}}\right)$ & $\mathbf{h} \sim \mathcal{N}\left(\boldsymbol{\mu}_{h}, \boldsymbol{\sigma}_{h}^{2} \mathbf{I}\right)$, $\mathbf{t} \sim \mathcal{N}\left(\boldsymbol{\mu}_{t}, \Sigma_{t}\right)$, $\boldsymbol{\mu}_{h}, \boldsymbol{\mu}_{t} \in \mathbb{R}^{k}$\\ TranSparse-S & $-\left\|\mathbf{M}_{r}\left(\theta_{r}\right) \mathbf{h}+\mathbf{r}-\mathbf{M}_{r}\left(\theta_{r}\right) \mathbf{t}\right\|_{1 / 2}^{2}$ $-\left\|\mathbf{M}_{r}^{1}\left(\theta_{r}^{1}\right) \mathbf{h}+\mathbf{r}-\mathbf{M}_{r}^{2}\left(\theta_{r}^{2}\right) \mathbf{t}\right\|_{1 / 2}^{2}$ & $\mathbf{h}, \mathbf{t} \in \mathbb{R}^{k}$, $\mathbf{r} \in \mathbb{R}^{d}, \mathbf{M}_{r}\left(\theta_{r}\right) \in \mathbb{R}^{k \times d}$, $\mathbf{M}_{r}^{1}\left(\theta_{r}^{1}\right), \mathbf{M}_{r}^{2}\left(\theta_{r}^{2}\right) \in \mathbb{R}^{k \times d}$\\ DistMult & $ \langle \textbf{r}, \textbf{h}, \textbf{t} \rangle$ & $\textbf{h}, \textbf{r}, \textbf{t} \in \mathbb{R}^k$\\ ConvKB & $\operatorname{concat}(g([\boldsymbol{h}, \boldsymbol{r}, \boldsymbol{t}] * \omega)) \mathbf{w}$ & $\textbf{h}, \textbf{r}, \textbf{t} \in \mathbb{R}^k$ \\ ComplEx & $ \Re(\langle \textbf{r}, \textbf{h}, \overline{\textbf{t}} \rangle)$ & $\textbf{h}, \textbf{r}, \textbf{t} \in \mathbb{C}^k$\\ ConvE & $ \langle \sigma(\mathrm{vec}(\sigma([ \overline{\textbf{r}}, \overline{\textbf{h}}] \ast \boldsymbol{\Omega})) \mathbf{W}), \textbf{t} \rangle$ & $\textbf{h}, \textbf{r}, \textbf{t} \in \mathbb{R}^k$\\ RotatE & $-\norm{\textbf{h} \circ \textbf{r} - \textbf{t}}^2$ & $\textbf{h}, \textbf{r}, \textbf{t} \in \mathbb{C}^k, |r_i| = 1$ \\ REFE & $-\mathrm{arctanh}(\norm{-\langle\mathbf{h}, \mathrm{Ref}(\mathbf{r})\rangle\oplus^c \mathbf{t}})$ & $\textbf{h}, \textbf{r}, \textbf{t} \in \mathbb{R}^k$\\ HAKE & $\text{RotatE}-\norm{\sin((\mathbf{h} + \mathbf{r} - 
\mathbf{t}) / 2)}_1$ & $\textbf{h}, \textbf{r}, \textbf{t} \in \mathbb{R}^k$ \\ ComplEx-DURA & $\text{ComplEx} - \langle \mathbf{h}, \mathbf{r}\rangle^2 - \norm{\mathbf{t}}^2$ & $\textbf{h}, \textbf{r}, \textbf{t} \in \mathbb{C}^k$ \\ \bottomrule \end{tabular}} \caption{The score functions $f_r(\textbf{h}, \textbf{t})$ of shallow structure embedding models for knowledge graph embedding, where $\langle \cdot \rangle$ denotes the generalized dot product, $\circ$ denotes the Hadamard product, $\sigma$ denotes an activation function and $\ast$ denotes 2D convolution. $\overline{\ \cdot\ }$ denotes the conjugate for complex vectors, and 2D reshaping for real vectors in the ConvE model. $\mathrm{Ref}(\theta)$ denotes the reflection matrix induced by rotation parameters $\theta$. $\oplus^c$ is Möbius addition, which provides an analogue to Euclidean addition for hyperbolic space. \label{tab:structure_methods}} \end{table*} \subsubsection{Link Prediction} \begin{itemize}[leftmargin=*] \item {\bf FB15k-237}. Freebase is a large collaborative knowledge graph consisting of data composed mainly by its community members. It is an online collection of structured data harvested from many sources, including individual and user-submitted wiki contributions \cite{freebase}. FB15k is a selected subset of Freebase that consists of 14,951 entities and 1,345 relationships \cite{TransE}. FB15K-237 is a variant of FB15K where inverse and redundant relations are removed, resulting in 237 relations \cite{text_joint_kb}. \item {\bf WN18RR}. WordNet is a lexical database of semantic relations between words in English. WN18 \cite{TransE} is a subset of WordNet which consists of 18 relations and 40,943 entities. WN18RR is created to ensure that the evaluation dataset does not have inverse relations, to prevent test leakage \cite{ConvE}. \item {\bf UMLS}. The UMLS semantic network \cite{UMLS} is an upper-level ontology of the Unified Medical Language System. The semantic network, through its 135 semantic types, provides a consistent categorization of all concepts represented in the UMLS. The 46 links between the semantic types provide the structure for the network and represent important relationships in the biomedical domain. \end{itemize} \subsubsection{Triplet Classification} \begin{itemize}[leftmargin=*] \item {\bf WN11 and FB13} are subsets of WordNet and Freebase, respectively, for triplet classification, where \citet{KB_NL_1} randomly switch entities in correct test triplets, doubling the number of test triplets and yielding an equal number of positive and negative examples.
\end{itemize} \subsection{Comparison Methods} \label{apx:comp} We compare \textsc{LaSS}\ to three types of knowledge graph completion methods: shallow structure embedding, deep structure embedding, and language semantic embedding.\footnote{We refer the readers to \cite{ji2021survey} for a more comprehensive review of knowledge graph completion methods.} \subsubsection{Shallow Structure Embedding} TransE~\cite{TransE}, TransH~\cite{TransH}, TransR~\cite{TransR}, TransD~\cite{ji_knowledge_2015}, TransG~\cite{xiao_transg_2016}, TranSparse-S~\cite{ji_knowledge_2016}, DistMult~\cite{DistMult}, ConvKB~\cite{ConvKB}, ComplEx~\cite{trouillon_complex_2016}, ConvE~\cite{ConvE}, RotatE~\cite{RotateE}, REFE~\cite{chami-etal-2020-low}, HAKE~\cite{zhang_learning_2019}, and ComplEx-DURA~\cite{NEURIPS2020_f6185f0e} are methods based only on the structure of the knowledge graph. DistMult-HRS~\cite{zhang_knowledge_2018} is an extension of DistMult that adds a three-layer hierarchical relation structure (HRS) loss. Each of these methods proposes a scoring function over a knowledge triplet, without using the natural language descriptions or names of entities or relations. The scoring functions are shown in Table~\ref{tab:structure_methods}. \subsubsection{Deep Structure Embedding} \begin{itemize}[leftmargin=*] \item {\bf NTN} (Neural Tensor Network)~\cite{KB_NL_1} models entities across multiple dimensions with a bilinear tensor neural layer. \item {\bf DOLORES}~\cite{wang_dolores:_2018} is based on bi-directional LSTMs and learns deep representations of entities and relations from constructed entity-relation chains. \item {\bf KBGAT} proposes an attention-based feature embedding that captures both entity and relation features in any given entity's neighborhood, and additionally encapsulates relation clusters and multi-hop relations \cite{nathani-etal-2019-learning}. \item {\bf GAATs} integrates an attenuated attention mechanism into a graph neural network to assign different weights to different relation paths and aggregate information from the neighborhoods \cite{wang_knowledge_2020}. \item {\bf NePTuNe} takes advantage of both TuckER and NTN through carefully crafted nonlinearities and a shared core tensor intrinsic to the Tucker decomposition \cite{abs-2104-07824}. \item {\bf ComplEx-N3-RP} introduces an auxiliary training task to predict relation types as a self-supervised objective~\cite{ChenM0S21}. \end{itemize} \subsubsection{Language Semantic Embedding} \begin{itemize}[leftmargin=*] \item {\bf TEKE}~\cite{wang_text-enhanced_2016} takes advantage of the context information in a text corpus. The textual context information is incorporated to expand the semantic structure of the knowledge graph, and each relation is enabled to own different representations for different head and tail entities. \item {\bf AATE}~\cite{KB_NL_6} is a text-enhanced knowledge graph representation learning method, which can represent a relation/entity with different representations in different triples by exploiting additional textual information. \item {\bf KG-BERT}~\cite{KGBERT} considers triples in knowledge graphs as textual sequences, where each textual sequence is a concatenation of the text descriptions of the head entity, the relation, and the tail entity. KG-BERT then treats the knowledge graph completion task as a text binary classification task, and solves it by fine-tuning a pre-trained BERT.
\item {\bf StAR}~\cite{wang_structure-augmented_2021} partitions each triplet into two asymmetric parts, as in translation-based graph embedding approaches, and encodes both parts into contextualized representations with a Siamese-style textual encoder (BERT or RoBERTa). \end{itemize} \section{\textsc{LaSS}} We introduce \textsc{LaSS}~to embed both the semantics and the structure of knowledge graphs (KGs) with natural language. As shown in Figure~\ref{fig:overview}, \textsc{LaSS}~incorporates two embeddings: a semantic embedding and a structure embedding. The semantic embedding captures the semantics in the natural language descriptions of the KG triplets. The structure embedding further reconstructs the structure information of the KG from the semantic embedding. \textsc{LaSS}~embeds a KG in a vector space by fine-tuning a pre-trained language model (LM) w.r.t.\ a structured loss, where the forward pass performs the semantic embedding and the optimization of the structured loss conducts the structure embedding. \subsection{Semantic Embedding} \label{sec:semantic} A KG of triplets is denoted as $G$. Each triplet of $G$ is in the form \texthrt{(h, r, t)}, where \texthrt{h,t} $\in E$ and \texthrt{r} $\in R$.
$E$ is the set of entities, and $R$ is the set of relations. The semantic similarities between the head entity \texthrt{h}, relation \texthrt{r}, and tail entity \texthrt{t} are crucial to complete a factual triplet. For example, given \texthrt{h} = ``Bob Dylan'' and \texthrt{r} = ``was born in'', the task is to predict a missing \texthrt{t}, where the candidates are ``Duluth'' and ``Apple''. The semantic similarity between ``Bob Dylan'' and ``Duluth'', as well as the similarity between ``was born in'' and ``Duluth'', should be larger than their similarities with ``Apple'', as ``Duluth'' is the ground-truth answer. Pre-trained LMs capture the rich semantics in natural language via pre-training on large-scale textual corpora. This inspires us to use the semantics stored in the parameters of LMs to encode the semantics of triplets. Formally, for a triplet \texthrt{(h, r, t)}, both entities (\texthrt{h} and \texthrt{t}) and the relation (\texthrt{r}) are represented by their corresponding natural language descriptions. The head entity \texthrt{h} is represented as a sequence of tokens, $T^h = (x_1^h, \cdots, x_{n_h}^h)$, describing the entity. Similarly, $T^t = (x_1^t, \cdots, x_{n_t}^t)$ represents the tail entity \texthrt{t}, and $T^r = (x_1^r, \cdots, x_{n_r}^r)$ denotes the relation \texthrt{r}. We generate the semantic embedding via the forward pass of the LMs as shown in Figure~\ref{fig:overview}. The knowledge graph completion tasks require explicit modeling of the dependencies among the head, relation, and tail. For example, both the connections between head and tail and between relation and tail contribute to the prediction of the tail in the link prediction task. Therefore, we use the concatenation of $T^h$, $T^r$, and $T^t$ as the input sequence to the LMs, and use mean pooling over the output representation of every token in $T^h$, $T^r$, and $T^t$ from the forward pass of the LMs as $\mathbf{h}, \mathbf{r}, \mathbf{t}\in \mathbb{R}^k$, where $k$ is the dimension of the embedding vectors. More specifically, we construct the input sequence in the following format: \textspt{[B]} $T^h$ \textspt{[S]} $T^r$ \textspt{[S]} $T^t$ \textspt{[S]}, where \textspt{[B]} is a special symbol added in front of every input sequence, and \textspt{[S]} is a special separator token. The special tokens differ across LMs. For example, \textspt{[B]} and \textspt{[S]} are implemented as \textspt{[CLS]} and \textspt{[SEP]} for BERT~\cite{Devlin_Chang_Lee_Toutanova_2019}, respectively. The input sequence is then converted to the corresponding input embeddings of the LMs. For example, the input embeddings of BERT are the sum of the token embeddings, the segment embeddings, and the position embeddings. The input embeddings are fed into the LM. We add a mean pooling layer on top of the output layer of the LM and perform mean pooling over the output representation of every token in $T^h$, i.e., $(\mathbf{o}_1^h, \cdots, \mathbf{o}_{n_h}^h)$, resulting in $\mathbf{h}$ as illustrated in Figure~\ref{fig:overview}. We obtain $\mathbf{r}$ and $\mathbf{t}$ in the same way. The dimension $k$ equals the hidden size of the LM. \subsection{Structure Embedding} Structural information of KGs has been successfully used in KG completion. Traditional approaches regard the relationship between two entities as a translation between the embeddings of the entities. This structural view differs from the above semantic embedding: the forward pass alone cannot capture the structure information.
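For concreteness, the following is a minimal sketch of the semantic-embedding forward pass described above, assuming the HuggingFace \texttt{transformers} package; the model choice and the helper function are illustrative rather than our released implementation.
\begin{verbatim}
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def semantic_embed(head_text, rel_text, tail_text):
    # Tokenize each description separately so that the output positions
    # belonging to T^h, T^r, and T^t are known.
    h_ids = tokenizer(head_text, add_special_tokens=False)["input_ids"]
    r_ids = tokenizer(rel_text, add_special_tokens=False)["input_ids"]
    t_ids = tokenizer(tail_text, add_special_tokens=False)["input_ids"]
    cls_id, sep_id = tokenizer.cls_token_id, tokenizer.sep_token_id  # [B], [S]
    ids = [cls_id] + h_ids + [sep_id] + r_ids + [sep_id] + t_ids + [sep_id]
    out = model(torch.tensor([ids])).last_hidden_state[0]  # (seq_len, k)
    # Mean pooling over the output representations of each span.
    h = out[1 : 1 + len(h_ids)].mean(0)
    r = out[2 + len(h_ids) : 2 + len(h_ids) + len(r_ids)].mean(0)
    t = out[3 + len(h_ids) + len(r_ids) :
            3 + len(h_ids) + len(r_ids) + len(t_ids)].mean(0)
    return h, r, t  # each in R^k, k = hidden size of the LM

h, r, t = semantic_embed("Bob Dylan, an American singer-songwriter",
                         "was born in", "Duluth, a city in Minnesota")
\end{verbatim}
Tokenizing the three descriptions separately, as above, is one simple way to keep track of which output positions to pool for $\mathbf{h}$, $\mathbf{r}$, and $\mathbf{t}$.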
We propose to incorporate the structure embedding by fine-tuning the pre-trained LM with a structure loss. The goal is to reconstruct structure information in the semantic embedding. The updated embeddings of \texthrt{h}, \texthrt{r}, and \texthrt{t} are still denoted as $\mathbf{h}$, $\mathbf{r}$, and $\mathbf{t}$, which incorporate structure information of KGs while preserving semantic information. We reconstruct structure information in the semantic embeddings via optimizing a probabilistic structured loss, in which the score function of a triplet \texthrt{(h, r, t)} is defined by Eq.~\ref{eq:score}: \begin{equation}\label{eq:score} f(\mathbf{h},\mathbf{r},\mathbf{t}) = b - \frac{1}{2} \Vert \mathbf{h}+\mathbf{r}-\mathbf{t} \Vert_2^2 \end{equation} If \texthrt{(h, r, t)} holds, we have $\mathbf{h} + \mathbf{r} \approx \mathbf{t}$. We also use $f(\Vert \mathbf{h}+\mathbf{r}-\mathbf{t} \Vert_2^2)$ to denote this in Figure~\ref{fig:overview} for simplicity. The score function is motivated by TransE~\cite{TransE}. We define the following probabilistic model based on the score function (\ref{eq:score}): \begin{equation}\label{eq:softmax} \Pr({h}|{r},{t}) = \frac{\text{exp}(f(\mathbf{h},\mathbf{r},\mathbf{t}))}{\sum_{\tilde{h} \in E} \text{exp}(f(\tilde{\mathbf{h}},\mathbf{r},\mathbf{t}))} \end{equation} Here $\tilde{h}$ is a corrupted head sampled from the entity set $E$. $\Pr(r|h,t)$ and $\Pr(t|h,r)$ have a similar form, except that the summation in the denominator is over corrupted relations and tails, respectively. The probabilistic structured loss is defined in Eq.~\ref{eq:know_loss}. The goal is to minimize the negative log likelihood over the KG: \begin{equation}\label{eq:know_loss} \begin{aligned} L = - \sum_{{(h, r, t)} \in G} (& \log \Pr({h}|{r},{t}) + \log \Pr({r}|{h},{t}) \\ & + \log \Pr({t}|{h},{r})) \end{aligned} \end{equation} \paragraph{Optimization} Computing the probability in Eq.~\ref{eq:softmax} is computationally inefficient since it requires a forward pass over all possible triplets $(\tilde{h}, r, t)$ to compute the denominator. We use negative sampling~\cite{Mikolov2013DistributedRO} to make training more efficient. Instead of minimizing $-\log \Pr(h|r,t)$ as in Eq.~\ref{eq:know_loss}, we optimize the loss described in Eq.~\ref{eq:ns} for modeling \texthrt{h}: \begin{equation}\label{eq:ns} \begin{aligned} L_\texthrt{h} = - &\log\Pr(1|h, r, t) \\ - &\sum_{i=1}^{n_\text{ns}} \mathbb{E}_{\tilde{h}_i\sim E \backslash \{h\}}\log \Pr(0 | \tilde{h}_i, r, t) \end{aligned} \end{equation} where $\Pr(1|h, r, t) = \sigma(f(\mathbf{h}, \mathbf{r}, \mathbf{t}))$. The losses for modeling \texthrt{r} and \texthrt{t} are similarly defined. Here, the hyperparameter $n_\text{ns}$ is the number of negative samples. Each negatively sampled head $\tilde{h}_i$ is drawn uniformly without replacement from the entity set $E \backslash \{h\}$. A sample is not treated as a negative sample if it is already a positive example. We obtain the final structured loss $L=\sum_{{(h, r, t)} \in G} (L_\texthrt{h} + L_\texthrt{r} + L_\texthrt{t})$ by adopting similar negative sampling procedures for relations and tail entities. The training of \textsc{LaSS}\ is unified as fine-tuning an LM with respect to a structured loss. The semantic embedding is obtained by the forward pass of the LM. The structure embedding is conducted by optimizing the structured loss through backpropagation of the LM.
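A minimal PyTorch sketch of the score function (Eq.~\ref{eq:score}) and the negative-sampling loss for the head position (Eq.~\ref{eq:ns}) is given below; the tensors are random placeholders, and the sampling of corrupted heads is assumed to happen elsewhere.
\begin{verbatim}
import torch
import torch.nn.functional as F

def score(h, r, t, b=7.0):
    # f(h, r, t) = b - 1/2 * ||h + r - t||_2^2   (Eq. score)
    return b - 0.5 * (h + r - t).pow(2).sum(-1)

def loss_head(h, r, t, neg_heads, b=7.0):
    # -log Pr(1 | h, r, t), with Pr(1 | h, r, t) = sigmoid(f(h, r, t))
    pos = F.logsigmoid(score(h, r, t, b))
    # -sum_i log Pr(0 | h~_i, r, t) = -sum_i log sigmoid(-f(h~_i, r, t))
    neg = F.logsigmoid(-score(neg_heads, r, t, b)).sum()
    return -(pos + neg)

k, n_ns = 768, 5                      # embedding size, negative samples
h, r, t = torch.randn(k), torch.randn(k), torch.randn(k)
neg_heads = torch.randn(n_ns, k)      # embeddings of corrupted heads
L_h = loss_head(h, r, t, neg_heads)   # L_r and L_t are built analogously
\end{verbatim}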
\section{Conclusion} We propose a new embedding method that leverages both the semantics and the structures of knowledge graphs for the task of knowledge graph completion, and offers additional benefits in low-resource settings. The method maps a knowledge graph triplet to an embedding space via fine-tuning language models, where the forward pass captures semantics and the loss reconstructs structures. Our method has shown significant improvements on knowledge graph completion benchmarks, and the implementation makes no modifications to the language model architectures. The results suggest that the learned embeddings are generally useful in downstream knowledge-driven applications, and potentially useful for more natural language understanding tasks. We hope our results will foster further research in this direction. \section{Ethical Considerations} We hereby acknowledge that all of the co-authors of this work are aware of the provided \textit{ACM Code of Ethics} and honor the code of conduct. The following outlines both our ethical considerations and our potential impacts on the community. This work uses pre-trained LMs for knowledge graph completion. The risks and potential misuse of LMs are discussed in \citet{Brown_Mann_Ryder}. There are potential undesirable biases in the datasets, such as unfaithful descriptions from Wikipedia. We do not anticipate the production of harmful outputs after using our model, especially towards vulnerable populations. \section{Environmental Considerations} We use BERT and RoBERTa as our pre-trained LMs. According to the estimation in \citet{strubell-etal-2019-energy}, pre-training a base model costs 1,507 kWh$\cdot$PUE and emits 1,438 lb of CO$_2$, while pre-training a large model requires 4 times the resources of a base model. In addition, our fine-tuning takes fewer than 1\% of the gradient steps used in pre-training. Therefore, our energy cost and CO$_2$ emissions are relatively small. Besides, the results in the low-resource settings show that our method has better sample efficiency, indicating that we can further reduce energy consumption by training with less data. \section{Experiments} \label{sec:exp} \subsection{Experimental Setup} \paragraph{Datasets} We test the performance of our method on five KG benchmarks built with three KGs: Freebase~\cite{bollacker_freebase:_2008}, WordNet~\cite{miller_wordnet:_1995}, and UMLS~\cite{ConvE}. Freebase is a large-scale KG containing general knowledge facts. We employ two subsets of Freebase, namely FB15K-237~\cite{toutanova_observed_2015} and FB13~\cite{KB_NL_1}. WordNet provides semantic knowledge of words. We use two subsets of WordNet, namely WN18RR~\cite{ConvE} and WN11~\cite{KB_NL_1}. UMLS is a medical semantic network containing semantic entities and relations. The statistics are summarized in Table~\ref{tab:statistics}. We also provide a detailed description of the datasets in Appendix~\ref{apx:data}. \input{tabs/statistics} \paragraph{Implementation Details} We use two families of LMs with \textsc{LaSS}. First, we adopt both BERT$_{\rm BASE}$ and BERT$_{\rm LARGE}$ from~\cite{Devlin_Chang_Lee_Toutanova_2019}, namely \textsc{LaSS}-BERT$_{\rm BASE}$ and \textsc{LaSS}-BERT$_{\rm LARGE}$. Second, the RoBERTa family~\cite{Liu_Ott_Goyal_Du_Joshi_Chen_Levy_Lewis_Zettlemoyer_Stoyanov_2019} is used, namely \textsc{LaSS}-RoBERTa$_{\rm BASE}$ and \textsc{LaSS}-RoBERTa$_{\rm LARGE}$.
We train \textsc{LaSS}\ with AdamW~\cite{loshchilov_decoupled_2018} on each KG dataset via fine-tuning the corresponding LMs. The training hyperparameters are set as follows. For \textsc{LaSS}-BERT$_{\rm BASE}$ and \textsc{LaSS}-RoBERTa$_{\rm BASE}$, the batch size is set to 128, and the learning rate is set to 3e-5 with linear warm-up and 0.01 weight decay. We set the batch size to 64 for \textsc{LaSS}-BERT$_{\rm LARGE}$ and \textsc{LaSS}-RoBERTa$_{\rm LARGE}$. The number of training epochs is set to 5. The margin $b$ in Eq.~\ref{eq:score} is empirically set to 7. For negative sampling, we sample 5 negatives for each of the head, relation, and tail, resulting in 15 negative triplets for each positive triplet. We represent entities and relations by their names or descriptions~\cite{KGBERT}. For FB15k-237, we use entity descriptions from~\cite{KB_NL_3}. For FB13, we use entity descriptions from Wikipedia. For WN18RR, we use the definitions of synsets as entity descriptions. For WN11 and UMLS, the entity names are used as the entity descriptions. The relation descriptions are based on the relation names across all the datasets. The input sequence is constructed as described in Sec.~\ref{sec:semantic}. For \textsc{LaSS}-BERT$_{\rm BASE}$ and \textsc{LaSS}-BERT$_{\rm LARGE}$, we use a character-level BPE vocabulary; \textspt{[B]} is replaced with \textspt{[CLS]}, and \textspt{[S]} is replaced with \textspt{[SEP]}. For \textsc{LaSS}-RoBERTa$_{\rm BASE}$ and \textsc{LaSS}-RoBERTa$_{\rm LARGE}$, we use a byte-level BPE vocabulary, and \textspt{[B]} and \textspt{[S]} are replaced with \textspt{BOS} and \textspt{EOS}, respectively. We implement \textsc{LaSS}\ using the Transformers package~\cite{Wolf2019HuggingFacesTS}. \paragraph{Comparison Methods} We compare our method to state-of-the-art methods, including (\expandafter{\romannumeral1}) shallow structure embedding: TransE~\cite{TransE}, TransH~\cite{TransH}, TransR~\cite{TransR}, TransD~\cite{ji_knowledge_2015}, TransG~\cite{xiao_transg_2016}, TranSparse~\cite{ji_knowledge_2016}, DistMult~\cite{DistMult}, DistMult-HRS~\cite{zhang_knowledge_2018}, ConvE~\cite{ConvE}, ConvKB~\cite{ConvKB}, ComplEx~\cite{trouillon_complex_2016}, RotatE~\cite{RotateE}, REFE~\cite{chami-etal-2020-low}, HAKE~\cite{zhang_learning_2019}, and ComplEx-DURA~\cite{NEURIPS2020_f6185f0e}; (\expandafter{\romannumeral2}) deep structure embedding: NTN~\cite{KB_NL_1}, DOLORES~\cite{wang_dolores:_2018}, KBGAT~\cite{nathani-etal-2019-learning}, GAATs~\cite{wang_knowledge_2020}, NePTuNe~\cite{abs-2104-07824}, and ComplEx-N3-RP~\cite{ChenM0S21}; (\expandafter{\romannumeral3}) language semantic embedding: TEKE~\cite{wang_text-enhanced_2016}, KG-BERT~\cite{KGBERT}, and StAR~\cite{wang_structure-augmented_2021}. We present a detailed technical description of the above methods in Appendix~\ref{apx:comp}. \input{tabs/tc_results} \subsection{Triplet Classification} The task of triplet classification is to judge whether a given triplet \texthrt{(h, r, t)} is correct or not, i.e., a binary classification task. We use WN11 and FB13 for this task, since among all the datasets only the test sets of these two contain both positive and negative triplets. For the task, we use the score function defined in Eq.~\ref{eq:score} and set a score threshold: if the score of a triplet is above the threshold, the triplet is classified as positive, and otherwise as negative. We set the threshold empirically based on the accuracy on the validation set.
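The threshold selection can be summarized by the following sketch, with toy numbers in place of actual model scores; it is meant to illustrate the procedure, not to reproduce our implementation.
\begin{verbatim}
import numpy as np

def best_threshold(val_scores, val_labels):
    # Candidate thresholds taken from the validation scores themselves;
    # keep the one that maximizes validation accuracy.
    cands = np.sort(val_scores)
    accs = [((val_scores >= c) == val_labels).mean() for c in cands]
    return cands[int(np.argmax(accs))]

# Toy scores f(h, r, t) and gold labels on a validation split.
val_scores = np.array([3.1, -0.2, 4.5, 0.7])
val_labels = np.array([True, False, True, False])
tau = best_threshold(val_scores, val_labels)
test_preds = np.array([2.0, -1.0, 5.3]) >= tau  # positive iff above threshold
\end{verbatim}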
As shown in Table~\ref{tab:tc_results}, we conclude with the following findings. \begin{table}[htb] \centering \resizebox{0.95\linewidth}{!}{ \begin{tabular}{cccc} \toprule Head & Relation & Tail & Label \\ \midrule ron ziegler & gender & male & \textcolor{green}{\ding{51}} \\ john fortescue & profession & writer & \textcolor{green}{\ding{51}} \\ george j adams & cause of death & typhoid fever & \textcolor{green}{\ding{51}} \\ fleiss joseph & institution & columbia university & \textcolor{green}{\ding{51}} \\ edmund husserl & nationality & austria & \textcolor{green}{\ding{51}} \\ aleksandr bakulev & gender & female & \textcolor{red}{\ding{55}} \\ emile littre & profession & physicist & \textcolor{red}{\ding{55}} \\ joseph smith jr & cause of death & emphysema & \textcolor{red}{\ding{55}} \\ frank g slaughter & institution & university of toronto & \textcolor{red}{\ding{55}} \\ julius klinger & nationality & romania & \textcolor{red}{\ding{55}} \\ \bottomrule \end{tabular}} \caption{\small {Samples of \textsc{LaSS}'s correct predictions on FB13, where KG-BERT~\cite{KGBERT} outputs wrong predictions. Label \textcolor{green}{\ding{51}}\ means a gold positive triplet. \textcolor{red}{\ding{55}}\ indicates a gold negative triplet.}} \label{tab:good_case_fb13} \end{table} \begin{figure}[htbp] \centering \includegraphics[width=0.45\textwidth]{figs/WN11_FB13-cropped.pdf} \caption{{\small Triplet classification accuracy in a low-resource regime: training with different proportions of the corresponding training datasets on WN11 and FB13.}} \label{fig:partial-bert} \end{figure} We find that our methods consistently produce state-of-the-art results on the triplet classification task. This indicates that our score function captures semantics and structures that are crucial for triplet classification. We also notice that \textsc{LaSS}-BERT generates slightly better results than \textsc{LaSS}-RoBERTa. This is likely because RoBERTa removes the next sentence prediction (NSP) objective, which BERT retains and which naturally fits the triplet classification task. \textsc{LaSS}-RoBERTa still generates reasonable results: the masked LM objective captures the semantics needed for triplet classification, and \textsc{LaSS}\ is able to preserve this important semantic information. In Table~\ref{tab:good_case_fb13}, we also show cases where \textsc{LaSS}-BERT$_{\rm BASE}$ makes correct predictions while KG-BERT produces incorrect ones on FB13. Compared to KG-BERT, we find that \textsc{LaSS}\ is more capable on relations that require comprehensive structure information, such as ``institution''. \input{tabs/lp_results} \subsection{Low-Resource Settings} We additionally test the accuracy of triplet classification in a low-data regime, in particular when using 5\%, 10\%, 15\%, 20\%, and 30\% of the training data of WN11 and FB13. The results are shown in Figure~\ref{fig:partial-bert}. \textsc{LaSS}-BERT$_{\rm LARGE}$ consistently outperforms the state-of-the-art KG-BERT. This indicates that \textsc{LaSS}\ is more data-efficient, as it leverages both semantics and structures in the training data. We also find that \textsc{LaSS}\ trained with less data produces competitive results compared to existing methods trained with full data: \textsc{LaSS}-BERT$_{\rm LARGE}$ with 5\% of the WN11 training data outperforms most of the existing methods using full training data, and with 10\% of the FB13 training data it performs comparably with KG-BERT trained on the full data and outperforms the remaining methods.
This is because \textsc{LaSS}\ transfers the knowledge about semantics to the tasks better than existing approaches that do not fully leverage the KG semantics. The results suggest that \textsc{LaSS}\ is effective in low-resource scenarios. \subsection{Link Prediction} \label{sec:link} Link prediction aims to predict a missing entity given a relation and the other entity, and is evaluated as a ranking problem. We perform link prediction on the FB15k-237, WN18RR, and UMLS datasets. For each correct triplet \texthrt{(h, r, t)}, either \texthrt{h} or \texthrt{t} is corrupted by replacing it with every other entity in the entity set $E$. These triplets are ranked based on the scores produced by Eq.~\ref{eq:score} of \textsc{LaSS}. The evaluation is under the filtered setting~\cite{TransE}, i.e., all corrupted triplets that already appear in the train, dev, or test set are removed from the ranking. Two common metrics, Mean Rank (MR) and Hits@10 (the proportion of correct entities ranked in the top 10), are used to evaluate the results. A lower MR is better, while a higher Hits@10 is better. From the results in Table~\ref{tab:lp_results}, we summarize the key observations below. All our methods significantly outperform the compared methods in MR and reach competitive or better Hits@10. \textsc{LaSS}-RoBERTa$_{\rm LARGE}$ performs the best on WN18RR, outperforming the best compared method, StAR, by 11 units in MR and 5.4\% in Hits@10. It also delivers the best MR on FB15k-237. On UMLS, the existing state-of-the-art performance sets a high standard; however, \textsc{LaSS}-BERT$_{\rm BASE}$ still outperforms the others by at least 0.08 units in MR. The reasons for the improvements are mainly two-fold. (\expandafter{\romannumeral1}) \textsc{LaSS}\ is able to capture the structural patterns in the existing triplets to predict the missing ones via the structured loss. Compared to KG-BERT, \textsc{LaSS}\ is able to use the neighboring entities in the KGs for the prediction. (\expandafter{\romannumeral2}) \textsc{LaSS}\ is able to maintain the semantics of the KGs through the semantic embedding and thus avoid assigning high ranks to unreasonable triplets. For example, if \texthrt{CEO\_Of} holds between two entities, then \texthrt{employee\_Of} also holds, but \texthrt{birth\_Place} does not. This is the main reason that \textsc{LaSS}\ outperforms all structure-embedding-based methods by a large margin, especially in MR. For instance, \textsc{LaSS}\ significantly outperforms TransE, which shares a similar structured loss with \textsc{LaSS}. Compared to the improvements made on FB15k-237, \textsc{LaSS}-RoBERTa$_{\rm LARGE}$ improves the state-of-the-art results on WN18RR more significantly. The main reason for such significant improvements is that the pre-trained LMs provide more semantics in the semantic embedding for WordNet, as those LMs are trained on textual corpora to capture relationships between words. While WordNet provides the relationships between words, FB15k-237 contains real-world entities and relations, which are less well captured by the LMs. We also notice that \textsc{LaSS}\ only produces moderate Hits@10 on FB15k-237. The main reason is that FB15k-237 presents more complex relations between entities compared to the other link prediction datasets shown in Table~\ref{tab:statistics}. Therefore, a more complex structured loss is expected for \textsc{LaSS}\ to gain further improvements; we leave this as one of the future explorations.
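For reference, the filtered ranking evaluation described at the beginning of this subsection can be sketched as follows; the scores and index sets are synthetic placeholders.
\begin{verbatim}
import numpy as np

def filtered_rank(scores, gold, known_true):
    # scores[e] = f(h, r, e) for every candidate entity e; known_true holds
    # entity indices that form true triplets in train/dev/test.
    keep = np.ones(len(scores), dtype=bool)
    keep[list(known_true - {gold})] = False        # the filtered setting
    kept = np.flatnonzero(keep)
    order = kept[np.argsort(-scores[kept])]        # best score first
    return int(np.where(order == gold)[0][0]) + 1  # 1-based rank of gold

rng = np.random.default_rng(0)
ranks = [filtered_rank(rng.normal(size=100), 7, {3, 7, 12})
         for _ in range(50)]
mr = np.mean(ranks)                           # Mean Rank: lower is better
hits10 = np.mean([rk <= 10 for rk in ranks])  # Hits@10: higher is better
\end{verbatim}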
Besides, on FB15k-237 and WN18RR, \textsc{LaSS}-BERT$_{\rm LARGE}$ outperforms \textsc{LaSS}-BERT$_{\rm BASE}$, and \textsc{LaSS}-RoBERTa$_{\rm LARGE}$ also outperforms \textsc{LaSS}-RoBERTa$_{\rm BASE}$. This confirms the recent findings~\cite{LAMA} that larger LMs store more semantic knowledge in their parameters. We expect further improvements when larger LMs are used with \textsc{LaSS}. On UMLS, we observe slightly different trends, mainly because UMLS is a relatively small dataset on which large models can overfit. Overall, RoBERTa improves the BERT pre-training procedure in several respects, which enables it to achieve better performance in many downstream tasks; this suggests that an improved pre-training procedure enriches the semantics learned by the corresponding LM. Both link prediction and triplet classification are core KG completion tasks, and the results show that the proposed \textsc{LaSS}\ generalizes well across them. Different from KG-BERT, which designs different models for the two tasks, our method does not introduce task-specific parameters or losses. \begin{figure}[htb] \centering \subcaptionbox{{\small Semantics.}\label{fig:case_semantic}}{\includegraphics[width=0.25\textwidth]{figs/case_semantic.png}}% \hspace{0.1in} \subcaptionbox{{\small Structures.}\label{fig:case_structure}}{\includegraphics[width=0.2\textwidth]{figs/case_structure.png}}% \caption{{\small Illustration of attention weights of the last layer of \textsc{LaSS}-BERT$_{\rm BASE}$.}} \label{fig:para} \end{figure} \subsection{Case Study} We show uncurated examples to illustrate why \textsc{LaSS}~can yield the above results, especially how the parameters of the LMs capture the semantics and structures. As attention layers are the basic building blocks of the LMs, we focus on visualizing the attention weights for different input sequences. We use BertViz~\cite{vig_multiscale_2019} to illustrate the attention weights of the LMs. Given an example of a positive triplet, $h=$ ``symbololatry, the worship of symbols'', $r=$ ``hypernym'', and $t=$ ``veneration, religious zeal'', Figure~\ref{fig:case_semantic} shows the attention weights of the last layer of \textsc{LaSS}-BERT$_{\rm BASE}$ on WN11. We find that semantically related tokens attend to each other with relatively high scores. For example, ``religious'' attends intensively to ``worship'' and ``veneration''. As in multi-head self-attention~\cite{Vaswani_Shazeer_Parmar_Uszkoreit_Jones_Gomez_Kaiser_Polosukhin_2017}, different attention heads (shown in different colors) attend to different aspects of the input; the heads are then concatenated to compute the final attention weights. The darker the color, the larger the attention score. This demonstrates that the semantic embedding of \textsc{LaSS}~is effective in capturing the semantics in the natural language description of the triplets. We show another positive example with $h=$ ``successfulness, the condition of prospering'', $r=$ ``hypernym'', and $t=$ ``luckiness, an auspicious state resulting from favorable outcomes''. Figure~\ref{fig:case_structure} illustrates the attention weights of the last layer of \textsc{LaSS}-BERT$_{\rm BASE}$ on WN11. We observe that tokens with similar structural roles in the triplet attend strongly to each other, even though they share less semantic similarity. For instance, the attention score between ``hypernym'' and ``condition'' is large.
There is also a large attention score between ``hypernym'' and ``state''. This is because both ``condition'' and ``state'' capture the critical structure information of the triplet. The results indicate that the structure embedding of \textsc{LaSS}~is able to reconstruct the structure information in the semantic embeddings. \subsection{Error Analysis} To better understand the limitations of \textsc{LaSS}, we perform a detailed analysis of the errors, using triplet classification as an example. We investigate the errors made by \textsc{LaSS}-BERT$_{\rm BASE}$ on WN11 and summarize them by relation in Table~\ref{tab:error}. We find that most errors are caused by relations that are hard to distinguish from each other due to their semantic similarities. For example, ``domain topic'' and ``domain region'' are such relations with an unclear semantic boundary. \begin{table}[htb] \centering \resizebox{0.85\linewidth}{!}{ \begin{tabular}{cc} \toprule {\bf Relation} & {\bf Percentage} (\%) \\ \hline domain topic & 19.8 \\ domain region & 10.8 \\ member meronym & 9.1 \\ has instance & 8.4 \\ has part & 8.1 \\ similar to & 7.1 \\ part of & 6.3 \\ synset domain topic & 5.5 \\ type of & 4.6 \\ member holonym & 3.9 \\ subordinate instance of & 3.2 \\ \bottomrule \end{tabular}} \caption{{\small Analysis of the most common errors of \textsc{LaSS}-BERT$_{\rm BASE}$, categorized by relation, on WN11.}} \label{tab:error} \end{table} \section{Introduction} Knowledge graphs (KG), such as Wikidata and Freebase~\cite{bollacker_freebase:_2008}, consist of factual triplets. KGs have been useful resources for both humans and machines. A triplet in the form of \texthrt{(head entity, relation, tail entity)}, where the relation relates the head and tail entities, has been used in a great variety of applications, such as question answering~\cite{Guu2015TraversingKG,hao_end--end_2017} and web search~\cite{Xiong2017ExplicitSR}. Incompleteness has been a longstanding issue in KGs~\cite{Carlson2010TowardAA}, impeding their wider adoption in real-world applications. KG completion aims to predict a missing entity or relation of a factual triplet. Structural patterns in the existing triplets are useful for predicting the missing elements~\cite{TransE,RotateE}. For example, a composition pattern can be learned to predict the relation \texthrt{grandmother\_Of} based on two consecutive \texthrt{mother\_Of} relations. Besides the structure information, semantic relatedness between entities and relations is also critical to infer entities or relations with similar meanings~\cite{KB_NL_6,KGBERT,wang_structure-augmented_2021}. For example, if the relationship \texthrt{CEO\_Of} holds between two entities, the relation \texthrt{employee\_Of} also holds. There are two kinds of KG completion approaches, falling into different learning paradigms. First, the structure-based approaches treat entities and relations as nodes and edges, and use graph embedding methods to learn their representations. Second, the semantic-based approaches encode the text descriptions of entities and relations via language models. While both structures and semantics are vital to KG completion, it is non-trivial for existing methods to process both structural and semantic information. \begin{figure*}[h] \centering \includegraphics[width=0.70\textwidth]{figs/pretrain-linyuan-2.pdf} \caption{{\small Overview of \textsc{LaSS}.
\textsc{LaSS}\ maps a knowledge triplet \texthrt{(Head Entity, Relation, Tail Entity)}, in short \texthrt{(h, r, t)}, to the corresponding embedding vectors, $\mathbf{h}, \mathbf{r}, \mathbf{t} \in \mathbb{R}^k$. \textsc{LaSS}\ embeds KGs for KG completion via fine-tuning pre-trained language models (LM) w.r.t. a probabilistic structured loss, where the forward pass of the LMs captures semantics and the loss reconstructs structures. In particular, \textsc{LaSS}\ consists of semantic embedding and structure embedding. The {\em semantic embedding} (leftmost arrow) is generated by a forward pass of the LMs followed by a pooling layer over the natural language description of a triplet. \textspt{[B]} (the beginning token) and \textspt{[S]} (the separator token) are special tokens of the LMs attached to the description. For example, the textual description of the head entity is $(x_1^h, \cdots, x_{n_h}^h)$, and $\mathbf{h}$ is calculated as the mean pooling of the corresponding LM outputs $(\mathbf{o}_1^h, \cdots, \mathbf{o}_{n_h}^h)$; $\mathbf{r}$ and $\mathbf{t}$ are calculated similarly. The {\em structure embedding} (rightmost arrow) reconstructs KG structures in the semantic embeddings via optimizing a structured loss on top of the LMs through backpropagation. The structured loss is based on a score function $f(\Vert \mathbf{h}+\mathbf{r}-\mathbf{t} \Vert_2^2)$, which regards the relationship between two entities as a translation between the embeddings of the entities. The goal is to minimize the loss function so that $\mathbf{h} + \mathbf{r} \approx \mathbf{t}$ when \texthrt{(h, r, t)} holds.} \label{fig:overview}} \end{figure*} In this paper, we propose \textsc{LaSS}, a joint language semantic and structure embedding for knowledge graph completion, which incorporates both the semantics and the structures of KG triplets. \textsc{LaSS}\ embeds a triplet into a vector space by fine-tuning pre-trained language models (LM) with respect to a structured loss. \textsc{LaSS}\ involves both semantic embedding and structure embedding. The semantic embedding captures the semantics of the triplet and corresponds to the forward pass of a pre-trained LM over the natural language description of the triplet. The structure embedding aims to reconstruct the structures in the semantic embedding and corresponds to optimizing a probabilistic structured loss via the backpropagation of the LM. Intuitively, the structured loss treats the relationship between two entities as a translation between the embeddings of the entities. \textsc{LaSS}\ outperforms the existing approaches on a collection of KG completion benchmarks. We further evaluate \textsc{LaSS}\ in low-resource settings and find that it is more data-efficient than other methods, because it exploits both the semantics and the structures in the training data. The contributions are the following: \begin{itemize} \item We design a natural language embedding approach, \textsc{LaSS}, that integrates both structural and semantic information of KGs for KG completion. We train \textsc{LaSS}~by fine-tuning pre-trained LMs w.r.t. a structured loss, where the forward pass of the LMs captures semantics and the loss reconstructs structures. The method consists of both a KG module and an LM module, which sheds light on the connections between KGs and deep language representations, and advances the research at the intersection of the two areas.
\item We evaluate \textsc{LaSS}~on two KG completion tasks, link prediction and triplet classification, and obtain state-of-the-art performance. The results suggest that capturing both semantics and structures is critical to understanding KGs. The findings are beneficial to many downstream knowledge-driven applications. \item We show that we can significantly improve the performance in low-resource settings over existing approaches, thanks to the improved transfer of knowledge about semantics. \end{itemize} \section{Discussion} \label{sec:dis} \paragraph{Structure Losses} There are several directions to further improve \textsc{LaSS}. \textsc{LaSS}~uses the probabilistic structured loss based on the score function of TransE, which learns a single representation for every entity and relation in the same embedding space. However, different relations call for different entity embeddings. We propose to enable an entity to have distinct distributed representations when involved in different relations. For example, a new score function $\Vert \mathbf{h}_r+\mathbf{r}-\mathbf{t}_r \Vert_2^2$ models entities and relations in distinct spaces and performs the translation between entity embeddings in the relation space. The idea is in the same spirit as TransH~\cite{TransH} and TransR~\cite{TransR}. However, a downside of leveraging such losses is the additional computational overhead they bring. Our method aims to trade off computation cost and effectiveness.
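As an illustration of this direction, the following numpy sketch (an assumption in the spirit of TransR, with a per-relation projection matrix $M_r$; not part of \textsc{LaSS}) shows the relation-space score and its extra cost:
\begin{verbatim}
import numpy as np

def relation_space_score(h, r, t, M_r, b=7.0):
    # Project entity embeddings into the relation space before translating:
    # b - 1/2 * ||h_r + r - t_r||_2^2, with h_r = M_r h and t_r = M_r t.
    h_r, t_r = M_r @ h, M_r @ t
    return b - 0.5 * np.sum((h_r + r - t_r) ** 2)

k, d = 768, 256                         # entity and relation-space dims
rng = np.random.default_rng(0)
h, t = rng.normal(size=k), rng.normal(size=k)
r, M_r = rng.normal(size=d), rng.normal(size=(d, k))
s = relation_space_score(h, r, t, M_r)  # extra cost: two d-by-k matmuls
\end{verbatim}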
Exploring computation-light methods that involve alternative losses is one of the future investigations. \paragraph{Pre-trained LMs} We have explored two pre-trained LM families: BERT and RoBERTa. There are three possible directions along this line. First, as indicated by the experimental findings, larger LMs often store more semantics, which can improve the semantic embedding module of \textsc{LaSS}. We propose to examine larger pre-trained LMs, such as GPT-2~\cite{Radford_Wu_Child_Luan_Amodei_Sutskever_2019}, GPT-3~\citep{Brown_Mann_Ryder}, and Megatron-LM~\citep{shoeybi_megatron-lm:_2020}. Incorporating longer language descriptions (e.g., Wikipedia pages) of the entities in the knowledge graphs will provide richer knowledge for improved natural language understanding. Second, the fine-tuning procedure of the deep LMs for KG completion tasks, especially link prediction, is still computationally inefficient. Investigating light LM architectures, such as ALBERT~\citep{lan_albert:_2019}, to speed up the training process is a promising direction. Finally, our proposed method is generally useful for many knowledge-driven downstream NLP tasks (e.g., question answering, factual probing) as well as low-resource NLP tasks. Ensembling our method with autoregressive models (e.g., GPT-2) will enable the method to perform text generation tasks. \section{Related Work} \paragraph{Pre-trained LMs} Pre-trained LMs, such as BERT, have recently been used to obtain state-of-the-art results in many NLP benchmarks~\cite{Devlin_Chang_Lee_Toutanova_2019,Liu_Ott_Goyal_Du_Joshi_Chen_Levy_Lewis_Zettlemoyer_Stoyanov_2019}. These models are usually based on Transformers~\cite{Vaswani_Shazeer_Parmar_Uszkoreit_Jones_Gomez_Kaiser_Polosukhin_2017} and trained on unlabeled text corpora. They are used to improve downstream tasks via embedding~\cite{Peters_Neumann_Iyyer_Gardner_Clark_Lee_Zettlemoyer_2018}, fine-tuning~\cite{Radford_2018}, or few-shot learning~\cite{Radford_Wu_Child_Luan_Amodei_Sutskever_2019}. Fine-tuning bidirectional Transformers is the most widely used scheme in recent NLP applications, and the approach described in this paper is also based on this scheme. The main difference is that we design a structured loss on top of the LMs aiming to capture structures in natural language. \paragraph{Knowledge Graph Embedding} KG embedding aims to map entities and their relations to a continuous vector space. Traditional KG embedding methods represent each entity and each relation with a fixed vector. For any triplet \texthrt{(h, r, t)}, they use a scoring function $f(\mathbf{h},\mathbf{r},\mathbf{t})$ to model its likelihood. The scoring function of TransE~\cite{TransE} is a negative translational distance. It can be augmented with different geometric transformations, such as linear projections~\cite{TransH,TransR} or rotations~\cite{RotateE}. Other models based on bilinear transformations~\cite{DistMult} and convolutions~\cite{ConvE} also show promising results on KG completion benchmarks. Our structured loss is motivated by TransE, with the following main differences. TransE~\cite{TransE} treats the relation as a translation of the embeddings from the head to the tail, so that $\mathbf{h} + \mathbf{r} \approx \mathbf{t}$ when \texthrt{(h, r, t)} holds, and designs a margin-based ranking loss based on the $l_2$ norm $\Vert \mathbf{h}+\mathbf{r}-\mathbf{t} \Vert_2^2$.
The key differences between \textsc{LaSS}\ and TransE are: (\expandafter{\romannumeral1}) \textsc{LaSS}\ leverages the natural language semantics in LMs, while TransE does not; (\expandafter{\romannumeral2}) \textsc{LaSS}\ uses a probabilistic structured loss and is more computationally efficient and data-efficient than TransE. The main advantage of the probabilistic loss is that we eliminate the norm calculation that TransE requires to prevent the training process from trivially minimizing its loss by increasing the norms of the entity or relation embeddings. The ranking loss of TransE evaluates to zero for some training examples, which then do not contribute to the optimization procedure, whereas our probabilistic loss makes use of all the training examples. Besides, we introduce corrupted relations in the loss, which provides more flexibility in incorporating the KG structure. The aforementioned traditional KG embedding approaches regard entities and relations as basic units, without using any extra information. However, studies~\cite{KB_NL_1,KB_NL_2,KB_NL_3} show that a KG model that models the natural language descriptions of entities and relations usually outperforms methods that only model the structure of knowledge triplets. \citet{LAMA} use LMs as virtual KGs to answer factual questions. ERNIE~\cite{ERNIE} integrates structural KGs into pre-trained models to improve knowledge-driven NLP tasks. By contrast, we aim to combine both the structures and the semantics of the KGs via a unified optimization procedure for the task of KG completion. KG-BERT~\cite{KGBERT} models KG completion tasks as sentence classification tasks and solves them by fine-tuning pre-trained LMs. There are several key differences between our \textsc{LaSS}\ and KG-BERT~\cite{KGBERT}: (\expandafter{\romannumeral1}) \textsc{LaSS}\ reconstructs the structures of KGs via structure embedding, while KG-BERT does not; (\expandafter{\romannumeral2}) \textsc{LaSS}\ unifies link prediction and triplet classification under the same architecture, while KG-BERT designs different architectures for different tasks; (\expandafter{\romannumeral3}) \textsc{LaSS}\ works with two families of LMs, while KG-BERT only works with BERT$_{\rm BASE}$. \textsc{LaSS}\ is not particularly designed for BERT, shedding light on understanding the role of semantics in LMs for KG completion.
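To make the contrast between the two losses concrete, the following numpy sketch (illustrative placeholders, not from either implementation) compares the margin-based ranking loss of TransE with the probabilistic loss used here:
\begin{verbatim}
import numpy as np

def dist(h, r, t):
    return np.sum((h + r - t) ** 2)            # ||h + r - t||_2^2

def transe_margin_loss(pos, neg, gamma=1.0):
    # max(0, gamma + d(pos) - d(neg)): exactly zero (no gradient) once
    # the positive beats the negative by the margin.
    return max(0.0, gamma + dist(*pos) - dist(*neg))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def probabilistic_loss(pos, neg, b=7.0):
    # Always nonzero: every example keeps contributing to the optimization.
    f = lambda h, r, t: b - 0.5 * dist(h, r, t)
    return -np.log(sigmoid(f(*pos))) - np.log(sigmoid(-f(*neg)))

rng = np.random.default_rng(0)
pos = tuple(rng.normal(size=3) * 0.1 for _ in range(3))  # well-fit triplet
neg = tuple(rng.normal(size=3) * 5.0 for _ in range(3))  # corrupted triplet
print(transe_margin_loss(pos, neg), probabilistic_loss(pos, neg))
\end{verbatim}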
{ "timestamp": "2022-09-20T02:23:41", "yymm": "2209", "arxiv_id": "2209.08721", "language": "en", "url": "https://arxiv.org/abs/2209.08721" }
\section{Acknowledgment} \label{sec:Acknowledgement} The authors would like to thank Han Gong, Junchao Ma, and Manas Shah for contributing to this work. \section{Conclusions} \label{sec:Conclusion} In conclusion, we introduced a Multi-contact MPC framework to tackle the problem of contact mode transitions in humanoid dynamic loco-manipulation. We investigated the optimal dynamics model for humanoid loco-manipulation when considering object dynamics. By treating the object as an external force in the dynamics, we eliminated sudden and significant changes in the MPC formulation. With the proposed method, the humanoid robot completed 3-D multi-task dynamic loco-manipulations. The proposed method is more efficient than quasi-static approaches on humanoid robots in applications such as package transfer in logistics. \section{Contact-schedule-based Control} \label{sec:Control} In this section, we present the contact-schedule-based control framework for synchronized humanoid loco-manipulation: a hybrid control framework consisting of CSMPC and CSWBC. \subsection{Contact Schedule Visualization} \label{subsec:contactSchedule} The contact schedule in MPC is powerful due to the predictive nature of MPC, which optimizes the current control input with knowledge of future dynamics changes. We leverage this to allow the robot to undergo frequent changes in dynamics. A visualization of the contact schedule in CSMPC is shown in Figure~\ref{fig:contactSchedule}. This contact schedule describes a task commanding the robot to pick up and walk with an object. \subsection{CSMPC Formulation} \label{subsec:CSMPC_form} First, we present the CSMPC formulation that employs the SRBD proposed in Section~\ref{subsec:CSMPC}. We choose the state variables as $[{\bm \Theta};{\bm p}_c;{\bm \omega};\dot {{\bm p}}_c]$ and the control inputs as $\bm U= [\bm u; \bm F_{ext}]$; the simplified dynamics equation can then be represented as \begin{align} \label{eq:simpDyn} \frac{d}{dt}\left[\begin{array}{c} {\bm \Theta}\\{\bm p}_c\\{\bm \omega}\\\dot {{\bm p}}_c \end{array} \right] = \bm A \left[\begin{array}{c} {\bm \Theta}\\{\bm p}_c\\{\bm \omega}\\\dot {{\bm p}}_c \end{array} \right] + \bm B \bm U + \left[\begin{array}{c} \mathbf 0_{3\times1}\\\mathbf 0_{3\times1}\\\mathbf 0_{3\times1}\\\bm g \end{array} \right] \end{align} \begin{align} \label{eq:A} \bm A = \left[\begin{array}{cccc} \mathbf 0_3 & \mathbf 0_3 & \mathbf R_b^{-1} & \mathbf 0_3 \\ \mathbf 0_3 & \mathbf 0_3 & \mathbf 0_3 & \mathbf I_3\\ \mathbf 0_3 & \mathbf 0_3 & \mathbf 0_3 & \mathbf 0_3\\ \mathbf 0_3 & \mathbf 0_3 & \mathbf 0_3 & \mathbf 0_3 \end{array} \right], \mathbf R_b = \left[\begin{array}{ccc} {c_\theta}{c_\psi} & -{s_\psi} & 0 \\ {c_\theta}{s_\psi} & c_\psi & 0 \\ -s_\theta & 0 & 1 \end{array} \right] \end{align} \begin{align} \label{eq:B} \bm B = \left[\begin{array}{ccccc} \mathbf 0_3 & \mathbf 0_3 & \mathbf 0_{3\times2} & \mathbf 0_{3\times2} & \mathbf 0_3\\ \mathbf 0_3 & \mathbf 0_3 & \mathbf 0_{3\times2} & \mathbf 0_{3\times2} & \mathbf 0_3 \\ \frac{\sigma _{C1} \bm r_1\times}{\bm I_G} & \frac{\sigma _{C2} \bm r_2\times}{\bm I_G} & \frac{\sigma _{C1}\mathbf L}{\bm I_G} & \frac{\sigma _{C2} \mathbf L}{\bm I_G} & \frac{\sigma _{e}\bm r_e\times}{\bm I_G}\\ \frac{\sigma _{C1}\mathbf {I}_{3}}{m_{ub}} & \frac{\sigma _{C2} \mathbf {I}_{3}}{m_{ub}} & \mathbf {0}_{3\times2} & \mathbf {0}_{3\times2} & \frac{\sigma _{e}\mathbf {I}_{3}}{m_{ub}} \end{array} \right] \end{align} where $s$ denotes the sine operator and $c$ denotes the cosine operator.
Note that $\mathbf R_b$ is not invertible at $\theta = \pm90^\circ$; we therefore do not allow the robot to reach a pitch angle of $\pm90^\circ$ in any task. To guarantee the linear relation in the formulation of Equation (\ref{eq:simpDyn}), we include gravity $\bm g \in \mathbb{R}^3$ as a dummy variable in the decision variables $\bm x = [{\bm \Theta};{\bm p}_c;{\bm \omega};\dot {{\bm p}}_c; \bm g]\in \mathbb{R}^{15}$, forming a state-space equation with continuous-time matrices ${\hat{\bm A_c}}$ and ${\hat{\bm B_c}}$: \begin{align} \label{eq:linearSS} \dot{{\bm { x}}}(t) = {\hat{\bm A_c}} {{\bm {x}}} + {\hat{\bm B_c}} \bm U. \end{align} The formulation of the MPC problem with finite horizon $k$ is written as follows. The objective of the problem is to drive the state $\bm x$ close to the reference while minimizing $\bm U$. The reference states, mentioned in Section~\ref{sec:dynamicsModel}, are the desired rigid-body states for the task at hand (e.g., the body leaning backward when carrying an object). These objectives are weighted by matrices $\bm Q_i \in \mathbb R^{15\times 15}$ and $\bm R_i \in \mathbb R^{10\times 10}$. \begin{align} \label{eq:MPCform} \underset{\bm{x,U}}{\operatorname{min}} \:\: & \sum_{i = 0}^{k-1}(\bm x_{i+1}- \bm x_{i+1}^{ref})^T\bm Q_i(\bm x_{i+1}- \bm x_{i+1}^{ref}) + \bm{R}_i\| \bm{U}_i \| \end{align} \begin{subequations} \begin{align} \label{eq:dynamicCons} \:\:\operatorname{s.t.} \quad {\bm {x}}[i+1] = \bm {\hat{A}}[i]\bm x[i] + \bm {\hat{B}}[i]\bm U[i], \\ \label{eq:frictionCons} \nonumber -\mu {F}_{nz} \leq F_{nx} \leq \mu {F}_{nz} \quad \quad\\ -\mu {F}_{nz} \leq F_{ny} \leq \mu {F}_{nz} \quad \quad\\ \label{eq:forceCons} 0< {F}_{min} \leq F_{nz} \leq {F}_{max} \quad \quad\\ \label{eq:forceCons2} \bm {F}_{ext} = \left[\begin{array}{ccc} 0 & 0 & m_og \end{array} \right]^\intercal \quad \quad \end{align} \end{subequations} Equations (\ref{eq:dynamicCons}) to (\ref{eq:forceCons2}) are the constraints of the MPC problem. Equation (\ref{eq:dynamicCons}) is an equality constraint of the linearized dynamics equation in discrete time at the $i$th time step, derived from Equation (\ref{eq:linearSS}). Equation (\ref{eq:frictionCons}) describes the inequality constraints of the contact friction pyramid. Equation (\ref{eq:forceCons}) describes the bounds of the reaction forces. Equation (\ref{eq:forceCons2}) is an equality constraint ensuring that the external force in the control input equals the gravitational force of the object. The translation of the proposed MPC problem into Quadratic Programming (QP) form, so that it can be solved efficiently, can be found in many related and previous works (e.g., \cite{di2018dynamic}, \cite{li2021force}). \begin{figure}[!t] \vspace{0.2cm} \center \includegraphics[clip, trim=7cm 1.3cm 10cm 8.5cm, width=\columnwidth]{Figures/contactSchedule.pdf} \caption{{\bfseries MPC Contact Schedule Visualized} } \label{fig:contactSchedule} \vspace{-0.4cm} \end{figure} \subsection{Task-oriented Cartesian PD Control} \label{subsec:CartesianPD} While the CSMPC primarily provides optimal ground reaction forces and moments to the stance legs during locomotion, we use Cartesian-space PD control to manipulate the arms and the swing foot $n$ based on the tasks and contact schedules.
The desired swing foot location $\bm p_{f,n}^{des} \in \mathbb R^{3}$ follows a heuristic foot placement policy based on the spring-loaded inverted pendulum \cite{raibert1986legged}, with the addition of the desired CoM linear velocity $\dot{\bm p}_c^{des}$ \cite{kim2019highly}, \begin{align} \label{eq:footPlacement} \bm p_{f,n}^{des} = \bm p_{c} + \dot{\bm p}_c \Delta t/2 + k(\dot{\bm p}_c-\dot{\bm p}_c^{des}), \end{align} where $\Delta t$ is the gait period and $k$ is a scaling factor for tracking the desired linear velocity (e.g., during push recovery). The $n$th swing foot force by Cartesian PD control is \begin{align} \label{eq:pdlaw} \bm F_{swing,n}=\bm K_P(\bm p_{f,n}^{des}-\bm p_{f,n})+\bm K_D(\dot{\bm p}_{f,n}^{des}-\dot{\bm p}_{f,n}) \end{align} Similarly, to control the $m$th hand of the robot to move to the desired task location $\bm p_{h,m}^{des} \in \mathbb R^{3}$, the hand force is \begin{align} \label{eq:pdlaw2} \bm F_{hand,m}=\bm K_P(\bm p_{h,m}^{des}-\bm p_{h,m})+\bm K_D(\dot{\bm p}_{h,m}^{des}-\dot{\bm p}_{h,m}) \end{align} \begin{figure*}[!t] \vspace{0.2cm} \center \includegraphics[width=\textwidth]{Figures/snapshots.png} \caption{{\bfseries Simulation Snapshots} Simulation snapshots of 1) picking up, walking with, and dropping off a 4 $\unit{kg}$ box object; 2) picking up, walking in place with, and throwing a 2 $\unit{kg}$ spherical object. The blue region represents the origin of the object. The orange region represents the target location for drop-off. } \label{fig:snapshots} \vspace{-0.7cm} \end{figure*} \subsection{CSWBC Formulation} \label{subsec:CSWBC_form} To synchronize the hybrid controller inputs from Sections~\ref{subsec:CSMPC_form} and \ref{subsec:CartesianPD} and leverage the whole-body dynamics model in Section~\ref{subsec:CSWBC}, we use a contact-schedule-based WBC to collect all control inputs and solve for optimal joint torques. The CSWBC is simplified and formed into a QP problem for efficient execution at high frequency. The desired command in CSWBC is based on the desired states of the robot $\bm x_c^{des} = [\bm p_c^{des}; \bm \Theta^{des}]$, via a simple PD control law: \begin{align} \label{eq:desAcc} \ddot{\bm x}_c^{des} = \bm K_p(\bm x_c^{des} - \bm x_c) + \bm K_d(\dot{\bm x}_c^{des} - \dot{\bm x}_c) \end{align} The desired acceleration command $\ddot{\bm x}_c^{des}$ is translated into the joint-space acceleration command $\ddot {\mathbf {q}}_{cmd}$ by an inverse-kinematics-based null-space projection technique described in \cite{kim2019highly}. The WBC-QP problem that computes the minimized relaxation components of the MPC ground reaction force, $\Delta \bm u$, and of the joint acceleration command, $\Delta \ddot{\mathbf q}$, is as follows: \begin{align} \label{eq:WBC-QP} \underset{{\Delta \ddot{\mathbf q},\Delta \bm u}}{\operatorname{min}} \:\: & \Delta \ddot{\mathbf q}^\intercal {\mathbf H} \Delta \ddot{\mathbf q} + \Delta \bm u^\intercal {\mathbf K} \Delta \bm u \vspace{0.5cm} \end{align} \begin{subequations} \begin{align} \label{eq:WBC_cons1} \nonumber \operatorname{s.t.} \quad \mathbf S_{b}\{\mathbf M (\Delta \ddot{\mathbf q} + \ddot{\mathbf q}_{cmd}) + \mathbf C + \mathbf g \\ - \bm \tau_f(t, \Delta \bm u , \bm u)\} = \mathbf 0 \\ \label{eq:WBC_cons3} \quad \quad \quad \bm u_{min} \leq \Delta \bm u + \bm u \leq \bm u_{max} \quad\\ \label{eq:WBC_cons4} \quad \quad \quad \bm \tau_{min} \leq \bm \tau \leq \bm \tau_{max} \quad \end{align} \end{subequations} In Equation (\ref{eq:WBC-QP}), $\mathbf H \in \mathbb{R}^{16\times16}$ and $\mathbf K \in \mathbb{R}^{10\times10}$ are diagonal weighting matrices for each objective.
Equation (\ref{eq:WBC_cons1}) is a dynamics constraint formed from Equation (\ref{eq:EOM1}) in order to control the floating-base dynamics. The selection matrix $\mathbf S_{b}$ consists of 1s and 0s and identifies the floating-base joints. Finally, the optimal joint torques $\bm \tau$ can be calculated as \begin{align} \label{eq:torque} \left[\begin{array}{c} \bm 0 \\ \bm \tau \end{array} \right] = \mathbf M (\Delta \ddot{\mathbf q} + \ddot{\mathbf q}_{cmd}) + \mathbf C + \mathbf g - \bm \tau_f(t, \Delta \bm u , \bm u) \end{align} where $\bm \tau_f$ is a time-dependent term that summarizes all external forces applied to the system based on the contact schedules, shown in Equation (\ref{eq:EOM2}). The swing foot and hand forces are fed forward from the Cartesian PD controllers, and the optimal reaction forces and moments are $\Delta \bm u+\bm u$. \section{Proposed Approach} \label{sec:dynamicsModel} In this section, we investigate an optimal approach to represent the humanoid and object dynamics in an SRBD for Multi-contact MPC. We also present the details of the Multi-contact MPC framework. \begin{figure}[h] \vspace{-0.2cm} \centering \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[clip, trim=0cm 1cm 22.5cm 2cm, width=\columnwidth]{Figures/dynamicModels.pdf} \caption{Bipedal SRBD \\ (Reference Baseline)} \label{fig:model1} \end{subfigure} \hfill \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[clip, trim=23.3cm 1cm 0cm 2cm, width=\columnwidth]{Figures/dynamicModels.pdf} \caption{Humanoid SRBD 1 \\ (Combined Rigid Body)} \label{fig:model2} \end{subfigure} \hfill \begin{subfigure}[b]{0.155\textwidth} \centering \includegraphics[clip, trim=11.5cm 1cm 10.5cm 2cm, width=\columnwidth]{Figures/dynamicModels.pdf} \caption{Humanoid SRBD 2 \\ (External Force Model)} \label{fig:model3} \end{subfigure} \caption{{\bfseries SRBDs.} a) Baseline bipedal SRBD in \cite{li2021force}; b) combined rigid body dynamics; c) SRBD treating the object as an external force. Humanoid SRBD 2 is used in our proposed approach.} \label{fig:simplifiedDynamics} \vspace{-0.3cm} \end{figure} \subsection{Dynamics Model} \label{subsec:CSMPC} Modifying the SRBD in force-based control has shown success in our previous work on bipedal robots \cite{li2021force}. In this work, we further investigate the optimal dynamics model for a humanoid robot carrying a heavy object while considering the contact schedule for transitions between different contact modes. We first propose two different SRBD models for incorporating the object dynamics into the humanoid SRBD. The baseline model from which we developed our new models is an SRBD for bipedal robot locomotion in \cite{li2021force}. Shown in Figure~\ref{fig:model1}, this SRBD treats the robot's upper body and hips as a rigid body to estimate the centroidal dynamics of this rigid body \cite{orin2013centroidal}. The ground reaction forces and moments at the foot locations are applied to the floating base, $ \bm u=[\bm F_1,\:\bm F_2,\:\bm M_1,\:\bm M_2]^\intercal$, where $ \bm F_n = [ F_{nx},\: F_{ny},\: F_{nz}]^\intercal, \bm M_n = [ M_{ny},\: M_{nz}]^\intercal, $ leg $n = 1, 2 $. It is assumed that the robot has lightweight and low-inertia legs \cite{di2018dynamic,li2021force}. We form the following models under the assumption that the object's location and physical properties are known. \subsubsection{\textbf{Model 1}} In the newly proposed SRBD for a humanoid robot carrying an object, shown in
Figure~\ref{fig:model2}, we combine the upper body (the rigid body formed by the trunk, shoulders, and hips) with the object and treat them as a single combined rigid body, hence the name Combined Rigid Body model. The combined center of mass (CoM) location and rotational inertia are updated in real time based on the location of the object $\bm p_o$ and of the upper body $\bm p_{c}$. We use the parallel axis theorem \cite{hass} to calculate the rotational inertia $\bm I_b$ of the rigid body. The combined CoM location $\bm p_h \in \mathbb R^{3}$ is calculated as follows, \begin{align} \bm p_h = \frac{\bm p_{c} m_{ub} + \bm p_{o} m_{o}}{ m_{ub} + m_{o}}, \end{align} where $m_{ub}$ and $m_{o}$ are the upper body mass and the object mass. In this configuration, the combined rigid body CoM lies between the trunk CoM and the object CoM. Hence, to vertically align the CoM location with the hip and foot locations for better balancing performance, the upper body pitches backward. This also mimics how a human carries a heavy box and remains balanced. \subsubsection{\textbf{Model 2}} In the second approach, named the External Force Model, we simplify the object to an external force $\bm F_{ext} \in \mathbb R^{3}$ applied to the rigid body, shown in Figure~\ref{fig:model3}. This external force is the only added term in the SRBD formulation, reducing the number of variables that change suddenly during manipulation tasks. The downside of Model 1, the Combined Rigid Body Model, is that when an object is added to the upper body during manipulation, the sudden nonlinear changes of the state variables are explicitly expressed in the formulation of the SRBD, especially the combined CoM location, the pitch angle, and the physical properties of the rigid body. The MPC problem struggles to find optimal solutions under these sudden, significant, and nonlinear changes. Model 2, the External Force Model, tackles this problem by adding only one term (the external force) to the dynamics, preserving simplicity and linearity when the dynamics change between contact modes. Comparisons of the two investigated approaches are shown in Section~\ref{sec:Results}. \begin{remark} \vspace{0.1cm} \textit{The humanoid SRBD we use for the rest of this work is Model 2: the External Force Model.} \vspace{0.1cm} \end{remark} Developed from \cite{nguyen2019optimized} and \cite{li2021force}, the Multi-contact SRBD is expressed as follows, \begin{multline} \label{eq:simplifiedDynamics} \left[\begin{array}{cccc} \sigma _{C1}(t)\mathbf {I}_{3} & \sigma _{C2}(t) \mathbf {I}_{3} & \mathbf {0}_{3\times2} & \mathbf {0}_{3\times2} \\ \sigma _{C1}(t) \bm r_1\times & \sigma _{C2}(t) \bm r_2\times & \sigma _{C1}(t) \mathbf L & \sigma _{C2}(t) \mathbf L \end{array} \right] \bm u \\ + \sigma_{e}(t)\left[\begin{array}{c} \bm F_{ext}(t) \\ \bm r_e\times \bm F_{ext}(t) \end{array} \right]= \left[\begin{array}{c} m (\ddot{\bm{p}}_{c} +\bm{g}) \\ \frac{d}{dt}(\bm I_G {{\bm \omega}}) \end{array} \right], \end{multline} where $\bm r_i\times$ and $\bm r_e\times$ stand for the skew-symmetric matrices of the distance vectors from the upper body CoM $\bm p_c$ to the foot location $\bm p_i$ and to the external object CoM location, respectively, and $\bm I_G$ is the rotational inertia of the upper body expressed in the world frame. $\mathbf L$ is the moment selection matrix, $\mathbf L = [0, 0; 1, 0; 0, 1]$. The time-dependent contact-schedule terms $\sigma(t)$ for each task take values of either 1 or 0 to reflect the task schedule.
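To illustrate how the 0/1 schedule terms gate each contact's contribution to Eq.~(\ref{eq:simplifiedDynamics}), the following is a minimal numpy sketch; the numbers are placeholders, and the foot moments are kept as 3-vectors for brevity (the formulation above selects two components via $\mathbf L$).
\begin{verbatim}
import numpy as np

def skew(v):
    # Skew-symmetric matrix so that skew(v) @ u = v x u.
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0.0]])

def net_wrench(F, M3, F_ext, r_feet, r_e, sigma_C, sigma_e):
    # sigma_C[i] and sigma_e are the 0/1 schedule terms gating foot
    # contact i and the object force, respectively.
    force = sum(s * Fi for s, Fi in zip(sigma_C, F)) + sigma_e * F_ext
    moment = sum(s * (skew(ri) @ Fi + Mi)
                 for s, ri, Fi, Mi in zip(sigma_C, r_feet, F, M3))
    moment = moment + sigma_e * (skew(r_e) @ F_ext)
    return force, moment  # left-hand side of Eq. (simplifiedDynamics)

# Double support while holding the object: sigma_C = (1, 1), sigma_e = 1.
F = [np.array([0, 0, 200.0]), np.array([0, 0, 200.0])]
M3 = [np.zeros(3), np.zeros(3)]
r_feet = [np.array([0.0, 0.1, -0.8]), np.array([0.0, -0.1, -0.8])]
F_ext = np.array([0, 0, 4 * 9.81])  # [0, 0, m_o g], per Eq. (forceCons2)
f, m = net_wrench(F, M3, F_ext, r_feet,
                  np.array([0.3, 0.0, 0.1]), (1, 1), 1)
\end{verbatim}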
Different combinations of these contact schedules reflect different contact modes in the dynamics model. \subsection{Multi-contact MPC} \label{subsec:CSMPC_form} Now we present the CSMPC formulation that employs the SRBD proposed in Section~\ref{subsec:CSMPC}. We choose the state variables as $[{\bm \Theta};{\bm p}_c;{\bm \omega};\dot {{\bm p}}_c]$ and the control inputs as $\bm U= [\bm u; \bm F_{ext}]$; the simplified dynamics equation can then be represented as \begin{align} \label{eq:simpDyn} \frac{d}{dt}\left[\begin{array}{c} {\bm \Theta}\\{\bm p}_c\\{\bm \omega}\\\dot {{\bm p}}_c \end{array} \right] = \bm A \left[\begin{array}{c} {\bm \Theta}\\{\bm p}_c\\{\bm \omega}\\\dot {{\bm p}}_c \end{array} \right] + \bm B \bm U + \left[\begin{array}{c} \mathbf 0_{3\times1}\\\mathbf 0_{3\times1}\\\mathbf 0_{3\times1}\\\bm g \end{array} \right] \end{align} \begin{align} \label{eq:A} \bm A = \left[\begin{array}{cccc} \mathbf 0_3 & \mathbf 0_3 & \mathbf R_b^{-1} & \mathbf 0_3 \\ \mathbf 0_3 & \mathbf 0_3 & \mathbf 0_3 & \mathbf I_3\\ \mathbf 0_3 & \mathbf 0_3 & \mathbf 0_3 & \mathbf 0_3\\ \mathbf 0_3 & \mathbf 0_3 & \mathbf 0_3 & \mathbf 0_3 \end{array} \right], \mathbf R_b = \left[\begin{array}{ccc} {c_\theta}{c_\psi} & -{s_\psi} & 0 \\ {c_\theta}{s_\psi} & c_\psi & 0 \\ -s_\theta & 0 & 1 \end{array} \right] \end{align} \begin{align} \label{eq:B} \bm B = \left[\begin{array}{ccccc} \mathbf 0_3 & \mathbf 0_3 & \mathbf 0_{3\times2} & \mathbf 0_{3\times2} & \mathbf 0_3\\ \mathbf 0_3 & \mathbf 0_3 & \mathbf 0_{3\times2} & \mathbf 0_{3\times2} & \mathbf 0_3 \\ \frac{\sigma _{C1} \bm r_1\times}{\bm I_G} & \frac{\sigma _{C2} \bm r_2\times}{\bm I_G} & \frac{\sigma _{C1}\mathbf L}{\bm I_G} & \frac{\sigma _{C2} \mathbf L}{\bm I_G} & \frac{\sigma _{e}\bm r_e\times}{\bm I_G}\\ \frac{\sigma _{C1}\mathbf {I}_{3}}{m_{ub}} & \frac{\sigma _{C2} \mathbf {I}_{3}}{m_{ub}} & \mathbf {0}_{3\times2} & \mathbf {0}_{3\times2} & \frac{\sigma _{e}\mathbf {I}_{3}}{m_{ub}} \end{array} \right] \end{align} where $s$ and $c$ denote the sine and cosine operators. Note that $\mathbf R_b$ is not invertible at $\theta = \pm90^\circ$; we do not allow the robot to reach a pitch angle of $\pm90^\circ$ in any task. To guarantee a linear relation in the formulation of Equation (\ref{eq:simpDyn}), we include gravity $\bm g \in \mathbb{R}^3$ as a dummy variable in the decision variables $\bm x = [{\bm \Theta};{\bm p}_c;{\bm \omega};\dot {{\bm p}}_c; \bm g]\in \mathbb{R}^{15}$. This ensures the presence of the gravitational force and forms a state-space equation with continuous-time matrices ${\hat{\bm A_c}}$ and ${\hat{\bm B_c}}$: \begin{figure*}[!t] \vspace{0.2cm} \center \includegraphics[width=\textwidth]{Figures/snapshots.png} \caption{{\bfseries Simulation Snapshots} 1) Pickup, walk, and drop off a 4 $\unit{kg}$ box object, 2) Pickup, walk in-place, and throw a 2 $\unit{kg}$ spherical object. The blue region represents the origin of the object. The orange region represents the target location for drop-off. } \label{fig:snapshots} \vspace{-0.7cm} \end{figure*} \begin{align} \label{eq:linearSS} \dot{{\bm { x}}}(t) = {\hat{\bm A_c}} {{\bm {x}}} + {\hat{\bm B_c}} \bm U. \end{align}
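As an illustration of how this gravity-augmented state space can be discretized along the prediction horizon, the Python sketch below (a sketch under assumed inputs, not our implementation) appends the constant gravity state to the 12-state SRBD and applies an exact zero-order-hold discretization, one standard way to obtain discrete-time matrices $\bm{\hat A}[i]$ and $\bm{\hat B}[i]$:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def continuous_matrices(R_b, B_top):
    """Augment the 12-state SRBD with gravity as a constant dummy state
    (x = [Theta, p_c, omega, pdot_c, g], 15 states; B_top is the 12-row B)."""
    A_hat = np.zeros((15, 15))
    A_hat[0:3, 6:9] = np.linalg.inv(R_b)   # dTheta/dt = R_b^{-1} omega
    A_hat[3:6, 9:12] = np.eye(3)           # dp_c/dt   = pdot_c
    A_hat[9:12, 12:15] = np.eye(3)         # dpdot_c/dt includes the gravity state
    B_hat = np.vstack([B_top, np.zeros((3, B_top.shape[1]))])   # dg/dt = 0
    return A_hat, B_hat

def zero_order_hold(A_c, B_c, dt):
    """Exact ZOH discretization via one matrix exponential of a block matrix."""
    n, m = A_c.shape[0], B_c.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n], M[:n, n:] = A_c, B_c
    Phi = expm(M * dt)
    return Phi[:n, :n], Phi[:n, n:]        # (A_d, B_d)
\end{verbatim}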
The formulation of the MPC problem with finite horizon $k$ is written as follows. The objective of the problem is to drive the state $\bm x$ toward its reference while minimizing the control input $\bm U$; these objectives are weighted by matrices $\bm Q_i \in \mathbb R^{15\times 15}$ and $\bm R_i \in \mathbb R^{10\times 10}$. \begin{align} \label{eq:MPCform} \underset{\bm{x,U}}{\operatorname{min}} \:\: & \sum_{i = 0}^{k-1}(\bm x_{i+1}- \bm x_{i+1}^{ref})^T\bm Q_i(\bm x_{i+1}- \bm x_{i+1}^{ref}) + \bm{U}_i^T \bm{R}_i \bm{U}_i \end{align} \begin{subequations} \begin{align} \label{eq:dynamicCons} \:\:\operatorname{s.t.} \quad {\bm {x}}[i+1] = \bm {\hat{A}}[i]\bm x[i] + \bm {\hat{B}}[i]\bm U[i], \\ \label{eq:frictionCons} \nonumber -\mu {F}_{nz} \leq F_{nx} \leq \mu {F}_{nz} \quad \quad\\ -\mu {F}_{nz} \leq F_{ny} \leq \mu {F}_{nz} \quad \quad\\ \label{eq:forceCons} 0< {F}_{min} \leq F_{nz} \leq {F}_{max} \quad \quad\\ \label{eq:forceCons2} \bm {F}_{ext} = \left[\begin{array}{ccc} 0 & 0 & m_og \end{array} \right]^\intercal \quad \quad \end{align} \end{subequations} Equations (\ref{eq:dynamicCons})--(\ref{eq:forceCons2}) are the constraints of the MPC problem. Equation (\ref{eq:dynamicCons}) is an equality constraint of the linearized dynamics equation in discrete time at the $i$th time step, derived from Equation (\ref{eq:linearSS}). Equation (\ref{eq:frictionCons}) describes the inequality constraints of the contact friction pyramid. Equation (\ref{eq:forceCons}) describes the bounds of the reaction forces. Equation (\ref{eq:forceCons2}) is an equality constraint that ensures the external force in the control input equals the gravitational force of the object. The translation of the proposed MPC problem into Quadratic Programming (QP) form for efficient solution can be found in related and previous works (e.g., \cite{di2018dynamic}, \cite{li2021force}). \subsection{Low-level Control} \label{subsec:lowlevel} While the Multi-contact MPC primarily provides optimal ground reaction forces-and-moments to the stance legs during locomotion, we use Cartesian-space PD control to manipulate the arms and swing foot $n$ based on the tasks and contact modes, and then map all controller forces to joint torques through WBC. The desired swing foot location $\bm p_{f,n}^{des} \in \mathbb R^{3}$ follows a heuristic foot placement policy based on the spring-loaded inverted pendulum \cite{raibert1986legged}, augmented with a term tracking the desired CoM linear velocity $\dot{\bm p}_c^{des}$ \cite{kim2019highly}, \begin{align} \label{eq:footPlacement} \bm p_{f,n}^{des} = \bm p_{c} + \dot{\bm p}_c \Delta t/2 + k(\dot{\bm p}_c-\dot{\bm p}_c^{des}), \end{align} where $\Delta t$ is the gait period and $k$ is a scaling factor for tracking the desired linear velocity (e.g., during push recovery). The $n$th swing foot force from the Cartesian PD control is \begin{align} \label{eq:pdlaw} \bm F_{swing,n}=\bm K_P(\bm p_{f,n}^{des}-\bm p_{f,n})+\bm K_D(\dot{\bm p}_{f,n}^{des}-\dot{\bm p}_{f,n}) \end{align} Similarly, to control the $m$th hand of the robot to move to the desired task location $\bm p_{h,m}^{des} \in \mathbb R^{3}$, the hand force is \begin{align} \label{eq:pdlaw2} \bm F_{hand,m}=\bm K_P(\bm p_{h,m}^{des}-\bm p_{h,m})+\bm K_D(\dot{\bm p}_{h,m}^{des}-\dot{\bm p}_{h,m}) \end{align} Using WBC to map optimal ground reaction forces to joint torques on legged robots is an established approach in many related works (e.g., \cite{kim2019highly, chignoli2021humanoid}). We use WBC as a low-level controller to synchronize the hybrid control inputs from the MPC and Cartesian PD controllers; a minimal sketch of the two PD laws above is given below.
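For concreteness, the following Python sketch implements the foot-placement heuristic of Equation (\ref{eq:footPlacement}) and the Cartesian PD law of Equations (\ref{eq:pdlaw}) and (\ref{eq:pdlaw2}); the gains and gait period are placeholder values, not our tuned parameters.
\begin{verbatim}
import numpy as np

# Placeholder gains and gait period, for illustration only.
K_P, K_D = np.diag([600.0]*3), np.diag([20.0]*3)
DT_GAIT, K_VEL = 0.3, 0.03

def desired_foot_placement(p_c, pdot_c, pdot_c_des):
    """Heuristic foot placement: capture-point-like policy with velocity tracking."""
    return p_c + pdot_c * DT_GAIT / 2.0 + K_VEL * (pdot_c - pdot_c_des)

def cartesian_pd_force(p_des, p, pdot_des, pdot):
    """Cartesian PD law, used for both swing feet and hands."""
    return K_P @ (p_des - p) + K_D @ (pdot_des - pdot)
\end{verbatim}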
We extend the second humanoid SRBD (External Force Model) in Section~\ref{subsec:CSMPC} to the whole-body dynamics of the humanoid robot with the object. Similarly, by simplifying the object dynamics as an external force applied to the robot, the full joint-space equation of motion for the humanoid robot has the form, \begin{align} \label{eq:EOM1} \mathbf M \ddot{\mathbf q} + \mathbf C + \mathbf g = \left[\begin{array}{c} \mathbf 0 \\ \bm \tau \end{array} \right] + \bm \tau_f \end{align} \begin{multline} \label{eq:EOM2} \bm \tau_f = \bm J_c^\intercal \left[\begin{array}{c} \sigma _{C1}(t)\bm F_1 \\ \sigma _{C2}(t)\bm F_2 \\ \sigma _{C1}(t)\bm M_1 \\ \sigma _{C2}(t)\bm M_2 \end{array} \right] + \bm J_{PD}^\intercal \left[\begin{array}{c} \sigma _{H1}(t)\bm F_{hand,1} \\ \sigma _{H2}(t)\bm F_{hand,2} \\ \sigma _{C2}(t)\bm F_{swing,1} \\ \sigma _{C1}(t)\bm F_{swing,2} \end{array} \right] \\ + \sigma _{e}(t) \bm J_{e}^\intercal \bm F_{ext}(t) \end{multline} where $\mathbf M \in \mathbb R^{22\times 22}$ is the generalized mass matrix, and $\mathbf C$, $\mathbf g \in \mathbb R^{22}$ are the Coriolis and gravity forces. $\ddot{\mathbf q}$ is a vector containing the accelerations of the floating base trunk $\ddot{\mathbf q}_b\in \mathbb{R}^{6}$ and the joints $\ddot{\mathbf q}_j\in \mathbb{R}^{16}$, as described in \cite{kim2019highly}. $\bm J_c$, $\bm J_{PD}$, and $\bm J_{e}$ are the Jacobians describing the foot contacts, the Cartesian PD control end-effector locations (i.e., hands and feet), and the external force location (i.e., the CoM of the object). The WBC can be simplified and formed into a QP problem for efficient execution at high frequency. The desired command in WBC is based on the desired states of the robot $\bm x_c^{des} = [\bm p_c^{des}; \bm \Theta^{des}]$, via a simple PD control law: \begin{align} \label{eq:desAcc} \ddot{\bm x}_c^{des} = \bm K_p^{WBC}(\bm x_c^{des} - \bm x_c) + \bm K_d^{WBC}(\dot{\bm x}_c^{des} - \dot{\bm x}_c) \end{align} The desired acceleration command $\ddot{\bm x}_c^{des}$ is translated into the acceleration command $\ddot {\mathbf {q}}_{cmd}$ by an inverse-kinematics-based null-space projection technique described in \cite{kim2019highly}. The WBC-QP problem, which computes the minimized relaxation components $\Delta \bm u$ of the MPC ground reaction forces and $\Delta \ddot{\mathbf q}$ of the joint acceleration command, is as follows, \begin{align} \label{eq:WBC-QP} \underset{{\Delta \ddot{\mathbf q},\Delta \bm u}}{\operatorname{min}} \:\: & \Delta \ddot{\mathbf q}^\intercal {\mathbf H} \Delta \ddot{\mathbf q} + \Delta \bm u^\intercal {\mathbf K} \Delta \bm u \vspace{0.5cm} \end{align} \begin{subequations} \begin{align} \label{eq:WBC_cons1} \nonumber \operatorname{s.t.} \quad \mathbf S_{b}\{\mathbf M (\Delta \ddot{\mathbf q} + \ddot{\mathbf q}_{cmd}) + \mathbf C + \mathbf g \\ - \bm \tau_f(t, \Delta \bm u , \bm u)\} = \mathbf 0 \\ \label{eq:WBC_cons3} \quad \quad \quad \bm u_{min} \leq \Delta \bm u + \bm u \leq \bm u_{max} \quad\\ \label{eq:WBC_cons4} \quad \quad \quad \bm \tau_{min} \leq \bm \tau \leq \bm \tau_{max} \quad \end{align} \end{subequations} In Equation (\ref{eq:WBC-QP}), $\mathbf H \in \mathbb{R}^{16\times16}$ and $\mathbf K \in \mathbb{R}^{10\times10}$ are diagonal weighting matrices for each objective. Equation (\ref{eq:WBC_cons1}) is a dynamics constraint formed from Equation (\ref{eq:EOM1}) in order to control the floating base dynamics. The selection matrix $\mathbf S_{b}$ consists of 1s and 0s to identify the floating base coordinates; a toy numerical sketch of this QP is given below.
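To illustrate the structure of the WBC-QP, the sketch below solves a simplified version that keeps only the floating-base equality constraint (\ref{eq:WBC_cons1}) and drops the bounds (\ref{eq:WBC_cons3})--(\ref{eq:WBC_cons4}); under this simplification the problem has a closed-form KKT solution. Since the constraint is linear in $(\Delta\ddot{\mathbf q}, \Delta\bm u)$, one can write it as $\mathbf A_{eq}\,[\Delta\ddot{\mathbf q}; \Delta\bm u] = \bm b_{eq}$ with $\mathbf A_{eq} = \mathbf S_b[\mathbf M,\; -\partial\bm\tau_f/\partial\Delta\bm u]$ and $\bm b_{eq} = -\mathbf S_b\left(\mathbf M\ddot{\mathbf q}_{cmd} + \mathbf C + \mathbf g - \bm\tau_f(\bm u)\right)$.
\begin{verbatim}
import numpy as np

def solve_wbc_qp(H, K, A_eq, b_eq):
    """Equality-constrained sketch of the WBC-QP:
    min  dq' H dq + du' K du   s.t.  A_eq [dq; du] = b_eq,
    solved via its KKT system (no inequality bounds)."""
    n_q, n_u = H.shape[0], K.shape[0]
    W = np.block([[H, np.zeros((n_q, n_u))],
                  [np.zeros((n_u, n_q)), K]])
    n, m = n_q + n_u, A_eq.shape[0]
    KKT = np.block([[2.0 * W, A_eq.T],
                    [A_eq, np.zeros((m, m))]])
    rhs = np.concatenate([np.zeros(n), b_eq])
    sol = np.linalg.solve(KKT, rhs)
    return sol[:n_q], sol[n_q:n]   # (delta qddot, delta u)
\end{verbatim}
The full controller additionally enforces the bounds (\ref{eq:WBC_cons3})--(\ref{eq:WBC_cons4}), which requires a QP solver rather than a single linear solve.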
Finally, the optimal joint torque $\bm \tau$ can be calculated as \begin{align} \label{eq:torque} \left[\begin{array}{c} \bm 0 \\ \bm \tau \end{array} \right] = \mathbf M (\Delta \ddot{\mathbf q} + \ddot{\mathbf q}_{cmd}) + \mathbf C + \mathbf g - \bm \tau_f(t, \Delta \bm u , \bm u) \end{align} where $\bm \tau_f$ is a time-dependent term that summarizes all external forces applied to the system based on the contact schedules, as shown in Equation (\ref{eq:EOM2}). The swing foot and hand forces are fed forward from the Cartesian PD controller, and the optimal reaction forces-and-moments are $\Delta \bm u+\bm u$. \section{Introduction} \label{sec:Introduction} Object manipulation control on intelligent robotic systems can benefit many industries pertaining to logistics, sorting, and warehousing. There have been many successful manipulation-oriented legged robotic platforms in recent years (e.g., the Digit humanoid robot \cite{digityoutube}, ANYmal quadruped with arm \cite{ferrolho2022roloma, chiu2022collision}, Spot mini with arm \cite{zimmermann2021go}, and Handle \cite{handleyoutube}). Loco-manipulation on legged robots is particularly interesting to study because of their maneuverability and high number of degrees of freedom, which allow various control schemes and robot configurations for manipulating an object. For example, a humanoid robot can push an object with its hand \cite{murooka2021humanoid}, and a quadruped robot can manipulate objects with arms and legs \cite{wolfslag2020optimisation}. We are particularly interested in the problem of humanoid robots carrying and manipulating an object during locomotion (e.g., applications in logistics). Agility Robotics \cite{digityoutube} and Boston Dynamics \cite{handleyoutube} have shown their humanoid robots loading and unloading objects while standing still. With knowledge of the contact schedules and smooth transitions between contact modes in the dynamics models, we can further push the limits of humanoid loco-manipulation with MPC, such as carrying heavy loads and aggressively throwing a box while maintaining stable locomotion via a unified Multi-contact MPC framework. \begin{figure}[t] \center \includegraphics[width=0.85 \columnwidth]{Figures/title2.png} \caption{{\bfseries Humanoid Robot Throwing a 2 $\unit{kg}$ Ball while Walking in Place } Simulation video: \protect\url{https://youtu.be/V8PIpE2YGhw}. } \label{fig:title} \vspace{-1.5em} \end{figure} Our recent work \cite{li2021force} on force-and-moment-based MPC schemes employs a simplified rigid body dynamics model and has allowed a 16-Degree-of-Freedom (DoF) bipedal robot to perform stable 3-D locomotion. However, simply extending the control scheme from the previous work does not work well on our 22-DoF humanoid robot in loco-manipulation tasks. In this paper, we develop a new humanoid robot dynamics model that considers different contact modes, and we use MPC to bridge the transitions between them. To do so, we use contact schedules to represent contact modes in the robot dynamics and include mode transitions in the MPC prediction horizons. The contact modes in our humanoid loco-manipulation include hand contact with the object, foot contact with the ground, object contact with the ground, and any combination of the above. In the simplified rigid body dynamics (SRBD) model, we include and simplify the object dynamics to minimize MPC formulation changes during transitions between contact modes while maintaining good performance in loco-manipulation.
The MPC framework has been successfully implemented on many modern legged robots for dynamic locomotion. For instance, the MIT Cheetah Mini \cite{katz2019mini,kim2019highly} quadruped robot and the MIT Humanoid robot \cite{chignoli2021humanoid} have both demonstrated outstanding dynamic motion with the force-based MPC framework. In \cite{chignoli2021humanoid}, the humanoid robot is able to perform aerial motions such as 3-D jumps and flips with offline kino-dynamics-based optimization, using MPC and Whole-body Control (WBC) for landing control. In our previous work, we also demonstrated dynamic locomotion on bipedal robots with a force-and-moment-based MPC control scheme \cite{li2021force}. In this work, we use MPC as the main control scheme and develop it further to consider and bridge contact mode transitions and changes in humanoid loco-manipulation, specifically to enhance the efficiency and mobility of these tasks. Humanoid loco-manipulation control in \cite{murooka2021humanoid} considers the interaction forces with both the object and the ground (i.e., the pushing force and the ground reaction force), and assumes the contact is consistent and unchanged. In our problem, we focus specifically on carrying and manipulating objects that may have multiple contact modes with the ground and with the robot. We ignore the interaction force in the dynamics model; instead, we consider both the object and the upper body in the SRBD. This approach minimizes the number of control inputs in MPC to ensure problem simplicity, linearity, and feasibility. The main contributions of the paper are as follows: \begin{itemize} \item We investigate and compare humanoid SRBD models that include object dynamics and time-varying contact modes in Multi-contact MPC. \item We propose a Multi-contact MPC framework for humanoid robots to perform loco-manipulation tasks, such as transporting and throwing weighted objects, using contact schedule information. \item Our proposed framework is validated in numerical simulations and demonstrates the capability of humanoid multi-task loco-manipulation, such as throwing weighted objects while walking and remaining balanced, transporting packages more efficiently with less body yaw, and carrying a heavy load. \end{itemize} The rest of the paper is organized as follows. Section~\ref{sec:robotModel} introduces the physical design and parameters of the humanoid robot and an overview of the system architecture. Section~\ref{sec:dynamicsModel} presents the dynamics models we investigated for our control framework and the Multi-contact MPC in detail. Simulation result highlights and comparisons are presented in Section~\ref{sec:Results}. \section{Results} \label{sec:Results} In this section, we present highlighted results validating our proposed humanoid dynamics model with object dynamics and the Multi-contact MPC framework in humanoid loco-manipulation simulations. The readers are encouraged to watch the supplementary simulation videos\footnote{\url{https://youtu.be/V8PIpE2YGhw}}. We validate our proposed approach in a high-fidelity, physically realistic simulation framework in MATLAB Simulink with the Simscape Multibody library. We also use the Spatial v2 software package \cite{featherstone2014rigid} to acquire the coefficient matrices of the dynamics equations for WBC. In this simulation, we assume the state information and physical properties of the object are known.
The weighting matrices in MPC are \begin{flalign*} \nonumber \bm Q_i = \text{diag}[1500\:2000\:1000\:1000\:1000\:1000\:1\:3\:10\:1\:1\:1\:1\:1\:1], && \\ \nonumber \bm R_i = \text{diag}[1\:1\:1\:1\:1\:1\:5\:5\:5\:5\:5\:5]\times10^{-4} && \end{flalign*} The PD gains in WBC are \begin{flalign*} \nonumber \bm K_p^{WBC} = \text{diag}[200\:200\:500\:1000\:1500\:1000], && \\ \nonumber \bm K_d^{WBC} = \text{diag}[20\:20\:30\:30\:30\:30] && \end{flalign*} Firstly, we present the comparison between the dynamics models investigated in Section~\ref{sec:dynamicsModel} in a simple simulation of a humanoid robot balancing while holding a 5 $\unit{kg}$ box. We compare the approaches using the combined rigid body model in Figure~\ref{fig:model2} and the external force model in Figure~\ref{fig:model3}. Figure~\ref{fig:comparison} shows the comparison simulation snapshots. With the combined rigid body model in the controllers, the robot is not able to recover from leaning forward when the external weight is applied. However, controllers using the external force model handle the applied external weight well and are able to adapt the robot to a favorable pose (i.e., leaning back) to carry the weight. With this framework, our humanoid robot can carry up to 14 $\unit{kg}$ ($82\%$ of the robot's mass) while standing still. Joint torque plots of this simulation are provided in Figure~\ref{fig:torque}. \begin{figure}[!t] \vspace{0.2cm} \center \includegraphics[clip, trim=1.8cm 4.7cm 8.4cm 2cm, width=\columnwidth]{Figures/modelComparison.pdf} \caption{{\bfseries Comparison of Dynamics Models} Simulation snapshots of 1) Combined Rigid Body Model vs. 2) External Force Model in humanoid balancing with a 5 $\unit{kg}$ weight block. } \label{fig:comparison} \vspace{-.4cm} \end{figure} \begin{figure}[!t] \vspace{0.2cm} \center \includegraphics[clip, trim=0.2cm 4cm 6.2cm 4cm, width=\columnwidth]{Figures/90turnComparison.pdf} \caption{{\bfseries Package Transfer Simulation Snapshots} Comparison of the quasi-static approach that separates locomotion and manipulation, and the proposed Multi-contact MPC loco-manipulation approach} \label{fig:90turn} \vspace{-.4cm} \end{figure} Next, we present 2-D loco-manipulation examples in simulation following our proposed Multi-contact MPC framework. Shown in Figure~\ref{fig:snapshots}, the top snapshots represent the simulation given contact-schedule information for tasks following the sequence of picking up, walking with, and dropping off a 4 $\unit{kg}$ box-shaped object. The contact schedule information is given to the controllers offline based on the target location and the tasks commanded by the user. The bottom snapshots in Figure~\ref{fig:snapshots} represent a simulation of picking up, walking in place with, and throwing a 2 $\unit{kg}$ spherical object. It can be observed that our proposed framework provides good performance when multiple tasks take place concurrently. The framework synchronizes and bridges contact modes through long prediction horizons and high-frequency desired-state tracking. In the above simulations, the MPC samples at 0.03 $\unit{s}$ and has a prediction horizon $k = 20$. We would also like to demonstrate the advantages of our Multi-contact MPC framework for humanoid loco-manipulation in logistics applications.
The example task is to transfer a package from a table to a conveyor belt, following: \begin{enumerate} \item Turn $90^\circ$ counterclockwise \item Pick up a package on the table \item Turn $90^\circ$ clockwise \item Drop off the package on the conveyor belt \end{enumerate} The traditional approach on many existing humanoids is to follow the task sequence and stand still while manipulating the package quasi-statically (e.g., \cite{digityoutube} \cite{atlasyoutube}). With the proposed approach, the robot can combine 3-D locomotion (for turning) with dynamic manipulation to complete such a task considerably faster. Snapshots of the simulation top view are presented in Figure~\ref{fig:90turn}. The dotted lines in this figure represent the direction the robot is facing at the beginning of each task, and the shaded sectors represent the yaw angle differences between the picking-up and dropping-off start timings. In the quasi-static approach, the robot is commanded to turn in place with locomotion and to stand still while picking up or dropping off packages, due to the lack of consideration of transitions between contact modes and synchronized contact schedules. Thus, the manipulations are done in 2-D while facing the package. On the other hand, with a Multi-contact MPC in which all contact modes and transitions are known within the prediction horizons, the robot can perform 3-D loco-manipulation with the object more dynamically. This allows earlier manipulation task timings because the robot does not need to stop while turning, and it thus performs more efficiently than the quasi-static approach. It is observed that the yaw angle difference between the picking-up and dropping-off task timings can be up to 30$^\circ$ less than that of the quasi-static approach. The proposed approach is also 1.7 $\unit{s}$ faster in this example. Figure~\ref{fig:torque2} shows the left leg joint torques in this example with the proposed approach. All torque values remain within the joint torque limits during such dynamic motion. \begin{figure}[!t] \vspace{0.2cm} \center \begin{subfigure}[b]{0.45\textwidth} \center \includegraphics[clip, trim=3cm 10.6cm 3cm 10.6cm, width=\columnwidth]{Figures/Torque_14kg_load.pdf} \vspace{-0.7cm} \caption{ Simulation of load-carrying up to 14 $\unit{kg}$} \label{fig:torque} \vspace{0.2cm} \end{subfigure} \vspace{-0.2cm} \\ \begin{subfigure}[b]{0.45\textwidth} \center \includegraphics[clip, trim=3cm 9cm 3cm 10cm, width=\columnwidth]{Figures/Torque_90_fast.pdf} \vspace{-0.7cm} \caption{Simulation of the proposed approach in Figure~\ref{fig:90turn}} \label{fig:torque2} \end{subfigure} \caption{{\bfseries Torque Plots of Left Leg Joints}} \label{fig:torques} \vspace{-0.5cm} \end{figure} \section{Robot Model and System Overview} \subsection{Robot Model} \label{sec:robotModel} In this section, we present the humanoid robot model used in this work. Our humanoid robot model, shown in Figure~\ref{fig:title}, is modified from the design in our previous work \cite{li2021force}: a small-scale humanoid robot with 5-DoF legs, 3-DoF arms, and a total of 22 DoF. Each joint is actuated by a Unitree A1 torque-controlled motor. The ab (abduction), hip, thigh, and ankle joints all have a 33.5 $\unit{Nm}$ maximum torque output and a 21.0 $\unit{rad/s}$ maximum joint speed output. Due to the intended application, we halved the gear ratio of the knee motors to obtain a 67 $\unit{Nm}$ maximum torque.
In the humanoid leg design, we strategically placed all joint actuators on the upper part of the thigh links, close to the hips, for mass concentration, in order to minimize the leg dynamics during locomotion. Negligible leg mass is an important assumption in our force-and-moment-based simplified dynamics model in MPC \cite{li2021force}. The overall mass of our humanoid robot is around 17 $\unit{kg}$. \subsection{System Overview} \label{sec:sysoverview} This section presents an overview of the control architecture of our proposed work, shown in Figure~\ref{fig:controlArchi}. In our approach, we leverage the contact schedules and MPC prediction horizons to include smooth transitions between contact modes in loco-manipulation tasks. \begin{figure}[!t] \vspace{0.2cm} \center \includegraphics[clip, trim=0cm 0cm 1cm 2cm, width=\columnwidth]{Figures/SytemOverview_with_ContactSchedule.pdf} \caption{{\bfseries System Block Diagram} Control system architecture.} \label{fig:controlArchi} \vspace{-0.2cm} \end{figure} The user defines the contact timings of the loco-manipulation tasks along with the desired states. The contact modes are represented by contact schedules. The contact mode/schedule information is very powerful due to the predictive nature of MPC, which optimizes the current control input using knowledge of future dynamics changes. A visualization of the contact schedule based on the contact timing inputs in MPC is given in Figure~\ref{fig:controlArchi}. This contact schedule describes a task commanding the robot to pick up and walk with an object. The time-varying Multi-contact SRBD adapts based on the contact mode. The stance legs are controlled by MPC, while the swing legs and arms are controlled by the Cartesian PD controller. The optimal control inputs $\bm u \in \mathbb R^{10}$ are in terms of the contact ground reaction forces $\bm F_n$ and moments $\bm M_n$ from the Multi-contact MPC. Both $\bm F_{PD}\in \mathbb R^{12}$ and $\bm u$ are input to a Whole-body Control (WBC) to calculate the optimal joint torques $\bm \tau \in \mathbb R^{16}$ of the humanoid robot. The robot state feedback $\bm x$ includes the body Euler angles (roll, pitch, and yaw) ${\bm \Theta = [\phi,\:\theta,\:\psi]}^\intercal$, the position $\bm p_c \in \mathbb R^{3}$ and velocity $\dot{\bm p}_c \in \mathbb R^{3}$ of the body CoM, and the angular velocity $\bm \omega \in \mathbb R^{3}$. Joint feedback includes the joint positions $\bm q \in \mathbb R^{16}$ and velocities $\dot{\bm q} \in \mathbb R^{16}$ of the humanoid robot. A minimal sketch of one tick of this pipeline is given below.
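As a structural illustration of this architecture, the following Python sketch shows one control tick; the \texttt{mpc}, \texttt{pd}, and \texttt{wbc} objects and their methods are hypothetical stand-ins for the blocks in Figure~\ref{fig:controlArchi}, not an actual API.
\begin{verbatim}
def control_tick(t, x, q, qdot, schedule, mpc, pd, wbc):
    """One tick of the control architecture (hypothetical interfaces).

    x        -- robot state feedback [Theta, p_c, omega, pdot_c] (12 values)
    q, qdot  -- joint positions and velocities (16 values each)
    schedule -- user-defined contact-schedule signals sigma(t)
    """
    u = mpc.solve(t, x, schedule)                      # u in R^10: stance forces/moments
    F_pd = pd.forces(t, x, q, qdot, schedule)          # F_PD in R^12: swing feet + hands
    tau = wbc.torques(x, q, qdot, u, F_pd, schedule)   # tau in R^16: joint torques
    return tau
\end{verbatim}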
{ "timestamp": "2022-09-20T02:21:48", "yymm": "2209", "arxiv_id": "2209.08662", "language": "en", "url": "https://arxiv.org/abs/2209.08662" }
\section{Introduction} \label{sec:intro} \noindent Coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus~2 (SARS-CoV-2), has continued to spread worldwide since 2019~\cite{SA2022}---in spite of the implementation of different control measures such as social distancing, wearing of face masks, sanitation, lock-downs, vaccinations, etc. In earlier work, we studied non-pharmaceutical interventions in Africa and linked them to early-stage outbreak dynamics~\cite{AMOUZOUVI2021e00987}. In Ref.~\cite{ribeiro2022characterisation}, the characterisation of the Omicron variant and the impact of vaccination, transmission rate, mortality, and reinfection in South Africa, Germany, and Brazil were studied. It was observed that the reinfection rate was as high as 40\% in South Africa, with only 29\% of its population fully vaccinated, and as low as 13\% in Brazil, with over 70\% and 80\% of its population fully vaccinated and with at least one dose, respectively. In Ref.~\cite{mukandavire2020quantifying}, a model was developed and analysed to quantify early COVID-19 outbreak transmission in South Africa and to explore vaccine efficacy scenarios. It was observed that a vaccine with 70\% efficacy had the capacity to contain the COVID-19 outbreak at a vaccination coverage of 94.44\%, while a vaccine with 100\% efficacy required a coverage of 66.10\%. Social distancing measures put in place had by then reduced the number of social contacts by 80.31\%. Their results suggested that a highly efficacious vaccine would have been required to contain COVID-19 in South Africa. Therefore, social distancing measures to reduce contact remained key in controlling infections in the absence of vaccines and other therapeutics. The reduction in the number of contacts and the transmission probability, together with quarantining of infectious individuals, were found to influence the basic reproduction number $R_0$. In addition, vaccination contributed to the reduction of $R_0$ in South Africa~\cite{kassa2021modelling}. In Ref.~\cite{diagne2021mathematical}, a mathematical model of COVID-19 with vaccination and treatment was developed. The simulation results suggested that, despite the effectiveness of COVID-19 vaccination and treatment in mitigating the spread of COVID-19, when $R_0>1$ additional efforts such as non-pharmaceutical public health interventions should continue to be implemented. The rate at which the disease spread across Africa varied over time due to individuals changing their behavior as the pandemic evolved, and due to changing government policies and vaccination programs. In this study, we investigated the impact of vaccination in the second year of the COVID-19 pandemic in seven African countries (Ghana, Kenya, Mozambique, Nigeria, South Africa, Togo and Zambia). This is a continuation of our work reported in Ref.~\cite{AMOUZOUVI2021e00987}. We modelled the outbreak in the seven African countries, noting that several model parameters varied over time. We analyzed data taken over a two-year period: the first (without vaccination) and second (with vaccination) years of the pandemic. We extracted and compared parameters to gauge the impact of vaccination as a pharmaceutical intervention. A number of vaccines for COVID-19 have been developed by pharmaceutical and biotech companies. Each vaccine differs in the biotechnology used, efficacy, and geographic availability. In different African countries, different vaccines were used, sometimes in combination.
For example, in South Africa, the commonly used vaccines are those developed by Pfizer and Johnson \& Johnson. Our study of vaccination impact was informed by the vaccination programs in the African countries considered. In the UK, the USA, and the European Union, general vaccination roll outs started in early 2021, while African countries started their vaccination campaigns later. Because of the unequal availability of vaccines around the world, the starting dates of vaccination might have an impact on this study. However, given that we analyzed vaccination data for one year in all countries considered, the starting dates of vaccination do not affect our conclusions. The paper is organised as follows. In Section~\ref{sec:sidarthev}, we present the formulation of the SIDARTHE-V model~\cite{Giordano2021} considering the impact of vaccination campaigns. In Section~\ref{sec:analysis}, we present the analysis of COVID-19 data with vaccination campaigns in the seven African countries considered. We discuss the impact of vaccination in Section~\ref{sec:disc} and offer concluding remarks in Section~\ref{sec:conc}. \section{SIDARTHE-V model with vaccination roll outs} \label{sec:sidarthev} \noindent In this study, we applied the SIDARTHE-V model~\cite{Giordano2021} with vaccination campaigns in the second year of the pandemic. The original SIDARTHE-V of~\cite{Giordano2021} assumes that all vaccinated individuals are immunized. In this study, we considered the possibility that vaccinated individuals can still get infected, become infectious and threatened; these dynamics are captured by connecting the $V$ and $I$ compartments, as shown in Figure~\ref{fig:SIDARTHEV}, where the parameters and variables of the model are presented. Equations~\eqref{EQ01} describe the pandemic evolution, with vaccination roll outs: \begin{eqnarray} \left\{\begin{array}{lcl} \dot{S} &=& - \left(\alpha I + \beta D + \gamma A + \delta R\right)S - \phi S\\ \dot{V}&=& - \alpha' IV + \phi S \\ \dot{I} &=& \left(\alpha I + \beta D + \gamma A + \delta R\right)S + \alpha'IV - \left(\epsilon + \lambda + \zeta\right)I\\ \dot{D} &=& \epsilon I - \left(\eta + \rho\right) D\\ \dot{A} &=& \zeta I - \left(\theta + \mu + \kappa\right)A\\ \dot{R} &=& \eta D + \theta A - \left(\nu + \xi + \tau_1 \right)R\\ \dot{T} &=& \mu A + \nu R - \left(\tau_2 + \sigma\right) T \\ \dot{H} &=& \lambda I + \kappa A + \sigma T + \xi R + \rho D\\ \dot{E} &=& \tau_1 R + \tau_2 T \label{EQ01} \end{array}\right. \end{eqnarray} The basic reproduction number, $R_0$, is the average number of secondary cases produced by an infected individual in a population where everyone is susceptible~\cite{van2002reproduction}. Estimating $R_0$ helps in the implementation of appropriate responses to the pandemic evolution, in particular the number of people to vaccinate for herd immunity. In the SIDARTHE-V model, Equations~\eqref{EQ01}, $R_0$ is given by: \begin{eqnarray} R_0 &=& \displaystyle{\frac{\alpha r_{2} r_{3} r_{4} + \beta \epsilon r_{3} r_{4} + \delta \epsilon \eta r_{3} + \delta r_{2} \theta \zeta + \gamma r_{2} r_{4} \zeta}{r_{1} r_{2} r_{3} r_{4}}}, \label{EQ02} \end{eqnarray} where $r_1 = \epsilon + \zeta + \lambda, ~r_2 = \eta + \rho, ~ r_3 = \theta + \mu + \kappa, ~r_4 = \nu + \xi$. For a better understanding of the $R_0$ derivation, see~\cite{Giordano2020}. From Equation~\eqref{EQ02}, it can be seen that $R_0$ depends on the model parameters that affect the pandemic evolution. Thus, it is very important to understand the model parameters and to make sure they are extracted correctly. A small numerical sketch of Equations~\eqref{EQ01} and \eqref{EQ02} is given below.
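To make Equations~\eqref{EQ01} and \eqref{EQ02} concrete, the following Python sketch integrates the SIDARTHE-V system and evaluates $R_0$; the parameter values are illustrative placeholders, not the fitted values reported in this study.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values only (alpha_p stands for alpha').
p = dict(alpha=0.25, beta=0.01, gamma=0.10, delta=0.01, alpha_p=0.05,
         epsilon=0.15, zeta=0.12, lam=0.08, eta=0.12, rho=0.02,
         theta=0.20, mu=0.01, kappa=0.02, nu=0.03, xi=0.02,
         sigma=0.01, tau1=0.01, tau2=0.02, phi=0.002)

def sidarthe_v(t, y, p):
    S, V, I, D, A, R, T, H, E = y
    force = (p["alpha"]*I + p["beta"]*D + p["gamma"]*A + p["delta"]*R) * S
    return [
        -force - p["phi"]*S,
        -p["alpha_p"]*I*V + p["phi"]*S,
        force + p["alpha_p"]*I*V - (p["epsilon"] + p["lam"] + p["zeta"])*I,
        p["epsilon"]*I - (p["eta"] + p["rho"])*D,
        p["zeta"]*I - (p["theta"] + p["mu"] + p["kappa"])*A,
        p["eta"]*D + p["theta"]*A - (p["nu"] + p["xi"] + p["tau1"])*R,
        p["mu"]*A + p["nu"]*R - (p["tau2"] + p["sigma"])*T,
        p["lam"]*I + p["kappa"]*A + p["sigma"]*T + p["xi"]*R + p["rho"]*D,
        p["tau1"]*R + p["tau2"]*T,
    ]

def r0(p):
    r1 = p["epsilon"] + p["zeta"] + p["lam"]
    r2 = p["eta"] + p["rho"]
    r3 = p["theta"] + p["mu"] + p["kappa"]
    r4 = p["nu"] + p["xi"]
    return (p["alpha"]*r2*r3*r4 + p["beta"]*p["epsilon"]*r3*r4
            + p["delta"]*p["epsilon"]*p["eta"]*r3
            + p["delta"]*r2*p["theta"]*p["zeta"]
            + p["gamma"]*r2*r4*p["zeta"]) / (r1*r2*r3*r4)

y0 = [0.999, 0.0, 1e-3, 0, 0, 0, 0, 0, 0]      # fractions of the population
sol = solve_ivp(sidarthe_v, (0, 365), y0, args=(p,))
print(f"R0 = {r0(p):.2f}, susceptible fraction after one year = {sol.y[0, -1]:.3f}")
\end{verbatim}
Fitting the model to reported data then amounts to adjusting these parameters, piecewise in time, until the model tracks the active, recovered and extinct curves, which yields the time-dependent $R_0$ shown for each country below.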
\section{Analysis of COVID-19 data with vaccination} \label{sec:analysis} \noindent In our previous work~\cite{AMOUZOUVI2021e00987}, we studied the evolution of COVID-19 in African countries; however, vaccination was not considered and the Nigerian COVID-19 data were not included. For this reason, we start this section with the analysis of the data of Nigeria from the time when the first COVID-19 case was identified in that country---this includes the first year with no vaccination, followed by another year with vaccination roll outs. For the other countries in this study, namely Ghana, Kenya, Mozambique, South Africa, Togo and Zambia, the COVID-19 data of the first year, without vaccination, were studied in Ref.~\cite{AMOUZOUVI2021e00987}; in this section, we continue the analysis of the COVID-19 data of these countries from the onsets of their vaccination campaigns. \subsection{Analysis of COVID-19 data of Nigeria} \noindent In Nigeria, the first case was confirmed at the Infectious Disease Centre, Yaba, Lagos State, on February 27, 2020. An airplane from Milan, Italy, arrived at the International Airport, Lagos, on February 14, 2020, with an infected Italian citizen who went to his company's site in Ogun State the following day. The health authorities (Nigeria Centre for Disease Control) implemented containment measures by contact tracing of ‘Persons of Interest’, which included all persons on the flight and those he had close contact with while in Lagos and Ogun States~\cite{NCBI2020}. After a period of two weeks, cases were detected in Lagos and Abuja, and this marked the emergence of the spread in the country. The Federal Government restricted international commercial flights into the country, effective from March 23, 2020~\cite{NCAA2020}. The Federal Government ordered the closure of schools and all non-essential services (businesses and industries) and ordered the cessation of all movements in Lagos State, Ogun State and the Federal Capital Territory, Abuja, on March 29, 2020, for an initial period of 14 days. Later, the restriction on movements was extended for another 14 days from April 12, 2020~\cite{NCDC2020}. Most State Governments restricted public gatherings and religious activities to fewer than fifty persons. The Federal Government lifted the travel ban on domestic flights on April 20, 2020, ordered a nationwide overnight curfew on movements from 8:00~pm to 6:00~am on May 2, 2020, and later, on September 3, 2020, eased the overnight curfew to run from 12:00~am to 4:00~am. On May 4, 2020, the Federal Government authorized the gradual easing of the lockdown in the previously restricted states, and mandated the use of face masks in public. On May 6, 2020, the Federal Government announced an extension of the travel ban on both international and local flights to June 7, 2020, to curb the spread of coronavirus in the country. The Federal Government reopened international flights for operations on August 29, 2020~\cite{NCDC2022}. On January 27, 2021, the President signed six COVID-19 Health Protection Regulations 2021, with restrictions on gatherings and the operation of public places, mandatory compliance with treatment protocols, offences and penalties, enforcement and application, and lastly the interpretation and citation of the regulations~\cite{NCDC2021}.
After the first confirmed case on February 27, 2020, the number of confirmed cases increased drastically; the total number of confirmed cases as of March 27, 2022, was 255,341, with a total of 249,566 discharged cases and 2,633 active cases. The first death occurred on March 23, 2020; deaths had increased to a total of 3,142 as of March 27, 2022. The health sector started COVID-19 sample testing on April 8, 2020, and by March 27, 2022, a total of 4,589,725 tests had been recorded~\cite{NCAA2020,NCDC2020}. The first shipment of four million Oxford-AstraZeneca COVID-19 vaccine doses arrived in the country on March 2, 2021, and vaccination began on March 5, 2021. The country received subsequent shipments of the Moderna, Johnson \& Johnson and Pfizer COVID-19 vaccines on August 1, August 12 and October 14, 2021, respectively. Due to the single-dose requirement of the Johnson \& Johnson COVID-19 vaccine, Nigeria's National Primary Health Care Development Agency (NPHCDA) prioritised hard-to-reach and vulnerable areas for vaccination~\cite{VON2021}. As of March 27, 2022, 21,049,754 persons had received their first dose and 9,565,143 had received their second dose~\cite{VON2021}. \begin{figure}[!htbp] \begin{center} \includegraphics[width=\textwidth]{ModelingNig.png} \caption{The modelling of 2 years of COVID-19 data of Nigeria. Day~0 corresponds to the onset of the pandemic, i.e. February 27, 2020. The top plot shows the data and model for active, recovered, death and total cases, and fully-vaccinated individuals. The vaccination drive started on March 5, 2021. The bottom plot shows the time-dependent basic reproduction number.} \label{fig:Nigeria1} \end{center} \end{figure} Figure~\ref{fig:Nigeria1} (top plot) shows the SIDARTHE-V modelling of the Nigerian COVID-19 data of active, recovered, extinct and fully-vaccinated cases. The time-dependent basic reproduction number $R_0$, obtained by fitting the model to the data, is shown in the bottom plot. The $R_0$ increased significantly to eight after a week. This was largely due to the learning period about the pandemic and the lack of public control measures. Around day 35, the $R_0$ dropped below one, mainly because of the introduction of public control measures by the government and growing awareness among the public. Another increase in $R_0$ to a point above two was observed around day 40, most likely because of difficulties in complying with the control measures. Around day 65, it again dropped below one. The $R_0$ then rose above one around day 75 and later reached a point above three around day 150, due to the ineffectiveness of the measures in some parts of the country and the lack of enforcement strategies from the government. Around day 165, the $R_0$ dropped well below one, before increasing above two around day 205. Another drop, to nearly zero, occurred around day 230 after further restrictions from the government. Around day 250, there was an increase in $R_0$ above one, and it was within the range of two around day 280. Even after day 700, $R_0$ remained below two. These fluctuations were due to people's negligence in observing the control measures. Figure~\ref{fig:Nigeria2} shows the quality of the modelling as ratios of data over model predictions; the figure also shows the model prediction of the infected but unaffected population. The vaccination has eased the anxiety caused by the pandemic and also enabled the government to relax lockdown protocols.
Businesses and institutions, such as the education sector, have resumed their services. \subsection{COVID-19 vaccination analysis for South Africa} \label{sec:sa} \noindent In South Africa, COVID-19 vaccination has been an ongoing immunisation campaign aiming to vaccinate 40 million South Africans~\cite{Vaccine2022}. Four types of COVID-19 vaccines were approved by the South African Health Products Regulatory Authority (SAHPRA), namely, Johnson \& Johnson, Pfizer, Sinovac and AstraZeneca~\cite{Vaccine2022}. For the South African COVID-19 case study, Johnson \& Johnson's Janssen and Pfizer vaccines were considered~\cite{SA2022}. As of June 9, 2022, $535,714$ COVID-19 hospital admissions had been recorded in South Africa~\cite{HOSPADM2022}. In our previous study~\cite{AMOUZOUVI2021e00987}, we covered the South African COVID-19 data up to adjusted alert level~3, which was in effect from December 29, 2020, to February 28, 2021~\cite{AMOUZOUVI2021e00987}. Based on the changes in new COVID-19 cases in South Africa, the government introduced adjusted alert levels, defined in Ref.~\cite{AMOUZOUVI2021e00987}, as follows~\cite{SAlifted2022, Vaccine2022}: \begin{itemize} \item Level 1: March 1--May 30, 2021; \item Level 2: May 31--June 15, 2021; \item Level 3: June 16--June 27, 2021; \item Level 4: June 28--July 25, 2021; \item Level 3: July 26--September 12, 2021; \item Level 2: September 13--30, 2021; and \item Level 1: October 1, 2021--April 14, 2022. \end{itemize} On May 3, 2022, South Africa had confirmed $3,802,198$ positive cases, $3,661,635$ recovered individuals, $100,377$ deaths, and $\sim17.7$ million vaccinated individuals~\cite{Vaccine2022}. The National State of Disaster in South Africa has been lifted since April 5, 2022~\cite{SAlifted2022}. In South Africa, health care workers were the first group to be vaccinated; their vaccination ran from February 18, 2021 (day 350) until May 17, 2021 (day 439) under phase~1 of the Sisonke Protocol, which enabled the government to make the Johnson \& Johnson vaccine quickly accessible through a research initiative~\cite{PETER20222, Sisonke}. The number of deaths remained constant during phase 1, while the numbers of active, healed and total cases remained roughly constant. During phase 2, which started on May 18, 2021, everyone aged 16 and above was allowed to be vaccinated with a first dose of the Johnson \& Johnson or Pfizer vaccine. \begin{figure}[!h] \begin{center} \includegraphics[width=\textwidth]{SouthAfrica-Kenya-Modeling.pdf} \caption{The modelling of about 2 years of COVID-19 data of South Africa (left plot) and Kenya (right plot). Day~0 corresponds to the onset of the pandemic, i.e. March 5, and March 12, 2020, for South Africa and Kenya respectively. The top plots show the data and model for active, recovered, death and total cases, and for fully-vaccinated individuals. Vaccination drives started on February 28, 2021 (South Africa) and March 5, 2021 (Kenya). The bottom plots show the time-dependent basic reproduction numbers.} \label{fig:SouthAfricaKenya} \end{center} \end{figure} Figure~\ref{fig:SouthAfricaKenya} (left plots) shows the modelling of the South African data; the first year of the pandemic was studied and discussed in Ref.~\cite{AMOUZOUVI2021e00987}. The second year of the South African COVID-19 data, with vaccination roll outs, is extensively discussed in Section~\ref{sec:disc}.
\subsection{COVID-19 vaccination analysis for Kenya} \noindent The data used in this analysis were taken from the daily press releases on the website of the Ministry of Health, Government of the Republic of Kenya~\cite{GoK}. Having received the first 1.12 million doses of the Oxford-AstraZeneca COVID-19 vaccine, Kenya kicked off its vaccination drive on March 5, 2021. This was one year after the first case of COVID-19 was reported in the country, on March 12, 2020. Six hundred and sixty-seven doses of AstraZeneca were administered on the first day of vaccination, to front-line healthcare workers only, at the Kenyatta National Hospital, Nairobi. They were followed in the first few weeks of the vaccination programme by other essential workers, such as security officers and teachers, and then by people with higher risks of severe disease and those aged 50 years and above. The administration of the second dose began on May 28, 2021, when 203 people received their second dose. After five months of administering the AstraZeneca vaccine only, 880,460 doses of the Moderna vaccine were received on August 23, 2021, from the US government via COVAX, making Moderna the second COVID-19 vaccine to be offered in the country. An additional 141,600 doses of the Johnson \& Johnson vaccine were received soon afterwards, on September 3, 2021. This was the third vaccine type to be offered, bringing the total to 4.2 million doses of vaccine received~\cite{GoK}. On September 17, 2021, the country received 795,600 doses of the Pfizer vaccine from the US government, making Pfizer the fourth vaccine offered. Shortly afterwards, on September 18, 2021, the government received 200,000 doses of the Sinopharm COVID-19 vaccine from the Chinese government. The government has authorised all five vaccines and, at the time of writing, they were being used across the country. After a slow uptake of the vaccines among the population due to vaccine hesitancy~\cite{Orangi2021}, a spike was observed on November 23, 2021, with the highest number of vaccination doses administered in a single day, 103,506 people. This followed a government directive on November 21, 2021, stating that anyone not vaccinated by December 21, 2021, would be refused in-person government services and access to public entertainment spots such as restaurants. By the end of 2021, 7\% of the population was fully vaccinated and $\sim 10\%$ of the population partly vaccinated. This slightly surpassed the government target of 10 million people vaccinated by the end of the year 2021. Figure~\ref{fig:SouthAfricaKenya} (right plots) shows the modelling of two years of COVID-19 data in Kenya, with the vaccination roll out commencing on day 358 (highlighted by the blue vertical line), almost a year after the first COVID-19 case was reported in the country---a detailed study of the data before the vaccination campaigns was given in Ref.~\cite{AMOUZOUVI2021e00987}. The issuance of the second dose began around day 450, as highlighted by the second blue vertical line. Around day 480 ($\sim 30$ days after the second dose), there was a sharp decrease in the number of active cases. Into the second year of the pandemic, the basic reproduction number $R_0$ remained $\approx$1 or below 1, with slight variations during minor peaks. At day $\sim$ 650, $R_0$ increased sharply to $R_0 \sim 5$. This was due to a slight but sharp increase in active cases, following a steady decrease in the number of active cases in the country.
Kenya is one of the WHO AFRO 20 priority African countries at high risk of a slow COVID-19 vaccination roll out~\cite{WHO_ke}. Therefore, the WHO AFRO implemented phased COVID-19 vaccination campaigns in February 2022 in order to boost vaccination rates. This entailed community outreach efforts and an increase in the number of vaccination sites from 800 to 6,000. Over a period of two weeks (February 3--17), the daily vaccination average increased from 70,000 to 200,000 people. This also raised the percentage of the population that was fully vaccinated from 9.9\% to 13.4\%. As of March 11, 2022, two years after the first COVID-19 case was reported in the country and one year after the mass vaccination programme roll out, 8,054,405 vaccine doses had been administered and $\sim 14.8\%$ (7,930,000) of the total population had been fully vaccinated. At the time of writing, a total of 323,140 COVID-19 cases had been reported and a total of 5,644 deaths recorded. COVID-19 restrictions are no longer in place, though the government is encouraging citizens to wear masks and maintain social distance where possible. Factors affecting the vaccination programme in Kenya include: i) funding, ii) the availability of vaccines, iii) storage requirements, iv) vaccine hesitancy among the population~\cite{Orangi2021}, and v) geographical inequalities in accessing vaccines in hard-to-reach areas~\cite{MUCHIRI2022}. The government aims to vaccinate 15.91~million people by June 2023 in a three-phased roll-out approach, initially targeting 1.25~million people by June 2021 in phase one. This was followed by phase two, July 2021--June 2022, with a target of 9.76~million people, including the elderly and people with underlying health conditions. The third phase started in July 2022 and will run until June 2023, with a target of 4.9~million people above 18 years old, those with underlying health risks and essential workers. \subsection{COVID-19 vaccination analysis for Ghana} \noindent In Ghana, the government committed to acquiring COVID-19 vaccines on December 20, 2020~\cite{lamptey2021nationwide}. Ghana was the first country to receive COVID-19 vaccines from the COVAX initiative and began its first vaccine roll out on March 1, 2021~\cite{WHOGH, Ghana-COVAX, nonvignon2022estimating}, with the AstraZeneca vaccine. Johnson \& Johnson (J\&J), Moderna, Pfizer, and Sputnik~V were the other COVID-19 vaccines approved and administered in Ghana. Figure~\ref{fig:GhanaTogo} (left plots) shows the modelling of the Ghanaian data over a two-year period: data from the first year of the pandemic---before vaccination started---were analyzed and discussed in Ref.~\cite{AMOUZOUVI2021e00987}; in this study, we focused on the second year of data, with vaccination drives. \begin{figure}[!h] \begin{center} \includegraphics[width=\textwidth]{GhanaTogo.png} \caption{The modelling of about 2 years of COVID-19 data of Ghana (left plot) and Togo (right plot). Day~0 corresponds to the onset of the pandemic, i.e. March 12, and March 6, 2020, for Ghana and Togo respectively. The top plots show the data and model for active, recovered, death and total cases, and for fully-vaccinated individuals. Vaccination drives started on March 1, 2021 (Ghana) and March 9, 2021 (Togo). The bottom plots show the time-dependent basic reproduction numbers.} \label{fig:GhanaTogo} \end{center} \end{figure} The second, third and fourth COVID-19 infection waves in Ghana were caused by the emergence of novel coronavirus variants, namely the Alpha, Delta and Omicron variants.
A study conducted by~\cite{morang2022genetic} indicates that the Delta, Alpha, Beta and Eta variants made up the top viral lineages within the sequenced SARS-CoV-2 genomes in Ghana. At the time of writing, the Beta variant was still being monitored in Ghana, since it had the third highest frequency. During the second wave, regions further from Accra, such as the Northern and Upper East regions, had different variants. These locations lagged behind the rest of the country in the third wave and did not appear to experience one~\cite{G.H.S}. The Beta variant was prominent in Ghana when the airport reopened to foreign travelers in September 2020, and it remained the most dominant lineage throughout 2020. The Alpha variant superseded Beta in January 2021 and became the major cause of all reported illnesses until June 2021, when the Delta lineages took over. The Delta lineages dominated from June 2021 until September 2021. Major variants such as Alpha, Beta, Delta, Eta, and Kappa were found first in samples from travellers, then in community cases, according to~\cite{morang2022genetic}. The president of Ghana and his vice president were the first to receive the AstraZeneca vaccine, on March 1, 2021~\cite{CNR}. By March 2, 2021, vaccination had been launched in the Ashanti region and over 10,000 people had been vaccinated. The second doses of the AstraZeneca vaccine commenced on May 19, 2021. By April 25, 2022, $14,268,269$ doses of these vaccines had been administered; $18.3\%$ of Ghana's population had been fully vaccinated, $29.9\%$ had received at least one dose of the vaccines, and $360,201$ people had received the first booster dose. By April 30, 2022, there were $161,216$ cases in Ghana. Out of these, there were $159,737$ recoveries, $1,445$ deaths and $34$ active cases. The Greater Accra region recorded the highest number of COVID-19 cases, at $90,826$, followed by the Ashanti region with $22,299$ cases~\cite{G.H.S}. \subsection{COVID-19 vaccination analysis for Togo} \noindent On March 7, 2021, approximately one year after the detection of the first case, the country received 156,000 doses of AstraZeneca through the COVAX facility~\cite{world2021covid, Konu2021.04.20.21254863}, and the vaccination campaign started the following day. An additional 120,000 doses of AstraZeneca were received on March 31, 2021. After these, an additional 100,620 Pfizer doses were obtained in May 2021, and 200,000 doses of Sinovac on April 23, 2021. On August 7, 2021, Togo received an additional 118,000 doses of the Johnson \& Johnson vaccine out of the 4 million doses that it had ordered. The World Health Organisation Coronavirus Dashboard indicates that, by August 14, 2022, Togo had received 3,262,548 COVID-19 vaccine doses, with 2,152,846 people vaccinated---corresponding to $\sim 25.4$\% of the population qualified for vaccination---and 1,425,113 persons fully vaccinated~\cite{WHO-Dashboard}. The vaccination started with health workers on March 10, 2021, day 370, as shown in Figure~\ref{fig:GhanaTogo} (right plots), followed by clinically vulnerable individuals, then people over 50 years old~\cite{world2021covid, Konu2021.04.20.21254863}. It took approximately 2 months to cover this targeted population. After the priority groups had been vaccinated, there was a wider roll out among younger age groups. One month after the vaccination campaign began (from day 400), we started to see an impact on the infection rate, reflected in $R_0$ as shown in Figure~\ref{fig:GhanaTogo} (right plot).
The data from the first year of the pandemic---before vaccination started---were analyzed and discussed in Ref.~\cite{AMOUZOUVI2021e00987}. Active cases continued to decrease up to three months after the vaccination started, while $R_0$ sharply increased in the third month. This increase in $R_0$ resulted from the relaxation of the control measures that were in place before the start of the vaccination. These measures were largely no longer respected, as people thought that the problem of COVID-19 would be solved immediately by the arrival of the vaccines. After day 470, the active cases started to increase again when the vaccine doses ran out and a new COVID-19 variant (Delta) emerged. As the active cases started to increase, the government warned the population of the new variant and encouraged rigorous adherence to the control measures. More vaccines were received later and distributed across the country. However, as the government accelerated the vaccination campaign, vaccine hesitancy set in. There was an increase in general vaccine hesitancy, but especially towards COVID-19 vaccines~\cite{gittings2021even, alemayehu2022determinants, adunimay2022western}. Measures to encourage vaccination were therefore put in place, such as the obligatory presentation of the COVID-19 vaccination card before entering any public institution. Despite these different strategies, as of September 17, 2021, the proportion of the population who had received two doses of the COVID-19 vaccine was only 5.6\%. To reach the vaccination targets, the WHO Country Office in Togo provided technical and financial support to the Togolese government; through the Ministry of Health, Public Hygiene and Universal Access to Health Care (MSHPAUS), they initiated community dialogues and broad awareness-raising in the Grand-Lomé region, the epicentre of the epidemic in Togo. These reduced misinformation and removed barriers to vaccine acceptance. However, there have been rises and falls in the basic reproduction number, as shown in Figure~\ref{fig:GhanaTogo} (bottom-right plot); the rises may be related to non-respect of the control measures. This overall observation allows us to stress that both control measures and vaccination are necessary to overcome the COVID-19 pandemic. \subsection{COVID-19 vaccination analysis for Mozambique} \noindent The datasets used in this study for the particular case of Mozambique were taken from the daily press releases and daily bulletins on the website of the government~\cite{Moz1,Moz2}. With the purpose of understanding the impact of vaccination on the evolution of COVID-19 in Mozambique, the modelling of the COVID-19 data was carried out, and the results are shown in the left plots of Figure~\ref{fig:MozambiqueZambia}, which cover approximately one year of vaccination campaigns as well as the first year of data before vaccination; the data of the first year of the pandemic, before vaccination started, were analyzed and discussed in Ref.~\cite{AMOUZOUVI2021e00987}. \begin{figure}[!h] \begin{center} \includegraphics[width=\textwidth]{Mozambique_Zambia.png} \caption{The modelling of $\sim$2 years of COVID-19 data of Mozambique (left plot) and Zambia (right plot). Day~0 corresponds to the onset of the pandemic, i.e. March 20, and March 18, 2020, for Mozambique and Zambia respectively.
The top plots show the data and model for active, recovered, death and total cases, and for fully-vaccinated individuals. Vaccination drives started on March 8, 2021 (Mozambique) and April 14, 2021 (Zambia). The bottom plots show the time-dependent basic reproduction numbers.} \label{fig:MozambiqueZambia} \end{center} \end{figure} In Mozambique, the vaccination campaign started on March 8, 2021, at the end of the first year of COVID-19 and approximately when the second wave was fading toward February 2021. When the Government started the vaccination campaign, there was already a reduction of active cases because of the non-pharmaceutical measures being implemented according to Decree 7/2021 of March~5 (see Ref.~\cite{Moz3}). In general, the first vaccination campaign targeted health professionals, older people, diabetic patients, defence and security forces as well as university teachers~\cite{Moz4}. Between April 19 and May 10, 2021, Mozambique had the second stage of vaccination, which covered final-year medical students, teachers who were not covered in the first stage, inmates, police and primary school teachers. The third stage of vaccination was between October 20 and November 3, 2021; it covered carriers, people that were not vaccinated in the first two stages, motorcycle taxis, students and all vulnerable people. Around the end of the fourth wave, on January 20, booster doses were introduced~\cite{Moz5}. Figure~\ref{fig:MozambiqueZambia} (bottom-left plot) shows the $R_0$ evolution in Mozambique. Overall, $R_0$ varied between 0.1 and 2.5, as follows: 1) in the second wave, between 0.1 and 2.1; 2) in the third wave, between 0.4 and 2.5; 3) in the fourth wave, between 0.1 and 1.8. The $R_0$ fluctuations were related to the Government regulations of non-pharmaceutical interventions together with the onset of new variants, which triggered new waves. During the vaccination campaign, infection was still spreading, but with diminishing impact, as shown in Figure~\ref{fig:Mozambique3}---the fifth wave of COVID-19 started in the last week of May 2022 and was fading at the time of writing. The onset of this wave coincided with the time when the winter brought unusually low temperatures in some regions and many people suffered from normal flu symptoms. This new wave was relatively small in terms of the number of people affected, duration and impact compared to the previous waves. The rate of deaths in this wave was very low; the rate of recovery was high, with a small number of people needing hospitalization. At the time of writing, the government had set a vaccination target of 15 million people, with about 97$\%$ fully vaccinated---of these, 605,166 individuals had received the booster. The government was planning to start vaccinating people aged between 12 and 17 years~\cite{Moz1,Moz2}. \subsection{COVID-19 vaccination analysis for Zambia} \noindent The Zambian data of the first year of the pandemic---before vaccination started---were analyzed and discussed in Ref.~\cite{AMOUZOUVI2021e00987}.
The government of Zambia, through the Ministry of Health, issued a strategy in response to the COVID-19 outbreak which was anchored on nine principles: surveillance and case finding; case management; infection prevention and control; health promotion, risk communication and community engagement; laboratory diagnosis; logistics and supply chain management; availability of an appropriately competent and adequate workforce; provision of routine essential health services; and lastly vaccination. To this end, Zambia's COVID-19 vaccination exercise required logistical and supply chain management to provide access to vaccines for the eligible population. The four main vaccine providers were: \begin{itemize} \item the COVAX mechanism, a global initiative representing a partnership between the WHO, the Global Alliance for Vaccines and Immunization (GAVI), the United Nations Children's Fund (UNICEF) and the Coalition for Epidemic Preparedness Innovations (CEPI), working on the equitable distribution of COVID-19 vaccines, which include the AstraZeneca and Johnson \& Johnson vaccines; \item donations of vaccines from donors and cooperating partners, as long as they are approved by the Ministry of Health through the Zambia Medicines Regulatory Authority (ZAMRA); \item private sector health services who provide vaccines as approved by the local authority; \item a basket of vaccines which includes Pfizer-BioNTech, Moderna, Johnson \& Johnson, Sinovac, Sputnik and others~\cite{Zambia-MoH}. \end{itemize} ZAMRA approved five COVID-19 vaccines---namely Sinopharm, Johnson \& Johnson, AstraZeneca Covishield, AZD 12225-Korea AstraZeneca, and Pfizer-BioNTech---following stringent review processes and riding on the World Health Organization (WHO) emergency use listing of the vaccines. Vaccines were administered free to all persons over the age of twelve. The rate of vaccination in Zambia increased as vaccine deliveries increased; Zambia secured 14.9 million COVID-19 vaccine doses through the COVAX platform, of which 11.59 million doses (77\% of the total) were brought into the country~\cite{Zambia-Can}. This increasing trend reduced hospitalization and infection rates, as shown in Figure~\ref{fig:MozambiqueZambia} (right plots). COVID-19 vaccination was officially launched on April 14, 2021, at the University Teaching Hospital in Lusaka, after Zambia received the first consignment of 228,000 doses of the vaccine from the COVAX facility~\cite{Zambia-Fra}. Using geo-spatial data, the initial voluntary COVID-19 vaccination exercise in the country targeted 8.4 million people above the age of 18 (or a 70\% vaccination rate) through mass campaigns~\cite{Zambia-Grid3}. The National COVID-19 Vaccine Deployment Plan identified frontline workers essential to sustaining the COVID-19 response strategy. These were health workers, teachers, immigration officers, police, and religious and traditional leaders. Further, people at greatest risk of severe COVID-19 disease (those with other underlying diseases and those aged above 65 years) were also placed in the initial group. A second shipment of 228,000 COVID-19 vaccine doses, donated by France through the COVAX facility, was later delivered.
The delivery, arranged by UNICEF, provided an important boost to the ongoing COVID-19 vaccination campaign. Zambia had received 10,406,410 vaccine doses, broken down as follows: 188,400 Moderna; 1,549,300 AstraZeneca; 1,749,600 Sinopharm; 2,296,710 Pfizer; and 4,622,400 Johnson \& Johnson; with the most recent receipt being 574,470 doses of Pfizer received under COVAX on May 6, 2022. In summary, of the total vaccine doses received, 5,598,359 (54\%) had been administered to date; the wastage rate was estimated at 2.3\%~\cite{Zambia-sit}. By September 30, 2021, approximately 731,450 doses of COVID-19 vaccines had been administered in Zambia, covering about 2\% of the adult population. As of November 29, 2021, the number of vaccinated Zambian adults rose to 1,074,368 due to the government's emphasis on having many Zambians vaccinated, but this was still below the WHO-recommended threshold of 10\% of the adult population vaccinated~\cite{Zambia-cov}. As of December 31, 2021, the prevalence of COVID-19 vaccinations in Zambia was at 3.55\%. The number of doses of COVID-19 vaccines administered increased, indicating that individuals had steadily been accepting the vaccines. However, Zambia, like many other developing countries, experienced high COVID-19 vaccine hesitancy, i.e.\ the refusal or delay of vaccination despite the availability of vaccines. Vaccine hesitancy is a multifactorial issue that requires many players to address. The WHO contends that vaccine hesitancy is a complex issue and that no single strategy will be able to address it. The valleys in the vaccination curves, as opposed to a steadily increasing trend, can be attributed to vaccine hesitancy~\cite{Mudenda}. A nationwide vaccination campaign was launched in a bid to escalate progress towards attaining the vaccination target of 70\% of the eligible population and to achieve 70\% herd immunity against the pandemic. The campaign received an overwhelming public response. It involved door-to-door vaccination campaigns against COVID-19, engagement of community-based health volunteers and mass citizen sensitization on the benefits of vaccination. By July 11, 2022, 28\% of people in Zambia had received at least one vaccine dose, and 26\% were fully vaccinated. Since the first launch of the vaccination program on April 14, 2021, only about 5\% of the eligible population had been vaccinated at the time of writing. With the relaunched campaign, the government was targeting to vaccinate 8.4 million eligible people aged 18 and above, or 70\% of the eligible population. Figure~\ref{fig:MozambiqueZambia} (right plots) shows the progression of COVID-19 during the two years of the pandemic in Zambia. The period before the vaccination roll-out was discussed in Ref.~\cite{AMOUZOUVI2021e00987}. The modelling of COVID-19 vaccination in Zambia to eligible persons began after April 14, 2021. By May 25, 2021, a total of 5,286 people were fully vaccinated with Sinopharm and AstraZeneca. In addition, by January 2, 2022, eligible individuals had started to receive booster vaccines, corresponding to a cumulative total of 1,649. However, in about 8 to 9 months following the start of vaccination, as shown in Figure~\ref{fig:MozambiqueZambia} (bottom-right plot), $R_0$ increased twice to approximately 2 due to vaccine hesitancy and a lack of strict adherence to COVID-19 protocols such as wearing of masks, hand sanitizing and keeping a safe social distance of about 1\,m.
Finally, in the period around day 700, $R_0$ decreased significantly to approximately 1, coupled with a corresponding reduction in active cases and increased recoveries. This significant reduction could be attributed to effective measures introduced by the government to curb the spread of COVID-19, such as mandatory wearing of masks in public places, maintenance of safe distance, lockdown measures, and the closure of schools and universities. \section{Impact of vaccination} \label{sec:disc} In this study, we focused on the second year of the COVID-19 pandemic with vaccination roll-outs. To discuss the impact of vaccination, we took the case of South Africa, where the available data was statistically significant, as described in Section~\ref{sec:sa}. At the beginning of the vaccination campaign, around Day 349, as shown in Figure~\ref{fig:SA-vac} (bottom plot), the number of active cases was declining and $R_0$, estimated from the bottom-left plot of Figure~\ref{fig:SouthAfricaKenya}, was 0.99; the government therefore relaxed the control measures to alert level one on March 1, 2021. The SIDARTHE-V model extrapolation into the period of vaccination is shown in Figure~\ref{fig:SA-vac} and suggests that the active cases should dwindle and the death rate should plateau over time. The relaxation of the control measures without enough vaccinated individuals to reach herd immunity led to the third and fourth waves seen in Figure~\ref{fig:SA-vac} (bottom plot), although vaccination was ramping up (Figure~\ref{fig:SouthAfricaKenya}, top-left plot). \begin{figure}[!h] \begin{center} \includegraphics[width=\textwidth]{RealAndExtrapolation-SouthAfrica.pdf} \caption{Death cases (top plot) and active cases (bottom plot) with extrapolation into the period of the vaccination campaign, for South Africa. The plots cover a period of 2 years of the COVID-19 pandemic. Day~0 is March 5, 2020. The vertical dotted line indicates the start of the vaccination campaign.} \label{fig:SA-vac} \end{center} \end{figure} The number of people $n$ to vaccinate to reach herd immunity is \begin{equation} n = N \times (1-1/R_0), \label{eq:pfrac} \end{equation} where $N$ is the population. At the onsets of the third and fourth waves, $R_0$ was estimated at $\sim1.4$ and $\sim2.0$ respectively, as shown in Figure~\ref{fig:SA-waves} (top plot). Assuming $N=60$~million for South Africa, the numbers of people to vaccinate at the beginning of the third and fourth waves were $n_1=17.1$~million and $n_2=30.0$~million respectively; however, the corresponding numbers of fully-vaccinated persons were only 318,670 and 14,031,159. Although vaccination was continuing, as shown in Figure~\ref{fig:SouthAfricaKenya} (left plot), herd immunity was not reached. \begin{figure}[!h] \begin{center} \includegraphics[width=\textwidth]{SouthAfrica-waves-vacccinations.pdf} \caption{South African waves of COVID-19 and $R_0$ estimates at the beginning of each wave, numbers of fully-vaccinated persons at the onsets of the third and fourth waves, and the cumulative deaths as a function of time (top plot). The number of daily death counts is shown in the bottom plot. Day~0 is March 5, 2020. The vertical lines indicate the beginning of the pandemic waves.} \label{fig:SA-waves} \end{center} \end{figure} The lack of herd immunity may be the cause of the fifth wave shown in the top plot of Figure~\ref{fig:SA-waves}.
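For reproducibility, the short Python sketch below evaluates Equation~\ref{eq:pfrac} for the $R_0$ estimates quoted above; the population size and $R_0$ values are the ones used in the text.

\begin{verbatim}
# Herd-immunity threshold n = N * (1 - 1/R0) (Equation eq:pfrac),
# evaluated for the R0 estimates at the onsets of the third and
# fourth South African waves, assuming a population N = 60 million.
N = 60e6

for wave, r0 in [("third wave", 1.4), ("fourth wave", 2.0)]:
    n = N * (1.0 - 1.0 / r0)
    print(f"{wave}: R0 = {r0} -> vaccinate ~{n / 1e6:.1f} million")

# third wave: R0 = 1.4 -> vaccinate ~17.1 million
# fourth wave: R0 = 2.0 -> vaccinate ~30.0 million
\end{verbatim}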
The impact of vaccination was beginning to be felt at the time of the fifth wave---this can be seen in: \begin{itemize} \item the fifth wave, which was relatively smaller than the previous ones; \item the cumulative deaths, which were plateauing (top plot of Figure~\ref{fig:SA-waves}); \item the daily death counts, which had fallen (bottom plot of Figure~\ref{fig:SA-waves}); \item and the relaxation of control measures to level one without resurgence of any significant wave. \end{itemize} The impact of vaccination, inferred from Figure~\ref{fig:SA-waves}, appears consistent with the general understanding of what a vaccination program would achieve. A vaccination program is expected to reduce COVID-19 hospitalizations and deaths. Delays in implementing a vaccination program can significantly increase the number of infections and subsequently the numbers of hospitalizations and deaths. The basic reproduction number, $R_0$, combines many effects---captured in the model parameters that appear in Equation~\ref{EQ02}---to provide an understanding of the pandemic evolution with control measures or vaccination impacts. These include the death rates ($\tau_1$, $\tau_2$) and the worsening rates of the infected population, $\mu$ and $\nu$. Comparing death rates before and after vaccination, we see that the parameters $\tau$ are reduced after the vaccination campaigns; this means that large infection rates need not result in people dying in large numbers (see the bottom plot of Figure~\ref{fig:SA-waves}). Further, the reduction in the parameters $\mu$ and $\nu$ would reduce the severity of infections (see the fifth wave in the top plot of Figure~\ref{fig:SA-waves}) and help in reducing the number of deaths. The number of deaths could have been drastically reduced had the non-pharmaceutical interventions been kept in place for a while at the beginning of the vaccination program. This is on top of the observation that the death rates due to COVID-19 in Africa are relatively low. From the data and model simulation, we conclude that vaccination is an important tool in the fight against COVID-19, and an early implementation of a vaccination program could have saved lives and yielded positive impact. \section{Conclusions} \label{sec:conc} We studied the impact of vaccination in Nigeria, South Africa, Kenya, Ghana, Togo, Mozambique and Zambia. The SIDARTHE-V model was used in simultaneous fits to active, recovered, extinct and vaccinated cases in the countries considered. We observed that it is important to combine vaccination roll-outs with control measures to contain the pandemic until herd immunity is achieved. To assess the impact of vaccination in Africa, we studied the South African case in more detail, since it was the most impacted country in the continent and also the one with the most statistically significant vaccination data. The impact of vaccination was observed after almost one year, when approximately a third of the population had been fully vaccinated. This was reflected in the significantly reduced daily death counts, the plateauing of the cumulative death rate and the relaxation of control measures without resurgence of COVID-19 peak waves. For the other countries studied, the impact of vaccination was not easy to gauge because of the relatively smaller numbers of COVID-19 cases and fully vaccinated people.
However, the conclusion reached in the South African case may be applicable to other countries; that is, vaccination roll-outs need to be combined with control measures until enough of the population has been vaccinated, such that the relaxation of control measures no longer leads to significant waves.
{ "timestamp": "2022-09-23T02:02:13", "yymm": "2209", "arxiv_id": "2209.08694", "language": "en", "url": "https://arxiv.org/abs/2209.08694" }
\section{Introduction}\label{sec:introduction} Deep Neural Networks (DNNs) are incorporated in real-world applications used by a broad spectrum of industry sectors including healthcare \citep{Shorten2021DeepLA,Fink2020PotentialCA}, finance \citep{Huang2020DeepLI, Culkin2017MachineLI}, self-driving vehicles \citep{Swinney2021UnmannedAV}, and cybersecurity \citep{Ferrag2020DeepLF}. These applications utilize DNNs in various fields such as computer vision \citep{Hassaballah2020DeepLI,Swinney2021UnmannedAV}, audio signal processing \citep{Arakawa2019ImplementationOD,Tashev2017DNNbasedCV}, and natural language processing \citep{Otter2021ASO}. Many services in large companies such as Google and Amazon have DNN-based back-end software (e.g., Google Lens and Amazon Rekognition) with a tremendous volume of queries per second (QPS). For instance, Google processes over 99,000 searches every second \citep{mohsin_2022} and spends a substantial amount of computation power and time at their models' run-time \citep{Xiang2019PipelinedDC}. These services are often time-sensitive, resource-intensive, and require high availability and reliability. The question, then, is how fast the current state-of-the-art (SOTA) DNN models are at inference time and to what extent they can provide low-latency responses to queries. The SOTA model depends on the application domain and the problem at hand. However, the trend in DNN design is indeed toward pre-trained large-scale models due to their reduced training cost (only fine-tuning) while providing dominating results (since they are huge models trained on extensive datasets). One of the downsides of large-scale models (pre-trained or not) is their high inference latency. Although the inference latency is usually negligible per instance, as discussed, a relatively slow inference can jeopardize a service's performance in terms of throughput when the QPS is high. In general, in a DNN-based software development and deployment pipeline, the inference stage is part of the so-called ``model serving'' process, which enables the model to serve inference requests or jobs \citep{Xiang2019PipelinedDC} by directly loading the model in the process or by employing serving frameworks such as TensorFlow Serving \citep{Olston2017TensorFlowServingFH} or Clipper \citep{Crankshaw2017ClipperAL}. The inference phase is an expensive stage in a deep neural model's life cycle in terms of time and computation costs \citep{Desislavov2021ComputeAE}. Therefore, efforts towards decreasing the inference cost in production have increased rapidly throughout the past few years. From the software engineering perspective, caching is a standard practice to improve software systems' performance, as it helps avoid redundant computations. Caching is the process of storing recently observed information to be reused when needed in the future, instead of re-computed \citep{Wessels-2001,caching-def}. Caching is usually orthogonal to the underlying procedure, meaning that it is applied by observing the inputs and outputs of the target procedure and does not engage with the internal computations of the cached function. Caching's effectiveness is best observed when the cached procedure often receives duplicated inputs while in a similar internal state---for instance, accessing a particular memory block, loading a web page, or fetching the books listed in a specific category in a library database.
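As a minimal illustration of this conventional, input-keyed caching (as opposed to the layer caching proposed in this paper), the Python sketch below memoizes a function on its raw inputs; the function and its cost are hypothetical stand-ins for an expensive back-end call.

\begin{verbatim}
import time
from functools import lru_cache

@lru_cache(maxsize=4096)   # classic caching: keyed on the raw input only
def fetch_books(category: str) -> tuple:
    time.sleep(0.1)        # stand-in for an expensive database query
    return (f"{category}-book-1", f"{category}-book-2")

fetch_books("astronomy")   # cache miss: pays the full cost
fetch_books("astronomy")   # cache hit: returned instantly from the cache
\end{verbatim}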
It is also possible to adopt a standard caching approach with DNNs (e.g., some works cache a DNN's output solely based on its input values \citep{Crankshaw2017ClipperAL}). However, this would most likely provide a meager improvement due to the high dimension and size of the data (such as images, audio, and text) and the low duplication among the requests. Due to the feature-extracting nature of deep neural networks, however, we can expect inputs with similar outputs (e.g.,\ images of the same person or the same object) to exhibit a pattern in the intermediate layers' activation values. Therefore, we exploit the opportunity to cache a DNN's output based on the intermediate layers' activation values. This way, \textbf{we can cache the results not by looking at the raw inputs but by looking at their extracted features in the intermediate layers within the model's forward-pass}. The intermediate layers often have even higher dimensions than the input data. Therefore, we use shallow classifiers \citep{Kaya2019ShallowDeepNU} to replace the classic cache storing and look-up procedures. A shallow classifier is a supplementary model attached to an intermediate layer in the base model that uses the intermediate layer's activation values to infer a prediction. In the caching method, training a shallow classifier on a set of samples mimics the procedure of storing those samples in cache storage, and inferring for a new sample using the shallow classifier mimics the look-up procedure. Caching is more problematic in regression models, where the outputs are continuous values. Specifically, it is less likely that two different samples have the same outcome in a regression model compared to a classification one. Therefore, the experiments in this research focus on classification models. Thus, here we propose caching the predictions made by off-the-shelf classification models using shallow classifiers trained on the samples and information collected at inference time. We first evaluate the rationality of our method in our first research question by measuring how it affects the final accuracy of the given base models and assessing the effectiveness of the parameters we introduce (tolerance and confidence thresholds) as a knob to control the caching certainty. We further evaluate the method in terms of computational complexity and inference latency improvements in the second and third research questions. We measure these improvements by comparing the FLOPs count, memory consumption, and inference latency of the original model vs.\ the cache-enabled version that we build throughout this experiment. We observed up to a 58\% reduction in FLOPs, and up to 46\% acceleration in inference latency on CPU and up to 18\% on GPU, with less than a 2\% drop in accuracy. In the rest of the paper, we discuss our motivations in section \ref{sec:motivation}, the background and related works in section \ref{sec:bkg}, details of the method in section \ref{sec:method}, the design and evaluation of the study in section \ref{sec:empirical-evaluation}, and lastly, we conclude in section \ref{sec:conclusion}. \section{Motivation}\label{sec:motivation} Many real-world software services utilize deep neural models and, simultaneously, require low response times to meet their service level objectives (SLOs). This requirement usually leads to allocating expensive infrastructure and hardware resources to the services \citep{VelascoMontero2019OnTC}.
The high computational cost of DNN models directly affects the service provider in terms of delivery cost, and the environment in terms of the carbon footprint of the data centers running such services 24/7. Countless high-traffic online platforms such as online stores, photo/video sharing platforms, digital advertising platforms, and trading platforms use neural networks within the process of serving their user requests. For instance, displaying an online advertisement involves an online ad-click rate prediction based on the user features \citep{Gharibshah2020DeepLF}. Furthermore, online stores also use deep learning classification models for various purposes, such as product categorization, recommendation, product review sentiment analysis, and customer churn rate prediction. In terms of traffic load, Google Lens, for instance, reached an average of 3 billion uses per month in 2021 \citep{maxham_diaz_2021}. Employing a variety of machine learning and deep learning models, Google spends billions of dollars on data centers and infrastructure to process such a volume of requests \citep{spadafora_2022}. Thus, extensive work has been done towards minimizing the energy footprint of large-scale services \citep{Lo2014TowardsEP,Buyya2018SustainableCC}. On the other hand, the trend of deploying DNNs on resource-constrained devices such as mobile and IoT devices has also been rising in the past few years \citep{Lin2020MCUNetTD,Yoo2020DeepLP}. Various scenarios involve DNNs performing on-device predictions where low inference latency and/or low compute consumption is required. For instance, traffic sign classification in autonomous vehicles \citep{Zhang2020LightweightDN} requires low latency, and on-device voice command recognition systems \citep{Lin2018EdgeSpeechNetsHE} and mobile visual assistants \citep{9179386} require low compute consumption. Moreover, using pre-trained off-the-shelf DNN models and adapting them to new tasks using transfer learning is playing a fundamental role in enabling practitioners in different areas to utilize DNNs \citep{Shrestha2019CrossFrequencyCO,Abed2020AlzheimersDP,Lee2020EvaluationOT}. However, the pre-trained models' original training data is not always available to the users. The absence of the training data can be due to different reasons, such as the high volume or cost of the data, privacy requirements, or intellectual property regulations. Accounting for such common cases, we restrict our method to use only the data collected at inference time (the test set). The inference data are unlabelled, meaning that their ground truth labels are not available to the user. Hence, our method relies only on the model's internal values and final outputs and does not require access to the ground truth labels. Improving DNNs' compute performance has received a considerable amount of attention in terms of specialized hardware accelerators \citep{Wang2019BenchmarkingTG,Dally2020DomainspecificHA,Deng2020ModelCA} and framework-level optimizations \citep{Crankshaw2017ClipperAL,Shi2018PerformanceMA}. On the other hand, model compression methods propose modifications to the model's structure (i.e., weights and connections) to reduce its compute complexity. By applying one or more model compression methods, practitioners either replace the original model with the modified one and lose a fixed amount of accuracy, or manage multiple versions of the model with different accuracy and complexity.
Having multiple model versions, they select one for inference based on the current workload \citep{Taylor2018AdaptiveDL, Marco2020OptimizingDL} or available resources \citep{Guan2018EnergyefficientAI}. In contrast, our method optimizes the model while preserving its original structure, allowing the user to enable/disable the optimization without the overhead of managing and loading/offloading multiple model versions. Considering the trends, requirements, and motivations discussed above, we design the caching method to add one or more alternative exit paths to the model, each requiring less computation than the remaining layers in the backbone and controlled by the shallow classifiers we train using only the inference data. \section{Background and related works}\label{sec:bkg} In this section, we briefly review the background topics to the model inference optimization problem. Following this background discussion, we introduce the techniques used to build the caching procedure. Figure~\ref{fig:background} puts the discussed background and related techniques into the picture. \begin{figure}[htbp] \centering \begin{center} \includegraphics[width=\columnwidth]{background.eps} \caption{DNN inference optimization perspectives and solutions}\label{fig:background} \end{center} \end{figure} \subsection{Inference optimization} There are two perspectives on the model inference optimization problem. The first perspective is interested in optimizing the model deployment platform and covers a broad range of optimization targets \citep{Yu2021ASO}. These studies often target the deployment environments of resource-constrained edge devices \citep{Liu2021SecDeepSA,Zhao2018DeepThingsDA} or resourceful cloud-based devices \citep{Li2020AutomatingCD}. Others focus on hardware-specific optimizations \citep{Zhu2018ResearchOP} and inference job scheduling \citep{Wu2020IrinaAD}. The second perspective is focused on minimizing the model's inference compute requirements by compressing the model. Among model compression techniques, model pruning \citep{han2015deep,Zhang2018ASD,Liu2019RethinkingTV}, model quantization \citep{Courbariaux2015BinaryConnectTD,Rastegari2016XNORNetIC,Nagel2019DataFreeQT}, and model distillation \citep{Bucila2006ModelC,Polino2018ModelCV,Hinton2015DistillingTK} are extensively used. These ideas alleviate the model's computational complexity by pruning the weights, computing the floating-point calculations at lower precision, and distilling the knowledge from a teacher (more complex) model into a student (less complex) model, respectively. These techniques modify the original model and often cause a fixed amount of loss in test accuracy. \subsection{Early Exits in DNNs}\label{subsec:early-exit} ``Early exit'' generally refers to an alternative path in a DNN model which can be taken by a sample instead of proceeding to the next layers in the model. Many previous works have used the early exit concept for different purposes \citep{Xiao2021SelfCheckingDN,Scardapane2020WhySW,Matsubara2022SplitCA}. Among them, Shallow Deep Networks (SDN) \citep{Kaya2019ShallowDeepNU} points out the ``overthinking'' problem in deep neural networks. ``Overthinking'' refers to the models spending a fixed amount of computational resources on any query sample, regardless of its complexity (i.e., how deep the neural network should be to infer the correct prediction for the sample). Their research proposes attaching shallow classifiers to the intermediate layers in the model to form the early exits.
Each shallow classifier in SDN provides a prediction based on the values of the intermediate layer to which it is attached. On the other hand, the work in \citep{Xiao2021SelfCheckingDN} incorporates the shallow classifiers to obtain multiple predictions for each sample. In their method, they use early exits as an ensemble of models to increase the base model's accuracy. The functionality of the shallow classifiers in our proposed method is similar to SDN. However, the SDN method trains the shallow classifiers using the ground truth data in the training set and overlooks the available knowledge in the original model. This constraint renders the method inapplicable when using a pre-trained model without access to the original training data, which is commonly the case for practitioners. \subsection{DNN Distillation and Self-distillation}\label{subsec:distillation} Among machine learning tasks, classification is one of the significant use cases where DNNs have been successful in recent years. Classification is applied to a broad range of data such as images \citep{Bharadi2017ImageCU}, text \citep{Varghese2020DeepLI}, audio \citep{Lee2009UnsupervisedFL}, and time series \citep{Zheng2014TimeSC}. Knowledge distillation (KD) \citep{Bucila2006ModelC,Polino2018ModelCV,Hinton2015DistillingTK} is a model compression method that trains a relatively small (less complex) model known as the student to mimic the behavior of a larger (more complex) model known as the teacher. Classification models usually provide a probability distribution (PD) representing the probability of the input belonging to each class. KD trains the student model to provide PDs similar to the teacher model's (i.e.,\ soft labels) rather than training it with just a class label for each sample (i.e.,\ hard labels). KD uses specialized loss functions in the training process, such as the Kullback--Leibler divergence \citep{Joyce2011}, to measure how one PD differs from another. KD is usually a two-step process consisting of training a large complex model to achieve high accuracy and then distilling its knowledge into a smaller model. An essential challenge in KD is choosing the right teacher and student models. Self-distillation \citep{self-distillation} addresses this challenge by introducing a single-step method to train the teacher model along with multiple shallow classifiers. Each shallow classifier in self-distillation is a candidate student model which is trained by distilling the knowledge from one or more of the deeper classifiers. In contrast to SDN, self-distillation utilizes knowledge distillation to train the shallow classifiers. However, it still trains the base model from scratch along with the shallow classifiers, using the original training set. This training procedure conflicts with our objectives in both aspects. Specifically, we use a pre-trained model, keep it unchanged throughout the experiment, and only use inference data to train the shallow classifiers. Our work modifies and puts the methods presented in SDN and self-distillation in the context of caching the final predictions of pre-trained DNN models. The method trains the shallow classifiers using only the unlabelled samples collected at run-time and measures the improvement in inference compute costs achieved by the early exits throughout the forward-passes. \subsection{DNN Prediction Caching} Clipper \citep{Crankshaw2017ClipperAL} is a serving framework that incorporates caching DNNs' predictions based on their inputs.
Freeze Inference \citep{Kumar2019AcceleratingDL} investigates the use of traditional ML models such as K-NN and K-Means to predict based on intermediate layers' values. They show that the size and computational complexity of those ML models grow proportionally with the number of available samples, and their computational overheads by far exceed any improvement. In Learned Caches, \citet{Balasubramanian2021AcceleratingDL} extend Freeze Inference by replacing the ML models with a pair of DNN models: a predictor model that predicts the outputs, and a binary classifier that predicts whether the output should be used as the final prediction. Their method uses the ground truth data in the process of training the predictor and selector models. In contrast, our method 1) only uses unlabelled inference data, 2) automates the process of cache-enabling, 3) uses confidence-based cache hit determination, and 4) handles batch processing by batch shrinking. \section{Methodology}\label{sec:method} In this section, we explain the method for converting a pre-trained deep neural model (which we call the backbone) into its extended version with our caching method (called the cache-enabled model). The caching method adds one or more early-exit paths to the backbone, controlled by the shallow classifiers (which we call the cache models), allowing the model to infer a decision faster at run-time for some test data samples (cache hits). A faster decision for a portion of queries results in a reduced mean response time. A ``cache model'' is a supplementary model that we attach to an intermediate layer in the backbone and that, given the layer's values, provides a prediction (along with a confidence value) for the backbone's output. As a reminder, following our principal motivation, we assume that the original training data is unavailable to the user, as is the case for most large-scale pre-trained models used in practice. Therefore, in the rest of the paper, unless we explicitly mention otherwise, the terms dataset, training set, validation set, and test set all refer to the whole available data at run-time or a respective subset. Our procedure for cache-enabling a pre-trained model is chiefly derived from the self-distillation method \citep{self-distillation}. However, we adapt the method to cache-enable pre-trained models using only their recorded outputs, without access to the ground truth (GT) labels. A step-by-step guide on cache-enabling an off-the-shelf pre-trained model from a user perspective contains the following steps: \begin{enumerate} \item Identify the candidate layers to be cached \item Build a cache model for each candidate \item Assign confidence thresholds to the built models for determining the cache hits \item Evaluate and optimize the cache-enabled model \item Update and maintain \end{enumerate} In the following subsections, we further discuss the procedure and design decisions in each step outlined above. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{procedure.eps} \caption{Cache-enabling procedure, candidate layers, and data paths. }\label{fig:procedure} \end{figure} \subsection{Identifying candidate layers}\label{subsec:candidates} Choosing which layers to cache is the first step toward cache-enabling a model. A candidate layer is a layer whose activation values we examine for their correlation with the final predictions, by training a cache model on them. One can simply list all the layers in the backbone as candidates.
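As a hypothetical starting point, assuming a torchvision ResNet-18 as the backbone, such a naive enumeration could look like the sketch below; the following paragraphs discuss how we narrow this list down.

\begin{verbatim}
import torch.nn as nn
import torchvision.models as models

backbone = models.resnet18()   # example backbone (placeholder)

# Naive candidate enumeration: every leaf module in the backbone,
# minus inference-time no-ops such as Dropout (see the criteria below).
candidates = [
    (name, module)
    for name, module in backbone.named_modules()
    if len(list(module.children())) == 0
    and not isinstance(module, (nn.Dropout, nn.BatchNorm2d))
]
print(f"{len(candidates)} candidate layers")
\end{verbatim}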
However, since we launch a search for a cache model per candidate layer in the next step, we suggest narrowing the list by filtering out some layers according to the following criteria: \begin{itemize} \item Some layers are disabled at inference time, such as dropout and batch normalization. These layers do not modify their input values at inference time. Therefore, we cross them off the candidate list. \item The last few layers in the model (close to the output layer, such as \texttt{L15} in Figure \ref{fig:procedure}) might not be valuable candidates for caching, since the remaining layers might not have heavy computations to reach the output. \item DNN models are usually composed of multiple components (i.e.\ first-level modules) consisting of multiple layers, such as the residual blocks in ResNet models \citep{He2016DeepRL}. We narrow down the search space to the output layers of those components. \item We only consider layers whose activation values uniquely determine the backbone's output, without any other layer's state involved (i.e., the backbone's output is a function of the layer's output). In other words, a layer with other layers or connections in parallel (such as \texttt{L7-L11} and \texttt{L13} in Figure \ref{fig:procedure}) is not suitable for caching, since the backbone's output does not solely depend on that layer's output. \end{itemize} Having the initial set of candidate layers, we next build and associate a cache model with each one. \subsection{Building cache models}\label{subsec:cache-models} Building a cache model to be associated with an intermediate layer in the backbone consists of finding a suitable architecture for the cache model and training the model with that architecture. The details of the architecture search (search space, search method, and evaluation method) and the training procedure (training data extraction and the loss function) are discussed in the following two subsections. \subsubsection{Cache models architecture} A cache model can have an architecture of any size in depth and breadth, as long as it provides more computational improvement than its overhead. In other words, it must have substantially less complexity (i.e., number of parameters and connections) than the layers in the backbone that come after the corresponding intermediate layer. The search space for such models would contain architectures with different numbers and types of layers (e.g.,\ a stack of dense and/or convolution layers). Nevertheless, all the models in the search space must output a PD identical to the backbone's output in terms of size (i.e., the number of classes) and activation (e.g.,\ SoftMax or LogSoftMax). In our experiments, the search space consists of architectures with a stack of (up to 2) convolution layers followed by another stack of (up to 2) linear layers, with multiple choices of kernel and stride sizes for the convolutions and neuron counts for the linear layers. However, users can modify or expand the search space according to their specific needs and budget. The objective of the search is to find a minimal architecture that converges and predicts the backbone's output with acceptable accuracy. Note that any accuracy given by a cache model (better than random) can be helpful, as we will have a proper selection mechanism later in the process to only use the cache predictions that are (most likely) correct, and also to discard the cache models yielding low computational improvement.
The user can conduct the search by empirically sampling the search space or by using an automated Neural Architecture Search (NAS) tool such as Auto-Keras \citep{Jin2019AutoKerasAE}, Auto-PyTorch \citep{zimmer2021auto}, Neural Network Intelligence (NNI) \citep{nni2021}, or NASLib \citep{naslib-2020}. We used NNI to conduct the search and customized the evaluation process to account for both the models' accuracy and their computational complexity. We used the floating-point operations (FLOPs) count as the estimate of the models' computational complexity at this stage. Several factors influence a cache model's architecture for a given intermediate layer. These factors include the target intermediate layer's dimensions, its position in the backbone, and the dataset specifications, such as its number of target classes. For instance, the first cache models in the CIFAR100-Resnet50 and CIFAR10-Resnet18 experiments (shown as cache1 in Figure \ref{fig:extended-schemas}) have the same input size, but since CIFAR100 has more target classes, it reasonably requires a cache model with more learning capacity. Therefore, using NAS to design the cache models helps automate the process and alleviate the need for deep learning expert supervision in designing the cache models. Regardless of the search method, evaluating a nominated architecture requires training a model with the given architecture; we discuss this procedure in the next section. Moreover, since the search space is limited in depth, it is possible that for some intermediate layers, none of the cache models converge (i.e., the models provide nearly random results). In such cases, we discard the candidate layer as non-suitable for caching. \subsubsection{Training a cache model} \label{subsec:training} Figure~\ref{fig:procedure} illustrates a cache-enabled model's schema consisting of the backbone (the dashed box) and the associated cache models. A cache model's objective is to predict the output of the backbone model, given the corresponding intermediate layer's output, per input sample. Similar to the backbone, cache models are classification models. However, their inputs are the activation values of the intermediate layers. As suggested in self-distillation \citep{self-distillation}, training a cache model is essentially similar to distilling the knowledge from the backbone (final classifier) into the cache model. Therefore, to distill the knowledge from the backbone into the cache models, we need a medial dataset (MD) based on the collected inference data (ID). The medial dataset for training a cache model associated with an intermediate layer \texttt{L} in the backbone \texttt{B} consists of the set of activation values in the layer \texttt{L} and the PDs given by \texttt{B} per sample in the given ID, formally annotated as below: \begin{equation} \label{eqn:InputLabelPairs1} MD_L = \left\{ \langle B_L(i),\, B(i) \rangle \,:\, i \in ID \right\} \end{equation} where: \noindent \details{\texttt{MD\textsubscript{L}}}{Medial dataset for the cache model associated with the layer \texttt{L}} \details{\texttt{ID}}{The collected inference data consisting of unlabelled samples} \details{\texttt{B\textsubscript{L}(i)}}{Activation values in layer \texttt{L} given the sample \texttt{i} to the backbone \texttt{B}} \details{\texttt{B(i)}}{The backbone's PD output for the sample \texttt{i}} Note that the labels in MDs are the backbone's outputs and not the GT labels, as we assume the GT labels to be unavailable.
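As an illustration of how such a medial dataset can be collected in practice, the PyTorch sketch below records a candidate layer's activations together with the backbone's output PDs using a forward hook; the model, layer, and data loader are placeholders, and the sketch is a simplified view of the idea rather than our tool's exact implementation.

\begin{verbatim}
import torch

@torch.no_grad()
def build_medial_dataset(backbone, layer, loader):
    """Collect the <B_L(i), B(i)> pairs of Equation (eqn:InputLabelPairs1)."""
    backbone.eval()
    acts, pds = [], []

    # The forward hook records the candidate layer's activation values
    # for every batch that passes through the backbone.
    handle = layer.register_forward_hook(
        lambda module, inp, out: acts.append(out.detach().cpu()))

    for x in loader:                             # unlabelled inference data (ID)
        pd = torch.softmax(backbone(x), dim=1)   # the backbone's PD output B(i)
        pds.append(pd.cpu())

    handle.remove()
    return torch.cat(acts), torch.cat(pds)       # inputs and soft labels for MD_L
\end{verbatim}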
We split the MD\textsubscript{L} into three splits ($MD_L^{Train}$, $MD_L^{Val}$, $MD_L^{Test}$) and use them in line with common deep learning training and testing practices. Similar to the distillation method \citep{Hinton2015DistillingTK}, we use the Kullback--Leibler divergence (KLDiv) \citep{Joyce2011} loss function in the training procedure. KLDiv measures how different two given PDs are. Thus, minimizing the KLDiv loss value over $MD_L^{Train}$ trains the cache model to estimate the prediction of the backbone ($B(i)$). Unlike self-distillation \citep{self-distillation}, where the backbone and the shallow classifiers are trained simultaneously, in our method it is crucial, while training a cache model, to freeze the rest of the model, including the backbone and the other cache models (if any) in the collection, to ensure the training process does not modify any parameter not belonging to the current cache model. \subsection{Assigning confidence threshold} The probability value associated with the predicted class (the one with the highest probability) is known as the model's confidence in the prediction. The cache model's prediction confidence for a particular input indicates whether we stick with that prediction (cache hit) or proceed with the rest of the backbone to the next --- or possibly final --- exit (cache miss). Confidence calibration means enhancing the model to provide an accurate confidence; in other words, a well-calibrated model's confidence accurately represents the likelihood of that prediction being correct \citep{pmlr-v70-guo17a}. An over-confident cache model will lead the model to prematurely exit for some samples based on incorrect predictions, while an under-confident cache model will yield a low cache hit rate. Therefore, after building a cache model, we also calibrate its confidence using $MD_L^{Val}$ to better distinguish the predictions that are more likely to be correct. Several confidence calibration methods are discussed in \citep{pmlr-v70-guo17a}, among which temperature scaling (in the output layer) has been shown to be practical and easy to implement. Having calibrated the model, we next assign a confidence threshold value to the model, which will be used at inference time to determine the cache hits and misses. When a cache model identifies a cache hit, its prediction is considered to be the final prediction. However, when needed for validation and test purposes, we obtain the predictions from both the cache model and the backbone. \begin{table}[h] \begin{center} \begin{minipage}{\columnwidth} \centering \caption{Cache prediction confusion matrix, C: Cache's predicted class, B: Backbone's predicted class, GT: Ground Truth label}\label{table:confusion}% \begin{tabular}{@{}c|ccc@{}} \toprule Category & B = C & B = GT & C = GT \\ \midrule \textbf{$BC$} & \checkmark & \checkmark & \checkmark \\ $\overline{BC}$ & \checkmark & X & X \\ $B\overline{C}$ & X & \checkmark & X \\ $\overline{B}C$ & X & X & \checkmark \\ $\overline{B}\ \overline{C}$ & X & X & X \\ \end{tabular} \end{minipage} \end{center} \end{table} A cache model's prediction (C) for an input to the backbone falls into one of the 5 correctness categories listed in Table \ref{table:confusion} with respect to the ground truth labels (GT) and the backbone's prediction (B) for the input. Among the cases where the cache model and the backbone disagree, the $B\overline{C}$ predictions negatively affect the final accuracy, while the $\overline{B}C$ predictions positively affect it.
Equation~\ref{eqn:accuracy_change} formulates a cache model's actual effect on the final accuracy. \begin{equation} \label{eqn:accuracy_change} F_\Delta(\theta) = \overline{B}C_\Delta(\theta) - B\overline{C}_\Delta(\theta) \end{equation} where: \noindent \details{$\Delta$}{The cache model} \details{$F_\Delta$}{The actual accuracy effect $\Delta$ causes given $\theta$ as threshold} \details{$B\overline{C}_\Delta$}{Ratio of $B\overline{C}$ predictions by $\Delta$ given $\theta$ as threshold} \details{$\overline{B}C_\Delta$}{Ratio of $\overline{B}C$ predictions by $\Delta$ given $\theta$ as threshold} However, since we use the unlabelled inference data to form the MDs, we can only estimate an upper bound on the cache model's effect on the final accuracy. The estimation assumes that an incorrect cache prediction always leads to an incorrect classification for the sample ($\overline{B}C$). We estimate the upper bound on the accuracy change a cache model causes, given a certain confidence threshold, by its hit rate and cache accuracy: \begin{equation} \label{eqn:accuracy_change_ub} F_\Delta(\theta) \le HR_\Delta(\theta) \times (1- CA_\Delta(\theta)) \end{equation} where \noindent \details{$\Delta$}{The cache model} \details{$F_\Delta$}{The expected accuracy drop $\Delta$ causes given $\theta$ as threshold} \details{$HR_\Delta$}{Hit rate provided by $\Delta$ given $\theta$ as threshold} \details{$CA_\Delta$}{Cache accuracy provided by $\Delta$ given $\theta$ as threshold} Given the tolerance $T$ for the drop in final accuracy, we assign to each cache model a confidence threshold that yields no more than a $T/2^n\%$ expected accuracy drop on $MD_L^{Val}$ according to Equation \ref{eqn:accuracy_change_ub}, where \texttt{n} is the 1-based index of the cache model in the setup. It is important to note that there are alternative methods to distribute the accuracy drop budget among the cache models. For instance, one could distribute the budget equally. However, as we show in the evaluations later in section \ref{subsec:rq1}, we find it reasonable to assign more budget to the cache models at shallower positions in the backbone. \subsection{Evaluation and optimization of the cache-enabled model}\label{subsec:positions} So far, we have a set of cached layers and their corresponding cache models ready for deployment. Algorithm \ref{alg:prediction} demonstrates a Python-style pseudo-implementation of the cache-enabled model's inference process. When the cache-enabled model receives a batch of samples, it proceeds layer-by-layer, similar to the standard forward-pass. Once a cached layer's activation values are available, they are passed to the corresponding cache model, which yields an early prediction with a confidence value per sample in the batch. For each sample, if the corresponding confidence value exceeds the specified threshold, we consider it a cache hit. Hence, we have the final prediction for the sample without passing it through the rest of the backbone. At this point, the prediction can be sent to the procedure awaiting the results (e.g.\ an API, a socket connection, a callback). We shrink the batch by discarding the cache-hit items at each exit and proceed with a smaller batch to the next (or the final) exit.
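As a runnable counterpart to Algorithm~\ref{alg:prediction} below, the following PyTorch-style sketch illustrates the confidence-gated early exits and the mask-based batch shrinking; the layer and cache-model objects are placeholders rather than our tool's actual classes.

\begin{verbatim}
import torch

@torch.no_grad()
def forward_pass(backbone_layers, x, callback):
    # backbone_layers: ordered list of layers; a cached layer carries
    # `cache_model` and `threshold` attributes (hypothetical names).
    for layer in backbone_layers:
        x = layer(x)
        cache = getattr(layer, "cache_model", None)
        if cache is not None:
            pds = torch.softmax(cache(x), dim=1)
            conf, _ = pds.max(dim=1)
            hit = conf >= layer.threshold
            if hit.any():
                callback(pds[hit])      # resolve cache hits early
            x = x[~hit]                 # shrink the batch to the misses
            if x.shape[0] == 0:
                return                  # every sample exited early
    callback(torch.softmax(x, dim=1))   # final classifier output
\end{verbatim}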
\begin{algorithm} \caption{Cache-enabled model inference}\label{alg:prediction} \begin{algorithmic}[1] \Require Backbone \Comment{The original model} \Require CachedLayers \Comment{List of cached layers} \Require Layer \Comment{As part of Backbone, including the associated cache model and threshold} \Procedure{ForwardPass}{X, callback} \Comment{X: Input batch} \For{\texttt{Layer in Backbone.Layers}} \Comment{In order of presence}\footnotemark \State X $\gets$ Layer(X) \newline \If{Layer in CachedLayers} \State Cache $\gets$ Layer.CacheModel \State T $\gets$ Cache.Threshold \State cachedPDs $\gets$ Cache(X) \State confidences $\gets$ max(cachedPDs, axis=1) \State callback(cachedPDs[confidences$\geq$ T]) \Comment{Resolve cache hits} \State X $\gets$ X[confidences$<$T] \Comment{Shrink the batch} \EndIf \EndFor \EndProcedure \end{algorithmic} \end{algorithm} \footnotetext{The loop shows that each cache model receives the cached layer's activation values immediately when they become available, before proceeding to the next layer in the base model.} So far in the method, we have only evaluated the cache models individually, but to gain the highest improvement, we must also evaluate their collaborative performance within the cache-enabled model. Once the cache-enabled model is deployed, each cache model affects the following cache models' hit rates by narrowing the set of samples for which they will infer. More specifically, even if a cache model shows a promising hit rate and accuracy in the individual evaluation, its performance in deployment can be degraded by the cache hits made by the earlier cache models (connected to shallower layers in the backbone). Therefore, we need to choose the optimum subset of cache models that infers the predictions with the minimum computations. A brute-force approach to finding the optimum subset would require evaluating the cache-enabled model with each subset of the cache models. Instead, we implement a more efficient method that does not require multiple executions of the cache-enabled model. First, for each cache model, we record its prediction and confidence value per sample in $MD_L^{Val}$. We also record two FLOPs counts per cache model: one is the cache model's own FLOPs count (C\textsubscript{1}), and the other is the fallback FLOPs count, which denotes the FLOPs in the remaining layers of the backbone (C\textsubscript{2}). For example, for the layer \texttt{L12} in Figure \ref{fig:procedure}, C\textsubscript{1} is the corresponding cache model's FLOPs count, and C\textsubscript{2} is the FLOPs count in the layers \texttt{L13} through \texttt{L16}. For each subset $S$, we process the lists of predictions recorded for each model in $S$ to generate the lists of samples they would actually receive when deployed along with the other cache models in $S$. The processing consists of keeping only the samples in each list for which there has been no cache hit by the previous cache models in the subset. Further, we divide each list into two parts according to each cache model's confidence threshold: one consisting of the cache hits, and the other consisting of the cache misses.
Finally, we score each subset using the processed lists and recorded values for each cache model in $S$ as follows: \begin{equation} \label{eqn:cache_score} K(S) = \sum_{\Delta\in S} \piped{H_\Delta} \times (C_{2, \Delta} - C_{1, \Delta}) - \piped{M_\Delta} \times C_{1, \Delta} \end{equation} where \noindent \details{$K$}{The caching score for subset $S$} \details{$\Delta$}{A cache model in $S$} \details{$H_\Delta$}{The generated list of cache hits for $\Delta$} \details{$M_\Delta$}{The generated list of cache misses for $\Delta$} \details{$C_{1, \Delta}$}{FLOPs count recorded for $\Delta$} \details{$C_{2, \Delta}$}{Fallback FLOPs count recorded for $\Delta$} The score equation accounts for both the improvement a cache model provides through its cache hits within the subset and the overhead it produces for its cache misses. The final schemas after applying the method to MobileFaceNet, EfficientNet, ResNet18, and ResNet50 are illustrated in Figure \ref{fig:extended-schemas}. The figure demonstrates the chosen subsets and their associated cache models per backbone and dataset. \begin{figure}[!htbp] \centering \includegraphics[width=\textwidth]{extended-schemas.eps} \caption{Final schema of the cache models for the experiments CIFAR10-Resnet18, CIFAR100-Resnet50, LFW-EfficientNet, and LFW-MobileFaceNet }\label{fig:extended-schemas} \end{figure} \subsection{Updates and maintenance}\label{subsec:maintenance} Similar to conventional caching, layer caching also requires recurring updates to the cache space to adapt to the trend in the inference data. However, unlike conventional caching, we cannot update the cache models in real time. Therefore, to update the cache models using the extended set of collected inference samples, we retrain them and re-adjust their confidence thresholds. The retraining adapts the cache models to the trend in the incoming queries and maintains their cache accuracy. We consider two triggers for the updates: \begin{inparaenum}[I)] \item when the size of the recently collected data reaches a threshold (e.g.\ 20\% of the collected samples are new) and \item when the backbone is modified or retrained. \end{inparaenum} However, users should adapt the recommended triggers to their requirements and budget. \section{Empirical Evaluation}\label{sec:empirical-evaluation} In this section, we explain our experiment's objective, research questions, the tool implementation, and the experiment design, including the backbones and datasets, evaluation metrics, and the environment configuration. \subsection{Objectives and research questions} The high-level objective of this experiment is to assess the ability of the automated layer caching mechanism to improve the compute requirements and inference time of DNN-based services. To address the above objective, we designed the following research questions (RQ): \begin{itemize} \item [RQ1] To what extent can the cache models accurately predict the backbone's output and the ground truth data?\\ This RQ investigates the core idea of caching as a mechanism to estimate the final outputs earlier in the model. The assessments in this RQ consider the cache models' accuracy in predicting the backbone's output (cache accuracy) and predicting the correct labels (GT accuracy). \item [RQ2] To what extent can cache-enabling improve compute requirements?\\ In this RQ, we are interested in how cache-enabling affects the models' computation requirements.
We measure the FLOPs counts and memory usage as the metrics for the models' compute consumption. \item [RQ3] How much acceleration does cache-enabling provide on CPU/GPU?\\ In this RQ, we are interested in the actual end-to-end speed-up that a cache-enabled model can achieve. We break this result down into CPU and GPU accelerations, since the two handle different types of computation during the inference phase and may thus be affected differently. \end{itemize} \subsection{Tasks and datasets}\label{subsec:datasets} Among the diverse set of real-world classification tasks implemented by solutions utilizing DNN models, we selected two representatives: face recognition and object classification. Both tasks are commonly addressed by DNNs and are often part of large-scale services with non-functional requirements such as high throughput (due to the nature of the service and the large volume of input data) and time sensitivity. Face recognition models are originally trained on large datasets such as MS-Celeb-1M \citep{Guo2016MSCeleb1MAD} and are usually tested with different --- and smaller --- datasets such as LFW \citep{Huang2008LabeledFI}, CPLFW \citep{Zheng2017CrossAgeLA}, RFW \citep{Wang2019RacialFI}, AgeDB30 \citep{Moschoglou2017AgeDBTF}, and MegaFace \citep{KemelmacherShlizerman2016TheMB}, which test the models against specific challenges such as age/ethnicity biases and recognizing mask-covered faces. We used the Labeled Faces in the Wild (LFW) dataset for face recognition, which contains 13,233 images of 5,749 people. We used the images of the 127 identities with at least 11 images each, so that we could split them for training, validation, and testing. We also used the CIFAR10 and CIFAR100 test sets \citep{Krizhevsky2009LearningML} for object classification, each containing 10,000 images distributed equally among 10 and 100 classes, respectively. Recall that we do not use the original training data; we only use the test sets to simulate incoming queries at run time. Specifically, we use only the test splits of the CIFAR datasets, but the whole LFW dataset, as the latter was not used to train the face recognition models. Moreover, we do not use the labels of these test sets in the training and optimization process; rather, we only use them in the evaluation step to provide GT accuracy statistics. Each dataset mentioned above represents an inference workload for the models. Thus, we split each one into training, validation, and test partitions with 50\%, 20\%, and 30\% proportions, respectively. We augmented the test sets using flips and rotations to improve the statistical significance of our testing measurements. \subsection{Backbones}\label{subsec:backbones} The proposed cache-enabling method is applicable to any deep classifier. However, the results will vary across models depending on their complexity. Among the available face recognition models, we chose the well-known MobileFaceNet and EfficientNet models to evaluate the method, and we experiment with ResNet18 and ResNet50 for object classification. The object classification models are typical classifiers out of the box. The face recognition models, however, are feature extractors that provide an embedding vector for each image based on face/landmark features. They can nevertheless be used to classify a face-identity dataset.
Therefore, we attached a classifier block to those models and trained it (with the feature extractor layers frozen) to classify the images of the 127 identities with the highest number of images in the LFW dataset (above 10). It is important to note that since the added classifier block is part of the pre-trained model under study, we discarded the data portion used to train the classifier block, ensuring we still honor the constraint of working with pre-trained models without access to the original training dataset. \subsection{Metrics and measurements}\label{sub:measurement} Our evaluation metrics for RQ1 are ground truth (GT) accuracy and cache accuracy. Cache accuracy measures how accurately a cache model predicts the backbone's outputs (regardless of whether those outputs are correct). GT accuracy applies to both the cache-enabled model and each individual cache model, whereas cache accuracy applies only to the cache models. In RQ2, we compare the original models and their cache-enabled versions in terms of the average FLOPs count per inference and their memory usage. We only measure the resources used in inference. Specifically, we exclude the training-specific layers (e.g., Batch Normalization and Dropout) and computations (e.g., gradient operations) from the analysis. The FLOPs count takes the model architecture and the input size into account and estimates the computations required by the model to infer for a given input \citep{Desislavov2021ComputeAE}. In other words, the fewer FLOPs used for inference, the more efficient the model is in terms of compute and energy consumption. We report two aspects of memory usage for the models. The first is the total space used to load the models into memory (i.e.,\ the model size). This metric is essentially agnostic to the performance of the cache models and only considers the memory cost of loading them along with the backbone. In addition to the memory required for their weights, DNNs also allocate a sizeable amount of temporary memory for buffers (also referred to as tensors) that hold intermediate results produced during the evaluation of the DNN's layers \citep{Levental2022MemoryPF}. Therefore, our second metric is the live tensor memory allocation (LTMA) during inference. LTMA measures the total memory allocated to load, move, and transform the input tensor through the model's layers to form the output tensor while executing the model. In RQ3, we compare the average inference latency of the original model and its cache-enabled counterpart. Inference latency measures the time from passing the input to the model until a prediction exits the model (through either an early exit or the final classifier in the backbone). Various factors affect the inference latency, including hardware-specific optimizations (e.g., asynchronous computation), the framework, and the model implementation. In our measurements, the framework and model implementations are fixed, as discussed in Section \ref{subsec:implementation}. However, to account for other factors, we repeat each measurement 100 times and report the average inference latency recorded for each experiment. Further, to account for the effects of asynchronous computation on GPU inference latency, we repeated the experiments with different batch sizes. \subsection{Implementation}\label{subsec:implementation} We developed the caching tool using PyTorch; it is accessible through the GitHub repository\footnote{https://github.com/aminabedi/Automated-Layer-Caching}.
Figure \ref{fig:framework} shows the overall system design. The tool provides a NAS module, an optimizer module, and a deployment module. The NAS module provides the architecture to be used for each cache model. The optimizer assigns the confidence thresholds, finds the best subset of the cache models, and provides evaluation reports. Lastly, the deployment module launches a web server with the cache-enabled model ready to serve queries. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{framework.eps} \caption{Caching system overall framework}\label{fig:framework} \end{figure} \subsubsection{NAS Module} Existing NAS tools typically define different search spaces for different tasks, which constrains their applicability to certain input types and sizes. Using such tools with input constraints defeats our method's generalization and automation purpose, since the cache models' inputs can have any dimension and size. For instance, ProxylessNAS \citep{Cai2019ProxylessNASDN} specializes in optimizing neural architecture performance for a target hardware. However, it is only applicable to image classification tasks and requires certain input specifications (e.g.,\ 3xHxW images normalized using given values). Similarly, Auto-PyTorch \citep{zimmer2021auto} and Auto-Keras are only applicable to tabular, text, and image datasets. We chose NNI by Microsoft \citep{nni2021}, as it does not constrain the model inputs in terms of type, size, or dimensions. NNI also provides an extensible search space definition with support for a variable number of layers and nested choices (e.g., choosing among different layer types, each with different layer-specific parameters). Given the backbone implementation, the dataset, and the search space, the module launches an NNI experiment per candidate layer to search for an optimum cache model for that layer. Each experiment launches a web GUI for progress reports and results. We aim for end-to-end automation in the tool. Currently, however, the user still needs to manually export the architecture specifications produced by the NAS module and convert them to a proper Python implementation (i.e., a PyTorch module implementing the architecture). The specifications are available to the user through the experiments' web GUI and in the trial output files. This shortcoming is due to the NNI implementation, which does not currently provide access to the model objects within the experiments. We have created an enhancement suggestion on the NNI repository to support access to the model objects (issue \#4910). \subsubsection{Optimizer and deployment modules} Given the backbone's implementation and the cache models, the optimizer evaluates the cache models, assigns their confidence thresholds, finds the best subset of the cache models and disables the rest, and finally reports the relevant performance metrics for the cache-enabled model and each cache model. We used Microsoft's DeepSpeed and the PyTorch profiler to profile the FLOPs counts, memory usage, and latency values for the cache models and the backbones. The user can use each module independently. Specifically, the user can skip the architecture search via the NAS module and provide the architectures manually to the optimizer, in which case the module trains them before proceeding to the evaluation. The tool also offers an extensive set of configurations.
More specifically, the user can configure the tool to use one device (e.g., the GPU) for training processes and another (e.g., the CPU) for evaluation and deployment. The deployment module launches a web server and exposes a WebSocket API to the cache-enabled model. Query batches passed to the socket receive one response per item, as soon as the prediction becomes available through either of the (early or final) exits. \subsubsection{Backbone Implementation} We used the backbone implementations and weights provided by the FaceX-Zoo \citep{Wang2021FaceXZooAP} repository to conduct the experiments with the LFW dataset on the MobileFaceNet and EfficientNet models. For the experiments with CIFAR10 and CIFAR100, we used the implementations provided by torchvision \citep{Marcel2010TorchvisionTM} and the weights provided by \citep{huy_phan_2021_4431043} and \citep{weiaicunzai2020}. All the backbone implementations were modified to implement an interface that handles the interactions with the cache models, controls the exits (cache hits and misses), and provides the relevant reports and metrics. We documented the interface usage in the repository, so users can experiment with new backbones and datasets. We refer interested readers to a blog post on extracting intermediate activations in PyTorch \citep{nanbhas2020forwardhook}, which introduces three methods to access the activation values. The forward-hooks method in PyTorch is very convenient for read-only purposes. However, our method requires performing actions based on the activation values: specifically, cache lookup, batch shrinking, and skipping further computation through the subsequent layers. Therefore, we used the so-called ``hacker'' method to access the activation values and perform these actions, and we provide the interface for easy replication on different backbones. \subsection{Environment setup} The hardware used for inference substantially affects the results due to hardware-specific optimizations such as computation parallelism. In our experiments, we used an ``Intel(R) Core(TM) i7-10700K CPU @ 3.80GHz'' to measure on-CPU inference times and an ``NVIDIA GeForce RTX 3070'' GPU to measure on-GPU inference times. \subsection{Experiment results}\label{sec:evaluation} In this subsection, we present the results of applying the method to the baseline backbones and discuss the answers to the RQs. \subsubsection{RQ1. How accurate are the cache models in predicting the backbone's output and the ground truth labels?}\label{subsec:rq1} In this RQ, we are interested in the cache models' performance in terms of hit rate, GT accuracy, and cache accuracy. We break the measurements down into two parts. The first part covers the cache models' individual performance over the whole test set, without any other cache model involved. The second part covers their collaborative performance within the cache-enabled model. \subsubsection{Cache models' individual performance}\label{subsubsec:individual-performance} \begin{figure}[!htbp] \centering \includegraphics[width=\columnwidth]{Cifar100-Resnet50} \caption{Individual accuracy and hit rate of the cache models vs. confidence threshold per cache model in the CIFAR100-Resnet50 experiment}\label{fig:acc-conf-cifar100-resnet50} \end{figure} Figure \ref{fig:acc-conf-cifar100-resnet50} portrays each cache model's individual performance across confidence threshold values in the CIFAR100-Resnet50 experiment.
The figures showing the same measurements for the other experiments are available in Appendix \ref{appendice1}. We make three key observations here. First, deeper cache models are more confident and accurate in their predictions. For instance, cache 1 in Figure \ref{fig:acc-conf-cifar100-resnet50} has 33.36\% GT accuracy and 35.74\% cache accuracy, while these metrics increase to 78.60\% and 95.38\% for cache 3, respectively. This observation agrees with the generally acknowledged feature extraction pattern in DNNs --- deeper layers convey more detailed information. The second key observation is the inverse correlation between the cache models' accuracy (both GT and cache) and their hit rates. This observation highlights the reliability of confidence thresholds in distinguishing the predictions that are more likely to be correct. For instance, cache 1 in Figure \ref{fig:acc-conf-cifar100-resnet50}, with a 20\% confidence threshold, yields a 35.24\% hit rate but also an 8.99\% drop in the final accuracy. With a 60\% confidence threshold, it yields a 4\% hit rate and does not reduce the final accuracy by more than 0.1\%. The third observation is that the cache accuracy is higher than the GT accuracy in all cases. This difference arises because we have trained the cache models to mimic the backbone only by observing its activation values in the intermediate layers and its outputs. Since we have not assumed access to the GT labels (which is the case for inference data collected at run-time) while training the cache models, they have learned to make correct predictions only by predicting the backbone's output, which might itself be incorrect. On the other hand, we observed that the cache models predict the correct labels for a portion of the samples that the backbone misclassifies. For instance, for 0.92\% of the samples, cache 3 (in Figure \ref{fig:acc-conf-cifar100-resnet50}) correctly predicted the GT labels while the backbone failed ($\overline{B}C$ predictions). This shows the cache models' potential to partially compensate for their incorrect caches ($B\overline{C}$ predictions) by correcting the backbone's predictions for some samples ($\overline{B}C$). This indeed agrees with the overthinking concept in SDN (as discussed in Section \ref{subsec:distillation}), since for this set of samples, the cache models have been able to predict correctly from the shallower layers of the backbone. \subsubsection{Cache models' collaborative performance} \begin{table}[!htbp] \begin{center} \begin{minipage}{\columnwidth} \caption{Cache models' collaborative performance in terms of hit rate (HR), cache accuracy (A\textsubscript{cache}), GT accuracy (A\textsubscript{GT}), and their effect on the final accuracy ($\downarrow$A\textsubscript{effect}).
LFW: Labeled Faces in the Wild, MFN: MobileFaceNet, EFN: EfficientNet}\label{tab:collaboration}% \begin{tabular}{@{}ccccc|cccc@{}} \toprule \multirow{2}{*}{Data}&\multirow{2}{*}{Model}&\multicolumn{2}{c}{Final accuracy}&\multirow{2}{*}{Exit\#} & \multirow{2}{*}{HR} & \multirow{2}{*}{A\textsubscript{cache}} & \multirow{2}{*}{A\textsubscript{GT}} & \multirow{2}{*}{$\downarrow$ A\textsubscript{effect}}\\ & & Base & Cache-enabled & & & & & \\ \midrule \multirow{8}{*}{\rotatebox[origin=c]{90}{CIFAR10}} & \multirow{4}{*}{\rotatebox[origin=c]{90}{Resnet18}} & \multirow{4}{*}{88.71\%} & \multirow{4}{*}{86.49\%} & 1 & 67.21\% & 92.29\% & 88.91\% & 1.31\% \\ & & & & 2 & 10.33\% & 89.76\% & 76.63\% & 0.56\% \\ & & & & 3 & 11.24\% & 85.71\% & 51.43\% & 0.25\%\\ & & & & 4 & 8.32\% & 91.37\% & 35.71\% & 0.1\%\\ & \multirow{4}{*}{\rotatebox[origin=c]{90}{Resnet50}} & \multirow{4}{*}{87.92\%} & \multirow{4}{*}{85.88\%} & 1 & 61.41\% & 89.12\% & 86.19\% & 1.12\%\\ & & & & 2 & 15.73\% & 93.01\% & 77.84\% & 0.58\%\\ & & & & 3 & 10.29\% & 82.22\% & 53.33\% & 0.3\%\\ & & & & 4 & 6.1\% & 97.47\% & 42.65\% & 0.04\%\\ \hline \multirow{8}{*}{\rotatebox[origin=c]{90}{CIFAR100}} & \multirow{4}{*}{\rotatebox[origin=c]{90}{Resnet18}} & \multirow{4}{*}{75.92\%} & \multirow{4}{*}{74.47\%} & 1 & 11.96\% & 99.29\% & 82.11\% & 0.94\% \\ & & & & 2 & 58.26\% & 99.62\% & 85.41\% & 0.1\% \\ & & & & 3 & 7.26\% & 93.81\% & 59.29\% & 0.3\%\\ & & & & 4 & 5.36\% & 55.56\% & 38.89\% & 0.11\%\\ & \multirow{4}{*}{\rotatebox[origin=c]{90}{Resnet50}} & \multirow{4}{*}{78.98\%} & \multirow{4}{*}{77.04\%} & 1 & 11.92\% & 76.34\% & 80.2\% & 1.32\%\\ & & & & 2 & 61.98\% & 98.56\% & 84.55\% & 0.34\%\\ & & & & 3 & 11.5\% & 97.85\% & 63.69\% & 0.27\%\\ & & & & 4 & 7.38\% & 73.68\% & 52.63\% & 0.1\%\\ \hline \multirow{5}{*}{\rotatebox[origin=c]{90}{LFW}} & \multirow{3}{*}{\rotatebox[origin=c]{90}{MFN}} & \multirow{3}{*}{97.78\%} & \multirow{3}{*}{96.91\%} & 1 & 37.35\% & 98.63\% & 97.88\% & 0.51\% \\ & & & & 2 & 41.02\% & 99.71\% & 99.71\% & 0\% \\ & & & & 3 & 55.95\% & 93.44\% & 96.18\% & 0.24\% \\ & \multirow{2}{*}{\rotatebox[origin=c]{90}{EFN}} & \multirow{2}{*}{97.29\%} & \multirow{2}{*}{95.35\%} & 1 & 63.73\% & 96.82\% & 96.24\% & 1.67\%\\ & & & & 2 & 14.52\% & 99.12\% & 98.76\% & 0.02\%\\ \hline \end{tabular} \end{minipage} \end{center} \end{table} Table \ref{tab:collaboration} describes the cache models' collaborative performance within the cache-enabled model per experiment. In the table, we also report how each cache model's cache hits have affected the final accuracy. Here, we observe that when the cache models are evaluated on the subset of samples missed by the previous cache models (the relatively more complex samples), the measured hit rate and GT accuracy are substantially lower than in the evaluation on the whole dataset. This is because the simpler samples (less detailed and easier to classify) are resolved earlier in the model. More specifically, the hit rate decreases since the cache models are less confident in their predictions for the more complex samples, and the GT accuracy also decreases since the backbone itself is less accurate on such samples. However, we observe that the cache models still have high cache accuracy with low impact on the overall accuracy. This observation shows how the confidence-based caching method effectively enables the cache models to provide early predictions while keeping the overall accuracy drop within the given tolerance.
\subsubsection{RQ2. To what extent can cache-enabling improve compute requirements?}\label{subsec:rq2} In this RQ, we showcase the amount of computation caching can save in terms of FLOPs count and analyze the memory usage of the models. \begin{table}[!htbp] \begin{center} \begin{minipage}{\columnwidth} \centering \caption{Original and cache-enabled models' FLOPs (M: Mega, $10^6$)}\label{table:flops} \begin{tabular}{@{}ll|ccc@{}} \toprule \multirow{2}{*}{Dataset (input size)} & \multirow{2}{*}{Model} & \multicolumn{2}{c}{FLOPs} & \multirow{2}{*}{$\downarrow$ Ratio} \\ & & Original & Cache-enabled & \\ \midrule \multirow{2}{*}{CIFAR10 ($3\times32\times32$)} & Resnet18 & 765M & 414M & 45.88\%\\ & Resnet50 & 1303M & 601M & 53.87\%\\ \hline \multirow{2}{*}{CIFAR100 ($3\times32\times32$)} & Resnet18 & 766M & 374M & 51.17\%\\ & Resnet50 & 1304M & 547M & 58.05\%\\ \hline \multirow{2}{*}{LFW ($3\times112\times112$)} & MobileFaceNet & 474M & 296M & 37.55\% \\ & EfficientNet & 272M & 182M & 33.08\% \\ \end{tabular} \end{minipage} \end{center} \end{table} Table \ref{table:flops} shows the average number of FLOPs computed for inference per sample. Here we observe that shrinking the batches proportionally decreases the FLOPs count required for inference. \begin{table}[!htbp] \begin{center} \begin{minipage}{\columnwidth} \centering \caption{Original and cache-enabled models' memory usage}\label{table:memory} \begin{tabular}{@{}ll|cccc@{}} \toprule \multirow{3}{*}{Dataset (input size)} & \multirow{3}{*}{Model} & \multicolumn{2}{c}{Original} & \multicolumn{2}{c}{Cache-enabled} \\ & & Model Size & LTMA & Model Size & LTMA \\ \midrule \multirow{2}{*}{CIFAR10 ($3\times32\times32$)} & Resnet18 & 43MB & 102MB & 97MB & 88MB\\ & Resnet50 & 91MB & 235MB & 243MB & 201MB\\ \hline \multirow{2}{*}{CIFAR100 ($3\times32\times32$)} & Resnet18 & 43MB & 104MB & 383MB & 93MB\\ & Resnet50 & 91MB & 235MB & 552MB & 189MB\\ \hline \multirow{2}{*}{LFW ($3\times112\times112$)} & MobileFaceNet & 286MB & 567MB & 350MB & 515MB \\ & EfficientNet & 147MB & 371MB & 297MB & 349MB \\ \end{tabular} \end{minipage} \end{center} \end{table} Table \ref{table:memory} shows the memory used to load the models (i.e.,\ the model size) and the total LTMA while inferring over the test set. As expected, the cache-enabled models are larger than the original models in all cases, since they include the backbone plus the additional cache models. However, the decreased LTMA in all cases shows the reduced amount of memory allocated during inference. Generally, lower LTMA indicates smaller tensor dimensions (e.g.,\ batch size, input and operator dimensions) \citep{Ren2021SentinelET}. In our case, since we change neither of these dimensions, the lower LTMA results from avoiding the computations in the remaining layers after cache hits, which would require further memory allocations. Although the FLOPs count and memory usage indicate the model's computational requirements for inference, the decreased FLOPs and LTMA do not necessarily lead to a proportional reduction in inference latency, which we investigate in the next RQ. \subsubsection{RQ3. How much acceleration does cache-enabling provide on CPU/GPU?}\label{subsec:rq3} In this RQ, we investigate the end-to-end improvement that cache-enabling offers. The results of this measurement depend on multiple deployment factors, such as the underlying hardware and framework and, as we discuss later in this section, their asynchronous computation capabilities.
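Before turning to the numbers, the following minimal PyTorch sketch illustrates the measurement protocol of Section \ref{sub:measurement} (repeated runs, averaged); the \texttt{warmup} count and the explicit \texttt{torch.cuda.synchronize} calls are our illustrative choices for accounting for asynchronous GPU execution, not necessarily the tool's exact code.
\begin{verbatim}
import time
import torch

def avg_latency_ms(model, batch, device, repeats=100, warmup=10):
    """Average per-batch inference latency in milliseconds."""
    model = model.to(device).eval()
    batch = batch.to(device)
    with torch.no_grad():
        for _ in range(warmup):       # warm-up runs, excluded from timing
            model(batch)
        if device.type == "cuda":
            torch.cuda.synchronize()  # flush pending asynchronous kernels
        start = time.perf_counter()
        for _ in range(repeats):
            model(batch)
        if device.type == "cuda":
            torch.cuda.synchronize()  # wait for the last batch to finish
    return (time.perf_counter() - start) * 1000 / repeats
\end{verbatim}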
\begin{table}[h] \begin{center} \begin{minipage}{\columnwidth} \centering \caption{End-to-end evaluation of the cache-enabled models' improvement in average inference latency, batch size = 32. MFN: MobileFaceNet, EFN: EfficientNet}\label{table:latencies}% \begin{tabular}{@{}ll|cccccc@{}} \toprule \multirow{2}{*}{Dataset} &\multirow{2}{*}{Model} & \multicolumn{2}{c}{Original latency} & \multicolumn{2}{c}{Cache-enabled latency} & \multicolumn{2}{c}{$\downarrow$ Ratio}\\ & & CPU & GPU & CPU & GPU & CPU & GPU \\ \midrule \multirow{2}{*}{CIFAR10} &Resnet18 & 13.4 ms& 1.08 ms& 10.11 ms& 0.98 ms& 24.55\%& 10.2\% \\ & Resnet50 & 18.73 ms & 1.81 ms & 14.62 ms & 1.51 ms & 31.08\% & 16.57\% \\ \hline \multirow{2}{*}{CIFAR100} & Resnet18 & 14.23 ms& 1.39 ms& 9.39 ms& 1.25 ms& 34.01\%& 10.08\%\\ & Resnet50 & 19.59 ms & 2.05 ms & 9.02 ms & 1.84 ms & \textbf{46.08\%} & 16.75\% \\ \hline \multirow{2}{*}{LFW} & MFN & 25.34 ms & 8.22 ms & 16.91 ms & 7.30 ms & 33.23\% & 11.19\% \\ & EFN & 39.41 ms & 17.63 ms & 27.98 ms & 14.38 ms & 29.01\% & \textbf{18.44\%} \\ \end{tabular} \end{minipage} \end{center} \end{table} Table \ref{table:latencies} shows the average latency of the base models on CPU and GPU vs. their cache-enabled counterparts, evaluated on the test set. The first key observation is the improvement on CPU, which stems from the low parallelism of the CPU architecture: the computation volume on CPU is proportional to the number of samples. Therefore, when a sample takes an early exit, the remaining computation required to finish the batch decreases proportionally. The second observation is the relatively lower latency improvement on GPU. Shrinking a batch does not proportionally reduce the inference time on GPU because of the high parallelism of the hardware. Moreover, shrinking the batch on GPU introduces a certain overhead, since it interrupts the on-chip parallelism and hardware optimizations; this interruption forces the hardware to re-plan its computations, which can be time consuming. Thus, batch-shrinking improvements can be insignificant on GPU. \begin{table}[h] \begin{center} \begin{minipage}{\columnwidth} \centering \caption{Inference latency improvement on GPU vs. batch size in Resnet18 and Resnet50 trained on CIFAR100}\label{table:gpu-batch}% \begin{tabular}{@{}lc|ccc@{}} \toprule Model & Batch Size & Original Latency & Cache-enabled Latency & $\downarrow$ Ratio\\ \midrule \multirow{4}{*}{Resnet18} & 16 & 1.34 ms & 1.18 ms & 11.83\% \\ & 32 & 1.39 ms & 1.25 ms & 10.08\% \\ & 64 & 1.43 ms & 1.77 ms & -24.28\% \\ & 128 & 1.61 ms & 2.11 ms & -31.05\% \\ \hline \multirow{4}{*}{Resnet50} & 16 & 1.98 ms & 1.71 ms & 13.68\% \\ & 32 & 2.05 ms & 1.84 ms & 16.75\% \\ & 64 & 2.19 ms & 1.98 ms & 9.21\% \\ & 128 & 2.7 ms & 3.22 ms & -19.43\% \\ \end{tabular} \end{minipage} \end{center} \end{table} Table \ref{table:gpu-batch} further shows how the batch size affects the improvement provided by caching. The key observation is that increasing the batch size can negate the caching effect on the inference latency: with larger batches, fewer batches are fully resolved by the cache models, so most batches still reach the last layers. In conclusion, the latency improvement depends heavily on the hardware used for inference and must be analyzed specifically for each hardware environment and set of computation parameters, such as the batch size.
However, the method can still be useful when the model does not perform batched inference (batch size = 1). One can also disable batch shrinking and use the tool to obtain a best-prediction-so-far during the forward pass; doing so generates multiple predictions per input sample, one per (early or final) exit. \subsection{Limitations and future directions}\label{subsec:discussion} The first limitation of this study is that the proposed method is limited to classification models, since predicting a regression model's continuous output would be more complicated for the cache models. This limitation is strongly tied to the effectiveness of knowledge distillation in the case of regression models. The method also does not take the internal state of the backbone (if any) into account, such as the hidden states in recurrent neural networks. Therefore, the method's effectiveness for such models still needs to be assessed. Moreover, practitioners should take the underlying hardware and the backbone structure into account, as they directly affect the final performance. On this note, as shown in Section \ref{subsec:rq3}, different models provide different inference latencies in the first place; therefore, choosing the right model for the task comes first, and caching can then help improve its performance. \section{Conclusion}\label{sec:conclusion} In this paper, we have shown that our automated caching approach can extend a pre-trained classification DNN to a cache-enabled version using a relatively small and unlabelled dataset. The training dataset required for the cache models is collected simply by recording the input items and their corresponding backbone outputs at inference time. We have also shown that the caching method can significantly improve the model's computing requirements and inference latency, especially when inference is performed on CPU. We discussed the parameters, design choices, and procedure of cache-enabling a pre-trained off-the-shelf model, along with the required updates and maintenance. In conclusion, while traditional caching might not be beneficial for DNN models due to the diversity, size, and dimensions of the inputs, caching the features in the hidden layers of DNNs using cache models can significantly reduce the model's inference computational complexity and latency. As shown in Sections \ref{subsec:rq2} and \ref{subsec:rq3}, caching reduces the average inference FLOPs by up to 58\% and the latency by up to 46.08\% on CPU and 18.44\% on GPU. \backmatter \bmhead{Acknowledgments} The work of Pooyan Jamshidi has been partially supported by NSF (Awards 2007202, 2107463, and 2233873) and NASA (Award 80NSSC20K1720). \section*{Declarations} \textbf{Conflict of interest} There are no conflicts of interest. \begin{appendices} \section{Cache models' individual performance for all experiments}\label{appendice1} The following figures show the hit rate, GT accuracy, and cache accuracy of each cache model vs. the confidence threshold, per experiment dataset and backbone.
\begin{figure}[p] \centering \includegraphics[width=\columnwidth]{Cifar10-Resnet18} \caption{Experiment: CIFAR10-Resnet18} \end{figure} \begin{figure}[p] \centering \includegraphics[width=\columnwidth]{Cifar10-Resnet50} \caption{Experiment: CIFAR10-Resnet50} \end{figure} \begin{figure}[p] \centering \includegraphics[width=\columnwidth]{Cifar100-Resnet18} \caption{Experiment: CIFAR100-Resnet18} \end{figure} \begin{figure}[p] \centering \includegraphics[width=\columnwidth]{LFW-MobileFaceNet.eps} \caption{Experiment: LFW-MobileFaceNet} \end{figure} \begin{figure}[p] \centering \includegraphics[width=\columnwidth]{LFW-EfficientNet.eps} \caption{Experiment: LFW-EfficientNet} \end{figure} \end{appendices}
{ "timestamp": "2022-09-20T02:20:40", "yymm": "2209", "arxiv_id": "2209.08625", "language": "en", "url": "https://arxiv.org/abs/2209.08625" }
\section{Conclusion} {DPM}\xspace is a promising new technology for building KVSs. Prior {DPM}\xspace KVSs have had to sacrifice at least one of three desirable characteristics: high common-case performance, scalability, and lightweight reconfiguration. We present the {\textsc{Dinomo}}\xspace KVS, which uses a novel combination of techniques to achieve these properties simultaneously, as we demonstrate through empirical evaluation. \section{{\textsc{Dinomo}}\xspace} \label{sec:dinomo} \input{tbl-dinomo-summary} We now present {\textsc{Dinomo}}\xspace, a key-value store (KVS) for {DPM}\xspace. We first describe its API, target workloads, goals, and the guarantees it provides. Then, we explain how {\textsc{Dinomo}}\xspace achieves its goals (Table~\ref{tab:design-summary}). \vheading{API}. {\textsc{Dinomo}}\xspace allows applications to perform \texttt{insert(key, value)}, \texttt{update(key, value)}, \texttt{lookup(key)}, or \texttt{delete(key)} on variable-sized key-value pairs. We refer to the \texttt{lookup} operations as \emph{reads}, and the \texttt{insert}, \texttt{update}, and \texttt{delete} operations as \emph{writes}. \vheading{Target workloads}. {\textsc{Dinomo}}\xspace targets applications with dynamic working sets and sizes, and non-uniform workloads with varying skew~\cite{novakCLOUD20, quCSUR18, anna-vldb19}. Large variations in workloads require {DPM}\xspace KVSs to allow the elastic deployment of resources (e.g., {KNs}\xspace) in response to those dynamics~\cite{zhangvldb21,caoSIGMOD21}. \vheading{Goals}. {\textsc{Dinomo}}\xspace aims to achieve the following goals: \begin{itemize}[leftmargin=10pt] \item High common-case performance in the absence of failures or reconfiguration \item Scalability of performance as the number of {KNs}\xspace increases \item Lightweight online reconfiguration to effectively handle {KN}\xspace failures, bursty workloads, and load imbalance across available {KNs}\xspace \item Linearizable reads and writes \end{itemize} \vheading{Guarantees}. {\textsc{Dinomo}}\xspace guarantees that once committed, data will not be lost or corrupted regardless of {KN}\xspace failures. It also ensures data remains available if at least one {KN}\xspace and the {DPM}\xspace are available. \subsection{Architecture} \label{sec-arch} Figure~\ref{fig-high-level-arch} shows the high-level architecture of {\textsc{Dinomo}}\xspace. {\textsc{Dinomo}}\xspace consists of clients, {routing nodes}\xspace ({RNs}\xspace), {KVS nodes}\xspace ({KNs}\xspace), {DPM}\xspace, and a {monitoring/management node}\xspace ({M-node}\xspace). We describe these components and how a request flows between them. Applications and users interact with {\textsc{Dinomo}}\xspace through clients.
{RNs}\xspace are the client-facing tier that provides cluster membership and isolates clients from the internal changes of the KVS cluster. A client first contacts an {RN}\xspace to obtain cluster membership and caches the mapping of key ranges to the various {KNs}\xspace. The client then contacts the appropriate {KN}\xspace, which performs the read or write operation on its behalf. Each {KN}\xspace is equipped with general-purpose processors and a small amount of DRAM relative to the {DPM}\xspace capacity. The {KN}\xspace uses one-sided or two-sided RDMA primitives to access {DPM}\xspace over the interconnect~\cite{sebastian2020disappl}; note that the one-sided RDMA primitives can read or write data without involving the {DPM}\xspace processors. The {DPM}\xspace has a large shared PM pool and limited computational power relative to {KNs}\xspace~\cite{ma2020asymnvm,tsai2020disaggre,MingFAST22}. This asymmetry is deliberate: {KNs}\xspace are intended to run complex operations in the critical path, whereas the {DPM}\xspace is intended to execute lightweight tasks off the critical path, keeping the cost of provisioning {DPM}\xspace low. The {KN}\xspace caches the data it fetches from the {DPM}\xspace in its local DRAM and responds to the client request. The {M-node}\xspace observes {KN}\xspace statuses and workload characteristics to detect {KN}\xspace failures, load imbalance, or workload skew, and triggers a suitable reconfiguration. Note that we deploy the different functional components of {\textsc{Dinomo}}\xspace separately to enable us to scale them up and down independently as required. It is also possible to co-locate some components, at the expense of reducing the efficiency of policy decisions when scaling resources. \input{fig-high-level-arch} \vheading{Assumptions}. We assume all components in the {\textsc{Dinomo}}\xspace cluster are inter-connected through a reliable local network (either over TCP/IP or RDMA RC). The interconnect bandwidth between {KNs}\xspace and {DPM}\xspace is lower than the memory bandwidth of the PM itself, usually making the network the bottleneck~\cite{anujSOCC20, assise-osdi20}. {KN}\xspace failures are fail-stop and independent of {DPM}\xspace failures; when a {KN}\xspace fails, its DRAM contents are lost. The {DPM}\xspace has internal mechanisms or hardware support to ensure high availability~\cite{ma2020asymnvm, tsai2020disaggre,keetonOpenFAM19, lee2022hydra, zhouOSDI22} and hardware-level memory protection~\cite{nightingale2011cycles, sridharan2015memory,volosCAL21,zhangMICRO18}. The {M-node}\xspace is always alive; this can be ensured via consensus and replication~\cite{lamport2001paxos,lamport2019paxos,diegoATC14}. As the {M-node}\xspace deals with infrequent, lightweight tasks, using consensus does not introduce performance bottlenecks. \subsection{Data organization on DPM} \label{sec:dpm-sharing} Figure~\ref{fig-dinomo-data-plane} shows the data-plane components in {\textsc{Dinomo}}\xspace. {\textsc{Dinomo}}\xspace stores data (key-value pairs) and metadata (indexing structures) in {DPM}\xspace, providing durability and the source of ground truth. \vheading{Storing data in logs}. In response to a write request, a {KN}\xspace writes the data to an exclusive log on {DPM}\xspace. This write is performed with a single one-sided write operation in the critical path. The log is broken into a series of segments.
Since each {KN}\xspace handles requests on exclusive logical data partitions (\sref{sec:ownership-part}), two {KNs}\xspace will never log a write for the same key. The {DPM}\xspace processors asynchronously merge the write operations of a log segment, in order, into the metadata index. Logs of different {KNs}\xspace may be merged into the index simultaneously. \vheading{Metadata index}. The metadata index in {DPM}\xspace must satisfy the following requirements. First, {KNs}\xspace should not hold locks while performing index traversals; locks cause cross-node synchronization overheads. Next, even if a {KN}\xspace fails while performing an index traversal, other {KNs}\xspace should be able to make progress. Finally, the index should support concurrent and consistent updates, allowing {DPM}\xspace threads to perform non-conflicting updates in parallel. Most state-of-the-art concurrent PM indexes satisfy these requirements~\cite{lee2019recipe, Hwang-FAST18, Arulraj-VLDB18, Shaonan-FAST21, Zhangyu-ATC20}; these PM indexes provide lock-free reads and log-free in-place writes. Thus, with such PM indexes, {\textsc{Dinomo}}\xspace provides a globally consistent view of data in a scalable manner, independent of the number of {KNs}\xspace. \input{fig-dinomo-data-plane} \vheading{Consistency}. {\textsc{Dinomo}}\xspace guarantees linearizability, the strongest consistency level for non-transactional stores~\cite{viotti-CSUR17}. {\textsc{Dinomo}}\xspace ensures that a successful write request commits the data atomically in {DPM}\xspace, and that subsequent reads return the latest committed value. To satisfy linearizability, {\textsc{Dinomo}}\xspace merges data logs into the metadata index in request order. Other core design decisions, such as ownership partitioning\xspace across {KNs}\xspace (\sref{sec:ownership-part}) and the use of indirect pointers for selective replication (\sref{sec:ownership-part}), also help provide linearizability. Before reconfiguration, or after a failure, {\textsc{Dinomo}}\xspace merges all pending logs from the {KNs}\xspace involved before allowing the other {KNs}\xspace to serve reads. \subsection{Disaggregated Adaptive Caching\xspace} \label{sec:adaptive-caching} It would be prohibitively expensive for {KNs}\xspace to incur network round trips (RTs) for every read operation. To avoid these overheads, {KNs}\xspace use their local DRAM to cache data and metadata. Because {KNs}\xspace have limited memory, efficient caching is crucial for high common-case performance. We introduce \emph{Disaggregated Adaptive Caching\xspace} ({\textsc{DAC}}\xspace), a novel caching scheme to efficiently use the DRAM at {KNs}\xspace. \vheading{Motivation}. As {DPM}\xspace is directly accessible to {KNs}\xspace via low-latency one-sided RDMA operations owing to its byte addressability, {KNs}\xspace can cache not only data in the form of \emph{values} but also metadata in the form of \emph{shortcuts}. A \emph{value} entry keeps an entire copy of a {DPM}\xspace value, so the {KN}\xspace can access everything locally. A \emph{shortcut} entry keeps a fixed 64-bit pointer to the value in {DPM}\xspace; accessing the data incurs a single one-sided operation to {DPM}\xspace. If neither the value nor a shortcut is cached, accessing the value incurs significant overhead: the {KN}\xspace needs to traverse a metadata structure in {DPM}\xspace to find the value's location and then access the value.
Traversing metadata structures like trees, skip lists, or the chaining lists of hash tables requires multiple RTs to {DPM}\xspace or remote procedures in {DPM}\xspace, both of which have much higher overheads than a single one-sided operation~\cite{qingSIGMOD22,aguilera2019designing,zieglerSIGMOD19,pengfeiATC21}. Caching values improves performance relative to caching shortcuts, but requires more cache space. This raises an interesting question: is it better to cache a few values with no overhead upon cache hits, or a larger number of shortcuts with a fixed hit overhead? The answer is simple in the extreme cases: in highly skewed workloads, where a small number of hot key-value pairs can fit in the cache, storing values is better. When workloads are close to uniform, with a total size larger than the cache, storing shortcuts is better. Unfortunately, most workloads fall between these two extremes and offer no clear answer. A simple static caching policy may reserve some fixed ratio of cache space for storing values and devote the rest to shortcuts. What should this ratio be? We observe that the efficient ratio depends on the workload patterns and the aggregate memory available for caching. In a disaggregated system with autoscaling like {\textsc{Dinomo}}\xspace, neither the workload patterns nor the available memory is known ahead of time, ruling out static policies. \vheading{Adaptive Policy}. We introduce {\textsc{DAC}}\xspace, a novel caching policy that dynamically selects the ratio of value to shortcut entries as needed. This policy automatically \emph{adapts} to changes in workload patterns and to changes in the aggregate memory space for caching at {KNs}\xspace due to cluster reconfiguration, as shown in Figure~\ref{fig-dinomo-data-plane}. \vheading{Insight}. {\textsc{DAC}}\xspace is based on the following insight. Performance is highly correlated with the number of network RTs, so we seek to minimize it. Caching a shortcut reduces RTs from $M$ (where $M$ is the cost of an index lookup) to one, while caching a value instead of a shortcut reduces RTs from one to zero. Thus, caching shortcuts provides the bigger gain. We treat value caching as an optimization on top of shortcut caching. Value caching is used when we have spare space in the cache, or when we observe that storing a value can serve more requests than storing an equivalent number of shortcuts. Table~\ref{tab-policy} details the policy. In {\textsc{DAC}}\xspace, values can be demoted to shortcuts, and shortcuts can be evicted. Shortcuts can also be promoted to values. \vheading{Demotions}. Demotions occur on cache misses to make space for a new cache entry. To demote a value to a shortcut, we pick the least-recently-used key, leveraging temporal locality. To evict a shortcut, we pick the least-frequently-used key, in order to preserve frequently used keys in the cache and cater to skewed workloads. \input{tbl-policy} \vheading{Promotions}. Promotions depend on whether the benefits of caching a value outweigh the benefits of caching a suitable number of shortcuts. To determine whether a shortcut $P$ should be promoted to a value, we use the following calculation. If at least $N$ least-frequently-used shortcuts need to be evicted to make space for caching one value, then the shortcut $P$ needs to satisfy the following relation to be promoted: \begin{equation} \begin{aligned} Hits(P) \times \textrm{Avg. shortcut hit RTs} \geq \\ \sum_{i=1}^{N} Hits(Shortcut_{i}) \times \textrm{Avg. cache miss RTs} \end{aligned} \end{equation}
This formula accounts for the two elements of the trade-off: the difference in value and shortcut sizes, and the difference between the cost of a value miss and a shortcut miss. The left side of the inequality is the number of round trips saved if we promote shortcut $P$ to a value; the right side is the number of additional round trips incurred if we evict $N$ shortcuts to make space for the promotion of $P$. We promote if the savings are at least as large as the penalty. Note that the \emph{Avg. shortcut hit RTs} is always one, but the \emph{Avg. cache miss RTs} needs to be determined experimentally, which we do by keeping a moving average over past requests. \subsection{Ownership Partitioning\xspace} \label{sec:ownership-part} When multiple {KNs}\xspace cache the same value, ensuring linearizability can introduce consistency overheads (\textit{e.g.,}\xspace cache invalidation). {\textsc{Dinomo}}\xspace sidesteps this via \emph{ownership partitioning\xspace} ({\textsc{OP}}\xspace). Owing to the DPM architecture, where {KNs}\xspace are disaggregated from the shared PM pool, data access and ownership can be independent considerations: it is possible to partition ownership while sharing access to data. This insight motivates {\textsc{OP}}\xspace, which strikes a balance between shared-everything and shared-nothing designs. {\textsc{OP}}\xspace allows {KNs}\xspace to cache unique data, avoid consistency overheads, and thereby achieve high scalability. Although similar ideas have previously been used in other contexts~\cite{khattar1999introduction, caulfieldISCA13, snowflake, atulOSDI16}, we are the first to adapt them to {DPM}\xspace. \vheading{Central Idea}. {KNs}\xspace have exclusive but temporary ownership of logical, disjoint partitions of data. At any time, a partition is accessed by only one {KN}\xspace---its designated owner. {\textsc{OP}}\xspace allows {KNs}\xspace to scale without reorganizing data and metadata. \vheading{Partitioning the ownership}. Routing nodes maintain the mapping of key ranges to their owner {KNs}\xspace. Clients' requests are routed to the appropriate owner {KN}\xspace. The owner {KN}\xspace can use its local DRAM to cache data and metadata with high cache locality and provide good read performance. {\textsc{Dinomo}}\xspace does not require cache coherence protocols at {KNs}\xspace, as {KNs}\xspace have exclusive access to their partitions. As scaling {KNs}\xspace increases the total DRAM available for caching, {\textsc{OP}}\xspace scales performance by utilizing the DRAM cache effectively (no redundant copies) and avoiding consistency overheads. \vheading{Ownership metadata}. {\textsc{Dinomo}}\xspace uses consistent hashing to assign the primary owners of key ranges; {\textsc{Dinomo}}\xspace is compatible with other (\textit{e.g.,}\xspace key-range or hash-based) partitioning algorithms. Within a {KN}\xspace, a key range is further partitioned among its various threads. Both {KNs}\xspace and {RNs}\xspace maintain the partitioning metadata in a global hash ring, which stores key-to-{KVS node}\xspace-IP mappings, and a local hash ring, which stores key-to-thread mappings. Whenever the mapping changes, {RNs}\xspace are updated together with {KNs}\xspace. Clients cache routing information; when the mapping changes, the {KN}\xspace they contact directs them to a routing node to get the latest mapping information. Each {KN}\xspace always knows the key range it is supposed to handle and will refuse requests for other key ranges.
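To illustrate how such ownership metadata can be organized, here is a minimal Python sketch of a consistent-hash ring with virtual nodes, used once with {KN}\xspace IPs (the global ring) and once with thread IDs (a local ring); the class, the choice of hash function, and the IP/thread values are illustrative assumptions, not {\textsc{Dinomo}}\xspace{}'s actual implementation.
\begin{verbatim}
import bisect
import hashlib

def ring_hash(key):
    # Stable hash onto the ring (illustrative choice of hash function)
    return int(hashlib.md5(str(key).encode()).hexdigest(), 16)

class HashRing:
    """Consistent-hash ring mapping keys to owners; `vnodes` virtual
    nodes per owner smooth out the key-range distribution."""
    def __init__(self, owners, vnodes=64):
        self.ring = sorted(
            (ring_hash(f"{o}#{i}"), o) for o in owners for i in range(vnodes))
        self.points = [p for p, _ in self.ring]

    def owner_of(self, key):
        # First ring point clockwise from the key's hash owns the key
        idx = bisect.bisect(self.points, ring_hash(key)) % len(self.ring)
        return self.ring[idx][1]

# Global ring: key -> owner KN; local ring: key -> KN worker thread.
global_ring = HashRing(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
local_ring = HashRing(range(16))
key = "user:42"
print(global_ring.owner_of(key), local_ring.owner_of(key))
\end{verbatim}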
\vheading{Benefits}. Ownership partitioning\xspace provides multiple benefits. \viheading{High performance}. {\textsc{Dinomo}}\xspace achieves high performance in the common case by partitioning the ownership across {KNs}\xspace, allowing multiple {KNs}\xspace to cache unique data partitions with high cache locality. \viheading{Scalability}. By avoiding the overhead of maintaining consistency across {KN}\xspace caches, {\textsc{Dinomo}}\xspace achieves scalability. \viheading{Lightweight reconfiguration}. {\textsc{Dinomo}}\xspace can quickly change the number of {KNs}\xspace without physically reorganizing data or metadata; the current owner empties its cache, completes outstanding operations, and hands ownership to the new {KN}\xspace, which then begins serving requests. If a {KN}\xspace fails, the partitions owned by the failed {KN}\xspace can be assigned to new owners that can immediately serve data. \vheading{Selective replication}. Partition-based systems may suffer from load imbalance under highly skewed workloads. In these circumstances, adding more {KNs}\xspace does not distribute the load across the available {KNs}\xspace. Even if a popular key's value is cached in a {KN}\xspace, performance is bottlenecked by that {KN}\xspace's processing or network capacity. {\textsc{Dinomo}}\xspace recognizes such scenarios and shares the ownership of highly popular keys across multiple {KNs}\xspace, effectively replicating such keys to provide scalability beyond a single node's abilities. The replication metadata is stored along with the mapping information at {RNs}\xspace and {KNs}\xspace and is handled similarly. Clients cache and use this metadata to route requests to primary and secondary owners. {\textsc{Dinomo}}\xspace uses \emph{indirect pointers} to allow {KNs}\xspace to share ownership and to read or write the shared key-value pairs consistently. An indirect pointer points to a location in {DPM}\xspace that stores a pointer to the value instead of the value itself, and the {KNs}\xspace access the shared value with one-sided CAS operations on the indirect pointers to ensure linearizable access. Due to this sharing via indirect pointers, {\textsc{Dinomo}}\xspace incurs consistency overheads to balance the load across {KNs}\xspace; it limits these overheads by using indirect pointers only for hot keys. When a key becomes shared, {\textsc{Dinomo}}\xspace installs an indirect pointer for the key in {DPM}\xspace. When a {KN}\xspace updates a shared key, it writes the value at a new location and atomically updates the indirect pointer. A {KN}\xspace reading a shared key has to first read the indirect pointer and then read the value; thus, shared keys pay a cost that unshared keys avoid by default. Removing sharing from a key requires the {KNs}\xspace that own the shared key to invalidate it in their caches. Once the invalidation is done, the indirect pointer is removed from {DPM}\xspace. \subsection{Reconfiguration} \label{sec:control-plane} The {M-node}\xspace triggers reconfigurations to improve performance when SLOs are violated, to release under-utilized resources, or to tolerate {KN}\xspace failures. We first present the policy details and then explain our principled reconfiguration protocol. \vheading{Policy engine}.
The policy engine in the {M-node}\xspace governs when and what kind of reconfigurations to trigger. Our policy engine follows prior autoscaling work~\cite{anna-vldb19}, with simplifications for {\textsc{Dinomo}}\xspace; for example, memory consumption is not a consideration when scaling {KNs}\xspace, since the memory in a {KN}\xspace is used as a cache without overflow. The policy engine allows the configuration of the following parameters: \emph{average/tail latency SLOs}, \emph{over-utilization lower bound} and \emph{under-utilization upper bound}, and \emph{key hotness lower bound} and \emph{key coldness upper bound}. The {M-node}\xspace periodically collects latency information from clients, the average {KN}\xspace occupancy (\textit{i.e.,}\xspace CPU working time per monitoring-epoch interval), and the average access frequency of keys from {KNs}\xspace. It then proactively detects latency SLO violations and corrects them dynamically. Table~\ref{tbl-policy-violations} summarizes the reconfiguration scenarios. \input{tbl-eval-policy} \vheading{Cluster membership changes}. In {\textsc{Dinomo}}\xspace, cluster membership changes under the following scenarios. First, the {M-node}\xspace may detect a {KN}\xspace failure and notify the alive nodes. Second, the {M-node}\xspace may detect a latency SLO violation (average or tail latency SLO) and find that all the {KNs}\xspace are over-utilized (the minimum occupancy across {KNs}\xspace is larger than the \emph{over-utilization lower bound}), which triggers the addition of a new {KN}\xspace. Third, the {M-node}\xspace may detect that a {KN}\xspace is under-utilized (its occupancy is lower than the \emph{under-utilization upper bound}); if the latency SLOs are not violated, this triggers that {KN}\xspace's removal. While the ownership mapping is being redistributed due to membership changes, clients' request latencies can briefly increase. To prevent the policy engine from over-reacting during the ownership redistribution, {\textsc{Dinomo}}\xspace adds or removes at most one node per decision epoch and applies a grace period to allow the system to stabilize before the next decision. \vheading{Ownership replication changes}. If the {M-node}\xspace detects an SLO violation but not all {KNs}\xspace are over-utilized, the {M-node}\xspace identifies highly popular keys and increases their replication factor. In detail, the {M-node}\xspace considers a key highly popular if its average access frequency is greater than the \emph{key hotness lower bound}. {\textsc{Dinomo}}\xspace increases the replication factor $R$ (the number of secondary owners) of a hot key based on the ratio between the average latency of the hot key and the \emph{average latency SLO}. The {M-node}\xspace considers a key cold if its access frequency is below the \emph{key coldness upper bound}. If the latency SLOs are met and none of the {KNs}\xspace are under-utilized (so the {M-node}\xspace cannot remove any {KN}\xspace), the {M-node}\xspace identifies cold keys with high replication factors ($R > 1$) and dereplicates them ($R{=}1$). \vheading{Fault tolerance}. The {DPM}\xspace is the source of ground truth in {\textsc{Dinomo}}\xspace; it persistently stores data (key-value pairs), metadata (indexing data structures), and other policy information (ownership/replication metadata). {\textsc{Dinomo}}\xspace{}'s {KNs}\xspace and {RNs}\xspace store soft state that can be reconstructed if a node fails.
When a {KN}\xspace or {RN}\xspace fails, it retrieves the up-to-date policy information from the {DPM}\xspace and rebuilds the ownership mapping of key ranges before resuming. Unlike an {RN}\xspace failure, a {KN}\xspace failure changes the ownership mapping among the alive {KNs}\xspace. The {M-node}\xspace ensures that the ownership mapping is corrected before allowing the failed {KN}\xspace to resume. After detecting a {KN}\xspace failure, the {M-node}\xspace picks one of the alive {KNs}\xspace to complete the pending operations in the log segments of the failed {KN}\xspace, and broadcasts the failure to all {\textsc{Dinomo}}\xspace components. On receiving a failure message, {KNs}\xspace and {RNs}\xspace repartition the ownership mapping by updating their local hash rings. \vheading{Reconfiguration steps}. We now describe how {\textsc{Dinomo}}\xspace performs reconfigurations. Broadly, the following steps occur: \begin{enumerate}[leftmargin=15pt] \item {KNs}\xspace participating in the reconfiguration are identified ({KNs}\xspace for which the ownership mapping changes) \item The {KNs}\xspace become unavailable \item {DPM}\xspace synchronously merges the data in the logs of these {KNs}\xspace \item The {KNs}\xspace get their new mapping information \item The {KNs}\xspace become available, and the cluster continues operation \item The mapping information in the remaining {KNs}\xspace (not participating in the reconfiguration) is updated asynchronously \item The {RNs}\xspace are asynchronously updated with the new mapping information \end{enumerate} The cluster can continue operation at step five because {KNs}\xspace will reject requests for key ranges they do not own. Thus, the other {KNs}\xspace can be updated without blocking the nodes undergoing reconfiguration. In certain special cases, {\textsc{Dinomo}}\xspace can perform reconfiguration without blocking any {KNs}\xspace. This can happen when a new partition is being added to {\textsc{Dinomo}}\xspace (no previous owner to race with) or when a {KN}\xspace fails and its partitions are being redistributed. Note that there is no expensive data copying or movement during reconfiguration. This is the key property that enables lightweight reconfiguration in {\textsc{Dinomo}}\xspace. \subsection{Optimizations} \label{sec:onesided-async} {\textsc{Dinomo}}\xspace includes optimizations in its data path to reduce CPU bottlenecks and network utilization at the {DPM}\xspace. \vheading{One-sided \& asynchronous post-processing}. To minimize CPU bottlenecks and network utilization, {\textsc{Dinomo}}\xspace{}'s data path uses \emph{one-sided operations} with \emph{asynchronous post-processing}. With a one-sided operation (e.g., RDMA reads and writes), a {KN}\xspace operates directly on the {DPM}\xspace without involving the {DPM}\xspace processor. In contrast, two-sided operations (e.g., RPCs) are handled by the {DPM}\xspace processor. One-sided operations have lower latency and higher bandwidth than two-sided operations~\cite{storm, FaRM-NSDI14,Anuj-ATC16,ChristopherATC13,xingdaOSDI18}, but they are limited in functionality~\cite{aguilera2019designing}. For the best performance, {\textsc{Dinomo}}\xspace uses one-sided operations in the data path and delegates the post-processing of writes to the {DPM}\xspace processors asynchronously. \viheading{One-sided reads}. For reads, a {KN}\xspace directly returns the value from its cache upon a value hit.
\viheading{One-sided reads}. For reads, a {KN}\xspace directly returns the value from its cache upon a value hit. On a shortcut hit, it performs a single one-sided operation to retrieve the value in {DPM}\xspace from the shortcut pointer. On a cache miss, the {KN}\xspace performs multiple one-sided operations to find the address of the value (index traversals) and uses another one-sided operation to fetch the value from that address. \viheading{Asynchronous post-processing of writes}. {\textsc{Dinomo}}\xspace batches multiple log entries into a log segment unit and writes them to {DPM}\xspace using a one-sided RDMA write operation. With {\textsc{OP}}\xspace, {\textsc{Dinomo}}\xspace can batch the writes for the keys in the same partition without consistency overheads. The post-processing to merge the writes into the metadata index is handled asynchronously by {DPM}\xspace processors, off the critical path. {\textsc{Dinomo}}\xspace caches the committed log segments so that subsequent reads can be served locally at {KNs}\xspace, without expensive network RTs to read the large log segments remotely. These optimizations have two benefits. First, they reduce the latency as well as the network cost per operation. Second, they amortize the merging operation across all the write operations in a log segment (typically several megabytes in size). Because the merging is done asynchronously, the {DPM}\xspace processors can have lower computing power without significantly affecting {\textsc{Dinomo}}\xspace{}'s performance. \subsection{Microbenchmark}\label{sec:eval-micro} We use micro-benchmarks to investigate several issues. We first consider whether {\textsc{DAC}}\xspace is an effective caching strategy. Next, we explore how much compute capacity the {DPM}\xspace requires to prevent the asynchronous merging of writes from becoming the bottleneck; based on the results, we also discuss how our use of DRAM to emulate PM affects our results. \vheading{{\textsc{DAC}}\xspace}. The {KN}\xspace caches can be used to store values, shortcut pointers, or a mix of both. To evaluate {\textsc{DAC}}\xspace against different caching strategies, we use a single {KN}\xspace with 16 threads. We first load 30M key-value pairs into {\textsc{Dinomo}}\xspace with 8B keys and 64B values. We then run a read-only workload with a working set of uniformly-distributed 1.5M keys (5\% of the dataset) to evaluate performance. We generate the workload locally and measure the peak throughput within the {KN}\xspace by varying the available DRAM for caching from 1\%--16\% of the dataset size. We configure {\textsc{Dinomo}}\xspace to use different caching policies (Figure~\ref{fig-eval-micro-cache}). The static-X policies reserve X\% of their cache size for storing values; the rest of the cache is used for shortcuts. All non-{\textsc{DAC}}\xspace policies use LRU to evict entries. Figure~\ref{fig-eval-micro-cache} shows the read throughput obtained with the different cache policies. With an aggregate cache size of 2\% of the dataset, a shortcut-only cache performs best, whereas with a cache size 4$\times$\xspace as large, a value-only cache performs best. The aggregate cache size depends on the number of active {KNs}\xspace, which may change dynamically with cluster reconfiguration or {KN}\xspace failures. Therefore, a static caching policy is not a good fit: the right policy depends upon the workload patterns and the aggregate cache size. Despite not knowing the workload patterns or the aggregate cache size, {\textsc{DAC}}\xspace is within 16\% of the best-performing policy in all settings.
With a medium-sized cache that is 4\% of the dataset size, {\textsc{DAC}}\xspace exceeds the performance of both the shortcut-only and value-only caching policies by taking advantage of both. Further, as shown in Table~\ref{tab-micro-cache-rtts}, {\textsc{DAC}}\xspace has the lowest number of round trips per operation compared to all static policies, reducing network utilization and providing high performance. \input{fig-micro-app} \input{fig-perf-scaling} \input{tbl-perf-profile} \vheading{Asynchronous post-processing}. A delay in merging log segments due to the limited compute capacity in {DPM}\xspace can block the critical path of {KNs}\xspace writing logs. To evaluate this impact under the worst-case scenario in our setup, we run an insert-only workload using 16 {KNs}\xspace and 8 client nodes; this is the most compute-intensive workload, as it incurs structural changes to the {DPM}\xspace index (\textit{e.g.,}\xspace resizing the hash table). We first load 32GB of data and then run the workload, writing up to 100GB of data into {DPM}\xspace with 8B keys and 1KB values. We measure the peak throughput of log writing and merging for different {DPM}\xspace thread counts. For the log-write throughput, we collect the aggregate throughput across 16 {KNs}\xspace every 10 seconds for 30 seconds and average them; the log-write max is the maximum throughput the {KNs}\xspace can obtain if they never wait for {DPM}\xspace to merge logs. To measure the merge throughput, we pre-generate log segments locally on {DPM}\xspace for the dataset and then measure the performance of merging those log segments. As our testbed has no IB-enabled PM machines, we measure the merge throughput on PM using a local PM machine (Intel Xeon Silver 4314 CPU with 16 cores at 2.4GHz and 512GB Intel Optane DC PM on 4 NVDIMMs) to estimate the impact of using PM for {DPM}\xspace. We make a number of observations based on the results in Figure~\ref{fig-eval-micro-app}. First, we observe that to write logs at the maximum rate, {DPM}\xspace should have enough computing capability to merge at the log-write max rate; four or more {DPM}\xspace threads are required in our setup. Second, we observe that because of PM's higher access latency, the PM merge throughput is lower than on DRAM; when using four threads, the lower PM merge throughput can become the bottleneck. Third, we confirm that, despite in-DIMM write amplification~\cite{yang20-pm, anujSOCC20}, merge operations consume at most 2GB/s of PM write bandwidth (monitored by PCM~\cite{pcm}); 9.2GB/s out of the maximum (11.2GB/s) is still available to absorb incoming writes from the {KNs}\xspace over the network, making the network (7GB/s) the bottleneck rather than PM. We conclude that, in some scenarios, using PM instead of DRAM requires a higher number of {DPM}\xspace threads to prevent the merging delay from becoming the bottleneck. However, even in this worst-case scenario, the PM merge throughput with 4 threads was only 16\% lower than the log-write max; for more realistic scenarios with a mix of read and write operations (as used in our following end-to-end experiments), {DPM}\xspace should be able to operate with the same number of threads (4 threads or more) on both PM and DRAM for 16 {KNs}\xspace. \section{Evaluation} \label{sec:evaluation} We evaluate the performance of {\textsc{Dinomo}}\xspace and study the breakdown of the benefits from Ownership Partitioning\xspace ({\textsc{OP}}\xspace), Disaggregated Adaptive Caching\xspace ({\textsc{DAC}}\xspace), and selective replication.
We design our experiments to answer the following questions: \begin{itemize}[leftmargin=15pt] \item Does {\textsc{DAC}}\xspace help reduce network round trips? How does it fare against other caching policies? \item How much impact does the compute capacity in {DPM}\xspace have on the overall throughput of {\textsc{Dinomo}}\xspace? \item How does {\textsc{Dinomo}}\xspace fare against the state of the art in performance and scalability? \item What fraction of {\textsc{Dinomo}}\xspace's benefits can be attributed to the {\textsc{OP}}\xspace architecture and the {\textsc{DAC}}\xspace caching? \item How elastic and responsive is {\textsc{Dinomo}}\xspace while handling bursty workloads, load imbalance, and {KN}\xspace failures? \end{itemize} \vheading{Comparison points}. As our baseline, we use Clover~\cite{tsai2020disaggre}, a state-of-the-art, open-source key-value store designed for {DPM}\xspace. Clover has a shared-everything architecture with a shortcut-only cache at its {KNs}\xspace. Its {KNs}\xspace perform out-of-place updates to the data in {DPM}\xspace and incur additional overheads to provide strong consistency. For example, stale cached entries require {KNs}\xspace to walk through a chain of versions to find the most recent data in {DPM}\xspace. Besides Clover, we compare {\textsc{Dinomo}}\xspace with two variants, {\textsc{Dinomo}}\xspace-S and {\textsc{Dinomo}}\xspace-N. {\textsc{Dinomo}}\xspace uses three techniques: {\textsc{DAC}}\xspace, {\textsc{OP}}\xspace, and selective replication. {\textsc{Dinomo}}\xspace-S employs shortcut-only caching; it is otherwise identical to {\textsc{Dinomo}}\xspace. As the source code of AsymNVM~\cite{ma2020asymnvm} is not publicly available, we implement {\textsc{Dinomo}}\xspace-N to compare {\textsc{Dinomo}}\xspace with a shared-nothing counterpart; it uses {\textsc{DAC}}\xspace but partitions data and metadata in {DPM}\xspace, where each partition is exclusively accessed by a single {KN}\xspace, without selective replication. Comparing {\textsc{Dinomo}}\xspace-S with Clover highlights the benefits of partitioning ownership in {\textsc{OP}}\xspace, and comparing {\textsc{Dinomo}}\xspace with {\textsc{Dinomo}}\xspace-S shows the benefits from {\textsc{DAC}}\xspace. We also investigate the trade-off from sharing data in {\textsc{OP}}\xspace by comparing {\textsc{Dinomo}}\xspace with {\textsc{Dinomo}}\xspace-N. \vheading{Experiment setup}. We use Kubernetes pods to represent all of the node instances in the {\textsc{Dinomo}}\xspace cluster. We restrict the host resources assigned to the pods according to each node type to emulate the asymmetric {DPM}\xspace architecture (i.e., {KNs}\xspace have more capable computation but less memory than {DPM}\xspace). Each individual pod is pinned to a separate server for resource isolation purposes. We deploy {\textsc{Dinomo}}\xspace on the Chameleon Cloud~\cite{chameleonATC20}, an experimental large-scale testbed for cloud research. We use InfiniBand-enabled (IB-enabled) servers as hosts for the {KNs}\xspace and the {DPM}\xspace; each two-socket server has Intel Xeon E5-2670v3 processors, 24 cores at 2.30 GHz in total, and 128 GB DRAM. The shared {DPM}\xspace uses a maximum of 4 threads and 110 GB of DRAM as a proxy for the PM, which is registered to be RDMA-accessible. Each {KN}\xspace uses a maximum of 8 threads and 1 GB of DRAM for caching (${\approx}1$\% of the {DPM}\xspace size). {DPM}\xspace and the {KNs}\xspace are connected by Mellanox FDR ConnectX-3 adapters with 56 Gbps per port.
We emulate PM using DRAM because performance is constrained by the network rather than by PM or DRAM: network latency (1--20 $\mu$s) is at least 10$\times$\xspace higher than DRAM or PM latencies (hundreds of ns), and network bandwidth (7GB/s) is lower than PM bandwidth (32GB/s read / 11.2GB/s write)~\cite{anujSOCC20, assise-osdi20}. The external servers that run application workloads, henceforth termed \emph{client nodes}, and the routing service do not need a high-speed interconnect with the {KNs}\xspace or {DPM}\xspace. Hence, for client nodes and routing nodes ({RNs}\xspace), we use two-socket servers with AMD EPYC 7763 processors, 128 cores at 2.45 GHz in total, 256 GB of DRAM, and a 10 Gbps Ethernet NIC. Each client node uses 64 threads to run a closed-loop workload with one or more outstanding requests per thread. We use a single {RN}\xspace with 64 threads. The same routing layer is used across all KVS variants in our evaluation. Apart from the data plane components ({KNs}\xspace and {DPM}\xspace), {\textsc{Dinomo}}\xspace, {\textsc{Dinomo}}\xspace-N, and {\textsc{Dinomo}}\xspace-S use a dedicated control-plane instance for the {M-node}\xspace, which is deployed on a server (same configuration as the {RNs}\xspace) with a single thread. For Clover, we use an extra IB-enabled server (same configuration as the {KNs}\xspace) for its metadata server with 6 threads (4 workers, 1 epoch thread, 1 GC thread). \input{fig-micro-cache} \input{tbl-micro-cache-rtts} \vheading{Workloads and configurations}. We use YCSB-style workloads~\cite{anna-vldb19,cooperSOCC10} with five request patterns: read-only (100\% reads), read-mostly (95\% reads/5\% updates and 95\% reads/5\% inserts), and write-heavy (50\% reads/50\% updates and 50\% reads/50\% inserts). These workloads use 8B keys and 1KB values and the following Zipfian coefficients: 0.99 (the YCSB default) for moderate skew, 2 for high skew, and 0.5 for low skew (close to uniform). For each experiment, we first load 32 GB of data (key-value pairs) and then write up to 100GB of data during the workload, including inserts. With 16 {KNs}\xspace, each equipped with a 1GB cache, the {KNs}\xspace can cache up to 50\% of the loaded dataset. We generate the workload from the client nodes and measure system throughput and other profiling metrics, averaging them over 10-second intervals. \input{eval-micro} \subsection{Performance and Scalability} \label{sec:eval-perf-scale} We now compare the end-to-end performance and scalability of {\textsc{Dinomo}}\xspace, {\textsc{Dinomo}}\xspace-S ({\textsc{Dinomo}}\xspace with a shortcut-only cache), {\textsc{Dinomo}}\xspace-N ({\textsc{Dinomo}}\xspace with {\textsc{DAC}}\xspace and data/metadata partitioning), and Clover. We use workloads with moderate skew (Zipf 0.99) to observe the performance and scalability in the common case. We use 8 client nodes to run these workloads and measure the peak throughput by increasing the outstanding requests per client thread until the {KNs}\xspace' CPUs are saturated. After a 1-minute warm-up period, we collect the aggregate throughput across {KNs}\xspace every 10 seconds for 40 seconds and average them. In this experiment, the number of {KNs}\xspace is fixed, and hence there is no reconfiguration. However, the overhead of monitoring system statistics (which are used to trigger reconfiguration) is reflected in the measurements of {\textsc{Dinomo}}\xspace and its variants.
We profile the workload and collect metrics such as the aggregate cache hit ratio and the average number of network round trips per operation (RTs/op) across all {KNs}\xspace, as shown in Table~\ref{tbl-perf-profile}. As shown in Figure~\ref{fig-perf}, {\textsc{Dinomo}}\xspace{}'s throughput scales to 16 {KNs}\xspace. In contrast, Clover's throughput does not scale beyond 4 {KNs}\xspace due to either a network bottleneck or a CPU bottleneck at its metadata server. With 16 {KNs}\xspace, {\textsc{Dinomo}}\xspace outperforms Clover by at least 3.8$\times$\xspace across all workloads. {\textsc{Dinomo}}\xspace-S does not scale beyond 8 {KNs}\xspace in read-dominated workloads because of network bottlenecks. The performance of {\textsc{Dinomo}}\xspace and {\textsc{Dinomo}}\xspace-N is almost on par (the maximum difference is 11\%). We observe that both {\textsc{Dinomo}}\xspace and {\textsc{Dinomo}}\xspace-N achieve high performance due to the high cache locality at {KNs}\xspace resulting from partitioning. While partitioning data and metadata in {\textsc{Dinomo}}\xspace-N also reduces synchronization overheads, we did not notice a significant benefit from this in the tested workloads. \vheading{{\textsc{OP}}\xspace enables scalable performance}. We observe that increasing the number of {KNs}\xspace from 1 to 16 reduces the cache hit ratio in Clover across all workloads (Table~\ref{tbl-perf-profile}). This performance drop is counterintuitive, as the DRAM available for caching increases with the number of {KNs}\xspace. However, in shared-everything architectures any {KN}\xspace can handle any request, so multiple {KNs}\xspace may incur cache misses on the same key. With more {KNs}\xspace, even with moderate skew, these redundant cache misses increase. In summary, shared-everything architectures do not provide good cache locality and prevent the efficient use of {KN}\xspace-side memory for caching. In contrast, {\textsc{OP}}\xspace partitions the ownership of keys across {KNs}\xspace, providing high cache locality for requests and eliminating redundant shortcuts at multiple {KNs}\xspace. Note that, for these workloads, {\textsc{Dinomo}}\xspace-S sees a 100\% hit ratio at all {KNs}\xspace, with any number of {KNs}\xspace (Table~\ref{tbl-perf-profile}). \vheading{{\textsc{DAC}}\xspace boosts performance and scalability}. {\textsc{Dinomo}}\xspace has a higher cache hit rate (from values) with more {KNs}\xspace and takes fewer RTs/op than both {\textsc{Dinomo}}\xspace-S and Clover. {\textsc{Dinomo}}\xspace-S has higher network costs: up to 10$\times$\xspace more RTs/op than {\textsc{Dinomo}}\xspace. Clover is even worse: from 4$\times$\xspace to 87$\times$\xspace more RTs/op than {\textsc{Dinomo}}\xspace, due to shortcut-only caching and a lack of locality that results in consistency overheads and redundant caching. The aggregate memory available for caching increases with the number of {KNs}\xspace for all systems. However, {\textsc{DAC}}\xspace helps {KNs}\xspace cache more values (as opposed to shortcuts) and thus incur fewer round trips to {DPM}\xspace per operation. In {\textsc{Dinomo}}\xspace, the cache hit ratio from values increases from 52\% with 1 {KN}\xspace up to 88\% with 16 {KNs}\xspace across all workloads (Table~\ref{tbl-perf-profile}). With 1 {KN}\xspace, {\textsc{Dinomo}}\xspace caches more shortcuts, incurring 1 RT even on a (shortcut) cache hit; with 16 {KNs}\xspace, {\textsc{Dinomo}}\xspace caches more values and hence takes fewer RTs/op (0.1 RTs/op across all workloads).
{\textsc{Dinomo}}\xspace has fewer RTs/op in write-heavy workloads, on average, than in read-dominated workloads, as {KNs}\xspace persist multiple write operations in a batch with 1 RT to {DPM}\xspace. Overall, we see that {\textsc{DAC}}\xspace is effective in reducing RTs to {DPM}\xspace. \subsection{Elasticity} \label{sec:eval-elasticity} We now demonstrate that {\textsc{Dinomo}}\xspace can elastically scale the number of {KNs}\xspace, balance load across {KNs}\xspace, and tolerate failures. We use a workload with 50\% reads and 50\% updates with three different skew distributions. When a reconfiguration is triggered in this workload, any pending writes must be merged to {DPM}\xspace before the reconfiguration can proceed. We run each client node with one outstanding request per thread at a time. \vheading{Policy variables}. We set the parameters of the policy engine (\sref{sec:control-plane}) and design the experiments to trigger various forms of reconfiguration. We use an \emph{average latency SLO} of 1.2ms and a \emph{tail latency SLO} (99th-percentile latency) of 16ms. The \emph{over-utilization lower bound} is configured to be 20\% {KN}\xspace occupancy, and the \emph{under-utilization upper bound} is set to 10\% {KN}\xspace occupancy. Furthermore, we configure the \emph{key hotness lower bound} to 3 standard deviations above the mean key access frequency and the \emph{key coldness upper bound} to 1 standard deviation below the mean. Note that the goal of these experiments is to study the elasticity of {\textsc{Dinomo}}\xspace under various scenarios; we chose these policy parameters as simple triggers for those scenarios, not as an indication of the best policies. \input{fig-scale-out} \vheading{Auto scaling}. We evaluate {\textsc{Dinomo}}\xspace with bursty, irregular workloads and compare its elasticity in scaling {KNs}\xspace with that of {\textsc{Dinomo}}\xspace-N. We were unable to run Clover for this experiment because Clover does not implement auto-scaling of {KNs}\xspace. We produce scenarios where a new {KN}\xspace is required or an existing {KN}\xspace is no longer needed. Recall that {\textsc{Dinomo}}\xspace adds new {KNs}\xspace automatically only if a latency SLO is violated, the {KNs}\xspace are over-utilized, and an additional {KN}\xspace is available. {\textsc{Dinomo}}\xspace automatically evicts a {KN}\xspace only if the latency SLOs are met and the {KN}\xspace is under-utilized. The grace period after each reconfiguration is configured to 90 seconds. To produce a bursty workload, we start running the workload with low skew (Zipf 0.5) on {\textsc{Dinomo}}\xspace using 1 client node for 20 seconds. We then increase the load on {\textsc{Dinomo}}\xspace by 7$\times$\xspace by adding 7 additional client nodes. We observe the performance of {\textsc{Dinomo}}\xspace for a few minutes until it stabilizes, and at the 230-second mark, we remove the 7 client nodes to lower the load by 7$\times$\xspace again. Figure~\ref{fig-scale-out} shows the behavior of {\textsc{Dinomo}}\xspace and {\textsc{Dinomo}}\xspace-N during this experiment. {\textsc{Dinomo}}\xspace and {\textsc{Dinomo}}\xspace-N meet the latency SLOs until the load increases at 30 seconds, when the {M-node}\xspace detects a latency SLO violation: the tail latency SLO is exceeded. The {M-node}\xspace then observes that the {KNs}\xspace are over-utilized (the minimum {KN}\xspace occupancy in {\textsc{Dinomo}}\xspace is about 35\%) and hence corrects the situation by adding a new {KN}\xspace.
Once the new {KN}\xspace comes online at 40--50 seconds, {\textsc{Dinomo}}\xspace shows a brief latency increase and throughput dip as the nodes update their hash rings. In contrast, {\textsc{Dinomo}}\xspace-N experiences a 40-second latency spike and throughput dip starting at 60 seconds, during which the throughput drops to 0 due to the processing delay of data reorganization. After the 90-second grace window, although the average latency SLO is met, the tail latency SLO is still violated. {\textsc{Dinomo}}\xspace and {\textsc{Dinomo}}\xspace-N react to the situation by adding another {KN}\xspace. Again, {\textsc{Dinomo}}\xspace only sees a brief increase in latency, while {\textsc{Dinomo}}\xspace-N's latency increases for 30 seconds. After the grace window, as both latency SLOs are met, {\textsc{Dinomo}}\xspace and {\textsc{Dinomo}}\xspace-N take no further actions. At 230 seconds, the load is suddenly reduced. Within the next 10 seconds, the {M-node}\xspace detects an under-utilized {KN}\xspace with less than 10\% occupancy. As the latency SLOs are met, the policy engine triggers the {KN}\xspace eviction. While removing the under-utilized {KN}\xspace, {\textsc{Dinomo}}\xspace sees a brief rise in average and tail latency without violating the SLOs. {\textsc{Dinomo}}\xspace-N, however, shows a 20-second throughput dip and latency spike before stabilizing. Overall, we see that {\textsc{Dinomo}}\xspace is more responsive, with fewer throughput and latency disruptions, than {\textsc{Dinomo}}\xspace-N, and can automatically scale {KNs}\xspace as required by changes in load. \input{fig-load-balance} \vheading{Load balancing}. We now describe how {\textsc{Dinomo}}\xspace handles non-uniform load on its {KNs}\xspace and scales its throughput for hot spots, in comparison to {\textsc{Dinomo}}\xspace-N and Clover. To handle these scenarios, recall that {\textsc{Dinomo}}\xspace uses selective replication; this mechanism is triggered only if a latency SLO is violated due to a few hot keys and the {KNs}\xspace are not over-utilized. For these experiments, we use a skewed workload with 8 client nodes and 16 {KNs}\xspace. We start the experiments with a low-skew workload (Zipf 0.5) and then switch to a highly skewed workload (Zipf 2). {\textsc{Dinomo}}\xspace{}'s policy engine checks that the {KNs}\xspace are not over-utilized (minimum occupancy lower than 10\%) and identifies that the latency SLO is violated due to 4 hot keys. As a result, the policy engine triggers the selective replication of the 4 keys. Figure~\ref{fig-load-balance} shows the KVSs' behavior during the experiment. Initially, all the KVSs meet the latency SLO and balance the load across {KNs}\xspace. At 20 seconds, the workload switches to the highly skewed pattern, resulting in latency SLO violations and an increase in load imbalance between {KNs}\xspace. {\textsc{Dinomo}}\xspace gradually increases the replication factor of the 4 keys between 30 and 90 seconds. During this period, {\textsc{Dinomo}}\xspace experiences brief tail latency spikes due to the additional delay for clients to retrieve the up-to-date ownership mapping of replicated keys from the {RN}\xspace, but throughput gradually increases. At 90 seconds, {\textsc{Dinomo}}\xspace fully replicates the hot keys across all available {KNs}\xspace, the throughput stabilizes, and the latency SLOs are met. {\textsc{Dinomo}}\xspace was the only system to satisfy the SLOs; both Clover and {\textsc{Dinomo}}\xspace-N constantly violate the SLOs on the highly skewed workload.
Clover initially outperforms {\textsc{Dinomo}}\xspace (without selective replication) and {\textsc{Dinomo}}\xspace-N by almost 4$\times$\xspace on the highly skewed workload. However, once we enable selective replication in {\textsc{Dinomo}}\xspace, hot keys start being shared by multiple {KNs}\xspace at about 30--40 seconds; once all the hot keys are completely replicated, {\textsc{Dinomo}}\xspace{}'s performance stabilizes in about 1 minute, and it outperforms Clover by almost 1.6$\times$\xspace and {\textsc{Dinomo}}\xspace-N by up to 5.6$\times$\xspace. Selectively replicating hot keys in {\textsc{Dinomo}}\xspace allows multiple {KNs}\xspace to access {DPM}\xspace for the hot keys, increasing the overall throughput. Because hot keys are accessed through indirect pointers, {KNs}\xspace cannot cache their values. Hence, {\textsc{Dinomo}}\xspace selectively replicates only the hottest keys, restricting {KNs}\xspace to caching only their shortcuts; {KNs}\xspace maintain exclusive ownership over non-hot keys and continue to cache their values adaptively. Overall, our experiments highlight the benefits of selective replication with {\textsc{OP}}\xspace for load balancing across {KNs}\xspace and for handling hot spots, as a better alternative to shared-everything designs. \input{fig-failover-throughput} \vheading{Fault tolerance}. Finally, we induce a {KN}\xspace failure to compare the resilience and elasticity of {\textsc{Dinomo}}\xspace, {\textsc{Dinomo}}\xspace-N, and Clover. In a cluster with 16 {KNs}\xspace, we run a moderate-skew (Zipf 0.99) workload for 2 minutes using 8 client nodes and simulate a {KN}\xspace failure at around 40 seconds. We simulate the failure by eliminating a randomly selected {KN}\xspace. User requests are set to time out after 500ms. We observe that {\textsc{Dinomo}}\xspace quickly recovers from the {KN}\xspace failure (Figure~\ref{fig-failover-throughput}). The throughput briefly drops by 45\%, the average latency increases by 1.2$\times$\xspace (0.8 ms), and the tail latency increases by 1.5$\times$\xspace (1.4 ms). Upon detecting the failure, {\textsc{Dinomo}}\xspace merges the pending log segments from the failed {KN}\xspace and redistributes ownership across the other alive {KNs}\xspace. These steps take less than 109 ms. {\textsc{Dinomo}}\xspace-N, on the other hand, experiences a 20-second dip in performance at 50 seconds, during which the throughput drops to 0 as it stops serving requests while reshuffling data; reorganizing the data takes more than 11 seconds in {\textsc{Dinomo}}\xspace-N. Clover tolerates the {KN}\xspace failure elastically, showing a brief 55\% drop in its throughput. Clover only needs to update the cluster membership of alive {KNs}\xspace in the {RNs}\xspace after failures (without any data reorganization) to allow clients to retrieve the new membership after timeouts; updating the {RNs}\xspace takes less than 68 ms. Overall, compared to {\textsc{Dinomo}}\xspace-N, {\textsc{Dinomo}}\xspace recovers from {KN}\xspace failures faster since it does not need to reorganize data, owing to the data sharing in {\textsc{OP}}\xspace. Similar to Clover, {\textsc{Dinomo}}\xspace stabilizes its performance quickly and satisfies all SLOs. \section{Implementation} \label{sec:impl} We implement {\textsc{Dinomo}}\xspace in 10K lines of C++ code. We use the standard C++ library and several open-source libraries, including ZeroMQ~\cite{zeromq}, Google Protocol Buffers~\cite{protobuf}, libibverbs~\cite{libibverbs}, and the PMDK library~\cite{pmdklib}.
This section discusses {\textsc{Dinomo}}\xspace{}'s {DPM}\xspace data structures, the {\textsc{DAC}}\xspace implementation, and cluster management. \vheading{DPM metadata index}. {\textsc{Dinomo}}\xspace uses RECIPE's P-CLHT (Persistent Cache-Line Hash Table)~\cite{lee2019recipe}, which supports lock-free reads and log-free in-place writes, as its metadata index in {DPM}\xspace. P-CLHT is a chaining hash table aimed at minimizing CPU-cache coherence and persistence overheads on PM. Each bucket in P-CLHT has the size of a single cache line and holds three key-value pairs~\cite{davidASPLOS15}. This design allows each access/update to the hash table to incur only a single cache-line access/flush in the common case. For lock-free reads, P-CLHT employs atomic snapshots of key-value pairs. We modify the index to use RDMA reads for lookups. On hash collisions, {KNs}\xspace may have to perform multiple one-sided RDMA reads to traverse the hash chain and read the value. The cacheline-conscious bucket design of P-CLHT, cache-coherent DMA~\cite{FaRM-NSDI14,Anuj-ATC16}, and out-of-place value updates allow us to avoid memory-access races~\cite{ChristopherATC13,singhviSIGCOMM21} between updates by {DPM}\xspace processors and one-sided RDMA reads by {KNs}\xspace. \vheading{DPM log segments}. {\textsc{Dinomo}}\xspace implements 8 MB log segments and handles variable-length key-value pairs. {KNs}\xspace proactively preallocate log segments for their own use using two-sided operations. {KNs}\xspace log write operations into {DPM}\xspace log segments and cache them; upon cache misses in {\textsc{DAC}}\xspace, {KNs}\xspace have to search the cached log segments to find the latest value. {\textsc{Dinomo}}\xspace implements Bloom filters atop cached log segments for quick membership queries. {\textsc{Dinomo}}\xspace maintains the following invariant: un-merged log segments are cached at the {KNs}\xspace that wrote them. Due to {\textsc{OP}}\xspace, other {KNs}\xspace will not access these log segments, eliminating the need for read operations to check un-merged log segments on other nodes. {KNs}\xspace can add new log segments to {DPM}\xspace without blocking until the number of their un-merged log segments reaches a threshold (the default is 2); once the threshold is reached, the critical write path blocks until the {DPM}\xspace processors have merged enough segments to fall below the threshold. {\textsc{Dinomo}}\xspace logs write operations with commit markers (e.g., a seal byte at the end of the entry~\cite{FaRM-NSDI14,liu-TODS19}) to {DPM}\xspace log segments to ensure crash consistency and to aid recovery. The {DPM}\xspace index directly points to the values stored in the log entries. Since {KNs}\xspace know the addresses of the log segments they write (and therefore where values are stored), they can produce and locally cache shortcuts to values in {DPM}\xspace without an extra round trip. To garbage collect stale log segments, {\textsc{Dinomo}}\xspace maintains per-log-segment counters that reflect the number of valid and invalid values in each log segment. Once the number of invalid values matches the total number of values in a log segment, a {DPM}\xspace processor garbage collects the log segment. \vheading{DPM persistence}. While merging log segments, {\textsc{Dinomo}}\xspace{}'s {DPM}\xspace processing threads persist all writes to the {DPM}\xspace index structure using \texttt{CLWB}, \texttt{sfence}, and non-temporal store instructions~\cite{rudoff2017persistent}.
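For concreteness, a flush primitive in this style might look as follows; this is our own sketch for x86 with CLWB support (compile with \texttt{-mclwb}), not {\textsc{Dinomo}}\xspace{}'s actual code, and a production version would combine it with non-temporal stores and fall back to \texttt{CLFLUSHOPT} where CLWB is unavailable.
\begin{verbatim}
#include <immintrin.h>
#include <cstddef>
#include <cstdint>

// Flush [addr, addr+len) to the persistence domain: write back each
// cache line with CLWB, then order the write-backs with a store fence.
static void persist_range(const void* addr, std::size_t len) {
    const std::size_t kLine = 64;  // cache-line size on current x86
    auto p = reinterpret_cast<std::uintptr_t>(addr) & ~(kLine - 1);
    auto end = reinterpret_cast<std::uintptr_t>(addr) + len;
    for (; p < end; p += kLine)
        _mm_clwb(reinterpret_cast<void*>(p));  // write back, keep line cached
    _mm_sfence();                              // make write-backs globally ordered
}

int main() {
    char buf[256] = {0};
    persist_range(buf, sizeof buf);  // on DRAM this only exercises the path
    return 0;
}
\end{verbatim}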
RDMA currently does not support durable writes. However, the durable write proposed in the IETF standards working document~\cite{Talpey2016RDMADW} behaves similarly to a non-durable write, requiring one network round trip. Our implementation currently uses non-durable writes, and we plan to update these to durable writes once they become available~\cite{Kim-SIGCOMM18}. \vheading{{\textsc{DAC}}\xspace}. {\textsc{DAC}}\xspace is implemented using standard C++ libraries. {\textsc{DAC}}\xspace uses two unordered maps to store values and shortcuts. Least-recently-used values and least-frequently-used shortcuts are evicted. The key access frequency is tracked using a multimap. The shortcut entries in {\textsc{DAC}}\xspace contain a pointer to a {DPM}\xspace value and the {DPM}\xspace value's length. The value entries have two extra fields: an access count and a copy of the {DPM}\xspace value. In {\textsc{DAC}}\xspace, demoted values are cached as shortcuts, and shortcuts being promoted retain their access counts to preserve their access history.
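The promote/demote mechanics can be sketched as follows; the structures and names are our own illustrative simplifications, and the LRU/LFU eviction machinery, the frequency-tracking multimap, and the actual RDMA fetches are elided.
\begin{verbatim}
#include <cstdint>
#include <string>
#include <unordered_map>

// Two caches, as in the implementation above: a value cache (evicted LRU)
// and a shortcut cache (evicted LFU). Only the promote/demote logic
// follows the text; everything else is simplified.
struct Shortcut { std::uint64_t dpm_addr = 0; std::uint32_t len = 0;
                  std::uint32_t freq = 0; };
struct Value    { std::string bytes; std::uint32_t freq = 0; };

struct DacCache {
    std::unordered_map<std::string, Value> values;
    std::unordered_map<std::string, Shortcut> shortcuts;

    // Demotion: an evicted value survives as a shortcut, so a later read
    // needs only one round trip instead of a full index traversal.
    void demote(const std::string& key, std::uint64_t dpm_addr) {
        auto it = values.find(key);
        if (it == values.end()) return;
        shortcuts[key] = Shortcut{dpm_addr,
            static_cast<std::uint32_t>(it->second.bytes.size()),
            it->second.freq};
        values.erase(it);
    }

    // Promotion: a hot shortcut becomes a cached value and retains its
    // access count, preserving its access history.
    void promote(const std::string& key, std::string fetched_bytes) {
        auto it = shortcuts.find(key);
        if (it == shortcuts.end()) return;
        values[key] = Value{std::move(fetched_bytes), it->second.freq};
        shortcuts.erase(it);
    }
};

int main() {
    DacCache c;
    c.values["k"] = Value{"v", 3};
    c.demote("k", /*dpm_addr=*/0x1000);  // value evicted, kept as shortcut
    c.promote("k", "v");                 // shortcut becomes a value again
    return c.values.count("k") == 1 ? 0 : 1;
}
\end{verbatim}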
\vheading{Cluster management}. {\textsc{Dinomo}}\xspace uses Kubernetes~\cite{kubernetes} for cluster orchestration. Pods are the smallest deployable units in Kubernetes. Each {\textsc{Dinomo}}\xspace component is instantiated in a separate Kubernetes pod with a corresponding Docker~\cite{docker} container. {\textsc{Dinomo}}\xspace uses Kubernetes to add/remove {KN}\xspace pods and restart failed pods. The {M-node}\xspace pod is colocated with the Kubernetes master. The {M-node}\xspace{}'s policy engine adds/removes {KN}\xspace pods by running simple bash scripts that issue \texttt{kubectl}~\cite{kubectl} commands to the Kubernetes master. The Kubernetes master keeps track of pod status using heartbeats, and the {M-node}\xspace uses this information to detect failures in {KN}\xspace pods. \section{Introduction} Large cloud providers operate at a much larger scale than traditional enterprise data centers and aim to optimize their infrastructures for high utilization. However, recent work indicates that resources in cloud data centers remain underutilized~\cite{guz2017nof,AlibabaTrace-BIGDATA2017,BorgTrace-EuroSys2020,yizhouOSDI18}. In the face of dynamic and bursty workloads, scheduling tasks such that resource utilization is high proves challenging~\cite{zhangvldb21}. For example, cluster memory utilization can be as low as 60\%~\cite{chengBigData18, VermaCluster14, TirmaziEurosys20}. One promising way to increase resource utilization is to disaggregate resources~\cite{sebastian2020disappl,limISCA09,peterOSDI16,keeton2015machine}. In a disaggregated cluster, resources such as CPU, memory, and storage are each collected into a separate central network-attached pool. By sharing these resources across users and applications, utilization can be increased significantly. Furthermore, each resource can be scaled up or down independently of the others: for example, memory can be added without the need to also add CPU or storage. Such disaggregation has long been practiced for storage in the form of network-attached storage (NAS)~\cite{gibsonNAS} and Storage Area Networks (SANs)~\cite{khattar1999introduction}. In this work, we take the idea one step further and consider a cluster where Persistent Memory is disaggregated. Persistent Memory (PM) is a new memory technology that provides durability like traditional storage, with performance close to DRAM~\cite{pm-arxiv, yang20-pm,IntelPMM}. Since PM has a much higher cost per GB than conventional storage~\cite{assise-osdi20}, it is critical to achieve high utilization in PM deployments. As with traditional storage, the utilization of PM would increase with disaggregation.
However, the DRAM-like latencies of PM make disaggregation challenging, since the network latency is an order of magnitude higher than PM latency. Disaggregated Persistent Memory ({DPM}\xspace) is still under active research and development, and hence there are different kinds of {DPM}\xspace to build upon. In this work, we assume that {DPM}\xspace is available as a centralized, reliable pool accessible via the network~\cite{keeton2017machine}. We further assume that {DPM}\xspace includes limited computational capability, as prior work shows such capability is critical for achieving good performance~\cite{ma2020asymnvm,tsai2020disaggre,MingFAST22}. We are interested in using {DPM}\xspace to build persistent key-value stores (KVSs), which are critical pieces of software infrastructure. The KVS consists of a number of {KVS nodes}\xspace ({KNs}\xspace) equipped with general-purpose processors, a relatively small amount of local DRAM, and high-performance network primitives like RDMA to access {DPM}\xspace over the network~\cite{volosSOCC18}. An ideal KVS for {DPM}\xspace would have a number of properties: high common-case performance, scalability, and quick reconfiguration that allows handling failures, bursty workloads, and load imbalance efficiently. Building a KVS that achieves all these goals simultaneously is challenging. First, {KNs}\xspace incur expensive network round trips (RTs) when accessing data and metadata in {DPM}\xspace. Despite these overheads, the KVS must provide high performance. Second, to benefit from independent scaling of {KNs}\xspace and PM, the KVS must be elastic and support lightweight reconfiguration of resources. Finally, the KVS must provide scalable performance without bottlenecks due to load imbalance at {KNs}\xspace or from non-uniform workload patterns. Prior {DPM}\xspace KVSs~\cite{ma2020asymnvm,tsai2020disaggre} make design trade-offs that make these goals difficult to satisfy simultaneously. For example, AsymNVM~\cite{ma2020asymnvm} achieves high performance by adopting a shared-nothing architecture to enable high cache locality at {KNs}\xspace. However, expensive data reorganization is needed when changing the number of {KNs}\xspace or rebalancing their load, thus limiting elasticity and efficient load balancing. Similarly, Clover~\cite{tsai2020disaggre} supports straightforward load balancing and high elasticity using a shared-everything architecture, where data is shared across {KNs}\xspace and any {KN}\xspace can handle any request. However, performance and scalability suffer as a result of poor cache locality and consistency overheads (including cache coherence, contention, and synchronization overheads due to sharing) in the common case~\cite{porobicVLDB12}. In this work, we present \textbf{{\textsc{Dinomo}}\xspace}, the first {DPM}\xspace KVS that simultaneously achieves high common-case performance, scalability, and lightweight online reconfiguration. {\textsc{Dinomo}}\xspace also provides linearizable reads and writes. To achieve these goals, {\textsc{Dinomo}}\xspace carefully adapts techniques from the storage research community, including caching, ownership partitioning\xspace, selective replication, and lock-free and log-free PM indexing. \vheading{Data organization on {DPM}\xspace}. {\textsc{Dinomo}}\xspace stores data and metadata on {DPM}\xspace to enable concurrent and consistent access by all {KNs}\xspace. Because {DPM}\xspace is shared among all {KNs}\xspace, it functions as the source of ground truth in the system.
To enable consistent updates, data is written to {DPM}\xspace in the form of log entries by the {KNs}\xspace. These log entries are asynchronously merged, in order, into the metadata index by the processors at the {DPM}\xspace. For its metadata index, {DPM}\xspace uses a concurrent PM index~\cite{lee2019recipe} that provides lock-free reads and log-free in-place writes; the lock-free reads allow us to eliminate synchronization overheads between {KNs}\xspace, and the log-free in-place writes allow {DPM}\xspace processors to concurrently update the metadata. \vheading{Disaggregated Adaptive Caching\xspace ({\textsc{DAC}}\xspace)}. Similar to other disaggregated systems, {\textsc{Dinomo}}\xspace reduces network RTs by caching data and metadata in the local DRAM of each {KN}\xspace. Data is cached by storing the key-value pair, and metadata is cached by storing a pointer to the data on {DPM}\xspace (termed a \textit{shortcut}~\cite{tsai2020disaggre}). To determine how best to divide the cache space between data and metadata, {\textsc{Dinomo}}\xspace uses {\textsc{DAC}}\xspace, a novel adaptive caching policy that actively maintains the right balance between caching values and shortcuts based on the workload patterns and the available memory at {KNs}\xspace. {\textsc{DAC}}\xspace allows {\textsc{Dinomo}}\xspace to make efficient use of the DRAM at {KNs}\xspace without making any assumptions about the workload. \vheading{Ownership Partitioning\xspace ({\textsc{OP}}\xspace)}. While caching at the {KNs}\xspace can reduce network RTs, it can incur significant consistency overheads when {KNs}\xspace share the same data. To handle this concern, {\textsc{Dinomo}}\xspace partitions the \emph{ownership} of data across {KNs}\xspace, while data and metadata are shared via {DPM}\xspace. This provides three benefits. First, it allows {KNs}\xspace to cache the data they own, thus providing high cache locality without consistency overheads. Second, by sharing the data and metadata, {\textsc{OP}}\xspace supports changing the number of {KNs}\xspace or rebalancing their load by repartitioning only the ownership of data among {KNs}\xspace, without expensive data reorganization at {DPM}\xspace. Finally, since each key is accessed by only one {KN}\xspace at any given point in time, {\textsc{Dinomo}}\xspace, combined with our principled reconfiguration protocol, achieves linearizable reads and writes. Similar ideas have been proposed before in other contexts~\cite{khattar1999introduction, caulfieldISCA13, snowflake, atulOSDI16}, but we are the first to adapt them for {DPM}\xspace. With {\textsc{OP}}\xspace, {\textsc{Dinomo}}\xspace achieves high performance and scalability from locality-preserving {KN}\xspace-side caching without consistency overheads, and high elasticity from lightweight reconfiguration. \vheading{Selective replication}. {\textsc{Dinomo}}\xspace{}'s ownership partitioning\xspace, however, may experience performance or scalability bottlenecks at {KNs}\xspace due to load imbalance under highly skewed workloads (\textit{i.e.,}\xspace the maximum throughput for requests on a single key is limited by the processing capacity of a single {KN}\xspace). To avoid this issue and provide scalable performance for highly skewed workloads, {\textsc{Dinomo}}\xspace \emph{selectively replicates} the ownership of hot keys across multiple {KNs}\xspace.
{\textsc{Dinomo}}\xspace has a separate {monitoring/management node}\xspace that identifies hot keys, initiates their ownership replication to other {KNs}\xspace, and thus balances the load from hot keys across the available {KNs}\xspace. \vheading{Alleviating network and CPU bottlenecks}. {\textsc{Dinomo}}\xspace{}'s data path uses \emph{one-sided RDMA operations} with \emph{asynchronous post-processing}. All reads to {DPM}\xspace by {KNs}\xspace use one-sided RDMA operations on a shortcut hit or a cache miss. {\textsc{Dinomo}}\xspace writes multiple log entries in a batch in the critical path using a one-sided RDMA operation and delegates the merging of the writes into the metadata index to the {DPM}\xspace processors asynchronously. Asynchronous post-processing reduces write latency and amortizes {DPM}\xspace processing utilization across multiple writes, reducing how much {DPM}\xspace computing power is needed in the critical path. These optimizations decrease the number of network messages per operation and alleviate the processing bottleneck at {DPM}\xspace, increasing the efficiency of {\textsc{Dinomo}}\xspace on top of techniques like {\textsc{DAC}}\xspace and {\textsc{OP}}\xspace. \vheading{Limitations}. Our work has a number of limitations. First, while we address the challenge of scaling {KNs}\xspace, we do not tackle how to make {DPM}\xspace reliable or scalable. Second, {\textsc{Dinomo}}\xspace{} targets key-value store functionality for {DPM}\xspace systems. Many of its ideas may be equally applicable to a broader range of {DPM}\xspace-based storage systems as well as disaggregated DRAM systems, but we have not explored this. Finally, while our work provides mechanisms for scaling {KNs}\xspace, it does not tackle the policy question of when {KNs}\xspace should be scaled. We consider these areas ripe for future work. \vheading{Evaluation}. We implement {\textsc{Dinomo}}\xspace in 10K lines of C++ code. We compare the end-to-end performance and scalability of {\textsc{Dinomo}}\xspace with Clover~\cite{tsai2020disaggre}, a state-of-the-art {DPM}\xspace KVS. Our experiments show that {\textsc{Dinomo}}\xspace achieves both better common-case performance and better scalability than Clover. {\textsc{Dinomo}}\xspace{}'s throughput scales to 16 {KNs}\xspace, while Clover's throughput does not scale beyond 4 {KNs}\xspace. With 16 {KNs}\xspace, {\textsc{Dinomo}}\xspace outperforms Clover by at least 3.8$\times$\xspace on all workloads we evaluate. We also show that {\textsc{Dinomo}}\xspace elastically scales out {KNs}\xspace, balances the load across {KNs}\xspace, and handles {KN}\xspace failures quickly.
In summary, this paper makes the following contributions: \begin{itemize} \item We present {\textsc{Dinomo}}\xspace, the first {DPM}\xspace key-value store that simultaneously achieves high performance, scalability, and lightweight online reconfiguration (\sref{sec:dinomo}) \item We present {\textsc{DAC}}\xspace, a novel adaptive caching policy that helps utilize the {KN}\xspace-side memory effectively without any assumptions on workload patterns (\sref{sec:adaptive-caching}) \item We adapt {\textsc{OP}}\xspace for {DPM}\xspace KVSs to achieve high performance, scalability, and lightweight reconfiguration (\sref{sec:ownership-part}) \item We experimentally show that {\textsc{Dinomo}}\xspace can efficiently react to both {KN}\xspace failures and load imbalance, and automatically scale the number of {KNs}\xspace by capturing load dynamics (\sref{sec:evaluation}) \end{itemize} \section{Background and Motivation} \input{tbl-kvs-comparison} We describe persistent memory (PM) and how it can be used in disaggregated settings. We then discuss prior key-value stores (KVSs) for disaggregated PM ({DPM}\xspace) and motivate the need for a new KVS. \subsection{Persistent Memory and Disaggregation} PM is a non-volatile memory technology with unique characteristics~\cite{pm-arxiv, yang20-pm}. PM is connected directly to the memory bus -- it is byte-addressable and has performance close to DRAM. It has high capacity: Intel's Optane DC PM is available in capacities of up to 512GiB per NVDIMM~\cite{IntelPMM}. The per-GB cost of PM is higher than that of high-end solid-state drives, but lower than that of DRAM~\cite{assise-osdi20}. To improve cost efficiency and PM utilization, prior work proposes {DPM}\xspace~\cite{tsai2020disaggre, ma2020asymnvm, MingFAST22, LiuICCD21, keetonOpenFAM19, volosSOCC18}. We note that our work is agnostic to the choice of PM technology and specific PM product (\textit{e.g.,}\xspace PCM~\cite{WongPCM}, STT-MRAM~\cite{ApalkovSTTMRAM}, Memristor~\cite{YangMemristor}, Optane DC PM~\cite{IntelPMM}, Memory-Semantic CXL SSD~\cite{flash22-keynote}). \vheading{Disaggregated PM}. In disaggregated settings, PM is available as a central, reliable pool of memory accessible over a network. \emph{{KVS nodes}\xspace} ({KNs}\xspace) are used to access the data in {DPM}\xspace; {KNs}\xspace have limited DRAM and use network primitives like RDMA to access the PM pool over a fast interconnect such as InfiniBand~\cite{sebastian2020disappl}, PMoF~\cite{Golander-SYSTOR17, Paul-PMSummit18}, or Gen-Z~\cite{genz}. Disaggregation allows independent scaling of PM and {KNs}\xspace and introduces separate failure domains, where {KN}\xspace failures do not cause {DPM}\xspace failures. {DPM}\xspace can be classified as active or passive. Active {DPM}\xspace has small processing units such as ARM SoCs, ASICs, or FPGAs, with high-bandwidth network ports. In active {DPM}\xspace, the compute capacity at {DPM}\xspace is used for local processing, including network, application-level, and data store processing~\cite{ma2020asymnvm, kim2018hyperloop, sidler2020strom}. Prior work has proposed data stores for active {DPM}\xspace that leverage this limited computational power~\cite{tsai2020disaggre, ma2020asymnvm, MingFAST22, zhiyuanASPLOS22, LiuICCD21}. In contrast, passive {DPM}\xspace has no computational abilities at the {DPM}\xspace pool; {KNs}\xspace can only use one-sided RDMA operations to access and modify the data in {DPM}\xspace.
Data stores for passive {DPM}\xspace~\cite{tsai2020disaggre} have poor performance and scalability due to the limited functionality of one-sided network primitives~\cite{aguilera2019designing}, showing that active {DPM}\xspace is the more practical deployment. \subsection{DPM Key-Value Stores}\label{sec:back-mot-dpm-kvs} Previously proposed {DPM}\xspace key-value stores differ in how they handle data, metadata, and ownership; metadata is the information used to locate and access data (like an index), and ownership determines whether a node can read or write a data item. \vheading{AsymNVM}. AsymNVM~\cite{ma2020asymnvm} adopts a shared-nothing architecture. Data in {DPM}\xspace is partitioned, and each partition is exclusively accessed by a single {KN}\xspace. Every {KN}\xspace uses its local memory to cache data from its partition (Table~\ref{tab:kvs-comparison}); caching helps reduce expensive network round trips to {DPM}\xspace. As {KNs}\xspace have exclusive ownership over data, their caches can preserve high locality and remain consistent without incurring additional consistency overheads. Thus, shared-nothing architectures provide high performance and scalability in the common case by effectively using {KN}\xspace caches to process requests. However, reconfiguring the number of {KNs}\xspace or balancing load across {KNs}\xspace requires physical reorganization of data and metadata~\cite{guz2017nof, klimovic2016flash, Bindschaedler2020hail, ma2020asymnvm}. For example, adding a new {KN}\xspace may require the metadata of a partition to be split, resulting in expensive data copies at {DPM}\xspace. Thus, AsymNVM offers performance at the expense of elasticity and fast reconfiguration. \vheading{Clover}. Clover~\cite{tsai2020disaggre} adopts a shared-everything architecture. All {KNs}\xspace share the ownership of data in {DPM}\xspace, and every {KN}\xspace can access and modify all data and metadata (Table~\ref{tab:kvs-comparison}). {KNs}\xspace can use local memory to cache data. However, due to sharing, {KNs}\xspace have poor cache locality and need to keep their caches consistent, incurring significant consistency overheads that reduce common-case performance and scalability~\cite{porobicVLDB12}. Nevertheless, Clover supports lightweight reconfiguration without re-partitioning data or metadata and allows straightforward load balancing across {KNs}\xspace. Overall, Clover offers elasticity and lightweight reconfiguration at the expense of high common-case performance and scalability. In summary, prior {DPM}\xspace key-value stores sacrifice one of high common-case performance, scalability, or lightweight reconfiguration for the other two (Table~\ref{tab:kvs-comparison}). Thus, there is a need for a new {DPM}\xspace key-value store that achieves all three properties simultaneously. \section{Related work} \label{sec:related-work} We place our contributions in the context of relevant prior work. \vheading{{DPM}\xspace architectures.} {\textsc{OP}}\xspace follows the idea that just because you \emph{can} share does not mean you \emph{should} share. This observation has been made before in other contexts. Storage Area Networks provide storage disaggregation in a data center~\cite{khattar1999introduction}, where volumes can be shared among hosts, but often they are not~\cite{caulfieldISCA13}. Key-value stores provide storage disaggregation in the cloud, where data can be shared among nodes, but applications may choose not to~\cite{snowflake}.
Fine-grained logical partitioning has been proposed to support live reconfigurations in in-memory key-value stores~\cite{kulkarni17rocksteady,atulOSDI16}, in-memory databases~\cite{elmore15squall}, and graph processing~\cite{xie19pragh}. Even multiprocessor shared-memory systems sometimes forgo sharing of data structures among threads, choosing instead to partition data~\cite{baumann09barrelfish,delegation,lim2014mica}. Our work demonstrates that partitioning logical ownership while sharing physical data and metadata in {DPM}\xspace provides high performance and lightweight reconfigurability. \vheading{{\textsc{DAC}}\xspace.} Adaptive caching policies have been explored in other contexts, illustrating how a single cache can be used for multiple purposes or how a replacement policy can consider multiple behaviors. For example, the Sprite operating system shared its memory between the file system buffer cache and the virtual memory system~\cite{nelson88spritecache}. The Adaptive Replacement Cache (ARC) uses a replacement policy that balances between recency and frequency of accesses~\cite{megiddo03arc}. In contrast to these systems, which use fixed-size cache entries with uniform miss penalties, {\textsc{DAC}}\xspace manages a cache where different types of entries (e.g., values vs. shortcuts) have different sizes and varying miss penalties. The novelty of our scheme arises from a new setting ({DPM}\xspace) where adaptivity is essential.
{ "timestamp": "2022-09-20T02:24:03", "yymm": "2209", "arxiv_id": "2209.08743", "language": "en", "url": "https://arxiv.org/abs/2209.08743" }
\section{Overview: conformal field theories} \label{section:cft} Let us introduce in this section the main ingredients of 2d CFTs. We will give an axiom-based introduction and follow closely the book of Sylvain Ribault \cite{ribaultplane} and the introductory notes of Bert Schellekens \cite{schellekens}. \\ The underlying spaces of the CFTs discussed in this thesis are the compactified complex plane $\mathbb{C} \cup \{\infty \}$, known as the Riemann sphere, and the complex torus $\frac{\mathbb{C}}{\mathbb{Z} + \tau \mathbb{Z}}$ with modular parameter $\tau \in \mathbb{C} - \mathbb{R}$. The discussion mainly concentrates on the Riemann sphere unless stated otherwise. \subsection{Symmetry algebra} In the Euclidean case we introduce the metric $\mathrm{d}s^2 = \mathrm{d}z \mathrm{d}\bar{z}$ on the Riemann sphere. We consider field theories invariant under conformal transformations, which prominently include scale transformations $z \mapsto \lambda z$, translations $z \mapsto z + a$ and rotations $z \mapsto e^{i\phi}z$. In general, conformal transformations comprise all angle-preserving coordinate transformations, i.e. those with $g \mapsto \Omega(z)g$ when $z \mapsto f(z)$. Locally this is fulfilled by any holomorphic map $(z,\bar{z}) \mapsto (f(z), \bar{f}(\bar{z}))$, with $\Omega(z) = f'(z)\bar{f}'(\bar{z})$. Here $\bar{z}$ is the complex conjugate of $z$. Expanding a conformal transformation $f(z)$ around the identity map, $f(z) = z + \epsilon(z)$, yields the infinite-dimensional Witt algebra with holomorphic/antiholomorphic generators $l_n := - z^{n+1}\partial_z$ and $\bar{l}_n := - \bar{z}^{n+1}\partial_{\bar{z}}$, and commutation relations, \begin{gather} [l_n,l_m] = (n-m)l_{n+m}, \quad [\bar{l}_n,\bar{l}_m] = (n-m)\bar{l}_{n+m}, \quad [l_n,\bar{l}_m] = 0. \end{gather} The global angle-preserving transformations are given by the Möbius group $\mathrm{PSL}_2(\mathbb{C}) = \frac{SL_2(\mathbb{C})}{\{id, -id\}}$ and the identification, \begin{equation} \begin{pmatrix} a & b \\ c & d \end{pmatrix} \mapsto \frac{az+b}{cz+d}. \end{equation} A common rescaling of the matrix elements does not change the transformation. Hence it is enough to consider only matrices with determinant one and to identify $[g] = \{g, -g\}$, i.e. $g \in \mathrm{PSL}_2$. A change of phase does not change observables in quantum field theories, and therefore we consider (projective) Lie group representations $R$ carrying an additional phase that depends on the group elements being multiplied, $R(g)R(g') = e^{i\phi(g,g')}R(gg')$. At the level of the Lie algebra this extra phase is accounted for by a central extension. In our case the unique central extension of the Witt algebra is the Virasoro algebra $\mathfrak{V}$. Formally, the Virasoro algebra is defined as the complex vector space spanned by $\{L_{n \in \mathbb{Z}}, c\}$ with Lie brackets, \begin{gather} \label{virasoro} [L_n, L_m] = (n-m)L_{n+m} + \frac{c}{12} n(n^2 - 1)\,\delta_{n+m,0}, \quad [L_n, c] = 0. \end{gather} $c \in \mathbb{C}$ is the central charge of the theory and commutes with all other elements. In addition we have a second copy $\bar{\mathfrak{V}}$ of the Virasoro algebra with the same central charge, accounting for the antiholomorphic part, with $[\bar{L}_n,L_m] = 0$. The complete symmetry algebra of space-time transformations of CFTs is the direct sum $\mathfrak{V} \oplus \bar{\mathfrak{V}}$.
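As a quick consistency check on the Witt brackets introduced above (a standard computation, included here for concreteness), note that the two minus signs cancel in the product $l_n l_m$ and the second-derivative terms drop out of the commutator,
\begin{gather}
[l_n, l_m] = z^{n+1}\partial_z\left(z^{m+1}\partial_z\right) - z^{m+1}\partial_z\left(z^{n+1}\partial_z\right) = \left((m+1)-(n+1)\right) z^{n+m+1}\partial_z = (n-m)\, l_{n+m},
\end{gather}
using $l_{n+m} = -z^{n+m+1}\partial_z$ in the last step. The central term in (\ref{virasoro}) is invisible at this classical level; it arises only in the quantum theory.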
In order to be able to calculate observables, we further equip it with representations of $\mathfrak{V} \oplus \bar{\mathfrak{V}}$ and with fields transforming in these representations. Loosely speaking, the space of representations is called the spectrum $\mathbb{S}$ of the CFT. For simplicity we only investigate representations of one copy of the Virasoro algebra and tensor them to obtain representations of the complete symmetry algebra $\mathfrak{V} \oplus \bar{\mathfrak{V}}$. \textit{Remark:} From now on we do not distinguish between the action $R(g)$ of the representation and the algebra element $g \in \mathfrak{g}$. Similarly, we do not distinguish between the representation $R$ and the vector space $V_R$ it acts on. \subsubsection{Highest weight representation} \textit{Axiom:} We demand that $L_0$ is diagonalizable. Let us introduce the Verma module with highest weight vector $|\Delta\rangle$, \begin{equation} \label{virverma} \mathcal{V}_{\Delta} := U(\mathfrak{B}^-)|\Delta\rangle, \end{equation} where $U(\mathfrak{B}^-)$ is the universal enveloping algebra of the subalgebra of the Virasoro algebra spanned by the negative modes $L_{n<0}$. We equip the space $U(\mathfrak{B^-})$ with the basis $\mathfrak{L}:= \{L_N := L_{-n_1} \cdot ... \cdot L_{-n_M}\}_{1\leq n_1 \leq ... \leq n_M}$ and call the integer $N := \sum_{i=1}^M n_i$ the level. Correspondingly we say that the vector $L_{-n_1} \cdot ... \cdot L_{-n_M}|\Delta \rangle$ is at level $N$. The action of $\mathfrak{V}$ on the Verma module is then given by \begin{gather} \label{virverma2} L_{n>0} |\Delta \rangle = 0, \quad L_0 |\Delta\rangle = \Delta |\Delta\rangle, \quad \Delta \in \mathbb{C}. \end{gather} The highest weight vector $|\Delta\rangle$ of the representation (\ref{virverma2}) is also called the primary state and $\Delta$ its conformal dimension. We will see in section (\ref{conformaltransformation}) the role of the conformal dimension in conformal transformations of primary fields. With respect to $L_0$, the descendant states at level $N>0$ have eigenvalue $L_0 L_N|\Delta\rangle = (\Delta + N)L_N|\Delta\rangle$. This implies that the real part of the $L_0$-eigenvalues is bounded from below: $\mathrm{Re}(\Delta + N) \geq \mathrm{Re}(\Delta)$. We interpret the operator $H \sim L_0 + \bar{L}_0$ as the Hamiltonian of the theory, and the spectrum is therefore stable because of the lower bound on the energies. \subsubsection{Degenerate representations} \label{degrep} In this section we work out irreducible Verma modules and degenerate representations. In section (\ref{virasorofusion}) we will then see how degenerate representations constrain correlation functions. Verma modules (\ref{virverma}) are reducible if they contain null vectors: \begin{definition}[Null vector] A vector $|\chi,N\rangle := \sum_{|M| = N} a_M L_{-M} |\Delta\rangle$ fulfilling the condition $L_{n>0}|\chi,N\rangle = 0$ is called a null vector at level $N>0$. \end{definition} A null vector $|\chi,N\rangle$ generates an invariant subspace $U(\mathfrak{B}^-)|\chi,N\rangle \subset \mathcal{V}_{\Delta}$, and therefore the representation is reducible. There is a very helpful theorem connecting null vectors and irreducibility: \begin{theorem} The Verma module is irreducible if and only if it contains no null vectors. \end{theorem} Accordingly, we build irreducible representations (shorthand \textit{irreps}) by taking the quotient of the Verma module by the invariant subspaces generated by all null vectors. The quotient space is called a degenerate representation.
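\textit{Remark:} The dimension of the level-$N$ subspace of a generic Verma module is the number of partitions of $N$. A small Python sketch (ours, for illustration only) enumerates the ordered index tuples labelling the basis $\mathfrak{L}$ at level $N$:
\begin{verbatim}
def level_basis(N):
    # ordered tuples 1 <= n_1 <= ... <= n_M with sum N, labelling
    # the basis vectors L_{-n_1}...L_{-n_M}|Delta> at level N
    def parts(n, lo):
        if n == 0:
            yield ()
        for k in range(lo, n + 1):
            for rest in parts(n - k, k):
                yield (k,) + rest
    return list(parts(N, 1))

print([len(level_basis(N)) for N in range(1, 6)])
# [1, 2, 3, 5, 7]: the partition numbers p(N)
print(level_basis(3))   # [(1, 1, 1), (1, 2), (3,)]
\end{verbatim}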
For any two positive integers $r,s$ with product $rs = N$ there exists a null vector $|\chi_{r,s}\rangle$ at level $N = rs$; in total there are $p_N := \sum_{rs = N} 1$ inequivalent null vectors at that level, \begin{gather} \label{virdeg} |\chi_{r,s}\rangle = L_{r,s}|\Delta_{r,s}\rangle, \quad L_{r,s} = \sum_{|M| = N} d_M(c,r,s) L_M. \end{gather} The coefficients $d_M$ depend on the central charge $c$ and the integers $r,s$. For example, in the case of level 2 null vectors we have \begin{equation} L_{2,1} = L_{-1}^2 + b^2 L_{-2}, \quad L_{1,2} = L_{-1}^2 + b^{-2} L_{-2}, \end{equation} where $b$ is introduced in (\ref{I.7}). Moreover, the conformal dimension $\Delta_{<r,s>} = \Delta (P_{<r,s>})$ in (\ref{virdeg}) is determined by the null vector condition on $|\chi_{r,s}\rangle$ and given in terms of the central charge $c$ by the parametrization \begin{gather} \label{I.7} \Delta(P) := \frac{c - 1}{24} - P^2, \quad P_{<r,s>} := \frac{1}{2}(rb + sb^{-1}), \quad c := 13 + 6 b^2 + 6 b^{-2}. \end{gather} $P$ is called the momentum and $b$ the coupling constant. A widely used relation is $\Delta_{<r,-s>} = \Delta_{<r,s>} + rs = \Delta_{<-r,s>}$, where $\Delta_{<r,-s>}$ is the conformal dimension of the null vector $|\chi_{r,s}\rangle$. As functions of the coupling constant, $P_{<r,s>}(b)$ and $\Delta_{<r,s>}(b)$ are invariant under the simultaneous exchange $r \leftrightarrow s$ and $ b \leftrightarrow b^{-1}$ (let us name the transformation $b \rightarrow b^{-1}$ the \textbf{$b$-symmetry}). For generic values of $c$, the null vector $|\chi_{r,s}\rangle$ is the only one in the Verma module $\mathcal{V}_{\Delta_{<r,s>}}$, and thus the quotient space \begin{equation} R_{r,s} := \frac{\mathcal{V}_{\Delta_{<r,s>}}}{\mathcal{V}_{\Delta_{<r,-s>}}} \end{equation} forms a degenerate representation with a \textit{single} null vector. \noindent \textbf{A-series minimal models:} A very prominent example of CFTs are the A-series minimal models, including the critical Ising model. Their spectra are made of \textit{doubly} degenerate representations, which contain two null vectors and therefore satisfy $\Delta_{<r,s>} = \Delta_{<r',s'>}$ for some $(r,s) \neq (r',s')$. This condition on the conformal dimension determines the central charge $c_{p,q} = 1 - 6 \frac{(q-p)^2}{pq}$ for $p,q$ coprime integers. In particular, A-series minimal models demand the extra condition $2 \leq p, q$, since otherwise their spectrum is empty. For example, the critical Ising model has central charge $c = \frac{1}{2}$ with $p = 4, q = 3$. \subsubsection{State field correspondence} We demand that there exists a one-to-one map $\mathfrak{i}$ between states in the spectrum $\mathbb{S}$ and fields $V(z)$, \begin{gather} \mathfrak{i}: |v\rangle \mapsto V_{|v\rangle}(z). \end{gather} $\mathfrak{i}$ extends the action of $\mathfrak{V}$ onto fields by $L^{(z)}_n V_{|v\rangle}(z) := V_{L_n|v\rangle}(z)$. In string theory the correspondence can be thought of as placing the space of states at the origin of the Riemann sphere $z = 0$, corresponding to the far past on the string worldsheet. The state is translated to any point on the sphere by $V_{|v\rangle}(z) = e^{zL_{-1}} V_{|v\rangle}(0) e^{-zL_{-1}}$.
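\textit{Numerical check:} The parametrization (\ref{I.7}), the shift relation $\Delta_{<r,-s>} = \Delta_{<r,s>} + rs$, the $b$-symmetry and the Ising value $c_{4,3} = \frac{1}{2}$ can all be verified with a short Python/SymPy sketch (ours, for illustration only):
\begin{verbatim}
import sympy as sp

b, r, s = sp.symbols('b r s')
c = 13 + 6*b**2 + 6*b**-2

P = lambda r, s: (r*b + s/b)/2               # momentum P_<r,s>, eq. (I.7)
Delta = lambda r, s: (c - 1)/24 - P(r, s)**2

# Delta_<r,-s> = Delta_<r,s> + r s
print(sp.simplify(Delta(r, -s) - Delta(r, s) - r*s))        # 0
# b-symmetry: r <-> s together with b -> 1/b
print(sp.simplify(Delta(r, s) - Delta(s, r).subs(b, 1/b)))  # 0
# A-series value c_{p,q} = 1 - 6 (q-p)^2/(p q) for the Ising model
print(sp.Integer(1) - 6*sp.Rational((3 - 4)**2, 4*3))       # 1/2
\end{verbatim}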
\newline A general representation of the symmetry algebra $\mathfrak{V} \oplus \bar{\mathfrak{V}}$ is realized by tensoring two Verma modules (or, equivalently, degenerate representations), \begin{gather} \label{fieldrep} \begin{aligned} \mathfrak{V} \oplus \bar{\mathfrak{V}} &\rightarrow End(\mathcal{V}_{\Delta} \otimes \mathcal{V}_{\bar{\Delta}}) \\ L_n + \bar{L}_m &\mapsto (L_n +\bar{L}_m)V_{\Delta,\bar{\Delta}}(z) := V_{L_n|\Delta\rangle \otimes |\bar{\Delta}\rangle + |\Delta\rangle \otimes \bar{L}_m|\bar{\Delta}\rangle}(z). \end{aligned} \end{gather} The field $V_{\Delta,\bar{\Delta}}(z)$ is called a \textbf{primary field} and corresponds to the highest weight state $|\Delta\rangle \otimes |\bar{\Delta}\rangle \in \mathcal{V}_{\Delta} \otimes \mathcal{V}_{\bar{\Delta}}$. We call fields diagonal if $\Delta = \bar{\Delta}$ and write $V_{\Delta}$. In turn, a CFT is called diagonal if its spectrum contains only diagonal fields. Be careful with the expression $\bar{\Delta}$: it is not the complex conjugate of $\Delta$ but the conformal dimension corresponding to $\bar{\mathfrak{V}}$. \textit{Axiom:} We introduce the additional axiom that the generator $L_{-1}$/$ \bar{L}_{-1}$ acts on fields as the derivative in $z$/$\bar{z}$. There exists an exceptional field, the Virasoro field $T(y)$, which generates the symmetry algebra $\mathfrak{V}$. The Virasoro field is identified with the energy momentum tensor of conformal symmetries. It is defined implicitly, such that its $(n+1)$-th moment at $z$ is the generator $L^{(z)}_{n}$ (analogously $\bar{T}$ generates $\bar{L}^{(z)}_{n}$), \begin{equation} L^{(z)}_n = \frac{1}{2\pi i} \oint_{z} dy (y-z)^{n+1}T(y). \end{equation} Therefore $T(y)$ can be written in terms of its moments, \begin{equation} \label{tmoments} T(y) = \sum_{n \in \mathbb{Z}} \frac{L_n^{(z)}}{(y-z)^{n+2}}. \end{equation} \textit{Axiom:} $T(y)$ is holomorphic and behaves as $O(y^{-4})$ at $y = \infty$. On the quantum level the fields are operators, and we are interested in the operator product expansion (shorthand \textbf{OPE}) of $T$ with a primary field $V_{\Delta,\bar{\Delta}}$ and of $T$ with itself, \begin{gather} \label{TVOPE} \begin{aligned} T(y)V_{\Delta,\bar{\Delta}}(z) &\underset{y \rightarrow z}{=} \frac{\Delta V_{\Delta,\bar{\Delta}}(z)}{(y-z)^2} + \frac{\partial_z V_{\Delta,\bar{\Delta}}(z) }{(y-z)} + O(1) \\ T(y)T(z) &\underset{y \rightarrow z}{=} \frac{c/2}{(y-z)^4} + \frac{2T(z)}{(y-z)^2} + \frac{\partial_z T(z)}{(y-z)} + O(1). \end{aligned} \end{gather} In our case the OPEs (\ref{TVOPE}) have a finite radius of convergence and are therefore well defined. The $TT$-OPE is equivalent to the Virasoro commutation relations (\ref{virasoro}), and the $TV$-OPE is generic for primary fields. \subsection{Observables} Up to this point we have introduced the underlying space, the symmetry algebra, the representations and the fields. Now it is time to discuss the observables of the CFT. Beforehand we introduce a generic notation for the complex modulus, which is often used because of the factorizable left and right chiral representation $\mathfrak{V} \oplus \bar{\mathfrak{V}}$, \begin{equation} |f(\Delta,z,L_n)| ^2 := f(\Delta,z, L_n) f(\bar{\Delta},\bar{z},\bar{L}_n). \end{equation} \subsubsection{Correlation functions} We define the observables to be correlation functions of $N$ fields (shorthand \textbf{N-point functions}). The correlator is a multilinear function of the fields, depending on their conformal dimensions and positions.
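\textit{Numerical check:} The moment formula can be made concrete: inserting the singular part of the $TV$-OPE (\ref{TVOPE}) and taking residues reproduces the primary-state conditions (\ref{virverma2}) together with $L_{-1} = \partial_z$. A short Python/SymPy sketch (ours, for illustration only):
\begin{verbatim}
import sympy as sp

y, z, Delta = sp.symbols('y z Delta')
V = sp.Function('V')

# singular part of the T(y) V(z) OPE, eq. (TVOPE)
TV = Delta*V(z)/(y - z)**2 + sp.diff(V(z), z)/(y - z)

def L(n):
    # L_n^{(z)} V(z) = Res_{y=z} (y-z)^{n+1} T(y) V(z)
    return sp.residue((y - z)**(n + 1)*TV, y, z)

print(L(0))    # Delta*V(z)          : L_0 eigenvalue
print(L(-1))   # Derivative(V(z), z) : L_{-1} = d/dz
print(L(1))    # 0                   : L_{n>0} annihilates a primary
\end{verbatim}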
In the bosonic case the correlator transforms in the one-dimensional symmetric representation of the symmetric group $S_N$. Diagonal CFTs are bosonic and contain only spin zero fields (see \ref{conformaltransformation}). We write the N-point function as \begin{equation} \langle \prod_{i = 1}^N V_{\sigma_i,\bar{\sigma}_i}(z_i)\rangle, \end{equation} where $(\sigma_i,\bar{\sigma}_i)$ indicates any field in the representation (\ref{fieldrep}). The N-point functions are restricted by Ward identities, which are determined by the Virasoro symmetry of conformal field theory. Ward identities can be obtained by a weighted contour integral of the meromorphic correlator $Z(y) := \langle T(y)\prod_i V_{\sigma_i}(z_i)\rangle$ around all poles $y = z_i$. The weight $\epsilon(y)$ is chosen such that it has no poles outside $\{z_i\}_i$, \begin{equation} \label{wardint} \oint_{\infty} \mathrm{d}y\, \epsilon(y) Z(y) = 0, \quad \epsilon \underset{y \rightarrow \infty}{\leq} O(y^2). \end{equation} This integral is evaluated using the $TV$-OPE (\ref{TVOPE}) and the residue theorem of complex analysis. \textit{Infinitesimal transformations:} A general result of quantum field theory is that an infinitesimal transformation $f(z) = z + \epsilon(z)$ of a generic field $\Psi(z)$ is given by \cite{schellekens} \begin{equation} \delta_{\epsilon}\Psi(z) = \frac{1}{2 \pi i} \oint_{z} dy\, \epsilon(y) T(y) \Psi(z). \end{equation} This statement motivates the use of $Z(y)$ and connects it to conformal transformations of N-point functions. \subsubsection{Global Ward identities} \label{conformaltransformation} Let us consider $g \in \mathrm{PSL_2(\mathbb{C})}$, a Möbius transformation, and the corresponding infinitesimal transformations $\epsilon \in \{1,y,y^2\}$ in equation (\ref{wardint}). We can deduce how a primary field transforms under $g$, \begin{equation} \label{fieldtransform} V_{\Delta,\bar{\Delta}}'(z) := T_g V_{\Delta,\bar{\Delta}}(z) = |(cz+d)^{-2\Delta}|^2V_{\Delta,\bar{\Delta}}(gz), \end{equation} with the action of $PSL_2$ on the sphere given by $gz := \frac{az+b}{cz+d}$. N-point functions stay invariant under global conformal transformations, as a consequence of exponentiating the Ward identities (\ref{wardint}) with $\epsilon \in \{1,y,y^2\}$. Therefore the classical conformal invariance transmits anomaly-free to the quantum theory, \begin{equation} \label{conformal_invariance} \langle \prod_{i=1}^N V_{\Delta_i,\bar{\Delta_i}}(z_i) \rangle = \langle \prod_{i=1}^N T_g V_{\Delta_i,\bar{\Delta_i}}(z_i) \rangle. \end{equation} The $\mathrm{PSL}_2$ invariance of N-point functions implies that within correlation functions three coordinates can be chosen freely. In particular, 2- and 3-point functions are determined on the whole sphere if they are known at some fixed coordinates. The invariance equation (\ref{conformal_invariance}) generalizes to local conformal, i.e. holomorphic, transformations $z \rightarrow h(z)$, \begin{equation} \langle \prod_{i=1}^N V_{\Delta_i,\bar{\Delta_i}}(z_i) \rangle = \prod_{i=1}^N |h'(z_i)^{\Delta_i}|^2 \langle \prod_{i=1}^N V_{\Delta_i,\bar{\Delta_i}}(h(z_i)) \rangle. \end{equation} Outside the domain of biholomorphism of $h(z)$, especially at points with $h^{'}(z_0) \in \{0,\infty\}$, descendant states $L_{n\leq -2}V_{\Delta}$ appear in the correlation function. But we do not go into more detail here. \textit{Spin of a primary:} Under space rotations $f(z) = e^{i\phi}z$ the field transforms as $V_{\Delta,\bar{\Delta}}(z) \rightarrow e^{i \phi (\Delta - \bar{\Delta})}V_{\Delta,\bar{\Delta}}(z)$.
Let $S := \Delta - \bar{\Delta}$ be the spin of the primary field. For $S \in \frac{1}{2} + \mathbb{Z}$ the field flips sign under a $\phi = 2 \pi$ rotation and is thus fermionic. For $S \in \mathbb{Z}$ the field is bosonic. In general the spin can take any value, but for the ongoing discussion we do not specify it or consider diagonal fields, i.e. spin zero. \textit{Fields at the north pole:} Another subtlety is how primary fields and their descendants behave analytically at the point $z = \infty$. From the transformation properties (\ref{fieldtransform}) we deduce that $V_{\Delta}(\infty) = \mathrm{lim}_{z \rightarrow \infty} |z^{2\Delta}|^2 V_{\Delta}(z)$ (choose $gz = -\frac{1}{z}$ and use equation (\ref{conformal_invariance})). From the axiom identifying $L_{-1} = \partial_z$, we can argue that $L_{-1}V_{\Delta}(\infty) = \mathrm{lim}_{z \rightarrow \infty} z^{2\Delta + 1}\bar{z}^{2\bar{\Delta}} L_{-1}V_{\Delta}(z)$, and generally \begin{equation} L \bar{L}V_{\Delta}(\infty) := \mathrm{lim}_{z \rightarrow \infty} |z^{2\Delta + |L|}|^2 L \bar{L} V_{\Delta}(z). \end{equation} \subsubsection{Local Ward identities} If we choose $\epsilon(y) = \frac{1}{(y-z_i)^{n-1}}, \quad n>1$ in equation (\ref{wardint}), we get a set of local Ward identities. For simplicity we concentrate only on the case where there is at most one descendant field in the N-point function, \begin{equation} \label{localward} \langle L^{(z_i)}_{-n} V_{\sigma_i}(z_i) \prod_{j \neq i} V_{\Delta_j}(z_j)\rangle = \sum_{j \neq i} \bigg(-\frac{\partial_{z_j}}{z_{ji}^{n-1}} + \frac{(n-1)\Delta_j}{z_{ji}^n}\bigg)\langle V_{\sigma_i}(z_i) \prod_{j \neq i} V_{\Delta_j}(z_j) \rangle, \end{equation} where we introduce the shorthand notation $z_{ij} := z_i - z_j$. In our setup it can be shown by induction that any N-point function with descendant fields can be reduced, by differential operators in $z$, to N-point functions containing only lower-level descendant fields. The global and local Ward identities of the right chiral symmetry $\bar{\mathfrak{V}}$ take a similar form. \subsubsection{Operator product expansion} In the introduction we discussed how conformal field theories are especially interesting because they possess non-perturbative solutions. Hence the $VV$-OPE of primary fields is introduced, in order to be able to solve the CFT exactly. By axiom the $VV$-OPE takes the form \begin{equation} \label{OPEform} V_{\sigma_1,\bar{\sigma_1}}(z_1) V_{\sigma_2,\bar{\sigma_2}}(z_2) \underset{z_1 \rightarrow z_2}{=} \sum_{\sigma_3,\bar{\sigma_3} \in \mathbb{S}} C^{12}_{3}(z_1,z_2) V_{\sigma_3,\bar{\sigma_3}}(z_2). \end{equation} The sum on the right side runs over all fields in the spectrum $\mathbb{S}$ of the CFT. In the case of Virasoro symmetry the $VV$-OPE has a finite radius of convergence. Applying $\frac{1}{2\pi i}\oint_{z_1,z_2} \mathrm{d}z (z-z_2)^{n+1}T(z)$ on both sides of (\ref{OPEform}) yields the OPE Ward identities, \begin{equation} \bigg (L_n^{(z_2)} + \sum_{m=-1}^n \binom{n+1}{m+1} z_{12}^{n-m} L_m^{(z_1)}\bigg )V_{\sigma_1}(z_1)V_{\sigma_2}(z_2) = \sum_{\sigma_3 \in \mathbb{S}} C^{12}_{3}(z_1,z_2) L^{(z_2)}_nV_{\sigma_3}(z_2).
\end{equation} The solution of the OPE Ward identities reads \begin{equation} \label{vvope} V_{\Delta_1,\bar{\Delta_1}}(z_1) V_{\Delta_2,\bar{\Delta_2}}(z_2) = \sum_{\Delta_3,\bar{\Delta_3} \in \mathbb{S}} \frac{C_{123}}{B_3} \bigg |z_{12}^{\Delta_3^{12}} \sum_{L\in U(\mathfrak{B^-})} z_{12}^{|L|}f^L_{1,2,3}L \bigg |^2V_{\Delta_3,\bar{\Delta_3}}(z_2), \end{equation} where we use the notation $\Delta_I^J = \sum_{i \in I} \Delta_i - \sum_{j \in J}\Delta_j$ and the structure constants $B_i$ and $C_{123}$ of the 2- and 3-point functions (see equations (\ref{2p}) and (\ref{3p})). The coefficients $f^L_{1,2,3}$ are the solutions of the linear equations \begin{equation} \sum_{|L| = N-n} f^L_{1,2,3}(\Delta_3 + N - n + n\Delta_1 - \Delta_2)LV_{\Delta_3}(z) = \sum_{|L|=N}f^L_{1,2,3}L_nLV_{\Delta_3}(z). \end{equation} Up to the factor $\frac{C_{123}}{B_3}$ and the spectrum $\mathbb{S}$, the $VV$-OPE of primaries is thus known from symmetry considerations alone. Therefore N-point functions of primaries can be reduced to (N-1)-point functions by the $VV$-OPE. \subsection{Conformal blocks} In general, N-point functions can be written as a sum of spectrum-dependent structure constants $C_J$ and universal $z$-dependent functions $\mathfrak{F}_J^{(N)}(z)$, \begin{gather} \langle \Pi_i V_i(z_i)\rangle = \sum_J C_J \mathfrak{F}_J^{(N)}(z)\mathfrak{F}_J^{(N)}(\bar{z}). \end{gather} The functions $\mathfrak{F}^{(N)}$ are the conformal blocks of N-point functions (shorthand \textbf{blocks}). Conformal blocks are universal objects of CFTs, determined by symmetry considerations alone; together with the structure constants they determine all correlation functions and thus build up the theory. \subsubsection{2- and 3-point blocks} For the 2- and 3-point functions there is only one unknown structure constant and the blocks take a simple form. In both cases the correlator is determined by the global Ward identity (\ref{conformal_invariance}). In the 2-point case the correlator and 2-point block are given by \begin{gather} \label{2p} \begin{aligned} \langle V_{\Delta_1,\bar{\Delta}_1}(z_1) V_{\Delta_2,\bar{\Delta}_2}(z_2)\rangle &= B_1 |\mathfrak{F}^{(2)}(\Delta_1,\Delta_2|z_1,z_2)|^2 \\ \mathfrak{F}^{(2)} &= \delta_{\Delta_1,\Delta_2} z_{12}^{-2\Delta_1}, \end{aligned} \end{gather} and the 3-point correlator and block by \begin{gather} \label{3p} \begin{aligned} \langle V_{\Delta_1,\bar{\Delta}_1}(z_1)V_{\Delta_2,\bar{\Delta}_2}(z_2)V_{\Delta_3,\bar{\Delta}_3}(z_3)\rangle &= C_{123} |\mathfrak{F}^{(3)}(\Delta_1,\Delta_2,\Delta_3|z_1,z_2,z_3)|^2 \\ \mathfrak{F}^{(3)} &= z^{\Delta^{12}_3}_{12}z^{\Delta^{23}_1}_{23}z^{\Delta^{31}_2}_{31}. \end{aligned} \end{gather} \subsubsection{4-point blocks} The four-point function \begin{equation} \langle V_{\Delta_1,\bar{\Delta}_1}(z_1)V_{\Delta_2,\bar{\Delta}_2}(z_2)V_{\Delta_3,\bar{\Delta}_3}(z_3)V_{\Delta_4,\bar{\Delta}_4}(z_4)\rangle \end{equation} is more involved. First, the global Ward identities (\ref{conformal_invariance}) only determine three degrees of freedom, so we are obliged to use the local Ward identities, i.e. the $VV$-OPE (\ref{vvope}), to fix the remaining degree of freedom. Second, in the 4-point case there are three possibilities to apply the $VV$-OPE (field 1 with field 2, 3 or 4). Each possibility corresponds to one of the three channels s, t and u. Those channels are related to each other by a permutation of the conformal dimensions $\{\Delta_i\}_{i = 1,..,4}$ and complex coordinates $\{z_i\}_{i = 1,...,4}$.
In the s-channel case, where the primaries $V_1$ and $V_2$ are OPE'd, the result reads \begin{gather} \label{4point} \mathfrak{F}_{\Delta_s}^{(4,s)}(\Delta_1,\Delta_2,\Delta_3,\Delta_4|z_1,z_2,z_3,z_4) = z_{13}^{-2\Delta_1}z_{23}^{\Delta^{14}_{23}}z_{34}^{\Delta^{12}_{34}} z_{24}^{\Delta^{3}_{124}}\bigg(x^{\Delta^{s}_{12}} \sum_{L \in \mathfrak{L}}x^{|L|}f_{\Delta_1,\Delta_2,\Delta_s}^Lg_{\Delta_s,\Delta_3,\Delta_4}^L\bigg), \end{gather} with coefficients $g^L_{\Delta_s,\Delta_3,\Delta_4} = \frac{\langle L V_{\Delta_s}(z_s) V_{\Delta_3}(z_3) V_{\Delta_4}(z_4)\rangle}{C_{s34}} = D_{z_s}(L) \mathfrak{F}^{(3)}(\Delta_s,\Delta_3,\Delta_4|z_s,z_3,z_4)$, where $D_{z_s}(L)$ is the differential operator in $z_s$ associated to the descendant $L \in \mathfrak{L}$. \textit{Remark:} The term in brackets in equation (\ref{4point}) corresponds to the limit $(z_i)_{i=1,...,4} \rightarrow (x,0,\infty, 1)$, i.e. to $\mathfrak{F}^{(4)}_{\Delta_s}(...|x,0,\infty,1)$. For the sake of completeness, we show how the u-channel block is related to the s-channel block. In the u-channel we apply the OPE to the fields $V_1$ and $V_3$. The resulting u-channel block differs by a permutation from the s-channel block, \begin{equation} \mathfrak{F}^{(4,u)}_{\Delta_u}(\Delta_1,\Delta_2,\Delta_3,\Delta_4|z_1,z_2,z_3,z_4) = \mathfrak{F}^{(4,s)}_{\Delta_u}(\Delta_1,\Delta_3,\Delta_2,\Delta_4|z_1,z_3,z_2,z_4). \end{equation} In the limit $(z_i)_{i=1,...,4} \rightarrow (x,0,\infty,1)$, and using the definition $\mathfrak{F}^{(4)}(\Delta_i|x) := \mathfrak{F}^{(4)}(\Delta_i|x,0,\infty,1)$, we get \begin{equation} \mathfrak{F}^{(4,u)}_{\Delta_u}(\Delta_{i}|x) = x^{-2\Delta_1}\mathfrak{F}^{(4,s)}_{\Delta_u}(\Delta_{i}|\frac{1}{x}), \end{equation} where we used the conformal transformation $x \mapsto \frac{0 \cdot x + i}{ix + 0} = \frac{1}{x}$. Similar statements can be worked out relating the t- and s-channel blocks and the t- and u-channel blocks. \subsubsection{Torus: 1-point block} \label{t1p} If the underlying space is a different Riemann surface, in our example the torus, we need to account for the extra structure when calculating correlation functions. A torus can be described by a 2d lattice on the complex plane. We identify the points $z \sim w$ if and only if $z = w + n w_1 + m w_2$, where $n,m$ are integers and $w_1, w_2$ linearly independent complex numbers spanning the lattice. Global conformal invariance allows us to rescale the lattice vectors to $w_1 = 1$ and $w_2 = \tau \in \mathbb{H}$ in the upper half-plane. Fields must respect the identification, $\phi(z) = \phi(w)$ if $w \sim z$. The torus is then the quotient space $\mathbb{C}/(\mathbb{Z} + \tau \mathbb{Z}) := \mathbb{C}/\!\sim$. In the path integral formalism it is easy to implement the identification $\phi(z) = \phi(w)$ if $z \sim w$. The path integral formalism needs a Lagrangian description, but the result can be generalized and only needs a well-defined spectrum $\mathbb{S}$ to trace over, \begin{equation} \label{torcorr} \langle G[\phi](x_i) \rangle \propto \int_{\phi(z) = \phi(w)} \mathrm{D}\phi\, G[\phi](x_i) e^{-S_E(\phi)} = \mathrm{tr}_{\mathbb{S}}(G[\phi](x_i)e^{2\pi i \tau (L_0 - \frac{c}{24})}e^{-2\pi i \bar{\tau} (\bar{L}_0 - \frac{c}{24})}). \end{equation} For $G := 1$, (\ref{torcorr}) is known as the torus partition function; the trace over a single Verma module $\mathcal{V}_{\Delta}$ contributes the character $q^{\Delta - \frac{c-1}{24}}\eta(q)^{-1}$, where $\eta(q) = q^{\frac{1}{24}}\prod_{i \in \mathbb{N}_{*}}(1-q^i)$ is the Dedekind eta function \cite{francesco}.
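\textit{Numerical check:} Up to the overall power of $q$, the character is the generating function of the partition numbers, matching the level degeneracies of the Verma module. A short Python/SymPy sketch (ours, for illustration only):
\begin{verbatim}
import sympy as sp

q = sp.symbols('q')
Nmax = 6
# 1/prod_{n>=1}(1 - q^n): generating function of the partition
# numbers, i.e. of the level degeneracies of a generic Verma module
gen = 1/sp.prod([1 - q**n for n in range(1, Nmax + 1)])
print(sp.series(gen, q, 0, Nmax + 1))
# 1 + q + 2*q**2 + 3*q**3 + 5*q**4 + 7*q**5 + 11*q**6 + O(q**7)
\end{verbatim}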
The partition function is invariant under modular transformations $\mathrm{PSL}_2(\mathbb{Z})$ of $\tau$, which corresponds to the freedom of choosing a lattice basis. Evaluating the 1-point function $G[\phi_{\Delta,\bar{\Delta}}] = \phi_{\Delta,\bar{\Delta}}$ in equation (\ref{torcorr}), in terms of blocks, yields ($q := e^{2 \pi i \tau}$) \begin{equation} \label{tor1pt} \langle \phi_{\Delta_1,\bar{\Delta}_1}(0) \rangle = \sum_{\Delta, \bar{\Delta}} C_{\Delta_1,\bar{\Delta}_1,\Delta,\bar{\Delta}} |\mathfrak{T}^{(1)}_{\Delta}(\Delta_1|q)|^2, \end{equation} where we introduced the torus 1-point block $\mathfrak{T}^{(1)}$. A $\Delta$-recursive representation of the torus 1-point block was proposed by Fateev and Litvinov \cite{fateev}, \begin{gather} \label{torrec} \begin{aligned} \mathfrak{T}^{(1)}_{\Delta}(\Delta_1|q) &= \frac{q^{\Delta - \frac{c-1}{24}}}{\eta(q)}H_{\Delta}(\Delta_1|q) \\ H_{\Delta} &= 1 + \sum_{m,n=1}^{\infty}\frac{q^{mn}R_{m,n}(\Delta_1,c)}{\Delta - \Delta_{m,n}}H_{\Delta_{m,-n}}, \end{aligned} \end{gather} where $R_{m,n}$ is defined in equation (\ref{rmn}). This recursion, seen as a series in $q$, makes the poles at $\Delta = \Delta_{m,n}$ manifest. The main disadvantage this recursion relation entails is the appearance of additional singularities in the central charge $c$. They are considered unphysical in the sense that they are expected to vanish after summation. In section (\ref{section:recursion}) the recursion relation is expressed in an explicit form that makes the $c$-singularities manifest, and a method is proposed to calculate a $c$-singularity free expression at each order in $q$. \subsection{Fusion algebra and fusion rules} \label{virasorofusion} An important feature of the $VV$-OPE (\ref{vvope}) is the fusion algebra. It encodes which representations (in our case Verma modules (\ref{virverma}) or degenerate representations (\ref{degrep})) occur in the OPE, and is thus similar to the decomposition of tensor products of representations. It is usually denoted as $R_1 \times R_2 = \oplus_i N^i_{12}R_i$, $N^i_{12} \in \mathbb{N}$, and shares the same properties as the OPE, namely it is bilinear, associative and commutative. In this work we are mainly interested in whether a certain representation occurs in the fusion product, and thus tend not to specify the value of $N^i_{12}$ if it is non-zero. The most prominent example is the fusion of degenerate representations $R_{r,s}$ with Verma modules $\mathcal{V}_{\Delta}$. Consider the action of $L_{r,s}$ on the degenerate field, $L_{r,s}V_{\Delta_{<r,s>}} = 0$, within a 3-point function of diagonal fields, \begin{equation} \langle (L_{r,s} V_{\Delta_{<r,s>}}) V_{\Delta_2}(z_2) V_{\Delta_3}(z_3) \rangle = 0. \end{equation} It turns out that this condition constrains the conformal dimension $\Delta_3$ as follows, \begin{equation} \label{virfusion} \Delta(P_3) = \Delta(P_2+ib+jb^{-1}), \end{equation} for $i \in I:= \{-\frac{r-1}{2},...,\frac{r-1}{2}\}$ and $j \in J:= \{-\frac{s-1}{2},...,\frac{s-1}{2}\}$. The condition (\ref{virfusion}) is equivalent to the fusion rule \begin{equation} R_{r,s} \times \mathcal{V}_{\Delta(P)} = \sum_{i\in I}\sum_{j \in J} \mathcal{V}_{\Delta(P+ib+jb^{-1})}. \end{equation} Fusion rules in a CFT with the larger symmetry algebra $\hat{\mathfrak{sl}}_2$ will be discussed and derived in chapter (\ref{fusionsl2}).
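\textit{Remark:} For small $(r,s)$ the fusion rule can be spelled out explicitly. The following Python/SymPy sketch (ours, for illustration only; the function name is ad hoc) lists the momenta $P + ib + jb^{-1}$ appearing in $R_{r,s} \times \mathcal{V}_{\Delta(P)}$:
\begin{verbatim}
import sympy as sp

b, P = sp.symbols('b P')

def fusion_momenta(r, s):
    # momenta P + i*b + j/b with i in {-(r-1)/2,...,(r-1)/2} and
    # j in {-(s-1)/2,...,(s-1)/2}, cf. eq. (virfusion)
    I = [sp.Rational(2*i - (r - 1), 2) for i in range(r)]
    J = [sp.Rational(2*j - (s - 1), 2) for j in range(s)]
    return [P + i*b + j/b for i in I for j in J]

print(fusion_momenta(2, 1))        # [P - b/2, P + b/2]
print(fusion_momenta(1, 2))        # [P - 1/(2*b), P + 1/(2*b)]
print(len(fusion_momenta(3, 2)))   # 6 Verma modules in the product
\end{verbatim}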
\section{Singularity free at order 3 and 4} \label{appA} The $c$-pole free expressions $K_N$ (\ref{zamoanddaro}) at orders 3 and 4 have been calculated and numerically checked to coincide with the expressions (\ref{horder}) away from the poles in the central charge. To express the solutions, we recall the $b$-symmetric Laurent polynomials, \begin{equation} B_0 = 1, \quad B_j = \beta^j + \beta^{-j}. \end{equation} At order 3 the solution reads, \begin{gather} \label{K3} \begin{aligned} K_3 &= \frac{1}{192} \bigg\{\bigg(144 \Delta + 288 \Delta^2 - 144 \Delta \Delta_1 + 48 (\Delta_1-1) \Delta_1 + 144 \Delta \Delta_1^2 - 12 (\Delta_1 - 1) \Delta_1^2 \\ &+ 12 (\Delta_1-1) \Delta_1^3\bigg) B_2 + \bigg(888 \Delta + 960 \Delta^3 - 1840 \Delta \Delta_1 + 440 (\Delta_1-1) \Delta_1 + 1574 \Delta \Delta_1^2 \\ &- 164 (\Delta_1-1) \Delta_1^2 - 332 \Delta \Delta_1^3 + 68 (\Delta_1-1) \Delta_1^3 + 22 \Delta \Delta_1^4 + 24 \Delta^2 (76 - 75 \Delta_1 + 27 \Delta_1^2)\bigg) B_1 \\ &+ \bigg(1536 \Delta + 768 \Delta^4 - 3232 \Delta \Delta_1 + 824 (\Delta_1-1) \Delta_1 + 2535 \Delta \Delta_1^2 - 323 (\Delta_1-1) \Delta_1^2 \\ &- 590 \Delta \Delta_1^3 + 115 (\Delta_1-1) \Delta_1^3 + 39 \Delta \Delta_1^4 + 96 \Delta^3 (24 - 19 \Delta_1 + 3 \Delta_1^2) + 8 \Delta^2 (342 - 411 \Delta_1 \\ &+ 238 \Delta_1^2 - 20 \Delta_1^3 + \Delta_1^4)\bigg)\bigg\} \end{aligned} \end{gather} At order 4 the solution reads, \begingroup \allowdisplaybreaks \begin{subequations} \label{K4} \begin{align*} K_4 &= \frac{45}{2048} \bigg(96 \Delta + 288 \Delta^2 + 192 \Delta^3 - 36 \Delta_1 - 168 \Delta \Delta_1 - 144 \Delta^2 \Delta_1 + 52 \Delta_1^2 + 192 \Delta \Delta_1^2 + 144 \Delta^2 \Delta_1^2 \\ &- 33 \Delta_1^3 - 48 \Delta \Delta_1^3 + 19 \Delta_1^4 + 24 \Delta \Delta_1^4 - 3 \Delta_1^5 + \Delta_1^6\bigg) B_4 + \frac{3}{4096}\bigg(45120 \Delta + 141600 \Delta^2 \\ & + 132000 \Delta^3 + 41280 \Delta^4 - 14652 \Delta_1 - 92604 \Delta \Delta_1 - 136152 \Delta^2 \Delta_1 - 61392 \Delta^3 \Delta_1 + 27284 \Delta_1^2 \\ &+ 109180 \Delta \Delta_1^2 + 105984 \Delta^2 \Delta_1^2 + 26832 \Delta^3 \Delta_1^2 - 22281 \Delta_1^3 - 49095 \Delta \Delta_1^3 - 26064 \Delta^2 \Delta_1^3 \\ & + 11363 \Delta_1^4 + 16885 \Delta \Delta_1^4 + 4392 \Delta^2 \Delta_1^4 - 2091 \Delta_1^5 - 1749 \Delta \Delta_1^5 + 377 \Delta_1^6 + 103 \Delta \Delta_1^6\bigg) B_3 \\ & + \frac{1}{8192}\bigg(1423008 \Delta + 4167168 \Delta^2 + 4189536 \Delta^3 + 2013312 \Delta^4 + 446976 \Delta^5 - 442908 \Delta_1 \\ &- 3093480 \Delta \Delta_1 - 5225328 \Delta^2 \Delta_1 - 3529776 \Delta^3\Delta_1 - 993408 \Delta^4 \Delta_1 + 914364 \Delta_1^2 + 3900600 \Delta \Delta_1^2 \\ & + 4702504 \Delta^2 \Delta_1^2 + 2296944 \Delta^3 \Delta_1^2 + 336768 \Delta^4 \Delta_1^2 - 799689 \Delta_1^3 - 2092518 \Delta \Delta_1^3 - 1684440 \Delta^2 \Delta_1^3 \\ &- 450048 \Delta^3 \Delta_1^3 + 391899 \Delta_1^4 + 673314 \Delta \Delta_1^4 + 343312 \Delta^2 \Delta_1^4 + 41280 \Delta^3 \Delta_1^4 - 74427 \Delta_1^5 \\ & - 84018 \Delta \Delta_1^5 - 20280 \Delta^2 \Delta_1^5 + 10761 \Delta_1^6 + 4902 \Delta \Delta_1^6 + 712 \Delta^2 \Delta_1^6\bigg) B_2 + \frac{1}{49152} \bigg(1683456 \Delta^6 \\ &+ 3072 \Delta^5 (3292 - 1669 \Delta_1 + 409 \Delta_1^2) + 768 \Delta^4 (45703 - 32773 \Delta_1 + 17503 \Delta_1^2 - 2532 \Delta_1^3 \\ &+ 162 \Delta_1^4) + 9 \Delta_1 (-734868 + 1587868 \Delta_1 - 1423299 \Delta_1^2 + 684065 \Delta_1^3 - 130937 \Delta_1^4 + 17171 \Delta_1^5) \\ &+ 64 \Delta^3 (978855 - 1016103 \Delta_1 + 729839 \Delta_1^2 - 190791 \Delta_1^3 + 25799 \Delta_1^4 - 1023 \Delta_1^5 + 29
\Delta_1^6) \\ &+ 8 \Delta^2 (7271424 - 10546749 \Delta_1 + 10186433 \Delta_1^2 - 3976128 \Delta_1^3 + 840752 \Delta_1^4 - 61824 \Delta_1^5 \\ &+ 2156 \Delta_1^6) + 3 \Delta (7167168 - 15881724 \Delta_1 + 20837492 \Delta_1^2 - 11935235 \Delta_1^3 + 3761009 \Delta_1^4 \\ &- 491001 \Delta_1^5 + 28819 \Delta_1^6)\bigg) B_1 + \frac{1}{49152} \bigg(28999296 \Delta + 75605760 \Delta^2 + 85087104 \Delta^3 + 47781120 \Delta^4 \\ &+ 15793152 \Delta^5 + 2617344 \Delta^6 + 344064 \Delta^7 - 8901576 \Delta_1 - 64431288 \Delta \Delta_1 - 115515792 \Delta^2 \Delta_1 \\ &- 92818272 \Delta^3 \Delta_1 - 38587392 \Delta^4 \Delta_1 - 8169984 \Delta^5 \Delta_1 - 1314816 \Delta^6 \Delta_1 + 19491840 \Delta_1^2 \\ &+ 85511472 \Delta \Delta_1^2 + 114226816 \Delta^2 \Delta_1^2 + 68972000 \Delta^3 \Delta_1^2 + 20325120 \Delta^4 \Delta_1^2 + 3518976 \Delta^5 \Delta_1^2 \\ &+ 208896 \Delta^6 \Delta_1^2 - 17583363 \Delta_1^3 - 49771617 \Delta \Delta_1^3 - 45926220 \Delta^2 \Delta_1^3 - 18616256 \Delta^3 \Delta_1^3 \\ &- 3539200 \Delta^4 \Delta_1^3 - 344064 \Delta^5 \Delta_1^3 + 8397441 \Delta_1^4 + 15592539 \Delta \Delta_1^4 + 9878804 \Delta^2 \Delta_1^4 \\ &+ 2576704 \Delta^3 \Delta_1^4 + 314880 \Delta^4 \Delta_1^4 + 18432 \Delta^5 \Delta_1^4 - 1609353 \Delta_1^5 - 2056035 \Delta \Delta_1^5 - 775652 \Delta^2 \Delta_1^5 \\ &- 102976 \Delta^3 \Delta_1^5 - 9984 \Delta^4 \Delta_1^5 + 205011 \Delta_1^6 + 121329 \Delta \Delta_1^6 + 26924 \Delta^2 \Delta_1^6 + 2880 \Delta^3 \Delta_1^6 + 256 \Delta^4 \Delta_1^6\bigg) \\ \tag{\ref{K4}} \end{align*} \end{subequations} \endgroup \section{Irreducible representations of $\mathfrak{sl}_2$} \label{appB} We give a short overview of irreducible representations (shorthand \textit{irreps}) of $\mathfrak{sl}_2$, as examples of horizontal representations of highest weight representations. For simplicity we work in the $m$-basis, which diagonalizes the $J^0_0$ operator. Hence the states are given by $|j,m\rangle$, with diagonal quadratic Casimir operator $C_2 = 2j(j+1)$. There exist four different types of irreps: one finite type with half-integer spin $j \in \mathbb{N}/2$ and three infinite types with complex spin $j \in \mathbb{C}$. The three infinite types contain either one of the states $|j,\pm j\rangle$ or none, whereas the finite type contains both states. Diagrammatically, \begin{figure}[H] \centering \includegraphics[width=12cm]{diagram-20220704.png} \caption{Diagrammatic representation of the $J^0_0$-eigenvalues of the four irreducible representations. The yellow line corresponds to half-integer spin and is finite (contains both $|j,\pm j\rangle$), the red lines correspond to the discrete type representations (the upper line contains $|j,j \rangle$, the lower line $|j,-j\rangle$) and the orange one to the continuous type representation (contains neither of $|j,\pm j\rangle$).} \label{fig:sl2reps} \end{figure} First we define a general action of the subalgebra $\mathfrak{sl_2} \subset \hat{\mathfrak{sl_2}}$ on the descending vector space \begin{equation} V^-_j := \mathrm{span}_{\mathbb{C}}(\{|j,m\rangle; j \in \mathbb{C}, m \in j - \mathbb{N}\}). \end{equation} The action is defined as \begin{gather} J^0_0 |j,m\rangle = m |j,m\rangle, \quad J^{\pm}_0 |j,m\rangle = (j\mp m)|j,m\pm 1\rangle. \end{gather} This action can be generated from the highest weight state $|j,j\rangle$, which satisfies $ J^+_0|j,j\rangle = 0$.
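\textit{Numerical check:} The commutation relations and the value of the Casimir follow directly from this action; a short Python/SymPy sketch (ours, for illustration only), acting on a state of generic weight $m$:
\begin{verbatim}
import sympy as sp

j, m = sp.symbols('j m')

# action on |j,m>, encoded as (coefficient, new weight)
J0 = lambda c, mm: (c*mm, mm)
Jp = lambda c, mm: (c*(j - mm), mm + 1)
Jm = lambda c, mm: (c*(j + mm), mm - 1)

def comm(A, B):
    cAB, mAB = A(*B(1, m))
    cBA, mBA = B(*A(1, m))
    assert mAB == mBA
    return sp.expand(cAB - cBA)

print(comm(Jp, Jm))   # 2*m      <->  [J^+,J^-] = 2 J^0
print(comm(J0, Jp))   # j - m    <->  [J^0,J^+] = +J^+
print(comm(J0, Jm))   # -j - m   <->  [J^0,J^-] = -J^-
# Casimir C_2 = 2((J^0)^2 + J^+ J^- - J^0) evaluated on |j,m>
print(sp.factor(2*(m**2 + (j + m)*(j - m + 1) - m)))   # 2*j*(j + 1)
\end{verbatim}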
With the help of the self-inverse automorphism $(\circ)^*: J^0_0 \mapsto - J^0_0,\; J^{\pm}_0 \mapsto - J^{\mp}_0$ we define the ascending vector space implicitly via the action, $V^+_j := (V^-_j)^*$. Here we use the same notation for the vector space and its representation, but mean that we \textit{apply the automorphism action on the descending vector space} and identify $|j,-m\rangle^* = |j,m\rangle$. It is characterised by $m \in -j + \mathbb{N}$ and generated by the lowest weight vector $|j,-j\rangle^*$: the defining relation $J^+_0|j,j\rangle = 0$ translates under the automorphism into $J^-_0|j,-j\rangle^* = 0$. If we restrict to $j \in (-\infty, -1/2)$, we call $D^{\pm}_j := V^{\pm}_j $ the discrete series representations; both are unitary, infinite-dimensional and irreducible. The continuous series representation is less constrained and defined via \begin{equation} C_j^{\alpha} := \mathrm{span}_{\mathbb{C}}(\{|j,m\rangle; j \in -1/2 + i\mathbb{R}_+, \alpha \in \mathbb{R}/\mathbb{Z}, m \in \alpha + \mathbb{Z}\}), \end{equation} where we use the same action as for the discrete series; note that it has neither a highest nor a lowest weight state. It is unitary, infinite-dimensional and irreducible. Under conjugation we have $(C^{\alpha}_j)^* = C^{-\alpha}_j$. \section{1-point block on the torus}\label{section:recursion} We have seen in section (\ref{t1p}) a recursive formulation (\ref{torrec}) for calculating the 1-point block of a primary on the torus. As already pointed out, poles in the central charge $c$ will arise. In this section we show explicitly that the $c$-poles of the first four orders in fact cancel, and assume that they cancel at each order, i.e. that they are a mathematical artefact of the recursive formulation. To start the discussion we introduce the factors $R_{m,n}$, along with useful reformulations and definitions. It is important to keep in mind the three variables we are going to work with: the central charge $c$ in terms of $b$ (\ref{I.7}), the external conformal dimension $\Delta_1$ and the internal conformal dimension $\Delta$. \subsection{Preliminaries} The $R_{m,n}$-factors in equation (\ref{torrec}) are defined with the help of the momentum $P_{<r,s>}$ (recall \ref{I.7}), \begin{gather} \label{rmn} R_{m,n} := \frac{2P_{<0,0>} P_{<m,n>}}{\prod_{r = 1-m}^{m} \prod_{s=1-n}^{n} 2P_{<r,s>}} \prod_{r\underset{2}{=}1-2m}^{2m-1} \prod_{s \underset{2}{=}1-2n}^{2n-1} (P_1 + P_{<r,s>}), \end{gather} where the subscript 2 under the equality sign indicates that the product index increases in steps of two instead of one (the numerator factor $2P_{<0,0>}$ cancels the vanishing denominator factor at $(r,s)=(0,0)$, and $P_{<m,n>}$ cancels half of the denominator factor $2P_{<m,n>}$). \textit{Note:} We have seen that $P_{<r,s>}$ under $b$-symmetry exchanges $r \leftrightarrow s$, and therefore $R_{r,s}$ has the same feature, $R_{r,s}(b) = R_{s,r}(b^{-1})$. For convenience we split $R_{m,n} = E_{m,n}F_{m,n}$ and write it explicitly in terms of $\beta := b^2$, $\Delta_1$ and $\Delta$, \begin{gather} \label{rmnpoles} \begin{aligned} F_{m,n} & := \prod_{r \underset{2}{=} 1}^{2m-1}\prod_{s \underset{2}{=} 1}^{2n-1}(\Delta_{<r,s>} - \Delta_1)(\Delta_{<r,-s>} - \Delta_1) \\ E_{m,n} & := \frac{m n}{2 m!^2 n!^2} \beta^{2m(n-1)}\prod_{r=1}^{m-1} \frac{-n^{-2}}{1-\frac{r^2}{n^2}\beta^2} \prod_{s=1}^{n-1} \frac{s^{-2}}{1 - \frac{m^2}{s^2}\beta^2} \prod_{r=1}^{m-1} \prod_{s=1}^{n-1} (\frac{s^{-2}}{1 - \frac{r^2}{s^2}\beta^2})^2. \end{aligned} \end{gather} The reason for this reformulation is to be able to expand easily around $\beta = 0$, using $\frac{1}{1-ax} = \sum_{i \geq 0} (ax)^i$. It also makes the poles of $E_{m,n}$ at $\beta \in \{ \pm \frac{1}{1,...,m},..., \pm \frac{n}{1,...,m-1}\}$ visible, whereas $F_{m,n}$ is regular in $\beta$ away from $0,\infty$.
We have already discussed the $b$-symmetry in connection with degenerate representations. The $b$-symmetry is manifest with respect to observables because it leaves the central charge (\ref{I.7}) invariant. Thus we define the $b$-symmetric Laurent polynomials \begin{gather} \label{BS} B_0 := 1, \quad B_j := \beta^j + \beta^{-j}, \quad j \in \mathbb{N}_{>0}, \end{gather} as they will help us formulate the results compactly. Since the recursion sums up terms in the range $1\leq mn\leq N$, it is convenient to define the number $P_N := \sum_{mn \leq N}1$. It counts how many pairs $(m,n) \in \mathbb{N}^2_{*}$ exist with product $mn \leq N$. As an example we give the first four cases, \begin{equation} P_1 = 1, \quad P_2 = 3, \quad P_3 = 5, \quad P_4 = 8. \end{equation} Another common expression at fixed $(m,n) \in \mathbb{N}_{*}^2$, which is used to calculate a $c$-pole free recursion, is \begin{equation} \label{hprefactor} \frac{1}{\prod_{rs \leq N - mn}(\Delta_{<m,-n>}-\Delta_{<r,s>})} =(4\beta)^{P_{N-mn}}\prod_{rs \leq N-mn} \frac{1}{s-n + (r+m)\beta} \prod_{rs \leq N-mn} \frac{1}{s+n + (r-m)\beta}, \end{equation} which we rewrote so as to be able to expand it easily around $\beta = 0$. In this expansion the smallest power of $\beta$ appearing in (\ref{hprefactor}) is $P_{N-mn} - (\big \lfloor\frac{N}{n}\big \rfloor - m)$, where we counted the number of times the condition $s = n$ is fulfilled within the left product on the right-hand side of (\ref{hprefactor}). For example, in the case $m=1=n$ and $N=4$, the smallest $\beta$-power is $P_{4-1} - (\big \lfloor\frac{4}{1}\big \rfloor - 1) = 5 - (4-1) = 2$. \subsubsection{Recursion in powers of $q$} We rewrite the recursion (\ref{torrec}) as a power series expansion in $q$, \begin{equation} \label{order} H_{\Delta}(P_1|q) = 1 + \sum_{i = 1}^{\infty} q^i H^{(i)}_{\Delta}(P_1). \end{equation} At each order $i$ in (\ref{order}) the $H^{(i)}_{\Delta}$ can be re-expressed as: \begin{gather} \label{horder} \begin{aligned} H^{(1)}_{\Delta}(P_1) &= \frac{R_{1,1}}{\Delta} \\ H^{(2)}_{\Delta}(P_1) &= H^{(1)}_{\Delta = 1} H^{(1)}_{\Delta} + \frac{R_{2,1}}{\Delta - \Delta_{<2,1>}} + \frac{R_{1,2}}{\Delta - \Delta_{<1,2>}} = \frac{R_{1,1}^2}{\Delta} + \frac{R_{2,1}}{\Delta - \Delta_{<2,1>}} + \frac{R_{1,2}}{\Delta - \Delta_{<1,2>}} \\ H^{(3)}_{\Delta}(P_1) &= H^{(2)}_{\Delta = 1} H^{(1)}_{\Delta} + \frac{R_{2,1}R_{1,1}}{(\Delta - \Delta_{<2,1>})\Delta_{<2,-1>}} + \frac{R_{1,2}R_{1,1}}{(\Delta - \Delta_{<1,2>})\Delta_{<1,-2>}} + \frac{R_{3,1}}{\Delta - \Delta_{<3,1>}} + \frac{R_{1,3}}{\Delta - \Delta_{<1,3>}} \\ H^{(4)}_{\Delta}(P_1) &= \frac{R_{1,1}}{\Delta}H^{(3)}_{\Delta = 1} + \frac{R_{2,1}}{\Delta - \Delta_{<2,1>}}H^{(2)}_{\Delta = \Delta_{<2,-1>}} + \frac{R_{1,2}}{\Delta - \Delta_{<1,2>}}H^{(2)}_{\Delta = \Delta_{<1,-2>}} + \frac{R_{3,1}}{\Delta - \Delta_{<3,1>}}H^{(1)}_{\Delta = \Delta_{<3,-1>}} \\ &+ \frac{R_{1,3}}{\Delta - \Delta_{<1,3>}}H^{(1)}_{\Delta = \Delta_{<1,-3>}} + \frac{R_{4,1}}{\Delta - \Delta_{<4,1>}} + \frac{R_{2,2}}{\Delta - \Delta_{<2,2>}} + \frac{R_{1,4}}{\Delta - \Delta_{<1,4>}} \\ &\vdots \\ H^{(N)}_{\Delta}(P_1) &= \sum_{mn \leq N} \frac{R_{m,n}}{\Delta - \Delta_{<m,n>}} H^{(N-mn)}_{\Delta = \Delta_{<m,-n>}}(P_1), \end{aligned} \end{gather} where we define the zeroth order $H^{(0)}_{\Delta} := 1$. Using equation (\ref{rmn}) we calculate $R_{1,1} = \frac{\Delta_1 (\Delta_1 - 1)}{2}$ and therefore see that there are no poles in the central charge at order 1.
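\textit{Numerical check:} The recursion (\ref{horder}) is straightforward to implement symbolically. The following Python/SymPy sketch (ours, for illustration only; all function names are ad hoc) encodes $R_{m,n}$ from (\ref{rmn}), with the vanishing factors cancelled explicitly as described above, and the recursion for $H^{(N)}_{\Delta}$; it reproduces $R_{1,1} = \frac{\Delta_1(\Delta_1-1)}{2}$ at order 1 and, anticipating the next subsection, confirms that the apparent pole of $H^{(2)}_{\Delta}$ at $b = 1$ (i.e. $\beta = 1$, $c = 25$) cancels:
\begin{verbatim}
import sympy as sp

b, P1, D = sp.symbols('b P1 D')        # D plays the role of Delta
c = 13 + 6*b**2 + 6*b**-2

P  = lambda r, s: (r*b + s/b)/2        # P_<r,s>, eq. (I.7)
Dl = lambda r, s: (c - 1)/24 - P(r, s)**2

def R(m, n):
    # eq. (rmn); the (0,0) and (m,n) denominator factors are
    # cancelled against 2 P_00 P_mn, leaving an overall 1/2
    den = sp.Integer(2)
    for r in range(1 - m, m + 1):
        for s in range(1 - n, n + 1):
            if (r, s) not in [(0, 0), (m, n)]:
                den = den*2*P(r, s)
    num = sp.Integer(1)
    for r in range(1 - 2*m, 2*m, 2):
        for s in range(1 - 2*n, 2*n, 2):
            num = num*(P1 + P(r, s))
    return num/den

def H(N, dim):                         # recursion (horder), H^{(0)} = 1
    if N == 0:
        return sp.Integer(1)
    return sum(R(m, n)/(dim - Dl(m, n))*H(N - m*n, Dl(m, -n))
               for m in range(1, N + 1) for n in range(1, N//m + 1))

Delta1 = (c - 1)/24 - P1**2
print(sp.simplify(H(1, D) - Delta1*(Delta1 - 1)/(2*D)))   # 0
# order 2: individual terms are singular at b = 1, the sum is not
H2 = H(2, D).subs({P1: sp.Rational(2, 5), D: sp.Rational(7, 3)})
print(sp.limit(H2, b, 1))              # finite rational number
\end{verbatim}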
\subsection{Singularity free at order 2: hands-on calculation} The poles in $c$ start to make trouble at order 2, and we go through the steps needed to get rid of them. Let us consider the expression for $H^{(2)}_{\Delta}$, \begin{equation} \label{order2} H^{(2)}_{\Delta}(P_1) = \frac{R_{1,1}^2}{\Delta} + \frac{R_{2,1}}{\Delta - \Delta_{<2,1>}} + \frac{R_{1,2}}{\Delta - \Delta_{<1,2>}}. \end{equation} Although $R_{1,1}$ is regular, $R_{2,1}(\beta)$ is not, and using (\ref{rmnpoles}) we calculate \begin{equation} \label{order2pole} E_{2,1}(\beta) = \frac{1}{4} \frac{1}{\beta^2 - 1}, \quad E_{1,2}(\beta) = \beta^2 E_{2,1}(\beta), \end{equation} so the extra poles $\beta = \pm 1$ arise. To cancel these extra poles, we pair the second and third terms in (\ref{order2}) and bring them to a common denominator, \begin{equation} \label{order2pair} \frac{\beta E_{1,2}}{(\Delta - \Delta_{<1,2>})(\Delta - \Delta_{<2,1>})} \bigg(\beta^{-1} F_{1,2}(\Delta - \Delta_{<2,1>}) - \beta F_{2,1}(\Delta - \Delta_{<1,2>})\bigg). \end{equation} We observe that the factor in brackets in (\ref{order2pair}) changes sign under the $b$-symmetry transformation. Laurent polynomials with this property can be written as a sum of terms $\beta^n - \beta^{-n} = (\beta - \beta^{-1}) \sum_{i \underset{2}{=}-(n-1)}^{n-1} \beta^i$, and therefore the poles (\ref{order2pole}) cancel, i.e. they are unphysical. A more careful analysis, bringing all terms in (\ref{order2}) to a common denominator, yields the pole free expression at order 2, \begin{gather} \label{o2} H^{(2)}_{\Delta} = \frac{1}{32} \frac{R_{1,1}}{\Delta (\Delta - \Delta_{<1,2>})(\Delta - \Delta_{<2,1>})} \big(96 \Delta^2 + 4\Delta (-14 \Delta_1 + 2 \Delta_1^2 + 5 + c) + c \Delta_1(\Delta_1 - 1)\big). \end{gather} This procedure can in principle be applied at any order $N$: we identify terms with the same poles, bring them to a common denominator, and finally cancel the poles. At higher orders more and more poles arise in each term, due to the factors $E_{m,n}$ in (\ref{rmnpoles}), so cancelling them pole by pole quickly becomes unmanageable. Nevertheless, it has been done at orders 3 and 4; the reader finds the solutions in appendix (\ref{appA}). \subsection{Proposal: Singularity free at each order} To start the discussion, we first state the following conjecture. \begin{conjecture} At each order $H^{(N)}_{\Delta}$ the poles in the central charge $c$ cancel between terms, and therefore a singularity free expression exists. \end{conjecture} Second, we define the functions $K_N(\Delta,\Delta_1,\beta)$ in the sense of the recursion (\ref{torrec}), \begin{gather} \begin{aligned} H^{(N)}_{\Delta} &= \frac{R_{11}}{\prod_{kl \leq N}(\Delta - \Delta_{<k,l>})} K_N \\ \label{zamoanddaro} K_N &:= \sum_{mn \leq N} \bigg(\frac{E_{mn}F_{mn}}{R_{11}} \prod_{\underset{k,l \neq m,n}{kl\leq N}}(\Delta - \Delta_{<k,l>}) H^{(N-mn)}_{\Delta_{<m,-n>}}\bigg). \end{aligned} \end{gather} Investigating the algebraic form of the pole free expressions of the first four orders ((\ref{o2}) and appendix \ref{appA}) leads us to a conjecture for the general form. \begin{conjecture} $K_N$ is a $b$-symmetric Laurent polynomial in $\beta$ of degree $Q_N := P_N - N$, with coefficients polynomial in $\Delta, \Delta_1$.
Concretely, the algebraic form of $K_N$ is given by (recall the definition of $B_s$ (\ref{BS})) \begin{gather} \label{grandconjecture} \boxed{\begin{aligned} K_N(\Delta,\Delta_1,\beta) = & \biggl\{\sum_{i=0}^{P_N - 2} \Delta^{P_N - 1-i } \biggl(\sum_{s=0}^{min(Q_N,i)} \sum_{j = 0}^{min(2N-2,2i-2s)} (-1)^j C^N_{ijs} \Delta_1^{j} B_s\biggr) \biggr\}\\ &+ \Delta_1(\Delta_1 - 1)\sum_{s = 0}^{Q_N} \sum_{j=0}^{2N-4} (-1)^j C^N_{(P_N - 1)js}\Delta_1^j B_s, \end{aligned}} \end{gather} with positive rational coefficients $C^N_{ijs} \in \mathbb{Q}_+$. \end{conjecture} Note that in the first line of (\ref{grandconjecture}) the power of $\Delta$ is at least one, whereas the second line, not included in the sum over $i$, has no $\Delta$-dependence and a power of $\Delta_1$ that is at least one. \subsubsection{Large $c$ limit} In the large central charge limit of the torus 1-point block, an upper bound on the degree of the Laurent polynomial $K_N$ can be calculated. Let us work for now with a hypothetical degree $\tilde{Q}_N$ of the Laurent polynomial $K_N$. The large $c$ limit of the block exists and is represented as \cite{largec} \begin{equation} \label{climit} q^{\frac{c}{24} - \Delta} \mathfrak{T}^{(1)}_{\Delta}(\Delta_1|q) = \mathcal{L}_{\Delta}(\Delta_1|q) + \mathcal{O}(c^{-1}), \end{equation} where $\mathcal{L}_{\Delta}$ is known as the light torus block. We observe that in the $\beta \rightarrow 0$ limit the central charge behaves as $c \sim 6\beta^{-1} \rightarrow \infty$. We conclude that in the recursive representation (\ref{torrec}), together with the assumption that $K_N$ is a Laurent polynomial of degree $\tilde{Q}_N$, the limit $\beta \rightarrow 0$ exists and is given by \begin{gather} \label{betalimit} q^{\frac{c}{24} - \Delta} \mathfrak{T}^{(1)}_{\Delta}(\Delta_1|q) \propto \sum_{N \geq 0} \frac{q^N}{\prod_{rs\leq N}(\Delta - \Delta_{<r,s>})} K_N(\beta) \underset{\beta \rightarrow 0}{\sim} \sum_{N \geq 0} q^N \mathcal{O}(\beta^{Q_N-\tilde{Q}_N}), \end{gather} where the dominant behaviour of $\prod_{rs\leq N} (\Delta - \Delta_{<r,s>})^{-1}$ is given by $\beta^{Q_N}(1 + \mathcal{O}(\beta))$. Therefore the limit (\ref{betalimit}) exists only if $\tilde{Q}_N \leq Q_N$. \subsubsection{Tracking $\Delta$ in (\ref{zamoanddaro})} By multiplying out the product $\prod_{\underset{k,l \neq m,n}{kl\leq N}}(\Delta - \Delta_{<k,l>})$ in (\ref{zamoanddaro}), we can inspect the prefactors of the $\Delta^i$-monomials for $i \in \{1,...,P_N - 1\}$ in $K_N$ and compare them to the conjecture (\ref{grandconjecture}). Those prefactors are manifestly $b$-symmetric \footnote[1]{The $b$-symmetry invariance can be seen by pairing $(m,n)$- with $(n,m)$-terms in (\ref{zamoanddaro}).}. Due to the expected, very restricted form of $K_N$, many non-trivial constraints arise. \newline \textbf{($\Delta^0$):} The coefficient in front of $\Delta^0$ takes a very simple form, \begin{equation} \label{I.13} K_N(\Delta = 0,\Delta_1) = \Delta^0 \times \biggl\{(-1)^{P_N - 1} \frac{\big(\prod_{\underset{k,l \neq 1,1}{kl\leq N}}\Delta_{<k,l>}\big) \cdot R_{11}}{\prod_{rs \leq N - 1}\big(1 - \Delta_{<r,s>}\big)}K_{N-1}(\Delta = 1,\Delta_1)\biggr\}. \end{equation} Thus we can check whether the degree $Q_N$ is consistent and indeed given by the upper bound. We apply the conjecture to $K_{N-1}$, i.e. take it to be a Laurent polynomial of degree $Q_{N-1}$, and expand everything else in (\ref{I.13}) around $\beta = 0$ (see \ref{hprefactor}).
In the resulting product of $\beta$-series we observe that the smallest power of $\beta$ appearing is $-Q_N$, and we therefore recover the expected degree $Q_N$. In fact, the factor $2 R_{11} = \Delta_1 (\Delta_1 - 1)$ in (\ref{I.13}) implies the zeroes $K_N(\Delta = 0, \Delta_1 = 0, \beta) = 0 = K_N(\Delta = 0, \Delta_1 = 1 , \beta)$, because $K_N$ is regular in $\Delta_1$. That is why we conclude that no $\Delta^0 \Delta_1^0 \beta^i$-terms appear, as conjectured, and that the factor $R_{11} \propto \Delta_1 (\Delta_1 - 1)$ appears in the second line of (\ref{grandconjecture}). \newline \textbf{($\Delta^{P_N-1}$):} Another interesting identity arises from the prefactor of $\Delta^{P_N - 1}$. Using the conjecture we infer that \begin{equation} \label{justanumber} \sum_{mn\leq N} \frac{R_{mn}}{R_{11}} H^{(N-mn)}_{\Delta_{<m,-n>}} \end{equation} is just a number, independent of $\beta$, $\Delta$ and $\Delta_1$, and thus equal to $C^N_{000}$. Assuming this number exists, we are able to determine a closed formula for it, \begin{equation} C^N_{000} = \sum_{mn = N} n, \end{equation} i.e. $C^N_{000}$ is the divisor sum $\sigma_1(N)$. To derive it, we took the limit $\Delta_1 \rightarrow 0$ in (\ref{justanumber}) and evaluated the term proportional to $\beta^0$. In the $\Delta_1$-limit only the $mn = N$ terms contribute, because both $R_{mn} \propto \Delta_1(\Delta_1 - 1)$ and $H^{(N-mn)} \propto \Delta_1(\Delta_1 - 1)$ vanish (for $\Delta_1 = 0$ the 1-point function (\ref{tor1pt}) is simply equal to the torus partition function, and therefore $H_{\Delta}(\Delta_1 = 0, \beta) = 1$ in (\ref{order})). In the final step we determined the $\beta = 0$ expansion $E_{mn}F_{mn}/R_{11} \underset{\Delta_1 = 0}{=} n + \mathcal{O}(\beta)$, using the expressions for $E_{mn}$ and $F_{mn}$ (\ref{rmnpoles}). For example, $C^N_{000}$ for the first 6 orders is given by \begin{equation} C^1_{000} = 1, \quad C^2_{000} = 3, \quad C^3_{000} = 4, \quad C^4_{000} = 7, \quad C^5_{000} = 6, \quad C^6_{000} = 12, \end{equation} which coincides with the results of the pole free expressions at orders 1, 2, 3 and 4 ((\ref{o2}), appendix \ref{appA}). Tracking in (\ref{justanumber}) the smallest $\beta$-power in the $\beta = 0$ expansion (which occurs at $(m,n) = (1,2)$ \footnote[2]{The number $\lfloor\frac{N}{n}\rfloor - m$ is maximal for positive integers $(1,1) \neq (m,n) = (1,2)$.}) leads to the constraint \begin{equation} \label{cons2} 0 = \big(\beta^{\lfloor N/2 \rfloor - 2} H^{(N-2)}_{\Delta_{<1,-2>}}\big)\big|_{\beta = 0} \end{equation} if $2 - \lfloor N/2 \rfloor <0$. For example, at order $N = 6$ we might have naively expected a non-zero contribution at $\beta^{2-\lfloor 6/2 \rfloor} = \beta^{-1}$, but it has been numerically checked that (\ref{cons2}) is indeed fulfilled for $N = 6$. \subsubsection{Calculation of the pole free expression} We have conjectured the existence of a pole free expression and deduced the algebraic form of the $K_N$'s (\ref{grandconjecture}). Here we propose a way to compute the pole free expression from the definition of $K_N$ (\ref{zamoanddaro}). \begin{corollary} The pole free expression in conjecture (\ref{grandconjecture}) can be calculated by expanding $K_N$ (\ref{zamoanddaro}) around $\beta = 0$, \begin{equation} \label{howto2} \underset{\beta \rightarrow 0}{\mathrm{lim}} K_N = K_N(\Delta,\Delta_1)_ {Q_N}\beta^{-Q_N} + K_N(\Delta,\Delta_1)_{Q_N - 1} \beta^{-Q_N + 1} + ... + K_N(\Delta,\Delta_1)_{0} \beta^{0} + \mathcal{O}(\beta^{1}), \end{equation} where $K_N(\Delta,\Delta_1,\beta) = \sum_{j=0}^{Q_N} K_N(\Delta,\Delta_1)_j B_j$ is the pole free expression.
\end{corollary} Expanding (\ref{howto2}) around $\beta = 0$ seems hard to evaluate, and thus we want to give a short description of how to do it. Recall the definition of $K_N$, \begin{equation} \label{recallkn} K_N = \sum_{mn \leq N} \bigg\{\frac{E_{mn}F_{mn}}{R_{11}} \prod_{k,l \neq m,n}(\Delta - \Delta_{<k,l>}) \bigg( \frac{R_{11}}{\prod_{kl \leq N-mn}(\Delta_{<m,-n>} - \Delta_{<k,l>})} K_{N-mn}\bigg)^{1 - \delta_{N,mn}}\bigg\}, \end{equation} where we recursively replaced $H^{(N-mn)}_{\Delta_{<m,-n>}}$ by the expression (\ref{zamoanddaro}). The power $1 - \delta_{N,mn}$ needs to be added because $H^{(N-mn)}_{\Delta} = 1$ if $mn = N$. Equation (\ref{recallkn}) is enough to calculate the $\beta = 0$ expansion. We simply replace the factors $E_{mn}$ by the expression (\ref{rmnpoles}) and the factor $\prod_{rs\leq N-mn}(\Delta_{<m,-n>} - \Delta_{<r,s>})^{-1}$ by (\ref{hprefactor}). After this substitution, (\ref{recallkn}) is completely written as a sum of products of $\beta$-series \footnote[3]{The series are either finite or infinite, but with a lower bound $\gamma > -\infty$ on the powers of $\beta$, $\sum_{i = \gamma}^{\infty} a_i \beta^i$.}. This works because any other factor arising in $K_N$, for example $K_{N-mn}$ or $F_{mn}$ (\ref{rmnpoles}), is already a Laurent polynomial in $\beta$. The last step is to evaluate the product of all series and Laurent polynomials in $\beta$ and to pick up the prefactors of $\beta^{-Q_N}$ up to $\beta^0$. \textit{Remark:} There is no need to treat $\Delta_1$ as a variable, since the recursion (\ref{horder}) does not need information concerning $\Delta_1$, which can thus be fixed to a number. It is nevertheless useful to know the $\Delta_1$-dependence, since otherwise the $K_N$'s have to be recalculated for every different choice of $\Delta_1$. \section{Affine symmetry} \label{section:affine} In this section we introduce a CFT with an additional symmetry described by an affine Lie algebra $\hat{\mathfrak{g}}$. For a general Lie algebra $\mathfrak{g}$, we define holomorphic currents $J^a(z)$ implicitly through their OPE, \begin{equation} J^a(y) J^b(z) = \frac{k K^{ab}}{(y-z)^2} + \frac{f^{ab}_c J^c(z)}{(y-z)} + O(1), \end{equation} where we introduced the level $k$, the structure constants $f^{ab}_c$ of the Lie algebra $\mathfrak{g}$ and the Killing form $K^{ab} := \frac{1}{2g} f^{ac}_d f^{bd}_c$ ($g$ is the dual Coxeter number of the Lie algebra $\mathfrak{g}$; in the case of $\mathfrak{sl}_N$ we have $g = N$). By extracting the modes of the current $J^a(y)$, \begin{equation} J^{a,(z)}_n := \frac{1}{2\pi i}\oint_z dy\, (y-z)^n J^a(y), \end{equation} we get the generators $J^a_n$ of the affine Lie algebra $\hat{\mathfrak{g}}$, which obey the commutation relations \begin{equation} \label{affinecommutation} [J^a_n, J^b_m] = f^{ab}_c J^c_{n+m} + n k K^{ab} \delta_{n+m,0}. \end{equation} Here the level $k$ becomes the central element of the algebra \footnote[2]{For indecomposable representations of $\hat{\mathfrak{g}}$, the level $k$ acts as a number.}. Do not confuse the level $k$ with the level of descendence $N$: the level of descendence of an element $J^A_N := (J^{a_1}_{-n_1} \cdot ... \cdot J^{a_M}_{-n_M})_{a_i \in A} \in U(\hat{\mathfrak{g}})$, where $A$ is some discrete set, is defined to be the integer $N := \sum_{i=1}^M n_i$.
\begin{definition}[Horizontal algebra] The subalgebra of $\hat{\mathfrak{g}}$ generated by the set $\{J^a_0\}_a$ (one checks from (\ref{affinecommutation}) that the commutation relations close for $n = m = 0$) is isomorphic to the underlying Lie algebra $\mathfrak{g}$. We call this subalgebra the 'horizontal algebra'. \end{definition} Similar to the Virasoro case, a Virasoro field can be constructed to generate the conformal symmetry of the affine symmetric CFT, \begin{equation} \label{emhat} T(z) := \frac{K_{ab}(J^a J^b)(z)}{2(k+g)}, \end{equation} where we introduced the normal ordered product \begin{equation} (AB)(z) = \frac{1}{2\pi i}\oint_z \frac{\mathrm{d}y}{y-z}A(y)B(z). \end{equation} Equation (\ref{emhat}) is better known as the Sugawara construction and is consistent only if $k \neq -g$. The modes of $T(z)$ fulfill the Virasoro commutation relations (\ref{virasoro}) with central charge $c = \frac{k \mathrm{dim}(\mathfrak{g})}{k+g}$. \subsection{Affine primary fields and Ward identities} An affine primary field $\Phi^{R}(z)$ is defined by its OPE with the currents $J^a(z)$, \begin{equation} \label{affineope} J^a(y) \Phi^{R}(z) = \frac{-R(t^a)^T\Phi^{R}(z)}{(y-z)} + O(1), \end{equation} where $R$ is an arbitrary representation of the horizontal algebra. For convenience, let us call $R$ the horizontal representation\footnote[1]{The reader is advised to read appendix (\ref{appB}) for a short introduction to $\mathfrak{sl}_2$-irreducible representations, which are prominent examples of horizontal representations.}; the state $|\Phi^{R}\rangle$ corresponding to $\Phi^{R}(z)$ is a state in the horizontal representation. The action of $R$ in (\ref{affineope}) is better understood when written in a basis of the horizontal representation, $\Phi_i \mapsto -R(t^a)^T \Phi_i = -R(t^a)_{ij} \Phi_j$. Note that we expect the OPE $J^aJ^b\Phi$ to be associative and therefore need the minus sign in (\ref{affineope}). The primary field $\Phi^{R}(z)$ and the Virasoro field $T(z)$ (\ref{emhat}) satisfy the OPE (\ref{TVOPE}), so $\Phi^{R}(z)$ is also a primary in the sense of the Virasoro algebra. Therefore correlators of affine primaries fulfill the Virasoro Ward identities, and the results from section \ref{section:cft} can be used. \subsubsection{Isospin variables} Affine primary fields $\Phi^R$ transform linearly under the representation $R$ of the horizontal algebra $\mathfrak{g}$. Moreover, the primaries can be represented as functions $\Phi^R_x$ depending on the isospin variable $x$. $R$ then acts on the isospin variable $x$ through differential operators, $D^R_x(t^a) \Phi^R_x := R(t^a) \Phi^R_x$. In the $\mathfrak{sl}_2$-case a common choice of basis, the $x$-basis, is given by the differential operators \begin{gather} \label{xbasis} D^j_x(t^-) = -\partial_x, \quad D^j_x(t^0) = x\partial_x - j, \quad D^j_x(t^+) = x^2 \partial_x - 2jx. \end{gather} In the initial definition (\ref{affineope}) the action is defined via its transpose, such that in the isospin formalism the consecutive action reads $J^a_0 J^b_0 \Phi^{R}_x(z) = D^R_x(t^b)D^R_x(t^a)\Phi^{R}_x(z)$. In the same way as in the Virasoro case, we can derive affine Ward identities to constrain the N-point functions. \subsubsection{Global Ward identity} The global Ward identities for a given basis $\{t^a\}_a$ of the horizontal algebra read \begin{equation} \label{sl2global} 0 = \sum_{i=1}^N D^{R_i}_{X_i}(t^a)\langle \prod^N_{i=1} \Phi^{R_i}_{X_i}(z_i)\rangle. \end{equation}
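\textit{Numerical check:} The operators (\ref{xbasis}) indeed satisfy the $\mathfrak{sl}_2$ brackets $[t^0,t^{\pm}] = \pm t^{\pm}$ and $[t^+,t^-] = 2t^0$ (spelled out again in the next subsection), so that the Ward identities (\ref{sl2global}) are consistent; a short Python/SymPy sketch (ours, for illustration only):
\begin{verbatim}
import sympy as sp

x, j = sp.symbols('x j')
f = sp.Function('f')

Dm = lambda F: -sp.diff(F, x)                   # D(t^-)
D0 = lambda F: x*sp.diff(F, x) - j*F            # D(t^0)
Dp = lambda F: x**2*sp.diff(F, x) - 2*j*x*F     # D(t^+)

comm = lambda A, B, F: A(B(F)) - B(A(F))

F = f(x)
print(sp.simplify(comm(D0, Dp, F) - Dp(F)))     # 0:  [t^0,t^+] = +t^+
print(sp.simplify(comm(D0, Dm, F) + Dm(F)))     # 0:  [t^0,t^-] = -t^-
print(sp.simplify(comm(Dp, Dm, F) - 2*D0(F)))   # 0:  [t^+,t^-] = 2 t^0
\end{verbatim}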
\end{equation} In the $x$-basis and the case of $\mathfrak{sl}_2$, the solution of the global Ward identities (\ref{sl2global}) for the three point function is given by, \begin{equation} \label{affine3p} \langle \Phi^{j_1}_{x_1}(z_1)\Phi^{j_2}_{x_2}(z_2)\Phi^{j_3}_{x_3}(z_3) \rangle \propto x^{j^3_{12}}_{12}x^{j^1_{23}}_{23}x^{j^2_{31}}_{31}, \end{equation} where we again used the shorthand notation $j_I^J = \sum_{i \in I} j_i - \sum_{j \in J}j_j$ and $x_{ij} := x_i - x_j$. \subsubsection{Local Ward identity} If all except one field in the $N$-point function are affine primaries, a useful local Ward identity reads, \begin{equation} \label{sl2local} \langle J^a_{n < 0} \Phi^{\sigma_i}(z_i)\prod_{j\neq i} \Phi^{R_j}_{X_j}(z_j) \rangle = \sum_{j\neq i} \frac{D^{R_j}_{X_j}(t^a)}{z_{ji}^n}\langle \Phi^{\sigma_i}(z_i)\prod_{j\neq i} \Phi^{R_j}_{X_j}(z_j)\rangle. \end{equation} Both the global and the local Ward identities, along with the three point function (\ref{affine3p}), will be used in section (\ref{fusionsl2}) to calculate fusion rules between affine primary fields and degenerate fields. \subsection{$\hat{\mathfrak{sl}}_2$ degenerate representations} In this section we proceed to discuss degenerate representations and determine null vectors in the case of general $\mathfrak{sl}_2$ horizontal representations. Recall that a degenerate representation is constructed by modding out the sub-representations generated by the null vectors. We work with the variable $t := k + 2$. For our purposes ($\mathfrak{sl}_2$) we use the $(0,\pm)$-basis with Lie brackets, \begin{gather} [t^0,t^{\pm}] = \pm t^{\pm}, \quad [t^+, t^-] = 2t^0. \end{gather} The corresponding symmetric Killing form is given by $K_{00} = 2, K_{+-} = 1$, with inverse $K^{00} = \frac{1}{2}, K^{+-} = 1$, and is used to raise and lower indices. Specifically in the $\mathfrak{sl}_2$ case, the structure constants fulfill the non-trivial identity $f^{ab}_i f^{ic}_d = 2(K^a_d K^{bc} - K^{ac}K^b_d)$. Furthermore, the quadratic Casimir element $C_2 = K_{ab}J^a_0J^b_0$ in the $(0,\pm)$-basis is given by $C_2 = 2((J^0_0)^2 + J^+_0 J^-_0 - J^0_0)$. \subsubsection{Affine highest weight representation} Let's take an affine Lie algebra $\hat{\mathfrak{g}}$ and a representation $R(t^a)$ of the horizontal algebra $t^a \in \mathfrak{g} \subset \hat{\mathfrak{g}}$. The highest weight representation $\hat{R}$, equivalent to the OPE (\ref{affineope})\footnote[1]{The equivalence can be worked out by determining the action of $J^a_n$ on the primary field $\Phi^{R}$, which indeed yields a highest weight representation.}, is constructed by generalizing the action from $\mathfrak{g}$ to $\hat{\mathfrak{g}}$ on any state $|v\rangle \in R$, \begin{gather} \label{highestweightrep} J^a_{n>0}|v\rangle = 0, \quad J^a_0 |v\rangle = -R(t^a)^T |v\rangle. \end{gather} The resulting generalized action is a highest weight representation of $\hat{\mathfrak{g}}$. Compared to the Virasoro case (\ref{virverma}), the horizontal subspace is in general not one-dimensional; it is spanned by a basis of the horizontal vector space. Null vectors in highest weight representations (\ref{highestweightrep}) of affine Lie algebras are defined generally, \begin{definition}[Affine null vector] A null vector at level $N$, $|\chi,N\rangle = \sum_{i,A} c_A^i J^A_{N} |v_i\rangle$, in a highest weight representation (\ref{highestweightrep}) fulfills the condition $J^a_{n>0} |\chi,N\rangle = 0$ for any $a$.
\end{definition} It is enough to check the null vector condition only for $J^a_{1}$, because of the commutation relation $[J^a_1,J^b_1] = f^{ab}_c J_2^c$ and induction on the level. The connection to irreducibility is given by the following theorem \cite{bauer}, \begin{theorem} The highest weight representation (\ref{highestweightrep}) is irreducible if and only if it contains no null vectors. \end{theorem} Moreover, the existence of null vectors implies an $\mathfrak{sl}_2$-module structure, \begin{proposition} \label{nullmodule} The left-action of $\mathfrak{sl}_2$ on null vectors yields other null vectors at the same level. \end{proposition} \begin{proof} Let $|\chi,N\rangle$ be a null vector. We check the null vector condition, $J^m_1J^a_0|\chi,N\rangle = ([J^m_1,J^a_0] + J^a_0 J^m_1) |\chi,N\rangle = 0$, where we use $[J^m_1,J^a_0] = f^{ma}_e J^e_1$. \end{proof} The goal in the next sections is to find null vectors at levels 0 and 1, to be able to reproduce the fusion rules of degenerate representations at any level, similar to section (\ref{virasorofusion}). \subsubsection{Level 0 null vectors} \label{lvl0} The highest weight representation extension of the finite irreducible representation with half-integer spin $j = j_{0,s} := \frac{s-1}{2}, \quad s \in \mathbb{N}_+$ (see appendix \ref{appB}) of $\mathfrak{sl}_2$ contains a well known null vector at level 0 \cite{lorentz1}, \begin{equation} (J^-_0)^{2j+1} |j,j\rangle = 0. \end{equation} This null vector satisfies the null vector condition trivially and is actually of a different nature, since it is equal to the $0$-element of the representation. Consequently, we don't need to mod out the subspace in order to derive fusion rules. \subsubsection{Level 1 null vectors} \label{lvl1} Malikov, Feigin and Fuks already determined examples of null vectors at each level $N \geq 1$, but they worked them out in a specific $\mathfrak{sl}_2$ basis and horizontal representation~\cite{feigin}. It is therefore of interest to work out Lie-algebra-basis-independent null vectors in a horizontal-representation-independent framework. Hence the question arises whether ideas from this framework around $\mathfrak{sl}_2$ can be extended to the Lie algebra $\mathfrak{sl}_N$ or to general simple Lie algebras. We go to a general setting and focus on horizontal representations which fulfill two properties, \newline \noindent \textbf{(1)} \textit{The horizontal representation is indecomposable and thus the quadratic Casimir $C_2$ is proportional to the identity. In the case of $\mathfrak{sl}_2$ we use the parametrization $C_2 = 2j (j+1)$ with reflection symmetry $j \mapsto -1-j$.} \\ \textbf{(2)} \textit{The horizontal representation $R$/$V_R$ canonically extends to a highest weight representation $\hat{R}$/ $\hat{V}_R$.} \\ This leads to the natural definition, \begin{definition}[Null vector space] \label{NVA} Consider $U(\hat{\mathfrak{sl}}_2)$ where we mod out the subspaces $W_1 := \mathrm{span}_{\mathbb{C}}(U(\hat{\mathfrak{sl}}_2)J^a_{n>0})$ and $W_2 := \mathrm{span}_{\mathbb{C}}(U(\hat{\mathfrak{sl}}_2)(C_2 - 2j(j+1)))$. We call the resulting vector space $\frac{U(\hat{\mathfrak{sl}}_2)}{W_1 + W_2}$ the null vector space. \end{definition} Recall that in the universal enveloping algebra two expressions represent the same element if they differ only by the use of the commutation relations. For example, we have $J^+_0 J^0_0 J^-_0 = J^0_0J^+_0 J^-_0 - J^+_0J^-_0$, and modulo $W_2$ the product $J^+_0 J^-_0$ can be replaced by $j(j+1) - (J^0_0)^2 + J^0_0$. Similarly, modulo $W_1$ the element $J^+_1 J^-_0 = 2 J^0_1 + J^-_0 J^+_1$ is simply zero.
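As a consistency check of the parametrization in property \textbf{(1)} (our own computation, anticipating the Casimir value used for the level 1 null operator below): for the spin value $j = -\frac{t}{2}$ one finds
\begin{equation*}
C_2 = 2j(j+1) = 2\left(-\frac{t}{2}\right)\left(1-\frac{t}{2}\right) = \frac{t^2}{2} - t.
\end{equation*}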
The null vector space encodes the action of the affine highest weight representation, i.e. for any $T \in \frac{U(\hat{\mathfrak{sl}}_2)}{W_1 + W_2}$ we define the canonical map, \begin{equation} \label{map2} T: V_R \rightarrow \hat{V}_R, \quad |v\rangle \mapsto T|v\rangle. \end{equation} We also need to redefine the null vector condition accordingly, \begin{definition}[Null operator] Within the null vector space, the null vector condition transforms into $\forall a: J^a_1 \hat{T} = 0$ or equivalently $\forall a: [J^a_1,\hat{T}] = 0$. $\hat{T}$ is called a null operator and maps any horizontal state to a null vector via the corresponding map (\ref{map2}). \end{definition} Left-action of $U(\mathfrak{sl}_2)$ on $\hat{T}$ and the freedom of choice of $|v\rangle \in V_R$ form a set of null operators $U(\mathfrak{sl}_2) \hat{T} U(\mathfrak{sl}_2)$. This space actually forms a vector space over $\mathbb{C}$ and is an $\mathfrak{sl}_2$-module by means of the adjoint action. The main advantage of the null vector space is that we are able to search for null operators without referring to the affine highest weight representation. \newline \noindent \textbf{Eigenspaces:} The null vector space forms an $\mathfrak{sl}_2$-module and therefore we can diagonalize it with respect to the adjoint action $[J^0_0,\circ]$. The resulting eigenspaces $E^Q$ contain elements of constant charge $Q$, where the charge of an element $\prod_{i = 1}^M J^{a_i}_{n_i} \in \frac{U(\hat{\mathfrak{sl}}_2)}{W_1 + W_2}$ is given by $Q := \sum_{i=1}^M a_i$ (counting the labels $+,0,-$ as $+1,0,-1$). For an element $J^Q \in E^Q$ we then have $[J^0_0, J^Q] = Q J^Q$. \newline \noindent \textbf{Universal null vector:} At level 1 we have determined a basis-independent null operator, \begin{equation} \label{level1null} \boxed{\hat{T}^c_1 := 2 K_{ab} J^a_{-1}J^b_{0} J_0^c - t f^c_{ab} J^a_{-1}J^b_0 - t^2 J^c_{-1},} \end{equation} where the quadratic Casimir takes the value $C_2 = \frac{t^2}{2} - t$. We want to show explicitly that $\hat{T}^c_1$ fulfills the null vector condition. To do so let's act with $J^d_1$ and use the commutation relation (\ref{affinecommutation}), \begin{gather} \begin{aligned} J^d_1 \hat{T}^c_1 &= 2K_{ab}[J^d_1,J^a_{-1}]J^b_0J^c_0 - t f^c_{ab}[J^d_1,J^a_{-1}] J^b_0 - t^2 [J^d_1,J^c_{-1}] \\ &= (f^{da}_e J^e_0 + k K^{da})(2K_{ab}J^b_0 J^c_0 - t f^c_{ab}J^b_0) - t^2 (f^{dc}_e J^e_0 + k K^{dc}) = ... \end{aligned} \end{gather} Then we use the identities $f^{d}_{be}J^e_0J^b_0 = 2 J^d_0$ and $f^{ab}_i f^{ic}_d = 2(K^a_d K^{bc} - K^{ac}K^b_d)$ (this identity only applies to $\mathfrak{sl}_2$) and group together similar terms, \begin{gather} \label{step2} \begin{aligned} ... = J^d_0J^c_0(4 + 2k) -2tJ^c_0J^d_0 + K^{cd}(2t C_2 - t^2 k) + f^{cd}_e J^e_0(\underbrace{t^2 - tk - 2t}_{= 0}). \end{aligned} \end{gather} Finally we commute the term $-2t J^c_0J^d_0 = -2t (J^d_0J^c_0 + f^{cd}_e J^e_0)$ and get that (\ref{step2}) is zero if and only if $C_2 = \frac{tk}{2} = \frac{t^2}{2} - t$. Interestingly, the null operators $\hat{T}^c_1$ satisfy the commutation relation, \begin{equation} \label{nullcommutation} [J^a_0,\hat{T}^b_1] = f^{ab}_c \hat{T}^c_1.
\end{equation} \noindent \textbf{Set of null operators:} The main goal is to give a sketch of the proof of the following conjecture, where we work for simplicity in the $(0,\pm)$-basis, \begin{conjecture}[Universal null operator] \label{conjuniq} The null operator $\hat{T}^0_1$ at level 1 and $C_2 = \frac{t^2}{2} - t$ is universal, such that the subspace $\mathrm{span}_{\mathbb{C}}(U(\mathfrak{sl}_2)\hat{T}^{0}_1U(\mathfrak{sl}_2))$ contains the set of all null operators in the null vector space $\{\hat{T}\in \frac{U(\hat{\mathfrak{sl}}_2)}{W_1 + W_2}| J^a_1 \hat{T} = 0\}$. \end{conjecture} To begin with, we want to show the weaker proposition, \begin{proposition} \label{prope0} $\hat{T}^0_1$ generates any finite null operator in $E^0$. The null operators are generated by $\hat{T}^0_1 \mathrm{span}_{\mathbb{C}}(J_0^0)$. \end{proposition} In general we expect that every null operator can be written as a linear combination of products $\prod^N_{i = 1} J^{a_i}_{n_i}$, and therefore the following proposition helps, \begin{proposition} Every product $\prod^N_{i = 1} J^{a_i}_{n_i}$ at level $(-\sum_i n_i) = 1$ can be rewritten as a linear combination of products $J^{c}_{-1} \prod_j J^{b_j}_0$ such that the charge $c + \sum_j b_j = \sum_i a_i$ stays invariant. \end{proposition} \begin{proof} Identify all the generators $J^a_{n>0}$ and commute them to the right, to be left with generators at level 0 and one generator at level 1. Commute the level 1 generator to the left. All these operations conserve the level and the charge, due to the commutation relation (\ref{affinecommutation}). \end{proof} In addition we conclude that if $\hat{T} = \lambda_1 \prod_i J^{a_i}_{n_i} + \lambda_2 \prod_i J^{b_i}_{m_i}$ is a null operator with charges $\sum_i a_i \neq \sum_i b_i$, then each term is a null operator on its own, because of the linear independence of vectors in different eigenspaces $E^Q$. Therefore at each charge and level we can rearrange the null vector candidate into a simple ansatz. The ansatz at level one and in the eigenspace of zero charge $E^0$ can generally be written as, \begin{equation} \label{an1} \hat{T} = J^+_{-1}J^-_0 (\sum_{i\geq 0}a_i (J^0_0)^i) + J^0_{-1}(\sum_{i\geq 0}b_i (J^0_0)^i) + J^-_{-1}J^+_0 (\sum_{i\geq 0}c_i (J^0_0)^i). \end{equation} Applying the null operator condition $J^a_1 \hat{T} = 0$ to this ansatz yields the solution, \begin{equation} \label{nullrecursion} b_{i+1} = 2 a_i - t a_{i+1}, \quad c_{i+1} = \frac{2}{t}(a_i - c_i) - a_{i+1}, \end{equation} where we define that quantities with negative indices vanish. \newline \noindent \textbf{Finite solutions:} A finite solution of (\ref{nullrecursion}) is defined by the condition $a_{N+i} = 0$ for all $i \geq 1$, with $N \in \mathbb{N}_{*}$, which implies $b_{N+1+i} = 0$ and $c_{N+i} = 0$ for all $i \geq 1$. The recursion (\ref{nullrecursion}) then constrains $a_N$ through $a_N = \frac{2}{t}(a_{N-1} - c_{N-1}(a_{i\leq N-1})) - a_{N}$, where $c_{N-1}$ depends linearly on the $a_{i\leq N-1}$; therefore we can freely choose $A_N := (a_0,...,a_{N-1})$ to determine a finite solution for any $N$. We write $\hat{T}_N(A_N)$ for the null operator corresponding to the solution $A_N$. The space of finite solutions $\{\hat{T}_N(A_N)| A_N \in \mathbb{C}^{N}, N \in \mathbb{N}\}$ forms a vector space because the recursion (\ref{nullrecursion}) is linear.
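Since the recursion (\ref{nullrecursion}) is linear and elementary, it can be checked mechanically. The following short script (our own illustration, not part of the derivation) generates the coefficient lists $(b_i)$ and $(c_i)$ from a chosen $(a_i)$; the index convention, with all negative-index quantities set to zero so that $b_0 = -t\,a_0$ and $c_0 = -a_0$, is the one that reproduces the finite solutions displayed below.
\begin{verbatim}
import sympy as sp

t = sp.symbols('t')

def level1_coefficients(a):
    """a = [a_0, ..., a_N]; returns (b, c) up to index N+1."""
    N = len(a) - 1
    a = list(a) + [0]                      # a_{N+1} := 0
    b = [sp.expand(-t * a[0])]             # b_0 = 2 a_{-1} - t a_0
    c = [sp.expand(-a[0])]                 # c_0 = (2/t)(a_{-1} - c_{-1}) - a_0
    for i in range(N + 1):
        b.append(sp.expand(2 * a[i] - t * a[i + 1]))
        c.append(sp.expand(2 / t * (a[i] - c[i]) - a[i + 1]))
    return b, c

print(level1_coefficients([t, 2]))        # b = [-t**2, 0, 4], c = [-t, 2, 0]
print(level1_coefficients([t, 4, 4 / t])) # b = [-t**2, -2*t, 4, 8/t],
                                          # c = [-t, 0, 4/t, 0]
\end{verbatim}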
We will give two examples of finite solutions at $N = 1,2$, \begin{gather} \begin{aligned} \hat{T}_1(t,2) =& J^0_{-1}(4(J^0_0)^2 - t^2) + J^-_{-1}J^+_0(2J_0^0 - t) + J_{-1}^+J_0^-(2J_0^0 + t) \\ \hat{T}_2(t,4,4/t) =& J^0_{-1}(-t^2 - 2t J^0_0 +4(J^0_0)^2 + 8/t (J^0_0)^3) + J^-_{-1}J^+_0(-t + 4/t(J^0_0)^2) \\ &+ J_{-1}^+J_0^-(t + 4 J^0_0 + 4/t (J^0_0)^2). \end{aligned} \end{gather} Actually we have the redundancy $\hat{T}_2= \hat{T}_1 (2J_0^0 + t)/t$. The solution $\hat{T}_1(t,2)$ corresponds to the initial result (\ref{level1null}) with $c = 0$. After working out the finite solutions in $E^0$, we are ready to prove proposition \ref{prope0}, \begin{proof} The proof is done by induction on the positive integer $N$, such that $a_{N+i} = 0$ for $i\geq 1$. We have already seen that at $N=2$ it can be related to $N=1$, and we thus assume it is possible up to $N-1$. \\ We start by noticing that, due to the linear character of the solution $(A_N,a_N)$, we can choose without loss of generality $a_0 = 1$ and write $(A_N,a_N) = (1,0,...,0,a_N') + (0,a_1,...,a_N - a_N')$. Because of $0 = a_0 = c_0 = b_0$ in the second term, we can factor out $a_1 J^0_0$ in the ansatz (\ref{an1}) and relate it to the $N-1$ case. \\ To decompose the first term $(1,0,...,0,a_N')$, we notice first that the recursion (\ref{nullrecursion}) implies $a_N' =(-1)^{N+1} (\frac{2}{t})^N$. Therefore we decompose it further $(1,0,...,0,a_N') = (1,0...,0,a_{N-1}') + (-1)^{N+1}(\frac{2}{t})^{N-1} (0,...,0,1,\frac{2}{t})$ and factor out $(J_0^0)^{N-1}$ in the second term. We see both terms can be related to lower $N$-cases. This construction is equivalent to $\hat{T}(1,0,...,a_N) = \hat{T}(1,0,...,a_{N-1}) + (-1)^{N+1}(\frac{2}{t})^{N-1} \hat{T}(1,\frac{2}{t})(J_0^0)^{N-1}$. Using the induction assumption again, we conclude that we can decompose every finite solution into $\hat{T}^0_1 \sum_i a_i (J_0^0)^i$. \end{proof} In the eigenspaces with non-zero charge $E^Q$ we can create null operators by acting multiple times with $J^{\pm}_0$ on $\hat{T}^0_1$ from the left or right. The last step would be to prove that at each charge $ Q \neq 0$ all finite solutions can be generated by $\hat{T}^0_1$. We have not found a complete proof, but it should closely follow the proof of proposition \ref{prope0}, and we therefore leave it as a conjecture, \begin{conjecture} The null vectors in each eigenspace $E^Q$ with charge $0 \neq Q \in \mathbb{Z}$ are generated by the unique element $\hat{T}^{0}_1$. \end{conjecture} \textit{Remark:} Note that indecomposable horizontal representations do not contain a unique vector that generates the whole vector space by repeatedly applying the action, whereas an irreducible representation does. Therefore we expect that every null vector in the irreducible case can be reached with the map (\ref{map2}) and the subspace $\mathrm{span}_{\mathbb{C}}(U(\mathfrak{sl}_2)\hat{T}^{0}_1U(\mathfrak{sl}_2))$ from conjecture \ref{conjuniq}. It is not entirely clear if the same holds for indecomposable representations. \subsection{$\hat{\mathfrak{sl}}_2$ fusion rules} \label{fusionsl2} As in the Virasoro case, we deduce the analog of fusion, but with primary fields in $\hat{\mathfrak{sl}}_2$-degenerate representations (shorthand: \textit{degenerate field}). Generally we name the null operator $\hat{T}^{r,s}$ and the degenerate representation $\hat{R}^{r,s}$ if they correspond to the spin $j_{r,s} = \frac{s-1}{2} - \frac{t}{2} r$. On the other hand we write $\hat{R}_j$ for the highest weight representation with well-defined quadratic Casimir $2j(j+1)$ and spin $j$.
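For orientation, evaluating the formula $j_{r,s} = \frac{s-1}{2} - \frac{t}{2}r$ for the lowest labels (our own evaluation) gives
\begin{equation*}
j_{0,1} = 0, \qquad j_{0,2} = \frac{1}{2}, \qquad j_{1,1} = -\frac{t}{2}, \qquad j_{1,2} = \frac{1}{2} - \frac{t}{2},
\end{equation*}
which are precisely the spins that appear in the fusion rules derived below.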
The idea is then to use the prominent relation $0 = \hat{T}^{r,s}|v\rangle$ of degenerate representations within $N$-point functions, \begin{equation} \label{nullvectorequation} 0 = \langle \hat{T}^{r,s} \Phi_{j_{r,s}} \Phi^{R_1}_{j_1}\cdot ... \cdot \Phi^{R_{N-1}}_{j_{N-1}} \rangle, \end{equation} where the degenerate field $\Phi_{j_{r,s}}$ corresponds to the state $|v\rangle$. Then we apply the affine Ward identities (\ref{sl2global}) resp. (\ref{sl2local}) to deduce the differential equation corresponding to (\ref{nullvectorequation}) in the isospin framework. We call these equations (\ref{nullvectorequation}) \textit{null vector equations}. Take care that the consecutive action acts as $J^a_0 J^b_0 = (-D(t^b))(-D(t^a))$. \subsubsection{Finite level 0} For the finite irreducible horizontal representation (\ref{lvl0}), we get the null vector equation $(J^-_0)^{2j + 1}|j,j\rangle = 0$ with spin $j = j_{0,s}$. Using the global Ward identity (\ref{sl2global}) and the $x$-basis (\ref{xbasis}), this constrains the 3-point function (\ref{affine3p}), \begin{equation} 0 = (\partial_x)^{2j+1} \langle \Phi^j_x(z) \Phi^{j_2}_{x_2}(z_2) \Phi^{j_3}_{x_3}(z_3) \rangle. \end{equation} For a non-zero 3-point function the condition on the spins reads, \begin{equation} \prod_{i=0}^{2j}(j_3 - j_2 - j_{0,s} + i) = 0. \end{equation} This can be identified as the fusion rule of a generic field with a degenerate field of spin $j_{0,s}$, \begin{equation} \hat{R}^{0,s} \times \hat{R}_j = \hat{R}_{j + j_{0,s}} + \hat{R}_{j + j_{0,s-2}} + ... + \hat{R}_{j - j_{0,s}}. \end{equation} Thus the field with spin $j_{0,1} = 0$, for which $\Delta_{j = 0} = 0$\footnote[1]{The conformal dimension of an $\mathfrak{sl}_2$ primary is given by $\Delta_j = \frac{j(j+1)}{t}$.}, acts as the identity field $\Phi^0(z) \propto \mathrm{id}$. \subsubsection{Generic representation level 1} Although Malikov, Feigin and Fuks have determined a null vector at level 1, we are not able to use it directly to determine the null vector equation. This is because it is written in a basis which diagonalizes $J^0_0$. Therefore we take the degenerate representation with null operator $\hat{T}_1^0$ (\ref{level1null}), acting on any horizontal state as $\hat{T}^0_1|v\rangle$, and spin $j_{1,1}$. This approach yields the following null vector equation in the case of the 3-point function, \begin{gather} \begin{aligned} 0 = \sum_{s = 2,3} \frac{1}{z_{s1}} \bigg(2K_{ab}D_{x_1}(t^0)D_{x_s}(t^a)D_{x_1}(t^b) + t f^0_{ab} D_{x_s}(t^a) D_{x_1}(t^b)& \\ - t^2 D_{x_s}(t^0)\bigg) \langle \Phi^{j_{1,1}}_{x_1}(z_1) \Phi^{j_2}_{x_2}(z_2)\Phi^{j_3}_{x_3}(z_3)\rangle&. \end{aligned} \end{gather} In the limit $z_1, z_3 \rightarrow 0, \infty$ only the term proportional to $\frac{1}{z_{21}}$ survives, and in the $x$-basis it becomes the differential equation, \begin{gather} \begin{aligned} 0 = \big(&-2x_1 x_{12}^2 \partial_2(\partial_1)^2 + 4j_2 x_1 x_{21} (\partial_1)^2 + 4x_1(1+t)x_{21} \partial_2\partial_1\\ &- 4j_2 x_1 (1+t)\partial_1 - 2x_1 t(1+t) \partial_2\big) \langle \Phi^{j_{1,1}}_{x_1}(z_1) \Phi^{j_2}_{x_2}(z_2)\Phi^{j_3}_{x_3}(z_3) \rangle. \end{aligned} \end{gather} We use the solution of the 3-point function (\ref{affine3p}) and get the following condition on the spins, \begin{gather} \begin{aligned} 0 = \big(&8 j_2^3 + 8 j_3^3 - 4 j_3^2 (t-2) - 2 j_3 t^2 + (t-2)t^2 \\ &- 4 j_2^2 (2 j_3 + t - 2) - 2 j_2 (4 j_3^2 - 4 j_3 (t - 2) + t^2)\big).
\end{aligned} \end{gather} If the 3-point function is non-zero, this equation has the three solutions $j_3 \in \{j_2 \pm j_{1,1}, -1-j_2 + j_{1,1}\}$ \footnote[2]{The parametrization $C_2 = 2j(j+1)$ contains the symmetry $j \rightarrow -1 - j$ and we get equivalent solutions up to this symmetry.}. Therefore the fusion rules can be determined, \begin{equation} \hat{R}^{1,1} \times \hat{R}_j = \hat{R}_{j+j_{1,1}} + \hat{R}_{j-j_{1,1}}. \end{equation} \subsubsection{Generalized Fusion Rule} The fusion algebra is associative and commutative, such that we are able to generate the fusion rule of a degenerate field with spin $j_{r,s}$ by repeatedly fusing fields with spins $j_{0,2}$ and $j_{1,1}$ (the field corresponding to $j_{0,1} = 0$ is the identity field, $\hat{R}^{0,1} \times \hat{R}_j = \hat{R}_j$). To see that these two spins are enough to generate the fusion rules, we take a look at the fusions, \begin{equation} \hat{R}^{1,1} \times \hat{R}^{1,1} = \hat{R}^{2,1} \oplus \hat{R}^{0,1}, \quad \hat{R}^{0,2} \times \hat{R}^{0,2} = \hat{R}^{0,3} \oplus \hat{R}^{0,1}, \quad \hat{R}^{0,2} \times \hat{R}^{1,1} = \hat{R}^{1,2}. \end{equation} Thus $\hat{R}^{1,1}$ increases or decreases the left index by one unit and $\hat{R}^{0,2}$ increases the right index by one unit. Using this we can write down how a generic highest weight representation $\hat{R}_j$ fuses with a degenerate representation $\hat{R}^{r,s}$: \begin{equation} \label{sl2fusion} \hat{R}^{r,s} \times \hat{R}_j = \hat{R}_{j - j_{r,s}} + \hat{R}_{j - j_{r-2,s}} + \hat{R}_{j - j_{r,s-2}} + ... + \hat{R}_{j - j_{-r,-s + 2}} \end{equation} The fusion between two degenerate fields, respecting associativity and commutativity, yields the closed formula, \begin{equation} \hat{R}^{r,s} \times \hat{R}^{a,b} = \sum_{i \underset{2}{=} |r-a|}^{r+a} \sum_{j \underset{2}{=} |s-b|+1}^{s + b - 1} \hat{R}^{i,j}. \end{equation} The derived fusion rule (\ref{sl2fusion}) coincides with previous work, for example \cite{ribaultplane,bauer,awata}. \section*{Conclusion} In this thesis we have investigated the poles in the central charge of the torus 1-point block, and null vectors in highest weight representations of $\hat{\mathfrak{sl}}_2$. By calculating the first four orders of the recursion relation of the 1-point block (\ref{horder}), we have found that there seems to be a very specific algebraic form of the $c$-pole-free expression $K_N$ (\ref{zamoanddaro}). It is a Laurent polynomial in the variable $\beta := b^2$ of degree $Q_N := \big(\sum_{1 \leq mn \leq N} 1 \big)- N$, with prefactors depending on the external $\Delta_1$ and internal $\Delta$ conformal dimensions. Although it seems hard to get rid of the poles in the central charge, we have found that an expansion of $K_N$ around $\beta = 0$ can be used to find the unknown prefactors of the Laurent polynomial. The possibility of calculating a pole-free torus 1-point block helps to determine numerically the 1-point functions of CFTs with rational central charge. For future work, calculations and checks at higher orders will help to confirm that the algebraic form holds for orders higher than four. In the second part, we moved to CFTs with affine Lie algebras as underlying symmetry, where we especially focused on null vectors and degenerate representations.
In the case of $\hat{\mathfrak{sl}}_2$, we have found an $\mathfrak{sl}_2$-basis-independent and horizontal-representation-independent (up to indecomposability) null operator at level 1 with quadratic Casimir $C_2 = \frac{t^2}{2} - t$, \begin{equation} \hat{T}^c_1 := 2 K_{ab} J^a_{-1}J^b_{0} J_0^c - t f^c_{ab} J^a_{-1}J^b_0 - t^2 J^c_{-1}. \end{equation} Applying the null operator $\hat{T}^c_1$ to any state of the horizontal representation generates a null vector in the usual sense. In this framework we derived the fusion rules of degenerate representations with general highest weight representations, which coincide with earlier work. \section*{Acknowledgement} I would like to express my deepest gratitude to my supervisor Dr. Sylvain Ribault, who invested a lot of time in discussing the content of this work, and for his consistently valuable advice. A special thanks goes to the Institut de Physique Théorique for hosting me during the M2-internship and providing me with this very special opportunity. Last but not least, I want to thank my family; without their help I would not be where I am today.
{ "timestamp": "2022-09-20T02:21:27", "yymm": "2209", "arxiv_id": "2209.08653", "language": "en", "url": "https://arxiv.org/abs/2209.08653" }
\section{INTRODUCTION} Minimally Invasive Surgery (MIS, aka laparoscopic surgery) has become the gold standard for several procedures (e.g., cholecystectomy and appendectomy), as it provides better clinical outcomes including reduced blood loss, minimised trauma to the body, less post-operative pain and faster recovery~\cite{velanovich2000laparoscopic, wilson2014overview}. Despite the benefits of MIS, surgeons lose direct vision of, and touch on, the target, which decreases surgeon-patient transparency and imposes technical challenges on the surgeon. These challenges have motivated the development of automatic techniques for the analysis of the surgical flow~\cite{aviles2016towards,maier2017surgical,vercauteren2019cai4cai,nwoye2022rendezvous}. In particular, this work addresses a key research problem in surgical science\textemdash surgical recognition, which provides context-aware support and safety. The majority of existing techniques focus on phase recognition \cite{blum2010modeling,dergachyova2016automatic,lo2003episode,twinanda2016endonet,zisimopoulos2018deepphase}. However, phase recognition is limited by its own definition, as it does not provide complete information on the surgical scene. We therefore consider the setting of \textit{surgical action triplet recognition}, which offers a better understanding of the surgical scene. The goal of triplet recognition is to recognise the $\left< \mbox{instrument, verb, target}\right>$ components and their inherent relations. A visualisation of this task is displayed in Fig.~\ref{networkB}. The concept behind triplet recognition was already present in the early works of~\cite{neumuth2006acquisition,katic2014knowledge}. However, it was not until the recent introduction of richer datasets, such as CholecT40~\cite{nwoye2020recognition}, that the community started developing new techniques under more realistic conditions. The work of Nwoye et al.~\cite{nwoye2020recognition} proposed a framework called Tripnet, the first to formally address surgical actions as triplets. In that work, the authors proposed a 3D interaction space for learning the triplets. In a more recent work, the authors of~\cite{nwoye2022rendezvous} introduced two new models. The first one is a direct extension of Tripnet called Attention Tripnet, whose novelty relies on a spatial attention mechanism. In the same work, the authors introduced another model called Rendezvous (RDV), which features a transformer-inspired neural network. \setlength{\tabcolsep}{0mm}{ \begin{figure} \centering \includegraphics[width=0.42\textwidth]{taskIllustration.pdf} \caption[Main network structure]{Visualisation of the surgical action triplet recognition task. We consider the task where the instrument ($I$), verb ($V$, action) and target ($T$, anatomical part) are to be predicted. \\[-0.8cm]} \label{networkB} \end{figure} } A commonality of existing techniques is the development of new mechanisms for improving the network architecture. However, despite these potential improvements, the performance of existing techniques is substantially lower than on other tasks in the surgical sciences. In this work, we take the opposite direction to existing techniques and tackle the surgical action triplet recognition problem through the lens of robustness and explainability.
In the machine learning community there has been a substantial increase of interest in understanding the lack of reliability of deep learning models (e.g.,~\cite{ribeiro2016should,koh2017understanding,sundararajan2017axiomatic,liu2019multi,yeh2019fidelity, hsieh2020evaluations}). To understand the lack of reliability of existing deep networks, a popular family of techniques is the so-called \textit{feature based explanations} via robustness analysis~\cite{simonyan2013deep,zeiler2014visualizing,plumb2018model,wong2021leveraging,singla2021salient}. Whilst existing techniques have been extensively evaluated on natural image tasks, there are no existing works addressing problems as complex as action triplet recognition. \faHandPointRight[regular] \textbf{Contributions.} In this work, we introduce, to the best of our knowledge, \textit{the first study to understand the failure of existing deep learning models for surgical action triplet recognition.} To do this, we analyse the failures of existing state-of-the-art solutions through the lens of robustness, pushing the existing SOTA techniques for surgical action triplet recognition to the limit under weak and strong $\delta$-perturbations. We then extensively analyse the failure modes via the evaluation criterion Robustness-$S$, which probes the behaviour of the models through feature based explanations. Our study reveals the impact of core and spurious features on model robustness. Our study opens the door to more trustworthy and reliable deep learning models in surgical data science, which is imperative for MIS. \section{METHODOLOGY} We describe two key parts: i) our setting along with our assumptions, and ii) how we evaluate robustness via adversarial optimisation. The workflow of our work is displayed in Fig.~\ref{network}. \subsection{Surgical Action Triplet Recognition} In the surgical action triplet recognition problem, the main task is to recognise the triplet $IVT$, which is the composition of three components during surgery: instrument ($I$), verb ($V$) and target ($T$), in a given RGB image $\bm{x} \in \mathbb{R}^{H \times W \times 3}$. Formally, we consider a given set of samples $\{ (\bm{x}_n ,{y}_n) \}_{n=1}^{N}$ with provided labels $\mathcal{Y} = \{0,1,..,C_{IVT}-1\}$ for $C_{IVT}=100$ classes. We then seek to learn a function $f: \mathcal{X} \rightarrow \mathcal{Y}$ such that $f$ generalises well to unseen data. That is, a given parametrised deep learning model takes the image $\bm{x}$ as input and outputs a set of class-wise presence probabilities, in our case for 100 classes under the $IVT$ composition, $\bm{Y_{IVT}} \in \mathbb{R}^{100}$, which we call the logits of $IVT$. Since there are three individual components under the triplet composition, within the training network we also consider the individual components $d^* \in \{I, V, T\}$, each with class number $C_{d^*}$ (i.e. $C_I=6$, $C_V=10$, $C_T=15$). The logits of each component, $\bm{Y_{d^*}} \in \mathbb{R}^{C_{d^*}}$, are computed and used within the network. In current state-of-the-art (SOTA) deep models~\cite{nwoye2020recognition,nwoye2022rendezvous}, there is a common structure divided into three parts: i) the feature extraction backbone; ii) the individual component encoder; iii) the triplet aggregation decoder that associates the components and outputs the logits of the $IVT$ triplet (a minimal sketch of this three-part pipeline is given below).
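To fix ideas, the following is a minimal sketch of this common three-part structure (our own illustration; the module names, sizes and the use of global average pooling are simplifying assumptions, not the released implementations of~\cite{nwoye2020recognition,nwoye2022rendezvous}).
\begin{verbatim}
import torch
import torch.nn as nn
import torchvision

C_I, C_V, C_T, C_IVT = 6, 10, 15, 100

class TripletNet(nn.Module):
    """Backbone -> instrument encoder (CAMs + logits) -> triplet decoder."""
    def __init__(self):
        super().__init__()
        resnet = torchvision.models.resnet18(weights=None)
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])  # (B,512,h,w)
        self.cam_I = nn.Conv2d(512, C_I, 1)         # instrument class activation maps
        self.head_V = nn.Conv2d(512 + C_I, C_V, 1)  # verb head, conditioned on CAMs
        self.head_T = nn.Conv2d(512 + C_I, C_T, 1)  # target head, conditioned on CAMs
        self.decoder = nn.Linear(C_I + C_V + C_T, C_IVT)  # triplet aggregation

    def forward(self, x):
        f = self.backbone(x)
        cams = self.cam_I(f)
        y_i = cams.mean(dim=(2, 3))                 # global average pooling -> Y_I
        fc = torch.cat([f, cams], dim=1)
        y_v = self.head_V(fc).mean(dim=(2, 3))      # Y_V
        y_t = self.head_T(fc).mean(dim=(2, 3))      # Y_T
        y_ivt = self.decoder(torch.cat([y_i, y_v, y_t], dim=1))  # Y_IVT
        return y_i, y_v, y_t, y_ivt

model = TripletNet()
outs = model(torch.randn(2, 3, 256, 448))           # image size used in this paper
print([o.shape for o in outs])                      # (2,6), (2,10), (2,15), (2,100)
\end{verbatim}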
More precisely, the individual component encoder first concentrates on the instrument component, outputting Class Activation Maps (CAMs $\in \mathbb{R}^{H \times W \times C_I}$) and the logits $\bm{Y_I}$ of the instrument classes; the CAMs are then associated with the verb and target components separately to obtain their logits ($\bm{Y_V}$ and $\bm{Y_T}$), addressing the instrument-centric nature of the triplet. The current SOTA techniques for surgical action triplet recognition focus on improving components ii) \& iii). However, the performance is still substantially lower than on other surgical tasks. Our intuition is that such behaviour is due to the inherently complex and ambiguous conditions in MIS, which reflect the inability of the models to learn meaningful features. Our work is then based on the following modelling hypothesis. \begin{lem}[Deep Features are key for Robustness]{thm:hypothesis1} \textit{Deep surgical techniques for triplet recognition lack reliability due to ineffective features. Therefore, the key to boosting performance, improving trustworthiness and reliability, and understanding the failure of deep models lies in the deep features.} \end{lem} Following the previous hypothesis, we address the question: why do deep triplet recognition models fail? We do so by analysing feature based explanations via robustness. To do this, we consider the current three SOTA techniques for our study: Tripnet~\cite{nwoye2020recognition}, Attention Tripnet and Rendezvous~\cite{nwoye2022rendezvous}. Moreover, we extensively investigate the impact of deep features using four widely used backbones: ResNet-18, ResNet-50~\cite{https://doi.org/10.48550/arxiv.1512.03385}, DenseNet-121~\cite{https://doi.org/10.48550/arxiv.1608.06993} and the Swin Transformer~\cite{https://doi.org/10.48550/arxiv.2103.14030}. In the next section, we detail our strategy for analysing robustness. \setlength{\tabcolsep}{0mm}{ \begin{figure} \centering \includegraphics[width=0.48\textwidth]{network-robustness.png} \caption[Main network structure]{Illustration of the main network structure, and how the adversarial perturbation is added to measure robustness.\\[-0.5cm]} \label{network} \end{figure} } \subsection{Feature Based Explanations via Robustness} Our triplet recognition models output the logits of the triplet composition, which we then use to select the predicted label for the classification result. We define the model from image $\bm{x}$ to the predicted label $\hat{y}$ as $f: \mathcal{X} \rightarrow \mathcal{Y}$, where $\mathcal{X} \subset \mathbb{R}^{H \times W \times 3}$ and $\mathcal{Y}=\{0,1,2, ..., C_{IVT}-1\}$. For each class $m \in \mathcal{Y}$ and within each given sample, we seek to recognise core and spurious attributions~\cite{singla2021salient,singla2021understanding}, whose definitions are as follows. \fcircle[fill=deblue]{3pt} \textbf{Core Attributes:} these refer to the features that form a part of the object we are detecting. \fcircle[fill=wine]{3pt} \textbf{Spurious Attributes:} these are the features that are not part of the object but co-occur with it. \\ \textbf{How Do We Evaluate Robustness?} The body of literature has reported several alternatives for addressing the robustness of deep networks. Our work is motivated by recent findings on perturbation based methods, where even small perturbations can significantly affect the performance of neural nets.
In particular, we consider the setting of adversarial training~\cite{allen2022feature,olah2018building,engstrom2019adversarial} to \textit{robustify} a given deep model. The idea behind adversarial training for robustness is to enforce that a given model maintains its performance under a given perturbation $\delta$. This can be cast as an optimisation problem over the network parameters $\theta$: \begin{equation} \theta^* = \arg \min_{\theta} \mathbb{E}_{(\bm{x},y)\sim \mathcal{D}}[ \mathcal{L}_{\theta} (\bm{x},y)], \end{equation} where $\mathbb{E}[\mathcal{L}_\theta(\cdot)]$ denotes the expected loss with respect to the parameters $\theta$. \\ One seeks a model that is resistant to any $\delta$-perturbation. In this work, we follow a generalised adversarial training model, which reads: \begin{theo}[Adversarial training under $\delta$]{def1:generalised} \centering $ \theta^* = \arg \min_{\theta} \mathbb{E}_{(\bm{x},y)\sim \mathcal{D}}[ \max_{\bm{\delta \in \Delta}}\mathcal{L}_{\theta} (\bm{x}+\bm{\delta},y)].$ \end{theo} The goal is for the model not to change its performance even under the worst (strongest) $\delta$. The machine learning literature has explored different forms of the generalised model in definition~\eqref{def1:generalised}, for example a better sparsity regulariser for adversarial training as in~\cite{xu2018structured}. In this work, we adopt the evaluation criteria of~\cite{hsieh2020evaluations}, where one seeks to measure the susceptibility of features to adversarial perturbations. More precisely, we gain insight into the deep features behind a prediction by visualising a compact set of relevant features selected by given explanation methods on trained models, and we measure the robustness of the models by performing adversarial attacks on the relevant or the irrelevant features. We denote the set of all features as $U$, and consider a general feature set $S \subseteq U$. Since the features we are interested in are those of the image $\bm{x}$, we further denote the restriction of $\bm{x}$ to the subset $S$ as $\bm{x}_S$. To measure the robustness of the model, we rewrite the generalised model~\eqref{def1:generalised} following the evaluation criteria of~\cite{hsieh2020evaluations}. For a model on input $\bm{x}$ with the adversarial perturbation restricted to the feature set $S$, it then reads: \begin{theo}[Adversarial $\delta$ \& Robustness-$S$]{def1:robustnessS} \centering $ \epsilon^*_{\bm{x}_S} = \min_{\bm{\delta}} \| \bm{\delta}\|_{p} \quad s.t. \quad f(\bm{x}+\bm{\delta}) \neq y, \quad \bm{\delta}_{\overline{S}} =0,$ \end{theo} where $y$ is the ground truth label of image $\bm{x}$; $\| \cdot \|_{p}$ denotes the adversarial perturbation norm; and $\overline{S} = U \setminus S$ denotes the complementary set of $S$, with $\bm{\delta}_{\overline{S}} =0$ constraining the perturbation to act only on $\bm{x}_S$. We refer to $\epsilon^*_{\bm{x}_S}$ as \textbf{Robustness-$\bm{S}$}~\cite{hsieh2020evaluations}, or the minimum adversarial perturbation norm on $\bm{x}_S$.\\ We then denote the relevant features selected by the explanation methods as $S_r \subseteq U$, with the irrelevant features as its complementary set $\overline{S_r}= U \setminus S_r$.
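In practice, the minimisation over $\bm{\delta}$ can only be approximated. The following sketch (our own illustration; the function names and the binary-search-plus-masked-PGD scheme are assumptions rather than the exact protocol of~\cite{hsieh2020evaluations}) estimates $\epsilon^*_{\bm{x}_S}$ for $p=2$ by searching for the smallest radius at which a gradient attack restricted to $S$ flips the prediction.
\begin{verbatim}
import torch
import torch.nn.functional as F

def robustness_S(model, x, y, mask, eps_hi=8.0, iters=10, tol=0.05):
    """Estimate the minimal l2 norm of a perturbation supported on S
    (mask: 1 on S, 0 on its complement) that changes the predicted label.
    model maps a (1,3,H,W) image to the IVT logits; y has shape (1,)."""
    def flips(eps):
        # masked PGD: maximise the loss inside the l2 ball of radius eps
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(iters):
            loss = F.cross_entropy(model(x + delta * mask), y)
            g, = torch.autograd.grad(loss, delta)
            with torch.no_grad():
                delta += 0.5 * eps * g / (g.norm() + 1e-12)  # ascent step
                n = (delta * mask).norm()
                if n > eps:
                    delta *= eps / n                         # project onto the ball
        with torch.no_grad():
            return (model(x + delta * mask).argmax(1) != y).item()
    lo, hi = 0.0, eps_hi
    while hi - lo > tol:        # binary search over the perturbation radius
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if flips(mid) else (mid, hi)
    return hi
\end{verbatim}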
Thus, the robustness on the chosen feature sets $S_r$ and $\overline{S_r}$, tested on image $\bm{x}$, reads: {\begin{center} Robustness-$S_r =\epsilon^{*}_{\bm{x}_{S_r}}; \quad$ Robustness-$\overline{S_r} =\epsilon^{*}_{\bm{x}_{\overline{S_r}}} \thinspace.$\end{center}} \setlength{\tabcolsep}{3.9mm}{ \begin{table*}[] \centering \caption[Performance comparison]{Performance comparison for the task of triplet recognition. The results are reported in terms of Average Precision ($AP \%$) on the CholecT45 dataset using the official cross-validation split.\\[-0.3cm]} \label{1A} \begin{tabular}{ll|lllllll} \toprule \multicolumn{2}{c|}{\cellcolor[HTML]{EFEFEF}\textsc{Method}} & \multicolumn{3}{c}{\cellcolor[HTML]{EFEFEF}\textsc{Component Detection}} & & \multicolumn{3}{c}{\cellcolor[HTML]{EFEFEF}\textsc{Triplet Association}} \\ \cline{1-2} \multicolumn{1}{l|}{\textsc{Baseline}} & \textsc{Backbone} & $\quad AP_{I}$ & $\quad AP_{V}$ & $\quad AP_{T}$ & & $\quad AP_{IV}$& $\quad AP_{IT}$& $\quad AP_{IVT}$ \\ \hline \multicolumn{1}{l|}{} & ResNet-18 & $82.4 \pm 2.5$ & $54.1 \pm 2.0$ & $33.0 \pm 2.3$ & & $30.6 \pm 2.6$ & $25.9 \pm 1.5$ & $21.2 \pm 1.2$ \\ \multicolumn{1}{l|}{Tripnet} & ResNet-50 & $85.3 \pm 1.3$ & $57.8 \pm 1.6$ & $34.7 \pm 1.9$ & & $31.3 \pm 2.3$ & $27.1 \pm 2.4$ & $21.9 \pm 1.5$ \\ \multicolumn{1}{l|}{} & DenseNet-121 & $86.9 \pm 1.4$ & $58.7 \pm 1.5$ & $35.6 \pm 2.8$ & & $33.4 \pm 3.4$ & $27.8 \pm 1.8$ & $22.5 \pm 2.3$ \\ \hline \multicolumn{1}{l|}{} & ResNet-18 & $82.2 \pm 2.6$ & $56.7 \pm 3.8$ & $34.6 \pm 2.2$ & & $30.8 \pm 1.8$ & $27.4 \pm 1.3$ & $21.7 \pm 1.3$ \\ \multicolumn{1}{l|}{Attention Tripnet} & ResNet-50 & $81.9 \pm 3.0$ & $56.8 \pm 1.1$ & $34.1 \pm 1.4$ & & $31.5 \pm 2.2$ & $27.5 \pm 1.0$ & $21.9 \pm 1.2$ \\ \multicolumn{1}{l|}{} & DenseNet-121 & $83.7 \pm 3.5$ & $57.5 \pm 3.2$ & $34.3 \pm 1.3$ & & $33.1 \pm 2.4$ & $28.5 \pm 1.6$ & $22.8 \pm 1.3$ \\ \hline \multicolumn{1}{l|}{} & ResNet-18 & $85.3 \pm 1.4$ & $58.9 \pm 2.6$ & $35.2 \pm 3.4$ & & $33.6 \pm 2.6$ & $30.1 \pm 2.8$ & $24.3 \pm 2.3$ \\ \multicolumn{1}{l|}{Rendezvous} & ResNet-50 & $85.4 \pm 1.6$ & $58.4 \pm 1.4$ & $34.7 \pm 2.4$ & & $35.3 \pm 3.5$ & $30.8 \pm 2.6$ & $25.3 \pm 2.7$ \\ \multicolumn{1}{l|}{} & DenseNet-121 & $88.5 \pm 2.7$ & $61.7 \pm 1.7$ & $36.7 \pm 2.1$ & & $36.5 \pm 4.7$ & $32.1 \pm 2.7$ & $26.3 \pm 2.9$ \\ \multicolumn{1}{l|}{} & Swin-T & $73.6 \pm 1.9$ & $48.3 \pm 2.6$ & $29.2 \pm 1.4$ & & $28.1 \pm 3.1$ & $24.7 \pm 2.0$ & $20.4 \pm 2.1$ \\\bottomrule \end{tabular} \end{table*} } \setlength{\tabcolsep}{0mm}{ \begin{table*}[] \centering \caption[Heatmap on Class Activation Map visualisation]{Heatmap comparison under different feature extraction backbones. We display four randomly selected images from fold 3, using the best-performing weights trained and validated on folds 1, 2, 4 and 5.\\[-0.3cm]} \label{1B} \begin{tabular}{l} \includegraphics[width=0.98\textwidth]{results/1B.pdf} \end{tabular} \end{table*} } \setlength{\tabcolsep}{0mm}{ \begin{table*}[] \centering \caption[Top 5 predicted Triplet classes]{Top 5 predicted triplet classes in each of the 10 models. The top 5 are assessed by the ${AP}_{IVT}$ score.\\[-0.3cm]} \label{3A} \begin{tabular}{l} \includegraphics[page=1,width=0.98\textwidth]{results/1C.pdf}\\[-0.3cm] \end{tabular} \end{table*} } \setlength{\tabcolsep}{6.3mm}{ \begin{table*}[] \centering \caption[Robustness comparison]{Robustness measured on 400 examples (i.e. images) randomly selected from the images in the fold-3 videos with exactly 1 labelled triplet.
The top 25 percent of relevant ($S_r$) or irrelevant ($\overline{S_r}$) features are selected by the two explanation methods $\mbox{Grad}$ and $\mbox{IG}$. We perform the attack on the selected 25 percent. } \label{robustness-25} \begin{tabular}{l|c|llll} \toprule \cellcolor[HTML]{EFEFEF}\multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}{\textsc{\cellcolor[HTML]{EFEFEF}Attacked Features}}\\ \end{tabular}} & \cellcolor[HTML]{EFEFEF}\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}{\textsc{\cellcolor[HTML]{EFEFEF}Explanation Methods}}\\ \end{tabular}} & \multicolumn{4}{c}{{\cellcolor[HTML]{EFEFEF}\textsc{Backbones (on Rendezvous)}}} \\ \cline{3-6} & & ResNet-18 & ResNet-50 & DenseNet-121 & Swin-T \\ \hline \multirow{2}{*}{} Robustness-$\overline{S_{r}}$ & Grad & 2.599687 & 2.651435 & \textbf{3.287798} & 1.778592 \\ & IG & 2.621901 & 2.686064 & \textbf{3.319311} & 1.777737 \\ \hline \multirow{2}{*}{} Robustness-${S_{r}}$ & Grad & 2.517404 & 2.608013 & \textbf{3.188270} & 1.750599 \\ & IG & 2.515343 & 2.603118 & \textbf{3.187848} & 1.749097 \\\bottomrule \end{tabular} \\[-0.36cm] \end{table*} } \section{EXPERIMENTAL RESULTS} In this section, we describe in detail the range of experiments that we conducted to validate our methodology. \subsection{Dataset Description and Evaluation Protocol} \textbf{Dataset Description.} We use the CholecT45 dataset~\cite{nwoye2022data} to evaluate the robustness of the three SOTA models for the surgical action triplet recognition task. Specifically, the CholecT45 dataset contains 45 videos annotated with 6 instrument classes, 10 verb classes and 15 target classes (i.e. $C_{I} = 6, \thinspace C_{V} =10, \thinspace C_{T} = 15$), generating 900 ($6\times 10\times 15$) potential combinations for triplet labels. To maximise the clinical utility, we utilise the top-100 combinations of relevant labels, which are selected by removing a large portion of spurious combinations according to class grouping and surgical relevance rating \cite{nwoye2022rendezvous}. Each video contains around $2,000$ annotated frames extracted at $1$ fps in RGB channels, leading to a total of $90,489$ recorded frames. To remove redundant information, the frames captured after the laparoscope has been taken out of the body are blacked out with value $[0,0,0]$. \textbf{Evaluation Protocol.} Triplet recognition is evaluated by the average precision ($AP$) metric. Our models directly output the triplet predictions used for $AP_{IVT}$; the component scores $AP_{d}$, $d \in \{ I, V, T, IV, IT\}$, cannot be predicted explicitly. We therefore obtain the final predictions for the components $d \in \{ I, V, T, IV, IT\}$ according to \cite{nwoye2022data,nwoye2022rendezvous}: \begin{equation*} \label{d_logits} \begin{split} {\bm{{Y}_{d}}}^k = \thinspace \max_{\substack{m \in \{0,1,..,C_{IVT}-1\} \\ h_d(m)=k}} \{{\bm{Y_{IVT}}}^{m}\} , \end{split} \end{equation*} where we calculate the probability of class $k \in \{0, 1,.., C_{d}-1 \}$ under component $d$, and $h_d(\cdot)$ maps the class $m$ from the $IVT$ triplet compositions to the class under component $d$. In our robustness analysis, the main evaluation criterion is the robustness with respect to the selected feature set ($S_r$ or $\overline{S_r}$) on each backbone, using the formula in \eqref{def1:robustnessS}. \subsection{Implementation Details} We evaluate the model performance based on five-fold cross-validation, where we split the 45 full videos into 5 equal folds.
The test set is one of these 5 folds, and we treat the remaining 4 folds as the training set. Moreover, 5 videos from the 36 training videos are selected as the validation set during training. The models are trained using the Stochastic Gradient Descent (SGD) optimiser. The feature extraction backbones are initialised with ImageNet pre-trained weights. Both linear and exponential decay of the learning rate are used during training, with initial learning rates of $\{10^{-2}, 10^{-2}, 10^{-2}\}$ for the backbone, encoder and decoder parts respectively. We set the batch size to $32$ and select the epoch that performs best among all recorded epochs, up to $AP$-score saturation on the validation set of the specified fold. To reduce computational load, the input images and corresponding segmentation masks are resized from $256 \times 448$ to $8 \times 14$. For fair comparison, we ran all SOTA models (following all suggested protocols from the official repository) under the same conditions and using the official cross-validation split of the CholecT45 dataset~\cite{nwoye2022data}. \subsection{Evaluation on Downstream Tasks} In this section, we carefully analyse the current SOTA techniques for triplet recognition through the feature based explainability lens. \faHandPointRight[regular] \textbf{Results on Triplet Recognition with Cross-Validation.} As the first part of our analysis, we investigate the performance limitations of current SOTA techniques and emphasise how such limitations are linked to the lack of reliable features. The results are reported in Table~\ref{1A}. Taking a closer look at the results, we observe that ResNet-18 in general performs the worst among the compared backbones. However, in one case, component detection, it performs better than ResNet-50 under the Attention Tripnet baseline. The intuition behind such behaviour is that the MIS setting involves ambiguous conditions and, in some cases, frames might contain more spurious features that are better captured by the shallower network. We remark that the mean and standard deviation in Table \ref{1A} are calculated from the 5 folds in each combination of backbone and baseline. We also observe that ResNet-50 performs better than ResNet-18 due to the deeper feature extraction. The best performance for both tasks, component detection and triplet association, is reported by DenseNet-121. The intuition behind the performance gain is that DenseNet-121 mitigates the limited representational capability of ResNet-type networks, which are constrained by the identity shortcuts that stabilise their training. These results support our modelling hypothesis that the key to performance lies in the robustness of the deep features. A key finding in our results is that, whilst existing SOTA techniques~\cite{nwoye2022data,nwoye2022rendezvous} are devoted to developing new network mechanisms, a substantial performance improvement can be observed simply by improving the feature extraction. Moreover, and unlike for other surgical tasks, current techniques for triplet recognition are limited in performance. Why is this happening? Our results show that the key is in \textit{reliable features} (linked to robustness): enforcing more meaningful features, through better backbones, yields a significant performance improvement for all SOTA techniques. To further support our previous findings, we also ran a set of experiments using the currently popular Transformer architecture.
More precisely, a non-CNN backbone, the tiny Swin Transformer (Swin-T)~\cite{https://doi.org/10.48550/arxiv.2103.14030}, has also been tested with Rendezvous, yielding rather low $AP$ scores on all 6 components as opposed to the 3 CNN backbones. This could be caused by the shifted windows in Swin-T: while shifted windows largely reduce the computational cost, they could bias feature attribution within window boundaries; the incoherent spreading can be seen clearly in the visualisation of the detected relevant features for Swin-T in Fig.~\ref{robustness-analysis}(a). In Table~\ref{1A} we display the average results over all classes, but what behaviour can be observed from the per-class performance? It can be seen from Table~\ref{3A} that, although the best 5 predicted classes differ across models, the predicted compositions seem clinically sensible, supporting our previous discussion. In addition, the top-1 per-class $AP$ score is significantly higher for DenseNet-121 with Rendezvous. \faHandPointRight[regular] \textbf{Visualisation Results.} Interpreting features is far from trivial. To address this issue, we provide a human-interpretable comparison via heatmaps in Table~\ref{1B}. The implementation of the heatmaps is adapted from \cite{zhou2016learning}. The displayed outputs reflect what the model is focusing on, based on the extracted features. These results support our hypothesis that deep features, more than any new network mechanism, are the key to making correct predictions. We observe that for the worst-performing backbone, Swin-T, the extracted features are mostly spread across the images; however, the backbones that concentrate only on core attributes are not the ones that perform best. For the best-performing DenseNet-121, a reasonable amount of attention is also paid to spurious attributes; this can be seen more directly in our later discussion of the robustness visualisation in Fig.~\ref{robustness-analysis}. The reported probability of the predicted label emphasises again the outstanding performance of the DenseNet-121 backbone, in the sense that a higher probability for the correct label and a lower probability for an incorrect prediction are both better. \faHandPointRight[regular] \textbf{Why Surgical Triplet Recognition Models Fail? Robustness and Interpretability.} We further support our findings through the lens of robustness. We use as evaluation criteria Robustness-$S_r$ and Robustness-$\overline{S_r}$ with two explanation methods: the vanilla gradient (Grad) \cite{shrikumar2017learning} and the integrated gradient (IG) \cite{sundararajan2017axiomatic}. The results are shown in Table~\ref{robustness-25} and Fig.~\ref{robustness-analysis}. \setlength{\tabcolsep}{0mm}{ \begin{figure*}[] \centering \begin{tabular}{l} \includegraphics[page=1,width=1\textwidth]{results/2B.pdf}\\ \includegraphics[page=2,width=1\textwidth]{results/2B.pdf} \end{tabular} \caption[Visualisation of Robustness]{The set of figures shows the robustness analysis on randomly selected images: a. the visualisation of the top 15 percent of important features selected by the two explanation methods, Grad and IG; b. (resp. d.) the robustness measured on the relevant $S_{r}$ (resp. irrelevant $\overline{S_{r}}$) features selected by the two explanation methods, against the percentage of top features defined as relevant; c.
the comparison of the robustness across the 4 backbones embedded in the Rendezvous baseline.\\[-0.5cm]} \label{robustness-analysis} \end{figure*} } \subsubsection{Comparison between different backbones} In Table~\ref{robustness-25} we show the robustness results when attacking the top $25\%$ of features, averaged over $400$ randomly chosen frames with exactly one labelled triplet. On one hand, we observe that the DenseNet-121 backbone consistently outperforms the other network architectures on both evaluation criteria, Robustness-$S_r$ and Robustness-$\overline{S_r}$. This suggests that the DenseNet-121 backbone captures explanation characteristics that are ignored by the other network backbones. On the other hand, our results are consistent with the findings in \cite{hsieh2020evaluations}: IG performs better than Grad, and attacking relevant features yields lower robustness than perturbing the same percentage of irrelevant features. \subsubsection{Robustness explanation for specific images} To more objectively evaluate the robustness explanation for specific images, we show: (a) the visualisation of important features, (b) Robustness-$\overline{S_r}$, (c) the robustness against the percentage of top features, and (d) Robustness-$S_r$ in Fig.~\ref{robustness-analysis}. In Fig.~\ref{robustness-analysis}(a), we visualise the top $15\%$ of features (yellow dots) selected by Grad and IG, respectively, and overlay them on manually labelled regions containing the instrument (in red) and the target (in green). We observe that the best-performing backbone on a specific image (as seen from the robustness comparison curves in Fig.~\ref{robustness-analysis}(c)) is the one that pays attention not only to \textit{core attributes, but also to spurious attributes}. In the image VID08-000188 the best-performing model is ResNet-18, which illustrates the ambiguous conditions on individual images. Taking a closer look at Fig.~\ref{robustness-analysis}(a), a small portion of the most relevant features extracted by ResNet-18 is spread away from the close surroundings of the object area. This importance of spurious attributes is further highlighted in image VID18-001156. We observe that DenseNet-121 provides the most robust result, highlighting relevant features within the tissue region and across the tool tip. The worst-performing model, ResNet-18, merely treated the core attributes as relevant. The relevant role of spurious attributes can be explained by the nature of the triplet, which contains a verb component that is not a physical object. Overall, we observe that reliable deep features are the key for robust models in triplet recognition. Moreover, unlike existing works on robustness against spurious features, we observe that both core and spurious attributes are key for the prediction. \section{CONCLUSION} We present the first work to understand the failure of existing deep learning models for the task of triplet recognition. We provide an extensive analysis through the lens of robustness. The significance of our work lies in understanding and addressing the key issues associated with the substantially limited performance of existing techniques. Our work offers a step towards more trustworthy and reliable models. \section*{ACKNOWLEDGEMENTS} YC and AIAR greatly acknowledge support from a C2D3 Early Career Research Seed Fund and CMIH EP/T017961/1, University of Cambridge.
CBS acknowledges support from the Philip Leverhulme Prize, the Royal Society Wolfson Fellowship, the EPSRC advanced career fellowship EP/V029428/1, EPSRC grants EP/S026045/1 and EP/T003553/1, EP/N014588/1, EP/T017961/1, the Wellcome Innovator Awards 215733/Z/19/Z and 221633/Z/20/Z, the European Union Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 777826 NoMADS, the Cantab Capital Institute for the Mathematics of Information and the Alan Turing Institute. \bibliographystyle{IEEEtran}
{ "timestamp": "2022-09-20T02:21:19", "yymm": "2209", "arxiv_id": "2209.08647", "language": "en", "url": "https://arxiv.org/abs/2209.08647" }
\section{Introduction} More than half a century ago, Cook and Levin inaugurated the field of NP-completeness. The fact that the Constraint Satisfaction Problem (CSP) is NP-complete has been the cornerstone of our understanding and approach to important optimization problems arising in countless applications. The NP-completeness of deciding whether a CSP instance is satisfiable also plays an important role in physics. This is because constraint satisfaction corresponds to a classical local Hamiltonian which expresses the total energy of a system of particles; the energy is the sum of terms, each of which describes the energy interaction between constant-sized clusters of particles. Deciding whether the lowest energy of such systems is below some threshold or above it is a special class of CSP, and was famously shown to be NP-complete in many cases by Barahona and others \cite{B82,I00}. The theory of NP-completeness has a natural generalization in the quantum setting; the Cook-Levin theorem was generalized by Kitaev \cite{KSV02} to show that the following problem is QMA-complete: Given a local Hamiltonian with quantum energy interactions describing the energy in a quantum many-body system, decide whether the ground energy is above some value or below another; this problem has been intensely studied in recent years \cite{KSV02, Gharibian_2015, OT05, Aharonov_2009}. A natural variant of the NP-complete decision CSP is the function version of the CSP. In this version, the question is not whether all constraints can be satisfied, but what is the {\it maximum number} of constraints that can be satisfied by any assignment. One could also consider a weighted variant, as we do here, where the goal is to compute the cost of the optimal assignment which minimizes the weighted sum of violated constraints. Computing the cost of the optimal solution is in fact the more natural version of CSP problems in a large number of combinatorial applications (to give just two examples, max-cut and max independent set). Importantly, in classical physics, when considering the local Hamiltonian corresponding to a CSP, the function version of the problem is in fact the problem of finding the lowest possible energy for the Hamiltonian over all possible states -- one of the most important notions in physics. In the quantum case, the function version corresponds to providing an estimate of the lowest energy of the system over all possible quantum states, which is one of the main subjects of interest in condensed matter physics. What is known, from the computational complexity angle, about this physically motivated question, the function version of CSPs? In 1988, Krentel \cite{Krentel} proved that the function problem for constraint satisfaction is $\mbox{FP}^{\mbox{NP}}$-complete. Krentel's proof is technically significantly more involved than that of the Cook-Levin theorem, which characterizes the complexity of the decision variant of CSPs. Furthermore, and in stark contrast to the theory of decision problems and NP-completeness, the function version of CSP seems to have received less attention in the TCS literature. From the physics point of view, an additional point becomes extremely important. By and large, physicists study local Hamiltonians, be they classical or quantum, in a {\em translationally invariant} (TI) setting. In this setting the particles are located at the vertices of a geometric lattice and all the terms acting on adjacent pairs of particles along a particular dimension are the same.
In particular, the model most relevant to physics is TI in a very strong sense: the dimension of the individual particles and the Hamiltonian term acting on each pair of adjacent particles in a lattice are fixed parameters of the problem. When considering finite systems, the only input is an integer $N$ indicating the size of the system. This set-up corresponds to the fact that in physics, different Hamiltonians represent completely different physical systems. Thus, studying the ground energy (or some other quantity) in the AKLT model is considered to be a completely different problem than studying the same quantity in, say, the Ising model\footnote{Some of the recent work on the quantum TI local Hamiltonian problem \cite{Bausch_2017} adopts a weaker notion in which the input also includes the Hamiltonian term that is applied to each pair of particles, allowing the Hamiltonian to be tuned to the size of the system. This model is mainly considered in quantum Hamiltonian complexity, but it has not been a topic of study in physics.}.

For decision problems, Gottesman and Irani \cite{GI} show hardness results for both quantum and classical TI Hamiltonians. Since the size of the grid can be given by logarithmically many bits, and there is no other input, one encounters an exponential factor compared to the common CSP problems; thus the results in \cite{GI} show $\mbox{NEXP}$ and $QMA_{EXP}$ completeness for the classical and quantum variants of the problem, respectively. We note that a closely related line of work studies TI infinite systems \cite{AI21, CW21, C15} and considers the computability and computational complexity of decision problems in that domain, namely in the so-called thermodynamic limit. Although the focus here is on finite systems, constructions for finite systems have played an important role in the results in the thermodynamic limit. In particular, all the results in \cite{AI21, CW21, C15} use a finite construction layered on top of a certain type of aperiodic tiling of the infinite grid.

To the best of our knowledge, the computational complexity of function CSPs in the TI setting has remained open. In this paper we provide a tight characterization of its complexity, and show that the function version of TI CSP on a $2$-dimensional grid is complete for $\mbox{FP}^{\mbox{NEXP}}$. This result thus strengthens Krentel's construction for general CSPs to apply even to TI systems in two and higher dimensions. The result is also a generalization of the result of Gottesman and Irani, who prove hardness for 2D TI systems for the standard decision problem, where one only needs to determine whether the ground energy is below a given threshold.

One of the key technical challenges in our result is to effectively create large ($\Theta(N^{\epsilon})$) costs on an $N \times N$ grid using only two constant-sized terms which apply in the horizontal and vertical directions. Thus, as a stepping stone to the more complex result on the function version of TI 2D CSPs, we prove a fault-tolerance result which we believe is of interest on its own, namely that it is $\mbox{NEXP}$-complete to even approximate the ground energy to within an additive $\Theta(N^{1/4})$.

\subsection{Problem Definitions, Results and Main Challenges} It is most convenient to present our results using the language of the weighted tiling problem, where we focus here on the two-dimensional case\footnote{Our version of tiling is equivalent to the more common Wang tiles \cite{Wang60}.}.
In this tiling problem, one is asked to tile an $N \times N$ 2D grid with a set of $1 \times 1$ tiles. The tiles come in different colors and only some pairs of colors can be placed next to each other in either the horizontal or vertical directions. More precisely, a set of tiling rules ${\cal{T}}$ is a triple $(T, \delta_H, \delta_V)$, where $T$ is a finite set of tile {\em types} $T = \{t_1, \ldots, t_d\}$, and $\delta_H$ and $\delta_V$ are functions from $T \times T$ to $\mathbb{Z}$. For $(t, t') \in T \times T$, $\delta_H(t,t')$ is the cost of putting a tile of type $t$ immediately to the left of a tile of type $t'$, and $\delta_V(t,t')$ is the cost of putting a tile of type $t$ immediately above a tile of type $t'$. Let $\lambda_0({\cal{T}}(N))$ be the minimum cost of tiling an $N \times N$ grid with tiling rules ${\cal{T}}$. The goal is to tile the grid with minimal total cost. Note that this problem is directly analogous to a classical Hamiltonian in 2D.

We first define a function version of the problem. \begin{definition} {\sc ${\cal{T}}$-FWT (Function Weighted Tiling)} \item \textbf{Input:} An integer $N$ specified with $\lfloor \log N + 1 \rfloor$ bits \item \textbf{Output:} $\lambda_0({\cal{T}}(N))$ \end{definition} \begin{theorem} \label{th-function} {\bf {(Main)}} There exists a set of tiling rules ${\cal{T}}$ such that ${\cal{T}}$-FWT is $\mbox{FP}^{\mbox{NEXP}}$-complete. \end{theorem}

We note that the fact that the function problem is complete for 2D immediately implies that it is complete for any grid of dimension at least $2$, since the 2D construction can be embedded into a higher-dimensional grid. The 1D CSP case is poly-time computable using dynamic programming.

The upper bound in Theorem \ref{th-function} is easy: it can be achieved by binary search with access to an oracle for the decision problem. For the lower bound, one encounters a challenge. The reduction must encode in the tiling rules the computation of a polynomial time Turing Machine with access to an $\mbox{NEXP}$ oracle. If an instance given to the oracle is a {\em yes} instance, the computation of the verifier can be encoded into the tiling rules. However, {\em no} instances cannot be directly verified in this way. Krentel's proof that the function problem of weighted SAT is $\mbox{FP}^{\mbox{NP}}$-complete \cite{Krentel} overcomes this challenge; let us recall it and then explain the difficulty in carrying it over to the TI setting.

Krentel uses an accounting scheme \cite{Krentel,Papa} that applies a cost to every string $z$ representing guesses for the sequence of responses to all the oracle queries made. The accounting scheme needs to ensure that the minimum cost $z$ is equal to the correct sequence of oracle responses, $\bar{z}$. The {\em yes} and {\em no} guesses are treated differently, due to the fact that the verifier can check {\em yes} instances (and thus incorrect {\em yes} guesses can incur a very high cost), but {\em no} guesses cannot be directly verified. In Krentel's scheme, {\em no} guesses incur a more modest cost, whether correct or not, and their costs must decrease exponentially with the index of the query. This is because the oracle queries are adaptive; an incorrect oracle response could potentially change all the oracle queries made in the future, and so it is important that the penalty for an incorrect guess on the $i^{th}$ query is higher than the cost that could potentially be saved on all future queries.
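To see the accounting scheme in action, consider the following minimal sketch. The toy oracle \texttt{in\_L}, the adaptive query rule \texttt{next\_query}, and the constants are illustrative assumptions, not the construction used in this paper.

\begin{verbatim}
from itertools import product

N_Q = 4                              # number of oracle queries (toy value)
BIG = N_Q * 2 ** N_Q                 # prohibitive cost for a wrong 'yes'

def in_L(query):                     # toy stand-in for the NEXP oracle
    return query % 3 == 0

def next_query(prev, guess):         # adaptive: the next query depends on
    return 2 * prev + guess + 1      # all guesses made so far

def cost(z):
    """Accounting-scheme cost of a guessed response string z."""
    total, q = 0, 0
    for j, guess in enumerate(z, start=1):
        if guess == 0:
            total += 2 ** (N_Q - j)  # every 'no' pays; cost shrinks with j
        elif not in_L(q):
            total += BIG             # a wrong 'yes' is caught by the verifier
        q = next_query(q, guess)
    return total

truth, q = [], 0                     # the true response string
for _ in range(N_Q):
    b = int(in_L(q)); truth.append(b); q = next_query(q, b)

best = min(product([0, 1], repeat=N_Q), key=cost)
assert list(best) == truth           # minimum cost selects the true responses
\end{verbatim}

The invariant is exactly the one described above: relative to the true string, flipping a true {\em yes} at query $j$ to a {\em no} adds $2^{\overline{n}-j}$, strictly more than the at most $2^{\overline{n}-j}-1$ that could be saved on all later queries, while a false {\em yes} is caught by the verifier at prohibitive cost.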
The weights on clauses that implement this accounting scheme are multiplied by a large power of two to ensure that they are the dominant factor in determining the optimal assignment. The difficulty in applying Krentel's accounting scheme in the TI setting is that the costs must grow with the size of the input. Therefore, it is not possible to encode the costs directly in the tiling rules, which are of fixed constant size. A natural attempt to circumvent the problem is to accumulate the required large penalty over many tiles, each of which acquires a constant penalty; however, the problem in implementing this approach is that Cook-Levin type reductions from computations to tilings are very brittle, as a single error can potentially derail the entire computation. For example, imagine inserting a row that does not have a Turing Machine head. There will be a single fault where the head disappears from one row to the next, but every row thereafter will contain the unchanging contents of the Turing Machine tape, without a head to execute the next step. This poses a challenge, since when enforcing large costs by using many tiles, or constraints, we need to make sure that many of these constraints are indeed violated in order to incur the required large penalty.

We provide a construction which circumvents this issue by exhibiting some {\it fault tolerance} properties. We thus prove what can be viewed as a gapped version or a hardness of approximation result, which is then a natural stepping stone to implementing the more intricate function required in Krentel's accounting scheme. To this end we define an approximation version of weighted tiling:

\begin{definition} {\sc $({\cal{T}},f)$-GWT (Gapped Weighted Tiling)} \item \textbf{Input:} An integer $N$ specified with $\lfloor \log N + 1 \rfloor$ bits. Two integers $a$ and $b$ such that $b-a \ge f(N)$. \item \textbf{Output:} Determine whether $\lambda_0({\cal{T}}(N)) \le a$ or $\lambda_0({\cal{T}}(N)) \ge b$. \end{definition}

\begin{theorem} \label{th-gapped} There exists a set of tiling rules ${\cal{T}}$ such that $({\cal{T}},f)$-GWT is $\mbox{NEXP}$-complete for a function $f(N) = \Omega(N^{1/4})$. \end{theorem}

This shows that it is $\mbox{NEXP}$-hard to even approximate the cost of the optimal tiling to within an additive error that is $\Omega(N^{1/4})$. This can be viewed as a gapped version of the results of \cite{GI}; the proof constructs a reduction mapping the computation into a tiling such that even in the presence of $O(N^{1/4})$ faults, the computation encoded by the tiling is able to proceed and produce approximately correct results.

Theorem \ref{th-gapped} is of potential interest on its own. It might resemble a PCP type result, but the model we consider differs from the standard PCP setting in two ways: the first is that the underlying graph is a grid, rather than a graph with much higher connectivity, and the second is translational invariance. It is not possible to obtain a hardness of approximation result with an additive error that is linear in the total number of constraints, $\Theta(N^2)$ (as one has in the PCP theorem), on any finite dimensional lattice, because such graphs do not have the necessary expansion properties. For example, in 2D, one could divide the grid into $b \times b$ squares for $b = \Theta(\sqrt{\log N})$ and solve each square optimally in polynomial time, since each square has only $2^{O(b^2)} = \mbox{poly}(N)$ possible tilings. Only the $O(N^2/b)$ constraints crossing the boundaries between squares can then be violated, so the resulting solution would be within an additive $O(N^2/ \sqrt{\log N})$ of the optimal solution.
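For concreteness, the counting behind this observation can be sketched as follows; the block size and constants here are illustrative, not tuned.

\begin{verbatim}
import math

def divide_and_conquer_error(N):
    """Additive error of the block-by-block solver on an N x N grid:
    solve each b x b square optimally (2^{O(b^2)} = poly(N) candidate
    tilings per square) and charge only the constraints that cross
    square boundaries -- roughly 2 N^2 / b of them."""
    b = max(2, math.isqrt(round(math.log2(N))))   # b ~ sqrt(log N)
    return 2 * N * (N // b)

# grows like N^2 / sqrt(log N), i.e. o(N^2): a PCP-style gap of
# Theta(N^2) is therefore impossible on the grid
for N in (2 ** 8, 2 ** 12, 2 ** 16):
    print(N, divide_and_conquer_error(N), N * N)
\end{verbatim}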
To the best of our knowledge, no gapped version was proven before for a CSP problem set on a constant-dimensional grid, even without the TI restriction.

Finally, our results provide a tight characterization of the complexity of the following decision problem:

\begin{definition} {\sc ${\cal{T}}$-PWT (Parity Weighted Tiling)} \item \textbf{Input:} An integer $N$ specified with $\lfloor \log N + 1 \rfloor$ bits \item \textbf{Output:} Determine whether $\lambda_0({\cal{T}}(N))$ is odd or even. \end{definition}

We show that there exists a set of tiling rules ${\cal{T}}$ such that ${\cal{T}}$-PWT is $\mbox{P}^{\mbox{NEXP}}$-complete; the proof is very similar to the proof of Theorem \ref{th-function}. The result on Parity Weighted Tiling illustrates that decision problems related to CSP can be complete for an oracle class just like the function problem. The crucial difference between the threshold decision problem (is the cost of the optimal solution less than $t$?), which is $\mbox{NEXP}$-complete, and the parity problem, which is $\mbox{P}^{\mbox{NEXP}}$-complete, is that the parity problem still seems to require determining the optimal cost. This seems to make the characterization of its complexity as challenging as for the function version of the problem.

{~}

\noindent{\bf Organization of the remainder of the introduction:} We next proceed to an overview of the proofs. We start with the setup of tiling rules and layers in Subsection \ref{sec-buildingBlocks}, then give overviews of the proofs of the theorems, and end with related work and open questions in Subsection \ref{sec-discussion}.

\subsection{Tiling Rules and Layers} \label{sec-buildingBlocks}

We assume that there is a special tile denoted by $\Box$ which must be placed around the perimeter of the grid to be tiled. Moreover, no $\Box$ tile can be placed in the interior of the grid. We will return later to enforcing this condition in the context of the different problems. The tiles in the interior will be composed of multiple layers, where each layer has its own set of tile types. A tile type for an internal tile in the overall construction is described by a tile type for each of the layers.

For ease of exposition, we allow our tiling rules to also apply to local {\it squares} of four tiles. This can then easily be translated to two-local constraints on tiles, as in our definition of the tiling problem. This simple transition is described more fully in Subsection \ref{sec-tiling}. For the remainder of the paper our tiling rules include constraints on local squares of four tiles, as well as pairs of horizontal tiles. If the four tiles in a square are all interior tiles, then each possible pattern of four square tiles within a layer will be designated as legal or illegal. The overall cost of placing four interior tiles together in a local square will be a function of whether the square for each layer is legal or illegal. For Gapped Weighted Tiling, the cost will be just the number of layers for which the square pattern is illegal. For Function Weighted Tiling and Parity Weighted Tiling, illegal squares at different layers will contribute different amounts to the cost.

In general, a no-cost tiling of each layer represents a computational process where each row represents the state of a Turing Machine. The computation reverses direction from one layer to the next. The rows of a tiling of an $N \times N$ grid will be numbered $r_0$ through $r_{N-1}$ from bottom to top. When referring to the rows in a particular layer, we will exclude the border rows and order the rows according to the computation direction.
So the first row of Layer $1$, which proceeds from bottom to top, is row $r_1$, and the last row of Layer $1$ is $r_{N-2}$. Layer $2$ proceeds from top to bottom, so the first row for Layer $2$ is $r_{N-2}$ and the last row is $r_1$.

For the most part, the rules governing the tiling apply to the tile types within each individual layer. The different layers only interact at the lower and upper borders of the grid. This is how the output of one process (on Layer $i$) is translated into the input for the next process (on Layer $i+1$). For example, a square may be illegal if the two lower tiles are $\Box~\Box$, and the two upper tiles violate certain constraints between the Layer $i$ and Layer $i+1$ types. Some of the layers will also have additional constraints on which tiles can be next to each other in the horizontal direction. Each type of violated constraint is given a name, described below.

\begin{definition} {\bf [Faults in a Tiling]} An occurrence of any of the illegal patterns described in the constructions is called a {\em fault}. A tiling with no faults will correspond to a {\em fault-free} computation. \end{definition}

There will be some additional costs (described later) associated with a computation ending in a rejecting state. These are not considered faults because they can happen in correct computations. Figure \ref{fig-squareTypes} illustrates the different types of tiling constraints.

\begin{description} \item{\bf Illegal Computation Squares:} For each layer, every pattern of four tile types will be designated as a {\em legal computation square} or an {\em illegal computation square}. In general, these rules enforce that the tiling within the layer represents a consistent execution of a Turing Machine. We describe in Subsection \ref{sec-TM2Tile} how to translate the rules of a Turing Machine into legal and illegal computation squares. \item{\bf Illegal Pairs:} Some of the layers will have additional constraints on which tiles can be placed next to each other in the horizontal direction. Each ordered pair of tile types for that layer will be designated as a {\em legal pair} or an {\em illegal pair}. \item{\bf Illegal Initialization Squares:} For each layer, there are also some initialization rules that constrain the initial configuration of the Turing Machine. If the layer runs bottom to top, then these rules apply to $r_0$, which consists of all $\Box$ tiles, and the first row of the layer. For example, if tile $t_1$ cannot be immediately to the left of $t_2$ in the first row of Layer $i$, then the square with $\Box~\Box$ directly below $t_1~t_2$ is an {\em illegal initialization square} for Layer $i$. If the Turing Machine for the layer runs top to bottom, then the square with $\Box~\Box$ directly above $t_1~t_2$ in Layer $i$ is illegal. \item{\bf Illegal Translation Squares:} Finally, we add rules that control how the last row of Layer $i$ is translated to the first row of Layer $i+1$. If Layer $i$ runs top to bottom, then the rules apply to rows $r_0$ and $r_1$. For example, if tile $t$ in Layer $i$ cannot be translated to $t'$ in Layer $i+1$, then any square with a $\Box$ directly below a tile whose Layer $i$ type is $t$ and whose Layer $i+1$ type is $t'$ would be illegal. The translation rules can also apply to pairs of adjacent tiles. E.g., it could be illegal to have a square whose bottom two tiles are $\Box~\Box$ and whose top two tiles have $t_1~t_2$ in Layer $i$ and $t_3~t_4$ in Layer $i+1$.
\end{description}

\begin{figure}[ht] \centering \begin{subfigure}{0.14\textwidth} \centering \includegraphics[width=\textwidth]{IllegalSquares.png} \caption{Computation.} \label{fig-squares} \end{subfigure}% \hspace{.4in} \begin{subfigure}{0.14\textwidth} \centering \includegraphics[width=\textwidth]{IllegalPairs.png} \caption{Illegal pairs.} \label{fig-pairs} \end{subfigure} \hspace{.4in} \begin{subfigure}{0.14\textwidth} \centering \includegraphics[width=\textwidth]{TranslationRules.png} \caption{Translation.} \label{fig-trans} \end{subfigure}% \hspace{.4in} \begin{subfigure}{0.14\textwidth} \centering \includegraphics[width=\textwidth]{InitializationRules.png} \caption{Initialization.} \label{fig-init} \end{subfigure} \caption{\small{Interior tiles have four layers. Border tiles have one layer and are labeled with the $\Box$ symbol. (a) An illegal computation square for Layer $2$. The constraint applies to the four tile types for Layer $2$ shown in gray. (b) An illegal pair for Layer $2$. The constraint applies to the two adjacent tile types for Layer $2$ shown in gray. (c) An illegal translation square from Layer $2$ to Layer $3$. The constraint applies to two border tiles and the tile types for Layers $2$ and $3$ for the other two interior tiles. (d) An illegal initialization square for Layer $2$. The constraint applies to two border tiles and the Layer $2$ tile types for the other two interior tiles.}} \label{fig-squareTypes} \end{figure}

\subsection{Proofs Overview}

\noindent {\bf Theorem \ref{th-gapped}: Gapped Weighted Tiling} \label{sec-introgapped}

Recall that the standard encoding of a Turing Machine into tiling rules is very brittle, in that a single fault can derail the entire computation. The most straightforward way to overcome this is to use a construction which embeds many repetitions of the computation, so that many faults would be required to derail a large number of those computations. Multiple computations thus need to be set up and initiated, using a single Turing Machine, implemented with TI rules and subject to faults. In our construction, this is achieved by a first stage of the computation (implemented in Layer $1$, as we describe below), which, roughly, creates intervals in the top row of Layer $1$, such that the independent repetitions of the computations will occur in different strips on the grid; the boundaries of the strips are determined by those intervals. The difficulty is how to implement the initial set-up using a single Turing Machine, in a fault-tolerant way. We now describe the details.

The tiling rules for the first two layers, as well as the reduction mapping $x$ to $N$, are independent of the language $L \in \mbox{NEXP}$ from which we are reducing. Let $V$ denote the exponential time verifier for $L$. Subsection \ref{sec-TM2Tile} describes the details of how a Turing Machine computation is encoded in a set of tiling rules. In general, tiles will be either {\em tape} tiles, which encode a single symbol from the Turing Machine's tape alphabet, or {\em head} tiles, which encode both the state of the Turing Machine and the current tape symbol to which the head is pointing. The Turing Machine computation represented in Layer $1$ starts with two non-blank symbols and proceeds to write a sequence of intervals on the tape, where an interval is a sequence of $B$ symbols bracketed on either side by a {\em delimiter} tile from the set $\{X, \overline{X}, \vartriangleleft, \vartriangleright\}$.
The Turing Machine just repeatedly executes a single loop which we refer to as the {\em Outer Loop}. In one iteration of the Outer Loop, an additional $B$ symbol is inserted into every interval and a new interval with no $B$'s in the middle is added to the right end of the non-blank symbols. In a fault-free execution of the Turing Machine, after $m$ iterations of the loop, there are $m+1$ intervals. The number of symbols in each interval (including the delimiter tiles on either end) is $m+2, m+1, m, \ldots, 2$. For $m = 3$, the row should look like:

\vspace{.1in} \begin{center} \noindent \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $\Box$ & $(q/\vartriangleleft)$ & B & B & B & X & B & B & X & B & X & $\vartriangleright$ & \# & \# & $\cdots$ & \# & $\Box$ \\ \hline \end{tabular} \end{center} \vspace{.1in}

When the top row of Layer $1$ is translated to Layer $2$, the head tile for the Layer $1$ Turing Machine is translated to a tape tile (so the state information is lost) and a head tile is inserted at the left end of every interval. For example, an interval $X~B~B~B~\cdots~B~X$ at the end of Layer $1$ is translated to $X~(q_s/S)~B~B~\cdots~B~T~X$ in the first row of Layer $2$. In Layers $2$ and $3$, the sizes and locations of the intervals do not change within a row unless the interval contains an illegal square. Thus, a single interval over all the rows of Layer $2$ forms a vertical strip of tiles, and a separate, independent computation takes place within each strip. See Figure \ref{fig-strips} for an example. Once the intervals are created on Layer $1$, each computation on Layers $2$ and $3$ is fault-free unless the strip contains an illegal square. Thus, the number of illegal squares is at least the number of strips that fail to complete their computation correctly.

In Layer $2$, the computation is just a binary counter Turing Machine that continually increments a binary counter. All the strips that do not contain an illegal square will have the same string $x$ represented in the final row of Layer $2$. The string $x$ then serves as the input to the computation in Layer $3$. The binary counter Turing Machine in Layer $2$ runs for exactly $N-3$ steps. The reduction is the function that maps $x$ to $N$, where the string $x$ is written on the tape of a binary counting Turing Machine after $N-3$ steps. Lemma \ref{lem-bctm} gives an exact formula mapping $x$ to $N$ and shows that the value of the number represented by the string $x$ is $\Theta(N)$, the dimension of the grid. The idea of using a binary counting Turing Machine to translate the size of the grid into a binary input for a computation was used previously in \cite{GI}, although since the construction in \cite{GI} had a gap of $1$, only a single execution of the verifier was needed. Since we are trying to produce a gap of $f(N)$, we need at least $f(N)$ separate computations, each of which simulates the verifier on input $x$.

Each interval $X~(q_s/S)~x~B~\cdots~B~T~X$ is translated unchanged to Layer $3$. The computation in each strip in Layer $3$ simulates the verifier on input $x$ using a witness that is guessed in the tiling. There is a final cost for any rejecting computation. If $x \in L$, it will be possible to tile each strip at $0$ cost. If $x \not\in L$, every strip will contain an illegal square or will incur a cost for the correct rejecting computation. Thus, the gap is essentially created by these parallel computations, each of which contributes a constant cost if $x\not \in L$.
Since the sizes of the intervals go down to $0$, some of the intervals will be too narrow to complete the computation in either Layer $2$ or $3$. If the head ever hits the right end of its interval, it transitions to an infinite loop, causing no additional cost. A standard padding argument (Claim \ref{claim-pad}) guarantees that an interval need only be $\Theta(N^{1/4})$ wide to complete the computations in Layers $2$ and $3$. The analysis of Layer $1$ then needs to guarantee that, despite the faults, there will be sufficiently many sufficiently wide intervals.

The main challenge in the proof is in making the computation in Layer $1$ fault-tolerant, meaning that each illegal pair or square cannot derail the computation too much. The horizontal rules in Layer $1$ are critical for enforcing that this cannot happen. We show that a row in the tiling that has no illegal pairs corresponds to a sensible configuration of the Turing Machine. In particular, such a row has exactly one head tile, which lies between the $\vartriangleleft$ and $\vartriangleright$ tiles. Note that faults can still alter the computation in potentially strange ways. Nonetheless, we also show that starting from a row with no illegal pairs, the Layer $1$ Turing Machine will be able to make progress, and after a sequence of fault-free steps (corresponding to a sequence of rows containing no illegal squares), the computation will perform a complete iteration of the loop. Since the number of illegal pairs and squares is bounded by $O(N^{1/4})$, there are enough complete iterations of the loop to ensure that the last row of Layer $1$ has enough intervals that are wide enough to complete the computations in Layers $2$ and $3$.

By far the most technically involved part of the paper is the analysis of Layer $1$ described in Section \ref{sec-L1analysis}. All of the results use the final Lemma \ref{lem-analysisL1}, which gives a tight characterization of the difference between the final row in Layer $1$ of a fault-free tiling and the final row of a tiling with faults. In fact, the result on Gapped Weighted Tiling could be established with looser bounds, but we provide the analysis once in Section \ref{sec-L1analysis} in a form that can be used for all the results in the paper. Subsection \ref{sec-introfunction} describes more fully how this tight characterization is accomplished. A more in-depth overview is then given at the beginning of Section \ref{sec-L1analysis}.

{~}

\noindent{\bf Theorem \ref{th-function}: Function Weighted Tiling} \label{sec-introfunction}

The hardness reduction for Function Weighted Tiling reduces from an oracle class. The function $f$ is computed by a polynomial time Turing Machine $M$ with access to an oracle for a language $L' \in \mbox{NEXP}$. Let $V$ denote the exponential time verifier for $L'$. Using a standard padding argument (see for example Lemma 2.30 from \cite{AI21}), we can assume that for a constant $c$ of our choice, for every $|x| = n$, there is an $\overline{n} \le cn$ such that the size of $f(x)$ is at most $\overline{n}$, and $M$ makes at most $\overline{n}$ oracle calls to $L'$. Let $z$ denote an $\overline{n}$-bit string representing the responses to the oracle queries made on input $x$. With $x$ and $z$ fixed, the set of inputs to the oracle $(o_1, \ldots, o_{\overline{n}})$ is also determined. $V(o_j)$ is an indicator function denoting whether $o_j$ is in $L'$.
Note that since $L'$ is in $\mbox{NEXP}$, if $V(o_j) = 1$, there exists a witness that will cause the verifier to accept, and if $V(o_j) = 0$, $V$ will always reject regardless of the witness. Define:
$${\cal{C}}(x,z) = \sum_{j=1}^{\overline{n}} \left[ (1 - z_j)\cdot 2^{\overline{n} - j} + z_j \cdot (1-V(o_j)) \cdot 2^{\overline{n}} \right]$$
Let $f(x,z)$ be the output of Turing Machine $M$ on input $x$ with oracle responses $z$. Note that since $|f(x,z)| \le \overline{n}$, $f(x,z) \le 2^{\overline{n}}$. The construction will ensure that the minimum cost tiling for a particular $x$ and $z$ will be $2^{\overline{n}+5} {\cal{C}}(x,z) + 2^3 \cdot f(x,z)$. Note that ${\cal{C}}(x,z)$, represented in the $\overline{n}$ high-order bits of the cost, has the necessary structure, in which the costs for a {\em no} oracle response decrease exponentially in $j$, the index of the oracle query. The cost for a {\em yes} guess will be $0$ if the input to the oracle $o_j$ is in fact in $L'$ (i.e., $V(o_j)=1$) and will be a very large cost of $2^{2\overline{n}+5}$ if $o_j$ is not in $L'$. This function will guarantee that the overall cost is minimized when $z$ is the correct string of oracle responses. In addition, the low-order bits encode the output of the function $f(x,z)$. So if the minimum cost tiling can be computed, this will correspond to $f(x)$, which is $f(x, \bar{z})$, where $\bar{z}$ is the string of correct oracle responses. The factor of $8$ ensures that even if the minimum cost is off by $\pm 3$, the value of $f(x)$ can still be recovered.

So far, what we have described just implements the original accounting scheme devised by Krentel. The challenge is to implement this cost function in 2D with TI terms. Note that since the tiling rules are fixed parameters of the problem, it is not possible to encode the cost function directly into the penalty terms. As with the Gapped Weighted Tiling problem, the function is collectively computed by a set of parallel processes, one within each strip created by the intervals from Layer $1$. However, instead of a threshold function which is either $+f(N)$ or $0$, the parallel processes must collectively compute the more intricate function described above, which requires that the individual processes have some additional information. We will first describe what happens in a fault-free computation (with no illegal pairs or computation squares) and then describe how fault-tolerance is enforced and proven.

The constructions for Layers $1$ and $2$ are exactly the same as for the Gapped Weighted Tiling problem. Layer $1$ creates a set of intervals. We define the function $\mu(N)$ to denote the number of intervals on the tape after the Turing Machine for Layer $1$ executes $N-3$ steps. If, after $N-3$ steps, the computation just happens to finish at the end of an execution of the Outer Loop, then the intervals have sizes (from left to right) $\mu(N)+1, \mu(N), \ldots, 2$. If the computation finishes in the middle of an execution of the Outer Loop, the actual sequence of interval sizes will be close to $\mu(N)+1, \mu(N), \ldots, 2$. The largest interval could have size $\mu(N)+2$, and there may be a couple of missing values in the range where the current interval is being increased. Lemma \ref{lem-errorFreeSizes} describes the possible deviations in detail. $\mu(N)$ is $\Theta(N^{1/4})$, and we show using a standard padding argument that for the constant $c$ of our choice, all of the computations require at most $c \mu(N)$ space.
This allows us to establish that at least half of the intervals will be large enough to complete the required computations. As in the previous construction, Layer $2$ then executes a binary counting Turing Machine, which results in the string $x$ written at the left end of each interval that is large enough to complete the computation. Note that Layer $1$ is a {\em global} Turing Machine which executes a single process across the entire grid, while Layer $2$ represents {\em local} computations within each strip.

When $x$ is translated from Layer $2$ to Layer $3$, it is augmented with a guess string $z$ for the oracle responses. However, there is no a priori guarantee that the guess in each interval is the same; $z$ can be arbitrary, but it must be consistently the same across all intervals. Layer $3$ therefore executes a global Turing Machine which imposes a high penalty if the $z$ strings in the strips are not all the same. This penalty is higher than the cost function for any $z$, so the lowest cost tiling will correspond to a configuration in which each strip has the same $x$ and $z$.

Finally, in Layer $4$, there is a local computation in each interval, each of which makes a $+1$ or $0$ contribution towards the overall cost. The computation within each interval requires a unique tag in order to determine which term of the cost it will contribute to. The tag comes from the size of the interval. The computation begins with counting the number of locations in the interval. This can be accomplished by having the head shuttle back and forth between the two ends of the interval, implementing both a unary and a binary counter, until the unary counter extends across the entire interval. The head returns to the left end of the interval and begins the next phase of the computation. Since the size of an interval is at most $O(N^{1/4})$, this phase of the computation will take at most $O(N^{1/2})$ steps.

Now each computation has the same pair $(x,z)$ and its own integer $r$ indicating the size of the interval. From $x$, the values of $N$ and $\mu(N)$ can be determined. Lemma \ref{lem-errorFreeSizes} shows that in a fault-free computation, the sizes of the intervals will decrease from left to right. Moreover, all interval sizes are in the set $\{ \mu(N)+2, \mu (N)+1, \ldots, 2\}$ with at most one missing value from that set and at most two duplicates. Thus, the value $\mu(N) - r +2$ will be an almost unique identifier for each interval, starting with $0$ or $1$ on the left and increasing to the right. Using this tag, each interval determines which portion of the cost it will contribute to. The number of intervals assigned to compute a particular term in the cost will depend on the value of the term, since each interval can contribute at most $1$ to the overall cost.

If an interval is assigned to check a {\em yes} guess ($z_k = 1$), the computation uses $x$ and $z$ to determine the $k^{th}$ input to the oracle $o_k$, guesses a witness, and simulates $V$ on input $o_k$ with the guessed witness. There is a cost of $+1$ if $V$ rejects and $0$ if $V$ accepts. If $o_k$ is in fact in $L'$, there is a witness which will allow for a zero cost tiling within that interval. If $o_k \not\in L'$, then every witness will lead to a $+1$ cost. Thus, the optimal set of witnesses will result in the minimum value for $2^{\overline{n}+5} {\cal{C}}(x,z)$. In addition, exactly $2^3 f(x,z)$ of the intervals will just transition to the rejecting state, incurring a cost of $+1$; the total cost due to those intervals is $2^3 \cdot f(x,z)$. For the remaining intervals, no cost is incurred.
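As a sanity check on this arithmetic, the following sketch shows how $f(x)$ is read off the minimum cost. The value of $\overline{n}$ and the sample inputs are toy assumptions, and we assume the low-order part is large enough that subtracting $3$ does not wrap around the modulus.

\begin{verbatim}
OBAR = 5                       # toy value of n-bar

def min_tiling_cost(C, f):
    # fault-free optimum: accounting term in the high-order bits,
    # 8*f(x,z) in the low-order bits
    return 2 ** (OBAR + 5) * C + 8 * f

def recover_f(observed):
    # strip the accounting term; dividing by 8 and rounding absorbs a
    # deviation of up to +-3 in the observed minimum cost
    return round((observed % 2 ** (OBAR + 5)) / 8)

for err in range(-3, 4):
    assert recover_f(min_tiling_cost(C=9, f=17) + err) == 17
\end{verbatim}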
The cost of a computation fault (illegal pair or square) is a constant that is larger than the cost of ending in a rejecting computation. Therefore, for each independent computation (in Layers $2$ and $4$), the optimal tiling will correspond to a correct computation, which may or may not incur a cost for ending in a rejecting state.

Technically, the most challenging part of the proof is to show that the process on Layer $1$ which creates the intervals is fault-tolerant. The proof for Function Weighted Tiling requires stronger conditions than for Gapped Weighted Tiling, since we not only have to show that there are a large number of large intervals at the end of Layer $1$, but we also need to establish that the sequence of interval sizes is close to what one would have in a fault-free computation. To this end, we use a potential function $A$ which captures how much a sequence of interval sizes $(s_1, s_2, \ldots, s_m)$ deviates from the expected sequence $(m+1, m, m-1, \ldots ,2)$. The main part of the proof is to show that each illegal square or pair can cause the value of $A$ to increase by at most a constant amount. At the end of Layer $1$, the ideal sequence of interval sizes is $(\mu(N)+1, \mu(N), \ldots, 2)$. Every interval size that is missing from the actual sequence of interval sizes has caused $A$ to increase by at least a fixed amount, which in turn corresponds to faults incurred in the computation. Thus, we show that it is more cost-effective to complete the computation correctly (and not incur the higher cost of a fault) and incur the smaller potential cost of a rejecting computation.

The most important measure of progress of the tiling/computation in Layer $1$ is the number of times the encoded Turing Machine completes an iteration of the Outer Loop, in which the size of every interval increases by $1$ and a new interval of size $2$ is added. Faults can potentially cause an iteration of the Outer Loop to take longer, as they may force the head to shuttle back and forth more times, which in turn could result in fewer iterations. Even in a fault-free computation, the number of steps per iteration increases with each iteration because there are more intervals. One of the main lemmas in the analysis is Lemma \ref{lem-segLB}, which lower bounds the number of times the loop is completed in relation to that number in a fault-free computation. The proof is a delicate inductive argument which uses the fact that the increase in the running time of a loop is not accelerated too much by each additional fault.

{~}

\noindent{\bf Proof Overview for Parity Weighted Tiling} \label{sec-introparity}

The proof for Parity Weighted Tiling is very similar to that for the function problem. Suppose that a language $L \in \mbox{P}^{\mbox{NEXP}}$ is computed by a Turing Machine $M$ with access to an oracle for $L' \in \mbox{NEXP}$. Let $M(x,z)$ be the indicator function that is $0$ if $M(x,z)$ accepts and $1$ if $M(x,z)$ rejects. The overall cost computed by the collective computations is $4 {\cal{C}}(x,z) + M(x,z)$. The left-most interval computes $M(x,z)$ and results in a $+1$ cost in the case that $M$ rejects. The remaining intervals, which collectively compute Krentel's cost function, all impose costs of $+2$ or $0$. Thus the expression $4 {\cal{C}}(x,z) + M(x,z)$ will guarantee that the minimum ${\cal{C}}(x,z)$ corresponds to the correct guess $\bar{z}$. Furthermore, the rightmost bit will be $M(x,z)$, which will cause the minimum cost to be odd or even depending on whether $M$ accepts.
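In miniature (with toy numbers; \texttt{parity\_cost} is simply the expression above):

\begin{verbatim}
def parity_cost(C, M):
    # M is 1 if the P^NEXP machine rejects, 0 if it accepts;
    # the accounting term 4*C is always even
    return 4 * C + M

# the parity of the minimum cost equals M(x, z-bar)
assert parity_cost(C=13, M=1) % 2 == 1
assert parity_cost(C=13, M=0) % 2 == 0
\end{verbatim}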
\subsection{Discussion, Related Work, and Open Problems} \label{sec-discussion}

Despite the fact that the function version of classical local Hamiltonians describes the task of the computational (classical) physicist much more naturally than decision problems, the complexity of function problems has hardly been studied, even in the non-TI setting, in the classical theoretical computer science literature. Recently, related results were discovered in the domain of quantum computational complexity. In particular, in \cite{AI21}, Aharonov and Irani use a construction for the function version of the (finite) quantum local Hamiltonian problem as a component of a hardness result for the infinite 2D grid. More specifically, they prove that the problem of estimating the ground energy of a local Hamiltonian on a finite 2D grid is hard for $\mbox{FP}^{\mbox{NEXP}}$. Importantly, their results do not imply the hardness result presented in this paper, and it seems impossible to extend their proof to deduce the classical hardness result of Theorem \ref{th-function}. Like \cite{AI21}, we implement Krentel's cost function using a fixed Hamiltonian term, but since their construction is quantum (as opposed to the classical construction in this paper), they are able to prove the result using a completely different set of tools which do not carry over to the classical case. In quantum constructions, the lowest energy is an eigenvalue of a general Hermitian matrix, and the matrix can be constructed to fine-tune the ground energy to an inverse polynomial precision. In classical constructions, the total energy will be a sum over terms where each term is chosen from a constant-sized set of values determined by the finite horizontal and vertical tiling rules. This allows far less control in the classical setting over the precision of the minimum cost tiling.

Incidentally, note that the results for the quantum case proven in \cite{AI21} are not tight, which follows from the fact that they use a quantum construction to obtain hardness for $\mbox{FP}^{\mbox{NEXP}}$, a classical complexity class. It seems challenging to make the characterization tight in the quantum case. In contrast to the class NP, the class $\mbox{QMA}$ is a class of promise problems, and in simulating a $\mbox{P}^{\mbox{QMA}}$ machine, there is no guarantee that the queries sent to the $\mbox{QMA}$ oracle will be valid queries. The cost/energy applied for a particular query will depend on the probability that a $\mbox{QMA}$ verifier accepts on the provided input. If the input is invalid, then the probability of acceptance can be arbitrary. Thus, Krentel's cost function will potentially be an uncontrolled quantity. Typically, in a reduction where we want to embed the output of a function into the value of the minimum energy, the low-order bits of the energy are used to encode the output of the function. It is not clear how to do this without being able to control the binary representation of the minimum energy. Note that by embedding a classical computation in the Hamiltonian, the issue of invalid queries is circumvented.
Both \cite{AI21} and \cite{CW21} study the complexity of computing the ground energy density of infinite TI Hamiltonians to within a desired precision, making use of the technique introduced by Cubitt, Perez-Garcia, and Wolf which embeds {\it finite} Hamiltonian constructions of exponentially increasing sizes into the infinite 2D lattice using Robinson tiles. Robinson tiling rules \cite{R71} force an aperiodic structure on the tiling of the infinite plane, with squares of exponentially increasing size. The quantum construction of \cite{AI21} layers a TI 1D Hamiltonian on top of one of the sides of all the squares. The classical construction of \cite{CW21} layers a classical finite construction on each square. Neither work obtains tight results, due to the same issue with invalid queries, although the two papers compromise in completely different ways. The primary technical innovation introduced in \cite{CW21} is to devise a more robust version of Robinson tiles which ensures that the lowest energy state corresponds to a correct Robinson tiling, even though the cost of the classical finite construction layered on top may introduce a penalty. If it were possible to obtain an even more robust version of Robinson tiles, one could potentially layer the finite construction from the current paper on top of the more robust construction, in the hope of showing that computing the ground energy density of a classical TI Hamiltonian in the thermodynamic limit is complete for $\mbox{EXP}^{\mbox{NEXP}}$ under Karp reductions.

The results in this paper are also related to the work of Ambainis \cite{Amb2014}, which characterizes the complexity of measuring local observables of ground states of local Hamiltonians (APX-SIM), showing that the problem is complete for $P^{\mbox{QMA}[\log n]}$. $P^{\mbox{QMA}[\log n]}$ contains those problems that can be solved by a polynomial time classical Turing Machine with access to $O(\log n)$ queries to a $\mbox{QMA}$ oracle. This type of question (determining a property of the ground state) is similar to our classical result about determining whether the cost of the optimal tiling is odd or even. The results on APX-SIM \cite{Amb2014, GY19, GPY19} are not hindered by the issue of invalid queries because the quantity being measured is not the actual energy itself. The important point here is that the property that distinguishes the state to be measured (minimum energy) is different from the local observable applied to the measured state. By contrast, computing the energy of the lowest energy state appears to be more difficult. The issue of invalid queries appears to be an obstacle even when the Hamiltonian terms are position-dependent, as in the constructions of \cite{GY19, GPY19}, as well as in the TI constructions in \cite{AI21,WBG20}.

Finally, it was mentioned earlier that the approximation problem considered here differs from the standard PCP setting in that the underlying graph is a grid and the terms are TI. It remains an open question whether there is a family of TI instances of constraint satisfaction on general graphs for which it is hard to estimate the optimal solution to within an additive error that is linear in the number of constraints.

\subsection{Paper Outline}

Subsection \ref{sec-TM2Tile} describes how the rules of a Turing Machine are translated into tiling rules. For all the results presented here, Layer $1$ encodes the execution of a Turing Machine that creates intervals within the grid that mark off where parallel computations will take place in subsequent layers.
Since the construction and analysis of Layer $1$ are common to all the results, we present them first. Section \ref{sec-L1construction} gives the construction and Section \ref{sec-L1analysis} proves the main lemmas that are needed for the analysis of each construction. Section \ref{sec-GWT} then describes the rest of the construction and proof for Gapped Weighted Tiling, which only requires two additional layers. Section \ref{sec-PWT} describes the rest of the construction and analysis for Parity Weighted Tiling. Finally, Section \ref{sec-FWT} describes the modifications to the construction for Parity Weighted Tiling that are required for Function Weighted Tiling.

\section{Tilings and Turing Machines}

\subsection{Equivalence of Tiling Variants} \label{sec-tiling}

We defined the translationally-invariant tiling problem to use two constraints: one is applied to each pair of adjacent tiles in the vertical direction and the other is applied to each pair of adjacent tiles in the horizontal direction. For convenience, the local constraints described throughout the paper apply to local configurations in a square of four tiles as well as to pairs of tiles in the horizontal direction. We sketch here how one can transform a set of tiling rules that applies to squares into an equivalent one that applies only to pairs.

For each pair of tiles $t_1$ and $t_2$ in the original tiling rules, create a new combined tile type $[t_1, t_2]$. A large (but constant) cost can be added to ensure that a tiling with the new tile types is {\em consistent}, meaning that a tile of the form $[*,t]$ must go to the left of a tile of the form $[t,*]$. More specifically, an inconsistent tiling can be transformed into a consistent tiling in a way that strictly reduces the cost. There is then a one-to-one correspondence between consistent tilings with the new tile types and tilings with the original tile types. Moreover, a vertically aligned pair with tile $[t_a, t_b]$ on top of $[t_c, t_d]$ can enforce the same constraint as a square in the original tiling rules with tiles $t_a$ and $t_b$ in the top row of the square and with $t_c$ and $t_d$ in the bottom row of the square. Constraints on the new tiles can be fixed so that the costs of corresponding tilings are the same.

\subsection{Encoding Turing Machine Computations in Tilings} \label{sec-TM2Tile}

In this subsection we describe how the rules of a Turing Machine are translated into legal and illegal {\em computation} squares, so that a tiling that contains no illegal computation squares corresponds to a correct execution of the Turing Machine. Note that the legal and illegal computation squares described in this subsection apply to one particular layer of the tiling. We assume here that the Turing Machine proceeds from the bottom to the top. If the direction were reversed, the top and bottom rows of the legal and illegal computation squares would be swapped.

Consider a Turing Machine $M$ with tape alphabet $\Gamma$ and set of states $Q$. The set of tile types for the layer corresponds to $\Gamma \cup (\Gamma \times Q)$. A tile is called a {\em tape} tile if it is labeled with a symbol from the tape alphabet. A tile is called a {\em head} tile if it is labeled with a state and a tape alphabet symbol, e.g. $(q/c)$. Each configuration of the Turing Machine is represented by a row of tiles. In general, we will have Turing Machines whose non-blank tape symbols are bracketed on the left by the symbol $\vartriangleleft$ and on the right by the symbol $\vartriangleright$.
The tape contents to the right of $\vartriangleright$ are an infinite sequence of blank ($\#$) symbols. The corresponding row of tiles will have $\#$ tiles extending to the $\Box$ tile at the right side of the grid. For example, consider a Turing Machine with a state $q$ whose tape symbols include B, X, $\vartriangleleft$, and $\vartriangleright$. The Turing Machine configuration

\begin{description} \item \begin{tabular}{cccccccccccccccccccc} & & & & & & & & & & & $q$ & & & & & & & & \\ & & & & & & & & & & & $\downarrow$ & & & & & & & & \\ $\Box$ & $\vartriangleleft$ & B & B & B & B & X & B & B & B & X & B & X & $\vartriangleright$ & & & & & & \\ \end{tabular} \end{description}

will be represented by the row

\vspace{.1in} \noindent \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $\Box$ & $\vartriangleleft$ & B & B & B & B & X & B & B & B & X & q/B & X & $\vartriangleright$ & \# & \# & $\cdots$ & \# & $\Box$ \\ \hline \end{tabular} \vspace{.1in}

For ease of notation, we will represent this row of tiles by the string: $$\Box~\vartriangleleft~B~B~B~B~X~B~B~B~X~(q/B)~X ~\vartriangleright \#~\cdots \#~ \Box $$

Suppose the Turing Machine has the rule $\delta(q_0, a) \rightarrow (q_1, b, L)$. Then the following four squares are legal for any tape symbols $x$ and $y$:

\vspace{.1in} \begin{tabular}{|c|c|} \hline $q_1/x$ & b \\ \hline x & $q_0/a$ \\ \hline \end{tabular}~~~~~ \begin{tabular}{|c|c|} \hline b & y \\ \hline $q_0/a$ & y \\ \hline \end{tabular}~~~~~ \begin{tabular}{|c|c|} \hline y & $q_1/x$ \\ \hline y & x \\ \hline \end{tabular}~~~~~ \begin{tabular}{|c|c|} \hline $\Box$ & $q_1/x$ \\ \hline $\Box$ & x \\ \hline \end{tabular} \vspace{.1in}

Thus if two adjacent rows do not contain any illegal computation squares, then the next row up reflects the head location and tape contents after the rule has been applied. Similarly, if the Turing Machine has the rule $\delta(q_0, a) \rightarrow (q_1, b, R)$, then the following four squares are legal for any tape symbols $x$ and $y$:

\vspace{.1in} \begin{tabular}{|c|c|} \hline b & $q_1/x$ \\ \hline $q_0/a$ & x \\ \hline \end{tabular}~~~~~ \begin{tabular}{|c|c|} \hline y & b \\ \hline y & $q_0/a$ \\ \hline \end{tabular}~~~~~ \begin{tabular}{|c|c|} \hline $\Box$ & b \\ \hline $\Box$ & $q_0/a$ \\ \hline \end{tabular}~~~~~ \begin{tabular}{|c|c|} \hline $q_1/x$ & y \\ \hline x & y \\ \hline \end{tabular} \vspace{.1in}

There can also be Turing Machine rules in which the head does not change location: $\delta(q_0, a) \rightarrow (q_1, b, -)$. Then the following three squares are legal for any tape symbol $x$:

\vspace{.1in} \begin{tabular}{|c|c|} \hline $q_1/b$ & x \\ \hline $q_0/a$ & x \\ \hline \end{tabular}~~~~~ \begin{tabular}{|c|c|} \hline x & $q_1/b$ \\ \hline x & $q_0/a$\\ \hline \end{tabular}~~~~~ \begin{tabular}{|c|c|} \hline $\Box$ & $q_1/b$ \\ \hline $\Box$ & $q_0/a$\\ \hline \end{tabular} \vspace{.1in}

In the absence of a head tile, two vertically neighboring tiles must be of the same tile type. Thus for any two tape symbols $x$ and $y$ the following square is legal:

\vspace{.1in} \begin{tabular}{|c|c|} \hline x & y \\ \hline x & y \\ \hline \end{tabular} \vspace{.1in}

Any square that has four interior tiles and whose tile types for the layer do not form one of the legal squares described above is an illegal computation square.
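Since the translation just described is entirely mechanical, it can be summarized by a short routine that enumerates the legal computation squares of a layer from a transition table. The tuple encoding of tiles below is our own convention, and the border squares involving $\Box$ tiles are omitted for brevity.

\begin{verbatim}
def legal_squares(delta, gamma):
    """Enumerate the legal computation squares of one layer.
    delta maps (state, symbol) -> (new_state, written, direction),
    with direction in {'L', 'R', '-'}; gamma is the tape alphabet.
    A tape tile is a symbol; a head tile is a (state, symbol) pair;
    a square is ((top_left, top_right), (bottom_left, bottom_right))."""
    legal = set()
    for x in gamma:                 # no head in sight: columns copy upward
        for y in gamma:
            legal.add(((x, y), (x, y)))
    for (q0, a), (q1, b, d) in delta.items():
        for x in gamma:
            if d == 'L':            # head moves left within the square
                legal.add((((q1, x), b), (x, (q0, a))))
            elif d == 'R':          # head moves right within the square
                legal.add(((b, (q1, x)), ((q0, a), x)))
            else:                   # head stays put
                legal.add((((q1, b), x), ((q0, a), x)))
                legal.add(((x, (q1, b)), (x, (q0, a))))
        for y in gamma:             # head exits the square sideways
            if d == 'L':
                legal.add(((b, y), ((q0, a), y)))
            elif d == 'R':
                legal.add(((y, b), (y, (q0, a))))
        if d in 'LR':               # head arrives from an adjacent square
            for x in gamma:
                for y in gamma:
                    if d == 'L':
                        legal.add(((y, (q1, x)), (y, x)))
                    else:
                        legal.add((((q1, x), y), (x, y)))
    return legal

# every square of four interior tiles not in legal_squares(delta, gamma)
# is an illegal computation square for the layer
\end{verbatim}

For example, \texttt{legal\_squares(\{('q0', 'a'): ('q1', 'b', 'L')\}, \{'a', 'b', 'x', 'y'\})} yields the three non-border squares displayed above for the left-moving rule, together with the no-head squares.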
In general, we will design TMs in which each state $q$ is reached from a unique direction. In other words, for each state $q$, the TM cannot have rules corresponding to more than one of the following three forms: \begin{enumerate} \item $\delta(*,*) = (q,*,R)$ \item $\delta(*,*) = (q,*,L)$ \item $\delta(*,*) = (q,*,-)$ \end{enumerate} Any TM that does not have this property can be transformed into a TM that does have this property by adding some additional states and rules. This restriction ensures that for each state $q$, and any tape symbols $x$, $y$, and $z$, only one of the squares below can be legal:

\vspace{.1in} \begin{tabular}{|c|c|} \hline y & $q/x$ \\ \hline y & x \\ \hline \end{tabular}~~~ \begin{tabular}{|c|c|} \hline $q/x$ & y \\ \hline x & y \\ \hline \end{tabular}~~~ \begin{tabular}{|c|c|} \hline $q/x$ & y \\ \hline $q'/z$ & y \\ \hline \end{tabular} \vspace{.1in}

Thus the only legal way for a head tile to appear in a row is for there to be another head tile just below it, or to the immediate left or right, in the preceding row.

\begin{definition} The number of illegal pairs and squares in Layer $i$, denoted by $F_i$, is the number of illegal pairs, illegal computation squares, and illegal initialization squares in Layer $i$, plus, if $i > 1$, the number of illegal translation squares from Layer $i-1$ to Layer $i$. We will refer to $F_i$ as the {\bf cost} of Layer $i$. \end{definition}

The cost of the whole tiling will be a linear combination of the $F_i$'s. A square is called a {\em head} square if one of the two lower tiles is a head tile and one of the two upper tiles is a head tile. Note that a head square can be illegal or legal. The next two facts about the encoding of Turing Machines in tiling rules will be useful in analyzing Layer 1 of the constructions. Based on the method described above for translating Turing Machine rules into legal and illegal computation squares, the following two sets of facts can be easily verified.

\begin{fact} \label{lem-twoTiles} \ifshow {\bf (lem:twoTiles)} \else \fi Consider a tiling where a tile $t'$ is directly above tile $t$. If any of the following conditions hold, then both squares containing $t'$ and $t$ are illegal computation squares: \begin{enumerate} \item $t'$ and $t$ are both tape tiles and $t \neq t'$ \item $t'$ is a head tile $(q/c')$, $t$ is a tape tile $c$ and $c \neq c'$. \item $t'$ is a tape tile $c'$, $t$ is a head tile $(q/c)$ and $\delta(q, c) = (*, c', L/R)$ is not a TM rule. \item $t'$ is a head tile $(q'/c')$, $t$ is a head tile $(q/c)$ and $\delta(q, c) = (q', c', -)$ is not a TM rule. \end{enumerate} \end{fact}

\begin{fact} \label{lem-twoTiles2} \ifshow {\bf (lem:twoTiles2)} \else \fi Consider a tiling where a tile $t'$ is directly above tile $t$. If any of the following conditions hold, then the vertically aligned pair is in a legal head square or an illegal computation square: \begin{enumerate} \item $t'$ is a head tile $(q/c')$, $t$ is a tape tile $c$ and $c = c'$. \item $t'$ is a tape tile $c'$, $t$ is a head tile $(q/c)$ and $\delta(q, c) = (*, c', *)$ is a valid TM rule. \item $t'$ is a head tile $(q'/c')$, $t$ is a head tile $(q/c)$ and $\delta(q, c) = (q', c', -)$ is a TM rule. \end{enumerate} \end{fact}

\section{Layer 1 Construction: Creating the Intervals} \label{sec-L1construction}

\subsection{Overview of Layer 1} \label{sec-L1TMcomponents}

The role of Layer 1 is to create what we call ``intervals'', whose beginnings and ends will mark the regions where the separate computations that we want to repeat in subsequent layers can occur.
We will refer to an {\em interval} as a sequence of characters that starts with a heavy tile from the set $\{X, \overline{X}, \vartriangleleft, \vartriangleright\}$ and includes all the $B$'s to the right, up to and including the next heavy tile. The {\em size} of the interval is the number of characters, including the heavy tiles on either end, so neighboring intervals overlap by one symbol. In general, intervals begin and end with $X$ or $\overline{X}$, except that the left-most interval begins with a $\vartriangleleft$ on the left and the right-most interval ends with a $\vartriangleright$ on the right.

The idea is to design a computation which ends at the top row of Layer $1$ with a row that can be viewed as a concatenation of intervals whose sizes begin with the largest and decrease by one from each interval to the next. This is done by running a TM with an Outer Loop containing an Inner Loop, as follows. In each iteration of the Outer Loop, the size of each interval increases by $1$ and a new interval of size $2$ is added to the right end of the tape. This is done by iterating over an Inner Loop, which increases the size of just a single interval (and pushes all the intervals to its right one site further to the right). To describe this we will define the Turing Machine that is simulated in Layer $1$. The transition rules of this TM define a set of legal squares for Layer $1$ as described in Section \ref{sec-TM2Tile}.

The tape symbols are: $\{ X, B, \overline{X}, \vartriangleleft, \vartriangleright, \# \}$. $B$ is the blank symbol that is written by the Turing Machine. The unwritten tape symbols are all set to $\#$. The states can be divided into groups: \begin{enumerate} \item $q_{OS}$. OS stands for {\em Outer Start}. The computation is in $q_{OS}$ during the set-up phase of the Outer Loop. \item The Inner Loop states are: $q_{IS}$, $q_{left}$, $q_{wX}$, $q_{wB}$, $q_{w\overline{X}}$, $q_{w\vartriangleright}$. \begin{enumerate} \item $q_{IS}$ is the starting state for the Inner Loop. \item $q_{left}$ just moves left until $\overline{X}$ is reached. \item $q_{wt}$ stands for {\em write} character $t$. This is how the contents are moved to the right. The state remembers the tape symbol $t$ that it just wrote over. \end{enumerate} \item $q_{e1}$ and $q_{e2}$ are special states for the very end of an iteration of the Outer Loop. \end{enumerate} Figure \ref{fig-OuterLoop} shows the steps of the Outer Loop in pseudo-code.

\begin{figure}[ht] \noindent\fbox{\begin{minipage}{\textwidth} \begin{tabbing} (1) {\sc OuterLoop}: \\ (2) ~~~~~\= {\sc Set Up Phase}\\ (3) \> ~~~~~ \= Sweep left in state $q_{OS}$ until $\vartriangleleft$ is reached\\ (4) \> \>Transition to $q_{IS}$ and move right\\ (5) \> Start of {\sc Inner Loop}\\ (6) \> \> Move right in state $q_{IS}$ until an $X$ is reached\\ (7) \> \> ~~~~~\= If $\vartriangleright$ is reached before $X$, go to (14)\\ (8) \> \> Replace $X$ with $B$ and move right, transition to $q_{w\overline{X}}$\\ (9) \> \> Insert an $\overline{X}$:\\ (10) \>\>\> Sweep right, moving every symbol to the right ($q_{wt}$ for $t \in \{B, X, \vartriangleright\}$)\\ (11) \> \> When state $q_{w\vartriangleright}$ is reached, replace $\#$ with $\vartriangleright$, transition to $q_{left}$\\ (12) \> \> Move left in state $q_{left}$ until $\overline{X}$ is reached\\ (13) \> \> Replace $\overline{X}$ with $X$, transition to $q_{IS}$ and move right.
Go to (5).\\ (14) \> {\sc Wind Down Phase}\\ (15) \> \> Replace $\vartriangleright$ with $B, X, \vartriangleright$ (states $q_{e1}$ and $q_{e2}$). \\ (16) \> \> Transition to $q_{OS}$. Go to (1). \end{tabbing} \end{minipage}} \caption{Pseudo-code for an iteration of the Outer Loop.} \label{fig-OuterLoop} \end{figure}
We now describe the exact transition rules of the TM that is simulated in Layer $1$.
\subsection{The Turing Machine for Layer $1$}
\noindent{\bf The Outer Loop}
\begin{description} \item {\bf The start of an Outer Loop:}
\begin{tabular}{cccccccccccccccccccc} & & & & & & & & & $q_{OS}$ & & & & & & & & & & \\ & & & & & & & & & $\downarrow$ & & & & & & & & & & \\ $\vartriangleleft$ & B & B & B & X & B & B & X & B & X & $\vartriangleright$ & & & & & & & & \\ \end{tabular}
\item {\bf The start of the next Outer Loop:}
\begin{tabular}{cccccccccccccccccccc} & & & & & & & & & & & & & & $q_{OS}$ & & & & \\ & & & & & & & & & & & & & & $\downarrow$ & & & & \\ $\vartriangleleft$ & B & B & B & B & X & B & B & B & X & B & B & X & B & X & $\vartriangleright$ & & & \\ \end{tabular}
\end{description}
\noindent{\bf Set Up Phase} Each iteration of the outer loop starts with a set-up phase in which the head moves to the far left:
\begin{description} \item \begin{tabular}{cccccccccccccccccccc} $q_{OS}$ & & & & & & & & & & & & & & & & & & & \\ $\downarrow$ & & & & & & & & & & & & & & & & & & & \\ $\vartriangleleft$ & B & B & B & X & B & B & X & B & X & $\vartriangleright$ & & & & & & & & \\ \end{tabular} \end{description}
This is achieved by the rule $\delta(q_{OS}, t) = (q_{OS}, t, L)$, for $t \in \{B, X, \vartriangleright\}$ (the remaining cases, which arise only during fault recovery, are given in Figure \ref{fig-TMrules}). This continues until the head reaches $\vartriangleleft$. Then the state transitions to $q_{IS}$ and moves to the right.
\begin{description} \item \begin{tabular}{cccccccccccccccccccc} & $q_{IS}$ & & & & & & & & & & & & & & & & & & \\ & $\downarrow$ & & & & & & & & & & & & & & & & & & \\ $\vartriangleleft$ & B & B & B & X & B & B & X & B & X & $\vartriangleright$ & & & & & & & & \\ \end{tabular} \end{description}
Rules: $\delta(q_{OS}, \vartriangleleft) = (q_{IS}, \vartriangleleft, R)$. This configuration will be the start of an iteration of the inner loop.
\noindent{\bf The Inner Loop} The state $q_{IS}$ will move right past any $B$'s until it reaches an $X$:
\begin{description} \item \begin{tabular}{cccccccccccccccccccc} & & & & $q_{IS}$ & & & & & & & & & & & & & & & \\ & & & & $\downarrow$ & & & & & & & & & & & & & & & \\ $\vartriangleleft$ & B & B & B & X & B & B & X & B & X & $\vartriangleright$ & & & & & & & & \\ \end{tabular} \end{description}
Rules: $\delta(q_{IS}, B) = (q_{IS}, B, R)$. Then it replaces the $X$ with $B$ and inserts an $\overline{X}$ to the right of the $B$. This has the effect of increasing the size of the current interval. The $\overline{X}$ symbol tells the head where to return to in the next inner loop iteration.
\begin{description} \item \begin{tabular}{cccccccccccccccccccc} & & & & & $q_{w \overline{X}}$ & & & & & & & & & & & & & & \\ & & & & & $\downarrow$ & & & & & & & & & & & & & & \\ $\vartriangleleft$ & B & B & B & B & B & B & X & B & X & $\vartriangleright$ & & & & & & & & \\ \end{tabular}
\item \begin{tabular}{cccccccccccccccccccc} & & & & & & $q_{wB}$& & & & & & & & & & & & & \\ & & & & & & $\downarrow$ & & & & & & & & & & & & & \\ $\vartriangleleft$ & B & B & B & B & $\overline{X}$ & B & X & B & X & $\vartriangleright$ & & & & & & & & \\ \end{tabular} \end{description}
Rules: $\delta(q_{IS}, X) = (q_{w\overline{X}}, B, R)$, and $\delta(q_{w\overline{X}}, t) = (q_{wt}, \overline{X}, R)$, for $t \in \{B, X, \vartriangleright\}$. The head moves to the right, moving each character over by one space. The state remembers the last symbol that was overwritten. This continues until the $\vartriangleright$ has been overwritten and the head is in state $q_{w \vartriangleright}$.
\begin{description} \item \begin{tabular}{cccccccccccccccccccc} & & & & & & & & & & & $q_{w \vartriangleright}$ & & & & & & & & \\ & & & & & & & & & & & $\downarrow$ & & & & & & & & \\ $\vartriangleleft$ & B & B & B & B & $\overline{X}$ & B & B & X & B & X & \# & & & & & & & & \\ \end{tabular} \end{description}
Rules: $\delta(q_{wb}, t) = (q_{wt}, b, R)$, for $b \in \{B, X\}$ and $t \in \{ B, X, \vartriangleright \}$. The state $q_{w \vartriangleright}$ writes a $\vartriangleright$, transitions to $q_{left}$ and moves left:
\begin{description} \item \begin{tabular}{cccccccccccccccccccc} & & & & & & & & & & $q_{left}$ & & & & & & & & & \\ & & & & & & & & & & $\downarrow$ & & & & & & & & & \\ $\vartriangleleft$ & B & B & B & B & $\overline{X}$ & B & B & X & B & X & $\vartriangleright$ & & & & & & & & \\ \end{tabular} \end{description}
Rules: $\delta(q_{w \vartriangleright}, \#) = (q_{left}, \vartriangleright , L)$. The head then moves all the way to the left until it reaches $\overline{X}$:
\begin{description} \item \begin{tabular}{cccccccccccccccccccc} & & & & & $q_{left}$ & & & & & & & & & & & & & & \\ & & & & & $\downarrow$ & & & & & & & & & & & & & & \\ $\vartriangleleft$ & B & B & B & B & $\overline{X}$ & B & B & X & B & X & $\vartriangleright$ & & & & & & & & \\ \end{tabular} \end{description}
Rules: $\delta(q_{left}, b) = (q_{left}, b, L)$, for $b \in \{X, B\}$. When $\overline{X}$ is reached, the $\overline{X}$ is replaced with an $X$, the state transitions to $q_{IS}$ and a new inner loop begins.
\begin{description} \item \begin{tabular}{cccccccccccccccccccc} & & & & & & $q_{IS}$ & & & & & & & & & & & & & \\ & & & & & & $\downarrow$ & & & & & & & & & & & & & \\ $\vartriangleleft$ & B & B & B & B & X & B & B & X & B & X & $\vartriangleright$ & & & & & & & & \\ \end{tabular} \end{description}
Rules: $\delta(q_{left}, \overline{X}) = (q_{IS}, X, R)$.
\vspace{.1in}
At the beginning of each iteration of the inner loop, the location of the head has moved over by one interval. The intervals to the left of the head have all been increased. The current interval and the ones to the right have yet to be increased.
Working through our example, after the next iteration of the inner loop we have:
\begin{description} \item \begin{tabular}{cccccccccccccccccccc} & & & & & & & & & & $q_{IS}$ & & & & & & & & & \\ & & & & & & & & & & $\downarrow$ & & & & & & & & & \\ $\vartriangleleft$ & B & B & B & B & X & B & B & B & X & B & X & $\vartriangleright$ & & & & & & & \\ \end{tabular} \end{description}
Then after the next inner loop:
\begin{description} \item \begin{tabular}{cccccccccccccccccccc} & & & & & & & & & & & & & $q_{IS}$ & & & & & &\\ & & & & & & & & & & & & & $\downarrow$ & & & & & & \\ $\vartriangleleft$ & B & B & B & B & X & B & B & B & X & B & B & X & $\vartriangleright$ & & & & & & \\ \end{tabular} \end{description}
\noindent{\bf Wind Down Phase} In the last iteration of the inner loop, there is no $X$ in between $q_{IS}$ and $\vartriangleright$. This situation is detected when the TM is in the state $q_{IS}$ and it encounters a $\vartriangleright$ instead of an $X$. In this case, the last interval is increased and a new $X$ is added. So $\vartriangleright$ is replaced by $B~ X ~ \vartriangleright$.
\begin{description} \item \begin{tabular}{cccccccccccccccccccc} & & & & & & & & & & & & & $q_{IS}$ & & & & & &\\ & & & & & & & & & & & & & $\downarrow$ & & & & & & \\ $\vartriangleleft$ & B & B & B & B & X & B & B & B & X & B & B & X & $\vartriangleright$ & & & & & & \\ \end{tabular} \end{description}
\begin{description} \item \begin{tabular}{cccccccccccccccccccc} & & & & & & & & & & & & & & $q_{e1}$ & & & & &\\ & & & & & & & & & & & & & & $\downarrow$& & & & & \\ $\vartriangleleft$ & B & B & B & B & X & B & B & B & X & B & B & X & B & \# & & & & & \\ \end{tabular} \end{description}
\begin{description} \item \begin{tabular}{cccccccccccccccccccc} & & & & & & & & & & & & & & & $q_{e2}$ & & & &\\ & & & & & & & & & & & & & & & $\downarrow$ & & & & \\ $\vartriangleleft$ & B & B & B & B & X & B & B & B & X & B & B & X & B & X & \# & & & & \\ \end{tabular} \end{description}
\begin{description} \item \begin{tabular}{cccccccccccccccccccc} & & & & & & & & & & & & & & $q_{OS}$ & & & & &\\ & & & & & & & & & & & & & & $\downarrow$ & & & & & \\ $\vartriangleleft$ & B & B & B & B & X & B & B & B & X & B & B & X & B & X & $\vartriangleright$ & & & & \\ \end{tabular} \end{description}
Rules: $\delta(q_{IS}, \vartriangleright ) = (q_{e1}, B, R)$, $\delta(q_{e1}, \#) = (q_{e2}, X, R)$, $\delta(q_{e2}, \#) = (q_{OS}, \vartriangleright, L)$. The following notion will play an important role in the analysis.
\begin{definition} {\bf [End Configuration]} The configuration $\vartriangleleft~\{B, X, \overline{X}\}^*~(q_{e2}/\#)$ is called an {\em end} configuration. A valid row whose corresponding configuration is an end configuration is called an {\em end row}. \end{definition}
An end configuration is the final configuration in an iteration of the Outer Loop. Even if there have been some faults in previous steps of the computation, if the Turing Machine is in an end configuration and computes future steps without faults, the Turing Machine will begin a new iteration of the Outer Loop. Figure \ref{fig-TMrules} summarizes the Turing Machine transition rules. Note that not every state/input symbol combination will occur in a fault-free computation. However, we are defining the transition function on a wider set of inputs so that the Turing Machine can recover from errors due to faults that occurred earlier in the computation.
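To make the transition table concrete, the following is a minimal Python sketch of our own (it is not part of the construction) that encodes the rules of Figure \ref{fig-TMrules} and runs the machine forward from its initial configuration. We use ASCII stand-ins: {\tt x} for $\overline{X}$, and {\tt <}, {\tt >} for $\vartriangleleft$, $\vartriangleright$; the tape length and the choice of five Outer Loops are arbitrary demonstration parameters.
\begin{verbatim}
R, L = +1, -1   # head movements

def delta(q, c):
    """Layer-1 transition function, mirroring Figure fig-TMrules.
    'wx', 'wX', 'wB' are the insert states q_{w t}; 'w>' is q_{w |>}.
    Returns None for the '*' entries (no legal transition)."""
    if q == 'OS':
        if c == '<': return ('IS', '<', R)
        if c == 'x': return ('OS', 'X', L)      # recovery rule
        if c in 'XB>': return ('OS', c, L)      # "Left"
    elif q == 'left':
        if c == '<': return ('IS', '<', R)
        if c == 'x': return ('IS', 'X', R)
        if c in 'XB>': return ('left', c, L)    # "Left"
    elif q == 'IS':
        if c == '<': return ('IS', '<', R)
        if c == 'X': return ('wx', 'B', R)      # start inserting an X-bar
        if c in 'Bx': return ('IS', c, R)       # "Right"
        if c == '>': return ('e1', 'B', R)      # enter the Wind Down Phase
    elif q in ('wx', 'wX', 'wB'):
        t = q[1]                                # symbol waiting to be written
        if c == '<': return (q, '<', R)         # "Right" (recovery only)
        if c in 'XBx>': return ('w' + c, t, R)  # "Insert"
    elif q == 'w>' and c == '#': return ('left', '>', L)
    elif q == 'e1' and c == '#': return ('e2', 'X', R)
    elif q == 'e2' and c == '#': return ('OS', '>', L)
    return None

def step(tape, head, q):
    rule = delta(q, tape[head])
    assert rule is not None                     # never hit in a fault-free run
    q2, write, move = rule
    tape[head] = write
    return head + move, q2

tape = list('<' + '#' * 40)                     # row 1:  <  (q_e2 / #)  # ...
head, q = 1, 'e2'
loops = 0
while loops < 5:                                # run five full Outer Loops
    head, q = step(tape, head, q)
    loops += (q == 'e2')                        # q_e2 marks an end configuration
print(''.join(tape).rstrip('#'))                # <BBBBBXBBBBXBBBXBBXBX
\end{verbatim}
The printed row is a concatenation of intervals of sizes $7, 6, 5, 4, 3, 2$, as the overview promised: each Outer Loop grows every interval by one and appends a new interval of size $2$.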
\begin{figure}[ht] \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline & $\vartriangleleft$ & $X$ & $B$ & $\overline{X}$ & $\vartriangleright$ & $\#$ \\ \hline \hline $q_{OS}$ & $(q_{IS}, \vartriangleleft, R)$ & Left & Left & $(q_{OS}, X, L)$ & Left & * \\ \hline $q_{left}$ & $(q_{IS}, \vartriangleleft, R)$ & Left & Left & $(q_{IS}, X, R)$ & Left & * \\ \hline $q_{IS}$ & $(q_{IS}, \vartriangleleft, R)$ & $(q_{w \overline{X}}, B, R)$ & Right & Right & $(q_{e1}, B, R)$ & * \\ \hline $q_{w \overline{X}}$ & Right & Insert & Insert & Insert & Insert & * \\ \hline $q_{w X}$ & Right & Insert & Insert & Insert & Insert & * \\ \hline $q_{w B}$ & Right & Insert & Insert & Insert & Insert & * \\ \hline $q_{w \vartriangleright}$ & * & * & * & * & * & $(q_{left}, \vartriangleright, L)$ \\ \hline $q_{e1}$ & * & * & * & * & * & $(q_{e2}, X, R)$ \\ \hline $q_{e2}$ & * & * & * & * & * & $(q_{OS}, \vartriangleright, L)$ \\ \hline \end{tabular} \caption{A summary of the rules for the Layer 1 Turing Machine. The word {\bf Left} stands for $\delta(q,c) = (q,c,L)$. The word {\bf Right} stands for $\delta(q,c) = (q,c,R)$. The word {\bf Insert} stands for $\delta(q_{wt},c) = (q_{wc},t,R)$. A {\bf *} indicates that the Turing Machine does not have a legal transition on that state/tape symbol combination.} \label{fig-TMrules} \end{figure}
{~}
\noindent{\bf Initialization Rules for Layer $1$} The rules that constrain the initial configuration of the Layer $1$ Turing Machine are shown in Figure \ref{fig-bottomRowRules}. A square with $\Box~\Box$ in the lower row and $t_1~t_2$ in Layer $1$ of the upper row is legal if and only if there is an edge from a vertex with $t_1$ to a vertex with $t_2$ in the graph shown in Figure \ref{fig-bottomRowRules}. If a tile does not appear in the graph, then there is no legal square in which that tile appears in Layer $1$ directly above a $\Box$ tile.
\begin{figure}[ht] \centering \includegraphics[width=3.0in]{L1_init.png} \caption{These rules constrain the Layer $1$ contents of the bottom row of the tiling.} \label{fig-bottomRowRules} \end{figure}
If a tiling does not have an illegal initialization square for Layer $1$, then the first row for Layer $1$ must correspond to the Turing Machine configuration shown below:
\vspace{.1in}
\begin{tabular}{cccccccccccccccccccc} & $q_{e2}$ & & & & & & & & & & & & & & & & & & \\ & $\downarrow$ & & & & & & & & & & & & & & & & & & \\ $\vartriangleleft$ & $\#$ & & & & & & & & & & & & & & & & & & \\ \end{tabular}
\vspace{.1in}
\subsection{Layer $1$: Additional Constraints for Fault Tolerance} \label{sec-validConfigs}
The TM definition above still allows for a single fault to completely halt the computation. For example, suppose that a row does not have a head tile at all. There would be a local fault where the head tile disappears from one row to the next, but all rows thereafter would simply replicate the tape contents of the row before and would not contain any illegal computation squares. To address this, we expand the tile types we have so far, so that the tiles $\overline{X}$, $X$ and $B$ come in two different colors: red and blue. Note that there are now six different tiles corresponding to $\overline{X}$, $X$ and $B$: $\{ {\color{red} \overline{X}}, {\color{red} X}, {\color{red} B}, {\color{blue} \overline{X}}, {\color{blue} X}, {\color{blue} B}\}$. We can now partition the ordered pairs of Layer $1$ tile types into illegal and legal pairs.
If two tiles are an illegal pair for Layer $1$ and those two tiles are adjacent in the horizontal direction in a tiling, then that pair of tiles will contribute to the overall cost of the tiling. The set of legal pairs is best illustrated as a directed graph, as shown in Figure \ref{figure-ValidConfigGraph}.
\begin{figure}[ht] \centering \includegraphics[width=\linewidth]{ValidConfigGraphV2.png} \caption{Graphical representation of the set of valid rows. The state $q$ in the diagram can be any state that is not in $\{q_{e1}, q_{e2}, q_{w \vartriangleright}\}$.} \label{figure-ValidConfigGraph} \end{figure}
\begin{definition}{\bf [Legal Pairs]} The pair $(t_1, t_2)$ is a legal pair if one of the following two conditions holds:
\begin{enumerate}
\item There is an edge $(v_1, v_2)$ such that $t_1$ is in $v_1$'s set and $t_2$ is in $v_2$'s set.
\item There are vertices $v_1$, $w$, and $v_2$ such that $(v_1, w)$ and $(w, v_2)$ are edges, $t_1$ is in $v_1$'s set, $t_2$ is in $v_2$'s set, and $\epsilon$ is in $w$'s set.
\end{enumerate}
All other pairs of tiles are illegal. \end{definition}
The $\epsilon$ option is included only to reduce the number of edges in the graph. Suppose instead we added an edge $(v_1, v_2)$ whenever there is a vertex $w$ that contains $\epsilon$ such that $(v_1, w)$ and $(w, v_2)$ are also edges. Then the $\epsilon$'s could be removed from the graph and the resulting graph would define the same set of legal pairs.
\begin{definition} A row of tiles is said to be {\em valid} if no two adjacent tiles are an illegal pair. Otherwise the row is {\em invalid}. \end{definition}
The set of valid rows corresponds to TM configurations that are a superset of the configurations that the TM would reach in any fault-free\footnote{i.e. all rules are obeyed, zero cost} computation. We will prove below that each valid row corresponds to a unique Turing Machine configuration to which a rule can be applied. Thus, if a row $r$ is valid, then there is exactly one way to tile the row above it without any illegal computation squares so that the resulting row is also valid. Since every row must begin and end with a $\Box$ tile and contains no other $\Box$ tiles, a row of tiles is valid if and only if it can be generated by the following process: Follow any path in the graph from the source to the sink. As each vertex is reached, select any tile type from the current vertex and place a tile of that type next in the row. The symbol $\epsilon$ denotes the option of not selecting a tile for the current vertex.
{~}
\noindent{\bf Augmenting tiling rules with colors }
{~}
The definition of legal/illegal computation squares must be augmented to address the fact that now $\overline{X}$, $X$ and $B$ tiles come in two different colors. In a head square, a tile to the left of a head tile must be blue and a tile to the right of a head tile must be red. For example, consider the legal computation square below:
\vspace{.1in}
\begin{tabular}{|c|c|} \hline $(q_{left}/X)$ & $B$ \\ \hline $X$ & $(q_{left}/B)$ \\ \hline \end{tabular}
\vspace{.1in}
The $B$ in the upper-right corner must be red and the $X$ in the lower-left corner must be blue in order for the square to be a legal computation square. It will also be important to enforce that the only way for a tile to change color from one row to another will be in the presence of the head.
Thus any square of the form
\vspace{.1in}
\begin{tabular}{|c|c|} \hline $B$ & * \\ \hline $B$ & * \\ \hline \end{tabular}~~~~~~~ \begin{tabular}{|c|c|} \hline $X$ & * \\ \hline $X$ & * \\ \hline \end{tabular}~~~~~~~ \begin{tabular}{|c|c|} \hline $\overline{X}$ & * \\ \hline $\overline{X}$ & * \\ \hline \end{tabular}
\vspace{.1in}
where the two vertically aligned $\overline{X}$'s, $X$'s or $B$'s have different colors will be an illegal computation square. To summarize, a square pattern of four Layer $1$ tile types is a legal computation square if the square is legal according to the rules translating a Turing Machine to legal and illegal computation squares outlined in Section \ref{sec-TM2Tile} and:
\begin{itemize}
\item if the square is a head square, then any $\overline{X}/X/B$ tile to the left of a head tile is blue and any $\overline{X}/X/B$ tile to the right of a head tile is red;
\item if a tile $t \in \{X, \overline{X}, B\}$ is directly above a tile $t' \in \{X, \overline{X}, B\}$, then $t$ and $t'$ have the same color.
\end{itemize}
Otherwise, the square is an illegal computation square.
{~}
\noindent{\bf Properties of Valid Rows }
{~}
The graphical representation of valid rows given in Figure \ref{figure-ValidConfigGraph} makes it clear that local constraints can be used to enforce that a row is valid. In particular, any row that is not valid must contain at least one illegal pair. However, the lemma below gives a more useful description of valid rows, which will allow us to establish that a valid row corresponds to a configuration of the Turing Machine to which one of the transition rules can be applied. It will be convenient to refer to the {\em tape contents} of a row of tiles. This is the row that would result from replacing every tile of the form $(q/c)$ with $c$.
\begin{lemma} \label{lem-validRow} \ifshow {\bf (lem:validRow)} \else \fi A row is valid if and only if the row has the following properties:
\begin{enumerate}
\item The tape contents of the row have the form: $$\vartriangleleft~\{X,B,\overline{X}\}^*~\{\vartriangleright, \epsilon\}~\#^*$$
\item Exactly one tile is a head tile.
\item Any $\overline{X}/X/B$ tiles to the left of the head tile are blue. Any $\overline{X}/X/B$ tiles to the right of the head tile are red.
\item If there is a $\vartriangleright$ or $(q/\vartriangleright)$ tile, then the state is not in $\{ q_{e1}, q_{e2}, q_{w \vartriangleright}\}$ and the head is pointing to one of the tiles from the $\vartriangleleft$ to the $\vartriangleright$ (inclusive).
\item If there is no $\vartriangleright$ or $(q/\vartriangleright)$ tile, then the head is pointing to the leftmost $\#$ tile and is in state $q_{e1}$, $q_{e2}$ or $q_{w \vartriangleright}$.
\end{enumerate}
\end{lemma}
\begin{proof} We first establish that any valid row must have the five properties outlined in the lemma, starting with property $1$. If a row is valid, then it corresponds to a path from the source to the sink in the graph in Figure \ref{figure-ValidConfigGraph}. Note that every path from the source to the sink must first go through vertex $1$ or $2$, so the tape contents of the first tile must be $\vartriangleleft$. There is no path back to vertices $1$ or $2$, so the first tile is the only $\vartriangleleft$ tile. If there is a $\vartriangleright$ or $(q/\vartriangleright)$ tile, then the path must pass through vertex $6$ or $8$. There is no path back to vertices $6$ or $8$, so $\vartriangleright$ or $(q/\vartriangleright)$ only appear once.
The only vertex that can come after vertices $6$ or $8$ is vertex $7$, so only $\#$ tiles can appear after a $\vartriangleright$ or $(q/\vartriangleright)$ tile. Once vertex $7$ is reached, only vertex $7$ can be reached before the sink. So after the first $\#$, there can only be $\#$ tiles until the final $\Box$ tile at the sink. The head tiles are all contained in vertices $1$, $4$, $8$, or $9$. To establish property $2$, note that every path from the source to the sink passes through one of the vertices $1$, $4$, $8$, or $9$ exactly once. If vertex $3$ is reached, it occurs before vertices $1$, $4$, $8$, or $9$ in the path. Therefore only blue $\overline{X}/X/B$ tiles can be to the left of the head tile. If vertex $5$ is reached, it occurs after vertices $1$, $4$, $8$, or $9$ in the path. Therefore only red $\overline{X}/X/B$ tiles can be to the right of the head tile. If there is no $\vartriangleright$, then the path must pass through vertex $9$. The state must be $q_{e1}$, $q_{e2}$, or $q_{w \vartriangleright}$, and the head points to the leftmost $\#$. If there is a $\vartriangleright$ symbol, then the path passes through vertices $1$, $4$, or $8$, in which case the state is not $q_{e1}$, $q_{e2}$, or $q_{w \vartriangleright}$ and the head points to one of the symbols from the $\vartriangleleft$ to the $\vartriangleright$. For the converse, the location of the head determines which vertex from $1$, $4$, $8$, or $9$ the path goes through. Any row in which the head points to the $\vartriangleleft$ symbol that also satisfies all five properties from the lemma can be generated by a path: $1 \rightarrow 5^* \rightarrow 6 \rightarrow 7^* $. Any row in which the head points to the $\vartriangleright$ symbol that also satisfies all five properties from the lemma can be generated by a path: $2 \rightarrow 3^* \rightarrow 8 \rightarrow 7^* $. Any row in which the head points to an $\overline{X}/X/B$ symbol that also satisfies all five properties from the lemma can be generated by a path: $2 \rightarrow 3^* \rightarrow 4 \rightarrow 5^* \rightarrow 6 \rightarrow 7^* $. Finally any row in which the head points to a $\#$ symbol that also satisfies all five properties from the lemma can be generated by a path: $2 \rightarrow 3^* \rightarrow 9 \rightarrow 7^* $. \end{proof}
\begin{lemma} \label{lem-nextRow} \ifshow {\bf (lem:nextRow)} \else \fi If $r$ is a valid row, then there is exactly one row $r'$ that can be placed above $r$ such that there are no illegal computation squares that span the two rows $r$ and $r'$. Row $r'$ is valid. Moreover, row $r$ corresponds to a Turing Machine configuration and $r'$ represents the configuration resulting from executing one step in configuration $r$. \end{lemma}
\begin{proof} Consider a valid row $r$. We will first argue that if there is a row $r'$ such that, when $r'$ is placed directly above $r$, there are no illegal computation squares, then $r'$ must be unique. Since $r$ is valid, by Lemma \ref{lem-validRow}, it has exactly one head tile. Since each state in the Turing Machine is reached from a well-defined direction, a head tile in row $r'$ must come from a specific location in $r$. Moreover, each head tile in $r$ moves to a well-defined location in $r'$. This defines a one-to-one correspondence between head tiles in row $r$ and head tiles in row $r'$. For example, if $(q'/c')$ is a head tile in $r'$ and the state $q'$ is reached from the left, then there must be a head tile $(q/c)$ in row $r$ one location to the left such that $\delta(q,c) = (q', *, R)$.
Since there is a matching between head tiles in row $r$ and head tiles in row $r'$ and there is exactly one head tile in row $r$, there is exactly one head tile in row $r'$. Since there are no rules in the Layer $1$ Turing Machine in which the head stays in the same location, there will be exactly one head square spanning rows $r$ and $r'$ with a head tile in one of the bottom two tiles and a head tile in one of the top two tiles. The square will have one of the following two forms, depending on whether the corresponding rule moves the head right or left:
\vspace{.1in}
\begin{tabular}{|c|c|} \hline b & $q'/y$ \\ \hline $q/c$ & y \\ \hline \end{tabular}~~~~~ \begin{tabular}{|c|c|} \hline $q'/y$ & b \\ \hline y & $q/c$ \\ \hline \end{tabular}
\vspace{.1in}
Note that the contents of the top two tiles are completely determined by the contents of the lower two tiles and the output of the transition function on input $(q, c)$. If either $y$ or $b$ is an $\overline{X}/X/B$ tile, then its color is also determined by the rules for legal/illegal computation squares and the fact that the head square is a legal computation square. All other locations in the row have a tape tile in both $r$ and $r'$. If two vertically aligned tape tiles do not have the same tape symbol and color, then they must be contained in an illegal computation square. If tile $b$ in the head square is a $B/X/\overline{X}$ tile and is to the left of the head then $b$ must be blue. If tile $b$ is a $B/X/\overline{X}$ tile and is to the right of the head then $b$ must be red. Therefore if $r$ and $r'$ are both valid and there are no illegal computation squares spanning rows $r$ and $r'$, then the contents of $r'$ are completely determined. Next we need to show that if $r$ is valid, then there is a valid $r'$ such that there are no illegal computation squares spanning rows $r$ and $r'$. Since $r$ is valid, there is exactly one head tile in $r$ and therefore $r$ corresponds uniquely to a configuration of the Turing Machine. If the head is pointing to a $\#$ tile in row $r$, then the state is $q_{e1}$, $q_{e2}$ or $q_{w \vartriangleright}$. If the head is pointing to a non-$\#$ tile in row $r$, then the state is not $q_{e1}$, $q_{e2}$ or $q_{w \vartriangleright}$. Therefore row $r$ corresponds to a configuration of the Turing Machine and there is a transition rule from Figure \ref{fig-TMrules} that applies to this configuration. Let $r'$ be the row resulting from applying one step of the Turing Machine to the configuration represented by row $r$. Color all the $B/X/\overline{X}$ tiles to the left of the head tile blue and all the $B/X/\overline{X}$ tiles to the right of the head tile red. We will first establish that there are no illegal squares spanning rows $r$ and $r'$. The head square must be legal because it represents one correctly executed step of the Turing Machine. All other tiles outside of the head square are tape tiles and are the same in $r$ and $r'$ because they did not change in the computation step. Moreover since $r$ is valid, all $B/X/\overline{X}$ tiles to the left of the head square are blue and all $B/X/\overline{X}$ tiles to the right of the head square are red. Therefore any $B/X/\overline{X}$ tiles outside of the head square have the same color in $r$ and $r'$. The final step is to argue that the resulting row $r'$ is valid. Property $3$ is satisfied by construction. Row $r'$ also satisfies Property $2$ because it corresponds to a valid Turing Machine configuration and therefore only has one head tile.
It remains to establish that $r'$ satisfies properties $1$, $4$, and $5$. According to the Turing Machine rules (shown in Figure \ref{fig-TMrules}), if the head is pointing to a $\vartriangleleft$ symbol, it will write a $\vartriangleleft$ symbol and move right into state $q_{IS}$ or $q_{wt}$, where $t \neq \vartriangleright$. The tape contents remain the same and the head is still in between the $\vartriangleleft$ and $\vartriangleright$ symbols. If the head is pointing to a $B/X/\overline{X}$ symbol, it writes a $B/X/\overline{X}$ symbol and moves left or right. The new state will not be $q_{e1}$, $q_{e2}$, or $q_{w \vartriangleright}$. The tape contents still have the form $\vartriangleleft~\{B, X, \overline{X} \}^*~\vartriangleright~\#^*$ and the head is still in between the $\vartriangleleft$ and $\vartriangleright$ symbols. If the head is pointing to the $\vartriangleright$ or the leftmost $\#$, the Turing Machine will either write $\vartriangleright$ and move left into a state that is not in $\{q_{e1}, q_{e2}, q_{w \vartriangleright}\}$, or the Turing Machine will write a $B/X/\overline{X}$ symbol and move right into state $q_{e1}$, $q_{e2}$, or $q_{w \vartriangleright}$. In the former case, the tape contents will be of the form $\vartriangleleft~\{B, X, \overline{X} \}^*~\vartriangleright~\#^*$ and the head is in between the $\vartriangleleft$ and $\vartriangleright$ symbols. In the latter case, the tape contents will be of the form $\vartriangleleft~\{B, X, \overline{X} \}^*~\#^*$ and the head will point to the leftmost $\#$ symbol. \end{proof}
\section{Analysis of Layer $1$: Proving Fault Tolerance} \label{sec-L1analysis}
The goal of the analysis of Layer $1$ is to show that as long as the number of faults is not too large ($O(N^{1/4})$), then the end result will approximate the result of a fault-free computation reasonably well. Section \ref{sec-rowcost} associates each illegal pair or square with a particular row and bounds how much a tiling can change from one row to the next as a function of the number of illegal configurations associated with that row. Then in order to show that the computation encoded in the tiling makes progress, even in the presence of faults, we define the notion of a {\em complete segment}, which corresponds to a sequence of rows that represent a complete and fault-free iteration of the Outer Loop of the Layer $1$ Turing Machine. In a complete segment, each interval increases in size by $1$ and a new interval of size $2$ is added to the right end of the non-blank tape symbols. The main goal of Subsection \ref{sec-segLB} is to prove Lemma \ref{lem-segLB} which says that the number of complete segments in Layer $1$ is at least $\mu(N) - O(F)$, where $F$ is the number of faults and $\mu(N)$ is the number of intervals in the last row of a fault-free tiling. In general, the number of steps in an iteration of the Outer Loop will depend on the number of intervals as well as the total length of all the intervals. Thus, it is possible for faults to create spurious intervals which can cause an iteration of the Outer Loop to take more steps. We define the {\em weight} of a row to be the number of intervals and the {\em length} of a row to be the number of non-blank tiles, corresponding to non-blank symbols on the Turing Machine tape. In order to lower bound the number of complete segments, we need to bound the effect of faults on the weight and length of a row and prove that they do not slow down the computation by too much.
These bounds are given in Section \ref{sec-intervals}. The analysis of the number of complete segments is made more precise in Section \ref{sec-segLB}, where we define the function $X$ which takes as input a sequence of positive integers $(s_1, \ldots, s_m)$ and outputs the exact number of steps in one iteration of the Outer Loop if the sequence of interval sizes (from left to right) at the beginning of the iteration is $(s_1, \ldots, s_m)$. Section \ref{sec-segLB} gives upper bounds on how much the function $X$ can change in a sequence of correct steps as well as how much $X$ can change as a result of faults. While the function $X$ can be used to bound the number of rows in a complete segment, it is also important to bound the number of rows outside of complete segments. For example, a fault could cause the computation to jump into the middle of an iteration of the Outer Loop by moving the head to a completely arbitrary location. Alternatively, a sequence of correct steps could end in a fault before the end of the iteration of the Outer Loop is reached. These bounds are put together to prove Lemma \ref{lem-segLB} which says that the number of complete segments in Layer $1$ is at least $\mu(N) - O(F)$. While each complete segment (i.e. iteration of the Outer Loop) results in the creation of a new interval, it is also necessary to argue that the collection of sizes of the intervals roughly corresponds to that in a correct fault-free tiling. To that end, we introduce a means of identifying and tracking the intervals as they grow in size and move to the right. Section \ref{sec-clean} also defines the notion of {\em clean} and {\em corrupt} intervals. Intuitively, clean intervals are those that have not been affected by a fault lower down in the tiling. The clean intervals are given a unique tag which they keep for the duration of the computation. The analysis only provides a guarantee for the collection of interval sizes for the clean intervals. The lower bound on the number of complete segments proven in Section \ref{sec-segLB} is used to prove a lower bound on the number of clean intervals in Section \ref{sec-clean}. Suppose that the sizes of the clean intervals, from left to right, in the last row of Layer $1$ are $\vec{s} = (s_1, s_2, \ldots, s_m)$. Lemma \ref{lem-cleanLB} states that $m$, the number of clean intervals in any tiling of Layer $1$, is at least $\mu(N) - O(F)$. Section \ref{sec-potential} defines a potential function which captures how much the sequence $\vec{s}$ differs from the idealized sequence $(m+1, m, \ldots, 3, 2)$. Lemma \ref{lem-potential} shows that the value of the potential function is at most $O(F)$. The final lemma required for analyzing the rest of the constructions is Lemma \ref{lem-analysisL1}, which combines Lemmas \ref{lem-cleanLB} and \ref{lem-potential} and says that if the sequence of clean interval sizes at the end of Layer $1$ is $\vec{s}$, then the number of integers in the range $2, \ldots, \mu(N)+2$ that do not appear as an entry in $\vec{s}$ is bounded by $44 F_1 + 3$. Layer $3$ of the construction for Parity Weighted Tiling and Function Weighted Tiling uses a Turing Machine that sweeps across all of the non-blank symbols. In order to establish that this Turing Machine completes its work in $N$ steps, we need an upper bound on the length of a row as a function of $N$ and the number of faults in Layer $1$. This analysis is given in Section \ref{sec-lengthUB}. Note that this section is not used in the proof of Gapped Weighted Tiling.
We present these bounds just after Section \ref{sec-segLB} because they use the definition of the function $X$, which is developed there. Finally, in order to compare a tiling with faults to a fault-free tiling, we will eventually need to characterize the sequence of interval sizes in a fault-free tiling. This characterization is given in Section \ref{sec-FF}.
\subsection{The Cost of a Row} \label{sec-rowcost}\ifshow {\bf (sec:rowcost)} \else \fi
To begin the analysis, it will be convenient to associate each illegal square or pair with a particular row in the tiling. This will be important for attributing the discrepancy between a faulty tiling and a fault-free tiling to specific occurrences of illegal pairs or squares. The number of such illegal configurations attributed to a row is the {\em cost} of the row. Note that when analyzing the overall construction, we will refer to the number of illegal pairs and squares in Layer $i$ by $F_i$. In this subsection, which focuses on Layer $1$, we will drop the subscript and use $F$ to denote the number of illegal pairs and squares in Layer $1$.
\begin{definition}{\bf [Cost of a Row]} Fix a tiling of the $N \times N$ grid. The rows of the grid will be numbered from bottom to top $r_0, \ldots, r_{N-1}$. Let $h(r_t)$ denote the number of illegal pairs in row $r_t$. (Note that since $r_0$ is assumed to consist only of $\Box$ tiles, $h(r_0) = 0$.) If $t \ge 1$, then $v(r_t)$ is defined to be the number of illegal computation squares contained in rows $r_t$ and $r_{t+1}$. $v(r_0)$ is defined to be the number of illegal initialization squares (which are by definition contained in rows $r_0$ and $r_1$). We denote by $F$ the total cost of Layer $1$ of the tiling, namely the total number of illegal pairs and squares, or the sum of the costs of all rows: $F=\sum_{i=0}^{N-1} \left[ h(r_i)+v(r_i) \right] =\sum_i c(r_i)$, where $c(r_i) = h(r_i) + v(r_i)$ is the cost of row $r_i$. \end{definition}
The following claim, which is used throughout the analysis, shows that the number of illegal pairs or squares attributed to a row can be used to bound the number of locations where the row differs from the row above it.
\begin{claim} \label{cl-distUB} \ifshow {\bf (cl:distUB)} \else \fi {\bf [Upper Bound on the Distance Between Consecutive Rows]} If $r_{t-1}$ is an invalid row, then $d(r_{t-1}, r_{t}) \le 4h(r_{t-1}) + 2v(r_{t-1})$, where $d(r_{t-1}, r_{t})$ is the number of locations where $r_{t-1}$ and $r_t$ differ. \end{claim}
\begin{proof} Suppose that row $r_{t-1}$ is an invalid row (i.e. $h(r_{t-1}) > 0$). Let $Q$ be the number of legal head squares that span rows $r_{t-1}$ and $r_t$. The number of differences between $r_{t-1}$ and $r_t$ inside the legal head squares is at most $2Q$. Now consider a pair of vertically aligned tiles, $t'$ and $t$, outside of any legal head square such that $t \neq t'$. The two tiles must satisfy at least one of the conditions in Facts \ref{lem-twoTiles} and \ref{lem-twoTiles2} and are therefore contained in at least one illegal computation square. Since each illegal computation square contains two pairs of vertically aligned tiles, the number of differences between $r_{t-1}$ and $r_t$ outside of legal head squares is at most $2 v(r_{t-1})$. Therefore $d(r_{t-1}, r_{t}) \le 2Q + 2v(r_{t-1})$. We have that the number of head tiles in the row is at least $Q$ (since two legal head squares cannot overlap, and each head square contains a head tile in its bottom row). There is no path in the graph shown in Figure \ref{figure-ValidConfigGraph} from a vertex with a head tile back to another vertex with a head tile.
So $h(r_{t-1}) \ge Q-1$. Furthermore, since $r_{t-1}$ is not a valid row, $h(r_{t-1}) \ge 1$. These two bounds together imply that $Q \le 2h(r_{t-1})$. Therefore $d(r_{t-1}, r_{t}) \le 4h(r_{t-1}) + 2v(r_{t-1})$. \end{proof}
\subsection{Intervals and their Properties} \label{sec-intervals} \ifshow {\bf (sec:intervals)} \else \fi
We would like to argue, roughly, that in any tiling that does not contain a large number of faults, the intervals are organized ``more or less'' like in the correct computation. To be able to make this kind of statement precise, we will need to expand the definition of intervals in a row so that intervals are well defined at all points in a valid computation as well as for invalid rows. We start by defining a {\it weight function} on tape symbols, TM states, and tile types.
\begin{definition} {\bf [Weights of Tape Symbols and TM States]} If $c$ is a tape symbol, then the weight of $c$, denoted $w(c)$, equals $0$ if $c = B$ or $c = \#$, and is $1$ otherwise. If $q$ is a TM state, then $w(q) = 1$ if $q \in \{ q_{wX}, q_{w \vartriangleright}, q_{w \overline{X} }, q_{e1}, q_{e2}\}$ and $w(q) = 0$ otherwise. \end{definition}
Note that the weight-$1$ states are precisely the states that are about to write a weight-$1$ symbol, so they encode the presence of a weight-$1$ symbol in the state. These definitions allow us to assign weights to the tile types.
\begin{definition} {\bf [Tile Weights]} If $t$ is a tape tile corresponding to tape symbol $c$, then $w(t) = w(c)$. If $t$ is a head tile corresponding to $(q/c)$, then $w(t) = w(c) + w(q)$. If a tile has weight greater than $0$, it is referred to as a {\em heavy} tile. Otherwise, the tile is called a $0$-weight tile. \end{definition}
We can now formally describe the intervals in a way that is well defined for an arbitrary row; in particular, the definition is precise even for a row that represents a Turing Machine configuration in the middle of an execution of the Outer Loop.
\begin{definition} {\bf [Intervals and their Sizes]} An {\em interval} is a sequence of more than one tile in a row that begins and ends with a heavy tile and otherwise contains only $0$-weight tiles. A single tile whose weight is $2$ is an interval as well. If $I$ is an interval, then $s(I)$ denotes the size of $I$, which is the number of tiles in $I$. \end{definition}
For example, the sequence of two tiles $X~(q_{wX}/B)$ is an interval of size $2$ and the single tile $(q_{wX}/X)$ is an interval of size $1$. The tile $(q_{wX}/X)$ would also be the right end of the interval extending to the left and the left end of the interval extending to the right. Therefore, a single tile can potentially be contained in three different intervals. In any row, the sequence of tiles from the leftmost to the rightmost heavy tile defines a sequence of intervals. Consecutive intervals overlap by one tile. See Figure \ref{figure-Intervals} for an example.
\begin{figure}[ht] \centering \includegraphics[width=\linewidth]{Intervals.png} \caption{Each interval in the row of tiles is represented by an arrow. The interval begins at the tile under the tail of the edge and includes all the tiles to the right up to and including the tile under the head of the edge. The tile representing $(q_{wX}/X)$ is part of three intervals.} \label{figure-Intervals} \end{figure}
{~}
The {\em length} of row $r$ is the number of tiles that are not $\#$ or $\Box$ and is denoted by $l(r)$. The {\em weight} of a row of tiles $r$ is the sum of the weights of the tiles in that row and is denoted by $w(r)$.
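Since these definitions are purely mechanical, they can also be stated as code. The Python sketch below is our own illustration; it reuses the ASCII conventions of the sketch in Section \ref{sec-L1construction} ({\tt x} for $\overline{X}$), writes a head tile as the string {\tt 'q/c'}, and computes $w(r)$, $l(r)$, and the interval sizes of a row. Listing each heavy tile once per unit of weight makes a weight-$2$ tile both an end and a start, which makes the fact stated next immediate.
\begin{verbatim}
HEAVY_SYMBOLS = set('<Xx>')                    # weight-1 tape symbols (x = X-bar)
HEAVY_STATES = {'wX', 'wx', 'w>', 'e1', 'e2'}  # weight-1 states (q_wB has weight 0)

def tile_weight(tile):
    """Weight of a tile; a head tile is written 'q/c', e.g. 'wX/B'."""
    if '/' in tile:
        q, c = tile.split('/')
        return int(q in HEAVY_STATES) + int(c in HEAVY_SYMBOLS)
    return int(tile in HEAVY_SYMBOLS)

def length(row):
    """l(r): the number of tiles whose tape symbol is not '#'."""
    return sum(t.split('/')[-1] != '#' for t in row)

def weight(row):
    """w(r): the sum of the tile weights in the row."""
    return sum(tile_weight(t) for t in row)

def interval_sizes(row):
    """Interval sizes, left to right.  Each heavy tile is listed once per
    unit of weight, so a weight-2 tile is both an end and a start and the
    row has exactly w(r) - 1 intervals."""
    ends = []
    for i, tile in enumerate(row):
        ends += [i] * tile_weight(tile)
    return [b - a + 1 for a, b in zip(ends, ends[1:])]

row = list('<BBXB') + ['wX/x'] + list('BX>')   # a mid-insertion example row
print(weight(row), interval_sizes(row))        # 6 [4, 3, 1, 3, 2]
\end{verbatim}
In the example row, the weight-$2$ head tile {\tt 'wX/x'} contributes the size-$1$ interval, exactly as in the discussion of $(q_{wX}/X)$ above.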
\begin{fact}\label{fact:WeightsInts} The number of intervals in a row $r$ is $w(r)-1$. \end{fact}
\begin{proof} We prove by induction that the number of intervals contained in the row up to the $i^{th}$ heavy tile is the weight of the tiles up to and including that tile, minus $1$. This is true for the first heavy tile, and assuming it is true up to the $i^{th}$ heavy tile, the next heavy tile adds one interval if its weight is $1$, and adds two intervals if its weight is $2$. \end{proof}
The next two lemmas upper bound how much the length and the weight of the rows change. The first lemma upper bounds the change over a sequence of rows that do not contain illegal pairs or squares, corresponding to correct Turing Machine steps. The second lemma bounds the change from one row to the next in the presence of faults.
\begin{lemma} \label{lem-valid2} \ifshow {\bf (lem:valid2)} \else \fi {\bf [Change in Length and Weight over Correct Computation Steps]} Consider a sequence of rows from $r_{s}$ to $r_{t}$ such that the sequence of rows does not contain any illegal pairs or illegal squares. If rows $r_{s+1}$ through $r_{t-1}$ are not end rows, then $l(r_t) \le l(r_s) + w(r_s) + 1$ and $w(r_t) \le w(r_s) + 1$. If $r_s$ is an end row, then $l(r_t) \le l(r_s) + w(r_s)$. \end{lemma}
\begin{proof} The only Turing Machine rule that increases the weight of the configuration is the rule $\delta(q_{e1}, \#) = (q_{e2}, X, R)$, which leads to an end row. Therefore all the rows $r_s$ through $r_{t-1}$ have the same weight and $w(r_t) \le w(r_s)+1$. Each iteration of the inner loop (from an $IS[k]$ configuration to an $IS[j]$ configuration where $j > k$; these configurations are defined precisely in the proof of Lemma \ref{lem-valid}) increases the size of one interval by one. So each iteration of the inner loop increases the total length by $1$. There are at most $m = w(r_s)-1$ iterations of the inner loop. Then the last step $\delta(q_{e1}, \#) = (q_{e2}, X, R)$ also increases the length by $1$. If $r_s$ is in state $q_{wt}$, then there can be an additional increase of one for the symbol $t$ that is inserted. Thus the length will increase by at most $w(r_s)+1$ and $l(r_t) \le l(r_s) + w(r_s)+1$. If $r_s$ is an end configuration, the state is $q_{e2}$ and there is not the additional $+1$ increase to the length, so $l(r_t) \le l(r_s) + w(r_s)$. \end{proof}
\begin{claim} \label{cl-invalidBound} \ifshow {\bf (cl:invalidBound)} \else \fi {\bf [Change in Length and Weight in the Presence of Faults]} Consider two consecutive rows $r_p$ and $r_{p+1}$ in a tiling. Then
\begin{eqnarray*} l(r_{p+1}) & \le & l(r_p) + 2v(r_p) + h(r_p) + 1\\ w(r_{p+1}) & \le & w(r_p) + 2v(r_p) + h(r_p) + 1\\ \end{eqnarray*}
Moreover, if $r_{p+1}$ is a valid row, then $l(r_{p+1}) \le l(r_p) + \max \{1, v(r_p)\}$. If $r_{p+1}$ is a valid row that is not an end row, then $w(r_{p+1}) \le w(r_p) + 2v(r_p)$. \end{claim}
\begin{proof} We first account for any increase to the weight and length from $r_p$ to $r_{p+1}$ that occurs inside legal head squares. Consider each legal head square in rows $r_p$ and $r_{p+1}$. Suppose there are $Q$ such squares. Each of these squares contains a head tile in the lower level. Also, the total increase in length or weight from $r_p$ to $r_{p+1}$ inside these squares is at most $Q$ since a valid step of the computation will cause the length or weight to increase by at most $1$. There is no path in the graph shown in Figure \ref{figure-ValidConfigGraph} from a vertex with a head tile to another vertex with a head tile. So $h(r_{p}) \ge Q-1$.
Therefore the total increase in length or weight inside these legal head squares is at most $h(r_p) + 1$. If $r_{p+1}$ is valid, then there is only one head tile in $r_{p+1}$ and therefore at most one legal head square. The only TM step that increases the weight is $\delta(q_{e1},\#) = (q_{e2}, X, R)$. If $r_{p+1}$ is not an end row (i.e., not a valid row that contains $q_{e2}$), then the weight does not increase in this square. Now we will show that any increase to the length or weight from $r_p$ to $r_{p+1}$ that occurs outside the legal head squares will be at most $2v(r_p)$. Each illegal square will be given two tokens that can be used to compensate for any increase to the weight or length that occurs inside that square. Consider a tile $t'$ on top of tile $t$. If any of the conditions from Fact \ref{lem-twoTiles} apply, then both squares that include $t$ and $t'$ are illegal. The weight and length can increase by at most $2$. The increase is paid for by a token from each of the two illegal squares that contain $t'$ and $t$. If any of the cases from Fact \ref{lem-twoTiles2} apply to $t'$ and $t$, the weight and length can increase by at most $1$. According to Fact \ref{lem-twoTiles2}, $t'$ and $t$ are in a legal head square or an illegal square. In the latter case, the increase in weight is accounted for by a token from the illegal square that contains $t'$ and $t$. The only case not covered by Facts \ref{lem-twoTiles} and \ref{lem-twoTiles2} is when $t'$ and $t$ are both tape tiles and $t' = t$, in which case the length and weight do not increase in that location. This covers the general upper bounds that apply regardless of whether $r_{p+1}$ is valid, as well as the upper bound on the increase in weight when $r_{p+1}$ is valid. Finally, to bound the increase in length for the special case where $r_{p+1}$ is valid, we again use the fact that $r_{p+1}$ has exactly one head tile. Suppose that tile is in location $j$. The total increase in length from $r_p$ to $r_{p+1}$ at location $j$ is at most $1$. Any other increase in length is due to a tape tile $t$ in row $r_{p+1}$ on top of a $\#$ tile in row $r_p$. If there are no such occurrences, then $ l(r_{p+1}) \le l(r_p) + 1$. If there is at least one such vertically aligned pair, then each one can increase the length by at most $1$. Also, by Fact \ref{lem-twoTiles}, each such vertical pair participates in two illegal squares, one to the left and one to the right. Although these squares can overlap, the number of illegal squares is at least the number of such vertical pairs plus $1$. Thus the total increase in length due to tiles at locations other than $j$ is at most $v(r_p)-1$. In this case $ l(r_{p+1}) \le l(r_p) + v(r_p)$. \end{proof}
\subsection{Segments and Complete Segments} \label{sec-segments} \ifshow {\bf (sec:segments)} \else \fi
In a complete iteration of the Outer Loop of the Turing Machine, each interval increases in size by $1$ and a new interval of size $2$ is added. Thus, we want to show a lower bound for the number of fault-free iterations of the Outer Loop represented in the tiling (Lemma \ref{lem-segLB}). We partition the rows into segments. The end of a segment is reached whenever a row has non-zero cost or is a valid end row. A segment is said to be complete if the last row in the segment as well as the last row in the previous segment are zero-cost rows. Note that this implies that the last rows of these two consecutive segments are end rows.
Since there are no illegal computation squares contained in the rows of the segment, the segment corresponds to a sequence of fault-free steps of the Turing Machine from one end configuration to the next, which is a complete iteration of the Outer Loop of the Turing Machine.
\begin{definition}{\bf [Segments and Complete Segments]} We will partition the sequence of rows of a tiling into segments going from bottom to top. The first segment consists of the first two rows $r_0$ and $r_1$. Each new segment begins at the row above the previous segment and ends at the next row $r$ that is either an end row or has $v(r)+h(r)>0$. Let $r$ be the last row of a segment and $r'$ be the last row of the previous segment. A segment is complete if $v(r)+h(r) = v(r')+h(r') = 0$. \end{definition}
We can now combine the bounds from the previous sections to upper bound the weight and length of a row $r$ as a function of the number of segments and the cumulative cost below row $r$ in the tiling.
\begin{lemma} \label{lem-phasebounds} \ifshow {\bf (lem:phasebounds)} \else \fi {\bf [Upper Bounds on Length and Weight as a Function of Number of Segments and Number of Faults]} Let $r$ be a row in segment $t$. Then $$w(r) \le 1 + t + 2 C_t$$ $$l(r) \le 1 + 2 t C_t + \sum_{j=1}^t j,$$ where $C_t$ denotes the total cost of all rows strictly below the last row of segment $t$ (equivalently, since every row of a segment other than its last row has zero cost, with the possible exception of $r_0$, $C_t$ equals $v(r_0)$ plus the sum of the costs of the last rows of segments $1, \ldots, t-1$). \end{lemma}
\begin{proof} The proof is by induction on $t$.
{\em Base Case:} $t=1$. The first segment consists of $r_0$ and $r_1$. $C_1 = v(r_0)$, which is the number of illegal initialization squares. We will number the locations of tiles in $r_1$ from left to right so that tiles $1$ through $N-2$ are the tiles between the $\Box$ tiles. Square $j$ will be the tiles in locations $j-1$ and $j$ in rows $r_0$ and $r_1$. For any $S \subseteq \{2, \ldots, N-2\}$, let $v(S)$ be the number of $j \in S$ such that square $j$ is an illegal initialization square. Each tile in $r_1$ can contribute at most $1$ to the length and at most $2$ to the weight. Let $S_1$ be the locations in $r_1$ that have a tile that is not $\vartriangleleft$, $(q_{e2}/\#)$ or $\#$. For every $j \in S_1$, square $j$ is illegal, so $v(S_1) = |S_1|$. The total contribution to the weight or length of $r_1$ from tiles that are in locations in $S_1$ is at most $2 v(S_1)$. Let $S_2$ be the set of locations with a $\vartriangleleft$ tile. Square $j$ is illegal for every $j \in S_2$, except $j=1$, so $|S_2| \le v(S_2)+1$. Each $\vartriangleleft$ tile contributes $+1$ to the weight and length of $r_1$, so the total contribution to the weight or length from tiles in $S_2$ is at most $v(S_2)+1$. If a tile in location $j$ is $(q_{e2}/\#)$, then either square $j$ is illegal or location $j-1$ has a $\vartriangleleft$ tile. Let $S_3$ be the set of locations with a $(q_{e2}/\#)$ tile. The total contribution to the weight or length from tiles in $S_3$ is at most $v(S_3) + |S_2| \le v(S_3) + v(S_2) + 1$. A $\#$ tile does not contribute to the weight or length of $r_1$. The total weight or length of $r_1$ is at most $$2 v(S_1) + [v(S_2) + 1] + [v(S_3) + v(S_2) + 1] \le 2 + 2[v(S_1) + v(S_2) + v(S_3)] = 2 + 2 \cdot C_1.$$
\vspace{.1in}
{\em Induction step:} We will bound the length and weight of the row at the end of segment $t > 1$. Note that if a row $r$ is not the last row in the segment, then it must have zero cost. The sequence of rows from $r$ to the end of the segment do not contain any illegal pairs or squares and thus correspond to a sequence of correctly executed Turing Machine steps applied to a valid row.
Note that the length and weight of the row do not decrease with any step of the Turing Machine applied to a valid row. Therefore, it is enough to bound the length and weight of the last row in the segment. Let $r'$ be the row at the end of the previous segment. We consider three cases:
{\bf Case 1:} $v(r') + h(r') = 0$. Then $C_t = C_{t-1}$. Rows $r'$ through $r$ do not contain any illegal squares or pairs. Row $r'$ must be an end row, so by Lemma \ref{lem-valid2}, $$w(r) \le 1 + w(r') \le 1 + 1 + (t-1) + 2C_{t-1} \le 1 + t + 2C_t$$ $$l(r) \le l(r') + w(r') \le \left[ 1 + 2(t-1)C_{t-1} + \sum_{j=1}^{t-1} j \right] + \left[ 1 + (t-1) + 2C_{t-1} \right] = 1 + 2tC_{t} + \sum_{j=1}^t j .$$
{\bf Case 2:} $v(r') + h(r') > 0$ and $r$ is the row right after $r'$. Let $\Delta$ denote $C_t - C_{t-1} = v(r') + h(r')$. By Claim \ref{cl-invalidBound}, $$w(r) \le w(r') + 2v(r') + h(r') + 1 \le 1 + (t-1) + 2 C_{t-1} + 2\Delta + 1 \le 1 + t + 2 C_t$$ \begin{eqnarray*} l(r) & \le & l(r') + 2v(r') + h(r') + 1\\ &\le & 1 + \sum_{j=1}^{t-1} j + 2(t-1)C_{t-1} + 2 \Delta + 1\\ & \le & 1 + \sum_{j=1}^{t-1} j + 2tC_{t-1} + 2t \Delta + t\\ & \le & 1 + \sum_{j=1}^{t} j + 2t C_t\\ \end{eqnarray*}
{\bf Case 3:} $v(r') + h(r') > 0$ and $r$ is not the row right after $r'$. Then the intervening rows between $r'$ and $r$ are valid. Let $\bar{r}$ be the row right after $r'$. The row $\bar{r}$ is valid and not an end row. Otherwise, the segment would have ended at $\bar{r}$ instead of $r$. By Claim \ref{cl-invalidBound}, $w(\bar{r}) \le w(r') + 2v(r')$. By Lemma \ref{lem-valid2}, $w(r) \le 1 + w(r')$. Therefore, $$w(r) \le w(\bar{r}) + 1 \le w(r') + 2v(r') + 1 \le 1 + (t-1) + 2C_{t-1} +2v(r') + 1 \le 1 + t + 2C_t$$ By Claim \ref{cl-invalidBound}, $l(\bar{r}) \le l(r') + \max \{ v(r'), 1\}$. Let $\Delta = C_t - C_{t-1} = v(r') + h(r')$. Since $\Delta > 0$, $l(\bar{r}) \le l(r') + \Delta$. The rows $\bar{r}$ through $r$ do not contain any illegal pairs and are valid. Therefore, by Lemma \ref{lem-valid2}, \begin{eqnarray*} l(r) & \le & l(\bar{r}) + w(\bar{r}) + 1\\ &\le & [l(r') + \Delta] + [ w(r') + 2v(r')] + 1\\ & \le & \left[ 1 + \sum_{j=1}^{t-1} j + 2(t-1)C_{t-1} + \Delta \right] + \left[ 1 + (t-1) + 2C_{t-1} + 2 \Delta \right] + 1\\ & \le & 1 + \sum_{j=1}^{t} j + 2t C_{t-1} + 2t \Delta = 1 + \sum_{j=1}^{t} j + 2t C_{t}\\ \end{eqnarray*} The last inequality follows from the facts that $\Delta \ge 1$ and $t \ge 2$. \end{proof}
\subsection{Lower Bound on the Number of Segments} \label{sec-segLB} \ifshow {\bf (sec:segLB)} \else \fi
In a fault-free tiling, the number of intervals is equal to the number of complete segments because each iteration of the Outer Loop corresponds to one complete segment and adds one interval. The goal of this section is to show that faults do not change the number of complete segments in the final row of Layer $1$ by too much.
\begin{definition}{\bf[The function $\mu$]} Define $\mu(N)$ to be the number of intervals in the last row of Layer $1$ in a fault-free tiling of an $N \times N$ grid. \end{definition}
The goal in this section is to prove Lemma \ref{lem-segLB}, which says that the number of complete segments in any tiling in Layer $1$ is at least $\mu(N) - 14 F_1$, where $F_1$ is the number of faults in the Layer $1$ tiling. Note that when $F$ is $o(\mu(N))$, which is the case we will be interested in for all the problems we consider, we are characterizing the number of complete segments tightly, up to low-order terms.
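Although none of the proofs below rely on it, $\mu(N)$ is easy to compute by brute-force simulation, which also illustrates the growth rate at play. The fragment below is our own sketch and is not self-contained: it reuses the {\tt step} routine from the sketch in Section \ref{sec-L1construction} and the {\tt interval\_sizes} helper above. Since an Outer Loop over $m$ intervals takes $\Theta(m^3)$ rows, reaching $m$ intervals takes $\Theta(m^4)$ rows in total, so $\mu(N) = \Theta(N^{1/4})$; this matches the $O(N^{1/4})$ fault budget mentioned at the start of this section.
\begin{verbatim}
def mu(N):
    """Number of intervals in the top row of the fault-free Layer-1
    tiling of an N x N grid: row 1 is the initial configuration and
    each later row is one TM step, so we run N - 2 steps."""
    tape = list('<' + '#' * N)            # ample tape for N rows
    head, q = 1, 'e2'
    for _ in range(N - 2):                # rows 2 .. N-1
        head, q = step(tape, head, q)
    row = list(tape)
    row[head] = q + '/' + row[head]       # re-attach the head marker
    return len(interval_sizes(row))

print([mu(n) for n in (100, 1000, 10000)])   # [5, 9, 18], roughly N**(1/4)
\end{verbatim}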
In order to achieve this tight characterization, we need to specify the exact number of rows in a complete segment, which will depend on the sequence of interval sizes. To this end, we define the following function which characterizes the number of steps in a complete segment that starts with a row $r$. Let $s_1, \ldots, s_m$ be the sizes of all the intervals in row $r$ from left to right. Define the function $X$ as:
\begin{equation} \label{eq-Xdef} X(r) = \sum_{j=1}^m \left[ 2j ( s_j -1) + 1 \right]. \end{equation}
If the first row $r$ of a segment is a zero-cost row, then the segment consists of one or more valid computation steps ending with either a fault or an end row. Lemma \ref{lem-valid} provides an upper bound on the number of rows in the segment in this case. The upper bound is the function $X$ applied to the start row plus some additional terms in the cases where the segment begins or ends with a row with non-zero cost. The proof is a somewhat tedious case analysis because if the previous segment ended with a fault, the computation represented in the segment could begin at any point in the execution of the Outer Loop, and it is necessary to consider each possible starting point separately. For example, the fault could cause the head to appear in any location and in any state that is consistent with the horizontal constraints. The next three lemmas (Lemmas \ref{lem-bigSeg2} through \ref{lem-firstBig}) then upper bound the amount by which $X$ can increase in a sequence of rows. Lemma \ref{lem-bigSeg2} considers the situation described above, where the rows are contained within a segment and the first row of the segment is a zero-cost row. Lemma \ref{lem-bigSeg} considers a maximal sequence of rows with non-zero cost. Lemma \ref{lem-firstBig} considers the initial sequence of non-zero cost rows. Finally, Lemma \ref{lem-segLB} puts all these bounds together to lower bound the number of complete segments.
\begin{lemma} \label{lem-valid} \ifshow {\bf (lem:valid)} \else \fi {\bf [Upper Bound on the Length of a Segment]} Consider a sequence of rows from $r_{s}$ to $r_{t}$ such that the sequence of rows does not contain any illegal pairs or illegal squares. Suppose that the sizes of the intervals from left to right (both clean and corrupt) in row $r_s$ are $s_1, \ldots, s_m$. If rows $r_{s+1}$ through $r_{t-1}$ are not end rows, then $$t - s \le l(r_s) + 2w(r_s) - 2 + X(r_s) $$ If $r_s$ is an end row, then $t-s \le X(r_s)$. If $r_s$ and $r_t$ are both end rows, then $t-s = X(r_s)$. \end{lemma}
\begin{proof} We will show that if the Turing Machine starts in a valid configuration, then the Turing Machine will reach an end row within $S = l(r_s) + 2w(r_s) - 2 + \sum_{j=1}^m \left[ 2j(s_j -1) + 1 \right] $ correctly executed steps. Consider a configuration of the Turing Machine that corresponds to a valid row. The configuration is an $IS[k]$ configuration if the state of the Turing Machine is $q_{IS}$ and the head is pointing to one of the internal weight-$0$ symbols in interval $k$ or the weight-$1$ symbol at the right end of interval $k$. We will establish that the Turing Machine will reach an end configuration or an $IS[j]$ configuration for $j > k$ within a certain number of steps. The head moves right in state $q_{IS}$ until it reaches an $X$ or $\vartriangleright$ symbol. If it reaches a $\vartriangleright$ symbol first, then it transitions to $q_{e1}$ and then $q_{e2}$ pointing to a $\#$ symbol. This is an end configuration.
The number of steps has been $$s_k-2 + \sum_{i=k+1}^m (s_i-1) + 2 = 1 + \sum_{j=k}^m (s_j-1) \le S.$$ If the head reaches an $X$ symbol first in state $q_{IS}$, it writes a $B$, and then inserts an $\overline{X}$ by moving all the non-$\#$ symbols over by one. Interval $k+1$ now has an $\overline{X}$ symbol on its left end. The number of steps has been at most $\sum_{i=k}^m (s_i-1)$. When it reaches the $q_{w \vartriangleright}$ state, it replaces the $\#$ with a $\vartriangleright$, transitions to $q_{left}$ and moves left. The head moves left in state $q_{left}$ until it reaches an $\overline{X}$ symbol. We are guaranteed that the left end of interval $k+1$ has an $\overline{X}$ symbol, but there could be an $\overline{X}$ that comes earlier. Let $j$ be the largest index such that the left end of interval $j$ is $\overline{X}$. When the head reaches the left end of interval $j$, it replaces the $\overline{X}$ with $X$, transitions to $q_{IS}$ and moves right. This has been an additional $1 + \sum_{i=j}^m (s_i-1)$ steps. The cycle from the $IS[k]$ configuration to the $IS[j]$ configuration is an iteration of the Inner Loop. The total number of steps has been $$1 + \sum_{i=k}^{m} (s_i - 1) + \sum_{i=j}^{m} (s_i - 1) = 1 + \sum_{i=k}^{j-1} (s_i - 1) + 2 \sum_{i=j}^{m} (s_i - 1).$$ The number of steps to get to an end configuration from an $IS[k]$ configuration is maximized if we start in an $IS[1]$ configuration and for each $k$, after the Turing Machine leaves the $IS[k]$ configuration, the next time it reaches state $q_{IS}$ is in interval $j = k+1$ (an $IS[k+1]$ configuration). After reaching an $IS[m]$ configuration, the Turing Machine is guaranteed to reach an end configuration in $1 + (s_m - 1)$ steps. The maximum number of steps required to reach an end configuration when starting in any $IS[k]$ state is: \begin{eqnarray} & & \sum_{k=1}^{m-1} \left[ 1 + (s_k - 1) + \sum_{i=k+1}^{m} 2 (s_i - 1) \right] + 1 + (s_m - 1)\\ & = & m + \sum_{i = 1}^m (s_i - 1) + 2 \sum_{j=2}^m \sum_{i = j}^m (s_i-1) \label{eq-rowBound} \end{eqnarray} Next we will argue that as long as the initial configuration of the Turing Machine is valid, it will reach the state $q_{IS}$ or an end configuration within a certain number of steps. If the initial configuration is in state $q_{OS}$ or $q_{left}$, the head moves left until a $\vartriangleleft$ or $\overline{X}$ symbol is reached. Then the state transitions to $q_{IS}$ and moves right. This will be at most an additional $1 + \sum_{i=1}^m (s_i -1)$ steps. Adding this value to the bound from Expression (\ref{eq-rowBound}): \begin{equation} \label{eq-rowBound2} m + 1 + 2 \sum_{j=1}^m \sum_{i = j}^m (s_i-1) = 1 + \sum_{i=1}^m [2i (s_i-1) + 1] \end{equation} If the initial configuration is in state $q_{e1}$ then the current symbol is $\#$ and the configuration is one step away from an end configuration. If the initial configuration is in state $q_{e2}$ or $q_{w \vartriangleright}$, then the Turing Machine writes $\vartriangleright$ and moves left into state $q_{OS}$ or $q_{left}$. The head then moves left until a $\vartriangleleft$ or $\overline{X}$ symbol is reached. Then the state transitions to $q_{IS}$ and moves right. This will be at most an additional $1 + \sum_{i=1}^m (s_i -1)$ steps and an $IS[k]$ configuration has been reached. The same bound from (\ref{eq-rowBound2}) applies. Finally, suppose that the Turing Machine is in a state $q_{wt}$. If the current symbol is $\vartriangleleft$, it writes a $\vartriangleleft$ symbol, and moves right into state $q_{IS}$.
The current configuration is an $IS[1]$ configuration. In this case, the number of steps is the bound from Expression (\ref{eq-rowBound}) plus $1$. If the Turing Machine starts in a state $q_{wt}$ and the current symbol is not $\vartriangleleft$, then let $j$ be the index of the interval that the head starts in. The Turing Machine will insert a $t$ symbol and shift the contents of the tape to the right. When the Turing Machine reaches the $q_{w \vartriangleright}$ state, it writes a $\vartriangleright$, and moves left into state $q_{left}$. The number of steps has been at most $\sum_{i=1}^m (s_i -1)$. Interval $j$ has now increased in size by $1$. The bound from (\ref{eq-rowBound2}) applies, except that the size of interval $j$ is now $s_j+1$ instead of $s_j$. The total number of steps to reach an end row is at most $$\sum_{i=1}^m (s_i -1) + 1 + 2j + \sum_{i=1}^m [2i (s_i-1) + 1].$$ The $2j$ term comes from the fact that the size of interval $j$ is $s_j+1$ instead of $s_j$. Plugging in the expressions $\sum_{i=1}^m (s_i -1) = l(r_s)-1$ and $j \le m = w(r_s) - 1$: \begin{eqnarray*} & & \sum_{i=1}^m (s_i -1) + 1 + 2j + \sum_{i=1}^m [2i (s_i-1) + 1]\\ & \le & l(r_s) + 2(w(r_s) - 1) + \sum_{i=1}^m [2i (s_i-1) + 1]\\ & \le & l(r_s) + 2(w(r_s) - 1) + m + 2m \sum_{i=1}^m (s_i-1)\\ & \le & l(r_s) + 2(w(r_s) - 1) + w(r_s) - 1 + 2(w(r_s) - 1) (l(r_s)-1)\\ & \le & 2w(r_s)l(r_s) \end{eqnarray*} The simplification in the last line of the equations above uses the fact that $l(r_s) \ge w(r_s)$. The second line gives the first upper bound for $t-s$ stated in the lemma and the last line gives the second upper bound for $t-s$. If the initial configuration is an end row, then the Turing Machine writes $\vartriangleright$ and moves left into state $q_{OS}$ until it reaches the $\vartriangleleft$ symbol. The head then moves right into state $q_{IS}$, which is an $IS[1]$ configuration. The number of steps so far is exactly $1 + \sum_{i=1}^m (s_i -1)$. The $q_{OS}$ state changes all $\overline{X}$ symbols to $X$ symbols. The Turing Machine maintains the invariant that when in state $q_{left}$ or $q_{wt}$, the $\overline{X}$ symbol will mark the right end point of the last interval that has been increased. Therefore, if an iteration of the Inner Loop starts in an $IS[k]$ configuration, the next iteration will start in an $IS[k+1]$ configuration. Therefore the number of remaining steps to reach an end configuration is exactly the bound from (\ref{eq-rowBound}). The total number of steps is exactly the bound given in (\ref{eq-rowBound2}). If the sequence ends before an end row is reached, then the number of rows is less than the bound given in (\ref{eq-rowBound2}). \end{proof} The following three lemmas will be used to bound the growth of the function $X$. Lemma \ref{lem-bigSeg2} bounds how much $X$ can increase as the result of fault-free steps of the Turing Machine applied to valid configurations. Lemmas \ref{lem-bigSeg} and \ref{lem-firstBig} bound how much illegal pairs or squares can cause $X$ to increase. \begin{lemma} \label{lem-bigSeg2} \ifshow {\bf (lem:bigSeg2)} \else \fi {\bf [Upper Bound on Change in $X$ in Fault-Free Steps]} Consider a sequence of rows $r_a$ through $r_b$ that do not contain any illegal pairs or squares and such that rows $r_{a+1}$ through $r_{b-1}$ are not end rows. Then $$X(r_b) - X(r_a) \le 1 + (w(r_b)-1) w(r_b) + 2w(r_b).$$ If $r_a$ is an end row, then $X(r_b) - X(r_a) \le 1 + (w(r_b)-1) w(r_b)$.
\end{lemma} \begin{proof} Since rows $r_a$ through $r_b$ do not contain any illegal pairs or squares, they correspond to $b-a$ steps applied to the valid configuration represented in row $r_a$. Moreover, the computation does not reach the end of an iteration of the Outer Loop until possibly the last step. Let $m = w(r_a)-1$ be the number of intervals in row $r_a$. If the configuration in row $r_a$ is in a state $q_{wt}$, then some interval $j$ may increase by one as the head sweeps right, moving all the symbols over by $1$. After this point, each interval can increase by at most $1$ before the end of the Outer Loop is reached. Thus, the summation in (\ref{eq-Xdef}) can increase by at most $2j + \sum_{i=1}^m 2i$. If $r_a$ is an end row, then the summation increases by at most $\sum_{i=1}^m 2i$. If $r_b$ is not an end row, then $w(r_b) = w(r_a)$ and $$\sum_{i=1}^m 2i = (w(r_b)-1) w(r_b).$$ $$2j + \sum_{i=1}^m 2i \le 2m + (w(r_b)-1) w(r_b) \le (w(r_b)-1) w(r_b) + 2w(r_b).$$ The first expression above bounds $X(r_b) - X(r_a)$ in the case that $r_a$ is an end row and $r_b$ is not an end row. The second expression bounds $X(r_b) - X(r_a)$ in the case that neither $r_a$ nor $r_b$ is an end row. If $r_b$ is an end row, then $w(r_b) = w(r_a)+1$ and a new interval of size $2$ is added to the right end, increasing $X$ by an additional $2m+3$. $$2m+3 + \sum_{i=1}^m 2i = 1 + \sum_{i=1}^{m+1} 2i = 1 + (w(r_b)-1) w(r_b).$$ $$2m+3 + 2j + \sum_{i=1}^m 2i \le 1 + (w(r_b)-1) w(r_b) + 2w(r_b).$$ The first expression above bounds $X(r_b) - X(r_a)$ in the case that $r_a$ and $r_b$ are both end rows. The second expression bounds $X(r_b) - X(r_a)$ in the case that $r_a$ is not an end row, but $r_b$ is an end row. \end{proof} \begin{lemma} \label{lem-bigSeg} \ifshow {\bf (lem:bigSeg)} \else \fi {\bf [Upper Bound on Change in $X$ in a Sequence of Rows with Non-Zero Cost]} Consider a no-cost row $r_{a-1}$ such that row $r_{a}$ has a positive cost. Let $r_b$ be the next highest no-cost row after $r_a$. Let $f$ be the number of illegal pairs or squares contained in the rows $r_a$ through $r_b$. Let $F$ be the total number of illegal pairs or squares in Layer $1$. If row $r_b$ is in the $t^{th}$ segment, then $$X(r_b) - X(r_a) \le 8f[ t^2 + 4tF + t + 2F + 8f].$$ \end{lemma} \begin{proof} By Claim \ref{cl-distUB}, $d(r_b, r_a) \le 4 \sum_{j=a}^{b-1} [v(r_j) + h(r_j)] = 4f$. We will let $d$ denote $d(r_b, r_a)$. Note that $r_a$ must be a valid row because the row before it, $r_{a-1}$, is a no-cost row. Let $s_1, \ldots, s_m$ be the sizes of the intervals in $r_a$. We start by re-expressing the summation in the definition of $X(r)$ from (\ref{eq-Xdef}): $$\sum_{j=1}^m [2j(s_j - 1) + 1] = \sum_{j=1}^m \left[ 1 + \sum_{k=j}^m 2(s_k -1)\right].$$ We will account for how the summation in (\ref{eq-Xdef}) changes from $r_a$ to $r_b$. In the worst case, all of the heavy tiles in row $r_a$ are unchanged in $r_b$ and all the locations where the two rows differ result in new heavy tiles in row $r_b$. Since $r_a$ and $r_b$ are both valid rows, they have exactly one $\vartriangleright$ tile, which is the right end of the right-most interval. The sum $\sum_{k=j}^m (s_k-1)$ is exactly the distance from the left end of interval $j$ to the $\vartriangleright$ tile. The $\vartriangleright$ tile is at most $d$ positions further to the right in $r_b$ than it is in $r_a$, so the value of the summation in (\ref{eq-Xdef}) increases from $r_a$ to $r_b$ by at most $2dm$ as a result of the existing heavy tiles in $r_a$.
There are at most $d$ new heavy tiles in $r_b$ that were not in $r_a$, each of which can create at most two new intervals (since the weight of every tile is at most $2$). Each new interval can increase the sum by at most $1 + 2d + \sum_{k=1}^m 2(s_k-1)$, which is at most $2 (l(r_a) + d)$. The increase due to the new intervals is therefore at most $4d(l(r_a)+d)$. Therefore $X(r_b) - X(r_a) \le 2dm + 4d(l(r_a)+ d) $. Since $r_{a}$ has non-zero cost, it is the last row in some segment $t'$, where $t' < t$. The number of intervals $m$ in row $r_a$ is $w(r_a) - 1$, which, according to Lemma \ref{lem-phasebounds}, is at most $2F+ t' < 2F+t$. Using the same lemma, we can bound $l(r_a)$ by $2t'F + 1 + t'(t'+1)/2$, which is at most $2tF + t^2/2$ since $t \ge 2$. Plugging the bounds in, and using the fact that $d \le 4f$, we get $$X(r_b) - X(r_a) \le 8f[ t^2 + 4tF + t + 2F + 8f].$$ \end{proof} \begin{lemma} \label{lem-firstBig} \ifshow {\bf (lem:firstBig)} \else \fi {\bf [Upper Bound on Change in $X$ in the Initial Sequence of Rows with Non-Zero Cost]} Let $t$ be the smallest index such that $r_t$ is a no-cost row. Let $f$ be the number of illegal pairs and squares contained in rows $r_0$ through $r_t$. Then $X(r_t) \le 18f^2 + 18f + 3$. \end{lemma} \begin{proof} By Lemma \ref{lem-phasebounds}, the weight and length of $r_1$ are at most $2 + 2c_1$, where $c_1$ is the number of illegal initialization squares. By Claim \ref{cl-invalidBound}, the weight and length can increase from $r_1$ to $r_t$ by at most $2c_2 + (t-1)$, where $c_2$ is the sum of the costs of rows $r_1$ through $r_{t-1}$. $c_2 \ge t-1$ because the first $t-1$ rows all have non-zero cost. Also $c_1 + c_2 = f$. Therefore the weight and length of $r_t$ are at most $$[2 + 2c_1] + [2c_2 + (t-1)] \le 2 + 2c_1 + 3c_2 \le 2 + 3f.$$ The summation in the definition of $X(r_t)$ is maximized if the contents of the row consist of $2 + 3f$ heavy tiles, in which case there are $3f + 1$ intervals, all of size $2$. Then $X(r_t)$ is $(3f+2)^2 - 1$, which is $9f^2 + 12f + 3$. The additional cost of $l(r_t) + 2 w(r_t)$ is incurred only if $r_1$ is not an end row, in which case $f > 0$. Therefore $$X(r_t) \le [9f^2 + 12f + 3] + f(l(r_t) + 2 w(r_t)) \le [9f^2 + 12f + 3] + 3f(2 + 3f) \le 18f^2 + 18f + 3.$$ \end{proof} \begin{lemma} \label{lem-segLB} \ifshow {\bf (lem:segLB)} \else \fi {\bf [Lower Bound on the Number of Complete Segments]} Let $F$ be the number of illegal squares or pairs in Layer $1$ of a tiling of the $N \times N$ grid. Then the number of complete segments in Layer $1$ is at least $\mu(N) - 14F$. \end{lemma} \begin{proof} The main work of the proof is to show that the number of segments (complete or not) in Layer $1$ is at least $\mu(N) - 12 F$. Each segment ends on a row that incurs a cost or ends on an {\em end row}, corresponding to the last row of an iteration of the Outer Loop. Each illegal square or pair can cause the end of at most one segment. Therefore, if there are at least $\mu(N) - 12 F$ segments, then there must be at least $\mu(N) - 12 F - 2 F = \mu(N) - 14 F$ segments that end with a zero cost row and such that the segment before also ends on a zero cost row. These are, by definition, complete segments. Consider a tiling of the $N \times N$ grid that has a total of $F$ illegal pairs and squares. Number the segments $S_1, S_2, \ldots$ from bottom up. We will denote the number of rows in segment $S_t$ by $|S_t|$.
Let $\bar{S}_1, \bar{S}_2, \ldots$ be the segments in the unique tiling in Layer $1$ that has no illegal squares or pairs, namely the fault-free tiling. There is one interval after the first segment (which just consists of the initial row). Then in a fault-free tiling every segment corresponds to one iteration of the Outer Loop in which exactly one interval is added in the last step. Therefore in a fault-free tiling the number of complete segments is equal to the number of intervals in the last row, which is $\mu(N)$. We will show that $$\sum_{t=1}^{\mu(N) - 12F} |S_t| \le \sum_{t=1}^{\mu(N)} | \bar{S}_t|.$$ Thus if $N$ is large enough to accommodate $\mu(N)$ segments in a fault-free tiling, then $N$ is large enough to accommodate $\mu(N) - 12F$ segments in a tiling with $F$ illegal pairs or squares. A segment will be called {\em big} if it has more than one row; otherwise it is called {\em small}. In a big segment, all the rows except possibly the last row of the segment have $v(r) + h(r) = 0$. If a row is valid ($h(r) = 0$) and there are no illegal squares contained in the row and the row above it ($v(r) = 0$), then according to Lemma \ref{lem-nextRow}, the row above $r$ is the unique row representing one step of the Turing Machine applied to the configuration corresponding to $r$, which is also a valid row. Therefore, if a segment has more than one row, all the rows in the segment are valid and there are no illegal pairs or squares contained in the rows of the segment. Finally, we will partition the illegal pairs and squares in Layer $1$ according to where they are located with respect to the big segments, assigning a quantity $f_t$ to each segment. If $S_t$ is small, then $f_t = 0$. Otherwise, if $S_t$ is big, let $t'$ be the largest $2 \le t' < t$ such that $S_{t'}$ is also big. The quantity $f_t$ is defined to be the number of illegal pairs and squares contained in the set of rows from the last row in $S_{t'}$ through the first row in $S_t$. If $S_t$ is the first big segment, then $f_t$ is the number of illegal pairs and squares contained in the rows from $r_0$ through the first row in $S_t$, including the illegal initialization squares. Note that $\sum_t f_t = F$. Note that in any tiling, the first segment consists of rows $r_0$ and $r_1$, so we will only be concerned with bounding the number of rows in all the segments except the first segment. We will define a quantity $X_t$ for each segment indexed by $t$. If the $t^{th}$ segment is small, then $X_t = 1$. If the $t^{th}$ segment is big, then $X_t = X(r)$, where $r$ is the first row in the segment. If the last row of the previous segment is a no-cost row, then it must be an end row. According to Lemma \ref{lem-valid}, $X_t$ is an upper bound on the number of rows in the segment. If the last row of the previous segment has non-zero cost, then $f_t > 0$ and the number of rows of the segment is at most $X_t + l(r) + 2w(r)$, where $r$ is the first row in the segment. Therefore $$\sum_{t=1}^m |S_t| \le \sum_{t=1}^m X_t + \max_r\{l(r) + 2w(r)\} \sum_t{f_t} = \sum_{t=1}^m X_t + F \cdot \max_r\{l(r) + 2w(r)\}$$ Using the bounds $l(r) \le 3mF + m^2$ and $w(r) \le 2(m+F)$ from Lemma \ref{lem-phasebounds} and simplifying, \begin{equation} \label{eq-Xbound} \sum_{t=1}^m |S_t| \le 3mF^2 + Fm^2 + 4mF + 4F^2 + \sum_{t=1}^m X_t \le Fm^2 + 11 mF^2 + \sum_{t=1}^m X_t \end{equation} We would like to bound how much the parameter $X_t$ can grow from one big segment to the next. To that end, we define $\Delta_t$ to be the increase in $X_t$. If $S_t$ is small, then $\Delta_t = 1$.
Otherwise $\Delta_t = X_t - X_{t'}$, where $t'$ is the largest $2 \le t' < t$ such that $S_{t'}$ is also big. If $S_t$ is the first big segment, then $\Delta_t = X_t$. For any $m$: $$\sum_{t=2}^m X_t \le \sum_{t=2}^m (m-t+1) \Delta_t.$$ The goal then is to bound $\Delta_t$. Suppose that $S_t$ is a big segment that is not the first big segment. Let $S_{t'}$ be the last big segment before $S_t$. Let $r$ be the first row of $S_{t'}$, $r'$ be the last row of $S_{t'}$ and $r''$ the first row of $S_t$. $\Delta_t = X(r'') - X(r)$. We will bound $X(r'') - X(r')$ and $X(r') - X(r)$ separately. We first bound $X(r'') - X(r')$. There are two cases: {\bf Case 1:} $r'$ has zero cost. Let $\bar{r}$ be the row after $r'$. Since $r'$ is the last row in segment $S_{t'}$, it must be an end row. Row $\bar{r}$ represents the first configuration in the next iteration of the Outer Loop. Since the intervals do not change in the first step of the Outer Loop, $X(\bar{r}) - X(r') = 0$. If $\bar{r}$ is also a no-cost row, then it is the first row in a big segment, which means that $\bar{r}$ is the first row in $S_t$ and $\bar{r} = r''$. In this case, $f_t = 0$ and $X(r'') - X(r') = 0$. If $\bar{r}$ has positive cost, we can apply Lemma \ref{lem-bigSeg} with $r_a = \bar{r}$ and $r_b = r''$ to get that $$X(r'') - X(r') = X(r'') - X(\bar{r}) \le 8f_t[ t^2 + 4tF + t + 2F + 8f_t].$$ {\bf Case 2:} $r'$ has positive cost. We can apply Lemma \ref{lem-bigSeg} with $r_a = r'$ and $r_b = r''$ to get that $$X(r'') - X(r') \le 8f_t[ t^2 + 4tF + t + 2F + 8f_t].$$ We now use Lemma \ref{lem-bigSeg2} to bound $X(r') - X(r)$. If the row before $r$ has zero cost, then it must be an end row, and $X(r') - X(r) \le 1 + w(r')(w(r')-1)$. If the row before $r$ has positive cost, then $X(r') - X(r) \le 1 + w(r')(w(r')-1) + 2w(r')$. Note that if the row before $r$ has positive cost, then $F > 0$. Therefore, we can combine the bounds to get: $$X(r') - X(r) \le 1 + w(r')(w(r')-1) + 2Fw(r')$$ By Claim \ref{cl-invalidBound}, $w(r') \le 1 + t' + 2F \le 2F + t$. Therefore $$X(r') - X(r) \le (2F+t)^2 - (2F+t) + 1 + 2F(2F+t) \le (t^2 -t +1) + 8F^2 + 6Ft$$ The bound for $\Delta_t = X(r'') - X(r)$ comes from adding the bounds for $X(r'') - X(r')$ and for $X(r') - X(r)$, to get \begin{equation} \label{eq-Delta} \Delta_t \le \left[ 8f_t( t^2 + 4tF + t + 2F + 8f_t) \right] + \left[ (t^2 -t +1) + 8F^2 + 6Ft \right]. \end{equation} The bound for $\Delta_t$ in (\ref{eq-Delta}) applies to the case where $S_t$ is a big segment that is not the first big segment in Layer $1$. If $S_t$ is the first big segment, then by Lemma \ref{lem-firstBig}, $$X_t = \Delta_t \le 18f_t^2 + 18 f_t + 3.$$ If $S_t$ is not a big segment, then $\Delta_t = 1$. In either case, using the fact that $t \ge 2$, one can verify that $\Delta_t$ is bounded by the expression given in (\ref{eq-Delta}). Therefore \begin{eqnarray*} & & \sum_{t=2}^m X_t \le \sum_{t=2}^m (m-t+1) \Delta_t\\ &\le & \sum_{t=2}^m (m-t+1) \left[ 8f_t( t^2 + 4tF + t + 2F + 8f_t) + (t^2 -t +1) + 8F^2 + 6Ft \right]\\ & = & \sum_{t=2}^m (m-t+1)[8f_t t^2 + (t^2 -t +1)]\\ & + & \sum_{t=2}^m (m-t+1)[ 8f_t( 4tF + t + 2F + 8f_t) + 8F^2 + 6Ft ] \label{eq-Xbound2} \end{eqnarray*} By replacing $t$ or $m-t+1$ with $m$ and replacing $f_t$ with $F$, the second summation can be upper bounded by $72 F^2 m + 18 F m^2$. The dominant term in the expression above is from the $8t^2 f_t$ term.
Using the fact that $\sum_t f_t \le F$ and $t \le m$, we can bound \begin{eqnarray*} \sum_{t=2}^m 8 (m-t+1) t^2 f_t & \le & \sum_{t=2}^m 8 t^2 f_t + \max_t 8F (m-t) t^2\\ & \le & 8 F m^2 + 8(4/27) F m^3 \le 8 F m^2 + 2 F m^3 \end{eqnarray*} The expression in the max function is maximized for $t = 2m/3$. Therefore: \begin{eqnarray*} \sum_{t=2}^m X_t \le 2Fm^3 + 72 F^2 m + 26 F m^2 + \sum_{t=2}^m (m-t+1)(t^2 - t + 1) \end{eqnarray*} Putting this bound together with the bound from (\ref{eq-Xbound}): $$\sum_{t=1}^m |S_t| \le 2Fm^3 + 83 m F^2 + 27 Fm^2 + \sum_{t=2}^m (m-t+1)(t^2 - t + 1)$$ Now we turn to the fault-free tiling. For $t \ge 2$, in the first row of $\bar{S}_t$, there are $t-1$ intervals of sizes $t, t-1, \ldots, 2$. Therefore $X_2 = 3$ and $\Delta_t = 1 + \sum_{j=1}^{t-1} 2j = t^2 - t + 1$. Letting $m = \mu(N)-12F$, we want to show that $$ 2Fm^3 + 83 m F^2 + 27 Fm^2 + \sum_{t=2}^m (m-t+1)(t^2 - t + 1) \le \sum_{t=2}^{m+12F} (m+12F-t+1)(t^2 - t + 1)$$ We will lower bound the difference of the two summations: \begin{eqnarray*} & & \sum_{t=2}^{m+12F} (m+12F-t+1)(t^2 - t + 1) - \sum_{t=2}^m (m-t+1)(t^2 - t + 1)\\ & \ge & \sum_{t=2}^m 12F(t^2 - t + 1) + \sum_{j=1}^{12F} (12F-j+1)[(m+j)^2 - (m+j) + 1]\\ & \ge & 12F\left( \frac{m^3}{3} - \frac{m}{3} \right) + [(m+1)^2 - (m+1) + 1]\sum_{j=1}^{12F} (12F-j+1)\\ & \ge & 4Fm^3 - 4Fm + 72 F^2 (m^2 + m + 1)\\ & \ge & 2Fm^3 + 83 m F^2 + 27 Fm^2 \end{eqnarray*} \end{proof} \subsection{Upper Bound on the Length of a Row} \label{sec-lengthUB} \ifshow {\bf (sec:lengthUB)} \else \fi Layer $3$ of the construction for Parity Weighted Tiling and Function Weighted Tiling uses a Turing Machine that sweeps across all of the non-blank symbols. In order to establish that this Turing Machine completes its work in $N$ steps, we need an upper bound on the length of a row as a function of $N$ and the number of faults in Layer $1$. In this section we present the upper bound on the length of a row required for this analysis. Note that this section is not used in the proof of Gapped Weighted Tiling. We present these bounds here because they use the definition of the function $X$, which was developed in the previous section. \begin{lemma} \label{lem-XdiffLB} \ifshow {\bf (lem:XdiffLB)} \else \fi {\bf [Lower Bound on the Change in $X$]} Consider any two consecutive segments which end at rows $r_a$ and $r_b$, respectively. Then $X(r_b) - X(r_a) \ge -2 (l(r_a) + w(r_a) - 1)$. If $r_a$ and $r_b$ are no-cost rows (meaning the segment ending in row $r_b$ is a complete segment), then $X(r_b) - X(r_a) \ge w(r_a)^2$. \end{lemma} \begin{proof} In a complete segment, the size of each interval increases by $1$ and a new interval of size $2$ is added to the right end. Therefore the sequence $(s_1, s_2, \ldots, s_m)$ becomes $(s_1+1, \ldots, s_m +1, 2)$, and the value of $X$ goes from $$\sum_{j=1}^m [ 2j (s_j-1) + 1 ] ~~~~\mbox{to}~~~~\sum_{j=1}^m [ 2j s_j + 1 ] + 2(m+1) + 1.$$ The overall increase is $\sum_{j=1}^m 2j + 2m + 3 \ge (m+1)^2$. The weight of row $r_a$ is equal to $m+1$, so the value of $X$ increases by at least $w(r_a)^2$. Consider a sequence of rows that does not contain any illegal squares. This sequence of rows represents correctly executed steps of the Turing Machine applied to a valid row. Each iteration of the Inner Loop causes one of the intervals to increase in size and the other intervals to remain the same, which can only increase $X$.
The only point at which the value of $X$ can decrease is when the head is moving right as it inserts a new blank symbol into an interval. As the rest of the tape symbols are moved to the right, the interval which holds the current location of the head can be temporarily decreased in size by $1$. This can decrease the value of $X$ by at most $2m$. By the time the head reaches the right end of the tape symbols, the sizes of all the intervals are at least their original size. This can contribute a decrease of at most $2(w(r_a)-1)$. If the last row in the previous segment has non-zero cost, there may be tiles that change from one row to the next outside of a valid computation square. The worst case is if a tile of weight $2$ changes to a tile of weight $0$. This would cause three consecutive intervals of sizes $s_j, 1, s_{j+2}$ to be merged into one interval of size $s_j + s_{j+2}+1$. The net effect is that $X$ can decrease by at most four times the length of row $r_a$. Therefore, $X(r_b) - X(r_a) \ge -2 (l(r_a) + w(r_a) - 1)$. \end{proof} \begin{lemma} \label{lem-numSegUB} \ifshow {\bf (lem:numSegUB)} \else \fi {\bf [Upper Bound on the Number of Segments]} If row $r$ is the last row in a tiling in Layer $1$, $F_1$ is the number of illegal pairs or squares in Layer $1$ of the tiling, and $F_1 \le N^{1/4}/40$, then the number of segments in Layer $1$ is at most $4N^{1/4} + 2 F_1$. \end{lemma} \begin{proof} Consider a complete segment. The weight of the row at the end of the segment is exactly one larger than the weight of the last row of the previous segment. At the end of the $t^{th}$ segment, there have been at least $t - 2F_1$ such increases overall. Furthermore, none of the valid Turing Machine rules decrease the weight of a row, so in any valid computation square the weight of the two top tiles is at least the weight of the two bottom tiles. The difference in weight between two vertically aligned tiles is at most two. The number of vertically aligned tiles that are not the same and are outside of valid computation squares is at most $F_1$. Therefore the weight of any row after the first $t$ segments is at least $\max\{0,t - 4 F_1\}$. Let $r$ be the row just before the beginning of a complete segment. Then the value of $X$ increases by at least $w(r)^2$. This means that by the end of the $t^{th}$ segment, the total increase that occurred during complete segments is at least $$\sum_{j=1}^{t - 2F_1} \max\{0, j-4 F_1\}^2 \ge \frac{(t - 6F_1)^3}{3}.$$ In each incomplete segment, the value of $X$ can decrease by at most $2(l+w+1)$, where $l$ and $w$ are upper bounds on the length and weight of a row that has occurred so far. By Lemma \ref{lem-phasebounds}, $l \le 1 + 2t F_1 + t^2/2$ and $w \le 1 + t + 2F_1$. Therefore the total decrease which has occurred so far in all the incomplete segments is at most $4F_1(2 + 2tF_1 + t + 2F_1 + t^2)$. Therefore, by the end of the $t^{th}$ segment, the value of $X$ is at least $$\frac{(t - 6F_1)^3}{3} - 4F_1(2 + 2tF_1 + t + 2F_1 + t^2).$$ For $t \ge 3N^{1/4}$ and $F_1 \le N^{1/4}/40$, this function is at least $0.15\, t^3$. If a complete segment begins in row $r$, then by Lemma \ref{lem-valid}, the length of the segment is exactly $X(r)$. If the total number of segments is larger than $4N^{1/4} + 2 F_1$, then the total number of complete segments that happen after the first $3 N^{1/4}$ segments is at least $N^{1/4}$. Each of these segments lasts for at least $0.15\, (3N^{1/4})^3 \ge 4 N^{3/4}$ rows. This contradicts the fact that there are at most $N$ rows.
\end{proof} \begin{lemma} \label{lem-lengthUB} \ifshow {\bf (lem:lengthUB)} \else \fi {\bf [Upper Bound on the Length of a Row]} If row $r$ is the last row in a tiling in Layer $1$, $F_1$ is the number of illegal pairs or squares in Layer $1$ of the tiling, and $F_1 \le N^{1/4}/40$, then $l(r) \le 9 N^{1/2} + 2 N^{1/4}+1$. \end{lemma} \begin{proof} Lemma \ref{lem-phasebounds} gives an upper bound on the length of a row $r$ at the end of the $t^{th}$ segment as a function of $F_1$ and $t$. Plugging in the bound on $t$ from Lemma \ref{lem-numSegUB}, we get that \begin{eqnarray*} l(r) & \le & 2tF_1 + 1 + \sum_{j=1}^{t}j\\ & \le & 1 + 2(4N^{1/4} + 2 F_1)F_1 + \frac{(4N^{1/4} + 2 F_1)(4N^{1/4} + 2 F_1 + 1)}{2} \end{eqnarray*} When $F_1 \le N^{1/4}/40$, the expression can be upper bounded by $9 N^{1/2} + 2 N^{1/4}+1$. \end{proof} \subsection{Clean and Corrupt Intervals} \label{sec-clean} \ifshow {\bf (sec:clean)} \else \fi In order to track the sizes of the intervals, we will need a somewhat sophisticated definition, which will recursively designate each interval in row $r_t$ as {\it clean} or {\it corrupt}. Intuitively, a clean interval in row $r_t$ is an interval that is unaffected by any illegal pair or square in rows $r_0$ through $r_{t-1}$. Each clean interval in a row is given a tag. The clean intervals within a row all have unique tags. The tags allow us to track the intervals while the intervals shift and grow as the computation progresses (extending upwards in the tiling). Lemma \ref{lem-cleanLB} then uses the lower bound on the number of complete segments from Section \ref{sec-segLB} (Lemma \ref{lem-segLB}) to lower bound the number of clean intervals in the last row of Layer $1$. Tagging the clean intervals will be useful so that we can argue more precisely about how the sizes of those intervals grow. We are only able to characterize the sizes of the clean intervals, as faults can cause the sizes and the number of intervals to change in unpredictable ways. Consider for example an interval of size $s$ and a fault which causes a $B$ in the middle of that interval to change to an $X$ in the following row. The interval of size $s$ splits into two different intervals whose sizes sum to $s+1$. To write down the definition of {\it clean} and {\it corrupt} intervals, we need a notion of a distance between sequences of tiles: \begin{definition}{\bf [Row Distance on a Sequence of Locations]} The distance between two tiling rows $r$ and $r'$ is the number of locations where $r$ and $r'$ differ and is denoted by $d(r, r')$. If $S$ is a sequence of tile locations in a row of length $N$, and $r$ and $r'$ are two tiling rows, then $d_S(r,r')$ is the number of locations in $S$ where $r$ and $r'$ differ. \end{definition} We will also need the following notation: for any valid row $r$, let $\mbox{next}(r)$ denote the unique row that can be placed above $r$ with no illegal squares. According to Lemma \ref{lem-nextRow}, $\mbox{next}(r)$ represents the application of one step of the Turing Machine to row $r$. Consider two consecutive rows in the tiling, $r$ and $r'$. If $r$ is valid, then $\mbox{next}(r)$ is well-defined and we want to assess whether a clean interval in row $r$ remains clean in $r'$ according to whether it is equal to the corresponding interval in $\mbox{next}(r)$. On the other hand, if $r$ is not valid, then the row will not necessarily correspond to a valid Turing Machine configuration and there is not a well-defined expectation for what $r'$ should be.
In this case, we just compare $r'$ to $r$ in determining whether each clean interval in $r$ remains clean in $r'$. \begin{definition} \label{def-cleancorrupt} {\bf [Clean and Corrupt Intervals and Tags]} $r_0$ is assumed to consist of all $\Box$ tiles, so the definition of {\em clean} or {\em corrupt} starts with $r_1$. For $t = 1, \ldots, N-1$ the intervals in row $r_t$ are designated as clean or corrupt, and the clean intervals will be tagged by positive integer numbers (which can be viewed as ``names'' of the clean intervals). This is done recursively, as follows. First, a {\it comparison row} $\tilde{r}$ is determined, the intervals in $\tilde{r}$ are designated as clean or corrupt, and the clean intervals in $\tilde{r}$ are tagged. This is then used to decide about the clean and corrupt intervals of $r_t$, as well as the tags for the clean intervals in $r_t$. We consider three cases: \begin{enumerate} \item {\bf Case 1:} $t = 1$. The comparison row $\tilde{r}$ is set to be the correct starting row: $$\Box~\vartriangleleft~(q_{e2}/\#)~\{ \# \}^{N-4} \Box$$ Since this row is correct, we already know the assignment of its clean and corrupt intervals: the only interval in $\tilde{r}$ is $\vartriangleleft~(q_{e2}/\#)$, which is designated as clean and assigned the tag $1$. \item {\bf Case 2:} $t > 1$ and $r_{t-1}$ is invalid. Then $\tilde{r} = r_{t-1}$. The intervals in $r_{t-1}$ have recursively been designated as clean or corrupt. Moreover, the clean intervals have been assigned tags. \item \label{case3}{\bf Case 3:} $t > 1$ and $r_{t-1}$ is valid. Then $\tilde{r} = \mbox{next}(r_{t-1})$. Order the intervals in $r_{t-1}$ and $\mbox{next}(r_{t-1})$ from left to right. The $j^{th}$ interval in $\mbox{next}(r_{t-1})$ is given the same clean/corrupt designation as the $j^{th}$ interval in $r_{t-1}$. If the $j^{th}$ interval is clean, then the tag is also the same. Claim \ref{cl-numIntsNext} below shows that since $r_{t-1}$ is valid, the number of intervals in $\mbox{next}(r_{t-1})$ is either the same as or one larger than that of $r_{t-1}$. If $\mbox{next}(r_{t-1})$ has one more interval than $r_{t-1}$, then the new interval is the rightmost interval. The new interval is designated as clean and given the tag $t$. \end{enumerate} We now use the designation of clean or corrupt intervals in $\tilde{r}$ to determine which intervals are clean and corrupt in $r_t$. For each interval in $r_t$, let $S$ be the sequence of tiles in that interval. If $d_S(r_t, \tilde{r}) > 0$, then the interval is corrupt in $r_t$. If $d_S(r_t, \tilde{r}) = 0$, then the interval adopts the same clean/corrupt designation as in $\tilde{r}$. If the interval is clean, then it adopts the same tag as the corresponding interval in $\tilde{r}$. \end{definition} The following claim shows that Case \ref{case3} of the above definition is indeed well defined. \begin{claim} \label{cl-numIntsNext} \ifshow {\bf (cl:numIntsNext)} \else \fi {\bf [Change in the Number of Intervals in One TM Step]} If $r$ is a valid row, then $r$ and $\mbox{next}(r)$ have the same number of intervals, except if $r$ has the form $\Box~\vartriangleleft \{B, X, \overline{X}\}^* (q_{e1}/\#) \{ \# \}^*~\Box$ in which case $\mbox{next}(r)$ has a new interval. The new interval is the right-most interval in $\mbox{next}(r)$ and has size $2$. \end{claim} \begin{proof} By Fact \ref{fact:WeightsInts}, we know that the number of intervals is one less than the weight of the row.
The only TM rule that increases the total weight of the tiles (namely, the only legal head square in which the weight of the top two tiles is bigger than that of the bottom two tiles) is $\delta(q_{e1}, \#) = (q_{e2}, X, R )$. By inspection of Figure \ref{figure-ValidConfigGraph}, the only valid row which has the tile $(q_{e1}/\#)$ as its head tile is of the form $\Box~\vartriangleleft \{B, X, \overline{X}\}^* (q_{e1}/\#) \{ \# \}^*~\Box$. Therefore if any other rule is applied, the number of intervals is the same between the two rows. Note that $(q_{e1}/\#)$ is a heavy tile, so $(q_{e1}/\#)$ is the right end of the right-most interval in $r$. When the rule is applied, the two tiles $(q_{e1}/\#), \#$ in $r$ become $X, (q_{e2}/\#)$ in $\mbox{next}(r)$. Since $X$ and $(q_{e2}/\#)$ are both heavy tiles, the interval to the left of the $X$ does not change and the new interval has size $2$. \end{proof} In order to use the tags in Definition \ref{def-cleancorrupt} to track intervals, we need the following lemma about the tags: \begin{lemma} \label{lem-tags} \ifshow {\bf (lem:tags)} \else \fi {\bf [Interval Tags are Unique]} Within a row $r_t$, all the tags of clean intervals are unique and $\le t$. If a clean interval in $r_t$ has a tag $j < t$, then there is a clean interval with tag $j$ in row $r_{t-1}$. \end{lemma} \begin{proof} By induction on $t$. There is only one possible clean interval in row $r_1$. If row $r_1$ has that interval, then its tag is $1$. For the inductive step, suppose that $r_{t-1}$ is invalid. Then in Definition \ref{def-cleancorrupt}, $\tilde{r} = r_{t-1}$. The only way for an interval to be clean in $r_t$ is for that interval to be identical to a clean interval in $r_{t-1}$, in which case the interval has the same tag in $r_t$ as it does in $r_{t-1}$. If the clean intervals all have unique tags $\le t-1$ in $r_{t-1}$, then the clean intervals in $r_t$ also have unique tags $\le t$. If $r_{t-1}$ is valid, then $\tilde{r} = \mbox{next}(r_{t-1})$. There is a one-to-one correspondence (that preserves tags) between the clean intervals in $\mbox{next}(r_{t-1})$ and the clean intervals in $r_{t-1}$, except for the fact that $\mbox{next}(r_{t-1})$ might contain an additional new interval with tag $t$. If the tags of all the clean intervals in $r_{t-1}$ are unique and $\le t-1$, then the tags of all the clean intervals in $\mbox{next}(r_{t-1})$ are unique and $\le t$. The only way for an interval to be clean in $r_t$ is for that interval to be identical to a clean interval in $\mbox{next}(r_{t-1})$. If the tag of a clean interval in $r_t$ is $\le t-1$, then it corresponds to a clean interval in $\mbox{next}(r_{t-1})$ whose tag is $\le t-1$, which in turn corresponds to a clean interval in $r_{t-1}$. \end{proof} \begin{lemma} \label{lem-cleanLost} \ifshow {\bf (lem:cleanLost)} \else \fi {\bf [Upper Bound on the Loss of Clean Intervals]} The number of tags $j$ such that there is a clean interval with tag $j$ in $r_{t}$ but there is no clean interval with tag $j$ in $r_{t+1}$ is at most $12h(r_t) + 6v(r_t)$. \end{lemma} \begin{proof} By Lemma \ref{lem-nextRow}, if $h(r_t) + v(r_t) = 0$, then $r_{t+1}$ is the same as $\mbox{next}(r_t)$, which means that $r_{t+1}$ represents a correctly executed step of the Turing Machine on the configuration represented in the valid row $r_t$, and row $r_{t+1}$ is also valid. Thus no clean intervals are lost from $r_t$ to $r_{t+1}$, although it is possible that a clean interval with tag $t+1$ is added in row $r_{t+1}$. If $h(r_t) > 0$, then row $r_t$ is not valid.
By Definition \ref{def-cleancorrupt}, $r_{t+1}$ is compared to $r_t$ in determining which intervals in $r_{t+1}$ are clean. Each clean interval in $r_t$ that is unchanged in $r_{t+1}$ remains clean in $r_{t+1}$. Therefore, if a clean interval with tag $j$ is present in $r_t$ but does not appear in $r_{t+1}$, there must be a tile in the interval where $r_t$ and $r_{t+1}$ differ. Each tile in $r_t$ is included in at most $3$ intervals, so the number of intervals that are clean in $r_t$ but are no longer clean intervals in $r_{t+1}$ is at most $3d(r_t, r_{t+1})$, which by Claim \ref{cl-distUB} is at most $12h(r_t) + 6v(r_t)$. Now suppose that $r_t$ is valid ($h(r_t)=0$) but $v(r_t) >0$. Then $r_{t+1}$ is compared to $\mbox{next}(r_t)$ in determining which intervals are clean or corrupt. Every clean interval in $r_t$ corresponds to a clean interval in $\mbox{next}(r_t)$ with the same tag. Every clean interval in $\mbox{next}(r_t)$ that fails to appear in $r_{t+1}$ must contain a location where $\mbox{next}(r_t)$ and $r_{t+1}$ differ. Since each tile participates in at most $3$ intervals in $\mbox{next}(r_t)$, the number of clean intervals in $r_t$ that are no longer clean intervals in $r_{t+1}$ is at most $3 \cdot d(\mbox{next}(r_t), r_{t+1})$. We will now argue that $d(\mbox{next}(r_t), r_{t+1}) \le 2v(r_t)$. $r_t$ and $\mbox{next}(r_t)$ only differ in two locations; in all other locations the two rows contain the same tape tiles. If one of these tape tiles in $r_t$ is not the same in $r_{t+1}$, then those two vertically aligned tiles are in an illegal computation square. Now consider the legal head square resulting from putting $\mbox{next}(r_t)$ on top of $r_t$. If either tile in the top row of this square differs from the corresponding tile in $r_{t+1}$, then the head square is illegal. Therefore in each location where $\mbox{next}(r_t)$ and $r_{t+1}$ differ, the two vertically aligned tiles in $r_t$ and $r_{t+1}$ in that location are contained in an illegal computation square. Since each illegal computation square contains at most two pairs of vertically aligned tiles, the number of locations where $\mbox{next}(r_t)$ and $r_{t+1}$ differ is at most $2v(r_t)$. \end{proof} \begin{lemma} \label{lem-cleanLB} \ifshow {\bf (lem:cleanLB)} \else \fi {\bf [Lower Bound on the Number of Clean Intervals]} Let $F_1$ be the number of illegal squares or pairs in Layer $1$ of a tiling of the $N \times N$ grid. Then the number of clean intervals in the last row of Layer $1$ is at least $\mu(N) - 26F_1$. \end{lemma} \begin{proof} By Lemma \ref{lem-segLB}, the number of complete segments in Layer $1$ is at least $\mu(N) - 14 F_1$. Since each complete segment corresponds to a fault-free iteration of the Outer Loop, and since one new clean interval is added in each such iteration, the number of clean intervals that are created in Layer $1$ is at least $\mu(N) - 14 F_1$. Each of these clean intervals is tagged with the row number in which it first appears, which is the last row in that segment. Therefore, each of the $\mu(N) - 14 F_1$ clean intervals created has a unique tag. By Lemma \ref{lem-cleanLost}, the number of tags $j$ such that there is a clean interval with tag $j$ in some row $r_{t-1}$ but no clean interval with tag $j$ in row $r_t$ is at most $12F_1$. Therefore the number of clean intervals in the last row of Layer $1$ is at least $\mu(N) - 14 F_1 - 12 F_1 = \mu(N) - 26 F_1$.
\end{proof} \subsection{The Potential Function $A$} \label{sec-potential} \ifshow {\bf (sec:potential)} \else \fi In addition to giving a lower bound on the number of clean intervals in a tiling, we will want to show that the sizes of those intervals do not deviate too much from the sizes of the intervals in a tiling with no illegal pairs or squares. In a fault-free computation, if the number of intervals at the end of an iteration of the Outer Loop is $m$, then the sizes of those intervals from left to right will be $m+1, m, m-1, \ldots, 2$. The function $A$ captures the extent to which the interval sizes differ from this ideal case. \begin{definition} \label{def-A} Consider a sequence of $m$ positive integers $s_1, \ldots, s_m$. $$A(s_1, \ldots, s_m) = \sum_{j=1}^{m-1} |s_j - s_{j+1} - 1| + |s_m - 2|. $$ \end{definition} Note that $A(m+1, m, \ldots, 2) = 0$. For example, $A(5,3,3,2) = |5-3-1| + |3-3-1| + |3-2-1| + |2-2| = 2$. It will also be convenient to refer to the value of $A$ for a row in Layer $1$ of a tiling. If the sizes of the clean intervals in row $r$ (from left to right) are $s_1, \ldots, s_m$, then $A(r) = A(s_1, \ldots, s_m)$. If $r$ does not have any clean intervals, then $A(r)$ is defined to be $0$. The primary goal of the analysis in this section is to prove Lemma \ref{lem-potential}, which says that if $r$ is the last row of Layer $1$ in a tiling, and $F_1$ is the number of illegal pairs or squares in Layer $1$, then $A(r)$ is bounded by $3 + O(F_1)$. \subsubsection{Features of the function $A$} The analysis of each layer will bound the value of $A$ by a constant times the number of illegal pairs and squares. At various points in the analysis, we will need to deduce features about the sequence of sizes of the clean intervals based on the bound on the value of $A$. The following three technical lemmas describe the required features and argue that they follow from the bound on $A$. \begin{lemma} \label{lem-seq} \ifshow {\bf (lem:seq)} \else \fi Consider a sequence $s_1, \ldots, s_m$ such that each $s_i \ge 2$. Let $S$ denote the set of integers occurring in the sequence $s_1, \ldots, s_m$ with repetitions removed. Then $$ \left| \{2, \ldots, r \} - S \right| \le \max\{r-m-1,0\} + A(s_1, \ldots, s_m).$$ \end{lemma} \begin{proof} The value of $A(s_1, \ldots, s_m)$ is minimized if the $s_i$'s are sorted in non-increasing order, so we will assume this is the case. Let $S'$ be the set of integers in the range from $2$ through $s_1$. Define $N_{skip} = |S' - S|$ and $N_{dup} = m - |S|$. If the sequence $(s_1,\ldots, s_m)$ is sorted in non-increasing order, then the value of $A(s_1, \ldots, s_m)$ is equal to $N_{skip} + N_{dup}$. Furthermore, $s_1 \ge m+1 - N_{dup}$. If $r \le s_1$, then $$\left| \{2, \ldots, r \} - S \right| \le N_{skip} \le A(s_1, \ldots, s_m)$$ If $r > s_1$, then $$\left| \{2, \ldots, r \} - S \right| = r - s_1 + N_{skip} \le r - m - 1 + N_{dup} + N_{skip} \le (r-m-1) + A(s_1, \ldots, s_m)$$ \end{proof} \begin{lemma} \label{lem-usingA} \ifshow {\bf (lem:usingA)} \else \fi Consider a sequence $s_1, \ldots, s_m$ whose $A$ value is at most $d$. Then at least $(m - d)/2$ of the $s_j$'s are at least $(m - d)/2$. \end{lemma} \begin{proof} We will prove that if fewer than $(m - d)/2$ of the $s_j$'s are at least $(m - d)/2$, then the value of $A$ is strictly larger than $d$. The value of $A$ is minimized when the sequence $s_1, \ldots, s_m$ is sorted in non-increasing order, so we will also assume that the $s_j$'s are sorted accordingly.
If there are fewer than $(m - d)/2$ of the $s_j$'s in the sequence that are at least $(m - d)/2$, then there are more than $m - (m - d)/2 = (m+d)/2$ of the $s_j$'s that fall in the range from $1$ to $\lfloor (m - d)/2 \rfloor$. Since more than $(m+d)/2$ values fall among at most $(m-d)/2$ distinct integers, by the pigeonhole principle there are more than $d$ indices $j$ such that $s_j = s_{j-1}$. Each such value contributes at least $+1$ to the sum defining $A$. Since all the terms in $A$ are non-negative, the lemma follows. \end{proof} \begin{lemma} \label{lem-remove1} \ifshow {\bf (lem:remove1)} \else \fi Consider a sequence $s_1, \ldots, s_m$. If $s_j = 1$ and $s_j$ is removed from the sequence, then the value of $A$ for the sequence does not increase. If any $s_j$ is removed from the sequence, the value of $A$ increases by at most $1$. \end{lemma} \begin{proof} If $s_1 = 1$, then the first term in the sum defining $A$ is $|1 - s_2 - 1| = s_2$. Since $s_2$ is positive, the value of $A$ decreases when $s_1$ is removed. If $s_m = 1$, the last two terms in the sum defining $A$ are $|s_{m-1} - 1 - 1| + |1 - 2| = |s_{m-1} - 2| + 1$. If $s_m$ is removed, these two terms are replaced by $|s_{m-1}-2|$, so the value of $A$ decreases by $1$. Finally, we consider the case that $j$ is neither $1$ nor $m$. When $s_j = 1$ is removed, the two terms $|s_{j-1} - 1 - 1| + |1 - s_{j+1} - 1| = |s_{j-1}-2| + s_{j+1}$ are replaced by $|s_{j-1} - s_{j+1} - 1|$. If $s_{j-1} = 1$, then the original expression is $s_{j+1}+1$, which is replaced by $s_{j+1}$, and the sum decreases. If $s_{j-1} > 1$, then $|s_{j-1}-2| + s_{j+1} = s_{j-1} + s_{j+1} - 2$. If $s_{j-1} \le s_{j+1}$, then $$|s_{j-1} - s_{j+1} - 1| = s_{j+1} + 1 - s_{j-1} \le s_{j+1} + s_{j-1} - 2,$$ using $s_{j-1} \ge 2$. If $s_{j-1} > s_{j+1}$, then $$|s_{j-1} - s_{j+1} - 1| = s_{j-1} - s_{j+1} - 1 \le s_{j+1} + s_{j-1} - 2,$$ using $s_{j+1} \ge 1$. In either case the value of $A$ does not increase. \end{proof} \subsubsection{Bound on the value of $A$ in a tiling} The analysis for Layer $1$ must bound the value of $A$ in the last row of Layer $1$ as a function of the number of illegal squares and pairs in Layer $1$. We start by analyzing a sequence of rows with no illegal pairs. Lemma \ref{lem-noCostRows} gives an upper bound on the amount by which $A$ can increase over a sequence of fault-free steps (corresponding to no-cost rows). Then Lemma \ref{lem-potential} gives an upper bound on $A(r)$ as a function of the number of faults in all the rows below $r$ in the tiling. \begin{lemma} \label{lem-noCostRows} \ifshow {\bf (lem:noCostRows)} \else \fi {\bf [Upper Bound on the Increase in $A$ Over a Sequence of No-Cost Rows]} Consider a sequence of consecutive rows in a tiling of the $N \times N$ grid, beginning with row $r_{s}$ and ending on row $r_{t}$. Suppose that the tiling from $r_{s}$ through $r_{t}$ does not contain any illegal pairs or squares. Then $A(r_{t}) \le A(r_{s}) + 6$. If $r_{s}$ is an end row then $A(r_{t}) \le A(r_{s}) + 3$. \end{lemma} \begin{proof} Each row proceeding upwards from $r_{s}$ corresponds to one step in the computation of the Turing Machine. All the intervals (clean and corrupt) will be numbered from left to right, and we will track the changes to the interval sizes by $\delta$ functions. So $\delta(j) = +1$ would indicate that the $j^{th}$ interval has increased in size by $1$. Note that a change to interval $j$ will only affect the value of $A$ if $j$ is a clean interval. We will use $m$ to denote the number of clean intervals and $M$ to denote the total number of intervals.
Note that the number of intervals does not change in one iteration of the Outer Loop, except in the last step. At the start of an iteration of the Inner Loop, the state is $q_{IS}$. We will call a row an $\mbox{IS}[k]$ row if the state is $q_{IS}$ and the head is in interval $k$. Since the intervals overlap, the head could possibly be in two intervals at the same time, for example at a $(q_{IS}/X)$ tile. In this case, we say that the head is in the interval to the left. In analyzing the sizes of the intervals, it does not matter where in interval $k$ the head begins. When the TM is in state $q_{IS}$, the head moves to the right past any $B$ symbols until it reaches an $X$ or a $\vartriangleright$ tile, at which point it changes state. Thus, in any consecutive sequence of rows in which the state is $q_{IS}$, the head remains in the same interval and the interval sizes do not change. We will consider different subsequences of rows corresponding to different portions of an iteration of the Outer Loop. \begin{enumerate} \item {\bf The subsequence starts with an $\mbox{IS}[k]$ row and ends with an $\mbox{IS}[k+l]$ row.} After the $l$ iterations of the Inner Loop, the intervals $k$ through $k+l-1$ will have increased in size by $1$ and all other interval sizes will stay the same: $\delta(k) = \delta(k+1) = \cdots = \delta(k+l-1) = +1$. \item {\bf The subsequence starts with an $\mbox{IS}[k]$ row and ends before an $\mbox{IS}[k+1]$ row is reached.} If the head never reaches the right end of interval $k$, the intervals do not change. If the head sweeps past the right end of interval $k$, then the size of interval $k$ increases by $1$ ($\delta(k) = +1$). If the sequence ends while the head is in interval $j$ before reaching the $\vartriangleright$ tile, then the left end of interval $j$ has been moved over but the right end has not been moved over. The effect is that interval $j$ has decreased in size by $1$: $\delta(j) = -1$. If the subsequence ends after the head has reached the $\vartriangleright$ tile, the state will be $q_{w \vartriangleleft}$ or $q_{left}$. In this case, all the tiles have been moved over and there is no $\delta(j) = -1$ change. \item {\bf The subsequence starts in a row that is not an $\mbox{IS}$ row and ends when the first $\mbox{IS}[k]$ is reached.} If the initial state in the subsequence is $q_{w \vartriangleleft}$ or $q_{left}$, then the head just moves left until it reaches a $\overline{X}$ or $\vartriangleleft$ symbol, at which point the state transitions to $q_{IS}$. The intervals do not change. If the head is in the middle of sweeping right, then the state is $q_{w \overline{X}}$, $q_{wX}$ or $q_{wB}$. Suppose that the head starts in interval $i$. The size of interval $i$ is increased as the head sweeps right. When the head has finished sweeping to the right, all the tiles have been moved over and the net effect is $\delta(i) = +1$. \item {\bf The subsequence starts in the state $q_{OS}$ and stops on or before $\mbox{IS}[1]$.} The head sweeps left until reaching $\vartriangleleft$. The intervals do not change, so all $\delta$'s are $0$. \item {\bf The subsequence starts in an $\mbox{IS}[M]$ row and ends on or before the next end row.} The head moves right until it reaches the $\vartriangleright$ symbol. The tiles $(q_{IS}/\vartriangleright)~\#$ become $B~ (q_{e1}/\#)$, which means that $\delta(M) = +1$. Then $(q_{e1}/\#)~\#$ become $X~(q_{e2}/\#)$, which means a new interval of size $2$ has been added.
The last step, in which $X~(q_{e2}/\#)$ changes to $(q_{OS}/X)~\vartriangleright$, does not change any of the intervals. \end{enumerate} Let $c_1, \ldots, c_m$ be the indices of the clean intervals in $r_{s}$. Let $s_j$ denote the size of interval $c_j$. The value of $A$ is: $$A = \sum_{i=1}^{m-1} | s_i - s_{i+1} - 1| + |s_m - 2|.$$ We first consider the case in which $r_{s}$ is an end row. If the sequence ends before the first Inner Loop begins, the net effect on $A$ is $0$ because none of the intervals change in size. Otherwise, the sequence can then go on to $k$ complete iterations of the Inner Loop, which cause the first $k$ intervals to increase in size by $1$. If $a$ is the number of clean intervals among the first $k$ intervals, then $\delta(c_1) = \delta(c_2) = \cdots = \delta(c_a) = +1$, which can increase $A$ by at most $1$ because only the $|s_a - s_{a+1} - 1|$ term changes and that term can only change by $1$. If the sequence goes on to an incomplete iteration of the Inner Loop, the next clean interval could have increased ($\delta(c_{a+1}) = +1$) and the change to $A$ is still at most $1$. It is also possible that the sequence ends while the head is sweeping right, in which case $\delta(c_j) = -1$ for some clean interval $c_j$. This can increase $A$ by an additional $+2$, for a total increase of $+3$. If the sequence completes all the Inner Loops, then all the clean intervals will have increased in size. In this case, none of the terms inside the sum in the expression for $A$ change. However, the last term $|s_m-2|$ will increase by $1$. Finally, if the sequence completes the entire Outer Loop, an additional interval of size $2$ is added to the end, so there is a new $s_{m+1} = 2$. The net effect on $A$ from the beginning of the sequence is $0$: $$A(r_{t}) - A(r_{s}) = (|(s_m + 1) - s_{m+1} -1| + |s_{m+1} - 2|) - |s_m - 2|= 0.$$ Thus if $r_{s}$ is an end row, and the sequence ends before the next end row, $A$ can increase by at most $3$. Moreover, if the sequence is one complete iteration of the Outer Loop (i.e., if $r_s$ and $r_t$ are end rows), then $A(r_{t}) - A(r_{s}) = 0$. Now suppose that the sequence begins in an arbitrary location in the middle of the Outer Loop. The worst case is that the sequence starts during a right sweep in an iteration of the Inner Loop. There is possibly a $\delta(i) = +1$ if the head starts out in a clean interval which is increased as the head sweeps right. ($A$ can increase by at most $+2$.) Then after one or more complete iterations of the Inner Loop, a consecutive sequence of clean intervals could have increased in size by $1$. ($A$ can increase again by at most $+2$.) Then the sequence can end in the middle of an iteration of the Inner Loop, in which case there could also be a $\delta(j) = -1$ for the interval where the head ends up. (Another increase to $A$ of at most $+2$.) The total increase to $A$ is bounded by $6$. Thus, if the sequence does not contain any end rows (i.e., the sequence is contained within one iteration of the Outer Loop), then the increase to $A$ is at most $6$. If the sequence begins at an arbitrary location in the middle of the Outer Loop and completes the Outer Loop (i.e., ends on an end row), then there could be a $\delta(i) = +1$ if the head starts out in a clean interval which will be increased as the head sweeps right. ($A$ can increase by at most $+2$.)
In addition, the next iterations of the Inner Loop will start with some interval $k$ and increase all the intervals to the right of $k$; this results in $\delta(c_a) = \delta(c_{a+1}) = \cdots = \delta(c_m) = +1$, and finally a new clean interval of size $2$ is added to the right end. These changes can increase $A$ by at most $1$ because only the $|s_{a-1} - s_{a} - 1|$ term increases by $1$. The effect of increasing the last clean interval by $1$ and adding a new interval of size $2$ cancels out, as argued above. Thus if the sequence starts in the middle of an Outer Loop and runs to the end of the Outer Loop, $A$ increases by at most $3$. We have argued that if the sequence from $r_{s}$ through $r_{t}$ does not contain any end rows (i.e., the sequence is contained within one iteration of the Outer Loop), then the increase to $A$ is at most $6$. If the sequence does contain an end row, let $r_a$ be the first end row and $r_b$ be the last end row of the sequence. Since going from end row to end row does not change the value of $A$, we have $A(r_b) - A(r_a) = 0$. A sequence that is contained in an Outer Loop and ends on an end row increases $A$ by at most $3$, so $A(r_a) - A(r_{s}) \le 3$. A sequence that is contained in an Outer Loop and begins with an end row increases $A$ by at most $3$, so $A(r_{t}) - A(r_{b}) \le 3$. Putting the inequalities together gives that $A(r_{t}) - A(r_{s}) \le 6$. \end{proof} \begin{lemma} \label{lem-potential} \ifshow {\bf (lem:potential)} \else \fi For any $t \ge 1$, $$A(r_t) \le 3 + 12[h(r_{t-1}) + v(r_{t-1})] + \sum_{j=2}^{t-2} 18[h(r_j) + v(r_j)].$$ \end{lemma} \begin{proof} By induction on $t$. Base case: $t=1$. The only possible clean interval in row $r_1$ is $\vartriangleleft (q_{e2}/\#)$. If $r_1$ has this clean interval, then $A(r_1) = |s_1 - 2| = 0$. If $r_1$ does not have any clean intervals, then $A(r_1) = 0$ by definition. Inductive step: Suppose the intervals in $r_t$ are determined by comparison to row $\tilde{r}$. Every clean interval in $r_t$ corresponds to a clean interval in $\tilde{r}$. The only way $A$ can change from $\tilde{r}$ to $r_t$ is if a clean interval is removed. Each location where $r_t$ and $\tilde{r}$ differ can remove at most three clean intervals from $\tilde{r}$ to row $r_t$. Each removed clean interval can increase $A$ by at most $1$: if $s_j$, $s_{j+1}$, and $s_{j+2}$ are the sizes of three consecutive clean intervals in $\tilde{r}$ and the interval of size $s_{j+1}$ is not present in $r_t$, then $$|s_j - s_{j+2} - 1| \le |s_j - s_{j+1} - 1| + |s_{j+1} - s_{j+2}| \le |s_j - s_{j+1} - 1| + |s_{j+1} - s_{j+2} - 1| + 1.$$ Therefore $A(r_t) - A(\tilde{r}) \le 3 d(\tilde{r}, r_t)$. {\bf Case 1:} $r_{t-1}$ is invalid. Then $\tilde{r} = r_{t-1}$. $$A(r_t) - A(r_{t-1}) = A(r_t) - A(\tilde{r}) \le 3 d(\tilde{r}, r_t) \le 6 v(r_{t-1}) + 12h(r_{t-1})$$ The last inequality is due to Claim \ref{cl-distUB}. {\bf Case 2:} $r_{t-1}$ is valid and there is an illegal square in rows $r_{t-1}$ and $r_t$. Since $r_{t-1}$ is valid, $\tilde{r} = \mbox{next}(r_{t-1})$. If we were to put row $\mbox{next}(r_{t-1})$ on top of row $r_{t-1}$, the two rows would not contain any illegal pairs or squares and would therefore satisfy the conditions of Lemma \ref{lem-noCostRows}. So $A(\mbox{next}(r_{t-1})) - A(r_{t-1}) \le 6$. Every location where $\mbox{next}(r_{t-1})$ and $r_{t}$ differ must be contained in at least one illegal square, and a square contains two locations. To see why this is true, consider placing $\mbox{next}(r_{t-1})$ over $r_{t-1}$.
The two rows do not contain any illegal pairs or squares. Since $r_{t-1}$ is valid, there is exactly one head square that contains the head tile in the two rows. If $r_t$ differs from $\mbox{next}(r_{t-1})$ at either of those two locations, then the square must be illegal. In all other locations, $r_{t-1}$ and $\mbox{next}(r_{t-1})$ contain the same tape tile. If $r_t$ differs from $\mbox{next}(r_{t-1})$ in any of those locations, then that would correspond to two vertically aligned tape tiles that are not the same. Any such vertically aligned pair is contained in two illegal squares. Therefore $$A(r_t) - A(\mbox{next}(r_{t-1})) = A(r_t) - A(\tilde{r}) \le 3 d(\mbox{next}(r_{t-1}), r_t) \le 6 v(r_{t-1}).$$ Putting the two inequalities together and using the fact that $v(r_{t-1}) \ge 1$, we get that $$A(r_t) - A(r_{t-1}) \le 6 v(r_{t-1}) + 6 \le 12 v(r_{t-1}).$$ {\bf Case 3:} Rows $r_0$ through $r_t$ do not have any illegal pairs or squares. Then $r_1$ is an end row. By Lemma \ref{lem-noCostRows}, $A(r_t) - A(r_1) \le 3$. Since $A(r_1) = 0$, then $A(r_t) \le 3$. {\bf Case 4:} $v(r_{t-1}) + h(r_{t-1}) = 0$ and there is a $0 \le p \le t-2$ such that $v(r_{p}) + h(r_{p}) > 0$. Let $p$ be the largest index such that $p \le t-2$ and $v(r_{p}) + h(r_{p}) > 0$. The rows $r_{p+1}$ through $r_t$ do not contain any illegal pairs or squares. By Lemma \ref{lem-noCostRows}, $A(r_t) - A(r_{p+1}) \le 6$. By the inductive hypothesis, $$A(r_{p+1}) \le 3 + 12[h(r_{p}) + v(r_{p})] + \sum_{j=2}^{p-1} 18[h(r_j) + v(r_j)]$$ Therefore, using the fact that $v(r_{p}) + h(r_{p}) \ge 1$ and $p \le t-2$, we have that \begin{eqnarray*} A(r_{t}) & \le & 6 + 3 + 12[h(r_{p}) + v(r_{p})] + \sum_{j=2}^{p-1} 18[h(r_j) + v(r_j)]\\ & \le & 3 + 18[h(r_{p}) + v(r_{p})] + \sum_{j=2}^{p-1} 18[h(r_j) + v(r_j)]\\ & \le & 3 + \sum_{j=2}^{t-2} 18[h(r_j) + v(r_j)] \end{eqnarray*} \end{proof} We can put Lemmas \ref{lem-cleanLB} and \ref{lem-potential} together to get the following lemma, which summarizes what is needed from the analysis of Layer $1$ for the next Layer. In particular, Lemma \ref{lem-analysisL1} bounds the number of interval sizes in the range $2$ through $\mu(N)+1$ that are not represented in the last row of Layer $1$. \begin{lemma} \label{lem-analysisL1} \ifshow {\bf (lem:analysisL1)} \else \fi {\bf [Bound on the Number of Missing Interval Sizes]} Consider a tiling of the $N \times N$ grid. Let $r$ be the last row of Layer $1$. Let $S$ denote the set of integers such that $s \in S$ if there is a clean interval of size $s$ in row $r$. Let $F_1$ denote the total number of faults in Layer $1$. Then $$|\{2,3,\ldots,\mu(N)+1\} - S| \le 44 F_1 + 3$$ \end{lemma} \begin{proof} Let $(s_1, \ldots, s_m)$ denote the sizes of the clean intervals, from left to right, in row $r$. We can remove all the intervals of size $1$ and, according to Lemma \ref{lem-remove1}, the value of $A$ for that sequence will only decrease. Since the size of the remaining intervals is at least $2$, we can apply Lemma \ref{lem-seq} with $r = \mu(N)+1$, $$ \left| \{2, \ldots, \mu(N)+1 \} - S \right| \le \max\{\mu(N)-m,0\} + A(s_1, \ldots, s_m).$$ By Lemma \ref{lem-cleanLB}, the number $m$ of clean intervals in the last row of Layer $1$ is at least $\mu(N) - 26F_1$, and therefore $\max\{\mu(N)-m,0\} \le 26 F_1$. Since $F_1 = \sum_{t=0}^{N-1} [h(r_t) + v(r_t)]$, by Lemma \ref{lem-potential}, the value of $A(s_1, \ldots, s_m)$, which is the value of $A$ for the last row of Layer $1$, is at most $18 F_1 + 3$. Combining the two bounds gives $|\{2, \ldots, \mu(N)+1\} - S| \le 26 F_1 + 18 F_1 + 3 = 44 F_1 + 3$.
\end{proof} \subsection{Characterizing the intervals in a fault-free tiling} \label{sec-FF} \ifshow {\bf (sec:FF)} \else \fi In order to compare a tiling with faults to a fault-free tiling, we will eventually need to characterize the sequence of interval sizes in a fault-free tiling. At the end of every iteration of the Outer Loop, the sequence of interval sizes starts at some number $m$ and decreases by $1$, going from left to right, until the last interval, which has size $2$. In the middle of an iteration of the Outer Loop, the sequence of interval lengths can differ slightly from this ideal case. The extent of the difference is characterized in the lemma below. \begin{lemma} \label{lem-errorFreeSizes} \ifshow {\bf (lem:errorFreeSizes)} \else \fi {\bf [The Sequence of Interval Sizes in a Fault-Free Tiling]} Consider a row $r$ that represents the configuration of the Turing Machine in a fault-free execution in which the TM is in the $m^{th}$ iteration of the Outer Loop. The number of intervals in $r$ is $m$. Define the set $S$ such that $j \in S$ if and only if there is an interval of size $j$ in row $r$. \begin{enumerate} \item The sizes of the intervals form a non-increasing sequence from left to right. \item There are at most two intervals with the same size and the largest size only appears once. \item $S \subseteq \{1, 2, \ldots, m+2\}$ \item $|\{2, \ldots, m+2\} - S| \le 2 $ \end{enumerate} \end{lemma} \begin{proof} The Turing Machine starts with one interval of size $2$. After $m-1$ complete iterations of the Outer Loop, $m-1$ intervals have been added, for a total of $m$ intervals. The sizes of those intervals are $(m+1, m, \ldots, 2)$. This sequence satisfies properties $1$ through $4$. Now consider the next execution of the Outer Loop. After $j-1$ complete iterations of the Inner Loop, the leftmost $j-1$ intervals have increased in size by $1$, so the sequence is $(m+2, m+1, \ldots, m-j+4, m-j+2, m-j+1, \ldots, 2)$. This sequence also satisfies $1$ through $4$. At this point $s_{j-1} = m-j+4$ and $s_j = m-j+2$. In the middle of the $j^{th}$ iteration of the Inner Loop, the intervals stay the same as the head sweeps left. When the head reaches the left end of interval $j$, it sweeps right and increases the size of that interval by $1$ when it reaches the right end of the interval. As the head sweeps to the right, moving all the tape symbols over by $1$, one of the intervals to the right of interval $j$ is temporarily decreased by $1$. So if $\vec{s} = (m+2, m+1, \ldots, m-j+4, m-j+3, m-j+1, \ldots, 2)$ and $\vec{t}$ is the current sequence, then $s_i = t_i$, except for one $k \in \{j+1, \ldots, m\}$ where $t_k = s_k-1$. The sizes are non-increasing from left to right (property $1$). The only two intervals with the same size are $t_k$ and $t_{k+1}$ (property $2$). All numbers are in the range $\{1, 2, \ldots, m+2\}$ (property $3$), and the only two numbers in the range $\{1, 2, \ldots, m+2\}$ that are not in $S$ are $m-j+2$ and $s_k$ (property $4$). After $m$ iterations of the Inner Loop, the sequence of interval sizes is $(m+2, m+1, \ldots, 3)$. The next iteration of the Outer Loop begins when an interval of size $2$ is added to the right end. \end{proof} \begin{lemma} \label{lem-muBounds} \ifshow {\bf (lem:muBounds)} \else \fi {\bf [Bounds on $\mu(N)$]} The function $\mu$ satisfies $\mu(N) \ge N^{1/4}/2$. In addition, $\mu(N)$ is $O(N^{1/4})$ and the exact value of $\mu(N)$ can be computed in time that is poly-logarithmic in $N$.
\end{lemma} \begin{proof} In a fault-free tiling, the number of intervals increases by $1$ whenever an end row is reached. Row $r_1$ is an end row which has one interval. After the $t^{th}$ end row, there are $t$ intervals, of sizes $t+1, \ldots, 2$. By Lemma \ref{lem-valid}, the number of rows until the next end row is $\sum_{j=1}^t [2j (s_j-1) + 1]$, where the sizes of the intervals $s_1, \ldots, s_t$ are numbered from left to right. Plugging in $s_j = t - j + 2$, for $j = 1, \ldots, t$, gives $\sum_{j=1}^t [2j (t - j +1) + 1]$. Thus $\mu(N)$ is the largest value of $m$ such that \begin{equation} \label{ineq:mu} 1 + \sum_{t = 1}^{m-1} \sum_{j=1}^t [2j (t - j +1) + 1] \le N-2. \end{equation} Let $f(m)$ be defined to be $$f(m) = \sum_{t = 1}^{m-1} \sum_{j=1}^t [2j (t - j +1) + 1] = \sum_{t = 1}^{m-1} \left[ \frac 1 3 t^3 + t^2 + \frac 5 3 t \right].$$ Note that $f(m) = \Theta(m^4)$. Therefore there is a constant $c$ such that if $m \ge c N^{1/4}$, then $f(m) \ge N$, which means that $\mu(N) = O(N^{1/4})$. To get the more precise lower bound on $\mu(N)$: $$f(m) = \sum_{t = 1}^{m-1} \left[ \frac 1 3 t^3 + t^2 + \frac 5 3 t \right] \le 3(m-1)^4 = 3[m^4 - 4 m^3 + 6 m^2 - 4m +1] \le 3 m^4 - 3.$$ If $m = N^{1/4}/2$, then $f(m) + 1 \le \frac{3}{16} N - 2 \le N-2$, which means that $\mu(N) \ge N^{1/4}/2$. Finally, since $f(m)$ has a closed form that is a degree $4$ polynomial in $m$, it is possible to binary search on the range $N^{1/4}/2\le m \le cN^{1/4}$ to find the largest value $m$ for which Inequality (\ref{ineq:mu}) holds. The number of iterations is $O(\log N)$ and computing the value of $f(m)$ can be done in time that is polynomial in $\log N$. \end{proof} \section{$\Theta(N^{1/4})$-Gapped Weighted Tiling is $\mbox{NEXP}$-complete} Containment is straightforward: given a number $N$ expressed in binary and a tiling of an $N \times N$ grid (which is exponential in $\log N$, the size of the input), the cost of the tiling can be computed in $O(N^2)$ time, and it can be verified whether the cost of the tiling is $0$ or at least $c N^{1/4}$ for some constant $c$. To establish the hardness of the Gapped Weighted Tiling Problem, we will show, for an arbitrary $L \in \mbox{NEXP}$, a reduction to a translationally invariant 2D Tiling with an $N^{1/4}$ gap. Specifically, we will show a finite set of tiling rules so that, by a polynomial time computable function $x \rightarrow f(x) = N$, \begin{itemize} \item If $x \in L$, then there is a $0$ cost tiling of an $N \times N$ grid using the tiling rules. \item If $x \not\in L$, then any tiling of an $N \times N$ grid has cost $\Omega(N^{1/4})$. \end{itemize} Since $L \in \mbox{NEXP}$, there is an exponential time verifier $V$ that can verify that a string $x \in L$ given a witness whose size is exponential in $|x|$. We will assume that on input $x$, if $|x| = n$, then the verifier $V$ runs in time and space at most $2^{\delta n}$, including the space required for the witness. This can be achieved by padding: \begin{claim} \label{claim-pad} \ifshow {\bf (claim:pad)} \else \fi {\bf [Padding Argument]} If $L \in \mbox{NEXP}$, then for any constant $\delta$, $L$ is polynomial-time reducible to $L' \in \mbox{NEXP}$ such that the verifier $V'$ for $L'$ uses time and space $2^{\delta n}$. \end{claim} \begin{proof} Suppose that the verifier for $L$ uses time (and space) at most $2^{cn}$. Define a new language $L'$ $$L' = \{x 0^{dn} | x \in L ~\mbox{and}~ |x| = n \},$$ where $d = c/\delta -1$.
A verifier $V'$ for $L'$ will first make sure there are the correct number of $0$'s at the end of the input, then will erase the $0$'s (incurring a small overhead). Then $V'$ will simulate $V$ on $x$. The running time is close to $2^{cn}$. The length of the input is $(d+1)n$, so as long as $c = \delta (d+1)$, the running time will be $2^{\delta (d+1) n}$, which is $2^{\delta m}$ on inputs of length $m = (d+1)n$. \end{proof} The large cost for $x \not\in L$ is achieved by $\Theta(N^{1/4})$ independent computations, each of which will run the verifier for $L$ on the input $x$. First we need to establish that there are enough intervals created in Layer $1$ that are wide enough to simulate the execution of the verifier. Lemma \ref{lem-L1gapped} shows that the analysis provided in Section \ref{sec-L1analysis} is sufficient to establish that there will be at least $N^{1/4}/4 - O(F_1)$ clean intervals of size at least $N^{1/4}/4$. \begin{lemma} \label{lem-L1gapped} {\bf [Results From Layer $1$]} Consider a tiling of Layer $1$ and let $F_1$ be the total number of illegal pairs or squares in the tiling. Then there are at least $N^{1/4}/4 - 44 F_1 - 3$ clean intervals of size at least $N^{1/4}/4$. \end{lemma} \begin{proof} Let $(s_1, \ldots, s_m)$ be the sequence of sizes of the clean intervals in a tiling of Layer $1$. Lemma \ref{lem-analysisL1} says that the number of integers in the range $2, \ldots, \mu(N)+1$ that are not present in the sequence $(s_1, \ldots, s_m)$ is at most $44F_1 + 3$. By Lemma \ref{lem-muBounds}, $\mu(N) \ge N^{1/4}/2$, so there are at least $N^{1/4}/4$ numbers in the range $2, \ldots, \mu(N)+1$ of value at least $N^{1/4}/4$. At most $44F_1 + 3$ of them are missing. Therefore the number of clean intervals of size at least $N^{1/4}/4$ is at least $N^{1/4}/4 - 44 F_1 - 3$. \end{proof} It now remains to give the description of the constructions for Layers $2$ and $3$. \subsection{Layer 2} \label{sec-L2} Since Layer $1$ runs bottom to top and Layer $2$ runs top to bottom, the only interaction between Layers 1 and 2 takes place in $r_{N-2}$, which is the last row for Layer $1$ and the first row for Layer $2$. The translation rules from Layer $1$ to Layer $2$ will ensure that the heavy tiles from Layer $1$ are copied as $X$ tiles in Layer $2$. Thus, the intervals in Layer $2$ will have an $X$ on each end. The Layer $2$ tiling rules also enforce that an $X$ tile must go above an $X$ tile and cannot be placed above any other tile. This implies that in a no-cost tiling, the $X$ tiles form columns of $X$'s in Layer 2. In the vertical strip of tiles between each pair of adjacent columns of $X$'s, there will be an independent Turing Machine computation. The tiling rules will enforce that the head of each Turing Machine stays within its strip. The tile types for Layer 2 include $\Box$, $X$, and $\#$. Any square in which $X$ is directly above or below a tile that is not $X$ or $\Box$ is illegal. Similarly for $\#$, any square in which $\#$ is directly above or below a tile that is not $\#$ or $\Box$ is illegal. Since the $\Box$ tiles are only around the perimeter of the grid, the $X$'s and the $\#$'s must form columns in the interior of the grid. An example is shown in Figure \ref{fig-strips}. \begin{figure}[ht] \centering \includegraphics[width=3.0in]{Strips.png} \caption{A schematic of a Layer 2 tiling. Each strip between the columns of X's will contain a tiling corresponding to an independent Turing Machine computation.
In the example shown here, there are three independent computations.} \label{fig-strips} \end{figure} There will also be tile types representing the execution of a Turing Machine in between the two columns of $X$ tiles. The tape symbols for the Turing Machine will be $\{S, 0, 1, B, T\}$ and the states will be $\{q_r, q_l\}$. Thus, there will be tile types for each of the tape symbols (called {\it tape} tiles) and tile types for state-symbol pairs (called {\it head} tiles) of the form ($q/c$) where $q$ is a state and $c$ is a tape symbol. The relationship between Layer $1$ and Layer $2$ tiles is summarized below. The rules will enforce the condition that for any tile directly below a $\Box$ tile, if the tile type in Layer $1$ is as indicated on the left, then the tile type for Layer $2$ must be one of the choices on the right. \begin{eqnarray*} \# & \rightarrow & \#\\ \mbox{Tile type}~t \neq \#~\mbox{and}~ w(t) > 0 & \rightarrow & X\\ \mbox{Tile type}~t \neq \#~\mbox{and}~ w(t) = 0 & \rightarrow & B~\mbox{or}~(q_l/S)~\mbox{or}~T\\ \end{eqnarray*} The translation rules enforce that the intervals in the last row of Layer $1$ are preserved in the first row of Layer $2$, except for intervals of size $1$ which correspond to a tile of weight $2$ in Layer $1$. In addition, Figure \ref{fig-bottomRowRulesL2} shows the initialization rules for Layer $2$. The meaning of the graph is that any square with $\Box~\Box$ in the top row and two Layer $2$ tiles $t_1~t_2$ in the bottom row that do not correspond to an edge from $t_1$ to $t_2$ in the graph is illegal. This resolves the ambiguity in whether a weight-$0$ tile in Layer $1$ gets copied to a $B$, $T$, or a $(q_l/S)$. If an interval has no illegal initialization squares, the interval could be $X~X$ or $X~T~X$. These are the only possibilities for intervals of size $2$ or $3$. If an interval has no illegal initialization squares and has size at least $4$, then the interval must have the form $X~(q_l/S)~B^*~T~X$. \begin{figure}[ht] \centering \includegraphics[width=3.0in]{L2_init.png} \caption{These rules constrain the contents of the first row in Layer 2.} \label{fig-bottomRowRulesL2} \end{figure} Figure \ref{fig-correctTranslation} shows an example of a possible last row for Layer $1$ and its correct translation to the first row of Layer $2$. \begin{figure}[ht] \centering \includegraphics[width=4.7in]{IntervalTranslation.png} \caption{An example showing the last row from Layer $1$ and its correct translation to the first row of Layer $2$. The intervals in the two rows are shown with arrows. Every interval in Layer $1$ is translated to an interval of equal size in Layer $2$, except for the interval of size $1$.} \label{fig-correctTranslation} \end{figure} The Turing Machine that is executed within each strip continually increments a binary counter that is written in reverse on the tape. We call this the Binary Counter Turing Machine. The rules are summarized below. \begin{eqnarray*} \delta(q_l, S) & = & (S, q_r, R) \\ \delta(q_r, 1) & = & (0, q_r, R)\\ \delta(q_r, b) & = & (1, q_l, L)~\mbox{for}~ b \in \{0, B\}\\ \delta(q_l, b) & = & (b, q_l, L) ~\mbox{for}~ b \in \{0, 1\}\\ \delta(q, T) & = & (T, q, - )~\mbox{for}~ q \in \{q_r, q_l\}\\ \end{eqnarray*} In a single iteration, the head starts pointing to the $S$ in state $q_l$. It transitions to $q_r$ and moves right. The head then moves right (in state $q_r$) changing $1$'s to $0$'s until a $0$ or $B$ is encountered.
The $0$ or $B$ is overwritten with $1$ and the head transitions to $q_l$ and moves left until the $S$ is reached again. If the head ever reaches the $T$ symbol at the right end of the interval, the Turing Machine enters an infinite loop and never changes state again. The TM rules are translated into legal and illegal squares as described in Section \ref{sec-TM2Tile}. The $X$ tile is treated as a tape symbol. For example, the rule $\delta(q_l, S) = (S, q_r, R)$ would mean that the square shown is legal. (Recall that the computation for Layer $2$ goes from top to bottom.) \vspace{.1in} \begin{tabular}{|c|c|} \hline $X$ & $(q_l/S)$ \\ \hline $X$ & $S$ \\ \hline \end{tabular} \vspace{.1in} The rule $\delta(q_r, T) = (T, q_r, -)$ would mean that the square shown is legal. \vspace{.1in} \begin{tabular}{|c|c|} \hline $(q_r/T)$ & $X$ \\ \hline $(q_r/T)$ & $X$\\ \hline \end{tabular} \vspace{.1in} The tape symbol $S$ prevents the head from trying to move left into the $X$ on the left and the symbol $T$ prevents the head from moving into the $X$ on the right. Thus, if the strip is not wide enough for the computation, the head reaches the $T$ symbol and the computation eventually gets stuck and does not advance. This does not cause any additional cost, so for every strip in which the computation starts in configuration $(q_l/S)~B^*~T$, there is always a unique way to tile that strip so that it does not contain any illegal squares. If the goal is to produce a string $x$ in the last row of Layer $2$, one could calculate the number $N$ such that after $N-3$ steps, the Turing Machine is in state $(q_l/S)$ and the contents of the counter are $x$. Note that the contents of the counter always end in $1$, so in order to produce an arbitrary string $x$, one can produce $x1$ and then ignore the last bit. The lemma below implies that the function mapping $x$ to $N$ is polynomial time computable and is the function used for the reduction. \begin{lemma} \label{lem-bctm} \ifshow {\bf (lem:bctm)} \else \fi {\bf [Number of Steps Used by the Binary Counter TM]} Consider a binary string $x$. Let $x^R$ denote the reverse of the string $x$. For a binary string $y$, let $n(y)$ denote the value of the number whose binary representation is $y$ and let $w(y)$ denote the number of $1$'s in $y$. Then the number of steps required by the Binary Counter Turing Machine to write the string $x1$ and end up with the head pointing to $S$ is $4n(1x^R) - 2w(x1)$. \end{lemma} \begin{proof} Let $f(n)$ be the number of steps until the Turing Machine ends up in configuration $(q_l/S)~0^n~1$. $f(0) = 2$. In order to end up in configuration $(q_l/S)~0^n~1$, the BCTM must first reach $(q_l/S)~1^n~B$. Then it takes $2(n+1)$ steps to complete the next increment step and reach $(q_l/S)~0^n~1$. The number of steps to reach $(q_l/S)~1^n~B$ is $f(n-1) + f(n-2) + \cdots + f(0)$. Therefore the function $f$ obeys the recurrence: $$f(n) = \sum_{j=0}^{n-1} f(j) + 2(n+1).$$ The solution to this recurrence relation is $f(n) = 2(2^{n+1}-1)$. If the bits of $x$ are numbered from left to right, so that the tape contents are $x_1 x_2 \cdots x_{n}1$, then the number of steps to reach $(q_l/S)~x1$ is $$2(2^{n+1} - 1) + \sum_{j=1}^{n} x_j \cdot 2(2^j - 1) = 4 \cdot 2^n + \sum_{j=1}^n 4 x_j \cdot 2^{j-1} - 2 w(1x) = 4 n(1x^R) - 2 w(x1).$$ \end{proof} \subsubsection{Layer 2 Intervals} We now need to extend the definition of intervals, as well as clean and corrupt intervals, to Layer $2$. In Layer 2, an interval begins with an $X$ tile and extends to the right up to and including the next $X$ tile.
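As an aside, the behaviour of the Binary Counter Turing Machine and the step count of Lemma \ref{lem-bctm} can be sanity-checked with the following minimal simulation sketch. The sketch is ours and is not part of the construction; it assumes an unbounded tape of $B$'s, i.e.\ an interval wide enough that the head never reaches $T$.

\begin{verbatim}
# Simulation sketch of the Binary Counter Turing Machine (an illustration,
# not the tiling rules).  The tape is S followed by an unbounded run of B's,
# so the head never reaches a T.  Whenever the head returns to S in state
# q_l, the tape contents x1 are recorded with the current step count, which
# is then checked against the 4*n(1 x^R) - 2*w(x1) formula of the lemma.

def simulate(max_steps):
    tape = {0: 'S'}                      # absent cells implicitly hold 'B'
    head, state, steps = 0, 'ql', 0
    first_seen = {}                      # tape contents -> first step count
    while steps <= max_steps:
        if state == 'ql' and head == 0:
            contents = ''.join(tape.get(i, 'B')
                               for i in range(1, max(tape) + 1))
            if contents:
                first_seen.setdefault(contents, steps)
        c = tape.get(head, 'B')
        if state == 'ql' and c == 'S':      # delta(ql, S) = (S, qr, R)
            state, head = 'qr', head + 1
        elif state == 'qr' and c == '1':    # delta(qr, 1) = (0, qr, R)
            tape[head] = '0'; head += 1
        elif state == 'qr' and c in '0B':   # delta(qr, 0/B) = (1, ql, L)
            tape[head] = '1'; state, head = 'ql', head - 1
        elif state == 'ql' and c in '01':   # delta(ql, 0/1) = (-, ql, L)
            head -= 1
        steps += 1
    return first_seen

def predicted(x1):                       # x1 = x followed by the final 1
    x = x1[:-1]
    n_1xR = int('1' + x[::-1], 2)        # n(1 x^R)
    return 4 * n_1xR - 2 * x1.count('1')

for x1, steps in simulate(300).items():
    assert steps == predicted(x1), (x1, steps)
    print(x1, steps)   # e.g. '1' 2, '01' 6, '11' 8, '001' 14, ...
\end{verbatim}

Each printed line is a counter string (written in reverse, ending in $1$) together with the first step at which the head is back on $S$ in state $q_l$, matching the formula of Lemma \ref{lem-bctm}.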
Note that since heavy tiles on Layer 1 get translated to $X$'s on Layer 2, the intervals of size greater than $1$ stay intact if the translation is done correctly. The translation is done at the top end of the grid, so the last row for Layer $1$, which is also the first row for Layer $2$, is row $r_{N-2}$. Row $r_{N-1}$ is the top row of the grid which contains all $\Box$ tiles. An interval in the first row of Layer $2$ is clean if the interval does not contain any illegal translation or initialization squares spanning rows $r_{N-2}$ and $r_{N-1}$ and the corresponding interval in the last row of Layer 1 was also clean. Otherwise the interval is corrupt. Lower down in the tiling, an interval in a row of Layer $2$ is clean if the sequence of tiles is also a clean interval in the row above it and there are no illegal computation squares spanning the two consecutive rows in that interval. Since clean intervals do not move or change in size, two clean intervals in rows $r_t$ and $r_{t+1}$ have the same tag (i.e. they ``correspond'') if they occupy the same set of locations in their respective rows. Let $F_2$ be the number of illegal squares in Layer $2$ of a tiling plus the number of illegal translation and initialization squares between Layers $1$ and $2$. \begin{lemma} \label{lem-L2analysisGap} \ifshow {\bf (lem:L2analysisGap)} \else \fi {\bf [Number of Clean Intervals Lost from Layer $1$ to Layer $2$]} Let $T_1$ be the set of tags corresponding to clean intervals of size at least $2$ in the last row of Layer $1$. Let $T_2$ be the set of tags corresponding to clean intervals in the last row of Layer $2$. Then $T_2 \subseteq T_1$ and $|T_1 - T_2| \le F_2$. The size and location of a clean interval with tag $j$ is the same in the last row of Layer $1$ and any row in Layer $2$. \end{lemma} \begin{proof} We will account for any changes in the set of clean intervals from the last row of Layer $1$ to the first row of Layer $2$ by illegal translation squares in each interval. Note that since two neighboring intervals only overlap on one tile, an illegal square can be contained in at most one interval. If an interval of size at least $2$ is clean in the last row of Layer $1$, then that sequence of tiles consists of a heavy tile, followed by a sequence of weight-$0$ tiles, followed by a final heavy tile. Since heavy tiles are translated into $X$ tiles and weight-$0$ tiles are translated into non-$X$ tiles, then if the sequence is correctly translated, it results in an interval in the first row of Layer $2$. In this case, the two intervals occupy the same locations in the row and have the same tag. If a clean interval at the end of Layer $1$ does not correspond to a clean interval in Layer $2$, then the interval must contain an illegal translation square. Similarly, consider a clean interval in row $r_t$ of Layer $2$. The interval starts with an $X$, is followed by a sequence of non-$X$ tiles, and finally ends with an $X$ tile. If there are no illegal squares in the interval spanning rows $r_t$ and $r_{t-1}$, then in row $r_{t-1}$, the sequence begins and ends with $X$ and only has non-$X$ tiles in between and therefore corresponds to an interval. This follows from the fact that any square with an $X$ tile above or below a non-$X$ tile is illegal. The interval is clean in $r_t$ only if the interval is clean in $r_{t-1}$, in which case the two intervals occupy the same set of locations in their respective rows and have the same tag.
A clean interval in row $r_t$ that does not correspond to a clean interval in row $r_{t-1}$ must contain an illegal computation square spanning rows $r_t$ and $r_{t-1}$. \end{proof} \begin{lemma} \label{lem-L2contents} \ifshow {\bf (lem:L2contents)} \else \fi {\bf [Contents of Each Clean Interval at the End of Layer $2$]} Consider a tiling of an $N \times N$ grid, where $N = 4n(1x^R) - 2 w(x1) + 3$ for some binary string $x$. Then every clean interval in the last row of Layer $2$ of size at least $4$ that does not contain a $(q_r/T)$ or $(q_l/T)$ tile has the form: $$X~(q_l/S)~x1~B^*~T~X$$ Moreover, every clean interval in the last row of Layer $2$ of size at least $\log N + 5$ does not contain a tile of the form $(q_r/T)$ or $(q_l/T)$. \end{lemma} \begin{proof} If an interval is clean in the first row of Layer $2$ (row $r_{N-2}$), then there are no illegal initialization squares in that interval spanning $r_{N-1}$ and $r_{N-2}$, which means that the interval must correspond to a path in the graph depicted in Figure \ref{fig-bottomRowRulesL2}. The interval begins and ends with $X$ and has no $X$ tiles in the middle. The only possible path in the graph in Figure \ref{fig-bottomRowRulesL2} that begins and ends with $X$ with no other intervening $X$'s and has length at least $4$ corresponds to $X~(q_l/S)~B^*~T~X$. By induction on $t$, if the interval is clean in row $r_{N-2-t}$, then it is clean in rows $r_{N-2}$ through $r_{N-2-t}$ and the contents of the interval in row $r_{N-2-t}$ represent the configuration of the Turing Machine after $t$ steps, starting with $X~(q_l/S)~B^*~T~X$. If an interval is clean in row $r_1$, then it represents the state of the Turing Machine after $N-3$ time steps, starting with $X~(q_l/S)~B^*~T~X$. If at any point during these $N-3$ time steps the head reached the $T$ at the right end of the interval, then the head will stay in that position and the interval will contain a $(q/T)$ tile in row $r_1$. If the interval in row $r_1$ does not contain a $(q/T)$ tile, then the head never reached the $T$ during the first $N-3$ time steps. This implies that the configuration is the same as it would have been if there had been an infinite sequence of $B$ symbols to the right of $(q_l/S)$ in the initial time step. By Lemma \ref{lem-bctm}, the contents of the interval will be $X~(q_l/S)~x1~B^*~T~X$ in row $r_1$. After $N-3$ time steps, by Lemma \ref{lem-bctm}, the length of the counter is at most $\log N+1$. Therefore, the number of tape symbols that the head has reached is at most $\log N + 2$, including the $S$ to the left of the counter. If the interval has size at least $\log N + 5$, then excluding the $X$ on the left end and the $T~X$ on the right end, there are at least $\log N + 2$ tiles. This is enough room for the Turing Machine to complete $N-3$ steps without reaching the $T$ on the right end of the interval, which means that the interval in row $r_1$ does not contain a $(q/T)$ tile. \end{proof} \subsection{Layer $3$ for Gapped Weighted Tiling} \label{sec-GWT} In this subsection, we describe the translation rules from Layer $2$ to Layer $3$ and give a high level description of the Turing Machine that operates within each strip. For every $t \in \{X, \Box, S, T, B, 0, 1, \# \}$, a $t$ tile is translated to another $t$ tile from Layer $2$ to Layer $3$. The translation of a head tile of the form $(q/c)$ depends on the tape symbol $c$.
For any state $q$ and tape symbol $c$ in the Layer $2$ Turing Machine, the following translation rules apply: \begin{eqnarray*} (q/T) & \rightarrow & (q_{s1}/T)\\ \mbox{for}~c \neq T, (q/c) & \rightarrow & (q_{s2}/c) \end{eqnarray*} To summarize, in translating from Layer $2$ to Layer $3$, the state information from Layer $2$ is lost and the new state depends only on whether the head of the Turing Machine in Layer $2$ reached the $T$ on the right end of the interval. If the head is pointing to $T$, then the new state on Layer $3$ is $q_{s1}$; otherwise the new state is $q_{s2}$. The tape symbols are translated without change. The Turing Machine that starts in state $q_{s1}$ repeatedly executes the single move $\delta(q_{s1}, T) = (q_{s1}, T, -)$ until the last row of Layer $3$. Thus if a computation in Layer $2$ reaches the $T$ at the right end of its interval, then it remains stuck for the rest of Layer $3$. As long as each step is executed correctly, there are no additional costs in these small intervals. Recall that for any language $L$ in $\mbox{NEXP}$, we would like to construct a set of tiling rules and a mapping from any string $x$ to a number $N$ such that if $x \in L$, then there is a way to tile the $N \times N$ grid with zero cost (no illegal pairs or squares) and if $x \not\in L$, then any tiling of the $N \times N$ grid will require cost that is $\Omega(N^{1/4})$. The Turing Machine that starts in state $q_{s2}$ will guess a witness $w$ and launch the verifier Turing Machine for a language $L \in \mbox{NEXP}$ with input $x$ and witness $w$, where $x1$ is the string written on the tape at the end of Layer $2$. Note that the Turing Machine in Layer $2$ always produces a string that ends in $1$, so in order to produce an arbitrary string, the last bit of the string produced is ignored. If at the end of Layer $2$, the interval is clean and the head has not reached the $T$ at the right end of the interval, then according to Lemma \ref{lem-L2contents}, it has the correct $x$ written on the tape of the Turing Machine. The second Turing Machine (that starts in $q_{s2}$) will also have the rule $\delta(q, T) = (q, T, -)$ for any $q$. Thus, intervals which are not wide enough to complete the computation of $V$ on $(x, w)$ (for any guess $w$) can be tiled without any additional cost. If the computation is able to complete and accepts, then there is no cost. Any square that contains a rejecting state of the Turing Machine $V$ will incur a {\em rejection cost}. So an interval that is clean at the beginning of Layer $3$ and is wide enough to perform the computation that ends up in a rejecting state will contain at least one illegal square or square with a rejection cost. {~} {\bf Costs of tiles and the perimeter tiles} {~} Recall that the tile types consist of border tiles $\Box$ or interior tiles. Each interior tile is specified by its tile type for each of the three layers. For any configuration of four tiles arranged in a square, let $p = 1$ if the bottom two tiles are an illegal pair for Layer $1$ and let $f_i = 1$ if the square is an illegal square for Layer $i$ or an illegal translation square from Layer $i-1$ to Layer $i$. Let $r = 1$ if the square contains a rejecting state for Layer $3$. The values $p$, $r$, and $f_i$ are $0$ otherwise. The cost for that square is then $p + f_1 + f_2 + f_3 + r$.
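In code, this cost accounting might be sketched as follows. This is an illustration only, with placeholder predicates: the functions \texttt{is\_illegal\_pair}, \texttt{is\_illegal\_square} and \texttt{is\_rejecting} stand in for the layer-specific rules and are assumptions of the sketch, not part of the construction.

\begin{verbatim}
# Sketch of the per-square cost p + f1 + f2 + f3 + r and its aggregation
# over the grid.  The three predicates on `rules` are placeholders for the
# layer-specific legality rules described in the text.

def square_cost(square, rules):
    # square = ((bottom_left, bottom_right), (top_left, top_right)),
    # where each tile carries one tile-type component per layer.
    bottom_pair = square[0]
    p = int(rules.is_illegal_pair(bottom_pair))          # Layer-1 pair
    f = sum(int(rules.is_illegal_square(square, layer))  # f1 + f2 + f3; for
            for layer in (1, 2, 3))                      # layers 2 and 3 this
                                                         # also flags illegal
                                                         # translation squares
    r = int(rules.is_rejecting(square))                  # Layer-3 rejecting state
    return p + f + r

def tiling_cost(grid, rules):
    # Sum over all (N-1) x (N-1) overlapping 2x2 squares of the N x N grid.
    n = len(grid)
    return sum(square_cost(((grid[i][j], grid[i][j + 1]),
                            (grid[i + 1][j], grid[i + 1][j + 1])), rules)
               for i in range(n - 1) for j in range(n - 1))
\end{verbatim}

Grouping the sum by layer recovers the identity used in the next paragraph.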
If $F_i$ is the number of illegal pairs or squares in Layer $i$ in a tiling, and $R$ is the number of squares on Layer $3$ that contain rejecting states, then the cost of that tiling is $F_1 + F_2 + F_3 + R$. The last technical point that we need to address before proving the hardness result for Gapped Weighted Tiling is the assumption that the perimeter of the grid consists of $\Box$ tiles and that there are no $\Box$ tiles on the interior of the grid. Towards this end, we create four types of $\Box$ tiles: NW, NE, SE, SW. The designation of a square that contains a $\Box$ tile as legal or illegal does not depend on the type of the $\Box$ tile. Let $C = 21$. We will adjust the costs for each square by adding the following amount to the cost of a square if a tile of the given type is in that location of the square: {~} \begin{tabular}{|c|c|c|c|c|} \hline & upper left & upper right & lower right & lower left\\ \hline \hline NW-$\Box$ & -C & & +2C & \\ \hline NE-$\Box$ & & -C & & +2C \\ \hline SE-$\Box$ & +2C & & -C & \\ \hline SW-$\Box$ & & +2C & & -C \\ \hline \end{tabular} \begin{lemma} \label{lem-border} {\bf [Validating the Assumption About Perimeter Tiles]} In any minimum cost tiling, the perimeter of the grid will consist of $\Box$ tiles and no border tile will be contained in the interior of the grid. Moreover, there is a way to tile the perimeter with $\Box$ tiles so that the total contribution due to the benefits and penalties from $\Box$ tiles is $-4C(N-1)$. \end{lemma} \begin{proof} The cost of any square before the adjustments due to the border tiles is at most $5$. Since each tile participates in at most four squares, changing a tile can cause the cost to change by at most $20$, ignoring the penalties and benefits due to the $\Box$ tiles. Consider a tiling of the grid with at least one $\Box$ tile on the interior. If a $\Box$ tile on the interior is changed to a non-$\Box$ tile, the cost will increase by at most $20$ due to changes in legal/illegal squares, and one square will lose the $-C$ benefit of having a $\Box$ tile in one of its corners. This amounts to a total increase of at most $20+C$. However, at least one square will lose the $2C$ penalty of having a border tile in the wrong corner. The change in the cost of the tiling will be at most $(20+C)-2C = 20 - C$, which is negative. If there is a non-$\Box$ tile on the perimeter, then replace that tile with a type of $\Box$ tile that will get the $-C$ benefit to the cost and no $2C$ penalty. The cost will increase by at most $20$ due to changes in legal/illegal squares, so the total change in cost will be at most $20-C$, which is negative. These changes can be continued until there are no $\Box$ tiles on the interior and only $\Box$ tiles on the perimeter. Each swap decreases the cost of the tiling. For each location on the border, there is a type of $\Box$ tile such that placing that type of $\Box$ tile in that location will result in one square with the $-C$ benefit and no squares with the $2C$ penalty. Since there are $4(N-1)$ tiles on the perimeter, the claim follows. \end{proof} We are now ready to prove the hardness result for Gapped Weighted Tiling: \begin{theorem} \label{th-GWT} \ifshow {\bf (th:GWT)} \else \fi $f(n)$-GWT in $2$-dimensions is $\mbox{NEXP}$-hard for some $f(n)$ that is $\Omega(n^{1/4})$.
\end{theorem} \vspace{.1in} \begin{proof} Given a binary string $x$, let $N$ be the number such that after $N-3$ steps of the Binary Counter Turing Machine from Layer $2$, the contents of the tape are $x1$ and the head is pointing to $S$ in state $q_l$. Note that the string representing the binary counter in the Layer $2$ Turing Machine always ends in $1$, so to get an arbitrary string $x$, we pick $N$ to produce $x1$ and then ignore the last $1$. According to Lemma \ref{lem-bctm}, the function mapping $x$ to $N$ is polynomial time computable and $|x| \le \log N$. If $x \in L$, then there is a way to tile the grid with cost $-4C(N-1)$. This is achieved by first tiling the perimeter with $\Box$ tiles so that the total benefit from the $\Box$ tiles is $-4C(N-1)$. For the interior, every Turing Machine in every layer is executed without a fault. This means that every interval at the end of Layer $2$ is a clean interval. If the interval contains $(q/T)$ at the end of Layer $2$, the state is translated to $q_{s1}$ in Layer $3$, which initiates the Turing Machine that stays in the same state, incurring no cost. For all the intervals that do not contain $(q/T)$ at the end of Layer $2$, the state is translated to $q_{s2}$ in Layer $3$. According to Lemma \ref{lem-L2contents}, these intervals all have the correct binary string $x$, which is translated to Layer $3$ and serves as the input to the second Turing Machine (that starts in $q_{s2}$). If the head hits the right end of the interval in the Layer $3$ computation, then the Turing Machine stays in the same state for all the remaining steps, incurring no cost. The remaining intervals that are wide enough to complete a computation of $V$ will guess the correct witness $w$ and will accept on input $x$ and $w$. Since accepting computations do not incur a cost, the interior contributes no cost and the total cost of the tiling is $-4C(N-1)$. Now suppose that $x \not\in L$. Because of Lemma \ref{lem-border}, we can assume that the minimum cost tiling has only $\Box$ tiles along the perimeter and no $\Box$ tiles on the interior of the grid. The total benefit from the border tiles is at best $-4C(N-1)$. We will prove that the cost due to legal/illegal squares or rejecting computations will be at least $N^{1/4}/c$ for some constant $c$. For $i \in \{1, 2, 3\}$, let $F_i$ denote the number of illegal squares and pairs in Layer $i$. For $i \ge 2$, $F_i$ also includes the number of illegal translation squares from Layer $i-1$ to Layer $i$. By Lemma \ref{lem-L1gapped}, at the end of Layer $1$, there will be at least $N^{1/4}/4 - 44F_1 -3$ clean intervals of size at least $N^{1/4}/4$. By Lemma \ref{lem-L2analysisGap}, the number of clean intervals of size at least $2$ decreases by at most $F_2$ from the end of Layer $1$ to the end of Layer $2$. Therefore, the number of clean intervals of size at least $N^{1/4}/4$ at the end of Layer $2$ will be at least $N^{1/4}/4 -44F_1 - F_2 -3$. According to Lemma \ref{lem-L2contents}, as long as $N^{1/4}/4 \ge \log N + 5$, these intervals will all have the correct $x$ at the end of Layer $2$. $|x| = n$ is at most $\log N$, which means that there is a $\delta$ such that for large enough $n$, $2^{\delta n} \le N^{1/4}/4$. Therefore, in any interval of size at least $N^{1/4}/4$, the computation in Layer $3$ has enough room to finish.
Thus, if $x \not \in L$, then each interval that is clean at the end of Layer $2$ and has size at least $N^{1/4}/4$ will either contain an illegal translation square (from Layer $2$ to Layer $3$), an illegal square in Layer $3$ corresponding to an incorrect step of the Turing Machine, or a square containing a rejecting state. Therefore, $F_3 + R \ge N^{1/4}/4 -44F_1 - F_2 - 3$, so either $44F_1 + F_2 + F_3 + 3 \ge N^{1/4}/8$ or $R \ge N^{1/4}/8$. In either case, the total cost $F_1 + F_2 + F_3 + R$ is $\Omega(N^{1/4})$. \end{proof} \section{Function Weighted Tiling is $\mbox{FP}^{\mbox{NEXP}}$-complete} \label{sec-FWT} We now turn our attention to the Function Weighted Tiling Problem, which is to compute the cost of the minimum cost tiling for an $N \times N$ grid. We will show that FWT in $2$-dimensions is complete for $\mbox{FP}^{\mbox{NEXP}}$. \subsection{Containment} We will first argue that FWT is in $\mbox{FP}^{\mbox{NEXP}}$. Consider the decision problem whose input is a positive integer $N$ and a threshold $\tau$. The question is whether the minimum cost tiling for ${\cal{T}}$ on an $N \times N$ grid is less than or equal to $\tau$. This problem is in $\mbox{NEXP}$ since a tiling of cost less than or equal to $\tau$ is of size $N^2$ and can be checked in time $O(N^2)$. We can therefore use an oracle for $\mbox{NEXP}$ to binary search for the cost of the optimal tiling. The cost of any tiling is between $c_1 (N-1)^2$ and $c_2(N-1)^2$, where $c_1$ is the smallest cost for any square and $c_2$ is the largest cost for any square. Therefore, the number of queries will be $O(\log N)$, which is polynomial in the size of the input. \subsection{Hardness} An outline of the proof is given in Section \ref{sec-introfunction}. We will reduce from a function $f \in \mbox{FP}^{\mbox{NEXP}}$. Let $M$ denote the polynomial-time Turing Machine that computes $f$. $M$ has access to an oracle for a language $L' \in \mbox{NEXP}$. We will use $V$ to denote the exponential time verifier for $L'$. We will need to bound the space and running time used by the verifier $V$ as well as the size of the output $f(x)$ and the number of oracle calls made by $M$. This can be achieved by a standard padding argument. We borrow the following version from \cite{AI21}, which has the elements we need: \begin{claim} \label{claim-pad2} \ifshow {\bf (claim:pad2)} \else \fi {\bf [Padding Argument: Lemma 2.30 from \cite{AI21}]} If $f \in \mbox{FP}^{\mbox{NEXP}}$, then for any constants $c_1$ and $c_2$, $f$ is polynomial time reducible to a function $g \in \mbox{FP}^{\mbox{NEXP}}$ such that $g$ can be computed by a polynomial-time Turing Machine $M$ with access to an $\mbox{NEXP}$ oracle for a language $L$. The verifier for $L$ is a Turing Machine $V$. Moreover, on input $x$ of length $n$, $M$ runs in $O(n)$ time and makes at most $c_1 n$ queries to the oracle. Also, the length of the queries made to the oracle is at most $c_1 n$ and the running time of $V$, as well as the size of the witness required for $V$ on any query made by $M$, is $O(2^{c_2n})$. In addition, the length of the output of $g$ is at most $c_1 n$. \end{claim} \subsection{Analysis of Layer $2$ for Weighted Tiling Parity} In the last row of Layer $2$, the clean intervals can be designated as {\em long-form} or {\em short-form}. The long-form intervals are wide enough to complete the computation of the Binary Counter Turing Machine from Layer $2$. In the short-form intervals, the head gets stuck on the right end of the interval.
These short-form intervals will not cause any cost to the overall tiling, assuming they do not contain any illegal pairs or squares. \begin{definition} {\bf [Long-form and Short-form Intervals]} In the last row of Layer $2$, a clean interval is a short-form interval if it has size $2$ or $3$ or contains a $(q/T)$ tile. Otherwise, it is a long-form interval. \end{definition} \begin{lemma} \label{lem-L2analysis} \ifshow {\bf (lem:L2analysis)} \else \fi {\bf [Summary of Analysis of Layer $2$]} Consider a tiling of an $N \times N$ grid, where $N = 4n(1x^R) - 2 w(x1) + 3$ for some string $x$. Let $F_2$ be the number of illegal squares and pairs in Layer $2$. Let $F_1$ denote the number of illegal squares and pairs in Layer $1$. Let $r$ be the last row of Layer $2$ and let $S$ denote the set of sizes of the clean intervals in row $r$. Then if $F_1 \le N^{1/4}/40$, \begin{enumerate} \item $l(r) \le F_2 + 9 N^{1/2} + 2 N^{1/4}+1$. \item $|\{2,3,\ldots,\mu(N)+1\} - S| \le 44 F_1 + F_2 + 3$ \item Every clean interval of size at least $\log N + 5$ is a long-form interval. \item Every long-form interval has the form $X~(q_l/S)~x1~B^*~T~X$. \item Every short-form interval has the form $X~X$, $X~T~X$, or $X~S~0^*~(q/T)~X$. \end{enumerate} \end{lemma} \begin{proof} Any increase in length from the last row of Layer $1$ to the first row of Layer $2$ must correspond to a $\#$ in Layer 1 that was not translated to a $\#$ in Layer 2, which would be contained in an illegal translation square. Any increase in length from row $r_{t}$ to row $r_{t-1}$ must come from a $\#$ tile in $r_{t}$ that has a non-$\#$ below it in $r_{t-1}$. Both squares containing the pair of vertically aligned tiles are illegal. Therefore, an increase in the length from the last row of Layer $1$ to the last row of Layer $2$ is accounted for by an illegal square in Layer $2$. The bound given in Item $1$ on the length of the last row in Layer $2$ is the expression from Lemma \ref{lem-lengthUB}, which is the upper bound on the length of the last row of Layer $1$, plus $F_2$. Note that Lemma \ref{lem-lengthUB} requires that $F_1 \le N^{1/4}/40$, which is assumed in this lemma as well. Let $r'$ be the last row in Layer $1$ and let $S'$ be the set of sizes of the clean intervals in row $r'$. By Lemma \ref{lem-analysisL1}, $|\{2,3,\ldots,\mu(N)+1\} - S'| \le 44 F_1 + 3$. Also by Lemma \ref{lem-L2analysisGap}, the set of the sizes of the clean intervals in row $r'$ is the same as in row $r$, except that all the intervals of size $1$ are dropped and at most $F_2$ clean intervals are dropped. Therefore $|\{2,3,\ldots,\mu(N)+1\} - S| \le 44 F_1 + F_2 + 3$. Items $3$ and $4$ follow from Lemma \ref{lem-L2contents}. To prove Item $5$, recall that Figure \ref{fig-bottomRowRulesL2} shows the initialization rules for Layer $2$. The only possible interval of size $2$ is $X~X$ and the only possible interval of size $3$ is $X~T~X$. Since these intervals do not contain a head tile, they will remain unchanged through the last row of Layer $2$ unless they contain an illegal square somewhere in Layer $2$. Finally, if an interval has size at least $4$ and does not contain any illegal translation or initialization squares (a requirement for being a clean interval), then it has the form $X~(q_l/S)~B^j~T~X$ in the first row of Layer $2$, where $j$ is a non-negative integer. If the interval remains clean until the last row of Layer $2$, then the interval represents the state of the Binary Counter Turing Machine after $N-3$ steps.
If the interval contains a $(q/T)$ tile, then the head must have reached the $T$ at the right end of the interval. Since the head always starts moving left when it encounters a $0$, the head can only reach the $T$ if all of the bits of the counter are $1$. From the state $X~(q_l/S)~1^j~T~X$, the head sweeps right, changing all the $1$'s to $0$'s. Then when it reaches the $T$, it remains stuck in that location for the remainder of the computation, resulting in $X~S~0^j~(q/T)~X$. \end{proof} The running time of the Outer Loop in Layer $3$ will be bounded by a function of the length of the longest binary string in the last row of Layer $2$. Therefore, we would like to bound the number of consecutive tiles that are $0$ or $1$ as a function of $N$ and the number of illegal pairs and squares. Towards this goal, we will require the following additional lemma. \begin{lemma} \label{lem-boundBits} \ifshow {\bf (lem:boundBits)} \else \fi {\bf [Bounding the Length of Binary Strings from Layer $2$]} Consider a tiling of the $N \times N$ grid. Let $S$ be a sequence of $m$ consecutive tiles in the last row of Layer $2$ occupying locations $l$ through $l+m-1$. Then one of the following must hold: \begin{enumerate} \item The sequence contains a tile that is not $0$, $1$, $(q/0)$ or $(q/1)$. \item The sequence has length at most $\log N+3$. \item There is a row $r_t$ in Layer $2$ such that there is an illegal square in locations $l$ through $l+m-1$ spanning $r_t$ and $r_{t+1}$. \end{enumerate} \end{lemma} \begin{proof} For any $T$ or $(q/T)$ tile that is directly above a tile that is not a $T$ or $(q/T)$, the vertically aligned pair of tiles is contained in an illegal square to the left and to the right. The same holds for $S$ or $(q/S)$ tiles, as well as $X$ and $\#$ tiles. Therefore, if locations $l$ through $l+m-1$ contain a $\#$, $X$, $(q/S)$ or $T$ tile in the first row of Layer $2$ and those locations consist of only $0$ and $1$ tiles in the last row of Layer $2$, there must be an illegal square in those locations somewhere in Layer $2$. The only other possibility is for locations $l$ through $l+m-1$ to consist entirely of $B$ tiles in the first row of Layer $2$. Let $h_1, \ldots, h_r$ be the indices of the rows in which locations $l$ through $l+m-1$ do not contain a head tile. We will prove by induction on $j$ that the tiles in locations $l$ through $l+m-1$ are $x~B^{m-s}$, where $x \in \{0, 1\}^s$ and $N-2 - h_j \ge n(x^R)$. Recall that $n(x^R)$ is the numerical value of the binary number represented by $x^R$, the reverse of the string $x$. Since $n(x^R) \ge 2^{s-1}$, this means that $s = |x| \le \log N + 1$. As long as $m \ge \log N + 3$, there will be at least two $B$ tiles. As we will argue below, the appearance of a head tile can cause the number of $B$ symbols in the sequence to decrease by at most $1$, so if the last row of Layer $2$ contains a head tile, there will still be at least one $B$ tile in locations $l$ through $l+m-1$. The first row in Layer $2$ is $r_{N-2}$ and the tiles in locations $l$ through $l+m-1$ are $B^m$, so $h_1 = N-2$ and the claim holds. Now consider the sequence of rows $r_{h_{j-1}}$ through $r_{h_j}$. In the first and last rows of this sequence, locations $l$ through $l+m-1$ do not contain a head tile. In the other rows, locations $l$ through $l+m-1$ do contain a head tile. By induction, the tiles in locations $l$ through $l+m-1$ are $x~B^{m-s}$, where $x \in \{0, 1\}^s$ and $N-2 - h_{j-1} \ge n(x^R)$.
Therefore $|x| \le \log N + 1$ and, as long as $m \ge \log N + 3$, there are at least two $B$'s at the right end of the sequence of tiles $x~B^{m-s}$. If the head appears at the right end of the sequence in state $q_r$, it will change the $B$ to a $1$, but then will transition to $(q_l/B)$ and get stuck. If the head appears on the right end in state $q_l$, it gets stuck at the first step. Therefore, the head must appear at the left end of the sequence of tiles. If the head appears in state $q_l$, it will leave the sequence in the next step without changing $x$. If the head appears in state $q_r$ at the left end of the sequence, assuming the sequence of tiles does not contain any illegal squares, the head will sweep right, increment $x$ and then sweep left and leave the sequence of tiles. The value of $n(x^R)$ goes up by at most $1$, and the sequence of tiles is $x'~B^{m-|x'|}$, where $x'$ is the reverse of the binary representation of $n(x^R)+1$. Since $h_{j-1} \ge h_{j}+1$, we have: $$N-2 - h_{j} \ge N-2 - (h_{j-1}-1) \ge n(x^R) + 1 .$$ \end{proof} \subsection{Layer 3 for Weighted Tiling Parity} \label{sec-FEPlayer3} At the end of Layer $2$, each long-form clean interval contains a string $x$ that is deterministically computed from $N$, the size of the grid. When the string $x$ is translated to Layer $3$, each interval will also contain a non-deterministically chosen binary string $z$. The strings $x$ and $z$ will be co-located in the same $|x|$ tiles in the form of a string $y$ over $\{0, 1, 2, 3\}$. The string $z$ will represent a set of guesses for the oracle responses when the Turing Machine $M$ is run on input $x$. The role of Layer $3$ is to ensure that all the intervals make the same non-deterministic guess. Thus, Layer $3$ will represent a global computation (across all the intervals) that penalizes tilings that do not have a consensus for the guess $z$ over all the intervals. \subsubsection{Translation from Layer 2 to Layer 3} This subsection will describe the translation of tiles from the last row of Layer 2 to the first row of Layer 3. The translation rules for Layer $2$ in combination with the initialization rules for Layer $3$ ensure that long-form and short-form intervals are translated differently from Layer $2$ to Layer $3$. The translation rules from Layer $2$ to Layer $3$ are summarized below. The rules will enforce the condition that for any tile directly above a $\Box$ tile, if the tile type in Layer $2$ is as indicated on the left, then the tile type for Layer $3$ must be one of the choices on the right. The state $q$ represents any state in the Turing Machine for Layer $2$. \begin{eqnarray*} X & \rightarrow & \vartriangleleft~\mbox{or}~(q_r/\vartriangleright)~\mbox{or}~X\\ 0 & \rightarrow & 0~\mbox{or}~1~\mbox{or}~+\\ (q/0)& \rightarrow & 0~\mbox{or}~1\\ 1~\mbox{or}~(q/1) & \rightarrow & 2~\mbox{or}~3\\ S & \rightarrow & S~\mbox{or}~+\\ (q/S)& \rightarrow & S\\ (q/B)~\mbox{or}~ B & \rightarrow & B\\ T & \rightarrow & T\\ (q/T) & \rightarrow & +\\ \# & \rightarrow & \#\\ \end{eqnarray*} The intervals on Layer $3$ begin and end with a tile from the set $\{X, \vartriangleleft, \vartriangleright\}$ or any head tile $(q/c)$ where $c \in \{X, \vartriangleleft, \vartriangleright\}$. So if the last row of Layer $2$ is correctly translated to Layer $3$, then the intervals are preserved. The Turing Machine for Layer $3$ will also have the property that it does not change the intervals if executed correctly.
Therefore, the definition of clean and corrupt intervals can be naturally extended from Layer $2$ to Layer $3$. In the first row of Layer $3$, an interval is clean if there are no illegal squares (translation or initialization) in rows $r_1$ and $r_2$ and the corresponding interval in the last row of Layer $2$ was clean. An interval in a higher row $r_t$ of Layer $3$ is clean if there are no illegal squares in that interval in Layer $3$ spanning rows $r_{t-1}$ and $r_t$ and the interval was clean in row $r_{t-1}$. The tag for each interval also remains unchanged. If an interval is clean, it adopts the tag of the corresponding interval (occupying the same locations) in the previous row. Clean intervals also adopt the same long/short-form designation of the corresponding clean interval in the previous row. There is still a high degree of flexibility in a legal translation of the last row of Layer $2$ to the first row of Layer $3$. The initialization rules for Layer $3$, summarized in Figure \ref{fig-L3_init}, introduce some additional constraints. \begin{figure}[ht] \centering \includegraphics[width=3.5in]{L3_init.png} \caption{These rules constrain the contents of the first row in Layer 3.} \label{fig-L3_init} \end{figure} \begin{definition} {\bf [Functions $f_1$ and $f_2$ Mapping Base-$4$ Digits to Two Bits]} We will use ${\cal{D}}$ to denote the set $\{0, 1, 2, 3\}$. The digits in ${\cal{D}}$ are used to encode two separate bits. We will think of a string $y \in {\cal{D}}^n$ as mapping to two different binary strings based on the first and second bits in the binary encoding of each digit. Thus $f_1(y) = x$, where $x_i = 0$ if $y_i = 0$ or $1$, and $x_i = 1$ if $y_i = 2$ or $3$. Also, $f_2(y) = z$, where $z_i = 0$ if $y_i = 0$ or $2$, and $z_i = 1$ if $y_i = 1$ or $3$. \end{definition} \begin{lemma} \label{lem-longShort} \ifshow {\bf (lem:longShort)} \else \fi {\bf [Form of the Intervals Translated to Layer $3$]} Consider a tiling of an $N \times N$ grid, where $N = 4n(1x^R) - 2w(x1)+3$, for some binary string $x$. Let $t$ and $t'$ represent two tiles from the set $\{X, \vartriangleleft, (q_r/\vartriangleright)\}$. In the first row of Layer $3$, every clean short-form interval has the form $$t~t'~~\mbox{or}~~t~T~t'~~\mbox{or}~~t~+^*~t'.$$ Every clean long-form interval has the form: $$t~S~y ~B^*~T^*~t'$$ where $y \in {\cal{D}}^n$ and $f_1(y) = x1$. \end{lemma} \begin{proof} According to Lemma \ref{lem-L2analysis}, a clean short-form interval at the end of Layer $2$ has the form $X~X$, $X~T~X$ or $X~S~0^* ~(q/T)~X$. According to the translation rules, each $X$ must be translated to a tile from $\{X, \vartriangleleft, (q_r/\vartriangleright)\}$. Also, the $T$ must be translated to $T$. So if an interval at the end of Layer $2$ has the form $X~X$ or $X~T~X$ and it does not contain any illegal translation squares from Layer $2$ to Layer $3$, then it has the form $t~t'$ or $t~T~t'$, where $t$ and $t'$ are from the set $\{X, \vartriangleleft, (q_r/\vartriangleright)\}$. Alternatively, if the interval has the form $X~S~0^* ~(q/T)~X$, the $(q/T)$ tile must be translated to a $+$ tile. The $S$ could be translated to $S$ or $+$ and the $0$ tiles could be translated to $0$, $1$, or $+$. However, the initialization rules in Figure \ref{fig-L3_init} show that the only tiles that can go to the left of a $+$ are either another $+$ or a tile from $\{X, \vartriangleleft, (q_r/\vartriangleright)\}$.
Therefore, the only way to translate the tiles in the interval so that there are no illegal initialization or translation squares is to translate the interval to $t~+^*~t'$. According to Lemma \ref{lem-L2analysis}, a long-form interval will look like $X~(q_l/S)~x1~B^*~T~X$ in the last row of Layer $2$. The $(q_l/S)$ tile must be translated to an $S$ tile. This means that the translated interval cannot contain any $+$ tiles, because the only tiles that can be next to a $+$ tile are another $+$ tile or a tile from $\{X, \vartriangleleft, (q_r/\vartriangleright)\}$. Thus, the bits in $x1$ are translated non-deterministically, so that the resulting string $y$ has $f_1(y) = x1$. Note that $0$ must go to $0$ or $1$, and $1$ must go to $2$ or $3$. The $B$ tiles are translated to $B$ tiles and the $T$ tile to a $T$ tile. Thus, the resulting interval, if there are no illegal translation or initialization squares, will have the form $t~S~y ~B^*~T^*~t'$, where $f_1(y) = x1$ and $t$ and $t'$ are from $\{X, \vartriangleleft, (q_r/\vartriangleright)\}$. \end{proof} There is still ambiguity in how tiles are translated from Layer $2$ to Layer $3$. The horizontal rules described in the next subsection will constrain how an $X$ from Layer $2$ is translated to Layer $3$, since the translation rules allow any tile from the set $\{X, \vartriangleleft, (q_r/\vartriangleright)\}$. The final ambiguity is whether a $0$ is translated to $0$ or $1$ and whether a $1$ is translated to $2$ or $3$. Note that the first bit of $0, 1, 2, 3$ (expressed in binary) is the same as the underlying bit from Layer $2$, but the second bit can be chosen arbitrarily. The goal of the Turing Machine in Layer $3$ is to check that the second bits are all translated consistently across the intervals. There will be a large tiling cost if any of the strings are translated inconsistently. \subsubsection{Horizontal Rules for Layer $3$} Since the Turing Machine in Layer $3$ is a global computation that operates across the whole row and not just within a strip, we will need additional horizontal rules to ensure that a valid row has exactly one head tile and that the tape contents are bracketed on the left and right by $\vartriangleleft$ and $\vartriangleright$, respectively. The tape symbols for the Turing Machine will be: $$\Gamma = \{\vartriangleleft, \vartriangleright, X, 0, 1, 2, 3, \overline{0}, \overline{1}, \overline{2}, \overline{3}, B, S, T, + \}$$ We define a subset of the tape tiles $\Gamma' = \Gamma - \{ \vartriangleleft, \vartriangleright \}$. For every $c \in \Gamma'$, there will be a blue version of that tile and a red version of that tile: ${\color{blue} c}$ and ${\color{red} c}$. Figure \ref{fig-validConfigsL3} summarizes the horizontal rules for Layer $3$. The set $Q$ is the set of all states for the Turing Machine in Layer $3$. A pair of tiles $t_1~t_2$ is legal if there is an edge from the vertex containing $t_1$ to the vertex containing $t_2$ in the graph. All other pairs are illegal. A row of tiles for Layer $3$ is said to be {\em valid} if it does not contain any pair of adjacent tiles that is an illegal pair. Otherwise the row is {\em invalid}. The rules enforce that the $\Box$ tile on the left must be followed by a $\vartriangleleft$ symbol, followed by a sequence of tape symbols, followed by $\vartriangleright$, followed by a sequence of $\#$ tiles, followed by the final $\Box$ tile. In addition, exactly one of the tiles from the $\vartriangleleft$ to the $\vartriangleright$ must be a head tile.
Thus, in a valid row, the intervals all begin and end with an $X$ tile, except the leftmost interval which begins with $\vartriangleleft$ and the rightmost interval which ends with $\vartriangleright$. \begin{figure}[ht] \centering \includegraphics[width=4.0in]{ValidConfigGraphL3.png} \caption{This graph shows the horizontal rules for Layer $3$.} \label{fig-validConfigsL3} \end{figure} \begin{lemma} \label{lem-L3valid} \ifshow {\bf (lem:L3valid)} \else \fi {\bf [Properties of a Valid Row in Layer $3$]} A row is valid if and only if the row satisfies the following conditions: \begin{enumerate} \item The tape contents of the row have the form: $$\Box~\vartriangleleft~(\Gamma')^*~\vartriangleright~\#^*~\Box$$ \item Exactly one tile is a head tile. \item The head is located at one of the tiles from the $\vartriangleleft$ to the $\vartriangleright$. \item Any tiles from $\Gamma'$ to the left of the head tile are blue and any tiles from $\Gamma'$ to the right of the head tile are red. \end{enumerate} \end{lemma} \begin{proof} Any path in the graph in Figure \ref{fig-validConfigsL3} must pass through vertex $1$, $4$, or $7$ exactly once and therefore has exactly one head tile. In addition, the path goes through $1$ or $2$, then vertices from $3$, $4$, and $5$, followed by one of vertices $6$ or $7$ and then vertex $8$. Therefore, the tape contents of each such path must have the form $\Box~\vartriangleleft~(\Gamma')^*~\vartriangleright~\#^*~\Box$. There are no vertices in the graph in which the head is located at a $\Box$ or $\#$ tile, so the head must point to one of the symbols between the $\vartriangleleft$ and $\vartriangleright$ tiles. If vertex $3$ is reached, then it comes before vertex $4$ or $7$, which means that all the blue tiles from $\Gamma'$ precede the head tile. If vertex $5$ is reached, then it comes after vertex $1$ or $4$, which means that all the red tiles from $\Gamma'$ come after the head tile. For the converse, consider a row that satisfies all the properties in the lemma. If the head points to the $\vartriangleleft$ symbol, then it can be generated by a path of the form $1 \rightarrow 5^* \rightarrow 6 \rightarrow 8^*$. If the head points to a symbol from $\Gamma'$, then it can be generated by a path of the form $2 \rightarrow 3^* \rightarrow 4 \rightarrow 5^* \rightarrow 6 \rightarrow 8^*$. If the head points to the $\vartriangleright$ symbol, then it can be generated by a path of the form $2 \rightarrow 3^* \rightarrow 7 \rightarrow 8^*$. \end{proof} We describe the Turing Machine rules in the next section. It will be important that there is a unique next step for any Turing Machine configuration that corresponds to a valid row. \subsubsection{The Layer 3 Turing Machine} The Turing Machine in Layer 3 is global in the sense that it works across all the intervals. The tape alphabet is the set: $$\Gamma = \{\vartriangleleft, \vartriangleright, X, 0, 1, 2, 3, \overline{0}, \overline{1}, \overline{2}, \overline{3}, B, S, T, + \}$$ The symbols in $\{ \overline{0}, \overline{1}, \overline{2}, \overline{3}\}$ are called {\em marked} digits. For digit $j \in {\cal{D}}$, marking $j$ corresponds to replacing $j$ with $\overline{j}$ and unmarking $j$ corresponds to replacing $\overline{j}$ with $j$. Other than marking or unmarking digits, the Turing Machine never changes the information on the tape. The program of the Turing Machine consists of two nested loops. The TM simply runs the Outer Loop repeatedly for as many steps as it is allowed to run.
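Since the machine's sole purpose is to verify that the free second bits of the digit encoding were chosen consistently across intervals, it may help to see the encoding itself in executable form. The following Python sketch (our illustration, not part of the construction) implements $f_1$ and $f_2$ from the definition above and shows that translating the bits of $x1$ non-deterministically fixes $f_1(y)$ while leaving $f_2(y)$ free.
\begin{verbatim}
# Sketch of the functions f_1 and f_2: each base-4 digit encodes
# two independent bits.

def f1(y):
    # First bit of each digit: 0,1 -> 0 and 2,3 -> 1.
    return [0 if d in (0, 1) else 1 for d in y]

def f2(y):
    # Second bit of each digit: 0,2 -> 0 and 1,3 -> 1.
    return [0 if d in (0, 2) else 1 for d in y]

def translate(bits, guess):
    # Translate a bit string (here, x1) to digits, choosing the free
    # second bit of each digit according to the guess string.
    return [2 * b + g for b, g in zip(bits, guess)]

# Two different guesses give different digit strings y,
# but f1 recovers x1 from both.
x1 = [1, 0, 1]
for guess in ([0, 0, 0], [1, 0, 1]):
    y = translate(x1, guess)
    assert f1(y) == x1 and f2(y) == list(guess)
\end{verbatim}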
Consider the row at the beginning of an iteration of the Outer Loop. Let $l_1, \ldots, l_m$ be the locations of the $S$ tiles in this row. For each such $S$ tile, let $y_i$ be the string of digits immediately to the right of the $S$ tile. Note that $y_i$ could be the empty string if $S$ is followed by a non-digit. If the Outer Loop is executed without error, there will be a cost in an iteration of the Outer Loop for each $y_k$, where $k > 1$ and $y_k \neq y_1$. The digits in each string that have already been checked are marked. In an iteration of the Inner Loop, the Turing Machine reads the first unchecked digit in $y_1$ and checks that digit against the first unmarked digit in each of the other $y_j$'s. When a digit is checked, it becomes marked. Figure \ref{fig-OuterLoopL3} shows the steps of the Outer Loop in pseudo-code. Figure \ref{fig-TMrulesL3a} gives a table with all the Turing Machine rules. An iteration of the Outer Loop begins with the head just one space to the left of $\vartriangleright$. The Turing Machine sweeps left, unmarking all the digits until $\vartriangleleft$ is reached. This begins an iteration of the Inner Loop. The head sweeps right all the way from $\vartriangleleft$ to $\vartriangleright$, starting in state $q_{findS}$. When the first $S$ is encountered, the Turing Machine reads and remembers the next unmarked digit $j$ and transitions to $q_{1j}$. This is the next unchecked digit of $y_1$. The digit $j$ is checked. In state $q_{1j}$ the head is looking for an $S$, which indicates the beginning of the next $y$. After an $S$ is reached, it transitions to $q_{2j}$, indicating that it is looking for the next unmarked digit. When the next unmarked digit $k$ is found, it checks whether $j = k$. If $j \neq k$, a cost is incurred. The digit $k$ is marked and the head transitions to $q_{1j}$ in order to look for the next $S$. The Turing Machine also incurs a cost if it is in state $q_{2j}$ and a non-digit is encountered, indicating that the current $y$ being checked is shorter than $y_1$. When the $\vartriangleright$ is reached, the Turing Machine transitions to $q_{ret}$ and sweeps left to the $\vartriangleleft$ to begin a new iteration of the Inner Loop. An iteration of the Outer Loop ends when all the digits in $y_1$ have been checked. This happens when the state is $q_{read}$ and the head encounters a non-digit before an unmarked digit. The head then transitions to $q_{sweep}$ and sweeps right to the $\vartriangleright$. While the Turing Machine is in state $q_{sweep}$, any unchecked digit causes the Turing Machine to incur a cost. This happens if one of the strings $y_i$ is longer than $y_1$. When the head reaches $\vartriangleright$, it transitions to $q_{clear}$, which begins a new iteration of the Outer Loop. \begin{figure}[ht] \noindent \fbox{\begin{minipage}{\textwidth} \begin{tabbing} (1)~~ \= {\sc OuterLoop}: \\ (3) \> ~~~~~ \= Sweep left in state $q_{clear}$, unmarking every digit.\\ (4) \> \>When $\vartriangleleft$ is reached, transition to $q_{findS}$ and move right\\ (5) \> \> Start of {\sc Inner Loop}\\ (6) \> \> ~~~~~ \= Move right in state $q_{findS}$ until an $S$ or $\vartriangleright$ is reached\\ (7) \> \> \> ~~~~~ \= If $\vartriangleright$ is reached before $S$, transition to $q_{clear}$.
Go to (1).\\ (8) \> \> \>When $S$ is reached, transition to $q_{read}$\\ \\ (9) \> \> \> Move right in state $q_{read}$ past any marked digits\\ (10) \>\>\> \> If $c \in \{B, X, +, S, T\}$ is reached before a digit, go to (24)\\ (11) \>\>\> \> If $\vartriangleright$ is reached before a digit, go to (26)\\ (12) \> \>\> If unmarked digit $j$ is reached, mark $j$, transition to $q_{1j}$.\\ \\ (13) \> \> \> Move right in state $q_{1j}$ until an $S$ or $\vartriangleright$ is found.\\ (14) \> \> \> \>If $\vartriangleright$ is reached, go to (22).\\ (15) \> \> \>If $S$ is reached, transition to $q_{2j}$.\\ \\ (16) \> \> \> Move right in state $q_{2j}$ past any marked digits.\\ (17) \> \> \> If an unmarked digit $k$ is reached, mark $k$ and transition to $q_{1j}$.\\ (18) \> \> \> \>If $j \neq k$, then {\bf Cost.}\\ (19) \> \> \> If non-digit $c$ is reached for any $c \neq \vartriangleright, S$, transition to $q_{1j}$. {\bf Cost.}\\ (20) \> \> \> If $S$ is reached, stay in $q_{2j}$. {\bf Cost.}\\ (21) \> \> \> If $\vartriangleright$ is reached, go to (22). {\bf Cost.}\\ \\ (22) \> \> \> Transition to $q_{ret}$. Move left until $\vartriangleleft$ is found.\\ (23) \> \> \> Transition to $q_{findS}$. Go to (5).\\ (24) \> \> Transition to state $q_{sweep}$, move right until $\vartriangleright$ is reached.\\ (25) \> \> \> If an unmarked $j$ is encountered in state $q_{sweep}$, there is a {\bf Cost.}\\ (26) \> \> Transition to $q_{clear}$. Go to (1). \end{tabbing} \end{minipage}} \caption{Pseudo-code for an iteration of the Outer Loop for the Turing Machine in Layer $3$.} \label{fig-OuterLoopL3} \end{figure} \begin{figure}[ht] \centering \begin{tabular}{|c|c|c|c|c|} \hline & & & & \\ & $q_{findS}$ & $q_{read}$ & $q_{1j}$ & $q_{2j}$ \\ \hline $k$ & Right & $(q_{1k}, \overline{k} , R)$ & Right & $(q_{1j}, \overline{k}, R)^*$ \\ \hline $\overline{k}$ & Right & Right & Right & Right \\ \hline $S$ & $(q_{read}, S, R)$ & $(q_{sweep}, S , R)$ & $(q_{2j}, S, R)$ & $\mbox{Right}^{\dag}$\\ \hline $\vartriangleleft$ & Right & Right & $(q_{2j}, \vartriangleleft, R)$ & $(q_{1j}, \vartriangleleft, R)$ \\ \hline $\vartriangleright$ & $(q_{clear}, \vartriangleright, L)$ & $(q_{clear}, \vartriangleright , L)$ & $(q_{ret}, \vartriangleright, L)$ & $(q_{ret}, \vartriangleright, L)^{\dag}$ \\ \hline $c$ & Right & $(q_{sweep}, c , R)$ & Right & $(q_{1j}, c, R)^{\dag}$\\ \hline \end{tabular} \vspace{.2in} \begin{tabular}{|c|c|c|c|} \hline & & & \\ & $q_{ret}$ & $q_{sweep}$ & $q_{clear}$ \\ \hline $k$ & Left & $\mbox{Right}^{\dag}$ & Left \\ \hline $\overline{k}$ & Left & Right & $(q_{clear}, k, L)$ \\ \hline $S$ & Left & Right & Left \\ \hline $\vartriangleleft$ & $(q_{findS}, \vartriangleleft, R)$ & $(q_{sweep}, \vartriangleleft , R)$ & $(q_{findS}, \vartriangleleft, R)$ \\ \hline $\vartriangleright$ & Left & $(q_{clear}, \vartriangleright , L)$ & Left \\ \hline $c$ & Left & Right & Left \\ \hline \end{tabular} \caption{A summary of the rules for the Layer $3$ Turing Machine. The symbol $c$ stands for any tape character in $\{X, +, B, T\}$ and $k$ for any digit from ${\cal{D}}$. {\bf Left} stands for $\delta(q,c) = (q,c,L)$ and {\bf Right} stands for $\delta(q,c) = (q,c,R)$. The rule marked $*$ incurs a cost if $j \neq k$. Rules marked with $\dag$ incur a cost.} \label{fig-TMrulesL3a} \end{figure} The Turing Machine rules are translated into legal and illegal computation squares as described in Section \ref{sec-TM2Tile}.
As with Layer $1$, a head square is legal as long as any tiles from $\Gamma'$ to the left of the head tile are blue and any tiles from $\Gamma'$ to the right of the head tile are red. Also, a square is illegal if it has a tape tile directly above another tape tile and the two tiles differ in any way, including their color. There is one final type of illegal square which occurs only in Layer $3$. These squares introduce the costs indicated in the pseudo-code shown in Figure \ref{fig-OuterLoopL3}. Any square which contains a tile of the form $(q_{2j}/k)$ where $j \neq k$ or $(q_{2j}/c)$, where $c \in \{B, S, T, +, X, \vartriangleright\}$, is an illegal square. In addition, any square that contains $(q_{sweep}/j)$ where $j \in {\cal{D}}$ is also an illegal square. We will call these {\em illegal verification squares}. \begin{lemma} \label{lem-nextRow3} \ifshow {\bf (lem:nextRow3)} \else \fi {\bf [Sequential Rows Represent TM Steps in Layer $3$]} Consider a row $r$ that is valid in Layer $3$. There is a unique row $r'$ that can be placed above $r$ such that there are no illegal computation squares that span the two rows $r$ and $r'$, and this row $r'$ is valid. Moreover, row $r$ corresponds to a Turing Machine configuration and $r'$ represents the configuration resulting from executing one step in the configuration $r$. \end{lemma} \begin{proof} Consider a valid row $r$. If there is a row $r'$ such that placing $r'$ directly above $r$ creates no illegal computation squares, then $r'$ must be unique. The argument is almost identical to the analogous lemma proved for the Layer $1$ Turing Machine given in Lemma \ref{lem-nextRow}. Next we need to show that if $r$ is valid, then there is a valid $r'$ such that there are no illegal computation squares spanning rows $r$ and $r'$. Since $r$ is valid, there is exactly one head tile in $r$ and therefore $r$ corresponds uniquely to a configuration of the Turing Machine. Let $r'$ be the row resulting from applying one step of the Turing Machine to the configuration represented by row $r$. Color all the $\Gamma'$ tiles to the left of the head tile blue and all the $\Gamma'$ tiles to the right of the head tile red. We will first establish that there are no illegal squares spanning rows $r$ and $r'$. The head square must be legal because it represents one correctly executed step of the Turing Machine. All other tiles outside of the head square are tape tiles and are the same in $r$ and $r'$ because they did not change in the computation step. Moreover, since $r$ is valid, all $\Gamma'$ tiles to the left of the head square are blue and all $\Gamma'$ tiles to the right of the head square are red. Therefore any $\Gamma'$ tiles outside of the head square have the same color in $r$ and $r'$. To establish that $r'$ is valid, we will show that $r'$ has all the properties from Lemma \ref{lem-L3valid}. If the head is pointing to a $\vartriangleleft$ symbol, it writes a $\vartriangleleft$ and moves right. If the head is pointing to a $\vartriangleright$ symbol, it writes a $\vartriangleright$ and moves left. If the head is pointing to a symbol from $\Gamma'$, it writes a symbol from $\Gamma'$ and moves left or right. This guarantees properties $1$ through $3$. Property $4$ is guaranteed by construction. \end{proof} \subsubsection{Analysis of Layer $3$} We would like to argue that if the number of illegal pairs or squares in Layer $3$ is less than a certain value, then there will be a complete error-free iteration of the Outer Loop.
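Before the formal analysis, the following Python sketch (our abstraction, which ignores the head mechanics and tile colors) summarizes the digit-marking behavior of one error-free iteration of the Outer Loop and the verification costs it generates; the line numbers in the comments refer to the pseudo-code in Figure \ref{fig-OuterLoopL3}.
\begin{verbatim}
# Sketch of the checking done by one Outer Loop iteration: ys[0]
# plays the role of y_1, and each string that differs from y_1
# triggers at least one illegal verification square.

def outer_loop_costs(ys):
    marked = [[False] * len(y) for y in ys]
    bad = set()                  # intervals caught differing from y_1
    for j in ys[0]:              # one Inner Loop pass per digit of y_1
        for i in range(1, len(ys)):
            # Find the first unmarked digit of y_i.
            pos = next((p for p, m in enumerate(marked[i]) if not m), None)
            if pos is None:
                bad.add(i)       # y_i shorter than y_1: cost at line (19)
            else:
                marked[i][pos] = True
                if ys[i][pos] != j:
                    bad.add(i)   # digit mismatch: cost at line (18)
    for i in range(1, len(ys)):  # final sweep in state q_sweep
        if not all(marked[i]):
            bad.add(i)           # y_i longer than y_1: cost at line (25)
    return bad

# Exactly the strings y_i with y_i != y_1 incur a cost:
assert outer_loop_costs(["0123", "0123", "0120", "012", "01230"]) == {2, 3, 4}
\end{verbatim}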
\begin{definition} {\bf [End Row for Layer $3$]} An {\em end row} for Layer $3$ is a valid row in which the head is pointing to the $\vartriangleright$ symbol in state $q_{sweep}$, $q_{read}$, $q_{findS}$ or $q_{clear}$. \end{definition} Note that since an end row is valid, it corresponds to a valid configuration of the Turing Machine. In the next step of the Turing Machine, the head is just to the left of the $\vartriangleright$ symbol in state $q_{clear}$, which begins a new iteration of the Outer Loop. \begin{lemma} \label{lem-L3OuterLoop} \ifshow {\bf (lem:L3OuterLoop)} \else \fi {\bf [Number of Steps to Reach an End Row in Layer $3$]} Consider a valid row $r_s$ in Layer $3$ of a tiling. Let $y$ be the maximal string of digits to the immediate right of the left-most $S$ tile in $r_s$. If there is no $S$ tile, then $y$ is empty. If rows $r_s$ through $r_t$ are all valid and do not contain any illegal computation squares and $t - s \ge 2 (|y|+2) \cdot l(r_s)$, then one of the rows in $r_{s+1}$ through $r_t$ must be an end row. \end{lemma} \begin{proof} We will argue that if the Turing Machine starts in a valid configuration, then it will reach an end configuration within $2 (|y|+2) \cdot l(r_s)$ steps. Each iteration of the Inner Loop begins with the head just to the right of the $\vartriangleleft$ symbol in state $q_{findS}$. We will first establish that it takes at most $2l(r_s)$ steps to reach the beginning of an iteration of the Inner Loop or an end configuration. If the state of $r_s$ is $q_{clear}$ or $q_{ret}$, the head moves left until a $\vartriangleleft$ symbol is reached. Then the head moves right into state $q_{findS}$, for a total of at most $l(r_s)$ steps. If the state of $r_s$ is $q_{sweep}$, $q_{findS}$, $q_{read}$, $q_{1j}$ or $q_{2j}$, then the head will continue moving right and remain in one of those states until the $\vartriangleright$ is reached. If the state is $q_{sweep}$, $q_{findS}$ or $q_{read}$ when the $\vartriangleright$ is reached, this is an end configuration. If the state is $q_{1j}$ or $q_{2j}$ when the $\vartriangleright$ is reached, the Turing Machine will move left in state $q_{ret}$ until the $\vartriangleleft$ is reached, at which point it transitions to $q_{findS}$ and moves right. This is the beginning of an iteration of the Inner Loop. The total number of steps so far is at most $2 l(r_s)$. If the Turing Machine starts an iteration of the Inner Loop and the head never reaches an $S$ in state $q_{findS}$, it will reach the $\vartriangleright$ symbol in state $q_{findS}$, which is an end configuration. In this case, $|y| = 0$ and the number of steps spent in the Inner Loop is $l(r_s)$. Otherwise, the head will eventually reach an $S$ and will transition to $q_{read}$. If the head reaches a digit before a symbol from $\{B, S, +, X, T, \vartriangleright\}$ in state $q_{read}$, the digit will be marked and the head will continue going right in state $q_{1j}$ or $q_{2j}$ until it hits the $\vartriangleright$ symbol, in which case it transitions to $q_{ret}$, sweeps left until the $\vartriangleleft$ symbol is reached and then moves right into $q_{findS}$ to start another iteration of the Inner Loop. The marked digit is part of $y$ because at the beginning of an iteration of the Inner Loop, the state is initially $q_{findS}$ and can only transition to $q_{read}$ when the first $S$ is reached. The state remains in state $q_{read}$ until a symbol from $\{B, S, +, X, T, \vartriangleright\}$ or a digit is reached.
Thus, if a digit is reached before a symbol from $\{B, S, +, X, T, \vartriangleright\}$, the current string is still $y$. Therefore, in each iteration of the Inner Loop, one additional digit from $y$ becomes marked and there can be at most $|y|$ iterations of the Inner Loop, each of which takes at most $2l(r_s)$ steps. The Inner Loop iterations take a total of at most $2|y| l(r_s)$ steps. Finally, if the state is $q_{read}$ and the Turing Machine reaches a symbol from $\{B, S, +, X, T, \vartriangleright\}$, it will sweep right in state $q_{sweep}$ (if it is not already at the $\vartriangleright$ symbol) until the $\vartriangleright$ is reached. Within an additional $l(r_s)$ steps, the head will reach $\vartriangleright$ in state $q_{sweep}$ or $q_{read}$, which is an end configuration. The total number of steps is at most $2l(r_s) + 2|y| l(r_s) + l(r_s) \le 2 (|y|+2) \cdot l(r_s)$. \end{proof} The set ${\cal{D}}$ is the set of digits $\{0, 1, 2, 3\}$. We can augment ${\cal{D}}$ to include the marked digits as well: ${\cal{D}}' = {\cal{D}} \cup \{\overline{0}, \overline{1}, \overline{2}, \overline{3}\}$. The function $val$ maps strings over ${\cal{D}}'$ to strings over ${\cal{D}}$ by changing marked digits to the corresponding unmarked digits, so $\overline{j}$ is mapped to $j$. The functions $f_1$ and $f_2$ can be extended to strings from $({\cal{D}}')^*$ by first applying the function $val$ and then applying $f_1$ or $f_2$. \begin{lemma} \label{lem-L3yUB} \ifshow {\bf (lem:L3yUB)} \else \fi {\bf [Bound on the Length of a String of Digits in Layer $3$]} Let $r$ be any row in Layer $3$ of a tiling of an $N \times N$ grid. Let $y$ be a string of consecutive tiles from ${\cal{D}}'$ in $r$. Then $|y| \le (F_2 + F_3) (\log N + 4)$. \end{lemma} \begin{proof} Consider a sequence of consecutive tiles in locations $l$ through $l+s-1$ that are all from ${\cal{D}}'$, where $s = \log N + 4$. If locations $l$ through $l+s-1$ in the last row of Layer $2$ are either bits ($0$ or $1$) or head tiles with bits ($(q/0)$ or $(q/1)$), then by Lemma \ref{lem-boundBits}, there is an illegal square contained in locations $l$ through $l+s-1$ spanning two consecutive rows in Layer $2$. If, in the last row of Layer $2$, locations $l$ through $l+s-1$ contain a tile that is not $0$, $1$, $(q/0)$, or $(q/1)$, then either there is an illegal translation square contained in those locations or those locations contain a tile that is not from the set ${\cal{D}}$ in the first row of Layer $3$. Furthermore, any two vertically aligned tiles in Layer $3$ in which a tile not from ${\cal{D}}'$ is directly below a tile that is from ${\cal{D}}'$ must be contained in illegal squares on both sides. Therefore, if locations $l$ through $l+s-1$ contain tiles from ${\cal{D}}'$ in a row from Layer $3$, then those locations must contain an illegal square somewhere in Layers $2$ or $3$. Partition the locations of $y$ into consecutive blocks of length $\log N + 4$. There must be an illegal square from Layers $2$ or $3$ contained within each block of locations, which means that there can be at most $F_2 + F_3$ blocks. Therefore the length of $y$ is at most $(F_2 + F_3) (\log N + 4)$.
\end{proof} \begin{lemma} \label{lem-completeOuter} \ifshow {\bf (lem:completeOuter)} \else \fi {\bf [Layer $3$ Completes At Least One Iteration of the Outer Loop]} As long as $F_1 \le N^{1/4}/40$ and $F_2 + F_3 \le N^{1/4}/(10 \log N)$, there is a sequence of rows in Layer $3$ with no illegal pairs or computation squares, in which the Turing Machine executes a complete iteration of the Outer Loop. \end{lemma} \begin{proof} Consider a sequence of consecutive rows in Layer $3$ with no illegal pairs or computation squares. Let $r$ be the first row in the sequence of rows. Let $y$ be the maximal sequence of tiles in row $r$ from the set ${\cal{D}}'$ that are directly to the right of the leftmost $S$ tile in row $r$. By Lemma \ref{lem-L3OuterLoop}, if the sequence lasts for at least $2(|y|+2)l(r)$ rows, then the computation represented in the sequence of rows will reach the end of an iteration of the Outer Loop. Note that $l(r)$ and $y$ do not change in the course of these rows because there are no illegal computation squares and the computation does not change the contents of the Turing Machine tape other than marking or unmarking digits. Thus, if the sequence lasts for yet another $2(|y|+2)l(r)$ rows, then the computation represented in the sequence of rows will include one full iteration of the Outer Loop. So as long as the sequence contains at least $4(|y|+2)l(r)$ rows, it is guaranteed to contain a complete iteration of the Outer Loop with no computation errors. Lemma \ref{lem-L2analysis} gives an upper bound on the length of the last row of Layer $2$, which we will call $L$. Any two vertically aligned tiles with a non-$\#$ tile on top of a $\#$ tile will be contained in illegal computation squares on both sides. So the length of row $r$ can be at most $L + F_3$. Meanwhile, Lemma \ref{lem-L3yUB} gives an upper bound of $(F_2 + F_3)( \log N + 4)$ for $|y|$. Putting these bounds together means that if we are guaranteed to have a sequence of at least $$4 \left[ (F_2 + F_3)( \log N + 4) + 2 \right](L+F_3)$$ consecutive rows with no illegal pairs or computation squares, then there will be a complete iteration of the Outer Loop with no computation errors. Besides the first and last rows, which are entirely filled with $\Box$ tiles, there are a total of $N-2$ rows. There are at most $F_3$ rows with an illegal pair or square. Therefore there must be at least one sequence of at least $(N-2-F_3)/(F_3+1)$ rows with no illegal pairs or squares. So it suffices that the following inequality holds: \begin{eqnarray*} \frac{N-2-F_3}{F_3+1} & \ge & 4 \left[ (F_2 + F_3)( \log N + 4) + 2 \right](F_3 + L).\\ \end{eqnarray*} If $F_1 \le N^{1/4}/40$, then the bound from Lemma \ref{lem-L2analysis} says that $L \le F_2 + 11 N^{1/2} + 1$. Using the assumption from the lemma that $F_2 + F_3 \le N^{1/4}/(10 \log N)$, the inequality above can be verified. \end{proof} \begin{lemma} \label{lem-L3analysis} \ifshow {\bf (lem:L3analysis)} \else \fi {\bf [Summary of Analysis of Layer $3$]} Consider a tiling of an $N \times N$ grid, where $N = 4n(1x^R) - 2w(x1)+3$, for some binary string $x$. Let $F_i$ be the total number of illegal squares and pairs in Layer $i$, for $i = 1, 2, 3$. At the end of Layer $3$, the following conditions hold: \begin{enumerate} \item If $S$ is the set of sizes of clean intervals in the last row of Layer $3$, then $|\{2,3,\ldots,\mu(N)+1\} - S| \le 44 F_1 + F_2 + F_3+ 3$.
\item Every clean short-form interval at the end of Layer $3$ has the form $X~X$, $X~T~X$ or $X~+^*~X$. \item Any clean interval of size at least $\log N + 5$ is a long-form interval. \item If $F_1 \le N^{1/4}/40$ and $F_2 + F_3 \le N^{1/4}/(10 \log N)$, then there exists a $y \in {\cal{D}}^*$ such that every long-form clean interval has the form $X~S~y_i~B~B^*~T~X$, where $y_i \in ({\cal{D}}')^*$, $val(y_i) = y$, and $f_1(y) = x1$. \end{enumerate} \end{lemma} \begin{proof} Let $T_2$ be the set of tags corresponding to clean intervals in the last row of Layer $2$. Let $T_3$ be the set of tags corresponding to clean intervals in the last row of Layer $3$. Since no clean intervals are created from the last row of Layer $2$ to the last row of Layer $3$ and each clean interval adopts the same tag as the corresponding clean interval in the preceding row, $T_3 \subseteq T_2$. If there is a clean interval in the last row of Layer $2$ with tag $j$ and no such clean interval in the first row of Layer $3$, then the interval contains an illegal translation square. Similarly, a clean interval with tag $j$ that does not correspond to a clean interval with tag $j$ in the next row must contain an illegal square. Since every lost clean interval corresponds to an illegal translation square or illegal computation square in Layer $3$, $|T_2 - T_3| \le F_3$. If $S'$ is the set of sizes of clean intervals at the end of Layer $2$, then by Lemma \ref{lem-L2analysis}, $|\{2,3,\ldots,\mu(N)+1\} - S'| \le 44 F_1 + F_2 + 3$. At most $F_3$ clean intervals are lost from the end of Layer $2$ to the end of Layer $3$. Therefore, $|\{2,3,\ldots,\mu(N)+1\} - S| \le 44 F_1 + F_2 + F_3 + 3$. Item $2$ follows from the fact that the Turing Machine in Layer $3$ does not change any tile from one row to another, except for the movement of the head and the marking or unmarking of digit tiles. Therefore, if an interval is clean in Layer $3$, it has the same form as it did in the first row of Layer $3$. Lemma \ref{lem-longShort} indicates what the short-form and long-form intervals look like in the first row of Layer $3$. Also, the intervals do not switch between being long-form and short-form intervals after the last row of Layer $2$. Therefore, by Lemma \ref{lem-L2analysis}, any clean interval of size at least $\log N + 5$ must be a long-form interval in the last row of Layer $3$, which proves item $3$. Now to prove item $4$. The assumptions for this lemma are the same as the conditions for Lemma \ref{lem-completeOuter}, so there is a sequence of consecutive rows in Layer $3$ that do not contain any illegal pairs or computation squares and that correspond to a complete iteration of the Outer Loop. Let $r_s$ be the first row in this sequence. Find the location of the leftmost $S$ tile in $r_s$ and let $y$ be the maximal sequence of consecutive tiles from ${\cal{D}}'$ that are immediately to the right of the $S$ tile. After the head sweeps left in state $q_{clear}$ at the beginning of the Outer Loop, every clean long-form interval has the form $X~S~y_i~B^*~T~X$, where $y_i \in {\cal{D}}^*$. Each $y_i$ corresponds to the $i^{th}$ clean interval as they are numbered from left to right. $y$ may or may not be the same as $y_1$. If there is an $s$ where digit $s$ of $y_1$ differs from digit $s$ of $y_i$, then on the $s^{th}$ iteration of the Inner Loop, there will be an illegal verification square in interval $i$ when the Turing Machine is in state $q_{2j}$ and the first unmarked digit in interval $i$ is $k \neq j$.
This occurs in line $(18)$ of the pseudo-code in Figure \ref{fig-OuterLoopL3}. Otherwise, if $y_1 \neq y_i$, then $y_1$ must be a proper prefix of $y_i$ or $y_i$ is a proper prefix of $y_1$. If $y_1$ is a proper prefix of $y_i$, then in the last iteration of the Inner Loop, when there are no longer unchecked digits in $y_1$, there will remain unchecked digits in $y_i$. These will trigger an illegal verification square when the Turing Machine is in state $q_{sweep}$ and encounters the unchecked digit in interval $i$. Finally, if $y_i$ is a proper prefix of $y_1$, then after $|y_i|$ iterations, the Turing Machine will read digit $|y_i|+1$ of $y_1$ but will encounter no unmarked digits in interval $i$. The state will transition to $q_{2j}$ when it reaches the $S$ in interval $i$ and it will encounter a non-digit before it encounters an unmarked digit, thus triggering a cost in line $(19)$ of the pseudo-code. Thus, every interval with $y_i \neq y_1$ will no longer be clean after the iteration of the Outer Loop. Since every clean interval started with $f_1(y_i) = x1$, and the string of digits in the interval does not change as long as the interval remains clean, $f_1(y_i) = x1$ will remain true after the iteration of the Outer Loop. In the remainder of the rows, if the interval remains clean, then it contains no illegal computation squares and the string of digits remains the same, except perhaps that some digits become marked, so Item $4$ will still hold for the last row of Layer $3$. \end{proof} \subsection{Layer $4$} The translation rules from Layer $3$ to Layer $4$ translate any $j$ or $\overline{j}$ tile to $j$, for $j \in {\cal{D}}$. $S$ is translated to $(q_{s}/S)$, and $t$ is translated to $t$ for any $t \in \{+, \#, X, B, T, \vartriangleright, \vartriangleleft \}$. The states from Layer $3$ are all dropped, so any tile of the form $(q/c)$ is translated to whatever $c$ would be translated to, according to the rules above. These translation rules ensure that every clean interval is translated to a clean interval as long as the tiles in the interval are translated correctly. Thus every clean long-form interval has the form $X~(q_s/S)~{\cal{D}}^*~B^*~T~X$. Recall that the functions $f_1$ and $f_2$ map a string of length $n$ over ${\cal{D}}$ to two binary strings of length $n$. The variable $x$ will denote the binary string $f_1(y)$ with the last bit removed and $z$ will denote $f_2(y)$. $x$ will be used as the input to the computation and $z$ will be used as a guess of the answers to the queries to the oracle $L'$. We will use $\overline{n}$ to denote the number of oracle queries made by $M$ on input $x$, so we will only be concerned with the first $\overline{n}$ bits of $z$. The tiling rules in Layer $4$ enforce that an $X$ tile must have a $\Box$ or $X$ tile above and below it. Similarly for $+$ and $\#$, so the only tiles that change from one row to another are tiles inside long-form intervals, unless there is an illegal square. Every clean long-form interval will contain an independent Turing Machine computation. The Turing Machine rules are translated into legal and illegal squares as described in Section \ref{sec-TM2Tile}. If the head reaches the $T$ at the right end of the interval, then the interval is not wide enough to complete the computation. In this case, the computation halts and does not incur any additional cost. The definitions for clean and corrupt intervals carry over to Layer $4$ as well.
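To make the roles of $x$ and $z$ concrete, here is a minimal Python sketch (ours; the helper name is hypothetical) of how the input and the oracle-response guesses are read off a digit string $y$ with $f_1(y) = x1$, using the functions $f_1$ and $f_2$ defined earlier.
\begin{verbatim}
# Sketch: recovering the input x and the oracle-response guess z
# from a digit string y with f1(y) = x1.

def f1(y):
    return [0 if d in (0, 1) else 1 for d in y]

def f2(y):
    return [0 if d in (0, 2) else 1 for d in y]

def input_and_guess(y):
    x1 = f1(y)
    assert x1[-1] == 1   # the encoded bit string always ends in 1
    x = x1[:-1]          # input to the computation
    z = f2(y)            # guessed oracle responses (first nbar bits used)
    return x, z

# Example: y = 312 encodes x = 10 (so x1 = 101) with guess z = 110.
x, z = input_and_guess([3, 1, 2])
assert x == [1, 0] and z == [1, 1, 0]
\end{verbatim}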
Since clean intervals do not change locations within a row, two clean intervals will have the same tag and long/short-form designation if they occupy the same locations within their respective rows. An interval in the first row of Layer $4$ is clean if its tiles correspond to a clean interval at the end of Layer $3$ and the interval does not contain any illegal translation or initialization squares. In going from one row to the next row in the computation in Layer $4$, an interval is clean if its tiles correspond to a clean interval in the previous row and there are no illegal squares spanning the current and previous rows in the interval. \begin{lemma} \label{lem-L4analysis} \ifshow {\bf (lem:L4analysis)} \else \fi {\bf [Bound on the Missing Clean Interval Sizes]} Let $F_i$ be the total number of illegal squares and pairs in Layer $i$, for $i = 1, 2, 3, 4$. If $S$ is the set of sizes of clean intervals at the end of Layer $4$, then $|\{2,3,\ldots,\mu(N)+1\} - S| \le 44 F_1 + F_2 + F_3 + F_4 + 3$. \end{lemma} \begin{proof} Let $T_3$ be the set of tags corresponding to clean intervals in the last row of Layer $3$. Let $T_4$ be the set of tags corresponding to clean intervals in the last row of Layer $4$. Since no clean intervals are created from the last row of Layer $3$ to the last row of Layer $4$ and each clean interval adopts the same tag as the corresponding clean interval in the preceding row, $T_4 \subseteq T_3$. If there is a clean interval in the last row of Layer $3$ with tag $j$ and no such clean interval in the first row of Layer $4$, then the interval contains an illegal translation square. Similarly, a clean interval with tag $j$ that does not correspond to a clean interval with tag $j$ in the next row must contain an illegal square. Since every clean interval that is lost in Layer $4$ corresponds to an illegal square in Layer $4$, $|T_3 - T_4| \le F_4$. If $S'$ is the set of sizes of the clean intervals at the end of Layer $3$, then by Lemma \ref{lem-L3analysis}, $|\{2,3,\ldots,\mu(N)+1\} - S'| \le 44 F_1 + F_2 + F_3 + 3$. Since at most $F_4$ clean intervals are lost from the end of Layer $3$ to the end of Layer $4$, $|\{2,3,\ldots,\mu(N)+1\} - S| \le 44 F_1 + F_2 + F_3 + F_4 + 3$. \end{proof} \subsubsection{The Layer $4$ Computation} Recall that we are reducing from a generic function $f \in \mbox{FP}^{\mbox{NEXP}}$ to Function Weighted Tiling. $M$ denotes the poly-time Turing Machine that computes $f$ with access to a $\mbox{NEXP}$ oracle. $L'$ is the $\mbox{NEXP}$ language that serves as the oracle for $M$. $V$ denotes the exponential-time Turing Machine that is the verifier for $L'$. The reduction maps a string $x$ to an integer $N$ such that after $N-3$ steps of the Binary Counter Turing Machine described in Section \ref{sec-L2}, the string on the tape is $x1$ and the head is at the left end of the tape. According to Lemma \ref{lem-bctm}, $N = 4n(1x^R) - 2w(x1) + 3$, where $x^R$ is the reverse of string $x$, $n(x)$ is the numerical value of the string $x$ in binary, and $w(x)$ is the number of $1$'s in $x$. All the clean long-form intervals start out with configuration $$(q_{s}/S) ~y~ B \cdots B ~T.$$ The computation that is initiated by state $q_{s}$ proceeds in several stages. {\bf Stage 1:} The computation in Stage $1$ ``measures'' the size of the interval and writes the size of the interval in binary. This is accomplished by a counter similar to the one used in \cite{GI}. The Turing Machine uses a binary and a unary counter.
The head shuttles back and forth between the two ends of the interval (using the $S$ and the $T$ tiles to know when it has reached one of the two ends). In each cycle, the head increments both counters. When the unary counter has reached the $T$ on the right end of the interval, it transitions to a new state which begins the next phase of the computation. The counting procedure actually counts the number of interior tiles in the interval, while the definition of the size of an interval includes the two endpoint tiles, so we add $2$ to the final count to get the size of the interval. We will call this value $r$ for a particular interval. Note that a clean interval can increase in size by at most $1$ per segment, so the size of any clean interval is bounded by the number of segments, which, by Lemma \ref{lem-numSegUB}, is $O(N^{1/4} + F_1)$. We will argue below that the cost of the minimum tiling, which is at least $F_1$, is $O(N^{1/4})$. As long as the size of a clean interval is $O(N^{1/4})$, this phase of the computation takes time $O(N^{1/2})$. {\bf Stage 2:} The next stage of the computation uses $x$, $z$, and $r$ to select a term in the cost function towards which it will contribute. Recall that $\overline{n}$ is an upper bound on the number of oracle queries made by $M$ on an input of length $n$. The $i^{th}$ bit of $z$ will be denoted by $z_i$. The goal will be to have $\mbox{check}_k (z)$ intervals checking the $k^{th}$ bit of $z$, where $$\mbox{check}_k (z) = 2^{\overline{n}+5} \left[(1 - z_k) \cdot 2^{\overline{n}-k} + z_k \cdot 2^{\overline{n}}\right].$$ From the string $x$, the size of the grid $N$ is computed, as is $\mu(N)$, the number of intervals after $N-3$ steps of a correct computation of the Layer $1$ Turing Machine. The time and space complexity of the computation in Stage $2$ is bounded by a polynomial in $n$, which is polylogarithmic in $N$. By Lemmas \ref{lem-bctm} and \ref{lem-muBounds}, given $x$ of length $n$, the values of $N$ and $\mu(N)$ can be computed in $poly(n)$ time, which is polylogarithmic in $N$. The output of the function $f$ on input $x$ with oracle responses $z$ is denoted by $f(x,z)$, which is also computed by the Turing Machine. Note that in the idealized case in which the interval sizes go from $\mu(N)+1$ down to $2$, the value $I = \mu(N)+2-r$ is an almost unique identifier for each interval, going from $1$ up to $\mu(N)$ from left to right. Note that even in a fault-free computation, if the computation in Layer $1$ finishes in the middle of an iteration of the Outer Loop, the sequence of interval sizes will deviate slightly from the idealized case. The computation in each interval will perform different tasks, depending on the value of $I$. {~} \renewcommand{\arraystretch}{1.4} \begin{tabular}{|c|c|} \hline Value of $I = \mu(N)+2-r $ & Action Taken \\ \hline \hline $I \le 0$ & Transition to $q_{acc}$ and halt \\ \hline & Compute the bit to check $k(r)$\\ $1 \le I \le \sum_{k=1}^{\overline{n}} \mbox{check}_k (z)$ & as described below\\ & Go to Stage $3$\\ \hline $\sum_{k=1}^{\overline{n}} \mbox{check}_k (z)+1 \le I \le \sum_{k=1}^{\overline{n}} \mbox{check}_k (z) + 2^3 f(x,z)$ & Transition to $q_{rej}$ and halt\\ \hline $ \sum_{k=1}^{\overline{n}} \mbox{check}_k (z) + 2^3 f(x,z) < I$ & Transition to $q_{acc}$ and halt\\ \hline \end{tabular} {~} \vspace{.1in} There is a cost of $+1$ for any computation square that enters a $q_{rej}$ state, so rejecting computations incur a cost of exactly $1$. Accepting computations do not incur any cost.
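As a concrete companion to the table above (and anticipating the definition of $k(r)$ given next), the following Python sketch (our illustration; the parameters nbar, mu, and f_xz stand in for the quantities $\overline{n}$, $\mu(N)$, and $f(x,z)$ that the Turing Machine computes itself) shows how an interval of size $r$ selects its action in Stage $2$.
\begin{verbatim}
# Sketch of the Stage-2 dispatch. z is a 0/1 list with z[0] unused,
# so that z[k] is the k-th oracle-response bit.

def check(k, z, nbar):
    # check_k(z) = 2^(nbar+5) * ((1 - z_k) 2^(nbar-k) + z_k 2^nbar)
    return 2 ** (nbar + 5) * ((1 - z[k]) * 2 ** (nbar - k)
                              + z[k] * 2 ** nbar)

def stage2_action(r, mu, z, nbar, f_xz):
    I = mu + 2 - r
    total = sum(check(k, z, nbar) for k in range(1, nbar + 1))
    if I <= 0:
        return "accept"
    if I <= total:
        acc = 0
        for k in range(1, nbar + 1):   # k(r): smallest k covering I
            acc += check(k, z, nbar)
            if I <= acc:
                return f"check bit {k} in Stage 3"
    if I <= total + 8 * f_xz:
        return "reject"                # contributes +1 to the cost
    return "accept"
\end{verbatim}
In the idealized case, summing over all interval sizes shows that exactly $\mbox{check}_k(z)$ intervals check bit $k$ and exactly $2^3 f(x,z)$ intervals reject.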
The value of $k(r)$ is defined to be the smallest index $k$ such that $$\mu(N)- r + 2 \le \sum_{j=1}^k \mbox{check}_j (z).$$ In the next stage, the computation (if it did not stop in Stage $2$) will check the $k(r)^{th}$ bit of $z$. {\bf Stage 3:} If $z_{k(r)} = 0$, then the TM transitions to $q_{rej}$ and halts. If $z_{k(r)} = 1$, then the TM simulates the Turing Machine $M$ until the point of the ${k(r)}^{th}$ query, using $z_1, \ldots, z_{{k(r)}-1}$ as the oracle responses for the first ${k(r)}-1$ oracle queries. Let $s$ be the input to the ${k(r)}^{th}$ oracle query. The computation now simulates the verifier $V$ on input $s$ using a witness that is guessed. If $V$ accepts, the cost is $0$ and if $V$ rejects the cost is $1$. If $s \in L'$, there is a witness that causes $V$ to accept $s$, which implies that there is a $0$-cost tiling of that strip in Layer $4$. If $s \not\in L'$, then any tiling will either have an illegal square because the TM was not correctly executed or will have a cost of $1$ when $V$ terminates. \subsection{Putting the Layers Together} The costs for computations are realized by having any legal computation square which transitions to $q_{rej}$ have a cost of $1$. We will call these {\em rejecting} squares in order to distinguish them from illegal squares, which in general have a higher cost. We are finally ready to define the cost of a square over all four layers of the tiling. Consider a square of four tiles where each tile is described by its tile type for each layer. Let $f_i$ denote the indicator variable that is $1$ if the Layer $i$ tile types form an illegal square for Layer $i$, and is $0$ otherwise. Let $p_1$ and $p_3$ be indicator variables designating whether the square has an illegal pair in its bottom two tiles for Layers $1$ and $3$, respectively. Let $r$ be an indicator variable denoting whether the square is a rejecting square in Layer $4$. The cost of the square is: $$r + 48 (f_1 +p_1) + 5(f_2 + p_3 + f_3 + f_4)$$ Thus if $F_i$ is the number of illegal pairs or squares in Layer $i$, and if $R$ is the number of rejecting squares in Layer $4$, the total cost of a tiling is $$R + 48 F_1 + 5(F_2 + F_3 + F_4).$$ The term $48 F_1 + 5(F_2 + F_3 + F_4)$ is the cost from illegal pairs and squares. We will call $R$ the {\em rejection cost} for the tiling. We need to establish that, for every input $x$, the minimum cost tiling has no illegal pairs or squares and the choice of $z$ corresponds to the correct oracle responses for the queries to language $L'$. We first establish an upper bound on the minimum cost tiling of an $N \times N$ grid. \begin{lemma} \label{lem-ub} \ifshow {\bf (lem:ub)} \else \fi {\bf [Upper Bound on the Minimum Cost of a Tiling]} There is a tiling of the $N\times N$ grid whose cost is at most $N^{1/4}/(4 \log N)$. \end{lemma} \begin{proof} Consider a tiling with no illegal pairs or squares in which the string $z$ used for the oracle output bits is all $0$'s. According to Lemma \ref{lem-errorFreeSizes}, the sizes of the intervals are contained in $\{2, \ldots, \mu(N)+2\}$ with at most one duplicate. Let $T= \sum_{k=1}^{\overline{n}} \mbox{check}_k (z) + 2^3 f(x,z)$. The leftmost $T$ intervals will incur a cost of $1$ each, since all the query responses are assumed to be $0$. The other intervals will not incur any cost. Since there can be at most one duplicate in the range $\mu(N)+1, \ldots, \mu(N)-(T-2)$, the total cost will be at most $T+1$. With $z$ all $0$'s, the highest order bit of the bracketed factor in $\mbox{check}_1 (z)$ is in location $\overline{n} - 1$.
Therefore, the highest order bit of $T$ is in location $2 \overline{n} +4$ and the value of $T+1$ is at most $2^{2 \overline{n}+5}$, which by Claim \ref{claim-pad2}, for large enough $N$, can be assumed to be at most $N^{1/4}/(4 \log N)$. \end{proof} \begin{lemma} \label{lem-oneCleanInt} \ifshow {\bf (lem:oneCleanInt)} \else \fi For sufficiently large $N$, the minimum cost tiling has at least one clean long-form interval at the end of Layer $4$. \end{lemma} \begin{proof} If $S$ is the set of sizes of clean intervals in the last row of Layer $4$, then Lemma \ref{lem-L4analysis} says that $|\{2,3,\ldots,\mu(N)+1\} - S| \le 44 F_1 + F_2 + F_3 + F_4 + 3$. By Lemma \ref{lem-muBounds}, the value of $\mu(N)$ is at least $ N^{1/4}/2$. If the tiling has minimum cost, then by Lemma \ref{lem-ub}, $$44 F_1 + F_2 + F_3 + F_4 \le \frac{ N^{1/4}}{4 \log N}.$$ Therefore $|\{2,3,\ldots, N^{1/4}/2+1\} - S| \le N^{1/4}/(4 \log N) + 3$. By Lemma \ref{lem-L3analysis}, any clean interval of size at least $\log N + 5$ is a long-form interval. Therefore, for large enough $N$, there is at least one clean long-form interval. \end{proof} Any tiling in which $F_3$, the number of illegal pairs or squares in Layer $3$, is greater than $N^{1/4}/(10 \log N)$ will have cost greater than $N^{1/4}/(2 \log N)$, since each illegal pair or square in Layer $3$ contributes $5$ to the overall cost. We know from Lemma \ref{lem-ub} that such a tiling will not be a minimum cost tiling. Therefore, we can ignore those tilings and assume that the conditions in Item $4$ of Lemma \ref{lem-L3analysis} are met. This implies that there is a single $y \in {\cal{D}}^*$ in every clean long-form interval at the end of Layer $3$. Define $Cost(y, c)$ to be the cost of the minimum cost tiling whose clean long-form intervals all have string $y$ and that has $c$ illegal pairs or squares. The following lemma says that we can ignore tilings with illegal pairs or squares. \begin{lemma} \label{lem-ignoreIllegal} \ifshow {\bf (lem:ignoreIllegal)} \else \fi For every $y$, and every $c > 0$, $Cost(y, 0) \le Cost(y, c)$. \end{lemma} \begin{proof} Fix the string $y \in {\cal{D}}^n$. Let $T_c$ be a minimum cost tiling with string $y$ and $c$ illegal pairs or squares; similarly for $T_0$. Let $F_i$ denote the number of illegal pairs or squares in Layer $i$ in $T_c$; at least one of the $F_i$'s is positive. The cost of $T_c$ from illegal pairs and squares is $48 F_1 + 5(F_2+F_3+F_4)$. The cost of $T_0$ from illegal pairs and squares is $0$. Let $R(T)$ denote the rejection cost of tiling $T$. We will prove that $R(T_0) - R(T_c) \le 48 F_1 + 5(F_2+F_3+F_4)$. Let $s_1, \ldots, s_m$ be the sizes of the clean intervals in the last row of Layer $4$ for $T_c$. Any clean interval of size $s$ in $T_c$ incurs exactly the same cost as a clean interval of size $s$ in $T_0$: because the intervals are clean, the computations inside those intervals are the same and correct. Since the size of the interval is the same, the values of $y$ and $r$ for the two intervals are the same. Therefore the bit $k(r)$ of $z$ that is checked in the interval is the same. Since $T_c$ and $T_0$ are both assumed to be minimum cost tilings, the best witness for each computation in Layer $4$ is chosen. In other words, if there is a $0$-cost tiling for that interval, it will be used in both $T_c$ and $T_0$. Since $T_0$ has no illegal pairs or squares, it has $\mu(N)$ intervals, and those intervals all have sizes in the range $2$ through $\mu(N)+2$.
Furthermore, according to Lemma \ref{lem-errorFreeSizes}, there is at most one number in that range such that $T_0$ has two intervals of that size. This one extra interval contributes at most $1$ to the overall cost. If $S$ is the set of sizes of the clean intervals at the end of Layer $4$ in $T_c$, then the difference in rejection costs between $T_0$ and $T_c$ is at most $|\{2,3,\ldots,\mu(N)+1\} - S |+1$. According to Lemma \ref{lem-L4analysis}, $|\{2,3,\ldots,\mu(N)+1\} - S| \le 44 F_1 + F_2 + F_3 + F_4 + 3$, which means that $$R(T_0) - R(T_c) \le 44 F_1 + F_2 + F_3 + F_4 + 4.$$ Since at least one $F_i$ is positive, $$44 F_1 + F_2 + F_3 + F_4 + 4 \le 48 F_1 + 5(F_2+F_3+F_4).$$ \end{proof} We can now focus on tilings that have no illegal pairs or squares. \begin{lemma} \label{lem-wideEnough} \ifshow {\bf (lem:wideEnough)} \else \fi In any tiling with no illegal squares, the largest $T$ intervals will be wide enough to complete their computations, where $T= \sum_{k=1}^{\overline{n}} \mbox{check}_k (z) + 2^3 f(x,z)$. \end{lemma} \begin{proof} Suppose input string $x$ maps to the number $N$ in the reduction. According to Lemma \ref{lem-muBounds}, the number of intervals at the end of Layer $1$ is at least $N^{1/4}/2$. The largest $N^{1/4}/4$ of these intervals have size at least $N^{1/4}/4$. By Claim \ref{claim-pad2}, any of these intervals will be large enough to complete a computation of the verifier $V$. All the other computations in Layer $4$, as well as in the other layers, are polynomial in $n$ and therefore polylogarithmic in $N$. We need to establish that $T \le N^{1/4}/4$ so that the intervals used in the computation are among the $N^{1/4}/4$ largest. The value of $T$ is maximized if the string $z$ is all $1$'s. In this case the value of $T$ is at most $2^{2 \overline{n} + 5}$. By Claim \ref{claim-pad2}, we can assume that $2^{2 \overline{n} + 5} \le N^{1/4}/4$. \end{proof} Let $\mbox{num}(k)$ be the number of intervals that check the $k^{th}$ bit of $z$. The goal is to have $\mbox{num}(k) = \mbox{check}_k (z)$. Let $\mbox{num}(f)$ be the number of intervals whose value $\mu(N)-r+2$ is in the range $\sum_{k=1}^{\overline{n}} \mbox{check}_k (z) + 1$ through $\sum_{k=1}^{\overline{n}} \mbox{check}_k (z) + 2^3 f(x,z)$. The goal is to have $\mbox{num}(f) = 2^3 f(x,z)$. The following lemma shows that the actual values of the $\mbox{num}$ functions are not far from the goal. \begin{lemma} \label{lem-blips} \ifshow {\bf (lem:blips)} \else \fi For any $S \subseteq [\overline{n}]$, in a fault-free tiling: $$\sum_{j \in S} \mbox{check}_j (z) + 2^3 f(x,z) - 1 \le \mbox{num}(f) + \sum_{j \in S} \mbox{num}(j) \le \sum_{j \in S} \mbox{check}_j (z) + 2^3 f(x,z) + 2.$$ \end{lemma} \begin{proof} The set of interval sizes $r$ that check the $k^{th}$ bit are exactly those for which $\mu(N)+2-r$ is in the range $$\sum_{j =1}^{k-1} \mbox{check}_j (z) +1 ~~~~\mbox{through}~~~~\sum_{j =1}^{k} \mbox{check}_j (z).$$ The value of $\mbox{num}(f)$ is the number of intervals with size $r$ such that $\mu(N)+2-r$ is in the range $$\sum_{j =1}^{\overline{n}} \mbox{check}_j (z) +1 ~~~~\mbox{through}~~~~\sum_{j =1}^{\overline{n}} \mbox{check}_j (z) + 2^3 f(x,z).$$ Note that all of the above ranges are disjoint and contained in $\{1, \ldots, \mu(N)\}$. If the multi-set of all $\mu(N) - r +2$ for all the intervals is exactly $1$ through $\mu(N)$, then for every $k$, $\mbox{num}(k)$ is exactly $\mbox{check}_k (z)$ and $\mbox{num}(f) = 2^3 f(x,z)$.
According to Lemma \ref{lem-errorFreeSizes}, the multi-set of interval sizes for a correct computation is contained in $\{2, \ldots, \mu(N)+2\}$. Moreover, there are at most two integers missing from this range and at most one duplicate. The lemma follows. \end{proof} Let $\bar{y}$ be a string such that $f_1(\bar{y}) = x1$ and $f_2(\bar{y}) = \bar{z}$, where $\bar{z}$ is the correct answer to all of the oracle queries made by $M$ to the $L'$ oracle on input $x$. Note that the string $\bar{y}$ need not be unique because the number of oracle calls $\overline{n}$ is less than $n$, the number of bits in $\bar{z}$, so bits $\overline{n}+1$ through $n$ of $\bar{z}$ can be arbitrary. \begin{lemma} \label{lem-rightY} \ifshow {\bf (lem:rightY)} \else \fi Consider a tiling of an $N \times N$ grid, where $N = 4n(1x^R) - 2w(x1)+3$, for some binary string $x$. Let $\bar{y}$ be a string such that $f_1(\bar{y}) = x1$ and $f_2(\bar{y}) = \bar{z}$, where the $j^{th}$ bit of $\bar{z}$ is the correct answer to the $j^{th}$ oracle query, for $j = 1, \ldots, \overline{n}$. For every $y \in {\cal{D}}^n$, $Cost(\bar{y},0) \le Cost(y, 0)$. \end{lemma} \begin{proof} By Lemma \ref{lem-L3analysis}, in the last row of Layer $3$ every clean long-form interval has a $y$ such that $f_1(y) = x1$. Since the translation rules preserve the string, any clean interval in the first row of Layer $4$ will also have $f_1(y) = x1$. Let $\overline{n}$ be the number of oracle calls made by Turing Machine $M$ on input $x$. We need to establish that the minimum is achieved when the first $\overline{n}$ bits of $f_2(y)=z$ are the same as the first $\overline{n}$ bits of $\bar{z}$. We proceed by induction. Assume that we have established that the first $k$ bits of the minimum $z$ must match $\bar{z}$ in order to achieve the minimum cost. The inputs to the first $k+1$ oracle queries are now fixed. Call these $s_1, \ldots, s_{k+1}$. Now suppose that $\bar{z}_{k+1} = 1$. That means $s_{k+1} \in L'$. Any string that agrees with $\bar{z}$ in the first $k$ bits and has $z_{k+1} = 0$ will pay a cost of $\mbox{num}(k+1)$, which is at least $\mbox{check}_{k+1}(z) - 1 = 2^{2\overline{n}-k+4} - 1$. Suppose instead we use the string that has $z_{k+1} = 1$ followed by a string of zeros. The intervals that are checking bit $k+1$ can be tiled at $0$ cost because $s_{k+1}$ is in fact in $L'$. The intervals that are checking bits $k+2$ through $\overline{n}$, as well as the intervals implementing the cost $2^3 f(x,z)$, will incur a cost of $\mbox{num}(f) + \sum_{j=k+2}^{\overline{n}} \mbox{num}(j)$, which by Lemma \ref{lem-blips} is at most $$2^3 f(x,z) + \sum_{j=k+2}^{\overline{n}} \mbox{check}_j (z)+2 \le 2^3(2^{2 \overline{n} - k + 1} - 1) + 2 = 2^{2\overline{n}-k+4} - 6.$$ Since this is less than the cost of the incorrect guess $z_{k+1}=0$, the incorrect guess for $z_{k+1}$ cannot yield the minimum cost when the correct response is $\bar{z}_{k+1} = 1$. Now suppose that $\bar{z}_{k+1} = 0$. That means $s_{k+1} \not\in L'$. Any string that agrees with $\bar{z}$ in the first $k$ bits and has $z_{k+1} = 1$ will pay a cost of $\mbox{num}(k+1)$, which is at least $2^{2 \overline{n} +5} - 1$. Note that since $s_{k+1} \not\in L'$, when the verifier $V$ is run on input $s_{k+1}$, it must reject, which means that the intervals that check bit $k+1$ will incur a cost of $1$ each. Suppose instead we use $z_{k+1} = 0$ and $0$'s for the remaining bits of $z$.
The cost will be $\mbox{num}(f) + \sum_{j=k+1}^{\overline{n}} \mbox{num}(j)$, which by Lemma \ref{lem-blips} is at most $$2^3 f(x,z) + \sum_{j=k+1}^{\overline{n}} \mbox{check}_j (z) + 2\le 2^3 (2^{2 \overline{n}-k+2} -1) + 2 = 2^{2\overline{n}-k+5} - 6.$$ Since this is less than the cost of the incorrect guess $z_{k+1}=1$, the incorrect guess for $z_{k+1}$ cannot yield the minimum cost when the correct response is $\bar{z}_{k+1} = 0$. We have shown that regardless of the true value of $\bar{z}_{k+1}$, an incorrect guess for $z_{k+1}$ will result in a higher cost than the correct guess. \end{proof} We now have all the pieces in place to prove the reduction. \begin{theorem} \label{th-WFTfinal} \ifshow {\bf (WFTfinal)} \else \fi Consider a tiling of an $N \times N$ grid, where $N = 4n(1x^R) - 2w(x1)+3$, for some binary string $x$. Then the value of $f(x)$ can be recovered from the cost of the minimum cost tiling of an $N \times N$ grid. \end{theorem} \begin{proof} By Lemmas \ref{lem-ignoreIllegal} and \ref{lem-rightY}, the minimum cost tiling does not have any illegal pairs or squares and guesses a $y$ that maps to the correct $x$ for the input and the correct $z$ for the oracle responses. The overall cost of the tiling will be $$\sum_{j=1}^{\overline{n}} (1 - z_j) \mbox{num}(j) + \mbox{num}(f).$$ According to Lemma \ref{lem-blips}, this value is at most two larger or one smaller than $$2^{\overline{n}+5} \sum_{j=1}^{\overline{n}} (1 - z_j) 2^{\overline{n}-j} + 2^3 f(x, \bar{z}).$$ By dividing the cost of the minimum cost tiling by $8$ and rounding to the nearest integer, the lowest order $\overline{n}$ bits give the value of $f(x, \bar{z}) = f(x)$. \end{proof} \section{Parity Weighted Tiling} \label{sec-PWT} The construction for Parity Weighted Tiling is almost exactly the same as for Function Weighted Tiling. We slightly modify the translation of intervals from Layer $3$ to Layer $4$ as follows. The translation rules from Layer $3$ to Layer $4$ translate any $j$ or $\overline{j}$ tile to $j$, for $j \in {\cal{D}}$. $S$ is translated to $(q_{s1}/S)$ or $(q_{s2}/S)$, and $t$ is translated to $t$ for any $t \in \{+, \#, X, B, T, \vartriangleright, \vartriangleleft \}$. The states from Layer $3$ are all dropped, so any tile of the form $(q/c)$ is translated to whatever $c$ would be translated to, according to the rules above. These translation rules ensure that every clean interval is translated to a clean interval as long as the tiles in the interval are translated correctly. Thus every clean long-form interval has the form $X~(q_{si}/S)~{\cal{D}}^*~B^*~T~X$ for $i \in \{1, 2\}$. The ambiguity in whether $S$ is translated to $(q_{s1}/S)$ or $(q_{s2}/S)$ is resolved by the translation rule that $\vartriangleleft~S$ must be translated to $\vartriangleleft~(q_{s1}/S)$ and $X~S$ must be translated to $X~(q_{s2}/S)$. Thus the leftmost interval (if it is a clean long-form interval) looks like $(q_{s1}/S)~{\cal{D}}^*~B^*~T$ and every other clean long-form interval looks like $(q_{s2}/S)~{\cal{D}}^*~B^*~T$. We are now reducing from a language $L \in \mbox{P}^{\mbox{NEXP}}$ which is computed by a polynomial-time Turing Machine $M$ with access to a $\mbox{NEXP}$ oracle. All intervals except for the leftmost interval behave in exactly the same way as for the function version, except that they incur a cost of $2$ for entering a rejection state. Note that we can think of the function for the decision problem as just mapping to a single bit, depending on whether $M$ accepts or rejects.
The computation in the leftmost interval starts in a different start state, which indicates that it will simulate the Turing Machine $M$ on input $(x,z)$ and incur a rejection cost of $+1$ if $M$ rejects. Finally, the cost of an illegal pair or square from the Function Weighted Tiling construction is multiplied by $2$. Note that all costs, except the cost of the computation in the leftmost interval, are twice their corresponding cost in the Function Weighted Tiling construction, so the analysis in Section \ref{sec-FWT} comparing the costs of different tilings still holds. \begin{theorem} \label{th-FEPfinal} \ifshow {\bf (th:FEPfinal)} \else \fi Consider a tiling of an $N \times N$ grid, where $N = 4n(1x^R) - 2w(x1)+3$, for some binary string $x$. Then the cost of the minimum cost tiling of an $N \times N$ grid is even if $x \in L$ and is odd if $x \not\in L$. \end{theorem} \begin{proof} By Lemmas \ref{lem-ignoreIllegal} and \ref{lem-rightY}, the minimum cost tiling does not have any illegal pairs or squares and guesses a $y$ that maps to the correct $x$ for the input and the correct $z$ for the oracle responses. Since the string $y$ is the same in all the long-form intervals, the leftmost interval has the correct input $x$ and the correct responses to the oracle queries. All costs in the tiling are even, except for the penalty at the end of the computation of the leftmost interval. This computation will accept if and only if $x \in L$. If the computation accepts, the cost in this interval is $0$ and the overall cost of the minimum cost tiling is even. If the computation rejects, then the cost in this interval is $1$ and the cost of the entire tiling is odd. \end{proof} \section{Acknowledgements} We are grateful to the Simons Institute for the Theory of Computing, at whose program on ``The Quantum Wave in Computing'' this collaboration began. \bibliographystyle{alpha}
{ "timestamp": "2022-09-20T02:23:52", "yymm": "2209", "arxiv_id": "2209.08731", "language": "en", "url": "https://arxiv.org/abs/2209.08731" }
\section{Proof of Theorem~\ref{prop:compare_seg_hol_two_attrs}}\label{app:proof} In this section, we present the proof of~\Cref{prop:compare_seg_hol_two_attrs}. For any event $E$, we let $\setcomplement{E}$ denote the complement of $E$. We first derive equality~\eqref{eq:proof_both_attrs_err_reduce_disadvantage} below, which is common to parts~\ref{item:attrs_one} and~\ref{item:attrs_both}, and then prove the two parts separately based on this equality. Consider either part~\ref{item:attrs_one} or~\ref{item:attrs_both}, where one or both attributes are protected. Let $\eventerror_\textrm{hol}$ and $\eventerror_\textrm{seg}$ denote the events that the top-$1$ applicant\xspace is identified incorrectly, for holistic allocation and segmented allocation, respectively. Formally, when the number of attributes is $d = 2$, the top-$1$ applicant\xspace is identified incorrectly when \begin{align*} \argmax_{i\in [n]} x_{i} \ne \argmax_{i\in [n]} (y_{i 1} + y_{i 2}) \end{align*} under the respective allocation scheme. For each applicant\xspace $i$, we term the mean of the attribute scores, $\frac{y_{i 1} + y_{i 2}}{2}$, the estimated quality of the applicant\xspace. We first observe that it suffices to consider the case where the best applicant\xspace is disadvantaged: Since the bias factor $\beta$ only applies to the disadvantaged applicants\xspace, for either holistic or segmented allocation, the estimated quality of the advantaged applicants\xspace always equals their true quality, whereas the estimated quality of the disadvantaged applicants\xspace, due to the discounting, never exceeds their true quality. Hence, when the best applicant\xspace is an advantaged applicant\xspace, it is always identified correctly. Formally, recall that the random variables $\valadvantage^{\textmax}_\textadvantage$ and $\valdisadvantage^{\textmax}_\textdisadvantage$ denote the true quality of the best applicant\xspace in the advantaged and the disadvantaged groups, respectively.
For $E\in \{\eventerror_\textrm{hol}, \eventerror_\textrm{seg}\}$, we have \begin{align} \mathbb{P}(E\mid \valdisadvantage^{\textmax}_\textdisadvantage < \valadvantage^{\textmax}_\textadvantage) = 0.\label{eq:proof_both_attrs_no_err_advantage_top} \end{align} Therefore, for $E\in \{\eventerror_\textrm{hol}, \eventerror_\textrm{seg}\}$, we have \begin{align} \mathbb{P}(E) & = \mathbb{P}(E \mid \valdisadvantage^{\textmax}_\textdisadvantage>\valadvantage^{\textmax}_\textadvantage) \cdot \mathbb{P}(\valdisadvantage^{\textmax}_\textdisadvantage>\valadvantage^{\textmax}_\textadvantage) + \mathbb{P}(E \mid \valdisadvantage^{\textmax}_\textdisadvantage < \valadvantage^{\textmax}_\textadvantage) \cdot \mathbb{P}(\valdisadvantage^{\textmax}_\textdisadvantage< \valadvantage^{\textmax}_\textadvantage) \nonumber\\ & \stackrel{\text{(i)}\xspace}{=} \mathbb{P}(E \mid \valdisadvantage^{\textmax}_\textdisadvantage>\valadvantage^{\textmax}_\textadvantage) \cdot \mathbb{P}(\valdisadvantage^{\textmax}_\textdisadvantage>\valadvantage^{\textmax}_\textadvantage) \nonumber\\ & \stackrel{\text{(ii)}\xspace}{=} \frac{1}{2} \mathbb{P}(E \mid \valdisadvantage^{\textmax}_\textdisadvantage>\valadvantage^{\textmax}_\textadvantage)\label{eq:proof_both_attrs_err_reduce_disadvantage}, \end{align} where step~\text{(i)}\xspace is true by equality~\eqref{eq:proof_both_attrs_no_err_advantage_top}; step~\text{(ii)}\xspace is true because the fraction of disadvantaged applicants\xspace is $\alpha=0.5$, and hence $\mathbb{P}(\valdisadvantage^{\textmax}_\textdisadvantage>\valadvantage^{\textmax}_\textadvantage)=\frac{1}{2}$ by symmetry. Note that the true qualities $\valdisadvantage^{\textmax}_\textdisadvantage$ and $\valadvantage^{\textmax}_\textadvantage$ are generated from the continuous distribution $\mathcal{D}$, so it is safe to ignore the case of $\valadvantage^{\textmax}_\textadvantage=\valdisadvantage^{\textmax}_\textdisadvantage$ which happens with probability $0$. We next observe that there is no difference between segmented\xspace and holistic\xspace allocations if neither or both reviewers are biased. Formally, let $R$ denote the event that exactly one out of the two reviewers is biased. We have \begin{align*} \mathbb{P}(\eventerror_\textrm{seg} \mid \valdisadvantage^{\textmax}_\textdisadvantage > \valadvantage^{\textmax}_\textadvantage, \setcomplement{R}) = \mathbb{P}(\eventerror_\textrm{hol} \mid \valdisadvantage^{\textmax}_\textdisadvantage > \valadvantage^{\textmax}_\textadvantage, \setcomplement{R}). \end{align*} Hence, it suffices to compare the conditional error $\mathbb{P}(E\mid \valdisadvantage^{\textmax}_\textdisadvantage > \valadvantage^{\textmax}_\textadvantage, R)$ for $E\in \{\eventerror_\textrm{seg}, \eventerror_\textrm{hol}\}$. Let the random variable $S\subseteq [n]$ denote the set of disadvantaged applicants assigned to the unbiased reviewer. We now analyze the conditional error separately for part~\ref{item:attrs_one} and part~\ref{item:attrs_both}. \subsection{Proof of Theorem~\ref{prop:compare_seg_hol_two_attrs}\ref{item:attrs_one}} We analyze the conditional error $\mathbb{P}(E \mid \valdisadvantage^{\textmax}_\textdisadvantage>\valadvantage^{\textmax}_\textadvantage, R)$ separately for holistic and segmented allocations. \paragraph{Error for segmented\xspace allocation.} Recall that in \segmented allocation\xspace, each of the two reviewers is assigned one attribute. 
Since the reviewer assignment is independent of all else, by symmetry, the biased reviewer is assigned the protected attribute with probability $\frac{1}{2}$. In this case, the estimated quality of the best disadvantaged applicant\xspace becomes $\frac{1 + \beta}{2}\valdisadvantage^{\textmax}_\textdisadvantage$. On the other hand, with probability $\frac{1}{2}$ the unbiased reviewer is assigned the protected attribute. In this case, there is no discounting, and the best applicant\xspace is always correctly identified. We have \begin{subequations}\label{eq:proof_one_attr} \begin{align} \mathbb{P}(\eventerror_\textrm{seg}\mid \valdisadvantage^{\textmax}_\textdisadvantage > \valadvantage^{\textmax}_\textadvantage, R) = \frac{1}{2} \mathbb{P}\left(\frac{1+\beta}{2}\valdisadvantage^{\textmax}_\textdisadvantage < \valadvantage^{\textmax}_\textadvantage\mid \valdisadvantage^{\textmax}_\textdisadvantage > \valadvantage^{\textmax}_\textadvantage, R\right).\label{proof_one_attr_seg_term} \end{align} \paragraph{Error for holistic allocation.} Recall that in \holistic allocation\xspace, each of the two reviewers is assigned both attributes of half of the applicants\xspace. By symmetry, the biased reviewer is assigned the best disadvantaged applicant\xspace with probability $\frac{1}{2}$. In this case, the estimated quality of the best disadvantaged applicant\xspace is again $\frac{1+\beta}{2}\valdisadvantage^{\textmax}_\textdisadvantage$. In order for this applicant\xspace to be identified as the best, its estimated quality needs to exceed both that of the best advantaged applicant\xspace and the estimated qualities of all disadvantaged applicants\xspace assigned to the unbiased reviewer, who does not discount. On the other hand, with probability $\frac{1}{2}$ the unbiased reviewer is assigned the best disadvantaged applicant\xspace, which is then correctly identified as the best applicant\xspace, because no discounting is applied to it. Recall that the random variable $S\subseteq [n]$ denotes the set of disadvantaged applicants assigned to the unbiased reviewer. We have \begin{align} \mathbb{P}(\eventerror_\textrm{hol}\mid \valdisadvantage^{\textmax}_\textdisadvantage>\valadvantage^{\textmax}_\textadvantage, R) = \frac{1}{2} \mathbb{P}\left(\left\{\frac{1 + \beta}{2} \valdisadvantage^{\textmax}_\textdisadvantage< \valadvantage^{\textmax}_\textadvantage\right\} \cup \left\{\frac{1+\beta}{2}\valdisadvantage^{\textmax}_\textdisadvantage < \max_{i\in S} x_i\right\} \;\middle| \;\valdisadvantage^{\textmax}_\textdisadvantage > \valadvantage^{\textmax}_\textadvantage, R\right).\label{eq:proof_one_attr_hol_term} \end{align} \end{subequations} \medskip Since the event in~\eqref{proof_one_attr_seg_term} is a subset of the union of the events in~\eqref{eq:proof_one_attr_hol_term}, combining the two parts of~\eqref{eq:proof_one_attr} yields \begin{align*} \mathbb{P}(\eventerror_\textrm{seg} \mid \valdisadvantage^{\textmax}_\textdisadvantage>\valadvantage^{\textmax}_\textadvantage, R)\le \mathbb{P}(\eventerror_\textrm{hol}\mid \valdisadvantage^{\textmax}_\textdisadvantage>\valadvantage^{\textmax}_\textadvantage, R), \end{align*} completing the proof of part~\ref{item:attrs_one}. \subsection{Proof of Theorem~\ref{prop:compare_seg_hol_two_attrs}\ref{item:attrs_both}} We again analyze the conditional error $\mathbb{P}(E \mid \valdisadvantage^{\textmax}_\textdisadvantage>\valadvantage^{\textmax}_\textadvantage, R)$ separately for holistic and segmented allocations.
\paragraph{Error for segmented\xspace allocation.} When there is exactly one biased reviewer, one attribute of all disadvantaged applicants\xspace is discounted, and the estimated quality of the best disadvantaged applicant\xspace becomes $\frac{1 + \beta}{2}\valdisadvantage^{\textmax}_\textdisadvantage$. Since all disadvantaged applicants\xspace are discounted by the same factor, the estimated quality of the best disadvantaged applicant\xspace remains the highest among the disadvantaged applicants\xspace. It is correctly identified as the best applicant\xspace if and only if its estimated quality exceeds that of the best advantaged applicant\xspace. We have \begin{align} \mathbb{P}(\eventerror_\textrm{seg} \mid \valdisadvantage^{\textmax}_\textdisadvantage > \valadvantage^{\textmax}_\textadvantage, R) & =\mathbb{P}\left(\frac{1+\beta}{2} \valdisadvantage^{\textmax}_\textdisadvantage < \valadvantage^{\textmax}_\textadvantage \;\middle|\; \valdisadvantage^{\textmax}_\textdisadvantage > \valadvantage^{\textmax}_\textadvantage, R\right).\label{eq:proof_both_attrs_seg} \end{align} \paragraph{Error for holistic allocation.} By symmetry, with probability $\frac{1}{2}$, the best disadvantaged applicant is assigned to the unbiased reviewer. In this case, its estimated quality equals the true quality, and it is correctly identified as the best applicant. On the other hand, with probability $\frac{1}{2}$, the best disadvantaged applicant is assigned to the biased reviewer. In this case, the estimated quality of this applicant\xspace becomes $\beta\valdisadvantage^{\textmax}_\textdisadvantage$, and it remains the highest among all disadvantaged applicants\xspace assigned to the biased reviewer. In order for this applicant\xspace to be identified as the best applicant\xspace, its estimated quality needs to exceed both that of the best advantaged applicant\xspace and the estimated qualities of all disadvantaged applicants\xspace assigned to the unbiased reviewer. Recall that the random variable $S\subseteq [n]$ denotes the set of disadvantaged applicants assigned to the unbiased reviewer.
We have \begin{align} \mathbb{P}(\eventerror_\textrm{hol} \mid \valdisadvantage^{\textmax}_\textdisadvantage > \valadvantage^{\textmax}_\textadvantage, R) & = \frac{1}{2}\mathbb{P}\left(\left\{\beta\valdisadvantage^{\textmax}_\textdisadvantage < \valadvantage^{\textmax}_\textadvantage\right\} \cup \left\{\beta \valdisadvantage^{\textmax}_\textdisadvantage < \max_{i\in S} x_i\right\} \;\middle|\; \valdisadvantage^{\textmax}_\textdisadvantage > \valadvantage^{\textmax}_\textadvantage, R\right).\label{eq:proof_both_attrs_hol_term} \end{align} Now setting $\beta = 0$ in~\eqref{eq:proof_both_attrs_hol_term}, we have \begin{align} \mathbb{P}(\eventerror_\textrm{hol} \mid \valdisadvantage^{\textmax}_\textdisadvantage > \valadvantage^{\textmax}_\textadvantage, R) = \frac{1}{2},\label{eq:proof_both_attrs_hol_case_one} \end{align} since the true qualities are positive, so the union of the two events in~\eqref{eq:proof_both_attrs_hol_term} then occurs with probability $1$. \paragraph{Comparing the error for segmented and holistic allocations.} Subtracting~\eqref{eq:proof_both_attrs_seg} from~\eqref{eq:proof_both_attrs_hol_case_one}, we have that when $\beta=0$, \begin{align} \mathbb{P}(\eventerror_\textrm{hol} \mid \valdisadvantage^{\textmax}_\textdisadvantage> \valadvantage^{\textmax}_\textadvantage, R) - \mathbb{P}(\eventerror_\textrm{seg} \mid \valdisadvantage^{\textmax}_\textdisadvantage> \valadvantage^{\textmax}_\textadvantage, R) & = \frac{1}{2} - \mathbb{P}\left(\frac{1}{2} \valdisadvantage^{\textmax}_\textdisadvantage < \valadvantage^{\textmax}_\textadvantage \;\middle|\; \valdisadvantage^{\textmax}_\textdisadvantage > \valadvantage^{\textmax}_\textadvantage, R\right) \nonumber\\ &= \mathbb{P}\left(\frac{1}{2} \valdisadvantage^{\textmax}_\textdisadvantage > \valadvantage^{\textmax}_\textadvantage \;\middle|\; \valdisadvantage^{\textmax}_\textdisadvantage > \valadvantage^{\textmax}_\textadvantage, R\right) -\frac{1}{2} \nonumber\\ & \stackrel{\text{(i)}\xspace}{=} \mathbb{P}\left(\frac{1}{2} \valdisadvantage^{\textmax}_\textdisadvantage > \valadvantage^{\textmax}_\textadvantage \;\middle|\; \valdisadvantage^{\textmax}_\textdisadvantage > \valadvantage^{\textmax}_\textadvantage\right) - \frac{1}{2} \nonumber\\ & \stackrel{\text{(ii)}\xspace}{=} 2 \cdot \mathbb{P}\left(\frac{1}{2} \valdisadvantage^{\textmax}_\textdisadvantage > \valadvantage^{\textmax}_\textadvantage \right) - \frac{1}{2} \label{eq:proof_both_attrs_subtract} \end{align} where step~\text{(i)}\xspace is true because the quality values are independent of whether the reviewers are biased, and step~\text{(ii)}\xspace is true because \begin{align*} \mathbb{P}\left(\frac{1}{2} \valdisadvantage^{\textmax}_\textdisadvantage > \valadvantage^{\textmax}_\textadvantage \;\middle|\; \valdisadvantage^{\textmax}_\textdisadvantage > \valadvantage^{\textmax}_\textadvantage\right) =\frac{\mathbb{P}(\frac{1}{2} \valdisadvantage^{\textmax}_\textdisadvantage > \valadvantage^{\textmax}_\textadvantage, \valdisadvantage^{\textmax}_\textdisadvantage > \valadvantage^{\textmax}_\textadvantage)}{\mathbb{P}(\valdisadvantage^{\textmax}_\textdisadvantage>\valadvantage^{\textmax}_\textadvantage)} = \frac{\mathbb{P}(\frac{1}{2} \valdisadvantage^{\textmax}_\textdisadvantage > \valadvantage^{\textmax}_\textadvantage)}{\mathbb{P}(\valdisadvantage^{\textmax}_\textdisadvantage>\valadvantage^{\textmax}_\textadvantage)} = 2\cdot \mathbb{P}\left(\frac{1}{2} \valdisadvantage^{\textmax}_\textdisadvantage > \valadvantage^{\textmax}_\textadvantage\right), \end{align*} where the second equality holds because $\left\{\frac{1}{2} \valdisadvantage^{\textmax}_\textdisadvantage > \valadvantage^{\textmax}_\textadvantage\right\} \subseteq \left\{\valdisadvantage^{\textmax}_\textdisadvantage > \valadvantage^{\textmax}_\textadvantage\right\}$ (the qualities are positive), and the last equality is true because
$\mathbb{P}(\valdisadvantage^{\textmax}_\textdisadvantage>\valadvantage^{\textmax}_\textadvantage)=\frac{1}{2}$ by symmetry when the fraction of disadvantaged applicants\xspace is $\alpha = 0.5$. Hence, the difference in error between segmented\xspace and holistic\xspace allocations is \begin{align} \mathbb{P}(\eventerror_\textrm{hol}) - \mathbb{P}(\eventerror_\textrm{seg}) & \stackrel{\text{(i)}\xspace}{=} \frac{1}{2}\big[\mathbb{P}(\eventerror_\textrm{hol}\mid \valdisadvantage^{\textmax}_\textdisadvantage > \valadvantage^{\textmax}_\textadvantage) - \mathbb{P}(\eventerror_\textrm{seg}\mid \valdisadvantage^{\textmax}_\textdisadvantage > \valadvantage^{\textmax}_\textadvantage)\big] \nonumber\\ & \stackrel{\text{(ii)}\xspace}{=} \frac{\mathbb{P}(R)}{2}\cdot \big[\mathbb{P}(\eventerror_\textrm{hol}\mid \valdisadvantage^{\textmax}_\textdisadvantage > \valadvantage^{\textmax}_\textadvantage,R) - \mathbb{P}(\eventerror_\textrm{seg}\mid \valdisadvantage^{\textmax}_\textdisadvantage > \valadvantage^{\textmax}_\textadvantage,R)\big]\label{eq:proof_both_attrs_err} \end{align} where step~\text{(i)}\xspace is true by~\eqref{eq:proof_both_attrs_err_reduce_disadvantage}, and step~\text{(ii)}\xspace is true because the error of segmented and holistic allocations is identical conditional on $\setcomplement{R}$. Plugging~\eqref{eq:proof_both_attrs_subtract} and the fact that $\mathbb{P}(R) = 2\gamma(1-\gamma)$ into~\eqref{eq:proof_both_attrs_err} and rearranging completes the proof of~\eqref{eq:prop_err_comparison} and~\eqref{eq:condition}. \paragraph{Condition for power-law distribution.} Following Definition 3 of~\citet{kleinberg2018rooney}, for non-negative functions $f(n)$ and $g(n)$, we define \begin{align*} f(n)\approxident g(n) \end{align*} if and only if $f(n) = g(n) \left(1\pm O\left(\frac{(\ln n)^2}{n}\right)\right)$. Now consider the power-law distribution with constant parameter $\delta$. Setting $\alpha= 1, \beta = 2, c = \alpha \beta^{-(1+\delta)}$ and $k=1$ in Theorem B.3 of~\citet{kleinberg2018rooney} (where $\alpha$, $\beta$, $c$ and $k$ denote the parameters of that theorem, not the quantities defined in this paper) yields \begin{align} \mathbb{P}(\valdisadvantage^{\textmax}_\textdisadvantage <2 \valadvantage^{\textmax}_\textadvantage) \approxident \left(1+2^{-(1+\delta)}\right)^{-1}.\label{eq:proof_both_attrs_prob_approx} \end{align} According to~\eqref{eq:condition}, \segmented allocation\xspace is better than \holistic allocation\xspace if and only if \begin{align*} \mathbb{P}(\valdisadvantage^{\textmax}_\textdisadvantage >2 \valadvantage^{\textmax}_\textadvantage) > 0.25, \end{align*} or equivalently \begin{align} \mathbb{P}(\valdisadvantage^{\textmax}_\textdisadvantage <2 \valadvantage^{\textmax}_\textadvantage) < 0.75.\label{eq:condition_rewrite} \end{align} Combining~\eqref{eq:proof_both_attrs_prob_approx} and~\eqref{eq:condition_rewrite}, for sufficiently large $n$, segmented allocation is better if and only if the constant $\delta$ satisfies \begin{align*} \left(1 + 2^{-(1+\delta)}\right)^{-1} < 0.75, \end{align*} or equivalently $\delta < \frac{\log 3}{\log 2} -1$, completing the proof of claim~\eqref{eq:condition_powerlaw}. \section{Results from a previous experiment}\label{app:previous_expt} In this section, we discuss a previous version of the experiment (termed the ``previous experiment''), conducted before the new version presented in Section~\ref{sec:calibration} (termed the ``new experiment''). For completeness, we present results pertaining to the previous experiment. Based on these results, we also discuss our reasons for collecting data for the new experiment.
The setup of the previous experiment is identical to the new experiment described in Section~\ref{sec:calibration}, except for one difference in how the initial errors of the $20$Q-group\xspace (see the \grouptwentyinit curves in Figures~\ref{fig:calibration} and~\ref{fig:calibration_old}) are computed, to be detailed later in this section. The main conclusions made in Section~\ref{sec:calibration} remain unchanged across these two experiments. We also describe a few other observations that differ across the two experiments, and discuss our conjectures about their causes. \subsection{Results consistent with the new experiment} We report the results for the previous experiment, under the same data analysis procedure as described in Section~\ref{sec:calibration}. Comparing the $5$Q-group\xspace and the $20$Q-group\xspace, the workers' mean error in the previous experiment is $1.05 \pm 0.06$ in the $5$Q-group\xspace, and $0.69 \pm 0.04$ in the $20$Q-group\xspace. We perform a univariate permutation test, and reject the null hypothesis that the errors of the two groups have the same mean (one-sided $p$-value $<0.01$; Cohen's effect size $d= 0.70$). We also compare the errors for individual pages within the $20$Q-group\xspace in the previous experiment; the results are shown in Figure~\ref{fig:calibration_old}. In the $20$Q-group\xspace, the (final) mean error for page $1$ is $0.76 \pm 0.05$, and the (final) mean error for page $4$ is $0.64 \pm 0.05$. We perform a univariate permutation test, and reject the null hypothesis that the errors for these two pages have the same mean (one-sided $p$-value $<0.01$; Cohen's effect size $d=0.25$). The qualitative trends from these results are consistent with the results reported in the new experiment in Section~\ref{sec:calibration}. \subsection{Observations different from the new experiment} Recall that the \grouptwentyinit curve represents the initial mean error for each page, defined as the error of the workers' answers for each individual page, right before the worker first turns to the next page. As described in Section~\ref{sec:calibration}, we expect the initial mean error for page $1$ of the $20$Q-group\xspace to be similar to the mean error of the $5$Q-group\xspace, because the two settings are identical before the workers in the $20$Q-group\xspace ever turn to page $2$. However, in Figure~\ref{fig:calibration_old}, pertaining to the previous experiment, we observe that the initial mean error for page $1$ of the $20$Q-group\xspace ($0.90\pm 0.05$), as shown in the \grouptwentyinit curve, is notably smaller than the error in the $5$Q-group\xspace ($1.05 \pm 0.06$). We make two hypotheses about this discrepancy in the previous experiment. \paragraph{Workers abandoning the task.} We allow the workers to abandon the task at any time they wish, and in the data analysis we only include the workers who completed the task. Since the workers in the $20$Q-group\xspace answer more questions than those in the $5$Q-group\xspace, it is possible that more workers from the $20$Q-group\xspace abandon the task, and this self-selection leads to higher-quality workers in the $20$Q-group\xspace. We re-compute the errors by including the answers from the abandoning workers back into the analysis, and observe a negligible difference compared to when these abandoning workers are excluded. We thus determine that workers abandoning the task is not the primary cause of the discrepancy.
\paragraph{Inaccuracy in imputing page turning.} In the previous experiment, we record the worker's answer to each question and its corresponding timestamp. More precisely, if a worker ever modifies their answer to a question, the initial answer and all subsequent modified answers are recorded along with their timestamps. In the previous experiment, timestamps directly associated with the action of turning pages are not recorded. As a remedy, we impute the time at which the worker first turns to the next page, using the first timestamp at which the worker answers a question on that page. The initial mean error for the current page is then computed using the answers right before this timestamp. This imputation introduces inaccuracy, because it does not preclude the possibility that a worker turns to the next page, looks at the applicants\xspace presented on the next page without answering any questions, and immediately turns back to the previous page to modify their answers. In this case, no answer on the next page is recorded, even though the worker has indeed turned to that page and then come back to the previous page. It is thus possible that the worker uses information gained from the applicants\xspace on the next page to modify answers on the current page, leading to a lower initial error for page 1 of the $20$Q-group\xspace than the error of the $5$Q-group\xspace.\footnote{ Recall that we allow the worker to turn back to previous pages at any time, but only turn to the next page if all questions on the current page have been answered, so the inaccuracy only comes from workers gaining information from the immediately following page, but not from any later pages. } \paragraph{Change in experimental design.} To understand how the imputation of page turning in the previous experiment contributes to the discrepancy between the error of the $5$Q-group\xspace and the initial error for page 1 of the $20$Q-group\xspace, we conduct the new experiment presented in Section~\ref{sec:calibration}. In the new experiment, the task is identical except that we additionally record the timestamps directly associated with each button click that turns pages. To compute the initial error for each individual page of the $20$Q-group\xspace, we now directly use the first timestamp at which the worker clicks the button to turn to the next page, and compute the initial mean error of the current page using the worker's answers right before this timestamp. For comparison, in the new experiment, we also compute the errors using the previous imputation for page turning. We observe a negligible difference in the \grouptwentyinit curve when using the button timestamps versus the page-turning imputation in the new experiment. We hence determine that the primary cause of the discrepancy between the \grouptwentyinit curve and the $5$Q-group\xspace in the previous experiment is not the page-turning imputation either. We do not have a clear alternative explanation for this discrepancy, and attribute it to randomness in data collection. \paragraph{Difference in worker mean errors.} In addition to the discrepancy between the error of the $5$Q-group\xspace and the initial error for page 1 of the $20$Q-group\xspace in the previous experiment, we also observe that the errors are overall smaller in the previous experiment than in the new experiment.
For example, the workers' mean errors for the $5$Q-group\xspace and $20$Q-group\xspace are $1.05\pm 0.06$ and $0.69 \pm 0.04$ respectively in the previous experiment, compared to $1.14\pm 0.06$ and $0.84\pm 0.05$ respectively in the new experiment. We do not have a clear explanation for this difference. Since the two experiments only differ in whether the button click information is recorded, and this change is not visible to the workers, we conjecture that this difference in errors is related to worker quality changes specific to the crowdsourcing platform (e.g., different dates, days of the week, and times of the day) or randomness in data collection. \begin{figure}[tb] \centering \includegraphics[width=0.35\linewidth]{figures/experiment_old.pdf} \caption{The mean error in estimating the percentile bins in the previous experiment, for workers in the $5$Q-group\xspace and the $20$Q-group\xspace. Error bars represent the standard error of the mean.} \label{fig:calibration_old} \end{figure} \section{Introduction} Evaluation and selection are two essential functions that play a critical role in almost every organization, as they determine who joins the organization, who remains, and the resulting organizational performance. However, they can also be a significant source of errors, leading to bad selection decisions and potentially limiting opportunities for certain groups. In the past, concerns about inaccurate or biased selection decisions have led to recommendations for the use of structured processes, such as structured job interviews, so that they are consistently conducted and fair to all applicants\xspace~\citep{schmidt1998validity}. \begin{figure*}[t] \centering \subfloat[]{ \begin{minipage}[t]{0.09\linewidth}\raisebox{0.62cm}{\includegraphics[width=\linewidth]{figures/scheme_nonanalytic.pdf}}\end{minipage} \hspace{0.8cm}\vrule\hspace{0.8cm} \includegraphics[width=0.606\linewidth]{figures/scheme_analytic.pdf} \label{float:scheme} } \hrule \subfloat[]{\includegraphics[width=0.729\linewidth]{figures/allocation.pdf}\label{float:allocation}} \caption{ An illustration of the difference between non-analytic and analytic evaluation approaches (top panel), and the spectrum of holistic vs. segmented allocation (bottom panel).\label{fig:scheme} } \end{figure*} At the other end of the spectrum, for lower-stakes selection problems, distribution and automation of the evaluation task have become popular. Over the last few decades, developments in online collaboration and decision making have demonstrated the benefits of using the ``crowd'' for many decisions \cite{surowiecki2005wisdom}, opening up new possibilities for conducting evaluation and selection in a more efficient, accurate, and potentially less biased manner. For some decisions, crowd-based processes produce better results when human inputs are aggregated algorithmically. However, there have also been ample examples of less effective decisions arrived at by crowds \cite{hube2019understanding}. A review of typical approaches to these crowd-based evaluations and decisions shows that their structure varies considerably in terms of the kinds of information reviewed by evaluators when making decisions \cite{draws2021checklist}. We investigate the intricacies of taking this idea of distributed judgment back to the high-stakes regime, and study how the structure of an evaluation process influences the quality of decisions.
Figure~\ref{fig:scheme}\subref{float:scheme} summarizes the design choices involved in an evaluation procedure. In this work, we focus on \textbf{\textit{analytic}} evaluation, where the evaluation of an applicant\xspace (e.g., a job candidate) is decomposed into a pre-defined set of attributes. Analytic evaluation is commonly used in hiring, admissions and grading. For example, in admissions, the attributes may include the student's school GPA, essay quality, and the strength of recommendation letters. On the other hand, in \textbf{\textit{non-analytic}} evaluation, the evaluator\xspace is not required to separately examine individual attributes. Instead, it is sufficient to provide an overall score for each applicant\xspace. While not defining attributes in the non-analytic regime offers evaluators\xspace the freedom to think comprehensively about all possible aspects of the applicants\xspace, the lack of structure may cause the evaluators\xspace to rely overly on their general impression, leading to inconsistency and inaccuracy compared to the analytic approach~\cite{jonsson2021analytic}. Hence, analytic evaluation is our regime of interest. Another design choice is the method used to aggregate attributes to derive an overall score for each applicant\xspace. We consider \textbf{\textit{exogenous}} aggregation, where attributes are aggregated using pre-defined rules (the simplest example is to take a mean, or a weighted mean, of all attributes) or algorithmically learned ones~\citep{noothigattu2018choosing}. On the other hand, \textbf{\textit{human}} aggregation means that after evaluating individual attributes, the evaluator\xspace additionally provides a final score by combining the attributes in some sensible way of the evaluator\xspace's choice. Although human aggregation hypothetically provides more flexibility, simple exogenous aggregation rules often turn out to be no less accurate than, and sometimes even to outperform, human aggregation~\citep{kahneman2021noise}. All in all, the issues with the non-analytic approach and human aggregation suggest that the supremacy of human reasoning is overestimated: ``People trust that the complex characteristics of applicants can be best assessed by a sensitive, equally complex human being. This does not stand up to scientific scrutiny''~\citep{highhouse2008stubborn}. Hence, our regime of interest is \textbf{analytic evaluation under exogenous aggregation}. A fundamental question in structuring the evaluation process is how to allocate applicants\xspace and attributes to evaluators\xspace. There are two basic approaches, which are likely to lead to different outcomes (see Figure~\ref{fig:scheme}\subref{float:allocation}). First, in \textbf{\textit{holistic\xspace}} allocation, evaluators\xspace are asked to review and assess all attributes of each applicant\xspace. As shown in Figure~\ref{fig:scheme}\subref{float:allocation}, if we assume each evaluator\xspace is represented by a rectangle of a fixed area (workload), then holistic\xspace allocation entails rectangles of the greatest width (number of attributes), and therefore the smallest height (number of applicants\xspace). In \textbf{\textit{segmented}} allocation, if we hold the workload constant, each evaluator\xspace reviews one or a few attributes for a larger number of applicants\xspace (see the right end of Figure~\ref{fig:scheme}\subref{float:allocation}).
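To make the rectangle metaphor concrete, the following toy sketch partitions an $n \times d$ score matrix into per-evaluator\xspace blocks under the two extreme schemes, holding each evaluator\xspace's workload (number of matrix cells) fixed; all sizes below are illustrative, not taken from our experiments.
\begin{verbatim}
# Toy partition of an n-by-d score matrix into per-evaluator blocks,
# holding each evaluator's workload (number of cells) fixed. Sizes illustrative.
n, d, workload = 12, 4, 12

# Holistic: every evaluator sees all d attributes of workload/d applicants.
rows = workload // d
holistic = [(list(range(i, i + rows)), list(range(d)))
            for i in range(0, n, rows)]

# Segmented: every evaluator sees one attribute of `workload` applicants.
segmented = [(list(range(i, i + workload)), [j])
             for j in range(d) for i in range(0, n, workload)]

print(len(holistic), "holistic evaluators;", len(segmented), "segmented evaluators")
\end{verbatim}
Both partitions cover all $n \times d$ cells exactly once; only the shape of each evaluator\xspace's block changes.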
Holistic\xspace allocation is quite common in organizational hiring processes as well as in academic admissions~\citep{de2021revising}, where people feel that a more complete understanding of an applicant or the nuances of human judgment improves the quality of decisions. Segmented\xspace allocation is more likely to be used when the attributes are considered relatively independent of one another. One example where a segmented approach is common is the grading of assignments in educational settings, where different instructors may grade different questions, since performance on one question is not viewed as relevant to the evaluation of another. Here, however, we raise the question of \textit{whether holistic allocation is as effective as often assumed, or whether segmented\xspace allocation might result in better decisions}. Certainly, for segmented\xspace allocation, it is necessary that the attributes being evaluated are separable enough that independent evaluations of them are feasible, which holds true in many instances. In these instances, we argue that segmented\xspace allocation could result in better decision quality, at least under certain conditions. We provide a brief review of the relevant literature and outline a framework describing the key difficulties associated with evaluation that have been identified in extant research, including \textbf{calibration} of evaluators, the \textbf{efficiency} with which evaluation is conducted, and the degree of \textbf{bias} in the resulting decisions. In presenting our framework, we also delineate specific conditions under which we expect holistic or segmented allocation to lead to better decision outcomes. We employ a mixed-method study combining modeling, simulation, crowdsourcing experiments and theoretical analysis to explore the conditions under which holistic\xspace or segmented\xspace allocation performs better in terms of calibration, efficiency and fairness. In brief, we find that segmented\xspace allocation provides benefits for evaluators' calibration accuracy, whereas holistic\xspace allocation leads to greater efficiency in carrying out evaluations. In terms of mitigating bias, we observe mixed results depending on the specific conditions of the application under consideration. Taken together, our work integrates a few lines of research with implications for the quality of evaluation decisions in a variety of different environments, and provides guidance to system designers for determining the evaluation structure that works best in their context. We discuss a few key differences between our work and prior literature. At a high level, holistic\xspace and segmented\xspace allocations concern decomposing and distributing a complex task into smaller parts. In crowdsourcing, complex tasks, such as creating animated movies, making course videos or writing articles, are broken down into parts in a similar fashion. Different crowdsourcing workers complete different parts, and then their work is computationally or manually put together~\cite{kittur2011forge,retelny2014expert,cheng2015break}. While these works measure the quality of the completed work, such as by having professional journalists rate the written articles, we focus on more concrete and quantitative impacts of such decomposition in an evaluation and selection context, where some of the considerations we study, such as fairness, naturally arise.
Another important application of human computation is peer review, where there is a large body of work on assigning reviewers to papers~\cite[Chapters 3 and 4]{shah2022surveyextended}, with a focus on finding the assignments that maximize the expertise of reviewers assigned to the papers or on mitigating undesirable behavior by reviewers. In our work, we primarily consider applications where the work of evaluating individual attributes can be effectively decomposed. On the other hand, in peer review, the task usually cannot be readily decomposed, and reviewers are required to read the entire paper. We discuss more related work when we formally introduce the specific dimensions in Section~\ref{sec:background}. All experiments conducted in this paper were approved by the Institutional Review Board (IRB) at Carnegie Mellon University. The crowdsourcing data, the user interface, and all code to reproduce our results are available at\\\url{https://github.com/jingyanw/segmented-vs-holistic}. \section{Theoretical Background} \label{sec:background} In theorizing the conditions under which holistic\xspace or segmented\xspace allocation leads to better decision making, we identify a few key difficulties from extant research, including the \textit{calibration} of evaluators, the \textit{efficiency} with which evaluation is conducted, and the degree of \textit{bias} mitigation. Along these key dimensions, we present six hypotheses, study three of them in detail using theory, simulation and experiments, and leave the remaining three for future work. \subsection{Calibration} In the context of evaluation, we use ``calibration'' to refer to the ability of evaluators to apply consistent criteria in assessing applicants\xspace, such that the evaluation accurately reflects each applicant\xspace's quality relative to the entire pool~\cite{osborne1991statistical}. Note that if an evaluator is able to perfectly identify the placement of each applicant\xspace with respect to all others under consideration, then the evaluator\xspace identifies a perfect ranking of all applicants\xspace. However, several factors hinder the evaluator\xspace's ability to do so. \subsubsection{Lack of information about the population.} In many situations, evaluators lack complete information about the full range of quality represented by applicants\xspace in the pool, and thus are not able to calibrate their assessments perfectly. Eliciting ordinal data such as pairwise comparisons or rankings~\cite{shah2017design} helps mitigate miscalibration. Nevertheless, ratings have their own benefits~\cite{wang2018your}, and ratings of some form are widely used in practice to compare applicants\xspace assessed by different evaluators\xspace. For instance, the applicants\xspace are placed in categories such as \{definitely admit, maybe admit, waitlist, do not admit\} in admissions, and employees are placed in categories such as \{above average, below average\} in performance evaluation~\cite{goffin2011relative}. We expect that issues related to evaluator\xspace calibration are among the major drawbacks of holistic\xspace allocation. In holistic\xspace allocation, each evaluator\xspace assesses all attributes for each applicant\xspace they are assigned. With the exception of very small applicant pools, this necessitates that each evaluator\xspace sees only a small subset of the entire pool.
By contrast, in segmented\xspace allocation, each evaluator\xspace sees a much larger set of the pool, perhaps even the entire set of scores in the pool for their assigned attributes. Therefore, we expect that segmented\xspace allocation has the advantage of enhancing evaluator\xspace calibration. Although it seems intuitive that evaluating more applicants\xspace improves calibration, it is unclear whether this effect actually manifests in practice. To see a counter-argument, consider the following pair of scenarios. In the first scenario, the evaluator\xspace reviews $5$ applicants\xspace, whereas in the second scenario, the evaluator\xspace reviews $20$ applicants\xspace. One may expect that when evaluating the last few applicants\xspace in the second scenario, the evaluator\xspace has already seen many more applications than in the first scenario. However, the evaluator\xspace may only be able to keep in mind $5$ or fewer applicants\xspace when evaluating any other applicant\xspace, in which case their calibration in the two scenarios will be comparable. \begin{hypothesis}[Studied in Section~\ref{sec:calibration}]\label{h:calibration} For each individual attribute, segmented\xspace allocation, in which each evaluator\xspace has access to more applicants\xspace, leads to better calibration. \label{hypothesis-accuracy} \end{hypothesis} \subsubsection{Ordering effect.} The lack of information about the population suggests that calibration depends on the total number of applicants\xspace assigned to evaluators\xspace. Calibration may further vary as a function of the ordering in which these applicants\xspace are evaluated. One reason for such variation is the bounded rationality of people, such as the cognitive effects of primacy and recency~\cite{page2010idol}, assimilation and contrast~\cite{damisch2006olympic}, and generosity-erosion~\cite{vives2021erosion}. A second reason for such variation is that evaluators\xspace gradually adapt their calibration as they evaluate each applicant\xspace along the way: When an evaluator\xspace rates the $5^\textrm{th}$ applicant\xspace, their grading scale is based on the first $5$ applicants\xspace seen so far, but by the time the evaluator\xspace moves to rate the $100^\textrm{th}$ applicant\xspace, they have acquired much more information for calibration from the $100$ applicants\xspace compared to when they rated the $5^\textrm{th}$ applicant\xspace. Such ordering effects can be mitigated in segmented\xspace allocation: Since the attributes of the applicants\xspace are assigned to different evaluators\xspace, the ordering can be shuffled so that each evaluator\xspace sees a different ordering, thereby ``averaging out'' the effect of ordering when the scores from these evaluators\xspace are aggregated. \begin{hypothesis} Segmented\xspace allocation, in which the ordering of the applicants can be shuffled independently for each attribute, leads to better calibration compared to holistic\xspace allocation, under which all attributes are evaluated under one ordering by design.\label{hypothesis:ordering} \end{hypothesis} \subsection{Efficiency} Selection and evaluation processes can also be resource-intensive and time-consuming. One reason why quality might suffer is the basic human tendency to ``satisfice'' \cite{hilbert2012toward}, particularly when workload is high. Consequently, we contend that another important element to consider in evaluating the relative benefits of different allocation schemes is the impact on efficiency.
This pertains to how quickly evaluators\xspace make their assessments, but also to the degree to which an allocation scheme affords evaluators\xspace an opportunity to find shortcuts and adaptively allocate their effort. \subsubsection{Adaptively allocating effort.} The goal of many evaluation and selection processes is to identify the best subset of applicants\xspace from the available pool. In holistic\xspace allocation, if a particular applicant\xspace is clearly below the threshold on a subset of the attributes, the evaluator\xspace may conclude that the applicant\xspace will not be selected, without scrutinizing the remaining attributes or giving a precise score to the applicant\xspace. In addition, evaluators may use signals, such as red flags in recommendation letters in academic admissions, to draw a preliminary conclusion which they quickly confirm or deny with a cursory review of the remaining information. The evaluators also enjoy the flexibility to adaptively choose which attribute to review next based on the attributes already reviewed. In contrast, adaptive strategies are more challenging to implement in segmented\xspace allocation, because the evaluation task is typically allocated in parallel to the evaluators\xspace. That said, within segmented\xspace allocation, the system could employ a filtering rule for certain attributes \emph{before} assigning applicants\xspace to evaluators\xspace. For example, in academic admissions, threshold values for standardized test scores and GPAs are often used as preliminary filters to eliminate some applicants\xspace from further consideration. However, there are concerns that standardized test scores are themselves biased against certain groups of applicants. Another remedy is to decompose the evaluation task into multiple rounds, where applicants are filtered in between rounds. However, having multiple rounds adds logistical complexity to the evaluation procedure, and may also require more time to complete the evaluation process. We hypothesize that in holistic\xspace allocation, evaluators can reap the adaptive benefits of efficiency without significantly sacrificing accuracy. Furthermore, we postulate that the gain is more prominent when the attributes being evaluated are correlated with one another: Screening applicants\xspace primarily based on the assessment of one attribute is less likely to lead to errors in the overall assessment when attributes are highly correlated than when they are only weakly correlated or independent. \begin{hypothesis}[Studied in Section~\ref{sec:efficiency}]\label{h:efficiency} Holistic\xspace allocation results in more efficiency in evaluation without significantly reducing accuracy, when the attributes being assessed are highly correlated and thus can be used as proxies or screening tools for one another. \label{hypothesis:efficiency} \end{hypothesis} \subsubsection{Switching costs.} In holistic\xspace allocation, the evaluator\xspace primarily switches between different attributes\xspace, whereas in segmented\xspace allocation, the evaluator\xspace primarily switches between applicants\xspace. Whether switching between applicants\xspace or attributes\xspace involves greater effort depends on the user interface, where the evaluator\xspace accesses applicant\xspace information by, for example, navigating through directories or downloading applicant\xspace files.
A system for admissions, for example, may be designed such that more clicks are needed to access different applicants\xspace than different attributes within the same applicant\xspace. In this case, holistic\xspace allocation may incur lower switching costs than segmented\xspace allocation. However, in addition to the operational cost incurred by the user interface, another consideration relates to the cognitive load of switching between different types of information. For example, in assessing applicants for admissions, if an evaluator\xspace operating in holistic\xspace allocation has to shift from reviewing transcripts and test scores to evaluating essays and recommendation letters, the time and cognitive effort involved in this transition between attributes may outweigh the savings gained from the user interface. Consequently, whether holistic\xspace or segmented\xspace allocation leads to greater efficiency as a result of reduced switching costs depends on the user interface and the similarity in reasoning about different attributes. \begin{hypothesis} (a) Holistic\xspace allocation results in more efficiency than segmented\xspace allocation, when transitioning from one applicant\xspace to another requires more time or clicks than transitioning between attributes of the same applicant\xspace.\\ (b) Segmented\xspace allocation results in more efficiency than holistic\xspace allocation, when transitioning from one attribute to another requires high cognitive effort due to the level of variation in the data and assessment process, taking more time than transitioning between applicants\xspace for the same attribute. \end{hypothesis} We remark that the user interface should be designed to support the chosen allocation scheme. Specifically, if segmented\xspace allocation is used, then the interface should be constructed so that the switching cost between applicants\xspace for the same attribute is made as low as possible. \subsection{Mitigating Bias} A major concern that regularly arises in evaluation and selection processes is that of bias. Researchers consider a decision to be biased when there is deviation from what is normatively predicted by classical probability and utility theory to be the optimal outcome based on the information or options available~\cite{hilbert2012toward}. Bias in decision making is a widely-studied topic in a number of fields, as it has substantial implications not only for evaluation and selection decisions, but also for many other high-stakes applications such as medical diagnosis, crime prevention, and financial performance, to name just a few~\citep{saposnik2016cognitive,costa2017bibliometric,kovera2019racial}. One type of bias of particular concern for evaluation and selection is that which results in systematic discrimination against certain groups on the basis of information that is irrelevant or inappropriate for assessment~\cite[Section 7]{bertrand2004call,moss2012science,tomkins2017reviewer,shah2022surveyextended}. Many biases operate on a subconscious level~\cite{greenwald2003understanding} and thus affect evaluations even when the evaluator\xspace intends to be fair. Consequently, common recommendations include limiting subjective human judgment by using objective measures wherever possible, or, when humans are making subjective assessments, ensuring that those are guided by specific outcome-relevant criteria and structured for consistent application to each applicant\xspace~\citep{campion1988structured,pogrebtsova2020selection}.
We propose that the allocation scheme also has an impact on mitigating bias. Specifically, we anticipate that holistic\xspace and segmented\xspace allocations affect outcomes by limiting the impact of highly biased evaluators on overall decision accuracy and by restricting access to biasing information. \subsubsection{Reducing the impact of biased evaluators.} It is likely that different evaluators\xspace are biased to different extents. When some evaluators\xspace are biased and some are not (or less so), holistic\xspace and segmented\xspace allocations are likely to lead to different types of impact. In \holistic allocation\xspace, any particular applicant\xspace has a certain probability of being assigned a biased evaluator\xspace (depending on the fraction of biased evaluators\xspace). Consequently, a subset of the applicants\xspace will be highly affected by biased decisions, while the rest of the applicants\xspace are not. By contrast, in \segmented allocation\xspace, the probability that all attributes of a particular applicant\xspace are assessed by highly biased evaluators becomes lower; however, it is more likely that each applicant\xspace receives some assessment from at least one biased evaluator, compared to \holistic allocation\xspace. \begin{hypothesis}[Studied in Section~\ref{sec:bias}]\label{h:fairness} Compared to \holistic allocation\xspace, \segmented allocation\xspace better mitigates the impact of biased evaluators\xspace on the accuracy of the applicant\xspace evaluation, by reducing the chances that all attributes of any particular applicant\xspace are evaluated by biased evaluators\xspace. \label{hypothesis-bias2} \end{hypothesis} \subsubsection{Restricting access to biasing information.} Arguably, many interventions that have been made over the last several decades in traditional evaluation and selection processes are focused on limiting evaluators' access to biasing information. One famous example comes from symphony orchestras as they made efforts to incorporate more female musicians in the 1970s and 1980s~\cite{goldin2000orchestrating}. Initial diversity efforts yielded limited progress, even when many orchestras conducted auditions using screens to block evaluators' view of the candidates. However, one observant evaluator noted the difference in the sound on the wooden stage floor as the musicians entered for their audition, particularly the distinct sound of the high heels worn by the female musicians in contrast to the flat sounds made by most men's dress shoes. Consequently, a number of groups began using a carpeted walkway in addition to the screen, which resulted in a sudden increase in the number of women invited to join \cite{goldin2000orchestrating}. In generalizing this idea to the context of evaluation and selection, we hypothesize that \segmented allocation\xspace mitigates bias by limiting access to information about irrelevant and potentially biasing attributes. For example, an evaluator could be asked to evaluate the research statements of graduate school applicants without access to any other information about the applicants, substantially limiting the possibility of bias. \begin{hypothesis}\label{hypothesis:bias_access} Segmented\xspace allocation helps mitigate the impact of bias compared to holistic\xspace allocation, as a result of limiting evaluators' access to biasing information when they assess individual attributes of the applicants\xspace.
\end{hypothesis} \section{Modeling Framework} We describe the mathematical framework used in our analysis. \paragraph{Notation.} We assume that there are $n$ applicants\xspace, and each applicant\xspace has $d$ attributes. We let $x_{ij}\in \mathbb{R}$ be the true quality of applicant\xspace $i\in [n]$ on attribute $j\in [d]$.\footnote{ We use the notation $[\kappa]\colon= \{1, 2, \ldots, \kappa\}$ for any positive integer $\kappa$. } A higher value represents higher quality. When there is more than one attribute, we define the true ranking of the applicants as the ranking induced by the mean of their attribute values. The evaluation task is represented by the matrix $\{x_{\idxapp\idxattr}\}_{i\in [n], j\in [d]}$, and we divide the matrix into sub-matrices as shown in Figure~\ref{fig:scheme}, where each evaluator\xspace assesses a smaller sub-matrix consisting of a subset of the applicants\xspace and a subset of the attributes (where the subset is allowed to equal the entire set). For simplicity, we assume each attribute of each applicant\xspace is evaluated once, so all the sub-matrices are disjoint and collectively partition the entire matrix. We let $y_{ij}\in \mathbb{R}$ denote the score given to attribute $j$ of applicant\xspace $i$ by the assigned evaluator\xspace. Note that $y_{ij}$ is often a noisy evaluation of $x_{ij}$. \paragraph{Metric.} In many evaluation and selection processes such as hiring or academic admissions, the goal is to choose a specified number of applicants\xspace of the highest quality. Therefore, the accuracy of the evaluation process is determined by the top-$K$ accuracy in ranking. For simplicity, we consider the top-$1$ accuracy as studied by~\citet{kleinberg2018rooney}. That is, the accuracy is $1$ if the estimated ranking correctly identifies the best applicant\xspace in the true ranking, and $0$ otherwise.\footnote{ In our setup, we make sure that there exists a unique best applicant in the true ranking. If there are ties in the estimated ranking, the accuracy is computed as $1 / \text{(number of applicants\xspace in the tie)}$ if the true best applicant\xspace is one of the estimated applicants\xspace in the tie, and $0$ otherwise. } We also consider a second error metric that is suitable for understanding the calibration of evaluators\xspace. This metric represents the mean error in estimating the percentile of each applicant\xspace, described in detail in Section~\ref{sec:calibration}. \paragraph{Data generation.} In our simulations, we follow prior work~\cite{kleinberg2018rooney} and generate the attribute values from the power-law distribution unless specified otherwise. The power-law distribution with parameter $\delta > 0$ is defined as $\mathbb{P}[Z \ge t] = t^{-(1+\delta)}$ supported on $t\in [1, \infty)$, where $Z$ denotes the random variable. We allow the attributes to be correlated, defined by a correlation parameter $\sigma\in [-1, 1]$. For any desired distribution with c.d.f. $F$, we define the following procedure (cf.~\citealp{nelsen2010copula}) to generate $d$-dimensional correlated random variables. Let $\Phi$ denote the c.d.f. of the standard normal. For each applicant $i$, we first sample a vector $\boldsymbol{z}_i\in \mathbb{R}^d$ from a multivariate normal distribution as $\boldsymbol{z}_i\sim \mathcal{N}(0, (1-\sigma)I_d + \sigma\mathbf{1}_d\mathbf{1}_d^T)$, independently across the applicants $i\in [n]$. Then we compute the attribute values as $x_{ij} = F^{-1}(\Phi(z_{ij}))$.
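A minimal sketch of this copula-based generator, instantiated with the power-law $F$ defined above, is as follows; the parameter values are illustrative.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

# Copula-based generator: z_i ~ N(0, (1-sigma) I + sigma 11^T), then
# x_ij = F^{-1}(Phi(z_ij)) with the power-law F (illustrative parameters).
n, d, sigma, delta = 1000, 2, 0.5, 1.0
rng = np.random.default_rng(0)

cov = (1 - sigma) * np.eye(d) + sigma * np.ones((d, d))
z = rng.multivariate_normal(np.zeros(d), cov, size=n)

# Power law: F(t) = 1 - t^{-(1+delta)} on [1, inf),
# so F^{-1}(u) = (1 - u)^{-1/(1+delta)}.
x = (1.0 - norm.cdf(z)) ** (-1.0 / (1.0 + delta))   # shape (n, d)
\end{verbatim}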
It can be verified that each $x_{ij}$ has marginal distribution $F$. As special cases, when $\sigma=1$, all attributes have identical values; when $\sigma=0$, all attributes are independent. \section{Methods and Results} In this section, we examine our hypotheses related to calibration, efficiency and mitigating bias. \subsection{Calibration}\label{sec:calibration} We focus on studying the relation between calibration accuracy and the number of applicants\xspace assigned to an evaluator\xspace, as described by Hypothesis~\ref{hypothesis-accuracy}. \paragraph{Operationalization of calibration.} Formally, we define calibration as the evaluator\xspace's accuracy in estimating the ranking (or percentile) of each applicant\xspace with respect to the entire pool of all applicants\xspace. We define calibration on this relative scale for three reasons. First, the selection problem is intrinsically relative in nature; that is, we aim to select the top applicants\xspace relative to the entire pool. Second, in many applications, the evaluators\xspace are asked to report relative data. For example, evaluators\xspace may be asked to give scores on a scale of $1\hyphen5$, where the criteria define the score of $1$ as the applicant\xspace being in the bottom 20\% among all applicants\xspace, $2$ as being in the ${20}\hyphen{40}\%$ range among all applicants\xspace, etc. Third, social comparison theory suggests that people's reasoning has a relative nature~\cite{festinger1954social}. For example, being a ``top'' applicant\xspace is perceived as simply being significantly better than the rest of the applicants\xspace. For this reason, using a relative scale rather than an absolute scale has been shown to be more effective in various judgment tasks~\cite{goffin2011relative}. \paragraph{Experimental setup.} To isolate the impact of calibration, we make a number of design simplifications, and conduct an experiment focusing on a single attribute\xspace. We recruit $200$ crowdsourcing workers on the Prolific platform. The workers are introduced to a hiring context and asked to evaluate scores of applicants\xspace. Specifically, they are told that there are $1000$ applicants\xspace with scores that are integers between $0$ and $300$, without any distributional information about the scores. Then the workers are presented with some scores between $200$ and $300$, and are asked to estimate the percentiles of these scores. The workers classify each score into one of five bins with respect to the population: $0\hyphen20\%$, ${20}\hyphen{40}\%$, ${40}\hyphen{60}\%$, ${60}\hyphen{80}\%$, and $80\hyphen100\%$. We choose to ask the workers to report in $5$ quantized bins instead of directly reporting a numeric percentile, because prior studies have shown that workers are not able to perceive fine-grained numbers accurately due to limited processing abilities~\cite{miller1956magic,shah2016estimation}, and therefore have higher accuracy when a small number of quantized choices is given (e.g.,~\citealp{lietz2010questionnaire}). We confirmed this trend in a preliminary study comparing the use of $5$ bins versus $10$ bins. \paragraph{Question grouping.} The workers are divided into two groups uniformly at random. Recall that there is a single attribute. In the first group, each worker is presented with the scores of $5$ applicants\xspace (termed the ``$5$Q-group\xspace''). In the second group, each worker is presented with the scores of $20$ applicants\xspace (termed the ``$20$Q-group\xspace''). The workers are always presented with $5$ scores per page.
That is, for the $20$Q-group\xspace, the $20$ questions are distributed across $4$ pages. Neither group of workers is told the number of scores they will be presented before starting the task. The workers are required to answer all questions on a page before proceeding to the next page, though they are allowed to review and edit their answers on previous pages at any time before submission. We choose to present $5$ questions per page and do not inform the workers of the total number of questions, to address the confounder that a worker who knows they have to answer $20$ questions may put less effort into each question than if they knew they had to answer only $5$. \paragraph{Values of scores.} Since we consider a single attribute, we use the shorthand $x_i \colon= x_{i 1}$ for the true score of each applicant\xspace $i$. Let $F$ be the distribution $\mathcal{N}(230, 25)$, truncated to the range $[200, 300]$. The scores $\{x_{i}\}_{i\in [n]}$ in the $20$Q-group\xspace are generated i.i.d. from $F$. We pair up workers in the $20$Q-group\xspace and the $5$Q-group\xspace. For the scores in the $5$Q-group\xspace, we use the same values as the last $5$ questions in the $20$Q-group\xspace for a direct comparison. We choose this distribution for the scores because, in a preliminary study where the workers were presented scores in the range $[0, 100]$, we observed that the workers appear to have a strong uniform prior, mapping scores in $[0, 20]$ to percentile $0\hyphen20\%$, etc. This uniform mapping is an artifact of the experimental design, in which the quality under evaluation is itself numeric. In more realistic situations, such a simplified mapping, say from applicants\xspace' interview performance to scores, does not exist. We therefore choose a range that is not $[0, 100]$ so that the workers do not rely on such priors. \paragraph{Experimental Results.} We record the worker calibration measured by their accuracy in estimating the percentile bins. Formally, let $\funcbin$ be the function mapping the percentile ranges $0\hyphen 20\%, 20\hyphen40\%, 40\hyphen60\%, 60\hyphen80\%$ and $80\hyphen100\%$ to the bins $1, 2, 3, 4$ and $5$, respectively. For a single worker, let $y_i\in [5]$ be the bin reported for applicant\xspace $i$. Then the absolute error between the true bin and the reported bin for applicant\xspace $i$ incurred by this worker is defined as $\abs*{\funcbin(F(x_i)) - y_i}$, where $F(x_i)$ is the true percentile of applicant\xspace $i$. For each worker, we compute their mean error over the applicants\xspace they evaluate. The workers' mean error is $1.14 \pm 0.06$ in the $5$Q-group\xspace, and $0.84\pm 0.05$ in the $20$Q-group\xspace. We perform a univariate permutation test between the mean errors of workers in the $20$Q-group\xspace and those of workers in the $5$Q-group\xspace, using the difference in sample means as the test statistic. We reject the null hypothesis that the errors from the two groups have the same mean (one-sided $p$-value $<0.01$; Cohen's effect size $d=0.52$). This result indicates that evaluation in the $20$Q-group\xspace is more accurate than in the $5$Q-group\xspace, confirming Hypothesis~\ref{hypothesis-accuracy} that evaluators\xspace have better calibration when they see more applicants\xspace. For the $20$Q-group\xspace, we also separately compute each worker's mean error over each page of $5$ questions (that is, Q1-5, Q6-10, Q11-15, Q16-20). The mean error for each page is plotted in Figure~\ref{fig:calibration}.
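As a reference for the statistical test used above (and again below for the page-wise comparison), the following is a minimal sketch of a one-sided permutation test on the difference in sample means; the implementation is ours, not the analysis code used for the experiment:
\begin{verbatim}
import numpy as np

def perm_test_one_sided(a, b, n_perm=100_000, seed=None):
    """One-sided permutation p-value for mean(a) > mean(b),
    using the difference in sample means as the test statistic."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(a, float), np.asarray(b, float)
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabeling of the two groups
        count += pooled[:len(a)].mean() - pooled[len(a):].mean() >= observed
    return (count + 1) / (n_perm + 1)

# e.g., with per-worker mean errors errors_5q and errors_20q:
# p = perm_test_one_sided(errors_5q, errors_20q)
\end{verbatim}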
For the $20$Q-group\xspace, we also plot the error on each page using the answers reported right before the workers first turn to the next page (see the curve ``\grouptwentyinit''). The difference between the curves of the initial and final errors thus corresponds to the gain in calibration from workers correcting their answers to previous applicants\xspace after seeing applicants\xspace on later pages. First, we observe that such corrections notably decrease the error, especially for the first page. This observation further supports Hypothesis~\ref{hypothesis-accuracy} by showing that workers are able to use the information they see from applicants\xspace to perform corrections. Second, even after this correction, the error has a decreasing trend from earlier pages to later pages, suggesting that workers have limited ability to perform such corrections. Specifically, in the $20$Q-group\xspace, the (final) mean error for page $1$ is $0.95 \pm 0.06$, and the (final) mean error for page $4$ is $0.74 \pm 0.06$. We perform a univariate permutation test between the mean errors for page $1$ and page $4$, using the difference in sample means as the test statistic. We reject the null hypothesis that the errors for these two pages have the same mean (one-sided $p$-value $<0.01$; Cohen's effect size $d=0.34$). Third, as a sanity check, we observe that for page $1$, the mean error in the \grouptwentyinit curve is similar to the mean error of the $5$Q-group\xspace. This is expected, as the workers from the two groups have exactly the same information before the workers in the $20$Q-group\xspace turn to the second page. We observe the same qualitative trends in a previous version of the experiment, discussed in Appendix~\ref{app:previous_expt}. \begin{figure}[tb] \centering \includegraphics[width=0.69\linewidth]{figures/experiment.pdf} \caption{The mean error in estimating the percentile bins, for workers in the $5$Q-group\xspace (representing holistic allocation) and the $20$Q-group\xspace (representing segmented allocation). Error bars represent the standard error of the mean. } \label{fig:calibration} \end{figure} \subsubsection{Simulations.} The key observation from the crowdsourcing experiment is that seeing more applicants\xspace improves calibration. We now conduct additional simulations for a more quantitative understanding. We retain the single-attribute setting. We consider the following model for evaluators\xspace. When an evaluator\xspace is assigned $n$ applicants\xspace, it assigns the $\frac{n}{5}$ lowest-scoring applicants\xspace to the bin $0\hyphen20\%$, the next $\frac{n}{5}$ applicants\xspace to the bin $20\hyphen40\%$, etc. This is a natural model for evaluators\xspace because, as $n$ goes to infinity, the mean error on the reported bins approaches $0$. \begin{figure}[tb] \centering \includegraphics[width=0.69\linewidth]{figures/calibration.pdf} \caption{The mean error in calibration of a single evaluator\xspace, as a function of the number of applicants\xspace. Each point is computed over $1000$ runs (error bars are too small to be visible).} \label{fig:calibration_simulation}\label{float:calibration_l1_vs_n} \end{figure} We plot the mean error as a function of the number of applicants\xspace $n$ assigned to a single evaluator\xspace in Figure~\ref{fig:calibration_simulation}.
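The following sketch (ours, not the authors' simulation code) implements this binning evaluator\xspace with the power-law marginal defined above and reproduces the qualitative decay of the mean error with $n$:
\begin{verbatim}
import numpy as np

def binning_evaluator_error(n, delta=1.0, runs=1000, seed=None):
    """Mean |true bin - reported bin| for an evaluator who sorts
    their n assigned applicants into within-sample quintiles."""
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(runs):
        # power-law scores with P[Z >= t] = t^{-(1+delta)}
        x = (1.0 - rng.random(n)) ** (-1.0 / (1.0 + delta))
        ranks = np.argsort(np.argsort(x))            # 0 .. n-1
        reported = np.minimum(ranks * 5 // n, 4) + 1
        # true bin from the population percentile F(x)
        pct = 1.0 - x ** (-(1.0 + delta))
        true_bin = np.minimum((pct * 5).astype(int), 4) + 1
        errs.append(np.abs(true_bin - reported).mean())
    return float(np.mean(errs))

for n in (5, 20, 100, 500):
    print(n, round(binning_evaluator_error(n), 3))
\end{verbatim}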
The error decreases as the number of applicants\xspace $n$ increases, matching the experimental result and therefore providing additional evidence supporting Hypothesis~\ref{hypothesis-accuracy}. We empirically observe that as the number of applicants\xspace $n$ increases, the mean error decreases at a rate of $\frac{1}{\sqrt{n}}$. \subsection{Efficiency}\label{sec:efficiency} We study the adaptive allocation of effort in Hypothesis~\ref{hypothesis:efficiency} via simulations. \paragraph{Setting.} We consider $n=200$ applicants\xspace, and for simplicity, $d=2$ attributes assessed by two evaluators\xspace. In segmented allocation, each evaluator\xspace is assigned one attribute of all applicants\xspace. In holistic\xspace allocation, each evaluator\xspace is assigned both attributes of half of the applicants\xspace. The attribute values are generated from a power-law distribution with parameter $1$, with correlation $\sigma\in [0, 1]$ between the two attributes. To isolate the efficiency aspect from calibration errors, we assume that an evaluator\xspace always reports the true value of the attributes, namely $y_{ij} = x_{ij}$ for each $(i, j)$ pair. According to Hypothesis~\ref{hypothesis:efficiency}, holistic\xspace allocation provides the opportunity for an evaluator\xspace to decide whether to evaluate the second attribute of an applicant, based on the quality of the first attribute. For simplicity, we assume that in holistic\xspace allocation, each evaluator\xspace always reviews attribute $1$ of all applicants\xspace. Each evaluator\xspace then reviews attribute $2$ only on the applicants\xspace who have scored high on attribute $1$. Specifically, we assume that attribute $2$ is only evaluated on the $\tau$-fraction\footnote{ Selecting the top $\tau$-fraction requires knowledge about attribute $1$ of all the applicants that an evaluator\xspace is assigned. In practice, an evaluator\xspace may select the applicants whose attribute $1$ exceeds a certain real-valued threshold, which approximately has the same effect. } of the applicants\xspace receiving the top scores on attribute $1$, for a parameter $\tau\in (0, 1]$. Finally, from the applicants\xspace on which both attributes are evaluated, the best applicant\xspace is estimated as the one with the maximum mean score over the two attributes, namely $\argmax_{i} (y_{i 1} + y_{i 2})$. \begin{figure}[tb] \centering \includegraphics[width=0.69\linewidth]{figures/efficiency.pdf} \caption{ Top-$1$ accuracy for different fractions $\tau$ of the applicants\xspace evaluated for the second attribute, and various values of the correlation $\sigma$ between the two attributes. Each point is computed over $1000$ runs (error bars are too small to be visible). } \label{fig:cutoff} \end{figure} \paragraph{Simulations.} In Figure~\ref{fig:cutoff}, we compute the top-$1$ accuracy for different fractions $\tau$ and attribute correlations $\sigma$. When the correlation is $\sigma=1$ (see the blue curve), by definition evaluating only attribute $1$ achieves perfect accuracy, and there is no need to evaluate attribute $2$. When the correlation $\sigma$ is relatively high, we observe that relatively small values of $\tau$ yield significant savings in the total number of attributes evaluated, while the accuracy decreases only marginally. This observation validates Hypothesis~\ref{hypothesis:efficiency}, and we conclude that a higher correlation between the attributes allows more savings in holistic\xspace allocation.
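A sketch of this screening simulation (ours, mirroring the setup behind Figure~\ref{fig:cutoff}; evaluators\xspace are noiseless as assumed above):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def top1_accuracy(tau, sigma, n=200, delta=1.0, runs=1000, seed=None):
    """Holistic screening: attribute 1 for everyone, attribute 2
    only for the top tau-fraction on attribute 1."""
    rng = np.random.default_rng(seed)
    hits = 0
    cov = (1 - sigma) * np.eye(2) + sigma * np.ones((2, 2))
    for _ in range(runs):
        z = rng.multivariate_normal(np.zeros(2), cov, size=n)
        u = norm.cdf(z)                           # correlated uniforms
        x = (1.0 - u) ** (-1.0 / (1.0 + delta))   # power-law marginals
        true_best = np.argmax(x.mean(axis=1))
        k = max(1, int(np.ceil(tau * n)))
        shortlist = np.argsort(x[:, 0])[-k:]      # top tau-fraction, attr 1
        pick = shortlist[np.argmax(x[shortlist].mean(axis=1))]
        hits += pick == true_best
    return hits / runs
\end{verbatim}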
These results point to a tradeoff between efficiency and accuracy in holistic\xspace allocation -- namely, a smaller $\tau$ introduces savings but also more error. The specific point to pick on this tradeoff depends on the goals of the system designer. \subsection{Mitigating Bias}\label{sec:bias} To study Hypothesis~\ref{hypothesis-bias2}, we present a simple model for analyzing the effect of bias reduction. We present theoretical guarantees and simulation results that characterize the regimes under which segmented\xspace allocation results in more accurate and less biased evaluations than holistic\xspace allocation. Our results provide intuition on the effect of redistributing and reducing the impact of biased evaluators\xspace. \paragraph{Formulation.} Recall that $x_{ij}$ denotes the true value of applicant\xspace $i\in [n]$ on attribute $j\in [d]$. We assume that the applicants\xspace consist of two groups -- advantaged and disadvantaged -- where a fraction $\alpha\in [0, 1]$ of the applicants\xspace are from the disadvantaged group. We assume that a fraction $\lambda\in [0, 1]$ of the attributes are ``protected''. Each evaluator\xspace independently has probability $\gamma\in (0, 1)$ of being biased, in the following sense: An unbiased evaluator\xspace reports the (noiseless) true value $y_{ij} = x_{ij}$ for any applicant $i$ and any attribute $j$ that they are assigned, while a biased evaluator\xspace applies a multiplicative bias factor $\beta \in [0, 1)$ to the protected attributes of the disadvantaged applicants, and reports the true value otherwise. In other words, for attribute $j$ of applicant $i$, a biased evaluator\xspace reports \begin{align*} y_{ij}=\begin{cases} \beta x_{ij} & \text{ if $j$ protected and $i$ disadvantaged}\\ x_{ij} & \text{otherwise}. \end{cases} \end{align*} For ease of analysis, we consider a simple case of $d=2$ attributes, where the correlation between the two attributes is $\sigma=1$. That is, for each applicant $i$, the two attributes have identical values $x_{i 1} = x_{i 2}$. We hence use the shorthand $x_{i}$ to denote this value. We assume that the values $\{x_i\}_{i\in [n]}$ are generated i.i.d. from a continuous\footnote{ We consider continuous distributions for simplicity, so that the best applicant\xspace is uniquely defined with probability $1$. } distribution $\mathcal{D}$ supported on $[0, \infty)$, such as the power-law distribution. Let $\setapps_\textdisadvantage\subseteq [n]$ denote the set of $\alpha n$ disadvantaged applicants\xspace, and let $\setapps_\textadvantage\subseteq [n]$ denote the set of $(1-\alpha)n$ advantaged applicants\xspace. Denote the quality of the best applicant\xspace in the disadvantaged group by $\valdisadvantage^{\textmax}_\textdisadvantage \colon= \max_{i\in \setapps_\textdisadvantage} x_i$, and likewise denote $\valadvantage^{\textmax}_\textadvantage\colon= \max_{i\in \setapps_\textadvantage} x_i$. We compute the mean of the attribute scores for each applicant, and estimate the best applicant by selecting the one with the maximum mean score. Denote the expected top-$1$ error under holistic\xspace and segmented\xspace allocations by $\err_\texthol$ and $\err_\textseg$ respectively, formally defined as $\mathbb{P}\big(\argmax_{i\in [n]} x_i \ne \argmax_{i\in [n]} (y_{i 1} + y_{i 2})\big)$, using the scores $\{y_{ij}\}$ under the two allocation schemes respectively.
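Before turning to the theory, a Monte-Carlo sketch (ours) of $\err_\texthol$ and $\err_\textseg$ under this model with two evaluators\xspace, $\sigma=1$ and $\lambda=1$; it can be used to check the theoretical condition derived below numerically:
\begin{verbatim}
import numpy as np

def err_rates(n=20, delta=1.0, gamma=0.5, beta=0.0, alpha=0.5,
              runs=20_000, seed=None):
    """Monte-Carlo top-1 errors (holistic, segmented) for d = 2
    identical attributes (sigma = 1), both protected (lambda = 1)."""
    rng = np.random.default_rng(seed)
    err_hol = err_seg = 0
    for _ in range(runs):
        x = (1.0 - rng.random(n)) ** (-1.0 / (1.0 + delta))
        dis = rng.permutation(n) < alpha * n      # disadvantaged set
        best = np.argmax(x)
        biased = rng.random(2) < gamma            # two evaluators
        # holistic: evaluator 0 gets a random half, evaluator 1 the rest;
        # a biased evaluator discounts both attributes, so the mean is beta*x
        first_half = rng.permutation(n) < n // 2
        hit = np.where(first_half, biased[0], biased[1])
        y = np.where(dis & hit, beta * x, x)
        err_hol += np.argmax(y) != best
        # segmented: evaluator j scores attribute j of all applicants
        y1 = np.where(dis & biased[0], beta * x, x)
        y2 = np.where(dis & biased[1], beta * x, x)
        err_seg += np.argmax(y1 + y2) != best
    return err_hol / runs, err_seg / runs
\end{verbatim}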
\subsubsection{Theoretical results.} We focus on a simplified case of two evaluators\xspace, which, as we will see shortly, already illustrates the intricacy of the comparison. In this setting, \holistic allocation\xspace assigns each evaluator\xspace both attributes of half of the applicants; \segmented allocation\xspace assigns each evaluator\xspace one attribute of all applicants. We assume that the assignment of applicants\xspace and attributes to evaluators\xspace is uniformly at random. \begin{theorem}\label{prop:compare_seg_hol_two_attrs} Let the number of attributes be $d=2$. Let the fraction of disadvantaged applicants\xspace be $\alpha= 0.5$. Let the two attributes have identical values (that is, $x_i\colon= x_{i 1} = x_{i 2}$), sampled i.i.d. from a continuous distribution $\mathcal{D}$. Consider holistic\xspace and segmented\xspace allocations under two evaluators\xspace. \begin{enumerate}[label=(\alph*)] \item \label{item:attrs_one} Let $\lambda = 0.5$, that is, one of the two attributes is protected. Then for any bias factor $\beta\in [0, 1)$ and any evaluator\xspace bias probability $\gamma\in (0, 1)$, segmented allocation incurs a lower error than holistic\xspace allocation, that is, $ \err_\textseg \le \err_\texthol$. % \item \label{item:attrs_both} Let $\lambda = 1$, that is, both attributes are protected. Let $\beta=0$ (extreme downward bias against disadvantaged applicants\xspace). Then \begin{align} \err_\texthol - \err_\textseg = \frac{\gamma(1-\gamma)}{2} \left[4\cdot\Probbig{\valdisadvantage^{\textmax}_\textdisadvantage > 2\valadvantage^{\textmax}_\textadvantage}-1\right].\label{eq:prop_err_comparison} \end{align} Hence, for any $\gamma \in (0, 1)$, \segmented allocation\xspace incurs a lower error than \holistic allocation\xspace, if and only if \begin{align}\label{eq:condition} \Probbig{\valdisadvantage^{\textmax}_\textdisadvantage > 2 \valadvantage^{\textmax}_\textadvantage} > 0.25. \end{align} This condition~\eqref{eq:condition} depends on the number of applicants $n$ and the distribution $\mathcal{D}$, and is independent of the other problem parameters. In particular, for the power-law distribution with a constant parameter $\delta$, \segmented allocation\xspace is better than \holistic allocation\xspace for sufficiently large $n$, if and only if \begin{align} \delta < \frac{\log(3)}{\log(2)} - 1\approx 0.58.\label{eq:condition_powerlaw} \end{align} \end{enumerate} \end{theorem} The proof of this theorem is provided in Appendix~\ref{app:proof}. This theorem reveals that \segmented allocation\xspace is better than \holistic allocation\xspace in terms of accuracy over a large range of parameters, but not always. Despite the simplified settings considered in the theorem, the result illustrates how allocating biased evaluators\xspace differently leads to changes in accuracy. \subsubsection{Simulations.} \input{text_fig_bias} We study the effect of the set of parameters $(\delta, \sigma, \beta,\alpha,\lambda)$ in the model. Following the assumptions of~\Cref{prop:compare_seg_hol_two_attrs}, we consider two evaluators\xspace in our simulations. The proof of~\Cref{prop:compare_seg_hol_two_attrs} suggests that it suffices to consider one biased evaluator\xspace and one unbiased evaluator\xspace. We fix the number of applicants $n = 20$ and the number of attributes $d = 20$. To inspect the difference between holistic\xspace and segmented\xspace allocations, for ease of visualization, we vary two parameters at a time while keeping the others fixed.
For consistency, one varying parameter is always $\delta$ for the power-law distribution. We set the default parameter values as $\sigma=0.5$, $\beta = 0$, $\alpha=0.5$ and $\lambda = 1$, when they are not chosen as the parameter to be varied. The results are shown in Figure~\ref{fig:fairness} and discussed below. \paragraph{Effect of power-law parameter ($\delta$)} In Figure~\ref{fig:fairness}\subref{float:fairness_sigma}-\subref{float:fairness_lda}, we observe the general trend that both segmented\xspace and holistic\xspace allocations achieve higher accuracy under smaller values of $\delta$. A smaller $\delta$ means that the distribution has a heavier tail, so that the values of the applicants\xspace are more spread out. Hence, the best applicant\xspace has a more extreme, higher value compared to the other applicants\xspace, providing a stronger signal for the evaluation process and making the task easier. Two exceptions to this general trend are \holistic allocation\xspace in Figure~\ref{fig:fairness}\subref{float:fairness_sigma} and~\ref{fig:fairness}\subref{float:fairness_alpha}, where the accuracy is independent of $\delta$. In these two cases, we have $\lambda=1$ and $\beta=0$. Hence, when a biased evaluator\xspace is assigned a disadvantaged applicant in holistic\xspace allocation, all attributes ($\lambda=1$) are discounted to zero ($\beta=0$), making it impossible for disadvantaged applicants\xspace to be identified as the best regardless of their values, and thus the accuracy is independent of $\delta$. \paragraph{Effect of correlation ($\sigma$): Figure~\ref{fig:fairness}\subref{float:fairness_sigma}} In \holistic allocation\xspace, for the same reason that the accuracy is independent of $\delta$ (as previously explained), the accuracy is also independent of $\sigma$. In \segmented allocation\xspace, we observe that a higher correlation leads to a higher accuracy. This is because a higher (positive) correlation strengthens the signal for applicants\xspace. For example, consider the extreme case of $\sigma=1$. Then the same attribute value is replicated $d$ times for each applicant, improving robustness against randomness in the evaluation process due to bias. Comparing segmented\xspace and holistic\xspace allocations, we observe that segmented\xspace allocation performs better when $\sigma$ is high (more correlation) and $\delta$ is small (heavy tail in the distribution). The tradeoff between the two allocation schemes arises because segmented\xspace allocation always discriminates against disadvantaged applicants\xspace but to a lesser extent, whereas holistic\xspace allocation discriminates against disadvantaged applicants\xspace less often but to a greater extent. When the correlation $\sigma$ between the attributes is high, the gain from discounting only a fraction of the attributes (as opposed to all attributes) is more significant. Finally, note that in~\Cref{prop:compare_seg_hol_two_attrs} we set the correlation as $\sigma = 1$. Hence, the setting of~\Cref{prop:compare_seg_hol_two_attrs}\ref{item:attrs_both} corresponds to the topmost row in Figure~\ref{fig:fairness}\subref{float:fairness_sigma}. We observe that the sign of the comparison between the two schemes is consistent with the theoretical result, with a change-point at $\delta\approx 0.6$ in the right panel of Figure~\ref{fig:fairness}\subref{float:fairness_sigma}.
\paragraph{Effect of bias factor ($\beta$): Figure~\ref{fig:fairness}\subref{float:fairness_beta}} We observe that both allocation schemes have a higher accuracy when the value of $\beta$ is large. This is natural because a larger $\beta$ corresponds to less discrimination by the biased evaluators\xspace. Comparing the two schemes, segmented\xspace allocation is more advantageous when $\beta$ is larger. We reason that when $\beta$ is small (more discrimination), being discriminated against is very detrimental for a disadvantaged applicant\xspace (whether on one attribute under segmented\xspace allocation, or on both attributes under holistic\xspace allocation). Hence, holistic\xspace allocation performs better because the probability that a disadvantaged applicant\xspace is discriminated against (to any extent) is smaller. On the other hand, when $\beta$ is large (less discrimination), the advantage of segmented\xspace allocation of discounting only one attribute, as opposed to both attributes in holistic\xspace allocation, becomes more significant. \paragraph{Effect of fraction of disadvantaged applicants\xspace ($\alpha$): Figure~\ref{fig:fairness}\subref{float:fairness_alpha}} We observe that segmented allocation performs better in general when $\alpha$ is large. To reason about this effect, let us first consider the extreme case of $\alpha=1$. In this case, segmented allocation is better because it gives consistent treatment to all applicants. Namely, the biased evaluator\xspace discounts one attribute of all applicants. On the other hand, holistic\xspace allocation creates discrepancy between applicants\xspace, because only the disadvantaged applicants\xspace assigned to the biased evaluator\xspace are discounted. Moreover, note that the performance of segmented\xspace allocation is not monotonic in $\alpha$: For larger values of $\delta$, segmented\xspace allocation has the lowest accuracy when a large fraction, but not all, of the applicants are disadvantaged. This non-monotonicity of segmented\xspace allocation leads to the non-monotonicity in comparing the two schemes in the right panel of Figure~\ref{fig:fairness}\subref{float:fairness_alpha}. \paragraph{Effect of fraction of protected attributes ($\lambda$): Figure~\ref{fig:fairness}\subref{float:fairness_lda}} We observe that segmented allocation performs better when $\lambda$ is small. In this case, there is less discrimination under both allocations: Segmented\xspace allocation decreases the probability that a biased evaluator\xspace is assigned a protected attribute, whereas holistic\xspace allocation decreases the impact of a biased evaluator\xspace on an applicant\xspace. Our empirical observation aligns with the theoretical results: Comparing part~\ref{item:attrs_one} and part~\ref{item:attrs_both} of~\Cref{prop:compare_seg_hol_two_attrs} also suggests that segmented allocation performs better for smaller values of $\lambda$. \medskip In summary, there is a tradeoff where more segmentation means that the disadvantaged applicants are more likely to be consistently discriminated against, but to a lesser extent; the parameters tip this tradeoff in different directions.
We conclude that Hypothesis~\ref{hypothesis-bias2} does not capture the complete picture, as the benefit of segmented\xspace allocation depends on the specific values of the parameters.\footnote{ While we aim to provide visualization for the combined effect of the parameters $(\sigma, \alpha, \lambda, \beta, \delta)$, the 2-dimensional cross sections in Figure~\ref{fig:fairness} are not meant to provide the complete picture, and the trends we observe should not be interpreted as holding universally when the fixed parameters are set to arbitrary values. For example, when one parameter lies in a regime that strongly favors one allocation over the other, it is possible that changing the other parameters does not change the sign of the comparison, in contrast to the sign changes shown in Figure~\ref{fig:fairness}. } \section{Discussion}\label{sec:conclusion} In this work, we consider using segmented allocation as an alternative to the conventional holistic\xspace allocation, for applications such as hiring and admissions. We provide detailed discussions comparing the two allocation schemes, and present theoretical and experimental results focused on three aspects: calibration, efficiency and fairness. These results indicate the potential improvement by \segmented allocation\xspace on calibration, while also suggesting that \holistic allocation\xspace has potential benefits on efficiency. The two allocation schemes also distribute evaluators\xspace differently, leading to different impacts in terms of fairness. These results together suggest a tradeoff between holistic\xspace and segmented\xspace allocations (and the spectrum in between). The tradeoff and the choice of which allocation to use depend on the characteristics of specific applications and which dimensions are prioritized by the system designer. Immediate open problems include validating the remaining three hypotheses that are not analyzed in this paper, and extending the theoretical and simulation results to more general scenarios to improve our understanding of the bias considerations in Section~\ref{sec:bias}. For example, if each attribute of each application is evaluated by many evaluators\xspace, then it is natural to expect that the bias is averaged out more evenly across evaluators\xspace, and the discrepancy between holistic\xspace and segmented\xspace allocations becomes less prominent. There are also various other considerations, as well as open problems: \begin{itemize} \item \Segmented allocation\xspace requires grouping of attributes, and the system designer needs to do this grouping appropriately. For example, in the case of admissions, one may group test scores and GPAs as one attribute\xspace called ``scholarly performance''. In order to provide appropriate context to evaluators\xspace, one may also need to provide the same attributes to multiple evaluators\xspace. \item In addition to grouping the attributes, it is also possible to group the applicants\xspace. We have assumed that the applicants\xspace are distributed to evaluators\xspace uniformly at random. In reality, evaluators\xspace may have different expertise that makes them more suitable for reviewing a particular subset of the applicants\xspace. For example, in admissions, evaluators\xspace from the same educational background as the applicants\xspace may be more familiar with interpreting the schools and the GPAs. \item We have assumed for simplicity that the final score is computed by taking the mean over all attribute scores.
In practice, we may want to use different weights for different attributes, or even use non-linear functions. In some cases, the aggregation function may not be precisely specified by the system designer, but needs to be learned from past data. This problem of learning the aggregation function for evaluation has been studied in the specific context of peer review~\cite{noothigattu2018choosing}, and it is of interest to extend such results to more general applications. \item This work discusses a spectrum of choices in terms of the number of attributes and applicants\xspace assigned to each evaluator\xspace. An open problem of interest is to establish the optimal point(s) on this holistic\xspace-segmented\xspace spectrum, theoretically and practically, for any given specification of the application and its desiderata. \end{itemize} \noindent\textbf{Acknowledgments.} We thank Lily Laredo for input on writing the worker instructions for the crowdsourcing experiments. This work was supported by the CMU Block Center and NSF CAREER 1942124.
{ "timestamp": "2022-09-20T02:21:54", "yymm": "2209", "arxiv_id": "2209.08665", "language": "en", "url": "https://arxiv.org/abs/2209.08665" }
\section{Introduction} Topic segmentation is a fundamental NLP task with the goal of separating textual documents into coherent segments (consisting of one or more sentences), following the document's underlying topical structure. The structural knowledge obtained from topic segmentation has been shown to play a vital role in key NLP downstream tasks, such as document summarization \cite{mitra-etal-1997-automatic, riedl-biemann-2012-text, xiao-carenini-2019-extractive}, question answering \cite{oh07, dennis-2017-core} and dialogue modeling \cite{topic_dialogue_2020, ijcai2020-517}. This aim makes topic segmentation tightly connected to research areas concerned with understanding the latent structure of long and potentially complex text. \begin{figure} \centering \includegraphics[width=3.0in]{example_5.png} \caption{An example article about Cholinergic Urticaria (CU) sampled from the \textit{en\_disease} portion of the Wiki-Section dataset \cite{arnold-etal-2019-sector}. Left: discourse dependency structure predicted by the Sent-First discourse parser \cite{zhou-feng-2022-improve}.} \label{fig:wiki_example} \end{figure} Specifically, understanding the semantic and pragmatic underpinnings of a document can arguably support the task of separating continuous text into topical segments. To this end, discourse analysis and discourse parsing provide the means to understand and infer the semantic and pragmatic relationships underlying complete documents, well aligned with local text coherence and highly correlated with inter-sentential topical consistency, as shown in \citet{louis-nenkova-2012-coherence} and \citet{muangkammuen-etal-2020-neural}. With a variety of linguistic theories proposed in the past, such as the Rhetorical Structure Theory (RST) \cite{mann1988rhetorical}, the lexicalized discourse framework \cite{webber-etal-2003-anaphora} (underlying PDTB), and the Segmented Discourse Representation Theory (SDRT) \cite{asher1993reference, asher2003logics}, we follow the RST framework in this work (1) as we focus on monologue text (as compared to dialogue frameworks, such as SDRT) and (2) since RST postulates complete discourse trees spanning whole documents, directly aligned with the topical structure of complete documents \cite{huber2021predicting}. We further motivate the synergistic relationship between topic segmentation and discourse analysis/parsing in Figure~\ref{fig:wiki_example}, showing anecdotal evidence of the alignment between the document's topical structure and the respective RST-style discourse dependency graph. Starting from a sequence of sentences, the task of topic segmentation addresses the problem of splitting the given \textit{Wikipedia} article into an ordered set of topically coherent fragments (here: \texttt{T1}, \texttt{T2} and \texttt{T3}) by predicting topical boundaries. As shown in the example, the document discourse tree is indicative of the topical structure of the document, as discourse dependencies occur considerably more often within a topic segment than across topic segments. Given its significant influence on a variety of real-world tasks, topic segmentation is an active research area in the field of NLP. As such, modern neural methods for monologue topic segmentation have been proposed, formulating the task as a sentence-level sequence labeling problem, trained and evaluated on large-scale \textit{Wikipedia} datasets \cite{xing-etal-2020-improving, glavas-2020-two, barrow-etal-2020-joint, lo-etal-2021-transformer-pre}.
These Wikipedia articles are well-suited for the task of topic segmentation, providing natural section marks which can be reasonably used as ground-truth segment boundaries \cite{koshorek-etal-2018-text, arnold-etal-2019-sector}, superseding previously proposed unsupervised methods \cite{hearst-1997-text, galley-etal-2003-discourse, eisenstein-barzilay-2008-bayesian, Song2016DialogueSS}. Despite the significant improvements achieved by neural supervised topic segmentation models, it remains unclear if these topic segmenters effectively learn to cluster sentences into topically coherent pieces based on the (document-level) topical consistency, or solely exploit superficial patterns (e.g., simple linguistic cues) in the training domain. To address this challenge, in this paper, we propose a more discourse-aware neural topic segmentation model. We thereby inject above-sentence discourse structures into a basic topic segmenter to encourage the model to base its topic boundary predictions more explicitly on the topical consistency between sentences. More specifically, we propose to exploit a discourse dependency parser pre-trained on out-of-domain data to induce inter-sentential discourse dependency trees. Subsequently, we convert the dependency tree into a directed discourse graph with sentences as nodes and discourse dependencies as edges. With the generated discourse graph, a Graph Attention Network (GAT) \cite{velickovic2018graph} is used to encode sentences as discourse-contextualized representations by aggregating information from neighboring sentence nodes in the graph. Finally, the discourse-infused sentence representations are concatenated with standard encodings for segment boundary prediction. In our empirical study conducted on English evaluation datasets, we show that: ($i$) Injecting discourse structures can substantially improve the performance of the basic neural topic segmentation model on three datasets. ($ii$) Our novel, discourse-enhanced topic segmenter is more robust than the basic neural model in settings that require domain transfer, showing superior performance on four challenging real-world test sets and confirming its improved domain independence. ($iii$) Although our proposal achieves lower accuracy than a state-of-the-art segmenter sharing the same basic architecture, it is significantly more efficient in terms of parameter size and training/inference speed, which potentially makes it more favorable in real-world use. \section{Related Work} \paragraph{Topic Segmentation} aims to reveal important aspects of the semantic structure of a document by splitting a sequence of sentences into topically coherent textual units. Computational topic segmentation models can be broadly separated into supervised and unsupervised approaches. Early topic segmentation methods usually fall into the category of unsupervised approaches, mainly due to the prevalent data sparsity issue at the time. Predicting the coherence between sentences through shallow (surface-level) features, these unsupervised models reach only a limited understanding of the contextualized structure of documents, relying on easy-to-extract but barely effective measures of similarity between sentences (e.g., the degree of token overlap between two sentences) \cite{hearst-1997-text, eisenstein-barzilay-2008-bayesian}.
Improving on the unsupervised topic segmentation paradigm, researchers started to address this issue by introducing pre-trained neural language models (LMs), trained on massive datasets \cite{topic_dialogue_2020, solbiati2021unsupervised, xing-carenini-2021-improving}. Several works show that the signals captured in pre-trained LMs (e.g., BERT \cite{devlin2019bert}) are more indicative of topic relevance between sentences than early surface-level features. However, these proposed strategies of integrating BERT into the topic segmentation framework solely exploit BERT to induce dense encodings and further compute reciprocal sentence similarities. While this constitutes a reasonable first step, the considerable gap between the training objective of LMs and the topic segmentation task requires further efforts along this line of work \cite{sun-etal-2022-sentence}. More recently, the data sparsity issue has been alleviated by the proposal of large-scale corpora sampled from \textit{Wikipedia} (e.g., Wiki-727k \cite{koshorek-etal-2018-text} and Wiki-Section \cite{arnold-etal-2019-sector}), in which well-structured articles with their section marks are used as gold labels for segment boundaries. As a result, neural supervised topic segmenters started to gain attention by reaching greater effectiveness and efficiency compared to previously proposed unsupervised approaches. These supervised topic segmenters typically follow a common strategy which formulates the task as a sentence-level sequence labeling problem. More specifically, by assigning binary labels to each sentence, models infer the likelihood of a sentence being a topic segment boundary \cite{ koshorek-etal-2018-text, arnold-etal-2019-sector, barrow-etal-2020-joint, lo-etal-2021-transformer-pre}. However, we believe that current models, besides reaching promising performance, potentially favour simple linguistic cues over effective measurements of semantic cohesion, restricting their application to narrow domains. Some recent works have attempted to address this limitation by explicitly integrating coherence modeling components into segmenters \cite{xing-etal-2020-improving, glavas-2020-two}. However, compared to our objective in this work, these proposed coherence modeling strategies either (i) only take two adjacent sentences into account, limiting the additional module to extremely local contexts, or (ii) discriminate real documents from artificially ``incoherent'' texts, resulting in implicit and synthetic negative training samples and a heavy parameter footprint caused by modeling multiple tasks simultaneously. In contrast, we propose an effective method to integrate the document discourse (dependency) structure into neural topic segmentation frameworks, following the intuition that above-sentence discourse structures are indicative of text coherence and topical consistency, providing a more global and interpretable source of information for better topic transition prediction. \paragraph{Discourse Analysis and Parsing} analyze and generalize the underlying semantic and pragmatic structure of a coherent document (called a discourse). As an important upstream task in the field of NLP, discourse analysis proposes elaborate frameworks and theories to describe the textual organization of a document.
To this end, a variety of popular discourse theories have been proposed in the past, such as (among others) the Rhetorical Structure Theory (RST) \cite{mann1988rhetorical} and the lexicalized discourse framework \cite{webber2003anaphora} for monologues, as well as the Segmented Discourse Representation Theory (SDRT) \cite{asher1993reference, asher2003logics} for dialogues. Among these theories, the RST discourse theory postulates a single, complete discourse tree for monologue documents, while the lexicalized discourse framework only focuses on local discourse connectives within and between adjacent sentences. Focusing on the connection between discourse information and topic segmentation, we employ the RST discourse theory in this work, as it is most aligned with the requirement to capture topical coherence. Building on human-annotated discourse treebanks, a mix of traditional and neural discourse parsers have been proposed over the last decades, with traditional approaches mainly exploiting surface-level features through Support-Vector Machines (SVMs) \cite{hernault2010hilda, ji-eisenstein-2014-representation, wang-etal-2017-two} or Conditional Random Fields (CRFs) \cite{joty-etal-2015-codra, feng2014linear}. On the other hand, neural models achieve similar or superior results on RST discourse parsing, with models using either custom architectures \cite{yu-etal-2018-transition, liu2018learning} or pre-trained LMs (e.g. BERT \cite{zhou-feng-2022-improve}, RoBERTa \cite{guz-etal-2020-unleashing}, SpanBERT \cite{guz-carenini-2020-coreference}). \begin{figure*} \centering \includegraphics[width=6.3in]{model_4.png} \caption{The overall architecture of our discourse-infused topic segmentation model. } \label{fig:proposed_model} \end{figure*} In this work, we generate discourse dependency trees from the BERT-based neural dependency parser proposed in \citet{zhou-feng-2022-improve}, since: (i) The parser follows the intuition that information, and hence structures, in sentences are oftentimes ``self-contained''. Therefore, it predicts the interactions between EDUs of the same sentence in a first stage and subsequently predicts the inter-sentential discourse structures, which aligns well with our objective of sentence-level topic segmentation. (ii) The parser by \citet{zhou-feng-2022-improve} predicts dependency discourse structures directly, avoiding the potential errors caused by converting constituency structures into their respective dependency trees. \section{Methodology} As shown in Figure~\ref{fig:proposed_model}, our proposed discourse-aware neural topic segmentation model comprises two components: the \textit{Hierarchical Topic Segmenter} and \textit{Discourse Graph Modeling}, highlighted in green and red respectively. Discourse Graph Modeling further comprises a \textit{Discourse Graph Construction} and a \textit{Graph Modeling} component. \subsection{Basic Model: Hierarchical Topic Segmenter} \label{sec:basic_model} The basic architecture of our proposal is adopted from the basic model in \citet{xing-etal-2020-improving}, consisting of two hierarchical layers: First, a sentence encoder contextualizes individual sentences, followed by the second layer, conditioning sentences on the complete document.
Following the settings in \citet{xing-etal-2020-improving}, we adopt the attention BiLSTM architecture\footnote{We also considered the Transformer as the backbone of the contextualized encoder, but eventually chose the BiLSTM for its superior performance.} for each layer and enhance the encodings with pre-trained BERT embeddings. Formally, given a document $D$ as a sequence of $n$ sentences, the sentence encoder (bottom component in Figure~\ref{fig:proposed_model}) yields the embedding for each individual sentence. Based on the obtained encodings, the document-level contextualization layer returns an ordered set of hidden states $\bm{H} = \{ \bm{h}_1, ..., \bm{h}_{n} \}$. Next, a simple multilayer perceptron (MLP) with a final softmax activation serves as a binary topic boundary predictor based on a threshold $\tau$, tuned on the validation set. During training, we optimize the model with the cross-entropy loss, while at inference time, every sentence (except the last sentence\footnote{We remove the last sentence from the sequence for prediction since it is per definition the end of the last segment.}) with a probability $\geq \tau$ is considered the end of a segment. \subsection{Discourse Graph Modeling} Our goal is to inject inter-sentential discourse dependency structures into the task of topic segmentation. We believe that this additional, structural information is well aligned with the topical consistency between sentences, and hence suited to guide the prediction of topic transitions. To integrate the discourse information into the basic model described in section~\ref{sec:basic_model}, we first generate an above-sentence discourse dependency tree $\bm{T}_D$ for the document. Specifically, we utilize the discourse dependency parsing model proposed in \citet{zhou-feng-2022-improve}, which reaches state-of-the-art performance for discourse tree construction and relation type identification in multiple language settings. The ``Sent-First'' parser \cite{zhou-feng-2022-improve} further fits the aim of our proposal due to its two-stage approach, first generating discourse trees within sentences and subsequently combining sentence-level sub-trees. This hard constraint allows us to exclusively obtain above-sentence discourse structures, avoiding potentially leaky sub-trees \cite{joty-etal-2015-codra}. Regarding the discourse relations attached to each head-dependent pair (discourse dependency), we follow the observation in \citet{xu-etal-2020-discourse} that annotator agreement on rhetorical relation types is usually lower and more ambiguous, and leave relation labels for future work to avoid error propagation. In contrast to the original proposal in \citet{zhou-feng-2022-improve}, which trains and tests the dependency discourse parser on a single corpus (i.e., SciDTB \cite{yang-li-2018-scidtb}), we believe that a mixture of several diverse and publicly available discourse treebanks with different document lengths and domains can increase the parser's robustness on new and unseen genres. Therefore, we retrain the parser on a mixture of RST-DT\footnote{\url{catalog.ldc.upenn.edu/LDC2002T07}} \cite{carlson2002rst}, GUM\footnote{\url{corpling.uis.georgetown.edu/gum}} \cite{Zeldes2017}, SciDTB\footnote{ \url{https://github.com/PKUTANGENT/SciDTB}} \cite{yang-li-2018-scidtb} and COVID19-DTB\footnote{\url{https://github.com/norikinishida/biomedical-discourse-treebanks}} \cite{10.1162/tacl_a_00451}.
More specifically, we combine those discourse treebanks and randomly split the aggregated corpus into 80\% training, 10\% validation, 10\% test data. The parser retrained on our combined training portion achieves an Unlabeled Attachment Score (UAS) of 58.6 on the test portion. We show additional key dataset statistics for each treebank used in this paper in Table~\ref{tab:treebanks}. \begin{table} \centering \scalebox{0.9}{ \begin{tabular}{l | @{\space\space\space} c@{\space\space\space} c@{\space\space\space} c@{\space\space\space}} \specialrule{.1em}{.05em}{.05em} \textbf{Treebank} & \# of doc & \# sent/doc & \# edu/doc \\ \hline RST-DT & 385 & 22.5 & 56.6 \\ GUM & 150 & 49.3 & 114.2 \\ SciDTB & 1,355 & 5.3 & 14.1 \\ COVID19-DTB & 300 & 7.8 & 20.0 \\ \specialrule{.1em}{.05em}{.05em} \end{tabular} } \caption{\label{tab:treebanks} Key dataset statistics of the discourse treebanks used for retraining the Sent-First discourse parser \cite{zhou-feng-2022-improve}. } \end{table} After training the discourse parser to infer a discourse dependency tree $\bm{T}_{D}$ for document $D$, we convert the tree structure into a discourse graph $\bm{G}_{D}$ (as a binary matrix). Formally, we initialize the graph $\bm{G}_{D}$ as an $n \times n$ identity matrix $\bm{G}_D = \bm{I}_{n, n} $, connecting every node to itself. Afterwards, we fill in the remaining cells by assigning $\bm{G}_{D}[i][j] = 1$ iff $ \exists~ \bm{T}_{D}(i \rightarrow j)$, with $i$, $j$ indexing the head and dependent sentences in the document, respectively. Using the binary matrix representation of $\bm{G}_{D}$, we apply the multi-layer Graph Attention Network (GAT) \cite{velickovic2018graph} to update sentence encodings following the discourse graph. More specifically, with the discourse graph matrix $\bm{G}_{D}$ and the contextualized representations $\bm{H} = \{ \bm{h}_1, ..., \bm{h}_{n} \}$ described in section~\ref{sec:basic_model}, within each graph attentional layer, we perform self-attention on the sentence nodes. Taking the $l$th layer as an example, we compute the attention coefficient $\alpha^{l}_{ij}$ between sentence nodes $i$, $j$ as: \vspace{-0.5ex} \begin{equation} \alpha^{l}_{ij} = \mathrm{softmax}(e^{l}_{ij}) = \frac{\exp(e^l_{ij})}{\sum_{k \in \mathcal{N}_{i}} \exp(e^{l}_{ik})}, \label{eq:1} \end{equation} \vspace{-1.5ex} \begin{equation} e^{l}_{ij} = \mathrm{LeakyReLU}(\bm{a}_{l}^{T}[\bm{W}_{l}\bm{g}^l_i||\bm{W}_{l}\bm{g}^l_j]) \label{eq:2} \end{equation} where $\bm{W}_l$ and $\bm{a}_l$ are learnable parameters for layer $l$ and $^{T}$ is the transposition operation. $\mathcal{N}_i$ denotes the direct neighborhood of node $i$ in the graph ($G_D[i][\cdot] = 1$). As the node representation input of the first GAT layer ($l = 0$), we set $\bm{g}^{0}_{i} = \bm{h}_{i} \in \bm{H}$. Once the attention coefficients are obtained, we compute the intermediate node representation $\bm{z}_{i}^{l}$ for sentence node $i$ at layer $l$ by aggregating information from neighboring nodes as: \begin{equation} \bm{z}^{l}_{i} = \sum_{j \in \mathcal{N}_{i}}\alpha^{l}_{ij}\bm{W}_{l}\bm{g}^l_j \label{eq:3} \end{equation} Following \citet{huang-etal-2020-grade}, we combine the intermediate node representation $\bm{z}_{i}^{l}$ with the input of this layer $\bm{g}_{i}^{l}$ to obtain the updated node representation $\bm{g}_{i}^{l+1}$ as the input for the next layer: \begin{equation} \bm{g}^{l+1}_{i} = \mathrm{ELU}(\bm{g}^l_i + \bm{z}_{i}^{l}) \label{eq:4} \end{equation} where ELU denotes an exponential linear unit \cite{Clevert2016FastAA}.
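To make the graph construction and the update rule in Eqs.~\eqref{eq:1}--\eqref{eq:4} concrete, the following is a single-head PyTorch sketch (our own illustration, not the released code; the multi-head version used in our experiments combines several such heads):
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

def discourse_graph(n, dependencies):
    """G_D: identity (self-loops) plus head -> dependent edges."""
    G = torch.eye(n)
    for head, dep in dependencies:  # edges of the dependency tree
        G[head, dep] = 1.0
    return G

class DiscourseGATLayer(nn.Module):
    """One single-head GAT layer following Eqs. (1)-(4)."""
    def __init__(self, dim):
        super().__init__()
        self.W = nn.Linear(dim, dim, bias=False)            # W_l
        self.a = nn.Parameter(torch.randn(2 * dim) * 0.1)   # a_l
        self.leaky = nn.LeakyReLU(0.2)

    def forward(self, g, G):
        # g: (n, dim) node states g^l; G: (n, n) binary matrix G_D
        Wg = self.W(g)
        d = Wg.size(1)
        s = Wg @ self.a[:d]                       # a^T [W g_i || . ]
        t = Wg @ self.a[d:]                       # a^T [ . || W g_j]
        e = self.leaky(s.unsqueeze(1) + t.unsqueeze(0))       # Eq. (2)
        e = e.masked_fill(G == 0, float("-inf"))  # restrict to N_i
        alpha = torch.softmax(e, dim=-1)          # Eq. (1)
        z = alpha @ Wg                            # Eq. (3)
        return F.elu(g + z)                       # Eq. (4)
\end{verbatim}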
With the output $\bm{g}_i$ from the last layer of the GAT, we concatenate it with $\bm{h}_i$ and feed $[\bm{h}_i ; \bm{g}_{i}]$ into the predictor layer for segment boundary prediction. \section{Experiments} In order to quantitatively evaluate the effectiveness, generality and efficiency of our proposal, we conduct three sets of experiments to compare our topic segmentation approach against a variety of baselines and previous models. Namely, we assess the performance of our model in regard to the \textit{Intra-Domain Segment Inference Performance} and the \textit{Domain Transfer Segment Inference Performance}, and conduct an additional \textit{Efficiency Analysis}. \subsection{Datasets} \subsubsection{Intra-Domain Datasets} For the set of intra-domain segment inference experiments, we train and test models within the same domain (here: on the same corpus). We thereby choose three diverse corpora (see Table~\ref{tab:stats_intra} for more details) for the intra-domain evaluation: \paragraph{Choi \cite{choi-2000-advances}.} This corpus consists of 920 articles artificially generated by randomly combining passages from the Brown corpus. The datapoints in this dataset are not human-written, so we use this corpus solely for a preliminary performance assessment of topic segmentation models, with an 80\% (train)/10\% (dev)/10\% (test) data split. \begin{table} \centering \scalebox{0.93}{ \begin{tabular}{l | @{\space\space\space} c@{\space\space\space\space\space} c@{\space\space\space\space\space} c@{\space\space\space\space} } \specialrule{.1em}{.05em}{.05em} \textbf{Dataset} & \# of doc & \# sent/seg & \# seg/doc \\ \hline CHOI & 920 & 7.4 & 10.0 \\ RULES & 4,461 & 7.4 & 16.0 \\ SECTION & 21,376 & 7.2 & 7.9 \\ \specialrule{.1em}{.05em}{.05em} \end{tabular} } \caption{\label{tab:stats_intra} Statistics of the datasets used in the intra-domain experiments.} \end{table} \paragraph{Rules \cite{bertrand2018hall}.} This corpus consists of 4,461 documents about regulation discussions published in the Federal Register\footnote{\url{https://www.govinfo.gov/}} by U.S. federal agencies. Since each paragraph is about one particular regulation and all regulations covered by one document are under the same category, we deem it a reasonably coherent data source for topic segmentation evaluation, with the paragraph breaks as ground-truth segment boundaries. We split this dataset into training, validation and test sets with the default 80\%, 10\%, 10\% data split. \paragraph{Wiki-Section (Section) \cite{arnold-etal-2019-sector}.} This corpus originally contains Wikipedia articles in both English and German. The English portion of the dataset, which we use for our intra-domain experiment, consists of around 3.6k articles about diseases and 19.5k articles about cities around the world. After filtering out problematic samples with incorrect sentence segmentation (detected by mismatched counts between sentences and labels), the resulting dataset covers 21,376 articles with the highest-level section marks as ground-truth segment boundaries. We follow the setting in \citet{arnold-etal-2019-sector} by splitting the dataset into 70\% training, 10\% validation and 20\% test data.
\subsubsection{Domain Transfer Datasets} To better evaluate models' robustness in cases where a domain shift is present (called ``domain transfer segment inference''), we apply the topic segmenters trained on Wiki-Section to four small corpora heavily deviating from the training corpus (see Table~\ref{tab:stats_transfer} for more details): \\ \vspace{-2ex} \\ \textbf{Wiki-50 \cite{koshorek-etal-2018-text}} consists of 50 Wikipedia articles randomly sampled from the latest English Wikipedia dump. There is no overlap between this dataset and Wiki-Section. \\ \vspace{-2ex} \\ \textbf{Cities \cite{chen-etal-2009-global}} consists of 100 Wikipedia articles about cities. There is no overlap between this dataset and Wiki-Section, even though the theme of this dataset is close to the portion of city articles in Wiki-Section. \\ \vspace{-2ex} \\ \textbf{Elements \cite{chen-etal-2009-global}} consists of 118 Wikipedia articles on chemical elements. \\ \vspace{-2ex} \\ \textbf{Clinical \cite{malioutov-barzilay-2006-minimum}} consists of 227 chapters in a clinical book. The subsection marks within each chapter are deemed ground-truth segment boundaries. \begin{table} \centering \scalebox{0.93}{ \begin{tabular}{l | @{\space\space\space} c@{\space\space\space\space} c@{\space\space\space\space} c@{\space\space\space}} \specialrule{.1em}{.05em}{.05em} \textbf{Dataset} & \# of doc & \# sent/seg & \# seg/doc \\ \hline WIKI-50 & 50 & 13.6 & 3.5 \\ Cities & 100 & 5.2 & 12.2 \\ Elements & 118 & 3.3 & 6.8 \\ Clinical & 227 & 28.0 & 5.0 \\ \specialrule{.1em}{.05em}{.05em} \end{tabular} } \caption{\label{tab:stats_transfer} Statistics of the datasets used in the domain transfer experiments.} \end{table} \subsection{Experimental Design} \paragraph{Baselines:} We directly compare our proposed discourse-aware topic segmentation model (called \textbf{Basic Model + Discourse}) with the following unsupervised and supervised baselines: \\ \vspace{-2ex} \\ \textbf{- BayesSeg} \cite{eisenstein-barzilay-2008-bayesian}: This unsupervised method makes segmentation predictions by situating the lexical cohesion of text in a Bayesian framework. A text span produced by a distinct lexical distribution is recognized as a coherent topic segment.\\ \vspace{-2ex} \\ \textbf{- GraphSeg} \cite{glavas-etal-2016-unsupervised}: This unsupervised method derives semantically coherent segments through reasoning on a semantic relatedness graph constructed via greedy lemma alignment.\\ \vspace{-2ex} \\ \textbf{- TextSeg} \cite{koshorek-etal-2018-text}: This supervised neural topic segmenter adopts a hierarchical neural sequence labeling framework with a BiLSTM as the main architecture of each layer. The basic model used in our paper (described in section~\ref{sec:basic_model}) is an effective extension of this approach. \\ \vspace{-2ex} \\ \textbf{- Sector} \cite{arnold-etal-2019-sector}: This is a supervised neural topic segmenter extended from \textit{TextSeg} by adding an auxiliary layer for sentence topic label prediction.
The learned intermediate topic embeddings for sentences are directly utilized for segment boundary inference.\\ \vspace{-2ex} \\ \textbf{- Transformer} \cite{glavas-2020-two}: This is a supervised neural topic segmenter consisting of two hierarchically connected Transformer networks for sentence encoding and sentence contextualization respectively.\\ \vspace{-2ex} \\ \textbf{- Basic Model + Context} \cite{xing-etal-2020-improving}: This is a top-performing neural topic segmenter which shares the same basic architecture with our proposal. The approach improves the \textbf{context modeling} capacity of the plain basic model by adding an auxiliary coherence prediction module and restricted self-attention. \paragraph{Evaluation Metrics:} We use the $P_k$ error score\footnote{We also considered \textit{windiff} \cite{pevzner-hearst-2002-critique} as another evaluation metric. Since it was highly correlated with $P_k$, we omit it and only present performance by $P_k$ to better compare with results reported in previous works.} \cite{Beeferman1999} for our intra-domain and domain transfer segment inference evaluations. The metric measures the probability that a pair of sentences located at the two ends of a $k$-sized sliding window over the document is incorrectly classified with respect to belonging to the same segment or not. $k$ is set to half of the average true segment size of the document. Since $P_k$ is a penalty metric, lower values indicate better performance. Besides the $P_k$ measurement, we further quantitatively analyze models' efficiency in two respects, model size and model speed, evaluating the number of learnable parameters and the batches/documents processed per second during training/inference. \paragraph{Implementation Details:} For the hierarchical topic segmenter (our basic model), we adopt the default setting in \citet{xing-etal-2020-improving}, with GoogleNews word2vec ($d = 300$) as initial word embeddings and the contextualized representation of the special token \texttt{[CLS]} ($d = 768$) from \texttt{bert-base-uncased} as initial sentence embeddings. All BiLSTM layers have a hidden state size of 256. For the discourse graph modeling component, the number of GAT layers is set to 2 through validation and the number of heads is set to 4 as in \cite{velickovic2018graph}. The input and output dimensions of each layer are 256. Training uses Adam with $lr = 10^{-3}$ and batch size = 8. Early stopping is applied within 10 epochs of model training, and the boundary prediction threshold $\tau$ is tuned over the validation set of each corpus we use for intra-domain model evaluation.
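For reference, a minimal sketch of the $P_k$ computation (our own implementation following the definition above, not the evaluation script used in our experiments):
\begin{verbatim}
import numpy as np

def p_k(true_bounds, pred_bounds, n_sents, k=None):
    """P_k error; *_bounds hold indices of segment-final sentences
    (the last sentence of the document is excluded)."""
    def seg_ids(bounds):
        ids = np.zeros(n_sents, dtype=int)
        # segment id increments right after each boundary sentence
        ids[1:] = np.cumsum([i in bounds for i in range(n_sents - 1)])
        return ids
    t, p = seg_ids(set(true_bounds)), seg_ids(set(pred_bounds))
    if k is None:  # half of the average true segment size
        k = max(1, round(n_sents / (2 * (len(true_bounds) + 1))))
    disagree = [(t[i] == t[i + k]) != (p[i] == p[i + k])
                for i in range(n_sents - k)]
    return float(np.mean(disagree))
\end{verbatim}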
\subsection{Intra-Domain Segment Inference} \label{sec:intra_domain} \begin{table} \centering \scalebox{0.91}{ \begin{tabular}{l | c c c | c} \specialrule{.08em}{.05em}{.05em} \rowcolor{Gray} \textbf{Dataset} & \textbf{Choi} & \textbf{Rules} & \textbf{Section} & \textbf{RST-DT} \\ \hline Random & 49.4\textsuperscript{\space} & 50.6\textsuperscript{\space} & 51.3\textsuperscript{\space} & \cellcolor{green!15}40.5\textsuperscript{\space} \\ \hline BayesSeg & 20.8\textsuperscript{\space} & 41.5\textsuperscript{\space} & 39.5\textsuperscript{\space} & \cellcolor{green!15}37.5\textsuperscript{\space} \\ GraphSeg & 6.6\textsuperscript{\space} & 39.3\textsuperscript{\space} & 44.9\textsuperscript{\space} & \cellcolor{green!15}58.7\textsuperscript{\space} \\ TextSeg & 1.0\textsuperscript{\space} & 7.7\textsuperscript{\space} & 12.6\textsuperscript{\space} & \cellcolor{green!15}26.9\textsuperscript{\space} \\ Sector & --\textsuperscript{\space} & --\textsuperscript{\space} & 12.7\textsuperscript{\space} & \cellcolor{green!15}--\textsuperscript{\space}\\ Transformer & 4.8\textsuperscript{\space} & 9.6\textsuperscript{\space} & 13.6\textsuperscript{\space} & \cellcolor{green!15}--\textsuperscript{\space}\\ \hline Basic Model& 0.81\textsuperscript{\space} & 7.0\textsuperscript{\space} & 11.3\textsuperscript{\space} & \cellcolor{green!15}26.9\textsuperscript{\space} \\ +Context & \textbf{0.54}\textsuperscript{\space} & \textbf{5.8}\textsuperscript{\space} & \textbf{9.7}\textsuperscript{\space} & \cellcolor{green!15}\underline{25.4}\textsuperscript{\space}\\ \cellcolor{blue!10}+Discourse & \cellcolor{blue!10}\underline{0.59}\textsuperscript{\space} & \cellcolor{blue!10}\underline{6.1}\textsuperscript{\space} & \cellcolor{blue!10}\underline{10.2}\textsuperscript{\space} & \cellcolor{blue!10}\textbf{24.8}\textsuperscript{\space}\\ \specialrule{.08em}{.05em}{.05em} \end{tabular} } \caption{\label{tab:res_intra_domain} $P_k$ ($\downarrow$) error score on three corpora for the intra-domain experiment. Results in \textbf{bold} and \underline{underlined} indicate the best and second best performance across all comparisons. The row in \textcolor{blue!45}{blue} shows the results achieved by our proposal. The column in \textcolor{green!85}{green} shows the results for RST-DT paragraph break prediction with gold discourse structures integrated.} \end{table} We report our results of the intra-domain segment inference on the Choi, Rules and Wiki-Section datasets in Table~\ref{tab:res_intra_domain}. For better performance comparison, the table is subdivided into three sub-tables: the random baseline, previously proposed approaches, and models built on top of the basic model we use. We observe that the basic model without any additional components already outperforms alternative supervised and unsupervised segmenters. With the above-sentence discourse dependency information injected, as proposed in this paper, the method (named +Discourse) further improves the performance over the basic model by a notable margin across all three corpora. We further find that our proposed approach does not achieve superior performance compared to the basic model enhanced with the context modeling strategy (+Context) in \citet{xing-etal-2020-improving}. We believe that a possible explanation for this under-performance could be upstream parsing errors of the discourse dependency parser when applied out-of-domain, which oftentimes severely impair the parsing performance \cite{huber-carenini-2019-predicting}.
Therefore, we conduct an additional experiment on RST-DT due to the availability of gold discourse structures annotated by humans for this corpus. With no human-annotated topic segment boundaries at hand, we use the paragraph breaks contained in RST-DT articles as the ground truth for training and testing of topic segmentation models. Our results in Table~\ref{tab:res_intra_domain} show that the quality of the discourse structures is positively correlated with the size of the improvement achieved by our proposal. In this case, the upper bound achieved by integrating gold discourse structures can even outperform the basic model enhanced by context modeling (+Context). \subsection{Domain Transfer Segment Inference} \label{sec:domain_transfer} \begin{table} \centering \scalebox{0.825}{ \begin{tabular}{l | c c c c} \specialrule{.1em}{.05em}{.05em} \rowcolor{Gray} \textbf{Dataset} & \multicolumn{1}{c}{\textbf{Wiki-50}} & \multicolumn{1}{c}{\textbf{Cities}} & \multicolumn{1}{c}{\textbf{Elements}} & \multicolumn{1}{c}{\textbf{Clinical}} \\ \hline Random & 52.7\textsuperscript{\space} & 47.1\textsuperscript{\space} & 50.1\textsuperscript{\space} & 44.1\textsuperscript{\space} \\ \hline BayesSeg & 49.2\textsuperscript{\space} & 36.2\textsuperscript{\space} & \textbf{35.6}\textsuperscript{\space} & 57.2\textsuperscript{\space} \\ GraphSeg & 63.6\textsuperscript{\space} & 40.0\textsuperscript{\space} & 49.1\textsuperscript{\space} & 64.6\textsuperscript{\space} \\ TextSeg & 28.5\textsuperscript{\space} & 19.8\textsuperscript{\space} & 43.9\textsuperscript{\space} & 36.6\textsuperscript{\space} \\ Sector & 28.6\textsuperscript{\space} & 33.4\textsuperscript{\space} & 42.8\textsuperscript{\space} & 36.9\textsuperscript{\space}\\ Transformer & 29.3\textsuperscript{\space} & 20.2\textsuperscript{\space} & 45.2\textsuperscript{\space} & 35.6\textsuperscript{\space}\\ \hline Basic Model& 28.7\textsuperscript{\space} & 17.9\textsuperscript{\space} & 43.5\textsuperscript{\space} & 33.8\textsuperscript{\space} \\ +Context & \textbf{26.8}\textsuperscript{\space} & \textbf{16.1}\textsuperscript{\space} & \underline{39.4}\textsuperscript{\space} & \textbf{30.5}\textsuperscript{\space}\\ \cellcolor{blue!10}+Discourse & \cellcolor{blue!10}\textbf{26.8}\textsuperscript{\space} & \cellcolor{blue!10}\underline{16.9}\textsuperscript{\space} & \cellcolor{blue!10}41.1\textsuperscript{\space} & \cellcolor{blue!10}\underline{31.8}\textsuperscript{\space}\\ \specialrule{.1em}{.05em}{.05em} \end{tabular} } \caption{\label{tab:res_transfer} $P_k$ ($\downarrow$) error score on four test corpora for the domain transfer experiment. Results in \textbf{bold} and \underline{underlined} indicate the best and second best performance across all comparisons. The row highlighted in \textcolor{blue!45}{blue} shows the results achieved by our proposal.} \end{table} Table~\ref{tab:res_transfer} presents the performance of simple baselines, previously proposed models and our new approach on the domain transfer task. Similar to the intra-domain segment inference, the Basic Model+Context approach still achieves the best performance across all testing domains except Elements, on which the unsupervised BayesSeg performs best. However, our +Discourse strategy still leads to improvements over the basic model, and achieves comparable performance to the best model (+Context) on Wiki-50 and Cities. We believe this gives evidence that injecting discourse dependency structures has the potential to enhance the generality of topic segmentation models.
\subsection{Efficiency Analysis} Table~\ref{tab:res_efficiency} compares the efficiency of the top two models: our proposed approach (Basic Model+Discourse) and Basic Model+Context. The experiments for these systems were carried out on an Nvidia Tesla V100 16G GPU card. We observe that our strategy of injecting discourse dependency structures improves the model's performance in the intra-domain and domain transfer settings with a smaller increase in model size and a smaller loss of speed than +Context. More specifically, adding our discourse graph modeling component on top of the basic model introduces 65\% more learnable parameters, while the context modeling components in \citet{xing-etal-2020-improving} cause a 127\% parameter increase. On the other hand, discourse graph modeling slows down model training and inference only slightly, by 21\% and 7.7\% respectively, while the more complex context modeling significantly slows them down by 78\% and 46\%. Together with the previous results on effectiveness, this suggests that our proposed system is the better option in practical settings where efficiency is critical. Additionally, we conduct the same set of experiments for the model with both the context modeling module and our proposed discourse structure integration (Basic Model+Context+Discourse). The performance of this model always falls between that of +Context and +Discourse individually, but with the worst efficiency as measured by model size and speed. \begin{table} \centering \scalebox{0.87}{ \begin{tabular}{l | c c c} \specialrule{.1em}{.05em}{.05em} \rowcolor{Gray} & \multicolumn{1}{c}{\textbf{\# Params $\downarrow$}} & \multicolumn{1}{c}{\textbf{T-Speed $\uparrow$}} & \multicolumn{1}{c}{\textbf{I-Speed $\uparrow$}} \\ \hline Basic Model & 4.82M\textsuperscript{\space} & 6.90\textsuperscript{\space} & 35.58\textsuperscript{\space} \\ \hline +Context & 10.93M\textsuperscript{\space} & 1.49\textsuperscript{\space} & 19.23\textsuperscript{\space} \\ +Discourse & \textbf{7.97M}\textsuperscript{\space} & \textbf{5.44}\textsuperscript{\space} & \textbf{32.85}\textsuperscript{\space} \\ \specialrule{.1em}{.05em}{.05em} \end{tabular} } \caption{\label{tab:res_efficiency} The efficiency comparison between our proposal and the method proposed in \citet{xing-etal-2020-improving} on the Wiki-Section corpus. These two models share the same basic segmentation framework. \textbf{T-Speed} refers to the training speed, measured as the number of batches processed per second during the training stage. \textbf{I-Speed} refers to the inference speed, measured as the number of documents processed per second during the inference stage.} \end{table} \section{Conclusion and Future Work} In this paper, we present a neural topic segmentation model with injection of above-sentence discourse dependency structures inferred by a state-of-the-art discourse dependency parser. Different from previously proposed methods, our segmenter leverages the discourse signal by encoding the topical consistency between sentences from a more global and interpretable point of view. Experiments in multiple settings (intra-domain, domain transfer and efficiency comparison) show that our system achieves comparable performance to one of the current top-performing topic segmenters, with a much smaller model size increase and speed degradation.
In the near future, we plan to investigate the synergy between topic segmentation and discourse parsing more comprehensively, by incorporating the types of inter-sentential rhetorical relations and analyzing whether and how this discourse knowledge can enhance supervised topic segmentation frameworks. In the long run, we intend to explore the possibility for discourse parsing to benefit segment topic labeling, another important task usually coupled with topic segmentation to provide coarse-grained structural information for documents. In particular, we believe discourse parsing can potentially enhance the key phrase extraction step in segment topic labeling, given the significant improvement it brings to the related task of named entity recognition (NER) \cite{jie-lu-2019-dependency}. \section*{Acknowledgments} We thank the anonymous reviewers and the UBC-NLP group for their insightful comments and suggestions. This research was supported by the Language \& Speech Innovation Lab of Cloud BU, Huawei Technologies Co., Ltd.
{ "timestamp": "2022-09-20T02:20:42", "yymm": "2209", "arxiv_id": "2209.08626", "language": "en", "url": "https://arxiv.org/abs/2209.08626" }
\section{Introduction} Since the early 1990s, we know that dissipative structures in hydrodynamic turbulence are vortex tubes \citep{SJO90,VM91}. Their typical size is of the order of the Kolmogorov length. In magnetohydrodynamics (MHD), the dissipative structures are magnetic flux tubes \citep{Nor+92,Bra+96, Mof+94, PP98}. Their thickness has been estimated to scale with the magnetic Prandtl number $P_{\rm m}=\nu/\eta$, i.e., the ratio of the kinematic viscosity $\nu$ to the magnetic diffusivity $\eta$. \cite{BPS95} estimated their size in terms of the gradient matrix $\mbox{\boldmath $\nabla$} {}\hat{\bm{B}}$ of the unit vector $\hat{\bm{B}}=\bm{B}/|\bm{B}|$ of the magnetic field $\bm{B}$ and found that it scales like $P_{\rm m}^{-1/2}$ relative to the Kolmogorov length scale. The inverse length scale of the magnetic structures can be calculated as the rms value of $\mbox{\boldmath $\nabla$} {}\hat{\bm{B}}$, i.e., $k_B=\bra{|\mbox{\boldmath $\nabla$} {}\hat{\bm{B}}|^2}^{1/2}$. The simulations of \cite{BPS95} were for the case of a convection-driven dynamo in the presence of rotation and compressibility, but similar results were later also obtained by \cite{Scheko+04} for a small-scale dynamo in homogeneous incompressible turbulence for $P_{\rm m}\leq1$. They also emphasized that a steeper dependence on $P_{\rm m}$ is expected for $P_{\rm m}\ll1$. The mechanisms of small-scale dynamo action differ depending on the magnetic Prandtl number. For $P_{\rm m} \gg 1$, self-excitation of magnetic fluctuations is caused by the random stretching of the magnetic field by a smooth velocity field; see the analytical studies by \cite{Kaz68}, \cite{ZKR90}, \citet[][hereafter KA]{Kulsrud+Anderson92}, and \cite{SSF12}. For $P_{\rm m} \ll 1$, the small-scale dynamo is driven by velocity fluctuations at the resistive scale, which is located in the inertial range \citep{Kaz68,RK97,BC04,AH07,KR12,AMV19}. In particular, KA found that, for large values of $P_{\rm m}$, the magnetic energy spectrum is expected to be of the form \begin{equation} E_{\rm M}(k,t)\propto e^{2\gamma t} k^{3/2} K_0\left(k/k_\eta^{\rm KA} \right), \label{EMk0} \end{equation} where $K_0$ is the Macdonald function of order zero (or the modified Bessel function of the second kind), and $k_\eta^{\rm KA}$ is \begin{equation} k_\eta^{\rm KA}=(4\gamma/15\eta)^{1/2}, \label{ketaKA} \end{equation} where $\gamma$ is the growth rate of the magnetic field.\footnote{Note that the symbol $\gamma$ used in KA is 3/8th of the growth rate.} This provides a method for calculating a relevant wavenumber characterizing the scale of the structures that is very different from $k_B$. The question of characteristic length scales in a small-scale dynamo has continued to attract attention and has been investigated in more detail by \cite{CR09} with applications to the intergalactic medium. More recently, \cite{Kriel+22} confirmed the $P_{\rm m}^{-1/2}$ scaling for $1\leq P_{\rm m}\leq 260$ also for the kinematic phase of the dynamo. The small-scale properties of interstellar turbulence can be assessed through interstellar scintillation measurements of pulsars \citep{Cordes+85, Rickett90, Armstrong+95, Bhat+04, Scalo+Elmegreen04}. Those measurements suggest that the dissipative structures of MHD turbulence are sheet-like with an inner scale down to $300\,{\rm km}$ \citep{Bhat+04}. However, more detailed measurements would be needed to determine the precise nature of the smallest dissipative structures \citep{Xu+Zhang17}.
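As an illustration, the two characteristic wavenumbers introduced above can be estimated numerically as in the following sketch (our own illustration; the array layout, the uniform grid, and the use of \texttt{np.gradient} are assumptions, not the diagnostics of the codes cited above).

\begin{verbatim}
import numpy as np

def k_B(B, dx):
    # k_B = <|grad Bhat|^2>^{1/2} for a field B of shape (3, N, N, N)
    # on a uniform grid with spacing dx; np.gradient uses one-sided
    # differences at the box boundaries, which this sketch ignores.
    Bhat = B / np.sqrt((B**2).sum(axis=0))
    grad2 = np.zeros_like(Bhat[0])
    for comp in Bhat:              # three components of Bhat
        for axis in range(3):      # three spatial directions
            grad2 += np.gradient(comp, dx, axis=axis)**2
    return np.sqrt(grad2.mean())

def k_eta_KA(gamma, eta):
    # resistive wavenumber of KA: k_eta^KA = (4 gamma / 15 eta)^{1/2}
    return np.sqrt(4.0 * gamma / (15.0 * eta))
\end{verbatim}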
The goal of the present paper is to compare the relations between different length scales in small-scale dynamos. In addition to the values of $k_\eta^{\rm KA}$ and $k_B$ discussed above, we also determine a wavenumber $k_\eta$ that describes the resistive cutoff of the spectrum, and is analogous to the wavenumber $k_\nu$ based on the Kolmogorov (viscous) scale. \cite{Kriel+22} used a similar prescription, but did not compare with other definitions. Note that here $k_\eta$ is not calculated from the dynamo growth rate. Following earlier work \citep{Bra+18}, we consider weakly compressible turbulence with an isothermal equation of state and a constant sound speed $c_{\rm s}$, where the pressure is proportional to the density $\rho$, i.e., $p=\rho c_{\rm s}^2$. \section{The model} \subsection{Basic equations} In this work, we are only interested in weak magnetic fields and therefore ignore the Lorentz force. The magnetic field is given as $\bm{B}=\mbox{\boldmath $\nabla$} {}\times\bm{A}$, where $\bm{A}$ is the magnetic vector potential. We thus solve the evolution equations for the magnetic vector potential $\bm{A}$, the velocity $\bm{u}$, and the logarithmic density $\ln\rho$ in the form \begin{equation} \frac{\partial\bm{A}}{\partial t}=\bm{u}\times\bm{B}+\eta\nabla^2\bm{A}, \label{dAdt} \end{equation} \begin{equation} \frac{{\rm D} {}\bm{u}}{{\rm D} {} t}=\mbox{\boldmath $f$} {}-c_{\rm s}^2\mbox{\boldmath $\nabla$} {}\ln\rho+ \frac{1}{\rho}\mbox{\boldmath $\nabla$} {}\cdot(2\rho\nu\mbox{\boldmath ${\sf S}$} {}), \end{equation} \begin{equation} \frac{{\rm D} {}\ln\rho}{{\rm D} {} t}=-\mbox{\boldmath $\nabla$} {}\cdot\bm{u}, \label{dlnrhodt} \end{equation} where ${\rm D} {}/{\rm D} {} t=\partial/\partial t+\bm{u}\cdot\mbox{\boldmath $\nabla$} {}$ is the advective derivative, $\mbox{\boldmath $f$} {}$ is a nonhelical forcing function consisting of plane waves with wavevector $\bm{k}$, and ${\sf S}_{ij}= (\partial_i u_j+\partial_j u_i)/2-\delta_{ij}\mbox{\boldmath $\nabla$} {}\cdot\bm{u}/3$ are the components of the rate-of-strain tensor $\mbox{\boldmath ${\sf S}$} {}$. For the forcing, we select a $\bm{k}$ vector at each time step randomly from a finite shell around $k_{\it f}/k_1=1.5$ with $1\leq|\bm{k}|/k_1<2$. The components of $\bm{k}$ are taken to be integer multiples of $k_1\equiv2\pi/L$, where $L$ is the side length of our Cartesian domain of volume $L^3$. This forcing function has been used in many earlier papers \citep[e.g.][]{HBD04}. We solve \Eqss{dAdt}{dlnrhodt} using the {\sc Pencil Code} \citep{JOSS}. \subsection{Spectra and characteristic parameters} \label{Spectra} We normalize our kinetic and magnetic energy spectra such that $\int E_{\rm K}(k)\,{\rm d} {} k=\bra{\bm{u}^2}/2$ and $\int E_{\rm M}(k)\,{\rm d} {} k=\bra{\bm{B}^2}/2\mu_0\rho_0\equiv{\cal E}_{\rm M}$, respectively, where $\rho_0$ is the mean density. Here, angle brackets without subscript denote volume averages. We always present time-averaged spectra. Since $E_{\rm M}(k,t)$ increases exponentially at the rate $2\gamma$, where $\gamma$ is the growth rate of the magnetic field, we average the compensated spectra, $\bra{e^{-2\gamma t}E_{\rm M}(k,t)}_{\Delta t}$, over a suitable time interval $\Delta t$ in which the function $e^{-2\gamma t}E_{\rm M}(k,t)$ is statistically stationary; see also \cite{SB14}. Our averaged magnetic energy spectra are normalized by ${\cal E}_{\rm M}$, so that their integral is unity.
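The averaging procedure just described can be sketched as follows (an assumed array layout for illustration, not the actual {\sc Pencil Code} post-processing).

\begin{verbatim}
import numpy as np

def averaged_spectrum(EM, t, k, gamma, t1, t2):
    # EM[i, j] = E_M(k[j], t[i]).  Average the compensated spectra
    # exp(-2 gamma t) E_M(k, t) over the window [t1, t2] in which
    # they are statistically stationary, then normalize so that the
    # integral of the averaged spectrum is unity.
    w = (t >= t1) & (t <= t2)
    comp = EM[w] * np.exp(-2.0 * gamma * t[w])[:, None]
    spec = comp.mean(axis=0)
    return spec / np.trapz(spec, k)
\end{verbatim}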
Our governing parameters are the Mach number, and the fluid and magnetic Reynolds numbers, defined here as \begin{equation} \mbox{\rm Ma}=u_{\rm rms}/c_{\rm s},\quad \mbox{\rm Re}=u_{\rm rms}/\nu k_{\it f},\quad R_{\rm m}=u_{\rm rms}/\eta k_{\it f}, \end{equation} respectively, where $u_{\rm rms}$ is the time-averaged rms velocity. Thus, the magnetic Prandtl number is $P_{\rm m}=R_{\rm m}/\mbox{\rm Re}$. The value of $\gamma$ is computed as the average of ${\rm d} {}\ln B_{\rm rms}/{\rm d} {} t$ during the exponential growth phase. We also give the kinetic dissipation wavenumber \begin{equation} k_\nu=\left(\epsilon_{\rm K}/\nu^3\right)^{1/4}, \label{knudef} \end{equation} where $\epsilon_{\rm K}=\bra{2\rho\nu\mbox{\boldmath ${\sf S}$} {}^2}_{\Delta t}$ is the time-averaged kinetic energy dissipation rate. It obeys the expected $\mbox{\rm Re}^{3/4}$ scaling, here with $k_\nu/k_{\it f}\approx0.48\,\mbox{\rm Re}^{3/4}$; see \App{knuScaling}. A tilde on the growth rate denotes normalization with the turnover rate and tildes on various wavenumbers denote normalization with respect to $k_1$, i.e., \begin{equation} \tilde{\gamma}=\gamma/u_{\rm rms} k_{\it f},\quad \tilde{k}_\nu=k_\nu/k_1,\quad \tilde{k}_{\rm f}=k_{\it f}/k_1,\quad \mbox{etc}. \end{equation} These parameters are listed in \Tab{Tsum} for our runs. For Runs~A--K, we used a resolution of $512^3$ mesh points, while for Runs~L and M, we used $1024^3$ mesh points. The value of $\epsilon_{\rm K}$ in units of $k_1c_{\rm s}^3$ is obtained from the table entries as $\epsilon_{\rm K}/k_1c_{\rm s}^3=\tilde{k}_\nu^4 (\mbox{\rm Ma}/\mbox{\rm Re}\,\tilde{k}_{\rm f})^3$. The calculation of the values of $\tilde{k}_\eta$ is discussed below. \begin{table} \centering \caption{ Summary of the simulations presented in this paper. For Runs~A--K, the resolution was $512^3$, and $1024^3$ for the other two. }\label{Tsum} \begin{tabular}{crrrccrrr} \hline $\!\!$Run$\!\!$ & $\mbox{\rm Ma}$ & $\mbox{\rm Re}$ & $R_{\rm m}$ & $P_{\rm m}$ & $\tilde{\gamma}$ & $\tilde{k}_\nu$ & $\tilde{k}_B$ & $\tilde{k}_\eta$ \\ \hline A & 0.096 & 12 & 1240 & 100 & 0.076 & 5.9 & 127 & 132 \\ B & 0.113 & 36 & 1460 & 40 & 0.090 & 11.7 & 128 & 156 \\ C & 0.120 & 78 & 1560 & 20 & 0.110 & 19.7 & 139 & 188 \\ D & 0.127 & 165 & 1650 & 10 & 0.135 & 34.1 & 156 & 226 \\ E & 0.130 & 420 & 1680 & 4 & 0.159 & 67.7 & 185 & 299 \\ F & 0.128 & 830 & 1660 & 2 & 0.172 & 113 & 209 & 353 \\ G & 0.129 & 1680 & 1680 & 1 & 0.157 & 185 & 237 & 432 \\ H & 0.132 & 1710 & 850 & 0.5 & 0.079 & 187 & 147 & 314 \\ I & 0.134 & 1740 & 580 & 0.33 & 0.044 & 189 & 114 & 260 \\ J & 0.134 & 1740 & 440 & 0.25 & 0.033 & 189 & 94 & 228 \\ K & 0.130 & 1690 & 340 & 0.20 & 0.019 & 186 & 82 & 203 \\ L & 0.132 & 4270 & 427 & 0.10 & 0.020 & 368 & 107 & 300 \\ M & 0.132 & 8580 & 430 & 0.05 & 0.016 & 585 & 104 & 400 \\ \hline \end{tabular} \end{table} \section{Results} \subsection{Scalings of the KA and flux tube wavenumbers} Looking at \Tab{Tsum}, it is clear\footnote{Note that $\gamma/\eta k_1^2=\tilde{\gamma}R_{\rm m}\,\tilde{k}_{\rm f}^2$ is related to the values given in \Tab{Tsum}.} that the inverse flux tube thickness $\tilde{k}_B$ does not change monotonically with $P_{\rm m}$. The same is also true of $\tilde{k}_\eta^{\rm KA}$. This is mostly because $R_{\rm m}$ was not kept constant for all runs.
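As an aside, the dimensionless parameters and $k_\nu$ in \Tab{Tsum} follow directly from a few time-averaged diagnostics; a minimal sketch (illustrative only, with hypothetical input names, not part of our analysis pipeline):

\begin{verbatim}
def derived_parameters(u_rms, c_s, nu, eta, k_f, eps_K):
    # dimensionless parameters and the viscous dissipation
    # wavenumber, following the definitions above
    Ma = u_rms / c_s
    Re = u_rms / (nu * k_f)
    Rm = u_rms / (eta * k_f)
    Pm = Rm / Re
    k_nu = (eps_K / nu**3) ** 0.25
    return Ma, Re, Rm, Pm, k_nu
\end{verbatim}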
For $P_{\rm m}\ge1$, however, $R_{\rm m}$ varied only a little and was in the range from 1200 to 1700. In that range, $\tilde{k}_B$ showed a steady increase with $P_{\rm m}$. For smaller $P_{\rm m}$, we decreased $R_{\rm m}$ so that $\mbox{\rm Re}$ would not become too large. For Runs~L and M, we used a resolution of $1024^3$ and were thus able to increase $\mbox{\rm Re}$, which led to a slight increase of $\tilde{k}_B$. In most of the plots, we normalize the characteristic wavenumbers by $k_\nu$, which results in a monotonic increase of the ratios $k_B/k_\nu$ and $k_\eta^{\rm KA}/k_\nu$. In \Fig{ptable_keta}, we plot $k_\eta^{\rm KA}/k_\nu$ and $k_B/k_\nu$ versus $P_{\rm m}$. Both show a $P_{\rm m}^{0.6}$ scaling for $P_{\rm m}\geq 2$, but they have a linear dependence for $P_{\rm m} < 1$. Thus, the expected $P_{\rm m}^{1/2}$ scaling is only approximately confirmed. \begin{figure}\begin{center} \includegraphics[width=\columnwidth]{ptable_keta} \end{center}\caption[]{ Dependence of $k_\eta^{\rm KA}/k_\nu$ (closed symbols) and $k_B/k_\nu$ (open symbols) on $P_{\rm m}$. The dotted line shows the $P_{\rm m}^{1/2}$ scaling for comparison. }\label{ptable_keta}\end{figure} \subsection{Resistive cutoff wavenumbers} Important characteristics of MHD turbulence are the kinetic and magnetic energy spectra. Focussing on the viscous and resistive dissipation subranges, it makes sense to normalize $k$ by $k_\nu$, as discussed above. We recall that the quantity $k_\nu$ is usually defined as in \Eq{knudef}, i.e., without any prefactors. The point where the spectrum drops significantly is typically at $k/k_\nu\approx0.1$ rather than at unity, as one might have expected. This should be kept in mind when discussing values of cutoff wavenumbers in other definitions. \begin{figure}\begin{center} \includegraphics[width=\columnwidth]{pspec_comp_PrMcrit} \end{center}\caption[]{ Magnetic energy spectra (solid lines) for $P_{\rm m}=1/5$ (Run~K, red), 1/4 (Run~J, orange), 1/3 (Run~I, black), and 1/2 (Run~H, blue) along with the corresponding kinetic energy spectra (dashed lines). }\label{pspec_comp_PrMcrit}\end{figure} The functional forms of $E_{\rm M}(k)$ and $E_{\rm K}(k)$ are rather different at small values of $k$, but near the viscous cutoff wavenumber they are more similar to each other. In \Fig{pspec_comp_PrMcrit}, we compare $E_{\rm K}(k)$ and $E_{\rm M}(k)$ for a few values of $P_{\rm m}$. We clearly recognize the $E_{\rm K}(k)\propto k^{-5/3}$ Kolmogorov scaling and the $E_{\rm M}(k)\propto k^{3/2}$ spectrum of the small-scale dynamo \citep{Kaz68}; see also KA. For different values of $P_{\rm m}$, however, the slopes of $E_{\rm M}(k)$ are quite different near the resistive cutoff wavenumber: steeper for small values of $P_{\rm m}$ and shallower for larger values of $P_{\rm m}$. For $P_{\rm m}=1/4=0.25$, the shapes of $E_{\rm M}(k)$ and $E_{\rm K}(k)$ are most similar to each other at large $k$, although $E_{\rm M}(k)$ is just slightly too steep, while for $P_{\rm m}\ge1/3$, it is already clearly too shallow. Thus, we expect there to be a critical value $P_{\rm m}^{\rm crit}$ of about 0.27, where $E_{\rm M}(k)$ and $E_{\rm K}(k)$ are most similar to each other near the cutoff wavenumber. \begin{figure}\begin{center} \includegraphics[width=\columnwidth]{pspec_comp2} \end{center}\caption[]{ Magnetic energy spectra collapsed on top of each other by choosing suitable values of $k_\eta$ for each value of $P_{\rm m}$.
}\label{pspec_comp2}\end{figure} \begin{figure}\begin{center} \includegraphics[width=\columnwidth]{pspec_comp2_loPm} \end{center}\caption[]{ Similar to \Fig{pspec_comp2}, but for $P_{\rm m}=0.05$ (black line), 0.1 (red line), and 0.2 (blue line). }\label{pspec_comp2_loPm}\end{figure} Our value of $P_{\rm m}^{\rm crit}$ is surprisingly small. Comparisons of $E_{\rm M}(k)$ and $E_{\rm K}(k)$, like the one performed here, have been made before. \cite{Kriel+22} found that the two spectra are similar to each other for values of $P_{\rm m}$ around 1.3. Values around unity might seem more natural than our value of 0.27, but it should be remembered that they worked with the {\tt FLASH} code \citep{FLASH}, which uses an approximate Riemann solver -- in addition to employing explicit viscosity and magnetic diffusivity. This implies additional dissipation, which might have affected the scaling. \cite{Kriel+22} also used a Mach number of about 0.3, while ours is closer to 0.1, so the simulations are not really comparable to each other. Nevertheless, it would be useful to have independent confirmation of the value of $P_{\rm m}^{\rm crit}$ using, for example, spectral codes, which would resolve the dissipative subranges even more accurately than the sixth-order scheme employed in the {\sc Pencil Code}. By choosing suitable values of $k_\eta$ for $P_{\rm m}\neq P_{\rm m}^{\rm crit}$, we can now try to collapse the curves $E_{\rm M}(k/k_\eta)$ on top of each other. This is done in \Fig{pspec_comp2}. The collapse is good near and above the peak of the spectra, but there are departures at small values of $k$, where the spectra become shallower than the classical Kazantsev slope for smaller values of $P_{\rm m}$. In the opposite limit of $P_{\rm m}\ll1$, the spectral slope may be smaller. For $P_{\rm m}=0.1$, a $k^{7/6}$ scaling was previously discussed by \cite{SB14} and confirmed by \cite{Bra+18}. For $P_{\rm m}\leq0.2$, the quality of the collapse onto \Eq{EMk0} becomes rather poor, which is why we plot the results for smaller values separately; see \Fig{pspec_comp2_loPm}. The collapse for each value of $P_{\rm m}$ results in a value of $k_\eta$, which we have listed in the last column of \Tab{Tsum}. A plot of $k_\eta/k_\nu$ versus $P_{\rm m}$ is given in \Fig{ptable_keta2}. We see that the ratio $k_\eta/k_\nu$ does obey the expected $P_{\rm m}^{1/2}$ scaling rather well. In this figure, we have also highlighted the value of $P_{\rm m}=P_{\rm m}^{\rm crit}\approx0.27$ where $k_\eta/k_\nu=1$, i.e., \begin{equation} k_\eta/k_\nu=\left(P_{\rm m}/P_{\rm m}^{\rm crit}\right)^{1/2}. \label{ketaknu} \end{equation} For $P_{\rm m}\ll1$, a steeper scaling is obtained numerically in very high resolution simulations \citep{W22}, but in our simulations, such a trend cannot yet be seen for $P_{\rm m}\ge0.05$. \begin{figure}\begin{center} \includegraphics[width=\columnwidth]{ptable_keta2} \end{center}\caption[]{ Dependence of $k_\eta/k_\nu$ (closed symbols) on $P_{\rm m}$. }\label{ptable_keta2}\end{figure} \begin{figure}\begin{center} \includegraphics[width=\columnwidth]{pkaz_all} \end{center}\caption[]{ Magnetic energy spectra versus $k/k_\eta^{\rm KA}$ for $P_{\rm m}=100$ (black solid line) and collapsed on top of it the result for $P_{\rm m}=40$ (blue line), as well as $P_{\rm m}=20$ (orange line, having scaled $k_\eta^{\rm KA}$ by a factor 1.05), and $P_{\rm m}=10$ (red line, having scaled $k_\eta^{\rm KA}$ by a factor 1.1). The black dotted line represents \Eq{DottedLine}.
}\label{pkaz_all}\end{figure} \subsection{Comparison with the Kazantsev cutoff wavenumber} We have already discussed the differences in the $P_{\rm m}$ scaling between the measured $k_\eta$ and the theoretically expected $k_\eta^{\rm KA}$ from the work of KA, based on the numerically determined growth rate. Here, however, the scales are rather different in an absolute sense. This is primarily caused by the large departure between the values of $k_\eta^{\rm KA}$ and the location where the magnetic energy spectrum begins to drop rapidly. The apparent discrepancy can be alleviated by redefining $k_\nu$ such that the drop occurs closer to unity. Thus, there is otherwise no physical significance in the difference of the absolute wavenumbers. To clarify this point, we now plot $E_{\rm M}(k)/{\cal E}_{\rm M}$ versus $k/k_\eta^{\rm KA}$ for $P_{\rm m}=100$ and 40. For smaller values of $P_{\rm m}$, we have scaled $k_\eta^{\rm KA}$ by a factor $s=1.05$ for $P_{\rm m}=20$ and by a factor $s=1.1$ for $P_{\rm m}=10$; see \Fig{pkaz_all}. Those coefficients are also listed in \Tab{Tsum2}. The result for $P_{\rm m}=4$ is not plotted because of a poor collapse at small $k$. Here, the adjustment factor is 1.3, as listed in \Tab{Tsum2}. This lack of collapse for $P_{\rm m}\leq4$ illustrates that the Kazantsev model reproduces the numerical data related to $k_\eta^{\rm KA}$ sufficiently well only for large values of $P_{\rm m}$. \begin{table} \centering \caption{ Values of $k_\eta^{\rm KA}$ and adjustment factors $s$ to the KA values for small $P_{\rm m}$. }\label{Tsum2} \begin{tabular}{c|ccccc} \hline $P_{\rm m}$ & 100 & 40 & 20 & 10 & 4 \\ \hline $k_\eta^{\rm KA}/k_1$ & 7.7 & 9.1 & 10.4 & 11.9 & 13.0 \\ $s$ & 1 & 1.00 & 1.05 & 1.1 & 1.3 \\ $1.3\,sk_\eta^{\rm KA}/k_1$ & 10.0 & 11.8 & 14.1 & 16.9 & 21.9 \\ \hline \end{tabular} \end{table} In absolute terms, the value of $k_\eta^{\rm KA}$ given by \Eq{ketaKA} underestimates the position of the peak by a factor of about $1.3=1/0.77$. This factor was obtained empirically by overplotting in \Fig{pkaz_all} the graph of \begin{equation} E_{\rm M}(k)/{\cal E}_{\rm M}\approx 0.2 \, \left(k/k_\eta^{\rm KA}\right)^{3/2} K_0\left(0.77\,k/k_\eta^{\rm KA}\right). \label{DottedLine} \end{equation} The agreement with the numerical solutions is generally good, but deteriorates for $P_{\rm m}<10$, especially at small $k$, where the simulation results show less power than the Kazantsev model. On the other hand, the discrepancy with the estimate of $k_\eta^{\rm KA}$ decreases owing to the increase of the correction factor $s$. This is illustrated in the last line of \Tab{Tsum2}, where we have listed the values of $1.3\,sk_\eta^{\rm KA}/k_1$. \subsection{Different viscous cutoff wavenumbers} \label{Redefining} The absolute scale of characteristic and cutoff wavenumbers is a matter of convention. For $k_\eta$, one could determine empirically the effective wavenumber $k_\eta^{\rm KA}$ in \Eq{EMk0}, as we have done. This value turned out to be 1.3 times smaller than that proposed by KA. One would then define $1.3\,k_\eta^{\rm KA}$ as a new resistive cutoff wavenumber. Given that the 1/2 scaling in \Eq{ketaknu} is well obeyed, one could even redefine $k_\nu$ correspondingly. Looking at \Tab{Tsum}, we see that for $P_{\rm m}=P_{\rm m}^{\rm crit}$, we have $k_\nu/k_1=189$. Furthermore, we see that $1.3\,sk_\eta^{\rm KA}/k_1=10.0$. Thus, since $10/189=0.053$, we could define a magnetically motivated value as \begin{equation} k_\nu^{\rm mag}=0.053\,(\epsilon_{\rm K}/\nu^3)^{1/4}.
\end{equation} The reason we ended up defining $k_\nu^{\rm mag}$ in terms of the magnetic energy spectrum is that $E_{\rm M}(k)$ has a well-defined peak, which is not the case for $E_{\rm K}(k)$. This could be changed, however, by defining \begin{equation} E_{\rm K}(k)\propto k^{-5/3}\exp(-a_\nu k/k_\nu). \label{EKfitold} \end{equation} This is the approach chosen by \cite{Kriel+22}. A problem with \Eq{EKfitold} is that it lacks a description of the bottleneck. \cite{SJ93} showed that experimental data can best be fit with an additional $k^{-1}$ piece, while \cite{Qian84} proposed a formula based on a closure model of the form \begin{equation} E_{\rm K}(k)\propto k^{-5/3}\left[1+b_\nu(k/k_\nu)^{2/3}\right] \exp\left[-a_\nu (k/k_\nu)^{4/3}\right], \label{EKfit} \end{equation} with $a_\nu=5.4$ and $b_\nu=5.3$, which, again, implies a $k^{-1}$ scaling of the bottleneck. His formula was also confirmed by \cite{Dobler+03} using the {\sc Pencil Code}. This fit is shown in \App{KineticCutoff}, where $a_\nu=5.8$ and $b_\nu=6$ were found, which motivates another, kinetically defined, viscous cutoff wavenumber $k_\nu^{\rm kin}=k_\nu/a_\nu^{3/4}$, i.e., \begin{equation} k_\nu^{\rm kin}=0.27\,(\epsilon_{\rm K}/\nu^3)^{1/4}, \end{equation} which is much closer to the original definition of \Eq{knudef}. Thus, we see that the consideration of magnetic fields can lead to rather different definitions of the viscous cutoff wavenumber. In other words, one could motivate two different definitions of $k_\nu$: the kinetic one, describing the wavenumber where $E_{\rm K}(k)$ begins to drop exponentially, and the magnetic one, which is $0.27/0.053=5.1$ times smaller and marks where $E_{\rm M}(k)$ has its peak. \section{Conclusions} In this paper, we have determined the magnetic Prandtl number dependence of three rather different scales characterizing the dissipative magnetic structures in a kinematic small-scale dynamo: their diameter, the theoretical cutoff wavenumber based on the growth rate, and the actual spectral cutoff. For a magnetic Prandtl number of about 0.27, the viscous and resistive cutoff scales are found to be approximately equal. This is different from the results of \cite{Kriel+22}, who found a critical value of around 1.3. A scaling of the cutoff wavenumber proportional to $P_{\rm m}^{1/2}$ is found for $0.05\leq P_{\rm m}\leq100$. A change of such a scaling is expected for very small values of $P_{\rm m}$, but this cannot be confirmed for moderately small values. For the actual thickness of flux tubes, we do find a break in the scaling at $P_{\rm m}\approx1$, but it is steeper than expected both for small and large values of $P_{\rm m}$. For the scale based on the theoretically expected eigenfunction of the Kazantsev small-scale dynamo, we also found a slightly steeper scaling, but no breakpoint for smaller values of $P_{\rm m}$ down to $P_{\rm m}=0.05$. For the large values of $P_{\rm m}$ that are expected to occur in the interstellar medium and in galaxy clusters, the viscous scale is much larger than the resistive one, and it may be observationally accessible through an excess of the parity-even $E$ polarization over the parity-odd $B$ polarization in synchrotron emission \citep{Bra+22}. The resistive scale, on the other hand, may be accessible through interstellar scintillation measurements of pulsars \citep{Cordes+85, Rickett90, Bhat+04}, as discussed in the introduction.
Thus, there may be ways of comparing theory with observations in the not too distant future. It would also be interesting to extend the present study to other measures of magnetic structures. One such possibility is the use of Minkowski functionals \citep{Sahni+98}. \cite{Wilkin+07} have used this method to show that the thickness, width, and length of magnetic structures from a small-scale dynamo scale differently with the magnetic Reynolds number. In their case, however, the value of $\mbox{\rm Re}$ was held constant, so $P_{\rm m}$ and $R_{\rm m}$ did not vary independently. \section*{Acknowledgements} This work emerged during discussions at the Nordita program on ``Magnetic field evolution in low density or strongly stratified plasmas'' in May 2022. The research was supported by the Swedish Research Council (Vetenskapsr{\aa}det, 2019-04234). Nordita is sponsored by NordForsk. We acknowledge the allocation of computing resources provided by the Swedish National Allocations Committee at the Center for Parallel Computers at the Royal Institute of Technology in Stockholm and Link\"oping. J.S.\ acknowledges support by the Swiss National Science Foundation under Grant No.\ 185863. \section*{Data Availability} The source code used for the simulations of this study, the {\sc Pencil Code} \citep{JOSS}, is freely available at \url{http://github.com/pencil-code/}. The DOI of the code is \url{http://doi.org/10.5281/zenodo.2315093}. The simulation setups and the corresponding secondary data are available at \url{http://doi.org/10.5281/zenodo.7090887}; see also \url{http://www.nordita.org/~brandenb/projects/keta_vs_PrM} for easier access to the same material as on the Zenodo site. \bibliographystyle{mnras}
{ "timestamp": "2022-09-20T02:23:37", "yymm": "2209", "arxiv_id": "2209.08717", "language": "en", "url": "https://arxiv.org/abs/2209.08717" }
\section{Introduction} Historically, Leonhard Euler introduced the Euler characteristic for convex polyhedra in 1752, following his 1736 paper on the notion of a graph. His consideration of graphs from the geometry of position led to the definition of the Euler characteristic for an arbitrary finite cell complex. In the early 20th century, Poincar\'{e} generalized the Euler characteristic further, and it turned out to be a topological invariant of a space. The Euler-Poincar\'{e} formula for the Euler characteristic of a topological space uses the so-called Betti numbers of the space. Since then, there have been many variations of the formula, and the most interesting one to consider in this paper is the holomorphic Euler characteristic of a sheaf $\mathcal{F}$ on a proper scheme $X$, which replaces the Betti numbers by the dimensions of the cohomology groups with coefficients in the sheaf. To be specific, \[ \chi(\mathcal{F})=\sum_i(-1)^ih^i(X;\mathcal{F}). \] Prym varieties, named after Friedrich Prym, are abelian varieties constructed from \'etale covers of algebraic curves. They have been investigated analytically by Schottky-Jung \cite{SJ}, Wirtinger \cite{Wi}, and Farkas-Rauch \cite{FaRa}, and algebraically by Mumford \cite{Mum}. See \cite[\S1]{Farkas} for more precise details about the analytic and algebraic approaches. The study of Prym varieties has remained active for decades. Inside Prym varieties, the Brill-Noether loci were constructed by Welters \cite{W85}. The Euler characteristic of the Brill-Noether loci in Jacobians of non-singular curves was computed in \citelist{\cite{EH}\cite{PP}} and, more generally with special vanishing at two marked points, in \citelist{\cite{ACT21}\cite{CLPT}}. However, the Euler characteristics of the Brill-Noether loci for Prym varieties are less well known. The goal of this article is to provide a formula for the Euler characteristic of the Brill-Noether loci in Prym varieties with vanishing orders at one point. Let $\mathbb{K}$ be an algebraically closed field of characteristic not equal to $2$. \begin{theorem}[Euler Characteristic]\label{thm:Euler} Let $({C}, P)$ be a smooth curve of genus $g$ over $\mathbb{K}$ with the vanishing sequence $\bold{a}$ at $P\in C$. Let $\pi:\widetilde{{C}}\rightarrow {C}$ be an \'{e}tale double cover of ${C}$ such that $P'$ and $P''$ map to $P$. Suppose that the dimension of $V^r_{\bold{a}}(P')$ (resp. $V^r_{\bold{a}}(P'')$) is $\rho=g-1-|\lambda|$. The Euler characteristic $\chi\left(\mathcal{O}_{V^r_{\bold{a}}(P')}\right)$ (resp. $\chi\left(\mathcal{O}_{V^r_{\bold{a}}(P'')}\right)$) is equal to \begin{align*} \sum_{u_i, v_i\geq 0}\left(\prod_{i=1}^{r+1}(-1)^{u_i}\right.&\left.\binom{u_i+\widetilde{\lambda}_i}{v_i}\right)\\ \cdot&\left(\sum_{\sigma\in S_{2(r+1)}}\sum_{k\geq0}\sum_{\bold{f}\in A_k^\sigma}\mathrm{sgn}(\sigma)\prod_{j=1}^{r+1}g_{f_{\sigma(2j-1),\sigma(2j)}}^{\sigma(2j-1),\sigma(2j)}\dfrac{h(\lambda,\bold{v},k)!}{2^{2(r+1)-h(\lambda,\bold{v},k)}(r+1)!}\right), \end{align*} where $h(\lambda,\bold{v},k)=|\lambda|+|\bold{v}|+k$. In particular, if $\lambda_i=r+2-i$, so that $\widetilde{\lambda}=0$, then we get the Euler characteristic $\chi(\mathcal{O}_{V^r})$ of the Brill-Noether loci in $\mathscr{P}^\pm$. \end{theorem} It is known that Chern class formulas for certain degeneracy loci attached to linear series can help simplify computations in Brill-Noether theory. For instance, Kempf and Laksov provided significantly simplified proofs by studying the cohomology class of the Brill-Noether loci \cite{KL}.
Moreover, the Euler characteristic of the two-pointed Brill-Noether loci in Jacobians can be obtained by applying the formula for the K-theory class of a Schubert variety associated to a $321$-avoiding permutation in a flag bundle of Lie type A \cite{ACT21}. By definition, the flag bundle $Fl(E)$ of type A on a nonsingular variety $X$ over an algebraically closed field $\mathbb{K}$ is the bundle of flags (or filtrations) of subspaces of the fibres of a vector bundle $E\rightarrow X$, so that each fibre can be considered as $SL(n)/B$, where $B$ is a Borel subgroup of $SL(n)$ for some rank $n$. The Brill-Noether loci of Prym varieties with special vanishing at one point have the structure of Schubert varieties of Lie type D in even orthogonal Grassmannians $OG(n,\mathbb{K}^{2n})=SO(2n)/P$ for some parabolic subgroups $P$ of $SO(2n)$. In this perspective, we use the K-theoretic Chern class formula for even orthogonal Grassmannian degeneracy loci of Lie type D to deduce Theorem \ref{thm:Euler}. In fact, the proof of Theorem \ref{thm:Euler} rests on the formula for the K-theory class of the pointed Brill-Noether loci in the Prym varieties. The following is our second main result, Theorem \ref{thm:m2}, computing the connective K-theory classes of the pointed Brill-Noether loci $V_{\bold{p}}^r(P')$ in Prym varieties. \begin{theorem}[Connective ${\rm K}$-theory class]\label{thm:m2} Let $(C,P)$ be a smooth curve of genus $g$ with vanishing orders $\bold{a}$ at $P$ on $C$. Let $\lambda$ be a partition defined by $\lambda_{i}=a_{r+1-i}$ and $\widetilde{\lambda}$ by $\widetilde{\lambda}_i={r+1}-i-\lambda_i+1$. Let $\xi$ be the theta divisor on $\mathscr{P}^\pm$. Assume that $V_{\bold{a}}^r(P')$ (resp. $V_{\bold{a}}^r(P'')$) is either of pure codimension $|\lambda|$ or empty. Then the class of $V_{\bold{a}}^r(P')$ is \begin{align} 2^{r+1}[V_{\bold{a}}^r(P')]&=\sum_{u_i, v_i\geq 0}\left(\prod_{i=1}^{r+1}(-1)^{u_i}(-\beta)^{v_i}\binom{u_i+\widetilde{\lambda}_i}{v_i}\right)\mathrm{Pf}(M'';\beta) \end{align} in $CK_*(\mathscr{P}^{\pm})$ or the numerical equivalence ring of $\mathscr{P}^{\pm}$, where the right-hand side is the Pfaffian of the $(r+1)\times (r+1)$ matrix $M''=(m_{ij}'')$ such that for $r+1\geq j>i\geq 1$, \begin{align*} m_{ij}''&=\xi^{\lambda_i+\lambda_j+v_i+v_j}\left(\dfrac{2^{\lambda_i+\lambda_j+v_i+v_j}}{(\lambda_i+v_i)!(\lambda_j+v_j)!}+\sum_{m>0}\dfrac{2^{\lambda_i+\lambda_j+v_i+v_j+m}}{(\lambda_i+m+v_i)!(\lambda_j+v_j)!}\beta^m\xi^m\right.\\ &\left.+\sum_{\ell>0}\sum_{m\geq 0}(-1)^\ell\left(\binom{\ell+m-1}{m}+\binom{\ell+m}{m}\right)\dfrac{2^{\lambda_i+\lambda_j+v_i+v_j+m}}{(\lambda_i+\ell+m+v_i)!(\lambda_j-\ell+v_j)!}\beta^m\xi^m\right). \end{align*} In particular, if $r+1$ is odd, the matrix is augmented by $m_{0j}=\dfrac{2^j}{j!}\xi^j$ for $0<j\leq r+1$. \end{theorem} The connective algebraic K-theory of schemes introduced by Cai \cite{Cai} connects the Chow groups and Quillen's K-theory groups; it was later investigated by Dai and Levine \cite{DL} in motivic homotopy theory. We adopt a simpler version of the connective K-theory of a scheme: given that $X$ is nonsingular, the connective K-homology of $X$, denoted by $CK_*(X)$, is a graded algebra over $\mathbb{Z}[\beta]$ that specializes to the Chow homology $A_*(X)$ at $\beta=0$ and to the Grothendieck group $K_\circ(X)$ of coherent sheaves at $\beta=-1$. The reader may consult \citelist{\cite{A}\cite{HIMN}\cite{HIMN20}} for the study of certain degeneracy loci in the context of connective K-theory.
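Since the theorems above are expressed through Pfaffians of explicit skew-symmetric matrices, we record, purely as a computational aside (our own illustration, not part of the results), that such Pfaffians can be evaluated symbolically by expansion along the first row:

\begin{verbatim}
import sympy as sp

def pfaffian(M):
    # Pfaffian of an even-sized skew-symmetric sympy Matrix via
    # Pf(M) = sum_j (-1)^(j-1) M[0,j] Pf(M minus rows/cols 0 and j);
    # naive recursion, adequate for the small matrices above.
    n = M.rows
    if n == 0:
        return sp.Integer(1)
    assert n % 2 == 0, "skew-symmetric matrix must have even size"
    result = sp.Integer(0)
    for idx, j in enumerate(range(1, n)):
        keep = [i for i in range(n) if i not in (0, j)]
        result += (-1)**idx * M[0, j] * pfaffian(M.extract(keep, keep))
    return result
\end{verbatim}

For instance, the $2\times 2$ case returns $m_{12}$ and the $4\times 4$ case returns $m_{12}m_{34}-m_{13}m_{24}+m_{14}m_{23}$, matching the usual sign conventions.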
In this regard, our strategy for proving Theorem \ref{thm:m2} is to apply the K-theoretic Chern class formulas for Grassmannian degeneracy loci of Lie type D in the sense of \cite[Section 4]{A}. As corollaries of Theorem \ref{thm:m2}, the class of the Brill-Noether loci for Prym varieties in the Grothendieck group of $\mathscr{P}^\pm$ is given by \[ 2^{r+1}[\mathcal{O}_{V_{\bold{a}}^r(P')}]=\sum_{u_i, v_i\geq 0}\left(\prod_{i=1}^{r+1}(-1)^{u_i}\binom{u_i+\widetilde{\lambda}_i}{v_i}\right)\mathrm{Pf}(M'';-1) \] by specializing at $\beta=-1$, and their numerical equivalence class in the Chow group $A_*(\mathscr{P}^\pm)$, obtained at $\beta=0$, can be expressed as \[ 2^{r+1}[V_{\bold{a}}^r(P')]=\prod_{i=1}^{r+1}\dfrac{1}{{\lambda_i!}}\prod_{i<j}\dfrac{\lambda_i-\lambda_j}{\lambda_i+\lambda_j}\cdot (2\xi)^{|\lambda|}. \] Indeed, Theorem \ref{thm:m2} extends the formulas for the cohomology classes of the Brill-Noether loci of Prym varieties by De Concini and Pragacz \cite{CP95}. At $\beta=0$, we recover the class of the pointed Brill-Noether loci for Prym varieties, Corollary \ref{Form2}, which coincides with the recent work of Tarasca \cite[Theorem 1]{Tarasca}. Lastly, our formulas are presented for the one-pointed case. As such, it would be interesting to investigate further the formulas for the Euler characteristics of the two-pointed Brill-Noether loci in Prym varieties. The author is currently working in this direction. The structure of this paper is the following. We review the classical Brill-Noether loci of Prym varieties in Section \ref{sec2} and the K-theoretic class of even orthogonal degeneracy loci in Section \ref{sec3}. Our connective K-theory class formulas for the pointed Brill-Noether loci of Prym varieties are presented in Section \ref{sec4}. In the end, we present the Euler characteristic of the Brill-Noether loci in Prym varieties with special vanishing at one point in Section \ref{sec5}. \section{Review on Brill-Noether loci of Prym varieties}\label{sec2} Let $C$ be a smooth algebraic curve of genus $g=g(C)$ over an algebraically closed field $\mathbb{K}$ whose characteristic is not equal to $2$. Let $\pi:\widetilde{C}\rightarrow C$ be an \'etale double cover of $C$. We denote by $J(C)$ and $J(\widetilde{C})$ the Jacobians of $C$ and $\widetilde{C}$, respectively. We define a norm map $\mathrm{Nm}_{\pi}=\pi_*:\mathrm{Div}(\widetilde{C})\rightarrow \mathrm{Div}(C)$ \cite[Appendix B]{ACGH} by sending a divisor $\sum q_i$ on $\widetilde{C}$ to the divisor $\sum\pi(q_i)$ on $C$. This map induces a map of Jacobians $$\mathrm{Nm}_{\pi}:J(\widetilde{C})\rightarrow J(C).$$ Let $\tau:\widetilde{C}\rightarrow \widetilde{C}$ be the involution exchanging the sheets of $\widetilde{C}$ over $C$. We define the {\it Prym variety} $\mathscr{P}$ \cite{H82,M71} by $$\mathscr{P}=\mathrm{Ker}(id_{J(\widetilde{C})}+\tau)^0=\mathrm{Im}(id_{J(\widetilde{C})}-\tau),$$ where $id_{J(\widetilde{C})}:J(\widetilde{C})\rightarrow J(\widetilde{C})$ is the identity map on $J(\widetilde{C})$ and the superscript $0$ denotes the connected component containing the origin. We consider the degree-$(2g-2)$ Picard variety $\mathrm{Pic}^{2g-2}(C)$ of the curve $C$. Since $J(C)$ can be identified with $\mathrm{Pic}(C)$, the norm map can be regarded as a map $\mathrm{Nm}:\mathrm{Pic}^{2g-2}(\widetilde{C})\rightarrow \mathrm{Pic}^{2g-2}(C)$ of Picard groups. Let $K_C\in\mathrm{Pic}^{2g-2}(C)$ be the canonical divisor class.
The inverse image of $K_C$ under the norm map is given by $$\mathrm{Nm}^{-1}(K_C)=\mathscr{P}^+\cup \mathscr{P}^-,$$ where $\mathscr{P}^+=\{L:h^0(\widetilde{C},L)\equiv 0\;\text{(mod 2)}\}$ and $\mathscr{P}^-=\{L:h^0(\widetilde{C},L)\equiv 1\;\text{(mod 2)}\}$. The {\it Brill-Noether loci} in the Prym varieties $\mathscr{P}^\pm$ are defined as the closed subsets \begin{align}\label{Brill} V^r=\{L\in\mathrm{Nm}^{-1}(K_C):h^0(\widetilde{C},L)\geq r+1, h^0(\widetilde{C},L)\equiv r+1\;\text{(mod 2)}\}\subset\mathrm{Pic}^{2g-2}(\widetilde{C}) \end{align} for $r\in \mathbb{Z}$, $r\geq -1$. So we have $\mathscr{P}^+=V^{-1}\supset V^1\supset V^3\supset \cdots$ and $\mathscr{P}^-=V^0\supset V^2\supset V^4\supset \cdots$. Equivalently, the loci can be described as $V^r=W_{2g-2}^r(\widetilde{C})\cap\mathscr{P}^+$ if $r$ is odd and $V^r=W_{2g-2}^r(\widetilde{C})\cap\mathscr{P}^-$ if $r$ is even, where $W_{2g-2}^r(\widetilde{C})$ are the Brill-Noether loci in $J(\widetilde{C})$. \begin{example} Let $C$ be a general curve of genus $g=2$ and let $r=1$. Then the genus of $\widetilde{C}$ is $3$, and $V^1\cong W_{2}^1(\widetilde{C})$ is a $\mathbb{P}^1$-bundle over $J(C)$. \end{example} We describe the classical scheme- and set-theoretical structure of the Brill-Noether loci in the Prym varieties. (See \cite{CP95} for the standard construction.) For the double cover $\pi:\widetilde{C}\rightarrow C$, let $1\times \pi:\mathrm{Pic}^{2g-2}(\widetilde{C})\times\widetilde{C}\rightarrow\mathrm{Pic}^{2g-2}(\widetilde{C})\times C$ be the induced map. We denote by $p:\mathrm{Pic}^{2g-2}(\widetilde{C})\times C\rightarrow\mathrm{Pic}^{2g-2}(\widetilde{C})$ and $q:\mathrm{Pic}^{2g-2}(\widetilde{C})\times C\rightarrow C$ the first and second projections. Let $\nu$ be the projection from $\mathrm{Pic}^{2g-2}(\widetilde{C})\times\widetilde{C}$ to $\mathrm{Pic}^{2g-2}(\widetilde{C})$. Then we have the following commutative diagram: \begin{equation*}\label{Diag} \begin{tikzcd}[column sep=normal] &\widetilde{C} \ar[r,"\pi"]&C\; &\\%\\ \mathrm{Pic}^{2g-2}(\widetilde{C})\times\widetilde{C} \ar[ur, "\mu", bend left=10]\ar[drr, "\nu", bend right=10] \ar[r,"1\times \pi"] & \mathrm{Pic}^{2g-2}(\widetilde{C})\times C \ar[dr,"p"]\ar[ur, "q"] &&\\ &\;&\mathrm{Pic}^{2g-2}(\widetilde{C}) \ar[r, "\mathrm{Nm}"']&\mathrm{Pic}^{2g-2}(C)& \\ &&& \end{tikzcd}\\ \end{equation*} For distinct fixed points $P_i$ on $C$, we consider a positive divisor $D=\sum_iP_i$ of $C$ with $n=\mathrm{deg}(D)$ sufficiently large, that is, $n+2g-2\geq 2\cdot g(\widetilde{C})+1$, or $n\geq2\cdot (2g-1)+1-(2g-2)=2g+1$, so that $h^0(K_C-D)=0$ by Riemann-Roch and Clifford's theorem. Let $\mathcal{L}$ be a Poincar\'e line bundle on $\mathrm{Pic}^{2g-2}(\widetilde{C})\times \widetilde{C}$. If we write $\mathcal{L}(\pm D)$ for $\mathcal{L}\otimes \mu^*\pi^*(\mathcal{O}_C(\pm D))$, the Riemann-Roch theorem gives $R^1\nu_*(\mathcal{L}(D))=0$, so that \begin{align}\label{Lseq1} 0\longrightarrow \nu_*(\mathcal{L})\longrightarrow \nu_*(\mathcal{L}(D))\longrightarrow \nu_*(\mathcal{L}(D)/\mathcal{L})\longrightarrow R^1\nu_*\mathcal{L}\longrightarrow 0, \end{align} and $\nu_*\mathcal{L}(D)$ is locally free of rank $2g-2+n-(2g-1)+1=n$. A scheme structure on $V^r$ is defined by the $(r+1)$-th Fitting ideal sheaf of $(R^1\nu_*\mathcal{L})|_{\mathscr{P}^\pm}$. Let $\mathcal{E}=(1\times \pi)_*\mathcal{L}$. We write $\mathcal{E}(\pm D)$ for $\mathcal{E}\otimes q^*(\mathcal{O}_C(\pm D))$ where $D$ is a divisor.
Let $V=p_*(\mathcal{E}(D)/\mathcal{E}(-D))$, $U=p_*(\mathcal{E}(D))$, and $W=p_*(\mathcal{E}/\mathcal{E}(-D))$. Then $W, U\subset V$ are subbundles of rank $n$, since $W$ is locally free of rank $n$ and $U$ is just shifted by the divisor $-D$ from $p_*(\mathcal{E}(D)/\mathcal{E})$, where the rank of $p_*(\mathcal{E}(D)/\mathcal{E})$ is $n-(2g-2-(2g-1)+1)=n$. This enables $V_{\mathscr{P}^\pm}$ to carry a nondegenerate quadratic form with values in $\mathcal{O}_{\mathscr{P}^\pm}$, so that $W_{\mathscr{P}^\pm}, U_{\mathscr{P}^\pm}$ become maximal isotropic subbundles with respect to the form. Specifically, let us fix $L\in \mathscr{P}^{\pm}$. For $E=\pi_*L$, let $V=H^0(C,E(D)/E(-D))$, a $2n$-dimensional vector space. On $V$ we define a quadratic form $Q:V\times V\rightarrow \mathbb{K}$ by \begin{align*} Q(\sigma,\tau)=\sum_i\mathrm{Res}_{P_i}(\sigma\tau), \end{align*} where $\sigma\tau\in H^0(C, L^2(2D)/L^2)=H^0(C, \omega_C(2D)/\omega_C)$; the form $Q$ is nondegenerate. In fact, we can write $V$ as $V=U'\oplus W$, where $U'=H^0(C,E(D)/E)$ and $W=H^0(C,E/E(-D))$. The nondegenerate symmetric form $Q$ on $V$ is then defined by $$Q(\sigma_1\oplus \sigma_2,\tau_1\oplus \tau_2)=\sum\mathrm{Res}(\sigma_1\tau_2+\sigma_2\tau_1).$$ We can consider the symmetric form $Q$ as a quadratic form $\mathfrak{q}$ which sends $v$ to $Q(v,v)$ for $v\in V$. With this quadratic form on $V$, we see that $U'$ and $W$ are $n$-dimensional isotropic subspaces by the residue theorem. In other words, $U'$ consists of regular functions, which makes $U'$ an isotropic subspace, and for $W$, if $\sigma$ and $\tau$ are in $W$, then the sum of the residues of $\sigma\tau$ is zero, so $W$ is also an isotropic subspace. Additionally, we obtain another isotropic subspace $U$ for the quadratic form, namely the image of $H^0(C,E(D))$ in $V$ under the restriction map. We notice that the intersection of $U$ and $W$ consists of the global regular sections of $E$, so that $\mathrm{dim}(U\cap W)=h^0(C,E)$. Due to the choice of $\mathcal{L}$, the construction globalizes and thus defines set-theoretically the Brill-Noether loci (\ref{Brill}) on a Prym variety. Readers may refer to \cite{M71} for more details. \section{The connective K-theory class of even orthogonal degeneracy loci}\label{sec3} This section reviews general formulas for even orthogonal degeneracy loci in connective K-homology, which are used later in Section \ref{sec4} to find the class of the pointed Brill-Noether locus in a Prym variety. To be precise, we use the connective K-homology with the natural isomorphisms $$CK_*(X)/(\beta=0)\cong A_*(X)\quad\text{and}\quad CK_*(X)/(\beta=-1)\cong K_\circ(X)$$ between the Chow homology and the Grothendieck group of coherent sheaves of a nonsingular variety $X$, obtained by specializing the parameter to $\beta=0$ and $\beta=-1$, respectively. Even if $X$ is singular, the isomorphisms still hold for the corresponding cohomologies via operational cohomology theory. Moreover, for a closed subvariety $Y\subseteq X$, the classes $\left[Y\right]\in A_*(X)$ and $\left[\mathcal{O}_Y\right]\in K_\circ(X)$ can be obtained from the fundamental class $\left[Y\right]\in CK_*(X)$. Let $\mathcal{V}\rightarrow X$ be a rank $2n$ vector bundle over a smooth irreducible algebraic variety $X$ over $\mathbb{K}$, equipped with a nondegenerate quadratic form $\mathfrak{q}$. Let $\bold{OG}(n, \mathcal{V})$ be the orthogonal Grassmannian bundle with projection $\pi:\bold{OG}(n,\mathcal{V})\rightarrow X$. We consider the rank $n$ tautological subbundle $\mathcal{U}$ of $\pi^*\mathcal{V}$ on $\bold{OG}(n,\mathcal{V})$.
By a common abuse of notation, we write $\mathcal{V}$ for $\pi^*\mathcal{V}$. Let $\mathcal{W}$ be a rank $n$ maximal isotropic subbundle of $\mathcal{V}$. In particular, $\mathcal{U}$ is also a maximal isotropic subbundle of $\mathcal{V}$. Let us fix a flag of isotropic subbundles $$\mathcal{W}_{p_r}\hookrightarrow \mathcal{W}_{p_{r-1}}\hookrightarrow\cdots\hookrightarrow \mathcal{W}_{p_{0}}=\mathcal{W}\xrightarrow{\phi}\mathcal{U}$$ with respect to the form on a (nonsingular) variety $X$, where $\mathcal{W}_{p_i}$ has rank $n-p_i$ for all $i$. In particular, $0=p_0<\cdots<p_{r}$. We define the degeneracy locus $V_{\bold{p}}^r$ by $$V_{\bold{p}}^{r}=\{x\in X\;|\;\mathrm{dim}(\mathcal{W}_{p_i}\cap \mathcal{U})_x\geq r+1-i\;\text{ for }0\leq i\leq r,\quad \mathrm{dim}(\mathcal{W}\cap \mathcal{U})_x\equiv r+1 \;\text{(mod}\;2)\}.$$ We remark that this degeneracy locus should be taken to be the closure of the locus where equality holds. As a special case of \cite[\S 4]{A19}, we may define Euler classes $e(\mathcal{W}_{p_i},\mathcal{U})$ for the isotropic subbundles $\mathcal{W}_{p_i}$ and $\mathcal{U}$. In other words, for the maximal isotropic bundles $\mathcal{U}$ and $\mathcal{W}_{p_i}$, the Euler classes are defined by \begin{equation*} e_m(\mathcal{W}_{p_i},\mathcal{U}) = \begin{cases} (-1)^{\mathrm{dim}(\mathcal{W}\cap \mathcal{U})}\gamma(\mathcal{W},\mathcal{U})c_{p_i}(\mathcal{W}/\mathcal{W}_{p_i}) & \text{if $m=p_i$}\\ 0& \text{otherwise} \end{cases} \end{equation*} where $\gamma(\mathcal{W},\mathcal{U})\in CK^0(X)$ is the canonical square root of $c(\mathcal{V}-\mathcal{U}-\mathcal{W};\beta)$ \cite[Appendix B]{A}. We denote by $T_i$ the raising operator increasing the index of $c(i):=c(\mathcal{V}-\mathcal{U}-\mathcal{W}_{p_{i}})$ by one. Let $R_{ij}=T_i/T_j$ and $e(i):=e(\mathcal{W}_{p_{i}},\mathcal{U})$. We then have the following formula for the class $\left[V_{\bold{p}}^r\right]$. \begin{theorem}\label{thy1} Let $\lambda_{i}=p_{r+1-i}$. The connective K-theory class of $V^r_{\bold{p}}$ in $CK_*(X)\left[\frac{1}{2}\right]$ is given by \begin{align*} \left[V^r_{\bold{p}}\right]&=\mathrm{Pf}_\lambda(d(1),\ldots,d(r+1);\beta)\cdot [X] \\ &=\mathrm{Pf}(M)\cdot [X], \end{align*} where $d(i)=c(i)+(-1)^ie(i)$ for $1\leq i\leq r+1$, and the entries $m_{i,j}$ of the skew-symmetric matrix $M$ are \begin{align*} m_{i,j}&=\dfrac{1-\delta_i\delta_jR_{ij}}{1+\delta_i\delta_j(R_{ij}-\beta T_i)}\cdot\dfrac{(1-\beta \widetilde{T}_i)^{{r+1}-i-\lambda_i+1}}{2-\beta\widetilde{T}_i}\cdot\dfrac{(1-\beta\widetilde{T}_j)^{{r+1}-j-\lambda_j+1}}{2-\beta\widetilde{T}_j}\\ &\cdot (c_{\lambda_i}(i)-(-1)^{{r+1}}e_{\lambda_i}(i))(c_{\lambda_j}(j)+(-1)^{{r+1}}e_{\lambda_j}(j)). \end{align*} Here $\widetilde{T}_i=\delta_iT_i$ and $\delta_i$ assigns $(-1)^i$ to $0$ in $d(i)$. In particular, if $r+1$ is odd, we augment the matrix $M$ by setting $m_{0j}=(1-\beta\widetilde{T}_j)^{{r+1}-j-\lambda_j+1}(2-\beta\widetilde{T}_j)^{-1}\cdot(c_{\lambda_j}(j)+e_{\lambda_j}(j))$. \end{theorem} \section{Classes of the pointed Brill-Noether loci on Prym varieties}\label{sec4} \subsection{Brill-Noether classes with a vanishing sequence} In this section we consider the class of the Brill-Noether loci in the Prym varieties $\mathscr{P}^\pm$ with ramification at one point. Let $C$ be a smooth curve of genus $g$ and $\pi:\widetilde{C}\rightarrow C$ an \'etale double cover of $C$. For a line bundle $L$ in $V^r$, the vanishing sequence \cite{W85} at $P'\in\widetilde{C}$ (resp.
$P''\in \widetilde{C}$) whose image under $\pi$ is $P\in C$ is given by the sequence \begin{equation*} \bold{a}(P')=(0\leq a_0^\ell(P')<\cdots<a_r^\ell(P')\leq 2g-2)\quad (resp.\; \bold{a}(P'')) \end{equation*} of vanishing orders in the $2n$-dimensional vector space $V=H^0(C,E(D)/E(-D))$ at $P'$ (resp. $P''$), namely the maximal sequence satisfying the condition \[ h^0(\widetilde{C}, L(-a_{i}^\ell(P')\cdot P'))\geq r+1-i\;\text{for all}\; i. \] Let us fix $P'$ (resp. $P''$) and the sequence \[ \bold{a}=(0\leq a_0< a_1< \cdots<a_r\leq 2g-2).\] We define the {\it pointed Brill-Noether loci of line bundles} $V_{\bold{a}}^r(P')$ (resp. $V_{\bold{a}}^r(P'')$) in the Prym variety $\mathscr{P}^+$ (resp. $\mathscr{P}^-$) by \begin{align*} V_{\bold{a}}^r(P'):=\left\{L\in \mathrm{Nm}^{-1}(K_C)\;|\;\right.&h^0(\widetilde{C}, L(-a_{i}P'))\geq r+1-i\;\text{for all}\; i, \\ &\left. h^0(\widetilde{C},L)\geq r+1,\; h^0(\widetilde{C},L)\equiv r+1\;\text{(mod 2)}\right\}\subset \mathrm{Pic}^{2g-2}(\widetilde{C}). \end{align*} The structure of this variety can be described via an even orthogonal Grassmannian degeneracy locus $V_{\bold{a}}^r(P')$ (resp. $V_{\bold{a}}^r(P'')$) in $\mathrm{Pic}^{2g-2}(\widetilde{C})$ as in \S\ref{sec3}. In particular this construction generalizes the result for the Brill-Noether loci without ramification in the Prym variety introduced in \S\ref{sec2}. We recall that $\mathcal{L}$ is the Poincar\'e line bundle on $\mathrm{Pic}^{2g-2}(\widetilde{C})\times \widetilde{C}$, $\mathcal{E}=(1\times \pi)_*\mathcal{L}$ and $D=\sum_{i=1}^n P_i$ is a divisor for distinct fixed points $P_i$ on $C$. Assume that $P'\neq P_i$ for all $i$. What follows can also be read with the point $P'$ replaced by $P''\neq P_i$. We set \begin{align*} \mathcal{W}_i:&=p_*(\mathcal{E}\otimes q^*(\mathcal{O}_C(D-a_{i}P')))=p_*(\mathcal{E}(D-a_{i}P')) \end{align*} for $0\leq i\leq r$. Then the sheaf $\mathcal{W}_i$ is a vector bundle of rank \[ \mathrm{rk}(\mathcal{W}_i)=n-a_{i}, \] so that we have a filtration $\mathcal{W}_{r}\subset \mathcal{W}_{r-1}\subset\cdots\subset \mathcal{W}_0\subseteq\mathcal{W}:=p_*(\mathcal{E}(D))$. Assuming $a_0=0$, there is a natural sequence \[ (\mathcal{W}_{r})_{\mathscr{P}^\pm}\hookrightarrow (\mathcal{W}_{r-1})_{\mathscr{P}^\pm}\hookrightarrow \cdots\hookrightarrow (\mathcal{W}_0)_{\mathscr{P}^\pm}=\mathcal{W}_{\mathscr{P}^\pm} \] of vector bundles on $\mathscr{P}^\pm$. Here the nondegenerate symmetric form $Q$ defined in \S\ref{sec2} is naturally inherited by these subbundles. In addition, for $L\in \mathscr{P}^{\pm}$ and $E=\pi_*L$, if we set $U=H^0(C,E/E(-D))$ and $W_i=H^0(C,E(D-a_iP'))$, then $U\cap W_i$ consists of global sections of $E$ such that $$\mathrm{dim}(U\cap W_i)=h^0(C,E(-a_iP')).$$ Hence $V_{\bold{a}}^r(P')$ can be regarded as a degeneracy locus as in \S\ref{sec3}, and we obtain the following theorem. \begin{theorem}\label{thm:class} Let $(C,P)$ be a smooth curve of genus $g$ with vanishing orders $\bold{a}$ at $P$ on $C$. Let $\lambda$ be the partition defined by $\lambda_{i}=a_{r+1-i}$ and $\widetilde{\lambda}$ by $\widetilde{\lambda}_i={r+2}-i-\lambda_i$. Let $\xi$ be the theta divisor on $\mathscr{P}^\pm$. Assume that $V_{\bold{a}}^r(P')$ (resp. $V_{\bold{a}}^r(P'')$) is either of pure codimension $|\lambda|$ or empty.
Then the class of $V_{\bold{a}}^r(P')$ is \begin{align}\label{Form1} 2^{r+1}[V_{\bold{a}}^r(P')]&=\sum_{u_i, v_i\geq 0}\left(\prod_{i=1}^{r+1}(-1)^{u_i}(-\beta)^{v_i}\binom{u_i+\widetilde{\lambda}_i}{v_i}\right)\mathrm{Pf}(M'';\beta) \end{align} in $CK_*(\mathscr{P}^{\pm})$ or in the numerical equivalence ring of $\mathscr{P}^{\pm}$, where the right hand side is the Pfaffian of the $(r+1)\times (r+1)$ matrix $M''=(m_{ij}'')$ such that for $r+1\geq j>i\geq 1$, \begin{align*} m_{ij}''&=\xi^{\lambda_i+\lambda_j+v_i+v_j}\left(\dfrac{2^{\lambda_i+\lambda_j+v_i+v_j}}{(\lambda_i+v_i)!(\lambda_j+v_j)!}+\sum_{m>0}\dfrac{2^{\lambda_i+\lambda_j+v_i+v_j+m}}{(\lambda_i+m+v_i)!(\lambda_j+v_j)!}\beta^m\xi^m\right.\\ &\left.+\sum_{\ell>0}\sum_{m\geq 0}(-1)^\ell\left(\binom{\ell+m-1}{m}+\binom{\ell+m}{m}\right)\dfrac{2^{\lambda_i+\lambda_j+v_i+v_j+m}}{(\lambda_i+\ell+m+v_i)!(\lambda_j-\ell+v_j)!}\beta^m\xi^m\right). \end{align*} In particular, if $r+1$ is odd, the matrix is augmented by $m_{0j}=\dfrac{2^j}{j!}\xi^j$ for $0<j\leq r+1$. \end{theorem} \begin{proof} Working modulo numerical equivalence (in the sense of \cite[Equation (4)]{Ma}) and using the Poincar\'e duality formula, the Chern class of $\mathcal{W}_i^\vee$ is given by \begin{align}\label{eqn:chern} c_j((\mathcal{W}_i)_{\mathscr{P}^\pm}^\vee)=\dfrac{(\theta')^j}{j!} \end{align} where $\theta'$ is the cohomology class, restricted to $\mathscr{P}^\pm$, of the theta divisor on $\mathrm{Pic}^{2g-2}(\widetilde{C})$. Since $c(\mathcal{W})=e^{-\theta'}$ and $c_i(\mathcal{U}_{\mathscr{P}^\pm})=0$ for all $i>0$ by \cite[Lemma 5]{CP95}, we get \begin{align*} d_j(i)=c_j((\mathcal{W}_{i})_{\mathscr{P}^\pm}^\vee)+(-1)^i\cdot(-1)^{\mathrm{dim}(\mathcal{U}\cap\mathcal{W})}\cdot c_j(\mathcal{W}_{\mathscr{P}^\pm}/(\mathcal{W}_{i})_{\mathscr{P}^\pm}) \end{align*} for $j=n-p_i$. In case $j\neq n-p_i$, $c_j(i)=c_j((\mathcal{W}_{i})_{\mathscr{P}^\pm}^\vee).$ Let us take $j=n-p_i$. Then the $j$-th Chern class of $\mathcal{W}_{\mathscr{P}^\pm}/(\mathcal{W}_{i})_{\mathscr{P}^\pm}$ vanishes, as \begin{align*} c_j(\mathcal{W}_{\mathscr{P}^\pm}/(\mathcal{W}_{i})_{\mathscr{P}^\pm})&=\sum_{k=0}^{j}c_k(\mathcal{W}_{\mathscr{P}^\pm})\cdot c_{j-k}((\mathcal{W}_{i})_{\mathscr{P}^\pm}^\vee)\\ &=\sum_{k=0}^j \dfrac{(-1)^k\cdot(\theta')^k}{k!}\cdot\dfrac{(\theta')^{j-k}}{(j-k)!}=\sum_{k=0}^j\dfrac{(-1)^k}{k!}\cdot\dfrac{1}{(j-k)!}(\theta')^j\\ &=\dfrac{(\theta')^j}{j!}\sum_{k=0}^j\dfrac{(-1)^k\cdot j!}{k!\cdot(j-k)!}=\dfrac{(\theta')^j}{j!}\sum_{k=0}^j(-1)^k\cdot\binom{j}{k}\\ &=\dfrac{(\theta')^j}{j!}\cdot ((-1)+1)^j=0. \end{align*} Hence we arrive at \begin{equation}\label{Chrn} d_j(i)=\dfrac{(\theta')^j}{j!}, \end{equation} the degree-$j$ term of $e^{\theta'}$. Since $d_j(i)$ is a multiple of $(\theta')^j$, the class $\left[V_{\bold{a}}^r(P')\right]$ can be expressed as $\gamma\cdot(\theta')^{|\lambda|}$ for a rational number $\gamma$, where $|\lambda|=\sum_{i=1}^{r+1}\lambda_i$. Combined with (\ref{Chrn}) and Theorem \ref{thy1}, the class of the pointed Brill-Noether loci $V_{\bold{a}}^r(P')$ (resp. $V_{\bold{a}}^r(P'')$) in $\mathscr{P}^+$ (resp.
$\mathscr{P}^-$) is given by \begin{align*} 2^{r+1}[V_{\bold{a}}^r(P')]&=\mathrm{Pf}_\lambda(d(1),\ldots,d(r+1);\beta)\in CK_*(\mathscr{P}^\pm) \end{align*} where the right hand side is the Pfaffian of the $(r+1)\times (r+1)$ matrix $(m_{ij})$ such that for $r+1\geq j>i\geq 1$, \begin{align*} m_{ij}&=\dfrac{1-R_{ij}}{1+R_{ij}-\beta T_i}\cdot \dfrac{(1-\beta T_i)^{{r+1}-i-\lambda_{i}+1}}{2-\beta T_i}\cdot \dfrac{(1-\beta T_j)^{{r+1}-j-\lambda_j+1}}{2-\beta T_j}\cdot d_{\lambda_i}(i)d_{\lambda_j}(j)\\ &=\dfrac{(1-\beta T_i)^{{r+1}-i-\lambda_{i}+1}}{2-\beta T_i}\cdot \dfrac{(1-\beta T_j)^{{r+1}-j-\lambda_j+1}}{2-\beta T_j}\cdot m_{ij}', \end{align*} where \begin{align*} m_{ij}'=&d_{\lambda_i}(i)d_{\lambda_j}(j)+\sum_{m>0}\beta^md_{\lambda_i+m}(i)d_{\lambda_j}(j)\\ &+\sum_{\ell>0}\sum_{m\geq 0}(-1)^\ell\left(\binom{\ell+m-1}{m}+\binom{\ell+m}{m}\right)\beta^md_{\lambda_i+\ell+m}(i)d_{\lambda_j-\ell}(j). \end{align*} Writing $2-\beta T_i=1+(1-\beta T_i)$ and expanding the geometric series, we have $$\dfrac{(1-\beta T_i)^{\widetilde{\lambda}_i}}{2-\beta T_i}=\sum_{u_i= 0}^\infty (-1)^{u_i}\sum_{v_i= 0}^{u_i+\widetilde{\lambda}_i}\binom{u_i+\widetilde{\lambda}_i}{v_i}(-1)^{v_i}\beta^{v_i}T_i^{v_i},$$ so the class of $V_{\bold{a}}^r(P')$ becomes \begin{align*} 2^{r+1}[V_{\bold{a}}^r(P')]&=\sum_{u_i, v_i\geq 0}\left(\prod_{i=1}^{r+1}(-1)^{u_i}(-\beta)^{v_i}\binom{u_i+\widetilde{\lambda}_i}{v_i}\right)\mathrm{Pf}(M'';\beta) \end{align*} where \begin{equation*} m_{ij}''=T_i^{v_i}T_j^{v_j}m_{ij}'. \end{equation*} Setting $\theta'=2\xi$, we evaluate $m_{ij}''$ via $d_{\lambda_i}(i)=\dfrac{2^{\lambda_i}}{\lambda_i!}\xi^{\lambda_i}$ to obtain the statement. \end{proof} As a corollary we have the class of the pointed Brill-Noether loci in $\mathscr{P}^\pm$ in the Grothendieck group of $\mathscr{P}^\pm$ at $\beta=-1$. \begin{corollary}[$\beta=-1$] The class of $V_{\bold{a}}^r(P')$ (resp. $V_{\bold{a}}^r(P'')$) in the Grothendieck group of coherent sheaves $K_\circ(\mathscr{P}^+)$ (resp. $K_\circ(\mathscr{P}^-)$) is given by \begin{align*} 2^{r+1}[\mathcal{O}_{V_{\bold{a}}^r(P')}]&=\sum_{u_i, v_i\geq 0}\left(\prod_{i=1}^{r+1}(-1)^{u_i}\binom{u_i+\widetilde{\lambda}_i}{v_i}\right)\mathrm{Pf}(M'';-1). \end{align*} \end{corollary} We also have the class of the pointed Brill-Noether loci in the Prym varieties in the Chow group of $\mathscr{P}^\pm$ by specializing at $\beta=0$, as in Corollary \ref{Form2}. Moreover, we can recover the formula for the Brill-Noether loci in Prym varieties \cite[Theorem 9]{CP95} without ramification on $C$, by taking $a_i=i$ and allowing $a_i=n$. \begin{corollary}[$\beta=0$]\label{Form2} Suppose that the dimension of $V_{\bold{a}}^r(P')$ (resp. $V_{\bold{a}}^r(P'')$) is equal to $\rho=g-1-|\lambda|$. Then the numerical equivalence class of the pointed Brill-Noether loci for $\mathscr{P}^\pm$ is \begin{align} 2^{r+1}[V_{\bold{a}}^r(P')]&=\prod_{i=1}^{r+1}\dfrac{1}{{\lambda_i!}}\prod_{i<j}\dfrac{\lambda_i-\lambda_j}{\lambda_i+\lambda_j}\cdot (2\xi)^{|\lambda|}. \end{align} \end{corollary} \begin{proof} We know from Theorem \ref{thy1} that \begin{align*} 2^{r+1}[V_{\bold{a}}^r(P')]&=\mathrm{Pf}_\lambda(d(1),\ldots,d(r+1);0)\in A_*(\mathscr{P}^\pm) \end{align*} at $\beta=0$. With \eqref{Chrn}, the right hand side becomes the Pfaffian of the $(r+1)\times (r+1)$ matrix $(m_{ij})$ for $r+1\geq j>i\geq 1$ where \begin{align}\label{Mij} m_{ij}&=\dfrac{{(2\xi)}^{\lambda_i+\lambda_j}}{(\lambda_i+\lambda_j)!}\left(\binom{\lambda_i+\lambda_j}{\lambda_i}+2\sum_{u>0}(-1)^{u}\cdot\binom{\lambda_i+\lambda_j}{\lambda_i+u}\right).
\end{align} If $r+1$ is odd, the matrix is augmented by $m_{0j}=d_{\lambda_j}(j)$ for $0<j\leq r+1$. Since \[ \sum_{k=0}^{\lambda_i}(-1)^k\binom{\lambda_i+\lambda_j}{k}+\sum_{u=1}^{\lambda_j}(-1)^{u+\lambda_i}\binom{\lambda_i+\lambda_j}{\lambda_i+u}=\sum_{m=0}^{\lambda_i+\lambda_j}(-1)^m\binom{\lambda_i+\lambda_j}{m}=0, \] we have \begin{align*} (-1)^{\lambda_i}\sum_{u=1}^{\lambda_j}(-1)^u\binom{\lambda_i+\lambda_j}{\lambda_i+u}&=-\sum_{k=0}^{\lambda_i}(-1)^k\binom{\lambda_i+\lambda_j}{k}=(-1)^{\lambda_i+1}\binom{\lambda_i+\lambda_j-1}{\lambda_i}. \end{align*} Canceling $(-1)^{\lambda_i}$, we get \begin{equation}\label{eqn:id} \sum_{u=1}^{\lambda_j}(-1)^u\binom{\lambda_i+\lambda_j}{\lambda_i+u}=-\binom{\lambda_i+\lambda_j-1}{\lambda_i}. \end{equation} Plugging equation \eqref{eqn:id} into \eqref{Mij} gives \begin{align*} m_{ij}&=\dfrac{{(2\xi)}^{\lambda_i+\lambda_j}}{\lambda_i!\lambda_j!}\left(\dfrac{\lambda_i-\lambda_j}{\lambda_i+\lambda_j}\right). \end{align*} Using \cite[Appendix D]{FP}, \[ \mathrm{Pf}\left(\dfrac{{(2\xi)}^{\lambda_i+\lambda_j}}{\lambda_i!\lambda_j!}\cdot \dfrac{\lambda_i-\lambda_j}{\lambda_i+\lambda_j}\right) =\prod_{i=1}^{r+1}\dfrac{1}{{\lambda_i!}}\prod_{i<j}\dfrac{\lambda_i-\lambda_j}{\lambda_i+\lambda_j}(2\xi)^{|\lambda|}. \qedhere \] \end{proof} For instance, for $r+1=2$ the Pfaffian reduces to the single entry $m_{12}$, and the corollary reads $4\,[V_{\bold{a}}^1(P')]=\dfrac{1}{\lambda_1!\,\lambda_2!}\cdot\dfrac{\lambda_1-\lambda_2}{\lambda_1+\lambda_2}\,(2\xi)^{\lambda_1+\lambda_2}$. While working on this paper, the author learned through private conversation with David Anderson that Corollary \ref{Form2} was found independently in \cite[Theorem 1]{Tarasca}. We take this occasion to provide a more detailed proof, as the proof of \cite[Theorem 1]{Tarasca} was only sketched. \section{Euler Characteristics}\label{sec5} In this section we provide formulas for the Euler characteristic of the pointed Brill-Noether loci $V^r_{\bold{a}}(P')$ (resp. $V^r_{\bold{a}}(P'')$) in the Prym variety $\mathscr{P}^{+}$ (resp. $\mathscr{P}^{-}$) for a fixed sequence \[ \bold{a}=(0\leq a_0<\cdots<a_r\leq 2g-2). \] We employ the Hirzebruch-Riemann-Roch theorem to find the Euler characteristic of the Brill-Noether loci for Prym varieties. Since the Todd class of $\mathrm{Pic}^{2g-2}(\widetilde{C})$ is trivial, we have \[\chi\left(\mathcal{O}_{V^r_{\bold{a}}(P')}\right)=\int_{\mathrm{Pic}^{2g-2}(\widetilde{{C}})}\mathrm{ch}\left(\pi_*\left[\mathcal{O}_{V^r_{\bold{a}}(P')}\right]\right)\quad\left(\text{resp.}\;\chi\left(\mathcal{O}_{V^r_{\bold{a}}(P'')}\right)\right).\] The following lemma is used to compute the Euler characteristic: \begin{lemma}[\cite{ACT21}]\label{lem:char} For a rank $n$ vector bundle $E$, if $\mathrm{ch}(E)_i=0$ for $i>1$, then $\mathrm{ch}(c_i^K(E))=c_i(E)$, where $c^K$ denotes the K-theory Chern class. \end{lemma} By virtue of Lemma \ref{lem:char} and \eqref{eqn:chern}, we get \[ \mathrm{ch}\left(c^K_j((\mathcal{W}_i)_{\mathscr{P}^\pm}^\vee)\right)=\dfrac{(\theta')^j}{j!} \] and thus \[ \mathrm{ch}\left(d^K_j(i)\right)=\dfrac{(\theta')^j}{j!}. \] Putting everything together, we have the following theorem for the Euler characteristic of $\mathcal{O}_{V^r_{\bold{a}}(P')}$ (resp. $\mathcal{O}_{V^r_{\bold{a}}(P'')}$). We recall $\widetilde{\lambda}_i={r+2}-i-\lambda_i$. \begin{theorem} Let $({C}, P)$ be a smooth curve of genus $g$ with the vanishing sequence $\bold{a}$ at $P\in C$. Let $\pi:\widetilde{{C}}\rightarrow {C}$ be an \'{e}tale double cover such that $P'$, $P''$ map to $P$. Suppose that the dimension of $V^r_{\bold{a}}(P')$ (resp. $V^r_{\bold{a}}(P'')$) is $\rho=g-1-|\lambda|$. The Euler characteristic $\chi\left(\mathcal{O}_{V^r_{\bold{a}}(P')}\right)$ (resp.
$\chi\left(\mathcal{O}_{V^r_{\bold{a}}(P'')}\right)$) is equal to \begin{align*} \sum_{u_i, v_i\geq 0}\left(\prod_{i=1}^{r+1}(-1)^{u_i}\right.&\left.\binom{u_i+\widetilde{\lambda}_i}{v_i}\right)\\ \cdot&\left(\sum_{\sigma\in S_{2(r+1)}}\sum_{k\geq0}\sum_{\bold{f}\in A_k^\sigma}\mathrm{sgn}(\sigma)\prod_{j=1}^{r+1}g_{f_{\sigma(2j-1),\sigma(2j)}}^{\sigma(2j-1),\sigma(2j)}\dfrac{{h(\lambda,\bold{v},k)!}}{2^{2(r+1)-h({\lambda,\bold{v}},k)}(r+1)!}\right), \end{align*} where $h(\lambda,\bold{v},k)=|\lambda|+|\bold{v}|+k$ and $|\bold{v}|=v_1+\cdots+v_{r+1}$. In particular, if $\lambda_i=r+2-i$, so that $\widetilde{\lambda}=0$, then we get the Euler characteristic $\chi(\mathcal{O}_{V^r})$ of the Brill-Noether loci in $\mathscr{P}^\pm$. \end{theorem} \begin{proof} Without loss of generality, we choose $P'$ as the point on $\widetilde{C}$ mapping to $P$ on $C$ via $\pi$. By Lemma \ref{lem:char} and the specialization of Theorem \ref{thm:class} at $\beta=-1$, we have $\mathrm{Pf}(M'';-1)$ with \begin{align*} m_{ij}''&=(\theta')^{\lambda_i+\lambda_j+v_i+v_j}\cdot\left(\dfrac{1}{(\lambda_i+v_i)!(\lambda_j+v_j)!}+\sum_{m>0}(-1)^m\dfrac{(\theta')^m}{(\lambda_i+m+v_i)!(\lambda_j+v_j)!}\right.\\ &\left.+\sum_{\ell>0}\sum_{m\geq 0}(-1)^\ell\left(\binom{\ell+m-1}{m}+\binom{\ell+m}{m}\right)(-1)^m\dfrac{(\theta')^m}{(\lambda_i+\ell+m+v_i)!(\lambda_j-\ell+v_j)!}\right), \end{align*} so that \begin{align*} \chi\left(\mathcal{O}_{V^r_{\bold{a}}(P')}\right)= \dfrac{1}{2^{r+1}}\sum_{u_i, v_i\geq 0}\left(\prod_{i=1}^{r+1}(-1)^{u_i}\binom{u_i+\widetilde{\lambda}_i}{v_i}\right)\int_{\mathrm{Pic}^{2g-2}(\widetilde{C})}\mathrm{Pf}(M'';-1). \end{align*} \noindent We define $\left\{g_m^{i,j}\right\}_{m\geq 0}$ by \[ g_0^{i,j}=\dfrac{1}{(\lambda_i+v_i)!(\lambda_j+v_j)!}+\sum_{\ell>0}(-1)^\ell\dfrac{2}{(\lambda_i+\ell+v_i)!(\lambda_j-\ell+v_j)!} \] and \begin{align*} g_m^{i,j}&=(-1)^m\left(\dfrac{1}{(\lambda_i+m+v_i)!(\lambda_j+v_j)!}\right.\\ &\left.+\sum_{\ell>0}(-1)^\ell\left(\binom{\ell+m-1}{m}+\binom{\ell+m}{m}\right)\dfrac{1}{(\lambda_i+\ell+m+v_i)!(\lambda_j-\ell+v_j)!}\right)\;\text{for}\; m>0. \end{align*} This leads to $$m_{i,j}''=(\theta')^{\lambda_i+\lambda_j+v_i+v_j}\cdot\sum_{f_{i,j}\geq 0}g_{f_{i,j}}^{i,j}(\theta')^{f_{i,j}}.$$ For $\sigma\in S_{2(r+1)}$, we define $\hat{f}(\sigma):=\sum_{j=1}^{r+1}f_{\sigma(2j-1),\sigma(2j)}$ for a collection of powers $\bold{f}:=\{f_{i,j}\}_{1\leq i,j\leq r+1}$ of $\theta'$ appearing in $m''_{i,j}$. Let $A_i^\sigma=\left\{\bold{f}\;|\;\hat{f}(\sigma)=i\right\}$. Then the Pfaffian $\mathrm{Pf}(M'';-1)$ can be expressed as \begin{align*} \mathrm{Pf}(M'';-1)&=\dfrac{(\theta')^{|\lambda|+|{\bold{v}}|}}{2^{r+1}(r+1)!}\sum_{\sigma\in S_{2(r+1)}}\sum_{i\geq0}\sum_{\bold{f}\in A_i^\sigma}\mathrm{sgn}(\sigma)\prod_{j=1}^{r+1}g_{f_{\sigma(2j-1),\sigma(2j)}}^{\sigma(2j-1),\sigma(2j)}(\theta')^i. \end{align*} Replacing $\theta'$ by $2\xi$ and using the Poincar\'e formula $\int \xi^\kappa=\kappa!$, we obtain the theorem. \end{proof} \begin{acknowledgments} We are grateful to David Anderson for his encouragement, invaluable comments and suggestions. We wish to thank William Graham for helpful discussions. We also would like to express our gratitude to the Department of Mathematics at the University of Georgia for its support. \end{acknowledgments} \bibliographystyle{amsplain} \begin{bibdiv} \begin{biblist} \bib{A}{article}{ author={Anderson, David}, title={$K$-theoretic Chern class formulas for vexillary degeneracy loci}, journal={Adv.
Math.}, volume={350}, date={2019}, pages={440--485}, } \bib{A19}{article}{ author={Anderson, David}, title={Corrigendum to ``$K$-theoretic Chern class formulas for vexillary degeneracy loci'' [Adv. Math. 350 (2019) 440--485]}, journal={Adv. Math.}, volume={356}, date={2019}, pages={106812, 3}, } \bib{ACT21}{article}{ author={Anderson, David}, author={Chen, Linda}, author={Tarasca, Nicola}, title={$K$-classes of Brill-Noether loci and a determinantal formula}, journal={Int. Math. Res. Not. IMRN}, date={2021}, } \bib{AF18}{article}{ author={Anderson, David}, author={Fulton, William}, title={Chern class formulas for classical-type degeneracy loci}, journal={Compos. Math.}, volume={154}, date={2018}, number={8}, pages={1746--1774}, } \bib{ACGH}{book}{ author={Arbarello, E.}, author={Cornalba, M.}, author={Griffiths, P. A.}, author={Harris, J.}, title={Geometry of algebraic curves. Vol. I}, series={Grundlehren der mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]}, volume={267}, publisher={Springer-Verlag, New York}, date={1985}, pages={xvi+386}, } \bib{Cai}{article}{ author={Cai, Shuang}, title={Algebraic connective $K$-theory and the niveau filtration}, journal={J. Pure Appl. Algebra}, volume={212}, date={2008}, number={7}, pages={1695--1715}, } \bib{CLPT}{article}{ author={Chan, Melody}, author={L\'{o}pez Mart\'{\i}n, Alberto}, author={Pflueger, Nathan}, author={Teixidor i Bigas, Montserrat}, title={Genera of Brill-Noether curves and staircase paths in Young tableaux}, journal={Trans. Amer. Math. Soc.}, volume={370}, date={2018}, number={5}, pages={3405--3439}, } \bib{CP95}{article}{ author={De Concini, Corrado}, author={Pragacz, Piotr}, title={On the class of Brill-Noether loci for Prym varieties}, journal={Math. Ann.}, volume={302}, date={1995}, number={4}, pages={687--697}, } \bib{DL}{article}{ author={Dai, Shouxin}, author={Levine, Marc}, title={Connective algebraic $K$-theory}, journal={J. K-Theory}, volume={13}, date={2014}, number={1}, pages={9--56}, } \bib{EG95}{article}{ author={Edidin, Dan}, author={Graham, William}, title={Characteristic classes and quadric bundles}, journal={Duke Math. J.}, volume={78}, date={1995}, number={2}, pages={277--299}, } \bib{EH}{article}{ author={Eisenbud, David}, author={Harris, Joe}, title={The Kodaira dimension of the moduli space of curves of genus $\geq 23$}, journal={Invent. Math.}, volume={90}, date={1987}, number={2}, pages={359--387}, } \bib{Farkas}{article}{ author={Farkas, Gavril}, title={Prym varieties and their moduli}, conference={ title={Contributions to algebraic geometry}, }, book={ series={EMS Ser. Congr. Rep.}, publisher={Eur. Math. Soc., Z\"{u}rich}, }, date={2012}, pages={215--255}, } \bib{FaRa}{article}{ author={Farkas, H. M.}, author={Rauch, H. E.}, title={Period relations of {Schottky} type on {Riemann} surfaces}, journal={Ann. of Math. (2)}, volume={92}, pages={434--461}, date={1970}, } \bib{FP}{book}{ author={Fulton, William}, author={Pragacz, Piotr}, title={Schubert varieties and degeneracy loci}, series={Lecture Notes in Mathematics}, volume={1689}, note={Appendix J by the authors in collaboration with I. Ciocan-Fontanine}, publisher={Springer-Verlag, Berlin}, date={1998}, pages={xii+148}, } \bib{H82}{article}{ author={Harris, Joe}, title={Theta-characteristics on algebraic curves}, journal={Trans. Amer. Math.
Soc.}, volume={271}, date={1982}, number={2}, pages={611--638}, } \bib{HIMN}{article}{ author={Hudson, Thomas}, author={Ikeda, Takeshi}, author={Matsumura, Tomoo}, author={Naruse, Hiroshi}, title={Degeneracy loci classes in $K$-theory---determinantal and Pfaffian formula}, journal={Adv. Math.}, volume={320}, date={2017}, pages={115--156}, } \bib{HIMN20}{article}{ author={Hudson, Thomas}, author={Ikeda, Takeshi}, author={Matsumura, Tomoo}, author={Naruse, Hiroshi}, title={Double Grothendieck polynomials for symplectic and odd orthogonal Grassmannians}, journal={J. Algebra}, volume={546}, date={2020}, pages={294--314}, } \bib{KL}{article}{ author={Kleiman, Steven L.}, author={Laksov, Dan}, title={Another proof of the existence of special divisors}, journal={Acta Math.}, volume={132}, date={1974}, pages={163--176}, } \bib{Ma}{article}{ author={Mattuck, Arthur}, title={On symmetric products of curves}, journal={Proc. Amer. Math. Soc.}, volume={13}, date={1962}, pages={82--87}, } \bib{M71}{article}{ author={Mumford, David}, title={Theta characteristics of an algebraic curve}, journal={Ann. Sci. \'{E}cole Norm. Sup. (4)}, volume={4}, date={1971}, pages={181--192}, } \bib{Mum}{article}{ author={Mumford, David}, title={Prym varieties. I}, conference={ title={Contributions to analysis (a collection of papers dedicated to Lipman Bers)}, }, book={ publisher={Academic Press, New York}, }, date={1974}, pages={325--350}, } \bib{PP}{article}{ author={Parusi\'{n}ski, Adam}, author={Pragacz, Piotr}, title={Chern-Schwartz-MacPherson classes and the Euler characteristic of degeneracy loci and special divisors}, journal={J. Amer. Math. Soc.}, volume={8}, date={1995}, number={4}, pages={793--817}, } \bib{SJ}{article}{ author={Schottky, F.}, author={Jung, H.}, title={Neue {S\"{a}tze} {\"u}ber {Symmetralfunktionen} und die {Abel}schen Funktionen der {Riemann}schen {Theorie}}, journal={Sitzungsberichte der K\"{o}niglich Preussischen Akademie der Wissenschaften}, volume={1909}, pages={282--297, 732--750}, date={1909}, } \bib{Tarasca}{article}{ author={Tarasca, Nicola}, title={A pointed Prym-Petri Theorem}, journal={arXiv:2202.05284, to appear in Trans. Amer. Math. Soc.}, date={2022}, } \bib{W85}{article}{ author={Welters, Gerald E.}, title={A theorem of Gieseker-Petri type for Prym varieties}, journal={Ann. Sci. \'{E}cole Norm. Sup. (4)}, volume={18}, date={1985}, number={4}, pages={671--683}, } \bib{Wi}{book}{ author={Wirtinger, W.}, title={Untersuchungen {\"u}ber {Thetafunctionen}}, publisher={B. G. Teubner, Leipzig}, date={1895}, } \end{biblist} \end{bibdiv} \end{document}
{ "timestamp": "2022-09-20T02:23:03", "yymm": "2209", "arxiv_id": "2209.08700", "language": "en", "url": "https://arxiv.org/abs/2209.08700" }
\section{Introduction} While recent state-of-the-art hate-speech classifiers~\citep{AYO2021114762, dsa-etal-2020-towards,mozafari2019bert} yield impressive performance on in-domain held-out instances, they suffer when evaluated in out-of-domain settings \citep{yin2021generalisable,10.1145/3331184.3331262,swamy-etal-2019-studying,karan-snajder-2018-cross}. The distributions across corpora$/$domains\footnote{We use the terms `corpus' and `domain' interchangeably.} change due to varying vocabulary, topics of discussion over time~\citep{app10124180,10.1145/2124295.2124376}, data bias caused by sampling strategies \citep{wiegand-etal-2019-detection} and different hate-targets. This is concerning since curating new data resources for hate-speech involves substantial time and effort \citep{Poletto2019AnnotatingHS,malmasi:2018:profanity}. This calls for strategies, like Domain Adaptation (DA) approaches, that can adapt models trained on existing labeled resources to a new target domain that lacks class-labels. However, research on DA in hate-speech is limited \citep{Sarwar2022UnsupervisedDA,bashar2021progressive,bose-etal-2021-unsupervised}. Typically, vanilla classifiers tend to learn more from domain-specific features \citep{YE202161,wiegand-etal-2019-detection} than domain-invariant features, resulting in poor out-of-domain performance. For instance, \citet{wiegand-etal-2019-detection} show that in a hate-speech dataset \citep{waseem}, neutral domain-specific terms, like `\textit{football}', `\textit{commentator}', etc., discussing the role of women in sports, are highly correlated with the hate label, restricting its generalizability. Thus, it is worth minimizing the importance of such terms for improving cross-domain performance. Recently, feature attributions -- methods for extracting post-hoc model explanations -- have been used to align features with prior domain knowledge \citep{pmlr-v119-rieger20a, NEURIPS2020_075b051e}. These provide importance scores to the input terms as per their contribution towards the model prediction \citep{NIPS2017_8a20a862}. For instance, \citet{liu-avci-2019-incorporating,kennedy-etal-2020-contextualizing} reduce the over-sensitivity of classifiers to a curated list of identity terms (e.g. \textit{muslims}, \textit{gay}) by penalizing their importance. However, newly emerging social-media terms \citep{Grieve2018MappingLI} may render such lists non-exhaustive. \citet{yao2021refining} do not use any list, but they require human-provided refinement advice as inputs. \citet{chrysostomou2022empirical} further show that post-hoc explanation methods might not provide faithful explanations in out-of-domain settings. The contemporaneous works by \citet{attanasio-2022-entropy} and \citet{D-Ref} automatically reduce lexical overfitting with entropy-based attentions and feature attributions, respectively. While cross-domain classification performance across different datasets is not studied in the former, the latter needs some labeled target instances to identify the over-fitted terms. In the task of detecting objects in images, \citet{zunino2021explainable} use a domain classifier, trained to differentiate between domains, to visually identify irrelevant background information as domain-specific. They then enforce the model explanations to align with the ground-truth annotations highlighting the objects in the image.
Inspired by this, we propose a new DA approach in hate-speech employing a domain classifier, but without having access to such annotations for aligning the attribution scores. We hypothesize that domain-specific terms that are simultaneously predictive of the hate-speech labels are instrumental in restricting the domain invariance of the hate-speech classifier. To this end, we employ a domain classifier to automatically extract the terms that help in identifying the source domain compared to the \textit{unlabeled} target domain, and feature-attribution scores to identify the subset of those terms that is important for hate-speech classification on the source. \textit{Our method, through penalization of these terms, automatically forces the source-domain classifier to focus on domain-invariant content. } Compared to approaches transforming high-dimensional intermediate representations to reduce the domain discrepancy, such as domain adversarial learning \cite{Ryu2020KnowledgeDF,tzeng2017adversarial}, our approach makes the adaptation more explainable, while improving the overall cross-domain performance compared to prior approaches. \section{Proposed Approach} Given training data from a labeled source domain $D_S^{train}$ and an unlabeled target domain $D_T^{train}$, our approach for DA in hate-speech involves two steps: (i) extraction of source-specific terms and (ii) reducing the importance of these terms. Our setting is similar to \citet{ben-david-etal-2020-perl} and \citet{Ryu2020KnowledgeDF}. \subsection{Extraction of Source-specific Terms} \paragraph{Domain classification} To identify source-specific terms, we first train a binary domain classifier using $D_S^{train}$ and $D_T^{train}$ that learns to identify whether a candidate instance comes from the source or the target domain. For this, we use a simple Logistic Regression (LR) with bag-of-words, as it is inherently interpretable (a minimal sketch is given below). We then use its feature weights to extract the top $N$ most important terms for predicting the source domain class. Each term is tokenized with the BERT \citep{devlin-etal-2019-bert} WordPiece tokenizer for compatibility with transformer models. The top $N$ terms obtained through domain classification are denoted as $S_{LR}$. \paragraph{Attribution-based term ranking} Intuitively, the terms from $S_{LR}$ that also contribute highly to the hate-speech labels are likely to restrict generalization to the target, as they could potentially reduce the importance assigned by the classifier to domain-invariant hate-speech terms. Thus, we extract only those source-specific terms that are highly correlated with the labels, given the binary classification task of \textit{hate} versus \textit{non-hate}. To this end, we first continue pre-training BERT on the unlabeled $D_T^{train}$ using the Masked Language Model (MLM) objective for incorporating the language variations of the target domain, following \citet{glavas-etal-2020-xhate}. We then perform supervised classification on $D_S^{train}$ using this MLM trained model. After every epoch, we obtain 2 ranked lists of terms for the two classes, sorted in the order of decreasing importance. We construct the lists using feature attribution methods that yield instance-level attribution scores $\text{ins\mbox{-}atr}^j_{te}$ per term $te$ in an instance $j$ -- a higher score indicating a higher contribution to the predicted class. We discard the scores of stop-words and the infrequent terms, and normalize $\text{ins\mbox{-}atr}^j_{te}$ using the sigmoid function.
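For concreteness, the domain-classification step above can be sketched as follows. This is a minimal illustration assuming scikit-learn; function and variable names are ours for exposition, and the subsequent WordPiece tokenization of the extracted terms is omitted:
\begin{verbatim}
# Minimal sketch of extracting the top-N source-specific terms S_LR
# with a bag-of-words Logistic Regression domain classifier.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

def extract_S_LR(source_texts, target_texts, N=100):
    vectorizer = CountVectorizer(lowercase=True)
    X = vectorizer.fit_transform(source_texts + target_texts)
    # label 1 = source domain, label 0 = target domain
    y = np.array([1] * len(source_texts) + [0] * len(target_texts))
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    weights = clf.coef_[0]           # one weight per vocabulary term
    vocab = np.array(vectorizer.get_feature_names_out())
    # largest positive weights = most indicative of the source domain
    return vocab[np.argsort(weights)[::-1][:N]].tolist()
\end{verbatim}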
For obtaining a corpus-level class-specific attribution score $\text{cp\mbox{-}atr}_{te}^c$ per term $te$ and per class $c$, we perform a corpus-level average of all the $\text{ins\mbox{-}atr}^j_{te}$ for every $c$ using Equation \ref{eq1}. \begin{equation}\label{eq1} \resizebox{0.85\columnwidth}{!}{% $\text{cp\mbox{-}atr}_{te}^c = \frac{\sum_{j=1}^{|D_S^{train}|}\mathbbm{1}_{\hat{y_j}=c}\sum_{\text{occurrences of $te$ in $j$}}\text{ins\mbox{-}atr}_{te}^j}{\sum_{j=1}^{|D_S^{train}|}\mathbbm{1}_{\hat{y_j} = c}\, \#(\text{occurrences of $te$ in $j$})}$% } \end{equation} Here $c$ $\in$ \{hate, non-hate\}, $\hat{y}$ is the predicted class and $\mathbbm{1}$ is the indicator function. We sort the scores $\text{cp\mbox{-}atr}_{te}^c$ for all $te$ from the highest attributed (i.e. most important) term per class to the lowest, yielding the ranked lists of terms per class, given by $\text{CP} = [{cp\mbox{-}hate}, {cp\mbox{-}non\mbox{-}hate}]$. We extract the source-specific terms $\text{te}^S$ that are common to both $S_{LR}$ and the top $M$ terms from $\text{CP}$, i.e. $\text{te}^S = \{te \;|\; te \in S_{LR} \text{ and } {te} \in top_M(\text{CP})\}$. These steps are repeated after every epoch. Note that the list $S_{LR}$ remains constant across the epochs, as it is independent of the hate-speech classification task. \subsection{Penalization of Source-specific Terms} We hypothesize that penalizing $\text{te}^S$ obtained from the previous epoch during the next epoch should reduce the importance of terms that are both (i) domain-specific and (ii) contribute highly to the source labels, and thus, help learn from domain invariant terms. We minimize the attribution scores for $\text{te}^S$, with $L_2$ penalization, in Equation \ref{eq2}. \begin{equation}\label{eq2} \resizebox{0.85\hsize}{!}{% $\mathcal{L} = \mathcal{L^{'}} + \lambda \mathcal{L}_{\textrm{atr}};\quad \mathcal{L}_\textrm{atr} = \sum \limits_{\text{t}\in \text{te}^S} \phi \left(\text{t}\right)^{2}$% } \end{equation} Here $\mathcal{L^{'}}$ is the classification loss and $\mathcal{L_{\text{atr}}}$ is the attribution loss. $\lambda$ controls the strength of penalization, and $\phi \left(\text{t}\right)$ is the attribution score for t. We experiment with two variations: (i) \textbf{Dom-spec:} penalizing only the terms in $\text{te}^S$; (ii) \textbf{Comb}: penalizing the combination of $\text{te}^S$ and the terms from \citet{liu-avci-2019-incorporating,kennedy-etal-2020-contextualizing}. We use two different feature attribution methods that have been widely used in recent studies~\citep{chrysostomou-aletras-2021-improving,chrysostomou2022flexible}: (i) \textbf{Scaled Attention ($\alpha \nabla \alpha$)} \cite{serrano-smith-2019-attention}, which scales attention weights $\alpha$ by their corresponding gradients $\nabla\alpha_i = \frac{\delta \hat{y}}{\delta \alpha_i}$, where $\hat{y}$ is the predicted label, and is shown to work better than using only the attention weights; (ii) \textbf{DeepLIFT (DL)} \cite{pmlr-v70-shrikumar17a}, which assigns scores based on the difference between the activation of each neuron and a reference activation (zero embedding vector). Note that although \newcite{liu-avci-2019-incorporating} have used Integrated Gradients (IG) \cite{pmlr-v70-sundararajan17a}, we use DL as it is most often a good and faster approximation of IG \cite{DBLP:conf/iclr/AnconaCO018}.
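To make Equations \ref{eq1} and \ref{eq2} concrete, we give a minimal PyTorch-style sketch below; the tensor shapes, names and masking scheme are illustrative assumptions rather than a faithful excerpt of our implementation:
\begin{verbatim}
# Sketch of the corpus-level score (Eq. 1) and the penalized loss (Eq. 2).
import torch

def corpus_level_score(sum_ins_atr, num_occurrences):
    # sum_ins_atr: per-term sums of the sigmoid-normalized instance-level
    #   attributions over instances predicted as class c
    # num_occurrences: per-term occurrence counts in those instances
    return sum_ins_atr / num_occurrences               # cp-atr (Eq. 1)

def penalized_loss(clf_loss, attributions, penalty_mask, lam):
    # attributions: (batch, seq_len) attribution scores phi(t)
    # penalty_mask: (batch, seq_len), 1 at WordPiece positions of te^S
    l_atr = ((attributions * penalty_mask) ** 2).sum() # L_atr (Eq. 2)
    return clf_loss + lam * l_atr
\end{verbatim}
Here the attribution scores are produced by the chosen method ($\alpha \nabla \alpha$ or DL), and the mask marks the tokens of $\text{te}^S$ extracted in the previous epoch.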
\section{Experimental Setup} \subsection{Data} We use three standard hate-speech datasets, namely, \textit{Waseem} \cite{waseem}, \textit{HatEval} \cite{basile-etal-2019-semeval} and \textit{Vidgen} \cite{vidgen-etal-2021-learning}. Following \citet{wiegand-etal-2019-detection,swamy-etal-2019-studying}, we perform hate\slash non-hate classification across domains. We use the standard splits available for \textit{HatEval} (42.1\% hate; train: 8993\footnote{The instances containing only URLs are removed, decreasing the number of train instances from 9000 to 8993.}, val: 1000, test: 3000) and \textit{Vidgen} (54.4\% hate; train: 32497, val: 1016, test: 4062). We sub-sample the \textit{Vidgen} validation set to 25\% of its original size to get 1016 samples, making its size similar to that of the other datasets. We split \textit{Waseem} (26.8\% hate) randomly into train (80\%; 8720), validation (10\%; 1090) and test (10\%; 1090) sets, as no standard splits are available (a minimal sketch of this split is given below). We present the top ten most frequent terms in these datasets in Table \ref{Top_freq}. The \textit{Waseem} dataset is known to comprise a high proportion of implicit hate \citep{wiegand-etal-2019-detection}, i.e. subtle expressions of hate without the use of profanity. This is also evident in the most frequent terms from this dataset. In Table \ref{Top_freq}, \textit{\#mkr} refers to a cooking show which frequently results in sexist comments targeted towards the participating women. \textit{HatEval} involves hate against women and immigrants. Many hateful tweets against immigrants occurred in the context of the US-Mexico border issues with the hashtag \textit{\#buildthewall}. The \textit{Vidgen} dataset is collected through a dynamic data creation process with a human-and-model-in-the-loop strategy, unlike the \textit{HatEval} and \textit{Waseem} datasets, which are sampled from Twitter. In particular, the \textit{Vidgen} dataset involves hate against many different target groups or identity terms, with a wide variety of topics and hateful forms. See Appendix \ref{sec:append-Data} for further details on the datasets.
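The \textit{Waseem} split can be reproduced along the following lines (a sketch under the assumption that a plain seeded random split suffices; names are illustrative):
\begin{verbatim}
# Minimal sketch of the random 80/10/10 split used for Waseem.
from sklearn.model_selection import train_test_split

def split_waseem(texts, labels, seed=42):
    # 80% train, 20% held out
    X_train, X_rest, y_train, y_rest = train_test_split(
        texts, labels, test_size=0.2, random_state=seed)
    # split the held-out 20% evenly into validation and test
    X_val, X_test, y_val, y_test = train_test_split(
        X_rest, y_rest, test_size=0.5, random_state=seed)
    return (X_train, y_train), (X_val, y_val), (X_test, y_test)
\end{verbatim}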
\begin{table}[!h] \centering \begin{tabularx}{\columnwidth}{p{1.65cm}| p{5.2cm}} \hline \textbf{Dataset} & \textbf{Frequent terms in the datasets} \\ \hline \textit{Waseem} & \#mkr, \#notsexist, kat, women, like, andre, get, people, one, think \\ \hline \textit{HatEval} & b*tch, women, refugees, \#buildthewall, immigrant, immigration, illegal, men, migrants, h*e \\ \hline \textit{Vidgen} & people, black, women, f*cking, like, love, think, white, get, want \\ \hline \end{tabularx} \caption{\label{Top_freq} Top ten most frequent terms in the datasets after removing the stop-words.} \end{table} \begin{table*}[!t] \scriptsize \centering \begin{tabularx}{\textwidth}{p{4cm}|p{1.25cm}|p{1.25cm}|p{1.25cm}|p{1.25cm}|p{1.25cm}|p{1.25cm}|c} \toprule \textbf{Approaches} & \textbf{H \textrightarrow V} & \textbf{V \textrightarrow H} & \textbf{H \textrightarrow W} & \textbf{W \textrightarrow H} & \textbf{V \textrightarrow W}&\textbf{W \textrightarrow V} & \multicolumn{1}{c}{\textbf{Average}} \\ \hline BERT Van-MLM-FT & 56.6$\pm$1.3& 66.2$\pm$1.2& 70.0$\pm$2.5&50.9$\pm$2.1& 61.4$\pm$2.4&43.5$\pm$1.9& 58.1\\ \newcite{liu-avci-2019-incorporating} & 45.1$\pm$4.5 & 59.5$\pm$0.7 & 57.2$\pm$3.8 & 52.6$\mbox{*}\pm$0.8 &57.1$\pm$2.7 & 39.6$\pm$2.0& 51.9\\ MLM + \newcite{kennedy-etal-2020-contextualizing} (a) & 55.4$\pm$2.0 & 65.5$\pm$0.8 & 64.1$\pm$1.4 & \underline{54.4}$\mbox{*}\pm$1.3 & 59.2$\pm$1.8 & 44.5$\pm$2.9 & 57.2\\ MLM + \newcite{kennedy-etal-2020-contextualizing} (b) & 54.9$\pm$2.9 & 65.7$\pm$0.9 & 67.3$\pm$1.2 & 54.3$\mbox{*}\pm$2.2 & 62.3$\pm$2.7 & 46.6$\pm$3.5 & 58.5 \\ BERT PERL& 54.1$\pm$0.7 &60.0$\pm$0.6 &60.1$\pm$2.0 &\textbf{55.2}$\mbox{*}\pm$0.7 &55.5$\pm$1.0 &37.8$\pm$1.2 & 53.8 \\ BERT-AAD& 56.6$\pm$1.3 & 53.9$\pm$3.5 & 68.8$\pm$2.5 &50.7$\pm$1.4& 48.3$\pm$4.7 &\textbf{53.0}$\mbox{*}\pm$1.7& 55.2\\ HATN & 48.4$\pm$1.6 & 59.1$\pm$0.4 & 59.7$\pm$2.9 & 51.4$\pm$1.8 & 60.0$\pm$2.6 & 45.4$\pm$2.7 & 54.0\\ MLM + \newcite{Sarwar2022UnsupervisedDA} & 55.0$\pm$1.9 & 66.2$\pm$2.0 & 68.8$\pm$1.1 & 48.2$\pm$3.1 & 57.9$\pm$1.3 & 36.2$\pm$1.1 & 55.4\\ MLM + \citet{attanasio-2022-entropy} & 54.9$\pm$1.6 & 66.5$\pm$1.4 & 64.1$\pm$5.0& 52.4$\mbox{*}\pm$3.7 & 62.5$\pm$0.8 & 43.5$\pm$2.3& 57.3\\ MLM + ${\chi}^{2}$-test & 57.9$\pm$1.6 & 67.1$\pm$1.7 & 69.8$\pm$0.8 & 48.2$\pm$3.1 & 60.4$\pm$2.8 & 44.1$\pm$3.4 & 57.9 \\\hline Pre-def ($\alpha \nabla \alpha$) & \textbf{58.9}$\mbox{*}\pm$0.7 & \underline{67.4}$\pm$1.5 & \underline{71.3}$\pm$1.0 & 48.9$\pm$4.0 & 60.0$\pm$2.0 & 46.5$\pm$4.9 & 58.8 \\ Dom-spec ($\alpha \nabla \alpha$) & 58.3$\pm$1.8 & 66.8$\pm$0.7 & 70.1$\pm$1.8 & 52.3$\mbox{*}\pm$3.0 & 60.8$\pm$2.2 & 46.9$\mbox{*}\pm$2.5 & 59.2\\ Comb (Dom-spec + Pre-def) ($\alpha \nabla \alpha$) & 58.7$\mbox{*}\pm$2.1 & \textbf{67.7}$\pm$1.0 & 70.9$\pm$1.0 & 51.5$\pm$2.1 & 59.8$\pm$1.5 & 45.9$\pm$3.1 & 59.1 \\ \hline Pre-def (DL) & 58.5$\mbox{*}\pm$1.4 & 66.5$\pm$1.3 & 70.3$\pm$1.7 & 51.2$\pm$1.7 & \textbf{70.3}$\mbox{*}\pm$0.5 & 42.7$\pm$2.0 & 59.9\\ Dom-spec (DL) & \underline{58.8}$\mbox{*}\pm$0.6 & 66.4$\pm$1.2 & \textbf{72.2}$\pm$1.4 & 52.9$\mbox{*}\pm$1.9 & 63.6$\mbox{*}\pm$2.0 & \underline{48.8}$\mbox{*}\pm$4.7 & \underline{60.5}\\ Comb (Dom-spec + Pre-def) (DL) & 58.4$\pm$1.4 & 66.7$\pm$1.0 & \underline{71.3}$\pm$0.9 & 51.1$\pm$2.2 & \underline{69.5}$\mbox{*}\pm$2.2 & 46.6$\pm$1.9 & \textbf{60.6} \\ \bottomrule \end{tabularx} \caption{\label{Results_DA} Macro-F1 ($\pm$std-dev) on source \textrightarrow target pairs. H: HatEval, V: Vidgen, W: Waseem. 
\textbf{Bold} denotes the best score and \underline{underline} the second best in each column. $\mbox{*}$ denotes statistically significant improvement compared to Van-MLM-FT with paired bootstrap \protect\citep{dror-etal-2018-hitchhikers, efron1994introduction}, 95\% confidence interval.} \end{table*} \subsection{Baselines} We compare our work with approaches that penalize \textbf{(i)} pre-defined terms in the Convolutional Neural Network-based \citet{liu-avci-2019-incorporating}\footnote{with Integrated Gradients \cite{pmlr-v70-sundararajan17a}}; \textbf{(ii)} (a) the identity terms in the top features of a bag-of-words Logistic Regression in BERT-based \citet{kennedy-etal-2020-contextualizing}\footnote{with Sampling and Occlusion \cite{Jin2020Towards}}, (b) all the terms listed by \citet{kennedy-etal-2020-contextualizing}; \textbf{(iii)} terms extracted automatically by \citet{attanasio-2022-entropy}; \textbf{(iv)} the combination of terms from (i) and (ii,b) within BERT, which we call \textbf{Pre-def}. We do not compare with \citet{D-Ref} as they use labeled target instances for term-extraction, which does not allow a fair comparison. Further, we experiment with the Vanilla baseline (\textbf{Van-MLM-FT}), where the pre-trained BERT is adapted to $D_T^{train}$ using the MLM objective, followed by a supervised fine-tuning on $D_S^{train}$. We also assess different DA methods from the sentiment classification task, namely, \textbf{BERT PERL} (Pivot-based Encoder Representation of Language) \cite{ben-david-etal-2020-perl} that adopts the MLM objective of BERT to perform pivot-based fine-tuning; \textbf{BERT-AAD} (Adversarial Adaptation with Distillation) \cite{Ryu2020KnowledgeDF} that performs domain adversarial training; \textbf{HATN} (Hierarchical Attention Transfer Network) \cite{li2018hatn,li2017end} that extracts pivots using a domain adversarial approach. We evaluate a data-augmentation-based approach \cite{Sarwar2022UnsupervisedDA} for DA in hate-speech. For a fair comparison, we use BERT as the underlying model in this approach. Finally, we apply the \textbf{$\bm{\chi}^{2}$-test} with 1 degree of freedom and Yates's correction \citep{jbp:/content/journals/10.1075/ijcl.6.1.05kil}, penalizing, via their DL scores, the terms from $D_S^{train}$ for which the null hypothesis that $D_S^{train}$ and $D_T^{train}$ are random samples of the same larger population is rejected with 95\% confidence. We initialize all the BERT models with MLM adaptation on the target, except for PERL and AAD, which inherently adapt to the target. \subsection{Model training} We train all the models on $D_S^{train}$, use a small amount of the labeled $D_T^{val}$ only for model-selection and hyper-parameter tuning (see Appendix \ref{sec:append-hyper-param}), following \newcite{dai2020adversarial,maharana-bansal-2020-adversarial}, and evaluate on $D_T^{test}$. \section{Results} \subsection{Discussion} Table \ref{Results_DA} displays the macro-F1 scores obtained, in cross-domain settings, averaged across five randomly initialized runs. We use macro-F1 as penalizing $\text{te}^S$ corrects the misclassifications for both the hate and non-hate classes across domains. We observe an overall performance drop, compared to Van-MLM-FT, with the DA approaches originally proposed for sentiment classification, namely, BERT PERL, BERT-AAD and HATN.
This also agrees with \citet{bose-etal-2021-unsupervised}, who analyze the extracted pivots -- terms that are both frequent across domains as well as important for classification with respect to the source -- and find them to be sub-optimal for DA in hate-speech. The approach by \citet{Sarwar2022UnsupervisedDA} also displays an overall drop. They augment the source domain by substituting relevant terms from a different negative emotion dataset with tagged hate-speech related terms from the target domain. We observe that the augmented instances are often incomprehensible after such substitution. Dom-spec yields improvements over all the baselines using both $\alpha \nabla\alpha$ and DL, both independently and in combination (Comb) with Pre-def, where Comb achieves the highest overall performance with DL: 60.6. With DL, Dom-spec yields significantly improved performance in 4/6 cases, compared to 2/6 with Pre-def (DL). This is apparently due to the penalization of relevant source-specific terms that have wider coverage compared to the pre-defined terms in Pre-def. Since the entropy-based attention regularization by \citet{attanasio-2022-entropy} does not use the unlabeled target-domain instances for term-extraction, it may not be optimal for cross-domain settings. The large improvement with Pre-def (DL) for \textit{Vidgen} \textrightarrow \textit{Waseem} (70.3) could be attributed to the fact that \textit{Vidgen} involves a wide variety of identity terms. Thus, penalizing the pre-defined identity terms might result in higher emphasis on more generalizable hate-speech content. While only this particular case drives the high average performance with Pre-def (DL), Dom-spec (DL) performs well \textit{consistently} and yields a higher average score (Dom-spec: 60.5, Comb: 60.6) compared to Pre-def. As discussed by \citet{wiegand-etal-2019-detection}, the \textit{Waseem} dataset includes a high degree of implicit hate. Still, Dom-spec (DL) yields improvements on the \textit{Waseem} dataset when using it as the target domain, compared to Van-MLM-FT. This is reflected in the cases of \textit{HatEval} \textrightarrow \textit{Waseem} and \textit{Vidgen} \textrightarrow \textit{Waseem}. This is most likely because when the source domain-specific terms causing bias are penalized, the model is forced to learn more from the wider contextual meaning of the instances, rather than focusing on individual terms. We believe that this could possibly help in improving the detection of implicit hate in out-of-domain instances, at least to some extent. We leave further investigation in this direction for future work.
\subsection{Qualitative Analysis} \label{quality} \begin{table}[!t] \scriptsize \centering \begin{tabularx}{\columnwidth}{p{3.5cm} p{3.5cm}} \hline \multicolumn{2}{p{7cm}}{\textbf{Non-hate example from the test set of \textit{HatEval} for \textit{Waseem} \textrightarrow \textit{HatEval}}}\\ \hline \textbf{FP with Van-MLM-FT}& \textbf{TN with Dom-spec (DL)} \\ \colorbox{purple!6.5}{\strut Depression} \colorbox{purple!10.332500000000001}{\strut is} \colorbox{purple!17.037499999999998}{\strut a} \colorbox{purple!12.387500000000001}{\strut whole} \colorbox{purple!26.25}{\strut entire} \colorbox{purple!22.0925}{\strut b*tch}&\colorbox{purple!40.2775}{\strut Depression} \colorbox{purple!10.462499999999999}{\strut is} \colorbox{purple!6.5}{\strut a} \colorbox{purple!14.8175}{\strut whole} \colorbox{purple!17.459999999999999}{\strut entire} \colorbox{purple!12.845}{\strut b*tch} \\ \hline\hline \multicolumn{2}{p{7cm}}{\textbf{Hate example from the test set of \textit{Waseem} for \textit{Vidgen} \textrightarrow \textit{Waseem}}}\\ \hline \textbf{FN with Van-MLM-FT}& \textbf{TP with Dom-spec (DL)} \\ ...\colorbox{purple!3.7840448726365254}{\strut good} \colorbox{purple!4.2938604252563035}{\strut to} \colorbox{purple!3.246716608642153}{\strut talk} \colorbox{purple!5.251881344140691}{\strut with} \colorbox{purple!5.269526830373199}{\strut your} \colorbox{purple!2.2169292022134983}{\strut wife} \colorbox{purple!11.139388386596629}{\strut but} \colorbox{purple!3.947978702728717}{\strut it} \colorbox{purple!4.67564563825295}{\strut is} \colorbox{purple!2.9330869037702203}{\strut easier} \colorbox{purple!5.647442713119003}{\strut to} \colorbox{purple!8.7027954731983}{\strut say} \colorbox{purple!3.3074196058220156}{\strut shut} \colorbox{purple!3.3369865197821804}{\strut up} \colorbox{purple!12.490783663333396}{\strut n} \colorbox{purple!2.2838369716795595}{\strut make} \colorbox{purple!3.44304752967348}{\strut me} \colorbox{purple!2.715364395852024}{\strut a} \colorbox{purple!15.507096312347615}{\strut sammich} \colorbox{purple!4.959677882012582}{\strut not} \colorbox{purple!19.012699290712806}{\strut sexist} \colorbox{purple!6.020059029425544}{\strut lol} & ...\colorbox{purple!2.9566797013503043}{\strut good} \colorbox{purple!3.08015149355739}{\strut to} \colorbox{purple!2.634096080752119}{\strut talk} \colorbox{purple!2.56967705477614}{\strut with} \colorbox{purple!1.953125}{\strut your} \colorbox{purple!18.443507326128861}{\strut wife} \colorbox{purple!2.0782927850929185}{\strut but} \colorbox{purple!2.9351668768603196}{\strut it} \colorbox{purple!3.1527046433287054}{\strut is} \colorbox{purple!3.0369479230275926}{\strut easier} \colorbox{purple!2.755965805646203}{\strut to} \colorbox{purple!3.0147749111189075}{\strut say} \colorbox{purple!15.2493779010952375}{\strut shut} \colorbox{purple!3.1487703495208343}{\strut up} \colorbox{purple!2.8359055687020076}{\strut n} \colorbox{purple!6.636417394547612}{\strut make} \colorbox{purple!12.480428916653904}{\strut me} \colorbox{purple!29.966588265522507}{\strut a} \colorbox{purple!35.79314910523797}{\strut sammich} \colorbox{purple!20.749717237950467}{\strut not} \colorbox{purple!19.288859983942686}{\strut sexist} \colorbox{purple!2.763012792154432}{\strut lol} \\ \hline \end{tabularx} \caption{\label{qualit-analysis} Change in attributions with Dom-spec (DL).} \end{table} Table \ref{qualit-analysis} displays examples of False Positives (FP) for \textit{Waseem} \textrightarrow \textit{HatEval} and False Negatives (FN) for \textit{Vidgen} \textrightarrow 
\textit{Waseem}, yielded by Van-MLM-FT for the respective target domain instances, which are correctly classified by Dom-spec (DL), where the hate class is the positive class. The darker the shades, the higher the attributions assigned by the source classifier. The examples suggest that penalizing source-specific terms results in placing more emphasis on the general contextual meaning of the out-of-domain instances, such as `depression' in the first example and `wife...shut...make me a sammich' in the second. Note that the terms in these examples from the target domain that receive reduced importance with Dom-spec, compared to Van-MLM-FT, may not be the same terms that are extracted and penalized. This is because the domain classification step results in obtaining terms that are more likely to be infrequent in the target domain. Rather, due to the penalization of source-specific terms, the source domain classifier learns to focus on the wider context of the instances. For example, we observe that in the case of \textit{Waseem} \textrightarrow \textit{HatEval}, the automatically extracted $\text{te}^S$ includes terms related to the role of women in sports, such as \{\textit{sports, sexist, gaming, football, commentary, competition, ...}\}. Note that \citet{wiegand-etal-2019-detection} also mention that these terms cause domain or topic bias in \textit{Waseem}, restricting generalizability. See Appendix \ref{sec:append-tokens} for more examples. \section{Conclusion} We proposed a DA approach for the automatic extraction and penalization of source domain-specific terms that have higher attributions towards the hate-speech labels, to improve cross-domain hate-speech detection. The results demonstrated consistent improvements on the target domain. These results should motivate further research on domain adaptation in hate-speech and on building classifiers that can generalize well to the concept of hate. Finally, it would be interesting to apply our approach to other tasks such as rumor and misinformation detection~\citep{10.7717/peerj-cs.325,10.1145/3501247.3531559}. \section*{Ethical Considerations} This work serves as a means to build more robust hate-speech detection models that can make proper use of the existing curated hate-speech resources and adapt well to new resources or social-media comments, which have not been well-annotated due to time and cost constraints. The hate-speech resources used for the work are publicly available and cited appropriately, wherein the authors have discussed the sampling techniques and annotation guidelines in detail. The hate-speech examples presented in the paper are only intended for research purposes and better analysis of the models explored. The terms extracted and penalized in this work are not meant to be used off-the-shelf, but the approach should serve as a starting point for research on model-debugging and building more generalizable hate-speech classifiers. \section*{Acknowledgments} This work was supported partly by the French PIA project ``Lorraine Université d'Excellence'', reference ANR-15-IDEX-04-LUE. Experiments presented in this article were carried out using the Grid'5000 testbed, supported by a scientific interest group hosted by Inria and including CNRS, RENATER and several Universities as well as other organizations (see \url{https://www.grid5000.fr}). We thank the anonymous reviewers for their valuable feedback and suggestions.
{ "timestamp": "2022-09-20T02:22:27", "yymm": "2209", "arxiv_id": "2209.08681", "language": "en", "url": "https://arxiv.org/abs/2209.08681" }
\section{Introduction} Several new theoretical developments have taken place in relativistic dissipative hydrodynamics (see~\cite{hydrobook} for a review), which is immensely successful in describing the data from nuclear collisions at relativistic energies~\cite{Rischke:1998fq,Shuryak:2003xe,Stoecker:1986ci}. The inclusion of spin in general relativity is a long-standing problem~\cite{HEHL197655}. Recently, invigorated efforts have been witnessed on the development of spin hydrodynamics~\cite{Florkowski:2017ruc,Hattori:2019lfp, Fukushima:2020ucl, Li:2020eon,Gallegos:2021bzp,Gao:2019znl,Hattori:2019ahi, Li:2019qkf,Yang:2020hri, Cao:2022aku, Singh:2022ltu,Singh:2020rht, Hu:2021pwh,Hu:2022lpi, Weickgenannt:2020aaf,Weickgenannt:2022zxs, Weickgenannt:2022jes,Liu:2020flb, Bhadury:2020puc,Bhadury:2022qxd, Shi:2020htn, Peng:2021ago, Hashimoto:2013bna, Garbiso:2020puw, Gallegos:2020otk, Montenegro:2017rbu, Montenegro:2017lvf, Montenegro:2018bcf, Montenegro:2018bcf, Becattini:2007nd, Becattini:2009wh,Becattini:2012pp,Becattini:2018duy,Hu:2021lnx, Florkowski:2018fap, Becattini:2020sww, Speranza:2020ilk,Becattini:2022zvf} after the experimental measurement of the polarization of the $\Lambda$ hyperon~\cite{STAR:2017ckg,STAR:2019erd}. In particular, it is required to know how the spin of the constituent particles is related to fluid variables like vorticity, symmetric gradients or magnetic fields. The shear stress of the fluid can give rise to spin polarization, in addition to vorticity and temperature gradients~\cite{Fu:2021pok}. The polarization of hadrons observed in non-central collisions of heavy ions at the Relativistic Heavy Ion Collider (RHIC) at high centre-of-mass energies ($\sqrt{s_{\text NN}}$)~\cite{STAR:2017ckg, STAR:2019erd} has been attributed to the transfer of the orbital angular momentum of the fireball to the spin polarization. A dependence of the local $\Lambda$ spin polarization on the azimuthal angle in the transverse plane of the collision, observed by the STAR collaboration~\cite{Pol_azimuthal_1,Pol_azimuthal_2}, cannot be explained by hydrodynamical models based on local thermal vorticity~\cite{Fu:2020oxj,Xia:2018tes,Becattini:2017gcx}. The spin polarization as an independent relativistic hydrodynamic field was proposed as a possible solution to this problem, which has led to several new developments in the area of relativistic spin hydrodynamics. It ought to be mentioned here that one can solve the problem of local $\Lambda$ spin polarization by considering a possible effect of shear-induced polarization~\cite{shearinduced_pol}, without incorporating any additional hydrodynamic variable for the spin. This was taken as an indication that the spin polarization (calculated at the freeze-out surface at constant temperature) may have a negligible effect on the evolution of the medium formed in heavy ion collisions~\cite{Becattini:2021suc}. It must be emphasized that the inclusion of spin observables in hydrodynamics opens up an interesting possibility of developing a `classical' tool for studying quantum effects in a many-body system like the quark-gluon plasma. Other interesting new developments are chiral hydrodynamics~\cite{ChHydro1,ChHydro2} and the chiral vortical effects~\cite{CVE1,CVE2}. In condensed matter systems too, hydrodynamics with spin observables has found many interesting applications (see~\cite{spintronics} for a review).
The incorporation of the spin as a hydrodynamic field and its effect on the evolution of a relativistic fluid is one of the most active areas of contemporary research~\cite{Florkowski:2017ruc, Bhadury:2022qxd}. The evolution of the spin and other hydrodynamic fields ({\it e.g.} energy density, pressure, velocity, etc.) is governed by the conservation of the total angular momentum, along with the other equations governing the conservation of energy-momentum and conserved charges (net electric charge, net baryonic charge, etc). However, the definitions of the energy-momentum and spin tensors are not unique because of the presence of `pseudogauge' transformation degrees of freedom~\cite{HEHL197655}. Therefore, in spite of tremendous efforts, the formulation of relativistic dissipative spin-hydrodynamics remains incomplete. One can obtain many different pairs of these tensors~\cite{HEHL197655} through pseudo-gauge transformations. This ambiguity can be illustrated through the following situation: At the microscopic level, the energy-momentum tensor, defined for a system of particles with spin, can have symmetric and antisymmetric parts, where the antisymmetric part can be attributed to spin. Now, with the help of a pseudo-gauge transformation, one can define a new, symmetric energy-momentum tensor, the Belinfante tensor \cite{Belinfante}. Recently it has been shown in~\cite{Fukushima:2020ucl} that the entropy currents under this transformation are not equivalent in non-equilibrium situations. This is intriguing since this difference in the expressions of the entropy current implies that the physics of the two situations is not the same! Another interesting point of view was advanced in Ref.~\cite{Li:2020eon}, where the authors demonstrate that second-order conventional hydrodynamics is equivalent to spin-hydrodynamics in the dissipationless limit. The demonstration, however, uses the pseudogauge transformations along with a suitable generalization of the currents associated with the entropy and number densities. Due to this equivalence, one may think that perhaps one does not need spin hydrodynamics, since conventional hydrodynamics suffices; this needs to be investigated. Apart from that, we note that the energy-momentum tensor for second-order conventional hydrodynamics contains contributions from the fluid vorticity~\cite{Denicol:2012cn, Baier_2008, Israel:1979wp}. But the inclusion of vorticity brings spin dynamics into the hydrodynamic theory, since the presence of finite vorticity in the system can be regarded as a source of spin-polarization. This points towards the requirement of a treatment beyond the conventional formulation to account for the spin dynamics, with the spin density as an independent hydrodynamic field. It is well-known that the straightforward generalization of the Navier-Stokes (NS) equation to the relativistic domain is problematic because it admits acausal and unstable solutions~\cite{Hiscock:1987zz}. It is also known that these issues can be remedied by incorporating second-order corrections to the NS equation~\cite{Israel:1979wp} if certain conditions are satisfied. It is to be noted that this approach is not unique and there exists a variety of other approaches to address the issues related to the generalization of the NS equation in the relativistic domain~\cite{Van:2007pw}. In the present work, we systematically analyze the issues related to causality and instability in the spin-hydrodynamics presented in Refs.~\cite{Hattori:2019lfp,Li:2020eon}.
The spin-hydrodynamics equations presented in Refs.~\cite{Hattori:2019lfp,Li:2020eon} have very different structures, and the linear-mode analyses presented for them support very different modes. In Ref.~\cite{Montenegro:2018bcf} it is shown that causality for a particular kind of spin hydrodynamics can be restored only with a second-order term, as in Israel-Stewart theory~\cite{Israel:1979wp}. The paper is organized as follows: in the next section we briefly introduce the dissipative spin-hydrodynamics equations and, for a simple initial state, provide a linear-mode analysis. In Section three, in the dissipationless limit, we briefly introduce the conventional second-order hydrodynamics and its equivalence with spin hydrodynamics; this section also includes the linear-mode analysis for two initial states, the first corresponding to a stationary fluid and the second having a non-zero but constant vorticity in the $x$ and $y$ directions. Section IV is devoted to the summary and discussion. \section{Dissipative spin-hydrodynamics} \label{SpinFirst} \subsection{Structure} In the literature there are several ways to obtain the equations of spin hydrodynamics: methods based on effective field theory~\cite{Montenegro:2017rbu,Montenegro:2017lvf}, the entropy-current analysis~\cite{Florkowski:2017ruc} and the method of moments~\cite{Weickgenannt:2022zxs} have all been used to derive the equations of relativistic spin hydrodynamics. In the present work we closely follow the approach adopted in Ref.~\cite{Hattori:2019lfp}. The conventional way is to define the energy-momentum tensor $\Theta ^{\mu \nu} $ and the conserved ``currents'' of the fluid under consideration. To incorporate spin within the hydrodynamic framework, one must consider the total angular momentum $J^{\mu \alpha \beta}$ as one of the conserved currents. The Noether current associated with Lorentz transformations, $J^{\mu\alpha\beta}$, has contributions from two different sources: \begin{equation} J^{\mu \alpha \beta}\,=\,\left(\,x^{\alpha} \Theta^{\mu \beta} \,-\, x^{\beta}\Theta^{\mu \alpha} \right)\, +\, \Sigma^{\mu \alpha \beta},\label{ang} \end{equation} \noindent where $\Theta^{\mu\beta}$ is the canonical energy-momentum tensor (EMT), $x^\alpha$ is the space-time four-vector and $\Sigma^{\mu\alpha\beta}$ is the spin tensor. The first term within the bracket on the right-hand side of Eq.\eqref{ang} represents the contribution from the orbital angular momentum, which is conserved for a symmetric $\Theta^{\mu\beta}$. All the dissipative fluxes which one encounters in the formulation of dissipative hydrodynamics will be denoted with the prefix $\Delta$. In particular, the dissipative part of $\Theta^{\mu\nu}$ will be denoted by $\Delta\Theta^{\mu\nu}$ and decomposed into symmetric ($\Delta \Theta^{\mu\nu}_{s}$) and anti-symmetric ($\Delta \Theta^{\mu\nu}_{a}$) parts as follows: \begin{equation} \Delta \Theta^{\mu\nu} =\,\Delta \Theta^{\mu\nu}_s\,+\,\Delta \Theta^{\mu\nu}_a.\label{canEMT} \end{equation} \noindent Both the symmetric and the anti-symmetric parts of the canonical EMT contain information about the dissipation and the transport coefficients. The mathematical form of $\Delta \Theta^{\mu\nu}$ can be determined with the help of the second law of thermodynamics. The second term on the right-hand side of Eq.\eqref{ang} is the spin term, which arises due to the invariance of the underlying field theory under Lorentz transformations~\cite{Hattori:2019lfp} and can be identified with the internal degrees of freedom.
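To make the statement about the orbital part explicit, a one-line check using $\partial_\mu\Theta^{\mu\nu}=0$ gives
\begin{equation}
\partial_{\mu}\left(x^{\alpha}\Theta^{\mu\beta}-x^{\beta}\Theta^{\mu\alpha}\right)=\Theta^{\alpha\beta}-\Theta^{\beta\alpha},
\end{equation}
which vanishes only for a symmetric EMT; an antisymmetric part of $\Theta^{\mu\nu}$ therefore exchanges angular momentum between the orbital and spin parts, as will be made precise below.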
It is required that the spin term satisfies the condition $ \Sigma^{\mu\alpha\beta}\,=\,- \Sigma^{\mu\beta\alpha}$. The spin tensor can further be decomposed into two parts: \begin{eqnarray} \Sigma ^{\mu \alpha \beta }&=&S^{\alpha\beta} u^{\mu }+\Delta \Sigma ^{\mu \alpha \beta },\label{spin} \end{eqnarray} \noindent where $S^{\alpha \beta }$ is the spin-polarization density in the fluid rest frame and $\Delta \Sigma ^{\mu \alpha \beta }$ is the spin dissipation. Moreover, the current density $j^{\mu}$ of conserved charges (the baryonic charge for a system formed in relativistic nuclear collisions) can be written as \begin{eqnarray} j^{\mu }=n u^{\mu }+n^\mu,\label{current} \end{eqnarray} \noindent where $n$ is the charge density in the fluid rest frame and $n^{\mu}$ is the charge diffusion, which vanishes in Eckart's choice of frame. Next, one can write the EMT of the fluid as \begin{equation} \label{hydroEMT} \Theta^{\mu \nu}\,=\, \Theta^{\mu\nu}_o\,+ \, \Delta \Theta^{\mu\nu}_{s}\,+\,\Delta \Theta^{\mu\nu}_{a}, \end{equation} \noindent where $\Theta^{\mu\nu}_o$ is the ideal part of the EMT, given by \begin{equation} \Theta _o^{\mu \nu }\,=\,\epsilon u^{\mu } u^{\nu }+P\Delta ^{\mu \nu }, \label{ideal} \end{equation} \noindent where $\epsilon$ and $P$ respectively denote the energy density and the pressure of the fluid, and $u^{\mu }$ represents the fluid four-velocity. The metric of flat space-time is taken here as $g^{\mu \nu }={\rm diag}(-,\,+,\,+,\,+)$, so that the projection operator $\Delta^{\mu \nu }=g^{\mu \nu }+u^{\mu}u^{\nu }$ satisfies the condition $\Delta^{\mu \nu} u_\mu\,=\,0$. The velocity field satisfies the normalization condition $u^{\mu}u_{\mu}=-1$. The quantities $P$, $\epsilon$ and $n$ are related through the equation of state (EoS). The corrections $\Delta \Theta^{\mu \nu}_{s}$ and $\Delta \Theta^{\mu \nu}_{a}$ can be decomposed as~\cite{Romatschke:2009im,Hattori:2019lfp} \begin{eqnarray} \label{scorrec} \Delta \Theta^{\mu \nu }_s&=&\Pi \Delta^{\mu \nu } +h^{\mu}u^{ \nu }+u^{\mu}h^{ \nu }+\pi^{\mu \nu },\\ \label{acorrec} \Delta \Theta^{\mu \nu}_a&=& q^{\mu}u^{ \nu }-u^{\mu}q^{\nu}+\phi^{\mu \nu }, \end{eqnarray} \noindent where the scalar $\Pi$, the vectors ($h^{\mu}$ and $q^{\mu}$) and the rank-2 tensors ($\pi^{\mu \nu }$ and $\phi^{\mu \nu}$) are the dissipative fluxes. All the dissipative fluxes in the canonical EMT individually satisfy the transversality condition with respect to the hydrodynamic velocity $u^\mu$: $h^{\mu}u_\mu=q^{\mu}u_\mu= \Pi\Delta^{\mu\nu}u_\mu= \pi^{\mu\nu}u_\mu =\phi^{\mu\nu}u_\mu=\,0$. The vector $h^{\mu}$ represents the contribution to the energy flow that does not depend on the spin polarization, while the vector $q^{\mu}$ describes the dissipation due to the spin polarization. The tensor $\pi^{\mu \nu }$ is a symmetric traceless tensor representing the shear stress without any effect of the spin polarization, whereas $\phi^{\mu \nu}$ is an antisymmetric tensor describing the dissipation due to vorticity and spin polarization. The mathematical forms of the scalar, vector and tensor dissipative fluxes can be constructed in terms of $g^{\mu\nu}$, the hydrodynamic fields and the transport coefficients with the help of the second law of thermodynamics. The transport coefficients themselves can be determined from the underlying microscopic theories.
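As a simple consistency check of this decomposition (our own bookkeeping, using only the transversality conditions above), one may count components: a transverse vector has 3 independent components, a transverse symmetric traceless rank-2 tensor has 5, and a transverse antisymmetric rank-2 tensor has 3, so that
\begin{equation}
\underbrace{\Pi}_{1}+\underbrace{h^{\mu}}_{3}+\underbrace{\pi^{\mu\nu}}_{5}=9 \quad\text{(symmetric part)},\qquad
\underbrace{q^{\mu}}_{3}+\underbrace{\phi^{\mu\nu}}_{3}=6 \quad\text{(antisymmetric part)}.
\end{equation}
The antisymmetric count saturates the six components of an antisymmetric $4\times4$ tensor, while the remaining component of the symmetric part is absorbed into $\epsilon$ by the choice of matching conditions.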
The equations of motion of a relativistic fluid with spin degrees of freedom are given by: \begin{eqnarray} \partial_{\mu} \Theta ^{\mu \nu }&=&0\,,\label{div1}\\ \partial_{\mu} J^{\mu \alpha \beta }&=&0\,,\label{div2}\\ \partial_{\mu} j^{\mu}&=&0\,.\label{div3} \end{eqnarray} \noindent The second law of thermodynamics requires that the entropy current $s^\mu$ satisfies the following condition: \begin{equation} \partial_\mu s^\mu\,\geq\,0.\label{2nd} \end{equation} From Eq.\eqref{div2}, using the definition of the total angular momentum (Eq.\eqref{ang}), one gets the equation of spin dynamics, \begin{eqnarray} \label{antiSp} \partial_{\rho}\Sigma^{\rho\mu\nu}\,&=&-2\Delta \Theta ^{\mu \nu }_a.\label{spindy} \end{eqnarray} This equation indicates that the evolution of the spin is governed by the anti-symmetric part of the EMT. Next, by using Eqs.\eqref{div1} and \eqref{div3}, we obtain \begin{eqnarray} \label{evoen} D\epsilon&=&-(\epsilon+P)\theta+u_{\nu}\partial_{\mu}\left[ \Delta \Theta ^{\mu \nu }\right] \,,\\ \label{NS} (\epsilon+P)D u^{\mu}&=&-\Delta^{\mu \nu}\partial_{\nu}P -\Delta^{\mu }_{\nu}\partial_{\alpha}\Delta \Theta^{\alpha \nu }\,,\\ \label{evospin} DS^{\alpha\beta}&=&-S^{\alpha\beta}\theta-2 \Delta \Theta_{a}^{\alpha\beta} -\partial_{\mu}\Delta \Sigma ^{\mu \alpha \beta }\,,\\ \label{evon} Dn &=& -n\theta, \end{eqnarray} \noindent where $D\,\equiv\,u^{\mu}\partial_\mu$ and $\theta\,\equiv\,\partial_\mu u^{\mu}$. The first law of thermodynamics is generalized to incorporate the spin density $ S^{\mu \nu}$ as~\cite{Hattori:2019lfp}: \begin{eqnarray} \label{Ilawdiff} {Tds}\,&=&\,{d\epsilon}-{\mu dn}-\omega _{\mu \nu }{dS}^{\mu \nu}\,,\\ \label{extensiv} Ts\,&=&\,\epsilon\,+\,P\,-\mu\,n \,-\,\omega_{\mu \nu }{S}^{\mu \nu}, \end{eqnarray} \noindent where $s$, $\mu$ and $\omega _{\mu \nu}$ respectively denote the entropy density, the (baryonic) chemical potential and the chemical potential conjugate to the spin density. For this treatment to make sense, the spin must be at least an approximately conserved quantity: as described by Eq.\eqref{spindy}, the spin dynamics is governed by the antisymmetric part of the canonical EMT, so the incorporation of spin degrees of freedom within a hydrodynamic framework requires that the relaxation time of the spin density be longer than the mean free time associated with the microscopic scatterings of the fluid particles~\cite{Hattori:2019lfp}. From the differential statement of the first law one can write the space-time evolution of the entropy density as: \begin{equation} T\,Ds\,=\,D\epsilon\,-\,\mu\,Dn\,-\,\omega_{\mu \nu} {DS}^{\mu \nu}.\label{entropydyn} \end{equation} In the presence of dissipative fluxes one decomposes the entropy current as $s^\mu\,=\,s u^\mu\,+\,\Delta s^\mu$, where transversality requires $\Delta s^\mu u_\mu=0$~\cite{Hattori:2019lfp}. In order to apply the second law of thermodynamics, one takes the divergence of $s^\mu$ to get, \begin{eqnarray} \label{entrp} \partial_{\mu}s^{\mu}&=&s\theta + Ds+\partial_{\mu}\Delta s^{\mu}.
\label{2ndL} \end{eqnarray} \noindent We first eliminate $Ds$ from Eq.~\eqref{2ndL} by using Eq.~\eqref{entropydyn}, and then use Eqs.~\eqref{evoen}, \eqref{NS}, \eqref{evospin} and \eqref{evon} to obtain: \begin{eqnarray} \label{secndl} \partial_{\mu}s^{\mu}&=&\beta \theta \{Ts-(\epsilon+P)+\mu n+\omega_{\alpha\beta}S^{\alpha\beta}\}-\Delta \Theta ^{\mu \nu }\partial_{\mu}(\beta u_{\nu}) \nonumber\\ &&-\Delta \Sigma ^{\mu \alpha \beta }\partial_{\mu}(\beta \omega_{\alpha\beta})+2\beta \omega_{\alpha\beta}\Delta\Theta_{a}^{\alpha\beta}\,\nonumber\\ &&+\partial_{\mu}(\Delta s^{\mu}+\beta u_{\nu} \Delta \Theta ^{\mu \nu }+\beta \omega_{\alpha\beta} \Delta \Sigma ^{\mu \alpha \beta })\,, \end{eqnarray} \noindent where $\beta=1/T$ is the inverse temperature. In Eq.\eqref{secndl}, the term within $\lbrace\rbrace$ vanishes by virtue of the thermodynamic relation Eq.\eqref{extensiv}. The last term on the right-hand side can be made to vanish by demanding \begin{equation} \label{centrp} \Delta s^{\mu}=-\beta u_{\nu} \Delta \Theta ^{\mu \nu }-\beta \omega_{\alpha\beta} \Delta \Sigma ^{\mu \alpha \beta }. \end{equation} \noindent It is straightforward to check that $\Delta s^\mu u_{\mu}=\,0$. The mathematical forms of the scalar, vector and tensor dissipative fluxes ($\Pi,\, h^{\mu},\, q^{\mu},\,\pi^{\mu \nu}$ and $\phi^{\mu \nu}$) appearing in Eqs.\eqref{scorrec} and \eqref{acorrec} are constrained by the second law of thermodynamics. The appropriate forms of these fluxes are found to be~\cite{Hattori:2019lfp}: \begin{eqnarray}\label{dfluxes} \Pi\,&=&\,-\zeta \theta,\nonumber \\ h^{\mu }&=&-\kappa (D u^{\mu }+\beta \Delta ^{\mu \rho}\partial_{\rho}T),\nonumber\\ q^{\mu }&=&-\lambda (-D u^{\mu }+\beta \Delta ^{\mu \rho}\partial_{\rho}T-4\omega^{\mu\nu}u_{\nu}),\nonumber\\ \pi^{\mu\nu}&=&-2\eta \Delta^{\mu\nu}_{\alpha\rho} \partial^{\alpha}u^{\rho},\nonumber\\ \phi^{\mu\nu}&=&-2\gamma\Big [\frac{1}{2}(\Delta^{\mu \alpha} \partial_{\alpha}u^{\nu}-\Delta^{\nu \alpha} \partial_{\alpha}u^{\mu}) -\Delta^{\mu}_{\rho} \Delta^{\nu}_{\lambda} \omega^{\rho\lambda}\Big ],\nonumber\\ \Delta \Sigma ^{\mu \alpha \delta}&=& -\chi_{1}\Delta^{\mu\rho}\partial_{\rho}(\beta \omega^{\alpha \delta}), \end{eqnarray} where $\kappa$, $\eta$ and $\zeta$ respectively denote the coefficients of thermal conductivity, shear viscosity and bulk viscosity, and the symmetric traceless projection normal to $u^{\mu}$ is defined as $\Delta^{\mu\nu}_{\alpha\beta}= \frac{1}{2}(\Delta^{\nu}_{\alpha}\Delta^{\mu}_{\beta}+\Delta^{\mu}_{\alpha}\Delta^{\nu}_{\beta}- \frac{2}{3}\Delta^{\mu\nu}\Delta_{\alpha\beta})$. The presence of spin polarization introduces two new transport coefficients, $\lambda$ and $\gamma$. The coefficient $\lambda$ is related to the heat conduction associated with the new vector current $q^\mu$, while the coefficient $\gamma$ is related to the new stress tensor $\phi^{\mu\nu}$ generated by the inclusion of spin in the hydrodynamics. Moreover, $q^\mu$ receives a contribution from the spin potential $\omega^{\mu\nu}$. It is interesting to note that if one identifies $\omega^{\mu\nu}$ with the fluid vorticity, the spin stress $\phi^{\mu\nu}$ vanishes but $q^{\mu}$ survives; the effect of the spin polarization then resides solely in the vector current $q^{\mu}$. Up to this point, no power-counting scheme has been assumed in the derivation of the fluxes from the entropy current including the effect of the spin.
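Indeed, substituting the constitutive relations \eqref{dfluxes} back into Eq.\eqref{secndl}, the entropy production organizes (our schematic rearrangement; the $\Delta\Sigma$ contribution is of higher order in gradients and is not tracked here) into a sum of squares,
\begin{equation}
\partial_{\mu}s^{\mu}=\beta\left[\frac{\Pi^{2}}{\zeta}+\frac{h^{\mu}h_{\mu}}{\kappa}+\frac{q^{\mu}q_{\mu}}{\lambda}+\frac{\pi^{\mu\nu}\pi_{\mu\nu}}{2\eta}+\frac{\phi^{\mu\nu}\phi_{\mu\nu}}{2\gamma}\right]+\mathcal{O}(\partial^{3})\;\geq\;0,
\end{equation}
where each square is non-negative because all the fluxes are transverse to $u^{\mu}$ and hence space-like, so that the second law holds provided $\zeta,\,\kappa,\,\lambda,\,\eta,\,\gamma\geq0$.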
We will consider two power-counting schemes: (i) the one of Ref.~\cite{Hattori:2019lfp}, where the gradients are taken as $\sim\mathcal{O}(\partial^1)=\delta_g$ and the spin chemical potential is also taken as $\sim\delta_g$; in that case, for $\delta_g^2\ll\delta_g\ll1$, only $\Delta \Sigma ^{\mu \alpha \delta}\equiv 0$ at first order, while the other fluxes in Eq.\eqref{secndl} remain intact at first order. (ii) The other ordering of scales is for uniform high rotation, where the vorticity is of order $\delta_{\omega}\ll1$ and the other gradients are of different scales, with $\delta_{\omega}$ the highest relevant scale~\cite{Li:2020eon}. We discuss below how the dispersion of linear perturbations is shaped for these two orderings of scales, both for the first-order spin hydrodynamics and for the equivalent conventional second-order hydrodynamic theory. \subsection{Linear analysis} To understand the stability and causality issues, we first consider an equilibrium background with no fluid flow, $u_0^\mu\equiv (-1,0,0,0)$. Here the subscript $0$ denotes the background value of a physical quantity (in the perturbations, however, a $0$ index denotes the time component). In addition, the background is assumed to be static and homogeneous, and the background values of the spin-polarization and spin-potential tensors are taken to be zero. This background equilibrium state is the same as the one considered in Ref.~\cite{Hattori:2019lfp}. We use $\mathcal{Q}$ as a generic notation for a hydrodynamic field, with $\mathcal{Q}_0$ and $\delta\mathcal{Q}$ representing its background value and its fluctuation, respectively; $\delta\mathcal{Q}$ is a function of space and time. In this scheme one can write the perturbed velocity vector as $\delta u^\mu\equiv (0,\delta\mathbf{u})$. In the following we consider the spin chemical potential to be of order $\sim \mathcal{O}(\partial^{1})$, {\it i.e.}, of the order of the other gradients (of $u^{\mu}$, $T$, $\mu$). This allows us to keep the order of the vorticity the same as that of the derivatives of the other perturbed quantities like $\delta u^{\mu}$ or $\delta T$. This power-counting scheme is different from the one of Ref.~\cite{Li:2020eon} used in Section~\ref{LiSt}. We first note that, in the absence of spin dynamics and of the conventional dissipative fluxes, such a background supports only sound waves under linear perturbations.
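Before writing the linearized equations, it is useful to record the conventions used below (our summary). With the plane-wave ansatz $\delta\mathcal{Q}=\widetilde{\delta\mathcal{Q}}\,e^{-\omega t+i\vec{k}\cdot\vec{x}}$ employed in this section,
\begin{equation}
\frac{\partial}{\partial t}\to-\omega,\qquad \vec{\nabla}\to i\vec{k},\qquad \text{stability}\;\Leftrightarrow\;{\rm Re}\,\omega\geq0,
\end{equation}
since ${\rm Re}\,\omega<0$ corresponds to an exponentially growing amplitude, while the imaginary part of $\omega$ describes propagation; causality is monitored through the large-$k$ behaviour of $|d\omega/dk|$, which should not exceed the speed of light.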
Upon retaining only the terms of linear order in the perturbed quantities, we get the following set of equations: \begin{subequations} \begin{eqnarray} 0&=&\frac{\partial \delta \epsilon}{\partial t}+h_0\vec{\nabla}\cdot \vec{\delta u}-(\kappa-\lambda)\frac{\partial}{\partial t}\vec{\nabla}\cdot \delta \vec{u}-\frac{\kappa+\lambda}{T_0}\nabla^2\delta T+4\lambda \partial_{i}\delta \omega^{i0}\,,\\ \label{veleq} 0&=&(\kappa+\lambda) \frac{\partial^2 \delta u^i}{\partial t^2}-h_0 \frac{\partial \delta u^i}{\partial t}+(\eta+ \gamma)\nabla ^2 \delta u^i+(\zeta+\eta/3- \gamma)\partial^i\vec{\nabla}\cdot \vec{\delta u}\nonumber\\ &&+\frac{\kappa-\lambda }{ T_0}\frac{\partial \nabla^i \delta T}{\partial t} -\partial^i \delta P +4 \lambda \frac{\partial \delta \omega^{i0}}{\partial t} -4 \gamma \partial_l \delta \omega^{li}\,,\\ 0&=&\frac{\partial \delta S^{0i}}{\partial t}+8 \lambda \delta \omega^{i0}+\frac{\chi_1}{T_0}\nabla^2 \delta\omega^{i0}+2 \lambda \frac{\partial}{\partial t}\delta u^{i}-\frac{2\lambda}{T_0}\partial^{i}\delta T\,,\\ 0&=&\frac{\partial \delta S^{ij}}{\partial t}-2 \gamma(\partial^i\delta u^{j}-\partial^j\delta u^{i}-4\delta \omega^{ij})-\frac{\chi_1}{T_0}\nabla^2 \delta\omega^{ij}\, ,\\ 0&=&\frac{\partial \delta n}{\partial t}+n_0\vec{\nabla}\cdot \vec{\delta u}\,, \end{eqnarray} \end{subequations} where $h_{0}=\epsilon_0+P_0$ is the enthalpy density of the background state. Using the plane-wave ansatz $\delta \mathcal{Q}=\widetilde{\delta \mathcal{Q}}\,e^{-\omega t+i\vec{k}\cdot\vec{x}}$, the above differential equations become a set of linear homogeneous algebraic equations. It is useful to take the projection along the unit wave-vector $\hat{\vec{k}}$ to obtain the longitudinal modes, and the projection perpendicular to $\hat{\vec{k}}$ to obtain the transverse modes. After taking such projections we get: \begin{subequations} \label{linEq1} \begin{eqnarray} 0&=& \Big[\frac{k^2 (\kappa +\lambda )}{\epsilon_T T_0 }-\omega \Big]{\delta \epsilon}+ik\Big[ h_0+\omega (\kappa -\lambda )\Big]{\delta u}_p+4 i k \lambda\, \delta \omega _{p0}-\Big [\frac{k^2 (\kappa +\lambda ) \epsilon_n}{\epsilon_T T_0 } \Big]{\delta n}\,,\\ \label{okveleq} 0&=&-i k \Big[c_s^2+\frac{\omega (\kappa -\lambda )}{\epsilon_T T_0 }\Big]\delta \epsilon+ \Big[\omega h_0+\omega ^2 (\kappa +\lambda )-k^2 \Big(\zeta +\frac{4 \eta }{3}\Big)\Big]\delta u_p-4 \lambda \omega\, \delta \omega _{p0}-i k \Big[\frac{\omega (\kappa -\lambda )\epsilon_n}{\epsilon_T T_0 }\Big]\delta n\,,\\ 0&=& \Big[\omega h_0+\omega ^2 (\kappa +\lambda )-k^2 (\gamma + \eta )\Big]\delta u_t-4 i k \gamma\, \delta \omega _{pt}-4 \lambda \omega\, \delta \omega _{t0}\,,\\ 0&=& \Big[8 \gamma + \frac{\chi _1}{T_0}k^2-\omega \chi _s \Big]\delta \omega _{pt}-2 i \gamma k\, \delta u_t\,,\\ 0&=& \Big[\omega \chi _b- \frac{\chi _1}{T_0}k^2 +8 \lambda \Big]\delta \omega _{p0}-\frac{2 i k \lambda}{\epsilon_T T_0}\,\delta\epsilon+\frac{2 i k \lambda \epsilon_n}{\epsilon_T T_0}\,\delta n-2 \lambda \omega\, \delta u_p\,,\\ 0&=& \Big[\omega \chi _b- \frac{\chi _1}{T_0}k^2 +8 \lambda \Big]\delta \omega _{t0}-2 \lambda \omega\, \delta u_t\,,\\ 0&=&-\omega\, \delta n+ikn_0\,\delta u_p\,, \end{eqnarray} \end{subequations} where $\chi_b=\frac{\partial S^{i0}}{\partial \omega^{i0}}$ and $\chi_s=\frac{\partial S^{ij}}{\partial \omega^{ij}}$, with $i$ and $j$ denoting spatial indices~\cite{Hattori:2019lfp}.
Here the subscripts $p$ and $t$ respectively denote the longitudinal and transverse parts. Further, we have used $\delta T=\frac{1}{\epsilon_T}\delta \epsilon-\frac{\epsilon_n}{\epsilon_T}\delta n$, where $\epsilon_T=\frac{\partial \epsilon}{\partial T}\Big |_{n}$ and $\epsilon_n=\frac{\partial \epsilon}{\partial n}\Big |_{T}$, in the above equations. Since the equations for the longitudinal and transverse parts decouple, one can treat them separately to obtain the dispersion relations of the linear modes. For the longitudinal part, \begin{equation} \mathds{ M}\, \mathcal{Q}_l=0, \end{equation} where, \begin{equation} \mathcal{Q}_l= \left( \begin{array}{c} \delta \epsilon \\ \delta u_p\\ \delta \omega _{p0}\\ \delta n \\ \end{array} \right) \end{equation} and \begin{equation} \mathds{ M}= \left( \begin{array}{cccc} \frac{k^2 (\kappa +\lambda )}{\epsilon_T T_0}-\omega & i k h_0+i k \omega (\kappa -\lambda ) & 4 i k \lambda & -\frac{k^2 \epsilon_n (\kappa +\lambda )}{\epsilon_T T_0} \\ -i k \left(c_s^2+\frac{\omega (\kappa -\lambda )}{\epsilon_T T_0}\right) & \omega h_0+\omega ^2 (\kappa +\lambda )-k^2 \left(\zeta +\frac{4 \eta }{3}\right) & -4 \lambda \omega & -\frac{i k \omega \epsilon_n (\kappa -\lambda )}{\epsilon_T T_0} \\ -\frac{2 i k \lambda }{\epsilon_T T_0} & -2 \lambda \omega & \omega \chi _b-\frac{k^2 \chi _1}{T_0}+8 \lambda & \frac{2 i k \lambda \epsilon_n}{\epsilon_T T_0} \\ 0 & i k n_0 & 0 & -\omega \\ \end{array} \right) \end{equation} A nontrivial solution requires that the determinant of $\mathds{ M}$ vanish, which for the longitudinal part gives the following four roots: \begin{eqnarray} \label{lnmod1} \omega_{1l} &=&(\kappa+\lambda )\, \frac{ n_0 \epsilon_n}{\epsilon_T T_0 h_0} \, k^2,\nonumber \\ \omega_{2l} &=&\pm i c_s k+\Big[ \frac{ \zeta +\frac{4}{3} \eta }{2 h_0} +\lambda \left (\frac{ 1 }{\epsilon_T T_0}+\frac{ c_s^2}{h_0}\right )-\frac{\kappa n_0 \epsilon_n}{\epsilon_T T_0 h_0}\Big]\,k^2,\nonumber \\ \omega_{3l} &=&-\frac{8 \lambda }{\chi _b}+\frac{\chi _1}{\chi _b T_0}k^2 , \nonumber\\ \omega_{4l} &=&-\frac{h_0}{\kappa +\lambda }\,. \end{eqnarray} The spin transport coefficient $\lambda$, associated with heat conduction, is seen to contribute together with the conventional heat conduction characterized by the coefficient $\kappa$. The acausal behavior familiar from the conventional relativistic NS equation can also be seen in the first mode. The parameter $\lambda$ also contributes to an instability together with the conventional heat conductivity $\kappa$; note that $\lambda$ and $\kappa$ appear in the denominator of the unstable mode $\omega_{4l}$. In conventional first-order hydrodynamics this kind of unstable mode was discussed in Ref.~\cite{Hiscock:1985zz}, where it was regarded as unphysical. Next, for finite baryon density the sound mode $\omega_{2l}$ is stable if the condition $\frac{ \zeta +\frac{4}{3} \eta }{2 h_0} +\lambda \left (\frac{ 1 }{\epsilon_T T_0}+\frac{ c_s^2}{h_0}\right )\ge \frac{\kappa n_0 \epsilon_n}{\epsilon_T T_0 h_0}$ is satisfied. If this condition is violated an instability sets in, as the conventional heat conduction can contribute towards increasing the pressure, which may result in an unstable mode. Interestingly, $\lambda$ also contributes towards damping the sound modes described by $\omega_{2l}$: in the absence of conventional heat conduction, i.e. for $\kappa=0$, the parameter $\lambda$ alone damps the sound waves.
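As a cross-check of these roots, the determinant condition can also be evaluated numerically. The short script below (a sketch; all parameter values are purely illustrative assumptions, not taken from any microscopic calculation) builds the longitudinal matrix $\mathds{M}$ above and compares the exact roots of $\det\mathds{M}=0$ at small $k$ with the expansions in Eq.~\eqref{lnmod1}; with the ansatz $e^{-\omega t+i\vec{k}\cdot\vec{x}}$, any root with ${\rm Re}\,\omega<0$ signals an instability.
\begin{verbatim}
# Numerical check of the longitudinal spin-hydrodynamic modes
# (illustrative background and transport parameters).
import sympy as sp

w = sp.symbols('omega')
kval = 0.05                      # a small wave number
h0, T0, eT, en, n0, cs2 = 4/3, 1.0, 4.0, 0.1, 0.2, 1/3
kap, lam, zeta, eta, chi1, chib = 0.05, 0.02, 0.01, 0.10, 0.01, 0.5

def M(k):
    # rows act on (delta_eps, delta_u_p, delta_omega_p0, delta_n)
    return sp.Matrix([
        [k**2*(kap+lam)/(eT*T0) - w, sp.I*k*h0 + sp.I*k*w*(kap-lam),
         4*sp.I*k*lam, -k**2*en*(kap+lam)/(eT*T0)],
        [-sp.I*k*(cs2 + w*(kap-lam)/(eT*T0)),
         w*h0 + w**2*(kap+lam) - k**2*(zeta + 4*eta/3),
         -4*lam*w, -sp.I*k*w*en*(kap-lam)/(eT*T0)],
        [-2*sp.I*k*lam/(eT*T0), -2*lam*w,
         w*chib - k**2*chi1/T0 + 8*lam, 2*sp.I*k*lam*en/(eT*T0)],
        [0, sp.I*k*n0, 0, -w]])

roots = sp.Poly(M(kval).det().expand(), w).nroots()
print("roots of det M = 0:", [complex(r) for r in roots])

cs = cs2**0.5                    # small-k expansions of Eq. (lnmod1):
print("omega_1l ~", (kap+lam)*n0*en/(eT*T0*h0)*kval**2)
print("omega_2l ~", 1j*cs*kval + ((zeta+4*eta/3)/(2*h0)
      + lam*(1/(eT*T0)+cs2/h0) - kap*n0*en/(eT*T0*h0))*kval**2)
print("omega_3l ~", -8*lam/chib + chi1*kval**2/(chib*T0))
print("omega_4l ~", -h0/(kap+lam))
\end{verbatim}
The same check can be repeated verbatim for the transverse matrix $\mathds{M}_t$ below.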
Finally, the third root in Eq.~\eqref{lnmod1} is a new mode which has no counterpart in conventional fluid dynamics. This mode is unstable when ${8 \lambda }>\frac{\chi _1}{ T_0}k^2 $. Here we note that this mode can be made stable if one introduces a relaxation term $\frac{S^{\alpha\beta}}{\tau_s}$ for $S^{\alpha\beta}$ on the left-hand side of Eq.~\eqref{evospin}, where $\tau_s$ is the spin-relaxation time. In addition, $\omega_{3l}$ can also exhibit acausal behavior for sufficiently large values of the wave-vector $k$. Similarly, the transverse parts of Eqs.~\eqref{linEq1} can be written as, \begin{equation} \mathds{ M}_t\, \mathcal{Q}_t=0, \end{equation} where, \begin{equation} \mathcal{Q}_t= \left( \begin{array}{c} \delta u_t\\ \delta \omega _{pt}\\ \delta \omega _{t0} \end{array} \right) \end{equation} and \begin{equation} \mathds{ M}_t= \left( \begin{array}{ccc} \omega h_0+\omega ^2 (\kappa +\lambda )-k^2 (\gamma +\eta ) & -4 i \gamma k & -4 \lambda \omega \\ -2 i \gamma k & 8 \gamma +\frac{k^2 \chi _1}{T_0}-\omega \chi _s & 0 \\ -2 \lambda \omega & 0 & \omega \chi _b-\frac{k^2 \chi _1}{T_0}+8 \lambda \\ \end{array} \right) \end{equation} By setting the determinant of $\mathds{M}_t$ to zero, the following dispersion relations for the transverse modes are obtained: \begin{eqnarray} \omega_{1t} &=&\frac{(\gamma+\eta ) }{h_0} k^2,\nonumber \\ \omega_{2t} &=&-\frac{8 \lambda }{\chi _b}+\frac{ \chi _1}{\chi _b T_0}k^2,\nonumber \\ \omega_{3t} &=&\frac{8 \gamma }{\chi _s}+\frac{\chi _1}{T_0 \chi _s}k^2,\nonumber\\ \omega_{4t} &=&-\frac{h_0}{(\kappa +\lambda)}\,. \label{tsmod1} \end{eqnarray} Just like the four longitudinal modes in Eq.~\eqref{lnmod1}, there are four transverse modes. The modes $\omega_{1t}$ and $\omega_{4t}$ involve combinations of the conventional and spin transport coefficients. It is to be noted that the coefficient $\gamma$ is associated with the antisymmetric stress tensor $\phi^{\mu\nu}$ of Eq.~\eqref{dfluxes}, and therefore it appears together with the shear viscosity $\eta$ in $\omega_{1t}$. The group velocity associated with $\omega_{1t}$ can exhibit acausal behavior; such behavior is well known to exist in the dispersion relation of the relativistic NS equation (see, for example, Ref.~\cite{Romatschke:2009im}). The modes $\omega_{2t}$ and $\omega_{3t}$ are new, with no analogue in conventional hydrodynamics. The mode $\omega_{2t}$ is unstable if the condition $8 \lambda >\frac{ \chi _1} {T_0}k^2$ is satisfied. The expression for $\omega_{2t}$ is identical to that of the longitudinal mode $\omega_{3l}$, and the instability can again be regulated by introducing a spin-relaxation time; but the group velocity associated with $\omega_{2t}$ can still become acausal at sufficiently high values of $k$. The mode $\omega_{3t}$ is stable, but it can show the same kind of acausal behavior as $\omega_{2t}$; here the transport coefficient $\gamma$ contributes a damping term which is independent of $k$. Finally, $\omega_{4t}$ gives an instability of the same form as $\omega_{4l}$, and this mode has a counterpart in the conventional relativistic hydrodynamic theory. Before we proceed to the normal-mode analysis in the dissipationless limit for the model of Ref.~\cite{Li:2020eon}, a few comments are in order. The new modes introduced by the inclusion of spin dynamics depend on the spin transport coefficients $\gamma$, $\lambda$ and $\chi_{1}$.
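To make the stabilization by a relaxation term explicit, a quick estimate (our sketch: we simply append $S^{\alpha\beta}/\tau_s$ to the left-hand side of Eq.~\eqref{evospin} and repeat the linear analysis) shifts the unstable root to
\begin{equation}
\omega_{3l}\;\longrightarrow\;\frac{1}{\tau_s}-\frac{8\lambda}{\chi_b}+\frac{\chi_1}{\chi_b T_0}\,k^2,
\end{equation}
so that, for $\chi_b>0$, all wavelengths are stable provided the relaxation is fast enough, $\tau_s\leq\chi_b/(8\lambda)$; the same replacement regulates the transverse mode $\omega_{2t}$ encountered above.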
The instability arising due to $\lambda$ can be controlled by introducing a spin-relaxation time $\tau_s$, provided the relaxation term dominates over the term driving the instability. This point of view was also discussed in Ref.~\cite{Hattori:2019lfp}. It might likewise be possible to control the acausal behavior of the modes $\omega_{3l}$, $\omega_{2t}$ and $\omega_{3t}$; however, this requires an explicit calculation of the spin transport coefficients. For example, the group velocity of the mode $\omega_{2t}$ is $2\frac{ \chi _1}{\chi _b T_0}k$: if the coefficient of $k$ is sufficiently small, the group velocity exceeds the speed of light only for length scales smaller than the mean free path, where a hydrodynamic description does not apply anyway. Whether this is the case requires an explicit calculation of the spin transport coefficients from a microscopic theory. There are new modes in spin hydrodynamics which have no counterpart in the conventional theory; therefore, the two are by no means equivalent in general. This is a clear indication that, in the general case, where the vorticity can take any value and there can be other sources of spin polarization, such as symmetric gradients and magnetic fields, the modified conventional hydrodynamics may not be equivalent to spin hydrodynamics. This will be discussed further in Section~\ref{LiSt}. \subsection{Instability and the heat flux} The term $Du^\nu$ appearing in the expressions for the heat fluxes can be eliminated using the velocity equation \eqref{NS} in favour of the spatial gradient of the pressure and other terms of first order in derivatives. Thus, if we keep only first-order terms in the heat fluxes, with no time derivative of the fluid velocity, the heat fluxes receive corrections from the first-order dissipation. Using Eq.~\eqref{NS} in the constitutive relations \eqref{dfluxes}, we find the following forms of the heat fluxes: \begin{eqnarray} h^{\mu }&=&\kappa \Big [\frac{1}{P+\epsilon }-\frac{1}{P+\epsilon-\mu n }\Big ]\Delta^{\mu\nu}\partial_{\nu }P+\kappa \frac{P_n}{P+\epsilon-\mu n }\,\Delta^{\mu\nu}\partial_{\nu }n+\mathcal{O}(\partial^2),\nonumber\\ q^{\mu }&=&-\lambda \Big [\frac{1}{P+\epsilon-\mu n }+\frac{1}{P+\epsilon }\Big ]\Delta^{\mu\nu}\partial_{\nu }P+\lambda \frac{P_n}{P+\epsilon-\mu n }\,\Delta^{\mu\nu}\partial_{\nu }n+4 \lambda \omega ^{\mu \nu } u_{\nu }+\mathcal{O}(\partial^2), \end{eqnarray} where we have used $\partial^{\mu} T=\frac{1}{P_T}\partial^{\mu} P-\frac{P_n}{P_T}\partial^{\mu} n$, with $P_T=\frac{\partial P}{\partial T}\Big |_{n}$ and $P_n=\frac{\partial P}{\partial n}\Big |_{T}$. For the baryon-free case, {\it i.e.} for $n=0$ and $P_n=0$, we have $h^{\mu}=0+\mathcal{O}(\partial^2)$ and $q^{\mu}=-\frac{2\lambda}{P+\epsilon }\,\Delta^{\mu\nu}\partial _{\nu }P+4\lambda\omega^{\mu\nu}u_{\nu}+\mathcal{O}(\partial^2)$. This is the situation considered in Ref.~\cite{Hattori:2019lfp}; here we consider the general case with non-zero baryon density.
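For reference, the replacement used above follows directly from Eq.~\eqref{NS}: since the dissipative fluxes are themselves of first order in gradients,
\begin{equation}
Du^{\mu}=-\frac{1}{\epsilon+P}\Delta^{\mu\nu}\partial_{\nu}P-\frac{1}{\epsilon+P}\Delta^{\mu}_{\ \nu}\partial_{\alpha}\Delta\Theta^{\alpha\nu}=-\frac{1}{\epsilon+P}\Delta^{\mu\nu}\partial_{\nu}P+\mathcal{O}(\partial^{2}),
\end{equation}
and inserting this into $h^{\mu}$ and $q^{\mu}$ of Eq.~\eqref{dfluxes}, together with the EoS relation $\partial^{\mu}T=\frac{1}{P_T}\partial^{\mu}P-\frac{P_n}{P_T}\partial^{\mu}n$, yields the forms of the heat fluxes quoted above for non-zero baryon density.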
In such a situation the linearized equations become: \begin{subequations} \begin{eqnarray} 0&=&\frac{\partial \delta \epsilon}{\partial t}+h_0\vec{\nabla}\cdot \vec{\delta u}+c_s^2 \left(\frac{\kappa -\lambda }{h_0}-\frac{\kappa +\lambda }{h_0-\mu _0 n_0}\right)\nabla^2 \delta \epsilon +\frac{(\kappa +\lambda ) P_n}{h_0-\mu _0 n_0} \nabla^2 \delta n +4\lambda \partial_{i}\delta \omega^{i0}\,,\\ 0&=&-h_0 \frac{\partial \delta u^i}{\partial t}+(\eta+ \gamma)\nabla ^2 \delta u^i+(\zeta+\eta/3- \gamma)\partial^i\vec{\nabla}\cdot \vec{\delta u} -c_s^2 \left(\frac{\kappa +\lambda }{h_0}-\frac{\kappa -\lambda }{h_0-\mu _0 n_0}\right) \frac{\partial}{\partial t}\partial^{i} \delta \epsilon\nonumber \\ && -\frac{(\kappa -\lambda ) P_n}{h_0-\mu _0 n_0} \frac{\partial}{\partial t}\partial^{i} \delta n -\partial^i \delta P +4 \lambda \frac{\partial \delta \omega^{i0}}{\partial t} -4 \gamma \partial_l \delta \omega^{li}\,,\\ 0&=&\frac{\partial \delta S^{0i}}{\partial t}+8 \lambda \delta \omega^{i0}+\frac{\chi_1}{T_0}\nabla^2 \delta\omega^{i0}-2\lambda c_s^2 \left(\frac{1}{h_0}+\frac{1}{h_0-\mu _0 n_0}\right) \partial^{i} \delta \epsilon + \frac{2\lambda P_n}{h_0-\mu _0 n_0} \partial^{i} \delta n \,,\\ 0&=&\frac{\partial \delta S^{ij}}{\partial t}-2 \gamma(\partial^i\delta u^{j}-\partial^j\delta u^{i}-4\delta \omega^{ij})-\frac{\chi_1}{T_0}\nabla^2 \delta\omega^{ij}\, ,\\ 0&=&\frac{\partial \delta n}{\partial t}+n_0\vec{\nabla}\cdot \vec{\delta u}\,. \end{eqnarray} \end{subequations} Inserting perturbations of the form $\delta \mathcal{Q}=\widetilde{\delta \mathcal{Q}}\,e^{-\omega t+i\vec{k}\cdot\vec{x}}$ into the above equations, we get \begin{subequations} \label{linEq} \begin{eqnarray} 0&=&\Big[-k^2 c_s^2 \left(\frac{\kappa -\lambda }{h_0}-\frac{\kappa +\lambda }{h_0-\mu _0 n_0}\right)-\omega \Big]\delta \epsilon-\frac{ k^2 (\kappa +\lambda ) P_n}{h_0-\mu _0 n_0}\,\delta n+i k h_0\,\delta u_p +4 i \lambda k\, \delta \omega _{p0}\,,\\ 0&=&-i k c_s^2 \Big[1-\omega \left(\frac{\kappa +\lambda }{h_0}-\frac{\kappa -\lambda }{h_0-\mu _0 n_0}\right)\Big]\delta \epsilon+\Big[\omega h_0-k^2 \Big(\zeta +\frac{4 \eta }{3}\Big)\Big]\delta u_p +\frac{i k \omega (\kappa -\lambda ) P_n}{h_0-\mu _0 n_0}\,\delta n-4 \lambda \omega\, \delta \omega _{p0}\,,\\ 0&=&\Big[\omega h_0-k^2 (\gamma +\eta )\Big]\delta u_t-4 i \gamma k\, \delta \omega _{pt}-4 \lambda \omega\, \delta \omega _{t0}\,,\\ 0&=&\Big[\omega \chi _b-\frac{k^2 \chi _1}{T_0}+8 \lambda \Big]\delta \omega _{p0}-2 i k \lambda c_s^2 \left(\frac{1}{h_0-\mu _0 n_0}+\frac{1}{h_0}\right)\delta \epsilon+\frac{2 i k \lambda P_n}{h_0-\mu _0 n_0}\,\delta n\,,\\ 0&=&\Big[\omega \chi _b-\frac{k^2 \chi _1}{T_0}+8 \lambda \Big]\delta \omega _{t0}\,,\\ 0&=& \Big[8 \gamma +\frac{k^2 \chi _1}{T_0}-\omega \chi _s\Big]\delta \omega _{pt}-2 i \gamma k\, \delta u_t\,,\\ 0&=&-\omega\, \delta n+ikn_0\,\delta u_p\,. \end{eqnarray} \end{subequations} Following the same procedure as before, we obtain the dispersion relations, to linear order in the transport coefficients, for the longitudinal and transverse modes.
The longitudinal modes read: \begin{eqnarray} \omega_{1l} &=&(\kappa+\lambda )\, \frac{ n_0 \epsilon_n}{\epsilon_T T_0 h_0} \, k^2,\nonumber \\ \omega_{2l} &=&\pm i c_s k+\Big[ \frac{ \zeta +\frac{4}{3} \eta }{2 h_0} +\frac{ \lambda }{h_0-\mu _0 n_0}\left( 2 c_s^2-\frac{n_0 \left(\mu _0 c_s^2+P_n\right)}{h_0}\right)\Big]\,k^2,\nonumber \\ \omega_{3l} &=&-\frac{8 \lambda }{\chi _b}+\frac{\chi _1}{\chi _b T_0}k^2 \, , \end{eqnarray} and for the transverse modes: \begin{eqnarray} \omega_{1t} &=&-\frac{8 \lambda }{\chi _b}+\frac{ \chi _1}{\chi _b T_0}k^2,\nonumber \\ \omega_{2t} &=&\frac{(\gamma+\eta ) }{h_0} k^2,\nonumber \\ \omega_{3t} &=&\frac{8 \gamma }{\chi _s}+\frac{\chi _1}{T_0 \chi _s}k^2 \, . \label{tsmod4} \end{eqnarray} We find that there are no unstable modes of the form $\omega=-\frac{h_0}{\kappa +\lambda}$ [see the last mode in Eq.~\eqref{lnmod1}]. The appearance of that mode can be understood from the part $(\kappa+\lambda) \frac{\partial^2 \delta u^i}{\partial t^2}-h_0 \frac{\partial \delta u^i}{\partial t}$ of Eq.\eqref{veleq}, which gives $\big(\omega h_0+\omega ^2 (\kappa +\lambda )\big)$ in the coefficient of $\delta u_p$ in Eq.~\eqref{okveleq}; $\omega h_0+\omega ^2 (\kappa +\lambda )=0$ gives $\omega=-\frac{h_0}{\kappa+\lambda}$. The term $(\kappa+\lambda) \frac{\partial^2 \delta u^i}{\partial t^2}$ enters the equation through the expressions for the heat fluxes in Eq.~\eqref{dfluxes}, where a time derivative of the velocity already appears on the right-hand side. If one instead replaces the time derivative in the heat fluxes of Eq.~\eqref{dfluxes} using Eq.~\eqref{NS} and keeps terms of first order in gradients, the resulting heat fluxes are entirely first order in the gradients of the hydrodynamic fields, and this unstable mode disappears. This unstable mode is present even without spin~\cite{Hiscock:1987zz}; here the spin adds to it through its contribution to the heat flux via $q^{\mu}$ (i.e., via $\lambda$). So the source of the instability found by Lindblom and Hiscock~\cite{Hiscock:1987zz} is a second-order correction entering first-order hydrodynamics through the expression of the heat flux, which contains the time variation of the fluid velocity ($D u^{\mu}$). The second-order effect in the heat fluxes comes through $Du^{\mu}$ because of its dependence on the gradients of the dissipative fluxes through the velocity equation, the gradients of the first-order dissipative fluxes being of second order. Since the instability is related to the expression of the heat flux, it can be removed by a proper redefinition of the heat fluxes, as already shown for the spinless case in Ref.~\cite{Van:2007pw}. However, there may still be unstable modes due to the presence of the spin polarization, namely $\omega_{1t}=-\frac{8 \lambda }{\chi _b}+ \frac{ \chi _1}{\chi _b T_0}k^2$. At first order, where the spin-dissipation term is dropped as being of second order (if the spin potential is itself of first order), $\omega_{1t}=-\frac{8\lambda}{\chi_b}$ is always unstable. Of course, this mode is unstable only if $\chi_b > 0$, i.e., if the spin potential is along the spin polarization. If the sign of $\chi_b$ depends on the charge, helicity or chirality of the particles, then this mode would be able to separate out the contributions of opposite charges.
However, if the spin potential gets a contribution at zeroth order, so that the $\chi_1$ term is retained, then even for species giving an unstable contribution the modes with $|k|> 2\sqrt{\frac{2\lambda T_0}{\chi _1}}$ are stable. Now the question arises as to which form of the heat fluxes is to be used in the first-order theory to avoid the instability developing at $\omega=-\frac{h_0}{\kappa +\lambda}$. One may argue that replacing the time derivative of the velocity in the heat flux is a good enough solution. We note that the form of the heat flux containing the time derivative of the fluid velocity comes from the positivity of the four-divergence of the entropy current, and that derivative contains corrections of first order in the dissipative fluxes through Eq.~\eqref{NS}. Though the contribution to the entropy current is first order in the dissipative fluxes (Eq.~\eqref{centrp}), the dissipative fluxes are not restricted by this condition to be first order in the gradients of the hydrodynamic fields, owing to the presence of the time derivative of the fluid velocity in the form of the fluxes. The time scale for the growth of this instability is $t\sim \omega^{-1}\sim \frac{\kappa +\lambda}{h_0}$. So for small $\kappa$ and $\lambda$ this time may be very short, which means that, within a very short time, the second-order contribution would grow and lead to the instability. The truncation of higher-order effects in the heat fluxes may therefore not be applicable in such a situation; in general, this demands a consistent second-order theory. The instability of the first-order theory is tamed only when the contribution of the dissipation to the time variation of the fluid velocity (the acceleration) is negligible compared to that of the pressure gradients. Another important point is that in the two situations the contributions of the conductivities to the dissipation of the sound modes are different. In both cases, however, the dissipation of sound gets a contribution from the new transport coefficient ($\lambda$) associated with the spin polarization. For certain values of $n_0$ this contribution may even lead to growth, and the condition for growth differs between the two forms of the heat fluxes. There is always a contribution of the spin polarization to the transverse modes through $\gamma$. So the spin polarization affects the dissipation in the system. \section{Equivalence of spin hydrodynamics with a second-order theory in the non-dissipative limit} \label{LiSt} In the previous section we found that the first-order spin hydrodynamics is unstable and acausal, and that the spin polarization contributes to an instability which has no counterpart in the conventional first-order theory. As mentioned previously, it has been shown that in the non-dissipative limit spin hydrodynamics is equivalent to a particular conventional second-order theory~\cite{Li:2020eon}. It is interesting to investigate whether this conventional second-order theory exhibits the extra modes which arise only due to the presence of the spin polarization, as seen in first-order spin hydrodynamics. In the following, we first discuss how the structure of the equivalent second-order theory of Ref.~\cite{Li:2020eon} can resemble spin hydrodynamics in its pseudo-gauge transformed form.
The symmetric Belinfante-Rosenfeld EMT is obtained through a pseudo-gauge transformation with the choice $S^{\alpha\mu\nu}=\Sigma^{\alpha\mu\nu}$~\cite{Becattini:2011ev,Becattini:2012pp,HEHL197655}; we have \begin{eqnarray} \label{equivSp} T^{\mu\nu}&=&\Theta^{\mu\nu}+\frac{1}{2}\partial_{\alpha}(S^{\alpha\mu\nu}-S^{\mu\alpha\nu}-S^{\nu\alpha\mu})\nonumber\\ &=&\Theta^{\mu\nu}+\frac{1}{2}\partial_{\alpha}(\Sigma^{\alpha\mu\nu}-\Sigma^{\mu\alpha\nu}-\Sigma^{\nu\alpha\mu})\nonumber\\ &=&\frac{1}{2}(\Theta^{\mu\nu}+\Theta^{\nu\mu})-\frac{1}{2}\partial_{\alpha}(\Sigma^{\mu\alpha\nu}+\Sigma^{\nu\alpha\mu})\nonumber\\ &=& \epsilon u^{\mu } u^{\nu }+P\Delta ^{\mu \nu }+\Pi \Delta^{\mu \nu } +h^{\mu}u^{ \nu }+u^{\mu}h^{ \nu }+\pi^{\mu \nu }-\frac{1}{2}\partial_{\alpha}(u^{\mu}S^{\alpha\nu}+u^{\nu}S^{\alpha\mu})\nonumber\\ &&-\frac{1}{2}\partial_{\alpha}(\Delta \Sigma^{\mu\alpha\nu}+\Delta\Sigma^{\nu\alpha\mu})\nonumber\\ &=& \epsilon u^{\mu } u^{\nu }+P\Delta ^{\mu \nu }+\Pi \Delta^{\mu \nu } +h^{\mu}u^{ \nu }+u^{\mu}h^{ \nu }+\pi^{\mu \nu }-\frac{1}{2}(\partial_{\alpha}u^{\mu})S^{\alpha\nu}-\frac{1}{2}(\partial_{\alpha}u^{\nu})S^{\alpha\mu}\nonumber\\ &&-\frac{1}{2}(u^{\mu}\partial_{\alpha}S^{\alpha\nu}+u^{\nu}\partial_{\alpha}S^{\alpha\mu})-\frac{1}{2}\partial_{\alpha}(\Delta \Sigma^{\mu\alpha\nu}+\Delta\Sigma^{\nu\alpha\mu})\nonumber\\ &=& \epsilon u^{\mu } u^{\nu }+P\Delta ^{\mu \nu }+\Big(\Pi-\frac{1}{6}\Delta_{\lambda\rho}\partial_{\alpha}(\Delta \Sigma^{\lambda\alpha\rho}+\Delta\Sigma^{\rho\alpha\lambda})\Big) \Delta^{\mu \nu } +\Big(h^{\mu}-\frac{1}{2}\partial_{\alpha}S^{\alpha\mu}\Big)u^{ \nu }\nonumber\\ &&+u^{\mu}\Big(h^{\nu } -\frac{1}{2}\partial_{\alpha}S^{\alpha\nu}\Big)+ \pi^{\mu \nu }-\frac{1}{2}\Delta^{\mu\nu}_{\lambda\rho}\partial_{\alpha}(\Delta \Sigma^{\lambda\alpha\rho}+\Delta\Sigma^{\rho\alpha\lambda})-\frac{1}{2}(\partial_{\alpha}u^{\mu})S^{\alpha\nu}-\frac{1}{2}(\partial_{\alpha}u^{\nu})S^{\alpha\mu}. \end{eqnarray} The term $\frac{1}{2}(\partial_{\alpha}u^{\mu})S^{\alpha\nu}+\frac{1}{2}(\partial_{\alpha}u^{\nu})S^{\alpha\mu}$ can be decomposed into a combination containing $\Delta^{\mu\nu}S^{\lambda\rho}\omega_{\lambda\rho} $ and $S^{\mu}{}_{\lambda}\omega^{\lambda\nu}$. This transformation makes the spin tensor disappear from the total angular momentum. However, to have $\partial_{\mu} T^{\mu\nu}=0$, we must have~\cite{Fukushima:2020ucl} \begin{eqnarray} \partial_{\mu}\partial_{\alpha}(\Sigma^{\alpha\mu\nu}-\Sigma^{\mu\alpha\nu}-\Sigma^{\nu\alpha\mu})&=&0\nonumber\\ \text{or,}\,\, \partial_{\mu}\partial_{\alpha}(u^{\alpha}S^{\mu\nu}-u^{\mu}S^{\alpha\nu}-u^{\nu}S^{\alpha\mu})&=&0. \end{eqnarray} These conditions act as the equations required of the spin fields in the pseudo-gauge transformed situation. From the above equations it is clear that, if $S^{\alpha\nu}$ is connected to the vorticity, then in the case where the other kinds of dissipation are negligible the EMT looks like that of a second-order theory, containing second derivatives in the gradient expansion of the EMT.
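The second-order nature of the transformed EMT is easy to see at the level of power counting (our illustration): if the spin density is tied to the vorticity, $S^{\alpha\nu}=\chi\,\omega^{\alpha\nu}$ with $\omega^{\alpha\nu}\sim\mathcal{O}(\partial)$, then each of the extra terms generated by the transformation involves two derivatives of the hydrodynamic fields, e.g.
\begin{equation}
(\partial_{\alpha}u^{\mu})\,S^{\alpha\nu}=\chi\,(\partial_{\alpha}u^{\mu})\,\omega^{\alpha\nu}\sim\mathcal{O}(\partial^{2}),\qquad
\partial_{\alpha}\left(u^{\mu}S^{\alpha\nu}\right)\sim\mathcal{O}(\partial^{2}),
\end{equation}
which is precisely the order at which the vorticity-dependent terms of a conventional second-order EMT appear.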
\subsection{Structure of the equivalent second-order theory} If the vorticity is the predominant gradient in the system, the other dissipative gradients responsible for transport being very small (as for a highly rotating fluid), then, with the vorticity $\omega_{\mu\nu}= \frac{1}{2}(\Delta^{\alpha}_{\mu}\partial_{\alpha} u_{\nu}-\Delta^{\alpha}_{\nu}\partial_{\alpha} u_{\mu})$, the symmetric energy-momentum tensor and the conserved charge current of a parity-even plasma are written as~\cite{Li:2020eon} \begin{eqnarray} \label{StEquiv} T^{\mu\nu}&=&(\epsilon+P)u^{\mu}u^{\nu}+Pg^{\mu\nu}+\Delta T^{\mu\nu},\\ \Delta T^{\mu\nu}&=&a_0\Delta^{\mu\nu}\omega^{\lambda\rho}\omega_{\lambda\rho}+a_1 \omega^{\mu}{}_{\lambda}\omega^{\lambda\nu},\\ J^{\nu}&=&nu^{\nu}+\Delta J^{\nu},\\ \Delta J^{\mu}&=&c_1\Delta^{\mu}_{\rho}\partial_{\nu}\omega^{\nu\rho}+c_2\omega^{\mu\nu}\partial_{\nu}\beta \,, \end{eqnarray} where $a_0$, $a_1$, $c_1$ and $c_2$ are second-order transport coefficients. For ideal evolution ($\partial_{\mu}s^{\mu}=0$) these transport coefficients are related~\cite{Li:2020eon}. The assumption behind the structure of the theory is that the vorticity is the dominant scale among the gradients of the theory. In certain cases this can be a physical situation, since for a uniform rotation the vorticity can take arbitrarily high values without entropy generation in the system. In general, however, the local vorticity can take a wide range of values, and the gradients appearing through the vorticity can be large, with significant entropy production. So the assumption of the above theory is valid for the specific situation of high rotation with small gradients of the vorticity. The scales are as follows: for the vorticity, $\omega^{\mu\nu}\sim \delta_{\omega}$; for the symmetric gradient, $\theta^{\mu\nu}=\frac{1}{2}(\Delta^{\alpha}_{\mu}\partial_{\alpha} u_{\nu}+\Delta^{\alpha}_{\nu}\partial_{\alpha} u_{\mu})\sim \partial^{\perp}_{\mu} \alpha\sim \delta$ and $\partial^{\perp}_{\mu} \beta\sim \delta'$; a spatial derivative acting on $\omega^{\mu\nu}$ or $\beta$ brings an extra $\delta'$, such that $\partial^{\perp}_{\mu}\omega^{\mu\nu}\sim \delta'\delta_{\omega}$ and $\partial^{\perp}_{\mu}\partial^{\perp}_{\nu}\beta\sim \delta'^2$, whereas a spatial derivative of $\theta^{\mu\nu}$ or $\alpha$ brings an extra $\delta$: $\partial^{\perp}_{\mu}\theta^{\mu\nu}\sim \partial^{\perp}_{\mu}\partial^{\perp}_{\nu}\alpha\sim \delta^2$. Here $\alpha=\mu/T$ and $\partial^{\perp}_{\mu}=\Delta^{\rho}_{\mu}\partial_{\rho}$. The assumed hierarchy of these scales reads \begin{eqnarray} \label{hir} \delta'^2&\ll&\delta\ll\delta_{\omega}\delta'\ll\delta_{\omega}^2\ll\delta'\ll\delta_{\omega}\ll 1\, . \end{eqnarray} The energy-momentum conservation equation ($\partial_{\mu}T^{\mu\nu}=0$) and $\partial_{\mu}J^{\mu}=0$ can be written as \begin{eqnarray} D\epsilon +(\epsilon+P)\theta+a_0\theta\,\omega^{\lambda\rho}\omega_{\lambda\rho}-a_1\omega^{\mu}{}_{\lambda}u_{\nu}\partial_{\mu}\omega^{\lambda\nu}&=&0,\\ (\epsilon+P) D u^{\alpha}+\Delta^{\alpha\mu}\partial_{\mu} P+a_0(D u^{\alpha})\,\omega^{\lambda\rho}\omega_{\lambda\rho}+a_0 \Delta^{\alpha\mu}\partial_{\mu}(\omega^{\lambda\rho}\omega_{\lambda\rho})\nonumber\\ +a_1\Delta^{\alpha}_{\ \nu}\partial_{\mu}(\omega^{\mu}{}_{\lambda}\omega^{\lambda\nu})&=&0,\\ n\theta +Dn+\partial_{\mu}\Delta J^{\mu}&=&0.
\end{eqnarray} If we linearize the theory around a static equilibrium in which the background quantities are independent of space-time, as considered in Section~\ref{SpinFirst}, then the contributions of the second-order terms vanish in the linearized equations, and consequently we have, \begin{eqnarray} \frac{\partial}{\partial t}\delta \epsilon +h_0\,\delta \theta&=&0,\\ h_0\,\frac{\partial}{\partial t}\delta u^{\alpha}+\Delta^{\alpha\mu}\partial_{\mu} \delta P&=&0,\\ n_0\,\delta \theta + \frac{\partial}{\partial t}\delta n &=&0. \end{eqnarray} If we consider perturbations of the form $\delta \mathcal{Q}=\widetilde{\delta \mathcal{Q}}\,e^{-\omega t+i\vec{k}\cdot\vec{x}}$, these lead to ideal and stable propagation of the perturbations, with only the longitudinal propagating modes $\omega_{2l} =\pm i c_s k$. So, for linear perturbations, the theory is causal and stable even without imposing the relations among the new transport coefficients required for ideal evolution; i.e., the theory is stable and causal for linear perturbations irrespective of whether the system is ideal or dissipative. Now let us investigate whether the first-order spin hydrodynamics discussed in Ref.~\cite{Hattori:2019lfp} gives the same dispersion in this ordering of scales. If we impose the same ordering of scales as in Eq.~\eqref{hir}, with the spin chemical potential being the vorticity (and it being the dominant scale), then the spin hydrodynamics also has no dissipation and we have only $\omega_{2l} =\pm i c_s k$: all the dissipative fluxes are absent at that order, and the structure of the EMT of ideal spin hydrodynamics becomes $\Theta^{\mu \nu }=\epsilon u^{\mu } u^{\nu }+P\Delta ^{\mu \nu }$, without any contribution from the vorticity at all. So, in this scheme of ordering of scales, the first-order spin hydrodynamics becomes ideal and bears no problem of causality and stability. However, if the spin chemical potential, though of the same order as the vorticity, is not identical to it (which is the case in a general situation, since the symmetric shear and the magnetic field can also be sources of spin polarization), then the surviving dissipative fluxes in Eq.~\eqref{dfluxes} are $q^{\mu }\equiv4\lambda \omega^{\mu\nu}u_{\nu}$ and $\phi^{\mu\nu}=-2\gamma\Big [ \frac{1}{2}(\Delta^{\mu \alpha} \partial_{\alpha}u^{\nu} - \Delta^{\nu \alpha} \partial_{\alpha}u^{\mu}) -\Delta^{\mu}_{\rho} \Delta^{\nu}_{\lambda} \omega^{\rho\lambda}\Big ]$. In that case the linear analysis around the static background gives the longitudinal modes, to linear order in the transport coefficients, $\omega_{1li} =\pm i k c_s$ and $\omega_{2li} =\frac{8 \gamma }{\chi _b}$, and the transverse modes $\omega_{1ti} =\frac{8 \gamma }{\chi _b}$ and $\omega_{2ti} =\frac{\pm\sqrt{\left(8 \gamma h_0+\gamma k^2 \chi _s\right)^2+32 \gamma ^2 k^2 h_0 \chi _s}+8 \gamma h_0+\gamma k^2 \chi _s}{2 h_0 \chi _s}$, which give acausal diffusion. In such situations these modes have no counterpart in the equivalent conventional hydrodynamics. Moreover, in a general situation the hierarchy of Eq.~\eqref{hir} will not be respected, and the first-order spin hydrodynamics with dissipative evolution will lead to instability and acausal diffusion. So the equivalence of this particular form of the second-order theory with spin hydrodynamics does not hold in general, but only in the ideal situation.
In such a situation, due to the absence of an entropy-generation mechanism, the structures become equivalent through the last two terms of Eqs.~\eqref{equivSp} and \eqref{StEquiv}. This observation can be further understood from the fact that the pseudo-gauge transformed entropy currents are physically inequivalent for non-equilibrium evolution, as found in~\cite{Fukushima:2020ucl}. So, in a non-equilibrium situation, the pseudo-gauge transformed form of spin hydrodynamics is itself not equivalent to the untransformed one. The equivalent conventional hydrodynamics coincides with the pseudo-gauge transformed spin hydrodynamics only in the ideal limit; therefore, the conventional theory is not expected to be equivalent to the untransformed form of the spin hydrodynamics. This suggests that, in the presence of a spin field, a separate treatment is required, with the spin as an independent field, through the development of a second-order spin hydrodynamics free of the problems of acausality and instability. Whether, in this prescription of scales, the hydrodynamics is always causal and stable can be further understood from the linear analysis of the equivalent spin hydrodynamics that allows ideal evolution, as reported in Ref.~\cite{Li:2020eon}. In the following we investigate the dispersion structure of the spin hydrodynamics with ideal evolution as given in Ref.~\cite{Li:2020eon}. The ideal counterpart of the spin-hydrodynamic energy-momentum tensor can be written as~\cite{Li:2020eon} \begin{eqnarray} \label{spis} \Theta^{\mu\nu}&=&\epsilon u^{\mu } u^{\nu }+P \Delta ^{\mu \nu }+h^{\nu } u^{\mu }+h^{\mu } u^{\nu }-\frac{1}{2} \partial_{\alpha} \Sigma ^{\alpha \mu \nu }\,,\\ \text{with} &&h^{\mu }=\frac{\chi }{2 \beta } \omega ^{\mu \nu } \partial_{\nu} \beta \,,\nonumber\\ \text{and}&&\Sigma ^{\alpha \mu \nu }=S^{\mu \nu } u^{\alpha }.\nonumber \end{eqnarray} The $h^{\nu}$ given above vanishes at first order for a static background. To have a non-zero $h^{\nu}$ at first order we consider a rotating background with the equilibrium fluid-velocity profile \begin{eqnarray} u_o^{\mu }&=&\left(-1,0,0,v_z\right)\, ,\\ v_z&=&\frac{v_0}{L} (y-x),\nonumber \end{eqnarray} and we consider $\frac{v_0}{L}$ to be very small (such that $v_z$ can be treated as a first-order perturbation). Then, with $S^{\mu\nu}=\chi\omega^{\mu\nu}$, the linearized conservation equations become, \begin{eqnarray} 0&=&D_0\delta \epsilon+h_0\vec{\nabla}\cdot \delta \vec{u}+\frac{\chi v_0}{2 L}\,\partial_t\partial_z(\delta u^y-\delta u^x)+v_z\frac{\chi}{4}\,\partial_t(\partial_x^2+\partial_y^2+\partial_z^2)\,\delta u^{z}\,,\\ 0&=&h_0 D_0 \delta u^i+v_z \delta^{z i}\frac{\partial}{\partial t}\delta P+\partial^{i}\delta P+\frac{\chi}{2}\,\partial_t^2\,\delta \omega^{0i}+\frac{\chi}{2}\,\partial_t\partial_l\,\delta \omega^{li}+\frac{\chi}{2}\,\omega_0^{li}\,\partial_l \vec{\nabla}\cdot \delta \vec{u}\nonumber\\ &&-\delta^{iz}\frac{v_0}{L}(\delta h^x-\delta h^y)-\frac{\partial}{\partial t}\delta h^i, \end{eqnarray} where $\delta \omega^{0i}=-\frac{1}{2}v_z\partial_z\delta u^i-\frac{1}{2}\delta^{iz}\delta u^l \partial_lv_z$, $\delta \omega^{ij}=\frac{1}{2}(\partial^{i}\delta u^j-\partial^{j}\delta u^i)+\frac{v_z}{2}(\delta^{iz}\partial_t\delta u^j-\delta^{jz}\partial_t\delta u^i)$, and $D_0\equiv u_0^{\mu}\partial_{\mu}$.
We now work in $\omega$--$k$ space, with $\delta \mathcal{Q}=\widetilde{\delta \mathcal{Q}}\, e^{-i(\omega t-\vec{k}\cdot \vec{x})}$, where $\mathcal{Q}$ stands for the hydrodynamic fields. (Note that up to this point we considered perturbations of the form $\delta \mathcal{Q}=\widetilde{\delta \mathcal{Q}}\, e^{-\omega t+i\vec{k}\cdot \vec{x}}$; from here onward it is the real part of $\omega$ that corresponds to the oscillatory, or wave, mode.) We obtain \begin{eqnarray} \label{isps} 0&=& \delta\epsilon \left(\frac{k_z v_0 \chi \omega }{4 L T_0 \epsilon_T}+i c_s^2 k_x\right)-\frac{\delta n\, k_z v_0 \chi \omega \epsilon_n}{4 L T_0 \epsilon_T}+\delta u_x \Big \{\frac{1}{4} \chi \left(-\frac{v_0 k_x k_z}{L}+i \omega \left(k_y^2+k_z^2\right)\right)-i h_0 \left(\omega -k_z v_z\right)\Big \}\nonumber\\ &&-\frac{1}{4}\, \delta u_y\, \chi \left(\frac{v_0 k_y k_z}{L}+i \omega k_x k_y\right)-\frac{1}{4} \chi\, \delta u_z \left(\frac{v_0 k_z^2}{L}+i \omega k_x k_z\right),\nonumber\\ 0&=& \delta\epsilon \left(-\frac{k_z v_0 \chi \omega }{4 L T_0 \epsilon_T}+i c_s^2 k_y\right)+\frac{\delta n\, k_z v_0 \chi \omega \epsilon_n}{4 L T_0 \epsilon_T}+\delta u_y \Big \{\frac{1}{4} \chi \left(\frac{v_0 k_y k_z}{L}+i \omega \left(k_x^2+k_z^2\right)\right)-i h_0 \left(\omega -k_z v_z\right)\Big \}\nonumber\\ &&-\frac{1}{4}\, \delta u_x\, \chi \left(-\frac{v_0 k_x k_z}{L}+i \omega k_x k_y\right)-\frac{1}{4} \chi\, \delta u_z \left(-\frac{v_0 k_z^2}{L}+i \omega k_y k_z\right),\nonumber\\ 0&=& -i\, \delta\epsilon\, c_s^2 \left(\omega v_z-k_z\right)+\delta u_z \left(-i h_0 \left(\omega -k_z v_z\right)-\frac{k_z v_0 \chi \left(k_x-k_z\right)}{4 L}+\frac{1}{2} i \chi \omega ^2 k_z v_z+\frac{1}{4} i \chi \omega k_x^2\right)\nonumber\\ &&+\delta u_x \left(-\frac{k_x v_0 \chi \left(k_x-k_y\right)}{4 L}+\frac{1}{4} i \chi \omega ^2 k_x v_z+\frac{1}{2} i \chi \omega k_x k_z-\frac{\omega ^2 v_0 \chi }{2 L}\right)\nonumber\\ &&+\delta u_y \left(-\frac{k_y v_0 \chi \left(k_x-k_y\right)}{4 L}+\frac{1}{4} i \chi \omega ^2 k_y v_z+\frac{1}{2} i \chi \omega k_y k_z+\frac{\omega ^2 v_0 \chi }{2 L}\right),\nonumber\\ 0&=& \delta u_x \left(-\frac{v_0 \chi \omega k_z}{2 L}+i k_x \left(\epsilon_0+P_0\right)\right)+\delta u_y \left(\frac{v_0 \chi \omega k_z}{2 L}+i k_y \left(\epsilon_0+P_0\right)\right)+\delta u_z \left(i k_z \left(\epsilon_0+P_0\right)+\frac{1}{4} i k^2 \chi \omega v_z\right)\nonumber\\ &&-i\, \delta\epsilon \left(\omega -k_z v_z\right),\nonumber\\ 0&=& i n_0 k_j\, \delta u^j-i\, \delta n \left(\omega -k_z v_z\right), \end{eqnarray} where $\epsilon_n=\frac{\partial \epsilon}{\partial n}\Big |_T$. We have used $\partial^{\mu} T=\frac{1}{\epsilon_T}\partial^{\mu} \epsilon-\frac{\epsilon_n}{\epsilon_T}\partial^{\mu} n$, where $\epsilon_T=\frac{\partial \epsilon}{\partial T}\Big |_{n}$. In the following we consider $n_0=0$ and $\epsilon_n=0$.
If we consider only a perturbation propagating in the $z$-direction, so that $k_x=k_y=0$, then from the above equations we get, for the energy perturbation, \begin{eqnarray} 0&=&\delta \epsilon \Big [2 i c_s^2 \left(k_z-\omega v_z\right)+\frac{2 \left(\omega -k_z v_z\right) \left(-4 i h_0 \left(\omega -k_z v_z\right)+\frac{v_0 \chi k_z^2}{L}+2 i \chi \omega ^2 k_z v_z\right)}{k \left(4 \epsilon_0+k \chi \omega v_z+4 P_0\right)}\Big ]. \end{eqnarray} For a non-rotating, static background, $v_z=v_0=0$, the above equation has the solutions $\omega=\pm c_s k$. This is the same as for the equivalent conventional hydrodynamics of Ref.~\cite{Li:2020eon}. However, for a small rotation (small $v_0$) we get \begin{eqnarray} \label{unstabSec} \omega_1&=& \pm c_s k_z -\frac{k_z v_z \left\{( c_s^2-2)-\frac{3}{4} \chi_0 c_s^2 k_z^2\right\}}{2}-\frac{i v_0 \chi_0 k_z^2}{8 L},\nonumber\\ \omega_2&=&\frac{2}{\chi_0 k_z v_z}, \end{eqnarray} where $\chi_0=\frac{\chi}{h_0}$. From the first two terms of the dispersion relation for $\omega_1$ it is evident that, in the presence of a rotating background, the propagation speed gets modified by the spin polarization arising from the vorticity (through a non-zero $\chi$): $\Big|\frac{d\, {\rm Re}\,\omega_{1}}{dk_z}\Big|=\Big|\frac{9 \chi_0 c_s^2 k_z^2 v_z}{8}-\frac{1}{2} c_s^2 v_z\pm c_s+v_z\Big|$. This means that for $k_z>\frac{2 \sqrt{c_s^2 v_z-2 \left(\pm c_s\right)-2 v_z+2}}{3 \sqrt{\chi_0}\, c_s \sqrt{v_z}}$ we have $\Big|\frac{d\, {\rm Re}\,\omega_{1}}{dk_z}\Big|>1$, i.e., the sound propagation becomes acausal. The third term describes the decay of the mode (even though we have taken the ideal evolution as in Ref.~\cite{Li:2019qkf}), and this term may lead to an instability for a background rotation with negative $v_0$. Moreover, this decay through diffusion is acausal, owing to the $k_z^2$ dependence of the term. This means that in the non-dissipative limit the prescribed spin hydrodynamics of Ref.~\cite{Li:2019qkf} may lead to acausal and unstable propagation; and the equivalent second-order theory, being equivalent, may then also lead to acausality and instability for a rotating background. The second mode is a wave mode whose propagation speed is inversely proportional to $\chi$, i.e., to the vorticity-to-spin conversion strength, and it also decreases with increasing rotation. The speed of this mode is higher for lower $k_z$; that is, such modes with longer wavelengths propagate faster. This mode is present even if the sound mode is absent ($c_s=0$). Apart from these modes, there are others. Taking the sum of the first two equations of the set given in Eq.~\eqref{isps}, we get, \begin{equation} 0=\frac{1}{4} i \left(\delta u_x+\delta u_y\right) \left(4 \epsilon_0 k_z v_z-4 \epsilon_0 \omega +4 P_0 k_z v_z+\chi \omega k_z^2-4 P_0 \omega \right). \end{equation} This gives a wave mode other than the sound: \begin{equation} \omega=\frac{4 k_z v_z}{4-\chi_0 k_z^2}. \end{equation} However, if we keep $k_x=k_y$ (which follows from $\delta h^z=0$, where $\delta h^z=0$ comes from $\delta h^0=0$, and $\delta h^0=0$ follows from the expression for $h^{\nu}$ in Eq.
~\eqref{spis}) and set the perturbations of the energy and of the $z$-component of the velocity to zero, then from the first two equations we get \begin{eqnarray} 0&=&\left(\text{$\delta $u}_x-\text{$\delta $u}_y\right) \left(-4 \epsilon_0 \left(\omega -k_z v_z\right)+4 P_0 k_z v_z+\chi \omega k_z^2-4 P_0 \omega \right) \Big \{-4 \epsilon_0 \left(\omega -k_z v_z\right)+4 P_0 k_z v_z\nonumber\\ &&+2 \chi \omega k_x^2+\chi \omega k_z^2-4 P_0 \omega \Big \}. \end{eqnarray} This gives two modes, \begin{eqnarray} \omega_1&=&\frac{4 v_z k_z }{4-\chi_0 k_z^2}\nonumber\\ \omega_2&=& \frac{4 v_z k_z}{4 -2 \chi_0 k_x^2-\chi_0 k_z^2}. \end{eqnarray} These are wave-like modes and vanish when the background does not rotate ($v_z=0$). So, for a non-rotating, homogeneous, static background, the conventional hydrodynamics of Ref.~\cite{Li:2020eon} and its equivalent ideal spin hydrodynamics have only sound modes. However, in the case of constant uniform rotation the spin hydrodynamics may become unstable and acausal. This equivalence therefore makes the conventional second-order theory unusable in general, in the sense that it corresponds to an acausal form of the spin hydrodynamics. \section{Summary and Discussions} In the present work, we have carried out a linear mode analysis for two different sets of equations of relativistic spin-hydrodynamics, to study issues related to stability and causality. For the case of dissipative spin-hydrodynamics, it is found that the inclusion of spin dynamics introduces new modes and instabilities into the hydrodynamics. In this case, the spin-hydrodynamics exhibits similar kinds of pathologies to those reported for the relativistic NS equation~\cite{Hiscock:1987zz}. We have investigated the origin of this kind of instability in the theory discussed in Ref.~\cite{Hiscock:1987zz} and traced it to the form of the heat fluxes. The dissipative spin dynamics is characterized by three transport coefficients: i) $\gamma$ (associated with the shear stress), ii) $\lambda$ (associated with heat conduction), and iii) $\chi_1$ (associated with the spin dynamics). In the absence of regular dissipation ($\zeta=\eta=\kappa=0$), among the longitudinal modes described by Eq.~\ref{lnmod1} the first two exhibit acausal behaviour, as $|\frac{d\omega_{1,2l}}{dk}|$ can exceed the speed of light. Similar behaviour is also seen in the regular relativistic NS equation (see Eq.~\ref{lnmod1} with $\zeta=\eta=\kappa=0$). The third mode (in Eq.~\ref{lnmod1}) is a new mode, which is conditionally unstable and can also behave acausally. The fourth mode (in Eq.~\ref{lnmod1}) is a purely unstable mode, and it has a counterpart in the relativistic NS equation (see the last mode in Eq.~\ref{lnmod1} with $\zeta=\eta=\kappa=0$). The transverse modes described by Eq.~\ref{tsmod1} also exhibit acausality and instability. Here the second and third modes are new modes arising due to the spin dynamics. In this case too, the transport coefficient $\lambda$ can drive the instability under certain conditions. It is evident from the modes that the presence of spin polarization affects the hydrodynamic responses through the new coefficients of spin hydrodynamics. We also studied the stability of the dissipationless spin dynamics described in Ref.~\cite{Li:2020eon}.
In this case the linear-mode analysis was performed for two background states of the fluid velocity: i) when the fluid was static, and ii) when the fluid had a constant vorticity. For the first case it is shown that the fluid supports only sound waves. In the second case, the background velocity is in the $z$-direction, with constant vorticity in the $x$ and $y$ directions, and it is possible to study the normal Fourier modes in the $z$-direction. The normal modes for this case are described by Eq.~\ref{unstabSec}. Here, the first mode may give an instability for $v_0<0$, but this instability can be attributed to the source of free energy provided by the finite flow velocity of the background. The flow velocity also alters the sound speed. Moreover, the group velocity $|\frac{d\omega_1}{dk_z}|$ may exceed the speed of light. There is an equivalent second-order dissipationless conventional hydrodynamical theory, as reported in Ref.~\cite{Li:2020eon}, and the underlying pseudo-gauge transformation may give a similar kind of dispersion relation to that described by Eq.~\ref{unstabSec}. These issues make the conventional second-order theory of Ref.~\cite{Li:2020eon} and its equivalent spin-hydrodynamics inadequate for describing hydrodynamics with spin in a general situation. Thus we have analysed the acausal behaviour and unphysical instabilities arising in relativistic spin-hydrodynamics. Our linear analysis shows that relativistic spin-hydrodynamics faces issues similar to those of the relativistic NS equation, with the spin dynamics bringing in new complexities. This points towards the need for causal and stable theories, free from such issues, with the spin density as an independent hydrodynamic field, to describe the spin dynamics of a spin-polarized fluid.
{ "timestamp": "2022-09-20T02:21:27", "yymm": "2209", "arxiv_id": "2209.08652", "language": "en", "url": "https://arxiv.org/abs/2209.08652" }
\section{Introduction} Finding visual correspondence between images is a central problem in computer vision, with numerous applications including simultaneous localization and mapping (SLAM)~\cite{bailey2006simultaneous}, augmented reality (AR)~\cite{peebles2021gan}, object tracking, structure from motion (SfM)~\cite{schonberger2016structure}, optical flow~\cite{fleet2006optical}, and image editing~\cite{barnes2009patchmatch,kim2018recurrent}. Given visually or semantically similar images, unlike sparse correspondence approaches~\cite{lowe2004distinctive,calonder2010brief,yi2016lift,detone2018superpoint,ono2018lf,dusmanu2019d2,revaud2019r2d2} that first detect a set of sparse points and extract corresponding descriptors to find matches across them, dense correspondence~\cite{truong2020glu,Hong_2021_ICCV,zheng2022dip,huang2022flowformer} aims at finding matches for all pixels. Dense correspondence approaches typically follow the classical matching pipeline~\cite{scharstein2002taxonomy,philbin2007object} of feature extraction, cost aggregation, and flow estimation. Much research has been devoted to improving either the feature extraction or the cost aggregation stage, as shown in Fig.~\ref{intuition} (a) and (b). Feature aggregation aims not only to integrate self-similar features within an image but also to align similar features between the two images for matching, such as by using deep dense feature descriptors~\cite{lee2019sfnet,sarlin2020superglue,sun2021loftr,xu2021gmflow,jiang2021cotr}. The advantages of feature aggregation are particularly evident in attention and Transformer-based methods~\cite{vaswani2017attention,sarlin2020superglue,sun2021loftr,jiang2021cotr,xu2021gmflow} thanks to their attention layers with global receptive fields and adaptability to input tokens, which previous works with convolutions~\cite{Rocco18b,lee2019sfnet,jeon2020guided,Hong_2021_ICCV,min2021convolutional} lack. These methods, however, solely aggregate feature descriptors without consideration of cost aggregation. Numerous works~\cite{Rocco18b,huang2019dynamic,li2020correspondence,liu2020semantic,sarlin2020superglue,min2021convolutional,min2021convolutional++,cho2021semantic,sun2021loftr,hong2021cost,huang2022flowformer,cho2022cats++} on dense correspondence proposed methods for the cost aggregation stage instead and demonstrated its importance. During cost aggregation, pair-wise interactions between pixels of the two images are considered by first computing a cost volume between descriptors and then suppressing noise to promote accurate correspondence estimation. Transformer-based methods~\cite{cho2021semantic,huang2022flowformer} are found to benefit significantly from cost aggregation, but they disregard aggregation of feature descriptors even though an improved cost volume constructed using less noisy features would ease the subsequent cost aggregation. We argue that both feature aggregation and cost aggregation should ideally be performed in dense correspondence, as they serve different purposes and the benefits of each are well-established. Although there have been a few approaches~\cite{sarlin2020superglue,Hong_2021_ICCV} that attempt to aggregate both, none utilized transformers, due to their expensive computational complexity. Moreover, we believe that performing the two aggregations independently in a sequential manner allows only one aggregation to benefit the other, but not vice versa, limiting the synergy between the two processes.
In this work, we present a method, which we call \textbf{I}ntegrative \textbf{F}eature and \textbf{C}ost \textbf{A}ggregation with \textbf{T}ransformers (\textbf{IFCAT}), that jointly aggregates feature descriptors and the cost volume in a manner that leverages their complementarity, as shown in Fig.~\ref{intuition} (c). This goal is accomplished in two steps, the first of which employs a self-attention layer to jointly aggregate the descriptors and cost volume. In this stage, the descriptors help to disambiguate the noisy cost volume similarly to cost volume filtering~\cite{hosni2012fast,sun2018pwc}, and the cost volume enhances feature aggregation by introducing matching similarities as a factor for aggregation. The cost volume explicitly represents the similarity of features in one image with respect to the features in the other, and accounting for it drives the features in each image to become more compatible with those of the other. In the subsequent step, we design a cross-attention layer that performs further aggregation aided by both the feature descriptors and the cost volume from earlier aggregations. By constructing better cross-attention maps with both the feature descriptors and the aggregated cost volume, the aggregated features of both images can be mutually improved more effectively. These self- and cross-attention layers are interleaved to facilitate convergence. We further boost performance through hierarchical processing that enhances this complementary aggregation by providing coarser outputs to guide finer-level aggregation. We evaluate the proposed method on semantic and geometric matching tasks. In the experiments, we show that IFCAT outperforms prior works on all the major benchmarks, including SPair-71k~\cite{min2019spair}, PF-PASCAL~\cite{ham2017proposal}, PF-WILLOW~\cite{ham2016proposal} and HPatches~\cite{balntas2017hpatches}, by a significant margin, establishing a new state-of-the-art for all of them. We also conduct an extensive ablation study to validate our approach and the architectural design choices. The pre-trained weights and code will be made available. \section{Related Work} \paragraph{Feature Extraction.} Various methods have been proposed to extract feature descriptors for robust sparse matching. This process involves detecting interest points and extracting the descriptors of the corresponding points. In traditional methods~\cite{liu2010sift,bay2006surf,dalal2005histograms,tola2009daisy}, the matching performance mostly relies on the quality of the feature detection and description method, and outlier rejection across matched points is typically performed with RANSAC~\cite{fischler1981random}. These methods focus on the problem of identifying more meaningful feature points and extracting feature descriptors given an image. Despite their solid performance, they often struggle in cases of extreme appearance or viewpoint changes. To overcome such issues, several methods~\cite{yi2016lift,detone2018superpoint,ono2018lf,dusmanu2019d2,revaud2019r2d2} extract dense deep features used to obtain descriptors tailored for matching. These works have demonstrated that the quality of feature descriptors contributes substantially to matching performance. In accordance with this, recent matching networks~\cite{detone2018superpoint,sarlin2020superglue,min2019hyperpixel,lee2019sfnet,Hong_2021_ICCV,min2020learning,jiang2021cotr,sun2021loftr,xu2021gmflow} proposed effective means for feature aggregation.
Notable works include SuperGlue~\cite{sarlin2020superglue}, which employs graph self- and cross-attention to aggregate deep feature maps. SFNet~\cite{lee2019sfnet} and DMP~\cite{Hong_2021_ICCV} introduce an adaptation layer subsequent to feature extraction in order to learn feature maps well-suited to matching. Recent state-of-the-art works utilize Transformer~\cite{vaswani2017attention} for feature aggregation. COTR~\cite{jiang2021cotr} uses transformers by taking input coordinates and feature maps to infer the correspondence of a given pixel coordinate, with self-attention computed for feature aggregation. LoFTR~\cite{sun2021loftr} also uses self- and cross-attention, but leaves the cost aggregation to a handcrafted method, \textit{i.e.,} optimal transport~\cite{sinkhorn1967diagonal}. Very recently, GMFlow~\cite{xu2021gmflow} also utilized a transformer for feature aggregation in optical flow estimation. Despite its state-of-the-art performance, its disregard of cost aggregation may lead to sub-optimal solutions, a problem we address in this paper. \vspace{-5pt} \paragraph{Cost Aggregation.} In the correspondence literature, recent works carefully design their architectures to effectively aggregate a cost volume. Some works~\cite{dosovitskiy2015flownet,sun2018pwc,hui2018liteflownet,melekhov2019dgc,truong2020glu,Hong_2021_ICCV,jeon2020guided,truong2021learning} use 2D convolutions to establish correspondence while aggregating the cost volume with learnable kernels that have a local receptive field. Although 2D convolutions are used for flow estimation, they in fact also aggregate costs during the process, making them suitable for both the cost aggregation and flow estimation stages. Some works~\cite{min2019hyperpixel,min2020learning,liu2020semantic} utilize handcrafted methods including RHM~\cite{cho2015unsupervised} and the OT solver~\cite{sinkhorn1967diagonal}. These works have inherent limitations, as the handcrafted techniques they use do not take advantage of learning and are susceptible to severe deformations. NC-Net~\cite{Rocco18b} proposes to use 4D convolutions for cost aggregation in order to identify sets of consistent matches by exploring neighborhood consensus. Inspired by this, numerous works~\cite{li2020correspondence,yang2019volumetric,huang2019dynamic,rocco2020efficient,min2021convolutional,min2021convolutional++} either adopted or extended 4D convolutions. For example, DCCNet~\cite{huang2019dynamic} used them for cost embedding, Sparse NC-Net~\cite{li2020correspondence} designed adaptive 4D convolutions, and~\cite{yang2019volumetric,rocco2020efficient,min2021convolutional++} proposed efficient versions. However, they are all limited in the sense that they inherit the limitations of CNN-based architectures, for which the receptive fields are local. Recently, CATs~\cite{cho2021semantic} proposed to use Transformer~\cite{vaswani2017attention} as a means for cost aggregation, and an extension~\cite{cho2022cats++} combined convolutions with Transformer for enhanced cost aggregation. Although they benefit from the global receptive field of self-attention operations, they disregard feature aggregation even though the cost volume they use is constructed from feature maps. FlowFormer~\cite{huang2022flowformer} takes an approach that utilizes Transformer, but it is designed specifically for the optical flow task and does not aggregate features.
By disregarding feature aggregation, these methods may limit their performance due to the resultant noise in the cost volume, which then hampers cost aggregation. \vspace{-5pt} \section{Preliminaries: Self- and Cross-Attention} Self- and cross-attention are the core elements of Transformer~\cite{vaswani2017attention}, owing to their ability to globally model relationships and interactions among input tokens and to their adaptability to the inputs. As a general description, given a sequence of tokens as an input, Transformer~\cite{vaswani2017attention} first linearly projects the tokens to obtain query, key and value embeddings. These are then fed into a scaled dot product attention layer, followed by layer normalization (LN)~\cite{ba2016layer} and a feed-forward network or MLP, to produce an output with the same shape as the input. Each token is attended to by all the other tokens. This attention process can be formulated as: \begin{equation} \begin{split} Q = \mathcal{P}_Q(X),\quad K = \mathcal{P}_K(X), \quad V = \mathcal{P}_V(X), \end{split} \end{equation} where $\mathcal{P}_Q$, $\mathcal{P}_K$ and $\mathcal{P}_V$ denote query, key and value projections, respectively, and $X$ denotes a token with a positional embedding. The obtained query, key and value embeddings then pass through an attention layer: \begin{equation} \mathrm{Attention}(X) = \mathrm{softmax}(\frac{QK^T}{\sqrt{d_k}})V, \label{eq:2} \end{equation} where $d_k$ is the dimension of the key embeddings. Note that the $\mathrm{Attention}(\cdot)$ function can be defined in various ways~\cite{wang2020linformer,liu2021swin,katharopoulos2020transformers,lu2021soft,wu2021fastformer}. A key factor that distinguishes self- and cross-attention is the input to the key and value projections. Given a pair of input tokens, e.g., $X_s$ and $X_t$, the input to the key and value projections when performing self-attention with $X_s$ is the same input, $X_s$, but for cross-attention across $X_s$ and $X_t$, the inputs to the key and value projections are $X_t$. \section{Methodology}\label{sec:3} \subsection{Problem Formulation} Let us denote a pair of visually or semantically similar images, i.e., the source and target, as $I_{s}$ and $I_{t}$, the feature descriptors extracted from $I_{s}$ and $I_{t}$ as $D_{s}$ and $D_{t}$, respectively, and the cost volume computed between the feature maps as $C$. Given $I_s$ and $I_t$, we aim to establish a correspondence field ${F}(i)$ that is defined at all pixels $i$ and warps $I_s$ towards $I_t$. Recent learning-based networks~\cite{rocco2017convolutional,Rocco18b,rocco2020efficient,cho2021semantic} accomplish dense correspondence by extracting features from deep CNNs~\cite{he2016deep} or Transformers~\cite{dosovitskiy2020image} for $D_s$ and $D_t$. The extracted features subsequently undergo $\ell_2$ normalization. A cost volume that consists of all pair-wise feature similarities $C \in \mathbb{R}^{h \times w \times h \times w}$ with height $h$ and width $w$ is then computed and stored: $C(i,j)=D_{s}(i)\cdot {D}_{t}(j)$, where $i$ and $j$ index the source and target features, respectively. To improve the matching performance, existing state-of-the-art methods perform either feature aggregation~\cite{sun2021loftr,xu2021gmflow} or cost aggregation~\cite{cho2021semantic,huang2022flowformer} with Transformer~\cite{vaswani2017attention} such that $\{{D}_{s}',D_{t}'\}=\mathcal{T}(D_s,D_t)$ or $C' = \mathcal{T}(C)$, where $\mathcal{T}(\cdot)$ denotes Transformer. Then, ${F}(i)$ is determined from $C(i,j)$ considering all $j$.
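As a minimal illustration of this formulation (a sketch under our own naming and tensor layout, not the released code of any of the cited works), the cost volume of pair-wise similarities between $\ell_2$-normalized features can be computed as follows:
\begin{lstlisting}[language=python]
import torch
import torch.nn.functional as F

def build_cost_volume(feat_s: torch.Tensor, feat_t: torch.Tensor) -> torch.Tensor:
    """Compute C(i, j) = D_s(i) . D_t(j) for all pixel pairs.

    feat_s, feat_t: (B, C, h, w) feature maps from the backbone.
    Returns a (B, h, w, h, w) cost volume, source pixel index first.
    """
    B, C, h, w = feat_s.shape
    # l2-normalize the descriptors so that similarities are cosine similarities
    feat_s = F.normalize(feat_s, p=2, dim=1).view(B, C, h * w)
    feat_t = F.normalize(feat_t, p=2, dim=1).view(B, C, h * w)
    corr = torch.einsum('bci,bcj->bij', feat_s, feat_t)  # (B, hw, hw)
    return corr.view(B, h, w, h, w)
\end{lstlisting}
A hard correspondence for source pixel $i$ could then be read off as $\arg\max_j C(i,j)$; the following sections replace this naive readout with learned aggregation of both the features and the cost volume.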
\subsection{Motivation and Overview} We argue that solely focusing on either feature or cost aggregation may lead to sub-optimal solutions. While feature aggregation~\cite{sun2021loftr,xu2021gmflow} is a process of aligning similar features based on the rich structural or semantic information present in dense feature maps, cost aggregation~\cite{huang2022flowformer,cho2021semantic} is a process of suppressing noise based on matching similarities. The two aggregations thus consider different information, but they can enhance matching by improving each other's aggregation with the help of the information that the other holds. In this work, we aim to jointly learn feature and cost aggregation modules by establishing a complementary relationship between them. To this end, we first concatenate a cost volume and feature descriptors, which we call a feature cost volume, and feed it to the self-attention layer. Within the self-attention layer, both feature and cost aggregation are performed, where the feature descriptors and the cost volume benefit from one another during the aggregation. Subsequently, we leverage the aggregated features and cost volume for cross-attention, which performs further aggregation aided by the aggregated inputs to the cross-attention layer. These self- and cross-attention layers are interleaved to facilitate convergence. Finally, we formulate our architecture in a coarse-to-fine manner, where the outputs of coarser attention blocks are used to guide the aggregation of finer-level blocks. In the following, we explain each module in detail. \begin{figure*} \centering \includegraphics[width=1\linewidth]{figure/overall.pdf}\hfill\\ \caption{\textbf{Overall architecture of the proposed method.} Given feature maps $D_s$ and $D_t$, and cost volume $C$ as inputs, our method employs self- and cross-attention specifically designed to conduct joint feature aggregation and cost aggregation in a coarse-to-fine manner.} \label{fig:overall}\vspace{-10pt} \end{figure*} \subsection{Integrative Feature and Cost Aggregation} \paragraph{Self-Attention for Integrative Feature and Cost Aggregation.} Subsequent to feature extraction and cost computation, we feed both feature descriptors $D_s,$ $D_t$ and cost volume $C$ into our proposed self-attention layer. We first embed these inputs with linear projection for channel reduction prior to self-attention, and positional embeddings~\cite{vaswani2017attention} are added after the query and key projections, as shown in Fig.~\ref{fig:overall}. As done in~\cite{Rocco18b}, to ensure input order invariance, we consider the bidirectional nature of cost volumes and feed a pair $\{D_s,C\}$ and $\{D_t,C^\mathrm{T}\}$ into the proposed self-attention layer independently, where $C^\mathrm{T}(i,j) = C(j,i)$, and add the outputs. Specifically, as shown in Fig.~\ref{fig:attention}, within the self-attention layer, we first obtain a feature cost volume $[D,C]$ by concatenating $D$ and $C$, where $[\,\cdot\,]$ denotes concatenation. To compute self-attention, we need to define the query, key and value embeddings. Unlike other works~\cite{sun2021loftr,cho2021semantic} that solely aggregate either the feature descriptors or the cost volume, we jointly aggregate both. To this end, we exploit the feature cost volume in computing self-attention and define two independent value embeddings, specifically one for feature projection and the other for cost volume projection.
Note that we do not use the feature cost volume for the value embeddings, to ensure a disentangled aggregation targeted at one output and not the other. Formally, we define the query, key and values as: \begin{equation} \begin{split} Q = \mathcal{P}_{Q}([D , C]), \quad K = \mathcal{P}_{K}([D , C]), \quad V_{D} = \mathcal{P}_{V_D}(D), \quad V_{C} = \mathcal{P}_{V_C}(C), \end{split} \label{eq:cost_volume} \end{equation} where $V_D$ and $V_C$ denote the value embeddings of the feature descriptors and the cost volume, respectively. After computing an attention map by applying softmax over the query and key dot product, we use it to aggregate feature $D$ and cost volume $C$ with $V_D$ and $V_C$ using Eq.~\ref{eq:2} as follows: \begin{equation} \mathrm{Attention}_\mathrm{self-D}(C,D) = \mathrm{softmax}(\frac{QK^T}{\sqrt{d_k}})V_D, \quad \mathrm{Attention}_\mathrm{self-C}(C,D) = \mathrm{softmax}(\frac{QK^T}{\sqrt{d_k}})V_C. \end{equation} Note that any type of attention computation can be utilized,~\textit{e.g.,} additive~\cite{bahdanau2014neural} or dot product~\cite{vaswani2017attention}, while in practice we use the linear kernel dot product with the associative property of matrix products~\cite{katharopoulos2020transformers}. The outputs of this self-attention are denoted as $D'_s$, $D'_t$, and $C'$. In this way, we benefit from two perspectives. From the cost aggregation point of view, the feature map of the feature cost volume can disambiguate the noisy cost volume as proven in the stereo matching literature~\cite{yoon2006adaptive,hosni2012fast,he2011global},~\textit{i.e.,} cost volume filtering, which aids the cost aggregation process. From the feature aggregation point of view, the cost volume explicitly represents the similarity of features in one image with respect to the features in the other, and accounting for it drives the features in each image to become more compatible with those of the other. \vspace{-5pt} \begin{figure*}[t] \centering \renewcommand{\thesubfigure}{} \subfigure[(a) Proposed self-attention] {\includegraphics[width=0.5756091954022988\textwidth]{figure/attentions1.pdf}}\hfill \subfigure[(b) Proposed cross-attention] {\includegraphics[width=0.4043908045977011\textwidth]{figure/attentions2.pdf}}\hfill \\ \vspace{-5pt} \caption{\textbf{Illustration of the proposed self- and cross-attention:} (a) self-attention layer that performs joint feature aggregation and cost aggregation, and (b) cross-attention layer with matching distribution for enhanced feature aggregation. } \label{fig:attention}\vspace{-10pt} \end{figure*} \paragraph{Cross-Attention with Matching Distribution.} In the proposed cross-attention layer, the aggregated features and cost volume are explicitly used for further aggregation, and we condition both feature descriptors on both input images via this layer. By exploiting the outputs of the self-attention layer, the cross-attention layer performs cross-attention between feature descriptors for further feature aggregation, using the improved feature descriptors $D'_s$, $D'_t$ and enhanced cost volume $C'$ from the earlier aggregations. Concretely, as shown in Fig.~\ref{fig:attention}, we first treat the input cost volume as a cross-attention map, since applying a softmax function over the cost volume is tantamount to obtaining an attention map.
In this way, we can perform more effective aggregation, as the input cost map to the cross-attention layer is guided by residual connections that carry information from the improved features and the enhanced cost volume, as further explained in the next paragraph. Note that features $D'_s$ and $D'_t$ undergo average pooling followed by layer normalization~\cite{ba2016layer} and a linear projection layer to match the spatial resolution of $C'$, as will be further explained in Section~\ref{coarse-to-fine}. Formally, we first define the cross-attention map and value as $QK^T = C'$ and $V_{D'} = \mathcal{P}_{V_D}(D')$, respectively. The attention process for cross-attention is then defined as follows: \begin{equation} \mathrm{Attention}_\mathrm{cross}(C',D') = \mathrm{softmax}(\frac{C'}{\sqrt{d_k}})V_{D'}. \label{2} \end{equation} The outputs of this cross-attention are denoted as $D''_s$, $D''_t$, and $C''$. \vspace{-5pt} \begin{figure*}[t] \centering \renewcommand{\thesubfigure}{} \subfigure[(a) Source] {\includegraphics[width=0.124\textwidth]{figure/attn/src_07.png}}\hfill \subfigure[(b) Target] {\includegraphics[width=0.124\textwidth]{figure/attn/trg_07.png}}\hfill \subfigure[(c) $C^1$ ] {\includegraphics[width=0.124\textwidth]{figure/attn/self_raw_cost_0_07.png}}\hfill \subfigure[(d) $C^2$] {\includegraphics[width=0.124\textwidth]{figure/attn/self_raw_cost_1_07.png}}\hfill \subfigure[(e) $C^3$] {\includegraphics[width=0.124\textwidth]{figure/attn/self_raw_cost_2_07.png}}\hfill \subfigure[(f) $D'_s$ $\cdot$ $D'_t$] {\includegraphics[width=0.124\textwidth]{figure/attn/self_feat_cost_07.png}}\hfill \subfigure[(g) $C'$] {\includegraphics[width=0.124\textwidth]{figure/attn/cross_agg_cost_07.png}}\hfill \subfigure[(h) $D''_s$ $\cdot$ $D''_t$] {\includegraphics[width=0.124\textwidth]{figure/attn/cross_feat_cost_07.png}}\hfill \vspace{-5pt}\\ \caption{\textbf{Visualization of attention maps:} (a) source image, (b) target image, (c)-(e) raw correlations, (f) a cost volume constructed with features aggregated with self-attention, (g) a cost volume aggregated with self-attention, and (h) a cost volume constructed with features aggregated with cross-attention.} \label{fig:multi-cost}\vspace{-10pt} \end{figure*} \paragraph{Enhanced Aggregation with Improved Features and Cost Volume.} Within the proposed attention block, it is shown in Fig.~\ref{fig:overall} that the outputs of the self- and cross-attention layers, which include aggregated feature maps and a cost volume, are connected to the next layer or added to the input cost volume. Specifically, the feature maps are used in two ways: for cost volume construction and as inputs to aggregation in subsequent layers. For cost volume construction, the aggregated features are processed according to Eq.~\ref{eq:cost_volume} and the result is added to the cost volume input of the next layer by a residual connection. This is repeated for both self-attention and cross-attention layers across $N$ attention blocks, where the aggregated pairs of one block are fed as input to the following attention block. Similar to the feature maps, the cost volume is progressively improved through aggregation and better features as it passes through the attention blocks. Through these mechanisms of the proposed joint aggregation, the feature maps and cost volume help each other in self-attention and also facilitate further aggregation by providing an enhanced cross-attention map and improved features.
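To make the above concrete, the following is a minimal single-head sketch of the two proposed attention layers (our own simplification: vanilla softmax attention instead of the linear-kernel attention used in practice; positional embeddings, residual connections, normalization and multi-head structure are omitted, and all module and function names are illustrative):
\begin{lstlisting}[language=python]
import torch
import torch.nn as nn

class IntegrativeSelfAttention(nn.Module):
    """Self-attention over the concatenated feature-cost volume [D, C].

    Queries/keys come from [D, C]; two separate value projections keep the
    feature aggregation and the cost aggregation disentangled (Eqs. 3-4).
    """
    def __init__(self, feat_dim: int, cost_dim: int, d_k: int = 64):
        super().__init__()
        self.to_q = nn.Linear(feat_dim + cost_dim, d_k)
        self.to_k = nn.Linear(feat_dim + cost_dim, d_k)
        self.to_v_feat = nn.Linear(feat_dim, feat_dim)
        self.to_v_cost = nn.Linear(cost_dim, cost_dim)
        self.scale = d_k ** -0.5

    def forward(self, feat, cost):
        # feat: (B, HW, feat_dim); cost: (B, HW, cost_dim), one row of
        # similarities per source pixel (cost_dim plays the role of hw).
        x = torch.cat([feat, cost], dim=-1)           # feature cost volume
        logits = self.to_q(x) @ self.to_k(x).transpose(-1, -2) * self.scale
        attn = torch.softmax(logits, dim=-1)          # (B, HW, HW)
        return attn @ self.to_v_feat(feat), attn @ self.to_v_cost(cost)

def cross_attention_with_matching_distribution(cost, feat_other, d_k=64):
    """Cross-attention where the aggregated cost volume itself plays the
    role of the QK^T attention logits (Eq. 5); value projection omitted."""
    attn = torch.softmax(cost * d_k ** -0.5, dim=-1)  # (B, HW_s, HW_t)
    return attn @ feat_other                          # aggregate the other image's features
\end{lstlisting}
The sketch is only meant to show where $[D,C]$ enters the query/key path, where the two value projections diverge, and how the cost volume replaces $QK^T$ in the cross-attention step.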
\subsection{Coarse-to-Fine Formulation}\label{coarse-to-fine} To improve the robustness of fine-scale estimates, we extend our architecture to a coarse-to-fine approach through pyramidal processing, as done in~\cite{jeon2018parn,melekhov2019dgc,truong2020glu,Hong_2021_ICCV}. Specifically, we use the last feature map at each pyramid level, i.e., the feature maps from $\mathrm{conv3_x}$ to $\mathrm{conv5_x}$ when ResNet~\cite{he2016deep} is used. We first process a coarse pair of refined feature maps and aggregated cost volume, and, similar to~\cite{zhao2021multi}, which learns complementary correspondence by adding the cost volume of the previous scale, we progressively learn complementary descriptors and correspondences by adding the previous-level outputs to those of the next level. The overall architecture is shown in Fig.~\ref{fig:overall}. A straightforward solution would be to upsample all the outputs of the cross-attention layer, following a pyramidal structure. However, the increasing computational and memory burden with respect to the cost volume resolution makes this infeasible. To alleviate such issues, we fix the resolution of the cost volume to the coarsest resolution and utilize 4D convolutions to downsample the spatial resolution when finer cost volumes are added to the coarsest cost volume. The final output of the network is then the sum, across all levels, of the cost volumes computed from the enhanced feature maps after each attention block. Formally, given the outputs of the attention block at each level, ${D}''^{,l}_s, {D}''^{,l}_t$ and ${C}''^{,l}$, where $l$ denotes the $l$-th level, we upsample the aggregated features using bilinear interpolation and add them to the raw feature descriptors extracted from $I_s$ and $I_t$ defined at the next level: ${D}^{{l+1}}_s = {D}^{{l+1}}_s + \mathrm{up}({D}''^{,l}_s)$, where ${D}^{{l+1}}_t$ is defined similarly. In addition, ${C}^{l+1} = {C}^{l+1} + \mathrm{Conv4d}({C}''^{,l})$. Visualizations of the cost volumes are shown in Fig.~\ref{fig:multi-cost}. Finally, given the features ${D}''_s$ and ${D}''_t$ at each level, we compute the correlation map between them, and the cost volumes across all levels are summed to obtain the final output ${C}^*$ that is used to estimate the final flow field, as shown at the bottom of Fig.~\ref{fig:overall}. \subsection{Training Objective} To train the networks, we first compute the correlation map between ${D}''_s$ and ${D}''_t$ at each level and then transform it into a dense flow field $F_\mathrm{pred}$ using the soft-argmax operator~\cite{lee2019sfnet}. Then, we compare the predicted dense flow field with the ground-truth flow field $F_\mathrm{GT}$. Specifically, we use the average end-point error (AEPE), computed by averaging the Euclidean distance between the ground-truth and estimated flow, as the objective function, and sum the AEPE loss across all levels. We thus formulate the objective function as $\mathcal{L}= \|F_\mathrm{GT}-F_\mathrm{pred}\|_{2}$. Flow fields may instead be obtained from ${C}''$, but we empirically find that slightly better results are obtained when feature maps are used.
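For concreteness, a minimal sketch of the flow estimation and training objective described above is given below (the temperature value and function names are our own choices, not necessarily those of~\cite{lee2019sfnet}):
\begin{lstlisting}[language=python]
import torch

def soft_argmax_flow(corr: torch.Tensor, temperature: float = 0.02) -> torch.Tensor:
    """Convert a correlation volume into a dense flow field.

    corr: (B, H, W, H, W) -- similarity of each source pixel (dims 1-2)
          to every target pixel (dims 3-4).
    Returns flow: (B, 2, H, W), displacement from source to target.
    """
    B, H, W = corr.shape[:3]
    prob = torch.softmax(corr.view(B, H, W, -1) / temperature, dim=-1)
    prob = prob.view(B, H, W, H, W)
    ys = torch.arange(H, dtype=corr.dtype, device=corr.device)
    xs = torch.arange(W, dtype=corr.dtype, device=corr.device)
    # expected target coordinates under the matching distribution
    ty = (prob.sum(dim=4) * ys).sum(dim=-1)          # (B, H, W)
    tx = (prob.sum(dim=3) * xs).sum(dim=-1)          # (B, H, W)
    gy, gx = torch.meshgrid(ys, xs, indexing='ij')
    return torch.stack([tx - gx, ty - gy], dim=1)    # (B, 2, H, W)

def aepe_loss(flow_pred: torch.Tensor, flow_gt: torch.Tensor) -> torch.Tensor:
    """Average end-point error: mean Euclidean distance between flows."""
    return (flow_pred - flow_gt).pow(2).sum(dim=1).sqrt().mean()
\end{lstlisting}
The softmax temperature controls how close the expectation is to a hard argmax while keeping the operator differentiable.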
\begin{figure*}[t] \centering \renewcommand{\thesubfigure}{} \subfigure[Source $I_s$ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Target $I_t$] {\includegraphics[width=0.499\textwidth]{figure/semantic_qualitative/9495.png}}\hfill \subfigure[Source $I_s$ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Target $I_t$] {\includegraphics[width=0.499\textwidth]{figure/semantic_qualitative/963.png}}\hfill\\ \vspace{-5pt} \subfigure[Source $I_s$] {\includegraphics[width=0.124\textwidth]{figure/dense_qualitative/46_src_img.png}}\hfill \subfigure[Target $I_t$] {\includegraphics[width=0.124\textwidth]{figure/dense_qualitative/46_trg_img.png}}\hfill \subfigure[GT] {\includegraphics[width=0.124\textwidth]{figure/dense_qualitative/46_gt.png}}\hfill \subfigure[Prediction] {\includegraphics[width=0.124\textwidth]{figure/dense_qualitative/46_pred.png}}\hfill \subfigure[Source $I_s$] {\includegraphics[width=0.124\textwidth]{figure/dense_qualitative/39_src_img.png}}\hfill \subfigure[Target $I_t$] {\includegraphics[width=0.124\textwidth]{figure/dense_qualitative/39_trg_img.png}}\hfill \subfigure[GT] {\includegraphics[width=0.124\textwidth]{figure/dense_qualitative/39_gt.png}}\hfill \subfigure[Prediction] {\includegraphics[width=0.124\textwidth]{figure/dense_qualitative/39_pred.png}}\hfill\\ \vspace{-5pt} \caption{\textbf{Qualitative results on SPair-71k~\cite{min2019spair} (top) and HPatches~\cite{balntas2017hpatches} (bottom).}} \label{fig:qualitative}\vspace{-10pt} \end{figure*} \section{Experiments} \subsection{Implementation Details}\label{sec:4.1} For the backbone network, we use VGG-16~\cite{simonyan2014very} for dense alignment and ResNet-101~\cite{he2016deep} for dense semantic correspondence, both pretrained on ImageNet~\cite{deng2009imagenet}. We use $N = (2,2,2)$ attention blocks for efficiency in training. We use the data augmentation of~\cite{cho2021semantic} for dense semantic correspondence, while no augmentation is used for dense alignment. Our networks are trained with the input images resized to 512$\times$512. We implement our network using PyTorch~\cite{paszke2017automatic} and employ the AdamW~\cite{loshchilov2017decoupled} optimizer with an initial learning rate of 1e$-$4 for the IFCAT layers, which we gradually decrease using step learning rate decay. We train our networks for 50 epochs on SPair-71k~\cite{min2019spair} and DPED-CityScape-ADE~\cite{truong2020glu}, and 300 epochs for PF-PASCAL~\cite{ham2017proposal}. More details can be found in the supplementary material. The code and pretrained weights will be made publicly available. \begin{table*}[!t] \begin{center} \caption{\textbf{Quantitative evaluation on standard semantic correspondence benchmarks~\cite{min2019spair,ham2016proposal,ham2017proposal}.} Higher PCK is better. The best results are in bold, and the second best results are underlined.
\textit{Reso.: Resolution, F.A.: Feature Aggregation, C.A.: Cost Aggregation.} }\label{tab:main_table}\vspace{+5pt} \scalebox{0.67}{ \begin{tabular}{l|c|c|c|c|ccc|cc|cc} \hlinewd{0.8pt} \multirow{3}{*}{Methods}&\multirow{3}{*}{Reso.} &\multirow{3}{*}{F.A.} & \multirow{3}{*}{C.A.}& SPair-71k~\cite{min2019spair} & \multicolumn{3}{c|}{PF-PASCAL~\cite{ham2017proposal}} & \multicolumn{4}{c}{PF-WILLOW~\cite{ham2016proposal}} \\ &&& &PCK @ $\alpha_{\text{bbox}}$ & \multicolumn{3}{c|}{PCK @ $\alpha_{\text{img}}$} & \multicolumn{2}{c}{PCK @ $\alpha_{\text{bbox}}$} & \multicolumn{2}{c}{PCK @ $\alpha_{\text{bbox-kp}}$} \\ & & & &0.1 & 0.05 & 0.1 & 0.15 & 0.05 & 0.1 & 0.05 &0.1 \\ \midrule\midrule CNNGeo~\cite{rocco2017convolutional} &Ori&-&2D Conv. &20.6 &41.0 &69.5 &80.4 &- &- &36.9&69.2\\ A2Net~\cite{seo2018attentive} &-&-&2D Conv. &22.3 &42.8 &70.8 &83.3 &- &- &36.3&68.8\\ WeakAlign~\cite{rocco2018end} &Ori&-&2D Conv. &20.9 &49.0 &74.8 &84.0 &- &- &37.0&70.2\\ RTNs~\cite{kim2018recurrent} &- & -& 2D Conv. &25.7 &55.2 &75.9 &85.2 &- &- &41.3&71.9\\ SFNet~\cite{lee2019sfnet}&288/Ori & 2D Conv. & - &- &53.6 &81.9 &90.6 &- &- &46.3&74.0\\ NC-Net~\cite{Rocco18b} &240/ori/400 &-&4D Conv. &20.1 &54.3 &78.9 &86.0 &- &- &33.8&67.0\\ DCC-Net~\cite{huang2019dynamic}&240/ori/- &- &4D Conv. &- &55.6 &82.3 &90.5 &- &- &43.6&73.8\\ HPF~\cite{min2019hyperpixel}&Max 300 &- &RHM&28.2 &60.1 &84.8 &92.7 &- &- &45.9&74.4\\ GSF~\cite{jeon2020guided}&- & -&2D Conv. &36.1&65.6 &87.8 &95.9 &- &- &\underline{49.1}&\textbf{78.7}\\ ANC-Net~\cite{li2020correspondence} &- &-&4D Conv. &- &- &86.1 &- &- &- &-&-\\ DHPF~\cite{min2020learning}&240 &-&RHM &37.3 &75.7 &90.7 &95.0 &49.5 &77.6 &- &71.0\\ SCOT~\cite{liu2020semantic}&Max 300& -&OT-RHM &35.6 &63.1 &85.4 &92.7 &- &- &{47.8}&{76.0}\\ CHM~\cite{min2021convolutional} &240 & -& 6D Conv. &46.3 &80.1 &91.6 &94.9 &{52.7} &{79.4} &-&69.6\\ PMNC~\cite{lee2021patchmatch} &400 &- & 4D Conv. & 50.4& \underline{82.4}& 90.6& -& -&- &-&-\\ CATs~\cite{cho2021semantic} &256 &-&Trans.&49.9 &75.4 &\underline{92.6} &\underline{96.4} &50.3 &79.2 & 40.7&69.0\\ MMNet-{FCN}~\cite{zhao2021multi} &224$\times$320 &Conv.+Trans.& - & 50.4&81.1 & 91.6& 95.9&- &- &-&-\\ PMD~\cite{li2021probabilistic} &- & Attention&-&37.4&-&90.7&-&-&-&-&75.6\\ PWarpC-NC-Net~\cite{truong2022probabilistic} &Ori &-&4D Conv.&{52.0}&79.2&92.1&95.6&-&-&48.0&\underline{76.2}\\ VAT~\cite{hong2022cost} &512 &-&Conv.+Trans.&\underline{55.5} &78.2 &{92.3} &{96.2} &\underline{52.8} &\textbf{81.6} & -&-\\ \midrule \textbf{IFCAT} (Ours) &Ori &Trans. &Trans.&\textbf{64.4} &\textbf{88.0} &\textbf{94.8} &\textbf{97.9} &\textbf{58.6} &\underline{81.2} &\textbf{50.4}&74.2\\ \hlinewd{0.8pt} \end{tabular} } \end{center}\vspace{-10pt} \end{table*} \subsection{Dense Semantic Correspondence} In this section, we evaluate the effectiveness of the proposed method for dense correspondence. For a fair comparison, following~\cite{kim2018recurrent,huang2019dynamic,min2019hyperpixel,min2020learning,cho2021semantic}, when evaluating on SPair-71k~\cite{min2019spair} we train the proposed method on SPair-71k~\cite{min2019spair}, and when evaluating on PF-PASCAL~\cite{ham2017proposal} and PF-WILLOW~\cite{ham2016proposal} we train on PF-PASCAL~\cite{ham2017proposal}. SPair-71k~\cite{min2019spair} consists of 70,958 image pairs with extreme and diverse viewpoints, scale variations, and rich annotations for each image pair. 
PF-PASCAL~\cite{ham2017proposal} contains 1,351 image pairs over 20 object categories with keypoint annotations, and PF-WILLOW~\cite{ham2016proposal} is composed of 900 image pairs from 4 categories. We use the percentage of correct keypoints (PCK) as the evaluation metric, for which higher values are better. Concretely, given a predicted keypoint $k_\mathrm{pred}$ and a ground-truth keypoint $k_\mathrm{GT}$, we count the number of predicted keypoints that satisfy the following condition: $d( k_\mathrm{pred},k_\mathrm{GT}) \leq \alpha \cdot \mathrm{max}(H,W)$, where $d(\,\cdot\,)$ denotes Euclidean distance, $\alpha$ denotes a threshold value, and $H$ and $W$ denote the height and width of the object bounding box or of the entire image. Note that, as confirmed in~\cite{truong2022probabilistic,cho2022cats++}, using different ground-truth resolutions for evaluation leads to different results, so we evaluate at the original resolution. The results are summarized in Table~\ref{tab:main_table}. As shown, IFCAT clearly sets a new state-of-the-art for the three dense semantic correspondence benchmarks. This demonstrates the effectiveness of joint aggregation, as IFCAT outperforms methods that focus on either feature or cost aggregation alone. \vspace{-5pt} \begin{table}[] \centering \caption{\textbf{Quantitative evaluation on HPatches~\cite{balntas2017hpatches}.} We evaluate on both HPatches-240 and HPatches original. Lower AEPE is better. We divide methods into two groups: multiple feed-forward and single feed-forward.} \scalebox{0.56}{ \begin{tabular}{l|c|c|ccccc|c|c|ccccc|c|c} \toprule \multirow{3}{*}{Methods}&\multirow{3}{*}{F.A.}&\multirow{3}{*}{C.A.}&\multicolumn{7}{c|}{HPatches (240 $\times$ 240)} &\multicolumn{7}{c}{HPatches}\\\cline{4-17} &&&\multicolumn{6}{c|}{AEPE $\downarrow$}&\multirow{2}{*}{PCK}&\multicolumn{6}{c|}{AEPE $\downarrow$}&\multirow{2}{*}{PCK}\\\cline{4-9}\cline{11-16} &&&I&II&III&IV&V&Avg.& &I&II&III&IV&V&Avg.&\\\midrule\midrule COTR~\cite{jiang2021cotr} &Trans. &\xmark &-&-&-&-&-&-&-&-&-&-&-&-&\textbf{7.75}&\textbf{91.10} \\ RANSAC-Flow~\cite{shen2020ransac} &2D Conv. &\xmark &\textbf{0.51}&\underline{2.36}&\underline{2.91}&\textbf{4.41}&\bf{5.12}&\underline{3.06} &-&-&-&-&-&-&-&- \\ RANSAC-DMP~\cite{Hong_2021_ICCV} &2D Conv. &2D Conv. &\underline{0.53}&\textbf{2.21}&\textbf{2.76}&\underline{4.62}&\underline{5.14}&\textbf{3.05}&\textbf{96.28}&\textbf{4.32}&\textbf{11.21}&\textbf{22.80}&\textbf{31.34}&\textbf{33.64}&\underline{20.65}&\textbf{75.35} \\\hline CNNGeo~\cite{rocco2017convolutional} &\xmark &2D Conv.&9.59&18.55&21.15&27.83&35.19&22.46&-&-&-&-&-&-&-&- \\ DGC-Net~\cite{melekhov2019dgc} &\xmark &2D Conv. &1.74&5.88&9.07&12.14&16.50&9.07&50.01&5.71&20.48&34.15&43.94&62.01&33.26&58.06 \\ GLU-Net~\cite{truong2020glu} &\xmark &2D Conv. &\textbf{0.59}&\underline{4.05}&\underline{7.64}&\underline{9.82}&\underline{14.89}&\underline{7.40}&\underline{83.47}&\underline{1.55}&12.66&27.54&32.04&52.47&25.05&{78.54} \\ GOCOR-GLU-Net~\cite{truong2020gocor} &\xmark&Hand-crafted &-&-&-&-&-&-&-&\textbf{1.29}&\textbf{10.07}&\underline{23.86}&\underline{27.17}&\underline{38.41}&\underline{20.16}&\textbf{81.43} \\ DMP~\cite{Hong_2021_ICCV} &2D Conv. &2D Conv. &1.21&5.12&12.31&13.68&16.12&9.69&79.21&3.21&15.54&32.54&38.62&63.43&30.64&63.21 \\\midrule\midrule \textbf{IFCAT} (Ours)&Trans. &Trans.
&\underline{0.65}&\textbf{3.33}&\textbf{5.41}&\textbf{6.91}&\textbf{10.09}&\textbf{5.27}&\textbf{90.90}&1.90&\underline{10.72}&\textbf{18.95}&\textbf{24.36}&\textbf{31.40}&\textbf{17.59}&\underline{80.41}\\\bottomrule \end{tabular}} \label{tab:hpatches}\vspace{-10pt} \end{table} \subsection{Dense Alignment} We further evaluate our method on the dense alignment dataset HPatches~\cite{balntas2017hpatches}. For each scene, there is a source image and five target images from different viewpoints, along with corresponding ground-truth flows. The resolutions of HPatches range from 450 $\times$ 600 to 1,613 $\times$ 1,210. As in~\cite{melekhov2019dgc}, we also evaluate on downsampled HPatches~\cite{balntas2017hpatches}, where the images are resized to a low resolution (240 $\times$ 240). For the evaluation metric, we use the average end-point error (AEPE), computed by averaging the Euclidean distance between the ground-truth and estimated flow, and the percentage of correct keypoints (PCK), computed as the ratio of estimated keypoints within a threshold of the ground truth to the total number of keypoints. When evaluating on HPatches~\cite{balntas2017hpatches}, following~\cite{truong2020glu,truong2020gocor} we train our networks on the DPED-CityScape-ADE~\cite{truong2020glu} dataset. Table~\ref{tab:hpatches} summarizes the quantitative results. As shown, IFCAT outperforms existing works by a large margin, clearly achieving state-of-the-art performance. Note that a fair comparison to COTR~\cite{jiang2021cotr} is not feasible because its use of the zoom-in technique with a multiple inference strategy dramatically boosts its performance as a trade-off against speed. RANSAC-Flow~\cite{shen2020ransac} and RANSAC-DMP~\cite{Hong_2021_ICCV} exceed the performance of our method, but they adopt two-stage inference where the first stage aims to find the homographic transformation between an image pair, which gives them an advantage on the HPatches dataset. On its own, IFCAT is demonstrated to be effective, highlighting the importance of integrative aggregation. \subsection{Ablation Study}\label{sec:4.5} \paragraph{Aggregation Strategy.} In this ablation study, we compare the performance of different aggregation strategies. Table~\ref{tab:aggre} summarizes the results. There are seven components we evaluate. In (\textbf{I}) and (\textbf{II}), we report the results of aggregating only the features or only the cost volume. We investigate the effectiveness of conditioning features on both images and of performing both aggregations in (\textbf{III}) and (\textbf{IV}). Lastly, we progressively add the proposed components in (\textbf{V}) to (\textbf{VII}) to demonstrate their significance. For a fair comparison, we trained all strategies with a single level except for (\textbf{VII}). \begin{wraptable}{r}{7.5cm} \vspace{-5mm} \caption{\textbf{Aggregation strategies for IFCAT.}} \label{tab:aggre}\vspace{+5pt} \centering \resizebox{\linewidth}{!}{% \begin{tabular}{ll|cc} \hlinewd{0.8pt} &\multirow{2}{*}{Components} &SPair-71k &HPatches \\ &&$\alpha_{\text{bbox}}$ = 0.1 $\uparrow$ &AEPE $\downarrow$\\ \midrule \textbf{(I)} &Feature self-att. & 36.1&84.08\\ \textbf{(II)} & Cost self-att. & 28.7&54.80\\ \textbf{(III)} & Feature self-att. + cross-att. &38.5 & 81.72\\ \textbf{(IV)} & Sequential (\textbf{III}) + cost self-att. &56.5&49.00\\ \textbf{(V)} & Integrative self-att. &54.7 &34.85\\ \textbf{(VI)}&Integrative self- and cross-att.
&58.5&24.41\\ \textbf{(VII)}& (\textbf{VI}) + hierarchical processing &64.4&17.59\\ \hlinewd{0.8pt} \end{tabular}% } \end{wraptable} As shown, solely aggregating either the features or the cost volume yields limited performance, where (\textbf{II}) can be seen as a simplified version of CATs~\cite{cho2021semantic}. However, we observe that combining self- and cross-attention for feature aggregation, which is highly similar to LoFTR~\cite{sun2021loftr}, helps to boost the performance by conditioning features on both images. This highlights the importance of providing information from the other image, which implies that providing a cost volume that explicitly represents similarity information with respect to each image would help to establish a more accurate correspondence field. Interestingly, (\textbf{IV}) shows that performing both feature and cost aggregation yields a large performance boost, demonstrating that cost aggregation benefits from powerful feature representations tailored to matching. From (\textbf{V}) to (\textbf{VII}), each component contributes appreciably to the improvement of performance, clearly showing the effectiveness of our proposed components. This confirms that, as the result of (\textbf{III}) implies, leveraging the complementarity of features and cost volume is of prime importance. \vspace{-5pt} \paragraph{Depth of Attention Block.} As shown in Fig.~\ref{fig:overall}, we can stack the attention block at each level to increase model capacity and allow further aggregation. In this ablation study, we show the effects of varying the hyperparameter $N$. We additionally report memory consumption, run time, and the number of learnable parameters to indicate efficiency. \begin{wraptable}{r}{8.5cm} \vspace{-5mm} \caption{\textbf{Ablations on varying depth of attention block.}} \label{tab:depth}\vspace{+5pt} \centering \resizebox{\linewidth}{!}{% \begin{tabular}{l|cc|ccc} \hlinewd{0.8pt} \multirow{2}{*}{\# of N} &SPair-71k &HPatches &Memory &Runtime &\# of param.\\ &$\alpha_{\text{bbox}}$ = 0.1 $\uparrow$ &AEPE $\downarrow$ &[MB]&[ms]&[M]\\ \midrule (1,0,0) &55.4&35.02&335.09&35.09&0.58\\
(2,0,0) &58.5&24.41&354.57&47.23&1.04\\ (1,1,1) &63.7&20.49&845.06&91.96&0.83\\ (2,2,2) &64.4&17.59&874.25&138.05&1.55\\ (3,3,3) &64.7&17.43&903.40&188.43&2.26\\ \hlinewd{0.8pt} \end{tabular}% } \end{wraptable} The results are summarized in Table~\ref{tab:depth}. We consistently observe that as the attention block depth increases, the performance is boosted, demonstrating that the earlier aggregations help the subsequent aggregations. We generally observe increasing memory consumption, run-time and number of parameters as $N$ increases, which is the trade-off for improved performance. Note that during training, the number of parameters has a direct influence on memory consumption, which is a limitation of increasing the attention block depth. \section{Conclusion} In this paper, we proposed a novel transformer-based architecture, called Integrative Feature and Cost Aggregation with Transformers (IFCAT), that interleaves feature refinement and cost aggregation by establishing a complementary relationship between them. Our design based on self- and cross-attention is tailored for matching and for the joint enhancement of feature descriptors and the cost volume. The method is formulated in a coarse-to-fine manner, yielding an appreciable performance boost.
We have shown that our method surpasses all other existing works on several benchmarks for semantic and geometric matching, establishing new state-of-the-art performance. We also conducted an extensive ablation study to validate our choices. \section*{More Details} \paragraph{Training Details.} For training, we employ the same augmentation strategy introduced in~\cite{cho2021semantic}. To implement 4D convolutions, we use separable 4D convolutions for efficient computation, as introduced in VCN~\cite{yang2019volumetric}. All separable 4D convolutions are followed by ReLU activation and Layer Normalization~\cite{ba2016layer}. We set the weight decay to 0.05 and the learning rate to 1e$^{-4}$ for IFCAT and to 1e$^{-6}$ for the backbone, both of which are halved at epochs 30 and 40 (a sketch of this optimizer configuration is given below, after the Broader Impact paragraph). \paragraph{Pseudo-code.} We present PyTorch-like pseudo-code of the proposed method in Alg.~\ref{1}. \paragraph{More Qualitative Results.} We provide more qualitative results for SPair-71k~\cite{min2019spair} in Fig.~\ref{fig:spair_quali}, PF-PASCAL~\cite{ham2017proposal} in Fig.~\ref{fig:pascal_quali}, PF-WILLOW~\cite{ham2016proposal} in Fig.~\ref{fig:willow_quali}, and HPatches~\cite{balntas2017hpatches} in Fig.~\ref{hpatches}. We also present more visualizations of the attention maps in Fig.~\ref{attention}. \section*{Limitations} An apparent limitation of the proposed method is that, as IFCAT acts on the cost volume, it is not feasible to perform cost aggregation at higher resolutions; the maximum cost volume resolution that is practically accessible is $64^4$. This makes the use of a standard transformer for the attention computation infeasible in terms of memory consumption. Another limitation is that, given a pair of irrelevant images showing completely different objects, such that no correspondences exist between the views, the proposed method lacks a mechanism to avoid establishing spurious correspondences. \section*{Broader Impact} Our network can be beneficial in a wide range of applications, including simultaneous localization and mapping (SLAM)~\cite{bailey2006simultaneous}, augmented reality (AR)~\cite{peebles2021gan}, object tracking, structure from motion (SfM)~\cite{schonberger2016structure}, optical flow~\cite{fleet2006optical}, and image editing~\cite{barnes2009patchmatch,kim2018recurrent}. As future work, we could apply the proposed method to different tasks, including feature matching, segmentation and optical flow. Our work would not pose significantly malicious threats on its own.
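For reference, the following is a minimal sketch of the optimizer configuration implied by the training details above (the module names \texttt{backbone} and \texttt{ifcat\_layers} are placeholders standing in for the real sub-modules):
\begin{lstlisting}[language=python]
import torch.nn as nn
from torch.optim import AdamW
from torch.optim.lr_scheduler import MultiStepLR

# placeholder modules standing in for the real backbone / IFCAT layers
model = nn.ModuleDict({
    'backbone': nn.Conv2d(3, 64, 3, padding=1),
    'ifcat_layers': nn.Linear(64, 64),
})

# two parameter groups: 1e-4 for the IFCAT layers, 1e-6 for the backbone,
# with weight decay 0.05 as stated in the training details
optimizer = AdamW(
    [
        {'params': model['ifcat_layers'].parameters(), 'lr': 1e-4},
        {'params': model['backbone'].parameters(), 'lr': 1e-6},
    ],
    weight_decay=0.05,
)
# halve both learning rates at epochs 30 and 40
scheduler = MultiStepLR(optimizer, milestones=[30, 40], gamma=0.5)
\end{lstlisting}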
\begin{algorithm*}[t] \caption{Pseudo-Code, PyTorch-like} \label{1} \definecolor{codeblue}{rgb}{0.25,0.5,0.5} \definecolor{codekw}{rgb}{0.85, 0.18, 0.50} \lstset{ backgroundcolor=\color{white}, basicstyle=\fontsize{7.5pt}{7.5pt}\ttfamily\selectfont, columns=fullflexible, breaklines=true, captionpos=b, commentstyle=\fontsize{7.5pt}{7.5pt}\color{codeblue}, keywordstyle=\fontsize{7.5pt}{7.5pt}\color{codekw}, } \begin{lstlisting}[language=python]
class TransformerLayer:
    def forward(corr, src_feat, trg_feat):
        # integrative self-attention, applied to both directions of the cost volume
        corr_src, src_feat_refined = integrative_self_attention(corr, src_feat)
        corr_trg, trg_feat_refined = integrative_self_attention(transpose(corr), trg_feat)
        corr = corr_src + transpose(corr_trg)
        # residual update of the cost volume with the refined features
        corr = corr + conv4d_1(cost_computation(src_feat_refined, trg_feat_refined))
        corr = corr + conv4d_2(corr)
        # integrative cross-attention using the aggregated cost volume as attention map
        src_feat_refined, trg_feat_refined = integrative_cross_attention(corr, src_feat_refined, trg_feat_refined)
        corr = corr + conv4d_3(cost_computation(src_feat_refined, trg_feat_refined))
        corr = corr + conv4d_4(corr)
        return corr, src_feat_refined, trg_feat_refined

class IFCAT:
    def forward(trg_img, src_img):
        src_feat_list = feature_backbone(src_img)
        trg_feat_list = feature_backbone(trg_img)
        src_feats = projection(src_feat_list)
        trg_feats = projection(trg_feat_list)
        correlations = []

        # level 1 (coarsest)
        corr_1 = correlation(src_feat_list[0], trg_feat_list[0])
        corr_1 = conv4d_1(corr_1)
        src_feat_1, trg_feat_1 = src_feats[0], trg_feats[0]
        corr_1, src_feat_1, trg_feat_1 = transformer_layer[0](corr_1, src_feat_1, trg_feat_1)
        correlations.append(cost_computation(src_feat_1, trg_feat_1))

        # level 2: fuse upsampled level-1 outputs with the next-level raw features
        corr_2 = correlation(src_feat_list[1], trg_feat_list[1])
        corr_2 = corr_1 + conv4d_2(corr_2)
        src_feat_2 = interpolate(linear_1(src_feat_1), scale_factor=2) + src_feats[1]
        trg_feat_2 = interpolate(linear_1(trg_feat_1), scale_factor=2) + trg_feats[1]
        corr_2, src_feat_2, trg_feat_2 = transformer_layer[1](corr_2, src_feat_2, trg_feat_2)
        correlations.append(cost_computation(src_feat_2, trg_feat_2))

        # level 3 (finest)
        corr_3 = correlation(src_feat_list[2], trg_feat_list[2])
        corr_3 = corr_2 + conv4d_3(corr_3)
        src_feat_3 = interpolate(linear_2(src_feat_2), scale_factor=2) + src_feats[2]
        trg_feat_3 = interpolate(linear_2(trg_feat_2), scale_factor=2) + trg_feats[2]
        corr_3, src_feat_3, trg_feat_3 = transformer_layer[2](corr_3, src_feat_3, trg_feat_3)
        correlations.append(cost_computation(src_feat_3, trg_feat_3))

        # sum the per-level cost volumes at a common resolution
        corr_upsampled = [interpolate4d(x, (64, 64, 64, 64)) for x in correlations]
        c_star = sum(corr_upsampled) / len(corr_upsampled)
        return flow_estimation(c_star)
\end{lstlisting} \end{algorithm*} \clearpage \begin{figure}[!t] \centering \renewcommand{\thesubfigure}{} \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/dhpf/img1.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/chm/img1.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/cats/img1.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/ours/img1.png}}\hfill\\ \vspace{-20.5pt} \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/dhpf/img2.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/chm/img2.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/cats/img2.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/ours/img2.png}}\hfill\\ \vspace{-20.5pt} \subfigure[]
{\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/dhpf/img3.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/chm/img3.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/cats/img3.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/ours/img3.png}}\hfill\\ \vspace{-20.5pt} \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/dhpf/img4.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/chm/img4.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/cats/img4.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/ours/img4.png}}\hfill\\ \vspace{-20.5pt} \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/dhpf/img5.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/chm/img5.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/cats/img5.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/ours/img5.png}}\hfill\\ \vspace{-20.5pt} \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/dhpf/img6.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/chm/img6.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/cats/img6.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/ours/img6.png}}\hfill\\ \vspace{-20.5pt} \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/dhpf/img7.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/chm/img7.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/cats/img7.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/ours/img7.png}}\hfill\\ \vspace{-20.5pt} \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/dhpf/img8.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/chm/img8.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/cats/img8.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/ours/img8.png}}\hfill\\ \vspace{-20.5pt} \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/dhpf/img9.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/chm/img9.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/cats/img9.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/ours/img9.png}}\hfill\\ \vspace{-20.5pt} \subfigure[(a) DHPF~\cite{min2020learning}] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/dhpf/img10.png}}\hfill \subfigure[(b) CHM~\cite{min2021convolutional}] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/chm/img10.png}}\hfill \subfigure[(c) CATs~\cite{cho2021semantic}] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/cats/img10.png}}\hfill \subfigure[(d) IFCAT] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/spair/ours/img10.png}}\hfill\\ \vspace{-5pt} \caption{\textbf{Qualitative results on SPair-71k~\cite{min2019spair}:} keypoints 
transfer results by (a) DHPF~\cite{min2020learning}, (b) CHM~\cite{min2021convolutional}, (c) CATs~\cite{cho2021semantic}, and (d) ours (IFCAT). Note that green and red lines denote correct and wrong predictions, respectively, with respect to the ground truth.}\label{fig:spair_quali}\vspace{-10pt} \end{figure} \newpage \begin{figure}[!t] \centering \renewcommand{\thesubfigure}{} \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfpascal/dhpf/img1.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfpascal/chm/img1.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfpascal/cats/img1.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfpascal/ours/img1.png}}\hfill\\ \vspace{-20.5pt} \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfpascal/dhpf/img3.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfpascal/chm/img3.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfpascal/cats/img3.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfpascal/ours/img3.png}}\hfill\\ \vspace{-20.5pt} \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfpascal/dhpf/img4.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfpascal/chm/img4.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfpascal/cats/img4.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfpascal/ours/img4.png}}\hfill\\ \vspace{-20.5pt} \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfpascal/dhpf/img5.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfpascal/chm/img5.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfpascal/cats/img5.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfpascal/ours/img5.png}}\hfill\\ \vspace{-20.5pt} \subfigure[(a) DHPF~\cite{min2020learning}] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfpascal/dhpf/img6.png}}\hfill \subfigure[(b) CHM~\cite{min2021convolutional}] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfpascal/chm/img6.png}}\hfill \subfigure[(c) CATs~\cite{cho2021semantic}] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfpascal/cats/img6.png}}\hfill \subfigure[(d) IFCAT] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfpascal/ours/img6.png}}\hfill\\ \vspace{-5pt} \caption{\textbf{Qualitative results on PF-PASCAL~\cite{ham2017proposal}.} }\label{fig:pascal_quali}\vspace{-10pt} \end{figure} \begin{figure}[!t] \centering \renewcommand{\thesubfigure}{} \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfwillow/dhpf/img1.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfwillow/chm/img1.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfwillow/cats/img1.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfwillow/ours/img1.png}}\hfill\\ \vspace{-20.5pt} \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfwillow/dhpf/img2.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfwillow/chm/img2.png}}\hfill \subfigure[]
{\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfwillow/cats/img2.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfwillow/ours/img2.png}}\hfill\\ \vspace{-20.5pt} \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfwillow/dhpf/img3.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfwillow/chm/img3.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfwillow/cats/img3.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfwillow/ours/img3.png}}\hfill\\ \vspace{-20.5pt} \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfwillow/dhpf/img4.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfwillow/chm/img4.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfwillow/cats/img4.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfwillow/ours/img4.png}}\hfill\\ \vspace{-20.5pt} \subfigure[(a) DHPF~\cite{min2020learning}] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfwillow/dhpf/img6.png}}\hfill \subfigure[(b) CHM~\cite{min2021convolutional}] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfwillow/chm/img6.png}}\hfill \subfigure[(c) CATs~\cite{cho2021semantic}] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfwillow/cats/img6.png}}\hfill \subfigure[(d) IFCAT] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/pfwillow/ours/img6.png}}\hfill\\ \vspace{-5pt} \caption{\textbf{Qualitative results on PF-WILLOW~\cite{ham2016proposal}.} }\label{fig:willow_quali}\vspace{-10pt} \end{figure} \begin{figure}[!t] \centering \renewcommand{\thesubfigure}{} \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/hpatches/01_src_img.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/hpatches/01_trg_img.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/hpatches/01_gt.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/hpatches/01_pred.png}}\hfill\\ \vspace{-20.5pt} \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/hpatches/03_src_img.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/hpatches/03_trg_img.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/hpatches/03_gt.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/hpatches/03_pred.png}}\hfill\\ \vspace{-20.5pt} \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/hpatches/07_src_img.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/hpatches/07_trg_img.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/hpatches/07_gt.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/hpatches/07_pred.png}}\hfill\\ \vspace{-20.5pt} \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/hpatches/15_src_img.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/hpatches/15_trg_img.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/hpatches/15_gt.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/hpatches/15_pred.png}}\hfill\\ \vspace{-20.5pt} \subfigure[] 
{\includegraphics[width=0.247\linewidth]{figure/supple_qual/hpatches/24_src_img.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/hpatches/24_trg_img.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/hpatches/24_gt.png}}\hfill \subfigure[] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/hpatches/24_pred.png}}\hfill\\ \vspace{-20.5pt} \subfigure[Source] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/hpatches/55_src_img.png}}\hfill \subfigure[Target] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/hpatches/55_trg_img.png}}\hfill \subfigure[GT] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/hpatches/55_gt.png}}\hfill \subfigure[Prediction] {\includegraphics[width=0.247\linewidth]{figure/supple_qual/hpatches/55_pred.png}}\hfill\\ \vspace{-5pt} \caption{\textbf{Qualitative results on HPatches~\cite{balntas2017hpatches}.} }\label{hpatches}\vspace{-10pt} \end{figure} \begin{figure}[!t] \centering \renewcommand{\thesubfigure}{} \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/0/src_06.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/0/trg_06.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/0/self_raw_cost_0_06.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/0/self_raw_cost_1_06.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/0/self_raw_cost_2_06.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/0/self_feat_cost_06.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/0/cross_agg_cost_06.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/0/cross_feat_cost_06.png}}\hfill\\ \vspace{-20.5pt} \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/1/src_07.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/1/trg_07.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/1/self_raw_cost_0_07.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/1/self_raw_cost_1_07.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/1/self_raw_cost_2_07.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/1/self_feat_cost_07.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/1/cross_agg_cost_07.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/1/cross_feat_cost_07.png}}\hfill\\ \vspace{-20.5pt} \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/2/src_05.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/2/trg_05.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/2/self_raw_cost_0_05.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/2/self_raw_cost_1_05.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/2/self_raw_cost_2_05.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/2/self_feat_cost_05.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/2/cross_agg_cost_05.png}}\hfill \subfigure[] 
{\includegraphics[width=0.123\linewidth]{figure/supple_attn/2/cross_feat_cost_05.png}}\hfill\\ \vspace{-20.5pt} \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/3/src_00.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/3/trg_00.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/3/self_raw_cost_0_00.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/3/self_raw_cost_1_00.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/3/self_raw_cost_2_00.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/3/self_feat_cost_00.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/3/cross_agg_cost_00.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/3/cross_feat_cost_00.png}}\hfill\\ \vspace{-20.5pt} \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/4/src_02.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/4/trg_02.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/4/self_raw_cost_0_02.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/4/self_raw_cost_1_02.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/4/self_raw_cost_2_02.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/4/self_feat_cost_02.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/4/cross_agg_cost_02.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/4/cross_feat_cost_02.png}}\hfill\\ \vspace{-20.5pt} \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/5/src_00.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/5/trg_00.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/5/self_raw_cost_0_00.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/5/self_raw_cost_1_00.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/5/self_raw_cost_2_00.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/5/self_feat_cost_00.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/5/cross_agg_cost_00.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/5/cross_feat_cost_00.png}}\hfill\\ \vspace{-20.5pt} \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/6/src_00.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/6/trg_00.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/6/self_raw_cost_0_00.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/6/self_raw_cost_1_00.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/6/self_raw_cost_2_00.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/6/self_feat_cost_00.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/6/cross_agg_cost_00.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/6/cross_feat_cost_00.png}}\hfill\\ \vspace{-20.5pt} \subfigure[] 
{\includegraphics[width=0.123\linewidth]{figure/supple_attn/7/src_04.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/7/trg_04.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/7/self_raw_cost_0_04.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/7/self_raw_cost_1_04.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/7/self_raw_cost_2_04.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/7/self_feat_cost_04.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/7/cross_agg_cost_04.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/7/cross_feat_cost_04.png}}\hfill\\ \vspace{-20.5pt} \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/8/src_03.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/8/trg_03.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/8/self_raw_cost_0_03.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/8/self_raw_cost_1_03.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/8/self_raw_cost_2_03.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/8/self_feat_cost_03.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/8/cross_agg_cost_03.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/8/cross_feat_cost_03.png}}\hfill\\ \vspace{-20.5pt} \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/10/src_02.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/10/trg_02.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/10/self_raw_cost_0_02.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/10/self_raw_cost_1_02.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/10/self_raw_cost_2_02.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/10/self_feat_cost_02.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/10/cross_agg_cost_02.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/10/cross_feat_cost_02.png}}\hfill\\ \vspace{-20.5pt} \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/12/src_08.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/12/trg_08.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/12/self_raw_cost_0_08.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/12/self_raw_cost_1_08.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/12/self_raw_cost_2_08.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/12/self_feat_cost_08.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/12/cross_agg_cost_08.png}}\hfill \subfigure[] {\includegraphics[width=0.123\linewidth]{figure/supple_attn/12/cross_feat_cost_08.png}}\hfill\\ \vspace{-20.5pt} \subfigure[(a) Source] {\includegraphics[width=0.124\textwidth]{figure/supple_attn/13/src_01.png}}\hfill \subfigure[(b) Target] 
{\includegraphics[width=0.124\textwidth]{figure/supple_attn/13/trg_01.png}}\hfill \subfigure[(c) $C^1$ ] {\includegraphics[width=0.124\textwidth]{figure/supple_attn/13/self_raw_cost_0_01.png}}\hfill \subfigure[(d) $C^2$] {\includegraphics[width=0.124\textwidth]{figure/supple_attn/13/self_raw_cost_1_01.png}}\hfill \subfigure[(e) $C^3$] {\includegraphics[width=0.124\textwidth]{figure/supple_attn/13/self_raw_cost_2_01.png}}\hfill \subfigure[(f) $D'_s$ $\cdot$ $D'_t$] {\includegraphics[width=0.124\textwidth]{figure/supple_attn/13/self_feat_cost_01.png}}\hfill \subfigure[(g) $C'$] {\includegraphics[width=0.124\textwidth]{figure/supple_attn/13/cross_agg_cost_01.png}}\hfill \subfigure[(h) $D''_s$ $\cdot$ $D''_t$] {\includegraphics[width=0.124\textwidth]{figure/supple_attn/13/cross_feat_cost_01.png}}\hfill\\ \vspace{-5pt} \caption{\textbf{Visualization of attention maps.} }\label{attention}\vspace{-10pt} \end{figure}
\section{Introduction} According to \citet{raghavan2010survey}, the fraction of main-sequence, solar-type stars with stellar or brown dwarf companions is $46\pm2\%$. The orbital periods are generally long (the distribution peaks at $\log P$ [d] = 5.03). Since stellar multiplicity plays a major role in the formation and evolution of stars, several surveys have investigated the occurrence of multiplicity. As an illustration, single-lined (SB1) and double-lined (SB2) spectroscopic binaries make up 33$\%$ of the multiple systems in the catalogue of \citet{raghavan2010survey}, while the SB1 proportion in the Gaia-European Southern Observatory (ESO) Survey is in the range 7--14$\%$ \citep{merle2020gaia}. In SB1s, the periodic radial velocity (RV) variation of the primary/host star is induced by the gravitational influence of its orbiting companion. Unfortunately, stellar activity phenomena (spots, plages, and activity cycles), which affect line profiles, may produce a variation in RV that mimics or hides a Keplerian signal resulting from a substellar companion (\citealt{queloz2001no}, \citealt{dumusque2012earth}, \citealt{robertson2013halpha}, \citealt{santos2014harps}). Therefore, identifying the source of the RV variations is difficult. Binaries with low velocity amplitude (large mass ratio) help constrain binary formation models. However, the Ninth Catalog of Spectroscopic Binary Orbits (SB9)\footnote{\url{http://sb9.astro.ulb.ac.be}} \citep{pourbaix2004s} does not contain many binaries with a primary velocity amplitude lower than 1 $\rm{km\, s^{-1}}$: around 0.4\% of the catalogue, with a mean mass function value of $6.3 \times 10^{-5} \, M_{\odot}$, according to the latest version (2021-03-02). Our analysis can provide such systems. We look at well-studied and supposedly stable stars as a validation set to establish the detection thresholds to apply in the search for spectroscopic multiple stars in our future studies. Indeed, the detection of RV variations depends on the stability of the instrument, the quality of the measured RVs, their derived uncertainties, and the time baseline. Therefore, our main aim in the present paper is to assess the stability of a sample of RV standard candidates used to calibrate the Radial Velocity Spectrometer \citep[RVS;][]{cropper2018gaia} on board Gaia \citep{prusti2016gaia}. To this aim, we systematically apply an efficient procedure that searches for a binary signature in the RVs, even if it is of low amplitude (i.e. in the substellar regime). The criteria used to select the Gaia RV standard stars and the list of candidates are described in \citet[][hereafter CS13 and CS18, respectively]{soubiran2013catalogue, soubiran2018gaia}. These stars should be stable over at least 300 days and should not have any bright neighbours ($\Delta I < 4$ mag) within a circular region of $20^{''}$ radius, to make sure the RVS spectrum is not contaminated. Their RV scatter ($3 \sigma$) should not exceed 300 $\rm{m\, s^{-1}}$ throughout the duration of the mission (originally planned for five years when the sample was selected by ESA, but the mission was extended). The selection of the most stable candidates was performed in advance of the mission, from a compilation of RV measurements from different spectrographs. It implied a necessary compromise between the number of standards needed for the RVS calibrations (several thousand) and the number of stars that could be followed up from the ground over a sufficient time baseline to test their stability.
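For illustration, the stability criteria above can be cast as a simple screening function. The following Python sketch is ours and purely illustrative: the function and array names are hypothetical, not part of any actual pipeline, and it assumes per-star arrays of observation epochs and RVs.
\begin{lstlisting}[language=python]
import numpy as np

# Illustrative sketch (ours) of the RV-standard screening criteria:
# a time baseline of at least 300 d and a 3-sigma RV scatter below 300 m/s.
def is_stable_candidate(times_bjd, rvs_ms,
                        min_baseline_d=300.0, max_3sigma_ms=300.0):
    baseline = times_bjd.max() - times_bjd.min()
    if baseline < min_baseline_d:
        return False  # time span too short to test stability
    return 3.0 * np.std(rvs_ms, ddof=1) <= max_3sigma_ms
\end{lstlisting}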
As a consequence, the selected standard stars are only candidates, and some of them revealed variations with a remarkable trend during the Gaia observations (CS18). For the target stars, we thus track SB1 systems with long-term RV variations in the catalogue of RV standards for the Gaia RVS, in order to clean it and improve the calibration of the next releases. This paper is structured as follows. In Sect. \ref{targ}, we give an overview of our target stars; in Sect. \ref{method}, we outline the methodology adopted in the spectroscopic analysis; in Sects. \ref{fullSamp} and \ref{orbitSol}, we describe the overall results for the sample; in Sect. \ref{trend}, we give details about stars showing trends; in Sect. \ref{activi}, we focus on the long-term RV variations and their correlations with different activity indicators and cross-correlation function (CCF) parameters. Finally, we draw our main conclusions in Sect. \ref{conc}. \section{The target stars} \label{targ} We initially considered the sample of 2351 stars from the catalogue of Gaia RV standard star candidates (CS18) with CAL1 quality (wavelength calibrators for DR2 and DR3). The CAL1 stars were chosen to have at least two ground-based RV measurements spread over more than 300 days, with a standard deviation of the mean of less than 100\,$\rm{m\, s^{-1}}$. The HARPS, SOPHIE, ELODIE, and NARVAL samples contain 1030, 1450, 922, and 119 stars, respectively. For 11 stars in the HARPS sample, CS18 used CORALIE spectra. Stars in common between different instruments are shown in Fig.~\ref{diagVenn}. \begin{figure} \begin{center} \includegraphics[width=.5\columnwidth]{Figures/venn.png} \end{center} \caption{Venn diagram showing the number of stars shared by different instruments: HARPS (H), SOPHIE (S), ELODIE (E), and NARVAL (N).} \label{diagVenn} \end{figure} We provide an excerpt of the sample in Table~\ref{tab:tableTarget}, which gives the target coordinates ($\alpha$, $\delta$), visual apparent magnitude $V$, spectral type, observation duration, signal-to-noise ratio (SNR) range, the number of observations before and after outlier filtering (Sect. \ref{orbitSol}), as well as the instrument used. \begin{table*} \caption{List of targets: Target Id, coordinates ($\alpha$, $\delta$) at the specific epoch, visual apparent magnitude, $V$, spectral type taken from CS18, SpT, total time span, $T$, minimum and maximum SNR values, total number of analysed spectra, $N_\mathrm{init}$, the final number of measurements after filtering, $N$, the number of additional spectra compared to CS18, $N_\mathrm{add}$, and the instrument used.}
\label{tab:tableTarget} \begin{tabular}{|r|r|r|r|r|r|r|r|r|r|r|r|r|} \hline {HIP/TYC} & {$\alpha$ [deg]} & {$\delta$ [deg]} & {Epoch}&{$V$ [mag]} & {SpT} & {$T$ [days]} & {min(SNR)} & {max(SNR)} & {$N_\mathrm{init}$} & {$N$} & {$N_\mathrm{add}$} & {Instrument}\\ \hline 47& $ 0.135 $ & $ -56.836$ & J2015.5 & 10.75 & K3V & 6154.04 & 23.5 & 44.7 & 10 & 10 & 3 & H \\ 57& $ 0.168 $ & $ -69.676 $ & J2000.0 & 8.24 & K1V & 4766.05 & 42.4 & 143.0 & 38 & 37 & 3 & H \\ 80& $ 0.245 $ & $ -11.824$ & J2015.5 & 9.11 & G2V & 4266.28 & 40.7 & 158.2 & 124 & 121 & 86 & H \\ 142& $ 0.454 $ & $ 66.306$ & J2015.5 & 7.32 & K0 & 1473.00 & 43.3 & 133.0 & 32 & 32 & 1 &S \\ 184& $ 0.590 $ & $ 11.006$ & J2015.5 & 8.47 & K0V & 0.00 & 59.3 & 59.3 & 1 & 1 & 0 & E \\ 184& $ 0.590 $ & $ 11.006$ & J2015.5 & 8.47 & K0V & 4035.58 & 43.8 & 81.8 & 5 & 5 & 1 & S \\ 348& $ 1.091 $ & $ 12.958$ & J2015.5 & 8.64 & G5 & 3273.99 & 40.4 & 60.7 & 4 & 4 & 0 & S \\ 400& $ 1.236 $ & $ 23.270$ & J2015.5 & 7.81 & G9V & 5096.60 & 27.3 & 195.1 & 348 & 346 & 297 & S \\ 413& $ 1.262 $ & $ -36.015$ & J2015.5 & 7.74 & G0V & 5739.22 & 26.0 & 131.8 & 18 & 18 & 7 & H \\ 436& $ 1.322 $ & $ -67.835$ & J2015.5 & 8.49 & K4.5V & 6229.99 & 32.4 & 127.4 & 82 & 80 & 69 & H \\ \multicolumn{13}{l}{...}\\ \hline \end{tabular} \begin{tablenotes} \footnotesize \item Note: This table is available in its entirety online at the CDS. A portion is shown here for guidance regarding its form and content. \end{tablenotes} \end{table*} The archive spectra we analysed are available in the ESO database archive for HARPS\footnote{\url{http://archive.eso.org/wdb/wdb/adp/phase3_spectral/form}} and on the relevant instrument websites for SOPHIE\footnote{\url{http://atlas.obs-hp.fr/sophie/}}, ELODIE\footnote{\url{http://atlas.obs-hp.fr/elodie/index.html}} \citep{moultaka2004elodie}, and NARVAL\footnote{\url{http://polarbase.irap.omp.eu/}}. As recent HARPS, NARVAL, and SOPHIE observations are available, we benefit from more measurements than CS18 and, thus, also from a longer time span. These additional spectra were retrieved through queries to the ESO, NARVAL, and SOPHIE archives. The number of additional spectra for each target is listed in Table~\ref{tab:tableTarget}. Taking these additional data into account increases the total number of spectra by $19.5\%$, $11.8\%$, $38.2\%$, and $74.8\%$ for NARVAL, ELODIE, SOPHIE, and HARPS, respectively. The additional ELODIE spectra are those removed from CS13 and CS18: they either have low SNR values or lack archival RVs. We filtered out those with low SNR when analysing the RVs. HARPS spectra have a resolving power $R$ = $\lambda/\Delta\lambda$ = 115,000 and cover the wavelength range $\lambda\lambda$\, $3780$--$6910$\,\AA, while SOPHIE achieves $R = 75,000$ and covers the wavelength range $\lambda\lambda$\, $3872$--$6943$\,\AA . ELODIE, on the other hand, covers the wavelength range $\lambda\lambda$\, $3850$--$6800$\,\AA \, with a spectral resolution of 42,000. Covering the optical range $\lambda\lambda$\, $3700$--$10000$\,\AA , NARVAL has $R \sim 81,000$. \section{Methods} \label{method} We developed a pipeline, different from the instrument ones, that derives the RVs based on a synthetic spectrum and initial stellar parameters rather than on numerical masks.
We simultaneously derive the RV and the line broadening (Vbroad, which makes no distinction between rotation and macroturbulence velocities) using a minimum distance algorithm based on the minimisation of the quadratic sum, $\chi^2$, of the difference between the observed spectrum and synthetic spectra (theoretical templates). We vary as free parameters: the effective temperature, $T_{\mathrm{eff}}$, surface gravity, $\log\,g$, total line broadening, Vbroad, metallicity, [Fe$\slash$H], and abundance ratio of the $\alpha$-process elements, $[\alpha/\rm{Fe}]$. The minimum $\chi^2$ gives the template that best matches the observed spectrum. This procedure is similar to that adopted for the ground-based processing of the Gaia RVS spectra \citep{sartoretti2018gaia}. The template is a synthetic spectrum $s(\lambda_s)$ resampled to the observations, as indicated in Eq. \ref{resample} \citep{david2014multi}, where $\lambda_s$ is the rest-frame wavelength scale and $G$ is a Green's function convolved with a Gaussian kernel. In order to have the template and observed spectra fully overlapping in wavelength space after applying the Doppler shift, the wavelength range of the former must extend beyond the observed one \citep{david2014multi}. The template is then convolved with the rotation profile $u$[Vbroad] given by \cite{gray2021observation}. We thus have \begin{equation} \label{resample} T(\lambda, RV, \rm{Vbroad}) = u[\rm{Vbroad}]*G\left[\lambda, \lambda_s\left(1 + \frac{RV}{c}\right)\right]*s(\lambda_s) . \end{equation} We minimise the quadratic sum over each wavelength element \begin{align} \label{eqChi2} \chi^2 &= \sum_i w_i(S_i - T'_i)^2 \\ \label{sb1} T'_i &= P_n(\lambda_i) \: T(\lambda_i, \mathrm{RV}, \mathrm{Vbroad}) + b \end{align} where $S_i$ is the observed spectrum, $w_i$ the statistical weight, $T'_i$ the single-lined model given by Eq. \ref{sb1}, and $P_n(\lambda)$ the polynomial modelling the continuum as a function of wavelength, $\lambda$. $T$ is the normalised synthetic spectrum used. It is convolved with the instrument line-spread function (LSF), assumed to be Gaussian, then with the rotational broadening profile, and finally Doppler shifted. $b$ refers to the background flux. Even though the macroturbulent velocity has an effect on the line shape distinct from that of rotation, we only considered the rotational profile. The Vbroad parameter thus includes both effects. The grid of synthetic spectra, $T$ ($R$ $>$ 150,000), is extracted from the Pollux database \citep{palacios2010pollux}. We chose the MARCS \citep{gustafsson2008grid}--AMBRE \citep{de2012ambre} plane-parallel and spherical atmosphere models with a microturbulent velocity (1 $\rm{km \, s^{-1}}$ for the former grid and either 1 or 2 $\rm{km \, s^{-1}}$ for the latter) that is suitable for cool stars in the range 3500--8000 K. Because they are also adopted for the ground-based processing of the Gaia RVS data \citep{sartoretti2018gaia}, we preferred the MARCS--AMBRE synthetic spectra over the PHOENIX ones \citep{husser2013new}. The astrophysical parameters (APs: $T_{\mathrm{eff}}$, $\log\,g$, [Fe$\slash$H], and [$\alpha\slash$Fe]) of the template that best matches our target spectrum are derived by using a minimum distance algorithm. A first guess of the RV and Vbroad is derived by a least-squares fitting method in Fourier space to find an approximate solution for each spectrum. The solution is refined afterwards by Levenberg-Marquardt minimisation (\citealt{press1986numerical}, Sect. 15.5.2).
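To make the template construction of Eq. \ref{resample} and the minimisation of Eqs. \ref{eqChi2}--\ref{sb1} concrete, we give a simplified Python sketch below. It is ours and purely illustrative: the rotational-profile convolution is omitted for brevity, the LSF is taken as a plain Gaussian, and all names are hypothetical rather than taken from the actual pipeline.
\begin{lstlisting}[language=python]
import numpy as np
from scipy.ndimage import gaussian_filter1d

C_KMS = 299792.458  # speed of light [km/s]

def make_template(lam_obs, lam_syn, flux_syn, rv_kms, lsf_sigma_pix=2.0):
    """Gaussian-LSF-convolved, Doppler-shifted synthetic spectrum,
    resampled onto the observed wavelength grid (the rotational
    broadening term u[Vbroad] is omitted here for brevity)."""
    smoothed = gaussian_filter1d(flux_syn, lsf_sigma_pix)
    lam_shifted = lam_syn * (1.0 + rv_kms / C_KMS)
    return np.interp(lam_obs, lam_shifted, smoothed)

def chi2(rv_kms, lam_obs, flux_obs, lam_syn, flux_syn,
         cont_coeffs, background=0.0, weights=1.0):
    """Quadratic sum chi^2 = sum_i w_i (S_i - T'_i)^2, with the
    continuum modelled by a polynomial P_n and a background term b."""
    t = make_template(lam_obs, lam_syn, flux_syn, rv_kms)
    t_prime = np.polyval(cont_coeffs, lam_obs) * t + background
    return np.sum(weights * (flux_obs - t_prime) ** 2)
\end{lstlisting}
In the actual procedure, the RV, Vbroad, continuum coefficients, and background would be optimised jointly, e.g. with a Levenberg-Marquardt routine.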
More details about this method can be found in Sect. 7.5 of \citet{sartoretti2018gaia}. The line broadening is considered a non-linear parameter, which is required to be the same for all spectra of a given target. We initially fit each spectrum individually, allowing us to exclude spectra with an outlying $\chi^2$ (outside the range median$\,\pm 5\sigma$) from the subsequent global fit, in which we measure a common Vbroad for all spectra. Then, we fix Vbroad and re-measure the RVs to obtain non-correlated values. The RVs estimated using this method should be more homogeneous than archival pipeline RVs, which are occasionally based on an unsuitable numerical correlation mask or on different masks for the same star. As an example, choosing either a K5 or a G2 mask induces an RV shift reaching up to 20 $\rm{m\, s^{-1}}$ \citep{anglada2012harps}. \label{FitSpec} We fitted the observed spectra within five spectral ranges: $\lambda\lambda$\,$4000$--$4300$\,\AA, $\lambda\lambda$\,$4400$--$4800$\ \AA, $\lambda\lambda$\,$4900$--$5200$\ \AA, $\lambda\lambda$\,$5400$--$5800$\ \AA, and $\lambda\lambda$\,$6000$--$6200$\ \AA . These spectral domains present the advantage of not being contaminated by telluric lines. We also excluded the Balmer lines, which would hinder the RV measurement, except for the H$_{\delta}$ line, which is weak. We looped over MARCS--AMBRE plane-parallel and spherical synthetic spectra with microturbulence velocities of 1 and 2 $\rm{km \, s^{-1}}$, with APs initially ranging from [$p - \rm{\Delta p}$] to [$p + \rm{\Delta p}$], where $p$ refers to the AP, either $T_{\mathrm{eff}}$, $\log\,g$, or [Fe$\slash$H], whose initial estimates are extracted from the most recent entries in the PASTEL catalogue \citep{soubiran2016pastel}. The parameters are taken from Gaia DR2 \citep{brown2018gaia} for the 13 stars not included in PASTEL. The parameter steps, $\rm{\Delta p}$, are: $\Delta T_{\mathrm{eff}} = 250$ K, $\Delta \log\,g = 0.5$ dex, and $\Delta$[Fe$\slash$H] = 0.25 dex, while we consider all possible $[\alpha/\rm{Fe}]$ values for every combination. If one of the determined parameters lies on the interval edge, $p_0 =[p \pm \rm{\Delta p}]$, the template parameters for the second iteration lie in the interval $[p , p + 2 \rm{\Delta p}]$ or $[p - 2 \rm{\Delta p} , p]$. This process is repeated until the parameter is at the centre of the box. We find the best template for each domain separately, and we then choose the one that yields the minimum $\chi^2$ sum over all domains. We do not fit the domains simultaneously, as there is a shift reaching $100 \,\rm{m\, s^{-1}}$ between the RVs estimated for each of them (as detailed in the following section). The polynomial degree of the continuum function in each interval is chosen using an $F$-test, which compares the fitting residuals of successive polynomial models, with degrees starting from three and reaching up to ten, to identify the polynomial of the lowest degree that yields an acceptable solution. As the flux errors are not provided by the HARPS pipeline, we opted for unit weighting: $w_i = 1/ \sigma_i^2$, with $\sigma_i = 1$ and $w_i = 1$. For consistency, we did the same for the spectra obtained with the other spectrographs. Poisson weighting ($\sigma_F = \sqrt{F}$, where $F$ is the flux expressed in photon counts) would account for the loss in RV measurement precision, as the broad and deep spectral lines broaden the $\chi^2$ function.
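As an illustration of the $F$-test degree selection described above, the sketch below implements a standard nested-model $F$-test in Python; it is our own simplified stand-in, and the exact acceptance criterion and significance level are assumptions, as they are not specified here.
\begin{lstlisting}[language=python]
import numpy as np
from scipy.stats import f as f_dist

def choose_continuum_degree(lam, flux, alpha=0.05, deg_min=3, deg_max=10):
    """Lowest polynomial degree (3..10) such that adding one more
    degree no longer improves the fit significantly (F-test)."""
    def rss(deg):
        model = np.polyval(np.polyfit(lam, flux, deg), lam)
        return np.sum((flux - model) ** 2)

    n, deg = lam.size, deg_min
    while deg < deg_max:
        rss1, rss2 = rss(deg), rss(deg + 1)
        dof2 = n - (deg + 2)  # residual dof of the larger model
        f_stat = (rss1 - rss2) / (rss2 / dof2)
        if f_dist.sf(f_stat, 1, dof2) > alpha:
            break  # the extra degree is not significant: stop
        deg += 1
    return deg
\end{lstlisting}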
The $\chi^2$ was rescaled for each spectrum in order to obtain normalised flux residuals ($S-T'$) with unit variance \citep{andrae2010and}. There is no need to rescale if the flux residuals are already well behaved (i.e. consistent with normality according to the Anderson--Darling test; \citealt{anderson1952asymptotic}). Great care is taken to standardise the input fluxes when applying the unit-weight scheme. Our Vbroad is a measure of the line broadening arising from rotation and macroturbulence, after accounting for the effects of microturbulence and instrumental resolution. Since Vbroad depends on the APs, we adopted as its uncertainty the standard deviation of the measurements obtained for synthetic spectra with parameters [$p \pm \rm{\Delta p}$]. Namely, we compared the Vbroad of the best-fit synthetic spectrum with that obtained for closely similar stellar parameters ($T_{\mathrm{eff}}$, $\log\,g$, and [Fe$\slash$H]). \section{Analysis of the full sample} \label{fullSamp} \subsection{RV offset and Vbroad analysis} We measured the RV and Vbroad of 2351 stars by fitting HARPS/SOPHIE/ELODIE/NARVAL spectra with MARCS--AMBRE synthetic spectra. HARPS spectra, or SOPHIE spectra when HARPS observations were not available, were always preferred for selecting the best synthetic spectrum used to estimate the two quantities. An example of a fitted spectrum, along with the fit residuals, is shown in Fig. \ref{fig:21821Fit} for HIP21821 in the range $\lambda\lambda$\,$4400$--$4450$\ \AA. It can be seen that the fit is globally good, except for a few weak lines. This could be due to template mismatch and the fact that the line list adopted to compute the synthetic spectrum is necessarily imperfect. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{Figures/21821Fit.jpg} \end{center} \caption{Top panel: HIP21821 best fit with a MARCS--AMBRE synthetic spectrum for observation BJD = 2455493.8 in the interval $\lambda\lambda$\,$4400$--$4450$\,\AA \, with $RV = 9.0782 \pm 0.0057\, \rm{km\, s^{-1}}$ and Vbroad $ = 4.79 \pm 0.26\, \rm{km\, s^{-1}}$. Observed spectrum in black and synthetic spectrum in red ($T_{\mathrm{eff}} = 6000$ K, $\log\,g = 4.0$ dex, [Fe$\slash$H] $= -0.5$ dex, and [$\alpha\slash$Fe] $= 0.2$ dex). The fit residuals divided by the standardised flux in that domain are shown in the bottom panel.} \label{fig:21821Fit} \end{figure} The APs of the best synthetic spectra selected after the minimisation are listed for a portion of the sample in Table~\ref{tab:APS}. We computed $\Delta \rm{AP}$, the difference between the AP of the best template and that selected in PASTEL. We plot $\Delta \log\,g$ as a function of $\Delta T_{\mathrm{eff}}$ in the bottom right panel of Fig. \ref{fig:diffAPS_hist}, where the colourbar denotes $\Delta$[Fe$\slash$H]. The trend stars (detailed in Sect. \ref{trend}) are plotted as diamonds. The distribution of $\Delta T_{\mathrm{eff}}$ is shown in the top right panel: $92.9\%$ of the stars have $\lvert\Delta T_{\mathrm{eff}}\rvert \le 250$ K, while $90.1 \%$ of the stars have $\lvert\Delta \log\,g \rvert \le 0.5$ dex, as shown in the distribution in the bottom left panel. We have $86.3 \%$ of the targets with APs inside the box [$\pm 250$ K, $\pm 0.5$ dex]. The rate drops to $68.1\%$ if we additionally require $\lvert \Delta$[Fe$\slash$H]$\rvert \le 0.25$ dex. We show in Fig. \ref{fig:histDeltaRV} the distribution of the difference between the RVs estimated using the best template and those estimated using the template with APs closest to those in PASTEL. The median is $5.30 \pm 0.02 \, \rm{m\, s^{-1}}$.
The multiple modes present in the distribution could be related to whether all APs, or just one of them, differ between the two templates. Even though the shift would not affect the orbital solutions in the case of RV variations of high amplitude, it could induce a difference for lower-amplitude systems. This is expected, as the shift between HARPS pre- and post-fibre-upgrade RVs is temperature dependent, as discussed below. \begin{figure*} \begin{center} \includegraphics[width=\textwidth]{Figures/deltaTdeltalogg.jpg}\end{center} \caption{Top panel: distribution of the differences between the best template and PASTEL's $T_{\mathrm{eff}}$, $\Delta T_{\mathrm{eff}}$. Bottom left panel: $\Delta \log\,g$ distribution. Bottom right panel: $\Delta \log\,g$ as a function of $\Delta T_{\mathrm{eff}}$, where the trend stars (see Sect. \ref{trend}) are plotted as diamonds. The colourbar denotes $\Delta$[Fe$\slash$H]. The red line shows the relation $\Delta \log\,g = \Delta T_{\mathrm{eff}} / 500$.} \label{fig:diffAPS_hist} \end{figure*} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{Figures/histDeltaRV2.jpg}\end{center} \caption{Distribution of the difference in RVs estimated from the best-fit and PASTEL templates, with a median of $5.30 \pm 0.02 \, \rm{m\, s^{-1}}$.} \label{fig:histDeltaRV} \end{figure} We compare in Fig.~\ref{compareESORV} the difference between our measured RVs, $RV_m$, and those extracted from the instrument archives, $RV_s$, as a function of $RV_s$. For HARPS (top panel), the difference between measured and archived RVs is $-241.70 \pm 2.09 \,\rm{m\, s^{-1}}$ for the M2 numerical mask, while it is $201.40 \pm 2.09 \,\rm{m\, s^{-1}}$ and $228.90 \pm 0.07 \,\rm{m\, s^{-1}}$ for the K5 and G2 masks, respectively. In the same way, we compare the RV measurements from SOPHIE spectra in Fig.~\ref{compareESORV} (b). The shift is close to the HARPS one, with positive offsets of $217.8 \pm 0.3 \,\rm{m\, s^{-1}}$, $228 \pm 48\,\rm{m\, s^{-1}}$, $252.4 \pm 0.9 \,\rm{m\, s^{-1}}$, and $265 \pm 5 \,\rm{m\, s^{-1}}$ for the K5, K0, G2, and F0 masks, respectively. ELODIE has the largest shifts: $336.4 \pm 0.8 \,\rm{m\, s^{-1}}$ and $ 282 \pm 4 \,\rm{m\, s^{-1}}$ for the K0 and F0 masks, respectively. Finally, the shift for NARVAL is $157 \pm 3 \,\rm{m\, s^{-1}}$ for the only mask used, G2. Some archive HARPS RVs have been estimated using different masks for the same spectrum. For the same date, the median of the difference between RVs measured using either an M2 or a K5 mask is $439.1 \pm 2.0 \,\rm{m\, s^{-1}}$, which is very similar to the difference between the shifts $RV_s - RV_m$ for the M2 and K5 masks: $443.1 \pm 2.1 \,\rm{m\, s^{-1}}$. Therefore, we conclude that our measurements are more consistent and that the archived RVs obtained from M2 masks are overestimated. The average offset between our measurements and those of the archives is $240.4\,\rm{m\, s^{-1}}$, considering all spectrographs. This offset is caused by the difference in instrument RV zero points and in the method used. \cite{fremat2017test} found a similar shift of $220 \pm 30 \,\rm{m\, s^{-1}}$ between their UVES measurements and those found in the CS13 catalogue. We list in Table~\ref{tab:APS} the mean of our measured RVs, $<\rm{RV_m}>$, and the median given by CS18, $<\rm{RV_{CS18}}>$.
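A minimal Python sketch of the per-mask offset estimate and the same-date mask-consistency check above is given below; the record structure and all names are ours and purely illustrative.
\begin{lstlisting}[language=python]
import numpy as np

# records: hypothetical list of dicts with keys
# 'mask', 'bjd', 'rv_archive', 'rv_measured' (RVs in m/s).
def mask_offsets(records):
    """Median of (archive - measured) RV for each correlation mask."""
    return {m: np.median([r['rv_archive'] - r['rv_measured']
                          for r in records if r['mask'] == m])
            for m in {r['mask'] for r in records}}

def same_date_mask_difference(records, mask_a='M2', mask_b='K5'):
    """Median archive-RV difference between two masks on common dates."""
    rv_a = {r['bjd']: r['rv_archive'] for r in records if r['mask'] == mask_a}
    rv_b = {r['bjd']: r['rv_archive'] for r in records if r['mask'] == mask_b}
    common = sorted(set(rv_a) & set(rv_b))
    return np.median([rv_a[t] - rv_b[t] for t in common])
\end{lstlisting}
If the archive masks were mutually consistent, the same-date difference would match the difference of the per-mask offsets, which is the comparison made above.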
\begin{figure} \begin{center} \begin{subfigure}{.5\textwidth} \includegraphics[width=\linewidth]{Figures/Rv1Rv2.jpg} \end{subfigure} \begin{subfigure}{.5\textwidth} \includegraphics[width=\linewidth]{Figures/RV1RV2Soph.jpg} \end{subfigure} \begin{subfigure}{.5\textwidth} \includegraphics[width=\linewidth]{Figures/RV1RV2ELO.jpg} \end{subfigure} \begin{subfigure}{.5\textwidth} \includegraphics[width=\linewidth]{Figures/RV1RV2NAR.jpg} \end{subfigure} \end{center} \caption{Difference between our measured RVs, $RV_m$, and those extracted from the instrument pipelines, $RV_s$, as a function of $RV_s$. From top to bottom: (a) HARPS, (b) SOPHIE, (c) ELODIE, and (d) NARVAL. The shift is around $300\; \rm{m\, s^{-1}}$ for ELODIE and $200\; \rm{m\, s^{-1}}$ for the other three instruments. This holds for all masks except the M2 mask (upper panel), for which we argue that the RVs in the archives are overestimated.} \label{compareESORV} \end{figure} The mean shift between the RVs obtained for the reference domain $\lambda\lambda$\,$4400$--$4800$\ \AA \, and the four others is around 40 $\rm{m\, s^{-1}}$, except for $\lambda\lambda$\,$4900$--$5200$ \AA, where it is 200 $\rm{m\, s^{-1}}$ and can occasionally reach up to 2 $\rm{km\, s^{-1}}$. We plot the shift for the four domains as a function of $T_{\mathrm{eff}}$ in Fig.~\ref{shiftDomain}. We attribute the temperature and wavelength dependencies of the shift to template mismatches, as well as to the instrumental calibration. \begin{figure*} \begin{center} \begin{subfigure}{.49\textwidth} \includegraphics[width=\linewidth]{Figures/shift01.jpg} \end{subfigure} \begin{subfigure}{.49\textwidth} \includegraphics[width=\linewidth]{Figures/shift02.jpg} \end{subfigure} \end{center} \caption{Variation of the RV offset of HARPS targets with respect to that for the wavelength domain $\lambda\lambda$\,$4400$--$4800$\ \AA \, for the four other spectral regions: $\lambda\lambda$\,$4000$--$4300$\,\AA \, and $\lambda\lambda$\,$4900$--$5200$\ \AA \, (left), and $\lambda\lambda$\,$5400$--$5800$\ \AA \, and $\lambda\lambda$\,$6000$--$6200$\ \AA \, (right). The data are shown as a function of $T_\mathrm{eff}$.} \label{shiftDomain} \end{figure*} Furthermore, the HARPS fibre was upgraded in June 2015, which caused an offset in the RV measurements compared to pre-upgrade observations. According to \cite{curto2015harps}, the RVs are correlated with the width of the spectral lines, and the shift depends on the spectral type. As we noticed that the recovery rate of known planetary periods depends on the shift, we decided to fit the shift as a function of $T_{\mathrm{eff}}$ by maximising the number of known periods recovered (see Sect. \ref{orbitSol}). We obtain: $RV_{\rm{postUpgradeCorrected}} \,= RV_{\rm{postUpgrade}} - 2.0\times 10^{-3}\,T_{\rm{eff}}(\rm{Template}) \, (\rm{K}) + 7.3 \,\rm{m\, s^{-1}}$. Figure~\ref{RVHIP1444} shows an example of the corrected RV time series in the range $\lambda\lambda$\,$4400$--$4800$\ \AA, where the HARPS post-upgrade measurements are plotted in red. \begin{table*} \caption{Stellar parameters of the best-fit synthetic spectra ($T_{\mathrm{eff}}$, $\log\,g$, [Fe$\slash$H], and [$\alpha\slash$Fe]), along with Vbroad, our mean measured RV, $<\rm{RV_m}>$, and the median RV from CS18, $<\rm{RV_{CS18}}>$.
A Vbroad value of 1.91 $\rm{km\, s^{-1}}$ is the lowest limit that can be measured by our algorithm.} \label{tab:APS} \begin{tabular}{|r|r|r|r|r|r|r|r|r|} \hline {HIP/TYC} & {$T_{\mathrm{eff}} [K]$} & {$\log\,g$ [dex]} & {[Fe$\slash$H] [dex]} & {[$\alpha\slash$Fe] [dex]} & {Vbroad [$\rm{km\, s^{-1}}$]} & {$<\rm{RV_m}>$ [$\rm{km\, s^{-1}}$]} & {$<\rm{RV_{CS18}}>$ [$\rm{km\, s^{-1}}$]} & {Instrument}\\ \hline 47 & $ 4750 $ & $ 5.0 $ & $ -0.25 $ & $ 0.1 $ & $ <1.91 \pm 1.28 $ & $12.022 \pm 0.003 $ & $11.752 \pm 0.001$ & H \\ 57 & $ 5500 $ & $ 5.0 $ & $ 0.00 $ & $ 0.2 $ & $ 3.52 \pm 0.50 $ & $ 37.137 \pm 0.001$ & $36.851\pm 0.0004$ & H \\ 80 & $ 5750 $ & $ 4.0 $ & $ -0.75 $ & $ 0.3 $ & $ 3.63 \pm 0.45 $&$-11.063\pm 0.001$&$-11.365\pm 0.001$ & H \\ 142 & $ 5750$ & $ 4.0 $ & $ 0.25 $ & $ -0.2 $ & $ 4.77 \pm 0.42 $ & $-21.458\pm 0.002$ & $-21.721\pm 0.002$ & S \\ 184 & $ 5250$ & $ 4.5$ & $ 0.00$ & $ 0.0 $ & $ 1.98 \pm 1.11 $ & $-17.245\pm 0.005$ &$-17.512\pm 0.002$ & S \\ 184 & $ 5250$ & $ 4.5$ & $ 0.00$ & $ 0.0 $ & $ 3.00 \pm 1.11 $ & $-17.217\pm 0.028$ &$-17.512\pm 0.002$ & E \\ 348 & $ 6000$ & $ 4.5$ & $-0.25$ & $0.1$ & $3.52 \pm 0.67 $ & $18.877\pm 0.005$ & $ 18.587\pm 0.003$ & S \\ 400 & $ 5250$ & $ 4.5$ & $-0.50$ & $0.2$ & $<1.91 \pm 1.32$ & $7.8440\pm 0.0003$ &$ 7.577\pm 0.001$ & S \\ 413 & $ 6250 $ & $ 4.5 $ & $ -0.25 $ & $ 0.1 $ & $ 5.01 \pm 0.47 $&$4.829 \pm 0.002$&$4.525 \pm 0.001$ & H \\ 436 & $ 4750 $ & $ 5.0 $ & $ -0.50 $ & $ 0.2 $ & $ <1.91 \pm 0.44 $&$40.516\pm0.001$&$40.228\pm0.0004$ & H \\ \multicolumn{9}{l}{...}\\ \hline \end{tabular} \begin{tablenotes} \footnotesize \item Note: This table is available in its entirety online at the CDS. A portion is shown here for guidance regarding its form and content. \end{tablenotes} \end{table*} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{Figures/rv1444.jpg}\end{center} \caption{RV variation for HIP 1444 obtained from fitting the interval $\lambda\lambda$\,$4400-4800$\ \AA. HARPS fibre post-upgrade measurements are plotted in red. The RVs exhibit a trend variation with an amplitude of $233.6 \; \rm{m\, s^{-1}}$.} \label{RVHIP1444} \end{figure} Applying the public SpEctrum Radial Velocity AnaLyser (SERVAL) pipeline to HARPS spectra, \cite{trifonov2020public} (hereafter T20) computed RVs based on $\chi^2$ minimisation, where the template is created by shifting and co-adding all the individual spectra of the target. They thus derived differential (relative) RV values. We fit our measured RVs against theirs using orthogonal regression \citep{krystek2007weighted} and list the slope and $y$-intercept for each target in Table \ref{tab:fitT20}, for pre- and post-upgrade measurements separately. These coefficients are estimated only for targets with more than three measurements. We show in Fig. \ref{fig:histTrifo} the distribution of the estimated slopes, with the normal distribution $\mathcal{N}(0,1)$ overplotted. The asymmetry is mainly due to the small uncertainties of T20 compared to ours. In addition, there are ten stars with $(Slope - 1)/\sigma_{\rm{slope}} \leqslant -5$. The lowest value is found for HIP21821: its fitted spectrum is shown in Fig. \ref{fig:21821Fit} for the domain $\lambda\lambda$\,$4400$--$4450$\ \AA. We compare our measured RVs, those derived by the HARPS pipeline, and those of T20 in Fig. \ref{fig:21821Eso_T}. We find a linear relation between our RVs and those from the HARPS pipeline, with a slope of $0.99 \pm 0.16$.
The deviation from the 1:1 relation with respect to the T20 RVs could be due either to the nightly zero-point corrections they applied to their RVs, or to the fact that the difference in RVs between different domains is not taken into account. Even though our measurements are less precise than T20's, as we used portions of the spectrum, the difference in RVs between these portions is taken into consideration. In addition, our RVs are measured as absolute values, with a shift of around $200 \, \rm{m\, s^{-1}}$ compared to the pipeline RVs, and show more homogeneity. On the other hand, the T20 velocities are template free and therefore do not suffer from template mismatch problems. \begin{table*} \caption{Orthogonal regression coefficients of the fit between our measured and the T20 RVs.} \label{tab:fitT20} \begin{tabular}{|r|r|r|r|r|} \hline {HIP/TYC} & {Slope pre-upgrade} & {$y$-intercept pre-upgrade [$\rm{km\, s^{-1}}$]} & {Slope post-upgrade} & {$y$-intercept post-upgrade [$\rm{km\, s^{-1}}$]} \\ \hline 47&$1.1 \pm 1.1$&$12.022\pm 0.003$& ... & ...\\ 57&$ 0.925 \pm 0.085$&$37.138\pm 0.001$& ... & ...\\ 80&$ 1.57 \pm 0.29$&$-11.063\pm 0.001$& ... & ...\\ 413&$ 1.68 \pm 0.50$&$4.829\pm 0.002$& ... & ...\\ 436&$ 1.08 \pm 0.71$&$40.518\pm 0.001$&$ 1.0 \pm 1.1$&$ 40.576\pm 0.046$\\ 459&$ 1.27 \pm 0.17$&$7.126\pm 0.002$& ... & ...\\ 490&$ 0.881 \pm 0.084$&$2.419\pm 0.004$&$ 0.87 \pm 0.12$&$2.481\pm 0.005$\\ 569&$ 0.42 \pm 0.48$&$-27.367\pm 0.001$&$ 0.6 \pm 1.8$&$ -27.345\pm 0.002$\\ 616&$ 0.3 \pm 1.1$&$-42.777\pm 0.001$& ... & ...\\ 726&$ 0.95 \pm 0.10$&$-19.090\pm 0.001$&$ 1.01 \pm 0.24$&$ -19.092\pm 0.002$\\ \multicolumn{5}{l}{...}\\ \hline \end{tabular} \begin{tablenotes} \footnotesize \item Note: This table is available in its entirety online at the CDS. A portion is shown here for guidance regarding its form and content. \end{tablenotes} \end{table*} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{Figures/histTrifo.jpg}\end{center} \caption{Distribution of the normalised and zero-centred orthogonal regression slopes between our measured and the T20 HARPS RVs (pre- $\cup$ post-upgrade). The normal distribution is overplotted in red.} \label{fig:histTrifo} \end{figure} \begin{figure} \begin{center} \begin{subfigure}{.5\textwidth} \includegraphics[width=\columnwidth]{Figures/RvM_T_Eo.jpg} \end{subfigure} \end{center} \caption{Top panel: HIP21821 measured RVs as a function of those derived by the HARPS pipeline, with a slope of $0.99 \pm 0.16$. Bottom panel: T20 RVs as a function of the HARPS pipeline RVs, with a slope of $317 \pm 113$.} \label{fig:21821Eso_T} \end{figure} We plot in Fig.~\ref{vsiniGleb} the histogram of the normalised residuals of Vbroad for 1354 stars, obtained by comparing our measurements with the $\rm{V \sin i}$ values taken from the literature \citep[][hereafter GGS]{glebocki2005vizier}. The differences were divided by our uncertainties, as the GGS catalogue does not provide uncertainties for all estimates. We overplot the distributions for slow and fast rotators: $\rm{V \sin i}$(GGS) below and above 4 \ $\rm{km \ s^{-1}}$. The precision of the measurements is expected to decrease when the true value is of the same order as, or lower than, the instrumental resolution ($\sim 2.6\,\rm{km \ s^{-1}}$). Accordingly, the histogram exhibits a clear asymmetry at low $\rm{V \sin i}$ values, indicating that our Vbroad might be overestimated in this regime.
On the other hand, we note that a large fraction of the measurements found in GGS are taken from \cite{nordstrom1997radial}, where they were rounded off differently depending on $\rm{V \sin i}$. Some of the structures in the right wing of both distributions were found to be linked to this rounding. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{Figures/histGleb.jpg} \end{center} \caption{Distribution of the Vbroad residuals, [$\rm{Vbroad}$-$\rm{V \sin i}$ (GGS)]/$\sigma_{\rm{Vbroad}}$, for 1354 stars. Two histograms are shown: for $\rm{V \sin i}$(GGS) $\geqslant 4 \ \rm{km \ s^{-1}}$ (red) and for $\rm{V \sin i}$(GGS) $< 4 \ \rm{km \ s^{-1}}$ (black). Our measured Vbroad values are overestimated with respect to GGS's at low $\rm{V \sin i}$, as revealed by the asymmetry.} \label{vsiniGleb} \end{figure} \subsection{RV zero point between instruments} As there are stars observed with several instruments (Fig.~\ref{diagVenn}), each of which has its own reduction pipeline and method for deriving the RVs, we must determine their relative RV offsets. For all instruments, we adopted the HARPS RV reference frame. In Table \ref{tab:RV zero}, we compare the offsets between different instruments with CS13's estimates. For all masks of a given instrument, we first estimate the median shift between measured and archived RVs. The offset is the difference between the shifts for the two instruments considered. The offset for HARPS vs SOPHIE is equal within uncertainties to the one estimated by CS13, but it is much more precise. The agreement for NARVAL is less satisfactory because of the different wavelength calibration methods, although it remains within $3 \sigma$. We follow CS13 in fitting the offset between ELODIE and the other instruments as a function of the $B - V$ colour index. The shift between the RVs extracted from the instrument pipelines and our measurements as a function of $B - V$ for SOPHIE, ELODIE, and HARPS is given by $( 88.3 \pm 0.6 )\;(B-V) - ( 304.7 \pm 0.4 ) \rm{\; m\, s^{-1}}$, $( -175.1 \pm 2.7 )\;(B-V) - ( 213.4 \pm 1.8 ) \rm{\; m\, s^{-1}}$, and $( 66.7 \pm 0.2 )\;(B-V) - ( 271.3 \pm 0.1 ) \rm{\; m\, s^{-1}}$, respectively. The offset between ELODIE and SOPHIE as a function of the colour index is $ ( -263.4 \pm 2.7 )\;(B-V) + ( 91.4 \pm 1.8 ) \rm{\; m\, s^{-1}}$. The slope is close to the one found by CS13: $-259 \rm{\; m\, s^{-1}}$. The offset between HARPS and ELODIE as a function of $B - V$ is thus $( 241.8 \pm 2.7 )\;(B-V) - ( 57.9 \pm 1.8 ) \rm{\; m\, s^{-1}}$. The offset between instruments was also carefully taken into account in CS18, where all the measurements were put onto the SOPHIE system. Since the shift between our measured and the archived RVs is coherent across the different instruments, we adopted the RVs obtained from $\lambda\lambda$\,$4400$--$4800$\ \AA \, as our reference in the following. \begin{table} \caption{RV offsets between instruments (HARPS, SOPHIE, and NARVAL) for all stars, compared to those estimated by CS13.} \label{tab:RV zero} \centering \begin{tabular}{|l|l|l|} \hline & \multicolumn{2}{c}{Offset [$\rm{m\, s^{-1}}$]}\\ {Instruments} & This work & CS13\\ \hline HARPS--SOPHIE & $20.4 \pm 0.3$ & $17 \pm 5$ \\ HARPS--NARVAL & $68.1 \pm 2.7$ & $43 \pm 7.96$ \\\hline \end{tabular} \end{table} \section{Orbital solutions} \label{orbitSol} In order to assess the stability of the targets with at least ten observations, we searched for high levels of RV variability caused by possible companions.
In the search for orbital parameters, we analysed the flux residuals ($S - T'$) of each fit and determined their standard deviation and normalised bias ($\rm{bias} = \rm{median}/\rm{\sigma_{median}}$). We filtered the RVs based on these two metrics using a median absolute deviation filter. The RVs are the weighted mean values over the five wavelength ranges, where we considered the domain $\lambda\lambda$\,$4400$--$4800$\ \AA \, as the reference and corrected the RVs from the other domains by subtracting the median of the shift. This procedure allows us to exclude outliers. For specific dates, we also omitted RV measurements that have a large drift in the pipeline wavelength calibration. We applied the \citet*{heck1985period} periodogram (hereafter HMM), as revisited in \citet{zechmeister2009generalised}, to the RV dataset in order to perform a period search. To assign a false alarm probability (FAP) to the peaks in the periodogram, we constructed an empirical cumulative distribution function (ECDF) from the highest peaks of $10^6$ realisations of white-noise periodograms with the same time sampling. First, we fitted the data with a trend of maximum polynomial degree three. We considered as significant those peaks in the frequency periodograms that lie above a probability threshold of $95 \%$. These are then used as input to a Zechmeister \& K{\"u}rster Keplerian periodogram to find a first guess of the other orbital parameters. The trend is taken into account (subtracted) if the period of the highest peak in the periodogram exceeds the total time span. The period search is repeated until there are no peaks exceeding the probability threshold in the HMM periodogram. It is an iterative procedure, in which the best solution is successively subtracted in order to search for significant periodic signals in the residuals, similar to the method described in \cite{2001MNRAS.327..435G}. A global fit taking into account all the solutions derived during the previous steps is applied, and the RV residuals are then reanalysed in the next iteration. Then, we refined the solutions of all peaks above the significance level simultaneously. In the case of a singular algebraic system, the fit was constrained to a circular orbit ($e$ and $\omega$ fixed to zero). The uncertainties in the orbital parameters are obtained from the variance-covariance matrix (\citealt{press1986numerical}, Sect. 15.4.1). In order to avoid the one-day period alias, we searched for periods longer than 2 days. The codes, developed in MATLAB \citep{MATLAB:2022} and Java Open JDK, are a generalisation, to multiperiodic fits, of those used to produce the Gaia DR3 SB1 catalogue \citep{DR3-DPACP-178}. We note that our approach includes some approximations. We considered all periodogram peaks that are higher than the FAP threshold, even though the test is only suitable for the highest one. In addition, the subtraction of a fitted curve introduces some correlations in the residuals, which we have neglected by applying the same FAP at all levels of the time-series analysis. A total of 134 targets already have an orbital solution in the literature (190 systems). The known-period success rate with the PASTEL template APs ($71.6\,\%$) is lower than with our pipeline's best template APs ($72.6\,\%$), where the HARPS pre-/post-fibre-upgrade measurements are treated separately.
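For illustration, the sketch below shows how such an empirical FAP threshold can be built in Python. It is ours and deliberately simplified: it uses the plain Lomb--Scargle periodogram from scipy as a stand-in for the HMM/GLS periodogram, and far fewer realisations than the $10^6$ used here.
\begin{lstlisting}[language=python]
import numpy as np
from scipy.signal import lombscargle

def fap_threshold(times_d, ang_freqs, prob=0.95, n_real=10_000, seed=0):
    """Threshold on the highest periodogram peak, from the ECDF of
    white-noise realisations sharing the observed time sampling."""
    rng = np.random.default_rng(seed)
    peaks = np.empty(n_real)
    for i in range(n_real):
        noise = rng.standard_normal(times_d.size)
        power = lombscargle(times_d, noise - noise.mean(),
                            ang_freqs, normalize=True)
        peaks[i] = power.max()
    # A peak in the real data exceeding this value is significant
    # at the chosen probability level.
    return np.quantile(peaks, prob)
\end{lstlisting}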
Among the stars with an orbital solution from HARPS, six have an RV amplitude close to or exceeding the Gaia RV variability limit (\textsl{i.e.} $300 \, \rm{m\, s^{-1}}$): HIP1692, HIP26394, HIP36941, HIP62534, HIP71001, and HIP113834. However, only HIP26394, HIP71001, and HIP113834 exceed the stability level ($3\,\sigma_{RV} = 300 \, \rm{m\, s^{-1}}$) as defined in CS13. Among SOPHIE stars, HIP30057 has a scatter exceeding the CS13 stability level ($\sigma_{\rm{CS13}} = 106.7 \ \rm{m\, s^{-1}}$). When estimating the total standard deviation, $\sigma_{\mathrm{Model}}$, of the orbital model (including all orbital solutions and trends), we find that the following stars from the HARPS sample have $\sigma_{\mathrm{Model}} \ge 100 \ \rm{m\, s^{-1}}$: HIP5806 ($117.6 \, \rm{m\, s^{-1}}$), HIP26394 ($105.4 \, \rm{m\, s^{-1}}$), HIP62534 ($113.9 \, \rm{m\, s^{-1}}$), HIP71001 ($4694.1 \, \rm{m\, s^{-1}}$), HIP108095 ($159.6 \, \rm{m\, s^{-1}}$), and HIP113834 ($164.7 \, \rm{m\, s^{-1}}$). For SOPHIE, the stars are HIP115714 ($117.7 \, \rm{m\, s^{-1}}$) and TYC3239-00992-1 ($105.7 \, \rm{m\, s^{-1}}$). The scatter of the model for HIP30057 ($96.6 \, \rm{m\, s^{-1}}$) does not exceed the threshold. The orbital solutions of these Keplerian signals are listed in Table \ref{table:orbit}, where we were able to recover the known periods for five stars. However, the estimated period of TYC3239-00992-1 does not match within $5 \sigma$ the known transit period of $3.852985 \pm 0.000005$ d \citep{noyes2008hat}. We plot the phase-folded RV curve assuming this period in Fig. \ref{fig:TYC992}. We were not able to retrieve this period because of the poor phase sampling and missing data points. \begin{figure} \includegraphics[width=\columnwidth]{Figures/TYC992.jpg} \caption{TYC3239-00992-1 phase-folded RV curve with the known period of $3.852985$ d. The inhomogeneous phase sampling prevented us from deriving this period.} \label{fig:TYC992} \end{figure} We show the HIP26394 and HIP113834 periodograms and orbital solutions in Figs. \ref{periodogram} and \ref{fig:orbit}, respectively. The $F$ statistic denotes the ratio of two independent $\chi^2$ values ($F$ statistic of the pure sine model with respect to the variance), defined as the $z$ variable in Eq. 23 of \cite{zechmeister2009generalised}. In addition to the known period of HIP5806, we find two new ones, which need to be confirmed. The short period of HIP108095 has not been detected previously, and it could thus indicate the presence of a new substellar companion in addition to a second-degree trend caused by a more massive companion. For these two stars, $\sigma_{\rm{Model}}$ is dominated by the trend scatter. The periodograms and phase-folded RV curves for these newly discovered periods are shown in Figs. \ref{fig:periodogramNew} and \ref{fig:orbitNew}, respectively. The long period of HIP71001 has a high uncertainty, and its RV semi-amplitude is smaller than its own uncertainty; we are therefore cautious about this solution. \begin{table*} \caption{Orbital solutions of stars with RV model scatter exceeding the Gaia threshold. The solutions are computed with respect to (2450000 + 100 ${E}$) BJD epochs, where ${E}$, the reduced reference epoch, is given in column 2. Periods in bold are those recovered.
$N$ denotes the number of data points for each target.} \label{table:orbit} \begin{tabular}{|l|r|r|r|r|r|r|r|r|r|} \hline {HIP/TYC} & {$E$} & {$P$ [d]} & {$K$ [$\rm{m\, s^{-1}}$]} & {$e$} & {$\omega$ [deg]} & {$T_{\rm{p}}$ [d]} & {$M_{\rm{min}}$ [$M_{\rm{Jup}}$]} & {Instrument} & {$N$} \\ \hline \, \, 5806$^a$ & 53 & $ \mathbf{1238 \pm 14} $ & $ 57 \pm 457 $ & $ 0.85 \pm 0.94 $ & $ -18 \pm 73 $ & $ 1184 \pm 55 $ & $ 1.572 \pm 13.395$ & H & 131\\ & & $ 806 \pm 18 $ & $ 6.1 \pm 1.1 $ & $ 0.785 \pm 0.063 $ & $ 135 \pm 14 $ & $ 516.8 \pm 7.7 $ & $ 0.172 \pm 0.038$ \\ & & $ 51.154 \pm 0.047 $ & $ 2.97 \pm 0.39 $ & $ 0.21 \pm 0.13 $ & $ -54 \pm 38 $ & $ 19.6 \pm 5.0 $ & $ 0.053 \pm 0.007$ \\ \, 26394$^b$ & 77 & $ \mathbf{2092.0 \pm 1.7} $ & $ 197.02 \pm 0.66 $ & $ 0.645 \pm 0.003 $ & $ -32.00 \pm 0.43 $ & $ 686.35 \pm 0.57 $ & $ 9.222 \pm 0.207$ & H & 487\\ \, 62534$^c$ & 61 & $ \mathbf{1046.2 \pm 1.8} $ & $ 142.9 \pm 2.5 $ & $ 0.221 \pm 0.008 $ & $ 77.3 \pm 1.8 $ & $ 907.8 \pm 4.5 $ & $6.033 \pm 0.210$ & H & 66\\ \, 71001 & 51 & $5106 \pm 1222$ & $723 \pm 1044$ & $0.080 \pm 0.026$ &$99.3 \pm 4.7$ & $2878 \pm 659$ & $ 53 \pm 76$ & H & 29\\ 108095 & 51 & $ 8.53127 \pm 0.00024 $ & $ 33.6 \pm 2.8 $ & $ 0.524 \pm 0.057 $ & $ -46.1 \pm 5.8 $ & $ 1.287 \pm 0.079 $ & $ 0.261 \pm 0.025$ & H & 34\\ 113834$^d$ & 55 & $ \mathbf{1302.3 \pm 1.5} $ & $ 235.3 \pm 2.2 $ & $ 0.318 \pm 0.008 $ & $ 101.3 \pm 1.1 $ & $ 539.5 \pm 3.6 $ & $ 11.596 \pm 0.247$ & H & 28\\ 115714$^e$ & 75 & $ \mathbf{218.473 \pm 0.074} $ & $ 106.6 \pm 1.4 $ & $ 0.429 \pm 0.010 $ & $ -136.7 \pm 1.4 $ & $ 214.31 \pm 0.70 $ & $ 2.867 \pm 0.290$ & S & 99\\ 3239-00992-1 & 54 & $ 3.09410 \pm 0.00021 $ & $153 \pm 58$ & $0.23 \pm 0.22$ & $-63 \pm 26$ & $0.91 \pm 0.18$ & $ 1.208 \pm 0.463$ & S & 49\\ \hline \end{tabular} \begin{tablenotes}\footnotesize \item References for literature values. \item a: \cite{wittenmyer2019truly} \item b: \cite{venner2021true} \item c: \cite{stassun2017accurate} \item d: \cite{moutou2011harps} \item e: \cite{hebrard2016sophie} \end{tablenotes} \end{table*} \begin{figure} \includegraphics[width=\columnwidth]{Figures/periodo_113834_26394.jpg} \caption{Periodograms for HIP26394 (top) and HIP113834 (bottom). The red horizontal lines denote the adopted confidence threshold of 95\%. Peaks exceeding that threshold are marked by red crosses, while the highest peak is marked in magenta.} \label{periodogram} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{Figures/period_113834_26394.jpg} \caption{Orbital solutions for HIP26394 (top) and HIP113834 (bottom), whose RV semi-amplitudes exceed the Gaia threshold, with periods of $2092.0 \pm 1.7$ d and $1302.3 \pm 1.5$ d, respectively.} \label{fig:orbit} \end{figure} Additionally, TYC8963-01543-1 has neither a significant period nor a trend, but it has $\sigma_{\rm{CS13}} = 176.6\, \rm{m\, s^{-1}}$. Its RV curve (Fig. \ref{fig:TYC1534}) suggests that the star is an SB2. We will analyse it in a future work. \begin{figure} \includegraphics[width=\columnwidth]{Figures/RV_TYC1534.jpg} \caption{RV variation of TYC8963-01543-1, resembling that of an SB2 with a trend, for which we measured only one RV component.} \label{fig:TYC1534} \end{figure} \section{Analysis of stars with an RV trend} \label{trend} Some stars have periods exceeding the observation duration, so that their RV variations cover an incomplete orbit or exhibit a polynomial trend.
We list those with a mean acceleration (RV amplitude, $A$, divided by the time span, $T$) greater than $10 \,\rm{m\, s^{-1}yr^{-1}}$, along with their mean acceleration and trend degree in columns 3 and 4 of Table~\ref{tabe:TrendAct} for HARPS targets. There are four stars with previously detected Keplerian signal(s) for which we find an additional signal manifesting as a trend with an acceleration exceeding our adopted threshold: HIP5806, HIP10278, HIP60689, and HIP89354. These trends indicate the presence of an additional companion with a long period. However, this is not necessarily always the case as such variations may have other origins (see Sect.~\ref{activi}). There are stars with a very high acceleration value ($\frac{A}{T} > 25 \,\rm{m\, s^{-1}yr^{-1}}$) that could exceed the variability threshold that was set by the Gaia mission (300 $\rm{m\, s^{-1}}$ in ten years): HIP10278, HIP14596, HIP37233, HIP60689, HIP71001, HIP81533, HIP85158, HIP95849, HIP99385, HIP108095, HIP110156, HIP116374, TYC6792-01746-1, and TYC8681-00611-1. Adopting the constancy criterion of CS13, the following stars are variable: HIP12350, HIP14596, HIP37233, HIP71001, HIP99385, HIP108095, HIP110156, HIP116374, and TYC8681-00611-1; they should be reconsidered in the upcoming Gaia data release. \begin{table*} \caption{Data for HARPS targets showing a significant RV trend ($\frac{A}{T} > 10\,\rm{m\, s^{-1}yr^{-1}}$): RV amplitude, acceleration value, order of polynomial fit, cross-correlation coefficients, $\rho$, and significance levels, $p$, for the RV vs $BIS$ and RV vs $S$-index variations, and chromospheric activity indicators. Stars with an asterisk have a Keplerian solution in addition to the trend, while double asterisks denote stars with confirmed companions. Blanks in the log$R'_{\rm{HK}}$ column indicate stars that are not present in Boro Saikia's catalogue.} \label{tabe:TrendAct} \begin{tabular}{|l|r|r|c|r|r|r|r|r|r|} \hline {HIP/TYC} & {$A$ [$\rm{m\, s^{-1}}$]} & {$\frac{A}{T} [\rm{m\, s^{-1}yr^{-1}}]$} & {Degree} & {$\rho_{\rm{BIS}}$} & {$\rm{pValue_{BIS}}$} & {$\rho_{S}$} & {$\rm{pValue_{S}}$} & {log$R'_{\rm{HK}}$} & {RV jitter [$\rm{m\, s^{-1}}$]}\\ \hline \, \, \, 948 & $ 186.4$ & $ 11.7 $ & $ 1$ & $ 0.110 $ & $ 0.594$ & $ -0.315 $ & $ 0.117 $& $ -5.171 $& $ 7.411 $\\ \, \, 1444* & $ 229.2$ & $ 16.5 $ & $ 2$ & $ -0.017 $ & $ 0.812$ & $ -0.373 $ & $ 0.000 $& $ -4.881 $& $ 9.185 $\\ \, \, 5806** & $ 160.7$ & $ 13.2 $ & $ 2$ & $ -0.221 $ & $ 0.010$ & $ -0.120 $ & $ 0.169 $& $ -4.724 $& $ 14.810$\\ \, 10278** & $ 223.3$ & $ 38.2 $ & $ 1$ & $ 0.353 $ & $ 0.066$ & $ 0.603 $ & $ 0.001 $ & $ ...$ & $ ...$\\ \, 12350* & $ 347.1$ & $ 21.3 $ & $ 2$ & $ 0.095 $ & $ 0.531$ & $ 0.008 $ & $ 0.960 $& $ -4.937 $& $ 8.551 $\\ \, 14596 & $ 434.1$ & $ 27.4 $ & $ 2$ & $ -0.074 $ & $ 0.524$ & $ 0.167 $ & $ 0.156 $& $ -4.819 $& $ 9.938 $\\ \, 19007 & $ 160.0$ & $ 17.0 $ & $ 2$ & $ 0.248 $ & $ 0.254$ & $ 0.163 $ & $ 0.457 $& $ -5.200 $& $ 6.134 $\\ \, 35305 & $ 90.0$ & $ 23.0 $ & $ 1$ & $ -0.124 $ & $ 0.465$ & $ -0.115 $ & $ 0.506 $ & $ ...$ & $ ...$\\ \, 37233 & $ 662.8$ & $ 41.3 $ & $ 3$ & $ 0.056 $ & $ 0.671$ & $ 0.363 $ & $ 0.005 $ & $ ...$ & $ ...$\\ \, 43449 & $ 35.6$ & $ 17.0 $ & $ 1$ & $ -0.145 $ & $ 0.622$ & $ 0.400 $ & $ 0.156 $ & $ ...$ & $ ...$\\ \, 45749 & $ 24.7$ & $ 17.2 $ & $ 1$ & $ 0.617 $ & $ 0.004$ & $ 0.682 $ & $ 0.001 $& $ -4.710 $& $ 8.506 $\\ \, 54102* & $ 306.6$ & $ 23.6 $ & $ 3$ & $ -0.280 $ & $ 0.088$ & $ -0.482 $ & $ 0.002 $ & $ ...$ & $ ...$\\ \, 60689** & $ 7.5$ & $ 49.1 $ & $ 1$ & $ 0.158 
$ & $ 0.225$ & $ 0.540 $ & $ 0.004 $ & $ ...$ & $ ...$\\ \, 61743 & $ 150.3$ & $ 10.0 $ & $ 3$ & $ -0.041 $ & $ 0.824$ & $ -0.011 $ & $ 0.955 $ & $ ...$ & $ ...$\\ \, 70610 & $ 208.0$ & $ 14.8 $ & $ 1$ & $ 0.180 $ & $ 0.576$ & $ 0.489 $ & $ 0.127 $& $ -5.023 $& $ 7.745 $\\ \, 71001* & $ 3646.1$ & $ 246.2 $ & $ 3$ & $ -0.167 $ & $ 0.386$ & $ 0.230 $ & $ 0.239 $ & $ ...$ & $ ...$\\ \, 71803 & $ 161.6$ & $ 12.2 $ & $ 1$ & $ 0.338 $ & $ 0.044$ & $ 0.371 $ & $ 0.031 $& $ -4.954 $& $ 8.377 $\\ \, 76713 & $ 168.6$ & $ 15.4 $ & $ 1$ & $ -0.065 $ & $ 0.762$ & $ 0.083 $ & $ 0.707 $ & $ ...$ & $ ...$\\ \, 78395 & $ 301.1$ & $ 20.0 $ & $ 2$ & $ -0.446 $ & $ 0.015$ & $ 0.222 $ & $ 0.257 $ & $ ...$ & $ ...$\\ \, 81533 & $ 238.6$ & $ 39.7 $ & $ 1$ & $ -0.742 $ & $ 0.000$ & $ -0.743 $ & $ 0.000 $ & $ ...$ & $ ...$\\ \, 85158 & $ 225.2$ & $ 28.8 $ & $ 3$ & $ 0.156 $ & $ 0.409$ & $ 0.013 $ & $ 0.948 $& $ -4.831 $& $ 12.320$\\ \, 89354** & $ 132.3$ & $ 18.0 $ & $ 3$ & $ 0.190 $ & $ 0.165$ & $ 0.214 $ & $ 0.117 $ & $ ...$ & $ ...$\\ \, 95849* & $ 26.1$ & $ 25.7 $ & $ 2$ & $ -0.818 $ & $ 0.000$ & $ 0.828 $ & $ 0.000 $& $ -4.456 $& $ 15.736$\\ \, 99385 & $ 925.1$ & $ 69.5 $ & $ 2$ & $ -0.176 $ & $ 0.345$ & $ 0.303 $ & $ 0.141 $& $ -4.401 $& $ 9.333 $\\ 108095* & $ 937.3$ & $ 74.7 $ & $ 2$ & $ -0.155 $ & $ 0.380$ & $ 0.489 $ & $ 0.003 $ & $ ...$ & $ ...$\\ 109787 & $ 237.7$ & $ 15.0 $ & $ 2$ & $ 0.002 $ & $ 0.992$ & $ 0.174 $ & $ 0.284 $ & $ ...$ & $ ...$\\ 110156 & $ 500.4$ & $ 31.5 $ & $ 2$ & $ -0.257 $ & $ 0.215$ & $ -0.103 $ & $ 0.640 $& $ -4.858 $& $ 8.139 $\\ 110843 & $ 29.8$ & $ 11.7 $ & $ 1$ & $ 0.667 $ & $ 0.000$ & $ 0.228 $ & $ 0.283 $& $ -4.935 $& $ 8.573 $\\ 112229 & $ 70.4$ & $ 15.1 $ & $ 1$ & $ 0.526 $ & $ 0.001$ & $ 0.197 $ & $ 0.241 $ & $ ...$ & $ ...$\\ 115951* & $ 104.7$ & $ 10.6 $ & $ 1$ & $ -0.144 $ & $ 0.238$ & $ 0.333 $ & $ 0.005 $& $ -4.932 $& $ 8.606 $\\ 116374 & $ 902.2$ & $ 69.1 $ & $ 3$ & $ 0.784 $ & $ 0.000$ & $ -0.095 $ & $ 0.682 $& $ -4.761 $& $ 8.377 $\\ 6792-01746-1 & $ 262.2$ & $ 25.6 $ & $ 1$ & $ 0.396 $ & $ 0.014$ & $ -0.407 $ & $ 0.010 $ & $ ...$ & $ ...$\\ 8681-00611-1 & $ 502.1$ & $ 31.6 $ & $ 1$ & $ -0.165 $ & $ 0.344$ & $ 0.107 $ & $ 0.541 $ & $ ...$ & $ ...$\\ \hline \end{tabular} \label{TABLE6} \end{table*} In the case of \textbf{\rm{HIP10278}}, \cite{rickman2020spectral} combined CORALIE and HARPS measurements to find $P = 123 \pm 41$ years. This period is too long to be detected with HARPS measurements, and accordingly we only find a first degree trend as the best fit to the data. Similarly, we list in Table \ref{table:TrendActSophie} the stars observed with SOPHIE and ELODIE, having an acceleration greater than the threshold: $10 \,\rm{m\, s^{-1}yr^{-1}}$. Thirteen from SOPHIE exceed $25 \,\rm{m\, s^{-1}yr^{-1}}$. Among these stars, HIP26037, HIP29611, HIP71291, and HIP76751 have an amplitude greater than the threshold for stability as defined in CS13. Among ELODIE stars, the following have an acceleration exceeding $25 \,\rm{m\, s^{-1}yr^{-1}}$: HIP24205 ($34.0 \,\rm{m\, s^{-1}yr^{-1}}$) and HIP61157 ($79.5 \,\rm{m\, s^{-1}yr^{-1}}$). The trend detected in ELODIE HIP24205 data is a portion of the full orbit of the known period \citep[2117.3 $\pm$ 0.8 d;][]{bean2007mass}. 
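As a rough illustration of how the trend metrics used in this section can be obtained (a simplified sketch only; the function name is ours, and the actual fits are those of the pipeline described in Sect.~\ref{orbitSol}):
\begin{verbatim}
import numpy as np

def trend_metrics(t, rv, degree):
    """Fit a polynomial trend of the given degree (<= 3) and return the
    trend amplitude A (m/s), the mean acceleration A/T (m/s/yr), and the
    instantaneous acceleration at the mean epoch (m/s/yr)."""
    coeffs = np.polyfit(t, rv, degree)            # t in yr, rv in m/s
    model = np.polyval(coeffs, t)
    A = model.max() - model.min()                 # RV amplitude of the trend
    T = t.max() - t.min()                         # time span
    gamma_r = np.polyval(np.polyder(coeffs), t.mean())
    return A, A / T, gamma_r
\end{verbatim}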
\begin{table*} \caption{Same as Table \ref{tabe:TrendAct}, but for stars observed with SOPHIE and ELODIE.} \label{table:TrendActSophie} \begin{tabular}{|l|r|r|c|r|r|r|r|r|r|} \hline {HIP/TYC} & {$A$ [$\rm{m\, s^{-1}}$]} & {$\frac{A}{T} [\rm{m\, s^{-1}yr^{-1}}]$} & {Degree} & {$\rho_{\rm{BIS}}$} & {$\rm{pValue_{BIS}}$} & {$\rho_{S}$} & {$\rm{pValue_{S}}$} & {log$R'_{\rm{HK}}$} & {RV jitter [$\rm{m\, s^{-1}}$]}\\ \hline SOPHIE \\ \hline \, 23209*& $ 130.3 $ & $ 13.0 $ & $ 1 $ & $ ... $ & $ ...$ & $ -0.241 $ & $ 0.217 $ & $ ... $ & $ ...$\\ \, 26037& $ 1250.3 $ & $ 90.0 $ & $ 2 $ & $ -0.134 $ & $ 0.622$ & $ -0.602 $ & $ 0.006$ & $ -4.909$ & $ 8.864 $\\ \, 29611& $ 402.3 $ & $ 45.3 $ & $ 1 $ & $ -0.206 $ & $ 0.227$ & $ -0.073 $ & $ 0.683 $ & $ ... $ & $ ...$\\ \, 31849& $ 275.7 $ & $ 46.6 $ & $ 1 $ & $ -0.895 $ & $ 0.000$ & $ 0.459 $ & $ 0.014 $ & $ ... $ & $ ...$\\ \, 32906& $ 167.3 $ & $ 41.2 $ & $ 2 $ & $ 0.261 $ & $ 0.367$ & $ -0.699 $ & $ 0.003 $ & $ ... $ & $ ...$\\ \, 35198& $ 59.8 $ & $ 63.7 $ & $ 2 $ & $ -0.052 $ & $ 0.842$ & $ -0.184 $ & $ 0.494 $ & $ ... $ & $ ...$\\ \, 44324& $ 216.7 $ & $ 22.2 $ & $ 1 $ & $ ... $ & $ ...$ & $ 0.133 $ & $ 0.587 $ & $ ... $ & $ ...$\\ \, 61157& $ 223.4 $ & $ 27.6 $ & $ 1 $ & $ 0.114 $ & $ 0.605$ & $ -0.047 $ & $ 0.832 $ & $ ... $ & $ ...$\\ \, 63346& $ 192.4 $ & $ 22.4 $ & $ 2 $ & $ -0.270 $ & $ 0.058$ & $ -0.282 $ & $ 0.047 $ & $ ... $ & $ ...$\\ \, 68134& $ 385.8 $ & $ 31.5 $ & $ 1 $ & $ ... $ & $ ...$ & $ 0.305 $ & $ 0.219 $ & $ ... $ & $ ...$\\ \, 71291& $ 779.7 $ & $ 66.8 $ & $ 1 $ & $ 0.527 $ & $ 0.064$ & $ -0.398 $ & $ 0.060 $ & $ ... $ & $ ...$\\ \, 74702& $ 68.4 $ & $ 39.3 $ & $ 3 $ & $ -0.846 $ & $ 0.000$ & $ 0.567 $ & $ 0.011$ & $ -4.457$ & $ 9.177 $\\ \, 76751& $916.5$ & $81.2$ & $2$ & $-0.195$ & $0.544$ & $0.040$ & $0.908$ & $ ... $ & $ ...$\\ \, 77718*& $ 110.4 $ & $ 22.1 $ & $ 1 $ & $ 0.043 $ & $ 0.765$ & $ -0.024 $ & $ 0.881 $ & $ ... $ & $ ...$\\ \, 85653& $ 130.2 $ & $ 11.6 $ & $ 1 $ & $ 0.406 $ & $ 0.026$ & $ 0.206 $ & $ 0.242$ & $ -4.928$ & $ 8.654 $\\ \, 91658& $ 31.6 $ & $ 31.8 $ & $ 1 $ & $ -0.312 $ & $ 0.106$ & $ -0.029 $ & $ 0.884 $ & $ ... $ & $ ...$\\ \, 96184*& $ 120.3 $ & $ 11.6 $ & $ 1 $ & $ ... $ & $ ...$ & $ -0.343 $ & $ 0.017 $ & $ ... $ & $ ...$\\ \, 98828& $ 37.0 $ & $ 10.1 $ & $ 2 $ & $ -0.097 $ & $ 0.720$ & $ 0.827 $ & $ 0.003$ & $ -4.707$ & $ 8.515 $\\ 104587*& $ 110.2 $ & $ 12.0 $ & $ 1 $ & $ 0.843 $ & $ 0.000$ & $ 0.310 $ & $ 0.004$ & $ -4.994$ & $ 7.814 $\\ 111977& $ 91.0 $ & $ 10.1 $ & $ 3 $ & $ ... $ & $ ...$ & $ -0.033 $ & $ 0.899 $ & $ ... $ & $ ...$\\ 113086& $ 222.6 $ & $ 19.9 $ & $ 1 $ & $ 0.261 $ & $ 0.367$ & $ 0.239 $ & $ 0.325$ & $ -4.815$ & $ 12.665 $\\ 114210& $ 77.3 $ & $ 12.6 $ & $ 2 $ & $ 0.190 $ & $ 0.481$ & $ 0.125 $ & $ 0.589$ & $ -4.735$ & $ 14.5387 $\\ 115714** & $ 321.5$ & $ 39.7$ & $ 1$ & $ 0.197 $ & $ 0.152$ & $ -0.213 $ & $ 0.033$& $ ... $ & $ ...$\\ 1991-01087-1*& $ 27.8 $ & $ 25.7 $ & $ 1 $ & $ -0.228 $ & $ 0.075$ & $ -0.313 $ & $ 0.059 $ & $ ... $ & $ ...$\\ \hline ELODIE\\ \hline \, 19422& $156.2$&$ 17.1$&$1$& $ ... $ & $ ...$& $ ... $ & $ ...$& $ ... $ & $ ...$\\ \, 24205**& $202.0$&$34.0$&$3$& $ ... $ & $ ...$& $ ... $ & $ ...$& $ ... $ & $ ...$\\ \, 61157& $191.8$&$79.5$&$1$& $ ... $ & $ ...$& $ ... $ & $ ...$& $ ... $ & $ ...$\\ \, 77718& $120.3$&$14.8$&$1$& $ ... $ & $ ...$& $ ... $ & $ ...$& $ ... $ & $ ...$\\ \, 80264& $ 96.3$&$14.0$&$3$& $ ... $ & $ ...$& $ ... $ & $ ...$& $ ... $ & $ ...$\\ \, 80902**& $110.6$&$12.4$& $2$& $ ... $ & $ ...$& $ ... $ & $ ...$& $ ... 
$ & $ ...$\\ \, 85653& $144.6$&$16.1$&$1$& $ ... $ & $ ...$& $ ... $ & $ ...$& $ ... $ & $ ...$\\ 108602& $128.5$&$17.6$&$1$& $ ... $ & $ ...$& $ ... $ & $ ...$& $ ... $ & $ ...$\\ \hline \end{tabular} \end{table*} In order to be consistent with the $\sigma_{\rm{Model}}$ of the stars with an orbital solution, we also estimated the standard deviation of the trend model using Eq.\,\ref{Eq:trendSTD}, considering only stars with a time span longer than 10 years. We list those with $\sigma_{\rm{Model}} \ge 100 \,\rm{m\, s^{-1}}$ over 10 and 12 years in Table\,\ref{tabe:sigmaModel}. We also list those with a scatter exceeding $100 \,\rm{m\, s^{-1}}$ over a time span shorter than 10 years. \begin{table} \caption{Trend stars with a model standard deviation exceeding $100 \ \rm{m\, s^{-1}}$ over 10 years, over 12 years, and over the full time span. Asterisk symbols have the same meaning as in Table \ref{TABLE6}.} \label{tabe:sigmaModel} \begin{tabular}{|l|r|c|r|r|} \hline {HIP/TYC} & {$T$ [yr]} & {$\sigma_{10}$ $[\rm{m\, s^{-1}}]$} & {$\sigma_{12}$ $[\rm{m\, s^{-1}}]$} & {$\sigma_{{T}}$ $[\rm{m\, s^{-1}}]$} \\ \hline HARPS \\ \hline \, \, 5806** & $ 12.2$& ... & $ 113.5$ & $116.4$\\ \, 37233 & $ 16.1 $& $194.4 $ & $ 308.5$ & $637.4$\\ \, 64295** & $ 6.3$ & ... & ... & $ 103.2$ \\ \, 71001* & $14.8 $ & $ 1175.0 $ & $ 2358.4$ & $4950.0$\\ \, 99385 & $13.3 $ & $221.1 $ & $270.8$ &$304.2$\\ 108095* & $ 12.6 $ & $ 144.8 $ & $ 156.1$ & $158.3$\\ 116374 & $ 13.0 $ & $ 191.3 $ & $ 192.6$ & $187.6$\\ 8681-00611-1 & $ 15.9 $ & ... & $ 109.4$ &$144.9$\\ \hline SOPHIE \\ \hline \, 26037& $ 13.9 $ & $809.4 $ & $ 1056.5$ &$1316.7$\\ \, 29611& $ 8.9 $ & ... & ...&$116.1$ \\ \, 32906& $ 4.1 $ & ... & ... &$214.8$ \\ \, 68134& $ 12.3 $ & $ ... $ & $ 109.1$&$111.4$ \\ \, 71291& $ 11.7 $ & $ 192.9 $ & $ 231.5$ &$225.1$ \\ \, 74702& $ 1.7 $ & ... & ... &$196.9$ \\ \, 76751& $ 11.3$ & $346.8$ & $434.1$ &$402.1$\\ 114210& $ 6.1 $& ... & ... &$106.8$ \\ \hline ELODIE \\ \hline \, 80264 & $6.9$ & ... & ... & $108.1$ \\ \, 80902**& $8.9$ & ... & ... & $102.7$ \\ \hline \end{tabular} \end{table} We combined the data for the stars in common between HARPS, SOPHIE, and ELODIE, in addition to measurements from HIRES by \citet[][hereafter B17]{butler2017lces}. The trend amplitude, acceleration, and degree are listed in Table \ref{tabe:TrendCombine}, where we also list the shifts between instruments. The mean shifts with respect to B17 are much larger because their measurements are mean-subtracted. After combination, HIP24205, HIP32906, and HIP74702 do not show a significant trend. \begin{table*} \caption{Data for stars in common between HARPS, SOPHIE, ELODIE, and B17 showing a significant RV trend ($\frac{A}{T} > 10\,\rm{m\, s^{-1}yr^{-1}}$): RV amplitude, acceleration value, order of the polynomial fit, and shifts between HARPS and B17, HARPS and SOPHIE, HARPS and ELODIE, and ELODIE and SOPHIE.} \label{tabe:TrendCombine} \begin{tabular}{|l|r|r|c|r|r|r|r|} \hline {HIP/TYC} & {$A$ [$\rm{m\, s^{-1}}$]} & {$\frac{A}{T} [\rm{m\, s^{-1}yr^{-1}}]$} & {Degree} & {$\rm{shift}_{\rm{HB17}}$ [$\rm{m\, s^{-1}}$]} & {$\rm{shift}_{\rm{HS}}$ [$\rm{m\, s^{-1}}$]} & {$\rm{shift}_{\rm{HE}}$ [$\rm{m\, s^{-1}}$]} & {$\rm{shift}_{\rm{ES}}$ [$\rm{m\, s^{-1}}$]}\\ \hline \, \, 1444& $ 427.9$ & $ 20.4$ & $ 2$ & $ 28822.8 \pm 0.7$ &...&...&...\\ \, \, 5806** & $154.1$ & $12.6$ & $2$ & ... & $-25.1\pm 8.0$ & ...& ...\\ \, \, 7404& $ 164.5$ & $ 12.8$ & 3 &...
&$-108.8 \pm 7.7$ & ...& ...\\ \, 14614& $ 246.9$ & $ 11.4$ & 3 & ...&$ 18.2 \pm 2.2$ & $ -95 \pm 15$ &...\\ \, 26037& $ 1250.5$ & $ 75.5$ & 2 & ...&... & ...& $ -326 \pm 16$\\ \, 29611& $ 669.8$ & $ 45.3$ & 1 & ...&... & ...& $ 2.3 \pm 16.1$\\ \, 42575& $ 474.3$ & $150.0$ & 1 & ...& ...& ...& $288 \pm 29$ \\ \, 61157& $ 578.8$ & $ 43.0$ & 3 & ...&... & ...& $-112 \pm 15$\\ \, 63346& $ 267.2$ & $ 19.2$ & 3 & ...&... &...& $ -5.4\pm 11.7$\\ \, 68134& $ 486.6$ & $ 31.5$ & 1 & ...&... & ...& $ 71 \pm 23$\\ \, 71803& $ 223.1$ & $ 11.4$ & $ 2$ & $ 12953 \pm 1$ &...&...&...\\ \, 85158& $ 295.2$ & $ 18.5$ & $ 3$ & $ -24065 \pm 0.8$ &...&...&...\\ \, 99385& $ 925.2$ & $ 69.5$ & $ 2$ & $ 22239 \pm 2$ &...&...&...\\ 104587& $ 164.4$ & $ 13.4$ & 1 & ...&... & ...& $ 75 \pm 11$\\ 108602& $ 307.5$ & $ 17.8$ & 1 & ...&... & ...& $ 277 \pm 23$\\ 113086& $ 209.5$ & $ 12.2$ & 2 & ...&... & ...& $ 31 \pm 11$\\ 115714**& $ 448.4$& $ 28.4$ & 1 & ...&... & ...& $ 140 \pm 10$\\ \hline \end{tabular} \end{table*} In addition to the stars with $\sigma_{\rm{Model}} \ge 100 \,\rm{m\, s^{-1}}$ for each instrument individually, there are, after combination of the data from several instruments, four more stars with a high RV model scatter; they are listed in bold in Table \ref{table:sigmaModelCombined}. We show their RV curves in Appendix \ref{append:Trend}. \begin{table} \caption{Similar to Table \ref{tabe:sigmaModel}, but for the combined data sets. Stars in bold exceed the Gaia threshold only when data from different instruments are combined.} \label{table:sigmaModelCombined} \begin{tabular}{|l|r|c|r|r|} \hline {HIP} & {$T$ [yr]} & {$\sigma_{10}$ $[\rm{m\, s^{-1}}]$} & {$\sigma_{12}$ $[\rm{m\, s^{-1}}]$} & {$\sigma_{{T}}$ $[\rm{m\, s^{-1}}]$} \\ \hline \, 5806** & $ 12.2$& ... & $ 113.9$ & $116.8$\\ \, {\bf 7404} & $12.9$ & ... & $160.2$ & $194.9$ \\ 26037& $ 16.6$ & $771.8 $ & $ 1011.4$ &$1665.6$\\ 29611& $14.8$ & $130.8$ & $157.0$&$193.4$ \\ {\bf 42575} & $3.16$ & ... & ... & $136.9$\\ {\bf 61157} & $13.5$ & ...& $138.3$ & $181.1$ \\ 68134 & $ 15.4$ & ... & $109.0$&$139.9$ \\ {\bf 85158} & $15.9$ & $103.0$ & $142.3$ & $245.0$ \\ 99385 & $13.3 $ & $219.8$ & $269.2$ &$302.4$\\ \hline \end{tabular} \end{table} \subsection{Stars with companion detected by Adaptive Optics (AO)} \label{sectAO} The object causing the trend can be an unseen planetary or stellar companion. Under favourable circumstances in terms of magnitude difference and angular separation, imaging observations can be used to confirm the detection. Some of the trend stars have been observed using AO. The archival data were obtained with the NIRC2, NACO \citep{rousset2003naos} and SPHERE \citep{beuzit2010sphere} instruments. An example of a NIRC2 AO image of HIP1444 is displayed in Fig.~\ref{hip1444Keck}, revealing the presence of an object at $1.75 \pm 0.01 ''$ from the target. For the twelve stars with available AO images, we calculated absolute magnitudes and projected physical separations using recent distance measurements from \cite{bailer2021estimating} based on Gaia EDR3. Supposing the system to be physical (i.e. that the companion is at the same distance as the primary), we obtained the absolute magnitudes in the $J$, $H$, and $K$ bands. The apparent magnitude of the companion, $m_2$, is given by $m_1 - m_2 = -2.5\,\mathrm{log}\frac{f_1}{f_2}$, where $m_1$ is the apparent magnitude of the primary as given by the SIMBAD database.
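For illustration, the two photometric relations above translate directly into code (a minimal sketch; the function name is ours):
\begin{verbatim}
import numpy as np

def companion_magnitudes(m1, flux_ratio, d_pc):
    """Apparent and absolute magnitudes of the companion from the AO
    flux ratio f1/f2 and the Gaia-based distance, assuming the companion
    lies at the same distance as the primary."""
    m2 = m1 + 2.5 * np.log10(flux_ratio)   # from m1 - m2 = -2.5 log(f1/f2)
    M2 = m2 - 5.0 * np.log10(d_pc) + 5.0   # distance modulus
    return m2, M2

# e.g. the K-band flux ratio of 14.58 quoted below for HIP1444
# corresponds to a magnitude difference m2 - m1 of about 2.9 mag.
\end{verbatim}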
For some of these targets, there are objects with unknown parallaxes within a $5''$ radius from the sources according to Gaia EDR3 \citep{brown2021gaia}. There may be small differences between the angular separations, $\theta$, from Gaia and from AO (compare columns 3 and 4 of Table~\ref{tab:ao}) because these measurements were taken at different dates. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{Figures/aoGaia4.jpg}\end{center} \caption{HIP 1444 NIRC2 adaptive optics image obtained on 09-08-2013, made available in the Keck Observatory Archive (KOA) by the program PI, Barclay. We cropped the image and inverted the colours. The distance between the objects is $47.15 \pm 0.33$ AU and their flux ratio is 14.58 in the $K$ band. Gaia source positions at epoch 2016.0 (orange squares) were over-plotted using the Aladin Sky Atlas. The arrow shows the primary motion, with the modulus corresponding to the proper motion.} \label{hip1444Keck} \end{figure} The minimum mass of the companion, $M_{\rm{min}}$, which can cause a given acceleration, $\gamma_r$, is given by Eq. \ref{Mmin} \citep{liu2002crossing}, \begin{equation} \label{Mmin} M_{\rm{min}} (M_{\odot}) = 5.34 \times 10^{-6} (d\, \theta)^2 {\lvert}\gamma_r{\rvert}\sqrt{27}/2, \end{equation} where $\theta$ is the projected angular separation between the components (in arcsec), $\gamma_r = \frac{dv}{dt}$ is the instantaneous radial acceleration (the derivative of the RV evaluated at the mean epoch, in $\rm{m\, s^{-1}yr^{-1}}$), and $d$ is the distance to the system (in pc), so that $d\,\theta$ is the projected separation in AU. The orbital function $F (i, e, \omega, \phi )$, which depends on the inclination, $i$, eccentricity, $e$, longitude of periastron, $\omega$, and orbital phase, $\phi$, has a minimum value of $\sqrt{27}/2$ \citep{torres1999substellar}. Equation \ref{Mmin} allows us to verify whether the observed trend can be caused by an orbiting body. We estimate the companion photometric mass using the colour-temperature conversions of \citet{pecaut2013intrinsic}\footnote{We used the updated Version 2019.3.22 table available on E. Mamajek’s website: \url{http://www.pas.rochester.edu/\textasciitilde emamajek/EEM\textunderscore dwarf\textunderscore UBVIJHK\textunderscore colors\textunderscore Teff.txt}} by matching absolute magnitudes. We list the results of our photometric and astrometric analysis in Table~\ref{tab:ao}, along with the inferred physical separations, masses and periods. The period is determined from Kepler's third law, with the primary's mass extracted from the literature. We compare the photometric masses, $M_{\rm{phot}}$, and the minimum dynamical masses, $M_{\rm{min}}$, in Fig.~\ref{massRatio}. If the mass determined from the magnitude comparison is smaller than the minimum mass, a companion other than the one identified in the AO image is responsible for the RV trend. There are three objects fulfilling this condition: HIP52720, HIP102580, and HIP105606. For these, the companion detected by AO is therefore not responsible for the RV acceleration, which might instead be caused by an unresolved tertiary companion. For HIP105606, Gaia EDR3 detects a fainter source at $0.85''$ from the target with the same parallax ($\pi = 28.4$ mas). This separation gives a minimum mass of 0.092 M$_\odot$, supporting the idea that this closer body is much more likely to be the source of the trend. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{Figures/massRatio2.jpg}\end{center} \caption{Comparison between the photometric masses from AO and the minimum dynamical masses deduced from the trend acceleration.
The 1:1 relation is represented by the red line. Stars above that line have companions that are likely responsible for the RV acceleration. Stars below it (HIP52720, HIP102580 and HIP105606) have visual companions that cannot account for the RV acceleration at the measured separation.} \label{massRatio} \end{figure} \begin{table*} \caption{Astrometric and photometric results for the stars observed with AO instruments, allowing a comparison of the Gaia and AO angular separations ($\theta_{\rm{Gaia}}$ and $\theta_{\rm{AO}}$), as well as of the minimum dynamical and photometric masses ($M_{\rm{min}}$ and $M_{\rm{phot}}$).} \label{tab:ao} \begin{tabular}{|r|r|r|r|r|r|r|r|r|r|} \hline {HIP} & {$\gamma_r [\rm{m\, s^{-1}yr^{-1}}]$} & {$\theta_{\rm{Gaia}} ['']$} & {$\theta_{\rm{AO}} ['']$ } & {$a$ [AU]} & {$K$ or $J$ [mag]} & {$M_{\rm{min}}$ [$\rm{M_{\odot}}$]} & {$M_{\rm{phot}}$ [$\rm{M_{\odot}}$]} & {$P$ [yr]} & {Instrument}\\ \hline 1444 & $18.574 \pm 0.052 $& $1.794$ &$1.574 \pm 0.020 $ & $42.3 \pm 0.6 $ & 5.744 (K) & $0.48 \pm 0.01$ & 0.483 & $223.6 \pm 7.2$ & NACO \\ 7396 & $21.177 \pm 0.198 $& ... & $0.246 \pm 0.025 $ & $9.4 \pm 1.0$ & 6.054 (H)& $0.03 \pm 0.01$ & 0.471 & $26.0 \pm 4.2$ & NACO \\ 52720 & $-5.154 \pm 0.211 $& 1.677 &$1.691 \pm 0.023 $ &$ 94.2 \pm 1.4 $ & 6.093 (K)& $0.67 \pm 0.04$ & 0.432 & $821 \pm 53$ & NACO \\ 58558 & $2.834 \pm 0.313$ &... & $ 1.617 \pm 0.019$ & $67.9 \pm 1.0$ & $8.793$ (J) & $0.19 \pm 0.03$ & $0.218$ & $520 \pm 17$ & NACO \\ 62039 &$ 7.118 \pm 0.052 $& 1.293 & $1.309 \pm 0.014$ & $58.8 \pm 0.7 $ & 7.667 (J) & $0.36 \pm 0.01$ & 0.329 & $395 \pm 7$ & NIRC2 \\ 71803 &$ -10.941 \pm 0.098$ & ... &$0.547 \pm 0.020 $ & $29.6 \pm 1.1 $ & 9.404 (J) & $0.14 \pm 0.01$ & 0.176 & $143 \pm 11$ & NIRC2 \\ 81533 &$ 39.727 \pm 0.374$ &... &$0.359 \pm 0.025$ & $25.3 \pm 1.8 $ & 6.376 (H)& $0.37 \pm 0.04$ & 0.426 & $107 \pm 9$ & SPHERE \\ 85158 &$ -11.344 \pm 0.274$ &... &$0.680 \pm 0.019$ & $22.5 \pm 0.7 $ & 6.809 (K)& $0.08 \pm 0.01$ & 0.332 & $90 \pm 4$ & NIRC2 \\ 99385 & $-64.247 \pm 0.519$ & ... &$0.876 \pm 0.048 $ & $13.9 \pm 0.6 $ & 8.119 (K)& $0.18 \pm 0.02$ & 0.202 & $54 \pm 4$ & NACO \\ 102580 &$ 9.2013 \pm 0.908 $&2.401 &$2.109 \pm 0.026 $& $78.0 \pm 1.1 $ & 8.123 (K)& $0.82 \pm 0.10$ & 0.202 & $645 \pm 20$ & NACO \\ 105606 & $7.263 \pm 0.838$ &$0.849 $& $3.318 \pm 0.019$ & $116.4 \pm 0.8$ & $10.010$ (K)& $1.43 \pm 0.18$ & $0.112$ & $1224 \pm 26$ & NACO \\ 110843 & $-11.709 \pm 1.283 $& ... &$1.089 \pm 0.025 $& $40.2 \pm 1.0 $ & 6.554 (K) & $0.28 \pm 0.04$ & 0.361 & $216 \pm 10$ & NACO \\ \hline \end{tabular} \end{table*} Among the twelve stars, there are four for which AO images are available for at least two different dates. It is therefore possible to check whether the observed companion is a background star lacking significant proper or parallactic motion. Following \cite{gonzales2020trends}, we calculated the difference in position (offset) between the primary and the putative companion ($\Delta E, \Delta N$) at two different epochs. $E$ and $N$ refer to the East and North coordinates, respectively. If we consider a fixed background star, it would appear to shift relative to the target by the amount $(-\Delta E_s, \Delta N_s)$, where $\Delta E_s = \Delta E_{PM} + \Delta E_{\pi}$ and $\Delta N_s = \Delta N_{PM} + \Delta N_{\pi}$; $\Delta E_{PM}/\Delta N_{PM}$ is the proper motion variation (to the East or North) between the two dates, and $\Delta E_{\pi}/\Delta N_{\pi}$ is the variation of the parallactic motion of the primary star.
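As an aside, the $M_{\rm{min}}$ column of Table~\ref{tab:ao} can be cross-checked directly from Eq. \ref{Mmin}. A minimal sketch (the function name and the rounded HIP1444 inputs are ours; the distance follows from $a/\theta_{\rm{AO}}$):
\begin{verbatim}
import numpy as np

def m_min(d_pc, theta_arcsec, gamma_ms_yr):
    """Minimum companion mass (Msun) from Eq. (Mmin), with d*theta the
    projected separation in AU and the orbital function F(i, e, omega,
    phi) taken at its minimum value sqrt(27)/2."""
    return (5.34e-6 * (d_pc * theta_arcsec)**2
            * abs(gamma_ms_yr) * np.sqrt(27.0) / 2.0)

# e.g. HIP1444 (d ~ 26.9 pc, theta_AO = 1.574'', gamma_r = 18.574 m/s/yr):
# m_min(26.9, 1.574, 18.574) ~ 0.46 Msun, of the same order as the
# 0.48 +/- 0.01 Msun listed in Table tab:ao.
\end{verbatim}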
We show the space motion in Fig.~\ref{spaceM99385}, where that of a background object is represented by the track plotted in black. If the companion is comoving with the primary star, its location is inconsistent with this track. \begin{figure*} \begin{center} \begin{subfigure}{.48\textwidth} \includegraphics[width=\linewidth]{Figures/bg1444.jpg} \end{subfigure} \begin{subfigure}{.48\textwidth} \includegraphics[width=\linewidth]{Figures/bg62039.jpg} \end{subfigure} \begin{subfigure}{.48\textwidth} \includegraphics[width=\linewidth]{Figures/bg71803.jpg} \end{subfigure} \begin{subfigure}{.48\textwidth} \includegraphics[width=\linewidth]{Figures/bg99385.jpg} \end{subfigure} \end{center} \caption{Astrometric data for the four stars observed in AO at different epochs. The black curves show the motion of a background object. The red points are the astrometric measurements of the visual companion. As the companion does not follow the motion of a background object, it is likely bound to the primary.} \label{spaceM99385} \end{figure*} \textbf{HIP1444}. The companion was observed with both the NACO and NIRC2 instruments (see Fig.~\ref{hip1444Keck} in the latter case). In addition, Gaia EDR3 reports a source at nearly the same angular separation. Moreover, the proper and parallactic motions indicate that this object is bound to the target. B17 suggested that the primary could host a candidate planet with an orbital period of $5082 \pm 99$ days, which is much shorter than the value we found. Their adopted model consists of Keplerian signals and a linear acceleration. \textbf{HIP62039}. Our acceleration measured from the combined RVs is close to the $7.19 \pm 0.16 \,\rm{m\, s^{-1}yr^{-1}}$ value estimated by B17, who did not find a periodic signal. The lack of periodicity indicates the presence of a companion with a long period. \textbf{HIP71803}. B17 found an acceleration of $-10.772 \pm 0.281\,\rm{m\, s^{-1}yr^{-1}}$ without a periodic signal. With nearly the same acceleration, $-10.592\,\rm{m\, s^{-1}yr^{-1}}$, \cite{hinkel2019recommendation} estimated a companion minimum mass of 6.8 M$_{\mathrm{Jup}}$. \textbf{HIP85158}. Even though the orbit is not fully covered, B17 considered it a candidate to harbour a companion with a period of $5220 \pm 197$ days, an RV semi-amplitude of $29.20 \pm 2.24\,\rm{\; m\, s^{-1}}$ and an acceleration of $-16.372 \pm 0.432\,\rm{\; m\, s^{-1}yr^{-1}}$. The acceleration we estimated from the combined RVs is much smaller. \section{Activity indicators} \label{activi} The variations observed in the RV curves can be related to chromospheric phenomena instead of being due to genuine motion of the star's centre of mass. We investigate this possibility below. \subsection{RV jitter from chromospheric activity} For the stars with putative companions (short periods or trends), we retrieved the well-known magnetic activity indicator inferred from \ion{Ca}{ii} H+K, log$R'_{\rm{HK}}$, from the catalogue of \citet{borosaikia2018chromospheric}. We then estimated the expected jitter from the calibration of \citet{santos2000coralie} relevant to F, G and K type stars, where $R_5 = 10^5 R'_{\mathrm{HK}}$: \begin{equation} \label{jitter} \begin{cases} \sigma_F (\rm{m\, s^{-1}}) = 9.2 \times \mathit{R}_5^{0.75}\\ \sigma_G (\rm{m\, s^{-1}}) = 7.9 \times \mathit{R}_5^{0.55}\\ \sigma_K (\rm{m\, s^{-1}}) = 7.8 \times \mathit{R}_5^{0.13} \, \, \, \, . \\ \end{cases} \end{equation} The log$R'_{\rm{HK}}$ and RV jitter values can be found in Table~\ref{tabe:TrendAct} (HARPS) and Table~\ref{table:TrendActSophie} (SOPHIE+ELODIE) for the stars with a trend (the existence of an additional Keplerian signal and of a known period are noted by a single and a double asterisk, respectively). An RV variability caused by a companion rather than by stellar activity is more likely for the stars in Table~\ref{tabe:TrendAct} that have a trend amplitude larger than five times their estimated RV jitter. Conversely, HIP45749, HIP95849, HIP98828, and HIP110843 have an RV amplitude below that limit, and their RV variation could arise from stellar activity. \subsection{Line-profile indicators} \label{Line profile indicators} Another way to relate the RV variation to chromospheric activity is to examine the correlation between the former and some activity (line-profile) indicators, as done below. \subsubsection{Bisector velocity span} \label{BVS} To this aim, we first use the bisector velocity span ($BIS$), which is the difference between the velocity of the bisector at the top and at the bottom of the CCF, defined following \citet{figueira2013line} and \citet{queloz2001no} as the average of the mid-points between 60 and 90$\%$ of the line depth for the top and between 10 and 40$\%$ for the bottom. The $BIS$ values are extracted from the instrument pipelines. If the $BIS$ and RV data are independent, the variability signal may have its origin in an unseen companion. However, a negative correlation indicates that the RV variations arise from stellar activity, as concluded by \citet{queloz2001no} and \citet{fiorenzano2005line} for HD 166435. On the other hand, a positive correlation may be the signature of a blended system or of a substellar companion, as in HD 41004, where \citet{santos2002coralie} attributed the variation to a brown dwarf companion to the secondary. In the same vein, \citet{fiorenzano2005line} showed that the positive correlation between RVs and bisector spans in HD $8071$, which exhibits a nonlinear dependence, is due to contamination by a companion. However, \cite{lovis2011harps} noted that a positive correlation could also indicate that a magnetic cycle is the source of the RV variation. \citet{zechmeister2013planet} pointed out that the physical correlation cannot be confirmed on firm statistical grounds unless both variables vary with an identical period; in that case, the (sub)stellar companion option becomes invalid. We computed Pearson correlation coefficients (hereafter $\rho$) between the RV and $BIS$ data, along with the associated significance level, $p$. The $\rho$ and $p$ values are summarised in the fifth and sixth columns of Table~\ref{tabe:TrendAct} (HARPS) and Table~\ref{table:TrendActSophie} (SOPHIE+ELODIE) for stars with trends. Among the stars with a trend having an acceleration greater than or equal to $10 \rm{\; m\, s^{-1}yr^{-1}}$, four have their RVs significantly anti-correlated with $BIS$, which indicates that the RV variation can be related to chromospheric phenomena: HIP5806, HIP78395, HIP81533, and HIP95849. However, as mentioned above, a significant correlation does not provide a firm identification of the source of the variation, and doubt remains in some cases. For instance, a companion is clearly detected in HIP81533 through AO SPHERE imaging (Sect. \ref{sectAO}), but we are unsure whether it is causing the trend; however, as the trend's acceleration is high, we doubt it would be caused by activity.
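The jitter calibration of Eq. \ref{jitter} and the correlation diagnostics are simple to reproduce (a minimal sketch; the function name is ours, and scipy's \texttt{pearsonr} is used for the $\rho$ and $p$ values):
\begin{verbatim}
import numpy as np
from scipy.stats import pearsonr

def activity_jitter(log_rhk, sp_type):
    """Expected RV jitter (m/s) from the Santos et al. (2000)
    calibrations, with R5 = 1e5 * R'_HK = 1e5 * 10**log_rhk."""
    r5 = 1e5 * 10.0**log_rhk
    coeff, expo = {"F": (9.2, 0.75),
                   "G": (7.9, 0.55),
                   "K": (7.8, 0.13)}[sp_type]
    return coeff * r5**expo

# e.g. activity_jitter(-4.881, "G") ~ 9.19 m/s, matching the jitter listed
# for HIP1444 in Table tabe:TrendAct (assuming the G-type calibration);
# rho, p = pearsonr(rv, bis) yields the correlation coefficients and
# significance levels quoted in the same tables.
\end{verbatim}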
There are six trend stars with RVs positively correlated with $BIS$ and without any previously detected companion (in RV or imaging). For those, the variation can be caused by a blended system or a magnetic activity cycle. Among these stars, two have a high acceleration, which is more likely induced by a companion: HIP116374 and TYC6792-01746-1. As for the SOPHIE targets, a significant positive correlation is present for HIP85653 and HIP104587, which could indicate that the trend is caused by a magnetic cycle. HIP31849 and HIP74702 also have a significant correlation between RV and $BIS$; however, they have high accelerations. \subsubsection{The S-index} The S-index depends on the strength of the magnetic field and compares the flux in the \ion{Ca}{ii} H \& K line cores to the continuum value. It is used to determine stellar activity cycles and rotation periods. We computed this index using the method described by \citet{lovis2011harps}, where $H$ and $K$ are the total fluxes in the bands centred at $3933.664$ \AA\, ($K$) and $3968.470$ \AA\, ($H$), $R$ and $V$ are the values measured in adjacent continuum passbands, and $\alpha$ is a calibration constant: \begin{equation} S = \alpha \frac{H+K}{R+V} \, \, \, . \end{equation} As the activity level increases, active regions in which convection is frozen cover a larger fraction of the stellar surface. This implies that the convective blueshift is globally reduced and the lines are slightly shifted to the red. As a net result, a positive correlation between the RVs and the S-index is expected. The $\rho$ and $p$ values diagnosing a possible correlation between these two quantities are listed in columns 7 and 8 of Table~\ref{tabe:TrendAct} (HARPS) and Table~\ref{table:TrendActSophie} (SOPHIE+ELODIE) for the stars with a trend. There are eight trend stars with RVs positively correlated with the S-index, for which we assume that the long-term RV variation may be attributed to a magnetic cycle. HIP45749 and HIP71803 also have a positive correlation with $BIS$, supporting the idea that their RV variation is indeed caused by a magnetic cycle. The RV variations exhibited by HIP1444 are confirmed by AO imaging to be related to a companion, although this star shows a significant correlation with the S-index. Even though we are not sure whether the companion seen in AO imaging causes the trend in the case of HIP81533, the trend is very unlikely to be activity-related. The SOPHIE stars with a positive correlation between the RVs and the S-index and a low acceleration value are HIP98828 and HIP104587 (for which the $BIS$ is also positively correlated with the RVs). Those with a high acceleration value are HIP31849 and HIP74702. \section{Conclusion} \label{conc} We present a work aiming at confirming the status of a sample of 2351 stars as RV standards for the Gaia mission. We estimated the offset of our RVs relative to other instruments and placed the zero point by adopting the HARPS measurements as the reference. We find a mean absolute difference of $240.4 \rm{\; m\, s^{-1}}$ between our measurements and those determined by the instrument pipelines. We looked for RV variability due to the presence of a companion or to the existence of stellar activity/magnetism. Furthermore, we find stars with a linear or polynomial trend for which the period cannot be determined with the data in hand because it is longer than the observational baseline. Adaptive optics helps in a few cases to confirm the presence of distant companions potentially inducing a long-term trend, although the physical association must be proven.
Five stars with AO data have a visual companion lying at an angular separation consistent with the observed RV amplitude, but this is not the case for the other three. Among the first category, the analysis of the proper motions supports the bound status of the secondaries in HIP1444, HIP62039, HIP71803, and HIP99385. As the stars exhibiting an RV trend have incomplete orbital coverage, they should be monitored on a long-term basis. The trends with high acceleration, on the other hand, could indicate that the companion orbiting these stars is a brown/red dwarf. However, as the RV variation can also be caused by chromospheric activity, we checked for possible correlations between the RV and activity data to ascertain whether the observed RV variations have their origin in a low-mass companion. Stars with an RV model scatter (orbital and/or trend) exceeding the threshold required for the Gaia mission, $300\, \rm{m\, s^{-1}}$, should be excluded from the calibrations for Gaia DR4. From the HARPS sample, this concerns five stars with a trend only (HIP37233, HIP64295, HIP99385, HIP116374, TYC8681-00611-1), three stars with a Keplerian solution only (HIP26394, HIP62534, and HIP113834), and three stars with both a trend and a Keplerian solution (HIP5806, HIP71001, and HIP108095). Additional variable stars are identified through their SOPHIE observations: HIP26037, HIP29611, HIP32906, HIP68134, HIP71291, HIP74702, HIP76751, HIP114210, HIP115714, and TYC3239-00992-1. The only ELODIE stars that show RV variability are HIP80264 and HIP80902. After combining data from several instruments, four more stars can be considered variable: HIP7404, HIP42575, HIP61157, and HIP85158. All the calibration stars not fulfilling the {\it Gaia} criterion for a constant RV are listed in Table~\ref{table:removeDR4}. Other less obvious candidates are discussed in Sect. \ref{trend}. Stars for which the weighted standard deviation of the data, $\sigma_{RV}$, exceeds the variability threshold set by CS13 are shown in Fig.~\ref{AvsT}. Because this subset constitutes a tiny fraction of our sample, it supports the validity of the CS18 catalogue as a pool of RV calibrators for the RVS onboard Gaia. \begin{table} \caption{Targets with an RV scatter greater than the Gaia threshold, which should be removed from the Gaia DR4 calibrations.} \label{table:removeDR4} \begin{tabular}{|l|l|} \hline {HIP/TYC} & {Instrument} \\ \hline \, \, 5806 & H \\ \, \, 7404 & H-S \\ \, 26037 & S \\ \, 26394 & H\\ \, 29611 & S \\ \, 32906 & S \\ \, 37233 & H\\ \, 42575 & S-E \\ \, 61157 & S-E \\ \, 62534 & H\\ \, 64295 & H \\ \, 68134 & S \\ \, 71001 & H \\ \, 71291 & S \\ \, 74702 & S \\ \, 76751 & S \\ \, 80264 & E \\ \, 80902 & E \\ \, 85158 & H-B17 \\ \, 99385 & H \\ 108095 & H \\ 113834 & H \\ 114210 & S \\ 115714 & S \\ 116374 & H \\ 3239-00992-1 & S \\ 8681-00611-1 & H \\ \hline \end{tabular} \end{table} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{Figures/amplRV.jpg}\end{center} \caption{RV variation as a function of the time span, with the Gaia RV variability threshold ($300\, \rm{m\, s^{-1}}$) depicted by the red horizontal line.} \label{AvsT} \end{figure} \section*{Acknowledgements} YD, TM, YF and EG are greatly indebted to Belspo for long-term support through a PRODEX contract related to the Gaia Data Processing and Analysis Consortium. This research was achieved using the POLLUX database (\url{http://pollux.oreme.org}) operated at LUPM (Université Montpellier - CNRS, France) with the support of the PNPS and INSU.
This research has also made use of the Keck Observatory Archive (KOA), which is operated by the W. M. Keck Observatory and the NASA Exoplanet Science Institute (NExScI), under contract with the National Aeronautics and Space Administration. It has also made use of the ``Aladin sky atlas'' developed at CDS, Strasbourg Observatory, France. We thank Dr N. Seghouani, Dr R. Mecheri and Dr K. Daifallah for providing us with computing resources. \section*{Data Availability} The data underlying this article are available in the article and in its online supplementary material. \input{mnras_paper.bbl}
{ "timestamp": "2022-09-27T02:30:00", "yymm": "2209", "arxiv_id": "2209.08624", "language": "en", "url": "https://arxiv.org/abs/2209.08624" }
\section{Introduction} The field of topological photonics explores topology-related phenomena within the theoretical framework of Maxwell's equations for electromagnetic waves, and is largely motivated by earlier developments in condensed matter physics. However, photons differ fundamentally from electrons: while electrons are fermions obeying the Fermi-Dirac statistics, photons are bosons governed by the Bose-Einstein statistics. The new physics and problems specific to the bosonic nature of light not only motivate photonics researchers to find {\it new ways to do old things}, but also revolutionize our fundamental understanding of how light can be manipulated. Through the bulk-edge correspondence principle, topological bulk properties of photonic systems can lead to nontrivial behaviors of light along the edge of the systems, which are usually robust against certain types of defects and disorders. This prominent feature is very promising for a broad range of photonic applications in the near future. The field of topological photonics started in 2008 from a work by Haldane and Raghu \cite{Haldane08PRL}, who proposed to mimic the quantum Hall effect of two-dimensional (2D) electron gases in magnetic fields using light. The idea was experimentally realized soon afterwards \cite{Wang09nature} in a gyromagnetic photonic crystal composed of ferrite rods in the microwave regime. After this fundamental breakthrough, great progress in achieving topological states of electromagnetic waves has been made over the past 14 years, motivated both by the topological states known in condensed matter physics and by the specific features of electromagnetic waves. For example, while the earlier works \cite{Haldane08PRL,Wang09nature} exploited magneto-optical effects of gyromagnetic materials to mimic quantum Hall states, magneto-optical effects in general are very weak at optical frequencies, and this motivated the idea of creating artificial magnetic fields for photons \cite{Fang12NatPho_magnetic,Hafezi11NatPhy_resonator}. Several recent advances in photonic topological insulators, such as $Z_2$ \cite{Khanikaev13NatMat}, Floquet \cite{Rechtsman13Nature}, valley-Hall \cite{MaShvets16NJP}, and second-order \cite{Xie18PRB_corner}, were largely motivated by the relevant concepts developed in condensed matter physics. Especially, the richness of topological physics in three-dimensional (3D) condensed matter systems has provided many exciting new opportunities for 3D topological photonics \cite{Kim20Light_review,Xie21Frontier_3Drev,Park22NanoPhot_3Drev}. The field of topological photonics has been reviewed repeatedly in the past owing to its fast development \cite{Lu14NatPho_review,Khanikaev17NatPho_review,Xie18OE_review,Ozawa19RMP_review,Tang22LPR_review}. The bulk-edge correspondence is an important principle for understanding topological phenomena, which states that if the bulk of a system hosts a nontrivial topological invariant, then this invariant will manifest itself at the edge of the system. This principle not only reveals a deep connection between physics in different dimensions, but also provides a unique perspective to understand the diverse phenomena in topological photonic systems in a unified way.
Note that proving this principle in a mathematically rigorous way is in general hard, considering the diverse photonic systems available \cite{Silveirinha19PRX_bulk-edge}; in certain cases, one could have an anomalous bulk-edge correspondence \cite{Silveirinha16PRB_bulk-edge,Tauber20PRR_bulk-edge} or even violations of the bulk-edge correspondence \cite{Gangaraj20PRL_bulk-edge}. In this review, we present a brief introduction to the field of topological photonics, covering most of the recent developments to date in one, two, and three dimensions. We would like to note that, due to the fundamental difference between electrons and photons, the conditions for the existence of edge states in finite systems with boundaries are different for the two cases. As we know, electrons cannot escape the boundary of a system due to the work function, which provides a confining potential; as such, the wave functions of electrons are evanescent beyond the system boundary, i.e., air is naturally a topologically trivial medium for electrons. In contrast, light can propagate and exist in free space, i.e., in general, no confining potential exists between photonic media and free space. Different methods to confine light inside photonic media exist, e.g., using perfect electric conductors (or metals), the bandgap of a photonic crystal, or the light cone effect. This difference between electronic and photonic systems can make the methods used to form topological edge states in photonic systems more involved, but it also provides new opportunities, allowing further control of these photonic topological edge states. This review is organized as follows: In Sec. \ref{sec2}, we discuss the Maxwell-Schr\"odinger correspondence and some symmetry properties of electromagnetic waves in media. In Sec. \ref{sec3}, we present the prototypical model for topological physics in 1D, i.e., the Su-Schrieffer-Heeger (SSH) model, which has been used extensively in photonics for various topological applications. In Sec. \ref{sec4}, we review four main categories of photonic topological states in 2D, i.e., quantum Hall states, quantum spin Hall states, quantum valley Hall states and second-order topological corner states. In Sec. \ref{sec5}, we discuss various topological states in 3D photonic systems, which in general could be classified as gapped or gapless. For gapless topological states, depending on the dimensions of the band crossing, we describe two important categories of nodal physics, i.e., nodal points and nodal lines. We summarize and give some future perspectives in Sec. \ref{sec6}. \section{Maxwell-Schr\"odinger correspondence and symmetries of electromagnetism} \label{sec2} Maxwell's equations can be formulated in the Schr\"odinger form \cite{Lu14NatPho_review,NittisLein19ATMP}, such that the concepts and toolboxes developed for topological electronic materials could be carried over to topological photonic systems.
Maxwell's equations in a medium (without free charges and currents) are described by \begin{gather} \nabla \cdot \mathbf{D} =0, \hspace{0.5cm} \nabla \cdot \mathbf{B} =0 \\ \nabla \times \mathbf{E} =-\frac{\partial \mathbf{B}}{\partial t}, \hspace{0.5cm} \nabla \times \mathbf{H} =\frac{\partial \mathbf{D}}{\partial t} \end{gather} which can be recast in the following Schr\"odinger form \begin{gather} i\frac{\partial}{\partial t} \Psi =\hat{M} \Phi \end{gather} with $\Psi = \hat{N} \Phi$ and $(\Psi, \Phi, \hat{M}, \hat{N} )$ defined by \begin{gather} \Psi\equiv (\mathbf{D}, \mathbf{B})^T, \hspace{0.5cm} \Phi\equiv (\mathbf{E}, \mathbf{H})^T\\ \hat{M} = \begin{pmatrix} 0 & i \nabla \times \\ -i\nabla \times& 0 \end{pmatrix}, \hspace{0.5cm} \hat{N} = \begin{pmatrix} \epsilon & \xi \\ \zeta & \mu \end{pmatrix} \end{gather} where $\epsilon$ and $\mu$ are the electric permittivity and magnetic permeability tensors, respectively, whereas $\xi$ and $\zeta$ are the bianisotropic magnetoelectric coupling tensors. The time-reversal ($T$), inversion ($P$) and duality ($S$) transformations of Maxwell's equations are given by \begin{gather} T: t \rightarrow -t, \hspace{0.2cm}(\mathbf{E},\mathbf{H}) \rightarrow (\mathbf{E},-\mathbf{H}) \\ P: (x,y,z)\rightarrow -(x,y,z), \hspace{0.2cm}(\mathbf{E},\mathbf{H}) \rightarrow (-\mathbf{E},\mathbf{H}) \\ S: (\mathbf{E}, \mathbf{H}) \rightarrow (\mathbf{H}, -\mathbf{E}) \end{gather} One can show that the above transformations are symmetries of Maxwell's equations in vacuum, i.e., for $\mathbf{D}=\epsilon_0\mathbf{E}$, $\mathbf{B}=\mu_0\mathbf{H}$ and $\xi=\zeta=0$, where $\epsilon_0$ and $\mu_0$ are the vacuum permittivity and permeability, respectively. However, in a medium, the nontrivial material response, i.e., the explicit form of $\hat{N}$, could break these symmetries. In the following, we briefly discuss the time-reversal transformation, as it plays an important role in various photonic quantum Hall related states. First, consider a gyrotropic medium, i.e., gyroelectric: \begin{gather} \epsilon = \begin{pmatrix} \epsilon_{xx} & i\epsilon_{b} &0 \\ -i\epsilon_{b} & \epsilon_{yy} &0 \\ 0 & 0 & \epsilon_{zz} \end{pmatrix}, \hspace{0.5cm} \mu=\mu_0 \label{ge} \end{gather} or gyromagnetic: \begin{gather} \epsilon=\epsilon' (\text{scalar}), \hspace{0.5cm}\mu = \begin{pmatrix} \mu_{xx} & i\mu_{b} &0 \\ -i\mu_{b} & \mu_{yy} &0 \\ 0 & 0 & \mu_{zz} \end{pmatrix} \label{gu} \end{gather} with $\xi=\zeta=0$, which could be realized by applying an external static magnetic field along the $z$ axis of the gyrotropic material. Since time-reversal is an antilinear transformation composed of a linear operator and the complex conjugation ($^*$), it changes the imaginary unit $i$ to $-i$. As such, the presence of the off-diagonal terms $ i\epsilon_{b}$ or $ i\mu_{b}$ in Eq. (\ref{ge}) or (\ref{gu}) breaks the time-reversal symmetry of the system. In other words, one would need $\epsilon=\epsilon^*$ and $\mu=\mu^*$ in order for time-reversal to be preserved. In view of this, gyrotropic materials have been routinely used to realize one-way propagation of electromagnetic waves by mimicking the quantum Hall effect. Furthermore, not only breaking but also preserving time-reversal symmetry can lead to new topological phases.
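These statements are straightforward to verify numerically. The following is a minimal sketch (the tensor entries are arbitrary numerical values, and the operator representations $\mathcal{T}_{\textrm{e}}=i\sigma_y K$ and $\mathcal{T}_{\textrm{p}}=\tau_z K$ anticipate the comparison made in the next paragraph):
\begin{verbatim}
import numpy as np

# The gyroelectric tensor of Eq. (ge) fails the reality condition
# eps = eps* whenever eps_b is nonzero:
exx, eyy, ezz, eb = 2.0, 2.0, 2.0, 0.5   # arbitrary values
eps = np.array([[exx, 1j*eb, 0.0],
                [-1j*eb, eyy, 0.0],
                [0.0, 0.0, ezz]])
print(np.allclose(eps, np.conj(eps)))    # False: time reversal is broken

# An antiunitary operator T = U K acts as v -> U v*, hence T^2 = U U*:
sigma_y = np.array([[0.0, -1j], [1j, 0.0]])
tau_z = np.array([[1.0, 0.0], [0.0, -1.0]])
print(np.allclose((1j*sigma_y) @ np.conj(1j*sigma_y), -np.eye(2)))  # T_e^2 = -1
print(np.allclose(tau_z @ np.conj(tau_z), np.eye(2)))               # T_p^2 = +1
\end{verbatim}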
Let us compare the time-reversal operators for electrons ($\mathcal{T}_{\textrm{e}}$) and photons ($\mathcal{T}_{\textrm{p}}$), \begin{gather} \mathcal{T}_{\textrm{e}}=i\sigma_y K \\ \mathcal{T}_{\textrm{p}}=\tau_z K \end{gather} where $K$ denotes the complex conjugation operator, and $\sigma_y$ and $\tau_z$ are Pauli matrices acting on the spin ($\{\uparrow, \downarrow\}$) of electrons and on the electromagnetic components ($ \{\mathbf{E}, \mathbf{H} \}$) of photons, respectively. One can explicitly check that $\mathcal{T}_{\textrm{e}}\mathcal{T}_{\textrm{e}}=-1$ and $\mathcal{T}_{\textrm{p}}\mathcal{T}_{\textrm{p}}=+1$. Thus, unlike for electrons, where the Kramers degeneracy due to time-reversal symmetry could lead to a $\mathcal{Z}_2$ topological phase (also known as the quantum spin Hall phase in 2D), the time-reversal symmetry of photons alone is not sufficient to protect such a phase, and an additional symmetry, such as a crystalline symmetry, is needed to construct a photonic quantum spin Hall phase. \section{Topological photonics in 1D}\label{sec3} \begin{figure} \includegraphics[width=\columnwidth]{Fig1} \caption{ 1D SSH model and its topological properties. (a) The unit cell $i$ of the 1D SSH model contains two lattice sites A and B, and the model has two hopping amplitudes, the intra-cell hopping $v$ and the inter-cell hopping $w$. (b) Winding behaviors of the trajectory $(v+w\cos k,w\sin k)$ around the origin $(0,0)$ as $k$ changes from $-\pi$ to $+\pi$ when $v>w$ (left) and $v<w$ (right). (c) Energy spectrum of a finite 1D SSH chain (with 100 unit cells) as a function of $w$ at $v=1$. When $w>v$, zero-energy edge states appear within the bandgap.} \label{figs:fig1} \end{figure} \begin{figure*} \includegraphics[width=0.9\textwidth]{Fig2} \caption{ 1D photonic SSH systems and their applications. (a) Near-field imaging of topological edge states in plasmonic nanochains. Reproduced with permission from~\cite{Yan21NanoLett_time}, Copyright (2021), under CC BY-NC-ND. (b) Lasing at the topological edge states in a photonic crystal nanocavity dimer array. Reproduced with permission from~\cite{Han19Light_lasing}, Copyright (2019), under CC BY 4.0. (c) Selective enhancement of interface states in a dielectric resonator chain. Reproduced with permission from~\cite{Poli15NC_nonH}, Copyright (2015), under CC BY 4.0. (d) Periodically bent ultrathin metallic waveguides for the observation of anomalous $\pi$ modes via photonic Floquet engineering. Reproduced with permission from~\cite{Cheng19PRL_drivenSSH}, Copyright (2019) by the American Physical Society. (e) Protection of biphoton entanglement in an array of silicon nanowires. Reproduced with permission from~\cite{Redondo18Science_biphoton}, Copyright (2018) by the American Association for the Advancement of Science. (f) Photonic topological baths for quantum simulation. Reproduced with permission from~\cite{Saxena22ACSpho_bath}, Copyright (2022) by the American Chemical Society. (g) Two coupled topological interfaces in waveguide arrays. Reproduced with permission from~\cite{Wang20NanoLett_interaction}, Copyright (2020) by the American Chemical Society. (h) Selective excitation of topological edge states controlled by the handedness of incident light. Reproduced with permission from~\cite{Slobozhanyuk16LPR}, Copyright (2016) by John Wiley and Sons.
} \label{figs:fig2} \end{figure*} The prototypical model for topological physics in 1D is the SSH model \cite{SSH79PRL}, which describes the hopping of a particle in a 1D dimerized lattice with alternating strong and weak hopping amplitudes (see Fig.\ref{figs:fig1}a). The unit cell of the dimerized lattice contains two sites $A$ and $B$, and the Hamiltonian describing the system is given by \begin{gather} \hat{H} = \sum_i v(\hat{c}_{i,A}^{\dagger} \hat{c}_{i,B} +\hat{c}_{i,B}^{\dagger}\hat{c}_{i,A}) \nonumber \\ +w(\hat{c}_{i+1,A}^{\dagger} \hat{c}_{i,B} +\hat{c}_{i-1,B}^{\dagger} \hat{c}_{i,A}) \end{gather} where $\hat{c}_{i,A/B}^{\dagger}$ ($\hat{c}_{i,A/B}$) is the creation (annihilation) operator of a particle on the site $A/B$ within the unit cell $i$, whereas $v$ and $w$ are the intra-cell and inter-cell hopping amplitudes, respectively. Applying the Fourier transform $\hat{c}_{R,A/B}^{\dagger}=\frac{1}{\sqrt{N}}\sum_k e^{-ikR}\hat{c}_{k,A/B}^{\dagger}$, where $N$ is the number of unit cells, and using the identity $\frac{1}{N}\sum_R e^{ikR}=\delta_{k0}$, one can write the Hamiltonian in momentum space as $\hat{H} = \sum_k \hat{C}_{k}^{\dagger} H(k) \hat{C}_{k}$ with $ \hat{C}_{k} = (\hat{c}_{k,A}, \hat{c}_{k,B})^T$ and \begin{gather} H(k)=\begin{pmatrix} 0 & v+we^{-ik} \\ v+we^{ik} & 0 \end{pmatrix} \end{gather} To make the physics more transparent, we can introduce a complex number $h(k)\equiv h_x(k)+ih_y(k)=|h(k)|e^{i\phi(k)}$ with $h_x(k)=v+w\cos k$ and $h_y(k)=w\sin k$. Then $H(k)$ reduces to \begin{gather} H(k)=|h(k)| \begin{pmatrix} 0 & e^{-i\phi(k)} \\ e^{i\phi(k)}& 0 \end{pmatrix} \end{gather} Now it is straightforward to show that the eigenvalues and eigenfunctions of $H(k)$ are \begin{gather} E_{\pm}(k)=\pm |h(k)| = \pm \sqrt{v^2+2vw\cos(k)+w^2} \\ \Psi_{\pm}(k)=\frac{1}{\sqrt{2}}\begin{pmatrix} \pm e^{-i\phi(k)} \\ 1\end{pmatrix} \end{gather} where the energy spectrum is gapped unless $v=w$. The Zak phase \cite{Zak89PRL} can be calculated as \begin{gather} \gamma=i\int_{-\pi}^{+\pi} \langle \Psi_{\pm}(k)|\partial_k | \Psi_{\pm}(k)\rangle dk=\frac{1}{2}\int_{-\pi}^{+\pi} \frac{\partial \phi(k)}{\partial k} dk \nonumber \\ =\frac{1}{2} [\phi(+\pi)-\phi(-\pi)] \end{gather} whose physical meaning now becomes apparent: if the trajectory of $h(k)$ encloses the origin as $k$ changes from $-\pi$ to $\pi$, then $\gamma =\pi$; otherwise $\gamma = 0$. From $h_x(k)=v+w\cos k$ and $h_y(k)=w\sin k$, it is easy to see from Fig.\ref{figs:fig1}b that when $v>w$, the closed loop traced by $h(k)$ does not enclose the origin, and as such $\gamma=0$, whereas when $v<w$, the loop does enclose the origin and consequently $\gamma=\pi$. At $v=w$, the bandgap closes, signaling a topological phase transition. The SSH Hamiltonian $H(k)$ has trivial time-reversal symmetry implemented by $\mathcal{T} =K$, i.e., $\mathcal{T} H(k) \mathcal{T}^{-1}=H(-k)$, inversion symmetry implemented by $\mathcal{I} =\sigma_x$, i.e., $\mathcal{I} H(k) \mathcal{I}^{-1}=H(-k)$, and chiral symmetry implemented by $\mathcal{C} =\sigma_z$, i.e., $\mathcal{C} H(k) \mathcal{C}^{-1}=-H(k)$. Chiral symmetry pins at zero energy the topological edge states that emerge in a finite lattice when $w>v$ (see Fig.\ref{figs:fig1}c). Despite the simplicity of the 1D SSH model, the platforms on which it can be realized and the physics that can be explored with it are remarkably rich.
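As a concrete illustration, the following minimal Python sketch (with arbitrary parameter values) computes the Zak phase from the winding of $\phi(k)$ and confirms the appearance of near-zero-energy edge states in a finite open chain when $w>v$:
\begin{verbatim}
import numpy as np

def zak_phase(v, w, nk=2001):
    """Zak phase from the winding of h(k) = v + w e^{ik} about the origin."""
    k = np.linspace(-np.pi, np.pi, nk)
    h = (v + w * np.cos(k)) + 1j * (w * np.sin(k))
    dphi = np.diff(np.unwrap(np.angle(h)))   # increments of phi(k)
    return 0.5 * dphi.sum()                  # gamma = [phi(pi)-phi(-pi)]/2

def open_chain_spectrum(v, w, ncells=100):
    """Spectrum of a finite SSH chain with open boundary conditions."""
    n = 2 * ncells
    H = np.zeros((n, n))
    for i in range(ncells):
        H[2*i, 2*i+1] = H[2*i+1, 2*i] = v          # intra-cell hopping
        if i < ncells - 1:
            H[2*i+1, 2*i+2] = H[2*i+2, 2*i+1] = w  # inter-cell hopping
    return np.linalg.eigvalsh(H)

for v, w in [(1.0, 0.5), (0.5, 1.0)]:
    E = open_chain_spectrum(v, w)
    print(f"v={v}, w={w}: Zak phase = {zak_phase(v, w):+.3f}, "
          f"smallest |E| = {np.abs(E).min():.2e}")
# v>w: Zak phase ~ 0 and a gapped spectrum; v<w: Zak phase ~ pi and two
# (near-)zero-energy edge states pinned by chiral symmetry.
\end{verbatim}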
For example, the 1D SSH model has been studied in diverse photonic platforms, such as a photonic superlattice in a photorefractive material \cite{Malkova09OL}, photonic crystals of alternating dielectric slabs \cite{Henriques20PRA_Tamm} or nanobeam cavities \cite{Gong21SciRep_nanobeam}, quantum emitters interacting with a waveguide \cite{Bello19SciAdv}, split-ring resonators \cite{Jiang18OE_split}, dielectric nanoparticles \cite{Slobozhanyuk15PRL,Kruk17small}, arrays of coupled waveguides \cite{Redondo16PRL,Naz18PRA_stretch,Cheng18OE_Hbar,Chen19AP_tamm,Savelev20PRB_2mode,Song20LPR_waveguide,Jiao21PRL_inversionSSH}, plasmonic systems \cite{Poddubny14ACSpho_zigzag,Ling15OE,Cheng15LPR,Sinev15Nanoscale,Pocock18ACSpho_retard,Rappoport21ACSpho_graphene,Zhang21JAP}, and exciton-polaritons \cite{Solnyshkov16PRL_Zurek,Whittaker19PRB_SOC,Pickup20NC_nonH,Su21SciAdv,Pieczarka21OPtica_polariton}. The topological edge states of 1D photonic SSH systems have also been observed through imaging techniques such as spectral imaging \cite{Bleckmann17PRB_imaging}, near-field imaging \cite{Yan21NanoLett_time} (Fig.\ref{figs:fig2}a) and far-field optical imaging \cite{Moritake2022NanoPhot_imaging}. Moreover, various applications based on the 1D SSH model have been proposed, such as lasing \cite{Jean17NatPho_lasing,Zhao18NC_lasing,Parto18PRL_lasing,Ota18ComPhy_lasing,Han19Light_lasing,Gagel22ACSpho_switch}, non-Hermitian physics \cite{Schomerus13OL_nonH,Poli15NC_nonH,Zeuner15PRL_nonH,Pan18NC,Song19PRL_recover,Zhu20PRR_skin,Lin21OE_squareroot}, Floquet dynamics \cite{Cheng19PRL_drivenSSH,Petracek20PRA_drivenSSH}, quantum information of biphoton states \cite{Redondo18Science_biphoton,Klauck21PhoRes_biphoton}, quantum photonic baths \cite{Saxena22ACSpho_bath}, nonreciprocity \cite{Li21APL_NonreciprocalSSH}, interface interactions \cite{Wang20NanoLett_interaction} (Fig.\ref{figs:fig2}g), disorder \cite{Lin20PRR_disorder}, nonlinear harmonic generation \cite{Kruk19NatNanotech_SSH,Yuan22LPR_SSH}, and polarization control \cite{Slobozhanyuk16LPR,Tripathi21NanoPhot}. Specifically, in \cite{Jean17NatPho_lasing}, the authors experimentally observed lasing in the topological edge states of a 1D lattice of coupled polariton micropillars that implements an orbital version of the SSH model. Topological single-mode lasing in the 1D SSH model has also been demonstrated in arrays of coupled microring resonators \cite{Zhao18NC_lasing,Parto18PRL_lasing}. Furthermore, motivated by the high quality factors and small mode volumes of nanocavities, topological lasing has been demonstrated in the 1D SSH model based on nanocavity arrays \cite{Ota18ComPhy_lasing,Han19Light_lasing} (Fig.\ref{figs:fig2}b). The topological midgap states of the 1D SSH model under gain and loss were studied in \cite{Schomerus13OL_nonH,Poli15NC_nonH,Zeuner15PRL_nonH,Song19PRL_recover} (Fig.\ref{figs:fig2}c), where the authors of \cite{Zeuner15PRL_nonH} managed to monitor the topological transition by employing bulk dynamics only. The non-Hermitian skin effect, where all eigenstates of the system are localized at the system boundary, was studied in 1D SSH arrays of coupled resonant optical waveguides \cite{Zhu20PRR_skin,Lin21OE_squareroot}. Moreover, periodically bent waveguides can be exploited to mimic time-dependent driving, which allows the study of Floquet SSH models \cite{Cheng19PRL_drivenSSH,Petracek20PRA_drivenSSH} (Fig.\ref{figs:fig2}d).
The SSH edge modes can also be used for quantum technological applications, such as the protection of biphoton states \cite{Redondo18Science_biphoton,Klauck21PhoRes_biphoton} (Fig.\ref{figs:fig2}e) or as quantum photonic baths for quantum simulations \cite{Saxena22ACSpho_bath} (Fig.\ref{figs:fig2}f). Strongly enhanced third-harmonic generation was recently observed in experiments using zigzag arrays of dielectric nanoparticles \cite{Kruk19NatNanotech_SSH} and silicon photonic crystal nanocavities \cite{Yuan22LPR_SSH}. Finally, owing to the polarization-dependent edge states of zigzag SSH arrays, selective excitation of the topological edge states can be controlled by the handedness of the incident light, as demonstrated in \cite{Slobozhanyuk16LPR} (Fig.\ref{figs:fig2}h), or exploited for polarization control of the photoluminescence of nanocrystals \cite{Tripathi21NanoPhot}. \section{Topological photonics in 2D}\label{sec4} Photonic topological states in 2D can broadly be classified as time-reversal-symmetry breaking or preserving. For the photonic quantum Hall states, with time-reversal symmetry broken by external magnetic fields, a topologically nontrivial bulk bandgap is characterized by the Chern number, and the corresponding edge states within the bandgap exhibit chiral behavior, i.e., they can only propagate in one direction, as the backpropagating states are completely removed by the magnetic fields. For this reason, the photonic quantum Hall states are absolutely robust against defects and disorder as long as the bandgap protecting these states persists. In the time-reversal-preserving case, several categories of photonic topological states have been explored in the literature, both with the conventional bulk-edge correspondence, such as quantum spin Hall and quantum valley Hall states, and with the anomalous bulk-corner correspondence, such as second-order topological corner states. For the photonic quantum spin or valley Hall states, as their time-reversal partners always exist, these states are only robust against certain types of defects and disorder, i.e., their robustness is weaker than that of the quantum Hall states. In the following, we mainly discuss these four categories of photonic topological states in 2D. \subsection{Photonic quantum Hall states} The quantum Hall effect can be characterized by the Chern number defined by \cite{Lu14NatPho_review} \begin{gather} C_n=\frac{1}{2\pi}\iint_{\textrm{FBZ}} \mathbf{F}_n(\mathbf{k}) d^2\mathbf{k} \end{gather} where $\mathbf{F}_n(\mathbf{k})=\nabla_{\mathbf{k}} \times \mathbf{A}_n(\mathbf{k}) $ and $\mathbf{A}_n(\mathbf{k})=\langle \mathbf{u}_n(\mathbf{k}) | i \nabla_{\mathbf{k}} | \mathbf{u}_n(\mathbf{k}) \rangle$ are the Berry curvature and Berry connection, respectively, and $\mathbf{u}_n(\mathbf{k})$ is the spatially periodic part of the Bloch function for the $n$-th band. Here, the integral in momentum space is performed over the first Brillouin zone (FBZ), and the inner product in the Berry connection is defined as $\langle \mathbf{u}_n | \mathbf{u}_m \rangle = \iint_{\textrm{UC}} \epsilon(\mathbf{r}) \mathbf{u}_n^*(\mathbf{r}) \mathbf{u}_m(\mathbf{r}) d^2\mathbf{r}$, where the integral in real space is performed over the unit cell (UC).
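For readers who wish to evaluate $C_n$ numerically, the following Python sketch implements the widely used discretized Berry-flux (Fukui-Hatsugai-Suzuki) method on a generic two-band tight-binding toy model; the model and its parameters are purely illustrative, and a photonic-crystal calculation would instead use the Bloch modes and the $\epsilon(\mathbf{r})$-weighted inner product defined above:
\begin{verbatim}
import numpy as np

# Generic two-band toy model h(k) = d(k).sigma with a tunable mass m
# (illustrative only, chosen to demonstrate the numerical method).
def h2band(kx, ky, m):
    dx, dy, dz = np.sin(kx), np.sin(ky), m + np.cos(kx) + np.cos(ky)
    return np.array([[dz, dx - 1j*dy], [dx + 1j*dy, -dz]])

def chern_number(m, band=0, N=60):
    """Chern number from gauge-invariant link variables multiplied
    around each plaquette of an N x N grid covering the FBZ."""
    ks = np.linspace(-np.pi, np.pi, N, endpoint=False)
    u = np.empty((N, N, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            u[i, j] = np.linalg.eigh(h2band(kx, ky, m))[1][:, band]
    C = 0.0
    for i in range(N):
        for j in range(N):
            ip, jp = (i + 1) % N, (j + 1) % N
            loop = (np.vdot(u[i, j], u[ip, j]) * np.vdot(u[ip, j], u[ip, jp])
                    * np.vdot(u[ip, jp], u[i, jp]) * np.vdot(u[i, jp], u[i, j]))
            C += np.angle(loop)   # Berry flux through one plaquette
    return C / (2 * np.pi)

for m in (-1.0, 1.0, 3.0):
    print(f"m = {m:+.1f}: C = {chern_number(m):+.2f}")
# |C| = 1 for 0 < |m| < 2 (with opposite signs for opposite m),
# and C = 0 for |m| > 2.
\end{verbatim}
The plaquette construction is gauge invariant, so no smooth gauge of the Bloch states needs to be fixed, which is what makes this method convenient in practice.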
As the Berry curvature is odd under the time-reversal operation, i.e., time-reversal symmetry implies $\mathbf{F}_n(-\mathbf{k}) = -\mathbf{F}_n(\mathbf{k})$, for a photonic system with time-reversal symmetry the integral of the Berry curvature over the first Brillouin zone is always zero (i.e., $C_n=0$). Different approaches to calculate the Chern number numerically for both Hermitian and non-Hermitian photonic crystals, e.g., based on Wilson loops or Green's functions, have been proposed recently \cite{Paz19AQT_tutorial,Wang19NJP_wilson,Wang20FOpto_comsol,Zhao20OE_chern,Prudencio20CP_chern_nonH,Chen21PRA_wilson_nonH}. The first photonic Chern insulator in photonic crystals with broken time-reversal symmetry was proposed by Haldane and Raghu \cite{Haldane08PRL} in 2008, in an effort to construct analogs of quantum Hall edge states for electromagnetic waves. They demonstrated that the Dirac points in the transverse electric (TE) photon bands of a hexagonal 2D array of cylindrical dielectric rods could be gapped out by adding an imaginary off-diagonal component to the permittivity tensor. As a result, the split bands acquire nonzero Chern numbers of $\pm1$, and unidirectional photonic modes with no possibility of being backscattered at bends or imperfections emerge along an interface between two magneto-optic photonic crystals with different topological properties. Later, Wang et al.\ \cite{Wang08PRL_QH} pointed out that Dirac points in the band structure are not strictly necessary for observing these reflection-free one-way edge modes, and they demonstrated the existence of transverse magnetic (TM) photonic bandgaps with nonzero Chern number in a square-lattice gyromagnetic photonic crystal operating at microwave frequencies. This theoretical prediction was soon experimentally verified by the same authors in \cite{Wang09nature} using an interface between a gyromagnetic photonic-crystal slab and a metal wall, as shown in Fig.\ref{figs:fig3}a, where even large metallic scatterers placed in the path of the propagating edge modes do not induce reflections. \begin{figure*} \includegraphics[width=0.9\textwidth]{Fig3} \caption{Photonic quantum Hall states and their applications. (a) Experimental observation of one-way topological electromagnetic edge states. Reproduced with permission from~\cite{Wang09nature}, Copyright (2009) by Springer Nature. (b) One-way large-area topological waveguide states in magnetic photonic crystals. Reproduced with permission from~\cite{Wang21PRL_LargeArea}, Copyright (2021) by the American Physical Society. (c) Self-guiding electromagnetic edge states along an edge bounded by air. Reproduced with permission from~\cite{Poo11PRL}, Copyright (2011) by the American Physical Society. (d) Nonreciprocal lasing in topological cavities of arbitrary geometries. Reproduced with permission from~\cite{Bahari17Science}, Copyright (2017) by the American Association for the Advancement of Science. (e) Experimental observation of photonic antichiral edge states. Reproduced with permission from~\cite{Zhou20PRL_antichiral}, Copyright (2020) by the American Physical Society. (f) Experimental observation of topological Anderson insulator in disordered gyromagnetic photonic crystals. Reproduced with permission from~\cite{Liu20PRL_disorder}, Copyright (2020) by the American Physical Society.
} \label{figs:fig3} \end{figure*} While the confining edge in \cite{Wang09nature} was constructed by cladding the photonic crystal with a metal wall, different methods to construct a confining edge for chiral edge states have also been studied. In \cite{Fu10APL_QH}, a one-way waveguide formed between a gyromagnetic photonic crystal and a normal dielectric photonic crystal was experimentally demonstrated, and the authors found that as the waveguide width changes, the forward-propagating modes remain very robust against the intrusion of a metal plate, whereas the backward-propagating waves are very sensitive to the width of the waveguide. The waveguide at the boundary of two adjacent magneto-optical photonic crystals with opposite magnetic biases was studied in \cite{Lai20AIP}, where the authors found that this waveguide can simultaneously support symmetric and antisymmetric topological edge states, which possess the same direction of energy propagation but opposite directions of phase propagation. Furthermore, by inserting a domain of an ordinary photonic crystal sandwiched between two domains of magnetic photonic crystals, the authors of \cite{Wang21PRL_LargeArea} experimentally demonstrated the existence of large-area one-way waveguide states with amplitude uniformly distributed over the nonmagnetized domain (see Fig.\ref{figs:fig3}b). Interestingly, the possibility of having one-way edge states at the edge of a single gyrotropic photonic crystal bounded by air, due to the light-cone effect, has also been studied \cite{Ochiai09PRB,Lin09PRB,Poo11PRL,Tasolamprou21PRapp,Liu12OL_slab}. For example, in \cite{Poo11PRL} (see Fig.\ref{figs:fig3}c), self-guiding electromagnetic edge states outside the light cone, propagating along the zigzag edge of a honeycomb magnetic photonic crystal without requiring an ancillary cladding layer, were experimentally demonstrated. Recently, by bringing the bandgap that supports one-way edge states below the light cone, self-guiding electromagnetic edge states were also shown to exist in a square photonic crystal bounded by air \cite{Tasolamprou21PRapp}. One-way electromagnetic modes can further be sustained by the edge of a gyromagnetic photonic crystal slab when the photonic bandgap supporting the chiral modes is below the light cone \cite{Liu12OL_slab}. At infrared and terahertz frequencies, graphene placed in a static magnetic field can be characterized as an electrically gyrotropic material. As such, by periodically patterning monolayer graphene with nanoholes, topological one-way plasmonic edge states at deep-subwavelength scales, operable up to infrared frequencies, have been demonstrated \cite{Jin17PRL_graphene,Pan17NC_graphene}. The one-way electromagnetic edge states have found a wide range of applications \cite{Wang22FronMater}, such as a one-way cross-waveguide splitter \cite{He10APL}, a unidirectional channel-drop filter \cite{Fu11APL_filter}, one-way waveguides with large Chern numbers \cite{Skirlo14PRL,Skirlo15PRL}, dual-topology induced light-trapping in lower dimensions \cite{Li18NC_dislocation}, observation of an unpaired photonic Dirac point \cite{Liu20NC_unpaired}, nonreciprocal lasing \cite{Bahari17Science} (Fig.\ref{figs:fig3}d), antichiral edge states \cite{Chen20PRB_Antichiral, Zhou20PRL_antichiral} (Fig.\ref{figs:fig3}e), nonreciprocal Goos-H\"anchen shifts \cite{Ma20OE_shift}, and nonlinear frequency mixing \cite{You20SciAdv,Lan20PRB_nonlinear}.
Because backscattering in conventional optical devices dominates over all other loss mechanisms as the group velocity of electromagnetic waves becomes small, one-way electromagnetic edge states, which are free of backscattering, provide interesting opportunities for slow-light applications \cite{Yang13APL_slow,Chen19PRB_slow,Chen19PhoRes_slow,Chen20OL_slow,Zhuang21OE_slow}. For example, in \cite{Yang13APL_slow}, a group velocity more than one order of magnitude smaller than the speed of light was experimentally demonstrated in a waveguide sandwiched between a gyromagnetic photonic crystal and a metal cladding. In \cite{Chen19PRB_slow}, a waveguide composed of two magneto-optical photonic crystals with the same Chern number was shown to provide a mechanism for creating a unique group-dispersionless slow-light state, due to the strong interaction between the two counterpropagating one-way edge states in the two composite semi-infinite magneto-optical photonic crystals. This slow-light state can be used to realize switchable slow-light rainbow trapping \cite{Chen19PhoRes_slow}, where different frequency components of a wave packet are separated and stored at different positions. Apart from the strong interaction between two counterpropagating one-way edge states studied in \cite{Chen19PRB_slow}, topological slow-light states can also be achieved through strong interactions between two regular co-propagating one-way edge states in a single-channel \cite{Chen20OL_slow} or double-channel waveguide \cite{Zhuang21OE_slow}. While the one-way electromagnetic edge states are robust against small disorder, the fate of these states under strong disorder has also been studied \cite{Mansha17PRB_disorder,Yang19PRB_disorder,Yang20OE_disorder,Liu20PRL_disorder,Zhou20Light_disorder}. Amorphous analogs of 2D photonic Chern insulators consisting of gyromagnetic rods with only short-range order but no long-range order were studied in \cite{Mansha17PRB_disorder}, where the authors found that nonreciprocal transmission exists even at very low levels of short-range order with no discernible spectral gaps. Amorphous magnetic photonic lattices with only short-range order were also studied in \cite{Yang19PRB_disorder}, in which single-mode and multimode topological edge states can exist despite the amorphous nature of the lattices. Disorder can moreover induce a topological transition from a trivial to a nontrivial insulator, i.e., a topological Anderson insulator. Recently, the authors of \cite{Liu20PRL_disorder} experimentally demonstrated a photonic topological Anderson insulator in a 2D disordered gyromagnetic photonic crystal, as shown in Fig.\ref{figs:fig3}f, where they directly observed the disorder-induced topological phase transition from a trivial insulator to a topological Anderson insulator with robust chiral edge states. By gradually deforming the amorphous lattice of a photonic Chern insulator into a liquid-like lattice through the glass transition, the authors of \cite{Zhou20Light_disorder} experimentally observed the closing of the mobility gap and the disappearance of the topological edge states. \subsection{Photonic quantum spin Hall states} In the quantum spin Hall effect of electronic materials that preserve time-reversal symmetry, spin-up and spin-down electrons experience opposite effective magnetic fields; as a result, they propagate in opposite directions along a system edge, forming the so-called spin-momentum-locked helical transport.
This effect can be characterized by a topological invariant called the spin Chern number, \begin{gather} C_{\text{spin}}=\frac{1}{2}(C_{\uparrow}-C_{\downarrow}) \end{gather} where $C_{\uparrow}$ and $C_{\downarrow}$ are the Chern numbers of the spin-up and spin-down electrons, respectively. \begin{figure*} \includegraphics[width=0.9\textwidth]{Fig4} \caption{Different mechanisms to emulate the two spin states of electrons using electromagnetic waves for the realization of photonic quantum spin Hall insulators. (a) Based on TE+TM/TE-TM. Reproduced with permission from~\cite{Khanikaev13NatMat}, Copyright (2013) by Springer Nature. (b) Based on TE/TM. Reproduced with permission from~\cite{Ma15PRL}, Copyright (2015) by the American Physical Society. (c) Based on TE+$i$TM/TE-$i$TM. Reproduced with permission from~\cite{He16PNAS_spin}, Copyright (2016), under CC BY-NC-ND or CC BY. (d) Based on double TM Dirac cones. Reproduced with permission from~\cite{WuHu15PRL}, Copyright (2015) by the American Physical Society. (e) Based on the clockwise/anti-clockwise whispering gallery modes of light in a ring resonator. Reproduced with permission from~\cite{LiangChong13PRL}, Copyright (2013) by the American Physical Society. (f) Based on the layer pseudospin in a two-layer photonic slab system. Reproduced with permission from~\cite{Chen19LPR_layer}, Copyright (2019) by John Wiley and Sons. } \label{figs:fig4} \end{figure*} \begin{figure*} \includegraphics[width=0.9\textwidth]{Fig5} \caption{Different applications of helical edge states based on the proposal in \cite{WuHu15PRL}. (a) Reconfigurable topological photonic crystal. Reproduced with permission from~\cite{Shalaev18NJP_reconfig}, Copyright (2018), under CC BY 3.0. (b) Topological photonic ring resonator. Reproduced with permission from~\cite{Mehrabad20APL_cavity}, Copyright (2020) by AIP Publishing. (c) Chiral coupling between the helical topological edge modes and a quantum emitter. Reproduced with permission from~\cite{Barik18Science}, Copyright (2018) by the American Association for the Advancement of Science. (d) Topological imaging of bulk and edge states via nonlinear third harmonic generation. Reproduced with permission from~\cite{Smirnova19PRL_THG}, Copyright (2019) by the American Physical Society. (e) Topological vortex laser based on the out-of-plane radiation feature of spin-momentum locking. Reproduced with permission from~\cite{Yang20PRL_laser}, Copyright (2020) by the American Physical Society. (f) Pseudospin-polarized topological line defect robust against a bending gap. Reproduced with permission from~\cite{Chen20IEEE_line}, Copyright (2020) by IEEE. (g) Bound topological edge states in the continuum exhibiting pseudospin-momentum-locked unidirectional propagation at the air edge of a nontrivial expanded domain. Reproduced with permission from~\cite{Zhang21PRapp_BIC}, Copyright (2021) by the American Physical Society. } \label{figs:fig5} \end{figure*} Since electromagnetic waves have both electric and magnetic components, there are many different ways to mimic the two spin states of electrons. In \cite{Khanikaev13NatMat}, linear combinations of the TE and TM modes in 2D, i.e., TE+TM and TE-TM (see Fig.\ref{figs:fig4}a), were used to emulate the two spin states of electrons when the permittivity and permeability are tuned to be equal.
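As a numerical aside, the definition of $C_{\text{spin}}$ above can be illustrated with a minimal toy model consisting of two decoupled two-band blocks that are time-reversal images of each other, $H(\mathbf{k})=\mathrm{diag}(h(\mathbf{k}),h^*(-\mathbf{k}))$; this is an illustrative tight-binding construction, not a photonic-crystal calculation, and it uses the same discretized Berry-flux method as the earlier sketch:
\begin{verbatim}
import numpy as np

# Two decoupled spin sectors: spin = +1 gives h(k), spin = -1 gives
# its time-reversal image h*(-k) (illustrative toy model only).
def h_block(kx, ky, m, spin=+1):
    dx, dy = spin * np.sin(kx), np.sin(ky)
    dz = m + np.cos(kx) + np.cos(ky)
    return np.array([[dz, dx - 1j*dy], [dx + 1j*dy, -dz]])

def chern(m, spin, N=60):
    """Discretized Berry-flux Chern number of the lower band."""
    ks = np.linspace(-np.pi, np.pi, N, endpoint=False)
    u = np.empty((N, N, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            u[i, j] = np.linalg.eigh(h_block(kx, ky, m, spin))[1][:, 0]
    C = 0.0
    for i in range(N):
        for j in range(N):
            ip, jp = (i + 1) % N, (j + 1) % N
            loop = (np.vdot(u[i, j], u[ip, j]) * np.vdot(u[ip, j], u[ip, jp])
                    * np.vdot(u[ip, jp], u[i, jp]) * np.vdot(u[i, jp], u[i, j]))
            C += np.angle(loop)
    return C / (2 * np.pi)

C_up, C_dn = chern(1.0, +1), chern(1.0, -1)
print(f"C_up = {C_up:+.2f}, C_down = {C_dn:+.2f}, "
      f"C_spin = {(C_up - C_dn) / 2:+.2f}")
# The total Chern number C_up + C_down vanishes, as required by
# time-reversal symmetry, while C_spin = (C_up - C_down)/2 is nonzero.
\end{verbatim}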
In the proposal of \cite{Khanikaev13NatMat}, a magnetoelectric coupling can furthermore be introduced through the bianisotropic response; as such, the proposed system provides an exact emulation of the Kane-Mele Hamiltonian of electronic topological insulators, and one-way spin-polarized transport of photonic edge states, robust against different types of defects, was theoretically demonstrated. Since bianisotropic coupling in metamaterials is usually weak and highly dispersive, the permittivity and permeability matching condition can only be satisfied in a narrow frequency range. In \cite{Chen14NC_uniaxial}, the authors proposed a different approach to realize the coupling of TE and TM modes by confining the modes to a waveguide via symmetry reduction. Moreover, by measuring the magnitude and phase of the fields, gapless spin-filtered edge states were successfully observed in experiments. Later, degenerate TE and TM Dirac cones overlapping at the K/K' point of the Brillouin zone (see Fig.\ref{figs:fig4}b) were proposed to emulate the two spin states of electrons in a parallel-plate metal waveguide \cite{Ma15PRL} filled with a periodically arranged hexagonal array of metallic cylinders connected to the top and/or bottom metal plates. In this setup, a finite bianisotropy can be generated by a finite vacuum gap between the rods and one of the metal plates to effectively emulate spin-orbit interaction. Through first-principles electromagnetic simulations, the authors demonstrated that topologically protected surface waves can be guided without reflections along sharp bends. This idea was experimentally pursued in \cite{Lai16SciRep_delayline} for the realization of a reflection-free compact delay line, and in \cite{Cheng16NatMat} for the realization of a reconfigurable photonic topological insulator in which the electromagnetic propagation pathways can be controlled along any desired path. Extensions of the idea based on degenerate TE and TM double Dirac cones to a metasurface made of dielectric disks \cite{Slobozhanyuk19APL} and to a metallic dual-metasurface \cite{Bisharat19LPR} were experimentally demonstrated, with the corresponding spin-momentum-locked edge states observed via near-field imaging. Double Dirac cones due to accidental degeneracy between TE and TM modes can also be achieved in photonic crystals with anisotropic permittivity \cite{Chen18LPR_Accidental}; with nonzero bianisotropy, a topological bandgap supporting robust transport of gapless edge states was theoretically demonstrated there. Additionally, a pair of doubly degenerate TE and TM modes in a gyrotropic photonic crystal with both gyroelectric and gyromagnetic responses, where off-diagonal terms in the permittivity and permeability tensors coexist, was employed to realize a photonic topological insulator in \cite{Sun19Crystals}. Apart from the TE/TM combinations \cite{Ma15PRL,Chen18LPR_Accidental,Sun19Crystals} and the TE$\pm$TM combinations \cite{Khanikaev13NatMat}, the left and right circular polarizations, i.e., the combinations TE$\pm i$TM (see Fig.\ref{figs:fig4}c), were proposed in \cite{He16PNAS_spin} to mimic the electronic spin states and thus realize photonic topological insulators. In 2015, Wu and Hu \cite{WuHu15PRL} proposed a photonic topological insulator based purely on the TM modes of an all-dielectric photonic crystal, obtained by deforming a honeycomb lattice of cylinders into a triangular lattice of cylinder hexagons (see Fig.\ref{figs:fig4}d).
This achievement is based on the fact that there are two 2D irreducible representations of the $C_6$ symmetry group, and thus a doubly degenerate Dirac cone can be realized by the band-folding mechanism in the Brillouin zone. Furthermore, a band inversion can be achieved simply by expanding or shrinking the six cylinders within the hexagon, and helical edge states, with the pseudospin mimicked by the angular momentum of the TM modes, were theoretically demonstrated. Later, an experiment \cite{Yang18PRL_spinExp} based on a photonic crystal made of Al$_2$O$_3$ cylinders confirmed this proposal in the microwave regime. Moreover, pseudospin-up and pseudospin-down can also be emulated by the two propagation directions of light in a ring resonator \cite{LiangChong13PRL} (see Fig.\ref{figs:fig4}e) or by the layer degree of freedom in two-layer photonic systems \cite{Chen19LPR_layer,Wu19AOM_layer} (see Fig.\ref{figs:fig4}f). Owing to its simple, symmetry-based design strategy, the proposal by Wu and Hu \cite{WuHu15PRL} has attracted great interest in the community: it was demonstrated that the idea can also be applied to dielectric slabs with holes \cite{Barik16NJP_hole,Anderson17OE_hole}, rather than the original cylinders-in-air configuration, as well as to exciton-polaritons \cite{Liu20Science_polarition,Li21NC_polariton}, and the topological helical edge states have been observed at both telecom \cite{Gorlach18NC_far,Parappurath20SciAdv} and visible \cite{Peng19PRL_visible,Liu20NanoLett_Z2} wavelengths. The idea has also found a wide range of applications, e.g., in realizing reconfigurable topological photonic crystals with tunable edge states \cite{Shalaev18NJP_reconfig,Cao19SciBlu,Wang19JAP_reconfig} (Fig.\ref{figs:fig5}a), topological ring-cavity and whispering gallery modes \cite{Yang18OE_cavity,Mehrabad20APL_cavity,Sun21PRB_cavity} (Fig.\ref{figs:fig5}b), chiral coupling between the helical topological edge modes and quantum emitters \cite{Barik18Science}, topological all-optical logic gates \cite{He19OE_logic}, topological nonlinear third-harmonic generation \cite{Smirnova19PRL_THG}, and novel topological lasing behaviors \cite{Shao20NatNanoTech_laser,Yang20PRL_laser}. For example, in \cite{Barik18Science}, the chiral emission of a quantum emitter into the counterpropagating helical edge modes was observed experimentally (see Fig.\ref{figs:fig5}c), and in \cite{He19OE_logic}, a topological filter and all-optical logic gates were designed based on the robust transport of helical edge states. Imaging topological edge states via nonlinear optical processes may offer superior contrast, sensitivity, and a large imaging area, as demonstrated in \cite{Smirnova19PRL_THG}, where the authors experimentally observed strong third-harmonic generation and showed that varying the pump-beam wavelength enables independent high-contrast imaging of either bulk modes or spin-momentum-locked edge states (see Fig.\ref{figs:fig5}d). Moreover, the band inversion between the shrunken and expanded structures provides a novel mode-confinement mechanism due to the opposite parities of the bulk wavefunctions in trivial and nontrivial photonic crystals. Based on this principle, a topological bulk laser exhibiting single-mode lasing with vertical emission directionality was experimentally realized in \cite{Shao20NatNanoTech_laser}.
Later on, the authors of \cite{Yang20PRL_laser}, exploiting the out-of-plane radiation feature of spin-momentum locking (see Fig.\ref{figs:fig5}e), demonstrated a high-performance topological vortex laser and found that the near-field spin and orbital angular momentum of the lasing topological edge mode have a one-to-one far-field radiation correspondence. While the typical boundary for topological edge states based on the proposal in \cite{WuHu15PRL} is constructed between shrunken and expanded domains, other methods to construct a boundary for observing topological edge states have also been proposed and studied in the literature \cite{Gao19AO_defect,Chen20IEEE_line,Yang22OE_finiteW,Zhang21PRapp_BIC}. For example, in \cite{Chen20IEEE_line}, a line defect was introduced directly into a single nontrivial expanded domain, and pseudospin-polarized transport of electromagnetic waves that can bypass a bending gap was demonstrated (see Fig.\ref{figs:fig5}f). The physics behind this unusual phenomenon is the strong coupling of the topological modes located at the upper and lower boundaries of the line defect. One can further replace the air line defect by a trivial shrunken domain of finite width and study the coupling of the topological edge modes as the width of the trivial domain changes, as shown in \cite{Yang22OE_finiteW}, where pseudospin-preserving and pseudospin-flipping processes were observed in microwave experiments. One can even consider the air edge of a nontrivial domain \cite{Zhang21PRapp_BIC}, where bound topological edge states in the continuum, exhibiting topological features inherited from the nontrivial topology of the bulk photonic bands, such as spin-momentum-locked unidirectional propagation (see Fig.\ref{figs:fig5}g), were demonstrated in microwave experiments. \subsection{Photonic quantum valley Hall states} \begin{figure} \includegraphics[width=0.5\textwidth]{Fig6} \caption{Schematics to illustrate the idea of valley Chern number. (a) A honeycomb photonic crystal with two different kinds of dielectric cylinders. (b) The unit cell and (c) the first Brillouin zone of the photonic crystal. (d) When $r_1=r_2$, the first two bands of the TM modes form Dirac cones around $K/K'$. (e) When $r_1\neq r_2$ with a small radius difference, the Dirac cones at $K/K'$ are gapped out and the resulting Berry curvature has sharp peaks around $K/K'$, whose integral around $K/K'$ gives the valley Chern number $C_{K/K'}=\pm 1/2$. } \label{figs:fig6} \end{figure} In photonic systems with time-reversal symmetry, the integral of the Berry curvature over the first Brillouin zone is zero, as $\mathbf{F}_n(-\mathbf{k})=-\mathbf{F}_n(\mathbf{k})$. Nonetheless, for a photonic crystal with hexagonal symmetry, the Berry curvature can have nontrivial distributions around the $K/K'$ points (i.e., valleys) of the first Brillouin zone, which allows the definition of a valley Chern number by integrating the Berry curvature around $K/K'$, \begin{gather} C_{K/K'}=\frac{1}{2\pi}\iint_{K/K'} \mathbf{F}_n(\mathbf{k}) d^2\mathbf{k} \end{gather} \begin{figure*} \includegraphics[width=0.9\textwidth]{Fig7} \caption{Valley edge modes and their applications. (a) Schematic of a valley photonic crystal operating at telecommunication wavelengths consisting of a honeycomb lattice of two inverted equilateral triangular air holes per unit cell. Reproduced with permission from~\cite{Shalaev19NatNanotech}, Copyright (2019) by Springer Nature. (b) Electrically pumped topological laser with valley edge modes.
Reproduced with permission from~\cite{Zeng20Nature_laser}, Copyright (2020) by Springer Nature. (c) A reprogrammable plasmonic topological insulator with nanosecond-level switching time. Reproduced with permission from~\cite{You21NC_reconfig}, Copyright (2021), under CC BY 4.0. (d) Chiral coupling between quantum emitters and valley edge modes. Reproduced with permission from~\cite{Mehrabad20Optica_dot}, Copyright (2020), under CC BY 4.0. (e) On-chip Hong-Ou-Mandel interference based on valley-dependent quantum circuits. Reproduced with permission from~\cite{Chen21PRL_quantum}, Copyright (2021) by the American Physical Society. (f) Spin-valley coupled edge states. Reproduced with permission from~\cite{Kang18NC_spinvalley}, Copyright (2018), under CC BY 4.0.} \label{figs:fig7} \end{figure*} Note that in the literature, the valley Chern number is also sometimes defined as $C_v=C_K-C_{K'}$. To illustrate the idea, we consider a honeycomb photonic crystal with two cylinders in each unit cell \cite{Lan21PRA_SHG} (see Fig.~\ref{figs:fig6}a-b). When the radii of the two cylinders in the unit cell are equal (assuming they have the same dielectric constant), the first two bands of the TM modes form Dirac cones around $K/K'$ (see Fig.~\ref{figs:fig6}c-d). One can break the inversion symmetry by giving the two cylinders different radii; the Dirac cones around $K/K'$ are then gapped out, resulting in nontrivial Berry curvature distributions around $K/K'$ (see Fig.~\ref{figs:fig6}e). One finds $C_{K/K'}=\pm 1/2$ for the two valleys if the radius difference is small. To construct a domain wall between two valley photonic crystals with different topological properties, one can take one photonic crystal (I) that is the inversion image of the other (II); the two valleys are then interchanged between the two domains, i.e., the $K$ valley of domain I coincides with the $K'$ valley of domain II and vice versa, so that $C^I_{K/K'}=-C^{II}_{K/K'}=\pm 1/2$. The difference of the valley Chern number across the domain-wall interface is thus $C^I_{K/K'}-C^{II}_{K/K'}=\pm 1$, which means that at one valley there exists an interface mode with positive group velocity, whereas another one with negative group velocity exists at the other valley. This principle for generating a nontrivial valley Chern number and the corresponding valley interface modes has been used frequently in the literature \cite{Baile21AdvPhoRes_review,Dong21AdvPhyX_review}. Apart from this common interface separating two valley photonic crystals, other ways to construct the interface, e.g., an air edge \cite{Chen20OE_air,Feng22OL_valleyBIC} or three-layer sandwich structures \cite{He20OE_3layer,Chen21ACSpho_3layer}, have also been studied. Recently, the concept of a large valley Chern number has been proposed \cite{Xi20PhoRes_large,Yan21arxiv_large}; e.g., in \cite{Xi20PhoRes_large}, the authors considered a 2D photonic crystal made of hexamers of six dielectric rods in each unit cell and showed that, by shrinking or expanding one set of rods in the hexamer, a valley phase transition from $C_K=1/2$ to $C_K=3/2$, with emergent multiple edge states, can be achieved. Furthermore, large valley Chern numbers have also been realized in two different frequency bands \cite{Yan21arxiv_large}. In the following, we briefly review different lattice structures, system platforms, and various applications of valley photonic crystals.
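Before turning to that survey, we note that the half-integer valley charge quoted above can be reproduced with a few lines of Python by integrating the standard lower-band Berry curvature of a gapped Dirac cone; this is a continuum model with the group velocities set to unity and an arbitrary mass, used purely for illustration:
\begin{verbatim}
import numpy as np

# Gapped Dirac cone near one valley: H = kx sx + ky sy + m sz.
# The lower-band Berry curvature is the standard two-band result
#   Omega(k) = -m / (2 (k^2 + m^2)^(3/2)).
def valley_chern(m, kmax=60.0, n=40001):
    k = np.linspace(0.0, kmax, n)
    omega = -m / (2.0 * (k**2 + m**2) ** 1.5)
    # integrate over the plane around the valley: d^2k = 2 pi k dk
    return np.trapz(2.0 * np.pi * k * omega, k) / (2.0 * np.pi)

for m in (+0.1, -0.1):
    print(f"m = {m:+.1f}: C_K = {valley_chern(m):+.4f}")
# -> approximately -1/2 and +1/2; the exact +-1/2 is approached as
# kmax -> infinity, and the sign flips with the sign of the Dirac mass.
\end{verbatim}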
Valley photonic crystals have been studied in triangular lattices \cite{MaShvets16NJP,Ye17APL_microwave,Zhang19LPR_Circuitry,Dubrovkin20APL,Li20PRL_spinvalley}, honeycomb lattices \cite{Dong17NatMat,Chen17PRB_contrast,Yang18SciRep,Chen18PRapp_cylinder,Noh18PRL_waveguide,He19NC_slab,Shalaev19NatNanotech,Shalaev19Optica,Ma19LPR_kink,Yang20NatPho,Arora21Light}, and kagome lattices \cite{Deng19NanoPhot_kagome,Wong20PRRes_kagome}, whose unit cells contain one, two, and three lattice sites, respectively. For the triangular lattice, Ma and Shvets \cite{MaShvets16NJP} theoretically proposed to use the TM modes of a triangular photonic crystal, where each unit cell contains a single triangular Si rod or its perturbed structure without inversion symmetry, to create the valley degrees of freedom. Following this proposal, different experimental implementations have been demonstrated. For example, in \cite{Ye17APL_microwave}, a valley photonic crystal constructed from a triangular array of Y-shaped aluminum rods was demonstrated in the microwave regime by rotating the Y-shaped rods within the unit cells. In \cite{Zhang19LPR_Circuitry}, by rotating triangular scatterers within the unit cells, valley kink states at generic interfaces in subwavelength substrate-integrated photonic circuitry were experimentally demonstrated. Recently, valley edge modes at $\lambda=1.55\,\mu$m were experimentally observed in a photonic crystal using the TM modes, based on a triangular air-hole design with broken inversion symmetry fabricated from a suspended slab geometry \cite{Dubrovkin20APL}. Honeycomb photonic crystals, where each unit cell contains two dielectric cylinders or air holes, provide a convenient setup for inversion-symmetry breaking by making the two cylinders or holes different. For example, valley photonic crystals based on dielectric cylinders in honeycomb lattices have been theoretically studied in \cite{Dong17NatMat,Chen17PRB_contrast,Yang18SciRep} and experimentally observed at microwave frequencies \cite{Chen18PRapp_cylinder} and at $\lambda=1450\,$nm in an array of evanescently coupled waveguides \cite{Noh18PRL_waveguide}. Motivated by on-chip integration, light transport based on valley edge modes in honeycomb lattices has also been experimentally demonstrated in the slab geometry with either circular holes \cite{He19NC_slab} or triangular holes \cite{Shalaev19NatNanotech,Shalaev19Optica,Ma19LPR_kink,Yang20NatPho,Arora21Light} (Fig.\ref{figs:fig7}a). Furthermore, valley edge modes can also be realized in kagome photonic crystals \cite{Deng19NanoPhot_kagome,Wong20PRRes_kagome}, where each unit cell contains three cylinders (or holes); by expanding or shrinking the three cylinders within the unit cells, a nontrivial valley bandgap from gapped Dirac cones and the corresponding valley edge modes can be created. Valley edge modes can also appear within different frequency bandgaps, yielding so-called dual-band valley kink states \cite{Chen19AOM_dual,Tang20PRB_dual,Wei21NJP_dual}, which may find interesting applications such as topologically protected second-harmonic generation \cite{Lan21PRA_SHG}.
Valley edge modes have also been studied in plasmonic systems, e.g., in a surface-wave photonic crystal on a single metal surface \cite{Gao17PRB_SW}, a designer surface plasmon crystal comprising metallic patterns deposited on a dielectric substrate \cite{Wu17NC_designer}, graphene plasmonic crystals \cite{Qiu17OE_graphene,Jung18PRL_graphene,You20IEEE,Wang20OL_2layer}, metal nanoparticles \cite{Proctor20Nanophot_metal}, and metal cylinders \cite{Saito21NanoLett_metal}. Valley edge modes have found a range of interesting applications, such as lasing \cite{Zeng20Nature_laser,Noh20OL_laser,Gong20ACSPho_laser,Zhong20LPR_laser,Liu22OE_laser}, fibers \cite{Makwana20OE_fiber,Zhang21NanoPhot_fiber}, reconfigurable devices \cite{Wu18PRM_reconfig,Wang19NJP_reconfig,You21NC_reconfig}, slow light \cite{Yoshimi20OL_slow,Xie21PRApp_slow,Yoshimi21OE_slow,Arregui21PRL_slow}, chiral coupling to quantum emitters \cite{Yamaguchi19APE_dot,Barik20PRB_dot,Mehrabad20Optica_dot}, Mach-Zehnder interferometers \cite{Yang20JO_MZ}, on-chip quantum information processing \cite{Chen21PRL_quantum}, long-range deformations \cite{Xu20PRR_disorder}, and logic gates \cite{Chao21JO_gate}. In particular, for topological lasing based on valley edge modes, an electrically pumped terahertz quantum cascade laser was recently demonstrated in a triangular lattice of quasi-hexagonal holes drilled through the active medium of a terahertz quantum cascade laser wafer \cite{Zeng20Nature_laser} (see Fig.\ref{figs:fig7}b). Subsequently, single-mode lasing of valley-Hall ring cavities at telecommunication wavelengths was experimentally realized in a structured, suspended membrane consisting of large and small holes \cite{Noh20OL_laser}. Reconfigurable topological states in valley photonic crystals have also been proposed using BaTiO$_3$ \cite{Wu18PRM_reconfig} as well as liquid crystals \cite{Wang19NJP_reconfig}; recently, a reprogrammable plasmonic topological insulator, in which the topological propagation route can be dynamically changed with nanosecond-level switching times, was demonstrated experimentally \cite{You21NC_reconfig} (see Fig.\ref{figs:fig7}c). Moreover, exploiting this ultrafast control, a topologically protected multi-channel optical analog-digital converter was also demonstrated. Finally, by combining the spin and valley degrees of freedom, spin-valley coupled edge states \cite{Gao18NatPhy_spinvalley,Kang18NC_spinvalley} (Fig.\ref{figs:fig7}f), the coexistence of pseudospin- and valley-Hall-like edge states \cite{Chen20PRR_spinvalley}, as well as spin- and valley-polarized one-way Klein tunneling \cite{Ni18SciAdv_spinvalley}, have been proposed. \subsection{Second-order photonic topological corner states} \begin{figure*} \includegraphics[width=0.9\textwidth]{Fig8} \caption{Applications of topological corner states. (a) As a nanocavity. Reproduced with permission from~\cite{Ota19Optica_cavity}, Copyright (2019), under CC BY 4.0. (b) Lasing of multidimensional topological states in a hierarchical scale via bulk, edge and corner states. Reproduced with permission from~\cite{Han20ACSpho_laser}, Copyright (2020) by American Chemical Society. (c) Topological photonic crystal fiber based on a corner fiber mode. Reproduced with permission from~\cite{Gong21OL_fiber}, Copyright (2021) by Optica Publishing Group. (d) Enhanced photoluminescence of halide perovskite nanocrystals. Reproduced with permission from~\cite{Berestennikov21JPCC_PL}, Copyright (2021) by American Chemical Society.
(e) Dynamically reconfigurable topological corner states. Reproduced with permission from~\cite{Hu21OL_reconfig}, Copyright (2021) by Optica Publishing Group. (f) Nonlinear imaging of nanoscale topological corner states. Reproduced with permission from~\cite{Kruk21NanoLett_image}, Copyright (2021) by American Chemical Society. (g) Topologically protected second harmonic generation via doubly resonant corner modes. Reproduced with permission from~\cite{Chen21PRB_SHG}, Copyright (2021) by the American Physical Society. (h) Coupled topological edge and corner modes in a photonic crystal slab consisting of a periodic array of SSH supercells. Reproduced with permission from~\cite{Zhang21JO_cornerarray}, Copyright (2021), under a CC BY license.} \label{figs:fig8} \end{figure*} Topological corner states are higher-order topological phenomena in which the topological boundary states appear at least two dimensions below that of the bulk \cite{Kim20Light_review}. The existence of a nontrivial bulk dipole or quadrupole moment can lead to the appearance of topological states localized at the corners of 2D photonic systems. The bulk dipole moment can be characterized by the 2D polarization $P=(P_x, P_y)$ given by \cite{Liu17PRL_zeroBerry} \begin{gather} P_i=\frac{1}{(2\pi)^2}\int_{\textrm{FBZ}} \textrm{Tr}[A_i(\mathbf{k})]\,d^2 \mathbf{k} \end{gather} with $[A_i(\mathbf{k})]^{mn}=i\langle u_{\mathbf{k}}^m|\partial_{k_i}|u_{\mathbf{k}}^n\rangle$ for $i=x,y$, where $m,n$ run over the occupied bands and $u^n_{\mathbf{k}}$ is the periodic Bloch function for the $n$th band. A direct integration of the Berry connection over the first Brillouin zone to obtain the polarization is numerically challenging. However, for photonic systems with inversion symmetry, the polarization can be obtained in a much simpler way by calculating the parities of the bands at the high-symmetry points $\Gamma$ and X \cite{Liu17PRL_zeroBerry}, \begin{gather} P_i=\frac{1}{2} \left( \sum_n q_i^n \hspace{0.2cm} \textrm{mod} \hspace{0.2cm} 2 \right), \hspace{0.2cm} (-1)^{q_i^n}=\frac{\eta_n(X_i)}{\eta_n(\Gamma)} \label{2Dpol} \end{gather} where $\eta_n$ represents the parity at the high-symmetry points $\Gamma$ and $X$ of the first Brillouin zone for the $n$th band. From this formula, a simple rule for judging whether a bandgap has nontrivial polarization can be deduced: an odd number of pairs of parities with opposite signs at $\Gamma$ and $X$ for the bands below the bandgap makes the bandgap topologically nontrivial, whereas an even number implies that the bandgap is trivial. Most works on topological corner states arising from nontrivial 2D polarization in photonic systems are based on the 2D SSH model: each unit cell contains several cylinders, and by shifting the cylinders away from or towards the center of the unit cell, the intra-cell and inter-cell hoppings can be tuned, driving a transition from topologically trivial to nontrivial. For the square-lattice 2D SSH model, the authors of \cite{Xie18PRB_corner} considered a 2D photonic crystal with four identical dielectric rods in each unit cell; by adjusting the distances between nearby rods in the $x$ and $y$ directions, the emergence of edge and corner states can be controlled straightforwardly. The 2D square SSH model was experimentally realized in photonic crystal slabs with periodic dielectric rods on a perfect electric conductor \cite{Chen19PRL_obs} and in photonic crystals consisting of alumina cylinders sandwiched between two metallic plates \cite{Xie19PRL_vis}.
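Returning briefly to Eq. (\ref{2Dpol}), its parity-based rule can be illustrated on the 1D SSH model of Sec.~\ref{sec3} (each direction of the 2D SSH lattice behaves analogously); the following sketch, assuming only the standard SSH Bloch Hamiltonian with inversion operator $\sigma_x$, reads off the inversion eigenvalue of the lower band at the high-symmetry points and deduces the polarization:
\begin{verbatim}
import numpy as np

def ssh_bloch(k, v, w):
    return np.array([[0, v + w * np.exp(-1j * k)],
                     [v + w * np.exp(1j * k), 0]])

def polarization(v, w):
    """P = (q mod 2)/2 with (-1)^q = eta(X)/eta(Gamma), lower band."""
    sigma_x = np.array([[0, 1], [1, 0]])   # inversion operator
    etas = []
    for k in (0.0, np.pi):                 # Gamma and X
        u = np.linalg.eigh(ssh_bloch(k, v, w))[1][:, 0]   # lower band
        etas.append(np.real(np.vdot(u, sigma_x @ u)))     # parity = +-1
    q = 0 if etas[1] / etas[0] > 0 else 1
    return (q % 2) / 2.0

for v, w in [(1.0, 0.5), (0.5, 1.0)]:
    print(f"v={v}, w={w}: P = {polarization(v, w)}")
# P = 0.0 in the trivial phase (v > w) and P = 0.5 in the nontrivial
# phase (v < w), consistent with the Zak phases 0 and pi.
\end{verbatim}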
The 2D square SSH model has also been studied with metallic nanoparticles \cite{Kim20NanoPhot_metal}, in a surface-wave photonic crystal consisting of metallic patterns on both sides of a dielectric substrate \cite{Zhang20AdvSci_surfacewave}, and in designer surface plasmon crystals composed of subwavelength metallic cylinders arranged in a 2D lattice on a metallic surface \cite{Wang21OE_THz}. Topological corner states have also been studied in honeycomb lattices with two cylinders in each unit cell \cite{Phan21OE_honey,KHO22pssb_honey}. In \cite{Phan21OE_honey}, the authors found that topological corner states appear only for 60$^{\circ}$ corners, but are absent for other corners, due to the sign flip of the valley Chern number at the corner, and in \cite{KHO22pssb_honey}, dual-band topological corner states within both the first and third bandgaps were demonstrated. The kagome lattice, whose unit cell contains three cylinders, has also been studied for corner states. In particular, corner states in kagome lattices have been experimentally observed in waveguide arrays inscribed in glass samples using femtosecond laser technology \cite{Hassan19NatPho_kagome}, in an array of dielectric cylinders arranged to form a kagome lattice between two parallel aluminium plates \cite{Li20NatPho_longrange}, and in a metasurface fabricated on a silicon-on-insulator chip consisting of trimers of diamond-shaped holes \cite{Vakulenko21AdvMat_nearfield}. Interestingly, in addition to the corner states due to nearest-neighbour interactions, a new class of topological corner states induced by long-range interactions, of a purely electromagnetic nature and with no analogy in condensed-matter systems, exists in this lattice structure. Corner states have also been studied in a metal with air cavities forming a kagome lattice \cite{Chen19OL_trunc}, in a plasmonic metasurface of metal nanoparticles arranged in a kagome lattice \cite{Proctor21APL_plasmon}, and in an array of silicon rods forming a kagome lattice \cite{Shen21OE_kagome}. By moving the dielectric rods continuously, the authors of \cite{Wang21PhoRes_C3} demonstrated that $C_3$-symmetric photonic crystals can switch between triangle and kagome lattice configurations, leading to rich higher-order topological phases and phase transitions. Finally, lattice structures where each unit cell contains six cylinders have also been explored \cite{Noh18NatPho_midgap,Proctor20PRR_robust,Wu21PhoRes_uncon,Gladstone22PRL_spincorner,KHO22OLT_6cylinder}, and some interesting features, such as unconventional higher-order topology \cite{Wu21PhoRes_uncon} and spin-polarized fractional corner charges \cite{Gladstone22PRL_spincorner}, have been observed. Corner states can serve as high-Q cavity modes, providing potential applications in enhancing light-matter interaction. For example, a corner state tightly localized in space with a high Q factor of over 2000 was experimentally observed in \cite{Ota19Optica_cavity} (see Fig.\ref{figs:fig8}a), verifying its promise as a nanocavity. Further optimization can bring the Q factor of the corner state to 6000 \cite{Xie21OE_opt}, making strong coupling to a single quantum emitter possible. Indeed, the authors of \cite{Xie20LPR_QED} studied the coupling between a single quantum dot and the corner state, observing an enhanced emission rate when the quantum dot is on resonance with the corner state.
The corner states can be pumped using in-plane excitation conditions, as experimentally demonstrated in \cite{He21PhoRes_excitation}, and have many interesting applications, such as topological nanolasers \cite{Smirnova20Light_laser,Han20ACSpho_laser,Kim20NC_laser,Zhang20Light_laser} (Fig.\ref{figs:fig8}b), second-order topological photonic crystal fibers \cite{Gong21OL_fiber} (Fig.\ref{figs:fig8}c), high-quality optical hotspots \cite{Liu22ACSpho_hotspot}, enhancement of photoluminescence signals \cite{Berestennikov21JPCC_PL} (Fig.\ref{figs:fig8}d), rainbow trapping \cite{Liang22OL_rainbow}, dynamically reconfigurable topological corner states \cite{Hu21OL_reconfig} (Fig.\ref{figs:fig8}e), and corner states in non-Hermitian photonic crystals \cite{Jiang22OL_nonH}. Corner states can also be exploited to enhance nonlinear optical effects: nonlinear imaging of nanoscale topological corner states through third-harmonic generation was experimentally demonstrated in \cite{Kruk21NanoLett_image} (see Fig.\ref{figs:fig8}f), and enhanced second-harmonic generation from a topological corner state, with directional out-of-plane emission, was theoretically proposed in \cite{Guo21OE_SHG}. Corner states can exist in multiple bandgaps simultaneously \cite{Kim21AOM_multiband,Chen22PRApplied_multicorner}, providing the opportunity to realize highly efficient nonlinear frequency conversion protected by topology. Indeed, doubly resonant corner modes have been exploited to significantly boost second-harmonic generation \cite{Chen21PRB_SHG,Om21PSS-PRL}; e.g., in \cite{Chen21PRB_SHG}, by matching two corner states within two different frequency bandgaps, a conversion efficiency as high as $5.4\times 10^{-3}$\,W$^{-1}$, robust against defects, was demonstrated (see Fig.\ref{figs:fig8}g). Furthermore, one can also couple the first-order topological edge states and the second-order topological corner states, as demonstrated in \cite{Shi21OL_edgecorner}, to realize further applications. In \cite{Ma21AnnPhys_shg}, the authors studied a scenario in which the frequency of the edge state is twice that of the corner state, making use of the advantages of both: the nonradiative character of the corner state is utilized to enhance the localized intensity for second-harmonic generation, whereas the topologically protected transmission of the edge states is exploited to transport the generated harmonic signal. They demonstrated that the harmonic wave generated from the fundamental corner mode can efficiently propagate along the system edge rather than spreading into the bulk. Coupling between edge states \cite{Li22PhoRes_3layer} or corner states \cite{Zhang21JO_cornerarray} can also lead to new phenomena; e.g., in \cite{Zhang21JO_cornerarray}, lattice topological edge and corner modes were proposed in a photonic crystal slab consisting of a periodic array of supercells (see Fig.\ref{figs:fig8}h), each of which hosts nontrivial edge and corner states. This may find applications in quantum-information processing; e.g., by placing quantum emitters into an array of topological corner states, robust strong coupling and entanglement between these emitters could be achieved with the assistance of topological edge states \cite{Wang20PRapp_emitter}.
The recently proposed dual-polarization topological corner states for both TE and TM modes \cite{Chen21PRapp_dualpol}, and a new principle for creating corner states within odd-order bandgaps in $C_{4v}$-symmetric lattices beyond the 2D SSH paradigm \cite{ChenNanoPhot22_oddgap}, open new possibilities for both fundamental science and promising applications. \begin{figure} \includegraphics[width=\columnwidth]{Fig9} \caption{Photonic quadrupole topological corner states. (a) Experimental implementation of the $\pi$-flux SSH model in a 2D lattice of nanophotonic silicon ring resonators. Reproduced with permission from~\cite{Mittal19NatPho_quadrupole}, Copyright (2019) by Springer Nature. (b) Theoretical proposal of the $\pi$-flux SSH model in a lattice of plasmon-polaritonic nanocavities. Reproduced with permission from~\cite{Chen20PRB_quadrupole}, Copyright (2020) by the American Physical Society. (c) Experimental observation of twisted quadrupole topological phases in photonic crystals made of dielectric cylinders via twisting the unit cell. Reproduced with permission from~\cite{Zhou20LPR_twist}, Copyright (2020) by John Wiley and Sons. (d) Directional localization of photons at corners with opposite pseudospin polarizations. Reproduced with permission from~\cite{Xie20NC_Quadrupole}, Copyright (2020), under CC BY 4.0. } \label{figs:fig9} \end{figure} Before ending this section, we briefly discuss 2D corner states stemming from a nontrivial bulk quadrupole moment in the absence of a dipole moment. Such systems are called quadrupole insulators in the literature \cite{Benalcazar17Science}, and it is challenging to implement the original $\pi$-flux SSH model proposed in \cite{Benalcazar17Science} in photonic systems due to the negative couplings the model requires. Up to now, only a few works have investigated quadrupole topological states in photonic systems \cite{Mittal19NatPho_quadrupole,Chen20PRB_quadrupole,Dutt20Light_synth,He20NC_Quadrupole, Liu19PRL_Quadrupole,Zhou20LPR_twist,Xie20NC_Quadrupole}. In \cite{Mittal19NatPho_quadrupole}, the authors experimentally implemented the negative coupling in a 2D lattice of nanophotonic silicon ring resonators (see Fig.\ref{figs:fig9}a) and demonstrated that the quantization of the bulk quadrupole moment manifests as topologically robust 0D corner states. Negative coupling can also be introduced in a lattice of plasmon-polaritonic nanocavities by exploiting the geometry-dependent sign reversal of the couplings between the daisylike nanocavities \cite{Chen20PRB_quadrupole} (see Fig.\ref{figs:fig9}b), or in an array of modulated photonic cavities by exploiting the idea of synthetic dimensions \cite{Dutt20Light_synth}. Quadrupole topological phases have also been theoretically studied in a gyromagnetic photonic crystal through a double-band-inversion process \cite{He20NC_Quadrupole}. More interestingly, quadrupole topological phases can be realized in all-dielectric photonic crystals without the $\pi$-flux-threading mechanism.
Denoting by $P^n_i = q^n_i/2$ the dipole moment of the $n$th band, one can define a quadrupole moment through the dipole moments of all the bands below the bandgap as \cite{Liu19PRL_Quadrupole} \begin{gather} Q_{ij}=\sum_n^{N_{\text{occ}}}P^n_iP^n_j \hspace{0.2cm} \text{mod} \hspace{0.2cm} 1 \end{gather} Since a band whose parities at $\Gamma$ and $X$ (or $M$ for hexagonal symmetry) have opposite signs contributes $P^n_i = 1/2$, a bandgap below which $N$ such bands lie carries $Q_{12}=N/4 \hspace{0.1cm} \text{mod} \hspace{0.1cm} 1$. This yields a nontrivial quadrupole moment $Q_{12}=1/2$ precisely when $N=4m+2$ ($m=0,1,\cdots$); the odd cases $N=4m+1$ and $4m+3$ are excluded because the quadrupole moment is only well defined when the total dipole moment vanishes. Quadrupole topological corner states have been experimentally observed in photonic crystals composed of dielectric cylinders, either by twisting the unit cell \cite{Zhou20LPR_twist}, as shown in Fig.\ref{figs:fig9}c, or via the expanding/shrinking scheme \cite{Xie20NC_Quadrupole}, as shown in Fig.\ref{figs:fig9}d.
\section{Topological photonics in 3D} \label{sec5}
Photonic topological phases in 3D can in general be classified as gapless or gapped \cite{Xie21Frontier_3Drev}. For gapless topological phases, two or more energy bands are degenerate at certain points of the Brillouin zone, and the degeneracies may form isolated points, lines, or surfaces. In the following, we briefly discuss gapless photonic topological phases related to Weyl points (twofold degeneracies), Dirac points (fourfold degeneracies), and nodal lines, followed by gapped 3D photonic topological insulators.
Weyl points are twofold degenerate points between two energy bands crossing linearly in momentum space. They behave as sources or sinks of Berry flux in momentum space and must occur in pairs of opposite charge so that the total charge of the Brillouin zone vanishes. Since the Berry curvature vanishes at every $\mathbf{k}$ in systems with PT symmetry, either P or T symmetry (or both) must be broken to obtain Weyl points. This is in sharp contrast to 2D Dirac points, which are easily gapped out by breaking either P or T. Weyl points are described by the Weyl Hamiltonian, \begin{gather} H_{\text{Weyl}}(\mathbf{k})=v_xk_x\sigma_x + v_yk_y\sigma_y +v_zk_z\sigma_z \end{gather} where the $v_i$ are group velocities and the $\sigma_i$ are Pauli matrices. The topological invariant of a Weyl point is obtained by integrating the Berry curvature over a closed surface enclosing it, $C_{\text{Weyl}}=\frac{1}{2\pi}\int_S F_{\text{Weyl}}(\mathbf{k})dS$, and a simple calculation gives $C_{\text{Weyl}}=\text{sgn}(v_xv_yv_z)$; a numerical evaluation of this charge is sketched below. Note that 3D Weyl points are robust against arbitrary weak perturbations: since all three Pauli matrices already appear in the Hamiltonian, no perturbing term can open a gap in $H_{\text{Weyl}}$. Consequently, the only way to annihilate a Weyl point is to make it meet an oppositely charged Weyl point in momentum space.
\begin{figure*} \includegraphics[width=1\textwidth]{Fig10} \caption{Various nodal points in 3D photonic systems. (a) Experimental observation of Weyl points in a double-gyroid photonic crystal with inversion breaking. Reproduced with permission from~\cite{Lu15Science_weyl}, Copyright (2015) by the American Association for the Advancement of Science. (b) Ideal Weyl points and helicoid surface states in a microwave photonic crystal of saddle-shaped metallic coils.
Reproduced with permission from~\cite{Yang18Science_ideal}, Copyright (2018) by the American Association for the Advancement of Science. (c) Ideal unconventional charge-2 Weyl point in a chiral microwave metamaterial made from printed circuit boards. Reproduced with permission from~\cite{Yang20PRL_unconven}, Copyright (2020) by the American Physical Society. (d) Photonic Dirac points in an elaborately designed metamaterial. Reproduced with permission from~\cite{Guo19PRL_surface}, Copyright (2019) by the American Physical Society. (e) Higher-order Dirac semimetal and the hinge state dispersion in coupled layers of deformed photonic honeycomb lattices. Reproduced with permission from~\cite{Wang22PRB_high_order}, Copyright (2022) by the American Physical Society.} \label{figs:fig10} \end{figure*}
Weyl points of photons were first studied in double-gyroid photonic crystals \cite{Lu13NP_pointLine}. Starting from a threefold degeneracy at the Brillouin zone centre, the authors showed that Weyl points can be obtained through perturbations breaking either P or T. Note that the minimal number of Weyl points is two in systems breaking T symmetry but four in systems respecting it, because T maps a Weyl point at $\mathbf{k}$ to $-\mathbf{k}$ with the same chirality, whereas P maps a Weyl point at $\mathbf{k}$ to $-\mathbf{k}$ with the opposite chirality; hence, in T-symmetric systems, at least two further Weyl points of opposite chirality are needed to neutralize the whole Brillouin zone. Weyl points have been theoretically studied in magnetized plasmas \cite{Gao16NC_palsma}, magnetic photonic crystals \cite{Yang17OE_magnetic}, and gyromagnetic metamaterials \cite{Li21PRB_gyromagnetic}, and experimentally realized in a magnetized semiconductor \cite{Wang19NatPhy_magnetSemicond}. As a magnetic response is generally difficult to implement in photonic systems, breaking P is a convenient alternative route to Weyl points. In fact, the first observation of Weyl points in photonics was via inversion breaking in a double-gyroid photonic crystal \cite{Lu15Science_weyl} (see Fig.\ref{figs:fig10}a), where excited bulk states probed by angle-resolved microwave transmission measurements revealed two linearly dispersing bands touching at four isolated points in the 3D Brillouin zone. Weyl points in systems breaking P but preserving T were subsequently proposed in photonic crystal superlattices \cite{Abad152Dmat} and chiral metamaterials \cite{Gao15PRL_chiralHyper, Xiao16PRL_hyperWeyl}. However, as the Weyl points in \cite{Lu15Science_weyl} are not symmetry-related, they occur at different frequencies. Symmetry-related Weyl points at the same frequency have been studied in a modified double gyroid with $D_{2d}$ symmetry \cite{Wang16PRA_ideal} and experimentally demonstrated in a microwave photonic crystal of saddle-shaped metallic coils \cite{Yang18Science_ideal} (see Fig.\ref{figs:fig10}b); these Weyl points were called ideal not only because they all occur at the same frequency but also because they are spectrally separated from all other bands. Recently, Weyl points at optical frequencies have been experimentally observed in a bio-inspired 3D photonic crystal coated with layered-composite nanometric materials \cite{Goi18LPR_optical} and shown to be theoretically feasible via an interference lithography design \cite{Park20ACSPho_visible}.
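To make the Weyl charge concrete, the following minimal numerical sketch (our own illustration, not drawn from the works reviewed above; all function names are ours) evaluates $C_{\text{Weyl}}$ for the Hamiltonian $H_{\text{Weyl}}$ defined earlier, by accumulating Fukui--Hatsugai--Suzuki lattice Berry fluxes of the lower band over a sphere enclosing the node:
\begin{verbatim}
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_weyl(k, v=(1.0, 1.0, 1.0)):
    """H_Weyl(k) = v_x k_x sx + v_y k_y sy + v_z k_z sz."""
    return v[0]*k[0]*sx + v[1]*k[1]*sy + v[2]*k[2]*sz

def lower_band(h):
    """Eigenvector of the lower band (eigh sorts eigenvalues ascending)."""
    return np.linalg.eigh(h)[1][:, 0]

def weyl_charge(ham=h_weyl, r=0.1, ntheta=80, nphi=80, **kw):
    """Lattice Berry flux of the lower band through a sphere of radius r
    centred on k = 0 (Fukui-Hatsugai-Suzuki method); the result is an
    integer-valued estimate of the enclosed charge."""
    th = np.linspace(0.0, np.pi, ntheta)
    ph = np.linspace(0.0, 2*np.pi, nphi, endpoint=False)
    u = np.empty((ntheta, nphi, 2), dtype=complex)
    for i in range(ntheta):
        for j in range(nphi):
            k = r*np.array([np.sin(th[i])*np.cos(ph[j]),
                            np.sin(th[i])*np.sin(ph[j]),
                            np.cos(th[i])])
            u[i, j] = lower_band(ham(k, **kw))
    flux = 0.0
    for i in range(ntheta - 1):
        for j in range(nphi):
            jp = (j + 1) % nphi
            # phase of the Wilson loop around one plaquette
            w = (np.vdot(u[i, j], u[i+1, j]) * np.vdot(u[i+1, j], u[i+1, jp])
                 * np.vdot(u[i+1, jp], u[i, jp]) * np.vdot(u[i, jp], u[i, j]))
            flux += np.angle(w)
    return flux / (2.0*np.pi)

# |charge| = 1, with a sign that flips with sgn(v_x v_y v_z); the overall
# sign convention is fixed by the chosen band and surface orientation.
print(weyl_charge(v=(1, 1, 1)), weyl_charge(v=(-1, 1, 1)))
\end{verbatim}
By construction, the result vanishes for a sphere that encloses no node, consistent with the source/sink picture of the Berry flux described above.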
Due to the bulk-edge correspondence principle, the bulk Weyl points give rise to nontrivial surface states, the so-called Fermi arcs, which connect the projections of the nonvanishing-charge Weyl points onto the surface Brillouin zone. The first experimental measurement of these robust surface states was conducted in a microwave Weyl photonic crystal \cite{Chen16NC_weyl_arc}, and later also in a chiral hyperbolic metamaterial \cite{Yang17NC_arc} as well as in laser-written waveguides at optical frequencies \cite{Noh17NatPhy_weyl_arc}. In contrast to the type-I Weyl point, with its pointlike Fermi surface and vanishing density of states, a Weyl point can also be associated with a strongly tilted cone dispersion that breaks Lorentz invariance, the so-called type-II Weyl point, whose Fermi surface consists of touching electron and hole pockets with a nonvanishing density of states \cite{Soluyanov15Nature_type2weyl}. Type-II Weyl points have been demonstrated in different photonic systems, such as photonic crystals \cite{Chen16NC_weyl_arc,Chen21OE_1Dtwist}, waveguide arrays \cite{Noh17NatPhy_weyl_arc,Qin18OE_type2weyl}, and chiral metamaterials \cite{Xiao16PRL_hyperWeyl,Yang17NC_arc,Li22PRB_type2weyl}.
\begin{figure*} \includegraphics[width=0.9\textwidth]{Fig11} \caption{Different kinds of 3D photonic nodal lines. (a) Photonic nodal lines in metacrystals. Reproduced with permission from~\cite{Gao18NC_line}, Copyright (2018), under CC BY 4.0. (b) Hourglass nodal lines in photonic metacrystals. Reproduced with permission from~\cite{Xia19PRL_hourglass}, Copyright (2019) by the American Physical Society. (c) Experimental observation of nodal chains in a metallic-mesh photonic crystal. Reproduced with permission from~\cite{YanNatPhy_chain}, Copyright (2018) by Springer Nature. (d) Non-Abelian nodal links in a biaxial hyperbolic metamaterial. Reproduced with permission from~\cite{Yang20PRL_link}, Copyright (2020) by the American Physical Society.} \label{figs:fig11} \end{figure*}
Apart from Weyl points carrying a charge of $\pm 1$, Weyl points can also carry an arbitrary integer charge $n$, described by the following 3D Hamiltonian \cite{Fang12PRL_multiWeyl}, \begin{gather} H_n(\mathbf{k})=k_+^{n}\sigma_+ + k_-^{n}\sigma_- + k_z\sigma_z + \omega_0 I \end{gather} where $k_{\pm}=k_x\pm ik_y$, $\sigma_{\pm}=(\sigma_x\pm i\sigma_y)/2$, $I$ is the identity matrix, and $\omega_0$ is the frequency of the Weyl point. Charge-2 Weyl points have been proposed in woodpile photonic crystals \cite{Chang17PRB_woodpile,Takahashi18JPSJ_charge2} and observed experimentally in the mid-infrared regime in a low-index chiral woodpile photonic crystal fabricated by two-photon polymerization \cite{Vaidya20PRL_charge2}. The splitting of the charge-2 Weyl point into two charge-1 Weyl points under careful symmetry breaking was further observed experimentally in \cite{Jorg22LPR_charge2}. However, due to the low index contrast in \cite{Vaidya20PRL_charge2}, only an incomplete bandgap surrounds the Weyl point, making it challenging to study the Weyl point in isolation. An ideal charge-2 Weyl point separated from trivial bands was experimentally observed in \cite{Yang20PRL_unconven} (see Fig.\ref{figs:fig10}c), where two long surface arcs forming a noncontractible loop wrapping around the surface Brillouin zone were mapped out in the experiment.
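The Berry-flux computation sketched earlier applies directly to the higher-charge Hamiltonian $H_n$ above; as a sketch (again our own illustration, reusing the routine and Pauli matrices defined there), only the Hamiltonian needs to be swapped:
\begin{verbatim}
def h_multi_weyl(k, n=2, w0=0.0):
    """H_n(k) = k_+^n s_+ + k_-^n s_- + k_z s_z + w0*I."""
    kp, km = (k[0] + 1j*k[1])**n, (k[0] - 1j*k[1])**n
    sp, sm = (sx + 1j*sy)/2.0, (sx - 1j*sy)/2.0
    return kp*sp + km*sm + k[2]*sz + w0*np.eye(2)

print(weyl_charge(ham=h_multi_weyl, n=2, r=0.5))  # magnitude 2
\end{verbatim}
The identity term $\omega_0 I$ shifts the spectrum without affecting the eigenvectors, so the computed charge has magnitude $n$, e.g., $2$ for the charge-2 case discussed above.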
Very recently, a charge-4 Weyl point was demonstrated experimentally in \cite{Chen22arxiv_maxCharge}, where the authors mapped out the projected bulk dispersion and the exotic quadruple-helicoid Fermi arcs of the topological surface states emanating from the charge-4 Weyl points. Engineering photonic systems that exhibit Weyl points is usually complex, and design methods based on group theory \cite{Saba17PRL_group} or on self-assembly \cite{Frucharta18PNAS_assembly} have been proposed. One can further exploit the idea of synthetic dimensions to explore Weyl physics in lower spatial dimensions \cite{Lin16NC_synWeyl,Wang17PRX_synWeyl,Chen21OE_1Dtwist,Lee22PRL_2layer,Ma21Science_5D}, providing potential for on-chip integration. For example, in \cite{Lin16NC_synWeyl}, the authors proposed to use 2D arrays of resonators undergoing dynamic modulation of the refractive index to explore 3D Weyl physics, in which the nontrivial topology of the Weyl point manifests as surface-state arcs in the synthetic space exhibiting one-way frequency conversion. In \cite{Ma21Science_5D}, using bianisotropic terms as synthetic fourth and fifth dimensions, intriguing bulk and surface phenomena, such as the linking of Weyl surfaces and surface Weyl arcs in 5D, have been studied. Weyl-point physics can also find interesting applications: in \cite{Jia19Science_zero}, the chiral zero mode of Weyl points, a one-way propagating bulk mode, was applied for the robust transport of photons through the bulk medium, and in \cite{Cheng20PRL_spiral}, the generation of vortex beams was demonstrated using the reflection properties of Weyl metamaterials.
A Dirac point in 3D is a fourfold degenerate point, which can be viewed as two Weyl points of charge $\pm 1$ located at the same position in momentum space. 3D photonic Dirac points have been studied in photonic crystals \cite{Wang16PRB_pointGroup} and metamaterials \cite{Guo17PRL_dirac}. In \cite{Wang16PRB_pointGroup}, the authors found a pair of stable 3D Dirac points in hollow-cylinder hexagonal photonic crystals and demonstrated that $C_6$ is the only point group symmetry that can stabilize the paired Dirac points in 3D photonic crystals. In \cite{Guo17PRL_dirac}, the authors theoretically showed that Dirac points can also be realized in effective media under electromagnetic duality and found that a pair of spin-polarized Fermi-arc-like surface states emerges at the interface between air and the Dirac metamaterial. Furthermore, type-II Dirac points, in which the linear crossings forming the degeneracy are tilted, were shown to exist in photonic crystals with nonsymmorphic screw symmetry \cite{Wang17npjQM_type2}. 3D photonic Dirac points and their spin-polarized surface arcs have been experimentally observed in the microwave region with an elaborately designed metamaterial \cite{Guo19PRL_surface} (see Fig.\ref{figs:fig10}d), where two symmetrically placed Dirac points are stabilized by electromagnetic duality symmetry. As a Dirac point consists of two Weyl points with opposite charges, two Fermi arcs can emerge from it, exhibiting left (right) circular polarization with the plane of polarization parallel to the arcs.
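The decomposition of a 3D Dirac point into two opposite-chirality Weyl points can be made equally concrete. In the sketch below (ours; it reuses the definitions above and assumes SciPy is available), the Dirac Hamiltonian is built as a direct sum of two Weyl sectors whose charges cancel:
\begin{verbatim}
from scipy.linalg import block_diag

def h_dirac(k, v=1.0):
    """Fourfold-degenerate 3D Dirac point as the direct sum of two
    Weyl nodes with opposite chirality."""
    return block_diag(h_weyl(k, (v, v, v)), h_weyl(k, (-v, -v, -v)))

# The sector charges cancel, (+1) + (-1) = 0, so the double Fermi arcs
# emanating from a Dirac point carry no net topological protection.
total = weyl_charge(v=(1.0, 1.0, 1.0)) + weyl_charge(v=(-1.0, -1.0, -1.0))
print(total)  # -> 0.0
\end{verbatim}
This vanishing total chirality is precisely why the double Fermi arcs can be continuously deformed away, as discussed below.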
Beyond this conventional bulk-edge correspondence, higher-order hinge states, localized at a hinge and connecting the momentum-space projections of the two Dirac points, have been measured experimentally at microwave frequencies \cite{Wang22PRB_high_order} (see Fig.\ref{figs:fig10}e). Recently, a charge-2 Dirac point was experimentally demonstrated by deliberately engineering hybrid topological states in a 1D optical superlattice system using the idea of synthetic dimensions \cite{Hu20ComPhy_charge2}. When a Gaussian beam is reflected near a photonic Dirac point at an optical interface, a vortical phase distribution and spin inversion can occur \cite{Xu21PRA_poincare}, providing an interesting way to manipulate polarized vortex beams with photonic Dirac points. We note that Fermi arcs in Weyl semimetals are topologically protected, and thus robust, because of the nonzero and opposite chiralities of the two Weyl points they connect. In contrast, as a Dirac point consists of two degenerate Weyl points of opposite chirality, two branches of Fermi arcs (double Fermi arcs) connect a pair of 3D Dirac points; these double Fermi arcs are not topologically protected, owing to the zero chirality of the Dirac points, and can be continuously deformed into a closed Fermi contour without any symmetry breaking \cite{Kargariana16PNAS}. Moreover, Weyl points carrying higher topological charges can have multiple Fermi arcs emanating from them, which can stretch over a large portion of the Brillouin zone or even form a noncontractible loop winding around the surface Brillouin zone \cite{Yang20PRL_unconven}, unlike the single, short, open Fermi arc connecting two charge-$\pm$1 Weyl points.
\begin{figure*} \includegraphics[width=1\textwidth]{Fig12} \caption{Gapped 3D photonic topological insulators. (a) Topological photonic crystal hosting a single surface Dirac cone at L protected by the nonsymmorphic glide reflection. Reproduced with permission from~\cite{Lu17NP_3d}, Copyright (2016) by Springer Nature. (b) Weyl point pair annihilation and bandgap opening in a gyromagnetic photonic crystal~\cite{Liu21arxiv_weylAnn}. (c) Domain wall in an all-dielectric bianisotropic metacrystal and the Dirac-like dispersion of the surface states. Reproduced with permission from~\cite{Slobozhanyuk17NP_3d}, Copyright (2017) by Springer Nature. (d) Photonic topological insulator with robust photonic propagation along a non-planar surface in a photonic structure made from split-ring resonators. Reproduced with permission from~\cite{Yang19Nature_3d}, Copyright (2019) by Springer Nature. } \label{figs:fig12} \end{figure*}
In addition to the point degeneracies discussed above, band crossings can also form 1D lines, so-called nodal lines. Depending on their shape, different configurations can exist \cite{Park22NanoPhot_3Drev}, such as nodal rings, nodal chains, nodal links, and nodal knots. The surface states associated with these 1D bulk topological states are much richer than the Fermi-arc states associated with Weyl/Dirac points. A photonic nodal line in the form of a single ring was experimentally demonstrated in microwave cut-wire metacrystals \cite{Gao18NC_line} (see Fig.\ref{figs:fig11}a), where both the toroidal bulk state and the drumhead surface state supported by the metacrystal were verified.
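The topological invariant protecting such nodal lines can be illustrated with a minimal two-band toy model (our own illustration, in the same numerical style as the sketches above, reusing the definitions given there): the gap closes on a ring in momentum space, and the Berry phase along any small loop that links the ring is quantized to $\pi$:
\begin{verbatim}
def h_nodal_ring(k, k0=1.0):
    """Two-band model whose gap closes on the nodal ring
    kx^2 + ky^2 = k0^2, kz = 0."""
    return (k[0]**2 + k[1]**2 - k0**2)*sx + k[2]*sy

def berry_phase_linking(k0=1.0, rho=0.2, npts=400):
    """Berry phase of the lower band along a circle of radius rho in
    the (kx, kz) plane centred at (k0, 0, 0), linking the ring once."""
    ts = np.linspace(0.0, 2*np.pi, npts, endpoint=False)
    us = [lower_band(h_nodal_ring(np.array([k0 + rho*np.cos(t),
                                            0.0,
                                            rho*np.sin(t)]), k0))
          for t in ts]
    # gauge-invariant Wilson loop of link overlaps along the circle
    w = np.prod([np.vdot(us[i], us[(i+1) % npts]) for i in range(npts)])
    return np.angle(w)   # -> +/- pi; 0 for a loop that does not link
\end{verbatim}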
Later on, a glide-mirror-symmetry-protected hourglass nodal line, formed by an hourglass-shaped band dispersion around a loop in momentum space, was experimentally demonstrated in a photonic metacrystal at microwave frequencies \cite{Xia19PRL_hourglass} (see Fig.\ref{figs:fig11}b); the observed hourglass nodal line resides in a clean and large frequency interval and is immune to symmetry-preserving perturbations. An ideal nodal ring has also been experimentally observed in a simple 1D photonic crystal in the visible region \cite{Deng21arxiv_ring}. A Dirac nodal-line semimetal with fourfold line degeneracy and its perpendicularly polarized double-bowl surface states were further observed experimentally in \cite{Hu21Light_2bowl}. Nodal lines can also act as quadrupole sources of Berry curvature flux \cite{Wang22PRL_QBC}, or intersect at hidden-symmetry-enforced nexus points \cite{Xiong20light_nexus}. Apart from a single nodal ring, two or more rings can touch and chain together, forming nodal chains. Nodal chains have been experimentally studied in metallic-mesh photonic crystals \cite{YanNatPhy_chain,Wang21SciRep_drumhead} (see Fig.\ref{figs:fig11}c) and bi-anisotropic metamaterials \cite{Wang21light_quaternion}. When more than two bands are involved in the degeneracies, the nodal lines formed by consecutive pairs of bands can exhibit interesting braiding structures, with the underlying topological charges described by quaternions. Non-Abelian nodal links formed by the crossings between three adjacent bands were experimentally demonstrated in a biaxial hyperbolic metamaterial \cite{Yang20PRL_link} (see Fig.\ref{figs:fig11}d). Recently, a dielectric photonic crystal in the form of a double diamond structure was theoretically shown to host a nodal link with non-Abelian charges \cite{Park21ACSpho_non-abelian}. Non-Abelian frame charges can flow in momentum space along nodal lines, as observed in experiments using a biaxial photonic crystal \cite{Wang22arxiv_frame}. Band crossings can also form 2D nodal surfaces, e.g., as shown in \cite{Kim19PRB_surface}.
Finally, we briefly discuss the gapped photonic topological phases. In general, 3D photonic Chern insulators can be characterized by three first Chern numbers, i.e., a Chern vector $\mathbf{C}=(C_x,C_y,C_z)$, defined on lower-dimensional surfaces \cite{Oono16PRB_sectionChern,Devescovi21NC_chern}. The gapless degeneracies discussed above, such as Weyl/Dirac points or nodal lines, can be gapped to generate a nontrivial bandgap by changing the parameters of the system. For instance, in magnetic photonic crystals, a nontrivial bandgap can be obtained by gapping out 3D Dirac points, as theoretically studied in \cite{Lu17NP_3d,Kim21OE_glide} (Fig.\ref{figs:fig12}a). Very recently, in a microwave-scale gyromagnetic 3D photonic crystal, the authors of \cite{Liu21arxiv_weylAnn} found that the momentum-space locations of a single pair of ideal Weyl points depend strongly on the biasing magnetic field; by continuously varying the field strength, annihilation of the Weyl points, and hence the formation of a 3D Chern insulator, was observed in the experiments (see Fig.\ref{figs:fig12}b). Gapped photonic topological phases have also been studied in an all-dielectric bianisotropic metacrystal \cite{Slobozhanyuk17NP_3d} (Fig.\ref{figs:fig12}c).
The first fully 3D topological photonic bandgap was experimentally realized in 2019, based on a 3D array of metallic split-ring resonators at microwave frequencies \cite{Yang19Nature_3d} (see Fig.\ref{figs:fig12}d); the gap was opened by gapping out 3D Dirac points via the bi-anisotropic response of the split-ring resonators. Moreover, by direct field measurements, both the gapped bulk band structure and the Dirac-like dispersion of the photonic surface states were successfully mapped out in the experiment.
\section{Conclusions and discussions}\label{sec6}
In summary, we have reviewed the recent developments of topological photonics in one, two, and three dimensions. Specifically, in 1D we presented the paradigmatic SSH model for topological physics, illustrated its topological features, such as the winding number, through an intuitive tight-binding model, and discussed various photonic platforms that have been used to implement the SSH model for different topology-related applications. In 2D, we discussed four categories of photonic topological states, i.e., quantum Hall states, quantum spin Hall states, quantum valley Hall states, and second-order topological corner states, whose topological invariants are the Chern number, spin Chern number, valley Chern number, and bulk dipole or quadrupole moment, respectively. In 3D, photonic topological phases can in general be classified as gapped or gapless. For gapped photonic systems, if T symmetry is broken, one can have 3D photonic Chern insulators characterized by a Chern vector. For gapless photonic topological phases, if the band crossings form isolated points in the Brillouin zone, one obtains nodal points, such as twofold degenerate Weyl points or fourfold degenerate Dirac points, whose topological invariants are the charges carried by the degenerate points; one can further distinguish type-I and type-II nodal points depending on the slope of the bands near the crossing. The band crossings can also form 1D nodal lines, and depending on the shape and number of the nodal lines, one can have nodal rings, nodal chains, nodal links, and nodal knots, where the Berry phase along a closed loop encircling the line serves as the topological invariant. Moreover, if the nodal lines arise from band crossings of more than two bands, one can have nontrivial non-Abelian braiding behaviors hosting non-Abelian topological charges.
Regarding future perspectives, we believe the 1D SSH model will continue attracting attention due to its simplicity and transparent physics. One could extend the model, e.g., to trimer or tetramer SSH chains or to longer-range hoppings, to enrich the physics this model can host. Nonetheless, as the topological states in this model are 0D, they cannot be used to transport electromagnetic waves. In 2D, the most robust photonic topological states are quantum Hall states with time-reversal symmetry breaking \cite{Haldane08PRL,Wang09nature}. In this case, as the backscattering channels are completely removed, the one-way edge modes are robust as long as the disorder is not strong enough to close the bandgap. However, in order to break time-reversal symmetry, one needs real or synthetic magnetic fields.
While real magnetic fields can be used in the microwave regime, the magnetic response is in general weak at visible frequencies, where synthetic magnetic fields are preferable \cite{Fang12NatPho_magnetic,Hafezi11NatPhy_resonator}. To emulate quantum spin Hall states for light, one can consider different combinations of TE and TM polarizations. To do so, in general, one first creates a double Dirac cone and then finds ways to gap it out. One point to note is that the robustness of the resulting quantum spin Hall states depends on the specific mechanism used to implement the double Dirac cone and on the way the bandgap is opened; e.g., quantum spin Hall states whose bandgap is opened by a bianisotropic response are in general more robust than those based on lattice-symmetry considerations \cite{Khanikaev13NatMat,WuHu15PRL}. While quantum spin Hall states need a double Dirac cone, a single Dirac cone can be exploited to emulate quantum valley Hall states. It would be interesting to find a way to tune the valleys to any desired locations in the Brillouin zone rather than having them pinned to the $K/K'$ points by lattice symmetry. Apart from these conventional topological states, second-order topological phases with corner states have also attracted great attention in recent years. Up to now, most of the photonic corner states studied in the literature are based on nontrivial bulk dipole moments. For corner states based on nontrivial bulk quadrupole moments, the original $\pi$-flux square-lattice SSH model \cite{Benalcazar17Science} is not very convenient for photonic systems, because one needs to implement the negative coupling in the model. Finding new lattice structures and tight-binding models without negative couplings that can be implemented using all-dielectric materials is therefore an interesting direction. It would also be interesting to study the differences between corner states created from bulk dipole and quadrupole moments; for instance, do they exhibit the same degree of robustness, or is one kind more robust than the other? In 3D, the topological physics is very rich \cite{Vergniory22Sci_3Dtopo}, and most of the developments in topological photonics are motivated by the corresponding topological phenomena in condensed matter physics. Up to now, most studies in 3D topological photonics have focused on the conventional bulk-edge correspondence, and Weyl nodes as well as their surface states have already found many interesting applications, e.g., one-way waveguiding \cite{Jia19Science_zero,Kim19AOM_broadband}, frequency selectivity \cite{Wang16PRA_ideal}, vortex beam generation \cite{Cheng20PRL_spiral}, topological self-collimation \cite{Yang20PRL_unconven}, optical tweezers \cite{Yang22NJP_tweezer}, cloaking \cite{Takahashi21OE_cloaking}, superimaging \cite{Wang22PRL_QBC}, and all-angle negative refraction and Veselago imaging \cite{Yang21optica_Veselago}. Higher-order topological photonic states in 3D, such as third-order corner states or second-order hinge states, are less explored, and many topological phenomena that are difficult to implement in electronic materials are waiting to be studied in the photonics context. A possible problem is that, as 3D photonic systems are bulky, they will probably not be compatible with future practical on-chip applications.
One possible solution is to exploit lower-dimensional photonic systems, 1D or 2D, to emulate 3D (or even higher-dimensional) topological physics using the idea of synthetic dimensions \cite{Yuan18optica_synD,Lustig21AOP_synD}. A possible complication is then to establish the consequences of the topological phenomena in the real spatial dimensions rather than in the synthetic ones, since in practice devices always operate in real space. We believe topological photonics will continue to attract the interest of photonics researchers in the years to come, owing to its rich physics and promising practical applications.
\section{Acknowledgements} This work was supported in part by the National Natural Science Foundation of China (Nos. U20A20164 and 61975177).
\section{Introduction}
Self-supervised learning~(SSL) for computer vision applications has empowered deep neural networks~(DNNs) to learn meaningful representations of images from unlabeled data \citep{DBLP:journals/corr/abs-2104-14294, chen2020big, DBLP:journals/corr/abs-2002-05709, DBLP:journals/corr/abs-2006-07733, DBLP:journals/corr/abs-1912-01991, DBLP:journals/corr/abs-2104-14548, DBLP:journals/corr/abs-2105-04906, DBLP:journals/corr/abs-2103-03230, DBLP:journals/corr/abs-1807-05520, DBLP:journals/corr/abs-2005-04966, DBLP:journals/corr/abs-2005-10243}. These methods learn a feature space embedding that is invariant to data augmentations (e.g., cropping, translation, color jitter) by maximizing the agreement between representations from different augmentations of the same image. The resulting models are then used as general-purpose feature extractors and have been shown to achieve better transfer learning performance than features obtained from a supervised model \citep{ericsson2021well}. Broadly, SSL models can be categorized as contrastive \citep{chen2020simple, chen2020big}, non-contrastive \citep{grill2020bootstrap, chen2021exploring}, or prototype/clustering based \citep{li2020prototypical,caron2020unsupervised}. There exist multiple differences in the resulting models, even among models belonging to the same category. For example, popular SSL models available as pre-trained networks \citep{goyal2021vissl} can differ in terms of training parameters (loss function, optimizer, learning rate), architecture (DNN backbone, projection head, momentum encoder), model initialization (weights, batch-normalization parameters, learning rate schedule), etc.
Recently, researchers have focused on developing an understanding of specific components of SSL models by studying the loss function used to train the models and its impact on the learned representations. For instance, \cite{jing2021understanding} analyzes contrastive loss functions and the dimensional collapse problem. \cite{cosentgeometry} also analyzes contrastive losses and describes the effect of augmentation strength as well as the importance of the non-linear projection head. \cite{huang2021towards} quantifies the importance of data augmentations in a contrastive SSL model via a distance-based approach. \cite{haochen2021provable} explores a graph formulation of contrastive loss functions with generalization guarantees on the learned representations. In \cite{wang2021understanding}, the importance of the temperature parameter used in the SSL loss function and its impact on learning are examined. \cite{tian2021understanding} performs a spectral analysis of the DNN mapping induced by non-contrastive losses and the momentum encoder approach. However, these studies are unable to provide a \emph{unified analysis} of the myriad of existing SSL models. Besides, these theoretical approaches only provide insights into the embedding obtained \textit{after} the projection head, while in practice it is the mapping provided by the encoder that is actually used for transfer learning.
Closer to our approach, the general transfer performance of an SSL model has been predicted based on the performance it achieves on the ImageNet dataset \citep{kornblith2019better}. This idea was shown to be effective for transfer datasets that are similar to ImageNet, but it cannot be generalized to all transfer learning problems \citep{ericsson2021well}.
In fact, if ImageNet performance were highly correlated with general transfer learning performance, then it would be unclear why SSL models are needed at all, as one could simply use supervised training on ImageNet to obtain image representations for transfer learning. Furthermore, existing empirical evaluations such as \citep{kornblith2019better} only provide a \emph{somewhat coarse and partial understanding} of SSL models. For example, they do not provide insights into how the level of invariance to a specific augmentation in an SSL model relates to its performance on a given downstream task. Since it has been observed that invariance to some augmentations can be beneficial in some cases and harmful in others \citep{xiao2020should}, our goal is to develop a more direct and quantitative understanding of augmentation invariance and how this invariance determines performance.
To achieve this goal, we propose a \textit{geometric} perspective to understand SSL models and their transfer capabilities. Our approach analyzes the manifold properties of SSL models by using a data-driven graph-based method to characterize the geometry of data and their augmentations, as illustrated on the left of \Figref{fig:cluster_ssl}. Specifically, we develop a set of \textbf{manifold graph metrics}~(MGMs) to quantify the geometric properties of existing SSL models. This allows us to provide insights into the similarities and differences between models (\Figref{fig:cluster_ssl}, right) and to link their ability to transfer to specific characteristics of the target task. Because our approach can be applied directly to sample data points and their augmentations, it has several important advantages: first, it is agnostic to specific training procedures, architectures, and loss functions; second, it can be applied to the data embeddings obtained at any layer of the SSL model, thus alleviating the challenge induced by the presence of projection heads; third, it enables us to compare different feature representations of the same data point, even if these representations have different dimensions.
We are interested in using our approach to answer the following questions about SSL models: \\ $(i)$ \textbf{What are the geometric differences between the feature spaces of various SSL models?} \\ $(ii)$ \textbf{What geometric properties allow for better transfer learning capability for a specific task?}
Our contributions can be summarized as follows: \begin{itemize}[leftmargin=*,noitemsep] \item We develop quantitative tools (i.e., MGMs) capable of capturing important geometrical aspects of SSL representations, such as the degree of equivariance-invariance, the curvature, and the intrinsic dimensionality (Sec.~\ref{sec:MGM}). \item We leverage the proposed MGMs to explore the geometric differences and similarities between SSL models. As illustrated in the right part of \Figref{fig:cluster_ssl}, we show that SSL models can be clustered using these geometric properties into three main groups that are not entirely aligned with the paradigm upon which they were trained (Sec.~\ref{sec:variable_SSL}). \item We analyze the geometric differences between a Vision Transformer (ViT) and a convolutional network (ResNet). We show that while the ResNet is biased towards a collapsed representation at initialization, ViTs are not. This initialization bias leads to different geometrical behavior (attraction/repulsion of representations) between the two architectures when training under an SSL regime, as detailed in Sec.~\ref{sec:variable_SSL}.
\item We demonstrate that the observed MGMs are a strong indicator of the transfer learning capabilities of SSL models for most downstream tasks, thereby showing that specific geometrical properties are crucial for a given transfer learning task (Sec.~\ref{sec:transfer_SLL}). \end{itemize}
\begin{figure}[t] \includegraphics[width=1\linewidth]{figs/concept_2.png} \caption{(\textbf{Left}) \textbf{Approach toward the analysis of SSL models.} For each model, we use as input to the DNN the validation set of ImageNet as well as its augmented versions. The output of the backbone encoder is used to quantify the properties of the manifold induced by the SSL algorithm. Specifically, we develop \textbf{Manifold Graph Metrics} (Sec.~\ref{sec:MGM}) that capture manifold properties known to be crucial for transfer learning. The MGMs allow us to capture the specificities of each SSL model (Sec.~\ref{sec:variable_SSL}) and to characterize their transfer learning capability (Sec.~\ref{sec:transfer_SLL}). (\textbf{Right}) We provide the dendrogram of the SSL models considered in this paper based on the manifold graph metrics we propose. Although the underlying hyper-parameters, loss functions, and SSL paradigms differ, the manifolds induced by the SSL algorithms can be categorized into three types. An important observation is that the resulting clusters are not necessarily aligned with the different classes of SSL algorithms. This shows that although some training procedures appear more similar than others, a deeper analysis is required to understand in which aspects SSL models differ.} \label{fig:cluster_ssl} \end{figure}
\section{Background}
\paragraph{Self-supervised Learning:} SSL models are trained by producing multiple versions of the same image via data augmentation and training a DNN such that their embeddings coincide. Many SSL models are obtained with architectures that cascade two networks: the backbone encoder, from which the representation used for downstream tasks is extracted, and the projection head, whose output is fed into the SSL loss function. The main risk in such an approach is the so-called feature collapse phenomenon \citep{jing2021understanding, tian2021understanding, DBLP:journals/corr/abs-2105-00470}, where the learned representations become invariant even to input samples that belong to different manifolds. To reduce the risk of feature collapse, multiple SSL algorithms have been proposed. Contrastive SSL methods (e.g., simCLR-v1 \cite{chen2020simple}, simCLR-v2 \cite{chen2020big}, MoCo-v1 \cite{he2020momentum}, MoCo-v2 \cite{chen2020improved}, Infomin \cite{DBLP:journals/corr/abs-2005-10243}, PIRL \cite{misra2020self}, InsDis \cite{wu2018unsupervised}) make use of negative pairs, pushing apart during training the embeddings of pairs of samples that are not augmentations of the same image. Non-contrastive SSL methods (BYOL \cite{DBLP:journals/corr/abs-2006-07733}, DINO \cite{caron2021emerging}) utilize a teacher-student approach in which the teacher's weights are an exponential moving average of the student's weights. Prototype/clustering SSL methods (SeLa-v2 \cite{asano2019self}, DeepCluster-v2 \cite{caron2020unsupervised}, SwAV \cite{caron2020unsupervised}, PCL-v1 and PCL-v2 \cite{li2020prototypical}) enforce consistency between the cluster assignments obtained from different transformations of the same image, without negative-pair comparisons. We will analyze the geometrical differences between these SSL models and the impact of geometry on transfer learning capability.
To perform this analysis, we use pre-trained SSL models based on the ResNet50(1x) backbone encoder architecture, as well as a supervised model available in the PyTorch library \cite{DBLP:journals/corr/abs-1912-01703}. We also compare the geometrical differences between a ResNet50 and a ViT architecture, both at initialization and after SSL training with the DINO loss \citep{caron2021emerging}.
\paragraph{Graphs:} Graph and local neighborhood methods have played a significant role in machine learning tasks such as manifold learning \citep{belkinThesis03}, semi-supervised learning \citep{Belkin-JMLR-06, gadde2014active}, and, more recently, graph-based analysis of deep learning \citep{Papernot2018, bontonou2019introducing, lassance2020deep}. Typically, their use in this context is motivated by their ability to represent data with irregular positions in space $({\bm{x}}_i)_{i=1}^{N}$ rather than directly modeling the data distribution $P({\bm{x}})$. This type of DNN analysis starts by constructing a good neighborhood representation \cite{Maier, DeSousa2013}, which is also a crucial first step in our framework. The most common approaches to defining a neighborhood are ${\textnormal{k}}$-nearest neighbors~(${\textnormal{k}}$NN) and the $\epsilon$-neighborhood. However, these approaches select points in a neighborhood based only on their distance to the query point, without considering their relative positions, while also relying on ad hoc procedures to select parameter values (e.g., $k$ or $\epsilon$). For this reason, we make use of non-negative kernel regression~(NNK) \citep{shekkizhar2020graph} to define neighborhoods and graphs for our manifold analysis. Unlike kNN, which can be seen as a thresholding approximation, NNK can be interpreted as a form of basis pursuit \citep{tropp2004topics}, which leads to better neighborhood construction with robust local estimation performance in several machine learning tasks \citep{shekkizhar2021model, shekkizhar2021revisiting}.
Of particular importance for our proposed SSL analysis framework is the fact that NNK neighborhoods have a geometric interpretation. While in kNN points ${\bm{x}}_j$ and ${\bm{x}}_k$ are included in the neighborhood of a data point ${\bm{x}}_i$, denoted by $\mathcal{N}({\bm{x}}_i) = \left \{ {\bm{x}}_{i_1}, \dots, {\bm{x}}_{i_{n_i}} \right \}$, solely based on their distances to ${\bm{x}}_i$, i.e., $d({\bm{x}}_i,{\bm{x}}_j)$ and $d({\bm{x}}_i,{\bm{x}}_k)$, in NNK this decision also takes into account $d({\bm{x}}_j,{\bm{x}}_k)$, so that ${\bm{x}}_j$ and ${\bm{x}}_k$ are both included in $\mathcal{N}({\bm{x}}_i)$ only if they are not geometrically \emph{redundant}. As a result, an NNK neighborhood can be described as a \textit{convex polytope approximation} of ${\bm{x}}_i$, denoted $\mathcal{P}({\bm{x}}_i)$, whose size and shape are determined by the local geometry of the data available around ${\bm{x}}_i$ \citep{shekkizhar2020graph}. This geometric property is particularly important for data that are not uniformly sampled and lie on a lower-dimensional manifold in a high-dimensional vector space, which is typical of feature embeddings in DNN models. Note that NNK uses kNN as an initialization step, after which it has only modest additional runtime requirements \citep{shekkizhar2020graph}. Thus, we can scale our analysis to large datasets by speeding up the initialization using computational tools developed for kNN \citep{indyk1998approximate, johnson2019billion}. NNK requires kernels with range in $[0, 1]$.
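To make the construction concrete, we provide a sketch of the NNK selection step for a single query point (a simplified illustration with our own variable names; the full pipeline of \citep{shekkizhar2020graph} adds further pre-processing). For the cosine kernel adopted below, the kernel-space NNK objective reduces to a non-negative least squares problem on the $\ell_2$-normalized embeddings:
\begin{verbatim}
import numpy as np
from scipy.optimize import nnls

def nnk_neighbors(query, candidates, k=50):
    """NNK neighborhood of `query` (shape (d,)) among `candidates`
    (shape (N, d)) under the cosine kernel. Returns the indices and
    weights of the selected, non-redundant neighbors."""
    qh = query / np.linalg.norm(query)
    ch = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    # kNN initialization: keep the k most similar candidates
    knn_idx = np.argsort(-(ch @ qh))[:k]
    # Non-negative least squares: approximate the query as a conic
    # combination of its kNN; geometrically redundant candidates
    # receive zero weight and are pruned from the neighborhood.
    theta, _ = nnls(ch[knn_idx].T, qh)
    keep = theta > 1e-10
    return knn_idx[keep], theta[keep]
\end{verbatim}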
In this work, we use the cosine kernel, since the encoder representations are obtained after a \emph{ReLU} and hence the kernel satisfies this requirement.
\section{Manifold Graph Metrics} \label{sec:MGM}
\begin{wrapfigure}{r}{0.4\textwidth} \centering \includegraphics[width=\linewidth]{figs/nnk_approx_output_dnn_manif_22.png} \caption{\textbf{MGMs in feature space are based on NNK polytopes}. We display data samples (red and blue dots) and two NNK polytopes ($\mathcal{P}({\bm{x}}_i), \mathcal{P}({\bm{x}}_j)$) in the encoder space that approximate the underlying manifold of the SSL model (gray surface). Our proposed MGMs capture invariance (polytope diameter), manifold curvature (angle between neighboring polytopes), and local intrinsic dimension (number of vertices in a polytope) of the output manifold for a given SSL model.} \label{fig:mgm} \end{wrapfigure}
We use the NNK polytope $\mathcal{P}({\bm{x}}_i)$ to characterize the neighborhood of ${\bm{x}}_i$ and the local and global geometry of the data manifolds. Our approach, summarized in \Figref{fig:cluster_ssl}, is motivated by the observation that, while the DNNs used in SSL involve complex non-linear mappings, the induced transformations and the structure of the representation space can be inferred from the data samples by observing their relative positions in that space. In particular, we propose a set of \textbf{Manifold Graph Metrics (MGMs)} that provide intuition and quantitative measurements characterizing the geometry of an SSL model (\Figref{fig:mgm}). We focus on three manifold properties that we consider important for the understanding of SSL-based feature embeddings and their transfer capabilities: $(i)$ the level of invariance (or equivariance) with respect to a given transformation or augmentation, $(ii)$ the curvature of the manifold, and $(iii)$ the local intrinsic dimension of the manifold.
\paragraph{Invariance-Equivariance Metric:} Given input images $\{ {\bm{x}}_i\}_{i=1}^N$, we apply the NNK neighborhood definition to the feature space representations ${\bm{f}}({\bm{x}}_i)$ of the images to obtain $N$ convex polytopes. We define the \textbf{diameter} of an NNK polytope as the maximum distance between the nodes forming the polytope, i.e., \begin{align} \text{Diam}(\mathcal{P}({\bm{f}}({\bm{x}}_i))) = \max_{k,l \in \mathcal{N}({\bm{f}}({\bm{x}}_i))} \left \| \hat{{\bm{f}}}({\bm{x}}_k) - \hat{{\bm{f}}}({\bm{x}}_l) \right \|_2 \label{eq:polytope_diameter} \end{align} where $\hat{{\bm{f}}}$ denotes the $l_2$-normalized feature embedding of a given input. These polytope diameters take values in the range $[0,2]$ and provide a quantitative measure of how much the input samples have been contracted or dilated by the DNN backbone of the SSL model. Thus, a constant or collapsed mapping, where multiple ${\bm{x}}_i$ are mapped to the same ${\bm{f}}$, would lead to a degenerate polytope with diameter equal to zero, while a diameter of $2$ corresponds to a mapping where the neighbors are maximally scattered. Now, by considering as input only the augmented versions of an image, the diameter of the NNK polytope captures the level of invariance-equivariance of the DNN with respect to this specific augmentation. Alternatively, if we restrict the input to images that share the same class label, the polytope diameter indicates the level of invariance-equivariance of the representation to samples belonging to that specific class.
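A direct implementation of the diameter defined above only requires the embeddings of the selected NNK neighbors; a schematic sketch reusing the nnk\_neighbors routine from the previous section (variable names are ours) is:
\begin{verbatim}
def polytope_diameter(neighbor_embeddings):
    """Maximum pairwise l2 distance between the normalized embeddings
    of the NNK neighbors of a sample; values lie in [0, 2]."""
    z = neighbor_embeddings / np.linalg.norm(neighbor_embeddings,
                                             axis=1, keepdims=True)
    gaps = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
    return gaps.max()

# usage sketch: feats holds the embeddings of augmented views of an image
# idx, _ = nnk_neighbors(feats[0], feats[1:], k=20)
# diam = polytope_diameter(feats[1:][idx])
\end{verbatim}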
In all these settings, a lower diameter value corresponds to invariance of the DNN mapping to the transformation considered, while higher values correspond to equivariance.
\paragraph{Curvature Metric:} We study the curvature of a manifold by comparing the orientations of NNK polytopes corresponding to neighboring input samples. That is, given two samples ${\bm{x}}_i$ and ${\bm{x}}_j$ that are NNK neighbors, and their respective polytopes $\mathcal{P}({\bm{f}}({\bm{x}}_i))$ and $\mathcal{P}({\bm{f}}({\bm{x}}_j))$, we evaluate the angles between the subspaces spanned by the neighbor sets $\mathcal{N}({\bm{f}}({\bm{x}}_i))$ and $\mathcal{N}({\bm{f}}({\bm{x}}_j))$ that make up the polytopes. Concretely, we use the concept of affinity between subspaces \cite{soltanolkotabi2014robust} to define the affinity between two polytopes as a quantity in the interval $[0,1]$ given by \begin{align} \text{Aff}(\mathcal{N}({\bm{f}}({\bm{x}}_i)), \mathcal{N}({\bm{f}}({\bm{x}}_j))) = \sqrt{\frac{\cos^2(\theta_1) + \dots + \cos^2(\theta_{n_i n_j})}{n_i n_j}}, \label{eq:polytope_affinity} \end{align} where $\theta_k, ~k=1,\dots, n_i n_j$ are the principal angles between the two subspaces spanned by the vectors in $\mathcal{N}({\bm{f}}({\bm{x}}_i))$ and $\mathcal{N}({\bm{f}}({\bm{x}}_j))$, containing all the NNK neighbors of ${\bm{x}}_i$ and ${\bm{x}}_j$, respectively. Intuitively, this metric equals zero when the subspaces are orthogonal (polytopes oriented in perpendicular directions), while a value of one corresponds to zero curvature, i.e., subspace alignment.
\paragraph{Intrinsic Dimension Metric:} The local intrinsic dimension of a manifold can be estimated as the number of neighbors selected by NNK. It was shown in \citep{shekkizhar2020graph, bonet2021channelredundancy} that the number of neighbors per polytope, i.e., $n_i$, correlates with the local dimension of the manifold around a data point $i$. This observation is consistent with geometric intuition: the number of NNK neighbors is decided based on the availability of data spanning orthogonal directions, i.e., the local subspace of the manifold: \begin{align} \text{ID}({\bm{f}}({\bm{x}}_i)) = \text{Card}(\mathcal{N}({\bm{f}}({\bm{x}}_i))), \label{eq:polytope_id} \end{align} where $\text{Card}$ denotes cardinality.
\paragraph{Experimental settings:} \label{sec:exp_set} For all SSL models, we analyze the equivariance-invariance, subspace curvature, and intrinsic dimension in a set of controlled and interpretable experiments. While in this work we consider as inputs the validation set of ImageNet \cite{deng2009imagenet}, the framework is applicable to any dataset (with or without labels). Our experimental setup is as follows: for each input sample, $(i)$ select an augmentation setting, $(ii)$ sample $T=50$ augmentations of the image, $(iii)$ compute the similarity graph using NNK neighborhoods, and $(iv)$ extract the proposed MGMs (Sec.\ref{sec:MGM}). We limit ourselves to $5$ augmentation types:
$(i)$ \emph{Semantic augmentations (Sem.)}, where we consider all the samples belonging to the same class as augmented versions of each other along the semantic direction of the manifold; $(ii)$ \emph{Augmentations (Augs.)}, which corresponds to the sequential application of the various augmentations used during most SSL training processes (random horizontal flip, colorjitter, random grayscale, Gaussian blur, and random cropping); $(iii)$ \emph{Crop} and $(iv)$ \emph{Colorjitter (Colorjit.)}, specific augmentations that are part of the augmentation policies used to train the SSL models; and $(v)$ \emph{Rotate}, an augmentation that was not used for SSL training but is considered important for some transfer tasks. Therefore, for each sample and for each MGM we obtain a value (local manifold analysis), while the collection of these per-sample MGMs over the entire validation set gives a distribution (such as the one displayed in \Figref{fig:vit_resnet}). In order to extract differences between SSL models and highlight which geometric properties favor specific transfer learning tasks, we extract two statistics from these distributions of MGMs (global manifold analysis): the mean (denoted by the metric name) and the spread (referred to as metric spread). In total, for each SSL model, we obtain $26$ geometric features, namely, $\{$Sem., Augs., Crop, Colorjit., Rotate$\}$ $\times$ $\{$Equivariance, Equivariance spread, Affinity, Affinity spread, Nb. of neighbors$\}$, as well as the affinity between the Sem. and Augs. directions and its spread, namely, $\{$Sem.-Augs. Affinity, Sem.-Augs. Affinity spread$\}$. Additional details about each metric and its variability across all models are given in Appendix~\ref{app:sec3_fig}.
In Sec.~\ref{sec:transfer_SLL}, we evaluate the capability of the MGMs to characterize the transfer learning capability of SSL models. While we highlight here the overall setting of the transfer learning tasks, details regarding the datasets can be found in Appendix~\ref{app:transfer_dataset} and regarding the transfer learning training settings in \cite{ericsson2021well}: \emph{FewShotKornblith} and \emph{ManyShotLinear} correspond to few/many-shot classification using SSL features extracted on $11$ image classification datasets \citep{kornblith2019better}; \emph{ManyShotFinetune} is related to the classification performance as in the previous setting, but with the entire network, including the feature extractor, updated; \emph{FewShotCDFSL} corresponds to few-shot transfer performance on cross-domain image datasets such as CropDiseases, EuroSAT, ISIC, and ChestX \citep{guo2020broader}; \emph{DetectionFinetune} and \emph{DetectionFrozen} refer to the object detection task evaluated on the PASCAL VOC dataset \citep{everingham2010pascal}; \emph{DenseSNE} corresponds to dense surface normal estimation evaluated on NYUv2 \citep{silberman2012indoor}; and \emph{DenseSeg} refers to the dense segmentation task evaluated on the ADE20K dataset \citep{zhou2019semantic}.
\section{Geometry of SSL models} \label{sec:variable_SSL}
In this section, we characterize the geometric properties of the manifolds of various SSL models, as a way to highlight the differences and similarities between models. To do so, we extract the MGMs proposed in Sec.~\ref{sec:MGM} for $14$ SSL models and use them to quantify model equivariance-invariance, curvature, and intrinsic dimension for each augmentation manifold.
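Schematically, the model-level analysis that follows operates on a matrix with one row of MGM features per SSL model. The sketch below (ours; the array name, the standardization step, and the Ward linkage are illustrative assumptions, as the exact hyper-parameters are not fixed here) summarizes the pipeline behind the dendrogram of \Figref{fig:cluster_ssl} and the sparse PCA projection discussed next:
\begin{verbatim}
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import SparsePCA
from scipy.cluster.hierarchy import linkage

def cluster_ssl_models(mgm):
    """mgm: hypothetical (n_models, 26) array of MGM features."""
    z = StandardScaler().fit_transform(mgm)
    # two sparse principal components for the 2D projection
    pcs = SparsePCA(n_components=2, random_state=0).fit_transform(z)
    # agglomerative hierarchy behind the dendrogram
    tree = linkage(z, method="ward")
    return pcs, tree
\end{verbatim}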
\paragraph{Clustering of SSL models:}
\begin{table}[t] \centering \includegraphics[width=\textwidth]{figs/model_pca_details.png} \vspace{.3cm} \caption{(\textbf{Left}) \textbf{Projection of the SSL models' MGMs onto the principal components.} We observe three distinct clusters based on the observed MGMs, capturing the geometric similarities and differences between the various models. Note that these clusters are not necessarily aligned with the underlying SSL training paradigm, i.e., contrastive, non-contrastive, or prototype/clustering based. (\textbf{Right}) \textbf{MGMs that make up the principal components} and their values for each model, where the two yellow boxes (a) and (b) indicate the MGMs that capture the maximum variation of the models along the principal directions.} \label{tab:model_pca_details} \end{table}
The similarity between SSL models leads to the clustering illustrated by the dendrogram of \Figref{fig:cluster_ssl}. In Table~\ref{table:dendro_ssl_paradigm}~(Appendix \ref{app:sec4_fig}) we provide the details of each SSL model and highlight the structural differences that lead to our MGM-based clustering. To further analyze these clusters, we consider a sparse principal component analysis (PCA) of the SSL models, using the $26$ MGMs as features. We project the MGMs of each SSL model onto the two main components and observe three clusters (see Table~\ref{tab:model_pca_details}). We also provide the MGMs that were selected by the sparse PCA and their associated importance in the principal components (see \Figref{fig:pc_ssl} in Appendix~\ref{app:sec4_fig}). We note that both simCLR-v1 and simCLR-v2 exhibit a large variation with respect to the features selected by the first principal component. Interestingly, the geometrical property that characterizes both simCLR-v1 and simCLR-v2 is the angle between the semantic and augmented manifolds (as indicated by the yellow box (a) in Table~\ref{tab:model_pca_details}). This implies that their ability to project the augmented samples onto the data manifold differs significantly from that of the other SSL models. Specifically, while all the other models have a low affinity value between these two directions (orthogonality), both simCLR versions have a high affinity score $(\approx .8)$, meaning that the subspaces spanning the semantic direction and the augmented direction are more aligned. This shows that the dimensional collapse effect observed and analyzed in SimCLR's projection head \cite{jing2021understanding, huang2021towards, cosentgeometry} appears to impact the backbone encoder representation as well. Specifically, simCLR-v1 and simCLR-v2 are the only models projecting the augmentation manifold onto the data manifold such that the two are hardly distinguishable. Another distinction of the SimCLR models is their low intrinsic dimensionality (see Table~\ref{tab:model_all}), again showing that, compared to other SSL models, the SimCLR backbone encoder tends to project the data onto a lower-dimensional subspace.
The two other clusters, composed of models from different paradigms, are mainly characterized by their differences in terms of semantic equivariance spread. The models in the green cluster (bottom) have a low semantic equivariance spread. This implies that InsDis, MoCo-v1, PIRL, SeLa-v2, BYOL, DC-v2, and SwAV have the same level of invariance across the input data manifold, i.e., the same across all classes.
For instance, InsDis is highly equivariant to the semantic directions (semantic equivariance $=1.23$, as shown in the $4^{th}$ column); therefore, given its low semantic equivariance spread, we know that it is highly equivariant with respect to all ImageNet images. Another observation from Table~\ref{tab:model_pca_details} is that, when considering the \emph{v1} and \emph{v2} versions of PCL, SimCLR, and MoCo, only the two MoCo versions do not belong to the same cluster. The main difference between these two models is their semantic equivariance spread. While MoCo-v1 has a low spread in terms of semantic equivariance, MoCo-v2 has a larger spread, indicating that its equivariance depends on the data sample at hand. Note that the main difference between MoCo-v1 and MoCo-v2 is the use of an MLP projection head in the latter. Therefore, this observation suggests that adding a projection head makes the equivariance property of the backbone encoder more dependent on the input data. This result seems intuitive given that the projection head absorbs most of the invariance induced by the SSL loss function. We also note that some models are strongly invariant to rotations, e.g., SimCLR-v1 and SimCLR-v2, while others are strongly equivariant, e.g., PCL-v1 and PCL-v2. This is particularly surprising as rotation is typically not part of the augmentation policy used to train SSL models. \begin{figure}[t] \begin{subfigure}{0.27\textwidth} \centering \includegraphics[width=\textwidth]{figs/resnet_vit/hist_nnk_diameter_trained_False_augs_all.png} \end{subfigure} \begin{subfigure}{0.22\textwidth} \centering \includegraphics[trim={3.2cm 0 0 0}, clip,width=\textwidth]{figs/resnet_vit/hist_nnk_diameter_trained_False_augs_none.png} \end{subfigure} \begin{subfigure}{0.27\textwidth} \centering \includegraphics[width=\textwidth]{figs/resnet_vit/hist_nnk_diameter_trained_True_augs_all.png} \end{subfigure} \begin{subfigure}{0.22\textwidth} \centering \includegraphics[trim={3.6cm 0 0 0}, clip,width=\textwidth]{figs/resnet_vit/hist_nnk_diameter_trained_True_augs_none.png} \end{subfigure} \caption{\textbf{Histogram of observed equivariance (Semantic, Augmentation)} for a convolutional encoder backbone (ResNet50) and a vision transformer backbone (ViT-S) at (\textbf{Left})~initialization and (\textbf{Right})~after training with the same SSL procedure (DINO \citep{caron2021emerging}). We observe that, at initialization, the inductive bias of convolutional networks leads to an invariant representation with respect to both the semantic and the augmentation manifold, whereas a ViT leads to more scattered representations of input images belonging to the semantic as well as the augmentation manifolds. After training, both architectures converge to a similar representation, where the marginal variation in spread between the two models impacts the performance in downstream tasks.} \label{fig:vit_resnet} \end{figure} \paragraph{Similarities in SSL models:} From Table~\ref{tab:model_pca_details}, we also deduce the geometric properties that do not vary across SSL models. In particular, the semantic and colorjitter affinities, i.e., the curvature of the data manifold along the semantic (label) direction and the colorjitter augmentation direction, show similar behavior across all models studied. This observation shows that the linearization capabilities of the different trained SSL models in these two manifold directions are very similar. We defer further analysis to Appendix~\ref{app:sec3_fig} and summarize the observations in \Figref{fig:variance_MGM}.
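For reference, histograms such as those in \Figref{fig:vit_resnet} can be drawn directly from the per-sample equivariance values; the arrays below are random placeholders for the values measured on the two backbones, and the bin count is arbitrary.
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

# Placeholders for the measured per-sample equivariance values.
equiv_resnet = np.random.rand(50000)
equiv_vit = np.random.rand(50000)

plt.hist(equiv_resnet, bins=50, alpha=0.5, label="ResNet50")
plt.hist(equiv_vit, bins=50, alpha=0.5, label="ViT-S")
plt.xlabel("equivariance")
plt.legend()
plt.show()
\end{verbatim}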
\paragraph{ViT vs ResNet:} Vision transformers~(ViT) \citep{dosovitskiy2020image}, built with an architecture inherited from natural language processing \citep{vaswani2017attention}, have recently emerged as a desirable alternative to convolutional neural networks~(CNN) \citep{caron2020unsupervised} in SSL \citep{caron2021emerging}. While ViTs are appealing, as they provide a more general-purpose architecture, the reasons that explain the benefits of ViT representations over those obtained from CNNs remain unclear. We compare the representations learned by these architectures, focusing on ViT-Small and ResNet50, as these share similar model capacity ($21$M vs $23$M parameters) and throughput ($1007$/s vs $1237$/s). \Figref{fig:vit_resnet} depicts a geometric comparison of the two architectures using two of our proposed equivariance metrics, at initialization as well as after SSL training using DINO \citep{caron2021emerging}. We observe that the two architectures lead to very different representations at initialization in both the semantic and augmentation manifold directions. While ResNet50 is biased toward a collapsed representation for all inputs, ViT-S yields a more spread-out distribution at initialization. However, after training, the two architectures converge to a similar equivariant representation, with ViT-S showing better class separability on the semantic manifold. This observation suggests that convolutional networks learn by repulsion, dispersing representations from a collapsed starting point. ViTs, on the contrary, start from a scattered representation and organize representations by attraction. This could be an explanation for the observed robustness and generality of the representations learned by ViTs \citep{bai2021transformers, caron2021emerging}. \section{Transfer performances of SSL models} \label{sec:transfer_SLL} \begin{wrapfigure}{r}{0.38\textwidth} \centering \includegraphics[width=\linewidth]{figs/dendo_per_accuracy.png} \caption{\textbf{Dendrogram of SSL models based on their transfer learning accuracy.} We observe that the groupings obtained are highly correlated with the clustering performed using our MGMs, showing that there is an intrinsic connection between the geometric properties of an SSL model and its transfer learning performance.} \label{fig:dendogram_accu} \end{wrapfigure} In this section, we investigate which geometrical properties are crucial for SSL models to perform better on specific transfer learning tasks. First, we show that the geometry-based clustering of SSL models~(Sec.~\ref{sec:variable_SSL}) mainly coincides with the observed per-cluster transfer performances. We then show that our MGMs can provide intuition regarding the geometric properties that are desirable in order to perform well on specific transfer learning tasks (e.g., classification tasks benefit from rotation-invariant representations). Finally, we explore which manifold properties are the most informative for the transfer performance of SSL models on a given task. \paragraph{Transfer accuracy based clustering of SSL models:} We depict in \Figref{fig:dendogram_accu} the hierarchical clustering of SSL models with respect to their transfer learning accuracy on various tasks. SSL models are clustered together if they achieve similar accuracy, good or bad, on a set of tasks.
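A minimal sketch of this clustering step, with a random placeholder for the model-by-task accuracy matrix and our own (unverified) choice of Ward linkage, is:
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage

# Placeholder for the (14 models) x (tasks) transfer-accuracy matrix.
acc = np.random.rand(14, 8)

Z = linkage(acc, method="ward")   # agglomerative clustering of models
dendrogram(Z, labels=[f"model {i}" for i in range(14)])
plt.show()
\end{verbatim}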
When comparing the clusters obtained using $(i)$ the MGMs~(displayed in \Figref{fig:cluster_ssl}) and $(ii)$ the transfer learning performances (in \Figref{fig:dendogram_accu}), we observe that PIRL, InsDis, and MoCo-v1 belong to the same group in both cases. Similarly, SwAV and BYOL, as well as the SimCLR models, are grouped consistently in both clusterings. This confirms our hypothesis that there is a correlation between the geometrical properties of SSL models and their transfer learning accuracy. \begin{figure}[bp] \centering \includegraphics[width=1\linewidth]{figs/concept_22.png} \caption{\textbf{Correlation between equivariance and transfer learning performances.} We display the Pearson coefficient $\rho$ at the top left of each subplot. We observe that the equivariance of the SSL model with respect to the semantic direction and rotation is negatively correlated with its capability to perform well on few-shot learning tasks (small domain distance). However, these quantities are positively correlated with the accuracy of the DNN on a dense surface normal estimation task. This observation confirms common intuitions regarding the properties that an embedding should have to transfer accurately on these two tasks. The $p$-values for all results are $\leq 0.01$.} \label{fig:corr_intuitive} \end{figure} \paragraph{Recovering geometric properties for transfer learning:} Intuitively, it is known that to generalize well on image recognition, an SSL model should be invariant to augmentations such as rotation, while to perform well on dense tasks, i.e., pixel-level prediction tasks, it should be equivariant to rotation \citep{xiao2020should}. We confirm this intuition through our proposed MGMs in \Figref{fig:corr_intuitive}. We show that the semantic and rotation equivariance correlate as expected with the transfer accuracy for two different tasks, namely, \emph{FewShotKornblith} and \emph{DenseSNE}. In particular, we find that $(i)$ for \emph{FewShotKornblith} (few-shot learning with small domain distance), the greater the model invariance to the semantic direction and rotation, the better the accuracy, and $(ii)$ for \emph{DenseSNE} (dense surface normal estimation), the higher the equivariance of the model, the better. \paragraph{Exploring geometric properties for transfer learning:} We now explore some possibly counter-intuitive manifold properties of the backbone encoder that correlate with specific transfer learning tasks. Our method is based on quantifying how well each MGM can explain the per-task transfer capability of SSL models by using simple regression methods capable of performing feature selection. For each task, we regress the MGMs onto the transfer learning accuracy; details and illustrations of the results are shown in Appendix~\ref{app:feature_section}, and a sketch of the procedure is given at the end of this section. We show in \Figref{fig:corr} the correlation between the selected MGMs and the per-task transfer learning accuracy. We observe that in the case of many-shot (linear and fine-tune) and few-shot (small and large domain distance) learning, there exists a negative correlation with intuitive geometric properties~(\Figref{fig:corr} \textit{Top}): semantic equivariance, rotation affinity, augmentation affinity, and spread of augmentation equivariance. For \emph{ManyShotLinear}, \emph{ManyShotFinetune}, and \emph{FewShotKornblith}, we recover geometric properties that were previously considered important for classification \citep{gidaris2018unsupervised, soltanolkotabi2014robust}.
However, in the case of \emph{FewShotCDFSL}, where the datasets used differ largely from ImageNet, the transfer performances correlate with second-order statistics, i.e., the spread, and less with the differences between the averages of the MGM distributions. In the case of the \emph{DetectionFinetune}, \emph{DetectionFrozen}, \emph{DenseSNE}, and \emph{DenseSeg} tasks, the geometric properties that are correlated with the performances are also second-order statistics~(\Figref{fig:corr} \textit{Bottom}). Specifically, there exists a positive correlation with the spread of crop affinity, semantic equivariance, and colorjitter affinity. Therefore, we observe that there is an implicit relationship between the geometric properties of SSL models and their transfer learning capabilities. Specifically, for tasks with small transfer distance relative to ImageNet, the performance of an SSL model is tied to well-known geometric properties, such as invariance and linearization capabilities, while in the case of large transfer distance, the transfer learning capability relies on higher-order statistics of the geometry of the backbone encoder, i.e., on how the model maps different inputs. \begin{figure}[h!] \begin{minipage}{.8\linewidth} \begin{minipage}{.24\linewidth} \centering \includegraphics[width=1\linewidth]{figs/corr_manyshotlinear.png} \end{minipage} \begin{minipage}{.24\linewidth} \centering \includegraphics[width=1\linewidth]{figs/corr_manyshotfinetune.png} \end{minipage} \begin{minipage}{.24\linewidth} \centering \includegraphics[width=1\linewidth]{figs/corr_fewshotkornblith.png} \end{minipage} \begin{minipage}{.24\linewidth} \centering \includegraphics[width=1\linewidth]{figs/corr_fewshotcdfsl.png} \end{minipage} \begin{minipage}{.24\linewidth} \centering \includegraphics[width=1\linewidth]{figs/corr_detectionfinetune.png} \end{minipage} \begin{minipage}{.24\linewidth} \centering \includegraphics[width=1\linewidth]{figs/corr_detectionfrozen.png} \end{minipage} \hspace{.05cm} \begin{minipage}{.24\linewidth} \centering \includegraphics[width=1\linewidth]{figs/corr_denseSNE.png} \end{minipage} \begin{minipage}{.24\linewidth} \centering \includegraphics[width=1\linewidth]{figs/corr_denseseg.png} \end{minipage} \end{minipage} \begin{minipage}{.15\linewidth} \centering \includegraphics[width=1\linewidth]{figs/lege.PNG} \end{minipage} \caption{\textbf{Correlation between MGMs and transfer performances.} We display the Pearson coefficient $\rho$ at the top left of each subplot. We observe that for most transfer learning tasks, there exists an MGM that correlates with the per-task transfer performance. We recover some intuitive results, e.g., for many-shot linear learning, higher invariance in the SSL model corresponds to better transfer learning capability. We also note that dense segmentation appears not to be correlated with the considered geometrical metrics of SSL models. The $p$-values for all results are $\leq 0.01$, except for the last plot, where the $p$-value is $0.34$.} \label{fig:corr} \end{figure}
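To make the feature-selection step concrete, the following minimal sketch pairs a feature-selecting regressor with the per-MGM correlation analysis reported above. The choice of a LASSO regressor is one natural instance of such a method; the regularization strength, variable names, and placeholder data are our own assumptions.
\begin{verbatim}
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

# Placeholders: MGM features per model and one task's accuracies.
X = np.random.rand(14, 26)
y = np.random.rand(14)

Xs = StandardScaler().fit_transform(X)
reg = Lasso(alpha=0.05).fit(Xs, y)      # feature-selecting regression
selected = np.flatnonzero(reg.coef_)    # MGMs retained for this task
for j in selected:
    rho, p = pearsonr(Xs[:, j], y)      # per-MGM correlation with accuracy
    print(f"MGM {j}: rho={rho:.2f}, p={p:.3f}")
\end{verbatim}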
\section{Conclusion} We show that the geometry of SSL models can be efficiently captured by leveraging graph-based metrics. In particular, our data-driven approach provides a way to compare SSL models that differ in architecture and training paradigm. Our analysis provides insights into the landscape of SSL algorithms, highlighting their geometrical similarities and differences. Further, the proposed geometrical metrics capture the transfer learning capability of SSL models. Our approach motivates the design of transfer-learning-specific SSL training paradigms that take into account the geometric requirements of the downstream task. \paragraph{Limitations} In this work, we performed our geometric analysis using the ImageNet validation dataset on SSL models with a ResNet50 backbone and one ViT architecture with approximately the same number of parameters. Our results are based on the feature-space embeddings of the models, and we do not foresee any issues when scaling up to larger models and datasets, or even different modalities, but this remains to be tested and is left open for future work. We also restricted our analysis to $5$ augmentation settings, which we considered to be of practical relevance, but further exploration is required to better understand the properties of the embedding manifold. Although we show empirically the correlation between geometrical properties and several downstream tasks, we did not explore the theoretical implications of having a certain geometry and its relationship to transfer generalization. We emphasize that our formulation, unlike the accuracy metrics previously studied \cite{ericsson2021well, kornblith2019better}, is amenable to theoretical study using spectral and graph-theoretical concepts. We note that our work can be leveraged in conjunction with previously developed approaches, such as \cite{rifai2011higher, cosentino2022deep}, to induce a particular embedding geometry depending on the transfer learning application at hand. Our focus in this work was to provide a big-picture tool for understanding features obtained with SSL models using geometry and graphs, and we hope that it leads to new ideas for understanding and training deep learning models. \small \bibliographystyle{ieeetr}
{ "timestamp": "2022-09-20T02:20:37", "yymm": "2209", "arxiv_id": "2209.08622", "language": "en", "url": "https://arxiv.org/abs/2209.08622" }
\section{Acknowledgement}
{ "timestamp": "2022-09-20T02:23:13", "yymm": "2209", "arxiv_id": "2209.08707", "language": "en", "url": "https://arxiv.org/abs/2209.08707" }
\section*{\textbf{Introduction}} An important branch of relative homological algebra was developed by M. Auslander and R. O. Buchweitz in \cite{ABtheory}. This theory, called \emph{Auslander-Buchweitz approximation theory (AB-theory, for short)}, consists of methods for obtaining right and left approximations from (co)generators of a full subcategory of an abelian category. Computing left and right approximations is a topic of interest in representation theory and homological algebra, and hence techniques for obtaining such approximations via AB-theory became a relevant subject of study. Later, a technique with an appealing relation to left and right approximations emerged: in \cite{GT, EJ1, EJ2} it is shown that it is possible to produce left and right approximations via \emph{complete cotorsion pairs}. Since then, several generalizations of this notion have been given; see for instance \cite{HMP, HMPcut}. Since the approximation of objects is a matter common to AB-theory and the theory of cotorsion pairs, it is quite natural to ask about the possible relationship between them. This question has generated, in recent years, works describing such an interplay. An example of this can be found in \cite{BMPS}, where Frobenius pairs and weak Auslander-Buchweitz contexts are defined and a relativization of cotorsion pairs, under the name of \emph{$\mathcal{S}$-cotorsion pairs}, is proposed. Moreover, it is also shown in \cite{BMPS} that AB-contexts and $\mathcal{S}$-cotorsion pairs are in one-to-one correspondence. Following this approach of relativizing cotorsion pairs, the notion of \emph{cotorsion pair cut along $\mathcal{S}$} is introduced in \cite{HMPcut}, and the correspondence theorem is extended to this new concept \cite[Thms. 4.6 \& 4.12]{HMPcut}. The scope of AB-theory and cotorsion pairs is not limited to the setting of abelian categories; it extends to more general settings such as triangulated categories, see \cite{MendozaSaenzVargasSouto2} and \cite{MendozaSaenzVargasSouto}. In \cite{Nakaoka1}, H. Nakaoka and Y. Palu introduce the notion of extriangulated categories, which provides a simultaneous generalization of exact and triangulated categories. AB-theory and cotorsion theory have likewise been generalized to this context, as witnessed by the remarkable works \cite{MendozaSaenzVargasSouto2, MDZtheoryAB, Hegorensteinobjects, TGone} and \cite{LNheartsoftwin, HZontherelation}, respectively. The first goal of this work is to give a \emph{cut version} of AB-theory in extriangulated categories. Here, ``cut version'' means that we adapt some existing notions of this theory by considering a relativization which depends on a class of objects $\mathcal{S}$ in an extriangulated category $\mathcal{C}$. The purpose is also to show that some outcomes already known in the absolute case can be recovered by taking $\mathcal{S}=\mathcal{C}$. With this in mind, the second goal is to generalize the notion of cotorsion pair in extriangulated categories through the concept of a \emph{cut $n$-cotorsion pair}. This new concept has the advantage of generalizing $n$-cotorsion pairs \cite[Def. 2.1]{HMP} and cut cotorsion pairs \cite[Def. 2.1]{HMPcut} at the same time. Hence, following the ideas in \cite{HMPcut} for abelian categories, it will be useful to show that there is an interplay with the cut notions developed in the first part.
\subsection*{Organization of the paper} In Section 1, we recall some well-known results on extriangulated categories, appearing mainly in \cite{Nakaoka1}, and establish the notation that will be used throughout the rest of this work. In Section 2, we carry over to extriangulated categories several results originally given in \cite{HMPcut} for abelian categories. As a first contribution, and for the purposes of this work, we develop in detail the proofs of some of them. Section 3 is devoted to presenting generalizations of left Frobenius pairs and Auslander-Buchweitz contexts, called \emph{cut Frobenius pairs} and \emph{cut Auslander-Buchweitz contexts} (Definitions~\ref{def: cut left Frobenius pair} and~\ref{def: cut left AB context}). We provide examples of these concepts and finish the section by proving that there is a one-to-one correspondence between them (Theorem~\ref{theo:correspondence_1}). It is worth mentioning that, even though the proofs were first given for abelian categories and can be easily adapted to extriangulated categories, in this section we rewrite them and give alternative proofs for some parts. In Section 4, we introduce the notion of a \emph{cut $n$-cotorsion pair} (Definition~\ref{def: CnCP ext}) in order to unify the concepts of cut cotorsion pairs and $n$-cotorsion pairs for extriangulated categories. We also provide several examples in abelian, triangulated and extriangulated categories (Examples~\ref{ex:carcaj},~\ref{ex: sistest} and~\ref{ex: sistest ext}). In particular, we show in Example~\ref{ex: GP, P2} that cut cotorsion pairs differ from cotorsion pairs in extriangulated categories whose extriangulated structure is induced by subcategories closed under extensions. Furthermore, in Proposition~\ref{pro: 1cot<->complete cot} we give necessary and sufficient conditions for the equality to hold. In the last part, we give an example of an $n$-cotorsion pair in an extriangulated category which is neither exact nor triangulated (Example~\ref{ex: ni exact ni triang}). In Section 5, we give results involving cut $1$-cotorsion pairs and Auslander-Buchweitz contexts in order to prove the main result of that section, Theorem~\ref{theo:correspondence_2}. As a consequence, by restricting the representatives of this correspondence, we also obtain Corollary~\ref{cor: leftFrob<->cotor pairs in Thick}. Finally, Section 6 is devoted to showing how the notions introduced in the previous sections can be applied in the case of triangulated categories and the so-called co-$t$-structures (Theorems~\ref{thm: t-str y cut cot} and~\ref{thm: co-t-st y cut cot}). \subsection*{Conventions} Throughout the paper, $\mathcal{C}$ denotes an additive category. Among the main examples considered in this work are the following: \begin{itemize} \item $\mathsf{Mod}(R)$, the category of left $R$-modules over an associative ring $R$ with identity. \item $\mathsf{mod}(\Lambda)$, the category of finitely generated left $\Lambda$-modules over an Artin algebra $\Lambda$. \end{itemize} We write $\mathcal{S} \subseteq \mathcal{C}$ to denote that $\mathcal{S}$ is a full subcategory of $\mathcal{C}$ or a class of objects in $\mathcal{C}$. All the subcategories of $\mathcal{C}$ are assumed to be full. Given $X, Y \in \mathcal{C}$, we denote by $\mathcal{C}(X,Y)$ the group of morphisms from $X$ to $Y$. Monomorphisms and epimorphisms in $\mathcal{C}$ are denoted by using the arrows $\rightarrowtail$ and $\twoheadrightarrow$, respectively.
In case $X$ and $Y$ are isomorphic, we write $X \simeq Y$. The notation $F \cong G$, on the other hand, is reserved to denote the existence of a natural isomorphism between functors $F$ and $G$. \section{\textbf{Preliminaries}}\label{sec:preliminaries} \subsection*{Extriangulated categories and terminology} We begin with some definitions and results related to extriangulated categories. For a detailed treatise on this matter, we refer to \cite{Nakaoka1, LNheartsoftwin, MDZtheoryAB}. We recall that $\mathcal{C}$ denotes an additive category. \begin{definition}\cite[Def. 2.1 \& Rmk. 2.2]{Nakaoka1} Let $\mathbb{E}:\mathcal{C}^{op}\times \mathcal{C}\to \mathrm{Ab}$ be an additive bifunctor. An $\mathbb{E}$-extension is a triplet $(A, \delta, C),$ where $A, C\in \mathcal{C}$ and $\delta\in \mathbb{E}(C, A).$ For any $a\in \mathcal{C}(A, A')$ and $c\in \mathcal{C}(C', C)$, we have $\mathbb{E}$-extensions $a\delta:=\mathbb{E}(C, a)(\delta)\in \mathbb{E}(C, A')$ and $\delta c:=\mathbb{E}(c, A)(\delta)\in \mathbb{E}(C', A)$. In this terminology, we have $(a\delta)c=a(\delta c)$ in $\mathbb{E}(C', A')$. \end{definition} \begin{definition}\cite[Def. 2.3]{Nakaoka1} Let $(A, \delta, C)$ and $(A', \delta', C')$ be $\mathbb{E}$-extensions. A morphism $(a, c): (A, \delta, C)\to (A', \delta', C')$ of $\mathbb{E}$-extensions is a pair of morphisms $a\in \mathcal{C}(A, A')$ and $c\in \mathcal{C}(C, C')$ in $\mathcal{C}$, satisfying the equality $a\delta=\delta' c.$ We simply denote it as $(a, c): \delta\to \delta'$. We obtain the category $\mathbb{E}$-$\mathrm{Ext}(\mathcal{C})$ of $\mathbb{E}$-extensions, with composition and identities naturally induced by those in $\mathcal{C}$. \end{definition} \begin{definition}\cite[Def. 2.5]{Nakaoka1} For any $A, C\in \mathcal{C}$, the zero element $0\in \mathbb{E}(C, A)$ is called the split $\mathbb{E}$-extension. \end{definition} \begin{definition}\cite[Def. 2.6]{Nakaoka1} Let $\delta=(A, \delta, C)$ and $\delta'=(A', \delta', C')$ be $\mathbb{E}$-extensions. Let $C\mathop{\to}\limits^{\iota_{C}} C\oplus C'\mathop{\leftarrow}\limits^{\iota_{C'}} C'$ and $A\mathop{\leftarrow}\limits^{p_{A}} A\oplus A'\mathop{\to}\limits^{p_{A'}}A'$ be the coproduct and product in $\mathcal{C}$, respectively. We remark that, by the biadditivity of $\mathbb{E}$, we have a natural isomorphism $$\mathbb{E}(C\oplus C', A\oplus A')\cong \mathbb{E}(C, A)\oplus \mathbb{E}(C, A')\oplus \mathbb{E}(C', A)\oplus \mathbb{E}(C', A').$$ Let $\delta\oplus \delta'\in \mathbb{E}(C\oplus C', A\oplus A')$ be the element corresponding to $(\delta, 0, 0, \delta')$ through this isomorphism. If $A=A'$ and $C=C'$, then the sum $\delta+\delta'\in \mathbb{E}(C, A)$ of $\delta, \delta'\in \mathbb{E}(C, A)$ is obtained by $$\delta+\delta'=\nabla_{A}(\delta\oplus \delta')\Delta_{C}$$ where $\Delta_{C}=\left( \begin{array}{c} 1_C\\1_C \end{array} \right) : C\to C\oplus C$ and $\nabla_{A}=(1_A\, 1_A): A\oplus A\to A$. \end{definition} \begin{definition}\cite[Def. 2.7]{Nakaoka1} Let $A, C\in \mathcal{C}$ be any pair of objects.
Two sequences of morphisms $A\mathop{\to}\limits^{x} B \mathop{\to}\limits^{y} C$ and $A\mathop{\to}\limits^{x'} B'\mathop{\to}\limits^{y'} C$ in $\mathcal{C}$ are said to be equivalent if there exists an isomorphism $b\in \mathcal{C}(B, B')$ which makes the following diagram commutative \[ \xymatrix@R=4mm{ & B\ar[dr]^{y}\ar[dd]^{b}_{\wr} &\\ A\ar[ur]^{x}\ar[dr]_{x'} & & C.\\ & B'\ar[ur]_{y'} & } \] We denote the equivalence class of $A\mathop{\to}\limits^{x} B\mathop{\to}\limits^{y} C$ by $[A\mathop{\to}\limits^{x} B\mathop{\to}\limits^{y} C]$. \end{definition} \begin{definition}\cite[Def. 2.8]{Nakaoka1} For any two classes $[A\mathop{\to}\limits^{x} B\mathop{\to}\limits^{y} C]$ and $[A'\mathop{\to}\limits^{x'} B'\mathop{\to}\limits^{y'} C']$, we set $[A\mathop{\to}\limits^{x} B\mathop{\to}\limits^{y} C]\oplus [A'\mathop{\to}\limits^{x'} B'\mathop{\to}\limits^{y'} C']:=[A\oplus A'\mathop{\to}\limits^{x\oplus x'} B\oplus B'\mathop{\to}\limits^{y\oplus y'} C\oplus C'].$ \end{definition} \begin{definition}\cite[Def. 2.9]{Nakaoka1}\label{def 2.9} Let $\mathfrak{s}$ be a correspondence which associates to each $\mathbb{E}$-extension $\delta\in \mathbb{E}(C, A)$ an equivalence class $\mathfrak{s}(\delta)= [A\mathop{\to}\limits^{x} B\mathop{\to}\limits^{y} C]$. This $\mathfrak{s}$ is called a realization of $\mathbb{E}$ if it satisfies the following condition: $(*)$ Let $\delta\in \mathbb{E}(C, A)$ and $\delta'\in \mathbb{E}(C', A')$ be any pair of $\mathbb{E}$-extensions, with $\mathfrak{s}(\delta)=[A\mathop{\to}\limits^{x} B\mathop{\to}\limits^{y} C]$ and $\mathfrak{s}(\delta')=[A'\mathop{\to}\limits^{x'} B'\mathop{\to}\limits^{y'} C']$. Then, for any morphism $(a, c)\in \mathbb{E}\mbox{-}\mathrm{Ext}(\mathcal{C})(\delta, \delta')$, there exists $b\in \mathcal{C}(B, B')$ which makes the following diagram commutative \begin{equation}\label{eq: diag1} \xymatrix{ A\ar[r]^{x}\ar[d]_{a} & B\ar[r]^{y}\ar[d]^{b} & C\ar[d]^{c}\\ A'\ar[r]_{x'} & B'\ar[r]_{y'} & C'. } \end{equation} We say that the sequence $A\mathop{\to}\limits^{x} B\mathop{\to}\limits^{y} C$ realizes $\delta$, whenever it satisfies $\mathfrak{s}(\delta)=[A\mathop{\to}\limits^{x} B\mathop{\to}\limits^{y} C]$. We remark that this condition does not depend on the choices of the representatives of the equivalence classes. In the above situation, we say that \eqref{eq: diag1} (or the triplet $(a, b, c)$) realizes $(a, c)$. \end{definition} \begin{definition}\cite[Def. 2.10]{Nakaoka1} A realization $\mathfrak{s}$ of $\mathbb{E}$ is additive if it satisfies the following two conditions: \begin{enumerate} \item $\mathfrak{s}(0)=\left[A\mathop{\to}\limits^{\tiny{\left( \begin{array}{c} 1\\ 0 \end{array} \right)}} A\oplus C\mathop{\to}\limits^{(0\, 1)} C \right]$ for any $A, C\in \mathcal{C};$ \item $\mathfrak{s}(\delta\oplus \delta')=\mathfrak{s}(\delta)\oplus \mathfrak{s}(\delta')$, for any $\mathbb{E}$-extensions $\delta$ and $\delta'$. \end{enumerate} \end{definition} \begin{definition}\cite[Def. 2.12]{Nakaoka1} The pair $(\mathbb{E}, \mathfrak{s})$ is an external triangulation of $\mathcal{C}$ if it satisfies the following conditions. \begin{enumerate} \item[(ET1)] $\mathbb{E}: \mathcal{C}^{op}\times \mathcal{C}\to \mathrm{Ab}$ is an additive bifunctor. \item[(ET2)] $\mathfrak{s}$ is an additive realization of $\mathbb{E}$. 
\item[(ET3)] Let $\delta\in \mathbb{E}(C, A)$ and $\delta'\in \mathbb{E}(C', A')$ be any pair of $\mathbb{E}$-extensions, realized as $\mathfrak{s}(\delta)=[A\mathop{\to}\limits^{x} B\mathop{\to}\limits^{y} C]$ and $\mathfrak{s}(\delta')=[A'\mathop{\to}\limits^{x'} B'\mathop{\to}\limits^{y'} C']$. For any commutative square in $\mathcal{C}$ \[ \xymatrix{ A\ar[r]^{x}\ar[d]_{a} & B\ar[r]^{y}\ar[d]^{b} & C\\ A'\ar[r]_{x'} & B'\ar[r]_{y'} & C', } \] there exists a morphism $(a, c): \delta\to \delta'$ which is realized by $(a, b, c)$. \item[(ET3)$^{op}$] Let $\delta\in \mathbb{E}(C, A)$ and $\delta'\in \mathbb{E}(C', A')$ be any pair of $\mathbb{E}$-extensions, realized by $A\mathop{\to}\limits^{x} B\mathop{\to}\limits^{y} C$ and $A'\mathop{\to}\limits^{x'} B'\mathop{\to}\limits^{y'} C'$, respectively. For any commutative square in $\mathcal{C}$ \[ \xymatrix{ A\ar[r]^{x} & B\ar[r]^{y}\ar[d]_{b} & C\ar[d]^{c}\\ A'\ar[r]_{x'} & B'\ar[r]_{y'} & C' } \] there exists a morphism $(a, c): \delta\to \delta'$ which is realized by $(a, b, c)$. \item[(ET4)] Let $(A, \delta, D)$ and $(B, \delta', F)$ be $\mathbb{E}$-extensions, respectively realized by $A\mathop{\to}\limits^{f} B\mathop{\to}\limits^{f'} D$ and $B\mathop{\to}\limits^{g} C\mathop{\to}\limits^{g'} F$. Then there exist an object $E\in \mathcal{C}$, a commutative diagram \[ \xymatrix{ A\ar@{=}[d]\ar[r]^{f} & B\ar[r]^{f'}\ar[d]_{g} & D\ar[d]^{d}\\ A\ar[r]_{h} & C\ar[r]_{h'}\ar[d]_{g'} & E\ar[d]^{e}\\ & F\ar@{=}[r] & F } \] in $\mathcal{C}$, and an $\mathbb{E}$-extension $\delta''\in \mathbb{E}(E, A)$ realized by $A\mathop{\to}\limits^{h} C \mathop{\to}\limits^{h'} E$, which satisfy $\mathfrak{s}(f'\delta ')=[D\mathop{\to}\limits^{d} E\mathop{\to}\limits^{e} F]$, $\delta''d=\delta$ and $f\delta''=\delta'e$. \item[(ET4)$^{op}$] Let $(D, \delta, B)$ and $(F, \delta', C)$ be $\mathbb{E}$-extensions realized by $D\mathop{\to}\limits^{f'} A\mathop{\to}\limits^{f} B$ and $F\mathop{\to}\limits^{g'} B\mathop{\to}\limits^{g} C$, respectively. Then there exist an object $E\in \mathcal{C}$, a commutative diagram \[ \xymatrix{ D\ar@{=}[d]\ar[r]^{d} & E\ar[r]^{e}\ar[d]_{h'} & F\ar[d]^{g'}\\ D\ar[r]_{f'} & A\ar[r]_{f}\ar[d]_{h} & B\ar[d]^{g}\\ & C\ar@{=}[r] & C } \] in $\mathcal{C}$ and an $\mathbb{E}$-extension $\delta''\in \mathbb{E}(C, E)$ realized by $E\mathop{\to}\limits^{h'} A\mathop{\to}\limits^{h} C$ which satisfy $\mathfrak{s}(\delta g')=[D\mathop{\to}\limits^{d} E\mathop{\to}\limits^{e} F]$, $\delta'=e\delta''$ and $d\delta=\delta''g$. \end{enumerate} If the above conditions hold true, we call $\mathfrak{s}$ an $\mathbb{E}$-triangulation of $\mathcal{C}$, and call the triplet $(\mathcal{C}, \mathbb{E}, \mathfrak{s})$ an externally triangulated category, or for short, extriangulated category. Sometimes, for the sake of simplicity, we only write $\mathcal{C}$ instead of $(\mathcal{C}, \mathbb{E}, \mathfrak{s}).$ \end{definition} \begin{definition}\cite[Def. 2.15]{Nakaoka1} Let $(\mathcal{C}, \mathbb{E}, \mathfrak{s})$ be a triplet satisfying (ET1) and (ET2). \begin{enumerate} \item A sequence $A\mathop{\to}\limits^{x} B\mathop{\to}\limits^{y} C$ is called a conflation if it realizes some $\mathbb{E}$-extension $\delta\in \mathbb{E}(C, A)$. \item A morphism $f\in\mathcal{C}(A, B)$ is called an inflation if it admits some conflation $A\mathop{\to}\limits^{f} B\to C$. \item A morphism $f\in\mathcal{C}(A,B)$ is called a deflation if it admits some conflation $K\to A\mathop{\to}\limits^{f} B$. \end{enumerate} \end{definition} \begin{definition}\cite[Def. 
2.17]{Nakaoka1} Let $\mathcal{C}$ be an extriangulated category. A subcategory $\mathcal{D}\subseteq \mathcal{C}$ is \emph{closed under extensions} if, for any conflation $A\to B\to C$ with $A, C\in \mathcal{D}$, we have $B\in \mathcal{D}$. \end{definition} \begin{definition}\cite[Def. 2.19]{Nakaoka1} Let $(\mathcal{C}, \mathbb{E}, \mathfrak{s})$ be a triplet satisfying (ET1) and (ET2). \begin{enumerate} \item If $A\mathop{\to}\limits^{x} B\mathop{\to}\limits^{y} C$ realizes $\delta\in \mathbb{E}(C, A)$, we call the pair $(A\mathop{\to}\limits^{x} B\mathop{\to}\limits^{y} C, \delta)$ an $\mathbb{E}$-triangle, and we write it in the following way $$ A\mathop{\to}\limits^{x} B\mathop{\to}\limits^{y} C\mathop{\dashrightarrow}\limits^{\delta} $$ \item Let $A\mathop{\to}\limits^{x} B\mathop{\to}\limits^{y} C\mathop{\dashrightarrow}\limits^{\delta}$ and $A'\mathop{\to}\limits^{x'} B'\mathop{\to}\limits^{y'} C'\mathop{\dashrightarrow}\limits^{\delta'}$ be $\mathbb{E}$-triangles. If a triplet $(a, b, c)$ realizes $(a, c): \delta\to \delta'$ as in Definition~\ref{def 2.9}, then we write it as \[ \xymatrix{ A\ar[r]^{x}\ar[d]_{a} & B\ar[r]^{y}\ar[d]^{b} & C\ar@{-->}[r]^{\delta}\ar[d]^{c} &\\ A'\ar[r]_{x'} & B'\ar[r]_{y'} & C'\ar@{-->}[r]_{\delta'} & } \] and we call $(a, b, c)$ a morphism of $\mathbb{E}$-triangles. \end{enumerate} \end{definition} \begin{definition}\cite[Def. 3.1]{Nakaoka1} Let $\mathbb{E}:\mathcal{C}^{op}\times\mathcal{C}\to \mathrm{Ab}$ be an additive bifunctor. By Yoneda's lemma, any $\mathbb{E}$-extension $\delta\in \mathbb{E}(C, A)$ induces natural transformations $\delta_{\#}: \mathcal{C}(-,C)\rightarrow \mathbb{E}(-,A)$ and $\delta^{\#}: \mathcal{C}(A, -)\rightarrow \mathbb{E}(C,-)$. For any $X\in \mathcal{C}$, these $(\delta_{\#})_{X}$ and $\delta^{\#}_{X}$ are given as follows \begin{enumerate} \item $(\delta_{\#})_{X}: \mathcal{C}(X, C)\to \mathbb{E}(X, A); f\mapsto \delta f$. \item $\delta^{\#}_{X}: \mathcal{C}(A, X)\to \mathbb{E}(C, X); g\mapsto g\delta$. \end{enumerate} For brevity, we denote $(\delta_{\#})_{X}(f)$ and $\delta^{\#}_{X}(g)$ by $\delta_{\#}f$ and $\delta^{\#}g$, respectively. \end{definition} \begin{corollary}\cite[Cor. 3.12]{Nakaoka1} Let $\mathcal{C}$ be an extriangulated category. For any $\mathbb{E}$-triangle $A\mathop{\to}\limits^{x} B\mathop{\to}\limits^{y} C \mathop{\dashrightarrow}\limits^{\delta}$, we have the following exact sequences of additive functors $$\mathcal{C}(C,-)\mathop{\longrightarrow}\limits^{\mathcal{C}(y,-)} \mathcal{C}(B,-) \mathop{\longrightarrow}\limits^{\mathcal{C}(x,-)} \mathcal{C}(A,-)\mathop{\longrightarrow}\limits^{\delta^{\#}} \mathbb{E}(C,-) \mathop{\longrightarrow}\limits^{\mathbb{E}(y,-)} \mathbb{E}(B,-) \mathop{\longrightarrow}\limits^{\mathbb{E}(x,-)} \mathbb{E}(A,-),$$ $$\mathcal{C}(-,A)\mathop{\longrightarrow}\limits^{\mathcal{C}(-,x)} \mathcal{C}(-,B) \mathop{\longrightarrow}\limits^{\mathcal{C}(-,y)} \mathcal{C}(-,C)\mathop{\longrightarrow}\limits^{\delta_{\#}} \mathbb{E}(-,A) \mathop{\longrightarrow}\limits^{\mathbb{E}(-,x)} \mathbb{E}(-,B) \mathop{\longrightarrow}\limits^{\mathbb{E}(-,y)} \mathbb{E}(-,C).$$ \end{corollary} \begin{proposition}\cite[Prop. 3.15]{Nakaoka1}\label{Nakaoka 3.15} Let $\mathcal{C}$ be an extriangulated category. The following (and its dual) holds true. Let $C\in \mathcal{C}$ and $A_1\mathop{\to}\limits^{x_1} B_{1}\mathop{\to}\limits^{y_1} C\mathop{\dashrightarrow}\limits^{\delta_1}$, $A_2\mathop{\to}\limits^{x_2} B_{2}\mathop{\to}\limits^{y_2} C\mathop{\dashrightarrow}\limits^{\delta_2}$ be any pair of $\mathbb{E}$-triangles.
Then there exist an object $M\in \mathcal{C}$ and a commutative diagram in $\mathcal{C}$ \[ \xymatrix{ & A_2\ar@{=}[r]\ar[d]_{m_2} & A_2\ar[d]^{x_2}\\ A_1\ar[r]^{m_1}\ar@{=}[d] & M\ar[r]^{e_1}\ar[d]_{e_2} & B_2\ar[d]^{y_2}\\ A_1\ar[r]_{x_1} & B_1\ar[r]_{y_1} & C } \] which satisfy $\mathfrak{s}(\delta_1 y_2)=[A_1\mathop{\to}\limits^{m_1} M\mathop{\to}\limits^{e_1} B_2]$, $\mathfrak{s}(\delta_2 y_1)=[A_2\mathop{\to}\limits^{m_2} M\mathop{\to}\limits^{e_2} B_1]$ and $m_1\delta_1+m_2\delta_2=0$. \end{proposition} \subsection*{(Co)Resolution dimensions in extriangulated categories} We recall some definitions in Auslander-Buchweitz theory in extriangulated categories. These concepts originally appeared in \cite{ABtheory} and have been adapted to several contexts as in \cite{MendozaSaenzVargasSouto2, MDZtheoryAB}, for instance. \begin{definition} Let $\mathcal{C}$ be an extriangulated category. An {\rm acyclic $\mathbb{E}$-triangle sequence} is a pair $(C_{\bullet}, Z_\bullet(C_{\bullet})),$ where $C_{\bullet}$ is a sequence $\cdots \to C_{n+1}\mathop{\to}\limits^{d_{n+1}} C_{n}\mathop{\to}\limits^{d_{n}} C_{n-1}\to \cdots$ in $\mathcal{C}$ and $Z_\bullet(C_{\bullet})$ is a family of $\mathbb{E}$-triangles $\{Z_{n+1}(C_{\bullet})\mathop{\to}\limits^{f_{n}} C_{n} \mathop{\to}\limits^{g_{n}} Z_{n}(C_{\bullet})\dashrightarrow\}_{n\in\mathbb{Z}}$ satisfying that $f_{n}g_{n+1}=d_{n+1}$ $\forall\,n\in\mathbb{Z}.$ Notice that $C_{\bullet}$ is a chain complex and each $Z_{n}(C_{\bullet})$ is called the {\rm $n$-th $\mathbb{E}$-cycle} of $C_\bullet$ in $\mathcal{C}.$ For the sake of simplicity, an acyclic $\mathbb{E}$-triangle sequence $(C_{\bullet}, Z_\bullet(C_{\bullet}))$ will be denoted by $C_{\bullet}.$ \end{definition} Let $\mathcal{X}\subseteq \mathcal{C}$ and $C\in \mathcal{C}$. An \emph{$\mathcal{X}$-resolution of $C$} is an acyclic $\mathbb{E}$-triangle sequence of the form $ \cdots \to X_k \to \cdots \to X_1 \to X_0 \to C\to 0 $ where $X_k\in \mathcal{X},$ for every $k\geq 0.$ If $X_k=0$ for every $k>n$, we shall say that the previous sequence is a \emph{finite $\mathcal{X}$-resolution of $C$ of length $n$}. The \emph{resolution dimension of $C$ with respect to $\mathcal{X}$} (or the \emph{$\mathcal{X}$-resolution dimension of $C$}, for short), denoted $\mathrm{resdim}_{\mathcal{X}}(C)$, is the smallest $n \geq 0$ such that $C$ admits a finite $\mathcal{X}$-resolution of length $n$. If such $n$ does not exist, we set $\mathrm{resdim}_{\mathcal{X}}(C) := \infty$. 
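To fix ideas, we record a standard special case (stated here only for orientation, and not taken from the references above): if $\mathcal{C}$ is an abelian category equipped with its canonical extriangulated structure, then acyclic $\mathbb{E}$-triangle sequences are precisely the exact sequences. In particular, for $\mathcal{C}=\mathsf{Mod}(R)$ and $\mathcal{X}$ the class of projective $R$-modules, an $\mathcal{X}$-resolution of $C$ is an ordinary projective resolution, and hence $$\mathrm{resdim}_{\mathcal{X}}(C)=\mathrm{pd}(C),$$ the usual projective dimension of $C$.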
Dually, an \emph{$\mathcal{X}$-coresolution of $C$} is an acyclic $\mathbb{E}$-triangle sequence of the form $0\to C\to X_0\to X_{-1}\to\cdots\to X_{-k}\to \cdots$ where $X_{-k}\in \mathcal{X},$ for every $k\geq 0.$ If $X_{-k}=0$ for every $k>n$, we shall say that the previous sequence is a \emph{finite $\mathcal{X}$-coresolution of $C$ of length $n$}. The smallest $n \geq 0$ such that $C$ admits a finite $\mathcal{X}$-coresolution of length $n$ is denoted by $\mathrm{coresdim}_{\mathcal{X}}(C)$ and called the \emph{$\mathcal{X}$-coresolution dimension} of $C.$ Related to these two homological dimensions, we have the following classes of objects in $\mathcal{C}$: \begin{align*} \mathcal{X}^\wedge_n & := \{ C \in \mathcal{C} \text{ : } \mathrm{resdim}_{\mathcal{X}}(C) \leq n \}, & & \text{and} & \mathcal{X}^\wedge & := \bigcup_{n \geq 0} \mathcal{X}^\wedge_n, \\ \mathcal{X}^\vee_n & := \{ C \in \mathcal{C} \text{ : } \mathrm{coresdim}_{\mathcal{X}}(C) \leq n \}, & & \text{and} & \mathcal{X}^\vee & := \bigcup_{n \geq 0} \mathcal{X}^\vee_n. \end{align*} Given a class $\mathcal{Y}$ of objects in $\mathcal{C}$, the \emph{$\mathcal{X}$-resolution dimension of $\mathcal{Y}$} is defined as $$\mathrm{resdim}_{\mathcal{X}}(\mathcal{Y}):=\sup\{\mathrm{resdim}_{\mathcal{X}}(Y) : Y\in \mathcal{Y}\}.$$ The \emph{$\mathcal{X}$-coresolution dimension of $\mathcal{Y}$} is defined similarly and denoted by $\mathrm{coresdim}_{\mathcal{X}}(\mathcal{Y})$. \subsection*{Higher extensions} Let $\mathcal{C}$ be an extriangulated category. Following \cite{Nakaoka1}, we recall that an object $P\in \mathcal{C}$ is $\mathbb{E}$-projective if for any $\mathbb{E}$-triangle $A\mathop{\to}\limits^{x} B \mathop{\to}\limits^{y} C\mathop{\dashrightarrow}\limits^{\delta}$ the map $$\mathcal{C}(P,y): \mathcal{C}(P,B)\longrightarrow \mathcal{C}(P,C)$$ is surjective. We denote by $\mathcal{P}_{\mathbb{E}}(\mathcal{C})$ the class of $\mathbb{E}$-projective objects in $\mathcal{C}$. Dually, the class of $\mathbb{E}$-injective objects in $\mathcal{C}$ is denoted by $\mathcal{I}_{\mathbb{E}}(\mathcal{C})$. We say that $\mathcal{C}$ has enough $\mathbb{E}$-projective objects if for any object $C\in \mathcal{C}$, there exists an $\mathbb{E}$-triangle $A\to P\to C\dashrightarrow$ satisfying $P\in \mathcal{P}_{\mathbb{E}}(\mathcal{C})$. Dually, we can define that $\mathcal{C}$ has enough $\mathbb{E}$-injective objects.\\ Let $\mathcal{C}$ be an extriangulated category and $\mathcal{X}, \mathcal{Y}\subseteq \mathcal{C}$. Following \cite[Def. 4.2]{Nakaoka1}, we recall that: \begin{enumerate} \item[$\bullet$] $C\in \mathcal{C}$ belongs to $\mathrm{Cone}(\mathcal{X}, \mathcal{Y})$ if $C$ admits a conflation $X\to Y\to C$ with $X\in \mathcal{X}, Y\in \mathcal{Y}$. \item[$\bullet$] $C\in \mathcal{C}$ belongs to $\mathrm{CoCone}(\mathcal{X}, \mathcal{Y})$ if $C$ admits a conflation $C\to X\to Y$ with $X\in \mathcal{X}, Y\in \mathcal{Y}$. \item[$\bullet$] $\mathcal{X}$ is \emph{closed under cones} if $\mathrm{Cone}(\mathcal{X}, \mathcal{X})\subseteq \mathcal{X}$. Dually, $\mathcal{X}$ is \emph{closed under cocones} if $\mathrm{CoCone}(\mathcal{X}, \mathcal{X})\subseteq \mathcal{X}$. \end{enumerate} We set $\Omega \mathcal{X}:=\mathrm{CoCone}(\mathcal{P}_{\mathbb{E}}(\mathcal{C}), \mathcal{X})$, that is, $\Omega \mathcal{X}$ is the subclass of $\mathcal{C}$ consisting of the objects $\Omega X$ admitting an $\mathbb{E}$-triangle $\Omega X\to P\to X\dashrightarrow$ with $P\in\mathcal{P}_{\mathbb{E}}(\mathcal{C})$ and $X\in \mathcal{X}$.
We call $\Omega \mathcal{X}$ the syzygy of $\mathcal{X}$. Dually, the cosyzygy of $\mathcal{X}$ is $\Sigma\mathcal{X}:=\mathrm{Cone}(\mathcal{X}, \mathcal{I}_{\mathbb{E}}(\mathcal{C}))$. We set $\Omega^{0}\mathcal{X}:=\mathcal{X}$, and define $\Omega^{k}\mathcal{X}$ for $k>0$ inductively by $\Omega^{k}\mathcal{X}:=\Omega(\Omega^{k-1}\mathcal{X})=\mathrm{CoCone}(\mathcal{P}_{\mathbb{E}}(\mathcal{C}), \Omega^{k-1}\mathcal{X})$. We call $\Omega^{k}\mathcal{X}$ the $k$-th syzygy of $\mathcal{X}$. Dually, the $k$-th cosyzygy is $\Sigma^{k}\mathcal{X}:=\mathrm{Cone}(\Sigma^{k-1}\mathcal{X}, \mathcal{I}_{\mathbb{E}}(\mathcal{C}))$ for $k>0$, with $\Sigma^{0}\mathcal{X}:=\mathcal{X}$ (see \cite[Def. 4.2 \& Prop. 4.3]{LNheartsoftwin} for more details). Let $\mathcal{C}$ be an extriangulated category with enough $\mathbb{E}$-projectives and $\mathbb{E}$-injectives. In \cite{LNheartsoftwin}, it is shown that $\mathbb{E}(X, \Sigma^{k}Y)\cong \mathbb{E}(\Omega^{k}X, Y)$ for $k\geq 0.$ Thus, the higher extension groups are defined as $\mathbb{E}^{k+1}(X, Y):=\mathbb{E}(X, \Sigma^{k}Y)\cong \mathbb{E}(\Omega^{k}X, Y),$ for $k\geq 0$. Moreover, the following result is also proved. \begin{lemma}\cite[Prop. 5.2]{LNheartsoftwin} \label{longExSeq} Let $\mathcal{C}$ be an extriangulated category with enough $\mathbb{E}$-projectives and $\mathbb{E}$-injectives and $A\to B\to C\dashrightarrow$ be an $\mathbb{E}$-triangle in $\mathcal{C}$. Then, for any object $X\in \mathcal{C}$ and $k\geq 1$, we have the following exact sequences \[(1)\; \cdots \to \mathbb{E}^{k}(X, A)\to \mathbb{E}^{k}(X, B)\to \mathbb{E}^{k}(X, C)\to \mathbb{E}^{k+1}(X, A)\to \mathbb{E}^{k+1}(X, B)\to \cdots, \] \[(2)\; \cdots \to \mathbb{E}^{k}(C, X)\to \mathbb{E}^{k}(B, X)\to \mathbb{E}^{k}(A, X)\to \mathbb{E}^{k+1}(C, X)\to \mathbb{E}^{k+1}(B, X)\to \cdots \] of abelian groups. \end{lemma} \begin{remark}\label{RkAbTri} Let $\mathcal{C}$ be an extriangulated category. In the case that $\mathcal{C}$ is either abelian or triangulated, we do not need enough $\mathbb{E}$-projectives and $\mathbb{E}$-injectives to have higher extension groups. This is because, in that case, these extension groups $\mathbb{E}^i(-,-) \colon \mathcal{C}^{\rm op} \times \mathcal{C} \to Ab$, with $i \geq 1,$ are already defined. Indeed, in the abelian case, take the Yoneda extension functor $\mathrm{Ext}^i_{\mathcal{C}}(-,-)$ (see \cite{Sieg}), and in the triangulated case take $\mathrm{Hom}_{\mathcal{C}}(-,-[i]),$ where $[1]:\mathcal{C}\to \mathcal{C}$ is the shift functor. Moreover, in such cases, Lemma \ref{longExSeq} holds true. \end{remark} \begin{definition} Let $\mathcal{C}$ be an extriangulated category. We say that $\mathcal{C}$ is an {\bf extriangulated category with higher extension groups} if one of the following two conditions holds true: (i) $\mathcal{C}$ is either abelian or triangulated, or (ii) $\mathcal{C}$ has enough $\mathbb{E}$-projectives and $\mathbb{E}$-injectives. \end{definition} Let $\mathcal{C}$ be an extriangulated category with higher extension groups. We fix the following notation for $\mathcal{X}, \mathcal{Y}\subseteq \mathcal{C}$ and $k \geq 1$. \begin{itemize} \item $\mathbb{E}^{k}(\mathcal{X, Y})=0$ if $\mathbb{E}^{k}(X, Y)=0$ for every $X\in \mathcal{X}$ and $Y\in \mathcal{Y}$. When $\mathcal{X}=\{M\}$ or $\mathcal{Y}=\{N\}$, we shall write $\mathbb{E}^{k}(M, \mathcal{Y})=0$ and $\mathbb{E}^{k}(\mathcal{X}, N)=0$, respectively.
\item $\mathbb{E}^{\leq k}(\mathcal{X}, \mathcal{Y})=0$ if $\mathbb{E}^{j}(\mathcal{X}, \mathcal{Y})=0$ for every $1\leq j\leq k$. \item $\mathbb{E}^{\geq k}(\mathcal{X}, \mathcal{Y})=0$ if $\mathbb{E}^{j}(\mathcal{X}, \mathcal{Y})=0$ for every $j\geq k$. \end{itemize} Recall that the \emph{right $k$-th orthogonal complement} and the \emph{right orthogonal complement of $\mathcal{X}$} are defined, respectively, by \begin{align*} \mathcal{X}^{\perp_k} := \{ N \in \mathcal{C} \mbox{ : } \mathbb{E}^k(\mathcal{X},N) = 0 \}\;\text{ and }\; \mathcal{X}^\perp := \bigcap_{k \geq 1} \mathcal{X}^{\perp_k}=\{N\in \mathcal{C} : \mathbb{E}^{\geq 1}(\mathcal{X}, N)=0\}. \end{align*} Dually, we have the \emph{left $k$-th} and the \emph{left orthogonal complements ${}^{\perp_k}\mathcal{X}$ and ${}^{\perp}\mathcal{X}$ of $\mathcal{X}$}, respectively. \subsection*{Relative homological dimensions} Let $\mathcal{C}$ be an extriangulated category with higher extension groups. Let $\mathcal{X}, \mathcal{Y} \subseteq \mathcal{C}$ and $C \in \mathcal{C}.$ Following \cite{ABtheory, BMPS, MendozaSaenzVargasSouto2, MDZtheoryAB}, the \emph{relative projective dimension of $C$} is $\mathrm{pd}_{\mathcal{X}}(C) := \min \{ m \geq 0 {\rm \ : \ } \mathbb{E}^{\geq m+1}(C,\mathcal{X}) = 0 \}$ and $\mathrm{pd}_{\mathcal{X}}(\mathcal{Y}) := \sup\{ \mathrm{pd}_{\mathcal{X}}(Y) \text{ : } Y \in \mathcal{Y} \}.$ Dually, we have the \emph{relative injective dimension} $\mathrm{id}_{\mathcal{X}}(C)$ of $C$ and $\mathrm{id}_{\mathcal{X}}(\mathcal{Y}).$ Notice that $\mathrm{pd}_{\mathcal{X}}(\mathcal{Y})=\mathrm{id}_{\mathcal{Y}}(\mathcal{X}).$ If $\mathcal{X}=\mathcal{C},$ we just write $\mathrm{pd}(C),$ $\mathrm{pd}(\mathcal{Y}),$ $\mathrm{id}(C)$ and $\mathrm{id}(\mathcal{Y}),$ for the (absolute) projective and injective dimensions. \subsection*{Relative (co)generators} Let $\mathcal{C}$ be an extriangulated category and $\omega, \mathcal{X}\subseteq \mathcal{C}$. Following \cite{ABtheory, BMPS, MendozaSaenzVargasSouto2, MDZtheoryAB}, we say that $\omega$ is a \emph{relative cogenerator in $\mathcal{X}$}, if $\omega\subseteq \mathcal{X}$ and for each object $X\in \mathcal{X}$, there exists an $\mathbb{E}$-triangle $X\to W\to X'\dashrightarrow$ with $W\in \omega$ and $X'\in \mathcal{X}$. Assume now that $\mathcal{C}$ has higher extension groups. Then, the class $\omega$ is \emph{$\mathcal{X}$-injective} if $\mathrm{id}_{\mathcal{X}}(\omega)=0$. Dually, we have the notions of \emph{relative generator} and \emph{$\mathcal{X}$-projective}. \subsection*{Thick subcategories} Let $\mathcal{C}$ be an extriangulated category. A subcategory is \emph{left thick} if it is closed under extensions, direct summands in $\mathcal{C}$ and cocones. \emph{Right thick subcategories} are defined dually. Finally, a subcategory is \emph{thick} if it is both left and right thick. For $\mathcal{X}\subseteq \mathcal{C}$, we shall denote by $\mathsf{Thick}(\mathcal{X})$ the smallest thick subcategory of $\mathcal{C}$ containing $\mathcal{X}$. \subsection*{Cotorsion pairs} Let $\mathcal{C}$ be an extriangulated category and $\mathcal{U, V}\subseteq \mathcal{C}$ be closed under direct summands in $\mathcal{C}.$ Following \cite[Def. 4.1]{Nakaoka1}, we say that the pair $(\mathcal{U, V})$ is a \emph{cotorsion pair in $\mathcal{C}$} if it satisfies the following conditions. \begin{itemize} \item[(CP1)] $\mathbb{E}(\mathcal{U, V})=0$.
\item[(CP2)] For any $C\in \mathcal{C}$, there exist conflations $$V\to U\to C \quad \mbox{ and } \quad C\to V'\to U'$$ satisfying $U, U' \in \mathcal{U}$ and $V, V' \in \mathcal{V}$. \end{itemize} If $(\mathcal{U, V})$ is a cotorsion pair in $\mathcal{C},$ then $\mathcal{U}={}^{\perp_1}\mathcal{V}$ and $\mathcal{V}=\mathcal{U}^{\perp_1}.$ In particular, $\mathcal{U}$ and $\mathcal{V}$ are additive subcategories of $\mathcal{C}.$ \section{\textbf{Some technical results}} Although abelian and extriangulated categories share homological tools and constructions, it is not always clear how these mechanisms can be carried over to different contexts. In this section we begin by stating results from \cite{HMPcut} which have been adapted to extriangulated categories. For the rest of this work, $\mathcal{C}$ denotes an extriangulated category. The following result is a generalization of \cite[Lem. 3.3]{HMPcut}. \begin{lemma}\label{lem:technical_lemma_1} Let $\omega, \mathcal{S} \subseteq \mathcal{C}$ be such that $\omega$ is closed under extensions and $\omega \cap \mathcal{S}$ is a relative generator in $\omega$. Let $C \in \mathcal{C}$ and \begin{align} C_\bullet :\quad \cdots \to W_n\to W_{n-1} \to \cdots \to W_1 \to W_0 \to C\to 0 \label{eqn:resolution_omega} \end{align} be an $\omega$-resolution of $C.$ Then, there exist $\mathbb{E}$-triangles \begin{align} G_j & \to X_{j+1} \to Z_{j+1}(C_\bullet)\dashrightarrow & & \text{and} & X_{j+1} & \to F_j \to X_j\dashrightarrow \label{eqn:technical} \end{align} where $X_0 := C$, $F_j \in \omega \cap \mathcal{S}$ and $G_j \in \omega,$ for every $j\geq 0$. \end{lemma} \begin{proof} Let us prove this result by induction on $j$. \begin{itemize} \item \underline{Initial step}: For the case $j = 0$, since $\omega \cap \mathcal{S}$ is a relative generator in $\omega$, there is an $\mathbb{E}$-triangle $G_0 \to F_0 \to W_0\dashrightarrow$ with $G_0 \in \omega$ and $F_0 \in \omega \cap \mathcal{S}$. From condition (ET4)$^{op}$ we can form the following commutative diagram in $\mathcal{C}$ \[ \begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}] \matrix (m) [matrix of math nodes, row sep=2.5em, column sep=2.5em, text height=1.25ex, text depth=0.25ex] { G_0 & X_1 & Z_1(C_\bullet) \\ G_0 & F_{0} & W_{0} \\ {} & C & C \\ }; \path[->] (m-1-1) edge (m-1-2) (m-1-2) edge (m-1-3) (m-2-1) edge (m-2-2) (m-2-2) edge (m-2-3) (m-1-2) edge (m-2-2) (m-2-2) edge (m-3-2) (m-1-3) edge (m-2-3) (m-2-3) edge (m-3-3) ; \path[-,font=\scriptsize] (m-3-2) edge [double, thick, double distance=2pt] (m-3-3) (m-1-1) edge [double, thick, double distance=2pt] (m-2-1) ; \end{tikzpicture} \] where $G_0\to X_1\to Z_1(C_\bullet)\dashrightarrow$ and $X_1\to F_0\to C\dashrightarrow$ are $\mathbb{E}$-triangles. Hence, the existence of such $\mathbb{E}$-triangles holds in this case. \item \underline{Induction step}: Now suppose that, for some $j \geq 0$, there are $\mathbb{E}$-triangles $G_j \to X_{j+1} \to Z_{j+1}(C_\bullet)\dashrightarrow$ and $X_{j+1} \to F_j \to X_j\dashrightarrow$, with $F_j \in \omega \cap \mathcal{S}$ and $G_j \in \omega$.
From Proposition~\ref{Nakaoka 3.15} and by considering the $\mathbb{E}$-triangle $Z_{j+2}(C_\bullet) \to W_{j+1} \to Z_{j+1}(C_\bullet)\dashrightarrow$ we can construct a commutative diagram in $\mathcal{C}$ \[ \begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}] \matrix (m) [matrix of math nodes, row sep=2.5em, column sep=2.5em, text height=1.25ex, text depth=0.25ex] { {} & G_j & G_j \\ Z_{j+2}(C_\bullet) & F'_{j+1} & X_{j+1} \\ Z_{j+2}(C_\bullet) & W_{j+1} & Z_{j+1}(C_\bullet) \\ }; \path[->] (m-2-1) edge (m-2-2) (m-2-2) edge (m-2-3) (m-3-1) edge (m-3-2) (m-3-2) edge (m-3-3) (m-1-2) edge (m-2-2) (m-2-2) edge (m-3-2) (m-1-3) edge (m-2-3) (m-2-3) edge (m-3-3) ; \path[-,font=\scriptsize] (m-1-2) edge [double, thick, double distance=2pt] (m-1-3) (m-2-1) edge [double, thick, double distance=2pt] (m-3-1) ; \end{tikzpicture} \] where $G_j\to F'_{j+1}\to W_{j+1}\dashrightarrow$ is an $\mathbb{E}$-triangle. Since $\omega$ is closed under extensions, $F'_{j+1} \in \omega$. By using again that $\omega \cap \mathcal{S}$ is a relative generator in $\omega$, we have an $\mathbb{E}$-triangle $G_{j+1} \to F_{j+1} \to F'_{j+1}\dashrightarrow$ with $F_{j+1} \in \omega \cap \mathcal{S}$ and $G_{j+1} \in \omega$. From condition (ET4)$^{op}$ again we get a commutative diagram \[ \begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}] \matrix (m) [matrix of math nodes, row sep=2.5em, column sep=2.5em, text height=1.25ex, text depth=0.25ex] { G_{j+1} & X_{j+2} & Z_{j+2}(C_\bullet) \\ G_{j+1} & F_{j+1} & F'_{j+1} \\ {} & X_{j+1} & X_{j+1} \\ }; \path[->] (m-1-1) edge (m-1-2) (m-1-2) edge (m-1-3) (m-2-1) edge (m-2-2) (m-2-2) edge (m-2-3) (m-1-2) edge (m-2-2) (m-2-2) edge (m-3-2) (m-1-3) edge (m-2-3) (m-2-3) edge (m-3-3) ; \path[-,font=\scriptsize] (m-3-2) edge [double, thick, double distance=2pt] (m-3-3) (m-1-1) edge [double, thick, double distance=2pt] (m-2-1) ; \end{tikzpicture} \] where $G_{j+1}\to X_{j+2}\to Z_{j+2}(C_\bullet)\dashrightarrow$ and $X_{j+2}\to F_{j+1}\to X_{j+1}\dashrightarrow$ are $\mathbb{E}$-triangles. Therefore, the result follows. \end{itemize} \end{proof} The following result is the generalization of \cite[Lem. 3.4]{HMPcut}. \begin{lemma}\label{lem:technical_lemma_2} Let $\omega\subseteq \mathcal{C}$ be closed under extensions. If $W \to B \to C\dashrightarrow$ is an $\mathbb{E}$-triangle with $W \in \omega$ and $C \in \omega^\wedge$, then $B \in \omega^\wedge$ and $\mathrm{resdim}_\omega(B) \leq \mathrm{resdim}_\omega(C)$. \end{lemma} \begin{proof} For $\mathrm{resdim}_{\omega}(C) = 0$, the result follows since $\omega$ is closed under extensions. So we may assume that $\mathrm{resdim}_{\omega}(C) \geq 1$. Then, there exists an $\mathbb{E}$-triangle $W' \to W_0 \to C\dashrightarrow$ with $W_0 \in \omega$ and $\mathrm{resdim}_{\omega}(W') = \mathrm{resdim}_{\omega}(C) - 1$. 
From Proposition~\ref{Nakaoka 3.15}, one can form the following commutative diagram in $\mathcal{C}$ \[ \begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}] \matrix (m) [matrix of math nodes, row sep=2.5em, column sep=2.5em, text height=1.25ex, text depth=0.25ex] { {} & W & W \\ W' & E & B \\ W' & W_0 & C \\ }; \path[->] (m-2-1) edge (m-2-2) (m-2-2) edge (m-2-3) (m-3-1) edge (m-3-2) (m-3-2) edge (m-3-3) (m-1-2) edge (m-2-2) (m-2-2) edge (m-3-2) (m-1-3) edge (m-2-3) (m-2-3) edge (m-3-3) ; \path[-,font=\scriptsize] (m-1-2) edge [double, thick, double distance=2pt] (m-1-3) (m-2-1) edge [double, thick, double distance=2pt] (m-3-1) ; \end{tikzpicture} \] where $W'\to E\to B\dashrightarrow$ is an $\mathbb{E}$-triangle and $E\in \omega$ since $\omega$ is closed under extensions. Therefore, from the middle row we get that $B\in \omega^{\wedge}$ with $\mathrm{resdim}_{\omega}(B)\leq \mathrm{resdim}_{\omega}(W')+1=\mathrm{resdim}_{\omega}(C)$. \end{proof} \begin{lemma}\label{lem: w^ cerrada por sumandos} Let $A\to B\to C\dashrightarrow$ be an $\mathbb{E}$-triangle in $\mathcal{C}$. \begin{enumerate} \item If $C=C_1\oplus C_2$ in $\mathcal{C},$ then there exist $\mathbb{E}$-triangles $A_i \to B \to C_i \dashrightarrow$ for every $i=1, 2$. \item The sequence $C\mathop{\longrightarrow}\limits^{\nabla} C\oplus C\mathop{\longrightarrow}\limits^{\footnotesize{(-1 \, 1)}} C$ realizes the split $\mathbb{E}$-extension $0\in \mathbb{E}(C,C),$ for any $C\in \mathcal{C},$ where $\nabla:=\footnotesize{\left(\begin{array}{c} 1\\1 \end{array} \right)}: C\to C\oplus C$ is the diagonal map. \end{enumerate} \end{lemma} \begin{proof} (1) From condition (ET4)$^{op}$ we have the following diagram \[ \begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}] \matrix (m) [matrix of math nodes, row sep=2.5em, column sep=2.5em, text height=1.25ex, text depth=0.25ex] { A & A_2 & C_1 \\ A & B & C \\ {} & C_2 & C_2, \\ }; \path[->] (m-1-1) edge (m-1-2) (m-1-2) edge (m-1-3) (m-2-1) edge (m-2-2) (m-2-2) edge (m-2-3) (m-1-2) edge (m-2-2) (m-2-2) edge (m-3-2) (m-1-3) edge node[right] {\footnotesize$\left( \begin{array}{c} 1\\ 0 \end{array} \right)$} (m-2-3) (m-2-3) edge node[right] {\footnotesize$\left( \begin{array}{cc} 0 & 1 \end{array} \right)$} (m-3-3) ; \path[-,font=\scriptsize] (m-3-2) edge [double, thick, double distance=2pt] (m-3-3) (m-1-1) edge [double, thick, double distance=2pt] (m-2-1) ; \end{tikzpicture} \] where $A_2\to B\to C_2\dashrightarrow$ is an $\mathbb{E}$-triangle. In a similar way, we obtain an $\mathbb{E}$-triangle $A_1\to B\to C_1\dashrightarrow$. \ (2) It follows from \cite[Rmk. 2.11-(2)]{Nakaoka1}. \end{proof} The following result generalizes \cite[Lem. 3.5]{HMPcut}. \begin{lemma}\label{lem:main_lemma} Let $\mathcal{C}$ be an extriangulated category with higher extension groups, $\omega, \mathcal{S} \subseteq \mathcal{C}$ be such that $\omega$ is closed under extensions and isomorphisms, and let $\omega \cap \mathcal{S}$ be an $\omega$-projective relative generator in $\omega.$ Then, the following statements hold true. \begin{enumerate} \item $\omega \cap \mathcal{S}$ is an $\omega^\wedge$-projective relative generator in $\omega^\wedge$. Moreover, for any $C \in \omega^\wedge$ with $\mathrm{resdim}_{\omega}(C) \geq 1$, there exists an $\mathbb{E}$-triangle $K \to F \to C\dashrightarrow$ such that $F \in \omega \cap \mathcal{S}$ and $\mathrm{resdim}_{\omega}(K) = \mathrm{resdim}_{\omega}(C) - 1$. \item $\omega^\wedge$ is closed under extensions. \item If $\omega$ is closed under direct summands, then so is $\omega^\wedge$.
\item If $\mathcal{S}$ is closed under cones and cocones, then $\omega^\wedge \cap \mathcal{S} = (\omega \cap \mathcal{S})^\wedge$. \end{enumerate} \end{lemma} \begin{proof} \ \begin{enumerate} \item From \cite[Lem. 3.8]{MDZtheoryAB} and its dual, we have that $\mathrm{pd}_{\omega^\wedge}(\omega \cap \mathcal{S}) = \mathrm{id}_{\omega \cap \mathcal{S}}(\omega^\wedge) = \mathrm{id}_{\omega \cap \mathcal{S}}(\omega) = \mathrm{pd}_{\omega}(\omega \cap \mathcal{S}) = 0$, and so $\omega \cap \mathcal{S}$ is $\omega^\wedge$-projective. Now, we show that $\omega \cap \mathcal{S}$ is a relative generator in $\omega^\wedge$. We use induction on $n := \mathrm{resdim}_{\omega}(C),$ for $C \in \omega^\wedge$. \begin{itemize} \item \underline{Initial step}: This is clear since $\omega$ is closed under isomorphisms. \item \underline{Induction step}: For the case $n \geq 1$, we have an $\omega$-resolution of $C$ \[ 0\to W_n \to W_{n-1} \to \cdots \to W_1 \to W_0 \to C\to 0 \] with $W_k \in \omega$ for every $0 \leq k \leq n$. By Lemma \ref{lem:technical_lemma_1} and the notation therein, we have that $X_n \in \omega$ since $G_{n-1}, Z_n(C_{\bullet})=W_n \in \omega$ and $\omega$ is closed under extensions. Glueing together the $\mathbb{E}$-triangles in Lemma \ref{lem:technical_lemma_1}~\eqref{eqn:technical}, we get an acyclic $\mathbb{E}$-triangle sequence \[ 0\to X_n \to F_{n-1} \to \cdots \to F_1 \to F_0 \to C\to 0 \] with $F_k \in \omega \cap \mathcal{S},$ for every $0 \leq k \leq n-1$. Thus, by considering the $\mathbb{E}$-triangle $X_1 \to F_0 \to C\dashrightarrow$ where $F_0 \in \omega \cap \mathcal{S}$ and $X_1 \in \omega^\wedge$ with $\mathrm{resdim}_{\omega}(X_1) = n - 1,$ we can conclude that $\omega \cap \mathcal{S}$ is a relative generator in $\omega^\wedge$. \end{itemize} \item Suppose we are given an $\mathbb{E}$-triangle $A \to B \to C\dashrightarrow$ with $A, C \in \omega^\wedge$. Let us use induction on $n := \mathrm{resdim}_{\omega}(A)$ to show that $B \in \omega^\wedge$. \begin{itemize} \item \underline{Initial step}: If $\mathrm{resdim}_{\omega}(A) = 0$, the result follows by Lemma \ref{lem:technical_lemma_2} and the hypothesis that $\omega$ is closed under isomorphisms. \item \underline{Induction step}: We may assume that $\mathrm{resdim}_{\omega}(A) \geq 1$. Suppose that for any $\mathbb{E}$-triangle $A' \to B' \to C'\dashrightarrow$ with $C' \in \omega^\wedge$ and $\mathrm{resdim}_{\omega}(A') \leq n-1$, one has that $B' \in \omega^\wedge$. On the one hand, by part (1), there is an $\mathbb{E}$-triangle $C' \to P \to C \dashrightarrow$ with $P \in \omega \cap \mathcal{S}$ and $C' \in \omega^\wedge$. Thus, by Proposition~\ref{Nakaoka 3.15} we can form the following commutative diagram \[ \begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}] \matrix (m) [matrix of math nodes, row sep=2.5em, column sep=2.5em, text height=1.25ex, text depth=0.25ex] { {} & C' & C' \\ A & E & P \\ A & B & C, \\ }; \path[->] (m-2-1) edge (m-2-2) (m-2-2) edge (m-2-3) (m-3-1) edge (m-3-2) (m-3-2) edge (m-3-3) (m-1-2) edge (m-2-2) (m-2-2) edge (m-3-2) (m-1-3) edge (m-2-3) (m-2-3) edge (m-3-3) ; \path[-,font=\scriptsize] (m-1-2) edge [double, thick, double distance=2pt] (m-1-3) (m-2-1) edge [double, thick, double distance=2pt] (m-3-1) ; \end{tikzpicture} \] where $A\to E\to P\dashrightarrow$ is an $\mathbb{E}$-triangle. Since $\mathrm{pd}_{\omega^\wedge}(\omega \cap \mathcal{S}) = 0$, the conflation $A\to E\to P$ realizes the split $\mathbb{E}$-extension, and then $E \simeq A \oplus P$ \cite[Rmk. 
2.11-(1)]{Nakaoka1}. On the other hand, by part (1) again, there is an $\mathbb{E}$-triangle $A' \to W_0 \to A\dashrightarrow$ with $W_0 \in \omega \cap \mathcal{S}$ and $\mathrm{resdim}_{\omega}(A') = \mathrm{resdim}_{\omega}(A) - 1$. Thus, by considering the direct sum of this $\mathbb{E}$-triangle and $0 \to P \mathop{\to}\limits^{1} P\dashrightarrow$ we get an $\mathbb{E}$-triangle $A' \to W_0 \oplus P \to A \oplus P\dashrightarrow$ with $W_0 \oplus P \in \omega$ since $\omega$ is closed under extensions. Now, applying (ET4)$^{op}$ to the $\mathbb{E}$-triangles $A' \to W_0 \oplus P \to A \oplus P\dashrightarrow$ and $C'\to A \oplus P\to B\dashrightarrow,$ we have the following commutative diagram in $\mathcal{C}$ \[ \begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}] \matrix (m) [matrix of math nodes, row sep=2.5em, column sep=2.5em, text height=1.25ex, text depth=0.25ex] { A' & E' & C' \\ A' & W_{0}\oplus P & A\oplus P \\ {} & B & B, \\ }; \path[->] (m-1-1) edge (m-1-2) (m-1-2) edge (m-1-3) (m-2-1) edge (m-2-2) (m-2-2) edge (m-2-3) (m-1-2) edge (m-2-2) (m-2-2) edge (m-3-2) (m-1-3) edge (m-2-3) (m-2-3) edge (m-3-3) ; \path[-,font=\scriptsize] (m-3-2) edge [double, thick, double distance=2pt] (m-3-3) (m-1-1) edge [double, thick, double distance=2pt] (m-2-1) ; \end{tikzpicture} \] where $A'\to E'\to C'\dashrightarrow$ and $E'\to W_0\oplus P\to B\dashrightarrow$ are $\mathbb{E}$-triangles. Applying the induction hypothesis to the first row $A'\to E'\to C'\dashrightarrow$ of the previous diagram gives $E'\in \omega^{\wedge}$; then, the second column $E'\to W_0\oplus P\to B\dashrightarrow$ shows that $B\in \omega^{\wedge}$. \end{itemize} \item Given an object $C = C_1 \oplus C_2 \in \omega^\wedge$ with $n := \mathrm{resdim}_{\omega}(C)$, we use induction on $n$ to prove that $C_1, C_2 \in \omega^\wedge$. \begin{itemize} \item \underline{Initial step}: The case $n = 0$ is trivial. \item \underline{Induction step}: Suppose that every direct summand in $\mathcal{C}$ of an object $D \in \omega^\wedge$ with $\mathrm{resdim}_{\omega}(D) \leq n-1$ belongs to $\omega^\wedge$. By part (1), there exists an $\mathbb{E}$-triangle \[ K \to W \xrightarrow{{\scriptsize \left(\begin{array}{c} h_1 \\ h_2 \end{array}\right)}} C \mathop{\dashrightarrow}^{\delta} \] with $W \in \omega \cap \mathcal{S}$ and $\mathrm{resdim}_{\omega}(K) = \mathrm{resdim}_{\omega}(C) - 1$. Thus, for $i = 1, 2$ we have $\mathbb{E}$-triangles $K_i \to W \xrightarrow{h_i} C_i \mathop{\dashrightarrow}\limits^{\delta_{i}}$ from Lemma~\ref{lem: w^ cerrada por sumandos}. By taking the direct sum of $\delta_1$ and $\delta_2$ we get the following $\mathbb{E}$-triangle \[ K_1 \oplus K_2 \to W \oplus W \xrightarrow{{\scriptsize \left( \begin{array}{cc} h_1 & 0 \\ 0 & h_2 \end{array} \right)}} C \mathop{\dashrightarrow}\limits^{\delta_1 \oplus \delta_2}, \] where $W \oplus W \in \omega$ since $\omega$ is closed under extensions. Then, by Lemma~\ref{lem: w^ cerrada por sumandos} and \cite[Prop.
3.17]{Nakaoka1} we can form the following commutative diagram \[ \begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}] \matrix (m) [matrix of math nodes, row sep=2em, column sep=2em, text height=1.25ex, text depth=0.25ex] { K & W & C \\ K_1 \oplus K_2 & W \oplus W & C \\ W & W & {} \\ }; \path[->] (m-1-1) edge (m-1-2) (m-1-2) edge (m-1-3) (m-2-1) edge (m-2-2) (m-2-2) edge (m-2-3) (m-1-2) edge node[right] {\footnotesize$\nabla$} (m-2-2) (m-2-2) edge node[right] {\footnotesize$(-1\, 1)$} (m-3-2) (m-1-1) edge (m-2-1) (m-2-1) edge (m-3-1) ; \path[-,font=\scriptsize] (m-3-1) edge [double, thick, double distance=2pt] (m-3-2) (m-1-3) edge [double, thick, double distance=2pt] (m-2-3) ; \end{tikzpicture} \] The first column gives rise to a conflation $K \to K_1 \oplus K_2 \to W $ which splits since $W \in \omega \cap \mathcal{S}$, $K \in \omega^\wedge$ and $\mathrm{pd}_{\omega^\wedge}(\omega \cap \mathcal{S}) = 0$. Thus, $K_1 \oplus K_2 \simeq K \oplus W$ and \[ \hspace{1.5cm} \mathrm{resdim}_{\omega}(K_1 \oplus K_2) = \mathrm{resdim}_{\omega}(K \oplus W) \leq \mathrm{resdim}_{\omega}(K) \leq n-1. \] By induction hypothesis, we have that $K_i \in \omega^\wedge$. Hence, from the $\mathbb{E}$-triangle $K_i \to W \xrightarrow{h_i} C_i \mathop{\dashrightarrow}\limits^{\delta_{i}}$ we finally have that $C_i \in \omega^\wedge$. \end{itemize} \item The proof is similar to the one given in \cite[Lem. 3.5-(4)]{HMPcut}. \end{enumerate} \end{proof} \begin{lemma}\label{lem:relative_equality} Let $\mathcal{C}$ be an extriangulated category with higher extension groups and $\mathcal{X}, \omega, \mathcal{S} \subseteq \mathcal{C}$ satisfying the following conditions: \begin{enumerate} \item $\omega$ is closed under extensions and isomorphisms; \item $\omega \cap \mathcal{S}$ is closed under direct summands and a relative generator in $\omega$; \item $\omega \cap \mathcal{S} \subseteq \mathcal{X} \cap \mathcal{S}$; \item $\mathcal{X} \cap \mathcal{S}$ is closed under cocones; \item $\mathrm{id}_{\mathcal{X} \cap \mathcal{S}}(\omega \cap \mathcal{S}) = 0$. \end{enumerate} Then, $\omega \cap \mathcal{S} = \mathcal{X} \cap \omega^\wedge \cap \mathcal{S}$. \end{lemma} \begin{proof} The proof given in \cite[Lem. 4.1]{HMPcut} can be easily extended to the context of extriangulated categories. \end{proof} \section{\textbf{Cut Frobenius pairs vs. Cut AB-contexts}} Let us commence with the analogs of left Frobenius pairs and weak Auslander-Buchweitz contexts in extriangulated categories. These notions have appeared previously, for example in \cite{BMPS, TGone, ma2021new}; however, in this paper we work with a relativization that depends on a class of objects in an extriangulated category $\mathcal{C}$ with higher extension groups. Following \cite{TGone}, we first recall the notion of left Frobenius pair in such extriangulated categories. \begin{definition}\cite[Def. 4.7]{TGone}\label{def: left Frob} Let $\mathcal{C}$ be an extriangulated category with higher extension groups and $\mathcal{X}, \omega\subseteq \mathcal{C}$. We say that $(\mathcal{X}, \omega)$ is a \textbf{left Frobenius pair} in $\mathcal{C}$ if the following conditions are satisfied. \begin{itemize} \item[(lF1)] $\mathcal{X}$ is left thick. \item[(lF2)] $\omega$ is closed under direct summands in $\mathcal{C}.$ \item[(lF3)] $\omega$ is an $\mathcal{X}$-injective relative cogenerator in $\mathcal{X}$.
\end{itemize} If the dual conditions hold true for the pair $(\nu, \mathcal{Y})$; that is, (1) $\mathcal{Y}$ is right thick; (2) $\nu$ is closed under direct summands in $\mathcal{C};$ and (3) $\nu$ is a $\mathcal{Y}$-projective relative generator in $\mathcal{Y}$, we shall say that the pair $(\nu, \mathcal{Y})$ is a \textbf{right Frobenius pair in $\mathcal{C}$}. \end{definition} \begin{remark}\label{rmk: frob1} For any left Frobenius pair $(\mathcal{X}, \omega)$ in $\mathcal{C}$, we have that $\mathcal{X}^{\wedge}$ is a thick subcategory of $\mathcal{C}$ and $\mathrm{id}_{\mathcal{X}}(\omega^{\wedge})=0$ (see \cite[Props. 3.10, 3.14 \& 3.15 and Lem. 3.8]{MDZtheoryAB} or \cite[Prop. 3.8 \& Lem. 3.10]{ma2021new}). \end{remark} \begin{example}\label{ex: GP, P} Let $\xi$ be a proper class of $\mathbb{E}$-triangles in $\mathcal{C}$ \cite[Def. 3.1]{HZZgorensteinness}. It is known that if $\mathcal{C}$ has enough $\xi$-projectives and enough $\xi$-injectives, then $(\mathcal{C}, \mathbb{E}_{\xi}, \mathfrak{s}_{\xi})$ is an extriangulated category where $\mathfrak{s}_{\xi}:=\mathfrak{s}|_{\mathbb{E}_{\xi}}$ and $\mathbb{E}_{\xi}:=\mathbb{E}|_{\xi}$ \cite[Def. 4.1 \& Thm. 3.2]{HZZgorensteinness}. Thus, from \cite[Ex. 4.8]{TGone} we have that $(\mathcal{GP}(\xi), \mathcal{P}(\xi))$ is a left Frobenius pair in $(\mathcal{C}, \mathbb{E}_{\xi}, \mathfrak{s}_{\xi})$\footnote{Definition~\ref{def: left Frob} coincides with \cite[Def. 4.7]{TGone} by considering $\xi$ as the class of all the $\mathbb{E}$-triangles in $\mathcal{C}$.} where $\mathcal{GP}(\xi)$ is the class of $\xi$-$\mathcal{G}$projective objects and $\mathcal{P}(\xi)$ denotes the class of $\xi$-projective objects \cite[Defs. 4.8 \& 4.1]{HZZgorensteinness}. In particular, $\mathrm{id}_{\mathcal{GP}(\xi)}(\mathcal{P}(\xi)^{\wedge})=0$ \cite[Sections 4.1 \& 5.1]{HZZgorensteinness}. \end{example} Now, we extend the notion of cut Frobenius pair, given in \cite[Def. 3.6]{HMPcut}, to extriangulated categories with higher extension groups. \begin{definition}\label{def: cut left Frobenius pair} Let $\mathcal{C}$ be an extriangulated category with higher extension groups and $\mathcal{X}, \omega, \mathcal{S}\subseteq \mathcal{C}$. We say that $(\mathcal{X}, \omega)$ is a \textbf{left Frobenius pair cut on $\mathcal{S}$} if the following conditions are satisfied. \begin{enumerate} \item $\mathcal{X}$ is left thick. \item $(\mathcal{X}\cap \mathcal{S}, \omega\cap\mathcal{S})$ is a left Frobenius pair in $\mathcal{C}.$ \item $\omega\cap \mathcal{S}$ is $\omega$-projective and a relative generator in $\omega.$ \item $\omega$ is closed under extensions and direct summands in $\mathcal{C}.$ \end{enumerate} In case the dual conditions are satisfied for the pair $(\nu, \mathcal{Y})$, we shall say that it is a \textbf{right Frobenius pair cut on $\mathcal{S}$}. \end{definition} \begin{remark}\label{rmk: frob} If $(\mathcal{X},\omega)$ is a left Frobenius pair cut on $\mathcal{S}$, then all the statements in Lemma \ref{lem:main_lemma} hold true (these results are also valid for the case $\mathcal{S}:=\mathcal{C}$). In particular, $\omega^\wedge$ is closed under extensions and direct summands in $\mathcal{C}.$ Moreover, since $(\mathcal{X}, \omega)$ is a left Frobenius pair cut on $\mathcal{S}$, we also have from Lemma~\ref{lem:relative_equality} that $\omega\cap \mathcal{S}=\mathcal{X}\cap \mathcal{S}\cap \omega^{\wedge}$.
Thus, $\omega\cap \mathcal{S}$ is closed under extensions since $\mathcal{X}\cap \mathcal{S}$ and $\omega^{\wedge}$ are closed under extensions. \end{remark} Next, we extend the notion of cut Auslander-Buchweitz context, given in \cite[Def. 3.14]{HMPcut}, to extriangulated categories with higher extension groups. \begin{definition}\label{def: cut left AB context} Let $\mathcal{C}$ be an extriangulated category with higher extension groups, $\mathcal{A}, \mathcal{B}, \mathcal{S}\subseteq \mathcal{C}$ and $\omega:=\mathcal{A}\cap\mathcal{B}$. We say that $(\mathcal{A}, \mathcal{B})$ is a \textbf{left weak AB context cut on $\mathcal{S}$} if the following conditions are satisfied. \begin{enumerate} \item $(\mathcal{A}, \omega)$ is a left Frobenius pair cut on $\mathcal{S}.$ \item $\mathcal{B}\cap \mathcal{S}$ is right thick. \item $\mathcal{B}\cap \mathcal{S}\subseteq (\mathcal{A}\cap \mathcal{S})^{\wedge}$. \end{enumerate} Dually, the notion of \textbf{right weak AB context cut on $\mathcal{S}$} is defined. \end{definition} The following table summarizes all the properties mentioned previously. The ones marked by $\checkmark$ follow by definition, while those marked by $\star$ are a consequence of Remarks~\ref{rmk: frob1} and~\ref{rmk: frob}.$\,$ \begin{center} \begin{tabular}{ccccc} \hline & \multicolumn{4}{c}{Closedness under} \\ & extensions & direct summands & cocones & cones\\ \hline Left Frobenius $(\mathcal{X}, \omega)$ & & & & \\ \hline $\mathcal{X}$ & $\checkmark$ & $\checkmark$ & $\checkmark$ &\\ $\omega$ & $\star$ & $\checkmark$ & &\\ $\mathcal{X}^{\wedge}$ & $\star$ & $\star$ & $\star$ & $\star$\\ \hline Cut left Frobenius $(\mathcal{X}, \omega)$ & & & & \\ \hline $\mathcal{X}$ & $\checkmark$ & $\checkmark$ & $\checkmark$ &\\ $\omega$ & $\checkmark$ & $\checkmark$ & &\\ $\mathcal{X}\cap \mathcal{S}$ & $\checkmark$ & $\checkmark$ & $\checkmark$ &\\ $\omega\cap \mathcal{S}$ & $\star$ & $\checkmark$ & &\\ $\omega^{\wedge}$ & $\star$ & $\star$ & &\\ \hline Cut left weak AB-context $(\mathcal{A}, \mathcal{B})$ & & & & \\ \hline $\mathcal{A}$ & $\checkmark$ & $\checkmark$ & $\checkmark$ &\\ $\omega:=\mathcal{A}\cap \mathcal{B}$ & $\checkmark$ & $\checkmark$ & &\\ $\mathcal{A}\cap \mathcal{S}$ & $\checkmark$ & $\checkmark$ & $\checkmark$ &\\ $\omega\cap \mathcal{S}$ & $\star$ & $\checkmark$ & &\\ $\mathcal{B}\cap \mathcal{S}$ & $\checkmark$ & $\checkmark$ & & $\checkmark$\\ \hline \end{tabular} \end{center}$\,$\\ In the following lines we show that there exists a one-to-one correspondence between the class of left Frobenius pairs cut on $\mathcal{S}$, denoted by $\mathfrak{F}_{\mathcal{S}}$, and the class of left weak AB contexts cut on $\mathcal{S}$, denoted by $\mathfrak{C}_{\mathcal{S}}$. It is worth mentioning that such a correspondence will be given under equivalence relations whose definition we give below. \begin{definition}\label{def:Frobenius_and_AB_relations} Let $\mathcal{C}$ be an extriangulated category with higher extension groups and let $\mathcal{S} \subseteq \mathcal{C}$.
For $(\mathcal{X},\omega), (\mathcal{X}',\omega') \in \mathfrak{F}_{\mathcal{S}}$ and $(\mathcal{A},\mathcal{B})$, $(\mathcal{A}',\mathcal{B}') \in \mathfrak{C}_{\mathcal{S}}$, we shall say that: \begin{enumerate} \item $(\mathcal{X},\omega)$ is \textbf{related} to $(\mathcal{X}',\omega')$ in $\mathfrak{F}_{\mathcal{S}}$, denoted $(\mathcal{X},\omega) \sim (\mathcal{X}',\omega')$, if $\mathcal{X} \cap \mathcal{S} = \mathcal{X}' \cap \mathcal{S}$ and $\omega \cap \mathcal{S} = \omega' \cap \mathcal{S}$; \item $(\mathcal{A},\mathcal{B})$ is \textbf{related} to $(\mathcal{A}',\mathcal{B}')$ in $\mathfrak{C}_{\mathcal{S}}$, denoted $(\mathcal{A},\mathcal{B}) \sim (\mathcal{A}',\mathcal{B}')$, if $\mathcal{A} \cap \mathcal{S} = \mathcal{A}' \cap \mathcal{S}$ and $\mathcal{A} \cap \mathcal{B} \cap \mathcal{S} = \mathcal{A}' \cap \mathcal{B}' \cap \mathcal{S}$. \end{enumerate} \end{definition} We denote by $[\mathcal{X},\omega]_{\mathfrak{F}_{\mathcal{S}}}$ the equivalence class of $(\mathcal{X},\omega)$ in $\mathfrak{F}_{\mathcal{S}} /\!\! \sim$. Similarly, $[\mathcal{A},\mathcal{B}]_{\mathfrak{C}_{\mathcal{S}}}$ denotes the equivalence class of $(\mathcal{A},\mathcal{B})$ in $\mathfrak{C}_{\mathcal{S}} /\!\! \sim$. The following result generalizes \cite[Thm. 4.6]{HMPcut} to extriangulated categories. \begin{theorem}[First correspondence theorem]\label{theo:correspondence_1} Let $\mathcal{C}$ be an extriangulated category with higher extension groups and let $\mathcal{S} \subseteq \mathcal{C}$ be closed under cones and cocones. Then, the correspondence $\Phi_{\mathcal{S}}\colon \mathfrak{F}_{\mathcal{S}} /\!\! \sim \mbox{} \to \mathfrak{C}_{\mathcal{S}} /\!\! \sim,\; [\mathcal{X},\omega]_{\mathfrak{F}_{\mathcal{S}}} \mapsto [\mathcal{X},\omega^\wedge]_{\mathfrak{C}_{\mathcal{S}}},$ is bijective and the inverse $\Psi_{\mathcal{S}}$ is given by $[\mathcal{A},\mathcal{B}]_{\mathfrak{C}_{\mathcal{S}}} \mapsto [\mathcal{A},\mathcal{A} \cap \mathcal{B}]_{\mathfrak{F}_{\mathcal{S}}}$. \end{theorem} \begin{proof} First, we show that the mappings $\Phi_{\mathcal{S}}$ and $\Psi_{\mathcal{S}}$ are well-defined. On the one hand, by Definition~\ref{def: cut left AB context}, we have that $(\mathcal{A},\mathcal{A} \cap \mathcal{B}) \in \mathfrak{F}_{\mathcal{S}}$ for every $(\mathcal{A},\mathcal{B}) \in \mathfrak{C}_{\mathcal{S}}$. Moreover, it is clear that $\Psi_{\mathcal{S}}([\mathcal{A},\mathcal{B}]_{\mathfrak{C}_{\mathcal{S}}})$ does not depend on the chosen representative $(\mathcal{A},\mathcal{B}) \in \mathfrak{C}_{\mathcal{S}}$. On the other hand, $\Phi_{\mathcal{S}}$ does not depend on representatives by Lemma \ref{lem:relative_equality}. So, we only need to prove that if $(\mathcal{X},\omega) \in \mathfrak{F}_{\mathcal{S}}$ then $(\mathcal{X},\omega^\wedge) \in \mathfrak{C}_{\mathcal{S}}$. For this, we see that the conditions in Definition~\ref{def: cut left AB context} hold. Indeed, \begin{itemize} \item \underline{$(\mathcal{X},\mathcal{X} \cap \omega^\wedge)$ is a left Frobenius pair cut on $\mathcal{S}$:} Clearly, $\mathcal{X}$ is closed under extensions, cocones and direct summands by definition. Now, by Lemma \ref{lem:relative_equality}, we have $(\mathcal{X} \cap \mathcal{S}, \mathcal{X} \cap \omega^\wedge \cap \mathcal{S}) = (\mathcal{X} \cap \mathcal{S},\omega \cap \mathcal{S})$, which is a left Frobenius pair in $\mathcal{C}$. Thus, it suffices to show that $\omega \cap \mathcal{S}$ is an $(\mathcal{X} \cap \omega^\wedge)$-projective relative generator in $\mathcal{X} \cap \omega^\wedge$.
Notice first that $\mathrm{pd}_{\omega^\wedge}(\omega \cap \mathcal{S}) = 0$ by Lemma \ref{lem:main_lemma}. Now, let $M \in \mathcal{X} \cap \omega^\wedge$. Using again Lemma \ref{lem:main_lemma}, there exists an $\mathbb{E}$-triangle $M' \to P \to M\dashrightarrow$ with $P \in \omega \cap \mathcal{S}$ and $M' \in \omega^\wedge$. Since $\mathcal{X}$ is closed under cocones and $\omega \cap \mathcal{S} \subseteq \mathcal{X} \cap \mathcal{S}$, we get that $M' \in \mathcal{X} \cap \omega^\wedge$. Finally, by Remark~\ref{rmk: frob}, $\mathcal{X} \cap \omega^\wedge$ is closed under extensions and direct summands in $\mathcal{C}.$ \item \underline{$\omega^\wedge \cap \mathcal{S}$ is closed under extensions, cones and direct summands:} Notice first that $\omega\cap \mathcal{S}$ is closed under extensions and direct summands by Remark~\ref{rmk: frob}. Then, $\omega^{\wedge}\cap \mathcal{S}=(\omega\cap \mathcal{S})^{\wedge}$ is closed under extensions and direct summands by Lemma~\ref{lem:main_lemma}-(4). We now show that $\omega^{\wedge}\cap \mathcal{S}=(\omega\cap \mathcal{S})^{\wedge}$ is closed under cones. Let $A\to B\to C \dashrightarrow$ be an $\mathbb{E}$-triangle with $A, B\in (\omega\cap \mathcal{S})^{\wedge}$. The result is clear if $\mathrm{resdim}_{\omega\cap \mathcal{S}}(B)=0$ (notice that $\omega\cap\mathcal{S}$ is closed under isomorphisms since it is closed under extensions and direct summands). So, we can assume that $\mathrm{resdim}_{\omega\cap \mathcal{S}}(B)=:n\geq 1$. From Lemma~\ref{lem:main_lemma}, we have an $\mathbb{E}$-triangle $B'\to F\to B\dashrightarrow$ where $F\in \omega\cap \mathcal{S}$ and $\mathrm{resdim}_{\omega\cap \mathcal{S}}(B')\leq \mathrm{resdim}_{\omega\cap\mathcal{S}}(B)-1$. By considering the previous two $\mathbb{E}$-triangles we get the following commutative diagram from (ET4)$^{op}$ \[ \begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}] \matrix (m) [matrix of math nodes, row sep=2.5em, column sep=2.5em, text height=1.25ex, text depth=0.25ex] { B' & E & A \\ B' & F & B \\ {} & C & C, \\ }; \path[->] (m-1-1) edge (m-1-2) (m-1-2) edge (m-1-3) (m-2-1) edge (m-2-2) (m-2-2) edge (m-2-3) (m-1-2) edge (m-2-2) (m-2-2) edge (m-3-2) (m-1-3) edge (m-2-3) (m-2-3) edge (m-3-3) ; \path[-,font=\scriptsize] (m-3-2) edge [double, thick, double distance=2pt] (m-3-3) (m-1-1) edge [double, thick, double distance=2pt] (m-2-1) ; \end{tikzpicture} \] where $E\in (\omega\cap \mathcal{S})^{\wedge}$ since $A, B'$ belong to $(\omega\cap \mathcal{S})^{\wedge}$ and $(\omega\cap \mathcal{S})^{\wedge}$ is closed under extensions. Thus, from the second column in the previous diagram we can deduce that $C\in (\omega\cap\mathcal{S})^{\wedge}$. \item \underline{$\omega^\wedge \cap \mathcal{S} \subseteq (\mathcal{X} \cap \mathcal{S})^\wedge$:} It follows from Lemma \ref{lem:main_lemma}. \end{itemize} Finally, the mappings $\Psi_{\mathcal{S}}$ and $\Phi_{\mathcal{S}}$ are inverse to each other by Lemma \ref{lem:relative_equality}. \end{proof} \begin{remark}\label{rmk: corresp 1}$\,$ In Theorem~\ref{theo:correspondence_1} we can also provide an alternative proof of the fact that $\omega^{\wedge}\cap \mathcal{S}$ is closed under extensions, direct summands and cones, for any left Frobenius pair $(\mathcal{X}, \omega)$ cut on $\mathcal{S}$. These arguments are similar to the ones given in \cite[Thm. 5.4]{BMPS} and \cite[Thm. 4.6]{HMPcut}, but carried out in extriangulated categories.
In fact, given a left Frobenius pair $(\mathcal{X}, \omega)$ cut on $\mathcal{S}$, we know that $(\mathcal{X}\cap\mathcal{S}, \omega\cap \mathcal{S})$ is a left Frobenius pair in $\mathcal{C}$ by definition. Then, from Remark~\ref{rmk: frob1} we get that $(\mathcal{X}\cap \mathcal{S})^{\wedge}$ is closed under extensions, direct summands and cones. Moreover, the equalities $$\omega\cap \mathcal{S}= (\mathcal{X}\cap \mathcal{S})\cap (\mathcal{X}\cap \mathcal{S})^{\perp}=(\mathcal{X}\cap \mathcal{S})\cap (\omega\cap \mathcal{S})^{\wedge}$$ hold by \cite[Prop. 3.9]{MDZtheoryAB}. Thus, by proceeding as in \cite[Prop. 4.18]{TGone} and by Lemma~\ref{lem:main_lemma}-(4), we obtain that $\omega^{\wedge}\cap \mathcal{S}=(\omega\cap \mathcal{S})^{\wedge}$ is closed under extensions, direct summands and cones. \end{remark} \section{\textbf{Cut $n$-cotorsion pairs in extriangulated categories with higher extension groups}} In this section we introduce the notion of \emph{cut $n$-cotorsion pair} for extriangulated categories with higher extension groups. This notion unifies $n$-cotorsion~\cite[Def. 2.1]{HMP} and cut cotorsion~\cite[Def. 2.1]{HMPcut} pairs. Throughout this section, $n\geq 1$ denotes a positive integer and $\mathcal{C}$ an extriangulated category with higher extension groups. \begin{definition}\label{def: CnCP ext} Let $\mathcal{S}, \mathcal{A}, \mathcal{B}\subseteq \mathcal{C}$. We say that $(\mathcal{A}, \mathcal{B})$ is a \textbf{left $n$-cotorsion pair cut on $\mathcal{S}$} if the following conditions are satisfied. \begin{enumerate} \item[(1)] $\mathcal{A}$ is closed under direct summands in $\mathcal{C}.$ \item[(2)] $\mathbb{E}^{i}(\mathcal{A}\cap \mathcal{S}, \mathcal{B})=0$ for every $1\leq i\leq n.$ \item[(3)] (Left completeness) For each $S\in \mathcal{S}$, there exists an $\mathbb{E}$-triangle $K\to A\to S\dashrightarrow$ with $A\in \mathcal{A}$ and $K\in \mathcal{B}^{\wedge}_{n-1}$. \end{enumerate} Dually, we say that $(\mathcal{A}, \mathcal{B})$ is a \textbf{right $n$-cotorsion pair cut on $\mathcal{S}$} if $\mathcal{B}$ is closed under direct summands in $\mathcal{C},$ $\mathbb{E}^{i}(\mathcal{A}, \mathcal{B}\cap \mathcal{S})=0$ for every $1\leq i\leq n$ and each $S\in \mathcal{S}$ admits an $\mathbb{E}$-triangle $S\to B\to C\dashrightarrow$ with $B\in\mathcal{B}$ and $C\in \mathcal{A}^{\vee}_{n-1}$. Finally, $(\mathcal{A}, \mathcal{B})$ is said to be an \textbf{$n$-cotorsion pair cut on $\mathcal{S}$} if it is both a left and right $n$-cotorsion pair cut on $\mathcal{S}$. In case there is no need to refer to the subcategory $\mathcal{S}$, we shall simply say that $(\mathcal{A}, \mathcal{B})$ is a \textbf{(left and/or right) cut $n$-cotorsion pair}. \end{definition} \begin{remark} $\,$ \begin{enumerate} \item In the case of $1$-cotorsion pairs cut on $\mathcal{S},$ we can remove the condition that $\mathcal{C}$ has higher extension groups. Moreover, any $1$-cotorsion pair cut on $\mathcal{S}:=\mathcal{C}$ is a cotorsion pair in $\mathcal{C}$ \cite[Def. 4.1]{Nakaoka1}. \item Any left $n$-cotorsion pair cut on $\mathcal{S}$ with $\mathcal{S}:=\mathcal{C}$ is a left $n$-cotorsion pair in $\mathcal{C}$, for any $n\geq 1$ \cite[Def. 3.1]{HZontherelation}. \item Any left $1$-cotorsion pair cut on $\mathcal{S}$ is a left cotorsion pair cut along $\mathcal{S}$, for any $\mathcal{S}\subseteq \mathcal{C}$ when $\mathcal{C}$ is an abelian category~\cite[Def. 2.1]{HMPcut}.
\end{enumerate} \end{remark} In the case of abelian or triangulated categories, the notion of $n$-cotorsion pair cut on $\mathcal{S}$ specializes as follows. \begin{remark} Let $\mathcal{C}$ be an abelian (resp., triangulated) category and $\mathcal{S}, \mathcal{A}, \mathcal{B}\subseteq \mathcal{C}$. In this case, $(\mathcal{A}, \mathcal{B})$ is a \textbf{left $n$-cotorsion pair cut on $\mathcal{S}$} if the following conditions are satisfied: \begin{enumerate} \item[(1)] $\mathcal{A}$ is closed under direct summands in $\mathcal{C};$ \item[(2)] $\mathrm{Ext}^{i}_{\mathcal{C}}(\mathcal{A}\cap \mathcal{S}, \mathcal{B})=0$ (resp., $\mathrm{Hom}_{\mathcal{C}} (\mathcal{A}\cap \mathcal{S}, \mathcal{B}[i])=0$) for every $1\leq i\leq n$; \item[(3)] For every object $S\in \mathcal{S}$, there exists an exact sequence $K\rightarrowtail A\twoheadrightarrow S$ (resp., a distinguished triangle $K\to A\to S\to K[1]$) with $A\in \mathcal{A}$ and $K\in \mathcal{B}^{\wedge}_{n-1}$. \end{enumerate} \end{remark} One source of examples comes from subcategories which are relative quasi-generators and quasi-cogenerators in other ones. Recall that, for any abelian category $\mathcal{C}$ and $\nu, \mathcal{S}\subseteq \mathcal{C}$, $\nu$ is said to be a \emph{relative quasi-generator in $\mathcal{S}$} if for every $S\in \mathcal{S}$ there exists an exact sequence $S'\rightarrowtail Y\twoheadrightarrow S$ with $Y\in \nu$ and $S'\in \mathcal{S}$. Actually, it is easy to prove that, when $\nu$ and $\mathcal{S}$ are closed under direct summands and $\nu$ is an $\mathcal{S}$-projective relative quasi-generator in $\mathcal{S}$, the pair $(\nu, \mathcal{S})$ is an $n$-cotorsion pair cut on $\mathcal{S}$ for every $n\geq 1$. Dually, for every $n\geq 1$, $(\mathcal{S}, \omega)$ is an $n$-cotorsion pair cut on $\mathcal{S}$ whenever $\omega$ and $\mathcal{S}$ are closed under direct summands and $\omega$ is an $\mathcal{S}$-injective relative quasi-cogenerator in $\mathcal{S}$. The following example illustrates what was mentioned previously. \begin{example}\label{ex:carcaj} \cite[Ex. 5.3-(2)]{ZX} Let $k$ be a field and $\Lambda$ be the quotient path $k$-algebra given by the quiver \[ \xymatrix{ 1 \ar@/^/[r]^{\alpha} & 2\ar@/^/[l]^{\beta} & 3\ar[l]^{\gamma} } \] with relations $\alpha \beta = 0 = \beta \alpha$. The indecomposable projective $\Lambda$-modules are \begin{align*} P(1) & = \begin{tiny}\begin{array}{c} 1 \\ 2 \end{array} \end{tiny}, & P(2) & = \begin{tiny} \begin{array}{c} 2 \\ 1 \end{array} \end{tiny}, & & \text{and} & P(3) & = \begin{tiny} \begin{array}{c} 3 \\ 2 \\ 1 \end{array} \end{tiny}, \end{align*} while the indecomposable injective $\Lambda$-modules are given by \begin{align*} I(1) & = \begin{tiny}\begin{array}{c} 3 \\ 2 \\ 1 \end{array}\end{tiny}, & I(2) & = \begin{tiny}\begin{array}{ccc} 1 & {} & 3 \\ {} & 2 \end{array}\end{tiny}, & & \text{and} & I(3) & = \begin{tiny}\begin{array}{c} 3 \end{array}\end{tiny}.
\end{align*} On the other hand, the Auslander-Reiten quiver of $\Lambda$ is \[ \xymatrix@R=0.3cm{ & & {\begin{tiny} \begin{array}{c} 3 \\ 2\\ 1 \end{array} \end{tiny}}\ar[rd] & & & \\ & {\begin{tiny} \begin{array}{c} 2\\ 1 \end{array} \end{tiny}}\ar[ru] \ar[rd] \ar@{--}[rr] & & {\begin{tiny} \begin{array}{c} 3\\ 2 \end{array} \end{tiny}}\ar[rd] \ar@{--}[rr] & & \begin{tiny} \begin{array}{c}1\end{array} \end{tiny} \\ \begin{tiny} \begin{array}{c} 1 \end{array} \end{tiny} \ar[ru] \ar@{--}[rr] & & \begin{tiny} \begin{array}{c} 2 \end{array} \end{tiny} \ar[ru] \ar[rd] \ar@{--}[rr] & & {\begin{tiny} \begin{array}{ccc} 1& &3\\ &2& \end{array} \end{tiny}}\ar[ru] \ar[rd] & \\ & & & {\begin{tiny} \begin{array}{c} 1\\ 2 \end{array} \end{tiny}}\ar[ru] \ar@{--}[rr] & & \begin{tiny} \begin{array}{c} 3 \end{array} \end{tiny} } \] where the vertex $i$ represents the simple $\Lambda$-module $S(i)$. Consider the class $\mathcal{X} = \mathrm{add}(\begin{tiny} \begin{array}{c} 1 \end{array} \end{tiny} \oplus \begin{tiny} \begin{array}{c} 2\\ 1 \end{array} \end{tiny} \oplus \begin{tiny} \begin{array}{c} 2 \end{array} \end{tiny} \oplus \begin{tiny} \begin{array}{c} 1\\ 2 \end{array} \end{tiny})$ of $\Lambda$-modules which are direct summands of finite direct sums of $\begin{tiny} \begin{array}{c} 1 \end{array} \end{tiny} \oplus \begin{tiny} \begin{array}{c} 2\\ 1 \end{array} \end{tiny} \oplus \begin{tiny} \begin{array}{c} 2 \end{array} \end{tiny} \oplus \begin{tiny} \begin{array}{c} 1\\ 2 \end{array} \end{tiny}$. Then, $\mathcal{X}$ is a subcategory of $\mathrm{mod}(\Lambda)$ which is closed under extensions. Moreover, $\mathcal{X}$ is a Frobenius subcategory of $\mathrm{mod}(\Lambda)$ and the indecomposable projective-injective objects are precisely $\begin{tiny} \begin{array}{c} 1\\ 2 \end{array} \end{tiny}$ and $\begin{tiny} \begin{array}{c} 2 \\ 1 \end{array} \end{tiny}$. Thus, the class of projective objects in $\mathcal{X}$ is given by $\mathcal{P}(\mathcal{X}) = \mathrm{add}(P(1) \oplus P(2))$. Notice also that $\mathcal{P}(\mathrm{mod}(\Lambda)) = \mathrm{add}(P(1) \oplus P(2) \oplus P(3))$. Setting $(\nu, \mathcal{S}):=(\mathcal{P}(\mathrm{mod}(\Lambda)),\mathcal{X})$, we have that $(\nu, \mathcal{S})$ is an $n$-cotorsion pair cut on $\mathcal{S}$ for every $n\geq 1$. An analogous example holds when $\mathcal{C}$ is a triangulated category by considering the notions of weak-generator and weak-cogenerator in \cite[Def. 5.1]{MendozaSaenzVargasSouto2}. \end{example} Other examples come from the theory of stratifying systems \cite{ES} and its generalizations \cite{DMS, MendozaSantiagoHomologicalSystems, Valenteexact, ATmixed}. Let $\mathcal{X}$ be a class of $\Lambda$-modules. We denote by $\mathfrak{F}(\mathcal{X})$ the full subcategory of $\mathrm{mod} (\Lambda)$ whose objects are the $\Lambda$-modules having a finite $\mathcal{X}$-filtration; that is, $M$ belongs to $\mathfrak{F}(\mathcal{X})$ if there is a finite chain $$0=M_{0}\subseteq M_1\subseteq \cdots\subseteq M_m=M$$ of submodules of $M$ such that $M_i/M_{i-1}$ is isomorphic to a $\Lambda$-module in $\mathcal{X},$ for all $i=1, 2, \ldots, m$. \begin{definition}\cite[Def.
2.1]{DMS}\label{def: epss} An Ext-projective stratifying system (of size $t$) in $\mathrm{mod}(\Lambda)$ consists of a triple $(\Theta, \underline{Q}, \leq),$ where $\Theta=\{\Theta(i)\}_{i=1}^{t}$ is a set of non-zero $\Lambda$-modules, $\underline{Q}=\{Q(i)\}_{i=1}^{t}$ is a set of indecomposable $\Lambda$-modules and $\leq$ is a total order on the set $[1,t]:=\{1, 2, \ldots, t\}$ satisfying the following conditions: \begin{enumerate} \item $\mathrm{Hom}_{\Lambda}(\Theta(j), \Theta(i))=0$ for $j> i;$ \item for each $i\in [1,t]$, there is an exact sequence $K(i)\rightarrowtail Q(i)\mathop{\twoheadrightarrow}\limits^{\beta_{i}} \Theta(i)$ in $\mathrm{mod}(\Lambda)$ such that $K(i)\in \mathfrak{F}(\{\Theta(j) : j> i\});$ \item $\mathrm{Ext}_{\Lambda}^{1}(Q, \Theta)=0$, where $Q:=\oplus_{i=1}^{t}Q(i)$. \end{enumerate} \end{definition} The above notion can be extended to the setting of Artin triangulated categories \cite[Def. 3.2]{MendozaSantiagoHomologicalSystems}, where it is called a $\Theta$-projective system \cite[Def. 5.2]{MendozaSantiagoHomologicalSystems}. \begin{example}\label{ex: sistest} \underline{$\bullet$ Abelian case:} Let $(\Theta, \underline{Q}, \leq)$ be an Ext-projective stratifying system of size $t$ in $\mathrm{mod}(\Lambda).$ For each $j\in [1,t]$, we consider the class $\mathcal{S}_{j}:=\{M\in \mathfrak{F}(\Theta) : \min(M)=j\},$ where $\min(M)$ is defined to be the minimum $k$ such that $[M: \Theta(k)]\neq 0$ \cite[Lem. 2.6]{DMS}. Notice that $(\mathrm{add}(Q), \mathfrak{F}(\{\Theta(i) : i> j\}))$ is a left $1$-cotorsion pair cut on $\mathcal{S}_{j}$ since for every $M\in \mathcal{S}_{j}$, there exists a short exact sequence $ C\rightarrowtail Q'\twoheadrightarrow M $ with $Q'\in \mathrm{add}(Q)$ and $C\in \mathfrak{F}(\{\Theta(i) : i> j\})$ by \cite[Prop. 2.10]{DMS}. However, this pair is not a right $1$-cotorsion pair cut on $\mathcal{S}_j.$ Indeed, if $M\in \mathcal{S}_j$ admits an exact sequence $M\rightarrowtail C'\twoheadrightarrow Q''$ with $C'\in \mathfrak{F}(\{\Theta(i) : i> j\})$ and $Q''\in \mathrm{add}(Q)$, then this sequence splits since $\mathrm{Ext}_{\Lambda}^{1}(Q, \mathfrak{F}(\Theta))=0$. Thus, $M\in \mathfrak{F}(\{\Theta(i) : i> j\})$, which contradicts the fact that $\min(M)=j$.\\ \underline{$\bullet$ Triangulated case:} Let $(\Theta, \underline{Q}, \leq)$ be a $\Theta$-projective system of size $t$ in an Artin triangulated category $\mathcal{C}.$ In a similar way as we did in the abelian case, by using \cite[Defs. 5.2 \& 5.12, Cor. 4.4 \& Prop. 6.2-(c)]{MendozaSantiagoHomologicalSystems}, we get that $(\mathrm{add}(Q), \mathfrak{F}(\{\Theta(i) : i> j\}))$ is a left $1$-cotorsion pair cut on $\mathcal{S}_{j},$ for each $j\in [1,t].$ In this case, we also have that $(\mathrm{add}(Q)[1], \mathfrak{F}(\{\Theta(i) : i> j\})[1])$ is a right $1$-cotorsion pair cut on $\mathcal{S}_{j},$ for each $j\in [1,t]$. \end{example} As we can see in the previous example, from left (respectively, right) cut cotorsion pairs it is sometimes possible to get right (respectively, left) ones depending on the context. Triangulated categories allow us to say more about this duality, and their shift functor plays a significant role, as we will see in Section~\ref{Sec: applications}. The following example is the extriangulated version of the previous one.
\begin{example}\label{ex: sistest ext} Let $k$ be a field and $\mathcal{C}:=(\mathcal{C}, \mathbb{E}, \mathfrak{s}, \mathbb{E}^{-1})$ be a $k$-linear Krull-Schmidt extriangulated category with negative first extension, and let $(\Theta:=(\Theta(1), \Theta(2), \ldots, \Theta(t)), \mathbb{P})$ be a mixed stratifying system of size $t$ in $\mathcal{C}$ \cite[Defs. 2.8 \& 3.13]{ATmixed}. Then, \begin{enumerate} \item $\mathrm{add}(\mathbb{P})$ is closed under direct summands and $\mathbb{E}(\mathrm{add}(\mathbb{P}), \mathfrak{F}(\Theta))=0$ \cite[Props. 3.15 \& 3.9]{ATmixed}. \item If $\mathfrak{F}(\Theta)$ is $\mathbb{E}$-finite (that is, for each $M, N\in \mathfrak{F}(\Theta)$, the $k$-vector space $\mathbb{E}(M, N)$ is finite dimensional) and $\Theta(i)$ is a stone, for every $i\in [1,t],$ we have that $\mathfrak{F}(\Theta)$ admits a finite sequence of universal extensions by $\Theta$ \cite[Prop. 3.3 \& Def. 3.2]{ATmixed}. Thus, every $M\in \mathfrak{F}(\Theta)\setminus {}^{\perp_{\mathbb{E}}}\mathfrak{F}(\Theta)$ admits a conflation $K\to P\to M$ with $P\in \mathrm{add}(\mathbb{P})$ and $K\in \mathfrak{F}(\Theta(\geq j)),$ where $j$ is the minimum value satisfying $\mathbb{E}(M, \Theta(j))\neq 0$ by \cite[Prop. 3.10-(a)]{ATmixed}. \end{enumerate} Therefore, under the previous hypotheses, $(\mathrm{add}(\mathbb{P}), \mathfrak{F}(\Theta(\geq j)))$ is a left $1$-cotorsion pair cut on $\mathcal{S}_{j}\subseteq \mathcal{C},$ where $\mathcal{S}_{j}$ consists of all $M\in \mathfrak{F}(\Theta)$ such that $j$ is the minimum value satisfying $\mathbb{E}(M, \Theta(j))\neq 0.$ \end{example} Let $(\mathcal{C}, \mathbb{E}, \mathfrak{s})$ be an extriangulated category and $\mathcal{D}\subseteq\mathcal{C}$ be closed under extensions and contain a zero object of $\mathcal{C}.$ Thus, $\mathcal{D}$ is closed under isomorphisms and is an additive subcategory of $\mathcal{C}.$ It is well-known that $\mathcal{D}$ is an extriangulated category with the extriangulated structure induced by the restriction of $\mathbb{E}$, denoted by $\mathbb{E}_{\mathcal{D}}$, and the restriction of $\mathfrak{s}$, denoted by $\mathfrak{s}_{\mathcal{D}}$ \cite[Rmk. 2.18]{Nakaoka1}. In case that $(\mathcal{A}, \mathcal{B})$ is a cotorsion pair cut on $\mathcal{S}\subseteq \mathcal{C}$ with $\mathcal{A}, \mathcal{B}\subseteq \mathcal{S}$ and $\mathcal{S}$ is a subcategory closed under isomorphisms, extensions and direct summands of $\mathcal{C}$, the pair $(\mathcal{A},\mathcal{B})$ can be considered as a complete cotorsion pair in the extriangulated category $\mathcal{S}=(\mathcal{S}, \mathbb{E}_{\mathcal{S}}, \mathfrak{s}_{\mathcal{S}})$ as well. However, in general, a cut cotorsion pair may not be a complete cotorsion pair in an extriangulated category induced by a subcategory of $\mathcal{C}$. \begin{example}\label{ex: GP, P2} Consider the setting in Example~\ref{ex: GP, P}. Notice first that the existence of the pair $(\mathcal{GP}(\xi), \mathcal{P}(\xi))$ as a left Frobenius pair in $(\mathcal{C}, \mathbb{E}_{\xi}, \mathfrak{s}_{\xi})$ implies that $(\mathcal{P}(\xi), \mathcal{P}(\xi))$ is a left Frobenius pair in $(\mathcal{C}, \mathbb{E}_{\xi}, \mathfrak{s}_{\xi})$ as well (for the closedness conditions see \cite[Def. 4.1]{HZZgorensteinness} and \cite[Prop. 3.24]{Nakaoka1}). Thus, $\mathcal{P}(\xi)^{\wedge}$ is thick and $\mathrm{id}_{\mathcal{P}(\xi)}(\mathcal{P}(\xi)^{\wedge})=0$ from Remark~\ref{rmk: frob1}. We assert the following.
\begin{enumerate} \item[$\bullet$] \underline{$(\mathcal{GP}(\xi),\mathcal{P}(\xi)^{\wedge})$ is a $1$-cotorsion pair cut on $\mathcal{GP}(\xi)^{\wedge}$ in $(\mathcal{C}, \mathbb{E}_{\xi}, \mathfrak{s}_{\xi})$.} Indeed, $\mathcal{P}(\xi)^{\wedge}$ is closed under direct summands and $\mathbb{E}_{\xi}^{i}(\mathcal{GP}(\xi), \mathcal{P}(\xi)^{\wedge})=0$ for all $i\geq 1$ by the paragraph above, while $\mathcal{GP}(\xi)$ is closed under direct summands by \cite[Thm. 4.17]{HZZgorensteinness}. Moreover, for every $M\in \mathcal{GP}(\xi)^{\wedge}$, there exist $\mathbb{E}$-triangles $K\to G\to M\dashrightarrow$ and $M\to L\to G'\dashrightarrow$ in $\xi$ such that $G, G'\in \mathcal{GP}(\xi)$ and $K, L\in \mathcal{P}(\xi)^{\wedge}$ from \cite[Prop. 5.5]{HZZgorensteinness}. \item[$\bullet$] \underline{$(\mathcal{P}(\xi), \mathcal{GP}(\xi)^{\perp})$ is a $1$-cotorsion pair cut on $\mathcal{P}(\xi)^{\wedge}$ in $(\mathcal{C}, \mathbb{E}_{\xi}, \mathfrak{s}_{\xi})$.} It is clear that $\mathcal{P}(\xi)$ and $\mathcal{GP}(\xi)^{\perp}$ are closed under direct summands. Now, $\mathbb{E}_{\xi}^{i}(\mathcal{P}(\xi), \mathcal{GP}(\xi)^{\perp})=0$ for all $i\geq 1$ since $\mathcal{P}(\xi)\subseteq \mathcal{GP}(\xi)$. Finally, for every $M\in \mathcal{P}(\xi)^{\wedge}$, there exist two $\mathbb{E}$-triangles $K\to P\to M\dashrightarrow$ and $M\to M\to 0\dashrightarrow$ in $\xi$ with $P\in \mathcal{P}(\xi)$ and $K\in \mathcal{P}(\xi)^{\wedge}\subseteq \mathcal{GP}(\xi)^{\perp}$ (see Example~\ref{ex: GP, P}). \end{enumerate} From the first pair, it is easy to see that $(\mathcal{GP}(\xi),\mathcal{P}(\xi)^{\wedge})$ is a $1$-cotorsion pair cut on $\mathcal{P}(\xi)^{\wedge}$. However, $(\mathcal{GP}(\xi),\mathcal{P}(\xi)^{\wedge})$ cannot be considered as a cotorsion pair in the extriangulated category $\mathcal{P}(\xi)^{\wedge}$ since in general the containment $\mathcal{GP}(\xi)\subseteq \mathcal{P}(\xi)^{\wedge}$ does not hold true \cite[Prop. 5.4]{HZZgorensteinness}. \end{example} \begin{proposition}\label{pro: 1cot<->complete cot} Let $\mathcal{C}$ be an extriangulated category and $\mathcal{S}\subseteq \mathcal{C}$ be closed under extensions and direct summands. Then, $(\mathcal{A}, \mathcal{B})$ is a $1$-cotorsion pair cut on $\mathcal{S}$ with $\mathcal{A}, \mathcal{B}\subseteq \mathcal{S}$ if, and only if, $(\mathcal{A}, \mathcal{B})$ is a cotorsion pair in the extriangulated category $\mathcal{S}$. \end{proposition} \begin{proof} It follows from the fact that any coproduct $X\oplus Y$ in $\mathcal{C},$ with $X,Y\in\mathcal{S},$ belongs to $\mathcal{S}$ since $\mathcal{S}$ is closed under extensions and direct summands in $\mathcal{C}.$ \end{proof} There are some constructions of extriangulated categories which are neither exact nor triangulated \cite[Prop. 3.30]{Nakaoka1}. Some of these constructions have been studied, for instance, in \cite{happel1988triangulated, beligiannis2000relative}, where it is established that, under certain conditions, quotient categories carry a triangulated structure. With this in mind, different constructions of triangulated quotient categories are unified in \cite{zhou2018triangulated}. One of them yields Frobenius extriangulated categories by considering functorially finite subcategories of a triangulated category with Auslander-Reiten translation. Recall that an extriangulated category $\mathcal{C}$ is Frobenius if $\mathcal{C}$ has enough $\mathbb{E}$-projectives and $\mathbb{E}$-injectives and $\mathcal{P}_{\mathbb{E}}(\mathcal{C})=\mathcal{I}_{\mathbb{E}}(\mathcal{C})$.
Specifically, they proved the following result. \begin{corollary}\cite[Cor. 4.10-(2)]{zhou2018triangulated} Let $\mathcal{C}$ be a triangulated category with Auslander-Reiten translation $\tau$, and let $\mathcal{X}$ be a functorially finite subcategory of $\mathcal{C}$ satisfying $\tau \mathcal{X}=\mathcal{X}$. For any $A, C\in \mathcal{C}$, define $\mathbb{E}'(C, A)\subseteq\mathcal{C}(C, A[1])$ to be the collection of all equivalence classes of triangles of the form $A\mathop{\to}\limits^{f} B\mathop{\to}\limits^{g} C\to A[1]$, where $f$ is $\mathcal{X}$-monic \cite[Def. 3.1]{zhou2018triangulated}, and let $\mathfrak{s}'(\delta):=[A\mathop{\to}\limits^{f} B\mathop{\to}\limits^{g} C]$, for any $\delta\in \mathbb{E}'(C, A)$. Then, $(\mathcal{C}, \mathbb{E}', \mathfrak{s}')$ is a Frobenius extriangulated category whose $\mathbb{E}'$-projective objects are precisely $\mathcal{X}$. \end{corollary} \begin{example}\label{ex: ni exact ni triang} Let $\mathcal{C}$ be a triangulated category with Auslander-Reiten translation $\tau$, and let $\mathcal{X}$ be a functorially finite subcategory of $\mathcal{C}$ satisfying $\tau \mathcal{X}=\mathcal{X}$. Then, the pair $(\mathcal{X}, \mathcal{C})$ is an $n$-cotorsion pair cut on $\mathcal{S}$ in the extriangulated category $(\mathcal{C}, \mathbb{E}', \mathfrak{s}')$, for any $\mathcal{S}\subseteq \mathcal{C}$ and any $n\geq 1$. If in addition $\{0\}\neq \mathcal{X}\subsetneqq \mathcal{C}$, then $(\mathcal{C}, \mathbb{E}', \mathfrak{s}')$ is neither exact nor triangulated \cite[Rmk. 4.11]{zhou2018triangulated}. \end{example} \section{\textbf{Cut AB-contexts vs. Cut $n$-cotorsion pairs}} We begin by proving results that relate, under certain conditions, the concepts introduced in Sections 3 and 4. These results will be used in the last part of this section to prove the second correspondence theorem of this work. \begin{proposition}\label{prop: ABcontext->cut1Cot} Let $\mathcal{C}$ be an extriangulated category with higher extension groups, $(\mathcal{A}, \mathcal{B})$ be a left weak AB context cut on $\mathcal{S}$ and $\omega:=\mathcal{A}\cap \mathcal{B}$. Then, the following statements hold true. \begin{enumerate} \item $(\omega\cap \mathcal{S})^{\wedge}=\mathcal{B}\cap \mathcal{S}, \mathrm{id}_{\mathcal{A}\cap\mathcal{S}}(\mathcal{B}\cap\mathcal{S})=0$ and $\mathsf{Thick}(\mathcal{A}\cap\mathcal{S})=(\mathcal{A}\cap\mathcal{S})^{\wedge}.$ \item $(\mathcal{A}\cap \mathcal{S}, \mathcal{B}\cap \mathcal{S})$ is a 1-cotorsion pair cut on $\mathsf{Thick}(\mathcal{A}\cap\mathcal{S})$ with $\mathrm{id}_{\mathcal{A}\cap\mathcal{S}}(\mathcal{B}\cap\mathcal{S})=0$. \end{enumerate} \end{proposition} \begin{proof} (1) The containment $(\omega\cap \mathcal{S})^{\wedge}\subseteq\mathcal{B}\cap \mathcal{S}$ is clear since $\mathcal{B}\cap \mathcal{S}$ is closed under cones. On the other hand, let $B\in \mathcal{B}\cap \mathcal{S}$. Since $(\mathcal{A}\cap \mathcal{S}, \omega\cap \mathcal{S})$ is a left Frobenius pair in $\mathcal{C}$ and $\mathcal{B}\cap\mathcal{S}\subseteq (\mathcal{A}\cap\mathcal{S})^{\wedge}$, there is an $\mathbb{E}$-triangle $K\to A\to B\dashrightarrow$ with $K\in (\omega\cap \mathcal{S})^{\wedge}\subseteq \mathcal{B}\cap\mathcal{S}$ and $A\in \mathcal{A}\cap\mathcal{S}$ from \cite[Thm. 3.7]{MDZtheoryAB}. Moreover, $A\in \omega\cap \mathcal{S}$ since $\mathcal{B}\cap\mathcal{S}$ is closed under extensions. Thus, by considering $K\to A\to B\dashrightarrow$ we get that $B\in (\omega\cap\mathcal{S})^{\wedge}$.
Hence, $(\omega\cap \mathcal{S})^{\wedge}=\mathcal{B}\cap \mathcal{S}$ holds true. Moreover, from \cite[Lem. 3.8]{MDZtheoryAB} we also have that $\mathrm{id}_{\mathcal{A}\cap \mathcal{S}}(\mathcal{B}\cap\mathcal{S})=\mathrm{id}_{\mathcal{A}\cap\mathcal{S}}((\omega\cap\mathcal{S})^{\wedge})= \mathrm{id}_{\mathcal{A}\cap\mathcal{S}}(\omega\cap \mathcal{S})=0$. Finally, the last equality follows from \cite[Prop. 3.8]{ma2021new}. (2) Since $\mathcal{A}\cap\mathcal{S}$ and $\mathcal{B}\cap\mathcal{S}$ are closed under direct summands (by definition), since $\mathbb{E}(\mathcal{A}\cap\mathcal{S}, \mathcal{B}\cap\mathcal{S})=0$ by item (1), and since the $\mathbb{E}$-triangles required in Definition~\ref{def: CnCP ext} exist by \cite[Thm. 3.7]{MDZtheoryAB}, we can conclude that $(\mathcal{A}\cap\mathcal{S}, \mathcal{B}\cap\mathcal{S})$ is a 1-cotorsion pair cut on $(\mathcal{A}\cap\mathcal{S})^{\wedge}$. \end{proof} \begin{lemma}\label{lem:Frobenius_from_cut_cotorsion} Let $\mathcal{C}$ be an extriangulated category with higher extension groups and let $(\mathcal{F},\mathcal{G})$ be a $1$-cotorsion pair cut on $\mathsf{Thick}(\mathcal{F})$ such that $\mathrm{id}_{\mathcal{F}}(\mathcal{G}) = 0$. Then, the following statements hold true. \begin{enumerate} \item $(\mathcal{F},\mathcal{F} \cap \mathcal{G})$ is a left Frobenius pair in $\mathcal{C}$ and $\mathsf{Thick}(\mathcal{F})=\mathcal{F}^{\wedge}$. \item $\mathcal{F} \cap \mathcal{G} = \mathcal{F} \cap \mathcal{F}^{\perp_1} = \mathcal{F} \cap (\mathcal{F} \cap \mathcal{G})^\wedge$. \item $(\mathcal{F} \cap \mathcal{G})^\wedge = \mathcal{F}^{\perp} \cap \mathcal{F}^\wedge= \mathcal{G} \cap \mathcal{F}^{\wedge}$. \end{enumerate} \end{lemma} \begin{proof} (1) From Definition~\ref{def: CnCP ext}, it is easy to see that the equalities $$\mathcal{F}=\mathcal{F}\cap \mathsf{Thick}(\mathcal{F})={}^{\perp_1}\mathcal{G}\cap \mathsf{Thick}(\mathcal{F}) \mbox{ and }\mathcal{G}\cap \mathsf{Thick}(\mathcal{F})=\mathcal{F}^{\perp_1}\cap \mathsf{Thick}(\mathcal{F})$$ hold true. By using this, along with the condition $\mathrm{id}_{\mathcal{F}}(\mathcal{G})=0,$ one can prove that $\mathcal{F}$ is closed under extensions and cocones. Moreover, since $\mathcal{F}$ and $\mathcal{G}$ are closed under direct summands, so is $\mathcal{F}\cap \mathcal{G}$. Finally, $\mathcal{F}\cap\mathcal{G}$ is an $\mathcal{F}$-injective cogenerator in $\mathcal{F}$ and $\mathrm{id}_{\mathcal{F}}((\mathcal{F}\cap \mathcal{G})^{\wedge})=0$ from Definition~\ref{def: CnCP ext}-(3) and \cite[Prop. 3.9]{MDZtheoryAB}. The equality $\mathsf{Thick}(\mathcal{F})=\mathcal{F}^{\wedge}$ follows by \cite[Prop. 3.8]{ma2021new}. (2) Since $\mathcal{G}\cap \mathsf{Thick}(\mathcal{F})=\mathcal{F}^{\perp_1}\cap \mathsf{Thick}(\mathcal{F})$ holds, so does $\mathcal{F}\cap \mathcal{G}=\mathcal{F}\cap\mathcal{F}^{\perp_1}$. For the equality $\mathcal{F} \cap \mathcal{F}^{\perp_1} = \mathcal{F} \cap (\mathcal{F} \cap \mathcal{G})^\wedge$, since $\mathrm{id}_{\mathcal{F}}((\mathcal{F}\cap \mathcal{G})^{\wedge})=0$ we get the inclusion $\mathcal{F} \cap \mathcal{F}^{\perp_1} \supseteq \mathcal{F} \cap (\mathcal{F} \cap \mathcal{G})^\wedge$. On the other hand, if $M\in \mathcal{F}\cap \mathcal{F}^{\perp_{1}}$, we can consider an $\mathbb{E}$-triangle $M\to G\to F\dashrightarrow$ with $F\in \mathcal{F}$ and $G\in \mathcal{G}$, which is a split $\mathbb{E}$-triangle since $M\in \mathcal{F}^{\perp_{1}}$. Hence, $M\in \mathcal{G}$ since $\mathcal{G}$ is closed under direct summands.
(3) The first equality is due to \cite[Lem. 3.10]{ma2021new}. On the other hand, the equality $\mathcal{F}^{\perp} \cap \mathcal{F}^\wedge= \mathcal{G} \cap \mathcal{F}^{\wedge}$ follows as the second one in (2). \end{proof} \begin{lemma}\label{lem:correspondence_2} Let $\mathcal{C}$ be an extriangulated category with higher extension groups, $\mathcal{S}\subseteq \mathcal{C}$ be a thick subcategory of $\mathcal{C}$ and let $(\mathcal{F},\mathcal{G})$ be a 1-cotorsion pair cut on $\mathsf{Thick}(\mathcal{F})$ such that $\mathrm{id}_{\mathcal{F}}(\mathcal{G}) = 0$. If $\mathcal{F} \cap \mathcal{G} \cap \mathcal{S}$ is both a generator and a cogenerator in $\mathcal{F} \cap \mathcal{G}$, then $(\mathcal{F}, \mathcal{F} \cap \mathcal{G})$ is a left Frobenius pair cut on $\mathcal{S}$. \end{lemma} \begin{proof} From Lemma \ref{lem:Frobenius_from_cut_cotorsion} we get that $(\mathcal{F}, \mathcal{F}\cap \mathcal{G})$ is a left Frobenius pair in $\mathcal{C}$, and so by Remark~\ref{rmk: frob} we have that $\mathcal{F}$ is closed under extensions, direct summands and cocones, and $\mathcal{F}\cap \mathcal{G}$ is closed under extensions and direct summands. Now, since $\mathcal{F}\cap \mathcal{G}\cap \mathcal{S}$ is a generator in $\mathcal{F}\cap \mathcal{G}$ by assumption and $\mathrm{id}_{\mathcal{F}}(\mathcal{G})=0,$ we have that condition (3) in Definition~\ref{def: cut left Frobenius pair} also holds. Thus, it remains to show that $(\mathcal{F} \cap \mathcal{S}, \mathcal{F} \cap \mathcal{G} \cap \mathcal{S})$ is a left Frobenius pair in $\mathcal{C}$. Indeed, $\mathcal{F}\cap\mathcal{S}$ is closed under extensions, direct summands and cocones since $\mathcal{F}$ and $\mathcal{S}$ satisfy these conditions. In a similar way, since $\mathcal{F}, \mathcal{G}$ and $\mathcal{S}$ are closed under direct summands, so is $\mathcal{F}\cap \mathcal{G}\cap \mathcal{S}$. Now, using that $\mathrm{id}_{\mathcal{F}}(\mathcal{G}) = 0$, it is only left to show that $\mathcal{F} \cap \mathcal{G} \cap \mathcal{S}$ is a cogenerator in $\mathcal{F} \cap \mathcal{S}$. For this, since $(\mathcal{F},\mathcal{G})$ is a 1-cotorsion pair cut on $\mathsf{Thick}(\mathcal{F})$, for any $F \in \mathcal{F} \cap \mathcal{S}$ there is an $\mathbb{E}$-triangle $F \to G \to F'\dashrightarrow$ with $G \in \mathcal{F} \cap \mathcal{G}$ and $F' \in \mathcal{F}$, as $\mathcal{F}$ is closed under extensions. Thus, by using that $\mathcal{F} \cap \mathcal{G} \cap \mathcal{S}$ is a cogenerator in $\mathcal{F} \cap \mathcal{G}$, we can find an $\mathbb{E}$-triangle $G \to L \to G'\dashrightarrow$ with $L \in \mathcal{F} \cap \mathcal{G} \cap \mathcal{S}$ and $G' \in \mathcal{F} \cap \mathcal{G}$. The result follows after applying (ET4) to the $\mathbb{E}$-triangles $F \to G \to F'\dashrightarrow$ and $G \to L \to G'\dashrightarrow$. \end{proof} For a fixed class of objects $\mathcal{S} \subseteq \mathcal{C}$, we denote by $\mathfrak{P}_{\mathcal{S}}$ the class of pairs $(\mathcal{F},\mathcal{G})$ of classes of objects in $\mathcal{C}$ such that $(\mathcal{F}, \mathcal{G})$ is a 1-cotorsion pair cut on $\mathsf{Thick}(\mathcal{F})$ with $\mathrm{id}_{\mathcal{F}}(\mathcal{G}) = 0$ and $\mathcal{F} \cap \mathcal{G} \cap \mathcal{S}$ is both a generator and a cogenerator in $\mathcal{F} \cap \mathcal{G}$. We define an equivalence relation on $\mathfrak{P}_{\mathcal{S}}$ as follows. \begin{definition}\label{def:equiv rel cut cot} Let $(\mathcal{F},\mathcal{G}), (\mathcal{F}',\mathcal{G}') \in \mathfrak{P}_{\mathcal{S}}$.
We shall say that $(\mathcal{F},\mathcal{G})$ is \textbf{related} to $(\mathcal{F}',\mathcal{G}')$ in $\mathfrak{P}_{\mathcal{S}}$, denoted $(\mathcal{F},\mathcal{G}) \sim (\mathcal{F}',\mathcal{G}')$, if $\mathcal{F} \cap \mathcal{S} = \mathcal{F}' \cap \mathcal{S}$ and $\mathcal{F} \cap \mathcal{G} \cap \mathcal{S} = \mathcal{F}' \cap \mathcal{G}' \cap \mathcal{S}$. We denote by $[\mathcal{F},\mathcal{G}]_{\mathfrak{P}_{\mathcal{S}}}$ the equivalence class of the representative $(\mathcal{F},\mathcal{G}) \in \mathfrak{P}_{\mathcal{S}}$. \end{definition} We are now ready to show that there exists a one-to-one correspondence between the quotient classes $\mathfrak{P}_{\mathcal{S}} /\!\!\sim$ and $\mathfrak{C}_{\mathcal{S}} /\!\!\sim$. This result generalizes \cite[Thm. 4.12]{HMPcut}. \begin{theorem}[Second correspondence theorem]\label{theo:correspondence_2} Let $\mathcal{C}$ be an extriangulated category with higher extension groups and let $\mathcal{S}\subseteq \mathcal{C}$ be a thick subcategory. Then, the correspondence $\Lambda_{\mathcal{S}} : \mathfrak{P}_{\mathcal{S}} /\!\! \sim\, \to \mathfrak{C}_{\mathcal{S}} /\!\! \sim,\;[\mathcal{F},\mathcal{G}]_{\mathfrak{P}_{\mathcal{S}}} \mapsto [\mathcal{F},(\mathcal{F} \cap \mathcal{G})^\wedge]_{\mathfrak{C}_{\mathcal{S}}},$ is bijective with inverse $\Upsilon_{\mathcal{S}}$ given by $[\mathcal{A},\mathcal{B}]_{\mathfrak{C}_{\mathcal{S}}} \mapsto [\mathcal{A} \cap \mathcal{S},\mathcal{B} \cap \mathcal{S}]_{\mathfrak{P}_{\mathcal{S}}}$. \end{theorem} \begin{proof} First, we see that the mappings $\Lambda_{\mathcal{S}}$ and $\Upsilon_{\mathcal{S}}$ are well-defined. On the one hand, if $(\mathcal{F},\mathcal{G})\sim (\mathcal{F}',\mathcal{G}')$ in $\mathfrak{P}_{\mathcal{S}}$, by Lemma \ref{lem:correspondence_2} and Theorem \ref{theo:correspondence_1}, we have that $(\mathcal{F},(\mathcal{F} \cap \mathcal{G})^\wedge)$ and $(\mathcal{F}',(\mathcal{F}' \cap \mathcal{G}')^\wedge)$ are left weak AB-contexts cut on $\mathcal{S}$. Moreover, from Lemma \ref{lem:Frobenius_from_cut_cotorsion}-(2) we get the equalities $\mathcal{F} \cap \mathcal{G} \cap \mathcal{S} = \mathcal{F} \cap (\mathcal{F} \cap \mathcal{G})^\wedge \cap \mathcal{S}$ and $\mathcal{F}' \cap \mathcal{G}' \cap \mathcal{S} = \mathcal{F}' \cap (\mathcal{F}' \cap \mathcal{G}')^\wedge \cap \mathcal{S}$ and so $(\mathcal{F},(\mathcal{F} \cap \mathcal{G})^\wedge) \sim (\mathcal{F}', (\mathcal{F}' \cap \mathcal{G}')^\wedge)$ in $\mathfrak{C}_{\mathcal{S}}$. Hence, $\Lambda_{\mathcal{S}}$ does not depend on representatives. On the other hand, let $(\mathcal{A},\mathcal{B}) \in \mathfrak{C}_{\mathcal{S}}$. By Proposition~\ref{prop: ABcontext->cut1Cot} we have that $(\mathcal{A} \cap \mathcal{S}, \mathcal{B} \cap \mathcal{S})$ is a 1-cotorsion pair cut on $\mathsf{Thick}(\mathcal{A}\cap\mathcal{S})$ satisfying $\mathrm{id}_{\mathcal{A} \cap \mathcal{S}}(\mathcal{B} \cap \mathcal{S}) = 0$. Furthermore, it is clear that $\mathcal{A}\cap \mathcal{B}\cap\mathcal{S}$ is a generator and cogenerator in itself, which implies that $(\mathcal{A}\cap\mathcal{S}, \mathcal{B}\cap\mathcal{S})\in \mathfrak{P}_{\mathcal{S}}$. Finally, it is also clear that $\Upsilon_{\mathcal{S}}$ does not depend on representatives. The fact that $\Upsilon_{\mathcal{S}}$ is the inverse of $\Lambda_{\mathcal{S}}$ follows from Lemma \ref{lem:Frobenius_from_cut_cotorsion} and Proposition~\ref{prop: ABcontext->cut1Cot}. \end{proof} \begin{example} Consider Example~\ref{ex: GP, P2}. 
We know that $\mathcal{GP}(\xi)$ and $\mathcal{P}(\xi)$ are closed under extensions and cocones by \cite[Cor. 3.23]{Hegorensteinobjects}. By taking $\mathcal{S}:=\mathcal{P}(\xi)^{\wedge},$ we assert that $(\mathcal{GP}(\xi), \mathcal{P}(\xi)^{\wedge})$ and $(\mathcal{P}(\xi), \mathcal{GP}(\xi)^{\perp})$ belong to $\mathfrak{P}_{\mathcal{S}}$. Indeed, this follows from the equalities $\mathcal{GP}(\xi)\cap \mathcal{P}(\xi)^{\wedge}=\mathcal{P}(\xi)=\mathcal{P}(\xi)\cap \mathcal{GP}(\xi)^{\perp}$ \cite[Prop. 5.4]{HZZgorensteinness} and the fact that $\mathcal{P}(\xi)$ is a generator and cogenerator in itself. Moreover, from the previous equalities, we get that such pairs are related in $\mathfrak{P}_{\mathcal{S}}$. \[ \xymatrix{ & [\mathcal{GP}(\xi), \mathcal{P}(\xi)^{\wedge}]_{\mathfrak{C}_{\mathcal{S}}}\ar[dr]^{\Upsilon_{\mathcal{S}}} & \\ [\mathcal{GP}(\xi), \mathcal{P}(\xi)^{\wedge}]_{\mathfrak{P}_{\mathcal{S}}}\ar[ur]^{\Lambda_{\mathcal{S}}} \ar@{=}[rr] & & [\mathcal{P}(\xi), \mathcal{P}(\xi)^{\wedge}]_{\mathfrak{P}_{\mathcal{S}}} } \] \[ \xymatrix{ & [\mathcal{P}(\xi), \mathcal{P}(\xi)^{\wedge}]_{\mathfrak{C}_{\mathcal{S}}}\ar[dr]^{\Upsilon_{\mathcal{S}}} & \\ [\mathcal{P}(\xi), \mathcal{GP}(\xi)^{\perp}]_{\mathfrak{P}_{\mathcal{S}}}\ar[ur]^{\Lambda_{\mathcal{S}}} \ar@{=}[rr] & & [\mathcal{P}(\xi), \mathcal{P}(\xi)^{\wedge}]_{\mathfrak{P}_{\mathcal{S}}} } \] \end{example} As the previous example shows, one equivalence class may have several representatives. In the following result, we prove that the previous correspondence remains valid if we restrict to certain representatives of cut cotorsion pairs. For a fixed class $\mathcal{S}\subseteq \mathcal{C}$, we denote by $\widetilde{\mathfrak{P}}_{\mathcal{S}}$ the subclass of $\mathfrak{P}_{\mathcal{S}}$ whose elements are the pairs $(\mathcal{F},\mathcal{G})\in \mathfrak{P}_{\mathcal{S}}$ satisfying $\mathcal{G}\subseteq \mathsf{Thick}(\mathcal{F})$. \begin{corollary}\label{cor:correspondence_3} Let $\mathcal{C}$ be an extriangulated category with higher extension groups and let $\mathcal{S}\subseteq \mathcal{C}$ be a thick subcategory. Then, the correspondence $$\widetilde{\Lambda}_{\mathcal{S}} \colon \widetilde{\mathfrak{P}}_{\mathcal{S}} /\!\! \sim \mbox{} \to \mathfrak{C}_{\mathcal{S}} /\!\! \sim,\;[\mathcal{F},\mathcal{G}]_{\mathfrak{P}_{\mathcal{S}}} \mapsto [\mathcal{F}, \mathcal{G}]_{\mathfrak{C}_{\mathcal{S}}},$$ is bijective, with inverse $\widetilde{\Upsilon}_{\mathcal{S}}$ given by $[\mathcal{A},\mathcal{B}]_{\mathfrak{C}_{\mathcal{S}}} \mapsto [\mathcal{A} \cap \mathcal{S},\mathcal{B} \cap \mathcal{S}]_{\mathfrak{P}_{\mathcal{S}}}$. \end{corollary} \begin{proof} Let $(\mathcal{F},\mathcal{G})\sim (\mathcal{F}',\mathcal{G}')$ in $\widetilde{\mathfrak{P}}_{\mathcal{S}}$, that is, $\mathcal{F}\cap \mathcal{S}=\mathcal{F}'\cap \mathcal{S}$ and $\mathcal{F}\cap \mathcal{G}\cap \mathcal{S}=\mathcal{F}'\cap \mathcal{G}'\cap \mathcal{S}$. By Lemma \ref{lem:correspondence_2} and Theorem \ref{theo:correspondence_1}, we have that $(\mathcal{F},(\mathcal{F} \cap \mathcal{G})^\wedge)$ and $(\mathcal{F}',(\mathcal{F}' \cap \mathcal{G}')^\wedge)$ are left weak AB-contexts cut on $\mathcal{S}$. Moreover, since both pairs are in $\widetilde{\mathfrak{P}}_{\mathcal{S}}$, we get that $(\mathcal{F}\cap \mathcal{G})^{\wedge}=\mathcal{G}$ and $(\mathcal{F}'\cap \mathcal{G}')^{\wedge}=\mathcal{G}'$ from Lemma~\ref{lem:Frobenius_from_cut_cotorsion}-(3).
Thus, $(\mathcal{F}, \mathcal{G})\sim (\mathcal{F}', \mathcal{G}')$ in $\mathfrak{C}_{\mathcal{S}}$. Therefore, $\widetilde{\Lambda}_{\mathcal{S}}$ is well-defined. Let $(\mathcal{A},\mathcal{B}) \in \mathfrak{C}_{\mathcal{S}}$. From Theorem~\ref{theo:correspondence_2}, Definition~\ref{def: cut left AB context}-(3) and Proposition~\ref{prop: ABcontext->cut1Cot}-(1), we have that $(\mathcal{A}\cap\mathcal{S}, \mathcal{B}\cap\mathcal{S})\in \mathfrak{P}_{\mathcal{S}}$, $\mathcal{B}\cap \mathcal{S}\subseteq (\mathcal{A}\cap \mathcal{S})^{\wedge}$ and $(\mathcal{A}\cap \mathcal{S})^{\wedge}=\mathsf{Thick}(\mathcal{A}\cap \mathcal{S})$, respectively. Hence, $(\mathcal{A}\cap\mathcal{S}, \mathcal{B}\cap \mathcal{S})\in \widetilde{\mathfrak{P}}_{\mathcal{S}}$. Furthermore, it is clear that $(\mathcal{A}\cap\mathcal{S}, \mathcal{B}\cap \mathcal{S})\sim (\mathcal{A}'\cap\mathcal{S}, \mathcal{B}'\cap \mathcal{S})$ in $\widetilde{\mathfrak{P}}_{\mathcal{S}}$ if $(\mathcal{A}, \mathcal{B})\sim (\mathcal{A}', \mathcal{B}')$ in $\mathfrak{C}_{\mathcal{S}}$. Therefore, $\widetilde{\Upsilon}_{\mathcal{S}}$ is well-defined. Finally, we show that they are mutually inverse. In fact, by definition of $\widetilde{\Lambda}_{\mathcal{S}}$ and $\widetilde{\Upsilon}_{\mathcal{S}}$, we have that $[\mathcal{F}, \mathcal{G}]_{\mathfrak{P}_{\mathcal{S}}}\mapsto [\mathcal{F}, \mathcal{G}]_{\mathfrak{C}_{\mathcal{S}}} \mapsto [\mathcal{F}\cap\mathcal{S}, \mathcal{G}\cap \mathcal{S}]_{\mathfrak{P}_{\mathcal{S}}}$, for any $(\mathcal{F}, \mathcal{G})\in \widetilde{\mathfrak{P}}_{\mathcal{S}}$, and $[\mathcal{A}, \mathcal{B}]_{\mathfrak{C}_{\mathcal{S}}}\mapsto [\mathcal{A}\cap \mathcal{S}, \mathcal{B}\cap \mathcal{S}]_{\mathfrak{P}_{\mathcal{S}}}\mapsto [\mathcal{A}\cap \mathcal{S}, \mathcal{B}\cap \mathcal{S}]_{\mathfrak{C}_{\mathcal{S}}}$, for any $(\mathcal{A}, \mathcal{B})\in \mathfrak{C}_{\mathcal{S}}$. In both compositions, it is clear that the first and third equivalence classes coincide. \end{proof} \begin{remark}\label{rmk: S=C} By taking $\mathcal{S}:=\mathcal{C}$ in Definitions~\ref{def:Frobenius_and_AB_relations} and~\ref{def:equiv rel cut cot}, the following hold true. \begin{enumerate} \item For any $(\mathcal{X},\omega), (\mathcal{X}',\omega')\in \mathfrak{F}_{\mathcal{S}}$, $(\mathcal{X},\omega)\sim (\mathcal{X}',\omega')$ if, and only if, $\mathcal{X}=\mathcal{X}'$ and $\omega=\omega'$. \item For any $(\mathcal{A},\mathcal{B}), (\mathcal{A}',\mathcal{B}')\in \mathfrak{C}_{\mathcal{S}}$, $(\mathcal{A},\mathcal{B})\sim (\mathcal{A}',\mathcal{B}')$ if, and only if, $\mathcal{A}=\mathcal{A}'$ and $\mathcal{A}\cap\mathcal{B}=\mathcal{A}'\cap\mathcal{B}'$ if, and only if, $\mathcal{A}=\mathcal{A}'$ and $\mathcal{B}=\mathcal{B}'$, by Proposition~\ref{prop: ABcontext->cut1Cot}. \item For any $(\mathcal{F},\mathcal{G}), (\mathcal{F}',\mathcal{G}')\in \widetilde{\mathfrak{P}}_{\mathcal{S}}$, $(\mathcal{F},\mathcal{G})\sim (\mathcal{F}',\mathcal{G}')$ if, and only if, $\mathcal{F}=\mathcal{F}'$ and $\mathcal{F}\cap \mathcal{G}=\mathcal{F}\cap\mathcal{G}'$ if, and only if, $\mathcal{F}=\mathcal{F}'$ and $\mathcal{G}=\mathcal{G}'$, by Lemma~\ref{lem:Frobenius_from_cut_cotorsion}-(3). So, every equivalence class in $\widetilde{\mathfrak{P}}_{\mathcal{S}}/\sim$ has only one element. \end{enumerate} \end{remark} \begin{corollary}\cite[Thm. 3.12]{ma2021new}\label{cor: leftFrob<->cotor pairs in Thick} Let $\mathcal{C}$ be an extriangulated category with higher extension groups.
Then, the maps $(\mathcal{X}, \omega)\mapsto (\mathcal{X}, \omega^{\wedge}) \mbox{ and } (\mathcal{F}, \mathcal{G})\mapsto (\mathcal{F}, \mathcal{F}\cap \mathcal{G})$ give mutually inverse one-to-one correspondences between the following classes: \begin{enumerate} \item Left Frobenius pairs $(\mathcal{X}, \omega)$ in $\mathcal{C}$. \item Cotorsion pairs $(\mathcal{F}, \mathcal{G})$ in the extriangulated category $\mathsf{Thick}(\mathcal{F})$ with $\mathrm{id}_{\mathcal{F}}(\mathcal{G})=0$. \end{enumerate} \end{corollary} \begin{proof} Taking $\mathcal{S}:=\mathcal{C}$, by Remark~\ref{rmk: S=C} every equivalence class in $\mathfrak{F}_{\mathcal{S}}/\!\!\sim$, $\mathfrak{C}_{\mathcal{S}}/\!\!\sim$ and $\widetilde{\mathfrak{P}}_{\mathcal{S}}/\!\!\sim$ has exactly one element. Thus, from Theorem~\ref{theo:correspondence_1} and Corollary~\ref{cor:correspondence_3}, we get that \begin{align*} \widetilde{\Upsilon}_{\mathcal{S}}\Phi_{\mathcal{S}}: \mathfrak{F}_{\mathcal{S}}/\!\!\sim\, \to \widetilde{\mathfrak{P}}_{\mathcal{S}}/\!\!\sim \quad \mbox{ is given by }\quad (\mathcal{X}, \omega)\mapsto (\mathcal{X}, \omega^{\wedge}), \\ \Psi_{\mathcal{S}}\widetilde{\Lambda}_{\mathcal{S}}: \widetilde{\mathfrak{P}}_{\mathcal{S}}/\!\!\sim \to \mathfrak{F}_{\mathcal{S}}/\!\!\sim \quad \mbox{ is given by }\quad (\mathcal{F}, \mathcal{G})\mapsto (\mathcal{F}, \mathcal{F}\cap\mathcal{G}). \end{align*} Notice that if $(\mathcal{X}, \omega)$ is a left Frobenius pair, then $(\mathcal{X}, \omega^{\wedge})\in \mathfrak{C}_{\mathcal{S}}$ and so, by definition and Proposition~\ref{prop: ABcontext->cut1Cot}, we obtain $\omega^{\wedge}\subseteq \mathcal{X}^{\wedge}$ and $\mathrm{id}_{\mathcal{X}}(\omega^{\wedge})=0$. Since $\mathsf{Thick}(\mathcal{X})$ is closed under extensions and direct summands in $\mathcal{C},$ we get that $\mathsf{Thick}(\mathcal{X})$ is an extriangulated category \cite[Rmk. 2.18]{Nakaoka1}. Now, using that $(\mathcal{X}, \omega^{\wedge})$ is a $1$-cotorsion pair cut on $\mathsf{Thick}(\mathcal{X})$ with $\mathcal{X}, \omega^{\wedge}\subseteq \mathsf{Thick}(\mathcal{X}),$ it follows from Proposition~\ref{pro: 1cot<->complete cot} that $(\mathcal{X}, \omega^{\wedge})$ is a cotorsion pair in the extriangulated category $\mathsf{Thick}(\mathcal{X})$ with $\mathrm{id}_{\mathcal{X}}(\omega^{\wedge})=0$. Now, let $(\mathcal{F}, \mathcal{G})$ be a cotorsion pair in the extriangulated category $\mathsf{Thick}(\mathcal{F})$ with $\mathrm{id}_{\mathcal{F}}(\mathcal{G})=0$. By Proposition~\ref{pro: 1cot<->complete cot}, we have that $(\mathcal{F}, \mathcal{G})$ is a $1$-cotorsion pair cut on $\mathsf{Thick}(\mathcal{F})$ with $\mathcal{G}\subseteq \mathsf{Thick}(\mathcal{F})$ and $\mathrm{id}_{\mathcal{F}}(\mathcal{G})=0$; in particular, $(\mathcal{F}, \mathcal{G})\in \widetilde{\mathfrak{P}}_{\mathcal{S}}$. Thus, by Corollary~\ref{cor:correspondence_3} we get that $(\mathcal{F}, \mathcal{G})\in \mathfrak{C}_{\mathcal{S}}$, and so $(\mathcal{F}, \mathcal{F}\cap \mathcal{G})$ is a left Frobenius pair by Theorem~\ref{theo:correspondence_1}. \end{proof} \section{\textbf{Connecting several contexts}}\label{Sec: applications} In the previous sections, we established notions and results in the general context that an extriangulated category provides. However, all these results can be applied to abelian, exact and triangulated categories. This last section is devoted to showing that these different contexts can interact by using the cut notions introduced previously.\\ For a triangulated category $\mathcal{T}$ with shift functor $[1]$, it is possible to obtain abelian categories when certain kinds of structures exist.
Examples of such structures are bounded $t$-structures, for which there exists an associated abelian category $\mathcal{H}$ called \emph{the heart} \cite[Lem. 3.2]{bridgeland2007stability}. Among the highlighted properties of the heart, we have that it is closed under extensions \cite[Lem. 1.3]{lorenzinuniqueness2020} and that $\mathcal{T}(A, B[n])=0$ for any $A, B\in \mathcal{H}$ and any $n<0$ \cite[Def. 1.2]{lorenzinuniqueness2020}. The following generalizes this notion. \begin{definition}\cite[Dyer's Thm. A.2]{lorenzin2022compatibility}\label{def: first ext group} Let $\mathcal{T}$ be a triangulated category and $\mathcal{H}\subseteq \mathcal{T}$ be closed under extensions, having a zero object of $\mathcal{T}$ and satisfying $\mathcal{T}(A, B[-1])=0,$ for any $A, B\in \mathcal{H}$. Then, $\mathcal{H}$ has a natural exact structure, given by defining $A\rightarrowtail B\twoheadrightarrow C$ to be a conflation if $A\to B\to C\to A[1]$ is a distinguished triangle in $\mathcal{T},$ for some $C\to A[1]$. This association gives rise to a natural isomorphism $\mathrm{Ext}^{1}_{\mathcal{H}}(A, B) \cong \mathcal{T}(A, B[1]),$ for all $A, B\in \mathcal{H}$. \end{definition} Definition~\ref{def: first ext group} can be extended to higher extension groups. Moreover, there is a relation between short exact sequences in $\mathcal{H}$ and distinguished triangles in $\mathcal{T}$, as we can see in the following two results. \begin{proposition}\cite[Thm. A.7 \& Def. A.12]{lorenzin2022compatibility}\label{pro: A7} Let $\mathcal{H}$ be an exact subcategory of $\mathcal{T}$ as in Definition~\ref{def: first ext group}. Then, there is a well-defined map $$f_{n,A,B}:\mathrm{Ext}^{n}_{\mathcal{H}}(A,B)\to \mathcal{T}(A, B[n]),$$ for any $A, B\in \mathcal{H}$ and $n\geq 0$. We say that \emph{$\mathcal{T}$ has all the Ext groups of $\mathcal{H}$} if the morphism $f_{n,A,B}$ is an isomorphism for any $A, B\in \mathcal{H}$ and $n\in \mathbb{N}$. \end{proposition} \begin{lemma}\cite[Lem. 1.9]{lorenzinuniqueness2020}\label{lem: exact<->triang} Let $\mathcal{T}$ be a triangulated category and $\mathcal{H}\subseteq \mathcal{T}$ be closed under extensions, having a zero object of $\mathcal{T}$ and satisfying $\mathcal{T}(A, B[-1])=0,$ for any $A, B\in \mathcal{H}$. Then, $A\mathop{\to}\limits^{f} B\mathop{\to}\limits^{g} C \mathop{\to}\limits^{h} A[1]$ is a distinguished triangle in $\mathcal{T}$ if, and only if, $A\mathop{\rightarrowtail}\limits^{f} B \mathop{\twoheadrightarrow}\limits^{g} C$ is a short exact sequence in $\mathcal{H}$. \end{lemma} With this in mind, it is natural to ask when cut notions in $\mathcal{H}$ can be lifted to cut notions in $\mathcal{T}$. The following result establishes a relation between the abelian and the triangulated context through the notions previously seen. Recall that, given a triangulated category $\mathcal{T}$, a subcategory $\mathcal{S}\subseteq \mathcal{T}$ is \emph{resolving} (resp., coresolving) if it is closed under extensions, has a zero object of $\mathcal{T}$ and $\mathcal{S}[-1]\subseteq \mathcal{S}$ (resp., $\mathcal{S}[1]\subseteq \mathcal{S}$). Equivalently, $\mathcal{S}$ is resolving (resp., coresolving) if, and only if, for any distinguished triangle $U\to V\to W\to U[1]$ (resp., $W\to V\to U\to W[1]$) in $\mathcal{T}$ with $W\in \mathcal{S}$, one has that $U\in \mathcal{S}$ if and only if $V\in \mathcal{S}$.
\begin{proposition}\label{pro: ABtheory in H} For a triangulated category $\mathcal{T}$ having all the Ext groups of $\mathcal{H}\subseteq \mathcal{T}$, with $\mathcal{H}$ closed under extensions and direct summands, the following statements hold true. \begin{enumerate} \item Let $(\mathcal{X}, \omega)$ be a left Frobenius pair in the exact category $\mathcal{H}.$ Then, $(\mathcal{X}, \omega)$ is a left Frobenius pair in $\mathcal{T}$ if, and only if, $\mathcal{X}[-1]\subseteq \mathcal{X}$. \item Let $(\mathcal{A}, \mathcal{B})$ be a left weak AB-context in the exact category $\mathcal{H}.$ Then, $(\mathcal{A}, \mathcal{B})$ is a left weak AB-context in $\mathcal{T}$ if, and only if, $\mathcal{A}[-1]\subseteq \mathcal{A}$ and $\mathcal{B}[1]\subseteq \mathcal{B}$. \item For any $n\geq 1$, if $(\mathcal{F}, \mathcal{G})$ is an $n$-cotorsion pair cut on $\mathcal{S}$ in the exact category $\mathcal{H},$ then it is also an $n$-cotorsion pair cut on $\mathcal{S}$ in $\mathcal{T}$. \end{enumerate} \end{proposition} \begin{proof} Closedness under (co)cones follows from the reminder above; closedness under extensions in $\mathcal{T}$ holds because $\mathcal{H}$ has that property in $\mathcal{T}$ and all the classes considered are contained in $\mathcal{H};$ and closedness under direct summands follows as in the proof of Proposition~\ref{pro: 1cot<->complete cot}. On the other hand, the orthogonality relations between the classes involved are a consequence of Proposition~\ref{pro: A7}. Finally, from Lemma~\ref{lem: exact<->triang} we get the existence of the desired distinguished triangles or (co)resolutions, depending on the case. \end{proof} For the following result, we recall that a \emph{co-$t$-structure in a triangulated category $\mathcal{T}$} consists of a pair of subcategories $\mathcal{X}, \mathcal{Y}\subseteq \mathcal{T}$ closed under direct summands and satisfying $\mathcal{X}[-1]\subseteq \mathcal{X}$, $\mathcal{T}(\mathcal{X}, \mathcal{Y})=0$ and $\mathcal{T}= \mathcal{X}*\mathcal{Y}$ \cite{bondarko2010weight, pauksztello2008compact}. \begin{theorem}\label{thm: t-str y cut cot} Let $\mathcal{T}$ be a triangulated category having all the Ext groups of $\mathcal{H}\subseteq \mathcal{T}$, with $\mathcal{H}$ closed under extensions and direct summands. Consider the following classes: \[ \begin{array}{l} \mathfrak{A}:=\{(\mathcal{X}, \omega) : (\mathcal{X}, \omega) \mbox{ is a left Frobenius pair in the exact category } \mathcal{H} \mbox{ and } \mathcal{X}[-1]\subseteq \mathcal{X}\},\\ \mathfrak{B}:=\{(\mathcal{X}, \omega) : (\mathcal{X}, \omega) \mbox{ is a left Frobenius pair in } \mathcal{T}\},\\ \mathfrak{C}:=\{(\mathcal{A}, \mathcal{B}) : (\mathcal{A}, \mathcal{B}) \mbox{ is a co-$t$-structure in the triangulated category } \mathsf{Thick}(\mathcal{A})\},\\ \mathfrak{D}:=\{(\mathcal{A}, \mathcal{B}) : (\mathcal{A}, \mathcal{B}) \mbox{ is a cotorsion pair cut on } \mathsf{Thick}(\mathcal{A}) \mbox{ in } \mathcal{T} \mbox{ with }\mathcal{A}, \mathcal{B}\subseteq \mathsf{Thick}(\mathcal{A})\}. \end{array} \] Then, $\mathfrak{A}=\mathfrak{B}$, $\mathfrak{C}=\mathfrak{D}$ and there exists a one-to-one correspondence between $\mathfrak{B}$ and $\mathfrak{C}.$ \end{theorem} \begin{proof} The equality $\mathfrak{A}=\mathfrak{B}$ holds true by Proposition~\ref{pro: ABtheory in H}-(1).
For the correspondence between $\mathfrak{B}$ and $\mathfrak{C}$, from Corollary~\ref{cor: leftFrob<->cotor pairs in Thick} it suffices to show that cotorsion pairs $(\mathcal{F}, \mathcal{G})$ in the extriangulated category $\mathsf{Thick}(\mathcal{F})$ with $\mathrm{id}_{\mathcal{F}}(\mathcal{G})=0$ are co-$t$-structures in the triangulated category $\mathsf{Thick}(\mathcal{F})$. On the one hand, notice that the correspondence given in Corollary~\ref{cor: leftFrob<->cotor pairs in Thick} sends left Frobenius pairs to cotorsion pairs $(\mathcal{F}, \mathcal{G})$ whose first class is closed under extensions and cocones in a triangulated category. Since closedness under cocones is equivalent to $\mathcal{F}[-1]\subseteq \mathcal{F}$, it follows that $(\mathcal{F}, \mathcal{G})$ is a co-$t$-structure in the triangulated category $\mathsf{Thick}(\mathcal{F})$. On the other hand, if $(\mathcal{F}, \mathcal{G})$ is a co-$t$-structure in $\mathsf{Thick}(\mathcal{F})$, it is easy to see that it is a cotorsion pair in $\mathsf{Thick}(\mathcal{F})$ with $\mathrm{id}_{\mathcal{F}}(\mathcal{G})=0$. Finally, by the above paragraph, co-$t$-structures in $\mathsf{Thick}(\mathcal{A})$ are cotorsion pairs in the triangulated category $\mathsf{Thick}(\mathcal{A})$, and these coincide with $\mathfrak{D}$ by Proposition~\ref{pro: 1cot<->complete cot}. \end{proof} It is known that to a co-$t$-structure one can associate a presilting subcategory $\mathcal{S}$ called \emph{the coheart} \cite{aihara2012silting, keller1988aisles, bondarko2010weight, mendoza2010auslander}. The proposition below is an adaptation of a result for presilting subcategories given in \cite[Lem. 3.3]{pauksztello2020co}. \begin{proposition}\label{pro: 1-cot<->cot S*S[1]} Let $\mathcal{T}$ be a triangulated category, $\mathcal{S}\subseteq \mathcal{T}$ and let $\mathcal{X}, \mathcal{Y}\subseteq \mathcal{T}$ be closed under extensions and direct summands in $\mathcal{T}$, satisfying $\mathcal{S}\subseteq \mathcal{X}$, $\mathcal{S}[1]\subseteq \mathcal{Y}$ and $\mathcal{T}(\mathcal{X}, \mathcal{Y}[1])=0$. Then, the following statements are equivalent. \begin{enumerate} \item $(\mathcal{X}, \mathcal{Y})$ is a $1$-cotorsion pair cut on $\mathcal{S}*\mathcal{S}[1]$. \item $(\mathcal{X}, \mathcal{Y})$ is a left $1$-cotorsion pair cut on $\mathcal{S}[1]$. \item $(\mathcal{X}, \mathcal{Y})$ is a right $1$-cotorsion pair cut on $\mathcal{S}$. \end{enumerate} \end{proposition} \begin{proof} We prove that (2)$\Rightarrow$(3). Suppose that $(\mathcal{X}, \mathcal{Y})$ is a left $1$-cotorsion pair cut on $\mathcal{S}[1]$. Under the above conditions, it suffices to prove that, for every $M\in \mathcal{S},$ there is a distinguished triangle $M\to Y\to X\to M[1],$ with $Y\in \mathcal{Y}$ and $X\in \mathcal{X}$. However, this fact follows from the left completeness of the pair $(\mathcal{X}, \mathcal{Y})$ in Definition~\ref{def: CnCP ext}. For similar reasons, one can justify (3)$\Rightarrow$(2). For (2)$\Rightarrow$(1), suppose that (2) holds. We assert that $(\mathcal{X}, \mathcal{Y})$ is a left $1$-cotorsion pair cut on $\mathcal{S}*\mathcal{S}[1]$. Notice that, under the above conditions, we only need to show that for every $M\in \mathcal{S}*\mathcal{S}[1]$ there is a distinguished triangle $M[-1]\to Y\to X\to M,$ with $Y\in \mathcal{Y}$ and $X\in \mathcal{X}$. Indeed, let $M\in \mathcal{S}*\mathcal{S}[1]$.
Then, there exist two distinguished triangles $S\to M\to R\to S[1]$ and $Y\to X'\to R\to Y[1]$ with $S\in \mathcal{S}, R\in \mathcal{S}[1], X'\in \mathcal{X}$ and $Y\in \mathcal{Y}$. From the octahedral axiom, we get the following commutative diagram of distinguished triangles \[ \begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}] \matrix (m) [matrix of math nodes, row sep=2.5em, column sep=2.5em, text height=1.25ex, text depth=0.25ex] { {} & Y & Y & {}\\ S & X & X' & S[1] \\ S & M & R & S[1] \\ {} & Y[1] & Y[1], & {}\\ }; \path[->] (m-2-3) edge (m-2-4) (m-3-3) edge (m-3-4) (m-3-2) edge (m-4-2) (m-3-3) edge (m-4-3) (m-2-1) edge (m-2-2) (m-2-2) edge (m-2-3) (m-3-1) edge (m-3-2) (m-3-2) edge (m-3-3) (m-1-2) edge (m-2-2) (m-2-2) edge (m-3-2) (m-1-3) edge (m-2-3) (m-2-3) edge (m-3-3) ; \path[-,font=\scriptsize] (m-2-4) edge [double, thick, double distance=2pt] (m-3-4) (m-4-2) edge [double, thick, double distance=2pt] (m-4-3) (m-1-2) edge [double, thick, double distance=2pt] (m-1-3) (m-2-1) edge [double, thick, double distance=2pt] (m-3-1) ; \end{tikzpicture} \] where $S\to X\to X'\to S[1]$ is a distinguished triangle with $X\in \mathcal{X}$, since $\mathcal{S}\subseteq \mathcal{X}$ and $\mathcal{X}$ is closed under extensions. Hence, by considering the distinguished triangle $M[-1]\to Y\to X\to M$ obtained from $Y\to X\to M\to Y[1]$, the assertion follows. Now, by using the equivalence (2)$\Leftrightarrow$(3) and proceeding as above, we also get that $(\mathcal{X}, \mathcal{Y})$ is a right $1$-cotorsion pair cut on $\mathcal{S}*\mathcal{S}[1]$. Hence, (1) holds true. \end{proof} A remarkable difference between $t$-structures and co-$t$-structures is that the coheart of a co-$t$-structure may in general fail to be abelian. However, \emph{the extended coheart} of a co-$t$-structure is an extriangulated category with the extriangulated structure induced by the triangulated structure of $\mathcal{T}$ \cite[Lem. 1.6]{pauksztello2020co}. Moreover, given a co-$t$-structure in a triangulated category, there is a one-to-one correspondence between the co-$t$-structures ``sufficiently close'' to the initial one and the cotorsion pairs in its extended coheart \cite[Thm. 2.1]{pauksztello2020co}. By using an alternative description of the extended coheart \cite[Lem. 2.1]{iyama2014intermediate}, we get the following result. \begin{theorem}\label{thm: co-t-st y cut cot} Let $\mathcal{T}$ be a triangulated category, $(\mathcal{A}, \mathcal{B})$ be a co-$t$-structure in $\mathcal{T}$, $\mathcal{S}:=\mathcal{A}[1]\cap \mathcal{B}$ be the coheart of $(\mathcal{A}, \mathcal{B})$ and $\mathcal{Z}:=\mathcal{A}[2]\cap \mathcal{B}$ be the extended coheart of $(\mathcal{A}, \mathcal{B})$.
Consider the following classes: \[ \begin{array}{l} \mathfrak{A}:=\{(\mathcal{A}', \mathcal{B}') : (\mathcal{A}',\mathcal{B}') \mbox{ is a co-$t$-structure in } \mathcal{T} \mbox{ with } \mathcal{A}\subseteq \mathcal{A}'\subseteq \mathcal{A}[1]\},\\ \mathfrak{B}:=\{(\mathcal{X}, \mathcal{Y}) : (\mathcal{X},\mathcal{Y}) \mbox{ is a cotorsion pair in the extriangulated category } \mathcal{Z}\},\\ \mathfrak{C}:=\{(\mathcal{X}, \mathcal{Y}) : (\mathcal{X},\mathcal{Y}) \mbox{ is a $1$-cotorsion pair cut on } \mathcal{Z} \mbox{ in } \mathcal{T} \mbox{ with } \mathcal{X}, \mathcal{Y}\subseteq \mathcal{Z}\},\\ \mathfrak{D}:=\{(\mathcal{X}, \mathcal{Y}) : (\mathcal{X},\mathcal{Y}) \mbox{ is a left $1$-cotorsion pair cut on } \mathcal{S}[1] \mbox{ in } \mathcal{T} \mbox{ with } \mathcal{X}, \mathcal{Y}\subseteq \mathcal{Z}\},\\ \mathfrak{E}:=\{(\mathcal{X}, \mathcal{Y}) : (\mathcal{X},\mathcal{Y}) \mbox{ is a right $1$-cotorsion pair cut on } \mathcal{S} \mbox{ in } \mathcal{T} \mbox{ with } \mathcal{X}, \mathcal{Y}\subseteq \mathcal{Z}\}.\\ \end{array} \] Then, there is a one-to-one correspondence between $\mathfrak{A}$ and $\mathfrak{B}$, and $\mathfrak{B}=\mathfrak{C}=\mathfrak{D}=\mathfrak{E}$. \end{theorem} \begin{proof} The correspondence between $\mathfrak{A}$ and $\mathfrak{B}$ follows by \cite[Thm. 2.1]{pauksztello2020co}. On the other hand, we have from Proposition~\ref{pro: 1cot<->complete cot} that the equality $\mathfrak{B}=\mathfrak{C}$ holds true. The remaining equalities hold after noticing that $\mathcal{Z}=\mathcal{S}*\mathcal{S}[1]$ \cite[Lem. 2.1]{iyama2014intermediate} and that the equivalence in Proposition~\ref{pro: 1-cot<->cot S*S[1]} remains valid if we add the condition $\mathcal{X}, \mathcal{Y}\subseteq \mathcal{Z}$ in every statement. \end{proof} \section*{\textbf{Funding}} The first author thanks Programa de Becas Posdoctorales DGAPA-UNAM. \bibliographystyle{plain}
{ "timestamp": "2022-09-20T02:21:32", "yymm": "2209", "arxiv_id": "2209.08658", "language": "en", "url": "https://arxiv.org/abs/2209.08658" }
\section{Introduction}\label{sec1} We consider the Euler equations of one-dimensional compressible fluid flow in the following form: \begin{equation}\label{key} \left\{\begin{array}{ll} \rho_{t}+(\rho u)_{x}=0,\\ u_{t}+\left(\frac{u^{2}}{2}+\int_{}^{\rho}{\frac{P'(s)}{s}ds} \right) _{x}=0, \end{array}\right . \tag{1.1} \end{equation} where $t\geq 0$ is a time variable and $x \in R$ is a space variable; $\rho $, $u$ and $P$ represent the density of mass, the velocity and the pressure of the fluid, respectively. System (1.1) was first given by Earnshaw [1] and is regarded as a model of one-dimensional compressible fluid flow [2]. In addition, System (1.1) has other physical backgrounds. For instance, it is a scaling limit system of Newtonian dynamics with long-range interaction for a continuous distribution of mass in $R$ [3] and also a hydrodynamic limit for the Vlasov equation [4]. There have been many studies on the Euler system of one-dimensional compressible fluid flow with different equations of state. DiPerna [5] established the existence of global weak solutions to the Cauchy problem in the case $1<\gamma<3$ by using Glimm's scheme under the equation of state $P(\rho) = A \rho^{\gamma}$, where $A>0$ and $\gamma>1$. Based on this result, Li [6] solved the same problem in the case $-1<\gamma<1$. Lu [7] and Cheng [8] proved analytically existence theorems for the global entropy solutions in the cases $\gamma>3$ and $1<\gamma<3$, respectively, by means of the theory of compensated compactness coupled with some basic ideas of the kinetic formulation. For negative pressure laws, Cheng et al. [9] studied the Riemann problem for the 1D compressible fluid flow of a Chaplygin gas and solved constructively the existence and uniqueness of delta shock solutions; the model of the generalized Chaplygin gas was also investigated by Pang et al. [10]. In this work, we are concerned with the generalized Chaplygin gas, whose equation of state is \begin{equation}\label{key} P = - s\rho^{-\gamma}, ~~~~~~ 0 < \gamma \leq 1. \tag{1.2} \end{equation} A distinctive feature of the generalized Chaplygin gas is that it has a negative pressure with a positive sound speed. Equation (1.2) with $\gamma = 1$ describes the pure Chaplygin gas, which was introduced by Chaplygin [11], Tsien [12] and von Karman [13] as a mathematical approximation for calculating the lifting force on a wing of an airplane in aerodynamics. The generalized Chaplygin gas is usually used to describe the current accelerated expansion of the universe and the evolution of the perturbations of energy density [14,15]. The piston problem, serving as an important typical physical model in mathematical fluid dynamics, has been studied extensively in the past decades [16-19]. Qu and Yuan [20] considered the piston problem for the compressible Euler equations \begin{equation}\label{key} \left\{\begin{array}{ll} \rho_{t}+(\rho u)_{x}=0,\\ (\rho u)_{t}+ (\rho u^{2}+P(\rho))_{x}=0 \end{array}\right . \tag{1.3} \end{equation} with the Chaplygin gas. They defined a Radon measure solution of the piston problem within a convenient space of distributions that contains discontinuous functions and Dirac measures. Roughly speaking, a Radon measure solution is a solution such that at least one of the state variables contains a Dirac delta function with the boundary of the domain as its support.
Moreover, Gao, Qu and Yuan [21] established the equivalence of the free piston and the delta shock for the one-dimensional pressureless compressible Euler equations, which not only helps to understand the physics of delta shocks, but also provides a way to reduce the fluid–solid interaction problem. Recently, Fan et al. [22] investigated the piston problem for system (1.3) with the generalized Chaplygin gas (1.2), where Radon measure solutions are constructed to deal with the concentration of mass on the piston. For the theory of the piston problem and Radon measure solutions, readers can refer to [16, 20-23] for more details. To our knowledge, no literature investigates the piston problem for model (1.1) with the negative pressure law (1.2). Motivated by the above discussions, we will study the piston problem for system (1.1) and (1.2). First, let us describe the piston problem as follows. Suppose there is a piston which can move leftward or rightward in an infinitely long and thin tube extending along the horizontal $x$-axis. The tube is filled with gas, which is enclosed by the piston on the right-hand side and is uniform and static initially. The goal of the piston problem is to study what happens when the piston moves. For the convenience of treating this question, we insert (1.2) into (1.1) to obtain the following compressible fluid flow with the generalized Chaplygin gas \begin{equation}\label{key} \left\{\begin{array}{ll} \rho_{t}+(\rho u)_{x}=0,\\ u_{t}+\left(\frac{u^{2}}{2}-A\rho^{-\alpha} \right) _{x}=0, \end{array}\right . \tag{1.4} \end{equation} where $A=\frac{s\gamma}{1+\gamma}$ and $\alpha = \gamma+1$. System (1.4) is equivalent to (1.1) with (1.2). Suppose initially the piston lies at $x=0$ and the gas fills the domain $\{x<0\}$. We assume the gas is static and uniform with given state \begin{equation}\label{key} U_{0} = (\rho, u) \vert_{t=0} = (\rho_{0}, 0). \tag{1.5} \end{equation} The piston moves with a given constant speed $v_{0}$, and the trajectory of the piston is $x = v_{0}t ~~(t\geq 0)$. Then, the time-space domain we consider is \begin{equation}\label{key} \Omega_{t} = \left\lbrace (t, x) : x< v_{0}t, ~~~t>0\right\rbrace. \tag{1.6} \end{equation} On the trajectory of the piston, we impose the impermeability condition \begin{equation}\label{key} u(t, x) = v_{0} ~~~~~ on~~~ x = v_{0}t. \tag{1.7} \end{equation} The piston problem considered in this paper is to find a solution of (1.4) in the domain $\Omega_{t}$ satisfying (1.5) and (1.7). Recall that the local sonic speed of the generalized Chaplygin gas is given by \begin{equation}\label{key} c = \sqrt{P'(\rho)} = \sqrt{s\gamma}\rho^{-\frac{\gamma+1}{2}}. \tag{1.8} \end{equation} Then, the Mach number $M_{0}$ of the piston with respect to the gas is defined by \begin{equation}\label{key} M_{0} = \frac{ \vert v_{0} \vert }{c_{0}},\tag{1.9} \end{equation} where $v_{0}$ is the speed of the piston and $c_{0}$ is as in (1.8) for $\rho = \rho_{0}$. In this paper, we will show that if the piston moves into the gas with $M_{0} \in \left( 0, \sqrt{2}(1+\gamma)^{-\frac{1}{2}}\right) $, or recedes from the gas with any constant speed, there exist integral weak solutions, consisting of shocks or rarefaction waves, respectively.
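As a quick numerical illustration of this threshold (a sketch of our own, not part of the analysis below; all function names are ours), the following Python fragment evaluates the sound speed (1.8), the Mach number (1.9) and the critical value $\sqrt{2}(1+\gamma)^{-\frac{1}{2}}$ for sample parameter values:
\begin{verbatim}
import math

def sound_speed(rho, s, gamma):
    # c = sqrt(P'(rho)) = sqrt(s*gamma) * rho**(-(gamma+1)/2), cf. (1.8)
    return math.sqrt(s * gamma) * rho ** (-(gamma + 1.0) / 2.0)

def mach_number(v0, rho0, s, gamma):
    # M0 = |v0| / c0, cf. (1.9)
    return abs(v0) / sound_speed(rho0, s, gamma)

for gamma in (0.25, 0.5, 1.0):
    M_crit = math.sqrt(2.0 / (1.0 + gamma))  # critical Mach number
    print(gamma, M_crit, mach_number(-1.0, 1.0, 1.0, gamma))
\end{verbatim}
For instance, for the pure Chaplygin gas ($\gamma = 1$) the critical Mach number equals $1$, and it increases towards $\sqrt{2}$ as $\gamma \to 0$.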
However, there is no piecewise constant integral weak solution if the piston moves into the gas with $M_{0} \geq \sqrt{2}(1+\gamma)^{-\frac{1}{2}}$, and then a concept of Radon measure solution is proposed to understand the compressible fluid flow equations when the unknowns are measures, which also demonstrates the necessity of considering general measure solutions in the study of piston problems for systems of hyperbolic conservation laws. The outline of this paper is organized as follows. In Section 2, we reformulate the piston problem in a convenient way by shifting the coordinate system to move with the piston via a Galilean transformation, and present a definition of measure solutions. The main results on the piston problem are Theorem 2.1 and Theorem 2.2, given at the end of that section. In Section 3, we give a detailed proof of Theorem 2.1. First, we construct an integral weak solution consisting of a shock wave when the piston rushes into the gas with Mach number less than $\sqrt{2}(1+\gamma)^{-\frac{1}{2}}$. Then, we find a solution containing a weighted Dirac measure supported on the piston if $M_{0}\geq\sqrt{2}(1+\gamma)^{-\frac{1}{2}}$, which also justifies the concept of measure solution we propose. In Section 4, we rigorously prove Theorem 2.2. We construct a self-similar solution containing rarefaction waves as the piston recedes from the gas, and find the occurrence of a vacuum state as the Mach number tends to infinity. \section{The piston problem of generalized Chaplygin gas and main results}\label{sec2} \subsection{Simplified piston problem } In this part, we transform the piston problem of the generalized Chaplygin gas equations into a simplified version by a Galilean transformation. Specifically, the following Galilean transformation is adopted to shift the coordinates to move with the piston and reformulate the piston problem: \begin{equation}\label{key} \left\{\begin{array}{ll} t' = t,\\ x' = x - v_{0}t,\\ \rho'(t', x') = \rho(t', x'+v_{0}t'),\\ u'(t', x') = u(t', x'+v_{0}t')-v_{0},\\ P'(t', x') = P(t', x'+v_{0}t'). \end{array}\right. \tag{2.1} \end{equation} \noindent It is easy to verify that the equations (1.1) are invariant under (2.1), and the domain $\Omega_{t}$ in (1.6) and the trajectory of the piston reduce to $\Omega' = \left\lbrace (t', x') : x'< 0, t'>0\right\rbrace $ and $x' = 0$. For convenience of statement, we drop all the primes "$'$" without confusion. So, the domain $\Omega'$ can be written as \begin{equation}\label{key} \Omega = \left\lbrace (t, x) : x< 0, t>0\right\rbrace. \tag{2.2} \end{equation} Similarly, the initial condition becomes \begin{equation}\label{key} (\rho, u)\vert_{t=0} = (\rho_{0}, -v_{0}), \tag{2.3} \end{equation} and the boundary condition becomes \begin{equation}\label{key} u(t, x) = 0 ~~~~~ on~~~ x = 0. \tag{2.4} \end{equation} \noindent To better understand the essence of the piston problem, we carry out the following non-dimensional linear transformations of independent and dependent variables, which correspond to some similarity laws in physics [20]: \begin{equation}\label{key} \tilde{t} = \frac{t}{T},~~~~ \tilde{x} = \frac{x}{L},~~~~ \tilde{\rho} = \frac{\rho}{\rho_{0}},~~~~ \tilde{u} = \frac{u}{\left \vert v_{0}\right \vert},~~~~ \tilde{P} = \frac{P}{\rho_{0}v_{0}^{2}}, \tag{2.5} \end{equation} where $T$ and $L > 0$ are constants with $\frac{L}{T} = v_{0} $.
Inserting (2.5) into (1.1) and calculating directly, we obtain \begin{equation}\label{key} \left\{\begin{array}{ll} \tilde{\rho}_{\tilde{t}} + (\tilde{\rho} \tilde{u})_{\tilde{x}} = 0,\\ \tilde{u}_{\tilde{t}}+\left(\frac{\tilde{u}^{2}}{2}+\int_{}^{\tilde{\rho}}{\frac{\tilde{P'}(s)}{s}ds} \right) _{\tilde{x}}=0, \end{array}\right. \tag{2.6} \end{equation} which, together with (2.5), implies that $\tilde{\rho}$, $\tilde{u}$, $\tilde{P}$ still satisfy (1.1); hence (1.1) is invariant under (2.5), that is to say, the $\rho$, $u$ and $P$ in (1.1) are equivalent to $\tilde{\rho}$, $\tilde{u}$, $\tilde{P}$ in (2.6). So, from (2.5), in the following we shall take initial data \begin{equation}\label{key} \rho_{0} = 1,~~ v_{0} = \pm 1. \tag{2.7} \end{equation} On the other hand, by using (1.8) and substituting $c_{0} = \sqrt{s\gamma}\rho_{0}^{-\frac{\gamma+1}{2}}$ into $M_{0} = \frac{ \vert v_{0} \vert }{c_{0}}$, we obtain \begin{equation}\label{key} s = \frac{\rho_{0}^{\gamma+1} v_{0}^{2}}{\gamma M_{0}^{2}} .\tag{2.8} \end{equation} Then, (1.2) becomes \begin{equation}\label{key} P(\rho) = -\frac{\rho_{0}^{\gamma+1} v_{0}^{2}}{\gamma M_{0}^{2}\rho^{\gamma}}. \tag{2.9} \end{equation} Combining (2.5) with (2.9) and replacing $\frac{\rho_{0}}{\rho}$ by $\frac{1}{\tilde{\rho}}$, we obtain \begin{equation}\label{key} \tilde{P}(\tilde{\rho}) = -\frac{1}{ \gamma M_{0}^{2} \tilde{\rho}^{\gamma} }. \tag{2.10} \end{equation} Since $\tilde{P}$, $\tilde{\rho}$ in (2.10) are equivalent to the corresponding $P$, $\rho$ in (2.9), for abbreviation we write \begin{equation}\label{key} P(\rho)= -\frac{1}{ \gamma M_{0}^{2} \rho^{\gamma} }. \tag{2.11} \end{equation} By (2.11) and taking $s= \frac{1}{\gamma M_{0}^2}$, we have $P(\rho) = -s \rho^{-\gamma}$, which also implies that $\rho_{0} = 1$, $ v_{0} = \pm 1$. Therefore, we define the initial data by \begin{equation}\label{key} \rho_{0} = 1,~~ v_{0} = \pm 1,~~ P_{0} = -\frac{1}{\gamma M_{0}^{2}}. \tag{2.12} \end{equation} In conclusion, the piston problem can now be reformulated as to define and seek solutions of (1.4), (2.4) and (2.12) in the domain given by (2.2). \subsection{Definition of measure solutions of piston problem}\label{sec3} The above formulation of the piston problem only makes sense for classical elementary wave solutions such as shock waves or rarefaction waves. Since it turns out that the unknowns might be measures singular to the Lebesgue measure, it is necessary to rewrite the piston problem and give a rigorous definition of Radon measure solutions of compressible fluid flow for the mathematical analysis. Recall that a Radon measure $m$ on the upper plane $[0, \infty) \times R$ acts on compactly supported continuous functions via \begin{equation}\label{key} <m, \phi> = \int_{0}^{\infty}\int_{R}\phi(t,x)m(dxdt), \tag{2.13} \end{equation} where the test function $\phi \in C_{0}([0, \infty)\times R)$. The standard Lebesgue measure on $ R^{2}$ is denoted by $\mathcal{L}^{2}$. The fact that a measure $\mu$ is absolutely continuous with respect to a nonnegative measure $\nu$ is denoted by $\mu \ll \nu$. The Dirac measure supported on a curve, which is singular to $\mathcal{L}^{2}$, is defined as below [20]. \textbf{Definition 2.1.}~~Let $L$ be a Lipschitz curve given by $x=x(t)$ for $t$ $ \in$ $ [0,T)$, and $w_L(t) \in L_{loc}^{1}(0,T)$.
The Dirac measure $w_{L}\delta_{L}$ supported on $L \subset R^{2} $ with weight $w_{L}$ is defined by \begin{equation}\label{key} <w_{L}\delta_{L}, \phi> = \int_{0}^{T}\phi(t, x(t)) w_L(t) \sqrt{x'(t)^2+1}dt,~~~~~~\forall \phi \in C_{0}(R^2). \tag{2.14} \end{equation} Now, we can formulate the piston problem rigorously by introducing the following definition of measure solutions. \textbf{Definition 2.2.}~~ For fixed $0<M_{0}\leq \infty$, let $ \varrho, m, n, \tau, \mathcal{P} $ be Radon measures on $\overline{\Omega}$ (the closure of $\Omega$), and $w_{p}$ a locally integrable nonnegative function on $[0, \infty)$. We call $(\varrho, u, w_{p})$ a measure solution to the piston problem (1.4), (2.4) and (2.12), provided that \noindent i) $m\ll \varrho$, $\tau \ll n$, and they have the same Radon-Nikodym derivative $u$; namely \begin{equation}\label{key} u \triangleq \dfrac{m(dxdt)}{\varrho(dxdt)} = \dfrac{\tau(dxdt)}{n(dxdt)}; \tag{2.15} \end{equation} ii) For any $\phi \in C_{0}^{1}(R^{2})$, there hold \begin{equation}\label{key} <\varrho, \partial t \phi> + <m, \partial x \phi> + \int_{-\infty}^{0}\rho_{0}\phi(0, x)dx = 0, \tag{2.16} \end{equation} \begin{equation}\label{key} \begin{split} &<n, \partial t \phi> + <\tau, \partial x \phi> +<\mathcal{P}, \partial x \phi> - <w_{p}\delta_{\{x=0, ~t \geq 0\} }, \phi> + \int_{-\infty}^{0}u_{0}\phi(0, x)dx = 0, \end{split}\tag{2.17} \end{equation} iii) If $\varrho \ll \mathcal{L}^2$ with derivative $\rho(t, x)$ in a neighborhood of $(t, x) \in [0, \infty) \times (-\infty, 0]$, and $\mathcal{P} \ll \mathcal{L}^2$ with derivative $P(t, x)$ there, then $\mathcal{L}^2$-a.e. there holds \begin{equation}\label{key} P = -\frac{1}{ \rho^{\gamma}}\frac{1}{\gamma M_{0}^{2}}, \tag{2.18} \end{equation} and in addition, the classical entropy condition holds for discontinuities of the functions $\rho, u$ near $(t, x)$. \textbf{Remark 2.1.} Physically, the weight $w_{p}$ is the inertial force caused by the flow hitting the piston. It is always positive when the piston moves into the gas with Mach number exceeding $\sqrt{2}(1+\gamma)^{-\frac{1}{2}}$, which will be rigorously confirmed in Section 3. Hence, compared with the standard pressureless Euler equations, the high Mach number limit is not simply the vanishing pressure limit, since there is an extra term $w_{p}\delta_{\{x=0, t\geq 0\}}$ in the limiting Euler equations (see the fourth term in (2.17)). The requirement that $w_{p}$ is nonnegative shall be considered as a kind of stability condition for the measure solutions. \textbf{Remark 2.2.} The above definition is consistent with the familiar integral weak solution just by taking \begin{equation} \varrho = \rho I_{\Omega}\mathcal{L}^2, ~~m = \rho u I_{\Omega}\mathcal{L}^2, ~~n =u I_{\Omega}\mathcal{L}^2, ~~\tau = \frac{u^{2}}{2} I_{\Omega}\mathcal{L}^2, ~~\mathcal{P} = \int_{}^{\rho}\frac{P'(s)}{s}ds I_{\Omega}\mathcal{L}^2. \notag \end{equation} The main results of this paper are the following two theorems. \textbf{Theorem 2.1.} ~~ For the piston moving into the gas ($v_{0} = -1$), there are two cases. One is that a shock wave solution of the problem (1.1), (2.4) and (2.12) exists precisely for Mach numbers satisfying $M_{0} \in \left( 0, \sqrt{2}(1+\gamma)^{-\frac{1}{2}}\right) $. The other is that, if $M_{0} \geq \sqrt{2}(1+\gamma)^{-\frac{1}{2}}$, the problem (1.1), (2.4) and (2.12) admits a Radon measure solution in the sense of Definition 2.2 in the domain $\Omega$.
\textbf{Theorem 2.2.} ~~ For the piston receding from the gas ($v_{0} = 1$), the problem (1.1), (2.4) and (2.12) always has a rarefaction wave solution in the domain $\Omega$ for $0<M_{0}<\infty$. As $M_{0} \rightarrow \infty$, a vacuum state occurs in front of the piston, the rarefaction wave degenerates to the discontinuity line $x = - t$, and the high Mach number limiting equations are consistent with the pressureless Euler equations. In what follows, we complete the detailed proofs of these two theorems in Sections 3 and 4. \section{Proof of Theorem 2.1} \subsection{Shock wave solution for $M_{0} \in \left( 0, \sqrt{2}(1+\gamma)^{-\frac{1}{2}}\right) $} ~~~~Since a shock wave appears ahead of the piston when it pushes into the gas, we first give the definition of an integral weak solution of the problem (1.1), (2.4) and (2.12). \textbf{Definition 3.1.}~~ We say $\left(\rho, u \right)\in L^{\infty}(\left[ 0,\infty\right) \times\left( -\infty,0\right] ) $ is an integral solution to the problem (1.1), (2.4) and (2.12) if, for any $\phi \in C^{1}_{0}(R^{2})$, there hold \begin{equation}\label{key} \left\{\begin{array}{ll} \int_{\Omega}(\rho \partial t \phi+\rho u \partial x \phi)dxdt + \int_{-\infty}^{0}\rho_{0}(x)\phi(0,x)dx = 0,\\ \\ \int_{\Omega}\left[ u \partial t \phi +\left( \frac {u^{2}}{2}+\int_{}^{\rho}{\frac{P'(s)}{s}ds}\right) \partial x \phi \right] dxdt \\- \int_{0}^{\infty}\int_{}^{\rho}{\frac{P'(s)}{s}ds}\big\vert_{x=0} \phi(t,0)dt + \int_{-\infty}^{0}u_{0}(x) \phi(0,x)dx = 0. \end{array}\right. \tag{3.1} \end{equation} Notice that the problem (1.1), (2.4) and (2.12) is a Riemann problem with boundary conditions taken into account, and the equations (1.1), the initial data (2.12) as well as the boundary condition (2.4) are invariant under the scaling $$ (x, t)\rightarrow (\mu x, \mu t) ~~~~~for ~~~\mu > 0. $$ Thus, we can construct a piecewise constant self-similar solution $U(x, t) = V(\frac{x}{t}) $ connecting the two states $(\rho_{0}, -v_{0})$ and $(\rho_{1}, 0)$, of the form \begin{equation}\label{key} U(x, t) = V\left( \frac{x}{t}\right) = \left\{\begin{array}{ll} V_{0} = (1,1),~~~~~ -\infty \leq \frac{x}{t} < \sigma,\\ V_{1} = (\rho_{1}, 0),~~~~~ \sigma < \frac{x}{t} \leq 0, \tag{3.2} \end{array}\right . \end{equation} where $V_{0}$ and $V_{1}$ are subject to the Rankine-Hugoniot condition \begin{equation}\label{key} \left\{\begin{array}{ll} \sigma(\rho_{1} - \rho_{0}) = \rho_{1}u_{1} - \rho_{0}u_{0},~~~~\\ \sigma(u_{1} -u_{0}) = \frac{u_{1}^{2}}{2} -A\rho_{1}^{-\alpha} - \frac{u_{0}^{2}}{2} +A\rho_{0}^{-\alpha}, \end{array}\right. \tag{3.3} \end{equation} where $A=\frac{s\gamma}{1+\gamma}$, ~~$\alpha = \gamma+1$. In view of $\rho_{0} = 1$, $u_{0} = 1$, $u_{1} = 0$, it follows from the first equation of (3.3) that \begin{equation}\label{key} \sigma = -\frac{1}{\rho_{1}-1}. \tag{3.4} \end{equation} \noindent Note that $\sigma <0$ requires $\rho_{1} >1$. Substituting $\sigma$ into the second equation of (3.3) gives \begin{equation}\label{key} \frac{1}{\rho_{1}-1} = - \frac{s\gamma \rho_{1}^{-\alpha}}{1+\gamma}+\frac{s\gamma }{1+\gamma}-\frac{1}{2}, \tag{3.5} \end{equation} where $s = \frac{1}{\gamma M_{0}^{2}}$. Simplifying and rearranging (3.5) gives \begin{equation}\label{key} M_{0}^2 = \frac{2(1-\rho_{1}^{-1})(1-\rho_{1}^{-\alpha})}{(1+\gamma)(1+\rho_{1}^{-1})} = \frac{2(1-\rho_{1}^{-1})(1-\rho_{1}^{-\gamma-1})}{(1+\gamma)(1+\rho_{1}^{-1})}. \tag{3.6} \end{equation}
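As an aside, (3.6) can be solved numerically for $\rho_{1}$. The following Python sketch (our own illustration, not part of the proof; it presupposes the monotonicity established in the next paragraph) applies bisection to the function $f$ defined below:
\begin{verbatim}
def f(rho, gamma):
    # right-hand side of (3.6) without the factor 2/(1+gamma)
    return (1 - rho**-1) * (1 - rho**(-gamma - 1)) / (1 + rho**-1)

def rho1(M0, gamma, tol=1e-12):
    # solve f(rho) = (1+gamma)*M0**2/2 by bisection
    target = (1 + gamma) * M0**2 / 2.0
    assert 0.0 < target < 1.0, "no shock solution for this Mach number"
    lo, hi = 1.0 + 1e-12, 2.0
    while f(hi, gamma) < target:  # bracket the root; f is increasing
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid, gamma) < target else (lo, mid)
    return 0.5 * (lo + hi)

# density behind the shock and shock speed sigma = -1/(rho1 - 1), cf. (3.4)
r = rho1(0.5, 1.0)
print(r, -1.0 / (r - 1.0))
\end{verbatim}
The existence and uniqueness of such a root is established rigorously in what follows.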
\tag{3.6} \end{equation} For $0<M_{0}<\sqrt{2}(1+\gamma)^{-\frac{1}{2}}$, we have $0< \frac{(1-\rho_{1}^{-1})(1-\rho_{1}^{-\gamma-1})}{1+\rho_{1}^{-1}}<1$. Define the continuous function $f(\rho)= \frac{(1-\rho^{-1})(1-\rho^{-\gamma-1})}{1+\rho^{-1}}$ of $\rho$. Then $f(1) = 0$ and $f(\rho)\rightarrow 1$ as $\rho\rightarrow\infty$. By the intermediate value theorem for continuous functions, for each such $M_{0}$ there exists a $\rho_{1} > 1$ with $f(\rho_{1}) = \frac{(1+\gamma)M_{0}^{2}}{2}\in(0,1)$, which shows the existence of a shock wave solution. Furthermore, a direct computation shows that \begin{equation} \begin{split} \frac{2}{1+\gamma}f'(\rho) &= \frac{\left[ 2\left( 1-\rho^{-\gamma-1}\right)+2(\rho-1)(\gamma+1) \rho^{-\gamma-2} \right](\rho+1)-2(\rho-1)\left( 1-\rho^{-\gamma-1}\right) } {(1+\gamma)(\rho+1)^2} \\&=\frac{4\left( 1-\rho^{-\gamma-1}\right)+2(\rho-1)(\rho+1)(\gamma+1) \rho^{-\gamma-2} } {(1+\gamma)(\rho+1)^2}. \end{split}\tag{3.7} \end{equation} From (3.7), it is easy to infer that $f'(\rho)>0$ for $\rho>1$, i.e., $f(\rho)$ is strictly increasing on $(1,\infty)$, which guarantees the uniqueness of the shock wave solution. Thus, we have rigorously proved that there exists a shock wave solution (3.2) connecting the two states $(1,1)$ and $(\rho_{1}, 0)$ when the piston rushes into the gas with Mach number $0<M_{0}<\sqrt{2}(1+\gamma)^{-\frac{1}{2}}$. \subsection{Singular measure solution for $M_{0}\geq\sqrt{2}(1+\gamma)^{-\frac{1}{2}}$} From (3.6), if $M_{0}\geq\sqrt{2}(1+\gamma)^{-\frac{1}{2}}$, it would follow that $f(\rho_{1})\geq 1$, which is impossible since $f(\rho)<1$ for every finite $\rho>1$. In this case, the piston problem cannot be solved in the sense of integral weak solutions, which are measurable functions with respect to the Lebesgue measure. To handle this case, we construct a special measure solution by supposing that \begin{equation} \varrho = I_{\Omega}\mathcal{L}^2 + w_{\rho}(t)\delta_{\{x=0,~ t \geq 0\} }, ~m = I_{\Omega}\mathcal{L}^2 , ~n = I_{\Omega}\mathcal{L}^2, ~\tau = \frac{1}{2} I_{\Omega}\mathcal{L}^2,~\mathcal{P} = -\frac{1}{(1+\gamma)M_{0}^2}I_{\Omega}\mathcal{L}^2, \tag{3.8} \end{equation} where $ I_{\Omega}$ is the characteristic function of $\Omega$. These expressions are motivated by the physical phenomenon of the infinitely thin shock layer in hypersonic flow past bodies and the hypersonic similarity law [24], as well as the observations made in [17, Remark 1]. According to Definition 2.2 and integration by parts, we obtain \begin{equation} \begin{split} 0 &= \langle\varrho, \partial_t \phi\rangle + \langle m, \partial_x \phi\rangle + \int_{-\infty}^{0} \rho_{0} \phi(0, x)dx \\&= \int_{\Omega} \partial_t \phi \,dxdt + \int_{0}^{\infty} w_{\rho}(t) \partial_t \phi(t, 0)dt + \int_{0}^{\infty} \int_{-\infty}^{0} \partial_x \phi \,dxdt + \int_{-\infty}^{0} \phi(0, x)dx \\&= \int_{-\infty}^{0} \phi(t, x)\vert_{t=0}^{\infty} dx + w_{\rho}(t) \phi(t, 0)\vert_{t=0}^{\infty} - \int_{0}^{\infty} w'_{\rho}(t) \phi(t, 0)dt \\&\quad+ \int_{0}^{\infty} \phi(t, x)\vert_{x=-\infty}^{0} dt + \int_{-\infty}^{0} \phi(0, x)dx \\&= \int_{0}^{\infty} (1 - w'_{\rho}(t))\phi(t, 0)dt -w_{\rho}(0)\phi(0, 0).\end{split} \tag{3.9} \end{equation} By the arbitrariness of $\phi$, it follows that \begin{equation} \left\{\begin{array}{ll} w'_{\rho}(t) = 1, ~~~t\geq0,\\ w_{\rho}(0) = 0. \end{array}\right. \tag{3.10} \end{equation} Thus, we have \begin{equation} w_{\rho}(t) = t. 
\tag{3.11} \end{equation} From the momentum equation (2.17), similar calculations show that \begin{equation} \begin{split} 0 &= \langle n, \partial_t \phi\rangle + \langle\tau, \partial_x \phi\rangle + \langle\mathcal{P}, \partial_x \phi\rangle - \langle w_{p}(t)\delta_{\{x=0,~ t \geq 0\} }, \phi\rangle + \int_{-\infty}^{0} u_{0} \phi(0, x)dx \\&= \int_{0}^{\infty} \int_{-\infty}^{0} \partial_t \phi \,dxdt + \frac{1}{2} \int_{0}^{\infty} \int_{-\infty}^{0} \partial_x \phi \,dxdt - \frac{1}{(1+\gamma)M_{0}^2} \int_{0}^{\infty} \int_{-\infty}^{0} \partial_x \phi \,dxdt \\&\quad- \int_{0}^{\infty} w_{p}(t) \phi(t, 0)dt + \int_{-\infty}^{0} \phi(0, x)dx \\&= \int_{0}^{\infty} \left[ \frac{1}{2}-w_{p}(t) - \frac{1}{(1+\gamma)M_{0}^2}\right] \phi(t, 0)dt. \end{split} \tag{3.12} \end{equation} By the arbitrariness of $\phi$, and since $M_{0}\geq \sqrt{2}(1+\gamma)^{-\frac{1}{2}}$, we obtain \begin{equation} w_{p}(t) = \frac{1}{2} -\frac{1}{(1+\gamma)M_{0}^2} \geq 0. \tag{3.13} \end{equation} Therefore, when $M_{0}\geq\sqrt{2}(1+\gamma)^{-\frac{1}{2}}$, the shock wave and all the gas between the shock wave and the piston adhere to the piston and form a concentration of mass in the form of a Dirac measure. From Remark 2.1, we also see that the high Mach number limit is different from the vanishing pressure limit. As the Mach number $M_{0}$ increases to infinity, the shock front approaches the surface of the piston. This completes the proof of Theorem 2.1. \section{Proof of Theorem 2.2} ~~~~This section is devoted to the proof of Theorem 2.2. Let $U(x, t) = (\rho(x, t), u(x, t))$. The system (1.4) has two eigenvalues [10] \begin{equation} \lambda_{1}(U) = u -\sqrt{A\alpha}\rho^{-\frac{\alpha}{2}}, ~~~~~ \lambda_{2}(U) = u +\sqrt{A\alpha}\rho^{-\frac{\alpha}{2}}, \tag{4.1} \end{equation} with the two associated right eigenvectors \begin{equation} \vec{r}_{1} = \left( \sqrt{\rho}, -\sqrt{A\alpha \rho^{-(\alpha+1)}}\right)^{T},~~~~~ \vec{r}_{2} = \left( \sqrt{\rho}, \sqrt{A\alpha \rho^{-(\alpha+1)}}\right)^{T}, \tag{4.2} \end{equation} satisfying \begin{equation} \triangledown \lambda_{i}(U) \cdot\vec{r}_{i} = \mp \frac{2-\alpha}{2} \sqrt{A\alpha \rho^{-(\alpha+1)}} \neq 0, ~~~ (i=1, 2), \tag{4.3} \end{equation} where $A= \frac{s\gamma}{1+\gamma}$, $\alpha=\gamma+1$, $s=\frac{1}{\gamma M_{0}^2}$. Thus, both characteristic fields are genuinely nonlinear, and the Riemann solutions of the system (1.4) contain the first family rarefaction wave $R_{1}(U)$ and the second family rarefaction wave $R_2(U)$. We can likewise construct a solution of the form $U(t,x) = V\left( \frac{x}{t}\right) $. For any fixed $0<M_{0}<\infty$, we suppose the solution is composed of two constant states $V_{0} = (1, -1)$ and $V_{1} = (\rho_{1}, 0)$, together with a rarefaction wave $V_{m}$ connecting them: \begin{equation} U(x, t) = V\left( \frac{x}{t}\right) = \left\{\begin{array}{ll} V_{0} ,~~~~~~~~~~~~~~~ -\infty \leq \frac{x}{t} < \lambda_{i}(V_{0}),\\ V_{m}\left(\frac{x}{t} \right) ,~~~~~~~~~~ \lambda_{i}(V_{0}) \leq \frac{x}{t} < \lambda_{i}(V_{1}),\\ V_{1},~~~~~~~~~~~~~~~~~~ \lambda_{i}(V_{1}) < \frac{x}{t} \leq 0, \end{array}\right. ~~~ (i=1, 2). \tag{4.4} \end{equation} \noindent If the piston recedes from the gas, then $v_{0} >0$ and $\rho_{1} < \rho_{0}$. 
For the first family rarefaction wave $R_{1}(U)$, we can deduce the self-similar solution as follows: \begin{equation} \left\{\begin{array}{ll} \eta = \frac{x}{t} = \lambda_{1}(U) = u - \sqrt{A\alpha}\rho^{-\frac{\alpha}{2}},\\ \\ u -\frac{2}{\alpha}\sqrt{A\alpha}\rho^{-\frac{\alpha}{2}} = u_{0} - \frac{2}{\alpha}\sqrt{A\alpha}\rho_{0}^{-\frac{\alpha}{2}} , ~~\rho_{1} < \rho < \rho_{0}, \\ \\ \lambda_{1}(V_{0}) \leq \lambda_{1}(U) \leq \lambda_{1}(V_{1}), \end{array}\right.\tag{4.5} \end{equation} where $U(t, x) = V(\frac{x}{t})$. Substituting $\rho_{0}=1$ and $u_{0}=-1$ into the second equation of (4.5), we calculate that \begin{equation} u = -1+\frac{2}{\alpha}\sqrt{A\alpha}\left( \rho^{-\frac{\alpha}{2}}-1\right) . \tag{4.6} \end{equation} Then, substituting $A = \frac{s\gamma}{1+\gamma},~ \alpha=1+\gamma,~ s=\frac{1}{\gamma M_{0}^2}$ into (4.6) gives \begin{equation} u = -1 + 2 (\gamma+1)^{-1}M_{0}^{-1}\left( \rho^{-\frac{\gamma+1}{2}}-1\right) . \tag{4.7} \end{equation} Inserting (4.7) into the first equation of (4.5), one obtains \begin{equation} \eta = -1 - 2 (\gamma+1)^{-1}M_{0}^{-1}+M_{0}^{-1}(1-\gamma)(1+\gamma)^{-1} \rho^{-\frac{\gamma+1}{2}}, \tag{4.8} \end{equation} where $0<\gamma<1$ and $0<M_{0}<\infty$. The density $\rho$ will be regarded as a function of $\eta$. From (4.8), we have \begin{equation} \rho(\eta) = \left[ \frac{(\eta+1)(1+\gamma)M_{0}+2}{1-\gamma} \right]^{-\frac{2}{1+\gamma}} . \tag{4.9} \end{equation} Combining (4.7) with (4.9), we obtain \begin{equation} u(\eta) =-1+2(\eta+1)(1-\gamma)^{-1}+2M_{0}^{-1}(1-\gamma)^{-1}. \tag{4.10} \end{equation} Considering $V_{0}=(1, -1)$, by the first equation of (4.5) we obtain $\eta_{-1} = \lambda_{1}(V_{0}) = -1-M_{0}^{-1}$. Moreover, for the state $V_{1} = (\rho_{1}, 0)$, $u(\eta)$ satisfies the boundary condition $u(\eta_{0})=0$, which yields $\eta_{0}=-\frac{\gamma+1}{2}-\frac{1}{M_{0}}$. Based on the above discussion and (4.9), we conclude that there exists $V_{m}\left(\frac{x}{t} \right)=(\rho(\eta), u(\eta))$ satisfying $$\eta_{-1}\leq\eta\leq\eta_{0}$$ and $$0<\rho(\eta_{0})\leq\rho(\eta)\leq\rho(\eta_{-1}).$$ Consequently, (4.4) gives a rarefaction wave solution of system (1.1) with (2.4) and (2.12) in the domain $\Omega$. If the Mach number $M_{0}\rightarrow \infty$, we obtain $\eta_{-1}\rightarrow -1$, $\eta_{0} \rightarrow -\frac{1+\gamma}{2}$, $P_{0} \rightarrow 0$ and $P_{1} \rightarrow 0$. For any given $\eta$ satisfying $\eta_{-1}<\eta\leq\eta_{0}$, letting $M_{0} \rightarrow \infty$ we have $-1<\eta\leq-\frac{1+\gamma}{2}$. By this and (4.9), we obtain \begin{equation} \lim\limits_{M_{0}\rightarrow \infty}\rho(\eta)=\lim\limits_{M_{0}\rightarrow \infty} \left[ \frac {1}{(\eta+1)(1+\gamma)M_{0}+2} \right]^{\frac{2}{1+\gamma}}\left(1-\gamma \right)^{\frac{2}{1+\gamma}}= 0. \tag{4.11} \end{equation} Substituting $u(\eta_{0})=0$ into (4.7) yields $\rho(\eta_{0}) = \left[ \frac{M_{0}(\gamma+1)+2}{2} \right]^{-\frac{2}{\gamma+1}} $, from which it is easy to get \begin{equation} \lim\limits_{M_{0}\rightarrow \infty}\rho(\eta_{0})=0. \tag{4.12} \end{equation} Based on the above analysis, together with (4.11) and (4.12), we conclude that a vacuum state occurs in front of the piston and the right discontinuity line $x= - \frac{1+\gamma}{2}t$ vanishes as $M_{0}\rightarrow \infty$. 
Moreover, the rarefaction wave $\eta = \frac{x}{t} \in [\eta_{-1}, \eta_{0}]$ degenerates into the discontinuity line $x=-t$, and the high Mach number limit equations are the pressureless Euler equations. Analogously, for the second family rarefaction wave $R_{2}(U(x,t))$, we can deduce the self-similar solution as follows: \begin{equation} \left\{\begin{array}{ll} \eta = \frac{x}{t} = \lambda_{2}(U) = u + \sqrt{A\alpha}\rho^{-\frac{\alpha}{2}},\\ \\ u + \frac{2}{\alpha}\sqrt{A\alpha}\rho^{-\frac{\alpha}{2}} = u_{0} + \frac{2}{\alpha}\sqrt{A\alpha}\rho_{0}^{-\frac{\alpha}{2}} , ~~\rho_{1} < \rho < \rho_{0}, \\ \\ \lambda_{2}(V_{0}) \leq \lambda_{2}(U) \leq \lambda_{2}(V_{1}). \end{array}\right.\tag{4.13} \end{equation} Similar calculations lead to \begin{equation} \rho(\eta) = \left[\left(\eta+1-2(1+\gamma)^{-1}M_{0}^{-1} \right)(1+\gamma) (\gamma-1)^{-1}M_{0} \right]^{-\frac{2}{1+\gamma}}, \tag{4.14} \end{equation} and \begin{equation} u(\eta) = -1+2(\eta+1)(1-\gamma)^{-1}+2(\gamma-1)^{-1}M_{0}^{-1}, \tag{4.15} \end{equation} where $-1+M_{0}^{-1}\leq \eta \leq -\frac{\gamma+1}{2}+M_{0}^{-1}$. Since $u_{1}=0$, we have \begin{equation} \rho_{1} = \left( 1-\frac{(1+\gamma)M_{0}}{2}\right)^{-\frac{2}{1+\gamma}} . \tag{4.16} \end{equation} Several different cases arise. If $0<M_{0}<\frac{2}{1+\gamma}$, then $\rho_{1}>1$. If $M_{0}>\frac{2}{1+\gamma}$ with $0<\gamma<1$, then $\rho_{1}$ fails to be a nonnegative real number. If $M_{0} \rightarrow \frac{2}{1+\gamma}$, we have \begin{equation} \lim\limits_{M_{0}\rightarrow \frac{2}{1+\gamma}} \left( 1-\frac{(1+\gamma)M_{0}}{2}\right)^{-\frac{2}{1+\gamma}}=\infty . \tag{4.17} \end{equation} Each of these outcomes contradicts the requirement $0\leq \rho_{1}<\rho_{0}=1$. Therefore, the second family rarefaction wave $R_{2}(U(x,t))$ is not a physical solution when the piston recedes from the gas. \textbf{Remark 4.1.} We remark that the limiting equations for the compressible fluid flow of a generalized Chaplygin gas as the Mach number $M_{0}\rightarrow \infty$ are the pressureless Euler equations. The rarefaction wave solutions of the compressible fluid flow equations for the piston problem degenerate into a contact discontinuity at $x=-t$. \section{Acknowledgements}\label{sec6} This research is supported by the Department of Education of Fujian Province in China (No. JAT210254) and the Minnan Normal University (No. KJ2021020). \section{Statements and Declarations} \subsection{Competing Interests} The authors have no financial or non-financial interests related to this work to disclose.
{ "timestamp": "2022-09-20T02:23:12", "yymm": "2209", "arxiv_id": "2209.08705", "language": "en", "url": "https://arxiv.org/abs/2209.08705" }
\section{Introduction} \label{sec:intro} This paper considers a class of control policies, called \emph{threshold policies}, that naturally arise in many practical problems. For example, a smart home server may only turn on the air conditioner when the room temperature exceeds a certain threshold, and a central bank may only raise the interest rate when inflation exceeds a certain threshold. For such problems, finding the optimal control policies can be reduced to finding the appropriate thresholds given other factors of the system, such as the number of people in the room in the smart home server scenario or the unemployment rate and the current interest rate in the central bank scenario. An important feature of threshold policies is that their actions are monotone. For example, if a smart home server would turn on the air conditioner at a certain temperature, then, all other factors being equal, the server would also turn on the air conditioner when the temperature is even higher. By leveraging this monotone property, an algorithm aiming to learn the optimal threshold can potentially be much more efficient than generic reinforcement learning algorithms seeking to learn the optimal action at different points of temperature separately. In order to design an efficient algorithm for learning the optimal threshold policy, we first formally define a class of Markov decision processes (MDPs) that admit threshold policies and its objective function. The optimal threshold policy is then the one that maximizes the objective function. However, the objective function involves an integral over a continuous range, which makes it infeasible to directly apply standard tools, such as backward-propagation in neural networks, to perform gradient updates. Surprisingly, we show that, by leveraging the monotone property of threshold policies, the gradient of the objective function has a very simple expression. Built upon this expression, we propose Deep Threshold-Optimal Policy (DeepTOP), a model-free actor-critic deep reinforcement learning algorithm that finds the optimal threshold policies. We evaluate the performance of DeepTOP by considering three practical problems, an electric vehicle (EV) charging problem that determines whether to charge an EV in the face of unknown fluctuations of electricity price, an inventory management problem that determines whether to order for goods in the face of unknown seasonal demands, and a make-to-stock problem for servicing jobs with different sizes. For all problems, DeepTOP significantly outperforms other state-of-the-art deep reinforcement learning algorithms due to its ability to exploit the monotone property. We also study the notoriously hard restless multi-armed bandit (RMAB) problem. We show that the Whittle index policy, a powerful tool for RMABs, can be viewed as an optimal threshold policy for an alternative problem. Based on this observation, we define an objective function for the alternative problem, of which the Whittle index is the maximizer. We again show that the gradient of the objective function has a simple expression. This simple expression allows us to extend DeepTOP for the learning of the Whittle index. We compare this DeepTOP extension to three recently proposed algorithms that seek to learn the optimal index policies through other indirect properties. Simulation results show that the DeepTOP extension learns much faster because it directly finds the optimal threshold policy. The rest of the paper is organized as follows. 
Section \ref{sec:problem_MDP} defines the MDP setting and threshold policies. We present the DeepTOP algorithm for MDP in Section \ref{sec:deeptop_mdp}. We then discuss how the Whittle index policy for RMABs can be viewed as a threshold policy in Section \ref{sec:problem_rmabs} and develop a DeepTOP extension for learning it in Section \ref{sec:deeptop_rmab}. We show DeepTOP's performance results for MDPs and RMABs in Section \ref{sec:experiments}, and give related works in Section \ref{sec:related} before concluding. \section{Threshold Policies for MDPs} \label{sec:problem_MDP} Consider an agent controlling a stochastic environment $\mathcal{E}$ described as an MDP $\mathcal{E} = (\mathcal{S}, \mathcal{A}, \mathcal{R}, \mathcal{P}, \gamma)$, with state space $\mathcal{S}$, binary action space $\mathcal{A} := \{0,1\}$, reward function $\mathcal{R} : \mathcal{S} \times \mathcal{A} \rightarrow \Omega$, transition dynamics $\mathcal{P} : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow \varmathbb{R}$, and discount factor $\gamma \in [0, 1)$, where $\varmathbb{R}$ is the set of real numbers and $\Omega$ is the set of random variables. At each timestep $t$, the agent picks an action $a_t \in \mathcal{A}$ for the current state $s_t$. The state $s_t\in \mathcal{S}=\varmathbb{R}\times \mathcal{V}$ has two components: a scalar state $\lambda_t\in \varmathbb{R}$, and a vector state $v_t\in \mathcal{V}$, where $\mathcal{V}$ is a discrete set of vectors. We assume the environment state is fully observable. Given the state-action pair $(s_t, a_t)$, the MDP generates a reward $r_t$ following the unknown random variable $\mathcal{R}(s_t, a_t)$, and a random next state $s_{t+1}=(\lambda_{t+1}, v_{t+1})$ following the unknown distribution $\mathcal{P}$. We use $\bar{r}(\lambda, v,a):=E[\mathcal{R}((\lambda, v), a)]$ to denote the unknown expected one-step reward that can be obtained for the state-action pair $(\lambda, v, a)$. A threshold policy is one that defines a threshold function $\mu: \mathcal{V}\rightarrow \varmathbb{R}$ mapping each vector state to a real number. The policy then deterministically picks $a_t =\mathds{1}(\mu(v_t) > \lambda_t)$, where $\mathds{1}(\cdot)$ is the indicator function. There are many applications where it is natural to consider threshold policies and we discuss some of them below. \begin{example} Consider the problem of charging electric vehicles (EV). When an EV arrives at a charging station, it specifies its demands for charge and a deadline upon which it will leave the station. The electricity price changes over time following some random process. The goal of the operator is to fulfill the EV's requirement with minimum cost. In this problem, we can model the system by letting the scalar state $\lambda_t$ be the current electricity price and the vector state $v_t$ be the remaining charge and time to deadline of the EV. For this problem, it is natural to consider a threshold policy that defines a threshold $\mu(v_t)$ as the highest price the operator is willing to pay to charge the EV under vector state $v_t$. The operator only charges the vehicle, i.e., chooses $a_t=1$, if $\lambda_t<\mu(v_t)$. \end{example} \begin{example} Consider the problem of warehouse management. A warehouse stores goods waiting to be sold. When the number of stored goods exceeds the demand, then there is a holding cost for each unsold good. On the other hand, if the number of stored goods is insufficient to fulfill the demand, then there is a cost of lost sales. 
The goal of the manager is to decide when to place orders so as to minimize the total cost. In this problem, we can let the scalar state $\lambda_t$ be the current inventory and let the vector state $v_t$ be the vector of all factors, such as upcoming holidays, that can influence future demands. It is natural to consider a threshold policy where the manager only places a new order if the current inventory $\lambda_t$ falls below a threshold $\mu(v_t)$ based on the current vector state $v_t$. \end{example} \begin{example} Consider a smart home server that controls the air conditioner. Let $\lambda_t$ be $-($current temperature$)$ and $v_t$ be the time of the day and the number of people in the house. The server should turn on the air conditioner only if the temperature exceeds some threshold determined by $v_t$, or, equivalently, $\lambda_t<\mu(v_t)$. \end{example} Given a threshold policy with threshold function $\mu(\cdot)$, we can define the corresponding action-value function $Q_{\mu}(\lambda, v, a)$. Let $\rho_{\mu}(\lambda',v', \lambda, v)$ be the discounted distribution of visits to state $(\lambda', v')$ under the threshold policy when the initial state is $(\lambda, v)$, and let $M$ be a sufficiently large constant such that $\lambda_t\in[-M, +M]$ for all $t$. When the initial state is $(\lambda, v)$, the expected discounted reward under the policy is \begin{equation} \label{equation:q_value_mdp} Q_{\mu}\Big(\lambda, v, \mathds{1}(\mu(v) > \lambda)\Big) = \sum_{v'\in \mathcal{V}} \int_{\lambda' = -M}^{\lambda' = +M}\rho_{\mu}(\lambda',v', \lambda, v)\bar{r}\big(\lambda',v', \mathds{1}(\mu(v') > \lambda')\big)d\lambda'. \end{equation} Our goal is to learn the optimal threshold function $\mu^\phi(v)$, parametrized by a vector $\phi$, that maximizes the objective function \begin{equation} \label{equation:objective_mdp_var_lambda} K(\mu^\phi):= \int_{\lambda = -M}^{\lambda = +M} \sum_{v\in\mathcal{V}}Q_{\mu^\phi}\Big(\lambda, v, \mathds{1}(\mu^\phi(v) > \lambda)\Big) d\lambda. \end{equation} \section{Deep Threshold Optimal Policy for MDPs} \label{sec:deeptop_mdp} In this section, we present a deep threshold optimal policy (DeepTOP) for MDPs that finds the optimal $\phi$ for maximizing $K(\mu^\phi)$. \subsection{Threshold Policy Gradient Theorem for MDPs} \label{sec:threshold_theorem_mdp} In order to design DeepTOP, we first study the gradient $\nabla_\phi K(\mu^\phi)$. At first glance, computing $\nabla_\phi K(\mu^\phi)$ looks intractable since it involves an integral over $\lambda \in [-M, +M]$. However, we establish the following threshold policy gradient theorem, which shows the surprising result that $\nabla_\phi K(\mu^\phi)$ has a simple expression. \begin{theorem} \label{theorem:policy_gradient_MDP} Given the parameter vector $\phi$, let $\bar{\rho}(\lambda, v)$ be the discounted state distribution when the initial state is chosen uniformly at random under the threshold policy. If all vector states $v\in \mathcal{V}$ have distinct values of $\mu^\phi(v)$, then, \begin{align} \label{equation:threshold_mdp} &\nabla_\phi K(\mu^\phi) =2M|\mathcal{V}| \sum_{v\in\mathcal{V}} \bar{\rho}(\mu^\phi(v),v) \Big( Q_{\mu^\phi}\big(\mu^\phi(v), v,1\big) - Q_{\mu^\phi}\big(\mu^\phi(v), v,0\big)\Big) \nabla_\phi\mu^\phi(v). \end{align} \end{theorem} \begin{proof} \vspace{-1em} Let $\bar{\rho}_t(\lambda, v)$ be the distribution that the state at time $t$ is $(\lambda, v)$ when the initial state is chosen uniformly at random. 
Clearly, we have $\bar{\rho}(\lambda, v)=\sum_{t=1}^\infty \gamma^{t-1}\bar{\rho}_t(\lambda, v)$. Given $\phi$, we number all states in $\mathcal{V}$ such that $\mu^\phi(v^1)>\mu^\phi(v^2)>\dots$. Let $\mathbb{M}^0=+M$, $\mathbb{M}^n=\mu^\phi(v^n)$, for all $1\leq n\leq |\mathcal{V}|$, and $\mathbb{M}^{|\mathcal{V}|+1}=-M$. Also, let $\mathbb{V}^n$ be the subset of states $\{v | \mu^\phi(v)>\mathbb{M}^n\}=\{v^1, v^2,\dots, v^{n-1}\}$. Now, consider the interval $(\mathbb{M}^{n+1}, \mathbb{M}^n)$ for some $n$. Notice that, for all $\lambda\in (\mathbb{M}^{n+1}, \mathbb{M}^n)$, $\mathds{1}(\mu^\phi(v) > \lambda)=1$ if and only if $v\in \mathbb{V}^{n+1}$. In other words, for any vector state $v$, the threshold policy would take the same action under all $\lambda \in (\mathbb{M}^{n+1}, \mathbb{M}^n)$, and we use $\pi^{n+1}(v)$ to denote this action. We then have \begin{align} & \nabla_\phi K(\mu^\phi) = \nabla_\phi\int_{\lambda = -M}^{\lambda = +M} \sum_{v\in\mathcal{V}}Q_{\mu^\phi}(\lambda, v, \mathds{1}(\mu^\phi(v) > \lambda)) d\lambda = \sum_{v\in \mathcal{V}}\nabla_\phi\int_{\lambda=-M}^{\lambda=+M}Q_{\mu^\phi}(\lambda, v,\mathds{1}(\mu^\phi(v) > \lambda))d\lambda \nonumber\\ =&\sum_{v\in \mathcal{V}}\sum_{n=0}^{|\mathcal{V}|}\nabla_\phi\int_{\lambda=\mathbb{M}^{n+1}}^{\lambda=\mathbb{M}^n}Q_{\mu^\phi}(\lambda, v,\pi^{n+1}(v))d\lambda \nonumber\\ =&\sum_{v\in \mathcal{V}}\sum_{n=0}^{|\mathcal{V}|} \Bigg(Q_{\mu^\phi}\big(\mathbb{M}^{n}, v,\pi^{n+1}(v)\big)\nabla_\phi \mathbb{M}^{n}-Q_{\mu^\phi}\big(\mathbb{M}^{n+1},v,\pi^{n+1}(v)\big)\nabla_\phi \mathbb{M}^{n+1}+\int_{\lambda=\mathbb{M}^{n+1}}^{\lambda=\mathbb{M}^n}\nabla_\phi Q_{\mu^\phi}(\lambda, v,\pi^{n+1}(v))d\lambda\Bigg),\label{eq:policy_gradient_MDP:gradientK} \end{align} where the summation--integration swap in the first equality follows from the Fubini--Tonelli theorem and the last step follows from the Leibniz integral rule. We simplify the first two terms in the last step by \begin{align} &\sum_{v\in \mathcal{V}}\sum_{n=0}^{|\mathcal{V}|} \Bigg(Q_{\mu^\phi}\big(\mathbb{M}^{n}, v,\pi^{n+1}(v)\big)\nabla_\phi \mathbb{M}^{n}-Q_{\mu^\phi}\big(\mathbb{M}^{n+1},v,\pi^{n+1}(v)\big)\nabla_\phi \mathbb{M}^{n+1}\Bigg)\nonumber\\ =&\sum_{v\in\mathcal{V}}\sum_{n=1}^{|\mathcal{V}|} \Big( Q_{\mu^\phi}\big(\mu^\phi(v^n), v,\mathds{1}(v\in \mathbb{V}^{n+1})\big)-Q_{\mu^\phi}\big(\mu^\phi(v^n),v,\mathds{1}(v\in \mathbb{V}^{n})\big)\Big)\nabla_\phi\mu^\phi(v^n) \nonumber\\ =& 2M|\mathcal{V}|\sum_{v\in\mathcal{V}} \bar{\rho}_1(\mu^\phi(v),v) \Big( Q_{\mu^\phi}\big(\mu^\phi(v), v,1\big) - Q_{\mu^\phi}\big(\mu^\phi(v), v,0\big)\Big) \nabla_\phi\mu^\phi(v), \label{eq:policy_gradient_MDP:further} \end{align} where the last equality holds because the two indicators coincide unless $v=v^n$ (so for each $n$ only the term with $v=v^n$ survives), and because $\bar{\rho}_1(\lambda, v)$ is the uniform density $\frac{1}{2M|\mathcal{V}|}$. Next, we expand the last term in~\eqref{eq:policy_gradient_MDP:gradientK}. Note that $Q_{\mu^\phi}(\lambda, v,a)=\bar{r}(\lambda, v,a) + \gamma\int_{\lambda'=-M}^{\lambda'=+M}\sum_{v'}p(\lambda',v'|\lambda, v,a)Q_{\mu^\phi}(\lambda', v',\mathds{1}(\mu^\phi(v')>\lambda'))d\lambda'$, where $p(\cdot|\cdot)$ is the transition probability. Hence, $\nabla_\phi Q_{\mu^\phi}(\lambda, v,a) = \gamma\nabla_\phi\int_{\lambda'=-M}^{\lambda'=+M}\sum_{v'}p(\lambda',v'|\lambda, v,a)Q_{\mu^\phi}(\lambda', v',\mathds{1}(\mu^\phi(v')>\lambda'))d\lambda'$. 
Using the same techniques as in~\eqref{eq:policy_gradient_MDP:gradientK} and~\eqref{eq:policy_gradient_MDP:further}, we have \begin{align*} &\sum_{v\in \mathcal{V}}\sum_{n=0}^{|\mathcal{V}|} \int_{\lambda=\mathbb{M}^{n+1}}^{\lambda=\mathbb{M}^n}\nabla_\phi Q_{\mu^\phi}(\lambda, v,\pi^{n+1}(v))d\lambda=\sum_{v\in \mathcal{V}}\int_{\lambda=-M}^{\lambda=+M}\nabla_\phi Q_{\mu^\phi}(\lambda, v,\mathds{1}(\mu^\phi(v)>\lambda))d\lambda\\ &= \gamma\sum_{v\in \mathcal{V}}\int_{\lambda=-M}^{\lambda=+M}\Big(\nabla_\phi\int_{\lambda'=-M}^{\lambda'=+M}\sum_{v' \in \mathcal{V}}p(\lambda',v'|\lambda, v,\mathds{1}(\mu^\phi(v)>\lambda))Q_{\mu^\phi}(\lambda', v',\mathds{1}(\mu^\phi(v')>\lambda'))d\lambda'\Big)d\lambda\\ &=2M|\mathcal{V}| \sum_{v\in\mathcal{V}} \gamma\bar{\rho}_2(\mu^\phi(v),v) \Big( Q_{\mu^\phi}\big(\mu^\phi(v), v,1\big) - Q_{\mu^\phi}\big(\mu^\phi(v), v,0\big)\Big) \nabla_\phi\mu^\phi(v)\\ &\quad+\gamma \sum_{v\in \mathcal{V}}\int_{\lambda=-M}^{\lambda=+M}\Big(\sum_{v' \in \mathcal{V}}\int_{\lambda' =-M}^{\lambda'=+M}\nabla_\phi \big(p(\lambda',v'|\lambda, v, \mathds{1}(\mu^\phi(v)>\lambda))Q_{\mu^\phi}(\lambda', v',\mathds{1}(\mu^\phi(v')>\lambda'))\big)d\lambda'\Big)d\lambda. \end{align*} In the above equation, recursively expanding the last term in time establishes~\eqref{equation:threshold_mdp}. \end{proof} \vspace{-1em} \subsection{DeepTOP Algorithm Design for MDPs} \label{subsec:deeptop_mdp_algo} Motivated by Theorem~\ref{theorem:policy_gradient_MDP}, we now present DeepTOP-MDP, a model-free, actor-critic deep RL algorithm. DeepTOP-MDP maintains an actor network with parameters $\phi$ that learns a threshold function $\mu^\phi(v)$, and a critic network with parameters $\theta$ that learns an action-value function $Q^\theta(\lambda, v, a)$. DeepTOP-MDP also maintains a target critic network with parameters $\theta'$ that is updated more slowly than the critic parameters $\theta$. The purpose of the target critic network is to improve learning stability, as demonstrated in \cite{fujimoto2018, lillicrap2015}. The objective of the critic network is to find $\theta$ that minimizes the loss function \begin{equation} \label{equation:critic_loss_expected_mdp} \mathcal{L}(\theta) := \mathop{\mathbb{E}}\limits_{s_t, a_t, r_t, s_{t+1}} \Big[\bigg(Q^\theta(\lambda_t, v_t, a_t) - r_t - \gamma \max\limits_{a' \in \mathcal{A}} Q^{\theta'}\big(\lambda_{t+1}, v_{t+1}, a'\big)\bigg)^2\Big], \end{equation} where $(s_t, a_t, r_t, s_{t+1})$ is sampled under some policy with $s_t = (\lambda_t, v_t)$. The objective of the actor network is to find $\phi$ that maximizes $\int_{\lambda = -M}^{\lambda = +M} \sum_{v\in\mathcal{V}}Q^\theta_{\mu^\phi}\Big(\lambda, v, \mathds{1}(\mu^\phi(v) > \lambda)\Big) d\lambda$. In each timestep $t$, the environment $\mathcal{E}$ provides a state $s_t$ to the agent. We use an exploration parameter $\epsilon_t \in [0,1)$: with probability $\epsilon_t$ the agent selects a random action; otherwise, DeepTOP-MDP calculates $\mu^\phi(v_t)$ based on $v_t$ and chooses $a_t = \mathds{1}(\mu^\phi(v_t) > \lambda_t)$. $\mathcal{E}$ generates a reward $r_t$ and a next state $s_{t+1}$, and a replay memory, denoted by $\mathcal{M}$, stores the transition $\{s_t, a_t, r_t, s_{t+1}\}$. After filling the memory with at least $B$ transitions, DeepTOP-MDP updates the parameters $\phi,\theta,\theta'$ in every timestep using a sampled minibatch of $B$ transitions $\{s_{t_k}, a_{t_k},r_{t_k}, s_{t_k + 1}\}$, $1 \leq k \leq B$, as sketched in the code below; the precise gradient estimates are given in Equations~\eqref{equation:mdp_critic_gradient} and~\eqref{equation:estimate_mdp_gradient}. 
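To make the update step concrete, the following is a minimal PyTorch-style sketch of the DeepTOP-MDP networks and one training update (the \texttt{ThresholdActor} and \texttt{QCritic} names, the layer sizes, and all hyper-parameter values are illustrative assumptions rather than the exact implementation used in our experiments):
\begin{verbatim}
import torch
import torch.nn as nn

class ThresholdActor(nn.Module):          # v -> mu_phi(v)
    def __init__(self, v_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(v_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))
    def forward(self, v):
        return self.net(v).squeeze(-1)

class QCritic(nn.Module):                 # (lambda, v, a) -> Q_theta
    def __init__(self, v_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(v_dim + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))
    def forward(self, lam, v, a):
        x = torch.cat([lam.unsqueeze(-1), v, a.unsqueeze(-1)], dim=-1)
        return self.net(x).squeeze(-1)

def update(actor, critic, critic_tgt, batch, a_opt, c_opt,
           gamma=0.99, tau=0.005):
    # batch holds float tensors of B stored transitions.
    lam, v, a, r, lam2, v2 = batch
    # Critic step: minimize the TD loss L(theta), taking the max over
    # the binary action space in the target.
    with torch.no_grad():
        q_next = torch.maximum(critic_tgt(lam2, v2, torch.zeros_like(a)),
                               critic_tgt(lam2, v2, torch.ones_like(a)))
        target = r + gamma * q_next
    c_loss = ((critic(lam, v, a) - target) ** 2).mean()
    c_opt.zero_grad(); c_loss.backward(); c_opt.step()
    # Actor step: ascend the threshold policy gradient,
    # (Q(mu(v), v, 1) - Q(mu(v), v, 0)) * grad_phi mu(v).
    # The Q-difference is a fixed coefficient, hence the detach().
    mu = actor(v)
    adv = (critic(mu, v, torch.ones_like(a)) -
           critic(mu, v, torch.zeros_like(a))).detach()
    a_loss = -(adv * mu).mean()
    a_opt.zero_grad(); a_loss.backward(); a_opt.step()
    # Soft-update the target critic: theta' <- tau*theta + (1-tau)*theta'.
    with torch.no_grad():
        for p, pt in zip(critic.parameters(), critic_tgt.parameters()):
            pt.mul_(1 - tau).add_(tau * p)
\end{verbatim}
Note that, unlike DDPG, the actor update does not backpropagate through the critic: the Q-value difference multiplying $\nabla_\phi\mu^\phi(v)$ is treated as a fixed coefficient, exactly as prescribed by Theorem~\ref{theorem:policy_gradient_MDP}.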
The critic network uses the sampled transitions to calculate the estimated gradient of $\mathcal{L}(\theta)$: \begin{equation} \label{equation:mdp_critic_gradient} \hat{\nabla}_\theta \mathcal{L}(\theta) := \frac{2}{B}\sum\limits_{k=1}^{B}\bigg(Q^\theta(\lambda_{t_k}, v_{t_k}, a_{t_k}) - r_{t_k} - \gamma \max\limits_{a' \in \mathcal{A}}Q^{\theta'}\big(\lambda_{t_k+1}, v_{t_k+1}, a'\big)\bigg) \nabla_{\theta}Q^\theta(\lambda_{t_k}, v_{t_k}, a_{t_k}). \end{equation} Similarly, the actor network uses the sampled transitions and Equation~\eqref{equation:threshold_mdp} to calculate the estimated gradient: \begin{equation} \label{equation:estimate_mdp_gradient} \hat{\nabla}_\phi K(\mu^\phi) := \frac{1}{B} \sum\limits_{k=1}^{B} \Bigg(Q_{\mu^\phi}^\theta\Big(\mu^\phi(v_{t_k}),v_{t_k}, 1\Big) - Q_{\mu^\phi}^\theta\Big(\mu^\phi(v_{t_k}),v_{t_k},0\Big)\Bigg) \nabla_\phi \mu^\phi(v_{t_k}). \end{equation} Both the critic network and the actor network then take a gradient update step. Finally, we soft update the target critic's parameters $\theta'$ using $\theta' \leftarrow \tau \theta + (1 - \tau) \theta',$ with $\tau < 1$. The complete pseudocode is given in Algorithm~\ref{alg:threshold_mdp}. \begin{algorithm}[tb] \caption{Deep Threshold Optimal Policy Training for MDPs (DeepTOP-MDP)} \label{alg:threshold_mdp} \begin{algorithmic} \STATE Randomly select initial actor network parameters $\phi$ and critic network parameters $\theta$. \STATE Set target critic network parameters $\theta' \leftarrow \theta$, and initialize replay memory $\mathcal{M}$. \FOR{timestep $t =1, 2, 3, \hdots$} \STATE Receive state $s_t = (\lambda_t,v_t)$ from environment $\mathcal{E}$. \STATE Select action $a_t = \mathds{1}(\mu^\phi(v_t) > \lambda_t)$ with probability $1 - \epsilon_t$. Otherwise, select action $a_t$ randomly. \STATE Execute action $a_t$, and observe reward $r_t$ and next state $s_{t+1}$ from $\mathcal{E}$. \STATE Store transition $\{s_t, a_t,r_t, s_{t+1}\}$ into $\mathcal{M}$. \STATE Sample a minibatch of $B$ transitions $\{s_{t_k}, a_{t_k},r_{t_k}, s_{t_k + 1}\}$, for $1 \leq k \leq B$ from $\mathcal{M}$. \STATE Update critic network parameters $\theta$ using the estimated gradient from Equation~\eqref{equation:mdp_critic_gradient}. \STATE Update actor network parameters $\phi$ using the estimated gradient from Equation~\eqref{equation:estimate_mdp_gradient}. \STATE Soft update target critic parameters $\theta'$: $\theta' \leftarrow \tau \theta + (1 - \tau) \theta'$. \ENDFOR \end{algorithmic} \end{algorithm} \section{Whittle Index Policy for RMABs} \label{sec:problem_rmabs} \vspace{-1em} In this section, we demonstrate how the Whittle index policy \cite{whittle1988}, a powerful tool for solving the notoriously intractable Restless Multi-Armed Bandit (RMAB) problem, can be represented with a set of threshold functions. We first describe the RMAB control problem, and then define the Whittle index function. An RMAB problem consists of $N$ arms. The environment of an arm $i$, denoted as $\mathcal{E}_i$, is an MDP with a discrete state space $s_{i,t}\in \mathcal{S}_i$, and a binary action space $a_{i,t}\in \mathcal{A} := \{0,1\}$, where $a_{i,t} = 1$ means that arm $i$ is activated, and $a_{i,t} = 0$ means that arm $i$ is left passive at time $t$. Given the state-action pair $(s_{i,t}, a_{i,t})$, $\mathcal{E}_i$ generates a random reward $r_{i,t}$ and a random next state $s_{i,t+1}$ following some unknown probability distributions based on $(s_{i,t}, a_{i,t})$. 
Here we also use $\bar{r}_i(s_{i}, a_{i})$ to denote the unknown expected one-step reward that can be obtained for the state-action pair $(s_{i}, a_{i})$. A control policy over all arms takes the states $(s_{1,t}, s_{2,t}, \hdots, s_{N,t})$ as input, and activates $V$ out of $N$ arms in every timestep. Solving for the optimal control policy for RMABs was proven to be intractable \cite{papadimitriou1999}, since the agent must optimize over an input state space exponential in $N$. To circumvent the dimensionality challenge, the Whittle index policy assigns real values to an arm's states using a Whittle index function for each arm $W_i : \mathcal{S}_i \rightarrow \varmathbb{R}$. Based on the assigned Whittle indices $\big(W_1(s_{1,t}), W_2(s_{2,t}), \hdots, W_N(s_{N,t})\big)$, the Whittle index policy activates the $V$ highest-valued arms out of $N$ arms in timestep $t$, and picks the passive action for the remaining arms. \vspace{-1em} \subsection{The Whittle Index Function as The Optimal Threshold Function} To define the Whittle index and relate it to threshold functions, let us first consider an alternative control problem of a single arm $i$ as environment $\mathcal{E}_i$ with \emph{activation cost} $\lambda$. In this problem, the agent follows a control policy that determines whether the arm is activated or not based on its current state $s_{i,t}$. If the policy activates the arm, then the agent must pay an activation cost of $\lambda$. Hence, the agent's \emph{net reward} at timestep $t$ is defined as $r_{i,t} - \lambda a_{i,t}$. We now consider applying threshold policies for this alternative control problem. A threshold policy defines a threshold function $\mu_i : \mathcal{S}_i \rightarrow \varmathbb{R}$ that maps each state to a real value. It then activates the arm if and only if $\mu_i(s_{i,t}) > \lambda$, i.e., $a_{i,t}=\mathds{1}(\mu_i(s_i) > \lambda)$. The value of $\mu_i(s_{i,t})$ can therefore be viewed as the largest activation cost that the agent is willing to pay to activate the arm under state $s_{i,t}$. To characterize the performance of a threshold policy with a threshold function $\mu_i(\cdot)$, we let $\rho_{\mu_i, \lambda}(s_i', s_i)$ be the discounted state distribution, which is the average discounted number of visits of state $s_i'$ when the initial state is $s_i$ under the threshold policy and $\lambda$. When the initial state is $s_{i}$, the expected discounted net reward under the threshold policy is \begin{align} \label{equation:q_expression} Q_{i,\lambda}\big(s_{i},\mathds{1}(\mu_i(s_i) > \lambda)\big)=\sum_{s_i'\in \mathcal{S}_i}\rho_{\mu_i, \lambda}(s_i', s_i)\Big(\bar{r}_i\big(s_i', \mathds{1}(\mu_i(s_i') > \lambda)\big)-\lambda \mathds{1}(\mu_i(s_i') > \lambda)\Big). \end{align} The performance of the threshold policy under a given $\lambda$ is defined as $J_{i,\lambda}(\mu_i):=\sum_{s_{i}\in \mathcal{S}_i} Q_{i,\lambda}\big(s_{i},\mathds{1}(\mu_i(s_i) > \lambda)\big)$. 
The Whittle index of this arm is defined as the function $\mu_i(\cdot)$ whose corresponding threshold policy maximizes $J_{i,\lambda}(\mu_i)$ for all $\lambda$: \begin{definition} (Whittle Index) \label{definition:optimal_rmab} If there exists a function $\mu_i: \mathcal{S}_i \rightarrow \varmathbb{R}$ such that choosing $\mathds{1}(\mu_i(s_i)>\lambda)$ maximizes $J_{i,\lambda}(\mu_i)$ for all $\lambda\in(-\infty, +\infty)$, then we say that $\mu_i(s_i)$ is the Whittle index $W_i(s_i)$ \footnote{To simplify notations, we use a necessary and sufficient condition for the Whittle index as its definition. We refer interested readers to \cite{gittins2011} for more thorough discussions on the Whittle index.}. \end{definition} We note that, for some arms, there does not exist any function $\mu_i(s_i)$ that satisfies the condition in Definition~\ref{definition:optimal_rmab}. For such arms, the Whittle index does not exist. We say that an arm is \emph{indexable} if it has a well-defined Whittle index function. Definition~\ref{definition:optimal_rmab} shows that finding the Whittle index is equivalent to finding the optimal $\mu_i(\cdot)$ that maximizes $J_{i,\lambda}(\mu_i)$ for all $\lambda\in(-\infty, +\infty)$. Parameterizing a threshold function $\mu_i^{\phi_i}(\cdot)$ by parameters $\phi_i$ and letting $M$ be a sufficiently large number such that $\mu_i^{\phi_i}(s_i)\in(-M,+M)$ for all $s_i$ and $\phi_i$, we aim to find the optimal $\phi_i$ for maximizing the objective function \begin{align} K_i(\mu_i^{\phi_i}):= \int_{\lambda=-M}^{\lambda=+M} \sum_{s_{i}\in \mathcal{S}_i} Q_{i,\lambda}\big(s_{i},\mathds{1}(\mu_i^{\phi_i}(s_i) > \lambda)\big)d\lambda. \label{equation:Whittle objective} \end{align} \vspace{-1.25em} \section{Deep Threshold Optimal Policy for RMABs} \label{sec:deeptop_rmab} \vspace{-0.5em} To design a DeepTOP variant for RMABs, we first give the gradient of the objective function. \begin{theorem} \label{theorem:policy_gradient_whittle} Given the parameter vector $\phi_i$, let $\bar{\rho}_{ \lambda}(s_i)$ be the discounted state distribution when the initial state is chosen uniformly at random and the activation cost is $\lambda$. If all states $s_i\in \mathcal{S}_i$ have distinct values of $\mu_i^{\phi_i}(s_i)$, then, \begin{align} \nabla_{\phi_i} K_i(\mu_i^{\phi_i}) = |\mathcal{S}_i| \sum_{s_i\in\mathcal{S}_i} \bar{\rho}_{\mu_i^{\phi_i}(s_i)}(s_i)\Big( Q_{i,\mu_i^{\phi_i}(s_i)}\big(s_i,1\big)-Q_{i,\mu_i^{\phi_i}(s_i)}\big(s_i,0)\Big)\nabla_{\phi_i}\mu_i^{\phi_i}(s_i). \label{equation:deterministic_policy} \end{align} \end{theorem} \vspace{-1.5em} \begin{proof} The proof is similar to that of Theorem~\ref{theorem:policy_gradient_MDP}. For completeness, we provide it in Appendix~\ref{appendix:rmab_theorem_proof}. \end{proof} We note that Theorem~\ref{theorem:policy_gradient_whittle} does not require the arm to be indexable. Whether an arm is indexable or not, using Theorem~\ref{theorem:policy_gradient_whittle} along with a gradient ascent algorithm will find a locally optimal $\phi_i$ that maximizes $K_i(\mu_i^{\phi_i})$. When the arm is indexable, the resulting threshold function $\mu_i^{\phi_i}$ is the Whittle index function. \section{Simulations} \label{sec:experiments} We have implemented and tested both DeepTOP-MDP and DeepTOP-RMAB in a variety of settings. The training procedure of the two DeepTOP algorithms is similar to that of the DDPG \cite{lillicrap2015} algorithm except for the expression of the gradients; a sketch of the resulting per-arm DeepTOP-RMAB update is given below. 
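The following is a minimal PyTorch-style sketch of one DeepTOP-RMAB update for a single arm, consistent with Theorem~\ref{theorem:policy_gradient_whittle} and the pseudocode in Appendix~\ref{app:rmab_algo}. Feeding the sampled activation cost $\lambda$ to the critic as an extra input and using the net reward $r - \lambda a$ follow the alternative problem of Section~\ref{sec:problem_rmabs}; the module interfaces (analogous to the DeepTOP-MDP sketch above) and all hyper-parameter values are illustrative assumptions:
\begin{verbatim}
import torch

# One DeepTOP-RMAB update for arm i.
# actor_i:  s -> mu_i(s), the candidate Whittle index of state s.
# critic_i: (lambda, s, a) -> Q_{i,lambda}(s, a), with the activation
#           cost lambda treated as an additional network input.
def update_arm(actor_i, critic_i, critic_tgt, batch, a_opt, c_opt,
               M=1.0, gamma=0.99):
    s, a, r, s2 = batch                          # B stored transitions
    lam = (2.0 * torch.rand_like(r) - 1.0) * M   # B costs from [-M, +M]
    # Critic step: TD learning on the net reward r - lambda * a.
    with torch.no_grad():
        q_next = torch.maximum(critic_tgt(lam, s2, torch.zeros_like(a)),
                               critic_tgt(lam, s2, torch.ones_like(a)))
        target = r - lam * a + gamma * q_next
    c_loss = ((critic_i(lam, s, a) - target) ** 2).mean()
    c_opt.zero_grad(); c_loss.backward(); c_opt.step()
    # Actor step: ascend the gradient of the RMAB threshold policy
    # gradient theorem, with the cost evaluated at lambda = mu_i(s);
    # the Q-difference is again a fixed coefficient.
    mu = actor_i(s)
    adv = (critic_i(mu, s, torch.ones_like(a)) -
           critic_i(mu, s, torch.zeros_like(a))).detach()
    a_loss = -(adv * mu).mean()
    a_opt.zero_grad(); a_loss.backward(); a_opt.step()
\end{verbatim}
At decision time, the agent simply computes $\mu_i^{\phi_i}(s_{i,t})$ for every arm and activates the $V$ arms with the largest values, with $\epsilon_t$-greedy exploration, as in Algorithm~\ref{alg:threshold_rmab}.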
We implemented the DeepTOP algorithms by modifying an open-source implementation of DDPG \cite{horng_ddpg}. All source code can be found in the repository \url{https://github.com/khalednakhleh/deeptop}. \vspace{-0.5em} \subsection{Simulations for MDPs} We evaluate three MDPs, namely, the electric vehicle charging problem, the inventory management problem, and the make-to-stock problem. \vspace{-0.5em} \paragraph{EV charging problem.} This problem is based on Yu, Xu, and Tong \cite{yu2018}. It considers a charging station serving EVs. When an EV arrives at the station, it specifies the amount of charge it needs and a deadline upon which it will leave the station. The electricity price changes over time, and we model it by an Ornstein-Uhlenbeck process \cite{uhlenbeck1930}. In each timestep, the station decides whether to charge the EV or not. If it decides to charge the EV, then it provides one unit of charge to the EV. The station then obtains a unit reward and pays the current electricity price. If the station fails to fully charge the EV by the deadline of the EV, then the station suffers a penalty that is a convex function of the remaining needed charge. A new EV arrives at the station when the previous EV leaves. We model this problem by letting the scalar state be the current electricity price and the vector state be the remaining needed charge and time-to-deadline of the current EV. A threshold policy is one that calculates a threshold based on the EV's remaining needed charge and time-to-deadline, and then decides to charge the EV if and only if the current electricity price is below the threshold. \vspace{-0.5em} \paragraph{Inventory management problem.} We construct an inventory management problem by jointly incorporating a variety of practical challenges from the literature \cite{schwarz2006queueing, jakvsivc2015optimal, goyal2003production, san2021profit}, including seasonal fluctuations in demand and lead times in orders. We consider a warehouse holding goods. In each timestep, there is a random amount of demand whose mean depends on the time of the year. The warehouse can fulfill the demand as long as it has sufficient inventory, and it makes a profit for each unit of sold goods. At the end of the timestep, the warehouse incurs a unit holding cost for each unit of unsold goods. The warehouse manager needs to decide whether to order more goods. When it places an order for goods, there is a lead time of one time step; that is, the goods ordered at timestep $t$ are only available for sale at timestep $t+1$. We model this problem by letting the scalar state be the current inventory and the vector state be the time of the year. A threshold policy calculates a threshold based on the time of the year and decides to place an order for goods if the current inventory is below the threshold. \vspace{-0.5em} \paragraph{Make-to-stock production problem.} This problem is considered in \cite{salmut2021}. It studies a system that produces $m$ items with $W$ demand classes and buffer size $S$. Accepting a class $v$ order leads to a reward $R_v$, as long as there is still room in the buffer for the order. The classes of demands are ordered such that $R_1 > R_2>\dots$. In this problem, the scalar state is the number of accepted but unfinished orders and the vector state is the class of the next arriving order. More details about the three MDPs can be found in Appendix~\ref{app:mdp_prob_desc}; a minimal sketch of the inventory environment is given below. 
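As an illustration, the following Python sketch implements one plausible reading of the inventory management environment (the numerical parameters follow Appendix~\ref{app:mdp_prob_desc}; the unit holding cost \texttt{HOLD} and the function names are our own illustrative assumptions):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
# Capacity, bulk order size, unit selling price, assumed unit holding cost.
CAP, BULK, PRICE, HOLD = 1000, 500, 20, 1.0

def demand(season):
    # Seasonal Poisson demand, season b in {0, ..., 9}.
    return rng.poisson(np.sin(season * np.pi / 10) * 300)

def step(inventory, season, order):
    sold = min(inventory, demand(season))
    reward = PRICE * sold - HOLD * (inventory - sold)
    inventory -= sold
    if order:             # one-step lead time: new stock usable next step
        inventory = min(inventory + BULK, CAP)
    return reward, inventory, (season + 1) % 10

# A threshold policy orders iff the inventory falls below the learned
# threshold: a_t = 1(mu(v_t) > lambda_t), with lambda_t = inventory
# and v_t = season.
\end{verbatim}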
\begin{figure*}[b] \vspace{-1em} \centering \captionsetup[subfloat]{captionskip=-10pt} \includegraphics[width=\linewidth]{figs/mdp_results/mdp_128_128_results.pdf} \subfloat[EV charging.]{\hspace{0.38\linewidth} \label{fig:charging}} \subfloat[Inventory management.]{\hspace{0.27\linewidth} \label{fig:inventory}} \subfloat[Make-to-stock production.]{\hspace{0.37\linewidth} \label{fig:mts_prod}} \caption{Average reward results for the MDP problems.} \label{fig:mdp_probs} \end{figure*} \vspace{-0.5em} \paragraph{Evaluated policies.} We compare DeepTOP-MDP against DDPG \cite{lillicrap2015} and TD3 \cite{fujimoto2018}, two state-of-the-art off-policy and model-free deep RL algorithms. We use open-source implementations of these two algorithms from \cite{horng_ddpg, fujimoto_td3}. We use the same hyper-parameters, including the neural network architecture, learning rates, etc., for all three algorithms. We also evaluate the Structure-Aware Learning for Multiple Thresholds algorithm (SALMUT) \cite{salmut2021}, a reinforcement learning algorithm that finds the optimal threshold policy. SALMUT requires the vector states to be pre-sorted by their threshold values. Hence, SALMUT can only be applied to the make-to-stock production problem. Details about the training parameters can be found in Appendix~\ref{app:experiment_details}. For the EV charging problem, Yu, Xu, and Tong \cite{yu2018} have found the optimal threshold policy. We call this optimal threshold policy the \emph{Deadline Index} policy and compare DeepTOP-MDP against it. \vspace{-0.5em} \paragraph{Simulation results.} Simulation results for the three MDPs are shown in Figure~\ref{fig:mdp_probs}. The results are the average of 20 independent runs. Before starting a run, we fill an agent's memory with $1000$ transitions by randomly selecting actions. We plot the average reward obtained over the previous $100$ timesteps, averaged over the $20$ runs, together with the standard deviation bounds around the average reward. It can be observed that DeepTOP significantly outperforms DDPG, TD3, and SALMUT. Although the training procedure of DeepTOP is similar to that of DDPG, DeepTOP is able to achieve much faster learning by leveraging the monotone property. Without leveraging the monotone property, DDPG and TD3 need to learn the optimal policy for each scalar state independently, and therefore have much worse performance. We believe DeepTOP performs better than SALMUT because DeepTOP directly employs the threshold policy gradient, whereas SALMUT approximates threshold policies through randomized policies, since it can only handle continuous and differentiable functions. We also note that DeepTOP performs virtually the same as the Deadline Index policy for the EV charging problem after about $2000$ timesteps, suggesting that DeepTOP indeed finds the optimal threshold policy quickly. We also evaluate DeepTOP for different neural network architectures in Appendix~\ref{app:add_mdp_results}, and show that DeepTOP performs the best in all settings. 
\vspace{-0.5em} \subsection{Simulations for RMABs} \begin{wrapfigure}{r}{0.55\textwidth} \vspace{-1em} \begin{center} \includegraphics[width=0.55\textwidth]{figs/rmab_setting.pdf} \end{center} \caption{Arm $i$ as a Markov process with $100$ states and transition probabilities $p_i$ and $q_i$.} \label{fig:markov_proc} \end{wrapfigure} We evaluate two RMABs, namely, the one-dimensional bandits from \cite{killian2021} and the recovering bandits from \cite{nakhleh2021}. \vspace{-0.5em} \paragraph{One-dimensional bandits.} We consider an extension of the RMAB problem evaluated in Killian et al. \cite{killian2021}, which considers the case where each arm is a two-state Markov process. We extend it so that each arm is a Markov process with 100 states, numbered $0,1,\dots, 99,$ as shown in Figure~\ref{fig:markov_proc}, where state $99$ is the optimal state. The reward of an arm depends on the distance between its current state and state $99$. If the current state of arm $i$ is $s_{i,t}$, then it generates a reward $r_{i,t}=1 - (\frac{s_{i,t} - 99}{99})^2.$ If the arm is activated, then it changes to state $s_{i,t+1} = \min\{s_{i,t}+1, 99\}$ with probability $p_i$. If the arm is not activated, then it changes to state $s_{i,t+1} = \max\{s_{i,t} - 1, 0\}$ with probability $q_i$. In the simulations, we pick the probabilities $p_i$ to be evenly spaced over the interval $[0.2, 0.8]$, depending on the number of arms $N$, and we set $q_i = p_i$. We consider that there are $N$ arms and that the agent needs to activate $V$ arms in each timestep. We evaluate three settings: $(N,V)=(10,3), (20,5),$ and $(30,6)$. \vspace{-0.5em} \paragraph{Recovering bandits.} First introduced in \cite{recovering_2019}, recovering bandits study the varying behavior of consumers over time. A consumer's interest in a particular product falls if the consumer clicks on its advertisement link; however, their interest in the product recovers with time. The recovering bandit is modelled as an RMAB with each arm being an advertisement link. The reward of playing an arm is given by a function $f\big(\min(z,z_{max})\big)$, with $z$ being the time since the arm was last played. In our experiments, we consider arms with different reward functions, with the arm's state being the value $\min\{z, z_{max}\}$ and $z_{max} = 100$. We also evaluate recovering bandits on three settings: $(N,V)=(10,3), (20,5),$ and $(30,6)$. More details can be found in Appendix~\ref{app:recovering_details}. \vspace{-0.5em} \paragraph{Evaluated policies.} We compare DeepTOP-RMAB against three recent studies that aim to learn index policies for RMABs, namely, Lagrange policy Q-learning (LPQL) \cite{killian2021}, Whittle index based Q-learning (WIBQL) \cite{avrachenkov2020}, and the neural Whittle index network (NeurWIN) \cite{nakhleh2021}. LPQL consists of three steps: first, it learns a Q function for each arm independently; second, it uses the Q functions of all arms to determine a common Lagrangian; third, it uses the Lagrangian to calculate the index of each arm. WIBQL is a two-timescale algorithm that learns the Whittle indices of indexable arms by updating Q values on the fast timescale and index values on the slower timescale. NeurWIN is an off-line training algorithm based on REINFORCE that requires a simulator to learn the Whittle index. Both LPQL and WIBQL are tabular learning methods, which may perform poorly compared to deep RL algorithms when the size of the state space is large. 
Hence, we also design deep RL equivalents that approximate their Q functions using neural networks. We refer to these deep RL extensions as neural LPQL and neural WIBQL. In all experiments, neural LPQL, neural WIBQL, and NeurWIN use the same hyper-parameters as DeepTOP-RMAB. For the one-dimensional bandits, it can be shown that the Whittle index lies in the range $[-1,1]$, and hence we set $M=1$. For the recovering bandits, we set $M=10$. \vspace{-0.5em} \paragraph{Simulation results.} Simulation results are shown in Figures~\ref{fig:rmab_prob} and~\ref{fig:recovering_bandits_results}. It can be observed that DeepTOP achieves the optimal average rewards in all cases. The reason that neural LPQL performs worse than DeepTOP may lie in its reliance on a common Lagrangian. Since the common Lagrangian is calculated based on the Q functions of all arms, an inaccuracy in one arm's Q function can result in an inaccurate Lagrangian, which, in turn, leads to inaccuracy in the index values of all arms. Prior work \cite{killian2021} has already shown that WIBQL performs worse than LPQL. Hence, it is not surprising that neural WIBQL performs worse than both neural LPQL and DeepTOP. NeurWIN performs worse than DeepTOP because it is based on REINFORCE and therefore can only apply updates at the end of each minibatch of episodes. We also evaluate DeepTOP for different neural network architectures; the results are shown in Appendix~\ref{app:add_rmab_results} for the one-dimensional bandits and Appendix~\ref{app:add_recovering_results} for the recovering bandits. \begin{figure*}[ht] \centering \includegraphics[width=\linewidth]{figs/one_dim_reward_rmab/rmabs_results_layers_128_128_arms_10_20_30_budgets_3_5_6.pdf} \subfloat[$N=10.$ $V=3.$]{\hspace{0.35\linewidth}} \subfloat[$N=20.$ $V=5.$]{\hspace{0.33\linewidth}} \subfloat[$N=30.$ $V=6.$]{\hspace{0.33\linewidth}} \caption{Average reward results for the one-dimensional bandits.} \label{fig:rmab_prob} \end{figure*} \begin{figure*}[ht] \vspace{-1em} \centering \includegraphics[width=\linewidth]{figs/recovering_bandits/recovering_deeptop_128_128_arm_10_20_30_budget_3_5_6.pdf} \subfloat[$N=10.$ $V=3.$]{\hspace{0.35\linewidth}} \subfloat[$N=20.$ $V=5.$]{\hspace{0.35\linewidth}} \subfloat[$N=30.$ $V=6.$]{\hspace{0.3\linewidth}} \caption{Average reward results for the recovering bandits.} \label{fig:recovering_bandits_results} \vspace{-1em} \end{figure*} \section{Related Work} \label{sec:related} Threshold policies have been analysed for many decision-making problems formulated as MDPs. \cite{hegde2011} examined the problem of residential energy storage under price fluctuations, and proved the existence of optimal threshold policies for minimizing the cost. \cite{erseghe2013} proved that MDPs with convex and piecewise linear cost functions admit an optimal threshold policy. \cite{petrik2015} showed the existence of an optimal threshold policy for energy arbitrage given degrading battery capacity, and \cite{bertrand2019} used the REINFORCE algorithm \cite{williams1992} to learn a trading policy with price thresholds for intraday electricity markets. \cite{huang2016} considered mean field games in a multi-agent MDP setting, and characterized the individual agent strategy with a threshold policy when the mean field game admits a threshold policy. More recently, \cite{weng2017} studied threshold policies for job assignment in data centers with heterogeneous servers and job classes, and gave conditions for the existence of optimal threshold policies. 
\cite{zhou2019} proposed a distributed threshold-based control policy for graph traversal by assigning a state threshold that determines whether the agent stays in or leaves a state. For minimizing the age of information in energy-harvesting sensors, \cite{ceran2019} used the finite-difference policy gradient \cite{peters2006policy} to learn a possibly sub-optimal threshold policy in the average cost setting. \cite{hosseinloo2020event} proposed an RL-based threshold policy for semi-MDPs in controlling the micro-climate of buildings, with simulations demonstrating its efficacy on a single-zone building. \cite{shen2020} used the Deep Q-network RL algorithm to select alert thresholds in anti-fraud systems, with simulations showing performance improvements over static threshold policies. \cite{salmut2021} described the SALMUT RL algorithm, which exploits the ordered multi-threshold structure of the optimal policy, and SALMUT was implemented in \cite{jitani2021} for overload protection of computing nodes. In contrast to these works, DeepTOP-MDP is applicable to any MDP that admits threshold policies. For learning the Whittle index policy for RMABs, \cite{fu2019} proposed a Q-learning heuristic called the Q Whittle Index Controller (QWIC), which may not find the Whittle indices even when the training converges. \cite{nakhleh2021} describes a deep RL algorithm called NeurWIN for learning the Whittle index of a restless arm independently of other arms. However, NeurWIN requires a simulator to train the neural networks. Some recent studies, such as \cite{avrachenkov2020, biswas2021, killian2021}, proposed various online learning algorithms that find the Whittle index when the algorithms converge. These algorithms rely on indirect properties of the Whittle index, which explains why they converge more slowly than DeepTOP. \section{Conclusion and Future Work} \label{sec:conclusion} \vspace{-1em} In this paper, we presented DeepTOP: a deep RL actor-critic algorithm that learns the optimal threshold function for MDPs that admit threshold policies and for RMAB problems. We first developed the threshold policy gradient theorem, where we proved that a threshold function has a simple-to-compute gradient. Based on the gradient expressions, we designed the DeepTOP-MDP and DeepTOP-RMAB algorithm variants and compared them against state-of-the-art learning algorithms. In both the MDP and RMAB settings, experimental results showed that DeepTOP exceeds the performance of the baselines in all considered problems. A promising future direction is to extend DeepTOP to threshold policies with multiple actions. For example, the Federal Reserve needs to decide not only whether to raise the interest rate, but also the amount of the rate hike. \vspace{-1em} \begin{ack} \label{sec:ack} \vspace{-0.5em} This material is based upon work supported in part by NSF under Award Number ECCS-2127721, in part by the U.S. Army Research Laboratory and the U.S. Army Research Office under Grant Number W911NF-22-1-0151, and in part by Office of Naval Research under Contract N00014-21-1-2385. Portions of this research were conducted with the advanced computing resources provided by Texas A\&M High Performance Research Computing. 
\end{ack} \section*{\centering Appendices For DeepTOP: Deep Threshold-Optimal Policy for MDPs and RMABs} \label{sec:appendices} \section{Threshold Optimal Policy Gradient Theorem Proof for RMABs} \label{appendix:rmab_theorem_proof} \begin{proof} Let $\bar{\rho}_{t, \lambda}(s_i)$ be the distribution that the state at time $t$ is $s_i$ when the initial state is chosen uniformly at random and the activation cost is $\lambda$. We have $\bar{\rho}_{\lambda}(s_i) = \sum_{t=1}^\infty \gamma^{t-1}\bar{\rho}_{t, \lambda}(s_i)$. Given $\phi_i$, we number all states in $\mathcal{S}_i$ such that $\mu_i^{\phi_i}(s_i^1)>\mu_i^{\phi_i}(s_i^2)>\dots$. Let $\mathbb{M}^0=+M$, $\mathbb{M}^n=\mu_i^{\phi_i}(s_i^n)$, for all $1\leq n\leq |\mathcal{S}_i|$, and $\mathbb{M}^{|\mathcal{S}_i|+1}=-M$. Also, let $\mathbb{S}_i^n$ be the subset of states $\{s_i|\mu_i^{\phi_i}(s_i)>\mathbb{M}^n\}=\{s_i^1, s_i^2,\dots, s_i^{n-1}\}$. Now, consider the interval $(\mathbb{M}^{n+1}, \mathbb{M}^n)$ for some $n$. For all $\lambda\in (\mathbb{M}^{n+1}, \mathbb{M}^n)$, $\mathds{1}(\mu_i^{\phi_i}(s_i) > \lambda) = 1$ if and only if $s_i \in \mathbb{S}_i^{n+1}$. In other words, the threshold policy takes the same action under all $\lambda\in (\mathbb{M}^{n+1}, \mathbb{M}^n)$, and we use $\pi^{n+1}(s_i)$ to denote this action. We then have \begin{align} &\nabla_{\phi_i} K_i(\mu_i^{\phi_i}) = \nabla_{\phi_i}\int_{\lambda=-M}^{\lambda=+M} \sum_{s_{i}\in \mathcal{S}_i} Q_{i,\lambda}(s_{i},\mathds{1}(\mu_i^{\phi_i}(s_i) > \lambda))d\lambda = \sum_{s_{i}\in \mathcal{S}_i}\nabla_{\phi_i} \int_{\lambda=-M}^{\lambda=+M}Q_{i,\lambda}(s_{i},\mathds{1}(\mu_i^{\phi_i}(s_i) > \lambda))d\lambda \nonumber \\ &= \sum_{s_{i}\in \mathcal{S}_i}\sum_{n=0}^{|\mathcal{S}_i|}\nabla_{\phi_i}\int_{\lambda=\mathbb{M}^{n+1}}^{\lambda=\mathbb{M}^n}Q_{i,\lambda}(s_{i},\pi^{n+1}(s_i))d\lambda \nonumber \\ &=\sum_{s_{i}\in \mathcal{S}_i}\sum_{n=0}^{|\mathcal{S}_i|} \Bigg(Q_{i,\mathbb{M}^n}(s_i,\pi^{n+1}(s_i))\nabla_{\phi_i} \mathbb{M}^{n} -Q_{i,\mathbb{M}^{n+1}}(s_{i},\pi^{n+1}(s_i))\nabla_{\phi_i} \mathbb{M}^{n+1} + \int_{\lambda=\mathbb{M}^{n+1}}^{\lambda=\mathbb{M}^n} \nabla_{\phi_i} Q_{i,\lambda}(s_{i}, \pi^{n+1}(s_i)) d\lambda \Bigg), \label{eq:policy_gradient_rmab:gradientK} \end{align} where the summation--integration swap in the first equality follows from the Fubini--Tonelli theorem and the last step follows from the Leibniz integral rule. We simplify the first two terms in the last step by \begin{align} &\sum_{s_{i}\in \mathcal{S}_i}\sum_{n=0}^{|\mathcal{S}_i|} \Big(Q_{i,\mathbb{M}^n}(s_i,\pi^{n+1}(s_i))\nabla_{\phi_i} \mathbb{M}^{n} -Q_{i,\mathbb{M}^{n+1}}(s_{i},\pi^{n+1}(s_i))\nabla_{\phi_i} \mathbb{M}^{n+1} \Big) \nonumber \\ &= \sum_{s_{i}\in \mathcal{S}_i}\sum_{n=1}^{|\mathcal{S}_i|} \Big( Q_{i,\mu_i^{\phi_i}(s_i^n)}(s_i,\mathds{1}(s_i \in \mathbb{S}_i^{n+1})) -Q_{i,\mu_i^{\phi_i}(s_i^n)}(s_{i},\mathds{1}(s_i \in \mathbb{S}_i^{n})) \Big) \nabla_{\phi_i} \mu_i^{\phi_i}(s_i^n) \nonumber \\ &=|\mathcal{S}_i| \sum_{s_i\in\mathcal{S}_i} \bar{\rho}_{1,\mu_i^{\phi_i}(s_i)}(s_i) \Big( Q_{i,\mu_i^{\phi_i}(s_i)}\big(s_i,1\big)-Q_{i,\mu_i^{\phi_i}(s_i)}\big(s_i,0)\Big)\nabla_{\phi_i}\mu_i^{\phi_i}(s_i), \label{eq:rmab-first-two-terms} \end{align} where the last equality holds because the two indicators coincide unless $s_i = s_i^n$ (so for each $n$ only the term with $s_i = s_i^n$ survives), and because $\bar{\rho}_{1,\lambda}$ is the uniform distribution $\frac{1}{|\mathcal{S}_i|}$. Next, we expand the last term in \eqref{eq:policy_gradient_rmab:gradientK}. Note that $Q_{i, \lambda}(s_i, a_i) = \bar{r}_i(s_i ,a_i) - \lambda a_i + \gamma \int_{\lambda' = -M}^{\lambda' = + M} \sum_{s'_i} p(s'_i | s_i, a_i) Q_{i, \lambda'}(s'_i, \mathds{1}(\mu_i^{\phi_i}(s'_i) > \lambda'))d\lambda'$, where $p(\cdot | \cdot)$ is the transition probability. 
Hence, $\nabla_{\phi_i} Q_{i, \lambda}(s_i, \mathds{1}(\mu_i^{\phi_i}(s_i) > \lambda)) = \nabla_{\phi_i} \gamma \int_{\lambda' = -M}^{\lambda' = + M} \sum_{s'_i} p(s'_i | s_i, a_i) Q_{i, \lambda'}(s'_i, \mathds{1}(\mu_i^{\phi_i}(s'_i) > \lambda'))d\lambda'$, with $a_i = \mathds{1}(\mu_i^{\phi_i}(s_i) > \lambda)$. Using the same techniques as in \eqref{eq:policy_gradient_rmab:gradientK} and \eqref{eq:rmab-first-two-terms}, we have \begin{align} &\sum_{s_{i}\in \mathcal{S}_i}\sum_{n=0}^{|\mathcal{S}_i|} \int_{\lambda=\mathbb{M}^{n+1}}^{\lambda=\mathbb{M}^n} \nabla_{\phi_i} Q_{i,\lambda}(s_{i}, \pi^{n+1}(s_i)) d\lambda = \sum_{s_{i}\in \mathcal{S}_i} \int_{\lambda=-M}^{\lambda=+M} \nabla_{\phi_i} Q_{i,\lambda}(s_{i},\mathds{1}(\mu_i^{\phi_i}(s_i) > \lambda))d\lambda \nonumber \\ &= \gamma \sum_{s_{i}\in \mathcal{S}_i} \int_{\lambda=-M}^{\lambda=+M} \Big( \nabla_{\phi_i} \int_{\lambda'=-M}^{\lambda'=+M} \sum\limits_{s'_i \in \mathcal{S}_i} p(s'_i | s_i, \mathds{1}(\mu_i^{\phi_i}(s_i) > \lambda)) Q_{i, \lambda'}(s'_i, \mathds{1}(\mu_i^{\phi_i}(s'_i) > \lambda'))d\lambda' \Big) d\lambda \nonumber \\ &= |\mathcal{S}_i| \sum\limits_{s_i \in \mathcal{S}_i} \gamma\bar{\rho}_{2,\mu_i^{\phi_i}(s_i)}(s_i) \Big( Q_{i,\mu_i^{\phi_i}(s_i)}\big(s_i,1\big)-Q_{i,\mu_i^{\phi_i}(s_i)}\big(s_i,0\big)\Big)\nabla_{\phi_i}\mu_i^{\phi_i}(s_i) \nonumber \\ &+\gamma \sum\limits_{s_i \in \mathcal{S}_i} \int_{\lambda=-M}^{\lambda=+M} \Big( \sum\limits_{s'_i \in \mathcal{S}_i} \int_{\lambda'=-M}^{\lambda'=+M} \nabla_{\phi_i} \big(p(s'_i | s_i,\mathds{1}(\mu_i^{\phi_i}(s_i) > \lambda)) Q_{i, \lambda'}(s'_i, \mathds{1}(\mu_i^{\phi_i}(s'_i) > \lambda'))\big)d\lambda' \Big) d\lambda. \nonumber \end{align} In the above equation, expanding the last term recursively in time establishes~\eqref{equation:deterministic_policy}. \end{proof} \clearpage \section{DeepTOP-RMAB Algorithm Pseudocode} \label{app:rmab_algo} \renewcommand{\algorithmiccomment}[1]{#1} \begin{algorithm}[th] \caption{Deep Threshold Optimal Policy Training for RMABs (DeepTOP-RMAB)} \label{alg:threshold_rmab} \begin{algorithmic} \FOR{arm $i = 1,2,\hdots,N$} \STATE Randomly select initial parameters for the actor network $\phi_i$ and critic network $\theta_i$. \STATE Set the target critic network $\theta_i' \leftarrow \theta_i$, and initialize the replay memory $\mathcal{M}_i$. \ENDFOR \FOR{timestep $t = 1, 2, 3, \hdots$} \FOR{arm $i= 1,2,\hdots, N$} \STATE Receive state $s_{i,t}$ from arm environment $\mathcal{E}_i$, and calculate the state value $\mu_i^{\phi_i}(s_{i,t})$. \ENDFOR \STATE With probability $1-\epsilon_t$, activate the $V$ largest-valued arms and keep the remaining arms passive. Otherwise, randomly activate $V$ arms and leave the remaining arms passive. \FOR{arm $i=1,2,\hdots,N$} \STATE Observe reward $r_{i,t}$ and next state $s_{i,t+1}$. \STATE Store the transition $\{s_{i,t},a_{i,t},r_{i,t},s_{i,t+1}\}$ in memory $\mathcal{M}_i$. \STATE Sample a minibatch of $B$ transitions $\{s_{i,{t_k}},a_{i,{t_k}},r_{i,{t_k}},s_{i,{t_k}+1}\}$, $1 \leq k \leq B$, from memory $\mathcal{M}_i$. \STATE Randomly select $B$ values $[\lambda_{i,1}, \lambda_{i,2},\hdots,\lambda_{i,B}]$ uniformly from the range $[-M,+M]$. \STATE Update arm $i$'s critic network using the estimated gradient in Equation~\eqref{equation:rmab_critic_loss_gradient}. \STATE Update arm $i$'s actor network using the estimated gradient in Equation~\eqref{equation:rmab_actor_loss_gradient}. \STATE Soft update the target critic network parameters: $\theta_i' \leftarrow \tau \theta_{i} + (1 - \tau) \theta_i'.$ \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm}
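For readers who prefer code to pseudocode, the following is a minimal PyTorch sketch of the per-arm critic and actor updates invoked in the loop above. It is a sketch under stated assumptions rather than our exact implementation: the network call signatures, optimizer handling, and tensor shapes are illustrative, and the loss forms paraphrase the gradients of Equations~\eqref{equation:rmab_critic_loss_gradient} and \eqref{equation:rmab_actor_loss_gradient}. \begin{verbatim}
import torch

def deeptop_rmab_update(actor, critic, target_critic, batch, lambdas,
                        actor_opt, critic_opt, gamma=0.99):
    # actor(s) -> threshold mu_phi(s); critic(s, lam, a) -> Q_theta(s, lam, a)
    # (assumed signatures). batch holds [B, .]-shaped tensors; lambdas holds
    # B threshold values drawn uniformly from [-M, +M].
    s, a, r, s_next = batch

    # Critic update: one-step TD target, with the next action given by the
    # threshold rule 1(mu(s') > lambda) under the target network.
    with torch.no_grad():
        a_next = (actor(s_next) > lambdas).float()
        y = r + gamma * target_critic(s_next, lambdas, a_next)
    critic_loss = ((critic(s, lambdas, a) - y) ** 2).mean()
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor update: ascend (Q(s, mu(s), 1) - Q(s, mu(s), 0)) * grad mu(s),
    # the form given by the threshold policy gradient theorem.
    mu = actor(s)
    adv = critic(s, mu.detach(), torch.ones_like(mu)) \
        - critic(s, mu.detach(), torch.zeros_like(mu))
    actor_loss = -(adv.detach() * mu).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
\end{verbatim} Note that the thresholds $\lambda$ enter the critic as an ordinary input, so a single critic network is trained across all sampled threshold values.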
\section{MDP Problems' Description} \label{app:mdp_prob_desc} \paragraph{EV charging.} The vector state $v_t = (C_t, D_t)$ consists of the charging requirement $C_t$ and the time $D_t$ remaining until the vehicle departs the station at time $t$. In the simulations, we upper-bound the state elements with $C_t \leq 8$ and $D_t \leq 12$. The scalar state $\lambda_t$ is sampled from an Ornstein-Uhlenbeck process with noise parameter $0.15$, noise mean $0.0$, and noise standard deviation $0.2$. If the agent chooses to charge the vehicle by selecting action $a_t = 1$, the agent obtains a reward of $1 - \lambda_t$, and the MDP transitions to the next state $v_{t+1} = (C_t - 1, D_t - 1)$. Otherwise, for $a_t = 0$, the reward is zero and the MDP transitions to the next state $v_{t+1} = (C_t, D_t - 1)$. If the charging spot is empty at the next timestep (i.e., $v_{t+1} = (0,0)$), the environment randomly picks the charge requirement $C_{t+1}$ and time until deadline $D_{t+1}$ of the next EV. For the vehicle occupying the charging station, if its charge requirement is not met by the deadline, the agent incurs a penalty of $F(C_t) = 0.2 (C_t)^2$ that is subtracted from the reward $r_t$. The net reward is then $r_t - F(C_t)$. \paragraph{Inventory management.} The warehouse can store a maximum of $1000$ items, and is able to purchase new stock in bulks of $500$ items. The selling price of a single item is set to $20$. The vector state $v_t$ is the current market shopping season at time $t$. The scalar state $\lambda_t$ is the current warehouse inventory count at time $t$. We set $10$ different shopping seasons indexed by $b$ that model the customers' current demand rate. The corresponding demand rates for the seasons are $10$ different Poisson distributions with parameters $\sin(b\pi/10)\cdot 300$ for $0 \leq b \leq 9$. If the agent orders items (i.e., $a_t = 1$), it receives a reward equal to the total selling price of the items sold, where the number of items sold is the minimum of the remaining inventory count and the current demand. The next state $v_{t+1}$ is then the next market season index, $v_{t+1} = b+1 \mod 10$. Otherwise, for $a_t = 0$, the agent holds off on buying new items, and incurs a holding cost from the remaining unsold items. The next state is again $v_{t+1} = b + 1 \mod 10$. \paragraph{Make-to-stock production.} The environment models a queueing system with $m$ servers serving at rate $1/\mu$ and a finite buffer of size $S$. There are $W$ customer classes, each with Poisson mean arrival rate $\beta$. The state at timestep $t$ is $(\lambda_t, v_t)$, with the scalar state $\lambda_t \in \{0,1,\hdots, m+S\}$ and the vector state $v_t \in \{1,2,\hdots,W\}$. If the agent picks the passive action $a_t = 0$, the reward is the negative holding cost $-h(\lambda_t)$, where $h(\lambda_t) = 0.1(\lambda_t)^2$. For action $a_t = 1$, the agent receives a total net reward of $R_v - h(\lambda_t)$ if the scalar state $\lambda_t < m + S$. Otherwise, the reward is again $-h(\lambda_t)$. In the simulations, we set the number of servers $m=50$, buffer size $S=50$, number of customer classes $W=50$, $\mu = 4$, and arrival rate $\beta = 1$ for all customer classes. The rewards $R_v$ are evenly spaced between $200$ and $10$ across the $W$ customer classes.
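To make the EV charging dynamics above concrete, the following is a minimal Python sketch of the environment. It is an illustration under stated assumptions rather than the code used in our experiments: the class name, the discretized Ornstein-Uhlenbeck update, and the vehicle-reset logic (resetting when the deadline expires) are our own simplifications of the description above. \begin{verbatim}
import numpy as np

class EVChargingEnv:
    # Illustrative sketch of the EV charging MDP described above; names
    # and reset details are assumptions, not the experimental code.
    C_MAX, D_MAX = 8, 12

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.lam = 0.0                        # scalar state (OU process)
        self.C, self.D = self._new_vehicle()

    def _new_vehicle(self):
        # Environment randomly picks the next vehicle's charge
        # requirement C and time-until-deadline D.
        return (int(self.rng.integers(1, self.C_MAX + 1)),
                int(self.rng.integers(1, self.D_MAX + 1)))

    def _ou_step(self, theta=0.15, mu=0.0, sigma=0.2):
        # Discretized Ornstein-Uhlenbeck update with the stated parameters.
        self.lam += theta * (mu - self.lam) + sigma * self.rng.normal()
        return self.lam

    def step(self, a):
        # a = 1: charge (reward 1 - lambda_t); a = 0: idle (reward 0).
        r = (1.0 - self.lam) if a == 1 else 0.0
        self.C, self.D = max(self.C - a, 0), self.D - 1
        if self.D == 0:                       # vehicle departs
            r -= 0.2 * self.C ** 2            # penalty F(C) if charge unmet
            self.C, self.D = self._new_vehicle()
        return (self._ou_step(), (self.C, self.D)), r
\end{verbatim}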
\section{Experiments' Details} \label{app:experiment_details} We implemented all Deep RL algorithms in PyTorch \cite{pytorch2019}, with Adam \cite{kingma2015} as the optimizer. We used a learning rate of $10^{-4}$ for the actor networks and $10^{-3}$ for the critic networks. For tabular LPQL and tabular WIBQL, we set the initial learning rate of the action-value function to $0.1$; the initial learning rate for updating indices in tabular WIBQL is $0.2$. The warmup period lasts $1000$ timesteps, during which the memory $\mathcal{M}$ is filled with transitions generated by random actions. We use a constant $\epsilon_t = 0.05$ throughout all timesteps and a discount factor of $\gamma = 0.99$. All neural network layers were initialized using PyTorch's default method. We update parameters using a minibatch size of $64$ transitions, with policy updates made at every timestep after the warmup period ends. The neural networks used for the results in Section~\ref{sec:experiments} have two hidden layers of sizes $[128, 128]$, with the input layer dimension depending on the state size and an output layer of dimension one. The code we used for TD3, tabular LPQL, and tabular WIBQL is licensed under the MIT license; the DDPG code we used is licensed under the Apache License 2.0. All Deep RL algorithms were trained on a computing cluster with $9632$ computing cores distributed over $320$ nodes, using CPU cores only. \section{Additional MDP Simulation Results Using Different Neural Network Architectures} \label{app:add_mdp_results} We provide here results for the considered MDP problems with different neural network architectures. All other hyperparameters were kept the same as described in Appendix~\ref{app:experiment_details}. \begin{figure*}[ht] \centering \captionsetup[subfloat]{captionskip=-10pt} \includegraphics[width=\linewidth]{figs/mdp_results/mdp_64_128_64_results.pdf} \subfloat[EV charging.]{\hspace{0.38\linewidth}} \subfloat[Inventory management.]{\hspace{0.27\linewidth}} \subfloat[Make-to-stock production.]{\hspace{0.39\linewidth}} \caption{Hidden layers' size: $[64, 128, 64]$. Average reward results for the MDP problems.} \end{figure*} \begin{figure*}[ht] \centering \captionsetup[subfloat]{captionskip=-10pt} \includegraphics[width=\linewidth]{figs/mdp_results/mdp_32_64_64_64_64_32_results.pdf} \subfloat[EV charging.]{\hspace{0.38\linewidth}} \subfloat[Inventory management.]{\hspace{0.27\linewidth}} \subfloat[Make-to-stock production.]{\hspace{0.39\linewidth}} \caption{Hidden layers' size: $[32, 64, 64, 64, 64, 32]$. Average reward results for the MDP problems.} \end{figure*} \begin{figure*}[ht] \centering \captionsetup[subfloat]{captionskip=-10pt} \includegraphics[width=\linewidth]{figs/mdp_results/mdp_64_64_64_64_64_results.pdf} \subfloat[EV charging.]{\hspace{0.38\linewidth}} \subfloat[Inventory management.]{\hspace{0.27\linewidth}} \subfloat[Make-to-stock production.]{\hspace{0.39\linewidth}} \caption{Hidden layers' size: $[64, 64, 64, 64, 64]$.
Average reward results for the MDP problems.} \end{figure*} \section{Recovering Bandits' Description} \label{app:recovering_details} \begin{table}[t] \caption{$\Theta$ values for the recovering bandits.} \label{table:theta_recovering} \vskip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{lcc} \toprule Class&$\theta_0$ Value&$\theta_1$ Value \\ \midrule A & 10 & 0.2 \\ B & 8.5 & 0.4 \\ C & 7 & 0.6 \\ D & 5.5 & 0.8 \\ \bottomrule \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table} The state $s_{i,t}$ is the waiting time since arm $i$ was last activated, with the maximum waiting time $z_{max}$ set to $100$. If the agent chooses to activate the arm, the arm's state is reset to $1$. The arm's reward is given by the recovering function $f(s_{i,t})$: if the arm is activated, the reward is the function value at $s_{i,t}$; otherwise, if the arm is left passive, the reward is zero. The recovering reward function is generated from \begin{align} f(s_{i,t}) = \theta_0(1 - e^{-\theta_1\cdot s_{i,t}}). \end{align} We use the same $\Theta = [\theta_0, \theta_1]$ hyperparameters for the arms' reward classes as in \cite{nakhleh2021}, and provide them in Table~\ref{table:theta_recovering}. \section{Additional One-Dimensional Bandits' Simulation Results Using Different Neural Network Architectures} \label{app:add_rmab_results} We provide here results for the considered RMAB problem with different neural network architectures. All other hyperparameters were kept the same as described in Appendix~\ref{app:experiment_details}. \begin{figure*}[ht] \centering \includegraphics[width=\linewidth]{figs/one_dim_reward_rmab/rmabs_results_layers_64_128_64_arms_10_20_30_budgets_3_5_6.pdf} \subfloat[$N=10.$ $V=3.$]{\hspace{0.35\linewidth}} \subfloat[$N=20.$ $V=5.$]{\hspace{0.35\linewidth}} \subfloat[$N=30.$ $V=6.$]{\hspace{0.32\linewidth}} \caption{Hidden layers' size per arm: $[64, 128, 64]$. Average reward results for the one-dimensional bandits.} \end{figure*} \begin{figure*}[ht] \centering \includegraphics[width=\linewidth]{figs/one_dim_reward_rmab/rmabs_results_layers_32_64_64_64_64_32_arms_10_20_30_budgets_3_5_6.pdf} \subfloat[$N=10.$ $V=3.$]{\hspace{0.35\linewidth}} \subfloat[$N=20.$ $V=5.$]{\hspace{0.35\linewidth}} \subfloat[$N=30.$ $V=6.$]{\hspace{0.32\linewidth}} \caption{Hidden layers' size per arm: $[32, 64, 64, 64, 64, 32]$. Average reward results for the one-dimensional bandits.} \end{figure*} \begin{figure*}[ht] \centering \includegraphics[width=\linewidth]{figs/one_dim_reward_rmab/rmabs_results_layers_64_64_64_64_64_arms_10_20_30_budgets_3_5_6.pdf} \subfloat[$N=10.$ $V=3.$]{\hspace{0.35\linewidth}} \subfloat[$N=20.$ $V=5.$]{\hspace{0.35\linewidth}} \subfloat[$N=30.$ $V=6.$]{\hspace{0.32\linewidth}} \caption{Hidden layers' size per arm: $[64, 64, 64, 64, 64]$. Average reward results for the one-dimensional bandits.} \end{figure*} \clearpage \section{Additional Recovering Bandits' Simulation Results Using Different Neural Network Architectures} \label{app:add_recovering_results} \begin{figure*}[ht] \centering \includegraphics[width=\linewidth]{figs/recovering_bandits/recovering_deeptop_64_128_64_arm_10_20_30_budget_3_5_6.pdf} \subfloat[$N=10.$ $V=3.$]{\hspace{0.35\linewidth}} \subfloat[$N=20.$ $V=5.$]{\hspace{0.35\linewidth}} \subfloat[$N=30.$ $V=6.$]{\hspace{0.3\linewidth}} \caption{Hidden layers' size per arm: $[64, 128, 64]$.
Average reward results for the recovering bandits.} \end{figure*} \begin{figure*}[ht] \centering \includegraphics[width=\linewidth]{figs/recovering_bandits/recovering_deeptop_32_64_64_64_64_32_arm_10_20_30_budget_3_5_6.pdf} \subfloat[$N=10.$ $V=3.$]{\hspace{0.35\linewidth}} \subfloat[$N=20.$ $V=5.$]{\hspace{0.35\linewidth}} \subfloat[$N=30.$ $V=6.$]{\hspace{0.3\linewidth}} \caption{Hidden layers' size per arm: $[32, 64, 64, 64, 64, 32]$. Average reward results for the recovering bandits.} \end{figure*} \begin{figure*}[ht] \centering \includegraphics[width=\linewidth]{figs/recovering_bandits/recovering_deeptop_64_64_64_64_64_arm_10_20_30_budget_3_5_6.pdf} \subfloat[$N=10.$ $V=3.$]{\hspace{0.35\linewidth}} \subfloat[$N=20.$ $V=5.$]{\hspace{0.35\linewidth}} \subfloat[$N=30.$ $V=6.$]{\hspace{0.3\linewidth}} \caption{Hidden layers' size per arm: $[64, 64, 64, 64, 64]$. Average reward results for the recovering bandits.} \end{figure*} \section*{Checklist} \begin{enumerate} \item For all authors... \begin{enumerate} \item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? \answerYes{\bf see Section \ref{sec:experiments} for simulation results.} \item Did you describe the limitations of your work? \answerYes{\bf described in Section \ref{sec:problem_MDP}. Our algorithm is only applicable to MDPs that admit a threshold policy.} \item Did you discuss any potential negative societal impacts of your work? \answerNA{} \item Have you read the ethics review guidelines and ensured that your paper conforms to them? \answerYes{} \end{enumerate} \item If you are including theoretical results... \begin{enumerate} \item Did you state the full set of assumptions of all theoretical results? \answerYes{\bf see Section \ref{sec:problem_MDP} and Section \ref{sec:problem_rmabs} for problem definitions and assumptions.} \item Did you include complete proofs of all theoretical results? \answerYes{\bf see Section \ref{sec:deeptop_mdp} and Appendix \ref{appendix:rmab_theorem_proof} for proofs.} \end{enumerate} \item If you ran experiments... \begin{enumerate} \item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? \answerYes{\bf code and run instructions are available in the supplemental material.} \item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? \answerYes{\bf see Appendix \ref{app:experiment_details}.} \item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? \answerYes{\bf we averaged the results over 20 runs and plotted the standard deviation from the average. See Section \ref{sec:experiments} for results.} \item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? \answerYes{\bf see Appendix \ref{app:experiment_details}.} \end{enumerate} \item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... \begin{enumerate} \item If your work uses existing assets, did you cite the creators? \answerYes{\bf we cite and use open-source code for the baselines: DDPG, TD3, LPQL, and WIBQL.} \item Did you mention the license of the assets? \answerYes{\bf licenses are mentioned in Appendix \ref{app:experiment_details}.} \item Did you include any new assets either in the supplemental material or as a URL?
\answerYes{\bf our code is included with the supplemental material.} \item Did you discuss whether and how consent was obtained from people whose data you're using/curating? \answerNA{} \item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? \answerNA{} \end{enumerate} \item If you used crowdsourcing or conducted research with human subjects... \begin{enumerate} \item Did you include the full text of instructions given to participants and screenshots, if applicable? \answerNA{} \item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? \answerNA{} \item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? \answerNA{} \end{enumerate} \end{enumerate} \fi \newpage{}
{ "timestamp": "2022-09-30T02:02:25", "yymm": "2209", "arxiv_id": "2209.08646", "language": "en", "url": "https://arxiv.org/abs/2209.08646" }
\section{Introduction} \label{Intro} Several experimental and theoretical studies have indicated that {\em planar} monolayer bismuthene on silicon carbide should be a high-temperature two-dimensional topological insulator (2DTI).\cite{Hsu2015,Reis2017,Dominguez2018,GLi2018,GK2018,Canonico2019,Azari2019,Hao2019,Stuhler2020} Building on this work, we have recently studied theoretically the properties of {\em curved} bismuthene-silicon bilayer nanostructures consisting of a monolayer of bismuthene adsorbed on a silicon monolayer.\cite{dome,cowrie} Our density functional theory (DFT) based total energy calculations showed that a hemispherical dome of this kind with a zigzag edge should be stable,\cite{dome} whereas such a dome with an armchair edge should collapse to a cowrie shell-like geometry.\cite{cowrie} Our transport calculations for these structures have predicted that both of these curved systems should be highly effective two-terminal spin filters even in the absence of magnetic fields.\cite{dome,cowrie} By contrast, planar 2DTI spin filters require at least three terminals. However, in the nanostructures considered in Refs. \onlinecite{dome} and \onlinecite{cowrie} the bismuthene monolayer coated the outside of a curved silicon monolayer, so that these structures had {\em hollow} interiors; no plausible methodology for realizing such hollow bismuthene-silicon nanostructures in the laboratory is known at the present time. Here we propose a different class of curved bismuth-on-silicon nanostructures that consist of a bismuth monolayer adsorbed on a solid silicon nanoparticle core. Since silicon nanoparticles ranging in size from 2 to 64 nm have already been synthesized,\cite{Thiessen2019,Huang2021} the experimental realization of such bismuth-on-silicon nanostructures by depositing bismuth atoms on silicon nanoparticles by molecular beam epitaxy should be feasible, and would open the way for pioneering experimental studies of spin transport in curved bismuth monolayer nanostructures. Our calculations based on DFT and tight binding models predict systems of this type to be nearly perfect two-terminal spin filters in the absence of magnetic fields. We relate this spin filtering to a symmetry obeyed by the spin transmission probability matrix of the system. We also predict that for such systems the direction of the spin polarization of the electric current that exits the spin filter can be tuned through large angles and even reversed simply by varying the voltage applied to an electrostatic gate. \begin{figure}[b] \centering \includegraphics[width=1.0\linewidth]{Fig1.pdf} \caption{(Color online). Two views of a Bi$_{76}$Si$_{147}$ nanostructure consisting of a silicon core (gray Si atoms) with Bi atoms adsorbed on its surface. The structure has tetrahedral symmetry. The adsorbed Bi atoms form separate clusters of 2 atoms (blue), 3 atoms (purple), and 14 atoms (orange). There are 4 clusters of each type. The bismuth atoms of each 14-atom cluster are arranged in a kinked ring geometry; the atoms of one ring are indicated by stars in part (b). Image prepared using the MacMolPlt software.\cite{MacMolPlt}} \label{geometry} \end{figure} \section{Structure} \label{Structure} In this paper we consider the Bi$_{76}$Si$_{147}$ nanoparticle shown in Fig.\ref{geometry}. The silicon core of this nanostructure is symmetric under the operations of the tetrahedral point group $T_d$, as is the Bi$_{76}$Si$_{147}$ nanoparticle as a whole.
The silicon core has a compact surface geometry, with each surface silicon atom having a coordination number of 3 or 4, where 4 is the coordination number of the interior silicon atoms. Each bismuth atom bonds in the ``on top'' position to a single silicon atom that has coordination number 3; a bismuth atom bonds in this way to each such silicon atom. Thus, for this geometry all of the silicon dangling bonds are saturated by bismuth atoms and consequently, based on conventional considerations of quantum chemistry, it is reasonable to expect the geometry of this Bi$_{76}$Si$_{147}$ nanoparticle to be stable or at least metastable; this expectation was confirmed by our DFT calculations. The structure shown in Fig.\ref{geometry} was relaxed by means of DFT computer simulations. The DFT calculations reported here were carried out with the GAUSSIAN 16 package using the B3PW91 functional and the Lanl2DZ effective core potential and basis sets.\cite{Frisch} The electronic energy and ionic forces of our optimized geometries were converged within 10$^{-5}$ eV and 0.0008 eV/\AA, respectively. Because of the locations of the silicon atoms with coordination number 3 on the surface of the silicon core, the adsorbed Bi atoms of this bismuth-on-silicon nanostructure form separate atomic clusters, as can be seen for the Bi$_{76}$Si$_{147}$ structure shown in Fig.\ref{geometry}. We note that, unlike for the structure in Fig.\ref{geometry}, in both the bismuthene-silicon bilayer domes and the cowrie shell-like nanostructures that were studied previously,\cite{dome,cowrie} the Bi atoms were arranged in a continuous network, not a collection of separate clusters. As will be seen below, the partition of the Bi atoms of the Bi$_{76}$Si$_{147}$ structure considered here into clusters separated by spatial gaps that act as tunnel barriers has a strong effect on both electron transport and spin transport through this nanostructure. In this paper we present results for the Bi$_{76}$Si$_{147}$ structure shown in Fig.\ref{geometry}. However, we have also carried out spin transport calculations for a Bi$_{60}$Si$_{147}$H$_{16}$ nanostructure that differs from our Bi$_{76}$Si$_{147}$ nanostructure in that the bismuth atoms belonging to the small 2 and 3 atom bismuth clusters are replaced with hydrogen atoms, and found spin filtering results qualitatively similar to those for our Bi$_{76}$Si$_{147}$ nanostructure. \section{Tight Binding Model} \label{M} Several tight binding models have successfully captured the crucial physics of bismuthene on SiC.\cite{Reis2017,Dominguez2018,GLi2018,GK2018,Canonico2019,Azari2019,Hao2019} These models\cite{Reis2017,Dominguez2018,GLi2018,GK2018,Canonico2019,Azari2019,Hao2019} have employed basis sets consisting {\em only} of the valence orbitals of the bismuth atoms but have been parameterized to take into account the influence of the SiC substrate on the bismuthene. We have extended this approach to construct tight binding models of curved nanoscale bismuthene-silicon bilayer domes\cite{dome} and cowrie shell-like structures.\cite{cowrie} These are Slater-Koster type models\cite{SK1954} modified to explicitly take into account the curved nature of the bismuthene layer. They include the site dependence of the electron electrostatic potential energy, as well as spin-orbit coupling\cite{PRBrapid,PRB} and Rashba\cite{BR1,BR2} contributions to the Hamiltonian. The model parameters were adjusted so as to match the bismuthene partial densities of states calculated for these nanostructures within DFT.
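As a concrete illustration of the Slater-Koster construction underlying the hopping terms, the Python sketch below assembles the standard $3\times 3$ $p$-$p$ hopping block between two atoms from the two-centre integrals $V_{pp\sigma}$ and $V_{pp\pi}$ and the direction cosines of the bond. It is a minimal sketch only: the distance dependence of the integrals, the on-site, spin-orbit, and Rashba terms, and the fitted parameter values of Ref.~\onlinecite{cowrie} are all omitted, and the function name is our own. \begin{verbatim}
import numpy as np

def sk_pp_block(d, v_sigma, v_pi):
    # Slater-Koster 3x3 hopping block between the (px, py, pz)
    # orbitals of two atoms.
    #   d       : vector from atom 1 to atom 2 (normalized here)
    #   v_sigma : two-centre pp-sigma integral at this separation
    #   v_pi    : two-centre pp-pi integral at this separation
    d = np.asarray(d, dtype=float)
    d = d / np.linalg.norm(d)   # direction cosines (l, m, n)
    # E_{x,x} = l^2 Vsigma + (1 - l^2) Vpi,
    # E_{x,y} = l m (Vsigma - Vpi), etc., in compact matrix form:
    return (v_sigma - v_pi) * np.outer(d, d) + v_pi * np.eye(3)

# Example: hopping block for a bond along (1, 1, 0), with
# illustrative (not fitted) integral values in eV.
t = sk_pp_block([1.0, 1.0, 0.0], v_sigma=2.0, v_pi=-0.6)
\end{verbatim}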
The tight binding model that we developed previously for the cowrie shell-like nanostructures, and that is described in detail in Ref.\onlinecite{cowrie}, is applied here to study the properties of nanoparticles consisting of a bismuth monolayer adsorbed on a silicon nanoparticle core, such as that shown in Fig. \ref{geometry}. Both the form of the Hamiltonian and the parameter values in the present work are the same as in Ref.\onlinecite{cowrie}, except for the following: The site dependence of the electron's Coulomb potential energy, $H^{\text{C}}_{i}$ in Eq. 2 of Ref. \onlinecite{cowrie}, has been recalculated within DFT for the present system. The range of the Hamiltonian matrix elements $H^{\text{hop}}$ in Eq. 2 of Ref. \onlinecite{cowrie} that represent electron hopping between Bi atoms has been extended to include interatomic separations greater than 5.5~\AA. The values of the parameters $\gamma_n$ in the Hamiltonian term $\tilde{H}^{\text{orb}}$ in Eq. 5 of Ref. \onlinecite{cowrie} have been modified so as to match the Bi partial density of states predicted by the model without the spin-orbit and Rashba terms to that predicted by DFT. The Bi density of states obtained from the model in this way and that obtained from DFT are shown in Fig. \ref{DOSfit}. \begin{figure}[b] \centering \includegraphics[width=0.9\linewidth]{Fig2.pdf} \caption{(Color online). Partial density of states (DOS) projected on the bismuth atoms of the structure in Fig. \ref{geometry}. The DFT prediction is shown in black; that obtained from the tight binding Hamiltonian (omitting the spin-orbit and Rashba terms) is in orange. } \label{DOSfit} \end{figure} \section{Spin Transport} \label{ST} Our calculations of spin filtering by nanoparticles with a bismuth monolayer adsorbed on a silicon core are carried out within the Landauer formalism\cite{Econ81,Fish81} of two-terminal source-drain transport at zero temperature in the linear response regime. We calculate the spin-resolved source to drain electron transmission probabilities by solving the Lippmann-Schwinger equation numerically for source and drain leads attached to individual bismuth atoms of the nanoparticle, which is represented by our tight binding Hamiltonian. The leads are represented by a tight binding model of ideal atomic chains. A detailed description of the model of the leads is given in the last paragraph of Section III of Ref. \onlinecite{cowrie}. We consider spin-unpolarized electrons entering the nanoparticle through the electron source lead and calculate the spin-resolved probabilities $T_{\uparrow}$ and $T_{\downarrow}$ of spin up and spin down electrons exiting through the drain lead at energy $E$. [Note that the transmission probabilities in this paper are Landauer transmission probabilities that are summed over the relevant conducting channels of the source and drain leads and can therefore exceed one.] We then define the spin polarization of the electrons entering the drain electrode as \begin{equation}\label{Pol} P=T_{\uparrow}/(T_{\uparrow}+T_{\downarrow}). \end{equation} We choose the direction of the axis of quantization for electron spin states in the drain electrode to be the direction of the expectation value of the spin vector of the electrons carrying the electric current in the drain. This choice ensures that the above definition of the spin polarization $P$ corresponds to the physical spin polarization in the drain lead and is a valid figure of merit for the effectiveness of spin filtering.
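As a concrete illustration of these definitions, the Python sketch below computes the polarization $P$ of Eq.\ref{Pol} and the drain quantization axis from a spin-resolved array of transmission amplitudes. The array layout is an assumption made for the example; the sketch simply traces out the orbital channels to form the $2\times 2$ spin density matrix of the transmitted electrons and takes the quantization axis along the resulting spin expectation value, as described above. \begin{verbatim}
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)    # Pauli matrices
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def drain_polarization(t):
    # t[o, s, i, q]: transmission amplitude from source channel i with
    # spin q to drain channel o with spin s (assumed layout).
    # Returns (P, N) for an unpolarized source; assumes the transmitted
    # spin expectation value is nonzero.
    # 2x2 spin density matrix of the transmitted beam: trace out the
    # orbital channels and sum over the (unpolarized) input spins.
    rho = np.einsum('osiq,opiq->sp', t, t.conj())
    total = np.trace(rho).real                 # proportional to T_up + T_down
    pvec = np.array([np.trace(rho @ S).real
                     for S in (SX, SY, SZ)]) / total
    N = pvec / np.linalg.norm(pvec)            # quantization axis direction
    P = 0.5 * (1.0 + np.linalg.norm(pvec))     # = T_up / (T_up + T_down)
    return P, N
\end{verbatim} With this choice of axis, $T_{\uparrow}$ and $T_{\downarrow}$ are proportional to the two eigenvalues of the spin density matrix, so $P = (1 + |\vec{p}\,|)/2$ reproduces Eq.\ref{Pol}.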
This methodology of spin transport calculations has been used previously to study spin filtering by our cowrie shell-like bismuthene-silicon bilayer nanostructures and is described in detail in Ref.\onlinecite{cowrie}. \begin{figure*}[t] \centering \includegraphics[width=1.0\linewidth]{Fig3.pdf} \caption{(Color online). Electron spin filtering by the Bi$_{76}$Si$_{147}$ nanostructure shown in Fig.\ref{geometry} when the electron source and drain leads are attached to atoms of the same 14 Bi atom ring. The source and drain Bi atoms are shown in red and green, respectively, in the insets. It is assumed that spin-unpolarized electrons enter the nanostructure from the electron source. The direction of the axis of quantization for electron spin states in the drain electrode is chosen to be the direction of the expectation value of the spin vector of the electrons carrying the electric current in the drain. (a) The calculated spin polarization $P$ (Eq.\ref{Pol}) vs. electron energy $E$ of electric current exiting the structure to the drain lead is shown in black. The orange vertical lines indicate energy eigenvalues of the tight-binding Hamiltonian when the Bi$_{76}$Si$_{147}$ nanostructure is not connected to the leads. (b) The calculated spin-resolved Landauer transmission probabilities $T_{\uparrow}$ (red) and $T_{\downarrow}$ (blue) of spin up and spin down electrons exiting from the nanostructure into the drain contact at energy $E$. The electron energy is measured from the Fermi level. We note that the strength of the coupling between the Bi$_{76}$Si$_{147}$ nanostructure and the source and drain leads, attested to by the magnitudes of the Landauer transmission probabilities shown here, is sufficient to suppress Coulomb blockade effects. (c) The unit vector $\vec{N}=(N_x, N_y, N_z)$ points in the direction of the spin polarization due to the electric current in the drain lead. $N_x, N_y$ and $N_z$ are plotted in red, blue and black, respectively. The $z$-axis points from the center of the nanoparticle towards the Bi atom (shown in green in the insets) to which the drain lead is attached, and points out of the page in part (c) of the figure. Only the Bi atoms are shown in the insets of part (c). The green arrows show the direction of $\vec{N}$ for selected electron energies that are given in eV next to each schematic. } \label{filteringonering} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=1.0\linewidth]{Fig4.pdf} \caption{(Color online). Electron spin filtering by the Bi$_{76}$Si$_{147}$ nanostructure shown in Fig.\ref{geometry} when the electron source and drain leads are attached to different clusters of Bi atoms. The source lead is attached to an atom (shown in red in the inset) of a Bi atom pair. The drain lead is attached to an atom (shown in green) of a 14 Bi atom ring. It is assumed that spin-unpolarized electrons enter the nanostructure from the electron source. The direction of the axis of quantization for electron spin states in the drain electrode is chosen to be the direction of the expectation value of the spin vector of the electrons carrying the electric current in the drain. (a) The calculated spin polarization $P$ (Eq.\ref{Pol}) vs. electron energy $E$ of electric current exiting the structure to the drain lead is shown in black. The orange vertical lines indicate energy eigenvalues of the tight-binding Hamiltonian when the Bi$_{76}$Si$_{147}$ nanostructure is not connected to the leads.
(b) The calculated spin-resolved Landauer transmission probabilities $T_{\uparrow}$ (red) and $T_{\downarrow}$ (blue) of spin up and spin down electrons exiting from the nanostructure into the drain contact at energy $E$. The electron energy is measured from the Fermi level. (c) The unit vector $\vec{N}=(N_x, N_y, N_z)$ points in the direction of the spin polarization due to the electric current in the drain lead. $N_x, N_y$ and $N_z$ are plotted in red, blue and black, respectively. The $z$-axis points from the center of the nanoparticle towards the Bi atom (shown in green in the insets) to which the drain lead is attached, and points out of the page in part (c) of the figure. Only the Bi atoms are shown in the insets of part (c). The green arrows show the direction of $\vec{N}$ for selected electron energies that are given in eV next to each schematic. } \label{filteringtwoclust} \end{figure*} \section{Results} \label{R} We predict that effective spin filtering by the Bi$_{76}$Si$_{147}$ nanostructure can occur when the source and drain leads are attached to the same Bi atom cluster or to different Bi atom clusters of the nanoparticle, although the transport regimes in the two cases are very different. In the former case the conductance of the nanostructure is of the same order of magnitude as that of a metal atomic point contact, i.e., of order $e^2/h$. In the latter case transport requires quantum tunneling between bismuth clusters, so that the conductance is typically orders of magnitude lower. Representative results for an example of the former case, where the source and drain are connected to different atoms of the same 14-member Bi atom ring, are shown in Fig.\ref{filteringonering}. The electron source and drain leads are connected to the Bi atoms shown in red and green, respectively, in the insets. The black curve in Fig.\ref{filteringonering}(a) shows the calculated spin polarization $P$ of electrons emitted into the drain lead, assuming that spin-unpolarized electrons impinge on the nanoparticle from the source lead. The energies of the eigenstates of the tight binding Hamiltonian of the nanoparticle in the absence of leads are shown by the vertical orange lines. In Fig.\ref{filteringonering}(a), strong spin filtering (with $P$ as large as 0.895 at $E \sim 2.02$~eV) can be seen in the non-resonant transport regime, i.e., when the electron energy does not match that of an eigenstate of the Hamiltonian of the isolated nanoparticle. On resonance, where the electron energy does match that of an eigenstate, the spin filtering (i.e., the value of $P$) can be enhanced or reduced relative to nearby non-resonant values. Most resonances are very narrow because they are due to states that occupy multiple Bi clusters that are weakly coupled to each other. In Fig.\ref{filteringonering}(b) we show the calculated spin-resolved Landauer transmission probabilities $T_{\uparrow}$ (red) and $T_{\downarrow}$ (blue) of spin up and spin down electrons exiting from the nanostructure into the drain contact at energy $E$. These transmission probabilities enter the spin polarization $P$ plotted in Fig.\ref{filteringonering}(a) through Eq. \ref{Pol}. The Landauer conductance of the nanostructure, given by $G(E) =\frac{e^2}{h} (T_{\uparrow}+T_{\downarrow})$, has magnitudes in a range typical of a chain of metal atoms, as one might expect for a ring of closely spaced Bi atoms.
On resonance, in some cases $T_{\uparrow}$ and $T_{\downarrow}$ are both enhanced, both suppressed, or one of them enhanced and the other suppressed, thus giving rise to the resonant enhancement or suppression of the spin polarization $P$ in the drain lead seen in Fig.\ref{filteringonering}(a). In Fig.\ref{filteringonering}(c) we show the unit vector $\vec{N}$ in the direction of the expectation value $\langle\vec{S}\rangle$ of the spin vector of the electrons carrying the electric current in the drain lead as a function of the electron energy $E$. Here $N_x = \langle S_x\rangle/|\langle\vec{S}\rangle|$, $N_y = \langle S_y\rangle/|\langle\vec{S}\rangle|$ and $N_z = \langle S_z\rangle/|\langle\vec{S}\rangle|$ are plotted in red, blue and black, respectively. The $z$-axis points in the direction from the center of the Bi$_{76}$Si$_{147}$ nanostructure towards the (green) bismuth atom to which the drain lead is attached. In the insets of Fig.\ref{filteringonering}(c) (where only the Bi atoms are shown) the $z$-axis points out of the page; the directions of the $x, y$ and $z$ axes in the insets of Fig.\ref{filteringonering}(c) are shown on the left. The green arrows in the insets show explicitly the directions of $\vec{N}$ for a few representative values of the electron energy that are given (in eV units) by the numbers next to the insets. A striking feature of Fig.\ref{filteringonering}(c) is that (except near $E=0.55$~eV) $N_z$ is small compared to $(N_x^2+N_y^2)^{1/2}$. This means that for most values of the electron energy the direction of the spin polarization in the drain lead is approximately tangential to the surface of the nanoparticle at the position of the Bi atom to which the drain lead is attached. [That the direction of the spin polarization in the drain lead is approximately tangential to the surface of the nanoparticle appears to be related to the fact that the Bi $6p$ orbital whose symmetry axis is perpendicular to the surface of the nanoparticle is shifted in energy relative to the other Bi $6p$ orbitals due to the interaction between the Bi atom and its silicon nearest neighbor, as is described in Sec. II of Ref. \onlinecite{cowrie}.] Another striking feature of Fig.\ref{filteringonering}(c) is that the direction of the spin polarization in the drain lead (the green arrows in the insets) rotates through {\em large} angles in the $x-y$ plane as the electron energy is varied. Importantly, we conclude that in this system the {\em direction} of the spin polarization in the drain lead can be tuned through {\em large} angles by varying the electrostatic potential applied to a gate and thus shifting the Fermi energy relative to the electronic energy levels of the Bi$_{76}$Si$_{147}$ nanoparticle. That such purely electrostatic control of the spin polarization direction in the drain electrode (up to and including reversing the direction of the spin polarization by electrostatic means) is at all possible {\em even in principle} is of considerable interest from a fundamental perspective and for potential spintronic device applications. In Fig.\ref{filteringtwoclust} we present results for an example of the case where the source and drain leads are connected to different Bi atom clusters. As can be seen in the inset, here the electron source lead is connected to a Bi atom (shown in red) of a cluster consisting of 2 Bi atoms, whereas the drain lead is connected to a Bi atom (shown in green) of a 14 Bi atom ring.
In Fig.\ref{filteringtwoclust}(a), there is very strong non-resonant spin filtering (even stronger than in Fig.\ref{filteringonering}), with the drain spin polarization $P$ as large as 0.979 at $E \sim 2.1$~eV. As in Fig.\ref{filteringonering}(a), the spin polarization in Fig.\ref{filteringtwoclust}(a) can be enhanced or weakened on resonance. In Fig.\ref{filteringtwoclust}(c) the behavior of $\vec{N}$, the unit vector in the direction of the spin polarization in the drain lead, is qualitatively similar to that in Fig.\ref{filteringonering}(c), although in Fig.\ref{filteringtwoclust}(c) $N_z$ exhibits larger deviations from zero than in Fig.\ref{filteringonering}(c). However, the spin-resolved Landauer transmission probabilities $T_{\uparrow}$ (red) and $T_{\downarrow}$ (blue) in Fig.\ref{filteringtwoclust}(b) are orders of magnitude weaker off resonance than in Fig.\ref{filteringonering}(b). The same is true of the Landauer conductances $G(E) =\frac{e^2}{h} (T_{\uparrow}+T_{\downarrow})$. For instance, at $E \sim 2.1$~eV, where the spin polarization $P$ is strongest in Fig.\ref{filteringtwoclust}(a), the conductance is $G(E) =\frac{e^2}{h}\times 1.24\times 10^{-3}$. It may seem at first sight that such a low conductance would be disadvantageous for spintronic applications. However, this is not necessarily the case: It has been pointed out by Schmidt {\em et al.} \cite{Schmidt} that spin injection from a metal ferromagnet (that has a low resistance) into a semiconductor having a much higher resistance is expected to result in only a very weak spin polarization in the semiconductor. This is because the total resistance of the series circuit of the ferromagnet and semiconductor is dominated by the larger {\em spin-independent} resistance of the semiconductor. A ferromagnet with a much higher resistance could overcome this difficulty. This line of reasoning suggests that a low conductance (high resistance) configuration of the Bi$_{76}$Si$_{147}$ spin filter may be advantageous for spin injection in some devices. Another consideration favoring low conductance configurations of the Bi$_{76}$Si$_{147}$ spin filter is the following: To attach the source and drain leads to {\em different} Bi atom clusters located on {\em opposite} sides of the Bi$_{76}$Si$_{147}$ nanoparticle (as in the low conductance configuration in Fig.\ref{filteringtwoclust}) in either a scanning tunneling microscopy-like setup\cite{STM} or a mechanically controlled break junction\cite{MBJ} would be much more feasible in practice than to attach both the source and drain leads to the same Bi atom cluster. The Bi$_{76}$Si$_{147}$ nanostructure that we consider can be regarded as a molecule of moderate size. Making electrical contact between metal leads and individual atoms at the opposite ends of a single molecule has been accomplished experimentally for a large variety of different molecules.\cite{review} This encourages us to expect experiments such as those that we propose to be possible. In the situation in Fig. \ref{filteringtwoclust} the electron transmission is low because there is a strong tunnel barrier between the two Bi clusters to which the leads are attached, while each Bi cluster is strongly coupled to its lead; the lead-cluster couplings are as strong in Fig.\ref{filteringtwoclust} as in the system in Fig.\ref{filteringonering}. In this situation pronounced Coulomb blockade effects are not expected because there is only one strong tunnel barrier in the circuit, whereas, as is discussed in Ref.
\onlinecite{CB}, significant effects of Coulomb blockade are only observable experimentally when there are at least two strong tunnel barriers in series in a circuit. In this paper we have presented spin transport results for the case where the source and drain leads are attached to single Bi atoms of the Bi$_{76}$Si$_{147}$ nanoparticle. Making such electrical contact to individual atoms in the laboratory is feasible at the present time with the help of scanning tunneling microscope tips\cite{STM} or nanoscale mechanical break junctions.\cite{MBJ} However, we have also carried out transport calculations with the source and drain leads each connected to multiple Bi atoms of the same Bi cluster or of different Bi clusters. For such arrangements we have also found spin filtering by the Bi$_{76}$Si$_{147}$ nanoparticle. \section{Conclusions} \label{Summary} In this study we have explored theoretically the electronic structure and spin transport properties of silicon nanoparticles with an adsorbed monolayer of bismuth atoms by means of density functional theory-based calculations and tight binding modeling. In contrast to the previously studied bismuthene-silicon bilayer domes and cowrie shell-like nanostructures, in which the bismuth atoms form a single continuous network, the bismuth atoms in the present system are arranged in separate clusters. This clustering of the bismuth atoms strongly affects electronic and spin transport through these nanostructures because the clusters are separated by strong quantum tunnel barriers. We predict strong spin filtering by these nanostructures in the high conductance regime when the source and drain leads are connected to the same bismuth cluster, and in the low conductance regime when the source and drain leads are connected to different bismuth clusters. Spin filtering in our non-magnetic Bi$_{76}$Si$_{147}$ nanostructure is made possible by spin-orbit coupling. This is because spin-orbit coupling can result in correlations between the direction in which the spin points and the direction of motion of the electron. A celebrated example of this is the locking between the electron's spin direction and momentum in the edge states of non-magnetic two-dimensional topological insulators, where spin up electrons travel in one direction along an edge while spin down electrons travel in the opposite direction.\cite{TIcourse} In our system electrons flow in a specific direction, from the electron source to the drain, and the spin-orbit coupling results in a correlation between the spin direction and the direction of the electric current, and hence in spin filtering. This and the underlying symmetry of the spin transmission probability matrix are discussed further in the Appendix. Realization of the present silicon-bismuth nanostructures in the laboratory is expected to be feasible. We expect spin filtering in the low conductance regime to be experimentally accessible, and the low conductance regime may be advantageous for some device applications. We also predict that for such systems the direction of the spin polarization of the electric current that exits from the spin filter into the drain lead can be tuned through large angles and even reversed simply by varying the voltage applied to an electrostatic gate. \begin{acknowledgments} This research was supported by NSERC, Westgrid, and Compute Canada. \end{acknowledgments}
{ "timestamp": "2022-09-20T02:20:49", "yymm": "2209", "arxiv_id": "2209.08632", "language": "en", "url": "https://arxiv.org/abs/2209.08632" }